insert but discarding some unwanted/erroneous rows
Tom Kyte

Thanks for the question.

Asked: May 03, 2002 - 8:45 pm UTC

Last updated: January 02, 2008 - 4:59 pm UTC

Version: 8.1.7.1


You Asked

Hi Tom

I was wondering: with bulk collect in PL/SQL, is it possible to mass insert many rows while discarding those rows that would be rejected under certain circumstances? For example:

insert... select ....

Some rows of one column in the source table are wider than the destination column; with SQL*Loader they would be discarded automatically. Is it possible to simulate this functionality with PL/SQL? With a simple cursor and an exception handler we could do it, but that iterates row by row (slow), so I was wondering if bulk collect can be used.

cheers

and Tom said...

Yes, you can simulate this:


ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t ( x varchar2(10) NOT NULL );

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> declare
  2      type array is table of varchar2(255);
  3
  4      l_data array := array( 'works 1', 'works 2',
  5                             'too long, much too long',
  6                             'works 4', NULL, 'works 6' );
  7      l_start number := 1;
  8  begin
  9      loop
 10          begin
 11              forall i in l_start .. l_data.count
 12                  insert into t values ( l_data(i) );
 13              EXIT;
 14          exception
 15              when others then
 16                  dbms_output.put_line( 'Bad row index = ' || (l_start+sql%rowcount) ||
 17                                        ' ' || sqlerrm );
 18                  l_start := l_start + sql%rowcount + 1;
 19          end;
 20      end loop;
 21  end;
 22  /
Bad row index = 3 ORA-01401: inserted value too large for column
Bad row index = 5 ORA-01400: cannot insert NULL into ("OPS$TKYTE"."T"."X")

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

X
----------
works 1
works 2
works 4
works 6

ops$tkyte@ORA817DEV.US.ORACLE.COM>


It inserts all of the data that is "good" stopping on each "bad" record.





Comments

for insert... select

A reader, May 04, 2002 - 1:43 pm UTC

Hi

For insert ... select, how can we do it? How do we plug FORALL into a select?



Tom Kyte
May 04, 2002 - 2:18 pm UTC

insert select either WORKS or it does not WORK. Either all rows or no rows.

You could (in this example)

insert into t
select x from t2
where ( LENGTH(x) <= 10 and x is not null and <other constraints here> )


and do that for each column. With 9i, it is nice cause you can have a multi-table insert so that rows that cannot be inserted into t could be logged into a "bad" table.
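
For illustration, here is a minimal sketch of that 9i multi-table insert; the "bad" table T_BAD and its wider column are assumptions for this example, not part of the original answer:

-- hypothetical "bad" table, wide enough to hold anything t would reject
create table t_bad ( x varchar2(4000) );

insert first
  when length(x) <= 10 and x is not null then
    into t     ( x ) values ( x )
  else
    into t_bad ( x ) values ( x )
select x from t2;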

using a cursor?

A reader, May 04, 2002 - 2:27 pm UTC

Sorry, I should have said not a plain insert ... select, but rather using a cursor. So instead of

forall i in l_start .. l_data.count

can I use

forall i in (select * from myTable)

?

Tom Kyte
May 04, 2002 - 4:35 pm UTC

Sorry it was not clear.

INSERT INTO T
select ... from ..... where ....

either WORKS or it does not WORK. There is no "forall" filtering (that involves bringing the data BACK to the client and then sending it back to the server)

What I tried to show was a way to avoid getting the "bad" records in the first place:

insert into t
select x from t2
where ( LENGTH(x) <= 10 and x is not null and <other constraints here> )



A reader, May 04, 2002 - 3:13 pm UTC

from your code

9 loop
10 begin
11 forall i in l_start .. l_data.count
12 insert into t values ( l_data(i) );
13 EXIT;

What is EXIT for?
Are we fetching row by row? It looks like it; if we are, what is the use of forall here? How many rows is forall fetching in each iteration?

Tom Kyte
May 04, 2002 - 4:37 pm UTC

Line 9 is the beginning of an infinite loop. We need a way out. the EXIT is the way out. When we do an insert that flings no exceptions -- leave the loop.

We are not fetching at all in this example. We are bulk inserting as many rows as we can and will insert all rows in the array UPTO a bad row.

The forall is inserting UPTO (l_data.count-l_start+1) rows -- if one of the rows in the array cannot be inserted, it will STOP inserting and return.
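
To make the restart arithmetic concrete, here is a worked trace of the six-element array from the original example (added annotation, not output from the run):

pass 1: l_start = 1; rows 1 and 2 insert, row 3 ('too long, much too long') fails;
        sql%rowcount = 2, so the printed bad index is 1 + 2 = 3 and l_start becomes 4
pass 2: l_start = 4; row 4 inserts, row 5 (NULL) fails;
        sql%rowcount = 1, so the printed bad index is 4 + 1 = 5 and l_start becomes 6
pass 3: l_start = 6; row 6 inserts, no exception is raised, and EXIT leaves the loop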

locks

A reader, August 28, 2002 - 11:57 am UTC

Hi Tom

I have the following pl/sql block to insert 10000 rows into emp

declare
type array is table of number;
l_array array;
l_array_element number;
begin
l_array := array(); -- I dont know what does this do, I only know without it this doesnt work
for i in 1..9999
loop
l_array.extend;
l_array(l_array.last) := i;
end loop;
l_array_element := l_array.first;
dbms_output.put_line(l_array_element);
loop
begin
forall i in l_array_element..l_array.last
insert into lsc.emp(empno) values(l_array(i));
exit;
exception
when others
then l_array_element := l_array_element + sql%rowcount +1;
end;
end loop;
end;
/

I ran it twice and I killed my database :-(
I see these locks in v$lock

HW - Space management operations on a specific segment
TT - Temporary table enqueue

They switch between each other: now TT, the next second HW, the next second back to TT, and so on

what's going on...?

Tom Kyte
August 28, 2002 - 12:19 pm UTC

You did not "kill" your database - whatever that means is beyond me.

l_array := array();

that sets l_array to an EMPTY array; before, l_array was NULL, and trying to "extend" NULL would fail.  You needed to "un-null" it before using it.
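
For illustration, a minimal sketch (not part of the original answer) showing the difference between an atomically NULL collection and one initialized with the constructor:

declare
    type array is table of number;
    l_null_array  array;             -- atomically NULL, no constructor
    l_empty_array array := array();  -- initialized, zero elements
begin
    l_empty_array.extend;            -- fine: the collection exists, it now has one element
    l_null_array.extend;             -- raises ORA-06531: Reference to uninitialized collection
end;
/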

works great for me:

ops$tkyte@ORA920.LOCALHOST> drop table emp;

Table dropped.

ops$tkyte@ORA920.LOCALHOST> create table emp ( empno int );

Table created.

ops$tkyte@ORA920.LOCALHOST> declare
  2    type array is table of number;
  3    l_array array;
  4    l_array_element number;
  5  begin
  6    l_array := array();
  7    for i in 1..9999
  8    loop
  9       l_array.extend;
 10       l_array(l_array.last) := i;
 11    end loop;
 12  
 13    l_array_element := l_array.first;
 14    dbms_output.put_line(l_array_element);
 15    loop
 16         begin
 17           forall i in l_array_element..l_array.last
 18           insert into emp(empno) values(l_array(i));
 19           exit;
 20         exception
 21         when others
 22             then l_array_element := l_array_element + sql%rowcount +1;
 23         end;
 24       end loop;
 25  end;
 26  /
1

PL/SQL procedure successfully completed.

ops$tkyte@ORA920.LOCALHOST> spool off



without more info beyond "it killed my database" which is meaningless -- i cannot say anything more.
 

100% CPU usage

A reader, August 28, 2002 - 12:31 pm UTC

Hi, sorry for my bad language. I meant the server CPU usage is spinning at 100%. I queried v$lock and I see

ADDR     KADDR           SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
-------- -------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
7982C33C 7982C448          9 TX     589861        291          6          0       2736          1
79F442A0 79F442B4          9 TM      30585          0          3          0       2736          0
79F4421C 79F44230          9 TM      30372          0          3          0       2736          0
7A6CD9D4 7A6CD9E4          9 HW          1    8388745          6          0          0          0
7A6CD988 7A6CD998          9 TT          1         16          4          0          0          0

Sid 9 is my insert session. What I did was run the pl/sql block twice, one after another, without commit or rollback. I killed the session 30 minutes ago but these locks still appear in v$lock.

In my code, if I comment

for i in 1..9999
loop
l_array.extend;
-- l_array(l_array.last) := i;
end loop;

I get this error back (I commented out the exception part to see the error)

ERROR at line 1:
ORA-01400: cannot insert NULL into ("LSC"."DEPT"."DEPTNO")
ORA-06512: at line 16

:-0

Tom Kyte
August 28, 2002 - 2:18 pm UTC

so, give us the entire test case from start to finish (include the create table and all)

I just did it -- works great -- two times in a row.

Versions down to 4 digits are always appreciated
Operating system type as well.



One thing I note right away is the error message complains about DEPTNO, but you never filled in DEPTNO so I don't think you are giving us the WHOLE story here. Do a cut and paste -- from beginning to END, just like I do!

it is the entire code!!!

A reader, August 28, 2002 - 3:53 pm UTC

declare
type array is table of number;
l_array array;
l_array_element number;
begin
l_array := array(); -- I dont know what does this do, I only know without it this doesnt work
for i in 1..9999
loop
l_array.extend;
l_array(l_array.last) := i;
end loop;
l_array_element := l_array.first;
dbms_output.put_line(l_array_element);
loop
begin
forall i in l_array_element..l_array.last
insert into lsc.emp(empno) values(l_array(i));
exit;
exception
when others
then l_array_element := l_array_element + sql%rowcount +1;
end;
end loop;
end;
/

this is the entire code!!!!

declare
type array is table of number;
l_array array;
l_array_element number;
begin
l_array := array(); -- I dont know what does this do, I only know without it this doesnt work
for i in 1..9999
loop
l_array.extend;
-- l_array(l_array.last) := i;
end loop;
l_array_element := l_array.first;
dbms_output.put_line(l_array_element);
loop
begin
forall i in l_array_element..l_array.last
insert into lsc.emp(empno) values(l_array(i));
exit;
-- exception
-- when others
-- then l_array_element := l_array_element + sql%rowcount +1;
end;
end loop;
end;
/


this is the code that gives me the deptno error; the table is the dept table that comes with demobld.sql in $ORACLE_HOME/sqlplus/demo :-(

forgot version and OS

A reader, August 28, 2002 - 3:55 pm UTC

sorry, the version is 9.2.0.1 and OS win2k

sorry the first works when I try in another server

A reader, August 28, 2002 - 4:06 pm UTC

sorry, the first pl/sql block works. I don't understand what happened on the other server that gave those waits, which I haven't seen before

the second does not

SQL>   declare
  2      type array is table of number;
  3      l_array array;
  4      l_array_element number;
  5    begin
  6      l_array := array(); -- I dont know what does this do, I only know without it this doesnt work
  7      for i in 1..9999
  8      loop
  9        l_array.extend;
 10    --    l_array(l_array.last) := i;
 11      end loop;
 12        l_array_element := l_array.first;
 13        dbms_output.put_line(l_array_element);
 14        loop
 15          begin
 16            forall i in l_array_element..l_array.last
 17            insert into lsc.emp(empno) values(l_array(i));
 18            exit;
 19   --       exception
 20   --       when others
 21   --       then l_array_element := l_array_element + sql%rowcount +1;
 22          end;
 23        end loop;
 24    end;
 25  /
  declare
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("LSC"."EMP"."EMPNO")
ORA-06512: at line 16

The deptno was another error, very sorry :-( 

Bhagat Singh, August 28, 2002 - 11:27 pm UTC

Hi Tom,

It was a great example. Can this type of situation be handled via Pro*C code? If yes, can you show some sample of the same, so that it can help us to do bulk loading of the data?

Regards
Bhagat Singh

Tom Kyte
August 29, 2002 - 7:23 pm UTC

It is in fact virtually the same as the PLSQL except you have to increment a host variable pointer for us. Consider this piece of code:

void process()
{
    exec sql begin declare section;
    int      int_data[] = { 1,2,33,4,5,66,7,8,99,0 };
    int    * int_hv;

    typedef char asciiz [20];
    EXEC SQL TYPE asciiz is STRING(20);
    asciiz   char_data[] = { "one", "two", "three", "four", "five",
                             "six", "seven", "eight", "nine", "ten" };
    asciiz * * char_hv;
    int      n;

    int      test_number;
    asciiz   test_string;
    EXEC SQL END DECLARE SECTION;


    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL drop table t;

    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
    exec sql create table t ( x number(1), y varchar2(20) );
    EXEC SQL WHENEVER SQLERROR CONTINUE;

    for( int_hv = int_data, char_hv = char_data,
         n = sizeof(int_data)/sizeof(int_data[0]); ; )
    {
        exec sql for :n insert into t values ( :int_hv, :char_hv );
        printf( "sqlcode = %d, sqlerrd[2] = %d\n",
                sqlca.sqlcode, sqlca.sqlerrd[2] );
        if ( sqlca.sqlcode )
        {
            int_hv  = int_hv + (sqlca.sqlerrd[2]+1);
            char_hv = (char *)char_hv + (sqlca.sqlerrd[2]+1) * sizeof(char_data[0]);
            n      -= sqlca.sqlerrd[2]+1;
        }
        else
        {
            break;
        }
    }

    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
    exec sql whenever NOTFOUND DO BREAK;
    exec sql declare c cursor for select * from t;
    exec sql open c;
    for(;;)
    {
        exec sql fetch c into :test_number, :test_string;
        printf( "%d, %s\n", test_number, test_string );
    }
    exec sql whenever notfound continue;
    exec sql close c;

}


when run, it generated this output:


Connected to ORACLE as user: /

sqlcode = -1438, sqlerrd[2] = 2
sqlcode = -1438, sqlerrd[2] = 2
sqlcode = -1438, sqlerrd[2] = 2
sqlcode = 0, sqlerrd[2] = 1
1, one
2, two
4, four
5, five
7, seven
8, eight
0, ten


showing the numbers 33, 66, 99 failed (each raised ORA-01438, value larger than specified precision, since column X is NUMBER(1)) and we just skipped them.


Hey! Unidentified reader!!!

Unidentified also, August 29, 2002 - 12:01 pm UTC


8 loop
9 l_array.extend;
10 -- l_array(l_array.last) := i;
11 end loop;
12 l_array_element := l_array.first;
13 dbms_output.put_line(l_array_element);
14 loop
15 begin
16 forall i in l_array_element..l_array.last
17 insert into lsc.emp(empno) values(l_array(i));

You rem'd out line 10, thus extending but never initializing l_array, and then inserting into an (assumed) required field.

Built in exception handler with Bulk collects ?

Ram, September 06, 2002 - 7:31 am UTC

Thanks for the fantastic example !

I have a couple of questions regarding exception handling when using bulk collects.
a) Is there a built-in exception array into which error messages are populated as and when they occur, so you can loop through them at the end and take the necessary action?
The reason I am asking is that I saw some code using bulk collects that had a 'Save Exceptions' clause added to the forall.
b) If the answer to the above is yes, is it specific to some version?

thanks in advance.
your book is GREAT ! looking forward to your
next one :-)

Tom Kyte
September 06, 2002 - 7:44 am UTC

Yes, this is new in Oracle9i release 1 and up.  Here is the example from the book:

ops$tkyte@ORA9I.WORLD> create table emp ( sal int );

Table created.

ops$tkyte@ORA9I.WORLD> insert into emp values ( 1000 );

1 row created.

ops$tkyte@ORA9I.WORLD> 
ops$tkyte@ORA9I.WORLD> 
ops$tkyte@ORA9I.WORLD> DECLARE
  2     TYPE NumList IS TABLE OF NUMBER;
  3     num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
  4     errors NUMBER;
  5     dml_errors EXCEPTION;
  6     PRAGMA exception_init(dml_errors, -24381);
  7  BEGIN
  8     FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
  9        DELETE FROM emp WHERE sal > 500000/num_tab(i);
 10  EXCEPTION
 11     WHEN dml_errors THEN
 12        errors := SQL%BULK_EXCEPTIONS.COUNT;
 13        dbms_output.put_line('Number of errors is ' || errors);
 14            FOR i IN 1..errors LOOP
 15                  dbms_output.put_line('Error ' || i || ' occurred during '||
 16             'iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
 17          dbms_output.put_line('Oracle error is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
 18            END LOOP;
 19  end;
 20  /
Number of errors is 3
Error 1 occurred during iteration 2
Oracle error is ORA-01476: divisor is equal to zero
Error 2 occurred during iteration 6
Oracle error is ORA-01476: divisor is equal to zero
Error 3 occurred during iteration 10
Oracle error is ORA-01476: divisor is equal to zero

PL/SQL procedure successfully completed.

As you can see, it greatly simplified this entire process.
 

insert fail

mo, February 06, 2003 - 9:26 am UTC

Tom:

Is there a way, when an insert fails in SQL/PL-SQL on a table, to tell exactly which column could not accept the data, or do I have to examine each column in a large table manually?

Thanks,

Tom Kyte
February 06, 2003 - 10:10 am UTC

depends on the error:

Bad row index = 3 ORA-01401: inserted value too large for column
Bad row index = 5 ORA-01400: cannot insert NULL into ("OPS$TKYTE"."T"."X")

 

fail insert

mo, February 07, 2003 - 11:45 am UTC

TOm:

how did you get the row index value in SQL or PL/SQL? Is this the troublesome column?

Here is what I mean. How do I let oracle tell me which column has the problem rather than me trying to figure it out in a large table?

SQL>   insert into emp1(empno,ename,job,mgr) values (222,'michael dell','MGR','abc')
                                                                        *
ERROR at line 1:
ORA-01401: inserted value too large for column


SQL> insert into emp1(empno,ename,job,mgr) values (222,'mike dell','MGR','abc')
SQL> /
 insert into emp1(empno,ename,job,mgr) values (222,'mike dell','MGR','abc')
                                                                     *
ERROR at line 1:
ORA-01722: invalid number 

Tom Kyte
February 07, 2003 - 1:21 pm UTC

I told you -- depends on the ERROR. 1401, no joy. Others -- maybe.

the row index was the index from the array insert.

insert fail

mo, February 07, 2003 - 2:11 pm UTC

Tom:

You did not tell me how you got this:

I am not seeing this in SQL. All I get is the ORA-01401 message. Do I have to create an index on the column or do some other kind of setup for it?

Also, in PL/SQL can you report that index value back to the user on a web page (by assigning it to a variable)?


<
Bad row index = 3 ORA-01401: inserted value too large for column
Bad row index = 5 ORA-01400: cannot insert NULL into ("OPS$TKYTE"."T"."X")
>


Tom Kyte
February 07, 2003 - 2:32 pm UTC

All I get is the 1401 too -- are you seeing something I'm not??????

the index is NOT the column number.
the index is printed by me, an index into an ARRAY of rows I was inserting.

that says "row 3 had a problem"

Trapping multiple errors

A reader, April 23, 2003 - 4:26 am UTC

Hi Tom,
Is it possible to find multiple errors (if any) in the data while inserting/updating? For example, the error could be because of a NULL value, a PK and/or FK violation, etc. This could be useful to fix all errors in one go rather than running the program again and again. The DB version I am using is 8.1.7.3 on HP-UX 11i.
Thanks

Tom Kyte
April 23, 2003 - 7:27 am UTC

no, once an error is found, the statement is terminated.

bulk processing

lakshmi, May 15, 2003 - 12:35 am UTC

Excellent

Excellent

A reader, June 27, 2003 - 10:42 pm UTC

What if I only want to insert those rows/records that have passed some validation prior to the forall?

Do I have to create a second set of arrays of valid values, and insert that set?

I saw a technique used by yourself for an update where you set a changed flag for each record.

thanks

Tom Kyte
June 28, 2003 - 9:12 am UTC

same concept:

ops$tkyte@ORA920> create table t ( data number );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> declare
  2     type array is table of varchar2(20);
  3     l_data array := array( '1','2','3','d','4','5' );
  4     l_flags array := array();
  5  begin
  6          l_flags.extend(l_data.count);
  7          for i in 1 .. l_data.count
  8          loop
  9                  if ( l_data(i) != 'd' )
 10                  then
 11                          l_flags(i) := 'Y';
 12                  else
 13                          l_flags(i) := 'N';
 14                  end if;
 15          end loop;
 16
 17          forall i in 1 .. l_data.count
 18                  insert into t ( data)
 19                  select l_data(i)
 20                    from dual
 21                   where l_flags(i) = 'Y';
 22  end;
 23  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> select * from t;

      DATA
----------
         1
         2
         3
         4
         5

ops$tkyte@ORA920>
 

I am not worthy !!!

phil, June 28, 2003 - 8:39 pm UTC

thanks

Su Baba, August 08, 2003 - 2:40 pm UTC

Hi Tom,

I have a situation where I need to do a bulk insert, but I want 
to get rid of unwanted rows based on a function applied to 
a column. It seems like the entire operation stops as soon as
it hits the first exception. Is there any way to continue the 
process like the one in your reply to the original question?

Here's my code to simulate the situation:


SQL> CREATE OR REPLACE FUNCTION func (x IN NUMBER)
  2  RETURN NUMBER IS
  3     l_ret_val          NUMBER;
  4     e_custom_exception EXCEPTION;
  5     PRAGMA EXCEPTION_INIT(e_custom_exception, -1000);
  6  
  7  BEGIN
  8     l_ret_val := x * 100;
  9  
 10     IF (l_ret_val > 100000) THEN
 11        RAISE e_custom_exception;
 12     END IF;
 13  
 14     RETURN l_ret_val;
 15  
 16     EXCEPTION
 17        WHEN e_custom_exception THEN
 18           dbms_output.put_line('Error occurred...');
 19           l_ret_val := 0;
 20  END;
 21  /

Function created.

SQL> CREATE TABLE x (
  2  col1 NUMBER
  3  );

Table created.

SQL> 
SQL> INSERT INTO x VALUES (1);

1 row created.

SQL> INSERT INTO x VALUES (100);

1 row created.

SQL> INSERT INTO x VALUES (10000);

1 row created.

SQL> INSERT INTO x VALUES (200);

1 row created.

SQL> INSERT INTO x VALUES (20000);

1 row created.

SQL> commit;

Commit complete.

SQL> 
SQL> CREATE TABLE z (
  2  col1 NUMBER
  3  );

Table created.

SQL> DECLARE
  2      type number_tbl_type is table of NUMBER index by binary_integer;
  3      l_col1_tbl    number_tbl_type;
  4      l_start NUMBER := 1;
  5  
  6      CURSOR c IS 
  7         SELECT col1 FROM x;
  8  
  9  BEGIN
 10      OPEN c;
 11  
 12      LOOP
 13          FETCH c BULK COLLECT INTO l_col1_tbl
 14             LIMIT 100;
 15  
 16          BEGIN
 17             FORALL i IN 1..l_col1_tbl.COUNT
 18                INSERT INTO z (col1) VALUES
 19                (func(l_col1_tbl(i)));
 20  
 21             EXCEPTION
 22                WHEN others THEN
 23                   dbms_output.put_line('Error at ' ||
 24                      (l_start + sql%rowcount));
 25  
 26                   l_start := l_start + sql%rowcount + 1;
 27          END;
 28  
 29          EXIT WHEN c%NOTFOUND;
 30      END LOOP;
 31  
 32      CLOSE c;
 33  END;
 34  /
Error occurred...
Error at 3

PL/SQL procedure successfully completed.

SQL> select * from z;

      COL1
----------
       100
     10000



SQL> show error
No errors.

 

Tom Kyte
August 10, 2003 - 11:59 am UTC

make the function return a 0 or 1 (success or failure) instead and either

select col1 from x where f(col1) = 0

to filter them on the way OUT of the database or, less efficiently,

insert into z ( col1 ) select l_col1_tbl(i) from dual where f(l_col1_tbl(i)) = 0


or select them all out and move them into another array to be inserted:

fetch bulk collect...

l_data_array := l_empty_array;
for i in 1 .. l_col1_tbl.count loop
if ( f(l_col1_tbl(i)) = 0 ) then l_data_array(l_data_array.count+1) := l_col1_tbl(i);
end if;
end loop
insert


or use the SAVE EXCEPTIONS (search for "save exceptions" on this site in quotes to read about that) if you are using 9i


Me, I would probably do the "select them all, move them into an array, insert them" to reduce the number of context switches from sql to plsql which can be expensive.

Su Baba, August 10, 2003 - 3:45 pm UTC

Hi Tom,

Database version : 8.1.7
Number of Rows in x : 1,000,000

If I make the function return either 0 or 1, how do I get the return value computed by the function?

I modified the above bulk collect/insert code into the following. Would it work this way?

thanks

DECLARE
type number_tbl_type is table of NUMBER index by binary_integer;
l_col1_tbl number_tbl_type;
l_data_array number_tbl_type;
l_empty_array number_tbl_type;
l_start NUMBER := 1;

CURSOR c IS
SELECT col1 FROM x;

BEGIN
OPEN c;

LOOP
FETCH c BULK COLLECT INTO l_col1_tbl
LIMIT 1000;

l_data_array := l_empty_array;

FOR i IN 1..l_col1_tbl.COUNT LOOP
IF (func(l_col1_tbl(i)) = 0) THEN
l_data_array (l_data_array.COUNT + 1) := l_col1_tbl(i);

ELSE
-- "dbms_output" the exception message
END IF;

FORALL i IN 1..l_data_array.COUNT
INSERT INTO z (col1) VALUES l_data_array(i);

EXIT WHEN c%NOTFOUND;
END LOOP;

CLOSE c;
END;
/



Tom Kyte
August 10, 2003 - 4:00 pm UTC

use my preferred option (do it in the code) and leave the exception flinging the way it is I guess

or have the function return "some obviously invalid value" like 999999999999999999999999 on error.


I would

fetch bulk collect
for i in 1 ...
loop
begin
another_array(another_array.count+1) := f(data(i));
exception
when your_exception then do whatever you want
end;
end loop
forall insert another_array;


Fantastic!

Kashif, August 14, 2003 - 4:00 pm UTC

Kashif

Your "when others exception" example using "dbms_output.put_line"

ht, August 25, 2003 - 2:23 pm UTC

Tom,
I haven't been successful in running your sample code so I tried the basic sample below. All I'm trying to do is print the Oracle error message and error code in a pl/sql session.

What am I doing wrong?

Thanks,
ht
1 declare
2 x date;
3 begin
4 select sysdatex into x from dual;
5 exception when others then
6 dbms_output.put_line('error code:'||sqlcode);
7 dbms_output.put_line('error message:'||sqlerrm);
8* end;
>/
select sysdatex into x from dual;
*
ERROR at line 4:
ORA-06550: line 4, column 16:
PL/SQL: ORA-00904: "SYSDATEX": invalid identifier
ORA-06550: line 4, column 9:
PL/SQL: SQL Statement ignored


Tom Kyte
August 26, 2003 - 7:41 am UTC

you have to experience a runtime error, not a compile time error.


sysdatex is NOT a column
hence this block of code cannot be compiled
hence it never actually runs


if you do something like



ops$tkyte@ORA920> declare
  2          x date;
  3  begin
  4          x := 'a';
  5          exception when others then
  6                  dbms_output.put_line('error code:'||sqlcode);
  7                  dbms_output.put_line('error message:'||sqlerrm);
  8  end;
  9  /
error code:-1858
error message:ORA-01858: a non-numeric character was found where a numeric was expected
 
PL/SQL procedure successfully completed.


that'll generate a RUNTIME error for you.
 

A reader, August 26, 2003 - 3:11 pm UTC


save exceptions and so on in Oracle <= 9.0

Matthias Rogel, August 28, 2003 - 5:44 am UTC

Hello Tom,

You often mention the new pl/sql feature "save exceptions" introduced in Oracle 9 Release 1.

However, I think it's worth telling the audience that you can achieve everything introduced by this feature in prior versions as well.

Take your example on
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3675533064673#5303751976529
"from the book" (I don't read books at all, sorry for that!)

Here's how I do the same example in Oracle 9.0 (it would also work in Oracle 8, I'd say; I cannot test it for the moment):
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
PL/SQL Release 9.0.1.0.0 - Production
CORE    9.0.1.0.0       Production
TNS for Linux: Version 9.0.1.0.0 - Production
NLSRTL Version 9.0.1.0.0 - Production

Elapsed: 00:00:00.00
SQL> create table emp ( sal int );

Table created.

Elapsed: 00:00:00.00
SQL> insert into emp values ( 1000 );

1 row created.

Elapsed: 00:00:00.00
SQL>
SQL>
SQL> declare
  2     TYPE NumList IS TABLE OF NUMBER;
  3     num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
  4     num_tab_ dbms_sql.number_table;
  5     errors number;
  6     bulk_cursor integer;
  7     arr_sqlerrm dbms_sql.varchar2_table;
  8     dummy integer;
  9     i integer;
 10  begin
 11     bulk_cursor := dbms_sql.open_cursor;
 12     dbms_sql.parse(bulk_cursor,
 13           'begin' || chr(10) ||
 14           '   delete from emp where sal > 500000/:num;'|| chr(10) ||
 15           '   exception when others then :vsqlerrm := sqlerrm;'|| chr(10) ||
 16           'end;',
 17           dbms_sql.native);
 18     for c in num_tab.first .. num_tab.last loop
 19        num_tab_(num_tab_.count+1) := num_tab(c);
 20     end loop;
 21     dbms_sql.bind_array(bulk_cursor, ':num', num_tab_);
 22     dbms_sql.bind_array(bulk_cursor, ':vsqlerrm', arr_sqlerrm);
 23     dummy := dbms_sql.execute(bulk_cursor);
 24     dbms_sql.variable_value(bulk_cursor, ':VSQLERRM', ARR_SQLERRM);
 25     dbms_sql.close_cursor(bulk_cursor);
 26     dbms_output.put_line('Number of errors is ' || arr_sqlerrm.count);
 27     i := arr_sqlerrm.first;
 28     while i is not null loop
 29        if arr_sqlerrm(i) is not null then
 30           dbms_output.put_line('Oracle error is ' ||
 31                                 arr_sqlerrm(i));
 32        end if;
 33        i := arr_sqlerrm.next(i);
 34     end loop;
 35  end;
 36  /
Number of errors is 3
Oracle error is ORA-01476: divisor is equal to zero
Oracle error is ORA-01476: divisor is equal to zero
Oracle error is ORA-01476: divisor is equal to zero

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03




If you also need the "SQL%BULK_EXCEPTIONS(i).ERROR_INDEX"
you would have to go like
SQL> declare
  2     TYPE NumList IS TABLE OF NUMBER;
  3     num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
  4     num_tab_ dbms_sql.number_table;
  5     errors number;
  6     bulk_cursor integer;
  7     arr_sqlerrm dbms_sql.varchar2_table;
  8     arr_it dbms_sql.number_table;
  9     dummy integer;
 10     i integer;
 11  begin
 12     bulk_cursor := dbms_sql.open_cursor;
 13     dbms_sql.parse(bulk_cursor,
 14           'begin' || chr(10) ||
 15           '   delete from emp where sal > 500000/:num;'|| chr(10) ||
 16           '   :vsqlerrm := null;'|| chr(10) ||
 17           '   exception when others then :vsqlerrm := sqlerrm;'|| chr(10) ||
 18           'end;',
 19           dbms_sql.native);
 20     for c in num_tab.first .. num_tab.last loop
 21        num_tab_(num_tab_.count+1) := num_tab(c);
 22        arr_it(arr_it.count+1) := c;
 23     end loop;
 24     dbms_sql.bind_array(bulk_cursor, ':num', num_tab_);
 25     dbms_sql.bind_array(bulk_cursor, ':vsqlerrm', arr_sqlerrm);
 26     dummy := dbms_sql.execute(bulk_cursor);
 27     dbms_sql.variable_value(bulk_cursor, ':VSQLERRM', ARR_SQLERRM);
 28     dbms_sql.close_cursor(bulk_cursor);
 29     i := arr_sqlerrm.first;
 30     while i is not null loop
 31        if arr_sqlerrm(i) is not null then
 32           dbms_output.put_line('Error occurred during '||
 33                               'iteration ' || arr_it(i));
 34           dbms_output.put_line('Oracle error is ' ||
 35                                 arr_sqlerrm(i));
 36        end if;
 37        i := arr_sqlerrm.next(i);
 38     end loop;
 39  end;
 40  /
Error occurred during iteration 2
Oracle error is ORA-01476: divisor is equal to zero
Error occurred during iteration 6
Oracle error is ORA-01476: divisor is equal to zero
Error occurred during iteration 10
Oracle error is ORA-01476: divisor is equal to zero


or so.

Of course, your way is
1. easier to code
2. faster (I know, execute immediate and forall and this stuff is faster than dbms_sql, however, I don't know why)

But my way is still pretty performant

best
regards
matthias  

Sorry, but .... "save exceptions" and instead-of-trigger

A reader, September 01, 2003 - 8:28 am UTC

sorry for that bad example.

I just saw that Tom's method (of course) also works
in 9.0.1.

BUT:
I came to it with a sample where "save exceptions"
doesn't work, in 9.0 AND in 9.2.

In my opinion, it seems to be an Oracle bug - I just
don't know whether it is in the engine or in the documentation.

here it is


SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production


SQL> create table test(n number);

Table created.

SQL> create view test_ as select * from test;

View created.

SQL> create trigger tr_test
  2  instead of insert on test_
  3  for each row
  4  begin
  5     if :new.n > 5 then raise_application_error(-20001, 'only values<=5 allowed'); end if;
  6     insert into test values(:new.n);
  7  end;
  8  /


SQL> insert into test_ values(6);
insert into test_ values(6)
            *
ERROR at line 1:
ORA-20001: only values<=5 allowed
ORA-06512: at "DP.TR_TEST", line 2
ORA-04088: error during execution of trigger 'DP.TR_TEST'


SQL> insert into test_ values(4);

1 row created.


SQL> select * from test;

         N
----------
         4


REM ********************************************
REM so far, so good
REM now, let's take Tom's approach to bulk-insert
REM into test_ ....
REM ********************************************


SQL> DECLARE
  2    TYPE NumList IS TABLE OF NUMBER;
  3    num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
  4    errors NUMBER;
  5    dml_errors EXCEPTION;
  6    PRAGMA exception_init(dml_errors, -24381);
  7    BEGIN
  8    FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
  9    insert into test_ values(num_tab(i));
 10   EXCEPTION
 11   WHEN dml_errors THEN
 12   errors := SQL%BULK_EXCEPTIONS.COUNT;
 13   dbms_output.put_line('Number of errors is ' || errors);
 14   FOR i IN 1..errors LOOP
 15   dbms_output.put_line('Error ' || i || ' occurred during '||
 16   'iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
 17   dbms_output.put_line('Oracle error is ' ||
 18  SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
 19   END LOOP;
 20   end;
 21   /
DECLARE
*
ERROR at line 1:
ORA-20001: only values<=5 allowed
ORA-06512: at "DP.TR_TEST", line 2
ORA-04088: error during execution of trigger 'DP.TR_TEST'
ORA-06512: at line 8


REM ********************************************
REM sorry, what's that ?
REM NO exception was saved ?
REM It seems that when U want
REM to do bulk-dml on a view with
REM instead-of-triggers
REM using save exceptions
REM the only way U are able to do this is
REM dbms_sql
REM ********************************************




Am I missing sth. here ?

best regards,
matthias rogel 

Tom Kyte
September 01, 2003 - 8:55 am UTC

I concur -- the appearance of an instead of trigger causes the save exceptions to not work. I've filed a bug on it.

Peacemonger, September 18, 2003 - 12:25 pm UTC

In presenting the PL/SQL code for bulk inserting rows of data, you say

It inserts all of the data that is "good" stopping on each "bad" record.

Does this mean that every "stop" entails a context switch to complete the forall...insert statement before continuing the LOOP?

If so, then it follows that the number of context switches is directly proportional to the number of rows provoking an EXCEPTION. Correct?

Tom Kyte
September 18, 2003 - 12:41 pm UTC

if you do it without SAVE EXCEPTIONS -- it context switches on each error.

if you do it with SAVE EXCEPTIONS -- it does not.

save exceptions is a new 9i feature.

Intriguing

Peacemonger, September 18, 2003 - 1:24 pm UTC

I'll definitely want to explore that when upgrading to 9i

Inserting Single row v/s bulk of rows

Vivek Sharma, October 14, 2003 - 6:46 am UTC

Dear Tom,

In the code which you have mentioned above, what difference would it have made if you inserted the rows into the table inside the for loop, after the IF statement check for != 'd'?

declare
type array is table of varchar2(20);
l_data array := array( '1','2','3','d','4','5' );
l_flags array := array();
begin
l_flags.extend(l_data.count);
for i in 1 .. l_data.count
loop
if ( l_data(i) != 'd' )
then
l_flags(i) := 'Y';
else
l_flags(i) := 'N';
end if;
end loop;
forall i in 1 .. l_data.count
insert into t ( data)
select l_data(i)
from dual
where l_flags(i) = 'Y';
end;

I modified this plsql block to check the difference when:
1. I bulk fetch 1000 records and, based on the mod function, set the flag. Then insert into table t using forall for flag = 'Y'.
2. I bulk fetch 1000 records and, based on the mod function, insert a record into table t.
3. Normal processing of a single record and, based on the mod function, insert into table t.

The modified block is as under :

insert into t1 select rownum from all_objects;

17166 rows created.
commit;

set serveroutput on
declare
type array is table of varchar2(20);
l_flags array := array();
l_data array;
l_start number;
l_x t.x%type;
cursor main is
select * from t1;
begin
l_start := dbms_utility.get_time;
open main ;
loop
fetch main bulk collect into l_data limit 1000;
l_flags.extend(l_data.count);
for i in 1 .. l_data.count
loop
if ( mod(l_data(i),100) != 0 )
then
l_flags(i) := 'Y';
else
l_flags(i) := 'N';
end if;
end loop;
forall i in 1 .. l_data.count
insert into t
select l_data(i)
from dual
where l_flags(i) = 'Y';
exit when main%notfound;
end loop;
close main;
dbms_output.put_line('Time taken '||to_char(dbms_utility.get_time-l_start)||' hsecs');
l_start:=dbms_utility.get_time;
open main;
loop
fetch main bulk collect into l_data limit 1000;

for i in 1 .. l_data.count
loop
if ( mod(l_data(i),100) != 0 )
then
insert into t
values(l_data(i));
end if;
end loop;
exit when main%notfound;
end loop;
close main;
dbms_output.put_line('Time taken '||to_char(dbms_utility.get_time-l_start)||' hsecs');
l_start:=dbms_utility.get_time;
open main;
loop
fetch main into l_x;
exit when main%notfound;
if ( mod(l_x,100) != 0 )
then
insert into t
values(l_x);
end if;
end loop;
close main;
dbms_output.put_line('Time taken '||to_char(dbms_utility.get_time-l_start)||' hsecs');
end;
/

Input truncated to 1 characters
Time taken 630 hsecs
Time taken 696 hsecs
Time taken 736 hsecs

PL/SQL procedure successfully completed.

Which is better according to you? I have an application wherein I need to process 6 lakh (600,000) records. What is your opinion? Will the first approach improve the performance? I could not see much of a difference when running for 17000-odd records.

Also, I can understand that the 3rd approach will be slower, but what is the difference between the first two in terms of performance?

Thanks in Advance.

Regards
Vivek Sharma


Tom Kyte
October 14, 2003 - 8:08 am UTC

well, you need to be a little more "real world" -- you probably don't have a single column table like that, do you?

Here we'll use a 40 byte row with an indexed value.  Using runstats, we can see what is really going on:


ops$tkyte@ORA920LAP> rem drop table t;
ops$tkyte@ORA920LAP> rem drop table t1;
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> rem create table t ( x int primary key, data char(40) );
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> rem create table t1
ops$tkyte@ORA920LAP> rem as
ops$tkyte@ORA920LAP> rem select rownum x from all_objects;
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> exec runstats_pkg.rs_start

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> truncate table t;

Table truncated.

ops$tkyte@ORA920LAP> declare
  2     type array is table of varchar2(20);
  3     l_flags array := array();
  4     l_data  array;
  5     l_start    number;
  6     l_x    t.x%type;
  7     cursor main is
  8        select * from t1;
  9  begin
 10     open main ;
 11     loop
 12        fetch main bulk collect into l_data limit 1000;
 13        l_flags.extend(l_data.count);
 14        for i in 1 .. l_data.count
 15        loop
 16           if ( mod(l_data(i),100) != 0 )
 17           then
 18              l_flags(i) := 'Y';
 19           else
 20              l_flags(i) := 'N';
 21           end if;
 22         end loop;
 23         forall i in 1 .. l_data.count
 24             insert into t
 25             select l_data(i), 'x' from dual
 26              where l_flags(i) = 'Y';
 27         exit when main%notfound;
 28     end loop;
 29     close main;
 30  end;
 31  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> exec runstats_pkg.rs_middle

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> truncate table t;

Table truncated.

ops$tkyte@ORA920LAP> declare
  2     type array is table of varchar2(20);
  3     l_flags array := array();
  4     l_data  array;
  5     l_start    number;
  6     l_x    t.x%type;
  7     cursor main is
  8        select * from t1;
  9  begin
 10      open main;
 11      loop
 12          fetch main bulk collect into l_data limit 1000;
 13
 14          for i in 1 .. l_data.count
 15          loop
 16          if ( mod(l_data(i),100) != 0 )
 17          then
 18             insert into t values(l_data(i), 'x' );
 19          end if;
 20          end loop;
 21          exit when main%notfound;
 22      end loop;
 23      close main;
 24  end;
 25  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> exec runstats_pkg.rs_stop(30000)
Run1 ran in 591 hsecs
Run2 ran in 714 hsecs
run 1 ran in 82.77% of the time

so the bulk inserts are measurably faster -- the more data you insert, the more indexes you have -- the faster they will be as opposed to single rows

Name                                  Run1        Run2        Diff
STAT...recursive calls               1,093      31,406      30,313
STAT...execute count                   129      30,462      30,333
LATCH.shared pool                      722      31,066      30,344
STAT...buffer is not pinned co      30,643         278     -30,365
STAT...table scans (short tabl      30,369           4     -30,365
STAT...table scan rows gotten       61,108      30,743     -30,365
STAT...table scan blocks gotte      30,446          81     -30,365
STAT...no work - consistent re      30,544         179     -30,365
LATCH.library cache                  1,379      62,015      60,636
LATCH.library cache pin                915      61,560      60,645
STAT...calls to get snapshot s      91,726      30,688     -61,038
STAT...session pga memory                0      65,536      65,536
STAT...consistent gets              91,830         740     -91,090
STAT...session logical reads       186,862      95,736     -91,126
LATCH.cache buffers chains         596,611     415,418    -181,193
STAT...redo size                15,580,544  15,989,964     409,420

look at the redo difference here as well


Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
677,178     649,052     -28,126    104.33%

PL/SQL procedure successfully completed.



the bulk inserts are better 

Regarding Code posted by Matthias Rogel from Kaiserslautern, Germany

Matt, January 26, 2004 - 11:34 pm UTC

See:

save exceptions and so on in Oracle <= 9.0 August 28, 2003
Reviewer: Matthias Rogel from Kaiserslautern, Germany

Above

I'm looking for a solution similar to the one posted by the gentleman referenced above (thanks Matthias). I need to generate a procedure to transfer data from an external table to a heap table. I need to parameterise the table names.

As I want to bulk load arrays of data many times AND capture any exception information for rows that fail the load the posted code looks like dbms_sql is the way to go (I can control when the cursors get opened, parsed and closed).

The problem is that to use the exception array functionality I need to bind some dummy values to the :num array (ie: dynamically generate "insert into t where :num = :num"). Basically, I have the overhead of building a dummy array that is the same size as my (parameterised) array size.

This works, but is there a 'better' way to do this? What I have is really a workaround.




To Clarify...

Matt, January 27, 2004 - 1:18 am UTC

How do you use bulk exceptions using dbms_sql?

Is it possible to write a procedure to accept the name of source and target tables and carry out the transfer of data in bulk?

Basically, how do you dynamically decide how many dbms_sql.Varchar2_Table etc. table types you need (at runtime), and how should these be defined efficiently?



Tom Kyte
January 27, 2004 - 8:49 am UTC

you don't -- you fake it as demonstrated above. You are basically doing row by row with an exception block on each row -- but bulk binding to this row by row thing.

better than true row by row.


Probably -- what I would do here is write a procedure to write a procedure. Then run the generated procedure (which can use all static sql, bulk binds, the whole suite of functionality).
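
A hedged sketch of that "procedure that writes a procedure" idea; the names GEN_LOADER and LOAD_<table>, the 100-row limit, and the omission of error handling are all illustrative assumptions (real code would add the SAVE EXCEPTIONS handling shown earlier in this thread):

create or replace procedure gen_loader( p_source in varchar2, p_target in varchar2 )
as
    l_code varchar2(32767);
begin
    -- build a loader that uses only static SQL, bulk collect and forall
    l_code := '
create or replace procedure load_' || p_target || ' as
    type array is table of ' || p_target || '%rowtype index by binary_integer;
    l_data array;
    cursor c is select * from ' || p_source || ';
begin
    open c;
    loop
        fetch c bulk collect into l_data limit 100;
        forall i in 1 .. l_data.count
            insert into ' || p_target || ' values l_data(i);
        exit when c%notfound;
    end loop;
    close c;
end;';
    execute immediate l_code;   -- compile the generated loader
end;
/

Calling gen_loader( 'EXT_EMP', 'EMP' ) (hypothetical table names) would then create a LOAD_EMP procedure whose SQL is entirely static.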

Bulk Insert

Ganesh Iyer, May 06, 2004 - 6:54 am UTC

I am trying to bulk insert 100 rows at a time into a table, while saving exceptions using the FORALL statement. But whenever an exception is raised, the remaining records are ignored. For example, while inserting 100 rows at a time, if the exception occurs at the 90th record, the records from 91 to 100 are lost. It then continues with the next set of 100 records.

Tom Kyte
May 06, 2004 - 7:58 am UTC

ops$tkyte@ORA9IR2> DECLARE
  2     TYPE NumList IS TABLE OF NUMBER;
  3     num_tab NumList := NumList(10,0,11,12,30,0,20,199,2,0,9,1);
  4     errors NUMBER;
  5     dml_errors EXCEPTION;
  6     PRAGMA exception_init(dml_errors, -24381);
  7  BEGIN
  8     FORALL i IN num_tab.FIRST..num_tab.LAST SAVE EXCEPTIONS
  9          insert into emp values(num_tab(i));
 10  EXCEPTION
 11     WHEN dml_errors THEN
 12        errors := SQL%BULK_EXCEPTIONS.COUNT;
 13        dbms_output.put_line('Number of errors is ' || errors);
 14            FOR i IN 1..errors LOOP
 15                  dbms_output.put_line('Error ' || i || ' occurred during '||
 16             'iteration ' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
 17          dbms_output.put_line('Oracle error is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
 18            END LOOP;
 19  end;
 20  /
Number of errors is 3
Error 1 occurred during iteration 2
Oracle error is ORA-02290: check constraint (.) violated
Error 2 occurred during iteration 6
Oracle error is ORA-02290: check constraint (.) violated
Error 3 occurred during iteration 10
Oracle error is ORA-02290: check constraint (.) violated
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select * from emp;
 
       SAL
----------
        10
        11
        12
        30
        20
       199
         2
         9
         1
 
9 rows selected.



Not my experience, can you make this example fail in the manner you describe so we can all reproduce. 

Bulk Insert

Ganesh Iyer, May 07, 2004 - 8:46 am UTC

I failed to mention a few points in my previous review.
I am selecting data from remote database tables using a DB link and then bulk inserting it into the local database. There is one column in the source database table declared as TIMESTAMP. I convert it into DATE after retrieving it because it is declared as DATE in the destination database. I am processing the records in batches (100 records at a time). When an exception is encountered, the FORALL statement loses some valid records but continues with the next set. If I don't include the column (which is declared as TIMESTAMP in the source database), all the records are processed successfully. Also, if I append an ORDER BY clause on the Primary Key without commenting out the TIMESTAMP column, processing is successful. I am confused as to what exactly is causing this problem. Please advise.

Tom Kyte
May 07, 2004 - 9:01 am UTC

give me *an example* -- set up dummy data in a table T1, create a table T2, process say "10 records" at a time. Put in bad data and show us the logic you are using that causes this.


I've given you an example that shows with forall and save exceptions -- the rows after the errors are in fact inserted.

You need to show me how to change the code to get your behaviour.

then, and only then, can we say "whats wrong"

Bulk Insert

Ganesh Iyer, May 07, 2004 - 10:15 am UTC

Hi

I have enclosed the example as a PL/SQL block (please see below). The column lgd_determination.date_of_determination (declared as TIMESTAMP in the remote database) is commented out. I would appreciate it if you would give me feedback about the logic I have used. I have added the ORDER BY clause even though it is not necessary; otherwise some records are ignored.

Many Thanks
Ganesh


DECLARE
-- Global Variables
gv_array_size INTEGER := 100;
gv_err NUMBER;
gv_index NUMBER;
gv_error_code VARCHAR2(50);
gv_error_text VARCHAR2(4000);
bulk_errors EXCEPTION;
PRAGMA exception_init(bulk_errors, -24381);

-- Variable Declarations
TYPE clg_tab_typ IS TABLE OF test_crdl_lgd%ROWTYPE
INDEX BY PLS_INTEGER;
clg_tab clg_tab_typ;

-- Fetch LGD data
CURSOR cd_clg IS
SELECT lgd_value.lgd_value_id,
lgd_value.facility_id,
lgd_value.facility_event_id,
lgd_value.lgd_value,
lgd_value.override_lgd_value,
lgd_value.calculated_lgd_value,
lgd_value.unsecured_recovery_rate,
NVL(override_reason_xref.description, lgd_value.override_reason) override_reason,
lgd_value.override_comment,
lgd_value.calculation_error,
lgd_value.calculation_error_reason calculation_error_desc,
NVL(methodology_xref.description, lgd_value.methodology) methodology,
unsecured_recovery_model.name lgd_model_type,
unsecured_recovery_model.version lgd_model_version,
--TO_CHAR(lgd_determination.date_of_determination,'DD-MON-RRRR') lgd_date,
NVL(status_xref.description, lgd_value.status) lgd_status,
lgd_determination.user_id,
users.team user_team_id
FROM bsl_lgd_value@BSL.WORLD lgd_value,
bsl_unsecured_recovery_model@BSL.WORLD unsecured_recovery_model,
bsl_lgd_determination@BSL.WORLD lgd_determination,
bsl_users@BSL.WORLD users,
( SELECT code,
description
FROM bsl_code@BSL.WORLD
WHERE table_name = 'BSL_LGD_VALUE'
AND column_name = 'OVERRIDE_REASON') override_reason_xref,
( SELECT code,
description
FROM bsl_code@BSL.WORLD
WHERE table_name = 'BSL_LGD_VALUE'
AND column_name = 'METHODOLOGY') methodology_xref,
( SELECT code,
description
FROM bsl_code@BSL.WORLD
WHERE table_name = 'BSL_LGD_VALUE'
AND column_name = 'STATUS') status_xref
WHERE lgd_value.unsecured_recovery_model_id = unsecured_recovery_model.unsecured_recovery_model_id
AND lgd_value.lgd_determination_id = lgd_determination.lgd_determination_id
AND lgd_determination.user_id = users.user_id
AND lgd_determination.user_event_id = users.creation_event_id
AND lgd_value.override_reason = override_reason_xref.code (+)
AND lgd_value.methodology = methodology_xref.code (+)
AND lgd_value.status = status_xref.code (+)
ORDER BY 1;

PROCEDURE pr_insert_errors( p_module_name VARCHAR2,
p_error_code VARCHAR2,
p_error_text VARCHAR2 ) IS
BEGIN

INSERT INTO test_application_errors
VALUES
( p_module_name,
p_error_code,
p_error_text,
SYSDATE
);

END;



BEGIN

-- Retrieve LGD data
OPEN cd_clg;
LOOP
BEGIN
-- STORE LGD data in Memory
FETCH cd_clg BULK COLLECT INTO clg_tab LIMIT gv_array_size;
-- Bulk Insert LGD data
FORALL i IN 1 .. clg_tab.COUNT SAVE EXCEPTIONS
INSERT INTO test_crdl_lgd VALUES clg_tab(i);
EXCEPTION
WHEN bulk_errors THEN
COMMIT;
gv_err := SQL%BULK_EXCEPTIONS.COUNT;
IF gv_err > 0
THEN
FOR i in 1 .. gv_err LOOP
gv_error_code := -SQL%BULK_EXCEPTIONS(i).ERROR_CODE;
gv_index := SQL%BULK_EXCEPTIONS(i).ERROR_INDEX;
gv_error_text := 'Error '||SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE)||
' encountered for LGD Value Id '||
clg_tab(gv_index).lgd_value_id;
pr_insert_errors( 'PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD',
gv_error_code,
gv_error_text
);
END LOOP;
END IF;
END;
-- Commit Work
COMMIT;
-- Exit When all data fetched
EXIT WHEN cd_clg%NOTFOUND;
END LOOP;

-- Close Cursor
CLOSE cd_clg;

-- Clear the contents from Memory
clg_tab.delete;

END;
/


Tom Kyte
May 07, 2004 - 1:00 pm UTC

like I said, please generate an example we can all run -- on our machines -- that reproduces the error.  Something like this:


ops$tkyte@ORA9IR2> drop table test_crdl_lgd;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table test_crdl_lgd
  2  as
  3  select empno, ename from scott.emp where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table test_crdl_lgd add constraint check_ename check (ename not like '%A%');
 
Table altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> DECLARE
  2  -- Global Variables
  3     gv_array_size INTEGER := 10;
  4     gv_err NUMBER;
  5     gv_index NUMBER;
  6     gv_error_code VARCHAR2(50);
  7     gv_error_text VARCHAR2(4000);
  8     bulk_errors EXCEPTION;
  9     PRAGMA exception_init(bulk_errors, -24381);
 10
 11   -- Variable Declarations
 12      TYPE clg_tab_typ IS TABLE OF test_crdl_lgd%ROWTYPE
 13      INDEX BY PLS_INTEGER;
 14      clg_tab clg_tab_typ;
 15
 16   -- Fetch LGD data
 17     CURSOR cd_clg IS
 18        select empno, ename from scott.emp;
 19
 20     PROCEDURE pr_insert_errors( p_module_name VARCHAR2,
 21                                 p_error_code  VARCHAR2,
 22                                 p_error_text  VARCHAR2 ) IS
 23     BEGIN
 24            dbms_output.put_line
 25        (  p_module_name || ', ' ||
 26           p_error_code || ', ' ||
 27           p_error_text || ', ' ||
 28           SYSDATE
 29        );
 30     END;
 31  BEGIN
 32        OPEN cd_clg;
 33        LOOP
 34        BEGIN
 35           FETCH cd_clg BULK COLLECT INTO clg_tab LIMIT gv_array_size;
 36           FORALL i IN 1 .. clg_tab.COUNT SAVE EXCEPTIONS
 37               INSERT INTO test_crdl_lgd VALUES clg_tab(i);
 38        EXCEPTION
 39        WHEN bulk_errors THEN
 40           COMMIT;
 41           gv_err := SQL%BULK_EXCEPTIONS.COUNT;
 42                   dbms_output.put_line
 43                   ( 'Errors = ' || gv_err ||
 44                     '  first empno = ' || clg_tab(1).empno ||
 45                     ' last empno = ' || clg_tab(clg_tab.count).empno );
 46           IF gv_err > 0
 47           THEN
 48              FOR i in 1 .. gv_err LOOP
 49                 gv_error_code := -SQL%BULK_EXCEPTIONS(i).ERROR_CODE;
 50                 gv_index := SQL%BULK_EXCEPTIONS(i).ERROR_INDEX;
 51                 gv_error_text := 'Error '||SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE)||
 52                                  ' encountered for LGD Value Id '||
 53                                  clg_tab(gv_index).empno;
 54                 pr_insert_errors( 'PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD',
 55                                    gv_error_code,
 56                                    gv_error_text
 57                                  );
 58              END LOOP;
 59           END IF;
 60        END;
 61           COMMIT;
 62           EXIT WHEN cd_clg%NOTFOUND;
 63        END LOOP;
 64        CLOSE cd_clg;
 65        clg_tab.delete;
 66  END;
 67  /
Errors = 5  first empno = 7369 last empno = 7844
PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD, -2290, Error ORA-02290: check
constraint (.) violated encountered for LGD Value Id 7499, 07-MAY-04
PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD, -2290, Error ORA-02290: check
constraint (.) violated encountered for LGD Value Id 7521, 07-MAY-04
PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD, -2290, Error ORA-02290: check
constraint (.) violated encountered for LGD Value Id 7654, 07-MAY-04
PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD, -2290, Error ORA-02290: check
constraint (.) violated encountered for LGD Value Id 7698, 07-MAY-04
PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD, -2290, Error ORA-02290: check
constraint (.) violated encountered for LGD Value Id 7782, 07-MAY-04
Errors = 2  first empno = 7876 last empno = 7934
PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD, -2290, Error ORA-02290: check
constraint (.) violated encountered for LGD Value Id 7876, 07-MAY-04
PKG_ARDBD_CRDL_EXTRACT.PR_INSERT_LGD, -2290, Error ORA-02290: check
constraint (.) violated encountered for LGD Value Id 7900, 07-MAY-04
 
PL/SQL procedure successfully completed.


ops$tkyte@ORA9IR2> select count(*) from test_crdl_lgd;
 
  COUNT(*)
----------
         7




Now, that diagnostic at the end shows

a) 7 out of 14 records inserted
b) 7 rejected properly
c) the rejects were in the "middle" of the arrays -- eg: 7844 was the last empno in the first array, 7782 was the last reject -- there were elements AFTER 7782 that got inserted


I need to see this "fail to insert the remaining records" 

Bulk Bind

Ganesh Iyer, May 10, 2004 - 9:32 am UTC

Hi

Thanks for your help. I am still unable to understand what the problem is. I have failed to replicate the problem by creating dummy tables with a matching structure in another database. Just to let you know, bulk bind causes problems with multibyte character settings (UTF8, AL32UTF8 -- Oracle has confirmed).

Could you please suggest any other efficient way of retrieving data ( volume in millions ) and reporting exceptions at the same time. Apart from the weird behaviour that I am experiencing with bulk bind it is very fast. I have tried conventional insert but it is slow.

Also, I need to update/insert records on a daily basis ( volume in thousands ). I thought of using MERGE command but I am not sure if I would be able to trap and report bad records.

Please advise.

Thanks
Ganesh




Tom Kyte
May 10, 2004 - 9:59 am UTC

You'll need to work this with support unless you can get me an example that "fails". I cannot fix that which I cannot experience.

Bulk Insert

Ganesh Iyer, May 10, 2004 - 12:05 pm UTC

Hi

Like I said before, I couldn't replicate the problem in another database. Hence this could be database specific. I am struggling to give you examples because the data is retrieved from 7 remote tables using db links. The volume is 400000 records, out of which two records are ignored. I would appreciate it if you could advise on the second point from my previous review. I am planning to use the MERGE command to insert/update new records. Is it possible to trap and report exceptions using this command?

Thanks in advance
Ganesh

Tom Kyte
May 10, 2004 - 12:50 pm UTC

i cannot -- other than row by row processing.

merge is going to be "all or nothing"

Bulk Insert

Ganesh Iyer, May 11, 2004 - 6:02 am UTC

Thanks very much.

your PRO*C code

unis, September 10, 2004 - 5:24 am UTC

Hi

From your PRO*C code on the top:

void process() etc etc etc
exec sql for :n insert into t values ( :int_hv, :char_hv );
etc etc etc

:n varies

is it possible to make it constant? We would like to process N elements in an array and N should be constant (for example 100). In your example

n = sizeof(int_data)/sizeof(int_data[0]); (by the way, how does int_data vary? does it decrease?)

so when you process, the number of elements varies. That's fine if the array is small, but if the array is large (over 1 million) it's probably better to process a constant number of elements so you have some sort of control, right?

Tom Kyte
September 10, 2004 - 8:51 am UTC



int x[100];

exec sql insert into t values ( :x );





Pro*C by default picks up the array's size.

If the array is over a million -- we need to talk. talk about coding in chunks, coding without requiring terabytes of ram....


the "for :n" is used when you are processing an input set of unknown size - you are processing 100 rows at a time (memory friendly, efficient, effective). However your input file has 12032 records in it. At time point, you are only going to have 32 records to process. So, N=100 most of the time, but the last insert just does 32 records.




bulk exceptions clause error message

Menon, November 29, 2004 - 8:03 am UTC

Hi Tom
Wonder why the bulk exceptions error message is not
the generic error message that you get normally:

consider:
benchmark@ORA10G> create table t1( x number primary key constraint check_nonnegative_lt_10 check( x > 0 and x <= 10 )) ;

Table created.

benchmark@ORA10G> insert into t1(x) values( null );
insert into t1(x) values( null )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("BENCHMARK"."T6"."X")


benchmark@ORA10G> insert into t1(x) values( -5 );
insert into t1(x) values( -5 )
*
ERROR at line 1:
ORA-02290: check constraint (BENCHMARK.CHECK_NONNEGATIVE_LT_10) violated

So far so good - we get meaningful error messages.
But when we use bulk exceptions, the same error
messages are less informative:

benchmark@ORA10G> declare
2 type number_table is table of number;
3 l_number_table number_table;
4 begin
5 l_number_table := number_table( 1, 2, 3, null, -5, 6, 7, 8, 9, 11);
6 begin
7 forall i in 1..l_number_table.count save exceptions
8 insert into t1( x ) values ( l_number_table(i) );
9 exception
10 when others then
11 dbms_output.put_line( 'number of exceptions raised: ' || sql%bulk_exceptions.count );
12 for i in 1..sql%bulk_exceptions.count loop
13 dbms_output.put_line( 'row number : ' || sql%bulk_exceptions(i).error_index );
14 dbms_output.put_line( 'error code: ' || sql%bulk_exceptions(i).error_code );
15 dbms_output.put_line( 'message: ' || sqlerrm( -sql%bulk_exceptions(i).error_code ));
16
17 end loop;
18 end;
19 end;
20 /
number of exceptions raised: 3
row number : 4
error code: 1400
-- column info missing from the following:
message: ORA-01400: cannot insert NULL into ()
row number : 5
error code: 2290
-- constraint name(s) missing from the following:
message: ORA-02290: check constraint (.) violated
row number : 10
error code: 2290
message: ORA-02290: check constraint (.) violated

PL/SQL procedure successfully completed.

Is this a known bug or am I missing something?
Thanx.

Tom Kyte
November 29, 2004 - 8:36 am UTC

it is the way it has always worked, you are not getting the error message from the bulk collect, you are only getting the error code.

the function sqlerrm takes the error code and gives you the error message.

it would be an enhancement for the bulk exception code to return the array of formatted error messages as well as codes.

ok - thanx.

Menon, November 29, 2004 - 9:25 am UTC


Re: bulk exceptions clause error message

Menon, November 29, 2004 - 11:04 am UTC

However, if you just print the error message using
sqlcode and sqlerrm, you get the correct message:
---
benchmark@ORA10G> create table t1( x number primary key constraint check_nonnegative_lt_10 check( x > 0 and x <= 10 )) ;

Table created.

benchmark@ORA10G> begin
2 insert into t1(x) values( null );
3 exception when others then
4 dbms_output.put_line( 'ERROR : '||sqlerrm ( sqlcode ) );
5 raise;
6 end;
7 /
ERROR : ORA-01400: cannot insert NULL into ("BENCHMARK"."T6"."X")
begin
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("BENCHMARK"."T6"."X")
ORA-06512: at line 5


benchmark@ORA10G> declare
2 type number_table is table of number;
3 l_number_table number_table := number_table( null );
4 begin
5 forall i in 1..l_number_table.count save exceptions
6 insert into t1 values( l_number_table(i) );
7 exception when others then
8 for i in 1..sql%bulk_exceptions.count
9 loop
10 dbms_output.put_line( 'ERROR : '||sqlerrm ( -1 *sql%bulk_exceptions(i).error_code) );
11 end loop;
12 end;
13 /
ERROR : ORA-01400: cannot insert NULL into ()

PL/SQL procedure successfully completed.
---

Most likely, we are storing the sqlcode in the sql%bulk_exceptions collection, but we could
(as an enhancement) either store the entire error message as you suggested, or just the
context associated with the error (e.g. "BENCHMARK"."T6"."X" would be the context in the above message). The "context" would be the parameter values to the error message.

Tom Kyte
November 29, 2004 - 3:25 pm UTC

you get the last one. What happens when row 1 fails due to ora-00001, row 2 due to ora-4132, row 3 due to ora-00001 and so on?

unless and until there is a sql%bulk_exceptions(i).error_msg, there is no "array of inputs to be used by sqlerrm"

I see..

Menon, November 29, 2004 - 3:38 pm UTC

"you get the last one, what happens when row 1 fails due to ora-00001, row 2 due
to ora-4132 and row 1 due to ora-00001 and so on"

That is true. The way sqlerrm works, my suggestion
will not be feasible.

Thanx.



Bulk into two tables

Yoav, December 15, 2005 - 4:36 am UTC

Hi Mr. Kyte,

Could you please explain how to bulk insert into TWO different tables when there is a parent-child relationship between those tables, like DEPT-EMP in the SCOTT schema?
Also, could you demonstrate how to insert the values into those tables if there is a sequence on the deptno column and the empno column?

Thank You Very Much.


Tom Kyte
December 15, 2005 - 10:35 am UTC

bulk insert into parent and then bulk insert into child?


if you are generating the keys on the fly, then you cannot do this obviously, you have to create the parent, get the KEY you just assigned on the fly, then you can bulk insert the child records for this parent.
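
A minimal sketch of that flow, with purely illustrative table and sequence names (not the poster's schema): insert the parent, capture its key with RETURNING, then FORALL-insert the children.

create sequence dept_seq;
create table dept_t ( deptno number primary key, dname varchar2(30) );
create table emp_t  ( empno number primary key, deptno number references dept_t );

declare
    type num_tab is table of number index by pls_integer;
    l_deptno dept_t.deptno%type;
    l_empnos num_tab;
begin
    -- create the parent row and capture the key it was just assigned
    insert into dept_t ( deptno, dname )
    values ( dept_seq.nextval, 'RESEARCH' )
    returning deptno into l_deptno;

    -- fill an array with the child keys...
    for i in 1 .. 3 loop
        l_empnos(i) := 100 + i;
    end loop;

    -- ...then bulk insert all of the children for that parent
    forall i in 1 .. l_empnos.count
        insert into emp_t ( empno, deptno )
        values ( l_empnos(i), l_deptno );
end;
/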

Bulk into TWO tables

Yoav, December 15, 2005 - 12:09 pm UTC

Hi Mr. Kyte.

Thanks for your response.
I would like to ask it again in a different way.

Please look at the following simply block:
1. In first for loop im inserting data into the
Parent table. On the fly im generating a Key.
2. In second for loop im inserting data into the
child table (including the KEY from the parent table),
and also im creating on the fly a second KEY based on
the FIRST KEY.

Could you please demonstrate how to move from this two for..loops into a BULK insert ?

declare

v_company_no number := 1;

cursor c is
select a.item_seq_no,sum(stock_qty) stock_qty
FROM t1 a
group by a.item_seq_no;

cursor c1(v_item_seq_no number) is
select item_seq_no,stock_qty, size, color
FROM t1 a
WHERE a.item_seq_no = v_item_seq_no;

v_parent_seq_no number;
v_sub_seq number;

BEGIN

for c_rec in c loop
-- Generate Key on the fly , for the parent table
v_parent_seq_no:= GetTheParentKey;

INSERT INTO t3 (company_no,
trans_no,
item_seq_no,
trans_qty)
values(v_company_no,
v_parent_seq_no,
c_rec.item_seq_no,
c_rec.stock_qty);

for c1_rec in c1(c_rec.item_seq_no) loop
-- Generate Second Key On the fly , for the child table
v_sub_seq := GetSecondKey(v_parent_seq_no);

INSERT INTO t4 (company_no,
trans_no,
sub_trans_no,
size,
color,
sub_qty)
values(v_company_no,
v_parent_seq_no, ----> The Parent key.
v_sub_seq, ----> The second key
c1_rec.size,
c1_rec.color,
c1_rec.stock_qty);
end loop;
end loop;

end;

FUNCTION GetSecondKey(v_parent_seq_no in number) Return number
IS
v_sub_no number;
BEGIN
SELECT nvl(max(sub_trans_no),0) + 1
INTO v_sub_no
FROM t4
WHERE trans_no = v_parent_seq_no;

Return v_sub_no;
END GetSecondKey;

Thanks Again.

Tom Kyte
December 15, 2005 - 12:32 pm UTC

you can certainly array insert in the inner loop. in the inner loop, fill arrays, that you forall insert after the loop is done.
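
Roughly, applied to the block above (a sketch only: I dropped the size and color columns for brevity and replaced GetSecondKey with a running counter, which produces the same max+1 values here because the parent key is newly generated, so no child rows for it exist yet):

declare
    v_company_no  number := 1;
    v_parent_seq  number;
    v_sub_seq     number;

    type num_tab is table of number index by pls_integer;
    l_trans_no  num_tab;
    l_sub_no    num_tab;
    l_qty       num_tab;
begin
    for c_rec in ( select item_seq_no, sum(stock_qty) stock_qty
                     from t1
                    group by item_seq_no )
    loop
        v_parent_seq := GetTheParentKey;

        insert into t3 ( company_no, trans_no, item_seq_no, trans_qty )
        values ( v_company_no, v_parent_seq, c_rec.item_seq_no, c_rec.stock_qty );

        -- fill arrays for this parent's children...
        l_trans_no.delete; l_sub_no.delete; l_qty.delete;
        v_sub_seq := 0;
        for c1_rec in ( select stock_qty
                          from t1
                         where item_seq_no = c_rec.item_seq_no )
        loop
            v_sub_seq := v_sub_seq + 1;      -- running sub-key instead of GetSecondKey
            l_trans_no(v_sub_seq) := v_parent_seq;
            l_sub_no(v_sub_seq)   := v_sub_seq;
            l_qty(v_sub_seq)      := c1_rec.stock_qty;
        end loop;

        -- ...then one bulk insert per parent instead of a row-by-row insert
        forall i in 1 .. l_qty.count
            insert into t4 ( company_no, trans_no, sub_trans_no, sub_qty )
            values ( v_company_no, l_trans_no(i), l_sub_no(i), l_qty(i) );
    end loop;
end;
/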

INSERT into a table with primary key, discarding duplicated rows

Charlie Zhu, December 15, 2005 - 1:36 pm UTC

Hi Tom,

I want to insert rows, some of them duplicated, into a table with a PK;
how do I discard the duplicated rows?
(Oracle 10.1.0.4, Linux AS 3.0)

If I use SQL*Loader it takes more code and is more complex, as you mentioned in your 2nd book -- Effective Oracle by Design.

The INSERT below is what I used before;
it costs more db resources and might be slow.

INSERT INTO T(col_id, col_desc)
select col_id, Max(col_desc)
from external_src_table
group by col_id;

I know 10.2 will provide error-handling syntax for the INSERT statement.
We're going to upgrade to 10.2 after patch 10.2.0.2 is released.

Thanks.

Tom Kyte
December 15, 2005 - 2:46 pm UTC

insert into t ( ... )
select ....
from (select ..., row_number() over (partition by PK order by anything) rn
from external_table )
where rn = 1;




INSERT into a table with primary key, continue

Charlie Zhu, December 15, 2005 - 7:15 pm UTC

Will this analytic SQL use a temporary segment for sorting the data?

What is the performance difference vs. the SQL I gave above?

Tom Kyte
December 15, 2005 - 7:21 pm UTC

it might, it might not, depends. But databases do this stuff pretty good.

yours works well, if you have col1 and want max(col2) and that is "it"

the analytic one works for any number of columns - and you order by whatever you want to find the "first" record to keep.

SELECT one row per PARTITION BY key, continue

Charlie Zhu, December 15, 2005 - 7:54 pm UTC

I don't care about the Max() or Min(), I just want to pick one row.

They're the same in my test: both 144 logical reads and 1 memory sort. I like the Oracle 10.2 SQL autotrace tool.

The analytic SQL might be better when I select multiple columns.

The interesting one is that the full scan got 291 consistent gets, double that of the GROUP BY SQL,
while the rows selected are about the same.

system@lab_abelabs-em64t-32bit> select listingsid,VENDCATSID from
(select listingsid,VENDCATSID, row_number() over (partition by listingsid order by VENDCATSID) rn
from scott.lstvendcat ) where rn = 1;

27426 rows selected.

Elapsed: 00:00:00.29

Execution Plan
----------------------------------------------------------
Plan hash value: 1532173944

-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 29564 | 1125K| | 238 (2)| 00:0
|* 1 | VIEW | | 29564 | 1125K| | 238 (2)| 00:0
|* 2 | WINDOW SORT PUSHED RANK| | 29564 | 317K| 1172K| 238 (2)| 00:0
| 3 | TABLE ACCESS FULL | LSTVENDCAT | 29564 | 317K| | 18 (0)| 00:0
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("RN"=1)
2 - filter(ROW_NUMBER() OVER ( PARTITION BY "LISTINGSID" ORDER BY "VENDCATSID")<=1)

Statistics
----------------------------------------------------------
144 consistent gets
1 sorts (memory)
0 sorts (disk)
27426 rows processed

system@lab_abelabs-em64t-32bit> select listingsid, Min(VENDCATSID) from scott.lstvendcat
group by listingsid;

27426 rows selected.

Elapsed: 00:00:00.25

Execution Plan
----------------------------------------------------------
Plan hash value: 644254678

-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 29043 | 311K| | 236 (2)| 00:00:03 |
| 1 | SORT GROUP BY | | 29043 | 311K| 1164K| 236 (2)| 00:00:03 |
| 2 | TABLE ACCESS FULL| LSTVENDCAT | 29564 | 317K| | 18 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
144 consistent gets
1 sorts (memory)
0 sorts (disk)
27426 rows processed

system@lab_abelabs-em64t-32bit> l
1* select listingsid, VENDCATSID from scott.lstvendcat
system@lab_abelabs-em64t-32bit> /

29548 rows selected.

Elapsed: 00:00:00.25

Execution Plan
----------------------------------------------------------
Plan hash value: 1786917994

--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 29564 | 317K| 18 (0)| 00:00:01 |
| 1 | TABLE ACCESS FULL| LSTVENDCAT | 29564 | 317K| 18 (0)| 00:00:01 |
--------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
291 consistent gets
0 sorts (memory)
0 sorts (disk)
29548 rows processed

system@lab_em64t-32bit> desc scott.LSTVENDCAT
Name Null? Type
--------------- -------- ----------
VENDCATSID NOT NULL NUMBER(10)
LISTINGSID NOT NULL NUMBER(20)


Tom Kyte
December 16, 2005 - 8:10 am UTC

the difference in LIO has to do with "data was copied to temp and processed" vs "data was processed"

consider the results of running this script:

drop table t;

create table t as select * from all_objects;

set termout off
select * from t;
select * from t order by object_id;
set termout on

set autotrace traceonly
set arraysize 2
select * from t;
select * from t order by object_id;
set arraysize 100
select * from t;
select * from t order by object_id;
set autotrace off




ops$tkyte@ORA10GR2> set autotrace traceonly
ops$tkyte@ORA10GR2> set arraysize 2
ops$tkyte@ORA10GR2> select * from t;
49261 rows selected.

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 43103 |  5387K|   157   (2)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| T    | 43103 |  5387K|   157   (2)| 00:00:02 |
--------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      24991  consistent gets
...
          0  sorts (memory)
          0  sorts (disk)
      49261  rows processed


<b>this table has about 600/700 blocks.  So how come 24,991 LIO's?  The array size, sqlplus said "get me rows 1 and 2", the database latched block 1 of the table, got them, unlatched block, returned them.  sqlplus said "get me rows 3 and 4", the database latched block 1 of the table (AGAIN).... and so on.  We would read block 1 many times, block 2 many times and so on...

Add a sort step (not done via an index):</b>

ops$tkyte@ORA10GR2> select * from t order by object_id;
49261 rows selected.

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 43103 |  5387K|       |  1396   (1)| 
|   1 |  SORT ORDER BY     |      | 43103 |  5387K|    13M|  1396   (1)| 
|   2 |   TABLE ACCESS FULL| T    | 43103 |  5387K|       |   157   (2)| 
-----------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        682  consistent gets
...
          1  sorts (memory)
          0  sorts (disk)
      49261  rows processed

<b> we significantly reduced the LIO since in order to get row 1 - we had to read every row and sort them into temp, once in temp - we don't have to do LIO anymore (we OWN the data, it is already consistent).  

Hence the reason I say "in general, the goal is to reduce LIO when tuning a query" - because according to this - I just "tuned" the query by sorting :)  but we all know that the query wasn't really "tuned" (although I just dramatically reduced the chance of a 1555 if this was a long running query, although I just massively increased the time to get the first row, although I just massively increased the memory used by the query, I might have increased temp space on disk used too, and so on......)

So, how to "tune" that query - in this case, upping the arraysize:</b>


ops$tkyte@ORA10GR2> set arraysize 100
ops$tkyte@ORA10GR2> select * from t;

49261 rows selected.

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 43103 |  5387K|   157   (2)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| T    | 43103 |  5387K|   157   (2)| 00:00:02 |
--------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1166  consistent gets
          0  physical reads
          0  redo size
    4927973  bytes sent via SQL*Net to client
       5797  bytes received via SQL*Net from client
        494  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      49261  rows processed

ops$tkyte@ORA10GR2> select * from t order by object_id;

49261 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 961378228

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 43103 |  5387K|       |  1396   (1)| 
|   1 |  SORT ORDER BY     |      | 43103 |  5387K|    13M|  1396   (1)| 
|   2 |   TABLE ACCESS FULL| T    | 43103 |  5387K|       |   157   (2)| 
-----------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        682  consistent gets
          0  physical reads
          0  redo size
    1947892  bytes sent via SQL*Net to client
       5797  bytes received via SQL*Net from client
        494  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      49261  rows processed

ops$tkyte@ORA10GR2> set autotrace off

<b>that shows that upping the arraysize can sometimes help some queries, but is not a "rule of thumb" for all problems...</b>
 

Transposing

Charlie Zhu, December 16, 2005 - 1:53 pm UTC

Thanks for the deep explanation about the full table scan and sorting.

My SQL*PLus default arraysize is 200
arraysize 200, 291 LIO, table full scan

After I changed the arraysize:
set arraysize 500, 204 consistent gets
set arraysize 1000, 174 consistent gets
set arraysize 5000, 150 consistent gets

I'll start using RunStats to find more detailed differences.
(What a wonderful thing it would be if SQL autotrace had a switch to turn on RunStats.)

I have to post a new question here (you've blacked out new Q&A submissions :), please forgive me).

How to transform/Transpose

deptno empno
10 101
10 102
10 103
10 104
...
50 505
50 506

to

deptno empno_list
10 101 102 103 104 105
50 505 506

There are hundreds of millions of rows, and it needs to be as fast as possible.

Is the SQL below the only way?

select deptno,
max(decode( dense_rank, 1, sal )) sal1,
max(decode( dense_rank, 2, sal )) sal2,
max(decode( dense_rank, 3, sal )) sal3
from (
select deptno, sal,
dense_rank() over( partition by deptno order by sal desc) dense_rank
from emp
)
where dense_rank <= 3
group by deptno
/


Tom Kyte
December 16, 2005 - 4:30 pm UTC

you can use collect() in 10g...

you can use the pivot as you have...

you can use stragg


pretty much everything is going to involve some sort of full scan, sort or hash to aggregate.
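
For example, the 10g COLLECT route looks like this - a minimal sketch against SCOTT.EMP, where SYS.ODCINUMBERLIST is just a convenient pre-existing collection type to CAST the result into:

select deptno,
       cast( collect( empno ) as sys.odcinumberlist ) empno_list
  from scott.emp
 group by deptno;

To get a single delimited string instead of a collection, the pivot you posted or a user-defined aggregate such as stragg is the usual route.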

Transposing

Charlie Zhu, December 16, 2005 - 6:53 pm UTC

What's the "stragg" meaning?

The COLLECT is really slow vs. Analytic Transposing.

SELECT listingsid, COLLECT(c.bsacode)
FROM scott.lstrsrch c group by listingsid;

system@lab> select listingsid,bsacode, row_number() over (partition by listingsid order by bsacode) rn, count(listingsid) over (partition by listingsid) cnt
from scott.lstrsrch ;

517738 consistent gets vs. 1258 consistent gets.

Anyway, I'll throw this to our Java programmer to handle.

Thanks again,

Tom Kyte
December 17, 2005 - 11:12 am UTC

search this site for stragg.


umm, let SQL do it please, the java guy will be slower than anything else.

dbms_output an array

A reader, January 02, 2006 - 7:36 am UTC

Hi

I have this code

create or replace function f_stripe_char(p_char varchar2, p_text varchar2)
return varchar2
as
l_char varchar2(2) := p_char;
l_text varchar2(2000) := p_text;
begin
l_text := substr(l_text, 1, instr(l_text, l_char, -1) - 1);
return l_text;
end;
/


declare
type l_array is table of emp%rowtype;
l_rec_data l_array;
l_select_clause varchar2(2000) := '';
l_sql varchar2(2000) := '';
l_tab_orig varchar2(30) := 'EMP';
l_predicate varchar2(2000) := 'rownum <= 10';
begin
for x in (select * from user_tab_columns where table_name = l_tab_orig)
loop
l_select_clause := l_select_clause || x.column_name ||', ';
end loop;
l_select_clause := f_stripe_char(',', l_select_clause);
l_sql := 'select ' || l_select_clause || ' from ' || l_tab_orig || ' where 1 = 1 and ' || l_predicate;
execute immediate l_sql bulk collect into l_rec_data;
dbms_output.put_line( l_rec_data(2));
end;
/


I get this error

dbms_output.put_line( l_rec_data(2));
*
ERROR at line 16:
ORA-06550: line 16, column 2:
PLS-00306: wrong number or types of arguments in call to 'PUT_LINE'
ORA-06550: line 16, column 2:
PL/SQL: Statement ignored

How can I see the contents inside the collection?

Tom Kyte
January 02, 2006 - 10:07 am UTC

you have to pass to dbms_output simple scalars

dbms_output.put_line( l_rec_data(2).ENAME || ', ' || l_rec_data(2).empno || ...)

A reader, January 19, 2006 - 11:02 pm UTC

Tom,

If records have to be entered into a database while it's unknown how many of them already exist, which of the following approaches is better? Simply keep inserting, catch error and print a custom error message if a key already exists. Second approach: For each key, select count(*) from table, if count=0 insert the record, otherwise print error. I'd say the second way is not necessary and results in overhead. Thanks.

Tom Kyte
January 20, 2006 - 10:05 am UTC

depends entirely.

IF the record FREQUENTLY exists and I'm using "single row inserts" (inserts with a values clause), I might want to check - but I don't need a separate SQL statement for that - AND I still need to have a "dup val on index" handling routine (multi-user after all I assume)

If the record almost never exists, just insert it and deal with the error.


If this were a bulk load (external table, staging table) using an insert, I would use a NOT IN to load only records I wanted to load in the first place..
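
Using the DATA and T1 tables from the test case below, that "load only the new rows" variant is simply (a sketch; a NOT EXISTS or outer-join anti-join works just as well):

insert into t1
select *
  from data
 where object_id not in ( select object_id from t1 );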


Use this to test - change the mod(object_id,2) to have different values from 2 (as coded, this'll have 50% duplicate entries approximately, make it 1 - 100%, make it 3 - 33% and so on...)

http://asktom.oracle.com/~tkyte/runstats.html
has runstats used at the bottom.



create table data as select * from all_objects;
create table t1 as select * from data where mod(object_id,2) = 0;
alter table t1 add constraint t1_pk primary key(object_id);
create table t2 as select * from data where mod(object_id,2) = 0;
alter table t2 add constraint t2_pk primary key(object_id);
create table mydual (id primary key) organization index as select 1 from dual;

exec dbms_stats.gather_table_stats( user, 'MYDUAL', cascade=>true );
exec dbms_stats.gather_table_stats( user, 'DATA', cascade=>true );
exec dbms_stats.gather_table_stats( user, 'T1', cascade=>true );
exec dbms_stats.gather_table_stats( user, 'T2', cascade=>true );

create or replace procedure p1
as
begin
for x in ( select * from data )
loop
begin
insert into t1
( OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME,
TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY )
values
( x.OWNER, x.OBJECT_NAME, x.SUBOBJECT_NAME, x.OBJECT_ID,
x.DATA_OBJECT_ID, x.OBJECT_TYPE, x.CREATED, x.LAST_DDL_TIME,
x.TIMESTAMP, x.STATUS, x.TEMPORARY, x.GENERATED, x.SECONDARY );
exception
when dup_val_on_index then NULL;
end;
end loop;
end;
/
create or replace procedure p2
as
begin
for x in ( select * from data )
loop
begin
insert into t2
( OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME,
TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY )
select
x.OWNER, x.OBJECT_NAME, x.SUBOBJECT_NAME, x.OBJECT_ID,
x.DATA_OBJECT_ID, x.OBJECT_TYPE, x.CREATED, x.LAST_DDL_TIME,
x.TIMESTAMP, x.STATUS, x.TEMPORARY, x.GENERATED, x.SECONDARY
from mydual where id = 1 and not exists (select null from t2 where object_id = x.object_id);
exception
when dup_val_on_index then NULL;
end;
end loop;
end;
/



exec runstats_pkg.rs_start
exec p1
exec runstats_pkg.rs_middle
exec p2
exec runstats_pkg.rs_stop(50000)


The RunStats output

Charlie Zhu, February 08, 2006 - 1:39 pm UTC

Run1 ran in 1114 hsecs
Run2 ran in 395 hsecs
run 1 ran in 282.03% of the run2 time

Name Run1 Run2 Diff
LATCH.library cache pin 93,818 37,672 -56,146
STAT...session pga memory -65,536 0 65,536
STAT...recursive calls 103,376 19,133 -84,243
LATCH.row cache objects 84,719 403 -84,316
STAT...db block gets 116,288 30,582 -85,706
STAT...db block gets from cach 116,288 30,582 -85,706
STAT...table scan rows gotten 128,682 25,832 -102,850
LATCH.library cache 140,984 38,031 -102,953
STAT...session logical reads 201,544 78,487 -123,057
LATCH.cache buffers chains 512,891 189,857 -323,034
STAT...undo change vector size 2,670,664 1,841,124 -829,540
STAT...redo size 11,405,808 5,788,504 -5,617,304

Run1 latches total versus Run2 -- difference and pct
Run1 Run2 Diff Pct
1,020,395 306,860 -713,535 332.53%

Tom Kyte
February 09, 2006 - 4:50 am UTC

did you see the "it depends entirely"

there are three distinct cases to consider

a) record almost never exists, insert works - duplicates are few and far between
b) record almost always exists, insert fails frequently
c) something in between a and b.


You ran one of them (not really sure which one).

Single SQL INSERT

Charlie Zhu, February 08, 2006 - 2:59 pm UTC

Hi Tom,

I would suggest using a single SQL statement instead of the row-based procedural loop. (I think that is your idea too.)

The single INSERT runs about 7 times faster and uses 80% fewer latches and less redo vs. p2.

exec runstats_pkg.rs_start
exec p2
exec runstats_pkg.rs_middle

insert into t1(
OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
DATA_OBJECT_ID, OBJECT_TYPE, CREATED, LAST_DDL_TIME,
TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY)
select
a.*
from data a, t1 b
where a.object_id = b.object_id (+)
and b.object_id is null;

exec runstats_pkg.rs_stop(50000)

system@DEV_vicdbtst> exec runstats_pkg.rs_stop(50000)
Run1 ran in 283 hsecs
Run2 ran in 35 hsecs
run 1 ran in 808.57% of the run2 time

Name Run1 Run2 Diff
STAT...session logical reads 78,787 23,074 -55,713
LATCH.cache buffers chains 181,442 53,689 -127,753
STAT...undo change vector size 1,860,508 374,864 -1,485,644
STAT...redo size 5,833,484 1,873,100 -3,960,384

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
292,354 59,237 -233,117 493.53%

Tom Kyte
February 09, 2006 - 4:55 am UTC

depends on which of the three conditions you have!

"it depends entirely"

Charlie Zhu, February 09, 2006 - 5:07 pm UTC

I understand, I saw it ("it depends entirely") a little late.

The 1st RunStats output just follows your sample, 50% existing.
The 2nd RunStats is 50% existing too.

From the small benchmark, row-based INSERT is slower than set-based INSERT, even when more than 50% of the rows already exist.
I'll try to find the crossover point (what percentage of new rows).

Many thanks

A reader, March 26, 2006 - 12:51 pm UTC

Thank you Tom for your Bulk Collect and Forall save exceptions examples. I tried them and they work well.

Need some directions

A reader, May 08, 2006 - 12:50 pm UTC

Tom,

I am getting this error; however, I want to capture the records
that raise this error in another table.
Any idea how I can do that?


SQL>  BEGIN 
  2    TERM.NVRM_PKG.container;
  3    COMMIT; 
  4  END; 
  5  /
 BEGIN
*
ERROR at line 1:
ORA-20006: Error in nvrm_PKG.container - ORA-20007: ORA-00001: unique
constraint (TERM.NVRM_CONT_PK) violated

PROCEDURE FOLLOWING:

PROCEDURE container
IS
   CURSOR his_CUR IS
  -- SELECT *
  --  FROM(
      select i.container
            ,i.container_id
            ,cm.ssl_owner_code SSL_USER_CODE
            ,INITCAP(s.long_description) SSL_USER_DESCRIPTION
            ,cm.lht_code
            ,i.status_code
            ,i.chassis
            ,i.in_trucker_code
            ,i.in_date
            ,i.in_mode
            ,i.in_visit
            ,i.out_trucker_code
            ,i.out_date
            ,i.out_mode
            ,i.out_visit
            ,i.remarks
            ,i.out_of_service
            ,i.stripped_date
            ,i.stuffed_date
            ,'F' AREA
        from inv_containers i
            ,container_masters cm
            ,ssl s
       where i.container = cm.container
         and cm.ssl_owner_code = s.code
   UNION ALL
      select h.container
            ,h.container_id
            ,h.ssl_owner_code SSL_USER_CODE
            ,INITCAP(s.long_description) SSL_USER_DESCRIPTION
            ,h.lht_code
            ,h.status_code
            ,h.chassis
            ,h.in_trucker_code
            ,h.in_date
            ,h.in_mode
            ,h.in_visit
            ,h.out_trucker_code
            ,h.out_date
            ,h.out_mode
            ,h.out_visit
            ,h.remarks
            ,h.out_of_service
            ,h.stripped_date
            ,h.stuffed_date
            ,'F' AREA
        from his_containers h
            ,ssl s
       where h.voided_date IS NULL
         and h.ssl_owner_code = s.code
   UNION ALL
      select i.container
            ,i.container_id
            ,cm.ssl_owner_code SSL_USER_CODE
            ,INITCAP(s.long_description) SSL_USER_DESCRIPTION
            ,cm.lht_code
            ,i.status_code
            ,i.chassis
            ,i.in_trucker_code
            ,i.in_date
            ,i.in_mode
            ,i.in_visit
            ,i.out_trucker_code
            ,i.out_date
            ,i.out_mode
            ,i.out_visit
            ,i.remarks
            ,i.out_of_service
            ,i.stripped_date
            ,i.stuffed_date
            ,'T' AREA
        from inv_containers@tripoli i
            ,container_masters@tripoli cm
            ,ssl@tripoli s
       where i.container = cm.container
         and cm.ssl_owner_code = s.code
   UNION ALL
      select h.container
            ,h.container_id
            ,h.ssl_owner_code SSL_USER_CODE
            ,INITCAP(s.long_description) SSL_USER_DESCRIPTION
            ,h.lht_code
            ,h.status_code
            ,h.chassis
            ,h.in_trucker_code
            ,h.in_date
            ,h.in_mode
            ,h.in_visit
            ,h.out_trucker_code
            ,h.out_date
            ,h.out_mode
            ,h.out_visit
            ,h.remarks
            ,h.out_of_service
            ,h.stripped_date
            ,h.stuffed_date
            ,'T' AREA
        from his_containers@tripoli h
            ,ssl@tripoli s
       where h.voided_date IS NULL
         and h.ssl_owner_code = s.code
   -- ordering by container and in_date
   --)
   --WHERE container = '    000117'
   ORDER BY 1, 9;

   his1_REC his_CUR%ROWTYPE := NULL;
   --his2_REC his_CUR%ROWTYPE := NULL;
   
  
   TYPE his_tab_type IS TABLE OF his_cur%ROWTYPE;  
    
    hisTab his_tab_type;
 
     
BEGIN
   EXECUTE IMMEDIATE 'truncate table nvrm_temp';
   C1_SIBT   := 0;
   C1_SOBT   := 0;
   C2_SIBT   := 0;
   C2_SOBT   := 0;
   OPEN his_CUR;
   LOOP
      FETCH his_cur BULK COLLECT INTO hisTab LIMIT 1000;
      nc := 0; 
      nvrmTab.DELETE;
      FOR K IN 1 .. hisTab.COUNT LOOP  
        out90days_PROC(his1_REC, hisTab(K));
        scenario2_PROC(his1_REC, hisTab(K));
        scenario3_PROC(his1_REC, hisTab(K));
        his1_REC := hisTab(K);
        C1_SIBT  := C2_SIBT;
        C1_SOBT  := C2_SOBT;
        C1_SOBT_CHECKED := C2_SOBT_CHECKED;
        --C1_SIBT_CHECKED := C2_SIBT_CHECKED;
        --C1_SIBT_CHECKED := C1_SIBT_CHECKED;
      END LOOP; 
      BEGIN
        FORALL K IN 1 .. nvrmTab.COUNT
         INSERT INTO nvrm_temp VALUES nvrmTab(K);
        COMMIT;
      EXCEPTION
        WHEN OTHERS THEN
         raise_application_error(-20007,SQLERRM);
      END;    
      EXIT WHEN his_CUR%NOTFOUND;
      
   END LOOP;
   CLOSE his_CUR;
   nc := 0;
   nvrmTab.DELETE;
   out90days_PROC(his1_REC, NULL);  -- original package miss this step
   IF nc > 0 THEN
    INSERT INTO nvrm_temp VALUES nvrmTab(nc);
   END IF; 
   COMMIT;
EXCEPTION
   WHEN OTHERS THEN
      IF his_CUR%ISOPEN THEN
         CLOSE his_CUR;
      END IF;
      ROLLBACK;
      RAISE_APPLICATION_ERROR(-20006,'Error in nvrm_PKG.container - '||SQLERRM);
END container;
 

Tom Kyte
May 08, 2006 - 1:39 pm UTC

search this site for "save exceptions"


http://asktom.oracle.com/pls/ask/search?p_string=%22save+exceptions%22
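
Applied to the inner BEGIN/END around your FORALL, the pattern looks roughly like this (NVRM_ERRORS is a hypothetical logging table you would create; the rest follows the SAVE EXCEPTIONS examples in this thread):

      DECLARE
         bulk_errors EXCEPTION;
         PRAGMA EXCEPTION_INIT( bulk_errors, -24381 );
      BEGIN
        FORALL K IN 1 .. nvrmTab.COUNT SAVE EXCEPTIONS
         INSERT INTO nvrm_temp VALUES nvrmTab(K);
      EXCEPTION
        WHEN bulk_errors THEN
         FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            -- ERROR_INDEX points at the offending array element, so you can
            -- log it (or its key) to your own table instead of re-raising
            INSERT INTO nvrm_errors ( err_index, err_code, err_msg )
            VALUES ( SQL%BULK_EXCEPTIONS(j).ERROR_INDEX,
                     SQL%BULK_EXCEPTIONS(j).ERROR_CODE,
                     SQLERRM( -SQL%BULK_EXCEPTIONS(j).ERROR_CODE ) );
         END LOOP;
      END;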

Thanks

A reader, May 08, 2006 - 9:32 pm UTC

Tom,

I was able to get it...thanks a lot..

One more request

A reader, May 08, 2006 - 9:36 pm UTC

Tom,

I know you can help here. Is there any way I can reduce the UNION here and avoid calling the procedures inside the bulk collect loop? This causes the procedures to run row by row... any suggestion would be good.

Tom Kyte
May 09, 2006 - 7:39 am UTC

given that you select from completely different sets of tables in the union all.... I fail to see how it would be reasonable to eliminate it.

and not understanding what the procedures in the loop DO - it would likewise be not really possible to explain how to remove them.


I can say I hate that commit in the loop. One wonders what happens when one encounters an error and one has loaded half of the data

and one has this totally non-safely restartable routine.... lose the COMMIT for absolutely sure.

Thanks!!

A reader, May 09, 2006 - 9:38 am UTC

Tom,

Thank you for your suggestions. I already removed the commit. For what it's worth... I am including the procedures called inside
the loop.

PROCEDURE out90days_PROC(his_rec_1 IN his_rec_TYPE
,his_rec_2 IN his_rec_TYPE)
IS
v_days_out NUMBER := 0;

BEGIN
C2_SIBT := 0;
C2_SIBT_Checked := 0;
IF his_rec_1.container IS NULL THEN
RAISE EXIT_EXCEPTION;
END IF;

IF his_rec_1.status_code <> global_PKG.EMPTY THEN
RAISE EXIT_EXCEPTION;
END IF;
IF his_rec_1.out_date IS NULL THEN
RAISE EXIT_EXCEPTION;
END IF;

IF his_rec_1.out_mode <> global_PKG.TRUCK AND his_rec_1.out_mode <> global_PKG.RAIL THEN
RAISE EXIT_EXCEPTION;
END IF;

IF his_rec_1.in_mode = 'V' THEN
RAISE EXIT_EXCEPTION;
END IF;

IF his_rec_1.area = 'F' THEN
IF C1_SOBT > 0 OR C1_SOBT_CHECKED = 0 AND is_FLEET_Container_SIOBT(his_rec_1.container_id, global_pkg.STRING_LINE_ENTRY_OBT) > 0
THEN
C1_SOBT := 1;
RAISE EXIT_EXCEPTION;
END IF;
ELSE

IF C1_SOBT > 0 OR C1_SOBT_CHECKED = 0 AND is_TRIPOLI_SIOBT_FUNC(his_rec_1.container_id,global_PKG.STRING_LINE_ENTRY_OBT) > 0
THEN
C1_SOBT := 1;
RAISE EXIT_EXCEPTION;
END IF;
END IF;

IF his_rec_1.container = his_rec_2.container THEN

IF his_rec_2.area = 'F' THEN
IF is_FLEET_Container_SIOBT(his_rec_2.container_id,global_PKG.STRING_LINE_ENTRY_IBT) > 0 THEN
C2_SIBT := 1;
C2_SIBT_Checked := 1;
RAISE EXIT_EXCEPTION;
END IF;
C2_SIBT_Checked := 1;
ELSE

IF is_TRIPOLI_SIOBT_FUNC(his_rec_2.container_id, global_PKG.STRING_LINE_ENTRY_IBT) > 0 THEN
C2_SIBT := 1;
C2_SIBT_Checked := 1;
RAISE EXIT_EXCEPTION;
END IF;
C2_SIBT_Checked := 1;
END IF;

v_days_out := TRUNC(his_rec_2.in_date) - TRUNC(his_rec_1.out_date);
IF v_days_out > 90 THEN
write_to_temp_PROC(his_rec_1, '1', v_days_out, his_rec_2.in_date, his_rec_2.in_mode, his_rec_2.in_visit);
END IF;
ELSE
v_days_out := TRUNC(SYSDATE) - TRUNC(his_rec_1.out_date);
IF v_days_out > 90 THEN
write_to_temp_PROC(his_rec_1, '1', v_days_out, NULL, NULL, NULL);
END IF;
END IF;
EXCEPTION
WHEN EXIT_EXCEPTION THEN
NULL;
WHEN OTHERS THEN
ROLLBACK;
RAISE_APPLICATION_ERROR(-20006,'Error in nvrm_PKG.out90days_PROC - '||SQLERRM);
END out90days_PROC;


PROCEDURE scenario2_PROC(his_rec_1 IN his_rec_TYPE
,his_rec_2 IN his_rec_TYPE)
IS
BEGIN
C2_SOBT := 0;
C2_SOBT_Checked := 0;
IF his_rec_2.status_code = global_PKG.EMPTY AND his_rec_2.out_date IS NOT NULL THEN

IF his_rec_2.area = 'F' THEN
IF C2_SIBT > 0 OR C2_SIBT_Checked = 0 AND is_FLEET_Container_SIOBT(his_rec_2.container_id,global_PKG.STRING_LINE_ENTRY_IBT) > 0
THEN -- is_Container_SIBT(his_rec_2.container_id) > 0
C2_SIBT := 1 ;
C2_SIBT_Checked := 1;
RAISE EXIT_EXCEPTION;
END IF;
C2_SIBT_Checked := 1;

IF is_FLEET_Container_SIOBT(his_rec_2.container_id,global_PKG.STRING_LINE_ENTRY_OBT) > 0
THEN
--is_Container_SOBT(his_rec_2.container_id) > 0 THEN
C2_SOBT := 1;
C2_SOBT_Checked := 1;
RAISE EXIT_EXCEPTION;
END IF;
C2_SOBT_Checked := 1;
ELSE

IF C2_SIBT > 0 OR C2_SIBT_Checked = 0 AND is_TRIPOLI_SIOBT_FUNC(his_rec_2.container_id, global_PKG.STRING_LINE_ENTRY_IBT) > 0 THEN
C2_SIBT := 1;
C2_SIBT_Checked := 1;
RAISE EXIT_EXCEPTION;
END IF;
C2_SIBT_Checked := 1;
IF is_TRIPOLI_SIOBT_FUNC(his_rec_2.container_id,global_PKG.STRING_LINE_ENTRY_OBT) > 0 THEN
C2_SOBT := 1;
C2_SOBT_Checked := 1;
RAISE EXIT_EXCEPTION;
END IF;
C2_SOBT_CHECKED := 1;
END IF;
IF his_rec_1.container = his_rec_2.container THEN
IF (his_rec_1.out_mode <> global_PKG.TRUCK AND his_rec_1.out_mode <> global_PKG.RAIL) AND
(his_rec_2.in_mode = global_PKG.TRUCK OR his_rec_2.in_mode = global_PKG.RAIL) AND
(his_rec_2.out_mode <> global_PKG.VESSEL) AND
(his_rec_1.out_date IS NOT NULL AND his_rec_2.out_date IS NOT NULL) AND
(his_rec_2.out_of_service = 0 AND his_rec_2.stripped_date IS NULL AND his_rec_2.stuffed_date IS NULL) THEN
write_to_temp_PROC(his_rec_2, '2');
END IF;
ELSE
IF (his_rec_2.in_mode = global_PKG.TRUCK OR his_rec_2.in_mode = global_PKG.RAIL) AND
(his_rec_2.out_mode <> global_PKG.VESSEL) AND
(his_rec_2.out_of_service = 0 AND his_rec_2.stripped_date IS NULL AND his_rec_2.stuffed_date IS NULL) THEN
write_to_temp_PROC(his_rec_2, '2');
END IF;
END IF;
ELSE
RAISE EXIT_EXCEPTION;
END IF;
EXCEPTION
WHEN EXIT_EXCEPTION THEN
NULL;
WHEN OTHERS THEN
ROLLBACK;
RAISE_APPLICATION_ERROR(-20006,'Error in nvrm_PKG.scenario2_PROC - '||SQLERRM);
END scenario2_PROC;

Tom Kyte
May 09, 2006 - 9:56 am UTC

well, you have lots of procedural code in there - can YOU turn that into a sql statement - then you could use a scalar subquery or maybe a case statement in place of the function calls and just "select" out the data.
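
As a starting point, each of those boolean helper functions can usually be folded into the driving query as a CASE over an EXISTS subquery, something along these lines (LINE_ENTRIES and its columns are hypothetical stand-ins for whatever is_FLEET_Container_SIOBT actually queries):

select c.container, c.container_id,
       case
          when exists ( select null
                          from line_entries le
                         where le.container_id = c.container_id
                           and le.entry_type   = 'OBT' )
          then 1
          else 0
       end as sobt_flag
  from inv_containers c;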

follow up

A reader, May 10, 2006 - 10:11 pm UTC

how can I begin to turn this into "sql"...I am not looking for the whole thing but how to begin ...

Thanks ;)

Tom Kyte
May 11, 2006 - 9:01 am UTC

too large for me to really look at, digest and answer in the review/followup here. sorry.

Pass sql%bulk_exceptions as parameter?

Rich, May 17, 2006 - 12:03 pm UTC

Thank you very much for your articles on bulk exception handling.

We have been able to use your examples for handling bulk exceptions using SAVE EXCEPTIONS and SQL%BULK_EXCEPTIONS. It works very well in our procedures.

My question is: would it be possible to pass the
SQL%BULK_EXCEPTIONS as a parameter to another procedure? We now have several procedures that use this exception handling and we would like to create a common error-handling routine. I guess my problem is that I don't know what datatype to use when passing the SQL%BULK_EXCEPTIONS collection.

Many thanks for all your help.

Tom Kyte
May 18, 2006 - 9:51 am UTC

the datatype for that table is not "exported" for our use. The only supported way to approach this will be to move the data into a table of your own.
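
A minimal sketch of that approach - all of the names here are illustrative:

create or replace package bulk_err_pkg
as
   type err_rec is record ( error_index pls_integer, error_code pls_integer );
   type err_tab is table of err_rec index by pls_integer;

   procedure log_errors( p_module in varchar2, p_errors in err_tab );
end bulk_err_pkg;
/
create or replace package body bulk_err_pkg
as
   procedure log_errors( p_module in varchar2, p_errors in err_tab )
   is
   begin
      for i in 1 .. p_errors.count
      loop
         -- swap the dbms_output call for an insert into your own error table
         dbms_output.put_line( p_module || ' row ' || p_errors(i).error_index ||
                               ': ' || sqlerrm( -p_errors(i).error_code ) );
      end loop;
   end log_errors;
end bulk_err_pkg;
/

In each FORALL exception handler you would then loop over SQL%BULK_EXCEPTIONS, copy ERROR_INDEX and ERROR_CODE into an err_tab, and hand that collection to log_errors.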

How?

Yuan, August 16, 2006 - 11:10 am UTC

Earlier in this thread, you stated:

<quote>With 9i, it is nice cause you can have a
multi-table insert so that rows that cannot be inserted into t could be logged
into a "bad" table.<quote>

Please give an example of how to do this. I have tried searching for one to no avail.

Tom Kyte
August 16, 2006 - 11:30 am UTC

http://asktom.oracle.com/Misc/how-cool-is-this.html
http://asktom.oracle.com/Misc/how-cool-is-this-part-ii.html

for 10gR2 of course...


ops$tkyte%ORA9IR2> create table t ( x varchar2(20) );
 
Table created.
 
ops$tkyte%ORA9IR2> create table good ( x varchar2(10) );
 
Table created.
 
ops$tkyte%ORA9IR2> create table bad ( x varchar2(4000) );
 
Table created.
 
ops$tkyte%ORA9IR2> insert into t values ( 'helloWorld' );
 
1 row created.
 
ops$tkyte%ORA9IR2> insert into t values ( rpad('*',20,'*') );
 
1 row created.
 
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select * from t;
 
X
--------------------
helloWorld
********************
 
ops$tkyte%ORA9IR2> insert
  2    when (length(x) <= 10) then
  3      into good(x) values (x)
  4    else
  5      into bad(x) values(x)
  6  select * from t;
 
2 rows created.
 
ops$tkyte%ORA9IR2> select * from good;
 
X
----------
helloWorld
 
ops$tkyte%ORA9IR2> select * from bad;
 
X
-------------------------------------------------------------------------------
********************
 

How? (Continued)

Yuan, August 16, 2006 - 12:31 pm UTC

10g is what I want, because there are multiple constraints to worry about on top of the length considerations. Unfortunately, I'm working on 9i for this project, so I guess I'll stick with BULK operations to log the errors. Using your 9i solution, I'd have to type all the constraint conditions and maintain this insert for every constraint change.

Thanks anyway.

Strange error

Ram, January 02, 2008 - 5:35 am UTC

Hi Tom,

I'm facing a strange issue here in my code.

I have a collection variable which is a TABLE%ROWTYPE .
I'm assigning the values to the required columns of collection variable.
The table also has an audit column (INSERT_DATE) which is defaulted to SYSDATE.

When I'm trying to insert the collection into table I get ORA-01400: cannot insert NULL into () error.

When I tried printing ALL the NOT NULL values I could see that the INSERT_DATE column in the collection is NULL (which is obvious, as I'm not setting it there). But then why was the default SYSDATE value not inserted instead of throwing an error?

I'm using a FORALL statement:

FORALL k IN l_valid_rec.FIRST..l_valid_rec.LAST
SAVE EXCEPTIONS
INSERT INTO <TABLE>
VALUES l_valid_rec(k) ;


Interestingly, when I manually assign the value to the collection variable, it does work!

When I did:

l_valid_rec(w).INSERT_DATE := SYSDATE;

my code worked absolutely fine; however, if I comment out this line I get the ORA-01400 error.

I don't understand why the NULL value for the INSERT_DATE column is not treated as SYSDATE while inserting, since the default is there for this column in the table definition.

Can you please advise where I'm going wrong? Am I missing anything?

Regards
Ram
Tom Kyte
January 02, 2008 - 11:20 am UTC

a default is used in the absence of any supplied value. NULL is in fact a supplied value. We only default a value when that value is NOT mentioned in the insert at all.

so, if you have an insert that references column "C" (as you do), then the default value for "C" would not be used (unless of course the values clause referenced the keyword DEFAULT, but it isn't, it is referencing a bind value)


Defaults only apply when the column is not referenced at all - you are supplying a value for this column and that value is NULL.
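
A quick illustration of the point with a throwaway table (not the poster's):

create table t_def ( x varchar2(20), insert_date date default sysdate );

-- a NULL value is explicitly supplied: the row gets a NULL insert_date
insert into t_def ( x, insert_date ) values ( 'explicit null', null );

-- the keyword DEFAULT asks for the column default: the row gets SYSDATE
insert into t_def ( x, insert_date ) values ( 'keyword default', default );

-- the column is not referenced at all: the default kicks in as well
insert into t_def ( x ) values ( 'column omitted' );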

NULLs

SeánMacGC, January 02, 2008 - 11:50 am UTC

Tom said:
"Defaults only apply when the column is not referenced at all - you are supplying a value for this column and that value is NULL."

*********************

Except that NULLs are NOT values at all, of course, otherwise we could legitimately say something like 'NULL = NULL', and get data back ;o)
Tom Kyte
January 02, 2008 - 2:41 pm UTC

when do you supply a default value?

when you don't supply any value.

NULL is the value you supplied. Period.


NULL = NULL does not apply here - the otherwise doesn't count. NULL is just a special value that represents the absence of a value (and default should not kick in).

NULLs

SeánMacGC, January 02, 2008 - 3:31 pm UTC

Sorry, I'll never buy that NULL is a value, it's the very opposite, the ABSENCE of a value, an informational void/vacuum. What is supplied here is an instruction to explicitly use 'THE ABSENCE OF A VALUE' for this column, hence the default will not be invoked.

Tom Kyte
January 02, 2008 - 3:48 pm UTC

if you supply the 'value' NULL for a column and we overrode that with the default - how could you ever create a record with a null value?

a null value - a value of null for a column - is a value, the value of that column, that columns value is "NULL" which in database speak is the "absence of a value"

but that column absolutely has an application specified value - OF NULL

insert into t ( pk, c ) values ( seq.nextval, null );

is specifying "I would like that row to have c have a value of NULL"

whereas:

insert into t (pk) values ( seq.nextval );

is specifying "I would like that row to have c be assigned it's default value, which by default is NULL but could be different"


You can argue semantics on this one, but "we have assigned a null value" to that column.

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14220/glossary.htm#sthref4174

http://www.oracle.com/pls/db102/search?remark=advanced_search&word=%22NULL+VALUE%22&format=ranked&book=&preference=

http://www.google.com/search?q=%22null+value%22


NULLs

SeánMacGC, January 02, 2008 - 4:39 pm UTC

Nope, still not buying it. NULL as 'NULL value' to me is nonsensical, a complete contradiction in terms, something akin to 'Vacuum Plenum'; NULL is the 'absence of a value', and therefore can never mean the total reverse of that under any circumstances. That it is so generally accepted as a valid expression (within the SQL DBMS sphere) does not lend it semantic legitimacy, and it's a big, big piece of semantics to me, which is seriously deficient.

None of that's to say there shouldn't be an ACTUAL value that would represent a NULL, which would then legitimately be a 'NULL value'.

Tom Kyte
January 02, 2008 - 4:59 pm UTC

you don't have to "buy" anything

we insert a null VALUE.

that value is to be interpreted by us humans as "absence of a value", but it is a NULL VALUE.

it is like NAN - not a number, that is a value, that can be placed into a float/double - it is a value - NAN is - but it is not a number...

You are free to parse it however you like, but the conventional terminology (as pointed to - heck - null value is in the glossary) is the conventional terminology.

http://www.google.com/search?q=define%3A%22null+value%22&btnG=Search



http://www.mcaggis.com/Glossary.html#N
null value
The absence of a value. If a particular column of a row in a table is null, that means there is no value stored. Null is not the same as blank or zero.

http://www.5starsupport.com/glossary/n.htm
Null Value:
In computer programming, this represents something of no value. A null value could also indicate that the value for a row is either missing or not known. Placing a zero in a row would not be representative of a null value because zero is a value.



NULLs

SeánMacGC, January 02, 2008 - 5:42 pm UTC

Glossaries or no, NULL as 'NULL value' is a logical impossibility (NAN is not directly comparable), just unfortunate that so many have bought into this computing piece of self-contradictory terminology.

We'll be agreeing to differ :o)
