BULK COLLECT - SAVE EXCEPTIONS - how to know the exact row

Question and Answer

Tom Kyte

Thanks for the question, Jaya.

Asked: January 18, 2009 - 12:49 am UTC

Answered by: Tom Kyte - Last updated: July 17, 2020 - 2:23 pm UTC

Category: Database - Version: 10.2.0

Viewed 100K+ times!

You Asked

Hi Tom,

I am working on a data warehouse project and using BULK COLLECT with SAVE EXCEPTIONS to log errors while loading data from source to destination tables. I loop through SQL%BULK_EXCEPTIONS and insert the error index and error code with its description into my own error table
(e.g. SQL%BULK_EXCEPTIONS(j).ERROR_INDEX and SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE)).

Logging the array index does not help me: by the time the support person queries my error log table, takes the error index, and goes to check the source table, he might not be looking at the same erroneous row at all. The source table might have had many updates and deletes in that time span, so trying to match the error index to a ROWNUM will be mostly futile.


What I need is a way to capture the primary key value of the source table while using BULK COLLECT ... SAVE EXCEPTIONS.

Or do I have to go back to row-by-row CURSOR processing just for this purpose and compromise on execution time?


Please help.
Thanks
Jaya

and we said...

why not use dml error logging instead?

You have the primary key - you have the index into your array of values you were using in the forall (you use save exceptions with forall, not with bulk collect). The array you were processing with "forall" has this information.

you will have code that looks like the following - the insert into the error table, which uses the error index to pull the failed row's values (including its primary key) out of the array, is the part you are looking for. I was loading FROM a table that is a copy of all objects TO a table that is a copy of all objects:

declare
    cursor C is
    select ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
           decode( mod(rownum,100000), 1, rpad('*',20,'*'), OBJECT_TYPE ) object_type,
           CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY
      from big_table;
    type array is table of c%rowtype;
    l_data array;

    dml_errors EXCEPTION;
    PRAGMA exception_init(dml_errors, -24381);
    l_errors number;
    l_errno    number;
    l_msg    varchar2(4000);
    l_idx    number;
begin
    open c;
    loop
        fetch c bulk collect into l_data limit 100;
        begin
            forall i in 1 .. l_data.count SAVE EXCEPTIONS
                insert into t2 values l_data(i);
        exception
            when DML_ERRORS then
                l_errors := sql%bulk_exceptions.count;
                for i in 1 .. l_errors
                loop
                    l_errno := sql%bulk_exceptions(i).error_code;
                    l_msg   := sqlerrm(-l_errno);
                    l_idx   := sql%bulk_exceptions(i).error_index;
                    insert into err$_t2
                    ( ora_err_number$, ora_err_mesg$, ora_err_optyp$,
                      ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID,
                      DATA_OBJECT_ID, object_type, CREATED, LAST_DDL_TIME,
                      TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY )
                    values
                    ( l_errno, l_msg, 'I',
                      l_data(l_idx).ID,
                      l_data(l_idx).OWNER,
                      l_data(l_idx).OBJECT_NAME,
                      l_data(l_idx).SUBOBJECT_NAME,
                      l_data(l_idx).OBJECT_ID,
                      l_data(l_idx).DATA_OBJECT_ID,
                      l_data(l_idx).object_type,
                      l_data(l_idx).CREATED,
                      l_data(l_idx).LAST_DDL_TIME,
                      l_data(l_idx).TIMESTAMP,
                      l_data(l_idx).STATUS,
                      l_data(l_idx).TEMPORARY,
                      l_data(l_idx).GENERATED,
                      l_data(l_idx).SECONDARY );
                end loop;
        end;
        exit when c%notfound;
    end loop;
    close c;
end;
/



but the funny thing is that code can be replaced with:


exec dbms_errlog.create_error_log('T1');


insert /*+ append */
  into t1
select ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
       decode( mod(rownum,100000), 1, rpad('*',20,'*'), OBJECT_TYPE ) object_type,
       CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY
  from big_table
   LOG ERRORS REJECT LIMIT UNLIMITED;
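
The rejected rows land in the ERR$_ table that dbms_errlog created, alongside the standard ORA_ERR_* control columns. A quick sketch of reading them back (using the columns of T1 from the statement above):

```sql
-- each rejected row is stored with the Oracle error number/message and
-- all of its column values, so the primary key of the failed row is
-- right there in the error table:
select ora_err_number$, ora_err_mesg$, id, owner, object_name
  from err$_t1;
```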




you might look into that....

http://asktom.oracle.com/Misc/how-cool-is-this.html
http://asktom.oracle.com/Misc/how-cool-is-this-part-ii.html


Reviews

June 01, 2010 - 2:35 pm UTC

Reviewer: Jignesh from Ashburn, VA

I like the LOG ERRORS option; the only thing is that /*+ append */ causes it to ignore all PK errors.
Any workaround for that?

Tom Kyte

Followup  

June 08, 2010 - 8:54 am UTC

the "workaround" is: do not use /*+ append */

the problem is, we do not detect duplicates until AFTER the append has happened, until AFTER the insert is done

That is part of the magic of append - what can (in some circumstances) make it more efficient.

New rows are loaded above the high water mark.

A mini index for these new rows is created in temp.

This mini index is merged into the real index after all of the rows are loaded - in bulk, so the index maintenance is done in bulk not slow by slow.


that last part is why it cannot detect duplicates at the row level during an insert append - the index used to detect duplicates is not maintained during the load, only at the end.
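
A minimal sketch of that behavior (table name invented): with a direct-path load the duplicate surfaces only when the statement completes, and DML error logging cannot divert it; drop the hint and the same duplicate is quietly logged:

```sql
create table t ( x number primary key );
exec dbms_errlog.create_error_log('T')

insert into t values ( 1 );
commit;

-- direct path: ORA-00001 is raised when the statement ends, and the
-- duplicate row is NOT written to err$_t
insert /*+ append */ into t
select 1 from dual
  log errors reject limit unlimited;

-- conventional path: the duplicate is diverted to err$_t and the
-- statement succeeds
insert into t
select 1 from dual
  log errors reject limit unlimited;
```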

Multiple Errors per row

August 29, 2010 - 11:43 pm UTC

Reviewer: Anamika from India

Hi Tom,

This(dbms_errlog approach) is really a very good approach. Thanks for sharing.

But we also have a requirement to capture "multiple errors per row". E.g. considering the above example, if OWNER and OBJECT_NAME are both mandatory columns, and SUBOBJECT_NAME is defined as VARCHAR2(30), is there a way to capture multiple errors per row while inserting the data?

So, if I insert a row with OWNER as NULL, OBJECT_NAME as NULL, and SUBOBJECT_NAME with 35 characters, then 3 constraints will be violated. Is there a way to capture all 3 errors in one go?

Also, what do you suggest? Should I implement this in Oracle, or should I use some other solution (e.g. Perl) to handle this type of requirement?

Thanks,
Anamika
Tom Kyte

Followup  

September 09, 2010 - 9:00 am UTC

you cannot, you will have to change the requirement. Once a constraint is found to be violated - the error is thrown.

do not do this in perl, the row failed - log it, and continue. Doing this in perl or any other language will increase your run time by orders of magnitude.

New Question?

September 01, 2010 - 3:58 am UTC

Reviewer: Anamika from India

Hi Tom,

Should I raise this (the above comment about multiple errors per row) as a new question?


Anamika
Tom Kyte

Followup  

September 09, 2010 - 11:54 am UTC

see above.

maybe you can pivot the data and insert

September 09, 2010 - 12:57 pm UTC

Reviewer: Ajeet

Anamika,

One way would be to pivot your data - convert the 3 columns into 3 rows - and then you would be able to capture all 3 violations while inserting the data. A little extra effort there: maybe a staging (intermediate) table, then report the errors out of the error log table and pivot them back - 3 rows to 3 columns.

Just a thought however

Ajeet
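
A rough sketch of that idea (all names invented): unpivot each candidate row into one row per attribute, push those through a staging table whose check constraint encodes the per-attribute rules, and let LOG ERRORS collect one error-log row per violated rule:

```sql
create table stage_attrs (
    row_id     number,
    attr_name  varchar2(30),
    attr_value varchar2(4000),
    -- one branch per attribute rule; a row violating its rule fails the check
    constraint stage_attrs_chk check (
        ( attr_name in ('OWNER','OBJECT_NAME') and attr_value is not null )
        or ( attr_name = 'SUBOBJECT_NAME' and length(attr_value) <= 30 )
    )
);
exec dbms_errlog.create_error_log('STAGE_ATTRS')

insert into stage_attrs (row_id, attr_name, attr_value)
select id, 'OWNER',          owner          from src union all
select id, 'OBJECT_NAME',    object_name    from src union all
select id, 'SUBOBJECT_NAME', subobject_name from src
  log errors reject limit unlimited;

-- err$_stage_attrs now holds one row per violated attribute per source
-- row; pivot it back by row_id to report all errors for a row together
```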

data profiling/quality tools

September 10, 2010 - 2:10 pm UTC

Reviewer: Ajeet

The requirement above is more suitable for a data quality/profiling tool, which people generally use in a DW environment - but yes, it would be a nice feature to have; such requirements come up many times in DW projects.

Thanks

Thanks

September 22, 2010 - 2:40 am UTC

Reviewer: Anamika from India

Hi Tom,
Thanks for the confirmation. But yes, it's a nice feature to have!

Also, can you introduce some notification feature for your AskTOM site, so that whenever you respond, an email is automatically sent to the person who raised the question? This is also a nice feature to have :))

Ajeet,
Thanks for your inputs


Anamika

Question of slowness regarding DML error logging

November 16, 2010 - 9:36 am UTC

Reviewer: A reader from USA

Tom:

exec dbms_errlog.create_error_log('T1');


insert /*+ append */
into t1
select ID, OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
decode( mod(rownum,100000), 1, rpad('*',20,'*'), OBJECT_TYPE ) object_type,
CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY, GENERATED, SECONDARY
from big_table
LOG ERRORS REJECT LIMIT UNLIMITED;


If I do not use the /*+ append */ hint, the amount of time it takes to insert 30 million records is 20 times more, because I see latch contention and tons of redo and undo being generated. Do I have to do it programmatically?

Thanks,
Tom Kyte

Followup  

November 17, 2010 - 5:35 am UTC

20 times more seems unreasonable.

what latching issues do you see? (that implies contention with other users - what kind)

are you seeing waits on redo generation (in general, that shouldn't cause you to be waiting, it is generated and then written in the background by lgwr, you don't wait for that)?

why not use append if you can?

if you do it "programmatically" - what approach will you take that you think will be superior to a single sql statement?

Got it working

November 16, 2010 - 9:11 pm UTC

Reviewer: A reader

Tom:

Please ignore my previous question posted on 16-Nov-2010. I got mine working.

Thanks,

November 17, 2010 - 12:38 pm UTC

Reviewer: A reader

Tom:

I have 9 processes writing to a process table with a primary key constraint. I cannot use append. We are not on 11gR2. I have "latch: cache buffers chains" waits, and after a while the system freezes. How can I catch unique index exceptions effectively without FORALL ... SAVE EXCEPTIONS?
Tom Kyte

Followup  

November 18, 2010 - 3:13 am UTC

why do you believe (erroneously) that taking a nice bulky operation and making it go slow by slow in bulk (better than just slow by slow - but not as good as insert as select) will help?


cache buffers chains latches means you have a ton of people going after the same blocks at the same time. You have contention in the buffer cache - what you would expect if everyone is hitting the same stuff (which they are).

The system should not be freezing - can you define what you mean by freezing?


what is your primary key made up of (and why are you expecting to hit duplicates if they are loading - what is the logic behind your key generation???)

Exception Bulk Collect Select

July 04, 2012 - 5:27 am UTC

Reviewer: Manas

CREATE OR REPLACE TYPE t_account_mapping as object(
key VARCHAR2(50 CHAR),
close_date DATE,
first_name VARCHAR2(255 CHAR),
num VARCHAR2(50 CHAR),
legal_entity VARCHAR2(50 CHAR),
classification_cd VARCHAR2(20 CHAR),
open_date DATE,
other_name VARCHAR2(50 CHAR),
status_cd VARCHAR2(20 CHAR),
type_cd VARCHAR2(20 CHAR),
non_address NUMBER(1),
r_level VARCHAR2(50 CHAR),
s_cd VARCHAR2(50 CHAR),
c_factor_2 VARCHAR2(255 CHAR),
c_factor_3 VARCHAR2(50 CHAR),
cf_1 VARCHAR2(50 CHAR),
cf_2 VARCHAR2(50 CHAR),
r_cd VARCHAR2(20 CHAR));

CREATE TYPE account_mapping_t AS TABLE OF t_account_mapping;

Now the package body is something like-

Create or replace Package body
as
...
procedure proc1()
as
.
SELECT t_account_mapping(
key,
DECODE(......),
COALESCE(.......),
SUBSTR(...),
CASE .....,
NVL(...)'
.
.)
BULK COLLECT INTO l_account_mapping_t
FROM table_name
WHERE <<conditions>>;
..
proc2(l_account_mapping_t);
proc3(l_account_mapping_t)
..
end proc1;

procedure proc2(l_account_mapping_t)
do a MERGE;
end proc2();

procedure proc3(l_account_mapping_t)
do a INSERT;
end proc3();
end package;


The issue is that when the size of one of the columns returned by the SELECT is larger than that of the corresponding attribute in the object, the complete SELECT fails.
We want those error rows to be captured and the rest of the statement to be processed.
As you have stated above, bulk exceptions are of no help here. Even though we'll be doing a MERGE later on, we cannot use a direct

MERGE INTO
...
SQL statement (with DML error logging)

because we want multiple processing steps based on the data, and MERGE does not support the RETURNING clause. Please provide a workaround.
Tom Kyte

Followup  

July 05, 2012 - 7:12 am UTC

sorry, you'll not be able to do that in a single sql statement.

you can add where conditions to assure the lengths are appropriate to get the good rows, process them.

then you can run the sql again with the where condition for the lengths reversed with a NOT.



Or - you could just throw out the procedural code altogether and use DML error logging (my choice)
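
A sketch of the two-pass split (the specific length predicates here are illustrative; use whatever limits your object's attributes impose):

```sql
-- pass 1: only the rows that fit the object's attribute sizes
SELECT t_account_mapping( key, ... )
  BULK COLLECT INTO l_account_mapping_t
  FROM table_name
 WHERE <<conditions>>
   AND length(first_name) <= 255
   AND length(num) <= 50;

-- pass 2: the remainder, routed to your error handling
SELECT key, first_name, num
  FROM table_name
 WHERE <<conditions>>
   AND NOT ( length(first_name) <= 255 AND length(num) <= 50 );
```

One caveat: if first_name or num can be NULL, both length predicates evaluate to unknown and the row falls through both passes; guard with something like length(coalesce(first_name, ' ')) <= 255 if NULLs are possible.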

Memory usage of arrays of IS TABLE OF %ROWTYPE

July 19, 2012 - 4:32 am UTC

Reviewer: Robert Graf from AUSTRIA, Vienna

Hi Tom,

how much memory will be consumed by an array of structures? Is there a way to find out?

We receive a lot of data files and have to import the data into our database design.

We are using PL/SQL with BULK SELECT (cursor) and BULK MERGE.

If a table has e.g. 30 columns but I only have to fill e.g. 3 columns (due less data in a file) which option is better?

1)

TYPE nt_fct IS TABLE OF TABLE_X%ROWTYPE;

2)

TYPE rt_fct IS RECORD
(
ISIN TABLE_X.ISIN%TYPE,
MARKET TABLE_X.MARKET%TYPE,
CURRENCY TABLE_X.CURRENCY%TYPE
);

TYPE nt_fct IS TABLE OF rt_fct;

E.g. if the BULK LIMIT is 1000, I would have 1000 elements of the type nt_fct.


Does option 1) consume a lot of memory for the columns I don't need/use?

E.g. if the other 27 columns were VARCHAR2(30) - would there be reserved memory of about ~ 27*30*1000 bytes "for nothing"?

I want to use ONE system for all the hundreds of imports and tables we have. Option 1) looks much cleaner in the source code, but if there were huge memory usage I would maybe choose option 2).

We also have tables with e.g. 150 columns and can only fill e.g. 30 columns.

What would you do or recommend?

Currently we are using 10gR2. Is there a difference in 11gR2 (hopefully we will have it by early 2013)?

Thanks a lot,
Robert

Tom Kyte

Followup  

July 19, 2012 - 12:24 pm UTC

forget about memory, option #2 seems to be the only correct approach. You won't be using select * (because you know better than to have production code with select * in it) so you'll be selecting just the three columns of interest - and therefore would need a structure suited to hold that.


option#1 looks bad to me in the source code, it is less clear as to what is going on, it makes it look like you need every column. You don't.

and you can make it "cleaner" this way:

ops$tkyte%ORA11GR2> declare
  2          cursor c is select empno, ename, job from scott.emp;
  3  
  4          type nt_fct is table of c%rowtype;
  5          l_data nt_fct;
  6  begin
  7          open c;
  8          fetch c bulk collect into l_data limit 1000;
  9          close c;
 10  end;
 11  /

PL/SQL procedure successfully completed.




since you are bulk collecting, I know you are defining your cursor so you can build a record type based on that.



Memory usage of arrays of IS TABLE OF %ROWTYPE

July 20, 2012 - 3:34 am UTC

Reviewer: Robert Graf from AUSTRIA, Vienna

Thanks a lot for the fast reply Tom,

for reading the data from the file (the loaded table) I have the cursor defined as you wrote, of course - only the columns I need for my process.

I read one row from the cursor and have to insert/update x rows into different tables (e.g. I get a char(8) with a possible "x" on each position and one position means available on stock exchange Stuttgart, 2nd position means Frankfurt, ...). So I create arrays and when they are full (bulk limit) they will be inserted or updated (deactivation), etc.

In this case I wrote about the definition of "our" designed database to use this record definition.

However - it makes no difference - I can also define just the small set of stored columns in a record type, which makes it "clean what will be used", but the source code becomes more "unreadable" (I get one file and have to fill 9 tables in this case - superb file delivery, haha!).

The single definition from "our" database table would reduce the amount of source code - but might use more memory (how is it implemented internally?). Which columns get inserted/updated I can see from the statement itself; I don't update all columns of a row (SET ROW).

I have to think about it. :-)

I want to ask a question

August 01, 2012 - 1:48 am UTC

Reviewer: Dipti from India

Thanks a lot for sharing such an important topic on SAVE EXCEPTIONS in bulk processing.
But I want to ask: is there any way to capture the row number that raised an exception during bulk processing?
Tom Kyte

Followup  

August 01, 2012 - 6:53 am UTC

since rownumbers only exist for a row in the context of a result set - they are not the number of a row in a table - what use would this be?


But the answer is "of course" - you have:

 l_idx   := sql%bulk_exceptions(i).error_index;



if you either


a) fetch rownum as part of your result, you'll have array(l_idx).row_num to use

b) count how many times you've iterated the fetch loop at the bottom of the loop, the rownum would be:

l_loop_count * l_array_fetch_size + l_idx;
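
A sketch combining both options against the fetch-loop shape used earlier in this thread (l_limit and l_batches are names I've made up):

```sql
declare
    -- option (a): carry rownum along in the result set
    cursor c is select rownum rn, t.* from big_table t;
    type array is table of c%rowtype;
    l_data    array;
    l_limit   constant pls_integer := 100;
    l_batches pls_integer := 0;        -- option (b): completed-fetch counter
begin
    open c;
    loop
        fetch c bulk collect into l_data limit l_limit;
        exit when l_data.count = 0;
        -- ... forall insert with save exceptions as before; inside the
        -- handler, for a failure at error_index l_idx the result-set row is
        --   option (a): l_data(l_idx).rn
        --   option (b): l_batches * l_limit + l_idx
        l_batches := l_batches + 1;    -- count at the bottom of the loop
    end loop;
    close c;
end;
/
```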



Exception Capture and Rollback

June 25, 2013 - 9:15 pm UTC

Reviewer: Robert from St Louis, MO

Tom,

When I do INSERT INTO <table> SELECT * FROM <table>, I need to capture the error rows in the error table and at the same time roll back the entire transaction in case of any error.
How can I achieve this without querying the whole error table, because it already contains old records as well?

Thanks a lot.



Tom Kyte

Followup  

July 01, 2013 - 6:45 pm UTC

you can tag the rows

http://docs.oracle.com/cd/B28359_01/server.111/b28286/statements_9014.htm#SQLRF20000

so you can identify which error rows are "yours", just generate a unique tag for your session
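
A sketch of how that fits the requirement (the tag format is invented; note the error-log rows are written in an autonomous transaction, so they survive the rollback):

```sql
insert into t1
select * from source_table
  log errors into err$_t1 ('run_' || sys_context('userenv','sid'))
  reject limit unlimited;

-- if anything was rejected, undo the whole load ...
rollback;

-- ... then read back just this session's errors - the log rows remain
select ora_err_number$, ora_err_mesg$, ora_err_tag$
  from err$_t1
 where ora_err_tag$ = 'run_' || sys_context('userenv','sid');
```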

Performance of Exception Block

March 04, 2014 - 8:51 pm UTC

Reviewer: A reader

Tom,

I am trying to move data from our data warehouse into a reporting database, and I am having a hard time deciding on the best approach to move data over. I am looking at the example you offered in the first post of this thread and trying to figure out if I should use that method in the solution.

We have an ETL process that populates a set of staging tables, and then we have a process that will read these staging tables and move the data into the "real" tables. The simplest example we have is taking a single staging table and writing to a single "real" table. The more complex examples will read from multiple staging tables, potentially from multiple systems that see the data "differently", transform it, then write the data to multiple "real" tables. For example, in one system, our CUSTOMER entity is an actual table that stores individual customers, while in another system our CUSTOMER is actually just attributes as part of a transaction, so we have to match the data through IDs, etc. Staging table A will have a single instance of a customer, while staging table B could have many instances, but we want to get down to 1 customer and relate the data. I have the queries all figured out to boil each of these requests into single result sets, but we then need to process these rows efficiently, and for some of these tables we are talking millions of new records and deltas.


All of our processes have "if exists update, else insert" logic. I have messed around with a few bulk operation options, but I am not very well versed in them, so I have not found one that works well for me. If I were to use the example you provided with the FORALL block, try to insert the record, and then catch DUP_VAL_ON_INDEX in the exception block to do an update, will I negate the speed benefits provided by FORALL? Do I need one FORALL statement for the records that need to be inserted, and then a second FORALL statement for the records that need to be updated, to keep performance? Is there a better option (especially when I have child records involved; e.g. we have one staging table stored as "transactions", but we want an "aggregate" record above it in our system, so we write those records to two tables)?

Commit?

May 08, 2014 - 4:16 pm UTC

Reviewer: pranav

Hi Tom,

In your reply to the original question, is it statement level commit/rollback or it would commit all the successful records into the table?

RAISE with Bulk Exception Handling

July 14, 2020 - 4:37 pm UTC

Reviewer: Rob from USA

Hello Tom, I understand that while looping through the bulk exceptions (say, to log them) one should not use an explicit RAISE statement, because doing so will stop the looping through the FORALL errors. Is that correct? If so, is the only way to know of bulk errors to check the logged errors - otherwise the calling application wouldn't know any error occurred?

Thank you!

Script

July 14, 2020 - 8:28 pm UTC

Reviewer: Rob from USA

To be precise, in the following example script, where should I put RAISE to let the calling application know that there were some exceptions during bulk processing? Or do I have to write a procedure to mine the log table to see if there were any errors, and use RAISE_APPLICATION_ERROR in some way to raise an error?

Thank you!

--Data Prep

DROP TABLE t_a;
DROP TABLE t_b;

CREATE TABLE t_a
(
    id         NUMBER,
    id_desc    VARCHAR2 (40),
    name       VARCHAR2 (50)
);

CREATE TABLE t_b
(
    id         NUMBER,
    id_desc    VARCHAR2 (4),
    name       VARCHAR2 (5)
);

INSERT INTO t_a (id, id_desc, name)
     VALUES (1, 'W', 'WWW');

INSERT INTO t_a (id, id_desc, name)
     VALUES (2, 'WPOUYT', 'MMMM');

INSERT INTO t_a (id, id_desc, name)
     VALUES (3, 'LPO', 'WWWMLKOUY');

INSERT INTO t_a (id, id_desc, name)
     VALUES (4, 'X', 'XUYT');

INSERT INTO t_a (id, id_desc, name)
     VALUES (5, 'EO', 'EPOI');

INSERT INTO t_a (id, id_desc, name)
     VALUES (6, 'DS', 'NDJHKDL');

INSERT INTO t_a (id, id_desc, name)
     VALUES (7, 'VCV', 'VXCVBDDG');

INSERT INTO t_a (id, id_desc, name)
     VALUES (8, 'D', 'CV');

INSERT INTO t_a (id, id_desc, name)
     VALUES (9, 'R', 'FHB');

INSERT INTO t_a (id, id_desc, name)
     VALUES (10, 'QW', 'DFDF');

INSERT INTO t_a (id, id_desc, name)
     VALUES (11, 'WE', 'DF');

INSERT INTO t_a (id, id_desc, name)
     VALUES (12, 'ER', 'GHYJ');

INSERT INTO t_a (id, id_desc, name)
     VALUES (13, 'ETR', 'RTY');

INSERT INTO t_a (id, id_desc, name)
     VALUES (14, 'SER', 'FGFGDFGDGH');

INSERT INTO t_a (id, id_desc, name)
     VALUES (15, 'RTT', 'DFF');

INSERT INTO t_a (id, id_desc, name)
     VALUES (16, 'DF', 'DFE');

INSERT INTO t_a (id, id_desc, name)
     VALUES (17, 'YY', 'YJYJ');

INSERT INTO t_a (id, id_desc, name)
     VALUES (18, 'W', 'IIJK');

INSERT INTO t_a (id, id_desc, name)
     VALUES (19, 'KLOPU', 'K');

INSERT INTO t_a (id, id_desc, name)
     VALUES (20, 'D', 'JGJKKJ');

COMMIT;

--Test
SET SERVEROUTPUT ON SIZE UNLIMITED

DECLARE
    TYPE ntt_t_a IS TABLE OF t_a%ROWTYPE;

    ntv_t_a       ntt_t_a;

    CURSOR cur_t_a IS SELECT * FROM t_a;

    err_str       VARCHAR2 (4000);

    bulk_errors   EXCEPTION;
    PRAGMA EXCEPTION_INIT (bulk_errors, -24381); --ORA-24381: error(s) in array DML

    l_id          t_a.id%TYPE;
    l_name        t_a.name%TYPE;
    l_limit       NUMBER := 10;
BEGIN
    OPEN cur_t_a;

    LOOP
        FETCH cur_t_a BULK COLLECT INTO ntv_t_a LIMIT l_limit;

        EXIT WHEN ntv_t_a.COUNT = 0;

        BEGIN
            FORALL indx IN 1 .. ntv_t_a.COUNT SAVE EXCEPTIONS
                INSERT INTO t_b
                     VALUES ntv_t_a (indx);
        EXCEPTION
            WHEN bulk_errors
            THEN
                FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
                LOOP
                    err_str :=
                           'BULK EXCEPTION : Key columns : ID = '
                        || ntv_t_a (SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX).id
                        || ' : Error message = '
                        || SQLERRM (-SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);

                    DBMS_OUTPUT.put_line ('t_a_t_b : ' || err_str); -- instead, this will be logged in a global error table shared across processes
                END LOOP;
        END;
    END LOOP;

    SELECT id
      INTO l_id
      FROM t_a
     WHERE id_desc = 'WPOUYT';

    SELECT name
      INTO l_name
      FROM t_b
     WHERE id = l_id;
EXCEPTION
    WHEN OTHERS
    THEN
        DBMS_OUTPUT.put_line (
               't_a_t_b : OTHER EXCEPTION : Key Columns : ID = '
            || l_id
            || ', '
            || DBMS_UTILITY.format_error_stack
            || DBMS_UTILITY.format_error_backtrace); -- instead, this will be logged in a global error table shared across processes
        RAISE;
END;
/

COMMIT;

SELECT COUNT (*) FROM t_a;

SELECT COUNT (*) FROM t_b;

TRUNCATE TABLE t_b;

DELETE FROM t_b;

SELECT * FROM t_a
MINUS
SELECT * FROM t_b;

Chris Saxon

Followup  

July 15, 2020 - 9:43 am UTC

(re)raise the exception after the bulk exception handling loop:

EXCEPTION
    WHEN bulk_errors
    THEN
      FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
      LOOP
        ... 
      END LOOP;
      
      RAISE; -- raise here
  END;

(re)raising the bulk error masks some errors

July 15, 2020 - 3:36 pm UTC

Reviewer: Rob from USA

Thank you Chris for the answer! I tried what you suggested and I see an issue (this was the reason I posted in the first place)

With BULK COLLECT limit 1, no bulk error (re)raise, all errors get logged nicely

t_a_t_b : BULK EXCEPTION : Key columns : ID = 2 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : BULK EXCEPTION : Key columns : ID = 3 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : BULK EXCEPTION : Key columns : ID = 6 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : BULK EXCEPTION : Key columns : ID = 7 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : BULK EXCEPTION : Key columns : ID = 14 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : BULK EXCEPTION : Key columns : ID = 19 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : BULK EXCEPTION : Key columns : ID = 20 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : OTHER EXCEPTION : Key Columns : ID = 2, ORA-01403: no data
found
ORA-06512: at line 49

DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 62


---------------------------------------------------

With BULK COLLECT limit 1, and bulk error (re)raise, not all errors get logged - this is masking some errors while reporting some.

t_a_t_b : BULK EXCEPTION : Key columns : ID = 2 : Error message = ORA-12899:
value too large for column  (actual: , maximum: )
t_a_t_b : OTHER EXCEPTION : Key Columns : ID = , ORA-24381: error(s) in array
DML
ORA-12899: value too large for column "CMLAB1"."T_B"."ID_DESC" (actual: 6,
maximum: 4)
ORA-06512: at line 42

ERROR:
ORA-24381: error(s) in array DML
ORA-12899: value too large for column "CMLAB1"."T_B"."ID_DESC" (actual: 6,
maximum: 4)
ORA-06512: at line 64


How to log all errors in the sample code above while (re)raising the bulk error?

Thank you!
Chris Saxon

Followup  

July 17, 2020 - 8:51 am UTC

OK, so you want to keep fetching and process the next batch after DML errors?

When you reraise in the exception handler, it's not that you're masking some errors, it's that you never process those rows!

Process all the data before raising, set an error flag in the bulk errors exception handler. Then once you've finished processing the data, raise an exception:

    dml_errors boolean := false;
BEGIN
    OPEN cur_t_a;

    LOOP
        FETCH cur_t_a BULK COLLECT INTO ntv_t_a LIMIT l_limit;

        EXIT WHEN ntv_t_a.COUNT = 0;

        BEGIN
            FORALL indx IN 1 .. ntv_t_a.COUNT SAVE EXCEPTIONS
                INSERT INTO t_b
                     VALUES ntv_t_a (indx);
        EXCEPTION
            WHEN bulk_errors
            THEN 
              ...
              dml_errors := true;
        END;
    END LOOP;
    
    if dml_errors then
      raise_application_error ( -20001, 'DML errors' );
    end if;

Spot on!

July 17, 2020 - 1:15 pm UTC

Reviewer: Rob from USA

Thank you Chris! This is exactly what I needed :)
Chris Saxon

Followup  

July 17, 2020 - 2:23 pm UTC

Great :)
