  • FORALL Insert With Parallel DML Enabled


Question and Answer

Tom Kyte

Thanks for the question, Emerson.

Asked: April 16, 2004 - 11:08 am UTC

Last updated: January 13, 2006 - 11:08 am UTC

Version: 9.2.0.3.0

Viewed 10K+ times

You Asked

Hi Tom:

I have a question regarding the PL/SQL FORALL INSERT statement in Oracle 9i Enterprise Edition Release 9.2.0.3.0. When we enabled parallel DML as follows, we got an Oracle error (ORA-12838: cannot read/modify an object after modifying it in parallel):

-- Nested table z_tab has been populated with data

EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

FORALL i IN 1..z_tab.COUNT
INSERT /*+ PARALLEL(x) */ INTO table_x x (...)
SELECT ... FROM table_y y WHERE y.column_z = z_tab(i);

COMMIT;

EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';

It seems that enabling parallel DML with the PARALLEL hint cannot be used in a PL/SQL FORALL INSERT statement, although it works with FORALL UPDATE and DELETE statements.

I know that 'Enable Parallel DML' is basically used in plain SQL statements such as:

EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

INSERT /*+ PARALLEL(x) */ INTO table_x x (...)
SELECT ... FROM table_y WHERE ...;

COMMIT;

EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';


Even though it might not be a good idea to use 'Enable Parallel DML' with the FORALL INSERT statement, Oracle should not generate an error like ORA-12838. I'd like to know whether this is a feature or a bug in Oracle 9i Enterprise Edition Release 9.2.0.3.0.


Thanks,

Emerson


and Tom said...

this error naturally arises since the default mode of a parallel insert like that is a DIRECT PATH.

Once you DIRECT PATH into a table -- you cannot DIRECT PATH into it again without a commit.

Your forall is executing the insert N times (not just once) -- hence the second execute will *fail*.


You can use NOAPPEND in order to change the default, but that will disable the parallel mode.

It is a "feature", it is working as designed.


You would really want to execute:


INSERT /*+ PARALLEL(x) */ INTO table_x x (...)
SELECT ... FROM table_y y WHERE y.column_z IN
( select column_value from table( z_tab ) );
commit;

in any case.
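For the TABLE() approach to compile, the collection must be a SQL-level nested table type created with CREATE TYPE (a PL/SQL-declared type is not visible to the SQL engine). A minimal sketch, using a hypothetical element type and keeping the column lists elided as in the question:

```sql
CREATE TYPE z_tab_t AS TABLE OF VARCHAR2(30);
/

DECLARE
   z_tab z_tab_t := z_tab_t(...);  -- populated as in the question
BEGIN
   EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

   -- a single direct-path parallel insert, so ORA-12838 never arises
   INSERT /*+ PARALLEL(x) */ INTO table_x x (...)
   SELECT ... FROM table_y y
    WHERE y.column_z IN (SELECT column_value FROM TABLE(z_tab));

   COMMIT;
   EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';
END;
/
```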




Comments

FORALL Insert With Parallel DML Enabled

A reader, April 16, 2004 - 4:22 pm UTC

It's good to know this is a "feature". Thank you for providing us with alternative solutions.

FORALL Update/Delete When Parallel DML Enabled

A reader, April 16, 2004 - 11:08 pm UTC

Hi Tom,

Could you please comment on why the following FORALL Update and Delete statements with Parallel DML Enabled worked fine without the error ORA-12838:

EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

FORALL i IN 1..x_tab.COUNT
UPDATE /*+ PARALLEL(x) */ table_x x (...)
SET x.column_x = x_tab(i);

COMMIT;

EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';

EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

FORALL i IN 1..x_tab.COUNT
DELETE /*+ PARALLEL(x) */ table_x x (...)
WHERE x.column_x = x_tab(i);

COMMIT;

EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';

It seemed that Oracle processed the FORALL DML statements differently (i.e., not consistently).

Thank you in advance.



Tom Kyte
April 17, 2004 - 5:13 pm UTC

there are optimizations to allow for batching. if you tkprofed (sql_traced) that, you would discover the update was actually executed just one time (with all of the inputs).

the insert as select cannot be batched that way -- it would be quite impossible.


So, it is an optimization that will sometimes permit the updates/deletes to be batched (similar batching happens for INSERT ... VALUES -- but that will be conventional path, so it will in fact work)
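One way to see this batching for yourself is to trace the session and compare Execute counts in a tkprof report. A sketch (the trace file name and location vary by configuration):

```sql
ALTER SESSION SET sql_trace = TRUE;
-- ... run the FORALL UPDATE and the FORALL INSERT ... SELECT blocks here ...
ALTER SESSION SET sql_trace = FALSE;
-- then, on the server:
--   tkprof <your_trace_file>.trc report.txt
-- a batched FORALL UPDATE shows an Execute count of 1 for the whole
-- collection; the FORALL INSERT ... SELECT shows one execute per element
```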

FORALL Update/Delete When Parallel DML Enabled

A reader, April 16, 2004 - 11:17 pm UTC

Sorry, there were some typos in my above post; the correct Update and Delete statements are as follows:

(1) Update
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

FORALL i IN 1..my_tab.COUNT
UPDATE /*+ PARALLEL(x) */ table_x x
SET x.column_a = 1
WHERE x.column_b = my_tab(i);

COMMIT;

EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';

(2) Delete
EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';

FORALL i IN 1..my_tab.COUNT
DELETE /*+ PARALLEL(x) */ table_x x
WHERE x.column_a = my_tab(i);

COMMIT;

EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';

Thanks.


FORALL Insert With Parallel DML Enabled

Emerson Wang, April 18, 2004 - 1:30 pm UTC

Hi Tom,

Thank you very much for your answer to my question. Could you please further answer the following two related questions:

(1) If the FORALL Insert statement in my original question is executing the insert N times (not just once) as you pointed out, my FORALL Insert statement is actually the same as the following FOR loop:

FOR i IN 1..z_tab.COUNT
LOOP
INSERT INTO table_x x (...)
SELECT ... FROM table_y y WHERE y.column_z = z_tab(i);
END LOOP;

We learned that the FORALL statements can be used to reduce the switches between the PL/SQL engine and the SQL engine for performance enhancement as compared to the FOR loops.

Your answer indicated that the performance of the FORALL statements is not always faster than that of the FOR loops, because they may execute the statements in a way similar to the FOR loop. Could you please confirm whether the above conclusion is correct.

(2) For the FORALL Update/Delete/Insert statements, the optimizations may allow for batching, i.e., executing the statement just one time (with all of the inputs). This means that batching really depends upon how we construct the SQL for the FORALL statements. Could you please confirm this.

If batching is allowed for the FORALL statement, do you think enabling parallel DML will make any difference in improving FORALL performance?

Tom Kyte
April 18, 2004 - 1:46 pm UTC

no -- that is not what is happening here at all.


for i in 1 .. z_tab.count
loop
insert ...
end loop

does this:


a) gather up inputs to insert (z_tab(I))
b) perform context switch from PLSQL to SQL
c) execute insert once
d) perform context switch back from SQL to PLSQL
e) goto A until you are done looping



forall i in 1 .. z_tab.count
insert


does this:

a) gather up inputs to insert (the entire z_tab)
b) perform context switch from PLSQL to SQL
c) execute insert N-times
d) perform context switch back from SQL to PLSQL


steps B and D (expensive) are done once, not once per execute.
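The two step lists above can be sketched side by side in PL/SQL (table t and its column x are hypothetical):

```sql
DECLARE
   TYPE num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
   l_vals num_tab;
BEGIN
   FOR i IN 1 .. 1000 LOOP
      l_vals(i) := i;
   END LOOP;

   -- row by row: 1000 round trips between the PL/SQL and SQL engines
   FOR i IN 1 .. l_vals.COUNT LOOP
      INSERT INTO t (x) VALUES (l_vals(i));
   END LOOP;

   -- bulk: one round trip; the SQL engine executes the insert 1000 times
   FORALL i IN 1 .. l_vals.COUNT
      INSERT INTO t (x) VALUES (l_vals(i));
END;
/
```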


but, your method of using "insert as select" with a set -- without using the set in an IN clause -- was (is) inefficient. You SHOULD just use a single insert and no forall at all in your example!




2) the batching is on the step c) in the above -- not the context switching.






At the end of the day, if you need PARALLEL -- it means you have something HUGE, literally HUGE. In that case the overhead of the context switch is tiny in comparison. You use parallel to update tons and tons of data -- the time will be spent in the update, not in the context switch. If you have a need for parallel, forall isn't going to materially affect you anyway. It'll be like 1/100 of a percent of your runtime! not even worth "thinking about"




FORALL Insert/Update/Delete When Parallel DML Enabled

Emerson Wang, April 19, 2004 - 8:28 pm UTC

Thanks, Tom. Great answers to good questions.

FORALL with /*+ APPEND */

Michael, February 28, 2005 - 8:47 am UTC

Hello Tom,

now what if I want to combine the best of both worlds? I came across the same kind
of problem, having an input table of objects which I want to insert into a
regular table:

SQL> desc al_numbers_2
 Name                                      Null?    Type
 ----------------------------------------- -------- -------------------

 ALNO_ID                                   NOT NULL VARCHAR2(30)
 CLIENT_NO                                 NOT NULL NUMBER(38)
 DESCRIPTION                                        VARCHAR2(174)
 CREATEDON                                 NOT NULL DATE
 CREATEDBY_ID                              NOT NULL NUMBER(38)
 UPDATEDON                                 NOT NULL DATE
 UPDATEDBY_ID                              NOT NULL NUMBER(38)
 DELETED                                            NUMBER(38)


SQL> desc al_numbers_typ
 Name                                      Null?    Type
 ----------------------------------------- -------- -----------------

 ALNO_ID                                            VARCHAR2(30)
 CLIENT_NO                                          NUMBER(38)
 DESCRIPTION                                        VARCHAR2(174)
 CREATEDON                                          DATE
 CREATEDBY_ID                                       NUMBER(38)
 UPDATEDON                                          DATE
 UPDATEDBY_ID                                       NUMBER(38)
 DELETED                                            NUMBER(38)


SQL> desc al_numbers_tab
 al_numbers_tab TABLE OF AL_NUMBERS_TYP



Now issuing something like

DECLARE
   p_al_numbers_tab   al_numbers_tab          := al_numbers_tab ();
   p_alno_id          DBMS_SQL.varchar2_table;
   p_client_no        DBMS_SQL.number_table;
   p_description      DBMS_SQL.varchar2_table;
   p_createdon        DBMS_SQL.date_table;
   p_createdby_id     DBMS_SQL.number_table;
   p_updatedon        DBMS_SQL.date_table;
   p_updatedby_id     DBMS_SQL.number_table;
   p_deleted          DBMS_SQL.number_table;
BEGIN
/* Just for filling my Array, which would normally come from Input
   Variables 
*/
   SELECT CAST (MULTISET (SELECT *
                            FROM al_numbers) AS al_numbers_tab)
     INTO p_al_numbers_tab
     FROM DUAL;

   SELECT *
   BULK COLLECT INTO p_alno_id,
           p_client_no,
           p_description,
           p_createdon,
           p_createdby_id,
           p_updatedon,
           p_updatedby_id,
           p_deleted
     FROM TABLE (CAST (p_al_numbers_tab AS al_numbers_tab));


   FORALL i IN 1 .. p_al_numbers_tab.COUNT SAVE EXCEPTIONS
      INSERT /*+ APPEND */ INTO al_numbers_2
         SELECT p_alno_id (i),
                p_client_no (i),
                p_description (i),
                p_createdon (i),
                p_createdby_id (i),
                p_updatedon (i),
                p_updatedby_id (i),
                p_deleted (i)
           FROM DUAL;
END;


with al_numbers my source of data, I get error 

ORA-12838: cannot read/modify an object after modifying it in parallel
ORA-06512: at line 29

but I want to be able to use FORALL with the SAVE EXCEPTIONS clause, which comes
in quite handy since definitely some records will be discarded.
This could even be enhanced, when the input records become big, by
additionally using the LIMIT clause.


Would be really great to have a working solution for that ;)

Thanks a lot
Michael

 

Tom Kyte
February 28, 2005 - 8:53 am UTC

ouch -- good thing that did not work!


You would have put one record per high-water mark advance! This would be an inappropriate use of APPEND -- it would be slightly space consuming and would more than likely end up generating MORE redo/undo than a normal insert values.... (for all of the recursive sql that would be involved)

it would not be the best of both -- you don't want append here (and therefore no need to select from dual -- which in itself would add to the overhead)
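A conventional-path rewrite of the block above (no APPEND hint, no SELECT ... FROM DUAL) might look like this sketch; SAVE EXCEPTIONS still works, and ORA-24381 can be caught to inspect the rejected rows:

```sql
BEGIN
   FORALL i IN 1 .. p_alno_id.COUNT SAVE EXCEPTIONS
      INSERT INTO al_numbers_2
         (alno_id, client_no, description, createdon,
          createdby_id, updatedon, updatedby_id, deleted)
      VALUES
         (p_alno_id(i), p_client_no(i), p_description(i), p_createdon(i),
          p_createdby_id(i), p_updatedon(i), p_updatedby_id(i), p_deleted(i));
EXCEPTION
   WHEN OTHERS THEN
      IF SQLCODE = -24381 THEN   -- "error(s) in array DML"
         FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
            DBMS_OUTPUT.PUT_LINE
               ('row '    || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                ' error ' || SQL%BULK_EXCEPTIONS(j).ERROR_CODE);
         END LOOP;
      ELSE
         RAISE;
      END IF;
END;
/
```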

FORALL with /*+ APPEND */

Michael, February 28, 2005 - 11:37 am UTC

Thanks for your fast reply!

So I'll discard APPEND. It would have been nice to actually see the differences between the two approaches, but that does not seem to be possible, because of the above error message :(

Thanks again,
Michael

FORALL with /*+ APPEND */ - ouch!

Uwe M. Kuechler, June 30, 2005 - 6:55 am UTC

Tom,

thanks to your last answer, I was able to convince a customer, with a second opinion, NOT to combine FORALL and APPEND hints.

Best regards,
Uwe

FORALL insert via database link

Anne, October 24, 2005 - 6:04 pm UTC

Hi Tom,

I am bulk collecting into an array and then doing a FORALL insert into a table on a remote database (@qstn_fg), and getting error ORA-01400: cannot insert NULL into ("APPS"."CUSTOM_TRX"."TRX_NUMBER").
If I output the array, it shows the correct values... Could you please help me out? Thank you!

declare
cursor c1 is
select i.invoice_id
, i.invoice_nbr trx_number
from .....
where ....

type invoice_id_type is table of ....;
type trx_number_type is table of custom_trx.trx_number@qstn_fg%type;

invoice_id_array invoice_id_type;
trx_number_array trx_number_type;

begin
open c1;
fetch c1 bulk collect into
invoice_id_array
, trx_number_array;
close c1;

for i in 1 ..invoice_id_array.count
loop
dbms_output.put_line(trx_number_array(i));
end loop;

forall i in 1 .. invoice_id_array.count
save exceptions
insert into custom_trx@qstn_fg
( trx_id
, invoice_id
, trx_number)
values(trx_id_seq.nextval@qstn_fg
, invoice_id_array(i)
, trx_number_array(i));

.......
/

Output :
108871481
102914487
ORA-01400: cannot insert NULL into ("APPS"."CUSTOM_TRX"."TRX_NUMBER")
ORA-02063: preceding line from QSTN_FG




Tom Kyte
October 25, 2005 - 1:26 am UTC

sorry - bulk processing over a dblink is not supported.


why isn't that code just

insert into ... select from ....

Insert into select from

Anne, October 25, 2005 - 6:32 am UTC

Hi Tom,

Thanks for the info - that's too bad that FORALL does not work over remote databases....
I had tried insert into ... select from, but I got ORA-02287: sequence number not allowed here. Any ideas? (I could do it using an insert in a loop, though, which I am trying to avoid.)

insert into custom_trx@qstn_fg
( trx_id
, invoice_id
, trx_number)
select trx_id_seq.nextval@qstn_fg
, invoice_id
, trx_number
from inv
where....
order by invoice_id;

.......
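ORA-02287 shows up because a remote sequence generally cannot be referenced in a distributed DML statement like this one. One commonly suggested workaround (a sketch, assuming you can create objects on the remote database) is to let a trigger on the remote table assign the key, so the distributed insert never mentions the sequence:

```sql
-- run this ON the remote database, not over the link:
CREATE OR REPLACE TRIGGER custom_trx_bi
   BEFORE INSERT ON custom_trx
   FOR EACH ROW
   WHEN (NEW.trx_id IS NULL)
BEGIN
   SELECT trx_id_seq.NEXTVAL INTO :NEW.trx_id FROM DUAL;
END;
/

-- the local, distributed insert then needs no sequence reference:
INSERT INTO custom_trx@qstn_fg (invoice_id, trx_number)
SELECT invoice_id, trx_number
  FROM inv
 WHERE ...;
```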


Forall is slow ...

reader, January 12, 2006 - 9:50 pm UTC

Our DB is 10gR2.
We are using forall as per the example in your book
"Effective Oracle by Design", page 566.
The results are:
We are trying to insert 500000 records.
1. 10 mins if we use Insert .... select .... (SQL)
2. 30 mins if we use a pl/sql conventional cursor
3. 120 mins if we use forall (NB: we are not using LIMIT in our bulk collect)

Can you suggest where we should start this investigation?

Also, is there a better way to define the variables in 10g --
we are using l_owner dbms_sql.varchar2_table; (pg 566)
....
....


Tom Kyte
January 13, 2006 - 11:08 am UTC

I don't carry the book with me - and am on travel, so you'll need to refresh my memory here.

(but I probably said "the RIGHT way to do this is a single sql statement!" - meaning #1 is the right thing to do)

I'd want a test case to work with - like I give you....

What does "NB" mean?  


But - there was a big change in 10g as far as cursor processing goes.  Here is an excerpt from a paper I recently wrote:

.......
Well, fortunately, I won't write like that.  Too many times in the past, when just providing my experiences, I've been wrong.  I use pages where others might use paragraphs, and I try to convey an understanding of why what I say is true (or not -- you can see for yourself!).  Here is a quote from that book (very similar to others you'll see in fine books like Mastering Oracle PL/SQL: Practical Solutions by Connor McDonald) showing how I truly believe all technical information needs to be presented.  Your ideas come out, what you think is true comes out, some evidence supporting that is shown, the numbers are discussed, caveats explored -- it is all right there:

 

<quote src=Effective Oracle by Design>

Use Bulk Processing When It Has Dramatic Effects

 

As an example, we'll compare processing the EMP table 14 rows at a time versus processing it a row at a time. Here is the version for row-at-a-time processing:

 

ops$tkyte@ORA920> exec runstats_pkg.rs_start;
PL/SQL procedure successfully completed.

ops$tkyte@ORA920> begin
  2      for i in 1 .. 5000
  3      loop
  4          for x in ( select ename, empno, hiredate from emp )
  5          loop
  6              null;
  7          end loop;
  8      end loop;
  9  end;
 10  /
PL/SQL procedure successfully completed.

And here is the version that uses bulk processing:

ops$tkyte@ORA920> declare
  2      l_ename    dbms_sql.varchar2_table;
  3      l_empno    dbms_sql.number_table;
  4      l_hiredate dbms_sql.date_table;
  5  begin
  6      for i in 1 .. 5000
  7      loop
  8          select ename, empno, hiredate
  9            bulk collect into l_ename, l_empno, l_hiredate
 10            from emp;
 11      end loop;
 12  end;
 13  /
PL/SQL procedure successfully completed.

Running Runstats to compare the versions shows the following:

ops$tkyte@ORA920> exec runstats_pkg.rs_stop(10000);
Run1 ran in 274 hsecs
Run2 ran in 132 hsecs
run 1 ran in 207.58% of the time

 

This shows that fetching our entire result set using BULK COLLECT in one SQL statement runs faster (about twice as fast in this case) than doing the same thing a single row at a time.

 

The response times you see will be a function of the amount of data you array-fetch, as well. More or less data in the result set will have a definite impact on the performance here. The more data you bulk-fetch, up to a point, the better relative performance you will see from the BULK COLLECT over time. For example, when I put 56 rows in EMP, the BULK COLLECT version was 380% better. When I put 1 row in EMP, both versions ran in the same amount of time. At some point, however, the BULK COLLECT will cease being more efficient, as the amount of RAM it consumes increases greatly. Where that point is varies, but I find a BULK COLLECT size of about 100 rows to be universally "good" in practice. Later, we'll look at using the LIMIT clause to control this.

 

Looking further in the Runstats report, we see some interesting numbers:

 

Name                                  Run1        Run2        Diff
STAT...session logical reads        80,522      15,525     -64,997
STAT...consistent gets              80,003      15,004     -64,999
STAT...buffer is not pinned co      70,000       5,000     -65,000
STAT...no work - consistent re      70,000       5,000     -65,000
STAT...table scan blocks gotte      70,000       5,000     -65,000
STAT...recursive calls              75,003       5,003     -70,000
LATCH.cache buffers chains         162,601      32,582    -130,019

Overall latching is reduced.

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
188,736      58,658    -130,078    321.76%

PL/SQL procedure successfully completed.

 

That is analogous to what we observed in SQL*Plus in Chapter 2, when we played with the ARRAYSIZE setting while using AUTOTRACE. The larger the array size, the fewer consistent gets we performed, and the better the performance and scalability. The same rules apply here, but the impact is not as transparent as just adjusting an ARRAYSIZE setting is. Here, we needed to rewrite the code using PL/SQL table types or collections. We needed to declare variables to fetch into. We used more memory in our session. We can use V$MYSTAT, a dynamic performance view, to see the net effect on memory usage.

 

It is for these reasons that I recommend using bulk processing only where and when it would have the most dramatic effect. In the example shown here, it looks dramatic. But that is only because we did it for 14 rows, 5,000 times. It would be worthwhile here, if you did that process many times. If you did that process for 50 rows, you would discover they run in about the same amount of time and that the BULK COLLECT actually does more latching!

</quote>

 

There you have pretty much all of the facts – and perhaps most importantly you have some way to verify the truth.  Say you just bought that book (I wrote that book as Oracle 9iR2 was in full swing, Oracle 10g was still a glint in our eyes) and you read the advice.  You might be tempted to take it to heart and write all of your code using BULK COLLECT, or even worse, rewrite all of your existing code to do so.

 

Only to discover the expert is totally wrong.

 

Because things change. (Just a side note: most everything else in that book applies to 10g!  And you have the examples available to you to see that.)

 

But fortunately you bought the book with evidence, and you grabbed the examples from the website.  And you ran it in 10g.  And you observed:

 

ops$tkyte@ORA10G> exec runStats_pkg.rs_start
PL/SQL procedure successfully completed.

ops$tkyte@ORA10G> begin
  2      for i in 1 .. 5000
  3      loop
  4          for x in ( select ename, empno, hiredate from emp )
  5          loop
  6              null;
  7          end loop;
  8      end loop;
  9  end;
 10  /
PL/SQL procedure successfully completed.

ops$tkyte@ORA10G> exec runStats_pkg.rs_middle
PL/SQL procedure successfully completed.

ops$tkyte@ORA10G> declare
  2      l_ename    dbms_sql.varchar2_table;
  3      l_empno    dbms_sql.number_table;
  4      l_hiredate dbms_sql.date_table;
  5  begin
  6      for i in 1 .. 5000
  7      loop
  8          select ename, empno, hiredate
  9            bulk collect into l_ename, l_empno, l_hiredate
 10           from emp;
 11      end loop;
 12  end;
 13  /
PL/SQL procedure successfully completed.

ops$tkyte@ORA10G> exec runStats_pkg.rs_stop(10000)
Run1 ran in 51 hsecs
Run2 ran in 48 hsecs
run 1 ran in 106.25% of the time

Name                                  Run1        Run2        Diff

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
58,709      58,859         150     99.75%

PL/SQL procedure successfully completed.

 

Well, it would seem hardly worth it to do the bulk collect here, wouldn't it?  The reason: in Oracle 10g, PL/SQL is silently array fetching 100 rows at a time for us.  When we do "for x in ( select * from t )", PL/SQL has already bulk collected 100 rows.  We no longer need to write that extra code, do that extra work.

 

And, had I not had a test case, I might still not know that. I might still be giving the advice "bulk collect, everywhere".  Someone reading my book was kind enough to email me and ask "why do I get these numbers when I run your example?"  60 seconds of research and I found out why (tkprof reveals a lot! A simple sql_trace=true and away we go).

  
