Cursor FOR loops optimization in 10g

Question and Answer

Tom Kyte

Thanks for the question, Bhadri.

Asked: September 30, 2008 - 3:04 am UTC

Last updated: October 02, 2008 - 7:44 am UTC

Version: 10g

Viewed 10K+ times

You Asked

Is it true that Oracle Database 10g will automatically optimize cursor FOR loops to perform at speeds comparable to BULK COLLECT?
I got curious after reading a question on PL/SQL best practices in the March/April 2008 issue of Oracle Magazine.
Link-> https://asktom.oracle.com/Misc/oramag/oracle-database-11g-redux.html
But I could not find anything like that in the Oracle documentation.


Thanks,

and Tom said...

Well, I disagree with his examples - but not the gist.

His examples are of the form:

get some rows
process them one by one


Now, if it were:

get some rows
process them
do a bulk update back into database

Then I would say "bulk bind".


The implicit cursor FOR loop implicitly array fetches 100 rows at a time - as you can see from his numbers, 100 was the 'sweet spot' (it is, in general, a good number to use for this sort of stuff). So, if you are just fetching and processing the rows - a row at a time - just use the cursor FOR loop.
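If you want to see that array fetch for yourself, here is a minimal sketch (an illustration only - it assumes the table T created below and the privilege to enable sql_trace): trace the loop and count the FETCH calls in the tkprof report.

alter session set sql_trace = true;

begin
    for x in (select username from t)   -- implicit cursor FOR loop
    loop
        null;                            -- no per-row work needed for this demo
    end loop;
end;
/

alter session set sql_trace = false;

-- in the tkprof report, the SELECT against T should show roughly
-- ceil(rows fetched / 100) FETCH calls, i.e. about 100 rows per fetch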

Here is an example of when I would not explicitly bulk bind:


ops$tkyte%ORA10GR2> create table t as select * from all_users where rownum <= 10;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure process( p_str in out varchar2 )
  2  is
  3  begin
  4          p_str := lower(p_str);
  5  end;
  6  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> begin
  2          for x in (select username from t)
  3          loop
  4                  process( x.username );
  5          end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.



Why complicate that - there would be no benefit. Now, on the other hand, if I called PROCESS and then put the data back (PROCESS in this case would have to be something a LOT more complex than what I did above - in this example, we should not even be using PL/SQL - but assume PROCESS is really complex procedural code and is necessary), then I would, for example:

ops$tkyte%ORA10GR2> declare
  2      l_rowids    sys.odciridlist;
  3      l_usernames sys.odcivarchar2list;
  4      cursor c is select rowid, username from t;
  5      l_limit     number := 100;
  6  begin
  7      open c;
  8      loop
  9          fetch c bulk collect into l_rowids, l_usernames limit l_limit;
 10          exit when ( l_rowids.count = 0 );
 11
 12          for i in 1 .. l_rowids.count
 13          loop
 14              process( l_usernames(i) );
 15          end loop;
 16
 17          forall i in 1 .. l_rowids.count
 18              update t set username = l_usernames(i) where rowid = l_rowids(i);
 19      end loop;
 20      close c;
 21  end;
 22  /

PL/SQL procedure successfully completed.


Now it makes sense to do it explicitly - because we get N rows, process them all, and then FORALL update them (a bulk update).

If you take the FORALL away - remove the need to update - I would revert to my prior example (a lot easier to code, read, understand, maintain, debug - whatever).


So, I stick with the for x in (select) construct unless I have an opportunity to FORALL bulk update/insert/delete - then I would explicitly array fetch, so as to have the data in an array, process it, and bulk it back in.




Comments

Excellent

A reader, September 30, 2008 - 5:39 pm UTC

"forall" changes so much things.

This thread makes the BULK CORRECT much clear.

Very Good example

Kavish Jha, October 01, 2008 - 12:53 am UTC

Very good example of BULK COLLECT - it clears up a lot of confusion.

Thanks

Bhadri, October 01, 2008 - 2:03 am UTC

Great explanation and example.
This is exactly what I am looking for. Thanks so much!

Cursor FOR loops optimization in 10g

yocs, October 01, 2008 - 8:17 am UTC

The cursor FOR loop is optimized in 10g and gives a performance gain of about 10-15%. Is that true?
Tom Kyte
October 01, 2008 - 1:11 pm UTC

it is precisely 10.2145%

kidding.

It depends, it entirely and utterly depends.


Do you fetch enough rows that the array fetch makes a difference? Maybe yes, maybe no.

Does your query experience reduced logical IO when you array fetch? Maybe yes, maybe no.

It is somewhere between -100 and +100% faster.


Is the optimizing compiler in 10g generating better, faster code? Yes, in general.

Are implicit cursor FOR loops optimized with an array fetch of 100? Yes.
Will that always mean a 10-15% gain? No - it might run slower, faster, or the same as before. I can give examples of all three cases. In general, it is beneficial - but you cannot ever pin an X-Y% gain on *anything*.
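If you want a number for your own system rather than a rule of thumb, a small timing harness along these lines will do (a sketch only - it reuses the table T from the example above, which is far too small to show a real difference; point the queries at one of your own large tables and compare):

set serveroutput on

declare
    type t_tab is table of varchar2(128);
    l_tab   t_tab;
    l_dummy varchar2(128);
    l_start number;
begin
    -- implicit cursor FOR loop (array fetched 100 rows at a time in 10g)
    l_start := dbms_utility.get_cpu_time;
    for x in (select username from t)
    loop
        l_dummy := lower(x.username);
    end loop;
    dbms_output.put_line( 'cursor for loop : ' ||
        (dbms_utility.get_cpu_time - l_start) || ' hsecs cpu' );

    -- explicit BULK COLLECT with a limit of 100
    l_start := dbms_utility.get_cpu_time;
    declare
        cursor c is select username from t;
    begin
        open c;
        loop
            fetch c bulk collect into l_tab limit 100;
            for i in 1 .. l_tab.count
            loop
                l_dummy := lower(l_tab(i));
            end loop;
            exit when l_tab.count < 100;
        end loop;
        close c;
    end;
    dbms_output.put_line( 'bulk collect    : ' ||
        (dbms_utility.get_cpu_time - l_start) || ' hsecs cpu' );
end;
/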

Limits

Chris Seymour, October 02, 2008 - 1:29 am UTC

Wanted to get your opinion on Steven's CPU timings for different limits in the original poster's link. It looks like it actually took less CPU time for a limit of 10,000 than it did for 100.

Where is your suggestion of 100 actually culled from? What are the implications of going to, say, 10,000?
Tom Kyte
October 02, 2008 - 7:44 am UTC

Elapsed CPU time for limit of 1 = 1839
Elapsed CPU time for limit of 5 = 716
Elapsed CPU time for limit of 25 = 539
Elapsed CPU time for limit of 50 = 545
Elapsed CPU time for limit of 75 = 489
Elapsed CPU time for limit of 100 = 490
Elapsed CPU time for limit of 1000 = 501
Elapsed CPU time for limit of 10000 = 478
Elapsed CPU time for limit of 100000 = 527


I would say that after about 75-100, those numbers are identical - until you get to 100,000, where it starts to go back up.

Run his test case a couple dozen times and smooth the numbers out. He even suggests that:

... From tests I (and others) have performed, however, it appears that you will see roughly the same performance no matter what value you choose for the limit, as long as it is at least 25. The test_diff_limits.sql script, included with the sample code for this column, at otn.oracle.com/oramag/oracle/08-mar/o28plsql.zip, demonstrates this behavior, using the ALL_SOURCE data dictionary view on an Oracle Database 11g instance. Here are the results I saw (in hundredths of seconds) when fetching all the rows (a total of 470,000): ...

Why use the RAM if there is no real benefit? That is the basis for 100 from me. It has been, in general, the 'best' number over the years - in Pro*C, JDBC, whatever. The biggest bang for the buck.
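To see the RAM side of that trade-off, a rough sketch along these lines can help (it assumes you have been granted SELECT directly on v$mystat and v$statname; run it once with a limit of 100 and once with 100,000 and compare the PGA figures):

set serveroutput on

declare
    type t_names is table of varchar2(128);
    l_names t_names;
    l_limit number := 100000;   -- change to 100 and compare
    cursor c is select object_name from all_objects;

    -- current 'session pga memory' for this session, in bytes
    function pga_bytes return number
    is
        l_val number;
    begin
        select m.value
          into l_val
          from v$mystat m, v$statname n
         where m.statistic# = n.statistic#
           and n.name = 'session pga memory';
        return l_val;
    end;
begin
    dbms_output.put_line( 'pga before fetch: ' || pga_bytes );
    open c;
    fetch c bulk collect into l_names limit l_limit;   -- one array fetch
    dbms_output.put_line( 'rows fetched    : ' || l_names.count );
    dbms_output.put_line( 'pga after fetch : ' || pga_bytes );
    close c;
end;
/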

Thanks

Chris Seymour, October 02, 2008 - 10:06 am UTC

Thanks Tom - Good explanation.

I didn't really see it from the point of view that using a higher limit would use more RAM without really producing any significant difference. A lightbulb kind of just went off in my head. It does make complete sense now.

Funny thing is I had read that article months back in the paper copy of the magazine and had the same question because I always read about using a limit of 100.
