There is no hard and fast rule.
*Generally* bulk collect is going to be faster than row-by-row fetching, but with diminishing returns as the limit gets higher. For example:
```
SQL> create table t
  2  as select d.* from dba_objects d,
  3     ( select 1 from dual connect by level <= 10 );

Table created.
```
```
SQL> declare
  2    type tlist is table of t%rowtype
  3      index by pls_integer;
  4    r tlist;
  5    s timestamp;
  6    limit_size int := 5;
  7    cursor c is select * from t;
  8  begin
  9    for i in 1 .. 8 loop
 10      s := localtimestamp;
 11      open c;
 12      loop
 13        fetch c bulk collect into r limit limit_size;
 14        exit when c%notfound;
 15      end loop;
 16      close c;
 17      dbms_output.put_line('Limit='||limit_size||',time='||(localtimestamp-s));
 18      limit_size := limit_size * 2;
 19    end loop;
 20  end;
 21  /
Limit=5,time=+000000000 00:00:02.866000000
Limit=10,time=+000000000 00:00:02.239000000
Limit=20,time=+000000000 00:00:02.007000000
Limit=40,time=+000000000 00:00:01.855000000
Limit=80,time=+000000000 00:00:01.734000000
Limit=160,time=+000000000 00:00:01.747000000
Limit=320,time=+000000000 00:00:01.758000000
Limit=640,time=+000000000 00:00:01.762000000

PL/SQL procedure successfully completed.
```
So on my machine, above a limit of 80 I'm not seeing much benefit. There are some boundary cases that *might* come into play. For example, I might get lucky and find 10 rows at the very "top" of a table, with the 11th row much further along in a full table scan. In that particular case, a limit of 10 would appear faster than a limit of 20 for the first fetch. But such cases are generally not worth writing custom code to handle.
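One caveat when you adapt the loop above for real work: the benchmark gets away with `exit when c%notfound` immediately after the fetch only because it does nothing with the rows. If you process each batch, testing `%notfound` before the processing step silently drops the final, partial batch. A common-pattern sketch (the "process" step is a placeholder):

```
declare
  type tlist is table of t%rowtype index by pls_integer;
  r tlist;
  cursor c is select * from t;
begin
  open c;
  loop
    fetch c bulk collect into r limit 100;
    exit when r.count = 0;        -- exit on an empty fetch, not %notfound
    for i in 1 .. r.count loop
      null;                        -- process r(i) here
    end loop;
  end loop;
  close c;
end;
/
```

Testing `r.count = 0` (or putting the `%notfound` test after the processing loop) guarantees the last partial batch is handled.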
My general approach is:
- if I *know* it's one row (eg a primary key lookup), I use select-into
- if I *know* it's a small, finite list (eg "list of countries"), I'll just select bulk collect
- otherwise I'll fetch with a limit of 100.
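The three approaches above, as a rough sketch against the same table `t` (the predicates are placeholders, not meaningful queries):

```
declare
  l_row  t%rowtype;
  type tlist is table of t%rowtype index by pls_integer;
  l_rows tlist;
  cursor c is select * from t;
begin
  -- 1) known to be exactly one row: select-into
  select * into l_row
  from   t
  where  rownum = 1;               -- placeholder for a primary-key predicate

  -- 2) known to be a small, finite list: one-shot bulk collect
  select * bulk collect into l_rows
  from   t
  where  rownum <= 200;            -- placeholder for a small lookup table

  -- 3) otherwise: paged fetch with a limit of 100
  open c;
  loop
    fetch c bulk collect into l_rows limit 100;
    exit when l_rows.count = 0;
    for i in 1 .. l_rows.count loop
      null;                        -- process l_rows(i) here
    end loop;
  end loop;
  close c;
end;
/
```

Select-into raises `NO_DATA_FOUND` / `TOO_MANY_ROWS` if the row count isn't exactly one, which is exactly the safety net you want for a primary key lookup.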