Impact of very large rows prefetching

Question and Answer

Tom Kyte

Thanks for the question, Ludovic.

Asked: June 04, 2013 - 12:29 pm UTC

Last updated: June 04, 2013 - 2:50 pm UTC

Version: 11.2

Viewed 1000+ times

You Asked

Hello Tom,

I know the row prefetching defined on the client is very important because it can affect the number of client/server round trips and the number of logical I/Os.

But I would like to know what the impacts are if the row prefetch size is set to a very large value.

In the following example, I do not see any direct impact when I set arraysize (in SQL*Plus) to 5 or to 5000 to read my table containing 5 rows. If there is no impact, we could simply set the row prefetch size to the maximum value every time (I highly doubt this, because every action necessarily has a cost).

SQL> create table T as select * from dba_tables where rownum<=5;

Table created.

SQL> select count(*) from T;

COUNT(*)
----------
5

SQL> set autotrace traceonly stat

SQL> set arraysize 5
SQL> select * from T;


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
4877 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5 rows processed

SQL> set arraysize 5000
SQL> select * from T;


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
4877 bytes sent via SQL*Net to client
364 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5 rows processed

Thanks in advance.

and Tom said...

Setting the arraysize larger than 100 in most cases is not a good idea.

There are clients that would look at the rowsize (the maximum row length) and then multiply that by the arraysize to determine the maximum buffer they might have to allocate to fetch the data into. Setting the arraysize to a really large number would be detrimental to such a client.
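
A rough sketch of that arithmetic, using the declared column lengths in USER_TAB_COLUMNS as a stand-in for the maximum row length of the table T created above:

-- (maximum declared row length) x (arraysize) =
-- the fetch buffer such a client might decide to allocate
select sum(data_length) * 5000 as worst_case_fetch_buffer_bytes
  from user_tab_columns
 where table_name = 'T';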

Then there are the clients that get 25 rows and stop, display them on screen, wait for a page down to get the next 25. Making them wait to get the first 5,000 rows to show the first 25 would not be smart.

The arraysize should be set based on the client's behavior and needs. If you don't know what to set it to, stick with 100 as an all-around "pretty good number for all circumstances" (it is what PL/SQL uses, for example)...


An arraysize setting of anything larger than 2 for your count(*) query will never affect its performance, except perhaps in a negative fashion as some tools might want to allocate more memory for it. Think about it - if you only have one row to fetch - ever - how could an arraysize of 5000 help anything?

You'd want to use an arraysize in the 1,000s only in very specific, rather rare cases. A data unloader might want to use 1,000 to fetch with. A data loader might want to use 10,000 to load with. For most everything else - 100 is a good place to be. And make sure to make it a settable, tunable bit of data in your program so you can benchmark with different sizes.
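
A minimal SQL*Plus sketch of such a benchmark (assuming a suitably large test table, here called BIG_TABLE as a placeholder name):

set timing on
set autotrace traceonly statistics

set arraysize 15
select * from big_table;

set arraysize 100
select * from big_table;

set arraysize 5000
select * from big_table;

-- compare elapsed time and "SQL*Net roundtrips to/from client" across the runs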


Comments

Ludovic, June 04, 2013 - 3:05 pm UTC

Thank you for your answer. I understand now why a very large row prefetch value can degrade the client's behavior.