How ROWS_PROCESSED works and is updated in GV$SQL


Question and Answer

Connor McDonald

Thanks for the question, Mansi.

Asked: February 16, 2017 - 6:54 am UTC

Last updated: February 18, 2017 - 4:25 am UTC

Version: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit

Viewed 1000+ times

You Asked

How do rows_processed, disk reads, and buffer gets actually work while executing a query? I came across two scenarios while executing two different
queries. (My SQL array fetch size is 50.)

1. Disk reads and buffer gets increased until the entire execution completed and rows_processed = 50.
When I go on to select all the rows, rows_processed keeps increasing but there is no change in disk reads and buffer gets.

Is it that the entire execution completed and all rows were processed, but only 50 were displayed due to the array size?

But if that is so, what happens in the second scenario below?

2. Disk reads and buffer gets increased until the entire execution completed and rows_processed = 50, but when I go on to select all the rows, all three counters keep increasing (rows_processed, disk_reads, and buffer_gets).

and Connor said...

It depends on the query. Here's example 1:

SQL> create table t as select * from dba_objects;

Table created.

SQL> set arraysize 50
SQL> set pause on
SQL> select /*test1*/ * from t;

[data]



Now, for that query, I don't need to read the entire table - pause/arraysize means "just read 50 and stop". So when I look at v$sql, I see this:

SQL> select sql_text, rows_processed, disk_reads, buffer_gets
  2  from v$sql where sql_id = '280q6uvsgs2d3';

SQL_TEXT                                                         ROWS_PROCESSED DISK_READS BUFFER_GETS
---------------------------------------------------------------- -------------- ---------- -----------
select /*test1*/ * from t                                                    51          5           6
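
(In case you're wondering how I got that sql_id - just an aside, and the value will be different on your system - one way is to search v$sql for the comment we embedded in the statement:)

SQL> select sql_id, sql_text
  2  from v$sql
  3  where sql_text like 'select /*test1*/%';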


Now I keep hitting Enter to get batches of 50 rows. As I do, the counters increment as I hit *new* blocks in the table that I have *yet* to read. (The count is 51 rather than 50 because SQL*Plus fetches a single row on the first call, and then arraysize rows per fetch from there on.)

SQL_TEXT                                                         ROWS_PROCESSED DISK_READS BUFFER_GETS
---------------------------------------------------------------- -------------- ---------- -----------
select /*test1*/ * from t                                                    51          5           6

SQL> /

SQL_TEXT                                                         ROWS_PROCESSED DISK_READS BUFFER_GETS
---------------------------------------------------------------- -------------- ---------- -----------
select /*test1*/ * from t                                                   101          5           8

SQL> /

SQL_TEXT                                                         ROWS_PROCESSED DISK_READS BUFFER_GETS
---------------------------------------------------------------- -------------- ---------- -----------
select /*test1*/ * from t                                                   251          5          13



Now consider a query like

select /*test2*/ * from t order by object_id

To do the sorting, I need to read every single row in the table *before* I can even start to return the first row. So by the time I start returning the first 50, I've done *all* the work.


SQL> select sql_text, rows_processed, disk_reads, buffer_gets
  2  from v$sql where sql_id = '69djn1r805gvf';

SQL_TEXT                                                         ROWS_PROCESSED DISK_READS BUFFER_GETS
---------------------------------------------------------------- -------------- ---------- -----------
select /*test2*/ * from t order by object_id                                 51       1745        1751

SQL> /

SQL_TEXT                                                         ROWS_PROCESSED DISK_READS BUFFER_GETS
---------------------------------------------------------------- -------------- ---------- -----------
select /*test2*/ * from t order by object_id                                201       1745        1751




There are obviously other permutations as well, eg, my sort might have had to spill to disk, so as I'm reading the results I might need disk reads (not from the table data, but from the temporary sort area), etc.
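
If you want to see whether a sort spilled to temp, one way (just a sketch, reusing the sql_id from the example above - your value will differ) is to look at v$sql_workarea:

SQL> select operation_type, last_execution, last_tempseg_size
  2  from v$sql_workarea
  3  where sql_id = '69djn1r805gvf';

LAST_EXECUTION tells you whether the last run of that work area stayed in memory (optimal) or had to make one or more passes through temp.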


Comments

Thanks very much

A reader, February 17, 2017 - 5:36 am UTC

This was useful.

I have the same confusion when exporting data into Excel as well.

Should we first do all the work and fetch all the rows, then start exporting? Or can we start once the first 50 rows are processed?

Which option will consume less time?
Connor McDonald
February 18, 2017 - 4:25 am UTC

I wouldn't worry too much about that, because it depends so much on the query. A much bigger impact is the size of each fetch, so set that to 100 or more if you're unloading data.
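
For example, in SQL*Plus that's just (a minimal sketch - the spool file name is a placeholder):

SQL> set arraysize 200
SQL> spool unload.txt
SQL> select * from t;
SQL> spool off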