
Question and Answer

Tom Kyte

Thanks for the question, Stephane.

Asked: September 04, 2008 - 11:34 am UTC

Answered by: Tom Kyte - Last updated: June 15, 2018 - 1:32 am UTC

Category: Database - Version: 9.2.0.8

Viewed 50K+ times!

You Asked

Hi Tom,

I've been reading through a few posts regarding pagination. I'm using the following query template in my application, to implement a "previous"/"next" feature to page through a search result set: (the J2EE front end calls a stored proc returning a range of results)

select * from (select /*+ first_rows(50) */ id,col1,col2,
row_number() over
(order by id DESC) rn
FROM table1) where rn between 1 and 50 order by rn;

This gives me the previous and next functionality I'm looking for, but our client is looking for more.

Example: 1300 Results found. Page 1 of 26, Showing Rows 1 - 50

I use APEX a lot for other apps, and I remember reading that choosing a pagination scheme that shows total pages, result counts, etc. is expensive and not recommended. (I notice on your site, your search results use the basic previous/next scheme)

Is this because you must run an additional unfiltered (no range predicate) query to obtain the total results count, then run the same query to obtain the result subset?

As always, thanks for the info!

Stephane

and we said...

Giving the "132,023,342 results found" number necessarily implies "you shall run the query to completion for every single page" - and remember, the number of results found will change from page to page because people modify data.

read some of these:

http://asktom.oracle.com/pls/ask/search?p_string=google+page+100+counting


(hint: there is no page 100 on google, and google totally guesses - google does it RIGHT)


Ask the customer if they

a) want to quintuple the money they spend on hardware
b) wait hundreds of times longer than they do now to get the answer


Because - to count 1,300 hits in order to show the first 50 will result in your finding 1,300 records - not 50. That will take a much larger machine - and necessarily make response times "not too good"



And besides, no human being will EVER look at 1,300 results, it is just too much. They will page at most 5-10 times. Sometimes they need to get to the last page - but the last page is really the FIRST page of a query ordered in the opposite direction!


Give them prev, next, tell them "there is more", let them pick the sort (to get the 'end' of the result) but refuse to count!
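One cheap way to honor "tell them there is more" without ever counting (an editor's sketch, not from the answer itself; the query shape follows the original question, and :first_row and the 50-row page size are illustrative):

```sql
-- Fetch 51 rows to display 50: the extra row is only a probe telling us
-- whether a "next" page exists, without running the query to completion.
select *
  from (select /*+ first_rows(51) */ id, col1, col2,
               row_number() over (order by id DESC) rn
          from table1)
 where rn between :first_row and :first_row + 50
 order by rn;
```

Display rows :first_row through :first_row + 49; if the 51st row came back, enable the "next" link, otherwise this was the last page.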

and you rated our response

  (30 ratings)


Reviews

Excellent

September 05, 2008 - 11:51 am UTC

Reviewer: Stephane

Just what I was hoping for as an answer! Thank you!

article on pagination

September 05, 2008 - 1:25 pm UTC

Reviewer: Gábor András from Debrecen, Hungary

Hi,

I have an article on the performance of various pagination techniques. Maybe you find it useful too:

quick overview
http://www.inf.unideb.hu/~gabora/pagination/results.html

detailed stuff:
http://www.inf.unideb.hu/~gabora/pagination/



September 05, 2008 - 1:40 pm UTC

Reviewer: A reader

Tom,

Are the following two pagination queries equivalent in terms of performance? Does Oracle recognize pagination queries in a certain form so that it will perform special optimization for that query?

select * from (select /*+ first_rows(50) */ id,col1,col2,
row_number() over
    (order by id DESC) rn
FROM table1) where rn between 1 and 50 order by rn;

SELECT *
FROM  (
   SELECT id, col1, col2, rownum rn
   FROM (
      SELECT /*+ first_rows(50) */ id, col1, col2
      FROM   table1
      ORDER  BY id DESC
   )
   WHERE   rownum <= 50
)
WHERE  rn >= 1;

Tom Kyte

Followup  

September 05, 2008 - 4:39 pm UTC

in current releases - row_number() is recognized and pushed as a top-n query - thus, the 2nd query is now equivalent to the first.

*in current releases*... It wasn't true in the beginning of analytics.

Showing Total Pages/Record Count

September 19, 2008 - 6:37 pm UTC

Reviewer: A reader

Tom,

I understand that showing total pages/record count is a very expensive operation. If the requirement is slightly different:

- If the total record count is less than 1,000, then display the exact record count.
- If the total record count is greater or equal to 1,000, then display "more than 1,000".

Will the SQL still be expensive? How do I write a SQL to accomplish this?
Tom Kyte

Followup  

September 21, 2008 - 1:35 pm UTC

Do you understand that by the time you give them an exact count, the ANSWER IS PROBABLY WRONG already. Unless you are on a static database, an exact count just isn't reality.

But, sure, you could do this - even though 1,000 is about 10 times too much.

And sure, it'll easily be an order of magnitude more expensive - think about it, anytime you do more work than you have to...



select ...., count(*) over () cnt
  from .....
 where .....
   and rownum <= 1000;



but then, you'd have a hard time getting past 1,000, you'd have to do a different query if you ever wanted to go there (but I'd just say *no*, you cannot get to the 100th page - just like google)
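The capped count can be wired into the display logic like this (an editor's sketch; note the cap here is 1,001 - one row past the limit - which is an addition to the fragment above, needed to tell "exactly 1,000" apart from "more than 1,000"):

```sql
-- Stop after 1,001 rows: the total is never counted past the cap.
-- Table and column names are placeholders.
select id, col1, col2, count(*) over () cnt
  from (select id, col1, col2
          from table1
         where ...            -- your predicates go here
           and rownum <= 1001);
-- client logic: if cnt <= 1000, show "cnt results found",
--               else show "more than 1,000 results"
```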


another thing I'd like to recommend thinking about is the word requirement. I have come to hate it. Developers take the word requirement literally - like "it is a requirement, it is REQUIRED, it HAS TO BE DONE"

that is nonsense. requirements start life as requests. Many times, once someone is informed of the cost (that is the job, the requirement of the development team actually - to inform of the true cost of the request) - the supposed requirement becomes "well, it would have been cool - but I understand...."

Writing software shouldn't be like herding sheep.

Which approach is better

July 22, 2010 - 1:58 am UTC

Reviewer: A reader from MA, USA

I have one query that is used to search on products on website and uses pagination.
I have read your articles on pagination and Top N query, where you have described both the option.

http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html
http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html

I checked the following 2 approaches and need your opinion on which one is better.

Approach 1 :
SELECT PKID
      ,NAME
      ,PRODUCT_NUMBER
      ,PROD_TYPE
      ,RENEWABLE_FLAG
      ,FROM_DATE
      ,TO_DATE
FROM   (SELECT ROWNUM RNUM
              ,V.*
          FROM (SELECT   PKID
                        ,NAME
                        ,PRODUCT_NUMBER
                        ,PROD_TYPE
                        ,RENEWABLE_FLAG
                        ,FROM_DATE
                        ,TO_DATE
                    FROM PRODUCT PRD
                   WHERE 1 = 1 AND PROD_TYPE = 'Sell'
                ORDER BY UPPER (NAME) DESC) V
         WHERE ROWNUM <= :V_CNT2)
 WHERE RNUM >= :V_CNT;

Plan for one time execution
SELECT STATEMENT CHOOSE Cost: 19 Bytes: 20,655 Cardinality: 15
5 VIEW UHC_USER. Cost: 19 Bytes: 20,655 Cardinality: 15
4 COUNT STOPKEY
3 VIEW UHC_USER. Cost: 19 Bytes: 23,188 Cardinality: 17
2 TABLE ACCESS BY INDEX ROWID TABLE UHC_USER.PRODUCT Cost: 19 Bytes: 832,910,169 Cardinality: 3,050,953
1 INDEX FULL SCAN DESCENDING INDEX UHC_USER.PROD_UPP_NAME_IX Cost: 3 Cardinality: 17
34 msecs

Plan for 100000 executions
SELECT STATEMENT CHOOSE Cost: 197,270 Bytes: 4,201,162,281 Cardinality: 3,050,953
5 VIEW UHC_USER. Cost: 197,270 Bytes: 4,201,162,281 Cardinality: 3,050,953
4 COUNT STOPKEY
3 VIEW UHC_USER. Cost: 197,270 Bytes: 4,161,499,892 Cardinality: 3,050,953
2 SORT ORDER BY STOPKEY Cost: 197,270 Bytes: 832,910,169 Cardinality: 3,050,953
1 TABLE ACCESS FULL TABLE UHC_USER.PRODUCT Cost: 17,765 Bytes: 832,910,169 Cardinality: 3,050,953
5.61 Seconds

Approach 2

SELECT PKID
,NAME
,PRODUCT_NUMBER
,PROD_TYPE
,RENEWABLE_FLAG
,FROM_DATE
,TO_DATE
FROM (SELECT /*+ FIRST_ROWS(10) */
PKID
,NAME
,PRODUCT_NUMBER
,PROD_TYPE
,RENEWABLE_FLAG
,FROM_DATE
,TO_DATE
,ROW_NUMBER () OVER (ORDER BY UPPER (NAME) DESC) AS RNUM
FROM PRODUCT P
WHERE PROD_TYPE = 'Sell')
WHERE RNUM BETWEEN :V_CNT AND :V_CNT2

Plan for one time execution
SELECT STATEMENT HINT: FIRST_ROWS Cost: 29 Bytes: 158,301 Cardinality: 27
5 SORT ORDER BY Cost: 29 Bytes: 158,301 Cardinality: 27
4 VIEW UHC_USER. Cost: 28 Bytes: 158,301 Cardinality: 27
3 WINDOW NOSORT STOPKEY Cost: 28 Bytes: 7,371 Cardinality: 27
2 TABLE ACCESS BY INDEX ROWID TABLE UHC_USER.PRODUCT Cost: 28 Bytes: 832,910,169 Cardinality: 3,050,953
1 INDEX FULL SCAN DESCENDING INDEX UHC_USER.PROD_UPP_NAME_IX Cost: 3 Cardinality: 27
109 msecs

Plan for 100000 executions
SELECT STATEMENT HINT: FIRST_ROWS Cost: 15 Bytes: 70,356 Cardinality: 12
6 SORT ORDER BY Cost: 15 Bytes: 70,356 Cardinality: 12
5 VIEW UHC_USER. Cost: 14 Bytes: 70,356 Cardinality: 12
4 WINDOW NOSORT Cost: 14 Bytes: 3,276 Cardinality: 12
3 FILTER
2 TABLE ACCESS BY INDEX ROWID TABLE UHC_USER.PRODUCT Cost: 14 Bytes: 832,910,169 Cardinality: 3,050,953
1 INDEX FULL SCAN DESCENDING INDEX UHC_USER.PROD_UPP_NAME_IX Cost: 3 Cardinality: 12

8.55 Seconds

Thanks,
Tom Kyte

Followup  

July 23, 2010 - 9:06 am UTC

I don't know what you mean by "plan for 100,000 executions" - what does that mean exactly?


July 26, 2010 - 2:05 am UTC

Reviewer: A reader

Sorry for the confusion. I meant the plan for executing the same query 100,000 times. The range used to pick the rows changed each time through the loop, and after running the query in a loop 100,000 times, the plan changed.

So, the "plan for one time execution" is from the first time you run the query, when the range is rows 1 to 10, and the "plan for 100000 executions" (it should say "after") is from running the query after 100,000 executions, when the range is, say, rows 100001 to 100010.


Tom Kyte

Followup  

July 26, 2010 - 8:24 am UTC

sorry, but I did not follow what you are saying here at all.

July 26, 2010 - 8:32 am UTC

Reviewer: A reader

Ok. In the above,
Plan for one time execution --> the execution plan for the SQL in the beginning
Plan for 100000 executions --> the plan after 100,000 executions of the same SQL.

Tom Kyte

Followup  

July 26, 2010 - 8:39 am UTC

what does that mean - I'll keep saying that until you explain it better.


For me, the plan for a one time execution = plan for 100,000 executions - there is only one plan. We "share sql", it is our design.
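That sharing is observable (an editor's aside, assuming you can query v$sql): every execution of the same statement text in the same environment reuses one child cursor, and each child carries exactly one plan.

```sql
-- One child with EXECUTIONS = 100000 means one shared plan throughout;
-- multiple children, or a change in PLAN_HASH_VALUE, means something
-- forced a new plan - a shared pool flush, new statistics, bind peeking...
select sql_id, child_number, plan_hash_value, executions
  from v$sql
 where sql_text like 'SELECT PKID%';   -- match your statement text
```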

Plan change

July 26, 2010 - 9:58 am UTC

Reviewer: E Aviles from Orlando, FL

"For me, the plan for a one time execution = plan for 100,000 executions - there is only one plan. We "share sql", it is our design."

True but I think what he means is that the plan changed for the same SQL due to different inputs on the loop. That's how I interpret his explanation.


Tom Kyte

Followup  

July 26, 2010 - 10:01 am UTC

then I don't agree with his analysis until I see some supporting information.

But, I don't get that from what he said - he shows a single plan for "single execution" and another single plan for "100000" executions.

Unless he did something that would cause the plan to be flushed from the shared pool (gathered/set statistics, added an index, flush shared pool, run lots of other things that flood shared pool causing his sql to age out, etc) - the plan would be the plan. We'd expect to see "misses in library cache during execution = <something greater than zero>" in the tkprof report if that were the case.

July 26, 2010 - 12:45 pm UTC

Reviewer: A reader

Thanks Aviles.

Tom,
Thanks again for your reply.

Yes, I did run some more queries and flushed the shared pool, but what I don't understand is how the plans for approach #1 and #2 behaved differently after flushing the shared pool and running a lot of other queries.

Also, is it expected behavior for a plan to change after flushing the shared pool? I mean, I did not insert any records or collect statistics; I ran only a few select statements.

Also, if you can, tell me which approach is better in your opinion? Or are both fine and I shouldn't worry about it at all?

Thanks in advance


Tom Kyte

Followup  

July 27, 2010 - 12:06 pm UTC

bind peeking.

http://www.google.com/search?q=site%3Atkyte.blogspot.com+bind+peeking



July 26, 2010 - 12:57 pm UTC

Reviewer: A reader


SET serveroutput on;

DECLARE
   V_CNT    NUMBER;
   V_CNT2   NUMBER;
   L_START  NUMBER DEFAULT DBMS_UTILITY.GET_TIME;

   TYPE RC IS REF CURSOR;

   L_RC RC;
BEGIN
   FOR I IN 1 .. 100000
   LOOP
      V_CNT  := I;
      V_CNT2 := I + 10;

      -- note: inside static PL/SQL the declared variables are referenced
      -- directly (V_CNT), not as bind placeholders (:V_CNT)
      OPEN L_RC FOR
         SELECT *
           FROM (SELECT /*+ FIRST_ROWS(10) */
                        P.*,
                        ROW_NUMBER () OVER (ORDER BY UPPER (NAME) DESC) AS RNUM
                   FROM PRODUCT P
                  WHERE PROD_TYPE = 'Sell')
          WHERE RNUM BETWEEN V_CNT AND V_CNT2
          ORDER BY RNUM;
      CLOSE L_RC;

      OPEN L_RC FOR
         SELECT PKID, NAME, PRODUCT_NUMBER, PROD_CLASS_PKID, VERSION,
                PROD_FAMILY_PKID, PROD_TYPE, RENEWABLE_FLAG, FROM_DATE,
                TO_DATE, TERM_QTY, TERM_UNIT
           FROM (SELECT ROWNUM RNUM, V.*
                   FROM (SELECT PKID, NAME, PRODUCT_NUMBER, PROD_CLASS_PKID,
                                VERSION, PROD_FAMILY_PKID, PROD_TYPE,
                                RENEWABLE_FLAG, FROM_DATE, TO_DATE,
                                TERM_QTY, TERM_UNIT
                           FROM PRODUCT PRD
                          WHERE PROD_TYPE = 'Sell'
                          ORDER BY UPPER (NAME) DESC) V
                  WHERE ROWNUM <= V_CNT2)
          WHERE RNUM >= V_CNT;
      CLOSE L_RC;
   END LOOP;

   DBMS_OUTPUT.PUT_LINE ( ROUND ((DBMS_UTILITY.GET_TIME - L_START) / 100, 2)
                          || ' Seconds...' );
END;
/
Tom Kyte

Followup  

July 27, 2010 - 12:09 pm UTC

very pretty? Not sure what it is about..

Pagination Approach

July 28, 2010 - 1:31 am UTC

Reviewer: A reader

Hi Tom,

Which method from the following 2 is better for pagination ?

Method 1
========

http://www.oracle.com/technology/oramag/oracle/07-jan/o17asktom.html

select *
 from (
select /*+ first_rows(25) */
  your_columns,
  row_number() 
  over (order by something unique)rn
 from your_tables )
where rn between :n and :m 
order by rn;


Method 2
========

http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html

select * 
  from ( select /*+ FIRST_ROWS(n) */ 
  a.*, ROWNUM rnum 
      from ( your_query_goes_here, 
      with order by ) a 
      where ROWNUM <= 
      :MAX_ROW_TO_FETCH ) 
where rnum  >= :MIN_ROW_TO_FETCH;

Tom Kyte

Followup  

July 28, 2010 - 7:41 am UTC

I'm old school, I prefer the second approach. The current releases of the optimizer usually see the two as "the same".

analytics to get count ?

July 29, 2010 - 3:56 am UTC

Reviewer: Jignesh from Mumbai, India.

Hi Tom,

In continuation with the fact that counting the rows from result set is usually a costly affair, how about using analytics to get the count.

Below is the modified query which was posted in original question, to get the count using max() function.

select * from (select /*+ first_rows(50) */ id,col1,col2,
row_number() over
(order by id DESC) rn ,
max(rownumber) over ( order by null) cnt
FROM table1) where rn between :from_N and :to_M order by rn;

I checked the cost of the plan and it doesn't seem to add much to the cost of the original query; the cost increased merely by 1. Probably because, in order to get a rowset of rows N to M, the original query too had to traverse all the rows and then pick the rows numbered N to M from the ordered set?

Thank you.
Tom Kyte

Followup  

July 29, 2010 - 9:38 am UTC

rownumber is what exactly???

July 29, 2010 - 4:32 am UTC

Reviewer: Jignesh from Mumbai, India.

Sorry for the typo; in the query it should be like -

max(rownum) over ( order by null ) cnt

Tom Kyte

Followup  

July 29, 2010 - 9:43 am UTC

that would be entirely inefficient, it would have to retrieve the entire result set.

The cost = a guess. Reality = hugely different

ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2> drop table t;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create table t
ops$tkyte%ORA11GR2> ( id not null, col1, col2, data )
ops$tkyte%ORA11GR2> as
ops$tkyte%ORA11GR2> select rownum id, rownum col1, rownum col2, rpad( '*', 2000, '*' )data
ops$tkyte%ORA11GR2>   from dual
ops$tkyte%ORA11GR2>  connect by level <= 1000;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create index t_idx on t(id);
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> variable from_n number
ops$tkyte%ORA11GR2> variable to_M number
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> exec :from_n := 1; :to_M := 5

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace on
ops$tkyte%ORA11GR2> select * from (select /*+ first_rows(50) */ id,col1,col2,
  2  row_number() over
  3      (order by id DESC) rn
  4  FROM t) where rn between :from_N  and :to_M order by rn;

        ID       COL1       COL2         RN
---------- ---------- ---------- ----------
      1000       1000       1000          1
       999        999        999          2
       998        998        998          3
       997        997        997          4
       996        996        996          5


Execution Plan
----------------------------------------------------------
Plan hash value: 788328359

-----------------------------------------------------------------------------------------
| Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |       |   945 | 49140 |    21   (5)| 00:00:01 |
|   1 |  SORT ORDER BY                  |       |   945 | 49140 |    21   (5)| 00:00:01 |
|*  2 |   VIEW                          |       |   945 | 49140 |    20   (0)| 00:00:01 |
|   3 |    WINDOW NOSORT                |       |   945 | 36855 |    20   (0)| 00:00:01 |
|*  4 |     FILTER                      |       |       |       |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID| T     |   945 | 36855 |    20   (0)| 00:00:01 |
|   6 |       INDEX FULL SCAN DESCENDING| T_IDX |    50 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("RN">=TO_NUMBER(:FROM_N) AND "RN"<=TO_NUMBER(:TO_M))
   4 - filter(TO_NUMBER(:FROM_N)<=TO_NUMBER(:TO_M))

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads
          0  redo size
        707  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          5  rows processed

ops$tkyte%ORA11GR2> select * from (select /*+ first_rows(50) */ id,col1,col2,
  2  row_number() over
  3      (order by id DESC) rn ,
  4  max(rownum) over ( order by null) cnt
  5  FROM t) where rn between :from_N  and :to_M order by rn;

        ID       COL1       COL2         RN        CNT
---------- ---------- ---------- ---------- ----------
      1000       1000       1000          1       1000
       999        999        999          2       1000
       998        998        998          3       1000
       997        997        997          4       1000
       996        996        996          5       1000


Execution Plan
----------------------------------------------------------
Plan hash value: 1222234556

------------------------------------------------------------------------------------------
| Id  | Operation                        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |       |   945 | 61425 |    21   (5)| 00:00:01 |
|   1 |  SORT ORDER BY                   |       |   945 | 61425 |    21   (5)| 00:00:01 |
|*  2 |   FILTER                         |       |       |       |            |          |
|*  3 |    VIEW                          |       |   945 | 61425 |    20   (0)| 00:00:01 |
|   4 |     WINDOW BUFFER                |       |   945 | 36855 |    20   (0)| 00:00:01 |
|   5 |      COUNT                       |       |       |       |            |          |
|   6 |       TABLE ACCESS BY INDEX ROWID| T     |   945 | 36855 |    20   (0)| 00:00:01 |
|   7 |        INDEX FULL SCAN DESCENDING| T_IDX |    50 |       |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(TO_NUMBER(:FROM_N)<=TO_NUMBER(:TO_M))
   3 - filter("RN">=TO_NUMBER(:FROM_N) AND "RN"<=TO_NUMBER(:TO_M))

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        338  consistent gets
          0  physical reads
          0  redo size
        770  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed

ops$tkyte%ORA11GR2> set autotrace off

Pagination with row selection

July 29, 2010 - 11:27 am UTC

Reviewer: Deepak from India

Hi Tom,

We have the following requirement.

1> We have a table having 3 million rows (table segment size is 3GB).
2> We need to display a page of information (e.g., 1000 rows) in the application along with the page # with respect to the entire data and the sort order.
3> User can click on the column headers to sort on any column on the page. Hence with every sort request we need to execute the query to get the new sort order and display a page of 1000 rows.
4> Also if the user has selected(or highlighted) any row on the page and changes the sort order then the page which the selected row belongs to needs to be shown. This behavior is similar to MS Outlook application where the selected item remains selected even when the sort order is changed.

The problem is that we always need to scan the entire table and in case of #4 we execute 2 queries (1st to get the position of the row and next to get the data). The query takes approximately 120 secs which is not acceptable to the user.

As observed from the explain plan of the queries the sorting operation has the highest cost and the sorting happens entirely in memory.

I am not sure if any optimizations can be done so that the query execution time can be brought down to less than a minute.

Can you please help by providing your comments to improve performance.

Apologies if the problem is not very clear.


Tom Kyte

Followup  

July 29, 2010 - 12:29 pm UTC

well, if you have an index on each of the columns you permit them to sort on, getting the first page should be super fast

Just query:


select /*+ first_rows(1000) */ * 
  from (select rownum rnum, a.*
          from (select * from t order by that_column)
         where rownum <= (:page)*1000 )
 where rnum >= (:page-1)*1000
/

You maintain the page number so that part is easy. Anytime they click "sort" you start at page one again. That will be instantaneous (assuming THAT_COLUMN is NOT NULL - if it is nullable, you would want an index on (that_column,0) so we can use the index to retrieve the data "sorted")

Getting page 1, 2, 3, ... N (where N is some reasonable number) will be fast.



As for the second requirement - that is going to be slow if you want to figure out what "page" of 1,000's it is on. Remember - outlook has to sort how many records? We have to manage how many? Taking a feature from a toy that doesn't have to scale and trying to make it scale is not always feasible.

However, you don't need to do that, do you? You just need to show that row and give the user the ability to page back/forth.

So you would code:

select /*+ first_rows(1000) */ * 
  from (select rownum rnum, a.*
          from (select * from t 
                 WHERE <b>that_column >= :that_column </b>
                 order by that_column)
         where rownum <= (:page)*1000 )
 where rnum >= (:page-1)*1000
/


that would be page 1 - even though it is in the middle. If you needed to know what "page" it really was (as bad an idea as having a page 100 on google, read above), you would have to

select count(*) from t where table_column <= :that_column

first to figure out how many rows you skipped over, and hence what "page" you are on


Now, the query:

select /*+ first_rows(1000) */ * 
  from (select rownum rnum, a.*
          from (select * from t 
                 WHERE <b>that_column >= :that_column </b>
                 order by that_column)
         where rownum <= (:page)*1000 )
 where rnum >= (:page-1)*1000
/


lets you page forward from page 1 - it will be page 1 in your code, the end user might think of it as some other page but it is page 1. You should be able to figure out now how to get page 0, -1 and so on to let them page back from that based on this logic...
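To make that last hint concrete (an editor's sketch under the same assumptions as above): page "backward" from the selected value by flipping both the predicate and the sort direction, then re-sorting the rows for display.

```sql
-- "Page 0": the 1000 rows immediately BEFORE :that_column.
-- Flip the comparison and the order, take the first 1000,
-- then re-sort ascending for display.
select /*+ first_rows(1000) */ *
  from (select * from t
         where that_column < :that_column
         order by that_column DESC)
 where rownum <= 1000
 order by that_column
/
```

As with the forward query, ties on that_column need a deterministic tiebreak (e.g. rowid) to page reliably.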

Thanks

July 29, 2010 - 10:15 pm UTC

Reviewer: Deepak from India

Brilliant !!!

Thanks a lot Tom for the solution. I have one followup question on the solution #1.

Unfortunately, in our application all the columns (32 columns) are sortable - hence would it be a prudent idea to create an index on all the columns?
Tom Kyte

Followup  

August 02, 2010 - 7:57 am UTC

... hence would it be a prudent idea to create an index on all the columns? ...

the answer to that is "it depends"

What needs to be optimized here? (can insert/update performance suffer because of this - if so, consider it)

Do all columns have the same probability of being sorted on? (if not, might it be OK to educate the end users that some columns are really efficient, others not so efficient)

and so on.

re: Pagination with row selection

July 30, 2010 - 6:48 am UTC

Reviewer: Stew Ashton from Paris, France


Tom, you've taught elsewhere that the ORDER BY clause must be deterministic, so for potentially non-unique columns the ORDER BY must include the rowid. Shouldn't this also apply to the WHERE clauses that reference such a column? If table_column has the same value hundreds of times, then
select count(*) from t where table_column <= :that_column
could get you the wrong "page".

Unfortunately, adding the rowid to the ORDER BY changes the execution plan and causes a full table scan. For example,
SQL> drop table T;

Table dropped.

SQL> create table t(key1, val1, filler, constraint t_pk primary key(key1)) as
select level key1
, dbms_random.random val1
, rpad('*', 100, '*') filler
from dual connect by level <= 300000;
  2    3    4    5  
Table created.

SQL> create index t_idx_val1 on t(val1,0);

Index created.

SQL> begin
dbms_stats.gather_table_stats(ownname => user, tabname => 'T', estimate_percent => 100, cascade => TRUE);
end;
/
  2    3    4  
PL/SQL procedure successfully completed.

SQL> alter session set statistics_level='ALL';

Session altered.

SQL> variable page number
SQL> begin
:page := 1;
end;
/
  2    3    4  
PL/SQL procedure successfully completed.

SQL> select /*+ first_rows(1000) */ * 
  from (select rownum rnum, a.*
          from (select * from t order by val1) a
         where rownum <= (:page)*1000 )
where rnum >= (:page-1)*1000;

-- results omitted

SQL> select * from table(dbms_xplan.display_cursor(null,null,'IOSTATS LAST'));

SQL_ID  9g8k1ty04ph8w, child number 0
-------------------------------------
select /*+ first_rows(1000) */ *   from (select rownum rnum, a.*
   from (select * from t order by val1) a          where rownum <=
(:page)*1000 ) where rnum >= (:page-1)*1000

Plan hash value: 2609069190

-------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT               |            |      1 |        |   1000 |00:00:00.05 |    1072 |
|*  1 |  VIEW                          |            |      1 |   1001 |   1000 |00:00:00.05 |    1072 |
|*  2 |   COUNT STOPKEY                |            |      1 |        |   1000 |00:00:00.04 |    1072 |
|   3 |    VIEW                        |            |      1 |   1001 |   1000 |00:00:00.03 |    1072 |
|   4 |     TABLE ACCESS BY INDEX ROWID| T          |      1 |    300K|   1000 |00:00:00.02 |    1072 |
|   5 |      INDEX FULL SCAN           | T_IDX_VAL1 |      1 |   1001 |   1000 |00:00:00.01 |      72 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


   1 - filter("RNUM">=(:PAGE-1)*1000)
   2 - filter(ROWNUM<=:PAGE*1000)


25 rows selected.

SQL> select /*+ first_rows(1000) */ * 
  from (select rownum rnum, a.*
          from (select * from t order by val1, rowid) a
         where rownum <= (:page)*1000 )
 where rnum >= (:page-1)*1000;

-- results omitted

SQL> select * from table(dbms_xplan.display_cursor(null,null,'IOSTATS LAST'));

SQL_ID  0m11w2bbrn5th, child number 0
-------------------------------------
select /*+ first_rows(1000) */ *   from (select rownum rnum, a.*
   from (select * from t order by val1, rowid) a          where rownum
<= (:page)*1000 )  where rnum >= (:page-1)*1000

Plan hash value: 882605040

-------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------

|   0 | SELECT STATEMENT         |      |      1 |        |   1000 |00:00:02.72 |    4927 |
|*  1 |  VIEW                    |      |      1 |    300K|   1000 |00:00:02.72 |    4927 |
|*  2 |   COUNT STOPKEY          |      |      1 |        |   1000 |00:00:02.71 |    4927 |
|   3 |    VIEW                  |      |      1 |    300K|   1000 |00:00:02.70 |    4927 |
|*  4 |     SORT ORDER BY STOPKEY|      |      1 |    300K|   1000 |00:00:02.69 |    4927 |
|   5 |      TABLE ACCESS FULL   | T    |      1 |    300K|    300K|00:00:01.31 |    4927 |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


   1 - filter("RNUM">=(:PAGE-1)*1000)
   2 - filter(ROWNUM<=:PAGE*1000)
   4 - filter(ROWNUM<=:PAGE*1000)


26 rows selected.

Tom Kyte

Followup  

August 02, 2010 - 8:35 am UTC

I was actually thinking about that the other night in retrospect and was going to come back to this.

In short "yes", thanks!

paging

September 23, 2011 - 11:48 am UTC

Reviewer: A reader

Tom:

If I have a catalog table with 100,000 rows (new rows are added weekly but old data is mostly static), and I am displaying 500 rows per page using the pagination query.

If I need to display a dropdown list of all pages, with data (a "word" column) describing each page like this, so that when the user selects a page it displays those rows:

Page 1 (Row1 aaa - Row 499 bbb)
Page 2 (Row 500 ccc - Row 999 fff)
Page 3 (Row 1000 word - Row 1500 word)

How do you implement this (sql or plsql or dedicated table refreshed nightly with only those ROW entries)?
I don't think something like this would be efficient.

select word from (SUBQUERY for RESULTSET) where rownum in (1,499,500,999,....)

I know you don't like this, but it is a must-have requirement for one application.
Tom Kyte

Followup  

September 23, 2011 - 12:54 pm UTC

I would not implement this - for it requires actually executing the query to completion every time. I would point you to google and say "they are sort of the gold standard when it comes to displaying search results" - they do not do this (it would kill them and their response time). They guess at the number of rows, they optimize to get the first rows as fast as possible.


But if this is a static catalog table - then by all means do the materialization of the pages/data to show for each page. That would be efficient - run the query once for everyone - save the results - and use them over and over (why do you think this would not be efficient???)



paging

September 23, 2011 - 2:12 pm UTC

Reviewer: A reader

Tom:

I did not mean to say that an MV would be inefficient.

I meant that running this SQL for 100,000 records means adding a long WHERE clause, which is inefficient.

select word from (SUBQUERY for RESULTSET) where rownum in (1,499,500,999,1000,1499,1500....)



I have not tested this yet but if I have an index on that "word" column, it might be fast enough.
I might be stuck with a full scan though because of the subquery.


select * from
(select x.*, rownum rn from
(select * from your table
order by some_column) x
where rownum <= 100000)
where rn = 1 or mod(rn,500) in (0,499)


Google only gives you the total number of articles found. If you search for "oracle" you will get
288,000,000 results, with links to each page, and they only display 10 links at a time.

Tom Kyte

Followup  

September 23, 2011 - 3:17 pm UTC

but you would only be doing it once - if you HAVE to do it (which, if you are pushy enough, like me, maybe you don't :) ), then only do it once.

and you cannot do "where rownum in ( 1, 499, ... )", there isn't a rownum 499 if there isn't a rownum 2 first (and 3, and 4 and so on). You'd have to assign rownum to the result set AND THEN down select on that.

I'm not afraid of a full scan - you are doing it once. Indexes are not all goodness you know. 100,000 rows is teeny tiny anyway, it'll take a very short amount of time.


I know what google does, they do it perfectly :)
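Tom's correction above - assign ROWNUM (or row_number()) to the full ordered result set first, THEN down-select the boundary rows - can be prototyped outside Oracle. A minimal sketch in Python with sqlite3, using a hypothetical catalog table and a page size of 500 (all names here are made up for illustration, not from the thread):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table catalog (word text)")
# 2,000 rows in sorted order; page size of 500 gives 4 pages
con.executemany("insert into catalog values (?)",
                [("w%05d" % i,) for i in range(1, 2001)])

PAGE = 500
# assign row numbers over the full ordered set first, THEN keep only the
# page-boundary rows, mirroring: rn = 1 or mod(rn, 500) in (0, 499)
rows = con.execute("""
    select word, rn from (
        select word, row_number() over (order by word) as rn
        from catalog)
    where rn = 1 or rn % ? in (0, ? - 1)
""", (PAGE, PAGE)).fetchall()
```

The same shape works in Oracle with row_number() in the inner query; materializing its output once (Tom's suggestion) avoids re-running it for every page view.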

pagination

September 23, 2011 - 3:34 pm UTC

Reviewer: A reader

Tom:

Are you saying do it once using an MV?

If I do it in real time, I will have to run it on every page.
Or are you saying it should run quickly with an index?

I did a one-column test and it took ONE SECOND to find those rows. This is what the query does. 

create table T5 (
 x    number );
 
 begin
  for i in 1..100000 
  loop
    insert into T5(x) values (i);
   end loop;
 end;
 /
 
 alter table t5 add constraint pk_t5  primary key (x);
 
 
 
 
 SQL> select * from
   2    (select x.*, rownum rn from
   3    (select * from T5
   4    order by x) x
   5    where rownum <=  100000)
   6   where rn = 1 or mod(rn,500) in (0,499);
 
          X         RN
 ---------- ----------
          1          1
        499        499
        500        500
        999        999
       1000       1000
       1499       1499
       1500       1500
       1999       1999
       2000       2000
       2499       2499
       2500       2500
       2999       2999
       3000       3000
       3499       3499
       3500       3500
       3999       3999
       4000       4000
       4499       4499
       4500       4500
       4999       4999
       5000       5000
       5499       5499

 Elapsed: 00:00:01.08
 
 Execution Plan
 ----------------------------------------------------------
    0      SELECT STATEMENT Optimizer=CHOOSE
    1    0   VIEW
    2    1     COUNT (STOPKEY)
    3    2       VIEW
    4    3         INDEX (FULL SCAN) OF 'PK_T5' (UNIQUE)

Tom Kyte

Followup  

September 23, 2011 - 3:56 pm UTC

do it once - save it - query it over and over.

But if this is a static catalog table - then by all means do the materialization of the pages/data to show for each page. That would be efficient - run the query once for everyone - save the results - and use them over and over (why do you think this would not be efficient???)


I'm not afraid of a full scan - you are doing it once. Indexes are not all goodness you know. 100,000 rows is teeny tiny anyway, it'll take a very short amount of time.


save it so you don't have to burn resources when you really don't need to.

pagination

September 23, 2011 - 7:35 pm UTC

Reviewer: A reader

Tom:

Can you explain how Google makes its guess? When I do a keyword search on "oracle", don't they do a COUNT of the records that contain the keyword "oracle"? Or do you think they have an MV that stores the number 288,000,000 and is updated nightly?

I assume the links they post at bottom of webpage are just variables to pass to the main pagination query (MIN_ROWS, MAX_ROWS). right?


Tom Kyte

Followup  

September 24, 2011 - 1:46 pm UTC

they just start looking around and say "hey, there is a ton of hits - guess a big number"

I don't know how many times I've done a search and they tell me "you are looking at 1-10 of about 1,000" and I go to page 2 and they tell me "you are looking at 11-17 of 17"

You'd have to ask Google for their 'exact' routine - their algorithm. We can use things like Oracle Text (it has a 'guess the hit count' routine; I use it on AskTom), the estimated cardinality from the plan of the query - get it right from v$sql after you execute the query - or one of my favorite ways:

You are looking at rows 1-10 of more than 10 results, press page forward to go to next page. If you want to go to the last page, just order the data backwards from the ordering you have now and resubmit the search...


google pagination

September 23, 2011 - 8:03 pm UTC

Reviewer: A reader

Tom:

You are right - the "prev" and "next" buttons in Google are hints.

http://www.google.com/support/webmasters/bin/answer.py?answer=1663744

They also changed/updated their pagination method last week to include "View All":

http://searchenginewatch.com/article/2109932/Google-Introduces-New-Pagination-Tags-Pushes-View-All-Pages


Fantastic option

January 15, 2013 - 11:41 pm UTC

Reviewer: Amit Thakur

Thanks Tom for this fantastic option of mentioning the pages in the query itself saving a huge time on fetching all the rows and then filtering a few records based on page number.

regarding pagination

January 29, 2013 - 12:23 am UTC

Reviewer: shawreba from india

A full table scan is a very expensive way to show the total number of records returned by the query. But then on what basis is the 'next' option made available? If we are showing 10 records per page, how would the user know there are more records still to come?
Tom Kyte

Followup  

January 31, 2013 - 9:10 am UTC

I don't believe you read this article. read my original answer again.


if you are showing 10 records at a time, then you would fetch and display the first 10 records and if the result set was not exhausted - you would then also display a "next page" button (there are more rows, you know there is a next page)

if you only have 8 to display, do not show a next page button.


if you want - you can show the first page and if there is more - put up a link for the next say 10 pages.

In the event there are only 52 rows (six pages) and they ask to go to page 7, you would just tell them "sorry, apparently there are not 70 rows so here is the last page" and then only show them a link for pages 1..6 - just like google does ...
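The rule above - show a "next page" link only when the result set was not exhausted - is easy to implement by over-fetching a single row. A minimal sketch, assuming a Python/sqlite3 setup with a hypothetical 18-row table and a page size of 10 (names are illustrative, not from the original):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (id integer primary key)")
con.executemany("insert into t values (?)",
                [(i,) for i in range(1, 19)])  # 18 rows -> 2 pages of 10/8

PAGE_SIZE = 10

def fetch_page(page):
    """Fetch one extra row beyond the page; its presence means 'show next'."""
    offset = (page - 1) * PAGE_SIZE
    rows = con.execute(
        "select id from t order by id limit ? offset ?",
        (PAGE_SIZE + 1, offset)).fetchall()
    has_next = len(rows) > PAGE_SIZE       # an 11th row => next page exists
    return rows[:PAGE_SIZE], has_next

page1, more1 = fetch_page(1)   # full page, "next" button shown
page2, more2 = fetch_page(2)   # partial page, no "next" button
```

No total count is ever computed - exactly the previous/next scheme Tom describes, at the cost of fetching one extra row per page.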

"View Questions" link in Asktom home page

April 26, 2013 - 5:52 pm UTC

Reviewer: Rajeshwaran Jeyabal

Tom,

In your home page you have a link saying "click here to view your question", where the users enters his/her email id and click "go" button.

You display a drop-down having dynamic values like "row(s) 1-15 of 246", "16-30 of 246", etc.

1) Can you explain / show us the script for how you arrive at these dynamic values? (Say, based on the email, do you insert the matching records into a temp table and do a count(*) to get the total rows (246 in the above example)?)


-Rajesh
Tom Kyte

Followup  

April 26, 2013 - 6:14 pm UTC

APEX does all of the work for me. When they programmed it, they decided (since the result would be very small) to use an exact count. It is one of their report types.


you would just need to add "count(*) over ()" to your select list. I do not recommend it in general, only for very small results like that.

on the home page, it is always a "guess", an estimate as to the actual row count.
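The count(*) over () suggestion works in any engine with window functions. A minimal Python/sqlite3 illustration (table and data are hypothetical), showing that every fetched row carries the exact total, so a "rows 1-15 of 246" label needs only one query:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table q (email text, subject text)")
# 246 questions for one user, plus one row for someone else
con.executemany("insert into q values (?, ?)",
                [("a@x.com", "q%03d" % i) for i in range(246)] +
                [("b@x.com", "other")])

# the analytic count is computed over the filtered result set
# BEFORE the limit is applied, so each row carries the full total
rows = con.execute("""
    select subject, count(*) over () as total
    from q
    where email = 'a@x.com'
    order by subject
    limit 15
""").fetchall()
```

As Tom notes, this forces the query to completion to compute the total, so it is only sensible for small result sets like this one.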

order by and spatial questions

September 17, 2013 - 2:45 pm UTC

Reviewer: Gregory Stefanakos from Athens, Greece

Hi Tom

First of all, my deepest gratitude for the precious jewels you offer to all of us!
I've been using your site for quite some time and *every* time, I leave full of admiration and respect.


Regarding the pagination query: I've seen it in more than one 'incarnation', so, to make things clear, I'm referring to:
SELECT /*+ FIRST_ROWS */ *
  FROM ( SELECT /*+ FIRST_ROWS */ A.*, ROWNUM RNUM
           FROM ( SELECT /*+ FIRST_ROWS */ *
                    FROM the_table
                   WHERE if_applicable
                   ORDER BY a_uniquely_indexed_column_set ) A
          WHERE ROWNUM <= :max_val )
 WHERE RNUM > :min_val


We use this query to get search results from our application (Weblogic/ADF, on 11.2 DB/Linux).
Each time the user hits 'next' or 'prev', the paging values change accordingly (page size = 50 rows).


I have 2 questions:

1) At first, we used the pagination query, as it appears in http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html : It is WITHOUT the first_rows hints AND WITHOUT the order by
Q: Am I correct in saying that if the ORDER BY is missing from the innermost query AND the results span more than a couple of pages, there is a risk of not getting a correct answer (a row appearing in 2 separate pages, or missing from the result set altogether - I've come across both)?

2) And here's the tough part: There is a spatial part in our application, in which the pagination query is also used by Geoserver (the user does a search and the results are viewable on a map).
If the user searches by alphanumeric attributes, then all is OK.
If the query is a spatial one, eg (the innermost):
select id, ...
  from the_spatial_table
 where sdo_relate (table's_geom, a_window_drawn_by_the_user, 'mask=ANYINTERACT querytype=WINDOW') = 'TRUE'
 order by id

If run on its own, the optimizer uses the domain (spatial) index and everything is fine.
When this query gets 'encapsulated' to form the pagination query, however, the optimizer picks other (alphanumeric) indexes, which leads to HUGE execution time (I never had the patience to let it finish).
The correct way to run a spatial pagination query would be to use a /*+ INDEX(spatial_table spatial_index) */ hint for the innermost SELECT and keep the /*+ FIRST_ROWS */ hint for the outer two SELECTs. Unfortunately, the pagination query is in Geoserver's code. We can modify the code to put the /*+ FIRST_ROWS */ hint and the
ORDER BY
in, but passing in a different hint for each table is not so easy (doable, but not desirable).

I would very much like to have your input on this (these).

Thanks in advance (and, once again, a million thanks)
Greg

Tom Kyte

Followup  

September 24, 2013 - 11:06 pm UTC

1) look for deterministic in that article. If you do not have an order by, on a set of attributes that are unique, then the pagination is unreliable. If the rows are not sorted by something unique - there is no way to say "this is the 22nd row" for example. you need to have a deterministic query ordering

2) i don't have a quick answer for this. I don't see how this would use any other index - except maybe one on id? which isn't alphanumeric?

order by and spatial questions (2)

September 27, 2013 - 11:12 am UTC

Reviewer: Gregory Stefanakos from Athens, Greece

Hi Tom
Regarding your followup September 24, 2013 - 11pm UTC:

1) Your answer is exactly what I had thought (and warned my colleagues about).

2) As far as I know, there is no such thing as a 'unique spatial index' - which could be used as a form of id.
So, I guess I'm stuck with either of:
a)
Modifying Geoserver's code to pass the appropriate optimizer hints
This is definitely not the desirable way, because whatever modifications we make now may not be compatible with Geoserver's next release - let alone that it is not an easy modification in itself
or b)
Coming up with a 'clever' optimizer hint, that would use the spatial index (if the innermost query is a spatial one) and still deliver the first 50 rows fast enough.
Any contribution to the latter, would be immensely appreciated.

Thank you *very* much for the time you've put into this.

Regards
Greg

Pagination with ROWNUM vs Row limiting Clause in 12c

October 28, 2016 - 7:32 am UTC

Reviewer: Rajeshwaran Jeyabal

Team,

I was comparing the sql pagination performance with row limiting clause and ROWNUM approach.

The script to create the "big_table" is available at
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1132417600346069010

Loaded around 10M rows into "big_table" and created this additional index.

create index big_idx_00 on big_table(owner,created,id);

begin 
 dbms_stats.gather_table_stats(
  ownname=>user,
  tabname=>'BIG_TABLE',
  method_opt=>'for all indexed columns size 255',
  cascade=>true,
  no_invalidate=>false);
end;
/  

The script.sql file has these contents:

variable x varchar2(30)
variable n_max_rows number
variable n_min_rows number
exec :x := 'SYS';
exec :n_max_rows := 50;
exec :n_min_rows := 0;

set termout off

select /*+ gather_plan_statistics */ *
from big_table
where owner =:x
order by created,id
offset :n_min_rows rows
fetch next :n_max_rows rows only;
set termout on

select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

set termout off

select /*+ gather_plan_statistics */ *
from (
 select t1.*, rownum rn
 from (
  select * from big_table
  where owner =:x
  order by created,id
   ) t1
 where rownum <= :n_max_rows
  )
where rn > :n_min_rows;
set termout on
 
select * from table(dbms_xplan.display_cursor(format=>'allstats last'));


and the plan shows this

demo@ORA12C> @d:\script.sql
demo@ORA12C> variable x varchar2(30)
demo@ORA12C> variable n_max_rows number
demo@ORA12C> variable n_min_rows number
demo@ORA12C> exec :x := 'SYS';

PL/SQL procedure successfully completed.

demo@ORA12C> exec :n_max_rows := 50;

PL/SQL procedure successfully completed.

demo@ORA12C> exec :n_min_rows := 0;

PL/SQL procedure successfully completed.

demo@ORA12C>
demo@ORA12C> set termout off
demo@ORA12C>
demo@ORA12C> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  csr3q905y0thz, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from big_table where owner =:x
order by created,id offset :n_min_rows rows fetch next :n_max_rows rows
only

Plan hash value: 3030648028

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |           |      1 |        |     50 |00:00:02.65 |   48895 |  48879 |       |       |          |
|*  1 |  VIEW                        |           |      1 |   4557K|     50 |00:00:02.65 |   48895 |  48879 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK    |           |      1 |   4557K|     50 |00:00:02.65 |   48895 |  48879 | 31744 | 31744 |28672  (0)|
|*  3 |    FILTER                    |           |      1 |        |   4557K|00:00:01.63 |   48895 |  48879 |       |       |          |
|*  4 |     TABLE ACCESS STORAGE FULL| BIG_TABLE |      1 |   4557K|   4557K|00:00:01.18 |   48895 |  48879 |  1025K|  1025K| 3085K (0)|
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=CASE  WHEN (:N_MIN_ROWS>=0) THEN
              FLOOR(TO_NUMBER(TO_CHAR(:N_MIN_ROWS))) ELSE 0 END +:N_MAX_ROWS AND "from$_subquery$_002"."rowlimit_$$_rownumber">:N_MIN_ROWS))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "BIG_TABLE"."CREATED","BIG_TABLE"."ID")<=CASE  WHEN (:N_MIN_ROWS>=0) THEN
              FLOOR(TO_NUMBER(TO_CHAR(:N_MIN_ROWS))) ELSE 0 END +:N_MAX_ROWS)
   3 - filter(:N_MIN_ROWS<CASE  WHEN (:N_MIN_ROWS>=0) THEN FLOOR(TO_NUMBER(TO_CHAR(:N_MIN_ROWS))) ELSE 0 END +:N_MAX_ROWS)
   4 - storage("OWNER"=:X)
       filter("OWNER"=:X)


29 rows selected.

demo@ORA12C>
demo@ORA12C> set termout off
demo@ORA12C>
demo@ORA12C> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  frhfau9fxq5d4, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from (  select t1.*, rownum rn
from (   select * from big_table   where owner =:x   order by
created,id    ) t1  where rownum <= :n_max_rows   ) where rn >
:n_min_rows

Plan hash value: 3225192148

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |            |      1 |        |     50 |00:00:00.01 |       6 |      4 |
|*  1 |  VIEW                          |            |      1 |     50 |     50 |00:00:00.01 |       6 |      4 |
|*  2 |   COUNT STOPKEY                |            |      1 |        |     50 |00:00:00.01 |       6 |      4 |
|   3 |    VIEW                        |            |      1 |     50 |     50 |00:00:00.01 |       6 |      4 |
|   4 |     TABLE ACCESS BY INDEX ROWID| BIG_TABLE  |      1 |     50 |     50 |00:00:00.01 |       6 |      4 |
|*  5 |      INDEX RANGE SCAN          | BIG_IDX_00 |      1 |        |     50 |00:00:00.01 |       4 |      3 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN">:N_MIN_ROWS)
   2 - filter(ROWNUM<=:N_MAX_ROWS)
   5 - access("OWNER"=:X)


27 rows selected.

demo@ORA12C>


The ROWNUM approach makes use of the index and does less logical I/O, but the row-limiting clause doesn't - am I missing something here?

Even without bind variables in place, the row-limiting clause doesn't scale.

demo@ORA12C> @d:\script2.sql
demo@ORA12C> set termout off
demo@ORA12C>
demo@ORA12C> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  bu7ugm45j5qy6, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from big_table where owner =
'SYS' order by created,id fetch first 50 rows only

Plan hash value: 2679878340

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |      1 |        |     50 |00:00:02.46 |   48895 |  48879 |       |       |          |
|*  1 |  VIEW                       |           |      1 |     50 |     50 |00:00:02.46 |   48895 |  48879 |       |       |          |
|*  2 |   WINDOW SORT PUSHED RANK   |           |      1 |   4557K|     50 |00:00:02.46 |   48895 |  48879 | 15360 | 15360 |14336  (0)|
|*  3 |    TABLE ACCESS STORAGE FULL| BIG_TABLE |      1 |   4557K|   4557K|00:00:01.19 |   48895 |  48879 |  1025K|  1025K| 3085K (0)|
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=50)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "BIG_TABLE"."CREATED","BIG_TABLE"."ID")<=50)
   3 - storage("OWNER"='SYS')
       filter("OWNER"='SYS')


24 rows selected.

demo@ORA12C>
demo@ORA12C> set termout off
demo@ORA12C>
demo@ORA12C> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  8tvjbgpn1y5h0, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ * from ( select t1.*, rownum rn
from ( select * from big_table where owner='SYS' order by created, id
   ) t1 where rownum <=50     ) where rn > 0

Plan hash value: 3225192148

-------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |            |      1 |        |     50 |00:00:00.01 |       6 |
|*  1 |  VIEW                          |            |      1 |     50 |     50 |00:00:00.01 |       6 |
|*  2 |   COUNT STOPKEY                |            |      1 |        |     50 |00:00:00.01 |       6 |
|   3 |    VIEW                        |            |      1 |     50 |     50 |00:00:00.01 |       6 |
|   4 |     TABLE ACCESS BY INDEX ROWID| BIG_TABLE  |      1 |     50 |     50 |00:00:00.01 |       6 |
|*  5 |      INDEX RANGE SCAN          | BIG_IDX_00 |      1 |        |     50 |00:00:00.01 |       4 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN">0)
   2 - filter(ROWNUM<=50)
   5 - access("OWNER"='SYS')


26 rows selected.

demo@ORA12C>
demo@ORA12C>

Chris Saxon

Followup  

October 28, 2016 - 1:58 pm UTC

Add the first_rows(n) hint to avoid this issue:

http://blog.dbi-services.com/oracle-rownum-vs-rownumber-and-12c-fetch-first/

first_rows hint

October 28, 2016 - 8:59 am UTC

Reviewer: Rajeshwaran Jeyabal

I missed out the FIRST_ROWS(n) hint; once added, the row-limiting clause works fine - sorry to trouble you.

demo@ORA12C> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------
SQL_ID  6ggrmn7vfyuk0, child number 0
-------------------------------------
select /*+ first_rows(50) gather_plan_statistics */ * from big_table
where owner =:x order by created,id offset :n_min_rows rows fetch next
:n_max_rows rows only

Plan hash value: 3409143824

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |            |      1 |        |     50 |00:00:00.01 |       7 |      3 |
|*  1 |  VIEW                          |            |      1 |     50 |     50 |00:00:00.01 |       7 |      3 |
|*  2 |   WINDOW NOSORT STOPKEY        |            |      1 |     50 |     50 |00:00:00.01 |       7 |      3 |
|*  3 |    FILTER                      |            |      1 |        |     51 |00:00:00.01 |       7 |      3 |
|   4 |     TABLE ACCESS BY INDEX ROWID| BIG_TABLE  |      1 |     50 |     51 |00:00:00.01 |       7 |      3 |
|*  5 |      INDEX RANGE SCAN          | BIG_IDX_00 |      1 |        |     51 |00:00:00.01 |       5 |      3 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(("from$_subquery$_002"."rowlimit_$$_rownumber"<=CASE  WHEN (:N_MIN_ROWS>=0) THEN
              FLOOR(TO_NUMBER(TO_CHAR(:N_MIN_ROWS))) ELSE 0 END +:N_MAX_ROWS AND
              "from$_subquery$_002"."rowlimit_$$_rownumber">:N_MIN_ROWS))
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "BIG_TABLE"."CREATED","BIG_TABLE"."ID")<=CASE  WHEN
              (:N_MIN_ROWS>=0) THEN FLOOR(TO_NUMBER(TO_CHAR(:N_MIN_ROWS))) ELSE 0 END +:N_MAX_ROWS)
   3 - filter(:N_MIN_ROWS<CASE  WHEN (:N_MIN_ROWS>=0) THEN FLOOR(TO_NUMBER(TO_CHAR(:N_MIN_ROWS))) ELSE
              0 END +:N_MAX_ROWS)
   5 - access("OWNER"=:X)


31 rows selected.

demo@ORA12C>

Chris Saxon

Followup  

October 28, 2016 - 1:59 pm UTC

I see you've found it already :)

Pagination with filters on both parent and child tables

June 13, 2018 - 3:20 pm UTC

Reviewer: Rajeshwaran, Jeyabal

drop table t1 purge;
drop table t2 purge;

create table t1 as select * from all_objects order by dbms_random.random;
create table t2 as select t1.*, rownum as id from t1 order by object_id;

alter table t1 add constraint t1_pk primary key(object_id);
declare
 l_id int; 
begin 
 select max(id) 
 into l_id
 from t2;
 
 for i in 1..4
 loop
  insert into t2
  select t1.*, rownum + l_id
  from t1;
  l_id := l_id + sql%rowcount;
 end loop;
 commit;
end;
/ 
alter table t2 add constraint t2_fk 
foreign key(object_id)
references t1;

alter table t2 add constraint t2_pk
primary key(id);

create index t1_idx on t1(created);
create index t2_idx on t2(owner);

exec dbms_stats.gather_table_stats(user,'t1');
exec dbms_stats.gather_table_stats(user,'t2',degree=>4);


select *
from (
select /*+ first_rows(200) */ t1.owner,t1.object_id, t1.object_type, 
  t2.object_name, t2.created 
from t1, t2 
where t1.object_id = t2.object_id
and t1.created between to_date('01-Jan-2012','dd-mon-yyyy') and sysdate
and t2.owner ='SYS'
order by t2.created desc
 )
where rownum <= 200;


demo@ORA11G> @script.sql

Session altered.


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------
SQL_ID  d5uvhxx2k22mw, child number 0
-------------------------------------
select * from ( select /*+ first_rows(200) */ t1.owner,t1.object_id,
t1.object_type,   t2.object_name, t2.created from t1, t2 where
t1.object_id = t2.object_id and t1.created between
to_date('01-Jan-2012','dd-mon-yyyy') and sysdate and t2.owner ='SYS'
order by t2.created desc  ) where rownum <= 200

Plan hash value: 2138277504

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |        |      1 |        |    200 |00:00:00.40 |    7737 |       |       |          |
|*  1 |  COUNT STOPKEY                   |        |      1 |        |    200 |00:00:00.40 |    7737 |       |       |          |
|   2 |   VIEW                           |        |      1 |  12824 |    200 |00:00:00.40 |    7737 |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY         |        |      1 |  12824 |    200 |00:00:00.40 |    7737 | 70656 | 70656 |63488  (0)|
|*  4 |     FILTER                       |        |      1 |        |    185K|00:00:00.35 |    7737 |       |       |          |
|*  5 |      HASH JOIN                   |        |      1 |  12824 |    185K|00:00:00.32 |    7737 |    15M|  2898K|   18M (0)|
|   6 |       TABLE ACCESS BY INDEX ROWID| T2     |      1 |  12824 |    185K|00:00:00.13 |    6524 |       |       |          |
|*  7 |        INDEX RANGE SCAN          | T2_IDX |      1 |  12824 |    185K|00:00:00.03 |     392 |       |       |          |
|*  8 |       TABLE ACCESS FULL          | T1     |      1 |  84640 |  84640 |00:00:00.02 |    1213 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=200)
   3 - filter(ROWNUM<=200)
   4 - filter(SYSDATE@!>=TO_DATE(' 2012-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   5 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
   7 - access("T2"."OWNER"='SYS')
   8 - filter(("T1"."CREATED">=TO_DATE(' 2012-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T1"."CREATED"<=SYSDATE@!))


34 rows selected.

demo@ORA11G>


Team,

We have an application (a transactional system) running on an Oracle 11g database
that has a search screen where the users can pick the filters for a search.
They search using an item id and a date range (the item id is on the child table and the date range is on the parent table).

In the above example you can pretend OWNER is the item id and the CREATED column is the date range.

In the above plan, most of the logical I/O was against the step with Id = 6.
Can we inform the optimizer to stop retrieving rows from T2 into T1 once we cross the ROWNUM limit? Is that possible?
That way we would operate only on those 200 rows rather than the entire data set that matches the filter at the step with Id = 7.

Connor McDonald

Followup  

June 14, 2018 - 2:58 am UTC

Since you have predicates on both T1 and T2, I don't think the optimizer will be able to sort that out. Even with the FK, a matching row in T1 does not mandate a match in T2, and vice versa.

Even if it could, I think there would be difficulties because you are ordering by T2.CREATED and there's no index on that. So your options are:

1) drive first by T2 - must get 230k rows for owner='SYS' in order to sort by created date, and cannot eliminate rows earlier because I don't know if I'll get a match into T1

2) drive first by T1 - 70k matching rows - and I need to drive into T2 for *every* one of them to get the t2.created sequencing




Pagination with filters on both parent and child tables

June 14, 2018 - 6:21 am UTC

Reviewer: Rajeshwaran, Jeyabal

Thanks for the details.

I was planning to have a fast-refresh MView for this, something like this:

demo@ORA11G> create materialized view t1_t2_mv
  2  build immediate
  3  refresh fast on commit
  4  enable query rewrite
  5  as
  6  select t1.owner,t1.object_type, t1.created,
  7     t2.created as t2_created,
  8     t2.object_id as t2_object_id,
  9     t2.object_name as t2_object_name,
 10     t2.owner as t2_owner,
 11     t2.rowid as t2_rid,
 12     t1.rowid as t1_rid,
 13     t2.id
 14  from t1, t2
 15  where t1.object_id = t2.object_id ;

Materialized view created.
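For the CREATE MATERIALIZED VIEW above to succeed with REFRESH FAST ON COMMIT, materialized view logs with rowid must already exist on both base tables; they are not shown in the transcript, so presumably they were created earlier in the session. A sketch, assuming the T1/T2 names used here:

```sql
-- Fast refresh of a join mview requires a materialized view log
-- with rowid on every base table, created before the mview itself.
create materialized view log on t1 with rowid;
create materialized view log on t2 with rowid;
```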
demo@ORA11G> create index t1_t2_idx on t1_t2_mv ( t2_owner, created );

Index created.

demo@ORA11G> exec dbms_stats.gather_table_stats(user,'t1_t2_mv' );

PL/SQL procedure successfully completed.

demo@ORA11G> @script.sql

Session altered.


PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------
SQL_ID  gryfzb06rwfw9, child number 0
-------------------------------------
select * from ( select /*+ first_rows(200) */ t1.owner,t1.object_id,
t1.object_type,   t2.object_name, t2.created from t1, t2 where
t1.object_id = t2.object_id and t1.created between
to_date('01-Jan-2012','dd-mon-yyyy') and
to_date('01-Jan-2015','dd-mon-yyyy') and t2.owner ='SYS' order by
t1.created desc  ) where rownum <= 200

Plan hash value: 3816355685

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |           |      1 |        |    200 |00:00:00.01 |     172 |
|*  1 |  COUNT STOPKEY                           |           |      1 |        |    200 |00:00:00.01 |     172 |
|   2 |   VIEW                                   |           |      1 |    201 |    200 |00:00:00.01 |     172 |
|   3 |    MAT_VIEW REWRITE ACCESS BY INDEX ROWID| T1_T2_MV  |      1 |   3375 |    200 |00:00:00.01 |     172 |
|*  4 |     INDEX RANGE SCAN DESCENDING          | T1_T2_IDX |      1 |    201 |    200 |00:00:00.01 |       5 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=200)
   4 - access("T1_T2_MV"."T2_OWNER"='SYS' AND "T1_T2_MV"."CREATED"<=TO_DATE(' 2015-01-01 00:00:00',
              'syyyy-mm-dd hh24:mi:ss') AND "T1_T2_MV"."CREATED">=TO_DATE(' 2012-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))


29 rows selected.


Even with data changes in place, the mview remains fresh.

demo@ORA11G> delete from t2 where object_id = 20;

5 rows deleted.

demo@ORA11G> delete from t1 where object_id = 20;

1 row deleted.

demo@ORA11G> commit;

Commit complete.

demo@ORA11G> select mview_name,staleness,compile_state
  2  from user_mviews
  3  where mview_name ='T1_T2_MV';

MVIEW_NAME                     STALENESS           COMPILE_STATE
------------------------------ ------------------- -------------------
T1_T2_MV                       FRESH               VALID

demo@ORA11G> @script.sql

Session altered.


PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  gryfzb06rwfw9, child number 0
-------------------------------------
select * from ( select /*+ first_rows(200) */ t1.owner,t1.object_id,
t1.object_type,   t2.object_name, t2.created from t1, t2 where
t1.object_id = t2.object_id and t1.created between
to_date('01-Jan-2012','dd-mon-yyyy') and
to_date('01-Jan-2015','dd-mon-yyyy') and t2.owner ='SYS' order by
t1.created desc  ) where rownum <= 200

Plan hash value: 3816355685

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |           |      1 |        |    200 |00:00:00.01 |     172 |
|*  1 |  COUNT STOPKEY                           |           |      1 |        |    200 |00:00:00.01 |     172 |
|   2 |   VIEW                                   |           |      1 |    201 |    200 |00:00:00.01 |     172 |
|   3 |    MAT_VIEW REWRITE ACCESS BY INDEX ROWID| T1_T2_MV  |      1 |   3375 |    200 |00:00:00.01 |     172 |
|*  4 |     INDEX RANGE SCAN DESCENDING          | T1_T2_IDX |      1 |    201 |    200 |00:00:00.01 |       5 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=200)
   4 - access("T1_T2_MV"."T2_OWNER"='SYS' AND "T1_T2_MV"."CREATED"<=TO_DATE(' 2015-01-01 00:00:00',
              'syyyy-mm-dd hh24:mi:ss') AND "T1_T2_MV"."CREATED">=TO_DATE(' 2012-01-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))


29 rows selected.

demo@ORA11G>


Would you recommend having such a materialized view for this requirement in an OLTP system? Is it advisable to use a fast refresh mview for this?

I have heard/studied that mviews are only for DW systems. Your thoughts/inputs please.

Connor McDonald

Followup  

June 15, 2018 - 1:32 am UTC

"I have heard/studied that mviews are only for DW systems"

Definitely not. You pay a non-trivial license fee for Oracle, so *every* feature should be considered, evaluated and exploited if it can help meet your requirements.

So definitely explore the mview option. The key thing you'll want to be benchmarking is the impact of the materialized view logs on your transactional tables. If that is not an issue for your OLTP throughput, then this is a good candidate.

The next question to think about is how up to date you want the mview to be. The more current it is, the more costly it is to maintain. But there are lots of options to benchmark:
- on query computation, on commit, on statement, on a schedule, etc.
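These options can be sketched as below. This is a hedged illustration only: the mview names are invented, the SELECT is abbreviated to the join from the demo above, and the on-statement and on-query-computation (real-time mview) variants require Oracle 12.2 or later, so they would not apply to the 11g system in the question as-is:

```sql
-- On commit (as in the demo): maintained synchronously as part of
-- each transaction's commit -- always fresh, highest OLTP overhead.
create materialized view mv_on_commit
  refresh fast on commit
  as select t1.rowid t1_rid, t2.rowid t2_rid, t2.owner t2_owner, t1.created
     from   t1, t2
     where  t1.object_id = t2.object_id;

-- On a schedule: staleness bounded by the interval, minimal impact
-- on transactional commits.
create materialized view mv_scheduled
  refresh fast on demand
  start with sysdate next sysdate + 5/1440   -- refresh every 5 minutes
  as select t1.rowid t1_rid, t2.rowid t2_rid, t2.owner t2_owner, t1.created
     from   t1, t2
     where  t1.object_id = t2.object_id;

-- Real-time mview (12.2+): stored data is refreshed on demand, but
-- queries can still get up-to-date results by applying the log deltas
-- on the fly.  (REFRESH FAST ON STATEMENT is another 12.2+ variant.)
create materialized view mv_real_time
  refresh fast on demand
  enable on query computation
  as select t1.rowid t1_rid, t2.rowid t2_rid, t2.owner t2_owner, t1.created
     from   t1, t2
     where  t1.object_id = t2.object_id;
```

Each variant trades freshness against maintenance cost, which is exactly the dimension to benchmark against your OLTP throughput.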