Question and Answer

Tom Kyte

Thanks for the question, raghu.

Asked: May 02, 2000 - 1:20 pm UTC

Last updated: August 28, 2013 - 4:42 pm UTC

You Asked

I have a Search field in my form. When my user enters a
value in the Search field, I want to display the first 10 rows
which match the query, then the next 10..and so on. I will give
him 2 button (Previous 10 & Next 10) options. How do I go about it?



and Tom said...

You don't mention what the user interface is built using. I'll assume the WEB then and that state is not maintained between pages. There are a couple of methods.

o Let's say the query is something like "select * from T where x = :host_variable order by y" and the query is relatively 'fast'. I can then write a query:

select *
from ( select a.*, rownum r
from ( select *
from t
where x = :host_variable
order by y ) a
where rownum < :HigherBound )
where r > :LowerBound
/


That query gets all of the rows between LowerBound and HigherBound from the ordered result set in the innermost query. This only works in Oracle8i, release 8.1 and up (order by in subqueries was not supported until then).
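The bounds are usually derived from a page number and a page size. As a minimal sketch (the :p and :pagesize binds are my own names, not from the original answer):

```sql
-- For page :p (1-based) with :pagesize rows per page:
--   lower bound  = (:p - 1) * :pagesize
--   higher bound = :p * :pagesize + 1
-- e.g. page 2 with :pagesize = 10 returns rows 11..20.
select *
  from ( select a.*, rownum r
           from ( select *
                    from t
                   where x = :host_variable
                   order by y ) a
          where rownum < :p * :pagesize + 1 )
 where r > (:p - 1) * :pagesize
/
```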


o Let's say the query is like the above, however it takes a 'long' time to execute. I would set up a 'hits' table like the following:

create table hits( sess_id number, seqno number, rid rowid, primary key(sess_id,seqno) );


Then, when I wanted to search I would code something like:

declare
    cnt number := 0;
begin
    for y in ( select t.rowid rid from t where x = :host_variable order by y )
    loop
        cnt := cnt + 1;
        insert into hits values ( userenv('sessionid'), cnt, y.rid );
    end loop;
end;

You have to pass that userenv('sessionid') from page to page as a hidden variable to identify the result set.

And to display results from this I would query:


select * from t, hits
where t.rowid = hits.rid
and hits.seqno between :lowBound and :highBound
and hits.sess_id = :theSessionId
order by hits.sess_id, hits.seqno

Normally I keep another 'parent' table to go along with hits, and I store the session id, the TIME of the query and such in there as well. I put a foreign key from hits back to this parent with ON DELETE CASCADE and run a job (scheduled via dbms_job) to issue "delete from parent_table where time < sysdate-1/24" -- that cleans out queries older than 1 hour. Occasionally, I truncate the table to lower the high water mark as well.
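A minimal sketch of that parent-table-plus-job setup (all names here -- query_sessions, its columns, the interval -- are my assumptions, not from the original answer):

```sql
-- Hypothetical 'parent' table: one row per cached search.
create table query_sessions
( sess_id    number primary key,
  query_time date   not null
);

-- hits rows disappear automatically when the parent row is deleted.
create table hits
( sess_id number references query_sessions on delete cascade,
  seqno   number,
  rid     rowid,
  primary key ( sess_id, seqno )
);

-- Recurring cleanup via dbms_job: purge searches older than one hour.
variable jobno number
begin
    dbms_job.submit
    ( job       => :jobno,
      what      => 'delete from query_sessions where query_time < sysdate - 1/24;',
      next_date => sysdate,
      interval  => 'sysdate + 1/24' );
    commit;
end;
/
```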


followup to comment one below

The rowid is simply acting as a primary key; use the true primary key instead. Let's say you needed to do this for EMP and DEPT, you might do:

insert into hits
select :session_id, rownum, empno, deptno
  from ( select emp.empno, dept.deptno, emp.ename
           from emp, dept
          where emp.deptno = dept.deptno
          order by emp.ename );


Now, to display rows 100 - 110 you would:

select emp.ename, dept.dname
from hits, emp, dept
where emp.empno = hits.empno
and dept.deptno = hits.deptno
and sequence_col between :min and :max
and session_id = :session_id;


You can use the rowids of the underlying tables as well.
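As a sketch of that rowid variant (the table name hits_by_rowid and its columns are my invention for illustration): store each base table's rowid in the hits table and join straight back by rowid, avoiding the primary-key index lookups.

```sql
create table hits_by_rowid
( session_id number,
  seqno      number,
  emp_rid    rowid,
  dept_rid   rowid,
  primary key ( session_id, seqno )
);

insert into hits_by_rowid
select :session_id, rownum, emp_rid, dept_rid
  from ( select emp.rowid emp_rid, dept.rowid dept_rid
           from emp, dept
          where emp.deptno = dept.deptno
          order by emp.ename );

-- Display one page by joining back on the stored rowids:
select emp.ename, dept.dname
  from hits_by_rowid h, emp, dept
 where emp.rowid  = h.emp_rid
   and dept.rowid = h.dept_rid
   and h.seqno between :min and :max
   and h.session_id = :session_id
 order by h.seqno;
```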



Comments

Implementing Page wise Concept in web based technology

Shiva Ram Krishna Sistla, May 28, 2001 - 6:37 am UTC

Dear Sir,
Initially i tried with 'from select' option of yours, but that query as you said is a performance issue. ours is a financial project, where we using complex queries to retrieve data. Then i have gone for implementing ur second method that is hits table method. because of my complex queries i am not able to retrieve rowid field. Generally how do we go about retrieving Complex queries data, is there anyother method. Sir i have 'Previous', 'Next' and 'No. of records' button. if u want i can send my queries also. Thanks for ur help in advance. I require the solution very badly sir..

Still Study, June 18, 2001 - 10:56 am UTC


Extremely helpful hint

Phil Wheeler, October 05, 2001 - 6:41 am UTC

The key for me was the use of 2 levels of subquery, and renaming the rownum pseudocolumn. (select a.*, rownum r...)

This approach has led to an effective paging mechanism for my ASP application. The performance far exceeds an ASP-only approach, as only the rows of interest are transferred to the web server.

Without this article, I would not have hit upon the correct SQL to make this work, AND to retain the sort order. Thanks very much!

Can it be improved with temporary tables so that no archive logs are generated?

Mihail Daskalov, January 09, 2002 - 4:45 am UTC

Hi,
the second method with a table hits is what I like to use.
Can it somehow be implemented so that inserting into the hits table doesn't generate redo?

Can it be used together with global temporary tables in Oracle 8.1.7 and mod_plsql? How can the session be preserved so that the temporary table contents are not lost between clicks?

What will be the pros and cons of such a method.

Is it better to do it with Java Servlets with scrollable JDBC result sets?

Thanks a lot for all of your helpful responses.

Tom Kyte
January 09, 2002 - 8:55 am UTC

Temporary tables last for the life of a session. This solution is geared towards the stateless web environment, where a session comes and goes. A temporary table would be very inappropriate.

I would use a stateless environment personally, with the "real" table. Frankly, I find the times when I need to save the result set very few and far between. On AskTom here, for example, when you page -- I rerun the query. A search query should be optimized to find the first rows fast -- then you need not save anything.

I don't like having state maintained in the middle tier. It kills scalability. I can do more on this small machine without state management than you could by preserving that state in java.

Using the session id can be a bug.

Min-Koo, Seo., January 18, 2002 - 3:04 am UTC

Primary key of the table is 'SESSION_ID, SEQ'.
I think this could lead to a bug.

Let me see.
User 'A' logs in, does some query, and logs off.
User 'B' logs in and does some query.

What happens when the session IDs of user 'A' and user 'B' are equal?

We use DBMS_JOB to erase data which was populated 1 day ago. So, if the same SESSION_ID is generated by the web server within a day, user 'B' will not be able to run a query, because of the primary key constraint.

I think the page (jsp or asp or whatever) should include some logic to erase the data in the table.

Tom Kyte
January 18, 2002 - 4:41 pm UTC

How would the sessionids be the same? userenv('sessionid') is fed off of a sequence. They won't be the same.

I use this, I've used it, I will continue to use it -- it works.

There is NO way a page (jsp/asp or whatever) could include logic to "erase" it. You would defeat the entire purpose -- to cache the answer and be able to paginate rapidly. If you had a page that erased it -- you would lose your answer -- then what? You cannot RELY on a page erasing it after you are done -- there is nothing stopping me from jumping right out of your application and never getting to the "erase data" page.


Just use a unique sessionid -- very very very common in this web based environment. If you are using a servlet solution -- they use sessionids as well, typically in a cookie. You can use your own sequence, or you can use userenv('sessionid') (if you create a session/page as many solutions do)
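The roll-your-own sequence approach can be sketched like this (the sequence name is an assumption):

```sql
-- One id per search; carry it from page to page as a hidden form field.
create sequence search_session_seq;

select search_session_seq.nextval from dual;
```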

Very interesting

Yogeeraj, January 18, 2002 - 11:47 pm UTC

Can anyone just guide me through the process of creating the html page and deploy it on my web/application server?

i am using 9iAS, Oracle 8.1.7 on a Unix Box

thanks
yogeeraj

record paging

Mo, January 20, 2002 - 12:39 pm UTC

Tom:

1. How do you define :min and :max? Do you pre-define them in the PL/SQL program if we are doing html/plsql?

2. Also, I have a problem printing a question's answer when viewing each week's results. It seems to print all answers. Any idea on how to print only the selected question's page?

Thank you,


Tom Kyte
January 20, 2002 - 3:01 pm UTC

1) :min and :max are just bind variables, if it were a plsql procedure, they would just be parameters to the procedure:

create procedure p ( p_min in number, p_max in number, p_session_id in number )
as
begin
    for x in ( select emp.ename, dept.dname
                 from hits, emp, dept
                where emp.empno = hits.empno
                  and dept.deptno = hits.deptno
                  and sequence_col between P_MIN and P_MAX
                  and session_id = P_SESSION_ID )
    loop
    ......


2) well, I didn't really have that feature -- it was just one big page. So, I added a link "article only" after the date in the list. That'll take you to a specific page just for that article.

Implement Paging

Amarjeet Singh, February 27, 2002 - 4:52 pm UTC

The following sql was very helpful:

select *
from ( select a.*, rownum r
from ( select *
from t
where x = :host_variable
order by y ) a
where rownum < :HigherBound )
where r > :LowerBound

It works fine for smaller tables, and where the 'order by' clause is on a numeric field. One of my tables has 120,000 records. I have to sort (order by) that table by a column, service_name (varchar2(20)), and then display the page with a few records. It takes more than 1 minute in my localhost environment to execute that query. Could someone suggest a faster way of sorting when the field is not numeric?

Tom Kyte
February 28, 2002 - 9:12 am UTC

It works great -- if the rows from the innermost query can come back quickly.  It matters <b>not at all whether the datatype of the column in the order by is a number, date, string, whatever</b>.  If the proper indexes are in place and the first row of the innermost query can be returned before the last row is figured out -- it'll be fast.

Start with a table like this:

create table t as select * from all_objects;
insert /*+ append */ into t select * from t;
commit;
...
insert /*+ append */ into t select * from t;
commit;

Now run:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from t
  2  /

  COUNT(*)
----------
    274704
ops$tkyte@ORA817DEV.US.ORACLE.COM> create index t_owner_oname_idx on t(owner,object_name)
  2  /
ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics
  2  /
ops$tkyte@ORA817DEV.US.ORACLE.COM> 



Now, let's get the "first" page and the "last" page and see the time to get each:

ops$tkyte@ORA817DEV.US.ORACLE.COM> variable high number
ops$tkyte@ORA817DEV.US.ORACLE.COM> variable low  number
ops$tkyte@ORA817DEV.US.ORACLE.COM> variable host_variable varchar2(30)


ops$tkyte@ORA817DEV.US.ORACLE.COM> exec :high := 10; :low := 1; :host_variable := 'SYS'

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on explain
ops$tkyte@ORA817DEV.US.ORACLE.COM> set timing on
ops$tkyte@ORA817DEV.US.ORACLE.COM> select *
  2    from ( select a.*, rownum r
  3             from ( select /*+ FIRST_ROWS */ owner, object_name, created
  4                      from t
  5                     where owner = :host_variable
  6                     order by object_name ) a
  7            where rownum < :High )
  8   where r > :Low
  9  /

OWNER      OBJECT_NAME                    CREATED            R
---------- ------------------------------ --------- ----------
SYS        /1005bd30_LnkdConstant         28-AUG-01          2
SYS        /1005bd30_LnkdConstant         28-AUG-01          3
SYS        /1005bd30_LnkdConstant         28-AUG-01          4
SYS        /1005bd30_LnkdConstant         28-AUG-01          5
SYS        /1005bd30_LnkdConstant         28-AUG-01          6
SYS        /1005bd30_LnkdConstant         28-AUG-01          7
SYS        /1005bd30_LnkdConstant         28-AUG-01          8
SYS        /1005bd30_LnkdConstant         28-AUG-01          9

8 rows selected.
<b>
Elapsed: 00:00:00.01
</b>
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=272809 Card=10989 
   1    0   VIEW (Cost=272809 Card=10989 Bytes=615384)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=272809 Card=10989 Bytes=472527)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=272809 Card=10989 
   5    4           INDEX (FULL SCAN) OF 'T_IDX1' (NON-UNIQUE) (Cost=1364 



ops$tkyte@ORA817DEV.US.ORACLE.COM> set timing off
ops$tkyte@ORA817DEV.US.ORACLE.COM> exec :high := 100010; :low := 100000; :host_variable := 'SYS'

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> set timing on
ops$tkyte@ORA817DEV.US.ORACLE.COM> /

OWNER      OBJECT_NAME                    CREATED            R
---------- ------------------------------ --------- ----------
SYS        V_$HS_PARAMETER                28-AUG-01     100001
SYS        V_$HS_PARAMETER                28-AUG-01     100002
SYS        V_$HS_PARAMETER                28-AUG-01     100003
SYS        V_$HS_PARAMETER                28-AUG-01     100004
SYS        V_$HS_PARAMETER                28-AUG-01     100005
SYS        V_$HS_PARAMETER                28-AUG-01     100006
SYS        V_$HS_PARAMETER                28-AUG-01     100007
SYS        V_$HS_PARAMETER                28-AUG-01     100008
SYS        V_$HS_PARAMETER                28-AUG-01     100009

9 rows selected.
<b>
Elapsed: 00:00:06.68
</b>
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=272809 Card=10989 
   1    0   VIEW (Cost=272809 Card=10989 Bytes=615384)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=272809 Card=10989 Bytes=472527)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=272809 Card=10989 
   5    4           INDEX (FULL SCAN) OF 'T_IDX1' (NON-UNIQUE) (Cost=1364 




ops$tkyte@ORA817DEV.US.ORACLE.COM> set timing off
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off

<b>so, as you can see -- if you get the query such that the first row can be returned fast -- as in this example -- it'll be great.  If we have to find the entire result set (140,000 rows were SYS owned in my example here) AND THEN sort, it'll obviously take a bit.

Proper indexing -- and the judicious use of hints (this is one time where a hint might be called for -- to say "I want the first rows fastest" due to the nature of what you are doing) -- will make this fast.</b>

Unique session id

Vikas Sharma, July 09, 2002 - 1:24 pm UTC

Hi, Tom
I am using mod_plsql in my web application on 8.1.7.0. Kindly clarify the following.
When I call my search page, which is served by the Apache HTTP server, the userenv('sessionid') or sys_context('userenv','sessionid') value can be the same for two users. So if the value of the host variable (search criteria) in the select statement is different, then in my insert (insert into hits values ( userenv('sessionid'), cnt, y.rid )) the session id will be the same but rows will be inserted for both search criteria, which will cause undesirable results when paging.

As I understand it, when Apache sends the first request to the database it gets a session id. For a stateless application the session status becomes INACTIVE when the request is over, and for any other request by any other user (not another Oracle user -- I mean another user's request with the same DAD) the session id will be the same as long as this session exists in v$session. With ASP or JSP, every time you open a new window you get a unique session id, unlike this case.



Tom Kyte
July 10, 2002 - 6:52 am UTC

Then don't use userenv('sessionid').

create a sequence and use select seq.nextval from dual;


How to get the total count

Bala, July 18, 2002 - 9:18 am UTC

Hi Tom,

I am using your first example,
and i want to return the total count also.
I am using a function that returns a ref cursor.


Thanks


Tom Kyte
July 18, 2002 - 9:27 am UTC

the only way to get the total count is to actually fetch the last row. No one knows (not even Oracle) how many rows will be fetched until it actually fetches the last row.



Sorry for asking that question

Bala, July 18, 2002 - 9:52 am UTC

Hi Tom
I have gone through your other discussions about doing a count on the result set. I understand now why it's not a good idea...


Thanks

We will need N number of hits table, if we are having N number of searches

Siva, January 21, 2003 - 12:32 pm UTC

Paging seems to be one of the major issues in all Web apps.

To make sure we understand correctly:

We will need as many hits tables as there are searches.

The following insert into hits is for one search.

insert into hits
select SESSION_ID, rownum, empno, deptno
from ( select ename, dname
from emp, dept
where emp.deptno = dept.deptno
order by emp.ename );

If we have another search, we will need another hits table.


Tom Kyte
January 21, 2003 - 1:00 pm UTC

Only if you actually need to materialize the results into a table -- 99% of the time or more, you shouldn't. If it takes a long time to get that first page, most of the time you would not have been using it in an interactive application anyway.


paging

mo, January 21, 2003 - 2:42 pm UTC

Tom:

1. Can you implement method 1 for ref cursor. E.G.

l_query := 'select * from emp order by empno';

Do you append the string as:

l_query := 'select * from (select a.*, rownum r from ('||l_query||') a
where rownum < :highbound ) where r > :lowerbound';

open l_cursor for l_query;

2. For a table of 3000 records, would you use this method, or would you still check how fast the query is running? It seems strange that inline views would be slower than a two-table join?

Thank you,

Tom Kyte
January 21, 2003 - 2:58 pm UTC

1) it works for many queries -- yes. Consider:

scott@ORA920> variable x refcursor
scott@ORA920>
scott@ORA920> begin
2 open :x for
3 'select *
4 from ( select a.*, rownum r
5 from ( SELECT * FROM EMP ORDER BY EMPNO ) a
6 where rownum <= 5 )
7 where r >= 1';
8 end;
9 /

PL/SQL procedure successfully completed.

scott@ORA920> begin
2 open :x for
3 'select *
4 from ( select a.*, rownum r
5 from ( SELECT * FROM EMP, DEPT ) a
6 where rownum <= 5 )
7 where r >= 1';
8 end;
9 /
begin
*
ERROR at line 1:
ORA-00918: column ambiguously defined
ORA-06512: at line 2



If the query could have been used to create a view, this technique will work -- otherwise the original query MUST have uniquely named columns....

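A minimal sketch of working around that ORA-00918: alias the colliding column names in the innermost query so every column is uniquely named (the column list and the added join/order are illustrative, not from the original example):

```sql
-- EMP and DEPT both have a DEPTNO column; aliasing makes every
-- column name unique so the rownum wrapper can be applied.
begin
   open :x for
      'select *
         from ( select a.*, rownum r
                  from ( select emp.empno, emp.ename,
                                emp.deptno emp_deptno,
                                dept.deptno dept_deptno,
                                dept.dname
                           from emp, dept
                          where emp.deptno = dept.deptno
                          order by emp.empno ) a
                 where rownum <= 5 )
        where r >= 1';
end;
/
```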

2) Does not compute -- I don't understand what you are asking. What is the relationship between an inline view (which could contain a two-table join), a two-table join (which could itself be in an inline view), and 3,000 records (which is nothing)?

paging

mo, January 22, 2003 - 12:28 pm UTC

Tom:

Do you know why I cannot access the rownum 'R' column within the loop? The l_next statement keeps raising an error that it does not recognize r. I see it fine in sql*plus.


l_query := 'select * from (select a.*,rownum r from ('||l_query||') a
where rownum <= '||l_total||' ) where r >= '||p_start ;

htp.p('p_start = '||p_start);


OPEN l_cursor for l_query;
LOOP
FETCH l_cursor into l_rec;
exit when l_cursor%notfound;
l_next := l_rec.r = p_start+l_pagesize;

2. What I meant before is that why would select * from (select * from ) be slower than creating a hits table and loading it with records and making a join. It seems method 2 is more overhead but you say it is faster.

THank you,

Tom Kyte
January 22, 2003 - 12:48 pm UTC

1) umm, look at your definition of l_rec I guess.  Must not have R in it.

ops$tkyte@ORA920> declare
  2          l_rec emp%rowtype;
  3          l_query varchar2(255) := 'select emp.*, rownum r from emp';
  4          l_rc    sys_refcursor;
  5  begin
  6          open l_rc for l_query;
  7          fetch l_rc into l_rec;
  8          dbms_output.put_line( l_rec.ename );
  9  end;
 10  /
SMITH

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> declare
  2          l_rec emp%rowtype;
  3          l_query varchar2(255) := 'select emp.*, rownum r from emp';
  4          l_rc    sys_refcursor;
  5  begin
  6          open l_rc for l_query;
  7          fetch l_rc into l_rec;
  8          dbms_output.put_line( l_rec.r );
  9  end;
 10  /
        dbms_output.put_line( l_rec.r );
                                    *
ERROR at line 8:
ORA-06550: line 8, column 30:
PLS-00302: component 'R' must be declared
ORA-06550: line 8, column 2:
PL/SQL: Statement ignored


(this seems to be an exercise in programming??)


2) re-read the answer.  I said:

<b>
o Lets say the query is like the above however it takes a 'long' time to 
execute.  I would set up a 'hits' table like the following:
</b>

the query takes a long time to execute.  You only want to run it once -- not once per page.  different strokes for different folks. 

The Second Option

Siva, February 03, 2003 - 1:41 pm UTC

The first option solves most of the problem.

For the second option, I do not understand your earlier followup to my query.

--------------------------------
To make sure, we understand correctly.

We will need as many hit tables as there are searches.

The following hits table is for one search.

insert into hits
select SESSION_ID, rownum, empno, deptno
from ( select ename, dname
from emp, dept
where emp.deptno = dept.deptno
order by emp.ename );

If we have another search, we will need another hits table.


Followup:
only if you actually need to materialize the results into a table -- 99% or more
you shouldn't -- if it takes a long time to get that first page, most of the
times you would not have been using that in an interactive application anyway.
----------------------------------

To me, the second option can be implemented only by materializing the primary key of the result. How do you implement the second option without materializing the primary key?

Siva

Tom Kyte
February 03, 2003 - 1:50 pm UTC

That is what I did -- empno and deptno are the primary keys of emp and dept. The inline view should also have selected empno and deptno (not just ename and dname); omitting them was an oversight.

paging through results on the web

jose antonio morcillo, March 13, 2003 - 12:56 pm UTC

very useful method of paging

A reader, March 27, 2003 - 1:44 pm UTC

o Lets say the query is something like "select * from T where x = :host_variable
order by y" and the query is relatively 'fast'. I can then write a query:

select *
from ( select a.*, rownum r
from ( select *
from t
where x = :host_variable
order by y ) a <<<=====
where rownum < :HigherBound )
where r > :LowerBound
/

/***************************************/

hi tom, whenever I want the result set ordered, do we really need to operate on the whole result set?

Because if I am browsing the data page by page, it is most likely that I will not go to the last page (like google), so I don't want to fetch 1m-2m search results just to show the first 50 (but those must be sorted!!). So, just to sort, DO I HAVE TO fetch 1m-2m rows when my user is probably looking at the first 20-25 results?

TIA


Tom Kyte
March 27, 2003 - 2:17 pm UTC

Didn't you see the "and the query is relatively fast"?

I'm assuming that perhaps the query uses an index on T(x,y). That'll be "fast" to get the first rows via the index.

Also, Oracle has TOP-N optimizations (9i and up) whereby it doesn't build a 1,000,000 row result set AND THEN return the first 10. It just builds the first 10 and returns them -- as long as you write the query as I describe above, anyway.

how ?

A reader, March 27, 2003 - 3:40 pm UTC

If you don't fetch it, how would you sort it, and give the first 10 results sorted by name_key?

Please explain this technique!!

Tom Kyte
March 27, 2003 - 4:08 pm UTC

you run a top N query:

select *
from ( select ...
from ...
where ...
order by ... )
where rownum <= :n;


Oracle sees that and says "ah hah, only :n rows will be returned". It'll therefore just set up (this is conceptual, remember) an array of :n in size.

Then it starts running the query (without the order by). The first :n rows will be sorted into this array. When it gets the :n+1st row -- it'll see if it is less than the :nth row -- if so, it'll put it in the array and bump one of the existing rows out. Repeat this process for the rest of the rows.

So, instead of sorting and storing 1,000,000 rows -- we store :n rows and only keep those that "fit in there".




A reader, March 27, 2003 - 4:14 pm UTC

thanks tom,
(N-:N)
1.) so are we doing (n) comparisons/page ?
2.) are we doing it for each page(of say 20 records) ?



Tom Kyte
March 27, 2003 - 4:21 pm UTC

1) No. Say you had 100 rows. You would get the first 10 and sort them. Then the next 90 would come, and if all of them were > the last row -- done.

2) Obviously -- each time you run the query, the answer could be different.

Tell you what: rather than trying to talk yourself out of it -- benchmark it, apply the technique, observe the results. Reality counts, hypothetical doesn't. You might like it sometimes, you might not others. It is NOT a black and white decision.

A reader, March 27, 2003 - 4:46 pm UTC

1) no. say you had 100 rows. You would get the first 10 and sort them.

" Then the next 90 would come "

and if all of them were > the last row -- done.


/************************************************/

So after getting the 1st 10 rows and sorting them, we do have to compare the last row with the rest of the rows (90 rows), right?

How would you compare it without fetching it?



Tom Kyte
March 27, 2003 - 7:18 pm UTC

the server is doing all of this work for us.

Consider (this is 9i functionality here)


ops$tkyte@ORA920> create table t ( object_name char(2000), object_id number, owner varchar2(30) );
Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> insert /*+ append */ into t
  2  select object_name, object_id, owner
  3    from all_objects
  4   where rownum < 1001;
1000 rows created.


ops$tkyte@ORA920> commit;
Commit complete.



ops$tkyte@ORA920> alter session set sql_trace=true;
Session altered.


...
 10      for i in 1 .. 100
 11      loop
 12          for x in ( select *
 13                        from ( select * from t order by object_name )
 14                       where rownum <= 10 )
 15          loop
 16              null;
 17          end loop;
 18      end loop;
...
 24      for i in 1 .. 100
 25      loop
 26                  l_cnt := 0;
 27          for x in ( select * from t order by object_name )
 28          loop
 29              l_cnt := l_cnt+1;
 30              exit when l_cnt >= 10;
 31          end loop;
 32      end loop;
.....


and tkprof says:

select *
                      from ( select * from t order by object_name )
                     where rownum <= 10

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    100      0.00       0.01          0          0          0           0
Fetch     1100      1.74       1.81      33301      33803          1        1000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1201      1.75       1.83      33301      33803          1        1000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 309     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  COUNT STOPKEY
   1000   VIEW<b>
   1000    SORT ORDER BY STOPKEY</b>
 100000     TABLE ACCESS FULL T


<b>that step is the "magic" here..</b>


select *
from
 t order by object_name


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    200      0.04       0.04          0          0          0           0
Fetch     1000      5.81      16.66      34091      33800        200        1000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1201      5.86      16.71      34091      33800        200        1000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 309     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
   1000  SORT ORDER BY
 100000   TABLE ACCESS FULL T




 

Thanks,

A reader, March 28, 2003 - 10:29 am UTC

good morning tom,

thanks for your proof by example... :)

but it is still magic for me. Do you or metalink have the algorithm?

I am on 8.1.7 (linux 73)

Thanks,



Tom Kyte
March 28, 2003 - 3:57 pm UTC

have what algorithm?

A reader, March 28, 2003 - 4:21 pm UTC

that does

select * from ( select * from t order by object_name )
where rownum <= 10
faster

AND

select * from t order by object_name
slower ?


Tom Kyte
March 28, 2003 - 5:06 pm UTC

They are not faster nor slower -- they are different.

Above I explained conceptually what is happening; right below that, I benchmarked it.

If the goal is to get the first 10 rows AFTER sorting -- rownum is superior. It tells the db "hey, I'm only interested in 10 rows", so the db only generates those 10 rows for us -- much more efficient.

cool

A reader, March 28, 2003 - 5:35 pm UTC

thanks tom

Smart Sort

Logan Palanisamy, March 28, 2003 - 6:27 pm UTC

Here is further proof that it is actually sorting only the first N (e.g. 10, 100, 150) records, replacing entries among the kept N as the remaining rows are retrieved, to get the top N records:


  1* select * from (select * from t order by object_name) where rownum < 10
SQL> /

9 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   COUNT (STOPKEY)
   2    1     VIEW
   3    2       SORT (ORDER BY STOPKEY)
   4    3         TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
          0  recursive calls
         18  db block gets
        340  consistent gets
        334  physical reads
          0  redo size
      18932  bytes sent via SQL*Net to client
        424  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          9  rows processed

SQL> c/10/100/
  1* select * from (select * from t order by object_name) where rownum < 100
SQL> /

99 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   COUNT (STOPKEY)
   2    1     VIEW
   3    2       SORT (ORDER BY STOPKEY)
   4    3         TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
          0  recursive calls
         18  db block gets
        340  consistent gets
        334  physical reads
          0  redo size
     204516  bytes sent via SQL*Net to client
       1090  bytes received via SQL*Net from client
          8  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         99  rows processed

SQL> c/100/150/
  1* select * from (select * from t order by object_name) where rownum < 150
SQL> /

149 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   COUNT (STOPKEY)
   2    1     VIEW
   3    2       SORT (ORDER BY STOPKEY)
   4    3         TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
          0  recursive calls
         20  db block gets
        340  consistent gets
        372  physical reads
          0  redo size
     307579  bytes sent via SQL*Net to client
       1423  bytes received via SQL*Net from client
         11  SQL*Net roundtrips to/from client
          0  sorts (memory)
          1  sorts (disk)
        149  rows processed


If all 1000 records were being sorted, all three cases should have shown "disk" 
sorts. In the examples above, only the TOP 150 case does a disk sort. When I 
want the top 10 or 100, it does a memory sort, showing that Oracle sorts only 
the first 10 or 100 rows and replaces entries in that sorted list only when an 
incoming row belongs within the top N.
 

A reader, April 07, 2003 - 3:39 pm UTC

I have a similar paging query, but I also need to get the row count, to show the number of available records (like "page 1-25"). The user can change the page size (the following query uses 30) and the sort column (the example query uses column 1; the user can change it to any column).

I used the following query to get my results.

SELECT *
  FROM ( SELECT Z.*,
                RANK() OVER (ORDER BY First_column ASC, ROWNUM) AS RNK,
                MAX(ROWNUM) OVER (ORDER BY First_column ASC
                                  ROWS BETWEEN UNBOUNDED PRECEDING
                                           AND UNBOUNDED FOLLOWING) AS MAX_ROWNUM
           FROM (
                  -- My query to get the data
                ) Z )
 WHERE RNK BETWEEN 1 AND 30

Currently it is taking a long time when the data is huge. Is my query right? Or can you suggest a better method/query?

Thanks in advance.


Tom Kyte
April 07, 2003 - 4:43 pm UTC

yuck, i refuse to tell them exactly how many rows -- since it is wrong the split second you show it to them. look at google: "1 thru 10 of about..."

When your result is huge, it must materialize the ENTIRE RESULT SET. What a waste of resources. This is one practice that, if I could stomp it out, would save the world trillions of dollars in hardware costs. The amount of computing power we use to tell a user "hey, you selected way more data than you will ever actually look at" is amazing.

A reader, April 07, 2003 - 5:59 pm UTC

:) Thanks for your quick response. I got your point, and I will accept it for huge search results.

But most of the time my query is going to return 100-300 rows, although the original query accesses around 10 huge tables to get these results.

I need to show the definite row count when the results are around 100, so users can easily go to the required page.

Could you please comment based on the above scenario?


Tom Kyte
April 07, 2003 - 6:40 pm UTC

Your system is then doing (since a normal page is 10 rows) 10 to 30 times as much work as it should. That is yet another way to look at this: 10 to 30 times as much work as it should.

Do you want to buy a machine that is 10 to 30 times more powerful for this?

You got the money -- and the hardware -- go for it.


Me, I've always found that knowing:

a) I have some data to look at
b) there is more data to look at if I like
c) there are prior pages of data I've already looked at

is not only sufficient -- but downright pleasant. Look at the way this worked in client/server days -- we forgot something there. Forms, for example, fetched 15 rows -- STOP. When you tried to go to row 16, it fetched 15 more -- STOP.

Maybe what you need is a push button that says "show me about how many rows this search returns"; when the user hits it, you put up a dialog that says "deposit 25 cents to cover the cost of this query" and then you do a count(*) for them...


It's like Cary Millsap says: "The fastest way to speed something up is to not do it at all"

A reader, April 07, 2003 - 8:47 pm UTC

I do agree with your comments; it's not a good idea to show a row count for search results.

Could you please take a look at the following two queries and let me know which is better for paging? (I saw mixed results in my testing.)

1. Using rank analytical function

SELECT *
  FROM ( SELECT Z.*, RANK() OVER (ORDER BY First_column ASC, ROWNUM) AS RNK
           FROM (
                  -- My query to get the data
                ) Z )
 WHERE RNK BETWEEN 1 AND 10

2. Using rownum

select *
  from ( select a.*, rownum r
           from ( My Query
                  order by First_column ) a
          where rownum < :HigherBound )
 where r > :LowerBound

By the way, what if I need to show the list of users in a web-based application? Wouldn't you accept that it is a better idea to show the count of available users and provide a paging mechanism with 10 records per page?


Tom Kyte
April 07, 2003 - 8:59 pm UTC

rownum is faster, cheaper, better.

No, I don't agree (sorry -- thought I was fairly clear on that particular opinion!). All I need to know is:

a) I've some users to look at on my screen
b) there are more users to look at (or not)

that there are 123 users -- so?



Performance Issue

Jap Boon Churn, September 05, 2003 - 8:39 am UTC

Hi Tom,

A year or so ago, I read your reply to this question and found your idea of storing search results in a hits table great, so I applied the technique to my application.

I use this technique because each search takes quite some time (18 secs), and therefore I don't want the user to wait the same amount of time for every next page.


However, after some time, as our data grew, I found that inserting the search results into my hits table causes a performance issue. For example, I have a query that returns more than 3000 rows. When my users play around with the system by continuously submitting searches, it degrades database performance, because every search executes 3000 insert statements -- to say nothing of many concurrent submissions.

Is there any workaround for this, or somewhere I can fine-tune it?

Please advise.
Thanks.



Tom Kyte
September 05, 2003 - 3:58 pm UTC

quote from the original answer:


Normally I keep another 'parent' table to go along with hits and I store the
session id, the TIME of the query and such in there as well. I put a foreign
key from hits back to this parent with ON DELETE CASCADE and run a job
(scheduled via dbms_jobs) to issue "delete from parent_table where time <
sysdate-1/24" -- that cleans out queries older then 1 hour. Occasionally, i
truncate the table to lower the high water mark as well.



I clean the table out from time to time.
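The parent-table-plus-job cleanup quoted above might be set up roughly like this. This is a hedged sketch: all table, column, and job details are illustrative, not taken verbatim from the original answer.

```sql
-- Parent/child layout so one delete cascades away a whole result set
-- (names here are illustrative).
create table search_sessions (
  sess_id  number primary key,
  q_time   date not null
);

create table hits (
  sess_id  number references search_sessions on delete cascade,
  seqno    number,
  rid      rowid,
  primary key (sess_id, seqno)
);

-- Purge result sets older than one hour, rescheduling itself hourly.
variable jobno number
begin
  dbms_job.submit(
    job       => :jobno,
    what      => 'delete from search_sessions where q_time < sysdate - 1/24;',
    next_date => sysdate,
    interval  => 'sysdate + 1/24' );
  commit;
end;
/
```

The ON DELETE CASCADE means the job only ever touches the small parent table; the bulk of the hits rows go with it automatically.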

Use cellsprint or use pagination query

robert, December 25, 2003 - 3:46 pm UTC

>>select *
from ( select a.*, rownum r
from ( My Query
order by First_column ) a
where rownum < :HigerBound )
where r > :LowerBound
--------------------------
Merry Christmas, Tom.
I do pagination via owa_util.cellsprint.
I see you code the pagination by fetching the whole set and actually "skipping" unwanted records...

I am wondering if it's worth the effort to write a wrapper module that wraps the dynamic query string as in the query above, binds, and then fetches them... is there much to gain?

Thanks


Tom Kyte
December 25, 2003 - 4:55 pm UTC

when i first wrote cellsprint (i called it "cells_from_query") it was 1995 -- 7.3 was just being released :)

order by in a subquery would not exist for 4 more years.

If asked to do it over again, yes, I would use the pagination query.

throw a first_rows hint in there (trim the string, add the hint right after the keyword select).

wrap in the layers of inline views.

execute it (in fact, that is what BC4J does now in Java).
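The "wrap the dynamic string and execute it" steps above might look something like this in PL/SQL. A sketch only: the stand-in query text and bind values are hypothetical.

```sql
-- Hypothetical sketch: take a dynamically built query string, hint it,
-- and wrap it in the pagination inline views before executing.
declare
  l_query varchar2(4000) := 'select * from t order by object_name';  -- stand-in
  l_cur   sys_refcursor;
begin
  l_query := 'select * from ( select /*+ FIRST_ROWS */ a.*, rownum r ' ||
             '  from ( ' || l_query || ' ) a ' ||
             '  where rownum < :hi ) where r > :lo';
  open l_cur for l_query using 26, 0;   -- rows 1..25
  -- fetch from l_cur and render the rows as usual
end;
/
```

Because the page bounds are bound variables, one shared cursor serves every page of every search.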


more records to paginate..

robert, December 26, 2003 - 1:28 pm UTC

"cellsprint" tells you if there are more records....
How do I know that using the pagination query ?

thanks

Tom Kyte
December 26, 2003 - 2:07 pm UTC

>>select *
  from ( select a.*, rownum r
           from ( My Query
                  order by First_column ) a
          where rownum < :HigherBound + 1 )
 where r > :LowerBound


just ask for one more row than you really wanted.
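Concretely, for page p with n rows per page, the binds can be computed as below. The page arithmetic here is my illustration, not from the original reply.

```sql
-- For page p with n rows per page:
--   :lower_bound = (p - 1) * n     -- rows already shown
--   :upper_bound = p * n + 1       -- last row wanted, plus one sentinel row
-- If n + 1 rows come back, display the first n and render a "Next" link;
-- if n or fewer come back, this is the last page.
select *
  from ( select a.*, rownum r
           from ( select * from t order by object_name ) a
          where rownum <= :upper_bound )
 where r > :lower_bound;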

jumping through pages ...

Sudhir, December 26, 2003 - 8:58 pm UTC

Tom,

I have a case here where knowing your data may allow you to use simpler SQL (maybe). See the following:

SQL> create table t nologging as select * from dba_objects where object_id is not null;

Table created.

SQL> create index t_idx on t(object_id) ;

Index created.

SQL> analyze table t compute statistics for table
  2  for all indexes   
  3  for all indexed columns;

SQL> set autot trace exp

SQL> select min(object_id) from t;                 

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=32 Card=1 Bytes=4)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FULL SCAN (MIN/MAX)) OF 'T_IDX' (NON-UNIQUE) (Cost=32 Card=15048 Bytes=60192)

SQL> select max(object_id) from t;                  

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=32 Card=1 Bytes=4)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FULL SCAN (MIN/MAX)) OF 'T_IDX' (NON-UNIQUE) (Cost=32 Card=15048 Bytes=60192)

SQL> set autot off

SQL> l
  1* select max(object_id) from t
oracle@ssp:SQL> /

MAX(OBJECT_ID)
--------------
        561149

I just guessed next jump number << This is the gray area

SQL> select owner,object_id from t where object_id < 125 and rownum < 6;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=5 Bytes=105)
   1    0   COUNT (STOPKEY)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=5 Card=5 Bytes=105)
   3    2       INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=5)

SQL> set autot off
SQL> /

OWNER                           OBJECT_ID
------------------------------ ----------
SYS                                    98
SYS                                    99
SYS                                   100
SYS                                   101
SYS                                   107
SQL> select owner,object_id from t where object_id > 107 and object_id < 125 and 
  2  rownum < 6;

OWNER                           OBJECT_ID
------------------------------ ----------
SYS                                   121

SQL> set autot trace exp
SQL> /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=3 Bytes=63)
   1    0   COUNT (STOPKEY)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=4 Card=3 Bytes=63)
   3    2       INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=3)

SQL> select owner,object_id from t where object_id < 200 and object_id > 121
  2  and rownum < 6;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=10 Card=5 Bytes=105)
   1    0   COUNT (STOPKEY)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=10 Card=15 Bytes=315)
   3    2       INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=15)


SQL> set autot off

SQL> /

OWNER                           OBJECT_ID
------------------------------ ----------
SYS                                   125
PUBLIC                                141
SYS                                   142
PUBLIC                                145
PUBLIC                                148


So I think you can start with a "guess" such as 125 to get the first five rows, then jump to the next 5 by limiting on the last value seen. The only problem is that some "Next>" pages may contain just one row rather than a fixed number of rows per page.

There is another approach mentioned by Jonathan Lewis in his book using a collection type, whereby the array holds all the data and you just skip back and forth through it. Kind of like an in-memory temp table.

Thanks for your thoughts
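The "guess the next bound" step (and the partially-filled pages it causes) can be avoided by seeking on the last key displayed. A sketch against the same example table, assuming object_id is the unique sort key and the last value shown is carried from page to page:

```sql
-- Keyset ("seek") pagination: pass in the last object_id shown on the
-- previous page; every page except possibly the last comes back full,
-- and no upper bound has to be guessed.
select *
  from ( select owner, object_id
           from t
          where object_id > :last_object_id_seen   -- 0 for the first page
          order by object_id )
 where rownum < 6;    -- 5 rows per page
```

The index range scan starts right where the previous page stopped, so later pages cost no more than early ones. The trade-off is that you can only step forward/backward by key, not jump to an arbitrary page number.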

save several records for a session for using across pages

bn, May 11, 2004 - 6:55 pm UTC

Hi TOM,

We have an Oracle (9i) web page where we have to show a customer's account details (there can be more than one account), which are fetched from a mainframe (using ORDCOM); i.e., while loading the page, the first thing is to call an ORDCOM method.
If we send 3 account numbers of a customer through the ORDCOM method, we get back one single big string with account numbers, balance, limit, etc., separated by delimiters (field-wise/account-wise). These details are to be used across several pages that pop up from various clicks on the main page.
To my knowledge I have 2 choices to achieve this:
a) create a table for the user every time the main page is refreshed (dropping it first if one exists), or reuse the same table by truncating and reloading.
b) send the one big string as an argument to the other pages and extract any particular account's details as needed. But the argument can be ugly long, as there are several values per account.

1) Please suggest one of these, or some other best approach.

I take the liberty of asking a couple of questions not relevant to the current topic. Please answer them too; we have a tight deadline.

2) I send rowidtochar() as an argument across web pages for some updates (I will use your urlencode function). Are there any pitfalls in using rowid? I think I read somewhere that rowids can now be different at different times!

3) When I have to update a table column on a button click on a web page, I call another page via an onClick JavaScript function. Another web page means a popup (or reloading the same page); this popup page's code has just the update statement and then window.close().
Is this a sensible way to do it? Please suggest a better solution if any.

4) On button click we have to make a transaction on the mainframe, again via an ORDCOM method. So I plan to do what I described in Q3, calling the method in place of the update SQL statement.
The issue is that I want to wait for the transaction to complete for, say, 30 seconds. I would like to show a countdown on screen if possible. After 30 seconds, if there is no response from the mainframe, my session has to exit, prompting the user with an appropriate message. How can I achieve this functionality? I went through several posts on this site and came across things like alerts/sleep etc., but could not understand how to use them.

Hope its not too many Qs.

Thanks a ton in advance.

Tom Kyte
May 11, 2004 - 9:21 pm UTC

1)

a) why why why would you create a table every time????? i cannot *imagine*.


how about this: you get the value back, you insert the value and seq.nextval into a table (maybe you PARSE the string and insert the parsed data and seq.nextval into a table -- a single table, a single table that always exists, forever)... and you pass a little tiny number (like a session id) from page to page???

that's how i work here. see the big ugly number in your url up there? that's your session id; i use it to store state about you (i think cookies are evil -- we need no cookies to do everything a cookie does and more)...


we expire the state from time to time.


2) make sure to escape it -- rowids contain magic characters like /, + and others.

urlencode it for sure. + = blank in a url.

3) i don't know, for me it would not work, i have popup's blocked. I hate them :)

4) i don't know what "ordcom" is. i don't know what "ordcom" can or cannot do.

save several records for a session for using across pages

bn, May 12, 2004 - 3:11 am UTC

TOM,

I am greatly relieved by the answers to 1) and 2).

For 3) and 4), maybe I didn't frame the questions properly.

3) I was asking how a table record is updated on button click.
The event can be handled by onClick() only, no? So from JavaScript, can I call a function which updates the record, or a URL to a proc which updates?
How would you handle this?
4) OK, we can ignore ORDCOM... we treat it as just a PL/SQL function.

On button click following proc is called

create or replace procedure updateRec as
  l_returnCode varchar2(200);
begin
  htp.p('Processing the transaction');
  l_returnCode := functionUpdateMainframeRecord;
  ...
  -- wait 60 seconds for this function to return...

  -- if not responding then:
  htp.p(' <script language=javascript>
    alert("Could not get a response from the mainframe; the session is closing. Please contact the mainframe team for the status of the transaction.");
  </script>
  ');

  -- if the response arrives:
  htp.p(' <script language=javascript>
    alert("successful..");
  </script>
  ');

end updateRec;

Pls help me get over this situation.

Thanks

Tom Kyte
May 12, 2004 - 7:41 am UTC

3) i don't code javascript, i use htmldb, i let it generate the UI and I just do the data part.

4) if ordcom cannot time out (you cannot ignore the crux of the question, unfortunately) then you won't be successful. You need to have ordcom help you here -- else you might tell your user "hey, we couldn't do it" at the 60-second mark, but ordcom would continue on.... maybe at second 65 it finishes. now what?

ordcom needs to supply the answer.

javascript

mo, May 13, 2004 - 7:23 pm UTC

As a comment to the previous question:

JavaScript runs on the client PC. It can't be used to call a PL/SQL procedure residing in the Oracle database directly. It can be used to submit/post the form to the server, where the PL/SQL procedure can be the form handler.

Most helpful

Jens, July 09, 2004 - 10:01 am UTC

most helpful

Dynamic SQL

Venkatraman Kalyanam, February 16, 2005 - 7:35 am UTC

Hi Tom,

Your suggestion looks very impressive, but I am wondering how this will scale when we have dynamic where clauses and dynamic field selection.

For example, I want to apply filtering logic based on an input parameter, and the number of columns I am selecting also comes in as an input parameter.

But I still want to have the paging logic.

In addition, in my UI I have a requirement to show the paging details: how many records in total, how many pages, and what the current page is. For this I end up firing the same query as above with a count(*), and it is really eating into the response time. Do you have any suggestions?

Thanks in advance

Tom Kyte
February 16, 2005 - 8:35 am UTC

I use dynamic where clauses here on this site..... and use this pagination.

most crucial: use binds. DO NOT concatenate values in, USE BINDS.

otherwise you have a massive security issue
and performance/scalability is unachievable.

those are facts -- pure facts.


rethink your requirement. It is laughable, first of all (the instant after you start your count(*) query, the number of rows satisfying your query might change).

secondly, people just need to know there are more pages, not that there are 1,234,234 more pages. That, and no one goes beyond page 10.

thirdly, google, the GOLD standard for search engines, totally "guesses" (and you can too -- you can get the estimated cardinality from v$sql_plan). And it guesses totally wrong. And when it guesses wrong, it corrects itself. If you run a query and it says "here is 1..10 of about 32", and when you go to page 3 it discovers "oh wait, there were only 20, no page three", it just shows you page 2 again and says "oh, here is 11-20 of 20, sorry about that". If you go to page 4 and it discovers "we are now seeing rows 31..40 of MORE than 40...", it gives you another next button.


And further, if you TRY to go to page 100 of a 1,000-page result set (go ahead, just try), it says "umm, what, are you kidding....."

go ahead, give it a go -- demo google to the team and ask them "shouldn't we be as good as them?"


otherwise, if you count the rows, you have to run the ENTIRE QUERY. You want to talk about "scaling issues" -- you will have them in a big-time way.


(if you have Effective Oracle by Design, I cover this discussion in full in that book -- pagination, the uselessness of "counting" rows, and how google does it right)
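The v$sql_plan "guess" Tom alludes to might be read roughly like this. A sketch only: you would locate your own statement by its text or address, and on some lines the estimate can be null, in which case the first child row source's estimate serves.

```sql
-- "About N results", google-style: read the optimizer's row estimate
-- for an already-parsed query instead of running count(*).
select p.id, p.cardinality as estimated_rows
  from v$sql s, v$sql_plan p
 where p.address      = s.address
   and p.hash_value   = s.hash_value
   and p.child_number = s.child_number
   and s.sql_text like 'select * from t where x = :host_variable%'
   and p.id <= 1            -- statement line or its first row source
 order by p.id;
```

The number is only as good as the statistics, which is exactly the point: an instant, approximately-right "about N" instead of an expensive, instantly-stale exact count.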

Paging

Johny, April 05, 2005 - 11:19 am UTC

Hello, I have the following query implementing paging to limit it to rownum 1 through 10... why is this query running so slow... any ideas to improve the speed of this query?

select *
  from ( select a.*, rownum r
           from ( select distinct o.name, o.id_num, o.address
                    from order o, item i
                   where o.id = i.id ) a
          where rownum < 11 )
 where r > 0;

I also tried using resultset.next in the jsp page but that is very slow too....

Tom Kyte
April 05, 2005 - 12:33 pm UTC

funny, i ran it and it ran fast....

but I don't have your data.

distinct is going to be an "all rows" sort of thing.


in order to "distinct", you have to get the "last row" before you can return the first.

is the distinct necessary?

Johny

Johny, April 05, 2005 - 12:46 pm UTC

Yes, the distinct is necessary.... I have close to 2500 records... could that be the reason? It seems like the subselects are slowing it down... it takes close to 8.2 secs to load ten records at the SQL prompt... and gets much slower in the GUI window...

Tom Kyte
April 05, 2005 - 6:38 pm UTC

"subselects", you haven't any.

but think about it: you have rows retrieved by ID

Now id=1 might have name='X'
and id=500 might have name='Y'
and id=1000 might have name='X'

and your table with id's to retrieve needs to get id=1,500,1000

(and pretend name was the only column to distinct)

In order to "distinct" the set of X's coming back, well, you have to get the last X and the first X -- in fact, all X's (and Y's)

are you willing to use a pipelined function?
is the problem this simple or much more horribly complex in real life?
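A pipelined function, as mentioned above, could de-duplicate in PL/SQL as rows stream in, stopping after the first N distinct values instead of letting SQL sort/DISTINCT the whole join. The sketch below is purely illustrative (made-up type, table and function names, one distinct column instead of three) and assumes 9iR2 or later for varchar2-indexed collections:

```sql
create type name_rec as object ( name varchar2(100) );
/
create type name_tab as table of name_rec;
/
create or replace function first_distinct_names( p_max in number )
  return name_tab pipelined
as
  type seen_t is table of boolean index by varchar2(100);
  l_seen seen_t;   -- values piped so far
begin
  for x in ( select o.name
               from orders o, items i    -- NOTE: no DISTINCT here
              where o.id = i.id )
  loop
    if not l_seen.exists( x.name ) then
      l_seen( x.name ) := true;
      pipe row ( name_rec( x.name ) );
      exit when l_seen.count >= p_max;   -- stop early
    end if;
  end loop;
  return;
end;
/
-- usage: select * from table( first_distinct_names( 10 ) );
```

Whether this wins depends on how early the first 10 distinct values show up in the driving query; in the worst case it still reads most of the join.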

how about date

hernk, April 07, 2005 - 5:40 am UTC

Hi Tom, I have been trying your paging code on a simple, medium-sized table, but I've run into a problem: how do I display records that contain date columns when I want to display the 8 most recent records on the 1st page?
If I use your paging, it works for rownum 1 to 8 (I have used cursors, whose default min number is 1 and max is 8).
And my 2nd question is: can we short the data first before paging?

Tom Kyte
April 07, 2005 - 10:05 am UTC

can you give an example of what you mean?

if you have "page 1", the 8th last record should be well known to you? you have it already?

"can we short the data"?

or do you mean sort -- if so, yes, the original example at the top has an order by y in it for example.

continue

hernk, April 09, 2005 - 4:38 am UTC

here is the example...

I have 20 text values: 5 starting with the letter A (e.g. ABBA, ABBB...), the next 5 starting with B (e.g. BAAA), 5 starting with C, and 5 starting with D. The 20 values were inserted into the database in random order.

1) I want to page 5 values at a time, and I need the result on the first page to start with A, the 2nd page with B, the 3rd page with C, and the 4th page with D. How do I do that?

2) What if the values were inserted one by one, randomly, in different months (1 month inserts 1 value with a random letter)? How can I page that data? I need the result on the 1st page to be the 5 values from this month, and on the 2nd page the 5 values from last month...

thanks Tom..

Tom Kyte
April 09, 2005 - 7:45 am UTC

1) order by? or just "where" with order by

... from t where whatever like :x || '%' order by something;

bind in :x = 'D' and you'll get the 'D' page..


2) see #1

Is this a bug?

Patrice Borne, April 09, 2005 - 2:19 pm UTC

Tom,

I am developing a form in ASP.NET to retrieve rows n through m from a potentially huge table. This form allows the user to page through. I am using Oracle 9.2.0.4.

I implemented it using the query you gave, in order to be as light as possible on the Oracle side.

However, I see rows that are duplicated when paging. First I thought there was a problem in my ASP.NET code or in the data provider. Then, I had a look at the queries themselves and I really don't see what I am missing.

Here is an example (I am using all_tables here, not my real table):

select *
  from ( select a.*, rownum r
           from ( select *
                    from all_tables
                   where owner = 'SYS'
                   order by tablespace_name ) a
          where rownum <= 60 )
 where r > 40

and

select *
  from ( select a.*, rownum r
           from ( select *
                    from all_tables
                   where owner = 'SYS'
                   order by tablespace_name ) a
          where rownum <= 40 )
 where r > 20

As you can see, I am paging through, 20 rows at a time.

On my machine (AIX 5.2) the two result sets returned contain rows that are common, which should not happen, since I am retrieving data from two separate 'pages'.

For instance, I get a row in both sets where the table_name is 'CDC_SYSTEM$'. So, I checked if there were more than one row:

select *
from all_tables
where owner = 'SYS'
and table_name = 'CDC_SYSTEM$'

And there is exactly one row returned.

Am I doing something wrong in the 'paging' query or is there something else going on?

Thanks.

Tom Kyte
April 09, 2005 - 3:48 pm UTC

you'll need to order by "more" -- e.g. tablespace_name, owner, table_name.

right now you are just ordering by tablespace_name, and there are tons of objects in a tablespace. The order of the rows is indeterminate after that.

I'll make sure to caveat that more in the future. You sorted the rows by tablespace name, but the order of the rows within a tablespace_name is indeterminate.
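Here is Patrice's first query with the full, unique sort key applied. Since (owner, table_name) identifies a row of all_tables, the ordering is now deterministic and pages can never overlap:

```sql
select *
  from ( select a.*, rownum r
           from ( select *
                    from all_tables
                   where owner = 'SYS'
                   order by tablespace_name, owner, table_name ) a
          where rownum <= 60 )
 where r > 40;
```

Any unique trailing key works as the tiebreaker; the primary key of the driving table is the usual choice.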

Regd. Pagination of emails

Naveen Kona, September 06, 2005 - 4:04 pm UTC

Hi Tom,

I read the entire article and it was excellent. I have a requirement something like this: our company has around 10,000 customers, and each customer has an email account with us. I need to display their mails page by page, and I need to find the total number of mails (i.e. count(*)) for each user whenever he comes online. I should provide a list box where he can jump directly to a page (e.g., if he selects page 11, mails 100 to 110 should be displayed). You have suggested that using count(*) is a bad idea, but here I must find the total number of messages the user has in his inbox, since I have to display all the page numbers in the list box.

I have one more requirement: whenever the user clicks on a column header, all the emails must be sorted and only the top 10 displayed, and the pagination should continue (i.e., the requirement in paragraph 1 applies here too, after sorting by the column header).

At the beginning of this article you suggested using a temporary table with a session id. My doubt is: if we always have 8000 users online at all times, is maintaining 8000 temporary tables going to be a performance issue or not? If it is, how serious is it going to be? Each user has on average 5000 emails in the database, and it is expected they will get another 5000 emails over the span of the next year. I want to implement whichever solution you think is appropriate for this case.

That is, should I use the nested query you suggested:

"select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/
"
(or)

should I use the temporary-table solution?

Please let me know the drawbacks and advantages of each method. Please also elaborate on having 10,000 temporary tables at the same time -- wouldn't that make getting results slower?

Thanks in advance.

Tom Kyte
September 06, 2005 - 9:15 pm UTC

you might consider maintaining the count (rollup) by user, then. why COUNT when a user comes in if you can maintain that count as the emails come in...

you will not maintain 8000 temp tables -- there is ONE, only ONE, for all.


"select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/
"

seems the most likely choice to me, it is what I use most often.
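The "maintain the count as the emails come in" idea might be sketched as below. All table, column, and trigger names are made up for illustration:

```sql
-- Keep a per-user mail count current as mail arrives/departs,
-- so login never needs a count(*) over the whole inbox.
create table user_mail_counts (
  userid number primary key,
  cnt    number not null
);

create or replace trigger emails_count_trg
  after insert or delete on emails
  for each row
begin
  if inserting then
    update user_mail_counts set cnt = cnt + 1 where userid = :new.userid;
  else
    update user_mail_counts set cnt = cnt - 1 where userid = :old.userid;
  end if;
end;
/
```

Note that concurrent deliveries to the same user serialize on that one count row; for mail-arrival rates that is normally acceptable, and reading the count becomes a single-row primary-key lookup.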

Sorting on homogeneous columns

Sergei, November 15, 2006 - 1:57 pm UTC

Tom, first of all, thanks for your explanation. I think this method is very useful and more web applications should take advantage of it. Default sorting and paging in .NET applications requires passing the entire data set back and forth, which adds a lot of bloat and waiting time to some .NET web applications.

I've been using this method for years w/o encountering a problem until recently. I have a complex inner query with many aggregate functions, distinct keywords and an order by:

with myquery as (
select * from (SELECT d.fk_revid, d.userid, d.grouplevel,
MAX(DISTINCT DECODE(cols.name, 'FIRSTNAME', cel.value, NULL)) AS "FIRSTNAME",
MAX(DISTINCT DECODE(cols.name, 'LASTNAME', cel.value, NULL)) AS "LASTNAME",
MAX(DISTINCT DECODE(cols.name, 'EMPLOYEEROLE', cel.value, NULL)) AS "EMPLOYEEROLE",
...
FROM accessreviews.review_report_details d,
accessreviews.COLUMNS cols,
accessreviews.CELLS cel
WHERE d.fk_revid = 509 AND
d.fk_cellid = cel.cellid AND
cel.fk_colid = cols.colid GROUP BY d.fk_revid, d.userid, d.grouplevel)
where (UPPER("CFIN_GROUP") = UPPER('cfin01'))
ORDER BY "LASTNAME"
)

The distinct keyword is not necessary, but it seems to help with another issue, where Oracle starts throwing errors when there are too many aggregate functions in a query.

I then use the method you mention above:

with myquery as (<see above>)
select *
from (SELECT rownum rn, q.* FROM myquery q WHERE rownum < 2 * 50 + 1)
WHERE rn > (2 - 1) * 50

Page size is 50, page number in the current example is 2, so I should get records 51-100 back... and I do. Everything is wonderful and as expected. Now I try sorting on the "EMPLOYEEROLE" column and this is where things get strange. The column value is almost the same for everyone ("E"), so the sort functionality is not very useful, but what starts happening is that the same record from the inner query will show up on pages 2 and 9. Then when I run the inner query it only returns that record once, as expected. When I start comparing partial results from the inner query to what is returned by the pagination query it matches completely for a while (first page) but then the second page from an inner query doesn't match the second page from the outer query. Again, this is only a problem when sorting by a column where the value is the same for nearly every row. This happens on Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit. Your help is much appreciated.

Tom Kyte
November 16, 2006 - 8:13 am UTC

you need to sort by something "unique" to make it deterministic from call to call.

see
https://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html

What about "endless" pagination like Amazon.com recs?

Doug C, December 05, 2006 - 2:15 am UTC

I got curious about pagination on amazon.com. If you go to recommendations, you will find materials that are clearly correlated to recent purchases, perhaps items in your shopping cart, maybe things you have browsed, but it seems as if it NEVER ends... I have a feeling I could page through recommendations until I have gone through everything on the site. However, I've noticed some dups here and there. How do you suppose they work this trick? Do they have some top 1000 based on a batch run and then throw stuff out randomly? They seem to be able to adjust recommendations VERY quickly. I bought 2 old 50's sci-fi movies, "The Day the Earth Stood Still" and "It Came from Outer Space", and presto bingo -- there are dozens of B sci-fi films in my favorites within seconds, and again, I can page on... and on... and on. 10 minutes later, there is still a hit on old B-movies... Any ideas?

Tom Kyte
December 05, 2006 - 9:56 pm UTC

you'd need to ask them - not sure about 'endless' pagination.

Performance of the above mentioned issues

Khan, January 11, 2007 - 5:39 pm UTC

Hello Tom,

This is an excellent example (I mean the second approach, using a hits table) that returns only the required rows at a given time. But the first response for a given search would take considerably longer, since we have to insert all the rows into the hits table first, and we know that insert commands take more time than other DML commands.

If, in a given scenario, there is a possibility of a relatively large number of rows (1000) being returned, all those thousand rows need to be inserted into the hits table before they can be used by subsequent search/pagination requests.

I am just wondering how much of an impact inserting all the returned rows into the hits table would have on the performance of a system implemented using such a strategy.

I guess my requirement needs Temp for Hits

Jitender Paswan, August 16, 2007 - 5:06 am UTC

Hi Tom,
In many places you suggest using hits as a real table instead of a global temporary table, and using the session id and a counter to find the right records.
But in my case, more than 10K users are simultaneously searching for data, and the search involves a complex query. I also need to tell them the number of records found and the number of pages.
Users can also change their sort and search conditions, and at the same time some records may be deleted/added by other users, so I can't keep the hits table static. We are also not sure that a user will get the same session again, since we are using connection pooling.
Presently I am using your first idea, but to find the maximum record count and display paginated data I need to fire the search query twice, and the process takes a long time to complete.
If I implement it with a global temporary table, then I need to populate the temp table every time, even if the user just wants to navigate to the next page.
I guess a global temp table would be safe for me; I would need to execute the search query only once, and it would also reduce the complexity of my pagination query.
Please advise. We are using ASP technology.
Thanks
Jitender
Tom Kyte
August 20, 2007 - 7:33 pm UTC

... But to findout max records ....

my suggestion is to kill that "requirement" (and please remember, requirements are requests - NOT required things).

Go to Google, mimic Google; they got it right.

read some of these:

http://asktom.oracle.com/pls/ask/search?p_string=google+is+the+gold+standard+counting


and then just use the query at the top of the page.

Fast(er) query pagination when index can be used

Stew Ashton, August 21, 2007 - 8:24 am UTC

Tom, you show above how to paginate quickly by using an index to satisfy the WHERE and the ORDER BY clauses. I'd like your comments on a variant that reduces logical I/O when getting later pages.

First, your standard pagination query using an index :
>create table t as select level id, chr(96+mod(level,26))||level label, rpad(' ', 180, ' ') as filler
from dual
connect by level <= 260000;
  
Table created.

>create unique index t_idx on t(label);

Index created.

>analyze table T VALIDATE structure cascade;

Table analyzed.

>set autotrace traceonly;
>set arraysize 100;
>variable searchvar varchar2(8)
>variable firstline number
>variable lastline number
>exec :searchvar := 'a%';

PL/SQL procedure successfully completed.

>exec :firstline := 1;

PL/SQL procedure successfully completed.

>exec :lastline := 21;

PL/SQL procedure successfully completed.

>select * from
 (select a.*, rownum RNUM from (
select label, id from t where label like :searchvar order by label
 ) a where rownum <= :lastline)
where RNUM >= :firstline;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   VIEW
   2    1     COUNT (STOPKEY)
   3    2       VIEW
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'T'
   5    4           INDEX (RANGE SCAN) OF 'T_IDX' (UNIQUE)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         20  consistent gets
         ...
         21  rows processed

This is pretty darn good, but for later pages more work is done :

>exec :firstline := 9001;

PL/SQL procedure successfully completed.

>exec :lastline := 9021;

PL/SQL procedure successfully completed.

>select * from
 (select a.*, rownum RNUM from (
select label, id from t where label like :searchvar order by label
 ) a where rownum <= :lastline)
where RNUM >= :firstline;

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       7083  consistent gets
       ...
         21  rows processed

What happens here is that 9021 table rows are visited while only 21 are retained. I suggest getting only the rowid for the records we want, then selecting by rowid just the 21 table records we need:

>select label, id from t where rowid in
  (select /*+cardinality(21) */ rid from
    (select rowid rid, rownum rnum from t where rownum <= :lastline and
    label like :searchvar order by label)
  where rnum >= :firstline)
order by label;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (ORDER BY)
   2    1     NESTED LOOPS
   3    2       VIEW OF 'VW_NSO_1'
   4    3         SORT (UNIQUE)
   5    4           VIEW
   6    5             COUNT (STOPKEY)
   7    6               INDEX (RANGE SCAN) OF 'T_IDX' (UNIQUE)
   8    2       TABLE ACCESS (BY USER ROWID) OF 'T'

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         45  consistent gets
         ...
         21  rows processed

This technique may not be necessary in most cases, but it demonstrates just how fast pagination can be: getting the last page is practically as fast as getting the first.
Tom Kyte
August 22, 2007 - 11:32 am UTC

I suggest that you really meant to order by DESC and look at the first pages!

That is, no end user on the planet would get to page 360 (25 rows at a time). They would not hit page forward that often

Heck, real sites like google won't even let you GO THERE, they stop at page 100 and say "enough already, you need to refine your search"

So, I never hit this 'problem' because I'd just say to them "hit the arrow that sorts the other way if you want the end of the result set (only the ends are interesting...)"

Re: Fast(er) pagination

Stew Ashton, August 23, 2007 - 6:42 am UTC


Real sites like Google don't make the user hit page forward, they provide links to the different pages.

Let's say we max out at 100 and the user surfs to page 50 : the above technique reduces table I/O by 98%. If the query joins to lookup tables, as real queries sometimes do, the extra I/O can add up and it's all single block I/O that fills up the buffer cache.

Still, I agree completely that real users will never go to page 50, much less page 360. I came up with this to show developers, to stop them putting the whole result set into a middle-tier cache and paging from there. Those guys will do everything to find the page with the slowest response time :)
Tom Kyte
August 23, 2007 - 12:14 pm UTC

they provide links to the next 10 pages

and if there are only 5, they say "so sorry, here is the fifth page, we lied before saying there was 1000 pages"



Hints for better pagination query

Jitender Paswan, August 24, 2007 - 3:19 am UTC

Hi,
Thanks for the advice about finding the max record count.
We are facing a performance issue with our pagination query. The following is the query pattern we use:
(the outermost and innermost queries have FIRST_ROWS(10) hints; the inner ones have FIRST_ROWS hints)

SELECT /*+ FIRST_ROWS(10) */ Columns ...
FROM (SELECT /*+ FIRST_ROWS */ Columns ...
FROM (SELECT /*+ FIRST_ROWS */ Columns ...
FROM (SELECT /*+ FIRST_ROWS(10) */ Columns ... FROM table
ORDER BY first_name ASC
)
WHERE ROWNUM <= :b1
)
WHERE ROWPOS >= :b2
) PG
INNER JOIN ....

Up to the fourth level it works fine. But as the levels between the 3rd and 4th increase, it starts giving us poor performance because of these hints. In that case we either need to remove all the hints or create a temp table.
Also, the same pattern sometimes doesn't work well on a different installation.
Can you explain how to use hints in a better way?
The levels between the 3rd and 4th need to be created to get related data required for the final output; basically those join with the main data query to fetch the exact output.

Tom Kyte
August 24, 2007 - 3:06 pm UTC

I don't know what you mean by "3rd and 4th level" or whatnot.

a real example would be useful.

How to use Hint for Better pagination query

Jitender, August 27, 2007 - 3:13 am UTC

Hi,
Sorry I couldn't explain my problem properly.
Below I have marked the levels. Basically the 4th level is the main data query; the 2nd and 3rd are the pagination logic.
The 1st level is needed to join with some other tables to display the record set. We fetch all PK values from the 4th-level data query, with all applicable where clauses and the order by. This query also uses full-text search.

SELECT /*+ FIRST_ROWS(10) */ Columns ... --** Level-1st
FROM (SELECT /*+ FIRST_ROWS */ Columns ... --**Level -2nd
FROM (SELECT /*+ FIRST_ROWS */ Columns ... --** Level-3rd
FROM (SELECT /*+ FIRST_ROWS(10) */ Columns ... --** Level-4th
FROM table
Where exp.
ORDER BY first_name ASC
)
WHERE ROWNUM <= :b1
)
WHERE ROWPOS >= :b2
) PG
INNER JOIN ....

1) My first question is whether these hints are sufficient, as we normally fetch 10 records out of 100K+. We observed that without the hints we didn't get good performance.
2) At the same time, the 4th-level data query sometimes needs to be broken down further with more inner queries or a UNION.

For example, we have activities, and these activities are hierarchical: each activity may have children. In this case the above query pattern will return all the parent activities but not the children. To find the children, we break the 4th-level query down using a UNION or an inner query. In that case the FIRST_ROWS hints increase the cost of the query; if we remove the hints, the cost goes down, but still not very low.
To resolve this issue we sometimes use a temp table to sort the main data query. But in some installations our temp tablespace grows very large, more than 10-12GB.
We use a lot of full-text search in our application.

In short, I want to write the pagination query in the best possible way, as we have a large data set (100K+ rows) and an interactive environment.
I hope this makes clearer what I want to know.

scrollable cursor

vasile, August 28, 2007 - 2:43 am UTC

What about scrollable cursors?
Can you explain the advantages and disadvantages of using this OCI feature?

Tom Kyte
September 04, 2007 - 12:40 pm UTC

you need to maintain state - like in client/server days. It is not very scalable if you have more than a handful of users.

what if I do need some metrics from all resultset?

Jeff, February 25, 2008 - 1:48 am UTC

Tom,

Thanks for your answers; they are very helpful. I agree that getting the total count of pages is not a good idea.

However, sometimes we do have a requirement of, for example, get totals of some metrics based on a search criteria. For example, if I have a HR system, and users select certain search criteria, I do need to know, (1). how many employees fit the search criteria, and (2). what is their total budget, cost, etc.

What is the most efficient way of doing that then? It seems I have to get the whole result set and do some calculation?
Tom Kyte
February 25, 2008 - 2:24 am UTC

why do you need (1) or (2).

funny thing about "requirements", they sound so "required". I find if you actually tell people "well, we could do that, but you'll need to increase your budget for hardware by a billion dollars in order to have the hardware to do that", the requirement becomes a "it would have been nice, but of course we don't really need it"

there is no efficient way of doing that short of COUNTING THEM ALL. You have to get the entire set and then you can start displaying results.

Analytics are good for doing that, but efficient - nothing can be "efficient" if you have to process the entire thing before you can start showing the first results.
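For completeness, a hedged sketch of the analytic approach alluded to above: COUNT(*) OVER () and SUM(...) OVER () attach the total hit count and totals to every row of a single query, so the metrics and the page come back together - at the cost of materializing the whole result set, exactly as noted. The table and column names here (emp, budget, :search_dept) are hypothetical:

```sql
-- each returned row carries the overall count and total, computed once
-- over the full result set (the whole set is still processed)
select *
  from ( select a.*, rownum r
           from ( select emp.*,
                         count(*)    over () total_hits,    -- rows matching the search
                         sum(budget) over () total_budget   -- example aggregate metric
                    from emp
                   where dept = :search_dept
                   order by ename ) a
          where rownum < :HigerBound )
 where r > :LowerBound;
```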



metrics from all resultset?

Sunj, March 21, 2008 - 7:19 pm UTC

Tom,
Were you suggesting using analytic functions, so that the count and the result set could be returned in a single query?

Let's say, a search results in 2000 records, pagination being at 20 per page.

My initial instinct was to get the total result count in a separate query.
Then execute another query that would return a page full of data, using your original query (from 8i example).
Tom Kyte
March 24, 2008 - 11:09 am UTC

... My initial instinct was to get the total result count in a separate query.
Then execute another query that would return a page full of data, using your
original query (from 8i example). ...


well, think about that - since the result set can change between a) get count and b) get full page of data - by the time you did a), it (the number, the count) could change....

I hate getting the "total rowcount", I won't do it - in order to tell the user there is 2,235,232 records - you would actually need to HIT THEM ALL, what a waste of resources.

Metrics from all resultset

Sunj, March 27, 2008 - 2:30 pm UTC

Thank you for your input.
Unfortunately I am unable to convince the users. Besides, other indexing software like Endeca is able to do this kind of thing and return a lot more dimensions effortlessly.

I wonder if Oracle has any plans to compete with the indexing technologies that Endeca and Fast use.
Tom Kyte
March 30, 2008 - 8:29 am UTC

sure, we call it 'secure enterprise search', or 'oracle text' - I give "estimates" (just like the search engine google does - we realize it just doesn't make sense to get an exact count, heck by the time you get it - it is probably wrong already, unless you are searching static information, but one must then ask why are you searching static, non-changing data...)

paging web performance problem

A reader, April 28, 2008 - 6:08 pm UTC

Hi Tom

I see that you used a hits-table approach (in your very first reply) to avoid the performance problem where a query using ORDER BY and ROWNUM to page the results performs badly, which is our problem.

We have a query like

select *
from (select t1.*, rownum rn from t1 where x = :x and y = :y
order by id desc)
where rn <= 25

and more queries like it to limit the output to 25 rows per page. The problem is that the inner query is deadly slow when the ORDER BY is used - around 25 seconds - whereas without the ORDER BY it takes microseconds.

So if I use your approach in a Java environment, how can I ensure the session id is the same every time I execute the search query?

Thanks


Tom Kyte
April 29, 2008 - 8:40 am UTC

select *
from (select t1.*, rownum rn from t1 where x = :x and y = :y
       order by id desc)
where rn <= 25


I hope you do not have a query like that.

that gets 25 random records - you need to:

select *
from (select t1.* from t1 where x = :x and y = :y
       order by id desc)
where rownum <= 25



reference rownum AFTER SORTING.


if you had an index on (x,y,id) and did an order by X desc, Y desc, ID desc - it would be "pretty fast"
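A sketch of that suggestion, using the names from the query above: a concatenated index covering both the predicate columns and the sort key lets the top-25 query stop after reading only the index entries it needs (a descending index range scan), rather than sorting the whole matching set.

```sql
-- index matches the WHERE columns (x, y) and the sort column (id)
create index t1_xyi_idx on t1 (x, y, id);

select *
  from ( select t1.*
           from t1
          where x = :x and y = :y
          order by x desc, y desc, id desc )  -- sort satisfied by the index, read backwards
 where rownum <= 25;
```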


... So if I use your approach in a Java environment how can I ensure the session id
is same everytime I execute the search query?....

well, your application would have to pass it back and forth - most java applications already have the concept of a "session id" - if yours do not, you'll have to implement it.

inserting the hits table

pingusan, May 01, 2008 - 4:07 am UTC

Hi

In your first reply you suggested using this code to insert

cnt := 0;
for y in ( select t.rowid rid from t where x = :host_variable order by y )
loop
cnt := cnt + 1;
insert into hits values ( userenv('sessionid'), cnt, y.rid );
end loop;


Can't we use this?

insert into hits
select a.*, rownum
from (select t.rowid rid
from t
where x = :host_variable
order by y) a

Thank you
Tom Kyte
May 01, 2008 - 9:47 am UTC

just read a tiny bit further down a paragraph or two in the original answer...

....
  insert into hits
  select SESSION_ID, rownum, empno, deptno
    from ( select ename, dname
             from emp, dept
            where emp.deptno = dept.deptno
            order by emp.ename );
.....

Paging response,

Asad Khan, May 28, 2008 - 8:49 am UTC

Tom in your example of:

select *
from ( select a.*, rownum r
from ( select *
from t
where x = :host_variable
order by y ) a
where rownum < :HigerBound )
where r > :LowerBound


Doesn't Oracle actually return the entire data of the inner query and then apply the rownum boundaries? I am just trying to get an understanding of how Oracle looks at it. If it doesn't, then that is great - no major impact on performance for the early pages.


But what about the nth page (which can be 30,000 records away)? That is a lot of data to process before the last 10 records are displayed. What is the ideal way to handle it?

Thanks.

Once again great examples.

Tom Kyte
May 28, 2008 - 10:00 am UTC

No, Oracle doesn't actually return the entire data of the inner query and then apply rownum boundaries.

https://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html

see the top-n query optimization.



and you are not getting the fact that NO ONE WILL EVER GO TO THE NTH PAGE WHERE N IS LARGE.

humans get bored, there is nothing interesting on the 100th page of a result set for us.


And if you are saying "well the last page is interesting to them", then I say make the last page FIRST (eg: your order by is backwards!)

Never give them the option to go to page N, maybe pages 1-10, but not just any page.

Never give them the option to go to the last page, or if you do, just reverse the sort order for them and show them the first page.
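One way to honor "no arbitrary page N" that also avoids the deep-page cost - a hedged sketch, not from the original answer - is keyset ("seek") paging: the "next page" link carries the sort-key value of the last row the user saw, and each page is a fresh top-N probe of the index instead of a row count-off. Names follow the query at the top of the page; :last_seen_y is a hypothetical bind for the carried-over key.

```sql
-- "next 10": start strictly after the last sort-key value already shown,
-- so page depth never increases the amount of data scanned
select *
  from ( select *
           from t
          where x = :host_variable
            and y > :last_seen_y
          order by y )
 where rownum <= 10;
```

This assumes y is unique within the result (otherwise the key must include a tiebreaker column), and it naturally forbids jumping to page N.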

Paginating result set

Manuel, November 14, 2008 - 8:56 am UTC

Hi,

I'm trying to implement a procedure that searches a view and paginates the results.

I have this code:

CREATE OR REPLACE PROCEDURE Searchservicos 
(
     p_pageNr IN NUMBER, 
     p_pageSize IN NUMBER, 
    p_queryConditions IN VARCHAR,
    p_totalRecords OUT NUMBER
)
IS
 MIN_ROWS NUMBER;
 MAX_ROWS NUMBER;
 strCountRecords VARCHAR (1000);
 queryResults v_lista_servicos%ROWTYPE; 
BEGIN
    MAX_ROWS := p_pageNr * p_pageSize;
 MIN_ROWS := MAX_ROWS - p_pageSize;
 
 ---
 strCountRecords := 'SELECT COUNT(id_servico) FROM v_lista_servicos WHERE 1=1 ' || p_queryConditions;
 EXECUTE IMMEDIATE strCountRecords INTO p_totalRecords;
 ---
 
 SELECT * INTO queryResults
      FROM ( SELECT /*+ FIRST_ROWS(n) */ 
      a.*, ROWNUM rnum
            FROM ( 
       SELECT * FROM v_lista_servicos WHERE 1=1
      AND v_lista_servicos.id_isp=2  
   ) a
       WHERE ROWNUM <= MAX_ROWS )
 WHERE rnum >= MIN_ROWS;
 
    --COMMIT;
   
    EXCEPTION
    WHEN OTHERS THEN
           RAISE;
END Searchservicos;
/


I'm getting an error in this line
FROM ( SELECT /*+ FIRST_ROWS(n) */ 


PL/SQL: ORA-00947: not enough values

Can you please tell me what's wrong?

Best Regards,
Manuel
Tom Kyte
November 18, 2008 - 6:04 pm UTC

I hate your code

a) no binds
b) because of no binds, it is as INSECURE as possible (please google "sql injection")
c) why do you have that when others? do you just like making debugging harder??? It is good that it is followed by raise, but that you have "when others then raise" indicates to me you have entirely missed the point of exceptions - this is a really important thing, you want to change this and learn why you don't want to do this - this is a really UNDESIRABLE thing you have done. It adds nothing, it removes USEFUL information (we don't know what line of code causes the failure anymore :( you've made it infinitely hard to debug...)


d) gosh, how do I hate counting records. You do fully understand that between the time you count and the time you query - it could be DIFFERENT. Also, you do understand that in order to count the records, you have to RUN THE ENTIRE QUERY - THE ENTIRE QUERY - think about that.


you select * into a record, but there is more than one row, that'll never work.

you don't apply your where clause to the main query?!!?!?!?!?!?

first_rows(n) isn't "sensible", N should be a number (eg: your pagesize)


but the "not enough values" is because you select into a record that has fields for the table, but the query added ROWNUM to the select list, your record has "not enough fields"

ops$tkyte%ORA10GR2> CREATE OR REPLACE PROCEDURE Searchservicos
  2  (
  3          p_pageNr IN NUMBER,
  4          p_pageSize IN NUMBER,
  5         p_queryConditions IN VARCHAR,
  6         p_totalRecords OUT NUMBER
  7  )
  8  IS
  9      MIN_ROWS NUMBER;
 10      MAX_ROWS NUMBER;
 11      strCountRecords VARCHAR (1000);
 12
 13          cursor template is select a.*, rownum rnum from v_lista_servicos a;
 14      queryResults template%ROWTYPE;
 15  BEGIN
 16          MAX_ROWS := p_pageNr * p_pageSize;
 17      MIN_ROWS := MAX_ROWS - p_pageSize;
 18      ---
 19      strCountRecords := 'SELECT COUNT(id_servico) FROM v_lista_servicos WHERE 1=1 ' || p_queryConditions;
 20      EXECUTE IMMEDIATE strCountRecords INTO p_totalRecords;
 21      ---
 22      SELECT * INTO queryResults
 23           FROM ( SELECT /*+ FIRST_ROWS(n) */
 24                 a.*, ROWNUM rnum
 25                 FROM (
 26                     SELECT * FROM v_lista_servicos WHERE 1=1
 27                    AND v_lista_servicos.id_isp=2
 28              ) a
 29         WHERE ROWNUM <= MAX_ROWS )
 30      WHERE rnum >= MIN_ROWS;
 31  END Searchservicos;
 32  /

Procedure created.


Pagination using 10g features?

Praveen, December 15, 2008 - 10:22 am UTC

I particularly like the second method you had suggested for implementing the pagination.
But please consider the following scenarios:-

(a) There are nearly 500 simultaneous users on an average.
(b) The search query is dynamically constructed. The selected columns and ordering of records varies depending on user preferences who issues the query !
(c) Search query issued by each users takes nearly 10 seconds to execute. That is the time taken to return first rows when tested from the backend for a single user. (real-time could be worse).
(d) It is a web based environment running Oracle 10g.
(e) The user can sort the result set that is displayed on his page (upon multiple columns-ASC or DESC).


My doubts are:-

(1) If 500 users connect and search, would it be wise to create 500 'hit' tables? One single table cannot be created because each search query returns a different kind/number of columns.
(2) The session from the page is lost as soon as the first set of rows is displayed. When the user clicks <NEXT> or clicks to sort upon a column, a new session is created. Hence, how could it be possible to identify the 'hit' table based on the 'session_id'?
(3) You answered this before 10g existed. How would you re-write your answer now that so many advanced 10g features are available?

--Thank you
Tom Kyte
December 29, 2008 - 10:38 am UTC

...
(c) Search query issued by each users takes nearly 10 seconds to execute. That
is the time taken to return first rows when tested from the backend for a
single user. (real-time could be worse).
....


that is what you need to work on. Unless you have a 500 cpu machine with 5,000 disks, the actual response times from this system will be "not usable"

(do the math)




as for the doubts...

(1) so what, use a single table with varchar2 columns if you really wanted to do that (only that will make the first page take even longer than it does now)

(2) you cannot sort of figure out how to establish your own sessionid? It is pretty common in web applications. Look at my URL, there is a sessionid in there for your session to remember YOU and where YOU are paging...

(3) it would be the same, although you can use row_number() if you like.

https://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html

https://asktom.oracle.com/Misc/oramag/on-top-n-and-pagination-queries.html
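For reference, a sketch of the row_number() variant mentioned in point (3). It is equivalent to the ROWNUM form at the top of the page; the :first_row and :last_row binds are hypothetical names for the page boundaries.

```sql
-- number the ordered result set once, then keep one page of it
select *
  from ( select t.*,
                row_number() over (order by y) rn  -- position within the ordered result
           from t
          where x = :host_variable )
 where rn between :first_row and :last_row
 order by rn;
```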



but in all seriousness - you cannot do a 10 second search with 500 concurrent users - you will need to give serious consideration and design thought here to reduce that to "something a lot less"


Use of LIMIT clause

pranav, March 12, 2009 - 11:45 am UTC

Hi,

I was just wondering if there is some way to return the results in chunks (pagination records) by using the LIMIT clause of BULK COLLECT.

The app could execute the DB function until the BULK COLLECT's LIMIT clause is exhausted, and show the data based on each record set it retrieved.


Thanks
Pranav.


Tom Kyte
March 12, 2009 - 1:06 pm UTC

I'm not following your thought here, I don't know what you mean.

ORA-00918 with LEFT OUTER JOIN

Jorge Gomez, September 17, 2009 - 10:39 am UTC

I'm trying to paginate as follows:

SELECT *
FROM ( SELECT a.*, rownum r
FROM (
SELECT Fileservicio.ano, Fileservicio.num_file, Fileprincipal.ano, Fileprincipal.num_file
FROM FILE_SERVICIOS Fileservicio
LEFT OUTER JOIN FILE_PRINCIPAL Fileprincipal
ON Fileservicio.ano = Fileprincipal.ano
AND Fileservicio.num_file = Fileprincipal.num_file
WHERE 1 = 1
) a
WHERE rownum <= 5 )
WHERE r >= 1

But I'm getting Error SQL: ORA-00918

My database tables have fields with equal names and I can't change them (they're currently in production with a client/server app and I'm doing a PHP web view). How can I do a DB search that pages its results, using joins to other tables with the same column names, emulating a LIMIT function?

Thanks
Tom Kyte
September 17, 2009 - 1:36 pm UTC

you'd have to give me a simple create table - with few columns and simple names (like c1, c2, c3) to reproduce with please.
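As a hedged guess at the cause while waiting for that test case: the inline view's select list contains ano and num_file twice, once from each table, which is the classic trigger for ORA-00918 when an outer query references the view. Aliasing the duplicates is usually enough; the alias names below are hypothetical:

```sql
-- give each duplicated column a distinct alias inside the inline view
SELECT *
FROM ( SELECT a.*, rownum r
       FROM ( SELECT Fileservicio.ano        serv_ano,
                     Fileservicio.num_file   serv_num_file,
                     Fileprincipal.ano       prin_ano,
                     Fileprincipal.num_file  prin_num_file
              FROM FILE_SERVICIOS Fileservicio
              LEFT OUTER JOIN FILE_PRINCIPAL Fileprincipal
                ON Fileservicio.ano = Fileprincipal.ano
               AND Fileservicio.num_file = Fileprincipal.num_file
            ) a
       WHERE rownum <= 5 )
WHERE r >= 1;
```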


KC, September 30, 2009 - 1:38 pm UTC

Hi Tom,

... if you get the query such that the first row can be returned fast -- as in this example -- it'll be great. If we have to find the entire result set (140,000 rows were SYS owned in my example here) AND THEN sort, it'll obviously take a bit... (quote: Followup - February 28, 2002)

I am using Oracle 10g.

My question is I don’t understand why we need to read the entire result set (140,000 rows).

We already have an index on t(owner,object_name). Assume this a non-unique index, the index data is data sorted first by index key values (in the order of the index key) and then by rowid ascending.

Thus, if I want to get records 5001-5010 (:LowerBound = 5001, :HigerBound = 5010), purely from the index I 'guess' we should be able to know which 10 rowids in table t we want to retrieve to form the final result set. Once we know the 10 rowids from the index, we can then access table t by rowid - which is relatively fast.

This is how I 'thought' Oracle is going to work. I don't understand why we need to access all the 140,000 records in order to get just 10 of them. Please correct my understanding. Thanks.
Tom Kyte
October 07, 2009 - 8:58 am UTC



... if you get the query such that the first row can be returned fast -- as in
this example
-- it'll be great. ...


In this example - because of the index - we DID NOT have to find them all.


I wrote explicitly that we DO NOT in this case - because the INDEX IS THERE.

KC, October 07, 2009 - 9:43 am UTC

... Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=272809 Card=10989
1 0 VIEW (Cost=272809 Card=10989 Bytes=615384)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=272809 Card=10989 Bytes=472527)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=272809 Card=10989
5 4 INDEX (FULL SCAN) OF 'T_IDX1' (NON-UNIQUE) (Cost=1364
... (quote from THIS EXAMPLE)

Are you saying that the INDEX (FULL SCAN) at step 5 returns only 10 ROWID to STEP 4 (TABLE ACCESS)?

Many thanks.

first_row and rownum

Tony, February 04, 2010 - 2:24 pm UTC

Hi Tom,
       Thanks a lot for this informative site, 

I have two queries, almost the same; one comes back in milliseconds and the other takes 13+ minutes. The only change is the FRHT_BILL_ID value; GL_TRNS_ID is the primary key. Can you please explain what might be causing it and how to resolve it?

FYI - if I change "rownum < 20" in the second query to "rownum < 5" (as there are only 4 rows meeting the criteria), it comes back fine in ~2 seconds, but I don't know in advance how many rows there will be.

First Query:
================
insert into global_temp_t (NUM_ID) select * from ( select  /*+ first_rows */ GT.GL_TRNS_ID from
 gl_trns_t GT where 1 = 1 AND GT.FRHT_BILL_ID = 1418555 AND GT.CLASSID = 4006 order by GT.GL_TRNS_ID
 desc ) where rownum < 20;
 
 INSERT STATEMENT   Cost = 1706246
  LOAD TABLE CONVENTIONAL  GLOBAL_TEMP_T
    COUNT STOPKEY
      VIEW
        TABLE ACCESS BY INDEX ROWID GL_TRNS_T
          INDEX FULL SCAN DESCENDING PK_GLTRNS
          
          
SQL> insert into global_temp_t (NUM_ID) select * from ( select  /*+ first_rows */ GT.GL_TRNS_ID from
 gl_trns_t GT where 1 = 1 AND GT.FRHT_BILL_ID = 1418555 AND GT.CLASSID = 4006 order by GT.GL_TRNS_ID
 desc ) where rownum < 20;

19 rows created.

Elapsed: 00:00:00.01


select count(*) from gl_trns_t GT where 1 = 1 AND GT.FRHT_BILL_ID = 1418555 AND GT.CLASSID = 4006;

  COUNT(*)
----------
        30


Second Query:
===============

insert into global_temp_t (NUM_ID) select * from ( select  /*+ first_rows */ GT.GL_TRNS_ID from
 gl_trns_t GT where 1 = 1 AND GT.FRHT_BILL_ID = 1381993 AND GT.CLASSID = 4006 order by GT.GL_TRNS_ID
 desc ) where rownum < 20;
 
 
INSERT STATEMENT   Cost = 1706246
  LOAD TABLE CONVENTIONAL  GLOBAL_TEMP_T
    COUNT STOPKEY
      VIEW
        TABLE ACCESS BY INDEX ROWID GL_TRNS_T
          INDEX FULL SCAN DESCENDING PK_GLTRNS
          

SQL> insert into global_temp_t (NUM_ID) select * from ( select  /*+ first_rows */ GT.GL_TRNS_ID from
 gl_trns_t GT where 1 = 1 AND GT.FRHT_BILL_ID = 1381993 AND GT.CLASSID = 4006 order by GT.GL_TRNS_ID
 desc ) where rownum < 20;

4 rows created.

Elapsed: 00:13:33.14          
          

select count(*) from gl_trns_t GT where 1 = 1 AND GT.FRHT_BILL_ID = 1381993 AND GT.CLASSID = 4006;

  COUNT(*)
----------
         4 

Tom Kyte
February 04, 2010 - 3:28 pm UTC

this is easy - I think

I'll have to make some assumptions

1) there are lots of rows in the table.
2) the index is only on gl_trns_id and gl_trns_id is not null


hence, for the first query (30 matching rows), we read the index desc - for each entry we find in there, we go to the table and VERY RAPIDLY, after just a few hits on the table, we get the first 20 rows and stop. We got lucky. Since there are more than 20 rows in the table that match our predicate, we know for a fact we'll stop before we get to the end of the table. In this case - we stopped "fast".

and for the second query (only 4 matching rows), we read the index desc and for each entry we find in there - we go to the table and basically end up reading almost the entire table to find 20 rows - in fact, we end up reading the ENTIRE table, since there are only 4 rows to be found (and we don't know that until we run out of rows)

In other words - anytime you have LESS THAN 20 records, this must read EVERY ROW out of the table via the index (slow slow slow). Any time - every and any time.

I can demonstrate that easily:


ops$tkyte%ORA10GR2> /*
ops$tkyte%ORA10GR2> drop table t;
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t
ops$tkyte%ORA10GR2> as
ops$tkyte%ORA10GR2> select level gl_trns_id, level frnt_bill_id, level classid, rpad('*',80,'*') data
ops$tkyte%ORA10GR2>   from dual
ops$tkyte%ORA10GR2> connect by level <= 500000
ops$tkyte%ORA10GR2> /
ops$tkyte%ORA10GR2> alter table t modify gl_trns_id not null;
ops$tkyte%ORA10GR2> create index t_idx on t(gl_trns_id);
ops$tkyte%ORA10GR2> update t set frnt_bill_id = 499999, classid = 499999 where gl_trns_id between 499900 and 500000;
ops$tkyte%ORA10GR2> commit;
ops$tkyte%ORA10GR2> */
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> select *
  2    from (
  3  select /*+ first_rows */ gl_trns_id
  4    from t
  5   where frnt_bill_id = 499999 and classid = 499999
  6   order by gl_trns_id DESC
  7         )
  8   where rownum < 20
  9  /

GL_TRNS_ID
----------
    500000
    499999
    499998
    499997
    499996
    499995
    499994
    499993
    499992
    499991
    499990
    499989
    499988
    499987
    499986
    499985
    499984
    499983
    499982

19 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3806458071

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    19 |   247 |   827   (1)| 00:00:07 |
|*  1 |  COUNT STOPKEY                |       |       |       |            |          |
|   2 |   VIEW                        |       |    50 |   650 |   827   (1)| 00:00:07 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T     |    50 |  1950 |   827   (1)| 00:00:07 |
|   4 |     INDEX FULL SCAN DESCENDING| T_IDX |   500K|       |    26   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<20)
   3 - filter("FRNT_BILL_ID"=499999 AND "CLASSID"=499999)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
        742  bytes sent via SQL*Net to client
        411  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         19  rows processed

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select /*+ first_rows */ gl_trns_id
  4    from t
  5   where frnt_bill_id = 1 and classid = 1
  6   order by gl_trns_id DESC
  7         )
  8   where rownum < 20
  9  /

GL_TRNS_ID
----------
         1


Execution Plan
----------------------------------------------------------
Plan hash value: 3806458071

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    19 |   247 |   827   (1)| 00:00:07 |
|*  1 |  COUNT STOPKEY                |       |       |       |            |          |
|   2 |   VIEW                        |       |    50 |   650 |   827   (1)| 00:00:07 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| T     |    50 |  1950 |   827   (1)| 00:00:07 |
|   4 |     INDEX FULL SCAN DESCENDING| T_IDX |   500K|       |    26   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<20)
   3 - filter("FRNT_BILL_ID"=1 AND "CLASSID"=1)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       8154  consistent gets
          0  physical reads
          0  redo size
        413  bytes sent via SQL*Net to client
        400  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off


so, what might the solution be? Add columns to the index, so we can avoid the table and avoid the sort:

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> drop index t_idx;
ops$tkyte%ORA10GR2> create index t_idx on t(frnt_bill_id,classid,gl_trns_id);
ops$tkyte%ORA10GR2> 
ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> select *
  2    from (
  3  select /*+ first_rows */ gl_trns_id
  4    from t
  5   where frnt_bill_id = 499999 and classid = 499999
  6   order by frnt_bill_id DESC, classid DESC, gl_trns_id DESC
  7         )
  8   where rownum < 20
  9  /

GL_TRNS_ID
----------
    500000
    499999
    499998
    499997
    499996
    499995
    499994
    499993
    499992
    499991
    499990
    499989
    499988
    499987
    499986
    499985
    499984
    499983
    499982

19 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 646987072

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    19 |   247 |     3   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                |       |       |       |            |          |
|   2 |   VIEW                        |       |   101 |  1313 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN DESCENDING| T_IDX |   101 |  3939 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<20)
   3 - access("FRNT_BILL_ID"=499999 AND "CLASSID"=499999)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads
          0  redo size
        742  bytes sent via SQL*Net to client
        411  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         19  rows processed

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select /*+ first_rows */ gl_trns_id
  4    from t
  5   where frnt_bill_id = 1 and classid = 1
  6   order by frnt_bill_id DESC, classid DESC, gl_trns_id DESC
  7         )
  8   where rownum < 20
  9  /

GL_TRNS_ID
----------
         1


Execution Plan
----------------------------------------------------------
Plan hash value: 646987072

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |     1 |    13 |     3   (0)| 00:00:01 |
|*  1 |  COUNT STOPKEY                |       |       |       |            |          |
|   2 |   VIEW                        |       |     1 |    13 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN DESCENDING| T_IDX |     1 |    39 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<20)
   3 - access("FRNT_BILL_ID"=1 AND "CLASSID"=1)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        413  bytes sent via SQL*Net to client
        400  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off



No more table access, just the index - and very fast. (note: I modified the order by - but in a way that doesn't change the answer...)

rownum first_rows

Tony, February 04, 2010 - 2:29 pm UTC

Sorry, I forgot to mention there are 17,098,248 rows in the table.
Tom Kyte
February 04, 2010 - 3:29 pm UTC

assumption number 1 :)

Thanks Tom

Tony, February 04, 2010 - 4:09 pm UTC

That was amazing - I just created the index and the timing came down from 13+ mins to "Elapsed: 00:00:00.06".

Actually, I already had one index on FRNT_BILL_ID + CLASSID and another on GL_TRNS_ID (as the primary key), but as you suggested, combining the two did the trick :)

Thanks a lot

server side cursor

Peter, February 05, 2010 - 12:52 pm UTC

Hi

We have a web application used to search houses for rent and for sale.

We have had a hard time with database performance when paging the result set, since a query returning a city's rental houses can return up to 25,000 hits (around 3,000 consistent gets). When this is returned to the user, each time he hits "next page" the query is rerun with a different rownum range (from between 1 and 50, to 51 to 100, and so on). If concurrency is high, the database gets hammered!

My boss has seen from https://www.hibernate.org/314.html that SQL Server has server-side cursors, and it seems we wouldn't need to rerun the queries when paging the results (similar to JDBC scrollable cursors?). I wonder if there is something similar in Oracle 10g or 11g?

I have read about result cache but not sure if it might help.

We are stateless by the way. Using Tomcat connection pools.

Thanks
Tom Kyte
February 08, 2010 - 9:32 pm UTC

ok, I stopped reading when I read:

... We have had hard time with database performance when paging the result set
since our queries in order to return a city's renting houses can return up to
25000 hits (around 3000 consistent gets). ...

No human being, NONE, period - never - none, nada, nunca, whatever you want to say - NO ONE will read 25,000 hits.

No one will read 100 hits - ever. go to google and do a search for Oracle. Now, I defy you to get to page 101. You'll never get there, google says "no way, don't be silly"

Limit your search to FEWER ROWS.


25,000 records is far too many for anyone to absorb.


that said, you do not say how you are returning rows N thru M - are you using the technique I've outlined below??? If you are - the time to get the first N records (the first page) and the work involved should be a lot less than page 100 (just like on google - page 1 comes much faster than page 99 does).

Rownum indexes

Tony, February 08, 2010 - 9:11 am UTC

Tom,
Thanks a lot for resolving the query performance issue, my concern is:

It's fine to create an index for that particular query, but these queries are web based and the user can change the selection criteria, i.e. add more columns to the "where" clause and/or change the columns in the "Order By" list.

We can't just keep adding indexes for all possible scenarios - it's even hard to predict all the scenarios.

Then what should be the generic solution?
Tom Kyte
February 15, 2010 - 10:55 am UTC

think about what you just said:

"Then what should be the generic solution?"


if you cannot even predict "all scenarios", how could we? and how could we set up a universal "end all, be all, sort all" index?

If we had that, there would be only one database on the planet. All others would be rendered useless.


Now that said, you could create a series of single column bitmap indexes (one column per index, one index per column). That would let us find the rows fast - and then we'd sort them after the fact.

This will work well if there are always a small number of rows to return before the sorting phase. We'd find them fast, sort in memory (an optimized top-n sort) and return the first 20.

something like this:

ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> drop table t;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create table t as select * from all_objects;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create bitmap index bm_idx1 on t(owner);
ops$tkyte%ORA11GR2> create bitmap index bm_idx2 on t(object_type);
ops$tkyte%ORA11GR2> create bitmap index bm_idx3 on t(status);
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2> set linesize 1000
ops$tkyte%ORA11GR2> set autotrace on
ops$tkyte%ORA11GR2> select *
  2    from (
  3  select /*+ first_rows */ *
  4    from t
  5   where owner = 'SCOTT' and object_type = 'TABLE'
  6   order by owner DESC, object_type DESC, status DESC
  7         )
  8   where rownum < 20
  9  /

OWNER                          OBJECT_NAME                    SUBOBJECT_NAME                  OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE  CREATED   LAST_DDL_ TIMESTAMP           STATUS     T G S  NAMESPACE EDITION_NAME
------------------------------ ------------------------------ ------------------------------ ---------- -------------- ------------ --------- --------- ------------------- ---------- - - - ---------- ------------
SCOTT                          DEPT                                                               73179          73179 TABLE        13-AUG-09 13-AUG-09 2009-08-13:23:35:45 VALID      N N N          1
SCOTT                          EMP                                                                73181          73181 TABLE        13-AUG-09 05-JAN-10 2009-08-13:23:35:45 VALID      N N N          1
SCOTT                          BONUS                                                              73183          73183 TABLE        13-AUG-09 13-AUG-09 2009-08-13:23:35:45 VALID      N N N          1
SCOTT                          SALGRADE                                                           73184          73184 TABLE        13-AUG-09 13-AUG-09 2009-08-13:23:35:45 VALID      N N N          1
SCOTT                          ACCOUNTS                                                           87094          87094 TABLE        20-JAN-10 20-JAN-10 2010-01-20:19:07:38 VALID      N N N          1
SCOTT                          PNAMES                                                             82355          82355 TABLE        14-DEC-09 14-DEC-09 2009-12-14:13:30:06 VALID      N N N          1
SCOTT                          T                                                                  86997          86997 TABLE        20-JAN-10 20-JAN-10 2010-01-20:12:28:15 VALID      N N N          1
SCOTT                          SYS_TEMP_FBT                                                       81755                TABLE        09-DEC-09 09-DEC-09 2009-12-09:14:36:43 VALID      Y N N          1


8 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 197438597

-------------------------------------------------------------------------------------------
| Id  | Operation                       | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |         |     6 |   948 |     4  (25)| 00:00:01 |
|*  1 |  COUNT STOPKEY                  |         |       |       |            |          |
|   2 |   VIEW                          |         |     6 |   948 |     4  (25)| 00:00:01 |
|*  3 |    SORT ORDER BY STOPKEY        |         |     6 |   948 |     4  (25)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID | T       |     6 |   948 |     3   (0)| 00:00:01 |
|   5 |      BITMAP CONVERSION TO ROWIDS|         |       |       |            |          |
|   6 |       BITMAP AND                |         |       |       |            |          |
|*  7 |        BITMAP INDEX SINGLE VALUE| BM_IDX1 |       |       |            |          |
|*  8 |        BITMAP INDEX SINGLE VALUE| BM_IDX2 |       |       |            |          |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<20)
   3 - filter(ROWNUM<20)
   7 - access("OWNER"='SCOTT')
   8 - access("OBJECT_TYPE"='TABLE')

Note
-----
   - dynamic sampling used for this statement (level=2)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          7  consistent gets
          0  physical reads
          0  redo size
       1800  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          8  rows processed



but do bear in mind that bitmap indexes only work - ONLY WORK - in a read only/ read mostly system. If you do single row inserts/updates/deletes OR you have more than one session modifying this table concurrently - bitmaps will not be something to consider.


server side cursor

Peter, February 20, 2010 - 1:22 am UTC

Hi Tom

I wrote to you around 2 weeks ago and have just read your reply - I was travelling a lot, sorry for the delay.

Which techniques do you mean?

https://asktom.oracle.com/Misc/oramag/on-rownum-and-limiting-results.html

or

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3235910044216 where it says

create table result_cache
( session_id number,
seq number,
rid rowid,
primary key (session_id,seq)
)
organization index;


Thanks
Tom Kyte
February 25, 2010 - 12:56 am UTC

the first link would be most appropriate to what I was suggesting.



sam

sam, January 24, 2011 - 8:22 am UTC

Tom:

Trying to implement ROW paging for a web page.

I have a REF cursor in a stored procedure that runs this SQL in database A, which connects to database B (same machine) via a link.

All the tables are in database A.

This was taking 12 seconds to run. I moved the VIEW to database B and that reduced the time to 7 seconds.

When I run the query for the ref cursor locally in B it runs fast, in under 1/2 second. It uses the indexes and stats locally.

Is there a way to guarantee that my application in database A runs the whole thing in B? I tried the hint /*DRIVING_SITE*/ but still got the same performance.

The problem is that the query uses the USERID of the vendor who logs in and the two paging variables. I can't make that part of the LOCAL view because I can't pass parameters to the VIEW.

Any ideas?

select *
  from ( select a.*, rownum r
           from ( select *
                    from VIEW@B_link
                   where vendor = p_userid
                     and prod_stage in ('AB','EF')
                     and end_date is null
                   order by bk_no, bk_med ) a
          where rownum <= MAX_ROWS )
 where r >= MIN_ROWS

paging

sam, February 02, 2011 - 3:06 pm UTC

Tom:

Don't you have any HINT for running all of this query in database "B" and passing only the results back to "A"?

It runs in 0.05 secs in "B" and 5 secs in "A".
My issue is that the query parameters depend on what the user enters in the webpage in "A", and I can't pass that to a view.

select *
  from ( select a.*, rownum r
           from ( select *
                    from VIEW@B_link
                   where vendor = p_userid
                     and prod_stage in ('AB','EF')
                     and end_date is null
                   order by bk_no, bk_med ) a
          where rownum <= MAX_ROWS )
 where r >= MIN_ROWS


Tom Kyte
February 03, 2011 - 2:23 pm UTC

do you first_rows hint the query blocks?

query

sam, February 03, 2011 - 10:16 pm UTC

Tom:

I did not quite understand your comment.

Do you want me to add a "FIRST_ROWS" hint somewhere to the query and try it to see if it performs better?
Tom Kyte
February 04, 2011 - 9:08 am UTC

yes, that is sort of what I said.
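For illustration only - a sketch of where first_rows hints could be placed in the paging query from the earlier posts. The FIRST_ROWS(25) value is an assumption (use your page size); the view, column and variable names are taken from sam's query above:

```sql
select *
  from ( select /*+ first_rows(25) */ a.*, rownum r
           from ( select /*+ first_rows(25) */ *
                    from VIEW@B_link
                   where vendor = p_userid
                     and prod_stage in ('AB','EF')
                     and end_date is null
                   order by bk_no, bk_med ) a
          where rownum <= MAX_ROWS )
 where r >= MIN_ROWS
```

The hint asks the optimizer to favor a plan that returns the first rows quickly (e.g. reading an index in sorted order and stopping) rather than one that minimizes total throughput.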

order disturbed when order by field have duplicate value

yehui, September 13, 2011 - 1:58 am UTC

We have a table where the order by field has duplicate values. When we use the suggested sql to do paging, some records never get returned.

But changing to the following sql is ok.

select *
  from ( select a.*, rownum r
           from ( select *
                    from t
                   where x = :host_variable
                   order by y ) a
       )
 where r > :LowerBound and r < :HigerBound

The suggested sql changed the original execution plan.

select *
  from ( select a.*, rownum r
           from ( select *
                    from t
                   where x = :host_variable
                   order by y ) a
          where rownum < :HigerBound )
 where r > :LowerBound
Tom Kyte
September 13, 2011 - 6:23 am UTC

read:

http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html


you need to sort by something deterministic - your sort key must be unique.

Just add rowid after y and use the original query. Order by y, rowid.
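Applying that to the paging query from the original answer - appending rowid (which is unique within a table) to the order by makes the sort key deterministic even when y has duplicates, so every row lands on exactly one page:

```sql
select *
  from ( select a.*, rownum r
           from ( select *
                    from t
                   where x = :host_variable
                   order by y, rowid ) a
          where rownum < :HigerBound )
 where r > :LowerBound
```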


Excluding rownum from result

Alex, May 16, 2012 - 3:10 am UTC

select *
  from ( select a.*, rownum r
           from ( select *
                    from t
                   where x = :host_variable
                   order by y ) a
          where rownum < :HigerBound )
 where r > :LowerBound

It always includes R in the result. But how can the same method be realized with R excluded from the result?
Tom Kyte
May 16, 2012 - 5:56 am UTC

select the columns you want.

You should never be using * in production code anyway. It is good for a quick ad-hoc query or a demo, but not for real code.

Excluding rownum from result

Alex, May 16, 2012 - 9:09 am UTC

SELECT *
  FROM ( SELECT t1.*, rownum r
           FROM ( SELECT c1, c2, c3 FROM t ) t1
          WHERE rownum < 7 )
 WHERE r > 2

Ok in this query I selected the columns I want (c1,c2 and c3). How to exclude the R column from the result?
Tom Kyte
May 17, 2012 - 2:29 am UTC

umm, the other "*"

???? you really couldn't answer that yourself?
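Concretely, with the question's hypothetical columns c1, c2 and c3 - naming them in the outermost select, instead of using *, leaves the R pseudocolumn out of the final result:

```sql
select c1, c2, c3
  from ( select t1.*, rownum r
           from ( select c1, c2, c3 from t ) t1
          where rownum < 7 )
 where r > 2
```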

Excluding rownum from result

Alex, May 16, 2012 - 9:29 am UTC

Also, I have to add that the query (SELECT c1,c2,c3 FROM t) is passed to a function which just returns it wrapped as the query shown above. So the columns of the incoming query are unknown.
Tom Kyte
May 17, 2012 - 2:37 am UTC

sorry, it is what it is.

they will either have to accept R

or they will have to tell you the list of columns they ultimately want.

Suggest it would be easiest for them to accept the existence of R at the end of the select list.

Dirty Read

Yuan, November 20, 2012 - 1:40 pm UTC

Doesn't this method of paging cause a dirty read? Let's say there are items 1 through 20 and you're looking at page 1, where each page shows 10 items. Then, before you click next, item 7 gets deleted. Wouldn't the next page show items 12 - 20, so that you would incorrectly never have seen item 11?
Tom Kyte
November 20, 2012 - 3:36 pm UTC

that is not a dirty read - you are in a stateless web environment. Unless you want to maintain state, the issue you describe is a fact of life - each query is independent.


However, in Oracle you do have an opportunity to avoid that.

Just add "as of SCN :x" or "as of timestamp :x" to the tables in the from list - use flashback query to query the data as of a point in time.
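A sketch of that idea, using the paging query from the original answer. The :page1_scn bind is a hypothetical name - the idea is to capture the SCN once when page 1 is built, carry it from page to page, and run every page's query as of that SCN:

```sql
-- capture the SCN once, when page 1 is rendered, e.g.:
-- select dbms_flashback.get_system_change_number from dual;

-- every subsequent page queries the data as of that same point in time,
-- so rows deleted or inserted after page 1 cannot shift the page boundaries
select *
  from ( select a.*, rownum r
           from ( select *
                    from t as of scn :page1_scn
                   where x = :host_variable
                   order by y ) a
          where rownum < :HigerBound )
 where r > :LowerBound
```

Note that undo must still be available for the flashback point; very long-lived page sessions could hit ORA-01555.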

As Of Would Work!

Yuan, November 21, 2012 - 11:32 am UTC

You're right, the as of feature would solve the problem I raised. Thanks!

FIRST_ROWS Hint for Pagination

BILU, August 15, 2013 - 11:46 am UTC

Hi Tom,

I have one question regarding the use of the First_rows hint in a pagination query. First_rows helps to retrieve the first, say, 25 rows faster, but what if I am trying to fetch rows 1000 to 1025? Will it be slower?
Tom Kyte
August 28, 2013 - 4:42 pm UTC

It would just be like google - if you go to page 1 of a result in google - it takes less time than to get to page 99 of a search result.

And google STOPS you at page 99 (try to go to page 100 - you cannot get there).

google never goes to 1000-1025, that is just "too far" into a result set for a human being to process.

and just like that - using first_rows, each subsequent page will take a bit longer to get to - each page+1 will take longer and use more resources than the page before it did.



ORDER BY for the outer query?

David Balažic, September 21, 2013 - 12:03 am UTC

Hi!

In the "classic" pagination statement:
select *
  from ( select /*+ FIRST_ROWS(n) */
                a.*, ROWNUM rnum
           from ( your_query_goes_here,
                  with order by ) a
          where ROWNUM <= :MAX_ROW_TO_FETCH )
 where rnum >= :MIN_ROW_TO_FETCH;


the outer SELECT has no ORDER BY clause. Does that mean the final result set has no deterministic order? Or does it return rows in the order produced by the inner query?
(I know that in practice the results are ordered, but it occurred to me that "theoretically" they are not guaranteed to be ordered - or I missed something fundamental.)

Thanks,
David Balažic