getting rows N through M of a result set
Tom Kyte

Thanks for the question, Rajesh.

Asked: May 02, 2000 - 1:21 pm UTC

Last updated: May 03, 2022 - 2:46 am UTC

Viewed 100K+ times!

You Asked

I would like to fetch data after joining 3 tables and
sorting based on some field. As this query returns approximately
100 records, I would like to cut the result set into 4 pieces of
25 records each, and I would like to give a sequence number to each
record. Can I do this using SQL*Plus?




and Tom said...



In Oracle8i, release 8.1 -- yes.

select *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= MAX_ROWS )
 where rnum >= MIN_ROWS
/

that'll do it. It will *not* work in 8.0 or before.
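For example, to fetch rows 26 through 50 (the second page of 25) of a three-table join, with rnum supplying the sequence number the question asked for -- a sketch only, the table and column names below are made up for illustration:

select *
  from ( select a.*, rownum rnum
           from ( select e.ename, d.dname, l.city
                    from emp e, dept d, loc l
                   where e.deptno = d.deptno
                     and d.loc_id = l.loc_id
                   order by e.ename ) a
          where rownum <= 50 )
 where rnum >= 26
/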



Comments

Pagination

Karthik, August 03, 2001 - 1:25 am UTC

It was to the point and very, very useful.
I will keep pestering you with more questions in the weeks to come.

Yes,

A reader, September 25, 2001 - 1:21 am UTC

it was useful.

Lifesaver....

Robert Jackson, October 16, 2001 - 4:28 pm UTC

This information was invaluable... I would have had to "kludge" something....

Parag

Parag Mehta, March 31, 2002 - 5:30 am UTC

Tom :

Great .... I think Ora ( Oracle ) has been made for u.
I am highly impressed by ur answere.

Regards
- Parag

Tom Kyte
March 31, 2002 - 9:07 am UTC

you = u
your = ur

is your keyboard broken such that Y and O do not work anymore? cle might be the next to go.

(there are enough abbreviations and three letter acronyms in the world, do we really have to make it HARDER to read stuff everyday by making up new ones all of the time)

Upset

Parag, March 31, 2002 - 10:24 am UTC

I am very Upset with "YOUR" Behaviour. I have not expected the same from " YOU". You could have convey the same in a different Professional Words.

For " YOUR" kind information Dear Tom , My KEYBOARD has not broken down at all. It's working perfectly.


With you tom on 'YOUR' comment on 'u' or 'ur'

Sean, March 31, 2002 - 5:54 pm UTC

Mr. Parag,

You just way over reacted.

U R GR8

Mark A. Williams, April 01, 2002 - 8:57 am UTC

Tom,

Maybe you could put something on the main page indicating appropriate use of abbreviations? Although, now that I think about it, it probably wouldn't do much good, as it appears people ignore what is there (and on the 'acceptance' page) anyway...

- Mark

Tom Kyte
April 01, 2002 - 10:08 am UTC

Already there ;)

It's my new crusade (along with bind variables). But yes, you are correct -- most people don't read it anyway.

You would probably be surprised how many people ask me "where can I read about your book" -- surprising given that it is right there on the home page...

Saw it was there after the fact

Mark A. Williams, April 01, 2002 - 10:27 am UTC

Tom:

Saw that you had added the message about the abbreviations after the fact. That's what I get for having my bookmark point to the 'Search/Archives' tab instead of the main page...

- Mark

A reader, April 01, 2002 - 11:37 am UTC

Excellent query. I just want to be sure I understand it.
You run the query 4 times, each time changing the MAX and MIN rownumbers. Correct?

Tom Kyte
April 01, 2002 - 1:06 pm UTC

You just change min and max to get different ranges of rows, yes.
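In SQL*Plus that could look like the following sketch (the table t and its id column are hypothetical):

variable min number
variable max number

exec :min := 26; :max := 50

select *
  from ( select a.*, rownum rnum
           from ( select * from t order by id ) a
          where rownum <= :max )
 where rnum >= :min
/

Re-execute with :min := 51 and :max := 75 for the next page; only the binds change, so the statement text stays the same.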

Very good

Natthawut, April 01, 2002 - 12:18 pm UTC

This will be useful for me in the future.
Thanks.

PS. Don't listen to Mr. Parag. He just envies you ;)

between

Mikito harakiri, April 01, 2002 - 7:39 pm UTC

Returning to the old discussion about difference between

select p.*, rownum rnum
from (select * from hz_parties ) p
where rownum between 90 and 100

vs

select * from (
select p.*, rownum rnum
from (select * from hz_parties ) p
where rownum < 100
) where rnum >= 90

I claim that they are identical from a performance standpoint. Indeed, the plan for the first one

SELECT STATEMENT 20/100
  VIEW 20/100
    Filter Predicates: from$_subquery$_001.RNUM>=90
    COUNT (STOPKEY)
      Filter Predicates: ROWNUM<=100
      TABLE ACCESS (FULL) hz_parties 20/3921

seems to be faster than

SELECT STATEMENT 20/100
  COUNT (STOPKEY)
    Filter Predicates: ROWNUM<=100
    FILTER
      Filter Predicates: ROWNUM>=90
      TABLE ACCESS (FULL) hz_parties 20/3921


But note that all nodes in the plan are non-blocking! Therefore, it doesn't matter which condition is evaluated earlier...


Tom Kyte
April 01, 2002 - 8:51 pm UTC

Please don't claim -- benchmark and PROVE (come on -- I do it all of the time).

Your first query "where rownum between 90 and 100" never returns ANY data. That predicate will ALWAYS evaluate to false -- always.

I've already proven in another question (I believe it was with you again) that

select * from ( 
   select p.*, rownum rnum
           from (select * from hz_parties ) p
          where rownum < 100
) where rnum >= 90

is faster than:

select * from ( 
   select p.*, rownum rnum
           from (select * from hz_parties ) p
) where rnum between 90 and 100

which is what I believe you INTENDED to type.  It has to do with the way we process the COUNT(STOPKEY) and the fact that we must evaluate 

   select p.*, rownum rnum
           from (select * from hz_parties ) p

AND THEN apply the filter, whereas the other will find the first 100 AND THEN stop.

so, say I have an unindexed table:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from big_table;

  COUNT(*)
----------
   1099008

(a copy of all_objects over and over and over) and I run three queries.  Yours to show it fails (no data), what I think you meant to type and what I would type:

select p.*, rownum rnum
  from ( select * from big_table ) p
 where rownum between 90 and 100

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      6.17      15.31      14938      14985         81           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      6.17      15.31      14938      14985         81           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  COUNT STOPKEY
      0   FILTER
1099009    TABLE ACCESS FULL BIG_TABLE


Your query -- no data found.... Look at the number of rows inspected, however.



select *
from (
select p.*, rownum rnum
  from ( select * from big_table ) p
)
 where rnum between 90 and 100

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      7.93      17.03      14573      14986         81          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      7.93      17.03      14573      14986         81          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
1099008   COUNT
1099008    TABLE ACCESS FULL BIG_TABLE

What I believe you meant to type in -- again -- look at the rows processed!

Now, what I've been telling everyone to use:


select * from (
   select p.*, rownum rnum
           from (select * from big_table ) p
          where rownum < 100
) where rnum >= 90

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          1          7         12          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          1          7         12          10

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  VIEW
     99   COUNT STOPKEY
     99    TABLE ACCESS FULL BIG_TABLE


HUGE difference.  Beat that...

Claims -- don't want 'em.
Benchmarks, metrics, statistics -- love 'em -- want 'em -- need 'em.




 

Over the top!

Thevaraj Subramaniam, April 01, 2002 - 10:30 pm UTC

Tom, I am really very impressed with the way you prove it with examples and explanations. Answering questions from around the world, and at the same time facing hurdles along the way and overcoming them. You are the best! I will always support asktom.oracle.com. Cheers.

Thank goodness!

Jim, April 03, 2002 - 1:25 am UTC

Tom,

Liked the solution and your new rule.

You have my vote on the rule not to use "u" for you
and "ur" for your. It's not clever; it simply makes
things harder to read. In fact, I think it's just plain
lazy.

Anyone that doesn't like it can simply ask someone else.


between

Mikito harakiri, April 03, 2002 - 3:37 pm UTC

Thanks Tom. I finally noticed that you have rownum in one predicate and rnum in the other and they are different:-)

sql>select * from (
2 select p.*, rownum rnum
3 from (select * from hz_parties ) p
4 where rownum < 100
5 ) where rnum >= 90

Statistics
----------------------------------------------------------
7 consistent gets
5 physical reads

The best solution I was able to get:

appsmain>select * from (
2 select * from (
3 select p.*, rownum rnum
4 from (select * from hz_parties ) p
5 ) where rnum between 90 and 100
6 ) where rownum < 10

Statistics
----------------------------------------------------------
15 consistent gets
5 physical reads

It's neither faster, nor more elegant:-(

actual "between" test

Mikito harakiri, April 03, 2002 - 8:32 pm UTC

Tom,

Sorry, but I see no difference:

public static void main(String[] args) throws Exception {
    Class.forName("oracle.jdbc.driver.OracleDriver");
    System.out.println(execute("select * from (select p.*, rownum rnum "
            + " from (select * from hz_parties ) p "
            + " where rownum < 100 "
            + " ) where rnum >= 90 "));
    System.out.println(execute("select * from ( \n"
            + " select p.*, rownum rnum "
            + " from (select * from hz_parties ) p "
            + " ) where rnum between 90 and 100"));
}

static long execute( String query ) throws Exception {
    Connection con = DriverManager.getConnection("jdbc:oracle:thin:@dlserv7:1524:main", "apps", "apps");
    con.setAutoCommit(false);

    con.createStatement().execute("alter system flush shared_pool");
    long t1 = System.currentTimeMillis();
    ResultSet rs = con.createStatement().executeQuery(query);
    rs.next();
    rs.next();
    rs.next();
    rs.next();
    rs.next();
    rs.next();
    long t2 = System.currentTimeMillis();

    con.rollback();
    con.close();
    return t2 - t1;
}

Both queries return in 0.6 sec. Here is my interpretation: the "between" query, in a context where we open the cursor, read the first rows, and then discard the rest, is essentially the same as the "between" query with a stopcount (that goofy sql in my last reply). The execution engine doesn't seem to go forward and check the between predicate for the whole table, or does it?

Tom Kyte
April 04, 2002 - 11:31 am UTC

TKPROF, TKPROF, TKPROF.

that's all you need to use.

This query:


select *
  from ( select p.*, rownum rnum
           from ( YOUR_QUERY )
          where rownum < 100 )
 where rnum >= 90


runs your query, gathers the first 100 rows, and stops. IF YOUR_QUERY must materialize all of the rows before it can get the first row (eg: it has certain constructs like group bys and such) -- then the difference in your case may not be as large -- but it's there. Use TKPROF to get RID of the java overhead in the timings (timing in a client like that isn't very reliable).
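A minimal way to produce such a TKPROF report from SQL*Plus -- a sketch; the trace file name and its directory (user_dump_dest) vary by platform and configuration:

alter session set timed_statistics = true;
alter session set sql_trace = true;

-- run the query (or queries) to be timed, then log off and
-- format the resulting trace file on the server:

$ tkprof your_trace_file.trc report.prf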

Consider:

here we obviously don't need to get the last row before the first row -- it's very "fast"

select *
  from ( select p.*, rownum rnum
           from ( select owner, object_name, object_type
                    from big_table ) p
          where rownum <= 100 )
 where rnum >= 90

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.00         63          7         12          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.00         63          7         12          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
    100   COUNT STOPKEY
    100    TABLE ACCESS FULL BIG_TABLE



Now, let's add an aggregate -- here we do have to process all rows in the table. HOWEVER, since the rownum is pushed down as far as we can push it -- we can do some suboptimizations that make this faster:


select *
  from ( select p.*, rownum rnum
           from ( select owner, object_name, object_type, count(*)
                    from big_table
                   group by owner, object_name, object_type ) p
          where rownum <= 100 )
 where rnum >= 90

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      5.78      18.08      14794      14985         81          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      5.79      18.08      14794      14985         81          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
    100   COUNT STOPKEY
    100    VIEW
    100     SORT GROUP BY STOPKEY
1099008      TABLE ACCESS FULL BIG_TABLE

Lastly, we'll do it your way -- here we don't push the rownum down, the chance for optimization is gone, and you run really slow:

select *
  from ( select p.*, rownum rnum
           from ( select owner, object_name, object_type, count(*)
                    from big_table
                   group by owner, object_name, object_type ) p )
 where rnum between 90 and 100

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2     20.15     112.44      24136      14985        184          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5     20.15     112.47      24136      14985        184          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
  17172   COUNT
  17172    VIEW
  17172     SORT GROUP BY
1099008      TABLE ACCESS FULL BIG_TABLE


I guess, at the end of the day, it is up to you. I can only show you that it is faster so many times. In the end -- it is your choice.

In your case, this is what I am guessing:

o hz_parties is a view (I recognize it from apps)
o it's a view that gets the last row before it can get the first
o the number of rows you can see is not significant (maybe a thousand or so, something that fits in RAM nicely)
o the rownum optimization in your case doesn't do much -- if you see the tkprof, you'll be able to quantify what it does for you.


In general I can say this:

you would be doing the wrong thing to use "where rnum between a and b" when you can push the rownum DOWN into the inner query and achieve PHENOMENAL performance gains in general. But again, that is your choice.


nuff said




Performance difference

Ken Chiu, July 25, 2002 - 5:25 pm UTC

The 1st query below is more than half faster than the 2nd query, please explain what happened ?

select b.*
(Select * from A Order by A.Id) b
where rownum<100

select * from
(select b.*,rownum rnum
(Select * from A Order by A.Id) b
where rownum<100)
and rnum >= 50

thanks.


Tom Kyte
July 25, 2002 - 10:35 pm UTC

half faster... Hmmm.... wonder what that means.

I can say that (after fixing your queries) -- My findings differ from yours. In my case, big_table is a 1,000,000 row table and I see:

big_table@ORA920.US.ORACLE.COM> set autotrace traceonly
big_table@ORA920.US.ORACLE.COM> select b.*
2 from (Select * from big_table A Order by A.Id) b
3 where rownum<100
4 /

99 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=15735 Card=99 Bytes=141000000)
   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=15735 Card=1000000 Bytes=141000000)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TABLE' (Cost=15735 Card=1000000 Bytes=89000000)
   4    3         INDEX (FULL SCAN) OF 'BIG_TABLE_PK' (UNIQUE) (Cost=2090 Card=1000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
9701 bytes sent via SQL*Net to client
565 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99 rows processed

big_table@ORA920.US.ORACLE.COM>
big_table@ORA920.US.ORACLE.COM> select * from
2 (select b.*,rownum rnum
3 from (Select * from big_table A Order by A.Id) b
4 where rownum<100)
5 where rnum >= 50
6 /

50 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=15735 Card=99 Bytes=15246)
   1    0   VIEW (Cost=15735 Card=99 Bytes=15246)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=15735 Card=1000000 Bytes=141000000)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TABLE' (Cost=15735 Card=1000000 Bytes=89000000)
   5    4           INDEX (FULL SCAN) OF 'BIG_TABLE_PK' (UNIQUE) (Cost=2090 Card=1000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
13 consistent gets
0 physical reads
0 redo size
5667 bytes sent via SQL*Net to client
532 bytes received via SQL*Net from client
5 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
50 rows processed

big_table@ORA920.US.ORACLE.COM>
big_table@ORA920.US.ORACLE.COM> set autotrace off
big_table@ORA920.US.ORACLE.COM> spool off



The second query is more efficient than the first.

Food for thought

Mike Moore, September 09, 2002 - 7:46 pm UTC

The testing shows stats for a range at the beginning of a large table. I wonder what the stats look like when selecting rows 999000 thru 999100 ... in other words, rows at the end of a large table?
I'd try it myself if I could.

Tom Kyte
September 09, 2002 - 8:09 pm UTC

Every subsequent query as you page down can get slower and slower (go to google, you'll see that there as well).

HOWEVER, common sense says that no end user will have the patience, 10 or 25 rows at a time, to get to rows 999000 thru 999100 -- even google cuts you off WAY before you get crazy. A result set that large is quite simply meaningless for us humans.

But then again, you can go to asktom, search for something, and keep paging forward till you get bored. It is true you'll get 18,000 hits at most since that's all that's in there so far -- but you'll NEVER have the patience to get to the end.


Sort of like the old commercial, if you remember the wise old owl: "how many licks does it take to get to the center of a tootsie pop" (I think the owl only got to three before he just bit the lollipop). For those not in the US and who didn't grow up in the 70's -- ignore that last couple of sentences ;)



Food for thought (cont)

Michael J. Moore, September 10, 2002 - 9:34 pm UTC

Good point! I mean about nobody actually paging through that much data. I confess that I don't completely understand how to read an EXPLAIN PLAN, so my question is only intended to prove to myself that I do or don't understand what is actually going on. Suppose a person wanted to use your SELECT technique for choosing rows N thru M towards the end of a large table, as I earlier suggested. Maybe they are not using it for paging, but for some bizarre twilight zone reason that is what they want to do. Is it true that one could expect the performance of the SELECT to degrade as ranges deeper and deeper into the table are selected? If 'yes' then I say 'great, I understand what is happening.' If 'no', then I say, "darn, I still don't have a clue."
As for the 70's, I voted for McCarthy, but Dick Nixon won.

Tom Kyte
September 11, 2002 - 7:36 am UTC

I would order the result set backwards and get the first page instead of the last (flip the order of the data around).

Yes, it'll take longer to get the last N rows than the first N rows in general (not every time, but you can reasonably expect it to be the case).

problem in query

Ankit Chhibber, September 21, 2002 - 3:09 am UTC

I tried this query on an ordered view; the view has about 7000 records with eventseverity as 64.

select *
  from ( select fmeventsview.*, rownum rnum
           from ( select * from fmeventsview where EventSeverity = 64 ) fmeventsview
          where rownum <= 500 )
 where rnum > 0;

but I get just 234 rows in the result set.

If I fire the embedded query

select fmeventsview.*, rownum rnum
  from ( select * from fmeventsview where EventSeverity = 64 ) fmeventsview
 where rownum <= 500

I do get 500 records with RNUM values from 1-500.

I don't know where I am goofing up :-(
Please advise on the same.



Tom Kyte
September 21, 2002 - 11:14 am UTC

I hate views with order bys. Add the order by to the query itself. The order by doesn't have to be specifically obeyed in the view once you start doing wacky things to the query. It must be throwing off the rownum somehow -- but not having a test case to play with, I cannot say.

Getting rows 10,00,001 to 10,00,010 - Query taking forever to execute

Brijesh, September 22, 2002 - 4:35 am UTC

Hi Tom,
The query which you've shown is very good and works very fast within a range of 100,000 to 150,000 rows, but when trying to get rows beyond 500,000 it takes a minute to do so.

The query :

select fatwaid, fatwatitle
  from ( select a.*, rownum r
           from ( select * from fatwa order by fatwaid ) a
          where rownum <= &upperbound )
 where r >= &lowerbound

when executed with 150001 and 150010, it gives me the following output and plan:

10 rows selected.

Elapsed: 00:00:02.01

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=826 Card=150010 Bytes=11700780)
   1    0   VIEW (Cost=826 Card=150010 Bytes=11700780)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=826 Card=1282785 Bytes=83381025)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'FATWA' (Cost=826 Card=1282785 Bytes=2837520420)
   5    4           INDEX (FULL SCAN) OF 'PK_FATWA' (UNIQUE) (Cost=26 Card=1282785)

When executed with values of 1000001 and 1000010, the following is the plan and time:

10 rows selected.

Elapsed: 00:01:01.08

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=826 Card=1000010 Bytes=78000780)
   1    0   VIEW (Cost=826 Card=1000010 Bytes=78000780)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=826 Card=1282785 Bytes=83381025)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'FATWA' (Cost=826 Card=1282785 Bytes=2837520420)
   5    4           INDEX (FULL SCAN) OF 'PK_FATWA' (UNIQUE) (Cost=26 Card=1282785)


How Can I speed up the process of getting last rows?

Tom Kyte
September 22, 2002 - 10:12 am UTC

Nope, no go -- this is good for paging through a result set. Given that HUMANS page through a result set and pages are 10-25 rows and we as humans would NEVER in a billion years have the patience to page down 100,000 times -- it is very workable.

Perhaps you want to order by DESC and get the first page?

(think about it -- to get the "last page", one must iterate over all of the preceding pages. a desc sort would tend to read the index backwards)
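A sketch of that reversal, using the fatwa table from the question: instead of asking for rows 1,000,001 through 1,000,010 of the ascending order, ask for the first page of the descending order:

select fatwaid, fatwatitle
  from ( select a.*, rownum r
           from ( select * from fatwa order by fatwaid desc ) a
          where rownum <= 10 )
 where r >= 1
/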



invalid column name exception

Ankit Chhibber, October 04, 2002 - 5:32 am UTC

Hi,
when this query is fired simultaneously (from a Java application using JDBC) from multiple threads, oracle sometimes gives an exception "invalid column name" :-(
can you please explain the reason ???


Tom Kyte
October 04, 2002 - 8:25 am UTC

Umm, magic. A bug. Programmer error. I don't know.

Sounds like time to file a TAR with support. One would need tons of information such as (and don't give it to me, give it to support) type of driver used, version of driver, version of db, a test case (as small as humanly possible) that can be run to reproduce the issue.

that last part will be the hard part maybe. but you should be able to start up a small java program with a couple of threads that all just wildly parse and execute queries until it eventually hits this error.

An old question revived again

Ankit Chhibber, October 21, 2002 - 12:41 pm UTC

Hi Tom,
I am using your query to do a lot of DB operations :-). I am reading records 1000 at a time based on your approach. When there are 100,000 records in the DB (this is an acceptable situation, that is what people tell me :-) ), the fetch for the first 1000 rows takes about 50 seconds :-(. (Our OS is Solaris, and I use JDBC for accessing the DB.)
I am doing a sort (order by) on one of the primary keys.
Can you suggest some way of improving the performance here ???

It would be of real help

regards
Ankit

Tom Kyte
October 21, 2002 - 1:13 pm UTC

</code> http://asktom.oracle.com/~tkyte/tkprof.html <code>

use that tool (sql_trace + TIMED_STATISTICS) to see the query plan, rows flowing through the steps of the plan and use that as your jump off point for tuning.

You might be a candidate for FIRST_ROWS optimization.

Why 1000 rows? 25 or 100 is more reasonable. But anyway -- it is probably the fact that you need to sort 100k rows each time -- check your sort area size as well.
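A sketch of the FIRST_ROWS idea applied to the same pagination template (your_table and pk_col are placeholders; it assumes the sort key is indexed and NOT NULL, so the index can be read in order instead of sorting 100k rows):

select *
  from ( select /*+ FIRST_ROWS */ a.*, rownum rnum
           from ( select * from your_table order by pk_col ) a
          where rownum <= 1000 )
 where rnum >= 1
/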

Getting rows 10,00,001 to 10,00,010 - Query taking forever to execute

Brijesh, October 22, 2002 - 12:57 am UTC

Now I've got it;
it's just a matter of thinking why a user would page through all the 100,000 pages to get to 100,001.

Even I have searched on google many times but never went beyond the tenth page.

Thanks for all you are doing for developers,
and for the reply.
Regards Brijesh


get the count for my query

Cesar, November 11, 2002 - 1:27 pm UTC

How can I get the count in my query?

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- HOW DO I GET HOW MANY ROWS ARE HERE? ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS



Excellent stuff

Harpreet, December 12, 2002 - 12:56 am UTC

hi
I was having the same problem for a few days of how to do pagination. Howard suggested I look into your site and I found the answer, with some very good discussions.

this is really good work.



how to group it by

A reader, December 17, 2002 - 2:45 pm UTC

I have table t as

select * from t;


T1 COLOR
---------- --------------------
1 PINK
2 GREEN
3 BLUE
4 RED
5 YELLOW

select * from xabc;
COLOR C_DATE
-------------------- -----------------
RED MON
RED TUE
RED WED
RED THU
RED FRI
RED SAT
RED SUN
PINK MON
PINK TUE
PINK WED


now I need to get the result set as follows

COLOR C_DATE
-------------------- -----------------
RED MON
RED TUE
RED WED
RED THU
PINK MON

because red = 4 in t and pink = 1 in t
how to do it ?

TIA

Tom Kyte
December 18, 2002 - 10:54 am UTC

does not compute. No idea what you mean. so what if red = 4 and pink = 1?

Thanks,

A reader, December 17, 2002 - 4:37 pm UTC

don't spend time answering that -- I got it !!



1 select p.*
2 from (
3 select x.color,x.c_date,
4 row_number() over (partition by x.color order by c_date) r
5 from xabc x,t
6 where x.color = t.color
7 ) p , t
8 where p.color = t.color
9* and r <= t.t1
nydev168-->/

COLOR C_DATE R
-------------------- -------------------- ----------
PINK MON 1
RED FRI 1
RED MON 2
RED SAT 3
RED SUN 4




Thanks :)

Scrollable cursors

A reader, December 18, 2002 - 5:53 pm UTC

Tom,

Are scrollable cursors (9.2) available in pl/sql and jdbc, or only pro c/c++?

If not, when will this feature become available from pl/sql?

Tom Kyte
December 19, 2002 - 7:14 am UTC

jdbc has them.

I cannot imagine a case whereby plsql would need/desire them. I can see their usefulness in a situation where you have a client/server stated connection and want to page up/down through a result set -- but plsql does not lend itself to that sort of environment. We rely on the client to do that (eg: something like forms, or jdbc). In a stored procedure -- when would you want to "go backwards"?

what if red = 4 and pink = 1?

A reader, December 20, 2002 - 11:22 am UTC

it means there should be only 4 rows returned for red
and only 1 row should be returned for pink, even if there are 10 rows for pink



Master Oracle Guru

Denise, February 05, 2003 - 4:08 pm UTC

Tom

I wish I had 1/5 of your knowledge... every time I come
here seeking answers and solutions you always seem to
hit the target head on... and then top it off with superb
code that is easy to understand and apply.

Every time I come here my questions are answered and I learn
something new.

I am DEFINITELY buying your book!

as a newbie I can't express enough how important it is
for those of us venturing into this brave new world of
Oracle to have someone of your stature, expertise & knowledge paving the way.

I think your(errrrr...'ur') TERRIFIC!!!
Denise


Helena Markova, February 13, 2003 - 2:52 am UTC


Excellent.

Chandra S.Reddy, February 20, 2003 - 8:17 am UTC

Hi Tom,
You are really great. This solution is very much useful for me.
I believe there will not be much resource utilization with this approach.
Is that right, Tom?

Tom Kyte
February 20, 2003 - 8:25 am UTC

there will be as much resource utilization as needed to process the query?

How can I do this in sql?

A reader, February 22, 2003 - 3:22 am UTC

Tom,
If I want to return a set of n records from each group of records based upon a key -- for example, the data is like this:
store customer qty
1 10 10
1 100 20
1 1000 30
.......................
2 20 20
2 200 200
...........
...........
I want to return any two records from each group of store, i.e., two records for each store.
Thanks

Tom Kyte
February 22, 2003 - 10:48 am UTC

select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;


would do it -- or




Thanks a lot !

A reader, February 22, 2003 - 5:12 pm UTC

Tom,
This is regarding followup:
"select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;
would do it -- or"

1.What is or....?
2.Where can I find more about such type of queries?

Thanks

Tom Kyte
February 22, 2003 - 5:17 pm UTC

ignore the or ;)


analytics are documented in

o sql reference manual
o data warehousing guide

and I think the write up I have on them in my book "Expert one on one Oracle" is pretty understandable if you have that - I have a chapter on analytics.

row_number is one of about 40 analytic functions we have

Upset

A reader, February 23, 2003 - 3:31 pm UTC

u r the hard 1 not Parag

Tom Kyte
February 23, 2003 - 3:52 pm UTC

maybe if you use real words next time, I'll actually understand what you are trying to communicate.

it is not that much to ask, is it?

That is what is known as a rhetorical question, I don't expect a response. The answer is "no, when communicating, it is not too much to ask people to use a common agreed upon language as opposed to making one up"...

so, don't go away mad, just....


Rownum

phunghung, February 25, 2003 - 5:33 am UTC

Excellent !!!
It's very useful for me.
Special thanks :D

Pagination in other scenario.

Chandra S.Reddy, February 28, 2003 - 4:35 am UTC

Tom,
This is further R&D on the approach you have provided.
What if someone wants to get records M to N of Dept Number instead of Emp Number?

select * from (select tmp1.*, rownum1 rnum
from (select e.* from scott.emp e, scott.dept d where e.deptno = d.deptno)tmp1,
(select deptno, rownum rownum1 from scott.dept)tmp2
where tmp1.deptno = tmp2.deptno and
rownum1 <= :End ) where rnum >= :Start ;
/


Tom Kyte
February 28, 2003 - 10:01 am UTC

Well, there are other ways to fry that fish.

analytics rock and roll:

scott@ORA920> select dept.deptno, dname, ename,
2 dense_rank() over ( order by dept.deptno ) dr
3 from emp, dept
4 where emp.deptno = dept.deptno
5 /

DEPTNO DNAME ENAME DR
---------- -------------- ---------- ----------
10 ACCOUNTING CLARK 1
10 ACCOUNTING KING 1
10 ACCOUNTING MILLER 1
20 RESEARCH SMITH 2
20 RESEARCH ADAMS 2
20 RESEARCH FORD 2
20 RESEARCH SCOTT 2
20 RESEARCH JONES 2
30 SALES ALLEN 3
30 SALES BLAKE 3
30 SALES MARTIN 3
30 SALES JAMES 3
30 SALES TURNER 3
30 SALES WARD 3

14 rows selected.

scott@ORA920>
scott@ORA920> variable x number
scott@ORA920> variable y number
scott@ORA920>
scott@ORA920> exec :x := 2; :y := 3;

PL/SQL procedure successfully completed.

scott@ORA920>
scott@ORA920> select *
2 from (
3 select dept.deptno, dname, ename,
4 dense_rank() over ( order by dept.deptno ) dr
5 from emp, dept
6 where emp.deptno = dept.deptno
7 )
8 where dr between :x and :y
9 /

DEPTNO DNAME ENAME DR
---------- -------------- ---------- ----------
20 RESEARCH SMITH 2
20 RESEARCH ADAMS 2
20 RESEARCH FORD 2
20 RESEARCH SCOTT 2
20 RESEARCH JONES 2
30 SALES ALLEN 3
30 SALES BLAKE 3
30 SALES MARTIN 3
30 SALES JAMES 3
30 SALES TURNER 3
30 SALES WARD 3

11 rows selected.

Using dates is giving error.

Chandra S.Reddy, March 02, 2003 - 9:03 am UTC

Tom,
Very nice to see many approaches to implementing pagination.

When I try to implement one of your methods, I get some problems.

Issue #1.

Please see below.

SQL> create or replace procedure sp(out_cvGenric OUT PKG_SWIP_CommDefi.GenCurTyp) is
  2  begin
  3  
  4  OPEN out_cvGenric FOR 
  5  select *
  6      from (
  7    select dept.deptno, dname, ename,to_char(hiredate,'dd-mm-yyyy'),
  8           dense_rank() over ( order by dept.deptno ) dr
  9      from emp, dept
 10     where emp.deptno = dept.deptno and hiredate between '17-DEC-80' and '17-DEC-82'
 11           )
 12  where dr between 2 and 3;

 19  end ;
 20  /

Warning: Procedure created with compilation errors.
SQL> show err;

LINE/COL ERROR
-------- -----------------------------------------------------------------
8/28     PLS-00103: Encountered the symbol "(" when expecting one of the
         following:
         , from

I worked around this problem by keeping the query in a string (OPEN out_cvGenric FOR 'select * from ... ') and using the USING clause. It worked fine.

Why is this error happening, Tom?

Issue #2.

Please check the code below. This is my actual implementation; the above is the PL/SQL shape of your answer.

procedure sp_clips_reports_soandso (
                in_noperationcenterid in number,
                in_dreportfromdt in  date , 
                in_dreporttodt in date ,
                in_cusername in varchar2,
                in_ntirestatuscode in number,
                in_cwipaccount in varchar2,
                in_npagestart in  number,
                in_npageend in  number ,
                out_nrecordcnt out number ,
                out_nstatuscode out number,
                out_cvgenric out pkg_clips_commdefi.gencurtyp,
                out_cerrordesc out varchar2) is

            v_tempstart    number(5) ;
            v_tempend    number(5) ;
begin
        out_nstatuscode := 0;

            select count(tire_trn_number) into out_nrecordcnt
            from    t_clips_tire 
            where     redirect_operation_center_id = in_noperationcenterid
                and    tire_status_id = in_ntirestatuscode
                and    tire_date >= in_dreportfromdt
                and tire_date <= in_dreporttodt
                and wip_account = in_cwipaccount ;

        if in_npagestart =  -1 and in_npageend = -1 then
        
            v_tempstart    := 1;
            v_tempend    := out_nrecordcnt ;
        else
              v_tempstart :=   in_npagestart ;
              v_tempend :=    in_npageend ;

        end if ;
open out_cvgenric for 
'select *
    from (
  select tire.tire_trn_number tiretrnnumber,
                    to_char(tire.tire_date,''mm/dd/yy''),
                    tire.tire_time,
                    tire.direct_submitter_name user_name,
                dense_rank() over ( order by tire.tire_trn_number ) dr
            from    t_clips_tire tire,
                t_clips_afs_transaction transactions,
                t_clips_transaction_code transactionscd
            where
                tire.tire_trn_number = transactions.tire_trn_number and
                transactions.tran_code = transactionscd.tran_code and 
                redirect_operation_center_id = :opp and
                tire.tire_status_id = :stcode  and
                tire.wip_account = :wip and
                tire.tire_date > :reportfromdt and
                tire.tire_date < :reporttodt and
            order by transactions.tire_trn_number,tran_seq
         )
where dr between :start and :end' using in_noperationcenterid,in_ntirestatuscode,in_cwipaccount,v_tempstart,v_tempend;

end sp_clips_reports_soandso;
/
show err;
no errors.
sql> var out_cvgenric refcursor;
sql> var out_nstatuscode  number; 
sql> declare
  2  out_cerrordesc varchar2(2000) ;
  3  --var out_nrecordcnt number ;
  4  begin
  5  sp_clips_reports_soandso(4,'16-feb-02', '16-feb-03',null,2,'0293450720',1,10,:out_nrecordcnt, :out_nstatuscode ,:out_cvgenric,out_cerrordesc);
  6  dbms_output.put_line(out_cerrordesc);
  7  end ;
  8  /
declare
*
error at line 1:
ora-00936: missing expression
ora-06512: at "CLIPStest2.sp_clips_reports_soandso", line 40
ora-06512: at line 5

In the above code the query is in a string, and the program got compiled.
But while calling it, it shows errors.
If I remove "tire.tire_date > :ReportFromDt and tire.tire_date < :ReportToDt" from the WHERE clause, the query works fine and gives results.
If the dates are in the query, it goes wrong.

This pagination in the SP will remove much burden from the application server. But unfortunately I am not coming up with the solution.

Could you please provide me the solution.
Thanks in advance.

 

Tom Kyte
March 02, 2003 - 9:32 am UTC

1) see

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3027089372477

same issue -- same workaround in 8i and before, native dynamic sql or a view

2) why are you counting -- that is a very very very bad idea.  First -- the answer can and will change (your count is a "guess").  Second, it is the best way to make a system CRAWL to a halt.  "Oh, I think I'll do a bunch of work and then do it all over again".  Time to double up the machine.

you have a buggy sql statement -- two things I see straight off:

        tire.tire_date < :reporttodt and
            order by transactions.tire_trn_number,tran_seq
         )

AND ORDER BY -- you are "missing an expression" in there.

Second - you are missing a pair of binds.  I see 5 "using" variables but count 7 binds.


it is not the dates in the query -- it is the invalid query itself.


Suggestion -- this is how I diagnosed this -- cut and paste the query into sqlplus, change '' into ' globally in the query.  and run it after doing "variable" statements:

ops$tkyte@ORA920> variable opp varchar2(20)
ops$tkyte@ORA920> variable stcode varchar2(20)
ops$tkyte@ORA920> variable wip varchar2(20)
ops$tkyte@ORA920> variable reportfromdt varchar2(20)
ops$tkyte@ORA920> variable reporttodt varchar2(20)
ops$tkyte@ORA920> variable start varchar2(20)
ops$tkyte@ORA920> variable end varchar2(20)
ops$tkyte@ORA920>
ops$tkyte@ORA920> select *
  2      from (
  3    select tire.tire_trn_number tiretrnnumber,
  4                      to_char(tire.tire_date,'mm/dd/yy'),
  5                      tire.tire_time,
  6                      tire.direct_submitter_name user_name,
  7                  dense_rank() over ( order by tire.tire_trn_number ) dr
  8              from    t_clips_tire tire,
  9                  t_clips_afs_transaction transactions,
 10                  t_clips_transaction_code transactionscd
 11              where
 12                  tire.tire_trn_number = transactions.tire_trn_number and
 13                  transactions.tran_code = transactionscd.tran_code and
 14                  redirect_operation_center_id = :opp and
 15                  tire.tire_status_id = :stcode  and
 16                  tire.wip_account = :wip and
 17                  tire.tire_date > :reportfromdt and
 18                  tire.tire_date < :reporttodt and
 19              order by transactions.tire_trn_number,tran_seq
 20           )
 21  where dr between :start and :end
 22  /
            order by transactions.tire_trn_number,tran_seq
            *
ERROR at line 19:
ORA-00936: missing expression


Now it becomes crystal clear where the mistake is.

Using dates is giving error.

Chandra S.Reddy, March 02, 2003 - 9:39 am UTC

Hi Tom,
In my previous question with the same title, the USING clause is wrong. The bind variables for the 'tire_date' fields are missing. It was wrongly pasted. Sorry for that.
Please find the correct one below.
--
USING in_noperationcenterid,in_ntirestatuscode, in_cwipaccount,in_dreportfromdt, in_dreporttodt,v_tempstart, v_tempend ;
----

Thank you very much.

Tom Kyte
March 02, 2003 - 9:55 am UTC

still -- missing expression -- figure it out, not hard given information I already supplied.

Thank you.

A reader, March 02, 2003 - 11:05 am UTC

Tom,
Thank you for the suggestion.
COUNT is a bad idea, but I have to return this to the application. The application will decide the pagination factor depending on the number of records, so I am using count there.



Why does between not work?

Errick, March 26, 2003 - 10:27 am UTC

Tom,
I've been reading through this set of posts, and was curious: why exactly does "between 90 and 100" not work, whereas just "select * from bigtable where rownum < 100" works? Maybe I'm missing something from the article. Just curious.

Tom Kyte
March 26, 2003 - 3:59 pm UTC

because rownum starts at 1 and is incremented only when a row is output.

so,

select * from t where rownum between 90 and 100 would be like this:


rownum := 1;
for x in ( select * from t )
loop
    if ( rownum between 90 and 100 )
    then
        output the row;
        rownum := rownum + 1;
    end if;
end loop;

nothing ever comes out of that loop.
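A quick way to see it for yourself (scott's emp table, used elsewhere on this page, has 14 rows):

select ename from emp where rownum between 2 and 3;  -- no rows selected
select ename from emp where rownum <= 2;             -- returns 2 rows

Every candidate row is tested as rownum 1, fails "between 2 and 3", and is not output -- so rownum never advances past 1.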

Let me understand it better...

David, April 07, 2003 - 8:49 am UTC

Tom,

I am a DBA and I am sometimes a bit confused when it it comes to supporting web applications.

The web development people have asked me how to implement pagination, since their connection is stateless.

I would like to submit the query one time only, but I ended up creating something like below, which "re-parses", "re-executes" and "re-fetches" for each page:

select * from
(select b.*,rownum rnum
from (Select * from big_table a order by a.id) b
where rownum < :max )
where rnum >= :min ;

1) To my knowledge, each time I do this I have to "re-parse", "re-execute" and "re-fetch" the data. The bind variable values are kept and incremented for each page in the application. Is this a good approach ?

2) Wouldn't it be better if I could return the entire set (with first_rows) ?

3) How would be a mechanism for that (how would I code that) ?

4) Using this last approach, couldn't I do some kind of "pipelining" so the rows are returned to the application, submitting the query only once and without having to return the whole set -- since the entire table is too large.

Thanks


Tom Kyte
April 07, 2003 - 1:39 pm UTC

1) yes, it is what I do. Short of maintaining a connection and becoming a client server application -- there is no real avoiding this.

Me -- I'd rather have to parse (soft) for each page than to keep a physical, private connection (and all of the associated resources) open for that user IN CASE they hit page forward.

2) and you have a 500 row result set -- and the user only looks at the first ten -- and never ever goes to page 2? So you do 50 times the LIO you need to? Probably at least 40 times as much LIO as you ever would (pages are like 10 rows and the users NEVER go to page 11).


No, you want to do as little work as possible, save in the knowledge that people get bored and go away after hitting page down once or twice.

3) you'd be on your own...

4) that would be back to "i'm client server, I always have a stated connection, I always consume massive resources on your machine -- even when I'm not using it"

A Belated Defence of Parag

Richard, April 07, 2003 - 11:43 am UTC

With reference to Parag's use of abbreviations: Parag's meaning was clear; it must have been to you, too, or you wouldn't have known to put u = you, ur = your.

Yes, the world IS awash with abbreviations (3-letter and otherwise) and acronyms, but that's because they usually (as in Parag's case) make perfect sense and would be likely to confuse or befuddle only the elderly and the infirm!

yrs etc.,

Richard

Tom Kyte
April 07, 2003 - 2:22 pm UTC

Elmer Fudd here,

Weww, I disagwee. You see, I gets wots of qwestions -- some in pewfect engwish, some in bwoken engwish, some in foweign wanguages. Oh, dat scwewy wabbit! I twy to pawse these qwestions -- make sense of them and evewy woadbwock someone puts in thewe makes it hawd fow me to do that. Just wike weading this pawagwaph is hawd fow you now. I do not think it is too much to ask to use pwopew wowds in a fowum wike this. Oh, dat scwewy wabbit! Dis is NOT a ceww phone hewe, this is not instant messaging. Dis is a discussion pwace, a pwace to wead things. Oh, dat scwewy wabbit! Using made up things just makes it hawdew to undewstand. I don't ask fow too many things, this is one that I keep asking fow though.

that really hard to read text brought to you by the dialectizer:
</code> http://www.rinkworks.com/dialect/ <code>


Well, I disagree. You see, I gets lots of questions -- some in perfect english, some in broken english, some in foreign languages. I try to parse these questions -- make sense of them and every roadblock someone puts in there makes it hard for me to do that.

Just like reading this paragraph is hard for you now.

I do not think it is too much to ask to use proper words in a forum like this. This is NOT a cell phone here, this is not instant messaging. This is a discussion place, a place to read things. Using made up things just makes it harder to understand.

I don't ask for too many things, this is one that I keep asking for though.

Sending results to the Internet application

B. Robinson, April 07, 2003 - 12:10 pm UTC

DBA David,

It is not just that the connections are stateless, but the connections are pooled and rotated such that there may be a different database connection used for every web page request from a given user.

So the only way to avoid requerying for every subset of the large result set would be to return the whole massive result set to the web app, and the web app would cache all the results in memory, reading each subset from memory as needed. But since this would require the entire result set to be read from the database, it would make more sense to use all_rows.

Naturally, that approach uses up gobs of memory on the app server or web server, so it may not be feasible for a web app with thousands of users.

Tom Kyte
April 07, 2003 - 2:24 pm UTC

the connection from the client (browser) to the app server is stateless.

time

A reader, April 07, 2003 - 5:56 pm UTC


just a note.

on Tom's site,

loading the first 3-4 pages is very fast, about < 2 secs.
When we go to pages 490-500 of 501, it takes 10 sec. to load a very simple page.

Tom Kyte
April 07, 2003 - 6:38 pm UTC

and it gets worse the further you go. My stuff is optimized to get you the first rows fast -- I do not give you the ability to go to "row 3421" -- what meaning would that have in a search like this?


google search for Oracle


Results 1 - 10 of about 6,840,000. Search took 0.11 seconds.
Results 91 - 100 of about 7,800,000. Search took 0.24 seconds.
Results 181 - 190 of about 6,840,000. Search took 0.49 seconds.
(wow, that's wacky - the counts change too)
Results 811 - 820 of about 6,840,000. Search took 0.91 seconds.
Results 901 - 908 of about 6,840,000. Search took 0.74 seconds.

what? they cut me off -- I'm sure my answer was 909, I'm just sure of it!

Results xxx of about xxxxxx

A reader, April 08, 2003 - 6:54 am UTC

I recently went through a load of code removing every count(*) that was issued before the actual query, put there by a developer before I came on the project.

It was amazing the argument I had with the (PHP) web developer about it. I just made the change and let the users decide if they liked the improved performance more than the missing bit of fairly pointless information. Guess what they preferred!

The thing that is missing is the "results 1-10 of about 500" (or whatever), which would be useful. The user might well want to know if there are just a few more records to look at, in which case it might well be worth paging, or whether there are lots, so that they would know to refine the search.

I know Oracle Text can do this sort of thing, but is there anything that can help in "Standard" Oracle? Using Oracle Text would need quite a re-write of the system.

What we could do is have the application ask for 21 rows of data. If the cursor came back with 10-20 more rows, the screen would say ">> 7 more rows" (or whatever), and if it hits the 21, then display ">> at least 11 more rows".

Have you any comments?

Thanks

Tom Kyte
April 08, 2003 - 7:54 am UTC

...
The thing that is missing is the "results 1-10 of about 500" (or whatever),
.....

if using Oracle Text queries (like I do here) there is an API for that.

if using the CBO in 9i -- you can get the estimated cardinality for the query in v$sql_plan...
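A sketch of pulling that estimate from v$sql_plan in 9i (:hv stands for the statement's hash value, which you would look up in v$sql; access to the v$ views is required):

select cardinality
  from v$sql_plan
 where hash_value = :hv
   and id = 0
/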




For: Srinivas M

A Reader, April 08, 2003 - 9:00 am UTC

Hi,

All those fieldx IS LIKE '''' OR fieldX IS NULL .... what is that for?!! Don't you just want fieldX is null?? Anyway... maybe I missed something...

I'm sure Tom will have lots to say on this, and apologies for 'butting in', but I thought I'd give my opinion and if it's off base at least I'll learn :)

Do you need that table to be deleted and inserted into each time (it looks like a pseudo-temporary table)? All that looping and fetching -- and it looks to me like if you had a llimit of 1000, you are going to fetch and do nothing with 1000 rows??! Can't you change your query to use the constructs Tom has already defined in this article, i.e.

SELECT * FROM (YOUR QUERY IN HERE, BUT SELECTING rownum BACK ALSO) WHERE rownum BETWEEN llimit and ulimit

??

Then I suspect you don't need your table, your delete, your loops and fetches, you can just open this and return the cursor.

Regards,

Paul

A reader, April 08, 2003 - 9:06 am UTC

Hi Srinivas,
Sorry to jump in between, but I would like to say one thing. Tom has already given us his views and coding tips and tricks. Let's not waste his time by asking him to correct our code. I think this site provides us enough knowledge and tools. The only thing required on our part is applying it correctly and doing some research.


Screwy Rabbit!

Richard, April 08, 2003 - 10:28 am UTC

Hi,

Elmer Fudd... priceless! Seldom has an explanation been so funny! Point taken, though.

How about always translating your pages? Daffy Duck's my favourite.

Wegards,

Wichard

is this the proc. you are using for your site ?

A reader, April 08, 2003 - 3:23 pm UTC

is this the proc. you are using for your site ?


if you bind the variable in a session and the
http connection is stateless, how will you
do it?

please explain

Tom Kyte
April 08, 2003 - 5:47 pm UTC

yes, this is the procedure I use here...


the "bind variables" are of course passed from page to page -- in my case I use a sessionid (look up at that really big number in the URL) and your session "state" is but a row in a table to me.

Hidden fields, cookies -- they work just as well.

Thanks

A reader, April 08, 2003 - 6:14 pm UTC


Want a trick on this

DeeeBeee Crazeee, April 28, 2003 - 8:35 am UTC

Hi Tom,

I just wanted to know if there is a trick for combining multiple rows into a single row with values comma separated.

For example, I have the department table :

Dept:

Dept_name
---------
ACCOUNTS
HR
MARKETING

I need a query that would return me...

Dept_name
---------
ACCOUNTS, HR, MARKETING

....is there a way with SQL or do we have to use PL/SQL? The number of rows is not fixed.

thanks a lot

PS: Just wanted to check if I can post my questions here (in this section, without asking afresh).... because I just happened to come across a page wherein a reader was apologizing for having asked a question in the comments. Do let me know on this, so that I can apologise too when I ask you a question in this section the next time ;)



Tom Kyte
April 28, 2003 - 8:48 am UTC

search this site for

stragg
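For readers on later releases: 11g added the built-in LISTAGG aggregate, which does this directly. A sketch against the Dept table above:

select listagg(dept_name, ', ') within group (order by dept_name) dept_names
  from dept;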



What about this ?

A reader, May 16, 2003 - 2:40 pm UTC

I happened to find this in an article on pagination:

select rownum, col1
from foobar
group by rownum, col1
having rownum >= :start and rownum < :end

What do you think ? How does it compare to your solution to the original question ?

Tom Kyte
May 16, 2003 - 5:30 pm UTC

try it, it doesn't work.


set start to 10 and end to 15

you'll never get anything.

the way to do it is above; my method works and is the most efficient method (as of May 16, 2003 -- maybe some day in the future there will be another, more efficient method)

paging result set

lakshmi, May 17, 2003 - 4:38 am UTC

Excellent

Dynamic order by using rownum

vinodhps, May 27, 2003 - 6:17 am UTC

Hi Tom,
our current Oracle version is 8.0.4. I have one query which has to be ordered dynamically, i.e. if Max_ind is X then the low_value column has to be ordered ascending, or if Max_ind is N then the low_value column has to be ordered descending. But how could I do that in the query? I am using this in my form.
In the below query, whether the order by is DESC or ASC depends on the value passed for max_ind (N or X).

SELECT insp_dtl.test_value,
insp_dtl.lab_test_sno,
purity.low_value, purity.high_value,
purity.pro_rata_flag, purity.pro_rata_type,
purity.cumulative_flag, purity.incr,
purity.prcnt, purity.flat_rate,
purity.cal_mode, NVL (purity.precision, 1) precision,
purity.min_max_ind
FROM t_las_matl_insp_hdr insp_hdr,
t_las_matl_insp_dtl insp_dtl,
t_pur_po_matl_purity_fact_dtl purity
WHERE insp_hdr.lab_test_sno = insp_dtl.lab_test_sno
AND insp_hdr.cnr_no = 200300905
AND purity.po_no = 200200607
-- AND purity.matl_code = f_matl_code
AND purity.para_code = insp_dtl.para_code
-- AND purity.para_code = f_para_code
ORDER BY low_value;




LAB_TEST_SNO LOW_VALUE HIGH_VALUE Max_ind
------------ --------- ---------- ---------
200300208 1.1 1.5 X
200300208 1.1 2 N
200300208 1.6 2 N
200300208 86 87.9 X
200300208 88 89.9 N

Tom Kyte
May 27, 2003 - 7:53 am UTC

great, thanks for letting us know?

Not really sure what you are trying to say here.

dynamically order by clause

vinodhps, May 27, 2003 - 9:08 am UTC

Hi Tom,
Thanks for your immediate response.
Well, I will put my question this way:

SQL> create table order_by
  2  (low_value number(5),
  3   max_ind   varchar2(1));

Table created.


  1  insert into order_by
  2  select rownum ,'X' from all_objects
  3  where rownum < 10
  4* order by rownum desc
SQL> /

9 rows created.


  1  insert into order_by
  2  select rownum ,'N' from all_objects
  3  where rownum < 10
  4* order by rownum
SQL> /

9 rows created.

Now I would like to select all the values from the table by passing a value for max_ind (an indicator of whether it is a maximum or minimum value). Here, if I pass the value X then the query's order by clause must be descending, or else it should be ascending. Actually, it is a cursor.

SQL> select low_value  from order_by order by low_value desc;

LOW_VALUE
---------
        9
        9
        8
        8
        7
        7
        6
        6
        5
        5
        4
        4
        3
        3
        2
        2
        1
        1

18 rows selected.

This DESC or ASC will be decided dynamically.

Is it possible to do it dynamically, Tom?

I think the above statements are clear.

Tom Kyte
May 27, 2003 - 9:42 am UTC

you would use native dynamic sql to get the optimum query plan.

l_query := 'select .... order by low_value ' || p_asc_or_desc;

where p_asc_or_desc is a variable you set to ASC or DESC.


that would be best.

you can use decode, but you'll never use an index to sort with if that matters to you


order by decode( p_input, 'ASC', low_value, 0 ) ASC,
decode( p_input, 'DESC', low_value, 0 ) DESC
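A minimal PL/SQL sketch of the native dynamic SQL version (the procedure name is illustrative; SYS_REFCURSOR assumes 9i or later -- on 8i you would declare your own weak ref cursor type). The input is validated so that only ASC or DESC can ever be concatenated:

create or replace procedure get_ordered(
    p_asc_or_desc in  varchar2,
    p_cursor      out sys_refcursor )
as
begin
    -- never concatenate unchecked input into SQL
    if p_asc_or_desc not in ( 'ASC', 'DESC' )
    then
        raise_application_error( -20001, 'sort order must be ASC or DESC' );
    end if;

    open p_cursor for
        'select low_value from order_by order by low_value ' || p_asc_or_desc;
end;
/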




Thank you tom

vinodhps, May 27, 2003 - 10:11 am UTC

Thank you, Tom, for your immediate response...

Hope to see more from you.

Thank you,


Very useful, thanks Tom. One more question.

Jack Liu, June 02, 2003 - 3:10 pm UTC

1. Did you get the total result number by count(*)? Does this paging need to know the count(*) or not? Because the select count takes a longer time.

2. How do I optimize the order by? The query below takes only 3s, but 44s with the order by:
select * from
( select qu.*, rownum rnum
from ( select issn,volume,issue,foa,title,author,subtitle,a.aid,rtype
from article a , ec.language l where 1=1
AND rtype in ('ART','REV','SER')
AND a.aid=l.aid AND l.langcode='eng'
AND a.issue is not null ORDER BY a.ayear desc ) qu
where rownum < 61)
where rnum >= 31


Tom Kyte
June 02, 2003 - 3:33 pm UTC

1) no, i use text's "get me an approximation of what you think the result set size might be" function. (it's a text query for me)

2) /*+ FIRST_ROWS */

do you have an index on a.ayear?
is ayear NOT NULL?

if so, it could use the index, with first_rows, to read the data descending.

Very, Very...Helpful

Ralph, June 02, 2003 - 6:18 pm UTC

Along those lines... How can we get the maximum number of rows that will be fetched? i.e., to be able to show 1-10 of 1000 records, how do we know that there are 1000 records in total without writing another select with count(*)?

Tom Kyte
June 02, 2003 - 8:10 pm UTC

you don't -- all you need to show is

"you are seeing 1-10 of more then 10, hit next to see what might be 11-20 (or maybe less"


If you use text, you can approximate the result set size.
If you use the CBO and 9i, you can get the estimated cardinality from v$SQL_PLAN
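
For the v$sql_plan approach, a minimal sketch (9i; you would look up the statement's address and hash_value in v$sql first, then read the optimizer's row estimate from the top-most row source that has a non-null cardinality):

select id, operation, options, cardinality
  from v$sql_plan
 where address    = :address      -- from v$sql for your query
   and hash_value = :hash_value
 order by id;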



Very helpful, thanks Tom, follow up with my question.

Jack, June 03, 2003 - 1:53 pm UTC

Tom,
Thanks for your quick response. This is really a very good place for Oracle Users. Just follow up my original question:

1) Is this "get me an approximation of what you think the result set size might be" function only in Oracle Text? If I use Oracle Intermedia text, any solution to show total result?

2) a.ayear is indexed but has some NULLs. I know it cannot use the index to replace the order by in this situation, but when I use /*+ INDEX_ASC (article article_ayear) */ it doesn't work either. Why? The optimizer mode is "choose" per svrmgrl> show parameter optimizer_mode;

Many many thanks.

Jack
I am planning to buy "expert one-on-one".


Tom Kyte
June 03, 2003 - 2:02 pm UTC

1) the approximation I'm showing is from text and only works with text.

you can get the estimated cardinality from an explain plan for other queries -- in 9i, that is right in v$sql_plan so you do not need to explain the query using explain plan

2) you answered your own question. The query does not permit the index to be used since using the index would miss NULL entries -- resulting in the wrong answer.

can you add "and a.ayear IS NOT NULL" or "and a.ayear > to_date( '01010001','ddmmyyyy')" to the query. then, an index on ayear alone can be used.


better be quick on the book purchase (bookpool still has it as of jun/3/2003 -- see link on homepage)

Publisher went under, book no longer printed ;(
New book in august though ;)
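
To make the index advice concrete, a sketch of the rewrite using Jack's query from above (IS NOT NULL is shown; the to_date predicate works just as well):

select /*+ FIRST_ROWS */ issn, volume, issue, foa, title, author, subtitle, a.aid, rtype
  from article a, ec.language l
 where rtype in ('ART','REV','SER')
   and a.aid = l.aid
   and l.langcode = 'eng'
   and a.issue is not null
   and a.ayear is not null  -- makes the single-column index on ayear safe to use
 order by a.ayear desc;

Then wrap that in the rownum pagination query exactly as before.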

Thank you for your quick response!

Jack, June 03, 2003 - 3:39 pm UTC

Tom,
Thanks. Actually I want the order to be ascending by ayear; since I thought Oracle defaults to descending for an index, that's the reason I use the /*+ INDEX_ASC (article article_ayear) */ hint.
I just don't know why it doesn't work. Here is the explain plan with the INDEX_ASC hint -- I don't know why the plan still shows CHOOSE.
SELECT STATEMENT Hint=CHOOSE                  6K              6911
  VIEW                                        6K    1M        6911
    COUNT STOPKEY
      NESTED LOOPS                            6K  736K        6911
        TABLE ACCESS FULL LANGUAGE            6K  113K          56
        TABLE ACCESS BY INDEX ROWID ARTICLE  63K    5M           1
          INDEX UNIQUE SCAN SYS_C004334      63K

Thanks,

Jack



Tom Kyte
June 04, 2003 - 7:33 am UTC

Oracle uses indexes ASCENDING by default.

I told you why the index cannot be used -- ayear is nullable, using that index would (could) result in missing rows that needed to be processed.

hence, add the predicate I described above to make it so that the index CAN in fact be used.

paging and a join

marc, June 05, 2003 - 1:42 pm UTC

Which way would be better with a large table and the user wants to see an average of 500 rows back. The query has a main driving table and a 2nd table that will only be used to show a column's data. The 2nd table will not be used in the where or the order of the main select.

option 1 (all tables joined in the main select):

select name,emp_id,salary from (
select a.*, rownum rnum from (
SELECT emp.name,emp.emp_id,salary.salary FROM EMP,SALARY
where zip = something and
EMP.emp_id = salary.emp_id order by name
) a where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

option 2 (only the driving table is in the main select and the join is done at the higher level; then Oracle would only have to join the two tables for the data the user will actually see):
select pg.name, pg.emp_id, salary.salary
  from ( select a.*, rownum rnum
           from ( select emp.name, emp.emp_id
                    from emp
                   where zip = something
                   order by name ) a
          where rownum <= MAX_ROWS ) pg,
       salary
 where pg.emp_id = salary.emp_id
   and pg.rnum >= MIN_ROWS


Tom Kyte
June 05, 2003 - 1:44 pm UTC

users don't want to see 500 rows. 500 rows -- waayyy too much data, you cannot consume that much information...

option 1. using first_rows hint.


it'll only access the other table as much as it needs to and is much easier to understand.

but benchmark it on your data, that'll tell YOU for sure what is best in YOUR case.
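
A sketch of option 1 with the hint in place (bind names are placeholders):

select name, emp_id, salary
  from ( select /*+ FIRST_ROWS */ a.*, rownum rnum
           from ( select emp.name, emp.emp_id, salary.salary
                    from emp, salary
                   where zip = :zip
                     and emp.emp_id = salary.emp_id
                   order by emp.name ) a
          where rownum <= :max_rows )
 where rnum >= :min_rows;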

marc, June 06, 2003 - 2:25 pm UTC

My users must be super users, because they would like to see all that data (page by page of course) so they can eyeball a forecast and see a trend. These people are brokers that look at one column of the dataset, which is price per share (money), so they do look at many rows at a time. The extra data is superfluous. 500 rows is nothing for these users.

I was asking your opinion on whether you think it is better to do the join in the main dataset or in the pagination piece. My performance tuning told me that whether the join belongs in the main query or the outer query depends on the type of data I need to show. For example, a join to get the name can be in the main select, but a (select count(trades) from othertable) works better in the pagination section.


Tom Kyte
June 06, 2003 - 2:55 pm UTC

as long as the query is a "first rows" sort of query that can terminate with a COUNT STOPKEY -- the join can go anywhere.

Using index with order by

Jon, July 15, 2003 - 6:21 am UTC

Will an index be used with an order by if the table has already been accessed via another index? I thought the CBO would only work with one index per table (except with bitmap indexes).

I'm working on returning search results. Users want first 2000 rows (I know, I know... what will they do with it all - was originally top 500 rows, but that wasn't enough for them). The main table is already being accessed via another index to limit the result set initially. Explain Plan tells me that the index on the order by column is not being used. How to use the index for ordering?

Actually as I'm writing this, I think the answer came to me - concatenated indexes - of form (limit_cols, order_by_col), and then include the leading index column(s) in the order by clause.

Secondly, if I work with a union clause on similar, but not identical, queries, can an index be used for ordering in this case?

E.g.
select * from (
select * from (
select ... from x,y,z where ...
union all
select ... from x,y,w where ...
) order by x.col1
) where rownum <= 2000

or would we get better results with this approach:

select * from (
select * from (
select * from (
select ... from x,y,z where ...
order by x.col1
) where rownum <= 2000
union all
select * from (
select ... from x,y,w where ...
order by x.col1
) where rownum <= 2000
) order by x.col1
) where rownum <= 2000

So, if the result is partially sorted, does an order by perform better than if not sorted (this brings back memories of sorting algorithms many years ago...)? I would think yes - but I'm not sure of Oracle's internal sort algorithm?

Tom Kyte
July 15, 2003 - 9:56 am UTC

if you use the "index to sort", how can you use another index "to find"?

You can either use an index to sort OR you can use an index to find, but tell me -- how could you imagine using both?

Your concatenated index will work in some cases -- yes.


the 2nd approach -- where you limit all of the subresults -- will most likely be the better approach.


You cannot go into the "does an order by perform better ....", that is so far out of the realm of your control at this point as to be something to not even think about.
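
A sketch of the concatenated-index idea on a hypothetical table (names invented): lead the index with the filter column, follow with the sort column, and repeat the leading column in the ORDER BY so one index can both find the rows and hand them back sorted:

create index t1_filter_sort on t1( filter_col, sort_col );

select /*+ FIRST_ROWS */ *
  from t1
 where filter_col = :val
 order by filter_col, sort_col;  -- leading index column in the order by, so the index supplies the sort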

Jon, July 15, 2003 - 7:14 pm UTC

"how could you imagine using both?" - not sure I understand you here. Wanting to use two indexes is a common requirement - so I can easily imagine it:

select *
from emp
where hire_date between to_date('01/01/2002','DD/MM/YYYY')
and to_date('01/02/2002','DD/MM/YYYY')
order by emp_no

If this was a large table, the ability to use an index to filter and an index to order by would seem advantageous.

As for internal sort algorithms - do you know what Oracle uses - or is it secret squirrel stuff?

Tom Kyte
July 15, 2003 - 7:21 pm UTC

so tell me -- how would it work, give us the "pseudo code", make it real.

Hmmm...

Jon, July 16, 2003 - 10:22 am UTC

I mean Oracle does that fancy index combine operation with bitmap indexes. I guess I'll just have to build it for you.

Tell you what, if I come up with a way of doing something similar for b*tree's, I'll sell it to Oracle... then I'll retire :-)

Tom Kyte
July 16, 2003 - 10:48 am UTC

Oh, we can use more than one index

we have index joins -- for example:


create table t ( x int, y int );

create index t_idx1 on t(x);
create index t_idx2 on t(y);

then select x, y from t where x = 5 and y = 55;

could range scan both t_idx1, t_idx2 and then hash join them together by rowid.


We have bitmaps whereby we can AND and OR bitmaps together...



BUT - I want you to explain an algorithm that would permit you to

a) range scan one index in order to locate data
b) use another index to "sort it"


None of the multi-index approaches "sort" data, they are used to find data.
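
A sketch of requesting such an index join explicitly (the INDEX_JOIN hint; syntax assumed to follow the INDEX hint, so verify on your release):

select /*+ INDEX_JOIN( t t_idx1 t_idx2 ) */ x, y
  from t
 where x = 5
   and y = 55;
-- both indexes can be scanned and hash joined on rowid; the table itself
-- is never visited because the two indexes cover all selected columns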

All this thinking makes my brain hurt.

Jon, July 16, 2003 - 11:46 pm UTC

Well, since we CAN combine two indexes, how about:

1) Use idx1 to range scan
2) Hash join rowids to idx2 to produce result set
3) Do a sort-merge between 2) result set and idx2 to order

The efficiency of doing 2) & 3) over sort of table data would probably depend on cardinality of 1).

More fun for the CBO team and the Oracle mathematics dept...

Tom Kyte
July 17, 2003 - 10:23 am UTC


it would depend on cardinality of 1 and 2 really.

if card of 1 is small but card of 2 is big and you have to (must) full scan idx2 a block at a time to look for matches (we have to inspect every index entry) -- full scanning the index could take a really really long time

step 3 would not be necessary in this scenario as the full scan of index 2 would be 'sorted' and would just probe the hash table you built in 1



To clarify

Jon, July 16, 2003 - 11:50 pm UTC

By sort-merge in 3) I mean a set intersection operation.

getting rows N through M of a result set

Mohan, July 17, 2003 - 8:13 am UTC

Regarding the discussion about pagination of the result set into random chunks and sequencing them:

consider the table customer_data

create table customer_data(custno number, invoiceno number);
insert into customer_data(custno, invoiceno) values(1,110);
insert into customer_data(custno, invoiceno) values(1,111);
insert into customer_data(custno, invoiceno) values(1,112);
insert into customer_data(custno, invoiceno) values(2,1150);
insert into customer_data(custno, invoiceno) values(2,1611);
insert into customer_data(custno, invoiceno) values(3,1127);
insert into customer_data(custno, invoiceno) values(2,3150);
insert into customer_data(custno, invoiceno) values(2,3611);
insert into customer_data(custno, invoiceno) values(3,3127);

The following query will break the result sets based on custno and sequences each chunk.

select b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
  from ( select custno, min(rnum) minrnum
           from ( select rownum rnum, custno, invoiceno
                    from ( select custno, invoiceno
                            from customer_data
                           order by custno, invoiceno ) )
          group by custno ) a,
       ( select rownum rnum, custno, invoiceno
           from ( select custno, invoiceno
                    from customer_data
                   order by custno, invoiceno ) ) b
 where a.custno = b.custno;


Mohan


Tom Kyte
July 17, 2003 - 10:42 am UTC

ok, put 100,000 rows in there and let us know how it goes... (speed and resource usage wise)

It works..

DD, July 17, 2003 - 5:15 pm UTC

<quote>
What about this ? May 16, 2003
Reviewer: A reader

I happened to found this in an article on pagination:

select rownum, col1
from foobar
group by rownum, col1
having rownum >= :start and rownum < :end

What do you think ? How does it compare to your solution to the original
question ?


Followup:
try it, it doesn't work.


set start to 10 and end to 15

you'll never get anything.

the way to do it -- it is above, my method works and is the most efficient
method (as of May 16 2003, maybe some day in the furture there will be another
more efficient method)
</quote>

Tom,
Your reply above states that this does not work. In fact it does work and it MUST work. The group by will be done before the having clause is applied and so we will get the correct result set. Please let me know your views and what it is that makes you think this won't work. Here are my results.

RKPD01> select rownum, object_id from big_table
2 group by rownum, object_id
3 having rownum > 10 and rownum < 15;

ROWNUM OBJECT_ID
---------- ----------
11 911
12 915
13 1091
14 1103


I haven't tried to see if it is efficient, but I wanted to verify why it wouldn't work when it should. Hope to hear from you.

Thanks
DD


Tom Kyte
July 17, 2003 - 7:37 pm UTC

oh, i messed up, saw the having and read it as 'where'

totally 100% inefficient, not a good way to do it. it does the entire result set and then gets rows 10-15

as opposed to my method which gets 15 rows, then throws out the first couple.



getting rows N through M of a result set

Mohan K, July 19, 2003 - 3:00 am UTC

Refer to the review on July 17, 2003

If the custno column is not indexed, then the performance will be a problem.

Run the following scripts to test the above query.


create table customer_data(custno number, invoiceno number);

declare
n1 number;
n2 number;
begin
for n1 in 1..2500 LOOP
for n2 in 1..100 LOOP
insert into customer_data(custno, invoiceno) values(n1, n2);
END LOOP;
END LOOP;
end;
/

commit;

create index customer_data_idx on customer_data(custno);


The first sql statement will create the table. The PL/SQL script will populate the table with 250000 rows. The next statement will create an index.


Now run the query as given below

select b.rnum-a.minrnum+1 slno, a.custno, b.invoiceno from(select custno, min(rnum) minrnum from
(select rownum rnum, custno, invoiceno from (select custno, invoiceno from customer_data order by custno, invoiceno)) group by custno) a,
(select rownum rnum, custno, invoiceno from (select custno, invoiceno from customer_data order by custno, invoiceno)) b
where a.custno=b.custno;

Mohan



Is it Possible?

A reader, July 23, 2003 - 12:39 pm UTC

Hi Tom,

I have a table like this

Name
Date
Amount

Data will be like

User1 01-JAN-03 100
User1 22-JUL-03 20
......
User2 23-JUL-03 90

Is there any way I can get the last 6 (order by date desc) records for each user with a single query?

I need get the output like

User1 22-JUL-03 20
User1 01-JAN-03 100
....
User2 23-JUL-03........

Thank you very much Tom. (I am using 8.1.7)

Tom Kyte
July 23, 2003 - 7:02 pm UTC

select *
  from ( select name, date, amount,
                row_number() over ( partition by name order by date desc ) rn
           from t )
 where rn <= 6;
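
Since date is a reserved word, the query as written needs quoting (or different column names) on a real table; a minimal runnable sketch of the same technique, with hypothetical column names:

create table t ( uname varchar2(30), dt date, amount number );

select *
  from ( select uname, dt, amount,
                row_number() over ( partition by uname order by dt desc ) rn
           from t )
 where rn <= 6;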

For Query example on CUSTOMER_DATA table posted above...

Kamal Kishore, July 23, 2003 - 10:05 pm UTC

It is my understanding that the same output can be produced by using the following query:

SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
       custno,
       invoiceno
FROM   customer_data
WHERE  custno IN (1, 2)
ORDER  BY custno,
          invoiceno
/


I may be understanding it wrong. Maybe Tom can verify this.

I ran the two queries on the CUSTOMER_DATA table (with 250000 rows) and below are the statistics. I ran both queries several times to remove any doubt, but results were similar.

I see a huge performance difference on the two queries.

Waiting for inputs/insights from Tom.
Thanks,

==========================================================

SQL> SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
  2         custno,
  3         invoiceno
  4  FROM   customer_data
  5  WHERE  custno IN (1, 2)
  6  ORDER  BY custno,
  7            invoiceno
  8  /

200 rows selected.

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   WINDOW (SORT)
   2    1     CONCATENATION
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMER_DATA'
   4    3         INDEX (RANGE SCAN) OF 'CUSTOMER_DATA_IDX' (NON-UNIQU
          E)

   5    2       TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMER_DATA'
   6    5         INDEX (RANGE SCAN) OF 'CUSTOMER_DATA_IDX' (NON-UNIQU
          E)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       2859  bytes sent via SQL*Net to client
        510  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        200  rows processed

SQL> SELECT b.rnum - a.minrnum + 1 slno,
  2         a.custno,
  3         b.invoiceno
  4  FROM   (SELECT custno,
  5                 MIN(rnum) minrnum
  6          FROM   (SELECT rownum rnum,
  7                         custno,
  8                         invoiceno
  9                  FROM   (SELECT custno,
 10                                 invoiceno
 11                          FROM   customer_data
 12                          ORDER  BY custno,
 13                                    invoiceno))
 14          GROUP  BY custno) a,
 15         (SELECT rownum rnum,
 16                 custno,
 17                 invoiceno
 18          FROM   (SELECT custno,
 19                         invoiceno
 20                  FROM   customer_data
 21                  ORDER  BY custno,
 22                            invoiceno)) b
 23  WHERE  a.custno = b.custno AND a.custno in (1, 2)
 24  ORDER  BY custno,
 25            invoiceno
 26  /

200 rows selected.

Elapsed: 00:00:20.08

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   MERGE JOIN
   2    1     VIEW
   3    2       COUNT
   4    3         VIEW
   5    4           SORT (ORDER BY)
   6    5             TABLE ACCESS (FULL) OF 'CUSTOMER_DATA'
   7    1     SORT (JOIN)
   8    7       VIEW
   9    8         SORT (GROUP BY)
  10    9           VIEW
  11   10             COUNT
  12   11               VIEW
  13   12                 SORT (ORDER BY)
  14   13                   TABLE ACCESS (FULL) OF 'CUSTOMER_DATA'




Statistics
----------------------------------------------------------
          0  recursive calls
         88  db block gets
       1740  consistent gets
       8679  physical reads
          0  redo size
       2859  bytes sent via SQL*Net to client
        510  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          2  sorts (disk)
        200  rows processed

SQL>
 

Example on customer_data table

Mohan K, July 24, 2003 - 4:06 am UTC

Specify the where clause in the inner query. The same where clause has to be applied twice.

select b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
  from ( select custno, min(rnum) minrnum
           from ( select rownum rnum, custno, invoiceno
                    from ( select custno, invoiceno
                            from customer_data
                           where custno in (2, 3)
                           order by custno, invoiceno ) )
          group by custno ) a,
       ( select rownum rnum, custno, invoiceno
           from ( select custno, invoiceno
                    from customer_data
                   where custno in (2, 3)
                   order by custno, invoiceno ) ) b
 where a.custno = b.custno
/


Mohan


tkprof results on CUSTOMER_DATA query...

Kamal Kishore, July 24, 2003 - 8:50 am UTC

SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
custno,
invoiceno
FROM customer_data
WHERE custno IN (1, 2)
ORDER BY custno,
invoiceno

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.02 0.01 0 8 0 200
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.02 0.01 0 8 0 200



SELECT b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
FROM (SELECT custno, MIN(rnum) minrnum
FROM (SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
WHERE custno IN (1, 2)
ORDER BY custno, invoiceno))
GROUP BY custno) a,
(SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
WHERE custno IN (1, 2)
ORDER BY custno, invoiceno)) b
WHERE a.custno = b.custno

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.04 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.06 0.05 0 16 0 200
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.07 0.10 0 16 0 200


**********************************************************
==========================================================
**********************************************************

SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
custno,
invoiceno
FROM customer_data
ORDER BY custno,
invoiceno

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2501 16.53 19.38 2080 436 50 250000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2503 16.53 19.38 2080 436 50 250000


SELECT b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
FROM (SELECT custno, MIN(rnum) minrnum
FROM (SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
ORDER BY custno, invoiceno))
GROUP BY custno) a,
(SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
ORDER BY custno, invoiceno)) b
WHERE a.custno = b.custno

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.03 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2501 71.99 82.11 5007 872 100 250000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2503 71.99 82.14 5007 872 100 250000


ALL_ROWS or FIRST_ROWS ?

Tatiane, August 05, 2003 - 1:50 pm UTC

After all, using your pagination method, what optimization mode (or goal) should we use ?

Tom Kyte
August 05, 2003 - 2:22 pm UTC

FIRST_ROWS definitely

A reader, August 05, 2003 - 2:41 pm UTC

Maybe FIRST_ROWS_1, 10, 100, 1000 ????

From the 9.2 Reference:

<q>
first_rows_n

The optimizer uses a cost-based approach, regardless of the presence of statistics, and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, 1000).

first_rows

The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.
</q>

What is the difference in this case ?

I still don't get it

Sudha Bhagavatula, August 11, 2003 - 5:04 pm UTC

I'm trying to run this and I get only 25 rows:

select *
from (select cl.prov_full_name full_name,
cl.spec_desc specialty_dsc,
sum(cl.plan_liab_amt) tot_pd,
sum(cl.co_ins_amt+cl.ded_amt+cl.copay_amt) patient_resp,
count(distinct clm10_id) claims
from aso.t_medical_claims_detail cl,
aso.t_employer_groups_data g,
aso.t_categories_data c
where g.emp_super_grp_id||g.emp_sub_grp_id = cl.emp_grp_id
and c.cat_dim_id = g.cat_dim_id
and c.cat_name like 'America%'
and cl.paid_date between to_date('01/01/2003','mm/dd/yyyy')
and to_date('06/30/2003','mm/dd/yyyy')
and prov_full_name not like '*%'
and spec_desc not like '*%'
group by prov_full_name,
spec_desc
order by count(distinct clm10_id) desc )
where rownum < 26
union
select decode(full_name,null,' ', 'All Other Providers') full_name,decode(specialty_dsc,null,' ','y') specialty_dsc,tot_pd,patient_resp,claims
from (select cl.prov_full_name full_name,
cl.spec_desc specialty_dsc,
sum(cl.plan_liab_amt) tot_pd,
sum(cl.co_ins_amt+cl.ded_amt+cl.copay_amt) patient_resp,
count(distinct clm10_id) claims
from aso.t_medical_claims_detail cl,
aso.t_employer_groups_data g,
aso.t_categories_data c
where g.emp_super_grp_id||g.emp_sub_grp_id = cl.emp_grp_id
and c.cat_dim_id = g.cat_dim_id
and c.cat_name like 'America%'
and cl.paid_date between to_date('01/01/2003','mm/dd/yyyy')
and to_date('06/30/2003','mm/dd/yyyy')
and prov_full_name not like '*%'
and spec_desc not like '*%'
group by prov_full_name,
spec_desc
order by count(distinct clm10_id) desc )
where rownum >= 26

Tom Kyte
August 11, 2003 - 6:50 pm UTC

it by its very definition will only ever return 25 rows at most.

"where rownum >= 26" is assured to return 0 records.

rownum is assigned to a row like this:


rownum = 1
loop over potential records in the result set
    if predicate satisfied
    then
        OUTPUT RECORD
        rownum = rownum + 1
    end if
end loop


So, you see -- rownum is ALWAYS 1: since rownum is never >= 26, the predicate is never satisfied, and rownum never gets incremented.
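
The standard fix is to materialize ROWNUM under an alias in an inline view first, exactly as in the original answer at the top of this page; a sketch (the placeholder stands for your full query):

select *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a )
 where rnum >= 26;
-- rnum is a stored column value here, not the live ROWNUM, so ">= 26" works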

So how do I get the rows

Sudha Bhagavatula, August 12, 2003 - 8:34 am UTC

So how do I get the result that I'm trying to achieve ? Can it be done ?

Thanks
Sudha

Tom Kyte
August 12, 2003 - 9:02 am UTC

I don't know -- why don't you phrase IN ENGLISH what you are trying to achieve.

The sql parser in my brain doesn't like to parse big queries and try to reverse engineer what you MIGHT have wanted (given that the question isn't phrased properly in the first place and all)....

This is my question

Sudha Bhagavatula, August 12, 2003 - 9:34 am UTC

I have to create a report showing the top 25 providers based on the number of distinct claims. Get the total for the 25 providers, compute percentages against the total for all the providers, and then total the claims for the providers not in the top 25.

This is how the report should be :

provider #claims %of total

xxxxxxx 1234 14%
yyyyyyy 987 11%
-------


---till the top 25
All other providers 3210 32%

Thanks
Sudha

Tom Kyte
August 12, 2003 - 9:52 am UTC

ops$tkyte@ORA920> /*
DOC>
DOC>drop table t1;
DOC>drop table t2;
DOC>
DOC>create table t1 ( provider int );
DOC>
DOC>create table t2 ( provider int, claim_no int );
DOC>
DOC>
DOC>-- 100 providers...
DOC>insert into t1 select rownum from all_objects where rownum <= 100;
DOC>
DOC>insert into t2
DOC>select dbms_random.value( 1, 100 ), rownum
DOC>  from all_objects;
DOC>*/
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select case when rn <= 25
  2              then to_char(provider)
  3              else 'all others'
  4         end provider,
  5         to_char( round(sum( rtr ) * 100 ,2), '999.99' )  || '%'
  6    from (
  7  select provider, cnt, rtr, row_number() over (order by rtr) rn
  8    from (
  9  select provider, cnt, ratio_to_report(cnt) over () rtr
 10    from (
 11  select t1.provider, count(*) cnt
 12    from t1, t2
 13   where t1.provider = t2.provider
 14   group by t1.provider
 15         )
 16         )
 17         )
 18   group by case when rn <= 25
 19                 then to_char(provider)
 20                 else 'all others'
 21             end
 22   order by count(*), sum(rtr) desc
 23  /

PROVIDER                                 TO_CHAR(
---------------------------------------- --------
69                                           .97%
45                                           .97%
14                                           .97%
99                                           .97%
27                                           .97%
43                                           .97%
5                                            .96%
72                                           .96%
2                                            .96%
61                                           .96%
78                                           .96%
29                                           .95%
92                                           .95%
88                                           .95%
63                                           .95%
91                                           .95%
35                                           .94%
67                                           .93%
77                                           .93%
60                                           .91%
76                                           .91%
55                                           .91%
79                                           .88%
1                                            .48%
100                                          .48%
all others                                 77.24%

26 rows selected.
 

Great solution

Sudha Bhagavatula, August 12, 2003 - 2:27 pm UTC

Tom,

That worked like a charm, thanks !

Sudha



Works great, but bind variables giving bad plan

Mike Madland, August 22, 2003 - 4:57 pm UTC

Hi Tom,

Thanks for a great web site and a great book.

I'm using your awesome paginate query and getting great
results but I'm running into issues with the optimizer
giving me a bad plan when I use bind variables for the
beginning and ending row numbers.  I've tried all kinds
of hints but ended up resorting to dynamic sql to get the
fastest plan.

Do you have any ideas on why my query with the bind
variables is insisting on doing a hash join (and thus is
slower) and if there is any fix?  Thanks in advance.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

SQL> create sequence s;

Sequence created.

SQL> create table t as
  2  select s.nextval pk, object_name, created, object_type
  3   from all_objects;

Table created.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

21158 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

42316 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

84632 rows created.

SQL>  commit;

Commit complete.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

169264 rows created.

SQL> commit;

Commit complete.

SQL> alter table t add constraint pk_t primary key (pk);

Table altered.

SQL> create index t_u on t (lower(object_name), pk);

Index created.

SQL> analyze table t compute statistics
  2    for table for all indexes for all indexed columns
  3  /

Table analyzed.

SQL> set timing on
SQL> alter session set sql_trace=true;

Session altered.

SQL> SELECT t.pk, t.object_name, t.created, object_type
  2    FROM (SELECT *
  3            FROM (select innermost.*, rownum as rowpos
  4                    from (SELECT pk
  5                            FROM t
  6                           ORDER BY LOWER(object_name)
  7                         ) innermost
  8                   where rownum <= 10 )
  9           where rowpos >= 1) pg
 10         INNER JOIN t ON pg.pk = t.pk
 11   ORDER BY pg.rowpos;

        PK OBJECT_NAME                    CREATED   OBJECT_TYPE
---------- ------------------------------ --------- -------------
         1 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     10352 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     21159 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     31510 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     42317 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     52668 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     63475 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     73826 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     84633 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     94984 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM

10 rows selected.

Elapsed: 00:00:00.03
SQL>
SQL> variable r            refcursor
SQL>
SQL> declare
  2  i_endrow    integer;
  3  i_startrow  integer;
  4
  5  begin
  6  i_endrow   := 10;
  7  i_startrow := 1;
  8
  9  open :r FOR
 10  SELECT t.pk, t.object_name, t.created, object_type
 11    FROM (SELECT *
 12            FROM (select innermost.*, rownum as rowpos
 13                    from (SELECT pk
 14                            FROM t
 15                           ORDER BY LOWER(object_name)
 16                         ) innermost
 17                   where rownum <= i_endrow )
 18           where rowpos >= i_startrow) pg
 19         INNER JOIN t ON pg.pk = t.pk
 20   ORDER BY pg.rowpos;
 21  END;
 22  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> print :r

        PK OBJECT_NAME                    CREATED   OBJECT_TYPE
---------- ------------------------------ --------- -------------
         1 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     10352 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     21159 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     31510 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     42317 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     52668 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     63475 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     73826 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     84633 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     94984 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM

10 rows selected.

Elapsed: 00:00:02.05

---- From TKPROF ----

SELECT t.pk, t.object_name, t.created, object_type
  FROM (SELECT *
          FROM (select innermost.*, rownum as rowpos
                  from (SELECT pk
                          FROM t
                         ORDER BY LOWER(object_name)
                       ) innermost
                 where rownum <= 10 )
         where rowpos >= 1) pg
       INNER JOIN t ON pg.pk = t.pk
 ORDER BY pg.rowpos

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.36         22         25          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.38         22         25          0          10

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
     10   SORT (ORDER BY)
     10    NESTED LOOPS
     10     VIEW
     10      COUNT (STOPKEY)
     10       VIEW
     10        INDEX   GOAL: ANALYZED (FULL SCAN) OF 'T_U'
                   (NON-UNIQUE)
     10     TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'T'
     10      INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'PK_T' (UNIQUE)

********************************************************************************

SELECT t.pk, t.object_name, t.created, object_type
  FROM (SELECT *
          FROM (select innermost.*, rownum as rowpos
                  from (SELECT pk
                          FROM t
                         ORDER BY LOWER(object_name)
                       ) innermost
                 where rownum <= :b1 )
         where rowpos >= :b2) pg
       INNER JOIN t ON pg.pk = t.pk
 ORDER BY pg.rowpos

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      2.34       2.55       1152       2492          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      2.34       2.56       1152       2492          0          10

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
     10   SORT (ORDER BY)
     10    HASH JOIN
     10     VIEW
     10      COUNT (STOPKEY)
     10       VIEW
     10        INDEX   GOAL: ANALYZED (FULL SCAN) OF 'T_U'
                   (NON-UNIQUE)
 338528     TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'T'
 

Tom Kyte
August 23, 2003 - 10:00 am UTC

first_rows all of the subqueries. that is appropriate for pagination. I should have put that into the original response I guess!


consider:

SELECT t.pk, t.object_name, t.created, object_type
FROM (SELECT *
FROM (select innermost.*, rownum as rowpos
from (SELECT pk
FROM t
ORDER BY LOWER(object_name)
) innermost
where rownum <= :b1 )
where rowpos >= :b2) pg
INNER JOIN t ON pg.pk = t.pk
ORDER BY pg.rowpos

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 1.92 2.03 2 2491 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 1.92 2.03 2 2491 0 10

Rows Row Source Operation
------- ---------------------------------------------------
10 SORT ORDER BY (cr=2491 r=2 w=0 time=2033994 us)
10 HASH JOIN (cr=2491 r=2 w=0 time=2033701 us)
10 VIEW (cr=3 r=2 w=0 time=1029 us)
10 COUNT STOPKEY (cr=3 r=2 w=0 time=955 us)
10 VIEW (cr=3 r=2 w=0 time=883 us)
10 INDEX FULL SCAN T_U (cr=3 r=2 w=0 time=848 us)(object id 55317)
350000 TABLE ACCESS FULL T (cr=2488 r=0 w=0 time=598964 us)


versus:

********************************************************************************
SELECT t.pk, t.object_name, t.created, object_type
FROM (SELECT /*+ FIRST_ROWS */ *
FROM (select /*+ FIRST_ROWS */ innermost.*, rownum as rowpos
from (SELECT /*+ FIRST_ROWS */ pk
FROM t
ORDER BY LOWER(object_name)
) innermost
where rownum <= :b1 )
where rowpos >= :b2) pg
INNER JOIN t ON pg.pk = t.pk
ORDER BY pg.rowpos

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 20 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 20 0 10

Rows Row Source Operation
------- ---------------------------------------------------
10 SORT ORDER BY (cr=20 r=0 w=0 time=954 us)
10 TABLE ACCESS BY INDEX ROWID OBJ#(55315) (cr=20 r=0 w=0 time=749 us)
21 NESTED LOOPS (cr=15 r=0 w=0 time=589 us)
10 VIEW (cr=3 r=0 w=0 time=278 us)
10 COUNT STOPKEY (cr=3 r=0 w=0 time=210 us)
10 VIEW (cr=3 r=0 w=0 time=151 us)
10 INDEX FULL SCAN OBJ#(55317) (cr=3 r=0 w=0 time=98 us)(object id 55317)
10 INDEX RANGE SCAN OBJ#(55316) (cr=12 r=0 w=0 time=188 us)(object id 55316)

A reader, August 25, 2003 - 4:37 am UTC


Perfect

Mike Madland, September 03, 2003 - 12:43 pm UTC

Tom, thank you so much. I had tried first_rows, but not on *all* of the subqueries. This is great.

how about 8.0.

s devarshi, September 13, 2003 - 3:34 am UTC

what if I want to do the same in version 8.0.4?

PL/SQL?

I have a few other problems and wanted to ask you about them;
'ask your question later' is blocking me

devarshi

Tom Kyte
September 13, 2003 - 9:27 am UTC

you cannot use order by in a subquery in 8.0 so this technique doesn't apply.

you have to open the cursor.

fetch the first N rows and ignore them

then fetch the next M rows and keep them

close the cursor



that's it.
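
A minimal sketch of that loop (cursor, table, and page bounds are invented for illustration):

declare
    cursor c is select ename from emp order by ename;
    l_ename  emp.ename%type;
    l_minrow number := 11;  -- first row to keep
    l_maxrow number := 20;  -- last row to keep
    l_row    number := 0;
begin
    open c;
    loop
        exit when l_row >= l_maxrow;
        fetch c into l_ename;
        exit when c%notfound;
        l_row := l_row + 1;
        if l_row >= l_minrow then
            dbms_output.put_line( l_row || ': ' || l_ename );  -- keep this row
        end if;
        -- rows 1 .. l_minrow-1 are fetched and simply discarded
    end loop;
    close c;
end;
/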

One question about your approach

julie, September 25, 2003 - 11:04 am UTC

My Java developer is asking me how he will know
how many rows are in the table for him to pass me
the minimum and maximum numbers, so that he can
pass 20, 40 and so on on the JSP page.




Tom Kyte
September 25, 2003 - 11:26 pm UTC

you have a "first page"

you have a "next page"

when "next page" returns less rows then requested -- you know you have hit "last page"

it is the way I do it... works great. uses least resources.
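
A common refinement of this approach: fetch one row more than the page size; if the extra row comes back, there is a next page (a sketch using the placeholder convention from the pagination query above):

select *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= :min_row + :page_size )  -- one row beyond the page
 where rnum >= :min_row;
-- if :page_size + 1 rows come back, display the first :page_size of them
-- and enable "next page"; otherwise this is the last page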

ORDER OF SELECTS

Tracy, October 08, 2003 - 12:04 pm UTC

I have a table accounts with a varchar2(50) column accountnumber.

I want to select the row with the highest value in accountnumber where the column contains numbers only so I do this:


test> select max(acno)
2 from
3 (select to_number(ACCOUNTNUMBER) acno
4 from ACCOUNTS
5 where replace(translate(ACCOUNTNUMBER, '1234567890', 'aaaaaaaaaa'),'a','') is null);

MAX(ACNO)
------------
179976182723

which works fine. (May not be the best way of doing it, but it works.)

I then want to refine it by adding 'only if the number is less than 500000' so I add

where acno < 500000

and then I get ORA-01722: invalid number.

test> l
1 select max(acno)
2 from
3 (select to_number(ACCOUNTNUMBER) acno
4 from ACCOUNTS
5* where replace(translate(ACCOUNTNUMBER, '1234567890', 'aaaaaaaaaa'),'a','') is null) where acno < 500000
test> /
where replace(translate(ACCOUNTNUMBER, '1234567890', 'aaaaaaaaaa'),'a','') is null) where acno < 500000
*
ERROR at line 5:
ORA-01722: invalid number

Presumably this is to do with the order in which the selects work, but I thought that because the inner select is returning numbers
only that the outer select would work ok?

Tom Kyte
October 09, 2003 - 3:24 pm UTC

you are ascribing procedural constructs to a non-procedural language!

you are thinking "inline view done AND then outer stuff"

in fact that query is not any different than the query with the inline view removed -- the acno < 500000 is done "whenever".


you can:

where
    decode( replace( translate( accountNumber, '1234567890', '0000000000' ),
                     '0', '' ),
            NULL, to_number( accountNumber ),
            NULL ) < 500000


hint: don't use 'a', else a string with 'a' in it would be considered a valid number!
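
Putting that together with the original query, a sketch (same accounts table; the decode maps non-numeric strings to NULL, so to_number is only ever applied to all-digit values and NULL < 500000 filters the rest out):

select max( to_number( accountnumber ) ) acno
  from accounts
 where decode( replace( translate( accountnumber, '1234567890', '0000000000' ),
                        '0', '' ),
               NULL, to_number( accountnumber ),
               NULL ) < 500000;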


search producing wrong results

Paul Druker, October 09, 2003 - 11:33 am UTC

Tom, I was looking for the from$_subquery$ combination on your site (I saw it in dba_audit_trail.obj_name). However, a search for from$_subquery$ returns approximately 871 records, which is not correct. For example, this page does contain this word, but almost all of the extracted pages don't. It's interesting that a search for from_subquery (without the underscore and $ sign) provides the same result. A search for "from$_subquery$" provides the same 871 results. I'd understand "special treatment" of the underscore sign, but why the $ sign?

Tom Kyte
October 09, 2003 - 6:04 pm UTC

Implementing dynamic query to suggested pagination query

Stephane Gouin, October 24, 2003 - 8:57 am UTC

Hi Tom

Using owa_util.cellsprint, but wanting to customize the table rows a little (adding a style sheet to highlight every other row, as a visual aid to users), I am forced to look at the following query, as given in this thread:

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/

My question however is, how do you introduce a dynamic query into the mix, given that I want to build a re-usable module others can implement? This is exactly what owa_util.cellsprint with the dynamic cursor accomplishes, but I can't get in there to tweak the layout.

Thanks for your help

Tom Kyte
October 24, 2003 - 9:43 am UTC

cellsprint only supports dynamic sql? not sure what the issue is here?

Stephane Gouin, October 24, 2003 - 10:54 am UTC

Hi Tom,

Sorry, I wasn't clear in my question. I was using cellsprint, but I realized I can't insert a style in the table row tag, for instance (i.e. <tr class="h1">). The objective is to add a style to the row so that, via a CSS, I can highlight alternate rows, giving the user a little contrast when dealing with long lists.

I want to extend the cellsprint function, by allowing further control over the table tags... (ie style sheets, alignment, widths etc...)

Using a REF cursor (or owa_util.bind_variables) for the subquery, how could I implement it using the pagination query?

Hope I clarified the question enough..

Tom Kyte
October 24, 2003 - 11:09 am UTC

you actually have access to the source code for cellsprint (it's not wrapped). just copy it as your own and modify it as you see fit.



getting rows N through M of a result set

Edward Girard, October 30, 2003 - 10:35 am UTC

Excellent thread

Very useful for web-based applications

Saminathan Seerangan, November 01, 2003 - 12:00 am UTC


HAVING can be efficient

Joe Levy, November 12, 2003 - 1:27 pm UTC

Agreed that this

<quote>
select rownum, col1
from foobar
group by rownum, col1
having rownum >= :start and rownum < :end
</quote>

is inefficient. But

select rownum, col1
from foobar
where rownum < :end -- added line to improve performance
group by rownum, col1
having rownum >= :start and rownum < :end

is almost as efficient as your preferred method. And it has the advantage of being usable in a scalar subquery. (The additional nesting required by your preferred method puts columns from tables in the outer query out of scope.)

Is there a reason not to use a HAVING clause with ROWNUM when variable scope demands it?


Tom Kyte
November 12, 2003 - 4:47 pm UTC

why would a scalar subquery need the N'th row?

but yes, that would work (don't need the second and rownum < :end)

Row_Number() or ROWNUM

Ranjit Desai, November 19, 2003 - 5:50 am UTC

Hi Tom,

We do use row_number() and other analytic functions, but we recently came across a limitation: Oracle 8i Standard Edition doesn't support these functions. They are only available in Enterprise Edition. Many of our sites are on Standard Edition on Oracle 8i.

So the current method of using row_number() to get the required output needs to be changed.

SELECT deptno, ename, hiredate,
ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY hiredate) AS emp_id
FROM emp

To get similar output in a SELECT query, what can we do? Is it possible to use ROWNUM? Or a user-defined function?

Please help us. As we have already tried some options without success.

Thanks & Regards,

Ranjit Desai

Tom Kyte
November 21, 2003 - 11:25 am UTC

you cannot use rownum to achieve that. analytics are mandatory for getting those numbers "partitioned"

9iR2 SE (standard) offers analytics as a feature.

Fetching rows N-M

Stevef, November 26, 2003 - 5:28 am UTC

Can the first N rows optimization feature be used in association with the paging technique to enhance the performance of these queries?

SELECT /*+ FIRST_ROWS(N) */ ....



http://otn.oracle.com/products/oracle9i/daily/jan28.html

Tom Kyte
November 26, 2003 - 7:49 am UTC

yes, i usually just use first_rows myself.

Fetching rows N-M

Stevef, November 27, 2003 - 8:24 am UTC

Actually, weird effects. The first query below returns 10 rows as expected, but the second returns 19 rows!
(Oracle9i Enterprise Edition Release 9.2.0.2.1 Win2000)
(Oracle9i Enterprise Edition Release 9.2.0.2.1 Win2000)

select*
from (select a.*,rownum r
from (select /*+ first_rows */ customerid from customer order by 1) a
where rownum <= 10+9 )
where r >= 10

select*
from (select a.*,rownum r
from (select /*+ first_rows(10) */ customerid from customer order by 1) a
where rownum <= 10+9 )
where r >= 10



Tom Kyte
November 27, 2003 - 10:51 am UTC

confirmed -- filed a bug, temporary workaround is to add "order by r"

we can see they lose the filter using dbms_xplan:


ops$tkyte@ORA920> delete from plan_table;
6 rows deleted.
 
ops$tkyte@ORA920> explain plan for
  2  select*
  3     from (select a.*,rownum r
  4             from (select /*+ first_rows(10) */ empno from scott.emp order by 1) a
  5     where rownum <= 19 )
  6  where r >= 10
  7  /
 
Explained.
 
ops$tkyte@ORA920> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
 
-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |    14 |   364 |     2  (50)|
|   1 |  VIEW                |             |    14 |   364 |            |
|*  2 |   COUNT STOPKEY      |             |       |       |            |
|   3 |    VIEW              |             |    14 |   182 |            |
|   4 |     INDEX FULL SCAN  | EMP_PK      |    14 |    42 |     2  (50)|
-------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter(ROWNUM<=19)
 
15 rows selected.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> delete from plan_table;
 
5 rows deleted.
 
ops$tkyte@ORA920> explain plan for
  2  select*
  3     from (select a.*,rownum r
  4             from (select /*+ first_rows(10) */ empno from scott.emp order by 1) a
  5     where rownum <= 19 )
  6  where r >= 10
  7  order by r
  8  /
 
Explained.
 
ops$tkyte@ORA920> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
 
-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |    14 |   364 |     3  (67)|
|   1 |  SORT ORDER BY       |             |    14 |   364 |     3  (67)|
|*  2 |   VIEW               |             |    14 |   364 |            |
|*  3 |    COUNT STOPKEY     |             |       |       |            |
|   4 |     VIEW             |             |    14 |   182 |            |
|   5 |      INDEX FULL SCAN | EMP_PK      |    14 |    42 |     2  (50)|
-------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("from$_subquery$_001"."R">=10)
   3 - filter(ROWNUM<=19)
 
17 rows selected.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select*
  2     from (select a.*,rownum r
  3             from (select /*+ first_rows(10) */ empno from scott.emp order by 1) a
  4     where rownum <= 19 )
  5  where r >= 10
  6  order by r
  7  /
 
     EMPNO          R
---------- ----------
      7844         10
      7876         11
      7900         12
      7902         13
      7934         14
 
ops$tkyte@ORA920>
 

rows N-M

Stevef, November 28, 2003 - 6:11 am UTC

Gosh Tom, your momma sure raised you smart!
Great detective work!

Getting total rows..

Naveen, December 04, 2003 - 4:41 am UTC

Hi Tom:

The application we are developing should use pagination to show the results. The developers want me to get the total number of rows that the query returns so that they can display that many pages. (Say, if the total rows returned are 100 and the number of results that have to be displayed on each page is 10 rows, they can show 10 pages of result sets.) The requirement is that we have to display the page number with a hyperlink, so when the user clicks on page 3, we have to display rows 21-30.

To do this I have to first find the count of rows that the query returns and then fire the query to return the rows N through M. This is two I/O calls to the database and two queries to be parsed to display a page. Is there any workaround?

Thanks
Nav.

Tom Kyte
December 04, 2003 - 8:36 am UTC

I have a very very very very simple solution to this problem.

DON'T DO IT.

Your developers probably love www.google.com right?
they appreciate its speed, accuracy, usefulness.

All you need to do is tell them "use google as the gold standard for searching. DO WHAT IT DOES"

Google lies constantly. the hit count is never real. It tells you "here are the first 10 pages" -- but you'll find if you click on page 10, you'll be on page 7 (there wasn't any page 8, 9 or 10 -- they didn't know that)

google guesses. (i guess -- search on asktom, "approximately")

google is the gold standard -- just remember that.

In order to tell the end user "hey, there are 15 pages" you would have to run the entire query to completion on page one

and guess what, by the time page 1 is delivered to them (after waiting and waiting for it) there is a good chance their result set won't have 15 pages!!! (it is a database after all, people do write to it). they might have 16 or maybe 14, or maybe NONE or maybe lots more the next time they page up or down!!

google is the gold standard.

did you know, you'll never go past page 100 on google - try it, they won't let you.

Here is a short excerpt from my book "Effective Oracle By Design" where I talk about this very topic (pagination in a web environment)


<quote>
Keep in mind that people are impatient and have short attention spans. How many times have you gone past the tenth page on a search page on the Internet? When I do a Google (www.google.com) search that returns more hits than the number of hamburgers sold by McDonald's, I never go to the last page; in fact, I never get to page 11. By the time I've looked at the first five pages or so, I realize that I need to refine my search because this is too much data. Your end users will, believe it or not, do the same.


Some Advice on Web-based Searches with Pagination

My advice for handling web-based searches that you need to paginate through is to never provide an exact hit count. Use an estimate to tell the users about N hits. This is what I do on my asktom web site, for example. I use Oracle Text to index the content. Before I run a query, I ask Oracle Text for an estimate. You can do the same with your relational queries using EXPLAIN PLAN in Oracle8i and earlier, or by querying V$SQL_PLAN in Oracle9i and up.
You may want to tell the end users they got 1,032,231 hits, but the problem with that is twofold:

o It takes a long time to count that many hits. You need to run that ALL_ROWS type of query to the end to find that out! It is really slow.
o By the time you count the hits, in all probability (unless you are on a read-only database), the answer has already changed and you do not have that number of hits anymore!


My other advice for this type of application is to never provide a Last Page button or give the user more than ten pages at a time from which to choose. Look at the standard, www.google.com, and do what it does.

Follow those two pieces of advice, and your pagination worries are over.
</quote>




Thanks Tom..

Naveen, December 04, 2003 - 10:24 pm UTC

Hi Tom,

Got what you said. I'll try to convince my developers with this information. Day by day the admiration for you keeps growing.

Thank you

Nav.




Displaying Top N rows in 8.0.6

Russell, December 09, 2003 - 4:49 am UTC

G'day Tom,

On September 13, 2003 or thereabouts you left the following:
----------
you cannot use order by in a subquery in 8.0 so this technique doesn't apply.

you have to open the cursor.

fetch the first N rows and ignore them

then fetch the next M rows and keep them

close the cursor
that's it.

----------

I have an application where a set of grouped records is in the vicinity of 800 combinations. For the purposes of analysis, 80% of the work is in the top 20% of grouped entries, so most gains will be achieved by analysing the entries with the most records. As I am trying to do the majority of the grunt work in Oracle, parameters are passed by users to a procedure, with a ref cursor being OUTput to a Crystal report.

One of the parameters I am inputting, is TopN hoping to return grouped entries with the greatest Record counts by the grouping needed.

I include this statement in a cursor, and loop through for 1 to TopN, appending the resulting Group Names to a varchar2 variable hoping to include the contents of this string in a subsequent where statement.

A possible example:

declare
    TopN      Number := 3; -- return all records matching the group identifiers with the TopN most records
    Counter   Number := 0;
    vchString varchar2(200);
begin
    for i in (Select Dept, count(*) from employees
              where ....
              group by Dept order by count(*) desc)
    Loop
        exit when Counter >= TopN;

        if Counter > 0 then
            vchString := vchString || ',';
        end if;
        vchString := vchString || i.dept;
        -- or vchString := vchString || '''' || i.dept || ''''
        -- for columns containing varchar data....
        Counter := Counter + 1;
    end loop;

I then have a statement
Cursor is ....
select ......
from employees
where .....
AND DEPT in ( vchString);

end;

with the hope that the original cursor might return something like
DEPT COUNT(*)
30 8
20 7
19 6
10 5
15 3
7 2
1 1
4 1


and the returning cursor in the optimum world would therefore become something like
select ......
from employees
where .....
AND DEPT in ( 30,20,19);

Hence having to select, group, sum, and display only the 12 (TopN) group entries instead of the 800-ish.

The loop works and populates the varchar2 variable, but the contents of that variable don't seem to be expanded into the IN list. As mentioned above, I am using Oracle database version 8.0.6 and, having read a number of threads on your site, I don't think I can use the analytic functions included in 8.1 and above.

Please advise what my problem is, or whether there is a better way to do what I am after.

Thanks in advance.
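
A note on the likely cause: a VARCHAR2 variable in an IN list is bound as a single scalar value, so DEPT IN (vchString) compares DEPT against the whole string '30,20,19' rather than against three numbers. One 8.0-friendly workaround is to search the delimited string instead of using IN -- a minimal sketch, assuming DEPT is numeric and using the EMPLOYEES table from the question (the column list is illustrative):

cursor c is
   select dept, ename, sal
     from employees
    -- wrap both the list and the value in delimiters so that
    -- dept 3 cannot falsely match the '30' inside '30,20,19'
    where instr( ','||vchString||',', ','||to_char(dept)||',' ) > 0;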

Catastrophic Performance Degradation

Doh!, December 16, 2003 - 11:32 am UTC

Any ideas as to why the act of putting an outer "wrapper" on an inner query

select * from ( my_inner_query )

can cause the performance of a query to degrade by a factor of 3000 ?

First the innermost query:

SQL>     ( SELECT a.*, ROWNUM RECORDINDEX FROM
  2      ( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY,  gL.LayerName,
  3        gL.LayerAlias,  TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
  4       FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
  5        WHERE COUNTY.GEONAME = 'L123'
  6         AND  TOWNLAND.GEONAME LIKE 'BALL%'
  7         AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
  8         AND mME.ForeignID = TOWNLAND.GEOMETRYID
  9        AND  mME.TableName = 'TOWNLAND'
 10        AND gL.TableName = mME.TableName
 11        AND gL.LayerName = 'TOWNLAND'
 12      ORDER BY  TOWNLAND.GEONAME , COUNTY.GEONAME)
 13      a WHERE ROWNUM <= 10)
 14  /

10 rows selected.

Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=169 Card=2 Bytes=3214)
   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=169 Card=2 Bytes=3214)
   3    2       SORT (ORDER BY STOPKEY) (Cost=169 Card=2 Bytes=224)
   4    3         NESTED LOOPS (Cost=167 Card=2 Bytes=224)
   5    4           NESTED LOOPS (Cost=163 Card=2 Bytes=158)
   6    5             MERGE JOIN (CARTESIAN) (Cost=3 Card=1 Bytes=57)
   7    6               TABLE ACCESS (BY INDEX ROWID) OF 'GEOLAYER' (Cost=2 Card=1 Bytes=45)
   8    7                 INDEX (RANGE SCAN) OF 'GEOLAYER_LAYERNAME_IDx' (NON-UNIQUE) (Cost=1 Card=1)
   9    6               BUFFER (SORT) (Cost=1 Card=1 Bytes=12)
  10    9                 TABLE ACCESS (BY INDEX ROWID) OF 'COUNTY' (Cost=1 Card=1 Bytes=12)
  11   10                   INDEX (RANGE SCAN) OF 'COUNTY_GEONAME_IDX'    (NON-UNIQUE)
  12    5             TABLE ACCESS (BY INDEX ROWID) OF 'TOWNLAND' (Cost=163 Card=1 Bytes=22)
  13   12               BITMAP CONVERSION (TO ROWIDS)
  14   13                 BITMAP AND
  15   14                   BITMAP CONVERSION (FROM ROWIDS)
  16   15                     INDEX (RANGE SCAN) OF 'TOWNLAND_COUNTYID_IDX' (NON-UNIQUE) (Cost=4 Card=1950)
  17   14                   BITMAP CONVERSION (FROM ROWIDS)
  18   17                     SORT (ORDER BY)
  19   18                       INDEX (RANGE SCAN) OF 'TOWNLAND_GEONAME_IDX' (NON-UNIQUE) (Cost=14 Card=1950)
  20    4           TABLE ACCESS (BY INDEX ROWID) OF 'MINMAXEXTENT' (Cost=2 Card=50698 Bytes=1673034)
  21   20             INDEX (UNIQUE SCAN) OF 'MINMAXEXT_UK' (UNIQUE) (  Cost=1 Card=4)

Statistics
----------------------------------------------------------
          0  recursive calls
          6  db block gets
        530  consistent gets
         21  physical reads
          0  redo size
       1458  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          1  sorts (disk)
         10  rows processed

Now the final outer wrapper:

SQL> SELECT a.* FROM
  2      ( SELECT a.*, ROWNUM RECORDINDEX FROM
  3      ( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY,  gL.LayerName,
  4        gL.LayerAlias,  TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
  5       FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
  6        WHERE COUNTY.GEONAME = 'L123'
  7         AND  TOWNLAND.GEONAME LIKE 'BALL%'
  8         AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
  9         AND mME.ForeignID = TOWNLAND.GEOMETRYID
 10        AND  mME.TableName = 'TOWNLAND'
 11        AND gL.TableName = mME.TableName
 12        AND gL.LayerName = 'TOWNLAND'
 13      ORDER BY  TOWNLAND.GEONAME , COUNTY.GEONAME)
 14      a WHERE ROWNUM <= 10) a
 15  /

10 rows selected.

Elapsed: 00:00:32.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=466 Card=2 Bytes=3240)

   1    0   VIEW (Cost=466 Card=2 Bytes=3240)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=466 Card=2 Bytes=3214)
   4    3         SORT (ORDER BY STOPKEY) (Cost=466 Card=2 Bytes=224)
   5    4           TABLE ACCESS (BY INDEX ROWID) OF 'TOWNLAND' (Cost=464 Card=1 Bytes=22)
   6    5             NESTED LOOPS (Cost=464 Card=2 Bytes=224)
   7    6               NESTED LOOPS (Cost=68 Card=722 Bytes=64980)
   8    7                 MERGE JOIN (CARTESIAN) (Cost=3 Card=1 Bytes=57)
   9    8                   TABLE ACCESS (BY INDEX ROWID) OF 'COUNTY'(Cost=2 Card=1 Bytes=12)
  10    9                     INDEX (RANGE SCAN) OF 'COUNTY_GEONAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)
  11    8                   BUFFER (SORT) (Cost=1 Card=1 Bytes=45)
  12   11                     TABLE ACCESS (BY INDEX ROWID) OF 'GEOLAYER' (Cost=1 Card=1 Bytes=45)
  13   12                       INDEX (RANGE SCAN) OF 'GEOLAYER_LAYERNAME_IDX' (NON-UNIQUE)
  14    7                 TABLE ACCESS (BY INDEX ROWID) OF 'MINMAXEXTENT' (Cost=65 Card=1 Bytes=33)
  15   14                   INDEX (RANGE SCAN) OF 'MINMAXEXT_UK' (UNIQUE) (Cost=43 Card=2112)
  16    6               BITMAP CONVERSION (TO ROWIDS)
  17   16                 BITMAP AND
  18   17                   BITMAP CONVERSION (FROM ROWIDS)
  19   18                     INDEX (RANGE SCAN) OF 'TOWNLAND_PK' (UNIQUE)
  20   17                   BITMAP CONVERSION (FROM ROWIDS)
  21   20                     INDEX (RANGE SCAN) OF 'TOWNLAND_COUNTYID_IDX' (NON-UNIQUE) (Cost=4 Card=12)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     254501  consistent gets
        847  physical reads
          0  redo size
       1458  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
         10  rows processed

 

Tom Kyte
December 16, 2003 - 1:46 pm UTC

if you push a first_rows hint into the innermost query -- what happens then? (no answer for why this is happening -- I don't know in this case; for that, I suggest a tar -- but let's try to find a way to work around the issue here)

Improvement

A reader, December 17, 2003 - 6:16 am UTC

Query elapsed time falls to about 1 second. Huge improvement but still not as snappy as the original query at 0.01 seconds!

  1    SELECT a.* FROM
  2           ( SELECT  a.*, ROWNUM RECORDINDEX FROM
  3           ( SELECT /*+ FIRST_ROWS */ 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY,  gL.LayerName,
  4             gL.LayerAlias,  TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
  5            FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
  6             WHERE COUNTY.GEONAME = 'L123'
  7              AND  TOWNLAND.GEONAME LIKE 'BALL%'
  8             AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
  9              AND mME.ForeignID = TOWNLAND.GEOMETRYID
 10            AND  mME.TableName = 'TOWNLAND'
 11           AND gL.TableName = mME.TableName
 12            AND gL.LayerName = 'TOWNLAND'
 13          ORDER BY  TOWNLAND.GEONAME , COUNTY.GEONAME)
 14*         a WHERE ROWNUM <= 10) a
SQL> /

10 rows selected.

Elapsed: 00:00:01.08

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=8578 Card=2 Bytes=3240)
   1    0   VIEW (Cost=8578 Card=2 Bytes=3240)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=8578 Card=2 Bytes=3214)
   4    3         SORT (ORDER BY STOPKEY) (Cost=8578 Card=2 Bytes=224)
   5    4           NESTED LOOPS (Cost=8576 Card=2 Bytes=224)
   6    5             MERGE JOIN (CARTESIAN) (Cost=8264 Card=156 Bytes=12324)
   7    6               NESTED LOOPS (Cost=8108 Card=156 Bytes=5304)
   8    7                 TABLE ACCESS (BY INDEX ROWID) OF 'TOWNLAND'(Cost=4052 Card=4056 Bytes=89232)
   9    8                   INDEX (RANGE SCAN) OF 'TOWNLAND_GEONAME_IDX' (NON-UNIQUE) (Cost=15 Card=4056)
  10    7                 TABLE ACCESS (BY INDEX ROWID) OF 'COUNTY' (Cost=1 Card=1 Bytes=12)
  11   10                   INDEX (UNIQUE SCAN) OF 'COUNTY_UK' (UNIQUE)
  12    6               BUFFER (SORT) (Cost=8263 Card=1 Bytes=45)
  13   12                 TABLE ACCESS (BY INDEX ROWID) OF 'GEOLAYER' (Cost=1 Card=1 Bytes=45)
  14   13                   INDEX (RANGE SCAN) OF 'GEOLAYER_LAYERNAME_IDX' (NON-UNIQUE)
  15    5             TABLE ACCESS (BY INDEX ROWID) OF 'MINMAXEXTENT'(Cost=2 Card=1 Bytes=33)
  16   15               INDEX (UNIQUE SCAN) OF 'MINMAXEXT_UK' (UNIQUE)(Cost=1 Card=1)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      10413  consistent gets
       4174  physical reads
          0  redo size
       1458  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
         10  rows processed
 

Tom Kyte
December 17, 2003 - 7:03 am UTC

do you have a tkprof? are the "estimations" in the autotrace anywhere near the "real numbers" in the tkprof? are the stats current and up to date?

A reader, December 17, 2003 - 9:53 am UTC

Stats are current.

Here we have tkprof, first with and then without the first_rows hint:

SELECT a.* FROM
( SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT /*+ FIRST_ROWS */ :"SYS_B_0" "FEATURENAME",
mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = :"SYS_B_1"
AND TOWNLAND.GEONAME LIKE :"SYS_B_2"
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = :"SYS_B_3"
AND gL.TableName = mME.TableName
AND gL.LayerName = :"SYS_B_4"
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= :"SYS_B_5") a

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        2     0.00       0.00          0          0          0          0
Execute      2     0.00       0.00          0          0          0          0
Fetch        3     0.68      11.24       8694      20432          0         10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        7     0.68      11.25       8694      20432          0         10

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
0 VIEW (cr=10019 r=4443 w=0 time=9453637 us)
0 COUNT STOPKEY (cr=10019 r=4443 w=0 time=9453630 us)
0 VIEW (cr=10019 r=4443 w=0 time=9453627 us)
0 SORT ORDER BY STOPKEY (cr=10019 r=4443 w=0 time=9453621 us)
0 NESTED LOOPS (cr=10019 r=4443 w=0 time=9453566 us)
0 MERGE JOIN CARTESIAN (cr=10019 r=4443 w=0 time=9453562 us)
0 NESTED LOOPS (cr=10019 r=4443 w=0 time=9453555 us)
5011 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=5006 r=4438 w=0 time=9339026 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=0 w=0 time=30694 us)(object id 55399)
0 TABLE ACCESS BY INDEX ROWID COUNTY (cr=5013 r=5 w=0 time=84014 us)
5011 INDEX UNIQUE SCAN COUNTY_UK (cr=2 r=1 w=0 time=32598 us)(object id 55015)
0 BUFFER SORT (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=0 r=0 w=0 time=0 us)
0 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=0 r=0 w=0 time=0 us)(object id 55930)
0 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=0 r=0 w=0 time=0 us)
0 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=0 r=0 w=0 time=0 us)(object id 55602)

********************************************************************************


SELECT a.* FROM
( SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT :"SYS_B_0" "FEATURENAME",
mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = :"SYS_B_1"
AND TOWNLAND.GEONAME LIKE :"SYS_B_2"
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = :"SYS_B_3"
AND gL.TableName = mME.TableName
AND gL.LayerName = :"SYS_B_4"
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= :"SYS_B_5") a

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.00       0.00          0          0          0          0
Execute      1     0.00       0.00          0          0          0          0
Fetch        2   136.84     139.22        851     254502          0         10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        4   136.84     139.22        851     254502          0         10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 VIEW (cr=254502 r=851 w=0 time=139222875 us)
10 COUNT STOPKEY (cr=254502 r=851 w=0 time=139222837 us)
10 VIEW (cr=254502 r=851 w=0 time=139222808 us)
10 SORT ORDER BY STOPKEY (cr=254502 r=851 w=0 time=139222779 us)
130 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=254502 r=851 w=0 time=139221108 us)
51817 NESTED LOOPS (cr=254226 r=695 w=0 time=138882960 us)
50698 NESTED LOOPS (cr=628 r=597 w=0 time=1651025 us)
1 MERGE JOIN CARTESIAN (cr=5 r=0 w=0 time=319 us)
1 TABLE ACCESS BY INDEX ROWID COUNTY (cr=3 r=0 w=0 time=127 us)
1 INDEX RANGE SCAN COUNTY_GEONAME_IDX (cr=1 r=0 w=0 time=56 us)(object id 55401)
1 BUFFER SORT (cr=2 r=0 w=0 time=96 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=31 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=18 us)(object id 55930)
50698 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=623 r=597 w=0 time=1548952 us)
50698 INDEX RANGE SCAN MINMAXEXT_UK (cr=154 r=137 w=0 time=592031 us)(object id 55602)
1118 BITMAP CONVERSION TO ROWIDS (cr=253598 r=98 w=0 time=136695508 us)
1118 BITMAP AND (cr=253598 r=98 w=0 time=136580794 us)
50698 BITMAP CONVERSION FROM ROWIDS (cr=50804 r=98 w=0 time=1338601 us)
50698 INDEX RANGE SCAN TOWNLAND_PK (cr=50804 r=98 w=0 time=960672 us)(object id 55131)
27108 BITMAP CONVERSION FROM ROWIDS (cr=202794 r=0 w=0 time=134967141 us)
56680364 INDEX RANGE SCAN TOWNLAND_COUNTYID_IDX (cr=202794 r=0 w=0 time=76815579 us)(object id 55416)





Tom Kyte
December 18, 2003 - 8:28 am UTC

I wanted to compare the first_rows to the one that is "fast", not the slow one.

additional tkprof

A reader, December 17, 2003 - 10:50 am UTC

here's the tkprof for the query without the outer wrapper:

SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT :"SYS_B_0" "FEATURENAME",
mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = :"SYS_B_1"
AND TOWNLAND.GEONAME LIKE :"SYS_B_2"
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = :"SYS_B_3"
AND gL.TableName = mME.TableName
AND gL.LayerName = :"SYS_B_4"
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= :"SYS_B_5"

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.00       0.00          0          0          0          0
Execute      1     0.00       0.00          0          0          0          0
Fetch        2     0.03       0.08         21        530          6         10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        4     0.03       0.08         21        530          6         10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 COUNT STOPKEY (cr=530 r=21 w=21 time=85530 us)
10 VIEW (cr=530 r=21 w=21 time=85489 us)
10 SORT ORDER BY STOPKEY (cr=530 r=21 w=21 time=85462 us)
130 NESTED LOOPS (cr=530 r=21 w=21 time=84925 us)
130 NESTED LOOPS (cr=138 r=21 w=21 time=82066 us)
1 MERGE JOIN CARTESIAN (cr=4 r=0 w=0 time=236 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=93 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=56 us)(object id 55930)
1 BUFFER SORT (cr=2 r=0 w=0 time=92 us)
1 TABLE ACCESS BY INDEX ROWID COUNTY (cr=2 r=0 w=0 time=38 us)
1 INDEX RANGE SCAN COUNTY_GEONAME_IDX (cr=1 r=0 w=0 time=19 us)(object id 55401)
130 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=134 r=21 w=21 time=81527 us)
130 BITMAP CONVERSION TO ROWIDS (cr=26 r=21 w=21 time=80386 us)
1 BITMAP AND (cr=26 r=21 w=21 time=80228 us)
1 BITMAP CONVERSION FROM ROWIDS (cr=5 r=0 w=0 time=2782 us)
1118 INDEX RANGE SCAN TOWNLAND_COUNTYID_IDX (cr=5 r=0 w=0 time=1551 us)(object id 55416)
1 BITMAP CONVERSION FROM ROWIDS (cr=21 r=21 w=21 time=77315 us)
5011 SORT ORDER BY (cr=21 r=21 w=21 time=72098 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=0 w=0 time=20016 us)(object id 55399)
130 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=392 r=0 w=0 time=2105 us)
130 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=262 r=0 w=0 time=1126 us)(object id 55602)

********************************************************************************

Tom Kyte
December 18, 2003 - 8:36 am UTC

wonder if it is a side effect of cursor sharing here -- hmm. The difference between the plans is that one is using b*tree-to-bitmap conversions, avoiding the table access by rowid (and that is what is causing the "slowdown" -- all of the IO to read that table a block at a time)

what happens to the plans if you turn off cursor sharing for a minute (alter session set cursor_sharing=exact)? just curious at this point.

A reader, December 18, 2003 - 11:15 am UTC

Attached:

First the fastest, and then the slower one with the first_rows hint:



alter session set cursor_sharing=exact

SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = 'L123'
AND TOWNLAND.GEONAME LIKE 'BALL%'
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = 'TOWNLAND'
AND gL.TableName = mME.TableName
AND gL.LayerName = 'TOWNLAND'
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= 10

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.40       0.41          0          0          0          0
Execute      1     0.00       0.00          0          0          0          0
Fetch        2     0.09       0.28        120        530          6         10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        4     0.50       0.69        120        530          6         10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 COUNT STOPKEY (cr=530 r=120 w=21 time=284532 us)
10 VIEW (cr=530 r=120 w=21 time=284494 us)
10 SORT ORDER BY STOPKEY (cr=530 r=120 w=21 time=284464 us)
130 NESTED LOOPS (cr=530 r=120 w=21 time=283692 us)
130 NESTED LOOPS (cr=138 r=119 w=21 time=270656 us)
1 MERGE JOIN CARTESIAN (cr=4 r=1 w=0 time=11295 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=89 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=52 us)(object id 55930)
1 BUFFER SORT (cr=2 r=1 w=0 time=11101 us)
1 TABLE ACCESS BY INDEX ROWID COUNTY (cr=2 r=1 w=0 time=11002 us)
1 INDEX RANGE SCAN COUNTY_GEONAME_IDX (cr=1 r=1 w=0 time=10963 us)(object id 55401)
130 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=134 r=118 w=21 time=258993 us)
130 BITMAP CONVERSION TO ROWIDS (cr=26 r=40 w=21 time=138838 us)
1 BITMAP AND (cr=26 r=40 w=21 time=138608 us)
1 BITMAP CONVERSION FROM ROWIDS (cr=5 r=0 w=0 time=2838 us)
1118 INDEX RANGE SCAN TOWNLAND_COUNTYID_IDX (cr=5 r=0 w=0 time=1601 us)(object id 55416)
1 BITMAP CONVERSION FROM ROWIDS (cr=21 r=40 w=21 time=135671 us)
5011 SORT ORDER BY (cr=21 r=40 w=21 time=130355 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=19 w=0 time=78502 us)(object id 55399)
130 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=392 r=1 w=0 time=11953 us)
130 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=262 r=0 w=0 time=1625 us)(object id 55602)

********************************************************************************


SELECT a.* FROM
( SELECT /*+ first_rows */ a.*, ROWNUM RECORDINDEX FROM
( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = 'L123'
AND TOWNLAND.GEONAME LIKE 'BALL%'
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = 'TOWNLAND'
AND gL.TableName = mME.TableName
AND gL.LayerName = 'TOWNLAND'
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= 10) a

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.03       0.02          0          0          0          0
Execute      1     0.00       0.00          0          0          0          0
Fetch        2     0.56      12.38       3889      10413          0         10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        4     0.59      12.41       3889      10413          0         10

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 VIEW (cr=10413 r=3889 w=0 time=12388676 us)
10 COUNT STOPKEY (cr=10413 r=3889 w=0 time=12388629 us)
10 VIEW (cr=10413 r=3889 w=0 time=12388592 us)
10 SORT ORDER BY STOPKEY (cr=10413 r=3889 w=0 time=12388562 us)
130 NESTED LOOPS (cr=10413 r=3889 w=0 time=12387024 us)
130 MERGE JOIN CARTESIAN (cr=10021 r=3888 w=0 time=12380531 us)
130 NESTED LOOPS (cr=10019 r=3888 w=0 time=12377287 us)
5011 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=5006 r=3888 w=0 time=12252430 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=15 w=0 time=41073 us)(object id 55399)
130 TABLE ACCESS BY INDEX ROWID COUNTY (cr=5013 r=0 w=0 time=88160 us)
5011 INDEX UNIQUE SCAN COUNTY_UK (cr=2 r=0 w=0 time=28698 us)(object id 55015)
130 BUFFER SORT (cr=2 r=0 w=0 time=1745 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=38 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=25 us)(object id 55930)
130 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=392 r=1 w=0 time=4649 us)
130 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=262 r=0 w=0 time=2697 us)(object id 55602)

********************************************************************************

Re: Catastrophic Performance Degradation

T Truong, February 11, 2004 - 5:39 pm UTC

Mr. Kyte,
We are having the same performance problem as with reviewer Doh!

The following query (provided in your first post of this thread) worked perfectly prior to our database upgrade from 8.1.7 to 9.2.0.

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/

After the upgrade to 9.2.0, we have the performance problem with the above query.

Please continue this thread to determine the cause.

Best Regards,


Tom Kyte
February 11, 2004 - 6:51 pm UTC

how about your example? your query, your 8i tkprof and your 9i one as well

Re: Catastrophic Performance Degradation

T Truong, February 11, 2004 - 8:24 pm UTC

Mr. Kyte,
Thank you for your prompt response.

As a developer, I don't have access to the tkprof utility to check out the query statistics, and so far we're not getting much time from our DBAs to run tkprof; though we will be getting some of their time soon (hopefully next week).

So far, we know that if we set the initialization parameter OPTIMIZER_FEATURES_ENABLE to 8.1.7, the query runs just as fast as it did prior to the database upgrade.
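
In 9i this parameter is also session-settable, so the two behaviours can be compared without touching the init file -- a sketch, timing the paginated query below under each setting from SQL*Plus:

alter session set optimizer_features_enable = '8.1.7';
-- run the query, note the plan and timing ...
alter session set optimizer_features_enable = '9.2.0';
-- ... then run it again and compare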

We don't have any other 8i sandbox to regenerate the explain plan for the query, but here's the query and its explain plan in our only 9i 9.2.0 sandbox:

select
       x.ccbnumber
      ,x.chgtype
      ,x.pkgid
      ,x.project_status
      ,x.title
      ,x.projleadname
 from (
    select a.*, rownum rnum
     from (
           select
                  pkg_info.chgty_code || pkg_info.pkg_seq_id ccbnumber
                 ,pkg_info.chgty_code chgtype
                 ,pkg_info.pkg_seq_id pkgid
                 ,(
                   select decode(pkg_info.sty_code,'INPROCESS','In-Process'
                                                  ,'CREATED','Created'
                                                  ,decode(
                                                          min(
                                                              decode(pvs.sty_code,'COMPLETE',10
                                                                                 ,'CANCEL',20
                                                                                 ,'APPROVE',30
                                                                                 ,'APPROVEDEF',30
                                                                                 ,'INPROCESS',40
                                                                                 ,'DISAPPROVE',50
                                                                                 ,'DISAPPRXA',50
                                                                                 ,'VOID',60
                                                                                 ,'CREATED',70
                                                                                 ,null
                                                                    )
                                                             ),10,'Complete'
                                                              ,20,'Cancelled'
                                                              ,30,'Approved'
                                                              ,40,'In-Process'
                                                              ,50,'Disapproved'
                                                              ,50,'Disapproved XA'
                                                              ,60,'Void'
                                                              ,70,'Created'
                                                         )
                                ) project_status
                     from cms_pkg_ver_statuses pvs
                         ,cms_pkg_vers pv
                    where pv.pkg_seq_id = pkg_info.pkg_seq_id
                      and pv.pkg_seq_id = pvs.pkgver_pkg_seq_id
                      and pv.seq_num = pvs.pkgver_seq_num
                      and pvs.sty_scty_dp_code = 'PKG'
                      and pvs.sty_scty_code = 'STATE'
                      and pvs.create_date =
                             (select max(create_date)
                                from cms_pkg_ver_statuses
                               where pkgver_pkg_seq_id = pvs.pkgver_pkg_seq_id
                                 and pkgver_seq_num = pvs.pkgver_seq_num
                                 and sty_scty_dp_code = 'PKG'
                                 and sty_scty_code = 'STATE'
                             )
                      and pvs.create_date =
                             (select max(create_date)
                                from cms_pkg_ver_statuses a
                               where a.pkgver_pkg_seq_id = pvs.pkgver_pkg_seq_id
                                 and a.pkgver_seq_num = pvs.pkgver_seq_num
                                 and a.sty_scty_dp_code = 'PKG'
                                 and a.sty_scty_code = 'STATE'
                                 and a.create_date =
                                        (select max(create_date)
                                           from cms_pkg_ver_statuses
                                          where pkgver_pkg_seq_id = a.pkgver_pkg_seq_id
                                            and pkgver_seq_num = a.pkgver_seq_num
                                            and sty_scty_dp_code = 'PKG'
                                            and sty_scty_code = 'STATE'
                                        )
                             )
                  ) project_status
                 ,pkg_info.title title
                 ,emp.user_name projleadname
             from pit_pkg_info pkg_info
                 ,emp_person emp
            where pkg_info.projmgr_emp_employee_num = emp.emp_no(+)
              and pkg_info.title like '%AIR%'
            order by pkgid
          ) a
     where rownum <= 100
      ) x
where x.rnum >= 51
;


OPERATION                   OPTIONS         OBJECT_NAME         COST POSITION
--------------------------- --------------- ----------------- ------ --------
SELECT STATEMENT                                                   4        4
  VIEW                                                             4        1
    COUNT                   STOPKEY                                         1
      VIEW                                                         4        1
        NESTED LOOPS        OUTER                                  4        1
          NESTED LOOPS      OUTER                                  3        1
            TABLE ACCESS    BY INDEX ROWID  PIT_PKG_INFO           3        1
              INDEX         RANGE SCAN      PIT_PKG_PKGVER_I       2        1
            INDEX           UNIQUE SCAN     STY_PK                          2
          TABLE ACCESS      BY INDEX ROWID  EMP_PERSON             1        2
            INDEX           UNIQUE SCAN     SYS_C001839                     1

11 rows selected.

Following are the 9i initialization parameters:

SQL> show parameters
O7_DICTIONARY_ACCESSIBILITY          boolean     FALSE                          
_trace_files_public                  boolean     TRUE                           
active_instance_count                integer                                    
aq_tm_processes                      integer     0                              
archive_lag_target                   integer     0                              
audit_file_dest                      string      ?/rdbms/audit                  
audit_sys_operations                 boolean     FALSE                          
audit_trail                          string      TRUE                           
background_core_dump                 string      partial                        
background_dump_dest                 string      /u01/app/oracle/admin/U50DAMC/ 
                                                 bdump                          
backup_tape_io_slaves                boolean     FALSE                          
bitmap_merge_area_size               integer     1048576                        
blank_trimming                       boolean     FALSE                          
buffer_pool_keep                     string                                     
buffer_pool_recycle                  string                                     
circuits                             integer     0                              
cluster_database                     boolean     FALSE                          
cluster_database_instances           integer     1                              
cluster_interconnects                string                                     
commit_point_strength                integer     1                              
compatible                           string      9.2.0.0                        
control_file_record_keep_time        integer     3                              
control_files                        string      /np70/oradata/U50DAMC/cr1/cont 
                                                 rol01.ctl, /np70/oradata/U50DA 
                                                 MC/cr2/control02.ctl, /np70/or 
                                                 adata/U50DAMC/cr3/control03.ct 
                                                 l                              
core_dump_dest                       string      /u01/app/oracle/admin/U50DAMC/ 
                                                 cdump                          
cpu_count                            integer     4                              
create_bitmap_area_size              integer     8388608                        
cursor_sharing                       string      EXACT                          
cursor_space_for_time                boolean     FALSE                          
db_16k_cache_size                    big integer 0                              
db_2k_cache_size                     big integer 0                              
db_32k_cache_size                    big integer 0                              
db_4k_cache_size                     big integer 0                              
db_8k_cache_size                     big integer 0                              
db_block_buffers                     integer     6000                           
db_block_checking                    boolean     FALSE                          
db_block_checksum                    boolean     TRUE                           
db_block_size                        integer     8192                           
db_cache_advice                      string      OFF                            
db_cache_size                        big integer 0                              
db_create_file_dest                  string                                     
db_create_online_log_dest_1          string                                     
db_create_online_log_dest_2          string                                     
db_create_online_log_dest_3          string                                     
db_create_online_log_dest_4          string                                     
db_create_online_log_dest_5          string                                     
db_domain                            string      lgb.ams.boeing.com             
db_file_multiblock_read_count        integer     8                              
db_file_name_convert                 string                                     
db_files                             integer     1024                           
db_keep_cache_size                   big integer 0                              
db_name                              string      U50DAMC                        
db_recycle_cache_size                big integer 0                              
db_writer_processes                  integer     4                              
dblink_encrypt_login                 boolean     FALSE                          
dbwr_io_slaves                       integer     0                              
dg_broker_config_file1               string      ?/dbs/dr1@.dat                 
dg_broker_config_file2               string      ?/dbs/dr2@.dat                 
dg_broker_start                      boolean     FALSE                          
disk_asynch_io                       boolean     TRUE                           
dispatchers                          string                                     
distributed_lock_timeout             integer     60                             
dml_locks                            integer     800                            
drs_start                            boolean     FALSE                          
enqueue_resources                    integer     2389                           
event                                string                                     
fal_client                           string                                     
fal_server                           string                                     
fast_start_io_target                 integer     0                              
fast_start_mttr_target               integer     0                              
fast_start_parallel_rollback         string      LOW                            
file_mapping                         boolean     FALSE                          
filesystemio_options                 string      asynch                         
fixed_date                           string                                     
gc_files_to_locks                    string                                     
global_context_pool_size             string                                     
global_names                         boolean     FALSE                          
hash_area_size                       integer     10000000                       
hash_join_enabled                    boolean     TRUE                           
hi_shared_memory_address             integer     0                              
hpux_sched_noage                     integer     0                              
hs_autoregister                      boolean     TRUE                           
ifile                                file                                       
instance_groups                      string                                     
instance_name                        string      U50DAMC                        
instance_number                      integer     0                              
java_max_sessionspace_size           integer     0                              
java_pool_size                       big integer 50331648                       
java_soft_sessionspace_limit         integer     0                              
job_queue_processes                  integer     4                              
large_pool_size                      big integer 16777216                       
license_max_sessions                 integer     0                              
license_max_users                    integer     0                              
license_sessions_warning             integer     0                              
local_listener                       string                                     
lock_name_space                      string                                     
lock_sga                             boolean     FALSE                          
log_archive_dest                     string                                     
log_archive_dest_1                   string      location=/np70/oradata/U50DAMC 
                                                 /arch MANDATORY REOPEN=60      
log_archive_dest_10                  string                                     
log_archive_dest_2                   string      location=/u01/app/oracle/admin 
                                                 /altarch/U50DAMC OPTIONAL      
log_archive_dest_3                   string                                     
log_archive_dest_4                   string                                     
log_archive_dest_5                   string                                     
log_archive_dest_6                   string                                     
log_archive_dest_7                   string                                     
log_archive_dest_8                   string                                     
log_archive_dest_9                   string                                     
log_archive_dest_state_1             string      enable                         
log_archive_dest_state_10            string      enable                         
log_archive_dest_state_2             string      defer                          
log_archive_dest_state_3             string      defer                          
log_archive_dest_state_4             string      defer                          
log_archive_dest_state_5             string      defer                          
log_archive_dest_state_6             string      enable                         
log_archive_dest_state_7             string      enable                         
log_archive_dest_state_8             string      enable                         
log_archive_dest_state_9             string      enable                         
log_archive_duplex_dest              string                                     
log_archive_format                   string      U50DAMC_%T_%S.ARC              
log_archive_max_processes            integer     2                              
log_archive_min_succeed_dest         integer     1                              
log_archive_start                    boolean     TRUE                           
log_archive_trace                    integer     0                              
log_buffer                           integer     1048576                        
log_checkpoint_interval              integer     10000                          
log_checkpoint_timeout               integer     1800                           
log_checkpoints_to_alert             boolean     FALSE                          
log_file_name_convert                string                                     
log_parallelism                      integer     1                              
logmnr_max_persistent_sessions       integer     1                              
max_commit_propagation_delay         integer     700                            
max_dispatchers                      integer     5                              
max_dump_file_size                   string      10240K                         
max_enabled_roles                    integer     148                            
max_rollback_segments                integer     40                             
max_shared_servers                   integer     20                             
mts_circuits                         integer     0                              
mts_dispatchers                      string                                     
mts_listener_address                 string                                     
mts_max_dispatchers                  integer     5                              
mts_max_servers                      integer     20                             
mts_multiple_listeners               boolean     FALSE                          
mts_servers                          integer     0                              
mts_service                          string      U50DAMC                        
mts_sessions                         integer     0                              
nls_calendar                         string                                     
nls_comp                             string                                     
nls_currency                         string                                     
nls_date_format                      string                                     
nls_date_language                    string                                     
nls_dual_currency                    string                                     
nls_iso_currency                     string                                     
nls_language                         string      AMERICAN                       
nls_length_semantics                 string      BYTE                           
nls_nchar_conv_excp                  string      FALSE                          
nls_numeric_characters               string                                     
nls_sort                             string                                     
nls_territory                        string      AMERICA                        
nls_time_format                      string                                     
nls_time_tz_format                   string                                     
nls_timestamp_format                 string                                     
nls_timestamp_tz_format              string                                     
object_cache_max_size_percent        integer     10                             
object_cache_optimal_size            integer     102400                         
olap_page_pool_size                  integer     33554432                       
open_cursors                         integer     500                            
open_links                           integer     100                            
open_links_per_instance              integer     4                              
optimizer_dynamic_sampling           integer     0                              
optimizer_features_enable            string      8.1.7                          
optimizer_index_caching              integer     0                              
optimizer_index_cost_adj             integer     100                            
optimizer_max_permutations           integer     80000                          
optimizer_mode                       string      CHOOSE                         
oracle_trace_collection_name         string                                     
oracle_trace_collection_path         string      ?/otrace/admin/cdf             
oracle_trace_collection_size         integer     5242880                        
oracle_trace_enable                  boolean     FALSE                          
oracle_trace_facility_name           string      oracled                        
oracle_trace_facility_path           string      ?/otrace/admin/fdf             
os_authent_prefix                    string      ops_                           
os_roles                             boolean     FALSE                          
parallel_adaptive_multi_user         boolean     FALSE                          
parallel_automatic_tuning            boolean     FALSE                          
parallel_execution_message_size      integer     2152                           
parallel_instance_group              string                                     
parallel_max_servers                 integer     5                              
parallel_min_percent                 integer     0                              
parallel_min_servers                 integer     0                              
parallel_server                      boolean     FALSE                          
parallel_server_instances            integer     1                              
parallel_threads_per_cpu             integer     2                              
partition_view_enabled               boolean     FALSE                          
pga_aggregate_target                 big integer 25165824                       
plsql_compiler_flags                 string      INTERPRETED                    
plsql_native_c_compiler              string                                     
plsql_native_library_dir             string                                     
plsql_native_library_subdir_count    integer     0                              
plsql_native_linker                  string                                     
plsql_native_make_file_name          string                                     
plsql_native_make_utility            string                                     
plsql_v2_compatibility               boolean     FALSE                          
pre_page_sga                         boolean     FALSE                          
processes                            integer     600                            
query_rewrite_enabled                string      false                          
query_rewrite_integrity              string      enforced                       
rdbms_server_dn                      string                                     
read_only_open_delayed               boolean     FALSE                          
recovery_parallelism                 integer     0                              
remote_archive_enable                string      true                           
remote_dependencies_mode             string      TIMESTAMP                      
remote_listener                      string                                     
remote_login_passwordfile            string      EXCLUSIVE                      
remote_os_authent                    boolean     FALSE                          
remote_os_roles                      boolean     FALSE                          
replication_dependency_tracking      boolean     TRUE                           
resource_limit                       boolean     FALSE                          
resource_manager_plan                string                                     
rollback_segments                    string      r01, r02, r03, r04, r05, r06,  
                                                 r07, r08                       
row_locking                          string      always                         
serial_reuse                         string      DISABLE                        
serializable                         boolean     FALSE                          
service_names                        string      U50DAMC.lgb.ams.boeing.com     
session_cached_cursors               integer     0                              
session_max_open_files               integer     10                             
sessions                             integer     665                            
sga_max_size                         big integer 386756664                      
shadow_core_dump                     string      partial                        
shared_memory_address                integer     0                              
shared_pool_reserved_size            big integer 10066329                       
shared_pool_size                     big integer 201326592                      
shared_server_sessions               integer     0                              
shared_servers                       integer     0                              
sort_area_retained_size              integer     5000000                        
sort_area_size                       integer     5000000                        
spfile                               string                                     
sql92_security                       boolean     FALSE                          
sql_trace                            boolean     FALSE                          
sql_version                          string      NATIVE                         
standby_archive_dest                 string      ?/dbs/arch                     
standby_file_management              string      MANUAL                         
star_transformation_enabled          string      FALSE                          
statistics_level                     string      TYPICAL                        
tape_asynch_io                       boolean     TRUE                           
thread                               integer     0                              
timed_os_statistics                  integer     0                              
timed_statistics                     boolean     TRUE                           
trace_enabled                        boolean     TRUE                           
tracefile_identifier                 string                                     
transaction_auditing                 boolean     TRUE                           
transactions                         integer     200                            
transactions_per_rollback_segment    integer     5                              
undo_management                      string      MANUAL                         
undo_retention                       integer     900                            
undo_suppress_errors                 boolean     FALSE                          
undo_tablespace                      string                                     
use_indirect_data_buffers            boolean     FALSE                          
user_dump_dest                       string      /u01/app/oracle/admin/U50DAMC/ 
                                                 udump                          
utl_file_dir                         string                                     
workarea_size_policy                 string      AUTO                           

Hope you can spot something in these.

Best Regards,
Thomas
 

Tom Kyte
February 12, 2004 - 8:31 am UTC

I don't like your DBAs then. Really, they are preventing everyone from doing *their job*. argh.....


anyway, that plan "looks dandy" -- it looks like it would get first rows first.

We really need to "compare" plans.

Can you at least get an autotrace traceonly explain out of 8i (or at least an explain plan)?

Can you tell me "how fast it was in 8i" and "how slow it is in 9i", and are the machines you are testing on even remotely similar?

Very handy

Sajid Anwar, March 08, 2004 - 11:29 am UTC

Hi Tom,
Just a simple one about your SPECIAL QUERY for paging. I am using your method for paging.

select *
from ( select a.*, rownum rnum
from ( select * from t ) a
where rownum <= 5
) b
where rnum >= 2;

This gives me everything plus one extra column, rnum, that I don't want. How do I get rid of it in the same query?


Many thanks in advance.

Regards,
Sajid


Tom Kyte
March 08, 2004 - 2:05 pm UTC

besides just selecting the columns you want in the outer wrapper? nothing


select a, b, c, d
from ......

instead of select *
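
For instance, a sketch against the stock EMP demo table (substitute your own column list) -- rnum simply isn't in the outermost select list:

select empno, ename, sal
  from ( select a.*, rownum rnum
           from ( select empno, ename, sal
                    from emp
                   order by sal desc ) a
          where rownum <= 5 )
 where rnum >= 2;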

Regarding post from "T Truong from Long Beach, CA"

Matt, March 08, 2004 - 6:39 pm UTC

Tom,

There are various ways of getting access to trace files on the server, some of which have been documented on this site.

However, once the developer has the raw trace they need access to TKPROF. As far as I am aware this is not part of the default Oracle client distribution. Installing the DB on each desktop would be nice, but an administrator's nightmare.

Are there any licensing restrictions that might prevent copying the required libraries and executables for tkprof and placing these on a desktop for developer use? I tested this (though not exhaustively) and it appears to work.

Do you see any problems with this approach?

Cheers,


Tom Kyte
March 09, 2004 - 10:50 am UTC

why an admin's nightmare? developers cannot install software?


I'm not aware of any issues with getting people access to the code they have licensed.

how do I display rows without using the /*+ first_rows */ hint?

A reader, March 09, 2004 - 11:35 am UTC

Hi Tom, we use several applications to browse the data from Oracle; one of them is Toad.

They show a part of the data as soon as it is available on the screen and don't wait for the complete result set to be returned.


I checked v$sql and v$sqlarea; there isn't any statement with the first_rows hint. They show the exact SQL statement that we "users" passed. How can I do that in my custom application? I don't want to wait for 4k records and then show them to the user. I need first_rows-hint functionality without changing the statement. Possible? How? Is paging involved?

and yes we are using Java 1.4 + classes12.jar
and to display results, we use JTable





Tom Kyte
March 09, 2004 - 3:26 pm UTC

you can alter your session to set the optimizer goal if you like.
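
For example (a sketch; the FIRST_ROWS_n variants exist in 9i and up, while plain FIRST_ROWS works in 8i as well), issue this in the application's session before running the queries -- from JDBC it can be executed through an ordinary Statement right after connecting:

alter session set optimizer_mode = first_rows_10;  -- optimize to return the first 10 rows quickly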

Response to Tom

Matt, March 09, 2004 - 5:00 pm UTC

>> Why an admins nightmare? developers cannot install software?

I'm not intending to create a developer/administrator divide here - software development is a team effort. Of course developers can install software; however, I would prefer that everyone runs the same patched version of the DB, and I see managing this when there are multiple desktop DBs as problematic.

>> I'm not aware of any issues with getting people access to the code they have licensed.

This is the issue, I guess. Is a patched desktop version of the DB that is used for development "licensed for development" (i.e., free), or is there a license payment required?

I understand that the "standalone" tkprof might fall into a different category. But if a patched desktop version may be licensed for development, I don't see an issue.

Ta.

Tom Kyte
March 09, 2004 - 10:41 pm UTC

I don't know what you mean by a patched desktop version?

Very Useful

shabana, March 16, 2004 - 5:33 am UTC

I had problems populating large result sets. The query helped me fetch the needed rows while keeping a page count in the web tier.

"order by"

A reader, April 01, 2004 - 6:45 pm UTC

hi tom
"select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/
"

Does not the order by force you to go through the entire result set anyway -- which is pretty much the same overhead as a "count(*)", thus defeating the purpose? And I think in most cases, users do want to sort their results in some order. (Also, in the order by case, the FIRST_ROWS hint is useless...)
thanx!

Tom Kyte
April 02, 2004 - 9:49 am UTC

No it doesn't


think "index"


also, using rownum with the order by trick above has special top-n optimizations, so in the case where it would have to get the entire result set -- it is much more efficient than asking Oracle to generate the result set and just fetching the first N rows (using the rownum kicks in a special top-n optimization)


This is NOT like count(*). count(*) is something we can definitely 100% live without and would force the entire result set to be computed (most of the times, we don't need to get the entire result set here!)

ORDER BY is something you cannot live without -- we definitely 100% need it in many cases.

thanx!

A reader, April 02, 2004 - 10:56 am UTC

I tried it out myself and I am getting the results that you describe. I believe the "count stopkey" indicates the rownum-based top-n optimization you talked about. (t1 is a copy of all_objects with some 30,000 rows and one index on all the columns of t1 being selected.)
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY */
select owner, object_name, object_type, rownum
from t1
where owner = 'PUBLIC'
order by owner, object_name, object_type
) a
where rownum <= 10
)
where rnum >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1      0.01       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.03       0.02          0          4          0          10

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  VIEW (cr=4 pr=0 pw=0 time=486 us)
     10   COUNT STOPKEY (cr=4 pr=0 pw=0 time=427 us)
     10    VIEW (cr=4 pr=0 pw=0 time=373 us)
     10     COUNT (cr=4 pr=0 pw=0 time=309 us)
     10      INDEX RANGE SCAN OBJ#(56907) (cr=4 pr=0 pw=0 time=270 us)(object id 56907)

Thanx!!!!



Tom Kyte
April 02, 2004 - 1:31 pm UTC

it also applies in sorting unindexed stuff as well (full details in expert one on one)

basically if you

select *
from ( select * from really_huge_table order by something_not_indexed )
where rownum < 10

oracle will get the first record and put it into slot 1 of a result set

it'll get the second record and, if it is less than the one in slot one, it'll push that one down to slot two and put the new record in slot one; else the new record goes into slot two

and so on for the first 10 records -- we now have 10 sorted records -- now it'll get the 11th and either

a) the 11th exceeds the one in the 10th slot -- this new record is discarded
b) the 11th is less than one of the existing 10 -- the current 10th goes away and this gets stuffed in there.


lots more efficient to sort the top N, than it would be to sort the entire result set into temp, merge it all back together -- just to fetch the first 10...
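
A quick way to watch this optimization kick in (a sketch, assuming a big_table copy of all_objects as used elsewhere in this thread):

set autotrace traceonly explain
select *
  from ( select * from big_table order by object_name )
 where rownum < 10;
-- the plan should show SORT (ORDER BY STOPKEY) rather than a plain SORT (ORDER BY)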

wow!

A reader, April 02, 2004 - 1:59 pm UTC

awesome - thanx a lot!!! ( not sure if you are on
vacation or is this your idea of vacation?;))

Regards

i think

A reader, April 02, 2004 - 2:03 pm UTC

"(full details in expert one
on one)"
You meant effective oracle by design (page 502)
thanx!



Tom Kyte
April 02, 2004 - 3:21 pm UTC

doh, you are right.

so here is the second test (without indexes)

A reader, April 02, 2004 - 2:45 pm UTC

thought I would share with others since I ran it anyway.
------schema
spool s3
set echo on
drop table t2;
create table t2
as select owner, object_name, object_type
from all_objects;
insert into t2
select * from t2;
commit;

analyze table t2 compute statistics for table for all indexes for all
indexed columns;
-------------------
notice we have no indexes created
--------- selects ran - one with rownum and one without

set termout off
alter session set timed_statistics=true;
alter session set events '10046 trace name context forever, level 12';
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM ABSENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
);

select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM PRESENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
where rownum <= 10
)
where rnum >= 1;

--------tkprof results----
-- FIRST CASE - ROWNUM ABSENT
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM ABSENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     2528      2.02       4.10        515        500          7       37904
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2530      2.02       4.10        515        500          7       37904

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 64

Rows     Row Source Operation
-------  ---------------------------------------------------
  37904  VIEW (cr=500 pr=515 pw=515 time=3587213 us)
  37904   COUNT (cr=500 pr=515 pw=515 time=3413040 us)
  37904    VIEW (cr=500 pr=515 pw=515 time=3295049 us)
  37904     SORT ORDER BY (cr=500 pr=515 pw=515 time=3144606 us)
  37904      COUNT (cr=500 pr=0 pw=0 time=613868 us)
  37904       TABLE ACCESS FULL T2 (cr=500 pr=0 pw=0 time=281939 us)

--- second case ROWNUM present
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM PRESENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
where rownum <= 10
)
where rnum >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.33       0.49          0        500          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.33       0.49          0        500          0          10

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 64

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  VIEW (cr=500 pr=0 pw=0 time=495126 us)
     10   COUNT STOPKEY (cr=500 pr=0 pw=0 time=494955 us)
     10    VIEW (cr=500 pr=0 pw=0 time=494864 us)
     10     SORT ORDER BY STOPKEY (cr=500 pr=0 pw=0 time=494817 us)
  37904      COUNT (cr=500 pr=0 pw=0 time=262446 us)
  37904       TABLE ACCESS FULL OBJ#(56928) (cr=500 pr=0 pw=0 time=129898 us)


Elapsed time in first case: 4.10 seconds
Elapsed time in second case (what would be our query) : 0.49 seconds

the second option is 8 times faster.


A reader, April 05, 2004 - 1:01 pm UTC

Invaluable information.

thank you Tom.

Different question

Roughing it, April 14, 2004 - 6:16 pm UTC

I have a table with time and place,
where the place is a single string with city,stateAbbrev
like SeattleWA
It is indexed by time and has about 10M records.

These queries take no time at all as expected:
select min(time) from Time_Place;
select max(time) from Time_Place;

But if I do:
select min(time), max(time) from Time_Place;
it takes a looooooong time...

What I really want is:
select max(time) from Time_Place
where place like '%CA';

If it started searching at the end, it would find it very quickly. It's not finding it quickly. It's appearing to search all the records.

Is there a way to speed this up?
Or must I keep a list of last times per state and do
select max(time) from Time_Place
where time>=(select last_time from Last_per_state
where state='CA')
and place like '%CA';

Thanks,
-r

Tom Kyte
April 15, 2004 - 8:07 am UTC

Ok, two things here -- select min/max and how to make that query on data stored "not correctly" (it should have been two fields!!!) go fast.

max/min first. big_table is 1,000,000 rows on my system, if we:

big_table@ORA9IR2> set autotrace on
big_table@ORA9IR2> select min(created) from big_table;

MIN(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=7)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=7000000)

that used index full scan (min/max) -- it knew it could read the index head or tail and be done, very efficient:


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

big_table@ORA9IR2>
big_table@ORA9IR2> select max(created) from big_table;

MAX(CREAT
---------
28-NOV-03


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=7)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=7000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

same there, but:

big_table@ORA9IR2>
big_table@ORA9IR2> select min(created), max(created) from big_table;

MIN(CREAT MAX(CREAT
--------- ---------
12-MAY-02 28-NOV-03


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=257 Card=1 Bytes=7)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FAST FULL SCAN) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=257 Card=1000000 Bytes=7000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2671 consistent gets
2656 physical reads
0 redo size
456 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

it cannot read the head and the tail 'in the general sense' -- I'll concede that in this case it could, but in a query with a group by it could not, really. So -- can we do something?

big_table@ORA9IR2>
big_table@ORA9IR2>
big_table@ORA9IR2> select min(created), max(created)
2 from (
3 select min(created) created from big_table
4 union all
5 select max(created) created from big_table
6 )
7 /

MIN(CREAT MAX(CREAT
--------- ---------
12-MAY-02 28-NOV-03


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=9)
   1    0   SORT (AGGREGATE)
   2    1     VIEW (Cost=6 Card=2 Bytes=18)
   3    2       UNION-ALL
   4    3         SORT (AGGREGATE)
   5    4           INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=7000000)
   6    3         SORT (AGGREGATE)
   7    6           INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=7000000)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
456 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

that shows how to do that. Now onto that state field stuffed onto the end -- here we have to full scan the table (or full scan an index on object_name,created) since EACH ROW must be inspected:

big_table@ORA9IR2>
big_table@ORA9IR2> select max(created)
2 from big_table
3 where object_name like '%WI';

MAX(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1379 Card=1 Bytes=24)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1379 Card=50000 Bytes=1200000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
14338 consistent gets
14327 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

we can begin by observing that it is the same as this substr() -- get the last two characters:


big_table@ORA9IR2>
big_table@ORA9IR2> select max(created)
2 from big_table
3 where substr(object_name,length(object_name)-1) = 'WI';

MAX(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1379 Card=1 Bytes=24)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1379 Card=10000 Bytes=240000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
14338 consistent gets
13030 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Now we have something to INDEX!

big_table@ORA9IR2>
big_table@ORA9IR2> create index fbi on big_table( substr(object_name,length(object_name)-1), created )
2 compute statistics;

Index created.

big_table@ORA9IR2>
big_table@ORA9IR2> select max(created)
2 from big_table
3 where substr(object_name,length(object_name)-1) = 'WI';

MAX(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=24)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (RANGE SCAN) OF 'FBI' (NON-UNIQUE) (Cost=3 Card=10000 Bytes=240000)




Statistics
----------------------------------------------------------
29 recursive calls
0 db block gets
7 consistent gets
2 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

big_table@ORA9IR2>
big_table@ORA9IR2> set autotrace off


getting rows N through M of a result set

Ben Kafka, April 15, 2004 - 5:06 pm UTC

Really appreciate finding this info... wish oracle did it like postgresql, but this would have been some work to come up with myself. Thanks!

getting N rows out of 1 row

Deepak Gulrajani, April 21, 2004 - 2:36 pm UTC

The above feedback was useful for returning a smaller number of rows from the result set of the query.

Tom, Can we write a SQL Statement and return 2 or 3(dynamic)rows into a PL/SQL Table for every single row?

Tom Kyte
April 21, 2004 - 9:05 pm UTC

Can we write a SQL Statement and return 2 or 3(dynamic)rows into a PL/SQL
Table for every single row?

that doesn't "make sense" to me. not sure what you mean.

getting N rows out of 1 row

Deepak Gulrajani, April 23, 2004 - 2:13 pm UTC

Tom, can we achieve this with a single SQL in bulk rather than row by row? Sorry, my question in the previous update was a little vague. Here is the example --
I would like to create rows in table B (2 or 3 depending on the value of col3 for every row in table A; i.e. if the value of col3 = F-2 then I need to create 2 rows in table B, and if the value of col3 = F-3 then I need to create 3 rows in table B). For example ----

ROW IN TABLE A
-----------------------
col1  col2  col3  col4  col5  col6
----  ----  ----  ----  ----  ----
1     ITEM  F-2   XXX   YYY    15

ROWS IN TABLE B (if col3 = F-2)
--------------------------
col1  col2  col3  col4  col5  col6
----  ----  ----  ----  ----  ----
1     ITEM  F-2   XXX   YYY   -15
2     IPV   F-2   XXX   YYY    15

ROWS IN TABLE B (if col3 = F-3 then basically col6 is further split)
--------------------------
col1  col2  col3  col4  col5  col6
----  ----  ----  ----  ----  ----
1     ITEM  F-3   XXX   YYY   -15
2     IPV   F-3   XXX   YYY    12
3     ERV   F-3   XXX   YYY     3



Tom Kyte
April 23, 2004 - 3:23 pm UTC

if substr( col3,3,1 ) is always a number then:


select a.*
from a,
(select rownum r from all_objects where r <= 10) x
where x.r <= to_number(substr(a.col3,3,1))
/



(adjust r <= 10 to your needs, if 10 isn't "big enough", make it big enough)


getting N rows out of 1 row

Deepak Gulrajani, April 23, 2004 - 4:47 pm UTC

Tom, Thanks for the prompt and precise reply. --deepak

just a tiny fix

Marcio, April 23, 2004 - 7:52 pm UTC

select a.*
from a,
(select rownum r from all_objects where r <= 10) x
^^^^^^^^^^^^^
where x.r <= to_number(substr(a.col3,3,1))
/

instead of where r <= 10 you have where rownum <= 10

ops$marcio@MRP920> select rownum r from all_objects where r <= 10;
select rownum r from all_objects where r <= 10
*
ERROR at line 1:
ORA-00904: "R": invalid identifier

so, you have:

select a.*
from a,
(select rownum r from all_objects where rownum <= 10) x
where x.r <= to_number(substr(a.col3,3,1))
/


Tom Kyte
April 23, 2004 - 7:57 pm UTC

thanks, that is correct (every time I put a query up without actually running the darn thing that happens :)

Selecting nth Row from table by IDNumber

denni50, April 26, 2004 - 8:55 am UTC

Hi Tom

I'm developing a Second Gift Analysis Report.
(mgt wants to see the activity of first time donors
who give second gifts).

The dilemma is I have to go back and start with
donors who gave their 1st gift in November 2003...then
generate a report when they gave their second gift.
Some of the donors may have gone on to give 3rd and 4th
gifts through April 2004...however all subsequent gifts
after the second gift need to be excluded from the query.

On my home computer(Oracle 9i) I was able to use:
ROW_NUMBER() OVER(PARTITION BY idnumber ORDER BY giftdate)
as rn....etc to get the results using test data.

At work we don't have AF (analytic functions).
Below is the query to find the first
time donors in November 2003. I then inserted those records
into a temp table called SecondGift:

FirstGift Query:

select idnumber,giftdate,giftamount
from gift where idnumber in(select g.idnumber
from gift g
where g.usercode1='ACGA'
and g.giftdate < to_date('01-NOV-2003','DD-MON-YYYY')
having sum(g.giftamount)=0
group by g.idnumber)
and giftamount>0
and giftdate between to_date('01-NOV-2003','DD-MON-YYYY')
and to_date('30-NOV-2003','DD-MON-YYYY')

here is the query trying to select second gift donors:
(however it's only selecting idnumbers with count=2.
A donor may have nth records even though I'm only searching
for the second giftamount>0)

Second Gift Query
select idnumber,giftdate,giftamount
from gift where idnumber in(select g.idnumber
from gift g where g.idnumber in(select s.idnumber
from secondgift s
where s.firstgiftcode='Nov2003')
and g.giftamount>0
having count(g.giftdate)=2
group by g.idnumber)
and giftamount>0

tried using rownum (typically used for top-n analysis), but that returned only the 2nd row from 8mil records.


thanks for any feedback







Tom Kyte
April 26, 2004 - 9:36 am UTC

perhaps this'll help:

ops$tkyte@ORA9IR2> select * from t;
 
  IDNUMBER GIFTDATE  GIFTAMOUNT
---------- --------- ----------
         1 01-OCT-03         55
         1 01-APR-04         65
         1 02-APR-04         65
         2 01-DEC-03         55
         2 01-APR-04         65
         3 01-OCT-03         55
         3 21-OCT-03         65
 
7 rows selected.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select idnumber,
  2         gf1,
  3         to_date(substr(gf2,1,14),'yyyymmddhh24miss') gd2,
  4         to_number(substr(gf2,15)) ga2
  5    from (
  6  select idnumber,
  7         max(giftdate1) gf1,
  8         min(giftdate2) gf2
  9    from (
 10  select idnumber,
 11         case when giftdate <= to_date('01-nov-2003','dd-mon-yyyy')
 12              then giftdate
 13          end giftdate1,
 14         case when giftdate  > to_date('01-nov-2003','dd-mon-yyyy')
 15              then to_char(giftdate,'yyyymmddhh24miss') || to_char(giftamount)
 16          end giftdate2
 17    from t
 18         )
 19   group by idnumber
 20  having max(giftdate1) is not null and min(giftdate2) is not null
 21         )
 22  /
 
  IDNUMBER GF1       GD2              GA2
---------- --------- --------- ----------
         1 01-OCT-03 01-APR-04         65
 
ops$tkyte@ORA9IR2>
 

set of rows at a time

pushparaj arulappan, April 28, 2004 - 11:36 am UTC

Tom,

In our web application we need to retrieve data from the database in portions and present it to the user piecemeal.

For example, if a search query retrieves 100000 rows, initially we only want to present the user the first 10000 rows, then pick the next 10000 rows and so on..

The query may join multiple tables.

We use a connection pool and hence do not want to hold on to the connection for that particular user until the user reviews all 100000 rows. We probably want to disconnect the user's connection from the database after fetching the first 10000 rows.

Can you please guide us.

Our database is Oracle9i and weblogic is the application server.

Thanks
Pushparaj

Tom Kyte
April 28, 2004 - 6:57 pm UTC

10,000!!!!!! out of 100,000!!!!!

are you *kidding*???

google = gold standard for searching

google = 10 hits per page
google = if you try to go to page 100, we'll laugh at you and then say "no"
google = "got it so right"


you need to back off by two to three orders of magnitude here -- at least.

and then use this query (above)

Selecting n rows from tables

Graeme Whitfield, May 06, 2004 - 3:38 am UTC

Thanks, this saved me a bucket of time!!!

Selecting N rows for each Group

Mir, May 21, 2004 - 3:27 pm UTC

Hi Tom,

How will I write a SQL query to fetch N rows of every group? If we take the DEPT/EMP example, I want to retrieve, say, the first 5 rows of EVERY dept.



Tom Kyte
May 22, 2004 - 11:14 am UTC

select *
from ( select ..., ROW_NUMBER() over (PARTITION BY DEPT order by whatever )rn
from emp )
where rn <= 5;
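
A concrete version of that against the standard SCOTT demo schema (a sketch; here "first 5" is taken to mean the 5 highest paid per department):

select *
  from ( select e.*,
                row_number() over (partition by deptno order by sal desc) rn
           from scott.emp e )
 where rn <= 5;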

Thanks Tom, * * * * * invaluable * * * * *

A reader, May 24, 2004 - 11:51 am UTC


Thanks Tom, * * * * * invaluable * * * * *

A reader, May 24, 2004 - 11:52 am UTC


I need help about how to paginate

Fernando Sanchez, May 30, 2004 - 4:09 pm UTC

I had never had to work with this kind of thing and I am quite lost.

An application is asking me for any page of any size from a table and it is taking too long. I think the problem is the pagination.

This is an example of what they ask me; it returns 10 rows out of 279368 (it is taking 00:01:117.09)

select *
from (select a.*, rownum rnum
from (select env.CO_MSDN_V, env.CO_IMSI_V, sms.CO_TEXT_V, env.CO_MSC_V, per.DS_PER_CLT_V, env.CO_REIN_N, TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'), TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from ir_tb_env_clts env, IR_CT_SMS sms, IR_CT_PER_CLT per
where env.CO_SMS_N = sms.CO_SMS_N(+)
and env.CO_PER_CLT_N = per.CO_PER_CLT_N(+)
order by env.SQ_ECL_01, env.CO_MSDN_V, env.CO_SMS_N, env.FX_FECH_ENV_D) a
where rownum <= 100510)
where rnum >= 100501;



Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=28552 Card=136815 Bytes=28867965)
   1    0   VIEW (Cost=28552 Card=136815 Bytes=28867965)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=28552 Card=136815 Bytes=27089370)
   4    3         SORT (ORDER BY STOPKEY) (Cost=28552 Card=136815 Bytes=35435085)
   5    4           HASH JOIN (OUTER) (Cost=3016 Card=136815 Bytes=35435085)
   6    5             MERGE JOIN (OUTER) (Cost=833 Card=136815 Bytes=26542110)
   7    6               SORT (JOIN)
   8    7                 PARTITION RANGE (ALL)
   9    8                   TABLE ACCESS (FULL) OF 'IR_TB_ENV_CLTS' (Cost=829 Card=136815 Bytes=13544685)
  10    6               SORT (JOIN) (Cost=3 Card=82 Bytes=7790)
  11   10                 TABLE ACCESS (FULL) OF 'IR_CT_SMS' (Cost=1 Card=82 Bytes=7790)
  12    5             TABLE ACCESS (FULL) OF 'IR_CT_PER_CLT' (Cost=1 Card=82 Bytes=5330)




Statistics
----------------------------------------------------------
5481 recursive calls
2443 db block gets
3548 consistent gets
57938 physical reads
107572 redo size
1580 bytes sent via SQL*Net to client
426 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
2 sorts (disk)
10 rows processed


Any advice will be helful for me.


Thanks in advance.



Tom Kyte
May 31, 2004 - 12:12 pm UTC

think about what has to take place here --

either:

at least the first 100,000-plus rows would have to be retrieved via an index (very painfully slow to go row by row) and then the 10 you want would be returned

or

the entire result is gotten as fast as possible and sorted and then the 10 you want are returned.

there will be nothing "speedy" about this. Ask the developer to give you the business case that would actually necessitate going beyond say the first 100 rows (first 10 pages of a result set). Ask them to find a single search engine on the web (say like google) that lets you go to "row 100,000 out of lots of rows". You won't find one.

I believe the application has got it "wrong" here. Who would have the

a) patience to hit next page 1,000 times to get to this page?
b) the *NEED* to go to page 1,000?





partially solved

Fernando Sanchez, May 30, 2004 - 5:49 pm UTC

The biggest problem was the joins in the innermost query

select env.CO_MSDN_V, sms.CO_TEXT_V, env.CO_MSC_V, per.DS_PER_CLT_V, env.CO_REIN_N, TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'), TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from (select a.*, rownum rnum
from (select *
from ir_tb_env_clts
order by SQ_ECL_01, CO_MSDN_V, CO_SMS_N, FX_FECH_ENV_D) a
where rownum <= 100510) env, IR_CT_SMS sms, IR_CT_PER_CLT per
where env.CO_SMS_N = sms.CO_SMS_N(+)
and env.CO_PER_CLT_N = per.CO_PER_CLT_N(+)
and env.rnum >= 100501

takes only about 11 seconds. I'm sure there are more things I could do.

Apart from that, isn't there a more standard way of returning pages of a table to an application?

Thanks again.


Tom Kyte
May 31, 2004 - 12:30 pm UTC

guess what -- rownum is assigned BEFORE order by is done.

what you have done is:

a) gotten the first 100510 rows
b) sorted them
c) joined them (possibly destroying the sorted order, most likely)
d) returned the "last ten" in some random order.

In short -- you have not returned "rows N thru M", so fast=true this is *not*

You can try something like this. the goal with "X" is to get the 10 rowids *after sorting* (so there better be an index on the order by columns AND one of the columns better be NOT NULL in the data dictionary).

Once we get those 10 rows (and that'll take as long as it takes to range scan that index from the START to the 100,000+ plus row -- that'll be some time), we'll join to the table again to pick up the rows we want and outer join to SMS and PER.

select /*+ FIRST_ROWS */
env.CO_MSDN_V,
sms.CO_TEXT_V,
env.CO_MSC_V,
per.DS_PER_CLT_V,
env.CO_REIN_N,
TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'),
TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from (select /*+ FIRST_ROWS */ rid
from (select /*+ FIRST_ROWS */ a.*, rownum rnum
from (select /*+ FIRST_ROWS */ rowid rid
from ir_tb_env_clts
order by SQ_ECL_01, CO_MSDN_V, CO_SMS_N, FX_FECH_ENV_D
) a
where rownum <= :n
)
where rnum >= :m
) X,
ir_tb_env_clts env,
IR_CT_SMS sms,
IR_CT_PER_CLT per
where env.rowid = x.rid
and env.CO_SMS_N = sms.CO_SMS_N(+)
and env.CO_PER_CLT_N = per.CO_PER_CLT_N(+)
order by env.SQ_ECL_01, env.CO_MSDN_V, env.CO_SMS_N, env.FX_FECH_ENV_D
/

and if the outer join is causing us issues we can:

select /*+ FIRST_ROWS */
env.CO_MSDN_V,
(select CO_TEXT_V
from ir_ct_sms sms
where env.CO_SMS_N = sms.CO_SMS_N),
env.CO_MSC_V,
(select DS_PER_CLT_V
from IR_CT_PER_CLT per
where env.CO_PER_CLT_N = per.CO_PER_CLT_N ),
env.CO_REIN_N,
TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'),
TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from (select /*+ FIRST_ROWS */ rid
from (select /*+ FIRST_ROWS */ a.*, rownum rnum
from (select /*+ FIRST_ROWS */ rowid rid
from ir_tb_env_clts
order by SQ_ECL_01, CO_MSDN_V, CO_SMS_N, FX_FECH_ENV_D
) a
where rownum <= :n
)
where rnum >= :m
) X,
ir_tb_env_clts env
where env.rowid = x.rid
order by env.SQ_ECL_01, env.CO_MSDN_V, env.CO_SMS_N, env.FX_FECH_ENV_D
/


assuming SMS and PER are "optional 1 to 1 relations with ENV" -- if they are not -- then your query above really returns "randomness" since it would get 10 random rows -- and then turn them into N random rows....


Insert to a file

A reader, June 10, 2004 - 10:31 am UTC

I have a partitioned table. Each partition has around 5 million rows. I need to unload a single partition's data to a file, but in batches of say 10, so each set will be around 500,000 rows.
What is the most efficient way to do that?
I was thinking of using your query to get m thru n, parameterizing it, and in a loop using the utl_file package.
Any suggestions or any alternative approach?

Tom Kyte
June 10, 2004 - 5:06 pm UTC

no, you would have a single query:

select * from your_table partition(p) t;

and array fetch from it 10 rows at a time. do not even consider "paging" thru it, do not even CONSIDER it.



sqlplus can do this.
see http://asktom.oracle.com/~tkyte/flat
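
A minimal SQL*Plus sketch of that approach (column names are hypothetical; the flat utility at that URL generalizes this):

set heading off feedback off pagesize 0 linesize 1000 trimspool on
spool part_p.dat
select col1 || ',' || col2 || ',' || col3
  from your_table partition (p);
spool off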

Insert to a file DB version 9.2

A reader, June 10, 2004 - 10:42 am UTC

forgot the DB version.
Thanx

Insert to a file DB version 9.2 some Clarification

A reader, June 10, 2004 - 5:19 pm UTC

Thanx for your response.
When you say "select * from your_table partition(p) t;

and array fetch from it 10 rows at a time. do not even consider "paging" thru
it, do not even CONSIDER it."

1) By array fetch, do you mean a bulk collect with a limit clause?
Will a cursor be able to handle a 2 million row set with the limit set to 500,000, so there will be 10 such sets?

2) Can I load these sets of 500,000 to a different external table each time instead of using utl_file?
Will that be better?
3) Is it possible to use insert /*+ append */ into an external table, like insert /*+ append */ select .. batch of 500,000 for each set?

Thanx



Tom Kyte
June 10, 2004 - 8:11 pm UTC

1) if you were to do this in plsql - yes, but i would recommend either sqlplus or proc (see that url)


2) in 10g, yes, in 9i -- no, you cannot "create" an external table as select.

3) you cannot insert into an external table.
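
In 10g, the "create an external table as select" unload would look something like this sketch (assuming a directory object named UNLOAD_DIR already exists):

create table ext_part_p
organization external (
  type oracle_datapump
  default directory unload_dir
  location ('part_p.dmp')
)
as
select * from your_table partition (p);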

Insert to a file DB version 9.2 some Clarification :to add

A reader, June 10, 2004 - 5:30 pm UTC

As I need data in 10 different files of 500,000 rows each

Tom Kyte
June 10, 2004 - 8:12 pm UTC

I'd use C if you could (the code is pretty much already written for you -- if you are on unix, you could array_flat | split)

Breaks in Dates

Saar, June 13, 2004 - 11:27 pm UTC

Tom,

I have 2 tables and the structure is enclosed...


Table 1 : COST_BREAKS

Structure :

from_date date,
to_date date,
cost number(13,2)

Data :

From_Date   To_Date     Cost
----------  ----------  ----
01/04/2004  19/06/2004   800
20/06/2004  31/07/2004  1100
01/08/2004  31/03/2005   900


Table 2 : PRICE_BREAKS

Structure :

From_Date date,
To_Date date,
Price Number(13,2)


Data

From_Date   To_Date     Price
----------  ----------  -----
02/05/2004  22/06/2004   1450
01/06/2004  15/07/2004   1750
16/07/2004  31/03/2005   1650



Output after combining the two table values with date breaks...

The band from_date (01/04/2004) and to_date (31/10/2004) will be passed as parameters, and I should get the date breaks along with cost and price like this.

Output :-


From_Date   To_Date     Price  Cost
----------  ----------  -----  ----
01/04/2004  01/05/2004  Nil     800
02/05/2004  31/05/2004  1450    800
01/06/2004  19/06/2004  1450    800
20/06/2004  22/06/2004  1450   1100
23/06/2004  15/07/2004  1750   1100
16/07/2004  31/07/2004  1650   1100
01/08/2004  31/08/2004  1650    900


Your advice will be valuable

Tom Kyte
June 14, 2004 - 7:44 am UTC

I ignored this on the other page (what this had to do with export, I'll never figure out)

but since you put it here as well, I feel compelled to point out something.

Maybe anyone else reading this can help *me* out and let me know how this could be more clear:

http://asktom.oracle.com/pls/ask/f?p=4950:9:::NO:9:F4950_P9_DISPLAYID:127412348064

this "followup" neither

a) applies to the original question
b) supplies the basic information required (create table, inserts)

I'm at a loss as to how to make it "more clear"?

Saar, June 14, 2004 - 9:01 am UTC

Create Table cost_breaks
( cost_id      Number,
  from_date    date,
  to_date      date,
  cost         number(13,2)
);


Insert Into cost_breaks Values (120,to_date('01-APR-04'),to_date('19-JUN-04'),800);
Insert Into cost_breaks Values (121,to_date('20-JUN-04'),to_date('31-JUL-04'),1100);
Insert Into cost_breaks Values (122,to_date('01-AUG-04'),to_date('31-MAR-05'),900);

Create Table price_breaks
( price_id     Number,
  from_date    date,
  to_date      date,
  cost         number(13,2)
);

Insert Into price_breaks Values (131,to_date('02-MAY-04'),to_date('22-JUN-04'),1450);
Insert Into price_breaks Values (132,to_date('01-JUN-04'),to_date('15-JUL-04'),750);
Insert Into price_breaks Values (133,to_date('16-JUL-04'),to_date('31-MAR-05'),1650);


COMMIT;

------------------------------------------------------------------------------------

SQL> SELECT * FROM COST_BREAKS;

   COST_ID FROM_DATE   TO_DATE                COST
---------- ----------- ----------- ---------------
       120 01/04/2004  19/06/2004           800.00
       121 20/06/2004  31/07/2004          1100.00
       122 01/08/2004  31/03/2005           900.00

SQL> SQL> SELECT * FROM PRICE_BREAKS;

  PRICE_ID FROM_DATE   TO_DATE                COST
---------- ----------- ----------- ---------------
       131 02/05/2004  22/06/2004          1450.00
       132 01/06/2004  15/07/2004           750.00
       133 16/07/2004  31/03/2005          1650.00
       

I have to pass 2 date bands. One is '01-MAR-04' and the other is '31-OCT-04'. Now I have to produce an output
with date breaks from both the tables... like this:

From_Date    To_Date        Price    Cost
---------    -------        ----    -----
01/04/2004    01/05/2004              800
02/05/2004    31/05/2004    1450      800
01/06/2004    19/06/2004    1450      800
20/06/2004    22/06/2004    1450      1100
23/06/2004    15/07/2004    1750      1100
16/07/2004    31/07/2004    1650      1100
01/08/2004    31/08/2004    1650      900

Rgrd 

Tom Kyte
June 14, 2004 - 10:45 am UTC

cool -- unfortunately you understand what you want, but it is not clear to me what you want. but it looks a lot like "a procedural output in a report", not a SQL query.

Also, still not sure what this has to do with "getting rows N thru M from a result set"?

but you will want to write some code to generate this, I think I see what you want (maybe), and it's not going to be done via a simple query.

how to get a fixed no of rows

s devarshi, June 21, 2004 - 8:16 am UTC

Tom
I have a table t1(name, marks). A name can appear many times. I want to select 2 names with their top ten marks arranged in descending order. Can it be done in SQL?
I can get all the rows (select name, mark from t1 where name in (a, b) order by a||b;)

Devarshi


Tom Kyte
June 21, 2004 - 9:29 am UTC

select name, mark, rn
  from (select name, mark,
               row_number() over (partition by name order by mark desc) rn
          from t
         where name in ( a, b )
       )
 where rn <= 10;


also, -- read about the difference between row_number, rank and dense_rank.

suppose name=a has 100 rows with the "top" mark

row_number will assign 1, 2, 3, 4, .... to these 100 rows and you'll get 10 "random" ones.

rank will assign 1 to the first 100 rows (they are all the same rank) and 101 to the second and 102 and so on. so, you'll get 100 rows using rank.

dense_rank will assign 1 to the first 100 rows, 2 to the second highest and so on. with dense_rank you'll get 100+ rows....
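
To see the three side by side (a sketch against the t1 table above; note the descending order so that 1 means "top"):

select name, mark,
       row_number() over (partition by name order by mark desc) rn,
       rank()       over (partition by name order by mark desc) rnk,
       dense_rank() over (partition by name order by mark desc) drnk
  from t1;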

Pseudo code

User, July 13, 2004 - 5:34 pm UTC

Hi Tom,
I have received pseudo code from a non-oracle user and wanted to convert it to a SQL query. Pls see below

Get from table research_personnel list of all center= cder co-pi
loop over list:
if emp_profile_key listed -> check in skills db for center
if from cder -> delete from research_personnel
else -> move to table resform_collab
else -> check if center specified in research_personnel
if center is cder -> delete from research_personnel
else -> move to table resform_collab
move to resform_collab:
insert into resform_collab:
resform_basic_id (same)
collab_name (research_personnel.fname + " " +
research_personnel.lname) collab_center ("FDA/" + research_personnel.center + "/" + research_personnel.office + "/" + research_personnel.division + "/" + research_personnel.lab)
delete from research_personnel

Any guidance would be appreciated.


Tom Kyte
July 13, 2004 - 8:07 pm UTC

logic does not make sense.

you have else/else with no ifs

if from cder -> ...
else -> move ... (ok)
else -> ?????? how do you get here?

SQL query

A reader, July 14, 2004 - 9:55 am UTC


Tom,
Please see this.


get all CDER co-pi:

List1=
Select * from researchnew.research_personnel,researchnew.resform_basic
where researchnew.pi_type=2
and researchnew.resp_center='CDER'
and researchnew.resform_basic.resform_basic_id=researchnew.research_personnel.resform_basic_id

Loop over List1:
_________________
if we have List1.empprofkey:

level1 =
Select level1 from expertise.fda_aries_data
Where expertise.fda_aries_data.emp_profile_key = List1.empprofkey

if level1 is CDER:
select * from researchnew.research_personnel
where researchnew.pi_type=2 and researchnew.resp_center='CDER'and researchnew.resform_basic.resform_basic_id=researchnew.research_personnel.resform_basic_id and expertise.fda_aries_data.emp_profile_key = research_personnel.empprofkey


List1.id

else: insert into resform_collab:
collab_name= emp_first_name + " " + emp_last_name
collab_center = "FDA/" + org_level_1_code + "/"+ org_level_2_code + "/"+ org_level_3_code + "/"+ org_level_4_code
else:
if researchnew.research_personnelcenter is CDER:
delete from researchnew.research_personnel
where List1.id

else: insert to resform_collab:
collab_name= lname + " " + fname
collab_center = "FDA/" + center + "/"+ office + "/"+ division + "/"+ lab

Tom Kyte
July 14, 2004 - 11:49 am UTC

don't understand the need or use of the second select in there? seems to be the same as the first?

level1 assignment could be a join in the main driving query (join 3 tables together)

now, once 3 tables are joined, you can easily:

insert into resform_collab
select decode( level1, cder, then format data one way, else format it another way) * from these three tables;

and then

delete from researchnew where key in (select * from these three tables where level1 = cder);


you are doing a three table join, if level1 = cder then format columns one way and insert into resform_collab, else format another way and do insert. then delete rows where level1=cder.
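
Just to illustrate the shape of that (a sketch against made-up tables src(id, level1, a, b) and dst(id, txt) -- not the poster's real schema):

insert into dst (id, txt)
select s.id,
       decode( s.level1, 'CDER', s.a || '/' || s.b,   -- one format for cder
                                 s.b || '/' || s.a )  -- another format otherwise
  from src s;

delete from src
 where level1 = 'CDER';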

SQL Query

A reader, July 14, 2004 - 12:53 pm UTC

Tom,
Your answer cleared it up a little bit. Could you please put them together as a SQL query? I haven't done much complex SQL stuff but am in the learning process.

Thanks for all your help.

Tom Kyte
July 14, 2004 - 9:59 pm UTC

i sort of did? just join -- two sql statements...

rows current before and after when ordered by date

A reader, July 14, 2004 - 3:32 pm UTC

Hello Sir.

Given an ID, type and a start date,
I need to get all rows (after arranging in ascending order of start date) having:
1) the above id, type, start date

and
2) the row or set of rows with a start date earlier than the one given above (just the one closest date)

and
3) the row or set of rows with a start date after the one given above (just the one closest date)

example

for id = 1 type = A and start date = 1/17/1995

the output must be:
ID  TYPE  START_DATE  END_DATE
--  ----  ----------  ---------
1   A     2/11/1993   1/16/1995
1   A     2/11/1993   1/16/1995
1   A     1/17/1995   1/19/1996
1   A     1/17/1995   1/19/1996
1   A     1/20/1996   1/16/1997

My soln works but I think it's terrible.

Can we have a complete view and then just give it this id, type and date and get the above result?
My soln needs to generate a dynamic query, so I can't just put a where clause on a view.
Any better soln?

I tried using dense_rank:

SELECT *
  FROM (SELECT DENSE_RANK () OVER (PARTITION BY ID, TYPE ORDER BY start_date) rn,
               t.*
          FROM td t) p
 WHERE ID = 1
   AND TYPE = 'A'
   AND EXISTS (
         SELECT NULL
           FROM (SELECT DENSE_RANK () OVER (PARTITION BY ID, TYPE ORDER BY start_date) rn,
                        s.*
                   FROM td s) q
          WHERE q.start_date = TO_DATE ('1/17/1995', 'MM/DD/YYYY')
            AND q.ID = p.ID
            AND q.TYPE = p.TYPE
            AND q.rn BETWEEN (p.rn - 1) AND (p.rn + 1))
 ORDER BY ID, TYPE, rn
RN  ID  TYPE  START_DATE  END_DATE
--  --  ----  ----------  ---------
 3  1   A     2/11/1993   1/16/1995
 3  1   A     2/11/1993   1/16/1995
 4  1   A     1/17/1995   1/19/1996
 4  1   A     1/17/1995   1/19/1996
 5  1   A     1/20/1996   1/16/1997


CREATE TABLE TD
(
ID VARCHAR2(15 BYTE) NOT NULL,
TYPE VARCHAR2(1 BYTE),
START_DATE DATE,
END_DATE DATE
)
LOGGING
NOCACHE
NOPARALLEL;

INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/11/1987 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/07/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/08/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '02/10/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '02/11/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/16/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/17/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/19/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/20/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/16/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/17/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/15/1998 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/12/2004 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), NULL);
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '01/13/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '10/30/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '04/06/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/12/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '09/13/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/12/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'A', TO_Date( '04/06/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/12/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'A', TO_Date( '09/13/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/12/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'A', TO_Date( '01/13/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '10/30/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '01/13/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '10/30/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '04/06/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/12/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '09/13/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/12/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/11/1987 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/07/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/08/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '02/10/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '02/11/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/16/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/17/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/19/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', NULL, NULL);
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', NULL, NULL);
COMMIT;

Tom Kyte
July 15, 2004 - 11:20 am UTC

ops$tkyte@ORA9IR2> select * from td
  2  where id = '1'
  3    and type = 'A'
  4    and start_date in
  5    ( to_date( '1/17/1995', 'mm/dd/yyyy' ),
  6      (select min(start_date)
  7         from td
  8        where id = '1' and type = 'A'
  9          and start_date > to_date( '1/17/1995', 'mm/dd/yyyy' )),
 10      (select max(start_date)
 11         from td
 12        where id = '1' and type = 'A'
 13          and start_date < to_date( '1/17/1995', 'mm/dd/yyyy' )) )
 14   order by start_date
 15  /
 
ID              T START_DAT END_DATE
--------------- - --------- ---------
1               A 11-FEB-93 16-JAN-95
1               A 11-FEB-93 16-JAN-95
1               A 17-JAN-95 19-JAN-96
1               A 17-JAN-95 19-JAN-96
1               A 20-JAN-96 16-JAN-97


is one way... 

What if

A reader, July 15, 2004 - 11:45 am UTC

Thanx Sir for your answer.
What if I were to extend this to, say, 2 dates prior to and after the given date? Or N dates prior to and after the given date.

In my bad analytic soln I would just change it to

q.rn between (p.rn - N) and (p.rn + N)

Any suggestions?


Tom Kyte
July 15, 2004 - 1:30 pm UTC

in ( select to_date( ... ) from dual
     union all
     select start_date
       from (select distinct start_date
               from td
              where id = .. and type = ...
                and start_date <= your_date
              order by start_date desc)
      where rownum <= 2 )
union all .....

just generate the sets of dates you are interested in.
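
Concretely, against the td table above, "2 dates prior and 2 after 1/17/1995" might look like this (a sketch; strict inequalities keep the given date from being picked up twice):

select *
  from td
 where id = '1'
   and type = 'A'
   and start_date in
       ( select to_date('1/17/1995','mm/dd/yyyy') from dual
         union all
         select start_date
           from (select distinct start_date
                   from td
                  where id = '1' and type = 'A'
                    and start_date < to_date('1/17/1995','mm/dd/yyyy')
                  order by start_date desc)
          where rownum <= 2
         union all
         select start_date
           from (select distinct start_date
                   from td
                  where id = '1' and type = 'A'
                    and start_date > to_date('1/17/1995','mm/dd/yyyy')
                  order by start_date asc)
          where rownum <= 2 )
 order by start_date;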

But Nulls

A reader, July 17, 2004 - 9:21 pm UTC

Thanx Sir for your help.
The soln will not work for nulls; maybe I need to nvl with sysdate, as there are a few in the test data.

Also, what if we want to return ranges? For this sample
start_date
1/1/1990
null
1/1/1991
null
null
1/1/1992
1/1/1992
1/1/1993

the nulls will be grouped together.
For example, for 1992 it should return
null
1/1/1992
1/1/1992
1/1/1993
How to get that?


Tom Kyte
July 18, 2004 - 12:00 pm UTC

huh?

does not compute, not understanding what you are asking.


you seem to be presuming that the null row "has some position in the table that is meaningful".


that null doesn't sort after 1991 and before 1992 -- rows have no "positions" in a table. You seem to be ascribing attributes of a flat file to rows in a table, and you cannot.

SQL Query

A reader, July 19, 2004 - 2:53 pm UTC

Tom,
I tried to come up with a sql query to perform the insert and delete as you outlined here, but was not able to succeed.
=================================================
level1 assignment could be a join in the main driving query (join 3 tables
together)

now, once 3 tables are joined, you can easily:

insert into resform_collab
select decode( level1, cder, then format data one way, else format it another
way) * from these three tables;

and then

delete from researchnew where key in (select * from these three tables where
level1 = cder);


you are doing a three table join, if level1 = cder then format columns one way
and insert into resform_collab, else format another way and do insert. then
delete rows where level1=cder
=======================================

Could you please explain this using the emp and dept tables or your own example tables so that I can duplicate it?
Thanks a lot.

Tom Kyte
July 19, 2004 - 4:30 pm UTC

you have a three table join here. can you get that far? if not, no example against emp/dept is going to help you.

Mr Parag - "Mutual Respect" - You should learn how to?

Reji, July 28, 2004 - 6:51 pm UTC

You might change this to "MR" - Tom is 100% right. I don't understand why you got really upset with his response. You should check your BP - not Bharat Petroleum, Blood Pressure.

You could have taken his response in a very light way but at the same time you should have understood why he said that.

Please behave properly Sir.

Tom:

Thanks for spending your personal time to help 100s of software engineers around the globe. We all really appreciate your time and effort.

limiting takes longer

v, August 03, 2004 - 8:23 pm UTC

My original query takes about 1 second to execute. It involves joining 3 tables and a lot of conditions must be met. When I ran the same query with your example to limit the range of records to N through M, it took 50 seconds to execute.

I noticed a few other users have posted here concerning a performance issue when limiting rows. Obviously there is something misconfigured on our end because the majority of users are happy here. :)

I noticed when I take out the last WHERE clause, "where rnum >= MIN_ROWS", the query executes in 1 second. I also tried changing the clause to "where rnum = 1000", and that also takes a tremendously long time.

Any pointers?


Tom Kyte
August 03, 2004 - 8:42 pm UTC

show us the queries and explain plans (autotrace traceonly explain is sufficient)

and a tkprof of the same (that actually fetched the data)

thanks

sriram, August 05, 2004 - 4:34 am UTC

Hei.. it was pretty useful... Not only this -- I have cleared up many things on this site. This site is really gr8

Does "rownum <= MAX_ROWS" give any performance improvment?

A reader, August 05, 2004 - 9:59 am UTC

Dear Tom,

In terms of performance, is there any difference between Query (A) and (B)?

A)
select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS
/


B)
select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
/


Tom Kyte
August 05, 2004 - 1:03 pm UTC

sure, if the (b) returns a billion rows and (a) returns 5 -- (a) will be faster :)

but we call that a top-n query and yes there are top-n optimizations that makes (a) faster and less expensive to perform

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:127412348064,#PAGEBOTTOM



Is the URL correct?

Sami, August 06, 2004 - 10:00 am UTC

Tom,
The URL which you have given is pointing to the same page.

Tom Kyte
August 06, 2004 - 10:21 am UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:127412348064#3282021148551

thanks -- should be pointing UP in this page :)

In our web application, paging sql is strange

Steven, August 07, 2004 - 1:59 am UTC

Hello, I have a question about sql paging --- getting the top Max--Min rows from an order by inner sql.

We have a table app_AssetBasicInfo(ID Number Primary key, Title varchar2(255), CategoryID number not null, Del_tag not null, CreateDate Date not null, ...);

CategoryID has 3 distinct values and del_tag has 2 distinct values; they are very skewed. I gathered statistics using method_opt=>'for columns CategoryID, del_tag size skewonly'.
And I have an index CATEGORYDELTAGCDATEID on app_Assetbasicinfo(CategoryID, del_tag, CreateDate desc, ID desc), and the physical table storage is also sorted by CategoryID, del_tag, CreateDate desc, ID desc.

The paging sql looks like this:

select *
  from (select table_a.*, rownum as my_rownum
          from (select title
                  from app_AssetBasicInfo
                 where app_AssetBasicInfo.CategoryID = 1
                   and Del_tag = 0
                   and CreateDate between &Date1 and &Date2
                 order by CreateDate desc, app_AssetBasicInfo.ID desc) table_a
         where rownum < &Max_Value)
 where my_rownum >= &Min_Value;

but it confuses me very much. Please see these sql_trace results:
********************************************************************************

select table_a.*, rownum as my_rownum
  from (select title
          from app_AssetBasicInfo
         where app_AssetBasicInfo.CategoryID = 2
           and Del_tag = 0
         order by CreateDate desc, app_AssetBasicInfo.ID desc) table_a
 where rownum < 20

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.44          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.00       0.00          0          8          0          19
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.03       0.44          0          8          0          19

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63

Rows     Row Source Operation
-------  ---------------------------------------------------
     19  COUNT STOPKEY
     19   VIEW
     19    TABLE ACCESS BY INDEX ROWID APP_ASSETBASICINFO
     19     INDEX RANGE SCAN CATEGORYDELTAGCDATEID (object id 33935)

********************************************************************************

select *
  from (select table_a.*, rownum as my_rownum
          from (select title
                  from app_AssetBasicInfo
                 where app_AssetBasicInfo.CategoryID = 1
                   and Del_tag = 0
                 order by CreateDate desc, app_AssetBasicInfo.ID desc) table_a
         where rownum < 20)
 where my_rownum >= 0

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.26       0.49          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      1.81       1.90          0      19523          0          19
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      2.07       2.40          0      19523          0          19

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63

Rows     Row Source Operation
-------  ---------------------------------------------------
     19  VIEW
     19   COUNT STOPKEY
     19    VIEW
     19     SORT ORDER BY STOPKEY
 482147      TABLE ACCESS BY INDEX ROWID APP_ASSETBASICINFO
 482147       INDEX RANGE SCAN CATEGORYDELTAGCDATEID (object id 33935)

The INDEX RANGE SCAN returned 482147 rows -- effectively a full index scan.


I discovered that when I wrap an outer select around it, it gets slow and consumes many more consistent gets.

I also rebuilt the index CATEGORYDELTAGCDATEID with compress 2 and tried the /*+ first_rows */ hint, but the result is the same.

But when I use ROWID-based paging sql, it runs well -- though it cannot support table joins.
************************************************************
select title from app_AssetBasicInfo
where rowid in
( select rid from
( select rownum rno,rowid rid from
(select rowid FROM app_AssetBasicInfo WHERE
app_AssetBasicInfo.CategoryID=1 AND Del_tag=0
order by CreateDate desc,app_AssetBasicInfo.ID DESC
) where rownum <= 20
) where rno >= 0
)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.15       0.15          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.01       0.00          0         23          0          20
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.17       0.15          0         23          0          20

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63

Rows     Row Source Operation
-------  ---------------------------------------------------
     20  NESTED LOOPS
     20   VIEW
     20    SORT UNIQUE
     20     COUNT STOPKEY
     20      VIEW
     20       INDEX RANGE SCAN CATEGORYDELTAGCDATEID (object id 33971)
     20   TABLE ACCESS BY USER ROWID APP_ASSETBASICINFO

************************************************************

I want to know why the sql with the outer select does a much bigger index range scan than the sql with no outer wrapper.


I am looking forward for your reply.

Thank you very much !

Steven



Tom Kyte
August 07, 2004 - 10:15 am UTC

for paging queries, recommend you use first_rows -- you always want the index pretty much, since you want to get rows 1..10 ASAP. 11..20 should take just a tad longer and so on.

/*+ FIRST_ROWS */
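
Combined with the pagination template from the top of this thread, the hinted form would be (a sketch; :min and :max are bind variables):

select /*+ FIRST_ROWS */ *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= :max )
 where rnum >= :min
/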

How would you go about this?

Brian McGinity, August 15, 2004 - 3:58 pm UTC

Suppose SCOTT.EMP had 300,000 rows and you needed to do this type of pagination from a search:

1. User inputs an ename to search.
2. If the ename is found in EMP then show the result (see result set description below).
3. If the ename is not found then chop off the last letter in the search criteria and try again.

Once found, the result set needs to show the 20 enames sorted alphabetically before the match and the 20 enames after the match. The result has a total of 41 names sorted descending with the closest matching record in the middle.





Tom Kyte
August 16, 2004 - 8:17 am UTC

"closest matching record" in this case is ambigous since the equality could return thousands of records to begin with. that'd be my first problem - what means 'closest'

it'd be something like:

with q
as
(select ename
   from (select ename
           from emp
          where ename in ( :ename,
                           case when :l > 1 then substr( :ename, 1, :l-1 ) end,
                           case when :l > 2 then substr( :ename, 1, :l-2 ) end,
                           case when :l > 3 then substr( :ename, 1, :l-3 ) end,
                           ...
                           case when :l > N then substr( :ename, 1, :l-N ) end )
          order by length(ename) desc )
  where rownum = 1 )
( select *
    from (select * from emp
           where ename <= (select ename from q) order by ename desc )
   where rownum <= 21 )
union
( select *
    from ( select * from emp
            where ename >= (select ename from q) order by ename asc)
   where rownum <= 21 )
order by ename;


subquery q gets the "ename of interest"
the first branch of the union gets it and the 20 before
the second branch gets it and the 20 after

union does sort distinct which removes the duplicate.
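
To make that sketch concrete: :l would presumably be bound to length(:ename), so each CASE branch chops one more letter off the end of the search string. A minimal two-fallback version of the "q" part, under that assumption (the search string 'SMITHX' is made up):

variable ename varchar2(10)
variable l number
exec :ename := 'SMITHX'; :l := length(:ename);

select ename
  from ( select ename
           from emp
          where ename in ( :ename,
                case when :l > 1 then substr( :ename, 1, :l-1 ) end,
                case when :l > 2 then substr( :ename, 1, :l-2 ) end )
          order by length(ename) desc )
 where rownum = 1;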

BAGUS SEKALI (PERFECT)

David, Raymond, September 01, 2004 - 11:47 pm UTC

I have been looking for a solution to my problem
and finally I got it...

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

Absolutely... this is one of the most useful queries
in Oracle SQL

again...thanks TOM...

Pagination ordered by a varchar2 column ?

Kim Berg Hansen, October 13, 2004 - 7:08 am UTC

Hi, Tom

I'm trying to use your pagination methods for a log-system I'm developing (Oracle 8.1.7.4.)

But I can't always get Oracle to use the trick with index scanning to make this speedy. Seems to me it only works with dates/numbers and not with varchar2s?


I have this test-table :

SQL> create table testlog
  2  (
  3      logdate        date          not null,
  4      logseq           integer          not null,
  5      logdmltype     varchar2(1)    not null,
  6      loguser        varchar2(10)   not null,
  7      logdept        varchar2(10)   not null,
  8      logip           raw(4)          not null,
  9      recordid       integer          not null,
 10      keyfield       varchar2(10)   not null,
 11      col1_old       varchar2(10),
 12      col1_new       varchar2(10),
 13      col2_old       number(32,16),
 14      col2_new       number(32,16)
 15  );

With these test-data :

SQL> insert into testlog
  2  select
  3  last_ddl_time logdate,
  4  rownum logseq,
  5  'U' logdmltype,
  6  substr(owner,1,10) loguser,
  7  substr(object_type,1,10) logdept,
  8  hextoraw('AABBCCDD') logip,
  9  ceil(object_id/100) recordid,
 10  substr(object_name,1,10) keyfield,
 11  substr(subobject_name,1,10) col1_old,
 12  substr(subobject_name,2,10) col1_new,
 13  data_object_id col2_old,
 14  object_id col2_new
 15  from all_objects
 16  where rownum <= 40000;

40000 rows created.


Typical ways to find data would be "by date", "by user", "by recordid", "by keyfield" :

SQL> create index testlog_date on testlog (
  2      logdate, logseq
  3  );

SQL> create index testlog_user on testlog (
  2      loguser, logdate, logseq
  3  );

SQL> create index testlog_recordid on testlog (
  2      recordid, logdate, logseq
  3  );

SQL> create index testlog_keyfield on testlog (
  2      keyfield, logdate, logseq
  3  );

(Note all indexes are on "not null" columns - that's a requirement for the trick to work, right?)


Gather statistics :

SQL> begin dbms_stats.gather_table_stats('XAL_SUPERVISOR','TESTLOG',method_opt=>'FOR ALL INDEXED COLUMNS SIZE 1',cascade=>true); end;
  2  /


And then fire some test statements for pagination :

********************************************************************************

Try "by date" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by logdate, logseq
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.02       0.01          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX FULL SCAN (object id 190604)   <--TESTLOG_DATE

Works dandy.

********************************************************************************

Try "by date" backwards :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by logdate desc, logseq desc
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.01          1          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          1          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX FULL SCAN DESCENDING (object id 190604)   <--TESTLOG_DATE

Works dandy backwards too.

********************************************************************************

Try "by user" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by loguser, logdate, logseq
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.15       0.24        161        361          6           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.15       0.25        161        361          6           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     SORT ORDER BY STOPKEY 
  40000      TABLE ACCESS FULL TESTLOG 

Hmmm... Not so dandy with varchar2 column ?

********************************************************************************

Try "by recordid" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by recordid, logdate, logseq
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX FULL SCAN (object id 190606)   <--TESTLOG_RECORDID

Works dandy with a number column.

********************************************************************************

Try "last 5 for a particular recordid" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      where recordid = 1000
      order by recordid desc, logdate desc, logseq desc
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.02          2          6          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.02          2          6          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX RANGE SCAN DESCENDING (object id 190606)   <--TESTLOG_RECORDID

Number column again rocks - it does a descending range scan and stops when it has 5 records.

********************************************************************************

Try "last 5 for a particular user" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      where loguser = 'SYS'
      order by loguser desc, logdate desc, logseq desc
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.12          5       2373          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.09       0.13          5       2373          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     SORT ORDER BY STOPKEY 
   8706      TABLE ACCESS BY INDEX ROWID TESTLOG 
   8707       INDEX RANGE SCAN (object id 190605)   <--TESTLOG_USER

Again the varchar2 column makes it not so perfect :-(

********************************************************************************

One thing I notice is this :

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     SORT ORDER BY STOPKEY   <-- This line only appears when using a varchar2 column?
         (table access one way or the other)

For numbers/dates the COUNT STOPKEY can halt the table/index access when enough records have been found.
For varchar2s the SORT ORDER BY STOPKEY seems to disable that trick?

Why does it want to SORT ORDER BY STOPKEY when it's a varchar2?
It's already sorted in the index (same as with the numbers/dates)?
What am I doing wrong?


As always - profound thanks for all your help to all of us.


Kim Berg Hansen

Senior System Developer
T.Hansen Gruppen A/S
 

Tom Kyte
October 13, 2004 - 8:34 am UTC

what is your character set?

817 for me does:

select /*+ FIRST_ROWS */ * from (
select /*+ FIRST_ROWS */ p.*, rownum r from (
select /*+ FIRST_ROWS */ t.*
from testlog t
order by loguser, logdate, logseq
) p
where rownum <= 5
) where r >= 1

with WE8ISO8859P1


is this an "nls_sort()" issue? (eg: the binary sort isn't 'sorted' in your character set and we'd need an FBI perhaps?)

Yes !!!

Kim Berg Hansen, October 13, 2004 - 9:00 am UTC

I'm amazed as usual.

You have a rare gift for immediately noticing those details that should have been obvious to us blind folks raving in the dark ;-)

My character set is WE8ISO8859P1 - no problem there.

My database has NLS_SORT=BINARY.

The client I used for testing/development had NLS_SORT=DANISH.

When I change the client to NLS_SORT=BINARY - everything works as it's supposed to do...

Thanks a million, Tom.



Tom Kyte
October 13, 2004 - 9:12 am UTC

a function based index could work for them..... creating the index on nls_sort(....) and ordering by that.
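
For the record, the built-in function is NLSSORT, so such an index might look like this (a sketch, with a hypothetical index name; note that in the 8i timeframe a function-based index also requires the cost-based optimizer and, among other settings, query_rewrite_enabled=true):

create index testlog_user_danish on testlog
( nlssort(loguser,'NLS_SORT=DANISH'), logdate, logseq );

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by nlssort(loguser,'NLS_SORT=DANISH'), logdate, logseq
   ) p
   where rownum <= 5
) where r >= 1
/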

No need...

Kim Berg Hansen, October 13, 2004 - 9:24 am UTC

I just checked - the production clients (the ERP system) do have NLS_SORT=BINARY.

It was simply the registry settings here on my development PC that weren't correct... so the solution was very simple :-)


Continued pagination troubles...

Kim Berg Hansen, October 15, 2004 - 8:19 am UTC

Hi again, Tom

Continuation of my question from a couple of days ago...

I'm still working on the best way of getting Oracle to use index scans in pagination queries.
I have no problem anymore with the simpler queries from my last question to you.

But suppose I wish to start the pagination from a particular point in a composite index.
A good example is this index :

SQL> create index testlog_user on testlog (
  2      loguser, logdate, logseq
  3  );

(Table, indexes and data for these tests are identical to the last question I gave you.)

********************************************************

Example 1:

It works fine if I do pagination from a starting point in the index where I only use the first column of the index. For example, start the pagination at the point where loguser = 'SYS':


SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where loguser >= 'SYS'
  6         order by loguser, logdate, logseq
  7      ) p
  8      where rownum <= 5
  9  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        01-03-27       4133 BOOTSTRAP$                                       
SYS        01-03-27       5044 I_CCOL1                                          
SYS        01-03-27       5045 I_CCOL2                                          
SYS        01-03-27       5046 I_CDEF1                                          
SYS        01-03-27       5047 I_CDEF2                                          


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX RANGE SCAN (object id 191422) <--Index TESTLOG_USER

Perfect scan of the index and stop at the number of rows I wish to paginate.

********************************************************

Example 2:

Now consider when I wish to use a starting point with all three columns of the composite index. For example, start the pagination at the point where loguser = 'SYS', logdate = '31-08-2004 11:22:33' and logseq = 5799, and then just scan the index forward 5 records from that point.
The best SQL I can come up with is something like this:


SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where ( loguser = 'SYS' and
                   logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and
                   logseq >= 5799
                 )
  6            or ( loguser = 'SYS' and
                   logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS')
                 )
  7            or ( loguser > 'SYS' )
  8         order by loguser, logdate, logseq
  9      ) p
 10      where rownum <= 5
 11  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        04-08-31       5799 V_$PROCESS                                       
SYS        04-08-31       5827 V_$SESSION                                       
SYS        04-08-31       5857 V_$STATNAM                                       
SYSTEM     01-03-27      16877 AQ$_QUEUES                                       
SYSTEM     01-03-27      16878 AQ$_QUEUES                                       


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.10          0       6153          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.08       0.10          0       6153          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
  16848      INDEX FULL SCAN (object id 191422) <--Index TESTLOG_USER


Do you know another way to phrase that SQL so that Oracle understands I'm pinpointing a particular spot in the index and want it to scan forward from that point?

Conceptually, I think it should be no different for Oracle to start the range scan at a point defined by three column values in example 2 than at a point defined by only the first column value in example 1?

The trouble is how to express in the SQL language what I want done :-)

In "pseudo-code" I might be tempted to express it somewhat like :

   "where (loguser, logdate, logseq) >= ('SYS', '31-08-2004 11:22:33', 5799)"

...but that syntax is not recognized in SQL, alas ;-)

What do you think? Can I do anything to make the example 2 be as perfectly efficient as example 1?
 

Tom Kyte
October 15, 2004 - 11:52 am UTC

For example start the pagination at the point where loguser =
'SYS', logdate = '31-08-2004 11:22:33' and logseq = 5799, and then just scan the
index forward 5 records from that point.

I don't understand the concept of "start the pagination at the point"?


are you saying "ordered by loguser, logdate, logseq", starting with :x/:y/:z?

in which case, we'd need an index on those 3 columns in order to avoid getting ALL rows and sorting before giving you the first row.

Paraphrase of my previous review...

Kim Berg Hansen, October 18, 2004 - 4:24 am UTC

Hi again, Tom

Sorry if I don't "conceptualize" clearly - English ain't my native language :-) I'll try to paraphrase to make it clearer.

Test table, indexes and data used for this are taken from my review of October 13th under this question.

Specifically the index I'm trying to use (abuse? ;-) is this index:

SQL> create index testlog_user on testlog (
  2      loguser, logdate, logseq
  3  );

All three columns are NOT NULL, so all 40,000 records will be in the index.
Here's part of the data ordered by that index:

SQL> select p.*, rownum from (
  2      select loguser, logdate, logseq, keyfield
  3      from testlog t
  4      order by loguser, logdate, logseq
  5  ) p;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD       ROWNUM
---------- -------- ---------- ---------- ----------
OUTLN      01-03-27      17121 OL$                 1
OUTLN      01-03-27      17122 OL$HINTS            2

(... lots of rows ...)

PUBLIC     03-07-23      13812 TOAD_TABLE       8139
PUBLIC     03-07-23      13810 TOAD_SPACE       8140
SYS        01-03-27       4133 BOOTSTRAP$       8141 <== POINT A
SYS        01-03-27       5044 I_CCOL1          8142
SYS        01-03-27       5045 I_CCOL2          8143
SYS        01-03-27       5046 I_CDEF1          8144
SYS        01-03-27       5047 I_CDEF2          8145
SYS        01-03-27       5048 I_CDEF3          8146
SYS        01-03-27       5049 I_CDEF4          8147
SYS        01-03-27       5050 I_COBJ#          8148
SYS        01-03-27       5051 I_COL1           8149
SYS        01-03-27       5052 I_COL2           8150
SYS        01-03-27       5053 I_COL3           8151
SYS        01-03-27       5057 I_CON1           8152
SYS        01-03-27       5058 I_CON2           8153

(... lots of rows ...)

SYS        04-08-31       5799 V_$PROCESS      16844 <== POINT B
SYS        04-08-31       5827 V_$SESSION      16845
SYS        04-08-31       5857 V_$STATNAM      16846
SYSTEM     01-03-27      16877 AQ$_QUEUES      16847
SYSTEM     01-03-27      16878 AQ$_QUEUES      16848
SYSTEM     01-03-27      16879 AQ$_QUEUES      16849
SYSTEM     01-03-27      16880 AQ$_QUEUE_      16850
SYSTEM     01-03-27      16881 AQ$_QUEUE_      16851
SYSTEM     01-03-27      16882 AQ$_SCHEDU      16852
SYSTEM     01-03-27      16883 AQ$_SCHEDU      16853
SYSTEM     01-03-27      16884 AQ$_SCHEDU      16854
SYSTEM     01-03-27      16910 DEF$_TRANO      16855
SYSTEM     01-03-27      17113 SYS_C00745      16856
SYSTEM     01-03-27      17114 SYS_C00748      16857
SYSTEM     01-03-27      16891 DEF$_AQERR      16858
SYSTEM     01-03-27      16893 DEF$_CALLD      16859
SYSTEM     01-03-27      16894 DEF$_CALLD      16860
SYSTEM     01-03-27      16896 DEF$_DEFAU      16861
SYSTEM     01-03-27      16898 DEF$_DESTI      16862
SYSTEM     01-03-27      16900 DEF$_ERROR      16863

(... lots of rows ...)

XAL_TRYKSA 04-09-21      31307 ORDREPOSTI      39999
XAL_TRYKSA 04-09-30      31220 LAGERINDGA      40000

40000 rows selected.

I've marked two records - point A and point B - that I'll explain further down.


Now for the tricky part of the explanation...

The original pagination code that utilizes my index well calls for using something like this construct:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         order by loguser, logdate, logseq
  6      ) p
  7      where rownum <= :hirow
  8  ) where r >= :lorow;

(Which works perfectly after I corrected NLS_SORT on my development PC ;-)


When a user asks to see the records starting with "loguser = 'SYS'" and then wants to paginate forward 5 rows at a time "from that point on" - that's what I mean by "starting the pagination at point A".

I cannot use the statement above with :lorow = 8141 and :hirow = 8145 because that would require me to somehow find those two numbers first. To avoid that, I instead use this:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where loguser >= 'SYS'
  6         order by loguser, logdate, logseq
  7      ) p
  8      where rownum <= 5
  9  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        01-03-27       4133 BOOTSTRAP$                                       
SYS        01-03-27       5044 I_CCOL1                                          
SYS        01-03-27       5045 I_CCOL2                                          
SYS        01-03-27       5046 I_CDEF1                                          
SYS        01-03-27       5047 I_CDEF2                                          

This statement gives me "Page 1" in a "five rows at a time" pagination "starting at the point where loguser = 'SYS'" (point A). And this statement utilizes the index very efficiently indeed:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
      5   COUNT STOPKEY
      5    VIEW
      5     TABLE ACCESS BY INDEX ROWID TESTLOG
      5      INDEX RANGE SCAN (object id 191688)

When the user clicks to see "Page 2" (paginates forward), this statement is used:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where loguser >= 'SYS'
  6         order by loguser, logdate, logseq
  7      ) p
  8      where rownum <= 10
  9  ) where r >= 6;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        01-03-27       5048 I_CDEF3                                          
SYS        01-03-27       5049 I_CDEF4                                          
SYS        01-03-27       5050 I_COBJ#                                          
SYS        01-03-27       5051 I_COL1                                           
SYS        01-03-27       5052 I_COL2                                           

And it is quite efficient as well:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          6          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          0          6          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
     10   COUNT STOPKEY
     10    VIEW
     10     TABLE ACCESS BY INDEX ROWID TESTLOG
     10      INDEX RANGE SCAN (object id 191688)

So by this method I "pinpoint point A in the index" and paginate forward from that point... (I hope it's clear what I mean.)

The tricky part is when I wish to do exactly the same thing at point B !!!

This time I want to start at the point in the index (in the "order by" if you wish, but that's the same in this case) defined not just by the first column but by three columns. I want to start at the point where loguser = 'SYS' and logdate = '31-08-2004 11:22:33' and logseq = 5799 (point B) and paginate "forward in the index/order by".

I can come up with one way of defining a where-clause that will give me the rows from "point B" and forward using that order by:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where (loguser = 'SYS' and logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and logseq >= 5799)
  6            or (loguser = 'SYS' and logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS'))
  7            or (loguser > 'SYS')
  8         order by loguser, logdate, logseq
  9      ) p
 10      where rownum <= 5
 11  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        04-08-31       5799 V_$PROCESS                                       
SYS        04-08-31       5827 V_$SESSION                                       
SYS        04-08-31       5857 V_$STATNAM                                       
SYSTEM     01-03-27      16877 AQ$_QUEUES                                       
SYSTEM     01-03-27      16878 AQ$_QUEUES                                       

It gives me the correct 5 rows (page 1 of the pagination starting at point B), but it does not use the index efficiently - it full scans the index rather than doing a range scan:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.07          0       6153          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.09       0.08          0       6153          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
      5   COUNT STOPKEY
      5    VIEW
      5     TABLE ACCESS BY INDEX ROWID TESTLOG
  16848      INDEX FULL SCAN (object id 191688)

And when the user clicks "Page 2" this is what I try:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where (loguser = 'SYS' and logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and logseq >= 5799)
  6            or (loguser = 'SYS' and logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS'))
  7            or (loguser > 'SYS')
  8         order by loguser, logdate, logseq
  9      ) p
 10      where rownum <= 10
 11  ) where r >= 6;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYSTEM     01-03-27      16879 AQ$_QUEUES                                       
SYSTEM     01-03-27      16880 AQ$_QUEUE_                                       
SYSTEM     01-03-27      16881 AQ$_QUEUE_                                       
SYSTEM     01-03-27      16882 AQ$_SCHEDU                                       
SYSTEM     01-03-27      16883 AQ$_SCHEDU                                       

Which again gives the correct rows, but inefficiently:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.10          0       6153          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.09       0.11          0       6153          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
     10   COUNT STOPKEY
     10    VIEW
     10     TABLE ACCESS BY INDEX ROWID TESTLOG
  16853      INDEX FULL SCAN (object id 191688)


So my problem is:

When I "start my pagination at point A" - Oracle intelligently realizes that it can go to the index "at point A" and give me five rows by scanning the index from that point forward (or in the case of pagination to "Page 2": 10 rows forward and then only return the last 5 of those 10.) That is very efficient and rocks!

When I "start my pagination at point B"... I don't have a clear way of defining my where-clause, so that Oracle can realize "hey, this is the same as before, I can go to point B in the index and give him 5 rows by scanning forward from that point".


How can I write my where-clause in a way, so that Oracle has a chance to realize that it can do exactly the same thing with "point B" as it did with "point A"?


I'm sorry I write such long novels that you probably get bored reading through them :-) ... but that's the only way I can be clear about it.

I hope you can figure out some way to work around this full index scan and get a range scan instead...?!? I'm kinda stumped here :-)
 

Tom Kyte
October 18, 2004 - 8:48 am UTC

"I want to start at the point where loguser = 'SYS' and logdate 
= '31-08-2004 11:22:33' and logseq = 5799"


that is 'hard' -- if you just used the simple predicate, that would "skip around" in the table as it went from loguser value to loguser value.  Hence your really complex predicate:

  5         where (loguser = 'SYS' and logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and logseq >= 5799)
  6            or (loguser = 'SYS' and logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS'))
  7            or (loguser > 'SYS')

(as soon as you see an OR -- abandon all hope :)



so, basically, you are trying to treat the table as if it were a VSAM/ISAM file -- seek to key and read forward from key. A concept that is vaguely orthogonal to relational technology...

but what about this:


ops$tkyte@ORA9IR2> update testlog set logseq = rownum, logdate = add_months(sysdate,-12) where loguser = 'XDB';
 
270 rows updated.
 
<b>I wanted some data "after loguser=SYS logdate=31-aug-2004 logseq=5799" with smaller logdates and logseqs</b>


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index user_date_seq on testlog
  2  ( rpad(loguser,10) || to_char(logdate,'yyyymmddhh24miss') || to_char(logseq,'fm9999999999') );
 
Index created.

<b>we encode the columns you want to "seek through" in a single column.  numbers that are POSITIVE are easy -- you have to work a little harder to get negative numbers to encode "sortable"</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace view v
  2  as
  3  select testlog.*,
  4         rpad(loguser,10) || to_char(logdate,'yyyymmddhh24miss') || to_char(logseq,'fm9999999999') user_date_seq
  5    from testlog
  6  /
 
View created.

<b>I like the view, cuts down on typos in the query later...</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'TESTLOG', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable min number
ops$tkyte@ORA9IR2> variable max number
ops$tkyte@ORA9IR2> variable u varchar2(10)
ops$tkyte@ORA9IR2> variable d varchar2(15)
ops$tkyte@ORA9IR2> variable s number
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set linesize 121
ops$tkyte@ORA9IR2> set autotrace on explain
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec :min := 1; :max := 5; :u := 'SYS'; :d := '20040831112233'; :s := 5799
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from v t
  5         where user_date_seq >= rpad(:u,10) || :d || to_char(:s,'fm9999999999')
  6         order by user_date_seq
  7      ) p
  8      where rownum <= :max
  9  ) where r >= :min
 10  /
 
LOGUSER    LOGDATE       LOGSEQ KEYFIELD
---------- --------- ---------- ----------
SYS        02-SEP-04       6126 BOOTSTRAP$
SYS        02-SEP-04       6129 CCOL$
SYS        02-SEP-04       6144 CDEF$
SYS        02-SEP-04       6152 CLU$
SYS        02-SEP-04       6162 CON$
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=10 Card=997 Bytes=48853)
   1    0   VIEW (Cost=10 Card=997 Bytes=48853)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=10 Card=997 Bytes=35892)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'TESTLOG' (Cost=10 Card=997 Bytes=101694)
   5    4           INDEX (RANGE SCAN) OF 'USER_DATE_SEQ' (NON-UNIQUE) (Cost=2 Card=179)
 
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec :min := 14200; :max := 14205;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> /
 
LOGUSER    LOGDATE       LOGSEQ KEYFIELD
---------- --------- ---------- ----------
XDB        18-OCT-03        115 XDB$ENUM2_
XDB        18-OCT-03        116 XDB$ENUM_T
XDB        18-OCT-03        117 XDB$ENUM_V
XDB        18-OCT-03        118 XDB$EXTNAM
XDB        18-OCT-03        119 XDB$EXTRA_
XDB        18-OCT-03         12 DBMS_XDBZ
 
6 rows selected.
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=10 Card=997 Bytes=48853)
   1    0   VIEW (Cost=10 Card=997 Bytes=48853)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=10 Card=997 Bytes=35892)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'TESTLOG' (Cost=10 Card=997 Bytes=101694)
   5    4           INDEX (RANGE SCAN) OF 'USER_DATE_SEQ' (NON-UNIQUE) (Cost=2 Card=179)
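
A side note on the "negative numbers" caveat above: one common trick (a sketch, not part of Tom's transcript) is to shift the value by a constant so it can never be negative, and format it with '0' digits so leading zeros are kept -- plain string order then matches numeric order. Assuming logseq always stays above -1,000,000,000:

select to_char( -5 + 1000000000, 'fm0000000000' ) enc_minus_5,
       to_char(  7 + 1000000000, 'fm0000000000' ) enc_plus_7
  from dual;

-- gives '0999999995' and '1000000007' -- which sort, as strings,
-- in the same order as the numbers they encode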
 
 

 

Viable approach

Kim Berg Hansen, October 18, 2004 - 9:45 am UTC

Yes, I can use that method - particularly when wrapped in a view like that.

Probably I'll wrap the sorting encoding some more...

- function: sortkey(user, date, seq) return varchar2 (returning concatenated sorting string)
- index on sortkey(loguser, logdate, logseq)
- view: select t.*, sortkey(loguser, logdate, logseq) sortedkey from testlog t
- where clauses: select * from v where sortedkey >= sortkey(:user, :date, :seq)

...or something like that - along the lines of the sketch below.
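
A sketch of such a function (the name and parameters are from the list above; the body reuses Tom's concatenation format, and it must be declared DETERMINISTIC to be usable in a function-based index):

create or replace function sortkey( p_user in varchar2,
                                    p_date in date,
                                    p_seq  in number )
return varchar2
deterministic
as
begin
    -- same encoding as the user_date_seq index expression above
    return rpad(p_user,10)
        || to_char(p_date,'yyyymmddhh24miss')
        || to_char(p_seq,'fm9999999999');
end;
/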

Thanks for pointing me in the direction of another approach.

Often my trouble is that I'm used to thinking in terms of the Concorde XAL ERP system that sits "on top of" that Oracle base. In the XAL world there's nothing but simple tables and indexes are only on columns.

But then again in the XAL programming language one continually uses an index as a key and scans forward in this fashion.

I'm beginning (slowly but surely) to see the strengths of the "set-based" thinking I need to do in order to write good SQL (instead of the very much record-based thinking needed to write good XAL :-)...
...but one of the things that has always puzzled me is why the SQL language does NOT allow for using the composite indexes as key lookups in where clauses somehow... I mean those indexes are there and could profitably be used - the language just doesn't support it...

Oh, well - it's just one of those things that "just is", I guess. Perhaps I should try modifying MySQL to include that functionality :-)

Anyway, making the index a non-composite index is a viable approach - I can live with that.


anto, October 19, 2004 - 2:59 pm UTC

In Oracle, for my current session, I want to retrieve, say, only the first 10 rows of any select SQL each time (I don't want to add 'where rownum <= 10' to the query each time). Is there any way I can do this at the session level in Oracle, instead of adding the where rownum <= 10 condition each time in the query?

Tom Kyte
October 19, 2004 - 4:15 pm UTC

no, we always return what you query for - the client would either have to fetch the first ten and stop or you add "where rownum <= 10"
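
Client side, "fetch the first ten and stop" is just a counted fetch loop -- a sketch in PL/SQL (needs "set serveroutput on" to see the output):

declare
    cursor c is select ename from emp order by ename;
    l_ename emp.ename%type;
begin
    open c;
    for i in 1 .. 10 loop        -- fetch at most ten rows...
        fetch c into l_ename;
        exit when c%notfound;
        dbms_output.put_line( l_ename );
    end loop;
    close c;                     -- ...then simply stop fetching
end;
/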

A reader, October 19, 2004 - 4:36 pm UTC

Thanks, Tom, for confirming this.

fga

Michael, October 20, 2004 - 12:14 am UTC

I think you can always use fine grained access control (fga)
to append "automatically" a where clause to any table.
In your case "where rownum < 10". Search this site for fga or "fine grained access control". That'll do it.

Cheers

Tom Kyte
October 20, 2004 - 7:07 am UTC

whoa -- think about it.

sure, if all you do is "select * from t" -- something that simple (yet so very very very very very drastic) would work.

but --

select * from emp, dept where...

would become:

select * from ( select * from emp where rownum <= 10 ) emp,
( select * from dept where rownum <= 10 ) dept


suppose that finds 10 emps in deptno = 1000
and 10 depts 10, 20, 30, .... 100


no data found



Use Views?

Michael, October 21, 2004 - 3:48 am UTC

In that case why not simply create a view and predicate the view? Wouldn't you then have something like

select * from
(select * from emp,dept where emp.deptno=dept.deptno) <== View
where rownum < 100?

I think if you allow access to the data only through views (and not through tables) you overcome the problem you mentioned?

Tom Kyte
October 21, 2004 - 6:57 am UTC

What if you wanted deptno=10

select * from your_view where deptno=10

would have the where rownum done FIRST (get 100 random rows) and then return the ones from that 100 that are deptno=10 (perhaps NO rows)

no, views / FGA -- they are not solutions to this.

it seems there is a bug in 9.2.0.1.0 when getting rows from M to N

Steven, November 01, 2004 - 9:34 pm UTC

I think it's a bug.
[code]
SQL> select *from v$version;
BANNER
---------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production

SQL> create table tt nologging as select rownum rn,b.*from dba_objects b;
SQL> alter table tt  add primary key(rn) nologging;
SQL> create index ttidx on tt(objecT_type,created) nologging;
SQL> analyze table tt compute statistics;

SQL> select /*+ first_rows */*from  (select a.*,rownum as rr from (select *from
tt where object_type='TABLE' order by created) a where rownum<20)where rr>0;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=485 Card=1
          9 Bytes=3857)

   1    0   VIEW (Cost=485 Card=19 Bytes=3857)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=485 Card=1005 Bytes=190950)
   4    3         SORT (ORDER BY STOPKEY) (Cost=485 Card=1005 Bytes=89
          445)

   5    4           TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=484 Ca
          rd=1005 Bytes=89445)

   6    5             INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost
          =6 Card=1005)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        852  consistent gets
          0  physical reads
          0  redo size
       1928  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         19  rows processed

SQL> select /*+ first_rows */ a.*,rownum as rr from (select *from tt where objec
t_type='TABLE' order by created) a where rownum<20;
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=484 Card=1
          9 Bytes=190950)

   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=484 Card=1005 Bytes=190950)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=484 Card=1
          005 Bytes=89445)

   4    3         INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost=6 C
          ard=1005)

Statistics
----------------------------------------------------------
          3  recursive calls
          0  db block gets
         16  consistent gets
          0  physical reads
          0  redo size
       1928  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         19  rows processed

[/code]

but in 9.2.0.5.0 it's correct:

[code]
SQL> select *from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
PL/SQL Release 9.2.0.5.0 - Production
CORE    9.2.0.6.0       Production
TNS for 32-bit Windows: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

SQL> create table tt nologging as select rownum rn,b.*from dba_objects b;

SQL>  alter table tt  add primary key(rn) nologging;

SQL> create index ttidx on tt(objecT_type,created) nologging;

SQL> analyze table tt compute statistics;

SQL> select /*+ first_rows */ *from (select a.*,rownum as rr from (select *from
  2  tt where object_type='TABLE' order by created) a where rownum<20)where rr>0;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=64 Card=19
           Bytes=3857)
   1    0   VIEW (Cost=64 Card=19 Bytes=3857)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=64 Card=395 Bytes=75050)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=64 Card=
          395 Bytes=31995)

   5    4           INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost=3
           Card=395)
Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
         19  consistent gets
          0  physical reads
          0  redo size
       1907  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
         19  rows processed

SQL> select/*+ first_rows */ a.*,rownum as rr from (select *from
  2  tt where object_type='TABLE' order by created) a where rownum<20;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=64 Card=19
           Bytes=3610)
   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=64 Card=395 Bytes=75050)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=64 Card=39
          5 Bytes=31995)
   4    3         INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost=3 C
          ard=395)

Statistics
----------------------------------------------------------
          3  recursive calls
          0  db block gets
         17  consistent gets
          0  physical reads
          0  redo size
       1907  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         19  rows processed
[/code]

But I could not find any information about it on Metalink.
Hope this can help many people.

old question and DBMS_XMLQuery

A reader, November 18, 2004 - 1:21 pm UTC

Tom,
first, thanks a lot as usual!!!
The DBMS_XMLQuery package has two interesting procedures, setSkipRows and setMaxRows, that look like the perfect tool for getting a specific "data window" from the whole recordset while forming XML. Just wondering - is it using the same technique that you provided at the very beginning of this thread, or is there some additional magic behind the scenes? In other words - will it also get slower as setSkipRows increases?

Tom Kyte
November 18, 2004 - 2:19 pm UTC

alter session set sql_trace=true;
do it
tkprof it

and see :)

it is what I would have to do...

Thanks for the wonderful solution

san_mat, December 10, 2004 - 7:02 am UTC

Hi Tom,
as always you are Great !!
I got what I was looking for... every time I have a problem, I come to this site and you never disappoint, whatever the query...

Thanks again


Total rows

Rula, January 03, 2005 - 5:34 pm UTC

Naveen from India said

"The devlopers want me to get the total rows that the query return so that they can display that many pages. [...] This is two I/O calls to the database and two queries to be parsed to display a page. Is there any work around."

And you answered

"I have a very very very very simple solution to this problem. DON'T DO IT."

It is a funny answer, but we had a similar problem here, and that kind of answer was not possible (if we meant to keep our jobs...), so we found a workaround. Here it is:

select * from
(SELECT COUNT(1) OVER (PARTITION BY 1) total_rows,
row_number() OVER (ORDER BY <some criteria>) r_number
<the rest of your query>
) WHERE r_number BETWEEN :1 AND :2

So you get the first page along with the total number of rows. It is better than executing 2 queries and worse than ignoring the total, like Google does.

I hope it is useful for you.

Regards.
Rula.



Tom Kyte
January 03, 2005 - 10:35 pm UTC

I did not mean to be funny

I was *DEADLY* serious.

I wrote it in my book.

that

count(*) over ()

(slightly more meaningful and less confusing than "count(1) over (partition by 1)" -- what is UP with count(1)? what is count(1)? count(*) has meaning -- but I digress)

That trick is something hardware vendors will love you for, but not too many DBA's or people who care about performance.


You get the FIRST page for that query only AFTER resolving and getting to the VERY LAST PAGE.

(think about it -- how can you get the total row count without -- well -- counting the rows!!!!!)
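
For what it's worth, Rula's workaround written with the clearer count(*) over () has the same shape -- and the same cost caveat, since the whole result set still has to be built before total_rows is known (placeholders as in Rula's original):

select *
  from ( select count(*) over () total_rows,
                row_number() over (order by <some criteria>) r_number
         <the rest of your query>
       )
 where r_number between :1 and :2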

Row estimates

Andy, January 21, 2005 - 4:20 am UTC

Hi Tom,

Apologies if this is deemed off-topic! I'm trying to return the estimated cardinality of a query to the user (for use with pagination etc.) From what I've read in this thread, there are only two ways to do this: either with EXPLAIN PLAN, or by querying the v$ tables directly. I've decided to do the latter, so as to be able to take advantage of bind peeking and - if I've understood correctly - as it's a bit more efficient. So, I'm using a query like this:

select * from
(select b.*,rownum rnum
from (<main query>) b
where rownum < :max )
where rnum >= :min ;

starting with, say, :max = 51 and :min = 0 if I'm fetching 50 rows at a time. To get the card value using EXPLAIN PLAN I would, when I get the first batch of rows, strip away the "batch" stuff and send this:

explain plan for <main query>

The card value is then straightforward as I simply take the value from plan_table where id = 0. But I'm not so sure how I get the *right* card value when using v$sql_plan. Because I'm querying v$sql_plan for a plan that already exists, how can I get the card value that refers to what would have been selected had there been no batching? Example:

mires@WS2TEST> var x varchar2(10)
mires@WS2TEST> exec :x := '1.01.01'

PL/SQL procedure successfully completed.

mires@WS2TEST> select * from (select rownumber from fulltext where az = :x) where rownum < 11;

ROWNUMBER
----------
37845
37846
37847
37848
37849
37850
37851
37852
37853
37855

10 rows selected.

mires@WS2TEST> explain plan set statement_id ='my_test_no_batch' for select rownumber from fulltext where az = :x;

Explained.

mires@WS2TEST> select id, operation, cardinality from plan_table where statement_id = 'my_test_no_batch';

        ID OPERATION                      CARDINALITY
---------- ------------------------------ -----------
         0 SELECT STATEMENT                       100
         1 TABLE ACCESS                           100
         2 INDEX                                  100


(So with EXPLAIN PLAN I just take the value where id = 0).

mires@WS2TEST> select /* find me */ * from (select rownumber from fulltext where az = :x) where rownum < 11;

ROWNUMBER
----------
37845
37846
37847
37848
37849
37850
37851
37852
37853
37855

10 rows selected.

mires@WS2TEST> select id, operation, cardinality from v$sql_plan where (address, child_number) in (select address, child_number from v$sql where sql_text like '%find me%' and sql_text not like '%sql_text%') order by id;

        ID OPERATION                      CARDINALITY
---------- ------------------------------ -----------
         0 SELECT STATEMENT
         1 COUNT
         2 TABLE ACCESS                           100
         3 INDEX                                  100

In v$sql_plan, card values are not shown for each step. Here it's obvious which card value refers to my inner query, but how can I be sure with a more complex query? Does v$sql_plan never display a card value for the step which filters out my 10 rows (in which case I can just take the "last" card value - i.e. the card value for the lowest id that has a non-null card value)?



Tom Kyte
January 21, 2005 - 8:26 am UTC

you want the first one you hit by ID.  it is the "top of the stack", it'll be as close as you appear to be able to get from v$sql_plan.


ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA9IR2> create index emp_ename_idx on emp(ename);
 
Index created.
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'EMP', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> variable x varchar2(25)
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace view dynamic_plan_table
  2  as
  3  select
  4   rawtohex(address) || '_' || child_number statement_id,
  5   sysdate timestamp, operation, options, object_node,
  6   object_owner, object_name, 0 object_instance,
  7   optimizer,  search_columns, id, parent_id, position,
  8   cost, cardinality, bytes, other_tag, partition_start,
  9   partition_stop, partition_id, other, distribution,
 10   cpu_cost, io_cost, temp_space, access_predicates,
 11   filter_predicates
 12   from v$sql_plan;
 
View created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> define Q='select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11';
ops$tkyte@ORA9IR2> select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11;
 
no rows selected
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from plan_table;
 
6 rows deleted.
 
ops$tkyte@ORA9IR2> explain plan for &Q;
old   1: explain plan for &Q
new   1: explain plan for select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11
 
Explained.
 
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
 
------------------------------------------------------------------------
| Id  | Operation             |  Name          | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                |    10 |   100 |     3 |
|*  1 |  COUNT STOPKEY        |                |       |       |       |
|   2 |   MERGE JOIN CARTESIAN|                |    14 |   140 |     3 |
|   3 |    TABLE ACCESS FULL  | EMP            |    14 |    56 |     3 |
|   4 |    BUFFER SORT        |                |     1 |     6 |       |
|*  5 |     INDEX RANGE SCAN  | EMP_ENAME_IDX  |     1 |     6 |       |
------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM<11)
   5 - access("E2"."ENAME"=:Z)
 
Note: cpu costing is off
 
19 rows selected.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from table( dbms_xplan.display
  2  ( 'dynamic_plan_table',
  3      (select rawtohex(address)||'_'||child_number x
  4         from v$sql
  5        where sql_text='&Q' ),
  6     'serial' ) )
  7  /
old   5:       where sql_text='&Q' ),
new   5:       where sql_text='select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11' ),
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
 
------------------------------------------------------------------------
| Id  | Operation             |  Name          | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                |       |       |     3 |
|*  1 |  COUNT STOPKEY        |                |       |       |       |
|   2 |   MERGE JOIN CARTESIAN|                |    14 |   140 |     3 |
|   3 |    TABLE ACCESS FULL  | EMP            |    14 |    56 |     3 |
|   4 |    BUFFER SORT        |                |     1 |     6 |       |
|*  5 |     INDEX RANGE SCAN  | EMP_ENAME_IDX  |     1 |     6 |       |
------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM<11)
   5 - access("ENAME"=:X)
 
Note: cpu costing is off
 
<b>and it won't be precisely the same as explain plan gives (but in this case -- 11 would not really be in the query, so it would not "know" -- and 11 would actually be wrong for you!  14 is right if you think about it, you want to know the estimated size of the entire set, not the set after the stopkey processing)</b>

 

first_rows vs. order by

VA, March 10, 2005 - 4:20 pm UTC

In a pagination style query like

select /*+ first_rows */ ...
from ...
where ...
order by ...

Don't the first_rows hint and the order by cancel each other out? ORDER BY implies that you need to fetch everything before you can start spitting out the first row, which contradicts first_rows.

So, if I have a resultset that returns 1000 rows and I want to see the first 10 rows ordered by something, how would I go about doing this most efficiently knowing that users are going to go away after paging down couple of times?

Thanks

Tom Kyte
March 10, 2005 - 7:38 pm UTC

no it doesn't. think "index" and think "top n processing"

if you can use an index, we can get there pretty fast.

if you cannot - we can still use a top-n optimization to avoid sorting 1000 rows (just grab the first n rows and sort them; then, for every row that comes after, compare it to the last row in the array of N sorted rows -- if it is greater than the last row, ignore it, else put it in the array and bump the last one out)
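
In SQL terms, that top-n optimization kicks in for a query of this shape (a minimal sketch against a hypothetical table T with an ordering column X):

select *
  from ( select /*+ first_rows */ *
           from t
          order by x )
 where rownum <= 10;

The SORT ORDER BY STOPKEY step you will see in such a plan is that "array of N rows" at work -- it keeps only the 10 best rows in the sort area instead of sorting all 1000.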

Pls explain how it will work with example

Kiran, March 11, 2005 - 5:39 am UTC

sql>select *
2 from ( select rownum rnum, a.*
3 from ( select * from emp order by 1 ) a
4 where rownum <= 15 )
5 where rnum >= 1
6 ;

RNUM EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO LOC
---------- ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ---
1 7369 aaat&&maa CLERK 7902 17-DEC-80 8000 250
2 7566 NJS MANAGER 7839 02-APR-81 2975 100
3 7782 CLARK MANAGER 7839 09-JUN-81 2450 10
4 7788 SCOTT ANALYST 7566 09-DEC-82 3000
5 7839 KING PRESIDENT 17-NOV-81 5000 10
6 7876 ADAMS CLERK 7788 12-JAN-83 1100
7 7902 FORD ANALYST 7566 03-DEC-81 3000
8 7934 MILLER CLERK 7782 23-JAN-82 1300 10
9 7965 AKV CLERK 7566 20-DEC-83 1020 400 20

9 rows selected.


it looks like a normal query; it is not resetting the rownum value. Please explain your query with an example.

Tom Kyte
March 11, 2005 - 6:22 am UTC

run the query from the inside out (don't know what you mean by "not resetting the rownum value")

a) take the query select * from emp order by 1
b) then get the first 15 rows (where rownum <= 15) and assign rownum as rnum to each of them
c) then keep only rnum >= 1

to get rows 1..15 of the result set

you should try perhaps 5 .. 10 since emp only has 14 rows.
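
For example, to get rows 5 through 10 of that result set:

select *
  from ( select rownum rnum, a.*
           from ( select * from emp order by 1 ) a
          where rownum <= 10 )
 where rnum >= 5;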

A reader, March 11, 2005 - 10:32 am UTC

So, if I have a resultset that returns 1000 rows and I want to see the first 10 rows ordered by something, how would I go about doing this most efficiently knowing that users are going to go away after paging down couple of times?

How would I do this?

Thanks



Tom Kyte
March 11, 2005 - 10:56 am UTC

i use the pagination style query we were discussing right above. right on my home page I use this query (not against emp of course :)

salee, March 11, 2005 - 11:52 pm UTC

i want to retrieve some records out of 3 million records (ie I want to retrieve records between 322222 and 322232). using rownum, how can i do this?

Tom Kyte
March 12, 2005 - 10:07 am UTC

there is no such thing as "record 322222" you know. You have to have an order by. but to get rows N thru M, see above? I showed how to do that.

delete from table - easiest way

A reader, March 22, 2005 - 4:05 pm UTC

Hi Tom,

I have a table sample as

create table sample
(
num number,
str varchar2(255),
method varchar2(255),
id1 number,
id2 number
);

I have about 32 million rows in this table out of which some rows are duplicated like for eg

num str method id1 id2
1 1 2 201 202
2 1 201 202

that is, the id1 and id2 of two rows might be duplicated. if that is the case, i want to find such rows, keep one and delete the other. is there an easy way to achieve this?

Thanks.


Tom Kyte
March 22, 2005 - 6:13 pm UTC

search this site for

duplicates

we've done this one a couple of ways, many times.
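
For reference, one classic technique looks like the following -- a sketch only, keeping the row with the smallest rowid in each id1/id2 group and deleting the rest (test on a copy first; a correlated delete over 32 million rows can be expensive):

delete from sample a
 where a.rowid > ( select min(b.rowid)
                     from sample b
                    where b.id1 = a.id1
                      and b.id2 = a.id2 );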

analytics and index use

bob, April 07, 2005 - 8:35 am UTC

Tom,

The query you mentioned above:

select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;

I can't understand why this requires a full scan for a large table on our system. (9.0.1.4)
If I had a (store,customer) index and the cardinality of store was 5 or so, couldn't the optimizer read the first two rows in the index for each store? That may be overly simplistic, but I don't understand why I have to do 2000 consistent gets for a 25k row table and a full scan to accomplish this for 10 total rows.

stats are calculated using "analyze table t compute statistics for table for all indexes for all indexed columns"

On a related note: while trying to hint with FIRST_ROWS, I noticed the hint changed the answer.

select * from (
select /*+ FIRST_ROWS(5) */ dt, rownum r from t order by dt)
where r <= 5;

returns all 25k rows; if I drop the (5) out of the hint, it returns just 5.




Tom Kyte
April 07, 2005 - 10:28 am UTC

well, you have 10 stores

covering 25,000 rows

so that is 2,500 rows per store, not "5 or so" all of a sudden....

You would need to combine a skip scan of the index with an index range scan and count stopkey. Meaning, we'd have to see "oh, there are only about 10 stores, you want the first two after sorting by customer. we'll skip around in the index and just get two from each". I believe some day it'll be that sophisticated, but remember in general these queries are much more complex.

And in your example, you would have had 5,000 stores -- and skip scanning would have been a really bad idea.

If a hint changes the results, there is a bug, please contact support (but 9014..)

CBO vs. me

bob, April 07, 2005 - 10:57 am UTC

Tom,

I always remember you suggesting we should think about how the optimizer might approach a query using the methods it has available to it. I assumed that if my simple mind could think of an approach, then surely the CBO could implement it. :) I understand your point that it might be much more complicated in general.

If in this example, I know the stores (and there are 5), I might be better off, writing a union of 5 queries that each get the first two customers for that store using that concatenated index than this analytic approach. I'll have to test that theory to see.

Thanks for the verification. I thought I was missing something. With regards to 9014, for some reason metalink de-certification notices don't faze the customer.


getting rows N through M of a result set

Hossein Alaei Bavil, May 04, 2005 - 7:16 am UTC

excellent!
I think you are in the Oracle core !!
but I wonder why Oracle doesn't provide a built-in feature for doing this?


using join or cursors

mohannad, May 15, 2005 - 1:18 pm UTC

i have four tables and i want to use the information from all four, so what is the most efficient way:
1. to create a view joining the four tables, or
2. to create only one database data block using Oracle Developer and use the post-query trigger to retrieve the information from the other tables by using cursors or select into.

my point is that by using post-query, the cursors or select into are performed only on the records fetched from the database (10 records for example), and when you show more records by moving the scroll bar down, the post-query fires again; but joining the tables means that Oracle should join all the records at once, which takes more time. so which choice is better, since i am working with huge tables and time is very important to me?
thanks a lot



Tom Kyte
May 15, 2005 - 1:42 pm UTC

databases were born to join
and be written to

a join does not mean that Oracle has to retrieve the last row before it can give you the first at all.

use FIRST_ROWS (session setting or hint) if getting the first rows is the most important thing to you.

join or cursors

mohannad, May 15, 2005 - 2:42 pm UTC

thank you for your quick response,
but i think i have a bit of a conflict in understanding the meaning of paging. as i understand it, paging means displaying the result of the join only when the user wants to display more results, which can be done by joining the tables at a higher level (using Oracle Forms for example). the advantage of not joining the tables in the database is that the user may get what he wants without the need to display all the records, so joining the tables at a higher level means that computation only occurs when the user wants more results to be displayed. am i right or no????

Tom Kyte
May 15, 2005 - 3:30 pm UTC

you are not correct.

select /*+ FIRST_ROWS */ *
from t1, t2, t3, t4, t5, t6, ......
where t1.key = t2.key
and .....


you open that query and fetch the first row and only that first row.

Well, the database is going to read a teeny bit of t1, t2, t3, t4, .... and so on. It is NOT going to process the entire thing!

joining does not mean "gotta get all the rows before you get the last". joins can be done on the fly.

say you have:

drop table t1;
drop table t2;
drop table t3;
drop table t4;

create table t1 as select * from all_objects;
alter table t1 add constraint t1_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);

create table t2 as select * from all_objects;
alter table t2 add constraint t2_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'t2',cascade=>true);

create table t3 as select * from all_objects;
alter table t3 add constraint t3_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'t3',cascade=>true);

create table t4 as select * from all_objects;
alter table t4 add constraint t4_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'t4',cascade=>true);


and you query:

select /*+ first_rows */ t1.object_name, t2.owner, t3.created, t4.temporary
from t1, t2, t3, t4
where t1.object_id = t2.object_id
and t2.object_id = t3.object_id
and t3.object_id = t4.object_id

and fetch 100 rows, or you do it yourself:

declare
cnt number := 0;
begin
for x in ( select t1.object_name, t1.object_id from t1 )
loop
for y in ( select t2.owner, t2.object_id from t2 where object_id = x.object_id)
loop
for z in ( select t3.created, object_id from t3 where object_id = y.object_id)
loop
for a in ( select t4.temporary from t4 where t4.object_id = z.object_id )
loop
cnt := cnt+1;
exit when cnt >= 100;
end loop;
exit when cnt >= 100;
end loop;
exit when cnt >= 100;
end loop;
exit when cnt >= 100;
end loop;
end;
/


well, tkprof shows:

SELECT /*+ first_rows */ T1.OBJECT_NAME, T2.OWNER, T3.CREATED, T4.TEMPORARY
FROM
T1, T2, T3, T4 WHERE T1.OBJECT_ID = T2.OBJECT_ID AND T2.OBJECT_ID =
T3.OBJECT_ID AND T3.OBJECT_ID = T4.OBJECT_ID


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.01 0.00 0 611 0 100
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.00 0 611 0 100

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 108 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
100 NESTED LOOPS (cr=611 pr=0 pw=0 time=8091 us)
100 NESTED LOOPS (cr=409 pr=0 pw=0 time=5346 us)
100 NESTED LOOPS (cr=207 pr=0 pw=0 time=3194 us)
100 TABLE ACCESS FULL T1 (cr=5 pr=0 pw=0 time=503 us)
100 TABLE ACCESS BY INDEX ROWID T2 (cr=202 pr=0 pw=0 time=1647 us)
100 INDEX UNIQUE SCAN T2_PK (cr=102 pr=0 pw=0 time=801 us)(object id 67372)
100 TABLE ACCESS BY INDEX ROWID T3 (cr=202 pr=0 pw=0 time=1464 us)
100 INDEX UNIQUE SCAN T3_PK (cr=102 pr=0 pw=0 time=659 us)(object id 67374)
100 TABLE ACCESS BY INDEX ROWID T4 (cr=202 pr=0 pw=0 time=1433 us)
100 INDEX UNIQUE SCAN T4_PK (cr=102 pr=0 pw=0 time=637 us)(object id 67376)


we only do the WORK WE NEED to do, as you ask us. And if you compare the work done here with the work you would make us do:



SELECT T1.OBJECT_NAME, T1.OBJECT_ID FROM T1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 5 0 100
********************************************************************************
SELECT T2.OWNER, T2.OBJECT_ID FROM T2 WHERE OBJECT_ID = :B1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 201 0.01 0.01 0 300 0 100
********************************************************************************
SELECT T3.CREATED, OBJECT_ID FROM T3 WHERE OBJECT_ID = :B1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 201 0.00 0.04 0 300 0 100
********************************************************************************
SELECT T4.TEMPORARY FROM T4 WHERE T4.OBJECT_ID = :B1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 201 0.06 0.01 0 300 0 100



You would do in 905 IOs what we did in 611; you would have many back and forths, binds and executes, whereas we would have one.


IF you can do it in a single query, please -- do it.


select /*+ FIRST_ROWS */ *

mohannad, May 15, 2005 - 4:42 pm UTC

i cannot understand what is the difference between

1. select * from items, invoice_d
where items.itemno=invoice_d.itemno;

and

2. select /*+ FIRST_ROWS */ *
from items, invoice_d
where items.itemno=invoice_d.itemno;

they give me the same result and the same number of records fetched each time (i understood that using /*+ FIRST_ROWS */ means fetching a certain number of records first, but i cannot understand why i did not find any difference between the query with first_rows and without it)

Thanks a lot..

Tom Kyte
May 15, 2005 - 8:00 pm UTC

if it gave you different answers, that would be a bug.

The plans should be different.  The first query will optimize to find ALL ROWS as efficiently as possible, the second to return the first rows as soon as it can.

the first optimizes for throughput.
the second for initial response time:


ops$tkyte@ORA10G> create table items( itemno number primary key, data char(80) );
 
Table created.
 
ops$tkyte@ORA10G> create table invoice( id number, itemno references items, data char(80),
  2  constraint invoice_pk primary key(id,itemno) );
 
Table created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> exec dbms_stats.set_table_stats( user, 'ITEMS', numrows => 100000 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec dbms_stats.set_table_stats( user, 'INVOICE', numrows => 1000000 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> set autotrace traceonly explain
ops$tkyte@ORA10G> select *
  2    from items, invoice
  3   where items.itemno = invoice.itemno;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5925 Card=1000000 Bytes=195000000)
   1    0   HASH JOIN (Cost=5925 Card=1000000 Bytes=195000000)
   2    1     TABLE ACCESS (FULL) OF 'ITEMS' (TABLE) (Cost=31 Card=100000 Bytes=9500000)
   3    1     TABLE ACCESS (FULL) OF 'INVOICE' (TABLE) (Cost=50 Card=1000000 Bytes=100000000)
 
 
 
ops$tkyte@ORA10G> select /*+ FIRST_ROWS */ *
  2    from items, invoice
  3   where items.itemno = invoice.itemno;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=101481 Card=1000000 Bytes=195000000)
   1    0   NESTED LOOPS (Cost=101481 Card=1000000 Bytes=195000000)
   2    1     TABLE ACCESS (FULL) OF 'INVOICE' (TABLE) (Cost=50 Card=1000000 Bytes=100000000)
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'ITEMS' (TABLE) (Cost=1 Card=1 Bytes=95)
   4    3       INDEX (UNIQUE SCAN) OF 'SYS_C0010764' (INDEX (UNIQUE)) (Cost=0 Card=1)
 
 
 
ops$tkyte@ORA10G> set autotrace off


the hash join will wait until it reads the one table fully, hashes it -- once it does that, you'll start getting rows.

The second one returns rows IMMEDIATELY, but will take longer to return the last row. 

difference between first_rows(n) and all_rows

mohannad, May 15, 2005 - 7:09 pm UTC

what is the main difference between first_rows(n) and all_rows? as i understand it, first_rows(10) for example retrieves the first 10 rows very fast, but if i want to retrieve all the records then i should avoid using first_rows and instead use all_rows. what does all_rows do???

Tom Kyte
May 15, 2005 - 8:03 pm UTC

all rows = optimize to be able to get the last row as fast as possible. you might wait for the first row for a while, but all rows will be returned faster.

first rows = get first row as fast as possible. getting to the last row might take lots longer than with all rows, but we have an end user waiting to see data so get the first rows fast


use all rows for non-interactive things (eg: print this report)
use first rows for things that end users sit and wait for (paging through a query on the web for example)
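
for example (session-level settings; the FIRST_ROWS_n variants exist in 9i and later):

-- batch report: optimize for total throughput
alter session set optimizer_mode = all_rows;

-- interactive paging screen: optimize for initial response time
alter session set optimizer_mode = first_rows_10;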

performance issue

mohannad, May 16, 2005 - 9:33 am UTC

i have two tables with large amounts of records

>>desc working
empno
start_date
.
.
.
with empno as the primary key

>>desc working_hestory
empno
hestory_date
.
.
.
with empno & hestory_date as the primary key

and i want all empno from the working table where their start_date < the max of their hestory_date. i wrote the following two queries
but i found the second is two times faster than the first, and i want to know the reason??? some people told me that the optimizer will optimize the two queries so that they will have the same speed, but when i use the two queries i find the second query faster than the first. so what is the reason, and is there any general rule about that?

1.select * from working where start_date<(select max(hestory_date) from working_hestory
where working.empno=working_hestory.empno)


2.select * from working, (select empno, max(hestory_date) hestory_date from working_hestory where empno in (select empno from working) group by empno) a
where
a.empno=working.empno
and
start_date<hestory_date;

Thanks A lot





Tom Kyte
May 16, 2005 - 12:57 pm UTC

read the plans. they will be very different.

you would probably make it even faster with

select *
from working, (select empno, max(hestory_date) hestory_date
from working_hestory
group by empno) a
where working.empno = a.empno
and working.start_date < a.hestory_date;



performance issue

mohannad, May 16, 2005 - 1:04 pm UTC

but what is the reason behind the difference between the two queries????
is there any general guideline for this difference?

Tom Kyte
May 16, 2005 - 1:20 pm UTC

I cannot see the plans, you can....



performance issue

mohannad, May 16, 2005 - 1:29 pm UTC

i mean, is there any guideline i can use when i write any sql query, without the use of plans -- a rule to use a join rather than a subquery, for example?

Tom Kyte
May 16, 2005 - 1:49 pm UTC

if you gain a conceptual understanding of how the query will likely be processed, that would be good -- understand what happens, how it happens (access paths are discussed in the performance guide, I wrote about them in effective oracle by design as well)

but if you use the CBO, it'll try rewriting them as much as it can -- making the difference between the two less and less. No idea what optimizer you were using however.

Also, knowledge of the features of sql available to you (like analytic functions) is key to being successful.

Best of the Best

A Reader, May 20, 2005 - 8:27 am UTC

Howdy,

Thanks for sharing your knowledge with us.

Cheers

Partition

Mohit, May 20, 2005 - 8:49 am UTC

Hi Tom,

Hope you are in good spirits!

Tom, where can I read some more stuff like the one below:

--------------------------------
select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;
---------------------------------

I have never seen this kind of logic in any of the SQL books I have read so far. Can you suggest any book or documentation for learning/reading knowledgeable things like the above please?

Thanks Tom,
Mohit


Tom Kyte
May 20, 2005 - 10:29 am UTC

Expert One on One Oracle - chapter on analytics.
Effective Oracle by Design ....

On otn.oracle.com -> data warehousing guide.

been in the database since 8.1.6

question about paging

James Su, June 09, 2005 - 10:05 am UTC

hi Tom,

We have a large transactions table with indexes on trans_id (primary key) and trans_time, and now I am trying to display the transactions page by page. The startdate and enddate are specified by the user and passed from the front end (usually the first and last day of the month). The front end will also remember the trans_id of the last row of the page and pass it to the database in order to fetch the next page.

main logic:

...........

begin

if p_direction='pagedown' then -- going to next page
v_sql := 'select trans_id,trans_time,trans_amount from mytransactions where trans_time between :1 and :2 and trans_id<=:3 order by trans_id desc';
else -- going to last page
v_sql := 'select trans_id,trans_time,trans_amount from mytransactions where trans_time between :1 and :2 and trans_id>=:3 order by trans_id';
end if;

open c_var for v_sql using p_startdate,p_enddate,p_last_trans_id;

i :=0;

loop
FETCH c_var INTO v_row;

i := i + 1;

EXIT WHEN c_var%NOTFOUND or i>30; -- 30: rows per page

-- add v_row into the array

end loop;

close c_var;

-- return array to the front end
...........
end;
/

in this way, if the user can input a trans_id then we can help him locate that page.

Can you tell me whether there's a better approach? The performance does not seem good. Thank you very much.

Tom Kyte
June 09, 2005 - 11:24 am UTC

first rows hint it AND use rownum to get the number of rows you want

select *
from ( select /*+ FIRST_ROWS */ ..... order by trans_id )
where rownum <= 30;


that'll be the query you want -- use static SQL (no need for dynamic) here. Bulk collect the rows and be done


if ( pagedown )
then
select .... BULK COLLECT into ....
from ( select ... )
where rownum <= 30;
else
select .......
end if;
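
A fuller sketch of the "pagedown" branch as static SQL with a bulk collect (the procedure name, record type and hard-coded page size of 30 here are made up for illustration):

create or replace procedure get_next_page(
    p_startdate     in date,
    p_enddate       in date,
    p_last_trans_id in number )
as
    type rec_t is record (
        trans_id     mytransactions.trans_id%type,
        trans_time   mytransactions.trans_time%type,
        trans_amount mytransactions.trans_amount%type );
    type tab_t is table of rec_t index by binary_integer;
    l_page tab_t;
begin
    select trans_id, trans_time, trans_amount
      bulk collect into l_page
      from ( select /*+ FIRST_ROWS */ trans_id, trans_time, trans_amount
               from mytransactions
              where trans_time between p_startdate and p_enddate
                and trans_id <= p_last_trans_id
              order by trans_id desc )
     where rownum <= 30;
    -- hand l_page back to the front end from here
end;
/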



first_rows hint works!

James Su, June 09, 2005 - 11:40 am UTC

hi Tom,
It's amazing, thank you so much!!

first_row hint on views

James Su, June 09, 2005 - 12:36 pm UTC

sorry tom, I forgot to mention that mytransactions is actually a view, which is the union all of a current table and an archived table. Now the problem is:
If I have the trans_id =1 in the archived table, then:
select /*+ FIRST_ROWS */ trans_id from mytransactions where rownum<=30 and trans_id>=1 order by trans_id;

it will return the trans_id in the current table, which is greater than 1.

What can I do with this situation? Thank you.

Tom Kyte
June 09, 2005 - 6:20 pm UTC

you cannot do that regardless.

to "top-n" an ordered set, you MUST:

select *
from ( select /*+ first_rows */ .... ORDER BY .... )
where rownum <= 30;

and if it is in a union all view -- it isn't going to be excessively "first rows friendly"

When is Rownum applied

A reader, July 07, 2005 - 5:47 pm UTC

Hello,
Is rownum applied after order by clause or as the rows are fetched

select * from (
select deptno ,rownum r from dept order by deptno )
where r = 1




Tom Kyte
July 07, 2005 - 6:02 pm UTC

that assigns rownum to the data from dept AND THEN sorts it AND THEN keeps the first row that happened to come from dept before it was sorted.

eg:

select deptno from dept where rownum=1;

would be the same but faster.

if you want the first row after sorting:

select * from (select deptno from dept order by deptno) where rownum = 1;

(in this case, actually, select min(deptno) from dept :)

Please help me with a query

reader, August 09, 2005 - 3:57 am UTC

Hi Tom,

I have a table "xyz" where TDATE and BOOKNAME are the columns in it .

The output of the table is like this when i do a "select * from xyz".



TDATE BOOKNAME
--------------- ----------
16-MAY-05 kk6
16-MAY-05 kk6
16-MAY-05 kk6


17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7



I would like to have the output like the below. Please help me with a SQL query which will give me the number of times a distinct BOOKNAME value is present per TDATE value.



TDATE BOOKNAME count(*)
--------------- ---------- ----------
16-MAY-05 kk7 3
17-MAY-05 kk7 6


Thanks in advance

Tom Kyte
August 09, 2005 - 9:50 am UTC

homework?  (sorry, this looks pretty basic)

look up trunc in the sql reference manual, you'll probably have to trunc the TDATE to the day level:

ops$tkyte@ORA10G> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';
                                                                                                                                                                      
Session altered.
                                                                                                                                                                      
ops$tkyte@ORA10G> select sysdate, trunc(sysdate) from dual;
                                                                                                                                                                      
SYSDATE              TRUNC(SYSDATE)
-------------------- --------------------
09-aug-2005 09:38:08 09-aug-2005 00:00:00
                                                                                                                                                                      


In order to lose the time component.  And then read up on group by and count(*).

Now, I don't know why your output has kk7 twice, I'll assume that is a typo. But this is a very simple group by on the trunc of tdate and bookname with a count. 
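
Putting those two pieces together, the query described would be along these lines (against the poster's xyz table):

select trunc(tdate) tdate, bookname, count(*)
  from xyz
 group by trunc(tdate), bookname
 order by 1;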

ROWNUM performance

Tony, August 22, 2005 - 4:34 pm UTC

Tom,
Thanks a lot for your help and valuable time. I have a very simple query (it looks simple) but it takes more than 4 mins to execute,

select * from (select L.LEG_ID from leg_t L WHERE
L.STAT_ID = 300 AND
L.LEG_CAT = 2 AND
L.D_CD = 'CIS' AND
L.P_ID is null order by L.LEG_ID desc)
where rownum <= 16;

LEG_ID is the primary key(PK_LEG), I also have index(leg_i1) on (STAT_ID,LEG_CAT,D_CD,P_ID,leg_id desc).

Now if I run this query as is it takes about 4-5 mins and the plan is:

SELECT STATEMENT Cost = 90
COUNT STOPKEY
VIEW
TABLE ACCESS BY INDEX ROWID LEG_T
INDEX FULL SCAN DESCENDING PK_LEG
The query doesn't use the leg_i1 index... shouldn't it?

Secondly if I run the internal query:

select L.LEG_ID from leg_t L WHERE
L.STAT_ID = 300 AND
L.LEG_CAT = 2 AND
L.D_CD = 'CIS' AND
L.P_ID is null order by L.LEG_ID desc

it uses the index leg_i1 and comes back in milli-seconds.

I tried the rule hint on the query and it comes back in milliseconds again instead of 4-5 minutes. (I can't use hints in the application.)

Please guide.

Tom Kyte
August 24, 2005 - 3:28 am UTC

it is trying to do first rows here (because of the top-n, the rownum) and the path to first rows is to use that index to "sort" the data, read the data sorted.

But apparently, you have to search LOTS of rows to find the ones of interest - hence it takes longer.

either try all_rows optimization OR put leg_id on the leading edge of that index instead of way at the end
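
a sketch of that reordered index (the name is hypothetical) -- with leg_id on the leading edge, the index can be scanned in leg_id order, descending, applying the other columns as index filters without a sort:

create index leg_i2 on leg_t ( leg_id, stat_id, leg_cat, d_cd, p_id );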

ROWNUM performance

Tony, August 29, 2005 - 4:23 pm UTC

Tom,
Thanks a lot for your valuable time. I tried the index as you suggested but still the optimizer doesn't pick it (the default optimizer mode is all_rows).

This table (leg_t) contains about one million rows; the STAT_ID (not null) column contains just 8 distinct values,
the LEG_CAT (not null) column contains just 2 distinct values,
and the D_CD (not null) column contains just 1 distinct value.

I can't use a bitmap index, so what other option do you recommend so that the optimizer picks up the index (as it does when the mode is RULE)? please help.



Tom Kyte
August 30, 2005 - 1:24 am UTC

hint it all rows in the inline view. (it sees the rownum..)

ROWNUM performance

Tony, August 30, 2005 - 10:12 am UTC

Tom,
Thanks again, I tried all_rows as you suggested but still it doesn't pick the index, it still goes for the primary key index, which takes 5 minutes. Here is the plan with all_rows:

SELECT STATEMENT Cost = 10
COUNT STOPKEY
VIEW
TABLE ACCESS BY INDEX ROWID LEG_T
INDEX FULL SCAN DESCENDING PK_LEG

Do you suggest histograms for such columns? which columns are the best candidates for histograms (if you think that can help)?

Please help, I even tried to play with optimizer_index_caching, optimizer_index_cost_adj parameters but couldn't get better results.

Tom Kyte
August 30, 2005 - 12:22 pm UTC

select *
from (
select *
from (select /*+ no_merge */ L.LEG_ID
from leg_t L
WHERE L.STAT_ID = 300
AND L.LEG_CAT = 2
AND L.D_CD = 'CIS'
AND L.P_ID is null
)
order by leg_id desc
)
where rownum <= 16;



FIRST_ROWS

Jon Roberts, September 07, 2005 - 11:12 am UTC

I had implemented the suggested solution some time back, and when it finally got to production it was rather slow when using an order by in the innermost query.
We allow users to sort by a number of columns, and when sorting, it would run much slower. Using autotrace, I could see that I had the same plan, but with the larger production table there was more data to search and the full table scan took longer.

I created indexes on the columns people sort by but it wouldn't use the indexes. I just re-read this discussion and found your suggestion of using the first_rows hint. That did the trick. It uses the indexes now and everything is nice and fast.

Thanks for the great article!

Excellent Thread

Manas, November 03, 2005 - 1:28 pm UTC

Thanks Tom.
Before going through this thread, I was thinking of implementing the pagination using a ref cursor (dynamic) and bulk collect.

How to find the record count of a ref cursor ?

VKOUL, December 05, 2005 - 6:44 pm UTC

Hi Tom,

Is it possible ? (kind of collection.count)

procedure p (par1 in number, par2 out sys_refcursor, par3 out number) is
begin
. . .
open par2 for select . . .;
-- at this point, how can I get the number of records in par2?
par3 := number of records;
end;
/


Tom Kyte
December 06, 2005 - 5:35 am UTC

you cannot, no one KNOWS what the record count is until.....

you've fetched the last row.


consider this:


open rc for select * from ten_billion_row_table;


it is not as if we copy the entire result set someplace, in fact, we typically do no work to open a cursor (no IO is performed), it is not until you actually start asking for data that we start getting it and we have no idea how many rows will be returned until they are actually returned.

No use in doing work that you might well never be asked to do.

A reader, December 23, 2005 - 4:47 am UTC

Awesome !!!

Page through a ref cursor using bulk collect

Barry Chase, January 13, 2006 - 6:15 am UTC

Can I use your bulk collect and first rows logic while building a ref cursor that I pass back to a front end, permitting users to page through a large dataset 10, 25, or 50 records at a time, while still maintaining performance at the end of the result set as well as at the beginning?

Tom Kyte
January 13, 2006 - 11:15 am UTC

don't use bulk collect - just return the ref cursor and let the front end array fetch as it needs rows.

Follow up question

Barry C, January 14, 2006 - 10:52 am UTC

Okay, no on bulk collecting. Our front end is potentially pulling back several thousand records. I would prefer that they apply more criteria, but our administrative users have decided that they feel differently. Needless to say, for a web page, performance is far from good -- 5-10 seconds to return all of the records. They say this is unacceptable. I tried the min/max row thing and it works great at the early part of the result set, but performance progressively gets worse as I go down... say, show me records 900-950.

So I am supposed to come up with a solution for which I am not sure there is a solution. Any thoughts or commentary ?


Tom Kyte
January 15, 2006 - 3:45 pm UTC

only give them a NEXT button and no "page 55" button.

Do it like google. Ask your end users to go to page 101 of this search:

http://www.google.com/search?q=oracle&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official

also, ask them to note the time it takes to produce each page as they try to get there.

tell them "google = gold standard, if they don't do it, neither will I"

I give you a next button, nothing more.
Google lets you hit 10 at a time, nothing more.

And google will say this:
Sorry, Google does not serve more than 1000 results for any query.

if you try to go to page 100.

Further enhancement

A reader, January 16, 2006 - 5:51 am UTC

Hi Tom,

excellent thread. In addition to getting M..N rows I would also like to add column sorting (the column header will be a link). How can I do this efficiently?

Thanks

RP

Tom Kyte
January 16, 2006 - 9:39 am UTC

read original answer? I had "including order by...."??

A reader, January 16, 2006 - 12:14 pm UTC

...with the potential to use any of the columns in the table. That means either I create a set of SQL statements, one for each column (plus ASC or DESC), or I generate the SQL statement dynamically.

If I do it dynamically, would I not lose the magic of *bind variables*?

Apologies, that was what I meant to ask.

R



Tom Kyte
January 16, 2006 - 12:51 pm UTC

you would not lose the magic of bind variables, you would however have a copy of the sql statement for each unique sort combination (which is not horrible, unless you have hundreds/thousands of such sort orders)

A reader, January 16, 2006 - 1:08 pm UTC

and if the number of static statements got too large, could I do it dynamically like this:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1288401763279

?? Is it relevant in this context (no pun intended)?

R

Tom Kyte
January 16, 2006 - 1:47 pm UTC

Yes, there are many ways to bind

a) static sql in plsql does it nicely
b) sys_context with open refcursor for....
c) dbms_sql - with dbms_sql.bind_variable
d) open refcursor for .... USING <when you know the number of binds>


you could also:


order by decode( p_input, 1, c1 ) ASC, decode( p_input, -1, c1 ) DESC,
decode( p_input, 2, c2 ) ASC, decode( p_input, -2, c2 ) DESC,
....

in order to have one order by statement - you would never be able to use an index to retrieve the data "sorted" (but you might not be able to anyway in many cases)...
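
put together in a ref cursor, that might look like this (the procedure, table and column names are hypothetical):

create or replace procedure get_sorted(
    p_input in number,
    p_rc    out sys_refcursor )
as
begin
    open p_rc for
        select c1, c2
          from t
         order by decode( p_input,  1, c1 ) ASC, decode( p_input, -1, c1 ) DESC,
                  decode( p_input,  2, c2 ) ASC, decode( p_input, -2, c2 ) DESC;
end;
/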


what if your query gets info back from more than one table?

Michelle, February 09, 2006 - 10:08 pm UTC

What would the syntax look like?
Thank you!

Tom Kyte
February 10, 2006 - 12:35 pm UTC

I assume you are referring to the query right above this?

select a.c1, a.c2, b.c3, b.c4, ....
from a,b....
where ....
order by decode( p_input, 1, a.c1 ) ASC, decode( p_input, -1, a.c1 ) DESC,
decode( p_input, 2, a.c2 ) ASC, decode( p_input, -2, a.c2 ) DESC,
decode( p_input, 3, b.c3 ) ASC, decode( p_input, -3, b.c3 ) DESC,
....


not any different than if there was one table really.

Get the total

Nitai, March 01, 2006 - 11:24 am UTC

Hi Tom

How can I get the total of all found records with this query:

SELECT rn, id
FROM (
SELECT ROWNUM AS rn, id
FROM (
SELECT id
FROM test
)
WHERE ROWNUM <= 30
)
WHERE rn > 0

I tried to put count(rn) in there but that only returns me the 30 records (of course) but what I need is the total records this query found. Is this even possible within the same query? Thank you for your kind help.


Tom Kyte
March 01, 2006 - 1:48 pm UTC

why?

goto google, search for oracle, tell me if you think their count is accurate. then, goto page 101 of the search results and tell me what the first link is.

nitai, March 01, 2006 - 4:21 pm UTC

Call me stupid, but what is your point? When I go to Google and enter Oracle I get this:

Results 411 - 411 of about 113,000,000 for oracle

I get to go until page 43 and that's it. Ok, that means it is not possible?

All I really need is how many total found records there are (meaning the 113,000,000 in the case of the google search) :-)

Tom Kyte
March 02, 2006 - 9:04 am UTC

do you think that google actually counted the results?

No, they don't

There is no page 101 on google.


They don't let you go that far.


My point - made many times - counting the exact number of hits to paginate through a result set on the web is "not smart"

I refuse to do it.

I won't show how.

It is just a way to burn CPU like mad, make everything really really really slow.



Got your point

Nitai, March 02, 2006 - 9:13 am UTC

Ok Tom, got your point. But what about if I have an ecommerce site and customers are searching for a product? They would want to know how many products they found, thus I would need that overall number of found records.

At the moment I would have to run the query two times: one that gets me the total number and one with the rownum > 50 and so on. I don't think that is very performant either.

What else to do?

Tom Kyte
March 02, 2006 - 12:44 pm UTC

Just tell them "you are looking at 1 thru 10 of more than 10"

Or guess - just like I do, google does. Give them a google interface - look at google as the gold standard here. Google ran out of pages and didn't get upset or anything - if you tried to goto page 50, it just put you on the last page.


You DO NOT EVER need to tell them

you are looking at 1 through 10 of 153,531 items

Just tell them, here is 1 through 10, there are more, next will get you to them.

Or give them links to the first 10 pages (like google) and if they click on page 10 but there isn't a page 10, show them the last page and then only show them pages 1..N in the click links.

Be like google.

Sorry, not going to tell you how to burn cpu like mad, this is one of my pet peeves - this counting stuff.

10gR2 optimizer problem

A reader, March 10, 2006 - 3:43 am UTC

Hi Tom,

I hardly believed it when I saw it. This is a tkprof of the same query; the second time it has select * from (<original_query>) around it. Can you give us a hint what might be happening here?

SELECT
a.*, ROWNUM AS rnum
FROM (SELECT /*+first_rows*/
s.userid, s.username, s.client_ip, s.timestamp_,
s.DURATION, s.calling_station_id,
s.called_station_id, s.acct_terminate_cause,
s.nas_port_type
FROM dialin_sessions s
WHERE s.client_ip LIKE '213.240.3.%'
AND s.username LIKE 'c%'
AND s.timestamp_end >= 1136070000
AND s.timestamp_ <= 1138751940
ORDER BY timestamp_ DESC) a
WHERE ROWNUM <= 26

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.06 0.06 0 6801 0 25
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.06 0.07 0 6801 0 25

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 63

Rows Row Source Operation
------- ---------------------------------------------------
25 COUNT STOPKEY (cr=6801 pr=0 pw=0 time=74387 us)
25 VIEW (cr=6801 pr=0 pw=0 time=74303 us)
25 TABLE ACCESS BY INDEX ROWID DIALIN_SESSIONS (cr=6801 pr=0 pw=0 time=74221 us)
7050 INDEX RANGE SCAN DESCENDING DIALIN_SESSIONS_TIMESTAMP (cr=21 pr=0 pw=0 time=14187 us)(object id 53272)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 6.39 6.39
********************************************************************************


SELECT *
FROM (SELECT
a.*, ROWNUM AS rnum
FROM (SELECT /*+first_rows*/
s.userid, s.username, s.client_ip, s.timestamp_,
s.DURATION, s.calling_station_id,
s.called_station_id, s.acct_terminate_cause,
s.nas_port_type
FROM dialin_sessions s
WHERE s.client_ip LIKE '213.240.3.%'
AND s.username LIKE 'c%'
AND s.timestamp_end >= 1136070000
AND s.timestamp_ <= 1138751940
ORDER BY timestamp_ DESC) a
WHERE ROWNUM <= 26)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 34.45 68.05 267097 325479 0 25
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 34.45 68.06 267097 325479 0 25

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 63

Rows Row Source Operation
------- ---------------------------------------------------
25 VIEW (cr=325479 pr=267097 pw=0 time=68055294 us)
25 COUNT STOPKEY (cr=325479 pr=267097 pw=0 time=68055230 us)
25 VIEW (cr=325479 pr=267097 pw=0 time=68055196 us)
25 SORT ORDER BY STOPKEY (cr=325479 pr=267097 pw=0 time=68055118 us)
12268 TABLE ACCESS FULL DIALIN_SESSIONS (cr=325479 pr=267097 pw=0 time=23052374 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 17.54 17.55
db file sequential read 6768 0.02 3.62
db file scattered read 28978 0.07 41.07
latch: cache buffers lru chain 1 0.00 0.00
********************************************************************************

Tom Kyte
March 10, 2006 - 12:15 pm UTC

you would have to provide a little more context.


what happened in between these two tkprofs, which I assume were taken at different times?

Optimizer problem

Zeljko Vracaric, March 13, 2006 - 4:24 am UTC

No, it was one session; the queries are the only ones in that database session. I was trying to optimize one of our most-used PHP scripts (migrating to Oracle from Sybase). I analyzed the 10053 trace that day, but the only thing I spotted is that in the final section the optimizer goal for the second query was all_rows, not first_rows. I tried to change the optimizer mode by alter session and I got the same results. It is the first_rows that is essential for the query to take the plan with the index, which enables the stopkey to stop processing after 25 rows that match the criteria.
It is a very complicated script because it has to answer a lot of really different questions: for instance, give me all sessions that were active at some point in time, and on the other hand give me all sessions in a long period of time matching some criteria. We have to detect the intersection of two intervals and avoid an FTS or index scan on millions of rows, finding criteria to limit the number of rows processed. Optimizing it is of course a subject for another thread, but this problem with a simple inline view was unexpected.


date java question

winny, March 24, 2006 - 8:06 pm UTC

Create a Date class with the following capabilities:
a) Output the date in multiple formats such as
DDD YYYY
MM/DD/YY
June 14, 1992
b) Use overloaded constructors to create Date objects initialized with dates of the formats in part (a).
Hint: you can compare strings using the method equals. Suppose you have two string references s1 and s2; if those strings are equal, s1.equals(s2) returns true. Otherwise, it returns false.


10gR2 linux another similar problem

Zeljko Vracaric, March 27, 2006 - 3:28 am UTC

Hi Tom,

I've found another similar problem with select * from (<query>). This time I used autotrace to document it. It looks like a bug in the optimizer using wrong cardinalities, or we are doing something very wrong in our PHP project.

BILLING@dev> select * from ecm_invoiceitems;

253884 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 4279212659

--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 253K| 49M| 542 (8)| 00:00:03 |
| 1 | TABLE ACCESS FULL| ECM_INVOICEITEMS | 253K| 49M| 542 (8)| 00:00:03 |
--------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
253920 consistent gets
0 physical reads
0 redo size
188764863 bytes sent via SQL*Net to client
107899015 bytes received via SQL*Net from client
761644 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
253884 rows processed

BILLING@dev> select a.*,rownum as rnum from(
2 select /*+first_rows */i.invoiceid,i.invoice_number,ii.product_name,p.amount,co.company,i.time_invoice, c.code as customer_code, i.customerid, i.statusid,i.cost_total
3 from ecm_invoiceitems ii,cm_customers c, cm_contacts co,ecm_invoices i
4 left join ecm_payments_invoices ip on ( i.invoiceid=ip.invoiceid)
5 left join ecm_payments p on ( p.paymentid=ip.paymentid )
6 where
7 i.invoiceid=ii.invoiceid and i.customerid = c.customerid and c.contactid = co.contactid and co.type_ = 'PERSON' and ((p.paymentid is null and i.cost_total between 200-1 and 200+1) or p.amount=200)
8 and (p.paymentid is null or p.is_success in ('U', 'S'))
9 and i.statusid not in (0,303) and time_invoice>to_date('2005-11-01','yyyy-mm-dd')
10 order by i.statusid desc, p.amount,i.time_invoice,i.invoiceid
11 ) a where rownum<25;


Execution Plan
----------------------------------------------------------
Plan hash value: 440181276

----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 24 | 6936 | | 35762 (1)| 00:02:27 |
|* 1 | COUNT STOPKEY | | | | | | |
| 2 | VIEW | | 33450 | 9440K| | 35762 (1)| 00:02:27 |
|* 3 | SORT ORDER BY STOPKEY | | 33450 | 5749K| 11M| 35762 (1)| 00:02:27 |
| 4 | TABLE ACCESS BY INDEX ROWID | ECM_INVOICEITEMS | 1 | 56 | | 1 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 33450 | 5749K| | 34658 (1)| 00:02:23 |
|* 6 | FILTER | | | | | | |
| 7 | NESTED LOOPS OUTER | | 24819 | 2908K| | 28438 (1)| 00:01:57 |
| 8 | NESTED LOOPS OUTER | | 24819 | 2593K| | 22219 (1)| 00:01:32 |
| 9 | NESTED LOOPS | | 24751 | 2296K| | 16019 (1)| 00:01:06 |
| 10 | NESTED LOOPS | | 24751 | 1571K| | 9817 (1)| 00:00:41 |
|* 11 | TABLE ACCESS BY INDEX ROWID| ECM_INVOICES | 24751 | 966K| | 3615 (1)| 00:00:15 |
|* 12 | INDEX RANGE SCAN | INVOICES_TIME_INVOICE | 24856 | | | 17 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID| CM_CUSTOMERS | 1 | 25 | | 1 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | CM_CUSTOME_5955332052 | 1 | | | 1 (0)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID | CM_CONTACTS | 1 | 30 | | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | CM_CONTACT_17544893292 | 1 | | | 1 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | ECM_PAYMEN_800553712 | 1 | 12 | | 1 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | ECM_PAYMENTS | 1 | 13 | | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | ECM_PAYMEN_9475344592 | 1 | | | 1 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | ECM_INVOICEITEMS_INVOICEID | 1 | | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM<25)
3 - filter(ROWNUM<25)
6 - filter(("P"."PAYMENTID" IS NULL AND "I"."COST_TOTAL">=199 AND "I"."COST_TOTAL"<=201 OR "P"."AMOUNT"=200) AND
("P"."PAYMENTID" IS NULL OR ("P"."IS_SUCCESS"='S' OR "P"."IS_SUCCESS"='U')))
11 - filter("I"."STATUSID"<>0 AND "I"."STATUSID"<>303)
12 - access("I"."TIME_INVOICE">TO_DATE('2005-11-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
14 - access("I"."CUSTOMERID"="C"."CUSTOMERID")
15 - filter("CO"."TYPE_"='PERSON')
16 - access("C"."CONTACTID"="CO"."CONTACTID")
17 - access("I"."INVOICEID"="IP"."INVOICEID"(+))
19 - access("P"."PAYMENTID"(+)="IP"."PAYMENTID")
20 - access("I"."INVOICEID"="II"."INVOICEID")


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
131721 consistent gets
0 physical reads
0 redo size
1134 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

BILLING@dev> select * from
2 (select a.*,rownum as rnum from(
3 select /*+first_rows */i.invoiceid,i.invoice_number,ii.product_name,p.amount,co.company,i.time_invoice, c.code as customer_code, i.customerid, i.statusid,i.cost_total
4 from ecm_invoiceitems ii,cm_customers c, cm_contacts co,ecm_invoices i
5 left join ecm_payments_invoices ip on ( i.invoiceid=ip.invoiceid)
6 left join ecm_payments p on ( p.paymentid=ip.paymentid )
7 where
8 i.invoiceid=ii.invoiceid and i.customerid = c.customerid and c.contactid = co.contactid and co.type_ = 'PERSON' and ((p.paymentid is null and i.cost_total between 200-1 and 200+1) or p.amount=200)
9 and (p.paymentid is null or p.is_success in ('U', 'S'))
10 and i.statusid not in (0,303) and time_invoice>to_date('2005-11-01','yyyy-mm-dd')
11 order by i.statusid desc, p.amount,i.time_invoice,i.invoiceid
12 ) a where rownum<25
13 );


Execution Plan
----------------------------------------------------------
Plan hash value: 1216454693

-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 604 | 1147 (2)| 00:00:05 |
| 1 | VIEW | | 2 | 604 | 1147 (2)| 00:00:05 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 2 | 578 | 1147 (2)| 00:00:05 |
|* 4 | SORT ORDER BY STOPKEY | | 2 | 352 | 1147 (2)| 00:00:05 |
| 5 | CONCATENATION | | | | | |
|* 6 | FILTER | | | | | |
| 7 | NESTED LOOPS OUTER | | 1 | 176 | 7 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 163 | 6 (0)| 00:00:01 |
| 9 | NESTED LOOPS OUTER | | 2 | 266 | 5 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 2 | 242 | 4 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 96 | 3 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | ECM_INVOICEITEMS | 198 | 11088 | 2 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID| ECM_INVOICES | 1 | 40 | 1 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | ECM_INVOIC_14275361692 | 1 | | 1 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | CM_CUSTOMERS | 1 | 25 | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | CM_CUSTOME_5955332052 | 1 | | 1 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | ECM_PAYMEN_800553712 | 1 | 12 | 1 (0)| 00:00:01 |
|* 18 | TABLE ACCESS BY INDEX ROWID | CM_CONTACTS | 1 | 30 | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | CM_CONTACT_17544893292 | 1 | | 1 (0)| 00:00:01 |
| 20 | TABLE ACCESS BY INDEX ROWID | ECM_PAYMENTS | 1 | 13 | 1 (0)| 00:00:01 |
|* 21 | INDEX UNIQUE SCAN | ECM_PAYMEN_9475344592 | 1 | | 1 (0)| 00:00:01 |
|* 22 | FILTER | | | | | |
| 23 | NESTED LOOPS OUTER | | 1 | 176 | 35 (0)| 00:00:01 |
| 24 | NESTED LOOPS | | 1 | 163 | 34 (0)| 00:00:01 |
| 25 | NESTED LOOPS | | 2 | 266 | 33 (0)| 00:00:01 |
| 26 | NESTED LOOPS OUTER | | 2 | 216 | 32 (0)| 00:00:01 |
| 27 | NESTED LOOPS | | 2 | 192 | 31 (0)| 00:00:01 |
| 28 | TABLE ACCESS FULL | ECM_INVOICEITEMS | 198 | 11088 | 2 (0)| 00:00:01 |
|* 29 | TABLE ACCESS BY INDEX ROWID| ECM_INVOICES | 1 | 40 | 1 (0)| 00:00:01 |
|* 30 | INDEX UNIQUE SCAN | ECM_INVOIC_14275361692 | 1 | | 1 (0)| 00:00:01 |
|* 31 | INDEX RANGE SCAN | ECM_PAYMEN_800553712 | 1 | 12 | 1 (0)| 00:00:01 |
| 32 | TABLE ACCESS BY INDEX ROWID | CM_CUSTOMERS | 1 | 25 | 1 (0)| 00:00:01 |
|* 33 | INDEX UNIQUE SCAN | CM_CUSTOME_5955332052 | 1 | | 1 (0)| 00:00:01 |
|* 34 | TABLE ACCESS BY INDEX ROWID | CM_CONTACTS | 1 | 30 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | CM_CONTACT_17544893292 | 1 | | 1 (0)| 00:00:01 |
| 36 | TABLE ACCESS BY INDEX ROWID | ECM_PAYMENTS | 1 | 13 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | ECM_PAYMEN_9475344592 | 1 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM<25)
4 - filter(ROWNUM<25)
6 - filter("P"."AMOUNT"=200 AND ("P"."PAYMENTID" IS NULL OR ("P"."IS_SUCCESS"='S' OR
"P"."IS_SUCCESS"='U')))
13 - filter("I"."TIME_INVOICE">TO_DATE('2005-11-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
"I"."STATUSID"<>0 AND "I"."STATUSID"<>303)
14 - access("I"."INVOICEID"="II"."INVOICEID")
16 - access("I"."CUSTOMERID"="C"."CUSTOMERID")
17 - access("I"."INVOICEID"="IP"."INVOICEID"(+))
18 - filter("CO"."TYPE_"='PERSON')
19 - access("C"."CONTACTID"="CO"."CONTACTID")
21 - access("P"."PAYMENTID"(+)="IP"."PAYMENTID")
22 - filter(("P"."PAYMENTID" IS NULL OR ("P"."IS_SUCCESS"='S' OR "P"."IS_SUCCESS"='U')) AND
"P"."PAYMENTID" IS NULL AND LNNVL("P"."AMOUNT"=200))
29 - filter("I"."COST_TOTAL"<=201 AND "I"."TIME_INVOICE">TO_DATE('2005-11-01 00:00:00', 'yyyy-mm-dd
hh24:mi:ss') AND "I"."COST_TOTAL">=199 AND "I"."STATUSID"<>0 AND "I"."STATUSID"<>303)
30 - access("I"."INVOICEID"="II"."INVOICEID")
31 - access("I"."INVOICEID"="IP"."INVOICEID"(+))
33 - access("I"."CUSTOMERID"="C"."CUSTOMERID")
34 - filter("CO"."TYPE_"='PERSON')
35 - access("C"."CONTACTID"="CO"."CONTACTID")
37 - access("P"."PAYMENTID"(+)="IP"."PAYMENTID")


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1214653 consistent gets
0 physical reads
0 redo size
1134 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

BILLING@dev>


the first query plan is ok but the second is very wrong. We have a lot of queries like this in our web application that we are trying to migrate from Sybase. I'd hate to hint queries like this; is there any other solution?

Tom Kyte
March 27, 2006 - 9:54 am UTC

can you tell me what exactly is wrong - given that I spend seconds looking at review/followups and only look at them once.



select * from (<query>)

Zeljko Vracaric, March 28, 2006 - 2:18 am UTC

Hello Tom,

The problem is that the optimizer changes the query plan when we put select * from () around it. I'm sorry I didn't point it out clearly.

I cannot reproduce it on a small and simple example, so I sent real examples from our application in my previous posts. We make heavy use of the construction you recommended:

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

but,


select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS

has a good query plan and its performance is ok. However, with select * from () around it, with or without where rnum>1, the query plan is changed and I believe it is very wrong.

Tracing 10053 on the query in my first post (the one with tkprof) I found that the optimizer goal was probably changed from first_rows to all_rows. I'm not sure, because I'm not an expert in debugging 10053 traces.
In my later post (autotrace) I found another query with a similar problem, and looking at the plan I believe the cardinality for the full scan of the ecm_invoiceitems table was wrong even though the statistics were good -- I included select * from ecm_invoiceitems in the post to prove that. So basically, my previous post has 3 queries with autotrace:

select * from ecm_invoiceitems, to show that the optimizer knows the cardinality.

select ... (the complex query with ecm_invoiceitems in the from clause), which has the correct plan for the first_rows hint.

select * from (select ... (the complex query with ecm_invoiceitems in the from clause)), which has a wrong plan -- the plan is different from the previous one.

I'm surprised by this third query plan; I expected it to be the same as the plan without select * from () around it.

So, trying to be short, I wrote another large post. Keeping it short and explaining things in a simple manner is a talent; I think you have that gift, and that's why your book and site are very popular.

Zeljko


Is it possible in SQL or PL/SQL ?

Parag Jayant Patankar, April 04, 2006 - 2:16 am UTC

Hi Tom,

I am using an Oracle 9.2 database. I have the following data:

drop table toto;
create table toto
(
r char(10)
)
organization external
(
type oracle_loader
default directory data_dir
access parameters
(
records delimited by newline
logfile data_dir:'toto.log'
)
location ('pp.sysout')
)
reject limit unlimited
/

In pp.sysout I am having following data

A
B
C
D=10
E
F
G
A
B
C
D=20
E
F
G
H
I
A
B
C
D=20
E
F
G
H
A
B
C
D=30
E
F
G
H

I want each set of results in a different spool file, starting from 'A' up to the next 'A', with one file per distinct value of 'D'.

For e.g.
1. spool file xxx.1 will contain
A
B
C
D=10
E
F
G

2. spool file xxx.2 will contain (it will have two sets because D=20 appears twice in the data)

A
B
C
D=20
E
F
G
H
I
A
B
C
D=20
E
F
G
H

3. spool file xxx.3 will contain

A
B
C
D=30
E
F
G
H

Kindly let me know if it is possible to do that; if yes, please show me how.

thanks & regards
pjp

Tom Kyte
April 04, 2006 - 9:55 am UTC

I don't know of a way to do that in sqlplus - not with the multiple spools.



It is possible

Michel Cadot, April 06, 2006 - 3:59 am UTC

Hi,

Put the following in a file and execute it.

col sp fold_after
break on sp
set head off
set feed off
set pages 0
set recsep off
set colsep off
spool t
with
t2 as (
select r,
case
when instr(r, '=') != 0
then to_number(substr(r,instr(r,'=')+1))
end value,
rownum rn,
max(case when r = 'A' then rownum end)
over (order by rownum) grp
from toto
),
t3 as (
select r,
max(value) over (partition by grp) value,
rn, grp
from t2
),
t4 as (
select r, value,
max(grp)
over (partition by value order by rn
rows between unbounded preceding and unbounded following)
grp
from t3
)
select 'spool file'||value sp,
'prompt '||r
from t4
order by grp, rn
/
prompt spool off
spool off
@t.lst

It does not work if you have the same D value in non-consecutive groups, and the spool file names contain the D value instead of a consecutive number.

Regards
Michel


Tom Kyte
April 06, 2006 - 10:03 am UTC

interesting workaround - write a single spool that is itself a sqlplus script that does a spool and echo for each file :)

The whole world

Michel Cadot, April 06, 2006 - 10:24 am UTC

Give us SQL*Plus, case expression, instr, substr and analytic functions, connect by and we can handle the SQL world with the help of the model clause from time to time. :))

Generating SQL or SQL*Plus scripts with SQL in SQL*Plus is one of my favorite tools, along with "new_value" on a column to generate polymorphic queries.
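
For example, a minimal sketch of the "new_value" technique (the file name and query are illustrative) - the query result becomes a substitution variable that the rest of the script can use:

col today new_value v_today noprint
select to_char(sysdate, 'yyyymmdd') today from dual;
rem the first "." ends the substitution variable name; the second is the extension
spool report_&v_today..lst
select * from dual;
spool off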

Cheers
Michel


A reader, April 21, 2006 - 1:57 pm UTC

Hi Tom,

In your reply to the initial post in this thread, for paging results you suggested the query

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/

I have something like this for one of our web applications. It works fine, but the one problem I am facing is that when MAX_ROWS = 20 and MIN_ROWS = 1 the query returns almost instantaneously (~2 secs), whereas if I browse to the last page then MAX_ROWS = 37612 and MIN_ROWS = 37601 and the query takes some time (~18 secs). Is this expected behaviour?

Thanks for your help.


Tom Kyte
April 21, 2006 - 3:36 pm UTC

sure - just like on google - google "oracle" and then look at the time to return each page.

first - tell us how long for page 1, and then for page 99.

and tell us how long for page 101 :)


If you want the "last page" - to me you really mean "i want the first page, after sorting the data properly"


No one - NO ONE is going to hit page down that many times (and if you give them a last page button - that is YOUR FAULT - stop doing that, have them sort the opposite way and get the FIRST PAGE). Look at google - do what they do.
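
In other words, a minimal sketch (assuming an ordering column last_name and a page-size bind; names are illustrative) - the "last page" is just the first page of the reversed sort:

select *
  from ( select a.*, rownum rnum
           from ( select t.*
                    from t
                   order by t.last_name DESC, t.rowid DESC ) a
          where rownum <= :page_size )
 where rnum >= 1
 order by last_name;

The outer order by simply presents that single page in ascending order again.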

ORDER BY in inner query

Viorel Hobinca, May 10, 2006 - 12:08 pm UTC

In

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

does the ORDER BY in the inner query have to include a primary key or some unique field?

We ran into a problem where subsequent pages returned the same result set when the ORDER BY clause had only one field with low distribution. We plan on adding a primary key or rowid to the ORDER BY but I'm wondering if there are other ways. We use Oracle 10g.

Tom Kyte
May 11, 2006 - 8:47 am UTC

the order by should have something "unique" about it - good to point that out.

Else - the order of the rows with the same order by key would be indeterminate and could vary!
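
For example, a minimal sketch (table and column names are illustrative) - appending rowid to the order by makes the sort key unique, so each row has exactly one possible position:

select *
  from ( select a.*, rownum rnum
           from ( select t.*
                    from t
                   order by t.last_name, t.rowid ) a
          where rownum <= :max_rows )
 where rnum >= :min_rows;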

ORDER BY ROWID

A reader, May 11, 2006 - 11:30 am UTC

If we use "order by rowid", are we going to get the same results each time we run the query (even if the table has no primary key)?

Tom Kyte
May 11, 2006 - 7:51 pm UTC

as long as the rowid is unique sure.

Ref: ORDER BY in inner query

A reader, May 12, 2006 - 10:34 am UTC

Is ORDER BY *required* in the inner query? I'm wondering if Oracle can guarantee the order of the result set if no order is specified. With no such guarantee the paging will produce indeterminate results ...

Tom Kyte
May 12, 2006 - 9:13 pm UTC

if you don't use an order by (and one that says "this is definitely row 42, no other row can be 42"), then rows "100-110" could change every time you ask for them.

And - it would be "correct"

FIRST_ROWS(n)

Su Baba, May 16, 2006 - 3:01 pm UTC

Does the "n" in the FIRST_ROWS(n) hint represent the number of records I want to have returned? If the following query always returns 50 records, should n be set to 50?

SELECT *
FROM (
SELECT /*+ FIRST_ROWS(50) */ a.*, rownum r
FROM (YOUR QUERY GOES HERE) a
WHERE rownum < :max_row
)
WHERE r >= :min_row;



Tom Kyte
May 16, 2006 - 3:14 pm UTC

it represents the number of records to be returned.

Row Count

Su Baba, June 06, 2006 - 7:06 pm UTC

You had mentioned above that to get the estimated count of a "result set M through N" SQL, you can do one of the following:

"If you use text, you can approximate the result set size.
If you use the CBO and 9i, you can get the estimated cardinality from v$SQL_PLAN"

How does an application use v$sql_plan to get the row count? How is this actually implemented?

thanks


Tom Kyte
June 06, 2006 - 9:42 pm UTC

you query it?
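
For what it is worth, a minimal sketch of that (assuming 10g's sql_id column - in 9i, v$sql_plan has no sql_id, so you would join on address and hash_value from v$sql instead; the bind is illustrative). It pulls the estimated cardinality from the topmost plan line that has one:

select cardinality
  from ( select cardinality
           from v$sql_plan
          where sql_id = :sql_id
            and child_number = 0
            and cardinality is not null
          order by id )
 where rownum = 1;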

Thanks

Rakesh Sreenivasa, June 16, 2006 - 2:52 pm UTC

great !

Rakesh

Very Impressed!!!

Tom Persaud, July 06, 2006 - 4:21 pm UTC

Tom,
Your solution was simple and useful. You seem to enjoy solving problems and helping others. I am sure this takes a lot of your time. Like a good listener, you seem to evaluate each question and comment. Your directness is also admirable. In my opinion, you have a God-given gift and you share it freely and openly. I envy you in that I wish I could be more like you - in skill and attitude.

8.0.5 Solution

Mal, August 04, 2006 - 9:59 am UTC

This code works for me in 8.0.5

select * from
( select rownum rx, title, id from TABLE where rownum < 5 )
where rx > 1

Tom Kyte
August 04, 2006 - 12:14 pm UTC

sure, but add "order by" which is typically used in this construct (doesn't make so much sense to paginate through unordered data)

Previous example inaccurate

Mal, August 04, 2006 - 10:10 am UTC

I posted a little too quickly, 8.0.x doesn't support order clauses in subselect, so while the above example is true, it's not very helpful.

different (weird) results when used in stored function

Johann Tagle, August 15, 2006 - 6:29 am UTC

Hi Tom,

I'm developing a search program on 8.1.7.  When I execute the following:

select ID, list_name from
   (select ID, list_name, rownum as number_row from
      (select distinct b.id ID,
decode(b.preferred_name,null,b.default_name,b.preferred_name) list_name
from bizinfo b, bizlookup l
where contains(l.keywords, 'computer equipment and supplies')>0
        and b.id = l.id
        order by list_name)
    where rownum <= 5)
where number_row >= 1;

I get something like:
        ID LIST_NAME
---------- --------------------------------------------
     63411 2A Info
     65480 ABACIST
       269 ABC COMPUTER
     97285 ACCENT MICRO
     97286 ACCENT MICRO - SM CITY NORTH

However, if I put the same SQL to a stored function:

CREATE Function GETSAMPLEBIZ ( v_search IN varchar2, startpage IN number, endpage IN number)
  RETURN  MYTYPES.REF_CURSOR IS
  RET MYTYPES.REF_CURSOR;
BEGIN
  OPEN RET FOR
    select ID, list_name from
    (select ID, list_name, rownum as number_row from
        (select distinct b.id as ID, decode(b.preferred_name,null,b.default_name,b.preferred_name) list_name
        from bizinfo b, bizlookup l
        where contains(l.keywords, v_search)>0
        and b.id = l.id
        order by list_name
        )
    where rownum <= endpage
    )
    where number_row >= startpage;

   return RET;
END;

(MYTYPES.REF_CURSOR defined elsewhere)

then run:
SQL> var ref refcursor;
SQL> exec :ref := getsamplebiz('computer equipment and supplies',1,5);
SQL> print ref;

I get:

        ID :B1
---------- --------------------------------
     63411 computer equipment and supplies
     65480 computer equipment and supplies
       269 computer equipment and supplies
     97285 computer equipment and supplies
     97286 computer equipment and supplies

Based on the ID column, the result set is the same, but what's supposed to be list_name is replaced by my search parameter.

I can't figure out what's wrong with this.  Would appreciate any suggestion.  

Thanks!

Johann 

Tom Kyte
August 15, 2006 - 8:23 am UTC

I'd use support for that one. it is obviously "not right"

a case to upgrade to 10g?

Johann Tagle, August 15, 2006 - 10:29 am UTC

Hi Tom,

Thanks for the response. However, 8.1.7 is no longer supported, right? I tried it on my development copy of 10g and it's working well there. Hmmm, this might be a good addition to the case for upgrading to 10g that I'm helping my client develop. Without it, I'd either have to give up the benefits of using a stored function or have the front-end application go through every row until it gets to the relevant "page", which would be inefficient.

Thanks again,

Johann

Performance trade-off?

Mahmood Lebbai, September 11, 2006 - 2:15 pm UTC

Tom,

In the query you gave us for the initial question,

select * from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS ) where rnum >= MIN_ROWS

You said the inner query would fetch at most the maximum number of records we are interested in, and afterwards the outer query would cut the required records out of that result set.

But consider a situation where, say, we have 3 million records and I would like to fetch some range of records in some order, say 2999975 to 2999979 (just five records). According to your query, the inner query will select 2999979 records (which looks quite unnecessary) and then select the five records. It looks somewhat odd. What is your justification for this?

I was wondering on this whether there might be any performance trade off on this.

Thanks.


Tom Kyte
September 11, 2006 - 2:58 pm UTC

this is for pagination through a result set on the web.

goto google.

search for Oracle.

Now, goto page 101. Tell me what you see on that page?

Nothing, there is no page 101 - google tells you "don't be SILLY, stop it, search better, get with the program, I am NOT going to waste MY resources on such a question"

We should do the same.

I would seriously ask you "what possible business reason could justify getting those five records - and do you possibly think you really mean to order by something DESC, so that instead of getting the last five, you get the first five???"


This is optimized to produce an answer on screen as soon as possible. No one would hit the page down button that many times.

Look to google, they are the "gold standard" for searching, they got this pagination thing down right.

Wayne Khan, September 26, 2006 - 11:04 pm UTC

Hi Tom,
At first I got bamboozled by the subqueries, but this is great, it worked.

:)

your query worked with a small problem

Soumak, October 18, 2006 - 2:04 pm UTC

What the fetching bit did, as I understood it, was execute the entire query and then select rows N to M (M>N) from the result set. However, is there any way for the query to stop execution and return the result set once the limit M has been reached?

I do not think I can use rownum in such a case. Any alternate suggestions?

I was using HQL (Hibernate), where the two methods setMaxResults() and setFirstResult() did that for me. Any equivalent in SQL?



Tom Kyte
October 18, 2006 - 3:42 pm UTC

the query DOES stop when it gets M-N+1 rows??? not sure at all what you mean.

Excellent, but be aware

Keith Jamieson, October 19, 2006 - 7:53 am UTC

Hi Tom

(ORACLE 10g release 2) 

I'm trying to convince the Java Team here that this is the correct approach to use, to page through a result set. 

They like this solution, with one small exception.
If they insert a record, or remove a record, or if the column value that is being ordered by changes, then potentially the results of their previous/next pagination may change.  (I'm assuming the changes were committed in another session, though the example below is all in one session).

So essentially, they are saying 'What happened to my user'
SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 5 ) -- max rows
  5   where rnum >= 1-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
ADAMS      23-MAY-87          1
ALLEN      20-FEB-81          2
BLAKE      01-MAY-81          3
CLARK      09-JUN-81          4
FORD       03-DEC-81          5

SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 10 ) -- max rows
  5   where rnum >= 6-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
JAMES      03-DEC-81          6
JONES      02-APR-81          7
KING       17-NOV-81          8
MARTIN     28-SEP-81          9
MILLER     23-JAN-82         10

SQL> -- now allen changes name to smith
SQL> update emp
  2  set ename = 'SMITH' where ename = 'ALLEN';

1 row updated.

SQL> -- assume happened in another session
SQL> -- so now user presses prev page
SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 5 ) -- max rows
  5   where rnum >= 1-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
ADAMS      23-MAY-87          1
BLAKE      01-MAY-81          2
CLARK      09-JUN-81          3
FORD       03-DEC-81          4
JAMES      03-DEC-81          5

SQL> -- user ALLEN has disappeared
SQL> insert into scott.emp
  2  select 999,'KYTE',job,mgr,hiredate,sal,comm,deptno
  3  from scott.emp
  4  where rownum = 1
  5  /

1 row created.

SQL> -- new user created
SQL> -- page next
SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 10 ) -- max rows
  5   where rnum >= 6-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
JONES      02-APR-81          6
KING       17-NOV-81          7
KYTE       17-DEC-80          8
MARTIN     28-SEP-81          9
MILLER     23-JAN-82         10

SQL> -- where did KYTE come from?
SQL> rollback;

Rollback complete.

SQL> exit

To be fair, the Java side have not yet come up with a realistic case where this can happen.

Basically, what I have said is: if this can happen, then you have to use some type of collection, e.g. PL/SQL tables (associative arrays); if not, then use the rownum pagination.

I can see that if we added extra columns to the table to track whether a row is new, has been updated, or is marked as deleted, that would get around the problem, but I think it is unnecessary overhead.


 

Tom Kyte
October 19, 2006 - 8:21 am UTC

or flashback query, if they want to freeze the result set as of a point in time. Before they start the first time, they would call dbms_flashback to get the system change number, and they could use that value to get a consistent read - across sessions, connections and so on.

Excellent as usual.

Keith Jamieson, October 20, 2006 - 9:14 am UTC

Just tried this out (as user system).
It worked :)

For ordinary users, they must be granted execute privileges on dbms_flashback. I had to log on as SYSDBA to do this.
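
For example (run as SYSDBA; the grantee name is illustrative):

grant execute on sys.dbms_flashback to scott;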



SQL> declare
  2  v_scn number := dbms_flashback.get_system_change_number;
  3  begin
  4  DBMS_OUTPUT.PUT_LINE('---------------');
  5  DBMS_OUTPUT.PUT_LINE('SHOW THE DATA');
  6  DBMS_OUTPUT.PUT_LINE('---------------');
  7  for cur in
  8  (
  9  select *
 10      from ( select a.*, rownum rnum
 11               from ( select ename,hiredate  from scott.emp
 12             --  as of scn(v_scn)
 13               order by ename ) a
 14              where rownum <= 5 ) -- max rows
 15    where rnum >= 1-- min_rows
 16  )
 17  loop
 18  dbms_output.put_line(to_char(cur.rnum)||' '||cur.ename);
 19  end loop;
 20  DBMS_OUTPUT.PUT_LINE('---------------');
 21  DBMS_OUTPUT.PUT_LINE('MODIFY THE DATA');
 22  DBMS_OUTPUT.PUT_LINE('---------------');
 23  update scott.emp
 24  set ename = 'ALLEN' where ename = 'DARN';
 25  commit;
 26  DBMS_OUTPUT.PUT_LINE('---------------');
 27  DBMS_OUTPUT.PUT_LINE('SHOW THE NEW DATA');
 28  DBMS_OUTPUT.PUT_LINE('---------------');
 29  for cur in
 30  (
 31  select *
 32      from ( select a.*, rownum rnum
 33               from ( select ename,hiredate  from scott.emp
 34             --  as of scn(v_scn)
 35               order by ename ) a
 36              where rownum <= 5 ) -- max rows
 37    where rnum >= 1-- min_rows
 38  )
 39  loop
 40  dbms_output.put_line(to_char(cur.rnum)||' '||cur.ename);
 41  end loop;
 42  DBMS_OUTPUT.PUT_LINE('---------------');
 43  DBMS_OUTPUT.PUT_LINE('SHOW DATA BEFORE MODIFICATION');
 44  DBMS_OUTPUT.PUT_LINE('---------------');
 45  for cur in
 46  (
 47  select *
 48      from ( select a.*, rownum rnum
 49               from ( select ename,hiredate  from scott.emp
 50               as of scn(v_scn)
 51               order by ename ) a
 52              where rownum <= 5 ) -- max rows
 53    where rnum >= 1-- min_rows
 54  )
 55  loop
 56  dbms_output.put_line(to_char(cur.rnum)||' '||cur.ename);
 57  end loop;
 58  end;
 59  /
---------------
SHOW THE DATA
---------------
1 ADAMS
2 BLAKE
3 CLARK
4 DARN   <<================
5 FORD
---------------
MODIFY THE DATA
---------------
---------------
SHOW THE NEW DATA
---------------
1 ADAMS
2 ALLEN  <<================
3 BLAKE
4 CLARK
5 FORD
---------------
SHOW DATA BEFORE MODIFICATION
---------------
1 ADAMS
2 BLAKE
3 CLARK
4 DARN    <<================
5 FORD

PL/SQL procedure successfully completed.

SQL> exit
 

Quibbles/questions

R Flood, October 26, 2006 - 5:41 pm UTC

First, this is a great, informed discussion. But unless I am missing something, the conclusions are not applicable to many problems (and not always faster than the competition). Two observations and a question on the stats:

1. Google is the gold standard for a particular kind of searching where errors in sequence and content are permissible (to a degree), and concepts like subtotal/total are almost irrelevant. It's not a good model when the results represent or are used in a complex financial calculation, rocket launch, etc.

2. The assumption that no one wants to hit 'next' more than a few times is not always true. In general, sure. But there are plenty of business use cases where hitting 'next' more than a few times is common. Applications development is driven by usage, and answering users with "We do what Google does" or "You should only hit 'next' 3 or fewer times" can quickly lead to unemployment.

3. Is there not a breakeven point where the many-rows-and-cursor approach would become more efficient than hitting the DB for every set? While a large table + cursor pagination doesn't make sense, even if 10 'nexts' is the norm, if you get 200-400 rows and cursor through them, wouldn't the total database expense be less than subselect+rownum fairly soon? The numbers above seemed to suggest 3 nexts was the breakeven, and that was assuming (I think) that the cursor case grabbed the whole table instead of, say, 5-10x the rows displayed at once.

Tom Kyte
October 27, 2006 - 7:35 am UTC

1) sure it is, show me otherwise.

2) sure it is, show me otherwise. a couple of times means "10 or 20" as well, show me when you really need to page MORE THAN THAT - commonly.

There are exceptions to every rule - this is a universal fact - for 999999999 times out of 1000000000, what is written here applies. So, why is it not done this way that often?

3) what is a many rows and cursor approach???

Followup

R Flood, October 27, 2006 - 11:01 am UTC

1. In my experience, Google freely returns "good enough" data. That is, the order might be different, the sum total of pages might be cleverly (or not) truncated, etc. This is just fine for a search engine, but not for calculations that depend on perfect sequence and accuracy. But is it not obvious that what is ideal for a search engine (page speed=paramount, data accuracy=not so much) is different than what matters for finance, rocket science, etc.?

2./3. (they are connected)
Sorry about the faulty reference. I thought the many-rows-and-cursor approach was somewhere in this thread. What I meant by it was a Java (or whatever) server that gets chunks of rows (less than the whole table, but more than one screen, adjustable based on use case) and returns a screenful at a time to the client.

The core question was: Isn't there a point where getting all rows (but certainly a few hundred at once) in a server program and returning them on demand will be much easier on the database than hitting it for each set?

Tom Kyte
October 27, 2006 - 6:20 pm UTC

1) why would a finance report reader need to know there were precisely 125,231 rows in their report?

2) and then maintains a state and therefore wastes a ton of resources and therefore rockets us back to the days of client server. I'm not a fan.



don't spend a lot of time trying to "take it easy on the database", if people spent more time on their database design and learning the database features (rather than trying to outguess the database, doing their own joins, sorting in the client - whatever) we'd all be much much better off.

Rownum problem

Anne, November 01, 2006 - 12:19 pm UTC

Hi Tom, I have an interesting problem here: a simple select of rownum from two tables shows different results - #1 returns rownum as expected, but #2 doesn't. Could you please explain why...


#1. select rownum, id
from dnr_refund_outbound_process
order by id desc;

ROWNUM ID
---------- ----------
1 125
2 124
3 123
4 122
5 121
6 120
7 119

#2.select rownum
, adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc ;

ROWNUM ADJUSTMENT_ID
---------- -------------
7 8296
6 8295
5 8294
4 8293
3 8292
2 8291
1 7808

Both DNR_REFUND_OUTBOUND_PROCESS and AR_ADJUSTMENTS_ALL are tables.

Indexes are :
CREATE UNIQUE INDEX PK_DNR_REFUND_OUTBOUND_PROCESS ON DNR_REFUND_OUTBOUND_PROCESS
(ID) ......

CREATE UNIQUE INDEX AR_ADJUSTMENTS_U1 ON AR_ADJUSTMENTS_ALL
(ADJUSTMENT_ID) ....

If there is any other info you need from me, please let me know.

As always, appreciate your help!




Tom Kyte
November 01, 2006 - 6:16 pm UTC

they both are showing rownum??

(but you need to understand that rownum is assigned during the where clause processing, before sorting!)

you probably meant:

select rownum
, adjustment_id
from
(select adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc );

to sort AND THEN assign rownum

Rownum problem - tkprof results

Anne, November 01, 2006 - 12:48 pm UTC

Hi Tom,

I missed sending in the tkprof results for my earlier question. I hope this may give some clue...

*** SESSION ID:(31.4539) 2006-11-01 11:29:34.253

********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.01 0 0 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS)
********************************************************************************

select rownum, id
from dnr_refund_outbound_process
order by id desc

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 8 0.00 0.00 0 8 0 93
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 10 0.00 0.02 0 8 0 93

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS)

Rows Row Source Operation
------- ---------------------------------------------------
93 COUNT
93 INDEX FULL SCAN DESCENDING PK_DNR_REFUND_OUTBOUND_PROCESS (object id 270454)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
93 COUNT
93 INDEX (FULL SCAN DESCENDING) OF
'PK_DNR_REFUND_OUTBOUND_PROCESS' (UNIQUE)

********************************************************************************

select rownum
, adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 4 0 7

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS)

Rows Row Source Operation
------- ---------------------------------------------------
7 SORT ORDER BY
7 COUNT
7 TABLE ACCESS BY INDEX ROWID AR_ADJUSTMENTS_ALL
7 INDEX RANGE SCAN AR_ADJUSTMENTS_N2 (object id 28058)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
7 SORT (ORDER BY)
7 COUNT
7 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'AR_ADJUSTMENTS_ALL'
7 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'AR_ADJUSTMENTS_N2'
(NON-UNIQUE)

********************************************************************************


BEGIN sys.dbms_system.set_sql_trace_in_session(31, 4539, false); END;
..................





Rownum problem

Bella Joseph, November 02, 2006 - 9:23 am UTC

Hi Tom,

select rownum
, adjustment_id
from
(select adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc );


Yes, this is exactly what I meant, but I expected #2 sql to return the same results. I think I am missing out on the specific reason for this ...

Both sqls are pretty much the same - they are both selecting rownum with order by desc. Why does #2 return rownum in descending order instead of ascending like #1?

From your comments, I gather that the reasoning behind this is that #1 has no where clause to process, and hence rownum is assigned during the sorting, whereas #2 has a where clause to process, and hence rownum is assigned during the where clause, before the sorting. Would you agree?

Thanks for your patience! :)

Tom Kyte
November 02, 2006 - 9:29 am UTC

rownum is assigned AFTER the where clause and BEFORE the order by.

so, you selected rows, filtered them, numbered them (randomly, as they were encountered) and then sorted the results.

If the first one did anything you think was "correct" as far as rownum and ordering, it was purely by ACCIDENT (eg: likely you used an index to read the data sorted in the first place and the order by was ignored - in fact, the tkprof shows exactly that)
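
A minimal illustration of the difference, using scott.emp as elsewhere in this thread:

-- rownum is assigned as rows are produced, BEFORE the sort
select rownum, ename from scott.emp order by ename;

-- the inline view sorts first, THEN rownum is assigned
select rownum, ename
  from ( select ename from scott.emp order by ename );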

R Floods Post

Keith Jamieson, November 16, 2006 - 10:29 am UTC

I just read R Flood's post, and I am implementing this precisely so that Java can get a number of records and the user can paginate next/previous as many times as they want to.

Java will be able to scroll forwards and backwards through
the retrieved rows, so our goal of bi-directional cursor scrolling is achieved.

So, essentially, by pressing next or previous all we are doing is replacing the rows that we scroll through
with the next/previous ones in the list.

I have had many discussions/conversations around this, and the only real issue was the potential for data inconsistency, which is solved by using flashback query (dbms_flashback).

The benefits of this approach are:

Database retrieves the data quickly. Bind variable usage.
parse once execute many.
We can scroll through an entire record set if so desired.
The number of records to be retrieved at a time can be amended dynamically, by keeping the values in a table.
There is also potentially less memory overhead on the client.

So, as far as I'm concerned this is now better than google search.
If you want to page through a million row table 10 at a time you can do so.

Tom Kyte
November 16, 2006 - 3:24 pm UTC

downside is - you suck resources like a big drain, under the ocean, really fast, really hard.

I don't like it. not a good thing.

to find out how many records there are, you have to GET THEM ALL. what a waste

but, it is up to you, you asked my opinion, that is it and it is rather consistent over the years and not likely to change.

pagination query

o retrieves data quickly, first pages fast. no one goes way down.

o uses binds, don't know why you think that is a special attribute of yours

o we can scroll too, in fact, I can go to page "n" at any time

o we are as dynamic as anything else.

o I don't see how you say "less memory in client" with your approach, quite the OPPOSITE would be true, very much so. I need to keep a page, and you?



and you know, if you want to page through a million row table - more power to you, most people have much much more important stuff to do.

Paging by Partition

Alessandro Nazzani, December 19, 2006 - 10:21 am UTC

Is there a smart way (that is, without resorting to procedural code) to paginate a "partition by" query without breaking groups (10g)?

Suppose the following statement:

select groupid, itemid, itemname, itemowner,
row_number() over (partition by groupid order by itemname) seq,
max(itemid) over (partition by groupid) lastitem from
V$GROUP_TYPES where itemtype=1 order by groupid, itemname;

I've been asked to add pagination but, if the last record of the page is not the last record of the group, I should "extend" the page until I reach the end of the group (groups can range from 2 to roughly 20 records each).

Thanks for your attention.

Alessandro

Tom Kyte
December 19, 2006 - 10:25 am UTC

no create
no insert
no look

Alessandro Nazzani, December 19, 2006 - 11:59 am UTC

> no create
> no insert
> no look

My bad, sorry.

CREATE TABLE V$GROUP_TYPES (GROUPID NUMBER(10) NOT NULL,
ITEMID NUMBER(10) NOT NULL, ITEMNAME VARCHAR2(10) NOT NULL,
ITEMOWNER VARCHAR2(10) NOT NULL, ITEMTYPE NUMBER(1) NOT NULL);

INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (1, 12795, 'Item 12795', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (1, 12796, 'Item 12796', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (2, 13151, 'Item 13151', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (2, 13152, 'Item 13152', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (3, 6640, 'Item 6640', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (3, 6641, 'Item 6641', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (3, 6642, 'Item 6642', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (4, 4510, 'Item 4510', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (4, 4511, 'Item 4511', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (4, 4512, 'Item 4512', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (5, 10095, 'Item 10095', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (5, 10096, 'Item 10096', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8811, 'Item 8811', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8812, 'Item 8812', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8811, 'Item 8811', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8812, 'Item 8812', 'Myself', 1);
commit;

select groupid, itemid, itemname, itemowner,
row_number() over (partition by groupid order by itemname) seq,
max(itemid) over (partition by groupid) lastitem from
V$GROUP_TYPES where itemtype=1 order by groupid, itemname;

GROUPID ITEMID ITEMNAME ITEMOWNER SEQ LASTITEM
-------- -------- ---------- ---------- ----- ----------
1 12795 Item 12795 Myself 1 12796
1 12796 Item 12796 Myself 2 12796
2 13151 Item 13151 Myself 1 13152
2 13152 Item 13152 Myself 2 13152
3 6640 Item 6640 Myself 1 6642
3 6641 Item 6641 Myself 2 6642
3 6642 Item 6642 Myself 3 6642
4 4510 Item 4510 Myself 1 4512
4 4511 Item 4511 Myself 2 4512
4 4512 Item 4512 Myself 3 4512
5 10095 Item 10095 Myself 1 10096
5 10096 Item 10096 Myself 2 10096
6 8811 Item 8811 Myself 1 8812
6 8811 Item 8811 Myself 2 8812
6 8812 Item 8812 Myself 3 8812
6 8812 Item 8812 Myself 4 8812

If, for example, page size is set to 5, I should have the following pages:

GROUPID ITEMID ITEMNAME ITEMOWNER SEQ LASTITEM
-------- -------- ---------- ---------- ----- ----------
1 12795 Item 12795 Myself 1 12796
1 12796 Item 12796 Myself 2 12796
2 13151 Item 13151 Myself 1 13152
2 13152 Item 13152 Myself 2 13152
3 6640 Item 6640 Myself 1 6642
3 6641 Item 6641 Myself 2 6642
3 6642 Item 6642 Myself 3 6642

GROUPID ITEMID ITEMNAME ITEMOWNER SEQ LASTITEM
-------- -------- ---------- ---------- ----- ----------
4 4510 Item 4510 Myself 1 4512
4 4511 Item 4511 Myself 2 4512
4 4512 Item 4512 Myself 3 4512
5 10095 Item 10095 Myself 1 10096
5 10096 Item 10096 Myself 2 10096

GROUPID ITEMID ITEMNAME ITEMOWNER SEQ LASTITEM
-------- -------- ---------- ---------- ----- ----------
6 8811 Item 8811 Myself 1 8812
6 8811 Item 8811 Myself 2 8812
6 8812 Item 8812 Myself 3 8812
6 8812 Item 8812 Myself 4 8812

Thanks in advance for your time.

Alessandro

Tom Kyte
December 19, 2006 - 12:55 pm UTC

ops$tkyte%ORA10GR2> update v$group_types set groupid = groupid*10;

16 rows updated.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select *
  2    from (
  3  select groupid, itemid, itemname, itemowner,
  4         row_number() over (partition by groupid order by itemname) seq,
  5         max(itemid) over (partition by groupid) lastitem,
  6             dense_rank() over (order by groupid) page_no
  7    from V$GROUP_TYPES where itemtype=1 order by groupid, itemname
  8         )
  9  /

   GROUPID     ITEMID ITEMNAME   ITEMOWNER         SEQ   LASTITEM    PAGE_NO
---------- ---------- ---------- ---------- ---------- ---------- ----------
        10      12795 Item 12795 Myself              1      12796          1
        10      12796 Item 12796 Myself              2      12796          1
        20      13151 Item 13151 Myself              1      13152          2
        20      13152 Item 13152 Myself              2      13152          2
        30       6640 Item 6640  Myself              1       6642          3
        30       6641 Item 6641  Myself              2       6642          3
        30       6642 Item 6642  Myself              3       6642          3
        40       4510 Item 4510  Myself              1       4512          4
        40       4511 Item 4511  Myself              2       4512          4
        40       4512 Item 4512  Myself              3       4512          4
        50      10095 Item 10095 Myself              1      10096          5
        50      10096 Item 10096 Myself              2      10096          5
        60       8811 Item 8811  Myself              1       8812          6
        60       8811 Item 8811  Myself              2       8812          6
        60       8812 Item 8812  Myself              3       8812          6
        60       8812 Item 8812  Myself              4       8812          6

16 rows selected.

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select groupid, itemid, itemname, itemowner,
  4         row_number() over (partition by groupid order by itemname) seq,
  5         max(itemid) over (partition by groupid) lastitem,
  6             dense_rank() over (order by groupid) page_no
  7    from V$GROUP_TYPES where itemtype=1 order by groupid, itemname
  8         )
  9   where page_no = 5
 10  /

   GROUPID     ITEMID ITEMNAME   ITEMOWNER         SEQ   LASTITEM    PAGE_NO
---------- ---------- ---------- ---------- ---------- ---------- ----------
        50      10095 Item 10095 Myself              1      10096          5
        50      10096 Item 10096 Myself              2      10096          5

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select groupid, itemid, itemname, itemowner,
  4         row_number() over (partition by groupid order by itemname) seq,
  5         max(itemid) over (partition by groupid) lastitem,
  6             dense_rank() over (order by groupid) page_no
  7    from V$GROUP_TYPES where itemtype=1 order by groupid, itemname
  8         )
  9   where page_no = 6
 10  /

   GROUPID     ITEMID ITEMNAME   ITEMOWNER         SEQ   LASTITEM    PAGE_NO
---------- ---------- ---------- ---------- ---------- ---------- ----------
        60       8811 Item 8811  Myself              1       8812          6
        60       8811 Item 8811  Myself              2       8812          6
        60       8812 Item 8812  Myself              3       8812          6
        60       8812 Item 8812  Myself              4       8812          6

 

Mike, December 19, 2006 - 1:17 pm UTC

My experience has been that paging through a large data set is a sign that someone hasn't spoken to the users and discovered what they really need to see. Give the users the ability to find the data they need or use business logic to present the users with the data they most likely need.

One way to do this is to build the ability for a user to define and save the default criteria for the data returned when a screen is loaded.

Sure, there will be exceptions, but I think as a general rule, an application should be designed without the user needing to page through data "looking" for the necessary record.


Tom Kyte
December 19, 2006 - 3:47 pm UTC

(but what about my home page or google?)

Pagination is a pretty necessary thing for most all applications in my experience.

Alessandro Nazzani, December 19, 2006 - 1:33 pm UTC

Tom,

as always thank you very much for your patience.

If I understand correctly, you are proposing to navigate "by groups": instead of setting a number of rows per page, setting a number of groups.

The only drawback is that if I have 10 groups of 2 records followed by 10 groups of 20 records, I will end up with pages of *significantly* different sizes (in terms of records); guess I can live with that, after all. :)

Thanks for helping me approaching the problem from a different point of view.

Alessandro

Mike, December 20, 2006 - 1:17 pm UTC

While I can see the value in having a technical discussion on the best way to code paging through screens, I feel that users should have to page through data sets only very infrequently.

My experience has been that too many applications default to the "dump a lot of records on the screen and let the user page through to find the necessary record" style. When I see users paging through screens, I always look to see if that task/screen can be improved.

In many cases, I can produce the result the user needs without the need to page through result sets. Sometimes it is an easy change, and sometimes it takes more work. I often add the ability for a user to save default search criteria for each applicable screen.

>> (but what about my home page or google?)

Why did you decide to present 10 articles sorted by Last Updated (I guess)? Do most people come to Asktom to "browse" or do they go looking for an answer to a specific topic? Can you tell how many people never clicked a link on the home page, but typed in a query instead?

In my case, 99% of the time I go to Asktom, I ignore the home page and type in a query for a topic I'm interested in.


Tom Kyte
December 20, 2006 - 1:25 pm UTC

I see it in virtually all applications - all of them.

people come to browse, yes.

It is my experience that if people do not find it in 10 or less pages, they refine their search - but you know what.....

that doesn't mean "page two" isn't necessary and if page two is needed, you need....

pagination

Mike, December 20, 2006 - 2:00 pm UTC

Sorry, I'm not making myself clear. I have no problem with supporting pagination in applications. I just feel it should be used very infrequently. I track paging in my logging table, so I can tell when users are paging frequently. Usually, when I visit the issue, the user either needs training or the screen/process needs to be re-designed.

I was just trying to make a usability suggestion related to the technical question.

pagination

benn, January 09, 2007 - 10:23 am UTC

Hi tom
I have a question about pagination. I want a procedure that will accept 'from' and 'to' parameters (rownum) for pagination, as well as the order by column as a parameter (the order by changes based on the parameter). My query uses multiple tables which don't have unique keys, and the pagination does not work properly in that case.
Please have a look at the procedure:

CREATE OR REPLACE Procedure P_pagination
(cur out sys_refcursor,end1 number,start1 number,ordr number)
as
Var_sql varchar2(4000);
begin
var_sql := ' Select * '||
' From '||
' (select rownum rwnum,aa.* from ' ||
' (select t1.a,t2.a,t3.a,t4.b from t1,t2,t3,t4 where < all the joins> order by '||ordr||' )aa'||
' where rownum <='|| end1 ||') ' ||
' where rwnum >='|| start1 ;

open cur for var_sql;
end ;
/


Tom Kyte
January 11, 2007 - 9:30 am UTC

you have unique stuff - rowids.


order by ' || ordr || ', t1.rowid, t2.rowid, .... ) aa '
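
A minimal sketch of that procedure with the rowid tiebreakers added, and with the row limits passed as binds instead of being concatenated in (SCOTT's EMP and DEPT stand in for the real tables; since ordr is concatenated into the statement, real code should validate it against a whitelist):

create or replace procedure p_pagination
(cur out sys_refcursor, end1 number, start1 number, ordr number)
as
  var_sql varchar2(4000);
begin
  var_sql := ' select * '||
             ' from ( select rownum rwnum, aa.* '||
             '          from ( select e.ename, e.sal, d.dname '||
             '                   from scott.emp e, scott.dept d '||
             '                  where e.deptno = d.deptno '||
             '                  order by '||ordr||', e.rowid, d.rowid ) aa '||
             '         where rownum <= :end_row ) '||
             ' where rwnum >= :start_row';
  open cur for var_sql using end1, start1;
end;
/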

Re: I don't like It

Keith Jamieson, January 15, 2007 - 5:06 am UTC

Okay, I think either I have a unique situation, or more likely, I didn't explain myself very well.

I 100% agree that the pagination query is the way to go.
Effectively, what I have done is suggested parameterising the pagination query in a procedure and have the start and end rows for the pagination query controlled in Java.

Previously, our code would be a query limited by rownum to, say, 10,000 rows. This limit was globally set in a package.
Apparently the reason it was introduced was that the clients couldn't handle all the data being passed to them, i.e. they used to run out of memory in the client, and this was the solution applied at the time. So what I was saying here is that using the pagination query results in less memory being returned to the client in each call, as opposed to potentially 10,000 rows being downloaded to the client.
( I do know that the user should be forced to put in some selection criteria, but at present this is not the case).

I can quite see that the flashback query may require additional resources, but this is a compromise, which will allow the pagination query to be used.

Scrolling forwards and backwards is required, so my choices as far as I see it are:

1) Stick with the query being limited by rownum <= 10,000
(Which has already caused a couple of issues).
or
2) use a parameterised pagination query.


Of course, I do know that the correct approach to limiting the number of rows is to force the user to put in appropriate selection criteria. I'm working towards that goal.


Use of abbreviations

A reader, January 15, 2007 - 5:54 am UTC

Tom,

regarding 'IM' speak, I think you could check whether the page has any "u" or "ur" or "plz"... words and replace them with empty strings, so that the sentence no longer makes proper sense

What if you want ALL_ROWS

Rahul, January 16, 2007 - 7:05 pm UTC

Tom,

As Always, thank you for your help to Oracle world.

I have a situation where, for a business process, I am getting all the results into a staging table and the users take decisions based on that.

So, now, they have an option of certain filters on that table query (I am implementing these filters using NDS, and as taught by you, using bind variables).

Then, they would take the decisions based on the result set. There is a good possibility that they would be paging through the result set no matter the size.

Doesn't it make sense, in this case, to use ALL_ROWS instead of FIRST_ROWS because they have to check (actual check box on the front end) which records to work on?

If so, then, should I use ALL_ROWS on every stage of the SQL Statement?

Also, then, in this case, wouldn't it make sense to give them the count of how many rows (they are not that many based on the filters) there are in the result set?

Thank you,
Rahul

Pagination with total number of records

Mahesh Chittaranjan, January 22, 2007 - 12:23 am UTC

Tom,

I have a similar situation to R Flood's, except that I do not need the dbms_flashback query. The code that calls the pagination procedure is in a web application. Given below is the function I use to get the page data. The only issue I have is that I HAVE TO show the total number of records and "page x of y" (easy to calculate when the total and page size are known). The question is: can the total number of records for the query be obtained in a better fashion than below?

create or replace function nmc_sp_get_customer_page(customerName varchar2, pageNumber int, pageSize int, totalRecords OUT int)
return types.ref_cursor
as
cust_cursor types.ref_cursor;
begin
declare
startRec int;
endRec int;
pageNo int;
pSize int;
begin
-- pageNumber = 0 indicates the last page

-- pageSize parameter is set in the web application's property file
-- The check below is just so that the code works even if wierd values are set

if pageSize < 0 or pageSize > 100 then
pSize := 25;
else
pSize := pageSize;
end if;

pageNo := pageNumber;

-- How can this be optimized?
-- Is it possible to get the count without having to run the query below?

select count(name) into totalRecords
from customer
where name like customerName;

-- calculate start and end records to be used as MINROWS and MAXROWS

if pageNumber <> 0 then
startRec := ((pageNumber - 1) * pSize) + 1;
endRec := startRec + pSize - 1;

if endRec >= totalRecords then
pageNo := 0;
end if;
else
-- calculate how many records to show on the last page.

endRec := mod(totalRecords, pSize);

if endRec = 0 then
endRec := pSize;
end if;
end if;

if pageNo <> 0 then
-- For any page other than the last page, use this.
-- The user is probably not going to see more than the first 5 pages

open cust_cursor for
select name from
(select a.*, rownum rnum from
(select name from customer where name like customerName order by name) a
where rownum <= endRec)
where rnum >= startRec;
else
-- Since there is a last page button on the web page, the user is likely to click it

open cust_cursor for
select name from
(select name from customer where name like customerName order by name desc)
where rownum <= endRec
order by name;

end if;

return cust_cursor;
end;
end nmc_sp_get_customer_page;
/
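
For what it is worth, one way to avoid running the count as a separate query is to carry the total along in the page query itself with an analytic count. A minimal sketch of just the cursor (note the trade-off: count(*) over () forces the whole result set to be counted, which works against the "get the first page fast" goal, so the separate count - or an estimate - is often preferred):

open cust_cursor for
  select name, total_cnt
    from ( select a.*, rownum rnum
             from ( select name,
                           count(*) over () total_cnt
                      from customer
                     where name like customerName
                     order by name ) a
            where rownum <= endRec )
   where rnum >= startRec;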

another solution to the initial question

Maarten, January 22, 2007 - 8:57 am UTC

I just read the initial question and think there is yet another solution.

Here's my contribution:

/* if rnum = results_total, the last page is reached */
select c.*
from (select width_bucket (
b.rnum
, 1
, ( b.result_total
- mod (b.result_total, 10))
/* try to get the exact number of records in a bucket (10), the rest go into the overflow bucket */
+ 1
, (trunc (b.result_total
/ 10))
/* indicate how much buckets you need, derived from # record per page you desire (10) */
) as page_nr
, b.rnum /* original rownumber */
, b.table_name
, b.tablespace_name
, b.result_total /* total number of records */
from (select (last_value (a.rnum) over (order by a.dummy_for_last_value)) as result_total
, a.rnum
, a.table_name
, a.tablespace_name
from (select rownum rnum /* the actual query */
, ute.table_name
, ute.tablespace_name
, 1 as dummy_for_last_value
from user_tables ute
order by ute.tablespace_name /* do ordering here */
) a) b) c

Tom, I need your help on this

Asim, January 25, 2007 - 4:29 pm UTC

Tom,
This is what we are doing -

inside a pl/sql block -

cursor c1 is select id from id_assign where status = 0 and rownum =1 for update;

...

open c1;
update id_assign
set status = 1
where current of c1;

close c1;

The "select for update" is doing a full table scan even though the status column has an index, because the full scan's COST is lower than the index scan's.

Any suggestions please to make it faster??

Thanks,
Asim




Asim, February 08, 2007 - 9:48 am UTC

Tom,
Could you please give your input on this -
This is what we are doing now
inside a pl/sql block -

cursor c1 is select /*+ INDEX(id_assign x1id_assign)*/id from id_assign where status = 0 and rownum =1 for update;

where x1id_assign is an index for column status.
...

open c1;
update id_assign
set status = 1
where current of c1;

close c1;

Our requirement is to get any one id which has status = 0 and then mark this id as used by setting status = 1 and assign_dt = sysdate.

Now this table has around 2 million ids. And this proc gets called once per record processed, to assign an id.

After adding the index hint it is somewhat faster, but not yet up to the speed the business wants. Any suggestions to make it faster?

Thanks,
Asim


Tom Kyte
February 08, 2007 - 11:20 am UTC

you should have a fetch in there - if you want to have a "current of"

but one wonders why you bother with the select at all? why not just

update t set status = 1 where status = 0 and rownum = 1;


that index hint - why????? remove it. If you have an index on status, the update should use it (because of the rownum=1).

Asim, February 08, 2007 - 2:32 pm UTC

Hi Tom,
Thanks for your reply.

Actually, there is indeed a fetch before the update. Sorry I missed it while putting the question together.

The reason we need the select is that we need the ID returned from this stored proc, and we also mark the id as used so that nobody else can use it.


This is what we are doing in brief -

We have a stored proc which basically gets called from Ab Initio(ETL Tool) for inserting each record for the initial load. Before inserting the record into the database, it does some validations as well as some manipulations inside the main proc and then it calls the proc below to get an ID and mark it as used.

This same process gets repeated for millions of records during the initial load.


Here is the procedure -
============================================================
CREATE PROCEDURE GETID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)

PRAGMA AUTONOMOUS_TRANSACTION;
V_ID varchar2(16);
CURSOR c1 IS SELECT ID FROM ID_ASSIGN WHERE STATUS IS NULL AND ROWNUM <2 FOR UPDATE;
PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

CURSOR c1 IS
SELECT /*+ INDEX(ID_ASSIGN X1ID_ASSIGN)*/ ID FROM ID_ASSIGN WHERE STATUS = 0 AND ROWNUM =1 FOR UPDATE;

BEGIN

OPEN c1;

FETCH c1 into V_ID;

IF c1%NOTFOUND OR c1%NOTFOUND IS NULL THEN
V_ERROR_MSG := 'No ID is available for assignment';
RAISE XAPPERROR;
END IF;

UPDATE ID_ASSIGN
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE CURRENT OF c1;

COMMIT;

CLOSE c1;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

============================================================

Now, when we did a load test from the Oracle box without the index hint on status, it was loading 50 recs/sec, and when it was invoked from Ab Initio, it was loading 10 recs/sec.

This was not acceptable (without the hint it was doing a full table scan), so we tried the index hint, which improved the numbers to 300 recs/sec from the Oracle box and 110 recs/sec from Ab Initio.

The main table, where the new record is supposed to get inserted, will have around 110 million records in production and is partitioned; this ID_ASSIGN table will have around 2 to 3 million records and is not partitioned - some of them used, some available.


Your views please.

Thank you,
Asim
Tom Kyte
February 08, 2007 - 4:21 pm UTC

update t set x = y where <condition> returning id into l_id;
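
A minimal sketch of that against the table in question (assuming the ID_ASSIGN columns described above):

declare
  l_id id_assign.id%type;
begin
  update id_assign
     set status = 1,
         assign_dt = current_timestamp
   where status = 0
     and rownum = 1
  returning id into l_id;

  -- sql%rowcount tells us whether a free id was actually found
  if sql%rowcount = 0 then
     raise_application_error(-20001, 'No ID is available for assignment');
  end if;
end;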


Asim, February 08, 2007 - 3:16 pm UTC

Hi Tom,

I am sorry that I have put cursor definition twice in the procedure in my previous response -

here is the correct procedure -

CREATE PROCEDURE GETID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;
V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

CURSOR c1 IS
SELECT /*+ INDEX(ID_ASSIGN X1ID_ASSIGN)*/ ID FROM ID_ASSIGN WHERE STATUS = 0 AND ROWNUM =1 FOR UPDATE;

BEGIN

OPEN c1;

FETCH c1 into V_ID;

IF c1%NOTFOUND OR c1%NOTFOUND IS NULL THEN
V_ERROR_MSG := 'No ID is available for assignment';
RAISE XAPPERROR;
END IF;

UPDATE ID_ASSIGN
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE CURRENT OF c1;

COMMIT;

CLOSE c1;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

===========================================

Please suggest if I am doing something wrong.

Thanks,
Asim

Asim, February 08, 2007 - 5:21 pm UTC

Hi Tom,

Thanks for your reply.

We tried like this to use the update ... returning into ..-


CREATE PROCEDURE GETID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;


BEGIN

UPDATE id_assign
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE status = 0
AND ROWNUM = 1
RETURNING ID INTO V_ID;

COMMIT;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;




=============================
We already tried this with the index on the STATUS column, and performance was almost the same: about 300 recs/sec on the Oracle box and about 100 recs/sec with Ab Initio.

So do you want me to drop the index on the status column and then try the same?


Tom Kyte
February 08, 2007 - 9:14 pm UTC

"ab initio"??

what sort of expectations do you have for a SERIAL process here?

Asim, February 09, 2007 - 11:44 am UTC

Hi Tom,

In Ab Initio (the ETL tool), everything is a serial process right now, and the business does not want a parallel process at this time.

We also tried to run the main proc that calls this id-assign proc (using the cursor) in parallel in different Oracle sessions (not in Ab Initio), but performance went down when we ran the same process for each record in three different sessions.

And I am not sure if "UPDATE ... RETURNING INTO ..." can handle parallel processing. So we thought of using a CURSOR with SELECT FOR UPDATE; moreover, "UPDATE ... RETURNING INTO ..." did not perform better than the CURSOR.

I really appreciate your help on this.

Thanks,
Asim
Tom Kyte
February 12, 2007 - 9:30 am UTC

update returning into is simply PLSQL syntax that lets you

a) update (and thus lock) a row
b) get the values of the row

in a single statement - not sure where the term parallel even came into play?


if you tell me that

a) select for update
b) update

is not slower than

a) update

I'll not be believing you.


ops$tkyte%ORA10GR2> create table t1
  2  as
  3  select rownum id, a.* from all_objects a where rownum <= 10000
  4  /

Table created.

ops$tkyte%ORA10GR2> alter table t1 add constraint t1_pk primary key(id);

Table altered.

ops$tkyte%ORA10GR2> alter table t1 add constraint t1_unq unique(object_id);

Table altered.

ops$tkyte%ORA10GR2> create table t2
  2  as
  3  select rownum id, a.* from all_objects a where rownum <= 10000
  4  /

Table created.

ops$tkyte%ORA10GR2> alter table t2 add constraint t2_pk primary key(id);

Table altered.

ops$tkyte%ORA10GR2> alter table t2 add constraint t2_unq unique(object_id);

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure p1
  2  as
  3          l_rec t1%rowtype;
  4  begin
  5          for i in 1 .. 10000
  6          loop
  7                  select * into l_rec from t1 where id = i for update;
  8                  update t1 set object_name = lower(object_name) where object_id = l_rec.object_id;
  9          end loop;
 10  end;
 11  /

Procedure created.

ops$tkyte%ORA10GR2> show errors
No errors.
ops$tkyte%ORA10GR2> create or replace procedure p2
  2  as
  3          l_rec t1%rowtype;
  4          l_object_id number;
  5  begin
  6          for i in 1 .. 10000
  7          loop
  8                  update t1 set object_name = lower(object_name) where id = i returning object_id into l_object_id;
  9          end loop;
 10  end;
 11  /

Procedure created.

ops$tkyte%ORA10GR2> show errors
No errors.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec p1

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec p2

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec runStats_pkg.rs_stop(10000);
Run1 ran in 148 hsecs
Run2 ran in 59 hsecs
run 1 ran in 250.85% of the time

Name                                  Run1        Run2        Diff
STAT...index fetch by key           20,001      10,000     -10,001
STAT...redo entries                 20,016      10,013     -10,003
STAT...table fetch by rowid         10,007           0     -10,007
STAT...execute count                20,031      10,006     -10,025
STAT...db block gets                20,493      10,304     -10,189
STAT...db block gets from cach      20,493      10,304     -10,189
STAT...recursive calls              20,387      10,014     -10,373
STAT...buffer is not pinned co      20,029           0     -20,029
STAT...calls to get snapshot s      30,034      10,005     -20,029
STAT...db block changes             40,272      20,178     -20,094
STAT...consistent gets - exami      50,024      20,001     -30,023
STAT...consistent gets from ca      50,084      20,012     -30,072
STAT...consistent gets              50,084      20,012     -30,072
STAT...session logical reads        70,577      30,316     -40,261
LATCH.cache buffers chains         161,744      70,854     -90,890
STAT...physical read total byt     327,680     204,800    -122,880
STAT...physical read bytes         327,680     204,800    -122,880
STAT...undo change vector size   1,722,880   1,043,932    -678,948
STAT...redo size                 4,900,264   2,814,576  -2,085,688

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
173,632      74,955     -98,677    231.65%

PL/SQL procedure successfully completed.


Asim, February 09, 2007 - 12:09 pm UTC

Hi Tom,
I think I should tell you some more about Ab Initio.

It is an ETL (Extract, Transform, Load) tool. We use it for the initial load of data as well as for delta loads. The initial load is expected to be around 110 million records.
Ab Initio receives files, processes the data in each file as needed, and then calls this main proc to load each record into the table, assigning an unused ID to it.

Thanks,
Asim


Asim, February 12, 2007 - 10:00 am UTC

Hi Tom,

Thanks for your reply.

I think I did something else wrong while using "UPDATE .. RETURNING INTO ..".

I want to give "UPDATE .. RETURNING INTO .." a second try and come back to you.

One thing I want to confirm: do you think the BITMAP INDEX on the STATUS column is not needed in my case, even when I use the STATUS column in the WHERE clause of the update query with ROWNUM = 1?

Thanks,
Asim



Tom Kyte
February 12, 2007 - 11:33 am UTC

you have a bitmap index on status?!?!?!?!?!?!?!?!?!

as they say in support "tar closed"

absolutely and entirely inappropriate to have a bitmap index, get rid of it if you are doing single row updates!!!

Asim, February 13, 2007 - 9:35 am UTC

Hi Tom,

Just one final question on the bitmap index on "status" column.

We used the index on the STATUS column because we are saying "UPDATE ID_ASSIGN SET STATUS = 1, ASSIGN_DT = SYSDATE WHERE STATUS = 0 AND ROWNUM = 1 RETURNING INTO ...". Because of the data distribution of the STATUS column (there could be a couple of million records with status = 0 and a couple of million with status = 1), we think we are unable to tell Oracle explicitly which row to update.

And when I look at your query, it is doing "update t1 set object_name = lower(object_name) where id = i returning object_id into l_object_id;" where you have a primary key index on "id".

I really appreciate your staying with this issue for so long.

Thanks,
Asim

Tom Kyte
February 13, 2007 - 10:10 am UTC

you cannot do that with bitmaps - single row updates KILL IT.

do not use a bitmap index on status, just don't.

use a b*tree if you must, but not a bitmap

Asim, February 13, 2007 - 2:43 pm UTC

Hi Tom,

Yes, in this case a bitmap index on the STATUS column is slower than a b-tree index.

Thanks for your help on this.

Please have a look at what I tried and let me know if I got it correctly.

It looks like the time difference between the two cases is always 3 to 4 seconds.

The only thing bothering me is that if I keep running the procs for 20,000 records a couple of times,
the difference between them stays the same, but the time taken by each run increases.
It does not stay constant each time I run the anonymous blocks below.

============================================================================================================================


CREATE TABLE TEST_ID_ASSIGN
(
ID CHAR(16 BYTE),
STATUS NUMBER(1),
ASSIGN_DT TIMESTAMP(6)
);



Now the table TEST_ID_ASSIGN has 280,000 records with STATUS = 0 (available) and 120,000 records with STATUS = 1 (not available).

There is a primary key on the "ID" column and a B-tree index on the STATUS column.


=============================================================================================================================



1) Step 1 :


CREATE OR REPLACE PROCEDURE ASSIGNID_TEST(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

BEGIN

UPDATE /*+ INDEX(TEST_ID_ASSIGN ASX1ID_ASSIGN)*/ test_id_assign
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE status = 0
AND ROWNUM = 1
RETURNING ID INTO V_ID;

COMMIT;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

================================================================================================================


declare
v_id varchar2(16);
v_status char(1);
v_error_code varchar2(30);
v_error_msg varchar2(2000);
v_start_time timestamp(6) := current_timestamp;
v_end_time timestamp(6);
ctr NUMBER :=0;
v_numrecs NUMBER := 20000;
begin

LOOP
EXIT WHEN ctr = v_numrecs;
assignid_test(v_id,v_status,v_error_code,v_error_msg);
ctr := ctr+1;
end loop;

v_end_time := current_timestamp;

DBMS_OUTPUT.PUT_LINE('Start Time:'||v_start_time);
DBMS_OUTPUT.PUT_LINE('End Time:'||v_end_time);

DBMS_OUTPUT.PUT_LINE('Elapsed Time:'||to_char(v_end_time - v_start_time));

end;





It took - 12.62 seconds for 20000 records.

I used the index hint for the update because without the hint it was slower.


The only thing bothering me: if I run this anonymous block a couple of times, 20,000 recs each time, the time taken increases with each run.
It does not stay constant at or around the 12.62 secs above.

===========================================================================================================================





2) Step 2 :

CREATE OR REPLACE PROCEDURE ASSIGNID_TEST1(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

CURSOR c1 IS
SELECT /*+ INDEX(TEST_ID_ASSIGN ASX1ID_ASSIGN)*/ ID FROM TEST_ID_ASSIGN WHERE STATUS = 0 AND ROWNUM =1 FOR UPDATE;

BEGIN

OPEN c1;

FETCH c1 into V_ID;

IF c1%NOTFOUND OR c1%NOTFOUND IS NULL THEN
V_ERROR_MSG := 'No ID is available for assignment';
RAISE XAPPERROR;
END IF;

UPDATE TEST_ID_ASSIGN
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE CURRENT OF c1;

COMMIT;

CLOSE c1;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

===========================================================================================================================


declare
v_id varchar2(16);
v_status char(1);
v_error_code varchar2(30);
v_error_msg varchar2(2000);
v_start_time timestamp(6) := current_timestamp;
v_end_time timestamp(6);
ctr NUMBER :=0;
v_numrecs NUMBER := 20000;
begin

LOOP
EXIT WHEN ctr = v_numrecs;
assignid_test1(v_id,v_status,v_error_code,v_error_msg);
ctr := ctr+1;
end loop;

v_end_time := current_timestamp;

DBMS_OUTPUT.PUT_LINE('Start Time:'||v_start_time);
DBMS_OUTPUT.PUT_LINE('End Time:'||v_end_time);

DBMS_OUTPUT.PUT_LINE('Elapsed Time:'||to_char(v_end_time - v_start_time));

end;



It took - 16.97 seconds for 20000 records.

If I run this anonymous block a couple of times, 20,000 recs each time, the time taken increases with each run.
It does not stay constant at or around the 16.97 secs above.

============================================================================================================================



Thanks,
Asim





Asim, February 16, 2007 - 11:34 am UTC

Hi Tom,

Thanks a lot for all your help in resolving this issue.

Now I am looking for your suggestion for the problem below-

As I told you, I have a table which will have around 110 million records in production.
When a new record comes in, we need to query the table for existing records, do some manipulation, and then insert the record into the database. Only if the query below returns no records do we need to generate a new ID using the proc I already discussed with you earlier in this thread; otherwise we just use the existing record's ID and insert the data into the table.

The query is as below -

CURSOR C_ABK IS
SELECT CUSTOMER_ACCOUNT_ID, ID_TP_CD,GOVT_ISSUED_ID, BIRTH_DT, MEMBER_DT,ID
FROM ACCOUNT
WHERE CUSTOMER_LINK = P_CUSTOMER_LINK
AND ID_TP_CD <> V_DETACHED_TP_CD;


We do have a B-tree index on CUSTOMER_LINK and a bitmap index on ID_TP_CD, which can have the values 100, 90, 80, 75, 70, 50, 40.
CUSTOMER_ACCOUNT_ID is the primary key of the table.

The query will bring back at most 6 records per customer out of the 110 million, and most of the time (80%) it will bring back no records at all.



Is there any way we can improve this query to run faster?


We also created one composite index on "CUSTOMER_LINK,ID_TP_CD, CUSTOMER_ACCOUNT_ID, ID_TP_CD,GOVT_ISSUED_ID, BIRTH_DT, MEMBER_DT,ID" and the query became somewhat faster, but still not fast enough to accept.

In this composite index we included CUSTOMER_ACCOUNT_ID, which already has a unique index because of the primary key.


Really appreciate your help.

Thanks,
Asim
Tom Kyte
February 17, 2007 - 11:06 am UTC

this is a transactional table - there should be NO BITMAP INDEXES AT ALL. They are entirely INAPPROPRIATE on a table that is transactional in nature.

you should have a single b*tree index on (customer_link,id_tp_cd)
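
for example (the index name is illustrative, not from the thread):

create index account_cust_idx on account(customer_link, id_tp_cd);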

Asim, February 27, 2007 - 1:55 pm UTC

Hi Tom,
Thank you very much for helping me. Looks like we are good now.

There is only one thing we think we might improve, but we wanted to check with you.

We have a table of invalid SSNs which has only one column, called ssn_invalid. As of now we load it with around 50 records during the initial load. While loading each customer record, we verify that his/her SSN is valid by checking against this table like this:
select count(1) from ssn_invalid where ssn_invalid = '1234';

Currently the table has no primary key or index, so the query always does a full table scan with cost = 4.

But if we add a primary key on this column, it does an index unique scan with cost = 0.

Do we gain anything by adding a primary key on this table, given that even for an index scan Oracle has to search the index to verify whether there is any record?

Moreover, the index will occupy some more space.

Your views please.

Thanks,
Asim



Tom Kyte
February 27, 2007 - 2:29 pm UTC

it depends.


tkprof with and without, see what you see.

50 ssn's - probably a one block table, but 3 or 4 IOs each time you query.

make it an IOT (index organized table) and it'll still be one block, but only 1 block during the scan.


(it need not take more space)

ops$tkyte%ORA10GR2> create table t1
  2  as
  3  select Object_id invalid_ssn
  4    from all_objects
  5   where rownum <= 50;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2
  2  ( invalid_ssn primary key )
  3  organization index
  4  as
  5  select Object_id invalid_ssn
  6    from all_objects
  7   where rownum <= 50;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select count(*) from t1 where invalid_ssn = 1234;

  COUNT(*)
----------
         0

ops$tkyte%ORA10GR2> select count(*) from t2 where invalid_ssn = 1234;

  COUNT(*)
----------
         0

ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> select count(*) from t1 where invalid_ssn = 1234;

  COUNT(*)
----------
         0


Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    13 |     3   (0)| 00:00:0
|   1 |  SORT AGGREGATE    |      |     1 |    13 |            |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |    13 |     3   (0)| 00:00:0
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("INVALID_SSN"=1234)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        410  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> select count(*) from t2 where invalid_ssn = 1234;

  COUNT(*)
----------
         0


Execution Plan
----------------------------------------------------------
Plan hash value: 1767952272

------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |     1 |    13 |     1
|   1 |  SORT AGGREGATE    |                   |     1 |    13 |
|*  2 |   INDEX UNIQUE SCAN| SYS_IOT_TOP_66544 |     1 |    13 |     1
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("INVALID_SSN"=1234)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        410  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off

Asim, February 27, 2007 - 2:39 pm UTC

Hi Tom,

Thank you very much for your reply.

Could you please just explain this to me:

"50 ssn's - probable one block table, but 3 or 4 IO's each time you query.

make it an IOT (index organized table) and it'll still be one block, but only 1 block during the scan.
"
That is what the trace shows, but I cannot see why it would be "3 or 4 IOs each time I query" for the table without a primary key.

Thanks,
Asim



Tom Kyte
February 27, 2007 - 2:45 pm UTC

because it reads the extent map to figure out what block to read, the IOT didn't have to do that.

Asim, February 27, 2007 - 3:07 pm UTC

Hi Tom,

Please see below what I tried just now.
It looks like both read the same number of bytes of data, but the cost is lower with the IOT.

I am wondering whether this is a significant difference.

Please clarify.
==========================================================
SQL> create table t1
2 ( invalid_ssn PRIMARY KEY )
3 organization index
4 as
5 select * from ssninvalid;

Table created.

SQL> create table t2
2 as
3 select * from ssninvalid;

Table created.

SQL> set autotrace on
SQL> SELECT count(1) FROM t1 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 INDEX (UNIQUE SCAN) OF 'SYS_IOT_TOP_75244' (INDEX (UNIQU
E)) (Cost=1 Card=1 Bytes=6)





Statistics
----------------------------------------------------------
24 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
219 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t2 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes
=6)





Statistics
----------------------------------------------------------
28 recursive calls
0 db block gets
9 consistent gets
1 physical reads
0 redo size
221 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off
SQL>
==========================================================
Tom Kyte
February 27, 2007 - 3:13 pm UTC

run them again, get rid of the hard parse. you see the recursive calls? there should be none in real life.

I know!!!

use my example.

Asim, February 27, 2007 - 3:29 pm UTC

Hi Tom,

You are right. This is what I got when I ran them again.

Do you think you can help me quantify the time saved by using an IOT in our case, for visiting this table around 110 million times for 110 million customer records?

I really appreciate your help.
Thanks,
Asim

=============================================
SQL> set autotrace on
SQL> SELECT count(1) FROM t1 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 INDEX (UNIQUE SCAN) OF 'SYS_IOT_TOP_75244' (INDEX (UNIQU
E)) (Cost=1 Card=1 Bytes=6)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1 consistent gets
0 physical reads
0 redo size
221 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t2 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes
=6)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
221 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t1 WHERE INVALID_SSN = '987654321';

COUNT(1)
----------
0


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 INDEX (UNIQUE SCAN) OF 'SYS_IOT_TOP_75244' (INDEX (UNIQU
E)) (Cost=1 Card=1 Bytes=6)





Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
1 consistent gets
0 physical reads
0 redo size
220 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t2 WHERE INVALID_SSN = '987654321';

COUNT(1)
----------
0


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes
=6)





Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
220 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off
SQL>

==========================================================



Tom Kyte
February 27, 2007 - 3:40 pm UTC

you will save 220 million logical IO's

I pray in real life you do not use literals.

Asim, February 27, 2007 - 3:57 pm UTC

Hi Tom,
Thank you very much again for your help.

Yes, in real life the query will use bind variables only, not literals, as the query is written inside a PL/SQL function.

Thanks,
Asim






Asim, March 02, 2007 - 9:22 am UTC

Hi Tom,
I am back again seeking your suggestions.

Now it looks like management wants to see if we can run the load process in parallel. We tried running in parallel in two ways, so there are two sessions running in parallel and inserting records into my main table. Each gets a unique ID as I explained earlier in this thread. The final proc which is running right now is below, with a quick recap.

============================================================
PROCEDURE CREATEID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(300) := 'No ID is available for assignment';
XAPPERROR EXCEPTION;

BEGIN

UPDATE /*+ INDEX(ID_ASSIGN X1ID_ASSIGN)*/ ID_ASSIGN
SET STATUS = 'U',
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE status = 'A'
AND ROWNUM = 1
RETURNING ID INTO V_ID;

IF SQL%ROWCOUNT = 0 OR V_ID IS NULL THEN
RAISE XAPPERROR;
END IF;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

COMMIT;

EXCEPTION
WHEN XAPPERROR THEN
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;
============================================================


There is a main proc which first selects from the main table (customer_account) to see if similar records exist; if so, it takes the ID from the existing records and inserts the record. If no records exist in the database, it calls CREATEID to generate a new ID and then inserts the record.

Now the problem is, when we run in parallel, the process becomes very slow, even slower than running serially.

Any idea what the reason could be? I checked whether there is a lock on any table, but it does not look like there is.
Is there any database-side parameter to set to allow this parallel process?

Thank you very much for all your help.
Asim


Tom Kyte
March 04, 2007 - 6:06 pm UTC

for the love of whatever you love - please use a sequence.

this code shouts "i shall be as slow as I can be and will still generate gaps"

Asim, March 05, 2007 - 10:55 am UTC

Hi Tom,
I am sorry that I could not get it.
Could you please explain once again?

Thanks,
Asim
Tom Kyte
March 05, 2007 - 2:18 pm UTC

do not write code.

please use a sequence.

that'll generate your unique ids, very fast, scalable.
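
a minimal sketch of the sequence approach (the sequence name is illustrative, not from the thread):

create sequence customer_id_seq cache 1000;

declare
    v_id varchar2(16);
begin
    -- one fast, scalable call replaces the UPDATE ... RETURNING on id_assign;
    -- no pre-loaded id table, no status index, no contention on a "next available" row
    select to_char(customer_id_seq.nextval) into v_id from dual;
end;
/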

Asim, March 05, 2007 - 3:38 pm UTC

Hi Tom,
Thank you for clarifying the same.
Actually, that is what I suggested, and I proved that a sequence generates IDs much faster, but management does not want to use the sequence approach (I don't know why; probably they don't trust it). Instead they are buying unique IDs generated by some other third-party system, like a credit card number generator company. We get these numbers in a file, then load them into this table and mark them as not available to use.

Anyway, I guess I need to convince them that this approach cannot help us in parallel unless and until we go back to sequences.


I have one more question though:
This is what I am trying now -
- Ab Initio (the third-party tool) calls our main stored proc to add each record to the database, and it takes 1.54 minutes to load 20,741 records. This seemed to us slower than running the same thing from the Oracle server.

So we took the same data from a file and prepared a .sql file containing all 20,741 calls to the stored proc. We executed this .sql file from sqlplus on the server where the database resides, but the same number of records took 4.40 minutes.

Do you have any idea why sqlplus on the Oracle server took more time than the Ab Initio calls, although we would expect the opposite, since the third-party tool incurs some network overhead?

Thanks,
Asim









Tom Kyte
March 05, 2007 - 8:40 pm UTC

then management has doomed you to "not be capable of doing more than one thing at a time"

You are committing every time. That means you are waiting for a log file sync (IO); every ID you get takes a very measurable amount of time.

21,000 records in 120 seconds is 0.006 seconds per record. Not bad considering each one has to wait for a log file sync wait.

you probably hard coded the information in the .sql file whereas the load program used bind variables. You might spend 95% of your run time parsing SQL instead of executing it without bind variables.

Asim, March 06, 2007 - 11:16 am UTC

Hi Tom,

This is what I am doing in the .sql file -

set feedback off;
variable P_OUTPUT varchar2(4000);

exec add('1','200703220797219104','4006610000000622',.......,'N','2006-07-10',:P_OUTPUT);

exec queryadd('1','200703220797219105','4006610000000622',.......,'N','2006-07-10',:P_OUTPUT);

exec queryadd('1','200703220797219106','4006610000000622',.......,'N','2006-07-10',:P_OUTPUT);
.....

I think that instead of this, I need to set all the variables I am passing each time and call ADD using bind variables. That will be very cumbersome in this case: I will have 20,000 records, so I would have to set the variables 20,000 times before calling ADD.


Is this what you meant?

Thanks,
Asim

Tom Kyte
March 06, 2007 - 11:22 am UTC

yes, each one is a hard parse and you spend probably as much time parsing as executing.


sqlplus is a simple, stupid command line tool - it is wholly inappropriate for what you are doing.

even if you set binds - they would be hard parses themselves.

abandon sqlplus for this exercise
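
a sketch of the "parse once, execute many" alternative, assuming the 20,000 input rows were first loaded into a hypothetical staging table LOAD_STAGE (table and column names are illustrative):

declare
    p_output varchar2(4000);
begin
    -- the call to ADD is parsed once; each iteration binds the row values
    for r in (select * from load_stage)
    loop
        add(r.col1, r.col2, r.col3, p_output);
    end loop;
end;
/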

Asim, March 06, 2007 - 11:27 am UTC

Hi Tom,
As usual, thank you very much for such a prompt answer and for helping me.

Maybe I will come back again with some other problem in the future.

I really appreciate your help.

Thanks,
Asim

pagination

shay, April 04, 2007 - 4:21 am UTC

hi tom,
I have table t9

create table t9 (a number,b number);

insert into t9 values (35791,1);
insert into t9 values (35863,1);
insert into t9 values (35995,1);
insert into t9 values (36363,2);
insert into t9 values (36651,1);
insert into t9 values (36783,1);
insert into t9 values (36823,1);
insert into t9 values (36849,1);
insert into t9 values (36917,2);
insert into t9 values (37177,1);
insert into t9 values (37227,1);
insert into t9 values (37245,1);
insert into t9 values (37341,1);
insert into t9 values (37451,1);
insert into t9 values (37559,1);
insert into t9 values (37581,1);
insert into t9 values (37697,1);
insert into t9 values (37933,1);
insert into t9 values (38231,1);
insert into t9 values (38649,1);

commit;

now I do :

select *
from (
select
a,b,
row_number() over
(order by a) rn
from t9)
where rn between 1 and 16
order by rn
/

A B RN
---------- ---------- ----------
35791 1 1
35863 1 2
35995 1 3
36363 2 4
36651 1 5
36783 1 6
36823 1 7
36849 1 8
36917 2 9
37177 1 10
37227 1 11
37245 1 12
37341 1 13
37451 1 14
37559 1 15
37581 1 16

16 rows selected.

I would like to cut the result set after the second 2 in column b; I mean at row 9 inclusive. Is it possible?

Thanks
Tom Kyte
April 04, 2007 - 10:13 am UTC

where rn between 1 and 9


but, I think that is too easy, hence your question must be more complex than you have let us in on... so, what is the real question behind the question.

shay, April 10, 2007 - 10:03 am UTC

Sorry for not explaining myself so well.
I would like to get 15 rows but ... if I find, let's say, 2 rows with column b = 2, then I would like to cut the result set and return only 9 rows.

I hope this is more understandable.

Tom Kyte
April 10, 2007 - 11:24 am UTC

is it always the second time b = 2 or what is the true logic here. is b always 1's and 2's or what.

please be very precise, pretend you were explaining this to your mom - be very precise, very detailed. You understand your problem - but we have no idea what it is.

Can I get an estimate of rows without running the query?

A reader, April 11, 2007 - 1:47 pm UTC

Tom,
We have some search pages within our application. Users can input multiple pieces of information to make searches more precise and return a manageable number of hits which can be easily displayed in a couple of pages. However, all pieces of information are optional and sometimes users will search with very little information. In such cases, the search query takes a very long time to run and burns up the CPU.

My question is:
Is there a way to estimate how many rows the query will return without actually running the query? The logic is if we know that the query will return 1000 rows, we will not run the query at all and ask the user to provide more information to narrow down the search.

If we try to use explain plan, the concern is that it might give incorrect cardinality estimates and we might force even the "good users" to provide more information. Conversely, we might run a bad query thinking that it will return only 20 rows. The point is I can "lie" about the estimated number but it has to be a smart lie.

Please advise what would be a good solution.

Thanks...
Tom Kyte
April 11, 2007 - 5:45 pm UTC

estimates are - well - estimates, they are not exact, they will never be exact, they are by definition GUESSES!

you can either use a predictive resource governor (the resource manager; set up a plan that won't run a query estimated to take more than 3 seconds - but again, it is a GUESS as to how long)

or a reactive resource governor - fail the query after using N cpu seconds
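
a sketch of the predictive variety with the resource manager (plan and group names are illustrative; mapping sessions to the consumer group is omitted):

begin
    dbms_resource_manager.create_pending_area;
    dbms_resource_manager.create_consumer_group(
        consumer_group => 'ADHOC_USERS',
        comment        => 'ad hoc search screens');
    dbms_resource_manager.create_plan(
        plan    => 'LIMIT_LONG_QUERIES',
        comment => 'reject queries estimated to run too long');
    dbms_resource_manager.create_plan_directive(
        plan              => 'LIMIT_LONG_QUERIES',
        group_or_subplan  => 'ADHOC_USERS',
        comment           => 'error if estimated execution time exceeds 3 seconds',
        max_est_exec_time => 3);
    dbms_resource_manager.create_plan_directive(
        plan             => 'LIMIT_LONG_QUERIES',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'everyone else, unrestricted');
    dbms_resource_manager.validate_pending_area;
    dbms_resource_manager.submit_pending_area;
end;
/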

"Web" pagination and read consistency

Stew Ashton, May 02, 2007 - 9:55 am UTC

HI Tom,

I would like to compare a bit more explicitly the client / server and "Web" pagination solutions. I would appreciate your comments or corrections as needed.

1) In client / server, we can maintain the connection and keep the cursor open, so we just execute the full query once and fetch the first "page". Subsequent pages will simply require additional fetches. This means we have read consistency throughout, since we're still within one query.

2) In Web applications, everything is "stateless": every time we get a request from the user, we have to "start over", so every page requires a new query. Side effect: we lose read consistency.

To maintain read consistency in a stateless environment, I thought of using flashback queries:

variable n number
exec select dbms_flashback.get_system_change_number into :n from dual;
SELECT /*+ FIRST_ROWS */ * FROM
(SELECT p.*, rownum rnum FROM
(SELECT <whatever> FROM <table> as OF SCN :n ORDER BY <something unique>) p
WHERE rownum <= 200)
WHERE rnum > 100;

Of course, the application would need to keep the scn around between requests.

3) Would this indeed get us the same read consistency as the client / server solution?

4) Can you see any side effects or gotchas? Performance issues? It would seem to me that most of the gotchas (such as "snapshot too old") would apply to any "read consistent" solution.

Thanks in advance!

PS: sorry, couldn't get the code button to work.
Tom Kyte
May 02, 2007 - 5:06 pm UTC

3) sure

4) just what you pointed out


can you describe what you mean by "i could not get the code button to work"?

Code button

Stew Ashton, May 03, 2007 - 11:51 am UTC

Aha! I was creating a test case, when it occurred to me that I had modified the font options in Firefox. When I "allow pages to choose their own fonts, instead of my selections above", I miraculously see fixed width when I use the code button.

As Emily Litella (remember her?) would say : Never mind!

Row Orders in one select statement

Elahe Faghihi, May 15, 2007 - 10:19 am UTC

Hi Tom,

How could I write one select statement that returns the row orders properly?

create table t1 (a varchar2(30));

insert into t1 (a)
values ('first');

insert into t1 (a)
values ('second');

insert into t1 (a)
values ('third');

insert into t1 (a)
values ('forth');

commit;

select * from t1;

A
======
first
second
third
forth



I would like to run a query which could return this:

Row_order A
==============================
1 first
2 second
3 third
4 forth


Tom Kyte
May 15, 2007 - 8:58 pm UTC

you better fix your data model then?

properly is in the eye of the beholder, to me ANY order of those rows would be correct and proper since you stuffed the data in there without anything meaningful to sort by.

Well, if you really must ...

Greg, May 16, 2007 - 9:28 am UTC

For Elahe Faghihi :

If you really really cannot "fix" the data model as Tom says .. here's a sneaky/ugly/fun way of doing it .. ;) heh

SQL > drop table junk;

Table dropped.

SQL > create table junk as
  2       select to_char(to_date(level, 'j'), 'jspth' ) a
  3             from dual
  4          connect by level <= 5;

Table created.

SQL > select jk.a
  2    from junk jk,
  3         ( select level lvl,
  4                  to_char(to_date(level, 'j'), 'jspth' ) spt
  5             from dual
  6          connect by level <= 125  -- pick a big number .. or do a max(a) on junk ...
  7          ) dl
  8   where dl.spt = jk.a
  9   order by dl.lvl
 10  /

A
---------------
first
second
third
fourth
fifth

5 rows selected.

rownum indexes order by

Tony, June 21, 2007 - 4:23 pm UTC

Tom,
Thanks a lot for your help
I have two queries:

1)select * from ( select t.tex_id from tex_t t where t.status = 5005 order by t.c_date desc, t.t_num asc, t.tex_id asc ) where rownum < 20;

The columns in the ORDER BY were not indexed, so I created an index on those three columns (c_date desc, t_num, tex_id).

The query results came back in one second (down from 2 minutes without the index).

The following query has no index supporting its ORDER BY clause either. When I create an index on (pkup_date, t_num, tex_id), the query below starts using it, but the problem is that the first query then stops using its index and goes back to a full table scan.

In other words, only one index works at a time. Can you please guide me?

2)select * from ( select t.tex_id from tex_t t where t.status = 5010 and (t.tdr_count >= 1 or t.p_count >= 1) and t.cur_stat_id <> 11 order by t.pkup_date asc, t.t_num asc, t.tex_id asc ) where rownum < 20 ;

Tom Kyte
June 22, 2007 - 10:16 am UTC

give an entire example.

no creates
no inserts
no dbms_stats.set_* calls to give us representative stats to see what you see
no look

Recursive Function

preet, June 29, 2007 - 9:18 am UTC

Tom,

In my database, I have a table TRADES which stores all the data pertaining to the trades.
There's another table RELATED_TRADES which stores all the related trades.

SELECT * FROM TRADES;
/
TRADE_ID TRADE_TYPE
------- ---------
1 A
2 B
3 A
4 B
5 A
6 B
7 B
8 B


SELECT * FROM RELATED_TRADES;
/
TRADE_ID RELATED_TRADE TRADE_STATUS
------- ------------- ------------
1 3 C
3 5 C
4 6 X
5 7 C
7 9 C
8 10 X

Now, a trade may or may not have a related trade. A related trade may or may not have a further related trade.
There can be a complete hierarchy of related trades. E.g., in the example above, trade 1 is related
to trade 3, and trade 3 is related to trade 5, which in turn is related to trade 7. Trade 7 is related to trade 9, and so on.

I need to
1. Get all the trade_ids from TRADES one by one.
2. Get all the related_trades( and their related trades) for each trade_id.
3. Check the TRADE_STATUS of each. If it is 'X', delete the parent trade in TRADES.

How do I do that in PL/SQL? I am using Oracle 8i.


Please help

Thanks and Regards,
Preet

ROWNUM Article

Norm, July 05, 2007 - 9:50 am UTC

Tom,
I have a question about getting rows N through M after reading this thread and an article of yours on OTN titled "On ROWNUM and Limiting Results".

The following is a quote from that article (the section dealing with pagination):
Both of them do that, but because ID has so many duplicate values, the query cannot do it deterministically; the same sort order is not assured from run to run of the query. In order to correct this, you need to add something unique to the ORDER BY. In this case, just use ROWID:


SQL> select *
2 from
3 (select a.*, rownum rnum
4 from
5 (select id, data
6 from t
7 order by id, rowid) a
8 where rownum <= 150
9 )
10 where rnum >= 148;


I don't understand why ordering by rowid gives you what you are looking for here, that is, a repeatable order.

Earlier in the article you make the point that ROWNUM gets assigned before the ORDER BY clause but after the predicate clause. So, in the
select id, data
from t
order by id, rowid

part of the code, since "select id, data" isn't guaranteed to return things in the same order each execution, wouldn't the rowid be different for the rows each run? That would then make ordering by rowid meaningless as well.

Obviously, from your output, it does what you're saying it does, I'm just not seeing it conceptually.
Tom Kyte
July 05, 2007 - 1:16 pm UTC

because rowid is UNIQUE within a table - so the rows are always sorted deterministically.

a rowid is assigned to a row upon insert and is (in general) immutable. definitely immutable for the duration of a query (it only changes IF you enable row movement on the table and a) update partition key b) flashback table c) shrink the table - not normal conditions...)


say you have a table T:

ID     DATA
----   --------
1      A
1      B
1      C
1      D


select * from (select * from t order by id) where rownum = 1;


which row will that return from T? Unknown - but if you add rowid to the order by, we will know (the row with the smallest rowid will always be returned deterministically in that case)



About Shay's question April 4th & 10th

Kim Berg Hansen, July 06, 2007 - 9:47 am UTC

Hi, Tom

Shay had a question in this thread April 4th and 10th which kind of fizzled as he didn't follow up on it.

But I think I understand his question and would be interested in an answer to it :-)

He said:

----quote--->

A B RN
---------- ---------- ----------
35791 1 1
35863 1 2
35995 1 3
36363 2 4
36651 1 5
36783 1 6
36823 1 7
36849 1 8
36917 2 9
37177 1 10
37227 1 11
37245 1 12
37341 1 13
37451 1 14
37559 1 15
37581 1 16

16 rows selected.

I would like to cut the result set after the second 2 at column b , I mean at row 9 Include. is it possiable ?

<----endquote-----

I believe when b=2, that "ends" a group of lines. (Perhaps better understood if thought of as subtotal lines - like grouping_id()=1 in a rollup() query.)

So the desired result is that pagination should break in such a manner that a "group" of lines is not split across several pages (except when the group has more lines than the "pagination size"). Always let the pagination split after the last occurring "subtotal" line (b=2) if such line(s) exist.

The pagination size (16 in this case) would then be the maximum allowed number of lines to return - but it could be anything from 1 to 16 lines returned on each "page".

So we would have to use some analytics to "cut" the page at the last "b=2" line, but we would also have to return the line-number to the client, so it would know the "from" line-number to ask for the next page, right?

Would be nice to see an efficient way of such a "dynamic page-size" pagination :-)

Regards

Kim Berg Hansen
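
A sketch of one way to express that "cut at the last subtotal line" rule against Shay's t9 sample (the 16-row maximum page size is hard-coded; illustrative only, not from the original posts):

select a, b, rn
  from (
        select a, b, rn,
               max(case when b = 2 then rn end) over () last_break
          from (
                select a, b, row_number() over (order by a) rn
                  from t9
               )
         where rn <= 16
       )
 where rn <= nvl(last_break, 16)
 order by rn;

The analytic MAX finds the last b=2 line within the 16-row window; the outer filter cuts there, or keeps all 16 rows when no such line exists. The client would then ask for the next page starting at nvl(last_break, 16) + 1.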

A reader, July 13, 2007 - 6:34 pm UTC

select * 
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= MAX_ROWS )
 where rnum >= MIN_ROWS
/


Suppose the inner query involves multiple table joins like the following:

SELECT p.parent_Id, p.someColumn
FROM   parent p, child1 c1, child2 c2
WHERE  p.parent_Id = c1.parent_id AND
       p.parent_Id = c2.parent_id AND
       c1.someColumn = 'xyz' AND
       c2.someColumn = 'abc';


In order to avoid potential duplicate records resulting from that parent-child relationship, I may have to use a DISTINCT clause, which will materialize the view and cause the SQL to perform inefficiently.

Is there anyway to optimize this type of query?


Tom Kyte
July 13, 2007 - 7:55 pm UTC

it is not performing inefficiently, it is doing what it needs to do to get your answer.

this looks very wacky - if p is 1:m with c1 and p is 1:m with c2 (as it must be), you cannot join them all together - result would be "meaningless"

give me real world use here - I think your query is non-sensible to begin with.

A reader, July 13, 2007 - 9:48 pm UTC

Ok. Sorry. Let's say we only have two tables: dept and employee. We want to display 10 records at a time, page by page, of all departments that have employees whose name starts with John.

The example I have below has only a few records, but let's assume that we have thousands or more of departments. The inner query without DISTINCT will create duplicates. Any way to avoid using DISTINCT in this type of query?


CREATE TABLE dept (
  dept_id  NUMBER PRIMARY KEY,
  name     VARCHAR2(20)
);

CREATE TABLE employee (
  emp_id   NUMBER PRIMARY KEY,
  dept_id  NUMBER REFERENCES dept (dept_id),
  name     VARCHAR2(20),
  salary   NUMBER
);

INSERT INTO dept VALUES (1, 'Marketing');
INSERT INTO dept VALUES (2, 'Sales');

INSERT INTO employee VALUES (1, 1, 'John Doe', 1000000);
INSERT INTO employee VALUES (2, 1, 'John Doe2', 250000);
INSERT INTO employee VALUES (3, 1, 'Alice Doe', 5000000);
INSERT INTO employee VALUES (4, 2, 'Jerry Doe', 30000);
INSERT INTO employee VALUES (5, 2, 'John Doe3',2500000);

commit;

SELECT name
FROM (
   SELECT a.*, rownum r
   FROM (
      SELECT d.name
      FROM   dept d, employee e
      WHERE  d.dept_id = e.dept_id AND
             e.name    LIKE 'John%'
      ORDER  BY salary DESC
   ) a
   WHERE  rownum <= 10
)
WHERE  r >= 1;

NAME
---------
Sales
Marketing
Marketing


Tom Kyte
July 17, 2007 - 10:20 am UTC

why did you join when you didn't mean to?


select d.name
  from dept d
 where d.dept_id in (select e.dept_id from employee e where e.name like 'John%')


that order by salary - that is - well, ambiguous at best and if you put a distinct on there - it would, well - destroy the ordering.


Say deptno = 10 and 20 have johns and further:

deptno     sal
--------   ------
10         1000
20         900
10         800
20         700


so, if you order by sal then distinct - what is "first" here????

sorry, your example is not useful - too many ambiguities.


But you didn't mean to join if you are distinct'ing (usually, the need to use distinct means "I made a mistake in the query")

How about the consistency of the results

A reader, July 17, 2007 - 11:04 am UTC

What would you suggest if the requirement is to keep the results of several subsequent executions of the query:

select *
from (
select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE ) a
where rownum <= MAX_ROWS
)
where rnum >= MIN_ROWS

consistent with each other (concurrent DML must not affect the outcome)?
Tom Kyte
July 17, 2007 - 1:11 pm UTC

flashback query....

first time you run, before you run, use dbms_flashback.get_system_change_number, use "as of scn :x" on the query.
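
a sketch putting those two steps together (t, id, and data are placeholder names):

variable scn number
exec :scn := dbms_flashback.get_system_change_number;

select *
  from ( select a.*, rownum rnum
           from ( select id, data
                    from t as of scn :scn
                   order by id, rowid ) a
          where rownum <= 150 )
 where rnum >= 148;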

n in first_rows(n) hint can have a formula?

Sandro, September 17, 2007 - 8:50 am UTC

Is it possible to use a first_rows(n+m) hint?
It seems that first_rows(n+m) is the same as plain first_rows.

Look at this...

set autotrace on explain
select * from dual where rownum < 10;
Execution Plan
---------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'DUAL'

select /*+ first_rows */ * from dual where rownum < 10;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=11 Card=9 Bytes=18)
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'DUAL' (Cost=11 Card=8168 Bytes=16336)

select /*+ first_rows(10) */ * from dual where rownum < 10;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=2 Card=9 Bytes=18)
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'DUAL' (Cost=2 Card=10 Bytes=20)

select /*+ first_rows(5+5) */ * from dual where rownum < 10;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=11 Card=9 Bytes=18)
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'DUAL' (Cost=11 Card=8168 Bytes=16336)

It seems that first_rows(n+m+...) is converted to plain first_rows. Is that true?

Thanks in advance.

Tom Kyte
September 18, 2007 - 2:20 pm UTC

no first_rows(5+5) is the same as

first_rows(heap of junk here)

in fact, first_rows(5+5) is the same as first_rows....

ops$tkyte%ORA10GR2> /*
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> drop table t;
ops$tkyte%ORA10GR2> create table t as select * from all_objects;
ops$tkyte%ORA10GR2> */
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @at
ops$tkyte%ORA10GR2> column PLAN_TABLE_OUTPUT format a72 truncate
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select /*+ first_rows */ * from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 54259 |  6782K|   228   (4)| 00:00:02
|   1 |  TABLE ACCESS FULL| T    | 54259 |  6782K|   228   (4)| 00:00:02
------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement

ops$tkyte%ORA10GR2> select /*+ first_rows(10) */ * from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 54259 |  6782K|     7  (72)| 00:00:01
|   1 |  TABLE ACCESS FULL| T    | 54259 |  6782K|     7  (72)| 00:00:01
------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement


ops$tkyte%ORA10GR2> select /*+ first_rows(1+1) */ * from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 54259 |  6782K|   228   (4)| 00:00:02
|   1 |  TABLE ACCESS FULL| T    | 54259 |  6782K|   228   (4)| 00:00:02
------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement

How to fetch N rows from row M

Max, October 01, 2007 - 12:43 pm UTC

Could you please share your thoughts regarding the following scenario?

Consider some row source which serves as a kind of queue that should be read in a "piecewise" manner, without deleting the processed messages but marking them as "processed" instead.

One approach could be to issue a query such as:

select *
from <some_row_source>
where row_processed = 'FALSE'
and group_no > :<some_parameter>
order by group_no, row_no -- primary key columns

and to re-execute that query over and over again with certain values for <some_parameter> every time a "piece" (set of consecutive "group_no", determined by the caller) has been processed.

As opposed to that one could use a query such as:

select *
from <some_row_source>
where row_processed = 'FALSE'
and group_no = :<some_parameter>
order by group_no, row_no -- primary key columns

or specify some range for "group_no" with "between" instead of using "=" to give the optimizer some clue about the "size" of the resultset to be retrieved beforehand, instead of just asking for "all" rows by using the operator ">" without fetching all of them later on.

I'd prefer the latter approach although it might suffer from unsuccessful attempts to retrieve the next "piece" when no entries can be found for the given parameter value(s).

One might index "group_no" and "row_processed" to support the query's where-clause. On the other hand, with the first query the optimizer might tend to prefer the primary key index (on "group_no" and "row_no") nevertheless -- regardless of a less efficient range scan -- due to the assumption that retrieving the rows in sorted order would save some *crucial* additional sort steps. But whether or not these savings are really crucial depends on the "size" of the "pieces", which in turn is UNKNOWN to the optimizer with the first approach (when the caller just fetches a subset of the rows that this query returns).

What do you think about that?
Tom Kyte
October 03, 2007 - 4:14 pm UTC

one might also use AQ (advanced queues) since it is already built and does that.
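
for reference, a minimal sketch of setting up such a queue with DBMS_AQADM (names are illustrative; a RAW payload keeps the example small):

begin
    dbms_aqadm.create_queue_table(
        queue_table        => 'msg_qt',
        queue_payload_type => 'RAW');
    dbms_aqadm.create_queue(
        queue_name  => 'msg_q',
        queue_table => 'msg_qt');
    dbms_aqadm.start_queue(queue_name => 'msg_q');
end;
/

consumers then call dbms_aq.dequeue to take the next message - the queue does the locking, ordering, and "mark as processed" bookkeeping for you.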

Hint first_rows deprecated

Sandro, October 09, 2007 - 5:59 pm UTC

On page 2 of the book "Cost-Based Oracle Fundamentals" (J. Lewis and T. Kyte) I read that the FIRST_ROWS hint is deprecated in 9i.
Any considerations?

Implication in WHERE clause

George Robinson, November 02, 2007 - 4:47 pm UTC

On October 18, 2004, Kim Berg Hansen wrote:


I want to start at the point where loguser = 'SYS' and logdate = '31-08-2004 11:22:33' and logseq = 5799 (point B) and paginate "forward in the index/order by".

To find the starting point I am using:

where (loguser = 'SYS' and logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and logseq >= 5799) or (loguser = 'SYS' and logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS')) or (loguser > 'SYS')
ORDER BY loguser, logdate, logseq


..and years later George Robinson replied:

Dear Hans,

You are preventing an Index Range Scan (assuming you have a composite index on loguser, logdate, logseq) by inefficiently joined predicates in your WHERE clause.

Just like Tom replied: "as soon as you see an OR -- abandon all hope :)", the naked "OR" operators throw a monkey wrench in your concept.

However you can tame the evil "OR"s by surrounding them with good "AND"s like this:


a
a AND (A OR B)
a AND (A OR (b AND (B OR c)))
a AND (A OR (b AND (B OR (c AND (C OR d)))))
a AND (A OR (b AND (B OR (c AND (C OR (d AND (D OR e)))))))
etc...

where each letter symbolizes a comparison of two values: a lowercase letter symbolizes a non-strict comparison using ">=" or "<=", and an uppercase letter symbolizes a strict comparison using ">" or "<"

Thus your 3-variable WHERE clause becomes:

a AND (A OR (b AND (B OR c)))

where:
a ---> loguser>='SYS'
A ---> loguser>'SYS'
b ---> logdate>=:somelogdate
B ---> logdate>:somelogdate
c ---> logseq>=:somelogseq

so after substitution your WHERE clause becomes:

WHERE loguser>='SYS' AND (loguser>'SYS' OR (logdate>=:somelogdate AND (logdate>:somelogdate OR logseq>=:somelogseq)))

Now, because the evil "OR"s are enclosed in good "AND"s, they become tamed and do not cause Full Table Scans or Full Index Scans anymore.

The whole idea of structuring predicates like that is called "Logical Implication".

Implication allows you to start in a particular point of your multicolumn sorting sequence and continue forward (or backward) N rows from that point, while enjoying the full advantages of short Index Range Scans with COUNT STOPKEY.

You can even do away with 'logseq' altogether if you have a primary key column 'pk' and you consecutively number records that happen to have the same 'loguser' and 'logdate'. (assuming you have a composite index on loguser, logdate, pk)

SELECT
i,loguser,logdate
FROM (
SELECT /*+ INDEXC(testlog testlog(loguser,logdate,pk)) FIRST_ROWS */
row_number() OVER (PARTITION BY loguser,logdate ORDER BY loguser,logdate,pk) i,
loguser,
logdate
FROM testlog
WHERE loguser>=:someloguser AND (loguser>:someloguser OR (logdate>=:somelogdate))
ORDER BY loguser,logdate,pk
) a
WHERE (a.i>3 OR loguser>:someloguser OR logdate>:somelogdate) AND rownum<=10

The pagination query above starts from some definite point in your sorting sequence (in this case the 3rd someloguser and somelogdate), and gives you up to 10 next rows from that point. All while using quick and efficient short Index Range Scans.

I believe this is what you wanted in 2004.


Regards,
George

George Robinson, November 02, 2007 - 5:12 pm UTC

In my previous post there is a typo.

IS:
/*+ INDEXC(testlog testlog(loguser,logdate,pk)) FIRST_ROWS */

SHOULD BE:
/*+ INDEX(testlog testlog(loguser,logdate,pk)) FIRST_ROWS */

counter against the google standard

Jia, November 07, 2007 - 3:04 pm UTC

I agree with R Flood and others about the need for displaying 1 million records on a case by case basis.


Consider a research institution that wants to calculate the average rainfall over the last century for a particular city.


In this case, if there are 1 million records that fall into this category, then there has to be a way to access all 1 million of them; otherwise the average will not be accurate.
Tom Kyte
November 07, 2007 - 6:22 pm UTC

you would never display 1,000,000 records.

you might quite possibly aggregate them down to ONE, but to display to an end user 1,000,000 records - what are they going to do with that? type them all into a calculator or something? think about it - "display 1,000,000 records" (end user runs screaming into the hall...)

zhuojm, November 12, 2007 - 7:16 am UTC

When I execute
select * from dual where rownum=1.9;
one row is selected.
Why?

George Robinson, November 14, 2007 - 6:07 pm UTC


The DUAL table contains only one row to begin with.

The pseudocolumn rownum contains natural numbers, so 1.9 must be converted to a natural number as well to perform the comparison.

I guess the comparison is done like this:
  WHERE rownum = TRUNC(1.9)
this is equivalent to
  WHERE rownum = 1



Tom Kyte
November 21, 2007 - 10:45 am UTC

no, this is a bug. it shouldn't do that.

zhuojm, November 16, 2007 - 2:53 am UTC

select 1 from dual where rownum=1.9 and rownum<>1.9

The row will also be selected,
so I do not think it means trunc(1.9).

George Robinson, November 16, 2007 - 5:42 pm UTC

I confirmed it, and indeed one row is returned.

The WHERE clause seems contradictory, and should always return FALSE.

It's illogical. I wonder what Tom will write about this?

I guess the Oracle developers never anticipated that some weird guy would try to compare rownum with non-natural numbers.


Regards,
George

zhuojm, November 19, 2007 - 8:23 am UTC

How to check whether data is good or not?

Dawar, January 18, 2008 - 2:22 am UTC

My manager sent me an Excel spreadsheet which contains 7000 records.
I need to delete records from a master table which contains 40000 records.

I loaded the data into an Oracle table through MS Access.
I chose only five columns in the new table instead of 35 columns.

I realized the datatype is VARCHAR2 in the newly created table.
But in the master table the datatypes are NUMBER and DATE.



If I run select to_number(emp_no) or any other column from table B,
I get an error on row 210.

I created another table C and took five columns from the master table; when I tried to load the data into table C,
I got an error message on row 210.

But if I put rownum < 210 it works.

Is it a data problem?
How can I check whether the data is good or not?
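
One common way to locate the bad rows (a sketch, not from the original thread - it assumes your staging table is B and the column is EMP_NO) is a small PL/SQL function that returns NULL instead of raising an error when the conversion fails:

create or replace function to_number_or_null( p_str in varchar2 ) return number
as
begin
    return to_number( p_str );
exception
    when value_error or invalid_number then
        return null;   -- the string would not convert to a number
end;
/

-- every row whose emp_no will not convert cleanly:
select rowid, emp_no
  from b
 where emp_no is not null
   and to_number_or_null( emp_no ) is null;

The same idea works for dates, with to_date inside the function.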

How to see the effect of FIRST_ROWS?

Marat, March 19, 2008 - 1:59 pm UTC

Dear Tom,
I can't understand how to see the effect of using FIRST_ROWS:

create table test_perf1 as select * from all_objects;
insert into test_perf1 select * from test_perf1;
insert into test_perf1 select * from test_perf1;

create index idx_test_perf1 on test_perf1(object_name);
analyze table test_perf1 compute statistics;

--a table for record timing statistics
CREATE TABLE TST
(
  F   NUMBER,
  F1  NUMBER
);


declare
    n number;
    cursor c is select /*+ ALL_ROWS */ * from test_perf1 order by object_name;
    r c%rowtype;
begin
    n := dbms_utility.GET_TIME;
    open c;
    fetch c into r;
    insert into tst values(1, dbms_utility.GET_TIME-n);
    commit;
end;
/
--run again
declare
    n number;
    cursor c is select /*+ ALL_ROWS */ * from test_perf1 order by object_name;
    r c%rowtype;
begin
    n := dbms_utility.GET_TIME;
    open c;
    fetch c into r;
    insert into tst values(1, dbms_utility.GET_TIME-n);
    commit;
end;
/

select * from tst;

    F         F1
----- ----------
    1       1198
    1        822

delete from tst;

declare
    n number;
    cursor c is select /*+ FIRST_ROWS */ * from test_perf1 order by object_name;
    r c%rowtype;
begin
    n := dbms_utility.GET_TIME;
    open c;
    fetch c into r;
    insert into tst values(1, dbms_utility.GET_TIME-n);
    commit;
end;
/
declare
    n number;
    cursor c is select /*+ FIRST_ROWS */ * from test_perf1 order by object_name;
    r c%rowtype;
begin
    n := dbms_utility.GET_TIME;
    open c;
    fetch c into r;
    insert into tst values(1, dbms_utility.GET_TIME-n);
    commit;
end;
/
select * from tst;

    F         F1
----- ----------
    1        877
    1        758


So there is no big difference in fetch time between FIRST_ROWS and ALL_ROWS. Could you clarify this?
Thank you.
Tom Kyte
March 24, 2008 - 9:51 am UTC

use sql trace and tkprof so you can see that plans actually changed and review the amount of work done

ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> create index t_idx on t(object_name);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @trace
ops$tkyte%ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA10GR2> declare
  2          cursor c1 is select /*+ ALL_ROWS */ * from t order by object_name;
  3          cursor c2 is select /*+ FIRST_ROWS */ * from t order by object_name;
  4          l_data t%rowtype;
  5  begin
  6          open c1;
  7          fetch c1 into l_data;
  8          close c1;
  9          open c2;
 10          fetch c2 into l_data;
 11          close c2;
 12  end;
 13  /

PL/SQL procedure successfully completed.


Now, in tkprof, we can see:


SELECT /*+ ALL_ROWS */ * FROM T ORDER BY OBJECT_NAME

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        1      0.24       0.54         50        689          9           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.25       0.55         50        690          9           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT ORDER BY (cr=689 pr=50 pw=693 time=546375 us)
  49767   TABLE ACCESS FULL T (cr=689 pr=0 pw=0 time=398207 us)
********************************************************************************
SELECT /*+ FIRST_ROWS */ * FROM T ORDER BY OBJECT_NAME

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          1          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.01          1          4          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID T (cr=3 pr=1 pw=0 time=967 us)
      1   INDEX FULL SCAN T_IDX (cr=2 pr=1 pw=0 time=932 us)(object id 56423)

Excellent

Manjunath, April 04, 2008 - 7:22 am UTC

Tom, you are simply superb!!! I have started loving Oracle...

The tax on memory

Debasish, May 20, 2008 - 3:11 am UTC

select *
from (
select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE ) a
where rownum <= MAX_ROWS
)
where rnum >= MIN_ROWS

In the above query, suppose my query is like:

select *
from (
select a.*, rownum rnum
from ( select empno,ename,sal from emp order by empno ) a
where rownum <= 50000
)
where rnum >= 49980

Assume there are one million records in the EMP table; then the innermost query (select empno,ename,sal from emp order by empno) will fetch all 1 million rows every time, which will tax memory/performance. Is there any other way to fetch a 20-row result set each time?


Tom Kyte
May 20, 2008 - 11:12 am UTC

no it won't, it does a top-n query optimization. It will be as efficient as possible.

it will NOT sort 1,000,000 records in memory.
it will have at most 50,000 records.

if there is an index on emp(empno), and empno is NOT NULL, it'll use the index in a full scan (not fast full, just a full scan) to retrieve the data sorted and stop.

if there isn't an index, it understands you only need 50,000 so it'll get the first 50,000 - sort them and hold them. It'll get record 50,001 and ask "is this less than the last - the 50,000th - record I already have - if not then discard it else remove the current 50,000th record and put this one into the 50,000 record set where it belongs". You won't use tons of temp, you won't use tons of memory.

read:

https://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html

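You can watch this top-n optimization happen yourself: run the pagination query under autotrace and look for the STOPKEY steps in the plan. For example, in SQL*Plus (using the EMP query from the post above):

set autotrace traceonly explain

select *
  from ( select a.*, rownum rnum
           from ( select empno, ename, sal from emp order by empno ) a
          where rownum <= 50000 )
 where rnum >= 49980;

A plan containing COUNT STOPKEY and SORT ORDER BY STOPKEY (or an index full scan with no sort at all, if the index on empno can be used) shows the database keeping only the top 50,000 rows rather than sorting the entire table.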

Performance

Serge, May 26, 2008 - 2:46 pm UTC

Tom,

Excellent thread. It was very helpful.
But I have a little question about this.

Suppose we have to do the following (at the same time from several pl/sql, pro*c... different programs):

select *
from ( select a.*, ROWNUM rnum
from ( select * from MY_VIEW where cond ) a
where ROWNUM <=
:MAX_ROW_TO_FETCH )
where rnum >= :MIN_ROW_TO_FETCH

-- MY_VIEW changes randomly so we can't change the view script.

we can also create a table with ROWNUM (rnum) as the pk and insert the retrieved data:

insert into my_table
select rownum rnum, a.*
from ( select * from MY_VIEW where cond ) a

and then we can easily access like this:

select *
from my_table
where :MIN_ROW_TO_FETCH <= rnum and rnum <= :MAX_ROW_TO_FETCH

which is faster (performance-wise): hitting the view, or inserting into and then hitting the table?
remember we are accessing the data from several programs at the same time.

thanks!
Tom Kyte
May 27, 2008 - 8:06 am UTC

insufficient data provided to really say....

think about the obvious consequences

a) the time to get the first page increased hugely - because you have to get all of the data and then insert.

b) if you are a web based application, you have to figure out how to mediate access to this one table (session id would have to be there, now you have to have a session id too)

c) (b) continued - you have to figure out how to clean up such a table over time.

d) I don't know what "MY_VIEW changes randomly so we can't change the view script." means or implies.



In general, I use:

select * 
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= MAX_ROWS )
 where rnum >= MIN_ROWS
/


because we know that no one really goes to page 100, they get bored, so why generate page 100, 101, and so on.

/*+ FIRST_ROWS */ vs /*+ FIRST_ROWS(n) */

A reader, August 08, 2008 - 6:37 pm UTC

What's the difference between these two hints? How do we use them?
Tom Kyte
August 12, 2008 - 4:26 am UTC

first one is the "old one", sort of deprecated - it does a mix of rule based and cost based optimization

the second one is the preferred method in the versions of Oracle that support it (all current releases); it does a true cost based optimization.

pc, August 11, 2008 - 8:27 am UTC

Can I incorporate this query into Pro*C? I was trying to do so, but I seem to be getting ORA-00936: missing expression.

Sorry if I sound stupid; I am quite new to Pro*C.

Tom Kyte
August 12, 2008 - 8:26 am UTC

big page here pc, not sure what bit you were looking at.

give a tiny tiny example using scott.emp - depending on the release - you might have to "hide" the sql from proc using dynamic sql.

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14407/pc_13dyn.htm#i2342

/*+ FIRST_ROWS */ vs /*+ FIRST_ROWS(n) */

A reader, August 12, 2008 - 11:59 am UTC

Can you show us an example where these two hints will yield different execution plans? Thanks.
Tom Kyte
August 13, 2008 - 8:26 am UTC

create table t as select * from all_objects order by object_name;
create index t_idx on t(object_id);
exec dbms_stats.gather_table_stats( user, 'T' );
@trace
begin
    for x in (select /*+ first_rows(1000) */ rownum r, t.* from t where object_id >0)
    loop
        exit when x.r >= 25;
    end loop;
    for x in (select /*+ first_rows */ rownum r, t.* from t where object_id >0)
    loop
        exit when x.r >= 25;
    end loop;
end;
/


SELECT /*+ first_rows(1000) */ ROWNUM R, T.* FROM T WHERE OBJECT_ID >0


Rows     Row Source Operation
-------  ---------------------------------------------------
    100  COUNT  (cr=5 pr=0 pw=0 time=7 us)
    100   TABLE ACCESS FULL T (cr=5 pr=0 pw=0 time=3 us cost=6 size=101000 card=1000)
********************************************************************************
SELECT /*+ first_rows */ ROWNUM R, T.* FROM T WHERE OBJECT_ID >0


Rows     Row Source Operation
-------  ---------------------------------------------------
    100  COUNT  (cr=68 pr=0 pw=0 time=55 us)
    100   TABLE ACCESS BY INDEX ROWID T (cr=68 pr=0 pw=0 time=51 us cost=51031 size=6836589 card=67689)
    100    INDEX RANGE SCAN T_IDX (cr=2 pr=0 pw=0 time=2 us cost=152 size=0 card=67689)(object id 77466)





/*+ FIRST_ROWS */ vs /*+ FIRST_ROWS(n) */

peter, August 13, 2008 - 5:38 pm UTC

Hi Tom,

What conclusion can I draw from your demo (August 12, 2008)? Can I safely assume that FIRST_ROWS(n) will produce similar or more efficient plans under most circumstances?

When you created table t, I noticed you ordered the data by object_name. Were you doing that to purposely create a bad clustering factor for object_id?
Tom Kyte
August 18, 2008 - 9:41 am UTC

first_rows is deprecated functionality, not to be used.

first_rows(n) is the correct and proper way forward....


Yes, I munged the clustering factor on purpose to make an index on object_id less efficient - so the cost based optimizer would stop using it for large range scans
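
If you want to see how well (or badly) clustered an index is, the dictionary exposes it; a quick check is to compare CLUSTERING_FACTOR to the table's block count (a value near BLOCKS means well clustered, a value near NUM_ROWS means poorly clustered):

select i.index_name, i.clustering_factor, t.blocks, t.num_rows
  from user_indexes i, user_tables t
 where i.table_name = t.table_name
   and t.table_name = 'T';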

Fast fetch of N records from each group

Karteek, August 18, 2008 - 2:20 am UTC

Tom,

If I am correct, FIRST_ROWS can improve performance by limiting the number of rows (1..n) using a fast index scan. In a case where I want the latest 10 records of EACH department, how do I limit to the latest 10 rows of each department (every department has millions of records)?

select * from (select a, b, c, row_number() over (partition by a, b order by start_time desc) rn from efact
) where rn <11

This query can get the latest 10 records under each group (a, b). As I said, each (a,b) group has millions of rows, and in such a scenario performance is really going bad, as the inner query has to do a lot of unwanted work before coming to the outer query. If I use FIRST_ROWS(10), that can only help in pulling 10 records in total, not 10 from EACH group. Hope I am able to explain the scenario. Can you suggest how it can be improved?

Also, assuming that this is the kind of only query that I run on this table 'efact', what would be a good combination of index on this?

1) group by a, b
2) order by start_time desc
3) col c also need to be retrieved in select clause.

can (a,b, start_time, c) be a one good combination?

Thanks Tom!

Tom Kyte
August 20, 2008 - 9:28 am UTC

you call it un-necessary

others would call it un-avoidable.


I would hope that NO index range scan would be used. I would demand a full scan of that table.

You need to get every row out of the table, break it up by a,b and sort within that by c, then return just the first ten. A traditional index is not going to help there (it could help get the first 10 - but after that, no - it would have to read over millions of index entries to find the next ten)

Pagination Query Execution Plan

peter, August 22, 2008 - 7:40 pm UTC

Please see the example below. SQL #2 uses the index on y.col4 because the value in the predicate has low cardinality. SQL #1 uses a full table scan instead. Why wouldn't Oracle use the index on x(timestamp) for SQL #1? Wouldn't that generate fewer LIOs?

CREATE TABLE x AS
SELECT rownum x_id, SYSDATE - MOD(rownum, 200) timestamp, object_type,
       object_Name
FROM   all_objects;

ALTER TABLE x ADD CONSTRAINT x_pk PRIMARY KEY (x_id);

CREATE TABLE y (
   y_id  NUMBER PRIMARY KEY,
   x_id  NUMBER NOT NULL REFERENCES x(x_id),
   col1  VARCHAR2(100),
   col2  VARCHAR2(100),
   col3  VARCHAR2(100),
   col4  VARCHAR2(10)
);

CREATE SEQUENCE y_seq;

DECLARE
   CURSOR c IS
      SELECT x_id FROM x;

BEGIN
   FOR x in c LOOP
      INSERT INTO y VALUES (
         y_seq.NEXTVAL,
         x.x_id,
         RPAD('x', 100, 'x'),
         RPAD('x', 100, 'x'),
         RPAD('x', 100, 'x'),
         'New'
      );

      INSERT INTO y VALUES (
         y_seq.NEXTVAL,
         x.x_id,
         RPAD('x', 100, 'x'),
         RPAD('x', 100, 'x'),
         RPAD('x', 100, 'x'),
         'Old'
      );
   END LOOP;

   FOR x in c LOOP
      INSERT INTO y VALUES (
         y_seq.NEXTVAL,
         x.x_id,
         RPAD('x', 100, 'x'),
         RPAD('x', 100, 'x'),
         RPAD('x', 100, 'x'),
         dbms_random.String('U', 3)
      );
   END LOOP;
END;
/

COMMIT;

select count(distinct timestamp) from x;

COUNT(DISTINCTTIMESTAMP)
------------------------
                     200



CREATE INDEX x_n1 ON x(timestamp);
CREATE INDEX y_fk1 ON y(x_id);
CREATE INDEX y_n1 ON y(col4);

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => user,
    tabname          => 'X',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE SKEWONLY',
    cascade          => TRUE);
END;
/

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => user,
    tabname          => 'Y',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE SKEWONLY',
    cascade          => TRUE);
END;
/


SELECT COUNT(*) FROM y;

  COUNT(*)
----------
    141483

SELECT COUNT(DISTINCT col4) FROM y;

COUNT(DISTINCTCOL4)
-------------------
              16377

set timing on
set autotrace traceonly

-- --------------------------------------------------------
-- SQL #1
-- --------------------------------------------------------
SELECT object_Name, col2
FROM (
   SELECT object_Name, col2, rownum rn
   FROM   (
      SELECT x.object_name, y.col2
      FROM   x, y
      WHERE  x.x_id = y.x_id AND
             y.col4 = 'New'
      ORDER  BY x.timestamp
   ) 
   WHERE  rownum <= 50
)
WHERE  rn >= 1;

Elapsed: 00:00:00.09

-----------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |    50 | 11600 |       |  3441   (1)| 00:00:42 |
|*  1 |  VIEW                    |      |    50 | 11600 |       |  3441   (1)| 00:00:42 |
|*  2 |   COUNT STOPKEY          |      |       |       |       |            |          |
|   3 |    VIEW                  |      | 46660 |  9979K|       |  3441   (1)| 00:00:42 |
|*  4 |     SORT ORDER BY STOPKEY|      | 46660 |  6743K|    14M|  3441   (1)| 00:00:42 |
|*  5 |      HASH JOIN           |      | 46660 |  6743K|  2304K|  1907   (1)| 00:00:23 |
|   6 |       TABLE ACCESS FULL  | X    | 47161 |  1750K|       |    79   (2)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL  | Y    | 46660 |  5012K|       |  1443   (1)| 00:00:18 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM<=50)
   4 - filter(ROWNUM<=50)
   5 - access("X"."X_ID"="Y"."X_ID")
   7 - filter("Y"."COL4"='New')

Statistics
----------------------------------------------------------
       6833  consistent gets
          0  physical reads

-- --------------------------------------------------------
-- SQL #2
-- --------------------------------------------------------
SELECT object_Name, col2
FROM (
   SELECT object_Name, col2, rownum rn
   FROM   (
      SELECT x.object_name, y.col2
      FROM   x, y
      WHERE  x.x_id = y.x_id AND
             y.col4 = 'AAD'
      ORDER  BY x.timestamp
   ) 
   WHERE  rownum <= 50
)
WHERE  rn >= 1;

Elapsed: 00:00:00.00

-----------------------------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |      |     5 |  1160 |     3  (34)| 00:00:01 |
|*  1 |  VIEW                            |      |     5 |  1160 |     3  (34)| 00:00:01 |
|*  2 |   COUNT STOPKEY                  |      |       |       |            |          |
|   3 |    VIEW                          |      |     5 |  1095 |     3  (34)| 00:00:01 |
|*  4 |     SORT ORDER BY STOPKEY        |      |     5 |   740 |     3  (34)| 00:00:01 |
|   5 |      NESTED LOOPS                |      |     5 |   740 |     2   (0)| 00:00:01 |
|   6 |       TABLE ACCESS BY INDEX ROWID| Y    |     5 |   550 |     1   (0)| 00:00:01 |
|*  7 |        INDEX RANGE SCAN          | Y_N1 |     5 |       |     1   (0)| 00:00:01 |
|   8 |       TABLE ACCESS BY INDEX ROWID| X    |     1 |    38 |     1   (0)| 00:00:01 |
|*  9 |        INDEX UNIQUE SCAN         | X_PK |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM<=50)
   4 - filter(ROWNUM<=50)
   7 - access("Y"."COL4"='AAD')
   9 - access("X"."X_ID"="Y"."X_ID")

Statistics
----------------------------
         10  consistent gets
          0  physical reads

Tom Kyte
August 26, 2008 - 7:48 pm UTC

hint it and you tell us what happens.

it thinks in the "new" case - I'll get 1/3rd of the data using an index, and join it to X, then sort

it thinks in the "aad" case - I'll get nothing from Y using an index, and have to join to nothing in X, then sort nothing



You don't want to get 1/3rd of the data using an index, and then use another index to pick up the timestamp, and then sort.

peter, August 27, 2008 - 1:08 pm UTC

Sorry, I misread the execution plan. For SQL #1, I was expecting Oracle to use the index on x.timestamp (x_n1) even though there's a "y.col4 = 'New'" predicate. I'm not able to reproduce it now, but I've seen cases where the execution plan drives off the index defined on the ORDER BY column even though there are other predicates.

I ran more tests and the following results puzzled me.

SQL #1 - Why doesn't Oracle use the index on x.timestamp?
SQL #2 - Why does Oracle still not use the index on x.timestamp despite the hint?
SQL #3 - This is what I had expected SQL #1 would do.


SELECT * FROM v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Prod
PL/SQL Release 10.2.0.3.0 - Production
CORE    10.2.0.3.0      Production
TNS for 32-bit Windows: Version 10.2.0.3.0 - Production
NLSRTL Version 10.2.0.3.0 - Production

SELECT index_name, column_name
FROM   user_ind_columns
WHERE  table_name = 'X'
ORDER  BY 1, column_position;

INDEX_NAME                     COLUMN_NAME
------------------------------ -----------
X_N1                           TIMESTAMP
X_PK                           X_ID

-- ===========================================================================
-- SQL #1
-- ===========================================================================
SELECT *
FROM (
   SELECT a.*, rownum rn
   FROM (
      SELECT *
      FROM   x
      ORDER  BY timestamp
   ) a
   WHERE  rownum <= 50
)
WHERE  rn >= 1;

-----------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |    50 |  4600 |       |   622   (2)| 00:00:08 |
|*  1 |  VIEW                    |      |    50 |  4600 |       |   622   (2)| 00:00:08 |
|*  2 |   COUNT STOPKEY          |      |       |       |       |            |          |
|   3 |    VIEW                  |      | 47161 |  3638K|       |   622   (2)| 00:00:08 |
|*  4 |     SORT ORDER BY STOPKEY|      | 47161 |  2072K|  5560K|   622   (2)| 00:00:08 |
|   5 |      TABLE ACCESS FULL   | X    | 47161 |  2072K|       |    79   (2)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM<=50)
   4 - filter(ROWNUM<=50)

-- ===========================================================================
-- SQL #2
-- ===========================================================================
SELECT *
FROM (
   SELECT a.*, rownum rn
   FROM (
      SELECT /*+ INDEX(x x_n1) */ *
      FROM   x
      ORDER  BY timestamp
   ) a
   WHERE  rownum <= 50
)
WHERE  rn >= 1;

-----------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |    50 |  4600 |       |   622   (2)| 00:00:08 |
|*  1 |  VIEW                    |      |    50 |  4600 |       |   622   (2)| 00:00:08 |
|*  2 |   COUNT STOPKEY          |      |       |       |       |            |          |
|   3 |    VIEW                  |      | 47161 |  3638K|       |   622   (2)| 00:00:08 |
|*  4 |     SORT ORDER BY STOPKEY|      | 47161 |  2072K|  5560K|   622   (2)| 00:00:08 |
|   5 |      TABLE ACCESS FULL   | X    | 47161 |  2072K|       |    79   (2)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM<=50)
   4 - filter(ROWNUM<=50)


-- ===========================================================================
-- SQL #3
-- ===========================================================================
SELECT *
FROM (
   SELECT a.*, rownum rn
   FROM (
      SELECT *
      FROM   x
      ORDER  BY x_id
   ) a
   WHERE  rownum <= 50
)
WHERE  rn >= 1;

---------------------------------------------------------------------------------------
| Id  | Operation                      | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |      |    50 |  4600 |     1   (0)| 00:00:01 |
|*  1 |  VIEW                          |      |    50 |  4600 |     1   (0)| 00:00:01 |
|*  2 |   COUNT STOPKEY                |      |       |       |            |          |
|   3 |    VIEW                        |      |    50 |  3950 |     1   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| X    | 47161 |  2072K|     1   (0)| 00:00:01 |
|   5 |      INDEX FULL SCAN           | X_PK |    50 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("RN">=1)
   2 - filter(ROWNUM<=50)

Tom Kyte
August 29, 2008 - 1:02 pm UTC

is that column NULLABLE?

yes, it is

 Name                                     Null?    Type
 ---------------------------------------- -------- ----------------------------
 X_ID                                     NOT NULL NUMBER
 TIMESTAMP                                         DATE
 OBJECT_TYPE                                       VARCHAR2(19)
 OBJECT_NAME                              NOT NULL VARCHAR2(30)




therefore, this query:

SELECT *
FROM (
   SELECT a.*, rownum rn
   FROM (
      SELECT *
      FROM   x
      ORDER  BY timestamp
   ) a
   WHERE  rownum <= 50
)
WHERE  rn >= 1;


cannot USE an index on timestamp - since if timestamp were NULL, the row would not be in that particular index (an entirely null key entry is never made in a standard b*tree index).

For example, suppose the table has 1 row in it, and timestamp was null for that row. Using the index on timestamp - we cannot SEE that row - it is not in the index, we would have returned zero records.


alter that column to be NOT NULL (and in fact, look at all of your tables and ask yourself - what columns are really NOT NULL - and add the constraint)
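
For the test case above that is just:

alter table x modify (timestamp not null);

With the constraint in place, the optimizer knows every row must appear in the index on timestamp, so the ORDER BY timestamp pagination query is free to walk that index instead of doing a full scan and sort.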

Does the order by stay or do you need another one

Slavko Brkic, October 24, 2008 - 5:59 am UTC

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/

Maybe this is really an order by question, but here goes anyway. Does the order by from the inner select hold all the way through, or does one need to add another order by to be sure? In the explain plan, adding another order by does not add another sort. This indicates that Oracle remembers it has already sorted the inner select and does not need to do another sort. But can one be entirely sure?

found the answer

Slavko Brkic, October 24, 2008 - 10:07 am UTC

Nick, December 04, 2008 - 5:43 am UTC

Hello,

I am trying to use database paging for a web application. There is a little problem with my procedure, and I can't see what it is.

procedure:

create or replace procedure gsb_sel_part
(p_begin in number, p_einde in number, cs out pkgGsb.gsbType)
AS
BEGIN
OPEN cs FOR
select * from ( select a.*, rownum rnum from ( select * from gastenboek order by gastenboek_id DESC ) a where rownum <= p_einde ) where rnum >= p_begin;
END gsb_sel_part;
/


I get an error that the expression is of the wrong type.

Package:

CREATE OR REPLACE PACKAGE pkgGsb
IS
TYPE gsbType IS REF CURSOR RETURN gsb%ROWTYPE;
END pkgGsb;
/

I really can't see the problem
Tom Kyte
December 09, 2008 - 11:44 am UTC

well, your ref cursor is defined to have the columns the table does...

whereas your select statement selects out rownum IN ADDITION to the columns in the table.

so, they do not match.

either

a) use sys_refcursor - a weakly typed cursor (sketched below)
b) select the columns you want to select, not select * in the open
c) redefine your record to include that extra column
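
Option (a) is the smallest change; a sketch of the procedure using the weakly typed SYS_REFCURSOR, keeping the original names:

create or replace procedure gsb_sel_part
( p_begin in number, p_einde in number, cs out sys_refcursor )
as
begin
    open cs for
        select *
          from ( select a.*, rownum rnum
                   from ( select * from gastenboek order by gastenboek_id desc ) a
                  where rownum <= p_einde )
         where rnum >= p_begin;
end gsb_sel_part;
/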

Pagination in web application.

Vijay, January 06, 2009 - 1:42 pm UTC

Hi Tom,
We have a web application where pagination is used. The page shows 10 rows at a time and also shows the total number of records returned by the search.

The way it is currently implemented is:

1) They get a count of the records for the search.
2) Then they execute the query again to get the results

For instance

CREATE TABLE test_paging
(
object_name VARCHAR2(30)
);

INSERT INTO test_paging
select object_name from dba_objects;

COMMIT;

SELECT count(1) FROM test_paging;

SELECT * FROM (
SELECT object_name, rownum RNBR FROM test_paging
ORDER BY OBJECT_NAME
)
WHERE RNBR >= ((:v_page-1)*10)+1 AND RNBR <= :v_page*10 ;


I feel that the same can be implemented using the following

SELECT COUNT(1) FROM test_paging;

SELECT * FROM (
SELECT OBJECT_NAME, ROW_NUMBER() OVER( ORDER BY OBJECT_NAME) RNBR FROM TEST_PAGING
ORDER BY OBJECT_NAME
)
WHERE RNBR >= ((:v_page-1)*10)+1 AND RNBR <= :v_page*10

or

SELECT * FROM (
SELECT OBJECT_NAME, ROW_NUMBER() OVER( ORDER BY OBJECT_NAME) RNBR,
COUNT(1) OVER() NORECS
FROM TEST_PAGING
ORDER BY OBJECT_NAME
)
WHERE RNBR >= ((:v_page-1)*10)+1 AND RNBR <= :v_page*10

Can you please let me know whether the approach I have suggested is a better option, and which one is more efficient?


Sorting in Pagination Query

A reader, February 13, 2009 - 8:13 pm UTC

In the pagination query below, if I have an index on X.col3, Oracle would probably be able to use that index to retrieve data.

SELECT *
FROM (
   SELECT col1, col2, rownum rn
   FROM   (
      SELECT x.col1, y.col2
      FROM   x, y
      WHERE  x.x_id = y.x_id
      ORDER  BY x.col3
   ) 
   WHERE  rownum <= 50
)
WHERE  rn >= 1;

What if I'm sorting by two columns, one from X, another from Y? How would I optimize this type of SQL so that the first 50 records can be retrieved efficiently?

SELECT *
FROM (
   SELECT col1, col2, rownum rn
   FROM   (
      SELECT x.col1, y.col2
      FROM   x, y
      WHERE  x.x_id = y.x_id
      ORDER  BY x.col3, y.col4
   ) 
   WHERE  rownum <= 50
)
WHERE  rn >= 1;

Use of index in pagination and sorting

A reader, February 26, 2009 - 4:02 pm UTC

In my example below, I created a record that contains a NULL value in object_name. My questions are:

* How come the index can be used in the first two SQLs but not the third? I was surprised that even with a NULL value present, the index is still used.

* If the SQL will be sorted by either ASC or DESC order on the object_name as in the first two SQLs, is there a better way to improve the performance than by creating two indexes like I did in the example?

CREATE TABLE x AS
SELECT object_id, object_name, object_type, status
FROM   all_objects;

ALTER TABLE x MODIFY object_name NULL;

INSERT INTO x VALUES (100000, NULL, NULL, NULL);

CREATE INDEX x_idx ON x (object_name, object_id);
CREATE INDEX x_idx2 ON x (object_name DESC, object_id);


SELECT * 
FROM  (
   SELECT a.*, rownum rn
   FROM (
      SELECT *
      FROM   x
      ORDER  BY object_name, object_id
   ) a
   WHERE  rownum <= 5)
WHERE  rn >= 1;

----------------------------------------------------------------------------------------
| Id  | Operation                      | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |       |     5 |   295 |     6   (0)| 00:00:01 |
|*  1 |  VIEW                          |       |     5 |   295 |     6   (0)| 00:00:01 |
|*  2 |   COUNT STOPKEY                |       |       |       |            |          |
|   3 |    VIEW                        |       | 28020 |  1258K|     6   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| X     | 28020 |  1258K|     6   (0)| 00:00:01 |
|   5 |      INDEX FULL SCAN           | X_IDX |     5 |       |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------


SELECT * 
FROM  (
   SELECT a.*, rownum rn
   FROM (
      SELECT *
      FROM   x
      ORDER  BY object_name DESC, object_id
   ) a
   WHERE  rownum <= 5)
WHERE  rn >= 1;

-----------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |     5 |   295 |     6   (0)| 00:00:01 |
|*  1 |  VIEW                          |        |     5 |   295 |     6   (0)| 00:00:01 |
|*  2 |   COUNT STOPKEY                |        |       |       |            |          |
|   3 |    VIEW                        |        | 28020 |  1258K|     6   (0)| 00:00:01 |
|   4 |     TABLE ACCESS BY INDEX ROWID| X      | 28020 |  1258K|     6   (0)| 00:00:01 |
|   5 |      INDEX FULL SCAN           | X_IDX2 |     5 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

SELECT * 
FROM  (
   SELECT a.*, rownum rn
   FROM (
      SELECT *
      FROM   x
      ORDER  BY object_name NULLs FIRST, object_id
   ) a
   WHERE  rownum <= 5)
WHERE  rn >= 1;

-----------------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |     5 |   295 |       |   418   (2)| 00:00:06 |
|*  1 |  VIEW                    |      |     5 |   295 |       |   418   (2)| 00:00:06 |
|*  2 |   COUNT STOPKEY          |      |       |       |       |            |          |
|   3 |    VIEW                  |      | 28020 |  1258K|       |   418   (2)| 00:00:06 |
|*  4 |     SORT ORDER BY STOPKEY|      | 28020 |  1258K|  3320K|   418   (2)| 00:00:06 |
|   5 |      TABLE ACCESS FULL   | X    | 28020 |  1258K|       |    87   (3)| 00:00:02 |
-----------------------------------------------------------------------------------------


Tom Kyte
March 03, 2009 - 8:23 am UTC

because nulls sort "higher" by default (they come LAST in an ASCENDING sort).


So, your order by


order by object_name nulls first, object_id


is sort of like:


order by object_name (but put things that start with Z first), object_id


You are not reading the data in sort order anymore, you are reading the "end" of the index to get NULLS first and then skipping back to the "front" of the index. You cannot do that.


ops$tkyte%ORA10GR2> create index x_idx3 on x( decode(object_name,null,1,2), object_name, object_id );

Index created.

ops$tkyte%ORA10GR2> SELECT *
  2  FROM  (
  3     SELECT a.*, rownum rn
  4     FROM (
  5        SELECT *
  6        FROM   x
  7        ORDER  BY decode(object_name,null,1,2), object_name, object_id
  8     ) a
  9     WHERE  rownum <= 5)
 10  WHERE  rn >= 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 4280432357

------------------------------------------------------------------------
| Id  | Operation                      | Name   | Rows  | Bytes | Cost (
------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |     5 |   295 |     6
|*  1 |  VIEW                          |        |     5 |   295 |     6
|*  2 |   COUNT STOPKEY                |        |       |       |
|   3 |    VIEW                        |        | 45111 |  2026K|     6
|   4 |     TABLE ACCESS BY INDEX ROWID| X      | 45111 |  2026K|     6
|   5 |      INDEX FULL SCAN           | X_IDX3 |     5 |       |     3
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN">=1)
   2 - filter(ROWNUM<=5)



that index would facilitate that ordering.

NULLs and Index

A reader, March 03, 2009 - 2:34 pm UTC

but aren't NULLs not indexed? At what point in the data access path in the above example did Oracle retrieve NULLs?
Tom Kyte
March 03, 2009 - 9:10 pm UTC

yes, nulls are INDEXED.


Only they are "high" values - LARGE values, BIG values. So, they are on the right hand side of the index. Low values on the left, high values on the right.


so, when you sort:

ORDER BY object_name NULLs FIRST, object_id


you are asking for a bit of the far right hand side to come first AND THEN go back over to the left to finish up (but don't go too far to the right - you'll repeat)


The analogy I tried to use was:

order by object_name (but put things that start with Z first), object_id


What if Object_name was NOT NULL.
What if object_name started with one of the 26 ASCII characters A-Z.

What if you said "order by object_name - but please put Z first and then A-Y"


that is what ordering by "object_name NULLS FIRST" is like, you are re-arranging the index. Which we cannot do.

So, the index cannot be used.

Thomas, March 04, 2009 - 11:35 am UTC

NULLs are indexed only if not all of the indexed columns are NULL. In this case, at least one column of the index (object_id) is always non-NULL, so rows with a NULL object_name will be part of that index.
Tom Kyte
March 04, 2009 - 1:43 pm UTC

correct, I should have been more precise above, NULLs are indexed in this case. It wasn't because null values were missing in the index (that would have precluded the index from ever being used in all of the examples), but rather that nulls sort "high" or "big"


See
http://asktom.oracle.com/Misc/something-about-nothing.html
for more info.
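
A small sketch illustrating the rule - an index entry is made unless every key column is null:

create table t2 ( a number, b number );
create index t2_ab on t2( a, b );

insert into t2 values ( null, 1 );    -- indexed: at least one key column is not null
insert into t2 values ( null, null ); -- NOT indexed: the entire key is null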

rownum grouping

Pank, March 19, 2009 - 12:48 pm UTC

Tom,

Can we group based on the number of rows, using rownum or any other method? I mean, if a table contains 20 rows and we need to divide those rows into groups of two: the first two rows get group number 1, the 3rd and 4th rows get group number 2, and so on.

Thanks,
Panks
Tom Kyte
March 23, 2009 - 9:40 am UTC

sure, that would work, you can do this:

ops$tkyte%ORA10GR2> select trunc((rownum-0.1)/2)+1, ename from scott.emp;

TRUNC((ROWNUM-0.1)/2)+1 ENAME
----------------------- ----------
                      1 SMITH
                      1 ALLEN
                      2 WARD
                      2 JONES
                      3 MARTIN
                      3 BLAKE
                      4 CLARK
                      4 KING
                      5 TURNER
                      5 ADAMS
                      6 JAMES
                      6 FORD
                      7 MILLER
                      7 SCOTT

14 rows selected.

or, if you want "five groups", regardless of the number of rows, you can use ntile:

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select ntile(5) over (order by ename) nt, ename from scott.emp;

        NT ENAME
---------- ----------
         1 ADAMS
         1 ALLEN
         1 BLAKE
         2 CLARK
         2 FORD
         2 JAMES
         3 JONES
         3 KING
         3 MARTIN
         4 MILLER
         4 SCOTT
         4 SMITH
         5 TURNER
         5 WARD

14 rows selected.


Pagination and NLSSort

A reader, March 23, 2009 - 1:25 pm UTC

Test case provided below. Can you please explain why the index doesn't get used in the first case, but is used in the second case? Thanks.

CREATE TABLE x AS
SELECT * FROM all_objects;

CREATE INDEX x_LOWER_nlssort_idx ON x(LOWER(NLSSORT(object_name, 'NLS_SORT=FRENCH')));

exec dbms_stats.gather_table_stats( user, 'X' );


-- --------------------------------------------------------------------------------------
-- Why doesn't Oracle use the function-based index here?
-- --------------------------------------------------------------------------------------
SELECT /*+ gather_plan_statistics */ *
FROM (
   SELECT *
   FROM   x
   ORDER  BY LOWER(NLSSORT(object_name, 'NLS_SORT=FRENCH'))
)
WHERE  rownum <= 5;

SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'iostats last'));

------------------------------------------------------------------------------------------
| Id  | Operation               | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------
|*  1 |  COUNT STOPKEY          |      |      1 |        |      5 |00:00:01.13 |     654 |
|   2 |   VIEW                  |      |      1 |  47097 |      5 |00:00:01.13 |     654 |
|*  3 |    SORT ORDER BY STOPKEY|      |      1 |  47097 |      5 |00:00:01.13 |     654 |
|   4 |     TABLE ACCESS FULL   | X    |      1 |  47097 |  47097 |00:00:00.05 |     654 |
------------------------------------------------------------------------------------------

-- --------------------------------------------------------------------------------------
-- Oracle uses the function-based index as expected.
-- --------------------------------------------------------------------------------------
CREATE INDEX x_nlssort_LOWER_idx ON x(NLSSORT(LOWER(object_name), 'NLS_SORT=FRENCH'));

SELECT /*+ gather_plan_statistics */ *
FROM (
   SELECT *
   FROM   x
   ORDER  BY NLSSORT(LOWER(object_name), 'NLS_SORT=FRENCH')
)
WHERE  rownum <= 5;

SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'iostats last'));

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------
|*  1 |  COUNT STOPKEY                |                     |      1 |        |      5 |00:00:00.01 |       8 |
|   2 |   VIEW                        |                     |      1 |      5 |      5 |00:00:00.01 |       8 |
|   3 |    TABLE ACCESS BY INDEX ROWID| X                   |      1 |  47097 |      5 |00:00:00.01 |       8 |
|   4 |     INDEX FULL SCAN           | X_NLSSORT_LOWER_IDX |      1 |      5 |      5 |00:00:00.01 |       4 |
---------------------------------------------------------------------------------------------------------------

Tom Kyte
March 26, 2009 - 1:28 pm UTC

I cannot reproduce in my 10g instance on linux.

ops$tkyte%ORA10GR2> drop table t purge;

Table dropped.

ops$tkyte%ORA10GR2> set serveroutput off
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE TABLE t
  2  AS
  3  SELECT *
  4    FROM all_objects;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE INDEX x_LOWER_nlssort_idx ON t(LOWER(NLSSORT(object_name, 'NLS_SORT=FRENCH')));

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @at
ops$tkyte%ORA10GR2> column PLAN_TABLE_OUTPUT format a72 truncate
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> SELECT *
  2  FROM (
  3     SELECT *
  4        FROM   t
  5               ORDER  BY LOWER(NLSSORT(object_name, 'NLS_SORT=FRENCH'))
  6                   )
  7                   WHERE  rownum <= 5;

Execution Plan
----------------------------------------------------------
Plan hash value: 1489985632

------------------------------------------------------------------------
| Id  | Operation                     | Name                | Rows  | By
------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                     |     5 |
|*  1 |  COUNT STOPKEY                |                     |       |
|   2 |   VIEW                        |                     |     5 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T                   | 50027 |  4
|   4 |     INDEX FULL SCAN           | X_LOWER_NLSSORT_IDX |     5 |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> SELECT /*+ gather_plan_statistics */ *
  2  FROM (
  3     SELECT *
  4        FROM   t
  5               ORDER  BY LOWER(NLSSORT(object_name, 'NLS_SORT=FRENCH'))
  6                   )
  7                   WHERE  rownum <= 5;

OWNER                          OBJECT_NAME
------------------------------ ------------------------------
SUBOBJECT_NAME                  OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE
------------------------------ ---------- -------------- -------------------
CREATED   LAST_DDL_ TIMESTAMP           STATUS  T G S
--------- --------- ------------------- ------- - - -
ORDSYS                         /aaac4c04_MlibAddRIF
                                    43725                JAVA CLASS
30-JUN-05 30-JUN-05 2005-06-30:19:31:20 VALID   N N N

PUBLIC                         /aaac4c04_MlibAddRIF
                                    44866                SYNONYM
30-JUN-05 30-JUN-05 2005-06-30:19:31:56 VALID   N N N

SYS                            /aaafddd5_PatternUnixDot
                                    19331                JAVA CLASS
30-JUN-05 30-JUN-05 2005-06-30:19:24:57 VALID   N N N

PUBLIC                         /aaafddd5_PatternUnixDot
                                    33205                SYNONYM
30-JUN-05 30-JUN-05 2005-06-30:19:24:57 VALID   N N N

SYS                            /aab67636_BasicScrollBarUIProp
                                    12719                JAVA CLASS
30-JUN-05 30-JUN-05 2005-06-30:19:24:57 VALID   N N N


ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> SELECT * FROM TABLE(dbms_xplan.display_cursor(NULL, NULL, 'iostats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------
SQL_ID  8wvmstmqvb5zc, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ * FROM (    SELECT *       FROM   t
LOWER(NLSSORT(object_name, 'NLS_SORT=FRENCH'))    )    WHERE  rownum <=

Plan hash value: 1489985632

------------------------------------------------------------------------
| Id  | Operation                     | Name                | Starts | E
------------------------------------------------------------------------
|*  1 |  COUNT STOPKEY                |                     |      1 |
|   2 |   VIEW                        |                     |      1 |
|   3 |    TABLE ACCESS BY INDEX ROWID| T                   |      1 |
|   4 |     INDEX FULL SCAN           | X_LOWER_NLSSORT_IDX |      1 |
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=5)


21 rows selected.


Thanks Tom

Pank, March 25, 2009 - 6:44 am UTC

Excellent as always.


Ashok, April 27, 2009 - 10:58 am UTC

Most efficient:

select t1.* from TEST t1,
(select rowid rid, rownum rnum from TEST
where rownum < 20) t2
where t1.rowid = t2.rid

Any comments?

Fetch records from 10 to 20

Ashok Bathini, April 27, 2009 - 11:00 am UTC

(Corrected)Most efficient way:
Fetch records from 10 to 20:

select t1.* from TEST t1,
(select * from (select rowid rid, rownum rnum from TEST
where rownum < 20) where rnum >10) t2
where t1.rowid = t2.rid

Any comments?
Tom Kyte
April 27, 2009 - 2:31 pm UTC

comment:

you are getting a random set of records. There is no such thing as the first record, the second record the 10th record UNTIL AND UNLESS you have a (deterministic) order by statement.


so, where is your order by?

@Ashok Bathini re:"Most efficient way"

Stew Ashton, April 28, 2009 - 8:01 am UTC


Ashok,

I posted the same approach a few years ago :
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:76812348057#411983300346159899
You leave out a lot:
1) the ORDER BY, as Tom said above (I use two);
2) the index capable of satisfying the WHERE and the ORDER BY;
3) the cardinality hint (or maybe FIRST_ROWS ?) that helps the Optimizer choose the right execution plan for the outer query.

When you have all of the above, you get an efficient query; however, 2) is not always possible in "real world" queries.

When all is said and done, is "our" approach really more efficient than Tom's original reply? Well, maybe, but only for much later pages, or when the outer query is complex -- in other words, almost never in real life.
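
For completeness, a sketch of the rowid-join variant with all three ingredients in place (the table, column names and bind values here are illustrative only - it assumes an index on TEST(owner, object_name) that can satisfy both the WHERE and the ORDER BY):

select t1.*
  from ( select rid, rownum rnum
           from ( select /*+ first_rows(25) */ rowid rid
                    from test
                   where owner = :owner
                   order by owner, object_name )
          where rownum <= :max_rows ) t2,
       test t1
 where t2.rnum >= :min_rows
   and t1.rowid = t2.rid
 order by t2.rnum;

Note the ORDER BY on the outer query: the join back to the table does not promise to preserve the inner ordering.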

9 years later, still does the trick

David, July 10, 2009 - 10:34 am UTC

I'm not sure why Oracle has to be so convoluted (MySQL does the same thing in fewer, less confusing lines of code), but this query pointed out exactly what I'd been missing.

I noticed that the posting date was 2000. Nine years later, this is exactly what I was looking for. Props for that.

David

FIRST_ROWS_N and rownum

Avi, September 02, 2009 - 4:51 am UTC

Dear Tom,
Recently I have been informed that if I have a requirement of, say, 5 rows in my output then I should use the first_rows(5) hint for faster results. In my env optimizer_mode=choose.
I tested it in my test env and found it working. Can you please explain why this works this way even though the result is the same?

1.
select col1, col2, col3, ...........
from table1 a, table2, ...........
where .........
and ..........


and rownum <=5;
2.
select /*+ first_rows(5) */ col1, col2, col3, ...........
from table1 a, table2, ...........
where .........
and ..........


and rownum <=5;

Here I found the 2nd query, with the hint, coming out faster.

Regards,
Avi
Tom Kyte
September 02, 2009 - 10:36 am UTC

compare plans

but yes, I have myself written many times - first_rows is a 'good hint', a useful hint.

It is in this case telling the optimizer to get the first 5 rows as fast as possible.

but, post both plans - do a tkprof, show the output of that.


for all I know, your tables are not analyzed and the first one is using the RBO and the second the CBO - we'd need a tad bit more information before saying anything.

top n optimization

Rahul Kumar, March 21, 2010 - 6:24 pm UTC

Hi Tom,
I'm a java developer and currently involved with pagination.
While going through all this discussion, I want to know the difference between:

a)
select name, rownum as rnum, count(name) over (partition by name) as total_count from (select name from employee order by name) where rownum <=100;


b)
select name, rownum as rnum from (select name from employee order by name) where rownum <=100;


performance wise.

In this same discussion you have mentioned top-n optimization and how count(name) will force a scan of the entire table. For the time being, I suppose that is true. But then the sorting algorithm will work the same way in both cases. Even in case (b), the 101st row (and so on up to the 100,000th row) will be checked against the 100th row; if it is not less, it will be discarded right away. That is to be expected from such an algorithm, and hence it will scan the full table anyway, so count(name) here should have no effect on performance.
Tom Kyte
March 22, 2010 - 9:12 am UTC

not much difference.

both return the same 100 rows.

query (b) could return the first row RIGHT AWAY. It doesn't need the last row or some set of rows to start returning the data.

query (a) needs to get some number of rows (probably all 100 in reality) in order to return the first row.

query (a) returns a bogus answer if you ask me. I don't see the point of counting the names, partitioned by name, when you use a "get me 100 records" query.


What if the name "A" is the lowest name sort wise (so it comes first). What if there are 102 rows with "A", you will report "100"

Or what if there are 99 rows with A and 2,000 with B. You will report a-99, b-1


Not sure you want to do that query that way, doesn't seem like it would make sense.

Select Top 2 Dates by Client

IreneA, April 01, 2010 - 3:16 pm UTC

Hi Tom

I'm fairly new to Oracle and have purchased your books.
Not sure if this question conforms to your rules, as it relates to the original question here. I've been searching here and elsewhere and reading through your books (and others) trying to find a sample similar to what I need.

I'm trying to write a query where I'm only interested in the top 2 records by ORDATE in descending order.

I then need to compare the max(ordate) to the one just below it and determine whether the difference between the top two dates is at least 18 months, for a promotional campaign.

the query would select clientID 479572 but not 470873.

thanks in advance for any help and apologies if this question is outside the boundaries of your rules.


CLIENTID ORDATE
---------- ---------
470873 18-FEB-10
470873 04-DEC-09
470873 01-OCT-09
470873 10-SEP-09
470873 29-MAY-09
470873 01-APR-09
470873 19-MAR-09
470873 10-MAR-09
470873 02-FEB-09

CLIENTID ORDATE
---------- ---------
479572 29-OCT-09
479572 06-MAR-08
479572 23-SEP-03
479572 17-JUL-03
479572 15-NOV-02
479572 03-AUG-98
479572 03-DEC-97
479572 25-APR-96

Tom Kyte
April 05, 2010 - 1:08 pm UTC

no table creates
no insert intos
no look

and your question is not well phrased. subject says "Select Top 2 Dates by Client" which seems to indicate "every client will be selected" but you say later "only client 479572 will be selected, not 470873".. so, if you provide a create table and insert statements - please also expand a bit on your question, provide lots more detail.

same as above

IreneA, April 05, 2010 - 2:34 pm UTC

sorry Tom, didn't intend to come off looking lazy.

I provided inserts for the first 5 rows for each clientid as the other rows are irrelevant to this particular question. I'm running this on Windows 2003 Server, Oracle 10g

thanks.

create table ORDERS(ClientID Number, OrDate Date);

insert into orders values(470873, TO_DATE( '18-FEB-2010', 'DD-MON-YYYY' ));
insert into orders values(470873, TO_DATE( '04-DEC-2009', 'DD-MON-YYYY' ));
insert into orders values(470873, TO_DATE( '01-OCT-2009', 'DD-MON-YYYY' ));
insert into orders values(470873, TO_DATE( '10-SEP-2009', 'DD-MON-YYYY' ));
insert into orders values(470873, TO_DATE( '29-MAY-2009', 'DD-MON-YYYY' ));
insert into orders values(479572, TO_DATE( '29-OCT-2009', 'DD-MON-YYYY' ));
insert into orders values(479572, TO_DATE( '06-MAR-2008', 'DD-MON-YYYY' ));
insert into orders values(479572, TO_DATE( '23-SEP-2003', 'DD-MON-YYYY' ));
insert into orders values(479572, TO_DATE( '17-JUL-2003', 'DD-MON-YYYY' ));
insert into orders values(479572, TO_DATE( '15-NOV-2002', 'DD-MON-YYYY' ));


SQL> select * from orders;

  CLIENTID ORDATE
---------- ---------
    470873 18-FEB-10
    470873 04-DEC-09
    470873 01-OCT-09
    470873 10-SEP-09
    470873 29-MAY-09
    479572 29-OCT-09
    479572 06-MAR-08
    479572 23-SEP-03
    479572 17-JUL-03
    479572 15-NOV-02

10 rows selected.

Tom Kyte
April 05, 2010 - 10:20 pm UTC

i still don't know what to do, as explained above, i did not follow your logic, please expand on HOW to get the answer from this data. give me some pseudo code.

Response to IreneA

Centinul, April 06, 2010 - 5:46 am UTC

IreneA is this what you are looking for?

SQL> SELECT  CLIENTID
  2  FROM
  3  (
  4          SELECT  CLIENTID
  5          ,       ORDATE
  6          ,       LAG(ORDATE) OVER (PARTITION BY CLIENTID ORDER BY ORDATE) LAG_ORDATE
  7          ,       ROW_NUMBER() OVER (PARTITION BY CLIENTID ORDER BY ORDATE DESC) RN
  8          FROM    ORDERS
  9  )
 10  WHERE   RN = 1
 11  AND     MONTHS_BETWEEN(ORDATE,LAG_ORDATE) >= 18
 12  /

            CLIENTID
--------------------
              479572


My understanding of the requirements is the following:

1. For EACH CLIENTID retrieve the TOP TWO ORDATEs in descending order.

2. Once those dates are chosen if the difference between the most recent and the one immediately following it FOR EACH CLIENTID is greater than or equal to 18 months then return the CLIENTID

Select Top 2 Dates by Client

IreneA, April 06, 2010 - 7:38 am UTC

Centinul

Thank you so much. I see you're using analytics; can you please explain in simple terms how you separated the max(ordate) from the date just below it, and why RN = 1?

Is using analytics the only solution to this?

thanks again.


SQL> SELECT  CLIENTID
  2  FROM
  3  (
  4          SELECT  CLIENTID
  5          ,       ORDATE
  6          ,       LAG(ORDATE) OVER (PARTITION BY CLIENTID ORDER BY ORDATE) LAG_ORDATE
  7          ,       ROW_NUMBER() OVER (PARTITION BY CLIENTID ORDER BY ORDATE DESC) RN
  8          FROM    ORDERS
  9  )
 10  WHERE   RN = 1
 11  AND     MONTHS_BETWEEN(ORDATE,LAG_ORDATE) >= 18
 12  /

            CLIENTID
--------------------
              479572


IreneA

Centinul, April 06, 2010 - 9:40 am UTC

The main reason I chose to use analytics is that it is more efficient: Oracle will only access the table once, versus other methods which may require multiple accesses. On a small table it might not matter, but on a larger table this performance difference could be very significant.

In cases like this it is helpful to break the query down into its parts to try and understand what's going on. For example, I would start by running the SUBQUERY and analyzing the results, which are (I added a sort by RN and CLIENTID for ease of reading):

            CLIENTID ORDATE              LAG_ORDATE                            RN
-------------------- ------------------- ------------------- --------------------
              470873 02/18/2010 00:00:00 12/04/2009 00:00:00                    1
              470873 12/04/2009 00:00:00 10/01/2009 00:00:00                    2
              470873 10/01/2009 00:00:00 09/10/2009 00:00:00                    3
              470873 09/10/2009 00:00:00 05/29/2009 00:00:00                    4
              470873 05/29/2009 00:00:00                                        5
              479572 10/29/2009 00:00:00 03/06/2008 00:00:00                    1
              479572 03/06/2008 00:00:00 09/23/2003 00:00:00                    2
              479572 09/23/2003 00:00:00 07/17/2003 00:00:00                    3
              479572 07/17/2003 00:00:00 11/15/2002 00:00:00                    4
              479572 11/15/2002 00:00:00                                        5


The function ROW_NUMBER() OVER (PARTITION BY CLIENTID ORDER BY ORDATE DESC) allows us to determine the row for EACH CLIENTID (PARTITION BY CLIENTID) that contains the most recent date. It does this, as mentioned, for each CLIENTID, and it requires a descending order (ORDER BY ORDATE DESC). Therefore the row with the value of RN = 1 will have the most recent, a.k.a. "max", value.

You didn't mention it in your requirements, but is there any chance you'll have multiple entries with the same most recent date? If so you may have to add additional columns to the ORDER BY clause, or use another analytic function like RANK/DENSE_RANK.

The outer query restricts the result to RN = 1 to return the record for EACH CLIENTID that has the most recent date.

The function LAG(ORDATE) OVER (PARTITION BY CLIENTID ORDER BY ORDATE) allows us to look behind (or ahead with LEAD()) from our current position. Like the ROW_NUMBER() function I've partitioned it by CLIENTID because we are concerned with the values for each CLIENTID not across ALL CLIENTS. The ORDER BY clause allows us to control which row we will look back at. In this case we want the row with the previous ORDATE. I use the LAG function as an additional column in the SELECT statement so I can perform the computation of the months between the most recent and previous values rather easily. If we didn't use the LAG function a SELF JOIN would be required.

Another possible solution would be to use the MODEL CLAUSE to get the required results. A solution without analytics is most likely possible but it probably won't be nearly as efficient as the analytics version.
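
For comparison, here is a minimal non-analytic sketch against the same ORDERS table posted above. It hits the table three times via correlated subqueries, which is exactly why the analytic version is preferable:

select o.clientid
  from orders o
 where o.ordate = ( select max(ordate)
                      from orders m
                     where m.clientid = o.clientid )
   and months_between( o.ordate,
                       ( select max(ordate)
                           from orders p
                          where p.clientid = o.clientid
                            and p.ordate < o.ordate ) ) >= 18;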

Centinul

IreneA, April 06, 2010 - 4:36 pm UTC

Wow, that is so cool. Thanks for taking the time to explain the logic: you can look backwards from a reference point using LAG, or forwards using LEAD.

I can look back at any reference date and select it by using the corresponding row number (RN).

You are correct in that there can be multiple rows with the same date; there's a transaction# unique to each row, so I would need to use either the min or max trans# in those cases (see the sketch after this post).

I've done some reading on analytics and for some reason feel a tad intimidated by them; they seem so esoteric.

I've copied it to my programming folder for future reference.

on that note, much thanks.
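
A minimal sketch of that tiebreaker idea, assuming a hypothetical TRANS_NO column that is unique per row (it is not in the posted table definition). Adding TRANS_NO to both ORDER BY clauses makes the "most recent" row deterministic when two rows share the same ORDATE:

SELECT  CLIENTID
FROM
(
        SELECT  CLIENTID
        ,       ORDATE
        ,       LAG(ORDATE) OVER (PARTITION BY CLIENTID ORDER BY ORDATE, TRANS_NO) LAG_ORDATE
        ,       ROW_NUMBER() OVER (PARTITION BY CLIENTID ORDER BY ORDATE DESC, TRANS_NO DESC) RN
        FROM    ORDERS
)
WHERE   RN = 1
AND     MONTHS_BETWEEN(ORDATE,LAG_ORDATE) >= 18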

Good solution

Reader, April 29, 2010 - 9:43 am UTC

This is a good solution for pagination. But it may not work for the "1-10 of 100" scenario.

We cannot display the total number of records.


Tom Kyte
April 29, 2010 - 10:25 am UTC

neither does google, they guess - they do not let you go to page 101, they make you search for something using better inputs. They know better than to WASTE cycles computing an exact number (because you know what, the only way to get that number is to ANSWER the entire question - that is painful).


Never show an exact number (see my home page for example)...

Say "here is 1-10 of MORE THAN 10!!!"

give them the ability to go prev and next - if you need to let them get to the 'last page', just realize that the last page of the current set is the first page of a set ordered opposite of what you have (eg: never let them go to the last page, do let them reverse the sort order and get to the first page)

If someone gets more than 1000 hits - they have a useless set of data (for human beings). Let them page next next next - they get bored, they stop.


Stop wasting machine cycles counting things no one cares about, no one will ever get to. Just give them NEXT and PREV - if you have to - let them click on page 1-10 - at most.

And if you let them click on page 10 and discover "page 10 doesn't exist", just show them the last page that does exist and say "sorry, the data stops earlier than we thought" (just like... google does - and they do it right)
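
One hedged way to implement exactly that next/prev-only navigation is "seek" pagination: remember the boundary key of the page the user is looking at and ask only for rows beyond it, instead of counting offsets. A sketch assuming a hypothetical table T with a unique, indexed ID column:

-- next page: the 10 rows after the last id the user saw
select *
  from ( select * from t where id > :last_id_seen order by id )
 where rownum <= 10;

-- prev page: reverse the sort and take 10, re-reversing in the client
select *
  from ( select * from t where id < :first_id_seen order by id desc )
 where rownum <= 10;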


Data will change on every call of the query

Reader, May 01, 2010 - 7:36 am UTC

Another point I would like to bring out is that the data in the table can change between calls of the stored procedure returning the result sets.

I mean that the data in the table at the first call (displaying 1-10) may not be the same at the next one (displaying 11-20). Some other user can insert/update/delete records in the table. So, essentially, we will not be working on a snapshot.

re: Data will change on every call of the query

Stew Ashton, May 01, 2010 - 12:19 pm UTC


See my suggestion above about using the same SCN across calls. To me, this is the only way to get Web pagination to "work" in a consistent fashion, even if you just go "prev" and "next". http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:127412348064#251594100346048494
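
A minimal sketch of that SCN approach, using flashback query (AS OF SCN): capture the SCN when the user first runs the search, stash it in their page state, and reuse it for every page. Note it only works as long as the undo needed for that SCN is still available (undo_retention), so it suits short browsing sessions:

-- on the first request, remember the current SCN
select dbms_flashback.get_system_change_number scn from dual;

-- on every page request, query the data as of that SCN
select *
  from ( select a.*, rownum rnum
           from ( select * from orders as of scn :scn
                   order by ordate desc ) a
          where rownum <= :max_rows )
 where rnum >= :min_rows;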

Another way to select range of rownum

Jawed Ahmad, September 15, 2010 - 2:18 am UTC

Hi,

What about the performance of the following query, compared to your suggested query, to select a range of rows?

select * from a where rownum <= 100
minus
select * from a where rownum <= 90;

Your comments on it are most appreciated.


Tom Kyte
September 15, 2010 - 8:15 am UTC

why would you want to hit the table twice?? especially since in real life there is typically a bit more to the table list and where clause than you have.

No, this would be totally unsafe and not very performant.

minus causes a DISTINCT to take place - and you typically do not want a distinct here.
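
The DISTINCT side effect is easy to demonstrate with a tiny sketch (T_DEMO is a throwaway table invented for the test): both ROWNUM branches reduce to the same five distinct values, so the MINUS returns zero rows where the pagination query would have returned ten.

create table t_demo as
  select mod(rownum, 5) x from dual connect by level <= 100;

select count(*)
  from ( select * from t_demo where rownum <= 100
         minus
         select * from t_demo where rownum <= 90 );   -- 0 rows, not 10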

Error while trying to retrieve records from view

Surya, February 04, 2011 - 3:41 am UTC

Hi

Thank you so much for the query to extract specific number of rows from the result. I was able to create a view out of it but when I try "select * from view_name" I get the error ora-00972: identifier is too long. The number of characters in the view name is less than 20 but still I get the error. Please can you let me know why?

This is my query:

CREATE OR REPLACE VIEW v_top_ten_records(column_a, column_b, column_c)
AS SELECT *
FROM ( SELECT a.*, ROWNUM rnum
FROM ( SELECT x.sk_guid, ROUND(SUM(y.QUANTITY))
FROM table_x x, table_y y WHERE x.sk_guid = y.sk_guid
GROUP BY x.sk_guid
ORDER BY 2 DESC ) a
WHERE ROWNUM <= 10 )
WHERE rnum >= 1

and then I tried "select * from v_top_ten_records" and I got the error.

Please let me know how to fix this.

Thank you.
Tom Kyte
February 04, 2011 - 9:35 am UTC

please give me an entire example to reproduce the issue with. I don't see any issues with the above view.

view

sam, February 04, 2011 - 10:13 am UTC

Tom:

Is this really valid? A view with input parameters?

I thought you can't have parameterized views in Oracle.

It would be good if you supported that, as I could use it in many places.


CREATE OR REPLACE VIEW v_top_ten_records(column_a, column_b, column_c)

Tom Kyte
February 04, 2011 - 10:15 am UTC

Sam,

that is not a view with parameters, that is a view with column aliases.

create or replace view v ( c1, c2, c3 )
as
select 1, 2, 3 from dual;

is the same as

create or replace view v
as
select 1 c1, 2 c2, 3 c3 from dual;

is the same as

create or replace view v
as
select 1 as c1, 2 as "C2", 3 c3 from dual;

and so on.

view

sam, February 04, 2011 - 10:16 am UTC

Oh, these are the column definitions for what the view retrieves.

They are not input parameters. Disregard my previous note.

Error while trying to retrieve records from view

Surya, February 07, 2011 - 12:59 am UTC

Hi

I identified the issue and was able to fix it. This was the issue:

Initial query to create view:

CREATE OR REPLACE VIEW v_top_ten_records(column_a, column_b, column_c)
AS SELECT *
FROM ( SELECT a.*, ROWNUM rnum
FROM ( SELECT x.sk_guid, ROUND(SUM(y.QUANTITY))
FROM table_x x, table_y y WHERE x.sk_guid = y.sk_guid
GROUP BY x.sk_guid
ORDER BY 2 DESC ) a
WHERE ROWNUM <= 10 )
WHERE rnum >= 1

But when I described the view, I observed that the source showed the following script:

CREATE OR REPLACE VIEW v_top_ten_records(column_a, column_b, column_c)
AS SELECT "SK_GUID","ROUND(SUM(y.QUANTITY))","RNUM"
FROM ( SELECT a.*, ROWNUM rnum
FROM ( SELECT x.sk_guid, ROUND(SUM(y.QUANTITY))
FROM table_x x, table_y y WHERE x.sk_guid = y.sk_guid
GROUP BY x.sk_guid
ORDER BY 2 DESC ) a
WHERE ROWNUM <= 10 )
WHERE rnum >= 1

Here my actual expression (abbreviated above as "ROUND(SUM(y.QUANTITY))") generated a column name longer than 30 characters, and hence I got the error when I tried "select * from v_top_ten_records".

I fixed it by giving the alias for the second column while creating the view as follows:


CREATE OR REPLACE VIEW v_top_ten_records(column_a, column_b, column_c)
AS SELECT *
FROM ( SELECT a.*, ROWNUM rnum
FROM ( SELECT x.sk_guid, ROUND(SUM(y.QUANTITY)) AS column_b
FROM table_x x, table_y y WHERE x.sk_guid = y.sk_guid
GROUP BY x.sk_guid
ORDER BY 2 DESC ) a
WHERE ROWNUM <= 10 )
WHERE rnum >= 1

I could have avoided this issue by giving aliases next to the expressions instead of giving them like view_name(a1, a2, a3)!

Thank you.
Tom Kyte
February 07, 2011 - 6:45 am UTC

you could have avoided the back and forth by posting a reproducible example as well...

In fact - if you had posted a reproducible example - you almost certainly would have found the issue right off.

Which means - you could have solved this days ago :)

Without ever having to post a question....


Morale of this story: always prepare a reproducible test case for your issue. by doing so you will clearly demonstrate the issue. furthermore, you will find over time that you solve almost all of your issues without ever having to ask - as it will become clear what the issue is as you whittle your test case down to the smallest possible bit of code - removing everything that is not relevant to the problem at hand.



The problem here was you posted an example create view that would work just fine - with no way to reproduce your issue. No one can help you there since what you posted bears no resemblance to what you are actually doing...

Error while trying to retrieve records from view

Surya, February 07, 2011 - 2:04 am UTC

Adding to my previous comment, I just wonder why Oracle picks up the column name from within the view, that is, "ROUND(SUM(y.QUANTITY))", when used in "select * from view_name", instead of the explicit alias that was provided (column_b) while creating the view.
Tom Kyte
February 07, 2011 - 6:53 am UTC

do you have access to support? can you file the following as a bug?

ops$tkyte%ORA11GR2> CREATE OR REPLACE VIEW v_top_ten_records(column_a, column_b, column_c)
  2  AS  SELECT *
  3    FROM ( SELECT a.*, ROWNUM rnum
  4             FROM ( SELECT x.dummy,
  5                           trunc(floor(ceil(ROUND(SUM(decode(dummy,'X',1,2))))))
  6                     FROM  dual x
  7                     group by dummy
  8                     ORDER BY 2 DESC ) a
  9            WHERE ROWNUM <= 10
 10              )
 11   WHERE rnum >= 1
 12  /

View created.

ops$tkyte%ORA11GR2> select text from user_views where view_name = 'V_TOP_TEN_RECORDS';

TEXT
-------------------------------------------------------------------------------
SELECT "DUMMY","TRUNC(FLOOR(CEIL(ROUND(SUM(DECODE(DUMMY,'X',1,2))))))","RNUM"
  FROM ( SELECT a.*, ROWNUM rnum
           FROM ( SELECT x.dummy,
                         trunc(floor(ceil(ROUND(SUM(decode(dummy,'X',1,2))))))
                   FROM  dual x
                   group by dummy
                   ORDER BY 2 DESC ) a
          WHERE ROWNUM <= 10
            )
 WHERE rnum >= 1


ops$tkyte%ORA11GR2> select * from v_top_ten_records;
select * from v_top_ten_records
              *
ERROR at line 1:
ORA-00972: identifier is too long




if not, let me know and I'll file it.

Error while trying to retrieve records from view

Surya, February 07, 2011 - 11:06 pm UTC

Hi

I will ensure I do a thorough analysis before posting the query henceforth :)

I am afraid I do not have access to the support for oracle. Please can you take this forward with them.

Thank you.
Tom Kyte
February 09, 2011 - 7:14 am UTC

will do


A humble request

MS, February 12, 2011 - 2:17 pm UTC

Hi Tom,

I am new to Oracle (and SQL), and I am learning a lot through the posts on your site.

From one of the posts above,

select * from (
select p.*, rownum rnum
from (select * from hz_parties ) p
where rownum < 100
) where rnum >= 90

____________

Can I use the below query, and will it return the same result set?

select * from (
select hz.*, rownum rnum from hz_parties hz
where rownum < 100
) where rnum >= 90
-----------

Tom Kyte
February 14, 2011 - 7:43 am UTC

you should technically have something to sort the data - otherwise the concept of "90 through 100" is ambiguous - any set of rows could be 90-100.

See
http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html
http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html

for more details.


In your example, to answer your question, since you do not have an order by - your queries would be the same - even though they could return different rows (ponder that for a while :) ). The reason they might not return the same rows is because they are not deterministic - you didn't ask for the rows to be returned in any particular order, so we could return them in a different order if we feel like it.
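
To make the result deterministic, sort on something unique before applying ROWNUM. A sketch assuming HZ_PARTIES has a unique PARTY_ID column (an assumption; substitute your own key):

select * from (
  select p.*, rownum rnum
    from ( select *
             from hz_parties
            order by party_id ) p
   where rownum < 100
) where rnum >= 90;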

A similar Query

Shishir, October 05, 2011 - 10:59 am UTC

What if I want to fetch a set of rows based on a condition:

select abc
from
Query1 q1,
Query2 q2,
Query3 q3

if q1 gives result then from q1 , else if q1 fails (i.e. gives no rows) then from q2 and if q2 also fails then from q3 and if q3 also fails then some default value

Can we write this kind of thing in a single query?
Tom Kyte
October 05, 2011 - 11:15 am UTC

ops$tkyte%ORA11GR2> create table emp as select empno, ename, sal, job from scott.emp;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> with
  2  q1 as (select 'q1' whence, emp.* from emp where ename like '%A%'),
  3  q2 as (select 'q2' whence, emp.* from emp where sal > 2500 and NOT EXISTS(select null from q1)),
  4  q3 as (select 'q3' whence, emp.* from emp where job = 'MANAGER' and not exists (select null from q1
  5  union all select null from q2))
  6  select * from q1 union all select * from q2 union all select * from q3;

WH      EMPNO ENAME             SAL JOB
-- ---------- ---------- ---------- ---------
q1       7499 ALLEN            1600 SALESMAN
q1       7521 WARD             1250 SALESMAN
q1       7654 MARTIN           1250 SALESMAN
q1       7698 BLAKE            2850 MANAGER
q1       7782 CLARK            2450 MANAGER
q1       7876 ADAMS            1100 CLERK
q1       7900 JAMES             950 CLERK

7 rows selected.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> with
  2  q1 as (select 'q1' whence, emp.* from emp where ename like '%Z%'),
  3  q2 as (select 'q2' whence, emp.* from emp where sal > 2500 and NOT EXISTS(select null from q1)),
  4  q3 as (select 'q3' whence, emp.* from emp where job = 'MANAGER' and not exists (select null from q1
  5  union all select null from q2))
  6  select * from q1 union all select * from q2 union all select * from q3;

WH      EMPNO ENAME             SAL JOB
-- ---------- ---------- ---------- ---------
q2       7566 JONES            2975 MANAGER
q2       7698 BLAKE            2850 MANAGER
q2       7788 SCOTT            3000 ANALYST
q2       7839 KING             5000 PRESIDENT
q2       7902 FORD             3000 ANALYST

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> with
  2  q1 as (select 'q1' whence, emp.* from emp where ename like '%Z%'),
  3  q2 as (select 'q2' whence, emp.* from emp where sal > 25000 and NOT EXISTS(select null from q1)),
  4  q3 as (select 'q3' whence, emp.* from emp where job = 'MANAGER' and not exists (select null from q1
  5  union all select null from q2))
  6  select * from q1 union all select * from q2 union all select * from q3;

WH      EMPNO ENAME             SAL JOB
-- ---------- ---------- ---------- ---------
q3       7566 JONES            2975 MANAGER
q3       7698 BLAKE            2850 MANAGER
q3       7782 CLARK            2450 MANAGER




would be one way, but this would probably - likely - best be done a little more procedurally - open q1, then if need be open q2, then if need be open q3. Otherwise, you'd be hitting them all probably.

benchmark it...
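
A minimal PL/SQL sketch of that procedural alternative, reusing the EMP table from the example above (the predicates are the same ones used in the WITH version); q2 and q3 are only touched when the earlier queries actually come back empty:

declare
  type emp_tab is table of emp%rowtype;
  l_rows emp_tab;
begin
  -- try q1 first
  select * bulk collect into l_rows from emp where ename like '%A%';
  if l_rows.count = 0 then
    -- q1 returned no rows, try q2
    select * bulk collect into l_rows from emp where sal > 2500;
  end if;
  if l_rows.count = 0 then
    -- q2 returned no rows either, try q3
    select * bulk collect into l_rows from emp where job = 'MANAGER';
  end if;
  dbms_output.put_line('rows found: ' || l_rows.count);
end;
/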

Thanks a lot

Shishir, October 07, 2011 - 6:44 am UTC

Thanks a lot ..

I was thinking of using COUNT and DECODE
in the following way

SELECT DECODE( q1.count,
               0, DECODE( q2.count,
                          0, DECODE( q3.count, 0, 'ABC', q3_value ),
                          q2_value ),
               q1_value )
  FROM query1_count q1, query2_count q2, query3_count q3

Here query1_count, query2_count and query3_count will be three similar queries which return counts instead of data,

and q1_value, q2_value and q3_value will give the data.

I thought of using DECODE as I want to fetch only a single value.

Will this approach work?
If it will, is it going to affect performance?

I can't follow the cursor approach; I want to use the same logic within a report query.
Tom Kyte
October 07, 2011 - 1:59 pm UTC

if q1, q2, q3 all return exactly one row - that would work. However, you previously wrote:

if q1 gives result then from q1 , else if q1 fails (i.e. gives no rows) then from q2 and if q2 also
fails then from q3 and if q3 also fails then some default value


which seems to me to say "q1 might return zero rows", in which case - no it would not. Nor would it work if any of q1, q2, q3 return more than one row.



and I don't know what you might possibly mean by:

Can't follow the cursor approach

since that is pretty much the ONLY way to ultimately get data out of the database.

shankar, November 04, 2011 - 1:19 pm UTC

/*TABLES
=====================================================================
CREATING NEW TABLES
-------------------------
CREATE TABLE a(SUBSCR_NO NUMBER,PRIMARY_OFFER_ID NUMBER,BAL1_ID NUMBER);
CREATE TABLE B(OFFER_ID NUMBER,OFFER_TYPE NUMBER,SUBSCR_NO NUMBER);
CREATE TABLE C(BAL_ID NUMBER,OFFER_ID NUMBER,IS_CORE NUMBER);
CREATE TABLE VALID_A(SUBSCR_NO NUMBER,PRIMARY_OFFER_ID NUMBER,BAL1_ID NUMBER);
CREATE TABLE INVALID_A(SUBSCR_NO NUMBER,PRIMARY_OFFER_ID NUMBER,BAL1_ID NUMBER);


-----------------------------------------------------------
INSERTING DATA INTO TABLES
-----------------------------------------------------------
INSERT INTO a VALUES(10,100,1);
INSERT INTO a VALUES(20,300,2);
INSERT INTO a VALUES(30,400,3);
INSERT INTO a VALUES(50,600,5);


INSERT INTO B VALUES(300,1,20);
INSERT INTO B VALUES(100,2,10);
INSERT INTO B VALUES(200,2,90);
INSERT INTO B VALUES(600,2,50);


INSERT INTO C VALUES(20,200,0);
INSERT INTO C VALUES(1,100,1);
INSERT INTO C VALUES(20,200,1);
INSERT INTO C VALUES(5,600,1);

---------------------------------------------------
select * from a;

 SUBSCR_NO PRIMARY_OFFER_ID    BAL1_ID
---------- ---------------- ----------
        10              100          1
        20              300          2
        30              400          3
        50              600          5

select * from b;

  OFFER_ID OFFER_TYPE  SUBSCR_NO
---------- ---------- ----------
       300          1         20
       100          2         10
       200          2         90
       600          2         50

select * from c;

    BAL_ID   OFFER_ID    IS_CORE
---------- ---------- ----------
        20        200          0
         1        100          1
        20        200          1
         5        600          1
=========================================================*/


DECLARE
  TYPE a_tr IS TABLE OF a%ROWTYPE INDEX BY BINARY_INTEGER;
  gt_a_tr a_tr;
  TYPE b_tr IS TABLE OF b%ROWTYPE INDEX BY BINARY_INTEGER;
  gt_b_tr b_tr;
  TYPE c_tr IS TABLE OF c%ROWTYPE INDEX BY BINARY_INTEGER;
  gt_c_tr c_tr;
BEGIN
  DELETE valid_a;
  DELETE invalid_a;
  COMMIT;

  SELECT * BULK COLLECT INTO gt_a_tr FROM a;
  SELECT * BULK COLLECT INTO gt_b_tr FROM b;
  SELECT * BULK COLLECT INTO gt_c_tr FROM c;

  FOR i IN gt_a_tr.FIRST .. gt_a_tr.LAST LOOP
    FOR j IN gt_b_tr.FIRST .. gt_b_tr.LAST LOOP

      IF gt_a_tr(i).subscr_no = gt_b_tr(j).subscr_no
         AND gt_a_tr(i).primary_offer_id = gt_b_tr(j).offer_id
         AND gt_b_tr(j).offer_type = 2
      THEN
        FOR k IN gt_c_tr.FIRST .. gt_c_tr.LAST LOOP

          IF gt_b_tr(j).offer_id = gt_c_tr(k).offer_id
             AND gt_c_tr(k).bal_id = gt_a_tr(i).bal1_id
             AND gt_c_tr(k).is_core = 1
          THEN
            INSERT INTO valid_a
            VALUES (gt_a_tr(i).subscr_no, gt_a_tr(i).primary_offer_id, gt_a_tr(i).bal1_id);
            COMMIT;
            dbms_output.put_line('Valid subscribers numbers are:' || gt_a_tr(i).subscr_no
                                 || ' ' || gt_a_tr(i).primary_offer_id
                                 || ' ' || gt_a_tr(i).bal1_id);
            gt_a_tr(i).subscr_no := gt_a_tr.NEXT(gt_a_tr(i).subscr_no);
          END IF;
        END LOOP; -- k loop
      END IF;

    END LOOP; -- j loop

    IF gt_a_tr(i).subscr_no IS NULL THEN
      NULL;
    ELSE
      INSERT INTO valid_a
      VALUES (gt_a_tr(i).subscr_no, gt_a_tr(i).primary_offer_id, gt_a_tr(i).bal1_id);
      COMMIT;
      dbms_output.put_line('Invalid subscribers numbers are:' || gt_a_tr(i).subscr_no
                           || ' ' || gt_a_tr(i).primary_offer_id
                           || ' ' || gt_a_tr(i).bal1_id);
    END IF;
  END LOOP; -- i loop
END;
=======================================
OUTPUT
=====================================
Valid subscribers numbers are:10 100 1
Invalid subscribers numbers are:20 300 2
Invalid subscribers numbers are:30 400 3
Valid subscribers numbers are:50 600 5

PL/SQL procedure successfully completed.

SQL> SELECT * FROM VALID_A;

 SUBSCR_NO PRIMARY_OFFER_ID    BAL1_ID
---------- ---------------- ----------
        30              400          3
        50              600          5
        10              100          1
        20              300          2

SQL> SELECT * FROM INVALID_A;

no rows selected


Hi Tom, could you please let me know the correct output?

Only valid subscribers should be in the valid_a table and invalid subscribers should be in the invalid_a table, but here all subscribers are inserted into the valid_a table only...

Thanks in advance...



Tom Kyte
November 07, 2011 - 10:26 am UTC

what defines a valid subscriber versus an invalid one.


You do realize that posting code that does not work - that does the wrong thing - is not any way to explain what you are trying to accomplish. You are not posting code that does the right thing - if we were to read it - it would only pervert our view of what needs to be done.

tell us - in words - what defines a valid subscriber versus invalid (and by doing that exercise you might well discover the bug in your own logic!)

Pagination Query

Rajeshwaran, Jeyabal, March 16, 2012 - 7:16 am UTC

Tom:

I was running a pagination query, and TKPROF & Autotrace show the following.

Autotrace shows full table scans for the tables below, followed by a FULL PARTITION-WISE join (step Id=10 in the Autotrace explain plan output).

TKPROF, however, shows that the optimizer used indexed reads for all 5 tables instead of full table scans, and lots of nested loops instead of hash joins. All partition stats & index stats match the row counts in the table partitions.

Can you help me understand how I can get full partition-wise joins in the TKPROF results? We are using Oracle 10.2.0.5.

RV_PROJ_CONTENT_PROVIDER_THE
RV_PROJ_CONTENT_THE
RV_CHART_THE
RV_PROJ_CONTENT_STATUS_FLT_THE
RV_PROJ_CONTENT_MEMBER_THE


test@TESTDB> variable l_start_id number;
test@TESTDB> variable l_end_id number;
test@TESTDB>
test@TESTDB> exec :l_start_id := 100;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.56
test@TESTDB> exec :l_end_id := 125;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.53
test@TESTDB>
test@TESTDB> set linesize 5000;
test@TESTDB> set autotrace traceonly explain ;
test@TESTDB>
test@TESTDB> SELECT *
  2  FROM
  3    (SELECT row_.*,
  4      rownum rownum_
  5    FROM
  6      (SELECT *
  7      FROM
  8        (SELECT
  9          PROJCONT.PROJ_CONTENT_BARCODE,
 10          NVL(CHART.CHART_STATUS, PROJCONT.RETRIEVAL_REQUEST_STATUS) CURRENT_STATUS,
 11          CHART.CHART_STATUS,
 12          STSFLT.CURRENT_STATUS_DT,
 13          PROJCONT.CURRENT_CHART_KEY LATEST_CHART_KEY,
 14          PROJCONT.SCHEDULED_VISIT_DT,
 15          CHART.AVAILABILITY_STATUS,
 16          FNC_GET_STATUS_CHANGED_BY(CHART.CHART_STATUS_DT,CHART.MODIFIED_BY, CHART.DML_USER,PROJCONT.VENDOR_RETRIEVAL_UPDATEBY_NAME,PROJCONT.DML_USER) S_CHANGED_BY,
 17          PROJ.PROJ_KEY,
 18          MEMB.MBR_ID,
 19          FNC_GET_FULL_NAME(MEMB.MBR_FIRSTNAME,MEMB.MBR_LASTNAME) MEMBER_NAME,
 20          MEMB.MBR_HP_CD,
 21          MEMB.MBR_HP_PRODUCT,
 22          PROV.SOURCE_SYSTEM_PROV_ID,
 23          FNC_GET_FULL_NAME(PROV.PROV_FIRSTNAME,PROV.PROV_LASTNAME) PROVIDER_NAME,
 24          PROV.PROV_ADDRESS_1 PROVIDER_ADDRESS,
 25          PROV.PROV_CITY,
 26          PROV.PROV_STATE,
 27          PROV.PROV_ZIP,
 28          FNC_GET_FULL_NAME(PROJ.PROJ_REQ_FIRSTNAME,PROJ.PROJ_REQ_LASTNAME) PROJ_REQ_NAME,
 29          STSFLT.PNP_REASON_DESCRIP,
 30          PROJ.PROJ_NAME,
 31          PROJ.PROJ_STATUS_CD PROJECT_STATUS,
 32          CHART.LOCK_USER,
 33          CHART.ACTIVE_DT,
 34          PROJ.PROJ_SOURCE_CD,
 35          PROJCONT.PROJ_CONTENT_MBR_KEY,
 36          PROJCONT.PROJ_CONTENT_PROV_KEY,
 37          PROJCONT.RETRIEVAL_REQUEST_STATUS
 38        FROM RV_PROJ_CONTENT_THE PROJCONT,
 39          RV_CHART_THE CHART,
 40          RV_PROJ_CONTENT_MEMBER_THE MEMB,
 41          RV_PROJ_CONTENT_PROVIDER_THE PROV,
 42          RV_PROJ_CONTENT_STATUS_FLT_THE STSFLT,
 43          RV_PROJECT PROJ
 44        WHERE PROJCONT.CURRENT_CHART_KEY      = CHART.CHART_KEY (+)
 45       AND   projcont.proj_key  = CHART.proj_key  (+)
 46        AND MEMB.PROJ_CONTENT_MBR_KEY         = PROJCONT.PROJ_CONTENT_MBR_KEY
 47       and memb.proj_key                                     = projcont.proj_key
 48        AND PROV.PROJ_CONTENT_PROV_KEY        = PROJCONT.PROJ_CONTENT_PROV_KEY
 49       and prov.proj_key                                     = PROJCONT.proj_key
 50        AND STSFLT.PROJ_CONTENT_BARCODE       = PROJCONT.PROJ_CONTENT_BARCODE
 51       AND STSFLT.proj_key                               = PROJCONT.PROJ_KEY
 52        AND PROJ.PROJ_KEY                     = PROJCONT.PROJ_KEY
 53        AND PROJ.PROJ_STATUS_CD              IN (SELECT STATUS  FROM RV_PROJECT_STATUS)
 54        AND ( CHART.CHART_STATUS             IN ( SELECT STATUS FROM RV_CHART_STATUS  )
 55        OR PROJCONT.RETRIEVAL_REQUEST_STATUS IN ( SELECT STATUS FROM RV_RETRIEVAL_REQUEST_STATUS ) )
 56        )
 57      ORDER BY PROJ_KEY,PROJ_CONTENT_MBR_KEY,PROJ_CONTENT_PROV_KEY ASC
 58      ) row_ where rownum <= :l_end_id
 59    )
 60  WHERE rownum_ > :l_start_id
 61  /
Elapsed: 00:00:04.37

Execution Plan
----------------------------------------------------------
Plan hash value: 1666514308

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                                | 58261 |   471M|       | 94023   (4)| 00:18:49 |       |       |
|*  1 |  VIEW                         |                                | 58261 |   471M|       | 94023   (4)| 00:18:49 |       |       |
|*  2 |   COUNT STOPKEY               |                                |       |       |       |            |          |       |       |
|   3 |    VIEW                       |                                | 58261 |   471M|       | 94023   (4)| 00:18:49 |       |       |
|*  4 |     SORT ORDER BY STOPKEY     |                                | 58261 |    30M|    32M| 94023   (4)| 00:18:49 |       |       |
|*  5 |      FILTER                   |                                |       |       |       |            |          |       |       |
|*  6 |       HASH JOIN               |                                | 58261 |    30M|       | 87281   (4)| 00:17:28 |       |       |
|   7 |        INDEX FULL SCAN        | PK_RV_PROJECT_STATUS           |     8 |    72 |       |     1   (0)| 00:00:01 |       |       |
|*  8 |        HASH JOIN              |                                | 58261 |    29M|       | 87279   (4)| 00:17:28 |       |       |
|   9 |         TABLE ACCESS FULL     | RV_PROJECT                     |   969 | 76551 |       |    11   (0)| 00:00:01 |       |       |
|  10 |         PARTITION LIST ALL    |                                | 58261 |    25M|       | 87267   (4)| 00:17:28 |     1 |   970 |
|* 11 |          HASH JOIN            |                                | 58261 |    25M|       | 87267   (4)| 00:17:28 |       |       |
|* 12 |           HASH JOIN           |                                | 58261 |    21M|       | 70644   (4)| 00:14:08 |       |       |
|* 13 |            HASH JOIN OUTER    |                                | 58323 |    18M|       | 37549   (4)| 00:07:31 |       |       |
|* 14 |             HASH JOIN         |                                |   598K|   148M|       | 21942   (4)| 00:04:24 |       |       |
|  15 |              TABLE ACCESS FULL| RV_PROJ_CONTENT_PROVIDER_THE   |   598K|    78M|       |  4332   (1)| 00:00:52 |     1 |   970 |
|  16 |              TABLE ACCESS FULL| RV_PROJ_CONTENT_THE            |  3842K|   447M|       | 17094   (1)| 00:03:26 |     1 |   970 |
|  17 |             TABLE ACCESS FULL | RV_CHART_THE                   |  2500K|   154M|       | 15099   (1)| 00:03:02 |     1 |   970 |
|  18 |            TABLE ACCESS FULL  | RV_PROJ_CONTENT_STATUS_FLT_THE |  3838K|   215M|       | 32585   (2)| 00:06:32 |     1 |   970 |
|  19 |           TABLE ACCESS FULL   | RV_PROJ_CONTENT_MEMBER_THE     |  4416K|   315M|       | 16108   (1)| 00:03:14 |     1 |   970 |
|* 20 |       INDEX UNIQUE SCAN       | PK_RV_CHART_STATUS             |     1 |     8 |       |     0   (0)| 00:00:01 |       |       |
|* 21 |        INDEX UNIQUE SCAN      | PK_RV_RETRIEVAL_REQUEST_STATUS |     1 |     7 |       |     0   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ROWNUM_">TO_NUMBER(:L_START_ID))
   2 - filter(ROWNUM<=TO_NUMBER(:L_END_ID))
   4 - filter(ROWNUM<=TO_NUMBER(:L_END_ID))
   5 - filter( EXISTS (SELECT 0 FROM "RV_CHART_STATUS" "RV_CHART_STATUS" WHERE "STATUS"=:B1) OR  EXISTS (SELECT 0 FROM
              "RV_RETRIEVAL_REQUEST_STATUS" "RV_RETRIEVAL_REQUEST_STATUS" WHERE "STATUS"=:B2))
   6 - access("PROJ"."PROJ_STATUS_CD"="STATUS")
   8 - access("PROJ"."PROJ_KEY"="PROJCONT"."PROJ_KEY")
  11 - access("MEMB"."PROJ_KEY"="PROJCONT"."PROJ_KEY" AND "MEMB"."PROJ_CONTENT_MBR_KEY"="PROJCONT"."PROJ_CONTENT_MBR_KEY")
  12 - access("STSFLT"."PROJ_KEY"="PROJCONT"."PROJ_KEY" AND "STSFLT"."PROJ_CONTENT_BARCODE"="PROJCONT"."PROJ_CONTENT_BARCODE")
  13 - access("PROJCONT"."PROJ_KEY"="CHART"."PROJ_KEY"(+) AND "PROJCONT"."CURRENT_CHART_KEY"="CHART"."CHART_KEY"(+))
  14 - access("PROV"."PROJ_KEY"="PROJCONT"."PROJ_KEY" AND "PROV"."PROJ_CONTENT_PROV_KEY"="PROJCONT"."PROJ_CONTENT_PROV_KEY")
  20 - access("STATUS"=:B1)
  21 - access("STATUS"=:B1)

test@TESTDB>


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.10       0.12          0          0          0           0
Execute      1      1.84       1.81          0          0          0           0
Fetch        1   1737.87    9267.71    1646336   28702246          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3   1739.82    9269.66    1646336   28702246          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 105  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  VIEW  (cr=0 pr=0 pw=0 time=28 us)
      0   COUNT STOPKEY (cr=0 pr=0 pw=0 time=27 us)
      0    VIEW  (cr=0 pr=0 pw=0 time=25 us)
      0     SORT ORDER BY STOPKEY (cr=0 pr=0 pw=0 time=23 us)
3501813      CONCATENATION  (cr=28702246 pr=1646336 pw=132300 time=8366496431 us)
3501813       FILTER  (cr=28702246 pr=1646336 pw=132300 time=8362994617 us)
3525005        NESTED LOOPS  (cr=28702233 pr=1646336 pw=132300 time=8242149059 us)
3525005         NESTED LOOPS  (cr=18127216 pr=682728 pw=132300 time=2669103020 us)
3525005          HASH JOIN  (cr=7552199 pr=283414 pw=132300 time=575194966 us)
3838501           NESTED LOOPS OUTER (cr=7533712 pr=144262 pw=0 time=514480463 us)
3838501            NESTED LOOPS  (cr=74664 pr=72792 pw=0 time=103701369 us)
    969             NESTED LOOPS  (cr=48 pr=40 pw=0 time=55024 us)
    969              TABLE ACCESS FULL RV_PROJECT (cr=46 pr=39 pw=0 time=34339 us)
    969              INDEX UNIQUE SCAN PK_RV_PROJECT_STATUS (cr=2 pr=1 pw=0 time=17844 us)(object id 177923)
3838501             PARTITION LIST ITERATOR PARTITION: KEY KEY (cr=74616 pr=72752 pw=0 time=109971338 us)
3838501              TABLE ACCESS FULL RV_PROJ_CONTENT_THE PARTITION: KEY KEY (cr=74616 pr=72752 pw=0 time=106120099 us)
2483553            TABLE ACCESS BY GLOBAL INDEX ROWID RV_CHART_THE PARTITION: ROW LOCATION ROW LOCATION (cr=7459048 pr=71470 pw=0 time=388346318 us)
2483602             INDEX RANGE SCAN PK_RV_CHART_THE (cr=4975446 pr=8028 pw=0 time=64668796 us)(object id 189750)
 599395           PARTITION LIST ALL PARTITION: 1 970 (cr=18487 pr=17097 pw=0 time=65956271 us)
 599395            TABLE ACCESS FULL RV_PROJ_CONTENT_PROVIDER_THE PARTITION: 1 970 (cr=18487 pr=17097 pw=0 time=33522082 us)
3525005          TABLE ACCESS BY GLOBAL INDEX ROWID RV_PROJ_CONTENT_MEMBER_THE PARTITION: ROW LOCATION ROW LOCATION (cr=10575017 pr=399314 pw=0 time=2233289159 us)
3525005           INDEX UNIQUE SCAN PK_RV_PROJ_CONTENT_T_MEMBER (cr=7050012 pr=58773 pw=0 time=358943475 us)(object id 189915)
3525005         TABLE ACCESS BY GLOBAL INDEX ROWID RV_PROJ_CONTENT_STATUS_FLT_THE PARTITION: ROW LOCATION ROW LOCATION (cr=10575017 pr=963608 pw=0 time=5398753721 us)
3525005          INDEX UNIQUE SCAN PK_RV_PROJ_CONTENT_STATUS_FLTT (cr=7050012 pr=22176 pw=0 time=230196650 us)(object id 192917)
     12        INDEX UNIQUE SCAN PK_RV_RETRIEVAL_REQUEST_STATUS (cr=13 pr=0 pw=0 time=1016 us)(object id 178028)
      0       FILTER  (cr=0 pr=0 pw=0 time=0 us)
      0        NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0         NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0          HASH JOIN  (cr=0 pr=0 pw=0 time=0 us)
      0           NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
      0            NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0             NESTED LOOPS  (cr=0 pr=0 pw=0 time=0 us)
      0              TABLE ACCESS FULL RV_PROJECT (cr=0 pr=0 pw=0 time=0 us)
      0              INDEX UNIQUE SCAN PK_RV_PROJECT_STATUS (cr=0 pr=0 pw=0 time=0 us)(object id 177923)
      0             PARTITION LIST ITERATOR PARTITION: KEY KEY (cr=0 pr=0 pw=0 time=0 us)
      0              TABLE ACCESS FULL RV_PROJ_CONTENT_THE PARTITION: KEY KEY (cr=0 pr=0 pw=0 time=0 us)
      0            TABLE ACCESS BY GLOBAL INDEX ROWID RV_CHART_THE PARTITION: ROW LOCATION ROW LOCATION (cr=0 pr=0 pw=0 time=0 us)
      0             INDEX RANGE SCAN PK_RV_CHART_THE (cr=0 pr=0 pw=0 time=0 us)(object id 189750)
      0           PARTITION LIST ALL PARTITION: 1 970 (cr=0 pr=0 pw=0 time=0 us)
      0            TABLE ACCESS FULL RV_PROJ_CONTENT_PROVIDER_THE PARTITION: 1 970 (cr=0 pr=0 pw=0 time=0 us)
      0          TABLE ACCESS BY GLOBAL INDEX ROWID RV_PROJ_CONTENT_MEMBER_THE PARTITION: ROW LOCATION ROW LOCATION (cr=0 pr=0 pw=0 time=0 us)
      0           INDEX UNIQUE SCAN PK_RV_PROJ_CONTENT_T_MEMBER (cr=0 pr=0 pw=0 time=0 us)(object id 189915)
      0         TABLE ACCESS BY GLOBAL INDEX ROWID RV_PROJ_CONTENT_STATUS_FLT_THE PARTITION: ROW LOCATION ROW LOCATION (cr=0 pr=0 pw=0 time=0 us)
      0          INDEX UNIQUE SCAN PK_RV_PROJ_CONTENT_STATUS_FLTT (cr=0 pr=0 pw=0 time=0 us)(object id 192917)
     12        INDEX UNIQUE SCAN PK_RV_RETRIEVAL_REQUEST_STATUS (cr=13 pr=0 pw=0 time=1016 us)(object id 178028)
      0        INDEX UNIQUE SCAN PK_RV_CHART_STATUS (cr=0 pr=0 pw=0 time=0 us)(object id 177725)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache lock                              1        0.00          0.00
  row cache lock                                 49        0.00          0.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net more data to client                     1        0.00          0.00
  gc cr grant 2-way                          801252        0.04        201.40
  db file sequential read                   1436962        0.47       7342.18
  gc cr multi block request                    6254        1.22          4.47
  i/o slave wait                              13692        0.44        147.86
  db file scattered read                       9570        0.44        128.81
  gc current block 2-way                      11591        0.00          4.14
  db file parallel read                          39        0.04          0.79
  enq: TT - contention                            2        0.00          0.00
  CSS initialization                              2        0.00          0.00
  CSS operation: action                           2        0.00          0.00
  CSS operation: query                            6        0.01          0.01
  direct path write temp                       8820        0.02          0.52
  latch: KCL gc element parent latch              7        0.00          0.00
  KJC: Wait for msg sends to complete             1        0.00          0.00
  direct path read temp                        8137        0.16          9.84
  gc cr grant congested                          38        0.00          0.05
  latch: gcs resource hash                        6        0.00          0.00
  SQL*Net break/reset to client                   1        0.00          0.00
  SQL*Net message from client                     1        4.85          4.85

Pagination Query

Rajeshwaran, Jeyabal, March 16, 2012 - 11:15 am UTC

Tom - Can you please help me on this?
Tom Kyte
March 16, 2012 - 11:22 am UTC

sorry, there is no way anyone can look at a pretty large query - having NO KNOWLEDGE of the schema, of the partitioning strategy, of the indexing strategy, of the nature of the data, of the skew of the data, or of how the data arrives (indicating natural clustering and such) - and give you anything useful.

Note: I'm not asking for that here, it isn't appropriate as a comment/followup.


Here is one path for you to pursue. Use this method of 'tracing' the query:

https://jonathanlewis.wordpress.com/2006/11/09/dbms_xplan-in-10g/

compare the e-rows and a-rows columns, look to see if there are LARGE (orders of magnitude) divergences in the estimate from the actual row counts. If there are - we can start looking at that and trying to figure out where it went wrong. Fixing the cardinality estimates is the way to fix a 'bad plan'

Also, make sure you are using first_rows(n) optimization for this type of query.
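
A minimal sketch of that tracing technique (the gather_plan_statistics hint and DBMS_XPLAN.DISPLAY_CURSOR are standard from 10g onwards); run your real statement in place of the dummy one, then compare the E-Rows and A-Rows columns in the output:

SQL> select /*+ gather_plan_statistics */ count(*) from dual;

SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));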

Very good information.

Ganesh, May 24, 2012 - 4:17 am UTC

Dear Tom,

This is really technical information about pagination in Oracle. I could not get this level of details anywhere else about pagination.

In a post "Followup September 9, 2002 - 8pm Central time zone:", you mentioned that humans may not be interested in paginating thru all rows in a big table and so this query may not have impact.

But what if there is an application which needs to go through all the rows to scan some important data?
In that case, what will be the impact when the query reaches the higher row numbers? Will the query become progressively slower? Will this cause the Oracle DB server to slow down? My application needs to scan the entire table's data, and that's why I am concerned about this performance degradation.

Please guide.

Thanks for your help.

Best Regards.
Tom Kyte
May 24, 2012 - 9:06 am UTC

http://www.oracle.com/technetwork/issue-archive/2006/06-sep/o56asktom-086197.html
http://www.oracle.com/technetwork/issue-archive/2007/07-jan/o17asktom-093877.html

are better write ups....



... But what if there is an application which needs to go through all the rows to
scan some important data? ....

then you wouldn't be using pagination, programs don't paginate, people do. A program would never ask for page1, page2, page3 - it would just open a query and process row after row.
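
A minimal sketch of that style of processing, reusing the ORDERS table from earlier in this thread (the per-row processing step is a hypothetical placeholder):

begin
  for r in ( select * from orders order by clientid, ordate ) loop
    null;  -- your per-row processing would go here
  end loop;
end;
/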

Order by in outer query

Ashok Kumar, December 05, 2012 - 6:54 am UTC

Hi,

I am curious to know: in the following query, do I need to use the ORDER BY clause in the outer query?


SELECT *
  FROM (SELECT a1, b1,
               row_number() over (ORDER BY b1 DESC) rnum
          FROM table_a)
 WHERE rnum BETWEEN 1 AND 10
 ORDER BY b1 DESC
Tom Kyte
December 14, 2012 - 1:43 pm UTC

yes

A reader, December 14, 2012 - 3:15 pm UTC

There is no need to sort in the outer query, as the inner query already generates row_number ordered by that column.

Please correct me if I'm wrong?

thanks
Tom Kyte
December 17, 2012 - 4:14 pm UTC

you are not allowed to skip the order by statement.

the optimizer is free to optimize it away, but YOU are not allowed to skip it.


http://asktom.oracle.com/Misc/order-in-court.html

ops$tkyte%ORA11GR2> create table t ( a int, b int );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select *
  2    from (select a, b, row_number() over (order by b desc) rn
  3            from t)
  4   where rn <= 10
  5   order by b desc
  6  /

Execution Plan
----------------------------------------------------------
Plan hash value: 3047187157

---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |      |     1 |    39 |     3  (34)| 00:00:01 |
|*  1 |  VIEW                    |      |     1 |    39 |     3  (34)| 00:00:01 |
|*  2 |   WINDOW SORT PUSHED RANK|      |     1 |    26 |     3  (34)| 00:00:01 |
|   3 |    TABLE ACCESS FULL     | T    |     1 |    26 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN"<=10)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("B") DESC
              )<=10)

Note
-----
   - dynamic sampling used for this statement (level=2)

ops$tkyte%ORA11GR2> select *
  2    from (select a, b, row_number() over (order by b desc) rn
  3            from t)
  4   where rn <= 10
  5   order by a
  6  /

Execution Plan
----------------------------------------------------------
Plan hash value: 3060612387

----------------------------------------------------------------------------------
| Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |      |     1 |    39 |     4  (50)| 00:00:01 |
|   1 |  SORT ORDER BY            |      |     1 |    39 |     4  (50)| 00:00:01 |
|*  2 |   VIEW                    |      |     1 |    39 |     3  (34)| 00:00:01 |
|*  3 |    WINDOW SORT PUSHED RANK|      |     1 |    26 |     3  (34)| 00:00:01 |
|   4 |     TABLE ACCESS FULL     | T    |     1 |    26 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("RN"<=10)
   3 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("B") DESC
              )<=10)

Note
-----
   - dynamic sampling used for this statement (level=2)


see how the optimizer skipped the sort in the first query - but not the second.


What if we invent a way to generate row_number() without having to sort (like what happened with group by)? What if someone adds something to the query?



IF you want data sorted... use order by. if we don't have to sort it - we won't

pagination and counts

dx, January 11, 2013 - 11:29 am UTC

Hi Tom

Regarding what has been said about implementing pagination in a stateless web environment I totally agree with all the points made about not providing users with a row count but providing a best guess, ie the way google does it.

I categorise a row count as summary data. It is data that, in order to calculate it, requires you to read the entire resultset right down to the last row before returning any results to the client; hence it cannot use any first_rows optimisation, since the last row needs to be read.

However I have a requirement that does need to implement this kind of "summary" data, not a row count but a sum.
I have requested the requirements be altered so we don't have to provide this but it was declined, they need this data.
So I have a webpage that runs a stored procedure that could return many hundreds of rows, so I need to implement paging, but I also need to provide this "summary" data.

So to get this summary data - because the last row needs to be read before sending the first row back to the client - am I right in saying that FIRST_ROWS optimisation cannot be used?

Is this true of analytics too?
I have seen you use ROW_NUMBER for pagination in this way:

select *
from (
  select /*+ first_rows(25) */
         object_id,
         object_name,
         row_number() over (order by object_id) rn
  from all_objects)
where rn between :n and :m
order by rn;

So my question is: can you use count or sum in the same way to take advantage of first_rows, i.e.:

select *
from (
  select /*+ first_rows(25) */
         object_id,
         object_name,
         row_number() over (order by object_id) rn,
         count(*) over () as cnt,
         sum(object_id) over () as sm
  from all_objects)
where rn between :n and :m
order by rn;

Or are count and sum analytics processed differently to row_number?
Is it possible to optimise them using first_rows??
If not please can you explain how these are processed differently to row_number which can take advantage of first_rows optimisation.


Thanks

Tom Kyte
January 15, 2013 - 9:22 am UTC

am i right in saying that FIRST_ROWS
optimisation cannot be used?


not necessarily - for example:

select deptno, dname, (select count(*) from emp where emp.deptno = dept.deptno)
  from dept
 order by deptno


can be a relatively efficient approach to this problem, instead of coding:

select dept.deptno, dept.dname, count(emp.empno) 
  from dept, emp
 where dept.deptno = emp.deptno(+)
 order by dept.deptno


these two queries are semantically equivalent - but can result in massively different processing. In order to get the first row out of the first query - we might just have to index full scan into DEPT to get one row (and stop at the first row) and then do an index range scan on an index on EMP(deptno) to get the count and output the result.

The second query might well full scan DEPT, full scan EMP and do a hash outer join, aggregate, sort, and then return the first row.


see
http://www.oracle.com/technetwork/issue-archive/2011/11-sep/o51asktom-453438.html

for a further discussion on this.



As for the question about count and sum processed different to row_number - the answer is no, not really - HOWEVER - since you are using radically different window clauses - in your case - they would have to be..


you have row_number() over (order by object_id) - that can retrieve the data sorted and then assign row_number to it as the data flows back. If it used an index to retrieve the data ordered by object_id - it can get the first five sorted rows and assign the numbers 1..5 to them and return them.

you have count(*) over () - that has to get the count of the entire result set - the window clause covers every single row.


Think of it this way:

count(*) over (order by object_id) cnt1, count(*) over () cnt2


cnt1 is a running total count, in order to get the first five rows - we can just get the first five rows via an index on object_id - assign 1 to the first row, 2 to the second, and so on (we are doing a running total - the window of data is the current row and all preceding rows - NONE of the following rows count).

cnt2 on the other hand - that count is the grand total - in order to know what it is, we need to know the total number of rows - we have to process every row.



depending on the nature of your query, the scalar subquery might be something to investigate.
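
One hedged way to bolt that scalar-subquery idea onto the pagination query above: the two uncorrelated scalar subqueries are each evaluated once (scalar subquery caching), so they still cost a scan apiece, but they no longer force the windowed row source itself to buffer the entire result the way count(*) over () does. Whether this wins depends on your data - benchmark it:

select *
  from ( select /*+ first_rows(25) */
                object_id,
                object_name,
                row_number() over (order by object_id) rn,
                (select count(*)       from all_objects) cnt,
                (select sum(object_id) from all_objects) sm
           from all_objects )
 where rn between :n and :m
 order by rn;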

Pagination Without Order By

Vahid Sadeghi, April 24, 2014 - 11:11 am UTC

Hi,
As you said, in order to paginate a result set we should first sort the result by a unique id.

We have a flat table :
CREATE TABLE FLAT_TABLE
(
ID NUMBER,
COL1 NUMBER,
COL2 VARCHAR2(100 CHAR),
....
TITLE CLOB,
ADDRESS CLOB
)

Also we have a context index on TITLE and ADDRESS.
We are going to paginate the result set, but the cost of the sort is high, and we wonder: can we bypass the ORDER BY, given that there isn't any DML on the result set?

Here is your query:

SELECT * FROM (
  SELECT a.*, ROWNUM RNUM FROM (
    SELECT * FROM FLAT_TABLE
     WHERE CONTAINS( TITLE, 'xxx' ) > 0
        OR CONTAINS( ADDRESS, 'xxxx' ) > 0
  ) a WHERE ROWNUM <= :MAXNUM
) WHERE RNUM >= :MINNUM

Pagination Without Order By...

Vahid Sadeghi, April 25, 2014 - 8:41 pm UTC

Hi,

We have a table :

CREATE TABLE FLAT_TABLE
(
ID NUMBER,
COL1 NUMBER,
COL2 VARCHAR2(100 CHAR),
....
TITLE CLOB,
ADDRESS CLOB
)

Also we have a context index on TITLE and ADDRESS.

We are going to do pagination on the result set; because there isn't any update on the table, can we bypass the ORDER BY?
Does the following query guarantee unique responses across different pages?

This is our query:

SELECT * FROM (
  SELECT a.*, ROWNUM RNUM FROM (
    SELECT * FROM FLAT_TABLE
     WHERE CONTAINS( TITLE, 'xxx' ) > 0
        OR CONTAINS( ADDRESS, 'xxxx' ) > 0
  ) a WHERE ROWNUM <= :MAXNUM
) WHERE RNUM >= :MINNUM

Pagination Query

Vahid Sadeghi, July 27, 2014 - 10:24 am UTC

Hi TOM,
I have a table:
CREATE TABLE BIG_TABLE (
ID NUMBER,
TYPE NUMBER,
TITLE CLOB
)

There is a reverse key index on the ID column.
There is a BITMAP index on the TYPE column.
There is a Context index on the TITLE column.

In order to have pagination I used your pattern:
SELECT * FROM (
  SELECT a.*, ROWNUM RNUM FROM
  (
    SELECT * FROM BIG_TABLE
     WHERE CONTAINS ( TITLE, '{TOM}') > 0
       AND TYPE IN ( 1,2,3,4 )
     ORDER BY ID
  ) a WHERE ROWNUM <= 400
) WHERE RNUM > 0

Here is the TKPROF result:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0        264          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      2.93       5.98      13390     988191          0         400
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      2.94       5.99      13390     988455          0         400

********************************************

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
       400        400        400  VIEW
       400        400        400   COUNT STOPKEY
       400        400        400    VIEW
       400        400        400     SORT ORDER BY STOPKEY
    173649     173649     173649      TABLE ACCESS BY INDEX ROWID BIG_TABLE
    846181     846181     846181       DOMAIN INDEX TITLE_INX


Now I rewrite the query as follows:
SELECT * FROM (
  SELECT a.*, ROWNUM RNUM FROM
  (
    SELECT * FROM BIG_TABLE
     WHERE CONTAINS ( TITLE, '{TOM}') > 0
       AND TYPE IN ( 1,2,3,4 )
  ) a WHERE ROWNUM <= 400
  ORDER BY ID
) WHERE RNUM > 0

I changed the placement of the ORDER BY ID,

and the following is the TKPROF result:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0        264          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01         31       1798          0         400
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.03         31       2062          0         400


***************************************************

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
       400        400        400  VIEW
       400        400        400   SORT ORDER BY
       400        400        400    COUNT STOPKEY
       400        400        400     TABLE ACCESS BY INDEX ROWID BIG_TABLE
      1040       1040       1040      DOMAIN INDEX TITLE_INX

The average response time of the first query is about 60s, and of the second query about 1s.
Can the second query be used for pagination?
Does it guarantee unique records on the next pages?

Is selecting data using ROWNUM efficient?

Kalyan, February 13, 2019 - 11:31 am UTC

I'm trying to select data from a billion-record table using the ROWNUM approach,
but I'm facing issues while fetching the data.

Can you tell me an effective way to select data from large tables?

example query :

select "ingest_ts_utc" from (SELECT to_char(sys_extract_utc(systimestamp), 'YYYY-MM-DD HH24:MI:SS.FF') as "ingest_ts_utc" ,ROWNUM as rno from XYZ.ABC ) where rno between 361754731 and 381852215


Chris Saxon
February 13, 2019 - 1:23 pm UTC

Do you really want to get rows 361 million-odd through to 381 million or so? So you're returning 20 million+ rows?

That's going to take a while. Whatever you do.

I suggest you rethink your query so it doesn't involve processing millions of rows.
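
If the real need is to sweep the whole table in slices, one hedged alternative to ROWNUM offsets (which re-read everything before the window on every call) is keyset-style batching on an indexed key. A sketch assuming XYZ.ABC has a hypothetical indexed, unique ID column; after each batch, carry the largest ID fetched forward into :last_id:

select *
  from ( select *
           from xyz.abc
          where id > :last_id
          order by id )
 where rownum <= 1000000;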

Mean of a column

Joseph Poirier, August 14, 2021 - 11:04 pm UTC

Tom, I'd like to use your query to fetch a mean in Oracle, but I'm having trouble assigning variables for the following:

Where Rownum > (Round(Count(Column1) / 2, 4) as var1)- 1

Where rnum < var1 + Case When (Count(Column1) % / 2 as var2) = 1 Then 1 Else 2 End

Any help greatly appreciated. thanks
Connor McDonald
August 16, 2021 - 4:46 am UTC

Not sure what you mean (no pun intended).

Don't you just want AVG in this case?
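
For what it's worth, the ROWNUM arithmetic in the question looks more like a median (middle row) than a mean; Oracle has built-ins for both, so there is usually no need for variables at all. A sketch assuming a numeric COLUMN1 in a hypothetical table T:

select avg(column1)    as mean_value,
       median(column1) as median_value
  from t;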

split the rowids

Apraim, April 24, 2022 - 11:25 am UTC

How can we split the total number of records in a table based on their rowids?

For example, if the total number of rows is 100M, split the rows using rowids into 10 buckets of 10M each, based on physical rowid.
Chris Saxon
April 25, 2022 - 1:34 pm UTC

I'm not sure how this relates to the original question, but splitting rows into equal-sized buckets is a job for NTILE:

ntile(10) over ( order by rowid ) grp
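
Expanding that into a full sketch (BIG_TABLE is a stand-in name): the inner query tags every row with a bucket number, and the outer query derives the rowid boundaries and size of each bucket:

select grp,
       min(rid) as lo_rowid,
       max(rid) as hi_rowid,
       count(*) as cnt
  from ( select rowid rid,
                ntile(10) over (order by rowid) grp
           from big_table )
 group by grp
 order by grp;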

Very helpful

Narendra, May 02, 2022 - 5:21 pm UTC

This query is very helpful for me.
Connor McDonald
May 03, 2022 - 2:46 am UTC

glad we could help