getting rows N through M of a result set

Question and Answer

Tom Kyte

Thanks for the question, Rajesh.

Asked: May 02, 2000 - 1:21 pm UTC

Last updated: May 03, 2022 - 2:46 am UTC

Viewed 100K+ times!

You Asked

I would like to fetch data after joining 3 tables and
sorting based on some field. As this query results in approx.
100 records, I would like to cut the result set into 4 pages of
25 records each, and I would like to give a sequence number to each
record. Can I do this using SQL*Plus?




and Tom said...



In Oracle8i, release 8.1 -- yes.

select *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= MAX_ROWS )
 where rnum >= MIN_ROWS
/

that'll do it. It will *not* work in 8.0 or before.
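As a concrete sketch of the original request (4 pages of 25 rows each, with rnum doubling as the sequence number) -- the asker's three-table join was not posted, so scott.emp joined to scott.dept stands in for it here:

variable min_rows number
variable max_rows number

exec :min_rows := 26; :max_rows := 50;   -- page 2 of 4

select *
  from ( select a.*, rownum rnum
           from ( select e.ename, e.sal, d.dname
                    from scott.emp e, scott.dept d
                   where e.deptno = d.deptno
                   order by e.ename ) a
          where rownum <= :max_rows )
 where rnum >= :min_rows
/

Re-run with 1/25, 51/75 and 76/100 for the other pages.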



Comments

Pagination

Karthik, August 03, 2001 - 1:25 am UTC

It was to the point and very, very useful.
I will keep pestering you with more questions in the weeks to come.

Yes,

A reader, September 25, 2001 - 1:21 am UTC

it was useful.

Lifesaver....

Robert Jackson, October 16, 2001 - 4:28 pm UTC

This information was invaluable... I would have had to "kludge" something....

Parag

Parag Mehta, March 31, 2002 - 5:30 am UTC

Tom :

Great .... I think Ora ( Oracle ) has been made for u.
I am highly impressed by ur answere.

Regards
- Parag

Tom Kyte
March 31, 2002 - 9:07 am UTC

you = u
your = ur

is your keyboard broken such that Y and O do not work anymore? cle might be the next to go.

(there are enough abbreviations and three letter acronyms in the world, do we really have to make it HARDER to read stuff everyday by making up new ones all of the time)

Upset

Parag, March 31, 2002 - 10:24 am UTC

I am very Upset with "YOUR" Behaviour. I have not expected the same from " YOU". You could have convey the same in a different Professional Words.

For " YOUR" kind information Dear Tom , My KEYBOARD has not broken down at all. It's working perfectly.


With you tom on 'YOUR' comment on 'u' or 'ur'

Sean, March 31, 2002 - 5:54 pm UTC

Mr. Parag,

You just way over reacted.

U R GR8

Mark A. Williams, April 01, 2002 - 8:57 am UTC

Tom,

Maybe you could put something on the main page indicating appropriate use of abbreviations? Although, now that I think about it, it probably wouldn't do much good, as it appears people ignore what is there (and on the 'acceptance' page) anyway...

- Mark

Tom Kyte
April 01, 2002 - 10:08 am UTC

Already there ;)

It's my new crusade (along with bind variables). But yes, you are correct -- most people don't read it anyway.

You would probably be surprised how many people ask me "where can I read about your book" -- surprising given that it is right there on the home page...

Saw it was there after the fact

Mark A. Williams, April 01, 2002 - 10:27 am UTC

Tom:

Saw that you had added the message about the abbreviations after the fact. That's what I get for having my bookmark point to the 'Search/Archives' tab instead of the main page...

- Mark

A reader, April 01, 2002 - 11:37 am UTC

Excellent query. I just want to be sure I understand it.
You run the query 4 times, each time changing the MAX and MIN rownumbers. Correct?

Tom Kyte
April 01, 2002 - 1:06 pm UTC

You just change min and max to get different ranges of rows, yes.

Very good

Natthawut, April 01, 2002 - 12:18 pm UTC

This will be useful for me in the future.
Thanks.

PS. Don't listen to Mr. Parag. He just envies you ;)

between

Mikito harakiri, April 01, 2002 - 7:39 pm UTC

Returning to the old discussion about the difference between

select p.*, rownum rnum
from (select * from hz_parties ) p
where rownum between 90 and 100

vs

select * from (
select p.*, rownum rnum
from (select * from hz_parties ) p
where rownum < 100
) where rnum >= 90

I claim that they are identical from a performance standpoint. Indeed, the plan for the first one

SELECT STATEMENT 20/100
  VIEW 20/100
    Filter Predicates: from$_subquery$_001.RNUM>=90
    COUNT (STOPKEY)
      Filter Predicates: ROWNUM<=100
      TABLE ACCESS (FULL) hz_parties 20/3921

seems to be faster than

SELECT STATEMENT 20/100
  COUNT (STOPKEY)
    Filter Predicates: ROWNUM<=100
    FILTER
      Filter Predicates: ROWNUM>=90
      TABLE ACCESS (FULL) hz_parties 20/3921


But note that all nodes in the plan are non-blocking! Therefore, it doesn't matter which condition is evaluated earlier...


Tom Kyte
April 01, 2002 - 8:51 pm UTC

Please don't claim -- benchmark and PROVE (come on -- I do it all of the time).

Your first query "where rownum between 90 and 100" never returns ANY data.  That predicate will ALWAYS evaluate to false -- always.

I've already proven in another question (believe it was with you again) that 

select * from ( 
   select p.*, rownum rnum
           from (select * from hz_parties ) p
          where rownum < 100
) where rnum >= 90

is faster than:

select * from ( 
   select p.*, rownum rnum
           from (select * from hz_parties ) p
) where rnum between 90 and 100

which is what I believe you INTENDED to type.  It has to do with the way we process the COUNT(STOPKEY) and the fact that we must evaluate 

   select p.*, rownum rnum
           from (select * from hz_parties ) p

AND THEN apply the filter, whereas the other will find the first 100 AND THEN stop.

so, say I have an unindexed table:

ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from big_table;

  COUNT(*)
----------
   1099008

(a copy of all_objects over and over and over) and I run three queries.  Yours to show it fails (no data), what I think you meant to type and what I would type:

select p.*, rownum rnu
  from ( select * from big_table ) p
 where rownum between 90 and 100

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      6.17      15.31      14938      14985         81           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      6.17      15.31      14938      14985         81           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  COUNT STOPKEY
      0   FILTER
1099009    TABLE ACCESS FULL BIG_TABLE


your query -- no data found....  Look at the number of rows inspected however



select *
from (
select p.*, rownum rnum
  from ( select * from big_table ) p
)
 where rnum between 90 and 100

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      7.93      17.03      14573      14986         81          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      7.93      17.03      14573      14986         81          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
1099008   COUNT
1099008    TABLE ACCESS FULL BIG_TABLE

what I believe you meant to type in -- again -- look at the rows processed!

Now, what I've been telling everyone to use:


select * from (
   select p.*, rownum rnum
           from (select * from big_table ) p
          where rownum < 100
) where rnum >= 90

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.01          1          7         12          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          1          7         12          10

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  VIEW
     99   COUNT STOPKEY
     99    TABLE ACCESS FULL BIG_TABLE


HUGE difference.  Beat that...

Claims -- don't want em.
Benchmark, metrics, statistics -- love em -- want em -- need em.




 

Over the top!

Thevaraj Subramaniam, April 01, 2002 - 10:30 pm UTC

Tom, I am really very impressed with the way you prove it with examples and explanations. Answering questions from around the world, and at the same time facing hurdles along the way and overcoming them. You are the best! Will always be supporting asktom.oracle.com. Cheers.

Thank goodness!

Jim, April 03, 2002 - 1:25 am UTC

Tom,

Liked the solution and your new rule.

You have my vote on the rule not to use "u" for you
and "ur" for your. It's not clever, it simply makes
things harder to read, in fact I think it's just plain
lazy

Anyone that doesn't like it can simply ask someone else.


between

Mikito harakiri, April 03, 2002 - 3:37 pm UTC

Thanks Tom. I finally noticed that you have rownum in one predicate and rnum in the other and they are different:-)

sql>select * from (
2 select p.*, rownum rnum
3 from (select * from hz_parties ) p
4 where rownum < 100
5 ) where rnum >= 90

Statistics
----------------------------------------------------------
7 consistent gets
5 physical reads

The best solution I was able to get:

appsmain>select * from (
2 select * from (
3 select p.*, rownum rnum
4 from (select * from hz_parties ) p
5 ) where rnum between 90 and 100
6 ) where rownum < 10

Statistics
----------------------------------------------------------
15 consistent gets
5 physical reads

It's neither faster, nor more elegant:-(

actual "between" test

Mikito harakiri, April 03, 2002 - 8:32 pm UTC

Tom,

Sorry, but I see no difference:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public static void main(String[] args) throws Exception {
    Class.forName("oracle.jdbc.driver.OracleDriver");
    System.out.println(execute("select * from (select p.*, rownum rnum "
        + " from (select * from hz_parties ) p "
        + " where rownum < 100 "
        + " ) where rnum >= 90 "));
    System.out.println(execute("select * from ( \n"
        + " select p.*, rownum rnum "
        + " from (select * from hz_parties ) p "
        + " ) where rnum between 90 and 100"));
}

static long execute( String query ) throws Exception {
    Connection con = DriverManager.getConnection("jdbc:oracle:thin:@dlserv7:1524:main", "apps", "apps");
    con.setAutoCommit(false);

    con.createStatement().execute("alter system flush shared_pool");
    long t1 = System.currentTimeMillis();
    ResultSet rs = con.createStatement().executeQuery(query);
    // fetch the first six rows; time only the open plus initial fetches
    rs.next();
    rs.next();
    rs.next();
    rs.next();
    rs.next();
    rs.next();
    long t2 = System.currentTimeMillis();

    con.rollback();
    con.close();
    return t2 - t1;
}

Both queries return in 0.6 sec. Here is my interpretation: the "between" query, in a context where we open the cursor, read the first rows, and then discard the rest, is essentially the same as the "between" query with a stopcount (that goofy sql in my last reply). The execution engine doesn't seem to go forward and check the between predicate for the whole table, or does it?

Tom Kyte
April 04, 2002 - 11:31 am UTC

TKPROF, TKPROF, TKPROF.

that's all you need to use.
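For reference, a minimal way to capture such a trace (standard sql_trace/TKPROF usage; the trace file name and location vary by system):

alter session set timed_statistics = true;
alter session set sql_trace = true;

-- run the queries under test, exit, then format the trace file found
-- in user_dump_dest on the server:
--
--   tkprof ora_<spid>.trc report.txt sys=no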

This query:


select *
  from ( select p.*, rownum rnum
           from ( YOUR_QUERY )
          where rownum < 100 )
 where rnum >= 90


runs your query and gathers the first 100 rows and stops. IF YOUR_QUERY must materialize all of the rows before it can get the first row (eg: it has certain constructs like group bys and such) -- then the difference in your case may not be as large -- but it's there. Use TKPROF to get RID of the Java overhead in the timings (timing in a client like that isn't very reliable).

Consider:

here we obviously don't need to get the last row before the first row -- it's very "fast"

select *
  from ( select p.*, rownum rnum
           from ( select owner, object_name, object_type
                    from big_table ) p
          where rownum <= 100 )
 where rnum >= 90

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.00         63          7         12          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.00         63          7         12          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
    100   COUNT STOPKEY
    100    TABLE ACCESS FULL BIG_TABLE



Now, let's add an aggregate -- here we do have to process all rows in the table. HOWEVER, since the rownum is pushed down as far as we can push it -- we can do some suboptimizations that make this faster


select *
  from ( select p.*, rownum rnum
           from ( select owner, object_name, object_type, count(*)
                    from big_table
                   group by owner, object_name, object_type ) p
          where rownum <= 100 )
 where rnum >= 90

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      5.78      18.08      14794      14985         81          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      5.79      18.08      14794      14985         81          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
    100   COUNT STOPKEY
    100    VIEW
    100     SORT GROUP BY STOPKEY
1099008      TABLE ACCESS FULL BIG_TABLE

Lastly, we'll do it your way -- here we don't push the rownum down, the chance for optimization is gone and you run really slow

select *
  from ( select p.*, rownum rnum
           from ( select owner, object_name, object_type, count(*)
                    from big_table
                   group by owner, object_name, object_type ) p )
 where rnum between 90 and 100

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2     20.15     112.44      24136      14985        184          11
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5     20.15     112.47      24136      14985        184          11

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 216

Rows     Row Source Operation
-------  ---------------------------------------------------
     11  VIEW
  17172   COUNT
  17172    VIEW
  17172     SORT GROUP BY
1099008      TABLE ACCESS FULL BIG_TABLE


I guess, at the end of the day, it is up to you. I can only show you that it is faster so many times. In the end -- it is your choice.

In your case, this is what I am guessing:

o hz_parties is a view (I recognize it from apps)
o it's a view that gets the last row before it can get the first
o the number of rows you can see is not significant (maybe a thousand or so, something that fits in RAM nicely)
o the rownum optimization in your case doesn't do much -- if you see the tkprof, you'll be able to quantify what it does for you.


In general I can say this:

you would be doing the wrong thing to use "where rnum between a and b" when you can push the rownum DOWN into the inner query and achieve PHENOMENAL performance gains in general. But again, that is your choice.


nuff said




Performance difference

Ken Chiu, July 25, 2002 - 5:25 pm UTC

The 1st query below is more than half faster than the 2nd query, please explain what happened ?

select b.*
(Select * from A Order by A.Id) b
where rownum<100

select * from
(select b.*,rownum rnum
(Select * from A Order by A.Id) b
where rownum<100)
and rnum >= 50

thanks.


Tom Kyte
July 25, 2002 - 10:35 pm UTC

half faster... Hmmm.... wonder what that means.

I can say that (after fixing your queries) -- My findings differ from yours. In my case, big_table is a 1,000,000 row table and I see:

big_table@ORA920.US.ORACLE.COM> set autotrace traceonly
big_table@ORA920.US.ORACLE.COM> select b.*
2 from (Select * from big_table A Order by A.Id) b
3 where rownum<100
4 /

99 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=15735 Card=99 Bytes=141000000)
1 0 COUNT (STOPKEY)
2 1 VIEW (Cost=15735 Card=1000000 Bytes=141000000)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TABLE' (Cost=15735 Card=1000000 Bytes=89000000)
4 3 INDEX (FULL SCAN) OF 'BIG_TABLE_PK' (UNIQUE) (Cost=2090 Card=1000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
19 consistent gets
0 physical reads
0 redo size
9701 bytes sent via SQL*Net to client
565 bytes received via SQL*Net from client
8 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
99 rows processed

big_table@ORA920.US.ORACLE.COM>
big_table@ORA920.US.ORACLE.COM> select * from
2 (select b.*,rownum rnum
3 from (Select * from big_table A Order by A.Id) b
4 where rownum<100)
5 where rnum >= 50
6 /

50 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=15735 Card=99 Bytes=15246)
1 0 VIEW (Cost=15735 Card=99 Bytes=15246)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=15735 Card=1000000 Bytes=141000000)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TABLE' (Cost=15735 Card=1000000 Bytes=89000000)
5 4 INDEX (FULL SCAN) OF 'BIG_TABLE_PK' (UNIQUE) (Cost=2090 Card=1000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
13 consistent gets
0 physical reads
0 redo size
5667 bytes sent via SQL*Net to client
532 bytes received via SQL*Net from client
5 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
50 rows processed

big_table@ORA920.US.ORACLE.COM>
big_table@ORA920.US.ORACLE.COM> set autotrace off
big_table@ORA920.US.ORACLE.COM> spool off



The second query is more efficient than the first.

Food for thought

Mike Moore, September 09, 2002 - 7:46 pm UTC

The testing shows stats for a range at the beginning of a large table. I wonder what the stats look like when selecting rows 999000 thru 999100 ... in other words, rows at the end of a large table?
I'd try it myself if I could.

Tom Kyte
September 09, 2002 - 8:09 pm UTC

Every subsequent query as you page down can get slower and slower (goto google, you'll see that there as well)

HOWEVER, common sense says that no end user will have the patience, 10 or 25 rows at a time, to get to rows 999000 thru 999100 -- even google cuts you off WAY before you get crazy. A result set that large is quite simply meaningless for us humans.

But then again, you can goto asktom, search for something and keep paging forward till you get bored. It is true you'll get 18,000 hits at most, since that's all that's in there so far -- but you'll NEVER have the patience to get to the end.


Sort of like the old commercial, if you remember the wise old owl -- "how many licks does it take to get to the center of a tootsie pop" (I think the owl only got to three before he just bit the lollipop). For those not in the US and who didn't grow up in the 70's -- ignore that last couple of sentences ;)



Food for thought (cont)

Michael J. Moore, September 10, 2002 - 9:34 pm UTC

Good point! I mean about nobody actually going to page through that much data. I confess that I don't completely understand how to read an EXPLAIN PLAN, so my question is only intended to prove to myself that I do or don't understand what is actually going on. Suppose a person wanted to use your SELECT technique for choosing rows N thru M towards the end of a large table, as I earlier suggested. Maybe they are not using it for paging, but for some bizarre twilight zone reason that is what they want to do. Is it true that one could expect the performance of the SELECT to degrade as ranges deeper and deeper into the table are selected? If 'yes' then I say 'great, I understand what is happening.' If 'no', then I say, "darn, I still don't have a clue."
As for the 70's, I voted for McCarthy, but Dick Nixon won.

Tom Kyte
September 11, 2002 - 7:36 am UTC

I would order the result set backwards and get the first page instead of the last (flip the order of the data around).

Yes, it'll take longer to get the last N rows than the first N rows in general (not every time, but you can reasonably expect it to be the case).

problem in query

Ankit Chhibber, September 21, 2002 - 3:09 am UTC

I tried this query on an ordered view; the view has about 7000 records with eventseverity of 64.

select * from
( select fmeventsview.* , rownum rnum from
(select * from fmeventsview where EventSeverity = 64 )fmeventsview where rownum <=500 ) where rnum >0;

but I get just 234 rows in the result set.

If I fire the embedded query

"select fmeventsview.* , rownum rnum from
(select * from fmeventsview where EventSeverity = 64 )fmeventsview where rownum <=500 "

I do get 500 records with RNUM values from 1-500.

I don't know where I am goofing up :-(
Please advise on the same.



Tom Kyte
September 21, 2002 - 11:14 am UTC

I hate views with order bys. Add the order by to the query itself. The order by doesn't have to be specifically obeyed in the view once you start doing wacky things to the query. It must be throwing off the rownum somehow -- but without a test case to play with, I cannot say.
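A sketch of that advice applied to the poster's query -- the view's sort key was not posted, so event_time below is a hypothetical column standing in for it:

select *
  from ( select fmeventsview.*, rownum rnum
           from ( select *
                    from fmeventsview
                   where EventSeverity = 64
                   order by event_time  -- hypothetical; use the view's real sort key
                ) fmeventsview
          where rownum <= 500 )
 where rnum > 0;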

Getting rows 10,00,001 to 10,00,010 - Query taking forever to execute

Brijesh, September 22, 2002 - 4:35 am UTC

Hi Tom,
The query you've shown is very good and works very fast within a range of 100,000 to 150,000 rows, but
when trying to get rows beyond 500,000 it takes a minute or more.

The query :

select fatwaid,fatwatitle
from (select a.*,rownum r
from (select * from fatwa order by fatwaid) a
where rownum <= &upperbound )
where r >= &lowerbound

when executed with 150001 and 150010 gives me
following output and plan

10 rows selected.

Elapsed: 00:00:02.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=826 Card=150010 Bytes=11700780)
1 0 VIEW (Cost=826 Card=150010 Bytes=11700780)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=826 Card=1282785 Bytes=83381025)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'FATWA' (Cost=826 Card=1282785 Bytes=2837520420)
5 4 INDEX (FULL SCAN) OF 'PK_FATWA' (UNIQUE) (Cost=26 Card=1282785)

When executed with values of
1000001 and 1000010

Following is the plan and time
10 rows selected.

Elapsed: 00:01:01.08

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=826 Card=1000010 Bytes=78000780)
1 0 VIEW (Cost=826 Card=1000010 Bytes=78000780)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=826 Card=1282785 Bytes=83381025)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'FATWA' (Cost=826 Card=1282785 Bytes=2837520420)
5 4 INDEX (FULL SCAN) OF 'PK_FATWA' (UNIQUE) (Cost=26 Card=1282785)


How can I speed up the process of getting the last rows?

Tom Kyte
September 22, 2002 - 10:12 am UTC

Nope, no go -- this is good for paging through a result set. Given that HUMANS page through a result set, pages are 10-25 rows, and we as humans would NEVER in a billion years have the patience to page down 100,000 times -- it is very workable.

Perhaps you want to order by DESC and get the first page?

(think about it -- to get the "last page", one must iterate over all of the preceding pages. a desc sort would tend to read the index backwards)
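A sketch of that suggestion using the fatwa query from above -- flip the sort and ask for the first page instead of the millionth row:

select fatwaid, fatwatitle
  from ( select a.*, rownum r
           from ( select * from fatwa order by fatwaid desc ) a
          where rownum <= 10 )
 where r >= 1;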



invalid column name exception

Ankit Chhibber, October 04, 2002 - 5:32 am UTC

Hi,
when this query is fired simultaneously (from a Java application using JDBC) from multiple threads, oracle sometimes gives an "invalid column name" exception :-(
Can you please explain the reason???


Tom Kyte
October 04, 2002 - 8:25 am UTC

Umm, magic. A bug. Programmer error. I don't know.

Sounds like time to file a TAR with support. One would need tons of information such as (and don't give it to me, give it to support) type of driver used, version of driver, version of db, a test case (as small as humanly possible) that can be run to reproduce the issue.

That last part will maybe be the hard part, but you should be able to start up a small java program with a couple of threads that just wildly parse and execute queries until it eventually hits this error.

An old question revived again

Ankit Chhibber, October 21, 2002 - 12:41 pm UTC

Hi Tom,
I am using your query to do a lot of DB operations :-). I am reading records 1000 at a time based on your approach. When there are 100,000 records in the DB (an acceptable situation, that is what people tell me :-) ), the fetch for the first 1000 rows takes about 50 seconds :-(. (Our OS is Solaris, and I use JDBC for accessing the DB.)
I am doing a sort (order by) on one of the primary keys.
Can you suggest some way of improving the performance here???

It would be of real help

regards
Ankit

Tom Kyte
October 21, 2002 - 1:13 pm UTC

</code> http://asktom.oracle.com/~tkyte/tkprof.html <code>

use that tool (sql_trace + TIMED_STATISTICS) to see the query plan, rows flowing through the steps of the plan and use that as your jump off point for tuning.

You might be a candidate for FIRST_ROWS optimization.

Why 1000 rows? 25 or 100 is more reasonable. But anyway -- it is probably the fact that you need to sort 100k rows each time -- check your sort area size as well.
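A minimal sketch of that sort area check (8i-era, session-level parameters; the values are illustrative only):

show parameter sort_area_size

alter session set sort_area_size = 10485760;            -- 10 MB, illustrative
alter session set sort_area_retained_size = 10485760;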

Getting rows 10,00,001 to 10,00,010 - Query taking forever to execute

Brijesh, October 22, 2002 - 12:57 am UTC

Now I've got it:
it's just a matter of asking why a user would page through all 100,000 pages to get to 100,001.

Even I have searched on google many times but never went beyond the tenth page.

Thanks for all you are doing for developers,
and for the reply.
Regards Brijesh


get the count for my query

Cesar, November 11, 2002 - 1:27 pm UTC

How can I get the count in my query?

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- HOW DO I GET HOW MANY ROWS ARE HERE?? ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS



Excellent stuff

Harpreet, December 12, 2002 - 12:56 am UTC

hi
I was having the same problem for a few days of how to do pagination. Howard suggested looking into your site and I found the answer, with some very good discussions.

This is really good work.



how to group it by

A reader, December 17, 2002 - 2:45 pm UTC

I have table t as

select * from t;


T1 COLOR
---------- --------------------
1 PINK
2 GREEN
3 BLUE
4 RED
5 YELLOW

select * from xabc;
COLOR C_DATE
-------------------- -----------------
RED MON
RED TUE
RED WED
RED THU
RED FRI
RED SAT
RED SUN
PINK MON
PINK TUE
PINK WED


now I need to get the result set as follows

COLOR C_DATE
-------------------- -----------------
RED MON
RED TUE
RED WED
RED THU
PINK MON

because red = 4 in t and pink = 1 in t
how to do it ?

TIA

Tom Kyte
December 18, 2002 - 10:54 am UTC

does not compute. No idea what you mean. so what if red = 4 and pink = 1?

Thanks,

A reader, December 17, 2002 - 4:37 pm UTC

don't spend time answering -- I got it !!



1 select p.*
2 from (
3 select x.color,x.c_date,
4 row_number() over (partition by x.color order by c_date) r
5 from xabc x,t
6 where x.color = t.color
7 ) p , t
8 where p.color = t.color
9* and r <= t.t1
nydev168-->/

COLOR C_DATE R
-------------------- -------------------- ----------
PINK MON 1
RED FRI 1
RED MON 2
RED SAT 3
RED SUN 4




Thanks :)

Scrollable cursors

A reader, December 18, 2002 - 5:53 pm UTC

Tom,

Are scrollable cursors (9.2) available in pl/sql and jdbc, or only pro c/c++?

If not, when will this feature become available from pl/sql?

Tom Kyte
December 19, 2002 - 7:14 am UTC

jdbc has them.

I cannot imagine a case whereby plsql would need/desire them. I can see their usefulness in a situation where you have a client/server stated connection and want to page up/down through a result set -- but plsql does not lend itself to that sort of environment. We rely on the client to do that (eg: something like forms, or jdbc). In a stored procedure -- when would you want to "go backwards"?

what if red = 4 and pink = 1?

A reader, December 20, 2002 - 11:22 am UTC

it means there should be only 4 rows returned for red,
and only 1 row should be returned for pink even if there are 10 rows for pink



Master Oracle Guru

Denise, February 05, 2003 - 4:08 pm UTC

Tom

I wish I had 1/5 of your knowledge... every time I come
here seeking answers and solutions you always seem to
hit the target head on... and then top it off with superb
code that is easy to understand and apply.

Every time I come here my questions are answered and I learn
something new.

I am DEFINITELY buying your book!

as a newbie I can't express enough how important it is
for those of us venturing into this brave new world of
Oracle to have someone of your stature, expertise & knowledge paving the way.

I think your(errrrr...'ur') TERRIFIC!!!
Denise


Helena Markova, February 13, 2003 - 2:52 am UTC


Excellent.

Chandra S.Reddy, February 20, 2003 - 8:17 am UTC

Hi Tom,
You are really great. This solution is very much useful for me.
I believe there will not be much resource utilization with this approach.
Is that right, Tom?

Tom Kyte
February 20, 2003 - 8:25 am UTC

there will be as much resource utilization as needed to process the query?

How can I do this in sql?

A reader, February 22, 2003 - 3:22 am UTC

Tom,
If I want to return a set of n from each group of records based upon a key -- for example, the data is like this..
store customer qty
1 10 10
1 100 20
1 1000 30
.......................
2 20 20
2 200 200
...........
...........
I want to return any two records from each group of store, i.e., two records for each store.
Thanks

Tom Kyte
February 22, 2003 - 10:48 am UTC

select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;


would do it -- or




Thanks a lot !

A reader, February 22, 2003 - 5:12 pm UTC

Tom,
This is regarding followup:
"select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;
would do it -- or"

1. What is "or..."?
2. Where can I find more about these types of queries?

Thanks

Tom Kyte
February 22, 2003 - 5:17 pm UTC

ignore the or ;)


analytics are documented in

o sql reference manual
o data warehousing guide

and I think the write up I have on them in my book "Expert one on one Oracle" is pretty understandable if you have that - I have a chapter on analytics.

row_number is one of about 40 analytic functions we have

Upset

A reader, February 23, 2003 - 3:31 pm UTC

u r the hard 1 not Parag

Tom Kyte
February 23, 2003 - 3:52 pm UTC

maybe if you use real words next time, I'll actually understand what you are trying to communicate.

it is not that much to ask, is it?

That is what is known as a rhetorical question, I don't expect a response. The answer is "no, when communicating, it is not too much to ask people to use a common agreed upon language as opposed to making one up"...

so, don't go away mad, just....


Rownum

phunghung, February 25, 2003 - 5:33 am UTC

Excellent !!!
It's very useful for me
Special thanks :D

Pagination in other scenario.

Chandra S.Reddy, February 28, 2003 - 4:35 am UTC

Tom,
This is further R&D for the approach you have provided.
If someone wants to get M to N records of Dept Number instead of Emp Number:

select * from (select tmp1.*, rownum1 rnum
from (select e.* from scott.emp e, scott.dept d where e.deptno = d.deptno)tmp1,
(select deptno, rownum rownum1 from scott.dept)tmp2
where tmp1.deptno = tmp2.deptno and
rownum1 <= :End ) where rnum >= :Start ;
/


Tom Kyte
February 28, 2003 - 10:01 am UTC

Well, there are other ways to fry that fish.

analytics rock and roll:

scott@ORA920> select dept.deptno, dname, ename,
2 dense_rank() over ( order by dept.deptno ) dr
3 from emp, dept
4 where emp.deptno = dept.deptno
5 /

DEPTNO DNAME ENAME DR
---------- -------------- ---------- ----------
10 ACCOUNTING CLARK 1
10 ACCOUNTING KING 1
10 ACCOUNTING MILLER 1
20 RESEARCH SMITH 2
20 RESEARCH ADAMS 2
20 RESEARCH FORD 2
20 RESEARCH SCOTT 2
20 RESEARCH JONES 2
30 SALES ALLEN 3
30 SALES BLAKE 3
30 SALES MARTIN 3
30 SALES JAMES 3
30 SALES TURNER 3
30 SALES WARD 3

14 rows selected.

scott@ORA920>
scott@ORA920> variable x number
scott@ORA920> variable y number
scott@ORA920>
scott@ORA920> exec :x := 2; :y := 3;

PL/SQL procedure successfully completed.

scott@ORA920>
scott@ORA920> select *
2 from (
3 select dept.deptno, dname, ename,
4 dense_rank() over ( order by dept.deptno ) dr
5 from emp, dept
6 where emp.deptno = dept.deptno
7 )
8 where dr between :x and :y
9 /

DEPTNO DNAME ENAME DR
---------- -------------- ---------- ----------
20 RESEARCH SMITH 2
20 RESEARCH ADAMS 2
20 RESEARCH FORD 2
20 RESEARCH SCOTT 2
20 RESEARCH JONES 2
30 SALES ALLEN 3
30 SALES BLAKE 3
30 SALES MARTIN 3
30 SALES JAMES 3
30 SALES TURNER 3
30 SALES WARD 3

11 rows selected.

Using dates is giving error.

Chandra S.Reddy, March 02, 2003 - 9:03 am UTC

Tom,
Very nice to see many approaches to implementing pagination.

When I try to implement one of your methods, I get some problems.

Issue #1.

Please see below.

SQL> create or replace procedure sp(out_cvGenric OUT PKG_SWIP_CommDefi.GenCurTyp) is
  2  begin
  3  
  4  OPEN out_cvGenric FOR 
  5  select *
  6      from (
  7    select dept.deptno, dname, ename,to_char(hiredate,'dd-mm-yyyy'),
  8           dense_rank() over ( order by dept.deptno ) dr
  9      from emp, dept
 10     where emp.deptno = dept.deptno and hiredate between '17-DEC-80' and '17-DEC-82'
 11           )
 12  where dr between 2 and 3;

 19  end ;
 20  /

Warning: Procedure created with compilation errors.
SQL> show err;

LINE/COL ERROR
-------- -----------------------------------------------------------------
8/28     PLS-00103: Encountered the symbol "(" when expecting one of the
         following:
         , from

I managed this problem by keeping the query in a string (OPEN out_cvGenric FOR 'select * from ... ') and using the USING clause. It worked fine.

Why is this error, Tom?

Issue #2.

Please check the code below. This is my actual implementation; the above is the PL/SQL shape of your answer.

procedure sp_clips_reports_soandso (
                in_noperationcenterid in number,
                in_dreportfromdt in  date , 
                in_dreporttodt in date ,
                in_cusername in varchar2,
                in_ntirestatuscode in number,
                in_cwipaccount in varchar2,
                in_npagestart in  number,
                in_npageend in  number ,
                out_nrecordcnt out number ,
                out_nstatuscode out number,
                out_cvgenric out pkg_clips_commdefi.gencurtyp,
                out_cerrordesc out varchar2) is

            v_tempstart    number(5) ;
            v_tempend    number(5) ;
begin
        out_nstatuscode := 0;

            select count(tire_trn_number) into out_nrecordcnt
            from    t_clips_tire 
            where     redirect_operation_center_id = in_noperationcenterid
                and    tire_status_id = in_ntirestatuscode
                and    tire_date >= in_dreportfromdt
                and tire_date <= in_dreporttodt
                and wip_account = in_cwipaccount ;

        if in_npagestart =  -1 and in_npageend = -1 then
        
            v_tempstart    := 1;
            v_tempend    := out_nrecordcnt ;
        else
              v_tempstart :=   in_npagestart ;
              v_tempend :=    in_npageend ;

        end if ;
open out_cvgenric for 
'select *
    from (
  select tire.tire_trn_number tiretrnnumber,
                    to_char(tire.tire_date,''mm/dd/yy''),
                    tire.tire_time,
                    tire.direct_submitter_name user_name,
                dense_rank() over ( order by tire.tire_trn_number ) dr
            from    t_clips_tire tire,
                t_clips_afs_transaction transactions,
                t_clips_transaction_code transactionscd
            where
                tire.tire_trn_number = transactions.tire_trn_number and
                transactions.tran_code = transactionscd.tran_code and 
                redirect_operation_center_id = :opp and
                tire.tire_status_id = :stcode  and
                tire.wip_account = :wip and
                tire.tire_date > :reportfromdt and
                tire.tire_date < :reporttodt and
            order by transactions.tire_trn_number,tran_seq
         )
where dr between :start and :end' using in_noperationcenterid,in_ntirestatuscode,in_cwipaccount,v_tempstart,v_tempend;

end sp_clips_reports_soandso;
/
show err;
no errors.
sql> var out_cvgenric refcursor;
sql> var out_nstatuscode  number; 
sql> declare
  2  out_cerrordesc varchar2(2000) ;
  3  --var out_nrecordcnt number ;
  4  begin
  5  sp_clips_reports_soandso(4,'16-feb-02', '16-feb-03',null,2,'0293450720',1,10,:out_nrecordcnt, :out_nstatuscode ,:out_cvgenric,out_cerrordesc);
  6  dbms_output.put_line(out_cerrordesc);
  7  end ;
  8  /
declare
*
error at line 1:
ora-00936: missing expression
ora-06512: at "CLIPStest2.sp_clips_reports_soandso", line 40
ora-06512: at line 5

In the above code the query is in a string, and the program compiled.
But when calling it, it shows errors.
If I remove "tire.tire_date > :ReportFromDt and tire.tire_date < :ReportToDt" from the WHERE clause, the query works fine and gives results.
With the dates in the query, it goes wrong.

This pagination in the SP would remove much of the burden on the application server, but unfortunately I am not coming up with the solution.

Could you please provide me the solution.
Thanks in advance.

 

Tom Kyte
March 02, 2003 - 9:32 am UTC

1) see

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3027089372477

same issue -- same workaround in 8i and before, native dynamic sql or a view
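A sketch of the view workaround, reusing the emp/dept example from the earlier follow-up -- the analytic syntax is hidden from the 8i PL/SQL parser inside the view:

create or replace view emp_dept_ranked
as
select dept.deptno, dname, ename,
       dense_rank() over ( order by dept.deptno ) dr
  from emp, dept
 where emp.deptno = dept.deptno;

-- a static "select * from emp_dept_ranked where dr between :x and :y"
-- then compiles fine inside 8i PL/SQL.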

2) why are you counting -- that is a very very very bad idea.  First -- the answer can and will change (your count is a "guess").  Second, it is the best way to make a system CRAWL to a halt.  "Oh, I think I'll do a bunch of work and then do it all over again".  Time to double up the machine.

you have a buggy sql statement -- two things I see straight off:

        tire.tire_date < :reporttodt and
            order by transactions.tire_trn_number,tran_seq
         )

AND ORDER BY -- you are "missing an expression" in there.

Second - you are missing a pair of binds.  I see 5 "using" variables but count 7 binds.


it is not the dates in the query -- it is the invalid query itself.


Suggestion -- this is how I diagnosed this -- cut and paste the query into sqlplus, change '' into ' globally in the query.  and run it after doing "variable" statements:

ops$tkyte@ORA920> variable opp varchar2(20)
ops$tkyte@ORA920> variable stcode varchar2(20)
ops$tkyte@ORA920> variable wip varchar2(20)
ops$tkyte@ORA920> variable reportfromdt varchar2(20)
ops$tkyte@ORA920> variable reporttodt varchar2(20)
ops$tkyte@ORA920> variable start varchar2(20)
ops$tkyte@ORA920> variable end varchar2(20)
ops$tkyte@ORA920>
ops$tkyte@ORA920> select *
  2      from (
  3    select tire.tire_trn_number tiretrnnumber,
  4                      to_char(tire.tire_date,'mm/dd/yy'),
  5                      tire.tire_time,
  6                      tire.direct_submitter_name user_name,
  7                  dense_rank() over ( order by tire.tire_trn_number ) dr
  8              from    t_clips_tire tire,
  9                  t_clips_afs_transaction transactions,
 10                  t_clips_transaction_code transactionscd
 11              where
 12                  tire.tire_trn_number = transactions.tire_trn_number and
 13                  transactions.tran_code = transactionscd.tran_code and
 14                  redirect_operation_center_id = :opp and
 15                  tire.tire_status_id = :stcode  and
 16                  tire.wip_account = :wip and
 17                  tire.tire_date > :reportfromdt and
 18                  tire.tire_date < :reporttodt and
 19              order by transactions.tire_trn_number,tran_seq
 20           )
 21  where dr between :start and :end
 22  /
            order by transactions.tire_trn_number,tran_seq
            *
ERROR at line 19:
ORA-00936: missing expression


Now it becomes crystal clear where the mistake is.

Using dates is giving error.

Chandra S.Reddy, March 02, 2003 - 9:39 am UTC

Hi Tom,
In my previous question with the same title, the USING clause is wrong -- the bind variables for the 'tire_date' fields are missing. It was wrongly pasted. Sorry for that.
Please find the correct one below.
--
USING in_noperationcenterid,in_ntirestatuscode, in_cwipaccount,in_dreportfromdt, in_dreporttodt,v_tempstart, v_tempend ;
----

Thank you very much.

Tom Kyte
March 02, 2003 - 9:55 am UTC

still -- missing expression -- figure it out; not hard given the information I already supplied.

Thank you.

A reader, March 02, 2003 - 11:05 am UTC

Tom,
Thank you for the suggestion.
COUNT is a bad idea, but I need to return it to the application. The application will decide the pagination factor depending on the number of records, so I am using a count there.



Why does between not work?

Errick, March 26, 2003 - 10:27 am UTC

Tom,
I've been reading through this set of posts and was curious: why exactly does "between 90 and 100" not work, whereas just "select * from bigtable where rownum < 100" works? Maybe I'm missing something from the article. Just curious.

Tom Kyte
March 26, 2003 - 3:59 pm UTC

because rownum starts at 1 and is incremented only when a row is output.

so,

select * from t where rownum between 90 and 100 would be like this:


rownum := 1;
for x in ( select * from t )
loop
    if ( rownum between 90 and 100 )
    then
        output the row;
        rownum := rownum + 1;
    end if;
end loop;

nothing ever comes out of that loop.
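A quick way to see this for yourself (any table will do; all_objects is used here):

select count(*) from ( select * from all_objects where rownum between 90 and 100 );
-- returns 0: rownum never gets past 1, so the predicate is never true

select count(*) from ( select * from all_objects where rownum <= 100 );
-- returns 100: rownum increments as each row is output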

Let me understand it better...

David, April 07, 2003 - 8:49 am UTC

Tom,

I am a DBA and I am sometimes a bit confused when it comes to supporting web applications.

The web development people have asked me how to implement pagination, since their connection is stateless.

I would like to submit the query one time only, but I ended up creating something like below, which "re-parses", "re-executes" and "re-fetches" for each page:

select * from
(select b.*,rownum rnum
from (Select * from big_table a order by a.id) b
where rownum < :max )
where rnum >= :min ;

1) To my knowledge, each time I do this I have to "re-parse", "re-execute" and "re-fetch" the data. The bind variable values are kept and incremented for each page in the application. Is this a good approach ?

2) Wouldn't it be better if I could return the entire set (with first_rows) ?

3) How would be a mechanism for that (how would I code that) ?

4) Using this last approach, couldn't I do some kind of "pipelining" so the rows are returned to the application, submitting the query only once and without having to return the whole set -- since the entire table is too large.

Thanks


Tom Kyte
April 07, 2003 - 1:39 pm UTC

1) yes, it is what I do. Short of maintaining a connection and becoming a client server application -- there is no real avoiding this.

Me -- I'd rather have to parse (soft) for each page than keep a physical, private connection (and all of the associated resources) open for that user IN CASE they hit page forward.

2) and you have a 500 row result set -- and the user only looks at the first ten -- and never ever goes to page 2? So you do 50 times the LIO you need to? Probably at least 40 times as much LIO as you ever would (pages are like 10 rows and the users NEVER goto page 11).


No, you want to do as little work as possible, save in the knowledge that people get bored and go away after hitting page down once or twice.

3) you'd be on your own...

4) that would be back to "i'm client server, I always have a stated connection, I always consume massive resources on your machine -- even when I'm not using it"

A Belated Defence of Parag

Richard, April 07, 2003 - 11:43 am UTC

With reference to Parag's use of abbreviations: Parag's meaning was clear; it must have been to you, too, or you wouldn't have known to put u = you, ur = your.

Yes, the world IS awash with abbreviations (3-letter and otherwise) and acronyms, but that's because they usually (as in Parag's case) make perfect sense and would be likely to confuse or befuddle only the elderly and the infirm!

yrs etc.,

Richard

Tom Kyte
April 07, 2003 - 2:22 pm UTC

Elmer Fudd here,

Weww, I disagwee. You see, I gets wots of qwestions -- some in pewfect engwish, some in bwoken engwish, some in foweign wanguages. Oh, dat scwewy wabbit! I twy to pawse these qwestions -- make sense of them and evewy woadbwock someone puts in thewe makes it hawd fow me to do that. Just wike weading this pawagwaph is hawd fow you now. I do not think it is too much to ask to use pwopew wowds in a fowum wike this. Oh, dat scwewy wabbit! Dis is NOT a ceww phone hewe, this is not instant messaging. Dis is a discussion pwace, a pwace to wead things. Oh, dat scwewy wabbit! Using made up things just makes it hawdew to undewstand. I don't ask fow too many things, this is one that I keep asking fow though.

that really hard to read text brought to you by the dialectizer:
</code> http://www.rinkworks.com/dialect/ <code>


Well, I disagree. You see, I gets lots of questions -- some in perfect english, some in broken english, some in foreign languages. I try to parse these questions -- make sense of them and every roadblock someone puts in there makes it hard for me to do that.

Just like reading this paragraph is hard for you now.

I do not think it is too much to ask to use proper words in a forum like this. This is NOT a cell phone here, this is not instant messaging. This is a discussion place, a place to read things. Using made up things just makes it harder to understand.

I don't ask for too many things, this is one that I keep asking for though.

Sending results to the Internet application

B. Robinson, April 07, 2003 - 12:10 pm UTC

DBA David,

It is not just that the connections are stateless, but the connections are pooled and rotated such that there may be a different database connection used for every web page request from a given user.

So the only way to avoid requerying for every subset of the large result set would be to return the whole massive result set to the web app, and the web app would cache all the results in memory, reading each subset from memory as needed. But since this would require the entire result set to be read from the database, it would make more sense to use all_rows.

Naturally, that approach uses up gobs of memory on the app server or web server, so it may not be feasible for a web app with thousands of users.

Tom Kyte
April 07, 2003 - 2:24 pm UTC

the connection from the client (browser) to the app server is stateless.

time

A reader, April 07, 2003 - 5:56 pm UTC


Just a note:

on Tom's site,

loading the first 3-4 pages is very fast, about < 2 secs;
going to 490-500 of 501 takes 10 sec. to load a very simple page.

Tom Kyte
April 07, 2003 - 6:38 pm UTC

and it gets worse the further you go. My stuff is optimized to get you the first rows fast -- I do not give you the ability to go to "row 3421" -- what meaning would that have in a search like this?


google search for Oracle


Results 1 - 10 of about 6,840,000. Search took 0.11 seconds.
Results 91 - 100 of about 7,800,000. Search took 0.24 seconds.
Results 181 - 190 of about 6,840,000. Search took 0.49 seconds.
(wow, thats wacky - the counts change too)
Results 811 - 820 of about 6,840,000. Search took 0.91 seconds.
Results 901 - 908 of about 6,840,000. Search took 0.74 seconds.

what? they cut me off -- I'm sure my answer was 909, I'm just sure of it!

Results xxx of about xxxxxx

A reader, April 08, 2003 - 6:54 am UTC

I recently went through a load of code removing every count(*) that a developer (before I came on the project) had placed before the actual query.

It was amazing the argument I had with the (PHP) web developer about it. I just made the change and let the users decide whether they liked the improved performance more than the missing bit of fairly pointless information. Guess what they preferred!

The thing that is missing is the "results 1-10 of about 500" (or whatever), which would be useful. The user might well want to know if there are just a few more records to look at, in which case it might well be worth paging, or whether there are lots, so that they would know to refine the search.

I know Oracle Text can do this sort of thing, but is there anything that can help in "Standard" Oracle? Using Oracle Text would need quite a re-write of the system.

What we could do is have the application ask for 21 rows of data. If the cursor came back with fewer than 21 rows -- say 17 -- the screen would say ">> 7 more rows"; and if it hits 21, then display ">> at least 11 more rows".

Have you any comments?

Thanks

Tom Kyte
April 08, 2003 - 7:54 am UTC

...
The thing that is missing is the "results 1-10 of about 500" (or whatever),
.....

if using Oracle Text queries (like I do here) there is an API for that.

if using the CBO in 9i -- you can get the estimated cardinality for the query in v$sql_plan...
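A sketch of the reader's fetch-one-extra idea from above -- ask for one row past the page and use its presence or absence to decide what to display (the bind names are illustrative):

select *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE ) a
          where rownum <= :page_end + 1 )   -- one row past the page
 where rnum >= :page_start;

If the extra row comes back, display ">> more rows"; if not, the exact number remaining is known.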




For: Srinivas M

A Reader, April 08, 2003 - 9:00 am UTC

Hi,

All those fieldX IS LIKE '''' OR fieldX IS NULL .... what is that for?!! Don't you just want fieldX is null?? Anyway... maybe I missed something...

I'm sure Tom will have lots to say on this, and apologies for 'butting in', but I thought I'd give my opinion, and if it's off base at least I'll learn :)

Do you need that table to be deleted and inserted into each time (looks like a pseudo-temporary table) ? All that looping and fetching - and it looks to me like if you had a llimit of 1000, you are going to fetch and do nothing with 1000 rows ??! Can't you change your query to use the constructs Tom has already defined in this article i.e.

SELECT * FROM (YOUR QUERY IN HERE, BUT SELECTING rownum BACK ALSO) WHERE rownum BETWEEN llimit and ulimit

??

Then I suspect you don't need your table, your delete, your loops and fetches, you can just open this and return the cursor.

Regards,

Paul

A reader, April 08, 2003 - 9:06 am UTC

Hi Srinivas,
Sorry to jump in between, but I would like to say one thing. Tom has already given us his views and coding tips and tricks. Let's not waste his time by asking him to correct our code. I think this site provides us enough knowledge and tools. The only thing required on our part is applying it correctly and doing some research.


Screwy Rabbit!

Richard, April 08, 2003 - 10:28 am UTC

Hi,

Elmer Fudd... priceless! Seldom has an explanation been so funny! Point taken, though.

How about always translating your pages? Daffy Duck's my favourite.

Wegards,

Wichard

is this the proc. you are using for your site ?

A reader, April 08, 2003 - 3:23 pm UTC

is this the proc. you are using for your site?

If you bind the variables in a session and the http connection is stateless, how will you do it?

Please explain.

Tom Kyte
April 08, 2003 - 5:47 pm UTC

yes, this is the procedure I use here...


the "bind variables" are of course passed from page to page -- in my case I use a sessionid (look up at that really big number in the URL) and your session "state" is but a row in a table to me.

Hidden fields, cookies -- they work just as well.

Thanks

A reader, April 08, 2003 - 6:14 pm UTC


Want a trick on this

DeeeBeee Crazeee, April 28, 2003 - 8:35 am UTC

Hi Tom,

I just wanted to know if there is a trick for combining multiple rows into a single row with the values comma separated.

For example, I have the department table :

Dept:

Dept_name
---------
ACCOUNTS
HR
MARKETING

I need a query that would return me...

Dept_name
---------
ACCOUNTS, HR, MARKETING

....is there a way with SQL or do we have to use PL/SQL? The number of rows is not fixed.

thanks a lot

PS: Just wanted to check if I can post my questions here (in this section, without asking afresh)... because I just happened to come across a page wherein a reader was apologizing for having asked a question in the comments. Do let me know, so that I can apologise too when I ask you a question in this section next time ;)



Tom Kyte
April 28, 2003 - 8:48 am UTC

search this site for

stragg



What about this ?

A reader, May 16, 2003 - 2:40 pm UTC

I happened to find this in an article on pagination:

select rownum, col1
from foobar
group by rownum, col1
having rownum >= :start and rownum < :end

What do you think ? How does it compare to your solution to the original question ?

Tom Kyte
May 16, 2003 - 5:30 pm UTC

try it, it doesn't work.


set start to 10 and end to 15

you'll never get anything.

the way to do it is above; my method works and is the most efficient method (as of May 16, 2003 -- maybe some day in the future there will be another, more efficient method)

paging result set

lakshmi, May 17, 2003 - 4:38 am UTC

Excellent

Dynamic order by using rownum

vinodhps, May 27, 2003 - 6:17 am UTC

Hi Tom,
our current Oracle version is 8.0.4. I have one query which has to be ordered dynamically: if Max_ind is X then the low_value column has to be ordered ascending, or if Max_ind is N then the low_value column has to be ordered descending. But how can I do that in the query? I am using this in my form.
In the query below, ordering DESC or ASC depends on the value passed for max_ind (N or X).

SELECT insp_dtl.test_value,
insp_dtl.lab_test_sno,
purity.low_value, purity.high_value,
purity.pro_rata_flag, purity.pro_rata_type,
purity.cumulative_flag, purity.incr,
purity.prcnt, purity.flat_rate,
purity.cal_mode, NVL (purity.precision, 1) precision,
purity.min_max_ind
FROM t_las_matl_insp_hdr insp_hdr,
t_las_matl_insp_dtl insp_dtl,
t_pur_po_matl_purity_fact_dtl purity
WHERE insp_hdr.lab_test_sno = insp_dtl.lab_test_sno
AND insp_hdr.cnr_no = 200300905
AND purity.po_no = 200200607
-- AND purity.matl_code = f_matl_code
AND purity.para_code = insp_dtl.para_code
-- AND purity.para_code = f_para_code
ORDER BY low_value;




LAB_TEST_SNO LOW_VALUE HIGH_VALUE Max_ind
------------ --------- ---------- ---------
200300208 1.1 1.5 X
200300208 1.1 2 N
200300208 1.6 2 N
200300208 86 87.9 X
200300208 88 89.9 N

Tom Kyte
May 27, 2003 - 7:53 am UTC

great, thanks for letting us know?

Not really sure what you are trying to say here.

dynamically order by clause

vinodhps, May 27, 2003 - 9:08 am UTC

Hi Tom,
Thanks for your immediate response,
well, I will put my question this way..

SQL> create table order_by
  2  (low_value number(5),
  3   max_ind   varchar2(1));

Table created.


  1  insert into order_by
  2  select rownum ,'X' from all_objects
  3  where rownum < 10
  4* order by rownum desc
SQL> /

9 rows created.


  1  insert into order_by
  2  select rownum ,'N' from all_objects
  3  where rownum < 10
  4* order by rownum
SQL> /

9 rows created.

Now I would like to select all the values from the table by passing a value for max_ind (an indicator of whether it is a maximum or minimum value). In this query, if I pass the variable X then the order by clause must be descending, or else it should be ascending. Actually, it is a cursor.

SQL> select low_value  from order_by order by low_value desc;

LOW_VALUE
---------
        9
        9
        8
        8
        7
        7
        6
        6
        5
        5
        4
        4
        3
        3
        2
        2
        1
        1

18 rows selected.

This DESC or ASC will be decided dynamically.

Is it possible to do this dynamically, Tom?

I think the statements above make it clear.

Tom Kyte
May 27, 2003 - 9:42 am UTC

you would use native dynamic sql to get the optimum query plan.

l_query := 'select .... order by low_value ' || p_asc_or_desc;

where p_asc_or_desc is a variable you set to ASC or DESC.


that would be best.

you can use decode, but you'll never use an index to sort with if that matters to you


order by decode( p_input, 'ASC', low_value, 0 ) ASC,
decode( p_input, 'DESC', low_value, 0 ) DESC
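A runnable sketch of that decode approach against the order_by table created above (:p_input bound to 'ASC' or 'DESC'):

variable p_input varchar2(4)

exec :p_input := 'DESC';

select low_value, max_ind
  from order_by
 order by decode( :p_input, 'ASC',  low_value, 0 ) asc,
          decode( :p_input, 'DESC', low_value, 0 ) desc;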




Thank you tom

vinodhps, May 27, 2003 - 10:11 am UTC

Thank you Tom for your immediate response...

Hope to see more from you.

Thank you,


Very useful, thanks Tom. One more question.

Jack Liu, June 02, 2003 - 3:10 pm UTC

1. Do you get the total result count with count(*)? Does this paging need to know the count(*)? A select count takes a long time.

2. How do I optimize the order by? The query below takes only 3s without it but 44s with the order by:
select * from
( select qu.*, rownum rnum
from ( select issn,volume,issue,foa,title,author,subtitle,a.aid,rtype
from article a , ec.language l where 1=1
AND rtype in ('ART','REV','SER')
AND a.aid=l.aid AND l.langcode='eng'
AND a.issue is not null ORDER BY a.ayear desc ) qu
where rownum < 61)
where rnum >= 31


Tom Kyte
June 02, 2003 - 3:33 pm UTC

1) no, i use text's "get me an approximation of what you think the result set size might be" function. (it's a text query for me)

2) /*+ FIRST_ROWS */

do you have an index on a.ayear?
is ayear NOT NULL?

if so, it could use the index, with first_rows, to read the data descending.

Very, Very...Helpful

Ralph, June 02, 2003 - 6:18 pm UTC

Along those lines... how can we get the maximum number of rows that will be fetched? I.e. to be able to show 1-10 of 1000 records, how do we know that there are 1000 records in total without writing another select with count(*)?

Tom Kyte
June 02, 2003 - 8:10 pm UTC

you don't -- all you need to show is

"you are seeing 1-10 of more then 10, hit next to see what might be 11-20 (or maybe less"


If you use text, you can approximate the result set size.
If you use the CBO and 9i, you can get the estimated cardinality from v$SQL_PLAN
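
For the v$sql_plan route, a rough sketch in 9i (how you identify your statement in v$sql is up to you -- the sql_text filter here is just a placeholder):

select p.cardinality
  from v$sql s, v$sql_plan p
 where s.sql_text like 'select ... your query text ...%'
   and p.address      = s.address
   and p.hash_value   = s.hash_value
   and p.child_number = s.child_number
   and p.id = 1
/

The CARDINALITY of the top row source (id = 1) is the optimizer's estimate of the full result set size -- an estimate, which is exactly the point.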



Very helpful, thanks Tom, follow up with my question.

Jack, June 03, 2003 - 1:53 pm UTC

Tom,
Thanks for your quick response. This is really a very good place for Oracle users. Just to follow up on my original question:

1) Is this "get me an approximation of what you think the result set size might be" function only in Oracle Text? If I use Oracle interMedia Text, is there any solution to show the total result count?

2) a.ayear is indexed but has some nulls. I know it cannot use the index to replace the order by in this situation, but when I use /*+ INDEX_ASC (article article_ayear) */, it doesn't work either -- why? The optimizer mode is "choose" per svrmgrl> show parameter optimizer_mode;

Many many thanks.

Jack
I am planning to buy "expert one-on-one".


Tom Kyte
June 03, 2003 - 2:02 pm UTC

1) the approximation I'm showing is from text and only works with text.

you can get the estimated cardinality from an explain plan for other queries -- in 9i, that is right in v$sql_plan so you do not need to explain the query using explain plan

2) you answered your own question. The query does not permit the index to be used since using the index would miss NULL entries -- resulting in the wrong answer.

can you add "and a.ayear IS NOT NULL" or "and a.ayear > to_date( '01010001','ddmmyyyy')" to the query. then, an index on ayear alone can be used.
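
Applied to the query posted above, that would look something like this (just a sketch -- same tables and binds as the original, column list abbreviated):

select *
  from ( select qu.*, rownum rnum
           from ( select /*+ FIRST_ROWS */ issn, volume, issue, a.aid, rtype
                    from article a, ec.language l
                   where rtype in ('ART','REV','SER')
                     and a.aid = l.aid and l.langcode = 'eng'
                     and a.issue is not null
                     and a.ayear is not null   -- makes the index on ayear safe to use
                   order by a.ayear desc ) qu
          where rownum < 61 )
 where rnum >= 31
/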


better be quick on the book purchase (bookpool still has it as of jun/3/2003 -- see link on homepage)

Publisher went under, book no longer printed ;(
New book in august though ;)

Thank you for your quick response!

Jack, June 03, 2003 - 3:39 pm UTC

Tom,
Thanks. Actually I want the order to be ascending by ayear; since Oracle by default uses descending for indexes, that's the reason I use the /*+ INDEX_ASC (article article_ayear) */ hint.
I just don't know why it doesn't work. Here is the explain plan with the INDEX_ASC hint -- I don't know why the goal is still CHOOSE.
SELECT STATEMENT Hint=CHOOSE (Rows=6K Cost=6911)
  VIEW (Rows=6K Bytes=1M Cost=6911)
    COUNT STOPKEY
      NESTED LOOPS (Rows=6K Bytes=736K Cost=6911)
        TABLE ACCESS FULL LANGUAGE (Rows=6K Bytes=113K Cost=56)
        TABLE ACCESS BY INDEX ROWID ARTICLE (Rows=63K Bytes=5M Cost=1)
          INDEX UNIQUE SCAN SYS_C004334 (Rows=63K)

Thanks,

Jack



Tom Kyte
June 04, 2003 - 7:33 am UTC

Oracle uses indexes ASCENDING by default.

I told you why the index cannot be used -- ayear is nullable, using that index would (could) result in missing rows that needed to be processed.

hence, add the predicate I described above to make it so that the index CAN in fact be used.

paging and a join

marc, June 05, 2003 - 1:42 pm UTC

Which way would be better with a large table when the user wants to see an average of 500 rows back? The query has a main driving table and a 2nd table that will only be used to show a column's data. The 2nd table will not be used in the where clause or the order by of the main select.

option 1(all table joined in the main select):

select name,emp_id,salary from (
select a.*, rownum rnum from (
SELECT emp.name,emp.emp_id,salary.salary FROM EMP,SALARY
where zip = something and
EMP.emp_id = salary.emp_id order by name
) a where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

option 2 (only the driving table is in the inner select and the join is done at the higher level; then Oracle would only have to join the 2 tables for the data that the user will actually see):
select a.name, a.emp_id, salary.salary from (
select b.*, rownum rnum from (
SELECT emp.name, emp.emp_id FROM EMP where zip = something order by name
) b where rownum <= MAX_ROWS ) a, SALARY
where a.emp_id = salary.emp_id
and a.rnum >= MIN_ROWS


Tom Kyte
June 05, 2003 - 1:44 pm UTC

users don't want to see 500 rows. 500 rows -- waayyy too much data, you cannot consume that much information...

option 1. using first_rows hint.


it'll only access the other table as much as it needs to and is much easier to understand.

but benchmark it on your data, that'll tell YOU for sure what is best in YOUR case.

marc, June 06, 2003 - 2:25 pm UTC

My users must be super users, because they would like to see all that data (page by page, of course) so they can eyeball a forecast and see a trend. These people are brokers that look at 1 column of the dataset, which is price per share (money), so they do look at many rows at a time. The extra data is superfluous. 500 rows is nothing for these users.

I was asking your opinion on whether it is better to do the join in the main dataset and not in the pagination piece. My performance tuning told me that putting the data in the main query or the subquery all depends on the type of data I need to show. For example, a join to get the name can go in the main select, but a scalar subquery like (select count(trades) from othertable) works better in the pagination section.


Tom Kyte
June 06, 2003 - 2:55 pm UTC

as long as the query is a "first rows" sort of query that can terminate with a COUNT STOPKEY -- the join can go anywhere.

Using index with order by

Jon, July 15, 2003 - 6:21 am UTC

Will an index be used with an order by if the table has already been accessed via another index? I thought the CBO would only work with one index per table (except with bitmap indexes).

I'm working on returning search results. Users want first 2000 rows (I know, I know... what will they do with it all - was originally top 500 rows, but that wasn't enough for them). The main table is already being accessed via another index to limit the result set initially. Explain Plan tells me that the index on the order by column is not being used. How to use the index for ordering?

Actually as I'm writing this, I think the answer came to me - concatenated indexes - of form (limit_cols, order_by_col), and then include the leading index column(s) in the order by clause.

Secondly, if I work with a union clause on similar, but not identical, queries, can an index be used for ordering in this case?

E.g.
select * from (
select * from (
select ... from x,y,z where ...
union all
select ... from x,y,w where ...
) order by x.col1
) where rownum <= 2000

or would we get better results with this approach:

select * from (
select * from (
select * from (
select ... from x,y,z where ...
order by x.col1
) where rownum <= 2000
union all
select * from (
select ... from x,y,w where ...
order by x.col1
) where rownum <= 2000
) order by x.col1
) where rownum <= 2000

So, if the result is partially sorted, does an order by perform better than if not sorted (this brings back memories of sorting algorithms from many years ago...). I would think yes - but I'm not sure of Oracle's internal sort algorithm?

Tom Kyte
July 15, 2003 - 9:56 am UTC

if you use the "index to sort", how can you use another index "to find"

You can either use an index to sort OR you can use an index to find, but tell me -- how could you imagine using both?

Your concatenated index will work in some cases -- yes.


the 2nd approach -- where you limit all of the subresults -- will most likely be the better approach.


You cannot go into the "does an order by perform better ....", that is so far out of the realm of your control at this point as to be something to not even think about.

Jon, July 15, 2003 - 7:14 pm UTC

"how could you imagine using both?" - not sure I understand you here. Wanting to use two indexes is a common requirement - so I can easily imagine it:

select *
from emp
where hire_date between to_date('01/01/2002','DD/MM/YYYY')
and to_date('01/02/2002','DD/MM/YYYY')
order by emp_no

If this was a large table, the ability to use an index to filter and an index to order by would seem advantageous.

As for internal sort algorithms - do you know what Oracle uses - or is it secret squirrel stuff?

Tom Kyte
July 15, 2003 - 7:21 pm UTC

so tell me -- how would it work, give us the "pseudo code", make it real.

Hmmm...

Jon, July 16, 2003 - 10:22 am UTC

I mean Oracle does that fancy index combine operation with bitmap indexes. I guess I'll just have to build it for you.

Tell you what, if I come up with a way of doing something similar for b*tree's, I'll sell it to Oracle... then I'll retire :-)

Tom Kyte
July 16, 2003 - 10:48 am UTC

Oh, we can use more than one index

we have index joins -- for example:


create table t ( x int, y int );

create index t_idx1 on t(x);
create index t_idx2 on t(y);

then select x, y from t where x = 5 and y = 55;

could range scan both t_idx1, t_idx2 and then hash join them together by rowid.
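
If the optimizer does not choose that plan on its own, there is an INDEX_JOIN hint to request it (a sketch only -- whether it actually gets used still depends on the statistics):

select /*+ index_join( t t_idx1 t_idx2 ) */ x, y
  from t
 where x = 5
   and y = 55;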


We have bitmaps where by we can AND and OR bitmaps together...



BUT - I want you to explain an algorithm that would permit you to

a) range scan by one index in order to locate data
b) use another index to "sort it"


None of the multi-index approaches "sort" data, they are used to find data.

All this thinking makes my brain hurt.

Jon, July 16, 2003 - 11:46 pm UTC

Well, since we CAN combine two indexes, how about:

1) Use idx1 to range scan
2) Hash join rowids to idx2 to produce result set
3) Do a sort-merge between 2) result set and idx2 to order

The efficiency of doing 2) & 3) over sort of table data would probably depend on cardinality of 1).

More fun for the CBO team and the Oracle mathematics dept...

Tom Kyte
July 17, 2003 - 10:23 am UTC


it would depend on cardinality of 1 and 2 really.

if card of 1 is small but card of 2 is big and you have to (must) full scan idx2 a block at a time to look for matches (we have to inspect every index entry) -- full scanning the index could take a really really long time

step 3 would not be necessary in this scenario as the full scan of index 2 would be 'sorted' and would just probe the hash table you built in 1



To clarify

Jon, July 16, 2003 - 11:50 pm UTC

By sort-merge in 3) I mean a set intersection operation.

getting rows N through M of a result set

Mohan, July 17, 2003 - 8:13 am UTC

Regarding the discussion about paginating the result set into arbitrary chunks and sequencing them:

consider the table customer_data

create table customer_data(custno number, invoiceno number);
insert into customer_data(custno, invoiceno) values(1,110);
insert into customer_data(custno, invoiceno) values(1,111);
insert into customer_data(custno, invoiceno) values(1,112);
insert into customer_data(custno, invoiceno) values(2,1150);
insert into customer_data(custno, invoiceno) values(2,1611);
insert into customer_data(custno, invoiceno) values(3,1127);
insert into customer_data(custno, invoiceno) values(2,3150);
insert into customer_data(custno, invoiceno) values(2,3611);
insert into customer_data(custno, invoiceno) values(3,3127);

The following query will break the result set up based on custno and sequence each chunk.

select b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
  from (select custno, min(rnum) minrnum
          from (select rownum rnum, custno, invoiceno
                  from (select custno, invoiceno
                          from customer_data
                         order by custno, invoiceno))
         group by custno) a,
       (select rownum rnum, custno, invoiceno
          from (select custno, invoiceno
                  from customer_data
                 order by custno, invoiceno)) b
 where a.custno = b.custno;


Mohan


Tom Kyte
July 17, 2003 - 10:42 am UTC

ok, put 100,000 rows in there and let us know how it goes... (speed and resource usage wise)

It works..

DD, July 17, 2003 - 5:15 pm UTC

<quote>
What about this ? May 16, 2003
Reviewer: A reader

I happened to found this in an article on pagination:

select rownum, col1
from foobar
group by rownum, col1
having rownum >= :start and rownum < :end

What do you think ? How does it compare to your solution to the original
question ?


Followup:
try it, it doesn't work.


set start to 10 and end to 15

you'll never get anything.

the way to do it -- it is above, my method works and is the most efficient
method (as of May 16 2003, maybe some day in the furture there will be another
more efficient method)
</quote>

Tom,
Your reply above states that this does not work. In fact it does work, and it MUST work. The group by will be done before the having clause is applied, and so we will get the correct result set. Please let me know your views and what it is that makes you think this won't work. Here are my results.

RKPD01> select rownum, object_id from big_table
2 group by rownum, object_id
3 having rownum > 10 and rownum < 15;

ROWNUM OBJECT_ID
---------- ----------
11 911
12 915
13 1091
14 1103


I haven't tried to see if it is efficient, but I wanted to verify why it wouldn't work when it should. Hope to hear from you.

Thanks
DD


Tom Kyte
July 17, 2003 - 7:37 pm UTC

oh, i messed up, saw the having and read it as 'where'

totally 100% inefficient, not a good way to do it. it does the entire result set and then gets rows 10-15

as opposed to my method which gets 15 rows, then throws out the first couple.



getting rows N through M of a result set

Mohan K, July 19, 2003 - 3:00 am UTC

Refer to the review on July 17, 2003

If the custno column is not indexed, then the performance will be a problem.

Run the following scripts to test the above query.


create table customer_data(custno number, invoiceno number);

begin
    for n1 in 1 .. 2500 loop
        for n2 in 1 .. 100 loop
            insert into customer_data(custno, invoiceno) values (n1, n2);
        end loop;
    end loop;
end;
/

commit;

create index customer_data_idx on customer_data(custno);


The first sql statement will create the table. The PL/SQL script will populate the table with 250000 rows. The next statement will create an index.


Now run the query as given below

select b.rnum-a.minrnum+1 slno, a.custno, b.invoiceno from(select custno, min(rnum) minrnum from
(select rownum rnum, custno, invoiceno from (select custno, invoiceno from customer_data order by custno, invoiceno)) group by custno) a,
(select rownum rnum, custno, invoiceno from (select custno, invoiceno from customer_data order by custno, invoiceno)) b
where a.custno=b.custno;

Mohan



Is it Possible?

A reader, July 23, 2003 - 12:39 pm UTC

Hi Tom,

I have a table like this

Name
Date
Amount

Data will be like

User1 01-JAN-03 100
User1 22-JUL-03 20
......
User2 23-JUL-03 90

Is there any way I can get the last 6 records (ordered by date desc) for each user with a single query?

I need to get output like

User1 22-JUL-03 20
User1 01-JAN-03 100
....
User2 23-JUL-03........

Thank you very much Tom. (I am using 8.1.7)

Tom Kyte
July 23, 2003 - 7:02 pm UTC

select *
from (select name, date, amount,
row_number() over (Partition by user order by date DESC ) rn
from t )
where rn <= 6;
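
One caveat on the example as written: DATE and USER are reserved in Oracle SQL, so with the column names exactly as posted it would not parse. With real column names (uname and dt are my stand-ins) it runs as-is:

select *
  from ( select uname, dt, amount,
                row_number() over ( partition by uname order by dt desc ) rn
           from t )
 where rn <= 6;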

For Query example on CUSTOMER_DATA table posted above...

Kamal Kishore, July 23, 2003 - 10:05 pm UTC

It is my understanding that the same output can be produced by using the following query:

SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
       custno,
       invoiceno
FROM   customer_data
WHERE  custno IN (1, 2)
ORDER  BY custno,
          invoiceno
/


I may be understanding it wrong; maybe Tom can verify this.

I ran the two queries on the CUSTOMER_DATA table (with 250000 rows) and below are the statistics. I ran both queries several times to remove any doubt, but the results were similar.

I see a huge performance difference between the two queries.

Waiting for inputs/insights from Tom.
Thanks,

==========================================================

SQL> SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
  2         custno,
  3         invoiceno
  4  FROM   customer_data
  5  WHERE  custno IN (1, 2)
  6  ORDER  BY custno,
  7            invoiceno
  8  /

200 rows selected.

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   WINDOW (SORT)
   2    1     CONCATENATION
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMER_DATA'
   4    3         INDEX (RANGE SCAN) OF 'CUSTOMER_DATA_IDX' (NON-UNIQU
          E)

   5    2       TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMER_DATA'
   6    5         INDEX (RANGE SCAN) OF 'CUSTOMER_DATA_IDX' (NON-UNIQU
          E)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
       2859  bytes sent via SQL*Net to client
        510  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        200  rows processed

SQL> SELECT b.rnum - a.minrnum + 1 slno,
  2         a.custno,
  3         b.invoiceno
  4  FROM   (SELECT custno,
  5                 MIN(rnum) minrnum
  6          FROM   (SELECT rownum rnum,
  7                         custno,
  8                         invoiceno
  9                  FROM   (SELECT custno,
 10                                 invoiceno
 11                          FROM   customer_data
 12                          ORDER  BY custno,
 13                                    invoiceno))
 14          GROUP  BY custno) a,
 15         (SELECT rownum rnum,
 16                 custno,
 17                 invoiceno
 18          FROM   (SELECT custno,
 19                         invoiceno
 20                  FROM   customer_data
 21                  ORDER  BY custno,
 22                            invoiceno)) b
 23  WHERE  a.custno = b.custno AND a.custno in (1, 2)
 24  ORDER  BY custno,
 25            invoiceno
 26  /

200 rows selected.

Elapsed: 00:00:20.08

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   MERGE JOIN
   2    1     VIEW
   3    2       COUNT
   4    3         VIEW
   5    4           SORT (ORDER BY)
   6    5             TABLE ACCESS (FULL) OF 'CUSTOMER_DATA'
   7    1     SORT (JOIN)
   8    7       VIEW
   9    8         SORT (GROUP BY)
  10    9           VIEW
  11   10             COUNT
  12   11               VIEW
  13   12                 SORT (ORDER BY)
  14   13                   TABLE ACCESS (FULL) OF 'CUSTOMER_DATA'




Statistics
----------------------------------------------------------
          0  recursive calls
         88  db block gets
       1740  consistent gets
       8679  physical reads
          0  redo size
       2859  bytes sent via SQL*Net to client
        510  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          2  sorts (disk)
        200  rows processed

SQL>
 

Example on customer_data table

Mohan K, July 24, 2003 - 4:06 am UTC

Specify the where clause in the inner query. The same where clause has to be applied twice.

select b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
  from (select custno, min(rnum) minrnum
          from (select rownum rnum, custno, invoiceno
                  from (select custno, invoiceno
                          from customer_data
                         where custno in (2, 3)
                         order by custno, invoiceno))
         group by custno) a,
       (select rownum rnum, custno, invoiceno
          from (select custno, invoiceno
                  from customer_data
                 where custno in (2, 3)
                 order by custno, invoiceno)) b
 where a.custno = b.custno
/


Mohan


tkprof results on CUSTOMER_DATA query...

Kamal Kishore, July 24, 2003 - 8:50 am UTC

SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
custno,
invoiceno
FROM customer_data
WHERE custno IN (1, 2)
ORDER BY custno,
invoiceno

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.02       0.01          0          8          0         200
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.02       0.01          0          8          0         200



SELECT b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
FROM (SELECT custno, MIN(rnum) minrnum
FROM (SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
WHERE custno IN (1, 2)
ORDER BY custno, invoiceno))
GROUP BY custno) a,
(SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
WHERE custno IN (1, 2)
ORDER BY custno, invoiceno)) b
WHERE a.custno = b.custno

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.04          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.06       0.05          0         16          0         200
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.07       0.10          0         16          0         200


**********************************************************
==========================================================
**********************************************************

SELECT row_number() over(PARTITION BY custno ORDER BY custno, invoiceno) slno,
custno,
invoiceno
FROM customer_data
ORDER BY custno,
invoiceno

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     2501     16.53      19.38       2080        436         50      250000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2503     16.53      19.38       2080        436         50      250000


SELECT b.rnum - a.minrnum + 1 slno, a.custno, b.invoiceno
FROM (SELECT custno, MIN(rnum) minrnum
FROM (SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
ORDER BY custno, invoiceno))
GROUP BY custno) a,
(SELECT rownum rnum, custno, invoiceno
FROM (SELECT custno, invoiceno
FROM customer_data
ORDER BY custno, invoiceno)) b
WHERE a.custno = b.custno

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     2501     71.99      82.11       5007        872        100      250000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2503     71.99      82.14       5007        872        100      250000


ALL_ROWS or FIRST_ROWS ?

Tatiane, August 05, 2003 - 1:50 pm UTC

After all, using your pagination method, what optimization mode (or goal) should we use ?

Tom Kyte
August 05, 2003 - 2:22 pm UTC

FIRST_ROWS definitely

A reader, August 05, 2003 - 2:41 pm UTC

Maybe FIRST_ROWS_1, 10, 100, 1000 ????

From the 9.2 Reference:

<q>
first_rows_n

The optimizer uses a cost-based approach, regardless of the presence of statistics, and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, 1000).

first_rows

The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.
</q>

What is the difference in this case ?

I still don't get it

Sudha Bhagavatula, August 11, 2003 - 5:04 pm UTC

I'm trying to run this and I get only 25 rows:

select *
from (select cl.prov_full_name full_name,
cl.spec_desc specialty_dsc,
sum(cl.plan_liab_amt) tot_pd,
sum(cl.co_ins_amt+cl.ded_amt+cl.copay_amt) patient_resp,
count(distinct clm10_id) claims
from aso.t_medical_claims_detail cl,
aso.t_employer_groups_data g,
aso.t_categories_data c
where g.emp_super_grp_id||g.emp_sub_grp_id = cl.emp_grp_id
and c.cat_dim_id = g.cat_dim_id
and c.cat_name like 'America%'
and cl.paid_date between to_date('01/01/2003','mm/dd/yyyy')
and to_date('06/30/2003','mm/dd/yyyy')
and prov_full_name not like '*%'
and spec_desc not like '*%'
group by prov_full_name,
spec_desc
order by count(distinct clm10_id) desc )
where rownum < 26
union
select decode(full_name,null,' ', 'All Other Providers') full_name,decode(specialty_dsc,null,' ','y') specialty_dsc,tot_pd,patient_resp,claims
from (select cl.prov_full_name full_name,
cl.spec_desc specialty_dsc,
sum(cl.plan_liab_amt) tot_pd,
sum(cl.co_ins_amt+cl.ded_amt+cl.copay_amt) patient_resp,
count(distinct clm10_id) claims
from aso.t_medical_claims_detail cl,
aso.t_employer_groups_data g,
aso.t_categories_data c
where g.emp_super_grp_id||g.emp_sub_grp_id = cl.emp_grp_id
and c.cat_dim_id = g.cat_dim_id
and c.cat_name like 'America%'
and cl.paid_date between to_date('01/01/2003','mm/dd/yyyy')
and to_date('06/30/2003','mm/dd/yyyy')
and prov_full_name not like '*%'
and spec_desc not like '*%'
group by prov_full_name,
spec_desc
order by count(distinct clm10_id) desc )
where rownum >= 26

Tom Kyte
August 11, 2003 - 6:50 pm UTC

it by its very definition will only ever return 25 rows at most.

"where rownum >= 26" is assured to return 0 records.

rownum is assigned to a row like this:


rownum = 1
loop over potential records in the result set
    if predicate satisfied
    then
        OUTPUT RECORD
        rownum = rownum + 1
    end if
end loop


So, you see -- rownum is ALWAYS 1: it never reaches 26, so no row is ever output and rownum never gets incremented.
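
A quick way to see this for yourself -- the top-level predicate can never be satisfied, while the nested form assigns rownum before the outer filter is applied:

select * from all_users where rownum >= 2;   -- always returns 0 rows

select *
  from ( select u.*, rownum rnum from all_users u )
 where rnum >= 2;                            -- returns rows 2..N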

So how do I get the rows

Sudha Bhagavatula, August 12, 2003 - 8:34 am UTC

So how do I get the result that I'm trying to achieve ? Can it be done ?

Thanks
Sudha

Tom Kyte
August 12, 2003 - 9:02 am UTC

I don't know -- why don't you phrase IN ENGLISH what you are trying to achieve.

The sql parser in my brain doesn't like to parse big queries and try to reverse engineer what you MIGHT have wanted (given that the question isn't phrased properly in the first place and all)....

This is my question

Sudha Bhagavatula, August 12, 2003 - 9:34 am UTC

I have to create a report showing the top 25 providers based on the number of distinct claims. Get the total for the 25 providers, compute percentages against the total for all the providers, and then total the claims for the providers not in the top 25.

This is how the report should be :

provider #claims %of total

xxxxxxx 1234 14%
yyyyyyy 987 11%
-------


---till the top 25
All other providers 3210 32%

Thanks
Sudha

Tom Kyte
August 12, 2003 - 9:52 am UTC

ops$tkyte@ORA920> /*
DOC>
DOC>drop table t1;
DOC>drop table t2;
DOC>
DOC>create table t1 ( provider int );
DOC>
DOC>create table t2 ( provider int, claim_no int );
DOC>
DOC>
DOC>-- 100 providers...
DOC>insert into t1 select rownum from all_objects where rownum <= 100;
DOC>
DOC>insert into t2
DOC>select dbms_random.value( 1, 100 ), rownum
DOC>  from all_objects;
DOC>*/
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select case when rn <= 25
  2              then to_char(provider)
  3              else 'all others'
  4         end provider,
  5         to_char( round(sum( rtr ) * 100 ,2), '999.99' )  || '%'
  6    from (
  7  select provider, cnt, rtr, row_number() over (order by rtr) rn
  8    from (
  9  select provider, cnt, ratio_to_report(cnt) over () rtr
 10    from (
 11  select t1.provider, count(*) cnt
 12    from t1, t2
 13   where t1.provider = t2.provider
 14   group by t1.provider
 15         )
 16         )
 17         )
 18   group by case when rn <= 25
 19                 then to_char(provider)
 20                 else 'all others'
 21             end
 22   order by count(*), sum(rtr) desc
 23  /

PROVIDER                                 TO_CHAR(
---------------------------------------- --------
69                                           .97%
45                                           .97%
14                                           .97%
99                                           .97%
27                                           .97%
43                                           .97%
5                                            .96%
72                                           .96%
2                                            .96%
61                                           .96%
78                                           .96%
29                                           .95%
92                                           .95%
88                                           .95%
63                                           .95%
91                                           .95%
35                                           .94%
67                                           .93%
77                                           .93%
60                                           .91%
76                                           .91%
55                                           .91%
79                                           .88%
1                                            .48%
100                                          .48%
all others                                 77.24%

26 rows selected.
 

Great solution

Sudha Bhagavatula, August 12, 2003 - 2:27 pm UTC

Tom,

That worked like a charm, thanks !

Sudha



Works great, but bind variables giving bad plan

Mike Madland, August 22, 2003 - 4:57 pm UTC

Hi Tom,

Thanks for a great web site and a great book.

I'm using your awesome paginate query and getting great
results but I'm running into issues with the optimizer
giving me a bad plan when I use bind variables for the
beginning and ending row numbers.  I've tried all kinds
of hints but ended up resorting to dynamic sql to get the
fastest plan.

Do you have any ideas on why my query with the bind
variables is insisting on doing a hash join (and thus is
slower) and if there is any fix?  Thanks in advance.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production

SQL> create sequence s;

Sequence created.

SQL> create table t as
  2  select s.nextval pk, object_name, created, object_type
  3   from all_objects;

Table created.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

21158 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

42316 rows created.

SQL> commit;

Commit complete.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

84632 rows created.

SQL>  commit;

Commit complete.

SQL> insert /*+ append */
  2    into t (pk, object_name, created, object_type)
  3  select s.nextval,  object_name, created, object_type
  4   from t;

169264 rows created.

SQL> commit;

Commit complete.

SQL> alter table t add constraint pk_t primary key (pk);

Table altered.

SQL> create index t_u on t (lower(object_name), pk);

Index created.

SQL> analyze table t compute statistics
  2    for table for all indexes for all indexed columns
  3  /

Table analyzed.

SQL> set timing on
SQL> alter session set sql_trace=true;

Session altered.

SQL> SELECT t.pk, t.object_name, t.created, object_type
  2    FROM (SELECT *
  3            FROM (select innermost.*, rownum as rowpos
  4                    from (SELECT pk
  5                            FROM t
  6                           ORDER BY LOWER(object_name)
  7                         ) innermost
  8                   where rownum <= 10 )
  9           where rowpos >= 1) pg
 10         INNER JOIN t ON pg.pk = t.pk
 11   ORDER BY pg.rowpos;

        PK OBJECT_NAME                    CREATED   OBJECT_TYPE
---------- ------------------------------ --------- -------------
         1 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     10352 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     21159 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     31510 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     42317 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     52668 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     63475 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     73826 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     84633 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     94984 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM

10 rows selected.

Elapsed: 00:00:00.03
SQL>
SQL> variable r            refcursor
SQL>
SQL> declare
  2  i_endrow    integer;
  3  i_startrow  integer;
  4
  5  begin
  6  i_endrow   := 10;
  7  i_startrow := 1;
  8
  9  open :r FOR
 10  SELECT t.pk, t.object_name, t.created, object_type
 11    FROM (SELECT *
 12            FROM (select innermost.*, rownum as rowpos
 13                    from (SELECT pk
 14                            FROM t
 15                           ORDER BY LOWER(object_name)
 16                         ) innermost
 17                   where rownum <= i_endrow )
 18           where rowpos >= i_startrow) pg
 19         INNER JOIN t ON pg.pk = t.pk
 20   ORDER BY pg.rowpos;
 21  END;
 22  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> print :r

        PK OBJECT_NAME                    CREATED   OBJECT_TYPE
---------- ------------------------------ --------- -------------
         1 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     10352 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     21159 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     31510 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     42317 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     52668 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     63475 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     73826 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM
     84633 /1005bd30_LnkdConstant         13-AUG-03 JAVA CLASS
     94984 /1005bd30_LnkdConstant         13-AUG-03 SYNONYM

10 rows selected.

Elapsed: 00:00:02.05

---- From TKPROF ----

SELECT t.pk, t.object_name, t.created, object_type
  FROM (SELECT *
          FROM (select innermost.*, rownum as rowpos
                  from (SELECT pk
                          FROM t
                         ORDER BY LOWER(object_name)
                       ) innermost
                 where rownum <= 10 )
         where rowpos >= 1) pg
       INNER JOIN t ON pg.pk = t.pk
 ORDER BY pg.rowpos

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.36         22         25          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.38         22         25          0          10

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
     10   SORT (ORDER BY)
     10    NESTED LOOPS
     10     VIEW
     10      COUNT (STOPKEY)
     10       VIEW
     10        INDEX   GOAL: ANALYZED (FULL SCAN) OF 'T_U'
                   (NON-UNIQUE)
     10     TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'T'
     10      INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'PK_T' (UNIQUE)

********************************************************************************

SELECT t.pk, t.object_name, t.created, object_type
  FROM (SELECT *
          FROM (select innermost.*, rownum as rowpos
                  from (SELECT pk
                          FROM t
                         ORDER BY LOWER(object_name)
                       ) innermost
                 where rownum <= :b1 )
         where rowpos >= :b2) pg
       INNER JOIN t ON pg.pk = t.pk
 ORDER BY pg.rowpos

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      2.34       2.55       1152       2492          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      2.34       2.56       1152       2492          0          10

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
     10   SORT (ORDER BY)
     10    HASH JOIN
     10     VIEW
     10      COUNT (STOPKEY)
     10       VIEW
     10        INDEX   GOAL: ANALYZED (FULL SCAN) OF 'T_U'
                   (NON-UNIQUE)
 338528     TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'T'
 

Tom Kyte
August 23, 2003 - 10:00 am UTC

first_rows all of the subqueries. that is appropriate for pagination. I should have put that into the original response I guess!


consider:

SELECT t.pk, t.object_name, t.created, object_type
  FROM (SELECT *
          FROM (select innermost.*, rownum as rowpos
                  from (SELECT pk
                          FROM t
                         ORDER BY LOWER(object_name)
                       ) innermost
                 where rownum <= :b1 )
         where rowpos >= :b2) pg
       INNER JOIN t ON pg.pk = t.pk
 ORDER BY pg.rowpos

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      1.92       2.03          2       2491          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      1.92       2.03          2       2491          0          10

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  SORT ORDER BY (cr=2491 r=2 w=0 time=2033994 us)
     10   HASH JOIN (cr=2491 r=2 w=0 time=2033701 us)
     10    VIEW (cr=3 r=2 w=0 time=1029 us)
     10     COUNT STOPKEY (cr=3 r=2 w=0 time=955 us)
     10      VIEW (cr=3 r=2 w=0 time=883 us)
     10       INDEX FULL SCAN T_U (cr=3 r=2 w=0 time=848 us)(object id 55317)
 350000    TABLE ACCESS FULL T (cr=2488 r=0 w=0 time=598964 us)


versus:

********************************************************************************
SELECT t.pk, t.object_name, t.created, object_type
  FROM (SELECT /*+ FIRST_ROWS */ *
          FROM (select /*+ FIRST_ROWS */ innermost.*, rownum as rowpos
                  from (SELECT /*+ FIRST_ROWS */ pk
                          FROM t
                         ORDER BY LOWER(object_name)
                       ) innermost
                 where rownum <= :b1 )
         where rowpos >= :b2) pg
       INNER JOIN t ON pg.pk = t.pk
 ORDER BY pg.rowpos

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0         20          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0         20          0          10

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  SORT ORDER BY (cr=20 r=0 w=0 time=954 us)
     10   TABLE ACCESS BY INDEX ROWID OBJ#(55315) (cr=20 r=0 w=0 time=749 us)
     21    NESTED LOOPS (cr=15 r=0 w=0 time=589 us)
     10     VIEW (cr=3 r=0 w=0 time=278 us)
     10      COUNT STOPKEY (cr=3 r=0 w=0 time=210 us)
     10       VIEW (cr=3 r=0 w=0 time=151 us)
     10        INDEX FULL SCAN OBJ#(55317) (cr=3 r=0 w=0 time=98 us)(object id 55317)
     10     INDEX RANGE SCAN OBJ#(55316) (cr=12 r=0 w=0 time=188 us)(object id 55316)

A reader, August 25, 2003 - 4:37 am UTC


Perfect

Mike Madland, September 03, 2003 - 12:43 pm UTC

Tom, thank you so much. I had tried first_rows, but not on *all* of the subqueries. This is great.

how about 8.0.

s devarshi, September 13, 2003 - 3:34 am UTC

What if I want to do the same in version 8.0.4?

In PL/SQL?

I have a few other problems and wanted to ask you about them, but
'ask your question later' is blocking me.

devarshi

Tom Kyte
September 13, 2003 - 9:27 am UTC

you cannot use order by in a subquery in 8.0 so this technique doesn't apply.

you have to open the cursor.

fetch the first N rows and ignore them

then fetch the next M rows and keep them

close the cursor



that's it.
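
A rough sketch of that approach in PL/SQL (table, column and the 10-19 window are illustrative only):

declare
    cursor c is
        select ename from emp order by ename;
    l_ename emp.ename%type;
begin
    open c;
    for i in 1 .. 19 loop
        fetch c into l_ename;
        exit when c%notfound;
        if i >= 10 then
            -- rows 10 through 19 are the ones we keep
            dbms_output.put_line( l_ename );
        end if;
    end loop;
    close c;
end;
/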

One question about your approach

julie, September 25, 2003 - 11:04 am UTC

My Java developer is asking me how he will know how many rows are in the table, so that he can pass me the minimum and maximum row numbers (20, 40 and so on) from the JSP page.




Tom Kyte
September 25, 2003 - 11:26 pm UTC

you have a "first page"

you have a "next page"

when "next page" returns less rows then requested -- you know you have hit "last page"

it is the way I do it... works great. uses least resources.
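
One way to implement that (my own convention, not from the answer above) is to over-fetch by a single row: ask for rows :min_row through :max_row+1, display at most a pageful, and show the Next link only if the extra row came back:

select *
  from ( select a.*, rownum rnum
           from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
          where rownum <= :max_row + 1 )   -- one extra row, just to peek ahead
 where rnum >= :min_row
/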

ORDER OF SELECTS

Tracy, October 08, 2003 - 12:04 pm UTC

I have a table accounts with a varchar2(50) column accountnumber.

I want to select the row with the highest value in accountnumber where the column contains numbers only so I do this:


test> select max(acno)
2 from
3 (select to_number(ACCOUNTNUMBER) acno
4 from ACCOUNTS
5 where replace(translate(ACCOUNTNUMBER, '1234567890', 'aaaaaaaaaa'),'a','') is null);

MAX(ACNO)
------------
179976182723

which works fine. (May not be the best way of doing it, but it works.)

I then want to refine it by adding 'only if the number is less than 500000' so I add

where acno < 500000

and then I get ORA-01722: invalid number.

test> l
1 select max(acno)
2 from
3 (select to_number(ACCOUNTNUMBER) acno
4 from ACCOUNTS
5* where replace(translate(ACCOUNTNUMBER, '1234567890', 'aaaaaaaaaa'),'a','') is null) where acno < 500000
test> /
where replace(translate(ACCOUNTNUMBER, '1234567890', 'aaaaaaaaaa'),'a','') is null) where acno < 500000
*
ERROR at line 5:
ORA-01722: invalid number

Presumably this is to do with the order in which the selects work, but I thought that because the inner select is returning numbers
only that the outer select would work ok?

Tom Kyte
October 09, 2003 - 3:24 pm UTC

you are ascribing procedural constructs to a non-procedural language!

you are thinking "inline view done AND then outer stuff"

in fact that query is not any different than the query with the inline view removed -- the acno < 500000 is done "whenever".


you can:

where
    decode( replace( translate( accountNumber, '1234567890', '0000000000' ), '0', '' ),
            NULL, to_number( accountNumber ),
            NULL ) < 500000


hint: don't use 'a', else a string with 'a' in it would be considered a valid number!
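
To illustrate the digits-only test itself (with '0' as the marker character, per the hint):

select decode( replace( translate( '12345', '1234567890', '0000000000' ), '0', '' ),
               null, 'all digits', 'not a number' ) result
  from dual;   -- 'all digits'

select decode( replace( translate( '12a45', '1234567890', '0000000000' ), '0', '' ),
               null, 'all digits', 'not a number' ) result
  from dual;   -- 'not a number', because the 'a' survives the translate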


search producing wrong results

Paul Druker, October 09, 2003 - 11:33 am UTC

Tom, I was looking for the from$_subquery$ combination on your site (I saw it in dba_audit_trail.obj_name). However, searching for from$_subquery$ returns approximately 871 records, which is not correct. For example, this page does contain this word, but almost all of the extracted pages don't. It's interesting that searching for from_subquery (without the underscore and $ sign) gives the same result. Searching for "from$_subquery$" gives the same 871 results. I'd understand "special treatment" of the underscore sign, but why the $ sign?

Tom Kyte
October 09, 2003 - 6:04 pm UTC

Implementing dynamic query to suggested pagination query

Stephane Gouin, October 24, 2003 - 8:57 am UTC

Hi Tom

Using owa_util.cellsprint, but wanting to customize the table rows a little (adding a style sheet to highlight every other row, as a visual aid to users), so I was forced to look at the following query, as given in this thread:

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/

My question is, however: how do you introduce a dynamic query into the mix, given that I want to build a re-usable module that others can implement? This is exactly what owa_util.cellsprint with the dynamic cursor accomplishes, but I can't get in there to tweak the layout.

Thanks for your help

Tom Kyte
October 24, 2003 - 9:43 am UTC

cellsprint only supports dynamic sql? not sure what the issue is here?

Stephane Gouin, October 24, 2003 - 10:54 am UTC

Hi Tom,

Sorry, I wasn't clear in my question. I was using cellsprint, but I realized I can't insert a style in the table row tag, for instance (i.e. <tr class="h1">). The objective is to add a style to the row so that, via CSS, I can highlight alternate rows, giving the user a little contrast when dealing with long lists.

I want to extend the cellsprint function, by allowing further control over the table tags... (ie style sheets, alignment, widths etc...)

Using a REF Cursor (or owa_util.bind_variables) for the subquery, how could I implement it using the pagination query.

Hope I clarified the question enough..

Tom Kyte
October 24, 2003 - 11:09 am UTC

you actually have access to the source code for cellsprint (it's not wrapped). just copy it as your own and modify as you see fit.



getting rows N through M of a result set

Edward Girard, October 30, 2003 - 10:35 am UTC

Excelllent thread

Very useful for web-based applications

Saminathan Seerangan, November 01, 2003 - 12:00 am UTC


HAVING can be efficient

Joe Levy, November 12, 2003 - 1:27 pm UTC

Agreed that this

<quote>
select rownum, col1
from foobar
group by rownum, col1
having rownum >= :start and rownum < :end
</quote>

is inefficient. But

select rownum, col1
from foobar
where rownum < :end -- added line to improve performance
group by rownum, col1
having rownum >= :start and rownum < :end

is almost as efficient as your preferred method. And it has the advantage of being usable in a scalar subquery. (The additional nesting required by your preferred method puts columns from tables in the outer query out of scope.)

Is there a reason not to use a HAVING clause with ROWNUM when variable scope demands it?


Tom Kyte
November 12, 2003 - 4:47 pm UTC

why would a scalar subquery need the N'th row?

but yes, that would work (don't need the second and rownum < :end)
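
Spelled out, Joe's query with that simplification applied would read:

select rownum, col1
  from foobar
 where rownum < :end
 group by rownum, col1
having rownum >= :start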

Row_Number() or ROWNUM

Ranjit Desai, November 19, 2003 - 5:50 am UTC

Hi Tom,

We do use row_number() and other analytic functions, but recently we came across a limitation: Oracle 8i Standard Edition doesn't support these functions; they are only available in Enterprise Edition. Many of our sites are on Standard Edition of Oracle 8i.

So the current method of using row_number() to get the required output needs to be changed.

SELECT deptno, ename, hiredate,
ROW_NUMBER() OVER (PARTITION BY deptno ORDER BY hiredate) AS emp_id
FROM emp

To get similar output in a SELECT query, what can we do? Is it possible to use ROWNUM, or a user-defined function?

Please help us, as we have already tried some options without success.

Thanks & Regards,

Ranjit Desai

Tom Kyte
November 21, 2003 - 11:25 am UTC

you cannot use rownum to achieve that. analytics are mandatory for getting those numbers "partitioned"

9iR2 SE (standard) offers analytics as a feature.

Fetching rows N-M

Stevef, November 26, 2003 - 5:28 am UTC

Can the first N rows optimization feature be used in association with the paging technique to enhance the performance of these queries?

SELECT /*+ FIRST_ROWS(N) */ ....



http://otn.oracle.com/products/oracle9i/daily/jan28.html

Tom Kyte
November 26, 2003 - 7:49 am UTC

yes, i usually just use first_rows myself.

Fetching rows N-M

Stevef, November 27, 2003 - 8:24 am UTC

Actually, weird effects. The first query below returns 10 rows as expected but the second returns 19 rows!!!!
(Oracle9i Enterprise Edition Release 9.2.0.2.1 Win2000)

select *
  from (select a.*, rownum r
          from (select /*+ first_rows */ customerid from customer order by 1) a
         where rownum <= 10+9 )
 where r >= 10

select *
  from (select a.*, rownum r
          from (select /*+ first_rows(10) */ customerid from customer order by 1) a
         where rownum <= 10+9 )
 where r >= 10



Tom Kyte
November 27, 2003 - 10:51 am UTC

confirmed -- filed a bug, temporary workaround is to add "order by r"

we can see they lose the filter using dbms_xplan:


ops$tkyte@ORA920> delete from plan_table;
6 rows deleted.
 
ops$tkyte@ORA920> explain plan for
  2  select*
  3     from (select a.*,rownum r
  4             from (select /*+ first_rows(10) */ empno from scott.emp order by 1) a
  5     where rownum <= 19 )
  6  where r >= 10
  7  /
 
Explained.
 
ops$tkyte@ORA920> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
 
-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |    14 |   364 |     2  (50)|
|   1 |  VIEW                |             |    14 |   364 |            |
|*  2 |   COUNT STOPKEY      |             |       |       |            |
|   3 |    VIEW              |             |    14 |   182 |            |
|   4 |     INDEX FULL SCAN  | EMP_PK      |    14 |    42 |     2  (50)|
-------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - filter(ROWNUM<=19)
 
15 rows selected.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> delete from plan_table;
 
5 rows deleted.
 
ops$tkyte@ORA920> explain plan for
  2  select*
  3     from (select a.*,rownum r
  4             from (select /*+ first_rows(10) */ empno from scott.emp order by 1) a
  5     where rownum <= 19 )
  6  where r >= 10
  7  order by r
  8  /
 
Explained.
 
ops$tkyte@ORA920> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
 
-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |    14 |   364 |     3  (67)|
|   1 |  SORT ORDER BY       |             |    14 |   364 |     3  (67)|
|*  2 |   VIEW               |             |    14 |   364 |            |
|*  3 |    COUNT STOPKEY     |             |       |       |            |
|   4 |     VIEW             |             |    14 |   182 |            |
|   5 |      INDEX FULL SCAN | EMP_PK      |    14 |    42 |     2  (50)|
-------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("from$_subquery$_001"."R">=10)
   3 - filter(ROWNUM<=19)
 
17 rows selected.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> select*
  2     from (select a.*,rownum r
  3             from (select /*+ first_rows(10) */ empno from scott.emp order by 1) a
  4     where rownum <= 19 )
  5  where r >= 10
  6  order by r
  7  /
 
     EMPNO          R
---------- ----------
      7844         10
      7876         11
      7900         12
      7902         13
      7934         14
 
ops$tkyte@ORA920>
 

rows N-M

Stevef, November 28, 2003 - 6:11 am UTC

Gosh Tom, your momma sure raised you smart!
Great detective work!

Getting total rows..

Naveen, December 04, 2003 - 4:41 am UTC

Hi Tom:

The application we are developing should use pagination to show the results. The developers want me to get the total number of rows that the query returns so that they can display that many pages. (Say, if the total rows returned are 100 and the number of results to be displayed on each page is 10 rows, they can set up 10 pages to display the result set.) The requirement is that we have to display the page numbers as hyperlinks, so when the user clicks on page 3, we have to display rows 21-30.

To do this I have to first find the count of rows that the query returns and then fire the query to return the rows N through M. That is two I/O calls to the database and two queries to be parsed to display a page. Is there any workaround?

Thanks
Nav.

Tom Kyte
December 04, 2003 - 8:36 am UTC

I have a very very very very simple solution to this problem.

DON'T DO IT.

Your developers probably love www.google.com right?
they appreciate its speed, accuracy, usefulness.

All you need to do is tell them "use google as the gold standard for searching. DO WHAT IT DOES"

Google lies constantly. the hit count is never real. It tells you "here are the first 10 pages" -- but you'll find if you click on page 10, you'll be on page 7 (there wasn't any page 8, 9 or 10 -- they didn't know that)

google guesses. (i guess -- search on asktom, "approximately")

google is the gold standard -- just remember that.

In order to tell the end user "hey, there are 15 pages" you would have to run the entire query to completion on page one

and guess what, by the time page 1 is delivered to them (after waiting and waiting for it) there is a good chance their result set won't have 15 pages!!! (it is a database after all, people do write to it). they might have 16 or maybe 14, or maybe NONE or maybe lots more the next time they page up or down!!

google is the gold standard.

did you know, you'll never go past page 100 on google - try it, they won't let you.

Here is a short excerpt from my book "Effective Oracle By Design" where I talk about this very topic (pagination in a web environment)


<quote>
Keep in mind that people are impatient and have short attention spans. How many times have you gone past the tenth page on a search page on the Internet? When I do a Google (www.google.com) search that returns more hits than the number of hamburgers sold by McDonald's, I never go to the last page; in fact, I never get to page 11. By the time I've looked at the first five pages or so, I realize that I need to refine my search because this is too much data. Your end users will, believe it or not, do the same.


Some Advice on Web-based Searches with Pagination

My advice for handling web-based searches that you need to paginate through is to never provide an exact hit count. Use an estimate to tell the users about N hits. This is what I do on my asktom web site, for example. I use Oracle Text to index the content. Before I run a query, I ask Oracle Text for an estimate. You can do the same with your relational queries using EXPLAIN PLAN in Oracle8i and earlier, or by querying V$SQL_PLAN in Oracle9i and up.
You may want to tell the end users they got 1,032,231 hits, but the problem with that is twofold:

o It takes a long time to count that many hits. You need to run that ALL_ROWS type of query to the end to find that out! It is really slow.
o By the time you count the hits, in all probability (unless you are on a read-only database), the answer has already changed and you do not have that number of hits anymore!


My other advice for this type of application is to never provide a Last Page button or give the user more than ten pages at a time from which to choose. Look at the standard, www.google.com, and do what it does.

Follow those two pieces of advice, and your pagination worries are over.
</quote>




Thanks Tom..

Naveen, December 04, 2003 - 10:24 pm UTC

Hi Tom,

Got what you said. I'll try to convince my developers with this information. Day by day the admiration for you keeps growing.

Thank you

Nav.




Displaying Top N rows in 8.0.6

Russell, December 09, 2003 - 4:49 am UTC

G'day Tom,

On September 13, 2003 or thereabouts you left the following:
----------
you cannot use order by in a subquery in 8.0 so this technique doesn't apply.

you have to open the cursor.

fetch the first N rows and ignore them

then fetch the next M rows and keep them

close the cursor
that's it.

----------

I have an application where a set of grouped records comes to around 800 combinations. For the purposes of analysis, 80% of the work is in the top 20% of grouped entries, so most gains will be achieved by analysing the entries with the most records. As I am trying to do the majority of the grunt work in Oracle, parameters are passed by users to a procedure, with a ref cursor being output to a Crystal report.

One of the parameters I am inputting is TopN, hoping to return the grouped entries with the greatest record counts for the grouping needed.

I include this statement in a cursor, and loop through for 1 to TopN, appending the resulting Group Names to a varchar2 variable hoping to include the contents of this string in a subsequent where statement.

A possible example:

declare
    TopN      number := 3; -- return all records matching the group identifiers with the TopN most records
    counter   number := 0;
    vchString varchar2(200);
begin
    for i in (select dept, count(*) from employees
              where ....
              group by dept order by count(*) desc)
    loop
        exit when counter >= TopN;

        if counter > 0 then
            vchString := vchString || ',';
        end if;
        vchString := vchString || i.dept;
        -- or vchString := vchString || '''' || i.dept || ''''
        -- for columns containing varchar data....
        counter := counter + 1;
    end loop;

I then have a statement
Cursor is ....
select ......
from employees
where .....
AND DEPT in ( vchString);

end;

with the hope that the original cursor might return something like
DEPT COUNT(*)
30 8
20 7
19 6
10 5
15 3
7 2
1 1
4 1


and the returning cursor in the optimum world would therefore become something like
select ......
from employees
where .....
AND DEPT in ( 30,20,19);

Hence having to select, group, sum, and display the 12 (TopN) group entries instead of 800-ish.

The loop works and populates the varchar2 variable, but the contents of that variable don't seem to be expanded or included in the IN clause. As mentioned above, I am using Oracle database version 8.0.6 and, having read a number of threads on your site, don't think I can use the analytic functions included in 8.1 and above.

Please advise what my problem is, or if there is a better way to try and do what I am after.

Thanks in advance.
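
(A side note on the likely cause: a varchar2 variable referenced inside IN (...) is treated as one single value, not expanded into a list. A hedged workaround sketch follows -- it assumes native dynamic SQL, which needs 8.1 or later; on 8.0.6 the same idea would have to go through DBMS_SQL instead:)

declare
  type rc is ref cursor;
  c_out     rc;
  vchString varchar2(200) := '30,20,19';  -- as built by the loop above
begin
  -- concatenate the list into the statement text itself, so the parser
  -- sees a real IN list (only safe for trusted, numeric input)
  open c_out for
    'select * from employees where dept in (' || vchString || ')';
  -- fetch from c_out / hand it to the report as usual, then:
  close c_out;
end;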

Catastrophic Performance Degradation

Doh!, December 16, 2003 - 11:32 am UTC

Any ideas as to why the act of putting an outer "wrapper" on an inner query

select * from ( my_inner_query )

can cause the performance of a query to degrade by a factor of 3000?

First the innermost query:

SQL>     ( SELECT a.*, ROWNUM RECORDINDEX FROM
  2      ( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY,  gL.LayerName,
  3        gL.LayerAlias,  TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
  4       FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
  5        WHERE COUNTY.GEONAME = 'L123'
  6         AND  TOWNLAND.GEONAME LIKE 'BALL%'
  7         AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
  8         AND mME.ForeignID = TOWNLAND.GEOMETRYID
  9        AND  mME.TableName = 'TOWNLAND'
 10        AND gL.TableName = mME.TableName
 11        AND gL.LayerName = 'TOWNLAND'
 12      ORDER BY  TOWNLAND.GEONAME , COUNTY.GEONAME)
 13      a WHERE ROWNUM <= 10)
 14  /

10 rows selected.

Elapsed: 00:00:00.01
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=169 Card=2 Bytes=3214)
   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=169 Card=2 Bytes=3214)
   3    2       SORT (ORDER BY STOPKEY) (Cost=169 Card=2 Bytes=224)
   4    3         NESTED LOOPS (Cost=167 Card=2 Bytes=224)
   5    4           NESTED LOOPS (Cost=163 Card=2 Bytes=158)
   6    5             MERGE JOIN (CARTESIAN) (Cost=3 Card=1 Bytes=57)
   7    6               TABLE ACCESS (BY INDEX ROWID) OF 'GEOLAYER' (Cost=2 Card=1 Bytes=45)
   8    7                 INDEX (RANGE SCAN) OF 'GEOLAYER_LAYERNAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)
   9    6               BUFFER (SORT) (Cost=1 Card=1 Bytes=12)
  10    9                 TABLE ACCESS (BY INDEX ROWID) OF 'COUNTY' (Cost=1 Card=1 Bytes=12)
  11   10                   INDEX (RANGE SCAN) OF 'COUNTY_GEONAME_IDX'    (NON-UNIQUE)
  12    5             TABLE ACCESS (BY INDEX ROWID) OF 'TOWNLAND' (Cost=163 Card=1 Bytes=22)
  13   12               BITMAP CONVERSION (TO ROWIDS)
  14   13                 BITMAP AND
  15   14                   BITMAP CONVERSION (FROM ROWIDS)
  16   15                     INDEX (RANGE SCAN) OF 'TOWNLAND_COUNTYID_IDX' (NON-UNIQUE) (Cost=4 Card=1950)
  17   14                   BITMAP CONVERSION (FROM ROWIDS)
  18   17                     SORT (ORDER BY)
  19   18                       INDEX (RANGE SCAN) OF 'TOWNLAND_GEONAME_IDX' (NON-UNIQUE) (Cost=14 Card=1950)
  20    4           TABLE ACCESS (BY INDEX ROWID) OF 'MINMAXEXTENT' (Cost=2 Card=50698 Bytes=1673034)
  21   20             INDEX (UNIQUE SCAN) OF 'MINMAXEXT_UK' (UNIQUE) (  Cost=1 Card=4)

Statistics
----------------------------------------------------------
          0  recursive calls
          6  db block gets
        530  consistent gets
         21  physical reads
          0  redo size
       1458  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          1  sorts (disk)
         10  rows processed

Now the final outer wrapper:

SQL> SELECT a.* FROM
  2      ( SELECT a.*, ROWNUM RECORDINDEX FROM
  3      ( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY,  gL.LayerName,
  4        gL.LayerAlias,  TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
  5       FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
  6        WHERE COUNTY.GEONAME = 'L123'
  7         AND  TOWNLAND.GEONAME LIKE 'BALL%'
  8         AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
  9         AND mME.ForeignID = TOWNLAND.GEOMETRYID
 10        AND  mME.TableName = 'TOWNLAND'
 11        AND gL.TableName = mME.TableName
 12        AND gL.LayerName = 'TOWNLAND'
 13      ORDER BY  TOWNLAND.GEONAME , COUNTY.GEONAME)
 14      a WHERE ROWNUM <= 10) a
 15  /

10 rows selected.

Elapsed: 00:00:32.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=466 Card=2 Bytes=3240)

   1    0   VIEW (Cost=466 Card=2 Bytes=3240)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=466 Card=2 Bytes=3214)
   4    3         SORT (ORDER BY STOPKEY) (Cost=466 Card=2 Bytes=224)
   5    4           TABLE ACCESS (BY INDEX ROWID) OF 'TOWNLAND' (Cost=464 Card=1 Bytes=22)
   6    5             NESTED LOOPS (Cost=464 Card=2 Bytes=224)
   7    6               NESTED LOOPS (Cost=68 Card=722 Bytes=64980)
   8    7                 MERGE JOIN (CARTESIAN) (Cost=3 Card=1 Bytes=57)
   9    8                   TABLE ACCESS (BY INDEX ROWID) OF 'COUNTY'(Cost=2 Card=1 Bytes=12)
  10    9                     INDEX (RANGE SCAN) OF 'COUNTY_GEONAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)
  11    8                   BUFFER (SORT) (Cost=1 Card=1 Bytes=45)
  12   11                     TABLE ACCESS (BY INDEX ROWID) OF 'GEOLAYER' (Cost=1 Card=1 Bytes=45)
  13   12                       INDEX (RANGE SCAN) OF 'GEOLAYER_LAYERNAME_IDX' (NON-UNIQUE)
  14    7                 TABLE ACCESS (BY INDEX ROWID) OF 'MINMAXEXTENT' (Cost=65 Card=1 Bytes=33)
  15   14                   INDEX (RANGE SCAN) OF 'MINMAXEXT_UK' (UNIQUE) (Cost=43 Card=2112)
  16    6               BITMAP CONVERSION (TO ROWIDS)
  17   16                 BITMAP AND
  18   17                   BITMAP CONVERSION (FROM ROWIDS)
  19   18                     INDEX (RANGE SCAN) OF 'TOWNLAND_PK' (UNIQUE)
  20   17                   BITMAP CONVERSION (FROM ROWIDS)
  21   20                     INDEX (RANGE SCAN) OF 'TOWNLAND_COUNTYID_IDX' (NON-UNIQUE) (Cost=4 Card=12)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     254501  consistent gets
        847  physical reads
          0  redo size
       1458  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
         10  rows processed

 

Tom Kyte
December 16, 2003 - 1:46 pm UTC

if you push a first_rows hint into the innermost query -- what happens then? (no answer for why this is happening -- I don't know in this case -- for that, I suggest a tar, but let's try to find a way to work around the issue here)

Improvement

A reader, December 17, 2003 - 6:16 am UTC

Query elapsed time falls to about 1 second. Huge improvement but still not as snappy as the original query at 0.01 seconds!

  1    SELECT a.* FROM
  2           ( SELECT  a.*, ROWNUM RECORDINDEX FROM
  3           ( SELECT /*+ FIRST_ROWS */ 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY,  gL.LayerName,
  4             gL.LayerAlias,  TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
  5            FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
  6             WHERE COUNTY.GEONAME = 'L123'
  7              AND  TOWNLAND.GEONAME LIKE 'BALL%'
  8             AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
  9              AND mME.ForeignID = TOWNLAND.GEOMETRYID
 10            AND  mME.TableName = 'TOWNLAND'
 11           AND gL.TableName = mME.TableName
 12            AND gL.LayerName = 'TOWNLAND'
 13          ORDER BY  TOWNLAND.GEONAME , COUNTY.GEONAME)
 14*         a WHERE ROWNUM <= 10) a
SQL> /

10 rows selected.

Elapsed: 00:00:01.08

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=8578 Card=2 Bytes=3240)
   1    0   VIEW (Cost=8578 Card=2 Bytes=3240)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=8578 Card=2 Bytes=3214)
   4    3         SORT (ORDER BY STOPKEY) (Cost=8578 Card=2 Bytes=224)
   5    4           NESTED LOOPS (Cost=8576 Card=2 Bytes=224)
   6    5             MERGE JOIN (CARTESIAN) (Cost=8264 Card=156 Bytes=12324)
   7    6               NESTED LOOPS (Cost=8108 Card=156 Bytes=5304)
   8    7                 TABLE ACCESS (BY INDEX ROWID) OF 'TOWNLAND'(Cost=4052 Card=4056 Bytes=89232)
   9    8                   INDEX (RANGE SCAN) OF 'TOWNLAND_GEONAME_IDX' (NON-UNIQUE) (Cost=15 Card=4056)
  10    7                 TABLE ACCESS (BY INDEX ROWID) OF 'COUNTY' (Cost=1 Card=1 Bytes=12)
  11   10                   INDEX (UNIQUE SCAN) OF 'COUNTY_UK' (UNIQUE)
  12    6               BUFFER (SORT) (Cost=8263 Card=1 Bytes=45)
  13   12                 TABLE ACCESS (BY INDEX ROWID) OF 'GEOLAYER' (Cost=1 Card=1 Bytes=45)
  14   13                   INDEX (RANGE SCAN) OF 'GEOLAYER_LAYERNAME_IDX' (NON-UNIQUE)
  15    5             TABLE ACCESS (BY INDEX ROWID) OF 'MINMAXEXTENT'(Cost=2 Card=1 Bytes=33)
  16   15               INDEX (UNIQUE SCAN) OF 'MINMAXEXT_UK' (UNIQUE)(Cost=1 Card=1)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      10413  consistent gets
       4174  physical reads
          0  redo size
       1458  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
         10  rows processed
 

Tom Kyte
December 17, 2003 - 7:03 am UTC

do you have a tkprof? are the "estimations" in the autotrace anywhere near the "real numbers" in the tkprof? are the stats current and up to date?
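
(For reference, a minimal sketch of generating such a trace; the trace file name and location are placeholders:)

alter session set timed_statistics = true;
alter session set sql_trace = true;
-- run the query, then on the server:
--   tkprof <user_dump_dest>/ora_<spid>.trc report.prf sys=no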

A reader, December 17, 2003 - 9:53 am UTC

Stats are current.

Here we have the tkprof output, first with and then without the first_rows hint:

SELECT a.* FROM
( SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT /*+ FIRST_ROWS */ :"SYS_B_0" "FEATURENAME",
mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = :"SYS_B_1"
AND TOWNLAND.GEONAME LIKE :"SYS_B_2"
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = :"SYS_B_3"
AND gL.TableName = mME.TableName
AND gL.LayerName = :"SYS_B_4"
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= :"SYS_B_5") a

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 0
Fetch 3 0.68 11.24 8694 20432 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 7 0.68 11.25 8694 20432 0 10

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
0 VIEW (cr=10019 r=4443 w=0 time=9453637 us)
0 COUNT STOPKEY (cr=10019 r=4443 w=0 time=9453630 us)
0 VIEW (cr=10019 r=4443 w=0 time=9453627 us)
0 SORT ORDER BY STOPKEY (cr=10019 r=4443 w=0 time=9453621 us)
0 NESTED LOOPS (cr=10019 r=4443 w=0 time=9453566 us)
0 MERGE JOIN CARTESIAN (cr=10019 r=4443 w=0 time=9453562 us)
0 NESTED LOOPS (cr=10019 r=4443 w=0 time=9453555 us)
5011 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=5006 r=4438 w=0 time=9339026 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=0 w=0 time=30694 us)(object id 55399)
0 TABLE ACCESS BY INDEX ROWID COUNTY (cr=5013 r=5 w=0 time=84014 us)
5011 INDEX UNIQUE SCAN COUNTY_UK (cr=2 r=1 w=0 time=32598 us)(object id 55015)
0 BUFFER SORT (cr=0 r=0 w=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=0 r=0 w=0 time=0 us)
0 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=0 r=0 w=0 time=0 us)(object id 55930)
0 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=0 r=0 w=0 time=0 us)
0 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=0 r=0 w=0 time=0 us)(object id 55602)

********************************************************************************


SELECT a.* FROM
( SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT :"SYS_B_0" "FEATURENAME",
mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = :"SYS_B_1"
AND TOWNLAND.GEONAME LIKE :"SYS_B_2"
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = :"SYS_B_3"
AND gL.TableName = mME.TableName
AND gL.LayerName = :"SYS_B_4"
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= :"SYS_B_5") a

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 136.84 139.22 851 254502 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 136.84 139.22 851 254502 0 10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 VIEW (cr=254502 r=851 w=0 time=139222875 us)
10 COUNT STOPKEY (cr=254502 r=851 w=0 time=139222837 us)
10 VIEW (cr=254502 r=851 w=0 time=139222808 us)
10 SORT ORDER BY STOPKEY (cr=254502 r=851 w=0 time=139222779 us)
130 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=254502 r=851 w=0 time=139221108 us)
51817 NESTED LOOPS (cr=254226 r=695 w=0 time=138882960 us)
50698 NESTED LOOPS (cr=628 r=597 w=0 time=1651025 us)
1 MERGE JOIN CARTESIAN (cr=5 r=0 w=0 time=319 us)
1 TABLE ACCESS BY INDEX ROWID COUNTY (cr=3 r=0 w=0 time=127 us)
1 INDEX RANGE SCAN COUNTY_GEONAME_IDX (cr=1 r=0 w=0 time=56 us)(object id 55401)
1 BUFFER SORT (cr=2 r=0 w=0 time=96 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=31 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=18 us)(object id 55930)
50698 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=623 r=597 w=0 time=1548952 us)
50698 INDEX RANGE SCAN MINMAXEXT_UK (cr=154 r=137 w=0 time=592031 us)(object id 55602)
1118 BITMAP CONVERSION TO ROWIDS (cr=253598 r=98 w=0 time=136695508 us)
1118 BITMAP AND (cr=253598 r=98 w=0 time=136580794 us)
50698 BITMAP CONVERSION FROM ROWIDS (cr=50804 r=98 w=0 time=1338601 us)
50698 INDEX RANGE SCAN TOWNLAND_PK (cr=50804 r=98 w=0 time=960672 us)(object id 55131)
27108 BITMAP CONVERSION FROM ROWIDS (cr=202794 r=0 w=0 time=134967141 us)
56680364 INDEX RANGE SCAN TOWNLAND_COUNTYID_IDX (cr=202794 r=0 w=0 time=76815579 us)(object id 55416)





Tom Kyte
December 18, 2003 - 8:28 am UTC

I wanted to compare the first_rows to the one that is "fast", not the slow one.

additional tkprof

A reader, December 17, 2003 - 10:50 am UTC

here's the tkprof for the query without the outer wrapper:

SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT :"SYS_B_0" "FEATURENAME",
mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = :"SYS_B_1"
AND TOWNLAND.GEONAME LIKE :"SYS_B_2"
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = :"SYS_B_3"
AND gL.TableName = mME.TableName
AND gL.LayerName = :"SYS_B_4"
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= :"SYS_B_5"

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.03 0.08 21 530 6 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.03 0.08 21 530 6 10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 COUNT STOPKEY (cr=530 r=21 w=21 time=85530 us)
10 VIEW (cr=530 r=21 w=21 time=85489 us)
10 SORT ORDER BY STOPKEY (cr=530 r=21 w=21 time=85462 us)
130 NESTED LOOPS (cr=530 r=21 w=21 time=84925 us)
130 NESTED LOOPS (cr=138 r=21 w=21 time=82066 us)
1 MERGE JOIN CARTESIAN (cr=4 r=0 w=0 time=236 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=93 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=56 us)(object id 55930)
1 BUFFER SORT (cr=2 r=0 w=0 time=92 us)
1 TABLE ACCESS BY INDEX ROWID COUNTY (cr=2 r=0 w=0 time=38 us)
1 INDEX RANGE SCAN COUNTY_GEONAME_IDX (cr=1 r=0 w=0 time=19 us)(object id 55401)
130 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=134 r=21 w=21 time=81527 us)
130 BITMAP CONVERSION TO ROWIDS (cr=26 r=21 w=21 time=80386 us)
1 BITMAP AND (cr=26 r=21 w=21 time=80228 us)
1 BITMAP CONVERSION FROM ROWIDS (cr=5 r=0 w=0 time=2782 us)
1118 INDEX RANGE SCAN TOWNLAND_COUNTYID_IDX (cr=5 r=0 w=0 time=1551 us)(object id 55416)
1 BITMAP CONVERSION FROM ROWIDS (cr=21 r=21 w=21 time=77315 us)
5011 SORT ORDER BY (cr=21 r=21 w=21 time=72098 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=0 w=0 time=20016 us)(object id 55399)
130 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=392 r=0 w=0 time=2105 us)
130 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=262 r=0 w=0 time=1126 us)(object id 55602)

********************************************************************************

Tom Kyte
December 18, 2003 - 8:36 am UTC

wonder if it is a side effect of cursor sharing here -- hmm. The difference between the plans is that one is using b*tree-to-bitmap conversions, avoiding the table access by rowid (and that is what is causing the "slowdown" -- all of the IO to read that table a block at a time).

what happens to the plans if you turn off cursor sharing for a minute (alter session set cursor_sharing=exact)? just curious at this point.

A reader, December 18, 2003 - 11:15 am UTC

Attached:

First the fastest and then the slower with first_rows hint:



alter session set cursor_sharing=exact

SELECT a.*, ROWNUM RECORDINDEX FROM
( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = 'L123'
AND TOWNLAND.GEONAME LIKE 'BALL%'
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = 'TOWNLAND'
AND gL.TableName = mME.TableName
AND gL.LayerName = 'TOWNLAND'
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= 10

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.40 0.41 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.09 0.28 120 530 6 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.50 0.69 120 530 6 10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 COUNT STOPKEY (cr=530 r=120 w=21 time=284532 us)
10 VIEW (cr=530 r=120 w=21 time=284494 us)
10 SORT ORDER BY STOPKEY (cr=530 r=120 w=21 time=284464 us)
130 NESTED LOOPS (cr=530 r=120 w=21 time=283692 us)
130 NESTED LOOPS (cr=138 r=119 w=21 time=270656 us)
1 MERGE JOIN CARTESIAN (cr=4 r=1 w=0 time=11295 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=89 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=52 us)(object id 55930)
1 BUFFER SORT (cr=2 r=1 w=0 time=11101 us)
1 TABLE ACCESS BY INDEX ROWID COUNTY (cr=2 r=1 w=0 time=11002 us)
1 INDEX RANGE SCAN COUNTY_GEONAME_IDX (cr=1 r=1 w=0 time=10963 us)(object id 55401)
130 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=134 r=118 w=21 time=258993 us)
130 BITMAP CONVERSION TO ROWIDS (cr=26 r=40 w=21 time=138838 us)
1 BITMAP AND (cr=26 r=40 w=21 time=138608 us)
1 BITMAP CONVERSION FROM ROWIDS (cr=5 r=0 w=0 time=2838 us)
1118 INDEX RANGE SCAN TOWNLAND_COUNTYID_IDX (cr=5 r=0 w=0 time=1601 us)(object id 55416)
1 BITMAP CONVERSION FROM ROWIDS (cr=21 r=40 w=21 time=135671 us)
5011 SORT ORDER BY (cr=21 r=40 w=21 time=130355 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=19 w=0 time=78502 us)(object id 55399)
130 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=392 r=1 w=0 time=11953 us)
130 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=262 r=0 w=0 time=1625 us)(object id 55602)

********************************************************************************


SELECT a.* FROM
( SELECT /*+ first_rows */ a.*, ROWNUM RECORDINDEX FROM
( SELECT 'Townland Boundary' "FEATURENAME", mME.MinX, mME.MaxX, mME.MinY, mME.MaxY, gL.LayerName,
gL.LayerAlias, TOWNLAND.GEONAME COLUMN1 , COUNTY.GEONAME COLUMN2
FROM COUNTY ,TOWNLAND , MinMaxExtent mME, GeoLayer gL
WHERE COUNTY.GEONAME = 'L123'
AND TOWNLAND.GEONAME LIKE 'BALL%'
AND TOWNLAND.COUNTYID = COUNTY.COUNTYID
AND mME.ForeignID = TOWNLAND.GEOMETRYID
AND mME.TableName = 'TOWNLAND'
AND gL.TableName = mME.TableName
AND gL.LayerName = 'TOWNLAND'
ORDER BY TOWNLAND.GEONAME , COUNTY.GEONAME)
a WHERE ROWNUM <= 10) a

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.56 12.38 3889 10413 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.59 12.41 3889 10413 0 10

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
10 VIEW (cr=10413 r=3889 w=0 time=12388676 us)
10 COUNT STOPKEY (cr=10413 r=3889 w=0 time=12388629 us)
10 VIEW (cr=10413 r=3889 w=0 time=12388592 us)
10 SORT ORDER BY STOPKEY (cr=10413 r=3889 w=0 time=12388562 us)
130 NESTED LOOPS (cr=10413 r=3889 w=0 time=12387024 us)
130 MERGE JOIN CARTESIAN (cr=10021 r=3888 w=0 time=12380531 us)
130 NESTED LOOPS (cr=10019 r=3888 w=0 time=12377287 us)
5011 TABLE ACCESS BY INDEX ROWID TOWNLAND (cr=5006 r=3888 w=0 time=12252430 us)
5011 INDEX RANGE SCAN TOWNLAND_GEONAME_IDX (cr=21 r=15 w=0 time=41073 us)(object id 55399)
130 TABLE ACCESS BY INDEX ROWID COUNTY (cr=5013 r=0 w=0 time=88160 us)
5011 INDEX UNIQUE SCAN COUNTY_UK (cr=2 r=0 w=0 time=28698 us)(object id 55015)
130 BUFFER SORT (cr=2 r=0 w=0 time=1745 us)
1 TABLE ACCESS BY INDEX ROWID GEOLAYER (cr=2 r=0 w=0 time=38 us)
1 INDEX RANGE SCAN GEOLAYER_LAYERNAME_IDX (cr=1 r=0 w=0 time=25 us)(object id 55930)
130 TABLE ACCESS BY INDEX ROWID MINMAXEXTENT (cr=392 r=1 w=0 time=4649 us)
130 INDEX UNIQUE SCAN MINMAXEXT_UK (cr=262 r=0 w=0 time=2697 us)(object id 55602)

********************************************************************************

Re: Catastrophic Performance Degradation

T Truong, February 11, 2004 - 5:39 pm UTC

Mr. Kyte,
We are having the same performance problem as reviewer Doh!

The following query (provided in your first post of this thread) worked perfectly prior to our database upgrade from 8.1.7 to 9.2.0.

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/

After the upgrade to 9.2.0, we have the performance problem with the above query.

Please continue this thread to determine the cause.

Best Regards,


Tom Kyte
February 11, 2004 - 6:51 pm UTC

how about your example? your query, your 8i tkprof and your 9i one as well

Re: Catastrophic Performance Degradation

T Truong, February 11, 2004 - 8:24 pm UTC

Mr. Kyte,
Thank you for your prompt response.

As a developer, I don't have access to the tkprof utility to check out the query statistics, and we're not getting much time from our DBAs so far to run tkprof, though we will be getting some of their time soon (hopefully next week).

So far, we know that if we set the init parameter OPTIMIZER_FEATURES_ENABLE to 8.1.7, then the query runs just as fast as it did prior to the database upgrade.
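
(For reference, that workaround can also be scoped to a single session while investigating -- a sketch:)

alter session set optimizer_features_enable = '8.1.7';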

We don't have any other 8i sandbox to regenerate the explain plan for the query, but here's the query and its explain plan in our only 9i (9.2.0) sandbox:

select
       x.ccbnumber
      ,x.chgtype
      ,x.pkgid
      ,x.project_status
      ,x.title
      ,x.projleadname
 from (
    select a.*, rownum rnum
     from (
           select
                  pkg_info.chgty_code || pkg_info.pkg_seq_id ccbnumber
                 ,pkg_info.chgty_code chgtype
                 ,pkg_info.pkg_seq_id pkgid
                 ,(
                   select decode(pkg_info.sty_code,'INPROCESS','In-Process'
                                                  ,'CREATED','Created'
                                                  ,decode(
                                                          min(
                                                              decode(pvs.sty_code,'COMPLETE',10
                                                                                 ,'CANCEL',20
                                                                                 ,'APPROVE',30
                                                                                 ,'APPROVEDEF',30
                                                                                 ,'INPROCESS',40
                                                                                 ,'DISAPPROVE',50
                                                                                 ,'DISAPPRXA',50
                                                                                 ,'VOID',60
                                                                                 ,'CREATED',70
                                                                                 ,null
                                                                    )
                                                             ),10,'Complete'
                                                              ,20,'Cancelled'
                                                              ,30,'Approved'
                                                              ,40,'In-Process'
                                                              ,50,'Disapproved'
                                                              ,50,'Disapproved XA'
                                                              ,60,'Void'
                                                              ,70,'Created'
                                                         )
                                ) project_status
                     from cms_pkg_ver_statuses pvs
                         ,cms_pkg_vers pv
                    where pv.pkg_seq_id = pkg_info.pkg_seq_id
                      and pv.pkg_seq_id = pvs.pkgver_pkg_seq_id
                      and pv.seq_num = pvs.pkgver_seq_num
                      and pvs.sty_scty_dp_code = 'PKG'
                      and pvs.sty_scty_code = 'STATE'
                      and pvs.create_date =
                             (select max(create_date)
                                from cms_pkg_ver_statuses
                               where pkgver_pkg_seq_id = pvs.pkgver_pkg_seq_id
                                 and pkgver_seq_num = pvs.pkgver_seq_num
                                 and sty_scty_dp_code = 'PKG'
                                 and sty_scty_code = 'STATE'
                             )
                      and pvs.create_date =
                             (select max(create_date)
                                from cms_pkg_ver_statuses a
                               where a.pkgver_pkg_seq_id = pvs.pkgver_pkg_seq_id
                                 and a.pkgver_seq_num = pvs.pkgver_seq_num
                                 and a.sty_scty_dp_code = 'PKG'
                                 and a.sty_scty_code = 'STATE'
                                 and a.create_date =
                                        (select max(create_date)
                                           from cms_pkg_ver_statuses
                                          where pkgver_pkg_seq_id = a.pkgver_pkg_seq_id
                                            and pkgver_seq_num = a.pkgver_seq_num
                                            and sty_scty_dp_code = 'PKG'
                                            and sty_scty_code = 'STATE'
                                        )
                             )
                  ) project_status
                 ,pkg_info.title title
                 ,emp.user_name projleadname
             from pit_pkg_info pkg_info
                 ,emp_person emp
            where pkg_info.projmgr_emp_employee_num = emp.emp_no(+)
              and pkg_info.title like '%AIR%'
            order by pkgid
          ) a
     where rownum <= 100
      ) x
where x.rnum >= 51
;


OPERATION                   OPTIONS         OBJECT_NAME         COST POSITION
--------------------------- --------------- ----------------- ------ --------
SELECT STATEMENT                                                   4        4
  VIEW                                                             4        1
    COUNT                   STOPKEY                                         1
      VIEW                                                         4        1
        NESTED LOOPS        OUTER                                  4        1
          NESTED LOOPS      OUTER                                  3        1
            TABLE ACCESS    BY INDEX ROWID  PIT_PKG_INFO           3        1
              INDEX         RANGE SCAN      PIT_PKG_PKGVER_I       2        1
            INDEX           UNIQUE SCAN     STY_PK                          2
          TABLE ACCESS      BY INDEX ROWID  EMP_PERSON             1        2
            INDEX           UNIQUE SCAN     SYS_C001839                     1

11 rows selected.

Following are the 9i init parameters:

SQL> show parameters
O7_DICTIONARY_ACCESSIBILITY          boolean     FALSE                          
_trace_files_public                  boolean     TRUE                           
active_instance_count                integer                                    
aq_tm_processes                      integer     0                              
archive_lag_target                   integer     0                              
audit_file_dest                      string      ?/rdbms/audit                  
audit_sys_operations                 boolean     FALSE                          
audit_trail                          string      TRUE                           
background_core_dump                 string      partial                        
background_dump_dest                 string      /u01/app/oracle/admin/U50DAMC/ 
                                                 bdump                          
backup_tape_io_slaves                boolean     FALSE                          
bitmap_merge_area_size               integer     1048576                        
blank_trimming                       boolean     FALSE                          
buffer_pool_keep                     string                                     
buffer_pool_recycle                  string                                     
circuits                             integer     0                              
cluster_database                     boolean     FALSE                          
cluster_database_instances           integer     1                              
cluster_interconnects                string                                     
commit_point_strength                integer     1                              
compatible                           string      9.2.0.0                        
control_file_record_keep_time        integer     3                              
control_files                        string      /np70/oradata/U50DAMC/cr1/cont 
                                                 rol01.ctl, /np70/oradata/U50DA 
                                                 MC/cr2/control02.ctl, /np70/or 
                                                 adata/U50DAMC/cr3/control03.ct 
                                                 l                              
core_dump_dest                       string      /u01/app/oracle/admin/U50DAMC/ 
                                                 cdump                          
cpu_count                            integer     4                              
create_bitmap_area_size              integer     8388608                        
cursor_sharing                       string      EXACT                          
cursor_space_for_time                boolean     FALSE                          
db_16k_cache_size                    big integer 0                              
db_2k_cache_size                     big integer 0                              
db_32k_cache_size                    big integer 0                              
db_4k_cache_size                     big integer 0                              
db_8k_cache_size                     big integer 0                              
db_block_buffers                     integer     6000                           
db_block_checking                    boolean     FALSE                          
db_block_checksum                    boolean     TRUE                           
db_block_size                        integer     8192                           
db_cache_advice                      string      OFF                            
db_cache_size                        big integer 0                              
db_create_file_dest                  string                                     
db_create_online_log_dest_1          string                                     
db_create_online_log_dest_2          string                                     
db_create_online_log_dest_3          string                                     
db_create_online_log_dest_4          string                                     
db_create_online_log_dest_5          string                                     
db_domain                            string      lgb.ams.boeing.com             
db_file_multiblock_read_count        integer     8                              
db_file_name_convert                 string                                     
db_files                             integer     1024                           
db_keep_cache_size                   big integer 0                              
db_name                              string      U50DAMC                        
db_recycle_cache_size                big integer 0                              
db_writer_processes                  integer     4                              
dblink_encrypt_login                 boolean     FALSE                          
dbwr_io_slaves                       integer     0                              
dg_broker_config_file1               string      ?/dbs/dr1@.dat                 
dg_broker_config_file2               string      ?/dbs/dr2@.dat                 
dg_broker_start                      boolean     FALSE                          
disk_asynch_io                       boolean     TRUE                           
dispatchers                          string                                     
distributed_lock_timeout             integer     60                             
dml_locks                            integer     800                            
drs_start                            boolean     FALSE                          
enqueue_resources                    integer     2389                           
event                                string                                     
fal_client                           string                                     
fal_server                           string                                     
fast_start_io_target                 integer     0                              
fast_start_mttr_target               integer     0                              
fast_start_parallel_rollback         string      LOW                            
file_mapping                         boolean     FALSE                          
filesystemio_options                 string      asynch                         
fixed_date                           string                                     
gc_files_to_locks                    string                                     
global_context_pool_size             string                                     
global_names                         boolean     FALSE                          
hash_area_size                       integer     10000000                       
hash_join_enabled                    boolean     TRUE                           
hi_shared_memory_address             integer     0                              
hpux_sched_noage                     integer     0                              
hs_autoregister                      boolean     TRUE                           
ifile                                file                                       
instance_groups                      string                                     
instance_name                        string      U50DAMC                        
instance_number                      integer     0                              
java_max_sessionspace_size           integer     0                              
java_pool_size                       big integer 50331648                       
java_soft_sessionspace_limit         integer     0                              
job_queue_processes                  integer     4                              
large_pool_size                      big integer 16777216                       
license_max_sessions                 integer     0                              
license_max_users                    integer     0                              
license_sessions_warning             integer     0                              
local_listener                       string                                     
lock_name_space                      string                                     
lock_sga                             boolean     FALSE                          
log_archive_dest                     string                                     
log_archive_dest_1                   string      location=/np70/oradata/U50DAMC 
                                                 /arch MANDATORY REOPEN=60      
log_archive_dest_10                  string                                     
log_archive_dest_2                   string      location=/u01/app/oracle/admin 
                                                 /altarch/U50DAMC OPTIONAL      
log_archive_dest_3                   string                                     
log_archive_dest_4                   string                                     
log_archive_dest_5                   string                                     
log_archive_dest_6                   string                                     
log_archive_dest_7                   string                                     
log_archive_dest_8                   string                                     
log_archive_dest_9                   string                                     
log_archive_dest_state_1             string      enable                         
log_archive_dest_state_10            string      enable                         
log_archive_dest_state_2             string      defer                          
log_archive_dest_state_3             string      defer                          
log_archive_dest_state_4             string      defer                          
log_archive_dest_state_5             string      defer                          
log_archive_dest_state_6             string      enable                         
log_archive_dest_state_7             string      enable                         
log_archive_dest_state_8             string      enable                         
log_archive_dest_state_9             string      enable                         
log_archive_duplex_dest              string                                     
log_archive_format                   string      U50DAMC_%T_%S.ARC              
log_archive_max_processes            integer     2                              
log_archive_min_succeed_dest         integer     1                              
log_archive_start                    boolean     TRUE                           
log_archive_trace                    integer     0                              
log_buffer                           integer     1048576                        
log_checkpoint_interval              integer     10000                          
log_checkpoint_timeout               integer     1800                           
log_checkpoints_to_alert             boolean     FALSE                          
log_file_name_convert                string                                     
log_parallelism                      integer     1                              
logmnr_max_persistent_sessions       integer     1                              
max_commit_propagation_delay         integer     700                            
max_dispatchers                      integer     5                              
max_dump_file_size                   string      10240K                         
max_enabled_roles                    integer     148                            
max_rollback_segments                integer     40                             
max_shared_servers                   integer     20                             
mts_circuits                         integer     0                              
mts_dispatchers                      string                                     
mts_listener_address                 string                                     
mts_max_dispatchers                  integer     5                              
mts_max_servers                      integer     20                             
mts_multiple_listeners               boolean     FALSE                          
mts_servers                          integer     0                              
mts_service                          string      U50DAMC                        
mts_sessions                         integer     0                              
nls_calendar                         string                                     
nls_comp                             string                                     
nls_currency                         string                                     
nls_date_format                      string                                     
nls_date_language                    string                                     
nls_dual_currency                    string                                     
nls_iso_currency                     string                                     
nls_language                         string      AMERICAN                       
nls_length_semantics                 string      BYTE                           
nls_nchar_conv_excp                  string      FALSE                          
nls_numeric_characters               string                                     
nls_sort                             string                                     
nls_territory                        string      AMERICA                        
nls_time_format                      string                                     
nls_time_tz_format                   string                                     
nls_timestamp_format                 string                                     
nls_timestamp_tz_format              string                                     
object_cache_max_size_percent        integer     10                             
object_cache_optimal_size            integer     102400                         
olap_page_pool_size                  integer     33554432                       
open_cursors                         integer     500                            
open_links                           integer     100                            
open_links_per_instance              integer     4                              
optimizer_dynamic_sampling           integer     0                              
optimizer_features_enable            string      8.1.7                          
optimizer_index_caching              integer     0                              
optimizer_index_cost_adj             integer     100                            
optimizer_max_permutations           integer     80000                          
optimizer_mode                       string      CHOOSE                         
oracle_trace_collection_name         string                                     
oracle_trace_collection_path         string      ?/otrace/admin/cdf             
oracle_trace_collection_size         integer     5242880                        
oracle_trace_enable                  boolean     FALSE                          
oracle_trace_facility_name           string      oracled                        
oracle_trace_facility_path           string      ?/otrace/admin/fdf             
os_authent_prefix                    string      ops_                           
os_roles                             boolean     FALSE                          
parallel_adaptive_multi_user         boolean     FALSE                          
parallel_automatic_tuning            boolean     FALSE                          
parallel_execution_message_size      integer     2152                           
parallel_instance_group              string                                     
parallel_max_servers                 integer     5                              
parallel_min_percent                 integer     0                              
parallel_min_servers                 integer     0                              
parallel_server                      boolean     FALSE                          
parallel_server_instances            integer     1                              
parallel_threads_per_cpu             integer     2                              
partition_view_enabled               boolean     FALSE                          
pga_aggregate_target                 big integer 25165824                       
plsql_compiler_flags                 string      INTERPRETED                    
plsql_native_c_compiler              string                                     
plsql_native_library_dir             string                                     
plsql_native_library_subdir_count    integer     0                              
plsql_native_linker                  string                                     
plsql_native_make_file_name          string                                     
plsql_native_make_utility            string                                     
plsql_v2_compatibility               boolean     FALSE                          
pre_page_sga                         boolean     FALSE                          
processes                            integer     600                            
query_rewrite_enabled                string      false                          
query_rewrite_integrity              string      enforced                       
rdbms_server_dn                      string                                     
read_only_open_delayed               boolean     FALSE                          
recovery_parallelism                 integer     0                              
remote_archive_enable                string      true                           
remote_dependencies_mode             string      TIMESTAMP                      
remote_listener                      string                                     
remote_login_passwordfile            string      EXCLUSIVE                      
remote_os_authent                    boolean     FALSE                          
remote_os_roles                      boolean     FALSE                          
replication_dependency_tracking      boolean     TRUE                           
resource_limit                       boolean     FALSE                          
resource_manager_plan                string                                     
rollback_segments                    string      r01, r02, r03, r04, r05, r06,  
                                                 r07, r08                       
row_locking                          string      always                         
serial_reuse                         string      DISABLE                        
serializable                         boolean     FALSE                          
service_names                        string      U50DAMC.lgb.ams.boeing.com     
session_cached_cursors               integer     0                              
session_max_open_files               integer     10                             
sessions                             integer     665                            
sga_max_size                         big integer 386756664                      
shadow_core_dump                     string      partial                        
shared_memory_address                integer     0                              
shared_pool_reserved_size            big integer 10066329                       
shared_pool_size                     big integer 201326592                      
shared_server_sessions               integer     0                              
shared_servers                       integer     0                              
sort_area_retained_size              integer     5000000                        
sort_area_size                       integer     5000000                        
spfile                               string                                     
sql92_security                       boolean     FALSE                          
sql_trace                            boolean     FALSE                          
sql_version                          string      NATIVE                         
standby_archive_dest                 string      ?/dbs/arch                     
standby_file_management              string      MANUAL                         
star_transformation_enabled          string      FALSE                          
statistics_level                     string      TYPICAL                        
tape_asynch_io                       boolean     TRUE                           
thread                               integer     0                              
timed_os_statistics                  integer     0                              
timed_statistics                     boolean     TRUE                           
trace_enabled                        boolean     TRUE                           
tracefile_identifier                 string                                     
transaction_auditing                 boolean     TRUE                           
transactions                         integer     200                            
transactions_per_rollback_segment    integer     5                              
undo_management                      string      MANUAL                         
undo_retention                       integer     900                            
undo_suppress_errors                 boolean     FALSE                          
undo_tablespace                      string                                     
use_indirect_data_buffers            boolean     FALSE                          
user_dump_dest                       string      /u01/app/oracle/admin/U50DAMC/ 
                                                 udump                          
utl_file_dir                         string                                     
workarea_size_policy                 string      AUTO                           

Hope you can spot something in these.

Best Regards,
Thomas
 

Tom Kyte
February 12, 2004 - 8:31 am UTC

I don't like your DBAs then. Really, they are preventing everyone from doing *their job*. arg.....


anyway, that plan "looks dandy" -- it looks like it would get first rows first.

We really need to "compare" plans.

Can you at least get an autotrace traceonly explain out of 8i (or at least an explain plan)

can you tell me "how fast it was in 8i" and "how slow it is in 9i", and are the machines you are testing on even remotely similar?
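
(A sketch of capturing just the plan from SQL*Plus, without fetching any rows:)

SQL> set autotrace traceonly explain
SQL> -- now run the query; only the execution plan is displayed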

Very handy

Sajid Anwar, March 08, 2004 - 11:29 am UTC

Hi Tom,
Just a simple one about your SPECIAL QUERY for paging. I am using your method.

select *
from ( select a.*, rownum rnum
from ( select * from t ) a
where rownum <= 5
) b
where rnum >= 2;

This gives me everything plus one extra column, rnum, that I don't want. How do I get rid of it in the same query?


Many thanks in advance.

Regards,
Sajid


Tom Kyte
March 08, 2004 - 2:05 pm UTC

besides just selecting the columns you want in the outer wrapper? nothing


select a, b, c, d
from ......

instead of select *
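
For example, a minimal sketch assuming made-up columns x, y and z of a table t:

select x, y, z       -- rnum is simply not in the select list
from ( select a.*, rownum rnum
       from ( select x, y, z from t order by x ) a
       where rownum <= 5
     ) b
where rnum >= 2;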

Regarding post from "T Truong from Long Beach, CA"

Matt, March 08, 2004 - 6:39 pm UTC

Tom,

There are various ways of getting access to trace files on the server, some of which have been documented on this site.

However, once the developer has the raw trace they need access to TKPROF. As far as I am aware this is not part of the default Oracle client distribution. Installing the DB on each desktop would be nice, but an administrator's nightmare.

Are there any licensing restrictions that might prevent copying the required libraries and executables for tkprof and placing these on a desktop for developer use? I tested this (though not exhaustively) and it appears to work.

Do you see any problems with this approach?

Cheers,


Tom Kyte
March 09, 2004 - 10:50 am UTC

why an admin's nightmare? developers cannot install software?


i'm not aware of any issues with getting people access to the code they have licensed.

how do I display rows without using the /*+ first_rows */ hint?

A reader, March 09, 2004 - 11:35 am UTC

Hi Tom, we use several applications to browse the data from Oracle; one of them is TOAD.

They show part of the data as soon as it is available on the screen and don't wait for the complete result set to be returned.


I checked v$sql and v$sqlarea; there isn't any statement with the first_rows hint. They show the exact SQL statement that we, the user, passed. How can I do that in my custom application? I don't want to wait for 4k records and then show them to the user. I need first_rows-hint functionality without changing the statement. Possible? How? Is paging involved?

and yes we are using Java 1.4 + classes12.jar
and to display results, we use JTable





Tom Kyte
March 09, 2004 - 3:26 pm UTC

you can alter your session to set the optimizer goal if you like.
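
(For example -- a sketch; in 9i and later there are also FIRST_ROWS_N variants such as first_rows_10:)

alter session set optimizer_mode = first_rows;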

Response to Tom

Matt, March 09, 2004 - 5:00 pm UTC

>> Why an admin's nightmare? developers cannot install software?

I'm not intending to create a developer/administrator divide here -- software development is a team effort. Of course developers can install software; however, I would prefer that everyone runs the same patched version of the DB, and I see managing this when there are multiple desktop DBs as problematic.

>>i'm not aware of any issues with getting people access to the code they have licensed.

This is the issue, I guess. Is a patched desktop version of the DB that is used for development "licensed for development" (i.e., free), or is there a license payment required?

I understand that the "standalone" tkprof might fall into a different category. But if a patched desktop version may be licensed for development, I don't see an issue.

Ta.

Tom Kyte
March 09, 2004 - 10:41 pm UTC

i don't know what you mean by a patched desktop version?

Very Useful

shabana, March 16, 2004 - 5:33 am UTC

I had problems populating large result sets. The query helped me fetch the needed rows while keeping a page count from the web tier.

"order by"

A reader, April 01, 2004 - 6:45 pm UTC

hi tom
"select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/
"

Doesn't the order by force you to go through all of the rows anyway -- which is pretty much the same overhead as a count(*), thus defeating the purpose? And I think in most cases, users do want to sort their results in some order. (Also, in the order by case, the FIRST_ROWS hint is useless...)
thanx!

Tom Kyte
April 02, 2004 - 9:49 am UTC

No it doesn't


think "index"


also, using rownum with the order by trick above has special top-n optimizations, so even in the case where it would have to get the entire result set -- it is much more efficient than asking Oracle to generate the result set and just fetching the first N rows (using rownum kicks in a special top-n optimization)


This is NOT like count(*). count(*) is something we can definitely 100% live without and would force the entire result set to be computed (most of the times, we don't need to get the entire result set here!)

ORDER BY is something you cannot live without -- we definitely 100% need it in many cases.

thanx!

A reader, April 02, 2004 - 10:56 am UTC

I tried it out myself and I am getting the results
that you say. I believe the "count stopkey" indicates
the rownum-based top-n optimization you talked
about (t1 is a copy of all_objects with some 30,000
rows; one index on all the columns of t1 being selected.)
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY */
select owner, object_name, object_type, rownum
from t1
where owner = 'PUBLIC'
order by owner, object_name, object_type
) a
where rownum <= 10
)
where rnum >= 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.02 0 0 0 0
Execute 1 0.01 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.03 0.02 0 4 0 10

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 64

Rows Row Source Operation
------- ---------------------------------------------------
10 VIEW (cr=4 pr=0 pw=0 time=486 us)
10 COUNT STOPKEY (cr=4 pr=0 pw=0 time=427 us)
10 VIEW (cr=4 pr=0 pw=0 time=373 us)
10 COUNT (cr=4 pr=0 pw=0 time=309 us)
10 INDEX RANGE SCAN OBJ#(56907) (cr=4 pr=0 pw=0 time=270 us)(object id 56907)

Thanx!!!!



Tom Kyte
April 02, 2004 - 1:31 pm UTC

it also applies in sorting unindexed stuff as well (full details in expert one on one)

basically if you

select *
from ( select * from really_huge_table order by something_not_indexed )
where rownum < 10

oracle will get the first record and put it into slot 1 in a result set

it'll get the second record and if that is less than the one in slot 1, it'll push that one down to slot 2 and put this one in slot 1; else this one goes into slot 2

and so on for the first 10 records -- we now have 10 sorted records -- now it'll get the 11th and either

a) the 11th exceeds the one in the 10th slot -- this new record is discarded
b) the 11th is less than one of the existing 10 -- the current 10th goes away and this gets stuffed in there.


lots more efficient to sort the top N, than it would be to sort the entire result set into temp, merge it all back together -- just to fetch the first 10...

wow!

A reader, April 02, 2004 - 1:59 pm UTC

awesome - thanx a lot!!! ( not sure if you are on
vacation or is this your idea of vacation?;))

Regards

i think

A reader, April 02, 2004 - 2:03 pm UTC

"(full details in expert one
on one)"
You meant effective oracle by design (page 502)
thanx!



Tom Kyte
April 02, 2004 - 3:21 pm UTC

doh, you are right.

so here is the second test (without indexes)

A reader, April 02, 2004 - 2:45 pm UTC

thought I would share with others since I ran it anyway.
------schema
spool s3
set echo on
drop table t2;
create table t2
as select owner, object_name, object_type
from all_objects;
insert into t2
select * from t2;
commit;

analyze table t2 compute statistics for table for all indexes for all
indexed columns;
-------------------
notice we have no indexes created
--------- selects ran - one with rownum and one without

set termout off
alter session set timed_statistics=true;
alter session set events '10046 trace name context forever, level 12';
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM ABSENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
);

select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM PRESENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
where rownum <= 10
)
where rnum >= 1;

--------tkprof results----
-- FIRST CASE - ROWNUM ABSENT
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM ABSENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2528 2.02 4.10 515 500 7 37904
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2530 2.02 4.10 515 500 7 37904

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 64

Rows Row Source Operation
------- ---------------------------------------------------
37904 VIEW (cr=500 pr=515 pw=515 time=3587213 us)
37904 COUNT (cr=500 pr=515 pw=515 time=3413040 us)
37904 VIEW (cr=500 pr=515 pw=515 time=3295049 us)
37904 SORT ORDER BY (cr=500 pr=515 pw=515 time=3144606 us)
37904 COUNT (cr=500 pr=0 pw=0 time=613868 us)
37904 TABLE ACCESS FULL T2 (cr=500 pr=0 pw=0 time=281939 us)

--- second case ROWNUM present
select * from
(
select /*+ FIRST_ROWS */ a.*, rownum rnum
from
(
/* OUR QUERY ROWNUM PRESENT */
select owner, object_name, object_type, rownum
from t2
where owner = 'PUBLIC'
order by object_name
) a
where rownum <= 10
)
where rnum >= 1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.33 0.49 0 500 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.33 0.49 0 500 0 10

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 64

Rows Row Source Operation
------- ---------------------------------------------------
10 VIEW (cr=500 pr=0 pw=0 time=495126 us)
10 COUNT STOPKEY (cr=500 pr=0 pw=0 time=494955 us)
10 VIEW (cr=500 pr=0 pw=0 time=494864 us)
10 SORT ORDER BY STOPKEY (cr=500 pr=0 pw=0 time=494817 us)
37904 COUNT (cr=500 pr=0 pw=0 time=262446 us)
37904 TABLE ACCESS FULL OBJ#(56928) (cr=500 pr=0 pw=0 time=129898 us)


Elapsed time in first case: 4.10 seconds
Elapsed time in second case (what would be our query) : 0.49 seconds

the second option is 8 times faster.


A reader, April 05, 2004 - 1:01 pm UTC

Invaluable information.

thank you Tom.

Different question

Roughing it, April 14, 2004 - 6:16 pm UTC

I have a table with time and place,
where the place is a single string with city,stateAbbrev
like SeattleWA
It is indexed by time and has about 10M records.

These queries take no time at all as expected:
select min(time) from Time_Place;
select max(time) from Time_Place;

But if I do:
select min(time), max(time) from Time_Place;
it takes a looooooong time...

What I really want is:
select max(time) from Time_Place
where place like '%CA';

If it started searching at the end, it would find it very quickly. It's not finding it quickly. It's appearing to search all the records.

Is there a way to speed this up?
Or must I keep a list of last times per state and do
select max(time) from Time_Place
where time>=(select last time from Last_per_state
where state='CA')
and place like '%CA';

Thanks,
-r

Tom Kyte
April 15, 2004 - 8:07 am UTC

Ok, two things here -- select min/max and how to make that query on data stored "not correctly" (it should have been two fields!!!) go fast.

max/min first. big_table is 1,000,000 rows on my system, if we:

big_table@ORA9IR2> set autotrace on
big_table@ORA9IR2> select min(created) from big_table;

MIN(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=7)
1 0 SORT (AGGREGATE)
2 1 INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=7000000)

that used index full scan (min/max) -- it knew it could read the index head or tail and be done, very efficient:


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

big_table@ORA9IR2>
big_table@ORA9IR2> select max(created) from big_table;

MAX(CREAT
---------
28-NOV-03


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=7)
1 0 SORT (AGGREGATE)
2 1 INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=7000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

same there, but:

big_table@ORA9IR2>
big_table@ORA9IR2> select min(created), max(created) from big_table;

MIN(CREAT MAX(CREAT
--------- ---------
12-MAY-02 28-NOV-03


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=257 Card=1 Bytes=7)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=257 Card=1000000 Bytes=7000000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2671 consistent gets
2656 physical reads
0 redo size
456 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

it cannot read the head and the tail 'in the general sense' -- I'll concede in this case it could, but in a query with a group by it could not, really. So -- can we do something?

big_table@ORA9IR2>
big_table@ORA9IR2>
big_table@ORA9IR2> select min(created), max(created)
2 from (
3 select min(created) created from big_table
4 union all
5 select max(created) created from big_table
6 )
7 /

MIN(CREAT MAX(CREAT
--------- ---------
12-MAY-02 28-NOV-03


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=9)
1 0 SORT (AGGREGATE)
2 1 VIEW (Cost=6 Card=2 Bytes=18)
3 2 UNION-ALL
4 3 SORT (AGGREGATE)
5 4 INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=70
00000)

6 3 SORT (AGGREGATE)
7 6 INDEX (FULL SCAN (MIN/MAX)) OF 'BT_IDX_CREATED' (NON-UNIQUE) (Cost=3 Card=1000000 Bytes=70
00000)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
456 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

that shows how to do that. Now onto that state field stuffed onto the end -- here we have to full scan the table (or full scan an index on object_name,created) since EACH ROW must be inspected:

big_table@ORA9IR2>
big_table@ORA9IR2> select max(created)
2 from big_table
3 where object_name like '%WI';

MAX(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1379 Card=1 Bytes=24)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1379 Card=50000 Bytes=1200000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
14338 consistent gets
14327 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

we can begin by observing that it is the same as this -- substr() to get the last two characters:


big_table@ORA9IR2>
big_table@ORA9IR2> select max(created)
2 from big_table
3 where substr(object_name,length(object_name)-1) = 'WI';

MAX(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1379 Card=1 Bytes=24)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1379 Card=10000 Bytes=240000)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
14338 consistent gets
13030 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Now we have something to INDEX!

big_table@ORA9IR2>
big_table@ORA9IR2> create index fbi on big_table( substr(object_name,length(object_name)-1), created )
2 compute statistics;

Index created.

big_table@ORA9IR2>
big_table@ORA9IR2> select max(created)
2 from big_table
3 where substr(object_name,length(object_name)-1) = 'WI';

MAX(CREAT
---------
12-MAY-02


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=24)
1 0 SORT (AGGREGATE)
2 1 INDEX (RANGE SCAN) OF 'FBI' (NON-UNIQUE) (Cost=3 Card=10000 Bytes=240000)




Statistics
----------------------------------------------------------
29 recursive calls
0 db block gets
7 consistent gets
2 physical reads
0 redo size
388 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

big_table@ORA9IR2>
big_table@ORA9IR2> set autotrace off


getting rows N through M of a result set

Ben Kafka, April 15, 2004 - 5:06 pm UTC

Really appreciate finding this info... wish oracle did it
like postgresql but this would have been some work coming up with myself. Thanks!

getting N rows out of 1 row

Deepak Gulrajani, April 21, 2004 - 2:36 pm UTC

The above feedback was useful to return less number of rows from the resultset of the Query.

Tom, Can we write a SQL Statement and return 2 or 3(dynamic)rows into a PL/SQL Table for every single row?

Tom Kyte
April 21, 2004 - 9:05 pm UTC

Can we write a SQL Statement and return 2 or 3(dynamic)rows into a PL/SQL
Table for every single row?

that doesn't "make sense" to me. not sure what you mean.

getting N rows out of 1 row

Deepak Gulrajani, April 23, 2004 - 2:13 pm UTC

Tom, can we achieve this with a single SQL in bulk, rather than row by row? Sorry, my question in the previous update was a little vague. Here is the example --
I would like to create rows in table B (2 or 3 depending on the value of col3 for every row in table A; i.e. if the value of col3 = F-2 then I need to create 2 rows in table B, if the value in col3 = F-3 then I need to create 3 rows in table B). For example:

ROW IN TABLE A
-----------------------
col1 col2 col3 col4 col5 col6
---- ---- ---- ---- ---- ----
1 ITEM F-2 XXX YYY 15

ROWS IN TABLE B(if col3= F-2)
--------------------------
col1 col2 col3 col4 col5 col6
---- ---- ---- ---- ---- ----
1 ITEM F-2 XXX YYY -15
2 IPV F-2 XXX YYY 15

ROWS IN TABLE B(if col3= F-3 then basically the col6 is further split)
--------------------------
col1 col2 col3 col4 col5 col6
---- ---- ---- ---- ---- ----
1 ITEM F-3 XXX YYY -15
2 IPV F-3 XXX YYY 12
3 ERV F-3 XXX YYY 3



Tom Kyte
April 23, 2004 - 3:23 pm UTC

if substr( col3,3,1 ) is always a number then:


select a.*
from a,
(select rownum r from all_objects where r <= 10) x
where x.r <= to_number(substr(a.col3,3,1))
/



(adjust r <= 10 to your needs, if 10 isn't "big enough", make it big enough)


getting N rows out of 1 row

Deepak Gulrajani, April 23, 2004 - 4:47 pm UTC

Tom, Thanks for the prompt and precise reply. --deepak

just a tiny fix

Marcio, April 23, 2004 - 7:52 pm UTC

select a.*
from a,
(select rownum r from all_objects where r <= 10) x
^^^^^^^^^^^^^
where x.r <= to_number(substr(a.col3,3,1))
/

instead of where r <= 10 you have where rownum <= 10

ops$marcio@MRP920> select rownum r from all_objects where r <= 10;
select rownum r from all_objects where r <= 10
*
ERROR at line 1:
ORA-00904: "R": invalid identifier

so, you have:

select a.*
from a,
(select rownum r from all_objects where rownum <= 10) x
where x.r <= to_number(substr(a.col3,3,1))
/


Tom Kyte
April 23, 2004 - 7:57 pm UTC

thanks, that is correct (every time I put a query up without actually running the darn thing that happens :)

Selecting nth Row from table by IDNumber

denni50, April 26, 2004 - 8:55 am UTC

Hi Tom

I'm developing a Second Gift Analysis Report.
(mgt wants to see the activity of first time donors
who give second gifts).

The dilemma is I have to go back and start with
donors who gave their 1st gift in November 2003...then
generate a report when they gave their second gift.
Some of the donors may have gone on to give 3rd and 4th
gifts through April 2004...however all subsequent gifts
after the second gift need to be excluded from the query.

On my home computer (Oracle 9i) I was able to use:
ROW_NUMBER() OVER (PARTITION BY idnumber ORDER BY giftdate)
as rn ...etc to get the results using test data.

At work we don't have AF (analytic functions).
Below is the query to find the first
time donors in November 2003. I then inserted those records
into a temp table called SecondGift:

FirstGift Query:

select idnumber,giftdate,giftamount
from gift where idnumber in(select g.idnumber
from gift g
where g.usercode1='ACGA'
and g.giftdate < to_date('01-NOV-2003','DD-MON-YYYY')
having sum(g.giftamount)=0
group by g.idnumber)
and giftamount>0
and giftdate between to_date('01-NOV-2003','DD-MON-YYYY')
and to_date('30-NOV-2003','DD-MON-YYYY')

here is the query trying to select second gift donors:
(however it's only selecting idnumbers with count=2.
A donor may have nth records even though I'm only searching
for the second giftamount>0)

Second Gift Query
select idnumber,giftdate,giftamount
from gift where idnumber in(select g.idnumber
from gift g where g.idnumber in(select s.idnumber
from secondgift s
where s.firstgiftcode='Nov2003')
and g.giftamount>0
having count(g.giftdate)=2
group by g.idnumber)
and giftamount>0

I tried using rownum (typically used for top-n analysis), with that returning only the 2nd row from 8 million records.


thanks for any feedback







Tom Kyte
April 26, 2004 - 9:36 am UTC

perhaps this'll help:

ops$tkyte@ORA9IR2> select * from t;
 
  IDNUMBER GIFTDATE  GIFTAMOUNT
---------- --------- ----------
         1 01-OCT-03         55
         1 01-APR-04         65
         1 02-APR-04         65
         2 01-DEC-03         55
         2 01-APR-04         65
         3 01-OCT-03         55
         3 21-OCT-03         65
 
7 rows selected.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select idnumber,
  2         gf1,
  3         to_date(substr(gf2,1,14),'yyyymmddhh24miss') gd2,
  4         to_number(substr(gf2,15)) ga2
  5    from (
  6  select idnumber,
  7         max(giftdate1) gf1,
  8         min(giftdate2) gf2
  9    from (
 10  select idnumber,
 11         case when giftdate <= to_date('01-nov-2003','dd-mon-yyyy')
 12              then giftdate
 13          end giftdate1,
 14         case when giftdate  > to_date('01-nov-2003','dd-mon-yyyy')
 15              then to_char(giftdate,'yyyymmddhh24miss') || to_char(giftamount)
 16          end giftdate2
 17    from t
 18         )
 19   group by idnumber
 20  having max(giftdate1) is not null and min(giftdate2) is not null
 21         )
 22  /
 
  IDNUMBER GF1       GD2              GA2
---------- --------- --------- ----------
         1 01-OCT-03 01-APR-04         65
 
ops$tkyte@ORA9IR2>
 

set of rows at a time

pushparaj arulappan, April 28, 2004 - 11:36 am UTC

Tom,

In our web application we need to retrieve data from
the database in portion and present to the user by piece meal.

For example, if a search query retrieves 100000
rows, initially we only want to present the user the first 10000 rows, then pick the next 10000 rows, and so on.

The query may join multiple tables.

We use a connection pool and hence do not want to hold on to the connection for that particular user until the user has reviewed all 100000 rows. We probably want to disconnect the user's connection from the database after
fetching the first 10000 rows.

Can you please guide us.

Our database is Oracle9i and weblogic is the application server.

Thanks
Pushparaj

Tom Kyte
April 28, 2004 - 6:57 pm UTC

10,000!!!!!! out of 100,000!!!!!

are you *kidding*???

google = gold standard for searching

google = 10 hits per page
google = if you try to go to page 100, we'll laugh at you and then say "no"
google = "got it so right"


you need to back off by 2 to 3 orders of magnitude here -- at least.

and then use this query (above)

Selecting n rows from tables

Graeme Whitfield, May 06, 2004 - 3:38 am UTC

Thanks, this saved me a bucket of time!!!

Selecting N rows for each Group

Mir, May 21, 2004 - 3:27 pm UTC

Hi Tom,

How will I write a SQL query to fetch N rows of every group? If we take the DEPT/EMP example, I want to retrieve, say, the first 5 rows of EVERY dept.



Tom Kyte
May 22, 2004 - 11:14 am UTC

select *
  from ( select ..., row_number() over (partition by deptno order by whatever) rn
           from emp )
 where rn <= 5;
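For instance, against the standard SCOTT demo schema (a sketch -- ordering by hiredate is just an assumption; use whatever ordering defines "first" for you):

select *
  from ( select e.*,
                row_number() over (partition by deptno order by hiredate) rn
           from emp e )
 where rn <= 5;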

Thanks Tom, * * * * * invaluable * * * * *

A reader, May 24, 2004 - 11:51 am UTC


Thanks Tom, * * * * * invaluable * * * * *

A reader, May 24, 2004 - 11:52 am UTC


I need help about how to paginate

Fernando Sanchez, May 30, 2004 - 4:09 pm UTC

I had never had to work with these kinds of things and I'm quite lost.

An application is asking me for any page of any size from a table, and it is taking too long. I think the problem is because of the pagination.

This is an example of what they ask me; it returns 10 rows out of 279368 (it is taking 00:01:117.09)

select *
from (select a.*, rownum rnum
from (select env.CO_MSDN_V, env.CO_IMSI_V, sms.CO_TEXT_V, env.CO_MSC_V, per.DS_PER_CLT_V, env.CO_REIN_N, TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'), TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from ir_tb_env_clts env, IR_CT_SMS sms, IR_CT_PER_CLT per
where env.CO_SMS_N = sms.CO_SMS_N(+)
and env.CO_PER_CLT_N = per.CO_PER_CLT_N(+)
order by env.SQ_ECL_01, env.CO_MSDN_V, env.CO_SMS_N, env.FX_FECH_ENV_D) a
where rownum <= 100510)
where rnum >= 100501;



Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=28552 Card=136815 Bytes=28867965)

1 0 VIEW (Cost=28552 Card=136815 Bytes=28867965)
2 1 COUNT (STOPKEY)
3 2 VIEW (Cost=28552 Card=136815 Bytes=27089370)
4 3 SORT (ORDER BY STOPKEY) (Cost=28552 Card=136815 Bytes=35435085)

5 4 HASH JOIN (OUTER) (Cost=3016 Card=136815 Bytes=35435085)

6 5 MERGE JOIN (OUTER) (Cost=833 Card=136815 Bytes=26542110)

7 6 SORT (JOIN)
8 7 PARTITION RANGE (ALL)
9 8 TABLE ACCESS (FULL) OF 'IR_TB_ENV_CLTS' (Cost=829 Card=136815 Bytes=13544685)

10 6 SORT (JOIN) (Cost=3 Card=82 Bytes=7790)
11 10 TABLE ACCESS (FULL) OF 'IR_CT_SMS' (Cost=1 Card=82 Bytes=7790)

12 5 TABLE ACCESS (FULL) OF 'IR_CT_PER_CLT' (Cost=1 Card=82 Bytes=5330)





Statistics
----------------------------------------------------------
5481 recursive calls
2443 db block gets
3548 consistent gets
57938 physical reads
107572 redo size
1580 bytes sent via SQL*Net to client
426 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
2 sorts (disk)
10 rows processed


Any advice will be helful for me.


Thanks in advance.



Tom Kyte
May 31, 2004 - 12:12 pm UTC

think about what has to take place here --

either:

at least the first 100,000 plus rows would have to retrieve via an index (very painfully slow to go row by row) and then the 10 you want would be returned

or

the entire result is gotten as fast as possible and sorted and then the 10 you want are returned.

there will be nothing "speedy" about this. Ask the developer to give you the business case that would actually necessitate going beyond say the first 100 rows (first 10 pages of a result set). Ask them to find a single search engine on the web (say like google) that lets you goto "row 100,000 out of lots of rows". You won't find one.

I believe the application has got it "wrong" here. Who would have the

a) patience to hit next page 1,000 times to get to this page?
b) the *NEED* to goto page 1,000





partially solved

Fernando Sanchez, May 30, 2004 - 5:49 pm UTC

The biggest problem was the joins in the most inside query

select env.CO_MSDN_V, sms.CO_TEXT_V, env.CO_MSC_V, per.DS_PER_CLT_V, env.CO_REIN_N, TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'), TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from (select a.*, rownum rnum
from (select *
from ir_tb_env_clts
order by SQ_ECL_01, CO_MSDN_V, CO_SMS_N, FX_FECH_ENV_D) a
where rownum <= 100510) env, IR_CT_SMS sms, IR_CT_PER_CLT per
where env.CO_SMS_N = sms.CO_SMS_N(+)
and env.CO_PER_CLT_N = per.CO_PER_CLT_N(+)
and env.rnum >= 100501

takes only about 11 seconds. I'm sure there are more things I could do.

Apart from that, isn't there a more standard way of returning pages of a table to an application?

Thanks again.


Tom Kyte
May 31, 2004 - 12:30 pm UTC

guess what -- rownum is assigned BEFORE order by is done.

what you have done is:

a) gotten the first 100510 rows
b) sorted them
c) joined them (possibly destroying the sorted order, most likely)
d) returned the "last ten" in some random order.

In short -- you have not returned "rows N thru M", so fast=true this is *not*
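A quick way to see that for yourself (a sketch against the standard SCOTT.EMP table):

select ename, rownum from emp where rownum <= 3 order by ename;
-- grabs 3 arbitrary rows, THEN sorts just those 3

select ename, rownum
  from ( select ename from emp order by ename )
 where rownum <= 3;
-- sorts first, then takes the true first 3 by ename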

You can try something like this. the goal with "X" is to get the 10 rowids *after sorting* (so there better be an index on the order by columns AND one of the columns better be NOT NULL in the data dictionary).

Once we get those 10 rows (and that'll take as long as it takes to range scan that index from the START to the 100,000+ plus row -- that'll be some time), we'll join to the table again to pick up the rows we want and outer join to SMS and PER.

select /*+ FIRST_ROWS */
env.CO_MSDN_V,
sms.CO_TEXT_V,
env.CO_MSC_V,
per.DS_PER_CLT_V,
env.CO_REIN_N,
TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'),
TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from (select /*+ FIRST_ROWS */ rid
        from (select /*+ FIRST_ROWS */ a.*, rownum rnum
                from (select /*+ FIRST_ROWS */ rowid rid
                        from ir_tb_env_clts
                       order by SQ_ECL_01, CO_MSDN_V, CO_SMS_N, FX_FECH_ENV_D
                     ) a
               where rownum <= :n
             )
       where rnum >= :m
     ) X,
     ir_tb_env_clts env,
     IR_CT_SMS sms,
     IR_CT_PER_CLT per
where env.rowid = x.rid
and env.CO_SMS_N = sms.CO_SMS_N(+)
and env.CO_PER_CLT_N = per.CO_PER_CLT_N(+)
order by env.SQ_ECL_01, env.CO_MSDN_V, env.CO_SMS_N, env.FX_FECH_ENV_D
/

and if the outer join is causing us issues we can:

select /*+ FIRST_ROWS */
env.CO_MSDN_V,
(select CO_TEXT_V
from ir_ct_sms sms
where env.CO_SMS_N = sms.CO_SMS_N),
env.CO_MSC_V,
(select DS_PER_CLT_V
   from IR_CT_PER_CLT per
  where env.CO_PER_CLT_N = per.CO_PER_CLT_N ),
env.CO_REIN_N,
TO_CHAR(env.FX_FECH_ENV_D, 'DD/MM/YYYY HH24:MI:SS'),
TO_CHAR(env.FX_RCP_IAS_D, 'DD/MM/YYYY HH24:MI:SS')
from (select /*+ FIRST_ROWS */ rid
        from (select /*+ FIRST_ROWS */ a.*, rownum rnum
                from (select /*+ FIRST_ROWS */ rowid rid
                        from ir_tb_env_clts
                       order by SQ_ECL_01, CO_MSDN_V, CO_SMS_N, FX_FECH_ENV_D
                     ) a
               where rownum <= :n
             )
       where rnum >= :m
) X,
ir_tb_env_clts env
where env.rowid = x.rid
order by env.SQ_ECL_01, env.CO_MSDN_V, env.CO_SMS_N, env.FX_FECH_ENV_D
/


assuming SMS and PER are "optional 1 to 1 relations with ENV" -- if they are not -- then your query above really returns "randomness" since it would get 10 random rows -- and then turn them into N random rows....


Insert to a file

A reader, June 10, 2004 - 10:31 am UTC

I have a partitioned table. Each partition has around 5 million rows. I need to unload a single partition's data to a file, but in batches of say 10, so each set will be around 500,000 rows.
What is the best, most efficient way to do that?
I was thinking of using your query to get m thru n, parameterizing it, and in a loop using the utl_file package.
Any suggestions or any alternative approach?

Tom Kyte
June 10, 2004 - 5:06 pm UTC

no, you would have a single query:

select * from t partition(p);

and array fetch from it 10 rows at a time. do not even consider "paging" thru it, do not even CONSIDER it.



sqlplus can do this.
see http://asktom.oracle.com/~tkyte/flat
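For reference, a minimal SQL*Plus sketch of the idea (the file name and columns are hypothetical; the script at the URL above is the fuller treatment):

set echo off
set feedback off
set heading off
set pagesize 0
set linesize 32767
set trimspool on
spool /tmp/part_p.dat
select col1 || ',' || col2 || ',' || col3
  from t partition (p);
spool off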

Insert to a file DB version 9.2

A reader, June 10, 2004 - 10:42 am UTC

forgot the DB version.
Thanx

Insert to a file DB version 9.2 some Clarification

A reader, June 10, 2004 - 5:19 pm UTC

Thanx for your response.
When you say "select * from t partition(p);

and array fetch from it 10 rows at a time. do not even consider "paging" thru
it, do not even CONSIDER it.
"

1) By array fetch do you mean a bulk collect with a limit clause?
Will a cursor be able to handle a 2 million row set with the limit set to 500,000, so there will be 10 such sets?

2) Can I load these sets of 500,000 into a different external table each time instead of using utl_file?
Will that be better?
3) Is it possible to use insert /*+ append */ into an external table, like insert /*+ append */ select .. a batch of 500,000 for each set?

Thanx



Tom Kyte
June 10, 2004 - 8:11 pm UTC

1) if you were to do this in plsql - yes, but I would recommend either sqlplus or Pro*C (see that url)


2) in 10g, yes, in 9i -- no, you cannot "create" an external table as select.

3) you cannot insert into an external table.
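For completeness, a sketch of the 10g approach mentioned in (2), using the ORACLE_DATAPUMP driver (DATA_DIR is a hypothetical directory object that must already exist; note the result is a binary dump file readable by another external table, not plain text):

create table t_unload
organization external
( type oracle_datapump
  default directory data_dir
  location ( 't_unload.dmp' )
)
as
select * from t partition (p);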

Insert to a file DB version 9.2 some Clarification :to add

A reader, June 10, 2004 - 5:30 pm UTC

As I need the data in 10 different files of 500,000 rows each

Tom Kyte
June 10, 2004 - 8:12 pm UTC

I'd use C if you could (the code is pretty much already written if you are on unix -- you could do array_flat | split)

Breaks in Dates

Saar, June 13, 2004 - 11:27 pm UTC

Tom,

I have 2 tables and the structure is enclosed...


Table 1 : COST_BREAKS

Structure :

from_date date,
to_date date,
cost number(13,2)

Data :

From_Date To_Date Cost
01/04/2004 19/06/2004 800
20/06/2004 31/07/2004 1100
01/08/2004 31/03/2005 900


Table 2 : PRICE_BREAKS

Structure :

From_Date date,
To_Date date,
Price Number(13,2)


Data

From_Date To_Date Price
02/05/2004 22/06/2004 1450
01/06/2004 15/07/2004 1750
16/07/2004 31/03/2005 1650



Output after combining the two table values with date breaks...

The break dates from_date (01/04/2004) and to_date (31/10/2004) will be passed as
parameters, and I should get the date breaks along with cost and price like this.

Output :-


From_Date To_Date Price Cost
--------- ------- ---- -----
01/04/2004 01/05/2004 Nil 800
02/05/2004 31/05/2004 1450 800
01/06/2004 19/06/2004 1450 800
20/06/2004 22/06/2004 1450 1100
23/06/2004 15/07/2004 1750 1100
16/07/2004 31/07/2004 1650 1100
01/08/2004 31/08/2004 1650 900


Your advice will be valuable

Tom Kyte
June 14, 2004 - 7:44 am UTC

I ignored this on the other page (what this had to do with export, I'll never figure out)

but since you put it here as well, I feel compelled to point out something.

Maybe anyone else reading this can help *me* out and let me know how this could be more clear:

http://asktom.oracle.com/pls/ask/f?p=4950:9:::NO:9:F4950_P9_DISPLAYID:127412348064

this "followup" neither

a) applies to the original question
b) supplies the basic information required (create table, inserts)

I'm at a loss as to how to make it "more clear"?

Saar, June 14, 2004 - 9:01 am UTC

Create Table cost_breaks
( cost_id      Number,
  from_date    date,
  to_date      date,
  cost         number(13,2)
);


Insert Into cost_breaks Values (120,to_date('01-APR-04'),to_date('19-JUN-04'),800);
Insert Into cost_breaks Values (121,to_date('20-JUN-04'),to_date('31-JUL-04'),1100);
Insert Into cost_breaks Values (122,to_date('01-AUG-04'),to_date('31-MAR-05'),900);

Create Table price_breaks
( price_id     Number,
  from_date    date,
  to_date      date,
  cost         number(13,2)
);

Insert Into price_breaks Values (131,to_date('02-MAY-04'),to_date('22-JUN-04'),1450);
Insert Into price_breaks Values (132,to_date('01-JUN-04'),to_date('15-JUL-04'),750);
Insert Into price_breaks Values (133,to_date('16-JUL-04'),to_date('31-MAR-05'),1650);


COMMIT;

------------------------------------------------------------------------------------

SQL> SELECT * FROM COST_BREAKS;

   COST_ID FROM_DATE   TO_DATE                COST
---------- ----------- ----------- ---------------
       120 01/04/2004  19/06/2004           800.00
       121 20/06/2004  31/07/2004          1100.00
       122 01/08/2004  31/03/2005           900.00

SQL> SQL> SELECT * FROM PRICE_BREAKS;

  PRICE_ID FROM_DATE   TO_DATE                COST
---------- ----------- ----------- ---------------
       131 02/05/2004  22/06/2004          1450.00
       132 01/06/2004  15/07/2004           750.00
       133 16/07/2004  31/03/2005          1650.00
       

I have to pass 2 date bands. One is '01-MAR-04' and the other one is '31-OCT-04'. Now I have to produce an output
with date breaks in both the tables... like this:

From_Date    To_Date        Price    Cost
---------    -------        ----    -----
01/04/2004    01/05/2004              800
02/05/2004    31/05/2004    1450      800
01/06/2004    19/06/2004    1450      800
20/06/2004    22/06/2004    1450      1100
23/06/2004    15/07/2004    1750      1100
16/07/2004    31/07/2004    1650      1100
01/08/2004    31/08/2004    1650      900

Rgrd 

Tom Kyte
June 14, 2004 - 10:45 am UTC

cool -- unfortunately you understand what you want, but it is not clear to me what you want. it looks a lot like "a procedural output in a report", not a SQL query.

Also, still not sure what this has to do with "getting rows N thru M from a result set"?

but you will want to write some code to generate this, I think I see what you want (maybe), and it's not going to be done via a simple query.

how to get a fixed no of rows

s devarshi, June 21, 2004 - 8:16 am UTC

Tom
I have a table t1(name, mark). A name can appear many times. I want to select 2 names with their top ten marks arranged in descending order. Can it be done in SQL?
I can get all the rows (select name, mark from t1 where name in (a, b) order by a||b;)

Devarshi


Tom Kyte
June 21, 2004 - 9:29 am UTC

select name, mark, rn
  from ( select name, mark,
                row_number() over (partition by name order by mark desc) rn
           from t1
          where name in ( a, b )
       )
 where rn <= 10;


also -- read about the difference between row_number, rank and dense_rank.

suppose name=a has 100 rows with the "top" mark

row_number will assign 1, 2, 3, 4, .... to these 100 rows and you'll get 10 "random" ones.

rank will assign 1 to the first 100 rows (they are all the same rank) and 101 to the second and 102 and so on. so, you'll get 100 rows using rank.

dense_rank will assign 1 to the first 100 rows, 2 to the second highest and so on. with dense_rank you'll get 100+ rows....
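To see the three side by side (a sketch against the t1 table from the question, with 'a' and 'b' standing in for the two names):

select name, mark,
       row_number() over (partition by name order by mark desc) rn,
       rank()       over (partition by name order by mark desc) rnk,
       dense_rank() over (partition by name order by mark desc) drnk
  from t1
 where name in ('a','b');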

Pseudo code

User, July 13, 2004 - 5:34 pm UTC

Hi Tom,
I have received pseudo code from a non-Oracle user and wanted to convert this code to a SQL query. Please see below.

Get from table research_personnel list of all center= cder co-pi
loop over list:
if emp_profile_key listed -> check in skills db for center
if from cder -> delete from research_personnel
else -> move to table resform_collab
else -> check if center specified in research_personnel
if center is cder -> delete from research_personnel
else -> move to table resform_collab
move to resform_collab:
insert into resform_collab:
resform_basic_id (same)
collab_name (research_personnel.fname + " " +
research_personnel.lname) collab_center ("FDA/" + research_personnel.center + "/" + research_personnel.office + "/" + research_personnel.division + "/" + research_personnel.lab)
delete from research_personnel

Any guidence would be appreciated.


Tom Kyte
July 13, 2004 - 8:07 pm UTC

logic does not make sense.

you have else/else with no if's

if from cder -> ...
else -> move ... (ok)
else ?????? how do you get here?

SQL query

A reader, July 14, 2004 - 9:55 am UTC


Tom,
Please see this.


get all CDER co-pi:

List1=
Select * from researchnew.research_personnel,researchnew.resform_basic
where researchnew.pi_type=2
and researchnew.resp_center='CDER'
and researchnew.resform_basic.resform_basic_id=researchnew.research_personnel.resform_basic_id

Loop over List1:
_________________
if we have List1.empprofkey:

level1 =
Select level1 from expertise.fda_aries_data
Where expertise.fda_aries_data.emp_profile_key = List1.empprofkey

if level1 is CDER:
select * from researchnew.research_personnel
where researchnew.pi_type=2 and researchnew.resp_center='CDER'and researchnew.resform_basic.resform_basic_id=researchnew.research_personnel.resform_basic_id and expertise.fda_aries_data.emp_profile_key = research_personnel.empprofkey


List1.id

else: insert into resform_collab:
collab_name= emp_first_name + " " + emp_last_name
collab_center = "FDA/" + org_level_1_code + "/"+ org_level_2_code + "/"+ org_level_3_code + "/"+ org_level_4_code
else:
if researchnew.research_personnelcenter is CDER:
delete from researchnew.research_personnel
where List1.id

else: insert to resform_collab:
collab_name= lname + " " + fname
collab_center = "FDA/" + center + "/"+ office + "/"+ division + "/"+ lab

Tom Kyte
July 14, 2004 - 11:49 am UTC

I don't understand the need or use of the second select in there -- it seems to be the same as the first?

level1 assignment could be a join in the main driving query (join 3 tables together)

now, once 3 tables are joined, you can easily:

insert into resform_collab
select decode( level1, cder, then format data one way, else format it another way) * from these three tables;

and then

delete from researchnew where key in (select * from these three tables where level1 = cder);


you are doing a three table join, if level1 = cder then format columns one way and insert into resform_collab, else format another way and do insert. then delete rows where level1=cder.
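In rough SQL, the shape would be something like this (a sketch only -- the join keys, column names and format rules are guesses lifted from the pseudo code above, not a tested implementation):

insert into resform_collab ( resform_basic_id, collab_name, collab_center )
select p.resform_basic_id,
       p.fname || ' ' || p.lname,
       decode( a.level1,
               'CDER', 'FDA/' || a.org_level_1_code || '/' || a.org_level_2_code,
               'FDA/' || p.center || '/' || p.office || '/' || p.division || '/' || p.lab )
  from research_personnel p, resform_basic b, fda_aries_data a
 where b.resform_basic_id = p.resform_basic_id
   and a.emp_profile_key  = p.empprofkey;

delete from research_personnel
 where empprofkey in ( select emp_profile_key
                         from fda_aries_data
                        where level1 = 'CDER' );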

SQL Query

A reader, July 14, 2004 - 12:53 pm UTC

Tom,
Your answer cleared it up a little bit. Could you please put them as a SQL query? I haven't done much complex SQL work but am in the learning process.

Thanks for all your help.

Tom Kyte
July 14, 2004 - 9:59 pm UTC

i sort of did? just join -- two sql statements...

rows current before and after when ordered by date

A reader, July 14, 2004 - 3:32 pm UTC

Hello Sir.

Given an ID, type and a start date,
I need to get all rows (after arranging in ascending order of start date)
having
1) the above id, type, start date

and
2) the row or set of rows with a start date earlier than the one given above (just the one closest date)

and
3) the row or set of rows with a start date after the one given above (just the one closest date)

example

for id = 1 type = A and start date = 1/17/1995

out put must be
ID TYPE START_DATE END_DATE
--------------- ---- --------------------- ---------------------
1 A 2/11/1993 1/16/1995
1 A 2/11/1993 1/16/1995
1 A 1/17/1995 1/19/1996
1 A 1/17/1995 1/19/1996
1 A 1/20/1996 1/16/1997

My solution works but I think it's terrible.

Can we have a complete view and then just give this id, type and date and get the above result?
My solution needs to generate a dynamic query, so I can't just use a view with a where clause.
Any better solution?

I tried using dense_rank
SELECT *
FROM (SELECT DENSE_RANK () OVER (PARTITION BY ID, TYPE ORDER BY start_date)
rn,
t.*
FROM td t) p
WHERE ID = 1
AND TYPE = 'A'
AND EXISTS (
SELECT NULL
FROM (SELECT DENSE_RANK () OVER (PARTITION BY ID, TYPE ORDER BY start_date)
rn,
s.*
FROM td s) q
WHERE q.start_date = TO_DATE ('1/17/1995', 'MM/DD/YYYY')
AND q.ID = p.ID
AND q.TYPE = p.TYPE
AND q.rn BETWEEN (p.rn - 1) AND (p.rn + 1))
ORDER BY ID, TYPE, rn
RN ID TYPE START_DATE END_DATE
---------- --------------- ---- --------------------- ---------------------
3 1 A 2/11/1993 1/16/1995
3 1 A 2/11/1993 1/16/1995
4 1 A 1/17/1995 1/19/1996
4 1 A 1/17/1995 1/19/1996
5 1 A 1/20/1996 1/16/1997


CREATE TABLE TD
(
ID VARCHAR2(15 BYTE) NOT NULL,
TYPE VARCHAR2(1 BYTE),
START_DATE DATE,
END_DATE DATE
)
LOGGING
NOCACHE
NOPARALLEL;

INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/11/1987 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/07/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/08/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '02/10/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '02/11/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/16/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/17/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/19/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/20/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/16/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/17/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/15/1998 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/12/2004 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), NULL);
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '01/13/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '10/30/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '04/06/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/12/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '09/13/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/12/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'A', TO_Date( '04/06/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/12/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'A', TO_Date( '09/13/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/12/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'A', TO_Date( '01/13/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '10/30/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '01/13/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '10/30/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '04/06/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/12/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'2', 'B', TO_Date( '09/13/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/12/1997 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/11/1987 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '09/07/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '09/08/1991 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '02/10/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '02/11/1993 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/16/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', TO_Date( '01/17/1995 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'), TO_Date( '01/19/1996 12:00:00 AM', 'MM/DD/YYYY HH:MI:SS AM'));
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', NULL, NULL);
INSERT INTO TD ( ID, TYPE, START_DATE, END_DATE ) VALUES (
'1', 'A', NULL, NULL);
COMMIT;

Tom Kyte
July 15, 2004 - 11:20 am UTC

ops$tkyte@ORA9IR2> select * from td
  2  where id = '1'
  3    and type = 'A'
  4    and start_date in
  5    ( to_date( '1/17/1995', 'mm/dd/yyyy' ),
  6      (select min(start_date)
  7         from td
  8        where id = '1' and type = 'A'
  9          and start_date > to_date( '1/17/1995', 'mm/dd/yyyy' )),
 10      (select max(start_date)
 11         from td
 12        where id = '1' and type = 'A'
 13          and start_date < to_date( '1/17/1995', 'mm/dd/yyyy' )) )
 14   order by start_date
 15  /
 
ID              T START_DAT END_DATE
--------------- - --------- ---------
1               A 11-FEB-93 16-JAN-95
1               A 11-FEB-93 16-JAN-95
1               A 17-JAN-95 19-JAN-96
1               A 17-JAN-95 19-JAN-96
1               A 20-JAN-96 16-JAN-97


is one way... 

What if

A reader, July 15, 2004 - 11:45 am UTC

Thanx Sir for your answer.
What if I were to extend this to, say, 2 dates prior to and after the given date?
Or N dates prior to and after the given date?

In my bad analytic solution I would just change it to

q.rn between (p.rn - N) and (p.rn + N)
Any suggestions?


Tom Kyte
July 15, 2004 - 1:30 pm UTC

in ( select to_date( ... ) from dual
union all
select start_date
from (select distinct start_date
from td where id = .. and type = ...
and start_date <= your_date order by start_date desc )
where rownum <= 2 )
union all .....

just generate the sets of dates you are interested in.
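Spelled out for N = 2 against the TD table above (a sketch; the literal date stands in for a bind):

select *
  from td
 where id = '1'
   and type = 'A'
   and start_date in
       ( select to_date('1/17/1995','mm/dd/yyyy') from dual
         union all
         select start_date
           from ( select distinct start_date
                    from td
                   where id = '1' and type = 'A'
                     and start_date < to_date('1/17/1995','mm/dd/yyyy')
                   order by start_date desc )
          where rownum <= 2
         union all
         select start_date
           from ( select distinct start_date
                    from td
                   where id = '1' and type = 'A'
                     and start_date > to_date('1/17/1995','mm/dd/yyyy')
                   order by start_date )
          where rownum <= 2 )
 order by start_date;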

But Nulls

A reader, July 17, 2004 - 9:21 pm UTC

Thanx Sir for your help.
The solution will not work for nulls; maybe I need to nvl them with sysdate, as there are a few in the test data.

Also, what if we want to return ranges? For a sample
start_date
1/1/1990
null
1/1/1991
null
null
1/1/1992
1/1/1992
1/1/1993.

Nulls will be grouped together.
Example for 1992
it should return
null
1/1/1992
1/1/1992
1/1/1993.
How to get that


Tom Kyte
July 18, 2004 - 12:00 pm UTC

huh?

does not compute, not understanding what you are asking.


you seem to be presuming that the null row "has some position in the table that is meaningful".


that null doesn't sort after 1991 and before 1992 -- rows have no "positions" in a table. You seem to be prescribing attributes of a flat file to rows in a table and you cannot.

SQL Query

A reader, July 19, 2004 - 2:53 pm UTC

Tom,
I tried to come up with a SQL query to perform the insert and delete as you outlined here, but was not able to succeed.
=================================================
level1 assignment could be a join in the main driving query (join 3 tables
together)

now, once 3 tables are joined, you can easily:

insert into resform_collab
select decode( level1, cder, then format data one way, else format it another
way) * from these three tables;

and then

delete from researchnew where key in (select * from these three tables where
level1 = cder);


you are doing a three table join, if level1 = cder then format columns one way
and insert into resform_collab, else format another way and do insert. then
delete rows where level1=cder
=======================================

Could you please explain this using the emp and dept tables, or your own example tables, so that I can duplicate it.
Thanks a lot.

Tom Kyte
July 19, 2004 - 4:30 pm UTC

you have a three table join here. can you get that far? if not, no example against emp/dept is going to help you.

Mr Parag - "Mutual Respect" - You should learn how to?

Reji, July 28, 2004 - 6:51 pm UTC

You might change this to "MR" - Tom is 100% right. I don't understand why you got really upset with his response. You should check your BP - not Bharat Petroleum, Blood Pressure.

You could have taken his response in a very light way but at the same time you should have understood why he said that.

Please behave properly Sir.

Tom:

Thanks for spending your personal time to help 100s of software engineers around the globe. We all really appreciate your time and effort.

limiting takes longer

v, August 03, 2004 - 8:23 pm UTC

My original query takes about 1 second to execute. It involves joining 3 tables and a lot of conditions must be met. When I ran the same query with your example to limit the range of records to N through M, it took 50 seconds to execute.

I noticed a few other users have posted here concerning a performance issue when limiting rows. Obviously there is something misconfigured on our end because the majority of users are happy here. :)

I noticed when I take out the last WHERE clause, "where rnum >= MIN_ROWS", the query executes in 1 second. I also tried changing the clause to "where rnum = 1000", and that also takes tremendously long.

Any pointers?


Tom Kyte
August 03, 2004 - 8:42 pm UTC

show us the queries and explain plans (autotrace traceonly explain is sufficient)

and a tkprof of the same (that actually fetched the data)

thanks

sriram, August 05, 2004 - 4:34 am UTC

Hey... it was pretty useful. Not only this -- I have cleared up many things on this site. This site is really great.

Does "rownum <= MAX_ROWS" give any performance improvment?

A reader, August 05, 2004 - 9:59 am UTC

Dear Tom,

In terms of performance, is there any difference between Query (A) and (B)?

A)
select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS
/


B)
select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
/


Tom Kyte
August 05, 2004 - 1:03 pm UTC

sure, if the (b) returns a billion rows and (a) returns 5 -- (a) will be faster :)

but we call that a top-n query and yes, there are top-n optimizations that make (a) faster and less expensive to perform

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:127412348064,#PAGEBOTTOM



Is the URL correct?

Sami, August 06, 2004 - 10:00 am UTC

Tom,
The URL which you have given is pointing to the same page.

Tom Kyte
August 06, 2004 - 10:21 am UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:127412348064#3282021148551

thanks -- should be pointing UP in this page :)

In out web application,paging sql is strange

Steven, August 07, 2004 - 1:59 am UTC

Hello, I have a question about SQL paging -- getting rows Min through Max out of an order by inner SQL.

We have a table app_AssetBasicInfo(ID Number Primary key,Title varchar2(255),CategoryID number not null,Del_tag not null,CreateDate Date not null,...);

CategoryID has 3 distinct values and del_tag has 2 distinct values; they are very skewed, and I gathered statistics using method_opt=>'for columns CategoryID, del_tag size skewonly'.
And I have an index CATEGORYDELTAGCDATEID on app_Assetbasicinfo(CategoryID, del_tag, CreateDate desc, ID desc), and the physical table storage is also sorted by CategoryID, del_tag, CreateDate desc, ID desc.

The paging SQL looks like this:

select * from (select table_a.*,rownum as my_rownum from (select
title FROM app_AssetBasicInfo WHERE
app_AssetBasicInfo.CategoryID=1 AND Del_tag=0 And
CreateDate between &Date1 and &Date2 order by CreateDate DESC,app_AssetBasicInfo.ID DESC )
table_a where rownum<&Max_Value) where
my_rownum>=&Min_Value;

but it confuses me very much. Please see these sql_trace results:
********************************************************************************

select table_a.*,rownum as my_rownum from (select
title FROM app_AssetBasicInfo WHERE
app_AssetBasicInfo.CategoryID=2 AND Del_tag=0 order by CreateDate DESC,app_AssetBasicInfo.ID DESC )
table_a where rownum<20

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.44 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.00 0.00 0 8 0 19
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.03 0.44 0 8 0 19

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63

Rows Row Source Operation
------- ---------------------------------------------------
19 COUNT STOPKEY
19 VIEW
19 TABLE ACCESS BY INDEX ROWID APP_ASSETBASICINFO
19 INDEX RANGE SCAN CATEGORYDELTAGCDATEID (object id 33935)

********************************************************************************

select * from (select table_a.*,rownum as my_rownum from (select
title FROM app_AssetBasicInfo WHERE
app_AssetBasicInfo.CategoryID=1 AND Del_tag=0 order by CreateDate DESC,app_AssetBasicInfo.ID DESC )
table_a where rownum<20) where my_rownum>=0

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.26 0.49 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 1.81 1.90 0 19523 0 19
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 2.07 2.40 0 19523 0 19

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63

Rows Row Source Operation
------- ---------------------------------------------------
19 VIEW
19 COUNT STOPKEY
19 VIEW
19 SORT ORDER BY STOPKEY
482147 TABLE ACCESS BY INDEX ROWID APP_ASSETBASICINFO
482147 INDEX RANGE SCAN CATEGORYDELTAGCDATEID (object id 33935)

The INDEX RANGE SCAN returns 482147 rows -- it seems equal to a full index scan.


I discovered that when I wrap an outer select around it, it gets slow and consumes many more consistent gets.

I also rebuilt index CATEGORYDELTAGCDATEID with compress 2 and tried the /*+ first_rows */ hint, but the result is the same.

But when I use rowids for the paging SQL, it runs well -- though it cannot support table joins.
************************************************************
select title from app_AssetBasicInfo
where rowid in
( select rid from
( select rownum rno,rowid rid from
(select rowid FROM app_AssetBasicInfo WHERE
app_AssetBasicInfo.CategoryID=1 AND Del_tag=0
order by CreateDate desc,app_AssetBasicInfo.ID DESC
) where rownum <= 20
) where rno >= 0
)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.15 0.15 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3 0.01 0.00 0 23 0 20
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 5 0.17 0.15 0 23 0 20

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 63

Rows Row Source Operation
------- ---------------------------------------------------
20 NESTED LOOPS
20 VIEW
20 SORT UNIQUE
20 COUNT STOPKEY
20 VIEW
20 INDEX RANGE SCAN CATEGORYDELTAGCDATEID (object id 33971)
20 TABLE ACCESS BY USER ROWID APP_ASSETBASICINFO

************************************************************

I want to know why the SQL with the outer select does a much bigger index range scan than the SQL with no outer wrapper.


I am looking forward for your reply.

Thank you very much !

Steven



Tom Kyte
August 07, 2004 - 10:15 am UTC

for paging queries, I recommend you use first_rows -- you pretty much always want the index, since you want to get rows 1..10 ASAP; 11..20 should take just a tad longer, and so on.

/*+ FIRST_ROWS */
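That is, applied to the paging template from the question (a sketch, with binds in place of the & substitution variables):

select /*+ FIRST_ROWS */ *
  from ( select a.*, rownum my_rownum
           from ( select title
                    from app_AssetBasicInfo
                   where CategoryID = 1
                     and Del_tag = 0
                   order by CreateDate desc, ID desc ) a
          where rownum < :max_value )
 where my_rownum >= :min_value;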

How would you go about this?

Brian McGinity, August 15, 2004 - 3:58 pm UTC

Suppose SCOTT.EMP had 300,000 rows and you needed to do this type of pagination from a search:

1. User inputs an ename to search for.
2. If the ename is found in EMP then show the result (see the result set description below).
3. If the ename is not found then chop off the last letter of the search criteria and try again.

Once found, the result set needs to show the 20 enames sorted alphabetically before the match and 20 enames after the match. The result has a total of 41 names sorted descending with the closest matching record in the middle.





Tom Kyte
August 16, 2004 - 8:17 am UTC

"closest matching record" in this case is ambigous since the equality could return thousands of records to begin with. that'd be my first problem - what means 'closest'

it'd be something like:

with q
as
(select ename
   from (select ename
           from emp
          where ename in ( :ename,
                           case when :l > 1 then substr( :ename, 1, :l-1 ) end,
                           case when :l > 2 then substr( :ename, 1, :l-2 ) end,
                           case when :l > 3 then substr( :ename, 1, :l-3 ) end,
                           ...
                           case when :l > N then substr( :ename, 1, :l-N ) end )
          order by length(ename) desc )
  where rownum = 1 )
( select *
    from (select * from emp
           where ename <= (select ename from q) order by ename desc )
   where rownum <= 21 )
union
( select *
    from ( select * from emp
            where ename >= (select ename from q) order by ename asc)
   where rownum <= 21 )
order by ename;


subquery q gets the "ename of interest" (:l here is the length of :ename)
the first branch of the union gets it and the 20 names before it
the second branch gets it and the 20 names after it

the union does a sort distinct, which removes the duplicate.

BAGUS SEKALI (PERFECT)

David, Raymond, September 01, 2004 - 11:47 pm UTC

I have been looking for a solution to my problem
and finally I got it...

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

Absolutely... this is one of the most useful queries
in Oracle SQL

again...thanks TOM...

Pagination ordered by a varchar2 column ?

Kim Berg Hansen, October 13, 2004 - 7:08 am UTC

Hi, Tom

I'm trying to use your pagination methods for a log-system I'm developing (Oracle 8.1.7.4.)

But I can't always get Oracle to use the index-scanning trick to make this speedy. It seems to me it only works with dates/numbers and not with varchar2s?


I have this test-table :

SQL> create table testlog
  2  (
  3      logdate        date          not null,
  4      logseq           integer          not null,
  5      logdmltype     varchar2(1)    not null,
  6      loguser        varchar2(10)   not null,
  7      logdept        varchar2(10)   not null,
  8      logip           raw(4)          not null,
  9      recordid       integer          not null,
 10      keyfield       varchar2(10)   not null,
 11      col1_old       varchar2(10),
 12      col1_new       varchar2(10),
 13      col2_old       number(32,16),
 14      col2_new       number(32,16)
 15  );

With these test-data :

SQL> insert into testlog
  2  select
  3  last_ddl_time logdate,
  4  rownum logseq,
  5  'U' logdmltype,
  6  substr(owner,1,10) loguser,
  7  substr(object_type,1,10) logdept,
  8  hextoraw('AABBCCDD') logip,
  9  ceil(object_id/100) recordid,
 10  substr(object_name,1,10) keyfield,
 11  substr(subobject_name,1,10) col1_old,
 12  substr(subobject_name,2,10) col1_new,
 13  data_object_id col2_old,
 14  object_id col2_new
 15  from all_objects
 16  where rownum <= 40000;

40000 rows created.


Typical ways to find data would be "by date", "by user", "by recordid", "by keyfield" :

SQL> create index testlog_date on testlog (
  2      logdate, logseq
  3  );

SQL> create index testlog_user on testlog (
  2      loguser, logdate, logseq
  3  );

SQL> create index testlog_recordid on testlog (
  2      recordid, logdate, logseq
  3  );

SQL> create index testlog_keyfield on testlog (
  2      keyfield, logdate, logseq
  3  );

(Note all indexes are on "not null" columns - that's a requirement for the trick to work, right?)


Gather statistics :

SQL> begin dbms_stats.gather_table_stats('XAL_SUPERVISOR','TESTLOG',method_opt=>'FOR ALL INDEXED COLUMNS SIZE 1',cascade=>true); end;
  2  /


And then fire some test statements for pagination :

********************************************************************************

Try "by date" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by logdate, logseq
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.02       0.01          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX FULL SCAN (object id 190604)   <--TESTLOG_DATE

Works dandy.

********************************************************************************

Try "by date" backwards :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by logdate desc, logseq desc
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.01          1          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          1          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX FULL SCAN DESCENDING (object id 190604)   <--TESTLOG_DATE

Works dandy backwards too.

********************************************************************************

Try "by user" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by loguser, logdate, logseq
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.15       0.24        161        361          6           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.15       0.25        161        361          6           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     SORT ORDER BY STOPKEY 
  40000      TABLE ACCESS FULL TESTLOG 

Hmmm... Not so dandy with a varchar2 column?

********************************************************************************

Try "by recordid" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      order by recordid, logdate, logseq
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX FULL SCAN (object id 190606)   <--TESTLOG_RECORDID

Works dandy with a number column.

********************************************************************************

Try "last 5 for a particular recordid" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      where recordid = 1000
      order by recordid desc, logdate desc, logseq desc
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.02          2          6          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.02          2          6          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX RANGE SCAN DESCENDING (object id 190606)   <--TESTLOG_RECORDID

Number column again rocks - it does a descending range scan and stops when it has 5 records.

********************************************************************************

Try "last 5 for a particular user" :

select /*+ FIRST_ROWS */ * from (
   select /*+ FIRST_ROWS */ p.*, rownum r from (
      select /*+ FIRST_ROWS */ t.*
      from testlog t
      where loguser = 'SYS'
      order by loguser desc, logdate desc, logseq desc
   ) p
   where rownum <= 5
) where r >= 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.12          5       2373          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.09       0.13          5       2373          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     SORT ORDER BY STOPKEY 
   8706      TABLE ACCESS BY INDEX ROWID TESTLOG 
   8707       INDEX RANGE SCAN (object id 190605)   <--TESTLOG_USER

Again the varchar2 column makes it not so perfect :-(

********************************************************************************

One thing I notice is this :

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     SORT ORDER BY STOPKEY   <-- This line only appears using a varchar2 column?
         (table access one way or the other)

For numbers/dates the COUNT STOPKEY can halt the table/index access when enough records have been found.
For varchar2s the SORT ORDER BY STOPKEY seems to disable that trick?

Why does it want to SORT ORDER BY STOPKEY when it's a varchar2?
It's already sorted in the index (same as with the numbers/dates)?
What am I doing wrong?


As always - profound thanks for all your help to all of us.


Kim Berg Hansen

Senior System Developer
T.Hansen Gruppen A/S
 

Tom Kyte
October 13, 2004 - 8:34 am UTC

what is your character set?

817 for me does:

select /*+ FIRST_ROWS */ * from (
select /*+ FIRST_ROWS */ p.*, rownum r from (
select /*+ FIRST_ROWS */ t.*
from testlog t
order by loguser, logdate, logseq
) p
where rownum <= 5
) where r >= 1

with WE8ISO8859P1


is this an "nls_sort()" issue? (eg: the binary sort isn't 'sorted' in your character set and we'd need an FBI perhaps?)

Yes !!!

Kim Berg Hansen, October 13, 2004 - 9:00 am UTC

I'm amazed as usual.

You have a rare gift for immediately noticing those details that should have been obvious to us blind folks raving in the dark ;-)

My character set is WE8ISO8859P1 - no problem there.

My database has NLS_SORT=BINARY.

The client I used for testing/development had NLS_SORT=DANISH.

When I change the client to NLS_SORT=BINARY - everything works as it's supposed to do...

Thanks a million, Tom.



Tom Kyte
October 13, 2004 - 9:12 am UTC

a function-based index could work for them..... creating the index on nlssort(....) and ordering by that.
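For example, a minimal sketch of that approach, assuming the DANISH linguistic sort from Kim's setup (the index name is made up; on this release, function-based indexes also require query_rewrite_enabled and related settings):

create index testlog_user_danish on testlog
    ( nlssort(loguser, 'NLS_SORT=DANISH'), logdate, logseq );

-- order by the same expressions so the index can supply the sort order
select * from (
    select p.*, rownum r from (
        select t.*
          from testlog t
         order by nlssort(loguser, 'NLS_SORT=DANISH'), logdate, logseq
    ) p where rownum <= 5
) where r >= 1;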

No need...

Kim Berg Hansen, October 13, 2004 - 9:24 am UTC

I just checked - the production clients (the ERP system) do have NLS_SORT=BINARY.

It was simply the registry settings here on my development PC that weren't correct... so the solution was very simple :-)


A reader, October 14, 2004 - 1:09 am UTC


Continued pagination troubles...

Kim Berg Hansen, October 15, 2004 - 8:19 am UTC

Hi again, Tom

Continuation of my question from a couple of days ago...

I'm still working on the best way of getting Oracle to use index scans in pagination queries.
I have no problem anymore with the simpler queries from my last question to you.

But suppose I wish to start the pagination from a particular point in a composite index.
A good example is this index :

SQL> create index testlog_user on testlog (
  2      loguser, logdate, logseq
  3  );

(Table, indexes and data for these tests are identical to the last question I gave you.)

********************************************************

Example 1:

It works fine if I do pagination from a starting point in the index where I only use the first column of the index. For example, start the pagination at the point where loguser = 'SYS':


SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where loguser >= 'SYS'
  6         order by loguser, logdate, logseq
  7      ) p
  8      where rownum <= 5
  9  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        01-03-27       4133 BOOTSTRAP$                                       
SYS        01-03-27       5044 I_CCOL1                                          
SYS        01-03-27       5045 I_CCOL2                                          
SYS        01-03-27       5046 I_CDEF1                                          
SYS        01-03-27       5047 I_CDEF2                                          


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
      5      INDEX RANGE SCAN (object id 191422) <--Index TESTLOG_USER

Perfect scan of the index and stop at the number of rows I wish to paginate.

********************************************************

Example 2:

Consider instead when I wish to use a starting point with all three columns in the composite index. For example, start the pagination at the point where loguser = 'SYS', logdate = '31-08-2004 11:22:33' and logseq = 5799, and then just scan the index forward 5 records from that point.
The best SQL I can come up with is something like this:


SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where ( loguser = 'SYS' and
                   logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and
                   logseq >= 5799
                 )
  6            or ( loguser = 'SYS' and
                   logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS')
                 )
  7            or ( loguser > 'SYS' )
  8         order by loguser, logdate, logseq
  9      ) p
 10      where rownum <= 5
 11  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        04-08-31       5799 V_$PROCESS                                       
SYS        04-08-31       5827 V_$SESSION                                       
SYS        04-08-31       5857 V_$STATNAM                                       
SYSTEM     01-03-27      16877 AQ$_QUEUES                                       
SYSTEM     01-03-27      16878 AQ$_QUEUES                                       


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.10          0       6153          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.08       0.10          0       6153          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW 
      5   COUNT STOPKEY 
      5    VIEW 
      5     TABLE ACCESS BY INDEX ROWID TESTLOG 
  16848      INDEX FULL SCAN (object id 191422) <--Index TESTLOG_USER


Do you know another way to phrase that SQL so that Oracle understands that I'm pinpointing a particular spot in the index and want it to scan forward from that point?

Conceptually, I think it should be no different for Oracle to start the range scan at a point defined by three column values (example 2) than at a point defined by only the first column value (example 1)?

The trouble is how to express in the SQL language what I want done :-)

In "pseudo-code" I might be tempted to express it somewhat like:

   "where (loguser, logdate, logseq) >= ('SYS', '31-08-2004 11:22:33', 5799)"

...but that syntax is not recognized in SQL, alas ;-)

What do you think? Can I do anything to make example 2 as perfectly efficient as example 1?
 

Tom Kyte
October 15, 2004 - 11:52 am UTC

For example start the pagination at the point where loguser =
'SYS', logdate = '31-08-2004 11:22:33' and logseq = 5799, and then just scan the
index forward 5 records from that point.

I don't understand the concept of "start the pagination at the point"?


are you saying "ordered by loguser, logdate, logseq", starting with :x/:y/:z?

in which case, we'd need an index on those 3 columns in order to avoid getting ALL rows and sorting before giving you the first row.

Paraphrase of my previous review...

Kim Berg Hansen, October 18, 2004 - 4:24 am UTC

Hi again, Tom

Sorry if I don't "conceptualize" clearly - English ain't my native language :-) I'll try to paraphrase to make it clearer.

The test table, indexes and data used for this are taken from my review of October 13th on this question.

Specifically the index I'm trying to use (abuse? ;-) is this index:

SQL> create index testlog_user on testlog (
  2      loguser, logdate, logseq
  3  );

All three columns are NOT NULL, so all 40,000 records will be in the index.
Here's part of the data ordered by that index:

SQL> select p.*, rownum from (
  2      select loguser, logdate, logseq, keyfield
  3      from testlog t
  4      order by loguser, logdate, logseq
  5  ) p;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD       ROWNUM
---------- -------- ---------- ---------- ----------
OUTLN      01-03-27      17121 OL$                 1
OUTLN      01-03-27      17122 OL$HINTS            2

(... lots of rows ...)

PUBLIC     03-07-23      13812 TOAD_TABLE       8139
PUBLIC     03-07-23      13810 TOAD_SPACE       8140
SYS        01-03-27       4133 BOOTSTRAP$       8141 <== POINT A
SYS        01-03-27       5044 I_CCOL1          8142
SYS        01-03-27       5045 I_CCOL2          8143
SYS        01-03-27       5046 I_CDEF1          8144
SYS        01-03-27       5047 I_CDEF2          8145
SYS        01-03-27       5048 I_CDEF3          8146
SYS        01-03-27       5049 I_CDEF4          8147
SYS        01-03-27       5050 I_COBJ#          8148
SYS        01-03-27       5051 I_COL1           8149
SYS        01-03-27       5052 I_COL2           8150
SYS        01-03-27       5053 I_COL3           8151
SYS        01-03-27       5057 I_CON1           8152
SYS        01-03-27       5058 I_CON2           8153

(... lots of rows ...)

SYS        04-08-31       5799 V_$PROCESS      16844 <== POINT B
SYS        04-08-31       5827 V_$SESSION      16845
SYS        04-08-31       5857 V_$STATNAM      16846
SYSTEM     01-03-27      16877 AQ$_QUEUES      16847
SYSTEM     01-03-27      16878 AQ$_QUEUES      16848
SYSTEM     01-03-27      16879 AQ$_QUEUES      16849
SYSTEM     01-03-27      16880 AQ$_QUEUE_      16850
SYSTEM     01-03-27      16881 AQ$_QUEUE_      16851
SYSTEM     01-03-27      16882 AQ$_SCHEDU      16852
SYSTEM     01-03-27      16883 AQ$_SCHEDU      16853
SYSTEM     01-03-27      16884 AQ$_SCHEDU      16854
SYSTEM     01-03-27      16910 DEF$_TRANO      16855
SYSTEM     01-03-27      17113 SYS_C00745      16856
SYSTEM     01-03-27      17114 SYS_C00748      16857
SYSTEM     01-03-27      16891 DEF$_AQERR      16858
SYSTEM     01-03-27      16893 DEF$_CALLD      16859
SYSTEM     01-03-27      16894 DEF$_CALLD      16860
SYSTEM     01-03-27      16896 DEF$_DEFAU      16861
SYSTEM     01-03-27      16898 DEF$_DESTI      16862
SYSTEM     01-03-27      16900 DEF$_ERROR      16863

(... lots of rows ...)

XAL_TRYKSA 04-09-21      31307 ORDREPOSTI      39999
XAL_TRYKSA 04-09-30      31220 LAGERINDGA      40000

40000 rows selected.

I've marked two records - point A and point B - that I'll explain further down.


Now for the tricky part of the explanation...

The original pagination code that utilizes my index well calls for using something like this construct:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         order by loguser, logdate, logseq
  6      ) p
  7      where rownum <= :hirow
  8  ) where r >= :lorow;

(Which works perfectly after I corrected NLS_SORT on my development PC ;-)


When a user asks to see the records starting with "loguser = 'SYS'" and is then able to paginate forward 5 rows at a time "from that point on" - that's what I mean by "starting the pagination at point A".

I cannot use the statement from above with :lorow = 8141 and :hirow = 8145 because that would require that I somehow first find those two numbers. To avoid that, I instead use this:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where loguser >= 'SYS'
  6         order by loguser, logdate, logseq
  7      ) p
  8      where rownum <= 5
  9  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        01-03-27       4133 BOOTSTRAP$                                       
SYS        01-03-27       5044 I_CCOL1                                          
SYS        01-03-27       5045 I_CCOL2                                          
SYS        01-03-27       5046 I_CDEF1                                          
SYS        01-03-27       5047 I_CDEF2                                          

This statement gives me "Page 1" in a "five rows at a time" pagination "starting at the point where loguser = 'SYS'" (point A). And this statement utilizes the index very efficiently indeed:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          0          5          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
      5   COUNT STOPKEY
      5    VIEW
      5     TABLE ACCESS BY INDEX ROWID TESTLOG
      5      INDEX RANGE SCAN (object id 191688)

When the user clicks to see "Page 2" (paginates forward), this statement is used:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where loguser >= 'SYS'
  6         order by loguser, logdate, logseq
  7      ) p
  8      where rownum <= 10
  9  ) where r >= 6;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        01-03-27       5048 I_CDEF3                                          
SYS        01-03-27       5049 I_CDEF4                                          
SYS        01-03-27       5050 I_COBJ#                                          
SYS        01-03-27       5051 I_COL1                                           
SYS        01-03-27       5052 I_COL2                                           

And it is quite efficient as well:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          6          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.01          0          6          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
     10   COUNT STOPKEY
     10    VIEW
     10     TABLE ACCESS BY INDEX ROWID TESTLOG
     10      INDEX RANGE SCAN (object id 191688)

So by this method I "pinpoint point A in the index" and paginate forward from that point... (I hope what I mean is clear.)

The tricky part is when I wish to do exactly the same thing at point B !!!

This time I want to start at the point in the index (in the "order by" if you wish, but that's the same in this case) defined not just by the first column but by three columns. I want to start at the point where loguser = 'SYS' and logdate = '31-08-2004 11:22:33' and logseq = 5799 (point B) and paginate "forward in the index/order by".

I can come up with one way of defining a where-clause that will give me the rows from "point B" and forward using that order by:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where (loguser = 'SYS' and logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and logseq >= 5799)
  6            or (loguser = 'SYS' and logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS'))
  7            or (loguser > 'SYS')
  8         order by loguser, logdate, logseq
  9      ) p
 10      where rownum <= 5
 11  ) where r >= 1;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYS        04-08-31       5799 V_$PROCESS                                       
SYS        04-08-31       5827 V_$SESSION                                       
SYS        04-08-31       5857 V_$STATNAM                                       
SYSTEM     01-03-27      16877 AQ$_QUEUES                                       
SYSTEM     01-03-27      16878 AQ$_QUEUES                                       

It gives me the correct 5 rows (page 1 of the pagination starting at point B), but it does not use the index efficiently - it full scans the index rather than doing a range scan:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.07          0       6153          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.09       0.08          0       6153          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
      5   COUNT STOPKEY
      5    VIEW
      5     TABLE ACCESS BY INDEX ROWID TESTLOG
  16848      INDEX FULL SCAN (object id 191688)

And when the user clicks "Page 2" this is what I try:

SQL> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from testlog t
  5         where (loguser = 'SYS' and logdate = to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS') and logseq >= 5799)
  6            or (loguser = 'SYS' and logdate > to_date('31-08-2004 11:22:33','DD-MM-YYYY HH24:MI:SS'))
  7            or (loguser > 'SYS')
  8         order by loguser, logdate, logseq
  9      ) p
 10      where rownum <= 10
 11  ) where r >= 6;

LOGUSER    LOGDATE      LOGSEQ KEYFIELD                                         
---------- -------- ---------- ----------                                       
SYSTEM     01-03-27      16879 AQ$_QUEUES                                       
SYSTEM     01-03-27      16880 AQ$_QUEUE_                                       
SYSTEM     01-03-27      16881 AQ$_QUEUE_                                       
SYSTEM     01-03-27      16882 AQ$_SCHEDU                                       
SYSTEM     01-03-27      16883 AQ$_SCHEDU                                       

Which again returns the correct rows, but inefficiently:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.08       0.10          0       6153          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.09       0.11          0       6153          0           5

Rows     Row Source Operation
-------  ---------------------------------------------------
      5  VIEW
     10   COUNT STOPKEY
     10    VIEW
     10     TABLE ACCESS BY INDEX ROWID TESTLOG
  16853      INDEX FULL SCAN (object id 191688)


So my problem is:

When I "start my pagination at point A" - Oracle intelligently realizes that it can go to the index "at point A" and give me five rows by scanning the index from that point forward (or in the case of pagination to "Page 2": 10 rows forward and then only return the last 5 of those 10.) That is very efficient and rocks!

When I "start my pagination at point B"... I don't have a clear way of defining my where-clause, so that Oracle can realize "hey, this is the same as before, I can go to point B in the index and give him 5 rows by scanning forward from that point".


How can I write my where-clause in a way, so that Oracle has a chance to realize that it can do exactly the same thing with "point B" as it did with "point A"?


I'm sorry I write such long novels that you probably get bored reading through them :-) ... but that's the only way I can be clear about it.

I hope you can figure out some way to work around this full index scan and get a range scan instead...?!? I'm kinda stumped here :-)
 

Tom Kyte
October 18, 2004 - 8:48 am UTC

"I want to start at the point where loguser = 'SYS' and logdate 
= '31-08-2004 11:22:33' and logseq = 5799"


that is 'hard' -- if you just used the simple predicate, that would "skip around" in the table as it went from loguser value to loguser value.  Hence your really complex predicate:

  5         where (loguser = 'SYS' and logdate = to_date('31-08-2004 
11:22:33','DD-MM-YYYY HH24:MI:SS') and logseq >= 5799)
  6            or (loguser = 'SYS' and logdate > to_date('31-08-2004 
11:22:33','DD-MM-YYYY HH24:MI:SS'))
  7            or (loguser > 'SYS')

(as soon as you see an OR -- abandon all hope :)



so, basically, you are trying to treat the table as if it were a VSAM/ISAM file -- seek to a key and read forward from that key. A concept that is vaguely orthogonal to relational technology...

but what about this:


ops$tkyte@ORA9IR2> update testlog set logseq = rownum, logdate = add_months(sysdate,-12) where loguser = 'XDB';
 
270 rows updated.
 
<b>I wanted some data "after loguser=SYS logdate=31-aug-2004 logseq=5799" with smaller logdates and logseqs</b>


ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index user_date_seq on testlog
  2  ( rpad(loguser,10) || to_char(logdate,'yyyymmddhh24miss') || to_char(logseq,'fm9999999999') );
 
Index created.

<b>we encode the columns you want to "seek through" in a single column.  numbers that are POSITIVE are easy -- you have to work a little harder to get negative numbers to encode "sortable"</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace view v
  2  as
  3  select testlog.*,
  4         rpad(loguser,10) || to_char(logdate,'yyyymmddhh24miss') || to_char(logseq,'fm9999999999') user_date_seq
  5    from testlog
  6  /
 
View created.

<b>I like the view, cuts down on typos in the query later...</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'TESTLOG', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable min number
ops$tkyte@ORA9IR2> variable max number
ops$tkyte@ORA9IR2> variable u varchar2(10)
ops$tkyte@ORA9IR2> variable d varchar2(15)
ops$tkyte@ORA9IR2> variable s number
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set linesize 121
ops$tkyte@ORA9IR2> set autotrace on explain
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec :min := 1; :max := 5; :u := 'SYS'; :d := '20040831112233'; :s := 5799
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select /*+ FIRST_ROWS */ loguser, logdate, logseq, keyfield from (
  2      select /*+ FIRST_ROWS */ p.*, rownum r from (
  3         select /*+ FIRST_ROWS */ t.*
  4         from v t
  5         where user_date_seq >= rpad(:u,10) || :d || to_char(:s,'fm9999999999')
  6         order by user_date_seq
  7      ) p
  8      where rownum <= :max
  9  ) where r >= :min
 10  /
 
LOGUSER    LOGDATE       LOGSEQ KEYFIELD
---------- --------- ---------- ----------
SYS        02-SEP-04       6126 BOOTSTRAP$
SYS        02-SEP-04       6129 CCOL$
SYS        02-SEP-04       6144 CDEF$
SYS        02-SEP-04       6152 CLU$
SYS        02-SEP-04       6162 CON$
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=10 Card=997 Bytes=48853)
   1    0   VIEW (Cost=10 Card=997 Bytes=48853)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=10 Card=997 Bytes=35892)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'TESTLOG' (Cost=10 Card=997 Bytes=101694)
   5    4           INDEX (RANGE SCAN) OF 'USER_DATE_SEQ' (NON-UNIQUE) (Cost=2 Card=179)
 
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec :min := 14200; :max := 14205;
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> /
 
LOGUSER    LOGDATE       LOGSEQ KEYFIELD
---------- --------- ---------- ----------
XDB        18-OCT-03        115 XDB$ENUM2_
XDB        18-OCT-03        116 XDB$ENUM_T
XDB        18-OCT-03        117 XDB$ENUM_V
XDB        18-OCT-03        118 XDB$EXTNAM
XDB        18-OCT-03        119 XDB$EXTRA_
XDB        18-OCT-03         12 DBMS_XDBZ
 
6 rows selected.
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=10 Card=997 Bytes=48853)
   1    0   VIEW (Cost=10 Card=997 Bytes=48853)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=10 Card=997 Bytes=35892)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'TESTLOG' (Cost=10 Card=997 Bytes=101694)
   5    4           INDEX (RANGE SCAN) OF 'USER_DATE_SEQ' (NON-UNIQUE) (Cost=2 Card=179)
 
 

 

Viable approach

Kim Berg Hansen, October 18, 2004 - 9:45 am UTC

Yes, I can use that method - particularly when wrapped in a view like that.

Probably I'll wrap the sorting encoding some more...

- function: sortkey(user, date, seq) return varchar2 (returning the concatenated sorting string)
- index on sortkey(loguser, logdate, logseq)
- view: select t.*, sortkey(loguser, logdate, logseq) sortedkey from testlog t
- where clauses: select * from v where sortedkey >= sortkey(:user, :date, :seq)

...or something like that (see the sketch below).
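For what it's worth, a minimal sketch of that wrapper, assuming the testlog columns from the examples above; the function must be declared DETERMINISTIC to be usable in a function-based index:

create or replace function sortkey( p_user varchar2,
                                    p_date date,
                                    p_seq  number )
return varchar2
deterministic
as
begin
    -- same encoding Tom used: fixed-width user, sortable date, positive seq
    return rpad(p_user,10)
        || to_char(p_date,'yyyymmddhh24miss')
        || to_char(p_seq,'fm9999999999');
end;
/

create index testlog_sortkey on testlog( sortkey(loguser, logdate, logseq) );

-- seek to "point B" and range scan forward (note >=, not =)
select * from (
    select p.*, rownum r from (
        select t.*
          from testlog t
         where sortkey(loguser, logdate, logseq) >= sortkey(:u, :d, :s)
         order by sortkey(loguser, logdate, logseq)
    ) p where rownum <= :max
) where r >= :min;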

Thanks for pointing me in the direction of another approach.

Often my trouble is that I'm used to thinking in terms of the Concorde XAL ERP system that sits "on top of" the Oracle base. In the XAL world there's nothing but simple tables, and indexes are only on columns.

But then again in the XAL programming language one continually uses an index as a key and scans forward in this fashion.

I'm beginning (slowly but surely) to see the strengths of the "set-based" thinking I need to do in order to write good SQL (instead of the very much record-based thinking needed to write good XAL :-)...
...but one of the things that has always puzzled me is why the SQL language does NOT allow for using the composite indexes as key lookups in where clauses somehow... I mean those indexes are there and could profitably be used - the language just doesn't support it...

Oh, well - it's just one of those things that "just is", I guess. Perhaps I should try modifying MySQL to include that functionality :-)

Anyway, making the index a non-composite index is a viable approach - I can live with that.


anto, October 19, 2004 - 2:59 pm UTC

In Oracle, for my current session, I want to retrieve only the first 10 rows of each select SQL (I don't want to add the 'where rownum <= 10' condition to the query each time). Is there any way I can do this at the session level in Oracle, instead of adding the 'where rownum <= 10' condition to each query?

Tom Kyte
October 19, 2004 - 4:15 pm UTC

no, we always return what you query for - the client would either have to fetch the first ten and stop or you add "where rownum <= 10"
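For illustration, a minimal sketch of "fetch the first ten and stop" done client-side, here in PL/SQL (any client API can do the same by simply closing the cursor after n fetches):

declare
    cursor c is select ename from emp order by ename;
    l_ename emp.ename%type;
begin
    open c;
    loop
        fetch c into l_ename;
        exit when c%notfound;
        dbms_output.put_line( l_ename );
        exit when c%rowcount >= 10;   -- stop after the first ten rows
    end loop;
    close c;
end;
/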

A reader, October 19, 2004 - 4:36 pm UTC

Thanks,Tom for confirming this

fga

Michael, October 20, 2004 - 12:14 am UTC

I think you can always use fine-grained access control (FGA)
to "automatically" append a where clause to any table.
In your case, "where rownum < 10". Search this site for fga or "fine grained access control". That'll do it.

Cheers
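(For concreteness, a minimal sketch of the kind of dbms_rls policy Michael is describing, against a hypothetical table T - see Tom's follow-up for why this breaks down as soon as joins are involved:)

create or replace function limit_rows( p_schema varchar2, p_object varchar2 )
return varchar2
as
begin
    return 'rownum < 10';   -- predicate silently appended to every query on T
end;
/

begin
    dbms_rls.add_policy( object_schema   => user,
                         object_name     => 'T',
                         policy_name     => 'T_LIMIT_ROWS',
                         function_schema => user,
                         policy_function => 'LIMIT_ROWS' );
end;
/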

Tom Kyte
October 20, 2004 - 7:07 am UTC

whoa -- think about it.

sure, if all you do is "select * from t" -- something that simple (yet so very very very very very drastic) would work.

but --

select * from emp, dept where...

would become:

select * from ( select * from emp where rownum <= 10 ) emp,
( select * from dept where rownum <= 10 ) dept


suppose that finds 10 emps in deptno = 1000
and 10 depts 10, 20, 30, .... 100


no data found



Use Views?

Michael, October 21, 2004 - 3:48 am UTC

In that case, why not simply create a view and put the predicate on the view? Wouldn't you then have something like

select * from
(select * from emp,dept where emp.deptno=dept.deptno) <== View
where rownum < 100?

I think if you allow access to the data only through views (and not through tables) you overcome the problem you mentioned?

Tom Kyte
October 21, 2004 - 6:57 am UTC

What if you wanted deptno=10

select * from your_view where deptno=10

would have the where rownum done FIRST (get 100 random rows) and then return the ones from that 100 that are deptno=10 (perhaps NO rows)

no, views / FGA -- they are not solutions to this.

it seems there is a bug in 9.2.0.1.0 when getting rows from M to N

Steven, November 01, 2004 - 9:34 pm UTC

I think it's a bug.
[code]
SQL> select * from v$version;
BANNER
---------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production

SQL> create table tt nologging as select rownum rn, b.* from dba_objects b;
SQL> alter table tt add primary key(rn) nologging;
SQL> create index ttidx on tt(object_type, created) nologging;
SQL> analyze table tt compute statistics;

SQL> select /*+ first_rows */ * from (select a.*, rownum as rr from (select * from
tt where object_type='TABLE' order by created) a where rownum<20) where rr>0;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=485 Card=1
          9 Bytes=3857)

   1    0   VIEW (Cost=485 Card=19 Bytes=3857)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=485 Card=1005 Bytes=190950)
   4    3         SORT (ORDER BY STOPKEY) (Cost=485 Card=1005 Bytes=89
          445)

   5    4           TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=484 Ca
          rd=1005 Bytes=89445)

   6    5             INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost
          =6 Card=1005)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        852  consistent gets
          0  physical reads
          0  redo size
       1928  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         19  rows processed

SQL> select /*+ first_rows */ a.*, rownum as rr from (select * from tt
where object_type='TABLE' order by created) a where rownum<20;
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=484 Card=1
          9 Bytes=190950)

   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=484 Card=1005 Bytes=190950)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=484 Card=1
          005 Bytes=89445)

   4    3         INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost=6 C
          ard=1005)

Statistics
----------------------------------------------------------
          3  recursive calls
          0  db block gets
         16  consistent gets
          0  physical reads
          0  redo size
       1928  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         19  rows processed

[/code]

but in 9.2.0.5.0 it's correct:

[code]
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
PL/SQL Release 9.2.0.5.0 - Production
CORE    9.2.0.6.0       Production
TNS for 32-bit Windows: Version 9.2.0.5.0 - Production
NLSRTL Version 9.2.0.5.0 - Production

SQL> create table tt nologging as select rownum rn, b.* from dba_objects b;

SQL> alter table tt add primary key(rn) nologging;

SQL> create index ttidx on tt(object_type, created) nologging;

SQL> analyze table tt compute statistics;

SQL> select /*+ first_rows */ * from (select a.*, rownum as rr from (select * from
  2  tt where object_type='TABLE' order by created) a where rownum<20) where rr>0;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=64 Card=19
           Bytes=3857)
   1    0   VIEW (Cost=64 Card=19 Bytes=3857)
   2    1     COUNT (STOPKEY)
   3    2       VIEW (Cost=64 Card=395 Bytes=75050)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=64 Card=
          395 Bytes=31995)

   5    4           INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost=3
           Card=395)
Statistics
----------------------------------------------------------
          8  recursive calls
          0  db block gets
         19  consistent gets
          0  physical reads
          0  redo size
       1907  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
         19  rows processed

SQL> select /*+ first_rows */ a.*, rownum as rr from (select * from
  2  tt where object_type='TABLE' order by created) a where rownum<20;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=64 Card=19
           Bytes=3610)
   1    0   COUNT (STOPKEY)
   2    1     VIEW (Cost=64 Card=395 Bytes=75050)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'TT' (Cost=64 Card=39
          5 Bytes=31995)
   4    3         INDEX (RANGE SCAN) OF 'TTIDX' (NON-UNIQUE) (Cost=3 C
          ard=395)

Statistics
----------------------------------------------------------
          3  recursive calls
          0  db block gets
         17  consistent gets
          0  physical reads
          0  redo size
       1907  bytes sent via SQL*Net to client
        514  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         19  rows processed
[/code]

but I could not find any information about this on Metalink.
I hope this can help many people.

old question and DBMS_XMLQuery

A reader, November 18, 2004 - 1:21 pm UTC

Tom,
first, thanks a lot as usual!!!
The DBMS_XMLQuery package has two interesting procedures, setSkipRows and setMaxRows, that look like the perfect tool for getting a specific "data window" from the whole recordset while forming XML. Just wondering - is it using the same technique you provided at the very beginning of this thread, or is there some additional magic behind the scenes? In other words - will it also get slower as setSkipRows increases?

Tom Kyte
November 18, 2004 - 2:19 pm UTC

alter session set sql_trace=true;
do it
tkprof it

and see :)

it is what I would have to do...
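For example, a sketch of that experiment using the documented DBMS_XMLQuery calls (the query and row counts are just placeholders):

alter session set sql_trace = true;

declare
    l_ctx dbms_xmlquery.ctxtype;
    l_xml clob;
begin
    l_ctx := dbms_xmlquery.newcontext( 'select * from emp order by empno' );
    dbms_xmlquery.setskiprows( l_ctx, 20 );   -- skip the first 20 rows
    dbms_xmlquery.setmaxrows( l_ctx, 10 );    -- then return the next 10
    l_xml := dbms_xmlquery.getxml( l_ctx );
    dbms_xmlquery.closecontext( l_ctx );
end;
/

-- then tkprof the trace file, and repeat with a larger setSkipRows
-- to see whether the work grows with the number of rows skipped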

Thanks for the wonderful solution

san_mat, December 10, 2004 - 7:02 am UTC

Hi Tom,
as always you are Great !!
I got what I was looking for... every time I have some problem, I come to this site and you never disappoint, whatever the query...

Thanks again


Total rows

Rula, January 03, 2005 - 5:34 pm UTC

Naveen from India said

"The devlopers want me to get the total rows that the query return so that they can display that many pages. [...] This is two I/O calls to the database and two queries to be parsed to display a page. Is there any work around."

And you answered

"I have a very very very very simple solution to this problem. DON'T DO IT."

It is a funny answer, but we had a similar problem here, and that kind of answer was not possible (if we meant to keep our jobs...), so we found a workaround. Here it is:

select * from
(SELECT COUNT(1) OVER (PARTITION BY 1) total_rows,
row_number() OVER (ORDER BY <some criteria>) r_number
<the rest of your query>
) WHERE r_number BETWEEN :1 AND :2

So you may get the first page simultaneously with the total number of rows. It is better than executing 2 queries, and worse than ignoring the total like Google does.

I hope it is useful for you.

Regards.
Rula.



Tom Kyte
January 03, 2005 - 10:35 pm UTC

I did not mean to be funny

I was *DEADLY* serious.

I wrote it in my book.

that

count(*) over ()

(slightly more meaningful and less confusing than "count(1) over (partition by 1)". What is UP with count(1) -- what is count(1)? count(*) has meaning -- but I digress)

That trick is something hardware vendors will love you for, but not too many DBA's or people who care about performance.


You get the FIRST page for that query only after resolving and getting to the VERY LAST PAGE

(think about it -- how can you get the total row count without- well -- counting the rows!!!!!)
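(For completeness, if you do decide to pay that price, the count(*) over () phrasing of Rula's query would look something like this sketch - the inner query and sort key are placeholders:)

select *
  from ( select x.*,
                count(*) over ()                 total_rows,
                row_number() over (order by id)  r_number
           from some_table x )
 where r_number between :lo and :hi;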

Row estimates

Andy, January 21, 2005 - 4:20 am UTC

Hi Tom,

Apologies if this is deemed off-topic! I'm trying to return the estimated cardinality of a query to the user (for use with pagination etc.) From what I've read in this thread, there are only two ways to do this: either with EXPLAIN PLAN, or by querying the v$ tables directly. I've decided to do the latter, so as to be able to take advantage of bind peeking and - if I've understood correctly - as it's a bit more efficient. So, I'm using a query like this:

select * from
(select b.*,rownum rnum
from (<main query>) b
where rownum < :max )
where rnum >= :min ;

starting with, say, :max = 51 and :min = 0 if I'm fetching 50 rows at a time. To get the card value using EXPLAIN PLAN I would, when I get the first batch of rows, strip away the "batch" stuff and send this:

explain plan for <main query>

The card value is then straightforward as I simply take the value from plan_table where id = 0. But I'm not so sure how I get the *right* card value when using v$sql_plan. Because I'm querying v$sql_plan for a plan that already exists, how can I get the card value that refers to what would have been selected had there been no batching? Example:

mires@WS2TEST> var x varchar2(10)
mires@WS2TEST> exec :x := '1.01.01'

PL/SQL procedure successfully completed.

mires@WS2TEST> select * from (select rownumber from fulltext where az = :x) where rownum < 11;

ROWNUMBER
----------
37845
37846
37847
37848
37849
37850
37851
37852
37853
37855

10 rows selected.

mires@WS2TEST> explain plan set statement_id ='my_test_no_batch' for select rownumber from fulltext where az = :x;

Explained.

mires@WS2TEST> select id, operation, cardinality from plan_table where statement_id = 'my_test_no_batch';

        ID OPERATION                      CARDINALITY
---------- ------------------------------ -----------
         0 SELECT STATEMENT                       100
         1 TABLE ACCESS                           100
         2 INDEX                                  100


(So with EXPLAIN PLAN I just take the value where id = 0).

mires@WS2TEST> select /* find me */ * from (select rownumber from fulltext where az = :x) where rownum < 11;

ROWNUMBER
----------
37845
37846
37847
37848
37849
37850
37851
37852
37853
37855

10 rows selected.

mires@WS2TEST> select id, operation, cardinality from v$sql_plan where (address, child_number) in (select address, child_number from v$sql where sql_text like '%find me%' and sql_text not like '%sql_text%') order by id;

        ID OPERATION                      CARDINALITY
---------- ------------------------------ -----------
         0 SELECT STATEMENT
         1 COUNT
         2 TABLE ACCESS                           100
         3 INDEX                                  100

In v$sql_plan, card values are not shown for each step. Here it's obvious which card value refers to my inner query, but how can I be sure with a more complex query. Does v$sql_plan never display a card value for the step which filters out my 10 rows (in which case I can just take the "last" card value - i.e. the card value for the lowest id value that has a non-null card value)?



Tom Kyte
January 21, 2005 - 8:26 am UTC

you want the first one you hit by ID.  it is the "top of the stack", it'll be as close as you appear to be able to get from v$sql_plan.


ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
 
Table created.
 
ops$tkyte@ORA9IR2> create index emp_ename_idx on emp(ename);
 
Index created.
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'EMP', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> variable x varchar2(25)
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace view dynamic_plan_table
  2  as
  3  select
  4   rawtohex(address) || '_' || child_number statement_id,
  5   sysdate timestamp, operation, options, object_node,
  6   object_owner, object_name, 0 object_instance,
  7   optimizer,  search_columns, id, parent_id, position,
  8   cost, cardinality, bytes, other_tag, partition_start,
  9   partition_stop, partition_id, other, distribution,
 10   cpu_cost, io_cost, temp_space, access_predicates,
 11   filter_predicates
 12   from v$sql_plan;
 
View created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> define Q='select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11';
ops$tkyte@ORA9IR2> select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11;
 
no rows selected
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from plan_table;
 
6 rows deleted.
 
ops$tkyte@ORA9IR2> explain plan for &Q;
old   1: explain plan for &Q
new   1: explain plan for select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11
 
Explained.
 
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
 
------------------------------------------------------------------------
| Id  | Operation             |  Name          | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                |    10 |   100 |     3 |
|*  1 |  COUNT STOPKEY        |                |       |       |       |
|   2 |   MERGE JOIN CARTESIAN|                |    14 |   140 |     3 |
|   3 |    TABLE ACCESS FULL  | EMP            |    14 |    56 |     3 |
|   4 |    BUFFER SORT        |                |     1 |     6 |       |
|*  5 |     INDEX RANGE SCAN  | EMP_ENAME_IDX  |     1 |     6 |       |
------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM<11)
   5 - access("E2"."ENAME"=:Z)
 
Note: cpu costing is off
 
19 rows selected.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from table( dbms_xplan.display
  2  ( 'dynamic_plan_table',
  3      (select rawtohex(address)||'_'||child_number x
  4         from v$sql
  5        where sql_text='&Q' ),
  6     'serial' ) )
  7  /
old   5:       where sql_text='&Q' ),
new   5:       where sql_text='select * from (select emp.empno, e2.ename from emp, emp e2 where e2.ename = :x ) where rownum < 11' ),
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
 
------------------------------------------------------------------------
| Id  | Operation             |  Name          | Rows  | Bytes | Cost  |
------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                |       |       |     3 |
|*  1 |  COUNT STOPKEY        |                |       |       |       |
|   2 |   MERGE JOIN CARTESIAN|                |    14 |   140 |     3 |
|   3 |    TABLE ACCESS FULL  | EMP            |    14 |    56 |     3 |
|   4 |    BUFFER SORT        |                |     1 |     6 |       |
|*  5 |     INDEX RANGE SCAN  | EMP_ENAME_IDX  |     1 |     6 |       |
------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(ROWNUM<11)
   5 - access("ENAME"=:X)
 
Note: cpu costing is off
 
<b>and it won't be precisely the same as explain plan gives (but in this case -- 11 would not really be in the query, so it would not "know" -- and 11 would actually be wrong for you!  14 is right if you think about it, you want to know the estimated size of the entire set, not the set after the stopkey processing)</b>

 

first_rows vs. order by

VA, March 10, 2005 - 4:20 pm UTC

In a pagination style query like

select /*+ first_rows */ ...
from ...
where ...
order by ...

Don't the first_rows hint and the order by cancel each other out? ORDER BY implies that you need to fetch everything before you can start spitting out the first row, which is contradictory with first_rows.

So, if I have a resultset that returns 1000 rows and I want to see the first 10 rows ordered by something, how would I go about doing this most efficiently, knowing that users are going to go away after paging down a couple of times?

Thanks

Tom Kyte
March 10, 2005 - 7:38 pm UTC

no it doesn't. think "index" and think "top n processing"

if you can use an index, we can get there pretty fast.

if you cannot - we can still use a top-n optimization to avoid sorting all 1000 rows (just grab the first N rows and sort them; then, for every row that comes after, compare it to the last row in the array of N sorted rows -- if it is greater than the last row, ignore it, else put it in the array and bump out the last one)
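
For illustration only, here is a minimal PL/SQL sketch of that top-n idea -- this is just the concept, not how the database implements it internally, and the page size of 10 and the use of EMP.SAL are assumptions:

declare
    type num_tab is table of number;
    l_top num_tab := num_tab();        -- the N smallest values seen so far, kept sorted
    l_n   constant pls_integer := 10;  -- assumed "top n" size

    procedure bubble_last is
        l_tmp number;
    begin
        -- only the last element can be out of place; bubble it left into position
        for i in reverse 2 .. l_top.count loop
            exit when l_top(i) >= l_top(i-1);
            l_tmp := l_top(i); l_top(i) := l_top(i-1); l_top(i-1) := l_tmp;
        end loop;
    end;
begin
    for r in ( select sal from emp ) loop
        if l_top.count < l_n then
            l_top.extend;
            l_top(l_top.count) := r.sal;
            bubble_last;
        elsif r.sal < l_top(l_n) then
            l_top(l_n) := r.sal;       -- beats the current "worst of the best"
            bubble_last;
        end if;                        -- else: the row is ignored, no sort work at all
    end loop;
    for i in 1 .. l_top.count loop
        dbms_output.put_line( l_top(i) );
    end loop;
end;
/

Only N rows are ever kept and sorted in memory, no matter how many rows the query returns.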

Please explain how it will work with an example

Kiran, March 11, 2005 - 5:39 am UTC

sql>select *
2 from ( select rownum rnum, a.*
3 from ( select * from emp order by 1 ) a
4 where rownum <= 15 )
5 where rnum >= 1
6 ;

RNUM EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO LOC
---------- ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- ---
1 7369 aaat&&maa CLERK 7902 17-DEC-80 8000 250
2 7566 NJS MANAGER 7839 02-APR-81 2975 100
3 7782 CLARK MANAGER 7839 09-JUN-81 2450 10
4 7788 SCOTT ANALYST 7566 09-DEC-82 3000
5 7839 KING PRESIDENT 17-NOV-81 5000 10
6 7876 ADAMS CLERK 7788 12-JAN-83 1100
7 7902 FORD ANALYST 7566 03-DEC-81 3000
8 7934 MILLER CLERK 7782 23-JAN-82 1300 10
9 7965 AKV CLERK 7566 20-DEC-83 1020 400 20

9 rows selected.


it looks like a normal query; it is not resetting the rownum value. please explain your query with an example

Tom Kyte
March 11, 2005 - 6:22 am UTC

run the query from the inside out (don't know what you mean by "not resetting the rownum value")

a) take the query select * from emp order by 1
b) then get the first 15 rows (where rownum <= 15) and assign rownum as rnum to each of them
c) then keep only rnum >= 1

to get rows 1..15 of the result set

you should try perhaps 5 .. 10 since emp only has 14 rows.

A reader, March 11, 2005 - 10:32 am UTC

So, if I have a resultset that returns 1000 rows and I want to see the first 10 rows ordered by something, how would I go about doing this most efficiently knowing that users are going to go away after paging down couple of times?

How would I do this?

Thanks



Tom Kyte
March 11, 2005 - 10:56 am UTC

i use the pagination style query we were discussing right above. right on my home page I use this query (not against emp of course :)

salee, March 11, 2005 - 11:52 pm UTC

i want to retrieve some records out of 3 million records (ie i want to retrieve records between 322222 and 322232). using rownum how can i do this?

Tom Kyte
March 12, 2005 - 10:07 am UTC

there is no such thing as "record 322222" you know. You have to have an order by. but to get rows n thru m, see above? I showed how to do that.

delete from table - easiest way

A reader, March 22, 2005 - 4:05 pm UTC

Hi Tom,

I have a table sample as

create table sample
(
num number,
str varchar2(255),
method varchar2(255),
id1 number,
id2 number
);

I have about 32 million rows in this table out of which some rows are duplicated like for eg

num str method id1 id2
1 1 2 201 202
2 1 201 202

that is, the id1 and id2 of two rows might be duplicated. if that is the case, i want to find such rows, keep one and delete the other. is there an easy way to achieve this?

Thanks.


Tom Kyte
March 22, 2005 - 6:13 pm UTC

search this site for

duplicates

we've done this one a couple of ways, many times.
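
(For reference, one common approach -- a sketch against the SAMPLE table above that keeps the first row per (id1, id2) by rowid; test it on a copy first:)

delete from sample
 where rowid in ( select rid
                    from ( select rowid rid,
                                  row_number() over
                                      ( partition by id1, id2
                                        order by rowid ) rn
                             from sample )
                   where rn > 1 );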

analytics and index use

bob, April 07, 2005 - 8:35 am UTC

Tom,

The query you mentioned above:

select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;

I can't understand why this requires a full scan for a large table on our system. (9.0.1.4)
If I had a (store,customer) index and the cardinality of store was 5 or so, couldn't the optimizer read the first two rows in the index for each store? That may be overly simplistic, but I don't understand why I have to do 2000 consistent gets for a 25k row table and a full scan to accomplish this for 10 total rows.

stats are calculated using "analyze table t compute statistics for table for all indexes for all indexed columns"

On a related note: while trying to hint with FIRST_ROWS, I noticed the hint changed the answer.

select * from (
select /*+ FIRST_ROWS(5) */ dt, rownum r from t order by dt)
where r <= 5;

returns all 25k rows, if I dropped the (5) out of the hint, it returned just 5.




Tom Kyte
April 07, 2005 - 10:28 am UTC

well, you have 10 stores

covering 25,000 rows

so that is 2,500 not "5 or so" all of a sudden....

You would need to combine a skip scan of the index with an index range scan and count stopkey. Meaning, we'd have to see "oh, there are only about 10 stores, you want the first two after sorting by customer. we'll skip around in the index and just get two from each". I believe some day it'll be that sophisticated, but remember in general these queries are much more complex.

And in your example, you would have had 5,000 stores -- and skip scanning would have been a really bad idea.

If a hint changes the results, there is a bug, please contact support (but 9014..)

CBO vs. me

bob, April 07, 2005 - 10:57 am UTC

Tom,

I always remember you suggesting we should think about how the optimizer might approach a query using the methods it has available to it. I assumed that if my simple mind could think of an approach then surely the CBO could implement it. :) I understand your point that it might be much more complicated in general.

If in this example, I know the stores (and there are 5), I might be better off, writing a union of 5 queries that each get the first two customers for that store using that concatenated index than this analytic approach. I'll have to test that theory to see.

Thanks for the verification. I thought I was missing something. With regards to 9014, for some reason metalink de-certification notices don't faze the customer.


getting rows N through M of a result set

Hossein Alaei Bavil, May 04, 2005 - 7:16 am UTC

excellent!
I think you are in Oracle core !!
but i wonder why oracle doesn't provide a built-in feature for doing this?


using join or cursors

mohannad, May 15, 2005 - 1:18 pm UTC

i have four tables and i want to use the information from all four, so what is the most efficient way:
1. to create a view joining the four tables
2. or to create only one database data block using oracle developer and use the post-query trigger to retrieve the information from the other tables by using cursors or select into.

my point is that by using post-query, the cursors or select into are performed only on the records fetched from the database (10 records for example), and when you show more records (by moving the scroll bar down, for example) the post-query fires again. but using the join between the tables means that oracle should join all the records at once, which means taking more time. so which choice is better? i am working with huge tables, so time is very important to me.
thanks a lot



Tom Kyte
May 15, 2005 - 1:42 pm UTC

databases were born to join
and be written to

a join does not mean that Oracle has to retrieve the last row before it can give you the first at all.

use FIRST_ROWS (session setting or hint) if getting the first rows is the most important thing to you.

join or cursors

mohannad, May 15, 2005 - 2:42 pm UTC

thank you for your quick response,
but i think i have a bit of a conflict in understanding the meaning of paging. as i understand it, paging means displaying the result of the join only when the user wants to display more results, which can be done by joining the tables at a higher level (using oracle forms for example). the advantage of not joining the tables in the database is that the user may get what he wants without the need to display all the records -- so joining the tables at a higher level means that computation only occurs when the user wants more results to be displayed. am i right or not????

Tom Kyte
May 15, 2005 - 3:30 pm UTC

you are not correct.

select /*+ FIRST_ROWS */ *
from t1, t2, t3, t4, t5, t6, ......
where t1.key = t2.key
and .....


you open that query and fetch the first row and only that first row.

Well, the database is going to read a teeny bit of t1, t2, t3, t4, .... and so on. It is NOT going to process the entire thing!

joining does not mean "gotta get all the rows before you get the last". joins can be done on the fly.

say you have:

drop table t1;
drop table t2;
drop table t3;
drop table t4;

create table t1 as select * from all_objects;
alter table t1 add constraint t1_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'T1',cascade=>true);

create table t2 as select * from all_objects;
alter table t2 add constraint t2_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'t2',cascade=>true);

create table t3 as select * from all_objects;
alter table t3 add constraint t3_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'t3',cascade=>true);

create table t4 as select * from all_objects;
alter table t4 add constraint t4_pk primary key(object_id);
exec dbms_stats.gather_table_stats(user,'t4',cascade=>true);


and you query:

select /*+ first_rows */ t1.object_name, t2.owner, t3.created, t4.temporary
from t1, t2, t3, t4
where t1.object_id = t2.object_id
and t2.object_id = t3.object_id
and t3.object_id = t4.object_id

and fetch 100 rows, or you do it yourself:

declare
    cnt number := 0;
begin
    for x in ( select t1.object_name, t1.object_id from t1 )
    loop
        for y in ( select t2.owner, t2.object_id from t2 where object_id = x.object_id )
        loop
            for z in ( select t3.created, object_id from t3 where object_id = y.object_id )
            loop
                for a in ( select t4.temporary from t4 where t4.object_id = z.object_id )
                loop
                    cnt := cnt+1;
                    exit when cnt >= 100;
                end loop;
                exit when cnt >= 100;
            end loop;
            exit when cnt >= 100;
        end loop;
        exit when cnt >= 100;
    end loop;
end;
/


well, tkprof shows:

SELECT /*+ first_rows */ T1.OBJECT_NAME, T2.OWNER, T3.CREATED, T4.TEMPORARY
FROM
T1, T2, T3, T4 WHERE T1.OBJECT_ID = T2.OBJECT_ID AND T2.OBJECT_ID =
T3.OBJECT_ID AND T3.OBJECT_ID = T4.OBJECT_ID


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.01 0.00 0 611 0 100
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.01 0.00 0 611 0 100

Misses in library cache during parse: 0
Optimizer mode: FIRST_ROWS
Parsing user id: 108 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
100 NESTED LOOPS (cr=611 pr=0 pw=0 time=8091 us)
100 NESTED LOOPS (cr=409 pr=0 pw=0 time=5346 us)
100 NESTED LOOPS (cr=207 pr=0 pw=0 time=3194 us)
100 TABLE ACCESS FULL T1 (cr=5 pr=0 pw=0 time=503 us)
100 TABLE ACCESS BY INDEX ROWID T2 (cr=202 pr=0 pw=0 time=1647 us)
100 INDEX UNIQUE SCAN T2_PK (cr=102 pr=0 pw=0 time=801 us)(object id 67372)
100 TABLE ACCESS BY INDEX ROWID T3 (cr=202 pr=0 pw=0 time=1464 us)
100 INDEX UNIQUE SCAN T3_PK (cr=102 pr=0 pw=0 time=659 us)(object id 67374)
100 TABLE ACCESS BY INDEX ROWID T4 (cr=202 pr=0 pw=0 time=1433 us)
100 INDEX UNIQUE SCAN T4_PK (cr=102 pr=0 pw=0 time=637 us)(object id 67376)


we only do the WORK WE NEED to do, as you ask us. And if you compare the work done here with the work you would make us do:



SELECT T1.OBJECT_NAME, T1.OBJECT_ID FROM T1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.00 0.00 0 5 0 100
********************************************************************************
SELECT T2.OWNER, T2.OBJECT_ID FROM T2 WHERE OBJECT_ID = :B1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 201 0.01 0.01 0 300 0 100
********************************************************************************
SELECT T3.CREATED, OBJECT_ID FROM T3 WHERE OBJECT_ID = :B1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 201 0.00 0.04 0 300 0 100
********************************************************************************
SELECT T4.TEMPORARY FROM T4 WHERE T4.OBJECT_ID = :B1

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 201 0.06 0.01 0 300 0 100



You would do in 905 IOs what we did in 611; you would have many back and forths, binds and executes, whereas we would have one.


IF you can do it in a single query, please -- do it.


select /*+ FIRST_ROWS */ *

mohannad, May 15, 2005 - 4:42 pm UTC

i can not understand what the difference is between

1.select * from items,invoice_d
where items.itemno=invoice_d.itemno;

and
2.select /*+ FIRST_ROWS */ *
from items,invoice_d
where items.itemno=invoice_d.itemno;

they give me the same result and the same number of records fetched each time. (as i understand it, using /*+ FIRST_ROWS */ means optimizing to fetch the first records as fast as possible, but i can't understand why i did not find any difference between the query with first_rows and the one without it)

Thanks a lot..

Tom Kyte
May 15, 2005 - 8:00 pm UTC

if it gave you different answers, that would be a bug.

The plans should be different.  The first query will optimize to find ALL ROWS as efficiently as possible, the second to return the first rows as soon as it can.

the first optimizes for throughput.
the second for initial response time:


ops$tkyte@ORA10G> create table items( itemno number primary key, data char(80) );
 
Table created.
 
ops$tkyte@ORA10G> create table invoice( id number, itemno references items, data char(80),
  2  constraint invoice_pk primary key(id,itemno) );
 
Table created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> exec dbms_stats.set_table_stats( user, 'ITEMS', numrows => 100000 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec dbms_stats.set_table_stats( user, 'INVOICE', numrows => 1000000 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> set autotrace traceonly explain
ops$tkyte@ORA10G> select *
  2    from items, invoice
  3   where items.itemno = invoice.itemno;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5925 Card=1000000 Bytes=195000000)
   1    0   HASH JOIN (Cost=5925 Card=1000000 Bytes=195000000)
   2    1     TABLE ACCESS (FULL) OF 'ITEMS' (TABLE) (Cost=31 Card=100000 Bytes=9500000)
   3    1     TABLE ACCESS (FULL) OF 'INVOICE' (TABLE) (Cost=50 Card=1000000 Bytes=100000000)
 
 
 
ops$tkyte@ORA10G> select /*+ FIRST_ROWS */ *
  2    from items, invoice
  3   where items.itemno = invoice.itemno;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=101481 Card=1000000 Bytes=195000000)
   1    0   NESTED LOOPS (Cost=101481 Card=1000000 Bytes=195000000)
   2    1     TABLE ACCESS (FULL) OF 'INVOICE' (TABLE) (Cost=50 Card=1000000 Bytes=100000000)
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'ITEMS' (TABLE) (Cost=1 Card=1 Bytes=95)
   4    3       INDEX (UNIQUE SCAN) OF 'SYS_C0010764' (INDEX (UNIQUE)) (Cost=0 Card=1)
 
 
 
ops$tkyte@ORA10G> set autotrace off


the hash join will wait until it reads the one table fully, hashes it -- once it does that, you'll start getting rows.

The second one returns rows IMMEDIATELY, but will take longer to return the last row. 

difference between first_rows(n) and all_rows

mohannad, May 15, 2005 - 7:09 pm UTC

what is the main difference between first_rows(n) and all_rows? as i understand it, first_rows(10) for example retrieves the first 10 rows very fast, but if i want to retrieve all the records then i should avoid using first_rows and instead use all_rows. what does all_rows do???

Tom Kyte
May 15, 2005 - 8:03 pm UTC

all rows = optimize to be able to get the last row as fast as possible. you might wait for the first row for a while, but all rows will be returned faster.

first rows = get first row as fast as possible. getting to the last row might take lots longer than with all rows, but we have an end user waiting to see data so get the first rows fast


use all rows for non-interactive things (eg: print this report)
use first rows for things that end users sit and wait for (paging through a query on the web for example)

performance issue

mohannad, May 16, 2005 - 9:33 am UTC

i have two tables with a large amount of records

>>desc working
empno
start_date
.
.
.
with empno as the primary key

>>desc working_hestory
empno
hestory_date
.
.
.
with empno & hestory_date as the primary key

and i want all empno from the working table where their start_date < max of their hestory_date. i wrote the following two queries,
but i found the second is two times faster than the first. i want to know what the reason is??? some people told me that the optimizer will optimize the two queries so that they will have the same speed, but when i use the two queries i find the second query faster than the first. so what is the reason, and is there any general rule about that?

1.select * from working where start_date<(select max(hestory_date) from working_hestory
where working.empno=working_hestory.empno)


2.select * from working, (select empno, max(hestory_date) hestory_date from working_hestory where empno in (select empno from working) group by empno) a
where
a.empno=working.empno
and
start_date<hestory_date;

Thanks A lot





Tom Kyte
May 16, 2005 - 12:57 pm UTC

read the plans. they will be very different.

you would probably make it even faster with

select *
from working, (select empno, max(hestory_date) hestory_date
from working_hestory
group by empno) a
where working.empno = a.empno
and working.start_date < a.hestory_date;



performance issue

mohannad, May 16, 2005 - 1:04 pm UTC

but what is the reason behind the difference between the two queries????
is there any general guideline for this difference?

Tom Kyte
May 16, 2005 - 1:20 pm UTC

I cannot see the plans, you can....



performance issue

mohannad, May 16, 2005 - 1:29 pm UTC

i mean, is there any guideline that i can use when i write any sql query, without the use of plans -- a rule to use a join rather than a subquery, for example.

Tom Kyte
May 16, 2005 - 1:49 pm UTC

if you gain a conceptual understanding of how the query will likely be processed, that would be good -- understand what happens, how it happens (access paths are discussed in the performance guide, I wrote about them in effective oracle by design as well)

but if you use the CBO, it'll try rewriting them as much as it can -- making the difference between the two less and less. No idea what optimizer you were using however.

Also, knowledge of the features of sql available to you (like analytic functions) is key to being successful.

Best of the Best

A Reader, May 20, 2005 - 8:27 am UTC

Howdy,

Thanks for sharing your knowledge with us.

Cheers

Partition

Mohit, May 20, 2005 - 8:49 am UTC

Hi Tom,

Hope you are in good spirits!

Tom, where I can read some more stuff like the one below:

--------------------------------
select *
from ( select t.*, row_number() over (partition by store order by customer) rn
from t
)
where rn <= 2;
---------------------------------

I have never seen this kind of logic in any of the SQL books I have read so far. Can you suggest any book or documentation for learning/reading knowledgeable things like the above, please?

Thanks Tom,
Mohit


Tom Kyte
May 20, 2005 - 10:29 am UTC

Expert One on One Oracle - chapter on analytics.
Effective Oracle by Design ....

On otn.oracle.com -> data warehousing guide.

been in the database since 8.1.6

question about paging

James Su, June 09, 2005 - 10:05 am UTC

hi Tom,

We have a large transactions table with indexes on trans_id (primary key) and trans_time; now I am trying to display the transactions page by page. The startdate and enddate are specified by the user and passed from the front end (usually the first and last day of the month). The front end will also remember the trans_id of the last row of the page and pass it to the database in order to fetch the next page.

main logic:

...........

begin

    if p_direction = 'pagedown' then -- going to next page
        v_sql := 'select trans_id,trans_time,trans_amount from mytransactions where trans_time between :1 and :2 and trans_id<=:3 order by trans_id desc';
    else -- going to last page
        v_sql := 'select trans_id,trans_time,trans_amount from mytransactions where trans_time between :1 and :2 and trans_id>=:3 order by trans_id';
    end if;

    open c_var for v_sql using p_startdate, p_enddate, p_last_trans_id;

    i := 0;

    loop
        FETCH c_var INTO v_row;

        i := i + 1;

        EXIT WHEN c_var%NOTFOUND or i > 30; -- 30: rows per page

        -- add v_row into the array

    end loop;

    close c_var;

    -- return array to the front end
    ...........
end;
/

in this way, if the user can input a trans_id then we can help him locate that page.

Can you tell me whether there's a better approach? The performance does not seem good. Thank you very much.

Tom Kyte
June 09, 2005 - 11:24 am UTC

first rows hint it AND use rownum to get the number of rows you want

select *
from ( select /*+ FIRST_ROWS */ ..... order by trans_id )
where rownum <= 30;


that'll be the query you want -- use static SQL (no need for dynamic) here. Bulk collect the rows and be done


if ( pagedown )
then
select .... BULK COLLECT into ....
from ( select ... )
where rownum <= 30;
else
select .......
end if;
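
For illustration, a minimal sketch of what the "pagedown" branch might look like as static SQL with a bulk collect -- the date window and last trans_id below are made-up sample values, and the record type simply mirrors the three selected columns:

declare
    type trans_rec is record
         ( trans_id     mytransactions.trans_id%type,
           trans_time   mytransactions.trans_time%type,
           trans_amount mytransactions.trans_amount%type );
    type trans_tab is table of trans_rec;
    v_rows          trans_tab;
    v_startdate     date   := to_date( '2005-01-01', 'yyyy-mm-dd' );  -- sample inputs
    v_enddate       date   := to_date( '2005-01-31', 'yyyy-mm-dd' );
    v_last_trans_id number := 999999999;
begin
    select trans_id, trans_time, trans_amount
      bulk collect into v_rows
      from ( select /*+ FIRST_ROWS */ trans_id, trans_time, trans_amount
               from mytransactions
              where trans_time between v_startdate and v_enddate
                and trans_id <= v_last_trans_id
              order by trans_id desc )
     where rownum <= 30;
    dbms_output.put_line( v_rows.count || ' rows fetched' );
end;
/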



first_rows hint works!

James Su, June 09, 2005 - 11:40 am UTC

hi Tom,
It's amazing, thank you so much!!

first_row hint on views

James Su, June 09, 2005 - 12:36 pm UTC

sorry tom, I forgot to mention that mytransactions is actually a view, which is the union all of current table and archived table. Now the problem is:
If I have the trans_id =1 in the archived table, then:
select /*+ FIRST_ROWS */ trans_id from mytransactions where rownum<=30 and trans_id>=1 order by trans_id;

it will return the trans_id in the current table, which is greater than 1.

What can I do with this situation? Thank you.

Tom Kyte
June 09, 2005 - 6:20 pm UTC

you cannot do that regardless.

to "top-n" an ordered set, you MUST:

select *
from ( select /*+ first_rows */ .... ORDER BY .... )
where rownum <= 30;

and if it is in a union all view -- it isn't going to be excessively "first rows friendly"

When is Rownum applied

A reader, July 07, 2005 - 5:47 pm UTC

Hello,
Is rownum applied after order by clause or as the rows are fetched

select * from (
select deptno ,rownum r from dept order by deptno )
where r = 1




Tom Kyte
July 07, 2005 - 6:02 pm UTC

that assigns rownum to the data from dept AND THEN sorts it AND THEN keeps the first row that happened to come from dept before it was sorted.

eg:

select deptno from dept where rownum=1;

would be the same but faster.

if you want the first row after sorting

select * from (select deptno from dept order by deptno) where rownum = 1;

(in this case, actually, select min(deptno) from dept :)

Please help me with a query

reader, August 09, 2005 - 3:57 am UTC

Hi Tom,

I have a table "xyz" where TDATE and BOOKNAME are the columns in it .

The output of the table is like this when i do a "select * from xyz".



TDATE BOOKNAME
--------------- ----------
16-MAY-05 kk6
16-MAY-05 kk6
16-MAY-05 kk6


17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7
17-MAY-05 kk7



I would like to have the output like the one below. Please help me with a SQL query which will give me the number of times a distinct BOOKNAME value is present per TDATE value.



TDATE BOOKNAME count(*)
--------------- ---------- ----------
16-MAY-05 kk7 3
17-MAY-05 kk7 6


Thanks in advance

Tom Kyte
August 09, 2005 - 9:50 am UTC

homework?  (sorry, this looks pretty basic)

look up trunc in the sql reference manual, you'll probably have to trunc the TDATE to the day level:

ops$tkyte@ORA10G> alter session set nls_date_format = 'dd-mon-yyyy hh24:mi:ss';
                                                                                                                                                                      
Session altered.
                                                                                                                                                                      
ops$tkyte@ORA10G> select sysdate, trunc(sysdate) from dual;
                                                                                                                                                                      
SYSDATE              TRUNC(SYSDATE)
-------------------- --------------------
09-aug-2005 09:38:08 09-aug-2005 00:00:00
                                                                                                                                                                      


In order to lose the time component.  And then read up on group by and count(*).

Now, I don't know why your output has kk7 twice, I'll assume that is a typo. But this is a very simple group by on the trunc of tdate and bookname with a count. 
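
For reference, the statement being described would look something like this, assuming TDATE is a DATE column:

select trunc(tdate) tdate, bookname, count(*)
  from xyz
 group by trunc(tdate), bookname
 order by trunc(tdate), bookname;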

ROWNUM performance

Tony, August 22, 2005 - 4:34 pm UTC

Tom,
Thanks a lot for your help and valuable time. I have a very simple query (it looks simple) but it takes more than 4 mins to execute,

select * from (select L.LEG_ID from leg_t L WHERE
L.STAT_ID = 300 AND
L.LEG_CAT = 2 AND
L.D_CD = 'CIS' AND
L.P_ID is null order by L.LEG_ID desc)
where rownum <= 16;

LEG_ID is the primary key(PK_LEG), I also have index(leg_i1) on (STAT_ID,LEG_CAT,D_CD,P_ID,leg_id desc).

Now if I run this query as is it takes about 4-5 mins and the plan is:

SELECT STATEMENT Cost = 90
COUNT STOPKEY
VIEW
TABLE ACCESS BY INDEX ROWID LEG_T
INDEX FULL SCAN DESCENDING PK_LEG
The query doesn't use the leg_i1 index..shouldn't it?

Secondly if I run the internal query:

select L.LEG_ID from leg_t L WHERE
L.STAT_ID = 300 AND
L.LEG_CAT = 2 AND
L.D_CD = 'CIS' AND
L.P_ID is null order by L.LEG_ID desc

it uses the index leg_i1 and comes back in milli-seconds.

I tried the rule hint on the query and it come back in milliseconds again instead of 4-5 minutes.( I can't use hints in the application.)

Please guide.

Tom Kyte
August 24, 2005 - 3:28 am UTC

it is trying to do first rows here (because of the top-n, the rownum) and the path to first rows is to use that index to "sort" the data, read the data sorted.

But apparently, you have to search LOTS of rows to find the ones of interest - hence it takes longer.

either try all_rows optimization OR put leg_id on the leading edge of that index instead of way at the end
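
For example, something along these lines (LEG_I2 is a hypothetical name for the reworked index):

create index leg_i2 on leg_t ( leg_id desc, stat_id, leg_cat, d_cd, p_id );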

ROWNUM performance

Tony, August 29, 2005 - 4:23 pm UTC

Tom,
Thanks a lot for your valuable time, I tried the index as you suggested but still the optimizer doesn't pick it,(default optimizer mode is all_rows).

This table(leg_t) contains about one million rows, and STAT_ID(not null) column contains just 8 distinct values,
LEG_CAT(not null) column contains just 2 distinct value
D_CD )not null) column contains just 1 distinct value

I can't use a bitmap index. what other option do you recommend so that the optimizer picks up the index (as it does when the mode is RULE)? please help.



Tom Kyte
August 30, 2005 - 1:24 am UTC

hint it all rows in the inline view. (it sees the rownum..)

ROWNUM performance

Tony, August 30, 2005 - 10:12 am UTC

Tom,
Thanks again, I tried all_rows as you suggested but still it doesn't pick the index, it still goes for the primary key index, which takes 5 minutes. Here is the plan with all_rows:

SELECT STATEMENT Cost = 10
COUNT STOPKEY
VIEW
TABLE ACCESS BY INDEX ROWID LEG_T
INDEX FULL SCAN DESCENDING PK_LEG

Do you suggest histograms for such columns? which columns are the best candidates for histograms (if you think that can help)?

Please help, I even tried to play with optimizer_index_caching, optimizer_index_cost_adj parameters but couldn't get better results.

Tom Kyte
August 30, 2005 - 12:22 pm UTC

select *
  from (
        select *
          from ( select /*+ no_merge */ L.LEG_ID
                   from leg_t L
                  WHERE L.STAT_ID = 300
                    AND L.LEG_CAT = 2
                    AND L.D_CD = 'CIS'
                    AND L.P_ID is null
               )
         order by LEG_ID desc
       )
 where rownum <= 16;



FIRST_ROWS

Jon Roberts, September 07, 2005 - 11:12 am UTC

I had implemented the suggested solution some time back, and when it finally got to production it was rather slow when using an order by in the innermost query.
We allow users to sort by a number of columns and when sorting, it would run much slower. Using autotrace, I could see that I had the same plan but with the larger production table, it had more data to search and it took longer to do the full table scan.

I created indexes on the columns people sort by but it wouldn't use the indexes. I just re-read this discussion and found your suggestion of using the first_rows hint. That did the trick. It uses the indexes now and everything is nice and fast.

Thanks for the great article!

Excellent Thread

Manas, November 03, 2005 - 1:28 pm UTC

Thanks Tom.
Before going through this thread, I was thinking of implementing the pagination using a ref cursor (dynamic) and bulk collect.

How to find the record count of a ref cursor ?

VKOUL, December 05, 2005 - 6:44 pm UTC

Hi Tom,

Is it possible ? (kind of collection.count)

procedure (par1 in number, par2 out refcursor, par3 out number) is
begin
. . .
open par2 for select . . .;
at this point how can I get the number of records in par2.
par3 := number of records;
end;
/


Tom Kyte
December 06, 2005 - 5:35 am UTC

you cannot, no one KNOWS what the record count is until.....

you've fetched the last row.


consider this:


open rc for select * from ten_billion_row_table;


it is not as if we copy the entire result set someplace, in fact, we typically do no work to open a cursor (no IO is performed), it is not until you actually start asking for data that we start getting it and we have no idea how many rows will be returned until they are actually returned.

No use in doing work that you might well never be asked to do.

A reader, December 23, 2005 - 4:47 am UTC

Awesome !!!

Page through a ref cursor using bulk collect

Barry Chase, January 13, 2006 - 6:15 am UTC

Can I use your bulk collect and first_rows logic while building a ref cursor that I pass back to a front end, which permits them to page through a large dataset 10, 25, or 50 records at a time, while still maintaining performance at the end as well as the first part of the query?

Tom Kyte
January 13, 2006 - 11:15 am UTC

don't use bulk collect - just return the ref cursor and let the front end array fetch as it needs rows.
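
A sketch of that shape -- the procedure name and the query are made up for illustration:

create or replace procedure get_page_cursor( p_rc out sys_refcursor )
as
begin
    open p_rc for
        select /*+ FIRST_ROWS */ empno, ename, hiredate
          from emp
         order by hiredate desc;
    -- the front end array fetches 10/25/50 rows at a time from p_rc
    -- and simply stops fetching when the user stops paging
end;
/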

Follow up question

Barry C, January 14, 2006 - 10:52 am UTC

Okay, no on bulk collecting. Our frontend is potentially pulling back several thousand records. I would prefer that they apply more criteria, but our administrative users have decided that they feel differently. Needless to say, for a webpage, a 5-10 second return for all of the records is far from acceptable -- they say this is unacceptable. I tried the min/max row thing and it works great at the early part of the query, but performance progressively gets worse as I go down the result set... say... show me recs 900-950.

So I am supposed to come up with a solution for which I am not sure there is a solution. Any thoughts or commentary ?


Tom Kyte
January 15, 2006 - 3:45 pm UTC

only give them a NEXT button and no "page 55" button.

Do it like google. Ask your end users to go to page 101 of this search:

http://www.google.com/search?q=oracle&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official

also, ask them to read the time it took to generate each page as they try to get there.

tell them "google = gold standard, if they don't do it, neither will I"

I give you a next button, nothing more.
Google lets you hit 10 at a time, nothing more.

And google will say this:
Sorry, Google does not serve more than 1000 results for any query.

if you try to go page page 100.

Further enhancement

A reader, January 16, 2006 - 5:51 am UTC

Hi Tom,

excellent thread. In addition to getting M..N rows I would also like to add column sorting (the column header will be a link). How can I do this efficiently?

Thanks

RP

Tom Kyte
January 16, 2006 - 9:39 am UTC

read original answer? I had "including order by...."??

A reader, January 16, 2006 - 12:14 pm UTC

..with the potential to use any of the columns in the table. that means either i create a set of sql statements, one for each column (plus ASC or DESC), or i generate the sql statement dynamically.

If i do it dynamically would i not lose the magic of *bind variables*?

Apologies, that was what i meant to ask.

R



Tom Kyte
January 16, 2006 - 12:51 pm UTC

you would not lose the magic of bind variables, you would however have a copy of the sql statement for each unique sort combination (which is not horrible, unless you have hundreds/thousands of such sort orders)

A reader, January 16, 2006 - 1:08 pm UTC

and if the number of static statements got too large, could i do it dynamically like this:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1288401763279

?? Is it relevent in this context(No pun intended)?

R

Tom Kyte
January 16, 2006 - 1:47 pm UTC

Yes, there are many ways to bind

a) static sql in plsql does it nicely
b) sys_context with open refcursor for....
c) dbms_sql - with dbms_sql.bind_variable
d) open refcursor for .... USING <when you know the number of binds>


you could also:


order by decode( p_input, 1, c1 ) ASC, decode( p_input, -1, c1 ) DESC,
decode( p_input, 2, c2 ) ASC, decode( p_input, -2, c2 ) DESC,
....

in order to have one order by statement - you would never be able to use an index to retrieve the data "sorted" (but you might not be able to anyway in many cases)...
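
Putting that together, a sketch -- assuming a hypothetical table t(c1, c2); the procedure name and the p_input encoding are illustrative:

create or replace procedure get_sorted
( p_input in  number,   -- 1/-1 = c1 asc/desc, 2/-2 = c2 asc/desc
  p_rc    out sys_refcursor )
as
begin
    open p_rc for
        select c1, c2
          from t
         order by decode( p_input,  1, c1 ) ASC, decode( p_input, -1, c1 ) DESC,
                  decode( p_input,  2, c2 ) ASC, decode( p_input, -2, c2 ) DESC;
end;
/

One parsed statement covers every sort order; p_input is bound automatically when the cursor is opened, so the shared pool sees a single SQL statement.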


what if your query get info back from more than one table?

Michelle, February 09, 2006 - 10:08 pm UTC

What would the syntax look like?
Thank you!

Tom Kyte
February 10, 2006 - 12:35 pm UTC

I assume you are referring to the query right above this?

select a.c1, a.c2, b.c3, b.c4, ....
from a,b....
where ....
order by decode( p_input, 1, a.c1 ) ASC, decode( p_input, -1, a.c1 ) DESC,
decode( p_input, 2, a.c2 ) ASC, decode( p_input, -2, a.c2 ) DESC,
decode( p_input, 3, b.c3 ) ASC, decode( p_input, -3, b.c3 ) DESC,
....


not any different than if there was one table really.

Get the total

Nitai, March 01, 2006 - 11:24 am UTC

Hi Tom

How can I get the total of all found records with this query:

SELECT rn, id
  FROM (
        SELECT ROWNUM AS rn, id
          FROM (
                SELECT id
                  FROM test
               )
         WHERE ROWNUM <= 30
       )
 WHERE rn > 0

I tried to put count(rn) in there but that only returns the 30 records (of course); what I need is the total number of records this query found. Is this even possible within the same query? Thank you for your kind help.


Tom Kyte
March 01, 2006 - 1:48 pm UTC

why?

goto google, search for oracle, tell me if you think their count is accurate. then, goto page 101 of the search results and tell me what the first link is.

nitai, March 01, 2006 - 4:21 pm UTC

Call me stupid, but what is your point. When I go to Google and enter Oracle I get this:

Results 411 - 411 of about 113,000,000 for oracle

I get to go until page 43 and that's it. Ok, that means it is not possible?

All I really need is how many total found records there are (meaning the 113,000,000 in the case of the google search) :-)

Tom Kyte
March 02, 2006 - 9:04 am UTC

do you think that google actually counted the results?

No, they don't

There is no page 101 on google.


They don't let you go that far.


My point - made many times - counting the exact number of hits to paginate through a result set on the web is "not smart"

I refuse to do it.

I won't show how.

It is just a way to burn CPU like mad, make everything really really really slow.



Got your point

Nitai, March 02, 2006 - 9:13 am UTC

Ok Tom, got your point. But what about if I have an ecommerce site and customers are searching for a product? They would want to know how many products they found, thus I would need the number of overall found records.

At the moment I would have to run the query two times, one that gets me the total number and one with the rownum > 50 and so on. I don't think that is very performant either.

What else to do?

Tom Kyte
March 02, 2006 - 12:44 pm UTC

Just tell them "you are looking at 1 thru 10 of more than 10"

Or guess - just like I do, google does. Give them a google interface - look at google as the gold standard here. Google ran out of pages and didn't get upset or anything - if you tried to goto page 50, it just put you on the last page.


You DO NOT EVER need to tell them

you are looking at 1 through 10 of 153,531 items

Just tell them, here is 1 through 10, there are more, next will get you to them.

Or give them links to the first 10 pages (like google) and if they click on page 10 but there isn't a page 10, show them the last page and then only show them pages 1..N in the click links.

Be like google.

Sorry, not going to tell you how to burn cpu like mad, this is one of my pet peeves - this counting stuff.
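
For readers who still want a "there are more" indicator, a sketch of one cheap trick (using the TEST table from the question): fetch one row beyond the page size; if the extra row comes back, there is a next page -- nothing is ever counted:

select rn, id
  from ( select rownum as rn, id
           from ( select id from test order by id )
          where rownum <= 31 )   -- page size 30, plus 1 "sentinel" row
 where rn > 0;

If 31 rows come back, display the first 30 and enable the NEXT button; if 30 or fewer come back, this is the last page.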

10gR2 optimizer problem

A reader, March 10, 2006 - 3:43 am UTC

Hi Tom,

I could hardly believe it when I saw it. This is a tkprof of the same query; the second time it has select * from (<original_query>) around it. Can you give us a hint what might be happening here?

SELECT
a.*, ROWNUM AS rnum
FROM (SELECT /*+first_rows*/
s.userid, s.username, s.client_ip, s.timestamp_,
s.DURATION, s.calling_station_id,
s.called_station_id, s.acct_terminate_cause,
s.nas_port_type
FROM dialin_sessions s
WHERE s.client_ip LIKE '213.240.3.%'
AND s.username LIKE 'c%'
AND s.timestamp_end >= 1136070000
AND s.timestamp_ <= 1138751940
ORDER BY timestamp_ DESC) a
WHERE ROWNUM <= 26

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.06 0.06 0 6801 0 25
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.06 0.07 0 6801 0 25

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 63

Rows Row Source Operation
------- ---------------------------------------------------
25 COUNT STOPKEY (cr=6801 pr=0 pw=0 time=74387 us)
25 VIEW (cr=6801 pr=0 pw=0 time=74303 us)
25 TABLE ACCESS BY INDEX ROWID DIALIN_SESSIONS (cr=6801 pr=0 pw=0 time=74221 us)
7050 INDEX RANGE SCAN DESCENDING DIALIN_SESSIONS_TIMESTAMP (cr=21 pr=0 pw=0 time=14187 us)(object id 53272)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 6.39 6.39
********************************************************************************


SELECT *
FROM (SELECT
a.*, ROWNUM AS rnum
FROM (SELECT /*+first_rows*/
s.userid, s.username, s.client_ip, s.timestamp_,
s.DURATION, s.calling_station_id,
s.called_station_id, s.acct_terminate_cause,
s.nas_port_type
FROM dialin_sessions s
WHERE s.client_ip LIKE '213.240.3.%'
AND s.username LIKE 'c%'
AND s.timestamp_end >= 1136070000
AND s.timestamp_ <= 1138751940
ORDER BY timestamp_ DESC) a
WHERE ROWNUM <= 26)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 34.45 68.05 267097 325479 0 25
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 34.45 68.06 267097 325479 0 25

Misses in library cache during parse: 1
Optimizer mode: FIRST_ROWS
Parsing user id: 63

Rows Row Source Operation
------- ---------------------------------------------------
25 VIEW (cr=325479 pr=267097 pw=0 time=68055294 us)
25 COUNT STOPKEY (cr=325479 pr=267097 pw=0 time=68055230 us)
25 VIEW (cr=325479 pr=267097 pw=0 time=68055196 us)
25 SORT ORDER BY STOPKEY (cr=325479 pr=267097 pw=0 time=68055118 us)
12268 TABLE ACCESS FULL DIALIN_SESSIONS (cr=325479 pr=267097 pw=0 time=23052374 us)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 17.54 17.55
db file sequential read 6768 0.02 3.62
db file scattered read 28978 0.07 41.07
latch: cache buffers lru chain 1 0.00 0.00
********************************************************************************

Tom Kyte
March 10, 2006 - 12:15 pm UTC

you would have to provide a little more context.


what happened in between these two tkprofs that I assume were taken at different times.

Optimizer problem

Zeljko Vracaric, March 13, 2006 - 4:24 am UTC

No, it was one session; the queries are the only ones in that database session. I was trying to optimize one of our most used PHP scripts (we are migrating to Oracle from Sybase). I analyzed the 10053 trace that day, but the only thing I spotted is that in the final section the optimizer goal for the second query was all_rows, not first_rows. I tried to change the optimizer mode by alter session and I got the same results. It is first_rows that is essential for the query taking the plan with the index, which enables the stop key to stop processing after 25 rows that match the criteria.
It is a very complicated script because it has to answer a lot of really different questions. For instance: give me all sessions that were active at some point in time, and on the other hand give me all sessions in a long period of time matching some criteria. We have to detect the intersection of two intervals and avoid a FTS or an index scan on millions of rows, finding criteria to limit the number of rows processed. Optimizing it is of course a subject for another thread. But this problem with a simple inline view was unexpected.


date java question

winny, March 24, 2006 - 8:06 pm UTC

Create a Date class with the following capabilities:
a) Output the date in multiple formats such as
DDD YYYY
MM/DD/YY
June 14, 1992
b) Use overloaded constructors to create Date objects initialized with dates of the formats in part (a).
Hint: you can compare strings using the method equals. Suppose you have two string references s1 and s2. If those strings are equal, s1.equals(s2) returns true. Otherwise, it returns false.


10gR2 linux another similar problem

Zeljko Vracaric, March 27, 2006 - 3:28 am UTC

Hi Tom,

I've found another similar problem with select * from (<query>). This time I used autotrace to document it. It looks like a bug in the optimizer using wrong cardinalities, or we are doing something very wrong in our php project.

BILLING@dev> select * from ecm_invoiceitems;

253884 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 4279212659

--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 253K| 49M| 542 (8)| 00:00:03 |
| 1 | TABLE ACCESS FULL| ECM_INVOICEITEMS | 253K| 49M| 542 (8)| 00:00:03 |
--------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
253920 consistent gets
0 physical reads
0 redo size
188764863 bytes sent via SQL*Net to client
107899015 bytes received via SQL*Net from client
761644 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
253884 rows processed

BILLING@dev> select a.*,rownum as rnum from(
2 select /*+first_rows */i.invoiceid,i.invoice_number,ii.product_name,p.amount,co.company,i.time_invoice, c.code as customer_code, i.customerid, i.statusid,i.cost_total
3 from ecm_invoiceitems ii,cm_customers c, cm_contacts co,ecm_invoices i
4 left join ecm_payments_invoices ip on ( i.invoiceid=ip.invoiceid)
5 left join ecm_payments p on ( p.paymentid=ip.paymentid )
6 where
7 i.invoiceid=ii.invoiceid and i.customerid = c.customerid and c.contactid = co.contactid and co.type_ = 'PERSON' and ((p.paymentid is null and i.cost_total between 200-1 and 200+1) or p.amount=200)
8 and (p.paymentid is null or p.is_success in ('U', 'S'))
9 and i.statusid not in (0,303) and time_invoice>to_date('2005-11-01','yyyy-mm-dd')
10 order by i.statusid desc, p.amount,i.time_invoice,i.invoiceid
11 ) a where rownum<25;


Execution Plan
----------------------------------------------------------
Plan hash value: 440181276

----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 24 | 6936 | | 35762 (1)| 00:02:27 |
|* 1 | COUNT STOPKEY | | | | | | |
| 2 | VIEW | | 33450 | 9440K| | 35762 (1)| 00:02:27 |
|* 3 | SORT ORDER BY STOPKEY | | 33450 | 5749K| 11M| 35762 (1)| 00:02:27 |
| 4 | TABLE ACCESS BY INDEX ROWID | ECM_INVOICEITEMS | 1 | 56 | | 1 (0)| 00:00:01 |
| 5 | NESTED LOOPS | | 33450 | 5749K| | 34658 (1)| 00:02:23 |
|* 6 | FILTER | | | | | | |
| 7 | NESTED LOOPS OUTER | | 24819 | 2908K| | 28438 (1)| 00:01:57 |
| 8 | NESTED LOOPS OUTER | | 24819 | 2593K| | 22219 (1)| 00:01:32 |
| 9 | NESTED LOOPS | | 24751 | 2296K| | 16019 (1)| 00:01:06 |
| 10 | NESTED LOOPS | | 24751 | 1571K| | 9817 (1)| 00:00:41 |
|* 11 | TABLE ACCESS BY INDEX ROWID| ECM_INVOICES | 24751 | 966K| | 3615 (1)| 00:00:15 |
|* 12 | INDEX RANGE SCAN | INVOICES_TIME_INVOICE | 24856 | | | 17 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID| CM_CUSTOMERS | 1 | 25 | | 1 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | CM_CUSTOME_5955332052 | 1 | | | 1 (0)| 00:00:01 |
|* 15 | TABLE ACCESS BY INDEX ROWID | CM_CONTACTS | 1 | 30 | | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | CM_CONTACT_17544893292 | 1 | | | 1 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | ECM_PAYMEN_800553712 | 1 | 12 | | 1 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | ECM_PAYMENTS | 1 | 13 | | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | ECM_PAYMEN_9475344592 | 1 | | | 1 (0)| 00:00:01 |
|* 20 | INDEX RANGE SCAN | ECM_INVOICEITEMS_INVOICEID | 1 | | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(ROWNUM<25)
3 - filter(ROWNUM<25)
6 - filter(("P"."PAYMENTID" IS NULL AND "I"."COST_TOTAL">=199 AND "I"."COST_TOTAL"<=201 OR "P"."AMOUNT"=200) AND
("P"."PAYMENTID" IS NULL OR ("P"."IS_SUCCESS"='S' OR "P"."IS_SUCCESS"='U')))
11 - filter("I"."STATUSID"<>0 AND "I"."STATUSID"<>303)
12 - access("I"."TIME_INVOICE">TO_DATE('2005-11-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
14 - access("I"."CUSTOMERID"="C"."CUSTOMERID")
15 - filter("CO"."TYPE_"='PERSON')
16 - access("C"."CONTACTID"="CO"."CONTACTID")
17 - access("I"."INVOICEID"="IP"."INVOICEID"(+))
19 - access("P"."PAYMENTID"(+)="IP"."PAYMENTID")
20 - access("I"."INVOICEID"="II"."INVOICEID")


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
131721 consistent gets
0 physical reads
0 redo size
1134 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

BILLING@dev> select * from
2 (select a.*,rownum as rnum from(
3 select /*+first_rows */i.invoiceid,i.invoice_number,ii.product_name,p.amount,co.company,i.time_invoice, c.code as customer_code, i.customerid, i.statusid,i.cost_total
4 from ecm_invoiceitems ii,cm_customers c, cm_contacts co,ecm_invoices i
5 left join ecm_payments_invoices ip on ( i.invoiceid=ip.invoiceid)
6 left join ecm_payments p on ( p.paymentid=ip.paymentid )
7 where
8 i.invoiceid=ii.invoiceid and i.customerid = c.customerid and c.contactid = co.contactid and co.type_ = 'PERSON' and ((p.paymentid is null and i.cost_total between 200-1 and 200+1) or p.amount=200)
9 and (p.paymentid is null or p.is_success in ('U', 'S'))
10 and i.statusid not in (0,303) and time_invoice>to_date('2005-11-01','yyyy-mm-dd')
11 order by i.statusid desc, p.amount,i.time_invoice,i.invoiceid
12 ) a where rownum<25
13 );


Execution Plan
----------------------------------------------------------
Plan hash value: 1216454693

-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 604 | 1147 (2)| 00:00:05 |
| 1 | VIEW | | 2 | 604 | 1147 (2)| 00:00:05 |
|* 2 | COUNT STOPKEY | | | | | |
| 3 | VIEW | | 2 | 578 | 1147 (2)| 00:00:05 |
|* 4 | SORT ORDER BY STOPKEY | | 2 | 352 | 1147 (2)| 00:00:05 |
| 5 | CONCATENATION | | | | | |
|* 6 | FILTER | | | | | |
| 7 | NESTED LOOPS OUTER | | 1 | 176 | 7 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 163 | 6 (0)| 00:00:01 |
| 9 | NESTED LOOPS OUTER | | 2 | 266 | 5 (0)| 00:00:01 |
| 10 | NESTED LOOPS | | 2 | 242 | 4 (0)| 00:00:01 |
| 11 | NESTED LOOPS | | 1 | 96 | 3 (0)| 00:00:01 |
| 12 | TABLE ACCESS FULL | ECM_INVOICEITEMS | 198 | 11088 | 2 (0)| 00:00:01 |
|* 13 | TABLE ACCESS BY INDEX ROWID| ECM_INVOICES | 1 | 40 | 1 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | ECM_INVOIC_14275361692 | 1 | | 1 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID | CM_CUSTOMERS | 1 | 25 | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | CM_CUSTOME_5955332052 | 1 | | 1 (0)| 00:00:01 |
|* 17 | INDEX RANGE SCAN | ECM_PAYMEN_800553712 | 1 | 12 | 1 (0)| 00:00:01 |
|* 18 | TABLE ACCESS BY INDEX ROWID | CM_CONTACTS | 1 | 30 | 1 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | CM_CONTACT_17544893292 | 1 | | 1 (0)| 00:00:01 |
| 20 | TABLE ACCESS BY INDEX ROWID | ECM_PAYMENTS | 1 | 13 | 1 (0)| 00:00:01 |
|* 21 | INDEX UNIQUE SCAN | ECM_PAYMEN_9475344592 | 1 | | 1 (0)| 00:00:01 |
|* 22 | FILTER | | | | | |
| 23 | NESTED LOOPS OUTER | | 1 | 176 | 35 (0)| 00:00:01 |
| 24 | NESTED LOOPS | | 1 | 163 | 34 (0)| 00:00:01 |
| 25 | NESTED LOOPS | | 2 | 266 | 33 (0)| 00:00:01 |
| 26 | NESTED LOOPS OUTER | | 2 | 216 | 32 (0)| 00:00:01 |
| 27 | NESTED LOOPS | | 2 | 192 | 31 (0)| 00:00:01 |
| 28 | TABLE ACCESS FULL | ECM_INVOICEITEMS | 198 | 11088 | 2 (0)| 00:00:01 |
|* 29 | TABLE ACCESS BY INDEX ROWID| ECM_INVOICES | 1 | 40 | 1 (0)| 00:00:01 |
|* 30 | INDEX UNIQUE SCAN | ECM_INVOIC_14275361692 | 1 | | 1 (0)| 00:00:01 |
|* 31 | INDEX RANGE SCAN | ECM_PAYMEN_800553712 | 1 | 12 | 1 (0)| 00:00:01 |
| 32 | TABLE ACCESS BY INDEX ROWID | CM_CUSTOMERS | 1 | 25 | 1 (0)| 00:00:01 |
|* 33 | INDEX UNIQUE SCAN | CM_CUSTOME_5955332052 | 1 | | 1 (0)| 00:00:01 |
|* 34 | TABLE ACCESS BY INDEX ROWID | CM_CONTACTS | 1 | 30 | 1 (0)| 00:00:01 |
|* 35 | INDEX UNIQUE SCAN | CM_CONTACT_17544893292 | 1 | | 1 (0)| 00:00:01 |
| 36 | TABLE ACCESS BY INDEX ROWID | ECM_PAYMENTS | 1 | 13 | 1 (0)| 00:00:01 |
|* 37 | INDEX UNIQUE SCAN | ECM_PAYMEN_9475344592 | 1 | | 1 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(ROWNUM<25)
4 - filter(ROWNUM<25)
6 - filter("P"."AMOUNT"=200 AND ("P"."PAYMENTID" IS NULL OR ("P"."IS_SUCCESS"='S' OR
"P"."IS_SUCCESS"='U')))
13 - filter("I"."TIME_INVOICE">TO_DATE('2005-11-01 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
"I"."STATUSID"<>0 AND "I"."STATUSID"<>303)
14 - access("I"."INVOICEID"="II"."INVOICEID")
16 - access("I"."CUSTOMERID"="C"."CUSTOMERID")
17 - access("I"."INVOICEID"="IP"."INVOICEID"(+))
18 - filter("CO"."TYPE_"='PERSON')
19 - access("C"."CONTACTID"="CO"."CONTACTID")
21 - access("P"."PAYMENTID"(+)="IP"."PAYMENTID")
22 - filter(("P"."PAYMENTID" IS NULL OR ("P"."IS_SUCCESS"='S' OR "P"."IS_SUCCESS"='U')) AND
"P"."PAYMENTID" IS NULL AND LNNVL("P"."AMOUNT"=200))
29 - filter("I"."COST_TOTAL"<=201 AND "I"."TIME_INVOICE">TO_DATE('2005-11-01 00:00:00', 'yyyy-mm-dd
hh24:mi:ss') AND "I"."COST_TOTAL">=199 AND "I"."STATUSID"<>0 AND "I"."STATUSID"<>303)
30 - access("I"."INVOICEID"="II"."INVOICEID")
31 - access("I"."INVOICEID"="IP"."INVOICEID"(+))
33 - access("I"."CUSTOMERID"="C"."CUSTOMERID")
34 - filter("CO"."TYPE_"='PERSON')
35 - access("C"."CONTACTID"="CO"."CONTACTID")
37 - access("P"."PAYMENTID"(+)="IP"."PAYMENTID")


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1214653 consistent gets
0 physical reads
0 redo size
1134 bytes sent via SQL*Net to client
385 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

BILLING@dev>


the first query plan is ok but the second is very wrong. We have a lot of queries like this in our web application that we are trying to migrate from Sybase. I'd hate to hint queries like this; is there any other solution?

Tom Kyte
March 27, 2006 - 9:54 am UTC

can you tell me what exactly is wrong - given that I spend seconds looking at review/followups and only look at them once.



select * from (<query>)

Zeljko Vracaric, March 28, 2006 - 2:18 am UTC

Hello Tom,

The problem is that the optimizer changes the query plan when we put select * from () around it. I'm sorry I didn't point it out clearly.

I cannot reproduce it on a small and simple example, so I sent real examples from our application in previous posts. We use this construction you recommended a lot:

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

but,


select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS

has a good query plan and its performance is ok. However, putting select * from () around it (with or without where rnum>1), the query plan is changed and I believe it is very wrong.

Tracing 10053 on the query in my first post (the one with tkprof), I found that the optimizer goal was probably changed from first_rows to all_rows. I'm not sure because I'm not an expert in debugging 10053 traces.
In my later post (autotrace) I found another query with a similar problem, and looking at the plan I believe the cardinality for the full scan of the ecm_invoiceitems table was wrong even though the statistics were good; I included select * from ecm_invoiceitems in the post to prove that. So basically in my previous post I have 3 queries with autotrace.

select * from ecm_invoiceitem to show that optimizer knows cardinality.

select ...(complex query with ecm_invoiceitems in from) with correct plan for first_rows hint

select * from (select ...(complex query with ecm_invoiceitems in from)) this has wrong plan, plan is different than previous.

I'm surprised by this third query plan. I expected it to be the same as the plan without select * from () around it.

So, trying to be short, I wrote another large post. Keeping it short and explaining things in a simple manner is a talent; I think you have that gift, and that's why your book and site are very popular.

Zeljko


Is it possible in SQL or PL/SQL ?

Parag Jayant Patankar, April 04, 2006 - 2:16 am UTC

Hi Tom,

I am using an Oracle 9.2 database. I have the following data

drop table toto;
create table toto
(
  r char(10)
)
organization external
(
  type oracle_loader
  default directory data_dir
  access parameters
  (
    records delimited by newline
    logfile data_dir:'toto.log'
  )
  location ('pp.sysout')
)
reject limit unlimited
/

In pp.sysout I am having following data

A
B
C
D=10
E
F
G
A
B
C
D=20
E
F
G
H
I
A
B
C
D=20
E
F
G
H
A
B
C
D=30
E
F
G
H

I want each set of results, starting from 'A' up to the next 'A', in a different spool file per distinct value of 'D'.

For e.g.
1. spool file xxx.1 will contain
A
B
C
D=10
E
F
G

2. spool file xxx.2 will contain ( it will have two sets because D=20 appears twice in the data )

A
B
C
D=20
E
F
G
H
I
A
B
C
D=20
E
F
G
H

3. spool file xxx.3 will contain

A
B
C
D=30
E
F
G
H

Kindly let me know, is it possible to do that? If yes, please show me how.

thanks & regards
pjp

Tom Kyte
April 04, 2006 - 9:55 am UTC

I don't know of a way to do that in sqlplus - not with the multiple spools.



It is possible

Michel Cadot, April 06, 2006 - 3:59 am UTC

Hi,

Put the following in a file and execute it.

col sp fold_after
break on sp
set head off
set feed off
set pages 0
set recsep off
set colsep off
spool t
with
t2 as (
  select r,
         case
           when instr(r, '=') != 0
           then to_number(substr(r, instr(r,'=')+1))
         end value,
         rownum rn,
         max(case when r = 'A' then rownum end)
           over (order by rownum) grp
    from toto
),
t3 as (
  select r,
         max(value) over (partition by grp) value,
         rn, grp
    from t2
),
t4 as (
  select r, value, rn,
         max(grp)
           over (partition by value order by rn
                 rows between unbounded preceding and unbounded following)
           grp
    from t3
)
select 'spool file'||value sp,
       'prompt '||r
  from t4
 order by value, rn
/
prompt spool off
spool off
@t.lst

It does not work if you have the same D value in non-consecutive groups.
Also, the spool file names contain the D value instead of a consecutive number.

Regards
Michel


Tom Kyte
April 06, 2006 - 10:03 am UTC

interesting workaround - write a single spool that is itself a sqlplus script that does a spool and echo for each file :)

The whole world

Michel Cadot, April 06, 2006 - 10:24 am UTC

Give us SQL*Plus, case expression, instr, substr and analytic functions, connect by and we can handle the SQL world with the help of the model clause from time to time. :))

Generating SQL or SQL*Plus scripts with SQL in SQL*Plus is one of my favorite tools, along with "new_value" on a column to generate polymorphic queries.
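
For example, a quick sketch of that new_value idea (user_tables and the v_tname variable name are just illustrative):

column tname new_value v_tname noprint
select table_name tname from user_tables where rownum = 1;
select count(*) from &v_tname;

The first query stores its result in the substitution variable v_tname; the second query is then built from data.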

Cheers
Michel


A reader, April 21, 2006 - 1:57 pm UTC

Hi Tom,

In your reply to the initial post in this thread, for paging results you suggested the query

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS
/

I have something like this for one of our web applications. It works fine; the only problem I am facing is that when MAX_ROWS = 20 and MIN_ROWS = 1 the query returns almost instantaneously (~2 secs), but if I browse to the last page in the web page, so that MAX_ROWS = 37612 and MIN_ROWS = 37601, the query takes some time (~18 secs). Is this expected behaviour?

Thanks for your help.


Tom Kyte
April 21, 2006 - 3:36 pm UTC

sure - just like on google - google "oracle" and then look at the time to return each page.

first - tell us how long for page 1, and then for page 99.

and tell us how long for page 101 :)


If you want the "last page" - to me you really mean "i want the first page, after sorting the data properly"


No one - NO ONE is going to hit page down that many times (and if you give them a last page button - that is YOUR FAULT - stop doing that, have them sort the opposite way and get the FIRST PAGE). Look at google - do what they do.
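
For instance, the "last 10 rows" can be served as a first page by reversing the sort - a sketch, with emp and ename standing in for the real query and sort key:

select *
  from ( select a.*, rownum rnum
           from ( select ename, hiredate
                    from emp
                   order by ename DESC ) a   -- the reversed sort
          where rownum <= 10 )               -- "page 1" of the reversed order
 where rnum >= 1
 order by ename                              -- optionally re-sort for display
/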

ORDER BY in inner query

Viorel Hobinca, May 10, 2006 - 12:08 pm UTC

In

select *
from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS )
where rnum >= MIN_ROWS

does the ORDER BY in the inner query have to include a primary key or some unique field?

We ran into a problem where subsequent pages returned the same result set when the ORDER BY clause had only one field with few distinct values. We plan on adding a primary key or rowid to the ORDER BY, but I'm wondering if there are other ways. We use Oracle 10g.

Tom Kyte
May 11, 2006 - 8:47 am UTC

the order by should have something "unique" about it - good to point that out.

Else - the order of the rows with the same order by key would be indeterminate and could vary!
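
A sketch of that fix - assuming the table has a unique column named id (rowid works just as well as the tiebreaker):

select *
  from ( select a.*, rownum rnum
           from ( select t.*
                    from t
                   order by last_name, id ) a   -- id makes the ordering deterministic
          where rownum <= :max_rows )
 where rnum >= :min_rows
/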

ORDER BY ROWID

A reader, May 11, 2006 - 11:30 am UTC

If we use "order by rowid" are we going to get the same result s each time we run the query (even if the table has no primary key)?

Tom Kyte
May 11, 2006 - 7:51 pm UTC

as long as the rowid is unique sure.

Ref: ORDER BY in inner query

A reader, May 12, 2006 - 10:34 am UTC

Is ORDER BY *required* in the inner query? I'm wondering if Oracle can guarantee the order of the result set if no order is specified. With no such guarantee the paging will produce indeterminate results ...

Tom Kyte
May 12, 2006 - 9:13 pm UTC

if you don't use an order by (and one that says "this is definitely row 42, no other row can be 42"), then rows "100-110" could change everytime you ask for them.

And - it would be "correct"

FIRST_ROWS(n)

Su Baba, May 16, 2006 - 3:01 pm UTC

Does the "n" in FIRST_ROWS(n) hint represent the number of records I want to have returned. If the following query always returns 50 records, should n be set to 50?

SELECT *
FROM (
SELECT /*+ FIRST_ROWS(50) */ a.*, rownum r
FROM (YOUR QUERY GOES HERE) a
WHERE rownum < :max_row
)
WHERE r >= :min_row;



Tom Kyte
May 16, 2006 - 3:14 pm UTC

it represents the number of records to be returned.

Row Count

Su Baba, June 06, 2006 - 7:06 pm UTC

You had mentioned above that to get the estimated count of a "result set M through N" SQL, you can do one of the following:

"If you use text, you can approximate the result set size.
If you use the CBO and 9i, you can get the estimated cardinality from v$SQL_PLAN"

How does an application use v$sql_plan to get the row count? How is this actually implemented?

thanks


Tom Kyte
June 06, 2006 - 9:42 pm UTC

you query it?
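
For example, a sketch (address and hash_value identify your cursor in v$sql; 10g also offers sql_id):

select id, operation, options, object_name, cardinality
  from v$sql_plan
 where address = :addr
   and hash_value = :hash_value
 order by id
/

The cardinality column holds the optimizer's estimated row count for each step of the plan.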

Thanks

Rakesh Sreenivasa, June 16, 2006 - 2:52 pm UTC

great !

Rakesh

Very Impressed!!!

Tom Persaud, July 06, 2006 - 4:21 pm UTC

Tom,
Your solution was simple and useful. You seem to enjoy solving problems and helping others. I am sure this takes a lot of your time. Like a good listener, you seem to evaluate each question and comment. Your directness is also admirable. In my opinion, you have a God-given gift and you share it freely and openly. I envy you in that I wish I could be more like you - in skill and attitude.

8.0.5 Solution

Mal, August 04, 2006 - 9:59 am UTC

This code works for me in 8.0.5

select * from
( select rownum rx, title, id from TABLE where rownum < 5 )
where rx > 1

Tom Kyte
August 04, 2006 - 12:14 pm UTC

sure, but add "order by" which is typically used in this construct (doesn't make so much sense to paginate through unordered data)

Previous example inaccurate

Mal, August 04, 2006 - 10:10 am UTC

I posted a little too quickly - 8.0.x doesn't support ORDER BY in a subselect, so while the above example is true, it's not very helpful.

different (weird) results when used in stored function

Johann Tagle, August 15, 2006 - 6:29 am UTC

Hi Tom,

I'm developing a search program on 8.1.7.  When I execute the following:

select ID, list_name from
   (select ID, list_name, rownum as number_row from
      (select distinct b.id ID,
decode(b.preferred_name,null,b.default_name,b.preferred_name) list_name
from bizinfo b, bizlookup l
where contains(l.keywords, 'computer equipment and supplies')>0
        and b.id = l.id
        order by list_name)
    where rownum <= 5)
where number_row >= 1;

I get something like:
        ID LIST_NAME
---------- --------------------------------------------
     63411 2A Info
     65480 ABACIST
       269 ABC COMPUTER
     97285 ACCENT MICRO
     97286 ACCENT MICRO - SM CITY NORTH

However, if I put the same SQL to a stored function:

CREATE Function GETSAMPLEBIZ ( v_search IN varchar2, startpage IN number, endpage IN number)
  RETURN  MYTYPES.REF_CURSOR IS
  RET MYTYPES.REF_CURSOR;
BEGIN
  OPEN RET FOR
    select ID, list_name from
    (select ID, list_name, rownum as number_row from
        (select distinct b.id as ID, decode(b.preferred_name,null,b.default_name,b.preferred_name) list_name
        from bizinfo b, bizlookup l
        where contains(l.keywords, v_search)>0
        and b.id = l.id
        order by list_name
        )
    where rownum <= endpage
    )
    where number_row >= startpage;

   return RET;
END;

(MYTYPES.REF_CURSOR defined elsewhere)

then run:
SQL> var ref refcursor;
SQL> exec :ref := getsamplebiz('computer equipment and supplies',1,5);
SQL> print ref;

I get:

        ID :B1
---------- --------------------------------
     63411 computer equipment and supplies
     65480 computer equipment and supplies
       269 computer equipment and supplies
     97285 computer equipment and supplies
     97286 computer equipment and supplies

Based on the ID column, the result set is the same, but what's supposed to be list_name is replaced by my search parameter.

I can't figure out what's wrong with this.  Would appreciate any suggestion.  

Thanks!

Johann 

Tom Kyte
August 15, 2006 - 8:23 am UTC

I'd use support for that one. it is obviously "not right"

a case to upgrade to 10g?

Johann Tagle, August 15, 2006 - 10:29 am UTC

Hi Tom,

Thanks for the response. However, 8.1.7 is no longer supported, right? Tried it on my development copy of 10g and it's working well there. Hmmm, this might be a good addition to the case for upgrading to 10g that I'm helping my client develop. Without this I either have to give up the benefits of using a stored function or have the front-end application go through every row until it gets to the relevant "page", which would be inefficient.

Thanks again,

Johann

Performance trade-off?

Mahmood Lebbai, September 11, 2006 - 2:15 pm UTC

Tom,

In the query you gave us for the initial question,

select * from ( select a.*, rownum rnum
from ( YOUR_QUERY_GOES_HERE -- including the order by ) a
where rownum <= MAX_ROWS ) where rnum >= MIN_ROWS

You said the inner query would fetch the maximum records we would be interested in and afterwards it would cut off the required records from the result set.

But consider this situation: say we have 3 million records and I would like to fetch, in some order, a range of records near the end - say 2999975 to 2999979 (just five records). According to your query, the inner query will select 2999979 records (which looks quite unnecessary) and then pick out the five records. It seems somewhat odd. What is your justification for this?

I was wondering whether there might be a performance trade-off here.

Thanks.


Tom Kyte
September 11, 2006 - 2:58 pm UTC

this is for pagination through a result set on the web.

go to google.

search for Oracle.

Now, go to page 101. Tell me what you see on that page?

Nothing, there is no page 101 - google tells you "don't be SILLY, stop it, search better, get with the program, I am NOT going to waste MY resources on such a question"

We should do the same.

I would seriously ask you "what possible business reason could justify getting those five records - and do you possibly think you really mean to order by something DESC so that instead of getting the last five, you get the first five???"


This is optimized to produce an answer on screen as soon as possible. No one would hit the page down button that many times.

Look to google, they are the "gold standard" for searching, they got this pagination thing down right.

Wayne Khan, September 26, 2006 - 11:04 pm UTC

Hi Tom,
At first I got bamboozled by the subqueries, but this is great, it worked.

:)

your query worked with a small problem

Soumak, October 18, 2006 - 2:04 pm UTC

What the fetching bit did, as I understood it, was execute the entire query and then select rows N to M (M > N) from the result set. However, is there any way that the query stops execution and returns the result set when the limit M has been reached?

I do not think I can use rownum in such a case. Any alternate suggestions?

I was using HQL (Hibernate), where the two methods setMaxResults() and setFirstResult() did that for me. Any equivalent in SQL?



Tom Kyte
October 18, 2006 - 3:42 pm UTC

the query DOES stop when it gets M-N+1 rows??? not sure at all what you mean.

Excellent, but be aware

Keith Jamieson, October 19, 2006 - 7:53 am UTC

Hi Tom

(ORACLE 10g release 2) 

I'm trying to convince the Java Team here that this is the correct approach to use, to page through a result set. 

They like this solution, with one small exception.
If they insert a record, or remove a record, or if the column value that is being ordered by changes, then potentially the results of their previous/next pagination may change.  (I'm assuming the changes were committed in another session, though the example below is all in one session).

So essentially, they are saying 'What happened to my user'
SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 5 ) -- max rows
  5   where rnum >= 1-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
ADAMS      23-MAY-87          1
ALLEN      20-FEB-81          2
BLAKE      01-MAY-81          3
CLARK      09-JUN-81          4
FORD       03-DEC-81          5

SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 10 ) -- max rows
  5   where rnum >= 6-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
JAMES      03-DEC-81          6
JONES      02-APR-81          7
KING       17-NOV-81          8
MARTIN     28-SEP-81          9
MILLER     23-JAN-82         10

SQL> -- now allen changes name to smith
SQL> update emp
  2  set ename = 'SMITH' where ename = 'ALLEN';

1 row updated.

SQL> -- assume happened in another session
SQL> -- so now user presses prev page
SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 5 ) -- max rows
  5   where rnum >= 1-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
ADAMS      23-MAY-87          1
BLAKE      01-MAY-81          2
CLARK      09-JUN-81          3
FORD       03-DEC-81          4
JAMES      03-DEC-81          5

SQL> -- user ALLEN has disappeared
SQL> insert into scott.emp
  2  select 999,'KYTE',job,mgr,hiredate,sal,comm,deptno
  3  from scott.emp
  4  where rownum = 1
  5  /

1 row created.

SQL> -- new user created
SQL> -- page next
SQL> select *
  2    from ( select a.*, rownum rnum
  3             from ( select ename,hiredate  from scott.emp order by ename ) a
  4            where rownum <= 10 ) -- max rows
  5   where rnum >= 6-- min_rows
  6  /

ENAME      HIREDATE        RNUM
---------- --------- ----------
JONES      02-APR-81          6
KING       17-NOV-81          7
KYTE       17-DEC-80          8
MARTIN     28-SEP-81          9
MILLER     23-JAN-82         10

SQL> -- where did KYTE come from?
SQL> rollback;

Rollback complete.

SQL> exit

To be fair, the Java side has not yet come up with a realistic case where this can happen.

Basically, what I have said is: if this can happen, then you have to use some type of collection, e.g. PL/SQL tables (associative arrays), and if not, then use the rownum pagination.

I can see that if we added extra columns to the table to track whether a row is new or has been updated (marked as deleted), this would get around the problem, but I think it is unnecessary overhead.


 

Tom Kyte
October 19, 2006 - 8:21 am UTC

or flashback query, if they want to freeze the result set as of a point in time. Before they start the first time, they would call dbms_flashback to get the system_change_number, and they could use that value to get a consistent read - across sessions, connections and so on.
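
A minimal sketch of that approach (emp stands in for the real query; l_scn is a PL/SQL variable whose value is passed back in with every page request):

-- once, before rendering the first page
l_scn := dbms_flashback.get_system_change_number;

-- every page request then reads as of that SCN
select *
  from ( select a.*, rownum rnum
           from ( select ename, hiredate
                    from emp as of scn (:scn)
                   order by ename ) a
          where rownum <= :max_rows )
 where rnum >= :min_rows;

The review below demonstrates this end to end.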

Excellent as usual.

Keith Jamieson, October 20, 2006 - 9:14 am UTC

Just tried this out (as user system). 
It worked :)

For ordinary users, they must be granted execute privileges on dbms_flashback. I had to log on as sysdba to do this.



SQL> declare
  2  v_scn number := dbms_flashback.get_system_change_number;
  3  begin
  4  DBMS_OUTPUT.PUT_LINE('---------------');
  5  DBMS_OUTPUT.PUT_LINE('SHOW THE DATA');
  6  DBMS_OUTPUT.PUT_LINE('---------------');
  7  for cur in
  8  (
  9  select *
 10      from ( select a.*, rownum rnum
 11               from ( select ename,hiredate  from scott.emp
 12             --  as of scn(v_scn)
 13               order by ename ) a
 14              where rownum <= 5 ) -- max rows
 15    where rnum >= 1-- min_rows
 16  )
 17  loop
 18  dbms_output.put_line(to_char(cur.rnum)||' '||cur.ename);
 19  end loop;
 20  DBMS_OUTPUT.PUT_LINE('---------------');
 21  DBMS_OUTPUT.PUT_LINE('MODIFY THE DATA');
 22  DBMS_OUTPUT.PUT_LINE('---------------');
 23  update scott.emp
 24  set ename = 'ALLEN' where ename = 'DARN';
 25  commit;
 26  DBMS_OUTPUT.PUT_LINE('---------------');
 27  DBMS_OUTPUT.PUT_LINE('SHOW THE NEW DATA');
 28  DBMS_OUTPUT.PUT_LINE('---------------');
 29  for cur in
 30  (
 31  select *
 32      from ( select a.*, rownum rnum
 33               from ( select ename,hiredate  from scott.emp
 34             --  as of scn(v_scn)
 35               order by ename ) a
 36              where rownum <= 5 ) -- max rows
 37    where rnum >= 1-- min_rows
 38  )
 39  loop
 40  dbms_output.put_line(to_char(cur.rnum)||' '||cur.ename);
 41  end loop;
 42  DBMS_OUTPUT.PUT_LINE('---------------');
 43  DBMS_OUTPUT.PUT_LINE('SHOW DATA BEFORE MODIFICATION');
 44  DBMS_OUTPUT.PUT_LINE('---------------');
 45  for cur in
 46  (
 47  select *
 48      from ( select a.*, rownum rnum
 49               from ( select ename,hiredate  from scott.emp
 50               as of scn(v_scn)
 51               order by ename ) a
 52              where rownum <= 5 ) -- max rows
 53    where rnum >= 1-- min_rows
 54  )
 55  loop
 56  dbms_output.put_line(to_char(cur.rnum)||' '||cur.ename);
 57  end loop;
 58  end;
 59  /
---------------
SHOW THE DATA
---------------
1 ADAMS
2 BLAKE
3 CLARK
4 DARN   <<================
5 FORD
---------------
MODIFY THE DATA
---------------
---------------
SHOW THE NEW DATA
---------------
1 ADAMS
2 ALLEN  <<================
3 BLAKE
4 CLARK
5 FORD
---------------
SHOW DATA BEFORE MODIFICATION
---------------
1 ADAMS
2 BLAKE
3 CLARK
4 DARN    <<================
5 FORD

PL/SQL procedure successfully completed.

SQL> exit
 

Quibbles/questions

R Flood, October 26, 2006 - 5:41 pm UTC

First, this is a great, informed discussion. But unless I am missing something, the conclusions are not applicable to many problems (and not always faster than the competition). Two observations and a question on the stats:

1. Google is the gold standard for a particular kind of searching where errors in sequence and content are permissible (to a degree), and concepts like subtotal/total are almost irrelevant. It's not a good model when the results represent or are used in a complex financial calculation, rocket launch, etc.

2. The assumption that no one wants to hit 'next' more than a few times is not always true. In general, sure. But there are plenty of business use cases where hitting 'next' more than a few times is common. Applications development is driven by usage, and as some posters pointed out "We do what Google does" or "You should only hit 'next' 3 or fewer times" can quickly lead to unemployment.

3. Is there not a breakeven point where the many-rows-and-cursor approach would become more efficient than hitting the DB for every set? While large table + cursor pagination doesn't make sense, even if 10 'nexts' is the norm, if you get 200-400 rows and cursor through them, wouldn't the total database expense be less than subselect+rownum fairly soon? The numbers above seemed to suggest 3 nexts was the breakeven, and that was assuming (I think) that the cursor case grabbed the whole table instead of, say, 5/10x the rows displayed at once.

Tom Kyte
October 27, 2006 - 7:35 am UTC

1) sure it is, show me otherwise.

2) sure it is, show me otherwise. a couple of times means "10 or 20" as well, show me when you really need to page MORE THAN THAT - commonly.

There are exceptions to every rule - this is a universal fact - for 999999999 times out of 1000000000, what is written here applies. So, why is it not done this way that often?

3) what is a many rows and cursor approach???

Followup

R Flood, October 27, 2006 - 11:01 am UTC

1. In my experience, Google freely returns "good enough" data. That is, the order might be different, the sum total of pages might be cleverly (or not) truncated, etc. This is just fine for a search engine, but not for calculations that depend on perfect sequence and accuracy. But is it not obvious that what is ideal for a search engine (page speed=paramount, data accuracy=not so much) is different than what matters for finance, rocket science, etc.?

2./3. (they are connected)
Sorry about the faulty reference. I thought the many-rows-and-cursor approach was somewhere in this thread. But what I meant by this was a Java(or whatever) server that gets chunks of rows (less than the whole table, but more than one screen, adjustable based on use case), and returns a screenful at a time to the client.

The core question was: Isn't there a point where getting all rows (but certainly a few hundred at once) in a server program and returning them on demand will be much easier on the database than hitting it for each set?

Tom Kyte
October 27, 2006 - 6:20 pm UTC

1) why would a finance report reader need to know there were precisely 125,231 rows in their report?

2) and then maintains a state and therefore wastes a ton of resources and therefore rockets us back to the days of client server. I'm not a fan.



don't spend a lot of time trying to "take it easy on the database", if people spent more time on their database design and learning the database features (rather than trying to outguess the database, doing their own joins, sorting in the client - whatever) we'd all be much much better off.

Rownum problem

Anne, November 01, 2006 - 12:19 pm UTC

Hi Tom, I have an interesting problem here: a simple select of rownum from two tables shows different results - #1 returns rownum as expected, but #2 doesn't. Could you please explain why...


#1. select rownum, id
from dnr_refund_outbound_process
order by id desc;

    ROWNUM         ID
---------- ----------
         1        125
         2        124
         3        123
         4        122
         5        121
         6        120
         7        119

#2. select rownum
, adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc ;

    ROWNUM ADJUSTMENT_ID
---------- -------------
         7          8296
         6          8295
         5          8294
         4          8293
         3          8292
         2          8291
         1          7808

Both DNR_REFUND_OUTBOUND_PROCESS and AR_ADJUSTMENTS_ALL are tables.

Indexes are :
CREATE UNIQUE INDEX PK_DNR_REFUND_OUTBOUND_PROCESS ON DNR_REFUND_OUTBOUND_PROCESS
(ID) ......

CREATE UNIQUE INDEX AR_ADJUSTMENTS_U1 ON AR_ADJUSTMENTS_ALL
(ADJUSTMENT_ID) ....

If there is any other info you need from me, please let me know.

As always, appreciate your help!




Tom Kyte
November 01, 2006 - 6:16 pm UTC

they both are showing rownum??

(but you need to understand that rownum is assigned during the where clause processing, before sorting!)

you probably meant:

select rownum
, adjustment_id
from
(select adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc );

to sort AND THEN assign rownum

Rownum problem - tkprof results

Anne, November 01, 2006 - 12:48 pm UTC

Hi Tom,

I missed sending in the tkprof results for my earlier question. I hope this may give some clue...

*** SESSION ID:(31.4539) 2006-11-01 11:29:34.253

********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.00       0.00          0          0          0          0
Execute      1     0.00       0.00          0          0          0          1
Fetch        0     0.00       0.00          0          0          0          0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        2     0.00       0.01          0          0          0          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS)
********************************************************************************

select rownum, id
from dnr_refund_outbound_process
order by id desc

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.00       0.02          0          0          0          0
Execute      1     0.00       0.00          0          0          0          0
Fetch        8     0.00       0.00          0          8          0         93
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total       10     0.00       0.02          0          8          0         93

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS)

Rows Row Source Operation
------- ---------------------------------------------------
93 COUNT
93 INDEX FULL SCAN DESCENDING PK_DNR_REFUND_OUTBOUND_PROCESS (object id 270454)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
93 COUNT
93 INDEX (FULL SCAN DESCENDING) OF
'PK_DNR_REFUND_OUTBOUND_PROCESS' (UNIQUE)

********************************************************************************

select rownum
, adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.00       0.00          0          0          0          0
Execute      1     0.00       0.00          0          0          0          0
Fetch        2     0.00       0.00          0          4          0          7
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        4     0.00       0.00          0          4          0          7

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS)

Rows Row Source Operation
------- ---------------------------------------------------
7 SORT ORDER BY
7 COUNT
7 TABLE ACCESS BY INDEX ROWID AR_ADJUSTMENTS_ALL
7 INDEX RANGE SCAN AR_ADJUSTMENTS_N2 (object id 28058)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
7 SORT (ORDER BY)
7 COUNT
7 TABLE ACCESS GOAL: ANALYZED (BY INDEX ROWID) OF
'AR_ADJUSTMENTS_ALL'
7 INDEX GOAL: ANALYZED (RANGE SCAN) OF 'AR_ADJUSTMENTS_N2'
(NON-UNIQUE)

********************************************************************************


BEGIN sys.dbms_system.set_sql_trace_in_session(31, 4539, false); END;
..................





Rownum problem

Bella Joseph, November 02, 2006 - 9:23 am UTC

Hi Tom,

select rownum
, adjustment_id
from
(select adjustment_id
from ar_adjustments_all
where org_id = 85
and customer_trx_id = 15922
order by adjustment_id desc );


Yes, this is exactly what I meant, but I expected the #2 sql to return the same results. I think I am missing the specific reason for this ...

Both sqls are pretty much the same - they are both selecting rownum with order by desc. Why does #2 return rownum in descending order instead of ascending like #1?

From your comments, I gather that the reasoning behind this is that #1 has no where clause to process and hence rownum is assigned during the sorting, whereas #2 has a where clause to process and hence rownum is assigned during the where clause, before the sorting. Would you agree?

Thanks for your patience! :)

Tom Kyte
November 02, 2006 - 9:29 am UTC

rownum assigned AFTER where clause BEFORE order by

so, you selected rows, filtered them, numbered them (randomly, as they were encountered) and then sorted the results.

If the first one did anything you think was "correct" as far as rownum and ordering, it was purely by ACCIDENT (eg: likely you used an index to read the data sorted in the first place and the order by was ignored - in fact the tkprof shows that)

R Floods Post

Keith Jamieson, November 16, 2006 - 10:29 am UTC

I just read R Flood's post, and I am implementing this precisely so that Java can get a number of records and the user can paginate next/previous as many times as they want to.

Java will be able to scroll forwards and backwards through
the retrieved rows, so our goal of bi-directional cursor scrolling is achieved.

So, essentially, by pressing next or previous all we are doing is replacing the rows that we scroll through
with the next/previous ones in the list.

I have had many discussions/conversations around this, and the only real issue was the potential for data inconsistency, which is solved by using flashback query (dbms_flashback).

The benefits of this approach are:

Database retrieves the data quickly. Bind variable usage.
parse once execute many.
We can scroll through an entire record set if so desired.
The amount of records to be retrieved at a time can be amended dynamically, by keeping values in a table.
There is also potentially less memory overhead on the client.

So, as far as I'm concerned this is now better than google search.
If you want to page through a million row table 10 at a time you can do so.
Tom Kyte
November 16, 2006 - 3:24 pm UTC

downside is - you suck resources like a big drain, under the ocean, really fast, really hard.

I don't like it. not a good thing.

to find out how many records there are, you have to GET THEM ALL. what a waste

but, it is up to you, you asked my opinion, that is it and it is rather consistent over the years and not likely to change.

pagination query

o retrieves data quickly, first pages fast. no one goes way down.

o uses binds, don't know why you think that is a special attribute of yours

o we can scroll too, in fact, I can go to page "n" at any time

o we are as dynamic as anything else.

o I don't see how you say "less memory in client" with your approach, quite the OPPOSITE would be true, very much so. I need to keep a page, and you?



and you know, if you want to page through a million row table - more power to you, most people have much much more important stuff to do.

Paging by Partition

Alessandro Nazzani, December 19, 2006 - 10:21 am UTC

Is there a smart way (that is, without resorting to procedural code) to paginate a "partition by" query without breaking groups (10g)?

Suppose the following statement:

select groupid, itemid, itemname, itemowner,
row_number() over (partition by groupid order by itemname) seq,
max(itemid) over (partition by groupid) lastitem from
V$GROUP_TYPES where itemtype=1 order by groupid, itemname;

I've been asked to add pagination but, if the last record of the page is not the last record of the group, I should "extend" the page until I reach the end of the group (groups can range between 2 to roughly 20 records each).

Thanks for your attention.

Alessandro

Tom Kyte
December 19, 2006 - 10:25 am UTC

no create
no insert
no look

Alessandro Nazzani, December 19, 2006 - 11:59 am UTC

> no create
> no insert
> no look

My bad, sorry.

CREATE TABLE V$GROUP_TYPES (GROUPID NUMBER(10) NOT NULL,
ITEMID NUMBER(10) NOT NULL, ITEMNAME VARCHAR2(10) NOT NULL,
ITEMOWNER VARCHAR2(10) NOT NULL, ITEMTYPE NUMBER(1) NOT NULL);

INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (1, 12795, 'Item 12795', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (1, 12796, 'Item 12796', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (2, 13151, 'Item 13151', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (2, 13152, 'Item 13152', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (3, 6640, 'Item 6640', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (3, 6641, 'Item 6641', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (3, 6642, 'Item 6642', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (4, 4510, 'Item 4510', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (4, 4511, 'Item 4511', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (4, 4512, 'Item 4512', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (5, 10095, 'Item 10095', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (5, 10096, 'Item 10096', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8811, 'Item 8811', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8812, 'Item 8812', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8811, 'Item 8811', 'Myself', 1);
INSERT INTO V$GROUP_TYPES (GROUPID, ITEMID, ITEMNAME, ITEMOWNER, ITEMTYPE)
VALUES (6, 8812, 'Item 8812', 'Myself', 1);
commit;

select groupid, itemid, itemname, itemowner,
row_number() over (partition by groupid order by itemname) seq,
max(itemid) over (partition by groupid) lastitem from
V$GROUP_TYPES where itemtype=1 order by groupid, itemname;

 GROUPID   ITEMID ITEMNAME   ITEMOWNER    SEQ   LASTITEM
-------- -------- ---------- ---------- ----- ----------
       1    12795 Item 12795 Myself         1      12796
       1    12796 Item 12796 Myself         2      12796
       2    13151 Item 13151 Myself         1      13152
       2    13152 Item 13152 Myself         2      13152
       3     6640 Item 6640  Myself         1       6642
       3     6641 Item 6641  Myself         2       6642
       3     6642 Item 6642  Myself         3       6642
       4     4510 Item 4510  Myself         1       4512
       4     4511 Item 4511  Myself         2       4512
       4     4512 Item 4512  Myself         3       4512
       5    10095 Item 10095 Myself         1      10096
       5    10096 Item 10096 Myself         2      10096
       6     8811 Item 8811  Myself         1       8812
       6     8811 Item 8811  Myself         2       8812
       6     8812 Item 8812  Myself         3       8812
       6     8812 Item 8812  Myself         4       8812

If, for example, page size is set to 5, I should have the following pages:

 GROUPID   ITEMID ITEMNAME   ITEMOWNER    SEQ   LASTITEM
-------- -------- ---------- ---------- ----- ----------
       1    12795 Item 12795 Myself         1      12796
       1    12796 Item 12796 Myself         2      12796
       2    13151 Item 13151 Myself         1      13152
       2    13152 Item 13152 Myself         2      13152
       3     6640 Item 6640  Myself         1       6642
       3     6641 Item 6641  Myself         2       6642
       3     6642 Item 6642  Myself         3       6642

 GROUPID   ITEMID ITEMNAME   ITEMOWNER    SEQ   LASTITEM
-------- -------- ---------- ---------- ----- ----------
       4     4510 Item 4510  Myself         1       4512
       4     4511 Item 4511  Myself         2       4512
       4     4512 Item 4512  Myself         3       4512
       5    10095 Item 10095 Myself         1      10096
       5    10096 Item 10096 Myself         2      10096

 GROUPID   ITEMID ITEMNAME   ITEMOWNER    SEQ   LASTITEM
-------- -------- ---------- ---------- ----- ----------
       6     8811 Item 8811  Myself         1       8812
       6     8811 Item 8811  Myself         2       8812
       6     8812 Item 8812  Myself         3       8812
       6     8812 Item 8812  Myself         4       8812

Thanks in advance for your time.

Alessandro

Tom Kyte
December 19, 2006 - 12:55 pm UTC

ops$tkyte%ORA10GR2> update v$group_types set groupid = groupid*10;

16 rows updated.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select *
  2    from (
  3  select groupid, itemid, itemname, itemowner,
  4         row_number() over (partition by groupid order by itemname) seq,
  5         max(itemid) over (partition by groupid) lastitem,
  6             dense_rank() over (order by groupid) page_no
  7    from V$GROUP_TYPES where itemtype=1 order by groupid, itemname
  8         )
  9  /

   GROUPID     ITEMID ITEMNAME   ITEMOWNER         SEQ   LASTITEM    PAGE_NO
---------- ---------- ---------- ---------- ---------- ---------- ----------
        10      12795 Item 12795 Myself              1      12796          1
        10      12796 Item 12796 Myself              2      12796          1
        20      13151 Item 13151 Myself              1      13152          2
        20      13152 Item 13152 Myself              2      13152          2
        30       6640 Item 6640  Myself              1       6642          3
        30       6641 Item 6641  Myself              2       6642          3
        30       6642 Item 6642  Myself              3       6642          3
        40       4510 Item 4510  Myself              1       4512          4
        40       4511 Item 4511  Myself              2       4512          4
        40       4512 Item 4512  Myself              3       4512          4
        50      10095 Item 10095 Myself              1      10096          5
        50      10096 Item 10096 Myself              2      10096          5
        60       8811 Item 8811  Myself              1       8812          6
        60       8811 Item 8811  Myself              2       8812          6
        60       8812 Item 8812  Myself              3       8812          6
        60       8812 Item 8812  Myself              4       8812          6

16 rows selected.

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select groupid, itemid, itemname, itemowner,
  4         row_number() over (partition by groupid order by itemname) seq,
  5         max(itemid) over (partition by groupid) lastitem,
  6             dense_rank() over (order by groupid) page_no
  7    from V$GROUP_TYPES where itemtype=1 order by groupid, itemname
  8         )
  9   where page_no = 5
 10  /

   GROUPID     ITEMID ITEMNAME   ITEMOWNER         SEQ   LASTITEM    PAGE_NO
---------- ---------- ---------- ---------- ---------- ---------- ----------
        50      10095 Item 10095 Myself              1      10096          5
        50      10096 Item 10096 Myself              2      10096          5

ops$tkyte%ORA10GR2> select *
  2    from (
  3  select groupid, itemid, itemname, itemowner,
  4         row_number() over (partition by groupid order by itemname) seq,
  5         max(itemid) over (partition by groupid) lastitem,
  6             dense_rank() over (order by groupid) page_no
  7    from V$GROUP_TYPES where itemtype=1 order by groupid, itemname
  8         )
  9   where page_no = 6
 10  /

   GROUPID     ITEMID ITEMNAME   ITEMOWNER         SEQ   LASTITEM    PAGE_NO
---------- ---------- ---------- ---------- ---------- ---------- ----------
        60       8811 Item 8811  Myself              1       8812          6
        60       8811 Item 8811  Myself              2       8812          6
        60       8812 Item 8812  Myself              3       8812          6
        60       8812 Item 8812  Myself              4       8812          6

 

Mike, December 19, 2006 - 1:17 pm UTC

My experience has been that paging through a large data set is a sign that someone hasn't spoken to the users and discovered what they really need to see. Give the users the ability to find the data they need or use business logic to present the users with the data they most likely need.

One way to do this is to build the ability for a user to define and save the default criteria for the data returned when a screen is loaded.

Sure, there will be exceptions, but I think as a general rule, an application should be designed without the user needing to page through data "looking" for the necessary record.


Tom Kyte
December 19, 2006 - 3:47 pm UTC

(but what about my home page or google?)

Pagination is a pretty necessary thing for most all applications in my experience.

Alessandro Nazzani, December 19, 2006 - 1:33 pm UTC

Tom,

as always thank you very much for your patience.

If I understand correctly, you are proposing to navigate "by groups": instead of setting a number of rows per page, setting a number of groups.

The only drawback is that if I have 10 groups of 2 records followed by 10 groups of 20 records I will end up with pages of *significantly* different sizes (in terms of records); guess I can live with that, after all. :)

Thanks for helping me approaching the problem from a different point of view.

Alessandro

Mike, December 20, 2006 - 1:17 pm UTC

While I can see the value in having a technical discussion on the best way to code paging through screens, I feel that users having to page through data sets should be used very infrequently.

My experience has been that too many applications default to the "dump a lot of records on the screen and let the user page through to find the necessary record" style. When I see users paging through screens, I always look to see if that task/screen can be improved.

In many cases, I can produce the result the user needs without the need to page through result sets. Sometimes it is an easy change and sometimes it takes more work. I often add the ability for a user to save a default search criteria for each applicable screen.

>> (but what about my home page or google?)

Why did you decide to present 10 articles sorted by Last Updated (I guess)? Do most people come to Asktom to "browse" or do they go looking for an answer to a specific topic? Can you tell how many people never clicked a link on the home page, but typed in a query instead?

In my case, 99% of the time I go to Asktom, I ignore the home page and type in a query for a topic I'm interested in.


Tom Kyte
December 20, 2006 - 1:25 pm UTC

I see it in virtually all applications - all of them.

people come to browse, yes.

It is my experience that if people do not find it in 10 or less pages, they refine their search - but you know what.....

that doesn't mean "page two" isn't necessary and if page two is needed, you need....

pagination

Mike, December 20, 2006 - 2:00 pm UTC

Sorry, I'm not making myself clear. I have no problem with supporting pagination in applications. I just feel it should be used very infrequently. I track paging in my logging table, so I can tell when users are paging frequently. Usually, when I visit the issue, the user either needs training or the screen/process needs to be re-designed.

I was just trying to make a usability suggestion related to the technical question.

pagination

benn, January 09, 2007 - 10:23 am UTC

Hi Tom,
I have a doubt about pagination. I want a procedure that will accept 'from' and 'to' parameters (rownum) for pagination, as well as the order by column as a parameter (the order by changes based on the parameter). My query uses multiple tables which don't have unique keys, and the pagination is not working properly in that case.
Please have a look at the procedure..

CREATE OR REPLACE Procedure P_pagination
(cur out sys_refcursor,end1 number,start1 number,ordr number)
as
Var_sql varchar2(4000);
begin
var_sql := ' Select * '||
' From '||
' (select rownum rwnum, aa.* from ' ||
' (select t1.a a1, t2.a a2, t3.a a3, t4.b b4 from t1,t2,t3,t4 where < all the joins> order by '||ordr||' ) aa '||
' where rownum <= '|| end1 ||') ' ||
' where rwnum >= '|| start1 ;

open cur for var_sql;
end ;
/


Tom Kyte
January 11, 2007 - 9:30 am UTC

you have unique stuff - rowids.


order by ' || ordr || ', t1.rowid, t2.rowid, .... ) aa '
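
Applied to the dynamic string above, that would read (a sketch; end1 and start1 would ideally be bind variables rather than concatenated literals):

var_sql := ' Select * '||
           ' From '||
           ' (select rownum rwnum, aa.* from ' ||
           ' (select t1.a a1, t2.a a2, t3.a a3, t4.b b4 from t1,t2,t3,t4 where < all the joins> '||
           '   order by '||ordr||', t1.rowid, t2.rowid, t3.rowid, t4.rowid ) aa '||
           ' where rownum <= '|| end1 ||') ' ||
           ' where rwnum >= '|| start1 ;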

Re: I don't like It

Keith Jamieson, January 15, 2007 - 5:06 am UTC

Okay, I think either I have a unique situation, or more likely, I didn't explain myself very well.

I 100% agree that the pagination query is the way to go.
Effectively, what I have done is suggest parameterising the pagination query in a procedure and having the start and end rows for the pagination query controlled in Java.

Previously, our code would be a query, which was limited by rownum, say 10,0000. This was globally set in a package.
Apparently the reason this was introduced was that the clients couldn't handle all the data being passed to them, ie They used to run out of memory in the client, and this was the solution applied at the time, so what I was saying here is using the pagination query results in less memory being returned to the client in each call, as opposed to potentially 10,000 rows being downloaded to the client.
( I do know that the user should be forced to put in some selection criteria, but at present this is not the case).

I can quite see that the flashback query may require additional resources, but this is a compromise, which will allow the pagination query to be used.

Scrolling forwards and backwards is required, so my choices as far as I see it are:

1) Stick with the query being limited by rownum <= 10,000
(Which has already caused a couple of issues).
or
2) use a parameterised pagination query.


Of course, I do know that the correct approach to limiting the number of rows is to force the user to put in appropriate selection criteria. I'm working towards that goal.

Use of abbreviations

A reader, January 15, 2007 - 5:54 am UTC

Tom,

Regarding 'IM' speak, I think you have to check whether the page has any "u" or "ur" or "plz" ... words and replace them with empty strings, so that the sentence doesn't make any proper meaning.

What if you want ALL_ROWS

Rahul, January 16, 2007 - 7:05 pm UTC

Tom,

As always, thank you for your help to the Oracle world.

I have a situation where, for a business process, I am getting all the results into a staging table and the users take decisions based on that.

So, now, they have an option of certain filters on that table query (I am implementing these filters using NDS, and as taught by you, using bind variables).

Then, they would take the decisions based on the result set. There is a good possibility that they would be paging through the result set no matter the size.

Doesn't it make sense, in this case, to use ALL_ROWS instead of FIRST_ROWS because they have to check (actual check box on the front end) which records to work on?

If so, then, should I use ALL_ROWS on every stage of the SQL Statement?

Also, then, in this case, wouldn't it make sense to give them the count of how many rows there are in the result set (based on the filters, there are not that many)?

Thank you,
Rahul

Pagination with total number of records

Mahesh Chittaranjan, January 22, 2007 - 12:23 am UTC

Tom,

I have a similar situation to R Flood's, except that I do not need the flashback query. The code that calls the pagination procedure is in a web application. Below is the function I use to get the page data. The only issue I have is that I HAVE TO show the total number of records and "page x of y" (easy to calculate when total and page size are known). The question is: can the number of records returned by the query be obtained in a better fashion than below?

create or replace function nmc_sp_get_customer_page(customerName varchar2, pageNumber int, pageSize int, totalRecords OUT int)
return types.ref_cursor
as
cust_cursor types.ref_cursor;
begin
declare
startRec int;
endRec int;
pageNo int;
pSize int;
begin
-- pageNumber = 0 indicates the last page

-- pageSize parameter is set in the web application's property file
-- The check below is just so that the code works even if weird values are set

if pageSize < 0 or pageSize > 100 then
pSize := 25;
else
pSize := pageSize;
end if;

pageNo := pageNumber;

-- How can this be optimized?
-- Is it possible to get the count without having to run the query below?

select count(name) into totalRecords
from customer
where name like customerName;

-- calculate start and end records to be used as MINROWS and MAXROWS

if pageNumber <> 0 then
startRec := ((pageNumber - 1) * pSize) + 1;
endRec := startRec + pSize - 1;

if endRec >= totalRecords then
pageNo := 0;
end if;
else
-- calculate how many records to show on the last page.

endRec := mod(totalRecords, pSize);

if endRec = 0 then
endRec := pSize;
end if;
end if;

if pageNo <> 0 then
-- For any page other than the last page, use this.
-- The user is probably not going to see more than the first 5 pages

open cust_cursor for
select name from
(select a.*, rownum rnum from
(select name from customer where name like customerName order by name) a
where rownum <= endRec)
where rnum >= startRec;
else
-- Since there is a last page button on the web page, the user is likely to click it

open cust_cursor for
select name from
(select name from customer where name like customerName order by name desc)
where rownum <= endRec
order by name;

end if;

return cust_cursor;
end;
end nmc_sp_get_customer_page;
/
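
A commonly used alternative (a sketch, not part of the original post): fold the total into the page query itself with an analytic count, so the separate count(name) query disappears - every fetched row then carries the total. The trade-off is that the analytic count forces the whole result set to be counted before the first row comes back, giving up part of the top-N optimization:

open cust_cursor for
  select name, total_records
    from (select a.*, rownum rnum
            from (select name,
                         count(*) over () total_records  -- total for "page x of y"
                    from customer
                   where name like customerName
                   order by name) a
           where rownum <= endRec)
   where rnum >= startRec;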

another solution to the initial question

Maarten, January 22, 2007 - 8:57 am UTC

I just read the initial question and think there is yet another solution.

Here's my contribution:

/* if rnum = result_total, the last page is reached */
select c.*
from (select width_bucket (
b.rnum
, 1
, ( b.result_total
- mod (b.result_total, 10))
/* try to get the exact number of records in a bucket (10), the rest go into the overflow bucket */
+ 1
, (trunc (b.result_total
/ 10))
/* indicate how much buckets you need, derived from # record per page you desire (10) */
) as page_nr
, b.rnum /* original rownumber */
, b.table_name
, b.tablespace_name
, b.result_total /* total number of records */
from (select (last_value (a.rnum) over (order by a.dummy_for_last_value)) as result_total
, a.rnum
, a.table_name
, a.tablespace_name
from (select rownum rnum /* the actual query */
, ute.table_name
, ute.tablespace_name
, 1 as dummy_for_last_value
from user_tables ute
order by ute.tablespace_name /* do ordering here */
) a) b) c

Tom, I need your help on this

Asim, January 25, 2007 - 4:29 pm UTC

Tom,
This is what we are doing -

inside a pl/sql block -

cursor c1 is select id from id_assign where status = 0 and rownum =1 for update;

...

open c1;
update id_assign
set status = 1
where current of c1;

close c1;

The "select for update" is doing a full table scan even though status column has an index as COST is less compared to index scan.

Any suggestions please to make it faster??

Thanks,
Asim




Asim, February 08, 2007 - 9:48 am UTC

Tom,
Could you please give your input on this -
This is what we are doing now
inside a pl/sql block -

cursor c1 is select /*+ INDEX(id_assign x1id_assign)*/id from id_assign where status = 0 and rownum =1 for update;

where x1id_assign is an index for column status.
...

open c1;
update id_assign
set status = 1
where current of c1;

close c1;

Our requirement is to get any one id which has status = 0 and then mark this id as used by setting status = 1 and assign_dt = sysdate.

Now this table has around 2 million ids, and this proc gets called during the processing of each record, to assign an id.

After adding the index hint it is somewhat faster, but not yet up to the speed the business wants. Any suggestions please to make it faster??

Thanks,
Asim


Tom Kyte
February 08, 2007 - 11:20 am UTC

you should have a fetch in there - if you want to have a "current of"

but one wonders why you bother with the select at all? why not just

update t set status = 1 where status = 0 and rownum = 1;


that index hint - why????? remove it. If you have an index on status, the update should use it (because of the rownum=1).

Asim, February 08, 2007 - 2:32 pm UTC

Hi Tom,
Thanks for your reply.

Actually, there is indeed a fetch before the update. Sorry I missed it while putting the question together.

The reason we need the select is that we need to return the ID from this stored proc and also mark the id as used so that nobody else can use it.


This is what we are doing in brief -

We have a stored proc which basically gets called from Ab Initio (an ETL tool) to insert each record during the initial load. Before inserting the record into the database, it does some validations as well as some manipulations inside the main proc, and then it calls the proc below to get an ID and mark it as used.

This same process gets repeated for millions of records during the initial load.


Here is the procedure -
============================================================
CREATE PROCEDURE GETID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)

PRAGMA AUTONOMOUS_TRANSACTION;
V_ID varchar2(16);
CURSOR c1 IS SELECT ID FROM ID_ASSIGN WHERE STATUS IS NULL AND ROWNUM <2 FOR UPDATE;
PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

CURSOR c1 IS
SELECT /*+ INDEX(ID_ASSIGN X1ID_ASSIGN)*/ ID FROM ID_ASSIGN WHERE STATUS = 0 AND ROWNUM =1 FOR UPDATE;

BEGIN

OPEN c1;

FETCH c1 into V_ID;

IF c1%NOTFOUND OR c1%NOTFOUND IS NULL THEN
V_ERROR_MSG := 'No ID is available for assignment';
RAISE XAPPERROR;
END IF;

UPDATE ID_ASSIGN
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE CURRENT OF c1;

COMMIT;

CLOSE c1;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

============================================================

Now, when we did a load test from the Oracle box without the index hint on status, it was loading 50 recs/sec, and when invoked from Ab Initio it was loading 10 recs/sec.

This was not acceptable, so we tried the index hint (without it, the statement was doing a full table scan), which improved the numbers to 300 recs/sec from the Oracle box and 110 recs/sec from Ab Initio.

The main table the new records are supposed to be inserted into will have around 110 million records in production and is partitioned; this ID_ASSIGN table will have around 2 to 3 million records and is not partitioned - some of them used, some available.


Your views please.

Thank you,
Asim
Tom Kyte
February 08, 2007 - 4:21 pm UTC

update t set x = y where <condition> returning id into l_id;


Asim, February 08, 2007 - 3:16 pm UTC

Hi Tom,

I am sorry that I put the cursor definition twice in the procedure in my previous response -

here is the correct procedure -

CREATE PROCEDURE GETID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2, P_ERROR_MSG OUT VARCHAR2)
AS
PRAGMA AUTONOMOUS_TRANSACTION;
V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

CURSOR c1 IS
SELECT /*+ INDEX(ID_ASSIGN X1ID_ASSIGN)*/ ID FROM ID_ASSIGN WHERE STATUS = 0 AND ROWNUM =1 FOR UPDATE;

BEGIN

OPEN c1;

FETCH c1 into V_ID;

IF c1%NOTFOUND OR c1%NOTFOUND IS NULL THEN
V_ERROR_MSG := 'No ID is available for assignment';
RAISE XAPPERROR;
END IF;

UPDATE ID_ASSIGN
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE CURRENT OF c1;

COMMIT;

CLOSE c1;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

===========================================

Please suggest if I am doing something wrong.

Thanks,
Asim

Asim, February 08, 2007 - 5:21 pm UTC

Hi Tom,

Thanks for your reply.

We tried to use "UPDATE ... RETURNING INTO ..." like this -


CREATE PROCEDURE GETID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2, P_ERROR_MSG OUT VARCHAR2)
AS
PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;


BEGIN

UPDATE id_assign
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE status = 0
AND ROWNUM = 1
RETURNING ID INTO V_ID;

COMMIT;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;




=============================
We already tried this with the index on the STATUS column; performance was almost the same - around 300 recs/sec on the Oracle box and 100 recs/sec with Ab Initio.

So do you want me to try it without any index on the status column?


Tom Kyte
February 08, 2007 - 9:14 pm UTC

"ab initio"??

what sort of expectations do you have for a SERIAL process here?

Asim, February 09, 2007 - 11:44 am UTC

Hi Tom,

In Ab Initio (the ETL tool), everything is a serial process right now, and the business does not want a parallel process at the moment.

We also tried running the same main proc, which calls this id-assign proc (using the cursor), in parallel in different Oracle sessions (not in Ab Initio), but performance went down when we ran the same per-record process in three different sessions.

And I am not sure if "UPDATE ... RETURNING INTO ..." can handle parallel processing. So we thought of using a CURSOR with SELECT FOR UPDATE; moreover, "UPDATE ... RETURNING INTO ..." did not perform better than the CURSOR.

I really appreciate your help on this.

Thanks,
Asim
Tom Kyte
February 12, 2007 - 9:30 am UTC

update returning into is simply PLSQL syntax that lets you

a) update (and thus lock) a row
b) get the values of the row

in a single statement - not sure where the term parallel even came into play?


if you tell me that

a) select for update
b) update

is not slower than

a) update

I'll not be believing you.


ops$tkyte%ORA10GR2> create table t1
  2  as
  3  select rownum id, a.* from all_objects a where rownum <= 10000
  4  /

Table created.

ops$tkyte%ORA10GR2> alter table t1 add constraint t1_pk primary key(id);

Table altered.

ops$tkyte%ORA10GR2> alter table t1 add constraint t1_unq unique(object_id);

Table altered.

ops$tkyte%ORA10GR2> create table t2
  2  as
  3  select rownum id, a.* from all_objects a where rownum <= 10000
  4  /

Table created.

ops$tkyte%ORA10GR2> alter table t2 add constraint t2_pk primary key(id);

Table altered.

ops$tkyte%ORA10GR2> alter table t2 add constraint t2_unq unique(object_id);

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure p1
  2  as
  3          l_rec t1%rowtype;
  4  begin
  5          for i in 1 .. 10000
  6          loop
  7                  select * into l_rec from t1 where id = i for update;
  8                  update t1 set object_name = lower(object_name) where object_id = l_rec.object_id;
  9          end loop;
 10  end;
 11  /

Procedure created.

ops$tkyte%ORA10GR2> show errors
No errors.
ops$tkyte%ORA10GR2> create or replace procedure p2
  2  as
  3          l_rec t1%rowtype;
  4          l_object_id number;
  5  begin
  6          for i in 1 .. 10000
  7          loop
  8                  update t1 set object_name = lower(object_name) where id = i returning object_id into l_object_id;
  9          end loop;
 10  end;
 11  /

Procedure created.

ops$tkyte%ORA10GR2> show errors
No errors.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec p1

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec p2

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec runStats_pkg.rs_stop(10000);
Run1 ran in 148 hsecs
Run2 ran in 59 hsecs
run 1 ran in 250.85% of the time

Name                                  Run1        Run2        Diff
STAT...index fetch by key           20,001      10,000     -10,001
STAT...redo entries                 20,016      10,013     -10,003
STAT...table fetch by rowid         10,007           0     -10,007
STAT...execute count                20,031      10,006     -10,025
STAT...db block gets                20,493      10,304     -10,189
STAT...db block gets from cach      20,493      10,304     -10,189
STAT...recursive calls              20,387      10,014     -10,373
STAT...buffer is not pinned co      20,029           0     -20,029
STAT...calls to get snapshot s      30,034      10,005     -20,029
STAT...db block changes             40,272      20,178     -20,094
STAT...consistent gets - exami      50,024      20,001     -30,023
STAT...consistent gets from ca      50,084      20,012     -30,072
STAT...consistent gets              50,084      20,012     -30,072
STAT...session logical reads        70,577      30,316     -40,261
LATCH.cache buffers chains         161,744      70,854     -90,890
STAT...physical read total byt     327,680     204,800    -122,880
STAT...physical read bytes         327,680     204,800    -122,880
STAT...undo change vector size   1,722,880   1,043,932    -678,948
STAT...redo size                 4,900,264   2,814,576  -2,085,688

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
173,632      74,955     -98,677    231.65%

PL/SQL procedure successfully completed.


Asim, February 09, 2007 - 12:09 pm UTC

Hi Tom,
I think I should tell you some more about Ab Initio.

It is an ETL (Extract, Transform, Load) tool. We use it for the initial load of data as well as for delta loads. The initial load is expected to be around 110 million records.
So Ab Initio gets some files, processes the data in each file the way it wants, and then calls this main proc to load each record into the table, assigning it an unused id.

Thanks,
Asim


Asim, February 12, 2007 - 10:00 am UTC

Hi Tom,

Thanks for your reply.

I think I did something else wrong while using "UPDATE ... RETURNING INTO ...".

I want to give "UPDATE ... RETURNING INTO ..." a second try and come back to you.

One thing I want to confirm: do you think the BITMAP INDEX on the status column is not needed in my case, even though I use the STATUS column in the WHERE clause of the update with ROWNUM = 1?

Thanks,
Asim



Tom Kyte
February 12, 2007 - 11:33 am UTC

you have a bitmap index on status?!?!?!?!?!?!?!?!?!

as they say in support "tar closed"

absolutely and entirely inappropriate to have a bitmap index, get rid of it if you are doing single row updates!@!!!!!

Asim, February 13, 2007 - 9:35 am UTC

Hi Tom,

Just one final question on the bitmap index on "status" column.

We used the index on the STATUS column because we are issuing "UPDATE ID_ASSIGN SET STATUS = 1, ASSIGN_DT = SYSDATE WHERE STATUS = 0 AND ROWNUM = 1 RETURNING INTO ...". Because of the data distribution of the STATUS column (there could be a couple of million records with status = 0 and a couple of million with status = 1), we think we are unable to tell Oracle explicitly which row to update.

And when I see your query, it is doing "update t1 set object_name = lower(object_name) where id = i returning object_id into l_object_id;" where you have a primary key index on "id".

I really appreciate your staying on this for such a long time.

Thanks,
Asim

Tom Kyte
February 13, 2007 - 10:10 am UTC

you cannot do that with bitmaps - single row updates KILL IT.

do not use a bitmap index on status, just don't.

use a b*tree if you must, but not a bitmap
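
A minimal sketch of that change (the index name ASX1ID_ASSIGN and the table TEST_ID_ASSIGN are the ones that appear later in this thread; adjust to your own objects):

-- assumption: ASX1ID_ASSIGN is currently the bitmap index on STATUS
drop index asx1id_assign;

-- a plain b*tree index instead; single-row updates no longer have to
-- lock and rewrite large bitmap index entries
create index asx1id_assign on test_id_assign (status);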

Asim, February 13, 2007 - 2:43 pm UTC

Hi Tom,

Yes, the bitmap index in this case is slower than a b-tree index on the status column.

Thanks for your help on this.

Please have a look at what I tried and let me know if I got it correctly.

Looks like the difference in time between the two cases is always 3 to 4 secs.

The only thing bothering me is that if I keep running the procs for 20000 records a couple of times,
the difference in time stays the same but the time taken by each run increases.
It does not stay constant every time I run the anonymous blocks below.

============================================================================================================================


CREATE TABLE TEST_ID_ASSIGN
(
ID CHAR(16 BYTE),
STATUS NUMBER(1),
ASSIGN_DT TIMESTAMP(6)
);



Now the table TEST_ID_ASSIGN has 280000 records with STATUS = 0 (meaning available) and 120000 records with STATUS = 1 (not available).

There is a primary key on the "ID" column and a B-tree index on the STATUS column.


=============================================================================================================================



1) Step 1 :


CREATE OR REPLACE PROCEDURE ASSIGNID_TEST(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

BEGIN

UPDATE /*+ INDEX(TEST_ID_ASSIGN ASX1ID_ASSIGN)*/ test_id_assign
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE status = 0
AND ROWNUM = 1
RETURNING ID INTO V_ID;

COMMIT;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

================================================================================================================


declare
v_id varchar2(16);
v_status char(1);
v_error_code varchar2(30);
v_error_msg varchar2(2000);
v_start_time timestamp(6) := current_timestamp;
v_end_time timestamp(6);
ctr NUMBER :=0;
v_numrecs NUMBER := 20000;
begin

LOOP
EXIT WHEN ctr = v_numrecs;
assignid_test(v_id,v_status,v_error_code,v_error_msg);
ctr := ctr+1;
end loop;

v_end_time := current_timestamp;

DBMS_OUTPUT.PUT_LINE('Start Time:'||v_start_time);
DBMS_OUTPUT.PUT_LINE('End Time:'||v_end_time);

DBMS_OUTPUT.PUT_LINE('Elapsed Time:'||to_char(v_end_time - v_start_time));

end;





It took - 12.62 seconds for 20000 records.

I used the index hint for the update because without the hint it was slower.


The only thing bothering me is that if I run this anonymous block a couple of times for 20000 recs each time, the time taken increases with each run.
It does not stay constant at or around the 12.62 secs above.

===========================================================================================================================





2) Step 2 :

CREATE OR REPLACE PROCEDURE ASSIGNID_TEST1(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(6000);
XAPPERROR EXCEPTION;

CURSOR c1 IS
SELECT /*+ INDEX(TEST_ID_ASSIGN ASX1ID_ASSIGN)*/ ID FROM TEST_ID_ASSIGN WHERE STATUS = 0 AND ROWNUM =1 FOR UPDATE;

BEGIN

OPEN c1;

FETCH c1 into V_ID;

IF c1%NOTFOUND OR c1%NOTFOUND IS NULL THEN
V_ERROR_MSG := 'No ID is available for assignment';
RAISE XAPPERROR;
END IF;

UPDATE TEST_ID_ASSIGN
SET STATUS = 1,
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE CURRENT OF c1;

COMMIT;

CLOSE c1;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

EXCEPTION
WHEN XAPPERROR THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
CLOSE c1;
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;

===========================================================================================================================


declare
v_id varchar2(16);
v_status char(1);
v_error_code varchar2(30);
v_error_msg varchar2(2000);
v_start_time timestamp(6) := current_timestamp;
v_end_time timestamp(6);
ctr NUMBER :=0;
v_numrecs NUMBER := 20000;
begin

LOOP
EXIT WHEN ctr = v_numrecs;
assignid_test1(v_id,v_status,v_error_code,v_error_msg);
ctr := ctr+1;
end loop;

v_end_time := current_timestamp;

DBMS_OUTPUT.PUT_LINE('Start Time:'||v_start_time);
DBMS_OUTPUT.PUT_LINE('End Time:'||v_end_time);

DBMS_OUTPUT.PUT_LINE('Elapsed Time:'||to_char(v_end_time - v_start_time));

end;



It took - 16.97 seconds for 20000 records.

If I run this anonymous block a couple of times for 20000 recs each time, the time taken increases with each run.
It does not stay constant at or around the 16.97 secs above.

============================================================================================================================



Thanks,
Asim





Asim, February 16, 2007 - 11:34 am UTC

Hi Tom,

Thanks a lot for all your help in resolving this issue.

Now I am looking for your suggestion for the problem below-

As I told you, I have a table which will have around 110 million records in production.
When a new record comes in, we need to query the table for existing records to do some manipulation,
and then insert the record into the database. Only if the query below returns no records do we need to generate a new ID
via the proc I already discussed with you earlier in this thread; otherwise we just use the existing record's id and insert the data
into the table.

The query is as below -

CURSOR C_ABK IS
SELECT CUSTOMER_ACCOUNT_ID, ID_TP_CD,GOVT_ISSUED_ID, BIRTH_DT, MEMBER_DT,ID
FROM ACCOUNT
WHERE CUSTOMER_LINK = P_CUSTOMER_LINK
AND ID_TP_CD <> V_DETACHED_TP_CD;


We have a B-tree index on CUSTOMER_LINK and a bitmap index on ID_TP_CD, which can have the values 100, 90, 80, 75, 70, 50, 40.
CUSTOMER_ACCOUNT_ID is the primary key of the table.

The query will bring back at most 6 records per customer out of the 110 million, and most of the time (80%) it will bring back no records at all.



Is there any way we can make this query faster and improve performance?


We also created a composite index on "CUSTOMER_LINK, ID_TP_CD, CUSTOMER_ACCOUNT_ID, GOVT_ISSUED_ID, BIRTH_DT, MEMBER_DT, ID" and the query became somewhat faster, but still not fast enough to accept.

In this composite index we included CUSTOMER_ACCOUNT_ID, which already has a unique index because of the primary key.


Really appreciate your help.

Thanks,
Asim
Tom Kyte
February 17, 2007 - 11:06 am UTC

this is a transactional table - there should be NO BITMAP INDEXES AT ALL. They are entirely INAPPROPRIATE on a table that is transactional in nature.

you should have a single b*tree index on (customer_link,id_tp_cd)
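
A one-line sketch of that suggestion (the table and columns are from the question above; the index name is invented):

create index account_cust_link_idx on account (customer_link, id_tp_cd);

The C_ABK cursor can then range scan on customer_link and apply the id_tp_cd <> :x filter inside the index before ever visiting the table.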

Asim, February 27, 2007 - 1:55 pm UTC

Hi Tom,
Thank you very much for helping me. Looks like we are good now.

Only one thing which we think we might improve but wanted to check with you.

We have a table of invalid SSNs with only one column, called ssn_invalid. As of now we load it with about 50 records during the initial load. While loading each customer record, we verify that the SSN is valid by checking against this table like this -
select count(1) from ssn_invalid where ssn_invalid = '1234';

Currently the table has no primary key or index, so the query always does a full table scan with cost = 4.

But if we add primary key for this column, it does an index unique scan with cost = 0.

Are we going to gain anything by adding a primary key to this table, given that even an index scan has to search the index to verify whether a record exists?

Moreover, the index will occupy some more space.

Your views please.

Thanks,
Asim



Tom Kyte
February 27, 2007 - 2:29 pm UTC

it depends.


tkprof with and without, see what you see.

50 ssn's - probably a one block table, but 3 or 4 IO's each time you query.

make it an IOT (index organized table) and it'll still be one block, but only 1 block during the scan.


(it need not take more space)

ops$tkyte%ORA10GR2> create table t1
  2  as
  3  select Object_id invalid_ssn
  4    from all_objects
  5   where rownum <= 50;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2
  2  ( invalid_ssn primary key )
  3  organization index
  4  as
  5  select Object_id invalid_ssn
  6    from all_objects
  7   where rownum <= 50;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select count(*) from t1 where invalid_ssn = 1234;

  COUNT(*)
----------
         0

ops$tkyte%ORA10GR2> select count(*) from t2 where invalid_ssn = 1234;

  COUNT(*)
----------
         0

ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> select count(*) from t1 where invalid_ssn = 1234;

  COUNT(*)
----------
         0


Execution Plan
----------------------------------------------------------
Plan hash value: 3724264953

------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    13 |     3   (0)| 00:00:0
|   1 |  SORT AGGREGATE    |      |     1 |    13 |            |
|*  2 |   TABLE ACCESS FULL| T1   |     1 |    13 |     3   (0)| 00:00:0
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("INVALID_SSN"=1234)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        410  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> select count(*) from t2 where invalid_ssn = 1234;

  COUNT(*)
----------
         0


Execution Plan
----------------------------------------------------------
Plan hash value: 1767952272

------------------------------------------------------------------------
| Id  | Operation          | Name              | Rows  | Bytes | Cost (%
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                   |     1 |    13 |     1
|   1 |  SORT AGGREGATE    |                   |     1 |    13 |
|*  2 |   INDEX UNIQUE SCAN| SYS_IOT_TOP_66544 |     1 |    13 |     1
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("INVALID_SSN"=1234)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        410  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off

Asim, February 27, 2007 - 2:39 pm UTC

Hi Tom,

Thank you very much for your reply.

Could you please just explain this to me -

"50 ssn's - probably a one block table, but 3 or 4 IO's each time you query.

make it an IOT (index organized table) and it'll still be one block, but only 1 block during the scan.
"
The trace shows exactly that, but I cannot see why there would be "3 or 4 IO's each time I query" for the table without a primary key?

Thanks,
Asim



Tom Kyte
February 27, 2007 - 2:45 pm UTC

because it reads the extent map to figure out what block to read, the IOT didn't have to do that.

Asim, February 27, 2007 - 3:07 pm UTC

Hi Tom,

Please see below what I tried just now.
Looks like both are reading the same number of bytes of data, but the cost is less with the IOT.

I am just wondering whether this is a significant difference.

Please clarify.
==========================================================
SQL> create table t1
2 ( invalid_ssn PRIMARY KEY )
3 organization index
4 as
5 select * from ssninvalid;

Table created.

SQL> create table t2
2 as
3 select * from ssninvalid;

Table created.

SQL> set autotrace on
SQL> SELECT count(1) FROM t1 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 INDEX (UNIQUE SCAN) OF 'SYS_IOT_TOP_75244' (INDEX (UNIQU
E)) (Cost=1 Card=1 Bytes=6)





Statistics
----------------------------------------------------------
24 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
219 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t2 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes
=6)





Statistics
----------------------------------------------------------
28 recursive calls
0 db block gets
9 consistent gets
1 physical reads
0 redo size
221 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off
SQL>
==========================================================
Tom Kyte
February 27, 2007 - 3:13 pm UTC

run them again, get rid of the hard parse. you see the recursive calls? there should be none in real life.

I know!!!

use my example.

Asim, February 27, 2007 - 3:29 pm UTC

Hi Tom,

You are right. This is what I got when I ran them again.

Could you help me quantify the time saved by using an IOT in our case, given that we will visit this table around 110 million times for 110 million customer records?

I really appreciate your help.
Thanks,
Asim

=============================================
SQL> set autotrace on
SQL> SELECT count(1) FROM t1 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 INDEX (UNIQUE SCAN) OF 'SYS_IOT_TOP_75244' (INDEX (UNIQU
E)) (Cost=1 Card=1 Bytes=6)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1 consistent gets
0 physical reads
0 redo size
221 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t2 WHERE INVALID_SSN = '123456789';

COUNT(1)
----------
1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes
=6)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
221 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t1 WHERE INVALID_SSN = '987654321';

COUNT(1)
----------
0


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 INDEX (UNIQUE SCAN) OF 'SYS_IOT_TOP_75244' (INDEX (UNIQU
E)) (Cost=1 Card=1 Bytes=6)





Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
1 consistent gets
0 physical reads
0 redo size
220 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> SELECT count(1) FROM t2 WHERE INVALID_SSN = '987654321';

COUNT(1)
----------
0


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes
=6)





Statistics
----------------------------------------------------------
4 recursive calls
0 db block gets
7 consistent gets
0 physical reads
0 redo size
220 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> set autotrace off
SQL>

==========================================================



Tom Kyte
February 27, 2007 - 3:40 pm UTC

you will save 220 million logical IO's (110 million lookups times two fewer consistent gets each - 3 with the heap table versus 1 with the IOT)

I pray in real life you do not use literals.

Asim, February 27, 2007 - 3:57 pm UTC

Hi Tom,
Thank you very much again for your help.

Yes, in real life the query will use bind variables only, not literals, as the query is written inside a PL/SQL function.

Thanks,
Asim






Asim, March 02, 2007 - 9:22 am UTC

Hi Tom,
I am back again seeking your suggestion.

Now it looks like management wants to see if we can run the load process in parallel. We tried to run in parallel in two ways. So there are two sessions running in parallel and inserting records into my main table, each getting a unique id as I explained before in this thread. The final proc which is running right now is below, along with a quick recap.

============================================================
PROCEDURE CREATEID(P_ID OUT VARCHAR2, P_STATUS OUT VARCHAR2, P_ERROR_CODE OUT VARCHAR2,P_ERROR_MSG OUT VARCHAR2)
IS

PRAGMA AUTONOMOUS_TRANSACTION;

V_ID VARCHAR2(16);
V_ERROR_MSG VARCHAR2(300) := 'No ID is available for assignment';
XAPPERROR EXCEPTION;

BEGIN

UPDATE /*+ INDEX(ID_ASSIGN X1ID_ASSIGN)*/ ID_ASSIGN
SET STATUS = 'U',
ASSIGN_DT = CURRENT_TIMESTAMP
WHERE status = 'A'
AND ROWNUM = 1
RETURNING ID INTO V_ID;

IF SQL%ROWCOUNT = 0 OR V_ID IS NULL THEN
RAISE XAPPERROR;
END IF;

P_ID := V_ID;
P_STATUS := '0';
P_ERROR_CODE := NULL;
P_ERROR_MSG := NULL;

COMMIT;

EXCEPTION
WHEN XAPPERROR THEN
P_ID := NULL;
P_STATUS := '1';
P_ERROR_CODE := '-01403';
P_ERROR_MSG := V_ERROR_MSG ;
ROLLBACK;
WHEN OTHERS THEN
P_ID := NULL;
P_STATUS := '1';
DECLARE
V_SQLCODE NUMBER := SQLCODE;
V_SQL_MSG VARCHAR(512) := REPLACE(SQLERRM, CHR(10), ',');
BEGIN
P_ERROR_CODE := V_SQLCODE;
P_ERROR_MSG := V_SQL_MSG;
ROLLBACK;
END;
END;
============================================================


There is a main proc which selects from the main table (customer_account) first to see if similar records exist; if so, it gets the ID from the existing records and then inserts the record. If no records exist in the database, it calls CREATEID to generate a new ID and then inserts the record.

Now the problem is, when we run in parallel, the process becomes very slow - even slower than running serially.

Any idea what could be the reason? I tried to see if there is any lock on any table, but it does not look like there is.
Is there any database-side parameter we need to set to allow this parallel processing?

Thank you very much for all your help.
Asim


Tom Kyte
March 04, 2007 - 6:06 pm UTC

for the love of whatever you love - please use a sequence.

this code shouts "i shall be as slow as I can be and will still generate gaps"

Asim, March 05, 2007 - 10:55 am UTC

Hi Tom,
I am sorry that I could not get it.
Could you please explain once again?

Thanks,
Asim
Tom Kyte
March 05, 2007 - 2:18 pm UTC

do not write code.

please use a sequence.

that'll generate your unique ids, very fast, scalable.
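
A minimal sketch of what that could look like (the sequence name and the stripped-down signature are illustrative, not from this thread):

create sequence customer_id_seq cache 1000;

create or replace procedure createid_seq( p_id out varchar2 )
is
begin
    -- no pre-loaded table of ids, no hot row to update, no commit:
    -- a sequence is a highly concurrent counter (gaps are possible)
    select to_char(customer_id_seq.nextval)
      into p_id
      from dual;
end;
/

Every session gets its own number with no row to fight over, which is why this scales where the single-hot-row UPDATE cannot.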

Asim, March 05, 2007 - 3:38 pm UTC

Hi Tom,
Thank you for clarifying the same.
Actually that is what I suggested, and I also proved that it generates IDs much faster, but management does not want to use the sequence approach (I don't know why; probably they don't believe in sequences). Instead they are buying unique ids generated by third-party systems, like a credit card number generator company. We get these numbers in a file, load them into this table, and mark them as not available for use.

Anyway, I guess I need to convince them that this approach cannot help us run in parallel unless and until we go back to sequences.


I have one more question though:
This is what I am trying now -
- Ab Initio (the third-party tool) calls our main stored proc to add each record to the database, and it takes 1.54 mins to load 20741 records. This seemed slower to us than running the same thing from the Oracle server.

So what we did is: we put the same data in a file and prepared a .sql file containing all 20741 calls to the stored proc. We then executed this .sql file from sqlplus on the server where the database resides. But the same number of records took 4.40 mins.

Do you have any idea why sqlplus on the Oracle server took more time than the Ab Initio (third-party tool) calls, although we would expect the opposite, since the third-party tool incurs some network overhead?

Thanks,
Asim









Tom Kyte
March 05, 2007 - 8:40 pm UTC

then management has doomed you to "not be capable of doing more than one thing at a time"

You are committing every time. That means you are waiting for a log file sync (IO); every ID you get takes a very measurable amount of time.

21,000 records in 120 seconds is 0.006 seconds per record. Not bad considering each one has to wait for a log file sync wait.

you probably hard coded the information in the .sql file whereas the load program used bind variables. You might spend 95% of your run time parsing SQL instead of executing it without bind variables.

Asim, March 06, 2007 - 11:16 am UTC

Hi Tom,

This is what I am doing in .sql file -

set feedback off;
variable P_OUTPUT varchar2(4000);

exec add('1','200703220797219104','4006610000000622',.......,'N','2006-07-10',:P_OUTPUT);

exec queryadd('1','200703220797219105','4006610000000622',.......,'N','2006-07-10',:P_OUTPUT);

exec queryadd('1','200703220797219106','4006610000000622',.......,'N','2006-07-10',:P_OUTPUT);
.....

I think instead of this, I need to set all the variables I am passing every time and call add using bind variables, which will be very cumbersome in this case: I will have 20000 records, so I would have to do 20000 sets of assignments before the calls to add.


Is this what you meant?

Thanks,
Asim

Tom Kyte
March 06, 2007 - 11:22 am UTC

yes, each one is a hard parse and you spend probably as much time parsing as executing.


sqlplus is a simple, stupid command line tool - it is wholly inappropriate for what you are doing.

even if you set binds - they would be hard parses themselves.

abandon sqlplus for this exercise
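
If the test must stay server-side, one way to avoid 20,000 hard parses would be a single anonymous block that loops over the input, so the static call is parsed once and every value is a bind. A sketch (the staging table and the abbreviated parameter list are hypothetical; the real add call takes many more arguments):

declare
    v_output varchar2(4000);
begin
    for r in ( select rec_type, rec_id, card_no, flag, birth_dt
                 from staging_data )  -- hypothetical staging table
    loop
        -- a static PL/SQL call: parsed once, executed once per row
        add( r.rec_type, r.rec_id, r.card_no, r.flag, r.birth_dt,
             v_output );
    end loop;
end;
/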

Asim, March 06, 2007 - 11:27 am UTC

Hi Tom,
As usual Thank you very much for such a prompt answer and helping me.

May be I will come back again with some other problem in future.

I really appreciate your help.

Thanks,
Asim

pagination

shay, April 04, 2007 - 4:21 am UTC

hi tom,
I have table t9

create table t9 (a number,b number);

insert into t9 values (35791,1);
insert into t9 values (35863,1);
insert into t9 values (35995,1);
insert into t9 values (36363,2);
insert into t9 values (36651,1);
insert into t9 values (36783,1);
insert into t9 values (36823,1);
insert into t9 values (36849,1);
insert into t9 values (36917,2);
insert into t9 values (37177,1);
insert into t9 values (37227,1);
insert into t9 values (37245,1);
insert into t9 values (37341,1);
insert into t9 values (37451,1);
insert into t9 values (37559,1);
insert into t9 values (37581,1);
insert into t9 values (37697,1);
insert into t9 values (37933,1);
insert into t9 values (38231,1);
insert into t9 values (38649,1);

commit;

now I do :

select *
from (
select
a,b,
row_number() over
(order by a) rn
from t9)
where rn between 1 and 16
order by rn
/

         A          B         RN
---------- ---------- ----------
     35791          1          1
     35863          1          2
     35995          1          3
     36363          2          4
     36651          1          5
     36783          1          6
     36823          1          7
     36849          1          8
     36917          2          9
     37177          1         10
     37227          1         11
     37245          1         12
     37341          1         13
     37451          1         14
     37559          1         15
     37581          1         16

16 rows selected.

I would like to cut the result set after the second 2 in column b - I mean at row 9, inclusive. Is it possible?

Thanks
Tom Kyte
April 04, 2007 - 10:13 am UTC

where rn between 1 and 9


but, I think that is too easy, hence your question must be more complex than you have let us in on... so, what is the real question behind the question.

shay, April 10, 2007 - 10:03 am UTC

Sorry for not explaining myself so well.
I would like to get 15 rows, but ... if I find, let's say, 2 rows with column b = 2, then I would like to cut the result set and return only 9 rows.

I hope this one is more understandable.

Tom Kyte
April 10, 2007 - 11:24 am UTC

is it always the second time b = 2 or what is the true logic here. is b always 1's and 2's or what.

please be very precise, pretend you were explaining this to your mom - be very precise, very detailed. You understand your problem - but we have no idea what it is.
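
Assuming the requirement really is "return the rows, ordered by a, up to and including the second row with b = 2, capped at 15 rows", one analytic sketch against the t9 table above:

select a, b, rn
  from ( select a, b,
                row_number() over (order by a) rn,
                count( case when b = 2 then 1 end )
                      over (order by a
                            rows between unbounded preceding
                                 and 1 preceding) prior_twos
           from t9 )
 where prior_twos < 2  -- cut once two b=2 rows have already gone by
   and rn <= 15
 order by rn;

With the sample data this returns rows 1 through 9, the ninth being the second row with b = 2.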

Can I get an estimate of rows without running the query?

A reader, April 11, 2007 - 1:47 pm UTC

Tom,
We have some search pages within our application. Users can input multiple pieces of information to make searches more precise and return a manageable number of hits which can be easily displayed in a couple of pages. However, all pieces of information are optional and sometimes users will search with very little information. In such cases, the search query takes a very long time to run and burns up the CPU.

My question is:
Is there a way to estimate how many rows the query will return without actually running the query? The logic is if we know that the query will return 1000 rows, we will not run the query at all and ask the user to provide more information to narrow down the search.

If we try to use explain plan, the concern is that it might give incorrect cardinality estimates and we might force even the "good users" to provide more information. Conversely, we might run a bad query thinking that it will return only 20 rows. The point is I can "lie" about the estimated number but it has to be a smart lie.

Please advise what would be a good solution.

Thanks...
Tom Kyte
April 11, 2007 - 5:45 pm UTC

estimates are - well - estimates, they are not exact, they will never be exact, they are by definition GUESSES!

you can either use a predictive resource governor (the resource manager - set up a plan that won't run a query that takes more than 3 seconds; but again, it is a GUESS as to how long)

or a reactive resource governor - fail the query after using N cpu seconds
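
A sketch of the predictive flavor (the plan and consumer group names are invented; the group would still have to be created and sessions mapped to it):

begin
    dbms_resource_manager.create_pending_area;

    dbms_resource_manager.create_plan(
        plan    => 'SEARCH_LIMIT_PLAN',
        comment => 'refuse searches estimated to run too long' );

    -- predictive: if the optimizer estimates more than 3 seconds,
    -- the query is not started at all (ORA-07455)
    dbms_resource_manager.create_plan_directive(
        plan              => 'SEARCH_LIMIT_PLAN',
        group_or_subplan  => 'SEARCH_USERS',   -- hypothetical group
        comment           => 'limit ad hoc search screens',
        max_est_exec_time => 3 );

    -- every plan needs a directive for OTHER_GROUPS
    dbms_resource_manager.create_plan_directive(
        plan             => 'SEARCH_LIMIT_PLAN',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'everyone else unrestricted' );

    dbms_resource_manager.validate_pending_area;
    dbms_resource_manager.submit_pending_area;
end;
/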

"Web" pagination and read consistency

Stew Ashton, May 02, 2007 - 9:55 am UTC

HI Tom,

I would like to compare a bit more explicitly the client / server and "Web" pagination solutions. I would appreciate your comments or corrections as needed.

1) In client / server, we can maintain the connection and keep the cursor open, so we just execute the full query once and fetch the first "page". Subsequent pages will simply require additional fetches. This means we have read consistency throughout, since we're still within one query.

2) In Web applications, everything is "stateless": every time we get a request from the user, we have to "start over", so every page requires a new query. Side effect: we lose read consistency.

To maintain read consistency in a stateless environment, I thought of using flashback queries:

variable n number
exec select dbms_flashback.get_system_change_number into :n from dual;
SELECT /*+ FIRST_ROWS */ * FROM
(SELECT p.*, rownum rnum FROM
(SELECT <whatever> FROM <table> as OF SCN :n ORDER BY <something unique>) p
WHERE rownum <= 200)
WHERE rnum > 100;

Of course, the application would need to keep the scn around between requests.

3) Would this indeed get us the same read consistency as the client / server solution?

4) Can you see any side effects or gotchas? Performance issues? It would seem to me that most of the gotchas (such as "snapshot too old") would apply to any "read consistent" solution.

Thanks in advance!

PS: sorry, couldn't get the code button to work.
Tom Kyte
May 02, 2007 - 5:06 pm UTC

3) sure

4) just what you pointed out


can you describe what you mean by "i could not get the code button to work"?

Code button

Stew Ashton, May 03, 2007 - 11:51 am UTC

Aha! I was creating a test case, when it occurred to me that I had modified the font options in Firefox. When I "allow pages to choose their own fonts, instead of my selections above", I miraculously see fixed width when I use the code button.

As Emily Litella (remember her?) would say : Never mind!

Row Orders in one select statement

Elahe Faghihi, May 15, 2007 - 10:19 am UTC

Hi Tom,

How could I write one select statement that returns the row orders properly?

create table t1 (a varchar2(30));

insert into t1 (a)
values ('first');

insert into t1 (a)
values ('second');

insert into t1 (a)
values ('third');

insert into t1 (a)
values ('forth');

commit;

select * from t1;

A
======
first
second
third
forth



I would like to run a query which could return this:

Row_order A
==============================
1 first
2 second
3 third
4 forth


Tom Kyte
May 15, 2007 - 8:58 pm UTC

you better fix your data model then?

properly is in the eye of the beholder, to me ANY order of those rows would be correct and proper since you stuffed the data in there without anything meaningful to sort by.
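
If the model can be fixed as suggested - that is, something meaningful to sort by is actually stored - the query becomes trivial. A sketch (the ins_order column and the t1_seq sequence are inventions for illustration; the four existing rows would need ins_order backfilled by hand):

alter table t1 add ( ins_order number );
create sequence t1_seq;

-- new rows record their own arrival order
insert into t1 (a, ins_order) values ('fifth', t1_seq.nextval);

select row_number() over (order by ins_order) row_order, a
  from t1
 order by ins_order;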

Well, if you really must ...

Greg, May 16, 2007 - 9:28 am UTC

For Elahe Faghihi :

If you really really cannot "fix" the data model as Tom says .. here's a sneaky/ugly/fun way of doing it .. ;) heh

SQL > drop table junk;

Table dropped.

SQL > create table junk as
  2       select to_char(to_date(level, 'j'), 'jspth' ) a
  3             from dual
  4          connect by level <= 5;

Table created.

SQL > select jk.a
  2    from junk jk,
  3         ( select level lvl,
  4                  to_char(to_date(level, 'j'), 'jspth' ) spt
  5             from dual
  6          connect by level <= 125  -- pick a big number .. or do a max(a) on junk ...
  7          ) dl
  8   where dl.spt = jk.a
  9   order by dl.lvl
 10  /

A
---------------
first
second
third
fourth
fifth

5 rows selected.

rownum indexes order by

Tony, June 21, 2007 - 4:23 pm UTC

Tom,
Thanks a lot for your help
I have two queries:

1)select * from ( select t.tex_id from tex_t t where t.status = 5005 order by t.c_date desc, t.t_num asc, t.tex_id asc ) where rownum < 20;

The columns in the order by were not indexed, so I created an index on these three columns (c_date desc, t_num, tex_id).

The query results came back in one second (from 2 minutes without the index).

For the following query there is no index for the order by clause either. When I create an index on (pkup_date, t_num, tex_id), the query below starts using it, but the problem is that the first query then stops using its index and goes back to a full table scan.

In other words, only one index works at a time. Can you please guide me?

2)select * from ( select t.tex_id from tex_t t where t.status = 5010 and (t.tdr_count >= 1 or t.p_count >= 1) and t.cur_stat_id <> 11 order by t.pkup_date asc, t.t_num asc, t.tex_id asc ) where rownum < 20 ;

Tom Kyte
June 22, 2007 - 10:16 am UTC