Question and Answer

Tom Kyte

Thanks for the question, Georg.

Asked: January 05, 2003 - 5:31 pm UTC

Last updated: March 02, 2004 - 6:49 pm UTC

Version: 8.1.6

Viewed 1000+ times

You Asked

Hi Tom,

Maybe this is a question for your next book on tuning (the one you mentioned some time ago on these pages).

The first step - recognizing the problem (a slow SQL statement) - is described very well in several books (e.g. using tracing and tkprof).

Examples in books of how to tune a statement are always so simple, but reality is not. How about very complex views - say, more than 15 tables and one or more inline views?

I can see the problem in the tkprof output (in one step several billion rows are affected, and just one step further they are filtered down to only some hundreds), and in most cases I have an idea where it comes from (e.g. an inline view produces too much data).

I am not satisfied with my tuning skills - it's still a little bit of trial and error.

To solve a "complex slow query problem" I usually:
- try to understand the statement
- try to split the statement (are subqueries fast enough?)
- check the joins (are existing indexes used, and if not, is that perhaps correct?)
...

What is your way to solve this task? Could you give some practical examples in your next book?

By the way - when will I be able to buy the next best seller from you?

Many thanks
Georg


and Tom said...

This is a question I get a lot.

It is a question to which I have no answer. Well, I sort of do.

It is a matter of knowing all of the available access paths and when to use them and how to make them available.

It is a matter of knowing the question (many times I don't tune a query, I tune the question and develop an entirely new query).

It is a matter of knowing all of the available bits of functionality -- I mean, suppose you were looking at a query that did a self join of a table with itself -- joining each record with the record that "came before it" (eg: to compare the difference in time between the last visit and the prior visit to the doctor's office). Unless you knew about LAG() and LEAD() -- you would not be able to tune this.
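
As a rough sketch of that idea (the visits table and its columns below are made up purely for illustration):

select patient_id,
       visit_date,
       visit_date - lag(visit_date)
                    over (partition by patient_id order by visit_date) days_since_prior_visit
  from visits
 order by patient_id, visit_date

each row is compared to the one that "came before it" -- no self join needed.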

Mastering all aspects of the SQL language. Tell me -- can you, in a single query, tell me what the most frequently occurring object_type in your database is? That is -- just write a query that returns the name(s) of the object_type that has the highest count(*). I can do it in so many different ways -- many people cannot do it without writing code.

Understanding that indexes != fast, full scans != slow. Being prepared to throw out every existing "rule of thumb" you have. The only rule of thumb that holds water is "that every rule of thumb is wrong".

Thinking about what the machine might have to do in order to process your query.

Do you really *need* that outer join, or is it there just in case? What if you select a select instead... (it is all about different access paths)...
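
A sketch of what "select a select" can look like (emp/dept here are just the usual demo tables, standing in for your own). An outer join such as:

select e.ename, d.dname
  from emp e, dept d
 where e.deptno = d.deptno(+)

can -- when the lookup returns at most one row and you only need that one column -- be written as a scalar subquery in the select list:

select e.ename,
       (select d.dname from dept d where d.deptno = e.deptno) dname
  from emp e

Same answer, but it gives the optimizer a very different access path to consider.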


and so on. Sorry -- if there were a way to encode in writing "here are the 15 steps to tune your query" or even "the 15,000,000 steps" -- I would be off turning that into software. It would be the world's best query rewrite engine -- generating queries that were optimally tuned (we have a couple of developers working on stuff like that).


Book -- spring time (I hope; it is slow going). Very different from the last book. More like this site -- best practices sort of stuff. More pointers to where to get more information, sometimes, than the information itself... Very different from Expert one on one though.


Comments

We are really looking forward to your next book

Lakshmi narasimhan R, January 06, 2003 - 2:14 am UTC

Hi tom

We will be glad to get your book. We would be really happy if you could include how to read an explain plan and its different meanings and interpretations. Even though it is covered in a lot of books, including the Oracle documentation, I am of the opinion that it is not explained in easy words -- a bit hard and confusing. We hope you can give it in a straightforward, simple way.

thanks in advance.

Tom Kyte
January 06, 2003 - 7:45 am UTC

I'm not doing that -- I find the performance guide does it well. Do you have a specific question? It is just a tree -- it is pretty straightforward to read.

Give me a question and I'll answer it -- but I'm not going to write how to read it as that is well documented.

Oracle SQL Tuning Book

Moorthy Rekapalli, January 06, 2003 - 10:48 am UTC

Hi,

First of all, let me thank Tom and his team for their contributions to the Oracle community. I learned a lot from Tom's book.

Oracle SQL High-Performance Tuning by Guy Harrison is a very good book that is dedicated to different access paths, how to interpret explain plan, tkprof etc...

This book will be a very good companion to Tom's Oracle expert one-on-one book.

Just thought of sharing my 2 cents with the Oracle community.

Hope this helps,
Moorthy Rekapalli.


an idea for the next book :)

John, January 06, 2003 - 12:12 pm UTC

when you are all done, break it into 3 volumes and INCREASE THE POINT SIZE!!!! :)
You'll be able to sell 3 books, and each will have more info than most, and I'll be able to read 2 chapters without getting a headache!

(what were you using on Expert? 2-point, 3-point?)
(maybe I'm just getting old)
Thanks for the knowledge


Tom Kyte
January 06, 2003 - 12:26 pm UTC

Hey -- I write it on 21" crt's or 20" lcds (bought myself a christmas present for at home -- a pair of 20" lcd monitors -- man does that jam!).

The fonts are big to me when I look at them ;)

Seriously -- I'll send this along to WROX -- they do that stuff, I have no control over it really.

I'm glad I had a part in that!

John, January 06, 2003 - 1:07 pm UTC

(financing your Christmas present) :)
I've been through the appendixes and just started the
first few chapters. Very well written and informative.
Your Joe User style makes it an easy read (once I put the bottlenecks on, \OO/ heh, heh)
I can't express enough how useful this site has been for me.
(btw I also have an LCD 17" and find it too "crisp")
If "Knowledge is Power" you are one with much power.

Looking forward to the next book (or 3)
Thanks again

Your next book

A reader, January 06, 2003 - 2:30 pm UTC

Hi Tom,

When is your 3rd book out? May I peek at the table of contents? Is there any draft on the web site?
I just can't wait to get your new book. Thanks!

Steve


Print of Expert one on one

A reader, January 06, 2003 - 4:43 pm UTC

Tom,
Just to let you know that I buy books and try to read them as soon as possible (before a new version of the book or the software is available and I lose interest), and I did the same with your book. But I also found the print hard to read (I am talking about the printing from WROX here). In fact, when I first bought it and tried to read it, I just stopped because of the print. But the more I visited your site, the more I came to know about your authority on the subject, and I have started reading again -- but print-wise it is really hard to read. I know the print of O'Reilly books <> the print of WROX books, but I find O'Reilly's print really comfortable, which I have already written to WROX about.
Regards,


fast query

ASHRAF, January 07, 2003 - 6:43 am UTC

In a data warehouse database I need the queries to run fast, and I care only about how I can get the output fast, so I wonder about the CBO -- maybe it is not the fastest?

Tom Kyte
January 08, 2003 - 2:04 pm UTC

In a data warehouse -- CBO is the only way to go....

bitmap indexes = CBO
partitioning = CBO
parallel = CBO
function based indexes = CBO
star transformations = CBO
materialized views = CBO

my point would be, if you stand up a data warehouse and DON'T use the CBO, you got it sdrawkcab -- that's backwards ;)

A reader, January 07, 2003 - 1:48 pm UTC

Tom,

Just a suggestion: is it possible to get the archives of Q & A from this site on CD-ROM, organized by date? It would be a cool thing to have. I am pretty sure viewers of this site wouldn't mind spending some amount if need be.

Thanks.



Tom Kyte
January 08, 2003 - 2:39 pm UTC

nope, all online, all of the time.

feel free to download -- I try to make that really easy. A simple "save link to disk" and that week is done.

archiving asktom

hrishy, January 22, 2003 - 3:37 am UTC

Hi Tom

Certainly this is a great place to be for any kind of technology-related question, whether you are a DBA/developer or have anything remotely to do with Oracle :-D. So many times I find myself archiving the threads -- some of them for the way a query was written, some for an innovative way of explaining things (my favourite one being consistent gets ;-D). But when I try to save this web page it gets some funny name, which I later change to the topic of the thread.

1) Can we have an option of saving it by thread name when I save it to disk (I am just being lazy about renaming)?
2) What is the problem with Oracle folks burning this forum onto a CD and selling it or bundling it along with your books? I find it handy when I visit client sites, just the way I carry the Oracle manuals, as we don't have internet access at many client places, and I increasingly find myself slightly modifying your queries to come up with innovative solutions to problems.

Tom Kyte
January 22, 2003 - 8:16 am UTC

1) at the bottom of every article is a:

Bookmark this page with the link HERE

link -- that gives you a clean URL that won't change over time.


2) I don't get it -- just go to the archive by week and "save link to disk". There you go.

the second I put it onto CD, it's out of date. It takes seconds to download a week's worth of stuff -- you can put it on a floppy or burn a CD from it or just put it on your laptop....

It is a matter of distribution, time, etc....

With regards to...

Kashif, January 22, 2003 - 2:12 pm UTC

"Mastering all aspects of the SQL language. Tell me -- can you in a single query
tell me what the most frequently occuring object_type in your database is? That
is -- just write a query that returns the name(s) of the object_type that has
the highest count(*). I can do it in so many different ways -- many people
cannot do it without writing code."

I've come up with two, using techniques I learned from this site:

select object_type
from all_objects
group by object_type
having count (object_type) >= ALL (select count (object_type)
from all_objects
group by object_type)

AND

select object_type from
(
select object_type, count (object_type) cnt
from all_objects
group by object_type
order by cnt desc
)
where rownum = 1

Can you show us some other queries to solve the same problem? The first of my queries runs for quite a bit, which makes sense since there's so much sorting going on. Thanks.

Kashif

Tom Kyte
January 22, 2003 - 2:34 pm UTC

well -- your queries point out an interesting issue with this question.  

ops$tkyte@ORA920> create table t
  2  as
  3  select * from all_objects
  4   where object_type not in ( 'JAVA CLASS', 'SYNONYM' );

Table created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> create trigger t_trigger before insert on t for each row
  2  begin
  3          select decode(:new.object_type,'TABLE','VIEW','TABLE')
  4            into :new.object_type
  5            from dual;
  6  end;
  7  /

Trigger created.

ops$tkyte@ORA920> insert into t select * from t where object_type in ( 'TABLE', 'VIEW' );

3566 rows created.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select object_type, count(*)
  2    from t
  3   group by object_type
  4   order by 2
  5  /

OBJECT_TYPE          COUNT(*)
------------------ ----------
CONSUMER GROUP              2
MATERIALIZED VIEW           4
CONTEXT                     6
LOB                         6
DIRECTORY                   9
INDEXTYPE                   9
EVALUATION CONTEXT         11
RULE SET                   15
JAVA SOURCE                18
OPERATOR                   28
PROCEDURE                  50
TABLE PARTITION            61
TYPE BODY                  61
TRIGGER                    95
LIBRARY                   104
SEQUENCE                  127
INDEX PARTITION           134
FUNCTION                  147
JAVA RESOURCE             190
JAVA DATA                 291
PACKAGE BODY              533
PACKAGE                   590
TYPE                      862
INDEX                    1043
TABLE                    3566
VIEW                     3566

26 rows selected.

ops$tkyte@ORA920>
ops$tkyte@ORA920> select object_type
  2    from t
  3   group by object_type
  4  having count (object_type) >= ALL (select count (object_type)
  5                                       from t
  6                                      group by object_type)
  7  /

OBJECT_TYPE
------------------
TABLE
VIEW

ops$tkyte@ORA920>
ops$tkyte@ORA920> select object_type
  2    from ( select object_type, count (object_type) cnt
  3             from t
  4            group by object_type
  5            order by cnt desc
  6     )
  7   where rownum = 1
  8  /

OBJECT_TYPE
------------------
TABLE

ops$tkyte@ORA920>


So, which one is right?  Both are, neither is, the first one is, the second one is.  Depends on your perspective..... (eg: the question is ambiguous ;)


Anyway -- here are some others:

ops$tkyte@ORA920> select *
  2    from (
  3  select object_type, row_number() over ( order by cnt desc nulls last ) rn
  4    from ( select object_type, count(*) cnt
  5             from t
  6            group by object_type )
  7         )
  8   where rn = 1
  9  /

OBJECT_TYPE                RN
------------------ ----------
TABLE                       1

ops$tkyte@ORA920>
ops$tkyte@ORA920> select * from (
  2  select object_type, cnt, max(cnt) over () max_cnt
  3    from ( select distinct object_type,
  4                  count(*) over (partition by object_type) cnt
  5             from t )
  6  ) where cnt = max_cnt
  7  /

OBJECT_TYPE               CNT    MAX_CNT
------------------ ---------- ----------
TABLE                    3566       3566
VIEW                     3566       3566

ops$tkyte@ORA920>
ops$tkyte@ORA920> select object_type
  2    from t
  3   group by object_type
  4  having count(*) = ( select max(count(*))
  5                        from t
  6                       group by object_type )
  7  /

OBJECT_TYPE
------------------
TABLE
VIEW

ops$tkyte@ORA920>
ops$tkyte@ORA920> select substr( max( to_char(count(*),'fm0000000000009')||object_type ), 14 )
  2    from t
  3   group by object_type
  4  /

SUBSTR(MAX(TO_CHAR(
-------------------
VIEW

 

A reader, January 22, 2003 - 3:05 pm UTC

Hi Tom,

You are using the ALL operator in a SQL query.

What is that?

ops$tkyte@ORA920>
ops$tkyte@ORA920> select object_type
2 from t
3 group by object_type
4 having count (object_type) >= ALL (select count (object_type) <------
5 from t
6 group by object_type)
7 /


Tom Kyte
January 22, 2003 - 3:13 pm UTC

just read it.

lines 4-6 in English are:

keep the rows having the count that is greater than or equal to ALL of the counts by object type....


There is an ANY modifier as well -- expression <operator> ANY ( set ) just like
expression <operator> ALL ( set )
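
For example -- just a quick sketch against the same table t used above:

select object_type
  from t
 group by object_type
having count(object_type) > ANY ( select count(object_type)
                                    from t
                                   group by object_type )

reads as "keep the rows having a count greater than at least one of the counts by object type" -- that is, every object_type except the one(s) tied for the lowest count.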




again, a definition might be of help here...

A reader, January 22, 2003 - 3:28 pm UTC

At the Merriam-Webster website,

http://www.m-w.com/

the following definition for the word 'ALL' was found:

1 a : the whole amount or quantity of <needed all the courage they had> <sat up all night> b : as much as possible <spoke in all seriousness>
2 : every member or individual component of <all men will go> <all five children were present>
3 : the whole number or sum of <all the angles of a triangle are equal to two right angles>
4 : EVERY <all manner of hardship>
5 : any whatever <beyond all doubt>
6 : nothing but : ONLY: a : completely taken up with, given to, or absorbed by <became all attention> b : having or seeming to have (some physical feature) in conspicuous excess or prominence <all legs> c : paying full attention with <all ears>
7 dialect : used up : entirely consumed -- used especially of food and drink
8 : being more than one person or thing <who all is coming>
synonym see WHOLE
- all the : as much of... as : as much of a... as <all the home I ever had>

compare the two queries

Winston, January 22, 2003 - 6:44 pm UTC

I compared the last two approaches using runstats; one approach runs faster than the other, but takes more cache buffers chains and shared pool latches, with fewer library cache latches.

Which approach would you choose, and why? How do you compare the cost of different latches?

Thanks



declare
2 l_start number;
3 l_run1 number;
4 l_run2 number;
5
6 type rc is ref cursor;
7 l_rc rc;
8 begin
9 insert into run_stats select 'before', stats.* from stats;
10
11 l_start := dbms_utility.get_time;
12 for i in 1 .. 1000
13 loop
14 open l_rc for 'select object_type
15 from t
16 group by object_type
17 having count(*) = ( select max(count(*))
18 from t
19 group by object_type )';
20 close l_rc;
21 end loop;
22 l_run1 := (dbms_utility.get_time-l_start);
23 dbms_output.put_line( l_run1 || ' hsecs' );
24
25 insert into run_stats select 'after 1', stats.* from stats;
26 l_start := dbms_utility.get_time;
27 for i in 1 .. 1000
28 loop
29 open l_rc for 'select substr( max(
30 to_char(count(*),''fm0000000000009'')||object_type ), 14 )
31 from t
32 group by object_type' ;
33 close l_rc;
34 end loop;
35 l_run2 := (dbms_utility.get_time-l_start);
36 dbms_output.put_line( l_run2 || ' hsecs' );
37 dbms_output.put_line
38 ( 'run 1 ran in ' || round(l_run1/l_run2*100,2) || '% of the time' );
39
40 insert into run_stats select 'after 2', stats.* from stats;
41 end;
42 /
27 hsecs
36 hsecs
run 1 ran in 75% of the time

PL/SQL procedure successfully completed.


select a.name, b.value-a.value run1, c.value-b.value run2,
2 ( (c.value-b.value)-(b.value-a.value)) diff
3 from run_stats a, run_stats b, run_stats c
4 where a.name = b.name
5 and b.name = c.name
6 and a.runid = 'before'
7 and b.runid = 'after 1'
8 and c.runid = 'after 2'
9 and (c.value-a.value) > 0
10 and (c.value-b.value) <> (b.value-a.value)
11 order by abs( (c.value-b.value)-(b.value-a.value))
12 /

NAME RUN1
-------------------------------------------------------------------------------- ----------
RUN2 DIFF
---------- ----------
LATCH.cache buffers lru chain 1
0 -1

LATCH.checkpoint queue latch 2
3 1

LATCH.undo global data 0
1 1

LATCH.session allocation 1
0 -1

STAT...messages sent 0
1 1

STAT...session cursor cache hits 1000
1001 1

STAT...free buffer requested 3
2 -1

STAT...table fetch continued row 2
0 -2

STAT...redo entries 16
13 -3

LATCH.redo allocation 15
19 4

STAT...sorts (memory) 8
4 -4

STAT...enqueue releases 7
3 -4

STAT...cluster key scans 5
0 -5

STAT...session cursor cache count 5
0 -5

STAT...recursive cpu usage 31
36 5

STAT...enqueue requests 8
3 -5

STAT...db block changes 27
22 -5

LATCH.enqueue hash chains 12
6 -6

STAT...opened cursors current 7
1 -6

STAT...parse time cpu 5
12 7

STAT...parse time elapsed 5
12 7

STAT...db block gets 25
17 -8

LATCH.messages 3
13 10

STAT...cluster key scan block gets 10
0 -10

LATCH.enqueues 24
12 -12

LATCH.library cache load lock 14
0 -14

STAT...rows fetched via callback 18
0 -18

STAT...opened cursors cumulative 1024
1005 -19

STAT...sorts (rows) 2117
2085 -32

STAT...parse count (total) 1038
1005 -33

STAT...execute count 1042
1005 -37

STAT...calls to get snapshot scn: kcmgss 1043
1005 -38

STAT...redo size 21552
21512 -40

STAT...buffer is pinned count 49
0 -49

LATCH.row cache objects 142
72 -70

STAT...table fetch by rowid 85
8 -77

STAT...no work - consistent read gets 93
8 -85

STAT...buffer is not pinned count 179
16 -163

STAT...consistent gets 209
12 -197

STAT...session logical reads 234
29 -205

LATCH.cache buffers chains 412
115 -297

STAT...recursive calls 3506
3030 -476

LATCH.shared pool 1322
146 -1176

LATCH.library cache 2714
5256 2542

STAT...session uga memory -4264
27464 31728


45 rows selected.

Tom Kyte
January 23, 2003 - 7:39 am UTC

well, I would not run it using NDS (native dynamic SQL), and when I run it statically:

ops$tkyte@ORA920> declare
  2      l_start number;
  3      l_run1  number;
  4      l_run2  number;
  5
  6      l_seq   number;
  7  begin
  8      insert into run_stats select 'before', stats.* from stats;
  9
 10      l_start := dbms_utility.get_time;
 11      for i in 1 .. 100
 12      loop
 13          for x in ( select object_type
 14                       from t
 15                      group by object_type
 16                     having count(*) = (select max(count(*))
 17                                          from t
 18                                         group by object_type ) )
 19          loop
 20              null;
 21          end loop;
 22      end loop;
 23      l_run1 := (dbms_utility.get_time-l_start);
 24      dbms_output.put_line( l_run1 || ' hsecs' );
 25
 26      insert into run_stats select 'after 1', stats.* from stats;
 27      l_start := dbms_utility.get_time;
 28      for i in 1 .. 100
 29      loop
 30          for x in ( select substr( max(to_char(count(*),'fm0000000000009')||object_type ), 14 )
 31                       from t
 32                      group by object_type )
 33          loop
 34              null;
 35          end loop;
 36      end loop;
 37      l_run2 := (dbms_utility.get_time-l_start);
 38      dbms_output.put_line( l_run2 || ' hsecs' );
 39      dbms_output.put_line
 40      ( 'run 1 ran in ' || round(l_run1/l_run2*100,2) || '% of the time' );
 41
 42      insert into run_stats select 'after 2', stats.* from stats;
 43  end;
 44  /
230 hsecs
120 hsecs
run 1 ran in 191.67% of the time

PL/SQL procedure successfully completed.

I get totally the opposite of your results -- 0.27 seconds for 1,000 executions is a pretty fast number to say the least -- these stats:

STAT...consistent gets                                                           
       209
        12       -197

STAT...session logical reads                                                     
       234
        29       -205

show that your table was pretty much "empty"??

 

I missed the fetch of ref cursor...table contains 4123 records

Winston, January 24, 2003 - 5:49 pm UTC

How do you interpret the following benchmark difference?
Run 1 ran in 183.91% of the time but took fewer consistent gets and session logical reads. Details below:

declare
l_start number;
l_run1 number;
l_run2 number;
l_object_type t.object_type%type;

type rc is ref cursor;
l_rc rc;
begin
insert into run_stats select 'before', stats.* from stats;

l_start := dbms_utility.get_time;
for i in 1 .. 100
loop
open l_rc for 'select object_type from (
select object_type, cnt, max(cnt) over () max_cnt
from ( select distinct object_type,
count(*) over (partition by object_type) cnt
from t )
) where cnt = max_cnt';
loop
fetch l_rc into l_object_type;
exit when l_rc%notfound;
end loop;
close l_rc;
end loop;
l_run1 := (dbms_utility.get_time-l_start);
dbms_output.put_line( l_run1 || ' hsecs' );

insert into run_stats select 'after 1', stats.* from stats;
l_start := dbms_utility.get_time;
for i in 1 .. 100
loop
open l_rc for 'select object_type
from t
group by object_type
having count(*) = ( select max(count(*))
from t
group by object_type )';
loop
fetch l_rc into l_object_type;
exit when l_rc%notfound;
end loop;
close l_rc;

end loop;
l_run2 := (dbms_utility.get_time-l_start);
dbms_output.put_line( l_run2 || ' hsecs' );
dbms_output.put_line
( 'run 1 ran in ' || round(l_run1/l_run2*100,2) || '% of the time' );

insert into run_stats select 'after 2', stats.* from stats;
end;
/

pause
select a.name, b.value-a.value run1, c.value-b.value run2,
( (c.value-b.value)-(b.value-a.value)) diff
from run_stats a, run_stats b, run_stats c
where a.name = b.name
and b.name = c.name
and a.runid = 'before'
and b.runid = 'after 1'
and c.runid = 'after 2'
and (c.value-a.value) > 0
and (c.value-b.value) <> (b.value-a.value)
order by abs( (c.value-b.value)-(b.value-a.value))
/
RUN2 DIFF
---------- ----------
LATCH.active checkpoint queue latch 2
1 -1

LATCH.session allocation 1
0 -1

LATCH.undo global data 0
1 1

STAT...cursor authentications 0
1 1

STAT...messages sent 0
1 1

STAT...opened cursors current 3
4 1

STAT...session cursor cache hits 100
101 1

STAT...free buffer requested 3
2 -1

STAT...cluster key scans 2
1 -1

LATCH.cache buffers lru chain 2
0 -2

STAT...parse time elapsed 4
2 -2

STAT...table fetch continued row 2
0 -2

LATCH.redo writing 8
5 -3

STAT...redo entries 16
13 -3

LATCH.redo allocation 15
19 4

STAT...cluster key scan block gets 2
6 4

STAT...db block changes 27
22 -5

STAT...enqueue releases 7
2 -5

STAT...session cursor cache count 5
0 -5

STAT...enqueue requests 8
2 -6

LATCH.enqueue hash chains 12
4 -8

LATCH.library cache load lock 10
2 -8

STAT...opened cursors cumulative 119
108 -11

LATCH.enqueues 24
8 -16

STAT...rows fetched via callback 18
0 -18

STAT...execute count 133
112 -21

STAT...calls to get snapshot scn: kcmgss 134
112 -22

STAT...parse count (total) 133
108 -25

STAT...redo size 21508
21468 -40

LATCH.checkpoint queue latch 98
51 -47

LATCH.row cache objects 126
78 -48

STAT...buffer is pinned count 49
0 -49

STAT...table fetch by rowid 80
13 -67

STAT...table scans (short tables) 100
200 100

STAT...sorts (memory) 308
204 -104

LATCH.shared pool 412
246 -166

STAT...recursive cpu usage 431
237 -194

STAT...recursive calls 974
701 -273

LATCH.library cache 855
497 -358

STAT...db block gets 424
817 393

STAT...sorts (rows) 827917
826685 -1232

STAT...consistent gets 5376
10439 5063

STAT...buffer is not pinned count 5357
10436 5079

STAT...no work - consistent read gets 5276
10423 5147

STAT...table scan blocks gotten 5200
10400 5200

STAT...session logical reads 5800
11256 5456

LATCH.cache buffers chains 11554
22563 11009

STAT...session pga memory max 15104
0 -15104

STAT...session pga memory 77400
0 -77400

STAT...table scan rows gotten 412300
824600 412300

* I have to use NDS to work around PLS-00103 error
* select count(*) from t;

COUNT(*)
----------
4123


Many thanks!

Tom Kyte
January 24, 2003 - 6:13 pm UTC

I don't know

I don't know the tables you used
I cannot really read your input as it wraps and goes for page after page after page....



I can say the first one is more CPU intensive -- on a CPU-challenged machine, it could tend to run slower.

interpreting runstats

Winston, January 24, 2003 - 6:57 pm UTC

From what you're saying, I get the feeling that when we use runstats to compare two approaches we should care more about "STAT...consistent gets" and "STAT...session logical reads" than about total elapsed time. Please confirm.

I generated table t the same way you did in this thread.
Trying to send the result again, below:

423 hsecs
229 hsecs
run 1 ran in 184.72% of the time

PL/SQL procedure successfully completed.

> pause

> select a.name, b.value-a.value run1, c.value-b.value run2,
2 ( (c.value-b.value)-(b.value-a.value)) diff
3 from run_stats a, run_stats b, run_stats c
4 where a.name = b.name
5 and b.name = c.name
6 and a.runid = 'before'
7 and b.runid = 'after 1'
8 and c.runid = 'after 2'
9 and (c.value-a.value) > 0
10 and (c.value-b.value) <> (b.value-a.value)
11 order by abs( (c.value-b.value)-(b.value-a.value))
12 /

NAME RUN1
-------------------------------------------------------------------------------- ----------
RUN2 DIFF
---------- ----------
LATCH.session allocation 1
0 -1

LATCH.undo global data 1
0 -1

STAT...cluster key scans 2
1 -1

STAT...cursor authentications 0
1 1

STAT...hot buffers moved to head of LRU 2
1 -1

STAT...opened cursors current 3
4 1

STAT...session cursor cache hits 100
101 1

STAT...free buffer requested 3
2 -1

LATCH.active checkpoint queue latch 2
0 -2

LATCH.redo writing 6
4 -2

STAT...table fetch continued row 2
0 -2

STAT...parse time cpu 1
4 3

STAT...redo entries 16
13 -3

LATCH.redo allocation 15
19 4

STAT...cluster key scan block gets 2
6 4

STAT...db block changes 27
22 -5

STAT...session cursor cache count 5
0 -5

STAT...enqueue releases 7
2 -5

STAT...enqueue requests 8
2 -6

LATCH.enqueue hash chains 12
4 -8

LATCH.library cache load lock 10
2 -8

LATCH.messages 14
5 -9

STAT...opened cursors cumulative 119
108 -11

LATCH.enqueues 24
8 -16

STAT...rows fetched via callback 18
0 -18

STAT...execute count 133
112 -21

STAT...calls to get snapshot scn: kcmgss 134
112 -22

STAT...parse count (total) 133
108 -25

STAT...redo size 21524
21480 -44

LATCH.checkpoint queue latch 74
27 -47

LATCH.row cache objects 126
78 -48

STAT...buffer is pinned count 49
0 -49

STAT...table fetch by rowid 80
13 -67

STAT...table scans (short tables) 100
200 100

STAT...sorts (memory) 308
204 -104

LATCH.shared pool 414
250 -164

STAT...recursive cpu usage 433
235 -198

STAT...recursive calls 974
701 -273

LATCH.library cache 855
497 -358

STAT...db block gets 424
817 393

STAT...sorts (rows) 827917
826685 -1232

STAT...consistent gets 5376
10439 5063

STAT...buffer is not pinned count 5357
10436 5079

STAT...no work - consistent read gets 5276
10423 5147

STAT...table scan blocks gotten 5200
10400 5200

STAT...session logical reads 5800
11256 5456

LATCH.cache buffers chains 11555
22562 11007

STAT...session pga memory 25800
0 -25800

STAT...table scan rows gotten 412300
824600 412300


49 rows selected.


Tom Kyte
January 25, 2003 - 11:01 am UTC

need to teach you a simple command:

SQL> column name format a20
SQL> column run1 format 9999999
....


be really nice if your followup was about 1/4 of its current size (and readable)


LATCHES are what I look at most. 
Consistent gets -- sure, certainly.
CPU time can be very important as well -- something we are not seeing here.  For these two queries I would be tempted to use tkprof.  runstats is good for "approaches" (techniques), tkprof for queries.
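
(A minimal sketch of capturing that -- the trace file name below is made up; yours will differ:

alter session set timed_statistics = true;
alter session set sql_trace = true;

-- run the queries of interest, exit, then from the OS prompt:
-- tkprof ora920_ora_1234.trc report.txt sys=no
)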

Doing that on this and running each query 3 times we see:

select object_type
  from t
 group by object_type
having count(*) = ( select max(count(*))
                      from t
                     group by object_type )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        6      0.07       0.22          0       1108          1           6
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.07       0.22          0       1108          1           6

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 241

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  FILTER
     25   SORT GROUP BY
  11439    TABLE ACCESS FULL T
      1   SORT AGGREGATE
     25    SORT GROUP BY
  11439     TABLE ACCESS FULL T

********************************************************************************

select object_type
  from ( select object_type, cnt, max(cnt) over () max_cnt
           from ( select distinct object_type,
                         count(*) over (partition by object_type) cnt
                    from t )
        ) where cnt = max_cnt

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        6      0.13       0.21          0        552          0           6
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.13       0.22          0        552          0           6

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 241

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  VIEW
     25   WINDOW BUFFER
     25    VIEW
     25     SORT UNIQUE
  11439      WINDOW SORT
  11439       TABLE ACCESS FULL T


So, as I said -- on a CPU challenged machine, the second query is somewhat MORE cpu intensive and this will show up as slower performance.

However, it does significantly less logical IO which is a bonus.

So -- it is a toss-up.  Do you have some extra CPU power (it seems you do not)?  Then the second might be a little more scalable, as it will do less latching and such.  Are you CPU challenged?  Then the first might be more appealing.

Best Practices

David, March 24, 2003 - 4:15 pm UTC

You make some very good points, as always. I'd be interested to know if you have some SQL 'best practices' in terms of performance. More than likely, "You shall use bind variables" is your #1 commandment. I'm wondering if you have a Top Ten. I'm sure you probably have a list that comes to mind for performance. Some knowledgeable people also have 'readability' best practices or 'robust coding' best practices. For example, some might say that

insert into t1 select * from t2

is bad code because it's not explicit, and if the order of the columns in the DDL ever changes, the results could vary. I know that you're swamped with requests and don't have time to write out 'proper' SQL on asktom.com, so I'm not suggesting that you should be so precise, but if you saw that piece of code in an application, would you say it's fine the way it is?
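
(To be concrete -- with made-up column names -- the explicit form I have in mind would be something along the lines of:

insert into t1 ( id, name, created_date )
select id, name, created_date
  from t2;

so a change in column order in the DDL cannot silently change which value lands in which column.)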

PL versus SQL

Gabriel, March 02, 2004 - 11:36 am UTC

Hello Tom,

I've been trying to convince my project manager that if I can do stuff in SQL, that is much better than doing it in PL/SQL. But he will have none of it. He thinks an update will hold row locks for far too long and make other transactions wait. I replied that PL/SQL can slow down his whole system if it is not needed, etc. etc.

I then made a small test case using the runstats package that you developed. It is again obvious to me that the latches are serialization points, and on our environment of 1000 concurrent connections that will have a big impact on scalability (as you can see, I read your book ;-) ). But now I am at a loss to interpret some of the results of the runstats. Can you please help me with the ones that I didn't fill in, and maybe correct me on the ones that I did fill in, or just send me a link where I can find the explanations (or both)?

exec runstats_pkg.rs_start;

declare
x varchar2(20);
cursor c is
select substr(sapfile_row,0,(instr(sapfile_row,'|'))-1) cv from sapfile;
begin
for cv in c loop
update hs_so_test hst
set XVUMATERIALSTATUS=
(select ltrim(replace(substr(sapfile_row,(instr(sapfile_row,'|'))),'|',' '))info
from sapfile
where sapfile.SAPFILE_ROW not like ('HDR%')
and
sapfile.SAPFILE_ROW not like ('TRL%')
and
hst.WORKITEMID=x)
where hst.workitemid in (select substr(sapfile_row,0,(instr(sapfile_row,'|'))-1) from sapfile);
commit;
end loop;
end;
/

exec runstats_pkg.rs_middle;

update hs_so_test hst
set XVUMATERIALSTATUS=
(select ltrim(replace(substr(sapfile_row,(instr(sapfile_row,'|'))),'|',' '))info
from sapfile
where sapfile.SAPFILE_ROW not like ('HDR%')
and
sapfile.SAPFILE_ROW not like ('TRL%')
and
substr(sapfile_row,0,(instr(sapfile_row,'|'))-1)=hst.WORKITEMID)
where hst.workitemid in (select substr(sapfile_row,0,(instr(sapfile_row,'|'))-1) from sapfile);

exec runstats_pkg.rs_stop(500);





PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


0 rows updated.

Run1 ran in 50 hsecs
Run2 ran in 10 hsecs
Run1 ran in 500% of the time

Name Run1 Run2 Diff
STAT...session logical reads 1,442 789 -653
STAT...consistent gets 903 246 -657
LATCH.cache buffers chains 4,123 3,075 -1,048
STAT...table scan rows gotten 2,439 609 -1,830
STAT...sorts (rows) 5,715 3,878 -1,837
LATCH.row cache enqueue latch 25,130 5,028 -20,102
LATCH.row cache objects 37,670 7,537 -30,133
STAT...session pga memory 0 655,360 655,360
STAT...session uga memory 0 1,574,344 1,574,344

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
68,759 17,052 -51,707 403.23%

PL/SQL procedure successfully completed.

session logical reads: about 2 times more blocks (current or consistent) were requested.

consistent gets: about 3.5 times more blocks were requested in consistent mode.

cache buffers chains: ???

table scan rows gotten: about 4 times more rows were fetched by table scans.

sorts (rows): about 1.5 times more rows were sorted.

row cache enqueue latch :???

row cache objects :???

session pga memory :???

session uga memory :???



Thank you very much,

Tom Kyte
March 02, 2004 - 6:49 pm UTC

cache buffers chains are latches we use to access blocks in the buffer cache (correlates well with your logical reads)

row cache latches protect the dictionary cache (shared definitions)

You would expect UGA/PGA memory to go up. If you run it again, they stay the same. Everything you do will "increase memory usage".
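
(If you want to watch those two memory statistics for your own session, a quick sketch -- assuming you have select access to the V$ views:

select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name in ( 'session pga memory', 'session uga memory' );

run it before and after the test and compare.)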