Question and Answer

Connor McDonald

Thanks for the question.

Asked: June 28, 2001 - 4:59 pm UTC

Last updated: March 11, 2017 - 1:41 am UTC

Version: 8.1.6

You Asked

Tom:

create table test( a int );

begin
    for i in 1..10000 loop
        insert into test values (i);
    end loop;
end;
/

set autotrace on
select count(0) from test;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST'

Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
20 consistent gets
0 physical reads
0 redo size
369 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


select * from test where a=10;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'




Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
21 consistent gets
0 physical reads
0 redo size
360 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


select * from test;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'




Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
686 consistent gets
0 physical reads
0 redo size
185864 bytes sent via SQL*Net to client
74351 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10001 rows processed



question:
1.
when we query count(0), what exactly does Oracle do? Since we don't have an index, did Oracle go to every block and count the rows?
If so, why is the consistent gets figure much lower than for select * from test?
Both are full table scans.

2.
why does select * from test have a much higher consistent gets figure than
select * from test where a=10?
Since there is no index, Oracle needs to go to every block to get the values, so it should be the same number of consistent gets. The first one returns more rows, but that shouldn't matter.

Regards,



and Tom said...

q1) select count(0) is just like

select count(*)
from ( select 0 from t )
/

yes, Oracle went to each block to find the rows to give you a zero

q2) it's a side effect of your arraysize. You must have an 8k block size, because I reproduced this exactly.

We were expecting about 20 consistent gets, right? Well, the default array size in sqlplus is 15 rows / fetch. 10000/15 = 666.66666. Well, 666+20 = 686 -- whoa there -- 686 is our consistent gets!

That's what happened. When you fetched 15 rows, Oracle paused and gave you the data. When you went back to get the next 15, it got the buffer again to resume your query.

Watch what happens with different array sizes, starting with 15:


ops$tkyte@8i> select * from test;

10000 rows selected.

Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
686 consistent gets
0 physical reads
0 redo size
108813 bytes sent via SQL*Net to client
46265 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed

ops$tkyte@8i> set arraysize 1000
ops$tkyte@8i> select * from test;

10000 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
30 consistent gets
0 physical reads
0 redo size
86266 bytes sent via SQL*Net to client
942 bytes received via SQL*Net from client
11 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed

ops$tkyte@8i> set arraysize 5000
ops$tkyte@8i> select * from test;

10000 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
22 consistent gets
0 physical reads
0 redo size
149793 bytes sent via SQL*Net to client
382 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed



Another interesting thing to note is that as the array size gets too large, the amount of data transferred goes back up. There is a diminishing marginal return on the array size, so don't go OVERBOARD.

followup to comment one

Set the arraysize to some sensible constant. It will not matter for single row fetches if it is too big, but having it too small when fetching lots of rows (NOT just full scans -- any query returning lots of rows) does impact you.
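
For sqlplus specifically, a minimal way to do that (100 here is just the "sensible constant" discussed above, not a magic number) is to put the setting in your login.sql / glogin.sql so every session picks it up:

set arraysize 100
show arraysize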



Comments

A reader, June 29, 2001 - 1:32 pm UTC

does that mean when we do a full table scan we want the array size a little bit bigger, and when we query by rowid we want the arraysize smaller?

Tested it but got different results

ZS, April 30, 2002 - 10:11 pm UTC

I am a bit confused now. Though you explained it well, I tested it in my environment and got different results.
My query returns 122 rows with a select *.

Arraysize of 15 --> number of consistent gets was 1051 (122/15 = 8), which means 1043 + 8 = 1051 (good so far)
so I would assume, according to the example above, that when I do count(*) the number of consistent gets would be 1043, right? BUT
Arraysize of 15 with count(*) --> consistent gets = 920. I don't understand where this comes from.

I did more testing:
Arraysize of 122 with select * --> consistent gets = 1043
Arraysize of 122 with select count(*) --> consistent gets = 920

Arraysize of 5000 with select * --> consistent gets = 1043
Arraysize of 5000 with select count(*) --> consistent gets = 920

I would appreciate if you can explain.
thanks,
ZS


Tom Kyte
May 01, 2002 - 7:16 am UTC

The count(*) returns how many rows?  1.

The array size does not even begin to kick in.  That statement is processed in ONE call.

Also, the count(*) may very well *never be touching the table*.  It might be just using an index.

Consider:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> exec show_space( 'T' )
Free Blocks.............................0
Total Blocks............................320
Total Bytes.............................2621440
Unused Blocks...........................4
Unused Bytes............................32768
Last Used Ext FileId....................7
Last Used Ext BlockId...................40969
Last Used Block.........................60

PL/SQL procedure successfully completed.

<b>so the table consumes about 316 blocks..</b>


ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;
22905 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
         12  db block gets
       1823  consistent gets
         46  physical reads
          0  redo size
    2704019  bytes sent via SQL*Net to client
     169811  bytes received via SQL*Net from client
       1528  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      22905  rows processed

<b>
22905/15 = 1527; 1527 + 316 = 1843, which is about our consistent gets (what we were expecting)....</b>

ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from t;


Statistics
----------------------------------------------------------
          0  recursive calls
         12  db block gets
        319  consistent gets
         46  physical reads
          0  redo size
        369  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

<b>And the count(*), since it was done in a SINGLE CALL -- no arraysize -- only did a consistent get on each block ONE TIME.  Hence it did 319 consistent gets (which is about the size of the table)

Now, adding a primary key to the table:</b>


ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t add constraint t_pk primary key(object_id);

Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t compute statistics;

Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from t;


Statistics
----------------------------------------------------------
         79  recursive calls
          4  db block gets
         82  consistent gets
          1  physical reads
          0  redo size
        369  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

<b>we can see that the consistent gets go way down.  why?  because the optimizer counted the rows in the index, not the table in this case.</b>


 

A reader, May 01, 2002 - 8:32 am UTC

Hi tom,

we calculate the hit ratio like
1 - (physical_reads / (db_block_gets + consistent_gets))

but according to your reply here I think consistent gets or db block gets is a physical read (not logical).

Can you explain this?


Tom Kyte
May 01, 2002 - 9:06 am UTC

db_block_gets + consistent_gets = LOGICAL IO
physical_reads = PHYSICAL IO


I don't see where I might have misled you on that here -- can you point out which part of the above indicated to you that they were physical IO?
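
As an aside, if you want to compute that ratio instance-wide yourself, here is a minimal sketch against v$sysstat (this assumes you have select access to the v$ views; the three statistic names below are the standard ones):

select 1 - (phy.value / (cur.value + con.value)) as hit_ratio  -- fraction of gets satisfied without a physical read
  from v$sysstat cur, v$sysstat con, v$sysstat phy
 where cur.name = 'db block gets'
   and con.name = 'consistent gets'
   and phy.name = 'physical reads';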

A reader, May 01, 2002 - 9:36 am UTC

Tom

I think I am confused here about logical I/O and physical I/O.
Sorry about that (it's not your fault).

But I am confused here.

If I issue a query the first time, nothing will be in memory, so the
db block gets and consistent gets will be physical reads.
(Am I right?)

If I issue the query a second time, lots of blocks will be in memory. Here it will be logical I/O.

Am I right here?

Thanks.




Tom Kyte
May 01, 2002 - 9:49 am UTC

You might be right, you might not be. Some other query and/or operation may have cached the data already.


On the second point, you might be right, you might not be. It depends on

o the amount of data touched
o the size of the buffer cache
o what else is going on in the system
o the use of various features/functions in the database (eq: parallel query)

In general, yes, you would expect in many cases that the second execution would be almost all logical io.

Reader

A reader, May 01, 2002 - 10:09 am UTC

Consistent gets are based upon reconstructing a block
for consistent read. Hence it is a function only of the
number of db blocks to be read.

If you say that it is altered by the arraysize, do you
suggest that, due to the arraysize, some blocks are read
multiple times and hence some blocks have > 1
consistent read in the process?

Thanks

Tom Kyte
May 01, 2002 - 10:37 am UTC

No, you are wrong in your statement.

A consistent get is a block gotten in read consistent mode (point in time mode).  It MAY or MAY NOT involve reconstruction (rolling back).

Db Block Gets are CURRENT mode gets -- blocks read "as of right now".


Some blocks are processed more than once, yes, the blocks will have more than 1 consistent read in the process.  Consider:


ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> exec show_space( 'T')
Free Blocks.............................0
Total Blocks............................320
Total Bytes.............................2621440
Unused Blocks...........................4
Unused Bytes............................32768
Last Used Ext FileId....................7
Last Used Ext BlockId...................40969
Last Used Block.........................60

PL/SQL procedure successfully completed.

<b>Table has 316 blocks, 22,908 rows..</b>


ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace traceonly statistics;

ops$tkyte@ORA817DEV.US.ORACLE.COM> set arraysize 15
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;
22908 rows selected.

<b>here with an array size of 15, we expect

22908/15 + 316 = 1843 consistent mode gets.  The db block gets were for performing the FULL SCAN; they had nothing to do with the data itself that we selected</b>


Statistics
----------------------------------------------------------
          0  recursive calls
         12  db block gets<b>
       1824  consistent gets</b>
        170  physical reads
          0  redo size
    2704448  bytes sent via SQL*Net to client
     169922  bytes received via SQL*Net from client
       1529  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      22908  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> set arraysize 100
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

22908 rows selected.

<b>Now, with 100 as the arraysize, we expect

22908/100 + 316 = 545 consistent mode gets.</b>

Statistics
----------------------------------------------------------
          0  recursive calls
         12  db block gets
        546  consistent gets
        180  physical reads
          0  redo size
    2557774  bytes sent via SQL*Net to client
      25844  bytes received via SQL*Net from client
        231  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      22908  rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> set arraysize 1000
ops$tkyte@ORA817DEV.US.ORACLE.COM> select * from t;

22908 rows selected.

<b>now, with arraysize = 1000, we expect:

22908/1000+316 = 338 consistent mode gets...</b>

Statistics
----------------------------------------------------------
          0  recursive calls
         12  db block gets
        342  consistent gets
        222  physical reads
          0  redo size
    2534383  bytes sent via SQL*Net to client
       2867  bytes received via SQL*Net from client
         24  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      22908  rows processed




<b>so yes, the blocks are gotten in consistent mode MORE THAN ONCE when the array fetch size is lower than the number of rows to be retrieved, in this case</b>

This is because we'll be halfway through processing a block -- have enough rows to return to the client -- and we'll give UP that block.  When they ask for the next N rows, we need to get that halfway-processed block again and pick up where we left off.
 

how does this formula work?

A reader, May 01, 2002 - 2:48 pm UTC


22908/15 + 316 = 1843 consistent mode gets. db block gets -- they were for
performing the FULL SCAN, they had nothing to do with the data itself we
selected


how did you get that 22908/15 + 315 would be the total consistent gets?

Tom Kyte
May 01, 2002 - 3:17 pm UTC

read the original question / answer. I went through it there.

In this case, the numbers:

22908 = number of rows
15 = arraysize.

22908/15 = number of times you'll fetch

so, that is the number of times you'll have to read a block more than once.

315 was the number of blocks in the table (in a perfect world, we would do 315 consistent gets. With an arraysize of 15 and 22908 rows, we'll add 22908/15 consistent gets to that total)...



I got that sort of idea but I was confused

A reader, May 01, 2002 - 4:43 pm UTC

Hi

So a low arraysize in sql*plus can actually distort autotrace stats and affect performance?

So I guess autotrace is not that accurate, or will TKPROF do the same?

How can we set the arraysize in, for example, Developer Reports or ad hoc tools like Business Objects, to improve the performance of 'select * from table_a' style queries?

Tom Kyte
May 01, 2002 - 9:13 pm UTC

No -- it won't "distort them".

You can use the array size in sqlplus to see what your APP would do.

Array size by default is 15 in plus
10 in JDBC
2 in pro*c
1 in OCI
??? in odbc (no idea, never use it)

It is NOT distorted -- it is reality! It shows what really happens as you use different array sizes to fetch.

TKPROF -- that would show exactly the same information. The same -- from jdbc to plus to pro*c (sqlplus is JUST an oci application)


with reports see:

ARRAYSIZE (RWBLD60)

Description ARRAYSIZE is the size of the array in kilobytes for use with ORACLE array processing.
Syntax
[ARRAYSIZE=]n


with business objects -- refer to their documentation.

aren't consistent gets in blocks?

A reader, May 01, 2002 - 4:47 pm UTC

aren't consistent gets calculated in blocks? If that is the case, how can we say number_of_rows/arraysize + total_blocks_read would be the total consistent gets?

number_of_rows/arraysize doesn't give us blocks, does it?

The numbers suggest it is correct, but this is really confusing since I cannot find it documented anywhere

Tom Kyte
May 01, 2002 - 9:17 pm UTC

It is because we need to get a block more than ONCE.

Suppose we do that simple "select * from t"

Suppose every block has 16 rows on it.

Suppose we have an array size of 15.


We do the first fetch. That'll get block 1 (one consistent get) and get the first 15 rows and return.

Now, we do the second fetch. That'll get block 1 again (two consistent gets) and then get block 2 to get the next 14 rows (three consistent gets). We return.

We do the 3rd fetch. We get block 2 again (four consistent gets) to get the remaining 2 rows and then get block 3 (five consistent gets) to get the next 13 rows.

And so on. We end up getting every block 2 times -- because we stopped processing it partway through to return the result to the client.


Hence, if you full scan a table with N rows and use an array fetch of M rows, you stand a chance of getting the blocks an extra N/M times.

from your last example

A reader, May 02, 2002 - 3:56 am UTC

Hi Tom

from your last example it would be true if we have 16 rows per block, but what if we have 10 rows per block? We would just visit each block once, right?
If we have 46 rows instead, then the first block would be visited 4 or 5 times?

first time to get 15 rows
second to get another 15
third to get another 15
and fourth to get the remaining one
fifth the full table scan visit; I am not sure if the fifth should exist, but since all your examples add a previous visit for the Full Table Scan...

Finally, I don't understand why a large arraysize generates more network traffic. I thought it must decrease, but we see the opposite


Tom Kyte
May 02, 2002 - 7:14 am UTC

Ok, 10 rows/block:

on first fetch we:
get block 1 (1 consistent get) 10 rows
get block 2 (2 consistent gets) 5 rows / 5 left

on second fetch we:
get block 2 (3 consistent gets) 5 rows / 0 left
get block 3 (4 consistent gets) 5 rows / 5 left

....

and so on. We visit EVERY OTHER block 2 times (the N/M still holds here. If I have 100 rows on 10 blocks and an array size of 10 -- I'll add 10 consistent gets for a grand total of 20 consistent gets).





I don't get :-(

Igor, May 02, 2002 - 4:51 pm UTC

Hi

"We were expecting about 20 consistent gets right? Well, the default array size
in sqlplus is 15 rows / fetch. 10000/15 = 666.66666. Well, 666+20 = 686 --
whoah there -- 686 is our consistent gets!"

Why 20? I also don't get point for "666+20 = ... "

Hope to get answer

Igor



Tom Kyte
May 03, 2002 - 10:01 am UTC

Sorry --- I've explained this to the best of my ability at this point. I've explained it many, many times. This is my last try:

The table has 20 blocks in it.
You would expect 20 consistent gets on a full scan.
But, with an array size of 15, we will revisit some blocks MORE THAN ONCE.

In fact, we might revisit up to 10000/15 = 666 blocks more than ONCE.

This is because we "start" and "stop" the query in the middle of a block. (see the two followups immediately above this).


If you full scan a table T that has N blocks and R rows and you use an array fetch size of A, we will typically perform the following number of consistent gets:

N + R/A


We have to read N blocks (that should be obvious)
We might revisit R/A blocks more than once (since we stop in the middle)
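
If you want to predict this for a given table, here is a minimal sketch against the dictionary (assuming the table has been analyzed, so that BLOCKS and NUM_ROWS are populated; 15 and 100 are just sample array sizes):

select blocks, num_rows,
       blocks + ceil(num_rows/15)  as gets_at_arraysize_15,
       blocks + ceil(num_rows/100) as gets_at_arraysize_100
  from user_tables
 where table_name = 'T';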



How should we tune this?

A Reader, October 14, 2002 - 11:10 am UTC

Hi Tom,
A full table scan of a table with 770555 rows takes more than 2 minutes to get the results. It looks like both the "consistent gets" and the "physical reads" are high for 13 db block gets. Where should we start to tune the database/query?  We have to use the query.

Thank you in advance.


SQL> select * from TAB1;

770555 rows selected.

Elapsed: 00:02:147.52

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1447 Card=752634 Byt
          es=161816310)

   1    0   TABLE ACCESS (FULL) OF 'TAB1' (Cost=1447 Card=752
          634 Bytes=161816310)





Statistics
----------------------------------------------------------
          0  recursive calls
         13  db block gets
      74096  consistent gets
      21627  physical reads
          0  redo size
   55033565  bytes sent via SQL*Net to client
    5702495  bytes received via SQL*Net from client
      51372  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     770555  rows processed
 

Tom Kyte
October 14, 2002 - 3:53 pm UTC

13 db block gets are blocks gotten in CURRENT mode -- they were blocks that told us where all of the other blocks were! (they were the segment headers).

set arraysize 100

and try again in sqlplus (and watch your consistent gets start to go down)
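
As a back-of-the-envelope estimate using the N + R/A rule from above (derived from the numbers you posted, so treat it as rough): 51,372 round trips at 15 rows per fetch accounts for roughly 770555/15 = 51,370 of the 74,096 consistent gets, leaving about 22,700 gets for the blocks themselves. With arraysize 100 you would expect roughly 22,700 + 770555/100 = about 30,400 consistent gets.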


Now, also bear in mind -- this is not two minutes just to read the blocks. This is two minutes to

o read the blocks
o send them over the network
o have sqlplus process them




consistent gets

Steve, January 04, 2003 - 9:14 pm UTC

Hi Tom,

1) When you talk about tuning SQL, you said to focus on reducing
LIO ( http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894,%7Blios%7D ). So increasing the ArraySize is one way
to reduce LIO (consistent gets). Is this right?

2) what is the relationship between the number of "consistent gets" + "db block gets" and the number of current (tkprof output)? Based on the formula you gave us in this question: consistent gets = N+R/A, so current = N + R/A ?

3) what is relationship the between physical reads and disk (tkprof output)?


4) Is the following show_space a system procedure? If it is one you wrote, may I have it?

ops$tkyte@ORA817DEV.US.ORACLE.COM> exec show_space( 'T' )
Free Blocks.............................0
Total Blocks............................320
Total Bytes.............................2621440
Unused Blocks...........................4
Unused Bytes............................32768
Last Used Ext FileId....................7
Last Used Ext BlockId...................40969
Last Used Block.........................60

Thanks

Steve







Tom Kyte
January 05, 2003 - 9:30 am UTC

1) in some cases (in many cases, actually) it'll reduce the LIOs, yes. In all cases, using a healthy but not "overweight" arraysize is recommended.

2) not sure what N+R/A is meant to be (hopefully that is not A=arraysize, because it doesn't go that way -- the arraysize can have an effect. I used a case where I could predict the effect, but there are many more cases where

a) it'll not have an effect
b) it'll have an effect, but a wholly unpredictable one.

db block gets (autotrace) are current (tkprof)
consistent gets (autotrace) are query (tkprof)

3) one to one -- physical reads (autotrace) is disk (tkprof)

4)
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:231414051079


Dave, January 05, 2003 - 4:40 pm UTC

To respond to "A Reader"'s question earlier: in Business Objects, the array fetch size is defined as part of the named connection used for universes or free-hand SQL.

Low - Hanging Tuning Fruit ?

Jerry, January 12, 2003 - 7:42 am UTC

I work for a company that buys hardware that we grow out of, not into, so I am always on the lookout for tuning ideas, both in the app, and to some extent, the instance (even though I am a developer).

You mention some apps and their default arraysizes. We run PeopleSoft on Oracle, which uses COBOL and SQR on a midtier UNIX app server. Should/could we try and tune those array sizes for any gain? If so, would that be in Net8? I note that OCI has an array size of 1.

Thanks for your time. Can't tell you how much I've learned from your site. It is part of my Sunday routine.


"
Array size by default is 15 in plus
10 in JDBC
2 in pro*c
1 in OCI
"

Tom Kyte
January 12, 2003 - 9:57 am UTC

It would be a question for PeopleSoft to answer -- they wrote the code. OCI can do any size; they would have had to have programmed support for more than one row at a time.

Why should the SQL PLUS array size affect the plan?

Tony, January 13, 2003 - 12:48 am UTC

Why should the SQL*Plus array size affect the statistics/plan? In general, we tune queries in SQL*Plus, but the query is invoked from the middle tier. I don't understand the relevance of the array size in SQL*Plus. Only the middle tier component's array size matters. We use a COM component. How do we set the array size in a COM object?



Tom Kyte
January 13, 2003 - 7:57 am UTC

the arraysize doesn't affect the plan.

the arraysize DOES affect the statistics (it would be a shame if it didn't!)

The arraysize in sqlplus demonstrates that "hey, I should look at my client and set the bulk fetch size appropriately". Pretty much all tools have the ability to do it.

A COM object is like saying "I use CORBA, how to set arraysize in CORBA". Doesn't make sense. COM isn't a "language", COM doesn't do sql. You look at the language you are using. If it is an open systems language like C or Java -- I'll be glad to tell you. If it is a closed language like VB -- you'll have to refer to the docs for that -- I've no idea, as I've never considered programming with them.



Why should the SQL PLUS array size affect the plan?

Tony, January 13, 2003 - 1:58 am UTC

Continuation of my earlier question..

What I understand from your explanation is that only the client side array size has to be set appropriately to minimize the consistent gets. Nothing to do with the database. Am I correct? Developers hardly bother about the array size of the middle tier. Now I understand the importance.

Sorry if my question is repetitive.




Tom Kyte
January 13, 2003 - 8:01 am UTC

Developers are the ONLY people who need to be bothered with regard to this. They are responsible for making sure the array fetch size in their client applications is set to a reasonable number (I like 100 personally)

For Ref Cursor also?

Tony, January 13, 2003 - 10:59 am UTC

We extensively use ref cursors in our application.
Do we have to set the array size even if a ref cursor is used?


Tom Kyte
January 13, 2003 - 12:48 pm UTC

yup. if you want the client to fetch N rows at a time where N is some number you wish to control.
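
On the PL/SQL side of a ref cursor, here is a minimal sketch of array fetching (this assumes Oracle 9i or later, where BULK COLLECT from a ref cursor is supported, and reuses the TEST table from the original question):

declare
    type test_tab is table of test%rowtype;
    l_rows  test_tab;
    l_cur   sys_refcursor;
begin
    open l_cur for select * from test;
    loop
        -- fetch 100 rows per call instead of one row at a time
        fetch l_cur bulk collect into l_rows limit 100;
        exit when l_rows.count = 0;
        for i in 1 .. l_rows.count loop
            null;  -- process l_rows(i) here
        end loop;
    end loop;
    close l_cur;
end;
/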

Consistent Gets - Why LIO?

Jerry, January 13, 2003 - 12:47 pm UTC

After reading Gorman's "THE SEARCH FOR INTELLIGENT LIFE IN THE COST-BASED OPTIMIZER" paper referenced on your site, a question about consistent gets came to mind.

If cgets are LIO, but involve constructing a read-consistent image of a block, wouldn't that involve referencing rbs, which in my mind would (or at least could) involve PIO?

Tom Kyte
January 13, 2003 - 1:26 pm UTC

Nope, rollback is to be found in the buffer cache just like anything else. It is just a rollback SEGMENT after all -- just another segment (like an index segment).

Extra CR / CU ?

Amit, February 18, 2003 - 1:00 am UTC

Hi Tom-

During testing of the above examples, I came across this:
I built 2 tables, both containing a single row.
One is created and populated with CTAS; the other is created empty, and the row is inserted and committed.
Both tables have one extent consisting of 2 8K blocks.

1. Why does oracle need to perform 5 CR, 12 CU while scanning the 1st table (containing 2 blocks)?
2. Why does oracle need to perform 1 CR, 4 CU while scanning the 2nd table (containing 2 blocks)?


create table t1ctas storage (initial 1k next 1k pctincrease 0 ) tablespace users
as select rownum n from dual
/
create table t1ins storage (initial 1k next 1k pctincrease 0) tablespace users
as select * from t1ctas where 1=0
/
insert into t1ins select rownum n from dual
/
commit;

set arraysize 100;
alter session set events '10046 trace name context forever, level 1';

select * from t1ctas -- CR=5 CU=12
/
select * from t1ins -- CR=1 CU=4
/
select * from t1ctas -- CR=10 CU=24
union all select * from t1ctas
/
select * from t1ins -- CR=8 CU=2
union all select * from t1ins
/
select * from t1ctas -- CR=6 CU=16
union all select * from t1ins
/
exit;




*** SESSION ID:(11.1483) 2003-02-17 23:22:29.809
APPNAME mod='SQL*Plus' mh=3669949024 act='' ah=4029777240
=====================
PARSING IN CURSOR #1 len=70 dep=0 uid=37 oct=42 lid=37 tim=1283779 hv=1445189883 ad='5302b64'
alter session set events '10046 trace name context forever, level 1'
END OF STMT
EXEC #1:c=1,e=38,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283784
=====================
PARSING IN CURSOR #1 len=47 dep=0 uid=37 oct=3 lid=37 tim=1283805 hv=4134657170 ad='5310df8'
select * from t1ctas -- CR=5 CU=12
END OF STMT
PARSE #1:c=0,e=8,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=3,tim=1283811
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283812
FETCH #1:c=0,e=2,p=1,cr=5,cu=12,mis=0,r=1,dep=0,og=3,tim=1283816
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283818
STAT #1 id=1 cnt=1 pid=0 pos=0 obj=40021 op='TABLE ACCESS FULL T1CTAS '
=====================
PARSING IN CURSOR #1 len=46 dep=0 uid=37 oct=3 lid=37 tim=1283840 hv=740128398 ad='530d86c'
select * from t1ins -- CR=1 CU=4
END OF STMT
PARSE #1:c=0,e=8,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=3,tim=1283846
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283847
FETCH #1:c=0,e=0,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=3,tim=1283849
FETCH #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283851
STAT #1 id=1 cnt=1 pid=0 pos=0 obj=40022 op='TABLE ACCESS FULL T1INS '
=====================
PARSING IN CURSOR #1 len=78 dep=0 uid=37 oct=3 lid=37 tim=1283877 hv=3911934752 ad='5313e54'
select * from t1ctas -- CR=10 CU=24
union all select * from t1ctas
END OF STMT
PARSE #1:c=0,e=9,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=3,tim=1283884
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283886
FETCH #1:c=0,e=0,p=0,cr=5,cu=12,mis=0,r=1,dep=0,og=3,tim=1283888
FETCH #1:c=0,e=0,p=0,cr=5,cu=12,mis=0,r=1,dep=0,og=3,tim=1283890
STAT #1 id=1 cnt=2 pid=0 pos=0 obj=0 op='UNION-ALL '
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=40021 op='TABLE ACCESS FULL T1CTAS '
STAT #1 id=3 cnt=1 pid=1 pos=2 obj=40021 op='TABLE ACCESS FULL T1CTAS '
=====================
PARSING IN CURSOR #1 len=76 dep=0 uid=37 oct=3 lid=37 tim=1283923 hv=3592920098 ad='530ac48'
select * from t1ins -- CR=8 CU=2
union all select * from t1ins
END OF STMT
PARSE #1:c=0,e=11,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=3,tim=1283931
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283933
FETCH #1:c=0,e=0,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=3,tim=1283935
FETCH #1:c=0,e=0,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=3,tim=1283937
STAT #1 id=1 cnt=2 pid=0 pos=0 obj=0 op='UNION-ALL '
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=40022 op='TABLE ACCESS FULL T1INS '
STAT #1 id=3 cnt=1 pid=1 pos=2 obj=40022 op='TABLE ACCESS FULL T1INS '
=====================
PARSING IN CURSOR #1 len=77 dep=0 uid=37 oct=3 lid=37 tim=1283969 hv=1374823634 ad='52ffcd8'
select * from t1ctas -- CR=6 CU=16
union all select * from t1ins
END OF STMT
PARSE #1:c=0,e=10,p=0,cr=0,cu=0,mis=1,r=0,dep=0,og=3,tim=1283977
EXEC #1:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=3,tim=1283979
FETCH #1:c=0,e=0,p=0,cr=5,cu=12,mis=0,r=1,dep=0,og=3,tim=1283981
FETCH #1:c=0,e=0,p=0,cr=1,cu=4,mis=0,r=1,dep=0,og=3,tim=1283983
XCTEND rlbk=0, rd_only=1
STAT #1 id=1 cnt=2 pid=0 pos=0 obj=0 op='UNION-ALL '
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=40021 op='TABLE ACCESS FULL T1CTAS '
STAT #1 id=3 cnt=1 pid=1 pos=2 obj=40022 op='TABLE ACCESS FULL T1INS '


Tom Kyte
February 18, 2003 - 7:29 am UTC

it just does in 8i for some reason. In 9i, it does not.

Note that most people would never see this, as a single-row table created via CTAS is not very common. If you do a "real sized" table -- you'll see the additional header gets (the db block gets -- current mode gets), but averaged out over a large full scan, it is less than "noise".

Robert, April 03, 2003 - 12:10 pm UTC


how do I know how many blocks my query read?

A reader, April 17, 2003 - 2:50 pm UTC

Hi

How do I determine how many database blocks were read by my query from the autotrace utility or TKPROF?

Tom Kyte
April 17, 2003 - 3:09 pm UTC

consistent gets:

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST'

Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
20 consistent gets



or in tkprof, query + current.

the thing is, doesn't that depend on arraysize?

A reader, April 18, 2003 - 4:46 am UTC

Hi

I can see you get different consistent gets if we use different array sizes, so how can that be the number of data blocks...?

I mean, I can see 100000 consistent gets or 10000 consistent gets for the same query; does that mean my query reads a different number of blocks?

Tom Kyte
April 18, 2003 - 12:00 pm UTC

yes it does -- by definition.

consistent gets = logical IO's = number of block gets.

it does not equal the number of DISTINCT blocks you processed, if that is what you mean; it means the number of block gets (which require cpu cycles and latches -- which are locks -- and so on)



A reader, April 18, 2003 - 10:07 am UTC

Hi Tom,

I have been working on Oracle for the last 6 years. I have been reading your site from the beginning (Aug 2000).

But when it comes to consistent gets and db block gets,
I am more confused than ever before. Please help!!!

You said
1) "A consistent get is a block gotten in read consistent mode (point in time mode).
It MAY or MAY NOT involve reconstruction (rolling back)."

What do you mean by "point in time mode"? (I have been hearing this term for the last couple of years, but it is still confusing...)

What do you mean by "It MAY or MAY NOT involve reconstruction (rolling back)."?

Please elaborate on this issue

Thanks





Tom Kyte
April 18, 2003 - 12:32 pm UTC

if you've been reading since then -- you've heard me say:

"if you read the concepts guide AND
you retain 10% of that information YOU'LL HAVE
90% more knowledge about Oracle than most people out there..."


http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c21cnsis.htm#2570

all about multi-versioning.

All queries done by Oracle are done "at a point in time", consistently -- read that (read the entire book in fact) and you'll get an understanding of the single most important concept to understand about Oracle ever.

If you have my book "Expert one on one Oracle" -- I cover it in gory detail in there as well.

Would dbms_jobs be affected by arraysize?

Shiv, April 18, 2003 - 10:07 am UTC

Tom,

Thanks in advance.
Can a dbms_job be optimized using an arraysize? If so, how does one set it? Or does it not matter, since it doesn't run as a client?

cheers
Shiv

Tom Kyte
April 18, 2003 - 12:33 pm UTC

PLSQL can be optimized using BULK COLLECT (array fetching) and FORALL (bulk modification)

search the PLSQL manual's table of contents for those terms and read on....
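
Here is a minimal sketch of the two together (this assumes a hypothetical table TEST_ARCHIVE with the same structure as the TEST table from the original question):

declare
    type num_tab is table of test.a%type;
    l_vals  num_tab;
    cursor c is select a from test;
begin
    open c;
    loop
        fetch c bulk collect into l_vals limit 100;   -- array fetch, 100 rows per call
        exit when l_vals.count = 0;
        forall i in 1 .. l_vals.count                 -- one bulk insert per batch
            insert into test_archive values ( l_vals(i) );
    end loop;
    close c;
    commit;
end;
/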

regarding number of blocks to read

A reader, April 18, 2003 - 2:22 pm UTC

Hi

you said that consistent gets = LIO = block gets

I guess you mean the number of times Oracle goes to fetch the blocks, but I was asking about the number of physical blocks Oracle needs to read! Not how many times, but rather how many blocks.

For example, to satisfy a query Oracle needs to read 100000 database blocks (8k, so 8k*100000, around 800MB); where can we get this statistic? Consistent gets is the number of times we need to fetch.

I guess it's

arraysize * number of consistent gets, but what if we don't know the arraysize?

Tom Kyte
April 18, 2003 - 3:52 pm UTC

Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
20 consistent gets
0 physical reads


that would be that number.

it might be 0, it might be 20000000, it might be 22 to read 100,000 database blocks in answer to a query (for PIO's)

from your first example

A reader, April 19, 2003 - 2:07 pm UTC

hello

from your first example

ops$tkyte@8i> select * from test;

10000 rows selected.

Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
686 consistent gets
0 physical reads
0 redo size
108813 bytes sent via SQL*Net to client
46265 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed

ops$tkyte@8i> set arraysize 1000
ops$tkyte@8i> select * from test;

10000 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
30 consistent gets
0 physical reads
0 redo size
86266 bytes sent via SQL*Net to client
942 bytes received via SQL*Net from client
11 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed

one is 686 consistent gets and the other 30, with 4 db block gets in both cases

did Oracle read only 4 database blocks to fetch 10000 rows?!

Tom Kyte
April 19, 2003 - 5:12 pm UTC

db block gets are blocks gotten in CURRENT (as opposed to consistent) mode. They are block gets we did to discover the extent map for the table (to know what to read). We get that in "current, as of right now" mode instead of "in consistent read, as of the point in time the query began" mode.

Oracle did 4 current mode gets in both cases to figure out what blocks to read.

In the first example, it did 686 logical IO's
In the second, only 30

It did no physical IO.

Reader

A reader, April 19, 2003 - 9:36 pm UTC

Tom,

If Oracle wants to read the extent map from the
segment header, why are there 4 db block gets instead of 1?

In my following test I get 0 db block gets

I have created table t as select * from dba_objects;

SQL> select count(*) from t;

  COUNT(*)
----------
     29387

<bounced the database>

SQL> get afiedt.buf
  1  set termout off
  2  set autotrace on
  3  spool c:\tmp\t.log
  4  select * from t;
  5* spool off
SQL> start afiedt.buf

Excerpt from spool file=>
=========================
Statistics
------------------------------------
        188  recursive calls                        
          0  db block gets                   
       2359  consistent gets          
        402  physical reads              
          0  redo size          
    2049661  bytes sent via SQL*Net to client
      22048  bytes received via SQL*Net from client
       1961  SQL*Net roundtrips to/from client
          2  sorts (memory) 
          0  sorts (disk) 
      29387  rows processed


Thanks 

Tom Kyte
April 20, 2003 - 8:54 am UTC

because -- just because. In different releases -- different count.

In your example, you didn't do a full table scan -- apples to toaster oven comparison.

You are using the CBO
You have a primary key on T
It read the index using an index full scan.

Or you are using Oracle9i and new features like ASSM...

Or ....


Let me put it this way -- if you see 4 there, and you did a simple full table scan -- then the 4 db block gets are what I said they were.

I did not say "you shall always see 4 there", rather, that when you do -- that's what they are.


from your first example

A reader, April 20, 2003 - 2:52 am UTC

sorry, I didn't make the question clear.... I really wanted to ask how many database blocks (in memory or on disk, whatever) Oracle read in order to satisfy your query, and not how many LIOs or PIOs.

Definitely the number of blocks visited is not the number of consistent gets, since consistent gets vary with the arraysize -- so in your example, how many blocks did Oracle need to read?

Tom Kyte
April 20, 2003 - 9:01 am UTC

what is the difference?

What do you mean by "how many blocks did Oracle need to read".

To me that is one of two things:

o how many block gets did Oracle do in order to satisfy my query. That is logical IO

o how many blocks did Oracle physically read from disk in order to satisfy my query. That is physical IO.


Those are the only two numbers that mean anything. Those are the only two numbers you can get.

That there are 100 blocks in the table, and we did 1,000 Logical IO's to read it -- of those two numbers the only relevant one is "1,000".



the difference is

A reader, April 20, 2003 - 1:10 pm UTC

hi

the difference is I wanted to know how many blocks Oracle needed to read in order to satisfy a query; I wanted to know how many MB or GB Oracle needed to read.

Consistent gets are fine, but if, for example, I see huge consistent gets and I do not know the arraysize, someone might think "geez, it read tons to satisfy this query", when it was probably just because of a low arraysize -- who knows? If we don't know the arraysize, it's hard to determine how "large" or how "bad" that high number of consistent gets is.

So, since consistent gets are influenced by the arraysize, I wanted to know some value which does not depend on another parameter -- for example, the number of database blocks needed to be read. I guess for the same query with the same execution plan, Oracle should read X database blocks, which would be Y MB (database block size * number of database blocks). I am not that concerned about PIO at this time. I was wondering how we know the number of database blocks Oracle needs to read, so that we can determine, for example, how much db cache my query is using -- we definitely cannot calculate that with consistent gets, right?

Tom Kyte
April 20, 2003 - 2:56 pm UTC

need to read -- that is PIO

need to process -- that is LIO

nothing else is relevant nor obtainable.

TKPROF would tell you if arraysize is a factor -- divide the rows by the fetches; if the number is 1, they aren't using array fetches.

A FACTOR in consistent gets is arraysize.

ARRAYSIZE does not determine consistent gets.




A reader, April 21, 2003 - 8:09 am UTC

Hi Tom,

Consistent get means reconstructing a block from the rollback segment. Am I right?

If so, then it is reading the rollback segment, which resides in a tablespace, and is doing PIO -- so why do we call it LIO?

If not, then throw some light on this.

Thanks for excellent web site!!!

Tom Kyte
April 21, 2003 - 8:26 am UTC

consistent gets means getting a block "as of a point in time" and MAY require read asides to the rollback segment in order to reconstruct the block.  emphasis on MAY.

the RBS is just a segment (that is what the S is for).  It resides in the cache just like an index, table or anything else does.


Here is an example showing the "read asides" to the rbs.  It shows that we MAY require a read to the rbs by running the same exact query -- but doing lots of work in between:


ops$tkyte@ORA920> create table t ( x int );

Table created.

ops$tkyte@ORA920> insert into t values ( 1 );

1 row created.

ops$tkyte@ORA920> commit;

Commit complete.

ops$tkyte@ORA920>
ops$tkyte@ORA920>
ops$tkyte@ORA920> variable x refcursor
ops$tkyte@ORA920> variable y refcursor
ops$tkyte@ORA920>
ops$tkyte@ORA920> alter session set sql_trace=true;

Session altered.

ops$tkyte@ORA920> begin
  2     open :x for select * from t X;
  3     open :y for select * from t Y;
  4  end;
  5  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920> print x

         X
----------
         1

ops$tkyte@ORA920>
ops$tkyte@ORA920> begin
  2      for i in 1 .. 10000
  3      loop
  4          update t set x = i;
  5          commit;
  6      end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920>
ops$tkyte@ORA920> print y

         X
----------
         1

ops$tkyte@ORA920>


So, they return the same exact information but TKPROF shows us:

SELECT * from t X

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          7          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          7          0           1

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL T (cr=7 r=0 w=0 time=162 us)
********************************************************************************
SELECT * from t Y

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.04       0.06          0      10002          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.04       0.06          0      10002          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 83     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL OBJ#(34399) (cr=10002 r=0 w=0 time=60325 us)


<b>that it took 10k consistent gets for Y -- versus 7 for X</b> -- both queries return the same data -- it is just that for the first fetch from Y we had to "undo" the 10,000 updates that happened between the OPEN of Y and the fetch.

 

A reader, April 21, 2003 - 8:19 am UTC

Hi Tom

what do you think about DB2's claim on this

http://www-3.ibm.com/software/data/pubs/papers/readconsistency/readconsistency.pdf


Tom Kyte
April 21, 2003 - 9:44 am UTC

I like this quote:

<quote>
The reality is that with Oracle, as with any other database, you design and code your application with an understanding of the underlying isolation and concurrency model
</quote>

I've been saying that for years myself...


but then they say:

<quote>
No other database vendor has implemented Oracle's Multi Version Read Consistency isolation nor has it proven to be a performance advantage in industry standard, ISV or real life customer benchmark.
</quote>

well, they sort of need to do their homework -- I would like us to be the only ones with this feature -- but we ain't. RDB (which we bought a while ago) had it for a loooonnnnnggg time. Borland's Interbase (someone bought that, I believe) did. Heck, mysql does it. So, to make us look "unique", that is very nice of them -- but not really true. And given that I don't even see DB2 in the top 10 tpc-C results -- even on AIX for goodness sake -- I don't see where they are going with this benchmark concept (oh that's right, they don't actually run the tpc-c's, not even on their own machines).... Ok, going to the SAP website

http://www.sap.com/benchmark/index.asp?content=http://www.sap.com/benchmark/ato2tier.asp

looks like the top ones in this ISV benchmark (run by a company that doesn't necessarily promote Oracle) would be .... (not DB2).

Oh, I see way down at the bottom they refer to a read only benchmark -- a tpc-h. How appropriate in a paper describing CONCURRENCY CONTROLS that affect reads/writes at the same time (not).

Oh, I see they say we bypass our MV in the tpc-c because we use serializable. Boy -- that just shows a TOTAL and FUNDAMENTAL lack of understanding on their part of how MV works in Oracle -- they totally 100% missed the boat. Our serializable is even MORE dependent on multi-versioning than our read committed is!!!


Now, they say:

<quote>
With Oracle, the data you see in any query is the data as it existed at the start of that query. This does not conform to any ANSI standard isolation level, nor is it the way other RDBMSs work.
</quote>

Wrong and Wrong (2 for 2). ANSI describes isolation in terms of "phenomena".
They take three phenomena and create four isolation levels based on the
existence, or lack thereof, of the above phenomena. They are:

Isolation Level     Dirty Read   Non-Repeatable Read   Phantom Read
----------------    ----------   -------------------   ------------
Read Uncommitted    Permitted    Permitted             Permitted
Read Committed                   Permitted             Permitted
Repeatable Read                                        Permitted
Serializable

Oracle supports explicitly two of the above isolation levels as they are defined
-- read committed and serializable. This doesn't tell the whole story however.
The SQL standard was attempting to set up isolation levels that would permit
various degrees of consistency for queries performed in each level. Repeatable
read is the isolation level they claim you must be at to get a read consistent
result from a query. Read committed does not give you consistent results. Read
uncommitted is the level to use to get non-blocking reads.

In Oracle -- Serializable and Repeatable Read are synonymous. Read uncommitted with its dirty read is unTHINKABLE.

but wait -- let's see what DB2 has to say in their technical documentation -- not their marketecture documentation:

<quote>
IBM Redbook for DB2

3.3.1.1 Strategy 1 - Conversation Integrity (Not Recommended)

A commit point is created at the end of the conversation. The lock duration may include user think time. Therefore, this strategy should not be used in applications where other users need concurrent access to the tables.


Read-only application might get away with this strategy in some cases. S-locks are normally released before commit point, but an S-locked page or row cannot be updated by another process and an S-lock may have a very long duration. It may include user think time.

We do not recommend this strategy even for read-only applications, although it may be the most convenient one from a programming point of view.
</quote>

what they are talking about there is "getting the right answer". In DB2, getting the right answer from a query that accesses more than one record requires leaving shared read locks behind -- that is sort of a bummer in a multi-user environment, isn't it?

or, going onto the read uncommitted level they are so proud of (apparently)

<quote>
1.3.2.1 Case 2 - Without Locking

1. User A updates value 100 to 0 into the DB2 table column.
2. User B reads the updated value 0 and makes program decisions based on that value.
3. User A cancels the process and changes value 0 back to value 100 for the
DB2 table column.

This case shows that, without locking, User B made a wrong program decision, and because of that the business data in the database is potentially corrupted

</quote>

well, that's not too brilliant.

Of course, I've already addressed the "hey no one else does it like this" (which is a really poor argument don't you think. Is IBM a company that would jump off a bridge because everyone else does?? I don't get where they are going with that little tidbit of inaccuracy)



It is funny that they continue on to bash our detour into the RBS to read around locks -- when they themselves would STOP. "Bad Oracle -- they use extra CPU, Good IBM -- we just stop you cold, no wasting of a resource you cannot put in the bank and use later, no siree bob!"



Great -- they get into a technical discussion of how Oracle manages rollback and manage to totally muck it up in the first paragraph:

<quote SRC=IBM accuracy=LOW>
Rollback segments are simply disk space set aside to store old images of data. They are physically a set of operating system files

inaccurate -- they are a segment that may span tons of files/logical volumes

of a predefined size

inaccurate -- they can grow and shrink

that work in a circular fashion.

accurate to a point -- but they get it wrong in the following

That is, old data is put into rbs 1 followed by rbs 2 and so on. When all of the rollback segments are full, the process wraps back around to the first segment.

wrong wrong wrong -- so wrong.

Undo Tablespaces in Oracle 9i work in a similar fashion except that the information is stored in an Oracle managed tablespace which is made up of operating system data files so the process is almost identical. There is a DBA management burden to configure and maintain rollback segments.

lets look below, after this quote for a tidbit on that

How big should your rollback segments be? What if you have a single transaction that does not fit into your rollback segments? What happens if you run out of space?

in IBM with fixed sized logs that cannot wrap (as opposed to Oracle) -- how big should your logs be? what if you have a single transaction that doesn't fit into your logs? what if you run out of space? POT calling KETTLE black here?

</quote>

Ok, so Oracle unduly burdens you with RBS? How about:


<quote src= IBM udb release notes>

If maxlocks is set too low, lock escalation happens when there is still enough lock space for other concurrent applications. If maxlocks is set too high, a few applications can consume most of the lock space, and other applications will have to perform lock escalation. The need for lock escalation in this case results in poor concurrency.
</quote>

hey, whats this lock escalation thing and why didn't they talk about this "feature" in their marketecture paper? What undue burden does this "feature" place on the DB2 dba.


Oh yeah, they forget to mention that in 9i, using undo tablespaces, the dba sets up an undo retention period and -- oh yeah, that's it.


continuing on:

<quote>
Well running out of space is not a problem...Oracle simply cancels your transaction! Yes that's right, if there is not enough rollback segment space for your transaction, the transaction fails. ...
</quote>

How evil that would be (if it were true). But wait -- ask IBM "hey, db2 guys, what happens if I run out of log? oh -- you cancel my transaction. Well, I don't think that is very nice, do you?"


a) we cancel the statement -- not the transaction.
b) we can suspend the statement allowing the dba to add more space if needed
c) we can dynamically grow the space (do their logs do that?) if needed making this self correcting if need be
d) we can even set up quotas to prevent runaways from consuming too much space (hey, db2, can you?)



as for the tiring repeated quotes of "these are a bear to manage", tell me -- how about maxlocks, lock escalation, sizing of logs, sizing of memory -- etc etc etc in DB2? you are trading X for Y here.


They then go on to say Oracle is page based -- not row based (this is stretching it a BIT far, don't you think...)


<quote>
If a transaction is looking for record 5 on block 106, it may have to clone and reconstruct several different version of block 106 even if record 5 has never changed. With DB2, the concurrency mechanisms work on the row level so if one transaction is locking row Y and another transaction wants to look at row X (even if they are both on the same page), then both transaction would be able to proceed.
</quote>


Well, let's see. Change the example to

transaction looking for record 5.
someone is modifying record 5.

Oracle gives you -- record 5.

DB2 gives you -- the hand. As in "stop", "wait", do not pass go, do not collect $200, go directly to jail. (dirty reads, I guess, would be their "get out of jail free" card -- if you want dirty data)



<quote>
Similarly if you took a DB2 application and ported it directly to Oracle, you would find that it may have problems with ORA-1555 because of the frequent commits in the application.
</quote>

Oh, that is rich. And why did the DB2 program have frequent commits? Why aren't the transactions "right sized"? Why do you commit all of the time -- even before the TRANSACTION is finished in DB2? Could it be because -- gasp -- readers block writers and writers block readers in DB2? (yes, yes it could be).


<quote>
Each implementation does what it is designed to do and it's the application development that either makes the best use of the technology or runs into issues.
</quote>

Now it is like talking to a schizophrenic person. Moments of lucidity wrapped in periods of craziness. That quote above -- beautiful, perfect, excellent -- totally correct.

They actually back up my premise that "database independence is a farce"; unfortunately, they do so in a paper so riddled with mistakes that it calls into question even the accurate information.


Then they get into the locking/blocking stuff (bottom of page 8 into 9) and say

"Note that this last example is the default behaviour with DB2"

What they don't mention is that it is the ONLY behavior with DB2. Unless you want to read UNCOMMITTED -- dirty data that is.





A reader, April 21, 2003 - 8:42 am UTC

Tom,

The DB2 paper contains a statement like

"Oracle's concurrency model is page based(block) not row based
In Oracle, the SCN is stored in the header of each data block (a.k.a. page).
So if any record on that block is modified, the SCN for that block is updated.
If a transaction is looking for record 5 on block 106, it may have to clone and
reconstruct several different version of block 106 even if record 5 has never
changed."


If there are five rows in a block and one gets changed, and we are reading the rows which have not changed -- according to the DB2 document, Oracle has to reconstruct the image from the rollback segment even for the rows which haven't changed.

If this is the case then it is not right..


Great Explanation

Sam, April 22, 2003 - 12:00 am UTC

I wonder what Mr. DB2 has to say about this..

db block gets on update.

Jerry, May 06, 2003 - 1:53 pm UTC

I think this is my favorite thread.  One question came up regarding the statistics for db block gets during an update.  It seems to me that db block gets very closely approximates the number of rows involved in an update, even though it is a block-level statistic.

Could Oracle be accessing or updating the block header for each row updated? 

Here is my working example (8.1.7 on AIX):



SQL> create table t as select * from all_objects;

Table created.

SQL>
SQL> exec show_space('T');
Free Blocks.............................0
Total Blocks............................900
Total Bytes.............................7372800
Unused Blocks...........................29
Unused Bytes............................237568
Last Used Ext FileId....................31
Last Used Ext BlockId...................4033
Last Used Block.........................16

PL/SQL procedure successfully completed.

SQL>
SQL> update t set object_id = object_id;

67118 rows updated.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> set autotrace on exp stat
SQL> set timing on
SQL>
SQL> /* once again after parsing/caching */
SQL>
SQL> update t set object_id = object_id;

67118 rows updated.

Elapsed: 00:00:45.01

Execution Plan
----------------------------------------------------------
   0      UPDATE STATEMENT Optimizer=CHOOSE
   1    0   UPDATE OF 'T'
   2    1     TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
          0  recursive calls
      68751  db block gets
        979  consistent gets
          0  physical reads
   16924764  redo size
        545  bytes sent via SQL*Net to client
        458  bytes received via SQL*Net from client
          6  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      67118  rows processed

SQL>
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL>
SQL> set autotrace off
SQL> set timing off
SQL> insert into t select * from t;

67118 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> exec show_space('T');
Free Blocks.............................1
Total Blocks............................1740
Total Bytes.............................14254080
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................31
Last Used Ext BlockId...................4443
Last Used Block.........................60

PL/SQL procedure successfully completed.

SQL>
SQL> set autotrace on exp stat
SQL> set timing on
SQL> update t set object_id = object_id;

134236 rows updated.

Elapsed: 00:01:04.05

Execution Plan
----------------------------------------------------------
   0      UPDATE STATEMENT Optimizer=CHOOSE
   1    0   UPDATE OF 'T'
   2    1     TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
          0  recursive calls
     137468  db block gets
       1739  consistent gets
          0  physical reads
   33839832  redo size
        546  bytes sent via SQL*Net to client
        458  bytes received via SQL*Net from client
          6  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
     134236  rows processed

SQL>
SQL> commit;


 

Tom Kyte
May 06, 2003 - 2:03 pm UTC

yes it does -- don't forget to include indexes that could be on the table as well....

Arash, May 07, 2003 - 2:33 pm UTC

Hi Tom,

I can't say how much I have learnt from your site. I understood how to calculate 'Consistent Gets' for a SELECT statement and also what 'db block gets' are in an update statement. But I don't know how to calculate, or what makes up, the number of 'Consistent Gets' in an update statement. I attached the example:

D820> exec show_space('T');
Free Blocks.............................5
Total Blocks............................164
Total Bytes.............................1343488
Unused Blocks...........................1
Unused Bytes............................8192
Last Used Ext FileId....................4
Last Used Ext BlockId...................38555
Last Used Block.........................15

PL/SQL procedure successfully completed.

D820> update t set object_id = object_id;

12016 rows updated.


Execution Plan
----------------------------------------------------------
0 UPDATE STATEMENT Optimizer=CHOOSE
1 0 UPDATE OF 'T'
2 1 TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
5706 recursive calls
13109 db block gets
3823 consistent gets
0 physical reads
2922208 redo size
485 bytes sent via SQL*Net to client
837 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
12016 rows processed
------------------------------------------------------
D820> select * from t;

12016 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
0 recursive calls
3 db block gets
242 consistent gets
0 physical reads
0 redo size
1676901 bytes sent via SQL*Net to client
10132 bytes received via SQL*Net from client
83 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
12016 rows processed
Why is 'Consistent Gets' higher in the update statement than in the select!? The same arraysize (150), same table, same data blocks...

I would appreciate it if you elaborate this more.
Thanks for your time,
Arash



Tom Kyte
May 08, 2003 - 9:12 am UTC

an UPDATE (and a delete for that matter -- and sometimes an insert) has two components -- a consistent READ part and a modification part.

inline views will be useful here to see this, instead of coding:

update t set object_id = object_id;


say you wrote:

update
( select *
from t )

set object_id = object_id;

the part in "bold" is done as a consistent read -- the update, modification part regets the block in current mode to actually update the data.

Why must this be? well, consider what would happen if:

create table t ( x int );
create index t_idx on t(x);


update t set x = x+100 where x > 5;

Now, say that update used t_idx to get the data, to find rows where x > 5. We get the first row -- say x = 6 -- update it to 106, and update the index as well. We continue on, and eventually we get into rows where x > 100.... We find the row with 106 -- one we already updated (but we do not realize this in current mode, only in consistent read mode). We would make it 206, and so on.

You would have an infinite loop. Don't laugh -- SQL Server (Microsoft and Sybase) both have the ability for this to happen. I remember the first time I saw it happen -- I wrote a program to read a table and generate a complex update. I wrote the updates to stdout and had them go into isql, something like this:


isql (with a query) | filter_to_generate_updates | isql (to do the updates)


that table had 5000 rows in it. Without consistent reads -- and since their update will sometimes be a "delete+insert" -- what happened was row 1 became row 5001 and so on. When we got to row 5000, instead of stopping, it kept reading.

25,000-plus updates later I killed it -- really confused. It took a while to figure out what exactly had happened.

A reader, May 08, 2003 - 11:45 am UTC

Thanks Tom.

A reader, May 08, 2003 - 1:35 pm UTC

Tom,
Excellent example.
Could you explain

"modification part regets the block in current mode to actually update the data."

Thanks


Tom Kyte
May 09, 2003 - 12:18 pm UTC

say you:

create table t ( x int );

insert into t values ( 1 );
insert into t values ( 2 );
commit;


update t set x = x+1;

Oracle will

a) read T in consistent read mode as of the point in time the update was started
b) that'll get the single block T is on in consistent mode.
c) Oracle will see the first row -- x=1 -- and say "hey, I gotta update that"
d) Oracle will get that block in current mode, modify it and let go of it
e) Oracle will see the second row -- x=2 -- and say "hey...."
f) Oracle will get that block in current mode, modify and let go of it.


consider:

ops$tkyte@ORA920> create table t ( x int );
Table created.

ops$tkyte@ORA920> insert into t select rownum from all_users;
46 rows created.

ops$tkyte@ORA920> commit;
Commit complete.

ops$tkyte@ORA920> set autotrace traceonly statistics
ops$tkyte@ORA920> update t set x = x where rownum <= 1;

Statistics
----------------------------------------------------------
          2  db block gets
          7  consistent gets
          1  rows processed

ops$tkyte@ORA920> update t set x = x where rownum <= 2;

Statistics
----------------------------------------------------------
          2  db block gets
          7  consistent gets
          2  rows processed

ops$tkyte@ORA920> update t set x = x where rownum <= 5;

Statistics
----------------------------------------------------------
          5  db block gets
          7  consistent gets
          5  rows processed

ops$tkyte@ORA920> update t set x = x where rownum <= 10;

Statistics
----------------------------------------------------------
         10  db block gets
          7  consistent gets
         10  rows processed

ops$tkyte@ORA920> update t set x = x where rownum <= 20;

Statistics
----------------------------------------------------------
         20  db block gets
          7  consistent gets
         20  rows processed


The number of consistent gets is "consistent" (we full scanned the table in consistent read mode each time).  The number of db block (current mode) gets goes up as the number of rows modified goes up.

A reader, May 09, 2003 - 3:09 pm UTC

Hi Tom,
Now I am getting clear on this.

One other question

When you update a table, the number of DB BLOCK GETS goes up and up. In your example, if you update 5 rows DB BLOCK GETS would be 5, for 10 rows it would be 10, and for 20 it would be 20, and so on.....

What does that mean when you run "update t set x = x where rownum <= 20"?

Is the table spread across 20 blocks, or spread across 20 blocks in the buffer cache?

Could you explain this?
Thanks,


Tom Kyte
May 09, 2003 - 3:16 pm UTC

that table (very very small table) would have been on a single block.  the same block was gotten over and over and over.

ops$tkyte@ORA920> create table t ( x int );

Table created.

ops$tkyte@ORA920> insert into t select rownum from all_users;

46 rows created.

ops$tkyte@ORA920> select dbms_rowid.rowid_block_number(rowid), count(*) from t
  2  group by dbms_rowid.rowid_block_number(rowid);

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                 584         46


there was one block, block 584... 

A reader, May 09, 2003 - 4:02 pm UTC

I got it

Thanks

A reader, May 12, 2003 - 10:05 am UTC

Hi Tom,

Here is the summary of what I've understood up till now.
update ( select * from t ) set object_id = object_id;
The "select * from t" part would do a consistent read (i.e. if the query starts at SCN 10 then it will read all blocks as of SCN 10;
if a block has changed since then, it will reconstruct the image from the rollback segment).
The update portion would do a CURRENT READ (i.e. it won't care about the SCN -- it will get the latest value for the block).
Please confirm this.

Thanks

Tom Kyte
May 12, 2003 - 10:06 am UTC

yes, confirmed.

A reader, May 28, 2003 - 1:53 pm UTC

Hi Tom,

I understood what you meant by the following example:
"
create table t ( x int );
create index t_idx on t(x);


update t set x = x+100 where x > 5;

Now, say that update used t_idx to get the data, to find rows where
x > 5. We get the first row -- say x = 6 -- update it to 106, and update the index as well.
We continue on, and eventually we get into rows where x > 100.... We find the
row with 106 -- one we already updated (but we do not realize this in current mode,
only in consistent read mode). We would make it 206, and so on.
"

But I couldn't simulate it in MSSQL 2000 and Sybase. I mean, it works fine. It doesn't go into a forever loop. It just does the update.

example:
set transaction isolation level read uncommitted

create table t (x integer)

begin
declare @i integer;
SET @i=1;
while @i<1000
begin
insert into t(x) values(@i)
SET @i = @i + 1
end
end

select * from t

begin transaction
update t
set x = x+5
where x> 100

(899 row(s) affected)

Could you please tell me what causes this weird problem: "You would have an infinite loop. Don't laugh -- sqlserver (MS and Sybase) both have the ability for this to happen."?

Thanks for your time

Tom Kyte
May 28, 2003 - 7:10 pm UTC

process 1 reads and prints to stdout

process 2 converts data into update statement

process 3 updates data

( using a pipe, I had isql | awk | isql ). If the row is updated via a DELETE+INSERT and there are no clustered indexes, you may well be able to reproduce this, as the insert tends to go at the end of the table.

unfortunately, I don't have either of those products installed (well, actually one won't install on any computer I have, no DLLs you know)...

You can see a similar effect by doing this

o create a large table with "known data" -- something that will take a couple of seconds to full scan.

o make sure you know the "first" record that will be hit in the full scan and the "last record". eg, in Oracle I might:

insert into this_table ( some known value );
insert into this_table select lots of data;
insert into this_table ( some other known value );

Now, make sure one of the columns is a number you can "sum up".

o have two sessions ready to go

o in session one "select sum(of_that_number) from table", remember that number. also, remember what value row 1 has.

o while that is running, but before it is done do these updates in session 2

update table set of_that_number = 0 where <predicate to get the first row>;
update table set of_that_number = <value that was in row1> where <last row>;


o make sure the updates run after the first row was read, but before the last. Tell me, is the sum you get an answer that ever existed in your database at any point in time? The results can be even more amusing if

o the update to zero is to some number other than zero (say 5)
o the update to row 1 causes it to move to the end of the table (or
even the middle) as it might (might not, they got better at that over time)


you can end up reading row 1 more than once, giving you really ambiguous results.
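To make that recipe concrete, here is a minimal sketch in Oracle syntax (table name, ids and values are made up; big_table is just filler to slow the scan down). On Oracle the answer will be consistent -- the fun starts when you run the equivalent on a read-locking database:

create table t2 ( id int, amt number );
insert into t2 values ( 1, 100 );                    -- known "first" row
insert into t2 select rownum+1, 50 from big_table;   -- lots of filler
insert into t2 values ( 9999999, 999 );              -- known "last" row
commit;

-- session 1: kick off the long-running aggregate
select sum(amt) from t2;

-- session 2: run these while session 1 is still scanning
update t2 set amt = 0   where id = 1;         -- zero the row already read
update t2 set amt = 100 where id = 9999999;   -- move the value to the last row
commit;

Oracle returns the sum as of the instant the query began. Without multi-versioning, the 100 can be counted twice (or not at all) -- an answer that never existed in the database at any point in time.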


Many changes to a block in different SCNs

Oren, July 01, 2003 - 4:40 am UTC

Hi Tom.

Suppose a specific data block has been changed at SCNs 10, 13, 15 and 20.
And suppose that a query that started at SCN 14 needs a record from this block (when the block is already "signed" as SCN 20).

So the block should be reconstructed by rolling back the "SCN 20" change AND the "SCN 15" change, right?
Now, how does Oracle know which rollback segments it should read? Does it go over all the existing rollback segments and look for that block? (I guess this would be too expensive)

And if the rollback data for the "SCN 15" change has already been overwritten, we'll get "snapshot too old", right?
But assuming the rollback data for the "SCN 10" change has not been overwritten yet, how does Oracle know NOT to reconstruct the block from the "SCN 10" change (and to give ora-1555 instead)?
Is there some kind of a list for each data block, containing all the SCNs (or rollback segment slots) that changed that block?

Does my question indicate I misunderstand something crucial in multi-versioning :-( ?
Many thanks,
Oren.


Tom Kyte
July 01, 2003 - 8:08 am UTC

the block header has this information -- the transaction header (the ITL) in the block itself records which transactions changed the block.
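You can see that structure -- the ITL, or interested transaction list -- for yourself with a block dump. A minimal sketch (the file and block numbers come from dbms_rowid; 4 and 584 here are just the values from the earlier example):

select dbms_rowid.rowid_relative_fno(rowid) fno,
       dbms_rowid.rowid_block_number(rowid) blk
  from t
 where rownum = 1;

alter system dump datafile 4 block 584;

The "Itl" section of the resulting trace file lists the transaction slots, each with its undo segment address and commit SCN.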

SP2-0612: Error generating AUTOTRACE report when arraysize is reduced..????

A reader, July 01, 2003 - 2:49 pm UTC

RKPD01> create table test
2 ( x char(2000),
3 y char(2000),
4 z char(2000),
5 a char(1000));

Table created.


RKPD01> begin
2 for i in 1..10 loop
3 insert into test values(chr(i), chr(i+1), chr(i+2), chr(i+3));
4 end loop;
5 end;
6 /

PL/SQL procedure successfully completed.

RKPD01> commit;

Commit complete.

RKPD01> select count(*) from test;


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST'



Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
12 consistent gets
0 physical reads
0 redo size
367 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


RKPD01> show array
arraysize 15
RKPD01> select * from test;

10 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'




Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
12 consistent gets
0 physical reads
0 redo size
71296 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10 rows processed

RKPD01> set arraysize 10
RKPD01> select * from test;

10 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'




Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
12 consistent gets
0 physical reads
0 redo size
71296 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10 rows processed

RKPD01> set arraysize 5
RKPD01> select * from test;
SP2-0612: Error generating AUTOTRACE report

10 rows selected.

SP2-0612: Error generating AUTOTRACE report

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'



SP2-0612: Error generating AUTOTRACE STATISTICS report
RKPD01> select * from test;
SP2-0612: Error generating AUTOTRACE report

10 rows selected.

SP2-0612: Error generating AUTOTRACE report

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'



SP2-0612: Error generating AUTOTRACE STATISTICS report
RKPD01> set arraysize 100
RKPD01> select * from test
2
RKPD01> /

10 rows selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'




Statistics
----------------------------------------------------------
0 recursive calls
4 db block gets
12 consistent gets
0 physical reads
0 redo size
71296 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10 rows processed

RKPD01> set arraysize 5
RKPD01> /
SP2-0612: Error generating AUTOTRACE report

10 rows selected.

SP2-0612: Error generating AUTOTRACE report

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'TEST'



SP2-0612: Error generating AUTOTRACE STATISTICS report

As you see, I'm getting this error when I reduce the arraysize to a low value but not when I set it sufficiently high.. What is the reason for this.... am I doing something wrong here?? Oracle 8.1.7

Tom Kyte
July 01, 2003 - 3:08 pm UTC

looks like a small issue with 8i sqlplus and autotrace with arraysizes below 9.

not a "big deal"

Thanks for the quick response :)

A reader, July 01, 2003 - 3:51 pm UTC


more consistent gets -- more usage of cpu? Thanks.

Reader, August 09, 2003 - 5:47 pm UTC


Tom Kyte
August 10, 2003 - 2:31 pm UTC

more cpu, more latches (which are locks), more work.

SO, SETTING THE ARRAY SIZE IS IMPORTANT TO MINIMIZE THE CPU USAGE? THANKS.

reader, August 10, 2003 - 2:59 pm UTC


Tom Kyte
August 10, 2003 - 3:35 pm UTC

IT IS IMPORTANT for performance (less network roundtrips), for scalability (less latching), and for resource utilization (less resources used).

yes....

why I am not seeing values......?

reader, August 11, 2003 - 9:29 pm UTC

Please see below. Am I missing something? Why can't I see the statistics? Thanks for your time.

SQL> @$ORACLE_HOME/rdbms/admin/utlxplan
SQL> rem
SQL> rem $Header: utlxplan.sql 29-oct-2001.20:28:58 mzait Exp $ xplainpl.sql
SQL> rem
SQL> Rem Copyright (c) 1988, 2001, Oracle Corporation.  All rights reserved.
SQL> Rem NAME
SQL> REM    UTLXPLAN.SQL
SQL> Rem  FUNCTION
SQL> Rem  NOTES
SQL> Rem  MODIFIED
SQL> Rem        mzait   10/26/01  - add keys and filter predicates to the plan e
SQL> Rem        ddas    05/05/00  - increase length of options column
SQL> Rem        ddas    04/17/00  - add CPU, I/O cost, temp_space columns
SQL> Rem        mzait   02/19/98 -  add distribution method column
SQL> Rem        ddas    05/17/96 -  change search_columns to number
SQL> Rem        achaudhr        07/23/95 -  PTI: Add columns partition_{start, }
SQL> Rem        glumpkin        08/25/94 -  new optimizer fields
SQL> Rem        jcohen  11/05/93 -  merge changes from branch 1.1.710.1 - 9/24
SQL> Rem        jcohen  09/24/93 - #163783 add optimizer column
SQL> Rem        glumpkin        10/25/92 -  Renamed from XPLAINPL.SQL
SQL> Rem        jcohen  05/22/92 - #79645 - set node width to 128 (M_XDBI in ge)
SQL> Rem        rlim    04/29/91 -         change char to varchar2
SQL> Rem   Peeler        10/19/88 - Creation
SQL> Rem
SQL> Rem This is the format for the table that is used by the EXPLAIN PLAN
SQL> Rem statement.     The explain statement requires the presence of this
SQL> Rem table in order to store the descriptions of the row sources.
SQL>
SQL> create table PLAN_TABLE (
  2          statement_id    varchar2(30),
  3          timestamp       date,
  4          remarks         varchar2(80),
  5          operation       varchar2(30),
  6          options         varchar2(255),
  7          object_node     varchar2(128),
  8          object_owner    varchar2(30),
  9          object_name     varchar2(30),
 10          object_instance numeric,
 11          object_type     varchar2(30),
 12          optimizer       varchar2(255),
 13          search_columns  number,
 14          id              numeric,
 15          parent_id       numeric,
 16          position        numeric,
 17          cost            numeric,
 18          cardinality     numeric,
 19          bytes           numeric,
 20          other_tag       varchar2(255),
 21          partition_start varchar2(255),
 22          partition_stop  varchar2(255),
 23          partition_id    numeric,
 24          other           long,
 25          distribution    varchar2(30),
 26          cpu_cost        numeric,
 27          io_cost         numeric,
 28          temp_space      numeric,
 29          access_predicates varchar2(4000),
 30          filter_predicates varchar2(4000));

Table created.

SQL>
SQL> set autotrace traceonly statistics
SQL> select * from t;

6197 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
          0  bytes sent via SQL*Net to client
          0  bytes received via SQL*Net from client
          0  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       6197  rows processed
                                                          

Tom Kyte
August 12, 2003 - 8:13 am UTC

you are logged in as sys.

You should never use sys, system, or ANY of the oracle accounts for anything. they are "ours"

Create your own account to play with.
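A minimal sketch of such a play account (name/password are placeholders). Note that autotrace also needs the plustrace role, which $ORACLE_HOME/sqlplus/admin/plustrce.sql creates:

create user demo identified by demo
default tablespace users quota unlimited on users;

grant create session, create table to demo;
grant plustrace to demo;   -- run plustrce.sql as SYS first if the role is missing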

Why the difference in consistent gets?

Tony, September 05, 2003 - 5:39 am UTC

I have two tables, SMALL and BIG

SQL> select count(*) from small;

    COUNT(*)
    ----------
     104

SQL> select count(*) from big;

  COUNT(*)
   ----------
     34486

I issued the query SELECT COUNT(*) FROM small, big (without a join) under the RBO and got different consistent gets and elapsed times when the table order was changed. Why were the consistent gets different when the table order changed? And why were the consistent gets different for the CBO as well? Plans are given below. Please clear my doubt.

RBO:
-------
SQL> select /*+ rule */ count(*) from small, big;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: RULE
   1    0   SORT (AGGREGATE)
   2    1     NESTED LOOPS
   3    2       TABLE ACCESS (FULL) OF 'BIG'
   4    2       TABLE ACCESS (FULL) OF 'SMALL'



Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     172822  consistent gets
          0  physical reads
          0  redo size
        382  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed


SQL> select /*+ rule */ count(*) from big,small;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: RULE
   1    0   SORT (AGGREGATE)
   2    1     NESTED LOOPS
   3    2       TABLE ACCESS (FULL) OF 'SMALL'
   4    2       TABLE ACCESS (FULL) OF 'BIG'


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      40773  consistent gets
          0  physical reads
          0  redo size
        382  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

CBO:
------

SQL> select count(*) from small, big;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4058 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     MERGE JOIN (CARTESIAN) (Cost=4058 Card=3586544)
   3    2       TABLE ACCESS (FULL) OF 'SMALL' (Cost=2 Card=104)
   4    2       BUFFER (SORT) (Cost=4056 Card=34486)
   5    4         TABLE ACCESS (FULL) OF 'BIG' (Cost=39 Card=34486)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        397  consistent gets
          0  physical reads
          0  redo size
        382  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed



 

Tom Kyte
September 05, 2003 - 3:57 pm UTC

the plan is:

  for x in ( select * from T1 )
    for y in ( select * from T2 )
        output x+y
    end loop
  end loop

from the rbo.  The formula for the consistent gets will then be:

number_of_blocks_in_t1 + number_of_rows_in_t1 * number_of_blocks_in_t2
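to make that concrete with your numbers: if SMALL occupies about 5 blocks below its high water mark and BIG about 392 (sizes I am inferring -- they fit your output exactly), then driving with BIG gives 392 + 34486*5 = 172,822 consistent gets, while driving with SMALL gives 5 + 104*392 = 40,773 -- the two values autotrace reported.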

so, here are some sample numbers to play with:

ops$tkyte@ORA920> declare
  2      l_small_rows   number := 104;
  3      l_big_rows     number := 34486;
  4
  5      type numArray is table of number;
  6
  7      l_small_blocks numArray := numArray( 1,     2,    4,    8,   16 );
  8      l_big_blocks   numArray := numArray( 300, 600, 1200, 2400, 4800 );
  9
 10      l_small_drives number;
 11      l_big_drives number;
 12  begin
 13      for i in 1 .. l_small_blocks.count
 14      loop
 15          for j in 1 .. l_big_blocks.count
 16          loop
 17              dbms_output.put_line( 'small blocks = ' || l_small_blocks(i) ||
 18                                    ' big blocks = ' || l_big_blocks(i) );
 19
 20              l_small_drives := (l_small_blocks(i) + l_small_rows*l_big_blocks(j));
 21              l_big_drives   := (l_big_blocks(i) + l_big_rows*l_small_blocks(j));
 22              dbms_output.put_line( 'small drives = ' || l_small_drives ||
 23                                    ' big drives = ' || l_big_drives ||
 24                                    ' difference = ' || abs( l_big_drives-l_small_drives ) );
 25          end loop;
 26      end loop;
 27  end;
 28  /
small blocks = 1 big blocks = 300
small drives = 31201 big drives = 34786 difference = 3585
small blocks = 1 big blocks = 300
small drives = 62401 big drives = 69272 difference = 6871
small blocks = 1 big blocks = 300
small drives = 124801 big drives = 138244 difference = 13443
small blocks = 1 big blocks = 300
small drives = 249601 big drives = 276188 difference = 26587
small blocks = 1 big blocks = 300
small drives = 499201 big drives = 552076 difference = 52875
small blocks = 2 big blocks = 600
small drives = 31202 big drives = 35086 difference = 3884
small blocks = 2 big blocks = 600
small drives = 62402 big drives = 69572 difference = 7170
small blocks = 2 big blocks = 600
small drives = 124802 big drives = 138544 difference = 13742
small blocks = 2 big blocks = 600
small drives = 249602 big drives = 276488 difference = 26886
small blocks = 2 big blocks = 600
small drives = 499202 big drives = 552376 difference = 53174
small blocks = 4 big blocks = 1200
small drives = 31204 big drives = 35686 difference = 4482
small blocks = 4 big blocks = 1200
small drives = 62404 big drives = 70172 difference = 7768
small blocks = 4 big blocks = 1200
small drives = 124804 big drives = 139144 difference = 14340
small blocks = 4 big blocks = 1200
small drives = 249604 big drives = 277088 difference = 27484
small blocks = 4 big blocks = 1200
small drives = 499204 big drives = 552976 difference = 53772
small blocks = 8 big blocks = 2400
small drives = 31208 big drives = 36886 difference = 5678
small blocks = 8 big blocks = 2400
small drives = 62408 big drives = 71372 difference = 8964
small blocks = 8 big blocks = 2400
small drives = 124808 big drives = 140344 difference = 15536
small blocks = 8 big blocks = 2400
small drives = 249608 big drives = 278288 difference = 28680
small blocks = 8 big blocks = 2400
small drives = 499208 big drives = 554176 difference = 54968
small blocks = 16 big blocks = 4800
small drives = 31216 big drives = 39286 difference = 8070
small blocks = 16 big blocks = 4800
small drives = 62416 big drives = 73772 difference = 11356
small blocks = 16 big blocks = 4800
small drives = 124816 big drives = 142744 difference = 17928
small blocks = 16 big blocks = 4800
small drives = 249616 big drives = 280688 difference = 31072
small blocks = 16 big blocks = 4800
small drives = 499216 big drives = 556576 difference = 57360
 
PL/SQL procedure successfully completed.
 


As you can see -- the number of consistent gets for that plan is radically affected by the NUMBER OF BLOCKS in the table.  


the cbo plan -- it is not even remotely comparable.  but it basically read BIG once, sorted it and read it from temp (temp reads are NOT consistent gets)
 

Client Arraysize won't affect the consistent gets?

Tony, September 06, 2003 - 3:04 am UTC

Thanks for answering my question on "Why the difference in consistent gets?". You have given a new formula to calculate consistent gets. But you have mentioned on many occasions that consistent gets (logical reads, if I'm not wrong) depend on the client array size -- in my case, the SQL*Plus array size. Please clarify.







Tom Kyte
September 06, 2003 - 9:04 am UTC

clarity is in the original answer and followups??

that was the crux of the first answer on this page. do you have a specific question?

More consistent gets Less elapsed time. Why?

Tony, September 06, 2003 - 3:32 am UTC

[Continuation of the above question....]

When consistent gets are higher, the elapsed time is lower; and when consistent gets are lower, the elapsed time is higher for the query plans in the question "Why the difference in consistent gets?" above.

Why is elapsed time inversely proportional to consistent gets in my case? Please clear my doubt.



Tom Kyte
September 06, 2003 - 9:04 am UTC

give me a "for example" to work with at least.

Why elapsed time is high when consistent get is low?

Tony, September 08, 2003 - 1:28 am UTC

Why is elapsed time high when consistent gets are low, and elapsed time low when consistent gets are high?

For example:

SQL>  select count(*) from small;

  COUNT(*)
   ----------
     104

SQL> select count(*) from big;

  COUNT(*)
   ----------
   34486

SQL> select /*+ rule */ count(*) from small, big;

Elapsed: 00:00:01.08

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: RULE
   1    0   SORT (AGGREGATE)
   2    1     NESTED LOOPS
   3    2       TABLE ACCESS (FULL) OF 'BIG'
   4    2       TABLE ACCESS (FULL) OF 'SMALL'

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     172822  consistent gets
          0  physical reads
          0  redo size
        382  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>  select /*+ rule */ count(*) from big, small;

Elapsed: 00:00:02.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: RULE
   1    0   SORT (AGGREGATE)
   2    1     NESTED LOOPS
   3    2       TABLE ACCESS (FULL) OF 'SMALL'
   4    2       TABLE ACCESS (FULL) OF 'BIG'


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      40773  consistent gets
          0  physical reads
          0  redo size
        382  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 

Tom Kyte
September 08, 2003 - 11:07 am UTC

use tkprof.

first, the autotrace stuff takes time and is counted.

second, you are seeing the total elapsed time -- including sqlplus "screen paint", network, etc

use tkprof to find out for real what is taking place.
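a minimal sketch of doing that (the trace file name is made up -- look in user_dump_dest for the real one):

alter session set timed_statistics = true;
alter session set sql_trace = true;

select /*+ rule */ count(*) from small, big;

alter session set sql_trace = false;

then, at the OS prompt:

$ tkprof ora_12345.trc report.prf sys=no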

Outstanding

Jim, September 10, 2003 - 4:22 pm UTC

Tom, you're still the one! This thread is rockin!

Setting arraysize for an oci app

Ma$e, October 06, 2003 - 10:58 am UTC

Tom:

Following up on the arraysize discussion.

I have an app that connects to the oracle d/b via oci calls. How can I test the effect of arraysize for this app ?
Does arraysize even come into consideration for an oci app ?

Thanks again for your help.

Ma$e

Tom Kyte
October 06, 2003 - 11:20 am UTC

yes, it makes sense/comes into consideration.

Look at the OCIDefine functions -- valuep can be an array.

Can't find OCIDefine function calls

Ma$e, October 06, 2003 - 2:27 pm UTC

Tom:

I searched for OCIDefine function calls in the source code library and found no hits. Chances are pretty good that we are using the OCI 7 calls. I tried looking for its equivalent name from the "old days" and did not get anywhere.

Would you know what this function call was known as in its previous incarnation?

Thanks in advance.

Ma$e

Tom Kyte
October 06, 2003 - 2:44 pm UTC

ocidefinebypos

for example in 8 and above

see $ORACLE_HOME/rdbms/demo/oci06.c for "old style" array fetch example

Sorry I also forgot to ask another Q.

Ma$e, October 06, 2003 - 2:31 pm UTC

How can the end user set the array size for an OCI app without access to the source code?

Or in other words:

We are s/w developers: how can we give the end user the option of setting this value?

Can it be set via a logon trigger or some other means, for the user that we use to establish an OCI connection to the d/b with?

Thanks
Ma$e


Tom Kyte
October 06, 2003 - 2:44 pm UTC

you, as developers, would have a parameter file with configuration options that the end user could edit.


it is a CLIENT thing, cannot be done in the server.

What was I thinking....

A reader, October 06, 2003 - 3:04 pm UTC

Hi Tom:

Absolutely correct. The arraysize setting is a client side setting. What was I thinking....

So let me understand. The application would open the parameter file and assign a value to an internal variable which we would pass to the oci function call as our array size (similar to the init.ora)!!! You can tell I'm a DBA.

Any other possible way?

This is unlikely to happen unless I can demonstrate to the developers (yet another battle between a DBA and the other guys) that it will benefit the end user in a tangible form that the developers can understand. So I need to be able to demonstrate, beyond any shadow of a doubt, the benefits of a user-settable array size.


I found a "C" programming example on metalink:

http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=73823.1

However, this example does not demonstrate the "roi" of different arraysizes in terms the developers can understand.

Can you help (please wear both hats -- developer and DBA)?


Thanks

Ma$e

Tom Kyte
October 06, 2003 - 3:38 pm UTC

the ROI?  

I do that all of the time with runstats and sqlplus.  I did it in chapter 2 of my new book "Effective Oracle By Design".  I do it in my "what's wrong" presentation I give from time to time.  Here is a short excerpt from the book showing the effects of array fetching on logical IO's:

<quote>
Array Size Effects

The array size is the number of rows fetched (or sent, in the case of inserts, updates, and deletes) by the server at a time. It can have a dramatic effect on performance. 

To demonstrate the effects of array size, we'll run the same query a couple of times and look at just the consistent get differences between runs:

ops$tkyte@ORA920> create table t as select * from all_objects;
Table created.

ops$tkyte@ORA920> set autotrace traceonly statistics;

ops$tkyte@ORA920> set arraysize 2
ops$tkyte@ORA920> select * from t;
29352 rows selected.

Statistics
----------------------------------------------------------
      14889  consistent gets

Note how one half of 29,352 (rows fetched) is very close to 14,889, the number of consistent gets. Every fetch we made from the server caused it to send two rows back. So, for every two rows of data, we needed to do a logical I/O to get the data. Oracle got a block, took two rows from it, and sent them to SQL*Plus. Then SQL*Plus asked for the next two rows, and Oracle got that block again, or got the next block if we had already fetched all of its data, and returned the next two rows, and so on.

Next, let's increase the array size:

ops$tkyte@ORA920> set arraysize 5
ops$tkyte@ORA920> select * from t;
29352 rows selected.

Statistics
----------------------------------------------------------
       6173  consistent gets

Now, 29,352 divided by 5 is about 5,871, and that would be the least number of consistent gets we would be able to achieve (the actual observed number of consistent gets is slightly higher). All that means is that sometimes, in order to get five rows, we needed to get two blocks: we got the last rows from one block and the first rows from the next block.

Let's increase the array size again:

ops$tkyte@ORA920> set arraysize 10
ops$tkyte@ORA920> select * from t;
29352 rows selected.

Statistics
----------------------------------------------------------
       3285  consistent gets

ops$tkyte@ORA920> set arraysize 15
ops$tkyte@ORA920> select * from t;
29352 rows selected.

Statistics
----------------------------------------------------------
       2333  consistent gets

ops$tkyte@ORA920> set arraysize 100
ops$tkyte@ORA920> select * from t;
29352 rows selected.

Statistics
----------------------------------------------------------
        693  consistent gets

ops$tkyte@ORA920> set arraysize 5000
ops$tkyte@ORA920> select * from t;
29352 rows selected.

Statistics
----------------------------------------------------------
        410  consistent gets

ops$tkyte@ORA920> set autotrace off

As you can see, as the array size goes up, the number of consistent gets goes down. So, does that mean you should set your array size to 5,000, as in this last test? Absolutely not. 

If you notice, the overall number of consistent gets has not dropped dramatically between array sizes of 100 and 5,000. However, the amount of RAM needed on the client and server has gone up with the increased array size. The client must now be able to cache 5,000 rows. Not only that, but it makes our performance look choppy: The server works really hard and fast to get 5,000 rows, then the client works really hard and fast to process 5,000 rows, then the server, then the client, and so on. It would be better to have more of a stream of information flowing: Ask for 100 rows, get 100 rows, ask for 100, process 100, and so on. That way, both the client and server are more or less continuously processing data, rather than the processing occurring in small bursts.
</quote>


and to put it in terms they (coders) will understand -- array fetching goes faster.  On:
http://asktom.oracle.com/~tkyte/flat/index.html

I have a pro*c program that accepts an array size as input.  Look at the difference between single row fetching:

[tkyte@tkyte-pc array_flat]$ time ./array_flat userid=big_table/big_table 'sqlstmt=select * from big_table where rownum <= 100000' arraysize=1 > /dev/null
 
Connected to ORACLE as user: big_table/big_table
 
Unloading 'select * from big_table where rownum <= 100000'
Array size = 1
ID,OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY
100000 rows extracted
 
real    1m10.250s
user    0m30.852s
sys     0m2.365s
[tkyte@tkyte-pc array_flat]$ !!
time ./array_flat userid=big_table/big_table 'sqlstmt=select * from big_table where rownum <= 100000' arraysize=1 > /dev/null
 
Connected to ORACLE as user: big_table/big_table
 
Unloading 'select * from big_table where rownum <= 100000'
Array size = 1
ID,OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY
100000 rows extracted
 
real    1m13.662s
user    0m20.514s
sys     0m0.910s

versus array fetching:


[tkyte@tkyte-pc array_flat]$ time ./array_flat userid=big_table/big_table 'sqlstmt=select * from big_table where rownum <= 100000' arraysize=100 > /dev/null                                                                                                                                                            
Connected to ORACLE as user: big_table/big_table
 
Unloading 'select * from big_table where rownum <= 100000'
Array size = 100
ID,OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY
100000 rows extracted
 
real    0m6.145s
user    0m3.816s
sys     0m0.211s
[tkyte@tkyte-pc array_flat]$
[tkyte@tkyte-pc array_flat]$ !!
time ./array_flat userid=big_table/big_table 'sqlstmt=select * from big_table where rownum <= 100000' arraysize=100 > /dev/null
 
Connected to ORACLE as user: big_table/big_table
 
Unloading 'select * from big_table where rownum <= 100000'
Array size = 100
ID,OWNER,OBJECT_NAME,SUBOBJECT_NAME,OBJECT_ID,DATA_OBJECT_ID,OBJECT_TYPE,CREATED,LAST_DDL_TIME,TIMESTAMP,STATUS,TEMPORARY,GENERATED,SECONDARY
100000 rows extracted
 
real    0m5.491s
user    0m3.795s
sys     0m0.230s
[tkyte@tkyte-pc array_flat]$


that should raise an eyebrow or two -- hey, you mean we can make our code run faster with array fetching!
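and the same lever exists inside the database. If changing the OCI code is a hard sell, a PL/SQL sketch (9iR2 syntax; big_table as above) shows the 100-rows-per-call style in miniature:

declare
    type numArray is table of big_table.id%type;
    l_ids   numArray;
    cursor c is select id from big_table where rownum <= 100000;
begin
    open c;
    loop
        fetch c bulk collect into l_ids limit 100;  -- array fetch, 100 rows/call
        for i in 1 .. l_ids.count
        loop
            null;  -- process a row here
        end loop;
        exit when c%notfound;
    end loop;
    close c;
end;
/

compare its tkprof output to a plain row-at-a-time cursor for loop and the fetch counts tell the story.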

How can I thank you....

Ma$e, October 07, 2003 - 11:55 am UTC

Tom:

There is so much value in your response. I can't thank you enough for this. I'm still digesting it.

One thing is for sure: I am going to read your book before asking you questions that have been answered there. I feel bad -- I do own both your books. However, it's your comments that precede or follow your quotes that are invaluable.

Thanks once again. May God bless you.

Ma$e

Clarification

A reader, October 27, 2003 - 12:16 pm UTC

In your response to one of the reader's queries, you mentioned a case where a table has 100 rows across 10 blocks, and with an array size of 10 the total consistent gets is 20. Can you explain how you arrived at 20? My calculation is as follows :

100 rows
10 rows/block
Array size 10

First fetch (1 consistent get) - 10 rows/0 rows left
Second fetch (2 consistent gets) - 10 rows/0 rows left
....
Tenth fetch (10 consistent gets) - 10 rows/0 rows left

My question is that since the number of rows/block is the same as the array size in the above example, it should do exactly 10 consistent gets. Can you explain how you arrived at 20 consistent gets instead of 10? Is Oracle revisiting blocks again, or how did you arrive at 20 consistent block gets?

Tom Kyte
October 27, 2003 - 1:46 pm UTC

it picks up where it left off -- on the last block. You know it had 10 rows, I know it had 10 rows, but the database just said "get me 10 and remember where we stopped". It didn't necessarily look ahead to see that "hey, there isn't an 11th row, we should pick up on the NEXT block next time"
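worked through: fetch 1 gets block 1 (one get). Each of fetches 2 through 10 re-gets the block where the prior fetch stopped and then reads the next block (two gets apiece), and a final fetch revisits the last block only to find nothing left -- roughly 1 + 9*2 + 1 = 20 consistent gets.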

with the full scan extent map reading -- it is actually "more" than 20

consistent gets and db block gets

Ivan, October 30, 2003 - 7:01 am UTC

Tom -

I am confused about db block gets and consistent gets.

1. Are db block gets only a concern for full table scans? Why? And why are they not related to the data we selected?

2. Consistent gets can be two things (please clarify):

i. From the buffer cache, if the buffer cache has the data. For example, Person A selected table T; afterwards, Person B selected table T again, so Person B can get the data from memory. This can be called a consistent get.

ii. From the rollback segment. Person A selected a row in table T, but that row was updated by Person B. So Person A needs to get the data from the rollback segment. That causes two things to happen: first, physical reads -- from the rollback segment into the buffer cache; second, consistent gets -- reading the data from the buffer cache.

Ivan

Tom Kyte
October 30, 2003 - 7:20 am UTC

1) it is not a concern only for full table scans. It is a mode blocks are gotten in, in many cases.

blocks are either gotten in current mode (db block gets) or consistent read mode (consistent gets). We get the blocks in the mode we need them. If you do not know what "consistent gets" means -- please see the concepts guide and read the concurrency control sections -- and learn about multi-versioning and read consistency.

2) consistent gets ALWAYS come from the buffer cache.

rollback is buffered in the cache.

but yes, a consistent get may well involve reading rollback data for the affected segment.

meaning of current mode

Ivan, October 30, 2003 - 10:07 am UTC

Tom
I am still not clear about the current mode.

Please clarify whether I am correct.

Assume every block has a time in it, recording the most recent time it was modified.

For example, at time t, Person A does select * from t.

Consistent mode means getting the block with time <= t.

If we need to get a block with time > t, we need to roll it back to get the consistent data.

Since we are performing a full table scan, we need current mode (the block with the most recent time) to tell us where the segment header is.

Tom Kyte
October 30, 2003 - 11:18 am UTC

current mode = get the block as it exists right now.

consistent mode = get the block as it existed at some point in the past.
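you can watch the two counters for your own session to see the split (a minimal sketch -- it assumes you can query the v$ views):

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name in ( 'db block gets', 'consistent gets' );

run a select and re-query: only 'consistent gets' moves. run an update and re-query: 'db block gets' (current mode gets) moves as well.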

Total Reads vs Memory reads

A reader, November 03, 2003 - 2:47 pm UTC

Tom,

In the book "Performance Tuning Tips and Techniques(TUSC)", Rich indicates that the Total reads = "Consistent Gets" + "Db Block Gets" and the Memory reads = Total Reads - Disk Reads. My understanding is Total Reads = Logical I/O + Physical I/O where Logical I/O = Consistent Gets + Db Block Gets.
a. Can you please clarify me as to what he means by Memory reads.
b. Since Physical I/O is initiated by a Logical I/O, the memory reads are only those reads that do not include the physical I/o part of it.

Please clarify

Tom Kyte
November 04, 2003 - 7:44 am UTC

that is wrong.

A physical IO performed in order to read a block into the cache will be followed by a subsequent logical IO to get the block out of the buffer cache. We can observe this via something like this:

big_table@ORA920LAP> alter tablespace users offline;

Tablespace altered.

big_table@ORA920LAP> alter tablespace users online;

Tablespace altered.

big_table@ORA920LAP>
big_table@ORA920LAP> select blocks from user_tables where table_name = 'BIG_TABLE';

BLOCKS
----------
13804

big_table@ORA920LAP> set autotrace on
big_table@ORA920LAP> select /*+ FULL(big_table) */ count(*) from big_table;

COUNT(*)
----------
1000000


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1329 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=998400)




Statistics
----------------------------------------------------------
458 recursive calls
0 db block gets
13769 consistent gets
13647 physical reads
0 redo size
379 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
6 sorts (memory)
0 sorts (disk)
1 rows processed

big_table@ORA920LAP> set autotrace off


The alters invalidate the buffers in the cache, ensuring a PIO. As you can see, if we use the formula:

memory reads = total reads - disk reads

we'd get it wrong. memory reads = consistent gets + db block gets.


I don't know what he means by memory reads, sometimes information taken out of context is hard to interpret. I don't know if it is a "mistake" or just taken out of context.

There are other nuances to consider as well -- direct IO done to TEMP (reads and writes) will appear as PIO but not as LIO -- that is, you can have a query that does more PIO than LIO (rare, but it can happen) due to this.


I've been asked from time to time to put out "challenges". As I'm pressed for time this morning -- I'll try one.

In the following output -- we always process 10,000 or 20,000 rows. Only difference is a sort to disk. Soooooo tell me -- why the big differences in LIO's and PIO's between the two???

but it shows that the above formulas are not really accurate. some PIO's are followed by LIO's -- this is the "normal" case. some PIO's (direct reads) are not (hash joins, sorts may well result in this)

big_table@ORA920LAP> alter session set workarea_size_policy=manual;

Session altered.

big_table@ORA920LAP> alter session set sort_area_size = 65537;

Session altered.

big_table@ORA920LAP>
big_table@ORA920LAP> set autotrace traceonly
big_table@ORA920LAP> select * from big_table where rownum <= 10000;

10000 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1329 Card=10000 Bytes=940000)
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=998400 Bytes=93849600)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
794 consistent gets
134 physical reads
0 redo size
773075 bytes sent via SQL*Net to client
7825 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed

big_table@ORA920LAP> select * from big_table where rownum <= 10000 order by 1,2,3,4,5,6,7,8;

10000 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=82617 Card=10000 Bytes=940000)
1 0 SORT (ORDER BY) (Cost=82617 Card=10000 Bytes=940000)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=998400 Bytes=93849600)




Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
138 consistent gets
257 physical reads
0 redo size
773086 bytes sent via SQL*Net to client
7825 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
10000 rows processed

big_table@ORA920LAP> select * from big_table where rownum <= 20000;

20000 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1329 Card=20000 Bytes=1880000)
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=998400 Bytes=93849600)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1591 consistent gets
257 physical reads
0 redo size
1570644 bytes sent via SQL*Net to client
15162 bytes received via SQL*Net from client
1335 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
20000 rows processed

big_table@ORA920LAP> select * from big_table where rownum <= 20000 order by 1,2,3,4,5,6,7,8;

20000 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=82617 Card=20000 Bytes=1880000)
1 0 SORT (ORDER BY) (Cost=82617 Card=20000 Bytes=1880000)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=998400 Bytes=93849600)




Statistics
----------------------------------------------------------
0 recursive calls
3 db block gets
279 consistent gets
536 physical reads
0 redo size
1570607 bytes sent via SQL*Net to client
15162 bytes received via SQL*Net from client
1335 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
20000 rows processed

big_table@ORA920LAP> set autotrace off


Consistent Gets

A reader, November 03, 2003 - 4:24 pm UTC

Tom,

In one of the discussions for Consistent gets, you mention that
"it took 10k consistent gets for Y -- versus 7 for X -- both queries return the same data -- it is just that for the first fetch from Y we had to "undo" the 10,000 updates that happened between the OPEN of Y and the fetch."

You mention that we need to reduce the number of "consistent gets" in order to tune a query. In this example, there is no problem with our query because it just reads one record from the table. However, the "consistent gets" figure is huge because of the UPDATE and the mechanism Oracle uses to reconstruct a consistent image, so we might waste effort trying to determine why this query did such a huge number of consistent gets to return just 1 row. Given this, how do we differentiate between:
a. A really bad query with a large number of "consistent gets"
b. A single-row query that shows a large number of "consistent gets" just because of an UPDATE that happened between the OPEN and the FETCH (as in the example above)
c. And what other trace result might indicate that there is no problem with the query itself -- that it is just the UPDATE that happened in between -- and how can we reduce the consistent gets in that case?

Thanks



Tom Kyte
November 04, 2003 - 7:54 am UTC


this sort of shows why the outline I had in chapter 10 of Expert one on one Oracle is so important.

FIRST

a) tune in isolation. get that app really tuned in single user mode. here you are isolated from external events like this

THEN

b) tune in multi-user situations -- test to scale. then side effects from other users will become apparent (locking, blocking, things like this)

and lastly

c) tune the "system" after A and B are done.


In an OLTP environment -- for the query to be really affected by this consistent read mechanism means you are running a really really long query in a system with tons of updates. Normally the queries would be short and sweet -- the odds of having to undo 10,000 updates would be exceedingly small.

In a read mostly environment -- it won't happen of course.

So, it is more of a "mixed workload" environment issue and would only be suspected in the execution of long running queries against active tables (eg: your KNOWLEDGE of the characteristics of the system will be crucial in interpreting the information)

Colocated vs Disorganized

A reader, November 05, 2003 - 10:17 am UTC

Tom,

I read your book Expert One-on-One Oracle on the topic on indexes where you explain about how physical placement of data affects consistent gets. I have the following understanding and I have the following questions :

Understanding :

Non-SQL*Plus environment, processing 1 row at a time, data colocated :

Assume that we want to retrieve 100 rows, i.e. x between 1 and 100, and that the index and table blocks are not in the buffer cache. Oracle reads the index block, which constitutes 1 physical I/O (and in turn 1 logical I/O). Since each rowid in the index block points to the same data block, Oracle will read the first data block from disk and return the first row, which constitutes 1 physical I/O (in turn 1 logical I/O). It gets the next record from the index block, and since the data is colocated it finds the next record in the same table block as the first one, so this time it performs just 1 logical I/O. Continuing in this manner, Oracle will in the best case perform N+1 logical reads (1 index block read and N table block reads) for retrieving N colocated rows.

Questions :

1. Is my understanding correct?
2. In that example in your book, Oracle performs 2909 logical I/Os for getting 20000 rows in the colocated table. Can you explain how Oracle performed 2909 consistent reads?
3. The only difference between the colocated and disorganized cases is that Oracle may have to perform a lot of physical reads for the disorganized table, because the index entry may not point to a row in the same physical block. However, the colocated table will perform only logical I/O since the data are closer to each other. Is that correct?
4. Is the autotrace output obtained using SQL*Plus, which uses an arraysize of 15?

Tom Kyte
November 05, 2003 - 6:06 pm UTC

1) well, it'll get the index block to find the table block, and then get the index block to find the next table block, and so on.

ops$tkyte@ORA920LAP> set echo on
ops$tkyte@ORA920LAP> /*
DOC>drop table t;
DOC>create table t ( x int, y int );
DOC>create index t_idx on t(x);
DOC>insert into t select rownum , rownum from all_objects;
DOC>*/
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> alter session set sql_trace=true;

Session altered.

ops$tkyte@ORA920LAP> begin
  2          for x in ( select * from t where x < 100 )
  3          loop
  4                  null;
  5          end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920LAP> begin
  2          for x in ( select * from t where x < 200 )
  3          loop
  4                  null;
  5          end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

select * from t where x < 100

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      100      0.00       0.00          0        200          0          99
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      102      0.00       0.00          0        200          0          99

Rows     Row Source Operation
-------  ---------------------------------------------------
     99  TABLE ACCESS BY INDEX ROWID T (cr=200 r=0 w=0 time=1748 us)
     99   INDEX RANGE SCAN T_IDX (cr=101 r=0 w=0 time=984 us)(object id 41082)
********************************************************************************
select * from t where x < 200

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      200      0.00       0.03          0        400          0         199
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      202      0.00       0.03          0        400          0         199

Rows     Row Source Operation
-------  ---------------------------------------------------
    199  TABLE ACCESS BY INDEX ROWID T (cr=400 r=0 w=0 time=29098 us)
    199   INDEX RANGE SCAN T_IDX (cr=201 r=0 w=0 time=27490 us)(object id 41082)


so, 100 rows -- 200 consistent gets.  200 rows -- 400 consistent gets.  2 consistent gets per row....


2) well -- it is a matter of the number of rows per block, the array size and such.  It just "was" in this case.  Do that test case and play with the arraysize -- set it up and watch the consistent gets go down; set it down and watch them go up (a quick sketch follows below).

3) no, no it isn't.  you need to look closer at the example.  look at the consistent gets between colocated and disorganized.  It is all about the LOGICAL IO, even more so than physical.  they may well perform the same amount of physical io.

4) yes...
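
For point 2, a quick sketch of that arraysize experiment (same table T as above -- the exact numbers will vary with your block size and data):

set autotrace traceonly statistics

set arraysize 2
select * from t where x < 100;   -- consistent gets go UP

set arraysize 100
select * from t where x < 100;   -- and come back DOWN

set autotrace off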


 

N + 1 gets

A reader, November 06, 2003 - 7:56 am UTC

Tom,

Then could you pls. explain under what situations we would get a best case of N + 1 gets to retrieve N rows. Or is it that the best case will always be 2 * N?

Help

A reader, November 06, 2003 - 8:22 am UTC

Tom,

If Oracle is anyway going to do 2 consistent gets per row even if the data is colocated, then can you explain what difference it makes whether the data is colocated or disorganized? The only difference if the data is colocated is that the table block will already be in the buffer cache. But if the data is disorganized, the index entry may point to a table block not in the buffer cache, and so Oracle will perform a physical I/O and then a logical I/O (so an extra physical I/O). If that is the case, why is there so much difference in consistent gets between colocated and disorganized data? The difference should only be in the physical reads and not in "consistent gets" (since a consistent get is performed in both cases to access the block from the buffer cache). I guess I am still not clearly understanding what consistent gets means. Pls explain.
Does consistent gets increase each time the same block in the buffer cache is visited again, or only when a new block is brought into the buffer cache?

Consistent Gets - Misleading

A reader, November 06, 2003 - 1:35 pm UTC

Tom,

I have a table called LOT_LEVEL_POSITION in our development environment and it has a non-unique index LOT1 on 3 columns
(POSITION_ID,SECURITY_ALIAS,LOT_NUMBER). There are around 63191 rows in that table in our development database and around 53214898 on our production m/c. I ran the following 2 queries on the development database and here are the autotrace output.

select position_id,security_alias,lot_number,orig_lot_number,update_date,update_source
from holdingdbo.lot_level_position llp
where 1 < (select count(*)
from holdingdbo.lot_level_position i
where i.position_id = llp.position_id
and i.security_alias = llp.security_alias
and i.lot_number = llp.lot_number
)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
116325 consistent gets
36 physical reads
0 redo size
377716 bytes sent via SQL*Net to client
22640 bytes received via SQL*Net from client
607 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
9080 rows processed

select position_id,security_alias,lot_number,orig_lot_number,update_date,update_source
from holdingdbo.lot_level_position llp
where (position_id,security_alias,lot_number) in (
select position_id,security_alias,lot_number
from holdingdbo.lot_level_position
group by position_id,security_alias,lot_number
having count(*) > 1
)

Statistics
----------------------------------------------------------
0 recursive calls
6 db block gets
3064 consistent gets
3195 physical reads
0 redo size
377716 bytes sent via SQL*Net to client
22640 bytes received via SQL*Net from client
607 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
9080 rows processed

Based on the above results, I extrapolated the "consistent gets" (which was a guess on my part) to my production database volume. Surprisingly, the first query, which showed more consistent gets on my development database, ran very quickly on the production database, while the second query, which showed fewer consistent gets, ran more slowly and took almost 00:59:3541.73 to complete.

1. Is my extrapolation approach correct? If not, how can I simulate this in my development environment? I want to mimic my production volumes.
2. You insist on reducing logical I/O, so is "consistent gets" alone reliable, or are there other factors to be considered (such as disk reads, db block gets, sorts etc.)?
3. In my second query, why are my physical reads greater than my logical reads (3195 > 3064 + 6)?
4. Why was my first query faster on the production database even though it showed a large number of "consistent gets" on my development database, based on which I did my extrapolation?
5. What other tuning ratios do I need to use, and are there any other tuning tricks/tips/figures I need to look for in my trace?
6. Are cost figures reliable -- can I assume queries with lower costs will be faster, and is there any formula I can use to see why Oracle arrived at "x number of consistent gets"?

Tom Kyte
November 06, 2003 - 5:29 pm UTC

queries scale in many different ways.

there are queries that have the same number of CG's (consistent gets) regardless of the volume of data (eg: select * from t where pk = :x)

there are queries whose CG's scale with the volume of data. So, if it takes some number of CG's for 100,000 rows, it'll take about twice as many for 200,000 rows

There are other queries that will use exp(CG), meaning it'll get worse and worse and worse in a more-than-linear, even exponential fashion as the volume increases.



N+1 best case vs 2*N

A reader, November 06, 2003 - 10:05 pm UTC

Tom,

In one of your discussions, you mentioned that the best case for consistent reads is N+1 and the worst case is 2*N. Also, in the earlier example, you mentioned that Oracle will perform 2*(number of rows) "consistent gets" when the rows are accessed using the index-then-table approach. Can you let me know when the N+1 consistent gets (the best case) occur?

Tom Kyte
November 07, 2003 - 8:31 am UTC

i didn't mention that.

someone else did.

i showed it to be "not correct"

Consistent Gets

A reader, November 06, 2003 - 10:08 pm UTC

Tom,

You mentioned that the number of CGs is data volume dependent, i.e. queries may be N*CGs or exponential. So how do we rely on CGs, and how do we write queries that are data-volume independent?

Tom Kyte
November 07, 2003 - 8:50 am UTC

you cannot -- it is the nature of the query itself!!  it is a characteristic of the underlying data and how it works.

Ok, lets say you have a table:

create table emp ( empno int primary key, ename varchar2(25), deptno int );


very simple.  The rules are:

a) there are never more than 100 depts in any company, EVER.  data is somewhat distributed equally over deptnos

b) empno is unique

c) we know NOTHING about ename really


Well, the query:

select * from emp where empno = :x

will scale with pretty much the SAME number of CG's regardless of the number of rows.  There will be 2 to 4 LIO's against the index followed by a table access by index rowid.  You'll have between 3 and 5 LIO's for that query regardless of whether you have 10,000 or 10,000,000,000 rows.


The query:

select * from emp where deptno = :x

will scale somewhat linearly with respect to the volume of data.  In general, the number of rows returned will be total_number_of_rows_in_table/100.  If you have 10,000 rows, you would expect 100 rows returned.  10,000,000,000 rows -- 100,000,000 rows.  the number of CG's will go up in a somewhat predictable fashion as the number of rows increases.


The query

select * from emp where ename = :x

is a sort of an "unpredictable" query.  Say :x = 'SMITH'.  Well, that might return 1 row, all rows, most rows, some rows, many rows from the table (smith is a very popular name).  It could be that SMITH is not a popular name in small companies located in California.  It could be that SMITH is the *most* popular name for small companies located in Texas.  So, this query -- even with small tables -- could return wildly different CG's in different implementations, due solely to data skew.

does that make sense?

Now, consider 

select ..
  from t1, t2, t3, t4, t5 ....
 where ......


and you can see, it gets harder to generalize.  IF YOU KNOW your data and what it means, you can generalize that.  


I guess one way to do this would be

a) set up your tables
b) fill them with a bit of data
c) analyze & explain plan your queries
d) fill them with a bit more data
e) goto c/d a couple of times


then, you can look at the row sources to see how the queries are going to scale.  for example, let's run the above scenario using "emp"


ops$tkyte@ORA920PC> create table emp ( empno int primary key, ename varchar2(25), deptno int );
 
Table created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create index emp_ename_idx on emp(ename);
 
Index created.
 
ops$tkyte@ORA920PC> create index emp_deptno_idx on emp(deptno);
 
Index created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into emp
  2  select object_id, owner, mod(rownum,100)
  3    from (select * from all_objects order by reverse(object_name) )
  4   where rownum <= 100;
 
100 rows created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> variable x number
ops$tkyte@ORA920PC> variable y varchar2(25)
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> analyze table emp compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA920PC> set autotrace traceonly explain
ops$tkyte@ORA920PC> select * from emp where empno = :x;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=10)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=10)
   2    1     INDEX (UNIQUE SCAN) OF 'SYS_C007176' (UNIQUE)
 
 
 
ops$tkyte@ORA920PC> select * from emp where deptno = :x;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=10)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=10)
   2    1     INDEX (RANGE SCAN) OF 'EMP_DEPTNO_IDX' (NON-UNIQUE) (Cost=1 Card=1)
 
 
 
ops$tkyte@ORA920PC> select * from emp where ename = :y;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=20 Bytes=200)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=20 Bytes=200)
   2    1     INDEX (RANGE SCAN) OF 'EMP_ENAME_IDX' (NON-UNIQUE) (Cost=1 Card=20)
 
 
 
ops$tkyte@ORA920PC> set autotrace off
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into emp
  2  select object_id, owner, mod(rownum,100)
  3    from (select *
  4            from all_objects
  5                   where not exists (select null
  6                                       from emp
  7                                                          where empno=object_id)
  8           order by reverse(object_name) )
  9   where rownum <= 900;
 
900 rows created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> analyze table emp compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA920PC> set autotrace traceonly explain
ops$tkyte@ORA920PC> select * from emp where empno = :x;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=10)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=10)
   2    1     INDEX (UNIQUE SCAN) OF 'SYS_C007176' (UNIQUE) (Cost=1 Card=1000)
 
 
 
ops$tkyte@ORA920PC> select * from emp where deptno = :x;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=10 Bytes=100)
   1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=10 Bytes=100)
 
 
 
ops$tkyte@ORA920PC> select * from emp where ename = :y;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=59 Bytes=590)
   1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=59 Bytes=590)
 
 
 
ops$tkyte@ORA920PC> set autotrace off
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into emp
  2  select object_id, owner, mod(rownum,100)
  3    from (select *
  4            from all_objects
  5                   where not exists (select null
  6                                       from emp
  7                                                          where empno=object_id)
  8           order by reverse(object_name) )
  9   where rownum <= 9000;
 
9000 rows created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> analyze table emp compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA920PC> set autotrace traceonly explain
ops$tkyte@ORA920PC> select * from emp where empno = :x;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=11)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=11)
   2    1     INDEX (UNIQUE SCAN) OF 'SYS_C007176' (UNIQUE) (Cost=1 Card=10000)
 
 
 
ops$tkyte@ORA920PC> select * from emp where deptno = :x;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=100 Bytes=1100)
   1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=4 Card=100 Bytes=1100)
 
 
 
ops$tkyte@ORA920PC> select * from emp where ename = :y;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=294 Bytes=3234)
   1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=4 Card=294 Bytes=3234)
 
 
 
ops$tkyte@ORA920PC> set autotrace off


so, we see:


    rows  =>    100         1,000            10,000
empno = :x        1             1                 1
deptno =:x        1            10               100
ename = :y       20            59               294



now, you see how each of those will "scale".


It is the nature of the data itself.
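
To see the ACTUAL consistent gets for those queries, not just the estimates, a small sketch (the bind value is arbitrary): bind a value and use "traceonly statistics" instead of "traceonly explain" so the queries really execute:

variable x number
exec :x := 42

set autotrace traceonly statistics
select * from emp where deptno = :x;
set autotrace off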

 

Card in Execution Plan

A reader, November 07, 2003 - 2:22 pm UTC

Tom,

So does that mean the Card= in the Execution Plan section is the same as "consistent gets"? If not, what does "Card=" in the Execution Plan mean, and do the figures in your output like 1,10,59,20,100,294 show the number of "consistent gets"? If not, what do these figures mean?


    rows  =>    100         1,000            10,000
empno = :x        1             1                 1
deptno =:x        1            10               100
ename = :y       20            59               294

Pl. explain



Tom Kyte
November 07, 2003 - 5:20 pm UTC

No, card = cardinality = estimated number of rows.

consistent gets and card may be miles and miles apart.
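
A rough way to see the two side by side, as a sketch: plain "set autotrace on" prints both the plan (with its Card= estimate) and the run-time statistics (with the actual consistent gets):

set autotrace on
select * from big_table where rownum <= 10000;
set autotrace off

-- Card=           : the optimizer's row-count guess, made BEFORE execution
-- consistent gets : actual block visits, counted DURING execution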

multiblock read making difference?

Sudhir, November 22, 2003 - 5:58 pm UTC

Tom,

In your test case when you used example:

"big_table@ORA920LAP> select * from big_table where rownum <= 10000;

10000 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1329 Card=10000 Bytes=940000)
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=998400 Bytes=93849600)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
794 consistent gets
134 physical reads
0 redo size
773075 bytes sent via SQL*Net to client
7825 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
10000 rows processed

big_table@ORA920LAP> select * from big_table where rownum <= 10000 order by 1,2,3,4,5,6,7,8;

10000 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=82617 Card=10000 Bytes=940000)
1 0 SORT (ORDER BY) (Cost=82617 Card=10000 Bytes=940000)
2 1 COUNT (STOPKEY)
3 2 TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=1329 Card=998400 Bytes=93849600)




Statistics
----------------------------------------------------------
0 recursive calls
2 db block gets
138 consistent gets
257 physical reads
0 redo size
773086 bytes sent via SQL*Net to client
7825 bytes received via SQL*Net from client
668 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
10000 rows processed
"

Somehow I feel that having the order by is making some difference in the way db_file_multiblock_read_count is being used in one and not in the other. Is that why? I didn't even think 'order by' and 'rownum <= 10000' were a valid combination (for most cases).

As always, thanks for creative solutions.

Tom Kyte
November 22, 2003 - 6:36 pm UTC

multi-block-read count affects:

a) the cost of the full scan as seen by the CBO
b) the performance of the full scan if we have to do physical IO


it will not in this case affect consistent gets....

All I wanted to do was show

1) getting 10,000 rows
2) getting 10,000 rows and sorting them


so, rownum was OK for this demo.

Tom, I am missing the point, please help!

Sudhir, November 22, 2003 - 7:59 pm UTC

Without sort

794 consistent gets

with sort

138 consistent gets

what's going on? That's why I was thinking of a multiblock issue.

If you can clear the confusion, Thanks in advance.

Tom Kyte
November 22, 2003 - 10:08 pm UTC

sort = read all - sort it, put it into private memory, retrieve it

no sort = read it bit by bit by bit as you hit it.


the sort was like a "bulk fetch with arraysize = ALL ROWS at once", less consistent gets overall since we "consistently got them" all at once - instead of row by row by row.

(it was part of the 'challenge')

amazing!

Sudhir, November 22, 2003 - 10:15 pm UTC

Tom,

In this case, is it likely that the operation with the sort may have run faster than the one without the sort? And do you think there is something like a pattern here that may be useful in general?

Again, thanks for the solution.



Tom Kyte
November 23, 2003 - 8:23 am UTC

No, do not think about using a "sort" as a "pattern"

Use order by when you need sorted data.

It used more memory in the server process, it did more work to get the first row than the last, you waited longer to start seeing data, and it perhaps used more CPU.





Thank you! EOM

Sudhir, November 23, 2003 - 2:25 pm UTC


db block gets+consistent gets > Total Number of Blocks for a table.

Sami, January 08, 2004 - 5:53 pm UTC

CREATE TABLE MY_LOOKUP1
(
  LOOKUPID      NUMBER(8)    NOT NULL,
  SERVICEID       NUMBER(8)    NOT NULL,
  LOOKUPABBREV  VARCHAR2(20) NOT NULL,
  LOOKUPDESC    VARCHAR2(40) NOT NULL,
  LANGUAGEID    NUMBER(8)    DEFAULT 30    NOT NULL,
  ACTIVEFLAG    VARCHAR2(1)  DEFAULT 'Y'   NOT NULL,
  CREATEDATE    DATE         DEFAULT NULL  NOT NULL,
  UPDATEDATE    DATE         DEFAULT NULL  NOT NULL
)
TABLESPACE MY_DATA_TS
PCTUSED    40
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          2K
            NEXT             2K
            MINEXTENTS       1
            MAXEXTENTS       505
            PCTINCREASE      50
            FREELISTS        1
            FREELIST GROUPS  1
            BUFFER_POOL      DEFAULT
           )
LOGGING 
NOCACHE
NOPARALLEL;

INSERT INTO MY_LOOKUP1 select * from MY_LOOKUP;
5 rows inserted.


ANALYZE TABLE MY_LOOKUP1 COMPUTE STATISTICS FOR TABLE FOR ALL INDEXES FOR ALL INDEXED COLUMNS;
Table Analyzed.


SQL> select blocks,empty_blocks,num_rows,avg_row_len from  dba_tables where TABLE_NAME='MY_LOOKUP1';
 
    BLOCKS EMPTY_BLOCKS   NUM_ROWS AVG_ROW_LEN
---------- ------------ ---------- -----------
         1            0          5          50


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=5 Bytes=250)
   1    0   TABLE ACCESS (FULL) OF 'MY_LOOKUP1' (Cost=1 Card=5 Bytes=250)
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          4  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
       1085  bytes sent via SQL*Net to client
        431  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed


Questions:
##########

TABLE STATISTICS from DBA_TABLES are below:
===========================================
NUM_ROWS     = 5
BLOCKS       = 1
EMPTY_BLOCKS = 0
AVG_ROW_LEN  = 50

(Q1) BLOCKS=1 (this segment has only one block).
Does that mean I have all my records in ONE block?

(Q2) If (Q1) is correct, why do the statistics show
    4  db block gets
    2  consistent gets
I have everything in ONE block, so why am I getting 6 (4+2) block gets?

(Q3) Bytes=250 in the Explain Plan, which is AVG_ROW_LEN * NUM_ROWS. Correct?


Tom, kindly enlighten me on the relationship between the DBA_TABLES statistics numbers and the AutoTrace statistics.


Thanks,
Sami
 

Tom Kyte
January 08, 2004 - 8:30 pm UTC

for a full scan, we read the segment header to get the extent map. you are seeing that extra read done to figure out what to read for you.
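
If you want to see that segment header for yourself, a small sketch (assumes access to DBA_SEGMENTS):

select header_file, header_block
  from dba_segments
 where owner = user
   and segment_name = 'MY_LOOKUP1';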

Consistent Gets are still very puzzling

Ik, January 09, 2004 - 5:07 am UTC

Tom,

I always used to think that 'Consistent gets' happens only when Oracle tries to provide read consistency. That is when you have uncommitted transactions.

(1) I am confused here between current mode gets (buffer reads) and consistent gets. In the original question, why is there the need for read consistency? Every read should be in current mode, right?

(2)
I did the same test as the person who posted the original question here. I got the same results.

Now, i dropped the table (tes) and now used 2 sessions. Inserted those 10000 records in one session (NO COMMIT) and then queried from another SQLPlus session. Here is what i got.

select * from tes
/
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
10040 consistent gets
0 physical reads
832 redo size
90 bytes sent via SQL*Net to client
242 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed

Consistent gets have shot up. Is that because transaction was left uncommitted by the other session? Can you please explain why the numbers are high.

(3) Unrelated to the above 2 questions - I was trying to artificially shoot up the consistent reads by forcing an index access. It seems that the optimizer won't budge from its stand. It goes for FTS always. Why so?

create unique index idx_test on tes(a)

select /*+ INDEX(tes idx_test)*/
*
from
tes
Operation            Object Name   Rows   Bytes   Cost  TQ  In/Out  PStart  PStop

SELECT STATEMENT                     82             2
  TABLE ACCESS FULL  TES             82     1 K     2

Thank you very much for the valuable service you are doing to us Oracle developers.

Thank You.

Tom Kyte
January 09, 2004 - 8:36 am UTC

we use consistent gets to get a block consistently (as of a point in time) it may or may NOT require a read aside to the RBS but the block is gotten "consistently" regardless.

1) no, we ALWAYS get the blocks in consistent read mode.  You cannot tell until you get to the block if the data is "good enough for you".  You always process in CR mode therefore.

2) yes, it is read consistency there.  Your query read each block and had to roll back all of the inserts on that block in order to see that "hey, this block is actually EMPTY as far as I'm concerned"

3) the index is probably on a NULLABLE column.

ops$tkyte@ORA920PC> create table t ( x int, y int );
 
Table created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create index test_idx on t(x);
 
Index created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> set autotrace traceonly explain
ops$tkyte@ORA920PC> select /*+ index( t test_idx ) */ * from t;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=82 Bytes=2132)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=82 Bytes=2132)
 
 
 
ops$tkyte@ORA920PC> alter table t modify x NOT NULL;
 
Table altered.
 
ops$tkyte@ORA920PC> select /*+ index( t test_idx ) */ * from t;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=826 Card=82 Bytes=2132)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=826 Card=82 Bytes=2132)
   2    1     INDEX (FULL SCAN) OF 'TEST_IDX' (NON-UNIQUE) (Cost=26 Card=82)
 
 
 
ops$tkyte@ORA920PC> set autotrace off


<b>since entirely null keys are not put into b*tree indexes -- we cannot use the index if X allowed nulls (you would not get the right answer since the index does not contain all of the rows in the table)</b>

 

Still some confusion

A reader, January 20, 2004 - 11:45 am UTC

Tom,

I have two environments integration (SPI1) and Benchmark (BMK1) and I have the following query running on the same environments.

SELECT /*+ ORDERED USE_NL(pcd,p) INDEX(pd) */
p.position_id, pd.position_detail_id, pd.Alt_Market_Value,
pd.Alt_Market_Value_Income, pd.Alt_Book_Value, pcd.Alt_GAAP_Cost_Value,
pd.Base_FX_Unrealized_GL, pd.Alt_Base_Unrealized_GL, pd.Alt_Base_FX_Unrealized_GL,
pd.Alt_Accrued_Income, pd.Local_to_sec_basis_fx_rate, pd.Income_Receivable_Local,
pd.Income_Receivable_Base, pd.Alt_Income_Receivable
FROM holdingdbo.position_detail pd,
holdingdbo.position_cost_detail pcd,
holdingdbo.position p
WHERE pd.update_date >= '17-JAN-04'
AND p.position_id = pd.position_id
AND p.src_intfc_inst = 17
AND pd.position_detail_id = pcd.position_detail_id

I have the following statistics information :

BMK1
====
Table Index Blocks Rows Clus.Factor Col
T1 I1 15848 876751 225197 Col1
I2 15848 876751 832570 COl2
I2 15848 876751 832570 Col3
I2 15848 876751 832570 Col4
I3 15848 876751 242262 Col5

T2 I11 1086082 92017221 1809257 Col1
T2 I12 1086082 92017221 25658678 Col2

T3 I21 1670338 60198216 7946507 Col1
I22 1670338 60198216 59152225 Col2
I23 1670338 60198216 30525320 Col3
I24 1670338 60198216 19431172 Col4
I24 1670338 60198216 19431172 Col2
I24 1670338 60198216 19431172 Col5
I24 1670338 60198216 19431172 Col6
I24 1670338 60198216 19431172 Col7

SPI1
====

Table Index Blocks Rows Clus Factor Col
T1 I1 196587 557024 372737 Col1
I2 196587 557024 547845 Col2
I2 196587 557024 547845 Col3
I2 196587 557024 547845 Col4
I3 196587 557024 410857 Col5

T2 I11 904315 38124980 3716600 Col1
I12 904315 38124980 32805420 Col2

T3 I21 2247043 39646070 11968130 Col1
I22 2247043 39646070 38879830 Col2
I23 2247043 39646070 33992900 Col3
I24 2247043 39646070 26692950 Col4
I24 2247043 39646070 26692950 Col2
I24 2247043 39646070 26692950 Col5
I24 2247043 39646070 26692950 Col6
I24 2247043 39646070 26692950 Col7

The above query executed in the above two environments produced two different timings. My questions are :

1. Are hints always good to put in a query?
2. If the execution plans are the same in the two environments, what are the other factors to look for (besides the num rows, blocks, and clustering factor as above) that contribute to this time difference? i.e. how do I find the exact cause of the time difference?
3. How do we check whether all the tables/indexes etc. are analyzed?
4. If tools like AUTOTRACE, TKPROF, SQL_TRACE etc. are used in both environments, what would be the sequence of steps (like a checklist for myself) that could help me find the exact cause of the problem?
5. I read the chapters on this in your books, but when it comes to practical tuning cases like the above, I am not sure what the best possible way to tackle such problems would be.
6. How do I read the consistent gets/physical reads etc. in TKPROF/AUTOTRACE output?

A detailed explanation on this would be of great help

Thanks

Tom Kyte
January 20, 2004 - 2:07 pm UTC

1) no
2) wait events -- use the 10046 level 12 trace event (search for 10046 on this site if that doesn't mean anything to you) and see what it is waiting for (a sketch of setting that event follows below).

and remember, two machines differ in thousands of ways.

3) queries against the data dictionary, last analyzed is in there as well as sample sizes

4) since I don't know what the "problem" is, or even if there is one, a checklist won't help. it is usually "obvious". do you have a 9i tkprof with wait events that one could look at?

5) show us a tkprof - it should be somewhat "obvious"

6) you do not infer from tkprof -- it is spelled out (there is a physical reads column, there are current and query mode (logical) columns). they are told to you.

from autotrace, again, they are spelled out -- they are right there. consistent gets, db block gets (logical ios) and physical reads. no inferring, just reading.
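
For point 2 above, a minimal sketch of turning the 10046 level 12 trace on and off for a session (the trace file name is hypothetical):

alter session set events '10046 trace name context forever, level 12';

-- run the problem query here ...

alter session set events '10046 trace name context off';

-- then format the trace file from user_dump_dest, e.g.
-- $ tkprof ora920_ora_1234.trc report.txt sys=no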


Pls explain

A reader, January 20, 2004 - 4:54 pm UTC

Tom,

In your response above, you mentioned that the two machines could differ in thousands of ways. I would like to get some information from you on a couple of the ways they could differ. Please provide a list of the most significant ones (I understand the full list is huge, but it would be helpful if you could list a couple that I could easily check).

Tom Kyte
January 20, 2004 - 10:00 pm UTC

nope, as i said -- show us the tkprof and it should be "obvious"

why network packets increases?

A reader, February 07, 2004 - 3:30 am UTC

Hi

Why does the amount of data transferred go up when you set a big arraysize? Has it got something to do with the tcp/ip packet size?

Tom Kyte
February 07, 2004 - 2:44 pm UTC

goes up and down.  40k is "noise" as far as I'm concerned.  test was really skinny, not a good test.

we'll just attribute it to gremlins this time....


bigger, fatter table:

ops$tkyte@ORA9IR2> set autotrace traceonly statistics;
ops$tkyte@ORA9IR2> select * from all_objects;
 
29512 rows selected.
 
 
Statistics
----------------------------------------------------------
    2097689  bytes sent via SQL*Net to client
 
ops$tkyte@ORA9IR2> set arraysize 100
ops$tkyte@ORA9IR2> /
 
29512 rows selected.
 
 
Statistics
----------------------------------------------------------
    1895377  bytes sent via SQL*Net to client
 
ops$tkyte@ORA9IR2> set arraysize 1000
ops$tkyte@ORA9IR2> /
 
29512 rows selected.
 
 
Statistics
----------------------------------------------------------
    1863191  bytes sent via SQL*Net to client
 
ops$tkyte@ORA9IR2> set arraysize 5000
ops$tkyte@ORA9IR2> /
 
29512 rows selected.
 
 
Statistics
----------------------------------------------------------
    1860287  bytes sent via SQL*Net to client
 

Alison Tonner, February 19, 2004 - 7:42 am UTC

Hi Tom, thanks for all the information provided in the rest of this thread.

I was wondering if you could clarify something.

When I change the optimizer mode from first_rows_10 to first_rows_100, the cost of the plan goes up, even though the consistent gets and physical reads go down.

I do see that the cardinality gets higher, but I am still not sure why the cost goes up instead of down.

SYSTEM-ORCL->alter session set optimizer_mode = first_rows_10;

Session altered.

Elapsed: 00:00:00.00
SYSTEM-ORCL->select * from customers c, sales s
2 where c.cust_id = s.cust_id
3 and cust_last_name = 'Smith'
4 and s.time_id <= '31-DEC-98';

359 rows selected.

Elapsed: 00:06:01.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=22 Card=10 Bytes=2890)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMERS' (Cost=22 Card=1 Bytes=130)
2 1 NESTED LOOPS (Cost=22 Card=10 Bytes=2890)
3 2 PARTITION RANGE (ITERATOR)
4 3 TABLE ACCESS (FULL) OF 'SALES' (Cost=3 Card=201 Bytes=31959)
5 2 BITMAP CONVERSION (TO ROWIDS)
6 5 BITMAP AND
7 6 BITMAP CONVERSION (FROM ROWIDS)
8 7 INDEX (RANGE SCAN) OF 'CUSTOMERS_PK' (UNIQUE)
9 6 BITMAP CONVERSION (FROM ROWIDS)
10 9 INDEX (RANGE SCAN) OF 'CUST_CUST_LAST_NAME_IDX' (NON-UNIQUE) (Cost=1 Card=641)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
877531 consistent gets
2731 physical reads
0 redo size
21137 bytes sent via SQL*Net to client
756 bytes received via SQL*Net from client
25 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
359 rows processed

SYSTEM-ORCL->rem Now with FIRST_ROWS_100
SYSTEM-ORCL->pause

SYSTEM-ORCL->alter session set optimizer_mode = first_rows_100;

Session altered.

Elapsed: 00:00:00.00
SYSTEM-ORCL->select * from customers c, sales s
2 where c.cust_id = s.cust_id
3 and cust_last_name = 'Smith'
4 and s.time_id <= '31-DEC-98';

359 rows selected.

Elapsed: 00:00:01.09

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=61 Card=100 Bytes=15900)
1 0 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'SALES' (Cost=61)
2 1 NESTED LOOPS (Cost=61 Card=100 Bytes=15900)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'CUSTOMERS' (Cost=5 Card=1 Bytes=130)
4 3 INDEX (RANGE SCAN) OF 'CUST_CUST_LAST_NAME_IDX' (NON-UNIQUE) (Cost=3 Card=1)
5 2 PARTITION RANGE (ITERATOR)
6 5 BITMAP CONVERSION (TO ROWIDS)
7 6 BITMAP AND
8 7 BITMAP MERGE
9 8 BITMAP INDEX (RANGE SCAN) OF 'SALES_TIME_BIX'
10 7 BITMAP INDEX (SINGLE VALUE) OF 'SALES_CUST_BIX'




Statistics
----------------------------------------------------------
14 recursive calls
0 db block gets
3191 consistent gets
497 physical reads
0 redo size
20130 bytes sent via SQL*Net to client
756 bytes received via SQL*Net from client
25 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
359 rows processed

Tom Kyte
February 19, 2004 - 11:19 am UTC

you cannot compare the cost of two different plans under two different settings of optimizer parameters.

doesn't make sense.


the optimizer parameters cause the costing model to "change", that is their goal. by changing them -- you changed the method by which the numbers are assigned. It would be expected that the costs are "different", the way these parameters work in general is by changing the costing model.

Now, your test is a little invalid. You asked "please optimize to get the first 10 rows ASAP" and then fetched all 359. You "lied" to the optimizer ;) Use this:

alter session set sql_trace=true;

declare
    cnt number := 0;
begin
    for x in ( select *
                 from customers c, sales s
                where c.cust_id = s.cust_id
                  and cust_last_name = 'Smith'
                  and s.time_id <= TO_DATE( '31-DEC-1998' ) ) /* pet peeve of mine */
    loop
        cnt := cnt + 1;
        exit when (cnt=10);
    end loop;
end;
/

and use tkprof to analyze the outcome with different optimizer settings.

Alison, February 19, 2004 - 11:32 am UTC

Thanks - that's all I needed to know

Just the idea that they are apples and oranges and can't be compared in that way makes sense.

Thanks as usual for your fast, concise reply.

Low LIO != Fast response??

Prasad, March 10, 2004 - 7:04 am UTC

Hi Tom,

The above discussion more than clearly explains what "consistent gets" is all about.
We consider it part of LIO, and you always suggest lowering LIOs. So I did a test case where I lowered the LIO (by increasing the arraysize 15...1000...4000), but as I lowered the LIO my execution (elapsed) time increased instead. How is that? Am I missing something?

here is the test i carried out.
----------------------------------------
create table t as select * from all_objects;

Table created.

exec show_space( 'T' )

Free Blocks.............................0
Total Blocks............................137
Total Bytes.............................1122304
Unused Blocks...........................25
Unused Bytes............................204800
Last Used Ext FileId....................61
Last Used Ext BlockId...................1677
Last Used Block.........................20


set timing on

select * from t;
8704 rows selected.

Elapsed: 00:00:18.82

Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
689 consistent gets
0 physical reads
0 redo size
780354 bytes sent via SQL*Net to client
47343 bytes received via SQL*Net from client
582 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8704 rows processed


set arraysize 1000

select * from t;
8704 rows selected.


Elapsed: 00:00:19.18

Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
124 consistent gets
99 physical reads
0 redo size
732878 bytes sent via SQL*Net to client
1011 bytes received via SQL*Net from client
10 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8704 rows processed


set arraysize 4000

select * from t;
8704 rows selected.

Elapsed: 00:00:24.57

Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
118 consistent gets
0 physical reads
0 redo size
732380 bytes sent via SQL*Net to client
525 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8704 rows processed

Tom Kyte
March 10, 2004 - 9:15 am UTC

I've also said that there are diminishing marginal returns to everything. I found an arraysize of about 100 is the "sweet spot" for array fetching in general.

see
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:12226527910608#15916220123225 <code>

yes, the goal is to reduce LIO's in general when tuning a query. array fetching is a tool but only to a point.

Perfect

Prasad, March 10, 2004 - 9:34 am UTC

set arraysize 100

select * from t;
8664 rows selected.

Elapsed: 00:00:12.74

Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
201 consistent gets
111 physical reads
0 redo size
735856 bytes sent via SQL*Net to client
7329 bytes received via SQL*Net from client
88 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8664 rows processed

Thanks
Prasad

Do analytic functions speed up a FTS?

Marcio, April 28, 2004 - 4:57 pm UTC

I noticed this when I had to get the total (count(*)) from a table to pass to dbms_application_info -- when I put an analytic function on the select (select x, y, z, a, count(*) over () from t instead of just select * from t), I got reduced LIOs.

Do you know why?

Consider test!
~~~~~~~~~~~~~~
drop table t;

create global temporary table t ( x number, y number, z number, a number);

insert into t
select rownum, rownum+1, rownum+2, null
from all_objects
union all
select rownum, rownum+1, rownum+2, null
from all_objects
/

alter session set sql_trace = true;

begin
    for x in ( select x, y, z, a from t )
    loop
        null;
    end loop;
end;
/

begin
    for x in ( select x, y, z, a, count(*) over () cnt from t )
    loop
        null;
    end loop;
end;
/

alter session set sql_trace = false;

------------------------------------------
Got this:

begin
for x in ( select x, y, z, a from t )
loop
null;
end loop;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.10       0.07          0          0          1           0
Execute      1      7.20       7.77          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      7.30       7.85          0          0          1           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 288
********************************************************************************

select x, y, z, a
from
t


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    49567      3.58       3.17          0      49573          0       49566
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    49569      3.58       3.18          0      49573          0       49566

and

begin
for x in ( select x, y, z, a, count(*) over () cnt from t )
loop
null;
end loop;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.01          0          0          0           0
Execute      1      6.69       7.50          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      6.71       7.51          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 288
********************************************************************************

select x, y, z, a, count(*) over () cnt
from
t


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    49567      3.68       3.22          0        135          0       49566
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    49569      3.68       3.22          0        135          0       49566


LIOs - from 49573 to 135
Even in SQL*Plus, look:

synth_base@MRP920> select x, y, z, a from t;

49566 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
626 consistent gets
0 physical reads
0 redo size
989711 bytes sent via SQL*Net to client
5948 bytes received via SQL*Net from client
497 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
49566 rows processed

synth_base@MRP920> select x, y, z, a, count(*) over () cnt from t;

49566 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 WINDOW (BUFFER)
2 1 TABLE ACCESS (FULL) OF 'T'




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
135 consistent gets
0 physical reads
0 redo size
989767 bytes sent via SQL*Net to client
5948 bytes received via SQL*Net from client
497 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
49566 rows processed

It seems to have the same behavior on permanent tables as well.
Thank you.

Tom Kyte
April 28, 2004 - 7:44 pm UTC

the sort caused them to be read all at once -- on the FIRST fetch.

do this:


begin
    for x in ( select x, y, z, a from t )
    loop
        exit;
    end loop;
end;
/

begin
    for x in ( select x, y, z, a, count(*) over () cnt from t )
    loop
        exit;
    end loop;
end;
/

instead and you'll see what I mean. reads from "temp" are not LIOs.

it is as if you set arraysize = infinity.

I would *not* recommend this approach (in fact, i'm on record time and time and time and time and time again saying 'counting the hits is "not a good thing" (tm)') at all, not even a tiny bit.



show me the link please!

Marcio, April 29, 2004 - 8:33 am UTC

Well, I'd like to show "n thru m" in dbms_application_info, where m is the count(*) from the table -- I can't use bulk collect.
Would you show me where (a link) you pointed this out?
Would you show me where (link) you point it out?

Tom Kyte
April 29, 2004 - 10:27 am UTC

er? what link -- not really sure what you mean (nor would I agree with "i can't use bulk collect")

Consistent gets + db_block_gets

reader, April 29, 2004 - 1:39 pm UTC

Tom,

YOUR HELP ....

create table test as select * from all_objects;
set autotrace on

Free Blocks.............................0
Total Blocks............................390
Total Bytes.............................3194880
Unused Blocks...........................76
Unused Bytes............................622592
Last Used Ext FileId....................162
Last Used Ext BlockId...................114342
Last Used Block.........................54

PL/SQL procedure successfully completed.
1. CASE A
select count(*) from test;


COUNT(*)
----------
27187

Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
317 consistent gets
313 physical reads
0 redo size
203 bytes sent via SQL*Net to client
90 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


2.CASE B
update test1 set owner='TEST' (Session A)

2 recursive calls
28089 db block gets
374 consistent gets
179 physical reads
6900888 redo size
534 bytes sent via SQL*Net to client
580 bytes received via SQL*Net from client
6 SQL*Net roundtrips to/from client
3 sorts (memory)
0 sorts (disk)
27187 rows processed

3.CASE C
select count(*) from test1;(session A)

0 recursive calls
4 db block gets
318 consistent gets
0 physical reads
0 redo size
204 bytes sent via SQL*Net to client
187 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

update test1 set owner='TEST' (from a different session) (session B)
NOT COMMITED YET

4.CASE D
select count(*) from test1;(session A)

0 recursive calls
4 db block gets
27875 consistent gets
50 physical reads
16328 redo size
203 bytes sent via SQL*Net to client
187 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


1. Why did db block gets go down from 12 to 4 from case A to case D?
I read on your site that it is the info about where we will get the blocks, e.g. segment headers.
But there is only one segment header. Does it have to read it multiple times, or is it something else?

2. Consistent gets goes up in case D -- which is fine, because it has to rebuild the information using undo.
But why is the redo size > 0 for a select?
Also, why is the consistent gets so high -- is it one block read per row of undo info?

3. Why is db block gets so high in case B?
What info is Oracle getting here, and from where?

your answers are highly appreciated !

THANKS .

Tom Kyte
April 29, 2004 - 2:42 pm UTC

1) it just does, don't know, it seems to happen due to the CTAS (if you create empty, insert append, it doesn't happen).

2) the redo size >0 is block cleanout (if you have expert one on one oracle, I go into this in detail)

the consistent gets are high just due to the RBS processing.

3) every row update will result in a CURRENT MODE read -- and db block gets are current mode reads. we do a consistent read to find the rows, and then a current mode read to update it.
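
You can watch that split with autotrace -- a rough sketch against any small test table T:

set autotrace traceonly statistics

update t set y = y;
-- "consistent gets" : blocks read in CR mode to FIND the rows
-- "db block gets"   : current mode reads done to actually UPDATE them

rollback;
set autotrace off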

select vs select for update

A reader, June 17, 2004 - 11:50 am UTC

I know that selects return you read consistent
data as of the point of time the select started.
Is that true for "select .. for update" as well?
Are there any differences between the two in terms
of read consistency?

Thank you!

Tom Kyte
June 17, 2004 - 2:38 pm UTC

it'll be just like an update (subject to restart). It will be read consistent, perhaps not with respect to the point in time at which it was submitted, but at some point in time between the time it started and the time it completed.

see
</code> http://asktom.oracle.com/~tkyte/wc.html <code>
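
One visible difference between the two in autotrace -- a sketch, assuming a small test table T you are allowed to lock:

set autotrace traceonly statistics

select * from t;              -- few or no db block gets, no redo
select * from t for update;   -- db block gets and redo appear, since
                              -- every row is locked just like an update

rollback;
set autotrace off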



thanx!

A reader, June 17, 2004 - 3:57 pm UTC


Buffer gets/exec and consistent gets

Arun Gupta, June 17, 2004 - 4:17 pm UTC

Tom,
Are the Buffer gets/exec in a statspack report for a query related to the consistent gets in autotrace statistics, or to the query column in a TKPROF report?
Thanks

Tom Kyte
June 17, 2004 - 6:33 pm UTC

in tkprof they are at the level of the application, that particular execution.

in a statspack they are system wide, over "every" possible tkprof....

so tkprof is a detailed view, for a particular session whereas statspack is over all sessions for that period of time.
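
To tie the two together, a sketch: the statspack "buffer gets/exec" comes from the same counters you can see per statement in V$SQL (the LIKE pattern is just illustrative):

select sql_text, buffer_gets, executions,
       round( buffer_gets / nullif( executions, 0 ) ) gets_per_exec
  from v$sql
 where sql_text like 'select * from big_table%';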

Buffer gets/exec and consistent gets

Arun Gupta, June 18, 2004 - 6:45 am UTC

Tom,
Actually, in a load test conducted on one of our applications, a particular query showed 9 million buffer gets/exec in statspack report. Tuning the query in isolation, I could not reproduce the scenario. I was using TKPROF and could only get 167,000 consistent gets. Reading through this thread, I understand that insert/update/delete against base tables can increase the consistent gets if they happen between opening of a cursor and when the results are fetched. This is very much possible in our load test which stresses the application quite heavily. We use ref cursors to get results. Can this be a possible explanation of what might have happened?

Thanks

Tom Kyte
June 18, 2004 - 10:51 am UTC

sure, if the table was really "hot" and you were updating lots of indexed columns and the query used lots and lots of indexes -- and/or the query fetched a row, worked on it a while, and then fetched the next row.

sure. seems extreme, but yes.

Still not clear about indexes.

aru, July 15, 2004 - 1:29 am UTC

Hi Tom,

I am still not clear about the way we access the data blocks through indexes. The part which confuses me is that you have proved in this thread that the data block is accessed as you said:-
"Followup:  
1) well, it'll get the index block to find the table block, and then get the 
index block to find the next table block, and so on."

Now in this case the arraysize is immaterial, because only one data row is accessed at a time. That also goes to say that the minimum IO one can have while accessing rows THROUGH an index is (N * 2), where N is the number of rows accessed. If 100 rows are accessed, then the MINIMUM IO will be 200: 100 for the index block gets and 100 for the data block gets for the corresponding rowid's.

IF this is true, then there is something definitely missing from this thread that no one has understood or thought about.
 
Also, Tom, please explain why there is so much difference in the outputs here:
----------------------

SQL> select * from t where x < 100;(table created as per this thread):

99 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'COLO'
   2    1     INDEX (RANGE SCAN) OF 'COLO_X_IDX' (NON-UNIQUE)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         13  consistent gets
          0  physical reads
          0  redo size
       2626  bytes sent via SQL*Net to client
        865  bytes received via SQL*Net from client
          6  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         99  rows processed


And from tkprof it is :- 
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 45  
****************************************

select * 
from
 colo where x < 100 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      100      0.00       0.00          0        200          0          99
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      102      0.00       0.00          0        200          0          99

Rows     Row Source Operation
-------  ---------------------------------------------------
     99  TABLE ACCESS BY INDEX ROWID T (cr=200 r=0 w=0 time=1748 us)
     99   INDEX RANGE SCAN T_IDX (cr=101 r=0 w=0 time=984 us)(object id 41082)


It shows 200. A difference of 200 - 13 = 187 is too much for me to even begin to figure out what's going on here. HELP!!!!!!!!

Thanks YET again,
ARU.
 

Tom Kyte
July 15, 2004 - 12:18 pm UTC

using sqlplus -- we have array fetching going on. 

Lets say the table data was ordered with respect to the index (eg: index on x, table data is sorted by x).

now, we ask for x < 100.

we are using sqlplus -- arraysize defaults to 15.


we ask for the first row (we are going to get 15).  we read an index block, locate row one.  we get the table block for row 1 (and don't give it up).  we locate row 2 (we still have the index block we need) and get the table block for row 2 -- discover "we already have that block in our session -- cool, just use it" and so on.  we get the first 15 rows with 2 LIOs

suppose we did that row by row.  we would have 2 LIOS per row or 30 LIOs for the first 15.


That is how array fetching can not only improve wall clock response time (it takes less time to do 2 LIOs and transfer 15 rows in bulk than it does to do 30 LIOs and transfer row by row by row) but also scalability -- since the LIOs involve latching.


try this out:

ops$tkyte@ORA9IR2> drop table t;
Table dropped.
                                                                                
ops$tkyte@ORA9IR2> create table t ( x int primary key, y int );
Table created.
                                                                                
ops$tkyte@ORA9IR2> insert into t select rownum, rownum from all_objects where rownum <= 101;
101 rows created.
                                                                                
ops$tkyte@ORA9IR2> set autotrace traceonly statistics
ops$tkyte@ORA9IR2> set arraysize 2
ops$tkyte@ORA9IR2> select * from t where x < 100;
                                                                                
99 rows selected.
                                                                                
Statistics
----------------------------------------------------------
        101  consistent gets
                                                                                
ops$tkyte@ORA9IR2> set arraysize 10
         22  consistent gets
ops$tkyte@ORA9IR2> set arraysize 20
         12  consistent gets
ops$tkyte@ORA9IR2> set arraysize 50
          6  consistent gets
ops$tkyte@ORA9IR2> set arraysize 100
          4  consistent gets


the bottom was edited - I just reran the same query with different array sizes. 

Why redo size?

A reader, December 23, 2004 - 11:50 am UTC

Regarding the autotrace stats, why would a SELECT statement generate redo entries? See

no rows selected

Elapsed: 00:07:37.65

Statistics
----------------------------------------------------------
7 recursive calls
0 db block gets
1725310 consistent gets
132101 physical reads
247884 redo size
867 bytes sent via SQL*Net to client
456 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed

[I am using true temporary tablespaces, auto memory allocation using pga_aggregate_target]

Tom Kyte
December 23, 2004 - 12:44 pm UTC

"block cleanouts" (search for that) could cause it.  

and I see 7 recursive calls -- could be contributing factors as well.

ops$tkyte@ORA9IR2> set autotrace traceonly statistics;
ops$tkyte@ORA9IR2> select rownum from all_objects;
 
28713 rows selected.
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     134211  consistent gets
          0  physical reads
          0  redo size
     399714  bytes sent via SQL*Net to client
      21553  bytes received via SQL*Net from client
       1916  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      28713  rows processed
 
ops$tkyte@ORA9IR2> select s.nextval from all_objects;
 
28713 rows selected.
 
 
Statistics
----------------------------------------------------------
      20104  recursive calls
       4351  db block gets
     135647  consistent gets
          0  physical reads
    1214332  redo size
     409812  bytes sent via SQL*Net to client
      21553  bytes received via SQL*Net from client
       1916  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      28713  rows processed
 
ops$tkyte@ORA9IR2> set autotrace off


<b>there were in this case updates to SEQ$ executed as "autonomous transactions" that caused the redo</b>
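For the other cause mentioned above -- block cleanouts -- here is a minimal sketch. It assumes the update dirties more blocks than the commit cleanout will revisit (roughly 10% of the buffer cache), so some blocks get cleaned out, and redo generated, by the next reader; the table name is hypothetical:

create table cleanout_demo as select * from all_objects;

update cleanout_demo set object_name = lower(object_name);
commit;   -- commit cleanout revisits only a limited number of the
          -- modified blocks; the rest are left for later readers

set autotrace traceonly statistics
select count(*) from cleanout_demo;
          -- expect a non-zero "redo size" for this SELECT if any
          -- cleanouts were left over for it to perform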
 

Thanks

A reader, December 23, 2004 - 2:25 pm UTC

"there were in this case updates to SEQ$ executed as "autonomous transactions" that caused the redo"

Hm, so a select from a sequence causes an "autonomous transaction". Sequences have been around since at least Oracle 7.3, but autonomous transactions were "invented" in Oracle 8 and up! Guess Oracle always had them, but didn't "expose" them to the general public until v8?

Any other features you know of like this that are used internally but not available to us?

Thanks

Tom Kyte
December 23, 2004 - 3:04 pm UTC

autonomous transactions have been in the server forever and a day.

they were simply exposed to us in 8.1.5.

Want to guess how long "flashback query" has been around?


There are thousands, perhaps millions, of things "internal" that are not exposed.
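For reference, the exposed syntax (8.1.5 and up) is just a pragma -- a minimal sketch, with a hypothetical log table:

create or replace procedure log_msg( p_msg in varchar2 )
as
    pragma autonomous_transaction;
begin
    insert into log_table( msg ) values ( p_msg );
    commit;   -- commits only this autonomous transaction; the
              -- caller's transaction is untouched
end;
/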

How many blocks are from rollback segment?

Sami, April 28, 2005 - 8:56 pm UTC

Dear Tom,

From autotrace or tkprof, is there a way to identify how many blocks are retrieved from the "rollback segment" to get the consistent results?

How many blocks are retrieved from the rollback segment in the below example?

Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
319 consistent gets
46 physical reads
0 redo size
369 bytes sent via SQL*Net to client
425 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Tom Kyte
April 28, 2005 - 9:00 pm UTC

no.
only thing you can say is somewhere between 0 and 318.
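A hedged aside: the session statistics do give an indicator. The statistic below counts undo *records* applied for consistent reads, not rollback segment blocks read, so it approximates the work rather than answering the question exactly. Sample it before and after the query and take the difference:

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name in ( 'consistent changes',
                   'data blocks consistent reads - undo records applied' );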

Statistics over a database link ...

VKOUL, May 06, 2005 - 12:48 pm UTC

Please go through the following ...

===================================================
DATABASE 1 START
===================================================
SQL> create table a as select * from all_objects;

Table created.

SQL> select count(*) FROM a;

  COUNT(*)
----------
    188088

SQL> create table b as select * from a;

Table created.

SQL> select count(*) from b;

  COUNT(*)
----------
    188088

SQL> 
SQL> set autotrace trace explain statistics
SQL> 
SQL> SELECT *
  2  FROM   a, b
  3  WHERE  a.object_name = b.object_name;

2832576 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   MERGE JOIN
   2    1     SORT (JOIN)
   3    2       TABLE ACCESS (FULL) OF 'B'
   4    1     SORT (JOIN)
   5    4       TABLE ACCESS (FULL) OF 'A'




Statistics
----------------------------------------------------------
          0  recursive calls
         84  db block gets
       5228  consistent gets
       9286  physical reads
          0  redo size
  100467544  bytes sent via SQL*Net to client
    2077721  bytes received via SQL*Net from client
     188840  SQL*Net roundtrips to/from client
          0  sorts (memory)
          2  sorts (disk)
    2832576  rows processed

SQL> 
SQL> set timing on
SQL> 
SQL> /

2832576 rows selected.

Elapsed: 00:01:34.04

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   MERGE JOIN
   2    1     SORT (JOIN)
   3    2       TABLE ACCESS (FULL) OF 'B'
   4    1     SORT (JOIN)
   5    4       TABLE ACCESS (FULL) OF 'A'




Statistics
----------------------------------------------------------
          0  recursive calls
         24  db block gets
       5228  consistent gets
       9286  physical reads
          0  redo size
  100884572  bytes sent via SQL*Net to client
    2077721  bytes received via SQL*Net from client
     188840  SQL*Net roundtrips to/from client
          0  sorts (memory)
          2  sorts (disk)
    2832576  rows processed

SQL> 
SQL> disc
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
SQL> 
===================================================
DATABASE 1 END
===================================================
===================================================
DATABASE 2 START
===================================================
SQL> set autotrace trace explain statistics
SQL> select * from a@dev9idw, b@dev9idw
  2* where  a.object_name = b.object_name;

2832576 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT (REMOTE) Optimizer=CHOOSE
   1    0   MERGE JOIN
   2    1     SORT (JOIN)
   3    2       TABLE ACCESS (FULL) OF 'B'                             DEV9IDW.
                                                                       ESD1.WA.
                                                                       LCL

   4    1     SORT (JOIN)
   5    4       TABLE ACCESS (FULL) OF 'A'                             DEV9IDW.
                                                                       ESD1.WA.
                                                                       LCL





Statistics
----------------------------------------------------------
          7  recursive calls
          1  db block gets
          4  consistent gets
          0  physical reads
        212  redo size
  132465052  bytes sent via SQL*Net to client
    2077717  bytes received via SQL*Net from client
     188840  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    2832576  rows processed

SQL> 
SQL> set timing on
SQL> 
SQL> /

2832576 rows selected.

Elapsed: 00:03:16.03

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT (REMOTE) Optimizer=CHOOSE
   1    0   MERGE JOIN
   2    1     SORT (JOIN)
   3    2       TABLE ACCESS (FULL) OF 'B'                             DEV9IDW.
                                                                       ESD1.WA.
                                                                       LCL

   4    1     SORT (JOIN)
   5    4       TABLE ACCESS (FULL) OF 'A'                             DEV9IDW.
                                                                       ESD1.WA.
                                                                       LCL





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
  132075604  bytes sent via SQL*Net to client
    2077717  bytes received via SQL*Net from client
     188840  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
    2832576  rows processed

SQL> 
SQL> disc
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
SQL> 
===================================================
DATABASE 2 END
===================================================

Q1 : Why is there such a large difference in consistent gets & physical reads between the two databases?

Q2 : Are the consistent gets & physical reads from the remote database or the local database?

Q3 : If all tables belong to the remote database, is there ever a possibility that the optimizer will change the plan to bring the data across the dblink and then join the tables locally? (this example does not show that happening)

Thanks
 

Tom Kyte
May 06, 2005 - 1:28 pm UTC

q1) one time the query is done here, the next time over there (remote). autotrace shows only your local statistics.

q2) local only

q3) yes, it can if it wants to (but you won't see the consistent gets, they happened remotely)
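A hedged aside on q3: you can influence where the join is performed with the DRIVING_SITE hint, e.g. to ask that the join be done at the remote site:

select /*+ driving_site(a) */ *
  from a@dev9idw a, b@dev9idw b
 where a.object_name = b.object_name;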

Is it correct???

Archana, May 27, 2005 - 4:43 am UTC

Hi Tom,

I was just going through this discussion and came across a formula for consistent gets. It says,

Consistent gets = number_of_rows/arraysize + total_blocks_read

Is it really true??
If yes, is it true for a particular version??

I tried this in 10g database but couldn't get the proper result.

Can you please help me in this regard?

Thanks in advance.

Tom Kyte
May 27, 2005 - 7:50 am UTC

for the query in question -- a simple full scan.

This example demonstrates the effect arraysize can have on logical IO. Oracle will get and reget a block over and over and over. If you are full scanning a table with 100 blocks and each block has 100 rows and you array fetch 10 at a time, you'll get each block 10 times -- and therefore perform 1000 logical IO's.

If you array fetch 100 at a time, you'll get each block once and perform 100 logical IO's.

But, if you select count(*) from that_table -- you'll do 100 logical IO's regardless of arraysize.

There is no hard and fast formula for every query.
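As a sanity check for that special case only (a simple full scan fetched to completion by the client), the numbers from the original question in this thread line up with the rule of thumb:

    consistent gets ~ blocks under the HWM + (rows returned / arraysize)
                    ~ 20 + 10000/15
                    ~ 687

which is in line with the 686 observed in the original question. A count(*), a sort, a group by -- anything that consumes all the rows before returning the first -- breaks the formula.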

reader

A reader, May 27, 2005 - 12:05 pm UTC

Does the server process read from disk into the buffer pool, and then read from the buffer pool to process the query? If so, are we counting the number of physical reads twice?

Tom Kyte
May 27, 2005 - 12:46 pm UTC

no, you aren't. there was one physical IO, from disk to buffer pool. there was one logical IO from the buffer pool.

reader

A reader, May 27, 2005 - 2:06 pm UTC

Therefore, if the server process wants to get a block (e.g. <file1;block5>) from disk, does it involve 2 IOs (1 physical IO and 1 logical IO)?

Tom Kyte
May 27, 2005 - 3:22 pm UTC

unless we are doing a direct read, yes. A physical IO to read into the buffer cache would be followed by "get it from the buffer cache"

what 's going on here?

A reader, June 06, 2005 - 11:57 am UTC

hi tom,

we've observed a somewhat "strange" effect: an *increased* number of "query" buffers reported by trace files after removing the where-clause from the definition of cursor "MyCursor" (see below) -- although that where-clause just pointed to the very first remaining entry to be read and deleted:

declare
    cursor MyCursor( p_nMyPKVal MyTab.MyPKCol%type ) is
        select /*+ FIRST_ROWS */ RowID RID, ...
          from MyTab
         where MyPKCol > p_nMyPKVal
         order by MyPKCol;
begin
    for i in 1 .. p_nTxnCount loop
        open MyCursor( i * p_nTxnSize - p_nTxnSize );
        for j in 1 .. p_nTxnSize loop
            fetch MyCursor into MyRecord;
            exit when MyCursor%NOTFOUND;
            delete MyTab where RowID = MyRecord.RID;
        end loop;
        commit;
        close MyCursor;
    end loop;
end;
/

furthermore, this increase grew with an increasing number of transactions (parameter p_nTxnCount above).

as an example: 1000 transactions with 50 rows for each

statistics for the version without where-clause:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.02          0          0          0           0
Execute   1000      0.12       0.09          0          0          0           0
Fetch    50000      6.42      34.34       7233     162832          0       50000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    51001      6.54      34.45       7233     162832          0       50000

statistics for the version with the additional where-clause:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   1000      0.14       0.11          0          0          0           0
Fetch    50000      6.13      32.25       7144     117607          0       50000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    51001      6.27      32.36       7144     117607          0       50000

statistics were computed for the table and its primary key (and only) index.

what could have happened here?
is it a matter of empty index blocks (or such)?

Tom Kyte
June 06, 2005 - 12:39 pm UTC

I see the decrease, no increase?

but without a predicate, one could assume a different plan and a different plan will likely have different IO's associated with it.


I don't like your code at all. You should just be issuing a DELETE, no procedural code at all, especially if you are concerned about reducing the amount of work performed

A reader, June 06, 2005 - 4:50 pm UTC

but this code snippet has just been minimized for description purposes (sorry: I shouldn't have mentioned that). so it's not about deleting some rows -- rather, some other processing takes place between the fetch and the delete ...

and hmmm, the number of "query" buffers did *increase* when there was *no where-clause* provided ...

unfortunately I have to admit that the trace files didn't include the execution plans (for whatever reason) -- although the plans for some other queries had been there. we switched sql_trace off after execution and before closing the sessions ...

here are at least the *predicted* execution plans for both queries:

select /*+ FIRST_ROWS */ RowID RID, MyPKCol, MyTxtCol from MyTab order by MyPKCol
---------------------------------------------------------------------------------
0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=7237 Card=50000 Bytes=26100000)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MYTAB' (Cost=7237 Card=50000 Bytes=26100000)
2    1     INDEX (FULL SCAN) OF 'XPKMYTAB' (UNIQUE) (Cost=94 Card=50000)


select /*+ FIRST_ROWS */ RowID RID, MyPKCol, MyTxtCol from MyTab where MyPKCol > :A order by MyPKCol
----------------------------------------------------------------------------------------------------
0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=67 Card=2500 Bytes=1305000)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MYTAB' (Cost=67 Card=2500 Bytes=1305000)
2    1     INDEX (RANGE SCAN) OF 'XPKMYTAB' (UNIQUE) (Cost=2 Card=450)

there were lower costs calculated for the latter query. but please keep in mind that the bind variable's value has always been set to the *lowest* existing value for column MyPKCol (remaining after deletes). so the optimizer should have predicted a "full range scan" ;o) for that too ...

hey, would you mind running the following code and comparing our results?

/*
-- prepare (run only once)
connect TOM/TOM
drop table MyTab ;
create table MyTab( MyPKCol number, MyTxtCol varchar2( 1000 ), constraint XPKMyTab primary key( MyPKCol ) ) ;
disconnect
*/

connect TOM/TOM
declare
MyTxnCnt constant number := 1000 ;
MyTxnSize constant number := 50 ;

cursor MyCursor( p_nPKVal MyTab.MyPKCol%type ) is
select /*+ FIRST_ROWS */ RowID RID, MyPKCol, MyTxtCol from MyTab order by MyPKCol ;

type MyRecordType is record( RID rowid, MyPKCol number, MyTxtCol varchar2( 1000 ) ) ;
MyRecord MyRecordType ;

begin
delete MyTab ;
for i in 1 .. MyTxnCnt * MyTxnSize loop
insert into MyTab( MyPKCol, MyTxtCol )
values( i, lpad( i, 1000, '0' ) ) ;
end loop ;
commit ;
execute immediate 'analyze table MyTab compute statistics for table for all indexes' ;

execute immediate 'alter session set tracefile_identifier = MyTrc_' || to_char( MyTxnCnt, 'FM000009' ) || '_' || to_char( MyTxnSize, 'FM009' ) ;
execute immediate 'alter session set events ''10046 trace name context forever''' ;
for i in 1 .. MyTxnCnt loop
open MyCursor( i * MyTxnSize - MyTxnSize ) ;
for i in 1 .. MyTxnSize loop
fetch MyCursor into MyRecord ;
exit when MyCursor%NOTFOUND ;
delete MyTab where RowID = MyRecord.RID ;
end loop ;
commit ;
close MyCursor ;
end loop ;
execute immediate 'alter session set SQL_Trace = FALSE' ;
end ;
/
disconnect TOM/TOM

connect TOM/TOM
declare
MyTxnCnt constant number := 1000 ;
MyTxnSize constant number := 50 ;

cursor MyCursor( p_nPKVal MyTab.MyPKCol%type ) is
select /*+ FIRST_ROWS */ RowID RID, MyPKCol, MyTxtCol from MyTab where MyPKCol > p_nPKVal order by MyPKCol ;

type MyRecordType is record( RID rowid, MyPKCol number, MyTxtCol varchar2( 1000 ) ) ;

MyRecord MyRecordType ;

begin
delete MyTab ;
for i in 1 .. MyTxnCnt * MyTxnSize loop
insert into MyTab( MyPKCol, MyTxtCol )
values( i, lpad( i, 1000, '0' ) ) ;
end loop ;
commit ;
execute immediate 'analyze table MyTab compute statistics for table for all indexes' ;

execute immediate 'alter session set tracefile_identifier = MyTrc_' || to_char( MyTxnCnt, 'FM000009' ) || '_' || to_char( MyTxnSize, 'FM009' ) ;
execute immediate 'alter session set events ''10046 trace name context forever''' ;
for i in 1 .. MyTxnCnt loop
open MyCursor( i * MyTxnSize - MyTxnSize ) ;
for i in 1 .. MyTxnSize loop
fetch MyCursor into MyRecord ;
exit when MyCursor%NOTFOUND ;
delete MyTab where RowID = MyRecord.RID ;
end loop ;
commit ;
close MyCursor ;
end loop ;
execute immediate 'alter session set SQL_Trace = FALSE' ;
end ;
/
disconnect TOM/TOM





Tom Kyte
June 06, 2005 - 5:57 pm UTC

---------------------------------------------------------------------------------

0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=7237 Card=50000 Bytes=26100000)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MYTAB' (Cost=7237 Card=50000 Bytes=26100000)
2    1     INDEX (FULL SCAN) OF 'XPKMYTAB' (UNIQUE) (Cost=94 Card=50000)


select /*+ FIRST_ROWS */ RowID RID, MyPKCol, MyTxtCol from MyTab where MyPKCol > :A order by MyPKCol
----------------------------------------------------------------------------------------------------
0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=67 Card=2500 Bytes=1305000)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'MYTAB' (Cost=67 Card=2500 Bytes=1305000)
2    1     INDEX (RANGE SCAN) OF 'XPKMYTAB' (UNIQUE) (Cost=2 Card=450)


totally different plans, I would be amazed IF they did the same IO.

No where clause -- it read the ENTIRE index (index full scan) from the front to, well, wherever you told it to stop.

With the where clause -- it range scanned PART of the index, getting the rows immediately, skipping the "front" of the index.


You cannot expect the IO's from two different plans to be even remotely similar, except by pure accident.

but the point is ...

A reader, June 07, 2005 - 1:40 am UTC

... that the index range to be scanned always started at the very first existing row (remaining after delete/commit). that's why I called it the "full range".

nevertheless, the execution plans show different ways of using the index -- but how could the *measured* IOs differ when each and every scan (in either case: with and without the where-clause) always started at the very first existing/remaining row?

did the where-clause approach simply benefit from skipping an increasing number of empty index blocks (since its where-clause always pointed to the very first remaining non-empty one), whereas the non-where-clause approach first had to "locate" the next entry to continue with?

Tom Kyte
June 07, 2005 - 8:10 am UTC

the index RANGE scan, sure.

the index FULL scan without the predicate -- NOT.


the index FULL SCAN does not start at the very first *existing* row, it starts way over at the left-hand edge of the index, which could contain nothing but deleted entries. You have what I call a "sweeper" index.


</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2913600659112#24081704877485 <code>



A reader, June 07, 2005 - 6:08 pm UTC

yes, that's it: a special case of the sample you gave.

thank you very much.

why my LIO not getting reduced ?

Parag Jayant Patankar, June 10, 2005 - 10:34 am UTC

Hi Tom,

I am running SQL query with different arraysize on server itself 

SQL> show array
arraysize 15

SQL> @j

7956 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     445546  consistent gets
      11183  physical reads
          0  redo size
     559984  bytes sent via SQL*Net to client
       6481  bytes received via SQL*Net from client
        532  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
       7956  rows processed

Now I have set arraysize 5000

SQL> set arraysize 5000
SQL> @j

7956 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     445546  consistent gets
      11159  physical reads
          0  redo size
     464235  bytes sent via SQL*Net to client
        662  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
       7956  rows processed

Now, why is my LIO (consistent gets) not getting reduced? My SGA is

SQL> show sga

Total System Global Area  128943960 bytes
Fixed Size                   743256 bytes
Variable Size              75497472 bytes
Database Buffers           50331648 bytes
Redo Buffers                2371584 bytes

regards & thanks
pjp 

Tom Kyte
June 10, 2005 - 11:01 am UTC

because it depends on how the query is processed. In your case - I see 5 sorts in memory. Most likely, the last row of the query is "found" (processed) before the FIRST row is retrieved. Therefore, the entire result set is in TEMP before you get the first row. IO from TEMP is not a "consistent get" (doesn't need to be, we did the consistent get to put the data into temp in the first place).


This arraysize setting will affect LIO for queries that are "first row" type queries only -- generally, ones without, say, a GROUP BY (a group by is generally an "all rows" query: you have to get and process all of the rows before you get the first row back from the database).


If you have Effective Oracle by Design -- I cover what I call "fast return" and "slow return" queries in a lot more detail.
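A quick way to see the "slow return" behavior, sketched under the assumption of a table T copied from ALL_OBJECTS with no index on OBJECT_NAME (so the ORDER BY has to sort the full result into TEMP before the first row comes back):

set autotrace traceonly statistics
set arraysize 15
select * from t order by object_name;

set arraysize 5000
select * from t order by object_name;

-- consistent gets should be (nearly) identical across the two runs;
-- only the SQL*Net roundtrips drop with the larger arraysize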

Few Doubts

Deepak, July 03, 2005 - 11:32 am UTC

Hi Tom,

I tried to test the consistent-get concept on my database. The details are given below...

SCOTT> set arraysize 5
SCOTT> drop table emp1;

Table dropped.

SCOTT> create table emp1
  2  as
  3  select * from emp;

Table created.

SQL> select value from v$system_parameter
  2  where name='db_block_size';

VALUE
-----------------------------------------
2048

SYSTEM> select blocks from dba_segments
  2  where owner='SCOTT'
  3  and segment_name='EMP';

    BLOCKS
----------
         5

SCOTT> set autotrace traceonly statistics
SCOTT> select count(*) from emp1;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          1  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

As per you, Consistent gets should be ...

Consistent gets=14/5 (14 rows, arraysize 5) + 5 (blocks) = 8

Can you please explain the discrepancy? Where did I go wrong?

SQL> select * from emp1;

14 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
       1582  bytes sent via SQL*Net to client
        525  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed

Also, in the above SQL query a FTS is done, but the consistent gets vary.

PLEASE NOTE: No other sessions are doing any operations on the said table EMP1 and no other operations are going on.

Please clear my doubt. 

Thanks in advance.
 

Tom Kyte
July 03, 2005 - 12:08 pm UTC

select count(*)  <<<<====

think about how many rows that processes, sends back.  hmmm, how many round trips happened there...


In the second case, you see exactly what I described?  There are blocks gotten just because "it is a full scan" and you did some consistent gets just to figure out what to full scan:

ops$tkyte@ORA9IR2> set arraysize 5
ops$tkyte@ORA9IR2> set autotrace traceonly
ops$tkyte@ORA9IR2> select * from emp where empno=0;
 
no rows selected
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'EMP'
 
 
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        589  bytes sent via SQL*Net to client
        368  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed
 
<b>two of them in fact, for zero rows we did 2 gets to figure out that emp is a one block table and read it finding nothing...</b>

ops$tkyte@ORA9IR2> select * from emp;
 
14 rows selected.
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (FULL) OF 'EMP'
 
 
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          6  consistent gets
          0  physical reads
          0  redo size
       1582  bytes sent via SQL*Net to client
        521  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed
 
ops$tkyte@ORA9IR2> set autotrace off

<b>Now, we did 2 LIO's to figure out what to read - leaving 4 LIO's to account for.  SQLPlus (you can turn on trace to see this) fetches the first row AND THEN starts array fetching so

1 LIO for row 1
1 LIO for rows 2, 3, 4, 5, 6
1 LIO for rows 7, 8, 9, 10, 11
1 LIO for rows 12, 13, 14


</b>

 

Correction

Deepak, July 03, 2005 - 11:35 am UTC

Please correct

SYSTEM> select blocks from dba_segments
2 where owner='SCOTT'
3 and segment_name='EMP';

BLOCKS
----------
5

to

SYSTEM> select blocks from dba_segments
2 where owner='SCOTT'
3 and segment_name='EMP1';

BLOCKS
----------
5



Consistent Gets & DB Block Gets

Deepak, July 14, 2005 - 1:04 am UTC

Hi Tom,

You have helped me in understanding the concepts a lot. This time I am confused regarding the proper definition of Consistent Gets & Db Block Gets.

Few definitions say:

Consistent gets are the no. of blocks obtained from the buffer cache in consistent mode.

Others say:

Consistent gets are Blocks obtained in consistent mode (irrespective of their source).

Please make me understand with simple examples.

Thanks in advance...

Tom Kyte
July 14, 2005 - 10:27 am UTC

consistent gets come from the buffer cache. I don't know where the "irrespective" comes in.

a consistent get may well have been preceded by a physical io, but the get is from the buffer cache.
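A simple repeat-run sketch shows the two layers, assuming a freshly created table T that fits in (and stays in) the buffer cache:

set autotrace traceonly statistics
select count(*) from t;   -- first run: physical reads > 0 as blocks
                          -- come disk -> buffer cache, then the
                          -- consistent gets happen from the cache
select count(*) from t;   -- second run: physical reads 0, but the
                          -- same consistent gets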

A reader, July 23, 2005 - 6:06 am UTC

Hi Tom:

Look at this
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:32812348052 <code>

----------------------
ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace on
ops$tkyte@ORA817DEV.US.ORACLE.COM> select *
2 from ( select owner, object_name, object_id
3 from t
4 order by object_id desc )
5 where rownum < 10;

9 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 COUNT (STOPKEY)
2 1 VIEW
3 2 SORT (ORDER BY STOPKEY)
4 3 TABLE ACCESS (FULL) OF 'T'

Statistics
----------------------------------------------------------
0 recursive calls
12 db block gets
238 consistent gets
234 physical reads
0 redo size
936 bytes sent via SQL*Net to client
430 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
9 rows processed

ops$tkyte@ORA817DEV.US.ORACLE.COM> exec show_space('T' )
Free Blocks.............................0
Total Blocks............................256
Total Bytes.............................2097152
Unused Blocks...........................21
Unused Bytes............................172032
Last Used Ext FileId....................7
Last Used Ext BlockId...................246537
Last Used Block.........................43

PL/SQL procedure successfully completed.

We can see the 238 blocks = 256 - 21 - 1 + 4
(256 = total blocks, 21 = unused blocks, 1 = segment header overhead,
and 4 because the full scan read the segment header four times)

----------------------
You say 238 (blocks) = 256 - 21 - 1 + 4 (full scan reads the seg. hdr four times).
I'm puzzled about adding the '4' for 'full scan reads seg. hdr four times'.
How is that 4 arrived at?
Could you explain that part of the formula in particular?

Thanks

Tom Kyte
July 23, 2005 - 9:36 am UTC

the 4 will vary from release to release. It was the cost of doing a full scan - reading the segment header to figure out what to read.

Select vs delete get information

Rob H, December 07, 2005 - 4:34 pm UTC

I'm confused regarding the significant differences in gets for a select vs a delete:

SQL> select c1 from USER.TABLE1 where c1='000000042000250';

C1
---------------
000000042000250

Elapsed: 00:00:00.62

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=RULE
   1    0   INDEX (UNIQUE SCAN) OF 'IT72' (UNIQUE)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          2  physical reads
          0  redo size
        498  bytes sent via SQL*Net to client
        655  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> delete from USER.TABLE1  where c1='000000042000250';

1 row deleted.

Elapsed: 00:00:16.88

Execution Plan
----------------------------------------------------------
   0      DELETE STATEMENT Optimizer=RULE
   1    0   DELETE OF 'TABLE1'
   2    1     INDEX (UNIQUE SCAN) OF 'IT72' (UNIQUE)




Statistics
----------------------------------------------------------
          0  recursive calls
         51  db block gets
       6572  consistent gets
       4262  physical reads
      22288  redo size
        799  bytes sent via SQL*Net to client
        703  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> rollback;

Rollback complete.

Elapsed: 00:00:00.04
SQL>

There are no triggers on the table, and the only constraint is a not null on the column.  There are approx 15 million rows in the table and it has been recently analyzed.  There is a unique index 'IT72' on the C1 column.  Using OPTIMIZER_MODE=RULE or CHOOSE or FIRST_ROWS yields similar results.  

Any help with the significant difference between the two? 

Tom Kyte
December 08, 2005 - 1:44 am UTC

sure, I can reproduce that:

ops$tkyte@ORA9IR2> set autotrace on
ops$tkyte@ORA9IR2> select * from p where x = 1;

         X
----------
         1


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   INDEX (UNIQUE SCAN) OF 'SYS_C003797' (UNIQUE)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        372  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte@ORA9IR2> delete from p where x = 1;

1 row deleted.


Execution Plan
----------------------------------------------------------
   0      DELETE STATEMENT Optimizer=CHOOSE
   1    0   DELETE OF 'P'
   2    1     INDEX (UNIQUE SCAN) OF 'SYS_C003797' (UNIQUE)




Statistics
----------------------------------------------------------
        219  recursive calls
          7  db block gets
        488  consistent gets
          0  physical reads
        556  redo size
        790  bytes sent via SQL*Net to client
        798  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2>



I should tell you what I did though to make that happen (like you should with me, set up a test case please....)


create table p ( x int primary key );
insert into p values ( 1 );
insert into p values ( 2 );
create table c
as
select 2 x, all_objects.* from all_objects;


any unindexed foreign keys sitting out there in your system?
 

Forgot ...

Bob B, December 08, 2005 - 9:24 am UTC

Tom,

I believe you forgot to add the FK from c to p above .

Tom Kyte
December 08, 2005 - 11:48 am UTC

indeed, missed the

alter table c add constraint c_fk foreign key (x) references p
/

in the cut and paste.

No Primary Key.....

Rob H, December 09, 2005 - 10:01 am UTC

The only constraints on the table are check constraints....

SQL> select owner, CONSTRAINT_NAME, CONSTRAINT_TYPE, R_CONSTRAINT_NAME from dba_constraints where table_name ='TABLE1';

OWNER                          CONSTRAINT_NAME                C R_CONSTRAINT_NAME
------------------------------ ------------------------------ - ------------------------------
USER                           SYS_C003566                    C
USER                           SYS_C003567                    C
USER                           SYS_C003568                    C
USER                           SYS_C003569                    C
USER                           SYS_C003570                    C
USER                           SYS_C003571                    C

6 rows selected.


I thought you needed a primary key to create a foreign one. 

Tom Kyte
December 09, 2005 - 1:01 pm UTC

so, can you create a table, and reproduce the issue. help me help you - give me a test case to work with.

all I said was "given what you've given me so far, yes, I can make a delete do lots more work...."

so, give me a test case to work with - you should be able to create a table just like table1, insert some data and reproduce the issue. Then we can look at it.

Thanks

Rob H, December 09, 2005 - 2:40 pm UTC

Yes, I am in the process of doing so and am wondering if the datatype has something to do with it.

Varchar2(15 CHAR)

I'll continue to work on this.

Thanks



Tom Kyte
December 10, 2005 - 4:38 am UTC

doubtful, more likely something hidden like auditing, fine grained access control, a missed trigger, a foreign key - something like that, which you will find as you make the test case ;)

Indexes?

Rob H, December 12, 2005 - 3:28 pm UTC

I haven't looked into fine grained auditing (yet) but I have looked and there are no foreign keys, no triggers. I did find this out about the indexes on the table:


INDEX_NAME   BLEVEL DISTINCT_KEYS LEAF_BLOCKS AVG_DATA_BLOCKS_PER_KEY
---------- -------- ------------- ----------- -----------------------
IDX1              8            48       25033                   80392
IDX2              3        213314      101890                      56
IDX3              8             8       16623                  391445
IDX4              2             3       87970                  315863
IDX5              2      15470610       93170                       1
Now to me, IDX1, IDX3, and IDX4 could be a problem.

Tom Kyte
December 12, 2005 - 4:17 pm UTC

indeed! but how so given 1 row deleted :)

Delete causing large index reads?

Rob H, December 12, 2005 - 4:58 pm UTC

Sorry, your response seems a little unclear (at least the "but how so given 1 row deleted").

Am I right in assuming that the large consistent and physical reads in the explain are really from the indexes being read and adjusted due to the single row delete? Similarly for the redo (tons of archiving being done).

I have to delete 10 million rows from a 24x7 online table. I would like to create table as select, truncate, insert but this has to be done online. It takes 6 minutes and nearly a gig of UNDO to remove 1,000 rows (the UNDO usage is exponential in time). This is causing a lot of work for the database. I am trying to find a way to not impact the online production while purging the data safely with the least amount of "work" by the database. :(

Tom Kyte
December 12, 2005 - 5:24 pm UTC

I suppose it could be the indexes, but I would have to set up something to simulate it.

6572 consistent gets
4262 physical reads

for 1 row deleted, they would have to be some seriously damaged indexes to do that.


Data purging via delete - we need to talk about that, there are better ways. Tell me: you are deleting 10,000,000 rows - out of how many? By what criteria? Is this the FIRST TIME or the ONLY TIME you are going to do this? What is the table structure, and what do the indexes on it look like (no tablespaces/storage, just simple DDL)?

Bitmaps

Rob H, December 13, 2005 - 12:25 am UTC

After some review I think I found the problem. I never bothered to check because I made the assumption that no one would put bitmap indexes on a table that gets 100,000 inserts a day....

And so, I think you have the answer....

Tom Kyte
December 13, 2005 - 9:15 am UTC

that indeed would be it.

surprised you did not have deadlocks all over the place - everything must be running faster now.
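Why bitmaps do that, sketched with hypothetical names: one bitmap index entry covers a range of rowids, so single-row DML locks that whole range, and two sessions touching different rows can collide:

-- session 1:
update t set status = 'A' where pk = 1;   -- locks the bitmap entry
                                          -- covering many rowids
-- session 2:
update t set status = 'B' where pk = 2;   -- blocks if pk=2 maps into
                                          -- the same entry; crossed
                                          -- waits end in ORA-00060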

but still, answer the questions from above:

Data purging via delete - we need to talk about that, there are better ways.
Tell you what you are deleting 10,000,000 rows - out of how many? and by what
criteria? and is this the FIRST TIME or the ONLY TIME you are going to do this?


Info

Rob H, December 13, 2005 - 11:13 am UTC

surprised you did not have deadlocks all over the place <- YES!

I have not dropped the indexes yet.... The application "apparently" requires them. It is a black-box type app where there are columns C1..C# and tables named T1..T#.

The delete process is

Data purging via delete - we need to talk about that, there are better ways.

They are deleting 10,000,000 out of 16,000,000 (approx 50,000-100,000 rows are added daily). I would prefer to move the "wanted" data to another table, drop the indexes, truncate the table, add the "wanted" data back and re-create the indexes (excluding the bitmaps).

The criteria for deletion are on C6 (I think a status field, NUMBER datatype with 3 distinct values) and C7 < some number (where the number is a datetime converted to an epoch number).

I have been asked to help with this application. I did not help develop it, I have been pulled in to alleviate problems with performance (wow, I now see so many chapters of your book coming back to haunt me). From a design standpoint I see several glaring issues.

1. Bitmap Indexes and ORA-00060's
2. The delete is really a delete from the primary table, then 2 other "children" but yet there is no referential integrity or constraints (why Tom, are people so scared of data integrity).
3. No one seems to know anything about the design or how any of it works.....
4. There are 101 columns on the primary table. Here are some stats from dba_tables:

NUM_ROWS EMPTY_BLOCKS AVG_SPACE AVG_ROW_LEN
---------- ------------ ---------- -----------
15469430 1216 1011 427

I *was* using a cursor to select a group of records from the primary table, then deleting from the 3 tables using 'execute immediate ... using', single row by single row. This obviously generated a stunning amount of REDO. I also could only delete 1,000 rows from the table before my UNDO TBS was nearly full. This was only an initial benchmark for a test, but it led me to the post I put above about large consistent and physical gets for a single delete. And now you know how I got here...

Tom Kyte
December 13, 2005 - 1:19 pm UTC

... "I would prefer to move the "wanted" data to another table, "....

me too. why can't we do that - it would take just a couple of minutes.


the bitmaps should be replaced with b*trees, your indexes are likely many times bigger than they should be and are not making things "faster"



2) because they don't know better. They have been scared off by people that talk like they know what they are talking about saying integrity "slows down" databases which is complete $@#.

But how

Rob H, December 13, 2005 - 3:19 pm UTC

How do I ensure that no records are lost during the create table and the truncate? A trigger perhaps? This table and its contents must be available 24x7 (apparently).

Tom Kyte
December 13, 2005 - 5:22 pm UTC

this would be an offline operation.

or:

create new empty table
put indexes on it, grant on it, etc....

then:

rename old_table to HOLD;
rename new_table to old_table;
insert into old_table select * from hold where <keep these is true>;


it would be "offline" for the duration of the two renames only.


or (my preference)

design a new approach to this altogether so you are never faced with this again (eg: partitioning), and then create a new partitioned table and do the switch...
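A concrete sketch of the rename approach above -- the names and the keep-predicate are hypothetical, and indexes, grants and constraints must be re-created on the new table before the swap:

create table t1_new as select * from t1 where 1=0;
-- create indexes/grants/constraints on t1_new here ...

rename t1 to t1_hold;
rename t1_new to t1;

insert /*+ append */ into t1
select * from t1_hold
 where c7 >= :cutoff;     -- i.e. the "keep these" predicate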

Are "Db Block Gets" data block reads ?

MckDBA, January 12, 2006 - 11:29 am UTC

Hello Tom,
great Site, great fan of yours.

I learned a lot from this thread about LIOs. Just when I think that I know it all, I replay it in my mind and get lost a bit.

Need clarification on your earlier statement -
" .... db block gets are blocks gotten in CURRENT (as opposed to consisten) mode. They are block gets we did to discover the extent map for the table (to know what to read). We get that in "current, as of right now" mode instead of "in consistent read, as of the point in time the query began" mode ....."

So, let's say we are updating table T. The "db block gets" are then not the actual reads of table T's data blocks but rather of metadata blocks, which hold information on where to read the real table T data blocks? And the consistent gets are accesses to the real table T data blocks.

Am I right in my understanding of DB Block Gets? Appreciate your response.

Thanks

Tom Kyte
January 12, 2006 - 11:49 am UTC

no, I used the segment header reads as an example.  We do current mode reads in other cases as well -- UPDATE for example will use consistent read to FIND a block - it will use current mode reads to actually modify the block (you have to modify the current copy of the block after all)


ops$tkyte@ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte@ORA10GR2> set autotrace on statistics;
ops$tkyte@ORA10GR2> update t set object_name = lower(object_name);

49937 rows updated.


Statistics
----------------------------------------------------------
        642  recursive calls
      52769  db block gets
      51340  consistent gets
        686  physical reads
   17110600  redo size
        920  bytes sent via SQL*Net to client
        960  bytes received via SQL*Net from client
          6  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
      49937  rows processed

 

how about this update ?

MckDBA, January 30, 2006 - 12:09 am UTC

Hey Tom,
I think I got it now.
SunOS, 9.2.0.5 environment.

So, I went ahead and tried a simple test to confirm my understanding.

SQL> create table test as select * from dba_objects;
Table created.
 
SQL> exec show_space('TEST');
Free Blocks.............................               0
Total Blocks............................             384 <-
Total Bytes.............................       3,145,728
Total MBytes............................               3
Unused Blocks...........................              26 <-
Unused Bytes............................         212,992
Last Used Ext FileId....................               1
Last Used Ext BlockId...................          39,433
Last Used Block.........................             102
 
So, the actual # of blocks used = 358

SQL> set autotrace traceonly statistics
SQL> select * from test;
26127 rows selected.
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       2079  consistent gets
          0  physical reads
          0  redo size
    1933448  bytes sent via SQL*Net to client
      19806  bytes received via SQL*Net from client
       1743  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      26127  rows processed
 

I could reason out the 2079 consistent gets (from an earlier thread on this site) by calculating:

# of blocks used by table + (table rows / fetch size)
358 + 26127/15 = 2100 (approximately equal to the observed 2079).


Then, I tried to reason out the numbers in the update statement.
SQL> update test set owner=lower(owner);
 
26127 rows updated.
 
Statistics
----------------------------------------------------------
        170  recursive calls
      28176  db block gets
        446  consistent gets
        210  physical reads
   11291096  redo size
       1023  bytes sent via SQL*Net to client
       1047  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
      26127  rows processed

From what I understand, that update would run more like

update (select * from test) set owner=lower(owner).

The "select * from test" would be consistent reads. Makes sense. The rest of the update would be current reads.

1) So, I was expecting to see consistent reads of around 2100, but the actual # shown above is 446. What am I missing?

2) The update statement acted on 26127 rows, so I expected to see around the same number for current reads. But the actual number was 28176. Why is there a difference of about 2000?

Appreciate your response 

Tom Kyte
January 30, 2006 - 2:06 am UTC

1) there is no "array fetching" with the update - the update is a single call, all rows processed (or not)

2) transaction (undo) work was taking place as well. look at used_ublk in v$transaction next time. you were generating undo.
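A sketch of that check -- run from the updating session, before committing:

update test set owner = lower(owner);

select t.used_ublk, t.used_urec
  from v$transaction t, v$session s
 where t.addr = s.taddr
   and s.sid = ( select sid from v$mystat where rownum = 1 );
-- used_ublk = undo blocks consumed, used_urec = undo records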

Now I'm very puzzled.

Notna, February 08, 2006 - 10:57 am UTC

Hi Tom,

I have a query whose results I don't understand. When I set hash_join_enabled=false, consistent gets are low but there are more sorts, and it runs in just under two minutes. But when I set hash_join_enabled to true, consistent gets are high and there are no sorts, yet it executes faster. Can you tell me which execution path is best for my query, and could you please enlighten me as to why?

Cheers,
NOTNA




test@tun> set autotrace traceonly explain statistics
test@tun> l
1 select
2 int_cap ,
3 ccy,
4 int_accrued_ctd ,
5 auth_int_accrued_ctd ,
6 int_accrued_mtd ,
7 auth_int_accrued_mtd ,
8 int_adj ,
9 auth_int_adj ,
10 int_adj_mtd ,
11 auth_int_adj_mtd ,
12 int_accrued_post ,
13 auth_int_accrued_post ,
14 int_adj_post ,
15 auth_int_adj_post
16 from accounts act,
17 interest int
18 where act.seq_key = int.seq_key
19* and act.deposit_type <> 'T'
test@tun> alter session set hash_join_enabled=false;

Session altered.

Elapsed: 00:00:00.00
test@tun> select
2 int_cap ,
3 ccy,
4 int_accrued_ctd ,
5 auth_int_accrued_ctd ,
6 int_accrued_mtd ,
7 auth_int_accrued_mtd ,
8 int_adj ,
9 auth_int_adj ,
10 int_adj_mtd ,
11 auth_int_adj_mtd ,
12 int_accrued_post ,
13 auth_int_accrued_post ,
14 int_adj_post ,
15 auth_int_adj_post
16 from accounts act,
17 interest int
18 where act.seq_key = int.seq_key
19 and act.deposit_type <> 'T'
20 /

1203824 rows selected.

Elapsed: 00:01:50.92

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=70522 Card=1691348 Bytes=214801196)
   1    0   PARTITION RANGE (ALL)
   2    1     MERGE JOIN (Cost=70522 Card=1691348 Bytes=214801196)
   3    2       SORT (JOIN) (Cost=15651 Card=1691348 Bytes=20296176)
   4    3         TABLE ACCESS (FULL) OF 'ACCOUNTS' (Cost=9039 Card=1691348 Bytes=20296176)
   5    2       SORT (JOIN) (Cost=54871 Card=2355244 Bytes=270853060)
   6    5         TABLE ACCESS (FULL) OF 'INTEREST' (Cost=3250 Card=2355244 Bytes=270853060)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
196701 consistent gets
169011 physical reads
0 redo size
12587484 bytes sent via SQL*Net to client
562055 bytes received via SQL*Net from client
80256 SQL*Net roundtrips to/from client
211 sorts (memory)
0 sorts (disk)
1203824 rows processed

test@tun> alter session set hash_join_enabled=true;

Session altered.

Elapsed: 00:00:00.06
test@tun> select
2 int_cap ,
3 ccy,
4 int_accrued_ctd ,
5 auth_int_accrued_ctd ,
6 int_accrued_mtd ,
7 auth_int_accrued_mtd ,
8 int_adj ,
9 auth_int_adj ,
10 int_adj_mtd ,
11 auth_int_adj_mtd ,
12 int_accrued_post ,
13 auth_int_accrued_post ,
14 int_adj_post ,
15 auth_int_adj_post
16 from accounts act,
17 interest int
18 where act.seq_key = int.seq_key
19 and act.deposit_type <> 'T'
20 /

1203824 rows selected.

Elapsed: 00:01:16.67

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=14261 Card=1691348 Bytes=214801196)
   1    0   PARTITION RANGE (ALL)
   2    1     HASH JOIN (Cost=14261 Card=1691348 Bytes=214801196)
   3    2       TABLE ACCESS (FULL) OF 'ACCOUNTS' (Cost=9039 Card=1691348 Bytes=20296176)
   4    2       TABLE ACCESS (FULL) OF 'INTEREST' (Cost=3250 Card=2355244 Bytes=270853060)





Statistics
----------------------------------------------------------
91 recursive calls
0 db block gets
275283 consistent gets
169990 physical reads
0 redo size
13233434 bytes sent via SQL*Net to client
562055 bytes received via SQL*Net from client
80256 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1203824 rows processed

test@tun>






Tom Kyte
February 08, 2006 - 11:15 am UTC

what happens when you

set arraysize 100
or
set arraysize 500

in sqlplus with the hash join? (the somewhat obvious answer to "which one is best" is: likely the hash join)

the first one reads the tables once, sorts them into temp and returns the data from temp (no logical IO on temp)

the second one reads and hashes one of the tables - and then reads the second table and hash joins it to the first. You likely have more than 15 rows per block in the second table being processed - hence you are getting and "re-getting" blocks.

Increasing the array fetch size to something more appropriate for a query that returns 1.2 million records will reduce the LIO.




big_table@ORA10GR2> set autotrace traceonly statistics;
big_table@ORA10GR2> select * from big_table;

1000000 rows selected.


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
80146 consistent gets
14061 physical reads
608 redo size
112310317 bytes sent via SQL*Net to client
733711 bytes received via SQL*Net from client
66668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1000000 rows processed

big_table@ORA10GR2> set arraysize 100
big_table@ORA10GR2> select * from big_table;

1000000 rows selected.


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
24302 consistent gets
13574 physical reads
564 redo size
104943607 bytes sent via SQL*Net to client
110374 bytes received via SQL*Net from client
10001 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1000000 rows processed


Still confused...

NOTNA, February 08, 2006 - 1:07 pm UTC

here are the results for an arraysize of 500:

test@tun> set autotrace traceonly explain statistics
test@tun> alter session set hash_join_enabled=false;

Session altered.

Elapsed: 00:00:00.00
test@tun> set array size 500
SP2-0268: arraysize option not a valid number
test@tun> set arraysize 500
test@tun> select
2 int_cap ,
3 ccy,
4 int_accrued_ctd ,
5 auth_int_accrued_ctd ,
6 int_accrued_mtd ,
7 auth_int_accrued_mtd ,
8 int_adj ,
9 auth_int_adj ,
10 int_adj_mtd ,
11 auth_int_adj_mtd ,
12 int_accrued_post ,
13 auth_int_accrued_post ,
14 int_adj_post ,
15 auth_int_adj_post
16 from accounts act,
17 interest int
18 where act.seq_key = int.seq_key
19 and act.deposit_type <> 'T'
20 .
test@tun> /

1203824 rows selected.

Elapsed: 00:04:28.03

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=70522 Card=1691348 Bytes=214801196)
   1    0   PARTITION RANGE (ALL)
   2    1     MERGE JOIN (Cost=70522 Card=1691348 Bytes=214801196)
   3    2       SORT (JOIN) (Cost=15651 Card=1691348 Bytes=20296176)
   4    3         TABLE ACCESS (FULL) OF 'ACCOUNTS' (Cost=9039 Card=1691348 Bytes=20296176)
   5    2       SORT (JOIN) (Cost=54871 Card=2355244 Bytes=270853060)
   6    5         TABLE ACCESS (FULL) OF 'INTEREST' (Cost=3250 Card=2355244 Bytes=270853060)





Statistics
----------------------------------------------------------
96 recursive calls
0 db block gets
2652763 consistent gets
212854 physical reads
2516020 redo size
9265994 bytes sent via SQL*Net to client
19534 bytes received via SQL*Net from client
2409 SQL*Net roundtrips to/from client
213 sorts (memory)
0 sorts (disk)
1203824 rows processed

test@tun> alter session set hash_join_enabled=true;

Session altered.

Elapsed: 00:00:00.00
test@tun> set arraysize 500
test@tun> select
2 int_cap ,
3 ccy,
4 int_accrued_ctd ,
5 auth_int_accrued_ctd ,
6 int_accrued_mtd ,
7 auth_int_accrued_mtd ,
8 int_adj ,
9 auth_int_adj ,
10 int_adj_mtd ,
11 auth_int_adj_mtd ,
12 int_accrued_post ,
13 auth_int_accrued_post ,
14 int_adj_post ,
15 auth_int_adj_post
16 from ACCOUNTS act,
17 INTEREST int
18 where act.seq_key = int.seq_key
19 and act.deposit_type <> 'T'
20 /

1203824 rows selected.

Elapsed: 00:04:15.75

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=14261 Card=1691348 Bytes=214801196)
   1    0   PARTITION RANGE (ALL)
   2    1     HASH JOIN (Cost=14261 Card=1691348 Bytes=214801196)
   3    2       TABLE ACCESS (FULL) OF 'ACCOUNTS' (Cost=9039 Card=1691348 Bytes=20296176)
   4    2       TABLE ACCESS (FULL) OF 'INTEREST' (Cost=3250 Card=2355244 Bytes=270853060)





Statistics
----------------------------------------------------------
91 recursive calls
0 db block gets
2655118 consistent gets
216586 physical reads
2515968 redo size
9269898 bytes sent via SQL*Net to client
19534 bytes received via SQL*Net from client
2409 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1203824 rows processed

test@tun>

So, does that mean setting hash_join_enabled to true is the correct choice for this query?

Thanks...



Tom Kyte
February 09, 2006 - 4:46 am UTC

what are you confused about first of all?

I suggested 100, not 500.

Confused about Array Size

Sajid raza, February 08, 2006 - 2:36 pm UTC

> the first one reads the tables once, sorts them into temp
> and returns the data
> from temp (no logical IO on temp)
>
> the second one reads and hashes one of the tables -
> and the reads the second table and hash joins to the
> other one. You are likely having more than 15 rows per
> block on the second table being processed - hence you are
> getting and "re-getting" blocks.

I have clearly missed something, but isn't the array size a feature of the client application (sqlplus)? Why should it affect the outcome of the second step in the hash join; after all, isn't that an 'internal' operation?

Tom Kyte
February 09, 2006 - 4:54 am UTC

hash joins work like this:


a) read one of the tables from start to finish and hash it.
b) wait for client to ask for some rows
c) when client asks for some rows - read some rows from table2


it is step c) that would be affected by the client array fetch size. If the client asks for a single row (and let's say that table1 and table2 have a one-to-one relationship to each other to make the math easy) AND table2 has 100 rows per block - you end up getting the first block 100 times.

If the client on the other hand array fetches 100 rows, you end up getting block one from the buffer cache once.
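
A quick way to see this for yourself - a hypothetical sketch, not from the original followup (the table names and row counts are made up) - is to force a hash join and vary the arraysize while watching consistent gets:

create table t1 as select rownum id from all_objects where rownum <= 10000;
create table t2 as select rownum id, rpad('x',10,'x') pad from all_objects where rownum <= 10000;

set autotrace traceonly statistics

-- tiny fetches: the current block of the second table is re-gotten on each fetch
set arraysize 2
select /*+ use_hash(a b) */ * from t1 a, t2 b where a.id = b.id;

-- bigger fetches: the same rows, far fewer consistent gets
set arraysize 100
select /*+ use_hash(a b) */ * from t1 a, t2 b where a.id = b.id;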

how can an index scan be better than a full table scan in this case?

A reader, February 10, 2006 - 11:58 am UTC

Hi

I have difficulties understanding why this is happening, please check the queries, I have indexed DEPTNO column:

SQL> select /* index(emp emp_idx1) */ *
  2  from emp where deptno in (10, 20, 30, 40);

     EMPNO ENAME      JOB              MGR HIREDATE                 SAL       COMM     DEPTNO
---------- ---------- --------- ---------- ----------------- ---------- ---------- ----------
      7782 CLARK      MANAGER         7839 19810609 00:00:00       2450                    10
      7839 KING       PRESIDENT            19811117 00:00:00       5000                    10
      6003 MILLER     CLERK           7782 19820123 00:00:00       1300                    10
      7369 SMITH      CLERK           7902 19801217 00:00:00        800                    20
      7566 JONES      MANAGER         7839 19810402 00:00:00       2975                    20
      7788 SCOTT      ANALYST         7566 19821209 00:00:00       3000                    20
      7876 ADAMS      CLERK           7788 19830112 00:00:00       1100                    20
      7902 FORD       ANALYST         7566 19811203 00:00:00       3000                    20
      7499 ALLEN      SALESMAN        7698 19810220 00:00:00       1600        300         30
      7521 WARD       SALESMAN        7698 19810222 00:00:00       1250        500         30
      7654 MARTIN     SALESMAN        7698 19810928 00:00:00       1250       1400         30
      7698 BLAKE      MANAGER         7839 19810501 00:00:00       2850                    30
      7844 TURNER     SALESMAN        7698 19810908 00:00:00       1500          0         30
      7900 JAMES      CLERK           7698 19811203 00:00:00        950                    30

14 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=448)
   1    0   INLIST ITERATOR
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=14 Bytes=448)
   3    2       INDEX (RANGE SCAN) OF 'EMP_IDX1' (NON-UNIQUE) (Cost=1 Card=14)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          9  consistent gets
          0  physical reads
          0  redo size
       1278  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed



SQL> select * from emp;

     EMPNO ENAME      JOB              MGR HIREDATE                 SAL       COMM     DEPTNO
---------- ---------- --------- ---------- ----------------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 19801217 00:00:00        800                    20
      7499 ALLEN      SALESMAN        7698 19810220 00:00:00       1600        300         30
      7521 WARD       SALESMAN        7698 19810222 00:00:00       1250        500         30
      7566 JONES      MANAGER         7839 19810402 00:00:00       2975                    20
      7654 MARTIN     SALESMAN        7698 19810928 00:00:00       1250       1400         30
      7698 BLAKE      MANAGER         7839 19810501 00:00:00       2850                    30
      7782 CLARK      MANAGER         7839 19810609 00:00:00       2450                    10
      7788 SCOTT      ANALYST         7566 19821209 00:00:00       3000                    20
      7839 KING       PRESIDENT            19811117 00:00:00       5000                    10
      7844 TURNER     SALESMAN        7698 19810908 00:00:00       1500          0         30
      7876 ADAMS      CLERK           7788 19830112 00:00:00       1100                    20
      7900 JAMES      CLERK           7698 19811203 00:00:00        950                    30
      7902 FORD       ANALYST         7566 19811203 00:00:00       3000                    20
      6003 MILLER     CLERK           7782 19820123 00:00:00       1300                    10

14 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=14 Bytes=448)
   1    0   TABLE ACCESS (FULL) OF 'EMP' (Cost=4 Card=14 Bytes=448)

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         16  consistent gets
          0  physical reads
          0  redo size
       1307  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed


Why does the index scan need only 9 consistent gets and the full table scan more? I am reading the same number of rows?!

 

Tom Kyte
February 10, 2006 - 1:21 pm UTC

your stats are likely out of date.

but you include nothing really to reproduce with - I would never expect scott.emp to have 16'ish blocks.

TEST RESULTS

A reader, February 12, 2006 - 1:28 pm UTC

Hi

I dropped the table and recreated it, gathered the stats with both analyze and dbms_stats, and am getting the same result. Please bear in mind my db_block_size is 4K.
I carried out the test with segment space management auto and manual. It seems that using AUTO is what is causing 16 consistent gets reading the EMP table.

The strange thing is, with segment space management manual, Oracle still prefers the index scan when the full scan consumes only half the consistent gets!

This is Oracle 9.2.0.6 32 bit on Red Hat Enterprise Linux 3



CREATE TABLE EMP
(EMPNO NUMBER(4) NOT NULL,
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7, 2),
COMM NUMBER(7, 2),
DEPTNO NUMBER(2));

INSERT INTO EMP VALUES
(7369, 'SMITH', 'CLERK', 7902,
TO_DATE('17-DEC-1980', 'DD-MON-YYYY'), 800, NULL, 20);
INSERT INTO EMP VALUES
(7499, 'ALLEN', 'SALESMAN', 7698,
TO_DATE('20-FEB-1981', 'DD-MON-YYYY'), 1600, 300, 30);
INSERT INTO EMP VALUES
(7521, 'WARD', 'SALESMAN', 7698,
TO_DATE('22-FEB-1981', 'DD-MON-YYYY'), 1250, 500, 30);
INSERT INTO EMP VALUES
(7566, 'JONES', 'MANAGER', 7839,
TO_DATE('2-APR-1981', 'DD-MON-YYYY'), 2975, NULL, 20);
INSERT INTO EMP VALUES
(7654, 'MARTIN', 'SALESMAN', 7698,
TO_DATE('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);
INSERT INTO EMP VALUES
(7698, 'BLAKE', 'MANAGER', 7839,
TO_DATE('1-MAY-1981', 'DD-MON-YYYY'), 2850, NULL, 30);
INSERT INTO EMP VALUES
(7782, 'CLARK', 'MANAGER', 7839,
TO_DATE('9-JUN-1981', 'DD-MON-YYYY'), 2450, NULL, 10);
INSERT INTO EMP VALUES
(7788, 'SCOTT', 'ANALYST', 7566,
TO_DATE('09-DEC-1982', 'DD-MON-YYYY'), 3000, NULL, 20);
INSERT INTO EMP VALUES
(7839, 'KING', 'PRESIDENT', NULL,
TO_DATE('17-NOV-1981', 'DD-MON-YYYY'), 5000, NULL, 10);
INSERT INTO EMP VALUES
(7844, 'TURNER', 'SALESMAN', 7698,
TO_DATE('8-SEP-1981', 'DD-MON-YYYY'), 1500, 0, 30);
INSERT INTO EMP VALUES
(7876, 'ADAMS', 'CLERK', 7788,
TO_DATE('12-JAN-1983', 'DD-MON-YYYY'), 1100, NULL, 20);
INSERT INTO EMP VALUES
(7900, 'JAMES', 'CLERK', 7698,
TO_DATE('3-DEC-1981', 'DD-MON-YYYY'), 950, NULL, 30);
INSERT INTO EMP VALUES
(7902, 'FORD', 'ANALYST', 7566,
TO_DATE('3-DEC-1981', 'DD-MON-YYYY'), 3000, NULL, 20);
INSERT INTO EMP VALUES
(7934, 'MILLER', 'CLERK', 7782,
TO_DATE('23-JAN-1982', 'DD-MON-YYYY'), 1300, NULL, 10);

COMMIT;

CREATE INDEX EMP_IDX1 ON EMP(DEPTNO);

CREATE TABLE XEMP TABLESPACE DMT
AS SELECT * FROM EMP;

CREATE INDEX XEMP_IDX1 ON XEMP(DEPTNO)
TABLESPACE DMT;

ANALYZE TABLE EMP COMPUTE STATISTICS;

ANALYZE TABLE XEMP COMPUTE STATISTICS;

select
TABLESPACE_NAME,
BLOCK_SIZE,
INITIAL_EXTENT,
NEXT_EXTENT,
MIN_EXTENTS,
MAX_EXTENTS,
MIN_EXTLEN,
EXTENT_MANAGEMENT,
ALLOCATION_TYPE,
SEGMENT_SPACE_MANAGEMENT
from dba_tablespaces
where tablespace_name in ('USERS', 'DMT');

TABLESPA BLOCK_SIZE INITIAL_EXTENT NEXT_EXTENT MIN_EXTENTS MAX_EXTENTS MIN_EXTLEN EXTENT_MAN ALLOCATIO SEGMEN
-------- ---------- -------------- ----------- ----------- ----------- ---------- ---------- --------- ------
DMT            4096         131072      131072           1  2147483645     131072 LOCAL      UNIFORM   MANUAL
USERS          4096          65536                       1  2147483645      65536 LOCAL      SYSTEM    AUTO

select
TABLE_NAME,
TABLESPACE_NAME,
PCT_FREE,
PCT_USED,
INI_TRANS,
MAX_TRANS,
FREELISTS,
FREELIST_GROUPS,
NUM_ROWS,
BLOCKS,
EMPTY_BLOCKS,
AVG_ROW_LEN
from user_tables
where table_name in ('XEMP', 'EMP');

TABL TABLESPA PCT_FREE PCT_USED INI_TRANS MAX_TRANS FREELISTS FREELIST_GROUPS NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
---- -------- -------- -------- --------- --------- --------- --------------- -------- ------ ------------ -----------
EMP  USERS          10                  1       255                                 14     13            3          40
XEMP DMT            10       40         1       255         1               1       14      1           30          40

SET AUTOTRACE TRACE EXP STAT

SELECT *
FROM EMP;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=14 Bytes=448)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=4 Card=14 Bytes=448)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
16 consistent gets
0 physical reads
0 redo size
1307 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SELECT *
FROM EMP
WHERE DEPTNO IN (10, 20, 30, 40);

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=448)
1 0 INLIST ITERATOR
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=14 Bytes=448)
3 2 INDEX (RANGE SCAN) OF 'EMP_IDX1' (NON-UNIQUE) (Cost=1 Card=14)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
9 consistent gets
0 physical reads
0 redo size
1278 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SELECT *
FROM XEMP;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=448)
1 0 TABLE ACCESS (FULL) OF 'XEMP' (Cost=2 Card=14 Bytes=448)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
0 physical reads
0 redo size
1307 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

SELECT *
FROM XEMP
WHERE DEPTNO IN (10, 20, 30, 40);

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=448)
1 0 INLIST ITERATOR
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'XEMP' (Cost=2 Card=14 Bytes=448)
3 2 INDEX (RANGE SCAN) OF 'XEMP_IDX1' (NON-UNIQUE) (Cost=1 Card=14)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
9 consistent gets
0 physical reads
0 redo size
1278 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

BEGIN
dbms_stats.gather_table_stats( OWNNAME => 'LSC',
TABNAME => 'EMP',
ESTIMATE_PERCENT => 99,
METHOD_OPT => 'FOR ALL COLUMNS SIZE 100',
DEGREE => 1,
CASCADE => TRUE);
END;
/

BEGIN
dbms_stats.gather_table_stats( OWNNAME => 'LSC',
TABNAME => 'XEMP',
ESTIMATE_PERCENT => 99,
METHOD_OPT => 'FOR ALL COLUMNS SIZE 100',
DEGREE => 1,
CASCADE => TRUE);
END;
/

select
TABLE_NAME,
TABLESPACE_NAME,
PCT_FREE,
PCT_USED,
INI_TRANS,
MAX_TRANS,
FREELISTS,
FREELIST_GROUPS,
NUM_ROWS,
BLOCKS,
EMPTY_BLOCKS,
AVG_ROW_LEN
from user_tables
where table_name in ('XEMP', 'EMP');

TABL TABLESPA PCT_FREE PCT_USED INI_TRANS MAX_TRANS FREELISTS FREELIST_GROUPS NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
---- -------- -------- -------- --------- --------- --------- --------------- -------- ------ ------------ -----------
EMP  USERS          10                  1       255                                 14     13            3          37
XEMP DMT            10       40         1       255         1               1       14      1           30          37





Tom Kyte
February 12, 2006 - 5:16 pm UTC

ok, that makes sense with ASSM - it wastes space in order to increase concurrency during modifications.


As for the test - I would just say "way too small", this is a case of "six one way, half a dozen the other". The costs are basically about the same in this case.

number of rows per block vs. arraysize

jianhui, February 23, 2006 - 12:57 am UTC

Hi Tom,
It seems puzzling according to the arraysize. For example, I have a table called test(c char(750)) with 200 rows and 9 rows per 8k block - 23 blocks in total.

If I set the array size to 9, shouldn't there be 23 or so consistent gets, since there is no re-visit on each fetch? However, the test numbers tell a different story - could you explain why? I was suspecting db_file_multiblock_read_count had something to do with it, but after I set it to 1 the result is still the same. Thanks a lot! (version is 9205)

SQL> select dbms_rowid.ROWID_BLOCK_NUMBER(rowid), count(*)
  2  from test
  3  group by dbms_rowid.ROWID_BLOCK_NUMBER(rowid);

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                             1163312          9
                             1163313          9
                             1163314          9
                             1163315          9
                             1163316          9
                             1163317          9
                             1163318          9
                             1163319          9
                             1163320          9
                             1163321          9
                             1163322          9

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                             1163323          9
                             1163324          9
                             1163325          9
                             1163326          9
                             1163327          9
                             1163328          9
                             1163329          9
                             1163330          9
                             1163331          9
                             1163332          9
                             1163333          9

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                             1163334          2

23 rows selected.

SQL> set autotrace trace stat
SQL> set arraysize 9
SQL> select * from test;

200 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         50  consistent gets
          0  physical reads
          0  redo size
     155838  bytes sent via SQL*Net to client
        898  bytes received via SQL*Net from client
         24  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        200  rows processed

SQL> set arraysize 200
SQL> /

200 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         28  consistent gets
          0  physical reads
          0  redo size
     152523  bytes sent via SQL*Net to client
        656  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        200  rows processed

SQL> set arraysize 10
SQL> /

200 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         45  consistent gets
          0  physical reads
          0  redo size
     155290  bytes sent via SQL*Net to client
        865  bytes received via SQL*Net from client
         21  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        200  rows processed

SQL>  

Tom Kyte
February 23, 2006 - 7:54 am UTC

sqlplus always does a single row fetch and then starts array fetching, that'll affect this definitely. So,

arraysize 9

read 1 - read block 1 row 1
read 2 - read block 1 rows 2-9, block 2 row 1
read 3 - read block 2 rows 2-9, block 3 row 1
read 4 - read block 3 rows 2-9, block 4 row 1

So, each block is read twice in that case - 23 blocks read twice is about 46 gets, plus a few extra gets against the segment header, which lines up with the 50 consistent gets you observed.

arraysize 10

read 1 - read block 1 row 1
read 2 - read block 1 rows 2-9, block 2 rows 1-2
read 3 - read block 2 rows 3-9, block 3 rows 1-3
...

and there too, almost every block is read twice - hence the 45 consistent gets you observed.


sql_trace=true is your friend. You can see the rows fetched on each call in the trace file directly.


I set arraysize 9 and ran a select * from t with 200 rows:

PARSING IN CURSOR #13 len=15 dep=0 uid=78 oct=3 lid=78 tim=1113963019093011 hv=1134051363 ad='43c7cd88'
select * from t
END OF STMT
PARSE #13:c=4999,e=4824,p=0,cr=32,cu=0,mis=1,r=0,dep=0,og=1,tim=1113963019092993
EXEC #13:c=0,e=95,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=1113963019093243
FETCH #13:c=0,e=146,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=1,tim=1113963019093481
FETCH #13:c=0,e=110,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019094329
FETCH #13:c=0,e=93,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019094882
FETCH #13:c=0,e=76,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019095326
FETCH #13:c=0,e=72,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019095788
FETCH #13:c=0,e=57,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019096166
FETCH #13:c=0,e=48,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019096484
FETCH #13:c=0,e=52,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019096816
FETCH #13:c=0,e=50,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019097121
FETCH #13:c=0,e=48,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019097412
FETCH #13:c=0,e=51,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019097739
FETCH #13:c=0,e=47,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019098048
FETCH #13:c=0,e=45,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019098336
FETCH #13:c=0,e=49,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019098661
FETCH #13:c=0,e=51,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019098970
FETCH #13:c=0,e=48,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019099273
FETCH #13:c=0,e=55,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019099588
FETCH #13:c=0,e=52,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019099904
FETCH #13:c=0,e=47,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019100207
FETCH #13:c=1000,e=84,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019100546
FETCH #13:c=0,e=137,p=0,cr=4,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019101181
FETCH #13:c=0,e=81,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019101954
FETCH #13:c=1000,e=184,p=0,cr=2,cu=0,mis=0,r=9,dep=0,og=1,tim=1113963019102596
FETCH #13:c=0,e=88,p=0,cr=4,cu=0,mis=0,r=1,dep=0,og=1,tim=1113963019103128


the r= value shows the rows fetched on each call (and cr= shows the consistent gets that call performed).
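
If you want to produce this kind of trace yourself, a minimal recipe (the trace file name below is just a placeholder - yours will be named after your instance and server process):

alter session set sql_trace = true;
set arraysize 9
select * from t;
alter session set sql_trace = false;

-- then format the trace file from the server's udump directory:
-- $ tkprof <sid>_ora_<pid>.trc report.txt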

a simple question

A reader, February 23, 2006 - 8:43 am UTC

Hi Tom,

Since I read this thread for the first time I've been trying to explain why I get 3 consistent gets in the following example (segment header, data block, and ??).
If I create the table as an IOT I get 1. Can you please shed some light?

Connected to:
Oracle9i Release 9.2.0.6.0 - Production
JServer Release 9.2.0.6.0 - Production

SQL> create table t(x int);

Table created.

SQL> insert into t values(1);

1 row created.

SQL> set autotrace traceonly stat
SQL> select * from t;


Statistics
----------------------------------------------------------
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        372  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>
 

Tom Kyte
February 23, 2006 - 9:54 am UTC

because for the IOT we read the root block, and stop.

We don't need to read the segment header block to figure out what blocks to full scan - we just goto the root block and discover "there are no leaves to goto" and stop.


full scan = "read segment header to figure out what block ranges to full scan, then full scan them"

iot range scan = "read the root and from the root figure out what to read next - like a branch block, leaf block, whatever. In the case where root = leaf, you are done in 1 IO"
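
A minimal sketch of that difference (hypothetical one-row tables; the exact get counts will vary by version and segment space management):

create table t_heap ( x int );
insert into t_heap values ( 1 );

create table t_iot ( x int primary key ) organization index;
insert into t_iot values ( 1 );

set autotrace traceonly statistics

-- several consistent gets: segment header read(s) plus the data block
select * from t_heap;

-- a single consistent get: the root block IS the only leaf block
select * from t_iot;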

re: a simple question

A reader, February 23, 2006 - 1:20 pm UTC

Thanks Tom,

But in my case I only have one data block for the heap table, so 1 (segment header) + 1 (data block) = 2. I've done a block dump and the HWM is on block 1. Where does the other 1 come from?



Tom Kyte
February 23, 2006 - 7:10 pm UTC

the number of IO's performed against the segment header (and even the TYPE of IO done - current or consistent) varies by version.

We never promised to read it just once.

Older releases would have even a higher count.

Consistent gets too high for index range scan on global temporary Table

Guru Dhar, February 24, 2006 - 1:20 am UTC

Hi Tom,

We have a situation that we are not able to explain or make progress on with performance tuning. It is as follows: we have a global temporary table that gets populated and then updated in a loop. The trace for the insert and update is below.

INSERT INTO XXRPA_INS_TEMP (FACTOR, RISK_RATING, CUSTOMER_ID, PARTY_NUMBER,
ANSWER_SET_ID, SELECTION_METHOD, RISK_ASSESSMENT_ID, ZONE) SELECT
XRA.FACTOR, XRA.RISK_RATING, XRA.CLAIMANT_PARTY_ID, HP.PARTY_NUMBER,
XRA.ANSWER_SET_ID, XRA.SELECTION_METHOD, XRA.RISK_ASSESSMENT_ID, XRA.ZONE
FROM XXRPA_RISK_ASSESSMENT XRA, HZ_PARTIES HP,
XXRPA_SPSAPPL_SPS_APPLC_SETS_V XSSASV WHERE HP.PARTY_ID =
XRA.CLAIMANT_PARTY_ID AND XRA.ANSWER_SET_ID = XSSASV.ANSWER_SET_ID AND
TRUNC (XSSASV.CLAIM_SUBMISSION_DATE) BETWEEN :B3 AND :B2 AND
XRA.LOOKUP_CODE_SCHEME = 'SPS' AND XRA.SELECTION_METHOD IN ('ASSESSED',
'RANDOM', 'RISK') AND XRA.ZONE = :B1



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.01 0 0 0 0
Execute 7 18.63 18.53 1232 1264709 16886 116085
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 8 18.65 18.54 1232 1264709 16886 116085

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
0 FILTER
0 TABLE ACCESS BY INDEX ROWID XXRPA_ANSWER_SETS
1 NESTED LOOPS
0 NESTED LOOPS
0 TABLE ACCESS BY INDEX ROWID XXRPA_RISK_ASSESSMENT
0 INDEX RANGE SCAN XXRPA_RISK_ASSESSMENT_ZONE (object id 473197)
0 TABLE ACCESS BY INDEX ROWID HZ_PARTIES
0 INDEX UNIQUE SCAN HZ_PARTIES_U1 (object id 172923)
0 INDEX RANGE SCAN XAS_ANSWER_SET_VERSION_I (object id 408101)


UPDATE XXRPA_INS_TEMP SET BAND = :B2
WHERE
RISK_ASSESSMENT_ID = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 523042 23230.25 22698.18 0 231893488 535118 523042
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 523043 23230.25 22698.18 0 231893488 535118 523042

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE
189300 INDEX RANGE SCAN XXRPA_INS_TEMP_IDX1 (object id 473196)

Now, the update is via an indexed path and the indexed column has the highest selectivity (1 row per risk_assessment_id). Still, the gets are more than 400 blocks per execution (or per row updated).

Can you provide us with any clues as to why this could be? Please let us know if you need any additional information.

Thanks for your site.

Tom Kyte
February 24, 2006 - 8:31 am UTC

I will guess

band is indexed.

tell us more. This is likely a case where the IO's are against the index being maintained. You appear to be updating every row an AVERAGE OF 4.5 TIMES!!!! Why is that!

give us the details on the gtt, what band is datatype wise, indexes involved, the transaction semantics (where is the commit and such),


Something is funny there - you executed this 523,042 times - but the row source only shows 189,300 - we'll need more information - details as to how to reproduce this.

Consistent gets too high for index range scan on global temporary Table

Guru Dhar, February 28, 2006 - 8:56 am UTC

Hi Tom,
Thanks for your response.

There is no index on the BAND column. The Only index we have is on risk_assessment_id column.

Following are the DDL scripts used for the table and index;
CREATE GLOBAL TEMPORARY TABLE XXRPA_INS_TEMP
(
LINE_ID NUMBER,
HEADER_ID NUMBER,
ORDERED_QUANTITY NUMBER,
RISK_RATING NUMBER,
CUSTOMER_ID NUMBER,
PARTY_NUMBER VARCHAR2(30 BYTE),
RS_ZONE NUMBER(4),
FACTOR NUMBER,
REMOTE_SENSING_FLAG VARCHAR2(1 BYTE),
GROWER_PARTY_ID NUMBER,
REQUEST_DATE DATE,
ANSWER_SET_ID NUMBER,
INSPECTION_METHOD VARCHAR2(50 BYTE),
QUARTILE NUMBER,
RISK_ASSESSMENT_ID NUMBER,
CLAIMANT_PARTY_ID NUMBER,
LOOKUP_CODE_SCHEME VARCHAR2(30 BYTE),
SCHEME_YEAR VARCHAR2(10 BYTE),
SELECTION_METHOD VARCHAR2(50 BYTE),
ZONE NUMBER,
PROCESSED_BY_RANDOM VARCHAR2(1 BYTE),
BAND NUMBER,
REPORT_FLAG VARCHAR2(1 BYTE),
COUNTER NUMBER
)
ON COMMIT DELETE ROWS;


CREATE INDEX XXRPA_INS_TEMP_IDX1 ON XXRPA_INS_TEMP
(RISK_ASSESSMENT_ID);

There is only one commit in the program after the end of the processing.

As you pointed out, there was a problem in the looping construct, which we have rectified.

I was trying to get the index statistics from the instance where this program was being tested for performance, but unfortunately they had dropped and recreated the index, hence the index stats are no longer available.

The issue of the row source data not matching the number of executions is indeed puzzling, and we have not been able to get it to happen again; we tried to reproduce it on the development instance but were not able to do so.

Current Tkprof results attached:

SELECT FACTOR, RISK_ASSESSMENT_ID
FROM
XXRPA_INS_TEMP


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 18079 0.51 0.41 0 18233 0 18078
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18081 0.51 0.41 0 18233 0 18078

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
18078 TABLE ACCESS FULL XXRPA_INS_TEMP



UPDATE XXRPA_INS_TEMP SET BAND = :B2
WHERE
RISK_ASSESSMENT_ID = :B1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 18078 2.85 2.69 0 36224 18454 18078
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 18079 2.85 2.70 0 36224 18454 18078

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173 (APPS) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
0 UPDATE
18078 INDEX RANGE SCAN XXRPA_INS_TEMP_IDX1 (object id 407378)

Thanks.

Tom Kyte
February 28, 2006 - 9:06 am UTC

looks good now, could it have been a "bad trace" file?

Bad Trace File ???

Guru Dhar, March 01, 2006 - 5:24 am UTC

Hi Tom,
I am not sure what a bad trace file is. Recently we encountered an issue related to inconsistent execution time as reported by the trace file versus SQL*Plus (set timing on).

I had managed to save the trace file which I am attaching:



TKPROF: Release 8.0.6.3.0 - Production on Fri Jun 3 12:37:17 2005

(c) Copyright 1999 Oracle Corporation.  All rights reserved.

Trace file: rpad16_ora_23719.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing 
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

alter session set sql_trace = true


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       2.76          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       2.76          0          0          0           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 173  
********************************************************************************

BEGIN amit_test_prc; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1    100.00      64.58          0          0          0           0
Execute      1      0.00      29.71          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    100.00      94.29          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173  
********************************************************************************

SELECT /*1*/ count(*)
  from scott.emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       1.33          0          0          0           0
Execute      1      0.00       0.88          0          0          0           0
Fetch        1      0.00       1.76          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       3.97          0          3          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 173     (recursive depth: 1)
********************************************************************************

SELECT /*2*/ count(*)
  from scott.emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       1.52          0          0          0           0
Execute      1      0.00       0.53          0          0          0           0
Fetch        1      0.00       0.68          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       2.73          0          3          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 173     (recursive depth: 1)
********************************************************************************

SELECT /*3*/ count(*)
  from scott.emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       1.25          0          0          0           0
Execute      1      0.00       0.54          0          0          0           0
Fetch        1      0.00       1.10          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       2.89          0          3          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 173     (recursive depth: 1)
********************************************************************************

SELECT /*4*/ count(*)
  from scott.emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       1.19          0          0          0           0
Execute      1      0.00       0.50          0          0          0           0
Fetch        1      0.00       0.61          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       2.30          0          3          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 173     (recursive depth: 1)
********************************************************************************

SELECT count(*)
  from scott.emp

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        4    100.00      12.62          0          0          0           0
Execute      4      0.00       1.96          0          0          0           0
Fetch        4      0.00       2.79          0         12          0           4
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12    100.00      17.37          0         12          0           4

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 173     (recursive depth: 1)
********************************************************************************

alter session set sql_trace = false


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       1.66          0          0          0           0
Execute      1      0.00       2.15          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       3.81          0          0          0           0

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 173  



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2    100.00      66.24          0          0          0           0
Execute      3      0.00      34.62          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5    100.00     100.86          0          0          0           1

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        8    100.00      17.91          0          0          0           0
Execute      8      0.00       4.41          0          0          0           0
Fetch        8      0.00       6.94          0         24          0           8
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       24    100.00      29.26          0         24          0           8

Misses in library cache during parse: 1

   11  user  SQL statements in session.
    0  internal SQL statements in session.
   11  SQL statements in session.
********************************************************************************
Trace file: rpad16_ora_23719.trc
Trace file compatibility: 7.03.02
Sort options: default

       1  session in tracefile.
      11  user  SQL statements in trace file.
       0  internal SQL statements in trace file.
      11  SQL statements in trace file.
       8  unique SQL statements in trace file.
     101  lines in trace file.

Two Issues:
1. Elapsed time less than CPU time by big Margin.
2. Total execution time is totally out of sync as compared to SQL plus timing.


SQL> select server from v$session
  2  where sid = (select sid from v$mystat where rownum = 1);

SERVER
---------
DEDICATED

SQL> alter session set sql_trace = true;

Session altered.

SQL> set time on
16:54:21 SQL> exec amit_test_prc

PL/SQL procedure successfully completed.

16:54:32 SQL> alter session set sql_trace = false;

Session altered.

16:55:08 SQL> exit


Definition of Procedure:
create or replace procedure amit_test_prc as
 l_cnt  number;
begin
  select /*1*/ count(*)
  into l_cnt
  from scott.emp;

  select /*2*/ count(*)
  into l_cnt
  from scott.emp;

  select /*3*/ count(*)
  into l_cnt
  from scott.emp;

  select /*4*/ count(*)
  into l_cnt
  from scott.emp;

  select count(*)
  into l_cnt
  from scott.emp;

  select count(*)
  into l_cnt
  from scott.emp;

  select count(*)
  into l_cnt
  from scott.emp;

  select count(*)
  into l_cnt
  from scott.emp;
end;

Also, we observed the same query (i.e. select count(*) from scott.emp) getting soft parsed multiple times (1 hard and 3 soft parses). We are not able to figure out why that happened. We have also observed that the behaviour is different when the query is executed in a loop (the soft parses disappear).

Thanks a lot.
 

Tom Kyte
March 01, 2006 - 8:19 am UTC

you have a tkprof that doesn't make sense - I would suspect the trace file was "improperly generated, broken, bad in some way shape or form"

that is what a bad trace file is.


why are you using 8.0.6 tkprof???


are you using a really really really old tkprof against a trace file from a newer version? I think so. All bets are off. Use the right tkprof.

Marcel, May 10, 2006 - 9:15 am UTC

Hi Tom,

I think that physical reads are to be avoided (because physical reads are slow/expensive), and the same goes for consistent gets (because of latches = locks in memory).
What are recursive calls exactly?

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
428 consistent gets
0 physical reads
0 redo size
1307 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
14 rows processed

Thanks
Marcel


Tom Kyte
May 10, 2006 - 9:27 am UTC

sql executed on behalf of your sql - to do space management, hard parsing, and so on.

or sql called from other sql by you (eg: the sql you execute in a plsql routine is "recursive")


and in general, if you reduce the logical IO, you'll take care of that physical IO ;)
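
One way to watch them - a sketch; the 'recursive calls' statistic name is real, and the example needs nothing beyond dual:

select b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'recursive calls';

begin
   for x in ( select * from dual ) loop
      null;  -- the SELECT inside this block is executed as recursive SQL
   end loop;
end;
/

-- re-run the v$mystat query: the counter climbs because the SQL inside
-- the PL/SQL block was executed "on behalf of" the block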

do LIOs always matter?

Ryan, June 30, 2006 - 6:04 pm UTC

This is going to be long, but it's interesting. I hope it's worth the read.

I am performing a few tests. Here is the basic framework of the tests.

Test 1:
GTT joined to a heap table on a unique index; the data is perfectly colocated (ordered and dense).

Test 2:
GTT joined to a list partitioned table by a unique index. It is now two columns: it leads with the same unique column as in the first test, and the second column is the partition key. The data is perfectly colocated. The index is local.

Our GTT has 5,000 rows and 20,000 rows. We are using dynamic sampling 2. 10G release 2.

The data in the GTT are perfectly colocated. All the rows in the GTT match the rows in one partition of the list partitioned table. The base tables are identical except for partitioning - exact same data. They have between 20 million and 200 million rows. Tables are analyzed with the auto sample size option.

Test Code:
1. Same queries for both. The partitioned tables use queries that include the partition key.
2. Open 1,10,20,50,100 threads. So one round of tests with 5,000 rows and another with 20,000 rows in the GTT.
3. For 1 thread grab tkprof. Use awrpt for multiple threads. Sometimes I grabbed 10046 for multiple threads, but it was skewing my system results.
4. No one query is the bottleneck. All of them have somewhat comparable results.
5. We tested 8 queries run in a stored procedure.
6. We bounce all relevant tablespaces between tests and flush the shared pool.

Results:
1. Elapsed time is about the same. CPU is about the same. Physical IO is a little higher for the non-partitioned tables.
2. We noticed a 20-50% spike in LIOs for the partitioned tables depending on the number of threads we run. When we look at the row source in the tkprof output the increase is almost entirely on 'partition list iterator'.

1. Have you seen a spike in LIO due to the partition list iterator row source?

2. If my LIOs are so much higher, how come I do not see an increase in elapsed time or CPU? We tested this under stress with up to 100 threads. We have more waits with multiple threads for non-partitioned tables. So this may be balancing the elapsed time. However, what about CPU? I thought LIO translates into CPU usage?

I believe Mogens Noorgaard had a chapter in the Oak Table book about some LIOs being more costly than others, but we don't have the granularity to tell yet.

I don't have a production system to test the effects against something live. It's still in development.



Tom Kyte
July 01, 2006 - 7:47 am UTC

set up test case scripts?


LIO's come into play in scaling. Each LIO is a latch into the cache. If you have lots of latch misses, you spin on CPU. As you add more users, the CPU used by queries with higher LIO's will tend to go up faster than that of queries with low LIO's.


Some LIO's - like those done against UNDO to provide consistent read - are less costly, but the ones you are observing in your single user test are "the real deal"
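
If you want to see the latching side of this, a sketch (roughly speaking, each buffer get takes one or more cache buffers chains latches) is to snapshot v$latch before and after a test run:

select name, gets, misses, spin_gets
  from v$latch
 where name = 'cache buffers chains';

-- the delta in GETS tracks your logical IO; MISSES and SPIN_GETS
-- climbing under concurrent load is the CPU spin described above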



How to distinguish one type of LIO from another.

Neeraj Nagpal, July 03, 2006 - 5:19 pm UTC

Tom,

You said that the LIO's - like those done against UNDO to provide consistent read - are less costly, but the ones observed in a single user test are "the real deal".

Is there a direct (or indirect) way to tell one type of LIO from another?

Thanks,
Neeraj


does insert use Buffer Cache?

A reader, July 04, 2006 - 12:49 pm UTC

Hi

I have a doubt about INSERT transaction management.

When we update or delete, we go to disk, find the data we are going to modify, and if found, read that block into the buffer cache.

With an insert do we do the same thing? I don't think so, since there is no rowid to find!

TIA

Tom Kyte
July 07, 2006 - 9:02 pm UTC

you have to find a rowid to associate with the row - so yes, the same thing happens.

We go to the buffer cache to inspect the freelists (manual segment space management) or the bitmap blocks in the table (assm - automatic segment space management) to find a block with sufficient free space. We might have to read those freelist/bitmap blocks from disk into the cache, and we might have to read from disk the block we discover to have sufficient free space.

more disk reads

Karmit, July 06, 2006 - 5:59 am UTC

Hi Tom,

I've a very puzzling situation here. We just migrated a test database to production (different bare metal and SAN
- but same sort of platform/ os/ patch etc) using RMAN backup/ restore.

There is one particular operation which seems to run extremely slow (5 times slower than test system). Bear in mind that everything else is the same. Only one user doing the test, same spfile parameters etc (its a RMAN copy).

When I compared the SQL stats I see for the offending operation, the number of rows processed is exactly the same, the explain plan is exactly the same (even the cost is comparable) - basically its the same amount of data its going through.

However, I did find a few differences - the number of buffer gets and disk reads were about 30K higher on the prod system, and just about every read operation for this particular SQL seems to be higher by about 30K. I suspect that's where the "elapsed time" increase is coming from.

From a very theoretical point of view (I know you don't like this :-) - would you be able to comment on why the disk reads are so far off? It's doing an index read; the db file sequential read waits take up the bulk of the time - on both systems.
P.S: I had collected stats on the prod exactly the same way as I did on the test.

Tom Kyte
July 08, 2006 - 9:39 am UTC

in production - are there other users (I know you said "one user doing the test" but that doesn't preclude other users in production) hitting this table?




disk reads

Karmit, July 06, 2006 - 8:40 am UTC

Hi Tom,

Follow up on the above issue. I rebuilt the index and now the disk reads seem to be comparable.. the CPU time is actually lower on prod - but the elapsed time is significantly higher... dodgy disks?

===============  =============  ================
Statistic        TEST           PROD
===============  =============  ================
DiskReadPerExec  793,027        637,324
BufGetsPerExec   3,541,324      3,473,893
DiskReads        793,027        637,324
BufGets          3,541,324      3,473,893
BufGetsPerRow    9.23           9.05
BCHitRatio       77.61          81.65
RowsProcs        383,856        383,856
CPUTime          51,450,000     47,160,000
ElapsedTime      578,209,410    2,436,869,321
===============  =============  ================

Thanks,
Karmit

Tom Kyte
July 08, 2006 - 9:50 am UTC

I don't see how rebuilding the index could affect this IF this was a simple restore (eg: they both would have had the same index structure no???)


tkprof it, do you have one of those (with waits included in it)

Insert Buffer Cache

Roderick, July 06, 2006 - 11:11 pm UTC


Insert will still go into the buffer cache. Freelist or bitmap information for the table will tell the process which block to insert the row into. That block will have to be read into the cache first so the process can figure out where inside the block to put the row.

Insert Buffer Cache

Roderick, July 06, 2006 - 11:14 pm UTC


Oops. Forgot to add that a direct path insert can bypass the buffer cache though.
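
For example (a sketch - big_table is a placeholder name), the difference is just the APPEND hint:

-- conventional path insert: rows go through the buffer cache
insert into t select * from big_table;

-- direct path insert: formats blocks above the high water mark, bypassing the cache
insert /*+ append */ into t select * from big_table;

-- a commit is required before the session can query t again after the direct path insert
commit;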

disk reads

Karmit, July 10, 2006 - 4:55 am UTC

Hi,

Finally! One of the unix admins sheepishly admitted that the type of disk being used for that filesystem was slower than what was supposed to go in... it's been changed now and performance is back to normal.

Thanks,
Karmit

Where is SCN Stored

Sanji, August 23, 2006 - 9:07 pm UTC

Hello Tom,

This is an abstract from your favourite topics in 9i Concepts Guide, "Data Concurrency and Consistency"

"As a query enters the execution stage, the current system change number (SCN) is determined. As data blocks are read on behalf of the query, only blocks written with the observed SCN are used. Blocks with changed data (more recent SCNs) are reconstructed from data in the rollback segments, and the reconstructed data is returned for the query. Therefore, each query returns all committed data with respect to the SCN recorded at the time that query execution began. Changes of other transactions that occur during a query's execution are not observed, guaranteeing that consistent data is returned for each query."

1> Where is the SCN Stored, once it is determined at the start of the execution stage of any query ?

For instance the following query is to be executed

update T set X = <variable>
where clause....

So, the query scans for consistent image of table 'T' based on the "where clause" result set.
Considering the "read consistent" behaviour of Oracle as mentioned above, the current SCN is determined.

Q> From where is this current system change number determined and once determined where is it stored for reference ?


2> "As data blocks are read on behalf of the query, only blocks written with the observed SCN are used. Blocks with changed data (more recent SCNs) are reconstructed from data in the rollback segments, and the reconstructed data is returned for the query."

Does the transaction table in the RBS store the SCN for all its contents (blocks), so as to reconstruct the read consistent block in case it requires reconstruction?

3> In the case of updates, inserts and deletes, blocks are gotten in current mode.
I have read somewhere on this site that the current mode is actually the data segment header. I am confused as to how the blocks are gotten in current mode.

Thanks
Sanji

Tom Kyte
August 27, 2006 - 7:27 pm UTC

1) it is just part of your "cursor state", just a variable as part of the program that is called "Oracle"

2) the SCN is "conceptual" here, it is really all about committed transaction state. the rollback segment headers and the block headers themselves contain everything we need to know.

3) it used to be in older releases that the segment headers were gotten in current mode for full scans. It isn't true anymore.

You'd be interested in reading:
http://asktom.oracle.com/Misc/something-different-part-i-of-iii.html
http://asktom.oracle.com/Misc/part-ii-seeing-restart.html
http://asktom.oracle.com/Misc/part-iii-why-is-restart-important-to.html



A reader, August 31, 2006 - 11:17 am UTC

How can I reduce the consistent gets? Changing the arraysize didn't help.
This is the Oracle version we are using: 9.2.0.1.0

select count(*) from t;

count(*)
-------
76232

Set arraysize 100

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4192 consistent gets
0 physical reads
0 redo size
210 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Set arraysize 1000

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4192 consistent gets
0 physical reads
0 redo size
210 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Thanks in advance!

Tom Kyte
August 31, 2006 - 11:40 am UTC

that'll never help for that type of a query (changing the array size).

There is one round trip to the server here - there would not be a reason to visit and revisit the same block over and over during this full scan.


the only way to reduce the consistent gets for that count(*) would be to reduce the number of blocks that must be read to get the count.


eg:

<b>
a) having a primary key. the CBO (cost based optimizer) will fast full scan an index on a NOT NULL column</b>

ops$tkyte%ORA10GR2> create table t
  2  as
  3  select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats(user,'T');

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select count(*) from t;

  COUNT(*)
----------
     50026

ops$tkyte%ORA10GR2> set autotrace traceonly statistics;
ops$tkyte%ORA10GR2> select count(*) from t;


Statistics
----------------------------------------------------------
        694  consistent gets
..

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2> alter table t add constraint t_pk primary key(object_id);
Table altered.

ops$tkyte%ORA10GR2> set autotrace traceonly statistics;
ops$tkyte%ORA10GR2> select count(*) from t;


Statistics
----------------------------------------------------------
        166  consistent gets
...

<b>b) compress the data to make it smaller</b>

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2> alter table t drop constraint t_pk;
Table altered.

ops$tkyte%ORA10GR2> alter table t compress;

Table altered.

ops$tkyte%ORA10GR2> alter table t move;

Table altered.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats(user,'T');

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set autotrace traceonly statistics;
ops$tkyte%ORA10GR2> select count(*) from t;


Statistics
----------------------------------------------------------
        211  consistent gets
...
ops$tkyte%ORA10GR2> set autotrace off




arraysize only helps when there is more rows than the array size (and only up to a point - say 100-500 rows maximum).


You have a single row - the entire query is processed from start to finish during the first fetch; we are NOT revisiting blocks as you would in a select * from T query.

A reader, August 31, 2006 - 12:56 pm UTC

Thanks for your response.
But in my context, t is a complex view, and we are doing a count(*) on that view. Depending on that count, we are updating another table, eg:

update t1 set ...
where col1 = (select count(*) from t);

This update is giving 756908 consistent gets, what can I do to reduce these?

Tom Kyte
August 31, 2006 - 1:21 pm UTC

same answer - the entire select count(*) is done in a single call.

arraysize - that cannot possibly have any effect here at all, we do not experience the "get and re-get the same block over and over" syndrome - because it is a single call.

Only thing you can do is tune the underlying view definition to do less work.


If T was a complex view, posing the question as you did was counter productive.

really should be:

select count(*)
from ( select ...
from .....
where .....
)

if you want anyone to ever help you make it "go faster".


the only thing you can do to "reduce" your logical IO's is what I was alluding to above - tuning (eg: adding an index on a not null column, compressing the table, or - in your case - likely rewriting the view query, if that is even possible. Sometimes your query is already doing the least amount of work possible).

A reader, August 31, 2006 - 2:32 pm UTC

Thanks for your excellent explanation! One last doubt, can we compress a view?

Tom Kyte
August 31, 2006 - 2:57 pm UTC

you compress segments - and only if they are read only/read mostly (eg: bulk loaded from time to time).

so, tables the view queries - sure.
the view itself - nope.

(unless it is a materialized view, in which case I'd just make a materialized view of the materialized view that was the select count(*) and be done with it)

Array Size effects in PL/SQL

Arindam Mukherjee, September 09, 2006 - 3:38 am UTC

Respected Mr. Tom,

On reading about array size effects, could you kindly let me ask - how can I take advantage of this in a PL/SQL program, when this size setting applies to the SQL*Plus environment only?
If possible, please answer with an example.

Tom Kyte
September 09, 2006 - 12:20 pm UTC

arraysize is a sqlplus parameter.

In plsql

a) if you use 10g and above, then:

for x in ( select ... )
for x in CURSOR_NAME

will array fetch 100 rows at a time.

b) otherwise you use bulk collect and limit to set the arraysize for plsql
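
For (b), a minimal sketch (the cursor, table, and column names are made up):

declare
   cursor c is select object_name from t;
   type name_array is table of t.object_name%type index by binary_integer;
   l_names name_array;
begin
   open c;
   loop
      fetch c bulk collect into l_names limit 100;  -- array fetch 100 rows at a time
      for i in 1 .. l_names.count
      loop
         null;  -- process l_names(i) here
      end loop;
      exit when c%notfound;
   end loop;
   close c;
end;
/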

Consistent gets

Amit, September 12, 2006 - 2:58 am UTC

Hi Tom,

I am still not clear on the concept of consistent gets.
To calculate the buffer cache hit ratio we use:
SELECT phy.value physical_reads,
       con.value consistent_gets,
       cur.value db_block_gets,
       1 - (phy.value / (cur.value + con.value)) "Cache hit ratio"
  FROM v$sysstat cur, v$sysstat con, v$sysstat phy
 WHERE cur.name = 'db block gets'
   AND con.name = 'consistent gets'
   AND phy.name = 'physical reads';
*********************************************

Does this mean that db block gets + consistent gets include both disk reads and reads from the buffer cache?
I am assuming the buffer cache hit ratio to be total gets from the buffer over total gets, i.e. buffer/(buffer + disk).

Please correct me if i am wrong.

Thanks
Amit

Tom Kyte
September 12, 2006 - 8:28 am UTC

a logical IO (consistent get) at some point incurred a physical IO.

But we might perform 5,000 logical IO's against a given block - and only 1 physical IO (read it once from disk, read it 5,000 times from cache)


So, each db block get or consistent get could have done a physical IO, or not.

What you are looking at is the ratio of the number of times a block is read from the cache vs disk.



Thanks a lot.

Amit, September 20, 2006 - 12:47 am UTC


GETS - Consistent and Current

Arindam Mukherjee, September 21, 2006 - 8:16 am UTC

Respected Mr. Tom,

I am truly confused by “Consistent Gets” and “Current block Read”, i.e. db block gets.
Since queries generally perform consistent gets, can I say “consistent gets” means reading UNDO blocks? I can’t quite say that, because you have also written in your book – “retrieval of blocks from the buffer cache in “read consistent” mode”. So??
Anyway, kindly explain to us in easy language –
1> What is a consistent get?
2> What is a “current block read”, i.e. a db block get?


Tom Kyte
September 22, 2006 - 1:56 am UTC

consistent gets are NOT undo blocks being read.

A consistent get is a block that is gotten as of a point in time. Say you start a query at 11:00. It is still running at 11:05. At 11:05 it needs to get a block from the buffer cache. It will request a consistent get on that block AS OF 11:00. (the block is needed as of 11:00 for a consistent read). If the block in the cache is "newer" than 11:00 - we will have to do some reads to the UNDO segment to undo the changes - but if the block in the cache is "older" than 11:00 - we will just read that block out of the cache and not hit the undo.

db block gets - suppose the query you were running right above at 11:00 was an UPDATE. At 11:05 it did a consistent get on a block as of 11:00. It discovered that there is a row on that block that it should update. It will then request that same block in CURRENT MODE - as of "right now", not "as of 11:00". We have to update the current version of the block - not an old consistent read version. So we get the block AS OF 11:05 - verify the row has not changed between 11:00 and 11:05 - and then do the update to the current mode version of the block.
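You can watch the difference with autotrace - a minimal sketch, reusing a table like the test(a int) table this thread started with:

set autotrace traceonly statistics

select * from test where a = 10;
-- expect consistent gets, and essentially no db block gets
-- (depending on version you may see a few for segment header access)

update test set a = a where a = 10;
-- expect consistent gets (to find the row) AND db block gets
-- (to get the block in current mode and modify it)

rollback;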

want to understand this - LIO reduction

Ajeet, December 27, 2006 - 11:25 pm UTC

Hi Tom,
I have a query which was doing 24093 LIOs per execution.
I generated the statistics on both of the tables involved in the query using this :

begin
dbms_stats.gather_table_stats(
ownname => 'TBAADM',
tabname => 'DAILY_TRAN_HEADER_TABLE',
estimate_percent => 10,
method_opt => 'for all indexed columns size AUTO',
degree => 4,
cascade => true
);
end;
/
Same thing I did for another table in this query -table name DAILY_TRAN_DETAIL_TABLE.

and then the LIO's went down to 2 per execution.

earlier also I had generated the statistics but I did use this - to generate it.

begin
dbms_stats.gather_table_stats(
ownname => 'TBAADM',
tabname => 'DAILY_TRAN_HEADER_TABLE',
estimate_percent => 10,
method_opt => 'for all indexed columns',
degree => 4,
cascade => true
);
end;
/
so there was NO "size auto" in the method_opt.

below is the query

SELECT
TO_CHAR(DTD.TRAN_DATE,'DD-MM-YYYY HH24:MI:SS'),
DTD.TRAN_ID,
DTD.REFERRAL_ID,
DTD.PART_TRAN_SRL_NUM,
DTD.TRAN_DATE,
DTD.SVS_TRAN_ID ,
TO_CHAR(DTD.VALUE_DATE,'DD-MM-YYYY HH24:MI:SS'),
TO_CHAR(DTD.GL_DATE,'DD-MM-YYYY HH24:MI:SS')
FROM tbaadm.DTD,tbaadm.DTH
WHERE (DTH.TRAN_DATE = DTD.TRAN_DATE AND DTH.TRAN_ID = DTD.TRAN_ID AND DTD.BANK_ID = '01'
AND DTH.BANK_ID = '01'
AND (DTD.RESTRICT_MODIFY_IND <> 'T' OR DTD.RESTRICT_MODIFY_IND IS NULL )
AND DTH.INIT_SOL_ID || NULL = '0049'
AND DTD.TRAN_CRNCY_CODE || NULL = 'INR'
AND DTH.TRAN_DATE = TO_DATE( '08-05-2003' ,'DD-MM-YYYY')
AND DTH.TRAN_ID > ' MI752762'
AND DTD.ACID = 'WF112351'
AND DTD.RCRE_TIME >= TO_DATE( '27-12-2006 00:00:00' ,'DD-MM-YYYY HH24:MI:SS')
)
ORDER BY 5,2,4
/

below is the AUTOTRACE output before :

SQL> set autot on
SQL> @wfp.sql

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 1051266481

---------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                             |     1 |   242 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID  | DAILY_TRAN_HEADER_TABLE     |     1 |    50 |     1   (0)| 00:00:01 |
|   2 |   NESTED LOOPS                |                             |     1 |   242 |     2   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS BY INDEX ROWID| DAILY_TRAN_DETAIL_TABLE     |     1 |   192 |     1   (0)| 00:00:01 |
|*  4 |     INDEX RANGE SCAN          | IDX_DAILY_TRAN_DETAIL_TABLE |     1 |       |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | IDX_DTH_TRAN_DATE_ID        |     1 |       |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("DTH"."INIT_SOL_ID"||NULL='0049' AND "DTH"."BANK_ID"='01')
   3 - filter("DTD"."RCRE_TIME">=TO_DATE('2006-12-26 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
       "DTD"."ACID"='WF112351' AND "DTD"."TRAN_CRNCY_CODE"||NULL='INR' AND
       ("DTD"."RESTRICT_MODIFY_IND"<>'T' OR "DTD"."RESTRICT_MODIFY_IND" IS NULL))
   4 - access("DTD"."TRAN_DATE"=TO_DATE('2003-05-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
       "DTD"."TRAN_ID">' MI752762' AND "DTD"."BANK_ID"='01' AND "DTD"."TRAN_ID" IS NOT NULL)
       filter("DTD"."BANK_ID"='01' AND "DTD"."TRAN_ID">' MI752762' AND
       "DTD"."TRAN_DATE"=TO_DATE('2003-05-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
   5 - access("DTH"."TRAN_ID"="DTD"."TRAN_ID" AND
       "DTH"."TRAN_DATE"=TO_DATE('2003-05-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
       filter("DTH"."TRAN_ID">' MI752762')


Statistics
----------------------------------------------------------
53 recursive calls
0 db block gets
24093 consistent gets
0 physical reads
0 redo size
912 bytes sent via SQL*Net to client
458 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed

SQL>

and below is the final autotrace output (after re-generating the stats using method_opt => 'for all indexed columns size auto'):

SQL> @wfp.sql

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 2041883080

------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                          |     1 |   168 |    12   (9)| 00:00:01 |
|   1 |  SORT ORDER BY                 |                          |     1 |   168 |    12   (9)| 00:00:01 |
|*  2 |   TABLE ACCESS BY INDEX ROWID  | DAILY_TRAN_HEADER_TABLE  |     1 |    26 |     1   (0)| 00:00:01 |
|   3 |    NESTED LOOPS                |                          |     1 |   168 |    11   (0)| 00:00:01 |
|*  4 |     TABLE ACCESS BY INDEX ROWID| DAILY_TRAN_DETAIL_TABLE  |     1 |   142 |    10   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN          | IDX_DTD_ACCT_ID          |    68 |       |     1   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN           | IDX_DTH_TRAN_DATE_ID     |     1 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("DTH"."INIT_SOL_ID"||NULL='0049' AND "DTH"."BANK_ID"='01')
   4 - filter("DTD"."RCRE_TIME">=TO_DATE('2006-12-27 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
       "DTD"."TRAN_CRNCY_CODE"||NULL='INR' AND "DTD"."BANK_ID"='01' AND
       "DTD"."TRAN_DATE"=TO_DATE('2003-05-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss') AND
       ("DTD"."RESTRICT_MODIFY_IND" IS NULL OR "DTD"."RESTRICT_MODIFY_IND"<>'T') AND
       "DTD"."TRAN_ID">' MI752762')
   5 - access("DTD"."ACID"='WF112351')
   6 - access("DTH"."TRAN_ID"="DTD"."TRAN_ID" AND
       "DTH"."TRAN_DATE"=TO_DATE('2003-05-08 00:00:00', 'yyyy-mm-dd hh24:mi:ss'))
       filter("DTH"."TRAN_ID">' MI752762')


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
912 bytes sent via SQL*Net to client
458 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed

SQL>

Now I am very puzzled - not able to figure out how this improvement happened. What is it in method_opt => 'for all indexed columns size auto' that made the difference?

Can you please explain, using the above two explain plans, what the reason for this change is?

Thanks
Ajeet

Tom Kyte
December 28, 2006 - 9:44 am UTC

why not compare the before and after statistics, that would sort of show you "what changed"

you computed entirely different column statistics, leading to entirely different estimated cardinalities, leading to different plans.

Another consistent get v/s db block get clarification

Mahesh, February 08, 2007 - 11:54 am UTC

Hi Tom,

Assume you have a table t with 1000 rows, arraysize 100 and 200 rows per block, i.e. the table's data occupies 5 blocks.

Suppose you have an update statement U1 that executes at time T and updates some rows in the third block only. A query S1 (select * from t) starts at time T+5 (update U1 has not yet committed). S1 gets the first two blocks in current mode. When S1 fetches from the third block, it finds it needs to visit an RBS block to get the third block in consistent mode, since changes from U1 are still pending. 2 questions:

1. For the S1 query, does Oracle fetch the first 2 blocks in current mode?
2. When the S1 query fetches from the third block, does the fetch from the third block and the fetch from the RBS block together count as one consistent get, or as two consistent gets?
Tom Kyte
February 08, 2007 - 12:47 pm UTC

S1 would get the blocks in consistent read mode, not current mode.

1) it does not, it fetched them in consistent read mode.

2) it would be 2 consistent gets.

Blocks accessed for index updates

Steve, February 09, 2007 - 3:14 pm UTC

Hi Tom,
I don't believe this specific explanation has been made in this thread, but it seems relevant.
Can you please explain all the blocks that are read/written when a value is updated in an indexed column?
Thanks in advance.
Tom Kyte
February 12, 2007 - 9:37 am UTC

It depends



update t set indexed_column = 5 where primary_key = 10;


well, that will likely:

a) read the root/branch/leaf blocks from the primary key index, get a rowid
b) using that rowid read a table block for that row
c) update that indexed_column on the block
d) assuming the value actually changed - that indexed_column was not 5 before - we would then
e) read the root/branch/leaf blocks of the index on indexed_column (where value=5 and rowid=that_rowid)
f) get the leaf block in current mode, modify the value
g) if the modified value caused the index block to overfill (split), we would have to then perform more IO to get the new leaf block, get the branch and root blocks of the index in current mode as well - to allow the split to take place


So, in the most basic update - reads on root/branch/leaf - write on a leaf.

in the case of a block split - more reads to get new block, and more current mode gets and writes to update the index structure.

Blocks accessed for index updates

Steve, February 12, 2007 - 11:43 am UTC

Thanks so much for the info on index block access.
A few clarifications, please:

a) = consistent gets, right?
b) = current mode get?
e) = consistent gets?
g) = consistent gets to find the new leaf block, then
current gets of root/branch(es)/leaf to do the split?
Tom Kyte
February 12, 2007 - 1:12 pm UTC

b) = consistent get + current get. We read the data using consistent reads (always) and then get the block in current mode to do the update.

g) = get space from freelists or ASSM bitmap blocks - not a consistent read, a space request. then current mode reads up the structure to finish the split.

current reads for select

A reader, February 12, 2007 - 2:53 pm UTC

Does current read always happen for update statements?
At 11:05 it did a consistent get of a block as of 11:00 and discovered a row in it should be updated. In this case it will request the block in CURRENT MODE. From where will Oracle request this block?

I once remember seeing 80% of 35 million LIO's were current reads. The statement was INSERT..SELECT.

How can this happen?

Thanks,

Tom Kyte
February 13, 2007 - 8:54 am UTC

Oracle gets the blocks from the buffer cache (which may of course have to get the data from disk first)

ops$tkyte%ORA10GR2> create table t
  2  as
  3  select * from all_objects where 1=0;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index t_idx1 on t(object_id);

Index created.

ops$tkyte%ORA10GR2> create index t_idx2 on t(object_name);

Index created.

ops$tkyte%ORA10GR2> create index t_idx3 on t(owner);

Index created.

ops$tkyte%ORA10GR2> create index t_idx4 on t(object_type);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_monitor.session_trace_enable;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t select * from all_objects;

50162 rows created.



tkprof will show:

insert into t select * from all_objects


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1      4.50       4.57          2      70709     259323       50162
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      4.53       4.59          2      70709     259323       50162


Now, if you do the same thing without the indexes
insert into t select * from all_objects


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.04          0          0          0           0
Execute      1      1.60       1.56          0      66246       6839       50165
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      1.64       1.60          0      66246       6839       50165



It was index maintenance - you have to update that complex data structure.

refinement ;)

A. O'Hara, February 12, 2007 - 3:58 pm UTC

Of course point e) above should be

e) read the root/branch/leaf blocks of the index on indexed_column (where value=<OLD VALUE> and rowid=that_rowid)

and then "updating the value" may mean, if "value=5 and rowid=that_rowid" has to be on a different leaf block, deleting the old entry and then re-scanning the index to insert the new "value=5 and rowid=that_rowid" in another block.
Tom Kyte
February 13, 2007 - 8:58 am UTC

correct - a modify of an indexed column in the index structure is a "delete plus insert", so likely there are two leaf blocks involved as the row "moves" in the index.

thanks!

Please clarify

Anil, February 14, 2007 - 12:00 pm UTC

Hello Tom,
I am on Oracle9i. The tables used in the query are analyzed. The first query, without any hint, has 36147 consistent gets.

SELECT a.actlinkId,actionIndex,a.fieldId,assignShort,assignLong, keywordList,parameterList, sampleServer,sampleSchema
FROM actlink_set a,actlink_mapping b
WHERE a.actlinkId=b.actlinkId
AND b.schemaId=1690
ORDER BY 1 DESC,2 DESC


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=80 Card=1535 Bytes=127405)
   1    0   SORT (ORDER BY) (Cost=80 Card=1535 Bytes=127405)
   2    1     NESTED LOOPS (Cost=58 Card=1535 Bytes=127405)
   3    2       TABLE ACCESS (FULL) OF 'ACTLINK_SET' (Cost=58 Card=27990 Bytes=2071260)
   4    2       INDEX (UNIQUE SCAN) OF 'ACTLINK_MAPPING_IND' (UNIQUE)



Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
36147 consistent gets
3 physical reads
0 redo size
160161 bytes sent via SQL*Net to client
46862 bytes received via SQL*Net from client
746 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
326 rows processed

The second query, with a full-table-scan hint, has only 1478 consistent gets.

SELECT /*+ FULL(b) */ a.actlinkId,actionIndex,a.fieldId,assignShort,assignLong, keywordList,parameterList, sampleServer,sampleSchema
FROM actlink_set a,actlink_mapping b
WHERE a.actlinkId=b.actlinkId
AND b.schemaId=1690
ORDER BY 1 DESC,2 DESC

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=86 Card=1535 Bytes=127405)
   1    0   SORT (ORDER BY) (Cost=86 Card=1535 Bytes=127405)
   2    1     HASH JOIN (Cost=64 Card=1535 Bytes=127405)
   3    2       TABLE ACCESS (FULL) OF 'ACTLINK_MAPPING' (Cost=5 Card=95 Bytes=855)
   4    2       TABLE ACCESS (FULL) OF 'ACTLINK_SET' (Cost=58 Card=27990 Bytes=2071260)



Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1478 consistent gets
3 physical reads
0 redo size
160162 bytes sent via SQL*Net to client
46862 bytes received via SQL*Net from client
746 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
326 rows processed


As you have always been suggesting not to use hints - should I use the hint here?

ACTLINK_MAPPING = 22378 rows

ACTLINK_SET = 34737 rows


Thanks
Anil

Blocks accessed for index updates

Steve, February 20, 2007 - 4:55 pm UTC

Hi Tom,
It seems that a couple of things are true:

-The addition of an index will cause additional current mode gets.

-Exactly how many additional current mode gets is somewhat unpredictable.

Can you recommend general steps to be taken, to minimize the additional current mode gets (i.e. re-org the indexes, etc.)?

Thanks
Tom Kyte
February 20, 2007 - 5:28 pm UTC

- if you update the indexed column, sure
- yes, not predictable in a practical sense

b*tree indexes are very resilient structures. You would not reorg them for this update activity.

A reader, April 04, 2007 - 6:58 am UTC

Tom,
Regarding the point that increasing the arraysize can decrease logical IOs - the crux of the matter appears to be:

1) When Oracle attempts to retrieve the NEXT row of a query, it checks whether it has made a trip to the client since the last fetch; if it has, it always does a logical IO to get the current block again.

2) If it hasn't, the block is effectively still 'latched' and Oracle just gets the next row from the current block.

Both of the above points relate to two rows in the same block.

Now, the question is: is it correct that it needs to do a logical IO after a trip to the client because the world could have changed in the interval? That is, the client has not necessarily fetched the next row instantaneously, and Oracle NEEDS to do a logical IO to check?

Ravi
Tom Kyte
April 04, 2007 - 10:35 am UTC

it does a logical IO not because things could have changed - but because it doesn't have the information anymore - it freed it, gave it up, released it.

A reader, April 23, 2007 - 11:18 am UTC

Tom,
What if I have an update statement which needs to update rows in B1, which is currently 'latched' by a running Query Q1.

1) Will the update have to wait?
2) It may be unlikely, but with a one-block reference data table, what if there are 10 queries waiting to get that block's data - is there a queuing mechanism for latches as well?

Thanks
Tom Kyte
April 23, 2007 - 4:38 pm UTC

what do you mean "which is currently 'latched' by a running query"

reads do not block writes
writes do not block reads
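
that is easy to see for yourself - a sketch, assuming a tiny test table t1(a number) with a couple of rows, run from two sqlplus sessions:

-- session 1: lock a row and do not commit
update t1 set a = a + 1 where a = 1;

-- session 2: this returns immediately - the reader is not blocked, it just
-- gets a read consistent (pre-update) image of the row:
select * from t1;

-- session 2: only another WRITER of the same row would wait, eg:
-- update t1 set a = a + 1 where a = 1;  -- blocks until session 1 commits or rolls back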

Some minor clarifications needed

Vladimir Andreev, April 24, 2007 - 11:48 am UTC

Hi Tom,

Here are some conclusions of mine based on what I learned about how the buffer cache works. Please confirm (or correct) them:

1. At any given moment, at most one "current mode" buffer may exist in the (global) cache for a db block; if it is currently in use (pinned) by a session, other sessions attempting a current get of the same block would block on "buffer busy".

2. A session making a current get of a block searches the cache for a current mode buffer and if not found, reads it from disk (or requests it from the global cache) into a free buffer.

3. A session making a consistent get (as of SCN=X) searches the cache for a "consistent mode" buffer with SCN>=X; if found, it pins it and rolls it back to SCN X; if not, it reads it from disk into a free buffer and then performs the rollback to SCN X.

Questions:
a) does the fact that the (single) current mode buffer is in use by a session cause other sessions that need a consistent version of the same block but must perform a physical read to create it to block on "buffer busy" (since the block is potentially in flux)?

b) Since at any time there may be many different versions of the same block, it would make sense that these versions be sorted by SCN, so that a session searching for a version with SCN>=X can quickly find the version with SCN closest to, but not smaller than X. Are the consistent mode buffers for a block ordered by SCN in the cache buffers chain?

Thanks in advance,

Flado
Tom Kyte
April 24, 2007 - 1:01 pm UTC

a) don't quite follow you. if there is a current mode version in the cache, why would another session perform a physical read?

b) don't know, detail that doesn't help me day to day, only so much room for so many details :)

Hopefully related

A.VARADARAJAN, April 25, 2007 - 1:51 am UTC

I was told by another DBA that if I wanted to reduce my "db file sequential read" waits, then one way to do it is by reducing consistent gets. After going through this article, I doubt this. After all, if the number of records per block is much higher than 15, then the same block will be accessed multiple times and physical reads are not going to be affected anyway.
Tom Kyte
April 25, 2007 - 9:56 am UTC

LIO = logical IO
PIO = physical IO

In general, if you watch your LIO's your PIO's will take care of themselves.

In general, reducing LIO's can reduce PIO's, the thought is you are reading TOO MANY blocks, not necessarily the same block over and over. However, even if you are reading the SAME block - if you are reading tons of extra blocks, you are likely causing some blocks to be flushed from the cache and you might have to read and re-read them from disk over and over during the processing of your query.

So in general, LIO reduction is what I'm looking to accomplish.


explain_plan, statistics and optimizer hints

Piyush Agarwal, May 05, 2007 - 8:53 am UTC

Dear Tom,

After reading the first few reviews I got some basics on consistent gets and the impact of arraysize. I think all this is very essential in performance tuning. Please also explain how to reduce SQL*Net roundtrips to/from client - I think if this value is large, it affects performance. Moreover, for Java (Sprint JDBC), how do we manually set the arraysize to improve performance, since as you have said the JDBC default value is 10? Can we increase the value on the fly, based on the expected output of each procedure or SQL in Java, to enhance performance?

We have used sys_refcursors and WITH clauses in the procedures in our project. What is the performance impact of using them? What if they contain CLOB values?

Hence, do you have any document which explains each item of the explain plan, the statistics, and the optimizer hints and their effects? How do we adjust them, and what are the general standards?

If you can also throw some light here, it would help me understand the document you may refer to in your reply.

Regards,
Piyush
Tom Kyte
May 08, 2007 - 10:12 am UTC

... SQL*Net roundtrips to/from client. I ...


make fewer calls - you do that by..... using larger arraysizes (although 100-500 rows at a time is a reasonable maximum, you really do not need to go beyond that)

"sprint jdbc"??? You could read the documentation for your jdbc driver to see what is available - I've shown how to do it for the connection and for individual statements in general.

the performance guide, free on otn.oracle.com, covers explain plans, statistics and how they are used and hints.


A reader, June 20, 2007 - 12:03 pm UTC

Tom,

I have two scripts; they both look up the same rows and report the same count(*) for the number of rows, but the first one (using bulk collect) does only about 80 consistent gets, while the second one does 39K consistent gets.

Does Oracle let go of the latch between fetches even when it is PL/SQL doing the fetching via SQL?


declare
type b is table of areas_of_land.aol_id%type index
by binary_integer;
a b;
begin
select aol_id bulk collect into a from areas_of_land where
aol_id between 100 and 100001 ;
dbms_output.put_line(a.count);
end;


declare
a areas_of_land.aol_id%type;
cnt number default 0;
begin
for i in (select aol_id from areas_of_land where
aol_id between 100 and 100001 ) loop
a := i.aol_id;
cnt := cnt+1;
end loop;
dbms_output.put_line(cnt);
end;

Explain plan:

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=RULE
1 0 INDEX (RANGE SCAN) OF 'AOL_PK' (UNIQUE)
Tom Kyte
June 20, 2007 - 12:09 pm UTC

plsql is no different from C or VB or <any client here> in that respect.

plsql is just doing sql and making calls to the database to fetch data.

so, yes.

do NOT bulk collect that many rows, just do 100-500

A reader, June 20, 2007 - 12:33 pm UTC

Tom

Why not Bulk Collect more than 500 rows?

Thanks

ravi
Tom Kyte
June 20, 2007 - 1:28 pm UTC

memory and 'practicality'

you want to

loop
get something to process
process it
end loop


biting off more than you can "chew" doesn't make sense, get a bit, work a bit, get a bit, work a bit.


get a ton of stuff, allocate ton of memory, set up ton of stuff into memory (manage a HUGE structure), retrieve small bits from this huge structure, destroy structure.

get a bit of stuff, allocate a small amount of memory, .....
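
in plsql, that pattern looks something like this - a sketch reusing the areas_of_land example from above (the no-op update just stands in for "process it"):

declare
    cursor c is
        select rowid rid
          from areas_of_land
         where aol_id between 100 and 100001;
    type rid_array is table of rowid index by binary_integer;
    l_rids rid_array;
begin
    open c;
    loop
        fetch c bulk collect into l_rids limit 500;   -- get a bit
        forall i in 1 .. l_rids.count                 -- work a bit
            update areas_of_land
               set aol_id = aol_id
             where rowid = l_rids(i);
        exit when c%notfound;
    end loop;
    close c;
end;
/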



A reader, June 21, 2007 - 8:29 am UTC

Tom

I have a test case where I ask session one to wait on a row, then go to session two and update that row 100 times before session one fetches it - and Oracle accurately returns a consistent value as of the start of the query in session one.
The question is: do the rollback segments store the SCN as well? That is, does Oracle trawl through all 100 rollback segment entries to find the one matching the SCN it wants and output it (and if it can't find it, then it's an ORA-1555, I guess)?

Thanks

Ravi

drop table t1;

create table t1(a number);

insert into t1 values(1);

insert into t1 values(2);

select * from t1;

Session1:

begin
for i in (select a from t1 order by a asc) loop
dbms_output.put_line(i.a);
dbms_lock.sleep(7);
end loop;
end;

Session2:

DECLARE
m NUMBER;
BEGIN
FOR i IN 1 .. 100
LOOP
UPDATE t1
SET a = a + 1
WHERE a > 1;

COMMIT;
END LOOP;

SELECT COUNT ( * )
INTO m
FROM t1;
END;
Tom Kyte
June 21, 2007 - 11:00 am UTC

think of undo as being a linked list. when you modify a block - we put the before image information into undo (call this U1 - so the current mode block points to U1) and the block points to that undo.

now, if you modify the block again, we put the before image information into undo again and the current mode block points to this (call this U2)

suppose a query was started before either update. It hits this block. We discover "this block is too new and we don't see anything else in the buffer cache that is at least old enough".

So, we take this block and follow the pointer, we find U2. We apply U2 - the block we have now looks just like it did before the second update. We look at it and say "hmm, still too new". Now, we need to rollback this block too - it points to U1 (because the current mode block pointed to U1 after the first update, before the second update).

We roll it back, it is now old enough.


The transaction history is used to walk the undo history to find a block version that predates your query. The scn is not really stored there, but it can be inferred from other undo information.

This is conceptually how it happens, there are edge cases, caveats, shortcuts, lots of stuff - but conceptually - the block is undone bit by bit till we have a version old enough.

Consistent gets

Manju, July 18, 2007 - 8:07 am UTC

Dear Tom,

I have 1000 rows in the emp table. When I execute
select * from emp; the query statistics look like this.

My question is: as per the formula rows/arraysize + total blocks, I should get 81 consistent gets in my case, but I don't see anything in the stats - can you please clarify this?
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
          0  bytes sent via SQL*Net to client
          0  bytes received via SQL*Net from client
          0  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1000  rows processed

SQL> exec show_space('EMP');
Free Blocks.............................1
Total Blocks............................15
Total Bytes.............................122880
Unused Blocks...........................11
Unused Bytes............................90112
Last Used Ext FileId....................220
Last Used Ext BlockId...................1792
Last Used Block.........................4

PL/SQL procedure successfully completed.


Regards
Manju
Tom Kyte
July 18, 2007 - 10:53 am UTC

stop doing things as SYS - just stop it right now. Never use that account for anything other than when it is mandatory to do so (and it almost never is)


sys is special, it is magic, it is why you don't see anything

Consistent gets

Manju, July 18, 2007 - 8:13 am UTC

Dear Tom,

Sorry, a small change: rows/arraysize + (total blocks - unused blocks). We should get 70 consistent gets in our case. Please clarify.

Regards

Manju

consistent gets go up if I use TRIM in where clause

conundrum, July 18, 2007 - 3:09 pm UTC

hi Tom,

I have a query which does 11652 consistent gets and 5977 physical reads.
This query uses a "rule" hint and returns the rows quickly (I only had a small number of rows - 10 - to test with, though).
The explain plan has a bunch of nested loops, with one full table scan in the last nested loop; the others are TABLE ACCESS (BY INDEX ROWID).

The same query without the "rule" hint works, but it does 65582 consistent gets and 7881 physical reads.

Now, when I change the "where" clause and remove TRIM() from one of the conditions, the execution plan no longer shows that TABLE FULL SCAN (which is a "good thing", I guess) but TABLE ACCESS (BY INDEX ROWID) etc., which means the index was used. But instead of the response time going down, or at least staying the same for these 10 rows, the query took a very long time, so I had to kill it. It seems the number of consistent gets grew to enormous proportions while the number of physical reads went down!?
How to explain this strange behavior?

thanks,

Consistent gets

Manju, July 19, 2007 - 12:03 am UTC


Dear Tom,

Thanks for your reply. Is there any reason behind that magic? Can you please explain it to me? That would be a great help.

Regards

Manju.
Tom Kyte
July 19, 2007 - 10:51 am UTC

yes, because we programmed it that way. that is the reason

sys is magic

DO NOT USE IT

it is that simple.

consistent reads in delete

Robert Koltai, July 20, 2007 - 7:07 am UTC

Hi Tom,

I have two environments Budapest and Milano.
Deleting records in Budapest is fast, and is slow in Milano:)

******
Milano SQL TRACE:
******
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1    128.27     127.93          1    7001151      23310        2264
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    128.28     127.93          1    7001151      23310        2264

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE ALARMLOG (cr=7014735 pr=1 pw=0 time=129579424 us)
2264 COUNT STOPKEY (cr=7001120 pr=0 pw=0 time=128253166 us)
2264 NESTED LOOPS (cr=7001120 pr=0 pw=0 time=125607833 us)
2264 VIEW VW_NSO_1 (cr=12152 pr=0 pw=0 time=224828 us)
2264 SORT UNIQUE (cr=12152 pr=0 pw=0 time=215769 us)
2264 COUNT STOPKEY (cr=12152 pr=0 pw=0 time=2093835 us)
2264 FILTER (cr=12152 pr=0 pw=0 time=2091568 us)
2264 NESTED LOOPS OUTER (cr=12152 pr=0 pw=0 time=2089296 us)
2264 NESTED LOOPS OUTER (cr=7624 pr=0 pw=0 time=2032663 us)
2264 PARTITION RANGE ALL PARTITION: 1 2 (cr=3096 pr=0 pw=0 time=1966934 us)
2264 INDEX FULL SCAN PK_ALARMLOG PARTITION: 1 2 (cr=3096 pr=0 pw=0 time=1964636 us)(object id 10022)
0 PARTITION RANGE ALL PARTITION: 1 2 (cr=4528 pr=0 pw=0 time=55810 us)
0 INDEX RANGE SCAN LOG_STATECHANGEDEF_IDX PARTITION: 1 2 (cr=4528 pr=0 pw=0 time=39340 us)(object id 10034)
0 PARTITION RANGE ALL PARTITION: 1 2 (cr=4528 pr=0 pw=0 time=54239 us)
0 INDEX RANGE SCAN LOG_MONITOREDALARM_IDX PARTITION: 1 2 (cr=4528 pr=0 pw=0 time=38572 us)(object id 10030)
2264 PARTITION RANGE ALL PARTITION: 1 2 (cr=6988968 pr=0 pw=0 time=123237059 us)
2264 INDEX FULL SCAN PK_ALARMLOG PARTITION: 1 2 (cr=6988968 pr=0 pw=0 time=123161005 us)(object id 10022)

******
Budapest SQL TRACE:
******
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.31       0.34          0       3000      22373        3095
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.31       0.34          0       3000      22373        3095

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE ALARMLOG (cr=15492 pr=0 pw=0 time=837947 us)
3095 COUNT STOPKEY (cr=6206 pr=0 pw=0 time=73586 us)
3095 HASH JOIN RIGHT SEMI (cr=6206 pr=0 pw=0 time=67435 us)
3095 VIEW VW_NSO_1 (cr=6198 pr=0 pw=0 time=49564 us)
3095 COUNT STOPKEY (cr=6198 pr=0 pw=0 time=49562 us)
3095 FILTER (cr=6198 pr=0 pw=0 time=46467 us)
3095 NESTED LOOPS OUTER (cr=6198 pr=0 pw=0 time=46464 us)
3095 NESTED LOOPS OUTER (cr=3103 pr=0 pw=0 time=24793 us)
3095 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=8 pr=0 pw=0 time=3121 us)
3095 INDEX FULL SCAN PK_ALARMLOG PARTITION: 1 1 (cr=8 pr=0 pw=0 time=17 us)(object id 10022)
0 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=3095 pr=0 pw=0 time=19122 us)
0 INDEX RANGE SCAN LOG_STATECHANGEDEF_IDX PARTITION: 1 1 (cr=3095 pr=0 pw=0 time=12116 us)(object id 10034)
0 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=3095 pr=0 pw=0 time=18884 us)
0 INDEX RANGE SCAN LOG_MONITOREDALARM_IDX PARTITION: 1 1 (cr=3095 pr=0 pw=0 time=11864 us)(object id 10030)
3095 PARTITION RANGE SINGLE PARTITION: 1 1 (cr=8 pr=0 pw=0 time=6241 us)
3095 INDEX FULL SCAN PK_ALARMLOG PARTITION: 1 1 (cr=8 pr=0 pw=0 time=3125 us)(object id 10022)

Some data from dba_indexes and dba_extents for both environments:

INDEX_NAME            BLEVEL  LEAF_BLOCKS  DISTINCT_KEYS  CLUSTERING_FACTOR  NUM_ROWS
-------------------  -------  -----------  -------------  -----------------  --------
MIL: PK_ALARMLOG           2           29           5065               1103      5065
BUD: PK_ALARMLOG           1            5           2095                 66      2095

SEGMENT_NAME         SEGMENT_TYPE          BYTES  BLOCKS  EXTENTS
-------------------  ---------------  ----------  ------  -------
MIL: PK_ALARMLOG     INDEX PARTITION    13631488    1664       13
MIL: PK_ALARMLOG     INDEX PARTITION    25165824    3072       24
BUD: PK_ALARMLOG     INDEX PARTITION     7340032     896        7

Note that both tables contained only few thousand records when the delete was issued.

It seems that the INDEX FULL SCAN on PK_ALARMLOG is the killer, but how come you need 6988968 cr in Milano and only 8 in Budapest?
My guess is that the ALARMLOG table in Milano once contained hundreds of thousands of records.
Q1) Do you think this is the reason?
Q2) Could I have avoided the problem with some kind of storage setting for the index? (I mean to get the unused space freed)
Q3) Can you explain what is the 6988968 cr needed for? If 1 cr is for 1 block (8Kb), then it's ~56GB !!
Q4) Do you advise an index rebuild?

Thanks in advance
R.
Tom Kyte
July 20, 2007 - 8:42 am UTC

are your statistics up to date.

the index is being full scanned in a nested loop (over and over)

consistent reads in delete

Robert Koltai, July 20, 2007 - 10:56 am UTC

WOW, that was a fast reply! And inspiring!!! Thanks!

Yes, I ran GATHER_SCHEMA_STATS right before executing the command and was expecting a new execution plan (I even flushed the shared pool in another try), but the plan is always the same in Milano.

Here's another run with a different number of records:

DELETE FROM alarmLog
 WHERE (ROWNUM <= :"SYS_B_0")
   AND logRecordId IN
       (SELECT DISTINCT logRecordId
          FROM alarmLog TAB1,
               OMCDBSYS.LOG_stateChangeDef TAB2,
               OMCDBSYS.LOG_monitoredAlarm TAB3
         WHERE (ROWNUM <= :"SYS_B_1")
           AND tab1.logRecordId = tab2.stateChangeDefLogRecordId(+)
           AND tab1.logRecordId = tab3.monitoredAlarmLogRecordId(+)
           AND :"SYS_B_2" = :"SYS_B_3")


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1    293.89     300.18        254   15620516      52040        5065
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2    293.89     300.18        254   15620516      52040        5065

Rows Row Source Operation
------- ---------------------------------------------------
0 DELETE ALARMLOG (cr=15689573 pr=254 pw=0 time=308662082 us)
5065 COUNT STOPKEY (cr=15659108 pr=4 pw=0 time=291458740 us)
5065 NESTED LOOPS (cr=15659108 pr=4 pw=0 time=291302036 us)
5065 VIEW VW_NSO_1 (cr=23453 pr=4 pw=0 time=510225 us)
5065 SORT UNIQUE (cr=23453 pr=4 pw=0 time=479833 us)
5065 COUNT STOPKEY (cr=23453 pr=4 pw=0 time=313387 us)
5065 FILTER (cr=23453 pr=4 pw=0 time=308316 us)
5065 NESTED LOOPS OUTER (cr=23453 pr=4 pw=0 time=303238 us)
5065 NESTED LOOPS OUTER (cr=13323 pr=2 pw=0 time=179797 us)
5065 PARTITION RANGE ALL PARTITION: 1 2 (cr=3193 pr=0 pw=0 time=26275 us)
5065 INDEX FAST FULL SCAN PK_ALARMLOG PARTITION: 1 2 (cr=3193 pr=0 pw=0 time=21180 us)(object id 10022)
0 PARTITION RANGE ALL PARTITION: 1 2 (cr=10130 pr=2 pw=0 time=140604 us)
0 INDEX RANGE SCAN LOG_STATECHANGEDEF_IDX PARTITION: 1 2 (cr=10130 pr=2 pw=0 time=103116 us)(object id 10034)
0 PARTITION RANGE ALL PARTITION: 1 2 (cr=10130 pr=2 pw=0 time=117327 us)
0 INDEX RANGE SCAN LOG_MONITOREDALARM_IDX PARTITION: 1 2 (cr=10130 pr=2 pw=0 time=83493 us)(object id 10030)
5065 PARTITION RANGE ALL PARTITION: 1 2 (cr=15635655 pr=0 pw=0 time=287768328 us)
5065 INDEX FULL SCAN PK_ALARMLOG PARTITION: 1 2 (cr=15635655 pr=0 pw=0 time=287600111 us)(object id 10022)

Initially I thought that ONE index full scan took this much time, but from your reply I got the idea to divide cr by the number of records; in three different Milano test cases:

6988968/2264=3087
15635655/5065=3087
19355490/6270=3087

So we have a MAGIC NUMBER here :))
My guess is that all records were indexed by partition P2 of PK_ALARMLOG, which has 3072 blocks.

OK now I also realized that the Budapest version of the plan has HASH JOIN RIGHT SEMI instead of the NESTED LOOPS.

So the remaining question is: why does Oracle choose NESTED LOOPS instead of HASH?
The answer lies in the statistics, I guess. But they're up to date. All other init parameters that may affect the optimizer are the same.

Shall I open an SR or do you have a better idea?
Thanks again!
R.
Tom Kyte
July 20, 2007 - 5:03 pm UTC

do you have cursor sharing on on purpose?????


get the explain plan, compare to the actuals - are the estimated row counts accurate on the database where the plan is "not good"

are they configured the same (init.ora wise)
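
(in 10g, one easy way to compare the estimates to the actuals in a single report - assuming you can re-run the statement in a test session - is:

alter session set statistics_level = all;
-- run the statement, then:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

and compare the E-Rows and A-Rows columns line by line)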

A reader, July 23, 2007 - 8:44 am UTC

Tom,

I bet you have seen questions like the one from the user who ran a trace logged on as SYS. They know enough about consistent gets etc. - do you suspect the question is simply intended to test YOU?

Or do you assume you need to allow that they could have a genuine doubt, even though it's hard to believe there is a valid reason why they would not tell you their userid was SYS?

Thanks
Tom Kyte
July 23, 2007 - 8:50 am UTC

I don't know what you mean here... Not sure where you are trying to go?

consistent reads in delete - Milan problem

Rrobert Koltai, July 23, 2007 - 9:26 am UTC

Hi Tom thanks for your answers!

- Yes, cursor sharing is on on purpose
- init.oras are identical

I'll check the explain plan against the actual one as soon as the Milan colleagues load some data into the DB with the application.

But the real question for You is this:
- the nested loop vs. hash join accounts for a difference in the consistent reads in the 10^3 range (as we saw, the nested loop is executed "number of records" times and the hash join once :))
- The actual difference in the cr is in the 10^6 range(6988968 vs. 8)
So still 10^3 is missing.

=>
****
Why do we need to read 10^3 blocks in Milan when we read only 8 blocks in Budapest for one full scan of PK_ALARMLOG?
Might it have something to do with the size history of the table and its PK? If so, could we avoid this problem somehow? (I mean, could the full scan be affected by having once had a lot of records in PK_ALARMLOG?)
The "consistent reads in one nested loop" (3087) seems to have something to do with the number of blocks of PK_ALARMLOG partition 2. See my first post.
****

Thanks again,
Robert

Alexander the ok, August 08, 2007 - 4:09 pm UTC

Hi Tom,

I'm having a problem with a query that selects from a view. In development it runs well; in production, the major difference that sticks out is consistent gets.

Before I start posting tkprofs, explain plans, and all that stuff, I was hoping you could answer the following question so I can just figure it out myself.

I went to trace the query in prod, and it ran exactly how it does in dev. Why would that happen? If I set tracing off, it runs terribly again. That's what I would like to know first - what could possibly cause that.

Thanks.
Tom Kyte
August 14, 2007 - 10:32 am UTC

bind peeking....

search this site for bind peeking to read lots about it.

sql_trace is like an optimizer parameter - it causes a hard parse the first time, with your bind values (not what binds were used for the "bad" plan - but the "bad" plan IS NOT A BAD PLAN, it just isn't the best plan for your current bind variable!!!!)

here is a "for example", id=1 - use index, id=99 - use full scan. Each plan is BEST for it's respective value, but there is only ONE PLAN in place. Depending on who hard parses first - you either get index or full scan in play:



ops$tkyte%ORA10GR2> create table t as select 99 id, a.* from all_objects a;
Table created.

ops$tkyte%ORA10GR2> update t set id = 1 where rownum = 1;
1 row updated.

ops$tkyte%ORA10GR2> create index t_idx on t(id);
Index created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', method_opt => 'for all indexed columns size 254' );
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> variable n number
ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select count(object_type) from t where id = 1;
------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CP
------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |    14 |     2   (
|   1 |  SORT AGGREGATE              |       |     1 |    14 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T     |     1 |    14 |     2   (
|*  3 |    INDEX RANGE SCAN          | T_IDX |     1 |       |     1   (
------------------------------------------------------------------------

ops$tkyte%ORA10GR2> select count(object_type) from t where id = 99;
------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    14 |   159   (2)| 00:00:0
|   1 |  SORT AGGREGATE    |      |     1 |    14 |            |
|*  2 |   TABLE ACCESS FULL| T    | 49888 |   682K|   159   (2)| 00:00:0
------------------------------------------------------------------------

ops$tkyte%ORA10GR2> set autotrace traceonly statistics;
ops$tkyte%ORA10GR2> exec :n := 99;
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select count(object_type) from t where id = :n;
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        711  consistent gets

ops$tkyte%ORA10GR2> exec :n := 1;
PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select count(object_type) from t where id = :n;
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        711  consistent gets

ops$tkyte%ORA10GR2> alter session set sql_trace=true;
Session altered.

ops$tkyte%ORA10GR2> select count(object_type) from t where id = :n;
Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets

ops$tkyte%ORA10GR2> alter session set sql_trace=false;
Session altered.

ops$tkyte%ORA10GR2> select count(object_type) from t where id = :n;
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        711  consistent gets


interesting!!!

A reader, August 14, 2007 - 11:21 am UTC

Your reply on the bind peeking concept is very interesting. If the explain plan generated for the first value of a bind variable carries over to all other bind values, how do we avoid that? I am interested in an attribute that has very few distinct values.

select * from sales where region = 'North America';
select * from sales where region = 'Europe';

If I have 7 different region values in a SALES table that has more than a million rows, I want the query to use a FULL TABLE SCAN for NORTH AMERICA (constituting more than 60% of the rows) and use an index scan for the region AFRICA.

How to get this happen? Avoid bind variables in this case?

thanks,

Tom Kyte
August 20, 2007 - 11:51 am UTC

you would avoid binding that particular column predicate, yes.

I'll let you have seven slots in the SGA for your sql (if you had 7,000 regions, the answer would be different of course)
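
a sketch of what that looks like - keep binds on everything selective and use a literal only for the seven-value region column (cust_id here is a made-up column, just for illustration):

select * from sales where region = 'North America' and cust_id = :b1;
select * from sales where region = 'Africa'        and cust_id = :b1;

each distinct region literal gets its own cursor - and hence its own plan (full scan for the 60% region, index scan for the tiny ones)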

slots in sga?

A reader, August 20, 2007 - 12:07 pm UTC

I did not understand "slots in the SGA". What does it mean?

Thanks for answering my earlier question.

Tom Kyte
August 22, 2007 - 10:37 am UTC

v$sql is an in memory table, it consumes space in the SGA, that is where sql is cached (amongst other v$ like structures)

each SQL you parse takes some amount of SGA memory, out of the shared pool. It gets its "slot", slice, piece, bit, chunk... of memory.
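
you can see those "slots" - a minimal sketch, assuming the region queries above were just run:

select sql_text, sharable_mem, executions
  from v$sql
 where sql_text like 'select * from sales where region%';

each distinct statement text shows up as its own row, each consuming its own sharable_mem out of the shared pool.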

Small question

A reader, August 28, 2007 - 1:41 pm UTC

Hi,
Below is an excerpt from a trace file. My query is fetching data and taking 5 minutes. I have a small question on the excerpt below - can you tell me, is it fetching any data?
Why is it not giving operation details in this trace file,

like
STAT #1 id=301 cnt=0 pid=300 pos=1 obj=58650 op='TABLE ACCESS BY INDEX ROWID SRCADJ_IDW_TXN_ACTY '
STAT #1 id=302 cnt=0 pid=301 pos=1 obj=58651 op='INDEX RANGE SCAN PK_SRCADJ_IDW_TXN_ACTY '
STAT #1 id=303 cnt=0 pid=229 pos=2 obj=217354 op='INDEX RANGE SCAN IX_INSTN_CNTXT_NUNQ '

screen shot from trace file
===========================

WAIT #1: nam='db file scattered read' ela= 1658 p1=4 p2=12777 p3=16
WAIT #1: nam='db file scattered read' ela= 1469 p1=4 p2=12793 p3=16
WAIT #1: nam='db file scattered read' ela= 1373 p1=4 p2=12825 p3=16
WAIT #1: nam='db file scattered read' ela= 1588 p1=4 p2=12841 p3=16
WAIT #1: nam='db file scattered read' ela= 877 p1=4 p2=12873 p3=10
FETCH #1:c=214800000,e=267706635,p=99513,cr=326241,cu=0,mis=0,r=1,dep=0,og=4,tim=10257239122658
WAIT #1: nam='SQL*Net message from client' ela= 2365 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 7 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=3698,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257239136352
WAIT #1: nam='SQL*Net message from client' ela= 126172 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 12 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=1567,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257239264915
WAIT #1: nam='SQL*Net message from client' ela= 253613 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 7 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=1192,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257239521115
WAIT #1: nam='SQL*Net message from client' ela= 279000 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 5 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=1225,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257239803503
WAIT #1: nam='SQL*Net message from client' ela= 301066 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 4 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=630,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257240106787
WAIT #1: nam='SQL*Net message from client' ela= 343529 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 3 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=779,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257240452356
WAIT #1: nam='SQL*Net message from client' ela= 358273 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 4 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=839,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257240812510
WAIT #1: nam='SQL*Net message from client' ela= 371427 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 4 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=735,p=0,cr=0,cu=0,mis=0,r=15,dep=0,og=4,tim=10257241185629
WAIT #1: nam='SQL*Net message from client' ela= 381607 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 4 p1=1650815232 p2=1 p3=0
FETCH #1:c=0,e=1246,p=0,cr=0,cu=0,mis=0,r=3,dep=0,og=4,tim=10257241569629

Thanks in advance;

Tom Kyte
September 04, 2007 - 3:03 pm UTC

the trace file snippet clearly shows data being fetched - not sure what I'm looking for?


you will not get STAT records until the cursor is CLOSED by the client application - and then only if you are still tracing.

Too slow?

A reader, September 18, 2007 - 4:56 pm UTC

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       38      8.02       7.85          0     135119          0        3625
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       40      8.02       7.85          0     135119          0        3625


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      38        0.00          0.00
  SQL*Net more data to client                   254        0.00          0.02
  SQL*Net message from client                    38        0.00          0.21
********************************************************************************


Question:

1. Doesn't 8 seconds seem very slow for retrieving 3625 rows especially since there aren't any significant wait events?

Thanks

Tom Kyte
September 19, 2007 - 10:20 am UTC

select count(*) from one_hundred_quadrilion_rows;


that query - returns 1 record.
Now, looking at the table name, one might anticipate it taking some amount of time.

the number of records returned from a query is a meaningless metric when evaluating its performance.

i don't understand the comment about the wait events (and the lack thereof), it was all cpu time here. just because something didn't wait for some resource does not mean "it'll be super fast".

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2     17.70      22.86     166872     166956          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     17.70      22.86     166872     166956          0           3


is that too slow? it only gets three records....

select count(*)
from
 big_table union all select count(*) from big_table union all select count(*)
  from big_table


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2     17.70      22.86     166872     166956          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     17.70      22.86     166872     166956          0           3

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 58

Rows     Row Source Operation
-------  ---------------------------------------------------
      3  UNION-ALL  (cr=166956 pr=166872 pw=0 time=11732562 us)
      1   SORT AGGREGATE (cr=55652 pr=55624 pw=0 time=11732542 us)
25000000    INDEX FAST FULL SCAN BIG_TABLE_PK (cr=55652 pr=55624 pw=0 time=25029584 us)(object id 134235)
      1   SORT AGGREGATE (cr=55652 pr=55624 pw=0 time=5563526 us)
25000000    INDEX FAST FULL SCAN BIG_TABLE_PK (cr=55652 pr=55624 pw=0 time=25000354 us)(object id 134235)
      1   SORT AGGREGATE (cr=55652 pr=55624 pw=0 time=5565519 us)
25000000    INDEX FAST FULL SCAN BIG_TABLE_PK (cr=55652 pr=55624 pw=0 time=25000220 us)(object id 134235)


Now, it doesn't look like that was too slow - it processed a TON of data.


You need to look to the details here - you did 135,119 logical IOs, I doubt it took that many blocks to hold 3625 simple records - therefore, there were aggregates - group bys, sorts, joins maybe and lots of stuff done to thousands or millions of rows - and then 3625 were produced from it.

the amount of work performed - that would be meaningful.

buffer cache,

A reader, September 25, 2007 - 3:02 pm UTC

When a query is issued, for example a SELECT statement, the output is returned to the calling user process. The rows that satisfy my query are returned through the database buffer cache.

If my buffer cache can hold 1000 blocks in memory, and I issue the following statements against a table that has a million blocks, how does Oracle work?

select * from big_table;

select count(*) from big_table; --here the output is one row

insert into another_big_table
select * from big_table;

Where are the rows stored after they are processed from the block and before they are sent across?

For example, if I issue a statement like

select * from big_table where name='xyz' and assume there is no index on the "name" column. big_table has more than a million blocks. If my cache can hold 1000 blocks at a time, and the name='xyz' rows are stored in 5000 blocks, where will Oracle store the blocks that satisfy my request? How does it make room for the new blocks coming in from disk?

Thanks,

Tom Kyte
September 26, 2007 - 9:31 pm UTC

... The rows that satisfies my query are returned
through database buffer cache. ....

that premise is wrong, or at the very least, incomplete.

there are times when the buffer cache is not used (parallel query)

and there is always TEMP where data can be paged out

and there is always that

select * from a_PETABYTE_table;

needs only to get enough block buffer space to hold A SINGLE ROW (or whatever your array fetch size is, or one block) in order to answer it. I mean, that full scan - reads a block (we need a block in the cache - sure - get it, use it). Then we need a second block - but the first block - it could be gone, might not be but so what, we don't need it anymore.

You could run with a 2 block buffer cache and answer any query. Might be slow, but it would work - you can only process A BLOCK AT A TIME - you are serial!!!

blocks,

A reader, September 27, 2007 - 10:51 am UTC

Continuing from the answer you provided...
Where will the "read" block go? Say I issue a query that takes 10 minutes to execute - where will the "read" block sit until the query completes its execution?

thanks,

Tom Kyte
September 28, 2007 - 3:49 pm UTC

you need a block. You get the block from the cache and basically copy the contents into your working storage.

It might stay in the block buffer cache for a while or not, it doesn't really matter.

will a low arraysize flood the buffer cache?

A reader, September 27, 2007 - 6:35 pm UTC

Hi

I wonder if a low arraysize such as 1 can flood the database cache?

For example, if a query using arraysize 1 needs to perform 100000 consistent gets, and that query is running in several sessions, would I see waits such as buffer busy waits or cache buffers chains?


Tom Kyte
September 28, 2007 - 5:20 pm UTC

no, it won't.

let us say you have 100,000 consistent gets to fetch 100,000 rows.

Those 100,000 rows are stored 100 rows per block (assume), so there are 1,000 blocks needed.

You could have a buffer cache of TWO BLOCKS and still be able to do that.

when you ask for row one, we get block one out of the buffer cache - we might have to do physical IO to do that, but whatever, we get block one out of the cache - read row one - and send it back to the client. Now, block one is in the cache - or block one has been flushed from cache. Client asks for row two. We get block one out of the cache again (maybe reading it from disk if need be, whatever) and do it all over again.

Your query never needs more than one block to be in the cache at ANY point in time.


will a low arraysize flood the buffer cache?

A reader, September 28, 2007 - 5:39 pm UTC

Hi

From a concurrency view, a low arraysize MIGHT cause contention for blocks, no? If I have 10 sessions each trying to read the same 1000 blocks (as in your example) 100,000 times, I think we would see waits such as buffer busy waits?

By the way, English is not my native language and I have always had problems understanding what pinning a buffer means. To be exact, what does pinning mean? Could you please give a brief explanation? I have searched your site, but all the explanations I have found are about pinning objects in the library cache.

Thanks a lot
Tom Kyte
October 03, 2007 - 1:15 pm UTC

yes, the lower arraysize can lead to a higher number of logical IO's as you get and re-get the same block over and over - which requires latching every time you do that.

they would not really see buffer busy waits, it would be cache buffers chains latching.

but this is not "flooding" - it is latch contention.


if you pin something to the wall, it is stuck to the wall - until you unpin it.

if you pin something in oracle, it is stuck there until you unpin it.

buffer cache,

A reader, October 01, 2007 - 10:56 am UTC

In your previous reply, you mentioned "working storage". What is that working storage called? Does each process have its own working storage? How do we find the size of the working storage? Is it adjustable?

Thanks,

Tom Kyte
October 03, 2007 - 3:16 pm UTC

it is your pga memory - your sort areas, hash areas, storage for plsql variables, cursor pointers, and so on. It is your private session memory.
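
For instance, a quick way to see the current size of that private memory for your own session is to query v$mystat - a minimal sketch, using statistic names as they appear in 9i/10g:

select sn.name, st.value
  from v$mystat st, v$statname sn
 where st.statistic# = sn.statistic#
   and sn.name in ('session pga memory', 'session pga memory max');

With automatic PGA memory management, the overall size of this memory is influenced by the PGA_AGGREGATE_TARGET parameter.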

will a low arraysize flood the buffer cache?

A reader, October 04, 2007 - 4:57 am UTC

Hi Tom

I have done a small test: I changed the arraysize from 15 to 1 and saw the consistent gets increase; however, if I look at x$bh.tch (the buffer touch count) it does not increase.

I was expecting it to increase, since I am reading the same blocks more often than with arraysize 15. How so?

buffers,

A reader, October 04, 2007 - 11:00 am UTC

Thanks for letting me know about the work area. In that case, after the block is processed, will it be sent to the private SQL area before it is delivered to the calling user process?

If the SELECT statement takes 10 minutes to execute, and block 1 is processed at minute 4 and block 10 at minute 5, will these blocks sit in the work area until the 10th minute?

Where would the blocks reside in the case of an INSERT statement?

consistent gets

Tim, October 10, 2007 - 7:43 pm UTC

I have an update statement with a subquery SELECT that consumes 153 secs, and the tkprof shows that it had to visit 463,879 blocks (query=463879) to update 37,864 rows; the number of current blocks is 1,140. The execution plan shows that an index is used to scan the table. How do I reduce the "query" value, as I suspect much of the time is spent processing consistent gets? Thanks
Tom Kyte
October 14, 2007 - 10:00 pm UTC

one might guess that you are updating the indexed columns themselves, and those updates in turn need to walk the indexes...

did you do the tkprof with wait events? it would help you come to the right conclusion as to what you are waiting on.
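
For reference, a minimal sketch of tracing with wait events and formatting the result (the trace file name here is hypothetical):

alter session set events '10046 trace name context forever, level 8';
-- level 8 = include waits, level 12 = waits + binds

-- run the update here...

alter session set events '10046 trace name context off';

-- then, on the server:
-- $ tkprof orcl_ora_12345.trc report.txt sys=no waits=yes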

same plan different hosts cloned environments

Susan, January 04, 2008 - 10:20 am UTC

We have a sql statement that runs with very high consistent gets in our production environment but much lower consistent gets in our staging. I ran an autotrace in both environments and the execution paths are the same. Our staging environment is an exp/imp of our production system. I'm wondering if re-building indexes and/or re-defining the production tables would be something to consider. Any changes need to be done on-line :( Thanks for any suggestions.
Tom Kyte
January 04, 2008 - 12:21 pm UTC

consider this example:

ops$tkyte%ORA10GR2> create table t as select 1 x from dual;

Table created.

ops$tkyte%ORA10GR2> variable x refcursor
ops$tkyte%ORA10GR2> variable y refcursor
ops$tkyte%ORA10GR2> variable z refcursor
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @trace
ops$tkyte%ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA10GR2> exec open :x for select * from t t1;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec open :y for select * from t t2;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> print x

         X
----------
         1

ops$tkyte%ORA10GR2> begin
  2     for i in 1 .. 10000
  3     loop
  4          update t set x = x+1;
  5          commit;
  6     end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec open :z for select * from t t3;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> print y

         X
----------
         1

ops$tkyte%ORA10GR2> print z

         X
----------
     10001


simple one block table, no indexes - very trivial query.... however, tkprof shows:

SELECT * FROM T T1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          4          0           1
********************************************************************************
SELECT * FROM T T2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.03       0.07          0      10003          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.03       0.07          0      10004          0           1
********************************************************************************
SELECT * FROM T T3

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          4          0           1




Your LIO's could be increased due to consistent read (we need to process undo in order to process your query).

What I would do is look at the row source operations in the tkprofs - they can show the number of IO's performed by each step. Look for wide variations between the two systems and see if anything pops out - if all of the extra IO is against a single index - that index *might* be a candidate for a rebuild.

I would NOT reorg the entire schema, no.
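
If the row source operations do point at a single suspect index, one online rebuild (which satisfies the "changes need to be done on-line" constraint) would look something like this - the index name is hypothetical:

alter index my_schema.my_index rebuild online;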

Thanks

Susan, January 04, 2008 - 2:36 pm UTC

Thank you Tom that was most helpful.

Aman...., January 28, 2008 - 10:28 pm UTC

Hi sir,
I am trying to understand consistent gets. Somehow, reading this thread, I think I am getting somewhere, but there are still some doubts I want to ask.
Here is the test table being populated with some data.
DB: 10.2.0.1
OS: RHEL4
begin
for i in 1..10000 loop
insert into test values (i);
end loop;
end;
PL/SQL procedure successfully completed.

SQL> set autot trace stat
SQL> select count(0) from test;



Statistics
----------------------------------------------------------
          5  recursive calls
          0  db block gets
         47  consistent gets
          0  physical reads
          0  redo size
        411  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
SQL> show arraysize
arraysize 15
SQL> select * from test;

10000 rows selected.


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
        714  consistent gets
          0  physical reads
          0  redo size
     176815  bytes sent via SQL*Net to client
       7711  bytes received via SQL*Net from client
        668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      10000  rows processed
SQL> select count(*) from test;


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
         47  consistent gets
          0  physical reads
          0  redo size
        411  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> set autot off
SQL> select blocks from dba_segments  where segment_name='TEST';

    BLOCKS
----------
       640
        24
SQL> select count(*) from test;


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
         47  consistent gets
          0  physical reads
          0  redo size
        411  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> set autot off
SQL> select blocks from dba_segments  where segment_name='TEST';

    BLOCKS
----------
       640
        24
SQL> select blocks,segment_name from dba_segments where segment_name='TEST' and owner='AMAN';

    BLOCKS
----------
SEGMENT_NAME
--------------------------------------------------------------------------------
        24
TEST
SQL> set autot trace stat
SQL> select * from test;

10000 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        690  consistent gets
          0  physical reads
          0  redo size
     176815  bytes sent via SQL*Net to client
       7711  bytes received via SQL*Net from client
        668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      10000  rows processed

SQL> update test set a=1;

10000 rows updated.


Statistics
----------------------------------------------------------
        131  recursive calls
      26401  db block gets
        150  consistent gets
          0  physical reads
    5301360  redo size
        678  bytes sent via SQL*Net to client
        555  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      10000  rows processed

Here are the doubts,
(1) I understood that the 690 consistent gets come from the calculation 10000/15 + 24 = 690. That makes sense. But I am not able to understand why, when I issue an update, there are so many db block gets (26,401) and just 150 consistent gets. Can you please explain how this happens? One guess of mine is that the segment headers are always gotten in current mode, so Oracle is updating the segment header all the time while it is updating the table. But with that guess it somehow appears that about twice as many row updates happened. Please explain why the current gets are so high and why there are just 150 consistent gets?
2) What kind of tuning advice or practice can be derived by looking at the consistent and current gets? In your book you mention that these are the most important statistics to look at in autotrace, but I am not able to make out what advice one can arrive at by looking at them.
Thanks and regards,

Tom Kyte
January 29, 2008 - 6:57 am UTC

1) the consistent gets are just to read the table in consistent read mode "as it exists", when we update a row - we have to get it in current mode and we do that a row at a time, and yes, there are other bits that need to be modified - undo to be created and modified - in addition to the table itself.

2) reducing the consistent gets in general implies better performance.

eg: your query "select * from test", it'll do less logical IO if you increase the array size from 15 to 100.

The goal, in general, is to attempt to reduce logical IO in most cases; logical IO = a lot of work, so less of it means less work.
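
For example, using the numbers from the test above (a 24-block table, 10,000 rows), the arithmetic works out roughly like this:

set arraysize 100
select * from test;
-- expected consistent gets ~ 24 + 10000/100 = 124
-- versus   24 + 10000/15  ~ 690 at the default arraysize of 15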

Aman...., January 28, 2008 - 10:46 pm UTC

Sir ,
you mentioned,
Your query never needs more than one block to be in the cache at ANY point in time.
I didn't understand that. What does this mean?
Regards
Aman....
Tom Kyte
January 29, 2008 - 6:59 am UTC

sorry, I do not know how else to say it.

it means exactly what it says.


Aman...., January 29, 2008 - 10:44 pm UTC

1) the consistent gets are just to read the table in consistent read mode "as it exists", when we update a row - we have to get it in current mode and we do that a row at a time, and yes, there are other bits that need to be modified - undo to be created and modified - in addition to the table itself.
Hi sir,
Thanks a lot for your reply. I guess I understand the point of consistent gets, but I am still not clear about what happened in my UPDATE. There are 10,000 rows and there is also an index on the column, so the number of db block gets being double the row count plus extra makes sense, as the rows are getting modified and Oracle is updating the header one by one for each row modification. But what do the 150 consistent gets describe? These rows were already in the cache, as I didn't flush the buffer cache. I was expecting the same number of consistent gets as for a normal select, or maybe even more. But it dropped way down compared to the normal select. That's what I am not able to understand. Please explain this behaviour.
Thanks and regards.
Aman....
Tom Kyte
January 30, 2008 - 9:47 am UTC

there is undo being read, there is undo being written, there are table blocks being read, table blocks being written, all kinds of activity going on. there are index blocks being read, there are index blocks being written.

reads = consistent gets
writes = current mode gets

lots of stuff going on there when you update, you are hitting more structures than you would with a simple select.

jdbc array size

bakunian, February 06, 2008 - 9:53 am UTC

Tom,

I'm a DBA supporting a Java app - could you tell me how to increase the JDBC array size? Is it something that can be set at the application, session, transaction, or perhaps statement level?
And if you could include a code snippet, that would be great.

Thanks for your time

Optimize query which returns large resultsets

Kamal, March 24, 2008 - 6:42 pm UTC

Tom:
I have this query which returns 154,721 records. When I tested it in sqlplus, the query runs fast, but it takes a long time - around 20 minutes - to show the result set completing on the screen. Is there any way to improve the query completion time?

"25218094 bytes sent via SQL*Net to client"

cat qry.sql
===========
select distinct a.contract_enterprise_name, a.contract_customer_name, a.contract_number,
b.contract_service, a.contract_att_signature_date, a.contract_date_updated,
a.contract_update_driver, a.contract_site, b.contract_gams_chrg_to_acct_id,
a.contract_region, b.contract_currency_code, b.isocntrycode, b.contract_exchange_rate,
a.contract_customer_address, a.contract_customer_state, a.contract_customer_zip_code,
a.contract_customer_country, b.contract_contracted_country, c.contract_customer_site_name,
c.contract_site_address, c.contract_site_state, c.contract_site_zip_code, c.contract_site_country,
d.contract_be_id, d.contract_be_description, d.contract_be_category, d.contract_be_quantity,
d.contract_be_gross_rate, d.contract_be_net_rate, c.contract_site_fixed_rate, d.contract_be_discount
from GLOBAL_CUST_CONTRACTS a, GLOBAL_ISOCOUNTRY b, GLOBAL_CUST_SITE_ADDRESS c,
GLOBAL_CUST_COMP_CONTRACTS d
where a.contract_enterprise_name = 'ADVANCED MICRO DEVICES'
and a.contract_number = b.contract_number and b.contract_number = c.contract_number
and b.contract_service = c.contract_service and b.isocntrycode = c.contract_iso_country_code
and c.contract_number = d.contract_number and c.contract_iso_country_code = d.contract_iso_country_code
and c.contract_service = d.contract_service and c.contract_site_address_number = d.contract_site_address_number
/

23:11:30 SQL> @qry

154721 rows selected.

Elapsed: 00:00:37.77

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
156224 consistent gets
0 physical reads
0 redo size
25218094 bytes sent via SQL*Net to client
851428 bytes received via SQL*Net from client
77362 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
154721 rows processed


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1389 Card=47419 Byte
s=13609253)

1 0 SORT (UNIQUE) (Cost=1389 Card=47419 Bytes=13609253)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'GLOBAL_CUST_COMP_CONTR
ACTS' (Cost=2 Card=261 Bytes=23229)

3 2 NESTED LOOPS (Cost=392 Card=47419 Bytes=13609253)
4 3 HASH JOIN (Cost=30 Card=181 Bytes=35838)
5 4 HASH JOIN (Cost=17 Card=181 Bytes=26788)
6 5 TABLE ACCESS (FULL) OF 'GLOBAL_CUST_CONTRACTS' (
Cost=3 Card=9 Bytes=981)

7 5 TABLE ACCESS (FULL) OF 'GLOBAL_ISOCOUNTRY' (Cost
=13 Card=25753 Bytes=1004367)

8 4 TABLE ACCESS (FULL) OF 'GLOBAL_CUST_SITE_ADDRESS'
(Cost=12 Card=25753 Bytes=1287650)

9 3 INDEX (RANGE SCAN) OF 'DC_GCCC_PK001' (UNIQUE) (Cost
=1 Card=13)



Table row counts are :
=======================
23:37:19 SQL> select count(*) from GLOBAL_CUST_COMP_CONTRACTS ;

COUNT(*)
----------
6733522

Elapsed: 00:00:11.77
23:37:41 SQL> select count(*) from GLOBAL_CUST_CONTRACTS ;

COUNT(*)
----------
1267

Elapsed: 00:00:00.08
23:37:51 SQL> select count(*) from GLOBAL_CUST_SITE_ADDRESS ;

COUNT(*)
----------
25757

Elapsed: 00:00:00.07
23:37:57 SQL> select count(*) from GLOBAL_ISOCOUNTRY ;

COUNT(*)
----------
25757
Tom Kyte
March 24, 2008 - 7:42 pm UTC

154721 - that is a teeny tiny number. tiny.

"25218094 bytes sent via SQL*Net to client" is about 25mb, a small, trivial number. Unless you are on a bad/slow network of course.


you would want a tkprof to see how much time it spent in the database, what it waited on, versus how long it took your client to show you the last row, that would be very helpful.

tkprof report

Kamal, March 24, 2008 - 9:47 pm UTC

Tom:

I ran the same query from a shell script via nohup and it completed in 2 minutes and 23 secs.

I am attaching the tkprof report for the same query.
Can you suggest ways to improve the query response time?

********************************************************************************

alter session SET EVENTS '10046 TRACE NAME CONTEXT FOREVER, LEVEL 12'


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 205

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************

select distinct a.contract_enterprise_name,
a.contract_customer_name, a.contract_number,
b.contract_service, a.contract_att_signature_date,
a.contract_date_updated, a.contract_update_driver,
a.contract_site, b.contract_gams_chrg_to_acct_id,
a.contract_region, b.contract_currency_code,
b.isocntrycode, b.contract_exchange_rate,
a.contract_customer_address, a.contract_customer_state,
a.contract_customer_zip_code,
a.contract_customer_country,
b.contract_contracted_country,
c.contract_customer_site_name, c.contract_site_address,
c.contract_site_state, c.contract_site_zip_code,
c.contract_site_country, d.contract_be_id,
d.contract_be_description, d.contract_be_category,
d.contract_be_quantity, d.contract_be_gross_rate,
d.contract_be_net_rate, c.contract_site_fixed_rate,
d.contract_be_discount from GLOBAL_CUST_CONTRACTS a,
GLOBAL_ISOCOUNTRY b, GLOBAL_CUST_SITE_ADDRESS c,
GLOBAL_CUST_COMP_CONTRACTS d where
a.contract_enterprise_name = 'BOMBARDIER' and
a.contract_number = b.contract_number and
b.contract_number = c.contract_number and b.contract_service =
c.contract_service and b.isocntrycode = c.contract_iso_country_code and
c.contract_number= d.contract_number and
c.contract_iso_country_code =d.contract_iso_country_code and
c.contract_service =d.contract_service and
c.contract_site_address_number =d.contract_site_address_number

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 66764 9.60 10.19 0 134743 0 133525
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 66766 9.62 10.22 0 134743 0 133525

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 205

Rows Row Source Operation
------- ---------------------------------------------------
133525 SORT UNIQUE
133525 TABLE ACCESS BY INDEX ROWID GLOBAL_CUST_COMP_CONTRACTS
133976 NESTED LOOPS
450 HASH JOIN
450 HASH JOIN
25 TABLE ACCESS FULL GLOBAL_CUST_CONTRACTS
25757 TABLE ACCESS FULL GLOBAL_ISOCOUNTRY
25757 TABLE ACCESS FULL GLOBAL_CUST_SITE_ADDRESS
133525 INDEX RANGE SCAN DC_GCCC_PK001 (object id 1633499)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 66764 0.00 0.07
SQL*Net more data to client 1 0.00 0.00
SQL*Net message from client 66764 0.15 122.93
********************************************************************************

alter session SET EVENTS '10046 TRACE NAME CONTEXT OFF'
===========================================================

Indexes on the tables used in the query:
========================================
SQL> l
1 select table_name, index_name, column_name from user_ind_columns where table_name in
2 ('GLOBAL_CUST_CONTRACTS','GLOBAL_ISOCOUNTRY','GLOBAL_CUST_SITE_ADDRESS','GLOBAL_CUST_COMP_CONTRACTS')
3* order by table_name, index_name, column_position
SQL> /

TABLE_NAME INDEX_NAME COLUMN_NAME
------------------------------ ------------------------------ ------------------------------
GLOBAL_CUST_COMP_CONTRACTS DC_GCCC_PK001 CONTRACT_NUMBER
GLOBAL_CUST_COMP_CONTRACTS DC_GCCC_PK001 CONTRACT_SERVICE
GLOBAL_CUST_COMP_CONTRACTS DC_GCCC_PK001 CONTRACT_ISO_COUNTRY_CODE
GLOBAL_CUST_COMP_CONTRACTS DC_GCCC_PK001 CONTRACT_SITE_ADDRESS_NUMBER
GLOBAL_CUST_COMP_CONTRACTS DC_GCCC_PK001 CONTRACT_BE_ID
GLOBAL_CUST_CONTRACTS DC_GCC_PK001 CONTRACT_NUMBER
GLOBAL_CUST_CONTRACTS GLOBAL_CUST_CONTRACTS_IDX1 CONTRACT_ENTERPRISE_NAME
GLOBAL_CUST_SITE_ADDRESS DC_GCSA_PK001 CONTRACT_NUMBER
GLOBAL_CUST_SITE_ADDRESS DC_GCSA_PK001 CONTRACT_SERVICE
GLOBAL_CUST_SITE_ADDRESS DC_GCSA_PK001 CONTRACT_ISO_COUNTRY_CODE
GLOBAL_CUST_SITE_ADDRESS DC_GCSA_PK001 CONTRACT_SITE_ADDRESS_NUMBER
GLOBAL_ISOCOUNTRY DBCL_GIC_IDX001 CONTRACT_NUMBER
GLOBAL_ISOCOUNTRY DC_GI_PK001 CONTRACT_NUMBER
GLOBAL_ISOCOUNTRY DC_GI_PK001 CONTRACT_SERVICE
GLOBAL_ISOCOUNTRY DC_GI_PK001 ISOCNTRYCODE

15 rows selected.
Tom Kyte
March 25, 2008 - 6:36 am UTC

... I ran the same query from shell script in nohup and it completed in 2 minutes and 23 secs. ...

probably you redirected its output as well - so that it wasn't painting the screen - and it is precisely what I said before.

.... versus how long it took your client to show you the last row ....

that query takes 10 seconds to execute in the server (CODE BUTTON IS VERY USEFUL FOR CODE)


call  count    cpu  elapsed    disk    query  current    rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse    1    0.02    0.02      0      0      0      0
Execute    1    0.00    0.00      0      0      0      0
Fetch  66764    9.60    10.19      0  134743      0    133525
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total  66766    9.62    10.22      0  134743      0    133525 



the rest of the time spent processing is in the CLIENT - in the thing that painted the screen.

you see, during the execution of that sql statment:

Elapsed times include waiting on following events:
Event waited on                  Times  Max. Wait Total Waited
----------------------------------------  Waited ---------- ------------
SQL*Net message to client            66764    0.00      0.07
SQL*Net more data to client              1    0.00      0.00
SQL*Net message from client          66764    0.15    122.93


we spent about 2 minutes waiting in the database for the client to tell us to DO SOMETHING (just sitting there)


so, the client is the bottleneck here - the client retrieving and formatting the data.


Try this:

set timing on
set autotrace traceonly statistics
select ..... (your query here)
set autotrace off


you'll find the response time to be likely "much more responsive", as you are not timing the amount of time it takes to paint your screen.

Update single row with a lot of number of current blocks

David, July 03, 2008 - 7:11 pm UTC

Hi Tom,

In our production system we've detected one UPDATE statement that consumes a large number of current blocks. This statement updates only one row.

Our table has only one index (a unique index on a NUMBER column).

Buffer Gets  Executions  Gets per Exec  %Total  CPU Time (s)  Elapsed Time (s)  SQL Id         SQL Module  SQL Text
95,436,804   77,904      1,225.06       36.58   2310.33       3426.40           4rgcxhjz3bmf9  SQL*Plus    UPDATE XXXX.TABLE_0712V3_...

Our update is:

UPDATE XXXX.TABLE_0712V3 SET COL1 = :B3 , COL2 = :B2 WHERE COL3 = :B1

where COL3 is a NUMBER and is the primary key. The table and indexes are analyzed correctly.

The explain plan shows a unique index scan access path. Our table has 8 million rows and 10,000 blocks. Our block size is 32k. The index level is 2. I think this update should consume approximately 3 consistent gets and 1 current block, but we get 1,225 blocks per exec. There are no triggers. There are no snapshot logs. We've validated the bind variable types and they are correct.

Can you explain, or guide me to understand, what could cause this high current block consumption?


Thanks in advance.

Regards


Tom Kyte
July 07, 2008 - 10:47 am UTC

why do you say "current blocks" - I see consistent read blocks sure....


are either col1, col2 indexed?
might there be more than one of these updates happening simultaneously?

Thomas, July 08, 2008 - 7:33 am UTC

Hi Tom,

I don't understand why Oracle gives up the block when the
client has finished filling its array. Why doesn't Oracle
keep the consistent copy of the block for processing it
later when the client requests the next couple of rows?
Is this some kind of memory optimization to give up the
block or are there other reasons for that?

Is my understanding correct that when a block is fetched
from the buffer cache (which might involve a physical read
first) a cursor/session-private copy of the block is made
which is then processed?
Tom Kyte
July 08, 2008 - 9:42 am UTC

we latch it
get what we need
unlatch it

My example here is very, very simple - a single table. Normally there are many, many blocks involved - index blocks we are processing (and there we need to go back to the cache since, as we walk from leaf to leaf, we need to know what the current next leaf is - the pointers change). Same with database blocks and row pieces - it is fairly complex to process this data as it is changing in structure as we are using it.

A reader, July 08, 2008 - 10:14 am UTC

Tom,
Is the reason why Oracle gives up the block between successive fetches ALSO that there is no saying how long it will be between these fetches, or even whether the follow-up request for the block EVER comes at all?

Like, we could have a PL/SQL fetch and then the client just disappears like a comet, say doing other work like calculations or queries, and then comes back to request the remainder of the block?
Tom Kyte
July 08, 2008 - 4:22 pm UTC

I gave a reason above?

My initial example was *simple, trivial, non-complex*, with that simple non-complex example - it would seem that you could keep the block (but you cannot really, it is not as simple as we made it sound)

Now, pretend we were doing something more complex - walking an index structure to retrieve rows...

In an index, each leaf block has a pointer to the next leaf block so when we range scan - we walk along the bottom of the b*tree.

What happens when this complex structure is modified and the next leaf block pointer is altered? You need to know that.

Blocks are very complicated - much more complicated than we make them out to be when talking conceptually. They represent some very complex data structures that are constantly being modified.

Why "consistent gets" changes with query format?

Drew, August 15, 2008 - 3:04 pm UTC

Hi Tom,
We run the same query in two formats but get very different performance. When the "ORDER BY SITE_ID" is on a new line (the first version below), it takes 0.05 seconds to get all the results. However, when the ORDER BY is on the same line as the preceding predicate, it takes over 10 seconds to get the results. What could be going on here and why? Also, the slow version takes over 10 seconds no matter which line the ORDER BY is on.
Thank you for your help.

select * from (
select distinct SITE_ID from LMS_SITE_TOOL where SITE_ID like '~%'
and SITE_ID not in (select SITE_ID from LMS_SITE_TOOL where REGISTRATION = 'lms.rsf.evaluation')  
order by SITE_ID
) where rownum < 100
; 

78 rows selected.

Elapsed: 00:00:00.05

Execution Plan
----------------------------------------------------------
Plan hash value: 1118013707

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                         |   844 | 19412 |  1189   (1)| 00:00:17 |
|   1 |  SORT UNIQUE NOSORT               |                         |   844 | 19412 |  1189   (1)| 00:00:17 |
|   2 |   VIEW                            |                         |   999 | 22977 |  1188   (1)| 00:00:17 |
|*  3 |    COUNT STOPKEY                  |                         |       |       |            |          |
|   4 |     VIEW                          |                         |  9377 |   210K|  1188   (1)| 00:00:17 |
|   5 |      MERGE JOIN ANTI              |                         |  9377 |   558K|  1188   (1)| 00:00:17 |
|*  6 |       INDEX RANGE SCAN            | IE_LMS_SITE_TOOL_SITE |  9493 |   213K|    32   (0)| 00:00:01 |
|*  7 |       SORT UNIQUE                 |                         |   113 |  4294 |  1155   (1)| 00:00:17 |
|*  8 |        TABLE ACCESS BY INDEX ROWID| LMS_SITE_TOOL         |   113 |  4294 |  1154   (1)| 00:00:17 |
|*  9 |         INDEX RANGE SCAN          | IE_LMS_SITE_TOOL_SITE |  9493 |       |    32   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(ROWNUM<1000)
   6 - access("SITE_ID" LIKE '~%')
       filter("SITE_ID" LIKE '~%')
   7 - access("SITE_ID"="SITE_ID")
       filter("SITE_ID"="SITE_ID")
   8 - filter("REGISTRATION"='LMS.rsf.evaluation')
   9 - access("SITE_ID" LIKE '~%')
       filter("SITE_ID" LIKE '~%')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        384  consistent gets
          0  physical reads
          0  redo size
       4553  bytes sent via SQL*Net to client
        547  bytes received via SQL*Net from client
          7  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         78  rows processed

SQL> select distinct * from (select * from (
select SITE_ID from LMS_SITE_TOOL
where SITE_ID like '~%'
and SITE_ID not in (select SITE_ID from LMS_SITE_TOOL where REGISTRATION = 'lms.rsf.evaluation') order by SITE_ID
) where rownum < 1000);   2    3    4    5

78 rows selected.

Elapsed: 00:00:10.62

Execution Plan
----------------------------------------------------------
Plan hash value: 1118013707

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |                         |   844 | 19412 |  1189   (1)| 00:00:17 |
|   1 |  SORT UNIQUE NOSORT               |                         |   844 | 19412 |  1189   (1)| 00:00:17 |
|   2 |   VIEW                            |                         |   999 | 22977 |  1188   (1)| 00:00:17 |
|*  3 |    COUNT STOPKEY                  |                         |       |       |            |          |
|   4 |     VIEW                          |                         |  9377 |   210K|  1188   (1)| 00:00:17 |
|   5 |      MERGE JOIN ANTI              |                         |  9377 |   558K|  1188   (1)| 00:00:17 |
|*  6 |       INDEX RANGE SCAN            | IE_LMS_SITE_TOOL_SITE |  9493 |   213K|    32   (0)| 00:00:01 |
|*  7 |       SORT UNIQUE                 |                         |   113 |  4294 |  1155   (1)| 00:00:17 |
|*  8 |        TABLE ACCESS BY INDEX ROWID| LMS_SITE_TOOL         |   113 |  4294 |  1154   (1)| 00:00:17 |
|*  9 |         INDEX RANGE SCAN          | IE_LMS_SITE_TOOL_SITE |  9493 |       |    32   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter(ROWNUM<1000)
   6 - access("SITE_ID" LIKE '~%')
       filter("SITE_ID" LIKE '~%')
   7 - access("SITE_ID"="SITE_ID")
       filter("SITE_ID"="SITE_ID")
   8 - filter("REGISTRATION"='LMS.rsf.evaluation')
   9 - access("SITE_ID" LIKE '~%')
       filter("SITE_ID" LIKE '~%')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     158207  consistent gets
          0  physical reads
          0  redo size
       4553  bytes sent via SQL*Net to client
        547  bytes received via SQL*Net from client
          7  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         78  rows processed

Tom Kyte
August 20, 2008 - 8:49 am UTC

show us a tkprof.


explain plan "lies", lets see what really is happening.

consistent reads are getting increase over time

Marat, October 06, 2008 - 12:59 am UTC

Dear Tom,
we are facing the following problem: the same query is run from a PL/SQL procedure many times over a long period. At first it goes quite fast, but eventually it slows to an unacceptable time.
I compared two trace files - one from the good period and another from the bad one - and realized that in the bad period the number of consistent gets is 1000 times higher.

--BAD PERIOD
SELECT /*+ some_comment*/ f1,f2,f3 
FROM
 RATE t1 r,t2 R_T WHERE R_T.f1= :B4 AND 1=1 AND :B3 = :B3 
  AND R_T.f2 = R.f2 AND R.f3= :B2 AND R.f2 = 
  :B1 AND R.DATE = (SELECT MAX(DATE) FROM t1 R1 WHERE 
  TRUNC(R1.DATE) <= :B5 AND R1.f2 = R.f2 AND 
  R1.f3 = R.f3 AND R1.f1 = R.f1)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    284      0.21       0.46          0          0          0           0
Fetch      284   1637.08    1598.88          0  193678115          0         283
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      569   1637.29    1599.34          0  193678115          0         283

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 120     (recursive depth: 2)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  latch free                                      8        0.00          0.02
***********************************************************


--GOOD PERIOD
SELECT /*+ some_comment*/ f1,f2,f3 
FROM
 RATE t1 r,t2 R_T WHERE R_T.f1= :B4 AND 1=1 AND :B3 = :B3 
  AND R_T.f2 = R.f2 AND R.f3= :B2 AND R.f2 = 
  :B1 AND R.DATE = (SELECT MAX(DATE) FROM t1 R1 WHERE 
  TRUNC(R1.DATE) <= :B5 AND R1.f2 = R.f2 AND 
  R1.f3 = R.f3 AND R1.f1 = R.f1)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   1524      0.72       0.59          0          0          0           0
Fetch     1524     17.32      16.82          0    1004724          0        1524
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     3049     18.04      17.42          0    1004724          0        1524

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 120     (recursive depth: 2)
***********************************************************


Also, after restarting the database or gathering statistics on one of the query's tables, it runs fast again... for some time.

What is the possible reason? Where should we look?
Thank you.
Tom Kyte
October 06, 2008 - 2:51 pm UTC

give more information - are people modifying this object? And by restarting, do you kick them all out (so that no modifications are made at first, then they start, and then there are more, and so on)?


The tables are rarely modified

Marat, October 07, 2008 - 2:08 am UTC

The tables are rarely modified - just once or twice a day. Also, when the problem occurred there were few users connected to the database.
Tom Kyte
October 08, 2008 - 7:46 pm UTC

is there more than one plan in v$sql for this query?
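
A sketch of that check - the LIKE pattern is whatever identifies your statement, and sql_id assumes 10g or later:

select sql_id, child_number, plan_hash_value, executions, buffer_gets
  from v$sql
 where sql_text like 'SELECT /*+ some_comment*/%';

More than one plan_hash_value for the same statement would point to the plan flipping over time.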

Why is consistent gets so high?

Anil, October 07, 2008 - 6:52 am UTC

Tom,

CREATE TABLE VIVUSER.TR3318 
   ( C1 VARCHAR2(15 BYTE) NOT NULL ENABLE, 
 C2 VARCHAR2(254 BYTE), 
 C3 NUMBER(15,0) NOT NULL ENABLE, 
 C4 VARCHAR2(254 BYTE), 
 C5 VARCHAR2(254 BYTE) NOT NULL ENABLE, 
 C6 NUMBER(15,0) NOT NULL ENABLE, 
 C7 NUMBER(15,0) NOT NULL ENABLE, 
 C8 VARCHAR2(254 BYTE) NOT NULL ENABLE, 
 C501 VARCHAR2(50 BYTE), 
 C502 VARCHAR2(254 BYTE), 
 C503 VARCHAR2(255 BYTE), 
 C504 VARCHAR2(254 BYTE), 
 C505 NUMBER(15,0), 
 C506 NUMBER(15,0), 
 C507 NUMBER(15,0), 
 C508 VARCHAR2(255 BYTE), 
 C509 CLOB
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 40960 NEXT 524288 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE VIVTS 
 LOB (C509) STORE AS (
  TABLESPACE VIVTS ENABLE STORAGE IN ROW CHUNK 8192 PCTVERSION 10
  NOCACHE 
  STORAGE(INITIAL 40960 NEXT 524288 MINEXTENTS 1 MAXEXTENTS 505
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)) ;

--UNIQUE INDEX ON C1

SQL> select count(*) from TR3318;

  COUNT(*)
----------
        13


select blocks 
from user_segments 
where segment_name = 'TR3318'

    BLOCKS
----------
     17340


SELECT C8,COUNT(*) FROM TR3318
WHERE (C7 = 0)
GROUP BY C8
ORDER BY C8

Elapsed: 00:00:03.61

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=18 Card=3 Bytes=60)
   1    0   SORT (GROUP BY) (Cost=18 Card=3 Bytes=60)
   2    1     TABLE ACCESS (FULL) OF 'TR3318' (Cost=2 Card=209 Bytes=4180)


Statistics
-----------------------------------------------------
          0  recursive calls
          0  db block gets
      17323  consistent gets
          0  physical reads
          0  redo size
        249  bytes sent via SQL*Net to client
        234  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

Question: Why are the consistent gets so high? I think this table is frequently deleted from and repopulated (though I have yet to confirm this with the application team), and for that reason the number of blocks consumed is quite high.

Tom Kyte
October 08, 2008 - 9:16 pm UTC

http://asktom.oracle.com/pls/ask/search?p_string=hwm

when you full scan, you full scan to the high water mark.

if you put a billion rows in a table
and delete them all
and full scan the table

it'll full scan the space the one billion rows were in at one point.
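
A minimal sketch of that effect:

create table t as select * from all_objects;

set autotrace traceonly statistics
select count(*) from t;    -- consistent gets ~ number of blocks below the high water mark

delete from t;
commit;

select count(*) from t;    -- about the same consistent gets - the delete did not lower the HWM

truncate table t;          -- truncate DOES reset the high water mark

select count(*) from t;    -- now just a handful of consistent gets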

Consistent gets and CBO

Krishna Rajagopalan, October 20, 2008 - 10:42 am UTC

Hello Mr Tom,
We have recently upgraded our database from 9i to 10g. A query that used to be fast now takes 30 mins to come back with the result. The consistent gets are very high. Here is the statement and the statistics. I have included the number of rows in the SQL statement as comments.

We use the CBO. In 10g, when the query uses an index it takes a long time to return results. But if I eliminate the use of the index, the results come back very fast using a FULL table scan. Basically I want to know: is this an issue due to database setup, or should I tune the query even though the CBO is being used?

SELECT DISTINCT f1.batch_no, f1.batch_no
FROM fund_pool_setup f1 -- has 3000 records
WHERE f1.batch_no <> 0
AND f1.active = 'Y'
AND NOT EXISTS ( SELECT DISTINCT f2.batch_no
FROM fund_pool f2 -- has 420,000 records
WHERE f2.batch_no = f1.batch_no
AND f2.status_code = 'POS')
ORDER BY 2
SQL> /

11 rows selected.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=23 Card=1 Bytes=15)
1 0 SORT (ORDER BY) (Cost=23 Card=1 Bytes=15)
2 1 HASH (UNIQUE) (Cost=22 Card=1 Bytes=15)
3 2 NESTED LOOPS (ANTI) (Cost=21 Card=1 Bytes=15)
4 3 TABLE ACCESS (FULL) OF 'FUND_POOL_SETUP' (TABLE) (Co
st=12 Card=35 Bytes=245)

5 3 TABLE ACCESS (BY INDEX ROWID) OF 'FUND_POOL' (TABLE)
(Cost=1 Card=358584 Bytes=2868672)

6 5 INDEX (RANGE SCAN) OF 'POOL_STATUS_LU_FK_I' (INDEX
) (Cost=1 Card=1)

Statistics
----------------------------------------------------------
43 recursive calls
0 db block gets
26010200 consistent gets
248 physical reads
0 redo size
524 bytes sent via SQL*Net to client
277 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
11 rows processed

We thank you for your help.
Tom Kyte
October 21, 2008 - 12:05 pm UTC

please use the code button to make things readable

are the estimated card= values in the plan close to reality, or far away from what you expect?

consistent gets question

lizhuohua, October 22, 2008 - 6:50 am UTC

Hi Tom:
I'm doing a testing about consistent gets

SQL>select count(*) from t1;

  COUNT(*)
----------
     60295

Elapsed: 00:00:00.00
SQL>select count(*) from t2;

  COUNT(*)
----------
         0

Elapsed: 00:00:00.01
SQL>set autot trace
SQL>select * from t1;

60295 rows selected.

Elapsed: 00:00:00.42

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 61764 |  2111K|    41   (5)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T1   | 61764 |  2111K|    41   (5)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       4169  consistent gets
          0  physical reads
          0  redo size
    1655279  bytes sent via SQL*Net to client
      44701  bytes received via SQL*Net from client
       4021  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      60295  rows processed

SQL>select /*+leading(t1)*/
 *
  from t1, t2
 where t1.userid = t2.userid(+);  2    3    4 

60295 rows selected.

Elapsed: 00:00:00.50

Execution Plan
----------------------------------------------------------
Plan hash value: 1823443478

-----------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 61764 |  2653K|       |   183   (3)| 00:00:03 |
|*  1 |  HASH JOIN OUTER   |      | 61764 |  2653K|  2840K|   183   (3)| 00:00:03 |
|   2 |   TABLE ACCESS FULL| T1   | 61764 |  2111K|       |    41   (5)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| T2   |     1 |     9 |       |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("T1"."USERID"="T2"."USERID"(+))

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
        232  consistent gets
          0  physical reads
          0  redo size
    1884304  bytes sent via SQL*Net to client
      44701  bytes received via SQL*Net from client
       4021  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      60295  rows processed

sql1 (select * from t1) has much higher consistent gets than
sql2 (select /*+ leading(t1) */ * from t1, t2 where t1.userid = t2.userid(+)).
I know this is because sql2 reads data from the work area while sql1 reads data from the buffer cache.
Do you think that sql2 is better than sql1?

If not, I think maybe we will make mistakes when tuning SQL using consistent gets.


Tom Kyte
October 22, 2008 - 8:57 am UTC

the rule is always "in general, reduce logical IO's" - key word being "in general"

lose the hints. Avoid hints. Avoid hints that tell the optimizer what to do, embrace hints that give it more information (all rows, first rows(n)). Avoid hints like use_nl, hash, leading, ordered, whatever...


of course "sql2 is not better", that should be obvious - isn't it? It does a ton more work.


tell me, compare:

select * from t1;
select * from t1 order by some_column;


which one does more work? (answer - the second one, obvious)

which one does less logical IO? (answer probably the second one - it'll full scan the table into temp and sort it)


how do you 'tune' query1? (answer - set arraysize 100, re-run it)


Don't let your common sense go unnoticed in a case like this....

CBO

Krishna Rajagopalan, October 22, 2008 - 10:32 am UTC

Hello Mr Tom,

Thanks for your reply - I reformatted. The CARD values are as expected and the results are from the production database.

We have recently upgraded our database from 9i to 10g. A query that used to be fast now takes 30 mins to come back with the result. The consistent gets are very high. Here is the statement and the statistics. I have included the number of rows in the SQL statement as comments.

We use the CBO. In 10g, when the query uses an index it takes a long time to return results. But if I eliminate the use of the index, the results come back very fast using a FULL table scan. Basically I want to know: is this an issue due to database setup, or should I tune the query even though the CBO is being used?

SELECT DISTINCT f1.batch_no, f1.batch_no
FROM fund_pool_setup f1 -- has 3000 records
WHERE f1.batch_no <> 0
AND f1.active = 'Y'
AND NOT EXISTS ( SELECT DISTINCT f2.batch_no
FROM fund_pool f2 -- has 420,000 records
WHERE f2.batch_no = f1.batch_no
AND f2.status_code = 'POS')
ORDER BY 2
SQL> /

11 rows selected.


Execution Plan
----------------------------------------------------------
  0    SELECT STATEMENT Optimizer=CHOOSE (Cost=23 Card=1 Bytes=15)
  1  0  SORT (ORDER BY) (Cost=23 Card=1 Bytes=15)
  2  1  HASH (UNIQUE) (Cost=22 Card=1 Bytes=15)
  3  2    NESTED LOOPS (ANTI) (Cost=21 Card=1 Bytes=15)
  4  3      TABLE ACCESS (FULL) OF 'FUND_POOL_SETUP' (TABLE) (Co
      st=12 Card=35 Bytes=245)

  5  3      TABLE ACCESS (BY INDEX ROWID) OF 'FUND_POOL' (TABLE)
      (Cost=1 Card=358584 Bytes=2868672)

  6  5      INDEX (RANGE SCAN) OF 'POOL_STATUS_LU_FK_I' (INDEX
      ) (Cost=1 Card=1)

Statistics
----------------------------------------------------------
      43    recursive calls
      0     db block gets
  26010200  consistent gets
    248     physical reads
      0     redo size
    524     bytes sent via SQL*Net to client
    277     bytes received via SQL*Net from client
      2     SQL*Net roundtrips to/from client
      1     sorts (memory)
      0     sorts (disk)
      11    rows processed


We thank you for your help.
Tom Kyte
October 22, 2008 - 4:15 pm UTC

let me see the tkprof - the estimated card= values cannot be accurate, they would not generate 26,010,200 IO's

Thanks

lizhuohua, October 22, 2008 - 9:47 pm UTC

Thank you for your review, Tom.
Thanks and
Best Regards!

CBO and Consistent Gets

Krishna Rajagopalan, October 23, 2008 - 1:14 pm UTC

Mr Tom,

Here is the tkprof that I got.
Thanks for your help
Krishna

********************************************************************************

SELECT 
    DISTINCT f1.batch_no, 
            f1.batch_no
FROM fund_pool_setup f1 -- has 3000 records
WHERE f1.batch_no <> 0
AND f1.active = 'Y'
AND NOT EXISTS ( SELECT DISTINCT f2.batch_no
                 FROM fund_pool f2 -- has 420,000 records
                 WHERE f2.batch_no = f1.batch_no
                 AND f2.status_code = 'POS')
ORDER BY 2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.06       0.07          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3   1344.11    1324.76      17744   24024279          0          23
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5   1344.17    1324.83      17744   24024279          0          23

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 11041  

Rows     Row Source Operation
-------  ---------------------------------------------------
     23  SORT ORDER BY (cr=24024279 pr=17744 pw=0 time=1324761381 us)
     23   HASH UNIQUE (cr=24024279 pr=17744 pw=0 time=1324760906 us)
     23    NESTED LOOPS ANTI (cr=24024279 pr=17744 pw=0 time=1555988526 us)
   2642     TABLE ACCESS FULL FUND_POOL_SETUP (cr=93 pr=94 pw=0 time=3264904 us)
   2619     TABLE ACCESS BY INDEX ROWID FUND_POOL  
            (cr=24024186 pr=17650 pw=0 time=1323702645 us)
487306620      INDEX RANGE SCAN POOL_STATUS_LU_FK_I 
          (cr=2107067 pr=1539 pw=0 time=487555431 us)(object id 69263)

********************************************************************************

alter session set sql_trace=false


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Parsing user id: 11041  



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.06       0.07          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        3   1344.11    1324.76      17744   24024279          0          23
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7   1344.17    1324.83      17744   24024279          0          23

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       16      0.05       0.04          0          0          0           0
Execute    120      0.12       0.18          0          0          0           0
Fetch      153      0.06       0.17         50        402          0         767
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      289      0.23       0.40         50        402          0         767

Misses in library cache during parse: 15
Misses in library cache during execute: 15

    2  user  SQL statements in session.
  120  internal SQL statements in session.
  122  SQL statements in session.

Tom Kyte
October 23, 2008 - 1:53 pm UTC

well, I asked "are the estimated cardinalities close"

You said "yes, they are"
The CARD values are as expected and the results are from production database.

well, I disagree.


It guessed:
   TABLE ACCESS (FULL) OF 'FUND_POOL_SETUP' (TABLE) (Cost=12 Card=35 Bytes=245)

It got:

   2642     TABLE ACCESS FULL FUND_POOL_SETUP (cr=93 pr=94 pw=0 time=3264904 



but here is the real problem:

2619 TABLE ACCESS BY INDEX ROWID FUND_POOL
(cr=24024186 pr=17650 pw=0 time=1323702645 us)
487306620 INDEX RANGE SCAN POOL_STATUS_LU_FK_I
(cr=2107067 pr=1539 pw=0 time=487555431 us)(object id 69263)


It read 487,306,620 rows from the index, but only found 2,619 rows in the table. Classic "index is defined with too few columns"


So, what is the index defined on?
And why isn't it defined on
     WHERE f2.batch_no = f1.batch_no
                 AND f2.status_code = 'POS')

(batch_no, status_code, <anything else you want>)
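
Something along these lines (and this matches the IDX_BATCH_STATUS index the poster ends up creating in the follow-up below):

create index idx_batch_status on fund_pool (batch_no, status_code);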



CBO and Consistent Gets

Krishna Rajagopalan, October 23, 2008 - 5:00 pm UTC

Tom,

I got the point - thank you for the explanation. I should be looking at the trace output more than just the explain plan. I created the index and it solved the problem.

I did the trace again and the output is as follows.

SELECT
    DISTINCT f1.batch_no,
            f1.batch_no
FROM fund_pool_setup f1 -- has 3000 records
WHERE f1.batch_no <> 0
AND f1.active = 'Y'
AND NOT EXISTS ( SELECT DISTINCT f2.batch_no
                 FROM fund_pool f2 -- has 420,000 records
                 WHERE f2.batch_no = f1.batch_no
                 AND f2.status_code = 'POS')
ORDER BY 2

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.05       0.03          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        3      0.29       0.56       1203       5379          0          23
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.34       0.59       1203       5379          0          23

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 4897  

Rows     Row Source Operation
-------  ---------------------------------------------------
     23  SORT ORDER BY (cr=5379 pr=1203 pw=0 time=562055 us)
     23   HASH UNIQUE (cr=5379 pr=1203 pw=0 time=561818 us)
     23    NESTED LOOPS ANTI (cr=5379 pr=1203 pw=0 time=547619 us)
   2642     TABLE ACCESS FULL FUND_POOL_SETUP (cr=93 pr=90 pw=0 time=3640744 us)
   2619     INDEX RANGE SCAN IDX_BATCH_STATUS (cr=5286 pr=1113 pw=0 time=202625 us)
            (object id 990783)

about arraysize

A Reader, January 21, 2009 - 3:29 am UTC

I have a third party package running on one machine where Oracle 8.1.7 client runs; this 3rd party package makes use of 8.1.7 client to connect to the 10gR2 DB server on another machine.

Is there a way I can set the ARRAYSIZE parameter globally for 8.1.7 client so that it applies to all the queries being sent to the 10g DB server by the third party package? The 3rd party package has not given us any parameter settings for doing this.
Tom Kyte
January 21, 2009 - 1:04 pm UTC

how to do an array fetch varies by programming language (eg: plsql is different from oci is different from jdbc is different from pro*c).

And typically, the program must play a part in it.

So, it is doubtful, unless they programmed it in sqlplus...

about arraysize

A Reader, January 22, 2009 - 3:54 am UTC

Thanks Tom.

1. Is there a way I can know what arraysize is being used by this third party package by tracing their session? I intend to use DBMS_SUPPORT package to trace the session.

2. Can I look at FETCH portion of the trace file and by looking at successive values of r=value in the TRC file, is it OK to deduce the ARRAYSIZE being used by the query? For example, if r=10 and r=25 in successive FETCH lines in the TRC file, can I say that the query is using an arraysize of 15?

Tom Kyte
January 22, 2009 - 9:18 am UTC

#2 answered your own #1.
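As an aside, since the database is 10gR2, here is a sketch of tracing another session with DBMS_MONITOR (the successor to DBMS_SUPPORT for this; the sid/serial# values below are placeholders - get the real ones from v$session):

begin
   dbms_monitor.session_trace_enable
   ( session_id => 123,    -- SID from v$session (placeholder)
     serial_num => 4567,   -- SERIAL# from v$session (placeholder)
     waits      => true,
     binds      => false );
end;
/

-- let the package run its queries, then:
begin
   dbms_monitor.session_trace_disable( session_id => 123, serial_num => 4567 );
end;
/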


Arraysize and Joins

Saptarshi, July 18, 2009 - 4:00 am UTC

Hi Tom,

I have been doing some testing with arraysize in 10gR2 and got some unexpected results. Please help me understand the behaviour. Thanks!

I have tables t1 and t2, each having 15,000 rows. As I execute "select * from t1" and tkprof the trace file, I see a significant reduction in consistent gets as the arraysize increases from 10 to 1000 (as expected). However, when I execute "select * from t1,t2 where n1=n2" (doing a hash join), I see a reduction in the number of fetches but the number of consistent gets remains the same.

1. Does it imply that, for joins, arraysize does not have any impact on performance?
2. If 1 is true, using BULK COLLECT - FORALL constructs in PL/SQL whenever there are joins won't give any performance benefit. Is that correct?

Details are given below:
create table t1 (row_num number, n1 number, txt varchar2(400));

create table t2 (row_num number, n2 number, txt varchar2(400));

insert into t1 (select rownum, trunc(rownum/15)+1, rpad(object_name,400,'-') from dba_objects where rownum<=15000)

insert into t2 (select rownum, mod(rownum,15), rpad(object_name,400,'-') from dba_objects where rownum<=15000)

select * from t1,t2 where n1=n2
arraysize 15
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.05          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    13935      3.26       5.75       2577       1865          0      209000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    13937      3.28       5.81       2577       1865          0      209000

arraysize 1000

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.05          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      210      2.54       8.90       2577       1865          0      209000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      212      2.59       8.95       2577       1865          0      209000



Tom Kyte
July 24, 2009 - 9:23 am UTC

1. Does it imply, for joins arraysize does not have any impact on performance?
2. If 1 is true, using BULK COLLECT - FORALL constructs in PL/SQL whenever
there are joins won't give any performance benefit. Is that correct?


ABSOLUTELY NOT, it does not imply that.


The hash join plan basically builds the result set in temp to get the first row.

If the optimizer used a nested loops join (go ahead, set up the indexes and hint it with first rows), you would see a massive reduction.
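(a sketch of that experiment, reusing t1/t2 from above - the index name is made up:)

create index t2_n2_idx on t2(n2);

select /*+ first_rows(1) leading(t1) use_nl(t2) */ *
  from t1, t2
 where n1 = n2;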


or, just do this

select * from t1;
select * from t1 order by 1, 2, 3;


compare those - you'll find arraysize impacts the first but not the second. The second query will read the entire table, sort it in temp, and then return the first row; the second row comes from TEMP - not the buffer cache - hence no logical IO.


Does that mean "array size is meaningless for the hash join" - no, absolutely not. You cut down on the round trips, you cut down on the work performed on the server to retrieve from your temp space (which could be on disk or in your uga).


Always set your array fetch size appropriately, recognize that one thing it MIGHT be able to do is reduce logical IO, but it will always reduce the overall work performed.
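And for the PL/SQL side of your question - array fetching there still pays off for the same reason: fewer calls, less per-call work. A minimal sketch of BULK COLLECT with LIMIT against the t1 table above:

declare
   cursor c is select * from t1;
   type t1_array is table of t1%rowtype;
   l_rows t1_array;
begin
   open c;
   loop
      fetch c bulk collect into l_rows limit 100;  -- 100 rows per fetch call
      exit when l_rows.count = 0;
      -- process l_rows here
   end loop;
   close c;
end;
/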

Thanks

Saptarshi, July 27, 2009 - 8:28 am UTC

Your detailed explanation helped me to relate the test results with my understanding. Many thanks - Saptarshi

high vs low consistent gets,

A reader, January 18, 2010 - 4:28 pm UTC

From a performance point of view, I have been told by Oracle experts (at the Hotsos symposium) to choose the SQL that generates the least amount of consistent gets.

In my example below, the query with the least number of consistent gets took 6-7 times longer to execute.

SQL> l
  1  select count(*) from
  2  (
  3  SELECT /*+ index_ffs(a,sca_pk) */ TEST_RUN_KEY TABLE_KEY, COUNT(1) KEY_COUNT FROM SUBDIE_CLASS_ANALOG PARTITION (SCA_P1179) a
  4* GROUP BY TEST_RUN_KEY)

  COUNT(*)
----------
     70224

Elapsed: 00:02:24.12

Statistics
----------------------------------------------------------
         44  recursive calls
          0  db block gets
    2241401  consistent gets
    2002925  physical reads
          0  redo size
        517  bytes sent via SQL*Net to client
        487  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed


However, in the next similar query (took out the hints), I have the following numbers:

SQL> l
  1  select count(*) from
  2  (
  3  SELECT TEST_RUN_KEY TABLE_KEY, COUNT(1) KEY_COUNT FROM SUBDIE_CLASS_ANALOG PARTITION (SCA_P1179) a
  4* GROUP BY TEST_RUN_KEY)

 COUNT(*)
----------
     70224

Elapsed: 00:14:52.90

Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
    1989854  consistent gets
    1826690  physical reads
          0  redo size
        517  bytes sent via SQL*Net to client
        487  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

How can we justify that advice based on the results above?

Thanks,

Tom Kyte
January 19, 2010 - 4:10 pm UTC

.... From a performance point of view, I have been told by Oracle experts (at the Hotsos
symposium) to choose the SQL that generates the least amount of consistent
gets. ...

let me clarify that, I always say:

"My goal, in general, in tuning a query is to reduce the logical IO's. Reduce the logical IO's and in general the physical IO's take care of themselves"

But then I stress - as I've heard others at that same conference - the "in general, use your common sense as well as that 'rule of thumb' (also known as ROT)"

For example, I have this query:
ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> select * from t;

49964 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       3974  consistent gets
          0  physical reads
          0  redo size
    2529998  bytes sent via SQL*Net to client
      37030  bytes received via SQL*Net from client
       3332  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      49964  rows processed



Now, what if I told you I could rewrite that query to return the data using 20% of the logical IO's. Pretty good right? I can in fact do that:

ops$tkyte%ORA10GR2> select * from t order by 1,2,3,4,5,6,7,8,9,10;

49964 rows selected.


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
        768  consistent gets
          0  physical reads
          0  redo size
    2877626  bytes sent via SQL*Net to client
      37030  bytes received via SQL*Net from client
       3332  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      49964  rows processed




but now tell me - did I tune that query? No, I didn't in this case. I reduced the logical IO by forcing the query to read the entire set of data and sort it in temp; we don't do logical IO against temp - hence we reduced it. The right way to tune that query?


<b>
ops$tkyte%ORA10GR2> set arraysize 500
</b>ops$tkyte%ORA10GR2> select * from t;

49964 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        789  consistent gets
          0  physical reads
          0  redo size
    2119661  bytes sent via SQL*Net to client
       1489  bytes received via SQL*Net from client
        101  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      49964  rows processed


This is why I'm not too afraid (yet) of being replaced by software... You still need to think - to visualize what is going to happen when you do something.


The goal IN GENERAL is to reduce logical IO's (without introducing something even worse!)




Now, in your case, I would say.... TWO THINGS

a) never never never use select count(*) from (query to be tuned goes here);

NEVER. You cannot tell what the performance of "query to be tuned" will be that way. As soon as you introduce the count(*) - you allow us to drop columns from the query. We might not have to go to the table at all; we can use a completely different plan than "query to be tuned" without the count would use. Use

SQL> set autotrace traceonly

to do this in the future, never never never count(*)


b) your second query is better and if you ran it again, it would be a lot different. I bet you have a cooked file system (buffered by the OS). The first query got part of its answer from the file system cache and really flooded the cache out with its 2+ million block reads. The second query did all true physical IO.

Search this site for

"secondary sga"

to see what I mean. The second query is far superior to the first. Far superior. But if you run the first query, because it is so large, it blows out the cache - warping the answer for the second query.

consistent gets and sorts in memory,

A reader, January 21, 2010 - 2:01 pm UTC

I agree with you regarding the memory sorts, which are not counted as part of the logical IOs. I executed the same query again (by the way, the count(*) was my addition - it doesn't exist in the actual code) and have the details below:

SQL> SELECT TEST_RUN_KEY TABLE_KEY, COUNT(1) KEY_COUNT FROM SUBDIE_CLASS_ANALOG PARTITION (SCA_P1179) a GROUP BY TEST_RUN_KEY;

71149 rows selected.

Elapsed: 00:15:21.81

Execution Plan
----------------------------------------------------------
Plan hash value: 1206646978

---------------------------------------------------------------------------------
| Id  | Operation              | Name   | Rows  | Bytes | Cost  | Pstart| Pstop |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |        |  1686 | 10116 |     1 |       |       |
|   1 |  PARTITION RANGE SINGLE|        |  1686 | 10116 |     1 |   346 |   346 |
|   2 |   SORT GROUP BY NOSORT |        |  1686 | 10116 |     1 |       |       |
|   3 |    INDEX FULL SCAN     | SCA_PK |    33M|   191M|     1 |   346 |   346 |
---------------------------------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
    2001219  consistent gets
    1843935  physical reads
          0  redo size
    1653845  bytes sent via SQL*Net to client
      52660  bytes received via SQL*Net from client
       4745  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      71149  rows processed


I ran the above query again and the statistics were exactly the same as above.

Now I executed the same query with INDEX FFS hint:

SELECT /*+ index_ffs(a,sca_pk) */ TEST_RUN_KEY TABLE_KEY, COUNT(1) KEY_COUNT FROM SUBDIE_CLASS_ANALOG PARTITION (SCA_P1179) a GROUP BY TEST_RUN_KEY

Elapsed: 00:02:14.14

Execution Plan
----------------------------------------------------------
Plan hash value: 3158401961

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name     | Rows  | Bytes | Cost  | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |          |  1686 | 10116 | 36060 |       |       |        |      |            |
|   1 |  PX COORDINATOR             |          |       |       |       |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)       | :TQ10001 |  1686 | 10116 | 36060 |       |       |  Q1,01 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY            |          |  1686 | 10116 | 36060 |       |       |  Q1,01 | PCWP |            |
|   4 |     PX RECEIVE              |          |  1686 | 10116 | 36060 |       |       |  Q1,01 | PCWP |            |
|   5 |      PX SEND HASH           | :TQ10000 |  1686 | 10116 | 36060 |       |       |  Q1,00 | P->P | HASH       |
|   6 |       HASH GROUP BY         |          |  1686 | 10116 | 36060 |       |       |  Q1,00 | PCWP |            |
|   7 |        PX BLOCK ITERATOR    |          |    33M|   191M|     2 |   346 |   346 |  Q1,00 | PCWC |            |
|   8 |         INDEX FAST FULL SCAN| SCA_PK   |    33M|   191M|     2 |   346 |   346 |  Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)


Statistics
----------------------------------------------------------
         68  recursive calls
          3  db block gets
    2248772  consistent gets
    2009592  physical reads
        672  redo size
    1653845  bytes sent via SQL*Net to client
      52660  bytes received via SQL*Net from client
       4745  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
      71149  rows processed


So my query with the INDEX FFS consumes more logical IOs and also does some amount of sorting in memory. The query with the INDEX FULL SCAN doesn't sort in memory.

Is that the reason why my query with the INDEX FFS runs faster than the one with the INDEX FULL SCAN?

About our file system, we use ASM.  I am not sure what you mean by cooked file system.  The disks behind the ASM diskgroup are pretty good (XP24K from HP).  

Thanks,

Tom Kyte
January 25, 2010 - 1:36 am UTC

.. Is that the reason why my query with INDEX FFS runs faster than INDEX FULL
SCAN? ...

no, it would be because of multi-block IO and parallel query.


Again, it looks like your stats are botched, the costs are way too low, way way too low. Please take a look at that.

A reader, January 24, 2010 - 3:51 pm UTC

Hi Sir;
Apologies for this easy question.
In your book, there are examples like:

select * from heap single_row where username=:b1

This query does an index range scan and in the explain plan: disk: 0

What I want to ask is: even if it is an index range scan, shouldn't "disk" be greater than 0?
At least Oracle has to retrieve the index blocks from disk into the cache. Am I wrong?
Tom Kyte
January 26, 2010 - 1:49 am UTC

umm, care to share the name of the book and more of the example???

I'm not sure what you are referring to - which book you are looking at - and the context of the example.



disk is physical IO, I would expect disk to be zero many many times - if I got everything from the cache...

gets

A reader, January 26, 2010 - 8:21 pm UTC


Increased number of consistent reads in 11.2 when session_cached_cursors=0

Nenad Noveljic, March 25, 2010 - 3:53 am UTC

Hi Tom,
In 11.2 Oracle does many more consistent reads for the same case when session_cached_cursors=0. Do you have an explanation for this? I have a service request open, but no answer so far.

Test case:
create table t1 (a varchar2(100)) ;

CREATE OR REPLACE PROCEDURE FILL_DUMMY_DATA AS
BEGIN
for i in 1..1000000
loop
insert into t1 values ('dummy data') ;
end loop ;
END FILL_DUMMY_DATA;
/

alter session set tracefile_identifier='testcase5' ;
alter session set events '10046 trace name context forever, level 12' ;

exec fill_dummy_data() ;

alter session set events '10046 trace name context off' ;


TKPROF (excerpt for the insert statement)

session_cached_cursors=50

call        count       cpu    elapsed       disk      query    current        rows
-------  --------  -------- ---------- ---------- ---------- ----------  ----------
Parse           2      0.00       0.00          1          1          0           0
Execute   1009979     52.94     158.48         28       2326    1037446     1009979
Fetch           0      0.00       0.00          0          0          0           0
-------  --------  -------- ---------- ---------- ---------- ----------  ----------
total     1009981     52.94     158.49         29       2327    1037446     1009979

session_cached_cursors=0

call        count       cpu    elapsed       disk      query    current        rows
-------  --------  -------- ---------- ---------- ---------- ----------  ----------
Parse     1009987      9.22       9.40          1          1          0           0
Execute   1009987     57.47      76.14         27    1012318    1037305     1009987
Fetch           0      0.00       0.00          0          0          0           0
-------  --------  -------- ---------- ---------- ---------- ----------  ----------
total     2019974     66.69      85.55         28    1012319    1037305     1009987

Tom Kyte
March 26, 2010 - 11:37 am UTC

see the parse count that went up to 1,000,000 - it accounts for that. You made it so that plsql cannot cache any open cursors - so your code was really like:

for i in 1 .. 1000000
loop
  parse
  bind
  execute
  close
end loop



instead of


for i in 1 .. 1000000
loop
   if i=1 then parse end if
   bind
   execute
end loop



so, you are seeing the deferred parse taking place during the execute.


You'd see this in 9.2.0.4 and above - since session_cached_cursors controls the size of the plsql cursor cache. before that release, open_cursors did.

You should DEFINITELY let session cached cursors be more than zero - absolutely.
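(a sketch - setting it for a session and then watching the cursor cache work via the 'session cursor cache hits' statistic:)

alter session set session_cached_cursors = 50;

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'session cursor cache hits';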

Which process will fill the array of SQL*Plus?

Bix, March 26, 2010 - 12:01 pm UTC

Hi Tom,
I have the below questions/doubts.
1. The server process will invoke the SQL engine to execute the SQL statement issued by the user process (let us say SQL*Plus).

2. The server process will make physical IO if the blocks are not found in the cache.

3. The server process will make consistent gets, i.e. get the rows from the buffer cache and put them in the 'array' for the user process / client?

4. The server process will fetch the value into a PL/SQL variable when a cursor is being run for the PL/SQL engine, and the PL/SQL engine will process the value further.

Are my above points correct? In the Oracle documentation, the below was mentioned - that it is the 'User Process' that makes the logical/physical calls.

Is my understanding wrong?

"
The first time an Oracle user process requires a particular piece of data, it searches for the data in the database buffer cache. If the process finds the data already in the cache (a cache hit), it can read the data directly from memory. If the process cannot find the data in the cache (a cache miss), it must copy the data block from a datafile on disk into a buffer in the cache before accessing the data. Accessing data through a cache hit is faster than data access through a cache miss."


Many thanks for your time
Tom Kyte
March 26, 2010 - 3:43 pm UTC

the user process in that context was the "dedicated or shared server" process.

what you are calling the server process in that case was in fact the user process - the dedicated server

The same process ??

Bix, March 26, 2010 - 3:59 pm UTC

Many thanks Tom for the quick response.
So in both cases - the Oracle documentation's and my understanding - the server process is the 'dedicated' or 'shared' process, am I right? Could you please also clarify if my point is correct, i.e., the server process will store (fetch) the value into the PL/SQL variable when a 'fetch' is executed? Once again many thanks for your help all the way.
Tom Kyte
March 26, 2010 - 5:24 pm UTC

if plsql is running in the server, then yes, that dedicated server or shared server process would be the thing that places the values in plsql variables stored in the sessions UGA memory area.

A reader, March 27, 2010 - 3:12 pm UTC

Hello Sir,

Your reply to Original Poster (Nenad Noveljic from Switzerland ) regarding session_cached_cursors

Test case:
create table t1 (a varchar2(100)) ;
CREATE OR REPLACE PROCEDURE FILL_DUMMY_DATA AS
BEGIN
for i in 1..1000000 loop insert into t1 values ('dummy data') ;
end loop ;
END FILL_DUMMY_DATA;
/

My Question is
(1) In the above test case there is no explicit close of the cursor, so why does Oracle not cache it?

(2) It's said that SQL within PL/SQL is always cached -
so when setting session_cached_cursors=0, are those parses soft parses? And by setting session_cached_cursors=50, does Oracle do a 'softer' soft parse?

Thanks





Tom Kyte
April 05, 2010 - 9:04 am UTC

(1) because you are using implicit cursors - whereby the open and close are done implicitly. Why didn't you ask about the open? we didn't open the cursor - so how would we be able to execute it? Open and close are IMPLICIT with IMPLICIT cursors.

Oracle caches implicit cursors transparently for you - using session_cached_cursors to dictate how many cursors to cache open. If you set it to zero, you disable this feature (note: before 9.2.0.4 of Oracle it was open_cursors that set this limit, you could not disable it in the past)

(2) if it has been said to you, then you know now that you were not told the truth.

If you set session cached cursors to zero, you will lose the implicit cursor caching - it is that simple - and the review you refer to clearly demonstrates it.

consistent gets

bix, April 24, 2010 - 6:58 am UTC

Hi Tom,
To understand consistent gets / physical reads in more detail -
I want to know how a SQL statement is executed (I know how it works logically (parse, bind, CBO stuff, execute) but I am interested in how the physical implementation of the execution goes).

1. select * from emp (full scan)

Assume the emp table is stored in 10 blocks and no block of the emp table is in the buffer cache.

1. The server process loads the first block into the buffer cache
2. The cursor points to the first row
3. The server process fetches the rows (pointed to by the cursor) into the client's array (depending on the size of the array) until all the rows from the block are finished
4. The server process leaves the block and goes back to the database to get the second block

This will continue till all the blocks are fetched from the database.

* At any point in time the server process will work on only one block; it will not load ALL the blocks into the buffer cache at once. It will load the first block, fetch the rows into the array, leave it, then go to the database and get the second block - right??


2. select emp_name from emp where emp_no=20 (index scan)

Are ALL the index blocks loaded into memory? Can you please explain how it works?
(I know logically how it works - it will scan the index to find the rowid for empno=20, and then it will get the data block for that rowid - but I want to know how it actually, physically works in the instance.)



3. select emp_name, dept_name from emp e, dept d where e.dept_no=d.dept_no (JOIN - let's say NESTED LOOPS)?
Can you please explain how it works in the instance?


I have read the concepts guide - it mostly explains how things work logically (I mean - for every row in emp, Oracle will search for the match in dept and return the row - but I want to know 'exactly' how the SERVER PROCESS WILL WORK ON 'BLOCKS' in the case of nested loops?)


Many thanks for your help and time all the way...


Regards
Bix

Tom Kyte
April 26, 2010 - 8:19 am UTC

1) select * from emp

that will typically employ MULTI-BLOCK reads - we won't read a block at a time, we use the db file multi block read count to figure out what to read.

We would figure out we want to read N blocks at a time (suppose N is 32).

We would figure out which, if any of the first 32 blocks are in cache - suppose we found the 10th block in the cache. We'll now issue a multi-block read for blocks 1-9 and scatter them into the buffer cache - we are now ready to process blocks 1-10. When we hit a block not in the cache - we'll do it all over again and again using multiblock IO until we read the entire table ( as long as the client keeps asking us to fetch of course )


So, no, you were technically wrong when you say it will work only on one block - it uses multi block io (sometimes) and single block io (sometimes)




2) where empno=20 - index range scan.

It would read index root
index branch/branch (as many as it needs to get to bottom of index)
index leaf -> get rowid for empno=20
table block


So, typically 4 blocks - three via index, one to table.



3) time to pick up effective Oracle by design, I walk through how statements are processed in a chapter.

It'll use single block IO and multiblock IO in that case, because I expect the plan would be:

nested loop
   full scan dept
   index range scan empno_idx
      table access by index rowid emp


so, it would read dept using multiblock IO and then do a range scan for each row found in dept into the emp table.
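(to see the plan the optimizer actually picks on your system, a sketch using explain plan and dbms_xplan against the classic EMP/DEPT tables:)

explain plan for
select e.ename, d.dname
  from emp e, dept d
 where e.deptno = d.deptno;

select * from table(dbms_xplan.display);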


Bix, April 26, 2010 - 2:05 pm UTC

You were telling me more about 'getting the blocks' into the SGA - I was talking about how it (the full scan / index scan) happens after getting the blocks into the SGA.


""So, no, you were technically wrong when you say it will work only on one block - it uses multi block io (sometimes) and single block io (sometimes0"

I was mentioning - e.g., if the size of the array is 15 - the server process will touch the first block and fetch 15 rows into the array; in the second fetch, it will go back to the block and get more... in this sense I said the server process will work on one block at a time... Am I wrong here?


Regards
Bix



Tom Kyte
April 26, 2010 - 2:19 pm UTC

if we are getting rows out of the cache, sure.


but remember, you wrote:

1.Server process loads first block into bufer cache


which means - you too were talking about getting blocks into the buffer cache, you started with "assume blocks are not in the cache" in fact :)

A reader, April 26, 2010 - 2:34 pm UTC

yes,you are correct Tom :)

One more point: fetching into the array out of the cache - will it start only after reading ALL the blocks of the table into the buffer cache?? (OR) does the server process get a few blocks (db_file_multiblock_read_count worth), fetch the rows into the array from one block after the other, and - after fetching all the rows from all the blocks in the buffer cache - go back to the database to get the next set of blocks of the table (a second multiblock read)?

1. Get all the blocks of the table into the SGA using multiblock I/O
2. Then start fetching ..

(OR)

1. Get the first set of blocks (one multiblock I/O call)
2. Fetch into the array, leave the blocks
3. Get the second set of blocks (a second I/O) from the database into the SGA
4. Fetch into the array

...

Which is the correct process?




Thanks
Bix

Tom Kyte
April 26, 2010 - 3:44 pm UTC

it'll get the multiblock read count blocks into the cache, then logical IO them out one at a time, getting JUST what it needs to get in order to return data to the client.


A reader, April 27, 2010 - 9:57 am UTC

Sir,

Physical I/O --------------> OS call, copy the block(s) into the SGA, etc.

Logical I/O ---------------> an Oracle program call; we read the block and get the required rows into our variables (or) data structures (sort areas) which are used to store the data.

Is my understanding correct?

Tom Kyte
April 27, 2010 - 10:13 am UTC

well, a logical IO might well lead to a physical IO.

a logical IO is a read from the cache, a physical IO is a read from disk. if we do a logical IO and don't find the block in cache, we'll do a physical IO to put it there.

A reader, April 27, 2010 - 10:25 am UTC

That's ok Tom - I was trying to understand how we manage the data in the sort areas of the PGA. I mean - we do logical I/Os and get the required rows from the blocks into data structures in the PGA to sort or do any aggregation, correct?

Tom Kyte
April 27, 2010 - 11:54 am UTC

we might use logical IO
we might use direct reads bypassing the buffer cache
we might be using other information stored in our pga or in temp (if we have to resort some data already sorted for example)
we might.....


the possibilities are limitless - the answer to almost any "do we" is probably "yes"



select owner, count(*) from t group by owner order by 2;


that would probably have a plan like:

ops$tkyte%ORA10GR2> select owner, count(*) from t group by owner order by 2;

Execution Plan
----------------------------------------------------------
Plan hash value: 81261667

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     1 |    17 |     4  (50)| 00:00:01 |
|   1 |  SORT ORDER BY      |      |     1 |    17 |     4  (50)| 00:00:01 |
|   2 |   HASH GROUP BY     |      |     1 |    17 |     4  (50)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| T    |     1 |    17 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------



the full scan might read from the buffer cache, it might use direct IO to disk and bypass the buffer cache (after doing a segment level checkpoint). So the answer to a bit of this is:

it depends, we might read from the buffer cache, we might read direct from disk - into our PGA.


Then we have the hash group by step. That'll be done in a hash workarea in the pga. If all fits in memory, it'll be entirely in the pga and the rows will go from logical ios or physical ios into our hash algorithm and then aggregated in the pga (spilling to disk if need be)

and then there is the order by. the order by will be using as a row source the temporary data that lives in the pga or on disk and will sort into the pga and or disk as needed.

So, it depends.

A reader, April 27, 2010 - 2:02 pm UTC

Hi Tom,

1.The below is from concepts guide (11g) ,Can you please explain this in bit more detail ?

* A parse call by the application associates a SQL statement with a private SQL area. After a statement has been associated with a private SQL area, it can be run repeatedly without your application making a parse call.
* A parse operation by Oracle Database allocates a shared SQL area for a SQL statement. Once a shared SQL area has been allocated for a statement, it can be run repeatedly without being reparsed.


Who actually makes the 'parse call', and at what stage is it made in the below simple pl/sql block?


declare
  cursor c is select * from emp order by emp_no desc;
  type emp_array is table of emp%rowtype;
  l_array emp_array;
begin
  open c;
  fetch c bulk collect into l_array;  -- array fetch, all rows in one call
  close c;
end;


2. A cursor is a pointer created on the client side - when is it created? Is it at the 'open c' stage?

3. For a query - we get the rows into the PGA before giving them to the client, but for an update statement - we won't get any rows into the PGA - correct?
Tom Kyte
April 27, 2010 - 2:13 pm UTC

applications make parse calls to the database, the database will parse a statement if it needs to in response to that parse request by the client.


In plsql, we cache statements for you - so during the FIRST execution of that block of code in a SESSION - you will:

parse cursor C during the OPEN call. It will be either a hard or soft parse, we don't know - the server takes care of that.

and during SUBSEQUENT executions of that block of code in a single SESSION - we will probably:

not parse anything, plsql will attempt to keep cursor c open for us.


If that code were in a client application, you would tend to parse EVERY TIME that code was executed.



2) a cursor has a client side bit and a server side bit. The client cursor is created when the client 'opens' a cursor - in this case, it would probably be during the open call for plsql. In a 3gl - it would be when they prepare a statement typically - depends on the language what the API call is called.

3) updates have to read data too, that update has a query, that is done like a query is done. You'll do logical IO's to read blocks into the pga to find the rows you want to update and then you'll do logical IO's in current mode to actually MODIFY the block.

So in reality, an update will read more times into the pga - but you know what, it really isn't relevant to us - what will this knowledge gain you?


Bix - I would suggest you concentrate on mastering SQL, learning PLSQL the language inside and out, understanding how transactions really work, what savepoints do, how to handle exceptions - etc. The basics. This stuff you are asking about a lot recently here - it won't do much for you. Mastering SQL beyond what anyone else has where you work - that'll be something truly useful...

A reader, April 27, 2010 - 2:46 pm UTC

Many thanks Tom. I am a bit new to Oracle, trying to learn, reading books (your books mainly). I know I troubled you a lot with my questions, but with your explanations I really got 90% of the picture of what happens when a SQL statement is executed in Oracle. I came to know many things from your explanations which I did not find in any book.

I think I should first draw a line as to what extent I should learn a concept. The main reason for these questions is that I always try to understand things practically, e.g., to understand how an 'update' statement works - I start with a PL/SQL block with an update statement in it, then assume a cursor will be opened for that... then I will think about what a cursor is - is it a pointer? If it is a pointer, how does it relate to the server process?? etc... I don't know if these are really required for me...

I am not a master of SQL but I have some knowledge of SQL and PL/SQL. I have to admit that I wrote some programs in PL/SQL - wrote cursors, used collections - without knowing how they work in Oracle. I was taking a salary each month from my company but felt I was not doing 'justice' to my company with my code, though it was working at the time - and there was no self satisfaction for me... that's why I wanted to know exactly what happens internally. You may wonder - till last month I was under the impression that a 'cursor' is a purely database concept and that sqlplus is a 'language' of sorts... this was my knowledge till last month (I have more than 1 yr experience in PL/SQL programming). Now I have got some idea, and I can understand most of the concepts in Oracle easily - MANY MANY MANY THANKS TO YOU.. you are really 'god' for oracle learners... Extremely sorry once again for all my questions...

thanks

doubt about additional IOs

Aman...., August 03, 2010 - 1:03 pm UTC

Hi sir,

This ran in 11.2 on Windows XP.
SQL> desc t2
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(30)
 OBJECT_NAME                                        VARCHAR2(128)
 SUBOBJECT_NAME                                     VARCHAR2(30)
 OBJECT_ID                                          NUMBER
 DATA_OBJECT_ID                                     NUMBER
 OBJECT_TYPE                                        VARCHAR2(19)
 CREATED                                            DATE
 LAST_DDL_TIME                                      DATE
 TIMESTAMP                                          VARCHAR2(19)
 STATUS                                             VARCHAR2(7)
 TEMPORARY                                          VARCHAR2(1)
 GENERATED                                          VARCHAR2(1)
 SECONDARY                                          VARCHAR2(1)
 NAMESPACE                                          NUMBER
 EDITION_NAME                                       VARCHAR2(30)

SQL> set arraysize 10
SQL> exec show_space('T2');
Free Blocks.............................               0
Total Blocks............................           1,152
Total Bytes.............................       9,437,184
Total MBytes............................               9
Unused Blocks...........................             127
Unused Bytes............................       1,040,384
Last Used Ext FileId....................               1
Last Used Ext BlockId...................          89,984
Last Used Block.........................               1

PL/SQL procedure successfully completed.

SQL> set autot trace stat
SQL> select * from t2;

71996 rows selected.


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       8133  consistent gets
          0  physical reads
          0  redo size
    8296723  bytes sent via SQL*Net to client
      79609  bytes received via SQL*Net from client
       7201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      71996  rows processed

SQL> select count(*) from t2;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1029  consistent gets
          0  physical reads
          0  redo size
        424  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>


My doubt is: why are there extra logical IOs reported for the count(*) (there are 4 additional IOs)? If this is due to reading the segment header block (just an idea) to get to know the extent map, the same should be applicable to the normal select as well. So why, in a normal select, do the IOs not vary much - only in the count?

Regards
Tom Kyte
August 03, 2010 - 1:19 pm UTC

the extra blocks (1029 instead of 1025) are the segment information pretty much - yes.


... the same should be applicable to the normal select as well. So why, in a normal select, do the IOs not vary much - only in the count? ...

the same is applicable - it only does that once (we only need to peek at the segment header once to figure out what to read - not for every fetch).

Still not clear....

Aman...., August 03, 2010 - 8:51 pm UTC

>>he extra blocks (1029 instead of 1025) are the esgment information pretty much - yes.

That means, about 4 IOs went to find the segment information from the segment header and may be the bitmap block maps or sort of like that, right? Sine there is no recursive call happening, it means that these extra IOs can't be contributed to the dictionary lookups.


>>the same is applicable - it only does that once (we only need to peek at the segment >>header once to figure out what to read - not for every fetch).

I am sorry but I didn't get it. So for a select, the information is gathered about what to read and afterwards, the information is fetched again then why it is done for the aggregate like count? If for a select,the segment information can be captured in a single IO(since the IO count doesn't increase in it beyond the number of blocks used), why not the same happens for count(*) ?

Thanks and regards

Tom Kyte
August 04, 2010 - 7:29 am UTC

... Since there
is no recursive call happening, it means that these extra IOs can't be
attributed to dictionary lookups.
...

i didn't say they were, they were to figure out what to full scan.


ops$tkyte%ORA11GR2> create table t ( x int ) segment creation immediate;

Table created.

ops$tkyte%ORA11GR2> select * from t;

no rows selected

ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select * from t;

no rows selected


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets<b>
          3  consistent gets</b>
          0  physical reads
          0  redo size
        281  bytes sent via SQL*Net to client
        409  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed




totally empty table, nothing in it - yet we did IO's to figure that out.





The segment information is read once per query execution, not once per fetch per query execution. Each of your queries had the same "startup overhead" - after that - they did what they had to do.

I'm not sure what you are asking at all anymore here - it is not very clear. Given the original question way above (the very puzzling question) and the answer - it should be clear why the fetch of every row takes more IO's (the getting and re-getting of blocks across fetches - NOT segment information, but actual table blocks) than the count(*), which doesn't have to get and re-get blocks over and over.
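Back of the envelope, using the numbers above: the table is about 1,025 used blocks (the count(*) did 1,029 gets, including the segment overhead). The select * fetched 71,996 rows with an arraysize of 10 - roughly 7,200 fetch calls - and every fetch that resumes in the middle of a block has to re-get that block. So something near 1,025 + 7,200, about 8,200 gets, is what you'd expect; the observed 8,133 is right in that ballpark (a fetch that happens to start on a fresh block needs no re-get, which is why it comes in a bit under).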

Follow Up on ARRAYSIZE

Trond, January 20, 2011 - 10:32 am UTC

Thanks Tom,

I have a quick follow up question on array size. I am tracing some application functionality and came across the famous FETCH r=1 over and over again. So the application has somehow set the array size to 1 and the effect is thousands of wasted network round trips.

EXEC #54:c=0,e=403,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=987268354
FETCH #54:c=0,e=70,p=0,cr=4,cu=0,mis=0,r=1,dep=1,og=4,tim=987268528
FETCH #54:c=0,e=6,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987268664
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987268774
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987268876
FETCH #54:c=0,e=6,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987268979
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987269084
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987269188
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987269293
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987269398
FETCH #54:c=0,e=6,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987269504
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=1,dep=1,og=4,tim=987269607
FETCH #54:c=0,e=5,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,tim=987269713

However, what I did notice was that the standard system recursive statement fetching was also done 1 row at a time. I know the array size is a client configuration, but it seems to be inherited by all recursive statements as well. Is this true? Or do we have ways of manipulating Oracle's "own" (internal) array size?

Another quick one: is there any possibility to set the array size in a running user session (just like you can turn on trace in a running session)?

Kind regards,
TRONd

Tom Kyte
January 24, 2011 - 6:11 am UTC

the array size is not really a "client configuration" - it is a statement level thing. A client can have one cursor that array fetches 100 rows, another 1000 rows and another 1 row at a time.

... are there any possibilities to set array size in running
user session ...

Only in an environment that allows you to specify it. SQLPlus for example lets you set arraysize N and it will then use an array size of N for all statements it executes. Oracle Forms lets you set the default array fetch size for blocks. JDBC would let you set - PROGRAMMATICALLY - the array fetch size it would use, but the program written using jdbc calls would have to give you a way to set it.

it is not a configuration setting in a file or anywhere - UNLESS the program you are running has made it so.

Sorry

Davis, July 13, 2011 - 3:34 pm UTC

Hi Tom,
       nice explanation of the db block gets and consistent gets - still it's not clear to me, sorry for that.
I've been reading this thread for the past few days and have a few things to clear up. I tested this myself on my PC, where I'm the only user with no DML happening. Here is what I see.
SQL> set autotrace traceonly 
SQL> select * from scott.emp;
14 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=51
          8)

   1    0   TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes
          =518)
Statistics
----------------------------------------------------------
       1528  recursive calls
          0  db block gets
        287  consistent gets
         44  physical reads
          0  redo size
       1342  bytes sent via SQL*Net to client
        500  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         31  sorts (memory)
          0  sorts (disk)
         14  rows processed

This is the first time I ran it, so it goes through parsing, and since there are no blocks in the buffer cache it does physical IO to read from disk. It also does some consistent gets - not sure why? I see no db block gets here - why?
When I run the same statement from a different sqlplus window I get the below result:
14 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=14 Bytes=51
          8)

   1    0   TABLE ACCESS (FULL) OF 'EMP' (TABLE) (Cost=3 Card=14 Bytes
          =518)
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
          0  bytes sent via SQL*Net to client
          0  bytes received via SQL*Net from client
          0  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         14  rows processed

here I see only 14 rows processed and the rest are 0. Why?
Please help me understand this.
Any effort, help and time would be greatly appreciated.
Thanks in advance

Tom Kyte
July 15, 2011 - 8:57 am UTC

it also does some
consistent gets, not sure why?


a physical IO is almost always followed by a logical IO. We do a physical IO to get the block into the buffer cache, then you have to do a logical IO to get the block out of the cache. You'll always see the consistent gets (logical IO's), the physical IO's might go away, but not the logical IO's. Using a conventional path read (that is, NOT a direct path read), we always read from the buffer cache and MIGHT have to read from disk first.

i see no db block gets here. why?

because we were not modifying any data here - we didn't update anything, we do db block gets (current mode gets) when we need to modify data.


here I see only 14 rows processed and the rest are 0. why?

either:

a) you ran as SYS in the second window (do not do that, do NOT do that, please do NOT do that). SYS is special, sys is magic, in some releases of sqlplus - autotrace did not repeat statistics for sys.

b) you were in 11g and using the result cache in implicit mode. (result_cache_mode = force)
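(a quick check for case (b) - a sketch:)

show parameter result_cache_mode

alter session set result_cache_mode = manual;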

Excellent Full Marks

A reader, July 18, 2011 - 4:10 pm UTC

Thank you very much for the quick response. That indeed helped a lot in understanding the difference between db block gets and consistent gets. You're right - I was logged in as SYS in the second window. So based on your explanation, can we conclude that no DML or modification of the database (not possible in the real world) means no DB BLOCK GETS? And if we see too many consistent gets - is that ok, or should we consider tuning?
thanks
Tom Kyte
July 18, 2011 - 5:35 pm UTC

you'll see db block gets when we need a block in current mode, typically in order to update it. You can see it with read only statements (recursive sql to allocate temp space for example might happen under the covers). You will see it with modifications.

Consistent gets, logical IO's are very expensive, your goal *in general* is to reduce logical IOs - if you watch the logical IO's the physical IO's in general take care of themselves. Yes, you typically want the 'less IO' approach.

Logical IO's expensive

A reader, July 18, 2011 - 8:54 pm UTC

hi tom,
you said logical IO's are expensive. What exactly does that mean? Because it involves physical IO's. Could you please shed some light here?
thanks
Tom Kyte
July 19, 2011 - 7:37 am UTC

Logical IO's are not physical IO's - a physical IO is a read from disk that puts the block into the buffer cache. A logical IO is a retrieval of the block from the buffer cache. A physical IO is almost always surrounded by a logical IO. They are different - hence different names.


A logical IO involves a lot of work. The buffer cache is a big place first of all - we need to locate the block. In the buffer cache, we tend to organize blocks in a two dimensional array of sorts. The first 'array' is an array of N lists. We start by taking a data block address (DBA - the file & block number of the block) and hashing that to a number between 1 and N. That tells us what list the block is on. The second dimension of the 'array' is a list - a linked list. We'll walk that list looking for the block we need. Remember also that read consistency comes into play - there may be many versions of a given block in the buffer cache and just because we find a block in the buffer cache doesn't mean it is the version of the block we need.

Complicating things is the fact that the buffer cache is a big shared data structure. People need to modify it (put blocks in the cache for example, write blocks out from the cache to disk, and so on). People need to read it. Whenever that is the case (you have readers and writers), you need some mutual exclusion device (a lock, a latch, a mutex, a semaphore - whatever - a serialization device, think of it as a lock). So, when we want to walk the buffer cache looking for a block - we take a cache buffers chains (cbc) latch - that is, we introduce some level of serialization.

So, let's say we did not find the block in the cache - it just wasn't there. That logical IO will now become a physical IO to get the block in there.

So, let's say we did find some version(s) of the block in the cache - we need to run some tests to see if we have the right version (see
http://www.oracle.com/technetwork/issue-archive/2010/10-nov/o60asktom-176254.html
for a quick discussion if you don't know about multi-versioning and read consistency). If we have the right version - we are done (finally!). If not, we have some more work (and more logical IO's to do). We need to read undo information and roll the block back in order to get the right version.


All in all - hundreds of cpu instructions to hash, to index into arrays, to walk linked lists, to latch and/or spin on a latch waiting for it (locking), lots of potential work - just to get a block.


The fastest way to do something is.....

To not do it in the first place.

The fastest logical IO is the logical IO that is not done.
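(if you want to watch your own session's work, a sketch - run this before and after a query and compare the values:)

select sn.name, ms.value
  from v$statname sn, v$mystat ms
 where sn.statistic# = ms.statistic#
   and sn.name in ('consistent gets', 'db block gets', 'physical reads');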

small Puzzle

A reader, July 21, 2011 - 8:52 am UTC

hi tom,
first of all I would like to say thank you very much for the really nice explanation. One small thing is puzzling me after thoughtfully reading your reply.
You said:
A logical IO involves a lot of work. The buffer cache is a big place first of all - we need to locate the block. In the buffer cache, we tend to organize blocks in a two dimensional array of sorts. The first 'array' is an array of N lists. We start by taking a data block address (DBA - the file & block number of the block) and hashing that to a number between 1 and N.

puzzle:
When we fire a SQL statement, I know it goes through parsing (hard or soft or no parsing). How does Oracle analyze or come to know whether the data block addresses it needs to generate the result set are on the linked lists or not? I mean, does it do a dictionary lookup on the underlying table to find out? I may not be putting my puzzle very clearly, but I would like to know how Oracle senses whether the block it needs is on a linked list - or does it just scan the linked list in the buffer cache and, if it does not find the block there, do a physical read? Please provide some help to clear up this puzzle.
thanks again
Tom Kyte
July 22, 2011 - 2:00 pm UTC

how does oracle analyze or come to know whether the data block addresses
it needs to generate the result set are on the linked lists or not?


It depends - it depends on the plan.

for example, suppose you have: select * from t;

and the plan is "FULL SCAN"

then we would go to the data dictionary and ask it for the extent map of the table - now we know all of the files the table has data in and all of the block ranges that have data for that table. We'll start by looking for the first N of those blocks (N = db file multiblock read count, a parameter) in the cache. We do that by taking the file.block (the DBA, data block address) for each of those N blocks and hashing them to find the list they are on, walking that list and looking for them. If they are not there - then we issue the IO or IO's request to read them into the cache.



If we were using an index to access a row - then we'd issue a logical IO for the root block (we figure out the root block DBA from the dictionary) and once we get the root block - it will point us to a branch block (and it - the root block- would have that DBA on it) and so on. Until we eventually get a rowid from the leaf block and that points us to the DBA for the table data.



We do not 'sense' if a block is there in the cache or not, we have to go through the process I outlined above every time we need a block.


and that is why your goal when tuning a query - IN GENERAL - is to try to minimize the amount of logical IO it would have to perform. We do physical IO when we try to do a logical IO but cannot find the block - and even when we find the block, you can see it is a lot of work.

Since the fastest way to do something is to not do it - the less logical IO's we do - the better - in general.
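(that extent map is visible in the dictionary - a sketch, assuming access to the DBA views and a table named T owned by the current user:)

select file_id, block_id, blocks
  from dba_extents
 where owner = user
   and segment_name = 'T'
 order by file_id, block_id;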

memory usage

a reader, August 05, 2011 - 4:46 pm UTC

So when a pl/sql or java program issues a select statement - let's say the query would return 1 million rows - before the program fetches any rows, how would Oracle handle loading data into the buffer cache? Would it load only a certain number of rows into memory, wait for the fetch and then get additional rows; or load all the rows into memory if the memory is big enough and swap them to disk if not? I guess it's the former, isn't it?

Thanks
Tom Kyte
August 06, 2011 - 8:05 am UTC

It might not load anything into the cache at all.

We try to get the data as you request it.

See
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:275215756923#39255764276301

(wait for the page to load, you want to read "queries from start to finish")

Higher then expected consistent gets

Arvvind Mishra, August 09, 2011 - 12:57 am UTC

Hi Tom,

I am trying to understand consistent gets. I have created the following test:


SQL> create table test1 (id int);

Table created.

SQL> insert into test1
  2  values(1);

COMMIT;

begin
 dbms_stats.gather_table_stats('SCOTT','TEST1');
END;

SET AUTOTRACE TRACEONLY

QL> SELECT * FROM TEST1;


xecution Plan
---------------------------------------------------------
lan hash value: 4122059633

--------------------------------------------------------------------------
 Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
   0 | SELECT STATEMENT  |       |     1 |     3 |     3   (0)| 00:00:01 |
   1 |  TABLE ACCESS FULL| TEST1 |     1 |     3 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------


tatistics
---------------------------------------------------------
         0  recursive calls
         0  db block gets
         7  consistent gets
         0  physical reads
         0  redo size
       405  bytes sent via SQL*Net to client
       381  bytes received via SQL*Net from client
         2  SQL*Net roundtrips to/from client
         0  sorts (memory)
         0  sorts (disk)
         1  rows processed

Why is Oracle doing 7 consistent gets when the explain plan shows a cost of 3?

Thanks,

Arvind

Tom Kyte
August 13, 2011 - 3:58 pm UTC

Automatic segment space management. It spreads inserts out over the area between the high water mark of a table and the "low high water mark". The inserted row doesn't necessarily go into the first block of the table.

No big deal - look - this table is so small, it doesn't really *matter*.

The cost is the cost - it is not necessarily a direct measure of IO, and it will never be perfect (except by accident). It is what we call an *estimate*.


ops$tkyte%ORA11GR2> create table t1 ( x int ) tablespace users;

Table created.

ops$tkyte%ORA11GR2> create table t2 ( x int ) tablespace manual;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t1 values (1);

1 row created.

ops$tkyte%ORA11GR2> insert into t2 values (1);

1 row created.

ops$tkyte%ORA11GR2> select * from t1;

         X
----------
         1

ops$tkyte%ORA11GR2> select * from t2;

         X
----------
         1

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace traceonly statistics;
ops$tkyte%ORA11GR2> select * from t1;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          7  consistent gets
          0  physical reads
          0  redo size
        419  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA11GR2> select * from t2;


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        419  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA11GR2> set autotrace off
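
As an aside - you can check which kind of segment space management each tablespace uses (a quick sketch, assuming access to DBA_TABLESPACES):

select tablespace_name, segment_space_management
  from dba_tablespaces
 where tablespace_name in ('USERS','MANUAL');

USERS above is AUTO (ASSM) and MANUAL is, as the name suggests, MANUAL - which is why t1 took 7 consistent gets and t2 only 3.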

Higher than expected consistent gets

Arvvind Mishra, August 09, 2011 - 5:16 am UTC

I want to add one more question to previous update:

Why did Oracle estimate a cost of 3 to read one block in the explain plan? It should be one.

Tom Kyte
August 13, 2011 - 3:59 pm UTC

it is an estimate, after all - consider the very definition of the word...

and for such a tiny difference, not really a problem.

physical reads do NOT match consistent gets

Gus, August 09, 2011 - 9:19 pm UTC

Tom,
  here I create a test table:
create table gus_test_read as select * from all_objects where rownum < 10001;

analyze table gus_test_read compute statistics;

and then I executed the SQL at the sqlplus command line and found a thing I can't understand:
SQL> select count(1) from gus_test_read;

  COUNT(1)
----------
     10000


Execution Plan
----------------------------------------------------
Plan hash value: 3280442023

----------------------------------------------------
| Id  | Operation          | Name          | Rows  |
----------------------------------------------------
|   0 | SELECT STATEMENT   |               |     1 |
|   1 |  SORT AGGREGATE    |               |     1 |
|   2 |   TABLE ACCESS FULL| GUS_TEST_READ | 10000 |
----------------------------------------------------


Statistics
----------------------------------------------------
          1  recursive calls
          0  db block gets
         66  consistent gets
         64  physical reads
          0  redo size
        411  bytes sent via SQL*Net to client
        400  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

You see it returns just 1 row, so the array size does not affect the consistent reads. So can we say that, in this situation, the consistent reads equal the number of gus_test_read's blocks?

And if so, shouldn't that then equal the physical reads? - because I checked that the table's blocks were not in the buffer cache before the SQL ran.

so there are two questions:
1. Setting aside the effect of array size, are the consistent reads equal to the number of the table's blocks?
2. If there were no data blocks of the table in the buffer cache, are the physical reads equal to the number of the table's blocks?
Note: (all of the above assumes a full table scan.)

Tom Kyte
August 13, 2011 - 4:15 pm UTC

analyze table gus_test_read compute statistics;


do not do that, use dbms_stats please. analyze to compute/estimate statistics has been deprecated for a long long long long time. Probably longer than you have been using Oracle...
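
For example, a minimal replacement for the analyze command above:

exec dbms_stats.gather_table_stats( user, 'GUS_TEST_READ' )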

1) no, it isn't. we have to do IO's to the dictionary to figure out "where does this table start", "where does this table end".

2) no, it won't be. We have to read the segment extent header to figure out "where does it start" "where does it end" and we tend to read that block more than once.

So, they'll never be exactly equal

Higher than expected consistent gets

Arvind Mishra, August 14, 2011 - 9:29 am UTC

Thanks Tom

consistent gets estimation

Arvind Mishra, August 16, 2011 - 3:49 am UTC

Hi Tom,

I am trying to learn how to estimate consistent gets for a query. For this I have created a test case:


SQL> desc btable
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 NAME                                      NOT NULL VARCHAR2(30)
 JOIN_DATE                                          DATE

SQL> exec show_space('BTABLE');
Free Blocks.............................
Total Blocks............................512
Total Bytes.............................4194304
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................4
Last Used Ext BlockId...................905
Last Used Block.........................128

PL/SQL procedure successfully completed.


SQL> SELECT I.INDEX_NAME, I.INDEX_TYPE, c.column_name,I.BLEVEL,I.LEAF_BLOCKS INDEX_LEAF_BLOCKS,
  2  I.clustering_factor,t.blocks TABLE_BLOKS,t.num_rows TOTAL_ROWS
  3  FROM USER_INDEXES I , USER_TABLES T, USER_IND_COLUMNS C
  4  WHERE
  5  I.TABLE_NAME = T.TABLE_NAME AND I.INDEX_NAME = C.INDEX_NAME
  6  AND I.TABLE_NAME = 'BTABLE' AND I.INDEX_NAME ='IDX_JOIN_DT' ;

INDEX_NAME                     INDEX_TYPE
------------------------------ ---------------------------
COLUMN_NAME
--------------------------------------------------------------------------------
    BLEVEL INDEX_LEAF_BLOCKS CLUSTERING_FACTOR TABLE_BLOKS TOTAL_ROWS
---------- ----------------- ----------------- ----------- ----------
IDX_JOIN_DT                    NORMAL
JOIN_DATE
         1               181               792         510      68047


SQL> SHOW ARRAYSIZE
arraysize 15

SQL> select * from btable where join_date = TO_DATE('19-OCT-07','DD-MON-YY');

3000 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1426611787

--------------------------------------------------------------------------------------------
| Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |             |    70 |  2590 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| BTABLE      |    70 |  2590 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_JOIN_DT |    70 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("JOIN_DATE"=TO_DATE('19-OCT-07','DD-MON-YY'))


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1107  consistent gets
          0  physical reads
          0  redo size
      95626  bytes sent via SQL*Net to client
       2570  bytes received via SQL*Net from client
        201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       3000  rows processed

But I have estimated the consistent gets as follows:

2 (for the index) + 1 (for the table) + 792 (clustering factor) + 3000/15 (array size) = 995.

Can you please tell me the correct way to estimate consistent gets for this query? I am using Oracle Database 11g Enterprise Edition Release 11.1.0.6.0.







Tom Kyte
August 16, 2011 - 5:17 pm UTC

and 995 was pretty darn close to 1,107 - if I got it that close, I'd be really happy.

What issue do you see?

Your two for the index is low. Your index traversal is one block (blevel), and you have about 376 keys per leaf block (68,047 keys / 181 leaf blocks) - so you need to read about 8 leaf blocks. You will read each leaf block about 25 times (arraysize of 15). So, 1 + 8*25 = 201 index blocks read.
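
Restating that arithmetic as SQL - just a back-of-the-envelope check, with the inputs 68047, 181, 3000 and 15 all taken from the posts above:

select 1                              /* blevel: root/branch traversal */
       + ceil( 3000 / (68047/181) )   /* about 8 leaf blocks visited */
       * round( (68047/181) / 15 )    /* each re-gotten about 25 times at arraysize 15 */
       as estimated_index_gets
  from dual;                          -- evaluates to 1 + 8*25 = 201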

But here is where it gets really fuzzy - the table blocks. The clustering factor is a measure of how many IO's it would take to read the entire table via the index in a single call. Only - we are NOT reading the entire table - we are reading a slice. So, the clustering factor (computed over the entire table) cannot be used to accurately predict (it can be used to ESTIMATE, but not to accurately predict) how many IO's we'd be doing against the table.

I don't know why you are adding the clustering factor. The clustering factor is a number that varies from the number of blocks in the table to the number of rows in the table. If the clustering factor is near the number of rows in the table, then the number of IO's would be the number of rows returned (3,000 in your example). On the other hand, if the clustering factor is near the number of blocks in the table, then the minimum number of IO's would be the number of rows divided by the array fetch size OR the number of rows divided by the number of rows per block - whichever is larger - and remember, this would be ON AVERAGE, not a guaranteed number for each "slice" of the table.

So, if you have about 3 or 4 rows per block - that would be about 900 IOs against the table - ASSUMING they were perfectly clustered (which they aren't). Meaning - you could have many more rows per block and still hit 900 IO's if you hit some data that was scattered.


You cannot get the perfect number, you can only get expected RANGES of numbers.

consistent gets estimation

Arvind Mishra, August 17, 2011 - 2:19 am UTC

Hi Tom,

1)....What issue do you see?
I do not see any issue, but I thought that somehow I could calculate the exact number.

2)....Your two for the index is low......

Thanks for this. I did not know how to calculate index i/o. This is great.

3) What is the best way to see how many rows there are in a block?

4) How can I see from which blocks my query rows are retrieved - for example, from which blocks is the following query retrieving rows:

select *
from
btable where join_date = TO_DATE('19-OCT-07','DD-MON-YY')

5) ....I don't know why you are adding the clustering factor?....
I added this because I knew I was calculating a low value for the index I/O, so this higher value would nearly compensate for it.

6)...So, if you have about 3 or 4 rows per block .......

I am not sure how you reached this conclusion. Is it because the clustering factor is 792 and the total blocks are 510?


7) Please see below:

SQL ID : ga7x21a4vdb4z
select *
from
btable where join_date = TO_DATE('19-OCT-07','DD-MON-YY')


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      201      0.01       0.01          0       1107          0        3000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      203      0.01       0.04          0       1107          0        3000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 81

Rows Row Source Operation
------- ---------------------------------------------------
3000 TABLE ACCESS BY INDEX ROWID BTABLE (cr=1107 pr=0 pw=0 time=199 us cost=2 size=2590 card=70)
3000 INDEX RANGE SCAN IDX_JOIN_DT (cr=208 pr=0 pw=0 time=85 us cost=1 size=0 card=70)(object id 71256)

Tom, the total 1107 is including index range scan (cr=208) or excluding it?


Thanks,

Arvind













Tom Kyte
August 17, 2011 - 4:21 am UTC

1) as stated - you cannot - you can get ranges - lower and upper bounds, but that is about it.

3) it depends. You can get an average (number of rows in the table divided by number of blocks in the table), but remember it is an AVERAGE - some blocks will have more, some less. You don't know which blocks you'll get. If the slice of the table you need has LOTS of rows per block (above average), some other execution of the same exact query with different bind inputs might hit a slice of the table with FEW rows per block (below average)

it depends - that is why a RANGE is the best you could hope for - at best.
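
A sketch of how to get that average, using the statistics already gathered for BTABLE:

select num_rows, blocks,
       round( num_rows / blocks ) avg_rows_per_block
  from user_tables
 where table_name = 'BTABLE';

With 68,047 rows in 510 blocks that works out to roughly 133 rows per block on average - but again, it is only an average.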

4) you could use dbms_rowid to see what blocks the data was on if you really wanted.
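
For example - a minimal sketch against the table from the question:

select dbms_rowid.rowid_block_number(rowid) block#, count(*) cnt
  from btable
 where join_date = to_date('19-OCT-07','DD-MON-YY')
 group by dbms_rowid.rowid_block_number(rowid)
 order by 1;

That shows every block contributing rows to the result and how many rows each contributes.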

5) flawed logic, see my response above for why this doesn't make sense at all.


6) I wasn't reaching a conclusion - I was saying "suppose you had about 3 or 4 rows per block". I told you I DON'T KNOW how many rows per block you had - I was telling a story. Read what I said again:

So, if you have about 3 or 4 rows per block - that would be about 900 IOs against the table - ASSUMING they were perfectly clustered (which they aren't). Meaning - you could have many more rows per block and still hit 900 IO's if you hit some data that was scattered.


I was trying to convey the fact that "it depends on the data and how it is laid out"



7) hey - i got the index IO's just about right didn't I :)


Rows     Row Source Operation
-------  ---------------------------------------------------
   3000  TABLE ACCESS BY INDEX ROWID BTABLE (cr=1107 pr=0
   3000   INDEX RANGE SCAN IDX_JOIN_DT (cr=208 pr=0 pw=0 


that shows we did 208 IO's against the index and then (1107-208) IO's against the table.

consistent gets estimation

Arvind Mishra, August 21, 2011 - 12:13 am UTC

Hi Tom,

1) It is great that you have estimated perfect index consistent gets...WOW....you are ORACLE :-)

2)My queries rows are distributed as follows:


  1  select distinct blocks, count(distinct blocks) over() block_total,
  2         count(rid) over (partition by blocks) rows_in_blocks
  3  from
  4  (select dbms_rowid.rowid_block_number(rowid) blocks, rowid rid, name, join_date
  5   from
  6*  btable where join_date = TO_DATE('19-OCT-07','DD-MON-YY')) order by blocks
SQL> /

    BLOCKS  BLOCK_TOTAL ROWS_IN_BLOCKS
---------- ------------ --------------
       655           16             75
       656           16            224
       657           16            192
       658           16            200
       659           16            215
       660           16            196
       661           16            194
       662           16            188
       663           16            194
       664           16            201
       665           16            202

    BLOCKS  BLOCK_TOTAL ROWS_IN_BLOCKS
---------- ------------ --------------
       666           16            192
       667           16            189
       668           16            190
       669           16            183
       670           16            165

16 rows selected.

3) The other required data is:
 
SQL> SELECT I.INDEX_NAME, I.INDEX_TYPE, c.column_name,I.BLEVEL,I.LEAF_BLOCKS INDEX_LEAF_BLOCKS,
  2  I.clustering_factor,t.blocks TABLE_BLOKS,t.num_rows TOTAL_ROWS
  3  FROM USER_INDEXES I , USER_TABLES T, USER_IND_COLUMNS C
  4  WHERE
  5  I.TABLE_NAME = T.TABLE_NAME AND I.INDEX_NAME = C.INDEX_NAME
  6  AND I.TABLE_NAME = 'BTABLE' AND I.INDEX_NAME ='IDX_JOIN_DT' ;

INDEX_NAME                     INDEX_TYPE
------------------------------ ---------------------------
COLUMN_NAME
--------------------------------------------------------------------------------
    BLEVEL INDEX_LEAF_BLOCKS CLUSTERING_FACTOR TABLE_BLOKS TOTAL_ROWS
---------- ----------------- ----------------- ----------- ----------
IDX_JOIN_DT                    NORMAL
JOIN_DATE
         1               181               792         510      68047

SQL> SHOW ARRAYSIZE
arraysize 15

4) TKProf

SQL ID : ga7x21a4vdb4z
select * 
from
 btable where join_date = TO_DATE('19-OCT-07','DD-MON-YY')


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      201      0.01       0.01          0       1107          0        3000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      203      0.01       0.04          0       1107          0        3000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 81  

Rows     Row Source Operation
-------  ---------------------------------------------------
   3000  TABLE ACCESS BY INDEX ROWID BTABLE (cr=1107 pr=0 pw=0 time=199 us cost=2 size=2590 card=70)
   3000   INDEX RANGE SCAN IDX_JOIN_DT (cr=208 pr=0 pw=0 time=85 us cost=1 size=0 card=70)(object id 71256)

5) If I had a perfect clustering factor (512), then could I calculate the I/O as follows?

SQL> select 3000/16 from dual; -- to get rows per block

   3000/16
----------
     187.5

SQL> select 188/15 from dual; -- to get i/o per block considering array size

    188/15
----------
12.5333333

SQL> select 12*16 from dual; -- to get total i/o for all 16 blocks

     12*16
----------
       192

6) My index clustering factor is 792, which is not too far from the perfect number of 512, but Oracle is doing 1107 - 208 = 899 consistent gets to fetch all the rows. Why? Can you please show me how it is doing 899 consistent gets?

Thanks,

Arvind

Tom Kyte
August 30, 2011 - 2:01 pm UTC

6) because the data is not "perfectly" clustered. Because you are doing 201 fetches (so you have AT LEAST 201 IO's since you have to get and then re-get the same blocks sometimes).


unless the data is *perfectly* clustered - you cannot really estimate how many IO's exactly it will do - just upper and lower bounds.

All it takes is one row "out of order" and all bets are off.

for example:


ops$tkyte%ORA11GR2> drop table t;

Table dropped.

Elapsed: 00:00:01.06
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table t
  2  as
  3  select rpad('*',22,'*') data, to_date( '20-oct-2011', 'dd-mon-yyyy' ) join_date
  4    from dual
  5  connect by level <= 3000;

Table created.

Elapsed: 00:00:00.02
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select dbms_rowid.rowid_block_number(rowid), count(*)
  2    from t
  3   group by dbms_rowid.rowid_block_number(rowid)
  4   order by 1
  5  /

DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                              420259        200
                              420260        200
                              420261        200
                              420262        200
                              420263        200
                              420264        200
                              420265        200
                              420266        200
                              420267        200
                              420268        200
                              420269        200
                              420270        200
                              420271        200
                              420273        200
                              420274        200

15 rows selected.

Elapsed: 00:00:00.03
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t
  2  select rpad('*',22,'*') data,
  3         case when mod(rownum,200) < 6
  4                  then to_date( '21-oct-2011', 'dd-mon-yyyy' )
  5                  else to_date( '22-oct-2011', 'dd-mon-yyyy' )
  6              end join_date
  7    from dual
  8   connect by level <= 3000*33
  9  /

99000 rows created.

Elapsed: 00:00:00.19
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create index t_idx on t(join_date);

Index created.

Elapsed: 00:00:00.21
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.13
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select ut.num_rows, ut.blocks, ui.clustering_factor
  2    from user_tables ut, user_indexes ui
  3   where ut.table_name = 'T'
  4     and ui.index_name = 'T_IDX';

  NUM_ROWS     BLOCKS CLUSTERING_FACTOR
---------- ---------- -----------------
    102000        626              1001

Elapsed: 00:00:00.01
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> set autotrace traceonly;
ops$tkyte%ORA11GR2> select /*+ index( t t_idx ) */ * from t where join_date = to_date( '20-oct-2011' );

3000 rows selected.

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       | 34000 |  1029K|   426   (1)| 00:00:06 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     | 34000 |  1029K|   426   (1)| 00:00:06 |
|*  2 |   INDEX RANGE SCAN          | T_IDX | 34000 |       |    91   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("JOIN_DATE"=TO_DATE(' 2011-10-20 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        424  consistent gets
          0  physical reads
          0  redo size
     134381  bytes sent via SQL*Net to client
       2608  bytes received via SQL*Net from client
        201  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       3000  rows processed

ops$tkyte%ORA11GR2> select /*+ index( t t_idx ) */ * from t where join_date = to_date( '21-oct-2011' );

2970 rows selected.

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       | 34000 |  1029K|   426   (1)| 00:00:06 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     | 34000 |  1029K|   426   (1)| 00:00:06 |
|*  2 |   INDEX RANGE SCAN          | T_IDX | 34000 |       |    91   (0)| 00:00:02 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("JOIN_DATE"=TO_DATE(' 2011-10-21 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        896  consistent gets
          0  physical reads
          0  redo size
     133041  bytes sent via SQL*Net to client
       2586  bytes received via SQL*Net from client
        199  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       2970  rows processed

ops$tkyte%ORA11GR2> set autotrace off



the data you are going after simply must be spread out, whereas some of the other data is not.

consistent gets estimation

Arvind Mishra, September 06, 2011 - 12:53 am UTC

Hi Tom,

1) Thanks a lot...I am starting to understand it now.

2) You said that since we are doing 201 fetches, the lower bound of the consistent gets will be 201. What about the upper bound? What upper bound would you expect for 3000 rows with a clustering factor of 792, and why?

I wanted to learn this so that, using the autotrace report, I can check whether a query is doing an excessive number of consistent gets or just the required number.

Thanks,

Arvind
Tom Kyte
September 06, 2011 - 10:46 am UTC

2) it would be 3,201 (3,000 row gets plus the 201 fetch re-gets) - because there could be a case whereby much of the data is clustered - but you pick on some data that is NOT well clustered.

Just as demonstrated - I had well clustered data (and lots of it) but for one attribute value - it was randomly distributed.


Think about this - 1,000,000 rows - a column with two values - Y and N. Almost all of them are Y's (they are processed records for example). Because the vast majority of the records are Y - the clustering factor will be *very* good (almost by definition - there are so many Y's you know that every block has many Y's on it).

Now, what about the N's - the few records. Will they be well clustered? Present a case for the argument that a) yes they will be, b) no they won't be, c) they will sort of be mostly. If you think about how the data might arrive or different processing scenarios - you should be able to come up with a case for each...

consistent gets estimation

Arvind Mishra, September 07, 2011 - 12:15 am UTC

It is clear to me now. Thanks for answering so many of my questions on the same topic without getting frustrated!! Thanks a lot.

colin, September 08, 2011 - 4:40 am UTC

Hi Tom,


http://jonathanlewis.wordpress.com/oracle-core/oc-2-undo-and-redo/#comment-41661

Oracle does not apply the changes you have requested until it starts processing the commit; - Jonathan Lewis

Does it mean that - as part of the COMMIT only - Oracle will change the blocks? Until we issue COMMIT, won't the data blocks be changed/modified?

So, if we say: update table1 set y=y*10

and we did not commit - what is the current status of the data blocks/undo blocks/redo etc.?

Thanks

Tom Kyte
September 08, 2011 - 5:24 pm UTC

It means that for tiny transactions, the way it is physically processed is a tiny bit different from normal. It is not "in general", it is an optimization.


if you did an update of any table of size - and updated each row - it wouldn't be "teeny tiny" and wouldn't happen that way.


His entry on that page answers your last question, "what is the current status" - that was the entire goal of his comment: to answer that.


A reader, September 09, 2011 - 10:00 am UTC

Thank you Tom !!

Sir, my question is -

1. In the case of an UPDATE statement, does Oracle prepare the redo change vector when we get the block in 'consistent mode' itself, or when we get the block in 'current mode'?
(because when we execute an UPDATE statement, first we get the block in consistent mode and then we get it in current mode and update it)

2.

update t set x=x+2 ; (t is a small table)
(10g - private redo/in memory undo enabled)
select * from t ; (I did not commit)
in this case I can see the new values - but as said above, Oracle does not change the blocks until we commit - so where are the new values coming from?

3. For tiny transactions, private redo/in-memory undo is used - how is 'tiny' defined here? Any internal setting?

Many thanks Tom for your time .

cheers


Tom Kyte
September 09, 2011 - 11:51 am UTC

1) it would have to wait until there was something to be done - we get the block in consistent mode to see if we want to change data (nothing has been changed), we get the block in current mode to actually make the changes (that is when redo would start to be generated).

2) the blocks are changed - they are flagged as "we have changes pending". And I see you asked Jonathan and he has responded to you already:

http://jonathanlewis.wordpress.com/oracle-core/oc-2-undo-and-redo/#comment-41661

3) nothing we care about - nothing exposed that we want to look at.

Don't be looking for teeny tiny precision screwdrivers to tune with - learn the basics first (and probably last; the basics are themselves pretty large). These are nitty gritty, interesting-to-talk-about-over-beers topics - but they are frankly not necessarily useful day to day. I'd much rather talk about that stuff myself...

A reader, September 09, 2011 - 12:48 pm UTC

Hi Tom,

How do you know that I asked there? Actually, I was waiting for a response from him when I posted here...
Tom Kyte
September 10, 2011 - 2:58 pm UTC

because I read the link you provided me.

You asked Jonathan and me at the same time, when I got around to reading it, Jonathan had already replied. I'm glad I read it again so that I didn't waste time responding myself...


A reader, September 09, 2011 - 1:54 pm UTC

“Redo Buffer Mechanism”:
-------------------------------------

Sir, I am trying to understand the basics - that is why so many questions. But you are so kind and helpful in helping us learn many things about Oracle.
Sorry, but can you please correct the below steps if they are wrong? This will help me a lot to understand the basics.

Say
select * from emp ; (emp is a table that contains 10K unique employee records; emp_id is the primary key)
update emp set sal=sal+1000 where emp_id in (2,9780)
Assume it has to update 2 blocks.
It will first do an index scan and find out the ADDRESSes of the 2 blocks to be updated ...

1. It will get the first block in consistent mode and confirm that it has to be changed
2. Get the first block in current mode; before changing the block, generate the change vectors for the undo and data blocks
3. Combine and copy the change vectors for redo and undo into the redo log buffer
4. Update the undo block
5. Update the data block
Let the 1st block go

Repeat the above steps for the 2nd block also (1,2,3,4,5)

In this case Oracle will visit each block 2 times (consistent mode and current mode)

'Private Redo' (>= 10g):
------------------------------
1. It will get the first block in consistent mode and confirm that it has to be changed
2. Get the first block in current mode; before changing the block, generate the change vectors for the undo and data blocks in 'in memory undo' and in 'private redo'
3. It will get the second block in consistent mode and confirm that it has to be changed
4. Get the first block in current mode; before changing the block, generate the change vectors for the undo and data blocks in 'in memory undo' and in 'private redo'
5. Combine and copy these 'private redo' and 'in memory undo' entries into the redo log buffer
6. On the COMMIT call, update the undo and data blocks

In this case Oracle will visit the block 3 times (consistent mode, current mode and current mode)


Sorry, but can you please correct the above steps if they are wrong? This will help me a lot to understand the basics.

Many thanks

Tom Kyte
September 10, 2011 - 3:06 pm UTC

it is more detail than you really need to know. I'm dead serious when I say:

Don't be looking for teeny tiny precision screwdrivers to tune with - learn the basics first (and probably last; the basics are themselves pretty large). These are nitty gritty, interesting-to-talk-about-over-beers topics - but they are frankly not necessarily useful day to day. I'd much rather talk about that stuff myself...


The gist of your first bit is correct - use a consistent read to read a block, upon discovering "hey, I do want to update that block" - get it in current mode and modify it - which includes generating redo and undo for that change.


Your second one seems off - why are you getting the first block again in step 4? You never let it go in the first place.

But again, this isn't really an entirely useful discussion. It is way, way too low level.

If you have two blocks to update - you'll do at least two consistent gets and at least two current mode gets against the table regardless.

A reader, October 20, 2011 - 7:58 am UTC

Hi Tom,

I did not do any updates prior to this - I just executed a SELECT statement 2 times.

The first one did 16 consistent gets and the 2nd one did 8 consistent gets, and it continued the same (8) for the 3rd and 4th times.

Why were there 16 in the first run?
(I did not do any updates prior to this - so I think there were no delayed block cleanouts)


SQL> select * from employee;
8 rows selected.

Elapsed: 00:00:00.62

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=8 Bytes=408)
   1    0   TABLE ACCESS (FULL) OF 'EMPLOYEE' (TABLE) (Cost=3 Card=8 Bytes=408)




Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
         16  consistent gets
          0  physical reads
          0  redo size
        808  bytes sent via SQL*Net to client
        363  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          8  rows processed



SQL> select * from employee;

8 rows selected.

Elapsed: 00:00:00.31

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=8 Bytes=408)
   1    0   TABLE ACCESS (FULL) OF 'EMPLOYEE' (TABLE) (Cost=3 Card=8 Bytes=408)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
        808  bytes sent via SQL*Net to client
        363  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          8  rows processed

Tom Kyte
October 20, 2011 - 8:08 am UTC


did you see:


4 recursive calls


on the first call? You did some hard parsing; the hard parse ran some SQL of its own (4 statements). They did some IO, and this IO was counted in the execution of your SQL.

Buffer Pinned Count - issue

Parthiban Nagarajan, March 06, 2012 - 10:33 am UTC

Hi Tom

An INSERT into a TEMP table statement was running for more than an hour, so we finally killed it.

I am not sure whether the following information is enough.
Could you give your comments on the following statistics?
Do we have bad SQL or bad configuration?

SID CLASS NAME VALUE
--- ----------- -------------------------------------- ----------
167 SQL & Cache buffer is pinned count 8688542771
167 SQL table fetch by rowid 4385169152
167 User session connect time 1331047546
167 Debug process last non-idle time 1331047546
167 Redo redo size 145612044
167 Debug undo change vector size 97005040
167 User session logical reads 95304197
167 Cache consistent gets from cache 94272977
167 Cache consistent gets 94272951
167 Debug no work - consistent read gets 93002234
167 Cache consistent gets from cache (fastpath) 93000329
167 SQL & Cache buffer is not pinned count 82872909
167 User temp space allocated (bytes) 56623104
167 Cache physical read total bytes 42631168
167 SQL cell physical IO interconnect bytes 42631168
167 Cache physical read bytes 42516480
167 User session pga memory max 42028472
167 Cache physical read total bytes optimized 40026112
167 User session uga memory max 39736864
167 User session pga memory 3362232
167 User file io wait time 2373056
167 User session uga memory 1557360
167 SQL sorts (rows) 1414458
167 Cache db block changes 1375823
167 SQL table scan rows gotten 1279031
167 Cache consistent gets - examination 1272281
167 Cache db block gets from cache 1032724
167 Cache db block gets 1032724
167 Cache consistent changes 702293
167 Redo redo entries 673524
167 SQL HSC Heap Segment Block Changes 654067
167 Debug index fetch by key 591830
167 SQL rows fetched via callback 567245
Tom Kyte
March 06, 2012 - 1:59 pm UTC

167  SQL          table fetch by rowid                     4385169152


4,385,169,152

4.3 BILLION!!! yikes.

sounds like someone is forcing it to use an index inappropriately, definitely a bad plan.

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6749454952894#6760861174154

Re: Buffer Pinned Count - issue

Parthiban Nagarajan, March 06, 2012 - 9:43 pm UTC

Hi Tom

Thanks for your comments.
The SQL is INSERT INTO TEMP_A SELECT ... FROM HEAP_A, TEMP_B ... UNION SELECT ... FROM HEAP_A, TEMP_B ...

I understand that because the temp tables lack statistics, it could have estimated a bad plan.

The process ran for more than an hour, and we killed the session and restarted it. The second time it ran just like the 1st time, and it was only at this point that I got the above statistics. Again we killed the session and restarted the process. The third time, the process finished in seconds, which is normal.
The same code has been running for years; now this strange issue arose, but it too was resolved with just a restart.

I wonder why things suddenly turn topsy-turvy?!

It's funny that most computer problems are resolved by just a restart.

Could you advise me on what should be done to prevent such things?
Tom Kyte
March 07, 2012 - 6:16 pm UTC

get the plans - they are in v$sql_plan; use dbms_xplan and you can dump out all of the plans and see if they are different.

sounds like a bind peeking issue - does this use binds?
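
For example - a minimal sketch, assuming you know the sql_id of the statement (the sql_id below is a placeholder):

select *
  from table( dbms_xplan.display_cursor( sql_id          => '&your_sql_id',
                                         cursor_child_no => null,  -- null = all child cursors
                                         format          => 'TYPICAL' ) );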

why the cost is 0

A reader, March 09, 2012 - 6:45 am UTC

Hi Tom,

I have observed that many times the cost of an operation is 0 in the explain plan. Here is a snippet from my 10053 trace output, which I generated to understand why the cost is 0.

SELECT COUNT(*) "COUNT(*)" FROM "EMUDSS"."ORDERS" "ORDERS","EMUDSS"."LINEITEM" "LINEITEM" WHERE "ORDERS"."O_ORDERKEY"="LINEITEM"."L_ORDERKEY"
kkoqbc-end
: call(in-use=39416, alloc=65448), compile(in-use=38488, alloc=40584)
apadrv-end: call(in-use=39416, alloc=65448), compile(in-use=39344, alloc=40584)

sql_id=52a0skswvaqvk.
Current SQL statement for this session:
explain plan for select
count(*)
from
orders,
lineitem
where
o_orderkey = l_orderkey

============
Plan Table
============
---------------------------------------------------+------------------------------------+
| Id | Operation               | Name        | Rows | Bytes | Cost | Time     |
---------------------------------------------------+------------------------------------+
| 0  | SELECT STATEMENT        |             |      |       |  343 |          |
| 1  |  SORT AGGREGATE         |             |    1 |    10 |      |          |
| 2  |   NESTED LOOPS          |             | 583K | 5830K |  343 | 00:00:05 |
| 3  |    INDEX FAST FULL SCAN | PK_LINEITEM | 586K | 2932K |  294 | 00:00:05 |
| 4  |    INDEX UNIQUE SCAN    | PK_ORDERS   |    1 |     5 |    0 |          |
---------------------------------------------------+------------------------------------+
Predicate Information:
----------------------
4 - access("O_ORDERKEY"="L_ORDERKEY")

Content of other_xml column
===========================
db_version : 10.2.0.1
parse_schema : EMUDSS
plan_hash : 2594599139
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX_FFS(@"SEL$1" "LINEITEM"@"SEL$1" ("LINEITEM"."L_ORDERKEY" "LINEITEM"."L_LINENUMBER"))
INDEX(@"SEL$1" "ORDERS"@"SEL$1" ("ORDERS"."O_ORDERKEY"))
LEADING(@"SEL$1" "LINEITEM"@"SEL$1" "ORDERS"@"SEL$1")
USE_NL(@"SEL$1" "ORDERS"@"SEL$1")
END_OUTLINE_DATA
*/

Can you please explain how the cost is 0?

Stats on the tables are up to date.

Regards
Ajeet

First fetch takes long time

Lasse Jenssen, May 31, 2012 - 7:07 am UTC

Hi Tom

I'm a little puzzled about the numbers I see in my trace file (below). Look at the first FETCH compared to all the other fetches:

WAIT #4575082744: nam='SQL*Net message to client' ela= 3 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974612142459
WAIT #4575082744: nam='asynch descriptor resize' ela= 4 outstanding #aio=0 current aio limit=200 new aio limit=547 obj#=-1 tim=13974612147134
<b>FETCH #4575082744:c=2720000,e=28873995,p=0,cr=31483,cu=0,mis=0,r=100,dep=0,og=1,plh=4253839665,tim=13974641016514</b>
WAIT #4575082744: nam='SQL*Net message from client' ela= 10271 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641031381
WAIT #4575082744: nam='SQL*Net message to client' ela= 2 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641031498
FETCH #4575082744:c=0,e=384,p=0,cr=0,cu=0,mis=0,r=100,dep=0,og=1,plh=4253839665,tim=13974641031857
WAIT #4575082744: nam='SQL*Net message from client' ela= 4369 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641037404
WAIT #4575082744: nam='SQL*Net message to client' ela= 2 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641037499
FETCH #4575082744:c=0,e=377,p=0,cr=0,cu=0,mis=0,r=100,dep=0,og=1,plh=4253839665,tim=13974641037856
WAIT #4575082744: nam='SQL*Net message from client' ela= 3826 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641042850
WAIT #4575082744: nam='SQL*Net message to client' ela= 2 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641042948
FETCH #4575082744:c=0,e=3515,p=0,cr=0,cu=0,mis=0,r=100,dep=0,og=1,plh=4253839665,tim=13974641046441
WAIT #4575082744: nam='SQL*Net message from client' ela= 3830 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641050381
WAIT #4575082744: nam='SQL*Net message to client' ela= 2 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641050499
FETCH #4575082744:c=0,e=1575,p=0,cr=0,cu=0,mis=0,r=100,dep=0,og=1,plh=4253839665,tim=13974641052051
WAIT #4575082744: nam='SQL*Net message from client' ela= 8198 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641061585
WAIT #4575082744: nam='SQL*Net message to client' ela= 2 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641061711
FETCH #4575082744:c=0,e=353,p=0,cr=0,cu=0,mis=0,r=100,dep=0,og=1,plh=4253839665,tim=13974641062040
WAIT #4575082744: nam='SQL*Net message from client' ela= 4517 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641066622
WAIT #4575082744: nam='SQL*Net message to client' ela= 2 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641066733
FETCH #4575082744:c=0,e=425,p=0,cr=0,cu=0,mis=0,r=99,dep=0,og=1,plh=4253839665,tim=13974641067134
WAIT #4575082744: nam='SQL*Net message from client' ela= 20842 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=13974641090138


Question: Is it correct that all the consistent reads are connected to the first fetch? Or does this depend on the execution plan?

Comment: Doing a hash join, I would imagine the first fetch involves the higher block reads (having to build the hash table).

c=2720000,e=28873995


Question: Is the 'c' referring to CPU time?

Question: 31,000 consistent reads taking 38 sec. That sounds reasonable, doesn't it? Logical IO is definitely not free.

Tom Kyte
May 31, 2012 - 10:29 am UTC

c is cpu time, e is elapsed time (both in microseconds in this trace), cr is consistent reads, p is physical reads, cu is current mode gets, and r is the number of rows returned by the fetch.


it is likely you had a query with a plan SIMILAR to:

HASH JOIN
   FULL SCAN A
   FULL SCAN B


the first fetch would have had to full scan and hash the smaller of the two row sources A and B into memory (hopefully!) and then read at least a single record from the other row source before returning the first row. So, the first fetch would have taken a long time - but all of the remaining fetches would go really fast, and the last row would come back really fast, relatively speaking.


On the other hand, if you had:

NESTED LOOPS JOIN
   FULL SCAN A
   INDEX RANGE SCAN B_IDX
      TABLE ACCESS BY INDEX ROWID B


you would find the first row comes back really really fast - but the last row takes a really really long time.
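
A sketch of how you could watch this yourself in SQL*Plus - hypothetical tables A and B joined on a column X, assuming an index on B(X) for the nested loops case:

set timing on
set arraysize 100

select /*+ use_hash(a b) */ * from a, b where a.x = b.x;  -- long pause before the first rows, then fast
select /*+ use_nl(a b) */   * from a, b where a.x = b.x;  -- first rows come back quickly, total runtime longer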

need your view

Biswaranjan, June 07, 2012 - 4:42 am UTC

Hi Tom,
I read about arraysize in this site and got my answer to
the question we were discussing at first_rows page(all is your help :).

I was reading this current page about the different arraysize impact on different block size and felt nice all about your explanations.

But I need your view to below mentioned followup comments
posted by you at the top of this page.

"Followup May 2, 2002 - 7am Central time zone:

Ok, 10 rows/block:

on first fetch we:
get block 1 (1 consistent get) 10 rows
get block 2 (2 consistent gets) 5 rows / 5 left

on second fetch we:
get block 2 (3 consistent gets) 5 rows / 0 left
get block 3 (4 consistent gets) 5 rows / 5 left

....

and so on. We visit EVERY OTHER block 2 times (the N/M still holds here. If I have 100 rows on 10
blocks and an array size of 10 -- I'll add 10 consistent gets for a grand total of 20 consistent
gets).
"

I think that for the second fetch you wrote the number of rows slightly wrong.

As per my understanding of all the logic you posted, it should have been written as below.

"
Ok, 10 rows/block:

on first fetch we:
get block 1 (1 consistent get) 10 rows
get block 2 (2 consistent gets) 5 rows / 5 left

on second fetch we:
get block 2 (3 consistent gets) 5 rows / 0 left -- this is correct
get block 3 (4 consistent gets) 10 rows (not 5 rows) / 0 left (not 5 left).
......
......
......

and so on. " (I think block 3 will never read more than one time.

As we are expecting this above result considering arraysize 15.

Can you please My though is right or not?

Thanks as always,
Biswaranjan.
Tom Kyte
June 07, 2012 - 7:09 am UTC

I did, it should have been 10 rows/0 left on block 3.

thanks for checking my math :)

I said the right things, but my example was off.

Ravi, September 07, 2012 - 7:39 am UTC

As already said by Mr. Tom (in the followup dated October 27, 2003 - 1pm Central time zone), "it picks up where it left off" -- it will visit block 3 for the 2nd time and visit block 4 only once.

A reader, February 15, 2013 - 4:22 am UTC


is it a known bug for merge join?

ranjan, September 14, 2013 - 11:22 am UTC

Hi Tom,

First of all, God bless you.

I have some doubts about the autotrace and tkprof output below.

SQL*Plus: Release 10.2.0.1.0 - Production on Sat Sep 14 16:00:20 2013

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> drop table dem1;

Table dropped.

SQL> drop table dem2;

Table dropped.

SQL> create table dem1(a number,b number);

Table created.

SQL> create table dem2(a number,b number);

Table created.

SQL> insert into dem1 select level,level+10 from dual connect by level<2001;

2000 rows created.

SQL> insert into dem2 select level,level+10 from dual connect by level<2001;

2000 rows created.

SQL> commit;

Commit complete.
SQL> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEM1',estimate_percent=>100);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats(ownname=>user,tabname=>'DEM2',estimate_percent=>100);

PL/SQL procedure successfully completed.

SQL> set timing on
SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.06
SQL> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.20
SQL> set autotrace traceonly
SQL> alter session set sql_trace=TRUE;

Session altered.
SQL> select /*+ use_nl(dem1,dem2) */ * from dem1,dem2;

4000000 rows selected.

Elapsed: 00:00:31.38

Execution Plan
----------------------------------------------------------
Plan hash value: 2583233735

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  4000K|    53M|  2321   (6)| 00:00:28 |
|   1 |  NESTED LOOPS      |      |  4000K|    53M|  2321   (6)| 00:00:28 |
|   2 |   TABLE ACCESS FULL| DEM1 |  2000 | 14000 |     3   (0)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| DEM2 |  2000 | 14000 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        498  recursive calls
          0  db block gets
     282065  consistent gets
         26  physical reads
          0  redo size
   85426934  bytes sent via SQL*Net to client
    2933707  bytes received via SQL*Net from client
     266668  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
    4000000  rows processed


tkprof output
select /*+ use_nl(dem1,dem2) */ * 
from
 dem1,dem2


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch   266668      3.74       3.86          0     282003          0     4000000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   266670      3.74       3.86          0     282003          0     4000000
#################################################
The above statistics give roughly the right consistent gets, which is
roundtrips (266,668, which is nothing but [2000*2000]/15, the default arraysize) + 8 (blocks holding the 2000 rows) * 2000 (due to the nested loop)
= 266,668 + 16,000 = 282,668

I am OK with the above plan, as my calculation nearly matches the statistics count.
My only doubt is about the time: tkprof reports 3.86 while autotrace reported about 31 seconds elapsed.
What is the unit of the tkprof figure? If it is seconds, why does it show 3.86 instead of 31 seconds? (But anyway, I am OK with the statistics.)

##################################################### 
SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.06
SQL> alter system flush buffer_cache;

System altered.
SQL> select * from dem1,dem2;

4000000 rows selected.

Elapsed: 00:00:30.57

Execution Plan
----------------------------------------------------------
Plan hash value: 960301399

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |  4000K|    53M|  2321   (6)| 00:00:28 |
|   1 |  MERGE JOIN CARTESIAN|      |  4000K|    53M|  2321   (6)| 00:00:28 |
|   2 |   TABLE ACCESS FULL  | DEM1 |  2000 | 14000 |     3   (0)| 00:00:01 |
|   3 |   BUFFER SORT        |      |  2000 | 14000 |  2318   (6)| 00:00:28 |
|   4 |    TABLE ACCESS FULL | DEM2 |  2000 | 14000 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        490  recursive calls
          0  db block gets
       2070  consistent gets
         19  physical reads
          0  redo size
   85426934  bytes sent via SQL*Net to client
    2933707  bytes received via SQL*Net from client
     266668  SQL*Net roundtrips to/from client
          9  sorts (memory)
          0  sorts (disk)
    4000000  rows processed


tkprof for merge join cartesian
select * 
from
 dem1,dem2


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.03       0.49          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch   282864      4.53       4.19          0       2141          0     4242926
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   282868      4.57       4.68          0       2141          0     4242926

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 62  

Rows     Row Source Operation
-------  ---------------------------------------------------
4000000  MERGE JOIN CARTESIAN (cr=2010 pr=0 pw=0 time=4016168 us)
   2000   TABLE ACCESS FULL DEM1 (cr=2003 pr=0 pw=0 time=14024 us)
4000000   BUFFER SORT (cr=7 pr=0 pw=0 time=19446 us)
   2000    TABLE ACCESS FULL DEM2 (cr=7 pr=0 pw=0 time=12 us)

********************************************************************************
##########################################################
My doubt is why it shows the wrong statistics - is it a known bug? (I even got the same in 9i :( )

I even ran a query using where dem1.a=dem2.a, but in the merge join case the consistent gets do not include the roundtrips value along with the other consistent gets.
Note: setting arraysize to a bigger value only reduces the consistent gets in the NL and hash join cases; it has no effect for the merge join, because the merge join shows consistent gets excluding the roundtrip value - which is why arraysize has no effect there.
But in the hash and NL join cases it gives the statistics I expect.

I think it is an autotrace and tkprof statistics bug for merge joins. (I am able to calculate the statistics because of you :) )

Could you please clarify my doubts?


Thanks as always.


Tom Kyte
September 23, 2013 - 6:10 pm UTC

please be more specific about what you think is "a bug" here????

Continuing my last post.

A reader, September 23, 2013 - 6:34 pm UTC

Hi Tom,

Nice to see you back - hope your two-day conference went well :).
The "query" value in the merge join case is "2141".
Isn't that wrong? Shouldn't it be roundtrips + 2141?
Isn't it missing the roundtrips?

Regards,
Ranjan

CG

Jag, October 06, 2013 - 10:17 am UTC

Hi Tom,

I have the following scenario.

Table has 50 rows in 5 blocks (10 rows per block) and the array size is 5.


First Fetch - 1 Consistent Get - 5 Left (Block 1) - 5 Rows
Second Fetch - 2 Consistent gets - 0 Left (Block 1) - 10 Rows
Third Fetch - 3 Consistent gets - 5 Left (Block 2) - 15 Rows
4th Fetch - 4 Consistent gets - 0 Left (Block 2) - 20 Rows
5th Fetch - 5 Consistent Gets -5 Left (Block 3) - 25 Rows
6th Fetch - 6 Consistent Gets - 0 Left (Block 3) - 30 Rows
7th Fetch - 7 Consistent Gets - 5 Left (Block 4) - 35 Rows
8th Fetch - 8 CGs - 0 Left (Block 4) - 40 Rows
9th Fetch - 9 CGs - 5 Left (Block 5) - 45 Rows
10th Fetch - 10 CGs - 0 Left (Block 5) - 50 Rows



As per the formula you have given:

If you full scan a table T that has N blocks and R rows and you use an array fetch size of A, we will typically perform the following number of consistent gets:

Number of Blocks + Rows / ArraySize

= 5 + (50/5) = 15 CG

But the walkthrough above shows I am only doing 10 CG. Where am I making the mistake?

for my recent post.

Ranjan, October 11, 2013 - 2:17 pm UTC

Hi Tom,

Could you please have a look at my above post about the bug in the merge join statistics?

Thanks as always,
Ranjan
Tom Kyte
October 11, 2013 - 4:26 pm UTC

I don't know what your followup means.

the tkprof is reporting precisely what happened.

there were 2,141 consistent gets from the buffer cache.

the autotrace report is reporting precisely what happened.

there were 266,668 round trips between the client and server.


the merge join cartesian is going to use temp, it won't get and re-get blocks over and over and over again. Consider:




ops$tkyte%ORA11GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.



select * from t

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     4867      0.13       0.12          0       5849          0       72980
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4869      0.13       0.13          0       5849          0       72980
********************************************************************************
select * from t order by 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     4867      0.17       0.17          0       1045          0       72980
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4869      0.17       0.17          0       1045          0       72980



the query with the order by had to read the entire table - sort it into temp and return from there. the query without the order by would tend to hit blocks over and over again.

basically the "same" query - they would have the same exact numbers of round trips - but different logical IO needs because of the entirely different workloads they perform.
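
(the same contrast is visible in autotrace -- a sketch; the exact figures depend on your data, block size and arraysize:)

set arraysize 15
set autotrace traceonly statistics

-- no sort: each fetch call re-gets the current block, so consistent gets are high
select * from t;

-- sorted into temp: each block is read once, then rows are served from the sort area
select * from t order by 1;

set autotrace off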

Nice :)

Ranjan, October 11, 2013 - 8:00 pm UTC

It really makes sense now.
I learnt a new thing today.

God bless you.


12.3 yrs later, still the most popular question

A reader, October 18, 2013 - 5:05 am UTC

:)

why do we add the total block number?

A reader, January 02, 2014 - 3:16 pm UTC

Hello Tom,

I don't understand one point: to compute the total consistent gets, why do we add the total number of blocks to the result of the number of rows divided by the fetch arraysize? Shouldn't the blocks already be in memory? Otherwise they would be physical reads, wouldn't they?

(Number of rows / array size) I understand this part
(Number of rows / array size) + number of blocks I don't understand this part

Thanks in advance.
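
(a consistent get is a *logical* read from the buffer cache -- it is counted whether or not the block was already cached; physical reads are a separate statistic. So every block still has to be gotten from the cache at least once, which is the "+ number of blocks" term, while rows/arraysize counts the re-gets caused by the fetch calls. A sketch to see the block term in isolation, using the thread's original 10,000-row test table:)

set arraysize 5000
set autotrace traceonly statistics

-- with a huge arraysize there are very few fetch calls, so the
-- consistent gets fall to roughly the number of blocks in the table
select * from test;

set autotrace off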


consistent gets

A reader, January 16, 2014 - 1:38 pm UTC

Hello Tom,

Why do we see 6 consistent gets when we run "select * from t", given we have only one row and that row resides in 1 block? Why does it do 6 logical I/Os to get that one block?

Example:
========

SQL> create table t ( x int );

Table created.

Elapsed: 00:00:00.02


SQL> insert into t values ( 1 );

1 row created.

Elapsed: 00:00:00.03

Statistics
----------------------------------------------------------
45 recursive calls
56 db block gets
31 consistent gets
0 physical reads
6396 redo size
840 bytes sent via SQL*Net to client
783 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
0 sorts (disk)
1 rows processed

SQL> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.41
SQL> select * from t;

X
----------
1

Elapsed: 00:00:00.01

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
6 consistent gets
0 physical reads
0 redo size
523 bytes sent via SQL*Net to client
524 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


Why does it take 3 db block gets when updating a row (as below)?

SQL> update t set x = x+1;

1 row updated.

Statistics
----------------------------------------------------------
1 recursive calls
3 db block gets
6 consistent gets


Tom Kyte
January 16, 2014 - 7:01 pm UTC

interestingly, Jonathan Lewis *just* recently wrote this up! He saved me some time:

http://jonathanlewis.wordpress.com/2014/01/08/cr-trivia/

just read something similar yesterday

wwtd, January 17, 2014 - 1:05 am UTC

funny that I just read something similar yesterday from Richard Foote.

http://richardfoote.wordpress.com/2009/04/16/indexes-on-small-tables-part-i-one-of-the-few/

hard parse

Fabio, March 13, 2014 - 6:33 pm UTC

Hi Tom,
I used one of your examples to test hard/soft parsing:

drop table t;
create table t ( x int );

begin
    for i in 1 .. 100
    loop
        execute immediate 'insert into t this_should_parse_once (x) values (:x)' using i;
    end loop;
end;
/

but when I ran it through tkprof I got

insert into t this_should_parse_once (x)
values
 (:x)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      100      0.00       0.00          0          0          0           0
Execute    100      0.01       0.01          0        100        329         100
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      200      0.01       0.01          0        100        329         100

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 292     (recursive depth: 1)
********************************************************************************


SQL> select banner from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
PL/SQL Release 9.2.0.4.0 - Production
CORE    9.2.0.3.0       Production
TNS for Linux: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production

SQL>


any ideas?
Thanks
Fabio
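
(for comparison, a sketch of the static-SQL version -- execute immediate issues a parse call on every execution, only the first of which is a hard parse, which matches the "Misses in library cache during parse: 1" above; static SQL inside PL/SQL has its cursor cached, so tkprof would typically show a parse count of 1:)

begin
    for i in 1 .. 100
    loop
        -- static SQL: PL/SQL caches the cursor, so one parse call in total
        insert into t (x) values (i);
    end loop;
end;
/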

Select statement and DB Block Gets

Arvind, March 10, 2017 - 2:31 am UTC

I have a few questions:

1. Why does every select statement take some db block gets?
2. When can a select statement cause more db block gets?
3. If we read a table via an index, will it be db block gets or consistent gets?

Thanks


Connor McDonald
March 11, 2017 - 1:41 am UTC

To read *your* data we need to first read *our* data.

Once we work out what "T" means in:

select * from T

we need to read some dictionary information to locate *where* T is, ie, what extents it comprises and where they are. To do that we read the segment header block, and potentially extent map blocks. These account for your db block gets.
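
(a sketch for observing the two counters yourself -- any table will do, and the exact counts vary by version and segment space management:)

set autotrace traceonly statistics

-- full scan: the "db block gets" line reflects the segment header / extent map reads
select count(*) from t;

set autotrace off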

Hope this helps.
