
Question and Answer

Tom Kyte

Thanks for the question, Uday.

Asked: December 04, 2002 - 5:41 pm UTC

Last updated: February 13, 2013 - 7:57 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

If LIOs are bad,
1) is it possible to reduce the LIOs programmatically?
2) If yes, what steps/guidelines are to be followed, while coding, in order to reduce/avoid LIOs?
3) what is an "OK" number of LIOs (some percentage of SGA or PIO etc.?)

If possible, can you please demonstrate with an example ? (I will be using PL/SQL for coding)

Thanks,
Uday


and Tom said...

1) by rewriting your SQL, tuning your SQL, setting environmental things like sort_area_size or db_file_multiblock_read_count or the optimizer_index_* parameters.

2) the piece of advice I have for everyone: totally and forever forget the concept that "if my query ain't using an index, it must be broken". Massive nested loop joins are the main culprit of this. If you are joining a 1,000,000 row table to a 2,000,000 row table and trying to get 2,000,000 rows back -- it is very doubtful that an index should be used.

3) ratios STINK (I have stronger words but this is a public forum after all). There is one ratio I use -- the soft parse ratio (the percentage of parses that are soft rather than hard). It should be near 100 for most systems. All other ratios -- forget about them. There is no magic "good number". Take the stupidest ratio of them all -- cache hit. I have a theory that systems with high cache hit ratios, over 95, 96% -- are among the most poorly tuned systems. They are experiencing excessive LIO's due to massive nested loop joins. Yet, their DBA's sit there and say "well, my cache hit is 99% so all is well in the world".

I have no ratios for you. If you have a query that you need to execute and the best we can do is 1,000,000 LIO's -- then SO BE IT, that is that. However, if that query could be executing without doing 1,000,000 LIO's then we need to fix it.
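If you want to see the LIO's a query does for yourself, SQL*Plus autotrace will show them (a minimal sketch -- "consistent gets" plus "db block gets" are the LIO's; the table name here is made up):

set autotrace traceonly statistics
select * from big_table where owner = 'WMSYS';
set autotrace off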


Comments

Uday, December 05, 2002 - 10:37 am UTC


Good info...

Robert, December 05, 2002 - 10:58 am UTC

Tom,

Would you parse (no pun intended) a detailed explanation of your example...

<quote>
...Massive nested loop joins are the main culprit of this. If you are joining a 1,000,000 row table to a 2,000,000 row table and trying to get 2,000,000 rows back -- it is very doubtful that an index should be used.
</quote>

Thanks,

Robert.


Tom Kyte
December 05, 2002 - 2:43 pm UTC

Ok, here you go.  So, joe (or josephine) sql coder needs to run the following query:

select t1.object_name, t2.object_name
  from t t1, t t2
 where t1.object_id = t2.object_id
   and t1.owner = 'WMSYS'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    35227      5.63       9.32      23380      59350          0      528384
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    35229      5.63       9.33      23380      59350          0      528384

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 80

Rows     Row Source Operation
-------  ---------------------------------------------------
 528384  HASH JOIN
   8256   TABLE ACCESS FULL T
1833856   TABLE ACCESS FULL T

<b>suppose they ran it or explain planned it -- and saw that plan.  "Stupid stupid CBO" they say -- "I have indexes, why won't it use them.  We all know that indexes mean fast=true! Ok, let me use the faithful RBO and see what happens":</b>

select /*+ RULE */ t1.object_name, t2.object_name
  from t t1, t t2
 where t1.object_id = t2.object_id
   and t1.owner = 'WMSYS'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch    35227    912.07    3440.70    1154555  121367981          0      528384
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    35229    912.07    3440.70    1154555  121367981          0      528384

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: 80

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: RULE
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T'
   2    1     NESTED LOOPS
   3    2       TABLE ACCESS (FULL) OF 'T'
   4    2       INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE)


<b>be careful of what you ask for!</b>

It can be deadly...  Yes, that is 10 seconds vs 1 hour.  All *because of indexes*

Oh -- but my cache hit ratio is really good!

  1  SELECT phy.value,
  2         cur.value,
  3         con.value,
  4         1-((phy.value)/((cur.value)+(con.value))) "Cache hit ratio"
  5  FROM   v$sysstat cur, v$sysstat con, v$sysstat phy
  6  WHERE  cur.name='db block gets'
  7  AND    con.name='consistent gets'
  8* AND    phy.name='physical reads'
ops$tkyte@ORA920.US.ORACLE.COM> /

   VALUE      VALUE      VALUE Cache hit ratio
-------- ---------- ---------- ---------------
 1277377      58486  121661490      .989505609


98.9%!  Boy did I do a good job or what ;)

For Robert

Connor McDonald, December 05, 2002 - 11:33 am UTC

create table one_mill ( x number, y number) tablespace users;

insert /*+ APPEND */ into one_mill
select dbms_utility.get_hash_value(rownum,1,1000000),rownum from x$ksmmem
where rownum < 1000000;

create table two_mill ( x number, y number) tablespace users;

insert /*+ APPEND */ into two_mill
select dbms_utility.get_hash_value(rownum,1,1000000),rownum from x$ksmmem
where rownum < 2000000;

create index ix1 on two_mill ( x ) nologging;

analyze table one_mill estimate statistics;
analyze table two_mill estimate statistics;

select count(*)
from (
select /*+ NO_MERGE ORDERED FULL(o) INDEX(t ix1) USE_NL(t) */ t.y
from one_mill o, two_mill t
where o.x = t.x ) ;

took 75 seconds

select count(*)
from (
select /*+ FULL(t) FULL(o) */ t.y
from one_mill o, two_mill t
where o.x = t.x ) ;

took 23 seconds

Thanks Tom and Connor! ...... now.....

Robert, December 05, 2002 - 4:27 pm UTC

Excellent examples!

Tom,

The explain plan in your example shows a full table scan on both tables... this means Oracle has to store contents of both tables in memory (correct?).

What if we join two 50,000,000 row tables? This will require extensive resources.

How would you suggest attacking this situation... would your philosophy be the same as with large transactions/rollback-segments? (i.e. just allocate the necessary resources and go for it)..

If so, what resources would we need to be aware of in joining, say, two 50,000,000 row tables as in your hash_join scenario?

Thanks,

Robert.



Tom Kyte
December 05, 2002 - 6:37 pm UTC

NO -- full scans -- nothing "gets stored in ram". The database is a tad more sophisticated than that. Result sets are not "pre-made" and stored somewhere. We do not stuff things into memory and assume ram is infinite.

We fetch data from tables as we need it. Here -- for the nested loops join -- it was processed much like this:


for x in ( select * from t t1 )
loop
    for y in ( select * from t t2 where t2.object_id = X.OBJECT_ID )
    loop
        -- output the joined row to the client
    end loop;
end loop;

that query needed very very little memory to work.

The other query was more complex in its processing; the hash join would use hash_area_size chunks of memory (or in 9i with pga_aggregate_target -- some size would be assigned at runtime depending on resources) and the tables would be piecewise hashed together if the hash partitions that were made up didn't fit.
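For illustration, those memory controls look like this (a sketch -- the sizes are arbitrary; hash_area_size is the 8i-style knob, pga_aggregate_target the 9i one):

alter session set hash_area_size = 52428800;      -- 50MB for this session's hash joins (8i style)
alter system set pga_aggregate_target = 1000M;    -- 9i: let Oracle size work areas at runtime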

50,000,000 rows - 5 rows -- same thing. No difference (well, 50,000,000 rows would tend to use the entire hash area size where 5 rows probably won't)...


For 50,000,000 rows -- you might need some temp space. See
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6501620260742

for an example (using sort_area_size but same concept)



FTS not stored in Buffer Cache???

Anthony, December 06, 2002 - 7:28 am UTC

<<QUOTE>>
NO -- full scans -- nothing "gets stored in ram". The database is a tad more
sophisticated then that. Result sets are not "pre-made" and stored somewhere.
We do not stuff things into memory and assume ram is infinite.
<<QUOTE>>

I read in OCP: Oracle8i DBA Architecture and Administration (Sybex); on page 10 it said...

<<QUOTE>>
When new data buffers are moved to the LRU list, they are copied to the MRU end of the list, pushing out the buffers from the LRU end. An exception to this occurs when a full table scan is done and the blocks from a full table scan are written to the LRU end of the list.
<<QUOTE>>

in my own understanding (correct me if I'm wrong), blocks from a table (let's say more than a million records) will be cached in the database buffer, but on the LRU end of the list. So this means all blocks from that table are actually cached?

can you share some lights on this?

Regards,
NOTNA

Tom Kyte
December 06, 2002 - 7:38 am UTC

you are missing the point -- the RESULT SET of your query against a 2 billion row table is not stored in memory somewhere waiting for you to fetch from it.

Sure, blocks are buffered in the buffer cache -- as individual separate blocks waiting for any session to come along and peek at them.

Is that your result set? No.

The question was:

"The explain plan in your example shows a full table scan on both tables... this
means Oracle has to store contents of both tables in memory (correct?)."

and the answer is definitely not. I can query an entire 5gig table with a 5meg buffer cache -- no problem. I will not have "the table in ram"

LIO, Buffer gets, Consistent gets

Arun Gupta, December 06, 2002 - 9:27 am UTC

Tom
01. What is the difference between LIO, Logical Reads, Buffer gets and Consistent gets?
02. As you have advised to reduce LIO, how can I find out LIO from statspack report or is there any other script to find it out?
03. If I have reduced the number of buffer gets for a query, have I reduced LIO?

Thanks

Tom Kyte
December 06, 2002 - 11:17 am UTC

1) that is the definition of LIO's -- logical I/O's, which are buffer reads; they might be consistent gets or current mode gets.

2) buffer gets in the top sql report of statspack.

3) yup

Connor's example

Mikito Harakiri, February 26, 2003 - 5:42 pm UTC

OK, the time difference is 3 times. I reran Connor's example and got 80 sec vs 40 sec. I would expect the buffer gets statistics to reflect this difference, but in reality there is about a 1000 times difference. Something is definitely wrong with Oracle's buffer gets arithmetic.

OK. We see a time difference of 2-3 times. Big Deal! For minuscule time savings like that the CBO risks missing the indexed nested loops in the reverse situation, when the performance difference might be thousands of times in favor of nested loops. And the cardinality/cost calculation is *always* wrong for any sql of medium complexity. If you want to convince me otherwise, please, show me execution statistics where you see the number of rows returned matching the cardinality estimation for each row source.

Tom Kyte
February 27, 2003 - 7:22 am UTC

mikito - be scientific -- post your results exactly (just like I do) and point out where your confusion is.

It is all mathematically provable (and you know, I'm not getting your attitude or approach -- things like "analytics are disgusting", "readers block readers" -- where are you going with all of this?)


the cost cardinality is not always wrong. that is so easy to prove I'll leave that to you (cause to prove "always" wrong, I only need one case -- that's trivial)......... If you want to prove me wrong - offer a scientific proof, not anecdotal "evidence"


I don't mind "devils advocates" but you just keep saying unsubstantiated things with no proofs, making me spend an inordinate amount of time proving the opposite.

A reader, February 27, 2003 - 12:11 pm UTC

tom, you said: "I have a theory that systems with high cache hit ratios, over 95, 96% -- are among the most poorly tuned systems. They are experiencing excessive LIO's due to massive nested loop joins. Yet, their DBA's sit there and say 'well, my cache hit is 99% so all is well in the world'."

should this be read as: HIGH cache hit ratio (excessive LIO's) is BAD, if it is the result of massive nested loop joins. how could one identify unnecessary nested loops?

Tom Kyte
February 27, 2003 - 7:58 pm UTC

yes, it was a bit tongue in cheek.


A very very very poor cache hit *might* be an indicator of a problem

A high cache hit ratio is just meaningless. Neither good, nor bad nor indifferent -- it by itself is not meaningful.

That it is 99% doesn't mean good -- that is my point.

You would look for queries with lots of gets per execute and make them get the answer with fewer gets per execute. The more rows they return -- the more gets they are allowed; but a query that returns, say, a single unaggregated row yet does 100 buffer gets might be in need of some help, for example.
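One way to find such queries is to ask v$sql directly (a sketch -- needs select privilege on v$sql; statspack's "top SQL by buffer gets" section shows the same information):

select *
  from ( select sql_text, buffer_gets, executions,
                buffer_gets / decode( executions, 0, 1, executions ) gets_per_exec
           from v$sql
          order by 4 desc )
 where rownum <= 10;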

Connor's example detailed

Mikito Harakiri, February 27, 2003 - 2:04 pm UTC

Here are my statistics. I'm totally ignoring physical reads since I have a filer with a 4 gig cache (so God knows how many buffers there are between Oracle and the disks ;-)

db1>select count(*) from (
3 select /*+ ORDERED FULL(o) INDEX(t ix1) USE_NL(t) */ t.y
4 from one_mill o, two_mill t
5 where o.x = t.x ) ;

Elapsed: 00:01:42.00
2008774 consistent gets

db1>select count(*) from (
3 select /*+ FULL(t) FULL(o) use_hash(t o)*/ t.y
4 from one_mill o, two_mill t
5 where o.x = t.x ) ;

Elapsed: 00:00:27.05
6434 consistent gets

1 select count(*) from (
3 select /*+ FULL(t) FULL(o) use_merge(o t)*/ t.y
4 from one_mill o, two_mill t
5* where o.x = t.x )

Elapsed: 00:01:21.04
6434 consistent gets

The verdict: buffer gets is a very crude representation of logical I/O. There is no 300x performance difference between the methods.

Example 2 (vision db):
SELECT count(ATTRIBUTE1)
FROM OE_ORDER_LINES_ALL
14077 intermediate rows processed (before aggregation -- that is the size of the table)
15998 buffer gets

When creating a clone table with "create as select", the number of buffer gets drops by a factor of 12.

The verdict is once again: there is something wrong with the buffer gets count.

Finally, you can take any query with 3 or more tables. If one happens to have a decent match between the number of rows processed and the estimated cardinality, then I could easily add a couple of predicates that would make the CBO plan completely ridiculous. Wanna bet?

Tom Kyte
February 27, 2003 - 8:07 pm UTC

The verdict is a good case of false causality here but anyway.

actually -- ignoring the PIO in this case was probably the cause of your confusion. You might have noticed that PIO exceeded LIO in the merge join -- big sort spilt to disk and we did lots of direct reads/writes which bypass the buffer cache so there are extenuating circumstances.

whoever promised you a 300x increase? 300% is what you got, which -- hey, if I could get that every day I'd be a very very happy camper.

Consider that 1:42 is small in the grand scheme of a really big DW query -- that your example should be extrapolated up (Connor's example was a 3x, just -- amazingly -- like yours) to something on the order of my example above -- you might see 10 seconds vs 1 hour (tkprof don't lie)



My verdict -- besides not fully understanding either your point or motivation, that is -- is that LIO is what I tune for, but I ignore nothing cause it is all relevant.

There is not something wrong with buffer gets -- it is your analysis process that is flawed. Show me the plans -- and I'll betcha

a) the query used something akin to first rows
b) the CTAS optimized for all rows

guess what -- first rows does nested loops type stuff. All rows -- full scans, hashing. So, again, you drew a false conclusion based on lack of information.
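You can see that tendency for yourself with the optimizer mode hints (a sketch, reusing the T/T_IDX setup from above):

select /*+ first_rows */ t1.object_name, t2.object_name
  from t t1, t t2
 where t1.object_id = t2.object_id;   -- leans toward index access / nested loops

select /*+ all_rows */ t1.object_name, t2.object_name
  from t t1, t t2
 where t1.object_id = t2.object_id;   -- leans toward full scans / hash join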




put up or ...... show me the money -- show me that it happens every time, each and every time -- that was your premise. as I said, all I need to do with an "every time" sort of comment is show one example that is false. It is up to you to prove that it "always happens"

I find the cardinality to be fairly well derived -- which, if it were not, the CBO would never ever get it right on a 3 table join -- hmmmm.

LIO's and star transformations

Dave, February 28, 2003 - 2:05 am UTC

I am fully into the idea of pretty much disregarding buffer cache hit ratios, but following an upgrade to 9iR2 I saw my chance to look at why I should be getting a 99%+ ratio on a data warehouse. Using the v$segment_statistics view (which I have become a great fan of) I saw a mind-bogglingly large number of LIO's on a very small date dimension table -- about a thousand rows.

It had 19 PIO's, and over 300,000,000 LIO's -- far in excess of the LIO's on any one of the fact tables in the DW, and driving the cache hit ratio to nonsensically high levels. Almost needless to say, the optimizer is making extensive use of star transformations, and this tiny table gets used in hash joins for nearly every query.

Star transformations are doing a great job for us, and if the price to be paid is a relatively massive number of LIO's then I have no complaints. I do wonder though whether there are particular steps I might take to reduce LIO's while sticking with the star transforms.

Do you have any thoughts on this, Tom?

Tom Kyte
February 28, 2003 - 9:34 am UTC

If you use it a lot -- for lots of queries -- one would expect it to have lots of LIO's.

It is a matter of "is it unreasonable given its use". If I used it once in a single query -- 300million would seem "excessive".

If I use it over and over and over and over -- it may not be.

The real question -- are you meeting your performance goals/expectations?


You might want to read
http://www.dbazine.com/jlewis6.html

to get a better understanding of what is going on as well -- JPL has a knack for saying things in a digestible manner

Rory, April 24, 2003 - 3:49 am UTC

Hi Tom,

Just a clarification when it comes to cache hit ratios. I have talked with other DBAs and they still think that the cache hit ratio is important because, based on their experience, whenever it goes down from 99.99 to even just 99.98, calls start coming in about the performance of the system. They have come to associate a 99.99% cache hit ratio with a healthy system -- anything less and a problem must be occurring. How could I prove to them that they are wrong? How can I find the real problem in their system and prove to them that the cache hit ratio doesn't have anything to do with it? Thanks.

Tom Kyte
April 24, 2003 - 8:00 am UTC

show them this example and ask them which query would they want running on their system?

the one that generated a 50% cache hit
or the one that generated the 98% cache hit?


I'd *love* to see the system where a 0.01% reduction would be meaningful -- just love to. It must be the most finely tuned system on the planet.


It is all about wait events, what are people WAITING on.

It is all about reducing LIO's -- if you watch the pennies, the dollars take care of themselves. If you reduce your LIO's, you'll reduce your PIO's (see example above).


if you want a 99.9% cache hit, just run this block:


begin
    loop
        for x in ( select * from dual )
        loop
            null;
        end loop;
    end loop;
end;
/


run as many copies of that as you need to achieve your 99.9% cache hit.

Now what -- are you any faster?

Rory, April 24, 2003 - 9:25 pm UTC

Hi Tom,

Great example. My cache hit ratio did increase when I ran your script. If only I had your knowledge, I would be able to prove them wrong. Any suggestions or techniques you'd like to share on how to find out what the problem really is? Like, for example, look for waits first with this kind of value, or maybe run statspack. Whatever. Thanks again Tom.

Tom Kyte
April 25, 2003 - 7:43 am UTC

well, basically -- you never have a problem until you are not meeting your documented formal performance requirements.

After that -- there are two threads you can follow

o tune what you have
o fix what you have

tuning attempts to work inside the box -- we'll fix this query, that query, remove this wait, that wait and hope for the best.

fixing says "I'm going to look at the application architecture, its algorithms and approaches and first validate that they are right, correct and best practice". E.g., I would take all bad code of the form

loop over every record
insert into another table
commit every now and again
end loop

and make it efficient, fast and correct via

insert into another_table select ... from ...;


I'll make sure I parse once, execute many
That I use binds
That I don't do "select count(*)" just to see if a record exists

and so on....
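For example, that last one -- the "count just to see if it exists" pattern -- might be rewritten like this (a sketch; the table, column and parameter names are made up):

create or replace procedure process_if_exists( p_id in number )
as
    l_dummy number;
begin
    -- instead of "select count(*) ... if count > 0 then ...",
    -- just try to get the row and handle its absence:
    select id into l_dummy from t where id = p_id and rownum = 1;
    -- row exists -- do the real work here
exception
    when no_data_found then
        null;   -- row does not exist -- nothing to do
end;
/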

Rory, April 28, 2003 - 12:02 am UTC

Thanks TOm,

Again, thank you so much for sharing such knowledge. Wish you all the best with your site.

6 stars for this one...

Alvin, May 02, 2003 - 3:26 am UTC

Wow.... refuted another piece of conventional wisdom!

"If my explain plan ain't using my indexes.... most definitely broken"



Tell me if i'm wrong....

Alvin, May 02, 2003 - 10:06 pm UTC

If I remember correctly, one of the Oracle docs says that an index would be of use if the result set is 15% or less of the table's rows.

i'm thinking...

The full table scan ran faster because we expect 2,000,000 rows to be returned -- which is the size of the bigger table.

The query that forced index usage ran slower because of the unnecessary index reads, both logical and physical, since it is unavoidable that the query will touch the whole table anyway.

Was any of my ramblings right?

Tom Kyte
May 02, 2003 - 10:09 pm UTC

give or take 15% to 85% yes -- that would be accurate.


but no, the hash join ran faster because it is a brute force "big" thing. it hits each block as few times as possible

The nested loops plan was a way to make it take 3-4 LIO's to get to each block -- rather than just reading the blocks one by one.

Should we then...

Kamal Kishore, May 03, 2003 - 10:07 am UTC

Hi Tom,
Should we then prefer Hash Join over Nested Loops?
Is there a reason to pick one over the other?
Thanks,


Tom Kyte
May 03, 2003 - 12:07 pm UTC

if one were infinitely superior to the other in all cases and every circumstance, we would have saved ourselves lots of time, energy and money by implementing only one of them.

they both are superior to each other in different circumstances.

Say you were interested in getting just the first row back from this query or the first N rows - nested loops would be awesome.

Say you were interested in getting ALL of the rows but were paging through them in a client server type application (maintaining a stateful connection to the database and fetching 15 rows at a time for a user to look at). Nested loops would be awesome (that, and they would never actually get to the bottom, last page anyhow)


Say you were interested in getting ALL of the rows asap -- nested loops stink, hash join awesome.




Sam, May 03, 2003 - 12:29 pm UTC

Hi ,
In addition to what Tom has explained so far, I would like to present an analogy.
Let's say you are working on a project and you need to refer to the syntax of the "CREATE INDEX" statement.
What will you do? Will you read the Oracle 8i complete reference cover to cover? Perhaps not. The better option is to refer to the index, which points to the actual page number explaining the syntax of "CREATE INDEX". Here, in this case, accessing data is faster through an index.
Now, let's say you are preparing for OCP certification and the complete 8i reference is part of the curriculum. In this scenario, if you refer to the index for every chapter/syntax etc., it will be painful (you will probably miss the examination date). The better option will be to read the complete book cover to cover (i.e., a FULL TABLE SCAN).
Similarly, whether to use a Hash Join or a Nested Loop join depends on the need/requirement.
Just to paraphrase what Tom said in another thread:

"FULL TABLE SCAN is not evil. Indexes may not be GOOD"
"Nothing is black or white. Everything is a shade of grey"

Right Tom...?




Tom Kyte
May 03, 2003 - 7:35 pm UTC

yes.

This boggles my mind...

Alvin, May 14, 2003 - 2:46 am UTC

We have a query that does a full table scan... I thought I could help the query by indexing all the columns (individually) in the predicate.

I created indexes on the individual columns, ran analyze table estimate statistics, and tested the full scan versus the optimizer=CHOOSE path. Here's what I got...


05:22:01 SQL> SELECT calldetailrecordid
05:22:07   2    FROM (SELECT /*+ FULL (rtbetelcalldetailrecords) */ ROWNUM rn, calldetailrecordid
05:22:07   3            FROM rtbetelephony.rtbetelcalldetailrecords
05:22:07   4           WHERE sys_proctime = 0
05:22:07   5             AND sys_procend = 0
05:22:07   6             AND billingrecordid = 0
05:22:07   7             AND duration > 0) temp
05:22:07   8   WHERE temp.rn >= 1
05:22:07   9     AND temp.rn <= 100
05:22:08  10  /


Elapsed: 00:00:56.32

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=41778 Card=1 Bytes=2
          6)

   1    0   VIEW (Cost=41778 Card=1 Bytes=26)
   2    1     COUNT
   3    2       TABLE ACCESS (FULL) OF 'RTBETELCALLDETAILRECORDS' (Cos
          t=41778 Card=1 Bytes=19)





Statistics
----------------------------------------------------------
          0  recursive calls
          4  db block gets
     275553  consistent gets
     242217  physical reads
          0  redo size
       1663  bytes sent via SQL*Net to client
        731  bytes received via SQL*Net from client
          8  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
        100  rows processed

02:17:31 SQL> SELECT calldetailrecordid
02:17:48   2    FROM (SELECT ROWNUM rn, calldetailrecordid
02:17:48   3            FROM rtbetelephony.rtbetelcalldetailrecords
02:17:48   4           WHERE sys_proctime = 0
02:17:48   5             AND sys_procend = 0
02:17:48   6             AND billingrecordid = 0
02:17:48   7             AND duration > 0) temp
02:17:48   8   WHERE temp.rn >= 1
02:17:48   9     AND temp.rn <= 100
02:17:49  10  /



Elapsed: 00:02:150.06

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=96 Card=1 Bytes=26)
   1    0   VIEW (Cost=96 Card=1 Bytes=26)
   2    1     COUNT
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'RTBETELCALLDETAILREC
          ORDS' (Cost=96 Card=1 Bytes=19)

   4    3         AND-EQUAL
   5    4           INDEX (RANGE SCAN) OF 'RTBETELCDR_SYS_PROCEND_IDX'
           (NON-UNIQUE) (Cost=31 Card=1)

   6    4           INDEX (RANGE SCAN) OF 'RTBETELCDR_SYS_PROCTIME_IDX
          ' (NON-UNIQUE) (Cost=31 Card=1)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     284004  consistent gets
     252388  physical reads
          0  redo size
       1607  bytes sent via SQL*Net to client
        731  bytes received via SQL*Net from client
          8  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
        100  rows processed


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=2 Card=2 B
          ytes=52)

   1    0   VIEW (Cost=2 Card=2 Bytes=52)
   2    1     COUNT
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'RTBETELCALLDETAILREC
          ORDS' (Cost=2 Card=2 Bytes=130)

   4    3         INDEX (RANGE SCAN) OF 'RTBETELCDR_BILLINGRECORDID_ID
          X' (NON-UNIQUE) (Cost=1 Card=2)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     280238  consistent gets
     247007  physical reads
          0  redo size
       1663  bytes sent via SQL*Net to client
        731  bytes received via SQL*Net from client
          8  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
        100  rows processed

I don't get it... 

Full Elapsed: 00:00:56.32 <- it costs more but it's faster
Index Elapsed: 00:02:150.06 <- more than twice the time
Index with First_rows hint Elapsed: 00:01:73.65

If I select count(*) on the inner query I get around 2206 rows;
the table itself has 550,000++... I thought that the % returned was reasonable enough to index these columns.

1. Will the optimizer choose the full scan if I compute the statistics instead?

2. Will it be better if I group the predicate columns into one index?

3. Why did the query use only 2 of my indexes when I have 4?

btw, the table is a flat file that they use in production.
 

Tom Kyte
May 14, 2003 - 7:05 am UTC

having each column indexed individually for this query -- useless.

having all columns indexed in a single index -- potentially helpful.


btw: table is not a flat file -- it is a table.




thanks !

Alvin, May 15, 2003 - 2:58 am UTC

I thought that Oracle would query the 4 indexes and 'AND' the results, without touching the table.

Anyway, I dropped the 4 indexes and used 1 index on the 4 columns, and it did help.


Thanks!


Tom Kyte
May 15, 2003 - 9:31 am UTC

it would do that with bitmap indexes, they are the ones that can be "anded" and "ored" together. b*trees, they cannot.

(ps: that does NOT mean "use bitmaps" -- if this table is modified and not in a data warehouse, bitmaps would be inappropriate)
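To illustrate the difference (a sketch only -- and again, only appropriate for low-DML, reporting-style tables; t and the columns are stand-ins):

create bitmap index t_bm1 on t(sys_proctime);
create bitmap index t_bm2 on t(sys_procend);

-- with predicates on both columns, the optimizer can BITMAP AND
-- the two indexes before ever touching the table:
select * from t where sys_proctime = 0 and sys_procend = 0;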

LIO in PL/SQL

Sam, June 20, 2003 - 7:31 pm UTC

Tom,

This discussion has been great.

I will never look at ratios the same way.

I have been asked to tune a slow running PL/SQL procedure which contains a structure like this.

For x_rec in SELECT... From … LOOP
… <<some commands>>
END LOOP;

The table has 90MM rows. The process takes hours to run. Since each row is fetched into the loop and then processed, it would seem that there must be an LIO for each record returned by the cursor. It seems like this is the server-side equivalent of too small an arraysize on the client.

Is this correct?

I am looking at using BULK COLLECT to minimize LIO and perhaps increase performance. Am I on the right track?


Tom Kyte
June 21, 2003 - 10:08 am UTC

well, yes and no.

bulk collect (and forall to update if that is what you are doing) would be faster but -- it is still going to be "slow".

Do anything 90 million times and it'll be slow.

I would look to see (in this order)

o do I NEED to do that process (surprises abound on that one)

o can I do this process in a SINGLE SQL statement (very often found to be true)

o can I parallelize this process -- by splitting the table up by rowid ranges and running N copies of this procedure simultaneously (this can be huge) see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:10498431232211
for example


using bulk binds would be a way to make the 3rd option go even "faster"
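For what it's worth, the bulk-bind version of such a loop might look something like this (a minimal sketch -- the table and column names are made up, and the LIMIT of 100 is just a reasonable starting point):

declare
    cursor c is select id, val from big_tab;                       -- the 90MM row table
    type id_array  is table of big_tab.id%type  index by binary_integer;
    type val_array is table of big_tab.val%type index by binary_integer;
    l_ids  id_array;
    l_vals val_array;
begin
    open c;
    loop
        fetch c bulk collect into l_ids, l_vals limit 100;         -- array fetch
        forall i in 1 .. l_ids.count                               -- array DML
            update other_tab set val = l_vals(i) where id = l_ids(i);
        exit when c%notfound;                                      -- after processing the last (short) fetch
    end loop;
    close c;
    commit;
end;
/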

A reader, July 28, 2003 - 11:53 am UTC


hash joins

kit, November 17, 2003 - 7:19 am UTC

Hi Tom,
I know a few people in this thread have covered hash joins. I was wondering, could you please have a look at the plan below?


HASH JOIN 8 M 596 M 293537
TABLE ACCESS FULL X 90 1 K 7
TABLE ACCESS BY INDEX ROWID Y 9 M 552 M 293457
INDEX FULL SCAN Y.A 9 M 30892


How would the above work -- identify the Y rows via the index first and then do the hash join? Would this be better than a hash join on the two tables directly?

Would it be better to just do fulls on both tables and then do the hash join?

Tom Kyte
November 17, 2003 - 7:45 am UTC

sorry, not really sure what I'm looking at. hows about the autotrace traceonly explain output (a standard format)


and the query is always useful.

Reducing LIOs via hash joins

kit, November 17, 2003 - 10:20 am UTC

As you can see, it's more a question of what the above plan is telling me. My understanding is that it carries out an index join of table Y via the index Y.A and then a full scan on X. Once it finds the records, does it then do the hash join between the keys of the two tables?

My question was: is it wise to mix the NL with the HASH join? I know there is no general answer, but I would appreciate the examples you've come across.


some of your notes explain as below

can you please show an example to understand

" It would seem more logical to me to hash the SMALLER table (less temp stuff to
store, might fit into memory and all)...."






create a hash table on one of A or B (say A) on the join key, creating temp_a.

while NOT eof on B
    read a record in B
    hash the join key and look up into temp_a by that hash key for matching records
    output the matches
end loop

So, a hash join can sometimes be much more efficient (one hash, not two sorts)

Hash joins are used any time a sort merge might be used in most cases. If you
don't see hash joins going on, perhaps you have hash_join_enabled turned off...



It would seem more logical to me to hash the SMALLER table (less temp stuff to
store, might fit into memory and all)....



Tom Kyte
November 19, 2003 - 6:35 am UTC

I'd like an autotrace output -- I'm not sure what I'm looking at

OCP Performance Tuning uses ratios

Ryan Gaffuri, November 17, 2003 - 11:22 am UTC

Tom,

You're a VP at Oracle. Can't you contact Oracle Education and knock some sense into them? Their Performance Tuning OCP tests for both 8i and 9i are LOADED with ratios and other inaccuracies. I know Oracle is a big company, but this is pathetic. A lot of companies now require taking the OCP, and the performance tuning test is just a waste of money.

A lot of entry level people study for these tests and are given HORRIBLE information by Oracle itself. It's very difficult to get them to do the right thing when the 'Official Oracle Certification Test' says to do what is TOTALLY INACCURATE.

I know you're in a separate group. but can't you do anything? That test is stupid.

LIO VS PIO

Hien, November 17, 2003 - 5:53 pm UTC

Tom

Probably this question has been asked before, but could you please explain the difference between LIO and PIO in relation to database design and application design (i.e., query, DML)?

Thanks
Hien

Tom Kyte
November 21, 2003 - 7:31 am UTC

nothing really directly to do with database/application design (indirectly sure)

LIO is a logical I/O: how many buffer gets we needed to do to answer your query.
PIO is a physical I/O: how many times we read disk to do that.

bad designs/sql can lead to excessive LIO's, excessive LIO's may lead to too many PIO's

OCP Performance test

Ryan Gaffuri, November 18, 2003 - 9:52 am UTC

Sorry, let me re-word what I said yesterday. I don't mean my post as an attack on you. However, how come the OCP is all about hit ratios, among a lot of other inaccurate information? You are a VP at Oracle. Don't you have any sway over them? It's rather disheartening to have you tell us to do one thing (which I believe to be correct) and another group that actually certifies Oracle DBAs tell people to do the complete opposite.

Who writes these tests? Have you tried contacting them? It really looks bad for Oracle Corporation to have contradictory information about tuning. It also makes our jobs as DBAs harder, since people often trust the 'official' word from Oracle over what you say. Therefore we have to try to convince them not to do what Oracle 'officially' tells them to do. This can cause a lot of conflict on projects.



OCP tests

reader, November 19, 2003 - 2:31 pm UTC

I agree. The OCP tests and a lot of Oracle books give undue weight to the ratios. It is high time that somebody changed this.

Any beginner can learn from books specifically made for OCP and pass these tests even though they don't have any real database experience.

Leave Tom Alone

Michael, November 21, 2003 - 11:33 am UTC

This is not the forum to be discussing OCP issues.
Tom has graciously helped as many as he can.
If you don't like OCP, don't get it.
If you don't like the book, don't read it.
This is not a whine forum.

cache table

reader, February 08, 2004 - 6:40 pm UTC

What is the effect of the cache clause in creating tables? Is it going to cache the blocks permanently in the buffer cache until the instance is bounced? Is it a way to improve performance in OLTP? Thanks.

Tom Kyte
February 08, 2004 - 9:09 pm UTC

No, it does not cache the blocks permanently. It only changes the manner in which they are treated in a full scan when the table is "large"

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:255215154182

you'd be better off using the KEEP and RECYCLE pools if you want this degree of control
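For reference, assigning a segment to one of those pools is just a storage attribute (a sketch -- the table and index names are placeholders):

alter table t storage ( buffer_pool keep );         -- blocks of T favor the KEEP pool
alter index t_idx storage ( buffer_pool recycle );  -- index blocks age out quickly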

OCP Tests

noel seq, February 09, 2004 - 1:15 am UTC

This is for Ryan Gaffuri.

Tom never meant that hit ratios are at all times totally meaningless. He means that HIT ratios in *most* cases are meaningless -- especially in cases where the LIO's are too high, or the shared pool is too large, or where bind variables are not used for frequently repeated queries.

First tune your system, then look at the hit ratios. Ratios are meaningful only in a tuned environment.

Don't expect your db to be 100% tuned immediately after putting it into production. It's the DBA's job to tune it and then, as a follow up, look at the ratios, where even a slight change means that somewhere something needs tuning.

Noel.

Tom Kyte
February 09, 2004 - 7:23 am UTC

Let me clarify further.

A hit ratio, in isolation -- all by itself, is pretty much meaningless.

You need *more* information. Ratios are not even really relevant all by themselves in a "tuned" environment. They are simply indicators that -- coupled with knowledge -- can be used to identify places to look.


example:

soft parse ratio = 80%. Ok, is that good or bad?

You say bad? I say "oh wait, it is a data warehouse after all, they parse queries that will run for seconds or minutes and need to use advanced features such as star transformation. if they used binds, well, it wouldn't work right"


You say good? I say "oh wait, it is a high end transactional system. Anything less than 99% for them is 'performance nightmare city'. They have a bug in the developed code that must be fixed really soon"



You see -- you need ratio PLUS knowledge and maybe a slew of other numbers in order to come to conclusion.

ratios

Ryan Gaffuri, February 09, 2004 - 8:54 am UTC

How often do you look at ratios when tuning a database? I have seen you write about the parse/execute ratio. Do you even look at the cache hit ratio? In the oraperf statspack report it says to ignore this ratio. I've seen Cary Millsap post that the only thing the cache hit ratio tells him, if it is too high, is that there is a lot of bad SQL.

What about the other ratios? I never use them personally. Should I use them in any form of context? You don't really even mention them in either of your books.

My understanding was that you can get more precise information by looking at the wait events rather than looking at ratios.

Tom Kyte
February 09, 2004 - 9:43 am UTC

I use the soft parse % and ask "what kind of system do you have".

I predominantly use wait events for tuning applications...

A high cache hit ratio very often means

a) applications are totally untuned, performing massive LIO's that they need not perform
b) applications are partially tuned
c) applications are totally tuned

that is - a high cache hit by itself means nothing.

server restart

Riyaz, June 11, 2004 - 8:52 am UTC

I have read most of your valuable points on binds, used them effectively, and reaped huge performance improvements.

But our DBA is taking a logical export daily in the morning and shutting/starting the db without restarting the server. (production db)

So I am losing the effectiveness, am I not?

But at least I am happy that, after the first time, if the same query comes again on that day (only on that day), only a soft parse will happen.




Tom Kyte
June 11, 2004 - 4:15 pm UTC

hopefully your dba knows they need to take real backups, not those fake ones. dmp files are not "very useful" for restoring from a media failure.

restarting the server every day -- hmmmm, hmmmm.

bind - every day down/start the Oracle

Riyaz, June 12, 2004 - 12:59 am UTC

I requested the DBA many times, but still he is doing the logical export backup (db in archive log mode only) and shutting down/starting the Oracle instance after taking the backup (no restart of the server).
(And also he is taking a cold backup once weekly)
(production db size 20 GB)

So I get the benefit of binds only for that day. In fact our system is wasting resources, isn't it? Please confirm.



Tom Kyte
June 12, 2004 - 9:40 am UTC

if the db is in archive log mode and they use export (and not backups) they have just wasted everyone's time.

doing the export is fine
shutting down and restarting is "not smart" (nice way of saying it).

Fetching row from 2 tables

A reader, June 22, 2004 - 8:56 pm UTC

Suppose I need to get some columns from 2 tables based on the PK for each table. Instead of doing

select col1 into v_col1 from t1 where pk=v_pk1;
select col2 into v_col2 from t2 where pk=v_pk2;

I do

select t1.col1,t2.col2 into v_col1,v_col2
where t1.pk=v_pk1 and t2.pk=v_pk2;

Are these 2 equivalent, would one incur more LIO than the other? Thanks

Tom Kyte
June 22, 2004 - 10:38 pm UTC

they should incur the same number of I/O's on the server -- you would just be reducing the number of round trips to/from the server.

that and both v_col1/v_col2 would be "not set" if either t1 or t2 did not have that key value.

A reader, June 22, 2004 - 8:57 pm UTC

Doh, I meant

select t1.col1,t2.col2 into v_col1,v_col2
from t1,t2
where t1.pk=v_pk1 and t2.pk=v_pk2;

How can I reduce the LIOs on these inserts?

Peter Tran, July 14, 2004 - 3:42 pm UTC

Hi Tom,

I hope this is the right thread to ask this question. I have the following table.

CREATE TABLE OD_BIDPRICE
( CMPID NUMBER(10,0) NOT NULL,
FLTDATE DATE NOT NULL,
EFFDATE DATE NOT NULL,
DISCONTINUEDATE DATE NOT NULL,
SEATIDX NUMBER(3,0) NOT NULL,
VALUE NUMBER(10,0) NOT NULL
) PARTITION BY HASH (CMPID)
(
PARTITION OD_BIDPRICE_P01,
PARTITION OD_BIDPRICE_P02,
PARTITION OD_BIDPRICE_P03,
PARTITION OD_BIDPRICE_P04,
PARTITION OD_BIDPRICE_P05,
PARTITION OD_BIDPRICE_P06,
PARTITION OD_BIDPRICE_P07,
PARTITION OD_BIDPRICE_P08
)


CREATE INDEX OD_BIDPRICE_PK ON OD_BIDPRICE
(
CMPID, EFFDATE, SEATIDX
) LOCAL
(
PARTITION OD_BIDPRICE_P01,
PARTITION OD_BIDPRICE_P02,
PARTITION OD_BIDPRICE_P03,
PARTITION OD_BIDPRICE_P04,
PARTITION OD_BIDPRICE_P05,
PARTITION OD_BIDPRICE_P06,
PARTITION OD_BIDPRICE_P07,
PARTITION OD_BIDPRICE_P08
)

select count(*) from od_bidprice partition(OD_BIDPRICE_P01);

COUNT(*)
----------
3079500

select count(*) from od_bidprice partition(OD_BIDPRICE_P02);
COUNT(*)
----------
3040068

select count(*) from od_bidprice partition(OD_BIDPRICE_P03);
COUNT(*)
----------
3092623

select count(*) from od_bidprice partition(OD_BIDPRICE_P04);
COUNT(*)
----------
3082870

select count(*) from od_bidprice partition(OD_BIDPRICE_P05);

COUNT(*)
----------
3123177

select count(*) from od_bidprice partition(OD_BIDPRICE_P06);
COUNT(*)
----------
3077022

select count(*) from od_bidprice partition(OD_BIDPRICE_P07);
COUNT(*)
----------
3098323

select count(*) from od_bidprice partition(OD_BIDPRICE_P08);
COUNT(*)
----------
3077328

When I do 8K inserts, I get the following tkprof:

********************************************************************************

INSERT INTO
OD_BidPrice (CmpId, FltDate, EffDate, DiscontinueDate, SeatIdx, Value)
VALUES (:1,
to_date(to_char(:2), 'YYYYMMDD'),
to_date(to_char(:3), 'YYYYMMDD'),
to_date(to_char(:4), 'YYYYMMDD'),
:5,
:6)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute   8283    172.55    1449.29          5    4031814    1244654        8283
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     8283    172.55    1449.29          5    4031814    1244654        8283

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: 21

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  log buffer space                                1        1.00          1.00
  SQL*Net message to client                    8283        0.02          0.08
  SQL*Net message from client                  8283        0.16         12.98
  latch free                                   2921        0.32         22.87
  buffer busy waits                            3008        0.32         37.69
  wait list latch free                            3        0.05          0.08
  log file switch completion                      2        0.06          0.08
  db file sequential read                         5        0.07          0.15

********************************************************************************

Why the large LIOs? I only have PK index. NOTE this table can incur a lot of concurrent inserts.

Thanks,
-Peter


Tom Kyte
July 15, 2004 - 11:29 am UTC

well, you have multi-versioning going on -- you have index reading going on -- and you started tracing AFTER you started the inserts (there is no parse, hence you started tracing after the insert started -- so the counts could be "suspect")

if I was asked "how can I speed this up" -- I'd say three things right now....

o check your freelists -- maybe you need more

o what's up with to_date(to_char(:bv))? if :bv is already a date -- just use trunc(:bv)

o and most importantly -- figure out how to incorporate ARRAY PROCESSING into your code -- save up 100 rows, insert all at once. that single thing will give you the biggest bang for the buck.
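If the inserting client were PL/SQL, the array version might look roughly like this (a sketch -- the dummy values are placeholders for wherever the data really comes from; a non-PL/SQL client would use its language's equivalent array interface):

declare
    type num_tab  is table of number index by binary_integer;
    type date_tab is table of date   index by binary_integer;
    l_cmpid  num_tab;
    l_flt    date_tab;
    l_eff    date_tab;
    l_disc   date_tab;
    l_seat   num_tab;
    l_value  num_tab;
begin
    for i in 1 .. 100 loop          -- fill ~100 rows, then insert them in one call
        l_cmpid(i) := i;            -- dummy values for illustration only
        l_flt(i)   := sysdate;
        l_eff(i)   := sysdate;
        l_disc(i)  := sysdate;
        l_seat(i)  := 1;
        l_value(i) := 0;
    end loop;
    forall i in 1 .. l_cmpid.count  -- one round trip, one execute for all 100 rows
        insert into od_bidprice
               (cmpid, fltdate, effdate, discontinuedate, seatidx, value)
        values (l_cmpid(i), l_flt(i), l_eff(i), l_disc(i), l_seat(i), l_value(i));
    commit;
end;
/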


Great advice!

Peter Tran, July 16, 2004 - 9:12 am UTC

Hi Tom,

Thank you! You said: "you have multi-versioning going on -- you have index reading going on --"

1) Since the table incurs a high volume of concurrent reads and writes, the sessions are most likely hitting the RBS to get consistent reads, right?

2) When the query uses the RBS to build a consistent read, the LIOs also go up? Do you have a link to another discussion demonstrating this?

3) Would it be better if I try to redesign the application to minimize concurrent reads/writes against heavily accessed tables?

Thanks,
-Peter

Tom Kyte
July 16, 2004 - 11:31 am UTC

1) could be

2) run this:

set echo on
clear screen

drop table t;

create table t ( x int, y int ) tablespace users;
insert into t values (1,1);
commit;
select * from t;
pause
clear screen


variable a refcursor
variable b refcursor
variable c refcursor

alter session set events '10046 trace name context forever, level 12';
begin
    open :a for select * from t a;
    open :b for select * from t b;
    open :c for select * from t c;
end;
/
pause
clear screen
print a
pause
clear screen

begin
    for i in 1 .. 10000
    loop
        update t set x = x+1;
        commit;
    end loop;
end;
/

pause
clear screen
print b
update t set x = x+1;
commit;

print c
pause
clear screen

and review the tkprof...

3) not really -- it is something to be aware of, not to avoid.


your biggest bang for the buck will be array processing <<<<<========


My buck...

Peter Tran, July 16, 2004 - 11:48 am UTC

"your biggest bang for the buck will be array processing <<<<========"

You aren't kidding.

I passed your recommendation on to the developer who maintains this code.

Here's the result from his test.

First timings: 130.1 sec
Second run timings (old code still): 112.1 sec

Batch times: 12.3 sec (approx. 10X improvement)

Thanks!
-Peter

Ps. How do I get the HTML tags working on asktom?

Tom Kyte
July 16, 2004 - 1:56 pm UTC

the only tags I support are my own :)

use hash-B for bold start and hash-b for bold stop (hash = #)

Great...

Marcio, July 16, 2004 - 4:48 pm UTC

Test Bold
Test normal

*Wonderful*



Bold!

A reader, July 17, 2004 - 12:35 am UTC

Argh, you don't have to make it case-sensitive! :)

Tom Kyte
July 17, 2004 - 2:36 pm UTC

it is a thing I use -- for me. it was *trivial* to replace the hash-B and hash-b markers with the opening and closing bold tags.

if it were a toggle or something -- it would mandate parsing -- didn't want to parse, wanted to replace.

OPTIMIZER_*

friend, July 22, 2004 - 11:13 pm UTC

Hi Tom

I am fine.

This time I have to set a test strategy for the optimizer_index_cost_adj parameter. Example:
explain plan for select * from gca_booking_detail where geo_cd='A';
----------------------------------------------------------------------------
| Id  | Operation          | Name               | Rows  | Bytes | Cost  |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                    |  3105K|   296M|   152K|
|*  1 |  TABLE ACCESS FULL | GCA_BOOKING_DETAIL |  3105K|   296M|   152K|
----------------------------------------------------------------------------

alter session set optimizer_index_cost_adj=10;
explain plan for select * from gca_booking_detail where geo_cd='A';
-------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                  | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                       |  3105K|   296M|   103K  (0)|
|   1 |  TABLE ACCESS BY INDEX ROWID| GCA_BOOKING_DETAIL    |  3105K|   296M|   103K  (0)|
|*  2 |   INDEX RANGE SCAN          | PK_GCA_BOOKING_DETAIL |  3105K|       |   5523 (0)|
-------------------------------------------------------------------------------------------

Suggest the appropriate value for this parameter, and also note any issues, if any.

Tom Kyte
July 23, 2004 - 8:32 am UTC

suggest default values.


but search for those parameters by name on this site to read more about them.

if you have my book "Effective Oracle by Design" -- see chapter 6 on the CBO.

log

oracle, July 28, 2004 - 2:29 am UTC

How do I find a user's last login time?



Tom Kyte
July 28, 2004 - 8:13 am UTC

auditing -- you would enable auditing and you would have access to this sort of information.

audit

oracle, July 28, 2004 - 8:18 am UTC

But for that I have to enable auditing before the users log in, right?

Tom Kyte
July 28, 2004 - 12:48 pm UTC

SQL> audit connect;


but yes, in order to get extra data that by default is not recorded -- you need to ask for it to be recorded.
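once session auditing is in effect, the audit trail views show the logins -- something like this (a sketch, assuming the standard dba_audit_session view):

select username, timestamp, logoff_time
  from dba_audit_session
 order by timestamp desc;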

 

Yes you have to enable it

Venkatesh Babu, July 28, 2004 - 10:32 am UTC

Yes

Needless to say... you have to enable auditing.

venki

strong

sim, July 29, 2004 - 9:27 am UTC

Hi Tom,
The requirement is one db for writing and one db for querying,
and the synchronization process should be automatic, without any downtime.

How do we do that?
Please provide links and keywords.


Tom Kyte
July 29, 2004 - 12:58 pm UTC

logical standby
or
replication


there is a data guard concepts guide (logical standby)
there is a replication guide

all documentation available on otn.oracle.com

Candidates for keep/recycle pools

A reader, August 09, 2004 - 6:42 pm UTC

Can you please show how to use the new v$segment_statistics view in order to determine good candidates for the keep/recycle buffer pools?

Something like, any segment experiencing more than 50% of its LIO as physical reads and whose segment size is less than 10% of the size of the default buffer pool would be a good candidate for the keep pool?

Or maybe there is another way to determine candidates for these pools?

Thanks

Tom Kyte
August 09, 2004 - 9:01 pm UTC

I believe the keep/recycle pool are for the "last 2 feet of tuning".

that is, you'll know the object(s) that belong there, you've been concentrating on that table or two for so long -- they are etched in your mind.

they are the last things to look at in the tuning cycle.

ratios -- nope, not going there.


you have segments whereby 50% of their LIO is PIO and they are less than 10% the size of the buffer cache (and the LIOs are *high* over a short period -- 50 LIO's don't count)? that does sound extreme.

Buffer pools

A reader, August 09, 2004 - 9:12 pm UTC

"that is, you'll know the object(s) that belong there, you've been concentrating on that table or two for so long -- they are etched in your mind"

actually that's true. I see a lot of PIOs for a few tables and I thought that they might benefit from the keep pool?

why do you say the keep/recycle pools are the last things to look at?

what are the first things to look at? high buffer gets/execution, bind variables -- is that about it?

thanks

Tom Kyte
August 09, 2004 - 9:27 pm UTC

the first things?

the query
the algorithm
the application

why is it doing so much work?


the thing with caches is -- things you use, you keep. things you don't use as much, you throw out. they are generally very effective all by themselves. that is why the keep/recycle pool would be the last two feet (not even the last mile) of tuning.


high buffer gets/execute -> index abuse

bind variables -> too many hard parses, but not really anything to do with IO's too much.

calculating LIO

Ryan Gaffuri, August 13, 2004 - 10:46 pm UTC

I have a small table with 56 rows. The HWM is 60 Blocks. The table is in memory. No physical IO.

I do a full table scan and I use 63 LIOs. I am assuming that 60 of these LIOs are for the 60 Blocks. What accounts for the other 3 LIOs? I ran the query repeatedly and the number of LIOs is consistent.

The query is of the form:
select col1, col2
from table;




Tom Kyte
August 14, 2004 - 1:20 pm UTC

arraysize....


say you have a table with packed blocks.

say there are exactly 45 rows/block.

Say there are 2 blocks.

you use sqlplus -- default arraysize of 15.

The first fetch will fetch 15 rows, that'll get the first block, get 15 rows and return them.  

The second fetch will likewise fetch 15 rows -- that'll GET THE FIRST BLOCK again (another LIO) and get 15 rows and return them.

The 3rd will likewise fetch 15 rows.  that'll get the first block FOR THE THIRD time, get 15 rows and return them.

The 4th fetch will probably get the first block again -- just to see if the block is "exhausted of rows" and then get the second block....


Now, you probably get the first block 4 times and the second block 4 times.

2 blocks, 8 lios.



there are segment header gets as well (how many and what kind depends on release...)


consider:

ops$tkyte@ORA9IR2> create table t ( x int ) tablespace manual;
 
Table created.
 
ops$tkyte@ORA9IR2> insert into t select rownum from all_objects where rownum <= 45;
 
45 rows created.
 
ops$tkyte@ORA9IR2> alter table t minimize records_per_block;
 
Table altered.
 
ops$tkyte@ORA9IR2> insert into t select rownum from all_objects where rownum <= 45;
 
45 rows created.
 
ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select dbms_rowid.rowid_block_number(rowid), count(*) from t
  2  group by dbms_rowid.rowid_block_number(rowid);
 
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)   COUNT(*)
------------------------------------ ----------
                                  10         45
                                  11         45
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly statistics
ops$tkyte@ORA9IR2> set arraysize 2
ops$tkyte@ORA9IR2> select * from t;
 
90 rows selected.
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         48  consistent gets
          0  physical reads
          0  redo size
       6246  bytes sent via SQL*Net to client
        983  bytes received via SQL*Net from client
         46  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         90  rows processed
 
ops$tkyte@ORA9IR2> set arraysize 15
ops$tkyte@ORA9IR2> select * from t;
 
90 rows selected.
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         10  consistent gets
          0  physical reads
          0  redo size
       1462  bytes sent via SQL*Net to client
        554  bytes received via SQL*Net from client
          7  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         90  rows processed
 
ops$tkyte@ORA9IR2> set arraysize 100
ops$tkyte@ORA9IR2> select * from t;
 
90 rows selected.
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads
          0  redo size
       1111  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         90  rows processed
 
ops$tkyte@ORA9IR2> set autotrace off
 

LIO continued...

Ryan Gaffuri, August 13, 2004 - 10:48 pm UTC

Also, I am the only user so Oracle should not be reconstructing a read consistent view which would increase LIOs.

Tom Kyte
August 14, 2004 - 1:20 pm UTC

see above.

followup to previous question

Ryan Gaffuri, August 16, 2004 - 9:53 am UTC

Thanks Tom. I have several follow-up questions. They are a bit picky... hope that's OK.

1. I set the arraysize to 1000. Since I only have 56 rows in the table and 60 blocks below
the HWM, shouldn't I get it all in one pass?

However, I still get 63 LIOs. I used the following query:

select dbms_rowid.rowid_block_number(rowid), count(*) from <my_table>
2 group by dbms_rowid.rowid_block_number(rowid);

I found that I have 27 blocks with rows in them. I take this to mean that I have 23 blocks without
rows, but below the HWM. Can I take this to mean that it is not possible to use one LIO to read
2 blocks at the same time? So if I have an arraysize of 1000 and just 10 rows in each block,
will I only fetch 10 rows per LIO?

2. Also, does arraysize affect inserts, updates, and deletes from SQL*Plus? What about when
I use a stored procedure -- how do I adjust my arraysize then? I know how to use 'bulk collect',
but what if I return a SQL statement via a ref cursor to the user, or have DML in a package?


Tom Kyte
August 16, 2004 - 7:34 pm UTC

1) told you that there are gets for the segment headers -- how many depends on the release.

that query tells you how many blocks have rowid head pieces (hey, that could be another reason now that I think about it). In a table that has had rows migrated -- you cannot count rowids and see what blocks have data on them.


2) not from sqlplus, just array fetches. bulk collect is your "arraysize" in plsql. forall i dml is your bulk modification.

a ref cursor is just a cursor, it'll be as if you returned a cursor to them (as if they opened it themselves) -- no different.
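For illustration, a minimal sketch of those PL/SQL equivalents -- BULK COLLECT with LIMIT as the "arraysize", FORALL for the bulk DML. It assumes a scratch table t(x int) like the demo tables above; the limit of 100 is an arbitrary choice:

declare
    cursor c is select x from t;
    type num_tab is table of t.x%type index by binary_integer;
    l_vals num_tab;
begin
    -- bulk collect with limit: the plsql "arraysize"
    open c;
    loop
        fetch c bulk collect into l_vals limit 100;
        for i in 1 .. l_vals.count
        loop
            null;   -- process l_vals(i); rows arrive 100 at a time
        end loop;
        exit when c%notfound;
    end loop;
    close c;

    -- forall i dml: the bulk equivalent for modifications
    select x bulk collect into l_vals from t;
    forall i in 1 .. l_vals.count
        delete from t where x = l_vals(i);   -- one round trip for the whole array
end;
/

The LIMIT clause is what keeps memory bounded -- an unbounded bulk collect on a large table trades one problem (round trips) for another (PGA).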

The cardinality example challenge of February 26, 2003...

Kerry, December 30, 2004 - 5:26 pm UTC

Referring back to the challenge on February 26, 2003 to produce an example of explain plan reporting the wrong cardinality, here's one. Sorry, it uses 4 tables, not just 3:

begin
execute immediate 'create table a (z integer, v integer)';
execute immediate 'create table b (w integer)';
execute immediate 'create table c (w integer, z integer)';
execute immediate 'create table d (v integer, w integer)';
end;
/

begin
for i in 1..7008
loop
insert into a (z,v) values (i ,i);
insert into b (w) values (i);
insert into c (w,z) values (i,i);
insert into d (v,w) values (i,i);
end loop;
end;
/

begin
dbms_stats.gather_table_stats(ownname => user,tabname => 'A');
dbms_stats.gather_table_stats(ownname => user,tabname => 'B');
dbms_stats.gather_table_stats(ownname => user,tabname => 'C');
dbms_stats.gather_table_stats(ownname => user,tabname => 'D');
end;
/

select *
from a, b, c, d
WHERE c.w = b.w
AND a.z = c.z
AND d.v = a.v
AND d.w = c.w;


The query returns 7008 rows. The cardinality reported by explain plan is just one row. The number 7008, in case you are wondering, comes from the real-life example from which I extracted this test case. In our application this is part of a much larger query which seems to choose random plans because of many different cardinality problems. Once the cardinality is wrong for one part of the query, it's like flipping a coin whether the stats gathered on all of the related tables will cause the optimizer to choose a good plan or a bad plan. It's been a real ride figuring out why our application dies on queries that worked fine the day before.

Is this just an issue of predicate independence assumption?

Kerry, January 03, 2005 - 5:46 pm UTC

Is this just an issue of the predicate independence assumption (see http://www.csd.uch.gr/~hy460/pdf/CostOptimizer.pdf ), or a different kind of problem?

DBMS_SQLTUNE does a decent job on a query as simple as the one above, but in a larger context (nested inside of a complicated view), DBMS_SQLTUNE doesn't find a reasonable plan.

Re: Kerry

Bob B, January 08, 2005 - 1:45 pm UTC

I'm curious, do you see the same cardinality "error" with

begin
execute immediate 'create table a (z integer, v integer)';
execute immediate 'create table b (w integer, z integer)';
execute immediate 'create table c (v integer, w integer)';
end;
/

begin
for i in 1..7008
loop
insert into a (z,v) values (i ,i);
insert into b (w,z) values (i,i);
insert into c (v,w) values (i,i);
end loop;
dbms_stats.gather_table_stats(ownname => user,tabname => 'A');
dbms_stats.gather_table_stats(ownname => user,tabname => 'B');
dbms_stats.gather_table_stats(ownname => user,tabname => 'C');
end;
/

select *
from a, b, c
WHERE a.z = b.z
AND c.v = a.v
AND c.w = b.w;

Yes, that's a 3 table version of the same problem

Kerry, January 10, 2005 - 1:36 pm UTC

Bob,

You have indeed reduced the problem to three tables. And I now have a name for this problem -- "transitive closure" -- and you can find it in the same article I quoted above.


parallel query

Samuel, June 14, 2005 - 1:45 pm UTC

Does parallel query bypass the buffer cache? If so -- we use parallel queries a lot in my data warehouse environment -- does that mean the buffer cache is not used for storing the table blocks? Thanks.

Tom Kyte
June 14, 2005 - 4:18 pm UTC

It can, it doesn't have to.

the buffer cache is definitely used for many things though.

parallel query and buffer cache

Samuel, June 14, 2005 - 4:23 pm UTC

-----It can, it doesn't have to.
-----the buffer cache is definitely used for many things though.

Is there a way to find out whether a parallel query used the buffer cache or the PGA to store table blocks? Thanks.

Tom Kyte
June 14, 2005 - 4:40 pm UTC

you would look at the statistics regarding direct io and consistent gets for the PQ sessions involved.
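A sketch of one way to do that (run from a DBA account while the parallel query is executing; v$px_session maps each PX slave back to its query coordinator's sid):

select st.sid, sn.name, st.value
  from v$px_session px, v$sesstat st, v$statname sn
 where px.sid = st.sid
   and st.statistic# = sn.statistic#
   and sn.name in ( 'physical reads direct', 'consistent gets' )
 order by st.sid, sn.name;

Large values of 'physical reads direct' relative to 'consistent gets' would indicate the slaves are reading around the buffer cache.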



A reader, June 22, 2005 - 11:01 am UTC

Hi Tom,

I have tables t1 and t2

create table t1
(
id number,
dd varchar2(20)
);

create table t2
(
id number,
type char(1)
);

I did the following

1 select /*+ FIRST_ROWS */ t1.dd,
2 max(decode(t2.type,'S','S')) maxS, max(decode(t2.type,'E','E')) maxE
3 from t1, t2
4 where t1.id = t2.id
5 and rownum <= 10
6* group by dd
10:50:44 > /

Elapsed: 00:04:38.01

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=212711 Car
d=10 Bytes=290)

1 0 SORT (GROUP BY) (Cost=212711 Card=10 Bytes=290)
2 1 COUNT (STOPKEY)
3 2 HASH JOIN (Cost=158523 Card=12083133 Bytes=350410857)
4 3 PARTITION RANGE (ALL) (Cost=82947 Card=9551698 Bytes
=200585658)

5 4 TABLE ACCESS (FULL) OF 'T1' (TABLE) (Cost=829
47 Card=9551698 Bytes=200585658)

6 3 PARTITION RANGE (ALL) (Cost=44077 Card=16730772 Byte
s=133846176)

7 6 TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=440
77 Card=16730772 Bytes=133846176)





Statistics
----------------------------------------------------------
535 recursive calls
0 db block gets
575745 consistent gets
555539 physical reads
0 redo size
552 bytes sent via SQL*Net to client
500 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
5 rows processed

What am I doing wrong? Just to retrieve 10 rows it is taking 4 min 38 secs. My pga_aggregate_target was 24 MB and I increased it to 256 MB. Should I increase it more? How do I know when to stop? Is it proportional to the memory? Please help.

Thanks.


Tom Kyte
June 22, 2005 - 4:44 pm UTC

it has to join and then rownum can be assigned to the join rows.

short of the fact that this query returns "garbage, random, meaningless data"

what is the real case you are working with????

A reader, June 22, 2005 - 5:04 pm UTC

The real-life case is this: I have to retrieve a person's driver's license number, SSN, and some other demographic information, along with the photo and signature. The demographic data is stored in one table; the photo and signature are stored in another table as two different rows. I don't want two rows back for the same person -- I am trying to return the photo and signature in the same cursor along with the latest demographic data.

So now can you help me with this query please?

Thanks.


Tom Kyte
June 23, 2005 - 1:49 pm UTC

join


select ...
from t1, t2 A, t2 B
where t1.driver# = :x
and a.driver# = :x and a.type = 'photo'
and b.driver# = :x and b.type = 'sig';

or whatever you need to do to identify the rows of interest.

To "reader"

Peter Tran, June 23, 2005 - 1:48 am UTC

It may be easier for Tom to help you if you used a real example. Why can't you show the actual tables and indexes? It would also help if you show the real query you're running.

Without this info, Tom would have to be a mind reader to give you the correct answer. I've seen him do the mind-reading trick several times. Often he guesses right, but why make him go through the effort?

The more detail you give, the better the answer you'll receive.

Just my 2 cents.

-Peter

Thanks a lot Tom, it works great

A reader, June 25, 2005 - 9:25 am UTC


LIO and concurrency

Ajeet, January 04, 2007 - 7:22 am UTC

Hi Tom,

I am facing a problem -- I have a query which shows just 5 LIOs in the tkprof report when I run it from SQL*Plus,

but when the same query is run by 100 users (a simulation run using LoadRunner), statspack shows 237 LIOs per execution.

Below is the output of trace file

select LOCPARAMNAME, LOCPARAMVALUE from peasuser.LOCATIONPARAMETERVALUES
where PROCESSID = '0-8421-59-91b68b0a0:10fef21b562:ffb' AND ACTIVITYID = -1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          5          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 5 (SYSTEM)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY INDEX ROWID LOCATIONPARAMETERVALUES (cr=5 pr=0 pw=0 time=92 us)
1 INDEX RANGE SCAN PKEY_LOC_VAL (cr=4 pr=0 pw=0 time=67 us)(object id 34272)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
1 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'LOCATIONPARAMETERVALUES' (TABLE)
1 INDEX MODE: ANALYZED (RANGE SCAN) OF 'PKEY_LOC_VAL' (INDEX
(UNIQUE))


and below is the output of statspack report -

^LSQL ordered by Gets DB/Inst: FINXIBM2/FINXIBM2 Snaps: 674-676
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> End Buffer Gets Threshold: 10000 Total Buffer Gets: 39,984,854
-> Captured SQL accounts for 97.9% of Total Buffer Gets
-> SQL reported below exceeded 1.0% of Total Buffer Gets

                                                     CPU     Elapsd        Old
  Buffer Gets   Executions  Gets per Exec %Total Time (s)  Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
      5,285,749       22,275          237.3   13.2   138.70    211.88 3366211266
Module: JDBC Thin Client
select LOCPARAMNAME, LOCPARAMVALUE from LOCATIONPARAMETERVALUES
where PROCESSID = :1 AND ACTIVITYID = -1

What could be the possible reason for this, and how can I fix it?

here is the descriptiton of the table

SQL> desc LOCATIONPARAMETERVALUES
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 PROCESSID                                 NOT NULL VARCHAR2(40)
 ACTIVITYID                                NOT NULL NUMBER(38)
 LOCPARAMNAME                              NOT NULL VARCHAR2(100)
 LOCPARAMVALUE                                      VARCHAR2(100)
 ACTIVITYINSTANCEID                        NOT NULL NUMBER(38)

table has approx 60000 rows.

indexes on the table are

INDEX_LOCVAL_PARAMNAME   LOCPARAMNAME         1   NONUNIQUE
INDEX_LOCPV_PID_AID      PROCESSID            1   NONUNIQUE
INDEX_LOCPV_PID_AID      ACTIVITYID           2   NONUNIQUE
PKEY_LOC_VAL             PROCESSID            1   UNIQUE
PKEY_LOC_VAL             ACTIVITYID           2   UNIQUE
PKEY_LOC_VAL             ACTIVITYINSTANCEID   3   UNIQUE
PKEY_LOC_VAL             LOCPARAMNAME         4   UNIQUE

Kindly advise.

Ajeet

Please see the above question

Ajeet, February 19, 2007 - 7:26 am UTC

Hi Tom,

Could you please comment on the above question? Also, in general, is it possible for the number of LIOs (logical IOs) for a query to be affected by the number of concurrent users?

Regards
Ajeet
Tom Kyte
February 19, 2007 - 9:35 am UTC

the other people are also modifying this structure right?

do this:

ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2> insert into t values ( 0 );

1 row created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable x refcursor
ops$tkyte%ORA10GR2> exec open :x for select * from t;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> begin
  2     for i in 1 .. 1000
  3     loop
  4          update t set x = x+1;
  5          commit;
  6     end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> @mystat logical
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE
---------------------- ----------
session logical reads       34541

ops$tkyte%ORA10GR2> print x

         X
----------
         0

ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
session logical reads       35549            1,008

ops$tkyte%ORA10GR2> exec open :x for select * from t;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> @mystat logical
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE
---------------------- ----------
session logical reads       35549

ops$tkyte%ORA10GR2> print x

         X
----------
      1000

ops$tkyte%ORA10GR2> @mystat2
ops$tkyte%ORA10GR2> set echo off

NAME                        VALUE DIFF
---------------------- ---------- ------------------
session logical reads       35560               11




[tkyte@desktop sqlstuff]$ cat mystat.sql
set echo off
set verify off
column value new_val V
define S="&1"
column name format a22

set autotrace off
select a.name, b.value
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%' || lower('&S')||'%'
-- and lower(a.name) = lower('&S')
/
set echo on
[tkyte@desktop sqlstuff]$ cat mystat2.sql
set echo off
set verify off
column diff format a18
select a.name, b.value Value, to_char(b.value-&V,'999,999,999,999') diff
from v$statname a, v$mystat b
where a.statistic# = b.statistic#
and lower(a.name) like '%' || lower('&S')||'%'
/
set echo on
[tkyte@desktop sqlstuff]$

yes, there are deletes from this table too

Ajeet, February 20, 2007 - 1:33 am UTC

Hi Tom,
Yes, I looked at the complete code, and data is being deleted from this table as well, as part of the same transaction in which the select statement is being executed.

So if data is being deleted/updated, the number of logical IOs will increase because now we have to read from the undo segments for read consistency -- is that correct? I have read it somewhere in your book.

Thanks
Tom Kyte
February 20, 2007 - 9:38 am UTC

yes, it is correct.

I just, well, demonstrated that right above....

Side effects of reducing LIOs

manoj pradhan, May 28, 2007 - 7:27 am UTC

Tom,
I have rewritten a SQL statement.
The LIO of the new query is reduced by 68%,
but other parameters have increased.

TKPROF ---
SQL_Old
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      119      0.64       0.61          0     180431         16        1759

SQL_new
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch      119      0.77       0.84       2537      55990         77           0

Autotrace Statistics

                       SQL_Old    SQL_New
recursive calls              0         40
db block gets               16         77
consistent gets         180431      56005
physical reads               0       2537
redo size                    0          0
..
..
sorts (memory)               6         40
sorts (disk)                 0          5
rows processed            1759       1759

In SQL*Plus:
SQL_New took 00:00:02.90
SQL_Old took 00:00:02.48

Would you please tell which query is better -- SQL_Old or SQL_New -- and why?



Tom Kyte
May 30, 2007 - 10:10 am UTC

neither is better, both are best, both are worst.


In general, the goal is to decrease the logical IO performed (in general, IN GENERAL)

LIOs = consistent gets = latches on buffer cache = locks, serialization devices.

locks = serialization.
latching = cpu consumption (the more people attempting to get the latch at the same time, the higher the cpu)

eg: the first query takes .64 cpu seconds IN SINGLE USER MODE. but it takes 180,431 logical IOs. Each of those LIOs requires latching. As you go multi-user, that query that takes .64 cpu seconds will start taking MORE cpu seconds as people collide (serialize) quickly on latches - the more people trying to get a latch, the more CPU it'll take to get it. So, the growth in CPU of that query over time will be at a high rate. I'm not stating the obvious that if it takes .64 cpus seconds for a single user, then it'll take 2x0.64 for 2 and 3x0.64 for three - rather that it'll be 2x(0.64 PLUS) for two and 3x(0.64 PLUS PLUS) for three and so on (plus is not really something we can compute, we just know it will be there)

the second query, using 55,990 (about 1/3) the LIOs, does 1/3 the latching - however, it looks like it is doing a full scan which will tend to do physical IO (you did) and physical IOs are costly as well as you increase users.

So, it is a trade off here, one will consume more latches and cpu as you scale up users, the other hits the disk and will incur physical IO waits as you scale up users.
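If you want to watch that latching happen, one rough approach (a sketch, not a full harness) is to snapshot the cache buffers chains latch -- the latch protecting the buffer cache -- before and after each query and difference the numbers:

column name format a25
select name, gets, misses, sleeps
  from v$latch
 where name = 'cache buffers chains';

Gets will track the LIOs; under concurrency, misses and sleeps climb too, and that is where the extra CPU goes.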

Eldon, March 13, 2008 - 8:35 pm UTC

Note the number of disk sorts in the new SQL query --
that's going to (artificially) slow the query down.

Check pga_aggregate_target if you are using automatic PGA sizing.

Reducing LIO's

Sanjay, April 10, 2008 - 11:24 pm UTC

Hi Tom,

In the following example, the elapsed time of the second query is much lower than the first one, but the LIO is higher. I want to choose the second one to implement -- do you think that is the right choice?
  1  UPDATE counterparties   cp
  2  SET    status           =  'D'
  3       , date_to          =  to_date('28-mar-2008') - 1
  4  WHERE  status           =  'C'
  5  AND    src_sys_code     =  'CUSTDBNW'
  6  AND    exists
  7         (select 1 from ( SELECT sys_cust_id
  8           FROM   counterparties
  9           WHERE  status         =  'C'
 10           AND    src_sys_code   =  'CUSTDBNW'
 11           MINUS
 12           SELECT src_id
 13           FROM   tmp_cdb_per
 14           WHERE  src_sys_code   =  'CUSTDBNW'
 15           AND    reject_flag    IS NULL
 16           MINUS
 17           SELECT src_id
 18           FROM   tmp_cdb_np
 19           WHERE  src_sys_code   =  'CUSTDBNW'
 20           AND    reject_flag    IS NULL
 21*        ) cp1 where cp.sys_cust_id = cp1.sys_cust_id)
16:36:19 SQL> /

575 rows updated.

Elapsed: 00:02:157.47

Execution Plan
----------------------------------------------------------
   0      UPDATE STATEMENT Optimizer=CHOOSE (Cost=72760 Card=1 Bytes=1
          13)

   1    0   UPDATE OF 'COUNTERPARTIES'
   2    1     TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'COUNTERPARTIES'
          (TABLE) (Cost=3 Card=1 Bytes=91)

   3    2       NESTED LOOPS (Cost=72760 Card=1 Bytes=113)
   4    3         VIEW (Cost=65903 Card=671691 Bytes=14777202)
   5    4           MINUS
   6    5             MINUS
   7    6               SORT (UNIQUE)
   8    7                 PARTITION RANGE (SINGLE) (Cost=45729 Card=67
          1691 Bytes=16120584)

   9    8                   TABLE ACCESS (FULL) OF 'COUNTERPARTIES' (T
          ABLE) (Cost=45729 Card=671691 Bytes=16120584)

  10    6               SORT (UNIQUE)
  11   10                 PARTITION RANGE (SINGLE) (Cost=1791 Card=229
          058 Bytes=5039276)

  12   11                   TABLE ACCESS (FULL) OF 'TMP_CDB_PER' (TABL
          E) (Cost=1791 Card=229058 Bytes=5039276)

  13    5             SORT (UNIQUE)
  14   13               PARTITION RANGE (SINGLE) (Cost=4459 Card=44977
          6 Bytes=9895072)

  15   14                 TABLE ACCESS (FULL) OF 'TMP_CDB_NP' (TABLE)
          (Cost=4459 Card=449776 Bytes=9895072)

  16    3         PARTITION RANGE (SINGLE) (Cost=2 Card=1)
  17   16           INDEX (RANGE SCAN) OF 'NI_FI_CNTR_SRC' (INDEX) (Co
          st=2 Card=1)





Statistics
----------------------------------------------------------
       2143  recursive calls
      13062  db block gets
     182160  consistent gets
     182381  physical reads
    1845960  redo size
        493  bytes sent via SQL*Net to client
        984  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
        115  sorts (memory)
          0  sorts (disk)
        575  rows processed

16:38:59 SQL> ALTER SYSTEM FLUSH SHARED_POOL;

System altered.

Elapsed: 00:00:00.12
16:43:04 SQL> ALTER SYSTEM FLUSH BUFFER_CACHE;

System altered.

Elapsed: 00:01:115.71

SQL> UPDATE counterparties   cp
   2  SET    status           =  'D'
   3       , date_to          =  to_date('28-mar-2008') - 1
   4  WHERE  status           =  'C'
   5  AND    src_sys_code     =  'CUSTDBNW'
   6  AND    not exists
   7         ( SELECT src_id
   8              FROM   tmp_cdb_per cus_id
   9              WHERE  src_sys_code   =  'CUSTDBNW'
  10              AND    reject_flag    IS NULL
  11       and cp.sys_cust_id = src_id
  12   union 
  13   SELECT src_id
  14              FROM   tmp_cdb_np
  15              WHERE  src_sys_code   =  'CUSTDBNW'
  16              AND    reject_flag    IS NULL
  17       and cp.sys_cust_id = src_id
  18         ) 
  19  /

575 rows updated.

Elapsed: 00:00:32.28

Execution Plan
----------------------------------------------------------
   0      UPDATE STATEMENT Optimizer=CHOOSE (Cost=5482426 Card=671691
          Bytes=61123881)

   1    0   UPDATE OF 'COUNTERPARTIES'
   2    1     PARTITION RANGE (SINGLE) (Cost=34317 Card=33585 Bytes=30
          56235)

   3    2       TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'COUNTERPARTIES
          ' (TABLE) (Cost=34317 Card=33585 Bytes=3056235)

   4    3         INDEX (RANGE SCAN) OF 'NI_FI_CNTR_SRC' (INDEX) (Cost
          =3685 Card=11617)

   5    4           SORT (UNIQUE) (Cost=8 Card=2 Bytes=44)
   6    5             UNION-ALL
   7    6               PARTITION RANGE (SINGLE) (Cost=3 Card=1 Bytes=
          22)

   8    7                 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'TMP_
          CDB_PER' (TABLE) (Cost=3 Card=1 Bytes=22)

   9    8                   INDEX (UNIQUE SCAN) OF 'UK_TCDBP_1' (INDEX
           (UNIQUE)) (Cost=2 Card=1)

  10    6               PARTITION RANGE (SINGLE) (Cost=3 Card=1 Bytes=
          22)

  11   10                 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'TMP_
          CDB_NP' (TABLE) (Cost=3 Card=1 Bytes=22)

  12   11                   INDEX (UNIQUE SCAN) OF 'UK_TCBNP_1' (INDEX
           (UNIQUE)) (Cost=2 Card=1)





Statistics
----------------------------------------------------------
       7625  recursive calls
      13062  db block gets
    4712313  consistent gets
      33661  physical reads
    1845792  redo size
        493  bytes sent via SQL*Net to client
        847  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
     672133  sorts (memory)
          0  sorts (disk)
        575  rows processed

Tom Kyte
April 11, 2008 - 7:42 am UTC

Reducing LIOs is the general goal -- it improves scalability for concurrent users. But looking at a query like this, concurrency isn't an issue; this is a batch update. The runtime differences here will be due to the physical IOs.

You might even try:
UPDATE counterparties   cp
 SET    status           =  'D'
      , date_to          =  to_date('28-mar-2008') - 1
 WHERE  status           =  'C'
 AND    src_sys_code     =  'CUSTDBNW'
 AND    sys_cust_id NOT IN
        ( SELECT src_id
             FROM   tmp_cdb_per cus_id
             WHERE  src_sys_code   =  'CUSTDBNW'
             AND    reject_flag    IS NULL
             and src_id is not null
           union all
          SELECT src_id
             FROM   tmp_cdb_np
             WHERE  src_sys_code   =  'CUSTDBNW'
             AND    reject_flag    IS NULL
             and src_id is not null
        )


which would tend to full scan the two tmp tables and then do a hash anti join to CP. src_id probably should be defined as NOT NULL in the two tmp tables -- but if it isn't, the "and src_id is not null" predicate will deal with it (it really should be declared NOT NULL, though).
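The reason those "src_id is not null" predicates matter: a single NULL in a NOT IN subquery makes the predicate UNKNOWN for every row, so the update would silently match nothing. A quick sanity check:

-- one NULL in the NOT IN list and nothing ever matches:
select * from dual where 1 not in ( select null from dual );   -- no rows selected
select * from dual where 1 not in ( select 2    from dual );   -- returns the row

Declaring the columns NOT NULL (or filtering the NULLs as above) is also what lets the optimizer safely use the hash anti join here.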

FORGET CACHE HIT RATIOS

Dhairyasheel, April 15, 2008 - 1:22 am UTC

Hi Tom,
Seeing so many people train their guns on cache hit ratios makes me wonder what the source of their inclination is.
Unfortunately, it's the Oracle Press material which propagated this 90% buffer cache hit ratio methodology -- the same Oracle Press material from which I completed my OCP.

Traditionally, I believe, Oracle has had different methodologies for performance tuning: first it was the wait interface, then cache hit ratios, then user response times.

However, there is definitely no single golden rule for performance.

What should be avoided is ad hoc queries from SQL tools on production boxes which are running well-tuned applications.

As far as buffer cache hit ratios go, I've seen millisecond response times on a banking production box showing a 60% cache hit ratio, and nobody was complaining.

I think there is a serious need to educate people on how to use and respect production environments and what methods to follow for performance tuning.


how to arrive at a baseline for LIO

Karthik, June 11, 2008 - 1:12 pm UTC

Tom,

How can I know what a reasonable value for LIO is? Say I fetch 100 blocks via FTS from a single table and my MBRC is 32 -- then my number of reads will be 3 or 4. If I use an index which takes a lot more blocks (just assuming) and its cost is comparatively higher than the FTS, should I baseline it at 3 or 4 for the current data load? Please see whether my approach is correct; if not, please tell me how to get a basic idea of LIO.
Tom Kyte
June 11, 2008 - 4:49 pm UTC

the number of reads with a multi block read count of 32 for 100 blocks might be 50 physical IO's - and the LIO's would always be at least 100, maybe more


say the buffer cache has blocks 1, 3, 5, 7, .... 99 in it.
the buffer cache does not have blocks 2, 4, 6, 8, ... loaded.

you select count(nullable_column) from t;

that must full scan. Now, we'd like to read 32 blocks at a time - but we cannot. Block 1 is in the cache, block 3 is in the cache. We need to read block 2 from disk - but we cannot re-read blocks 1 and 3 along with it (reading them from disk would not be right; we have to use the versions we find in the cache). The same thing happens with blocks 4, 6, 8 and so on.

So, 50 physical IO's later - blocks 1..100 have been in the buffer cache, and we did 100 logical IO's to read them out - compute the count and return to the client.

So, end result: 50 physical IO's, 100 logical IO's


Now, change the query to

select nullable_column from t;

and assume slow by slow processing. Assume each block in our table contains on average 10 rows (so there are 1,000 records).

We do the same physical IO's, but we do 1,000 LOGICAL IOS. Because, you prepare the query, then fetch row 1 - that causes us to get block 1 from the cache (LIO), return the row, discard the block. You fetch row 2 - that causes us to get block 1 again, and so on...

end result: 50 physical IO's, 1,000 logical IO's





High LIO vs. excessive redo

Chen, June 25, 2008 - 2:59 am UTC

In AWR we have seen very high logical reads (300K/s) and excessive redo size (1M/s). I know DML generates redo and bad queries generate high LIO. I wonder: if the queries got tuned, would the redo be reduced?
Tom Kyte
June 25, 2008 - 8:49 am UTC

300k/second is tiny.
1m/s is small.

or they could be large
or they could be medium

they are what they are. They are not outrageous.

but anyway. Most SQL selects do not generate redo. Modifications do. The way to reduce redo is to reduce the size of the modifications (eg: make sure you have only the indexes you really need; use UPDATE, not DELETE+INSERT; limit the number of times you roll back -- if you rollback a lot, an update generates redo and rolling it back generates MORE redo, all for naught; and so on)
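You can watch this for yourself with the mystat/mystat2 scripts shown earlier in this thread -- a sketch, assuming some scratch table t(x int) you are free to modify:

@mystat "redo size"     -- snapshot the session's redo size
update t set x = x + 1;
@mystat2                -- diff: the redo generated by the update
rollback;
@mystat2                -- the diff grows again: rolling back generated MORE redo, all for naught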

Chen, June 25, 2008 - 3:09 am UTC

Tom,
I forgot to include the stats from AWR. Please comment on the system's health.
The snapshot covers 1 hour.

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 2,028,573.00 1,119,790.27
Logical reads: 296,617.02 163,735.22
Block changes: 14,962.63 8,259.51
Physical reads: 1,042.23 575.32
Physical writes: 292.18 161.29
User calls: 5,080.09 2,804.25
Parses: 1,229.50 678.70
Hard parses: 0.62 0.34
Sorts: 220.41 121.67
Logons: 0.23 0.13
Executes: 3,046.53 1,681.71
Transactions: 1.81

% Blocks changed per Read: 5.04 Recursive Call %: 24.02
Rollback per transaction %: 4.00 Rows per Sort: 55.94

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.96 Redo NoWait %: 99.37
Buffer Hit %: 99.65 In-memory Sort %: 100.00
Library Hit %: 99.93 Soft Parse %: 99.95
Execute to Parse %: 59.64 Latch Hit %: 99.95
Parse CPU to Parse Elapsd %: 67.57 % Non-Parse CPU: 98.71

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 90.42 90.35
% SQL with executions>1: 84.38 86.19
% Memory for SQL w/exec>1: 92.32 91.50

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read           1,056,206      50,503     48   46.1 User I/O
read by other session               397,266      23,788     60   21.7 User I/O
CPU time                                         20,065          18.3
db file scattered read               93,235       8,163     88    7.5 User I/O
db file parallel write              226,771       3,033     13    2.8 System I/O
-------------------------------------------------------------
Time Model Statistics DB/Inst: RSPROD01/RSPROD01 Snaps: 17951-17952
-> Total time in database user-calls (DB Time): 109514.2s
-> Statistics including the word "background" measure background process
time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name

Statistic Name Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time 108,350.5 98.9
DB CPU 20,065.1 18.3
parse time elapsed 589.5 .5
hard parse elapsed time 262.8 .2
hard parse (sharing criteria) elapsed time 118.7 .1
sequence load elapsed time 94.8 .1
PL/SQL execution elapsed time 72.5 .1
connection management call elapsed time 3.5 .0
repeated bind elapsed time 3.0 .0
PL/SQL compilation elapsed time 2.3 .0
hard parse (bind mismatch) elapsed time 1.3 .0
failed parse elapsed time 0.0 .0
DB time 109,514.2 N/A
background elapsed time 8,108.6 N/A
background cpu time 1,363.0 N/A
-------------------------------------------------------------

Wait Class DB/Inst: RSPROD01/RSPROD01 Snaps: 17951-17952
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc

Avg
%Time Total Wait wait Waits
Wait Class Waits -outs Time (s) (ms) /txn
-------------------- ---------------- ------ ---------------- ------- ---------
User I/O 1,552,481 .0 82,477 53 237.2
System I/O 282,475 .0 6,424 23 43.2
Application 613 37.4 2,174 3547 0.1
Commit 6,416 .5 352 55 1.0
Configuration 684 26.9 331 483 0.1
Other 15,098 .1 89 6 2.3
Network 15,905,117 .0 41 0 2,430.1
Concurrency 7,440 4.4 14 2 1.1
-------------------------------------------------------------

Wait Events DB/Inst: RSPROD01/RSPROD01 Snaps: 17951-17952
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
---------------------------- -------------- ------ ----------- ------- ---------
db file sequential read 1,056,206 .0 50,503 48 161.4
read by other session 397,266 .0 23,788 60 60.7
db file scattered read 93,235 .0 8,163 88 14.2
db file parallel write 226,771 .0 3,033 13 34.6
SQL*Net break/reset to clien 368 .0 1,490 4050 0.1
control file sequential read 24,372 .0 1,138 47 3.7
Log archive I/O 8,447 .0 813 96 1.3
log file parallel write 13,168 .0 785 60 2.0
SEEMS PROBLEM >>enq: TX - row lock contentio 238 95.8 680 2859 0.0
log file sequential read 7,347 .0 449 61 1.1
log file sync 6,416 .5 352 55 1.0
log file switch completion 352 37.2 218 618 0.1
control file parallel write 2,250 .0 205 91 0.3
SEEMS PROBLEM>>>enq: CF - contention 50 16.0 67 1335 0.0
log buffer space 229 .9 62 272 0.0
log file switch (private str 56 89.3 51 905 0.0

how to reduce elapsed time

A reader, July 24, 2008 - 12:43 pm UTC

Tom,

One of my inserts is showing a high elapsed time; below is the tkprof. My transaction is ongoing, and every time I trace it I get almost the same values.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.01          0          0          0           0
Execute      1      0.16     143.39          0        758          1           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.18     143.41          0        758          1           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD AS SELECT (cr=758 pr=0 pw=0 time=143392299 us)
0 PX COORDINATOR (cr=758 pr=0 pw=0 time=143391748 us)
0 PX SEND QC (RANDOM) :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN BUFFERED (cr=0 pr=0 pw=0 time=0 us)
0 BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
42455 TABLE ACCESS FULL L_ORDERLINE_CAMPAIGN_MAP (cr=753 pr=0 pw=0 time=169933 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL SPLIT (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS BY USER ROWID RAW_AD_LOG (cr=0 pr=0 pw=0 time=0 us)


Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT MODE: ALL_ROWS
0 LOAD AS SELECT OF 'STAGE_DELIVERY_AD_LOG'
0 PX COORDINATOR
0 PX SEND (QC (RANDOM)) OF ':TQ10002' [:Q1002]
0 HASH JOIN (BUFFERED) [:Q1002]
0 BUFFER (SORT) [:Q1002]
0 PX RECEIVE [:Q1002]
0 PX SEND (HASH) OF ':TQ10000'
42455 TABLE ACCESS MODE: ANALYZED (FULL) OF
'L_ORDERLINE_CAMPAIGN_MAP' (TABLE)
0 PX RECEIVE [:Q1002]
0 PX SEND (HASH) OF ':TQ10001' [:Q1001]
0 NESTED LOOPS [:Q1001]
0 PX BLOCK (ITERATOR) [:Q1001]
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'SPLIT'
(TABLE) [:Q1001]
0 TABLE ACCESS MODE: ANALYZED (BY USER ROWID) OF
'RAW_AD_LOG' (TABLE) [:Q1001]

At the end... (there are many insert statements in the procedure, and each one shows the same kind of result, i.e. a high value in the elapsed column)

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     1796      1.12       1.57          0          1         49           0
Execute   7193     28.38    2765.41   10613392   77753516     708509    17937122
Fetch    17157     14.76      83.13      25568      48040         32       29417
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    26146     44.27    2850.12   10638960   77801557     708590    17966539

Misses in library cache during parse: 388
Misses in library cache during execute: 219

162 user SQL statements in session.
4499 internal SQL statements in session.
4661 SQL statements in session.
60 statements EXPLAINed in this session.

Is this correct? The time it takes matches this report. How can I reduce that elapsed time? I am using Oracle 10g.
Tom Kyte
July 29, 2008 - 9:23 am UTC

you are using parallel query, the processing time is spent in the parallel execution servers.

that is, you are tracing the parent session, it does very little work - the real work is done in the parallel execution servers.

eg: there is nothing to see in this trace file, the work is not being performed there.

reducing elapsed

A reader, July 25, 2008 - 2:02 pm UTC

hi

Tom, can you help me with my previous message?

how to reduce elapsed time

A reader, July 29, 2008 - 2:33 pm UTC

Thanks Tom, but I did not quite understand. One thing is clear -- the elapsed time is very high. How can I reduce it?

Tom Kyte
August 01, 2008 - 10:27 am UTC

well, you'd have to either

a) make the query run faster (write it differently, more efficiently - no idea if you can or cannot, that is just one approach)

b) look at what is being waited on by the parallel execution servers - if anything - try to eliminate that wait.
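For (b), a sketch of one way to see what the PX servers are waiting on while the statement is running (v$px_session ties each slave to its query coordinator's sid):

select px.qcsid, px.sid, w.event, w.seconds_in_wait
  from v$px_session px, v$session_wait w
 where px.sid = w.sid
 order by px.qcsid, px.sid;

If the slaves spend their time on IO waits, that points back at (a); if they sit mostly idle on PX Deq events, the bottleneck is elsewhere in the plan.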

elapsed

A reader, July 29, 2008 - 8:29 pm UTC

detailed trace output.
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      3.44    2811.88          0    1006559       6699           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      3.44    2811.88          0    1006559       6699           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 17.13 17.13
********************************************************************************

UPDATE M_BATCH_TASKS SET TASK_STATUS = :B2
WHERE
TASK_NAME = :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1          0           0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 UPDATE STATEMENT MODE: ALL_ROWS
0 UPDATE OF 'M_BATCH_TASKS'
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_M_BATCH_TASKS'
(INDEX (UNIQUE))


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 6.54 6.54
********************************************************************************

COMMIT


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.00          0          0          0           0
Execute      3      0.01       0.16    3017622     971108       6244           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.01       0.16    3017622     971108       6244           0

Misses in library cache during parse: 0
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Execute Reply 3 0.03 0.03
db file sequential read 2 0.02 0.02
rdbms ipc reply 3 0.03 0.03
log file sync 1 0.01 0.01
PX Deq: Signal ACK 7 0.03 0.03
SQL*Net message to dblink 1 0.00 0.00
SQL*Net message from dblink 1 0.00 0.00
********************************************************************************

SELECT PARAMETER_VALUE
FROM
M_BATCH_PARAMETERS WHERE PARAMETER_NAME = :B1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      7      0.00       0.00          0          0          0           0
Fetch        7      0.00       0.00          0         14          0           7
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15      0.00       0.00          0         14          0           7

Misses in library cache during parse: 1
Misses in library cache during execute: 2
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'M_BATCH_PARAMETERS' (TABLE)
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF
'PK_M_BATCH_PARAMETERS' (INDEX (UNIQUE))

********************************************************************************

SELECT TO_NUMBER(TO_CHAR(:B1 ,'J'))
FROM
DUAL


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
0 FAST DUAL

********************************************************************************

SELECT MIN(TRUNC(SCHEDULED_DATE, 'MM')) ,MAX(TRUNC(SCHEDULED_DATE, 'MM'))
FROM
BD_F_ORDER_LINE


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      2.56      84.62      33273      33324          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      2.56      84.62      33273      33324          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
0 SORT (AGGREGATE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF 'F_ORDER_LINE' (TABLE)



Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 1060 1.69 80.26
db file sequential read 1 0.00 0.00
latch free 2 0.09 0.09
latch: shared pool 1 0.00 0.00
********************************************************************************

SELECT MAX(PROPORTION_MONTH) ,MIN(PROPORTION_MONTH)
FROM
BD_L_AD_SLOT_PROPORTION


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.33       0.45          0       1057          1           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00      31.59          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.33      32.05          0       1057          1           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS (REMOTE)
0 SORT (AGGREGATE)
0 PARTITION RANGE (ALL) PARTITION: START=1 STOP=2
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'L_AD_SLOT_PROPORTION' (TABLE) [RTIMDV1] PARTITION: START=1
STOP=2


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
single-task message 1 0.06 0.06
SQL*Net message to dblink 10 0.00 0.00
SQL*Net message from dblink 10 31.59 31.66
********************************************************************************

alter session disable parallel dml


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)
********************************************************************************

alter session force parallel dml parallel 4


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)
********************************************************************************

INSERT INTO RAPT_BDDM.F_AD_SLOT_DAXESH ( SCHEDULED_DATE ,AD_SLOT_ID ,
AD_COMPONENT_ID ,INVENTORY_TYPE_ID ,DELIVERY_TYPE_ID ,SCHEDULED_IMPRESSIONS
,EXTENDED_SALES_PRICE ,EXTENDED_SALES_PRICE_TD ,EXTENDED_LIST_PRICE ,
EXTENDED_FLOOR_PRICE ,CONFIRMED_TRAFFIC ,CONFIRMED_IMPRESSIONS ,
DELIVERED_REVENUE ,CLICKS ,ADJUSTMENT_IMPRESSIONS ,ADJUSTMENT_SALES_PRICE ,
SERVED_IMPRESSIONS ) SELECT TRUNC(SCHEDULED_DATE, 'MM') ,AD_SLOT_ID ,
AD_COMPONENT_ID ,INVENTORY_TYPE_ID ,DELIVERY_TYPE_ID ,
SUM(NVL(SCHEDULED_IMPRESSIONS,0)) SCHEDULED_IMPRESSIONS ,
SUM(NVL(EXTENDED_SALES_PRICE,0)) EXTENDED_SALES_PRICE ,
SUM(NVL(EXTENDED_SALES_PRICE_TD,0)) EXTENDED_SALES_PRICE_TD ,
SUM(EXTENDED_LIST_PRICE) EXTENDED_LIST_PRICE ,SUM(EXTENDED_FLOOR_PRICE)
EXTENDED_FLOOR_PRICE ,SUM(CONFIRMED_TRAFFIC) CONFIRMED_TRAFFIC ,
SUM(CONFIRMED_IMPRESSIONS) CONFIRMED_IMPRESSIONS ,SUM(DELIVERED_REVENUE)
DELIVERED_REVENUE ,SUM(CLICKS) CLICKS ,SUM(NVL(ADJUSTMENT_IMPRESSIONS,0))
ADJUSTMENT_IMPRESSIONS ,SUM(NVL(ADJUSTMENT_SALES_PRICE,0))
ADJUSTMENT_SALES_PRICE ,SUM(SERVED_IMPRESSIONS) SERVED_IMPRESSIONS FROM (
SELECT NVL(FOL.SCHEDULED_DATE, FDD.DELIVERY_DATE) SCHEDULED_DATE ,
NVL(FOL.AD_SLOT_ID, FDD.AD_SLOT_ID) AD_SLOT_ID ,NVL(FOL.AD_COMPONENT_ID,
FDD.AD_COMPONENT_ID) AD_COMPONENT_ID ,NVL(FOL.INVENTORY_TYPE_ID, CASE WHEN
FDD.ORDER_LINE_ID = '-1' THEN '7' WHEN FDD.ORDER_LINE_ID = '-2' THEN '6'
WHEN FDD.ORDER_LINE_ID = '-3' THEN '8' ELSE D.INVENTORY_TYPE_ID END )
INVENTORY_TYPE_ID ,NVL(FOL.DELIVERY_TYPE_ID, D.DELIVERY_TYPE_ID)
DELIVERY_TYPE_ID ,NVL(SCHEDULED_IMPRESSIONS,0) SCHEDULED_IMPRESSIONS ,
NVL(EXTENDED_SALES_PRICE,0) EXTENDED_SALES_PRICE ,(CASE WHEN
NVL(FOL.SCHEDULED_DATE,FDD.DELIVERY_DATE) > SYSDATE THEN 0 ELSE
NVL(EXTENDED_SALES_PRICE,0) END) EXTENDED_SALES_PRICE_TD ,
NVL(EXTENDED_LIST_PRICE,0) EXTENDED_LIST_PRICE ,NVL(EXTENDED_FLOOR_PRICE,0)
EXTENDED_FLOOR_PRICE ,(CASE WHEN NVL(FOL.ORDER_LINE_ID, FDD.ORDER_LINE_ID) =
'-2' THEN 0 WHEN NVL(FOL.ORDER_LINE_ID, FDD.ORDER_LINE_ID) = '-3' THEN 0
ELSE NVL(FDD.DELIVERED_IMPRESSIONS,0) END) CONFIRMED_TRAFFIC ,
NVL(FDD.DELIVERED_IMPRESSIONS,0) SERVED_IMPRESSIONS ,(CASE WHEN
NVL(FOL.ORDER_LINE_ID, FDD.ORDER_LINE_ID) = '-1' THEN 0 WHEN
NVL(FOL.ORDER_LINE_ID, FDD.ORDER_LINE_ID) = '-2' THEN 0 WHEN
NVL(FOL.ORDER_LINE_ID, FDD.ORDER_LINE_ID) = '-3' THEN 0 ELSE
NVL(FDD.DELIVERED_IMPRESSIONS,0) END) CONFIRMED_IMPRESSIONS ,CLICKS ,
NVL(ADJUSTMENT_IMPRESSIONS,0) ADJUSTMENT_IMPRESSIONS ,
NVL(ADJUSTMENT_SALES_PRICE,0) ADJUSTMENT_SALES_PRICE ,DELIVERED_REVENUE ,
TOTAL_IMPRESSIONS FROM (SELECT ORDER_ID ,ORDER_LINE_ID ,INVENTORY_TYPE_ID ,
DELIVERY_TYPE_ID FROM T_AD_SLOT_DIMENSIONS) D , ( ( SELECT SFOLT.ORDER_ID ,
SFOLT.ORDER_LINE_ID ,SCHEDULED_DATE ,AD_SLOT_ID ,AD_COMPONENT_ID ,
INVENTORY_TYPE_ID ,DELIVERY_TYPE_ID ,(CASE WHEN NVL(SCHEDULED_IMPRESSIONS,0)
<> 0 THEN (EXTENDED_SALES_PRICE/SCHEDULED_IMPRESSIONS)*1000 ELSE NULL END)
CPM ,SCHEDULED_IMPRESSIONS * (CASE WHEN TOTAL_PROPORTION = 0 THEN
1/TOTAL_AD_SLOT ELSE PROPORTION/TOTAL_PROPORTION END) SCHEDULED_IMPRESSIONS
,EXTENDED_SALES_PRICE * (CASE WHEN TOTAL_PROPORTION = 0 THEN
1/TOTAL_AD_SLOT ELSE PROPORTION/TOTAL_PROPORTION END) EXTENDED_SALES_PRICE ,
EXTENDED_LIST_PRICE * (CASE WHEN TOTAL_PROPORTION = 0 THEN 1/TOTAL_AD_SLOT
ELSE PROPORTION/TOTAL_PROPORTION END) EXTENDED_LIST_PRICE ,
EXTENDED_FLOOR_PRICE * (CASE WHEN TOTAL_PROPORTION = 0 THEN 1/TOTAL_AD_SLOT
ELSE PROPORTION/TOTAL_PROPORTION END) EXTENDED_FLOOR_PRICE ,
REMNANT_REVENUE_CPM * (CASE WHEN TOTAL_PROPORTION = 0 THEN 1/TOTAL_AD_SLOT
ELSE PROPORTION/TOTAL_PROPORTION END) REMNANT_REVENUE_CPM ,
ADJUSTMENT_IMPRESSIONS * (CASE WHEN TOTAL_PROPORTION = 0 THEN
1/TOTAL_AD_SLOT ELSE PROPORTION/TOTAL_PROPORTION END)
ADJUSTMENT_IMPRESSIONS ,ADJUSTMENT_SALES_PRICE * (CASE WHEN
TOTAL_PROPORTION = 0 THEN 1/TOTAL_AD_SLOT ELSE PROPORTION/TOTAL_PROPORTION
END) ADJUSTMENT_SALES_PRICE ,
SUM(SFOLT.SCHEDULED_IMPRESSIONS/(SFOLT.FLIGHT_END_DATE -
SFOLT.FLIGHT_START_DATE +1)) OVER ( PARTITION BY SFOLT.ORDER_ID,
SFOLT.ORDER_LINE_ID ORDER BY SFOLT.ORDER_ID, SFOLT.ORDER_LINE_ID)
TOTAL_IMPRESSIONS FROM T_ORDER_LINE_TOTAL_PROPORTION TP ,FOL_NEW SFOLT ,
RAPT_BDDM.L_AD_SLOT_PROPORTION LPTP WHERE TP.ORDER_ID = SFOLT.ORDER_ID AND
TP.ORDER_LINE_ID = SFOLT.ORDER_LINE_ID AND TP.FLIGHT_START_DATE =
SFOLT.FLIGHT_START_DATE AND TP.FLIGHT_END_DATE = SFOLT.FLIGHT_END_DATE AND
LEAST(GREATEST(TRUNC(SFOLT.SCHEDULED_DATE, 'MM'), :B4 ), :B3 ) =
PROPORTION_MONTH AND SFOLT.PLACEMENT_ID = LPTP.PLACEMENT_ID AND
SFOLT.DAY_OF_WEEK = LPTP.DAY_OF_WEEK AND SFOLT.SCHEDULED_DATE BETWEEN :B2
AND LAST_DAY(ADD_MONTHS(:B2 ,:B1 -1)) ) FOL FULL OUTER JOIN ( SELECT
FD.ORDER_ID ,FD.ORDER_LINE_ID ,FD.DELIVERY_DATE ,FD.AD_SLOT_ID ,
FD.AD_COMPONENT_ID ,FD.DELIVERED_IMPRESSIONS ,(FD.DELIVERED_IMPRESSIONS *
NVL(LDR.DELIVERED_CPM,0))/1000 DELIVERED_REVENUE ,FD.CLICKS FROM
RAPT_BDDM.F_DELIVERY_DETAIL FD ,T_DELIVERED_REVENUE LDR WHERE
FD.DELIVERY_DATE BETWEEN :B2 AND LAST_DAY(ADD_MONTHS(:B2 ,:B1 -1)) AND
FD.ORDER_ID = LDR.ORDER_ID (+) AND FD.ORDER_LINE_ID = LDR.ORDER_LINE_ID (+)
AND FD.DELIVERY_DATE = LDR.DELIVERY_DATE (+) ) FDD ON FOL.ORDER_LINE_ID =
FDD.ORDER_LINE_ID AND FOL.SCHEDULED_DATE = FDD.DELIVERY_DATE AND
FOL.AD_SLOT_ID = FDD.AD_SLOT_ID ) WHERE D.ORDER_LINE_ID =
NVL(FOL.ORDER_LINE_ID, FDD.ORDER_LINE_ID) ) GROUP BY TRUNC(SCHEDULED_DATE,
'MM') ,AD_SLOT_ID ,AD_COMPONENT_ID ,INVENTORY_TYPE_ID ,DELIVERY_TYPE_ID


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.50    2695.02          0       1055        454      243696
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.51    2695.03          0       1055        454      243696

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 29 (RAPT_UDM) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
4 PX COORDINATOR (cr=61 pr=0 pw=0 time=1047738002 us)
0 PX SEND QC (RANDOM) :TQ10015 (cr=0 pr=0 pw=0 time=0 us)
0 LOAD AS SELECT (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND ROUND-ROBIN :TQ10014 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10013 (cr=0 pr=0 pw=0 time=0 us)
0 HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10007 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL T_AD_SLOT_DIMENSIONS (cr=0 pr=0 pw=0 time=0 us)
0 VIEW (cr=0 pr=0 pw=0 time=0 us)
0 UNION-ALL (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN OUTER (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10008 (cr=0 pr=0 pw=0 time=0 us)
0 VIEW (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST LOCAL :TQ10004 (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN BUFFERED (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL FOL_NEW (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL T_ORDER_LINE_TOTAL_PROPORTION (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR PARTITION: 1 2 (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL L_AD_SLOT_PROPORTION PARTITION: 1 2 (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10009 (cr=0 pr=0 pw=0 time=0 us)
0 VIEW (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10005 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL T_DELIVERED_REVENUE (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL F_DELIVERY_DETAIL (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST :TQ10010 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL T_DELIVERED_REVENUE (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN ANTI (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10011 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL F_DELIVERY_DETAIL (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10012 (cr=0 pr=0 pw=0 time=0 us)
0 VIEW VW_SQ_1 (cr=0 pr=0 pw=0 time=0 us)
0 FILTER (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND BROADCAST LOCAL :TQ10006 (cr=0 pr=0 pw=0 time=0 us)
0 HASH JOIN BUFFERED (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL FOL_NEW (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HASH :TQ10003 (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL T_ORDER_LINE_TOTAL_PROPORTION (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR PARTITION: 1 2 (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL L_AD_SLOT_PROPORTION PARTITION: 1 2 (cr=0 pr=0 pw=0 time=0 us)


Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT MODE: ALL_ROWS
4 HASH (GROUP BY)
0 HASH JOIN
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_AD_SLOT_DIMENSIONS' (TABLE)
0 VIEW
0 UNION-ALL
0 HASH JOIN (RIGHT OUTER)
0 VIEW
0 FILTER
0 HASH JOIN (RIGHT OUTER)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_DELIVERED_REVENUE' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'F_DELIVERY_DETAIL' (TABLE)
0 VIEW
0 FILTER
0 HASH JOIN
0 HASH JOIN
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'FOL_NEW' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_ORDER_LINE_TOTAL_PROPORTION' (TABLE)
0 PARTITION RANGE (ALL) PARTITION: START=1 STOP=2
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'L_AD_SLOT_PROPORTION' (TABLE) PARTITION: START=1 STOP=
2
0 FILTER
0 HASH JOIN (RIGHT OUTER)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_DELIVERED_REVENUE' (TABLE)
0 HASH JOIN (ANTI)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'F_DELIVERY_DETAIL' (TABLE)
0 VIEW OF 'VW_SQ_1' (VIEW)
0 FILTER
0 HASH JOIN
0 HASH JOIN
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'FOL_NEW' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_ORDER_LINE_TOTAL_PROPORTION' (TABLE)
0 PARTITION RANGE (ALL) PARTITION: START=1
STOP=2
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'L_AD_SLOT_PROPORTION' (TABLE) PARTITION: START=1
STOP=2


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
reliable message 1 0.00 0.00
enq: KO - fast object ch
Tom Kyte
August 01, 2008 - 11:09 am UTC

and, when you read this, what did you see as being your bottlenecks to your runtime?

what jumped out at you?

it jumped right off the page for me.

look at this query:

SELECT MIN(TRUNC(SCHEDULED_DATE, 'MM')) ,MAX(TRUNC(SCHEDULED_DATE, 'MM'))
FROM
BD_F_ORDER_LINE


I might suggest indexing that column and rewriting that query.....


select trunc( min(scheduled_date), 'MM' ) min_dt from t
union all
select trunc( max(scheduled_date), 'MM' ) max_dt from t;



ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> create index t_idx on t(created);

Index created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> select min(trunc(created,'mm')), max(trunc(created,'mm')) from t;

MIN(TRUNC MAX(TRUNC
--------- ---------
01-JUN-05 01-AUG-08


Execution Plan
----------------------------------------------------------
Plan hash value: 1058879072

-------------------------------------------------------------------------------
| Id  | Operation             | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |       |     1 |     8 |    32   (4)| 00:00:01 |
|   1 |  SORT AGGREGATE       |       |     1 |     8 |            |          |
|   2 |   INDEX FAST FULL SCAN| T_IDX | 49817 |   389K|    32   (4)| 00:00:01 |
-------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        140  consistent gets
          0  physical reads
          0  redo size
        514  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> select trunc( min(created), 'mm' ), 'min_mm' from t
  2  union all
  3  select trunc( max(created), 'mm' ), 'max_mm' from t;

TRUNC(MIN 'MIN_M
--------- ------
01-JUN-05 min_mm
01-AUG-08 max_mm


Execution Plan
----------------------------------------------------------
Plan hash value: 1579490351

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |    16 |     4  (50)| 00:00:01 |
|   1 |  UNION-ALL                  |       |       |       |            |          |
|   2 |   SORT AGGREGATE            |       |     1 |     8 |            |          |
|   3 |    INDEX FULL SCAN (MIN/MAX)| T_IDX | 49817 |   389K|     2   (0)| 00:00:01 |
|   4 |   SORT AGGREGATE            |       |     1 |     8 |            |          |
|   5 |    INDEX FULL SCAN (MIN/MAX)| T_IDX | 49817 |   389K|     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          4  consistent gets
          0  physical reads
          0  redo size
        548  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          2  rows processed


elapsed

A reader, July 29, 2008 - 8:30 pm UTC


Rows Execution Plan
------- ---------------------------------------------------
0 INSERT STATEMENT MODE: ALL_ROWS
4 HASH (GROUP BY)
0 HASH JOIN
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_AD_SLOT_DIMENSIONS' (TABLE)
0 VIEW
0 UNION-ALL
0 HASH JOIN (RIGHT OUTER)
0 VIEW
0 FILTER
0 HASH JOIN (RIGHT OUTER)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_DELIVERED_REVENUE' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'F_DELIVERY_DETAIL' (TABLE)
0 VIEW
0 FILTER
0 HASH JOIN
0 HASH JOIN
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'FOL_NEW' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_ORDER_LINE_TOTAL_PROPORTION' (TABLE)
0 PARTITION RANGE (ALL) PARTITION: START=1 STOP=2
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'L_AD_SLOT_PROPORTION' (TABLE) PARTITION: START=1 STOP=
2
0 FILTER
0 HASH JOIN (RIGHT OUTER)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_DELIVERED_REVENUE' (TABLE)
0 HASH JOIN (ANTI)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'F_DELIVERY_DETAIL' (TABLE)
0 VIEW OF 'VW_SQ_1' (VIEW)
0 FILTER
0 HASH JOIN
0 HASH JOIN
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'FOL_NEW' (TABLE)
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'T_ORDER_LINE_TOTAL_PROPORTION' (TABLE)
0 PARTITION RANGE (ALL) PARTITION: START=1
STOP=2
0 TABLE ACCESS MODE: ANALYZED (FULL) OF
'L_AD_SLOT_PROPORTION' (TABLE) PARTITION: START=1
STOP=2


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  reliable message                                1        0.00          0.00
  enq: KO - fast object checkpoint                1        0.00          0.00
  os thread startup                               8        0.06          0.44
  PX Deq: Join ACK                                6        0.00          0.01
  PX Deq Credit: send blkd                       19        0.02          0.05
  PX Deq Credit: need buffer                      2        0.00          0.00
  PX qref latch                                 136        0.00          0.05
  PX Deq: Parse Reply                             5        0.00          0.01
  PX Deq: Execute Reply                        2201        2.07       2692.91
  rdbms ipc reply                                 4        0.06          0.12
  enq: CI - contention                            2        0.00          0.00
  PX Deq: Signal ACK                              2        0.00          0.00



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      3.44    2811.88          0    1006559       6699           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      3.44    2811.88          0    1006559       6699           1

Misses in library cache during parse: 0

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2       17.13         23.67
  single-task message                             1        0.06          0.06
  SQL*Net message to dblink                       9        0.00          0.00
  SQL*Net message from dblink                     9        0.04          0.06


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       53      0.39       0.51          0       1057          1           0
Execute    671      0.93    2695.61    3017622     972205       6708      243705
Fetch      844      2.66     116.30      33273      35357          1        5753
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1568      3.99    2812.43    3050895    1008619       6710      249458

Misses in library cache during parse: 29
Misses in library cache during execute: 27

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                       1060        1.69         80.26
  db file sequential read                         3        0.02          0.02
  latch free                                      2        0.09          0.09
  latch: shared pool                              1        0.00          0.00
  SQL*Net message to dblink                       3        0.00          0.00
  SQL*Net message from dblink                     3       31.59         31.59
  reliable message                                1        0.00          0.00
  enq: KO - fast object checkpoint                1        0.00          0.00
  os thread startup                               8        0.06          0.44
  PX Deq: Join ACK                                6        0.00          0.01
  PX Deq Credit: send blkd                       19        0.02          0.05
  PX Deq Credit: need buffer                      2        0.00          0.00
  PX qref latch                                 136        0.00          0.05
  PX Deq: Parse Reply                             5        0.00          0.01
  PX Deq: Execute Reply                        2204        2.07       2692.95
  rdbms ipc reply                                 7        0.06          0.15
  latch: cache buffers chains                     1        0.00          0.00
  buffer busy waits                               1        0.00          0.00
  enq: CI - contention                            2        0.00          0.00
  PX Deq: Signal ACK                              9        0.03          0.03
  log file sync                                   1        0.01          0.01

12 user SQL statements in session.
654 internal SQL statements in session.
666 SQL statements in session.
6 statements EXPLAINed in this session.
********************************************************************************
Trace file: rtimdv1_ora_2419.trc
Trace file compatibility: 10.01.00
Sort options: default

1 session in tracefile.
12 user SQL statements in trace file.
654 internal SQL statements in trace file.
666 SQL statements in trace file.
34 unique SQL statements in trace file.
6 SQL statements EXPLAINed using schema:
RAPT_UDM.prof$plan_table
Default table was used.
Table was created.
Table was dropped.
19196 lines in trace file.
2811 elapsed seconds in trace file.

Can you please explain PX Deq: Execute Reply, db file scattered read, and SQL*Net message from client? How can I reduce the elapsed time?

mbrc in effect

kk, August 05, 2008 - 11:17 pm UTC

Tom,
You explained with an example for a previous question regarding MBRC and logical I/O

"you select count(nullable_column) from t;

that must full scan - Now, we'd like to read 32 blocks - but we cannot. block 1 is in there, block 3 is in there. We need to read block 2 - but we cannot read blocks 1 and 3 (they are not right, we cannot use them, we have to use the stuff we find in the cache). Same thing happens with blocks 4, 6, 8 and so on.

So, 50 physical IO's later - blocks 1..100 have been in the buffer cache, and we did 100 logical IO's to read them out - compute the count and return to the client.

So, end result: 50 physical IO's, 100 logical IO's "

What I can't understand is: after doing a PIO for the blocks not in the cache, why does Oracle have to do 100 LIOs? Once the requirement of having all the blocks in the buffer is met, why doesn't it use MBRC to limit the number of logical reads?

Your second example I can easily understand, but the first example, though simple, is difficult to interpret.

Tom Kyte
August 06, 2008 - 8:46 am UTC

blocks in the buffer cache are scattered all about, we read a block in and take the DBA (data block address) and hash it to figure out where to store it in the cache so we can find it again later.

every block read from the cache is a single block logical IO, the concept of "contiguous storage" in a cache like that does not exist.
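
Just to illustrate the idea, here is a sketch (illustration only -- ORA_HASH, the 1,024 bucket count and the file# * 4194304 + block# arithmetic below are stand-ins; Oracle's real cache hash function and hash table layout are internal). Each block's address maps to its own bucket, so blocks that are adjacent on disk land in unrelated places in the cache and each one must be looked up individually:

select file#, block#,
       mod( ora_hash( file# * 4194304 + block# ), 1024 ) bucket
  from ( select distinct
                dbms_rowid.rowid_relative_fno(rowid) file#,
                dbms_rowid.rowid_block_number(rowid) block#
           from t
          where rownum <= 10 );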

Adarsh Kumar, October 17, 2008 - 3:02 pm UTC

Excellent Answer

Clarification

Golf, June 05, 2009 - 9:50 pm UTC

Tom,

What exactly is enq: CF - contention on an AWR report? I have read that it is caused by many things. We have a Data Guard configuration and we noticed all apps getting timeouts when this event happens. By the way, we are not running anything in parallel.

Event                          Waits  Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
enq: CF - contention           1,822      816           448              125.1  Other
ARCH wait on SENDREQ              29      658        22,682              100.8  Network
ARCH wait on c/f tx acquire 2    120      117           977               18.0  Other
CPU time                                  111                             17.0
db file sequential read        8,476       65             8               10.0  User I/O

from the alert logs.
....ORA-16146: standby destination control file enqueue unavailable

soft parse ratio

A reader, March 16, 2011 - 9:02 pm UTC

Can you tell us the SQL you use to determine the soft parse ratio for an Oracle 10g system, and what ranges are considered good or bad?
Tom Kyte
March 17, 2011 - 8:03 am UTC

@awrrpt or @spreport

AWR and statspack prints it out right at the top in the instance efficiency report.

Over 99 = good, the closer to 100 the better.
Under 99 = bad
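
And if you want to compute it yourself instead of reading it off the report -- a minimal sketch against v$sysstat (this is cumulative since instance startup; AWR and statspack compute it over a snapshot interval):

select round( 100 * ( 1 - hard.value / total.value ), 2 ) soft_parse_pct
  from ( select value from v$sysstat where name = 'parse count (hard)'  ) hard,
       ( select value from v$sysstat where name = 'parse count (total)' ) total;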

Singular query slows down query dramatically

Max, November 03, 2011 - 3:11 am UTC

Hi Tom,

The CBO chooses a slow plan (in terms of both execution time and quantity of LIOs). I can rewrite the query so it runs fast, but I would be grateful if you could point out what's wrong with this SQL...

Connected as shal@egp

SQL> select зао_ид as zao_id,
       зао_сумма - Nvl((select SUM(pay_raw_amount)
                         from payman.BUDEXEC_EXPENSES_AB_VIEW t
                        where zaoid = зао_ид),
                       0.00) as zao_amount_remnant
  from SHAL.ЗАЯВКИ z
 where зао_слж_код = 129
   and зао_созд_дата between '1-jan-2011' and '1-feb-2011';
SQL> /

31 rows selected.

Elapsed: 00:00:05.05

Execution Plan
----------------------------------------------------------
Plan hash value: 3409527681

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |                            |     6 |   126 |    17   (6)| 00:00:01 |
|   1 |  SORT AGGREGATE                        |                            |     1 |    26 |            |          |
|   2 |   VIEW                                 | BUDEXEC_EXPENSES_AB_VIEW   |   214 |  5564 |  4562   (2)| 00:00:09 |
|   3 |    UNION-ALL                           |                            |       |       |            |          |
|*  4 |     HASH JOIN OUTER                    |                            |     9 |   792 |  2278   (2)| 00:00:05 |
|*  5 |      TABLE ACCESS BY INDEX ROWID       | VIPISKA                    |     1 |     9 |     2   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                     |                            |     1 |    75 |    16   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                    |                            |     1 |    66 |    14   (0)| 00:00:01 |
|   8 |         NESTED LOOPS                   |                            |     1 |    51 |    12   (0)| 00:00:01 |
|   9 |          MERGE JOIN CARTESIAN          |                            |     1 |    43 |     7   (0)| 00:00:01 |
|  10 |           NESTED LOOPS                 |                            |     1 |    30 |     5   (0)| 00:00:01 |
|  11 |            NESTED LOOPS                |                            |     1 |    20 |     4   (0)| 00:00:01 |
|  12 |             TABLE ACCESS BY INDEX ROWID| ЗАЯВКИ                     |     1 |    10 |     2   (0)| 00:00:01 |
|* 13 |              INDEX UNIQUE SCAN         | XPKЗАЯВКИ                  |     1 |       |     1   (0)| 00:00:01 |
|  14 |             TABLE ACCESS BY INDEX ROWID| РАСХОДЫ                    |     1 |    10 |     2   (0)| 00:00:01 |
|* 15 |              INDEX RANGE SCAN          | XIF28РАСХОДЫ               |     1 |       |     1   (0)| 00:00:01 |
|* 16 |            TABLE ACCESS BY INDEX ROWID | СТАТЬИ_ВЫПЛАТ              |     1 |    10 |     1   (0)| 00:00:01 |
|* 17 |             INDEX UNIQUE SCAN          | XPKСТАТЬИ_ВЫПЛАТ           |     1 |       |     0   (0)| 00:00:01 |
|  18 |           BUFFER SORT                  |                            |     1 |    13 |     6   (0)| 00:00:01 |
|* 19 |            TABLE ACCESS BY INDEX ROWID | РАСПРЕДЕЛЕНИЯ_МЕСЯЦ_ЗАПИСИ |     1 |    13 |     2   (0)| 00:00:01 |
|* 20 |             INDEX RANGE SCAN           | XIF_РМЗ_ЗАО_ИД             |     1 |       |     1   (0)| 00:00:01 |
|  21 |          TABLE ACCESS BY INDEX ROWID   | ПЛАТЕЖИ                    |     4 |    32 |     5   (0)| 00:00:01 |
|* 22 |           INDEX RANGE SCAN             | XIF_ПЛТ_РМЗ_ИД             |     4 |       |     1   (0)| 00:00:01 |
|* 23 |         TABLE ACCESS BY INDEX ROWID    | РЕЕСТР_ПД                  |     1 |    15 |     2   (0)| 00:00:01 |
|* 24 |          INDEX RANGE SCAN              | XIF98РЕЕСТР_ПД             |     1 |       |     1   (0)| 00:00:01 |
|* 25 |        INDEX RANGE SCAN                | FK_VIPISKA_RPD             |     1 |       |     1   (0)| 00:00:01 |
|  26 |      VIEW                              | ALL_KONTRA_VIEW            | 49147 |   623K|  2260   (2)| 00:00:05 |
|  27 |       UNION-ALL                        |                            |       |       |            |          |
|* 28 |        HASH JOIN                       |                            | 39568 |   540K|  1813   (2)| 00:00:04 |
|* 29 |         TABLE ACCESS FULL              | ПАСПОРТА_ЮЛ                | 39568 |   270K|  1616   (1)| 00:00:04 |
|* 30 |         VIEW                           | index$_join$_011           | 40109 |   274K|   193   (6)| 00:00:01 |
|* 31 |          HASH JOIN                     |                            |       |       |            |          |
|* 32 |           INDEX RANGE SCAN             | XPКНАПРЗФЛ                 | 40109 |   274K|    84   (6)| 00:00:01 |
|  33 |           INDEX FAST FULL SCAN         | XPKКОНТРАГЕНТЫ             | 40109 |   274K|   134   (3)| 00:00:01 |
|* 34 |        HASH JOIN                       |                            |  9579 |   130K|   447   (3)| 00:00:01 |
|* 35 |         VIEW                           | index$_join$_013           |  9580 | 67060 |   130   (6)| 00:00:01 |
|* 36 |          HASH JOIN                     |                            |       |       |            |          |
|* 37 |           INDEX RANGE SCAN             | XPКНАПРЗФЛ                 |  9580 | 67060 |    20   (5)| 00:00:01 |
|  38 |           INDEX FAST FULL SCAN         | XPKКОНТРАГЕНТЫ             |  9580 | 67060 |   134   (3)| 00:00:01 |
|* 39 |         TABLE ACCESS FULL              | ПАСПОРТА_ФЛ                |  9634 | 67438 |   316   (1)| 00:00:01 |
|* 40 |     HASH JOIN OUTER                    |                            |   205 | 16195 |  2284   (2)| 00:00:05 |
|* 41 |      TABLE ACCESS BY INDEX ROWID       | РЕЕСТР_ПД                  |     1 |    15 |     2   (0)| 00:00:01 |
|  42 |       NESTED LOOPS                     |                            |     6 |   396 |    22   (0)| 00:00:01 |
|  43 |        NESTED LOOPS                    |                            |     5 |   255 |    12   (0)| 00:00:01 |
|  44 |         MERGE JOIN CARTESIAN           |                            |     1 |    43 |     7   (0)| 00:00:01 |
|  45 |          NESTED LOOPS                  |                            |     1 |    30 |     5   (0)| 00:00:01 |
|  46 |           NESTED LOOPS                 |                            |     1 |    20 |     4   (0)| 00:00:01 |
|  47 |            TABLE ACCESS BY INDEX ROWID | ЗАЯВКИ                     |     1 |    10 |     2   (0)| 00:00:01 |
|* 48 |             INDEX UNIQUE SCAN          | XPKЗАЯВКИ                  |     1 |       |     1   (0)| 00:00:01 |
|  49 |            TABLE ACCESS BY INDEX ROWID | РАСХОДЫ                    |     1 |    10 |     2   (0)| 00:00:01 |
|* 50 |             INDEX RANGE SCAN           | XIF28РАСХОДЫ               |     1 |       |     1   (0)| 00:00:01 |
|* 51 |           TABLE ACCESS BY INDEX ROWID  | СТАТЬИ_ВЫПЛАТ              |     1 |    10 |     1   (0)| 00:00:01 |
|* 52 |            INDEX UNIQUE SCAN           | XPKСТАТЬИ_ВЫПЛАТ           |     1 |       |     0   (0)| 00:00:01 |
|  53 |          BUFFER SORT                   |                            |     1 |    13 |     6   (0)| 00:00:01 |
|* 54 |           TABLE ACCESS BY INDEX ROWID  | РАСПРЕДЕЛЕНИЯ_МЕСЯЦ_ЗАПИСИ |     1 |    13 |     2   (0)| 00:00:01 |
|* 55 |            INDEX RANGE SCAN            | XIF_РМЗ_ЗАО_ИД             |     1 |       |     1   (0)| 00:00:01 |
|  56 |         TABLE ACCESS BY INDEX ROWID    | ПЛАТЕЖИ                    |     4 |    32 |     5   (0)| 00:00:01 |
|* 57 |          INDEX RANGE SCAN              | XIF_ПЛТ_РМЗ_ИД             |     4 |       |     1   (0)| 00:00:01 |
|* 58 |        INDEX RANGE SCAN                | XIF98РЕЕСТР_ПД             |     1 |       |     1   (0)| 00:00:01 |
|  59 |      VIEW                              | ALL_KONTRA_VIEW            | 49147 |   623K|  2260   (2)| 00:00:05 |
|  60 |       UNION-ALL                        |                            |       |       |            |          |
|* 61 |        HASH JOIN                       |                            | 39568 |   540K|  1813   (2)| 00:00:04 |
|* 62 |         TABLE ACCESS FULL              | ПАСПОРТА_ЮЛ                | 39568 |   270K|  1616   (1)| 00:00:04 |
|* 63 |         VIEW                           | index$_join$_021           | 40109 |   274K|   193   (6)| 00:00:01 |
|* 64 |          HASH JOIN                     |                            |       |       |            |          |
|* 65 |           INDEX RANGE SCAN             | XPКНАПРЗФЛ                 | 40109 |   274K|    84   (6)| 00:00:01 |
|  66 |           INDEX FAST FULL SCAN         | XPKКОНТРАГЕНТЫ             | 40109 |   274K|   134   (3)| 00:00:01 |
|* 67 |        HASH JOIN                       |                            |  9579 |   130K|   447   (3)| 00:00:01 |
|* 68 |         VIEW                           | index$_join$_023           |  9580 | 67060 |   130   (6)| 00:00:01 |
|* 69 |          HASH JOIN                     |                            |       |       |            |          |
|* 70 |           INDEX RANGE SCAN             | XPКНАПРЗФЛ                 |  9580 | 67060 |    20   (5)| 00:00:01 |
|  71 |           INDEX FAST FULL SCAN         | XPKКОНТРАГЕНТЫ             |  9580 | 67060 |   134   (3)| 00:00:01 |
|* 72 |         TABLE ACCESS FULL              | ПАСПОРТА_ФЛ                |  9634 | 67438 |   316   (1)| 00:00:01 |
|  73 |  TABLE ACCESS BY INDEX ROWID           | ЗАЯВКИ                     |     6 |   126 |    17   (6)| 00:00:01 |
|  74 |   BITMAP CONVERSION TO ROWIDS          |                            |       |       |            |          |
|  75 |    BITMAP AND                          |                            |       |       |            |          |
|  76 |     BITMAP CONVERSION FROM ROWIDS      |                            |       |       |            |          |
|  77 |      SORT ORDER BY                     |                            |       |       |            |          |
|* 78 |       INDEX RANGE SCAN                 | XIF99ЗАЯВКИ                |   644 |       |     3   (0)| 00:00:01 |
|  79 |     BITMAP CONVERSION FROM ROWIDS      |                            |       |       |            |          |
|* 80 |      INDEX RANGE SCAN                  | XIF25ЗАЯВКИ                |   644 |       |     4   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("KNA"."ИД"(+)="ЗАО_КНА_ИД_КРД")
   5 - filter("VI"."OPER_TYPE"='D')
  13 - access("ЗАО_ИД"=:B1)
  15 - access("РСХ_ЗАО_ИД"=:B1)
  16 - filter("STV"."СТВ_ГВП_КОД"<>1102)
  17 - access("STV"."СТВ_КОД"="R"."РСХ_СТВ_КОД")
       filter(TO_NUMBER(LTRIM(TO_CHAR("STV"."СТВ_КОД")))<>110103)
  19 - filter("РМЗ_ФРС_КОД"=1)
  20 - access("РМЗ_ЗАО_ИД"=:B1)
  22 - access("ПЛТ_РМЗ_ИД"="РМЗ_ИД")
  23 - filter("РПД_СТАТУС"='5   ')
  24 - access("ПЛТ_ИД"="РПД_ПЛТ_ИД")
  25 - access("VI"."PLAT_ID"="РПД_ИД")
       filter("VI"."PLAT_ID" IS NOT NULL)
  28 - access("C"."ПСП_КНА_ИД"="A"."КНА_ИД")
  29 - filter("C"."ПСП_АКТ"='Y')
  30 - filter("A"."КНА_ПРЗ_ФЛ"='N')
  31 - access(ROWID=ROWID)
  32 - access("A"."КНА_ПРЗ_ФЛ"='N')
  34 - access("B"."ПСФ_КНА_ИД"="A"."КНА_ИД")
  35 - filter("A"."КНА_ПРЗ_ФЛ"='Y')
  36 - access(ROWID=ROWID)
  37 - access("A"."КНА_ПРЗ_ФЛ"='Y')
  39 - filter("B"."ПСФ_АКТ"='Y')
  40 - access("KNA"."ИД"(+)="ЗАО_КНА_ИД_КРД")
  41 - filter("РПД_СТАТУС"='5   ')
  48 - access("ЗАО_ИД"=:B1)
  50 - access("РСХ_ЗАО_ИД"=:B1)
  51 - filter("STV"."СТВ_ГВП_КОД"<>1102)
  52 - access("STV"."СТВ_КОД"="R"."РСХ_СТВ_КОД")
       filter(TO_NUMBER(LTRIM(TO_CHAR("STV"."СТВ_КОД")))<>110103)
  54 - filter("РМЗ_ФРС_КОД"<>1)
  55 - access("РМЗ_ЗАО_ИД"=:B1)
  57 - access("ПЛТ_РМЗ_ИД"="РМЗ_ИД")
  58 - access("ПЛТ_ИД"="РПД_ПЛТ_ИД")
  61 - access("C"."ПСП_КНА_ИД"="A"."КНА_ИД")
  62 - filter("C"."ПСП_АКТ"='Y')
  63 - filter("A"."КНА_ПРЗ_ФЛ"='N')
  64 - access(ROWID=ROWID)
  65 - access("A"."КНА_ПРЗ_ФЛ"='N')
  67 - access("B"."ПСФ_КНА_ИД"="A"."КНА_ИД")
  68 - filter("A"."КНА_ПРЗ_ФЛ"='Y')
  69 - access(ROWID=ROWID)
  70 - access("A"."КНА_ПРЗ_ФЛ"='Y')
  72 - filter("B"."ПСФ_АКТ"='Y')
  78 - access("ЗАО_СОЗД_ДАТА">=TO_DATE(' 2011-01-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "ЗАО_СОЗД_ДАТА"<=TO_DATE(' 2011-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
  80 - access("ЗАО_СЛЖ_КОД"=129)


Statistics
----------------------------------------------------------
         62  recursive calls
          0  db block gets
      62011  consistent gets
       2170  physical reads
          0  redo size
       1036  bytes sent via SQL*Net to client
        418  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
         63  sorts (memory)
          0  sorts (disk)
         31  rows processed


As one can see from the explain plan, BUDEXEC_EXPENSES_AB_VIEW is a rather complex view, and ЗАЯВКИ is just a table. I am sure the correct plan would be to get the rows from ЗАЯВКИ first (we'll get 31 rows in this case) and then execute the scalar subquery for each row. I wonder why the CBO does not choose such a plan?

This is how I would like the results to be produced:
SQL> declare
  2    zao_amount_remnant   number;
  3    tmp_number           number;
  4    nrows                binary_integer;
  5  begin
  6   nrows := 0;
  7   for rr in (select зао_ид,
  8                     зао_сумма
  9                from SHAL.ЗАЯВКИ z
 10               where зао_слж_код = 129
 11                 and зао_созд_дата between '1-jan-2011' and '1-feb-2011')
 12   loop
 13         select SUM(pay_raw_amount)
 14           into tmp_number
 15           from payman.BUDEXEC_EXPENSES_AB_VIEW t
 16          where t.zaoid = rr.зао_ид;
 17         zao_amount_remnant := rr.зао_сумма - Nvl(tmp_number, 0.00);
 18         nrows := nrows + 1;
 19   end loop;
 20   dbms_output.put_line(nrows);
 21  end;
 22  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL>


bytes received via SQL*Net from client  1041
bytes sent via SQL*Net to client        301
consistent gets                         1270
db block gets                           0
physical reads                          0
redo size                               0
sorts (disk)                            0
sorts (memory)                          63
SQL*Net roundtrips to/from client       2



Oracle version is 10.2.0.5

Best regards,
Max Zykov


hash join && nested loop

A reader, March 08, 2012 - 3:22 am UTC

Hi Tom,

Per your reply below, I did some testing, but why do both queries get a hash join? I think the second one is something like 'paging' and should get a nested loop.

'Say you were interested in getting ALL of the rows but were paging through them in a client server
type application (maintaining a stated connection to the database and fetching 15 rows at a time
for a user to look at). Nested loops would be awesome (that and they would never actually get to
the bottom, last page anyhow)'

create table t1(id int, name varchar2(10));
insert into t1 select rownum, 'name'||rownum from dual connect by level<=20000;
create table t2(id int, age number);
insert into t2 select rownum, rownum*10 from dual connect by level<=10000;
create index ind2 on t2(id);
analyze table t1 compute statistics;
analyze table t2 compute statistics;
analyze index ind2 compute statistics;
select a.name, b.age, rownum rn from t1 a, t2 b where a.id=b.id;
hash join
select * from (select a.name, b.age, rownum rn from t1 a, t2 b where a.id=b.id order by a.name) where rn<=10;
why still hash join?

Tom Kyte
March 08, 2012 - 5:49 am UTC

stop using analyze to gather statistics, that method has been deprecated for a long time (this entire century in fact)


Look at your inner most query there - order by name.

do you have an index on a.name that would allow it to find the first 10 rows quickly?

Think about what it would have to do with a nested loops join without such an index.. It would have to read ALL OF one of the tables, join it to the other slow by slow, then sort it all. It would be much more efficient to full scan, hash, full scan and then sort - rather than full scan - index range scan for every row - then sort.
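
For what it is worth, a sketch along those lines using your tables (the index name is made up, and whether the optimizer actually switches to nested loops still depends on its costing -- the point is only that without an index on the sort key, the nested-loops-then-sort plan has no chance of being cheap):

exec dbms_stats.gather_table_stats( user, 'T1', cascade => true );
exec dbms_stats.gather_table_stats( user, 'T2', cascade => true );

alter table t1 modify (name not null);  -- the index can only replace the sort
                                        -- if no rows can be missing from it
create index t1_name_idx on t1(name);

select *
  from ( select a.name, b.age
           from t1 a, t2 b
          where a.id = b.id
          order by a.name )
 where rownum <= 10;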


Sorting makes less LIO's?

Jichao Li, May 18, 2012 - 2:57 am UTC

Tom,

Here is a test I've done which shows that a query with an ORDER BY makes fewer LIOs than the same query without the ORDER BY (everything else is the same). Before doing the test I had expected that removing the ORDER BY would give me better performance.

Below are the test scripts and trace output.

drop table t;

create table t (id number, dt date, msg varchar2(100));

insert into t
select mod(rownum, 500) + 1, created, object_name
  from all_objects
 where rownum <= 1000;
 
create index ix_t on t (id);

begin
  dbms_stats.gather_table_stats (
     ownname => user,
     tabname => 'T',
     cascade => true
  );
end;
/

select * from t ---SORTLESS
 where id = 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          5          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          5          0           2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 103  
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         2          2          2  TABLE ACCESS BY INDEX ROWID T (cr=5 pr=0 pw=0 time=32 us)
         2          2          2   INDEX RANGE SCAN IX_T (cr=3 pr=0 pw=0 time=23 us)(object id 312034)


select * from t ---SORT
 where id = 1
 order by dt

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.00          0          4          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.00          0          4          0           2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 103  
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         2          2          2  SORT ORDER BY (cr=4 pr=0 pw=0 time=44 us)
         2          2          2   TABLE ACCESS BY INDEX ROWID T (cr=4 pr=0 pw=0 time=22 us)
         2          2          2    INDEX RANGE SCAN IX_T (cr=2 pr=0 pw=0 time=14 us)(object id 312034)



The query without the ORDER BY makes 5 reads, while the other makes 4. The ORDER BY is the only difference, and the steps to read the index and table blocks should require exactly the same reads for both queries.

This is consistently reproducible on my 10.2.0.4 DB.

Could you help explain why this happens? And what's really going on underneath? Why does Oracle make the extra reads or how have the reads been saved for the query with ORDER BY?

Thanks in advance and looking forward to your excellent answer,
Jichao Li
Tom Kyte
May 21, 2012 - 7:20 am UTC

Before doing the test I had expected that removing the ORDER BY will gain me better performance.

*it would* - why do you think otherwise?

In your example here - your table is so teeny tiny trivial - that the test doesn't even really count.

Here is a short excerpt from an article I recently did for the UKOUG Scene magazine (not printed yet, next issue). It explains what is happening here and why adding an order by is *not* a tuning technique!!

<quote>


On the wire compression

For many releases now, the database has been silently compressing data on the network. When you array fetch data from the database, SQL*Net will write the first row in its entirety on the network. When it goes to write the second row however, it will only transmit column values that differ from the first row. The third row written on the network will similarly be only the changed values from the second row, and so on. This compression therefore works well with repetitive data - of which we have a lot typically! Additionally, the compression works even better with data that is sorted by these repeating values so that the repeating values are near each other in the result set.

We can observe this rather easily – I’ll start by creating some data to play with:

ops$tkyte%ORA11GR2> create table t
  2  as
  3  select *
  4    from all_objects;
Table created.

ops$tkyte%ORA11GR2> begin
  2          dbms_stats.gather_table_stats( user, 'T' );
  3  end;
  4  /
PL/SQL procedure successfully completed.


Now, this newly created table is about 8MB in size and consists of 1,031 blocks in my database (your numbers might be a little different – but you’ll observe the same effects if you run this example yourself). Additionally, this table stores about 70 rows per block – there are about 72,000 rows on 1,031 blocks. What I’m going to do next is select the entire contents of the table over the network using SQL*Plus with autotrace enabled to see the amount of data transferred:

ops$tkyte%ORA11GR2> set arraysize 15
ops$tkyte%ORA11GR2> set autotrace traceonly statistics

ops$tkyte%ORA11GR2> select * from t;
72228 rows selected.

Statistics
--------------------------------------------------------
       5794  consistent gets
    8015033  bytes sent via SQL*Net to client
      53385  bytes received via SQL*Net from client
       4817  SQL*Net roundtrips to/from client
      72228  rows processed


I’d like to point out that I started by setting the arraysize in SQL*Plus to 15 – which is the default size used by SQL*Plus so my setting the value explicitly wasn’t really necessary – I just wanted to show the reader explicitly how many rows we were retrieving at a time.

Now, in looking at the numbers we can see that about 8MB of data was transferred from the server back to the client. That represents our table. Furthermore, we can see that 5,794 blocks were accessed (the consistent gets). Previously I said that there were 1,031 blocks in my table – that might raise the question “How or why did we perform 5,794 IO’s on a table that has only 1,031 blocks?!”. The answer to that has to do with how the database server retrieved data for this query. This query entailed a simple “FULL SCAN” which would allow the database server to retrieve the data as needed from the table directly. In order to retrieve the first row of this result set, the database server would only have to retrieve the first block, get the first row, and return the row’s data to the client. In order to get the first 15 rows, all the server has to do is retrieve the first block of the blocks retrieved by the query from the buffer cache, read 15 rows from the block and return them. That would be one consistent get. In order to retrieve the next 15 rows, the database server would read the block out of the buffer cache again, get rows 16 through 30, and return them. To get the next 15, it would repeat the process. Since the block contains about 70 rows, we would have to read each block about five times, sometimes less, sometimes more! If you look at the consistent gets again and multiply the number of blocks in the table by a number close to five, you’ll see that fact played out. We did about five times as many consistent gets against the table as we had blocks in the table.

So, in short, we did 5,794 IO’s and transferred about 8MB of data. Now, let’s see what happens if we modify the query slightly. I will add an order by clause to the query. The order by clause will sort the data by the TIMESTAMP column. Now, a couple of facts about this TIMESTAMP column are of note. Firstly, it is NOT NULL for every row in the table. Secondly, it is always 19 characters wide. Thirdly, more than 50% of the rows in the table have the same value (it repeats a lot). And lastly, of the remaining 36,000 rows, there are only about 1,500 unique values; it really repeats a lot. So in short, it is a not null, wide field with a ton of repetition. So, let’s see what happens when we run this modified query:

ops$tkyte%ORA11GR2> select * from t order by timestamp;
72228 rows selected.

Statistics
--------------------------------------------------------
       1031  consistent gets
    3427630  bytes sent via SQL*Net to client
      53385  bytes received via SQL*Net from client
       4817  SQL*Net roundtrips to/from client
      72228  rows processed


That is a very different set of statistics from the ones obtained previously. It is the same data entirely. The only difference in the query is that the latest version specifies a sort order. The two numbers that pop out immediately are the consistent gets and the amount of data transferred. We went from 8MB of data down to about 3.4MB of data! This difference is due to the repeating TIMESTAMP attribute. Every time we array fetched 15 rows, we sent the TIMESTAMP column value approximately once (sometimes a little more than once) but nearly once per array fetch. The consistent gets also dropped considerably which has nothing to do with SQL*Net, but rather the way this query had to be processed. In order to get the first row out of this result set, the database had to have read the entire table and sorted it. (Remember, there are no indexes in place. Furthermore, even if there were, the database, by default, would not use an index to read an entire table!). This means that, in order to get the first row, all 1,031 blocks were read and sorted in temporary space – either in memory or on disk. Then, to retrieve rows from this result set, we would be reading from temporary space, not from the buffer cache. A read from Temp is not a logical IO. It is not a consistent get. Hence the consistent gets stop at 1,031 as the entire query is read from Temp space.

Now, you may have heard a rule of thumb (ROT) pertaining to query tuning. It is one I’ve written myself in the past and it goes something like this:

In general, when tuning a query, you are looking for approaches that will reduce the logical IO’s the query performs. That will, in general, lower the CPU used by the query and tend to lower the possible physical IO’s it might have to perform.

Note however the use of the phrase “In general”, for that is important. Here we have an example of a query that needs 5,794 consistent gets and another query that retrieves the same exact data (albeit in a different order but that doesn’t matter since the first one didn’t specify any order) using only 1,031 consistent gets. But ask yourself the following question, “Which of these two queries is more “tuned”, the one without an order by or the one with an order by?” I would definitely vote for the one without an order by! This is just one example where a particular rule of thumb is not true (this is one reason why I’m not a huge fan of ROT! You have to be very careful and know when it applies and when it does not apply).

</quote>

Sorting makes less LIO's?

Jichao Li, May 21, 2012 - 10:26 pm UTC

Tom,

First thank you for the detailed answer as well as sharing the knowledge about network compression.

To follow up on the test in your answer: since the result with the ORDER BY "looks" better than the one without it, what other measurements can we take to prove that removing the ORDER BY will give us better performance?

Thanks,
Jichao


Tom Kyte
May 22, 2012 - 8:01 am UTC

CPU time

use tkprof, use a LARGER table
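
a minimal sketch of that measurement (the trace file name and location vary by platform and configuration):

alter session set tracefile_identifier = 'orderby_test';
exec dbms_monitor.session_trace_enable( waits => true, binds => false );

select * from t where id = 1;
select * from t where id = 1 order by dt;

exec dbms_monitor.session_trace_disable;

-- then, on the server:
--   tkprof <your_trace_file>.trc report.txt sys=no
-- and compare the cpu column for the two statements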

Also, use critical thinking. Ask yourself how it could be that adding a SORT step would improve things.

Now that you know what it does (reads into temp, sorts, reads from temp instead of the cache) - you can use that to your advantage. There could be cases where the sort is "advantageous" - can you think of any? I can think of one clear case - but it would require a "misconfigured database server" - that is, the DBA isn't doing something right....

How can I tune this?

rebisco, September 21, 2012 - 1:46 am UTC

Hi Tom,

How can I tune this query? table2 has many indexes created for other procedures that use this table. I can't remove any index from this table since removing one would heavily impact the performance of other modules. table1 has more than 48K rows while table2 contains more than 1 million rows.

SQL> SELECT
  2      '312312312' id,
  3      b.LotId,
  4      b.objId,
  5      b.txn,
  6      '',
  7      b.act1||'-'||b.act2 activity,
  8      b."group",
  9      b.product,
 10      substr(b.TxnTimeStamp,1,15),
 11      b.qty,
 12      b.step,
 13      x.stage
 14  FROM
 15      table1 x,
 16      table2 b
 17  WHERE
 18      x.objId = b.objId
 19      AND x.id = '312312312'
 20      AND
 21      b.act1||'-'||b.act2 IN (SELECT act1 FROM Activities WHERE Module = 'module1' AND Category = 'START')
 22      AND b.txn < x.txn;

61796 rows selected.

Elapsed: 00:00:37.79

Statistics
----------------------------------------------------------
        240  recursive calls
          0  db block gets
    1336444  consistent gets
     586499  physical reads
          0  redo size
    6679476  bytes sent via SQL*Net to client
      29072  bytes received via SQL*Net from client
       4121  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      61796  rows processed

Tom Kyte
September 26, 2012 - 12:18 pm UTC

I don't know, but only because

a) I don't know your data
b) i don't know how many rows the various predicates would result in
c) i don't know your indexes
d) i don't know the question you are trying to answer
e) i don't know your database constraints in place (primary/foreign keys, not null, check etc)

in short, I don't know very much at all. how could one tune a query without knowing anything about the data or the question being asked.


you tell me a teeny tiny minuscule bit of data about 2 of THREE tables in the query, totally insufficient.


I can say that junk like this:

b.act1||'-'||b.act2 IN (SELECT act1 FROM Activities WHERE Module =
'module1' AND Category = 'START')

isn't going to perform well - who thought to design a system where a key is two attributes in one place and a single attribute in another??
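
if that concatenation truly cannot be designed away, one commonly suggested band-aid (a sketch only, the index name is made up) is a function-based index on the exact expression the query uses, so the predicate at least has an access path:

create index t2_act_concat_idx on table2 ( act1 || '-' || act2 );

fixing the model so the key is stored the same way on both sides is still the real answer.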

I'm very suspicious of a construct like this too:

22 AND b.txn < x.txn;


that is a fuzzy join, you really want a mini almost cartesian join between B and X?

A reader, October 19, 2012 - 3:06 am UTC

Hi Tom,

Why is a massive nested loop such a big culprit?

Whether it is a nested loop or something else like a hash join, it will inevitably touch the requested amount of data, so what is the difference?

Is it due to:
1. for a hash join, the data is processed in bulk
2. while for a nested loop, we may revisit blocks we have touched before again and again, which results in more LIOs, and therefore more latching?
Tom Kyte
October 23, 2012 - 11:13 am UTC

what would be faster to get the book 'war and peace' from a library where books are assembled as you ask for them

a) ask for it page by page - having the librarian walk back and forth for each page

b) ask for it a chapter at a time - having the librarian walk back and forth for each chapter

c) asking for the book.



If you want a few pages - maybe asking for the pages would be the best (nested loops)

If you want a chapter or two - maybe asking for them would be best (a little bit of bulk processing)

If you want the book - well, that would be the most efficient thing to ask for (full scans, hash joins)



big bulky operations work well for millions of rows

little tiny operations - small numbers of rows - work well with indexing strategies
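
you can watch the two extremes with the t1/t2 tables from the earlier follow-up (a sketch -- the hints are there only to force each shape for comparison, not as a recommendation):

-- "page by page": index driven nested loops, good for a handful of rows
select /*+ use_nl(a b) index(b ind2) */ a.name, b.age
  from t1 a, t2 b
 where a.id = b.id
   and a.id <= 10;

-- "ask for the book": full scans and a hash join, good for all the rows
select /*+ use_hash(a b) full(a) full(b) */ count(*)
  from t1 a, t2 b
 where a.id = b.id;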

rebisco, December 06, 2012 - 10:23 pm UTC

Hello Tom,

Sorry, been away for a while.
Below is some of the information you requested.

a, c, e) table creates/index
create table table1
(
  id    NUMBER(38),
  stage VARCHAR2(50),
  plant VARCHAR2(51),
  sysid VARCHAR2(45),
  txn   NUMBER(38)
) NOLOGGING;

-- Create/Recreate indexes 
create index IDX_SYSID on table1 (sysid) NOLOGGING;
create index IDX_PS on table1 (stage) NOLOGGING;
create index IDX_SID on table1 (id) NOLOGGING;
create index IDX_TXN on table1 (txn) NOLOGGING;

create table table2
(
  objId         VARCHAR2(45),
  lotId         VARCHAR2(40),
  ghistkey      VARCHAR2(45),
  sysid         VARCHAR2(45) default sys_guid() not null,
  act1          VARCHAR2(40),
  act2          VARCHAR2(40),
  txntimestamp  VARCHAR2(18),
"group"       VARCHAR2(40),
  qty           NUMBER(38),
  step          VARCHAR2(40),
  stage         VARCHAR2(40),
  stage1        VARCHAR2(40),
  txn           NUMBER(38),
  wtxn          VARCHAR2(45)
)
partition by range (txntimestamp)
(
  partition P_Q4_08 values less than ('20090101'),
  partition P_Q1_09 values less than ('20090401'),
  partition P_Q2_09 values less than ('20090701'),
  partition P_Q3_09 values less than ('20091001'),
  partition P_Q4_09 values less than ('20100101'),
  partition P_Q1_10 values less than ('20100401'),
  partition P_Q2_10 values less than ('20100701'),
  partition P_Q3_10 values less than ('20101001'),
  partition P_Q4_10 values less than ('20110101'),
  partition P_Q1_11 values less than ('20110401'),
  partition P_Q2_11 values less than ('20110701'),
  partition P_Q3_11 values less than ('20111001'),
  partition P_Q4_11 values less than ('20120101'),
  partition P_Q1_12 values less than ('20120401'),
  partition P_Q2_12 values less than ('20120701'),
  partition P_Q3_12 values less than ('20121001'),
  partition P_Q4_12 values less than ('20130101'),
  partition P_Q1_13 values less than ('20130401'),
  partition P_OTHER values less than (MAXVALUE)
);

-- Create/Recreate primary, unique and foreign key constraints 
alter table table2 add constraint P_TABLE2 primary key (sysid) using index;

-- Create/Recreate indexes 
create index IDX_ACT on table2 (act1);
create index IDX_GHK on table2 (ghistkey);
create index IDX_OBJID on table2 (objId);
create index IDX_TXN on table2 (txn);
create index IDX_TXNTIME on table2 (txntimestamp);
create index I_STAGE on table2 (stage1);
create index I_ACT2 on table2 (act2);
create index I_LOTID on table2 (lotId);
create index I_WTXN on table2 (wtxn);

b) Below is the Explain plan of the following SQL:
SQL> SELECT
  2      312312312 id,
  3      b.txn,
  4      b.LotId,
  5      b.objId,
  6      substr(b.TxnTimeStamp,1,15),
  7      b.qty,
  8      b.act1||'-'||b.act2,
  9      b."group",
 10      b.step,
 11      b.stage
 12    FROM
 13      table1 x,
 14      table2 b
 15    WHERE
 16      x.sysid = b.objId
 17      AND x.id = 312312312
 18      AND b.act2 IN (SELECT act1 FROM Activities WHERE Module = 'module1' AND Category = 'START')
 19      AND b.txn < x.txn
 20      AND b.stage = x.stage
 21      ORDER BY b.txn;

SELECT STATEMENT, GOAL = ALL_ROWS   Cost=1582509 Cardinality=5496 Bytes=1417968  Time=18991
 SORT ORDER BY   Cost=1582509 Cardinality=5496 Bytes=1417968  Time=18991
  HASH JOIN   Cost=1582202 Cardinality=5496 Bytes=1417968 Access predicates="X"."sysid"="B"."objId" AND "B"."stage"="X"."stage" Time=18987
   TABLE ACCESS FULL Object owner=user1 Object name=table1 Cost=8880 Cardinality=687654 Bytes=48823434  Time=107
   TABLE ACCESS BY GLOBAL INDEX ROWID Object owner=user1 Object name=table2 Cost=879215 Cardinality=107423 Bytes=16435719  Time=10551
    NESTED LOOPS   Cost=1564021 Cardinality=699263 Bytes=130762181  Time=18769
     SORT UNIQUE   Cost=2 Cardinality=7 Bytes=238  Time=1
      TABLE ACCESS BY INDEX ROWID Object owner=user1 Object name=Activities Cost=2 Cardinality=7 Bytes=238  Time=1
       INDEX RANGE SCAN Object owner=user1 Object name=I_T_ACT Cost=1 Cardinality=7  Access predicates="MODULE"='module1' AND "CATEGORY"='START' Time=1
     INDEX RANGE SCAN Object owner=user1 Object name=I_ACT2 Cost=5161 Cardinality=1324361  Access predicates="B"."ACT2"="ACT1" Time=62


d) As of this testing, Table1 has 687,715 rows while Table2 has roughly 66 million rows. Both tables grow by about 50 rows every day. The above query is estimated to run for 27 days based on the Oracle Enterprise Manager Console. Maybe with your expertise it will be lowered.

Thank you very much, Tom. I really appreciate your brilliance and expertise on this matter.

Thanks again,
Rebisco

Don't know if Tom is around

rebisco, February 03, 2013 - 7:53 pm UTC

Hello Tom,

This post is 2 months old already and I'm not sure if you are still reading it. If you are, please do reply. My previous post was long, but I was just giving you the answers/specs you asked for, especially the table creation statements. I hope you can find some time to read it.

Thank you and regards,
Rebisco
Tom Kyte
February 04, 2013 - 10:18 am UTC

I don't really have anything to say - i don't know your data (skew, volumes, order of arrival, etc)

how many rows out of how many rows would this predicate return in general:

17 AND x.id = 312312312




for all of the physical IO it does - it looks like it is running pretty good as it is. the join condition shows me that the data model is totally botched (your keys, you have to concatenate to join???!?!?! ouch). the mini cartesian join (fuzzy join) is another big red flag (sometimes indicative of a bad model and something that'll never perform very well)

doesn't matter that the optimizer estimated days if it really only takes seconds as this one does...


Any suggestion?

Rebisco, February 13, 2013 - 3:45 am UTC

Hello Tom,

You are asking for our data (skew, volumes, order of arrival).
Is there a command/SQL that I can execute to get that data?
17 AND x.id = 312312312
For the above predicate, it will return a thousand or more rows, but this mainly depends on the date range the user is retrieving. Table1 in the above SQL is pre-queried using a date range specified by the user.

for all of the physical IO it does - it looks like it is running pretty good as it is
I don't know why you say so. Can you elaborate?

the join condition shows me that the data model is totally botched (your keys, you have to concatenate to join???!?!?! ouch).
We don't really have much control over the above join condition, since it belongs to another schema. We just use their table to get the data for reporting.

the mini cartesian join (fuzzy join) is another big red flag (sometimes indicative of a bad model and something that'll never perform very well)
This really gives us a headache. Can you give any suggestion as to how we can prevent it?

Thank you and regards,
Rebisco
Tom Kyte
February 13, 2013 - 7:57 am UTC

... Is there a command/SQL that I can execute to get that data?
....

that is something you should know about your data (it is your data). You can run queries of course (doing some counts with group bys) and investigating clustering factors of some indexes (search this site for clustering factor if that term isn't familiar to you)
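
minimal sketches of both, using the names from the query above:

-- data skew: how many rows per id value?
select id, count(*)
  from table1
 group by id
 order by count(*) desc;

-- clustering factor: how well ordered is the table relative to each index?
select index_name, clustering_factor, num_rows
  from user_indexes
 where table_name = 'TABLE2';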


... For the above predicate, it will return a thousand or more rows, but this mainly depends on the date range the user is retrieving. Table1 in the above SQL is pre-queried using a date range specified by the user. ...


have you considered that an index on x.id and possibly x.id, date_column - would be "nice to have"? I don't know what date_column we are talking about since your where clause doesn't seem to reference any date columns...

 15    WHERE
 16      x.sysid = b.objId
 17      AND x.id = 312312312
 18      AND b.act2 IN (SELECT act1 FROM Activities WHERE Module = 'module1' AND Category = 
'START')
 19      AND b.txn < x.txn
 20      AND b.stage = x.stage
 21      ORDER BY b.txn;
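
in concrete terms, something like this (a sketch -- your DDL above already has the single column IDX_SID index on id; date_column is hypothetical since, as noted, no date column appears in the where clause you posted):

create index idx_t1_id_dt on table1 ( id, date_column );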



for all of the physical IO it does - it looks like it is running pretty good as it is
I don't know why you say so. Can you elaborate?


well, considering it did over 1/2 million physical IO's in less than 38 seconds:

ops$tkyte%ORA11GR2> select 38/586499  from dual;

 38/586499
----------
.000064791


and given that the industry average for IO's would be in the 3-5ms range (0.003 to 0.005 seconds) - your IO's are incredibly fast (you are hitting your secondary sga - the file system cache, these are not true physical IO's)

that is why I would say that - for all of the physical IO this appears to do - it is running *pretty darn good*. don't you agree?



the join condition shows me that the data model is totally botched (your keys, you have to concatenate to join???!?!?! ouch).
We don't really have much control over the above join condition, since it belongs to another schema. We just use their table to get the data for reporting.


of course YOU HAVE CONTROL - you are a corporate entity aren't you. You have corporate data models don't you. You have change request capabilities. Please don't say "we can't", do say "we have chosen NOT to" do something. Your data model is botched - your keys are not keys - this'll never perform as it should. think about it.




the mini cartesian join (fuzzy join) is another big red flag (sometimes indicative of a bad model and something that'll never perform very well)
This really gives us a headache. Can you give any suggestion as to how we can prevent it?


one word for you in the future:

design


if you design, you won't have to tune...

I cannot do your design as I don't know what question you are really trying to ask. All I see is a poorly performing query.

I don't know your data skew
I don't know how your data arrives
I don't know your data
I don't know your question
I don't know your schema (constraints, indexes, etc)

and this review/follow-up area isn't appropriate for that (so I'm not asking you to post it, it would take more than the 32k I've allocated for reviews... I'm trying to say you need to take your knowledge of your data and use that...)