Question and Answer

Connor McDonald

Thanks for the question, Vinod.

Asked: May 20, 2000 - 4:51 am UTC

Last updated: September 29, 2022 - 12:36 am UTC

Version: 8i

Viewed 100K+ times!

You Asked

Hi Tom, I am new to tuning SQL statements. Can you give me a methodology for tuning SQL statements and for predicting the output of explain plan?
When I use explain plan for a particular query, I am unable to understand what the result means. Can you please clarify it for me in detail?

Another doubt: while using PowerBuilder 7.0 with Oracle8i release 8.1.5, PB hangs when going into the DataWindow SQL painter. This happens only when OPTIMIZER_MODE = COST. This problem is a serious one for our development team, and we need clarification from you.

Thanks
Vinod

and Tom said...

As for "a methodology to predict the output of explain plan" -- I'm not sure at all what you are looking for. Do you mean "how do I interpret it"? If so, please see:

http://docs.oracle.com/cd/B10501_01/server.920/a96533/ex_plan.htm#16972

In fact, the entire document:

http://docs.oracle.com/cd/B10501_01/server.920/a96533/toc.htm

which is the server tuning guide, will be invaluable to you in learning this. It covers all of the access plans and such so you'll know what a SORT MERGE JOIN versus NESTED LOOPS means.

As for the second problem -- it sounds like a bad query plan is being generated for the given query. In all probability PB isn't really hung; the query is just taking a very long time to complete. You say you've set optimizer_mode = cost, but the valid values for optimizer_mode are:

o RULE
o CHOOSE (uses the CBO if statistics are present, the RBO otherwise)
o FIRST_ROWS (find a plan that returns the first row the fastest, using the CBO)
o ALL_ROWS (find a plan that returns the last row the fastest, using the CBO)


So, I'll assume you've set the optimizer_mode to FIRST_ROWS or ALL_ROWS. My question back to you would be: have you analyzed the tables, or have you just set the optimizer_mode? If you have not analyzed the tables recently -- with their current set of data -- then the plans generated by the optimizer can be quite bad indeed.
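As a sketch of what "analyze the tables" means in practice (the schema and table names below are placeholders, not taken from the question):

```sql
-- Old-style ANALYZE, still common on 8i:
ANALYZE TABLE emp COMPUTE STATISTICS;

-- Or via the supported DBMS_STATS package:
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname => USER,      -- current schema
      tabname => 'EMP',     -- placeholder table name
      cascade => TRUE );    -- gather index statistics as well
END;
/
```

Either way, the point is the same: the CBO can only cost plans well when it has current statistics to work from.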


I would suggest, as a way to see what is really happening, reading the server tuning manual and finding out all about SQL_TRACE, TIMED_STATISTICS, and TKPROF. Those three things are the most powerful application tuning tools out there. They will show you the SQL your application is submitting to the database, the plans used to run it, how many rows flowed through each step of the plan, how many rows were returned to the client, how many fetches took place to get those rows, how much CPU it took, and how much wall clock time it took (plus lots more information).
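A rough sketch of putting those three together (the trace file name below is a placeholder -- the real file lands in the directory pointed to by user_dump_dest):

```sql
-- In the session you want to trace:
ALTER SESSION SET timed_statistics = TRUE;
ALTER SESSION SET sql_trace = TRUE;

-- ... run the application workload here ...

ALTER SESSION SET sql_trace = FALSE;

-- Then, at the OS prompt, format the raw trace file with TKPROF:
--   $ tkprof ora_12345.trc report.txt sys=no
```

The TKPROF report is where the per-statement plan, row counts, fetch counts, CPU and elapsed times all show up.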


Addenda: Updated links to recent versions of the documentation here:

https://docs.oracle.com/en/database/oracle/oracle-database/19/tgdba/index.html

Rating

  (377 ratings)


Comments

Tuning in development, test, production

Schesser, April 23, 2002 - 12:16 pm UTC

Tom, I'm a regular visitor to your website, and have read quite a bit on tuning on your forum.

I was in a meeting today, and was surprised to hear that a lot of developers are of the opinion that a SQL statement tuned in development will work the same way in test and in production. This is in a case where the dev and test databases are refreshed every day.

I disagree with them, even though the database is refreshed every day.

My point of view is that even though everything is the same data-wise in d1, t1 and p1, the OPTIMIZER will not necessarily decide on the same execution path in all three instances.

My standpoint is based on the following facts:

1. The hardware is not the same for the 3 instances.
2. More users access the production database. If the number of users in development is 100, then it is 1000 in production.
3. When contention for objects increases (which is the case in production), the OPTIMIZER will for sure make different decisions as far as the execution path is concerned.

Can you give your valuable expert opinion on the above?
1. Is my contention right? If so, can you give me some more factors which make up the difference in performance between development and production?
2. How can I benchmark my standpoint and show on paper that tuning in development does not necessarily mean the SQL will do fine in production?

Tom Kyte
April 23, 2002 - 1:27 pm UTC

If you use RBO -- the plans will probably never change. It depends -- most likely they will be the same (but create your indexes in a different order and we might use different indexes)


If you use CBO -- the plans can and will change. They can change from day to day.


Query plan changing question?

William, April 28, 2002 - 10:15 pm UTC

Tom,

Would the query plan change if the table/index statistics
are not re-gathered and no initialisation parameters are changed?

Tom Kyte
April 29, 2002 - 7:29 am UTC

The plans should stay constant if all other things are constant (the software is deterministic; given the same inputs, it'll come to the same conclusion).



Mike, April 29, 2002 - 12:21 pm UTC

Without stats, and with a FIRST_ROWS hint, the explain plan still shows cost=something. Where do the costs come from? I assume Oracle will use rule-based optimization if there are no stats:

SQL>  select /*+ first_rows */ count(*) from tt where object_id=1000;


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=1 Card=1 B
          ytes=13)

   1    0   SORT (AGGREGATE)
   2    1     INDEX (RANGE SCAN) OF 'TT_IO_IDX' (NON-UNIQUE) (Cost=1 C
          ard=20 Bytes=260)
 

Tom Kyte
April 29, 2002 - 12:59 pm UTC

Your assumption is wrong.  Oracle makes up statistics when you demand the use of the CBO.  It does this based on cached data dictionary information (how many extents, what's the high water mark, and so on).


Watch the cost/card go up and down (notice that a flush must take place, else the cached data dictionary info is used)

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace traceonly explain
ops$tkyte@ORA817DEV.US.ORACLE.COM> select /*+ first_rows */ * from t t1;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=1 Card=82 Bytes=10496)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=1 Card=82 Bytes=10496)



ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

22907 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

22907 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;

22907 rows created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter system flush shared_pool;

System altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace traceonly explain
ops$tkyte@ORA817DEV.US.ORACLE.COM> select /*+ first_rows */ * from t t2;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=28 Card=77188 Bytes=9880064)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=28 Card=77188 Bytes=9880064)



ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace off
ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> delete from t;

68721 rows deleted.

ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;

Commit complete.

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter system flush shared_pool
  2  /

System altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> set autotrace traceonly explain
ops$tkyte@ORA817DEV.US.ORACLE.COM> select /*+ first_rows */ * from t t3;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=28 Card=77188 Bytes=9880064)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=28 Card=77188 Bytes=9880064)



ops$tkyte@ORA817DEV.US.ORACLE.COM> truncate table t;

Table truncated.

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter system flush shared_pool;

System altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select /*+ first_rows */ * from t t4;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=1 Card=82 Bytes=10496)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=1 Card=82 Bytes=10496)
 

Rajiv, August 08, 2002 - 10:35 am UTC

Hi Tom,

I learned a lot from your site. The way you explain things with examples is superb. Thanks for all your help.

I would like to ask you about tuning complex select statements with several joins and outer joins (the ones generated by OLAP tools, etc.). Explain plan would show the access path, and SQL trace the wait statistics. That will help us to see if the access path is optimal or if there are any inordinate waits. But I feel that sometimes, in the case of complex statements, a total re-write or even design changes would be necessary to achieve better response time. What should the principles of re-writing an SQL statement effectively be? In order to illustrate what I am trying to ask, let me post the predicate of a long SQL I am trying to tune:

----------------------
where m.material = ms.mat_sales
and ms.matl_grp_2 = g.matl_grp_2
and ms."/BIC/ZCPRDSTAT" = ps."BIC_ZCPRDSTAT"
and ms.salesorg = '1010'
and ms.distr_chan = '01'
and ms.mat_sales = pz.material (+)
and ms.plant = pz.plant (+)
and pz.comp_code (+) = '101'
and pz.val_class (+) = '7920'
and pz.fiscvarnt (+) = 'K4'
and pz.fiscper (+) = mercator.fisc_per
and m.division = d.division
and m."/BIC/ZCLANGUAE" = le."BIC_ZCLANGUAE"
and m."/BIC/ZCLANGUAG" = lg."BIC_ZCLANGUAG"
and m.matl_group = mg.matl_group
and m."/BIC/ZCDESIGN" = mt."BIC_ZCDESIGN"
and m."/BIC/ZCPRODCAT" = mp."BIC_ZCPRODCAT"
and m."/BIC/ZCPRDCODE" = pc."BIC_ZCPRDCODE"
and m."/BIC/ZCRELEASE" = mr."BIC_ZCRELEASE"
and m."/BIC/ZCSERIES" = s."BIC_ZCSERIES"
and m."/BIC/ZCSUBCAT1" = sc1."BIC_ZCSUBCAT1"
and m."/BIC/ZCSUBCAT2" = sc2."BIC_ZCSUBCAT2"
and m."/BIC/ZCSUBCAT3" = sc3."BIC_ZCSUBCAT3"
and m.material = mu.material (+)
and mu.mat_unit (+) = 'CS'
and m."/BIC/A_ISBN" || '.1' = w.wbs_elemt (+)

And part of the Explain plan on a select count(*) using this query as an inline view is:

0 SELECT STATEMENT Optimizer=CHOOSE (Cost=21521004692346200000
0 Card=1 Bytes=282)

1 0 SORT (AGGREGATE)
2 1 HASH JOIN (OUTER) (Cost=215210046923462000000 Card=57211
36614938280000000000 Bytes=1613360525412600000000000000)

3 2 NESTED LOOPS (Cost=5559474405851140 Card=7711673875745
780000000 Bytes=2066728598699870000000000)

4 3 HASH JOIN (Cost=5559474405851140 Card=77116738757457
80000000 Bytes=2028170229321140000000000)

5 4 VIEW OF 'BIC_TZCSUBCAT3_BO_VW' (Cost=8 Card=732 By
tes=4392)

6 5 SORT (UNIQUE) (Cost=8 Card=732 Bytes=13000)
7 6 UNION-ALL
8 7 TABLE ACCESS (FULL) OF '/BIC/TZCSUBCAT3' (Co
st=1 Card=650 Bytes=13000)

----------

Except for one or two, the tables aren't that big. Also, even though we have indexes on most of the join columns, many of them are not being used. We have many such queries and we will have more data in production. What can we do to speed up such queries? I hope I am clear in what I am asking. Waiting for your reply.

Rajiv.



Tom Kyte
August 08, 2002 - 1:08 pm UTC

Are those cardinality counts right?

Indexes do not imply, mean, infer or suggest "faster". In this query, it appears most of the rows in most of the tables would be accessed -- hashing is better for that than slower index reads.

Is that the full plan?

(bummer on those table and column names -- cannot believe someone would do that to you, slashes and all...)



A reader, August 08, 2002 - 1:42 pm UTC

Ya, those CARD numbers were scary. It was not the full plan. Let me post both the query and the plan here.

About the slashes on the table names: This is SAP Business Warehouse. This is the way they name the tables and columns. It was hard for me too to believe when I saw it first.

We have a number of small views (union of 1 or 2 tables with dual) used in the query. (the ones with _VW are views).

Here's the query and plan:

SQL> set autot traceonly explain
SQL> 
SQL> 
SQL> ed
Wrote file afiedt.buf

  1  select
  2    m.material, m."/BIC/A_ISBN",
  3    m."/BIC/ZKAGERAFR", m."/BIC/ZKAGERATO",
  4    m."/BIC/ZCAUDIENC", m."/BIC/ZCARTIST", m."/BIC/ZCDISPOSA",
  5    m.EANUPC, m.DIVISION, m."/BIC/ZKEXTENT",
  6    m."/BIC/ZKGRARAFR", m."/BIC/ZKGRARATO",
  7    m."/BIC/ZCLANGUAE", m."/BIC/ZCLANGUAG",
  8    m."/BIC/ZCTXTGRUN", m."/BIC/ZCTXTGRU1", m."/BIC/ZCTXTGRU2",
  9    m.MATL_TYPE, m.MATL_GROUP, m."/BIC/ZCDESIGN", m."/BIC/ZCOUTPRDT",
 10    m."/BIC/ZCPRODCAT", ms.PROD_HIER, m."/BIC/ZCPRDCODE", m."/BIC/ZCPUBLDAT",
 11    m."/BIC/ZCRELEASE", m."/BIC/ZC1SHPDAT",
 12    m."/BIC/ZCSUBCAT1", m."/BIC/ZCSUBCAT2", m."/BIC/ZCSUBCAT3",
 13    m.LENGHT, m."/BIC/ZCSUBTIT1", m."/BIC/ZCSUBTIT2", m."/BIC/ZCSUBTIT3",
 14    m.HEIGHT, m.WIDTH, m. "/BIC/ZKVOLNUM", m.NET_WEIGHT,
 15    m."/BIC/ZCSERIES", ms.matl_grp_1, ms.matl_grp_2,
 16    ms."/BIC/ZCONSALED", ms."/BIC/ZCPRDSTAT", ms.PLANT,
 17    d.DIVISION_DESC, le.BIC_ZCLANGUAE_DESC, lg.BIC_ZCLANGUAG_DESC,
 18    mg.MATL_GROUP_DESC, mt.BIC_ZCDESIGN_DESC,
 19    mp.BIC_ZCPRODCAT_DESC, pc.BIC_ZCPRDCODE_DESC, mr.BIC_ZCRELEASE_DESC,
 20    g.MATL_GRP_2_DESC, s.BIC_ZCSERIES_DESC,
 21    ps.BIC_ZCPRDSTAT_DESC, sc1.BIC_ZCSUBCAT1_DESC, sc2.BIC_ZCSUBCAT2_DESC,
 22    sc3.BIC_ZCSUBCAT3_DESC,
 23    mu.NUMERATOR, w.PS_RESPNO, m."/BIC/ZCSERIES2", m."/BIC/ZCSERIES3",
 24    pz.PLANT, pz.PRICE_MAT, m."/BIC/ZKPRICECA"
 25  from  sapr3."/BI0/PMATERIAL" m,
 26    sapr3."/BI0/PMAT_SALES" ms,
 27    BOADMIN.BI0_TDIVISION_BO_VW d,
 28    BOADMIN.BIC_TZCLANGUAE_BO_VW le,
 29    BOADMIN.BIC_TZCLANGUAG_BO_VW lg,
 30    BOADMIN.BI0_TMATL_GROUP_BO_VW mg,
 31    BOADMIN.BIC_TZCDESIGN_BO_VW mt,
 32    BOADMIN.BIC_TZCPRODCAT_BO_VW mp,
 33    BOADMIN.BIC_TZCPRDCODE_BO_VW pc,
 34    BOADMIN.BIC_TZCRELEASE_BO_VW mr,
 35    BOADMIN.BI0_TMATL_GRP_2_BO_VW g,
 36    BOADMIN.BIC_TZCSERIES_BO_VW s,
 37    BOADMIN.BIC_TZCPRDSTAT_BO_VW ps,
 38    BOADMIN.BIC_TZCSUBCAT1_BO_VW sc1,
 39    BOADMIN.BIC_TZCSUBCAT2_BO_VW sc2,
 40    BOADMIN.BIC_TZCSUBCAT3_BO_VW sc3,
 41    sapr3."/BI0/PMAT_UNIT" mu,
 42    sapr3."/BI0/PWBS_ELEMT" w,
 43    sapr3."/BIC/AP1OSCIMV00" pz
 44  where  m.material = ms.mat_sales
 45    and  ms.matl_grp_2 = g.matl_grp_2
 46    and  ms."/BIC/ZCPRDSTAT" = ps."BIC_ZCPRDSTAT"
 47    and  ms.salesorg = '1010'
 48    and  ms.distr_chan = '01'
 49    and  ms.mat_sales = pz.material (+)
 50    and  ms.plant = pz.plant (+)
 51    and  pz.comp_code (+) = '101'
 52    and  pz.val_class (+) = '7920'
 53    and  pz.fiscvarnt (+) = 'K4'
 54    and  pz.fiscper (+) = mercator.fisc_per
 55    and  m.division = d.division
 56    and  m."/BIC/ZCLANGUAE" = le."BIC_ZCLANGUAE"
 57    and  m."/BIC/ZCLANGUAG" = lg."BIC_ZCLANGUAG"
 58    and  m.matl_group = mg.matl_group
 59    and  m."/BIC/ZCDESIGN" = mt."BIC_ZCDESIGN"
 60    and  m."/BIC/ZCPRODCAT" = mp."BIC_ZCPRODCAT"
 61    and  m."/BIC/ZCPRDCODE" = pc."BIC_ZCPRDCODE"
 62    and  m."/BIC/ZCRELEASE" = mr."BIC_ZCRELEASE"
 63    and  m."/BIC/ZCSERIES"  = s."BIC_ZCSERIES"
 64    and  m."/BIC/ZCSUBCAT1" = sc1."BIC_ZCSUBCAT1"
 65    and  m."/BIC/ZCSUBCAT2" = sc2."BIC_ZCSUBCAT2"
 66    and  m."/BIC/ZCSUBCAT3" = sc3."BIC_ZCSUBCAT3"
 67    and  m.material = mu.material (+)
 68    and  mu.mat_unit (+) = 'CS'
 69*   and  m."/BIC/A_ISBN" || '.1' = w.wbs_elemt (+)
 70  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=74990665314992100000
          0 Card=5721136614938280000000000 Bytes=530349364204779000000
          0000000)

   1    0   HASH JOIN (OUTER) (Cost=749906653149921000000 Card=5721136
          614938280000000000 Bytes=5303493642047790000000000000)

   2    1     HASH JOIN (Cost=33776078474649500 Card=77116738757457800
          00000 Bytes=6994488205301420000000000)

   3    2       VIEW OF 'BIC_TZCSUBCAT3_BO_VW' (Cost=8 Card=732 Bytes=
          13176)

   4    3         SORT (UNIQUE) (Cost=8 Card=732 Bytes=13000)
   5    4           UNION-ALL
   6    5             TABLE ACCESS (FULL) OF '/BIC/TZCSUBCAT3' (Cost=1
           Card=650 Bytes=13000)

   7    5             TABLE ACCESS (FULL) OF 'DUAL' (Cost=1 Card=82)
   8    2       HASH JOIN (Cost=24056545335293200 Card=105350736007456
          00000 Bytes=9365680431062840000000)

   9    8         VIEW OF 'BIC_TZCSUBCAT2_BO_VW' (Cost=8 Card=732 Byte
          s=13176)

  10    9           SORT (UNIQUE) (Cost=8 Card=732 Bytes=13000)
  11   10             UNION-ALL
  12   11               TABLE ACCESS (FULL) OF '/BIC/TZCSUBCAT2' (Cost
          =1 Card=650 Bytes=13000)

  13   11               TABLE ACCESS (FULL) OF 'DUAL' (Cost=1 Card=82)
  14    8         HASH JOIN (Cost=24043532550830800 Card=1439217705019
          8900 Bytes=12535586210723300000)

  15   14           VIEW OF 'BIC_TZCSERIES_BO_VW' (Cost=11 Card=1074 B
          ytes=31146)

  16   15             SORT (UNIQUE) (Cost=11 Card=1074 Bytes=30752)
  17   16               UNION-ALL
  18   17                 TABLE ACCESS (FULL) OF '/BIC/TZCSERIES' (Cos
          t=1 Card=992 Bytes=30752)

  19   17                 TABLE ACCESS (FULL) OF 'DUAL' (Cost=1 Card=8
          2)

  20   14           HASH JOIN (Cost=24043509114311200 Card=13400537290
          688 Bytes=11283252398759300)

  21   20             VIEW OF 'BIC_TZCPRODCAT_BO_VW' (Cost=6 Card=152
          Bytes=2280)

  22   21               SORT (UNIQUE) (Cost=6 Card=152 Bytes=1190)
  23   22                 UNION-ALL
  24   23                   TABLE ACCESS (FULL) OF '/BIC/TZCPRODCAT' (
          Cost=1 Card=70 Bytes=1190)

  25   23                   TABLE ACCESS (FULL) OF 'DUAL' (Cost=1 Card
          =82)

  26   20             HASH JOIN (Cost=24043509089064700 Card=881614295
          44 Bytes=72909502232888)

  27   26               TABLE ACCESS (FULL) OF '/BI0/TMATL_GRP_2' (Cos
          t=1 Card=237 Bytes=6399)

  28   26               HASH JOIN (OUTER) (Cost=24043509040196500 Card
          =88161429544 Bytes=70529143635200)

  29   28                 NESTED LOOPS (OUTER) (Cost=24043435323438600
           Card=744931389402 Bytes=575831964007746)

  30   29                   HASH JOIN (Cost=24043435323438600 Card=168
          706101620750000 Bytes=122311923675044000000)

  31   30                     VIEW OF 'BIC_TZCPRDSTAT_BO_VW' (Cost=6 C
          ard=90 Bytes=1350)

  32   31                       SORT (UNIQUE) (Cost=6 Card=90 Bytes=13
          6)

  33   32                         UNION-ALL
  34   33                           TABLE ACCESS (FULL) OF '/BIC/TZCPR
          DSTAT' (Cost=1 Card=8 Bytes=136)

  35   33                           TABLE ACCESS (FULL) OF 'DUAL' (Cos
          t=1 Card=82)

  36   30                     HASH JOIN (Cost=24040201730831000 Card=1
          3121585681613900 Bytes=9316325833945870000)

  37   36                       TABLE ACCESS (FULL) OF '/BI0/PMAT_SALE
          S' (Cost=481 Card=56618 Bytes=3906642)

  38   36                       HASH JOIN (Cost=13709136541766 Card=19
          2498514928967000 Bytes=123391548069468000000)

  39   38                         VIEW OF 'BIC_TZCDESIGN_BO_VW' (Cost=
          6 Card=215 Bytes=3440)

  40   39                           SORT (UNIQUE) (Cost=6 Card=215 Byt
          es=2128)

  41   40                             UNION-ALL
  42   41                               TABLE ACCESS (FULL) OF '/BIC/T
          ZCDESIGN' (Cost=1 Card=133 Bytes=2128)

  43   41                               TABLE ACCESS (FULL) OF 'DUAL'
          (Cost=1 Card=82)

  44   38                         HASH JOIN (Cost=1250511513346 Card=5
          7301883513739100 Bytes=35813677196087000000)

  45   44                           VIEW OF 'BIC_TZCLANGUAG_BO_VW' (Co
          st=6 Card=137 Bytes=2192)

  46   45                             SORT (UNIQUE) (Cost=6 Card=137 B
          ytes=880)

  47   46                               UNION-ALL
  48   47                                 TABLE ACCESS (FULL) OF '/BIC
          /TZCLANGUAG' (Cost=1 Card=55 Bytes=880)

  49   47                                 TABLE ACCESS (FULL) OF 'DUAL
          ' (Cost=1 Card=82)

  50   44                           HASH JOIN (Cost=9345853174 Card=58
          55666928411300 Bytes=3566101159402480000)

  51   50                             VIEW OF 'BIC_TZCSUBCAT1_BO_VW' (
          Cost=6 Card=130 Bytes=2340)

  52   51                               SORT (UNIQUE) (Cost=6 Card=130
           Bytes=960)

  53   52                                 UNION-ALL
  54   53                                   TABLE ACCESS (FULL) OF '/B
          IC/TZCSUBCAT1' (Cost=1 Card=48 Bytes=960)

  55   53                                   TABLE ACCESS (FULL) OF 'DU
          AL' (Cost=1 Card=82)

  56   50                             HASH JOIN (Cost=75162061 Card=45
          043591757010 Bytes=26620762728392900)

  57   56                               VIEW OF 'BIC_TZCLANGUAE_BO_VW'
           (Cost=6 Card=119 Bytes=2856)

  58   57                                 SORT (UNIQUE) (Cost=6 Card=1
          19 Bytes=814)

  59   58                                   UNION-ALL
  60   59                                     TABLE ACCESS (FULL) OF '
          /BIC/TZCLANGUAE' (Cost=1 Card=37 Bytes=814)

  61   59                                     TABLE ACCESS (FULL) OF '
          DUAL' (Cost=1 Card=82)

  62   56                               HASH JOIN (Cost=357783 Card=37
          8517577790 Bytes=214619466606930)

  63   62                                 VIEW OF 'BIC_TZCRELEASE_BO_V
          W' (Cost=6 Card=102 Bytes=1632)

  64   63                                   SORT (UNIQUE) (Cost=6 Card
          =102 Bytes=360)

  65   64                                     UNION-ALL
  66   65                                       TABLE ACCESS (FULL) OF
           '/BIC/TZCRELEASE' (Cost=1 Card=20 Bytes=360)

  67   65                                       TABLE ACCESS (FULL) OF
           'DUAL' (Cost=1 Card=82)

  68   62                                 HASH JOIN (Cost=11746 Card=3
          710956645 Bytes=2044737111395)

  69   68                                   VIEW OF 'BIC_TZCPRDCODE_BO
          _VW' (Cost=6 Card=101 Bytes=1414)

  70   69                                     SORT (UNIQUE) (Cost=6 Ca
          rd=101 Bytes=304)

  71   70                                       UNION-ALL
  72   71                                         TABLE ACCESS (FULL)
          OF '/BIC/TZCPRDCODE' (Cost=1 Card=19 Bytes=304)

  73   71                                         TABLE ACCESS (FULL)
          OF 'DUAL' (Cost=1 Card=82)

  74   68                                   HASH JOIN (Cost=4855 Card=
          36742145 Bytes=19730531865)

  75   74                                     VIEW OF 'BI0_TMATL_GROUP
          _BO_VW' (Cost=6 Card=94 Bytes=1692)

  76   75                                       SORT (UNIQUE) (Cost=6
          Card=94 Bytes=228)

  77   76                                         UNION-ALL
  78   77                                           TABLE ACCESS (FULL
          ) OF '/BI0/TMATL_GROUP' (Cost=1 Card=12 Bytes=228)

  79   77                                           TABLE ACCESS (FULL
          ) OF 'DUAL' (Cost=1 Card=82)

  80   74                                     HASH JOIN (Cost=3645 Car
          d=6644856 Bytes=3448680264)

  81   80                                       VIEW OF 'BI0_TDIVISION
          _BO_VW' (Cost=6 Card=88 Bytes=1320)

  82   81                                         SORT (UNIQUE) (Cost=
          6 Card=88 Bytes=102)

  83   82                                           UNION-ALL
  84   83                                             TABLE ACCESS (FU
          LL) OF '/BI0/TDIVISION' (Cost=1 Card=6 Bytes=102)

  85   83                                             TABLE ACCESS (FU
          LL) OF 'DUAL' (Cost=1 Card=82)

  86   80                                       TABLE ACCESS (FULL) OF
           '/BI0/PMATERIAL' (Cost=3621 Card=830607 Bytes=418625928)

  87   29                   TABLE ACCESS (BY INDEX ROWID) OF '/BIC/AP1
          OSCIMV00'

  88   87                     INDEX (RANGE SCAN) OF '/BIC/AP1OSCIMV00~
          0' (UNIQUE)

  89   28                 TABLE ACCESS (FULL) OF '/BI0/PMAT_UNIT' (Cos
          t=234 Card=98301 Bytes=2654127)

  90    1     TABLE ACCESS (FULL) OF '/BI0/PWBS_ELEMT' (Cost=101 Card=
          74188 Bytes=1483760)


Thanks for your time.

Rajiv. 

Tom Kyte
August 08, 2002 - 2:44 pm UTC

Well, I do believe that a cost of Seven Hundred Forty-Nine quintillion Nine Hundred Six quadrillion Six Hundred Fifty-Three trillion One Hundred Forty-Nine billion Nine Hundred Twenty-One million is the largest cost I've ever seen!


The plan looks appropriate, however, given the predicate -- you don't have many predicates other than joins in there, meaning the full table scans are appropriate.

You might want to look at upping the hash_area_size and utilizing parallel query to help with these objects.
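A sketch of those two suggestions (the size and degree of parallelism below are illustrative values only, not recommendations):

```sql
-- Give hash joins more memory for this session:
ALTER SESSION SET hash_area_size = 104857600;   -- 100 MB, illustrative

-- Hint parallel execution on the large driving table
-- (the alias matches the query above; degree 8 is a placeholder):
SELECT /*+ PARALLEL(m, 8) */ m.material
  FROM sapr3."/BI0/PMATERIAL" m;
```

With full scans and hash joins dominating the plan, more hash memory avoids spilling to disk, and parallel query spreads the scans across CPUs.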


A reader, August 08, 2002 - 4:07 pm UTC

Oh! I didn't know I was working on the costliest query in the whole world!!

Thanks for the suggestion. I was already trying to use parallel execution on this query. I will increase the hash area too.

I am also trying to re-write the query by creating a table/mv replacing those views: they are views on lookup tables where the data does not change. Do you think it will help?

Also would you suggest any other re-writing tips: like replace the outer join with subqueries or anything like that? Thanks for your help.

Rgds,
Rajiv.

Tom Kyte
August 09, 2002 - 7:58 am UTC

You might try selecting a select, instead of:

1 select
....
23 mu.NUMERATOR, w.PS_RESPNO, m."/BIC/ZCSERIES2", m."/BIC/ZCSERIES3",
....
69* and m."/BIC/A_ISBN" || '.1' = w.wbs_elemt (+)
70 /

You can


1 select
....
23 mu.NUMERATOR,
(select w.PS_RESPNO
from sapr3."/BI0/PWBS_ELEMT" w
where w.wbs_elemt = m."/BIC/A_ISBN" )
,
m."/BIC/ZCSERIES2", m."/BIC/ZCSERIES3",
....
69*
70 /


and remove W from the "from" list in the major query. You might try that for some of the other outer-joined to things as well.


Additionally -- determine IF in fact you TRULY NEED the outer joins in all cases -- I've found many times they are there simply "in case". outer joins can be very expensive.

Doing that select of a select with the SC1, SC2, SC3 tables might be dramatic as well -- you may find that better than a join join join join (then again, you might not; give it a try).

A reader, August 10, 2002 - 5:12 pm UTC

Tom, I tried with the subquery, but it didn't make much difference. The query still runs for about half an hour.

I loaded all the tables on another system and tried to run the query there. Amazingly, it was done in less than 3 minutes. The plan looks mostly similar, but the cost and card numbers are far less. I compared the buffers and other init parameters of the two systems. While they are different, they are not excessively different. Also, the opt* init parameters were the same. On both systems the tables involved were of the same size and the stats were current. The first (slow) system was a 12 CPU RS/6000 AIX 4.3. There are 2 other instances running on that box. The second system was a 24 CPU RS/6000 AIX 5 with only one database running.

If you are interested in moving this thread, I can post the details of the comparison. Otherwise, I would like to have your closing comments on why we had such excessive costs on the first system. Can it be attributed to the smaller CPU strength alone? How can I find more info on what is going on?

Thanks for your time and valuable suggestions. Appreciate it a lot.

Rgds,
Rajiv.

Tom Kyte
August 11, 2002 - 9:31 am UTC

Are the STATS the same (up to date) on both boxes?

Explain Plan document.

Kashif, January 16, 2003 - 1:38 pm UTC

Hi Tom,

The Explain Plan document you provided the link to (to the original poster) does not adequately explain how the Explain Plan is actually supposed to be interpreted. For example, I understand that an explain plan is supposed to be read innermost to outermost, but that info is not mentioned in the document. I also tried looking for this info in your book but was unable to get much information. Do you have any other documents/links which might explain how to interpret the Explain Plan more thoroughly? Thanks.

Kashif

Tom Kyte
January 16, 2003 - 7:48 pm UTC

Ok, tell me if this helps. It is an excerpt from my forthcoming book:


1.3.3 Reading an Explain Plan

This is a frequently asked question -- how exactly do you read an explain plan? Here I will present my 'simple' approach to reading the plan. I do suggest, however, that a quick read of the chapter on the explain plan command in the Oracle Performance Guide would be very useful as well.

For all of the details on reading an explain plan, please refer to the Oracle Performance Guide. There are complete details on how to read the query plan and interpret the results.

We'll take a look at a query plan resulting from a query against the SCOTT/TIGER tables (note, I add primary keys to the EMP and DEPT tables - hence, they are indexed):

scott@ORA920> delete from plan_table;
7 rows deleted.

scott@ORA920> explain plan for
2 select ename, dname, grade
3 from emp, dept, salgrade
4 where emp.deptno = dept.deptno
5 and emp.sal between salgrade.losal and salgrade.hisal
6 /
Explained.

scott@ORA920> @?/rdbms/admin/utlxpls

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------
| Id | Operation |Name |Rows|Bytes|Cost |
-----------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | NESTED LOOPS | | | | |
| 2 | NESTED LOOPS | | | | |
| 3 | TABLE ACCESS FULL | SALGRADE| | | |
|* 4 | TABLE ACCESS FULL | EMP | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | | | |
|* 6 | INDEX UNIQUE SCAN | DEPT_PK | | | |
-----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("EMP"."SAL"<="SALGRADE"."HISAL" AND
"EMP"."SAL">="SALGRADE"."LOSAL")
6 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Note: rule based optimization

21 rows selected.

Now -- what happens first? How does that plan actually get evaluated? First I'll show you the pseudo code for how the plan is evaluated, and then we'll discuss how I arrived at this conclusion:

For salgrade in (select * from salgrade)
Loop
    For emp in ( select * from emp )
    Loop
        If ( emp.sal between salgrade.losal and salgrade.hisal )
        Then
            Select * into dept_rec
              From dept
             Where dept.deptno = emp.deptno;

            OUTPUT RECORD with fields from salgrade, emp, dept
        End if;
    End loop;
End loop;

The way I read the plan was to turn it into a graph of sorts - an evaluation tree. In order to do that, we need to understand something about access paths.

For detailed information on the access paths available to Oracle, please see the Oracle Performance and Tuning Guide.

In order to build the tree - we can start at the top, with step 1. That will be our "root node" in the tree. Next, we need to find the things that "feed" this root node - that will be steps 2 and 5 - as you can see - 2 and 5 are at the same level of indentation - they "feed" into step 1. Further, we can see that steps 3 and 4 feed step 2 and that step 6 feeds step 5. Putting that together iteratively - we would draw:


1
/ \
2 5
/ \ \
3 4 6


And then just read the tree. In order to get 1, we need 2 and 5 - 2 is "first". In order to get 2, we need 3 and 4 - 3 is "first". That is how I arrived at the pseudo code for:

For salgrade in (select * from salgrade)
Loop
For emp in ( select * from emp )
Loop

Full scan SALGRADE is step 3, full scan EMP is step 4 and step 2 is a nested loop - which is roughly equivalent to two "for loops". Once we get step 2 going like that - we can look at step 5. Step 5 runs step 6 first - step 6 is the index scan step. We are taking the output of step 2 here and using that to "feed" this part of the query plan. So, the output from step 2 is used to perform an index scan - that index scan output is used to TABLE ACCESS BY ROWID the DEPT table and that result is the output of step 1 - our result set.
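To make the flow concrete, here is a small runnable sketch of that evaluation (in Python rather than SQL, with tiny made-up rows standing in for SCOTT's tables - the data values are assumptions, not the real EMP/DEPT/SALGRADE contents): steps 3 and 4 become nested for-loops, and the unique scan of DEPT_PK (steps 5 and 6) becomes a keyed lookup.

```python
# Toy data standing in for SCOTT's tables (values are made up).
salgrade = [(1, 700, 1200), (2, 1201, 1400), (3, 1401, 2000)]   # (grade, losal, hisal)
emp = [("SMITH", 800, 20), ("ALLEN", 1600, 30), ("WARD", 1250, 30)]  # (ename, sal, deptno)
dept = {10: "ACCOUNTING", 20: "RESEARCH", 30: "SALES"}          # the "DEPT_PK index"

def run_plan():
    out = []
    for grade, losal, hisal in salgrade:        # step 3: TABLE ACCESS FULL of SALGRADE
        for ename, sal, deptno in emp:          # step 4: TABLE ACCESS FULL of EMP
            if losal <= sal <= hisal:           # step 4's filter predicate
                dname = dept[deptno]            # steps 5/6: unique index lookup into DEPT
                out.append((ename, dname, grade))  # step 1: output row
    return out

print(run_plan())
```

Running it shows the rows coming out in SALGRADE-driven order, exactly as the outer-to-inner loop nesting predicts.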

Now, to make this "interesting", we will run an equivalent query - but we'll mix up the order of the tables in the from clause this time. Since I am using the rule based optimizer - this will affect the generated query plan (and is just one reason why you DON'T want to use the rule based optimizer! We'll cover more reasons in a later section). We'll use the same logic to build its query plan tree and evaluate how it processed the query:

scott@ORA920> delete from plan_table;
7 rows deleted.

scott@ORA920> explain plan for
2 select ename, dname, grade
3 from salgrade, dept, emp
4 where emp.deptno = dept.deptno
5 and emp.sal between salgrade.losal and salgrade.hisal
6 /
Explained.

scott@ORA920> @?/rdbms/admin/utlxpls

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | |
| 1 | NESTED LOOPS | | |
| 2 | NESTED LOOPS | | |
| 3 | TABLE ACCESS FULL | EMP | |
| 4 | TABLE ACCESS BY INDEX ROWID| DEPT | |
|* 5 | INDEX UNIQUE SCAN | DEPT_PK | |
|* 6 | TABLE ACCESS FULL | SALGRADE | |
-------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
6 - filter("EMP"."SAL"<="SALGRADE"."HISAL" AND
"EMP"."SAL">="SALGRADE"."LOSAL")
Note: rule based optimization

21 rows selected.

Here we see that steps 2 and 6 feed 1, steps 3 and 4 feed 2, and step 5 feeds 4. Drawing the tree:
1
/ \
2 6
/ \
3 4
\
5

So, the pseudo code logic here is - starting with steps 3 and 4:

For emp in ( select * from emp )
Loop
-- using the index
Select * from dept where dept.deptno = emp.deptno

For salgrade in (select * from salgrade )
Loop
If ( emp.sal between salgrade.losal and salgrade.hisal )
Then
OUTPUT RECORD;
End if;
End loop
End loop;

And that is it - if you draw the graphical tree like that and then read it bottom up, left to right, you'll get a good understanding of the "flow" of the data.
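That bottom-up, left-to-right reading is mechanical enough to script. The sketch below (Python; the (id, depth) pairs are just the indentation levels from the second plan, and the approach is my own illustration, not an Oracle utility): each step's parent is the nearest preceding step with a smaller depth, and the step executed first is found by descending to the leftmost leaf.

```python
def build_tree(steps):
    """steps: list of (id, depth) in plan order; returns {parent: [children]}."""
    children, stack = {}, []            # stack holds the current chain of ancestors
    for sid, depth in steps:
        while stack and stack[-1][1] >= depth:   # pop until we find this step's parent
            stack.pop()
        if stack:
            children.setdefault(stack[-1][0], []).append(sid)
        stack.append((sid, depth))
    return children

def first_executed(children, root):
    """Read the tree "bottom up, left to right": descend to the leftmost leaf."""
    node = root
    while children.get(node):
        node = children[node][0]
    return node

# The second plan above: 1 is fed by 2 and 6; 2 by 3 and 4; 4 by 5.
plan = [(1, 1), (2, 2), (3, 3), (4, 3), (5, 4), (6, 2)]
tree = build_tree(plan)
print(tree)                        # {1: [2, 6], 2: [3, 4], 4: [5]}
print(first_executed(tree, 1))     # step 3: the full scan of EMP runs first
```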


Fabulous!

Kashif, January 17, 2003 - 6:08 pm UTC

Hi Tom,

Thanks for that excellent easy-to-follow explanation. I understand the plan much better now. I'm just not quite getting my particular plan:

batchcode@QGF1> get ins2cpi.sql
1 insert into hmc.cpi_lnum (lnum, clnum, Loan_Type, Loan_amount,
2 RESEND_FLAG, PRVS_LOAD, Error_flag)
3 (
4 select /*+ ORDERED PUSH_SUBQ INDEX (dt pk_dates) */
5 dt.lnum, tr.clnum, pr.s_ltype, pr.ola , 'F', 1, 'N'
6 from chk_screen c,
7 hmc_dates hmc,
8 dates dt,
9 hmv_current_status hmv,
10 tracking tr,
11 product pr
12 where c.lnum = hmc.lnum (+)
13 and c.lnum = dt.lnum
14 and c.lnum = hmv.lnum
15 and c.lnum = tr.lnum
16 and c.lnum = pr.lnum
17 and dt.lnfundat >= to_date ( '10111999', 'MMDDYYYY' )
18 and dt.lnfundat <= to_date ( '12182002', 'MMDDYYYY' )
19 and hmc.send_to_cpi is null
20 and c.s_checkstatus = 'CS_CMPT'
21 and hmv.status_code||'' >= 700
22 and hmv.status_code||'' <= 2300
23 and length (tr.clnum) >= 10
24 and not exists (select null
25 from wireinfo w
26 where w.lnum = c.lnum)
27* )
batchcode@QGF1> get queryplan.txt
ID Query_Plan
---------- -------------------------------------------------------
0 INSERT STATEMENT Cost = 71
1 NESTED LOOPS
2 NESTED LOOPS
3 NESTED LOOPS
4 NESTED LOOPS
5 FILTER
6 NESTED LOOPS OUTER
7 TABLE ACCESS FULL CHK_SCREEN
8 INDEX RANGE SCAN PK_WIREINFO
9 TABLE ACCESS BY INDEX ROWID HMC_DATES
10 INDEX UNIQUE SCAN HMC_DATES
11 TABLE ACCESS BY INDEX ROWID DATES
12 INDEX UNIQUE SCAN PK_DATES
13 TABLE ACCESS BY INDEX ROWID HMV_CURRENT_STATUS
14 INDEX UNIQUE SCAN PK_HMV_CURRENT_STATUS
15 TABLE ACCESS BY INDEX ROWID TRACKING
16 INDEX UNIQUE SCAN PK_TRACKING
17 TABLE ACCESS BY INDEX ROWID PRODUCT
18 INDEX UNIQUE SCAN PK_PRODUCT

19 rows selected.

batchcode@QGF1>

It all makes sense to me, except for step 8, where there is a range scan on the PK_WIREINFO index. From what you explained, this step is feeding step 7, which is the FTS of chk_screen. But that seems counter-intuitive. How can there be a range scan on pk_wireinfo, and it also be the starting point for the plan? Alternatively, if I think that chk_screen was full table scanned first, followed by a range scan on pk_wireinfo for the ids produced from the FTS of chk_screen, that makes complete intuitive sense to me, but then it fails your explanation. I know this is caused by the PUSH_SUBQ hint used, which is causing the subquery to be evaluated first. Any feedback would be helpful. Thanks in advance.

Kashif

Tom Kyte
January 17, 2003 - 6:34 pm UTC

not knowing the tables and what indexes go where -- well -- that makes it sort of difficult. (you know my opinion of hints too don't you?)




For anyone interested...

Kashif, January 28, 2003 - 1:06 pm UTC

Thanks Tom, for the explanation. For anyone interested, Chapter 9 of the Performance Tuning Guide for 9i, "Using EXPLAIN PLAN", is significantly better than previous releases (pre-9i), especially when it comes to explaining how to read the EXPLAIN PLAN in the section, "Reading EXPLAIN PLAN Output". This coupled with Tom's explanation above provided me with a comprehensive understanding of the EXPLAIN PLAN. Keep up the good work Tom.

Kashif

2 Points

Robert, March 26, 2003 - 12:56 pm UTC

Tom,

1) Tom, your explanation of reading an explain plan is, I think, the best and most succinct explanation I have seen to date... also, I hope to study your pseudo-code to completely understand Oracle's workings.

2) Thanks to Kashif for the heads-up on the Performance Tuning Guide for 9i. I will check this out. I agree with the previous folks... the explain plan explanation in the 8i Performance and Tuning Guide was weak, and didn't explain it as clearly as Tom has.

Thanks,

Robert

A reader, November 07, 2003 - 5:12 pm UTC

Hi Tom,
I ran the SQL below two hours after starting the database:

  1  select a.file#,b.name,MAXIORTM,MAXIOWTM from v$datafile a,
  2   v$tablespace b,v$filestat c
  3  where a.ts#=b.ts#
  4    and c.file# = a.file#
  5*   and B.NAME = 'TEMP'
SQL> /

     FILE# NAME                             MAXIORTM   MAXIOWTM
---------- ------------------------------ ---------- ----------
         5 TEMP                              1685945       1263  

Is writing to TEMP taking too much time?
Please advise.

We are using oracle 8.1.7.4

Thanks
 

Tom Kyte
November 07, 2003 - 6:05 pm UTC

you tell us? i don't know, is it?

Reading Explain Plan

A reader, November 07, 2003 - 6:19 pm UTC

Tom,

I have read tuning books that simply state, in prose, how each kind of join is executed. But you have given pseudo code for how the plan gets executed, which is really very helpful. I would appreciate it if you could suggest some books that explain each of these execution steps as pseudo code rather than lengthy discussions/explanations.

As always thanks much for your clear and concise explanation

Tom Kyte
November 08, 2003 - 10:12 am UTC

well, i do that for some (not all) of the access paths in my new book "Effective Oracle by Design" -- hash joins, sort merges, cartesian joins and so on. they are a mix of "lengthy discussion" and pseudo code snippets.

Forthcoming book

A reader, November 07, 2003 - 6:25 pm UTC

Tom,

In this discussion you mention your forthcoming book. Is the one you are referring to "Expert Oracle by Design", or is it some other new book you are coming up with? I would be excited to see you come up with a new book explaining all of the Oracle tuning tools and tips/techniques.

Tom Kyte
November 08, 2003 - 10:12 am UTC

It is called "Effective Oracle by Design", releases in August 2003 (see homepage)

cpu costing

Prince, November 08, 2003 - 6:13 am UTC

Respected Tom!

You're really performing a great, valuable service. I have tried to use explain plan according to your instructions, but along with the PLAN_TABLE_OUTPUT it prints 'Note: cpu costing is off, 'plan_table' is old version', as follows. My question is: what is cpu costing, and how can it be enabled or disabled? (I am using Oracle9i Enterprise Edition Release 9.2.0.1.0 on MS Windows 2000 Advanced Server.)

SQL> delete from plan_table;

6 rows deleted.

SQL> explain plan for
  2  SELECT designation.designationcode, designation.name, designation.grade,
  3         designation.pricode
  4      FROM hrms.designation designation, hrms.employees employees;

Explained.

SQL> @E:\Ora92\rdbms\admin\utlxpls

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------


---------------------------------------------------------------------------------------------
| Id  | Operation                      |  Name                      | Rows  | Bytes | Cost  |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |                            |  8521K|   186M|  1360 |
|   1 |  MERGE JOIN CARTESIAN          |                            |  8521K|   186M|  1360 |
|   2 |   TABLE ACCESS FULL            | DESIGNATION                |   451 | 10373 |     3 |
|   3 |   BUFFER SORT                  |                            | 18895 |       |  1357 |
|   4 |    BITMAP CONVERSION TO ROWIDS |                            |       |       |       |
|   5 |     BITMAP INDEX FAST FULL SCAN| EMPLOYEE_EMPLOYEETYPE_IDX  |       |       |       |
---------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------


Note: cpu costing is off, 'plan_table' is old version

13 rows selected. 

Tom Kyte
November 08, 2003 - 10:24 am UTC

drop and recreate your plan table using the current script (should be done with each upgrade -- recreate the plan tables)


currently, in 9i, cpu costing is off by default, this will change in 10g. it is an undocumented init.ora parameter for cpu costing right now, so just ignore it.


Cardinality in EXPLAIN PLAN

A reader, November 08, 2003 - 9:36 am UTC

Dear Tom,

Thanks for sharing your valuable knowledge with us.

Could you kindly explain how the cardinality is calculated in explain plan output?

In my below query,
(1) For the predicate first_name like 'Sa%' ==> 3 rows returned but Card=1
(2) For the predicate first_name like 'S%' ==> 13 rows returned but Card=12


SQL> analyze table employees compute statistics for table for all indexes for all indexed columns;

Table analyzed.

SQL> set autotrace on
SQL> select employee_id from employees where first_name like 'Sa%';

EMPLOYEE_ID
-----------
        161
        192
        194


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=9)
   1    0   TABLE ACCESS (FULL) OF 'EMPLOYEES' (Cost=2 Card=1 Bytes=9)




Statistics
----------------------------------------------------------
          7  recursive calls
          0  db block gets
         10  consistent gets
          0  physical reads
          0  redo size
        428  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          3  rows processed

SQL> select employee_id from employees where first_name like 'S%';

EMPLOYEE_ID
-----------
        100
        116
        117
        123
        128
        138
        161
        166
        173
        192
        194
        203
        205

13 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=12 Bytes=108)
   1    0   TABLE ACCESS (FULL) OF 'EMPLOYEES' (Cost=2 Card=12 Bytes=1
          08)


Statistics
----------------------------------------------------------
          7  recursive calls
          0  db block gets
         10  consistent gets
          0  physical reads
          0  redo size
        517  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         13  rows processed

SQL>
 

Tom Kyte
November 08, 2003 - 10:28 am UTC

the cardinality is a best guess ESTIMATE. It was very close to dead on in this case, pretty good for a guess no?
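For the curious, here is a rough feel for where such a guess can come from. This is a deliberately simplified, hypothetical model - not Oracle's actual arithmetic: with basic column statistics, an equality predicate is commonly estimated as num_rows/num_distinct, and a LIKE 'prefix%' is treated as a range whose selectivity shrinks as the prefix gets longer.

```python
# Simplified, hypothetical cardinality model -- illustrative only,
# not Oracle's real selectivity formulas.
def eq_card(num_rows, num_distinct):
    """Equality predicate: assume values are uniformly distributed."""
    return max(1, round(num_rows / num_distinct))

def prefix_card(num_rows, alphabet=26, prefix_len=1):
    """LIKE 'p%': crude range model -- each extra prefix character
    divides the matching fraction by the assumed alphabet size."""
    return max(1, round(num_rows / alphabet ** prefix_len))

# 107 employees, 91 distinct first names (numbers chosen to resemble HR demo data).
print(eq_card(107, 91))                  # equality: about 1 row expected
print(prefix_card(107, prefix_len=2))    # LIKE 'Sa%' -> tiny estimate (Card=1 above)
print(prefix_card(107, prefix_len=1))    # LIKE 'S%'  -> larger estimate
```

The point is not the exact numbers but the shape: the estimate is driven entirely by the statistics the optimizer has, so a short prefix gets a larger guess than a long one, just as Card=12 versus Card=1 above.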

Re: cpu costing

Prince Faran, November 09, 2003 - 2:52 am UTC

Aha! That's nice, Tom. I have followed your instructions as under:

SQL> drop table plan_table;

Table dropped.

SQL> @E:\Ora92\rdbms\admin\utlxplan.sql

Table created.

The new plan_table is created. May I ask: did the old plan_table have any performance impact or any other issue, as long as it was not used explicitly by the user?

Thanks. 

Tom Kyte
November 09, 2003 - 7:05 am UTC


it just had less "functionality"

EXPLAIN PLAN Restrictions

Kamal Kishore, November 18, 2003 - 8:53 am UTC

Hi Tom,
In the following document at
</code> http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/ex_plan.htm#15821 <code>

It says that:
<quote>
Oracle does not support EXPLAIN PLAN for statements performing implicit type conversion of date bind variables. With bind variables in general, the EXPLAIN PLAN output might not represent the real execution plan.

From the text of a SQL statement, TKPROF cannot determine the types of the bind variables. It assumes that the type is CHARACTER, and gives an error message if this is not the case. You can avoid this limitation by putting appropriate type conversions in the SQL statement.
</quote>

Can you please elaborate on this and explain what the implications might be for someone trying to tune the application and any precautions/guidelines that must be kept in mind (while coding and/or while tuning).
Thanks,


Tom Kyte
November 21, 2003 - 7:52 am UTC

it should have said something like:

"if you do the really bad practice of relying on implicit conversions, you should expect bad things to happen"


IMO, you should code:


select * from t where date_column = to_date( :bind, 'date format' )

you should NEVER NEVER code

select * from t where date_column = :bind;


the second query is a bug waiting to happen. implicit conversions should always be avoided.

I think the response could do with some extra info

Gary, November 23, 2003 - 5:16 pm UTC

If you've got a PL/SQL block along the lines of
DECLARE
  v_result      <some type>;
  v_start_date  date;
  v_end_date    date;
BEGIN
  SELECT result_col INTO v_result
    FROM <very_complex query>
   WHERE col_date BETWEEN v_start_date AND v_end_date;
END;

If you want an explain plan of the query, you should not simply cut and paste the SQL to do an explain of
SELECT result_col FROM <very_complex query>
WHERE col_date BETWEEN :v_start_date AND :v_end_date
because, as stated in the reference, the explain plan doesn't know that the variables are defined as dates.
You have to go in and put the to_dates...
SELECT result_col FROM <very_complex query>
WHERE col_date BETWEEN to_date(:v_start_date,'format') AND to_date(:v_end_date,'format')

Personally, I'd love an oracle initialization parameter to turn off implicit data conversions

Unique scans in a NESTED Loop

A reader, December 15, 2003 - 6:25 pm UTC

Tom,

I have the following questions :

If I have an EXPLAIN PLAN that shows the following plan :
NESTED LOOPS
TABLE ACCESS FULL OF A
TABLE ACCESS BY ROWID OF B
INDEX UNIQUE SCAN B_IDX_UQ

1. Suppose A has around, say, 20000 rows. For each row in A, is it a good idea to perform an index lookup followed by a table access of B (even if the index is unique)? If it is not, how do we avoid this?
2. Suppose the index B_IDX_UQ is such that each index entry points to a different table block; this will be almost equivalent to reading all of the blocks in B below the HWM.
2a. How do we improve the above plan -- would it be better to use an FTS in that case?
2b. Does block selectivity matter? (My understanding is that block selectivity applies only to index range scans. Is that true?)
2c. Is the clustering factor also relevant in the above case? (I remember you mentioning in one of your discussions that clustering_factor is for index range scans only, not index unique scans.)
3. The examples I see in books and the one on your site are showing index-range scans for queries that use single tables. If I have a table-join and an EXPLAIN PLAN like say,
<Some join method such as Nested Loops or Hash Join>
TABLE ACCESS BY INDEX ROWID C
INDEX RANGE SCAN C_IDX
TABLE ACCESS BY INDEX ROWID D
INDEX RANGE SCAN D_IDX
and the D_IDX and C_IDX have both very poor clustering_factors how can we tune the query above and resolve the problem. Can you provide an example showing importance of clustering_factor in such kind of joins above.
4. Suppose a query is doing an FTS as opposed to an index scan (and I know you always emphasize that FTS is not evil and index lookups are not always good) -- is there any way to examine why it is performing the FTS? i.e., what things to look at (basically the steps/methods) to conclude that the CBO has arrived at the FTS execution path because of such and such reasons...

Tom Kyte
December 16, 2003 - 7:14 am UTC

1) the answer is:

    yes
    no
    maybe 
    sometimes

depends -- do you want the first row first?  if so, that NL join is great.
is B HUGE -- if so, NL might be really appropriate

tell you what -- analyze the tables, use the cbo and let it decide.

2) again, ANALYZE (and it depends on the size of B as well -- you are making the unstated assumption that B is smaller or equal in size to A)

so 2a -- use cbo
   2b -- yes it does -- look at the last pair of plans below
   2c -- it comes into play in joins as well (they are almost like index range 
         scans in a fashion)


ops$tkyte@ORA9IR2> /*
DOC>
DOC>drop table a;
DOC>drop table b1;
DOC>drop table b2;
DOC>
DOC>create table a
DOC>as
DOC>select rownum id, all_objects.* from all_objects;
DOC>
DOC>create table b1
DOC>as
DOC>select rownum id, rpad('*',2000,'*') data from all_objects;
DOC>alter table b1 add constraint b1_pk primary key(id);
DOC>
DOC>create table b2
DOC>as
DOC>select * from b1 order by reverse(id);
DOC>alter table b2 add constraint b2_pk primary key(id);
DOC>*/
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> analyze table a delete statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2> analyze table b1 delete statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2> analyze table b2 delete statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select * from a, b1 where a.id = b1.id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   NESTED LOOPS
   2    1     TABLE ACCESS (FULL) OF 'A'
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'B1'
   4    3       INDEX (UNIQUE SCAN) OF 'B1_PK' (UNIQUE)
 
 
 
ops$tkyte@ORA9IR2> select * from a, b2 where a.id = b2.id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   NESTED LOOPS
   2    1     TABLE ACCESS (FULL) OF 'A'
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'B2'
   4    3       INDEX (UNIQUE SCAN) OF 'B2_PK' (UNIQUE)
 
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> analyze table a compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2> analyze table b1 compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2> analyze table b2 compute statistics;
 
Table analyzed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from a, b1 where a.id = b1.id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2524 Card=29352 Bytes=61463088)
   1    0   HASH JOIN (Cost=2524 Card=29352 Bytes=61463088)
   2    1     TABLE ACCESS (FULL) OF 'A' (Cost=44 Card=29352 Bytes=2641680)
   3    1     TABLE ACCESS (FULL) OF 'B1' (Cost=956 Card=29353 Bytes=58823412)
 
 
 
ops$tkyte@ORA9IR2> select * from a, b2 where a.id = b2.id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2524 Card=29352 Bytes=61463088)
   1    0   HASH JOIN (Cost=2524 Card=29352 Bytes=61463088)
   2    1     TABLE ACCESS (FULL) OF 'A' (Cost=44 Card=29352 Bytes=2641680)
   3    1     TABLE ACCESS (FULL) OF 'B2' (Cost=956 Card=29353 Bytes=58823412)
 
 
 
ops$tkyte@ORA9IR2> select /*+ index(b1 b1_pk) */ * from a, b1 where a.id = b1.id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=10674 Card=29352 Bytes=61463088)
   1    0   MERGE JOIN (Cost=10674 Card=29352 Bytes=61463088)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'B1' (Cost=9846 Card=29353 Bytes=58823412)
   3    2       INDEX (FULL SCAN) OF 'B1_PK' (UNIQUE) (Cost=61 Card=29353)
   4    1     SORT (JOIN) (Cost=828 Card=29352 Bytes=2641680)
   5    4       TABLE ACCESS (FULL) OF 'A' (Cost=44 Card=29352 Bytes=2641680)
 
 
 
ops$tkyte@ORA9IR2> select /*+ index(b2 b2_pk) */ * from a, b2 where a.id = b2.id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=29396 Card=29352 Bytes=61463088)
   1    0   NESTED LOOPS (Cost=29396 Card=29352 Bytes=61463088)
   2    1     TABLE ACCESS (FULL) OF 'A' (Cost=44 Card=29352 Bytes=2641680)
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'B2' (Cost=1 Card=1 Bytes=2004)
   4    3       INDEX (UNIQUE SCAN) OF 'B2_PK' (UNIQUE)
 
 
 
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2>


3) use the CBO!

4) search for 10053 on this site and if you have my new book Effective Oracle by Design -- see chapter 6. 
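For what it's worth, the clustering factor driving the B1 versus B2 difference above can be computed by hand: walk the table block of each rowid in index key order and count how often the block changes. A small sketch (Python, with made-up block numbers - not real rowids) shows why B1, stored in key order, ends up near the number of blocks while B2, stored in reverse, ends up near the number of rows.

```python
def clustering_factor(rowid_blocks):
    """rowid_blocks: table block number of each row, visited in index key order.
    Count the block changes -- a low value means the index order matches
    the physical table order."""
    cf, prev = 0, None
    for blk in rowid_blocks:
        if blk != prev:
            cf += 1
            prev = blk
    return cf

# 12 rows, 4 per block (toy numbers).
b1 = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]   # key order visits each block once
b2 = [3, 2, 1, 3, 2, 1, 3, 2, 1, 3, 2, 1]   # key order changes block every row
print(clustering_factor(b1))   # near the number of blocks
print(clustering_factor(b2))   # near the number of rows
```

With identical data in both tables, that single statistic is what makes the CBO happily use B1's index but full scan B2.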

Pls help

A reader, December 16, 2003 - 1:39 pm UTC

Tom,

Thanks much for your response. However, I am not able to understand what you are trying to illustrate in that example and what numbers to look for. Are you illustrating how the explain plan changes after the analyze, or is there something more to it that I am not clear on? Please help.
Also, for Qn 3, you say use the CBO. Can you please explain?

So are you saying that if you use CBO and appropriately analyze the tables and indexes, it will automatically come up with a good optimal plan and if it is then how to ensure that it is the correct optimal plan?

Tom Kyte
December 16, 2003 - 2:36 pm UTC

look at the plans -- the plans are different for the two queries, and the ONLY difference between the two is the clustering factor of the indexes. so, in answer to "is clustering factor also relevant in the above case" - the answer is "yes", and "here is an example showing it"

my point for #3 was -- use the CBO and let it pick. It has the rules for choosing which to use. Your question:

3. The examples I see in books and the one on your site are showing index-range
scans for queries that use single tables. If I have a table-join and an EXPLAIN
PLAN like say,
<Some join method such as Nested Loops or Hash Join>
TABLE ACCESS BY INDEX ROWID C
INDEX RANGE SCAN C_IDX
TABLE ACCESS BY INDEX ROWID D
INDEX RANGE SCAN D_IDX
and the D_IDX and C_IDX have both very poor clustering_factors how can we tune
the query above and resolve the problem. Can you provide an example showing importance of clustering_factor in such kind of joins above.


doesn't really "make sense". poor clustering factors don't really exist -- they are facts, not "good" or "poor" -- and they do not by themselves lead to poor performance. I cannot tell you how to tune a hypothetical query with "some join technique"

that is the job of the optimizer.

Poor clustering factor

A reader, December 16, 2003 - 5:38 pm UTC

Tom,

I think I am not asking the question correctly to you.

1. My question is: if the clustering factor is poor (say, clustering factor = number of rows for both C and D) and the execution plan is as shown in my question, then it would generate a lot of logical I/Os (because of the index lookup followed by a table lookup), and an FTS could be much better in that case. Is my understanding correct?
2. If 1 is true and we find a large number of logical I/Os, how do we conclude from that execution plan that:
Analyzing the table and indexes might provide a better plan?
It may be because of the poor index clustering factors on those tables, or just the data volume (some queries simply hit more data)? i.e., how do we back-track from the explain plan to the data distribution? (Because in the example you showed, as soon as the data was analyzed, different execution plans were generated based on the index clustering factor and the distribution of data.)
So if I am just given an execution plan, where and how do I start, and how do I go from there to conclude that it is because of data distribution, clustering factor, etc.? i.e.,
How do we conclude whether the plan is optimal/correct?
Sorry for asking the same questions repeatedly; I am getting really confused about the way the CBO generates execution plans, and I am not really sure how to tie the execution plan back to the data distribution. A detailed explanation/example of the above questions would be of great help.
3. In the NL question above, you mentioned that NL will be good if table B is HUGE. But if the index unique scans on B are such that each index entry points to a different data block, then how does NL help? My doubt is: if the index entries in B point to different data blocks, then by the time the nested loop comes back to read the same data block, it might have aged out of the buffer cache, in which case it would have to do a physical read, causing too much I/O. Your explanation of this would be of great help.

Tom Kyte
December 16, 2003 - 6:57 pm UTC

1) no, that plan looks very innocent to me.  I do not see what you see.

consider:

ops$tkyte@ORA9IR2> select /*+ USE_HASH( t1 t2 ) */ *
  2    from t1, t2
  3   where t1.x = t2.a
  4     and t1.y = 5
  5     and t2.b = 5
  6  /
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=52)
   1    0   HASH JOIN (Cost=3 Card=1 Bytes=52)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=1 Card=1 Bytes=26)
   3    2       INDEX (RANGE SCAN) OF 'T1_IDXY' (NON-UNIQUE) (Cost=1 Card=1)
   4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=1 Card=1 Bytes=26)
   5    4       INDEX (RANGE SCAN) OF 'T2_IDXB' (NON-UNIQUE) (Cost=1 Card=82)
 

it'll use the index to get A ROW (in this example).  so what if the clustering factor is "poor" or not.  

it is a function of the cardinality, not the use of the index per se.

2) if the numbers in the autotrace are way wrong (estimated cardinalities) that is what indicates an analyze is needed (or a "stronger analyze" with histograms or something).  You cannot go from "hi LIO's -> you need to analyze", there is no correlation

 

Pls. clarify

A reader, December 16, 2003 - 8:34 pm UTC

Tom,

In the above example, you assume that only 1 row is retrieved.
1. Is that from the Card= column in each step of the explain plan? If not, how do you know it gets 1 row?
2. My question is: what if a considerable number of rows are retrieved in both steps of the HASH JOIN using the index-then-table approach, and the clustering factor is low? In that case, my understanding is that the FTS and index-then-table approaches will vary considerably. Is that correct?
3. Of the three figures -- Bytes=, Cost= and Card= -- which must be taken into consideration / which are relevant?

Tom Kyte
December 17, 2003 - 6:41 am UTC

1) exactly, card=1 says "optimizer thinks a single row"

2) is 100,000 rows considerable? maybe it is, maybe it is not. It depends on the size of the underlying objects. It is a function of the number of IOs the optimizer thinks it'll have to do.

If the cost of doing the index IO's plus table IO's exceeds the cost of full scanning the table using multi-block IO, then YES (but -- that is exactly what the optimizer is tasked with doing) the full scan will be "cheaper".

3) all of it.
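The trade-off in (2) can be sketched with a toy cost model (a common textbook simplification, not Oracle's real costing formulas): charge the index path roughly blevel + leaf_blocks*selectivity + clustering_factor*selectivity single-block reads, and the full scan roughly table_blocks / multiblock_read_count multiblock reads.

```python
# Toy cost model -- a textbook simplification, NOT Oracle's actual formulas.
def index_cost(blevel, leaf_blocks, clustering_factor, selectivity):
    """Single-block reads: descend the index, walk the leaves,
    then visit table blocks (bounded by the clustering factor)."""
    return blevel + leaf_blocks * selectivity + clustering_factor * selectivity

def full_scan_cost(table_blocks, mbrc=8):
    """Multiblock reads: the whole table, mbrc blocks per IO."""
    return table_blocks / mbrc

# Fetch 10% of a 10,000-block, 1,000,000-row table via an index whose
# clustering factor is near the row count: the full scan wins easily.
idx = index_cost(blevel=3, leaf_blocks=5000, clustering_factor=1_000_000, selectivity=0.10)
fts = full_scan_cost(10_000)
print(idx, fts)
assert fts < idx
```

Drop the selectivity to 0.001 and the comparison flips - which is exactly the crossover the optimizer is computing when it picks between the index and the full scan.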

Tuning???

A reader, December 17, 2003 - 2:56 pm UTC

Tom,

1. If the CBO is capable of deciding the right path when the tables and indexes/indexed columns are analyzed properly, then what is it that we need to do to tune a query? How do we conclude that our query is not optimal? Is it by looking at the logical I/Os?
2. In some of your earlier discussions you said that the cost and the efficiency of a query are not inter-related, but here you say the cost is also important/relevant. Can you explain/clarify this?
3. If Card= and Bytes= are also relevant, what should we focus on in our queries with respect to these figures?

Tom Kyte
December 18, 2003 - 9:13 am UTC

1) tune processes and applications.  

It goes like this:

user: "system is slow"
you:  "ok, the entire thing, an application, a screen, what is slow?"
user: "oh, ok -- this application is slow"
you:  "ok, the entire thing, a part of it, a specific feature?"
user: "oh, ok -- when I push this button it takes a long time, I push that button 
       alot"

you: "ok, we'll get on it"

now you have instrumented your code, so you run say a tkprof on that section of that application.  You look for OBVIOUS low hanging fruit (i "grep ^total" in the tkprof and look for "big" totals -- things taking a long time).  You look at them.

You either decide "boy, that query does a ton of work -- can we do it better" or even more optimally "boy, we got that algorithm wrong didn't we.  who did this row by row stuff -- look, this entire process should be this "update" statement. "

2) cost is relevant, it helps you understand WHY the optimizer is doing what it does.  without it, we'd never be able to understand "why"

3) you should make sure they are in line with reality.  Small example:


ops$tkyte@ORA920> create table t as select * from all_objects;
 
Table created.
 
ops$tkyte@ORA920> create index t_idx on t(object_id);
 
Index created.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> analyze table t compute statistics
  2  for table
  3  for all indexes;
 
Table analyzed.
 
ops$tkyte@ORA920>
ops$tkyte@ORA920> set autotrace traceonly explain
ops$tkyte@ORA920> select * from t where object_id = 55;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=316 Bytes=30336)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=3 Card=316 Bytes=30336)
   2    1     INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=126)
 
 
<b>Hmmm, the optimizer thinks 316 rows -- that is OBVIOUSLY wrong, for we know that object_id is near unique!!  something is amiss.  garbage in, garbage out.  We need to figure out what is wrong -- oh, we didn't actually take a good look at the column data -- lets do that:</b>
 
ops$tkyte@ORA920> analyze table t compute statistics
  2  for table
  3  for all indexes
  4  for all indexed columns;
 
Table analyzed.
 
ops$tkyte@ORA920> select * from t where object_id = 55;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3 Card=1 Bytes=96)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=3 Card=1 Bytes=96)
   2    1     INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=2 Card=1)
 
 
 
ops$tkyte@ORA920> set autotrace off

<b>Ahh, now it got it right.  Good data = better plans.</b>

If the card= is way off from reality and the plan is bad, there is a major problem there -- and the garbage data (or insufficient data) provided to the optimizer is the cause.
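As an aside, on 9i and later the supported way to gather these statistics is the DBMS_STATS package rather than ANALYZE.  A rough sketch of the equivalent of the "for all indexed columns" analyze above might look like this (the method_opt value is one reasonable choice, not the only one):

```sql
begin
   dbms_stats.gather_table_stats(
      ownname    => user,
      tabname    => 'T',
      cascade    => true,                               -- gather index stats too
      method_opt => 'for all indexed columns size auto' -- column-level stats/histograms
   );
end;
/
```

After gathering, re-run the autotrace and check that the Card= estimates now line up with what you know about the data.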

Execution Plan vs Explain Plan

A reader, December 19, 2003 - 3:55 pm UTC

Tom,

Can you pls. provide me an example of what the Card value in the Execution plan vs. Explain plan means. My understanding is that the Execution Plan provides the actual cardinality whereas the Explain plan provides the estimated cardinality.
1. Is my understanding correct? It would be really helpful if you could provide me an example to illustrate this.
2. Pls. provide me info. on where to find the documentation for the 10053 event. I'd like to have a very introductory/basic understanding of what this event means, how to arrive at costs of various access paths, how to interpret the 10053 results, etc. I was trying to understand some materials by Wolfgang Breitling but they look too advanced to me. Please point me to very basic materials on this

Thanx

Tom Kyte
December 20, 2003 - 9:33 am UTC

only if you tell me what the difference between an "explain" and an "execution" plan is?

to me -- they are the same.

There is an explain plan that shows the execution plan.

In an explain plan, we can see estimated cardinalities, the number of rows the optimizer is expecting.

In a tkprof report, we can see the execution plan. This time, the ACTUAL number of rows OBSERVED flowing through each phase of the plan is viewable.


Metalink has documents on 10053.

As well, my new book "effective oracle by design" has a section on it. I did it "simple" I think.
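For reference, the usual way to produce a 10053 trace is to enable the event in your session and then have the statement of interest parsed; the trace file appears in user_dump_dest.  A minimal sketch (the SELECT is just a placeholder for your own statement):

```sql
alter session set events '10053 trace name context forever, level 1';

explain plan for select * from t where object_id = 55;  -- statement to be costed

alter session set events '10053 trace name context off';
```

The resulting trace lists the statistics the optimizer used and the costs it computed for each access path it considered.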

AUTOTRACE

A reader, December 21, 2003 - 6:44 pm UTC

Tom,

I was going through your new book 'Effective Oracle by Design' where you discuss AUTOTRACE. In the AUTOTRACE output, you have two sections: the Execution Plan section and the Statistics section:
1. Is the execution plan section equivalent to the EXPLAIN PLAN, and does the Card= value show only the estimated cardinality?
2. Does 'Rows Processed' in the Statistics section show the actual number of rows processed?
3. If I do not see Cost=, Card= etc. on the explain plan, does that mean that the CBO is not being used?
Is my understanding on 1, 2 and 3 correct? If not, please explain with an example.
4. If I take a TKPROF report, where do I see the estimated rows and the actual rows processed?

THanks

Tom Kyte
December 21, 2003 - 6:59 pm UTC

1) yes

2) yes

3) yes

4) you do not see "estimated" (that's why I often ask for AUTOTRACE *and* TKPROF, so I can see "actual" vs "estimated"). autotrace = estimated. tkprof = "what actually happened".

you see the actual rows processed right in the tkprof -- edit one and it'll just be staring you in the face.

Table not analyzed

A reader, December 21, 2003 - 7:27 pm UTC

Tom,

I have now clearly understood what each section means. So my question on this is:
If the Card= value from the Autotrace Execution Plan section (question 1 above) has a huge disparity with the 'rows processed' value in the Statistics section, then:
1. Does it mean the data has not been properly analyzed?
2. Does it also mean that my query is not properly tuned?

Thanks


Tom Kyte
December 21, 2003 - 7:53 pm UTC

it COULD mean #1. It definitely means "something is wrong".

it could be insufficient stats (eg: add histograms and all of a sudden the optimizer "knows more").

it could be a "product issue" (aka: a bug)

it definitely does not imply or mean #2!

Pls. clarify

A reader, December 21, 2003 - 10:47 pm UTC

Tom,

In one of the discussion title "Physical Reads vs Logical Reads" where you compare colocated vs. disorganized tables, you are stating the following :

Now, going higher on the array size (this is the sqlplus default size):

ops$tkyte@ORA920.LOCALHOST> set arraysize 15
ops$tkyte@ORA920.LOCALHOST> select * from colocated where x > 0;

29319 rows selected.

Statistics
----------------------------------------------------------
...
5047 consistent gets
...
29319 rows processed

ops$tkyte@ORA920.LOCALHOST> select * from disorganized where x > 0;

29319 rows selected.

Statistics
----------------------------------------------------------
...
31325 consistent gets
...
29319 rows processed

the colocated table is now a fraction of the LIO's. We did 15 row array
fetches, meaning we did about 2,000 "fetch" calls. Apparently we did about 2.5
LIO's per fetch. How so? Well we did this:

- get index block (1 LIO)
- get row 1 from table for this index block (1 LIO for block X)
- get rows 2..15 from table for this index block (maybe 0 LIO's - all on block X, maybe 1 additional LIO cause we just read all of the rows on block X)

So, we do 2-3 LIO's for every 15 rows -- 5k LIOS.

In the above explanation, could you pls. explain me as to what you mean by
"- get rows 2..15 from table for this index block (maybe 0 LIO's - all on block X, maybe 1 additional LIO cause we just read all of the rows on block X)"

2. In your book Expert-one-on-one Oracle, you mention the following (page 279)
<Quote>
Suppose you have a table where the rows have a primary key populated by a sequence. As data is added to the table, rows with sequential sequence numbers are in general 'next' to each other. The table is naturally clustered, in order, by the primary key (since the data is added in more or less that order). It will not be strictly clustered in order by the key of course (we would have to use an IOT to achieve that) but in general....
</Quote>
In the above para, can you pls clarify this with an example as to what you mean by :
"It will not be strictly clustered in order by the key of course (we would have to use an IOT to achieve that)"

3. What is meant by naturally clustered vs. automatically clustered vs. uniformly distributed?

Tom Kyte
December 22, 2003 - 9:06 am UTC

What I was saying is -- if the rows are next to each other on the block -- then the first row we fetched will perform the logical IO to retrieve the block from the buffer cache. Rows 2-15 will be read from that same block -- no need to go back to the buffer cache for the next row, we already have it. Hence, we do not need to do any more LIO's to get the rows from the table. If the 2nd row WAS NOT next to the first row on that block -- we would have discarded that block and done another LIO to get row 2.


I need no example really, you just need to know that rows will go in where they fit. This is typically at the "end" of the table. The rows will NOT be in order of the sequence -- they will be CLOSE, but they will not be in order of the sequence. (Think of the session that retrieves a sequence.nextval -- waits 10 seconds -- then inserts the row. In that 10 seconds, perhaps 100 other rows were inserted using sequence.nextval. This row will be "100 away" from its prior/next values.) Unless you use an IOT or some other structure that enforces placement of data -- you are using a HEAP table, and in a HEAP, rows just go where they fit.

Naturally clustered means that due to the NATURE of the way the data is inserted, the data will "naturally" (as a side effect) be ordered nicely. If you insert 1, 2, 3, 4, 5, 6, 7, 8, .... infinity into a HEAP table, in order, most of the time you'll observe the data is physically organized that way. It "naturally" did it.

the other two terms, not familiar with them.




How to determine if data colocated

A reader, January 06, 2004 - 12:14 pm UTC

Tom,

In the case of heap tables, we can quickly determine whether a column X populated by a sequence generator is naturally clustered (or at least in close proximity) or not by looking at the block number of the rowid using the DBMS_ROWID package. However, since IOTs do not have conventional ROWIDs, how can we determine that the data is actually clustered in the case of an IOT?

Thanks much

Tom Kyte
January 06, 2004 - 2:08 pm UTC

well, the dbms_rowid thingy isn't very "scalable" is it?  I mean, you have to look back a row and forward a row for each row.  How do you do that for, say, 100,000 rows?

In an IOT, the data is ALWAYS clustered by the primary key (that's the point of the structure entirely!).  It is the other columns that might be clustered, and for that I would use analyze -- just like I would for a "real" table.

ops$tkyte@ORA920PC> create table t ( x int primary key, y int, z int, data char(80) default 'x' )
  2  organization index;
 
Table created.
 
ops$tkyte@ORA920PC> insert into t select rownum, rownum, dbms_random.random, 'x' from all_objects;
 
30197 rows created.
 
ops$tkyte@ORA920PC> create index t_idx1 on t(y);
 
Index created.
 
ops$tkyte@ORA920PC> create index t_idx2 on t(z);
 
Index created.
 
ops$tkyte@ORA920PC> analyze table t compute statistics for table for all indexes;
 
Table analyzed.
 
ops$tkyte@ORA920PC> select index_name, clustering_factor from user_indexes where table_name = 'T';
 
INDEX_NAME                     CLUSTERING_FACTOR
------------------------------ -----------------
SYS_IOT_TOP_42421                              0
T_IDX1                                       400
T_IDX2                                     30146
 
 

Row proximity

A reader, January 06, 2004 - 8:05 pm UTC

Tom,

In a heap table what would be then the best way to determine if a table is naturally clustered other than the query using DBMS_ROWID package as per my question above?

Tom Kyte
January 06, 2004 - 8:43 pm UTC

tell me how you would tell with dbms_rowid in the first place -- for a non-trivially sized table!

analyze the index (if you don't have an index, you don't care) and look at the clustering factor.
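Concretely, that check might look like this -- a small sketch, assuming a table T with an index T_PK on the sequence-populated column (both names hypothetical).  A clustering factor near the number of table blocks means "well clustered"; near the number of rows means "scattered":

```sql
analyze index t_pk compute statistics;

select index_name, clustering_factor, num_rows
  from user_indexes
 where table_name = 'T';

-- compare clustering_factor against the table's block count
select blocks, num_rows
  from user_tables
 where table_name = 'T';
```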

can we ignore the exec plan?

steve, January 30, 2004 - 4:08 pm UTC

Hi Tom,

If we should count on the CBO to find the best execution plan,
1) should we care about the plan while tuning the sql?
2) can we just focus on statistics generated by SQL_TRACE such as LIO, PIO, or WAIT counts and ignore the plan while tuning the sql?

Thanks!

Steve


Tom Kyte
January 30, 2004 - 8:06 pm UTC

1) yes
2) no

the cbo is a mathematical model. It takes inputs -- generates outputs. You need to sometimes look at a plan and say "gee, that's not right", then hint to see if we are right and it is wrong, and if so -- look to see why it might be so; perhaps optimizer_index_caching or optimizer_index_cost_adj needs to be tweaked.
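A minimal sketch of that kind of experiment -- the table, index name, and parameter values here are hypothetical; you would substitute your own query and measure both versions with autotrace or tkprof:

```sql
-- 1) see what the optimizer picked on its own
set autotrace traceonly explain
select * from t where object_id = 55;

-- 2) force the plan you believe is better and compare the actual work done
select /*+ index( t t_idx ) */ * from t where object_id = 55;

-- 3) if the hinted plan really wins consistently, consider describing your
--    caching/IO reality to the optimizer instead of hinting every query
alter session set optimizer_index_caching  = 90;
alter session set optimizer_index_cost_adj = 25;
```

The point is to use the hint as a diagnostic, not as the permanent fix.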

Have you heard of this ... ?

Gabe, February 13, 2004 - 1:18 pm UTC

Tom,

Can you please express your opinion regarding the following approach to “performance tuning”?

Background:
Big Java/Oracle web-based implementation for a mission-critical application (core functionality for the business) … RUP methodology (tons of “artifacts”, lots of them out of date by now) … brand new data model/architecture … large data migration from legacy which they only started to analyze (build use-cases) … project late … dictatorial (alternative opinions not tolerated) … sweat-shop. Extensive use of pl/sql (a good architectural decision) … all access is supposed to be through the pl/sql layer … but performance tuning was perceived to be just another effort at the end of the project.

Task (in the name of performance tuning):
I was asked to extract a unique (???) list of all the selects (no updates or deletes) from the database, run explain plan for all of them and produce a list of top-n by cost.

Note: the Selects are to be extracted from the pl/sql code, view definitions (????), etc. … not captured with sql_trace or from v$sql or from v$sql_plan or whatever (the application is not very stable and quite buggy at this time).

I think it is a complete waste of time because:
1. The volume of data in the test database is only a fraction of the real data after migration (half of tables are actually empty)
2. There is no assurance the environment is isolated
3. Short of writing a parser … I have to manually extract and edit the sql (packaged functions not listed in the package spec used in the select list, convert pl/sql variables to actual values or placeholders for bind variables, remove the into clause, etc.) … the code is still changing and hence there is no assurance the statements I’m extracting will actually be there later on … unless I’m automating the whole thing the process is not repeatable (and I was only given 2-3 days to go through all the code … +125K lines in user_source)
4. etc.

Am I missing something? My understanding is that the cost is a formula of a cpu cost and io cost calculated at the time the explain plan is executed … the cost of one sql doesn’t have anything to do with the cost of another sql … so, all this comparison of hundreds and hundreds of painfully extracted (and doctored) sql statements means (imho) … zilch, nada. As I was going through the code I did identify a few bugs … so the byproducts of this effort could be positive … but the whole thing seems to me totally dumb.

Thanks

Tom Kyte
February 13, 2004 - 1:46 pm UTC

well, forgetting for a moment that "explain plan lies" and "tkprof/v$sql_plan can only tell the truth" (and would be tons easier than extracting sql from plsql....)

and forgetting for a moment that the cost of a query only indirectly relates to its performance (it should be direct, but hey -- reality -- it is not)

and forgetting that unless you have stats that represent reality (and you seem not) the cost is just a random number.....


Nah, this is a total and utter waste of time. You must

a) get a stable, not so buggy thing
b) trace the "slow parts"
c) fix their algorithms and or sql (sql isn't always the culprit, it is as often, if not more often the algorithms employed that are the issue)


This is a really "bad idea"

Question on tuning

PRS, February 13, 2004 - 2:20 pm UTC

I have following query.
SELECT NVL (SUM (DECODE (rc_status, 'NEW', 1, 0)), 0)
,NVL (SUM (DECODE (rc_status, 'PEND', 1, 0)), 0)
FROM ps_rc_case a, ps_wsi_rccase_flds b
WHERE a.case_id = b.case_id
AND a.business_unit = b.business_unit
AND b.wsi_sub_assign_to = '50003540'
AND a.rc_status IN ('NEW', 'PEND')

Tables ps_rc_case and ps_wsi_rccase_flds have 7.5 million rows each. It is using the right index, but this query takes 13 seconds, which is not acceptable for an online application.
We have ORACLE 9.2.0.4.0 on Sun Solaris 9 (E4500). Each index has 1924 extents on an LMT defined with a 4MB uniform size. Any idea will be appreciated as we want a less-than-a-second turnaround time for this.
Following is the trace with explain plan.
NVL(SUM(DECODE(RC_STATUS,'NEW',1,0)),0) NVL(SUM(DECODE(RC_STATUS,'PEND',1,0)),0)
--------------------------------------- ----------------------------------------
3 6


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=3740 Card=1 Bytes=33
)

1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=3740 Card=12046 Bytes=397518)
3 2 INDEX (RANGE SCAN) OF 'PSEWSI_RCCASE_FLDS' (NON-UNIQUE
) (Cost=308 Card=88831 Bytes=1332465)

4 2 INDEX (FAST FULL SCAN) OF 'PS7RC_CASE' (NON-UNIQUE) (C
ost=2934 Card=1011449 Bytes=18206082)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
30965 consistent gets
0 physical reads
0 redo size
301 bytes sent via SQL*Net to client
275 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


Tom Kyte
February 13, 2004 - 2:57 pm UTC

well, hows about you tell us

a) what indexes go with what tables
b) what indexes in their entirety (names, list of columns, and the table to which they belong) exist to be used....


c) AND b.wsi_sub_assign_to = '50003540' -- is that a constant for real or a bind variable? if a constant -- how many rows match that. if a bind, what is


select min( count(*) ), max(count(*)), avg(count(*))
from ps_wsi_rccase_flds
group by wsi_sub_assign_to

d) what is select count(*) from the other table where a.rc_status IN ('NEW', 'PEND')

Answers to the questions you asked

PRS, February 13, 2004 - 3:43 pm UTC

Tom,
Following are the answers for the questions you asked.

a. Table PS_RC_CASE use the index PS7RC_CASE. Table PS_WSI_RCCASE_FLDS uses the index PSEWSI_RCCASE_FLDS.

b. Index PS7RC_CASE is defined on columns CASE_ID,BUSINESS_UNIT,RC_STATUS. Index PSEWSI_RCCASE_FLDS defined on columns wsi_sub_assign_to,case_id, business_unit.

Statistics for both the index shown below.
For Index PS7RC_CASE on PS_RC_CASE
select * from dba_indexes where index_name = 'PS7RC_CASE'
LEAF_BLOCKS30496 DISTINCT_KEYS 7561526
AVG_LEAF_BLOCKS_PER_KEY 1
AVG_DATA_BLOCKS_PER_KEY 1
CLUSTERING_FACTOR420853 NUM_ROWS7561526 SAMPLE_SIZE 7561526
LAST_ANALYZED 2/13/2004 3:07:50 PM
For Index psewsi_rccase_flds on PS_wsi_RCCASE_flds
LEAF_BLOCKS 26316
DISTINCT_KEYS 7561525
AVG_LEAF_BLOCKS_PER_KEY 1
AVG_DATA_BLOCKS_PER_KEY 1
CLUSTERING_FACTOR5837952 NUM_ROWS7561525 SAMPLE_SIZE 7561525
LAST_ANALYZED 2/13/2004 1:59:49 PM

c. It is a bind variable. Following is the query result.
select min( count(*) ), max(count(*)), avg(count(*))
from ps_wsi_rccase_flds
group by wsi_sub_assign_to
MIN(COUNT(*)) MAX(COUNT(*)) AVG(COUNT(*))
------------- ------------- ---------------
1 7477302 72014.533

D. The counts are as under for the query you asked.
SELECT COUNT(*)
FROM PS_RC_CASE WHERE
rc_status IN ('NEW', 'PEND')
COUNT(*)
--------
6929

Any help is appreciated.
Thanks,
PRS

Tom Kyte
February 13, 2004 - 4:10 pm UTC

well, not much is going to help this puppy (except if you only go after the count(*)'s = 1 that is!)

with an average of 72k rows per wsi_sub_assign_to value, well, it's got tons of stuff to sift through there, doesn't it? Or 7,000 index probes (if we indexed differently) from ps_rc_case into the other table.

Either way, it won't be brilliantly fast -- it is pure "math" at work here.


We don't want to drive from B into A (72k rows is too big). We'd have to drive from A into B -- but even then, it is nasty with 7k index probes (if you indexed case_id,business_unit,wsi_sub_assign_to in ps_wsi_rccase_flds in that order (well, the last field being last is the relevant one) it could index probe but I see that as being a SLOWER alternative to what you have probably)


sorry -- but without changing the underlying structures, setting them up specifically to answer this sort of question, I do not see you getting there from here. Think about (after looking at the above numbers) what work must be done in order to get that very simple looking row.

We either find 72k rows in table B and then go to table A row by row
or
we find 7k rows in table A and then go to table B row by row
or
we do what it is doing, which is to go to both and slam them together in bulk (most likely the fastest).


You can try this:

index on a(rc_status,business_unit,case_id)
index on b(business_unit,case_id,wsi_sub_assign_to)

it can then use these two indexes to answer the question -- just a thought. But look at the data, use your knowledge of it to answer the question "what could it do to do this fast?" If you come up with an idea, let us know and I'll try to show you the way to the sql that does that.




Howto explain plan sql with type casting ?

A reader, February 16, 2004 - 6:43 pm UTC

(Oracle 9.2)

I have a statement like ...

select * from t where id in (SELECT * FROM TABLE(CAST(arr AS NUMBER_ARRAY)));

How do I run an explain plan on it?

I even took the _transformed_ sql from v$sqltext:

SELECT * from t where id = (SELECT * FROM TABLE(CAST(:b1 AS NUMBER_ARRAY)))

but still ...

abc@abc> explain plan for
2 SELECT * from t where id = (SELECT * FROM TABLE(CAST(:b1 AS NUMBER_ARRAY)));
SELECT * from t where id = (SELECT * FROM TABLE(CAST(:b1 AS NUMBER_ARRAY)))
*
ERROR at line 2:
ORA-00932: inconsistent datatypes: expected - got CHAR

Thank you.



Tom Kyte
February 17, 2004 - 8:02 am UTC

if you want to use explain plan -- which assumes ALL binds are character strings regardless -- you'll have to put in a hard coded value there:


ops$tkyte@ORA920PC> explain plan for
  2  select *
  3    from t
  4   where id = (select * from table( cast( <b>number_array(1,2,3)</b> as number_array ) ))
  5  /
 
Explained.
 
ops$tkyte@ORA920PC> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------
 
-------------------------------------------------------------------------------
| Id  | Operation                               |  Name        |Rows|Bytes|Cst|
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |              |  1 | 100 |  2|
|   1 |  TABLE ACCESS BY INDEX ROWID            | T            |  1 | 100 |  2|
|*  2 |   INDEX UNIQUE SCAN                     | SYS_C007505  |100 |     |  1|
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|              |    |     |   |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("T"."ID"= (SELECT /*+ */ VALUE(KOKBF$) FROM
              TABLE(CAST("NUMBER_ARRAY"(1,2,3) AS "NUMBER_ARRAY") ) "KOKBF$"))
 
Note: cpu costing is off
 
17 rows selected.
 

Howto explain plan sql with type casting ?

A reader, February 17, 2004 - 9:20 am UTC

I did try a to_number(:c) ... guess that did not get automagically cast to a number_array with one element.

Thank you again.

question

PRS, February 17, 2004 - 1:47 pm UTC

Hi Tom,
   I have the following query. It does a full table scan on one of the tables in spite of having an index. There are only 63 rows. Looks like the oracle optimizer is behaving really oddly.

Query:
SQL> l
  1  SELECT SUM(CLAIM_AMOUNT)
  2    FROM PS_RCFCASE_COMPL A,
  3         PS_WSI_RCCASE_FLDS B
  4   WHERE A.CASE_ID = B.CASE_ID
  5     and a.business_unit = b.business_unit
  7*    AND B.WSI_SYSTM_DWNTM_ID = '91'
SQL> /

SUM(CLAIM_AMOUNT)
-----------------
        -37228.56


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=13577 Card=1 Bytes=2
          9)

   1    0   SORT (AGGREGATE)
   2    1     HASH JOIN (Cost=13577 Card=101202 Bytes=2934858)
   3    2       INDEX (FAST FULL SCAN) OF 'PSDWSI_RCCASE_FLDS' (NON-UN
          IQUE) (Cost=2525 Card=100952 Bytes=1514280)

   4    2       TABLE ACCESS (FULL) OF 'PS_RCFCASE_COMPL' (Cost=8016 C
          ard=7580277 Bytes=106123878)





Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
     110131  consistent gets
      82882  physical reads
       5400  redo size
        209  bytes sent via SQL*Net to client
        275  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed


   Indexes defined as under.
   Table Name PS_WSI_RCCASE_FLDS - 
   Index Name - PSDWSI_RCCASE_FLDS  
   COlumn names - (CASE_ID,BUSINESS_UNIT,WSI_SYSTM_DWNTM_ID)

   Table Name - PS_RCFCASE_COMPL
   Index Name - PS_RCFCASE_COMPL
   Column names -(CASE_ID,BUSINESS_UNIT)
  It is not using the index on PS_RCFCASE_COMPL. Both tables have 7.5 million rows. Index and tables are analyzed -- indexes 100% analyzed and tables 35% analyzed.

   This query takes 2 minutes to come back. It is just the oracle optimizer not making use of an index.

Any help is appreciated. 

Tom Kyte
February 17, 2004 - 3:04 pm UTC

why would you want it to use the index? did you want it to run *really truly slow*???


tell me -- what index do you believe would be useful here and more importantly -- WHY?


I see this query:

1 SELECT SUM(CLAIM_AMOUNT)
2 FROM PS_RCFCASE_COMPL A,
3 PS_WSI_RCCASE_FLDS B
4 WHERE A.CASE_ID = B.CASE_ID
5 and a.business_unit = b.business_unit
7* AND B.WSI_SYSTM_DWNTM_ID = '91'


Ok, either A or B could be used to "drive" the query.


So, if we use A, we can index into B but we would have to index into B 7,580,277 times!!! (since there is no predicate on A, every row in A has to be considered, we'd have to look into B's index 7.5 MILLION times. Not very efficient)

If we choose B to drive, the optimizer is thinking "after predicate of ..._id = '91' is applied" there will be 100,952 rows that we would have to perform an index range scan on into A (3 LIO's to read the index plus one table access by index rowid -- 400,000 IO's for that).


So, are these numbers accurate?
If you "hint it" to use the plan you think is much much better -- is it?
If it is, have you looked at optimizer_index_* init.ora parameters (search for them on this site to read about them)


Answer

PRS, February 17, 2004 - 3:46 pm UTC

Tom,
I changed the same query in the following way and it comes back in less than a second. Also, I created index PSDWSI_RCCASE_FLDS only on the WSI_SYSTM_DWNTM_ID column.

SELECT /*+ ordered index(b PSDWSI_RCCASE_FLDS) use_nl(a) */
SUM(CLAIM_AMOUNT)
FROM PS_WSI_RCCASE_FLDS B, PS_RCFCASE_COMPL A
WHERE A.CASE_ID = B.CASE_ID
and a.business_unit = b.business_unit
AND B.WSI_SYSTM_DWNTM_ID = '91'

Basically it was doing a FAST FULL SCAN on index
PSDWSI_RCCASE_FLDS, which is almost the same as a full table scan. So I just rebuilt the index with only one column and it worked fine. Also, I used hints.

Thanks,
PRS

Tom Kyte
February 17, 2004 - 6:57 pm UTC

without the hints, what happens? You changed the schema -- the first schema perhaps wasn't "right" for using the index.

I'd like to totally LOSE The hints.

questions

PRS, February 18, 2004 - 9:15 am UTC

Hi Tom,
I also do not like to use hints, but what option do I have here? ORACLE does not provide any other option. The tables are designed this way. We have a good machine: a 4 CPU Sun E4500 with 8GB RAM. Why is Oracle doing hash joins and the whole thing goes for a toss? This is a simple data volume with 7.5 million rows in each table. ORACLE claims that its CBO is the best, but in some cases I really doubt it. It is a nightmare sometimes to tune some of the queries. You are left with no option. This is just my perception, specially in using the HINTS. I hate using hints, but I have no option here. What do you think?

Always appreciate your input. You are the GURU. But sometime this CBO is horrible.

Thanks,
PRS

Tom Kyte
February 18, 2004 - 8:53 pm UTC

you changed the index -- what is the plan WITHOUT hints using autotrace with the NEW SCHEMA you have.

forget the old stuff you posted, quite simply "not relevant". you changed the entire situation. we are back at square "uno" here -- forget everything that was said before, you changed it ALL.

slow query in stored procedure but very fast as single SQL

A reader, February 23, 2004 - 2:41 pm UTC

Hi

I have a query inside a procedure; the procedure receives a parameter and passes that parameter to the query inside as a condition. The thing is, the query by itself runs very fast (instantaneously) no matter if I use bind variables or literals, but inside the procedure it takes 5 minutes!!! I don't understand why this is; I thought queries like this inside procedures could be simulated by using bind variables such as

var x number
exec :x=7818654

query here
where phone=:x

But NO, it does not produce the same effect. I am forced to execute the procedure every time I want to test a new HINT!!! How so? Any workaround? Expected behaviour? (hope not... if that's the case we might as well change rdbms!)



Tom Kyte
February 23, 2004 - 4:56 pm UTC

can you show us

a) a tkprof from sqlplus (showing fast)
b) a tkprof from plsql (showing slow)

to start with.

here is the tkprof

A reader, February 24, 2004 - 6:25 am UTC

query:

in sqlplus
var x varchar2(24)
exec :x := 'AA'

here it's fast because it uses the predicate ph.phone = :x

SELECT
be.pa_segment_mkt, bc.SEGMENT, co.customer_type,
be.pa_segment_org
FROM sysadm.ps_cm_phone ph,
sysadm.ps_bo_cm_profl_dtl d,
sysadm.ps_bo_cm_profile x1,
sysadm.ps_bo_rel a,
sysadm.ps_bo_rel_cat_item b,
sysadm.ps_bo_rel_cm c,
sysadm.ps_pa_bc_ext be,
sysadm.ps_bc bc,
sysadm.ps_rd_company co
WHERE ph.phone = :x
AND a.rel_type_id = b.rel_type_id
AND a.start_dt <=
TO_DATE (TO_CHAR (SYSDATE, :"SYS_B_0"),
:"SYS_B_1"
)
AND a.end_dt >
TO_DATE (TO_CHAR (SYSDATE, :"SYS_B_2"),
:"SYS_B_3"
)
AND b.bo_rel_category_id IN (:"SYS_B_4", :"SYS_B_5")
AND a.rel_type_id = c.rel_type_id
AND a.bo_id_1 = c.bo_id_1
AND a.role_type_id_1 = c.role_type_id_1
AND a.bo_id_2 = c.bo_id_2
AND a.role_type_id_2 = c.role_type_id_2
AND c.bo_id = d.bo_id
AND c.profile_cm_seq = d.profile_cm_seq
AND x1.end_dt >
TO_DATE (TO_CHAR (SYSDATE, :"SYS_B_6"),
:"SYS_B_7"
)
AND x1.bo_id = d.bo_id
AND x1.cm_type_id = d.cm_type_id
AND x1.bo_id = a.bo_id_1
AND d.eff_status = :"SYS_B_8"
AND d.cm_id = ph.cm_id
AND a.bo_id_2 = bc.bo_id
AND c.bo_id = be.bo_id
AND c.bo_id = co.bo_id(+)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.01 0.00 0 3 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.02 0.00 0 3 0 0


in pl/sql

CAN'T PASTE, it has been running for over one hour already and not finishing. The problem is that in PL/SQL it does not use ph.phone = :x!!!



Tom Kyte
February 24, 2004 - 8:21 am UTC

I don't see the plan, the plan is what is important -- you need to exit sqlplus before running tkprof.

and try adding /*+ first_rows */ to the plsql based one, see what happens there.


does this query really get zero rows?? even in the plsql ?

To "A reader"

Kim Berg Hansen, February 24, 2004 - 9:29 am UTC

Hi, "A reader"

In your last sentence you say "ph.phone=phone"...

Does that mean that you have a PL/SQL variable or argument with the same name as a column in your table?
That could be the cause -- if so, rename the variable/argument.
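To make the trap concrete, here is a hypothetical sketch (procedure and column names invented for illustration): inside the SQL statement, an unqualified reference to PHONE resolves to the table column, not the parameter, so the predicate silently becomes "ph.phone = ph.phone" -- true for every non-null row, hence the full-scan-like behavior:

```sql
create or replace procedure find_customer( phone in varchar2 )  -- same name as the column!
as
   l_cnt number;
begin
   -- BAD: "phone" on the right resolves to the COLUMN; the parameter is shadowed,
   -- so this counts every row with a non-null phone
   select count(*) into l_cnt
     from ps_cm_phone ph
    where ph.phone = phone;

   -- FIX: rename the parameter (e.g. p_phone) or qualify it (find_customer.phone)
end;
/
```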



Ideal Value of COST

Dhrubo, April 13, 2004 - 7:33 am UTC

Hi Tom,
Is there anything called an "ideal value of COST"? Is this COST a function of CPU COST & IO COST? Please explain

Tom Kyte
April 13, 2004 - 8:17 am UTC

there is no such concept as the ideal value of cost.

the cost is dependent on cpu costing only when enabled in 9i (or by 'accident' sometimes -- that is a product "issue") and is on in 10g. In 9i and before, the cost is predominantly IO based -- in 10g and beyond, cpu and io based.

Reading Explain plan

parag jayant patankar, May 12, 2004 - 7:27 am UTC

Hi Tom,

I have gone thru your explanation of reading explain plan using tree structure. It is very clear. That is great.

Tom, will you please explain the following explain plan operations in simple language for a novice person like me

INDEX UNIQUE SCAN
INDEX RANGE SCAN
NESTED LOOPS
NESTED LOOPS OUTER
HASH JOIN
MERGE JOIN
FILTER
VIEW
REMOTE
SORT ORDER BY
SORT GROUP BY
SORT JOIN

Once I understand the concepts, I can read and do further study of this.

thanks & regards

Tom Kyte
May 12, 2004 - 7:57 am UTC

if you have my book "Effective Oracle by Design", check that out -- i do many of them there.

Else, well not else, in addition read the performance guide (on otn), it covers them as well.

10g

SR, May 21, 2004 - 2:32 am UTC

Tom

The advent of 10g and the associated hype brought about tremendous confusion in the DBA community. Updating oneself has never been a problem, but when claims are made about some radical changes and absolute u-turns, the mayhem overshadows the eagerness to learn. Putting it simply, these are some of my queries:

1. Should I change the TUNING methodologies I have been adopting so far? How long will tools like tkprof (a favorite of yours too) retain their present relevance?

2.Should I start THINKING differently when I approach a 10g database to tune it PROACTIVELY?

3.Should I start abandoning the classical administrative interfaces like normal SQL prompt and start using OEM for almost all operations?


You have to please help us out.

Tom Kyte
May 21, 2004 - 10:46 am UTC

please elaborate -- what are the radical changes and absolute u-turns you speak of? I see tools that help us visualize a system, tools that help tune at the system level -- but as we really know, 99.999% of performance will come from the proper design and implementation of our applications -- no silver bullets there. If you build an application that doesn't use bind variables -- cursor sharing can "help", but it won't fix everything. session cached cursors can further help, but only a little. At the end of the day, the only thing that will fix it is -- well -- fixing the bug in the code.

1) tkprof, sql_trace, 10046 level traces are far from gone.

The database will tune:

for x in ( select * from t )
loop
    insert into t2 values ( x.x, x.y, x.z );
end loop;

by adding array fetches for us, making the select * from t go as fast as it could -- but it will NEVER turn that into:

insert /*+ append */ into t2 select * from t;

which will *blow away* any procedural code you write.


1) they are not going away anytime soon. in fact, AWR is just statspack on steroids. Useful for a "system" after the apps have been tuned up; not so useful for tuning an application (a 10046 trace with tkprof does that)

2) if you haven't been proactively tuning, you've been doing it wrong all along. TUNE is a 4 letter word (curse word). DESIGN is a 6 letter word (good word).

3) no way.

SR, May 24, 2004 - 1:28 am UTC

Thanks a million, Tom - That cleared all the clouds !

Thanks, but am I interpreting this right

A reader, June 01, 2004 - 6:14 am UTC

with a explain such as the one you used above

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | |
| 1 | NESTED LOOPS | | |
| 2 | NESTED LOOPS | | |
| 3 | TABLE ACCESS FULL | EMP | |
| 4 | TABLE ACCESS BY INDEX ROWID| DEPT | |
|* 5 | INDEX UNIQUE SCAN | DEPT_PK | |
|* 6 | TABLE ACCESS FULL | SALGRADE | |
-------------------------------------------------------------------


I think I now understand how to read them in terms of the order performed, but I am still unsure
if I have the following right.
Does the database read the whole of EMP (storing it in memory?) and then, for every
deptno, retrieve the row from DEPT by scanning the index DEPT_PK and then looking at the DEPT table?
Each matched record it stores aside again, and once all records in EMP have been processed, it then takes the
stored result set and compares it with the SALGRADE table, outputting the results to the select.

Tom Kyte
June 01, 2004 - 8:44 am UTC

a query like that will read the first row from EMP, find the deptno, look up the deptno and output that record to the next level where it will fullscan salgrade looking for the match and then output that to the client application.

Then, onto the next row in emp, lookup deptno, full scan salgrade

over and over.
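That row-at-a-time flow can be sketched in Python. This is purely illustrative -- the toy data and names are made up, and it only mirrors the row flow of the plan, not Oracle internals:

```python
# Illustrative sketch of the NESTED LOOPS plan discussed above.
emp = [{"ename": "SMITH", "deptno": 10}, {"ename": "JONES", "deptno": 20}]
dept_pk = {10: {"dname": "ACCOUNTING"}, 20: {"dname": "RESEARCH"}}  # stands in for the DEPT_PK index
salgrade = [{"grade": 1}, {"grade": 2}]

results = []
for e in emp:                     # TABLE ACCESS FULL of EMP drives the join
    d = dept_pk.get(e["deptno"])  # INDEX UNIQUE SCAN of DEPT_PK, once per EMP row
    if d is None:
        continue
    for s in salgrade:            # TABLE ACCESS FULL of SALGRADE repeated for every joined row
        results.append((e["ename"], d["dname"], s["grade"]))

print(len(results))  # SALGRADE is rescanned per EMP row: 2 * 2 = 4 output rows
```

The inner rescan of SALGRADE is exactly why nested loops can be so expensive when the inner step is a full scan rather than an indexed lookup.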

Oh My Goodness !!

A reader, June 02, 2004 - 3:35 am UTC

At last

after 5 or 6 years using Oracle, I think I finally understand the way it works now ! Thanks so much. It makes it clear why nested loops can be so inefficient in certain circumstances. I never realised it would full scan SALGRADE for every row !

So, for hash joins, for the following plan

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2891 Card=10 Bytes=790)
1 0 HASH JOIN (Cost=2891 Card=10 Bytes=790)
2 1 TABLE ACCESS (FULL) OF 'I2' (Cost=963 Card=100 Bytes=2000)
3 1 HASH JOIN (Cost=1927 Card=1000 Bytes=59000)
4 3 TABLE ACCESS (FULL) OF 'I1' (Cost=963 Card=100 Bytes=2000)
5 3 TABLE ACCESS (FULL) OF 'MAP' (Cost=963 Card=100000


This is saying, Full scan I1, build a hash table in memory (or TEMP tablespace if cant fit in hash_area_size) and then full scan MAP, hash the key and probe the hash map in memory. Output all the matches to the next level, where I2 is full scanned, hashed into memory (or TEMP) and the same process happens, before all matched records are output to the client.

Is that right ?

There is no iteration, rows are only returned once all records have been processed / matched ?

Each of the tables are full scanned only once ?

Many, many thanks.



Tom Kyte
June 02, 2004 - 8:29 am UTC

correct.
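The build-then-probe flow confirmed above can be sketched in Python. Toy data and hypothetical names; the real implementation also handles spilling to TEMP, partitioning, and so on:

```python
# Illustrative hash-join sketch: build a hash table on one input,
# then probe it in a single pass over the other input.
def hash_join(build_rows, probe_rows, key):
    table = {}
    for row in build_rows:                 # build phase: one full scan
        table.setdefault(row[key], []).append(row)
    out = []
    for row in probe_rows:                 # probe phase: one full scan
        for match in table.get(row[key], []):
            out.append({**match, **row})
    return out

i1 = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
mp = [{"id": 1, "b": 10}, {"id": 1, "b": 11}, {"id": 3, "b": 12}]
joined = hash_join(i1, mp, "id")
# each input is scanned exactly once; rows are produced only as
# probe rows find matches in the in-memory hash table
```

Unlike nested loops, neither input is revisited per row of the other -- which matches the "each table full scanned only once" reading above.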

Clarification

YenYang, June 03, 2004 - 8:44 am UTC

I'm new to tuning and I'm yet to go through your book on tuning. I'm working on Oracle 9.2.
Here are the 2 statements I have executed by interchanging the tables in FROM clause.

After looking at the explain plans, all I could infer (with my little knowledge) is that Case 2 requires less overhead (Cost=1 Card=1 Bytes=51) than Case 1 (Cost=1 Card=1 Bytes=64). Is the Bytes figure what I should look at, or something else as well?

Case 1

SELECT mcb.consumer_id, mcb.routing_and_transit_number, mcb.consumer_account_number,
mcb.consumer_settlement_ref_number, bm.mem_id,
DECODE(mcb.account_type,'DDE',0,'SV',1,null),mcb.account_type
FROM cbmig_cons_bank mcb,
CBMIG_CONSUMER_CROSS_REF bm
WHERE mcb.consumer_id = bm.consumer_id
AND mcb.current_status = 'NOT_PROCESSED'
AND bm.current_status = 'PROCESSED'
AND bm.sponsor_id = 1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=115)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'CBMIG_CONSUMER_CROSS_REF'
2 1 NESTED LOOPS (Cost=1 Card=1 Bytes=115)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'CBMIG_CONS_BANK' (Cost=1 Card=1 Bytes=64)
4 3 INDEX (RANGE SCAN) OF 'CBMIG_CONS_BANK_IDX1' (NON-UNIQUE) (Cost=1 Card=1)
5 2 INDEX (RANGE SCAN) OF 'CBMIG_CONSUMER_CROSS_REF_IDX5' (NON-UNIQUE)

Case 2

SELECT mcb.consumer_id, mcb.routing_and_transit_number, mcb.consumer_account_number,
mcb.consumer_settlement_ref_number, bm.mem_id,
DECODE(mcb.account_type,'DDE',0,'SV',1,null),mcb.account_type
FROM CBMIG_CONSUMER_CROSS_REF bm,
cbmig_cons_bank mcb
WHERE bm.consumer_id = mcb.consumer_id
AND mcb.current_status = 'NOT_PROCESSED'
AND bm.current_status = 'PROCESSED'
AND bm.sponsor_id = 1


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=115)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'CBMIG_CONS_BANK'
2 1 NESTED LOOPS (Cost=1 Card=1 Bytes=115)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'CBMIG_CONSUMER_CROSS_REF' (Cost=1 Card=1 Bytes=51)
4 3 INDEX (RANGE SCAN) OF 'CBMIG_CONSUMER_CROSS_REF_IDX1' (NON-UNIQUE) (Cost=1 Card=1)
5 2 INDEX (RANGE SCAN) OF 'CBMIG_CONS_BANK_IDX1' (NON-UNIQUE)




Tom Kyte
June 03, 2004 - 1:15 pm UTC

they are the same, there is no material difference between these two.

Hash Join --Full Scan

READER, June 16, 2004 - 10:21 am UTC

In a hash join it always seems that the database does a FTS to build the hash table.
Can this hash table be built from an index instead?
Would that be more efficient?
Say we have two tables, 50 million and 1 million rows.
Going thru each and every row to build the hash table will be expensive compared to going thru the index.
Is that correct?

Thanks .

Tom Kyte
June 16, 2004 - 1:04 pm UTC

tell me how could going via an index 50 million times be MORE efficient than full scanning -- since you have to go to each row.

please keep repeating the following until you believe it:

"full scans are not evil"
"indexes are not the goal"


to process 50million rows, a full scan is going to be *brutally* efficient.

Question probably not understood

Logan Palanisamy, June 16, 2004 - 8:10 pm UTC

Tom,

You probably misunderstood the READER'S question.

The person is asking why not "fast full scan the index" to build the hash table instead of "full scanning the table".

I too feel "fast full scanning the index" may be faster than "full scanning the table" to build the hash table.



Tom Kyte
June 17, 2004 - 7:58 am UTC

i disagree.

for you see -- if a fast full scan of the index would work, we do that.  we'll gladly and quickly use an index as a table -- given that the index has all of the columns we need to process the query.

so, I doubt it was that.

ops$tkyte@ORA9IR2> create table t1 as select * from all_objects where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2> create table t2 as select * from all_objects where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index t1_idx on t1(object_id,object_name,owner);
 
Index created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.set_table_stats( user, 'T1', numrows => 10000000, numblks => 100000 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec dbms_stats.set_table_stats( user, 'T2', numrows => 10000000, numblks => 100000 );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select t1.object_id, t1.owner, t2.object_name
  2    from t1, t2
  3   where t1.object_id = t2.object_id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=25851 Card=10000000 Bytes=600000000)
   1    0   HASH JOIN (Cost=25851 Card=10000000 Bytes=600000000)
   2    1     INDEX (FAST FULL SCAN) OF 'T1_IDX' (NON-UNIQUE) (Cost=4 Card=10000000 Bytes=300000000)
   3    1     TABLE ACCESS (FULL) OF 'T2' (Cost=9619 Card=10000000 Bytes=300000000)

fast full scan used because the index is a skinnier version of the table for this query.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select t1.object_id, t1.owner, t1.status, t2.object_name
  2    from t1, t2
  3   where t1.object_id = t2.object_id;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=36638 Card=10000000 Bytes=650000000)
   1    0   HASH JOIN (Cost=36638 Card=10000000 Bytes=650000000)
   2    1     TABLE ACCESS (FULL) OF 'T2' (Cost=9619 Card=10000000 Bytes=300000000)
   3    1     TABLE ACCESS (FULL) OF 'T1' (Cost=9619 Card=10000000 Bytes=350000000)
 
no index fast full scan here because the index did not contain all of the columns we needed -- we would have had to go to the table after all, and that would be heinously slow
 
 
ops$tkyte@ORA9IR2> set autotrace off
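The rule demonstrated in that autotrace pair -- an index can be fast-full-scanned in place of its table only when it covers every referenced column -- can be sketched as a one-liner. Function and data names here are illustrative, not Oracle's:

```python
# Hedged sketch of the "index as skinny table" decision shown above.
def can_substitute_index(index_columns, needed_columns):
    # the index qualifies only if it covers every column the query
    # references from that table
    return set(needed_columns) <= set(index_columns)

t1_idx = ["object_id", "object_name", "owner"]
print(can_substitute_index(t1_idx, ["object_id", "owner"]))            # covered -> INDEX (FAST FULL SCAN)
print(can_substitute_index(t1_idx, ["object_id", "owner", "status"]))  # not covered -> TABLE ACCESS (FULL)
```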
 

Fast Full scan ...

Reader, June 17, 2004 - 7:42 am UTC

I would appreciate your views on a fast full scan vs a FTS for building the hash table on a 50 million row table.

Thanks

Tom Kyte
June 17, 2004 - 10:19 am UTC

see above, it just happens.

Why this

Goh Seong Hin, June 18, 2004 - 12:24 am UTC

Dear Tom,

If I am not mistaken, the optimizer will use the index if you add this line (and t1.object_id < 1000) to your previous SQL:

select t1.object_id, t1.owner, t1.status, t2.object_name
from t1, t2
where t1.object_id = t2.object_id
and t1.object_id < 1000;

Can you help to elaborate why the optimizer will choose index range for this case ?

Thanks.

Rgds,
SHGoh

Tom Kyte
June 18, 2004 - 10:36 am UTC

sure, indexes can, are, will be used -- absolutely. I showed that above.

The question typically stems from:

I have

select t1.object_id, t1.owner, t1.status, t2.object_name
from t1, t2
where t1.object_id = t2.object_id

and object_id is indexed -- why isn't the optimizer using the index?


the answer is "cause it would be the worst thing on the planet to actually do"

same query, different plan when executed as sys/system -- how so?!

A reader, July 01, 2004 - 7:41 am UTC

Hi

I have a query which isn't performing as well as desired; it reads 15 I/Os in 8.1.7 and 100 in 10.1.0.2.0.
However, in 10g, if I run the query under sys or system it only uses 10 I/Os.

The query has owner prefixed:

SELECT libro.c_nivel, libro.c_asignatura, libro.c_titulo, libro.n_alumnos
FROM sim.libro_usado libro, sim.definicion_mercado def
WHERE libro.c_centro = :b7
AND libro.c_nivel = :b6
AND libro.c_asignatura = :b5
AND libro.b_cerrado_periodo = 'N'
AND libro.b_bloqueado = 'N'
AND libro.b_estado = 'N'
AND libro.c_titulo_es_adoptado IN (
SELECT titulo.c_titulo
FROM sim.titulo
WHERE titulo.c_tipo_titulo IN (
SELECT c_tipo_titulo
FROM sim.tipo_titulo
WHERE t_clase = 'T'))
AND def.c_pais = :b4
AND def.c_autonoma = :b3
AND def.c_provincia = :b2
AND def.c_periodo = :b1
AND def.c_nivel = libro.c_nivel
AND def.b_def_mercado = 'S'
AND def.c_edicion = libro.c_edicion_es_adoptado
AND def.c_idioma = libro.c_idioma_es_adoptado
AND def.c_asignatura = libro.c_asignatura
AND def.b_bloqueado = 'N'
ORDER BY libro.c_nivel, libro.c_asignatura;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=4 Card=1 Bytes=88)
1 0 TABLE ACCESS (BY GLOBAL INDEX ROWID) OF 'LIBRO_USADO' (TABLE) (Cost=1 Card=1 Bytes=46)
2 1 NESTED LOOPS (Cost=2 Card=1 Bytes=88)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'DEFINICION_MERCADO' (TABLE) (Cost=1 Card=1 Bytes=42)
4 3 INDEX (RANGE SCAN) OF 'DEF__MERCADO_UK' (INDEX (UNIQUE)) (Cost=1 Card=1)
5 2 INDEX (RANGE SCAN) OF 'LUS_LOCALIZA_UK' (INDEX (UNIQUE)) (Cost=1 Card=1)
6 5 NESTED LOOPS (Cost=2 Card=1 Bytes=18)
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'TITULO' (TABLE) (Cost=1 Card=1 Bytes=11)
8 7 INDEX (UNIQUE SCAN) OF 'TITULO_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
9 6 TABLE ACCESS (BY INDEX ROWID) OF 'TIPO_TITULO' (TABLE) (Cost=1 Card=1 Bytes=7)
10 9 INDEX (UNIQUE SCAN) OF 'TTIT_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)

Statistics
----------------------------------------------------------
16 recursive calls
0 db block gets
10 consistent gets
0 physical reads
0 redo size
414 bytes sent via SQL*Net to client
373 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=7 Card=1 Bytes=113)
1 0 TABLE ACCESS (BY GLOBAL INDEX ROWID) OF 'LIBRO_USADO' (TABLE) (Cost=1 Card=1 Bytes=46)
2 1 NESTED LOOPS (Cost=5 Card=1 Bytes=113)
3 2 NESTED LOOPS (Cost=4 Card=1 Bytes=67)
4 3 NESTED LOOPS (Cost=3 Card=1 Bytes=25)
5 4 NESTED LOOPS (Cost=2 Card=1 Bytes=10)
6 5 INDEX (UNIQUE SCAN) OF 'ASIGNATURA_PK' (INDEX (UNIQUE)) (Cost=0 Card=1 Bytes=5)
7 5 INDEX (UNIQUE SCAN) OF 'NIVEL_PK' (INDEX (UNIQUE)) (Cost=0 Card=1 Bytes=5)
8 4 INDEX (UNIQUE SCAN) OF 'PROVINCIA_PK' (INDEX (UNIQUE)) (Cost=0 Card=1 Bytes=15)
9 3 TABLE ACCESS (BY INDEX ROWID) OF 'DEFINICION_MERCADO' (TABLE) (Cost=1 Card=1 Bytes=42)
10 9 INDEX (RANGE SCAN) OF 'DEF_MERCAD_PROVINCIA_FK_I' (INDEX) (Cost=0 Card=1)
11 2 INDEX (RANGE SCAN) OF 'LUS_LOCALIZA_UK' (INDEX (UNIQUE)) (Cost=1 Card=1)
12 11 NESTED LOOPS (Cost=2 Card=1 Bytes=18)
13 12 TABLE ACCESS (BY INDEX ROWID) OF 'TITULO' (TABLE) (Cost=1 Card=1 Bytes=11)
14 13 INDEX (UNIQUE SCAN) OF 'TITULO_PK' (INDEX (UNIQUE)) (Cost=1 Card=1)
15 12 TABLE ACCESS (BY INDEX ROWID) OF 'TIPO_TITULO' (TABLE) (Cost=1 Card=1 Bytes=7)
16 15 INDEX (UNIQUE SCAN) OF 'TTIT_PK' (INDEX (UNIQUE)) (Cost=0 Card=1)

Statistics
----------------------------------------------------------
80 recursive calls
0 db block gets
102 consistent gets
0 physical reads
0 redo size
414 bytes sent via SQL*Net to client
373 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed


How so? Why owner needs 102 consistent gets and system only 10?

Tom Kyte
July 01, 2004 - 11:13 am UTC

do not use sys
do not use system

pretend they do not exist

they are not for your use

they are ours

they are not accounts to be used

ignore them

forget they exist


use your own accounts.

how do you get the parse,fetch statistics

ramakrishna, July 07, 2004 - 10:59 pm UTC

Hi Tom, I'm a beginner at this DBA job, reading all your articles on this site and your books. I want to know: in most of the articles, I think the solutions you post are from your laptop -- I mean the SQL demonstrations you show when giving solutions. I have observed that you must have enabled some feature where you get the parse, execute and fetch statistics without using tkprof -- am I right? How do I enable this?

thanx in advance

Tom Kyte
July 08, 2004 - 8:11 am UTC

i use tkprof all of the time....

I use a sun box, linux server, desktop, and a laptop for examples -- they all look the same though as it is always

o autotrace
o explain plan
o tkprof
o runstats

showing the results -- if you have my book "Effective Oracle By Design" -- i spent a chapter covering the tools I use.

A reader, July 29, 2004 - 7:39 am UTC

Hi All,
I have the following query which runs in 150 milliseconds when invoked from a single-user environment, but during a concurrent-user test (25 users) the same query takes 3 seconds to complete. The table (tab1) this query is fired against contains 50 million rows. Can anyone please help me out in tuning this query?

Query
------
SELECT /*+ INDEX (tab1 PK_tab1) */ TR_SUB_PREM_KEY, CIR_KEY, CIR_TYPE, locality,
       LTRIM(RTRIM(SUBSTR(source_postcode,1,DECODE((INSTR(source_postcode,' ')-1),-1,4,(INSTR(source_postcode,' ')-1))))),
       LTRIM(RTRIM(SUBSTR(source_postcode,DECODE((INSTR(source_postcode,' ')+1),1,5,(INSTR(source_postcode,' ')+1)))))
FROM tab1
WHERE tab1.TR_SUB_PREM_KEY = 'R00000016682'

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=5 Card=1 Bytes=97)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'tab1' (Cost=5 Card=1 Bytes=97)
2 1 INDEX (RANGE SCAN) OF 'PK_tab1' (NON-UNIQUE) (Cost=4 Card=1)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
1 physical reads
0 redo size
2199 bytes sent via SQL*Net to client
655 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed




Tom Kyte
July 29, 2004 - 11:55 am UTC

so, how many cpus do you have.

 
ops$tkyte@ORA9IR2> select .15 * 25 from dual;
 
    .15*25
----------
      3.75
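Tom's `select .15 * 25` is back-of-envelope queueing arithmetic: with fewer CPUs than concurrent requests, CPU-bound work serializes. A rough model (the single-CPU figure is an assumption -- the original poster never stated the CPU count, which is exactly Tom's question):

```python
# Rough capacity model, not a benchmark: 25 concurrent 150 ms CPU-bound
# queries on a single CPU serialize, so the last finisher waits for all
# the work queued ahead of it.
cpu_seconds_per_query = 0.150
concurrent_users = 25
cpus = 1  # assumed for illustration

worst_case_response = cpu_seconds_per_query * concurrent_users / cpus
print(worst_case_response)  # 3.75 s, in line with the observed ~3 s under load
```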
 

employee table lookup

Sudhir, July 29, 2004 - 11:36 am UTC

Tom,

I searched for the 'employee table lookup' and landed here. What I was trying to find was the reference to what you say in your book - an application you had developed inhouse to do employee lookup and you build a temporary flat table and so forth.

Here is one idea I thought about:


@?/rdbms/admin/utlsampl.sql
connect scott/tiger
create table empvert(key varchar2(100),colname varchar2(50), rid rowid);



insert into scott.empvert
(
select empno||'','EMPNO',rowid rid from emp
union all
select ename,'ENAME',rowid rid from emp
union all
select job,'JOB',rowid rid from emp
union all
select MGR||'','MGR',rowid rid from emp
union all
select to_char(hiredate,'YYYYMMDDHH24MISS'),'HIREDATE',rowid rid from emp
union all
select deptno||'','DEPTNO',rowid rid from scott.emp
);
create index si_empvert_1 on empvert(key);
create index si_empvert_2 on empvert(COLNAME);


1 select * from emp where rowid in (
2* select rid from empvert where key like '&key' and colname = '&col')
/
Enter value for key: M%
Enter value for col: ENAME
old 2: select rid from empvert where key like '&key' and colname = '&col')
new 2: select rid from empvert where key like 'M%' and colname = 'ENAME')

EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- ---------- --------- ---------- --------- ---------- ----------
DEPTNO
----------
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400
30

7934 MILLER CLERK 7782 23-JAN-82 1300
10

1 select * from emp where rowid in (
2* select rid from empvert where key like '&key%' )
/
Enter value for key: J
old 2: select rid from empvert where key like '&key%' )
new 2: select rid from empvert where key like 'J%' )

EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- ---------- --------- ---------- --------- ---------- ----------
DEPTNO
----------
7566 JONES MANAGER 7839 02-APR-81 2975
20

7900 JAMES CLERK 7698 03-DEC-81 950
30


/
Enter value for key: 78
old 2: select rid from empvert where key like '&key%' )
new 2: select rid from empvert where key like '78%' )

EMPNO ENAME JOB MGR HIREDATE SAL COMM
---------- ---------- --------- ---------- --------- ---------- ----------
DEPTNO
----------
7566 JONES MANAGER 7839 02-APR-81 2975
20

7698 BLAKE MANAGER 7839 01-MAY-81 2850
30

7782 CLARK MANAGER 7839 09-JUN-81 2450
10

7839 KING PRESIDENT 17-NOV-81 5000
10

7844 TURNER SALESMAN 7698 08-SEP-81 1500 0
30

7876 ADAMS CLERK 7788 23-MAY-87 1100
20


6 rows selected.

What do you think?

Thanks



Tom Kyte
July 29, 2004 - 1:26 pm UTC

don't understand why you would do that when a normal index on ENAME would be much more efficient?

My point is

A reader, July 29, 2004 - 9:46 pm UTC

You can search on any column value with one SQL statement and one index. For example, key like 'MANAGER' will give you all managers, or key like 'JAMES' will give all rows with that word.
That's why I named the table derived from emp "empvert" (vertical).
thanks


Tom Kyte
July 30, 2004 - 7:34 am UTC

so? why is this redundant data better than having an index per column on the already stored data?

what have you improved here?

for the reason ...

A reader, July 30, 2004 - 8:03 pm UTC

you don't need to know which column the user-specified key exists in.

thanks

Tom Kyte
July 31, 2004 - 10:52 am UTC

does not compute -- I don't know what you mean by "you don't need to know which column the user specified key exists".


I'm not seeing the point here. if you have:

a) the table emp
b) indexes on the columns you want to search on
c) you generate the query "select * from emp where ename = :x"

you'll use an index. in fact, if you write:

where ename = :x or job = :y


where ename = :x and job = :y

we'll be able to not only actually answer those questions easily (try that with your inverted table), we'd be able to use the indexes as well.



What it looks like you are trying to reinvent is the multi column datastore index already available in Oracle text and I do not see the benefit here in this case.

ok, here is one way

A reader, August 01, 2004 - 7:16 am UTC

where ename = :x and job = :y

select * from emp where rowid in (
select rid from empvert where key = :x and
colname = 'ENAME' and rid in
(select rid from empvert where key = :y and colname='JOB' )
)

where ename = :x or job = :y

select * from emp where rowid in (
select rid from empvert where key = :x and colname = 'ENAME'
union
select rid from empvert where key = :y and colname = 'JOB'
)

I am just trying out something.

thanks

Tom Kyte
August 01, 2004 - 11:05 am UTC

"we'll be able to not only actually answer those questions easily "
^^^^^^


believe me -- I know you *can* write the queries (in fact, in Effective Oracle by Design -- i demonstrate it -- to show how wrong a model like this is).

the key word here is "easily"



Now think about the case where you have 1,000,000 rows "where key = :y and colname = 'JOB'" and 1 row "where key = :x and colname = 'ENAME'"

and if you say "ok, i'll flip them", then I'll flip them myself. Basically to write this easily and efficiently -- you need to maintain your own statistics so you know how to build the query and so on.


Thanks but I'll stick with "where ename = :x or job = :y"


this solution calls for either:

a) index the columns in the table and just do it as "normal"
b) use a multi-column datastore index from Oracle text






How to do this?

A reader, August 20, 2004 - 3:14 pm UTC

A query runs fine in the UAT environment but tanks in Production. The hardware on both environments is the same, the data volume is the same, stats gathered on both using 'dbms_stats' (cascade=>true, method_opt=>'for all indexed columns')

On UAT, it runs in 7 seconds, on Prod, it takes 700 seconds. The execution plan is different. On both databases, the stats in the dictionary are very close to reality. i.e. select count(*) from table is the same as dba_Tables.num_rows and so on.

One interesting thing is that if I add /*+ first_rows */ to the Prod query, the plan becomes identical to the UAT query and the execution time also drops down to almost 7 seconds!

How would you go about solving this problem?

Thanks

Tom Kyte
August 21, 2004 - 11:09 am UTC

i'd get the 10053 trace file from both systems and compare them to see what parameters I've set differently as the 10053 trace will dump out the init.ora settings used by the optimizer. a simple diff on those files would probably tell me what I changed between the two environments.

How to do this?

A reader, August 21, 2004 - 10:38 am UTC

OK I ran explain plan in both databases and did

select distinct operation,owner,object_name from plan_table
where operation in ('TABLE ACCESS','INDEX');

Made sure that stats on all these objects are up-to-date.

The plans are still different between the 2 environments! :(

How to go about solving this? Why does the Prod env need the FIRST_ROWS to get a good plan, but the UAT env figures that out by itself? [The data volumes in the tables are identical between the 2 environments]

Thanks

Tom Kyte
August 21, 2004 - 12:16 pm UTC

read above please -- 10053 trace.

if you don't know what that is, search for it on this site, or see chapter 6 in my book Effective Oracle by Design.

your init.ora's are probably different and you've given the optimizer different settings -- one of which is not appropriate.

How to do this?

A reader, August 21, 2004 - 11:37 pm UTC

You are right, as usual.

After a lot of elimination and comparing the 2 environments, the determining factor turned out to be the db_file_multiblock_read_count

It is set to 32 on UAT and 64 on Prod.

Changing it has a huge impact on this particular query

Lowering it from 32 to 16 doesn't harm UAT
Increasing it from 32 to 64 makes UAT tank
Increasing it from 32 to 128 makes UAT tank

Lowering it from 64 to 32 doesn't change Prod (still bad)
Lowering it from 64 to 16 makes Prod speed up a lot
Increasing it from 64 to 128 also makes Prod speed up a lot

Questions:

What is the meaning of all this? Why is this parameter having such a dramatic impact on the query?

Also, why is the effect not the same in both directions (increasing vs. decreasing) on both the databases?

What is the best way to set this parameter initially, monitor its effect and tune incrementally?

How much does the underlying hardware have to do with this parameter? [In this case, UAT is a Sun E450 with RAID 5 LUNs on a A1000. Prod is a Sun V880 with RAID 0+1 LUNs on a A1000]

How can I set this parameter in a methodical way considering the underlying hardware for each database? Are there any disk I/O measurements I can take to help me?

Help! Thanks

Tom Kyte
August 22, 2004 - 8:18 am UTC

that parameter dramatically affects the cost of a full table scan.

bigger values -> full scan cheaper
smaller values -> full scan more expensive

I would leave this (and most all parameters) unchanged, at their default, unless I had good reason to change them.


you would compare the 10053 traces to see what is different between the two environments.


you would use system statistics (gathered via dbms_stats) in order to give the optimizer accurate information about your systems cpu, single block and multiblock IO capabilities.
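The "bigger values -> full scan cheaper" effect can be sketched with a toy cost model. The formula below is a deliberate simplification for illustration -- it is not Oracle's actual costing arithmetic:

```python
# Toy model: a full scan is charged roughly one multiblock read per
# db_file_multiblock_read_count (mbrc) blocks, so a bigger mbrc makes
# full scans look cheaper to the optimizer and can flip plan choice.
def full_scan_reads(table_blocks, mbrc):
    return -(-table_blocks // mbrc)  # ceiling division

blocks = 100_000
for mbrc in (8, 16, 32, 64, 128):
    print(mbrc, full_scan_reads(blocks, mbrc))
# e.g. mbrc=8 -> 12500 reads vs mbrc=128 -> 782 reads: same table, very
# different apparent cost -- hence different plans between UAT and Prod
```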

A reader, August 22, 2004 - 9:31 am UTC

1. Thanks, but why would the effect of increasing/decreasing it not be the same on both databases? Since the underlying hardware is different, I can understand the real performance of the query being different, but at least the explain plans should respond the same way on both databases to increasing/decreasing it, right?

"I would leave this (and most all parameters) unchanged, at their default, unless I had good reason to change them"

2. That's what I meant -- what would be a "good reason" to change them? OK, so you are saying the default (8) is a good initial value.

3. How can I monitor the effect of this value 8 on my system and see if I would benefit from a higher number?

4. My RAID system (Sun A1000) has very high throughput, read prefetching, caching, etc. that would effectively be wasted if I set this to the default of 8, wouldn't it?

"you would compare the 10053 traces to see what is different between the two environments"

5. Hm, it is really difficult to compare 10053 traces from 2 databases; most everything looks different, and eyeballing it is not easy. Is there an efficient way to compare these (other than 'diff f1.trc f2.trc')?


"you would use system statistics (gathered via dbms_stats) in order to give the optimizer accurate information about your systems cpu, single block and multiblock IO capabilities"

6. OK so would dbms_stats.gather_system_stats make it such that the value of db_file_multiblock_read_count is irrelevant? No matter what I set it to, Oracle would "know" my CPU/disk speeds so it would automatically (internally) adjust the value of multiblock_read_count?

Thanks

Tom Kyte
August 22, 2004 - 4:51 pm UTC

1) as i said, compare the 10053 traces -- what did you see in there that was different. that will TELL you what was different.

2) a good reason would be support asking you to for example....

3) benchmarking on the actual hardware

4) no?!?

5) they should be virtually the same if all is as you say ("these databases are the same"). that was the point -- if they are radically different, these databases cannot even begin to be compared to each other.

6) no, it makes it so the optimizer better understands your IO characteristics and how your CPUs perform. it would still read 8 blocks in a single IO when performing multi-block IO.

db scattered file wait event

A reader, August 22, 2004 - 3:09 pm UTC

I tried the measurements you mention in

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4433887271030, <code>

and came up with p3=128 i.e. my OS max IO size also seems to be 1M (my db_block_size is 8K)

So, I should set my db_file_mbrc=128, right?

I have it currently set to 64. And as I said, performance (of this one query) is good if I set this parameter either to 32 or to 128! I can't figure out why both lowering it and increasing it improve performance!

My concern is that I don't want to "dumb down" my system or give Oracle a lowest-common-denominator view of my system's IO capabilities if I can avoid it. The RAID A1000 has tremendous IO bandwidth (rated at 40MB/sec, UltraSCSI3) that I would like to leverage and not waste.

gather_system_stats looks like a good feature, but I can't find any docs on how it works with the mbrc parameter.

Thanks

Tom Kyte
August 22, 2004 - 5:08 pm UTC

no, i would let it default to 8.


lowering it results in a different plan.

increasing it results in larger IO's


use dbms_stats to gather system stats.

they do not "work with" the mbrc -- they give the optimizer information about the true cost of IO on your system period.

mbrc

A reader, August 22, 2004 - 8:38 pm UTC

Why do you say that I should let MBRC default to 8?

For a 8k block size that is just 64K read in a single IO request. My OS and disk subsystem is able to handle upto 1MB in a single IO request.

So why should I throttle it down?

On a related note, since this is a ALTER SYSTEM dynamic parameter, when I change it to 8, will all my plans get invalidated in the shared pool and be hard-parsed the next time they are executed?

Thanks

Tom Kyte
August 23, 2004 - 7:29 am UTC

because I believe all parameters that don't have to be set, shouldn't be set.

do you want to be doing full scans frequently?
are you doing full scans frequently?
do you want the cost of a full scan to be lower than it is now?


if you change a parameter used by the CBO, it'll cause a child cursor to be created if one already exists -- that child cursor will be for "that environment"

mbrc

A reader, August 23, 2004 - 9:25 am UTC

"because I believe all parameters that don't have to be set, shouldn't be set"

right, but it is a regular parameter not a hidden "_" parameter, so Oracle does support changing it.

Not to sound like a broken record, but I would have thought this would be a perfectly valid situation to change it i.e. my hardware can support it, it is not as if I am lying to the optimizer.

So under what circumstances would you (or Support) recommend changing it?

Am I full scanning tables frequently? How can I tell? I look at the v$sysstat "table scans (long tables)" statistic and I see a number. How can I tell if this number is good or bad?

"if you change a parameter used by the CBO, it'll cause a child cursor to be created if one already exists -- that child cursor will be for "that environment""

Not sure I understand this. I meant ALTER SYSTEM not ALTER SESSION. If I ALTER SYSTEM this parameter, would all cursors in shared pool be invalidated and hard-parsed next time?

Thanks

Tom Kyte
August 23, 2004 - 10:21 am UTC

of course we do -- caveat emptor.

i've already told you when I would change it -- i wouldn't.


you should know whether you intend to scan tables frequently -- that would have been part of your design from day one.


if your dfmbrc was 8 and you parsed "select * from t" -- it would be there in v$sql.

if you alter the dfmbrc from 8 to 16 and parse "select * from t" -- it would create a child cursor under the existing query that was parsed with dfmbrc of 8.

regardless of system or session setting.

the cursors are not invalidated by a simple parameter change, but new child cursors would be "seen"
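As a toy illustration of the parent/child cursor behavior described above (a conceptual sketch, not how the shared pool is actually implemented): one parent per SQL text, one child per distinct optimizer environment. Changing a CBO parameter does not invalidate the parent; a new child simply appears the next time the text is parsed under the new environment.

```python
# Toy model of cursor sharing: sql_text -> list of optimizer environments,
# where each environment corresponds to one child cursor.
from collections import defaultdict

shared_pool = defaultdict(list)

def parse(sql_text: str, env: dict) -> int:
    """Return the index of the child cursor matching this environment,
    creating a new child if no existing one matches."""
    children = shared_pool[sql_text]
    if env not in children:      # no child for this environment yet
        children.append(env)     # new child cursor; old ones are kept
    return children.index(env)

parse("select * from t", {"mbrc": 8})
child = parse("select * from t", {"mbrc": 16})  # parameter changed

assert len(shared_pool["select * from t"]) == 2  # both children coexist
assert child == 1
```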




Different results with different execution plans

A reader, August 26, 2004 - 10:30 am UTC

Hi Tom,

Is it possible that, for the same query, the database returns different results when using different execution plans (by giving it different hints, etc.)?

I remember I had one such situation before but I can't remember how the query was written.

Do you know if this is possible?

Thanks.

Tom Kyte
August 26, 2004 - 10:39 am UTC

if it happens it would be what is known as "a bug"

15 Table Join

chetan, October 18, 2004 - 6:41 am UTC

Dear Sir,

I have a multi-table view with outer joins. It takes real: 1200 to return 158 rows.

Please tell me how to optimize this so that it returns the data at a faster rate.

CREATE OR REPLACE VIEW VIEW_ENQUIRY AS
(
SELECT DISTINCT
EM.ECODE AS ECODE,
EM.eID AS EID,
EM.eDate AS EDATE,
EM.vDate AS VDATE,
EM.EMode AS EMODE,
EM.Cust_Code AS CUST_CODE,
EM.Contact_Code AS CONTACT_CODE,
EM.Prod_Code AS PROD_CODE,
EM.Qty_Required AS QTY,
EM.Qty_Unit AS QTY_UNIT_CODE,
EM.Qty_Detail AS QTY_DETAIL,
EM.PayTerms AS PAYTERMS,
EM.PayMode AS PAYMODE,
EM.ShipTo AS SHIPTO,
EM.SpecReqired AS SPECS,
EM.SampRequired AS SAMPLE,
EM.Shipment1 AS SHIPMENT1,
EM.Shipment2 AS SHIPMENT2,
EM.Shipment3 AS SHIPMENT3,
EM.Compete_Rate AS COMPETE_RATE,
EM.Compete_Unit AS COMPETE_UNIT_CODE,
EM.RecDate AS RECORD_DATE,
EM.UpDateDate AS UPDATE_DATE,
EM.RecCreated AS RECCREATEDBY,
EM.RecUpdated AS RECUPDATEDBY,
PRODUCT_MASTER.PROD_NAME AS PRODUCT_NAME,
PRODUCT_MASTER.CAS_NO AS PRODUCT_CASNO,
--
QTY_UNIT_MASTER.UNIT_NAME AS QTY_UNIT_NAME,
--ENQ_QTY_UNIT.UNIT_CODE AS QTY_UNIT_CODE,
--
COMP_UNIT_MASTER.UNIT_NAME AS COMPETE_UNIT_NAME,
--ENQ_COMPETE_UNIT.UNIT_CODE AS COMPETE_UNIT_CODE,
--
--ENQ_PARTY_MASTER.PARTY_CODE AS PARTY_CODE,
PARTY_MASTER.PARTY_NAME AS PARTY_NAME,
--
CONTACT_MASTER.FNAME||DECODE(NVL(CONTACT_MASTER.SNAME,''),'','',' ')||CONTACT_MASTER.SNAME||DECODE(NVL(CONTACT_MASTER.LNAME,''),'','',' ')||CONTACT_MASTER.LNAME AS CONTACT_PERSON,
ADRS_MASTER.ADRS_1 AS ADRS_1,
ADRS_MASTER.ADRS_2 AS ADRS_2,
ADRS_MASTER.STREET_NAME AS STREET_NAME,
ADRS_MASTER.PIN AS PIN,
--
COMM_MASTER.OFF_TEL_1||DECODE(NVL(COMM_MASTER.OFF_TEL_2,''),'','',' ; ')||COMM_MASTER.OFF_TEL_2 AS TEL,
COMM_MASTER.FAX_1||DECODE(NVL(COMM_MASTER.FAX_2,''),'','',' ; ')||COMM_MASTER.FAX_2 AS FAX,
COMM_MASTER.OFF_EMAIL || DECODE(NVL(COMM_MASTER.PERS_EMAIL,''),'','',' ; ')||COMM_MASTER.PERS_EMAIL AS EMAIL,
--
CITY_MASTER.CITY_NAME AS CITY_NAME,
--
STATE_MASTER.STATE_NAME AS STATE_NAME,
--
COUNTRY_MASTER.COUNTRY_NAME AS COUNTRY_NAME,
--
USER_MASTER.USER_CODE AS USER_CODE,
USER_MASTER.USER_NAME AS USER_NAME,
--
ES.STATUS AS STATUS,
ES.ESTATUS AS STATUS_CODE
From
ENQUIRY_MASTER EM,
USER_MASTER USER_MASTER,
ENQUIRYPROCESS EP,
ENQUIRY_STATUS ES,
PRODUCT_MASTER PRODUCT_MASTER,
UNIT_MASTER QTY_UNIT_MASTER,
UNIT_MASTER COMP_UNIT_MASTER,
PARTY_MASTER PARTY_MASTER,
CONTACT_PERSON CONTACT_MASTER,
COMM_MASTER COMM_MASTER,
ADDRESS_MASTER ADRS_MASTER,
CITY_MASTER CITY_MASTER,
STATE_MASTER STATE_MASTER,
COUNTRY_MASTER COUNTRY_MASTER
WHERE
EM.Prod_Code = PRODUCT_MASTER.PROD_CODE AND
EM.Qty_Unit = QTY_UNIT_MASTER.UNIT_CODE(+) AND
EM.Compete_Unit = COMP_UNIT_MASTER.UNIT_CODE(+) AND
EM.Cust_Code = PARTY_MASTER.PARTY_CODE AND
EM.Contact_Code = CONTACT_MASTER.CONT_PERS_CODE AND
EP.ECODE = EM.ECODE AND
USER_MASTER.USER_CODE = EP.USERCODE AND
ES.ESTATUS = EP.STATUS AND
PARTY_MASTER.COMM_CODE = COMM_MASTER.COMM_CODE(+) AND
PARTY_MASTER.ADRS_CODE = ADRS_MASTER.ADRS_CODE(+) AND
ADRS_MASTER.CITY_CODE = CITY_MASTER.CITY_CODE(+) AND
ADRS_MASTER.STATE_CODE = STATE_MASTER.STATE_CODE(+) AND
ADRS_MASTER.COUNTRY_CODE = COUNTRY_MASTER.COUNTRY_CODE(+)
)
/


i hope you will be able to solve my problem ... here are the CBO results:

SQL> set autotrace traceonly
SQL> set timing on
SQL> select count(*) from view_enquiry;

real: 17395

Execution Plan
--------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 VIEW OF 'VIEW_ENQUIRY'
3 2 SORT (UNIQUE)
4 3 NESTED LOOPS
5 4 NESTED LOOPS (OUTER)
6 5 NESTED LOOPS (OUTER)
7 6 NESTED LOOPS (OUTER)
8 7 NESTED LOOPS (OUTER)
9 8 NESTED LOOPS (OUTER)
10 9 NESTED LOOPS (OUTER)
11 10 NESTED LOOPS (OUTER)
12 11 NESTED LOOPS
13 12 NESTED LOOPS
14 13 NESTED LOOPS
15 14 NESTED LOOPS
16 15 NESTED LOOPS
17 16 TABLE ACCESS (FULL) OF 'ENQUIRYPROCESS'
18 16 TABLE ACCESS (BY INDEX ROWID) OF 'ENQUIRY_STATUS'
19 18 INDEX (UNIQUE SCAN) OF 'SYS_C005354' (UNIQUE)
20 15 TABLE ACCESS (BY INDEX ROWID) OF 'USER_MASTER'
21 20 INDEX (UNIQUE SCAN) OF 'PK_USER_CODE' (UNIQUE)
22 14 TABLE ACCESS (BY INDEX ROWID) OF 'ENQUIRY_MASTER'
23 22 INDEX (UNIQUE SCAN) OF 'SYS_C005667' (UNIQUE)
24 13 TABLE ACCESS (BY INDEX ROWID) OF 'CONTACT_PERSON'
25 24 INDEX (UNIQUE SCAN) OF 'PK_CONTACT_PERSON' (UNIQUE)
26 12 TABLE ACCESS (BY INDEX ROWID) OF 'PARTY_MASTER'
27 26 INDEX (UNIQUE SCAN) OF 'PK_PARTY_MASTER' (UNIQUE)
28 11 TABLE ACCESS (BY INDEX ROWID) OF 'ADDRESS_MASTER'
29 28 INDEX (UNIQUE SCAN) OF 'PK_ADDRESS_MASTER' (UNIQUE)
30 10 TABLE ACCESS (BY INDEX ROWID) OF 'COUNTRY_MASTER'
31 30 INDEX (UNIQUE SCAN) OF 'PK_COUNTRY_MASTER' (UNIQUE)
32 9 TABLE ACCESS (BY INDEX ROWID) OF 'STATE_MASTER'
33 32 INDEX (UNIQUE SCAN) OF 'PK_STATE_MASTER' (UNIQUE)
34 8 TABLE ACCESS (BY INDEX ROWID) OF 'CITY_MASTER'
35 34 INDEX (UNIQUE SCAN) OF 'PK_CITY_MASTER' (UNIQUE)
36 7 TABLE ACCESS (BY INDEX ROWID) OF 'COMM_MASTER'
37 36 INDEX (UNIQUE SCAN) OF 'SYS_C006283' (UNIQUE)
38 6 TABLE ACCESS (BY INDEX ROWID) OF 'UNIT_MASTER'
39 38 INDEX (UNIQUE SCAN) OF 'SYS_C005120' (UNIQUE)
40 5 TABLE ACCESS (BY INDEX ROWID) OF 'UNIT_MASTER'
41 40 INDEX (UNIQUE SCAN) OF 'SYS_C005120' (UNIQUE)
42 4 TABLE ACCESS (BY INDEX ROWID) OF 'PRODUCT_MASTER'
43 42 INDEX (UNIQUE SCAN) OF 'PK_PRODUCT_MASTER' (UNIQUE)





Statistics
--------------------------------------------------------
900 recursive calls
143 db block gets
3643 consistent gets
727 physical reads
13980 redo size
564 bytes sent via SQL*Net to client
661 bytes received via SQL*Net from client
4 SQL*Net roundtrips to/from client
1 sorts (memory)
1 sorts (disk)
1 rows processed




-------------------------
I am really worried by these results and need your help badly ...
it is an Oracle 8.0.3 server
on NT 4 with 512 MB of RAM


Tom Kyte
October 18, 2004 - 8:54 am UTC

you are using the RBO, not the CBO, there. Nested loops joined to "heck".


8.0.3 -- on an nt server with 512mb -- surprised it ran that fast....




A reader, October 20, 2004 - 5:30 pm UTC

Tom,
Below is from Oracle Support

The plans generated by the CBO depend upon the sizes of the tables, and potentially on the data distributions as well, if histograms are being used. When using the CBO with a small amount of data to test an application prototype, do not assume that the plan chosen for the full-size database will be the same as that chosen for the prototype.

Is it true that If I have a well tuned query that's tested in Development with less data, can be slow in production environment with more data? Could you please comment on it?
If so, what are the factors that I should consider..
Thanks in advance



Tom Kyte
October 20, 2004 - 8:54 pm UTC

absolutely true. and the converse, and "neither"

the factor you should consider is that unless you have a test environment that accurately reflects your production system (not only in data, but also in the load you put it under), you will have no idea how the system you are inflicting on your end users will actually perform.

and remember, end users are like elephants -- they have exceedingly long memories.



A reader, November 05, 2004 - 1:22 pm UTC

TOM,

One of the tables here has an id column, which is the only PK, plus some more columns. A query on the date column is performing a full scan. Is there a way to optimize that query? Also, for reporting purposes where a date column is not part of the PK and speed is required, what needs to be done? Thanks

Tom Kyte
November 05, 2004 - 5:40 pm UTC

create index on t(date_column)?

assuming that date_column is selective enough.

A reader, November 06, 2004 - 6:52 pm UTC

Tom,

Thank you, before creating index I wanted to show you how this is: Please comment

SQL> explain plan for  SELECT count(*) FROM alphatab WHERE to_char(createdate,'mm/dd/yyyy') = (SELECT to_char(sysdate-4,'mm/dd/yyyy') FROM DUAL) 
  2  /
 
Explained.
 
SQL> @?/rdbms/admin/utlxpls
 
Plan Table
--------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |          |     1 |    7 |   8921 |       |       |
|  SORT AGGREGATE           |          |     1 |    7 |        |       |       |
|   FILTER                  |          |       |      |        |       |       |
|    TABLE ACCESS FULL      | alphatab |    15K|  106K|   8921 |       |       |
|    TABLE ACCESS FULL      |DUAL      |    82 |      |      1 |       |       |
--------------------------------------------------------------------------------
 
8 rows selected.
 
SQL> set timing on
SQL> set autotrace traceonly
SQL> SELECT count(*) FROM alphatab WHERE to_char(createdate,'mm/dd/yyyy') = (SELECT to_char(sysdate-4,'mm/dd/yyyy') FROM DUAL)
  2  /
 
Elapsed: 00:01:14.65
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=8921 Card=1 Bytes=7)
   1    0   SORT (AGGREGATE)
   2    1     FILTER
   3    2       TABLE ACCESS (FULL) OF 'ALPHATAB' (Cost=8921 Card=15572 Bytes=109004)
 
   4    2       TABLE ACCESS (FULL) OF 'DUAL' (Cost=1 Card=82)
 
 
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
         42  db block gets
      58803  consistent gets
      58642  physical reads
          0  redo size
        369  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
 

Tom Kyte
November 06, 2004 - 9:07 pm UTC

SELECT count(*) FROM alphatab
WHERE to_char(createdate,'mm/dd/yyyy') = (SELECT to_char(sysdate-4,'mm/dd/yyyy') FROM DUAL)

should be


select count(*)
from alphatab
where createdate >= trunc(sysdate-4,'mm')
and createdate < add_months(trunc(sysdate-4,'mm'),1)
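The point of rewrites like this, sketched outside the database: a function applied to the column (to_char(createdate, ...)) forces every row to be inspected, while a range predicate on the bare column lets an ordered structure (an index) jump straight to the matching slice. The dates and sizes below are invented for illustration, and a day-level range is used to mirror the original day-equality predicate.

```python
# Model an indexed date column as a sorted list; bisect plays the role of an
# index range scan. Both approaches return the same rows, but the range form
# only inspects the matching slice instead of every row.
import bisect
from datetime import date, timedelta

created = sorted(date(2004, 11, 1) + timedelta(days=i % 30) for i in range(1000))

target = date(2004, 11, 5)

# function-on-column style: must touch all 1000 rows
slow = [d for d in created if d.strftime("%m/%d/%Y") == target.strftime("%m/%d/%Y")]

# range style: binary-search the two boundaries, read only the slice
lo = bisect.bisect_left(created, target)
hi = bisect.bisect_left(created, target + timedelta(days=1))
fast = created[lo:hi]

assert slow == fast  # same answer, far fewer rows inspected
```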



A reader, November 07, 2004 - 4:57 pm UTC

Tom,

Thank you. There is not much difference in the way the queries execute (except for the 1 sorts (memory)). I think the number of consistent gets and physical reads is large, and therefore the query is not running efficiently.

The init.ora doesn't have the optimizer mode set, and I believe it is using the RBO (Card is not shown). How should it be run by the CBO? Should it be run by the CBO? This table is analyzed twice daily using "analyze table estimate statistics".

SQL> explain plan for 
select count(*)
  from alphatab
 where createdate >= trunc(sysdate,'mm') 
   and createdate <  add_months(trunc(sysdate,'mm'),1)  2    3    4    5  
  6  /
 
Explained.
 
SQL> @?/rdbms/admin/utlxpls
 
Plan Table
--------------------------------------------------------------------------------
| Operation                 |  Name    |  Rows | Bytes|  Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------
| SELECT STATEMENT          |          |     1 |    7 |   8921 |       |       |
|  SORT AGGREGATE           |          |     1 |    7 |        |       |       |
|   TABLE ACCESS FULL       |alphatab  |     3K|   21K|   8921 |       |       |
--------------------------------------------------------------------------------
 
6 rows selected.
 
SQL> set timing on
SQL> set autotrace traceonly
SQL> 
select count(*)
  from alphatab
 where createdate >= trunc(sysdate,'mm') 
   and createdate <  add_months(trunc(sysdate,'mm'),1)SQL>   2    3    4  
  5  /
 
Elapsed: 00:01:10.32
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=8921 Card=1 Bytes=7)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'ALPHATAB' (Cost=8921 Card=3202 Bytes=22414)
 
 
 
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
         38  db block gets
      58782  consistent gets
      58237  physical reads
          0  redo size
        369  bytes sent via SQL*Net to client
        425  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed
 
SQL>  

Tom Kyte
November 07, 2004 - 5:15 pm UTC

umm, you do not appear to have an index -- correct?

that fix for your query allows you to index that column, and the CBO (after analyzing) will use it if it is appropriate to do so.

A reader, November 07, 2004 - 6:25 pm UTC

Tom,

With reference to the above post, does running "analyze table estimate statistics" (on all tables) twice daily force the optimizer to be the RBO? Would "analyze table compute statistics for table for all indexes for all indexed columns" make it use the CBO? The optimizer mode is not specified in init.ora here.

Tom Kyte
November 08, 2004 - 9:41 am UTC

no? if you have stats -- the premise is the CBO would be used. estimate or compute -- stats are stats.




A reader, November 23, 2004 - 11:45 pm UTC

Tom,

With reference to the above three postings, in an 8.1.7 OLTP table with 15000-20000 rows added daily as a continuous process, will a new index (on a date column) slow down inserts?



Tom Kyte
November 24, 2004 - 7:03 am UTC

15/20k rows?  that is teeny tiny small. that is about 2 new rows every 3 seconds over an 8 hour day - not massive.

is it going to affect the insert?  yes, there will be the index maintenance.

is it going to be huge?  no. and it is not just this one index: if you have lots of indexes that provide no benefit (this one maybe would), they are 'overhead' and should be removed. if the payback exceeds any perceived price, then this index would stay.


ops$tkyte@ORA9IR2> create table t as select * from all_objects where 1=0;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @trace
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2> begin
  2          for x in ( select * from all_objects where rownum < 20000 )
  3          loop
  4                  insert into t T1 values X;
  5          end loop;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> @traceoff
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context off';
 
Session altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> truncate table t;
 
Table truncated.
 
ops$tkyte@ORA9IR2> create index t_idx on t(created);
 
Index created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @trace
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2> begin
  2          for x in ( select * from all_objects where rownum < 20000 )
  3          loop
  4                  insert into t T2 values X;
  5          end loop;
  6  end;
  7  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> @traceoff
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context off';
 
Session altered.


INSERT into t T1 values (:b1,:b2,:b3,:b4,:b5,:b6,:b7,:b8,:b9,:b10,:b11,:b12,:b13)
                                                                                                                      
                                                                                                                      
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute  19999     22.89      22.60          0        302      23071       19999
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    20000     22.89      22.60          0        303      23071       19999
                                                                                                                      
INSERT into t T2 values (:b1,:b2,:b3,:b4,:b5,:b6,:b7,:b8,:b9,:b10,:b11,:b12,:b13)
                                                                                                                      
                                                                                                                      
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  19999     23.52      23.36          2        632      65112       19999
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    20000     23.52      23.36          2        632      65112       19999
 

A reader, November 24, 2004 - 9:51 am UTC

Tom,

If the inserts prove to be time-consuming after a new index is added, should the index not be created, and a materialized view used instead? In OLTP, is a materialized view an/the alternative to avoid new indexes?

Tom Kyte
November 24, 2004 - 10:19 am UTC

how would a MV help in this case?

every time you run the query, it hits *different* data since it is based on sysdate.

Question about execution plan

POLINA, December 27, 2004 - 12:07 pm UTC

Dear, Tom!
I have 2 tables
km_daily_transact - num of rows 23,209,784
unique_index
 prod_no,
 nwpr_date,
 cstmr_no
km_customers - num of row 37,964 
unique index - cstmr_no

the execution plan is:

SQL> select a.cstmr_no,b.cstmr_desc  from km_daily_transact a,km_customers b
  2  where
  3  a.cstmr_no=b.cstmr_no and 
  4   a.prod_no=1
  5  and a.nwpr_date between '01-nov-2004'  and  '26-dec-2004'  
  6  /

354069 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=408 Card=10589 Bytes=423560)
   1    0   HASH JOIN (Cost=408 Card=10589 Bytes=423560)
   2    1     PARTITION RANGE (ITERATOR)
   3    2       INDEX (RANGE SCAN) OF 'KM_DAILY_TRANSACT_PK' (UNIQUE) (Cost=206 Card=10589 Bytes=180013)
   4    1     TABLE ACCESS (FULL) OF 'KM_CUSTOMERS' (Cost=194 Card=37964 Bytes=873172)


Statistics
----------------------------------------------------------
          0  recursive calls
          7  db block gets
       5285  consistent gets
       1702  physical reads
          0  redo size
   10479182  bytes sent via SQL*Net to client
    1558173  bytes received via SQL*Net from client
      23606  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
     354069  rows processed

Why is there a full scan of km_customers and not an access by the index on cstmr_no? I expected that for every fetch from km_daily_transact there would be a join to km_customers by the specific cstmr_no.
Thank you
Polina 
 

Tom Kyte
December 27, 2004 - 12:23 pm UTC

because full scans are not evil.

hint it to use an index and see how slow it truly can go :)



I just didn't understand well the order of the above explain plan.

polina, December 27, 2004 - 3:26 pm UTC

Tom,thank you for the answer,
Please tell me if I understood the plan properly, and correct me if not:
First: the km_customers table is loaded into memory because it has fewer records.
Second: Oracle scans km_customers in memory and for each
cstmr_no joins to the big table km_daily_transact.
Have a nice day.
Polina

Tom Kyte
December 27, 2004 - 4:40 pm UTC

given this plan:

0 SELECT STATEMENT Optimizer=CHOOSE (Cost=408 Card=10589 Bytes
1 0 HASH JOIN (Cost=408 Card=10589 Bytes=423560)
2 1 PARTITION RANGE (ITERATOR)
3 2 INDEX (RANGE SCAN) OF 'KM_DAILY_TRANSACT_PK' (UNIQUE)
4 1 TABLE ACCESS (FULL) OF 'KM_CUSTOMERS' (Cost=194 Card=379


it would have used the index to find the rows in km_daily_transact, created a hash table based on the rows returned from that (it is assuming 10,589 rows from km_daily_transact and 37,900 some odd records from km_customers -- km_daily_transact is "smaller")

that hash table may or may not be in RAM. it will then full scan km_customers and row by row probe the hash table to find the records to join to.


If you have effective Oracle by Design -- I go over the processing of stuff like this in some detail.
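The build/probe sequence Tom describes can be sketched in a few lines (a conceptual model only, not Oracle's implementation; the table contents below are made up): build a hash table on the estimated-smaller row source, then scan the other side row by row and probe.

```python
# Minimal hash join matching the plan above: build phase on one input,
# probe phase over the other (the full scan side).

def hash_join(build_rows, probe_rows, build_key, probe_key):
    table = {}
    for row in build_rows:                  # build phase: hash the small side
        table.setdefault(row[build_key], []).append(row)
    out = []
    for row in probe_rows:                  # probe phase: the full scan side
        for match in table.get(row[probe_key], []):
            out.append({**match, **row})    # join matching rows
    return out

daily = [{"cstmr_no": 1, "prod_no": 1}, {"cstmr_no": 2, "prod_no": 1}]
customers = [{"cstmr_no": 1, "cstmr_desc": "ACME"},
             {"cstmr_no": 3, "cstmr_desc": "Globex"}]

rows = hash_join(daily, customers, "cstmr_no", "cstmr_no")
assert rows == [{"cstmr_no": 1, "prod_no": 1, "cstmr_desc": "ACME"}]
```

Whether the hash table stays in memory or spills to disk depends on its size relative to available workarea memory, which is exactly the "may or may not be in RAM" point above.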


Continue question

Polina, December 28, 2004 - 2:28 am UTC

Hello,Tom,
thank you for the answer,

something in above i didn't understand:

0 SELECT STATEMENT Optimizer=CHOOSE (Cost=408 Card=10589 Bytes
1 0 HASH JOIN (Cost=408 Card=10589 Bytes=423560)
2 1 PARTITION RANGE (ITERATOR)
3 2 INDEX (RANGE SCAN) OF 'KM_DAILY_TRANSACT_PK' (UNIQUE)
4 1 TABLE ACCESS (FULL) OF 'KM_CUSTOMERS' (Cost=194 Card=379
What is the meaning of "hash table"?
When you say that the hash table may or may not be in RAM,
do you mean that the hash table is not in the cache, and that for each row of km_customers I must perform I/O reads to join to km_daily_transact?
Thank you.

Polina



Tom Kyte
December 28, 2004 - 10:21 am UTC

if you would like all of the nitty gritty details, like an explanation of the computer science term "hash table" and "may or may not be in ram" and how it gets paged in and out and so on -- please check out "Effective Oracle by Design" where I spent a good 40 pages on "how steps in plans are executed".

A hash table is
</code> http://www.google.com/search?q=define%3A+hash+table&sourceid=mozilla-search&start=0&start=0&ie=utf-8&oe=utf-8&client=firefox-a&rls=org.mozilla:en-US:official <code>

a hash table may or may not fit into ram, depends on how big it is, how much ram you got.

Which should I believe?

Daniel, January 06, 2005 - 1:47 pm UTC

Dear Tom,

As far as I know, in Oracle 8i and below there are two different SQL parsers, one for the database and another for PL/SQL. If this is true, how can I tune a SQL statement running inside a procedure or package?

Another thing: when I check the explain plan for a statement using bind variables versus constant values, I get different plans. What's wrong?

SQL>    select
  2      distinct cas_Operation, cas_Case
  3      from orb_Users_Groups, orb_ProcessStages_Groups, orb_Cases
  4      where (ugr_Operation = 'OPE1' and ugr_User = 'SYSTEM')
  5      and   (psg_Operation = 'OPE1' and psg_Group = ugr_Group)
  6      and   (cas_Operation = 'OPE1'
  7        and cas_Process = psg_Process 
  8        and cas_ProcessStage = psg_ProcessStage
  9        and cas_Status = 3);

no rows selected


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=36 Card=4 Bytes=820)
   1    0   SORT (UNIQUE) (Cost=36 Card=4 Bytes=820)
   2    1     HASH JOIN (Cost=34 Card=4 Bytes=820)
   3    2       NESTED LOOPS (Cost=2 Card=91 Bytes=13286)
   4    3         INDEX (FAST FULL SCAN) OF 'ORB_PROCESSSTAGES_GROUPS_PK' (UNIQUE) (Cost=2 Card=713 Bytes=52762)
   5    3         INDEX (UNIQUE SCAN) OF 'ORB_USERS_GROUPS_PK' (UNIQUE)
   6    2       TABLE ACCESS (FULL) OF 'ORB_CASES' (Cost=29 Card=434 Bytes=25606)

WRONG! Very high cost!

SQL> var i_Operation char(24)
SQL> var i_User char(24)
SQL>    select
  2      distinct cas_Operation, cas_Case
  3      from orb_Users_Groups, orb_ProcessStages_Groups, orb_Cases
  4      where (ugr_Operation = :i_Operation and ugr_User = :i_User)
  5      and   (psg_Operation = :i_Operation and psg_Group = ugr_Group)
  6      and   (cas_Operation = :i_Operation 
  7        and cas_Process = psg_Process 
  8        and cas_ProcessStage = psg_ProcessStage
  9        and cas_Status = 3);

no rows selected


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=9 Card=1 Bytes=205)
   1    0   SORT (UNIQUE) (Cost=9 Card=1 Bytes=205)
   2    1     NESTED LOOPS (Cost=7 Card=1 Bytes=205)
   3    2       NESTED LOOPS (Cost=2 Card=1 Bytes=146)
   4    3         INDEX (RANGE SCAN) OF 'ORB_PROCESSSTAGES_GROUPS_PK' (UNIQUE) (Cost=2 Card=8 Bytes=592)
   5    3         INDEX (UNIQUE SCAN) OF 'ORB_USERS_GROUPS_PK' (UNIQUE)
   6    2       TABLE ACCESS (BY INDEX ROWID) OF 'ORB_CASES' (Cost=5 Card=5 Bytes=295)
   7    6         INDEX (RANGE SCAN) OF 'ORB_CASES_I06' (NON-UNIQUE) (Cost=1 Card=5)

FINE!

Thanks,
Daniel
 

Tom Kyte
January 06, 2005 - 2:05 pm UTC

there is but one optimizer.


it is in the database.


the sql parsing issue is that plsql has to take code like this:


declare
   l_data number;
begin
   ...
   for x in (select a, b, c from t where x = l_data )
   loop



and find the SQL "select a, b, c from t where x = l_data", then tokenize and parse it so it can understand:

a) this is a query that returns 3 columns
b) whose names are a, b, c
c) with datatypes so and so
d) there is a reference to X and L_DATA as well
e) the table T doesn't have a column L_DATA but it does have a column X
f) l_data is a plsql variable -- therefore, must be a bind variable.
g) so store in the compiled code this query:

    "select a, b, c from t where x = :bv0"

   and when we run it, bind L_data to :bv0

that is what the PLSQL parser does.  The problem was the plsql parser didn't always keep up with the latest sql, so

  for x in ( select * from (select * from dual order by 1 ) )

would "fail" in 8i plsql -- but that same query would succeed in sqlplus.

So, in 9i, they use a common parser to tokenize -- steps a..g are done using the same parser the database uses to parse the query produced in step g.




Now, you are asking the question "what about binds" -- the thing is

where x = 5;

vs

where x = :x;


Ok, the optimizer knows "where x=5 returns like 1 row, we have histograms, 1 row -- we know that"

with "where x = :x" the optimizer knows -- well, it knows that :x could be anything -- it does not "know" 5, it knows "any value".  It'll take a look at the number of distinct values for x (say 100 of them) and the number of rows (say 500 rows) and say "ah hah -- about 5 rows on average"


ops$tkyte@ORA9IR2> create table t as select mod(rownum,100) x from all_objects where rownum <= 500;
Table created.
 
ops$tkyte@ORA9IR2> update t set x = 6 where x = 5;
5 rows updated.
 
ops$tkyte@ORA9IR2> update t set x = 5 where x = 6 and rownum = 1;
1 row updated.
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', method_opt=> 'for all columns size 254' );
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select * from t where x = 5;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=2)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=2)
 
 
 
ops$tkyte@ORA9IR2> variable x number
ops$tkyte@ORA9IR2> select * from t where x = :x;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=5 Bytes=10)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=5 Bytes=10)
 
 
 
ops$tkyte@ORA9IR2>


Just like that.  So bind variables will many times end up with a different plan than constants.



and the optimizer above -- it was not "wrong, very high cost". it said "you know, given the specific inputs you gave me, this is the best we can do". later, when you took the specifics away, it said "you know, generically, on average, this is what you can expect".
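The "about 5 rows on average" arithmetic can be sketched as follows (a simplification for illustration; real costing considers much more than this):

```python
# Sketch of literal-vs-bind cardinality estimation. With a literal and a
# histogram, the estimate can use the exact frequency of that value; with a
# bind, the optimizer falls back to num_rows / num_distinct. The histogram
# below reproduces the skew Tom created: x=5 has 1 row, x=6 has 9.

def estimated_cardinality(num_rows, num_distinct, histogram=None, literal=None):
    if histogram is not None and literal is not None:
        return histogram.get(literal, 0)   # literal: exact bucket count
    return num_rows / num_distinct         # bind: average guess

hist = {v: 5 for v in range(100)}          # 500 rows, 100 distinct values
hist[5], hist[6] = 1, 9                    # the skewed values

assert estimated_cardinality(500, 100, hist, literal=5) == 1   # Card=1
assert estimated_cardinality(500, 100) == 5.0                  # Card=5
```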


 

I believe

Daniel, January 07, 2005 - 7:27 am UTC

Great. Thank you!

Just a clarification

A reader, January 19, 2005 - 3:35 pm UTC

Tom,

In your explanation above on interpreting the EXPLAIN PLAN using pseudocode, could we not use the following pseudocode for step 4 (while accessing the EMP table):

for salgrade in (select * from salgrade)
loop
    for emp in (select * from emp
                 where sal between salgrade.losal and salgrade.hisal)
    loop
        ...
    end loop;
end loop;

instead of the if statement inside the second loop (the reason being that the filter condition in step 4 is ideally executed while selecting from the table)? Is that correct?
If not, please advise.
Also, it would be helpful if you could provide pseudocode for the MERGE JOIN and HASH JOIN operations, as those operations occur commonly in EXPLAIN PLAN output and most books explain them in words rather than with pseudocode. I am from a technical background, and such pseudocode is much more helpful than a verbal explanation.

Thanks

Tom Kyte
January 20, 2005 - 10:04 am UTC

it is *pseudo code* --

a procedural interpretation of something non-procedural.

it is fine as I have it, for purposes of ILLUSTRATION of what is logically taking place.

I do exactly that last paragraph in my book "Effective Oracle by Design" -- for hash joins, sort merge joins, anti-joins, outer joins, etc.
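For what it's worth, here is a generic textbook sketch of a sort-merge join (not taken from the book; a simplified illustration only): sort both inputs on the join key, then walk them in lockstep, emitting every match.

```python
# Simplified sort-merge join: O(n log n) sorts followed by a linear merge.
# Handles duplicate keys on the right side; real implementations also batch
# duplicate keys on the left.

def merge_join(left, right, key):
    left, right = sorted(left, key=key), sorted(right, key=key)
    out, j = [], 0
    for l in left:
        while j < len(right) and key(right[j]) < key(l):
            j += 1                       # advance past smaller right keys
        k = j
        while k < len(right) and key(right[k]) == key(l):
            out.append((l, right[k]))    # emit every match for this key
            k += 1
    return out

pairs = merge_join([3, 1, 2], [2, 3, 3], key=lambda v: v)
assert pairs == [(2, 2), (3, 3), (3, 3)]
```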

I have a question on range and full scan

Raja, January 26, 2005 - 4:08 am UTC

for the same query, I saw the following query plans in production and in TEST:
In production:
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=3 Card=3 Bytes=714)
1 0 NESTED LOOPS (Cost=3 Card=3 Bytes=714)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'SERVICE_T' (Cost=2 Card=1 Bytes=78)
3 2 INDEX (UNIQUE SCAN) OF 'I_SERVICE__ID' (UNIQUE) (Cost=2 Card=1)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'AU_SERVICE_T' (Cost=2 Card=3 Bytes=480)
5 4 INDEX (RANGE SCAN) OF 'I_AU_SERVICE_PARENT__ID' (NON-UNIQUE) (Cost=2 Card=3)

=========================================
In TEST:

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=FIRST_ROWS (Cost=1990 Card=3 Bytes=948)
1 0 NESTED LOOPS (Cost=1990 Card=3 Bytes=948)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'SERVICE_T' (Cost=2 Card=1 Bytes=78)
3 2 INDEX (UNIQUE SCAN) OF 'I_SERVICE__ID' (UNIQUE) (Cost=2 Card=1)
4 1 TABLE ACCESS (BY INDEX ROWID) OF 'AU_SERVICE_T' (Cost=1989 Card=3 Bytes=480)
5 4 INDEX (FULL SCAN) OF 'AU_SERVICE_PARENT' (NON-UNIQUE) (Cost=19878 Card=3)


==============

In production it is range scan and in TEST it is FULL scan.

Everything else is the same in both environments, along with the optimizer mode, the set of indexes, etc.

Could you please comment on this...

Thanks a lot

from
Raja


Tom Kyte
January 26, 2005 - 8:51 am UTC

<quote>
Everything else is the same in both environments
</quote>

laughing out loud..... something is different.

that is about the extent of any comment one could make simply given two plans.


Now, you can use a 10053 optimizer trace to see what optimizer parameters might be different (search this site for that number or see chapter 6 of "Effective Oracle by Design" for details).

Look at the statistics, look at the indexes (make sure they all in fact exist), look at the datatypes (make sure they are the same) -- you'll find something is *different*. (Heck, unless you backed up production and restored that very database to test, I can promise you they are different: the bits are stored in different places, affecting things like clustering factors and such.)

Tuning SQL query

Arya, February 14, 2005 - 12:00 pm UTC

Hi Tom,

My Java developer has written the following query, which is erroring out with "unable to extend temp space". I'm not able to figure out what's wrong with it. Could you please shed some light on where the problem is?

Original query -- if I execute this as-is, it errors out with "Unable to extend temp space":

  1  select * from
  2              (select sd.LEASE_NBR,
  3                 sd.PROJECT_NBR,
  4                 b.RGN_CD ,
  5                 sd.DOC_TYPE,
  6                 sd.VERSION_NBR||'-'|| sd.SUB_VERSION_NBR DOC_NO,
  7                 CASE WHEN sd.PROCESSED_DT is null THEN 'No Processed' ELSE 'Processed' END Status,
  8                 sd.SCANNING_REQUEST_DT Request_Date,
  9                 TRUNC(sd.PROCESSED_DT, 'DDD')-TRUNC(sd.SCANNING_REQUEST_DT ,'DDD') Process_time,
 10                 sd.NUM_PAGES_ACTUAL||'/'||sd.NUM_PAGES_EXPECTED Actual_Expect,
 11                 elease_util.Initcap2(a.LONG_AGENCY_NAME) Agency_Name,
 12                 elease_util.Initcap2(sd.DOC_CATEGORY) doc_category,
 13                 sd.DOC_ID,
 14                 sd.PROCESSED_DT,
 15                 sd.UPDATE_DT,
 16                 elease_util.Initcap2(sd.DOC_NAME) doc_name,
 17                 sd.num_pages_expected,
 18                 sd.scanning_comments,
 19                 sd.VERSION_NBR,
 20                 sd.SUB_VERSION_NBR,
 21                 sd.agency_cd,
 22                 sd.oa_nbr
 23              from Scanned_docs sd, Buildings b, leases l, agencies a
 24              where l.BUILDING_ID (+)= b.BUILDING_ID and
 25                l.LEASE_NBR = sd.LEASE_NBR and
 26               a.AGENCY_CD (+)= sd.AGENCY_CD )
 27  where LEASE_NBR LIKE '%'
 28  and  UPPER(DOC_CATEGORY) LIKE '%'
 29  and RGN_CD LIKE '%'
 30  and UPDATE_DT >= TO_DATE('2/7/05 ','MM/DD/YYYY')
 31* and NVL(TO_CHAR(processed_dt),'%') like '%'

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=235 Card=1 Bytes=147)
   1    0   NESTED LOOPS (OUTER) (Cost=235 Card=1 Bytes=147)
   2    1     HASH JOIN (Cost=234 Card=1 Bytes=116)
   3    2       TABLE ACCESS (FULL) OF 'SCANNED_DOCS' (Cost=9 Card=5 Bytes=450)
   4    2       FILTER
   5    4         HASH JOIN (OUTER)
   6    5           TABLE ACCESS (FULL) OF 'BUILDINGS' (Cost=150 Card=1456 Bytes=14560)
   7    5           VIEW OF 'LEASES' (Cost=73 Card=13825 Bytes=221200)
   8    7             TABLE ACCESS (FULL) OF 'LEASES' (Cost=73 Card=13825 Bytes=221200)
   9    1     TABLE ACCESS (BY INDEX ROWID) OF 'AGENCIES' (Cost=1 Card=1 Bytes=31)
  10    9       INDEX (UNIQUE SCAN) OF 'AGENCY_PK' (UNIQUE)

With the last line commented out, the query runs in just seconds:

  1  select * from
  2              (select sd.LEASE_NBR,
  3                 sd.PROJECT_NBR,
  4                 b.RGN_CD ,
  5                 sd.DOC_TYPE,
  6                 sd.VERSION_NBR||'-'|| sd.SUB_VERSION_NBR DOC_NO,
  7                 CASE WHEN sd.PROCESSED_DT is null THEN 'No Processed' ELSE 'Processed' END Status,
  8                 sd.SCANNING_REQUEST_DT Request_Date,
  9                 TRUNC(sd.PROCESSED_DT, 'DDD')-TRUNC(sd.SCANNING_REQUEST_DT ,'DDD') Process_time,
 10                 sd.NUM_PAGES_ACTUAL||'/'||sd.NUM_PAGES_EXPECTED Actual_Expect,
 11                 elease_util.Initcap2(a.LONG_AGENCY_NAME) Agency_Name,
 12                 elease_util.Initcap2(sd.DOC_CATEGORY) doc_category,
 13                 sd.DOC_ID,
 14                 sd.PROCESSED_DT,
 15                 sd.UPDATE_DT,
 16                 elease_util.Initcap2(sd.DOC_NAME) doc_name,
 17                 sd.num_pages_expected,
 18                 sd.scanning_comments,
 19                 sd.VERSION_NBR,
 20                 sd.SUB_VERSION_NBR,
 21                 sd.agency_cd,
 22                 sd.oa_nbr
 23              from Scanned_docs sd, Buildings b, leases l, agencies a
 24              where l.BUILDING_ID (+)= b.BUILDING_ID and
 25                l.LEASE_NBR = sd.LEASE_NBR and
 26               a.AGENCY_CD (+)= sd.AGENCY_CD )
 27  where LEASE_NBR LIKE '%'
 28  and  UPPER(DOC_CATEGORY) LIKE '%'
 29  and RGN_CD LIKE '%'
 30  and UPDATE_DT >= TO_DATE('2/7/05 ','MM/DD/YYYY')
 31* --and NVL(TO_CHAR(processed_dt),'%') like '%'
SQL> /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=240 Card=21 Bytes=3087)
   1    0   HASH JOIN (OUTER) (Cost=240 Card=21 Bytes=3087)
   2    1     HASH JOIN (Cost=234 Card=21 Bytes=2436)
   3    2       TABLE ACCESS (FULL) OF 'SCANNED_DOCS' (Cost=9 Card=97 Bytes=8730)
   4    2       FILTER
   5    4         HASH JOIN (OUTER)
   6    5           TABLE ACCESS (FULL) OF 'BUILDINGS' (Cost=150 Card=1456 Bytes=14560)
   7    5           VIEW OF 'LEASES' (Cost=73 Card=13825 Bytes=221200)
   8    7             TABLE ACCESS (FULL) OF 'LEASES' (Cost=73 Card=13825 Bytes=221200)
   9    1     TABLE ACCESS (FULL) OF 'AGENCIES' (Cost=5 Card=983 Bytes=30473)


With the next-to-last line commented out, the query again executes within seconds:

  1  select * from
  2              (select sd.LEASE_NBR,
  3                 sd.PROJECT_NBR,
  4                 b.RGN_CD ,
  5                 sd.DOC_TYPE,
  6                 sd.VERSION_NBR||'-'|| sd.SUB_VERSION_NBR DOC_NO,
  7                 CASE WHEN sd.PROCESSED_DT is null THEN 'No Processed' ELSE 'Processed' END Status,
  8                 sd.SCANNING_REQUEST_DT Request_Date,
  9                 TRUNC(sd.PROCESSED_DT, 'DDD')-TRUNC(sd.SCANNING_REQUEST_DT ,'DDD') Process_time,
 10                 sd.NUM_PAGES_ACTUAL||'/'||sd.NUM_PAGES_EXPECTED Actual_Expect,
 11                 elease_util.Initcap2(a.LONG_AGENCY_NAME) Agency_Name,
 12                 elease_util.Initcap2(sd.DOC_CATEGORY) doc_category,
 13                 sd.DOC_ID,
 14                 sd.PROCESSED_DT,
 15                 sd.UPDATE_DT,
 16                 elease_util.Initcap2(sd.DOC_NAME) doc_name,
 17                 sd.num_pages_expected,
 18                 sd.scanning_comments,
 19                 sd.VERSION_NBR,
 20                 sd.SUB_VERSION_NBR,
 21                 sd.agency_cd,
 22                 sd.oa_nbr
 23              from Scanned_docs sd, Buildings b, leases l, agencies a
 24              where l.BUILDING_ID (+)= b.BUILDING_ID and
 25                l.LEASE_NBR = sd.LEASE_NBR and
 26               a.AGENCY_CD (+)= sd.AGENCY_CD )
 27  where LEASE_NBR LIKE '%'
 28  and  UPPER(DOC_CATEGORY) LIKE '%'
 29  and RGN_CD LIKE '%'
 30  --and UPDATE_DT >= TO_DATE('2/7/05 ','MM/DD/YYYY')
 31* and NVL(TO_CHAR(processed_dt),'%') like '%'
SQL> /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=235 Card=1 Bytes=147)
   1    0   NESTED LOOPS (OUTER) (Cost=235 Card=1 Bytes=147)
   2    1     HASH JOIN (Cost=234 Card=1 Bytes=116)
   3    2       TABLE ACCESS (FULL) OF 'SCANNED_DOCS' (Cost=9 Card=5 Bytes=450)
   4    2       FILTER
   5    4         HASH JOIN (OUTER)
   6    5           TABLE ACCESS (FULL) OF 'BUILDINGS' (Cost=150 Card=1456 Bytes=14560)
   7    5           VIEW OF 'LEASES' (Cost=73 Card=13825 Bytes=221200)
   8    7             TABLE ACCESS (FULL) OF 'LEASES' (Cost=73 Card=13825 Bytes=221200)
   9    1     TABLE ACCESS (BY INDEX ROWID) OF 'AGENCIES' (Cost=1 Card=1 Bytes=31)
  10    9       INDEX (UNIQUE SCAN) OF 'AGENCY_PK' (UNIQUE)

I'm not able to understand this behaviour. If I comment out the last line or the next-to-last line it runs without any problems, but if I run it as-is it errors out.

I appreciate all your help.

Thanks
arya 

Tom Kyte
February 14, 2005 - 2:42 pm UTC

perhaps the error is---

you actually do have insufficient temporary tablespace configured for your system to run the queries you want to? that is an option here.


are the Card= values in the autotrace anywhere near what you believe "reality would have to say on the topic"?

Tuning SQL query

Arya, February 14, 2005 - 1:31 pm UTC

Hi Tom,

In the following query, if I comment out any one condition from the WHERE clause, the query runs fine. Is there any limit on the number of WHERE clause conditions like '%'?

1 select * from
2 (select sd.LEASE_NBR,
3 sd.PROJECT_NBR,
4 b.RGN_CD ,
5 sd.DOC_TYPE,
6 sd.VERSION_NBR||'-'|| sd.SUB_VERSION_NBR DOC_NO,
7 CASE WHEN sd.PROCESSED_DT is null THEN 'No Processed' ELSE 'Processed' END Status,
8 sd.SCANNING_REQUEST_DT Request_Date,
9 TRUNC(sd.PROCESSED_DT, 'DDD')-TRUNC(sd.SCANNING_REQUEST_DT ,'DDD') Process_time,
10 sd.NUM_PAGES_ACTUAL||'/'||sd.NUM_PAGES_EXPECTED Actual_Expect,
11 elease_util.Initcap2(a.LONG_AGENCY_NAME) Agency_Name,
12 elease_util.Initcap2(sd.DOC_CATEGORY) doc_category,
13 sd.DOC_ID,
14 sd.PROCESSED_DT,
15 sd.UPDATE_DT,
16 elease_util.Initcap2(sd.DOC_NAME) doc_name,
17 sd.num_pages_expected,
18 sd.scanning_comments,
19 sd.VERSION_NBR,
20 sd.SUB_VERSION_NBR,
21 sd.agency_cd,
22 sd.oa_nbr
23 from Scanned_docs sd, Buildings b, leases l, agencies a
24 where l.BUILDING_ID (+)= b.BUILDING_ID and
25 l.LEASE_NBR = sd.LEASE_NBR and
26 a.AGENCY_CD (+)= sd.AGENCY_CD )
27 where LEASE_NBR LIKE '%'
28 and UPPER(DOC_CATEGORY) LIKE '%'
29 and RGN_CD LIKE '%'
30 and UPDATE_DT >= TO_DATE('2/7/05 ','MM/DD/YYYY')
31* and NVL(TO_CHAR(processed_dt),'%') like '%'

If I comment out upper(doc_category) like '%' and leave all other conditions, it works fine; similarly, commenting out rgn_cd like '%' with all other conditions uncommented, it again works fine. So I'm really puzzled about what's going on.


I request your help in this regard.

Thanks
arya

Redundant predicates in the where clause

Logan Palanisamy, February 14, 2005 - 3:52 pm UTC

Arya,

What is the use of predicates like:

LEASE_NBR LIKE '%'
UPPER(DOC_CATEGORY) LIKE '%'
RGN_CD LIKE '%'

in the where clause? Since they select all the rows, you might as well not have them.

like

select * from emp where emp_name like '%'

is the same as

select * from emp;



Tom Kyte
February 14, 2005 - 6:01 pm UTC

well, technically, the same as:

select * from emp where emp_name is not null;
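The reason these are equivalent is SQL's three-valued logic: a NULL column value makes LIKE '%' evaluate to UNKNOWN, so the row is filtered out just as with IS NOT NULL. A small Python sketch of that behavior, using None as a stand-in for NULL (the helper name is made up for illustration):

```python
# SQL's "col LIKE '%'" does not match NULL, so it behaves like
# "col IS NOT NULL" rather than like "no predicate at all".
# None stands in for NULL; like_percent is an illustrative helper.

def like_percent(value):
    # NULL compared with anything yields UNKNOWN, and a WHERE clause
    # treats UNKNOWN as "do not return the row".
    return value is not None  # every non-null string matches '%'

emp_names = ['KING', None, 'SMITH']

all_rows     = emp_names                                  # select * from emp
like_rows    = [n for n in emp_names if like_percent(n)]  # where ename like '%'
notnull_rows = [n for n in emp_names if n is not None]    # where ename is not null

assert like_rows == notnull_rows == ['KING', 'SMITH']
assert like_rows != all_rows  # the NULL row is silently dropped
```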




Tuning SQL

Arya, February 14, 2005 - 6:10 pm UTC

Palanisami,

This query runs in a Java application for a search screen, i.e. they have coded it with all of the LIKE '%' conditions. I don't know whether I have answered your question appropriately.

Thanks
Arya

Tom Kyte
February 14, 2005 - 6:50 pm UTC

(ask them "hey, if they give '%', how about leaving it off the query"

ask them also -- you guys, you ARE using bind variables, right?)



Two digit years

Gary, February 14, 2005 - 7:57 pm UTC

I love the

"and UPDATE_DT >= TO_DATE('2/7/05 ','MM/DD/YYYY')"

It's a strange requirement to exclude rows updated over 2000 years ago.

Tom Kyte
February 14, 2005 - 8:02 pm UTC

thank you for pointing that out -- "Houston, we have a problem" pops into my head suddenly.

laughing out loud -- that probably would be the entire table, unless of course, that was the format and string used upon insert -- in which case, they have a looming "year 100 problem" in about 95 years as they cycle back around.
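The trap is that a 'YYYY' mask takes the digits literally: '05' becomes the year 5 AD, so the predicate filters out essentially nothing. A sketch of the two behaviors using Python's datetime (not Oracle's TO_DATE, but the century logic is the same idea):

```python
from datetime import datetime

# Oracle's TO_DATE('2/7/05', 'MM/DD/YYYY') takes the digits literally:
# the result is February 7 of year 5 AD. Python's %Y insists on four
# digits, so we spell the year out to show what is actually compared.
literal = datetime.strptime('2/7/0005', '%m/%d/%Y')
assert literal.year == 5  # two millennia ago, as Gary noted

# UPDATE_DT >= year-5 is true for any plausible timestamp, so the
# predicate excludes essentially nothing.
assert datetime(2005, 2, 7) >= literal

# A two-digit mask ('YY' or 'RR' in Oracle, %y here) applies a century
# rule instead: '05' becomes 2005.
pivoted = datetime.strptime('2/7/05', '%m/%d/%y')
assert pivoted.year == 2005
```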

Size of string changes the explain plan

Bipul, February 15, 2005 - 8:28 am UTC

Hi Tom.

I have a query for which the explain plan changes based on the length of a string literal.

select topn_ord , arx_id , arx_bibl , arx_status , arx_type , man_date_pub as pub_date
from topn , arx , man
where topn_item_id=arx_id
and man_id=arx_man_id
and topn_type='TOP_ARTICLE_ACCESSES_ALL_TIME_JOU_10022'
and topn_ord<=20
order by topn_ord;

The explain plan is

Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 14 K 8591
SORT ORDER BY 14 K 11 M 8591
HASH JOIN 14 K 11 M 6878
TABLE ACCESS FULL WEBUSER.BMC_MANUSCRIPT 68 K 1 M 1163
HASH JOIN 14 K 10 M 5511
VIEW WEBUSER.BMC_TOPN_SUMMARY 14 K 356 K 1258
SORT ORDER BY 14 K 4 M 1258
VIEW 14 K 4 M 621
WINDOW BUFFER 14 K 932 K 621
TABLE ACCESS BY INDEX ROWID WEBUSER.BMC_TOPN_SUMMARY_ORIGINAL 14 K 932 K 621
INDEX RANGE SCAN WEBUSER.BMC_TOPN_TYPE_IDX 14 K 165
TABLE ACCESS FULL WEBUSER.BMC_ARTICLE_XML 43 K 32 M 3971


It does FTS on two large tables.

But the plan for the same SQL with topn_type='TOP_ARTICLE_ACCESSES_ALL_TIME_J', or any value up to 31 characters, is much improved.


Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=CHOOSE 207 433
NESTED LOOPS 207 164 K 433
NESTED LOOPS 204 159 K 229
VIEW WEBUSER.BMC_TOPN_SUMMARY 204 5 K 25
SORT ORDER BY 204 59 K 25
VIEW 204 59 K 12
WINDOW BUFFER 204 13 K 12
TABLE ACCESS BY INDEX ROWID WEBUSER.BMC_TOPN_SUMMARY_ORIGINAL 204 13 K 12
INDEX RANGE SCAN WEBUSER.BMC_TOPN_TYPE_IDX 204 5
TABLE ACCESS BY INDEX ROWID WEBUSER.BMC_ARTICLE_XML 1 773 1
INDEX UNIQUE SCAN WEBUSER.BMC_ARX_PK 1
TABLE ACCESS BY INDEX ROWID WEBUSER.BMC_MANUSCRIPT 1 17 1
INDEX UNIQUE SCAN WEBUSER.BMC_MAN_PK 1

The data in table topn [it's a view on a table] is fairly well distributed, and we have histograms calculated as well.

I can't figure out why the plan changes. Any suggestion would be very useful.

Thanks
bipul


Tom Kyte
February 15, 2005 - 3:32 pm UTC

could perhaps be that "where column = longstring" returns a totally different set/amount of rows, could it not?


histograms are computed only on the first 32 bytes of a string as well. so, it probably "knows" more about the 31 byte string (which is sort of irrelevant, it gets no data does it.....) and chooses a totally different plan.
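The 32-byte truncation is easy to demonstrate with a sketch (the topn_type values below are hypothetical, in the style of the question): every distinct long literal collapses to the same 32-byte prefix, so a histogram built on that prefix cannot tell them apart.

```python
# Histograms consider only the first 32 bytes of a string, so values
# that differ only after byte 32 collapse into one histogram value.
# Hypothetical topn_type literals in the style of the question:
values = [
    'TOP_ARTICLE_ACCESSES_ALL_TIME_JOU_10022',
    'TOP_ARTICLE_ACCESSES_ALL_TIME_JOU_10023',
    'TOP_ARTICLE_ACCESSES_ALL_TIME_JOU_99999',
]

truncated = {v[:32] for v in values}

assert len(set(values)) == 3   # really three distinct values...
assert len(truncated) == 1     # ...but one value as the histogram sees it
assert truncated == {'TOP_ARTICLE_ACCESSES_ALL_TIME_JO'}
```

A 31-character literal, by contrast, fits entirely inside the histogram's view, which is consistent with the better plan seen for the shorter strings.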

Data set returned is similar

bipul, February 16, 2005 - 9:17 am UTC

Hi Tom.

Thanks for your feedback. The two queries, with the smaller string and the longer string, return similar result sets.
For example,

select topn_ord , arx_id , arx_bibl , arx_status , arx_type , man_date_pub as
pub_date
from topn , arx , man
where topn_item_id=arx_id
and man_id=arx_man_id
and topn_type='TOP_ARTICLE_ACCESSES_ALL_TIME'
order by topn_ord;

returns 100 rows

and

select topn_ord , arx_id , arx_bibl , arx_status , arx_type , man_date_pub as
pub_date
from topn , arx , man
where topn_item_id=arx_id
and man_id=arx_man_id
and topn_type='TOP_ARTICLE_ACCESSES_ALL_TIME_JOU_10022'
order by topn_ord;

returns 48 rows. Not vastly different.

I think your point about "histograms are computed only on the first 32 bytes of a string as well" could be the reason why it chooses different plan when the length(topn_type) > 31.

Thanks again!
bipul



get explain plan from system table

Sean, February 16, 2005 - 5:17 pm UTC

I have a script to find all heavy-CPU-usage sessions by querying v$session and v$process with the process id I got from the prstat command on Solaris.
I am then able to pull the SQL statement by joining v$session and v$sqltext_with_newlines on the address column.

Does Oracle store the explain plan somewhere in a system table other than plan_table, so I can get it with my script?

As you know, I don't want to manually log in to each schema and analyze the SQL one by one.

Thanks so much for your help.



Tom Kyte
February 16, 2005 - 5:41 pm UTC

v$sql_plan is available in software written this century, but not last century (meaning, in 9i and above -- yes)

How many tables in a query is optimal

BK, March 07, 2005 - 10:06 am UTC

Hi Tom.

Is there any maximum limit on the number of tables used in a query? Will the optimizer be able to generate a good plan if the number of tables in the query is more than 6 or 7?

Thanks
BK

Tom Kyte
March 07, 2005 - 3:54 pm UTC

there are no maximums I've hit -- I've gone to 500+

It takes a while to parse of course!

More than 6 or 7 -- sure, hundreds -- bad idea. Somewhere between 1 and 100 maybe.
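For context on why parsing "takes a while": the number of possible join orders grows factorially with the table count, so the optimizer has to prune its search aggressively as queries get wide. A quick back-of-the-envelope illustration:

```python
import math

# n tables can be joined in n! different orders (before even
# considering access paths), which is why optimizing a many-table
# query is expensive and the optimizer limits how many permutations
# it will actually consider.
assert math.factorial(3) == 6            # 3 tables: trivial to search
assert math.factorial(7) == 5040         # 6-7 tables: still manageable
assert math.factorial(20) > 2 * 10**18   # 20 tables: astronomically many
```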

Tuning SQL query

Thomas Varekat, March 18, 2005 - 10:06 am UTC

The query below takes more than 20 seconds with the GROUP BY clause, but around 2 seconds without it.
I need help to fine-tune this query.

Please find the query and explain plan below:

SELECT PSL.PROCESS_STATUS psl_ps
, count(UPSA.UNIT_ID) upsa_ui
FROM
PROCESS_SUB_PROCESS_DEFN PSPD
,BUSINESS_PROCESS_LIST BPL
,PROCESS_STATUS_LIST PSL_PARENT
,PROCESS_STATUS_LIST PSL
,PROCESS_STATUS_DEFN PSD
,STR_BUS_REF SBR
,UNIT U
,STRUCTURE_MF_DATA SMD_ALL
,BUS_REF_LIST BRL
, UNIT_PROC_STATUS_ASSIGN UPSA
WHERE 1=1
AND BPL.PROCESS_ID = PSPD.MAJOR_PROCESS_ID
AND PSD.PROCESS_SUB_PROCESS_ID = PSPD.PROCESS_SUB_PROCESS_ID
AND BPL.PROCESS_NAME = 'RAPID TAG RENEWAL'
AND PSL.PROCESS_STATUS_ID = PSL_PARENT.PROCESS_STATUS_ID -- no parent relationship
AND PSD.PROCESS_STATUS_ID = PSL.PROCESS_STATUS_ID
AND UPSA.IS_CURRENT_SW = 'Y'
AND UPSA.PROCESS_SUB_PROCESS_ID = PSPD.PROCESS_SUB_PROCESS_ID
AND UPSA.PROCESS_STATUS_ID = PSL.PROCESS_STATUS_ID
AND UPSA.PROCESS_STATUS_ID = PSD.PROCESS_STATUS_ID
AND UPSA.PROCESS_SUB_PROCESS_ID = PSD.PROCESS_SUB_PROCESS_ID
AND SMD_ALL.STRUCTURE_ID = SBR.STRUCTURE_ID
AND BRL.BUS_REF_ID = SBR.BUS_REF_ID
AND UPSA.UNIT_ID = U.UNIT_ID
AND BRL.REFERENCE_ID =U.REFERENCE_ID
AND PSL_PARENT.PROCESS_STATUS in ('CURRENT' )
AND BPL.IS_MAJOR_PROCESS_SW = 'Y'
AND (PSPD.EFFECTIVE_START_DATE IS NULL OR PSPD.EFFECTIVE_START_DATE <SYSDATE)
AND (PSPD.EFFECTIVE_END_DATE IS NULL OR PSPD.EFFECTIVE_END_DATE > SYSDATE)
AND UPSA.EFFECTIVE_END_DATE IS NULL
AND EXISTS
( SELECT
'Y'
FROM STRUCTURE_MF_DATA SMD
,STRUCTURE_CONTACT_LIST SCL
-- ,INDIVIDUAL I
,COUNTRY_GE_CORP CGC
-- ,CONTACT_TYPE_LIST CTL
WHERE
SMD_ALL.MF_ACCESS_CODE_1 = '005029'
AND SMD.MF_ACCESS_CODE_1 = '005029'
AND SMD.GE_CORP_CODE = CGC.CORP_CODE
AND CGC.COUNTRY_CODE = 'US'
AND SCL.INDIVIDUAL_ID = 79
AND SCL.CONTACT_TYPE_REF = 'TG' --CTL.CONTACT_TYPE_REF
AND SMD.STRUCTURE_ID = SCL.STRUCTURE_ID
AND SMD_ALL.GE_CORP_CODE = SMD.GE_CORP_CODE
AND SMD_ALL.MF_ACCESS_CODE_1 = SMD.MF_ACCESS_CODE_1
AND (SMD.MF_ACCESS_CODE_2 = ' ' OR SMD_ALL.MF_ACCESS_CODE_2 = SMD.MF_ACCESS_CODE_2)
AND (SMD.MF_ACCESS_CODE_3 = ' ' OR SMD_ALL.MF_ACCESS_CODE_3 = SMD.MF_ACCESS_CODE_3)
AND (SMD.MF_ACCESS_CODE_4 = ' ' OR SMD_ALL.MF_ACCESS_CODE_4 = SMD.MF_ACCESS_CODE_4)
AND (SMD.MF_ACCESS_CODE_5 = ' ' OR SMD_ALL.MF_ACCESS_CODE_5 = SMD.MF_ACCESS_CODE_5)
AND (SMD.MF_ACCESS_CODE_6 = ' ' OR SMD_ALL.MF_ACCESS_CODE_6 = SMD.MF_ACCESS_CODE_6)
AND NOT EXISTS
(
SELECT
'CHILD STRUCTURE'
FROM
STRUCTURE_MF_DATA SMD1
,STRUCTURE_CONTACT_LIST SCL1
WHERE
SMD1.STRUCTURE_ID = SCL1.STRUCTURE_ID
AND SCL1.INDIVIDUAL_ID = SCL.INDIVIDUAL_ID
AND SCL1.CONTACT_TYPE_REF =SCL.CONTACT_TYPE_REF
AND SMD1.LEVEL_NUMBER < SMD.LEVEL_NUMBER
AND SUBSTR((NVL(SMD.MF_ACCESS_CODE_1,' ') || NVL(SMD.MF_ACCESS_CODE_2,' ') ||NVL(SMD.MF_ACCESS_CODE_3,' ') ||NVL(SMD.MF_ACCESS_CODE_4,' ') ||NVL(SMD.MF_ACCESS_CODE_5,' ') ||NVL(SMD.MF_ACCESS_CODE_6,' ')),1,(SMD1.LEVEL_NUMBER)*6)
= SUBSTR((NVL(SMD1.MF_ACCESS_CODE_1,' ') || NVL(SMD1.MF_ACCESS_CODE_2,' ') ||NVL(SMD1.MF_ACCESS_CODE_3,' ') ||NVL(SMD1.MF_ACCESS_CODE_4,' ') ||NVL(SMD1.MF_ACCESS_CODE_5,' ') ||NVL(SMD1.MF_ACCESS_CODE_6,' ')),1,(SMD1.LEVEL_NUMBER)*6)
AND SCL1.STR_CONTACT_EFF_DATE < SYSDATE
AND (SCL1.STR_CONTACT_END_DATE IS NULL OR SCL1.STR_CONTACT_END_DATE > SYSDATE)
)
AND SCL.STR_CONTACT_EFF_DATE < SYSDATE
AND (SCL.STR_CONTACT_END_DATE IS NULL OR SCL.STR_CONTACT_END_DATE > SYSDATE)
--AND CTL.CONTACT_TYPE_DESC = 'RAPIDTAG RENEWAL REPORT CONTACT'
)
group by PSL.PROCESS_STATUS


Explain plan with group by
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=296 Card=1 Bytes=181
)

1 0 SORT (GROUP BY) (Cost=296 Card=1 Bytes=181)
2 1 FILTER
3 2 NESTED LOOPS (Cost=285 Card=1 Bytes=181)
4 3 NESTED LOOPS (Cost=283 Card=1 Bytes=138)
5 4 NESTED LOOPS (Cost=274 Card=3 Bytes=384)
6 5 NESTED LOOPS (Cost=262 Card=3 Bytes=354)
7 6 HASH JOIN (Cost=256 Card=3 Bytes=324)
8 7 TABLE ACCESS (FULL) OF 'T4283_PROCESS_SUB_PR
OCESS_DEFN' (Cost=1 Card=19 Bytes=456)

9 7 NESTED LOOPS (Cost=254 Card=427 Bytes=35868)
10 9 NESTED LOOPS (Cost=4 Card=1 Bytes=66)
11 10 NESTED LOOPS (Cost=3 Card=1 Bytes=60)
12 11 NESTED LOOPS (Cost=2 Card=1 Bytes=38)
13 12 TABLE ACCESS (BY INDEX ROWID) OF 'T4
867_PROCESS_STATUS_LIST' (Cost=1 Card=1 Bytes=22)

14 13 INDEX (UNIQUE SCAN) OF 'I4867_IX1_
PROCESS_STATUS_LIST' (UNIQUE)

15 12 TABLE ACCESS (BY INDEX ROWID) OF 'T4
282_BUSINESS_PROCESS_LIST' (Cost=1 Card=1 Bytes=16)

16 15 INDEX (UNIQUE SCAN) OF 'I4282_IX1_
BUSINESS_PROCESS' (UNIQUE)

17 11 TABLE ACCESS (BY INDEX ROWID) OF 'T486
7_PROCESS_STATUS_LIST' (Cost=1 Card=39 Bytes=858)

18 17 INDEX (UNIQUE SCAN) OF 'I4867_PK_PRO
CESS_STATUS_LIST' (UNIQUE)

19 10 INDEX (FULL SCAN) OF 'I4868_PK_PROCESS_S
TATUS_DEFN' (UNIQUE) (Cost=1 Card=26 Bytes=156)

20 9 TABLE ACCESS (BY INDEX ROWID) OF 'T4869_UN
IT_PROC_STATUS_ASSIGN' (Cost=250 Card=355619 Bytes=6401142)

21 20 INDEX (RANGE SCAN) OF 'I4869_IX4_UNIT_PR
OC_STATUS_ASS' (UNIQUE) (Cost=73 Card=355619)

22 6 TABLE ACCESS (BY INDEX ROWID) OF 'T4201_UNIT'
(Cost=2 Card=1388953 Bytes=13889530)

23 22 INDEX (UNIQUE SCAN) OF 'I4201_PK_UNIT' (UNIQ
UE) (Cost=1 Card=1388953)

24 5 TABLE ACCESS (BY INDEX ROWID) OF 'T4059_BUS_REF_
LIST' (Cost=4 Card=3004512 Bytes=30045120)

25 24 INDEX (RANGE SCAN) OF 'I4059_IX5_BUS_REF_LIST'
(NON-UNIQUE) (Cost=3 Card=3004512)

26 4 INDEX (RANGE SCAN) OF 'I4110_PK_STR_BUS_REF' (UNIQ
UE) (Cost=3 Card=1388543 Bytes=13885430)

27 3 TABLE ACCESS (BY INDEX ROWID) OF 'T4118_STRUCTURE_MF
_DATA' (Cost=2 Card=71080 Bytes=3056440)

28 27 INDEX (UNIQUE SCAN) OF 'I4118_PK_STRUCTURE_MF_DATA
' (UNIQUE) (Cost=1 Card=71080)

29 2 FILTER
30 29 FILTER
31 30 NESTED LOOPS (Cost=8 Card=1 Bytes=73)
32 31 NESTED LOOPS (Cost=6 Card=1 Bytes=28)
33 32 INDEX (UNIQUE SCAN) OF 'I4317_PK_COUNTRY_GE_CO
RP' (UNIQUE)

34 32 TABLE ACCESS (BY INDEX ROWID) OF 'T4113_STRUCT
URE_CONTACT_LIST' (Cost=5 Card=1 Bytes=24)

35 34 INDEX (RANGE SCAN) OF 'I4113_IX8_STRUCTURE_C
ONTACT_LI' (NON-UNIQUE) (Cost=2 Card=1)

36 31 TABLE ACCESS (BY INDEX ROWID) OF 'T4118_STRUCTUR
E_MF_DATA' (Cost=2 Card=1 Bytes=45)

37 36 INDEX (UNIQUE SCAN) OF 'I4118_PK_STRUCTURE_MF_
DATA' (UNIQUE) (Cost=1 Card=1)

38 29 NESTED LOOPS (Cost=8 Card=1 Bytes=67)
39 38 TABLE ACCESS (BY INDEX ROWID) OF 'T4113_STRUCTURE_
CONTACT_LIST' (Cost=6 Card=1 Bytes=24)

40 39 INDEX (RANGE SCAN) OF 'I4113_IX8_STRUCTURE_CONTA
CT_LI' (NON-UNIQUE) (Cost=3 Card=1)

41 38 TABLE ACCESS (BY INDEX ROWID) OF 'T4118_STRUCTURE_
MF_DATA' (Cost=2 Card=711 Bytes=30573)

42 41 INDEX (UNIQUE SCAN) OF 'I4118_PK_STRUCTURE_MF_DA
TA' (UNIQUE) (Cost=1 Card=711)



Tom Kyte
March 18, 2005 - 10:44 am UTC

are you running this in toad?

Tuning SQL statements with GROUP BY

Thomas Varekat, March 21, 2005 - 1:45 am UTC

Yes, I am running the query in Toad.


Tuning Sql statements

Thomas Varekat, March 21, 2005 - 1:53 am UTC

Yes. I am running the SQL query in Toad, but the explain plan was from SQL*Plus.

Tom Kyte
March 21, 2005 - 10:25 am UTC

fine - toad fetches the first N records and stops. Toad faked you out.

toad didn't run the query to completion.

use sqlplus and you'll see.
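The "fake-out" comes down to first rows versus all rows: a streaming plan can hand back a screenful of rows cheaply, while running the query to completion (which a GROUP BY forces) must exhaust the row source. A rough Python sketch of that effect (the work counter is purely illustrative):

```python
# A streaming row source: cheap to get the first few rows from,
# expensive to exhaust. This mimics why a GUI that fetches the
# first N rows and stops can make a slow query look instant.

work_done = 0

def row_source(total):
    global work_done
    for i in range(total):
        work_done += 1          # stand-in for one unit of I/O per row
        yield i

# "Toad": fetch the first 50 rows and stop.
rows = row_source(1_000_000)
first_page = [next(rows) for _ in range(50)]
assert work_done == 50          # barely any work performed

# Full fetch (what a GROUP BY forces): every row must be produced.
total = sum(1 for _ in row_source(1_000_000))
assert total == 1_000_000
assert work_done == 50 + 1_000_000
```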

Explain plan question

veeresh, March 22, 2005 - 2:19 am UTC

Hi Tom,
I have two different queries:

Query 1. Select * from dept
Query 2. Select * from emp where nvl(deptno,0) = Nvl(:Deptno,0)

Query 2 will be executed based upon the number of records fetched by Query 1; that is, for each record of Query 1, Query 2 will execute.

How can I get an explain plan for both queries in a single statement?

Thanks in Advance



Tom Kyte
March 22, 2005 - 11:09 am UTC

you cannot, they are two statements, explain plan is a statement thing.

nvl(deptno,0) - ugh.
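The "ugh" is because wrapping the indexed column in a function means a plain index on deptno cannot be used for the predicate; the optimizer would need a function-based index on nvl(deptno,0). A toy Python sketch of the difference, with a dict standing in for a B*Tree index (data and names are made up):

```python
# A dict stands in for an index on deptno: equality probes are cheap.
rows = [{'ename': 'KING',  'deptno': 10},
        {'ename': 'SMITH', 'deptno': None},
        {'ename': 'JONES', 'deptno': 20}]

index_on_deptno = {}
for r in rows:
    index_on_deptno.setdefault(r['deptno'], []).append(r)

# "where deptno = 20": one probe of the index.
hit = index_on_deptno.get(20, [])

# "where nvl(deptno, 0) = 0": the stored key is None, not 0, so the
# index as built is useless for this predicate; every row must be
# examined (a full scan), unless an index is created on the
# expression nvl(deptno, 0) itself.
scan = [r for r in rows
        if (r['deptno'] if r['deptno'] is not None else 0) == 0]

assert [r['ename'] for r in hit] == ['JONES']
assert [r['ename'] for r in scan] == ['SMITH']
```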

Tuning Sql statements

Thomas Varekat, March 23, 2005 - 3:29 am UTC

Even in SQL*Plus the GROUP BY is taking a lot of time. If I remove the GROUP BY clause, the result is retrieved within 2-3 seconds.

Tom Kyte
March 23, 2005 - 8:47 am UTC

tkprof it.

show us something "real" -- something we can see.

tkprof will show us the work performed.

analyze

safrin, March 23, 2005 - 9:20 am UTC

In the above answer you have said,

"If you have not analyzed the tables recently -- with
their current set of data, then the plans generated by the optimizer can be
quite bad indeed."

So it means that in production, tables should be analyzed every 3 months or so, so that the optimizer will prepare new plans based on the current data (if CHOOSE, i.e. the CBO).
Am I right?





Tom Kyte
March 23, 2005 - 9:27 am UTC

3 months? I'm fond of 2.432^1.23 hours

(kidding).

it is a function of how frequently the data changes - you might never need to analyze some tables, you might frequently need others to be analyzed, and you might use dbms_stats.set_table_stats for yet others to give the optimizer the information it needs.

Unique situation

Utpal, March 24, 2005 - 3:20 am UTC

Tom,

I have a unique situation. I have a query which, when fired independently, uses the index and works properly. But the same query, as a cursor inside a procedure, does a full table scan. I even forced the query to use the index with a hint; even then it does not. The table contains 1,500,000 records, and the table and its index have been analyzed with COMPUTE STATISTICS. There is only one table in the query -- no joins! Very strange. Can you please let me know what could be the reason?

Tom Kyte
March 24, 2005 - 8:47 am UTC

test case?

tkprof results?

anything?

CBO vs RBO

Nitesh, March 31, 2005 - 6:41 am UTC

We have a situation where the SQL cannot be changed, as it is generated by an OLTP package (Siebel). We notice that when the SQL below runs under the RBO (using a hint), the results return in 2 seconds. However, when it is in CHOOSE (i.e. CBO) mode, it takes about 40 minutes!

The DBA tells me that all table stats are up to date. I have copied the SQL, the explain plan for CHOOSE, and the explain plan for RULE below -- any tips on improving performance would be appreciated.

SELECT
T31.CONFLICT_ID,
T31.LAST_UPD,
T31.CREATED,....
FROM SIEBEL.S_CONTACT_FNX T1,
SIEBEL.S_ACT_EMP T2,
SIEBEL.S_CONTACT T3,
SIEBEL.S_PROD_LN T4,
SIEBEL.S_CONTACT T5,
SIEBEL.S_EVT_MKTG T6,
SIEBEL.S_PROD_LN T7,
SIEBEL.S_FN_APPR T8,
SIEBEL.S_CONTACT T9,
SIEBEL.S_REVN T10,
SIEBEL.S_PARTY T11,
SIEBEL.S_PROD_INT T12,
SIEBEL.S_PROJ T13,
SIEBEL.S_PROD_DEFECT T14,
SIEBEL.S_PARTY_RPT_REL T15,
SIEBEL.S_PROJITEM T16,
SIEBEL.S_EVT_ACT_SS T17,
SIEBEL.S_POSTN T18,
SIEBEL.S_PARTY T19,
SIEBEL.S_CONTACT T20,
SIEBEL.S_EVT_ACT_FNX T21,
SIEBEL.S_USER T22,
SIEBEL.S_CONTACT T23,
SIEBEL.S_ASSET T24,
SIEBEL.S_PROD_INT T25,
SIEBEL.S_EVT_CAL T26,
SIEBEL.S_ORG_EXT T27,
SIEBEL.S_OPTY T28,
SIEBEL.S_POSTN T29,
SIEBEL.S_CONTACT T30,
SIEBEL.S_EVT_ACT T31
WHERE
T21.AMS_ACT_ID = T8.ROW_ID (+) AND
T31.OWNER_PER_ID = T9.PAR_ROW_ID (+) AND
T31.ASSET_ID = T24.ROW_ID (+) AND
T31.OPTY_ID = T28.ROW_ID (+) AND
T31.TARGET_OU_ID = T27.PAR_ROW_ID (+) AND
T31.SRA_DEFECT_ID = T14.ROW_ID (+) AND
T31.PROJ_ID = T13.ROW_ID (+) AND
T31.PROJ_ITEM_ID = T16.ROW_ID (+) AND
T31.CREATED_BY = T30.PAR_ROW_ID (+) AND
T9.PR_HELD_POSTN_ID = T18.PAR_ROW_ID (+) AND
T31.X_PROD_INT_ID = T12.ROW_ID (+) AND
T31.X_PROD_LN_ID = T7.ROW_ID (+) AND
T31.LAST_UPD_BY = T23.PAR_ROW_ID (+) AND
T31.ROW_ID = T21.PAR_ROW_ID (+) AND
T31.ROW_ID = T17.PAR_ROW_ID (+) AND
T31.ROW_ID = T26.PAR_ROW_ID (+) AND
T31.ROW_ID = T6.PAR_ROW_ID (+) AND
T31.OWNER_PER_ID = T2.EMP_ID AND T31.ROW_ID = T2.ACTIVITY_ID AND
T2.EMP_ID = T11.ROW_ID AND
T2.EMP_ID = T20.PAR_ROW_ID (+) AND
T2.EMP_ID = T22.PAR_ROW_ID (+) AND
T20.PR_HELD_POSTN_ID = T29.PAR_ROW_ID (+) AND
T31.TARGET_PER_ID = T19.ROW_ID (+) AND
T31.TARGET_PER_ID = T5.PAR_ROW_ID (+) AND
T31.TARGET_PER_ID = T1.PAR_ROW_ID (+) AND
T28.PR_OPTYPRD_ID = T10.ROW_ID (+) AND
T10.PROD_ID = T25.ROW_ID (+) AND
T10.PROD_LN_ID = T4.ROW_ID (+) AND
T2.EMP_ID = T3.ROW_ID AND
T3.PR_HELD_POSTN_ID = T15.SUB_PARTY_ID AND
((T31.TODO_CD != 'Time Tracking' AND T31.TODO_CD != 'Property Information' AND (T31.X_POSTN_TYPE != 'Mobile CRM' OR T31.X_POSTN_TYPE IS NULL)) AND
(T15.PARTY_ID = '0-5220') AND
(T31.APPT_REPT_REPL_CD IS NULL) AND
((T2.ACT_TEMPLATE_FLG != 'Y' AND T2.ACT_TEMPLATE_FLG != 'P' OR T2.ACT_TEMPLATE_FLG IS NULL) AND (T31.OPTY_ID IS NULL OR T28.SECURE_FLG != 'Y' OR T28.SECURE_FLG IS NULL OR T31.OPTY_ID IN (
SELECT SQ1_T1.OPTY_ID
FROM SIEBEL.S_OPTY_POSTN SQ1_T1, SIEBEL.S_CONTACT SQ1_T2, SIEBEL.S_PARTY SQ1_T3, SIEBEL.S_POSTN SQ1_T4

WHERE ( SQ1_T4.PR_EMP_ID = SQ1_T2.PAR_ROW_ID AND SQ1_T3.ROW_ID = SQ1_T4.PAR_ROW_ID AND SQ1_T1.POSITION_ID = SQ1_T3.ROW_ID)
AND (SQ1_T2.ROW_ID = '0-1')))) AND
(T31.PRIV_FLG = 'N' OR T31.PRIV_FLG IS NULL OR T31.OWNER_PER_ID = '0-1')) AND
((T2.ACT_EVT_STAT_CD = 'Not Started' OR T2.ACT_EVT_STAT_CD = 'In Progress') AND T2.ACT_TODO_PLNEND_DT <= TO_DATE('03/29/2005 23:59:59','MM/DD/YYYY HH24:MI:SS'))
ORDER BY
T31.CREATED DESC
_____________
Operation Object Name Rows Bytes Cost

SELECT STATEMENT Optimizer Mode=CHOOSE 1 2085
SORT ORDER BY 1 1 K 2085
FILTER
NESTED LOOPS OUTER 1 1 K 2083
NESTED LOOPS OUTER 1 1 K 2082
NESTED LOOPS OUTER 1 1 K 2081
NESTED LOOPS OUTER 1 1 K 2080
NESTED LOOPS OUTER 1 1 K 2079
NESTED LOOPS OUTER 1 1 K 2078
NESTED LOOPS OUTER 1 1 K 2077
NESTED LOOPS 1 1 K 2076
NESTED LOOPS 15 20 K 2075
NESTED LOOPS OUTER 15 20 K 2074
NESTED LOOPS 15 20 K 2073
NESTED LOOPS OUTER 15 19 K 2072
NESTED LOOPS OUTER 15 19 K 2071
NESTED LOOPS OUTER 15 18 K 2070
NESTED LOOPS OUTER 15 18 K 2069
NESTED LOOPS OUTER 15 17 K 2068
NESTED LOOPS OUTER 15 17 K 2067
NESTED LOOPS OUTER 15 15 K 2066
NESTED LOOPS OUTER 15 15 K 2065
NESTED LOOPS OUTER 15 14 K 2064
NESTED LOOPS OUTER 15 14 K 2063
NESTED LOOPS OUTER 15 13 K 2062
NESTED LOOPS OUTER 15 13 K 2061
NESTED LOOPS OUTER 15 12 K 2060
NESTED LOOPS OUTER 15 11 K 2059
NESTED LOOPS OUTER 15 9 K 2058
NESTED LOOPS OUTER 15 8 K 2057
NESTED LOOPS OUTER 15 7 K 2056
NESTED LOOPS 15 6 K 2055
NESTED LOOPS OUTER 460 26 K 2046
TABLE ACCESS FULL S_ACT_EMP 460 17 K 2041
TABLE ACCESS BY INDEX ROWID S_USER 1 18 2
INDEX UNIQUE SCAN S_USER_U2 1
TABLE ACCESS BY INDEX ROWID S_EVT_ACT 1 411 2
INDEX UNIQUE SCAN S_EVT_ACT_P1 1 1
TABLE ACCESS BY INDEX ROWID S_PROJITEM 1 45 2
INDEX UNIQUE SCAN S_PROJITEM_P1 1
TABLE ACCESS BY INDEX ROWID S_PROD_DEFECT 1 43 2
INDEX UNIQUE SCAN S_PROD_DEFECT_P1 1
TABLE ACCESS BY INDEX ROWID S_PROJ 1 107 2
INDEX UNIQUE SCAN S_PROJ_P1 1
TABLE ACCESS BY INDEX ROWID S_EVT_ACT_SS 1 137 2
INDEX RANGE SCAN S_EVT_ACT_SS_U1 1 1
TABLE ACCESS BY INDEX ROWID S_PROD_LN 1 27 2
INDEX UNIQUE SCAN S_PROD_LN_P1 1
TABLE ACCESS BY INDEX ROWID S_EVT_CAL 1 60 2
INDEX RANGE SCAN S_EVT_CAL_U1 1 1
TABLE ACCESS BY INDEX ROWID S_EVT_MKTG 1 56 2
INDEX RANGE SCAN S_EVT_MKTG_U1 1 1
TABLE ACCESS BY INDEX ROWID S_ORG_EXT 1 24 2
INDEX UNIQUE SCAN S_ORG_EXT_U3 1
TABLE ACCESS BY INDEX ROWID S_PROD_INT 1 16 2
INDEX UNIQUE SCAN S_PROD_INT_P1 1
TABLE ACCESS BY INDEX ROWID S_EVT_ACT_FNX 1 68 2
INDEX RANGE SCAN S_EVT_ACT_FNX_U1 1 2
TABLE ACCESS BY INDEX ROWID S_FN_APPR 1 12 2
INDEX UNIQUE SCAN S_FN_APPR_P1 1
TABLE ACCESS BY INDEX ROWID S_OPTY 1 118 2
INDEX UNIQUE SCAN S_OPTY_P1 1 1
TABLE ACCESS BY INDEX ROWID S_REVN 1 21 2
INDEX UNIQUE SCAN S_REVN_P1 1 1
TABLE ACCESS BY INDEX ROWID S_PROD_LN 1 27 2
INDEX UNIQUE SCAN S_PROD_LN_P1 1
TABLE ACCESS BY INDEX ROWID S_PROD_INT 1 16 2
INDEX UNIQUE SCAN S_PROD_INT_P1 1
TABLE ACCESS BY INDEX ROWID S_CONTACT_FNX 1 100 2
INDEX RANGE SCAN S_CONTACT_FNX_I04 1 2
INDEX UNIQUE SCAN S_PARTY_P1 1 10 1
INDEX UNIQUE SCAN S_PARTY_P1 1 10 1
TABLE ACCESS BY INDEX ROWID S_ASSET 1 28 2
INDEX UNIQUE SCAN S_ASSET_P1 1 1
TABLE ACCESS BY INDEX ROWID S_CONTACT 1 24 2
INDEX UNIQUE SCAN S_CONTACT_P1 1 1
INDEX RANGE SCAN S_PARTY_RPT_REL_U1 1 17 1
TABLE ACCESS BY INDEX ROWID S_CONTACT 1 29 2
INDEX UNIQUE SCAN S_CONTACT_U2 1 1
TABLE ACCESS BY INDEX ROWID S_CONTACT 1 29 2
INDEX UNIQUE SCAN S_CONTACT_U2 1 1
TABLE ACCESS BY INDEX ROWID S_CONTACT 1 44 2
INDEX UNIQUE SCAN S_CONTACT_U2 1 1
TABLE ACCESS BY INDEX ROWID S_POSTN 1 17 2
INDEX UNIQUE SCAN S_POSTN_U2 1
TABLE ACCESS BY INDEX ROWID S_CONTACT 1 44 2
INDEX UNIQUE SCAN S_CONTACT_U2 1 1
TABLE ACCESS BY INDEX ROWID S_POSTN 1 17 2
INDEX UNIQUE SCAN S_POSTN_U2 1
TABLE ACCESS BY INDEX ROWID S_CONTACT 1 54 2
INDEX UNIQUE SCAN S_CONTACT_U2 1 1
NESTED LOOPS 1 68 5
NESTED LOOPS 2 116 4
NESTED LOOPS 1 37 3
TABLE ACCESS BY INDEX ROWID S_CONTACT 1 18 2
INDEX UNIQUE SCAN S_CONTACT_P1 3 M 2
INDEX RANGE SCAN S_OPTY_POSTN_U1 1 19 2
TABLE ACCESS BY INDEX ROWID S_POSTN 2 42 2
INDEX RANGE SCAN S_POSTN_V1 2 1
INDEX UNIQUE SCAN S_PARTY_P1 1 10 1
________________
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=HINT: RULE
SORT ORDER BY
FILTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS OUTER
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID S_PARTY_RPT_REL
INDEX RANGE SCAN S_PARTY_RPTREL_F50
TABLE ACCESS BY INDEX ROWID S_CONTACT
INDEX RANGE SCAN S_CONTACT_V5
TABLE ACCESS BY INDEX ROWID S_ACT_EMP
INDEX RANGE SCAN S_ACT_EMP_M4
TABLE ACCESS BY INDEX ROWID S_EVT_ACT
INDEX UNIQUE SCAN S_EVT_ACT_P1
TABLE ACCESS BY INDEX ROWID S_CONTACT
INDEX UNIQUE SCAN S_CONTACT_U2
TABLE ACCESS BY INDEX ROWID S_OPTY
INDEX UNIQUE SCAN S_OPTY_P1
TABLE ACCESS BY INDEX ROWID S_ORG_EXT
INDEX UNIQUE SCAN S_ORG_EXT_U3
TABLE ACCESS BY INDEX ROWID S_ASSET
INDEX UNIQUE SCAN S_ASSET_P1
TABLE ACCESS BY INDEX ROWID S_CONTACT
INDEX UNIQUE SCAN S_CONTACT_U2
TABLE ACCESS BY INDEX ROWID S_USER
INDEX UNIQUE SCAN S_USER_U2
TABLE ACCESS BY INDEX ROWID S_CONTACT
INDEX UNIQUE SCAN S_CONTACT_U2
TABLE ACCESS BY INDEX ROWID S_POSTN
INDEX UNIQUE SCAN S_POSTN_U2
INDEX UNIQUE SCAN S_PARTY_P1
TABLE ACCESS BY INDEX ROWID S_PROJITEM
INDEX UNIQUE SCAN S_PROJITEM_P1
TABLE ACCESS BY INDEX ROWID S_PROD_DEFECT
INDEX UNIQUE SCAN S_PROD_DEFECT_P1
TABLE ACCESS BY INDEX ROWID S_PROJ
INDEX UNIQUE SCAN S_PROJ_P1
TABLE ACCESS BY INDEX ROWID S_PROD_INT
INDEX UNIQUE SCAN S_PROD_INT_P1
INDEX UNIQUE SCAN S_PARTY_P1
TABLE ACCESS BY INDEX ROWID S_REVN
INDEX UNIQUE SCAN S_REVN_P1
TABLE ACCESS BY INDEX ROWID S_PROD_INT
INDEX UNIQUE SCAN S_PROD_INT_P1
TABLE ACCESS BY INDEX ROWID S_CONTACT
INDEX UNIQUE SCAN S_CONTACT_U2
TABLE ACCESS BY INDEX ROWID S_POSTN
INDEX UNIQUE SCAN S_POSTN_U2
TABLE ACCESS BY INDEX ROWID S_PROD_LN
INDEX UNIQUE SCAN S_PROD_LN_P1
TABLE ACCESS BY INDEX ROWID S_CONTACT
INDEX UNIQUE SCAN S_CONTACT_U2
TABLE ACCESS BY INDEX ROWID S_PROD_LN
INDEX UNIQUE SCAN S_PROD_LN_P1
TABLE ACCESS BY INDEX ROWID S_EVT_CAL
INDEX RANGE SCAN S_EVT_CAL_U1
TABLE ACCESS BY INDEX ROWID S_EVT_ACT_FNX
INDEX RANGE SCAN S_EVT_ACT_FNX_U1
TABLE ACCESS BY INDEX ROWID S_FN_APPR
INDEX UNIQUE SCAN S_FN_APPR_P1
TABLE ACCESS BY INDEX ROWID S_EVT_ACT_SS
INDEX RANGE SCAN S_EVT_ACT_SS_U1
TABLE ACCESS BY INDEX ROWID S_EVT_MKTG
INDEX RANGE SCAN S_EVT_MKTG_U1
TABLE ACCESS BY INDEX ROWID S_CONTACT_FNX
INDEX RANGE SCAN S_CONTACT_FNX_I05
NESTED LOOPS
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID S_CONTACT
INDEX UNIQUE SCAN S_CONTACT_P1
TABLE ACCESS BY INDEX ROWID S_POSTN
INDEX RANGE SCAN S_POSTN_V1
INDEX UNIQUE SCAN S_PARTY_P1
INDEX RANGE SCAN S_OPTY_POSTN_U1

Tom Kyte
March 31, 2005 - 8:09 am UTC

sorry, I cannot take every multi-page query in a followup/review and say "why" -- I do these sections really fast.

My approach that you can follow (not asking you to do this and post the number) would be:

a) obtain an autotrace traceonly explain output of the query, noting the CARD= values

b) obtain a tkprof of same, the Row Source plan in there will have the actuals

c) see where they diverge significantly -- see where the CBO thought (autotrace) that it would get say 1,000 rows but the tkprof shows it got 100,000 rows.

d) ask "why would the cbo have thought that", generally indicates missing/invalid/stale stats.
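Steps (a) through (d) can be sketched in SQL*Plus as follows; the table, predicate, and trace file names here are placeholders, not from the thread:

```sql
-- (a) estimated cardinalities (Card=) without executing the query
set autotrace traceonly explain
select * from some_table where some_col = 42;   -- placeholder query

-- (b) actual row counts: trace a real execution, then tkprof the trace file
set autotrace off
alter session set sql_trace = true;
select * from some_table where some_col = 42;
alter session set sql_trace = false;
-- then at the OS prompt (trace file name is an example):
--   tkprof ora_12345.trc report.txt sys=no

-- (c)/(d) compare the Card= estimates from (a) with the Rows counts in the
-- tkprof "Row Source Operation" section; a large divergence usually points
-- at missing, invalid, or stale statistics
```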

full table scan because of subquery

Sean Li, April 10, 2005 - 2:09 pm UTC

Hi Tom,

Why does this query do a full table scan on table t2 when there is a subquery?  How can I avoid the full table scan in this case?  Of course, the four tables have different data in PROD.  We have to use the RBO.  Oracle 9.2.0.4

Thanks so much for your help.
----------------------------------------------------
create table t1 as select * from sys.dba_objects;
create index t1_object_id_idx on t1(object_id);

create table t2 as select * from sys.dba_objects;
create index t2_object_id_idx on t2(object_id);

create table t3 as select * from sys.dba_objects;
create index t3_object_id_idx on t3(object_id);

create table t4 as select * from sys.dba_objects;
create index t4_object_id_idx on t4(object_id);

create or replace view v1(object_id) as 
    select t1.object_id
    from t1, t2
    where t1.object_id=t2.object_id
    union
    select t3.object_id
    from t3, t2
    where t3.object_id=t2.object_id;


-- It uses indexes if there is no subquery.
SQL>explain plan for select * from v1
 where object_id=1;

SQL> @d:/oracle/ora92/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

--------------------------------------------------------------------------
| Id  | Operation            |  Name             | Rows  | Bytes | Cost  |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                   |       |       |       |
|   1 |  VIEW                | V1                |       |       |       |
|   2 |   SORT UNIQUE        |                   |       |       |       |
|   3 |    UNION-ALL         |                   |       |       |       |
|   4 |     NESTED LOOPS     |                   |       |       |       |
|*  5 |      INDEX RANGE SCAN| T1_OBJECT_ID_IDX  |       |       |       |
|*  6 |      INDEX RANGE SCAN| T2_OBJECT_ID_IDX  |       |       |       |
|   7 |     NESTED LOOPS     |                   |       |       |       |
|*  8 |      INDEX RANGE SCAN| T3_OBJECT_ID_IDX  |       |       |       |
|*  9 |      INDEX RANGE SCAN| T2_OBJECT_ID_IDX  |       |       |       |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("T1"."OBJECT_ID"=1)
   6 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
   8 - access("T3"."OBJECT_ID"=1)
   9 - access("T3"."OBJECT_ID"="T2"."OBJECT_ID")

Note: rule based optimization

25 rows selected.

SQL> truncate table plan_table;

Table truncated.

-- full table scan on t2 when there is subquery.
SQL> explain plan for select * from v1
  2  where object_id in (select object_id from t4
  3                      where object_id=1);

Explained.

SQL> @d:/oracle/ora92/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------

-----------------------------------------------------------------------------
| Id  | Operation               |  Name             | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT        |                   |       |       |       |
|   1 |  MERGE JOIN             |                   |       |       |       |
|   2 |   SORT JOIN             |                   |       |       |       |
|   3 |    VIEW                 | V1                |       |       |       |
|   4 |     SORT UNIQUE         |                   |       |       |       |
|   5 |      UNION-ALL          |                   |       |       |       |
|   6 |       NESTED LOOPS      |                   |       |       |       |
|   7 |        TABLE ACCESS FULL| T2                |       |       |       |
|*  8 |        INDEX RANGE SCAN | T1_OBJECT_ID_IDX  |       |       |       |
|   9 |       NESTED LOOPS      |                   |       |       |       |
|  10 |        TABLE ACCESS FULL| T2                |       |       |       |
|* 11 |        INDEX RANGE SCAN | T3_OBJECT_ID_IDX  |       |       |       |
|* 12 |   SORT JOIN             |                   |       |       |       |
|  13 |    VIEW                 | VW_NSO_1          |       |       |       |
|  14 |     SORT UNIQUE         |                   |       |       |       |
|* 15 |      INDEX RANGE SCAN   | T4_OBJECT_ID_IDX  |       |       |       |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   8 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
  11 - access("T3"."OBJECT_ID"="T2"."OBJECT_ID")
  12 - access("V1"."OBJECT_ID"="VW_NSO_1"."$nso_col_1")
       filter("V1"."OBJECT_ID"="VW_NSO_1"."$nso_col_1")
  15 - access("T4"."OBJECT_ID"=1)

Note: rule based optimization

32 rows selected.

SQL>  

my query processing 31 records at a time only

prasad, April 29, 2005 - 4:57 pm UTC

In the package,one Query - each time processing 30 - 32 records only why ??

any performance issue ??


Tom Kyte
April 29, 2005 - 6:45 pm UTC

sorry, does not make sense, don't know what you are trying to say

Explain Plan On Update --Locks the table.

Neeraj Nagpal, May 09, 2005 - 12:36 pm UTC

Tom,

Is there a reason why explain plan on an UPDATE statement
should lock the row in the table being explained?


CREATE TABLE TWO ( ONE NUMBER, TWO CHAR(1 BYTE) )
/
Insert into TWO  values (1,'2')
/
SQL> commit;

Commit complete.

SQL> 
SQL> 
SQL> 
SQL> set autot trace expl
SQL> update two set one = 2
SQL> /

1 row updated.

Execution Plan
----------------------------------------------------------
0 null UPDATE STATEMENT Optimizer=CHOOSE                            null
1    0   UPDATE OF 'TWO'                                            null
2    1     TABLE ACCESS (FULL) OF 'TWO'                             null



Now Checking the locks 
------------------
    select distinct session_id, a.os_user_name as "Os user", b.object_name "Object Name",
    decode(a.locked_mode,
           2, 'ROW SHARE MODE',
           3, 'ROW EXCLUSIVE MODE',
           4, 'SHARE MODE',
           5, 'SHARE ROW EXCLUSIVE MODE',
           6, 'EXCLUSIVE',
           'UNKNOWN') as "Lock Mode"
    from v$locked_object a, all_objects b
    where a.object_id = b.object_id
SQL> 
SQL> 
SQL> /

SESSION_ID Os user                        Object Name                    Lock Mode
---------- ------------------------------ ------------------------------ ---------------------------
         8 nnagpal                        PLAN_TABLE                     ROW EXCLUSIVE MODE
         8 nnagpal                        TWO  <---- (Locked Row)        ROW EXCLUSIVE MODE




Thanks Always For your help,
Neeraj 
 

Tom Kyte
May 09, 2005 - 2:35 pm UTC

SQL> update two set one = 2
SQL> /

1 row updated.


because autotrace traceonly explain with a MODIFICATION runs the statement.

a select is special because sqlplus knows that "even if I don't bother running this, they will get what they want without it running"

autotrace traceonly explain *does not prevent the statement from running*

it just doesn't run the ones it knows "it doesn't need to" 
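If the goal is only to see the plan of an UPDATE without executing it (and without taking the locks shown above), explain plan is the safer tool: it parses the statement but never runs it. A sketch against the TWO table from the example:

```sql
explain plan for update two set one = 2;
-- no row in TWO is changed or locked by the statement above
@?/rdbms/admin/utlxpls.sql
```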

Thanks Very Much

Neeraj Nagpal, May 09, 2005 - 8:01 pm UTC

TOM, Thanks so much for your help!


Explain Plan Interpretation

Dennis, May 10, 2005 - 1:12 am UTC

Hi Tom,

Most of the queries in one of our databases are having execution plan similar to the one below.

Plan Table
----------------------------------------------------------------------------------------------------
| OPERATION | NAME | ROWS | BYTES| COST |
----------------------------------------------------------------------------------------------------
| SELECT STATEMENT | | 1 | 112 | 301 |
| TABLE ACCESS BY INDEX ROWID |AMC | 1 | 112 | 301 |
| INDEX RANGE SCAN |AMC_PK | 418 | | 8 |
--------------------------------------------------------------------------------

What I interpret is that the index was scanned for 418 rows to fetch 1 row from the table.
What I would like to know is:
1. What causes the index to be so unselective when it is a primary key? The application concerned has lots of deletes and inserts on this table -- does this have anything to do with it?
2. What does the Rows column signify? Is it the actual rows that will be returned after execution, or the number of rows scanned to fetch 1 row?

thanks,
Dennis


Tom Kyte
May 10, 2005 - 8:06 am UTC

that means it THINKS it will get 418 rows from the index, to find a single row in the table. It is the "plan"

tkprof of a sql_trace will show you what actually happens.


1) you don't give me anything to work with, how can I tell????? no creates, no stats gathering method..... you'll need to give me something to go on.

2) it is the number of rows flowing out of that step and into the next step.



dennis, May 16, 2005 - 4:22 am UTC

Thanks Tom

autotrace plan is different compared with tkprof

A reader, May 23, 2005 - 9:45 am UTC

Hi

We have one query running in development and production; one is very fast and the other slow. The two environments are very similar, with almost the same amount of data.
When we compared execution plans using autotrace they both shared the same plan, but when we sql traced the sessions and ran tkprof to get the execution plan we see they are different.

Under what circumstances can autotrace behave this way?

Tom Kyte
May 23, 2005 - 2:24 pm UTC

autotrace does not do bind variable peeking, real life will (in 9i and above).

autotrace will always assume ALL binds are strings, no matter what you define them as. You would have to use to_number(:bv) and to_date(:bv) to avoid implicit conversions


Best if you can put the literal values in (the ones the query was parsed with the first time) when autotracing...
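A sketch of that workaround (the table, column, and bind names are illustrative): since autotrace types every bind as a string, wrap the bind in an explicit conversion so the explained plan is comparable to what the application, which binds a NUMBER, would get:

```sql
variable v_id varchar2(10)
exec :v_id := '42'

set autotrace traceonly explain
-- explicit to_number() avoids the implicit conversion on the bind
select * from some_table where numeric_col = to_number(:v_id);
```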

re-write SQL

Laurie Murray, May 24, 2005 - 9:39 am UTC

I need to tune this SQL. I have worked a lot with indexing, hints, and archiving off old data. None of my usual tricks are working. It looks like the SQL itself may need to be rewritten. Can you give me a couple of suggestions for rewriting it? Thank you.
SELECT d.descr
, e.first_name || ' ' || e.last_name
, c.descr
, b.descr
, SUM(a.tl_quantity )
FROM ps_lp_curr_proj_vw b
, ps_lp_tl_task_bu c
, ps_dept_tbl d
, ps_personal_data e
, ps_tl_rptd_elptime a
WHERE b.taskgroup (+) = a.taskgroup
AND b.project_id (+) = a.project_id
AND c.taskgroup = a.taskgroup
AND c.task = a.task
AND c.effdt = (
SELECT MAX(c1.effdt)
FROM ps_lp_tl_task_bu c1
WHERE c1.taskgroup = c.taskgroup
AND c1.task = c.task
AND c1.effdt <= a.dur)
AND d.deptid = a.deptid
AND d.effdt = (
SELECT MAX(d1.effdt)
FROM ps_dept_tbl d1
WHERE d1.deptid = d.deptid
AND d1.effdt <= a.dur)
AND e.emplid = a.emplid
AND a.taskgroup = 'EUFIN'
AND a.dur >= '01/01/2005'
AND a.dur <= '07/05/2005'
GROUP BY d.descr , e.first_name || ' ' || e.last_name , c.descr , b.descr
ORDER BY d.descr , e.first_name || ' ' || e.last_name , c.descr , b.descr
/
Execution Plan
----------------------------------------------------------
 0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1536 Card=1 Bytes=164)
 1    0   SORT (GROUP BY) (Cost=1536 Card=1 Bytes=164)
 2    1     FILTER
 3    2       NESTED LOOPS (Cost=1533 Card=1 Bytes=164)
 4    3         NESTED LOOPS (OUTER) (Cost=1532 Card=1 Bytes=143)
 5    4           NESTED LOOPS (Cost=1479 Card=1 Bytes=106)
 6    5             HASH JOIN (Cost=1473 Card=3 Bytes=207)
 7    6               INDEX (FAST FULL SCAN) OF 'PS0LP_TL_TASK_BU' (NON-UNIQUE) (Cost=2 Card=19 Bytes=684)
 8    6               INDEX (FAST FULL SCAN) OF 'LMLMLMMAY232005' (NON-UNIQUE) (Cost=1470 Card=4533 Bytes=149589)
 9    5             INDEX (FAST FULL SCAN) OF 'LMLMLMLM' (NON-UNIQUE) (Cost=2 Card=1394 Bytes=51578)
10    4           VIEW OF 'PS_LP_CURR_PROJ_VW' (Cost=53 Card=81 Bytes=2997)
11   10             FILTER
12   11               INDEX (FAST FULL SCAN) OF 'PS0LP_TL_PROJECT' (UNIQUE) (Cost=53 Card=81 Bytes=3564)
13   11               SORT (AGGREGATE)
14   13                 INDEX (RANGE SCAN) OF 'FELIX' (NON-UNIQUE) (Cost=2 Card=1 Bytes=22)
15    3         TABLE ACCESS (BY INDEX ROWID) OF 'PS_PERSONAL_DATA' (Cost=1 Card=12492 Bytes=262332)
16   15           INDEX (UNIQUE SCAN) OF 'PS_PERSONAL_DATA' (UNIQUE)
17    2       SORT (AGGREGATE)
18   17         FIRST ROW (Cost=2 Card=1 Bytes=19)
19   18           INDEX (RANGE SCAN (MIN/MAX)) OF 'PS_LP_TL_TASK_BU' (UNIQUE) (Cost=2 Card=1)
20    2       SORT (AGGREGATE)
21   20         INDEX (RANGE SCAN) OF 'LMLMLMLM' (NON-UNIQUE) (Cost=2 Card=1 Bytes=14)



Statistics
----------------------------------------------------------
0 recursive calls
24458 db block gets
298062340 consistent gets
28 physical reads
0 redo size
12222 bytes sent via SQL*Net to client
1540 bytes received via SQL*Net from client
12 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
153 rows processed

ps_lp_curr_proj_vw b is a simple view
ps_lp_tl_task_bu c is a table with 1500 rows
ps_dept_tbl d is a table with 1394 rows
ps_personal_data e is a table with 12,492 rows
ps_tl_rptd_elptime a is a table with 954,250 rows

Statistics are current on all tables.

ps_lp_curr_proj_vw view definition:
SELECT a.taskgroup
, a.project_id
, a.effdt
, a.eff_status
, a.DESCR
, a.descrshort
, a.user_field_3
, a.user_field_1
, a.user_field_2
, a.product
, a.lp_project_cat
, a.comments
FROM ps_lp_tl_project a
WHERE a.effdt = (
SELECT MAX(a1.effdt)
FROM ps_lp_tl_project a1
WHERE a1.taskgroup = a.taskgroup
AND a1.project_id = a.project_id
AND a1.effdt <= SYSDATE
AND a1.eff_status = 'A')

ps_lp_tl_project is a table with 44,419 rows.

LMLMLMMAY232005 is an index on ps_tl_rptd_elptime(DUR,
PROJECT_ID, TASK, EMPLID, DEPTID, TASKGROUP, TL_QUANTITY)

LMLMLMLM is an index on ps_dept_tbl( EFFDT,DEPTID,DESCR)

FELIX is an index on ps_lp_tl_project ( PROJECT_ID,
EFFDT, TASKGROUP, EFF_STATUS, DESCR)

Tom Kyte
May 24, 2005 - 12:58 pm UTC

what are the primary keys -- I mean, can we defer some of the joins until after the group by and perhaps use scalar subqueries?


eg: can we just sum by taskgroup, taks, a.dur -- then pick up the strings?

Here are the PK's.

Laurie Murray, May 24, 2005 - 3:49 pm UTC

Hi, Tom: I'm looking into the re-write based on your comments. In the meantime, here's the answer to your question.
thanks,
Laurie

Primary Keys:
PS_PERSONAL_DATA EMPLID 1

PS_DEPT_TBL SETID 1
PS_DEPT_TBL DEPTID 2
PS_DEPT_TBL EFFDT 3

PS_LP_TL_TASK_BU TASKGROUP 1
PS_LP_TL_TASK_BU TASK 2
PS_LP_TL_TASK_BU EFFDT 3

PS_TL_RPTD_ELPTIME EMPLID 1
PS_TL_RPTD_ELPTIME EMPL_RCD 2
PS_TL_RPTD_ELPTIME DUR 3
PS_TL_RPTD_ELPTIME SEQNUM 4

PS_LP_TL_PROJECT TASKGROUP 1
PS_LP_TL_PROJECT PROJECT_ID 2
PS_LP_TL_PROJECT EFFDT 3



Tom Kyte
May 24, 2005 - 4:11 pm UTC

so it looks like getting the strings from the other tables by taskgroup/task/ MAX(effdt) could be put off, so you could probably aggregate by taskgroup/task/dur and then join.
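A hedged sketch of that rewrite (untested; the select list is abbreviated to one lookup): aggregate the 954,250-row ps_tl_rptd_elptime table first in an inline view, then pick up the description strings afterwards with scalar subqueries. A final GROUP BY over the strings would still be needed to collapse rows the way the original query does:

```sql
select (select d.descr from ps_dept_tbl d
         where d.deptid = x.deptid
           and d.effdt = (select max(d1.effdt) from ps_dept_tbl d1
                           where d1.deptid = x.deptid and d1.effdt <= x.dur)) descr,
       x.qty
  from (select a.deptid, a.task, a.taskgroup, a.project_id, a.emplid, a.dur,
               sum(a.tl_quantity) qty
          from ps_tl_rptd_elptime a
         where a.taskgroup = 'EUFIN'
           and a.dur between '01/01/2005' and '07/05/2005'
         group by a.deptid, a.task, a.taskgroup, a.project_id, a.emplid, a.dur) x
```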

explain plan for and TKPROF

PINGU, July 28, 2005 - 7:22 am UTC

I have this query

SELECT :b1, :b2, DECODE (:b3, 'C', (-:b4), 'D', :b4, :b6),
DECODE (:b3, 'C', (-:b8), 'D', :b8, :b10), :b11,
DECODE (:b3, 'R', :b8, NULL), :b14, :b15, :b16, :b17,
DECODE (:b3, 'R', :b4, NULL),
DECODE (:b3, 'C', (-:b21), 'D', :b21, :b23),
DECODE (:b3, 'C', (-:b25), 'D', :b25, :b27),
DECODE (:b3, 'R', :b25, NULL), :b30, :b31, :b32, :b33, :b34, :b35,
:b36, :b37, DECODE (:b3, 'C', NULL, cdl.dr_code_combination_id),
DECODE (:b3, 'C', cdl.cr_code_combination_id, NULL),
DECODE (:b40, 'N', 'H', 'Z'), :b41, :b42, SYSDATE, :b43, :b44, :b45,
:b46, SYSDATE, 'N', DECODE (:b3, 'R', DECODE (:b48, 'N', 'I', :b3),
:b3), :b51, DECODE (:b3, 'R', :b21, NULL),
:b54, :b55, :b56, 'N', DECODE (:b3, 'R', 'N', NULL), :b58:b59, :b60,
:b61, DECODE (:b3, 'R', DECODE (:b48, 'N', :b64, NULL), NULL), :b65,
:b66, :b67, :b68, :b69, :b70, :b71, :b72
FROM pa_expenditure_items itm,
pa_expenditure_items itm1,
pa_cost_distribution_lines cdl
WHERE itm.expenditure_item_id = :b2
AND itm1.expenditure_item_id = itm.adjusted_expenditure_item_id
AND DECODE(itm1.cost_distributed_flag, 'S',
DECODE(itm1.cost_dist_rejection_code, NULL, 'Y', 'N'), 'N', 'N', 'Y') = 'Y'
AND cdl.expenditure_item_id = itm.adjusted_expenditure_item_id
AND cdl.reversed_flag IS NULL
AND cdl.line_num_reversed IS NULL
AND cdl.line_type = :b3
AND cdl.line_num = (SELECT MAX(cdl1.line_num)
FROM pa_cost_distribution_lines cdl1
WHERE cdl1.expenditure_item_id = cdl.expenditure_item_id
AND cdl1.line_type = cdl.line_type
AND cdl1.reversed_flag IS NULL
AND cdl1.line_num_reversed IS NULL )
AND :b65:b76 IS NOT NULL
AND :b66:b78 IS NOT NULL
AND :b67:b80 IS NOT NULL
AND :b68:b82 IS NOT NULL
AND :b69:b84 IS NOT NULL
AND :b70:b86 IS NOT NULL
AND :b71:b88 IS NOT NULL
AND :b72:b90 IS NOT NULL
AND :b91 IS NULL
AND itm.adjusted_expenditure_item_id IS NOT NULL
AND NOT EXISTS
(
SELECT NULL
FROM pa_cost_distribution_lines cdl3
WHERE cdl3.expenditure_item_id = :b2
AND DECODE (cdl3.line_type, 'C', cdl3.cr_code_combination_id, cdl3.dr_code_combination_id) = :b93
AND cdl3.billable_flag = :b42
AND cdl3.acct_raw_cost = DECODE (cdl3.line_type, 'R', :b6, 'D', :b4, 'C', (-:b4))
AND NVL (cdl3.ind_compiled_set_id, (-99)) = NVL (:b51, (-99))
AND DECODE (cdl3.line_type, 'R', NVL (cdl3.acct_burdened_cost, 0), 1) = DECODE (:b3, 'R', NVL (:b4, 0), 1 )
AND cdl3.line_type = DECODE (:b3, 'R', DECODE (:b48, 'N', 'I', :b3), :b3) -- causes the CONCAT
AND cdl3.denom_raw_cost = DECODE (cdl3.line_type, 'R', :b10, 'D', :b8, 'C', (-:b8))
AND DECODE (cdl3.line_type, 'R', NVL (cdl3.denom_burdened_cost, 0), 1) = DECODE (:b3, 'R', NVL (:b8, 0), 1)
AND cdl3.amount = DECODE(cdl3.line_type, 'R', :b23, 'D', :b21, 'C', (-:b21) )
AND DECODE (cdl3.line_type, 'R', NVL (cdl3.burdened_cost, 0), 1) = DECODE (:b3,'R', NVL (:b21, 0), 1)
AND cdl3.project_raw_cost = DECODE (cdl3.line_type, 'R', :b27, 'D', :b25, 'C', (-:b25))
AND DECODE (cdl3.line_type, 'R', NVL (cdl3.project_burdened_cost, 0), 1) = DECODE (:b3, 'R', NVL (:b25, 0), 1 )
AND cdl3.line_num_reversed IS NULL
AND cdl3.reversed_flag IS NULL
)


When traced with sql_trace, tkprof shows a full scan on pa_cost_distribution_lines is performed; however, if I do explain plan for the query, it shows an index range scan instead of a full scan.

You mentioned that this only happens with autotrace -- how come it happens with explain plan as well? You said we should use to_number(:binds); however, I see this in the plan_table:

21 - access("PA_COST_DISTRIBUTION_LINES_ALL"."EXPENDITURE_ITEM_ID"=TO_NUMBER(:Z))

So explain plan actually does the implicit to_number for us -- why does it show a different plan than TKPROF?

Tom Kyte
July 28, 2005 - 9:13 am UTC

search this site for

"bind variable peeking"

Does CBO get it horribly wrong?

Marco Coletti, July 29, 2005 - 3:18 pm UTC

I submitted a TAR to Metalink on the following problem, which is under review. It seems confirmed that CBO guesses wrong in this situation.

Still, I can't believe that nobody has noticed such horrible behaviour.

I observed in 9.2.0.6 that sometimes the INDEX RANGE SCAN or INDEX UNIQUE SCAN execution plan step has no cost (zero cost) leading the CBO to choose NESTED LOOP over HASH JOIN.

Example:

create table A as select * from ALL_OBJECTS;
create table B as select * from ALL_OBJECTS;
alter table B add constraint PKB primary key (OBJECT_ID);
execute dbms_stats.gather_table_stats(OwnName => user, TabName => 'A', Cascade => true);
execute dbms_stats.gather_table_stats(OwnName => user, TabName => 'B', Cascade => true);

explain plan for select count(1) from A, B where A.OBJECT_ID = B.OBJECT_ID;

select * from plan;

--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 15 |
| 1 | SORT AGGREGATE | | 1 | 10 | |
| 2 | NESTED LOOPS | | 23835 | 232K| 15 |
| 3 | TABLE ACCESS FULL | A | 23835 | 116K| 15 |
|* 4 | INDEX UNIQUE SCAN | PKB | 1 | 5 | |
--------------------------------------------------------------------

You can see there is no cost associated to step id 4.
The CBO then thinks that doing an index unique scan 23835 times costs nothing.
Step 2 should cost about 24000, that is:
15 + 23835*c
where c is the cost of an index unique scan (which is at least 1).

When hinted for hash join, the CBO estimates a cost of 22:

explain plan for select /*+ use_hash(A B) */ count(1) from A, B where A.OBJECT_ID = B.OBJECT_ID;

----------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 22 |
| 1 | SORT AGGREGATE | | 1 | 10 | |
|* 2 | HASH JOIN | | 23835 | 232K| 22 |
| 3 | TABLE ACCESS FULL | A | 23835 | 116K| 15 |
| 4 | INDEX FAST FULL SCAN| PKB | 23836 | 116K| 3 |
----------------------------------------------------------------------

which is correct.
Of course 15 is better than 22, then the CBO goes incorrectly for nested loop.

But hash join is better, as demonstrated here:

set autotrace on

select /*+ use_hash(A B) */ count(1) from A, B where A.OBJECT_ID = B.OBJECT_ID;

----------------------------
0 recursive calls
0 db block gets
385 consistent gets
0 physical reads
...


select count(1) from A, B where A.OBJECT_ID = B.OBJECT_ID;

----------------------------
0 recursive calls
0 db block gets
24168 consistent gets
0 physical reads
...




The CBO makes a better guess when the primary key constraint on B is dropped and replaced with a simple index:


alter table B drop constraint PKB;

create index IXB on B (OBJECT_ID);

execute dbms_stats.gather_table_stats(OwnName => user, TabName => 'B', Cascade => true);

explain plan for select count(1) from A, B where A.OBJECT_ID = B.OBJECT_ID;

----------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 23 |
| 1 | SORT AGGREGATE | | 1 | 10 | |
|* 2 | HASH JOIN | | 23835 | 232K| 23 |
| 3 | TABLE ACCESS FULL | A | 23835 | 116K| 15 |
| 4 | INDEX FAST FULL SCAN| IXB | 23836 | 116K| 4 |
----------------------------------------------------------------------

explain plan for select /*+ use_nl(A B) */ count(1) from A, B where A.OBJECT_ID = B.OBJECT_ID;

--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 10 | 23850 |
| 1 | SORT AGGREGATE | | 1 | 10 | |
| 2 | NESTED LOOPS | | 23835 | 232K| 23850 |
| 3 | TABLE ACCESS FULL | A | 23835 | 116K| 15 |
|* 4 | INDEX RANGE SCAN | IXB | 1 | 5 | 1 |
--------------------------------------------------------------------



As a final note:

> show parameters optimizer

NAME TYPE VALUE
------------------------------------ ----------- ------
optimizer_dynamic_sampling integer 1
optimizer_features_enable string 9.2.0
optimizer_index_caching integer 0
optimizer_index_cost_adj integer 100
optimizer_max_permutations integer 2000
optimizer_mode string CHOOSE

> select * from V$VERSION;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
PL/SQL Release 9.2.0.6.0 - Production
CORE 9.2.0.6.0 Production
TNS for Solaris: Version 9.2.0.6.0 - Production
NLSRTL Version 9.2.0.6.0 - Production


Tom Kyte
July 29, 2005 - 5:44 pm UTC

all I can say is 10g got it right from the get go.

Could be that it is not very widespread as the query, if made to touch the table, does the right thing. Not having to hit B in this case makes for an 'unusual' query.

Alberto Dell'Era, July 30, 2005 - 5:44 am UTC

Another very simple workaround to Marco's problem is to create the PK as DEFERRABLE:

alter table B add constraint PKB primary key (OBJECT_ID) deferrable;

----------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 68 |
| 1 | SORT AGGREGATE | | 1 | 10 | |
|* 2 | HASH JOIN | | 35940 | 350K| 68 |
| 3 | TABLE ACCESS FULL | A | 35940 | 175K| 51 |
| 4 | INDEX FAST FULL SCAN| PKB | 35941 | 175K| 9 |
----------------------------------------------------------------------

CBO still wrong

Marco Coletti, August 02, 2005 - 5:30 pm UTC

Tom:

The Oracle Metalink analyst told me that he observed the same wrong behaviour (i.e. zero cost on the last step) in 9.2.0.6 and 10.0.1.4 with the following query:
select count(*) from emp,dept where emp.deptno=dept.deptno
the only relevant difference being that explain plan shows a cost of "0" for 10.0.1.4 against an empty cost for 9.2.0.6.


I don't agree that the query is "unusual" since we have coded many statements like the following:

delete (
select 0
from A, B
where A.ID = B.ID
);

Here the goal is to delete all "old" rows from A, where "old" rows are by definition those listed in B, and of course there is a primary key B(ID).
It must be done online, that is concurrently with a process that does insert/update on A.


Touching table B makes the CBO account for the cost of "TABLE ACCESS BY INDEX ROWID", but the cost of "INDEX UNIQUE SCAN" is still zero:

explain plan for select /*+ use_nl(A B) */ B.OBJECT_TYPE from A, B where A.OBJECT_ID = B.OBJECT_ID;

----------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
----------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23837 | 442K| 23852 |
| 1 | NESTED LOOPS | | 23837 | 442K| 23852 |
| 2 | TABLE ACCESS FULL | A | 23837 | 116K| 15 |
| 3 | TABLE ACCESS BY INDEX ROWID| B | 1 | 14 | 1 |
|* 4 | INDEX UNIQUE SCAN | PKB | 1 | | |
----------------------------------------------------------------------------

The correct total cost for above should be about
15 + 23837*(1+1) = 47689


----

Alberto Dell'Era suggested to create constraint PKB as deferrable, and indeed the CBO gets it right -->

alter table B drop constraint PKB;
alter table B add constraint PKB primary key (OBJECT_ID) deferrable;
execute dbms_stats.gather_table_stats(OwnName => user, TabName => 'B', Cascade => true);

explain plan for select /*+ use_nl(A B) */ 1 from A, B where A.OBJECT_ID = B.OBJECT_ID;

--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23837 | 232K| 23852 |
| 1 | NESTED LOOPS | | 23837 | 232K| 23852 |
| 2 | TABLE ACCESS FULL | A | 23837 | 116K| 15 |
|* 3 | INDEX RANGE SCAN | PKB | 1 | 5 | 1 |
--------------------------------------------------------------------

The "deferrable" state makes the CBO access the index by RANGE instead of UNIQUE because:

select UNIQUENESS from USER_INDEXES where INDEX_NAME = 'PKB';

UNIQUENES
---------
NONUNIQUE


At this point, it would seem that the problem is: cost for "INDEX UNIQUE SCAN" calculated as zero.
Not entirely correct, as we can see -->

Dropping the deferrable constraint leaves behind a nonunique index that is reused when the constraint is created again in non deferrable state.

alter table B drop constraint PKB;
alter table B add constraint PKB primary key (OBJECT_ID);

select UNIQUENESS from USER_INDEXES where INDEX_NAME = 'PKB';

UNIQUENES
---------
NONUNIQUE

execute dbms_stats.gather_table_stats(OwnName => user, TabName => 'B', Cascade => true);

The nonunique index must be accessed by RANGE, but the CBO still assumes zero for the cost:

explain plan for select /*+ use_nl(A B) */ 1 from A, B where A.OBJECT_ID = B.OBJECT_ID;

--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23837 | 232K| 15 |
| 1 | NESTED LOOPS | | 23837 | 232K| 15 |
| 2 | TABLE ACCESS FULL | A | 23837 | 116K| 15 |
|* 3 | INDEX RANGE SCAN | PKB | 1 | 5 | |
--------------------------------------------------------------------


What about no constraint and unique index? -->

alter table B drop constraint PKB;
drop index PKB;
create unique index UXB on B (OBJECT_ID);
execute dbms_stats.gather_table_stats(OwnName => user, TabName => 'B', Cascade => true);
explain plan for select /*+ use_nl(A B) */ 1 from A, B where A.OBJECT_ID = B.OBJECT_ID;

--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23837 | 232K| 15 |
| 1 | NESTED LOOPS | | 23837 | 232K| 15 |
| 2 | TABLE ACCESS FULL | A | 23837 | 116K| 15 |
|* 3 | INDEX UNIQUE SCAN | UXB | 1 | 5 | |
--------------------------------------------------------------------
WRONG!


What about a unique constraint and nonunique index? -->

drop index UXB;
create index IXB on B (OBJECT_ID);
alter table B add constraint UB unique (OBJECT_ID);
execute dbms_stats.gather_table_stats(OwnName => user, TabName => 'B', Cascade => true);
explain plan for select /*+ use_nl(A B) */ 1 from A, B where A.OBJECT_ID = B.OBJECT_ID;


--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23837 | 232K| 15 |
| 1 | NESTED LOOPS | | 23837 | 232K| 15 |
| 2 | TABLE ACCESS FULL | A | 23837 | 116K| 15 |
|* 3 | INDEX RANGE SCAN | IXB | 1 | 5 | |
--------------------------------------------------------------------
WRONG!


What if the unique constraint is deferrable? -->

alter table B drop constraint UB;
alter table B add constraint UB unique (OBJECT_ID) deferrable;
explain plan for select /*+ use_nl(A B) */ 1 from A, B where A.OBJECT_ID = B.OBJECT_ID;

--------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
--------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23837 | 232K| 23852 |
| 1 | NESTED LOOPS | | 23837 | 232K| 23852 |
| 2 | TABLE ACCESS FULL | A | 23837 | 116K| 15 |
|* 3 | INDEX RANGE SCAN | IXB | 1 | 5 | 1 |
--------------------------------------------------------------------
RIGHT!


The problem seems related to "non-deferrable uniqueness".
To sum up:
the CBO assumes zero cost for accessing an index (by UNIQUE SCAN or RANGE SCAN) when uniqueness on the same column is enforced (by a non-deferrable primary key/unique constraint, or by a unique index).


Help required

Anil Pant, August 24, 2005 - 8:38 am UTC

First of all, sorry, as I could not format the explain plan output properly.
Help me find out why the CBBILL_SUMMARY table is not using an index.

I'm referring to this line: TABLE ACCESS FULL CB3DBA CBBILL_SUMMARY 385068 15184955 106294685
in the explain plan.

I've verified the indexes, and all the statistics are up to date.


UPDATE CBMEMBER A
SET (MEM_NEXT_BILL_DATE, MEM_LAST_BILL_DATE,
MEM_INITIALBILL_DATE) =
(SELECT TRUNC(BRMBLRL_RUN_DATE),
ADD_MONTHS(TRUNC(BRMBLRL_RUN_DATE), -1),
ADD_MONTHS(TRUNC(BRMBLRL_RUN_DATE), -1)
FROM CBBRAND_MBILL_RULE X
WHERE A.BR_ID = X.BR_ID),
MEM_MODIFIED_DATE = SYSDATE,
MEM_MODIFIER_ID = 99904
WHERE MEMSTA_ID = 1 AND
MEM_ID IN
(
SELECT MEM_ID
FROM CBMEMBER MEM,
CBBRAND_MBILL_RULE MBR
WHERE MBR.BILLING_CYCLE = 2 AND
MBR.MULTIJOB IS NOT NULL AND
--MEM.MEMSTA_ID = 1 AND
MEM.BR_ID = MBR.BR_ID AND
EXISTS
(
SELECT 1
FROM CBBILL_SUMMARY BLS
WHERE BLS.MEM_ID = MEM.MEM_ID AND
BLTYP_ID = 3 AND BLS.BLSSTA_ID != 4
)
)


UPDATE STATEMENT, GOAL = CHOOSE 411255 835212 33408480
UPDATE CB3DBA CBMEMBER
HASH JOIN 411255 835212 33408480
VIEW SYS VW_NSO_1 391489 835212 10857756
SORT UNIQUE 391489 835212 20045088
HASH JOIN 387411 835212 20045088
NESTED LOOPS 1639 274653 4669101
TABLE ACCESS FULL CB3DBA CBBRAND_MBILL_RULE 1 26 234
INDEX RANGE SCAN CB3DBA MEM_BR_UK 63 10564 84512
TABLE ACCESS FULL CB3DBA CBBILL_SUMMARY 385068 15184955 106294685
TABLE ACCESS FULL CB3DBA CBMEMBER 23130 1018328 27494856
TABLE ACCESS BY INDEX ROWID CB3DBA CBBRAND_MBILL_RULE 1 1 5
INDEX UNIQUE SCAN CB3DBA BR_MBILL_RULE_UNQ 79

Tom Kyte
August 24, 2005 - 2:09 pm UTC

indexes do not mean "fast=true" - perhaps the index is not the way to go.

How do you know the query portion is "slow" here? Perhaps it is the update itself.

trace it, see what is taking long.
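Following that "trace it" advice - a minimal sketch, assuming you can re-run the UPDATE from a session you control (trace file name and location vary by platform and version):

```sql
-- Turn on SQL trace for just this session before running the UPDATE
alter session set timed_statistics = true;
alter session set sql_trace = true;

-- ... run the UPDATE statement here ...

alter session set sql_trace = false;
```

Then format the trace file from USER_DUMP_DEST with tkprof, e.g. `tkprof <tracefile>.trc report.txt sys=no sort=exeela,fchela`, and see where the elapsed time actually goes - the query portion or the update itself.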



A reader, August 26, 2005 - 12:07 pm UTC

Thanks, Marco Coletti, for letting us know about this.

CBO and number of disks

Sven, September 28, 2005 - 8:20 am UTC

Hi Tom,

When calculating the cost for a query, will the CBO take into consideration (make an assumption about) how many disks are available to execute the query plan?

Thanks,

Sven


Tom Kyte
September 28, 2005 - 10:23 am UTC

no, not really - it takes into consideration the single block and multiblock IO historical times if available - but the number of "disks" (a disk isn't a disk anymore), no.
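Those single block and multiblock IO historical times come from system statistics. A hedged sketch of capturing them on 9i and later (requires appropriate privileges; the interval should cover a representative workload):

```sql
-- Start capturing workload system statistics
execute dbms_stats.gather_system_stats(gathering_mode => 'START');

-- ... let a representative workload run for a while ...

-- Stop capturing; SREADTIM/MREADTIM (single/multiblock read times,
-- in milliseconds) are then recorded for the CBO to use
execute dbms_stats.gather_system_stats(gathering_mode => 'STOP');

-- Inspect what was recorded
select sname, pname, pval1 from sys.aux_stats$;
```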

What do you think about this ? Your opinion

A reader, October 04, 2005 - 4:31 pm UTC

Tom,

We have a lot of stuff like this... does it look good to you?
Any workaround??

SELECT count(victoria)
INTO v_victor
FROM visitation
WHERE trunc(l_in_date) BETWEEN NVL(l_s_date,l_e_date)
AND NVL(l_e_date,l_s_date)
AND t_date IS NOT NULL;

Tom Kyte
October 04, 2005 - 8:27 pm UTC

I've no idea which of these are columns and which are variables. If the l_ things are variables..... I'd say

well, regardless - I'd ask "WHY ARE YOU COUNTING" - what is the point behind counting -- 999999999999 times out of 1000000000000 you don't need to do it. If you are counting to see if there is something to process - why not JUST TRY TO PROCESS IT and then discover "nothing there", rather than hitting everything to be processed twice?

yes l's are variables...

A reader, October 05, 2005 - 9:05 am UTC


Tom Kyte
October 05, 2005 - 11:24 am UTC

so, why are you counting in the first place ?!?!?


SELECT count(victoria)
INTO v_victor
FROM visitation
WHERE trunc(l_in_date) BETWEEN NVL(l_s_date,l_e_date)
AND NVL(l_e_date,l_s_date)
AND t_date IS NOT NULL;


if trunc(l_in_date) BETWEEN NVL(l_s_date,l_e_date)
AND NVL(l_e_date,l_s_date)
then
SELECT count(victoria)
INTO v_victor
FROM visitation
WHERE t_date IS NOT NULL;
else
v_victor := 0;
end if


is a bad way to do it... good way is:


<this space intentionally left blank>


counting, almost always a waste of resources.
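A sketch of the "just try to process it" alternative being alluded to, using the table and column names from the question; process_row is a made-up placeholder for whatever the count was gating:

```sql
begin
    -- No count(*) first: just fetch and process. If nothing matches,
    -- the loop body simply never runs - no separate count needed to
    -- discover that.
    for rec in ( select victoria
                   from visitation
                  where t_date is not null )
    loop
        process_row( rec.victoria );  -- hypothetical processing routine
    end loop;
end;
/
```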

taking your suggestion!!!

A reader, October 05, 2005 - 11:28 am UTC

It will be left

Parse

Aru, October 19, 2005 - 7:11 pm UTC

Hi Tom,

I have a certain query that has the following information in OEM about

SELECT DISTINCT t1.currenttimespentminutes, t1.totaltimespentminutes
FROM timespent t1, service_request t2, parent_child_links t2mm
WHERE t1.dbid = t2mm.child_dbid (+)
AND 16782564 = t2mm.child_fielddef_id (+)
AND t2mm.parent_dbid = t2.dbid (+)
AND t1.dbid <> 0
AND t2.id = 'PROD00095465';


9 SELECT STATEMENT
8 SORT [UNIQUE]
7 FILTER
6 HASH JOIN [OUTER]
4 NESTED LOOPS
2 CQ_PROD.SERVICE_REQUEST TABLE ACCESS [BY INDEX ROWID]
1 CQ_PROD.SERVICE_REQUEST_UK INDEX [UNIQUE SCAN]
3 CQ_PROD.TIMESPENT TABLE ACCESS [FULL]
5 CQ_PROD.PARENT_CHILD_LINKS TABLE ACCESS [FULL]


Disk reads per execution 2666
Buffer Gets per execution 4042
executions 122
buffer gets per row 4076
sorts 125
rows processed 124
parse Calls per execution 1.00
parse calls 122
loads 1
Cpu time 17909000
Elapsed time 27450000


What I am struggling with is the information 'parse Calls per execution'
and 'parse calls' being 1.00 and 122 respectively. Does this mean that
the query was "hard" parsed 122 times? Also, the 1.00 parse calls per
execution indicates that it was parsed each time it was executed.
Is that a hard parse or a soft parse?
PLease please clarify Tom,
Regards,
Aru.

Tom Kyte
October 19, 2005 - 7:58 pm UTC

that means the query was parsed 122 times.

probably 1 hard parse and 121 softparses.

It basically means you always parse it, execute it once and never again in your program.
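One way to spot this pattern across the whole shared pool - a hedged sketch querying V$SQL for statements that are parsed about as often as they are executed (the threshold of 50 executions is arbitrary, just to skip one-off statements):

```sql
-- Statements parsed on (nearly) every execution are candidates for
-- "parse once, execute many" fixes in the calling application
select sql_text, parse_calls, executions
  from v$sql
 where executions > 50
   and parse_calls >= executions
 order by parse_calls desc;
```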




Puzzled timings of similar query...

Shailesh Saraff, October 21, 2005 - 7:22 am UTC

Hello Tom,

We have tried one exercise and would like to share it with you. Two similar queries have given two very different elapsed times. Please check the WHERE clauses (small changes in expression strings etc.); the difference in elapsed time is huge.

What are NESTED LOOPS (SEMI) and NESTED LOOPS (ANTI)? Can you please explain to us why the timings are so different? (The buffer cache was flushed before each query execution.) The first query takes 117 seconds and the second query takes 31.90 seconds.

Thanks & Regards,

Shailesh

SELECT Distinct TblFall.FallId, TblPatient.PatientenNr, TblPatient.Name, TblPatient.Vorname
FROM TblPatient, TblFall, TblBewegung
WHERE (TblFall.Fallart IN ('ambulant'))
AND TblFall.FallStartDatum >= TO_DATE('01.04.2005','DD.MM.YYYY')
AND TblFall.FallStartDatum <= TO_DATE('05.10.2005','DD.MM.YYYY')
AND (TblBewegung.PflegerischeOE IN ('AN'))
AND TblPatient.PatientenId = TblFall.PatientenId
And TblPatient.Stornierer IS NULL
AND TblFall.Stornierer IS NULL
And TblFall.FallId >= 0
AND TblFall.Einrichtung = '1'
AND TblFall.FallId = TblBewegung.FallId
AND TblBewegung.Stornierer IS NULL
AND TblPatient.PatientenId = TblFall.PatientenId
And TblFall.Stornierer IS NULL
And TblFall.FallId >= 0
AND TblFall.Einrichtung = '1'
AND EXISTS (SELECT TblSchein.FallId
FROM TblSchein
WHERE TblFall.FallId = TblSchein.FallId
AND (TblSchein.ScheinTyp IN ('NtfallSchein'))
AND TblSchein.GueltigVon >= TO_DATE('01.04.2005','DD.MM.YYYY')
AND TblSchein.GueltigBis <= TO_DATE('05.10.2005','DD.MM.YYYY')
AND TblSchein.Stornierungszeit IS NULL)
AND NOT EXISTS (SELECT TblLMLeistAbrechng.BewegungsId
FROM TblLMLeistAbrechng
WHERE TblBewegung.BewegungsId = TblLMLeistAbrechng.BewegungsId
AND TblLMLeistAbrechng.Stornierer Is NULL
AND NVL(TblLMLeistAbrechng.AbrechenbarITB, 'N') = 'J')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.21 0.21 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 1.14 116.85 3414 80901 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 1.35 117.06 3414 80901 0 0

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
0 SORT (UNIQUE)
0 NESTED LOOPS (SEMI)
0 NESTED LOOPS
0 NESTED LOOPS (ANTI)
0 NESTED LOOPS
19664 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLFALL' (TABLE)
30977 INDEX MODE: ANALYZED (RANGE SCAN) OF 'XIE7TBLFALL'
(INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLBEWEGUNG' (TABLE)
38492 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIE1TBLBEWEGUNG' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLLMLEISTABRECHNG' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIE1TBLLMLEISTABRE' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLPATIENT' (TABLE)
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'XPKPATIENT2'
(INDEX (UNIQUE))
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLSCHEIN' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'XIE1TBLSCHEIN'
(INDEX)

**************************************

SELECT Distinct TblFall.FallId, TblPatient.PatientenNr, TblPatient.Name, TblPatient.Vorname
FROM TblPatient, TblFall, TblBewegung
WHERE TblFall.Fallart IN ('ambulant')
AND TO_DATE('01.04.2005', 'DD.MM.YYYY') <= TblFall.FallStartDatum
AND TO_DATE('05.10.2005', 'DD.MM.YYYY') >= TblFall.FallStartDatum
AND TblBewegung.PflegerischeOE IN ('AN')
AND TblFall.PatientenId = TblPatient.PatientenId
AND TblPatient.Stornierer IS NULL
AND TblFall.Stornierer IS NULL
AND 0 <= TblFall.FallId
AND '1' = TblFall.Einrichtung
AND TblBewegung.FallId = TblFall.FallId
AND TblBewegung.Stornierer IS NULL
AND NOT EXISTS (SELECT TblLMLeistAbrechng.BewegungsId
FROM TblLMLeistAbrechng
WHERE TblLMLeistAbrechng.BewegungsId = TblBewegung.BewegungsId
AND TblLMLeistAbrechng.Stornierer Is NULL
AND 'J' = NVL(TblLMLeistAbrechng.AbrechenbarITB, 'N'))
AND TblBewegung.FallId >= 0
AND EXISTS (SELECT TblSchein.FallId
FROM TblSchein
WHERE TblSchein.FallId = TblBewegung.FallId
AND TblSchein.ScheinTyp IN ('NtfallSchein')
AND TO_DATE('01.04.2005', 'DD.MM.YYYY') <= TblSchein.GueltigVon
AND TO_DATE('05.10.2005', 'DD.MM.YYYY') >= TblSchein.GueltigBis
AND TblSchein.Stornierungszeit IS NULL)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.84 31.89 3415 80901 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.85 31.90 3415 80901 0 0

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT MODE: ALL_ROWS
0 SORT (UNIQUE)
0 NESTED LOOPS (ANTI)
0 NESTED LOOPS
0 NESTED LOOPS (SEMI)
0 NESTED LOOPS
19664 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLFALL' (TABLE)
30977 INDEX MODE: ANALYZED (RANGE SCAN) OF 'XIE7TBLFALL'
(INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLBEWEGUNG' (TABLE)
38492 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIE1TBLBEWEGUNG' (INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLSCHEIN' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'XIE1TBLSCHEIN'
(INDEX)
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLPATIENT' (TABLE)
0 INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'XPKPATIENT2'
(INDEX (UNIQUE))
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF
'TBLLMLEISTABRECHNG' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF
'XIE1TBLLMLEISTABRE' (INDEX)

*******************************************

Tom Kyte
October 21, 2005 - 8:35 am UTC

I'd be curious to see the wait events for these - it looks like they did the same work, but one of them just had to wait longer (eg: it could be a fluke)



that is, however, a lot of work to find zero rows - don't you think?
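To capture those wait events, the usual approach is a level 8 event 10046 trace rather than plain sql_trace - a sketch, assuming each query can be re-run in a fresh session:

```sql
-- Level 8 = include wait events in the trace file
alter session set events '10046 trace name context forever, level 8';

-- ... run the query ...

alter session set events '10046 trace name context off';
```

Running tkprof on the resulting trace (or reading the raw WAIT lines) would show whether the extra 85 seconds went to db file sequential reads, scattered reads, or something else entirely.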

why toad shows correct plan and sqlplus doesnt

A reader, October 25, 2005 - 5:08 pm UTC

Hi

I am using Oracle 8.1.7.4 on HP-UX.

I am tuning a query with no bind variables. When I do "set autotrace on" I see a plan totally different from the plan I see in TOAD.

The plan in TOAD is correct but autotrace shows the wrong one (I tkprofed).

How so? Where does TOAD get the plan from? Why does SQL*Plus give the wrong plan?

Tom Kyte
October 26, 2005 - 11:32 am UTC

sqlplus uses explain plan.

do this, run the tkprof with explain=user/pass so we get

a) the actual plan
b) the explain plan plan

and show it to us.

you would have to ask the makers of toad where they get the plan from.

explain plan

Parag J Patankar, November 10, 2005 - 6:00 am UTC

Hi Tom,

In Oracle 9i, I am running a SQL query which takes 2 hours to run. I just want to see the execution plan, but when I do "set autotrace traceonly explain" it also takes a very long time (ultimately I cancelled it). Is this normal? How can I see just the execution plan quickly, without waiting for so long?

Secondly, my execution plan breaks across two lines; I want it to appear on one line (line width should be 132 chars). How can I do that?

regards & thanks
pjp

Tom Kyte
November 11, 2005 - 11:43 am UTC

if the query is running, the plan would already be in v$sql_PLAN, you can get it from there.


You can also:

explain plan for select .....;
select * from table(dbms_xplan.display);



if the explain plan call takes long - it is a problem with parsing the sql statement itself.
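A sketch of pulling the plan of an already-running statement out of V$SQL_PLAN on 9i (first find the statement's ADDRESS and HASH_VALUE, e.g. via V$SESSION.SQL_ADDRESS / SQL_HASH_VALUE for the session running it):

```sql
-- :addr and :hv identify the cursor located via v$session
select lpad(' ', depth) || operation
       || ' ' || options  as plan_step,
       object_name,
       cost
  from v$sql_plan
 where address    = :addr
   and hash_value = :hv
 order by id;
```

Ordering by ID gives the familiar top-to-bottom display order for simple plans.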

Explain plan output

Parag J Patankar, November 12, 2005 - 1:33 am UTC

Hi Tom,

Thanks for your answer. Will you also answer my question of how to set up the explain plan output for a linesize of 100?

regards & thanks
pjp

Tom Kyte
November 12, 2005 - 10:54 am UTC

set linesize.


ops$tkyte@ORA9IR2> set linesize 20
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------

--------------------
--------------------
--------------------
--------

| Id  | Operation
         |  Name
   | Rows  | Bytes |
 Cost  |

--------------------
--------------------
--------------------
--------

|   0 | SELECT STATE
MENT     |
   |       |       |
       |

|   1 |  TABLE ACCES
S FULL   | DUAL
   |       |       |
       |

--------------------
--------------------
--------------------
--------


Note: rule based opt
imization


9 rows selected.

ops$tkyte@ORA9IR2> set linesize 100
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |       |       |       |
|   1 |  TABLE ACCESS FULL   | DUAL        |       |       |       |
--------------------------------------------------------------------

Note: rule based optimization

9 rows selected.
 

this is cool stuff, may I have more please...

Kevin Meade, November 14, 2005 - 4:53 pm UTC

Nice thread

To go one step further with this: while tuning sql I have often wanted to associate a sql statement in the cursor cache with the piece of code it comes from. In particular, I am currently trying to match sql from the cursor cache with the specific packaged procedure/function from which it emanates.

I know v$open_cursor shows open sql in the cursor cache. I know v$session shows the user and sid. I know v$process shows OS info and the program name. But I can't find any linkage back to the source code.

Is there a way to map sql in the cursor cache to the packaged procedure/function that owns it?

I know your time is valuable and limited, so I have tried to find an answer elsewhere. I have searched the net, Oracle Metalink, the other resources your site suggests, and here, to no avail.

Thanks, Kevin

Tom Kyte
November 15, 2005 - 7:31 am UTC

nope, it is NOT a one to one function.

1 or 1,000 programs may issue:

select * from emp;




What I like to see is the liberal use of dbms_application_info!

The action and module you can set (as HTMLDB does) are associated with the SQL in V$SQL.
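A minimal instrumentation sketch (the procedure and module names here are invented for illustration):

```sql
create or replace procedure nightly_load as
begin
    -- Tag everything this code does; the tag shows up in
    -- v$sql.module and v$session.module
    dbms_application_info.set_module(
        module_name => 'NIGHTLY_LOAD',
        action_name => 'processing' );

    -- ... the real work goes here ...

    -- Clear the tag on the way out
    dbms_application_info.set_module( null, null );
end;
/
```

Afterwards, `select sql_text from v$sql where module = 'NIGHTLY_LOAD'` would show the SQL issued while the tag was in effect.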

Tuning revisited in context of a unique index

Devopam Mittra, November 14, 2005 - 5:35 pm UTC

Hi Tom,
I have a typical situation :
Table Tab1 has columns A,B,C and D.
Uniqueness of a record is defined as a combination of A AND(B OR C) i.e.
either of values B or C (or both).
Simply defined a Unique index on A,B,C and it works fine, as per logic.
Business logic wise, there can't exist a record with both B and C NULL at the same time.
But the data loading process is becoming slower by the day as the table size is growing.

Would it be beneficial if I define a pseudo column E (B||'-'||C) and define a primary key on A and E instead of Unique index being maintained.

The data querying modules run fine and pick up the correct path as well. So not a problem there.
regards
Devopam

Tom Kyte
November 15, 2005 - 7:33 am UTC

You have an atypical situation (almost every time someone starts with "I have a typical situation" - it turns out to be something atypical!)


the size of the table should not affect load times - an insert into a 50 billion row table should take about as long as into an empty table.


A primary key will have an index associated with it - unique or non-unique - but there WILL be an index.


tell us why you believe the "size" is affecting the load?

If there is no one to one correspondance then I don't undertand this

Kevin Meade, November 15, 2005 - 5:22 pm UTC

Maybe I am interpreting things wrong but this example tells me there is indeed a one to one correspondence of open cursors to programs executing sql...

create or replace procedure p1 is
v1 number;
begin
dbms_application_info.set_client_info('start');
select 1 into v1 from dual;
select 1 into v1 from dual;
dbms_lock.sleep(30);
dbms_application_info.set_client_info('done');
end;
/

execute p1

-- jump over to second sqlplus session to watch

KEVIN@>select * from v$open_cursor where sid in (select sid from v$session where client_info is not null);


SADDR SID USER_NAME ADDRESS HASH_VALUE
-------- ---------- ------------------------------ -------- ----------
SADDR SID USER_NAME ADDRESS
-------- ---------- ------------------------------ --------
HASH_VALUE
----------
SQL_TEXT
------------------------------------------------------------
432EBEAC 17 KEVIN 4CDBFCD0
3029709661
BEGIN p1; END;

432EBEAC 17 KEVIN 4CDB5AA0
3876449241
SELECT 1 FROM DUAL

432EBEAC 17 KEVIN 4CDB5AA0
3876449241
SELECT 1 FROM DUAL


3 rows selected.

KEVIN@>desc v$open_cursor
Name Null? Type
----------------------------- -------- -----------------
SADDR RAW(4)
SID NUMBER
USER_NAME VARCHAR2(30)
ADDRESS RAW(4)
HASH_VALUE NUMBER
SQL_TEXT VARCHAR2(60)


Oracle has clearly cached one statement with one plan, but I see two entries in v$open_cursor because this process opens the same statement twice. This seems to me to be a one to one correspondence between the code and the v$open_cursor view. Since in executing each sql statement Oracle had to get it from wherever it lives, I was hoping it had this information buried somewhere (maybe in a not yet exposed v$... view, or available via some utility package I don't know yet, or gettable with some x$magic) and you could tell me how to dig it out. I suppose if oracle just doesn't keep this info around during execution then I am out of luck.

As for lots of DBMS_APPLICATION_INFO: I agree, but the reality is that as a contractor I move to places that don't do it, so what am I to do? Tell them, "go back to all your pl/sql code, put in these statements, then spend time retesting it all, and then I can get you some cool documentation"... I can hear their response already... "Mister, you were supposed to be an expert at this. If you can't get what we want without requiring us to go through hoops, well, I guess we will have to find someone else". Dealing with after-the-fact dirty systems is just the way it is. Which is why I rely so heavily on Oracle to make me look good.

Thanks, I'm done. I'll try to get it some other way.

Kevin

Tom Kyte
November 16, 2005 - 8:32 am UTC

you cannot use plsql to test this particular thing because plsql is the *best* environment for doing sql and very smartly caches things.

write a program using another language and you'll find that if you open/parse/close a cursor here and there, you'll have - well, zero open cursors.... but you'll have the same "cursor" pointed to in v$sql


in v$sql, there is only one (even from plsql). so in v$sql, it is not possible to say "this sql is executed from this line of code in this program"


There is not any facility in general to do what you want - for in general it is not doable (same sql, many programs) (we have no clue what line of code you are in in your java/vb/c/etc program)



Re:Tuning revisited in context of a unique index

Devopam Mittra, November 15, 2005 - 7:27 pm UTC

Hi Tom,
Thanks for the enlightenment :-) about the typicality of a problem.

If we have a high volume of data in a table where a unique/pk index is preserving the business logic for uniqueness of a record, my understanding is that it should take more processing time/resources to evaluate whether a new record can be inserted or updated.
High volume: 40 million - 200 million records.
The index spans 5 columns of datatypes VARCHAR, DATE and NUMBER.

Please advise further.
regards
Devopam

Tom Kyte
November 16, 2005 - 8:36 am UTC

right, it will take time to verify that the data is unique (obviously) however, it won't take measurably more time with 5,000,000,000,000,000 rows in the table than 5.

You said:

But the data loading process is becoming slower by the day as the table size is
growing.


I don't see the connection between this unique constraint and increasing load times.

explain plan output in one line

Parag J Patankar, November 16, 2005 - 1:22 am UTC

Hi Tom,

Sorry, I might not have been clear while explaining the question. I want the explain output on one line when I set up "autotrace". For example,

if I do "autotrace" I am getting the following output:

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=9 Card=1 Bytes=5)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'X01DT31' (UNIQUE) (Cost=9 Car
d=1 Bytes=5)

I want output like the following:

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=9 Card=1 Bytes=5)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'X01DT31' (UNIQUE) (Cost=9 Card=1 Bytes=5)

I set linesize to 9999 but it had no effect on the output.

regards & thanks
pjp

Tom Kyte
November 16, 2005 - 9:00 am UTC

from effective Oracle by design:


<quote>
You do have some amount of control over the formatting of this report. The defaults (found in $ORACLE_HOME/sqlplus/admin/glogin.sql) are as follows:

COLUMN id_plus_exp FORMAT 990 HEADING i
COLUMN parent_id_plus_exp FORMAT 990 HEADING p
COLUMN plan_plus_exp FORMAT a60
COLUMN object_node_plus_exp FORMAT a8
COLUMN other_tag_plus_exp FORMAT a29
COLUMN other_plus_exp FORMAT a44

The ID_PLUS_EXP and PARENT_ID_PLUS_EXP columns are the first two numbers you see in the EXPLAIN PLAN output above. The PLAN_PLUS_EXP column is perhaps the most important one. It is the textual description of the plan step itself; for example, TABLE ACCESS (FULL) OF ‘DEPT’ (Cost=2 Card=4 Bytes=72). I find the default of 60 characters wide to be too small for most uses, so I set it to 100 in my login.sql file.

The last three settings control the output information displayed for parallel query plans. The easiest way to see which columns they affect is to run a parallel query with SET AUTOTRACE TRACEONLY EXPLAIN and turn them off, one by one. You'll clearly see what disappears from the report, so you'll know what they control.
</quote>
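So, for a wider autotrace plan column, the fix amounts to one COLUMN command in your login.sql (a sketch; 100 is just the width the book suggests - pick what fits your terminal):

```sql
-- In login.sql (or typed at the SQL*Plus prompt) - widen the
-- plan-step column used by SET AUTOTRACE:
COLUMN plan_plus_exp FORMAT a100
```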

Re:Tuning revisited in context of a unique index

Devopam Mittra, November 17, 2005 - 3:33 am UTC

Hi Tom,
Is the assumption of no performance hit fair for a table of 5 records vs. one that big?
I am not doubting the capabilities of Oracle, but it seems a reasonable assumption that there will be a performance change.

Okay, about my problem: there are usually 1 million record inserts per day into this table.
Of late the procedure has been taking more time (not a drastic change, though).

So I was evaluating whether keeping an index on a non-null column would help me, instead of keeping a unique index containing NULLs (~a few million records).

Please let me know if the line of approach is correct.
regards
Devopam



Tom Kyte
November 17, 2005 - 8:12 am UTC

why is it a reasonable assumption?

What are the thoughts behind this assumption?


have you traced your application to see what is taking "long"?

Re:Tuning revisited in context of a unique index

Devopam Mittra, November 18, 2005 - 2:54 am UTC

Hi Tom,
Please find my responses as under.
why is it a reasonable assumption?
Searching a 5-record table for uniqueness should not need an index in the first place;
a FTS will be way faster.


What are the thoughts behind this assumption?
As the table grows, the index also grows. So now the query will be examining more records to determine uniqueness.

have you traced your application to see what is taking "long"?
There is an I/O contention issue during the data loading window (it is a scheduled activity), and that could be a potential problem.

Please let me know if the assumptions made above make sense.

I was trying to explore the option of a PK vs. a unique index (not a constraint) as a faster means of determining uniqueness.

Requesting your opinion on the subject. If these two types of indexes perform the same way, then I guess I should explore the other aspects of the ETL process for timing improvement.

regards
Devopam

Tom Kyte
November 18, 2005 - 10:45 am UTC

but oracle needs and will only enforce a unique constraint with an index. so the theory of the fts does not fly.

And - for a 5 row table, it would be more efficient to use an index, not a fts. In fact, for any sized table it would be more efficient to use the index, not a fts!


The time to find a unique key in an index is directly related to the height of the index. You will find that you can have millions and millions and millions of rows and it'll still only take 3 or 4 logical IO's to determine "unique or not"


a primary key is enforced via an index, don't try to "outsmart" anything here by using - well - AN INDEX, that is already happening.


IF your goal is to tune performance
THEN
your first step must be to ascertain where your time is being spent
and concentrate on tuning that. Not on tuning hypothetical situations :)
END IF
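A quick way to check the "3 or 4 logical IOs" claim against your own index: BLEVEL in USER_INDEXES is the number of branch levels, so a unique probe costs roughly BLEVEL + 1 logical IOs before any table access. The table name below is a placeholder:

```sql
-- After gathering statistics on the table and its indexes:
select index_name, blevel, num_rows
  from user_indexes
 where table_name = 'YOUR_BIG_TABLE';   -- substitute your table
```

Even at hundreds of millions of rows, BLEVEL is typically only 2 or 3 - which is why load time should not grow noticeably with table size.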



explain plan interpretation

dhamodharan.l, November 22, 2005 - 9:11 am UTC

Hi Tom, the explanation you gave is very useful for me. Please tell me how to solve this using the explain plan:
-----------------------------------------------------------------------------------
| Id  | Operation                       | Name          | Rows  | Bytes | Cost  |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |               |       |       |       |
|   1 |  NESTED LOOPS                   |               |       |       |       |
|   2 |   NESTED LOOPS                  |               |       |       |       |
|   3 |    NESTED LOOPS                 |               |       |       |       |
|   4 |     TABLE ACCESS FULL           | PATREVISIT    |       |       |       |
|   5 |     TABLE ACCESS BY INDEX ROWID | CONSULTANT    |       |       |       |
|   6 |      INDEX UNIQUE SCAN          | PK_CONSULTANT |       |       |       |
|   7 |    TABLE ACCESS FULL            | PATMLC        |       |       |       |
|   8 |   TABLE ACCESS FULL             | CONSULTANT    |       |       |       |
-----------------------------------------------------------------------------------

Note: rule based optimization, 'PLAN_TABLE' is old version

16 rows selected.

Tom Kyte
November 22, 2005 - 10:04 am UTC

ctl-f for

Reading an Explain Plan

I've described how to read a plan.


Index Access for a table

Vivek Sharma, November 25, 2005 - 9:18 am UTC

Hi Tom,

I am one of your regular readers and appreciate your efforts. I have learned a lot from this (educational) website. :)

I have one general query on the Columns used by an Index Access. My query is :

Suppose a query
----------------

select a.x, a.y, b.x, b.z, c.x
from a, b, c
where a.xy = b.xy
and a.pq = c.pq
and a.mn = 'some value'
and b.op = some_value
and c.dt = some_date;

I have a Composite Index on a(xy, pq) and my autotrace shows that A is my driving table and a nested loop of A and B are then joined again via nested loop of C.

My query is: though the index on A is on (xy, pq), since column PQ is joined to a different table (C) and this step is just a nested loop between A and B, will the query make use of both columns of the index, or just XY?

I think that since column PQ is joined to table C, the index will only use column XY as the join condition between A and B, and hence PQ will be of no use.

Am I correct? Please let me know.

Thanks and Regards
Vivek



Tom Kyte
November 25, 2005 - 10:57 am UTC

I cannot see the query using the index on a(xy,pq) if A is in fact the driving table.

I would see a full scan of A or a index on A(mn) being used.



Sorry Tom....this is in continuation of my previous post

Vivek Sharma, November 25, 2005 - 11:07 am UTC

Sorry Tom,

This is in continuation to my previous post.

There is no index on A(mn)....though I understand that the optimizer should do a FTS on A, I have seen the CBO using the index on A(xy, pq) - maybe because of the settings of optimizer_index_cost_adj and optimizer_index_caching. In that case it was the CBO, but I have also seen this behaviour under the RBO. So I was just curious to know whether it will only use the XY column of A for the scan.

Awaiting your response.

Regards
Vivek


Tom Kyte
November 25, 2005 - 1:40 pm UTC

not if A was the DRIVING table, it wouldn't; you would need to set up a test case to show otherwise.

If B/C were joined AND THEN that result were joined to A, I could see the index being used, but not otherwise.

reader

A reader, November 29, 2005 - 3:31 pm UTC

Hi Tom

May I know how you converted


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------
| Id | Operation |Name |Rows|Bytes|Cost |
-----------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | NESTED LOOPS | | | | |
| 2 | NESTED LOOPS | | | | |
| 3 | TABLE ACCESS FULL | SALGRADE| | | |
|* 4 | TABLE ACCESS FULL | EMP | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| DEPT | | | |
|* 6 | INDEX UNIQUE SCAN | DEPT_PK | | | |
-----------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("EMP"."SAL"<="SALGRADE"."HISAL" AND
"EMP"."SAL">="SALGRADE"."LOSAL")
6 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Note: rule based optimization

21 rows selected.





into



For salgrade in (select * from salgrade)
Loop
For emp in ( select * from emp )
Loop
If ( emp.sal between salgrade.losal and salgrade.hisal )
Then
Select * into dept_rec
From dept
Where dept.deptno = emp.deptno;

OUTPUT RECORD with fields from salgrade,emp,dept
End if;
End loop;
End loop;





and then into this


1
/ \
2 5
/ \ \
3 4 6

I have read "How the CBO Chooses an Access Path" in the Oracle performance tuning guide, but somehow I could not understand it fully.

Tom Kyte
November 30, 2005 - 11:02 am UTC

I did???? I mean that was the crux of the posting you cut this from - it goes through this step by step.

reader

A reader, November 30, 2005 - 11:12 am UTC

Hi

In the below graph you said that we move from left to right. First you wrote step 3 (for salgrade in (select * from salgrade)), then step 4 (for emp in (select * from emp)), and then step 2 (nested loop). After that, can't we do step 6 instead of step 5, as you have done in the pseudocode? Will you be able to explain this whole pseudocode again with the help of this graph? I would be thankful to you.

1
/ \
2 5
/ \ \
3 4 6

reader

A reader, December 10, 2005 - 1:55 am UTC

Hi Tom

Could you please answer this? (I am a novice in this matter; I have also read about access paths, but somehow I could not understand this.)
In the below graph you said that we move from left to right. First you wrote step 3 (for salgrade in (select * from salgrade)), then step 4 (for emp in (select * from emp)), and then step 2 (nested loop). After that, can't we do step 6 instead of step 5, as you have done in the pseudocode? Will you be able to explain this whole pseudocode again with the help of this graph? I would be thankful to you ....

1
/ \
2 5
/ \ \
3 4 6




Tom Kyte
December 10, 2005 - 5:22 am UTC

I cannot think of any other way to say it than the way I said it above, where I went step by step through this. Sorry. I thought I said it as simply as I could. Maybe someone else can try - but I cannot see any easier way to say it.
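For what it's worth, the reading rule described earlier in this thread can be stated mechanically: the first row source executed is the leftmost leaf, and every parent consumes its children left to right, bottom up. A small sketch (the tree-as-dict representation is purely illustrative, not anything Oracle provides):

```python
# Hypothetical sketch of how an explain-plan tree is read: descend to the
# leftmost leaf first (the first row source executed), and evaluate each
# parent only after all of its children, left to right.

def execution_order(tree, node=1, order=None):
    """Post-order (children left-to-right, then parent) walk of a plan tree."""
    if order is None:
        order = []
    for child in tree.get(node, []):
        execution_order(tree, child, order)
    order.append(node)
    return order

# The plan from the posting: 1 = outer NESTED LOOPS,
# 2 = inner NESTED LOOPS over 3 (SALGRADE) and 4 (EMP),
# 5 = TABLE ACCESS BY INDEX ROWID on DEPT driven by 6 (index scan).
plan = {1: [2, 5], 2: [3, 4], 5: [6]}

print(execution_order(plan))  # -> [3, 4, 2, 6, 5, 1]
```

Applied to the plan above, this yields 3, 4, 2, 6, 5, 1 - which also addresses the "step 6 instead of step 5" confusion: the index scan (6) does run before the table access (5) returns its rows; the pseudocode simply shows the dept lookup as one SELECT that is internally driven by the index probe.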

tkprof script

Reader, December 26, 2005 - 2:48 pm UTC

I scanned all three books but could not find the script you use to read the tkprof output during your presentations.

Where can I find it?

Thanks


Tom Kyte
December 26, 2005 - 2:58 pm UTC

I don't know what you mean?

tkprof is a command line tool provided with the database, it generates a report. it is in $ORACLE_HOME/bin

TKPROF

reader, December 26, 2005 - 11:01 pm UTC

Well, during your demos you execute a SQL statement, then you run a script which prints the tkprof output. (I assume you grab the pid and then pass it to tkprof.)

I will try to explain in more detail if I am not clear this time.

I was told that it is in the book, but I could not find it.

Thanks


Tom Kyte
December 27, 2005 - 9:37 am UTC

oh, all I'm doing is:

SQL> disconnect
SQL> !tkprof `ls -t $ORACLE_HOME/admin/$ORACLE_SID/udump/*ora_*.trc | head -1` ./tk.prf
SQL> edit tk.prf
SQL> connect /


in my scripts (or calling a small script tk, that does the same)

I just find the last trace file generated, and tkprof it.  works good on my single user demo system.


You can also use my getspid script (which is in the books):

ops$tkyte@ORA10GR2> l
  1  select a.spid dedicated_server,
  2        b.process clientpid
  3    from v$process a, v$session b
  4   where a.addr = b.paddr
  5*    and b.sid = (select sid from v$mystat where rownum=1)
ops$tkyte@ORA10GR2> /

DEDICATED_SE CLIENTPID
------------ ------------
2533         2520

ops$tkyte@ORA10GR2> !ls -ltr $ORACLE_HOME/admin/$ORACLE_SID/udump/*2533.trc
-rw-rw----  1 ora10gr2 ora10gr2 18690 Dec 27 10:29 /home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_2533.trc


it would be more appropriate on a multi-user system where many people might be generating traces.
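The shell one-liner above just grabs the most recently modified trace file. The same trick can be sketched in Python; the udump path is whatever your user_dump_dest points at, so treat the directory argument as an assumption to adapt:

```python
# A sketch of the "grab the newest trace file" trick from the shell
# one-liner above, done with the standard library instead of ls -t | head -1.
import glob
import os

def newest_trace(udump_dir):
    """Return the most recently modified *ora_*.trc file, or None if none exist."""
    traces = glob.glob(os.path.join(udump_dir, '*ora_*.trc'))
    return max(traces, key=os.path.getmtime) if traces else None
```

As with the shell version, this is only safe on a single-user demo system; on a busy system the newest trace may belong to someone else, which is why the getspid approach is preferable there.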


 

THANK YOU !

reader, December 27, 2005 - 7:24 pm UTC


your user

Reader, January 11, 2006 - 7:50 am UTC

In the above script, is your user identified externally?

Thanks ,



Tom Kyte
January 12, 2006 - 10:26 am UTC

yes? is that a problem?

Explain Plan

a, January 12, 2006 - 10:12 am UTC

500000 rows created.

Elapsed: 00:08:01.12

Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=1681 Card=500000 Byt
es=103500000)

1 0 LOAD AS SELECT
2 1 HASH JOIN (Cost=1681 Card=500000 Bytes=103500000)
3 2 TABLE ACCESS (FULL) OF 'LND_DIABETES_DERIVED' (Cost=2
Card=500000 Bytes=10500000)

4 2 TABLE ACCESS (FULL) OF 'LND_DIABETES_TRANS' (Cost=263
Card=500000 Bytes=93000000)





Statistics
----------------------------------------------------------
5000 recursive calls
264003 db block gets
26628 consistent gets
16683 physical reads
179047124 redo size
791 bytes sent via SQL*Net to client
5801 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
500000 rows processed

Could you please let us know why 500000 records take 8 minutes?

Tom Kyte
January 12, 2006 - 11:10 am UTC

no, not from this. No more than you could.

I could think of many reasons, none of which might apply, or all of which might apply, or some of which might apply.

for example, you generated some 175mb of redo. Likely you have many indexes on this table (guess based on redo generated)


you did 16,683 physical IO's, if each took 0.01 seconds, that would be almost 3 minutes - maybe IO is even slower on your system.

maybe the system was loaded and you had a hard time getting CPU time.

maybe your redo logs are way small and you suffered from lots of "checkpoint not complete, cannot allocate new log" messages.

maybe......


suggestion: trace with wait event recording enabled, use tkprof, see what you are waiting on.
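The physical-read point above is easy to sanity-check with arithmetic; the 10 ms per read figure is an assumption about the storage, not a measured value:

```python
# Back-of-the-envelope check of the physical-read cost described above:
# 16,683 reads at an assumed 10 ms apiece already accounts for minutes
# of the 8-minute elapsed time.

def io_seconds(physical_reads, seconds_per_read=0.01):
    """Estimated time spent waiting on physical reads."""
    return physical_reads * seconds_per_read

print(round(io_seconds(16683), 2))       # 166.83 seconds
print(round(io_seconds(16683) / 60, 1))  # roughly 2.8 minutes
```

Tracing with wait events, as suggested, replaces this guesswork with the actual wait breakdown.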

How to retrieve long-running SQL queries run by a particular user in a past time period

A reader, January 13, 2006 - 2:17 pm UTC

Tom,

How can I retrieve the query (with complete sql_text) run by a particular user during a past time window, say last night 8-9pm?

Thanks,

Tom Kyte
January 15, 2006 - 3:13 pm UTC

were you auditing what this user was doing last night between 8 and 9 pm? If not, you won't be able to retrieve the query the user was running.

If you were - it'll be in the fine grained auditing trail log.

Detecting query plan changes for selected queries

Menon, January 16, 2006 - 8:19 pm UTC

Hi Tom
Recently, we ran into a problem wherein after changing a database parameter, one of the queries changed plan to use full table scan with pretty bad results. We have a tar open with Oracle on this issue (though they simply told us to use hints..)

Anyway, when we ran the same query in test, we were able to see that the plan change was due to the db parameter change.
This led to a discussion on whether it makes sense to have a framework to detect changes in query plans in the test system (which has the same configuration as the production system). The idea is that whenever DBAs change a parameter, we have a way of identifying, for queries that are in the critical path, whether the execution plan has changed or not. Such a system would have been useful at least in the above case, where the plan change was not for the better. (Note that I am well aware that plans change (and usually change for the good, based on changing stats), that FTS in general is not bad, and so on.)
Assume that we have identified a set of important queries (embedded in PL/SQL). We can even mark these queries with comments to identify them (instead of using the SQL hash created by Oracle, which is overly sensitive to spaces and so forth for our purposes).

Does it make sense to create such a framework - is it worth it? On a cursory glance, I don't think it involves a lot of work (esp. if it is generic only for an app) but I am just wondering if it is a good idea in general or not. This would be a *very small* subset of the intentions of the hotsos "laredo" product </code> https://portal.hotsos.com/products/laredo <code>

Btw, if there are other such existing products (esp. free ones), let me know.

Tom Kyte
January 16, 2006 - 9:04 pm UTC

the product I know of that does this is Laredo from hotsos.com

Yes, it does make sense. (and with the hash in v$sql_plan, it can be relatively easy to identify changed plans)

thanx...

Menon, January 17, 2006 - 1:24 am UTC

"the product I know of that does this is Laredo from hotsos.com"

Yup - that is the one I also mentioned above...

"Yes, it does make sense. (and with the hash in v$sql_plan, it can be relatively easy to identify changed plans)"..

That is one idea, though the problem is that anytime you edit the query, the hash changes... I was thinking more of marking the selected queries to identify them...

Thanx for your input!

Tom Kyte
January 17, 2006 - 8:40 am UTC

if you edit the query - nothing much is going to be able to match them.

my point was "if you hold all things constant EXCEPT for something that affects the optimizer (gather stats, change parameters, perform a patch upgrade....) - the hash can be very useful for identifying changed plans"
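As a sketch of the framework being discussed: snapshot the plan hashes for the monitored queries before a change, snapshot again after, and diff. The tags and hash values below are invented for illustration; in practice the hashes would come from v$sql_plan and the tags from your own query-marking convention:

```python
# A sketch of the plan-change check discussed above: compare two snapshots
# of plan hashes, keyed by your own query tag, and report the queries whose
# plan hash moved after a parameter change / stats gather / patch upgrade.

def changed_plans(before, after):
    """Return tags present in both snapshots whose plan hash differs."""
    return sorted(tag for tag in before
                  if tag in after and before[tag] != after[tag])

before = {'Q1': 470836197, 'Q2': 3027949496}   # baseline snapshot (made up)
after  = {'Q1': 470836197, 'Q2': 1181583051}   # snapshot after the change
print(changed_plans(before, after))  # -> ['Q2']
```

This embodies Tom's point: holding everything constant except the optimizer-affecting change, a hash comparison is enough to flag the queries worth re-examining.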

hmm..

Menon, January 17, 2006 - 1:14 pm UTC

"if you edit the query - nothing much is going to be able to match them."

But if you put a known string in a comment as a hint
(e.g.
change select * from emp to select /*+ PLAN_CHANGE_MONITORED(Q1) */ * from emp), then you could track them regardless of hash value? Q1 is the identifier for this particular query; the next query would be Q2, and so on. This would also document the queries whose plans are being monitored. You would still use v$sql_plan to detect plan changes. You would store the old plans in another table, with the query identifier being the string for each query (instead of the hash value).

Use user-function instead????

A reader, January 20, 2006 - 10:05 am UTC

Hi Tom,
I've got a sql as follows as part of a report query.

select trx.ctry_orig_dest, c.ctry_name,
sum(decode(trx.ccy, 'EUR', trx.trxvalue,
decode(fx.invrate,
'N', trx.trxvalue * fx.fxrate,
trx.trxvalue / fx.fxrate)) ) incoming_payments,
trx.ccy
from acc_trxc trx, acc_mast acc, fxrates_arch fx,iso_countries c
where trx.trxtype = 'CASH RECD'
and trx.accode = acc.accode
and acc.country_code = 'DEU'
and trx.ctry_orig_dest = c.ctry_code
and trx.ctry_orig_dest <> 'DEU'
and fx.trdate between trunc(to_date(:end_date)) and trunc(to_date(:end_date)) + 1
and trx.ccy = fx.prm_ccy
and fx.sec_ccy = 'EUR'
group by trx.ctry_orig_dest, c.ctry_name, ccy
having sum(decode(trx.ccy, 'EUR', trx.trxvalue,
decode(fx.invrate, 'N', trx.trxvalue * fx.fxrate,
trx.trxvalue / fx.fxrate)) ) >= 12500



There are 10 such individual SELECTs, all joined together using UNION ALL.
My query is regarding the sum(decode...) in the statement above. As you can see, we are using it both in the select list and for checking that the group total is >= 12500. This summation is being done in all the individual SELECTs. Is there any way I can compute the actual sum just once and then check it, so that the operation is not done twice? Any suggestions you can give me will be greatly appreciated. By the way, the client is still on version 817 (!). Thanks

Tom Kyte
January 20, 2006 - 10:47 am UTC

are you sure the operation is done twice? if so, "how"?

Reg. last query

A reader, January 20, 2006 - 11:57 am UTC

HI Tom,
Thanks for the quick response.
Oops, I just assumed the sum(decode..) stuff would be done twice. Isn't this the case? How would I go about checking this?
Thanks

Tom Kyte
January 20, 2006 - 12:52 pm UTC

the optimizer will take whatever shortcuts it feels like taking for us. it may do it twice, it may not do it twice. we cannot really control that (sql is somewhat non-procedural that way). inline view won't do it.

but one would presume it would only do it once - since it is in fact the same thing.
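Logically, HAVING is just a filter applied to group totals that have already been produced by the GROUP BY - which is why one would presume the sum is computed only once. A toy sketch of that "aggregate once, then test" shape (the data and threshold mirror the query above but are invented):

```python
# Toy sketch of GROUP BY ... HAVING semantics: each group's total is
# computed exactly once, then the 12500 threshold is applied as a filter
# over the finished totals - the sum is not re-evaluated for the filter.
from collections import defaultdict

def groups_over_threshold(rows, threshold=12500):
    """rows: (group_key, converted_amount) pairs; sum each group once, filter."""
    totals = defaultdict(float)
    for key, amount in rows:
        totals[key] += amount            # the summation happens here, once
    return {k: v for k, v in totals.items() if v >= threshold}

rows = [('DNK', 9000.0), ('DNK', 4000.0), ('SWE', 1200.0)]
print(groups_over_threshold(rows))  # -> {'DNK': 13000.0}
```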

Thank you

A reader, January 23, 2006 - 3:51 am UTC


elapsed time

Mehmood, January 27, 2006 - 5:10 am UTC

Dear Tom:

I am analyzing the trace output of the same query in two databases. All the init parameters are almost the same, and the execution plan is the same, but in one database the query is taking almost 25 seconds of elapsed time, while in the other the same query takes 0.22 seconds. Can you please shed some light on this? I am pasting the output from both databases.

=======================================
Database 1:
********************************************************************************

UPDATE VEHICLE_M SET TRIPS=:b1 - 1
WHERE
VEH_NO = :b2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 12 0.00 25.04 0 0 48 12
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 13 0.00 25.05 0 0 48 12

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 59 (CPSPROD)

Rows Row Source Operation
------- ---------------------------------------------------
12 UPDATE VEHICLE_M
0 TABLE ACCESS FULL VEHICLE_M


Rows Execution Plan
------- ---------------------------------------------------
0 UPDATE STATEMENT GOAL: CHOOSE
12 UPDATE OF 'VEHICLE_M'
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'VEHICLE_M' [:Q2125000]

SELECT /*+ Q2125000 NO_EXPAND ROWID(A1) */ A1.ROWID,A1."VEH_NO",
A1."TRIPS" FROM "VEHICLE_M" PX_GRANULE(0, BLOCK_RANGE,
DYNAMIC) A1 WHERE A1."VEH_NO"=:B1

********************************************************************************

=======================================
Database 2:
=======================================

********************************************************************************

UPDATE VEHICLE_M SET TRIPS=:b1 - 1
WHERE
VEH_NO = :b2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.01 0 0 0 0
Execute 12 0.00 0.21 0 0 48 12
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 13 0.00 0.22 0 0 48 12

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 47 (CPSPROD)

Rows Row Source Operation
------- ---------------------------------------------------
12 UPDATE VEHICLE_M
0 TABLE ACCESS FULL VEHICLE_M


Rows Execution Plan
------- ---------------------------------------------------
0 UPDATE STATEMENT GOAL: CHOOSE
12 UPDATE OF 'VEHICLE_M'
0 TABLE ACCESS GOAL: ANALYZED (FULL) OF 'VEHICLE_M' [:Q884000]

SELECT /*+ Q884000 NO_EXPAND ROWID(A1) */ A1.ROWID,A1."VEH_NO",
A1."TRIPS" FROM "VEHICLE_M" PX_GRANULE(0, BLOCK_RANGE,
DYNAMIC) A1 WHERE A1."VEH_NO"=:B1

********************************************************************************

Thanks for your time.


Tom Kyte
January 27, 2006 - 8:41 am UTC

why are you using parallel query to update 12 rows?

what is the size of the table - and is the load on these systems "the same" (and are the systems "the same")?

elapsed time

Mehmood, January 28, 2006 - 2:49 am UTC

Dear Tom:

We have the parameter parallel_max_servers=5 in both databases, so parallel queries are being used.

And the size of the table is 5MB in both databases.

Database 1, where the query is taking 25 seconds, is a dual-processor machine with 1GB RAM, under minimal load.

Database 2, where the query is taking 0.22 seconds, is a single-processor machine with 512MB RAM.

Thanks for the time.

Tom Kyte
January 28, 2006 - 1:02 pm UTC

why are you using parallel at all for such small stuff.

elapsed time

Mehmood, January 29, 2006 - 3:37 am UTC

So is it the parallel query that is taking 25 seconds to process 12 rows?

Should I set parallel_max_servers=0?

And could you please suggest where we should use parallel query? Does this depend upon the number of processors in the machine? And will parallel query have any impact if we have a single-processor machine?

Thanks for the help.

Tom Kyte
January 29, 2006 - 8:31 am UTC

it could well be - if you have something tiny to do, as you do, just setting up for parallel query (getting the processes going, getting ready to just "start"), then doing the work and coordinating it, takes many times longer than "just doing it yourself".

why are you doing parallel DML - you have to ASK specifically to do that in your application, stop asking for it.

<quote src=Expert Oracle: Database Architecture>
When to Use Parallel Execution

Parallel execution can be fantastic. It can allow you to take a process that executes over many hours or days and complete it in minutes. Breaking down a huge problem into small components may, in some cases, dramatically reduce the processing time. However, one underlying concept that it will be useful to keep in mind while considering parallel execution is summarized by this very short quote from Practical Oracle8i: Building Efficient Databases (Addison-Wesley, 2001) by Jonathan Lewis:

PARALLEL QUERY option is essentially nonscalable.

Parallel execution is essentially a nonscalable solution. It was designed to allow an individual user or a particular SQL statement to consume all resources of a database. If you have a feature that allows an individual to make use of everything that is available, and then allow two individuals to use that feature, you’ll have obvious contention issues. As the number of concurrent users on your system begins to overwhelm the number of resources you have (memory, CPU, and I/O), the ability to deploy parallel operations becomes questionable. If you have a four-CPU machine, for example, and on average you have 32 users executing queries simultaneously, then the odds are that you do not want to parallelize their operations. If you allowed each user to perform just a “parallel 2” query, then you would now have 64 concurrent operations taking place on a machine with just four CPUs. If the machine were not overwhelmed before parallel execution, it almost certainly would be now.

In short, parallel execution can also be a terrible idea. In many cases, the application of parallel processing will only lead to increased resource consumption, as parallel execution attempts to use all available resources. In a system where resources must be shared by many concurrent transactions, such as an OLTP system, you would likely observe increased response times due to this. It avoids certain execution techniques that it can use efficiently in a serial execution plan and adopts execution paths such as full scans in the hope that by performing many pieces of the larger, bulk operation in parallel, it would be better than the serial plan. Parallel execution, when applied inappropriately, may be the cause of your performance problem, not the solution for it.
So, before applying parallel execution, you need the following two things to be true:

* You must have a very large task, such as the full scan of 50GB of data.
* You must have sufficient available resources. Before parallel full scanning 50GB of data, you would want to make sure that there is sufficient free CPU (to accommodate the parallel processes) as well as sufficient I/O. The 50GB should be spread over more than one physical disk to allow for many concurrent read requests to happen simultaneously, there should be sufficient I/O channels from the disk to the computer to retrieve the data from disk in parallel, and so on.

If you have a small task, as generally typified by the queries carried out in an OLTP system, or you have insufficient available resources, again as is typical in an OLTP system where CPU and I/O resources are often already used to their maximum, then parallel execution is not something you’ll want to consider. So you can better understand this concept, I present the following analogy.

...
</quote>
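The trade-off in the quote can be shown with a toy cost model (every number here is an assumption, chosen only to illustrate the shape of the curve): parallel execution pays a fixed setup and coordination cost before any work happens, so for a tiny task the serial plan wins even granting perfect speedup on the work itself:

```python
# Toy cost model of parallel vs. serial execution. The startup cost and
# timings are invented; the point is that the fixed overhead must be
# amortized over the work, so tiny tasks lose under parallelism.

def parallel_time(work_seconds, degree, startup_seconds=2.0):
    """Idealized: perfect speedup on the work, plus a fixed setup cost."""
    return startup_seconds + work_seconds / degree

def serial_time(work_seconds):
    return work_seconds

# Updating 12 rows (say 0.2 s of real work): setup dominates.
print(serial_time(0.2))        # 0.2
print(parallel_time(0.2, 5))   # 2.04
# A genuinely large scan (say 600 s of work): parallelism pays off.
print(parallel_time(600, 5))   # 122.0
```

This matches the outcome reported in this thread: the 12-row update went from 25 seconds under parallel execution back to sub-second once parallelism was turned off.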

problem resolved

Mehmood, January 30, 2006 - 2:15 am UTC

Dear Tom:

Thanks for the help. I have set parallel_max_servers=0, and now it is taking less than 0.22 seconds to get the 12 rows; previously it was taking 25 seconds.

I must say, Tom, you are very helpful to the world's Oracle family. You are a very valuable asset to the whole Oracle community.



plan differerence when executing standalone versus PL/SQL

Menon, January 31, 2006 - 10:53 am UTC

Hi Tom
consider the following procedure:
create or replace procedure p
as
l_count number;
begin
select count(*)
into l_count
from emp;
end;
/
If I execute this query in PL/SQL it will use an execution plan.

Now assuming the same input/same CBO environment, if I execute this SQL standalone, can it ever have a different execution plan? I don't think so but wanted to confirm...

Tom Kyte
January 31, 2006 - 3:31 pm UTC

that simple query - probably not, in a CBO environment in plsql or standalone it would likely either

a) index fast full scan any index that includes a non-null column
b) full scan the emp table if such an index does not exist


(ignoring fine grained access control (fgac), ignoring who owns the procedure, and other likely esoteric things)

clarification of the above question

Menon, January 31, 2006 - 1:09 pm UTC

The above procedure is just an example to elaborate the question: if I execute a query in PL/SQL and execute it as standalone SQL, can they generate different plans, assuming all the other variables (such as stats, CBO parameters, etc.) are the same?

Tom Kyte
January 31, 2006 - 3:37 pm UTC

yes, especially dependent on version. older plsql releases did not respect the session setting for optimizer mode.

binding can be a little bit different (could lead to differences with bind variable peeking in some strange cases)

Answer is "yes"
But not usually.

thanx!

Menon, January 31, 2006 - 4:15 pm UTC

"yes, especially dependent on version. older plsql releases did not respect the
session setting for optimizer mode.

binding can be a little bit different (could lead to differents with bind
variable peeking in some strange cases)

Answer is "yes"
But not usually. "

I should have mentioned - the Oracle version is 9.2.0.3 in my case. So do you know if PL/SQL still fails to respect the optimizer mode in rare situations in this version, or has the problem been addressed?

And do you have the "strange cases" you mention above listed somewhere (in asktom or metalink, for example)?

Thanx!

Tom Kyte
February 01, 2006 - 2:26 am UTC

in 9ir2, plsql obeys the optimizer setting - you can set up a quick test to verify that it is happening for you.

Full table scans on the dual table

ravinder matte, January 31, 2006 - 5:55 pm UTC

Tom,

I have seen that 99% of the full table scans in my database are on the DUAL table when I query v$sql_plan.

When I look at the statspack report, the top SQL statement is SELECT NULL FROM DUAL FOR UPDATE NOWAIT;

I don't understand what is causing this.

Database :9206V
AIX 5.2

Thanks
Matte

Tom Kyte
February 01, 2006 - 2:31 am UTC

sqlplus is likely doing that. do you use sqlplus a lot? There was a time when sqlplus would issue that every time you connected (to see if the database was read/write); that was corrected in current releases.

If that is your top sql though, must be an almost idle database otherwise?

thanx Tom!

Menon, February 01, 2006 - 5:37 am UTC

"in 9ir2, plsql obeys the optimizer settting - you can set up a quick test to
verify that it is happening for you.
"
thanx! Yeah, well, in this case it is a counter example that can kill you. This is one of the cases where you have to know if there is any known bug, since simply proving it for one case does not mean that you can't have another counter case.

Have a little question about CBO behaviour

Dmytro, February 06, 2006 - 10:31 am UTC

There are nearly 2 million records in the table, and the query has a constraint like this:

select
...
where
...
and dt_create <= :date

The CBO always supposes a cardinality of 15 and so uses the index, but in real queries the :date constraint is almost unselective (a few hundred thousand or even almost all records satisfy it). If I use a constant, the CBO makes the right decision and the query executes in two minutes, but if I use a variable (which is needed), it takes nearly an hour to run.
What can be done about this? Maybe a few ideas that can help in such a case? Or will only /*+ CARDINALITY */ help? But what if somebody sets a :date that selects only a few records?

Tom Kyte
February 07, 2006 - 12:59 am UTC

need test case to work with - or at least a slightly longer description of the data.  How is the data such that the cbo thinks "15"? 15 is a rather odd number.


for example:

ops$tkyte@ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> create index t_idx on t(created);

Index created.

ops$tkyte@ORA10GR2> variable d varchar2(25)
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> set autotrace traceonly explain
ops$tkyte@ORA10GR2> select * from t where created <= to_date(:d,'dd-mon-yyyy');

Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |  2496 |   226K|    10   (0)| 00:
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |  2496 |   226K|    10   (0)| 00
|*  2 |   INDEX RANGE SCAN          | T_IDX |   449 |       |     3   (0)| 00
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CREATED"<=TO_DATE(:D,'dd-mon-yyyy'))

<b>
The 2,496 number is the result of:

ops$tkyte@ORA10GR2> select num_rows from user_tables where table_name = 'T';

  NUM_ROWS
----------
     49925

ops$tkyte@ORA10GR2> select 2496/49925 from dual;

2496/49925
----------
.049994992
</b>

the optimizer used a 5% guess, so if 15 were the guess - I would think your statistics have the table at about 300 rows - but you say 2 million.
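The 5% fallback derived from the plan above is easy to reproduce as arithmetic. The fixed fraction is the default selectivity guess the optimizer falls back on for a bounded range predicate when it cannot peek the bind; the helper name is illustrative:

```python
# Reproducing the cardinality guess from the plan above: with no peeked
# bind value, "column <= :bind" gets a fixed 5% selectivity applied to
# the table's num_rows statistic.

def bounded_range_guess(num_rows, fraction=0.05):
    """Estimated cardinality for a range predicate on an unpeekable bind."""
    return int(num_rows * fraction)

print(bounded_range_guess(49925))  # -> 2496, matching the plan's estimate
```

This also explains the diagnostic logic: working backward, a guess of 15 would imply a num_rows statistic of roughly 300, nowhere near 2 million.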

 

My stupid mistake....

Dmytro, February 08, 2006 - 6:52 am UTC

I was looking at the cost column instead of the cardinality.
Today I tested on another schema; there are 2,400,000 records in the table, and the plan of the query:

SELECT *
FROM details
WHERE dt_created < :dt

is like this:

SELECT STATEMENT, GOAL = CHOOSE Cost=2831 Cardinality=120082 Bytes=10206970
TABLE ACCESS BY INDEX ROWID Object owner=AIRT_RT Object name=ANIMALS Cost=2831 Cardinality=120082 Bytes=10206970
INDEX RANGE SCAN Object owner=AIRT_RT Object name=AN_DT_CREATE_I Cost=69 Cardinality=21615

So, the CBO makes a few-percent guess and uses the index. The same thing happens if I include this "...AND dt_created < :dt" in more complicated queries with lots of joins and other stuff.

These queries are reports, and they usually include data for a long period of time, so use of this index is deadly for their performance. I can't drop the index, because it's helpful in other queries, and sometimes reports are even taken for just one month or so...

Now I'm using /*+ NO_INDEX(det det_dt_create_i) */ and it really helps, but I'm not too fond of this kind of solution. Can anything be better than this? Some dynamic estimation of cardinality? That possibly means hard parsing every time, but what if the query is run only once a day, for example (hmm... but there could be a few dozen such queries)?..

I hope I have clarified the situation a little...

Appreciate your help.

Tom Kyte
February 08, 2006 - 8:14 am UTC

what are you binding in there - a string, or a DATE?

One more mistake... :)

A reader, February 08, 2006 - 6:57 am UTC

Two queries absolutely alike except the table names and too many opened windows... :)

so, query like this:
SELECT * FROM animals WHERE dt_created <:dt

and this hint I'm using
/*+NO_INDEX (an an_dt_create_i)*/

About binding string or date

Dmytro, February 08, 2006 - 9:25 am UTC

I'm using PL/SQL Developer :), so to get this plan I just wrote the query as-is with ":dt" and got the plan.
The real query is used to open a ref cursor in a PL/SQL function, and in place of ':dt' there is a call to another function that takes a date and returns a date (with or without the time, depending on the second parameter).

In simple words - a date.


Tom Kyte
February 08, 2006 - 10:15 am UTC

ok, with the function - it won't bind variable peek (if it did, it would get the right plan)

How about instead of

where date_column = f(x)

you run


plsql_variable_of_type_date := f(x);

...
where date_column = plsql_variable_of_type_date
....

?

Remote objects stats

Reader, February 13, 2006 - 7:22 am UTC

Tom, I have a question and unfortunately can't find the answer in the docs.

If my query involves remote tables (assume we use the CBO) - does the CBO look at the remote objects' stats or not?

It seems to me that it does.

CREATE DATABASE LINK self CONNECT TO user identified by "password" using 'connect';

SQL> create table t1 as select * from all_objects;

Table created.

SQL> exec dbms_stats.gather_table_stats(user, 't1');

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain
SQL> select * from t1@self, dual;

Execution Plan
----------------------------------------------------------
Plan hash value: 3027949496

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 49181 |  4562K|   432  (66)| 00:00:06 |        |      |
|   1 |  NESTED LOOPS      |      | 49181 |  4562K|   432  (66)| 00:00:06 |        |      |
|   2 |   TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |        |      |
|   3 |   REMOTE           | T1   | 49181 |  4466K|   150   (1)| 00:00:02 |   SELF | R->S |
-------------------------------------------------------------------------------------------

Remote SQL Information (identified by operation id):
----------------------------------------------------

   3 - SELECT "OWNER","OBJECT_NAME","SUBOBJECT_NAME","OBJECT_ID","DATA_OBJECT_ID","
       OBJECT_TYPE","CREATED","LAST_DDL_TIME","TIMESTAMP","STATUS","TEMPORARY","GENERATED"
       ,"SECONDARY" FROM "T1" "T1" (accessing 'SELF' )

It picked the cardinality just right. Please clarify:

1. By asking the remote site for stats, do we indeed increase parse time (by orders and orders of magnitude)?
2. Do we increase latch hold time by orders of magnitude as well?
 

Tom Kyte
February 13, 2006 - 8:44 am UTC

... If my query involves remote tables (assume we use the CBO) - does the CBO look at remote
object stats or not? ...

yes it does.
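One way to see this for yourself is a before/after comparison, sketched below using the SELF loopback link and T1 table from the example above (a sketch, not a definitive test):

```sql
-- sketch: assumes the SELF loopback link and table T1 created above.
-- With no stats, the optimizer falls back to default cardinality guesses.
exec dbms_stats.delete_table_stats(user, 'T1');

set autotrace traceonly explain
select * from t1@self;   -- note the Rows estimate on the REMOTE step

-- Gather stats and re-explain: the REMOTE step should now show a
-- cardinality close to the true row count, indicating the CBO pulled
-- the statistics from the remote site during optimization.
exec dbms_stats.gather_table_stats(user, 'T1');
select * from t1@self;
```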

Explain Plan indicates full table scan

Bhaskar, February 17, 2006 - 8:33 am UTC

Hi Tom,
The explain plan indicates a full table scan for a BIG table [OPTIONTABLE] in the query below. Forcing the optimizer to use an index on this table makes things worse.

SELECT DISTINCT OPTIONTABLE.MOSB_OPTN_CD, DESCTABLE.CONFIG_SHORT_DESC,ORDERTABLE.MODEL_YR_CD,ORDERTABLE.MODEL_YR_SUFX_CD
FROM
VEH_CONFIG_DESC DESCTABLE
,VEH_ORDER ORDERTABLE
,VEH_ORDER_OPTN OPTIONTABLE
WHERE
DESCTABLE.COUNTRY_CD = 'US'
AND DESCTABLE.LANG_CD = 'US'
AND DESCTABLE.CONFIG_TYPE = 'O'
AND DESCTABLE.MODEL_YR_CD = ORDERTABLE.MODEL_YR_CD
AND DESCTABLE.MODEL_YR_SUFX_CD = ORDERTABLE.MODEL_YR_SUFX_CD
AND ORDERTABLE.ORDER_ID = OPTIONTABLE.ORDER_ID
AND OPTIONTABLE.MOSB_OPTN_CD = DESCTABLE.MOSB_CONFIG_CD
AND OPTIONTABLE.OVC_OPTN_CD = DESCTABLE.OVC_CONFIG_CD
AND NVL (ORDERTABLE.RECORD_STATUS_TYPE,'TRANSFER') <> 'TRANSFER'
AND (ORDERTABLE.DLR_CD IN ('USA304', 'USA319', 'USA322','USA330','USA305')
OR ORDERTABLE.RESRVD_DLR_CD IN ('USA304', 'USA319', 'USA322','USA330','USA305')
OR ORDERTABLE.DELVRY_DEST_CD IN ('USA304', 'USA319', 'USA322','USA330','USA305')
OR ORDERTABLE.CHARGE_TO_CD IN ('USA304', 'USA319', 'USA322','USA330','USA305'))
AND DESCTABLE.CONFIG_TYPE = 'O'
ORDER BY OPTIONTABLE.MOSB_OPTN_CD;
-----------------------------------------------------
Explain Plan Output is as below.

<PRE>
-----------------------------------------------------------------------------------

| Id | Operation | Name | Rows | Bytes | Cost |

-----------------------------------------------------------------------------------

| 0 | SELECT STATEMENT | | 1 | 97 | 227 |
| 1 | SORT UNIQUE | | 1 | 97 | 226 |
| 2 | NESTED LOOPS | | 1 | 97 | 224 |
| 3 | HASH JOIN | | 66 | 4026 | 92 |
| 4 | TABLE ACCESS BY INDEX ROWID| VEH_CONFIG_DESC | 64 | 3200 | 14 |
| 5 | INDEX RANGE SCAN | SYS_C001860 | 5 | | 11 |
| 6 | TABLE ACCESS FULL | VEH_ORDER_OPTN | 144K| 1550K| 76 |
| 7 | TABLE ACCESS BY INDEX ROWID | VEH_ORDER | 1 | 36 | 2 |
| 8 | INDEX UNIQUE SCAN | SYS_C001569 | 13 | | 1 |
-----------------------------------------------------------------------------------

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=227 Card=1 Bytes=97)
1 0 SORT (UNIQUE) (Cost=226 Card=1 Bytes=97)
2 1 NESTED LOOPS (Cost=224 Card=1 Bytes=97)
3 2 HASH JOIN (Cost=92 Card=66 Bytes=4026)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'VEH_CONFIG_DESC' (Cost=14 Card=64 Bytes=3200)
5 4 INDEX (RANGE SCAN) OF 'SYS_C001860' (UNIQUE) (Cost =11 Card=5)
6 3 TABLE ACCESS (FULL) OF 'VEH_ORDER_OPTN' (Cost=76 Card=144366 Bytes=1588026)
7 2 TABLE ACCESS (BY INDEX ROWID) OF 'VEH_ORDER' (Cost=2 Card=1 Bytes=36)
8 7 INDEX (UNIQUE SCAN) OF 'SYS_C001569' (UNIQUE) (Cost=1 Card=13)


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
1675841 consistent gets
30470 physical reads
0 redo size
6704 bytes sent via SQL*Net to client
743 bytes received via SQL*Net from client
10 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
128 rows processed
</PRE>
-----------------------------------------------

Can you explain why the optimizer is going for a full table scan, and why an index scan is so much costlier? Table details are as below.
VEH_CONFIG_DESC [SMALLEST] - 5,000 rows
VEH_ORDER [MEDIUM] - 600,000 rows
VEH_ORDER_OPTN [BIG] - 1,100,000 rows

This query was run on 9.2 and I have computed statistics for all indexed columns. At present it takes 30 seconds to run. Can we write the query in a more efficient way?

thanks in advance,
bhaskar

Tom Kyte
February 17, 2006 - 2:49 pm UTC

If going for the index makes it worse, why would you do that?


Are we a warehouse or a transactional system here? I could suggest bitmap indexes, but only if the database is read-only/read-mostly.
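If the system does turn out to be read-mostly, the bitmap approach Tom mentions might look like the sketch below (hypothetical index names; bitmap indexes are a poor fit where there is concurrent single-row DML on the table):

```sql
-- Sketch only: bitmap indexes on the low-cardinality dealer-code columns
-- used in the OR'ed IN-lists, so the optimizer can combine them with a
-- BITMAP OR instead of full-scanning VEH_ORDER.
create bitmap index bix_vo_dlr_cd        on veh_order (dlr_cd);
create bitmap index bix_vo_resrvd_dlr_cd on veh_order (resrvd_dlr_cd);
create bitmap index bix_vo_delvry_dest   on veh_order (delvry_dest_cd);
create bitmap index bix_vo_charge_to     on veh_order (charge_to_cd);
```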

Tuning HASH JOIN

Prerak Mehta, February 17, 2006 - 3:40 pm UTC

Hi Tom,

I have a billing month-end query that takes 20 hours to finish.

SELECT /*+ ORDERED */ sum(pin_round(decode(impact_type, 4, 0.0, 64, 0.0,
portion * db_gross_amt), adjust, rounding)) db_gross, sum(
pin_round(decode(impact_type, 4, 0.0, 64, 0.0, portion * cr_gross_amt),
adjust, rounding)) cr_gross, sum(pin_round(decode(impact_type, 4, 0.0,
64, 0.0, portion * db_disc_amt), adjust, rounding)) db_disc, sum(
pin_round(decode(impact_type, 4, 0.0, 64, 0.0, portion * cr_disc_amt),
adjust, rounding)) cr_disc, sum(pin_round(decode(impact_type, 4, portion
* db_amt, 64, portion * db_amt, 0.0), adjust, rounding)) db_tax,
sum(pin_round(decode(impact_type, 4, portion * cr_amt, 64, portion *
cr_amt, 0.0), adjust, rounding)) cr_tax, ar_account_obj_id0, poid_id0,
resource_id, gl_id
FROM (SELECT eb.impact_type, greatest(0, decode(greatest(e.earned_end_t -
e.earned_start_t, 43200), 43200, 1, round((e.earned_end_t -
greatest(:b3, e.earned_start_t)) / (24 * 3600), 0) /
round((e.earned_end_t - e.earned_start_t) / (24 * 3600), 0)))
portion, greatest(eb.amount, 0) db_amt, least(eb.amount, 0)
cr_amt, greatest(eb.discount, 0) db_disc_amt, least(
eb.discount, 0) cr_disc_amt, greatest(eb.amount + eb.discount,
0) db_gross_amt, least(eb.amount + eb.discount, 0)
cr_gross_amt, i.ar_account_obj_id0, i.poid_id0,
eb.resource_id, eb.gl_id
FROM ledger_report_gl_segments_t gs, item_t i,
event_bal_impacts_t eb, event_t e
WHERE gs.obj_id0 = :b6
AND gs.gl_segment = i.gl_segment
AND i.effective_t > 0
AND i.effective_t < :b3
AND (i.poid_type = '/item/misc'
OR i.poid_type = '/item/sponsor'
OR i.poid_type IN (SELECT obj_type
FROM config_items_t ci, config_events_t ce
WHERE ci.obj_id0 = ce.obj_id0
AND ci.rec_id = ce.rec_id2
AND ce.event_type LIKE
'/event/billing/product/fee/cycle/cycle_forward%'
))
AND i.poid_db IS NOT NULL
AND eb.item_obj_id0 = i.poid_id0
AND eb.resource_id + 0 BETWEEN :b5 AND :b4
AND eb.gl_id <> 0
AND e.poid_id0 = eb.obj_id0
AND e.earned_end_t > :b3
AND e.poid_type LIKE
'/event/billing/product/fee/cycle/cycle_forward%'
AND (:b2 IS NULL
OR i.ar_account_obj_id0 BETWEEN :b2 AND :b1)) event_info, (
SELECT rec_id, rounding, decode(rounding_mode, 0, 0, 1, 0.4 /
power(10, rounding), 2, (-0.5) / power(10, rounding), 3, 0)
adjust
FROM config_beid_balances_t) beid_info
WHERE event_info.resource_id = beid_info.rec_id
GROUP BY ar_account_obj_id0, poid_id0, gl_id, resource_id


Optimizer Mode Used:

FIRST_ROWS

Total Cost:

351,253



Execution Steps:

Step # Step Name

21 SELECT STATEMENT
20 SORT [GROUP BY]
19 FILTER
15 FILTER
14 HASH JOIN
1 PBDPIN.CONFIG_BEID_BALANCES_T TABLE ACCESS [FULL]
13 NESTED LOOPS
9 HASH JOIN
6 PBDPIN.ITEM_T TABLE ACCESS [BY INDEX ROWID]
5 NESTED LOOPS
3 PBDPIN.LEDGER_REPORT_GL_SEGMENTS_T TABLE ACCESS [BY INDEX ROWID]
2 PBDPIN.I_LEDGER_REPORT_GL_SEGS__ID INDEX [RANGE SCAN]
4 PBDPIN.I_ITEM_GLSEG_EFF__ID INDEX [RANGE SCAN]
8 PARTITION RANGE [ALL]
7 PBDPIN.EVENT_BAL_IMPACTS_T TABLE ACCESS [FULL]
12 PARTITION RANGE [ITERATOR]
11 PBDPIN.EVENT_T TABLE ACCESS [BY LOCAL INDEX ROWID]
10 PBDPIN.I_EVENT__ID INDEX [UNIQUE SCAN]
18 NESTED LOOPS
16 PBDPIN.CONFIG_ITEMS_T TABLE ACCESS [FULL]
17 PBDPIN.CONFIG_EVENTS_T TABLE ACCESS [FULL]


I took a statspack snapshot (i_snap_level=>7) and pulled the explain plan via sprepsql.sql, which looked like this:

--------------------------------------------------------------------------------
| Operation | PHV/Object Name | Rows | Bytes| Cost |
--------------------------------------------------------------------------------
|SELECT STATEMENT |----- 1738109184 ----| | |1019789 |
|SORT GROUP BY | | 9K| 1M|1019789 |
| FILTER | | | | |
| FILTER | | | | |
| HASH JOIN | | 9K| 1M|1019547 |
| TABLE ACCESS FULL |CONFIG_BEID_BALANCES | 76 | 836 | 2 |
| NESTED LOOPS | | 9K| 1M|1019544 |
| HASH JOIN | | 146K| 12M| 580404 |
| HASH JOIN | | 925K| 43M| 347339 |
| TABLE ACCESS BY INDEX ROW|LEDGER_REPORT_GL_SEG | 1 | 9 | 2 |
| INDEX RANGE SCAN |I_LEDGER_REPORT_GL_S | 1 | | 1 |
| TABLE ACCESS FULL |ITEM_T | 925K| 35M| 347336 |
| PARTITION RANGE ALL | | | | |
| TABLE ACCESS FULL |EVENT_BAL_IMPACTS_T | 11M| 402M| 231249 |
| PARTITION RANGE ITERATOR | | | | |
| TABLE ACCESS BY LOCAL INDE|EVENT_T | 1 | 42 | 3 |
| INDEX UNIQUE SCAN |I_EVENT__ID | 1 | | 2 |
| NESTED LOOPS | | 1 | 70 | 4 |
| TABLE ACCESS FULL |CONFIG_ITEMS_T | 1 | 26 | 2 |
| TABLE ACCESS FULL |CONFIG_EVENTS_T | 1 | 44 | 2 |
--------------------------------------------------------------------------------


I have 9iR2 on Solaris 64-bit, PGA_AGGREGATE_TARGET=10GB, workarea_size_policy=auto and mts_servers=1. So in the procedure which runs this query I have set these session-level parameters:

sql_stmt1:='alter session set optimizer_index_cost_adj=10';
execute immediate sql_stmt1;
sql_stmt2:='alter session set optimizer_index_caching=90';
execute immediate sql_stmt2;
sql_stmt3:='alter session set workarea_size_policy=manual';
execute immediate sql_stmt3;
sql_stmt4:='alter session set sort_area_size=204857600';
execute immediate sql_stmt4;
sql_stmt5:='alter session set sort_area_retained_size=204857600';
execute immediate sql_stmt5;
sql_stmt6:='alter session set hash_area_size=504857600';
execute immediate sql_stmt6;
sql_stmt7:='alter session set db_file_multiblock_read_count=32';
execute immediate sql_stmt7;
sql_stmt8:='alter session set hash_multiblock_io_count=32';
execute immediate sql_stmt8;

My concern here is that my hash join takes around 16-17 hours to finish. I tried nested loops (USE_NL), hash join (USE_HASH) and driving-site (DRIVING_SITE) hints too. Then I removed all hints, and nothing changed. I need some guidance from you on how to improve the performance of this hash join.

Prerak

Tom Kyte
February 17, 2006 - 5:22 pm UTC

When you do the hash join (brutally efficient - to process this much data, no indexes), what are you waiting for - physical IO? temp writes? what? Maybe run for a while with a 10046 level 12 trace and see what your big waits are (I sort of suspect "physical IO").
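For reference, the 10046 level 12 trace Tom mentions can be switched on at the session level like this (a standard technique on 9i; level 12 captures both wait events and bind values):

```sql
-- Enable extended SQL trace with waits + binds for the current session.
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';

-- ... run the slow query here ...

-- Switch tracing back off; the trace file appears in user_dump_dest.
alter session set events '10046 trace name context off';
```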

Tuning Hash Joins

Prerak, February 17, 2006 - 4:02 pm UTC

Hi Tom,

Forgot to mention the number of rows in each table.

EVENT_T has 451087086 rows
CONFIG_ITEMS_T has 7 rows
CONFIG_EVENTS_T has 11 rows
ITEM_T has 73858158 rows
LEDGER_REPORT_GL_SEGMENTS_T has 122 rows
EVENT_BAL_IMPACTS_T has 57932811 rows
CONFIG_BEID_BALANCES_T has 76 rows

Prerak

Tuning Hash Joins

Prerak, February 20, 2006 - 3:54 pm UTC

Hi Tom,

It's showing 'db file scattered read' and 'db file sequential read' wait events while reading indexed data. Is there another way to tune this query?

/orasource/app/oracle/admin/devdb/udump/devdb_ora_11901.trc
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning and OLAP options
JServer Release 9.2.0.6.0 - Production
ORACLE_HOME = /orasource/app/oracle/product/9.2.0
System name: SunOS
Node name: DEVDB01
Release: 5.9
Version: Generic_118558-04
Machine: sun4u
Instance name: DEVDB
Redo thread mounted by this instance: 1
Oracle process number: 76
Unix process pid: 11901, image: oracle@DEVDB01 (TNS V1-V3)

*** 2006-02-07 16:05:05.724
*** SESSION ID:(142.10960) 2006-02-07 16:05:05.667
WAIT #6: nam='db file scattered read' ela= 2274 p1=86 p2=3165545 p3=32
WAIT #6: nam='db file scattered read' ela= 2771 p1=86 p2=3165577 p3=32
WAIT #6: nam='db file scattered read' ela= 2505 p1=86 p2=3165609 p3=32
WAIT #6: nam='direct path write' ela= 8 p1=203 p2=190345 p3=126
WAIT #6: nam='db file scattered read' ela= 2384 p1=86 p2=3165641 p3=32
WAIT #6: nam='db file scattered read' ela= 2376 p1=86 p2=3165673 p3=32
WAIT #6: nam='db file scattered read' ela= 2473 p1=86 p2=3165705 p3=32
WAIT #6: nam='db file scattered read' ela= 2743 p1=86 p2=3165737 p3=32
WAIT #6: nam='db file scattered read' ela= 2397 p1=86 p2=3165769 p3=32
WAIT #6: nam='db file scattered read' ela= 2444 p1=86 p2=3165801 p3=32
WAIT #6: nam='db file scattered read' ela= 2393 p1=86 p2=3165833 p3=32
WAIT #6: nam='db file scattered read' ela= 2689 p1=86 p2=3165865 p3=32
WAIT #6: nam='db file scattered read' ela= 2528 p1=86 p2=3165897 p3=32
WAIT #6: nam='db file scattered read' ela= 2506 p1=86 p2=3165929 p3=32
WAIT #6: nam='db file scattered read' ela= 2379 p1=86 p2=3165961 p3=32
WAIT #6: nam='db file scattered read' ela= 2276 p1=86 p2=3165993 p3=32
WAIT #6: nam='db file scattered read' ela= 2437 p1=86 p2=3166025 p3=32
WAIT #6: nam='db file scattered read' ela= 2387 p1=86 p2=3166057 p3=32
WAIT #6: nam='db file scattered read' ela= 2221 p1=86 p2=3166089 p3=32
WAIT #6: nam='db file scattered read' ela= 2480 p1=86 p2=3166121 p3=32
WAIT #6: nam='db file scattered read' ela= 2247 p1=86 p2=3166153 p3=32
WAIT #6: nam='db file scattered read' ela= 2426 p1=86 p2=3166185 p3=32
WAIT #6: nam='db file scattered read' ela= 2407 p1=86 p2=3166217 p3=32
WAIT #6: nam='db file scattered read' ela= 2457 p1=86 p2=3166249 p3=32
WAIT #6: nam='db file scattered read' ela= 2366 p1=86 p2=3166281 p3=32
WAIT #6: nam='db file scattered read' ela= 2245 p1=86 p2=3166313 p3=32
WAIT #6: nam='db file scattered read' ela= 2358 p1=86 p2=3166345 p3=32
WAIT #6: nam='db file scattered read' ela= 2212 p1=86 p2=3166377 p3=32
WAIT #6: nam='db file scattered read' ela= 3921 p1=86 p2=3166409 p3=32
WAIT #6: nam='db file scattered read' ela= 2448 p1=86 p2=3166441 p3=32
WAIT #6: nam='db file scattered read' ela= 2466 p1=86 p2=3166473 p3=32
WAIT #6: nam='db file scattered read' ela= 2408 p1=86 p2=3166505 p3=32
WAIT #6: nam='db file scattered read' ela= 2420 p1=86 p2=3166537 p3=32
WAIT #6: nam='db file scattered read' ela= 2458 p1=86 p2=3166569 p3=32
WAIT #6: nam='db file scattered read' ela= 2466 p1=86 p2=3166601 p3=32
WAIT #6: nam='db file scattered read' ela= 2451 p1=86 p2=3166633 p3=32
WAIT #6: nam='db file scattered read' ela= 2184 p1=86 p2=3166665 p3=32
WAIT #6: nam='db file scattered read' ela= 2342 p1=86 p2=3166697 p3=32
WAIT #6: nam='db file scattered read' ela= 2453 p1=86 p2=3166729 p3=32
WAIT #6: nam='db file scattered read' ela= 2588 p1=86 p2=3166761 p3=32
WAIT #6: nam='db file scattered read' ela= 2302 p1=86 p2=3166793 p3=32
WAIT #6: nam='db file scattered read' ela= 2353 p1=86 p2=3166825 p3=32
WAIT #6: nam='db file scattered read' ela= 2513 p1=86 p2=3166857 p3=32
WAIT #6: nam='db file scattered read' ela= 2371 p1=86 p2=3166889 p3=32
WAIT #6: nam='db file scattered read' ela= 2528 p1=86 p2=3166921 p3=32
WAIT #6: nam='db file scattered read' ela= 2436 p1=86 p2=3166953 p3=32
WAIT #6: nam='db file scattered read' ela= 2415 p1=86 p2=3166985 p3=32
WAIT #6: nam='db file scattered read' ela= 2508 p1=86 p2=3167017 p3=32
WAIT #6: nam='db file scattered read' ela= 2635 p1=86 p2=3167049 p3=32
WAIT #6: nam='db file scattered read' ela= 2415 p1=86 p2=3167081 p3=32
WAIT #6: nam='db file scattered read' ela= 2410 p1=86 p2=3167113 p3=32
WAIT #6: nam='db file scattered read' ela= 2536 p1=86 p2=3167145 p3=32
WAIT #6: nam='db file scattered read' ela= 2546 p1=86 p2=3167177 p3=32
WAIT #6: nam='db file scattered read' ela= 2558 p1=86 p2=3167209 p3=32
WAIT #6: nam='db file scattered read' ela= 2405 p1=86 p2=3167241 p3=32
WAIT #6: nam='db file scattered read' ela= 2420 p1=86 p2=3167273 p3=32
WAIT #6: nam='db file scattered read' ela= 2360 p1=86 p2=3167305 p3=32
WAIT #6: nam='db file scattered read' ela= 2507 p1=86 p2=3167337 p3=32
WAIT #6: nam='db file scattered read' ela= 2386 p1=86 p2=3167369 p3=32
WAIT #6: nam='db file scattered read' ela= 2272 p1=86 p2=3167401 p3=32
WAIT #6: nam='db file scattered read' ela= 2583 p1=86 p2=3167433 p3=32
WAIT #6: nam='db file scattered read' ela= 2432 p1=86 p2=3167465 p3=32
WAIT #6: nam='db file scattered read' ela= 2433 p1=86 p2=3167497 p3=32
WAIT #6: nam='db file scattered read' ela= 2477 p1=86 p2=3167529 p3=32
WAIT #6: nam='db file scattered read' ela= 2449 p1=86 p2=3167561 p3=32
WAIT #6: nam='db file scattered read' ela= 2396 p1=86 p2=3167593 p3=32
WAIT #6: nam='db file scattered read' ela= 2462 p1=86 p2=3167625 p3=32











[DEVDB] more devdb_ora_18621.trc
/orasource/app/oracle/admin/devdb/udump/devdb_ora_18621.trc
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning and OLAP options
JServer Release 9.2.0.6.0 - Production
ORACLE_HOME = /orasource/app/oracle/product/9.2.0
System name: SunOS
Node name: DEVDB01
Release: 5.9
Version: Generic_118558-04
Machine: sun4u
Instance name: DEVDB
Redo thread mounted by this instance: 1
Oracle process number: 29
Unix process pid: 18621, image: oracle@DEVDB01 (TNS V1-V3)

*** 2006-02-13 15:54:46.691
*** SESSION ID:(31.25776) 2006-02-13 15:54:46.676
WAIT #6: nam='db file sequential read' ela= 10195 p1=9 p2=2449148 p3=1
WAIT #6: nam='db file sequential read' ela= 65 p1=92 p2=1325447 p3=1
WAIT #6: nam='db file sequential read' ela= 127 p1=43 p2=860015 p3=1
WAIT #6: nam='db file sequential read' ela= 32 p1=92 p2=1310677 p3=1
WAIT #6: nam='db file sequential read' ela= 82 p1=9 p2=2449052 p3=1
WAIT #6: nam='db file sequential read' ela= 35 p1=90 p2=2093212 p3=1
WAIT #6: nam='db file sequential read' ela= 794 p1=9 p2=2449053 p3=1
WAIT #6: nam='db file sequential read' ela= 9449 p1=90 p2=2130959 p3=1
WAIT #6: nam='db file sequential read' ela= 4878 p1=112 p2=1445329 p3=1
WAIT #6: nam='db file sequential read' ela= 81 p1=43 p2=860018 p3=1
WAIT #6: nam='db file sequential read' ela= 9051 p1=165 p2=1443553 p3=1
WAIT #6: nam='db file sequential read' ela= 103 p1=9 p2=2449058 p3=1
WAIT #6: nam='db file sequential read' ela= 76 p1=9 p2=2449054 p3=1
WAIT #6: nam='db file sequential read' ela= 76 p1=9 p2=2449059 p3=1
WAIT #6: nam='db file sequential read' ela= 73 p1=9 p2=2449055 p3=1
WAIT #6: nam='db file sequential read' ela= 68 p1=9 p2=2449060 p3=1
WAIT #6: nam='db file sequential read' ela= 79 p1=136 p2=11797 p3=1
WAIT #6: nam='db file sequential read' ela= 10626 p1=90 p2=2118267 p3=1
WAIT #6: nam='db file sequential read' ela= 65 p1=112 p2=1399999 p3=1
WAIT #6: nam='db file sequential read' ela= 59 p1=9 p2=2449056 p3=1
WAIT #6: nam='db file sequential read' ela= 71 p1=9 p2=2449061 p3=1
WAIT #6: nam='db file sequential read' ela= 320 p1=9 p2=2449062 p3=1
WAIT #6: nam='db file sequential read' ela= 1048 p1=9 p2=2449254 p3=1
WAIT #6: nam='db file sequential read' ela= 12904 p1=9 p2=2650599 p3=1
WAIT #6: nam='db file sequential read' ela= 656 p1=90 p2=2118266 p3=1
WAIT #6: nam='db file sequential read' ela= 71 p1=9 p2=2449064 p3=1
WAIT #6: nam='db file sequential read' ela= 15378 p1=90 p2=2228955 p3=1
WAIT #6: nam='db file sequential read' ela= 59 p1=9 p2=2449063 p3=1
WAIT #6: nam='db file sequential read' ela= 520 p1=43 p2=860019 p3=1
WAIT #6: nam='db file sequential read' ela= 364 p1=24 p2=1776630 p3=1
WAIT #6: nam='db file sequential read' ela= 80 p1=9 p2=2449065 p3=1
WAIT #6: nam='db file sequential read' ela= 75 p1=92 p2=1425066 p3=1
WAIT #6: nam='db file sequential read' ela= 25 p1=90 p2=2100240 p3=1
WAIT #6: nam='db file sequential read' ela= 72 p1=9 p2=2449069 p3=1
WAIT #6: nam='db file sequential read' ela= 5837 p1=90 p2=2221098 p3=1
WAIT #6: nam='db file sequential read' ela= 26 p1=9 p2=2380528 p3=1
WAIT #6: nam='db file sequential read' ela= 25 p1=9 p2=2410865 p3=1
WAIT #6: nam='db file sequential read' ela= 59 p1=9 p2=2449070 p3=1
WAIT #6: nam='db file sequential read' ela= 7969 p1=90 p2=2123047 p3=1
WAIT #6: nam='db file sequential read' ela= 7503 p1=90 p2=2171955 p3=1
WAIT #6: nam='db file sequential read' ela= 51 p1=112 p2=1412022 p3=1
WAIT #6: nam='db file sequential read' ela= 379 p1=9 p2=2449071 p3=1
WAIT #6: nam='db file sequential read' ela= 77 p1=9 p2=2449066 p3=1
WAIT #6: nam='db file sequential read' ela= 29 p1=90 p2=2089254 p3=1
WAIT #6: nam='db file sequential read' ela= 69 p1=9 p2=2449072 p3=1
WAIT #6: nam='db file sequential read' ela= 69 p1=9 p2=2449067 p3=1
WAIT #6: nam='db file sequential read' ela= 73 p1=92 p2=1352412 p3=1
WAIT #6: nam='db file sequential read' ela= 39 p1=90 p2=2089048 p3=1
WAIT #6: nam='db file sequential read' ela= 27 p1=90 p2=2102877 p3=1
WAIT #6: nam='db file sequential read' ela= 28 p1=92 p2=1324163 p3=1
WAIT #6: nam='db file sequential read' ela= 73 p1=136 p2=11798 p3=1
WAIT #6: nam='db file sequential read' ela= 70 p1=9 p2=2449073 p3=1
WAIT #6: nam='db file sequential read' ela= 69 p1=9 p2=2449068 p3=1
WAIT #6: nam='db file sequential read' ela= 9280 p1=92 p2=1516381 p3=1
WAIT #6: nam='db file sequential read' ela= 72 p1=9 p2=2449074 p3=1
WAIT #6: nam='db file sequential read' ela= 30 p1=9 p2=2409403 p3=1
WAIT #6: nam='db file sequential read' ela= 25 p1=9 p2=2415802 p3=1
WAIT #6: nam='db file sequential read' ela= 71 p1=9 p2=2449080 p3=1
WAIT #6: nam='db file sequential read' ela= 11694 p1=90 p2=2130220 p3=1
WAIT #6: nam='db file sequential read' ela= 217 p1=9 p2=2449081 p3=1
WAIT #6: nam='db file sequential read' ela= 29 p1=9 p2=2410567 p3=1
WAIT #6: nam='db file sequential read' ela= 70 p1=9 p2=2449075 p3=1
WAIT #6: nam='db file sequential read' ela= 57 p1=9 p2=2449082 p3=1
WAIT #6: nam='db file sequential read' ela= 64 p1=9 p2=2449077 p3=1
WAIT #6: nam='db file sequential read' ela= 25 p1=9 p2=2449083 p3=1
WAIT #6: nam='db file sequential read' ela= 69 p1=43 p2=860020 p3=1
WAIT #6: nam='db file sequential read' ela= 73 p1=9 p2=2449078 p3=1
WAIT #6: nam='db file sequential read' ela= 31 p1=92 p2=1323566 p3=1
WAIT #6: nam='db file sequential read' ela= 590 p1=92 p2=1698704 p3=1
WAIT #6: nam='db file sequential read' ela= 26 p1=9 p2=2395157 p3=1
WAIT #6: nam='db file sequential read' ela= 10223 p1=9 p2=2642348 p3=1
WAIT #6: nam='db file sequential read' ela= 76 p1=90 p2=2114145 p3=1
WAIT #6: nam='db file sequential read' ela= 1105 p1=90 p2=2118404 p3=1
WAIT #6: nam='db file sequential read' ela= 32 p1=92 p2=1342704 p3=1
WAIT #6: nam='db file sequential read' ela= 2643 p1=90 p2=2118260 p3=1
WAIT #6: nam='db file sequential read' ela= 74 p1=9 p2=2449085 p3=1
WAIT #6: nam='db file sequential read' ela= 59 p1=9 p2=2449079 p3=1
WAIT #6: nam='db file sequential read' ela= 64 p1=136 p2=11799 p3=1
WAIT #6: nam='db file sequential read' ela= 61 p1=9 p2=2449086 p3=1
WAIT #6: nam='db file sequential read' ela= 280 p1=9 p2=2449087 p3=1
WAIT #6: nam='db file sequential read' ela= 634 p1=90 p2=2162638 p3=1
WAIT #6: nam='db file sequential read' ela= 11513 p1=9 p2=2657242 p3=1
WAIT #6: nam='db file sequential read' ela= 112 p1=9 p2=2449092 p3=1
WAIT #6: nam='db file sequential read' ela= 106 p1=90 p2=2118224 p3=1
WAIT #6: nam='db file sequential read' ela= 28 p1=9 p2=2395910 p3=1
WAIT #6: nam='db file sequential read' ela= 27 p1=9 p2=2394417 p3=1
WAIT #6: nam='db file sequential read' ela= 111 p1=9 p2=2449088 p3=1
WAIT #6: nam='db file sequential read' ela= 76 p1=9 p2=2449093 p3=1
WAIT #6: nam='db file sequential read' ela= 82 p1=92 p2=1385298 p3=1
WAIT #6: nam='db file sequential read' ela= 73 p1=9 p2=2449089 p3=1
WAIT #6: nam='db file sequential read' ela= 110 p1=9 p2=2449094 p3=1
WAIT #6: nam='db file sequential read' ela= 79 p1=9 p2=2449090 p3=1
WAIT #6: nam='db file sequential read' ela= 67 p1=9 p2=2449095 p3=1
WAIT #6: nam='db file sequential read' ela= 25 p1=9 p2=2395033 p3=1
WAIT #6: nam='db file sequential read' ela= 27 p1=90 p2=2087329 p3=1
WAIT #6: nam='db file sequential read' ela= 3846 p1=90 p2=2118108 p3=1
WAIT #6: nam='db file sequential read' ela= 49 p1=9 p2=2415780 p3=1
WAIT #6: nam='db file sequential read' ela= 98 p1=9 p2=2449096 p3=1
WAIT #6: nam='db file sequential read' ela= 73 p1=90 p2=2117687 p3=1
WAIT #6: nam='db file sequential read' ela= 63 p1=9 p2=2449102 p3=1
WAIT #6: nam='db file sequential read' ela= 71 p1=9 p2=2449097 p3=1
WAIT #6: nam='db file sequential read' ela= 644 p1=9 p2=2449098 p3=1
WAIT #6: nam='db file sequential read' ela= 97 p1=43 p2=860022 p3=1
WAIT #6: nam='db file sequential read' ela= 74 p1=9 p2=2449103 p3=1
WAIT #6: nam='db file sequential read' ela= 1186 p1=90 p2=2118272 p3=1
WAIT #6: nam='db file sequential read' ela= 207 p1=136 p2=11800 p3=1
WAIT #6: nam='db file sequential read' ela= 30 p1=90 p2=2097444 p3=1
WAIT #6: nam='db file sequential read' ela= 7029 p1=90 p2=2117826 p3=1
WAIT #6: nam='db file sequential read' ela= 33 p1=112 p2=1413019 p3=1
WAIT #6: nam='db file sequential read' ela= 32 p1=9 p2=2449099 p3=1
WAIT #6: nam='db file sequential read' ela= 71 p1=9 p2=2449104 p3=1
WAIT #6: nam='db file sequential read' ela= 77 p1=43 p2=860021 p3=1


Tom Kyte
February 21, 2006 - 7:24 am UTC

hmm.

crystal ball is broken, I cannot see your terminal, so no. I have no information for you.

Tuning Hash Joins

Prerak, February 20, 2006 - 3:56 pm UTC

Sorry for the above reply. It's only db file sequential reads... not the scattered reads.

Prerak

About: "a little question about CBO behaviour"

Dmytro, February 21, 2006 - 4:34 am UTC

It's just amazing how little things can make the major changes that get your code working. Variables rule; PL/SQL functions inside SQL queries must die! :)) No more hints, no more headaches, just simple and plain code. :)
Sorry for not replying for so long - there were other tasks which overwhelmed me, and I was able to return to this problem only now.
Using a variable in place of the direct function call resolved the problem, and our reports now take no longer than a few minutes.
Thanks, Tom. :)

Optimizer in PLAN_TABLE

ina, February 27, 2006 - 6:47 am UTC

Tom, really great tips on this site!

To generate the estimated execution plan of my queries, I use "explain plan for" and then select the information I am interested in from the plan_table.
For the optimizer column of the plan_table, I quite often get "ANALYZED". In the documentation, I found only that there are 4 optimizer modes: RULE, ALL_ROWS, FIRST_ROWS and CHOOSE. So what is meant by "ANALYZED" there?

Thanks for help!
Ina

Tom Kyte
February 27, 2006 - 7:17 am UTC

It means the step you are looking at may have been influenced by the existence of statistics: the object it accessed was analyzed, so statistics were present.
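You can inspect that column yourself after an explain plan; a minimal sketch (assuming a table T1 with a numeric OBJECT_ID column, as in the earlier example):

```sql
explain plan set statement_id = 'demo' for
select * from t1 where object_id = 42;

-- OPTIMIZER typically shows the mode (e.g. CHOOSE) on the statement row
-- and ANALYZED on steps whose underlying object had statistics at parse time.
select id, operation, object_name, optimizer
  from plan_table
 where statement_id = 'demo'
 order by id;
```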

"Simple" Queries on Dictionary Tables Have Slow Parsing

Jay, March 27, 2006 - 5:12 pm UTC

Hi, Tom,

After analyzing the slow performance of a third-party software package on one of our 10gR2 databases, I found that most of its time was spent parsing some seemingly simple queries on data dictionary views. For example:

SELECT PARTITIONED
FROM
SYS.DBA_TABLES WHERE TABLE_NAME='' AND OWNER=''


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 1.83 1.79 0 0 0 0
Execute 4 0.01 0.00 0 0 0 0
Fetch 4 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12 1.84 1.79 0 0 0 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 40


SELECT COMPRESSION
FROM
ALL_TABLES WHERE 1=2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 2.12 2.07 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 2.12 2.08 0 0 0 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 40

I tried several times (after the queries had aged out of the shared pool), once right after the DB was restarted and I was the only logged-in user. The parsing of these queries consistently took a long time (i.e. a couple of seconds), while other apparently more complex queries involving many joins were parsed much faster on the same database. And this packaged software generally works well and fast on our other databases (including both 9i and 10gR2 databases). In particular, the parsing time on another of my databases is negligible when the queries are executed for the first time (not in v$sql), and the two databases are almost identical, except that the fast database is on an 8-CPU x 800 MHz HP machine with 16G memory while the slow one is on a 4-CPU x 200 MHz HP machine with 3G memory (both servers were ~90% idle when I was testing). So my question is: is the speed of the CPUs the determining factor here? If yes, why are those several seemingly simple queries consistently slower to parse than others of similar or greater complexity on the same DB?

Thanks

Tom Kyte
March 27, 2006 - 8:24 pm UTC

Did you think to look at the view definitions underneath those simple queries?

You'll find them to be "rather complex".


Are you using the CBO by force on the other machines? I see ALL_ROWS here - is it by chance "choose" elsewhere - and do you have statistics?


(And wow, "where 1=2", hmm - you'd think these guys could have used - oh, I don't know - the "describe" APIs to figure out the size and shape rather than parsing a query that will never actually be used??)
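The "describe" approach Tom alludes to can be done from PL/SQL with DBMS_SQL; a sketch (the statement is parsed once and its select list described, but it is never executed, so no "where 1=2" dummy query is needed):

```sql
set serveroutput on
declare
  c       integer := dbms_sql.open_cursor;
  col_cnt integer;
  descs   dbms_sql.desc_tab;
begin
  -- parse to learn the shape of the result set; no execute, no fetch
  dbms_sql.parse(c, 'select compression from all_tables', dbms_sql.native);
  dbms_sql.describe_columns(c, col_cnt, descs);
  for i in 1 .. col_cnt loop
    dbms_output.put_line(descs(i).col_name || ' type=' || descs(i).col_type);
  end loop;
  dbms_sql.close_cursor(c);
end;
/
```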

OK

A reader, March 28, 2006 - 3:45 am UTC

Hi Tom,
Could you please provide the links for the TKPROF and explain plan documentation?

Queries on Dictionary Views Have Long Parsing Time

Jay, March 28, 2006 - 9:49 am UTC

By "complex" I meant queries like this (on the same slow DB):

SELECT 'YES'
FROM
DUAL WHERE EXISTS (SELECT 'x' FROM SYS.DBA_TABLES WHERE OWNER = 'DEFCON' AND
TABLE_NAME = 'LU_FREQUENCY' AND TEMPORARY='Y' AND TABLE_NAME LIKE 'RUPD$%')
OR EXISTS (SELECT 'X' FROM SYS.DBA_SNAPSHOT_LOGS WHERE LOG_OWNER = 'DEFCON'
AND LOG_TABLE = 'LU_FREQUENCY')


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.55 0.53 0 0 0 0
Execute 1 0.00 0.00 0 4 0 0
Fetch 1 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.55 0.53 0 4 0 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 60

It also involves a lot of fixed tables and other dictionary views, yet its parsing time is shorter, which makes me think maybe CPU speed is not the only determining factor. (But again, it may be.)

I got a trace from the other (faster) database. I added spaces to the statement to make sure it would be hard parsed:

select compression
from
all_tables where 1 =2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.23 0.23 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 0.23 0.23 0 0 0 0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 51

It also uses ALL_ROWS (both DBs are 10g; I didn't change the default). The execution plan was the same too. However, the parsing was much faster (0.23s vs. >2s).

Tom Kyte
March 28, 2006 - 4:08 pm UTC

tell them BIND VARIABLES ARE GOOD

you will reduce time spent parsing
you will not kill my database instance
you will find things work better.


if they didn't do hard parsing, this would be a non-issue.


Are the stats in the databases "the same"? Do you have them for one and not the other?
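The bind-variable rewrite Tom is advocating is straightforward; a sketch of what the vendor's dictionary query could look like (hypothetical bind names, shown with SQL*Plus bind variables):

```sql
-- One shared cursor for every table, instead of one hard parse per literal.
variable owner varchar2(30)
variable tname varchar2(30)

exec :owner := 'SCOTT'
exec :tname := 'EMP'

select partitioned
  from sys.dba_tables
 where table_name = :tname
   and owner = :owner;
```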

A reader, March 28, 2006 - 2:45 pm UTC

I guess the system tables are analyzed in both databases.

Tom Kyte
March 28, 2006 - 8:13 pm UTC

"guess"?

Re: Queries on Dictionary Views Have Long Parsing Time

Jay, March 29, 2006 - 9:25 am UTC

That's a good catch. Apparently they didn't use bind variables.

Aside from that, the stats on both DBs were collected.

Dynamic Optimization

Chris Slattery, April 10, 2006 - 12:05 pm UTC

Tom, does the optimizer change plans based on instance workload?

The main thing I can see is PGA_AGGREGATE_TARGET... if more users were connected, surely the amount per user would decrease, thereby shrinking sort space, for example, and possibly making a nested-loops join better than a hash join...

Of course, that leads into a discussion of when the optimizer would notice this for a query already in the pool.

I can't see it in my [admittedly quick] scan of the docs.


Tom Kyte
April 11, 2006 - 10:26 am UTC

It will not change the plan based on the current workload, it changes the amount of resources it is willing to let each operation have.
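One way to watch that resource adjustment in action (a sketch against the 9i/10g dynamic performance views) is to look at how much work area each active operation was actually granted:

```sql
-- Per-operation work-area sizing under automatic PGA management:
select operation_type, policy,
       round(actual_mem_used/1024/1024) actual_mb,
       round(work_area_size/1024/1024)  max_mb
  from v$sql_workarea_active;

-- Instance-wide picture of the PGA target and its current consumption:
select name, value
  from v$pgastat
 where name in ('aggregate PGA target parameter',
                'total PGA inuse');
```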

plan_table output,

A reader, April 19, 2006 - 2:50 pm UTC

I have a plan_table created in a 9.2.0.5 database and in a 9.2.0.1 database.

The objects in both databases are IDENTICAL. Even the data.

I did an explain plan for an identical SQL statement using the following technique:
"explain plan set statement_id = 'p1' for"

On a 9.2.0.5 database, I got the following result:

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 56 | 3 |
| 1 | SORT AGGREGATE | | 1 | 56 | |
| 2 | NESTED LOOPS | | 1 | 56 | 3 |
|* 3 | TABLE ACCESS BY INDEX ROWID| INVESTMENT_DATA | 1 | 33 | 2 |
|* 4 | INDEX RANGE SCAN | FNDX_INVEST_DATA_VERSION | 110 | | 1 |
|* 5 | TABLE ACCESS BY INDEX ROWID| INVESTMENT_ATTRIB | 1 | 23 | 1 |
|* 6 | INDEX UNIQUE SCAN | PK_INVESTMENT_ATTRIB | 1 | | |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("IDAT"."PLANNING_CODE"='xx')
4 - access("IDAT"."PORTFOLIO_CODE"='134267' AND "IDAT"."VERSION_NUMBER"=1)
5 - filter("IA"."ATTRIBUTE_TYPE"='MES' AND "IA"."FIELD_TYPE"='C' AND
"IA"."TIME_PHASE_IND"='N' AND "IA"."DEFAULT_POS_NEG"='N')
6 - access("IDAT"."INVEST_MODEL_CODE"="IA"."INVEST_MODEL_CODE" AND
"IDAT"."INVEST_ATTRIB"="IA"."INVEST_ATTRIB")

Note: cpu costing is off

and on the other database (9.2.0.1), the output I got is:

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | NESTED LOOPS | | | | |
|* 3 | TABLE ACCESS BY INDEX ROWID| INVESTMENT_DATA | | | |
|* 4 | INDEX RANGE SCAN | FNDX_INVEST_DATA_VERSION | | | |
|* 5 | TABLE ACCESS BY INDEX ROWID| INVESTMENT_ATTRIB | | | |
|* 6 | INDEX UNIQUE SCAN | PK_INVESTMENT_ATTRIB | | | |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter("IDAT"."PLANNING_CODE"='xx')
4 - access("IDAT"."PORTFOLIO_CODE"='134267' AND "IDAT"."VERSION_NUMBER"=1)
5 - filter("IA"."DEFAULT_POS_NEG"='N' AND "IA"."TIME_PHASE_IND"='N' AND
"IA"."FIELD_TYPE"='C' AND "IA"."ATTRIBUTE_TYPE"='MES')
6 - access("IDAT"."INVEST_MODEL_CODE"="IA"."INVEST_MODEL_CODE" AND
"IDAT"."INVEST_ATTRIB"="IA"."INVEST_ATTRIB")

Note: rule based optimization

Everything above is the same except the "Note" statement at the bottom.

I don't understand what that "Note" means and why it is different.

I checked the optimizer_mode parameter in both the databases and it says "CHOOSE".

Thanks,



Tom Kyte
April 19, 2006 - 5:20 pm UTC

... The objects in both the databases are IDENTICAL. Even the data too. ....

really, you backed up the database and restored it?



you gathered statistics in the one database and are using the cost based optimizer (the one that has the rows column filled in), or you have set the optimizer mode to rule in one and not the other, or you have set the optimizer mode to all/first_rows in one and choose in the other, or .... (eg: something is hugely different)

the other database is using the RBO (rule based optimizer)


these databases are hugely NOT THE SAME, very much NOT IDENTICAL.

follow up,

A reader, April 20, 2006 - 10:26 am UTC

You are absolutely right. The schema where the plan shows the rows column as null was not analyzed.

That is why "Note: rule based optimization" appears at the bottom of the plan.

On the other schema, where the rows column is filled in with values, what does the note "cpu costing is off" mean?

Thanks,

Tom Kyte
April 20, 2006 - 12:30 pm UTC

means you are on 9i, where cpu costing (on by default in 10g) is off. It is a different computation method for the cost based optimizer.
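In 9i, gathering system statistics is what turns CPU costing on (it is on by default in 10g). A minimal sketch, assuming a representative workload is available to sample:

```sql
-- begin capturing workload statistics (cpu speed, single/multiblock I/O times)
exec dbms_stats.gather_system_stats(gathering_mode => 'START')

-- ... let a representative workload run for a while ...

-- stop capturing; from here on the optimizer uses CPU costing
exec dbms_stats.gather_system_stats(gathering_mode => 'STOP')
```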

Reading Explain Plan

Sanji, May 05, 2006 - 4:01 pm UTC

Hello Tom,

This is for academic purposes.
I am trying to interpret the following explain plan. (The query though, needs to be changed)

SELECT :B1 , 'NULL',PARENT_TABLE_NAME,PARENT_COLUMN_NAME,CHILD_TABLE_NAME,CHILD_COLUMN_NAME
FROM CNTESTBED.SDE_BUILD_FK_CONSTRAINTS T ,
CNTESTBED.SDE_WORK_TAB_COLUMNS P1 ,
CNTESTBED.SDE_WORK_TAB_COLUMNS P2
WHERE BUILD_ID = :B2
AND P1.TABLE_NAME = T.PARENT_TABLE_NAME
AND P1.COLUMN_NAME = T.PARENT_COLUMN_NAME
AND P1.REQUEST_ID = :B1
AND P2.TABLE_NAME = T.CHILD_TABLE_NAME
AND P2.COLUMN_NAME = T.CHILD_COLUMN_NAME
AND P2.REQUEST_ID = :B1
UNION
SELECT :B1 , 'NULL',PARENT_TABLE_NAME,PARENT_COLUMN_NAME,CHILD_TABLE_NAME,CHILD_COLUMN_NAME
FROM CNTESTBED.SDE_BUILD_FK_CONSTRAINTS S ,
CNTESTBED.SDE_WORK_TAB_COLUMNS P1 ,
CNTESTBED.SDE_WORK_TAB_COLUMNS P2
WHERE BUILD_ID = :B3
AND P1.TABLE_NAME = S.PARENT_TABLE_NAME
AND P1.COLUMN_NAME = S.PARENT_COLUMN_NAME
AND P1.REQUEST_ID = :B1
AND P2.TABLE_NAME = S.CHILD_TABLE_NAME
AND P2.COLUMN_NAME = S.CHILD_COLUMN_NAME
AND P2.REQUEST_ID = :b1



---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 240 | 404 (52)| 00:00:05 |
| 1 | SORT UNIQUE | | 2 | 240 | 404 (52)| 00:00:05 |
| 2 | UNION-ALL | | | | | |
| 3 | NESTED LOOPS | | 1 | 120 | 201 (2)| 00:00:03 |
|* 4 | HASH JOIN | | 14 | 1232 | 173 (3)| 00:00:03 |
|* 5 | TABLE ACCESS FULL| SDE_BUILD_FK_CONSTRAINTS | 665 | 37240 | 40 (5)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | SDE_WORK_TAB_COLUMNS_UNQ1 | 7957 | 248K| 132 (1)| 00:00:02 |
|* 7 | INDEX RANGE SCAN | SDE_WORK_TAB_COLUMNS_UNQ1 | 1 | 32 | 2 (0)| 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 120 | 201 (2)| 00:00:03 |
|* 9 | HASH JOIN | | 14 | 1232 | 173 (3)| 00:00:03 |
|* 10 | TABLE ACCESS FULL| SDE_BUILD_FK_CONSTRAINTS | 665 | 37240 | 40 (5)| 00:00:01 |
|* 11 | INDEX RANGE SCAN | SDE_WORK_TAB_COLUMNS_UNQ1 | 7957 | 248K| 132 (1)| 00:00:02 |
|* 12 | INDEX RANGE SCAN | SDE_WORK_TAB_COLUMNS_UNQ1 | 1 | 32 | 2 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("P1"."TABLE_NAME"="T"."PARENT_TABLE_NAME" AND
"P1"."COLUMN_NAME"="T"."PARENT_COLUMN_NAME")
5 - filter("BUILD_ID"=TO_NUMBER(:B2))
6 - access("P1"."REQUEST_ID"=TO_NUMBER(:B1))
7 - access("P2"."REQUEST_ID"=TO_NUMBER(:B1) AND
"P2"."TABLE_NAME"="T"."CHILD_TABLE_NAME" AND "P2"."COLUMN_NAME"="T"."CHILD_COLUMN_NAME")
9 - access("P1"."TABLE_NAME"="S"."PARENT_TABLE_NAME" AND
"P1"."COLUMN_NAME"="S"."PARENT_COLUMN_NAME")
10 - filter("BUILD_ID"=TO_NUMBER(:B3))
11 - access("P1"."REQUEST_ID"=TO_NUMBER(:B1))
12 - access("P2"."REQUEST_ID"=TO_NUMBER(:B1) AND
"P2"."TABLE_NAME"="S"."CHILD_TABLE_NAME" AND "P2"."COLUMN_NAME"="S"."CHILD_COLUMN_NAME")

Would the tree structure be like this

              2
            /   \
          3       8
         / \     / \
        4   7   9   12
       / \     / \
      5   6   10  11

or would it be like this


              2
            /   \
          3       8
         / \     / \
        4   7  12    9
       / \    / \
      5   6  11   10



Thanks
Sanji

Tree structure

Sanji, May 24, 2006 - 3:47 pm UTC

I believe there was a mistake in the earlier post.
Could you please clarify this doubt: would the structure be like this

              2
            /   \
          3       8
         / \     / \
        4   7   9   12
       / \     / \
      5   6   10  11



or would it be like this


              2
            /   \
          3       8
         / \     / \
        4   7  12    9
       / \    / \
      5   6  11   10


Thanks
Sanji

Tom Kyte
May 25, 2006 - 1:15 pm UTC

what's the difference conceptually between them...

same order of steps in both.

Interpreting Explain plan

Sanji, May 30, 2006 - 2:34 pm UTC

:) That's precisely what I was trying to figure out. I checked the Oracle documentation and Effective Oracle by Design, but the examples there are the same as the ones in this forum (unless I missed other examples).
Could you please direct me to a link/study material where I can get information on interpreting "complex" explain plans.

Regards
Sanji

Tom Kyte
May 30, 2006 - 6:46 pm UTC

but this isn't "complex" - it is a simple tree?

with hash joins, we can decide to change the order mid-stream (driving table) so top or bottom doesn't really matter technically.



nvl function used on primary keys

A reader, June 07, 2006 - 7:36 am UTC

Hi

I am checking SQL statements in our application that perform very badly. I have identified the problem but I can't give my developers a solution; I am hoping you could throw some light on it.

The problem is that in the application users are forced to enter some values in a form, but not all of the values are compulsory. This led to a situation where, to query a table (using the EMP table as an example), our developers did this:

select *
from emp
where empno = nvl(?, empno)
and ename = nvl(?, ename);

The NVL function changes the execution plan dramatically. Obviously this doesn't seem logical; empno is a primary key, so how on earth will it ever get null values? The answer my developers gave me is that sometimes users are not forced to enter all of the primary key values.

How can we overcome this problem? Should we apply NVL in the application (before sending the SQL statement)?


Wrong execution plan after analyze table

Branka, June 07, 2006 - 2:28 pm UTC

Tom,
I have a strange situation and hope that you can help me.
I have 2 tables. If I delete the statistics on both of them, a query that joins them runs in 00:00:00.02 sec.
If I analyze the "big" table, it runs in the same time with the same execution plan.
When I analyze the second table, it runs in 00:00:08.08 sec and uses a full table scan on the "big" (3.5 million row) table.
This is in DEV, and nobody uses either of these tables.

14:00:11 pacr@PACRD> exec DBMS_STATS.DELETE_TABLE_STATS ( ownname=>'PACR', tabname =>'P_SECURITY_REVIEW_QUEUE')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
14:00:31 pacr@PACRD> exec DBMS_STATS.DELETE_TABLE_STATS ( ownname=>'PACR', tabname =>'P_APPLICATION')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
14:00:57 pacr@PACRD> SELECT count(a.APPLICATION_SERIAL_NUMBER)
14:01:02 2 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
14:01:02 3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
14:01:02 4 WHERE ap.SECURITY_STATUS = 2
14:01:02 5 AND ( a.assignment_classification = 'E')
14:01:02 6 AND ( a.FK_SL_LANG_CD = 'EN')
14:01:03 7 /

COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963

Elapsed: 00:00:00.02

14:01:10 pacr@PACRD>
14:01:10 pacr@PACRD> explain plan for
14:01:22 2 SELECT count(a.APPLICATION_SERIAL_NUMBER)
14:01:22 3 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
14:01:22 4 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
14:01:22 5 WHERE ap.SECURITY_STATUS = 2
14:01:22 6 AND ( a.assignment_classification = 'E')
14:01:22 7 AND ( a.FK_SL_LANG_CD = 'EN')
14:01:23 8 /

Explained.

Elapsed: 00:00:00.00
14:01:24 pacr@PACRD>
14:01:24 pacr@PACRD> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | SORT AGGREGATE | | | | |
| 2 | NESTED LOOPS | | | | |
|* 3 | TABLE ACCESS FULL | P_SECURITY_REVIEW_QUEUE | | | |
|* 4 | TABLE ACCESS BY INDEX ROWID| P_APPLICATION | | | |
|* 5 | INDEX UNIQUE SCAN | PK_P_APPLICATION | | | |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter(TO_NUMBER("AP"."SECURITY_STATUS")=2)
4 - filter("A"."FK_SL_LANG_CD"='EN' AND "A"."ASSIGNMENT_CLASSIFICATION"='E')
5 - access("AP"."APPLICATION_SERIAL_NUMBER"="A"."APPLICATION_SERIAL_NUMBER")

Note: rule based optimization

20 rows selected.

Elapsed: 00:00:00.05
14:01:40 pacr@PACRD> exec dbms_stats.gather_table_stats( ownname=>'PACR', tabname =>'P_APPLICATION', cascade=>true ,method_opt => 'FOR ALL COLUMNS SIZE AUTO')

PL/SQL procedure successfully completed.

Elapsed: 00:09:31.05
14:11:48 pacr@PACRD>
14:11:48 pacr@PACRD>
14:11:49 pacr@PACRD> SELECT count(a.APPLICATION_SERIAL_NUMBER)
14:14:05 2 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
14:14:05 3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
14:14:05 4 WHERE ap.SECURITY_STATUS = 2
14:14:05 5 AND ( a.assignment_classification = 'E')
14:14:05 6 AND ( a.FK_SL_LANG_CD = 'EN');

COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963

Elapsed: 00:00:00.02

14:14:09 pacr@PACRD> explain plan for
14:14:16 2 SELECT count(a.APPLICATION_SERIAL_NUMBER)
14:14:17 3 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
14:14:17 4 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
14:14:17 5 WHERE ap.SECURITY_STATUS = 2
14:14:17 6 AND ( a.assignment_classification = 'E')
14:14:17 7 AND ( a.FK_SL_LANG_CD = 'EN')
14:14:17 8 /

Explained.

Elapsed: 00:00:00.01
14:14:19 pacr@PACRD> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 31 | 84 |
| 1 | SORT AGGREGATE | | 1 | 31 | |
| 2 | NESTED LOOPS | | 39 | 1209 | 84 |
|* 3 | TABLE ACCESS FULL | P_SECURITY_REVIEW_QUEUE | 39 | 663 | 6 |
|* 4 | TABLE ACCESS BY INDEX ROWID| P_APPLICATION | 1 | 14 | 2 |
|* 5 | INDEX UNIQUE SCAN | PK_P_APPLICATION | 1 | | 1 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter(TO_NUMBER("AP"."SECURITY_STATUS")=2)
4 - filter("A"."ASSIGNMENT_CLASSIFICATION"='E' AND "A"."FK_SL_LANG_CD"='EN')
5 - access("AP"."APPLICATION_SERIAL_NUMBER"="A"."APPLICATION_SERIAL_NUMBER")

Note: cpu costing is off

20 rows selected.

Elapsed: 00:00:00.05
14:14:34 pacr@PACRD> exec dbms_stats.gather_table_stats( ownname=>'PACR', tabname =>'P_SECURITY_REVIEW_QUEUE', cascade=>true ,method_opt => 'FOR ALL COLUMNS SIZE AUTO')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.07
14:17:02 pacr@PACRD>
14:17:02 pacr@PACRD> SELECT count(a.APPLICATION_SERIAL_NUMBER)
14:17:11 2 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
14:17:11 3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
14:17:11 4 WHERE ap.SECURITY_STATUS = 2
14:17:11 5 AND ( a.assignment_classification = 'E')
14:17:11 6 AND ( a.FK_SL_LANG_CD = 'EN')
14:17:11 7
14:17:12 pacr@PACRD> /

COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963

Elapsed: 00:00:08.08
14:17:22 pacr@PACRD> explain plan for
14:17:31 2 SELECT count(a.APPLICATION_SERIAL_NUMBER)
14:17:32 3 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
14:17:32 4 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
14:17:32 5 WHERE ap.SECURITY_STATUS = 2
14:17:32 6 AND ( a.assignment_classification = 'E')
14:17:32 7 AND ( a.FK_SL_LANG_CD = 'EN')
14:17:32 8 /

Explained.

Elapsed: 00:00:00.00
14:17:33 pacr@PACRD> select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------------------------------------------

---------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
---------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 3206 |
| 1 | SORT AGGREGATE | | 1 | 25 | |
|* 2 | HASH JOIN | | 4097 | 100K| 3206 |
|* 3 | TABLE ACCESS FULL | P_SECURITY_REVIEW_QUEUE | 4097 | 45067 | 6 |
|* 4 | TABLE ACCESS FULL | P_APPLICATION | 570K| 7797K| 3126 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("AP"."APPLICATION_SERIAL_NUMBER"="A"."APPLICATION_SERIAL_NUMBER")
3 - filter(TO_NUMBER("AP"."SECURITY_STATUS")=2)
4 - filter("A"."ASSIGNMENT_CLASSIFICATION"='E' AND "A"."FK_SL_LANG_CD"='EN')

Note: cpu costing is off

19 rows selected.

Elapsed: 00:00:00.05
14:17:40 pacr@PACRD> spool off



Tom Kyte
June 07, 2006 - 3:40 pm UTC

are the estimated cardinalities even close to the "real" ones?

suggest you do not use method_opt at all to start.

Wrong execution plan after analyze table

Branka, June 07, 2006 - 4:17 pm UTC

It didn't help.
Why not use method_opt at all?
What else can I try (other than not having statistics on that table)?
Thanks
Branka

16:08:20 pacr@PACRD> exec DBMS_STATS.DELETE_TABLE_STATS ( ownname=>'PACR', tabname =>'P_SECURITY_REVIEW_QUEUE')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
16:08:22 pacr@PACRD> exec dbms_stats.gather_table_stats( ownname=>'PACR', tabname =>'P_SECURITY_REVIEW_QUEUE', cascade=>true )

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
16:08:35 pacr@PACRD> SELECT count(a.APPLICATION_SERIAL_NUMBER)
16:08:41 2 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
16:08:41 3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
16:08:41 4 WHERE ap.SECURITY_STATUS = 2
16:08:41 5 AND ( a.assignment_classification = 'E')
16:08:41 6 AND ( a.FK_SL_LANG_CD = 'EN')
16:08:41 7 /

COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963

Elapsed: 00:00:07.09
16:08:50 pacr@PACRD> select count(*) from p_APPLICATION;

COUNT(*)
----------
3533001

Elapsed: 00:00:02.07
select count(*) from p_APPLICATION
where assignment_classification = 'E'
and FK_SL_LANG_CD = 'EN';

COUNT(*)
----------
570487

select count(*) from P_SECURITY_REVIEW_QUEUE
where SECURITY_STATUS = 2;

COUNT(*)
----------
4097


Tom Kyte
June 07, 2006 - 5:38 pm UTC

can you please tell us if the estimated cardinalities on each step of the plan *are anywhere close to accurate*.




Wrong execution plan after analyze table

Branka, June 07, 2006 - 5:46 pm UTC

Estimated cardinalities are accurate. I put all the selects in my response.

Tom Kyte
June 07, 2006 - 6:36 pm UTC

umm, well if one plan says "39" and the other says "4097"

?

It is estimating:

|* 4 | TABLE ACCESS FULL | P_APPLICATION | 570K| 7797K| 3126 |
---------------------------------------------------------------------------------


4 - filter("A"."ASSIGNMENT_CLASSIFICATION"='E' AND "A"."FK_SL_LANG_CD"='EN')


does that where clause retrieve 570,000 records or ??

Wrong execution plan after analyze table

branka, June 07, 2006 - 6:58 pm UTC

First plan is without statistics on P_SECURITY_REVIEW_QUEUE and P_APPLICATION table.
Second plan is with statistics on P_APPLICATION table.
Third plan is with statistics on both table.
Third plan has all cardinality correct.

pacr@PACRD> select count(*) from p_APPLICATION
2 where assignment_classification = 'E'
3 and FK_SL_LANG_CD = 'EN';


COUNT(*)
----------
570487


Tom Kyte
June 07, 2006 - 7:07 pm UTC

so, is 39 or 4097 right for that other predicate?

Wrong execution plan

Branka, June 07, 2006 - 7:12 pm UTC

4097 is correct value for other predicate.

3 - filter(TO_NUMBER("AP"."SECURITY_STATUS")=2)

select count(*) from P_SECURITY_REVIEW_QUEUE
where SECURITY_STATUS = 2;

COUNT(*)
----------
4097


Tom Kyte
June 07, 2006 - 8:24 pm UTC

ok, can we see a tkprof of both the "good and the bad"

nvl function used on primary keys

A reader, June 08, 2006 - 3:28 am UTC

Hi

I have read the link you posted but I can't find useful information in it. Do you mean we should use dynamic SQL?

TIA

Tom Kyte
June 08, 2006 - 9:08 am UTC

yes.

It shows how to dynamically construct the predicate based on the non-null inputs to a query, using binding.
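That approach can be sketched like this (procedure and parameter names are hypothetical; the `(1 = 1 or :bv is null)` branch keeps the bind count constant so every version of the statement binds every input):

```sql
create or replace procedure get_emps( p_empno in number,
                                      p_ename in varchar2 )
as
    l_query  varchar2(4000) := 'select * from emp where 1 = 1';
    l_cursor sys_refcursor;
begin
    -- append a real predicate only for the inputs actually supplied
    if p_empno is not null then
        l_query := l_query || ' and empno = :p_empno';
    else
        l_query := l_query || ' and (1 = 1 or :p_empno is null)';
    end if;

    if p_ename is not null then
        l_query := l_query || ' and ename = :p_ename';
    else
        l_query := l_query || ' and (1 = 1 or :p_ename is null)';
    end if;

    open l_cursor for l_query using p_empno, p_ename;
    -- ... fetch from l_cursor and process the rows ...
    close l_cursor;
end;
/
```

Each combination of supplied inputs gets its own sharable statement with sargable predicates, instead of one plan that must cope with `nvl()` wrapped around every column.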

Wrong execution plan

Branka, June 08, 2006 - 2:27 pm UTC

"GOOD"
SELECT count(a.APPLICATION_SERIAL_NUMBER)
FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
WHERE ap.SECURITY_STATUS = 2
AND ( a.assignment_classification = 'E')
AND ( a.FK_SL_LANG_CD = 'EN')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.11 0.10 0 12338 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.12 0.11 0 12338 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
1963 NESTED LOOPS
4097 TABLE ACCESS FULL P_SECURITY_REVIEW_QUEUE
1963 TABLE ACCESS BY INDEX ROWID P_APPLICATION
4096 INDEX UNIQUE SCAN PK_P_APPLICATION (object id 41898)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 15.10 15.10


"BAD"


SELECT count(a.APPLICATION_SERIAL_NUMBER)
FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
WHERE ap.SECURITY_STATUS = 2
AND ( a.assignment_classification = 'E')
AND ( a.FK_SL_LANG_CD = 'EN')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 8.59 11.75 28264 32310 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 8.59 11.76 28264 32310 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 30

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
1963 HASH JOIN
4097 TABLE ACCESS FULL OBJ#(41896)
570487 TABLE ACCESS FULL OBJ#(41895)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file scattered read 2445 0.03 3.84
db file sequential read 606 0.01 0.33
SQL*Net message from client 2 27.81 27.81



Tom Kyte
June 08, 2006 - 3:41 pm UTC

ok, what are your optimizer-related parameters set to (how about a list of all the non-default ones)?

wrong execution plan

Branka, June 08, 2006 - 4:02 pm UTC

Paramater Name Value
aq_tm_processes 1
audit_file_dest XXXXXXX
audit_trail OS
background_dump_dest XXXXXX
compatible 9.2.0.0.0
control_files XXXXXXX
core_dump_dest XXXXXX
db_block_size 8192
db_cache_size 100663296
db_domain FALSE
db_file_multiblock_read_count 16
db_name XXXX
dblink_encrypt_login TRUE
fast_start_mttr_target 300
hash_join_enabled TRUE
instance_name XXXX
java_pool_size 83886080
job_queue_processes 10
large_pool_size 16777216
open_cursors 300
pga_aggregate_target 25165824
processes 150
query_rewrite_enabled FALSE
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
shared_pool_size 285212672
sort_area_size 524288
star_transformation_enabled FALSE
timed_statistics TRUE
undo_management AUTO
undo_retention 10800
undo_tablespace UNDOTBS1
user_dump_dest XXXX
utl_file_dir XXXX


Tom Kyte
June 08, 2006 - 4:07 pm UTC

Ok, in this case - everything looks "OK" - it did the right thing, but right now it believes all IO is going to be physical IO (and it is the physical IO that is the big time sink here).

It sees the 5,000 index range scans+table access by index rowid steps and says "cheaper to full scan", but in your case - it is not.

Have you any system statistics (NOT stats on sys owned tables, but statistics on the cpu speed, single block IO, multiblock IO times?)

Are you predominantly "a transactional system" or "a warehouse/reporting system"?
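Whether workload system statistics exist can be checked in the dictionary; a sketch (SYS.AUX_STATS$ is where DBMS_STATS stores them; access to SYS objects required):

```sql
-- null PVAL1 for SREADTIM / MREADTIM / CPUSPEED means no workload
-- system statistics have been gathered
select pname, pval1
  from sys.aux_stats$
 where sname = 'SYSSTATS_MAIN';
```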

Wrong execution plan

Branka, June 08, 2006 - 4:15 pm UTC

I don't have any system statistics.
We are predominantly "a transactional system"
I don't understand why it makes the wrong decision only on these 2 tables, out of 200-300 tables.
Do you suggest collecting system statistics?

Tom Kyte
June 08, 2006 - 7:59 pm UTC

wait, one more thing I'd like to see

the plan with statistics on both, but using a first_rows or INDEX/USE_NL hint to get the preferred plan - to compare to the default.


It would be interesting to TEST system statistics
and we might, maybe, consider adjusting optimizer_index* parameters.

Wrong execution plan

Branka, June 08, 2006 - 4:32 pm UTC

I changed optimizer_index_cost_adj and optimizer_index_caching; here is the result.

pacr@PACRD>
pacr@PACRD> SELECT count(a.APPLICATION_SERIAL_NUMBER)
2 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
4 WHERE ap.SECURITY_STATUS = 2
5 AND ( a.assignment_classification = 'E')
6 AND ( a.FK_SL_LANG_CD = 'EN')
7
pacr@PACRD>
pacr@PACRD> set time on
16:26:48 pacr@PACRD> set timing on
16:26:51 pacr@PACRD> /


COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963

Elapsed: 00:00:08.01
16:27:01 pacr@PACRD>
16:27:01 pacr@PACRD> alter session set optimizer_index_caching =40
16:27:29 2 ;

Session altered.

Elapsed: 00:00:00.00
16:27:31 pacr@PACRD> alter session set optimizer_index_cost_adj =40;

Session altered.

Elapsed: 00:00:00.00
16:27:40 pacr@PACRD> SELECT count(a.APPLICATION_SERIAL_NUMBER)
16:27:43 2 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
16:27:47 3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
16:27:47 4 WHERE ap.SECURITY_STATUS = 2
16:27:47 5 AND ( a.assignment_classification = 'E')
16:27:47 6 AND ( a.FK_SL_LANG_CD = 'EN');


COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963


Tom Kyte
June 08, 2006 - 8:00 pm UTC

yes, i know that will do it, but.... it affects lots of (every) other plans too.

Wrong execution plan

Branka, June 08, 2006 - 4:37 pm UTC

Forgot timing row.
16:27:40 pacr@PACRD> SELECT count(a.APPLICATION_SERIAL_NUMBER)
16:27:43 2 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
16:27:47 3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
16:27:47 4 WHERE ap.SECURITY_STATUS = 2
16:27:47 5 AND ( a.assignment_classification = 'E')
16:27:47 6 AND ( a.FK_SL_LANG_CD = 'EN');


COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963

Elapsed: 00:00:00.03
16:27:49 pacr@PACRD> explain plan for
16:29:41 2 SELECT count(a.APPLICATION_SERIAL_NUMBER)
16:29:49 3 FROM p_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
16:29:53 4 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
16:29:53 5 WHERE ap.SECURITY_STATUS = 2
16:29:53 6 AND ( a.assignment_classification = 'E')
16:29:53 7 AND ( a.FK_SL_LANG_CD = 'EN')
16:29:53 8 /

Explained.

Elapsed: 00:00:00.00
16:29:54 pacr@PACRD> select * from table(dbms_xplan.display());


PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 1645 |
| 1 | SORT AGGREGATE | | 1 | 25 | |
| 2 | NESTED LOOPS | | 4097 | 100K| 1645 |
|* 3 | TABLE ACCESS FULL | P_SECURITY_REVIEW_QUEUE | 4097 | 45067 | 6 |
|* 4 | TABLE ACCESS BY INDEX ROWID| P_APPLICATION | 1 | 14 | 1 |
|* 5 | INDEX UNIQUE SCAN | PK_P_APPLICATION | 1 | | |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter(TO_NUMBER("AP"."SECURITY_STATUS")=2)
4 - filter("A"."ASSIGNMENT_CLASSIFICATION"='E' AND "A"."FK_SL_LANG_CD"='EN')
5 - access("AP"."APPLICATION_SERIAL_NUMBER"="A"."APPLICATION_SERIAL_NUMBER")

Note: cpu costing is off

voodoo

voodoo, June 08, 2006 - 6:56 pm UTC

Maybe set optimizer_index_caching way up to 95 and see if that gets rid of the full table scan. The general idea being that a transaction-oriented system would be more dependent on indexes.

Wrong execution plan

Branka, June 12, 2006 - 10:34 am UTC

So what is the solution?
I tried the same query in the production database (same data as in dev), but it won't change the plan.
I tried several values of optimizer_index_caching and optimizer_index_cost_adj, and it didn't help.

Wrong execution plan

Branka, June 13, 2006 - 1:57 pm UTC

What will happen with this query when we move to Oracle 10g?
I understand that all tables have to be analyzed in 10g.

Tom Kyte
June 13, 2006 - 5:03 pm UTC

dynamic sampling will kick in if you have not analyzed a table
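Dynamic sampling can also be requested explicitly, session-wide or per query; a sketch (level 2 samples unanalyzed tables, higher levels sample more blocks; the alias and level here are illustrative):

```sql
alter session set optimizer_dynamic_sampling = 2;

-- or per statement, for one table alias
select /*+ dynamic_sampling(a 4) */ count(*)
  from p_application a
 where a.assignment_classification = 'E';
```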

Wrong execution plan

Branka, June 15, 2006 - 9:29 am UTC

Can you suggest what to do with this table for now (the table that produces a bad execution plan once it has statistics)?
Is it better to leave that table without statistics than to set up the monitoring attribute for all of the other tables plus a DBMS_JOB that gathers stale statistics?
What will happen once we move to 10g, where we don't have the monitoring parameter? Will I need to make sure that all tables are set to nomonitoring, or will Oracle do it for me?


Tom Kyte
June 15, 2006 - 4:34 pm UTC

have you tried histograms so it "understands" this really badly skewed data you have produced by using this "fake" date?

(do you have Jonathan Lewis's book, Cost-Based Oracle Fundamentals? You'll want it)
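A histogram on just the skewed columns can be requested explicitly; a sketch using the column names from this thread (the bucket count of 254 is illustrative):

```sql
begin
    dbms_stats.gather_table_stats(
        ownname    => 'PACR',
        tabname    => 'P_APPLICATION',
        method_opt => 'for columns assignment_classification size 254, ' ||
                      'fk_sl_lang_cd size 254',
        cascade    => true );
end;
/
```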

Wrong execution plan

Branka, June 19, 2006 - 12:19 pm UTC

histograms didn't help.
execute dbms_stats.gather_table_stats(ownname => 'PACR',tabname =>'P_APPLICATION', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'for all columns size auto', degree => DBMS_STATS.DEFAULT_DEGREE);

INDEX/USE_NL hint did help
explain plan for
2 SELECT /*+ INDEX (a, PK_P_APPLICATION) USE_NL (a b) */ count(a.APPLICATION_SERIAL_NUMBER)
3 FROM P_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
4 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
5 WHERE ap.SECURITY_STATUS = 2
6 AND ( a.assignment_classification = 'E')
7 AND ( a.FK_SL_LANG_CD = 'EN')
8 /

Explained.

Elapsed: 00:00:00.00
pacr@PACRD> select * from table(dbms_xplan.display());


PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 25 | 8200 |
| 1 | SORT AGGREGATE | | 1 | 25 | |
| 2 | NESTED LOOPS | | 4097 | 100K| 8200 |
|* 3 | TABLE ACCESS FULL | P_SECURITY_REVIEW_QUEUE | 4097 | 45067 | 6 |
|* 4 | TABLE ACCESS BY INDEX ROWID| P_APPLICATION | 1 | 14 | 2 |
|* 5 | INDEX UNIQUE SCAN | PK_P_APPLICATION | 1 | | 1 |
------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - filter(TO_NUMBER("AP"."SECURITY_STATUS")=2)
4 - filter("A"."ASSIGNMENT_CLASSIFICATION"='E' AND "A"."FK_SL_LANG_CD"='EN')
5 - access("AP"."APPLICATION_SERIAL_NUMBER"="A"."APPLICATION_SERIAL_NUMBER")

Note: cpu costing is off

20 rows selected.

Elapsed: 00:00:00.04
pacr@PACRD> SELECT /*+ INDEX (a, PK_P_APPLICATION) USE_NL (a b) */ count(a.APPLICATION_SERIAL_NUMBER)
2 FROM P_APPLICATION a join P_SECURITY_REVIEW_QUEUE ap
3 ON ap.APPLICATION_SERIAL_NUMBER = a.APPLICATION_SERIAL_NUMBER
4 WHERE ap.SECURITY_STATUS = 2
5 AND ( a.assignment_classification = 'E')
6 AND ( a.FK_SL_LANG_CD = 'EN');


COUNT(A.APPLICATION_SERIAL_NUMBER)
----------------------------------
1963

Elapsed: 00:00:00.01

Wrong execution plan

Branka, June 20, 2006 - 2:05 pm UTC

I attached the wrong dbms_stats command.
I wanted to say that this one didn't help.

execute dbms_stats.gather_schema_stats(
ownname => 'PACR',
estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
method_opt => 'for all columns size skewonly',
degree => DBMS_STATS.DEFAULT_DEGREE);

One more question.
Does book Cost-Based Oracle Fundamentals apply to Oracle 10g database?

Tom Kyte
June 21, 2006 - 9:36 am UTC

well, is that the dbms_stats command I asked you to use?

yes, the book applies to 10g and before.

Number of rows fetched

Vikram Romeo, June 22, 2006 - 4:02 pm UTC

Hi Tom,

When a long-running query is executing, how do we find out how many rows have been fetched so far? Is there any V$ table for finding that out?

Appreciate your response.

Regards,
Vikram Romeo

Tom Kyte
June 22, 2006 - 4:21 pm UTC

the client fetching from the cursor can see that, but it is not really exposed outside of that client.

Number of rows fetched

Vikram Romeo, June 22, 2006 - 4:43 pm UTC

Thanks for the prompt response Tom,

But I am just running a SQL query. I am not opening any cursor explicitly in PL/SQL or anything, just a simple SQL statement.

I want to know whether my SQL is doing any activity or is it just "idle".

Any ideas Tom?

Regards,
Vikram Romeo

Tom Kyte
June 23, 2006 - 9:47 am UTC

select status from v$session

will return whether you are active, inactive, using a shared server (active) or not using a shared server and so on....
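For operations the server expects to run longer than a few seconds (full scans, hash joins, sorts), V$SESSION_LONGOPS can also show progress. A hedged sketch, assuming you have access to the V$ views; the SID value 123 is a placeholder:

```sql
-- which sessions are currently active
select sid, serial#, status, username
  from v$session
 where username is not null;

-- progress of long-running operations in a given session (123 is a placeholder SID)
select opname, target, sofar, totalwork,
       round(100 * sofar / totalwork, 1) pct_done
  from v$session_longops
 where sid = 123
   and totalwork > 0;
```

Note this shows the progress of individual long operations within the plan, not the number of rows fetched by the client.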

Tuning in development, test, production

Jdam, June 28, 2006 - 8:48 am UTC

Hi Tom, I have the same concerns about tuning SQL in development and production, for the reasons already mentioned by Schesser. Could you elaborate more on your answer: "If you use CBO -- the plans can and will change. They can change from day to day."

Thanks
Jdam

Tom Kyte
June 28, 2006 - 9:08 am UTC

what more can we say? The CBO develops plans based on statistics. As statistics change, plans will change.
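To make that concrete, a minimal sketch (the table name T is a placeholder): regathering statistics can change the plan at the next hard parse, and on 10g statistics can be locked when plan stability matters more than freshness:

```sql
-- regather statistics; subsequent hard parses may produce a different plan
exec dbms_stats.gather_table_stats(user, 'T');

-- 10g: pin the current statistics if you want to prevent plan drift from regathering
exec dbms_stats.lock_table_stats(user, 'T');
```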

Tuning in development, test, production

emob, June 28, 2006 - 10:16 am UTC

You suggested using:
Select status from v$session

I got back:
9:38:49 AM Start SQL Editor Execution ...
9:38:49 AM Processing ...
9:38:49 AM select status from v$session
9:38:49 AM *
9:38:49 AM ORA-00942: table or view does not exist
9:38:49 AM *** Script stopped due to error ***

Is this a rights problem?

Thanks.

Tom Kyte
June 28, 2006 - 10:34 am UTC

well, you sort of need to have access to the underlying view.

v$session is something you need to have been granted access to, and apparently, you have not been.
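The grant itself has to be made against the underlying SYS view, since V$SESSION is a public synonym for SYS.V_$SESSION. A sketch, run as a suitably privileged user; EMOB is a placeholder account name:

```sql
-- V$SESSION resolves to SYS.V_$SESSION, so that is what gets granted
grant select on sys.v_$session to emob;
```

Alternatively, the SELECT_CATALOG_ROLE role covers the V$ views, though it is much broader than a single grant.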

SQL stmt

Abid Ali khan, July 06, 2006 - 5:05 am UTC

hi tom, its Ali here, can u plz answer my query.
there are 3 records
Number Country
8688686886 XYZ
8686886 ABC
68686886 CDF

so the query is:
U have to find all the data related to a certain number. If u pass the number 868, v have to get the first 2 records.


Tom Kyte
July 08, 2006 - 9:38 am UTC

"U" isn't here. I've yet to meet this "U" individual - big backlog of requests for their time.

Read up on "LIKE", you'll convert the number to a string and use like

where to_char(num) like :x || '%'

bind in 868
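A worked sketch of the approach; the table and column names are made up to match the question's data:

```sql
create table phone_numbers (num number, country varchar2(10));

insert into phone_numbers values (8688686886, 'XYZ');
insert into phone_numbers values (8686886,    'ABC');
insert into phone_numbers values (68686886,   'CDF');

variable x varchar2(20)
exec :x := '868'

-- returns the XYZ and ABC rows, whose numbers start with 868
select num, country
  from phone_numbers
 where to_char(num) like :x || '%';
```

Note that applying TO_CHAR to the column defeats any plain index on NUM; a function-based index on TO_CHAR(num) could help if this predicate is frequent.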

A reader, August 02, 2006 - 11:19 am UTC

Tom

I have a query plan from a SQL*Plus autotrace as following:

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=RULE
1 0 SORT (ORDER BY)
2 1 FILTER
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'ACT_SCH_APPS_ALL'
4 3 INDEX (RANGE SCAN) OF 'ASA_SCHEME_OFFICE_LOT' (NON-UNIQUE)

5 2 NESTED LOOPS
6 5 NESTED LOOPS
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'PARTY_ROLES'
8 7 INDEX (UNIQUE SCAN) OF 'PARR_BRN_UK' (UNIQUE)
9 6 TABLE ACCESS (BY INDEX ROWID) OF 'CLAIM_EVENTS'
10 9 INDEX (RANGE SCAN) OF 'CLAE_PROI_IDX' (NON-UNIQUE)

11 5 AND-EQUAL
12 11 INDEX (RANGE SCAN) OF 'INVO_APP_IDX' (NON-UNIQUE)
13 11 INDEX (RANGE SCAN) OF 'INVO_PARR_IDX' (NON-UNIQUE)


Now, am I correct in stating that the first step to be executed is

8 7 INDEX (UNIQUE SCAN) OF 'PARR_BRN_UK' (UNIQUE)

That seems incorrect, because the actual query needs to do this step first, in order to supply the index with values:

3 2 TABLE ACCESS (BY INDEX ROWID) OF 'ACT_SCH_APPS_ALL'

The actual query which may be irrelevant in this case:


SELECT cdm_office.NAME
( cdm_farm.ao
( cdm_business.farm
( asa_claimant_id )))
office
, cdm_business.farm ( asa_claimant_id ) farm, asa_claimant_id brn
FROM act_sch_apps
WHERE asa_as_code = 'LMCM05'
AND asa_status IS NULL
AND NOT EXISTS (
SELECT -- ordered
NULL
FROM party_roles, involvements, claim_events
WHERE clae_proi_id = 291
AND invo_app_id = clae_app_id
AND invo_parr_id = parr_id
AND parr_brn = asa_claimant_id )
ORDER BY office, farm, brn


So why does the explain plan say it does this step first?

8 7 INDEX (UNIQUE SCAN) OF 'PARR_BRN_UK' (UNIQUE)

Tom Kyte
August 02, 2006 - 12:33 pm UTC

it does this:

3 2 TABLE ACCESS (BY INDEX ROWID) OF 'ACT_SCH_APPS_ALL'
4 3 INDEX (RANGE SCAN) OF 'ASA_SCHEME_OFFICE_LOT' (NON-UNIQUE)

and then filters that output using the output of the nested loops join query below it.

ctl-f for

1.3.3 Reading an Explain Plan

on this page.

A reader, August 02, 2006 - 2:49 pm UTC

2 1 FILTER
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'ACT_SCH_APPS_ALL'

5 2 NESTED LOOPS

Tom,

Is this because, in order to do Step 2, it has to do Step 3 and Step 5 first?

But, because they are both at the same 'rank' in relation to Step 2, the 'top' one, i.e. Step 3, is done first.

Every row arriving from

3 2 TABLE ACCESS (BY INDEX ROWID)

is 'supplied' to

5 2 NESTED LOOPS

and filtered at Step 2?

Tom Kyte
August 03, 2006 - 7:43 am UTC

top down.

build the tree and then using the "approach" outlined above:

...
And then just read the tree. In order to get 1 we need 2 and 5 - 2 is "first".
In order to get 2, we need 3 and 4. 3 is "first". That is how I arrived at the
pseudo code for:
.......

extract sql and its explain plan,

A reader, August 02, 2006 - 5:03 pm UTC

We have an application that can run any time during the day.

We want to extract all the SQL statements that run within the application, generate their explain plans, and store the file in some location.

Our database is 10gR1 and has AWR running in the background every hour. I didn't see the EXPLAIN PLAN in the text report.

What is the best way to do this? I am thinking of tkprof, but starting the trace right after the session starts is the tricky part.

Thanks,


Fetching Huge Records with Derived Field

Rahul Dutta, August 03, 2006 - 2:43 am UTC

Hi Tom,

I am facing a problem where I have to do some calculation on the data in a table with more than half a million records, and it is taking a very long time. Please suggest some tips.
========================
The query as follows:
=========================
SELECT
A.LATEST_LAN_FLG,
A.CUSTOMER_KEY,
A.BUSS_CD,
A.LAN,
(SELECT COUNT(*) FROM MINING.CRASH_FACT C WHERE C.CUSTOMERID=A.CUSTOMER_KEY
AND C.DIMREPMONTH>=C.DIMSTARTMONTH AND C.DIMREPMONTH<=TO_CHAR(ADD_MONTHS(SYSDATE,-2),'YYYYMM'))SUM_LANS_MOB_BY_CP,

(SELECT COUNT(DISTINCT LAN ) FROM MINING.CRASH_MONTHLYFACT C WHERE C.CUSTOMERID=A.CUSTOMER_KEY
AND C.DIMDISBMONTH<=TO_CHAR(ADD_MONTHS(SYSDATE,-2),'YYYYMM'))CNT_LANS_ALL_BY_CP,

(SELECT MAX(D.TENOR) FROM MINING.CRASH_MONTHLYFACT D WHERE D.CUSTOMERID=A.CUSTOMER_KEY
AND D.DIMDISBMONTH<=TO_CHAR(ADD_MONTHS(SYSDATE,-2),'YYYYMM')) MAX_TENOR_BY_CP,

(SELECT COUNT(DISTINCT DIMREPMONTH) FROM MINING.CRASH_FACT C WHERE C.CUSTOMERID=A.CUSTOMER_KEY AND C.DIMREPMONTH>=C.DIMSTARTMONTH
AND C.DIMREPMONTH<=TO_CHAR(ADD_MONTHS(SYSDATE,-2),'YYYYMM') AND C.CURRBOUNCES>=1 )CNT_BOUNCE_TIMES_BY_CP ,

(SELECT COUNT(DISTINCT DIMREPMONTH) FROM MINING.CRASH_FACT C WHERE C.CUSTOMERID=A.CUSTOMER_KEY AND C.DIMREPMONTH>=C.DIMSTARTMONTH
AND C.DIMREPMONTH BETWEEN TO_CHAR(ADD_MONTHS(SYSDATE,-13),'YYYYMM') AND TO_CHAR(ADD_MONTHS(SYSDATE,-2),'YYYYMM') AND C.DIMCURRBKT=2)DPD_1_TO_30_BY_12MTH,

(SELECT COUNT(DISTINCT LAN) FROM MINING.CRASH_FACT C WHERE C.CUSTOMERID=A.CUSTOMER_KEY AND C.DIMREPMONTH>=C.DIMSTARTMONTH
AND C.DIMREPMONTH BETWEEN TO_CHAR(ADD_MONTHS(SYSDATE,-13),'YYYYMM') AND TO_CHAR(ADD_MONTHS(SYSDATE,-2),'YYYYMM'))CNT_LIVE_LAN_BY_12MTH,

(SELECT COUNT(*) FROM MINING.CRASH_FACT C WHERE C.LAN=A.LAN
AND C.DIMREPMONTH>=C.DIMSTARTMONTH AND C.DIMREPMONTH<=TO_CHAR(ADD_MONTHS(SYSDATE,-2),'YYYYMM'))FEEDER_LAN_MOB_BY_CP

FROM CRM.LOANS A
WHERE A.LATEST_LAN_FLG='Y'
AND A.BUSS_CD IN ('O','P','V')

=====================================
The plan is as follows. The cost comes to 5005, most of which is due to the FTS on the CRM.LOANS table. Can you please suggest some tips?
=====================================
SELECT STATEMENT, GOAL = CHOOSE 12087192 503633 5005 CHOOSE
SORT AGGREGATE 21 1
TABLE ACCESS BY GLOBAL INDEX ROWID 441 21 24 CRASH_FACT ANALYZED
INDEX RANGE SCAN 21 3 CRASHFACT_CUST ANALYZED
SORT GROUP BY 27 1
TABLE ACCESS BY INDEX ROWID 27 1 5 CRASH_MONTHLYFACT ANALYZED
INDEX RANGE SCAN 1 3 CRASH_MONTH1 ANALYZED
SORT AGGREGATE 18 1
TABLE ACCESS BY INDEX ROWID 18 1 5 CRASH_MONTHLYFACT ANALYZED
INDEX RANGE SCAN 1 3 CRASH_MONTH1 ANALYZED
SORT GROUP BY 24 1
TABLE ACCESS BY GLOBAL INDEX ROWID 72 3 24 CRASH_FACT ANALYZED
INDEX RANGE SCAN 21 3 CRASHFACT_CUST ANALYZED
SORT GROUP BY 24 1
FILTER
TABLE ACCESS BY GLOBAL INDEX ROWID 24 1 24 CRASH_FACT ANALYZED
INDEX RANGE SCAN 21 3 CRASHFACT_CUST ANALYZED
SORT GROUP BY 34 1
FILTER
TABLE ACCESS BY GLOBAL INDEX ROWID 102 3 24 CRASH_FACT ANALYZED
INDEX RANGE SCAN 21 3 CRASHFACT_CUST ANALYZED
SORT AGGREGATE 27 1
TABLE ACCESS BY GLOBAL INDEX ROWID 405 15 18 CRASH_FACT ANALYZED
INDEX RANGE SCAN 15 3 CRASHFACT_LAN ANALYZED
TABLE ACCESS FULL 12087192 503633 5005 LOANS ANALYZED


Thanks
Rahul


A query about SQL writting

RM, August 09, 2006 - 9:30 am UTC

Hi Tom, I have a scenario where I am trying to tune a long-running query. It's a simple select statement against a view, e.g.
select * from view1;

View1 queries view2.
View2 queries view3 and view4.
View3 queries view5.
View4 queries view6 and view7.
View5 queries a union all of 2 tables.
View6 queries 5 big tables in its select and from clauses.
View7 queries a materialized view.

What's your viewpoint on this type of SQL setup? I suspect it is the major culprit in the long execution time of this query. Can you suggest anything to reduce the execution time?

One more question. Does a high COST in the explain plan determine the performance problem and long execution time -- why or why not?

Tom Kyte
August 09, 2006 - 10:57 am UTC

I don't necessarily like views of views of views of views.

Typically - what is supplied and done by view1 is not entirely needed by view2 (does stuff view2 does not really need) and so on.

So you get "baggage"

I like one level views.


A high cost might be indicative of a very expensive query, yes. The cost is a function of the PREDICTED work to be performed.

Different execution plans

A reader, August 10, 2006 - 6:20 am UTC

Hi Tom,
we have a query which takes 31 seconds when called through JDBC.
The same sql statement, when executed from sqlplus takes only 1 second.

When we took help of our dba to find out why it takes 31 seconds in production,
our dba found that one table is undergoing full table scan when called from JDBC.
This was also found from V$SQL_PLAN.

However, explain plan shows index access and we get the output in just 1 second.

How can we find out what is the reason for this discrepancy?

Thanks.

Tom Kyte
August 10, 2006 - 9:30 am UTC

when you do it in sqlplus, is that in "production" as well - or something entirely different.

Same database

Giridhar, August 11, 2006 - 2:52 am UTC

Hi Tom,
Whatever we are doing in sqlplus is in production, accessing the same database which was being accessed by JDBC.
As we used bind variables in our JDBC program, we tried as follows:

We pass account number as input for our query and the column which is used is CHAR(9).
Hence we tested with following cases


a) var act1 char(9)
var act2 char(9)
var act3 char(9)

exec :act1 := '5JC010588';
exec :act2 := '5JC010604';
exec :act3 := '5JC010620';

Execute the query which is like

SELECT /*+ ORDERED */ COLUMN_A,COLUMN_B,......FROM
TABLE_A N, TABLE_B P, TABLE_C S WHERE
N.ACCTNUM = P.ACCTNUM AND P.CSP_NUM != 'USD999997' AND P.CSP_NUM = S.CSP_NUM(+) AND
N.CSP_NUM = P.CSP_NUM AND N.ACCTNUM IN ( :act1 , :act2 , :act3 )
.......


This takes just less than a second, but it still does a full table scan of one of the large tables.

b) var act1 varchar2(9)
var act2 varchar2(9)
var act3 varchar2(9)

exec :act1 := '5JC010588';
exec :act2 := '5JC010604';
exec :act3 := '5JC010620';

Execute the query which is like

SELECT /*+ ORDERED */ COLUMN_A,COLUMN_B,......FROM
TABLE_A N, TABLE_B P, TABLE_C S WHERE
N.ACCTNUM = P.ACCTNUM AND P.CSP_NUM != 'USD999997' AND P.CSP_NUM = S.CSP_NUM(+) AND
N.CSP_NUM = P.CSP_NUM AND N.ACCTNUM IN ( :act1 , :act2 , :act3 )
.......


This takes around 30 to 40 seconds, and it still does a full table scan of one of the large tables. Everything else remains the same; just by changing the bind datatype from varchar2 to char, I see a drastic improvement in performance.

I requested my DBA to find out the exact execution plan from V$SQL_PLAN today
to see if there is any difference in execution plans for the above two cases.
Only difference in the above two cases is the variable type i used , char in the first case
and varchar2 in second case.

Thanks for your help tom.




Tom Kyte
August 11, 2006 - 10:47 am UTC

and what pray tell is the actual STRUCTURE OF THE TABLE

without an example that we can reproduce with, not even going to look too hard.

LOSE THE HINT, just don't do that.

full table scan

A reader, August 11, 2006 - 2:40 pm UTC

I have a query like

select a,b,c..

where a.id=b.id
and b.uid=c.uid
and a.re_is in ( select ....)
and a.hist_date = ( select max(hist_date) from a a1
where a1.id=a.id)

this is having a full table scan on select max query.
Is there any other way I can write it to make it faster, as this is taking a lot of time?


Tom Kyte
August 11, 2006 - 2:58 pm UTC

great, no create tables, no indexing scheme, no plan, nothing.

I've a feeling this query was rewritten as an efficient job.


looks like you want the "max record" for A - but you know what, without the query itself - we cannot say if analytics could be used or not.

You see - you get the max hist_date from A by ID, but you constrain A by other restrictions in the outer query.... So, who knows what could be done.
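For what it's worth, the usual analytic shape for a "most recent row per id" predicate looks like the following sketch; the column list and the other joins are not shown in the question, so this is only illustrative:

```sql
select *
  from (select a.*,
               max(hist_date) over (partition by id) max_hist_date
          from a)
 where hist_date = max_hist_date;
```

This reads A once instead of once per outer row, but as noted above, whether it is actually applicable depends on the full query and its other restrictions.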

run fast after stats removed on one table

A reader, September 06, 2006 - 7:47 am UTC

We have the following query, which runs for hours if stats on all tables are collected:

SELECT pw.proj_id, pw1.proj_id as parent_proj_id
from PROJWBS pw
, PROJWBS pw1
WHERE pw.proj_node_flag = 'Y'
AND pw1.wbs_id = pw.parent_wbs_id
AND pw.proj_id IN (SELECT u.proj_id FROM UACCESS u WHERE u.user_id = 10499)

analyze statement:
analyze table <tab> compute statistics;
the explain plan:
NESTED LOOPS
MERGE JOIN CARTESIAN
INDEX FAST FULL SCAN NDX_PROJWBS_PERF1
BUFFER SORT
SORT UNIQUE
INDEX RANGE SCAN PK_UACCESS
INDEX RANGE SCAN NDX_PROJWBS_PERF1

It runs in 1 second if the stats on UACCESS only are removed.
Explain plan after stats are removed on UACCESS:
NESTED LOOPS
NESTED LOOPS SEMI
INDEX FAST FULL SCAN NDX_PROJWBS_PERF1
INDEX UNIQUE SCAN PK_UACCESS
TABLE ACCESS BY INDEX ROWID PROJWBS
INDEX UNIQUE SCAN PK_PROJWBS

Could you please explain the bad plan?


What's Buffer Sort?

A reader, September 09, 2006 - 6:34 pm UTC

I understand you did not take up my question above because it lacked information.
It's a 9.2.0.7 database.

I am trying to understand the following plan:

NESTED LOOPS
MERGE JOIN CARTESIAN
INDEX FAST FULL SCAN NDX_PROJWBS_PERF1
BUFFER SORT
SORT UNIQUE
INDEX RANGE SCAN PK_UACCESS
INDEX RANGE SCAN NDX_PROJWBS_PERF1

What is the BUFFER SORT? I failed to find it in the Oracle documentation. And again, could you please explain the bad execution plan?


Tom Kyte
September 10, 2006 - 9:17 am UTC

lack of information implies no comments really can be or should be made. You should not use analyze (dbms_stats please) and you should (you) compare the estimated cardinalities of the explain plan with the realities shown in a tkprof "row source operation" - looking for large disparities and then asking "why" (it might be obvious)

The BUFFER SORT operation can be used by the optimizer when it thinks that temporarily storing the input row source and sorting it by a key column might be useful in eventually doing the join; this can make the join more efficient in terms of IO. It is no different from an ordinary sort in terms of overhead.


Primary Key index not being used

Deep, September 11, 2006 - 8:40 am UTC

Hi Tom,

I have a query in hand which does not use the primary key index of a table, even though only one row needs to be fetched from the said table.

Does it have anything to do with optimizer_index_cost_adj parameter?

my query is :

select a.*, b.* from t1 a, t2 b
where a.id = b.oid;

It is doing full table scans of all the tables. For t2 it is ok to have a FTS as it is very small.

For t1 id is the primary key.

All the statistics are also up to date.

my optimizer_mode=all_rows.

Even after setting the optimizer_index_cost_adj=10 it is still doing the FTS of t1.



Tom Kyte
September 11, 2006 - 10:27 am UTC

umm, looks to me that it has to get eventually many/most of the rows from both tables - sure, a single row might join to one row over there - but you have "many of these single rows"


first_rows_N optimization would lead to the index, but beware of what you ask for!!!!

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894#6760861174154 <code>

How to explain cardinality of view to the optimizer

Dmytro, September 18, 2006 - 8:10 am UTC

Hello Tom! I have a little problem with the optimizer. I asked on a few forums, but nobody really helped.
I have two projects, somewhat connected. There was one table used by both projects, but it was duplicated for some reason: one copy in the first project's schema and one in the other. There are some minor differences between them, but the most important is that in the first project, the history of data changes is stored in the same table as the actual data: changed rows have the date of change set, and "deleted" records have status 2 ("disabled").
For some reason, we must now "merge" the tables so that both projects see the same data, and we need all the data to be taken from the first project.
So I create a view like this:
CREATE OR REPLACE VIEW TERRITORIES
AS
SELECT field1 AS fielda
,field2 AS fieldb
..........
FROM project1.territories
WHERE field1 IS NOT NULL
AND date_changed IS NULL
AND status = 1;

So, all seems to be great, but... There are 45000 records in the project1 table and only a few of them are filtered out by our WHERE condition (there were, and will be, only a few changes in this table), but the optimizer shows a cardinality estimate of 2047 for a select from this view instead of 44700 or so...
The bad thing is that a few reports join to this table (now a view), and their plans changed quite a bit after the table became a view...
What can be done about this, other than hinting every query that uses this view, to make the optimizer make the right guess about the cardinality of the view?
I tried to create histograms on the status column, but it did not work. I appreciate your help.


Tom Kyte
September 18, 2006 - 1:33 pm UTC

if you add a level 3 dynamic sample hint - what then?



SELECT /*+ dynamic_sampling(t 3) */
field1 AS fielda
,field2 AS fieldb
..........
FROM project1.territories t
WHERE field1 IS NOT NULL
AND date_changed IS NULL
AND status = 1;

Wow!

Dmytro, September 18, 2006 - 2:04 pm UTC

That really helps. Now a select from the view shows the true cardinality, more than 40000. I'll check the plans for our reports, but now they should be just right.
It is clear to me now that I need to read a book about optimization.
Thanks Tom.

But still...

Dmytro, September 18, 2006 - 2:34 pm UTC

Hm... it's strange to me, but it seems that sampling works just fine with a select from the view, but stops working after joining it to another one...
For example, I have a second view without a WHERE clause, just a select from the table with territory names in the project1 schema.
I write something like this:
select * from territories t join territory_names tn on tn.tn_id = t.tn_tn_id
and my plan is like this:

SELECT STATEMENT, GOAL = CHOOSE Cost=48 Cardinality=2048 Bytes=88064
HASH JOIN Cost=48 Cardinality=2048 Bytes=88064
TABLE ACCESS FULL Object owner=PROJECT1 Object name=TERRITORIES Cost=32 Cardinality=2048 Bytes=51200
TABLE ACCESS FULL Object owner=PROJECT1 Object name=TERRITORY_NAMES Cost=14 Cardinality=26076 Bytes=469368

but simple select from territories gives us plan like this:

SELECT STATEMENT, GOAL = CHOOSE Cost=32 Cardinality=40954 Bytes=1023850
TABLE ACCESS FULL Object owner=PROJECT1 Object name=TERRITORIES Cost=32 Cardinality=40954 Bytes=1023850

Am I missing something?..

Suggestion

Reader, September 20, 2006 - 9:02 am UTC

Sir,
Following is the table structure:

Table T1
COL1
ADDRESS1
ADDRESS2
ADDRESS3

Data looks like this:

COL1 ADDRESS1 ADDRESS2 ADDRESS3
1 ABC PQR XYZ
2 LMN OPQ QRT
3 XYZ TTT PPP

Following is the table structure:

Table T2
COL1
COL5

Data looks like this:

COL1 COL5
1 A
2 B

I want to print the data in this way:

COL1 ADDRESS_LINE COL5
1 ABC A
1 PQR A
1 XYZ A
2 LMN B
2 OPQ B
2 QRT B

Of the two approaches (Query1 and Query2) can you suggest which is the better one?

Query1:
SELECT T1.COL1, T1.ADDRESS1 ADDRESS_LINE, T2.COL5
FROM T1, T2
WHERE T1.COL1 = T2.COL1
UNION
SELECT T1.COL1, T1.ADDRESS2 ADDRESS_LINE, T2.COL5
FROM T1, T2
WHERE T1.COL1 = T2.COL1
UNION
SELECT T1.COL1, T1.ADDRESS3 ADDRESS_LINE, T2.COL5
FROM T1, T2
WHERE T1.COL1 = T2.COL1


Query2:
SELECT T1.COL1, T1.ADDRESS_LINE, T2.COL5
FROM (
SELECT T1.COL1,
DECODE(R, 1, ADDRESS1, 2, ADDRESS2, 3, ADDRESS3) ADDRESS_LINE
FROM T1,
(
SELECT ROWNUM R
FROM T1
WHERE ROWNUM <=3)) T1,
T2
WHERE T1.COL1 = T2.COL1

Regards


Tom Kyte
September 20, 2006 - 3:10 pm UTC

query 2, but you can use

(select 1 r from dual union all select 2 from dual union all select 3 from dual)

or

(select level r from dual connect by level <= 3)


since t1 might not always have three rows...
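Putting that row generator into Query2 gives a sketch like this (still assuming the T1/T2 structures from the question):

```sql
SELECT T1.COL1, T1.ADDRESS_LINE, T2.COL5
  FROM (SELECT T1.COL1,
               DECODE(R, 1, ADDRESS1, 2, ADDRESS2, 3, ADDRESS3) ADDRESS_LINE
          FROM T1,
               (SELECT LEVEL R FROM DUAL CONNECT BY LEVEL <= 3)) T1,
       T2
 WHERE T1.COL1 = T2.COL1;
```

The generator always yields exactly three rows, so each T1 row is expanded into its three address lines regardless of how many rows T1 itself has.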

Too good

Reader, September 21, 2006 - 12:58 am UTC

Thanks Tom for the reply.

Question On a statement from the Manual

Neeraj Nagpal, September 25, 2006 - 5:15 pm UTC

I was reading about the Explain Plan Properties from the -- Database Tuning with the Oracle Tuning Pack -- manual at </code> http://download-west.oracle.com/docs/cd/B10501_01/em.920/a86647/vmqstats.htm#1015761 <code>
-- I couldn't get exact sense out of this. Could you please elaborate on this for me?



Full Table scans as non-driving tables in nested loop joins

There may be an opportunity for speeding up the access to the non-driving table when it is joined in the nested loops fashion via a full table scan lookup. Often an appropriate concatenation of columns from the non-driving table is all that is required to perform the lookup. The rule-of-thumb indicator is found on any Explain Plan TABLE ACCESS (FULL) object which is a child of a NESTED LOOPS object and is not the first in the chain of tables being joined.


Thanks Always For your Help,
Neeraj Nagpal

Tom Kyte
September 26, 2006 - 2:21 am UTC

I believe they are referring to a query like this:


ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select /*+ USE_NL(emp dept) */ *
  2    from emp, dept
  3   where emp.deptno = dept.deptno;

Execution Plan
----------------------------------------------------------
Plan hash value: 4192419542

------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    14 |  1638 |     9   (0)| 00:00:0
|   1 |  NESTED LOOPS      |      |    14 |  1638 |     9   (0)| 00:00:0
|   2 |   TABLE ACCESS FULL| DEPT |     4 |   120 |     3   (0)| 00:00:0
|*  3 |   TABLE ACCESS FULL| EMP  |     4 |   348 |     2   (0)| 00:00:0
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("EMP"."DEPTNO"="DEPT"."DEPTNO")

Note
-----
   - dynamic sampling used for this statement
<b>
and are saying "emp, this non-driving table, is being full scanned for every row in dept.  That might not be efficient.  Therefore, if you index the obvious set of columns, it might be better"

Actually, in this case, I would say a pair of full scans and a hash join would be in general better - unless you need to optimize for response time, in which case the following would make sense:</b>

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index emp_idx on emp(deptno);

Index created.

ops$tkyte%ORA10GR2> select /*+ USE_NL(emp dept) */ *
  2    from emp, dept
  3   where emp.deptno = dept.deptno;

Execution Plan
----------------------------------------------------------
Plan hash value: 1619650520

------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%C
------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |    14 |  1638 |     6
|   1 |  TABLE ACCESS BY INDEX ROWID| EMP     |     4 |   348 |     1
|   2 |   NESTED LOOPS              |         |    14 |  1638 |     6
|   3 |    TABLE ACCESS FULL        | DEPT    |     4 |   120 |     3
|*  4 |    INDEX RANGE SCAN         | EMP_IDX |     5 |       |     0
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Note
-----
   - dynamic sampling used for this statement
 

Thanks So Much For your response..

Neeraj, September 26, 2006 - 1:27 pm UTC

Tom,

Thanks so much for your response. If both the tables involved in the NL join happen to be huge, would it be even more efficient to have both child objects of the NESTED LOOP accessed via an index? Like in the following example:


  1  select
  2  /*+ USE_NL(properties) */
  3  Properties.FIPS_STATE_COUNTY,
  4  Properties.FIPS_TOWNSHIP,
  5  Properties.APN_SOURCE
  6  from Properties,COUNTY_CONTROL cc2
  7  where Properties.fips_state_County = cc2.fips_state_county and
  8  Properties.fips_township = cc2.fips_township
  9  and source_owner = 'FNC' AND
 10  properties.datetime_last_update >=
 11  TO_DATE ('28-aug-2006 21:02:14', 'DD-MON-YYYY HH24:MI:SS')   AND
 12  properties.datetime_last_update <
 13  TO_DATE ('21-sep-2006 00:00:00', 'DD-MON-YYYY HH24:MI:SS')
 14* AND ( properties.fips_state_county >= '13001' AND properties.fips_state_county <= '13999' /*GA*
09:53:20 ADBPROD@PROD SQL> /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5710 Card=227809 Bytes=19363765)

   1    0   NESTED LOOPS (Cost=5710 Card=227809 Bytes=19363765)
   2    1     TABLE ACCESS (BY GLOBAL INDEX ROWID) OF 'PROPERTIES' (Cost=5710 Card=646411 Bytes=45895181)

   3    2       INDEX (RANGE SCAN) OF 'IDX_PROPERITES_REPL' (NON-UNIQUE) (Cost=671 Card=2783) <----

   4    1     INDEX (UNIQUE SCAN) OF 'COUNTY_CONTROL_PK_1' (UNIQUE)   <----



Thanks,
Neeraj Nagpal 

Tom Kyte
September 26, 2006 - 4:57 pm UTC

if both tables are "OF ANY SIZE"

then:

if you want to optimize to get the LAST ROW RETURNED AS FAST AS YOU CAN
then
probably two full scans and a hash join is best, I would not
want a single index used
ELSE if you want to optmize to get the FIRST ROWS RETURNED as fast as you can
and can wait hours for the last row (interactive application)
then
a full scan of one table and nested loops index into another is
probably best
end if.


</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:6749454952894#6760861174154 <code>

Thanks.

Neeraj Nagpal, September 26, 2006 - 6:27 pm UTC

Thanks for your prompt answer.

Neeraj Nagpal

identify whether development is occurring in the production environment

Dawar, October 17, 2006 - 7:05 pm UTC

Tom,

Which Oracle table should be checked first in order to identify whether development is occurring in the production environment?


cheers,
Dawar

Tom Kyte
October 18, 2006 - 7:46 am UTC

DBA_TAB_PRIVS and DBA_SYS_PRIVS

do users that should not have privileges, have them....
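A hedged starting point for that check; the privilege list here is only an example of what "development" privileges might look like:

```sql
-- who holds object-creation system privileges in this database?
select grantee, privilege
  from dba_sys_privs
 where privilege in ('CREATE TABLE', 'CREATE PROCEDURE', 'CREATE VIEW')
 order by grantee;
```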

Dawar, October 17, 2006 - 7:55 pm UTC

I am confused between the following tables:

sys.source$, sys.obj$, sys.views$,sys.user$ & sys.code$



Tom Kyte
October 18, 2006 - 7:48 am UTC

easy to fix.

stop thinking about them.


You would use
*_SOURCE (*=DBA, USER, ALL)
*_OBJECTS
*_VIEWS
*_USERS (no user_users of course!)


quite easy to fix, wish all of the problems posted on this site were that easy!

cardinality and the explain plan

Ryan, October 25, 2006 - 10:44 am UTC

I am looking at an explain plan with set autotrace on. I left the explain plan out of this post because it wrapped and was not readable; I was not able to format it.

Table C has 53,000 rows. When I query dba_tables.num_rows it states that there are 53,000 rows. How can I have 'rows' of 1634 on a full table scan? Shouldn't it be all the rows in the table since Oracle is doing a full table scan?

Now we do have a where clause that filters the number of rows returned. However, my understanding of a full table scan is that Oracle reads up to the high water mark. Shouldn't that be all the rows? Or is it just the rows returned?

Same with bytes. If Oracle does a full table scan that reads 150 MB of data but only returns 30k of data, it looks like the bytes column will say 30k. Is this correct?

I get the same number when I look at the cardinality column in 'explain plan for'. I had thought that Oracle got the rows for a full table scan from dba_tables.num_rows? I guess not.


Tom Kyte
October 25, 2006 - 10:59 am UTC

the rows shows the number of rows flowing OUT OF the step..


so, select * from t; that would show 53,000

select * from t where <predicate>; that would show the guess of 1,634 rows after the predicate is applied.

Insert into <tab> select *...

dev, October 27, 2006 - 3:21 am UTC

Which one is faster, and why?
1. INSERT INTO <tab_name> SELECT * FROM <tab_name_1>
OR
2. CREATE TABLE <tab_name> AS SELECT * FROM <tab_name_1>

Tom Kyte
October 27, 2006 - 7:46 am UTC

create table as select will be a direct path operation, it can skip undo and even redo generation.

insert (without append) cannot do that, it writes to the buffer cache and always generates undo and redo.


For a small table, precisely because create table does direct path operations, the insert might be faster (buffered IO). For large sets, the opposite would likely be true.

so "it depends"

explain plan followup...

Ryan, October 30, 2006 - 1:13 pm UTC

Wouldn't it be more useful if, for a full table scan, the rows or bytes section showed the number of rows or bytes actually accessed? It would make it easier to look for bottlenecks. You can get this with a 10046 trace (by looking at the cr= values), but this way you could do it without having to run a trace.

Tom Kyte
October 30, 2006 - 3:10 pm UTC

not for the EXPLAIN plan, the "cost" gives you that.

for the ROW SOURCE OPERATION (found in the tkprof report) - there you see the actuals, what really happened.

The explain plan is the GUESS that after scanning table T, we'll get X rows as output. The COST of scanning table T is there (that is the cost of the full table scan), the results of scanning table T are what are relevant to the steps further on up.
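On 10g and later you can put the guess and the actuals side by side without running tkprof; a sketch (the GATHER_PLAN_STATISTICS hint and DBMS_XPLAN.DISPLAY_CURSOR exist from 10g on; table and predicate are hypothetical):

```sql
-- the guess:
explain plan for select * from t where status = 'OPEN';
select * from table(dbms_xplan.display);

-- the actuals:
select /*+ GATHER_PLAN_STATISTICS */ * from t where status = 'OPEN';
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
-- compare E-Rows (the estimate) with A-Rows (what really flowed out of each step)
```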

Explain Plan

Anna, November 07, 2006 - 7:39 am UTC

Sorry Tom, I had to use this link for posting - I never get an opportunity... Can you please help me identify the problem in this SQL query? It takes more than 1 minute to return data and I want to bring this down to probably 5 seconds.

SELECT /*+ ALL_ROWS PARALLEL(oeh 20) */ DISTINCT reqh.shipcmf ST_CMF,
reqh.reqnumber Req_No,
reqh.contract Req_Contract,
reqh.divpulldate Req_Div_Pull,
reqh.closedate Req_Closed,
reql.catalognumber Req_Item,
(NVL(reql.ordqty, 0) - NVL(reql.voidqty, 0)) Req_Actual_Qty,
reql.shipqty Req_Ship_Qty,
DECODE(ott.transaction_type_id, 1001, 'ATO', 1335, 'D/S', 1550, 'ATO', 1554, 'D/S') Order_Type,
oeh.header_id hdr_id,
oeh.sold_to_org_id sold_to_org,
oeh.order_number Order_No,
oeh.ordered_date Ordered,
oel.ordered_item O_Ln_Item,
oel.ordered_quantity O_Ln_Qty,
NVL(oel.fulfilled_quantity, 0) O_Ln_Fill_Qty,
oel.flow_status_code O_Ln_Status
FROM dw.reqheaders@DW_LNK reqh,
DW.REQLINES@DW_LNK reql,
apps.oe_order_headers_all@adpds_score_lnk oeh,
apps.oe_order_lines_all@adpds_score_lnk oel,
apps.oe_transaction_types_tl@adpds_score_lnk ott
WHERE reqh.reqnumber = oeh.attribute11
AND reqh.contract = oeh.attribute12
AND reqh.reqnumber = reql.reqnumber
AND reql.catalognumber = oel.ordered_item
AND reql.erline = oel.orig_sys_line_ref
AND oeh.header_id = oel.header_id
AND oeh.order_type_id IN (1001, 1335, 1550, 1554)
AND oeh.order_type_id = ott.transaction_type_id
AND oeh.flow_status_code <> 'CANCELLED'
AND oel.flow_status_code <> 'CANCELLED'
AND oeh.cancelled_flag = 'N'
AND oel.cancelled_flag = 'N'
AND TRUNC(oeh.creation_date) BETWEEN '01-OCT-2006' AND '01-NOV-2006'
AND NOT EXISTS (SELECT /*+ ALL_ROWS */ DISTINCT 1
FROM DW.FSORDHIST@DW_LNK soh,
dw.fsorddtlhist@DW_LNK sol
WHERE oeh.header_id = soh.header_id
AND oel.attribute12 = sol.reqnumber || '*' || sol.sonumber || '*' || sol.solinenumber
AND oel.ordered_item = sol.catalognumber
AND oeh.attribute11 = sol.reqnumber)



----------------------------------------------------
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop

SELECT STATEMENT Optimizer Mode=HINT: ALL_ROWS 1 12901
HASH UNIQUE 1 886 12901
FILTER
NESTED LOOPS 1 886 869
NESTED LOOPS 7 5 K 861
NESTED LOOPS 4 1 K 851
FILTER 4 52 1
REMOTE ADPDS_SCORE_LNK.HOFFMAN.DS.ADP.COM SERIAL
REMOTE .REQHEADERS 1 46 2 DW_LNK.HOFFMAN.DS.ADP.COM SERIAL
REMOTE .OE_ORDER_LINES_ALL 2 916 3 ADPDS_SCORE_LNK.HOFFMAN.DS.ADP.COM SERIAL
REMOTE .REQLINES 1 61 2 DW_LNK.HOFFMAN.DS.ADP.COM SERIAL
MERGE JOIN CARTESIAN 1 38 12031
REMOTE .FSORDDTLHIST 1 25 4 DW_LNK.HOFFMAN.DS.ADP.COM SERIAL
BUFFER SORT 9 117 12027
REMOTE .FSORDHIST 9 117 12027 DW_LNK.HOFFMAN.DS.ADP.COM SERIAL
----------------------------------------------
can you tell me where this query is getting stuck and what can be done about it....

Tom Kyte
November 07, 2006 - 4:39 pm UTC

is 5 seconds even remotely reasonable?

first thing for you would be: lose parallel. parallel is not for queries taking a few seconds - if you need parallel for "a few seconds", there is a big problem right there and then. Heck, it could take a couple of seconds just to set up for parallel query.

Second, look at the estimated card= values there - are they even CLOSE to reality? If so, this probably would not take a minute; if not, you need to ask "why".
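One way to check that is to count what a step's filters really return and compare it with the plan's estimate; a sketch using the OE_ORDER_HEADERS_ALL filters from the query above (explicit TO_DATE added, since relying on implicit date conversion is fragile):

```sql
-- the plan guessed only a handful of rows from this step - is that close?
select count(*)
  from apps.oe_order_headers_all@adpds_score_lnk oeh
 where oeh.order_type_id in (1001, 1335, 1550, 1554)
   and oeh.flow_status_code <> 'CANCELLED'
   and oeh.cancelled_flag = 'N'
   and trunc(oeh.creation_date)
       between to_date('01-OCT-2006', 'DD-MON-YYYY')
           and to_date('01-NOV-2006', 'DD-MON-YYYY');
```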

A reader, November 08, 2006 - 5:13 am UTC

I got rid of parallel after your suggestion, but believe me, that hint made my query run in 45 seconds instead of 1:16 - I may be wrong in analysing that....

I did not understand this one clearly:
"estimated card= values there, are they even CLOSE to reality. if so, this would not take a minute probably, if not, you need to ask "why" ....

Do you mean the data here is vast and it is bound to take the time it takes right now... or..??

Tom Kyte
November 08, 2006 - 8:30 am UTC

are the cardinality values (the rows, the estimated number of rows) even close to being correct in your explain plan above.

A reader, November 09, 2006 - 7:45 am UTC

So you mean to say I must be missing something in the WHERE clause that qualifies as a genuine relation....

To my knowledge I have given all possible WHERE clauses, and even then I just can't get rid of those cardinality values.

What would you do in such a case?

Tom Kyte
November 09, 2006 - 9:00 am UTC

I want you to tell me if the guess the optimizer is making on the rows is CORRECT or NOT

that is all.



ops$tkyte%ORA9IR2> create table t ( x int );

Table created.

ops$tkyte%ORA9IR2> set autotrace traceonly explain

ops$tkyte%ORA9IR2> select /*+ all_rows */ * from t;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: ALL_ROWS (Cost=2 Card=82 Bytes=1066)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=82 Bytes=1066)


<b>way off...</b>


ops$tkyte%ORA9IR2> exec dbms_stats.gather_table_stats(user, 'T')

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> select /*+ all_rows */ * from t;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: ALL_ROWS (Cost=2 Card=1 Bytes=13)
   1    0   TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1 Bytes=13)

<b>Not way off</b>

that is what I am asking you - way off or NOT way off 

A reader, November 09, 2006 - 10:07 am UTC

Table stats are all gathered... it's an automatic process where stats are gathered automatically.

Tom Kyte
November 09, 2006 - 2:29 pm UTC

and that....

does not answer anything asked here.

at all.

Change in execution path

Rajesh, November 10, 2006 - 4:40 am UTC

Hi Tom,

In our production database the query does a full table scan and is very slow. In our test database it uses an index range scan and is considerably faster. The test system has less data, but I also believe the use of the index would improve performance in this case.

SQL>l
  1  Select table_name, index_name, column_name, column_position from dba_ind_columns
  2  where table_name in ('REVISIONS','SUBSCRIPTION') and table_owner='STELLENT_CON'
  3* order by table_name,index_name,column_position
(DSIMS@DBPRD)SQL>/

TABLE_NAME                INDEX_NAME                     COLUMN_NAME                         COLUMN_POSITION            
------------------------- ------------------------------ ----------------------------------- ---------------            
REVISIONS                 DCHECKOUTUSER                  DCHECKOUTUSER                                     1            
REVISIONS                 DDOCACCOUNT                    DDOCACCOUNT                                       1            
REVISIONS                 DDOCNAME                       DDOCNAME                                          1            
REVISIONS                 DFLAG1                         DFLAG1                                            1            
REVISIONS                 DINDATE                        DINDATE                                           1            
REVISIONS                 DINDEXERSTATE                  DINDEXERSTATE                                     1            
REVISIONS                 DOUTDATE                       DOUTDATE                                          1            
REVISIONS                 DRELEASEDATE                   DRELEASEDATE                                      1            
REVISIONS                 DRELEASESTATE                  DRELEASESTATE                                     1            
REVISIONS                 DREVCLASSID_2                  DREVCLASSID                                       1            
REVISIONS                 DSTATUS                        DSTATUS                                           1            
REVISIONS                 PK_REVISIONS                   DID                                               1            
REVISIONS                 REVISIONINDEX                  DREVISIONID                                       1            
REVISIONS                 REVISIONINDEX                  DREVCLASSID                                       2            
SUBSCRIPTION              PK_SUBSCRIPTION                DSUBSCRIPTIONALIAS                                1            
SUBSCRIPTION              PK_SUBSCRIPTION                DSUBSCRIPTIONALIASTYPE                            2            
SUBSCRIPTION              PK_SUBSCRIPTION                DSUBSCRIPTIONID                                   3            
SUBSCRIPTION              PK_SUBSCRIPTION                DSUBSCRIPTIONTYPE                                 4            


SQL>desc STELLENT_CON.SUBSCRIPTION
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DSUBSCRIPTIONALIAS                        NOT NULL VARCHAR2(50)
 DSUBSCRIPTIONALIASTYPE                    NOT NULL VARCHAR2(30)
 DSUBSCRIPTIONEMAIL                                 VARCHAR2(80)
 DSUBSCRIPTIONID                           NOT NULL VARCHAR2(255)
 DSUBSCRIPTIONTYPE                         NOT NULL VARCHAR2(30)
 DSUBSCRIPTIONCREATEDATE                   NOT NULL DATE
 DSUBSCRIPTIONNOTIFYDATE                            DATE
 DSUBSCRIPTIONUSEDDATE                              DATE


SQL>desc STELLENT_CON.REVISIONS
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DID                                       NOT NULL NUMBER(38)
 DDOCNAME                                  NOT NULL VARCHAR2(30)
 DDOCTYPE                                  NOT NULL VARCHAR2(30)
 DDOCTITLE                                 NOT NULL VARCHAR2(255)
 DDOCAUTHOR                                         VARCHAR2(50)
 DREVCLASSID                               NOT NULL NUMBER(38)
 DREVISIONID                               NOT NULL NUMBER(38)
 DREVLABEL                                          VARCHAR2(10)
 DISCHECKEDOUT                             NOT NULL NUMBER(1)
 DCHECKOUTUSER                                      VARCHAR2(30)
 DSECURITYGROUP                            NOT NULL VARCHAR2(30)
 DCREATEDATE                                        DATE
 DINDATE                                            DATE
 DOUTDATE                                           DATE
 DSTATUS                                            VARCHAR2(20)
 DRELEASESTATE                                      CHAR(1)
 DFLAG1                                             CHAR(1)
 DWEBEXTENSION                                      VARCHAR2(255)
 DPROCESSINGSTATE                                   CHAR(1)
 DMESSAGE                                           VARCHAR2(255)
 DDOCACCOUNT                                        VARCHAR2(30)
 DRELEASEDATE                                       DATE
 DRENDITION1                                        CHAR(1)
 DRENDITION2                                        CHAR(1)
 DINDEXERSTATE                                      CHAR(1)
 DPUBLISHTYPE                                       CHAR(1)
 DPUBLISHSTATE                                      CHAR(1)


Here is the good plan and bad plan

GOOD PLAN
==========

SQL>explain plan for SELECT Subscription.*, Revisions.*
  2  FROM STELLENT_CON.SUBSCRIPTION, STELLENT_CON.Revisions
  3  WHERE  SUBSCRIPTION.dSubscriptionAlias = 'joseph.a.zimmerlin' AND
  4      Subscription.dSubscriptionAliasType = 'user' AND
  5      Subscription.dSubscriptionType = 'Basic' AND
  6      Revisions.dID IN (SELECT MAX(dID) FROM STELLENT_CON.Revisions where
  7              Revisions.dDocName = Subscription.dSubscriptionID);

Explained.


SQL>@?/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
Plan hash value: 534145969                                                      
                                                                                
-----------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                 |     1 |   197 |    36   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                   |                 |     1 |   197 |     3   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID   | SUBSCRIPTION    |     1 |    55 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN             | PK_SUBSCRIPTION |     1 |       |     1   (0)| 00:00:01 |
|   4 |   TABLE ACCESS BY INDEX ROWID   | REVISIONS       |     1 |   142 |     1   (0)| 00:00:01 |
|*  5 |    INDEX UNIQUE SCAN            | PK_REVISIONS    |     1 |       |     0   (0)| 00:00:01 |
|   6 |     SORT AGGREGATE              |                 |     1 |    17 |            |          |
|   7 |      TABLE ACCESS BY INDEX ROWID| REVISIONS       |     1 |    17 |     3   (0)| 00:00:01 |
|*  8 |       INDEX RANGE SCAN          | DDOCNAME        |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("SUBSCRIPTION"."DSUBSCRIPTIONALIAS"='joseph.a.zimmerlin' AND
              "SUBSCRIPTION"."DSUBSCRIPTIONALIASTYPE"='user' AND
              "SUBSCRIPTION"."DSUBSCRIPTIONTYPE"='Basic')
       filter("SUBSCRIPTION"."DSUBSCRIPTIONTYPE"='Basic')
   5 - access("REVISIONS"."DID"= (SELECT MAX("DID") FROM
              "STELLENT_CON"."REVISIONS" "REVISIONS" WHERE "REVISIONS"."DDOCNAME"=:B1))
   8 - access("REVISIONS"."DDOCNAME"=:B1)

26 rows selected.



BAD PLAN
========

SQL>explain plan for SELECT Subscription.*, Revisions.*
  2  FROM STELLENT_CON.SUBSCRIPTION, STELLENT_CON.Revisions
  3  WHERE  SUBSCRIPTION.dSubscriptionAlias = 'joseph.a.zimmerlin' AND
  4      Subscription.dSubscriptionAliasType = 'user' AND
  5      Subscription.dSubscriptionType = 'Basic' AND
  6      Revisions.dID IN (SELECT MAX(dID) FROM STELLENT_CON.Revisions where
  7              Revisions.dDocName = Subscription.dSubscriptionID);

Explained.

(DSIMS@DBPRD)SQL>@?/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
Plan hash value: 3881148907                                                     
                                                                                
------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                 |     1 |   230 |   291   (3)| 00:00:04 |
|*  1 |  FILTER                          |                 |       |       |            |          |
|   2 |   HASH GROUP BY                  |                 |     1 |   230 |   291   (3)| 00:00:04 |
|   3 |    MERGE JOIN CARTESIAN          |                 |  8571 |  1925K|   289   (3)| 00:00:04 |
|   4 |     TABLE ACCESS BY INDEX ROWID  | REVISIONS       |     1 |    29 |     3   (0)| 00:00:01 |
|   5 |      NESTED LOOPS                |                 |     1 |    84 |     6   (0)| 00:00:01 |
|   6 |       TABLE ACCESS BY INDEX ROWID| SUBSCRIPTION    |     1 |    55 |     3   (0)| 00:00:01 |
|*  7 |        INDEX RANGE SCAN          | PK_SUBSCRIPTION |     1 |       |     2   (0)| 00:00:01 |
|*  8 |       INDEX RANGE SCAN           | DDOCNAME        |     1 |       |     1   (0)| 00:00:01 |
|   9 |     BUFFER SORT                  |                 | 64072 |  9135K|   288   (3)| 00:00:04 |
|  10 |      TABLE ACCESS FULL           | REVISIONS       | 64072 |  9135K|   283   (3)| 00:00:04 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("REVISIONS"."DID"=MAX("DID"))
   7 - access("SUBSCRIPTION"."DSUBSCRIPTIONALIAS"='joseph.a.zimmerlin' AND
              "SUBSCRIPTION"."DSUBSCRIPTIONALIASTYPE"='user' AND
              "SUBSCRIPTION"."DSUBSCRIPTIONTYPE"='Basic')
       filter("SUBSCRIPTION"."DSUBSCRIPTIONTYPE"='Basic')
   8 - access("REVISIONS"."DDOCNAME"="SUBSCRIPTION"."DSUBSCRIPTIONID")

27 rows selected.


Thank you,
Rajesh 

Tom Kyte
November 10, 2006 - 8:54 am UTC

why do you believe that? have you tried hinting the query in production to see if your belief is accurate?
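A sketch of such a test (the choice and placement of the hint are assumptions - the optimizer is free to reject a hint it cannot obey, so verify the resulting plan and runtime):

```sql
-- force an index access path on REVISIONS, then compare plan and elapsed time
explain plan for
select /*+ INDEX(revisions pk_revisions) */ subscription.*, revisions.*
  from stellent_con.subscription, stellent_con.revisions
 where subscription.dsubscriptionalias     = 'joseph.a.zimmerlin'
   and subscription.dsubscriptionaliastype = 'user'
   and subscription.dsubscriptiontype      = 'Basic'
   and revisions.did in (select max(did)
                           from stellent_con.revisions
                          where revisions.ddocname = subscription.dsubscriptionid);

select * from table(dbms_xplan.display);
```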

same query different plan

Ramprsad, November 30, 2006 - 5:25 am UTC

Hi Tom,

The below query takes 10 min to execute.

SELECT DISTINCT
O.CustomerID,
O.OrderID,
SD.SampleNumber,
SD.Test
FROM SMXPSU.Orders O
INNER JOIN SMXPSU.Projects P
ON O.CustomerID=P.CustomerID AND O.ProjectID=P.ProjectID
INNER JOIN SMXPSU.ProjectDefs PD
ON P.CustomerID=PD.CustomerID AND P.ProjectID=PD.ProjectID
INNER JOIN SMXPSU.SampleDetails SD
ON O.OrderID=SD.OrderID AND P.Matrix=SD.Matrix AND P.Test=SD.Test AND
P.Method=SD.Method
LEFT JOIN SMXPSU.Invoices I
ON SD.OrderID=I.OrderID AND SD.SampleNumber=I.SampleNumber AND SD.Test=I.Test
LEFT JOIN SMXPSU.Ampro_IncompleteOrders A
ON SD.OrderID=A.OrderID AND SD.Samplenumber=A.SampleNumber
WHERE I.InvoiceID IS NULL AND A.OrderID IS NULL AND A.SampleNumber IS NULL



Same query as above - written in "old" Oracle style - executes in 8 seconds


SELECT DISTINCT
O.CustomerID,
O.OrderID,
SD.SampleNumber,
SD.Test
FROM SMXPSU.Orders O,
SMXPSU.Projects P,
SMXPSU.ProjectDefs PD,
SMXPSU.SampleDetails SD,
SMXPSU.Invoices I,
SMXPSU.Ampro_IncompleteOrders A
WHERE O.CustomerID = P.CustomerID
AND O.ProjectID = P.ProjectID
AND P.CustomerID = PD.CustomerID
AND P.ProjectID = PD.ProjectID
AND O.OrderID = SD.OrderID
AND P.Matrix = SD.Matrix
AND P.Test = SD.Test
AND P.Method = SD.Method
AND SD.OrderID = I.OrderID(+)
AND SD.SampleNumber = I.SampleNumber(+)
AND SD.Test = I.Test(+)
AND SD.OrderID = A.OrderID(+)
AND SD.Samplenumber = A.SampleNumber(+)
AND I.InvoiceID IS NULL
AND A.OrderID IS NULL
AND A.SampleNumber IS NULL


SELECT DISTINCT
O.CustomerID,
O.OrderID,
SD.SampleNumber,
SD.Test
FROM SMXPSU.Orders O
INNER JOIN SMXPSU.Projects P
ON O.CustomerID=P.CustomerID AND O.ProjectID=P.ProjectID
INNER JOIN SMXPSU.ProjectDefs PD
ON P.CustomerID=PD.CustomerID AND P.ProjectID=PD.ProjectID
INNER JOIN SMXPSU.SampleDetails SD
ON O.OrderID=SD.OrderID AND P.Matrix=SD.Matrix AND P.Test=SD.Test AND
P.Method=SD.Method
LEFT JOIN SMXPSU.Invoices I
ON SD.OrderID=I.OrderID AND SD.SampleNumber=I.SampleNumber AND SD.Test=I.Test
LEFT JOIN SMXPSU.Ampro_IncompleteOrders A
ON SD.OrderID=A.OrderID AND SD.Samplenumber=A.SampleNumber
WHERE I.InvoiceID IS NULL AND A.OrderID IS NULL AND A.SampleNumber IS NULL

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.02 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 93 471.29 569.55 4860 96618455 0 1372
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 95 471.29 569.58 4860 96618455 0 1372

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 61

Rows Row Source Operation
------- ---------------------------------------------------
1372 SORT UNIQUE
39404 FILTER
791619 NESTED LOOPS OUTER
791571 FILTER
821891 NESTED LOOPS OUTER
805134 NESTED LOOPS
437484 NESTED LOOPS
437484 HASH JOIN
4197 TABLE ACCESS FULL PROJECTS
5458 TABLE ACCESS FULL ORDERS
437484 INDEX UNIQUE SCAN PK_PROJECTDEFS (object id 30761)
805134 TABLE ACCESS BY INDEX ROWID SAMPLEDETAILS
380667292 INDEX RANGE SCAN ID_SD_MATRX (object id 30799)
30320 TABLE ACCESS BY INDEX ROWID RESULTS
2578412 INDEX RANGE SCAN PK_RESULTS (object id 30831)
752215 INDEX RANGE SCAN PK_INVOICES (object id 30828)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 93 0.00 0.00
db file sequential read 4756 1.32 70.01
db file scattered read 20 0.94 1.12
latch free 3 0.00 0.00
SQL*Net message from client 93 13.60 20.83
********************************************************************************




SELECT DISTINCT
O.CustomerID,
O.OrderID,
SD.SampleNumber,
SD.Test
FROM SMXPSU.Orders O,
SMXPSU.Projects P,
SMXPSU.ProjectDefs PD,
SMXPSU.SampleDetails SD,
SMXPSU.Invoices I,
SMXPSU.Ampro_IncompleteOrders A
WHERE O.CustomerID = P.CustomerID
AND O.ProjectID = P.ProjectID
AND P.CustomerID = PD.CustomerID
AND P.ProjectID = PD.ProjectID
AND O.OrderID = SD.OrderID
AND P.Matrix = SD.Matrix
AND P.Test = SD.Test
AND P.Method = SD.Method
AND SD.OrderID = I.OrderID(+)
AND SD.SampleNumber = I.SampleNumber(+)
AND SD.Test = I.Test(+)
AND SD.OrderID = A.OrderID(+)
AND SD.Samplenumber = A.SampleNumber(+)
AND I.InvoiceID IS NULL
AND A.OrderID IS NULL
AND A.SampleNumber IS NULL

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 93 2.76 6.58 3100 553931 0 1372
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 95 2.76 6.59 3100 553931 0 1372

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 61

Rows Row Source Operation
------- ---------------------------------------------------
1372 SORT UNIQUE
39404 NESTED LOOPS
39404 NESTED LOOPS
21619 FILTER
56746 NESTED LOOPS OUTER
56734 FILTER
62389 NESTED LOOPS OUTER
58142 NESTED LOOPS
58142 TABLE ACCESS FULL SAMPLEDETAILS
58142 TABLE ACCESS BY INDEX ROWID ORDERS
58142 INDEX UNIQUE SCAN PK_ORDERS (object id 30758)
5655 TABLE ACCESS BY INDEX ROWID RESULTS
347254 INDEX RANGE SCAN PK_RESULTS (object id 30831)
35127 INDEX RANGE SCAN PK_INVOICES (object id 30828)
39404 TABLE ACCESS BY INDEX ROWID PROJECTS
39729 INDEX RANGE SCAN PK_ (object id 30746)
39404 INDEX UNIQUE SCAN PK_PROJECTDEFS (object id 30761)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 93 0.00 0.00
db file sequential read 2849 0.03 3.62
db file scattered read 85 0.00 0.11
SQL*Net message from client 93 0.58 7.70


Why has the execution plan changed? What needs to be done to get the same execution plan and the same execution time?

Thanks,
Ram


Tom Kyte
November 30, 2006 - 10:02 am UTC

Advantages and Disadvantages

Deepak, December 06, 2006 - 10:01 am UTC

Hi Tom,

Would like to have your comments on the following...

> Does the parallel query have any disadvantages at times? Under what circumstances should I think of disabling the parallel query option in a multi-CPU box?


> What if I set the parallel DEGREE of all the tables of my database to a number 'n' (n may be 2,3,4,5...) which is greater than the number of CPUs in my box keeping an eye on the future addition of new CPUs to my box. Will there be any overhead because of that?

Sorry for asking these trivial questions.



Tom Kyte
December 07, 2006 - 8:38 am UTC

of course - everything, EVERYTHING has

a) times you want to use it
b) times you do not want to use it.


if you have 32 cpus, but you have 100 concurrent users - using parallel query doesn't even begin to make sense - since you already have 3 times as many things trying to use the cpu as cpus! If you let all 100 concurrent users use parallel 8 - you would have 24 times as many things.....
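If a table's default parallel degree is what triggers that, it can be dialed back per table or confined to one controlled statement; a sketch (BIG_T is a hypothetical table name):

```sql
-- see what degree the table would use by default
select table_name, degree from user_tables where table_name = 'BIG_T';

-- stop parallel query from kicking in for it
alter table big_t noparallel;

-- or keep parallelism for a single, deliberate statement only
select /*+ PARALLEL(t 4) */ count(*) from big_t t;
```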





Cardinality in index scan step

Sanji, December 06, 2006 - 3:08 pm UTC

Tom
The env is Oracle 9i Rel2, HP-UX11i.
I extracted a query from v$session_wait with a persistent "db file sequential read" wait event, and generated its explain plan through autotrace:

select TO_CHAR(API.COMPANY,'FM0999'), API.VENDOR, API.INVOICE, TO_CHAR(API.SUFFIX,'FM099')
from LAWSON.APINVOICE API
where API.COMPANY=:a
and API.VENDOR=:b
and API.INVOICE=:c
and API.SUFFIX=:d
and API.CANCEL_SEQ=:e
order by API.COMPANY,API.VENDOR, API.INVOICE, API.SUFFIX, API.CANCEL_SEQ

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=484)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'APINVOICE' (Cost=4 Card=1 Bytes=484)
2 1 INDEX (UNIQUE SCAN) OF 'APISET1' (UNIQUE) (Cost=3 Card=10589683)

From v$session_wait P1/P2, the sequential read wait was on the APISET1 index.

Table APINVOICE's record count is 13,470,630.
The cardinality on the INDEX UNIQUE SCAN step is 10,589,683. Does this mean that the optimizer had to traverse (tentatively) 10,589,683 rowids to arrive at the one required by the query?

Thanks
Sanji

Tom Kyte
December 07, 2006 - 8:51 am UTC

do you have a full example with the create table and dbms_stats.set_table_stats/index_stats to reproduce with - that looks odd.
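For reference, a sketch of the kind of self-contained test case being asked for - faked statistics via dbms_stats let the optimizer see the same numbers without loading 13 million real rows (table/index names are demo inventions, the figures mimic the report above):

```sql
create table apinvoice_demo
( company    number(4),
  vendor     char(9),
  invoice    char(22),
  suffix     number(3),
  cancel_seq number(4) );

create unique index apiset1_demo
  on apinvoice_demo (company, vendor, invoice, suffix, cancel_seq);

exec dbms_stats.set_table_stats(user, 'APINVOICE_DEMO', numrows => 13470630)
exec dbms_stats.set_index_stats(user, 'APISET1_DEMO', numrows => 10589683)

set autotrace traceonly explain
select *
  from apinvoice_demo
 where company = :a and vendor = :b and invoice = :c
   and suffix = :d and cancel_seq = :e;
```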




Mistake

sanji, December 07, 2006 - 10:17 am UTC

I made a mistake. The query from v$session_wait was in fact:

select TO_CHAR(API.COMPANY,'FM0999'), API.VENDOR, API.INVOICE, TO_CHAR(API.SUFFIX,'FM099'),
TO_CHAR(API.CANCEL_SEQ,'FM0999'), TO_CHAR(API.CANCEL_DATE, 'YYYYMMDD'), TO_CHAR(API.BATCH_NUM,'FM099999'),
TO_CHAR(API.BATCH_DATE, 'YYYYMMDD'), API.VOUCHER_NBR, API.AUTH_CODE, API.PROC_LEVEL, API.ACCR_CODE, API.INVOICE_TYPE,
API.INV_CURRENCY, API.PAY_CURRENCY, TO_CHAR(API.INVOICE_DTE, 'YYYYMMDD'), API.PURCH_FR_LOC, API.PO_NUMBER,
TO_CHAR(API.PO_RELEASE,'FM0999'), API.PO_CODE, API.DESCRIPTION, API.BASE_INV_AMT, API.BASE_ACT_AMT,
TO_CHAR(API.BASE_ND,'FM0'), API.TRAN_INV_AMT, API.TRAN_ALOW_AMT, API.TRAN_TXBL_AMT, TO_CHAR(API.TRAN_ND,'FM0'),
API.TRAN_TAX_AMT, API.BASE_DISC_AMT, API.TRAN_DISC_AMT, API.BASE_TOT_PMT, API.TRAN_TOT_PMT, API.BASE_TOT_DIST,
API.TRAN_TOT_DIST, API.TRAN_TOT_TAX, API.TRAN_TOT_TXBL, API.TRAN_PAID_AMT, API.ORIG_CNV_RATE, API.ANTICIPATION,
API.DISCOUNT_RT, TO_CHAR(API.DISC_DATE, 'YYYYMMDD'), TO_CHAR(API.DUE_DATE, 'YYYYMMDD'), TO_CHAR(API.NBR_SPLIT_PMT,'FM099'),
API.SPLIT_PMT_SCH, TO_CHAR(API.NBR_RECUR_PMT,'FM099'), API.RECUR_FREQ, API.REMIT_TO_CODE, API.CASH_CODE, API.BANK_INST_CODE,
API.CURR_RECALC, API.TAX_CODE, API.INCOME_CODE, API.DIST_CODE, TO_CHAR(API.REC_STATUS,'FM0'), TO_CHAR(API.CREATE_DATE, 'YYYYMMDD'),
TO_CHAR(API.DISTRIB_DATE, 'YYYYMMDD'), API.OPERATOR, TO_CHAR(API.CREATION_TIME,'FM099999'), API.VENDOR_GROUP, API.PAY_VENDOR,
API.PAY_GROUP, API.INVOICE_GROUP, TO_CHAR(API.LAST_DIST_SEQ,'FM0999'), TO_CHAR(API.LAST_PMT_SEQ,'FM0999'), API.DISCOUNT_CODE,
API.INVOICE_SOURCE, API.INVC_REF_TYPE, API.APPROVED_FLAG, API.APPRV_OPERATOR,TO_CHAR(API.RETURN_NUMBER,'FM0999999999'),
API.JRNL_BOOK_NBR, API.TAX_POINT, TO_CHAR(API.OBJ_ID,'FM099999999999'), TO_CHAR(API.RECON_DATE, 'YYYYMMDD'), TO_CHAR(API.POD_PRINTED,'FM0'),
API.MATCH_REF_NBR, API.MATCH_FL, API.TERMS_CD, TO_CHAR(API.RCPT_INV_DATE, 'YYYYMMDD'), API.RETAIL_AMT, TO_CHAR(API.MATCH_STATUS,'FM0'),
API.REASON_CODE, API.HANDLING_CODE, API.MATCH_AMT, API.AOC_ALLOW_AMT, API.LOCATION, TO_CHAR(API.MATCH_OBJ_ID,'FM099999999999'),
API.CBPRINT_FL, API.MATCH_TABLE, API.TAX_CODE_CNTL, TO_CHAR(API.LAST_MATCH_LN,'FM0999'), API.MATCH_LEVEL,
TO_CHAR(API.MATCH_DATE, 'YYYYMMDD'), API.PO_INV_TAX, API.BYPASS_MATCH, API.SERVICE_FL, API.SERVICE_AMT, API.BUYER, API.FINAL_DST_FLAG,
API.NOTC, API.STAT_PROC, API.SHIP_VIA, API.UNLOADING_PORT, TO_CHAR(API.INTRASTAT_NBR,'FM099999999999'), API.DROPSHIP_FL, API.FOB_CODE,
TO_CHAR(API.JBK_SEQ_NBR,'FM0999999999'), API.L_INDEX
from LAWSON.APINVOICE API
where API.COMPANY=:a
and API.VENDOR=:b
and API.INVOICE=:c
and API.SUFFIX=:d
and API.CANCEL_SEQ=:e

and the explain plan

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=484)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'APINVOICE' (Cost=4 Card=1 Bytes=484)
2 1 INDEX (UNIQUE SCAN) OF 'APISET1' (UNIQUE) (Cost=3 Card=10589683)

The table definition is
CREATE TABLE APINVOICE (
COMPANY NUMBER(4) NOT NULL,VENDOR CHAR(9) NOT NULL,INVOICE CHAR(22) NOT NULL,
SUFFIX NUMBER(3) NOT NULL,CANCEL_SEQ NUMBER(4) NOT NULL,CANCEL_DATE DATE NOT NULL,
BATCH_NUM NUMBER(6) NOT NULL,BATCH_DATE DATE NOT NULL,VOUCHER_NBR CHAR(10) NOT NULL,
AUTH_CODE CHAR(3) NOT NULL,PROC_LEVEL CHAR(5) NOT NULL,ACCR_CODE CHAR(4) NOT NULL,
INVOICE_TYPE CHAR(1) NOT NULL,INV_CURRENCY CHAR(5) NOT NULL,PAY_CURRENCY CHAR(5) NOT NULL,
INVOICE_DTE DATE NOT NULL,PURCH_FR_LOC CHAR(4) NOT NULL,PO_NUMBER CHAR(14) NOT NULL,
PO_RELEASE NUMBER(4) NOT NULL,PO_CODE CHAR(4) NOT NULL,DESCRIPTION CHAR(30) NOT NULL,
BASE_INV_AMT NUMBER(15,2) NOT NULL,BASE_ACT_AMT NUMBER(15,2) NOT NULL,BASE_ND NUMBER(1) NOT NULL,
TRAN_INV_AMT NUMBER(15,2) NOT NULL,TRAN_ALOW_AMT NUMBER(15,2) NOT NULL,TRAN_TXBL_AMT NUMBER(15,2) NOT NULL,
TRAN_ND NUMBER(1) NOT NULL,TRAN_TAX_AMT NUMBER(15,2) NOT NULL,BASE_DISC_AMT NUMBER(15,2) NOT NULL,
TRAN_DISC_AMT NUMBER(15,2) NOT NULL,BASE_TOT_PMT NUMBER(15,2) NOT NULL,TRAN_TOT_PMT NUMBER(15,2) NOT NULL,
BASE_TOT_DIST NUMBER(15,2) NOT NULL,TRAN_TOT_DIST NUMBER(15,2) NOT NULL,TRAN_TOT_TAX NUMBER(15,2) NOT NULL,
TRAN_TOT_TXBL NUMBER(15,2) NOT NULL,TRAN_PAID_AMT NUMBER(15,2) NOT NULL,ORIG_CNV_RATE NUMBER(12,6) NOT NULL,
ANTICIPATION CHAR(1) NOT NULL,DISCOUNT_RT NUMBER(5,5) NOT NULL,DISC_DATE DATE NOT NULL,
DUE_DATE DATE NOT NULL,nBR_SPLIT_PMT NUMBER(3) NOT NULL,SPLIT_PMT_SCH CHAR(1) NOT NULL,
NBR_RECUR_PMT NUMBER(3) NOT NULL,RECUR_FREQ CHAR(1) NOT NULL,REMIT_TO_CODE CHAR(4) NOT NULL,
CASH_CODE CHAR(4) NOT NULL,BANK_INST_CODE CHAR(3) NOT NULL,CURR_RECALC CHAR(1) NOT NULL,
TAX_CODE CHAR(10) NOT NULL,INCOME_CODE CHAR(4) NOT NULL,DIST_CODE CHAR(9) NOT NULL,
REC_STATUS NUMBER(1) NOT NULL,CREATE_DATE DATE NOT NULL,DISTRIB_DATE DATE NOT NULL,
OPERATOR CHAR(10) NOT NULL,CREATION_TIME NUMBER(6) NOT NULL,VENDOR_GROUP CHAR(4) NOT NULL,
PAY_VENDOR CHAR(9) NOT NULL,PAY_GROUP CHAR(4) NOT NULL,INVOICE_GROUP CHAR(4) NOT NULL,
LAST_DIST_SEQ NUMBER(4) NOT NULL,LAST_PMT_SEQ NUMBER(4) NOT NULL,DISCOUNT_CODE CHAR(10) NOT NULL,
INVOICE_SOURCE CHAR(1) NOT NULL,INVC_REF_TYPE CHAR(2) NOT NULL,APPROVED_FLAG CHAR(1) NOT NULL,
APPRV_OPERATOR CHAR(10) NOT NULL,RETURN_NUMBER NUMBER(10) NOT NULL,JRNL_BOOK_NBR CHAR(12) NOT NULL,
TAX_POINT CHAR(1) NOT NULL,OBJ_ID NUMBER(12) NOT NULL,RECON_DATE DATE NOT NULL,
POD_PRINTED NUMBER(1) NOT NULL,MATCH_REF_NBR CHAR(22) NOT NULL,MATCH_FL CHAR(1) NOT NULL,
TERMS_CD CHAR(5) NOT NULL,RCPT_INV_DATE DATE NOT NULL,RETAIL_AMT NUMBER(15,2) NOT NULL,
MATCH_STATUS NUMBER(1) NOT NULL,REASON_CODE CHAR(4) NOT NULL,HANDLING_CODE CHAR(4) NOT NULL,
MATCH_AMT NUMBER(15,2) NOT NULL,AOC_ALLOW_AMT NUMBER(15,2) NOT NULL,LOCATION CHAR(5) NOT NULL,
MATCH_OBJ_ID NUMBER(12) NOT NULL,CBPRINT_FL CHAR(1) NOT NULL,MATCH_TABLE CHAR(10) NOT NULL,
TAX_CODE_CNTL CHAR(1) NOT NULL,LAST_MATCH_LN NUMBER(4) NOT NULL,MATCH_LEVEL CHAR(3) NOT NULL,
MATCH_DATE DATE NOT NULL,PO_INV_TAX NUMBER(15,2) NOT NULL,BYPASS_MATCH CHAR(1) NOT NULL,
SERVICE_FL CHAR(1) NOT NULL,SERVICE_AMT NUMBER(15,2) NOT NULL,BUYER CHAR(3) NOT NULL,
FINAL_DST_FLAG CHAR(1) NOT NULL,NOTC CHAR(2) NOT NULL,STAT_PROC CHAR(6) NOT NULL,
SHIP_VIA CHAR(12) NOT NULL,UNLOADING_PORT CHAR(5) NOT NULL,INTRASTAT_NBR NUMBER(12) NOT NULL,
DROPSHIP_FL CHAR(1) NOT NULL,FOB_CODE CHAR(3) NOT NULL,JBK_SEQ_NBR NUMBER(10) NOT NULL,
L_INDEX CHAR(4) NOT NULL,APISET11_SS_SW CHAR(1) NOT NULL,APISET14_SS_SW CHAR(1) NOT NULL,
APISET2_SS_SW CHAR(1) NOT NULL,APISET7_SS_SW CHAR(1) NOT NULL,APISET8_SS_SW CHAR(1) NOT NULL,
L_ATAPI_SS_SW CHAR(1) NOT NULL,
CONSTRAINT APISET1 PRIMARY KEY(COMPANY, VENDOR,INVOICE, SUFFIX, CANCEL_SEQ)
)

Tom, should I set the table stats and then regenerate the explain plan?
It took almost an hour and a half yesterday to execute
dbms_stats.gather_table_stats('LAWSON','APINVOICE',estimate_percent=>35,cascade=>true,degree=>4)

Thanks
Sanji

Tom Kyte
December 07, 2006 - 1:11 pm UTC

give me a simple - small - create table, along with create index (something I can cut and paste and run) and use dbms_stats to set appropriate enough stats to reproduce your issue

so we can all see it in our own database.
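A skeleton of the kind of self-contained test case Tom is asking for might look like this sketch. The table name T, the constraint name T_PK, and the index-level numbers are hypothetical; the row counts come from the post.

```sql
-- Hypothetical skeleton: a small table plus faked statistics that mimic
-- the production APINVOICE numbers, so the plan can be examined anywhere.
create table t (
  company    number(4),
  vendor     char(9),
  invoice    char(22),
  suffix     number(3),
  cancel_seq number(4),
  constraint t_pk primary key (company, vendor, invoice, suffix, cancel_seq)
);

begin
  dbms_stats.set_table_stats( user, 'T',
      numrows => 13470630, numblks => 2248283, avgrlen => 589 );
  dbms_stats.set_index_stats( user, 'T_PK',
      numrows  => 13470630, numdist  => 13470630,
      numlblks => 100000,   indlevel => 3 );          -- illustrative values
end;
/

explain plan for
  select * from t
   where company = :a and vendor = :b and invoice = :c
     and suffix = :d and cancel_seq = :e;
```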

Table Stats details

Sanji, December 07, 2006 - 10:38 am UTC

In case the following information is required:

select table_name, num_rows,blocks,empty_blocks,avg_space,avg_row_len,sample_size,global_stats,user_stats
from dba_tables
where table_name='APINVOICE';

TABLE_NAME NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE AVG_ROW_LEN SAMPLE_SIZE GLO USE
---------- ---------- ---------- ------------ ---------- ----------- ----------- --- ---
APINVOICE 13474569 2248283 17330 442 589 4716099 YES NO

Thanks
Sanji

Cannot reproduce the problem

Sanji, December 07, 2006 - 2:19 pm UTC

Tried reproducing the scenario with a smaller table structure. Couldn't reproduce.
I then took the export of the table and imported it in a test database (similar configuration). The same scenario could not be reproduced either.

The explain plan of the same query in the test db

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=204)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'APINVOICE' (Cost=4 Card=1 Bytes=204)
2 1 INDEX (UNIQUE SCAN) OF 'APISET1' (UNIQUE) (Cost=3 Card=1)

The problem is occurring only on the production box.

Coming back to the question

1> Does the cardinality in the INDEX (UNIQUE SCAN) step suggest that the optimizer would traverse that many rowids to arrive at the result set?

2> When I exported/imported this table on the test machine, the explain plan of the same query was perfect. Could the reorg have done the trick?
The stats on the test machine were collected the same way as on the production box.

Thanks
Sanji

Tom Kyte
December 07, 2006 - 5:47 pm UTC

1) it looks funny, wrong
2) no

Tuning

Sanji, December 08, 2006 - 10:27 am UTC

Tom,

Is there an alternative approach to this problem?

For academic purposes too, I'd like to understand why the cardinality reported in the INDEX (UNIQUE SCAN) step is so high (comparatively), if the cardinality matters here at all.

I cannot reproduce the problem on the test machine, and it persists on the production server.
The db_block_size is 4KB and all the tablespaces are dictionary managed.

Would appreciate any suggestion.

Thanks
Sanji

Tom Kyte
December 09, 2006 - 12:28 pm UTC

it looks wrong, don't know how else to say that to you - it does not look correct.

why choose hash join

Harry Zhang, December 12, 2006 - 4:22 am UTC

Hi tom,

Why did the optimizer choose a hash join without hints, while with hints it chose a nested loop? Thanks!

SQL> select /*+ INDEX(ID ID_PK)*/ distinct fd.field_id, fd.field_name, fd.field_valid_id, fd.field_dsply_length
  2   from field_def fd, instrument i, instrument_data id
 where i.mod_group_id=109521 and id.ric=i.ric and fd.field_id=id.field_id
  3    4   order by fd.field_name
  5  /

Elapsed: 00:00:00.52

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=7847 Card=1604 Bytes=83408)
   1    0   SORT (UNIQUE) (Cost=7828 Card=1604 Bytes=83408)
   2    1     HASH JOIN (Cost=7808 Card=1604 Bytes=83408)
   3    2       TABLE ACCESS (FULL) OF 'FIELD_DEF' (Cost=7 Card=3611 Bytes=75831)
   4    2       NESTED LOOPS (Cost=7799 Card=11043 Bytes=342333)
   5    4         TABLE ACCESS (BY INDEX ROWID) OF 'INSTRUMENT' (Cost=1123 Card=3338 Bytes=50070)
   6    5           INDEX (RANGE SCAN) OF 'INS_MG_FK_I' (NON-UNIQUE) (Cost=11 Card=3338)
   7    4         INDEX (RANGE SCAN) OF 'ID_PK' (UNIQUE) (Cost=2 Card=3 Bytes=48)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        888  consistent gets
          0  physical reads
          0  redo size
        688  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed


SQL> select distinct fd.field_id, fd.field_name, fd.field_valid_id, fd.field_dsply_length
 from field_def fd, instrument i, instrument_data id
 where i.mod_group_id=109521 and id.ric=i.ric and fd.field_id=id.field_id
  2   order by fd.field_name  3    4
  5  /

Elapsed: 00:00:19.27

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3808 Card=1604 Bytes=83408)
   1    0   SORT (UNIQUE) (Cost=3789 Card=1604 Bytes=83408)
   2    1     HASH JOIN (Cost=3769 Card=1604 Bytes=83408)
   3    2       TABLE ACCESS (FULL) OF 'FIELD_DEF' (Cost=7 Card=3611 Bytes=75831)
   4    2       HASH JOIN (Cost=3760 Card=11043 Bytes=342333)
   5    4         TABLE ACCESS (BY INDEX ROWID) OF 'INSTRUMENT' (Cost=1123 Card=3338 Bytes=50070)
   6    5           INDEX (RANGE SCAN) OF 'INS_MG_FK_I' (NON-UNIQUE) (Cost=11 Card=1)
   7    4         INDEX (FAST FULL SCAN) OF 'ID_PK' (UNIQUE) (Cost=2443 Card=6743363 Bytes=107893808)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      25678  consistent gets
      25123  physical reads
          0  redo size
        688  bytes sent via SQL*Net to client
        498  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed

 

Tom Kyte
December 12, 2006 - 7:00 am UTC

are the estimated cardinalities in the query plan anywhere close to "reality"

Harry Zhang, December 13, 2006 - 2:30 am UTC

select distinct fd.field_id, fd.field_name, fd.field_valid_id, fd.field_dsply_length
from field_def fd, instrument i, instrument_data id
where i.mod_group_id=109521 and id.ric=i.ric and fd.field_id=id.field_id
order by fd.field_name

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.03       0.02          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      6.66       6.87          0      25679          0          3
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      6.69       6.89          0      25679          0          3

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 39

   Rows  Row Source Operation
-------  ---------------------------------------------------
      3  SORT UNIQUE
    682   HASH JOIN
   3611    TABLE ACCESS FULL FIELD_DEF
    682    HASH JOIN
    340     TABLE ACCESS BY INDEX ROWID INSTRUMENT
    340      INDEX RANGE SCAN INS_MG_FK_I (object id 29597)
6743363     INDEX FAST FULL SCAN ID_PK (object id 29608)

********************************************************************************

select /*+ INDEX(ID ID_PK)*/ distinct fd.field_id, fd.field_name, fd.field_valid_id, fd.field_dsply_length
from field_def fd, instrument i, instrument_data id
where i.mod_group_id=109521 and id.ric=i.ric and fd.field_id=id.field_id
order by fd.field_name

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.00          0          0          0          0
Execute      1      0.01       0.00          0          0          0          0
Fetch        2      0.04       0.03          0        888          0          3
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.06       0.04          0        888          0          3

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 39

   Rows  Row Source Operation
-------  ---------------------------------------------------
      3  SORT UNIQUE
    682   HASH JOIN
   3611    TABLE ACCESS FULL FIELD_DEF
    682    NESTED LOOPS
    340     TABLE ACCESS BY INDEX ROWID INSTRUMENT
    340      INDEX RANGE SCAN INS_MG_FK_I (object id 29597)
    682     INDEX RANGE SCAN ID_PK (object id 29608)

********************************************************************************

select /*+ USE_NL (i,id)*/ distinct fd.field_id, fd.field_name, fd.field_valid_id, fd.field_dsply_length
from field_def fd, instrument i, instrument_data id
where i.mod_group_id=109521 and id.ric=i.ric and fd.field_id=id.field_id
order by fd.field_name

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.02       0.01          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.04       0.03          0        888          0          3
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        4      0.06       0.05          0        888          0          3

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 39

   Rows  Row Source Operation
-------  ---------------------------------------------------
      3  SORT UNIQUE
    682   HASH JOIN
   3611    TABLE ACCESS FULL FIELD_DEF
    682    NESTED LOOPS
    340     TABLE ACCESS BY INDEX ROWID INSTRUMENT
    340      INDEX RANGE SCAN INS_MG_FK_I (object id 29597)
    682     INDEX RANGE SCAN ID_PK (object id 29608)

********************************************************************************


Tom Kyte
December 13, 2006 - 7:46 am UTC

and so I don't have to - did you compare the ESTIMATED to the ACTUAL and what did you see - how about you point it out to us.

Harry Zhang, December 14, 2006 - 2:00 am UTC

I think the INSTRUMENT table's estimated cardinality is wrong:
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'INSTRUMENT' (Cost=1123 Card=3338 Bytes=50070)

The actual card is 340, so the optimizer chose the hash join.

Currently we use
DBMS_STATS.GATHER_TABLE_STATS ( ownname => 'OPS$GLOBAL',tabname => 'INSTRUMENT',estimate_percent => 25, method_opt => 'FOR ALL INDEXED COLUMNS SIZE 254',cascade => TRUE,degree => 4)

for analysis. Do we need to change the way we gather statistics?

Thanks


Tom Kyte
December 15, 2006 - 8:06 am UTC

are you sure you meant to gather histograms
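If histograms on the indexed columns are not actually wanted, one way to regather without them is the following sketch. The owner, table, and other parameters are taken from the post; SIZE 1 means a single bucket, i.e. no histogram.

```sql
begin
  dbms_stats.gather_table_stats(
      ownname          => 'OPS$GLOBAL',
      tabname          => 'INSTRUMENT',
      estimate_percent => 25,
      method_opt       => 'FOR ALL COLUMNS SIZE 1',  -- one bucket: no histograms
      cascade          => TRUE,
      degree           => 4 );
end;
/
```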

Raj, December 18, 2006 - 11:15 am UTC

Hi Tom,
I have the following query, which takes a long time to execute for this one caseid alone ('515157'). I have tried the ANALYZE command, but there was no improvement in response time. The response time for all other caseids is just the blink of an eye, and there is not much difference in the number of records. I have also noticed that this query uses only indexes. Apart from that I don't have any clue from the explain plan, which is the same for all caseids. Can you please help me out? I'm new to performance tuning.
SELECT pt.plcysymbl, pt.plcyid
FROM view_dbl_bllgrp v,
exprncgrp_bllgrp_assoc eba,
pega_exprncgrp_assoc pexa,
case_commission_plan ccp,
policy_translation pt
WHERE EXISTS (
SELECT 'X'
FROM view_dbl_bllgrp v2,
exprncgrp_bllgrp_assoc eba2,
pega_exprncgrp_assoc pexa2,
case_commission_plan ccp2,
policy_translation pt2
WHERE ( NVL (ccp2.fltamt, 0) != NVL (ccp.fltamt, 0)
OR NVL (ccp2.frqncycd, ' ') != NVL (ccp.frqncycd, ' ')
OR NVL (ccp2.cmmssnschdlid, 0) !=
NVL (ccp.cmmssnschdlid, 0)
)
AND NVL (ccp2.cmmssnplntrmdt, v2.frstbllduedt + 1) >
v2.frstbllduedt
AND v2.bllgrpid = eba2.bllgrpid
AND v2.caseid = eba2.caseid
AND NVL (eba2.exprncgrpbllgrptrmdt,
pexa2.pegaexprncgrpeffctvdt + 1
) > pexa2.pegaexprncgrpeffctvdt
AND eba2.exprncgrpbllgrpeffctvdt <
NVL (pexa2.pegaexprncgrptrmdt,
eba2.exprncgrpbllgrpeffctvdt + 1
)
AND eba2.exprncgrpid = pexa2.exprncgrpid
AND eba2.caseid = pexa2.caseid
AND pexa2.fndngmthdplnnbr = pt2.fndngmthdplnnbr
AND pexa2.cvrgplnnbr = pt2.cvrgplnnbr
AND pexa2.fndngmthdcd = pt2.fndngmthdcd
AND pexa2.cvrgtypcd = pt2.cvrgtypcd
AND pexa2.cvrgctgrycd = pt2.cvrgctgrycd
AND pexa2.caseid = pt2.caseid
AND NVL (ccp2.cmmssnplntrmdt, ccp.cmmssnplneffctvdt + 1) >
ccp.cmmssnplneffctvdt
AND ccp2.cmmssnplneffctvdt <
NVL (ccp.cmmssnplntrmdt, ccp2.cmmssnplneffctvdt + 1)
AND ccp2.plnnbr =
DECODE (ccp2.fndngmthdcd,
'N/A', pt2.cvrgplnnbr,
pt2.fndngmthdplnnbr
)
AND ccp2.fndngmthdcd IN ('N/A', pt2.fndngmthdcd)
AND ccp2.cvrgtypcd IN ('N/A', pt2.cvrgtypcd)
AND ccp2.cvrgctgrycd IN ('N/A', pt2.cvrgctgrycd)
AND ccp2.caseid = pt2.caseid
AND ( pt2.fndngmthdplnnbr != pt.fndngmthdplnnbr
OR pt2.cvrgplnnbr != pt.cvrgplnnbr
OR pt2.fndngmthdcd != pt.fndngmthdcd
OR pt2.cvrgtypcd != pt.cvrgtypcd
OR pt2.cvrgctgrycd != pt.cvrgctgrycd
)
AND pt2.plcysymbl = pt.plcysymbl
AND pt2.plcyid = pt.plcyid
AND pt2.caseid = pt.caseid)
AND NVL (ccp.cmmssnplntrmdt, v.frstbllduedt + 1) > v.frstbllduedt
AND v.bllgrpid = eba.bllgrpid
AND v.caseid = eba.caseid
AND NVL (eba.exprncgrpbllgrptrmdt, pexa.pegaexprncgrpeffctvdt + 1) >
pexa.pegaexprncgrpeffctvdt
AND eba.exprncgrpbllgrpeffctvdt <
NVL (pexa.pegaexprncgrptrmdt, eba.exprncgrpbllgrpeffctvdt + 1)
AND eba.exprncgrpid = pexa.exprncgrpid
AND eba.caseid = pexa.caseid
AND pexa.fndngmthdplnnbr = pt.fndngmthdplnnbr
AND pexa.cvrgplnnbr = pt.cvrgplnnbr
AND pexa.fndngmthdcd = pt.fndngmthdcd
AND pexa.cvrgtypcd = pt.cvrgtypcd
AND pexa.cvrgctgrycd = pt.cvrgctgrycd
AND pexa.caseid = pt.caseid
AND ccp.plnnbr =
DECODE (ccp.fndngmthdcd,
'N/A', pt.cvrgplnnbr,
pt.fndngmthdplnnbr
)
AND ccp.fndngmthdcd IN ('N/A', pt.fndngmthdcd)
AND ccp.cvrgtypcd IN ('N/A', pt.cvrgtypcd)
AND ccp.cvrgctgrycd IN ('N/A', pt.cvrgctgrycd)
AND ccp.caseid = pt.caseid
AND (pt.caseid, pt.plcyid, pt.plcysymbl, 'MORE') IN (
SELECT caseid, plcyid, plcysymbl,
DECODE (COUNT (DISTINCT cvrgctgrycd
|| '/'
|| cvrgtypcd
|| '/'
|| fndngmthdcd
|| '/'
|| cvrgplnnbr
|| '/'
|| fndngmthdplnnbr
),
1, 'ONE',
'MORE'
)
FROM policy_translation
WHERE caseid = '515157'
GROUP BY caseid, plcyid, plcysymbl)
ORDER BY pt.plcysymbl, pt.plcyid

Elapsed: 00:20:15.08

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=86 Card=1 Bytes=181)
1 0 SORT (ORDER BY) (Cost=20 Card=1 Bytes=181)
2 1 FILTER
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'CASE_COMMISSION_PLAN
' (Cost=2 Card=1 Bytes=38)

4 3 NESTED LOOPS (Cost=14 Card=1 Bytes=181)
5 4 NESTED LOOPS (Cost=12 Card=1 Bytes=143)
6 5 NESTED LOOPS (Cost=9 Card=1 Bytes=106)
7 6 NESTED LOOPS (Cost=6 Card=1 Bytes=67)
8 7 NESTED LOOPS (Cost=4 Card=1 Bytes=47)
9 8 NESTED LOOPS (Cost=3 Card=1 Bytes=28)
10 9 INDEX (UNIQUE SCAN) OF 'UICASE' (UNIQUE) (Cost=1 Card=1 Bytes=8)

11 9 TABLE ACCESS (BY INDEX ROWID) OF 'CASE_ADMIN_PROVISION' (Cost=2 Card=1 Bytes=20)

12 11 INDEX (RANGE SCAN) OF 'UICASEADMINPROV' (UNIQUE) (Cost=1 Card=3)

13 8 TABLE ACCESS (BY INDEX ROWID) OF 'BILL_GROUP' (Cost=1 Card=1 Bytes=19)

14 13 INDEX (UNIQUE SCAN) OF 'UIBILLGROUP' (UNIQUE)

15 7 TABLE ACCESS (BY INDEX ROWID) OF 'EXPRNCGRP_BLLGRP_ASSOC' (Cost=2 Card=1 Bytes=20)

16 15 INDEX (RANGE SCAN) OF 'IEXPBLLGRPASSOC' (NON-UNIQUE) (Cost=1 Card=1)

17 6 TABLE ACCESS (BY INDEX ROWID) OF 'PEGA_EXPRNCGRP_ASSOC' (Cost=3 Card=1 Bytes=39)

18 17 INDEX (RANGE SCAN) OF 'IPEGAEXPGRPASSOC' (NON-UNIQUE) (Cost=2 Card=1)

19 5 TABLE ACCESS (BY INDEX ROWID) OF 'POLICY_TRANSLATION' (Cost=3 Card=1 Bytes=37)

20 19 INDEX (RANGE SCAN) OF 'IPOLICYTRNSLTN1' (NON-UNIQUE) (Cost=2 Card=1)

21 4 INDEX (RANGE SCAN) OF 'UICASECOMMPLN' (UNIQUE) (Cost=1 Card=1)

22 2 TABLE ACCESS (BY INDEX ROWID) OF 'PEGA_EXPRNCGRP_ASSOC' (Cost=3 Card=1 Bytes=39)

23 22 NESTED LOOPS (Cost=14 Card=1 Bytes=181)
24 23 NESTED LOOPS (Cost=11 Card=1 Bytes=142)
25 24 NESTED LOOPS (Cost=9 Card=1 Bytes=122)
26 25 NESTED LOOPS (Cost=6 Card=1 Bytes=85)
27 26 NESTED LOOPS (Cost=4 Card=1 Bytes=47)
28 27 NESTED LOOPS (Cost=3 Card=1 Bytes=28)
29 28 INDEX (UNIQUE SCAN) OF 'UICASE' (UNIQUE)(Cost=1 Card=1 Bytes=8)

30 28 TABLE ACCESS (BY INDEX ROWID) OF 'CASE_ADMIN_PROVISION' (Cost=2 Card=1 Bytes=20)

31 30 INDEX (RANGE SCAN) OF 'UICASEADMINPROV' (UNIQUE) (Cost=1 Card=3)

32 27 TABLE ACCESS (BY INDEX ROWID) OF 'BILL_GROUP' (Cost=1 Card=1 Bytes=19)

33 32 INDEX (UNIQUE SCAN) OF 'UIBILLGROUP' (UNIQUE)

34 26 TABLE ACCESS (BY INDEX ROWID) OF 'CASE_COMMISSION_PLAN' (Cost=2 Card=1 Bytes=38)

35 34 INDEX (RANGE SCAN) OF 'UICASECOMMPLN' (UNIQUE) (Cost=1 Card=1)

36 25 TABLE ACCESS (BY INDEX ROWID) OF 'POLICY_TRANSLATION' (Cost=3 Card=1 Bytes=37)

37 36 INDEX (RANGE SCAN) OF 'IPOLICYTRNSLTN' (NON-UNIQUE) (Cost=2 Card=1)

38 24 TABLE ACCESS (BY INDEX ROWID) OF 'EXPRNCGRP_BLLGRP_ASSOC' (Cost=2 Card=1 Bytes=20)

39 38 INDEX (RANGE SCAN) OF 'IEXPBLLGRPASSOC' (NON-UNIQUE) (Cost=1 Card=1)

40 23 INDEX (RANGE SCAN) OF 'IPEGAEXPGRPASSOC' (NON-UNIQUE) (Cost=2 Card=1)

41 2 FILTER
42 41 SORT (GROUP BY) (Cost=52 Card=1 Bytes=37)
43 42 TABLE ACCESS (BY INDEX ROWID) OF 'POLICY_TRANSLATION' (Cost=44 Card=862 Bytes=31894)

44 43 INDEX (RANGE SCAN) OF 'IPOLICYTRNSLTN' (NON-UNIQUE) (Cost=5 Card=862)

Statistics
----------------------------------------------------------
14 recursive calls
0 db block gets
25394348 consistent gets
0 physical reads
0 redo size
188 bytes sent via SQL*Net to client
3173 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed



Tom Kyte
December 18, 2006 - 2:32 pm UTC

compare a tkprof of a "good one" and a "bad one"
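A sketch of how to produce those tkprof files with extended SQL trace (event 10046); the trace level and file names are illustrative:

```sql
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';

-- run the query once with a "good" caseid, then (in a fresh session)
-- once with the "bad" caseid '515157'

alter session set events '10046 trace name context off';
-- then, on the server:
--   tkprof <udump_trace_file>.trc good_or_bad.out sys=no
```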

hash join vs nested loop

Harry Zhang, December 19, 2006 - 9:24 am UTC

Thanks Tom,

About gathering statistics for the table in the previous question: we are using SIZE 254, which is nearly the maximum allowed, but that is still not enough information for the optimizer - we have more than 50,000 distinct values for the mod_group_id column.

SQL> select count(distinct MOD_GROUP_ID) from instrument;

COUNT(DISTINCTMOD_GROUP_ID)
---------------------------
                      53578

SQL> select * from ( select count(mod_group_id), mod_group_id from instrument group by mod_group_id order by count(mod_group_id)) where rownum<20;

COUNT(MOD_GROUP_ID) MOD_GROUP_ID
------------------- ------------
                  1        11123
                  1        11131
                  1        11133
...

SQL> select * from ( select count(mod_group_id), mod_group_id from instrument group by mod_group_id order by count(mod_group_id) desc) where rownum<20;

COUNT(MOD_GROUP_ID) MOD_GROUP_ID
------------------- ------------
              55494        99916
              19016        99887
              17256        86546
...

SQL> select table_name, num_rows from user_tables  where table_name='INSTRUMENT';

TABLE_NAME                       NUM_ROWS
------------------------------ ----------
INSTRUMENT                        4091400

SQL>  select INDEX_name, num_rows from user_indexes  where index_name='INS_MG_FK_I';

INDEX_NAME                       NUM_ROWS
------------------------------ ----------
INS_MG_FK_I                       4030436

SQL> select TABLE_NAME, COLUMN_NAME, ENDPOINT_NUMBER, ENDPOINT_VALUE from user_tab_histograms where table_name='INSTRUMENT' and column_name='MOD_GROUP_ID';

TABLE_NAME                     COLUMN_NAME          ENDPOINT_NUMBER ENDPOINT_VALUE
------------------------------ -------------------- --------------- --------------
INSTRUMENT                     MOD_GROUP_ID                       0           3182
INSTRUMENT                     MOD_GROUP_ID                       1           3953
INSTRUMENT                     MOD_GROUP_ID                       2          10037
                                    .
                                    .
                                    .
TABLE_NAME                     COLUMN_NAME          ENDPOINT_NUMBER ENDPOINT_VALUE
------------------------------ -------------------- --------------- --------------
INSTRUMENT                     MOD_GROUP_ID                     244         199397
INSTRUMENT                     MOD_GROUP_ID                     245         248978

I tried dynamic sampling, and only with /*+ dynamic_sampling(i 10) */ does it work as needed.

Is it correct to use dynamic sampling here?
If I use dynamic sampling, there is also some overhead for the optimizer.


 

Tom Kyte
December 19, 2006 - 10:20 am UTC

dynamic sampling is good for this - IF you are not using binds, so if this is a warehouse - sure, if this is a transactional system - not good.

and if it is a warehouse, the sampling will increase parse times (but decrease execution time)
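For reference, dynamic sampling can be requested per statement (as in the post) or session-wide without editing the query; the level shown here is illustrative:

```sql
-- Statement-level, as Harry tried:
select /*+ dynamic_sampling(i 10) */ count(*)
  from instrument i
 where i.mod_group_id = 109521;

-- Or session-wide, affecting every statement parsed in this session:
alter session set optimizer_dynamic_sampling = 4;
```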

hash join vs nested loop

Harry Zhang, December 20, 2006 - 12:19 am UTC

Hi Tom,

Unfortunately, this is transactional system and we are using bind variables. Any other suggestions? Thanks

Tom Kyte
December 20, 2006 - 8:08 am UTC

are you sure it is a transactional system - if you have whopping large result sets such that you sometimes want a full scan and sometimes not? does the application retrieve ALL of the data in one fell swoop or does it paginate through it?

Harry Zhang, January 10, 2007 - 2:11 am UTC

Thanks Tom,

We are using forms to display; currently the form will cache 500 rows and the user can scroll down to see more.


A reader, April 27, 2007 - 5:34 am UTC

One of our teams is doing query optimization in the application. They take the top SQL from Oracle Enterprise Manager and mostly add indexes to avoid "full table scans", from the perspective of the current query only.
1-Is this a good approach?
2-What factors should be taken into consideration when adding indexes?
3-One of the DBAs says that "if we add an index which is not used by the query, then it has no side effect on performance". He says that up to Oracle 7 there was a negative performance impact from adding indexes, but in Oracle 10g
Oracle itself manages the rebuilding of indexes, and therefore adding an extra index will not decrease performance.

Tom Kyte
April 27, 2007 - 11:00 am UTC

1) no, not really. It is perhaps "the approach" when the people designing the system did not give sufficient thought to what they were doing however.

The good approach is to have thought of this, well, a long time ago.

2) as few as possible, as many as you need. Be careful to not end up with

idx1(a)
idx2(a,b)
idx3(a,b,c)

and such.

3) he is making stuff up about "upto oracle7".

Indexes rarely, if ever, need rebuilding. But that has nothing to do with the other points stated in 3.
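To illustrate Tom's point 2 with a hypothetical table T: the single-column and two-column indexes in his example are redundant, because a query on the leading column(s) of a composite index can range-scan that index.

```sql
create table t ( a number, b number, c number, d varchar2(30) );

-- One composite index serves all three predicate shapes below; separate
-- indexes on (a) and (a,b) would add maintenance cost on every
-- insert/update/delete for little benefit.
create index t_abc on t (a, b, c);

select d from t where a = :a;                        -- can use t_abc
select d from t where a = :a and b = :b;             -- can use t_abc
select d from t where a = :a and b = :b and c = :c;  -- can use t_abc
```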

Massive Updation

Karteek, April 27, 2007 - 7:11 am UTC

Hi Tom,

We have a partitioned table (PRODUCT_SALES) containing 150MM records. We need to do a logical deletion of about 10MM records using a staging table (DELTA_OFFERS - contains 10MM recs).

Unfortunately our design is such that we can't use partition pruning here - the records being deleted are scattered across multiple partitions. In order to do this update (logical delete), I have used a cursor which fetches rowids in bulk and updates, committing every 1MM records. Here is the statement I used to create the cursor...

SELECT /*+ INDEX(XP PROOFF_UNIQ_UPDATE) INDEX(XD D_IDX) */ P.ROWID FROM
PRODUCT_SALES P, DELTA_OFFERS D
WHERE P.ACCOUNT_KEY = D.ACCOUNT_KEY AND
P.CHANNEL_CODE = D.CHANNEL_CODE AND
P.PRODUCT_CODE = D.PRODUCT_CODE AND P.ACTIVE_FLAG != 'D'

and EXPLAIN PLAN for this query is

ID PARENT OPERATION OBJECT_NAME
---- ------ ----------------------------------- -----------------
0 SELECT STATEMENT
1 0 HASH JOIN
2 1 INDEX FULL SCAN D_IDX
3 1 TABLE ACCESS BY GLOBAL INDEX ROWID XSELL_PRODUCT_OFFER
4 3 INDEX FULL SCAN PROOFF_UNIQ_UPDATE


To me this plan looks okay (not sure - please tell me), but it is not performing well... even with a stopkey - rownum < N. Please advise. Also, would you suggest an alternate method of deleting... maybe a different approach or different SQL?

-Karteek

- Karteek
Tom Kyte
April 27, 2007 - 11:04 am UTC

just do a single delete????
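Since the "delete" here is logical (a flag update), Tom's single-statement suggestion might look like this sketch. The column names come from the cursor query in the post; the flag value 'D' is an assumption based on the ACTIVE_FLAG != 'D' predicate.

```sql
-- One statement instead of the fetch/update loop; commit once at the end.
update product_sales p
   set p.active_flag = 'D'          -- assumed "logically deleted" marker
 where p.active_flag != 'D'
   and (p.account_key, p.channel_code, p.product_code) in
       ( select d.account_key, d.channel_code, d.product_code
           from delta_offers d );

commit;
```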



Karteek, April 27, 2007 - 1:41 pm UTC

For updating 10MM rows, a single delete might well do better. But this number may well increase to 20 or 30MM - would you suggest the same in that case also?

Updating 30MM records is, by any method, obviously not a good idea, but if it is unavoidable, as in our case... what should I do? What do you suggest for the size of the undo segment for this 10MM update operation - assume the average size of each record is 150 bytes and the field being updated is just a flag.

- Karteek
Tom Kyte
April 27, 2007 - 5:30 pm UTC

absolutely.

let it grow - use automatic undo and let it grow to size.

Karteek, April 28, 2007 - 12:51 pm UTC

I would like to take your suggestion, Tom - a single update with auto-extend undo. To make this effective I want to take advantage of partitioning and RAC.

Currently we have 8 list partitions - on client_id. First, I want to load my delta file(s) into a partitioned staging table (structured like the partitioned live table), so all my 10MM (maybe more) records get scattered across the 8 partitions. As our system is RAC (3 instances), I want to perform this update from all 3 instances in parallel, i.e. I categorize the 8 partitions into 3 groups (based on partition size, number of records and instance capacity) and run the update from each instance, so that any one instance handles only a specific set of partitions - not all of them.

I hope you got my point just from the first 2 lines. But Tom, I need your suggestion here. When I do this massive update in parallel, a lot of undo is generated from all 3 instances. How should I handle undo read/write contention? Can I allocate separate undo tablespaces to the 3 instances, so that if I put all 3 tablespaces on different disks I can avoid I/O contention?

I understand that each instance has its own group of redo logs (at least 2 files per instance)... how does it work for undo?

And also... would there be a chance of getting a "snapshot too old" error if my update runs for a long time, maybe a couple of hours (even with auto-extend undo)?

Please advise a better solution here.

- Karteek

Tom Kyte
April 29, 2007 - 9:23 am UTC

each instance would have its own undo tablespace.
http://docs.oracle.com/docs/cd/B19306_01/rac.102/b28759/adminrac.htm#CHDGAIFJ


until you "commit", the undo will be retained for sure - hence a 1555 on the data you are modifying is unlikely.
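In a RAC spfile, the per-instance undo assignment Tom describes can be sketched like this; the tablespace name, file path, size, and SID are hypothetical:

```sql
-- One undo tablespace per instance, each potentially on its own disk:
create undo tablespace undotbs2
  datafile '/u02/oradata/undotbs2_01.dbf' size 4g autoextend on;

-- Bind it to a specific instance via the spfile:
alter system set undo_tablespace = 'UNDOTBS2' sid = 'prod2';
```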


STATS line in 10046 trace file,

A reader, May 08, 2007 - 11:51 am UTC

I have a situation where a STAT line in a 10046 extended SQL trace file shows a weird number for the rows it is working on. I believe the query was killed after a while; the last lines in the trace file are the STAT lines.

STAT #1 id=27 cnt=23719573 pid=6 pos=2 obj=613891 op='INDEX RANGE SCAN OBJ#(613891) (cr=4243104 pr=7 pw=0 time=256638704 us)'

Here, the value of cnt is 23 million. However, for object_id 613891, which is an index, the number of rows in the dictionary is around 730K.
1* select index_name, num_rows,distinct_keys,leaf_blocks,blevel,clustering_factor
SQL> /

INDEX_NAME NUM_ROWS DISTINCT_KEYS LEAF_BLOCKS BLEVEL CLUSTERING_FACTOR
------------------------------ ---------- ------------- ----------- ---------- -----------------
IXF1DIM_CC_HIER_BDG 738378 43434 1624 2 168745


Where is the 23 million number coming from?

Thanks,

Tom Kyte
May 10, 2007 - 8:50 pm UTC

well, things from the dictionary are "as of the last stats gathering" and might well not be relevant.

but think "nested loops - something done in a loop - over and over and over again"


Need some information

Abhijit Mallick, May 15, 2007 - 11:26 am UTC

Tom,
Thanks for the details about Explain Plan. I have some doubts, please clarify.

When exactly is a plan flushed from the library cache? After it is flushed, how does the next call to the same query get an execution plan? Does it parse and regenerate the plan? What is the way to make sure an efficient plan is used whenever a query is executed?

Thanks,
--Abhijit Mallick
Tom Kyte
May 15, 2007 - 9:20 pm UTC

when is a plan flushed?

when it needs be.


many things cause that, aging out of the LRU cache (not enough space). gathering statistics (typically). adding an index. changing something - that is when.

and the very next execution of that statement will cause it to hard parse again.
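One way (assuming access to the V$ views) to watch this happen is to track the cursor's load and parse counters; the comment tag below is just a hypothetical way of finding your statement:

```sql
-- LOADS climbs each time the plan is rebuilt (hard parse);
-- INVALIDATIONS climbs when a dependent-object change forced it:
SELECT sql_text, loads, invalidations, parse_calls, executions
  FROM v$sql
 WHERE sql_text LIKE 'SELECT /* my_tag */%';
```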

Index Usage

RAM, May 23, 2007 - 6:19 am UTC

Hi tom ,

I have a question about the below select statement

SELECT   a.account_id, NVL (a.display_name, a.NAME) AS display_name
    FROM gbc_core.accounts a, gbc_core.account_types AT
   WHERE a.account_type_id = AT.account_type_id AND AT.account_level IN (1)
ORDER BY display_name


** This query is table scanning, as there is no supporting index on ACCOUNT_TYPE_ID in the accounts table **

I created an index on the accounts table for account_type_id, after which the query still did not use the index, so I tried the different formulation given below, but again no index was used:

SELECT    a.account_id, NVL (a.display_name, a.NAME) AS display_name
    FROM gbc_core.accounts a where exists ( select 'x' from gbc_core.account_types AT
   WHERE a.account_type_id = AT.account_type_id AND AT.account_level IN (1))
ORDER BY display_name


I did some data volume comparisons:

select account_type_id,account_level from gbc_core.account_types


Account_type_ids 10, 35 and 40 map to account_level=1 in the account_types table, which has 10 rows.

select account_type_id,count(*) FROM gbc_core.accounts group by account_type_id

Out of 85,000 records

Account_type_id 10 - > 464 rows
Account_type_id 35 -> 34 rows
Account_type_id 40 -> 34 rows


The optimizer shows a cardinality of 3 for the account_types table, which makes sense.
We have 12 distinct values for account_type_id in the accounts table, and the final cardinality reported by the optimizer is 85,000/12, despite having histograms collected with a bucket size of 12.

On further analysis I observed that the index is used only if an index hint or first_rows is specified. Interestingly, when the index is used, buffer gets drop from 1200 to 32 with no physical reads.

I am not able to understand why the optimizer chooses a FTS on the accounts table even though it is provided with all the required data distribution details and statistics have been collected as well.

Thanks in advance
Tom Kyte
May 23, 2007 - 7:12 am UTC

insufficient data - no creates, no dbms_stats.set_...., no plans to look at, no peek at how you gather statistics

so, it got three on the one table.
what did it guess for the other table.


in short, the optimizer will look at:

WHERE a.account_type_id = AT.account_type_id AND AT.account_level IN (1)


and say "hmm, I get about 3 records with SOME UNKNOWN values of account_type_id because of the account_level in (1)"

Now, what guess do I have for the cardinality of

where account_type_id in ( ?, ?, ? )

(I have three account_type_id's, at optimization time - i don't know WHAT they are, just that there are probably 3)

Index Usage

A reader, May 23, 2007 - 8:37 am UTC

Hi tom ,

Apologies for not giving the explain plan.

The plans for the FTS and the index access are given below.

PLAN_TABLE_OUTPUT

 
-------------------------------------------------------------------------------
| Id  | Operation            |  Name          | Rows  | Bytes |TempSpc| Cost  |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                | 21255 |  1203K|       |   321 |
|   1 |  SORT ORDER BY       |                | 21255 |  1203K|  2856K|   321 |
|*  2 |   HASH JOIN          |                | 21255 |  1203K|       |   117 |
|*  3 |    TABLE ACCESS FULL | ACCOUNT_TYPES  |     3 |    21 |       |     2 |
|   4 |    TABLE ACCESS FULL | ACCOUNTS       | 85018 |  4234K|       |   114 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access("A"."ACCOUNT_TYPE_ID"="AT"."ACCOUNT_TYPE_ID")
   3 - filter("AT"."ACCOUNT_LEVEL"=1)
 
Note: cpu costing is off




PLAN_TABLE_OUTPUT

 
---------------------------------------------------------------------------------------------
| Id  | Operation                    |  Name                | Rows  | Bytes |TempSpc| Cost  |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                      | 21255 |  1203K|       |   347 |
|   1 |  SORT ORDER BY               |                      | 21255 |  1203K|  3032K|   347 |
|   2 |   TABLE ACCESS BY INDEX ROWID| ACCOUNTS             |  7085 |   352K|       |    47 |
|   3 |    NESTED LOOPS              |                      | 21255 |  1203K|       |   143 |
|*  4 |     TABLE ACCESS FULL        | ACCOUNT_TYPES        |     3 |    21 |       |     2 |
|*  5 |     INDEX RANGE SCAN         | IND_ACCOUNT_TYPE_ID  |  7085 |       |       |    34 |
---------------------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   4 - filter("AT"."ACCOUNT_LEVEL"=1)
   5 - access("A"."ACCOUNT_TYPE_ID"="AT"."ACCOUNT_TYPE_ID")
 
Note: cpu costing is off


select count(account_type_id) from account_types gives 14 rows

select count(account_type_id) from accounts gives 85023 rows

select COUNT (DISTINCT account_type_id) from accounts gives 12 rows

GATHERED STATISTICS With

DBMS_STATS.GATHER_TABLE_STATS(owner,'ACCOUNTS',METHOD_OPT=> 'FOR ALL INDEXED COLUMNS SIZE AUTO')


select NUM_BUCKETS from dba_tab_columns where table_name='ACCOUNTS' AND COLUMN_NAME='ACCOUNT_TYPE_ID'


Returns 11 rows

Account_type_id has values 10,35 and 40 for account_level=1
under account_types table.

For accounts table Out of 85,000 records

Account_type_id 10 has 464 rows
Account_type_id 35 has 34 rows
Account_type_id 40 has 34 rows


Thanks again

Tom Kyte
May 23, 2007 - 9:29 am UTC

ok, so, out of 80k rows, it thinks 20k (25%) will be returned from the table.

why - because it knows "3 rows from one table" - fine. but it does not know that "3 rows from that one table means we'll get 10, 35, 40 as values from the other and gets 1,000"

it only knows "we'll get about 3 values, that'll get about 20k rows"

it does not know that account_level => 1 means account_type_id 10,35, 40 will be returned (histograms are about A COLUMN, not across columns)
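The Card=21255 in the plans above falls straight out of that single-column arithmetic; a sketch using the figures quoted earlier in this thread:

```sql
-- Per-value estimate for accounts (no cross-table knowledge):
--   85018 rows / 12 distinct account_type_ids ~= 7085 rows per value
-- Join estimate: ~3 qualifying account_types rows, so
--   3 * 7085 ~= 21255 rows  -- the HASH JOIN cardinality shown in the plan
-- The actual result is only a few hundred rows (464 + 34 + 34),
-- because the three matching values happen to be rare ones.
```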

have you tried dynamic sampling?

select /*+ dynamic_sampling(t 3) */ ..........


that'll let us understand the relationships cross columns at hard parse time.

Index Usage

RAM, May 23, 2007 - 10:07 am UTC

Hi tom ,

Tried dynamic sampling - no luck, it still goes for a FTS.

So can we conclude that having histograms will not help when we have skewed data across columns?

Thanks
Tom Kyte
May 26, 2007 - 9:29 am UTC

no, you cannot conclude that. That is precisely what histograms are for.

can you show us the autotrace traceonly explain (use code button to preserve formatting) for the query with and without dynamic sampling please.

Consistent Gets

Marc, May 24, 2007 - 2:03 am UTC

Tom, can you explain what consistent gets are and why they change each time I re-run this query?

SQL> SELECT COUNT (web_url)
2 FROM assets_active s
3 WHERE i_full_format != 'folder'
4 AND (( r_object_type = 'XMY'
5 AND ( EXISTS (
6 SELECT web_url
7 FROM assets_active_X r
8 WHERE s.web_url = r.web_url
9 AND r.keywords = 'X')
10 AND EXISTS (
11 SELECT web_url
12 FROM assets_active_x r
13 WHERE s.web_url = r.web_url
14 AND r.keywords = 'B')
15 )
16 AND provider = 'IG'
17 )
18 )
19 ORDER BY r_date DESC;

COUNT(WEB_URL)
-----------------
1790


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
10853 consistent gets
5644 physical reads
0 redo size
205 bytes sent via SQL*Net to client
235 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Tom Kyte
May 26, 2007 - 10:57 am UTC

consistent gets are logical IOs - reads from the buffer cache - and will include reads of the rollback segments. autotrace also includes consistent gets performed as the result of recursive sql.

So, if you hard parse that query, you would likely see "NN recursive calls" (sql done by us to do your sql). All of the logical IO they did - you would see added into your consistent gets. Next time you run it, no recursive sql, less IO's

or, if you are on a multi-user system and someone modifies the assets_active (or any referenced table) of yours while your query is running - you will need to undo those changes and the additional reads to the undo tablespace will be counted there as well. So, depending on how active the table was while you were running it - that'll change the IO's
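A simple way to see the recursive-SQL effect for yourself (table name hypothetical):

```sql
SET AUTOTRACE TRACEONLY STATISTICS

-- First execution: hard parse; "recursive calls" and the extra
-- consistent gets from the dictionary SQL show up in the statistics.
SELECT COUNT(*) FROM some_table;

-- Second execution: soft parse; recursive calls drop to (near) zero
-- and consistent gets settle at the query's own logical I/O.
SELECT COUNT(*) FROM some_table;
```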

Index Usage

A reader, June 01, 2007 - 7:35 am UTC

Hi tom ,

I am providing the autotrace outputs with and without dynamic sampling.

SELECT   a.account_id, NVL (a.display_name, a.NAME) AS display_name
FROM gbc_core.accounts a, gbc_core.account_types AT
WHERE a.account_type_id = AT.account_type_id AND AT.account_level IN (1)
ORDER BY display_name;

542 rows selected.


Execution Plan
----------------------------------------------------------
   0
SELECT STATEMENT Optimizer=CHOOSE (Cost=321 Card=21255 Bytes=1232790)


   1    0
  SORT (ORDER BY) (Cost=321 Card=21255 Bytes=1232790)


   2    1
    HASH JOIN (Cost=117 Card=21255 Bytes=1232790)


   3    2
      TABLE ACCESS (FULL) OF 'ACCOUNT_TYPES' (Cost=2 Card=3 Bytes=21)


   4    2
      TABLE ACCESS (FULL) OF 'ACCOUNTS' (Cost=114 Card=85018 Bytes=4335918)






Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1181  consistent gets
         84  physical reads
         60  redo size
      13600  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
         38  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        542  rows processed






SELECT  /*+ dynamic_sampling(accounts 3) */ a.account_id, NVL (a.display_name, a.NAME) AS display_name
FROM gbc_core.accounts a, gbc_core.account_types AT
WHERE a.account_type_id = AT.account_type_id AND AT.account_level IN (1)
ORDER BY display_name;

542 rows selected.


Execution Plan
----------------------------------------------------------
   0
SELECT STATEMENT Optimizer=CHOOSE (Cost=321 Card=21255 Bytes=1232790)


   1    0
  SORT (ORDER BY) (Cost=321 Card=21255 Bytes=1232790)


   2    1
    HASH JOIN (Cost=117 Card=21255 Bytes=1232790)


   3    2
      TABLE ACCESS (FULL) OF 'ACCOUNT_TYPES' (Cost=2 Card=3 Bytes=21)


   4    2
      TABLE ACCESS (FULL) OF 'ACCOUNTS' (Cost=114 Card=85018 Bytes=4335918)






Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1179  consistent gets
         83  physical reads
          0  redo size
      13600  bytes sent via SQL*Net to client
        598  bytes received via SQL*Net from client
         38  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
        542  rows processed

DBMS_XPLAN.DISPLAY

pg, August 30, 2007 - 10:18 am UTC

Tom,

What does the Cost (%CPU) column mean when I use DBMS_XPLAN.DISPLAY to display a plan?

Say I have two plans on the same database.

1.
Cost (%CPU)
-----------
1020 (2)

2.
Cost (%CPU)
-----------
1913 (1)


Which one is better ?

Thanks,

Tom Kyte
September 04, 2007 - 5:14 pm UTC

they are both perfect for their respective queries.

in theory, the second one will consume more resources, in theory.
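For what the bracketed figure actually is: it is the CPU share of the total cost, roughly (my reading of the DBMS_XPLAN output, not an official formula):

```sql
-- Cost (%CPU) combines two PLAN_TABLE columns; approximately:
--   %CPU ~= round(100 * (COST - IO_COST) / COST)
-- so "1020 (2)" reads as: total cost 1020, of which about 2% is
-- estimated CPU work and the remaining ~98% is estimated I/O.
```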

Another important performance suggestion reqd

Abhijit Mallick, October 10, 2007 - 9:09 am UTC

Please look at the two queries and explain plans and suggest which is the better way to write the query:

1st Option:

SELECT oh.order_num,
oh.current_status_code,
oh.company_num,
oh.order_date,
oh.customer_num,
oh.po_num,
oh.invoice_date,
oh.ship_code,
oh.ship_way_bill,
sr.sales_rep_name salesrep_name,
pc.payment_code_1,
pc.payment_code_2,
pc.payment_code_3,
pc.pay_code_desc,
(SELECT cq.first_name || '' || cq.last_name
FROM customer_view.contact cq
WHERE oh.customer_num = cq.customer_num
AND cq.address_seq_num = oh.ship_to_seq_num
AND cq.address_type = 'S') bill_to_contact_name,
ad.company_name bill_to_company_name,
ad.street_address_1 bill_to_street_address_1,
ad.street_address_2 bill_to_street_address_2,
ad.city bill_to_city,
ad.state bill_to_state,
ad.zip_code bill_to_zip_code,
(SELECT cq.first_name || '' || cq.last_name
FROM customer_view.contact cq
WHERE oh.customer_num = cq.customer_num
AND cq.address_seq_num = 0
AND cq.address_type = 'B') ship_to_contact_name,
ad2.company_name ship_to_company_name,
ad2.street_address_1 ship_to_street_address_1,
ad2.city ship_to_city,
ad2.state ship_to_state,
ad2.zip_code ship_to_zip_code,
(decode(sr.area_code,
0,
NULL,
area_code) || decode(sr.phone_num,
0,
NULL,
sr.phone_num)) sales_phone_num,
(decode(sr.fax_area_code,
0,
NULL,
fax_area_code) || decode(fax_phone_num,
0,
NULL,
fax_phone_num)) sales_fax_num
--get_cmpmstr_phone_num(oh.business_unit_id,oh.company_num,oh.location_num,'C') customer_svc_phone_num,
--get_cmpmstr_phone_num(oh.business_unit_id,oh.company_num,oh.location_num,'T') tech_support_phone_num
FROM axis_view.ordhdr oh,
customer_view.contact ct,
--customer_base.contact cq2,
customer_view.address ad,
customer_view.address ad2,
axis_view.salesrep sr,
doms.us_cmpmstr cm,
(SELECT b.order_num,
MAX(b.pay_code1) payment_code_1,
MAX(b.pay_code2) payment_code_2,
MAX(b.pay_code3) payment_code_3,
MAX(b.pay_code_desc1) pay_code_desc
FROM (SELECT a.order_num,
a.sequence_num,
a.pay_code,
p.pay_code_desc,
decode(a.sequence_num,
1,
a.pay_code,
NULL) pay_code1,
decode(a.sequence_num,
1,
p.pay_code_desc,
NULL) pay_code_desc1,
decode(a.sequence_num,
2,
a.pay_code,
NULL) pay_code2,
decode(a.sequence_num,
3,
a.pay_code,
NULL) pay_code3
FROM axis_view.ordpay a,
axis_view.paycode p
WHERE order_num = 679356249
AND a.pay_code = p.pay_code) b
GROUP BY b.order_num) pc
WHERE oh.customer_num = ad.customer_num
AND ad.address_seq_num = 0
AND ad.address_type = 'B'
AND oh.customer_num = ad2.customer_num
AND oh.ship_to_seq_num = ad2.address_seq_num
AND ad2.address_type = 'S'
AND sr.salesrep_num = oh.salesrep_index9
AND oh.order_num = 679356249
AND oh.business_unit_id = 11

Execution Plan
----------------------------------------------------------

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 324 | 777K (1)|
| 1 | TABLE ACCESS HASH | CONTACT | 1 | 21 | 1 (0)|
| 2 | TABLE ACCESS HASH | CONTACT | 1 | 21 | 1 (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 324 | 777K (1)|
| 4 | MERGE JOIN CARTESIAN | | 1 | 324 | 14 (8)|
| 5 | MERGE JOIN CARTESIAN | | 1 | 233 | 4 (0)|
| 6 | NESTED LOOPS | | 1 | 233 | 4 (0)|
| 7 | NESTED LOOPS | | 1 | 171 | 3 (0)|
| 8 | NESTED LOOPS | | 1 | 106 | 2 (0)|
| 9 | TABLE ACCESS HASH | ORDHDR | 1 | 65 | 1 (0)|
| 10 | TABLE ACCESS BY INDEX ROWID| SALESREP | 1 | 41 | 1 (0)|
| 11 | INDEX UNIQUE SCAN | XPK_SALESREP | 1 | | 0 (0)|
| 12 | TABLE ACCESS HASH | ADDRESS | 1 | 65 | 1 (0)|
| 13 | TABLE ACCESS HASH | ADDRESS | 1 | 62 | 1 (0)|
| 14 | BUFFER SORT | | 1 | | 3 (0)|
| 15 | INDEX FULL SCAN | USCMPMSTR_IXPK_BUICOMPLOCADDR | 1 | | |
| 16 | BUFFER SORT | | 1 | 91 | 14 (8)|
| 17 | VIEW | | 1 | 91 | 10 (10)|
| 18 | SORT GROUP BY | | 1 | 29 | 10 (10)|
| 19 | HASH JOIN | | 1 | 29 | 10 (10)|
| 20 | TABLE ACCESS HASH | ORDPAY | 1 | 9 | 1 (0)|
| 21 | TABLE ACCESS FULL | PAYCODE | 1678 | 33560 | 8 (0)|
| 22 | BUFFER SORT | | 161M| | 777K (1)|
| 23 | INDEX FULL SCAN | XTM_CONTACT_CUST_CONT | 161M| | 777K (1)|
----------------------------------------------------------------------------------------------------

Note
-----
- 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
750 recursive calls
0 db block gets
291 consistent gets
2 physical reads
0 redo size
1940 bytes sent via SQL*Net to client
3326 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
21 sorts (memory)
0 sorts (disk)
0 rows processed



2nd option
SELECT oh.order_num,
(select op.PAY_CODE PAY_CODE from axis_view.ordpay op where op.order_num = oh.order_num and op.SEQUENCE_NUM = 1) payment_code_1,
(select op.PAY_CODE PAY_CODE from axis_view.ordpay op where op.order_num = oh.order_num and op.SEQUENCE_NUM = 2) payment_code_2,
(select op.PAY_CODE PAY_CODE from axis_view.ordpay op where op.order_num = oh.order_num and op.SEQUENCE_NUM = 3) payment_code_3,
oh.CURRENT_STATUS_CODE,
oh.company_num,
oh.order_date,
oh.customer_num,
oh.po_num,
oh.invoice_date,
oh.ship_code,
oh.ship_way_bill,
(select sr.SALES_REP_NAME from axis_view.salesrep sr where sr.SALESREP_NUM = oh.SALESREP_INDEX9) salesrep_name,
(select op.PAY_CODE PAY_CODE from axis_view.ordpay op where op.order_num = oh.order_num and op.SEQUENCE_NUM = 1) pay_code_desc,
(select cq.first_name || '' || cq.last_name from customer_view.contact cq where oh.customer_num = cq.customer_num and cq.address_seq_num = oh.ship_to_seq_num and cq.address_type = 'S') bill_to_contact_name,
ad.company_name bill_to_company_name,
ad.street_address_1 bill_to_street_address_1,
ad.street_address_2 bill_to_street_address_2,
ad.city bill_to_city,
ad.state bill_to_state,
ad.zip_code bill_to_zip_code,
(select cq.first_name || '' || cq.last_name from customer_view.contact cq where oh.customer_num = cq.customer_num and cq.address_seq_num = 0 and cq.address_type = 'B') ship_to_contact_name,
ad2.company_name ship_to_company_name,
ad2.street_address_1 ship_to_street_address_1,
ad2.city ship_to_city,
ad2.state ship_to_state,
ad2.zip_code ship_to_zip_code,
(SELECT decode(phone_area_code, 0, NULL, phone_area_code) || decode(phone_num, 0, NULL, phone_num) FROM doms_er.cmpmstr WHERE business_unit_id = oh.business_unit_id AND company_num = oh.company_num AND location_num = oh.location_num AND address_type = 'A') sales_phone_num,
(SELECT decode(phone_area_code, 0, NULL, phone_area_code) || decode(phone_num, 0, NULL, phone_num) FROM doms_er.cmpmstr WHERE business_unit_id = oh.business_unit_id AND company_num = oh.company_num AND location_num = oh.location_num AND address_type = 'F') sales_fax_num,
(SELECT decode(phone_area_code, 0, NULL, phone_area_code) || decode(phone_num, 0, NULL, phone_num) FROM doms_er.cmpmstr WHERE business_unit_id = oh.business_unit_id AND company_num = oh.company_num AND location_num = oh.location_num AND address_type = 'C') customer_svc_phone_num,
(SELECT decode(phone_area_code, 0, NULL, phone_area_code) || decode(phone_num, 0, NULL, phone_num) FROM doms_er.cmpmstr WHERE business_unit_id = oh.business_unit_id AND company_num = oh.company_num AND location_num = oh.location_num AND address_type = 'T') tech_support_phone_num
FROM axis_view.ordhdr oh,
customer_view.address ad,
customer_view.address ad2
where oh.customer_num = ad.customer_num and
ad.address_seq_num = 0 and ad.address_type = 'B' and
oh.customer_num = ad2.customer_num and
oh.ship_to_seq_num = ad2.address_seq_num and ad2.address_type = 'S'
and oh.order_num = order_num_in
AND oh.business_unit_id = buid_in;


Execution Plan
----------------------------------------------------------
Plan hash value: 3826482004

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 195 | 3 (0)| 00:00:01 |
|* 1 | TABLE ACCESS HASH | ORDPAY | 1 | 9 | 1 (0)| 00:00:01 |
|* 2 | TABLE ACCESS HASH | ORDPAY | 1 | 9 | 1 (0)| 00:00:01 |
|* 3 | TABLE ACCESS HASH | ORDPAY | 1 | 9 | 1 (0)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| SALESREP | 1 | 21 | 2 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | XPK_SALESREP | 1 | | 1 (0)| 00:00:01 |
|* 6 | TABLE ACCESS HASH | ORDPAY | 1 | 9 | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS HASH | CONTACT | 1 | 21 | 1 (0)| 00:00:01 |
|* 8 | TABLE ACCESS HASH | CONTACT | 1 | 21 | 1 (0)| 00:00:01 |
| 9 | TABLE ACCESS BY INDEX ROWID| US_CMPMSTR | 1 | 57 | 1 (0)| 00:00:01 |
|* 10 | INDEX UNIQUE SCAN | USCMPMSTR_IXPK_BUICOMPLOCADDR | 1 | | 1 (0)| 00:00:01 |
| 11 | TABLE ACCESS BY INDEX ROWID| US_CMPMSTR | 1 | 57 | 1 (0)| 00:00:01 |
|* 12 | INDEX UNIQUE SCAN | USCMPMSTR_IXPK_BUICOMPLOCADDR | 1 | | 1 (0)| 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID| US_CMPMSTR | 1 | 57 | 1 (0)| 00:00:01 |
|* 14 | INDEX UNIQUE SCAN | USCMPMSTR_IXPK_BUICOMPLOCADDR | 1 | | 1 (0)| 00:00:01 |
| 15 | TABLE ACCESS BY INDEX ROWID| US_CMPMSTR | 1 | 57 | 1 (0)| 00:00:01 |
|* 16 | INDEX UNIQUE SCAN | USCMPMSTR_IXPK_BUICOMPLOCADDR | 1 | | 1 (0)| 00:00:01 |
| 17 | NESTED LOOPS | | 1 | 195 | 3 (0)| 00:00:01 |
| 18 | NESTED LOOPS | | 1 | 133 | 2 (0)| 00:00:01 |
|* 19 | TABLE ACCESS HASH | ORDHDR | 1 | 68 | 1 (0)| 00:00:01 |
|* 20 | TABLE ACCESS HASH | ADDRESS | 1 | 65 | 1 (0)| 00:00:01 |
|* 21 | TABLE ACCESS HASH | ADDRESS | 1 | 62 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("ORDP"."ORDER_NUM"=:B1)
filter("ORDP"."SEQUENCE_NUM"=1)
2 - access("ORDP"."ORDER_NUM"=:B1)
filter("ORDP"."SEQUENCE_NUM"=2)
3 - access("ORDP"."ORDER_NUM"=:B1)
filter("ORDP"."SEQUENCE_NUM"=3)
5 - access("SALESREP_NUM"=:B1)
6 - access("ORDP"."ORDER_NUM"=:B1)
filter("ORDP"."SEQUENCE_NUM"=1)
7 - access("CUSTOMER_NUM"=:B1)
filter("ADDRESS_SEQ_NUM"=:B1 AND "ADDRESS_TYPE"='S')
8 - access("CUSTOMER_NUM"=:B1)
filter("ADDRESS_SEQ_NUM"=0 AND "ADDRESS_TYPE"='B')
10 - access("BUSINESS_UNIT_ID"=:B1 AND "COMPANY_NUM"=:B2 AND "LOCATION_NUM"=:B3 AND
"ADDRESS_TYPE"='A')
12 - access("BUSINESS_UNIT_ID"=:B1 AND "COMPANY_NUM"=:B2 AND "LOCATION_NUM"=:B3 AND
"ADDRESS_TYPE"='F')
14 - access("BUSINESS_UNIT_ID"=:B1 AND "COMPANY_NUM"=:B2 AND "LOCATION_NUM"=:B3 AND
"ADDRESS_TYPE"='C')
16 - access("BUSINESS_UNIT_ID"=:B1 AND "COMPANY_NUM"=:B2 AND "LOCATION_NUM"=:B3 AND
"ADDRESS_TYPE"='T')
19 - access("ORDER_NUM"=745365943)
filter("BUSINESS_UNIT_ID"=11)
20 - access("CUSTOMER_NUM"="CUSTOMER_NUM")
filter("ADDRESS_SEQ_NUM"=0 AND "ADDRESS_TYPE"='B')
21 - access("CUSTOMER_NUM"="CUSTOMER_NUM")
filter("ADDRESS_TYPE"='S' AND "SHIP_TO_SEQ_NUM"="ADDRESS_SEQ_NUM")

Note
-----
- 'PLAN_TABLE' is old version


Statistics
----------------------------------------------------------
495 recursive calls
0 db block gets
307 consistent gets
0 physical reads
0 redo size
2090 bytes sent via SQL*Net to client
1986 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
9 sorts (memory)
0 sorts (disk)
0 rows processed

DML privileage for running explain plan

RB, April 29, 2008 - 8:09 pm UTC

Hi Tom,

Why would we need DML privilege to run explain plan for an update statement?

Thanks,
Ravi
Tom Kyte
April 30, 2008 - 9:40 am UTC

you don't have the ability to do the update, that is why - in order to even think about beginning to process that statement, you need access.

the explain for that update for you is "ora-01031", that, in effect, is your plan. There is no table to update - we cannot "start" to explain how we would do something we cannot do.

Re: DML privileage for running explain plan

RB, April 30, 2008 - 1:34 pm UTC

Thanks for the reply Tom.

Sometimes it so happens that a query has performance issues in a production system for which I have read-only access. At least I want to have a look at the explain plan to see what is going on. Am I off the line?
Tom Kyte
April 30, 2008 - 3:24 pm UTC

we cannot parse it, you are not allowed to see it - you haven't been granted access to it.

query v$sql_plan - that shows the actual plan - you probably don't want to use explain plan in any case - explain plan doesn't show you what plan WAS used, just what plan MIGHT be used.
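A convenient wrapper over V$SQL_PLAN (assuming you are on 10g or later, have SELECT privilege on the V$ views, and can identify the statement's SQL_ID) is DBMS_XPLAN.DISPLAY_CURSOR; the search fragment below is hypothetical:

```sql
-- Find the SQL_ID, then pull the plan that was actually executed:
SELECT sql_id, child_number, sql_text
  FROM v$sql
 WHERE sql_text LIKE '%some distinctive fragment%';

SELECT *
  FROM TABLE(dbms_xplan.display_cursor('&sql_id', NULL, 'TYPICAL'));
```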

Reading Explain plans??

Newbie, May 08, 2008 - 11:04 am UTC

Tom, I am trying to get a count of some statuses from the Customer table and ran queries with
<code>set autotrace on explain


I am at a very basic level with optimization techniques and was wondering if you could help me read these plans?
set autotrace on explain;

select sum(case when ui_status=1 then 0 else 1 end) ui_claim
from (
      select distinct id,ui_status
        from Customer_data sd 
       where office in (select office from offices@DBLink1
                            where admin_area='PT0055')
         and ctime between to_date('01/01/2007','mm/dd/yyyy') and to_date('12/31/2007','mm/dd/yyyy')
         and sd.status_cd=1) ;


  UI_CLAIM
----------
      2398


1 row selected.

Execution Plan
----------------------------------------------------------
   0       SELECT STATEMENT Optimizer Mode=CHOOSE (Cost=9328 Card=1 Bytes=19)
   1    0    SORT AGGREGATE (Card=1 Bytes=19)
   2    1      VIEW (Cost=9328 Card=877 Bytes=16 K)
   3    2        SORT UNIQUE (Cost=9328 Card=877 Bytes=38 K)
   4    3          HASH JOIN (Cost=9312 Card=877 Bytes=38 K)
   5    4            REMOTE (Cost=3 Card=10 Bytes=120)
   6    4            TABLE ACCESS FULL CUSTOMER_DATA (Cost=9308 Card=23 K Bytes=749 K)



select sum(case when ui_status=1 then 0 else 1 end) ui_claim
  from customer_data sd ,offices@DBlink1 e
 where sd.office=e.office
   and e.admin_area='PT0055'
   and ctime between to_date('01/01/2007','mm/dd/yyyy') and to_date('12/31/2007','mm/dd/yyyy')
   and sd.status_cd=1


  UI_CLAIM
----------
      2398


1 row selected.

Execution Plan
----------------------------------------------------------
   0       SELECT STATEMENT Optimizer Mode=CHOOSE (Cost=9312 Card=1 Bytes=31)
   1    0    SORT AGGREGATE (Card=1 Bytes=31)
   2    1      HASH JOIN (Cost=9312 Card=877 Bytes=26 K)
   3    2        REMOTE (Cost=3 Card=10 Bytes=100)
   4    2        TABLE ACCESS FULL CUSTOMER_DATA (Cost=9308 Card=23 K Bytes=476 K)



From above, which one looks better and why?

Thanks for any info you can provide me regarding these queries.</code>
Tom Kyte
May 12, 2008 - 9:38 am UTC

you cannot compare them for performance.

they return potentially different answers.

You have to run the one that returns the CORRECT ANSWER. Think about that please....
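To make the "different answers" point concrete: a plain join can multiply rows when the lookup table is not unique on the join key, while the IN/EXISTS form cannot. A sketch with hypothetical data:

```sql
-- If offices@DBLink1 had OFFICE = 'A' on two rows with
-- admin_area = 'PT0055', then for a customer row in office 'A':
--   join form   : that customer row appears twice in the join result
--   IN / EXISTS : the customer row is counted once, full stop
-- The first query also applies DISTINCT (id, ui_status) before summing,
-- so the two sums can legitimately disagree on other data sets.
```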

Tuning SQL Queries

Krishna, June 10, 2008 - 10:48 am UTC

It was an excellent explanation.
Thanks a lot.

the time column of utlxpls

Mike, July 18, 2008 - 9:50 am UTC

Tom, I have been assigned to a new project, responsible for tuning many long-running reports on a 10gR2 database.
Many have run for over 10 hours. To be effective, I would like to use 10gR2's utlxpls.sql to check the queries I have changed before launching/tracing them on the database. I have gathered all the stats for the base objects.
A question on the explain plan output - the "time" column: how "real" is that time for an operation? Is it close to the actual runtime of a complicated, long-running query, if all the stats are available?
TIA

Tom Kyte
July 18, 2008 - 5:07 pm UTC

it is not real, it is a pure guess. Based on the work we estimate needs to be done.

Its goal - to be close.

Can you rely on it? No - not really, it is an estimate. And if the plan under/over-estimated something, the time will be way off.


do not use utlxpls, use dbms_xplan - much easier.

ops$tkyte%ORA10GR2> delete from plan_table;

0 rows deleted.

ops$tkyte%ORA10GR2> explain plan for &1;
old   1: explain plan for &1
new   1: explain plan for select * from dual

Explained.

ops$tkyte%ORA10GR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
Plan hash value: 272002086

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     2 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DUAL |     1 |     2 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

8 rows selected.

explain plan

Saran, July 22, 2008 - 9:48 am UTC

Hi Tom,

I'm trying to tune the SQL below, but I'm getting "unable to extend temp segment". Since I'm pulling this data from a source (OLTP) to a target (DM), I have to live with the source db's settings. Is there a way to tune this query to avoid the temp segment error? Both of these tables contain millions of rows.

Thanks,
Saran

SELECT /*+ ORDERED USE_NL(B,A) INDEX(ps_line,PS_LINE) */
a.minprmchgat, a.dscrptn, a.svcsblld, a.nrbasfee,
a.slryamt, a.bllntyp, b.ap, b.fy,
COUNT (1) OVER (PARTITION BY a.bu, a.in, a.lsn) inv_mth_cnt,
ROW_NUMBER () OVER (PARTITION BY a.bu, a.in, a.lsn ORDER BY b.fy,
b.ap) inv_mth_rnum,
LEAD (LAST_DAY (TO_DATE ( ''
|| ap
|| '/'
|| fy
|| '',
'MM/YYYY'
)
),
1,
TO_DATE ('12/31/9999', 'mm/dd/yyyy')
) OVER (PARTITION BY a.bu, a.in, b.lsn ORDER BY fy,
ap) row_term_date
FROM (SELECT /*+ index(PS_BI_ENTRY, PSBBI_ENTRY) */
DISTINCT bu, in, lsn,
ap, fy
FROM ps_bi_entry
WHERE fy = 2007
AND bu = 'H'
AND status = 'D') b,
ps_line a
WHERE a.bu = b.bu
AND a.in = b.in
AND a.lsn = b.lsn


----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4260K| 455M| | 6925K|
| 1 | WINDOW SORT | | 4260K| 455M| 1109M| 6925K|
| 2 | WINDOW SORT | | 4260K| 455M| 1109M| 6925K|
| 3 | NESTED LOOPS | | 4260K| 455M| | 6775K|
| 4 | VIEW | | 3299K| 176M| | 177K|
| 5 | SORT UNIQUE | | 3299K| 75M| 252M| 177K|
| 6 | TABLE ACCESS BY INDEX ROWID| PS_BI_ENTRY | 3299K| 75M| | 160K|
| 7 | INDEX SKIP SCAN | PSBBI_ENTRY | 3299K| | | 5378 |
| 8 | TABLE ACCESS BY INDEX ROWID | PS_LINE | 1 | 56 | | 2 |
| 9 | INDEX UNIQUE SCAN | PS_LINE | 1 | | | 1 |
----------------------------------------------------------------------------------------------
Tom Kyte
July 22, 2008 - 11:31 am UTC

I'd start by losing every single hint.

and realize that if you don't want "temp" - you'll be doing things as slow as humanly possible - row by slow row.

@Saran: temp

Duke Ganote, July 22, 2008 - 8:25 pm UTC

Or just split the work (just as if you implemented DIY parallelization) and do the work sequentially.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:29986713177238
Of course, each piece of the 'split work' must be identifiable (so you can restart it if it fails, or if it needs to be split further due to temp limits).

I just had to do that last week for a "one time" data set. The single SQL blew the temp space, so I split the work into 300 pieces; that worked for 298 pieces -- two I had to split further!
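In its simplest form, "split the work" means driving the statement over key ranges so each slice's sort fits in temp; a sketch (table and column names hypothetical):

```sql
-- Process one restartable slice at a time instead of one giant statement:
INSERT /*+ APPEND */ INTO target_table
SELECT ...
  FROM source_table
 WHERE driving_key BETWEEN :lo AND :hi;  -- one slice of the key range
COMMIT;                                  -- record :lo/:hi as "done"
-- If a slice still blows temp, split that slice's range and retry it.
```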

Performance

Thakur Manoj, July 24, 2008 - 8:15 am UTC

Dear Tom
I would like to ask you one question. I have a script which is embedded in a view:

select a.borrid, a.mdlid, a.year, a.user_id, substr(b.prmname, instr(b.prmname, '- ')+2) as prmname, d.attribute as value, to_char(a.value) as Marks, a.prmid as roword
from subjective_prm_trans a, mdl_parameter_tree b, mdl_parameter_tree c, mdl_grading_details d
where a.mdlid = b.mdlid and
a.prmid = b.prmid and
b.mdlid = c.mdlid and
b.parentid = c.prmid and
a.mdlid = d.mdlid and
a.gddid = d.gddid and
trim(lower(c.prmname)) = 'conduct of project account'

This query takes 0.797 seconds to execute. The tables have indexes, but the plan shows full table scans. How can I get the optimizer to use the indexes instead of the full table scans?

below is the explain plan
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=117 Card=52 Bytes=5928)

1 0 HASH JOIN (Cost=117 Card=52 Bytes=5928)
2 1 HASH JOIN (Cost=114 Card=70 Bytes=7210)
3 2 HASH JOIN (Cost=91 Card=2485 Bytes=193830)
4 3 TABLE ACCESS (FULL) OF 'MODEL' (Cost=2 Card=12 Bytes=48)

5 3 HASH JOIN (Cost=88 Card=2485 Bytes=183890)
6 5 TABLE ACCESS (FULL) OF 'COAITEMS' (Cost=11 Card=2011 Bytes=62341)

7 5 HASH JOIN (Cost=72 Card=14947 Bytes=642721)
8 7 TABLE ACCESS (FULL) OF 'COATREE' (Cost=2 Card=1 Bytes=22)

9 7 TABLE ACCESS (FULL) OF 'COARATIOS' (Cost=68 Card=116094 Bytes=2437974)

10 2 TABLE ACCESS (FULL) OF 'STATEMENT' (Cost=13 Card=15911 Bytes=397775)

11 1 TABLE ACCESS (FULL) OF 'AUDIT_METHOD' (Cost=2 Card=3 Bytes=33)


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
932 consistent gets
2869 physical reads
0 redo size
4383378 bytes sent via SQL*Net to client
85573 bytes received via SQL*Net from client
7736 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
116017 rows processed


How can I also increase the performance of the query?

Any Help

Regards
Tom Kyte
July 24, 2008 - 11:02 am UTC

why do you think an index would be useful, it would appear to me that indexes would make this painfully slow.

I always have to laugh when I see this:

trim(lower(c.prmname)) = 'conduct of project account'


why do you need to

a) lower
b) trim

why wouldn't the data be inserted in the format that the application expects?


are your statistics up to date?

and you have views of views here - not very easy for anyone to comment on anything.

Tuning SQL Statement

Thakur Manoj, July 25, 2008 - 5:53 am UTC

Dear Sir

As you said, the indexes would be painfully slow; that is correct. I ran the dbms_stats.gather_schema_stats command for all indexed columns yesterday. I also think that sometimes a full table scan is good, and since the query executes in less than 1 second, I think we should leave it as it is. I have let the CBO do its own job by running this:

exec dbms_stats.gather_schema_stats('USERNAME',METHOD_OPT=>'For all indexed columns size 250',CASCADE=>true);
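For what it's worth, statistics can also be refreshed for a single table rather than the whole schema (a sketch; 'USERNAME' and 'COARATIOS' stand in for your own owner and table names):

```sql
begin
  dbms_stats.gather_table_stats(
    ownname    => 'USERNAME',
    tabname    => 'COARATIOS',
    method_opt => 'for all indexed columns size 250',
    cascade    => true);  -- also gather statistics on the table's indexes
end;
/
```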

Now I have one more query, which is also embedded in a view. This query takes 2.75 seconds to execute.

I am attaching the explain plan and the query

select l.borrid, l.mdlid, l.year, l.user_id, l.prmname, l.value, to_char(m.score) as marks, l.roword
from
(
select a.borrid, b.mdlid, to_number(to_char(d.stmtdt, 'yyyy'), '9999') as year, a.user_id, lpad(c.item_name, length(c.item_name)+((c.tr_lvl-1)*5), ' ') as prmname, to_char(a.value) as value, a.item_id as roword
from coaratios a, model b, coaitems c, statement d, coatree e, audit_method f
where
a.coaid = b.coaid and
a.coaid = c.coaid and
a.item_id = c.item_id and
a.tr_id = c.tr_id and
a.coaid = d.coaid and
a.stmtid = d.stmtid and
a.borrid = d.borrid and
a.user_id = d.user_id and
a.coaid = e.coaid and
a.tr_id = e.tr_id and
d.audit_method_id = f.id and
trim(lower(e.tr_name)) = 'key financial ratios'
) l,
(
select distinct x.borrid, x.mdlid, x.year, x.user_id, x.ratioid, x.score from rat_calc_finratio x
) m
where l.borrid = m.borrid (+) and
l.year = m.year (+) and
l.mdlid = m.mdlid (+) and
l.user_id = m.user_id (+) and
l.roword = m.ratioid (+)
union all
select a.borrid, a.mdlid, a.year, a.user_id, substr(b.prmname, instr(b.prmname, '- ', 0, 1)+2) as prmname, d.attribute as value, TO_CHAR(a.value) as Marks, a.prmid as roword
from subjective_prm_trans a, mdl_parameter_tree b, mdl_parameter_tree c, mdl_grading_details d
where a.mdlid = b.mdlid and
a.prmid = b.prmid and
b.mdlid = c.mdlid and
b.parentid = c.prmid and
a.mdlid = d.mdlid and
a.gddid = d.gddid and
trim(lower(c.prmname)) = 'frs'

Below is the execution plan

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=843 Card=322 Bytes=145624)

1 0 UNION-ALL
2 1 HASH JOIN (OUTER) (Cost=472 Card=52 Bytes=112684)
3 2 VIEW (Cost=116 Card=52 Bytes=108628)
4 3 HASH JOIN (Cost=116 Card=52 Bytes=5356)
5 4 HASH JOIN (Cost=113 Card=52 Bytes=5148)
6 5 HASH JOIN (Cost=110 Card=70 Bytes=6790)
7 6 HASH JOIN (Cost=88 Card=2485 Bytes=183890)
8 7 TABLE ACCESS (FULL) OF 'COAITEMS' (Cost=11 Card=2011 Bytes=62341)

9 7 HASH JOIN (Cost=72 Card=14947 Bytes=642721)
10 9 TABLE ACCESS (FULL) OF 'COATREE' (Cost=2 Card=1 Bytes=22)

11 9 TABLE ACCESS (FULL) OF 'COARATIOS' (Cost=68 Card=116094 Bytes=2437974)

12 6 TABLE ACCESS (FULL) OF 'STATEMENT' (Cost=13 Card=15911 Bytes=365953)

13 5 TABLE ACCESS (FULL) OF 'AUDIT_METHOD' (Cost=2 Card=3 Bytes=6)

14 4 TABLE ACCESS (FULL) OF 'MODEL' (Cost=2 Card=12 Bytes=48)

15 2 VIEW (Cost=325 Card=55820 Bytes=4353960)
16 15 SORT (UNIQUE) (Cost=325 Card=55820 Bytes=1116400)
17 16 TABLE ACCESS (FULL) OF 'RAT_CALC_FINRATIO' (Cost=51 Card=55820 Bytes=1116400)

18 1 HASH JOIN (Cost=371 Card=270 Bytes=32940)
19 18 TABLE ACCESS (FULL) OF 'MDL_GRADING_DETAILS' (Cost=7 Card=2985 Bytes=41790)

20 18 HASH JOIN (Cost=361 Card=3731 Bytes=402948)
21 20 HASH JOIN (Cost=7 Card=12 Bytes=1020)
22 21 TABLE ACCESS (FULL) OF 'MDL_PARAMETER_TREE' (Cost=3 Card=9 Bytes=369)

23 21 TABLE ACCESS (FULL) OF 'MDL_PARAMETER_TREE' (Cost=3 Card=1099 Bytes=48356)

24 20 TABLE ACCESS (FULL) OF 'SUBJECTIVE_PRM_TRANS' (Cost=348 Card=378290 Bytes=8700670)





Statistics
----------------------------------------------------------
0 recursive calls
6 db block gets
6477 consistent gets
11752 physical reads
0 redo size
6312723 bytes sent via SQL*Net to client
105780 bytes received via SQL*Net from client
9573 SQL*Net roundtrips to/from client
0 sorts (memory)
1 sorts (disk)
143577 rows processed

Now how should I increase the performance?

Regards

Performance Issue

Manoj Thakur, July 28, 2008 - 5:28 am UTC

Any update, Tom, on the above?

To Manoj

lalu121212, July 28, 2008 - 6:07 am UTC

Have a try with parallel hint!

Help Needed

Vaibhav, July 30, 2008 - 4:47 am UTC

Greetings!!!

SELECT *
FROM (SELECT *
FROM (SELECT NVL(mt.originating_party_id, 'NULL'),
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 24/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 23/24)
THEN 1 END)) in1,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 23/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 22/24)
THEN 1 END)) in2,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 22/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 21/24)
THEN 1 END)) in3,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 21/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 20/24)
THEN 1 END)) in4,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 20/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 19/24)
THEN 1 END)) in5,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 19/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 18/24)
THEN 1 END)) in6,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 18/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 17/24)
THEN 1 END)) in7,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 17/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 16/24)
THEN 1 END)) in8,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 16/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 15/24)
THEN 1 END)) in9,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 15/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 14/24)
THEN 1 END)) in10,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 14/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 13/24)
THEN 1 END)) in11,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 13/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 12/24)
THEN 1 END)) in12,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 12/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 11/24)
THEN 1 END)) in13,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 11/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 10/24)
THEN 1 END)) in14,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 10/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 9/24)
THEN 1 END)) in15,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 9/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 8/24)
THEN 1 END)) in16,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 8/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 7/24)
THEN 1 END)) in17,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 7/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 6/24)
THEN 1 END)) in18,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 6/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 5/24)
THEN 1 END)) in19,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 5/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 4/24)
THEN 1 END)) in20,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 4/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 3/24)
THEN 1 END)) in21,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 3/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 2/24)
THEN 1 END)) in22,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 2/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 1/24)
THEN 1 END)) in23,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 1/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF'))
THEN 1 END)) in24,
TO_CHAR(SUM (COUNT(1)) OVER (PARTITION BY originating_party_id ORDER BY originating_party_id)) total_count
FROM mhs_errors me, mhs_transaction_status_v mtsv, mhs_transactions mt
WHERE me.transaction_id = mtsv.transaction_id
AND mtsv.transaction_id = mt.transaction_id
AND me.error_timestamp > TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 24/24 AND me.error_timestamp <= TO_TIMESTAMP('2008-07-29 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')
AND current_status IN ('PERSISTED_FOR_EXECUTION')
GROUP BY mt.originating_party_id)
ORDER BY TO_NUMBER(total_count) desc)
WHERE ROWNUM <= NVL(TO_NUMBER(:size), 10)




Hi,

I have this above query that gives the count of errors that occurred over the 24 hours before the specified date; grouping is done on "ORIGINATING_PARTY_ID".

This same procedure is supposed to produce another 2 reports where grouping is done on "INTERACTION_ID" and "ERROR_CODE"

With 8 lakh records, this query took about 10 minutes to execute, i.e. for the entire proc to return results to Java, it would take 10*3 = 30 minutes.

Following is the execution plan; please tell me how I can get the result as quickly as possible.

What indexes do I need to create? My current indexes are getting used ("INDEX FAST FULL SCAN"), but the problem is that the index itself contains loads of data which needs to be scanned... hence it takes a lot of time.

Also, our client will have a dashboard wherein this proc would be called every 4 minutes to keep the data refreshed.

If my proc takes 30 minutes to return data, then how in the world is it going to refresh the data every 4 minutes?

Tom, please help me out with this.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3834 Card=1 Bytes=577)

1 0 COUNT (STOPKEY)
2 1 VIEW (Cost=3834 Card=1 Bytes=577)
3 2 SORT (ORDER BY STOPKEY) (Cost=3834 Card=1 Bytes=577)
4 3 VIEW (Cost=3833 Card=1 Bytes=577)
5 4 WINDOW (SORT) (Cost=3833 Card=1 Bytes=47)
6 5 HASH (GROUP BY) (Cost=3833 Card=1 Bytes=47)
7 6 FILTER
8 7 HASH JOIN (Cost=3830 Card=8194 Bytes=385118)
9 8 HASH JOIN (Cost=3038 Card=8164 Bytes=269412)
10 9 INDEX (FAST FULL SCAN) OF 'A' (INDEX) (Cost=1616 Card=8164 Bytes=89804)
11 9 INDEX (FAST FULL SCAN) OF 'B' (INDEX) (Cost=1394 Card=816372 Bytes=17960184)
12 8 INDEX (FAST FULL SCAN) OF 'A' (INDEX) (Cost=763 Card=816372 Bytes=11429208)


Statistics
----------------------------------------------------------
816421 recursive calls
0 db block gets
3281002 consistent gets
22801 physical reads
0 redo size
953 bytes sent via SQL*Net to client
5400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
816374 sorts (memory)
0 sorts (disk)
1 rows processed


This execution plan remains the same for the remaining two queries as well...

I got a shock when I saw the number of consistent gets, but I guess Oracle can't help it because it has those 8 lakhs of rows present in the given time range.

Please help me, Tom.
Tom Kyte
August 02, 2008 - 5:24 pm UTC

well, why do you convert numbers, into strings, to convert back into numbers???


select * 
   from (select *
           from (select ...., <b>to_char</b>(sum( ... )) total_count)
          order by <b>to_number</b>(total_count) desc)
  where rownum <= nvl(...,10)



in fact, you to_char numbers all over the place, why??

the fastest way to do something is to - NOT DO IT. I'd lose all of the to_char's and to_numbers

that said, it won't be magic in this case - that won't "solve your problem", but it is a big mistake to do what you are doing (convert and convert and convert)

the to_chars - consume more storage
the more storage you need, the faster the aggregates will spill to temp.
the more conversions you do, the more cpu you'll just waste and burn.


this is a little whacky:

TO_CHAR(SUM (COUNT(1)) OVER (PARTITION BY originating_party_id ORDER BY originating_party_id)) total_count

if you partition by X, then sorting by X produces just some random ordering. You then group by this column X.

So, why are you doing this? why not just use COUNT(*)???

ops$tkyte%ORA10GR2> select deptno,
  2         sum(count(1)) over (partition by deptno order by deptno) tc,
  3             count(*) right_way_to_get_a_total_count
  4   from scott.emp
  5   group by deptno
  6  /

    DEPTNO         TC RIGHT_WAY_TO_GET_A_TOTAL_COUNT
---------- ---------- ------------------------------
        10          3                              3
        20          5                              5
        30          6                              6



lakhs is not a universal term. I believe it is 100,000 - so 800,000 records is "nothing"

let's start there, remove all of the conversions - stop doing that.

use the right function, you want a count by originating_party_id - just count them, you don't need or want an analytic at that point.
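Following that advice, one way to shrink the 24 near-identical CASE expressions (a sketch, not Tom's exact suggestion; it assumes whole-second window boundaries as in the original, and the client pivots the 24 rows into columns) is to compute an hour-bucket number once and group by it:

```sql
-- Sketch: one row per party per hour instead of 24 CASE columns.
-- mhs_errors / mhs_transactions come from the thread; :p_error_date is
-- the same 'YYYY-MM-DD HH24:MI:SS.FF' string used in the original query.
select mt.originating_party_id,
       -- hours since the start of the 24-hour window: 1..24,
       -- matching in1..in24 (in1 = oldest hour) in the original
       ceil((cast(me.error_timestamp as date)
             - (cast(to_timestamp(:p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') as date) - 1)) * 24)
         as hour_bucket,
       count(*) as cnt
  from mhs_errors me, mhs_transactions mt
 where me.transaction_id = mt.transaction_id
   and me.error_timestamp >  to_timestamp(:p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 1
   and me.error_timestamp <= to_timestamp(:p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF')
 group by mt.originating_party_id,
          ceil((cast(me.error_timestamp as date)
                - (cast(to_timestamp(:p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') as date) - 1)) * 24)
```

Each party then yields at most 24 rows plus a total that the dashboard can compute by summing them; no TO_CHAR/TO_NUMBER round trips are needed.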

Thanks for the explanation

Vaibhav, August 03, 2008 - 2:02 am UTC

Hi Tom,

thanks for that detailed explanation.

i did not use these to_char functions when i wrote the query initially, but it gave error on the java end "invalid scale size" because it could not recognize the data type of the count value since it is not a column of the table and by default it was taking it as varchar2 and since the query was returning a number value, error was produced. so i used the to_char function and it solved the problem.

also, the java guys told me that it would be efficient on their part if i take all the values as string, so i did to_number conversion for p_batchsize.

also, the current_status comes from the view which takes a join on the master table for the transaction id.

on using trace file, i figured out that those 16 lacs recursive calls were the calls made to the view to find out the current status of that transaction id.

infact we have many queries where we select the current status from the view and thus, needing a nested loop join everytime. so i just suggested my designers if we can add a column to the master table so that we can avoid all these recursive calls and nested loops. overhead would be updating the master table every time a status changes.

also, i am executing the query on 10 lc records 3 times since i have to group by on three different columns. is there any way to scan the records just once and still get the o/p. wont be possible i guess

what do u suggest tom about that addition of new columns
Tom Kyte
August 03, 2008 - 2:13 pm UTC

"U" is not available.

I'll respond briefly in their place, but might not do so in the future.

No idea what a lacs is
No idea what a lc is



... also, the java guys told me that it would be efficient on their part if i take
all the values as string, so i did to_number conversion for p_batchsize.
..

what I think they said was "we can be really lazy if you just use strings, no need to actually know what we are dealing with".




are you trying to say you have a view here and that view is calling plsql over and over?



Apologies

Vaibhav, August 04, 2008 - 2:32 am UTC

Hi Tom,

I apologize for those typos. I assure you they won't happen again.

We have a master table MHS_TRANSACTIONS that keeps some details about the transaction having transaction_id as the PK.

We have another table MHS_TRANSACTION_STATUS that keeps the history about the various states that this transaction goes through.

Now, we have a view MHS_TRANSACTION_STATUS_V which gets the latest (current) status for that transaction id.

In the query where i am counting the errors, i need to select transactions based on their latest (current) status.

So, my flow goes like this,

1) get the transaction id from MHS_TRANSACTIONS
2) join on transaction_id of MHS_ERRORS and check the error_timestamp
3) join on MHS_TRANSACTION_STATUS_V to get the current(latest) status of that transaction
4) group on originating_party_id from MHS_TRANSACTIONS table

Statistics show that i had 8 lakh recursive calls. i further investigated what those recursive calls were made for. using tkprof i realized that these calls were made to execute the query of the view in order to get the latest status.

In order to avoid step 3, I added a new column CURRENT_STATUS to my master table MHS_TRANSACTIONS. As suggested by you, I eliminated the use of analytic functions and, to my surprise, the number of sorts (memory) dropped from 1632746 to 1. Did that actually make such a big difference?

my new statistics are:

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
19888 consistent gets
15436 physical reads
0 redo size
953 bytes sent via SQL*Net to client
5400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed


Just 1 recursive call, and the response time reduced from 9 minutes down to 2 minutes with fewer logical I/Os. So I guess it is a good idea to keep the current status in the master table.

Also, I spoke to the Java developers and asked them to handle the output on their end so that I don't need the TO_CHAR and TO_NUMBER functions.

Is there anything else that can help me here?
Tom Kyte
August 04, 2008 - 1:14 pm UTC

without seeing your actual sql - no comments are really possible. I'm still not sure what you were doing under the covers.

I would have joined (no views) and used analytics or KEEP with dense_rank to retain the most current record.

do analytics sort?
yes, they do.


Vaibhav, August 05, 2008 - 12:09 am UTC

Good Morning,

This is the query that i executed:

SELECT *
FROM (SELECT *
FROM (SELECT NVL(mt.originating_party_id, 'NULL'),
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 24/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 23/24)
THEN 1 END)) in1,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 23/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 22/24)
THEN 1 END)) in2,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 22/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 21/24)
THEN 1 END)) in3,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 21/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 20/24)
THEN 1 END)) in4,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 20/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 19/24)
THEN 1 END)) in5,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 19/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 18/24)
THEN 1 END)) in6,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 18/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 17/24)
THEN 1 END)) in7,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 17/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 16/24)
THEN 1 END)) in8,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 16/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 15/24)
THEN 1 END)) in9,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 15/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 14/24)
THEN 1 END)) in10,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 14/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 13/24)
THEN 1 END)) in11,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 13/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 12/24)
THEN 1 END)) in12,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 12/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 11/24)
THEN 1 END)) in13,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 11/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 10/24)
THEN 1 END)) in14,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 10/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 9/24)
THEN 1 END)) in15,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 9/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 8/24)
THEN 1 END)) in16,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 8/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 7/24)
THEN 1 END)) in17,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 7/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 6/24)
THEN 1 END)) in18,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 6/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 5/24)
THEN 1 END)) in19,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 5/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 4/24)
THEN 1 END)) in20,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 4/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 3/24)
THEN 1 END)) in21,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 3/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 2/24)
THEN 1 END)) in22,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 2/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 1/24)
THEN 1 END)) in23,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 1/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF'))
THEN 1 END)) in24,
TO_CHAR( COUNT(1) ) total_count
FROM mhs_errors me, mhs_transactions mt
WHERE me.transaction_id = mt.transaction_id
AND me.error_timestamp > TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF') - 24/24 AND me.error_timestamp <= TO_TIMESTAMP(p_error_date, 'YYYY-MM-DD HH24:MI:SS.FF')
AND current_status IN (p_status)
GROUP BY mt.originating_party_id)
ORDER BY TO_NUMBER(total_count) desc)
WHERE ROWNUM <= NVL(TO_NUMBER(p_batchsize), 10)


its statistics are as follows:

Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
19888 consistent gets
15436 physical reads
0 redo size
953 bytes sent via SQL*Net to client
5400 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed



In my original query, the from clause was:

FROM mhs_errors me, mhs_transactions mt, mhs_transaction_status_v mtsv

"mhs_transaction_status_v" ---- i avoided accessing this view by adding that new column current_status in my master table mhs_transactions and all the recursive calls disappeared

You suggested using DENSE_RANK to get the current status but even in that case i would need to access the table mhs_transaction_status where we store the history of various transaction states which would result in another join.

so i thought that the query's performance can be improved by avoiding the access to other table to get the current status.

So it would be best to store the current status in my master table itself and avoid all joins.

i may be wrong, but this is what i thought.

is there any other way out?


Tom Kyte
August 05, 2008 - 9:13 am UTC

...
You suggested using DENSE_RANK to get the current status but even in that case
i would need to access the table mhs_transaction_status where we store the
history of various transaction states which would result in another join.
....


so, now you have this column current status to maintain - that is what we call the proverbial "tradeoff"

I'd not want to maintain, validate, ensure correctness of this replicated data.

Vaibhav, August 05, 2008 - 11:38 am UTC

Hi,

"trade off" i understand this, but i have no other option other than adding that current status column in my master table. the end user is waiting for the o/p to take further action on it. he wants the screen to be refreshed every 4 minutes but if the query itself takes 10 minutes, then how do we go about it.

I tried everything possible: all the indexes and composite indexes that might help. I am even getting a fast full scan on that composite index, and I have no sorting happening on disk.

I don't see any other way of improving the response time. All I can do is increase the buffer cache size to reduce the physical reads, which I feel won't be a good thing to do. The other thing I can do is suggest to my designers that we de-normalize the tables to avoid those joins, but then storage would be an issue; also, it's too late to alter the design now.

Tom, what can possibly help me?
Tom Kyte
August 05, 2008 - 1:09 pm UTC

... but i have no other option ...

no, you do have other options, I already told you - outer join, do not use view with function.


Vaibhav, August 06, 2008 - 4:21 am UTC

Hi,

I couldn't quite follow you.

I did not understand how an outer join can be used here and how it will improve my performance.

i have extracted a part of that query and used the view in the from clause. Can you please modify it to make use of an outer join.


SELECT NVL(mt.originating_party_id, 'NULL'),
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date,
'YYYY-MM-DD HH24:MI:SS.FF') - 24/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date,
'YYYY-MM-DD HH24:MI:SS.FF') - 23/24)
THEN 1 END)) in1,
....

TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP(p_error_date,
'YYYY-MM-DD HH24:MI:SS.FF') - 1/24) AND me.error_timestamp <= (TO_TIMESTAMP(p_error_date,
'YYYY-MM-DD HH24:MI:SS.FF'))
THEN 1 END)) in24,
TO_CHAR( COUNT(1) ) total_count
FROM mhs_errors me, mhs_transactions mt, mhs_transaction_status_v mtsv
WHERE me.transaction_id = mt.transaction_id
AND mt.transaction_id = mtsv.transaction_id --(unwanted join)
AND current_status IN (p_status)
GROUP BY mt.originating_party_id

Regards,
Vaibhav

Tom Kyte
August 06, 2008 - 8:52 am UTC

instead of using a function to do in effect a "do it yourself nested loop join", just JOIN to the table - using an outer join (you must need that since you have a 'when no data found' exception handler and return null).

I don't see a function call in your example. That is what we were trying to avoid.

Vaibhav, August 06, 2008 - 11:19 am UTC

Hi,

Looks like I couldn't explain my problem to you. I have no function call in my query.

"AND current_status IN (p_status)"

This current_status comes from the view, so if i have 10 lakh transaction id's, this view would be called in 10 lakh times to get the corresponding current status for that transaction id.

And this is what i want to avoid. Hence, i suggested the addition of this column current_status in the main table itself and avoid all these calls to the view.

As you said, it is definitely going to be an overhead maintaining the validity of this replicated column but it is serving my purpose well.

Tom, is there a way i scan just once and still get the o/p for three different cursors with different group by columns. I don't think it would be possible.

Any thoughts on this?

I hope i explained well this time :)
Tom Kyte
August 06, 2008 - 12:39 pm UTC

.... So, my flow goes like this,

1) get the transaction id from MHS_TRANSACTIONS
2) join on transaction_id of MHS_ERRORS and check the error_timestamp
3) join on MHS_TRANSACTION_STATUS_V to get the current(latest) status of that transaction
4) group on originating_party_id from MHS_TRANSACTIONS table

Statistics show that i had 8 lakh recursive calls. i further investigated what those recursive
calls were made for. using tkprof i realized that these calls were made to execute the query of the
view in order to get the latest status.
.....


I don't understand how you can have a "flow" and "execute the query of the view" over and over if you didn't have a function.


You would need to look into the view - and maybe not use the view (the view answers someone else's question, not yours). If you want a good plan, skip the view and just query what you need from the BASE TABLES.
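One way to "go after the base tables" here, sketched for illustration (it assumes, per the later posts, that mhs_transaction_status has transaction_id, status_type and recorded_time columns): compute the latest status per transaction in a single pass with an analytic, then join - no view, no per-row lookup.

```
SELECT mt.originating_party_id,
       s.status_type AS current_status,
       COUNT(*)      AS total_count
  FROM mhs_transactions mt,
       (SELECT transaction_id, status_type,
               ROW_NUMBER() OVER (PARTITION BY transaction_id
                                  ORDER BY recorded_time DESC) rn
          FROM mhs_transaction_status) s   -- rn = 1 is the latest status
 WHERE mt.transaction_id = s.transaction_id
   AND s.rn = 1
 GROUP BY mt.originating_party_id, s.status_type;
```

This scans mhs_transaction_status once, rather than once per parent row.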

Vaibhav, August 07, 2008 - 12:32 am UTC

Hi,

Yes, now I have no views in my query; I am getting all my data from the base tables themselves.

The question is how do I optimize my query to get the output in the least time? How do I avoid scanning the entire table thrice to get the output for three different ref cursors?

Basically, my problem is that I want the error count for all three ref cursors, but grouping is done on different columns:

1st ref cursor: GROUP BY interaction_id (column of table 1)
2nd ref cursor: GROUP BY orig_party_id (column of table 1)
3rd ref cursor: GROUP BY error_code (column of table 2)

Is it possible to get the entire record set of my interest into an array, and then process this array to produce the three different GROUP BY results?

This might help me in eliminating the other two database calls.

Not sure how this is going to work or whether it is possible. Just a thought that popped up in my mind.

What do you say, Tom?

And yes, I want to thank you for following up on my questions in no time. Thanks a million for this.

Tom Kyte
August 07, 2008 - 1:20 pm UTC

... Yes, now, i have no views in my query, ...

sigh, I know.

you removed it by replicating data.

my advice is, was and will be:

go back to the original query, do not use the view, write it directly against the base tables, do not replicate data from one table to another.


I have no idea where these ref cursors suddenly appeared from or why they are relevant.

Vaibhav, August 07, 2008 - 2:32 pm UTC

PROCEDURE sps_msg_monitor(p_error_date IN VARCHAR2,
p_status IN mhs_transaction_status.status_type%TYPE,
o_cur_error_code OUT SYS_REFCURSOR,
o_cur_orig_party_id OUT SYS_REFCURSOR,
o_cur_interaction_id OUT SYS_REFCURSOR)
AS

BEGIN

OPEN o_cur_error_code FOR
SELECT
...
...
-- the big query goes here to count the errors
...
...
GROUP BY error_code;

OPEN o_cur_orig_party_id FOR
SELECT
...
...
-- the big query goes here to count the errors
...
...
GROUP BY originating_party_id;

OPEN o_cur_interaction_id FOR
SELECT
...
...
-- the big query goes here to count the errors
...
...
GROUP BY interaction_id;

END sps_msg_monitor;



1)
This is the actual procedure that would be hit.

I have to return 3 ref cursors having different GROUP BY columns.

So, this is what I meant when I said that I will have to scan my entire record set THRICE.

How do I go about this kind of requirement? How do I avoid scanning 3 times?

2)
Also, you suggested using the base table to get the current status:
" AND current_status = (SELECT status_type
FROM (SELECT status_type
FROM mhs_transaction_status
ORDER BY recorded_time DESC)
WHERE ROWNUM = 1) "

This is what you meant, I suppose: getting the status from the table itself and not replicating it from anywhere. But even in this case, the outer query will be executed once for each row in the parent table, and I will have to join on transaction_id to make sure that I am getting the latest status for that particular transaction id only.

I don't want this to happen, because this is as good as using the view - this is exactly what my view's query is.

Also, I don't actually mean replicating the column; instead, I would use an update statement on my master table every time a new status is entered in mhs_transaction_status:


INSERT INTO MHS_TRANSACTION_STATUS VALUES(p_transaction_id, p_status_type, systimestamp)
followed by
UPDATE MHS_TRANSACTIONS SET current_status = p_status_type WHERE transaction_id = p_transaction_id

So, there is no way that the data is going to be corrupt at any point in time; the overhead is just the additional UPDATE statement.

Now, what do you suggest about the addition of this column to the master table?

I am definitely avoiding a join if I do this.

Your suggestions, Tom?
Tom Kyte
August 07, 2008 - 3:14 pm UTC

  1  select case when grouping(deptno) = 0 then 'by deptno'
  2              when grouping(job) = 0 then 'by job'
  3                     when grouping(mgr) = 0 then 'by mgr'
  4             end what,
  5             mgr, deptno, job, sum(sal)
  6*   from emp group by grouping sets((mgr),(deptno),(job))
scott%ORA10GR2> /

WHAT                                  MGR     DEPTNO JOB         SUM(SAL)
------------------------------ ---------- ---------- --------- ----------
by mgr                               7839                            8275
by mgr                                                               5000
by mgr                               7782                            1300
by mgr                               7698                            6550
by mgr                               7902                             800
by mgr                               7566                            3800
by mgr                               7788                            1100
by job                                               CLERK           4150
by job                                               SALESMAN        5600
by job                                               PRESIDENT       5000
by job                                               MANAGER         8275
by job                                               ANALYST         3800
by deptno                                         30                 9400
by deptno                                         20                 8675
by deptno                                         10                 8750

15 rows selected.




grouping sets can definitely create your aggregates at the differing levels in a single query - grouping(column_name) tells you which aggregate you have...

Vaibhav, August 08, 2008 - 3:08 pm UTC

hi,

that was awesome... I have heard about all these - CUBE, ROLLUP, GROUPING SETS - but I just didn't know how to use them.

I need to improve my understanding of these features.

I am facing the following issue when using grouping sets:

SELECT case when grouping(me.error_code) = 0 then me.error_code
when grouping(mt.interaction_id) = 0 then mt.interaction_id
when grouping(mt.originating_party_id) = 0 then mt.originating_party_id
end what,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-08 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 24/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-08 10:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 23/24)
THEN 1 END)) in1,
TO_CHAR(COUNT(1)) total_count
FROM mhs_errors me, mhs_transaction_status_v mtsv, mhs_transactions mt
WHERE me.transaction_id = mtsv.transaction_id
AND mt.transaction_id = mt.transaction_id
GROUP BY GROUPING SETS( (me.error_code), (mt.interaction_id), (mt.originating_party_id) )

I get "invalid SQL statement".

When I remove the TO_TIMESTAMP function,

SELECT case when grouping(me.error_code) = 0 then me.error_code
when grouping(mt.interaction_id) = 0 then mt.interaction_id
when grouping(mt.originating_party_id) = 0 then mt.originating_party_id
end what,
COUNT (CASE when me.error_timestamp BETWEEN sysdate - 1 and sysdate THEN 1 END) in1,
TO_CHAR(COUNT(1)) total_count
FROM mhs_errors me, mhs_transaction_status_v mtsv, mhs_transactions mt
WHERE me.transaction_id = mtsv.transaction_id
AND mt.transaction_id = mt.transaction_id
GROUP BY GROUPING SETS( (me.error_code), (mt.interaction_id), (mt.originating_party_id) )

it works fine.

Mine is Oracle 10.2.0.3.

Is this a known bug?
Tom Kyte
August 12, 2008 - 4:22 am UTC

more likely your sql is bad, but since I cannot run it - no table creates or anything - and I'm not a sql parser at heart, I cannot say anything else.

You likely have a syntax error.... and if we could run your sql, we'd help you pinpoint it.

Vaibhav, August 11, 2008 - 2:45 am UTC

Hi,

I found a workaround for the above issue.

I used a function that returns the timestamp when I pass in the VARCHAR2 parameter. It worked fine.

Now I have two issues:

1) When I had 3 different cursors, I could use ORDER BY "total count" to display the top errors, but now that everything comes out of the same query, an ORDER BY sorts the entire result set and not just the error code or interaction id portion.

2) Also, with 3 ref cursors, I could use a batch size to restrict the number of records seen on the screen, but it seems that won't work with this single query.

what i want is:

if my batchsize is 3, then

WHAT count1 count2 total
err2 500 300 800
err5 200 100 300
err1 100 50 150

op4 1000 500 1500
op1 200 1000 1200
op3 100 500 600

int1 500 300 800
int5 400 200 600
int2 300 100 400

Will I need to use analytics here? I want to avoid them if possible. I know we can use RANK() and just select the top 3 for each of them, but is it possible in this context?

Please help, Tom.

Vaibhav, August 12, 2008 - 7:57 am UTC

Hi,

It worked with TO_TIMESTAMP itself. I still couldn't figure out why it did not work initially.

Anyways, here are the DB scripts

CREATE TABLE mhs_transactions
(
transaction_id VARCHAR2(36),
originating_party_id VARCHAR2(50),
destination_party_id VARCHAR2(50),
interaction_id VARCHAR2(128),
service_id VARCHAR2(50)
);

ALTER TABLE mhs_transactions
ADD CONSTRAINT pk_mhs_transactions PRIMARY KEY(transaction_id);

CREATE TABLE mhs_errors
(
error_id NUMBER(10, 0),
transaction_id VARCHAR2(36) NOT NULL,
error_code VARCHAR2(50) NOT NULL,
error_timestamp TIMESTAMP
);

ALTER TABLE mhs_errors
ADD CONSTRAINT fk_mhs_errors FOREIGN KEY(transaction_id) REFERENCES mhs_transactions(transaction_id);


Insert into MHS_TRANSACTIONS
Values ('tran1', 'op1', 'dp1', 'i1', 's1');
Insert into MHS_TRANSACTIONS
Values ('tran2', 'op2', 'dp2', 'i2', 's2');
Insert into MHS_TRANSACTIONS
Values ('tran3', 'op3', 'dp2', 'i2', 's2');
Insert into MHS_ERRORS
Values (1, 'tran1', 'err1', TO_TIMESTAMP('8/12/2008 1:36:28.000000 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into MHS_ERRORS
Values (2, 'tran1', 'err1', TO_TIMESTAMP('8/12/2008 3:36:28.000000 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into MHS_ERRORS
Values (3, 'tran1', 'err2', TO_TIMESTAMP('8/12/2008 2:36:28.000000 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into MHS_ERRORS
Values (4, 'tran2', 'err1', TO_TIMESTAMP('8/12/2008 1:36:28.000000 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into MHS_ERRORS
Values (5, 'tran2', 'err3', TO_TIMESTAMP('8/12/2008 12:36:28.000000 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
Insert into MHS_ERRORS
Values (6, 'tran3', 'err3', TO_TIMESTAMP('8/12/2008 12:40:28.000000 PM','fmMMfm/fmDDfm/YYYY fmHH12fm:MI:SS.FF AM'));
COMMIT;


I am not posting the entire query, just a part of it.

SELECT CASE WHEN grouping(me.error_code) = 0 THEN me.error_code
WHEN grouping(mt.interaction_id) = 0 THEN mt.interaction_id
WHEN grouping(mt.originating_party_id) = 0 THEN mt.originating_party_id
end what,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 9/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 8/24)
THEN 1 END)) in16,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 8/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 7/24)
THEN 1 END)) in17,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 7/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 6/24)
THEN 1 END)) in18,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 6/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 5/24)
THEN 1 END)) in19,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 5/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 4/24)
THEN 1 END)) in20,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 4/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 3/24)
THEN 1 END)) in21,
TO_CHAR(COUNT (CASE when me.error_timestamp > (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 3/24) AND me.error_timestamp <= (TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 2/24)
THEN 1 END)) in22,
TO_CHAR(COUNT(1)) total_count
FROM mhs_errors me, mhs_transactions mt
WHERE me.transaction_id = mt.transaction_id
AND me.error_timestamp > TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF') - 24/24 AND me.error_timestamp <= TO_TIMESTAMP('2008-08-12 18:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')
GROUP BY GROUPING SETS( (me.error_code), (mt.interaction_id), (mt.originating_party_id) );


WHAT IN18 IN19 IN20 IN21 IN22 TOTAL_COUNT
err3 0 2 0 0 0 2
err1 0 0 2 0 1 3
err2 0 0 0 1 0 1

i1 0 0 1 1 1 3
i2 0 2 1 0 0 3

op2 0 1 1 0 0 2
op1 0 0 1 1 1 3
op3 0 1 0 0 0 1



Here, how do I get the output sorted on total_count within each of error_code, interaction_id, and originating_party_id?

What I want is:

WHAT IN18 IN19 IN20 IN21 IN22 TOTAL_COUNT
err1 0 0 2 0 1 3
err3 0 2 0 0 0 2
err2 0 0 0 1 0 1

i1 0 0 1 1 1 3
i2 0 2 1 0 0 3

op1 0 0 1 1 1 3
op2 0 1 1 0 0 2
op3 0 1 0 0 0 1


Also, is it possible to use a batch size for each of them? For example, if the batch size is 2, then I see the 2 topmost errors, interactions and orig party ids.

Do we have a solution here?
Tom Kyte
August 13, 2008 - 4:31 am UTC

your sql does not compile... however

... total_count for each error_code, interaction_id, originating_party_id ...

order by error_code, interaction_id, originating_party_id, total_count desc


you have:

CASE WHEN grouping(me.error_code) = 0 THEN me.error_code
WHEN grouping(mt.interaction_id) = 0 THEN mt.interaction_id
WHEN grouping(mt.originating_party_id) = 0 THEN mt.originating_party_id
end what,

add

CASE WHEN grouping(me.error_code) = 0 THEN 'error code' WHEN grouping(mt.interaction_id) = 0 THEN 'interaction id'
WHEN grouping(mt.originating_party_id) = 0 THEN 'orig party id'
end what2,


and order by what2, what, total_count desc
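Put together, the suggestion looks something like this - a sketch against the tables posted above (time-bucket columns omitted for brevity); `what2` labels which grouping set each row came from, so the sort restarts within each group:

```
SELECT CASE WHEN grouping(me.error_code) = 0          THEN 'error code'
            WHEN grouping(mt.interaction_id) = 0      THEN 'interaction id'
            ELSE 'orig party id'
       END what2,
       CASE WHEN grouping(me.error_code) = 0          THEN me.error_code
            WHEN grouping(mt.interaction_id) = 0      THEN mt.interaction_id
            ELSE mt.originating_party_id
       END what,
       COUNT(*) total_count
  FROM mhs_errors me, mhs_transactions mt
 WHERE me.transaction_id = mt.transaction_id
 GROUP BY GROUPING SETS ((me.error_code), (mt.interaction_id),
                         (mt.originating_party_id))
 ORDER BY what2, total_count DESC;
```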


Vaibhav, August 13, 2008 - 5:00 am UTC

Hi,

I made sure the scripts work before posting them here. I copied them from here itself, and they still work at my end.

Anyway, I guess you got me wrong. What I want is:

1) the o/p sorted on TOTAL_COUNT for error_code

2) the o/p sorted on TOTAL_COUNT for interaction_id

3) the o/p sorted on TOTAL_COUNT for originating_party_id

you suggested using,
order by what2, what, total_count desc

this would definitely not work, as it is going to sort on the error_code itself and not on the total count.

I want the following output:

WHAT IN18 IN19 IN20 IN21 IN22 TOTAL_COUNT
err1 0 0 2 0 1 3
err3 0 2 0 0 0 2
err2 0 0 0 1 0 1

i1 0 0 1 1 1 3
i2 0 2 1 0 0 3

op1 0 0 1 1 1 3
op2 0 1 1 0 0 2
op3 0 1 0 0 0 1

The error_codes are not sorted here; instead, ordering is required on their count, i.e. total_count. I've got nothing to do with the error_code, int_id, orig_party_id... I am interested in their error counts.

Thanks a zillion

vaibhav, August 14, 2008 - 12:39 am UTC

Hi,

I found the solution to the above problem.

I simply used the RANK() function and got my output:

Select a.*, rank() over(partition by what2 order by total_count desc) rank
FROM (the above query)a;

Thanks a zillion for all the help you provided.

Appreciate it !!!
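For completeness, the earlier batch-size question falls out of the same ranked query - a sketch, with `:batchsize` a hypothetical bind variable and the comment standing in for the full grouping-sets query posted above:

```
SELECT *
  FROM (SELECT a.*,
               RANK() OVER (PARTITION BY what2
                            ORDER BY total_count DESC) rnk
          FROM ( /* the grouping-sets query from above */ ) a)
 WHERE rnk <= :batchsize         -- top-N rows per grouping set
 ORDER BY what2, rnk;
```

Note that RANK() can return ties (two rows with the same total_count get the same rank); ROW_NUMBER() would instead return exactly :batchsize rows per group.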

Explain plan cost

Jay, August 21, 2008 - 5:13 am UTC

Dear Tom,
I have an explain plan which is
Execution Plan
----------------------------------------------------------
Plan hash value: 2144297467

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                    | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                         |   199 | 35820 |       |  1570   (1)| 00:00:19 |
|   1 |  SORT ORDER BY                        |                         |   199 | 35820 |       |  1570   (1)| 00:00:19 |
|*  2 |   FILTER                              |                         |       |       |       |            |          |
|   3 |    NESTED LOOPS OUTER                 |                         |   199 | 35820 |       |  1569   (1)| 00:00:19 |
|   4 |     NESTED LOOPS                      |                         |   199 | 33034 |       |  1523   (1)| 00:00:19 |
|   5 |      NESTED LOOPS OUTER               |                         |    36 |  5112 |       |  1520   (1)| 00:00:19 |
|*  6 |       HASH JOIN                       |                         |    36 |  4572 |       |  1448   (1)| 00:00:18 |
|*  7 |        TABLE ACCESS BY INDEX ROWID    | PRGMEMACCMST            |  3583 |   111K|       |  1284   (1)| 00:00:16 |
|*  8 |         INDEX RANGE SCAN              | PRGMEMACCMST_ENRDAT_IDX |  3584 |       |       |    12   (0)| 00:00:01 |
|   9 |        VIEW                           | TCC_VIEW2               |  6193 |   574K|       |   164   (0)| 00:00:02 |
|  10 |         UNION-ALL PARTITION           |                         |       |       |       |           |          |
|* 11 |          HASH JOIN                    |                         |   619K|    38M|    28M|  9389   (1)| 00:01:53 |
|* 12 |           INDEX FAST FULL SCAN        | CUSCNTINF_PK            |  1238K|    14M|       |  1082   (2)| 00:00:13 |
|* 13 |           HASH JOIN                   |                         |   621K|    32M|    19M|  4942   (2)| 00:01:00 |
|* 14 |            TABLE ACCESS FULL          | MEMMST                  |   621K|    12M|       |  1466   (2)| 00:00:18 |
|* 15 |            TABLE ACCESS FULL          | CUSINDINF               |   619K|    19M|       |  1176   (2)| 00:00:15 |
|  16 |          NESTED LOOPS                 |                         |     1 |    61 |       |     3   (0)| 00:00:01 |
|  17 |           NESTED LOOPS                |                         |     1 |    49 |       |     2   (0)| 00:00:01 |
|  18 |            TABLE ACCESS BY INDEX ROWID| CUSCRPINF               |     1 |    28 |       |     0   (0)| 00:00:01 |
|* 19 |             INDEX RANGE SCAN          | CUSCRPINF_PK            |     1 |       |       |     0   (0)| 00:00:01 |
|* 20 |            TABLE ACCESS BY INDEX ROWID| MEMMST                  |     1 |    21 |       |     2   (0)| 00:00:01 |
|* 21 |             INDEX RANGE SCAN          | MEMMST_IDX              |     1 |       |       |     2   (0)| 00:00:01 |
|* 22 |           INDEX UNIQUE SCAN           | CUSCNTINF_PK            |     1 |    12 |       |     1   (0)| 00:00:01 |
|* 23 |       VIEW PUSHED PREDICATE           | EAIMEMCNTINF            |     1 |    15 |       |     2   (0)| 00:00:01 |
|  24 |        NESTED LOOPS                   |                         |     2 |    62 |       |     5   (0)| 00:00:01 |
|  25 |         TABLE ACCESS BY INDEX ROWID   | MEMMST                  |     1 |    17 |       |     3   (0)| 00:00:01 |
|* 26 |          INDEX UNIQUE SCAN            | MEMMST_PK               |     1 |       |       |     2   (0)| 00:00:01 |
|* 27 |         INDEX RANGE SCAN              | TCC_CUSCNTINF           |     2 |    28 |       |     2   (0)| 00:00:01 |
|* 28 |      TABLE ACCESS BY INDEX ROWID      | PRGPNTTYPMST            |     6 |   144 |       |     3   (0)| 00:00:01 |
|* 29 |       INDEX RANGE SCAN                | PRGPNTTYPMST_PK         |     6 |       |       |     1   (0)| 00:00:01 |
|  30 |     VIEW                              |                         |     1 |    14 |       |     0   (0)| 00:00:01 |
|* 31 |      TABLE ACCESS FULL                | PRGMEMACCDTL            |     1 |    27 |       |  1509   (2)| 00:00:19 |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(SYSDATE@!-60<=SYSDATE@!)
   6 - access("A"."CMPCOD"="D"."CMPCOD" AND "A"."MEMSHPNUM"="D"."MEMSHPNUM")
   7 - filter("D"."PRGCOD"='SMILE' AND "D"."CMPCOD"='UL')
   8 - access("D"."ENRDAT">=SYSDATE@!-60 AND "D"."ENRDAT"<=SYSDATE@!)
  11 - access("MST"."CMPCOD"="CNTINF"."CMPCOD" AND "MST"."CUSNUM"="CNTINF"."CUSNUM" AND
              "IND"."PRFADR"="CNTINF"."ADRTYP")
  12 - filter("CNTINF"."CMPCOD"='UL')
  13 - access("MST"."CMPCOD"="IND"."CMPCOD" AND "MST"."CUSNUM"="IND"."CUSNUM")
  14 - filter("MST"."CMPCOD"='UL' AND "MST"."MEMSHPTYP"='I')
  15 - filter("IND"."CMPCOD"='UL')
  19 - access("CRP"."CMPCOD"='UL')
  20 - filter("MST"."MEMSHPTYP"='C')
  21 - access("MST"."CUSNUM"="CRP"."CUSNUM" AND "MST"."CMPCOD"='UL')
       filter("MST"."CMPCOD"='UL')
  22 - access("CNTINF"."CMPCOD"='UL' AND "MST"."CUSNUM"="CNTINF"."CUSNUM" AND "CNTINF"."ADRTYP"='B')
  23 - filter("A"."PRFADR"="F"."ADRTYP"(+))
  26 - access("MST"."CMPCOD"='UL' AND "MST"."MEMSHPNUM"="A"."MEMSHPNUM")
       filter("MST"."CMPCOD"="A"."CMPCOD")
  27 - access("MST"."CUSNUM"="CNTINF"."CUSNUM" AND "CNTINF"."CMPCOD"='UL')
  28 - filter("G"."DPNFLG"='0')
  29 - access("G"."CMPCOD"='UL' AND "G"."PRGCOD"='SMILE')
  31 - filter("A"."MEMSHPNUM"="E"."MEMSHPNUM" AND "G"."PNTTYP"="E"."PNTTYP" AND "A"."CMPCOD"="G"."CMPCOD" AND
              "G"."PRGCOD"="E"."PRGCOD")



How is it that the cost for step 0 (1570) is less than that for id 11 (9389)? I assume the total cost is the one shown in the first line; shouldn't it be more than that of any of the inner steps?

A reader, September 25, 2008 - 11:24 pm UTC


AUTOTRACE

AK, December 01, 2008 - 2:11 pm UTC

I am using
SET AUTOTRACE TRACEONLY;

But it seems the query is getting executed. Isn't TRACEONLY supposed to show only the estimated query plan WITHOUT executing the query?
Tom Kyte
December 02, 2008 - 5:47 am UTC

if you use

SQL> set autotrace traceonly EXPLAIN

that would be true. If you use just "traceonly", that shows:

a) the plan
b) the STATISTICS (IOs and the like) incurred to run the query


if you just want the plan of a SELECT (modifications are ALWAYS executed) - use autotrace traceonly explain
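For reference, the relevant SQL*Plus settings side by side:

```
set autotrace traceonly explain  -- plan only; a SELECT is NOT executed
set autotrace traceonly          -- executes the query; shows plan + statistics,
                                 -- suppresses the result rows
set autotrace on                 -- executes the query; shows rows, plan, statistics
set autotrace off                -- back to normal
```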

recursive calls

A reader, December 23, 2008 - 6:49 pm UTC

Hi Tom
How can we tune recursive calls? I have one query which is doing excessive recursive calls in prod but not in qa. The comparison is like 0 vs 400+.

Where should I start looking? Any documentation would be an excellent help for me! Is my data dictionary bad for the tables in the query? How can I check that?

Thanks as always !

Shamik
Tom Kyte
December 29, 2008 - 3:24 pm UTC

without an example it is sort of "hard to say". We'd need to know what is causing the recursive calls first...

for example, maybe this is a big hash join - well, on production it is a big hash join but on test it is tiny. So on test - it is done in memory. On production, it needs to allocate temporary space.


Maybe in test you insert 100 rows using a sequence
and in production you insert 100000...

ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2> create sequence s cache 100;

Sequence created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable x number
ops$tkyte%ORA10GR2> exec :x := 1

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t select s.nextval from dual connect by level <= :x;

1 row created.

ops$tkyte%ORA10GR2> set autotrace on statistics
ops$tkyte%ORA10GR2> exec :x := 100

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t select s.nextval from dual connect by level <= :x;

100 rows created.


Statistics
----------------------------------------------------------
         14  recursive calls
          6  db block gets
          5  consistent gets
          0  physical reads
        696  redo size
        909  bytes sent via SQL*Net to client
       1008  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
        100  rows processed

ops$tkyte%ORA10GR2> exec :x := 100000

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> insert into t select s.nextval from dual connect by level <= :x;

100000 rows created.


Statistics
----------------------------------------------------------
      14688  recursive calls
       5734  db block gets
       1938  consistent gets
          2  physical reads
    2278884  redo size
        909  bytes sent via SQL*Net to client
       1008  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     100000  rows processed

ops$tkyte%ORA10GR2> set autotrace off


and there the 'fix' would be to set a better cache value

ops$tkyte%ORA10GR2> alter sequence s cache 100000;

Sequence altered.

ops$tkyte%ORA10GR2> insert into t select s.nextval from dual connect by level <= :x;

100000 rows created.


Statistics
----------------------------------------------------------
        109  recursive calls
       1828  db block gets
        367  consistent gets
          0  physical reads
    1571744  redo size
        911  bytes sent via SQL*Net to client
       1008  bytes received via SQL*Net from client
          4  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
     100000  rows processed



If it is not obvious to you "what the cause is", turn on sql tracing and look at the recursive sql that is executed (400 is neither excessive nor absurdly large - and probably is not the cause of your performance issue - but a sql trace would show you that)
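A minimal way to capture that recursive SQL with a session-level trace (a sketch; the trace file name below is a made-up example, and the actual file lands in your user_dump_dest directory):

```sql
SQL> alter session set timed_statistics = true;
SQL> alter session set sql_trace = true;

SQL> -- run the query that shows the excessive recursive calls here

SQL> alter session set sql_trace = false;

-- then format the trace file on the server; sys=yes keeps the
-- recursive SQL in the tkprof report:
--   $ tkprof ora_12345.trc report.txt sys=yes
```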



Please explain

Chinni, January 05, 2009 - 10:28 am UTC

Dear Sir,
I am an ardent fan of your site. Thank you very much for sharing your ideas.
I am trying to understand the plan table output of a query, but am stuck at one point.
Could you kindly explain why there are MERGE JOIN CARTESIAN steps (at two places, even though I have joins on the tables)
and a BUFFER SORT involved in the plan? I am sure that I am missing some basic point.
This is the first time I am posting a query. Please don't mind if the formatting is not proper.

SELECT STATEMENT Optimizer Mode=CHOOSE 1 M 2932
MERGE JOIN CARTESIAN 1 M 762 M 2932 :Q13781005 P->S QC (RANDOM)
HASH JOIN OUTER 539 412 K 2613 :Q13781005 PCWP
NESTED LOOPS OUTER 539 404 K 2609 :Q13781005 PCWP
NESTED LOOPS 539 373 K 2474 :Q13781005 PCWP
NESTED LOOPS OUTER 539 314 K 2204 :Q13781005 PCWP
HASH JOIN 539 291 K 1934 :Q13781005 PCWP
REMOTE 518 18 K 43 :Q13781000 S->P BROADCAST
HASH JOIN 631 317 K 1891 :Q13781005 PCWP
MERGE JOIN CARTESIAN 87 K 3 M 1242 :Q13781001 S->P HASH
TABLE ACCESS FULL CRIS.LOC 402 13 K 36
BUFFER SORT 218 1 K 1206
TABLE ACCESS FULL CRIS.TRT 218 1 K 3
TABLE ACCESS FULL CRIS.TRA 141 K 63 M 649 :Q13781004 P->P HASH 35 35
TABLE ACCESS BY INDEX ROWID CRIS.COP 1 44 2 :Q13781005 PCWP
INDEX UNIQUE SCAN CRIS.PK_COP 1 1 :Q13781005 PCWP
TABLE ACCESS BY INDEX ROWID CRIS.SEC 1 113 2 :Q13781005 PCWP
INDEX UNIQUE SCAN CRIS.SEC_PK 1 1 :Q13781005 PCWP
TABLE ACCESS BY LOCAL INDEX ROWID CRIS.TEV 1 58 1 :Q13781005 PCWP 35 35
INDEX UNIQUE SCAN CRIS.TEV_PK 1 :Q13781005 PCWP 35 35
TABLE ACCESS FULL CRIS.MATCHING_INFO 606 8 K 4 :Q13781002 S->P BROADCAST
BUFFER SORT 1 K 18 K 2928 :Q13781005 PCWP
TABLE ACCESS FULL CRIS.CAL 1 K 18 K 49 :Q13781003 S->P BROADCAST


This is the where condition of the query.
......
WHERE dcsw_user.tra.tra_trt_trans_type = dcsw_user.trt.trt_trans_type
AND dcsw_user.tra.tra_sit_id = dcsw_user.sec.sec_sit_id
AND dcsw_user.tra.tra_sec_id = dcsw_user.sec.sec_id
AND dcsw_user.tra.tra_sit_id = dcsw_user.acc.acc_sit_id
AND dcsw_user.tra.tra_acc_id = dcsw_user.acc.acc_id
AND dcsw_user.tra.tra_transact_type = dcsw_user.tev.tev_tra_transact_type(+)
AND dcsw_user.tra.tra_trans_no = dcsw_user.tev.tev_tra_trans_no(+)
AND dcsw_user.tra.tra_sit_id = dcsw_user.tev.tev_sit_id(+)
AND dcsw_user.max_trans_no (dcsw_user.tra.tra_sit_id,
dcsw_user.tra.tra_transact_type,
dcsw_user.tra.tra_trans_no
) = dcsw_user.tev.tev_no(+)
AND dcsw_user.tra.tra_sit_id = dcsw_user.cop.cop_sit_id(+)
AND dcsw_user.tra.tra_settl_agent = dcsw_user.cop.cop_internal_id(+)
AND dcsw_user.cop.cop_type(+) = 'S'
AND dcsw_user.tra.tra_sit_id = dcsw_user.cal.cal_sit_id
AND dcsw_user.tra.tra_sit_id = dcsw_user.loc.loc_sit_id
AND ( ( dcsw_user.tra.tra_loc_no IS NOT NULL
AND dcsw_user.tra.tra_loc_no = dcsw_user.loc.loc_no
)
OR ( dcsw_user.tra.tra_loc_no IS NULL
AND dcsw_user.tra.tra_loc_no_to = dcsw_user.loc.loc_no
)
)
AND dcsw_user.tra.tra_transact_type = dcsw_user.tev.tev_tra_transact_type(+)
AND dcsw_user.tra.tra_trans_no = dcsw_user.tev.tev_tra_trans_no(+)
AND dcsw_user.tra.tra_sit_id = dcsw_user.tev.tev_sit_id(+)
AND dcsw_user.max_trans_no (dcsw_user.tra.tra_sit_id,
dcsw_user.tra.tra_transact_type,
dcsw_user.tra.tra_trans_no
) = dcsw_user.tev.tev_no(+)
AND dcsw_user.tev.tev_sit_id = dcsw_user.matching_info.mai_sit_id(+)
AND dcsw_user.tev.tev_reason_desc = dcsw_user.matching_info.mai_code_12(+)
AND (dcsw_user.tra.tra_sit_id = 'TE')
AND dcsw_user.tra.tra_sit_id = 'TE'

Thank you very much for your time.


DBMS_XPLAN.DISPLAY_CURSOR

Raja, January 12, 2009 - 9:20 am UTC

Hi Tom,

Thanks for your explanation.

We came across the following query:

select * from table(dbms_xplan.display_cursor('801pqde6ywsy5',0,'ADVANCED')) ;

It's giving us lots of good information. How can we interpret this information?

For example, what does the following section mean in the output:

Query Block Name / Object Alias (identified by operation id)

Thanks for your time.

Thanks

Execution Plan of Dictionary View Query

Siu-Ki Chan, April 02, 2009 - 11:17 am UTC

Hi Tom,

This is one of the sqls issued by 10g OEM Change Manager:

SELECT a.column_name, b.data_type, a.column_position
FROM sys.dba_ind_columns a, sys.dba_tab_columns b
WHERE a.table_owner = b.owner
and a.table_name = b.table_name
and a.column_name = b.column_name
and a.index_owner = 'BWCM_WS_NDB'
and a.index_name = 'XIE01WS_CRM_MAP'
and a.table_owner = 'BWCM_WS_NDB'
and a.table_name = 'WS_CRM_MAP'
ORDER BY a.column_position

which takes close to 30 minutes. The same sql with a RULE hint takes less than 1 second.

tkprof output for the unhinted long running sql is as follows:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.91       0.90          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2   1554.39    1521.28          8   86034457          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4   1555.30    1522.18          8   86034457          0           2

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 479

Rows Row Source Operation
------- ---------------------------------------------------
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS CLUSTER COL$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ# (cr=0 pr=0 pw=0 time=0 us)(object id 3)
0 TABLE ACCESS CLUSTER ATTRCOL$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
2 SORT ORDER BY (cr=86034457 pr=8 pw=0 time=900273982 us)
2 FILTER (cr=86034457 pr=8 pw=0 time=43274482 us)
754 NESTED LOOPS OUTER (cr=86034449 pr=8 pw=0 time=2507825187 us)
754 NESTED LOOPS OUTER (cr=86034449 pr=8 pw=0 time=2507810861 us)
754 NESTED LOOPS (cr=86034449 pr=8 pw=0 time=2507792767 us)
284258 NESTED LOOPS (cr=85465931 pr=7 pw=0 time=2541828449 us)
8527740 NESTED LOOPS OUTER (cr=68410449 pr=7 pw=0 time=1100080629 us)
8527740 NESTED LOOPS (cr=8716269 pr=7 pw=0 time=247306646 us)
22620 NESTED LOOPS (cr=30189 pr=7 pw=0 time=1449788 us)
377 NESTED LOOPS (cr=4928 pr=7 pw=0 time=155493 us)
377 NESTED LOOPS OUTER (cr=3795 pr=7 pw=0 time=102768 us)
377 NESTED LOOPS OUTER (cr=3037 pr=7 pw=0 time=66989 us)
377 NESTED LOOPS (cr=398 pr=2 pw=0 time=17350 us)
1 NESTED LOOPS (cr=13 pr=1 pw=0 time=548 us)
1 NESTED LOOPS (cr=9 pr=0 pw=0 time=183 us)
1 NESTED LOOPS (cr=6 pr=0 pw=0 time=134 us)
1 TABLE ACCESS BY INDEX ROWID USER$ (cr=3 pr=0 pw=0 time=82 us)
1 INDEX UNIQUE SCAN I_USER1 (cr=2 pr=0 pw=0 time=46 us)(object id 44)
1 TABLE ACCESS BY INDEX ROWID USER$ (cr=3 pr=0 pw=0 time=42 us)
1 INDEX UNIQUE SCAN I_USER1 (cr=2 pr=0 pw=0 time=21 us)(object id 44)
1 TABLE ACCESS BY INDEX ROWID USER$ (cr=3 pr=0 pw=0 time=41 us)
1 INDEX UNIQUE SCAN I_USER1 (cr=2 pr=0 pw=0 time=20 us)(object id 44)
1 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=4 pr=1 pw=0 time=355 us)
1 INDEX RANGE SCAN I_OBJ2 (cr=3 pr=1 pw=0 time=287 us)(object id 37)
377 TABLE ACCESS CLUSTER COL$ (cr=385 pr=1 pw=0 time=15717 us)
1 INDEX UNIQUE SCAN I_OBJ# (cr=2 pr=0 pw=0 time=37 us)(object id 3)
0 TABLE ACCESS CLUSTER COLTYPE$ (cr=2639 pr=5 pw=0 time=38494 us)
377 INDEX RANGE SCAN I_HH_OBJ#_INTCOL# (cr=758 pr=0 pw=0 time=22670 us)(object id 257)
377 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=1133 pr=0 pw=0 time=45268 us)
377 INDEX RANGE SCAN I_OBJ2 (cr=756 pr=0 pw=0 time=19748 us)(object id 37)
22620 TABLE ACCESS CLUSTER ICOL$ (cr=25261 pr=0 pw=0 time=616558 us)
377 INDEX UNIQUE SCAN I_OBJ# (cr=379 pr=0 pw=0 time=13617 us)(object id 3)
8527740 TABLE ACCESS CLUSTER COL$ (cr=8686080 pr=0 pw=0 time=230561995 us)
0 TABLE ACCESS CLUSTER ATTRCOL$ (cr=59694180 pr=0 pw=0 time=809872364 us)
284258 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=17055482 pr=0 pw=0 time=389968778 us)
8527740 INDEX UNIQUE SCAN I_OBJ1 (cr=8527742 pr=0 pw=0 time=203917575 us)(object id 36)
754 TABLE ACCESS BY INDEX ROWID IND$ (cr=568518 pr=1 pw=0 time=16974454 us)
284258 INDEX UNIQUE SCAN I_IND1 (cr=284260 pr=1 pw=0 time=6279622 us)(object id 39)
0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=12093 us)
0 INDEX RANGE SCAN I_OBJ3 (cr=0 pr=0 pw=0 time=5858 us)(object id 38)
0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=9138 us)
0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=3792 us)(object id 11)
0 NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS CLUSTER COL$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ# (cr=0 pr=0 pw=0 time=0 us)(object id 3)
0 TABLE ACCESS CLUSTER ATTRCOL$ (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS CLUSTER TAB$ (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN I_OBJ# (cr=0 pr=0 pw=0 time=0 us)(object id 3)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 8 0.00 0.00
latch: cache buffers chains 1 0.00 0.00
SQL*Net message from client 2 0.00 0.00

Obviously, there exists at least one better plan like the one used by the same sql with a RULE hint.

What would it take for the CBO to recognize the better plan?

What options do we have to speed this up, given that the sql is issued by an application, OEM Change Manager?

Thanks.

Tom Kyte
April 02, 2009 - 11:32 am UTC

do you have current realistic statistics on your dictionary tables?
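One quick check is to look at LAST_ANALYZED for the dictionary tables this plan touches (a sketch; a NULL there means the table has never been analyzed):

```sql
SQL> select table_name, num_rows, last_analyzed
  2    from dba_tables
  3   where owner = 'SYS'
  4     and table_name in ( 'OBJ$', 'COL$', 'ICOL$', 'IND$', 'USER$' )
  5   order by table_name;
```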

Execution Plan of Dictionary View Query

Siu-Ki Chan, April 02, 2009 - 11:39 am UTC

The stats were brought up to date by doing the following:

DBMS_STATS.DELETE_DICTIONARY_STATS(FORCE => TRUE, NO_INVALIDATE => FALSE);
DBMS_STATS.GATHER_DICTIONARY_STATS(ESTIMATE_PERCENT => NULL,
METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO',
GRANULARITY => 'ALL',
CASCADE => TRUE,
OPTIONS => 'GATHER',
NO_INVALIDATE => FALSE);
DBMS_STATS.DELETE_FIXED_OBJECTS_STATS(FORCE => TRUE, NO_INVALIDATE => FALSE);
DBMS_STATS.GATHER_FIXED_OBJECTS_STATS(NO_INVALIDATE => FALSE);
Tom Kyte
April 02, 2009 - 12:42 pm UTC

ok.... so did you just do that and something changed or what??!?

it is not very clear.

Execution Plan of Dictionary View Query

Siu-Ki Chan, April 02, 2009 - 12:52 pm UTC

Hi Tom,

I noticed OEM Change Manager running slowly while comparing indexes, started looking, and spotted this long-running sql issued by the Change Manager.

Thinking statistics might be the problem, I did the delete/gather of statistics.

However, no improvement was realized. I did a trace and tkprof, resulting in the execution plan as posted.

Thanks.
Tom Kyte
April 02, 2009 - 1:09 pm UTC

do an explain on it as well - let's compare the estimated card=value to the actuals observed above.
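On 10g, one way to get both numbers side by side is to re-run the statement with the gather_plan_statistics hint and then display the cursor with the ALLSTATS LAST format, which prints the estimated (E-Rows) and actual (A-Rows) cardinalities for each step in one report (a sketch, assuming the statement can be re-run):

```sql
SQL> select /*+ gather_plan_statistics */
  2         a.column_name, b.data_type, a.column_position
  3    from sys.dba_ind_columns a, sys.dba_tab_columns b
  4   where a.table_owner = b.owner
  5     and a.table_name  = b.table_name
  6     and a.column_name = b.column_name
  7     and a.index_owner = 'BWCM_WS_NDB'
  8     and a.index_name  = 'XIE01WS_CRM_MAP'
  9     and a.table_owner = 'BWCM_WS_NDB'
 10     and a.table_name  = 'WS_CRM_MAP'
 11   order by a.column_position;

SQL> select * from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ) );
```

Steps where E-Rows and A-Rows diverge by orders of magnitude are the ones to investigate.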

Execution Plan of Dictionary View Query

Siu-Ki Chan, April 02, 2009 - 3:27 pm UTC

SQL> set autotrace traceonly explain
SQL> SELECT a.column_name, b.data_type, a.column_position
  2    FROM sys.dba_ind_columns a, sys.dba_tab_columns b
  3   WHERE a.table_owner = b.owner
  4     and a.table_name = b.table_name
  5     and a.column_name = b.column_name
  6     and a.index_owner = 'BWCM_WS_NDB'
  7     and a.index_name = 'XIE01WS_CRM_MAP'
  8     and a.table_owner = 'BWCM_WS_NDB'
  9     and a.table_name = 'WS_CRM_MAP'
 10  ORDER BY a.column_position;

Execution Plan
----------------------------------------------------------
Plan hash value: 1552453957

----------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |                   |     1 |   372 |    26   (4)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER                         |                   |     1 |    75 |     3   (0)| 00:00:01 |
|*  2 |   TABLE ACCESS CLUSTER                      | COL$              |     1 |    27 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                        | I_OBJ#            |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS CLUSTER                      | ATTRCOL$          |     1 |    48 |     1   (0)| 00:00:01 |
|   5 |  TABLE ACCESS BY INDEX ROWID                | OBJ$              |     1 |    32 |     2   (0)| 00:00:01 |
|*  6 |   INDEX UNIQUE SCAN                         | I_OBJ1            |     1 |       |     1   (0)| 00:00:01 |
|   7 |   TABLE ACCESS BY INDEX ROWID               | OBJ$              |     1 |    32 |     2   (0)| 00:00:01 |
|*  8 |    INDEX UNIQUE SCAN                        | I_OBJ1            |     1 |       |     1   (0)| 00:00:01 |
|   9 |    TABLE ACCESS BY INDEX ROWID              | OBJ$              |     1 |    32 |     2   (0)| 00:00:01 |
|* 10 |     INDEX UNIQUE SCAN                       | I_OBJ1            |     1 |       |     1   (0)| 00:00:01 |
|  11 |     TABLE ACCESS BY INDEX ROWID             | OBJ$              |     1 |    32 |     2   (0)| 00:00:01 |
|* 12 |      INDEX UNIQUE SCAN                      | I_OBJ1            |     1 |       |     1   (0)| 00:00:01 |
|  13 |      TABLE ACCESS BY INDEX ROWID            | OBJ$              |     1 |    32 |     2   (0)| 00:00:01 |
|* 14 |       INDEX UNIQUE SCAN                     | I_OBJ1            |     1 |       |     1   (0)| 00:00:01 |
|  15 |  SORT ORDER BY                              |                   |     1 |   372 |    26   (4)| 00:00:01 |
|* 16 |   FILTER                                    |                   |       |       |            |          |
|  17 |    NESTED LOOPS OUTER                       |                   |     1 |   372 |    22   (0)| 00:00:01 |
|  18 |     NESTED LOOPS OUTER                      |                   |     1 |   368 |    21   (0)| 00:00:01 |
|  19 |      NESTED LOOPS                           |                   |     1 |   333 |    19   (0)| 00:00:01 |
|  20 |       NESTED LOOPS                          |                   |     1 |   321 |    18   (0)| 00:00:01 |
|  21 |        NESTED LOOPS OUTER                   |                   |     1 |   285 |    17   (0)| 00:00:01 |
|  22 |         NESTED LOOPS OUTER                  |                   |     1 |   275 |    15   (0)| 00:00:01 |
|  23 |          NESTED LOOPS                       |                   |     1 |   247 |    14   (0)| 00:00:01 |
|  24 |           NESTED LOOPS                      |                   |     1 |   210 |    13   (0)| 00:00:01 |
|  25 |            NESTED LOOPS OUTER               |                   |     1 |   171 |    10   (0)| 00:00:01 |
|  26 |             NESTED LOOPS                    |                   |     1 |   123 |     9   (0)| 00:00:01 |
|  27 |              NESTED LOOPS                   |                   |     1 |    96 |     8   (0)| 00:00:01 |
|  28 |               NESTED LOOPS                  |                   |     1 |    75 |     7   (0)| 00:00:01 |
|  29 |                NESTED LOOPS                 |                   |     1 |    39 |     4   (0)| 00:00:01 |
|  30 |                 NESTED LOOPS                |                   |     1 |    26 |     3   (0)| 00:00:01 |
|  31 |                  TABLE ACCESS BY INDEX ROWID| USER$             |     1 |    13 |     2   (0)| 00:00:01 |
|* 32 |                   INDEX UNIQUE SCAN         | I_USER1           |     1 |       |     1   (0)| 00:00:01 |
|  33 |                  TABLE ACCESS BY INDEX ROWID| USER$             |     1 |    13 |     1   (0)| 00:00:01 |
|* 34 |                   INDEX UNIQUE SCAN         | I_USER1           |     1 |       |     0   (0)| 00:00:01 |
|  35 |                 TABLE ACCESS BY INDEX ROWID | USER$             |     1 |    13 |     1   (0)| 00:00:01 |
|* 36 |                  INDEX UNIQUE SCAN          | I_USER1           |     1 |       |     0   (0)| 00:00:01 |
|  37 |                TABLE ACCESS BY INDEX ROWID  | OBJ$              |     1 |    36 |     3   (0)| 00:00:01 |
|* 38 |                 INDEX RANGE SCAN            | I_OBJ2            |     1 |       |     2   (0)| 00:00:01 |
|  39 |               TABLE ACCESS CLUSTER          | ICOL$             |     2 |    42 |     1   (0)| 00:00:01 |
|* 40 |                INDEX UNIQUE SCAN            | I_OBJ#            |     1 |       |     0   (0)| 00:00:01 |
|* 41 |              TABLE ACCESS CLUSTER           | COL$              |    11 |   297 |     1   (0)| 00:00:01 |
|* 42 |             TABLE ACCESS CLUSTER            | ATTRCOL$          |     1 |    48 |     1   (0)| 00:00:01 |
|  43 |            TABLE ACCESS BY INDEX ROWID      | OBJ$              |     1 |    39 |     3   (0)| 00:00:01 |
|* 44 |             INDEX RANGE SCAN                | I_OBJ2            |     1 |       |     2   (0)| 00:00:01 |
|* 45 |           TABLE ACCESS CLUSTER              | COL$              |     1 |    37 |     1   (0)| 00:00:01 |
|* 46 |            INDEX UNIQUE SCAN                | I_OBJ#            |     1 |       |     0   (0)| 00:00:01 |
|* 47 |          TABLE ACCESS CLUSTER               | COLTYPE$          |     1 |    28 |     1   (0)| 00:00:01 |
|* 48 |         INDEX RANGE SCAN                    | I_HH_OBJ#_INTCOL# |     1 |    10 |     2   (0)| 00:00:01 |
|* 49 |        TABLE ACCESS BY INDEX ROWID          | OBJ$              |     1 |    36 |     1   (0)| 00:00:01 |
|* 50 |         INDEX UNIQUE SCAN                   | I_OBJ1            |     1 |       |     0   (0)| 00:00:01 |
|* 51 |       TABLE ACCESS BY INDEX ROWID           | IND$              |     1 |    12 |     1   (0)| 00:00:01 |
|* 52 |        INDEX UNIQUE SCAN                    | I_IND1            |     1 |       |     0   (0)| 00:00:01 |
|* 53 |      TABLE ACCESS BY INDEX ROWID            | OBJ$              |     1 |    35 |     2   (0)| 00:00:01 |
|* 54 |       INDEX RANGE SCAN                      | I_OBJ3            |     1 |       |     1   (0)| 00:00:01 |
|  55 |     TABLE ACCESS CLUSTER                    | USER$             |     1 |     4 |     1   (0)| 00:00:01 |
|* 56 |      INDEX UNIQUE SCAN                      | I_USER#           |     1 |       |     0   (0)| 00:00:01 |
|  57 |    NESTED LOOPS OUTER                       |                   |     1 |    75 |     3   (0)| 00:00:01 |
|* 58 |     TABLE ACCESS CLUSTER                    | COL$              |     1 |    27 |     2   (0)| 00:00:01 |
|* 59 |      INDEX UNIQUE SCAN                      | I_OBJ#            |     1 |       |     1   (0)| 00:00:01 |
|* 60 |     TABLE ACCESS CLUSTER                    | ATTRCOL$          |     1 |    48 |     1   (0)| 00:00:01 |
|* 61 |      TABLE ACCESS CLUSTER                   | TAB$              |     1 |    13 |     2   (0)| 00:00:01 |
|* 62 |       INDEX UNIQUE SCAN                     | I_OBJ#            |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("TC"."INTCOL#"=:B1-1)
   3 - access("TC"."OBJ#"=:B1)
   4 - filter("AC"."OBJ#"(+)=:B1 AND "AC"."INTCOL#"(+)=:B2-1 AND "TC"."OBJ#"="AC"."OBJ#"(+) AND
              "TC"."INTCOL#"="AC"."INTCOL#"(+))
   6 - access("O"."OBJ#"=:B1)
   8 - access("O"."OBJ#"=:B1)
  10 - access("O"."OBJ#"=:B1)
  12 - access("O"."OBJ#"=:B1)
  14 - access("O"."OBJ#"=:B1)
  16 - filter("C"."NAME"=DECODE(BITAND("C"."PROPERTY",1024),1024, (SELECT /*+ */
              DECODE(BITAND("TC"."PROPERTY",1),1,"AC"."NAME","TC"."NAME") FROM SYS."ATTRCOL$" "AC","SYS"."C
              WHERE "TC"."OBJ#"=:B1 AND "TC"."INTCOL#"=:B2-1 AND "AC"."OBJ#"(+)=:B3 AND "AC"."INTCOL#"(+)=:
              "TC"."OBJ#"="AC"."OBJ#"(+) AND "TC"."INTCOL#"="AC"."INTCOL#"(+)),DECODE("AC"."NAME",NULL,"C".
              "NAME")) AND ("O"."TYPE#"=2 AND  NOT EXISTS (SELECT /*+ */ 0 FROM "SYS"."TAB$" "T" WHERE "T".
              AND (BITAND("T"."PROPERTY",512)=512 OR BITAND("T"."PROPERTY",8192)=8192)) OR ("O"."TYPE#"=3 O
              "O"."TYPE#"=4)))
  32 - access("U"."NAME"='BWCM_WS_NDB')
  34 - access("BO"."NAME"='BWCM_WS_NDB')
  36 - access("IO"."NAME"='BWCM_WS_NDB')
  38 - access("BO"."USER#"="BASE"."OWNER#" AND "BASE"."NAME"='WS_CRM_MAP')
  40 - access("IC"."BO#"="BASE"."OBJ#")
  41 - filter("IC"."BO#"="C"."OBJ#")
  42 - filter("C"."OBJ#"="AC"."OBJ#"(+) AND "C"."INTCOL#"="AC"."INTCOL#"(+))
  44 - access("O"."OWNER#"="U"."USER#" AND "O"."NAME"='WS_CRM_MAP')
  45 - filter(DECODE("C"."PROPERTY",0,'NO',DECODE(BITAND("C"."PROPERTY",32),32,'YES','NO'))='NO')
  46 - access("O"."OBJ#"="C"."OBJ#")
  47 - filter("C"."OBJ#"="AC"."OBJ#"(+) AND "C"."INTCOL#"="AC"."INTCOL#"(+))
  48 - access("C"."OBJ#"="H"."OBJ#"(+) AND "C"."INTCOL#"="H"."INTCOL#"(+))
  49 - filter("IDX"."NAME"='XIE01WS_CRM_MAP' AND "IO"."USER#"="IDX"."OWNER#")
  50 - access("IC"."OBJ#"="IDX"."OBJ#")
  51 - filter(("I"."TYPE#"=1 OR "I"."TYPE#"=2 OR "I"."TYPE#"=3 OR "I"."TYPE#"=4 OR "I"."TYPE#"=6 OR
              "I"."TYPE#"=7 OR "I"."TYPE#"=9) AND "C"."INTCOL#"=DECODE(BITAND("I"."PROPERTY",1024),0,"IC"."
              "."SPARE2"))
  52 - access("IDX"."OBJ#"="I"."OBJ#")
  53 - filter("OT"."TYPE#"(+)=13)
  54 - access("AC"."TOID"="OT"."OID$"(+))
  56 - access("OT"."OWNER#"="UT"."USER#"(+))
  58 - filter("TC"."INTCOL#"=:B1-1)
  59 - access("TC"."OBJ#"=:B1)
  60 - filter("AC"."OBJ#"(+)=:B1 AND "AC"."INTCOL#"(+)=:B2-1 AND "TC"."OBJ#"="AC"."OBJ#"(+) AND
              "TC"."INTCOL#"="AC"."INTCOL#"(+))
  61 - filter(BITAND("T"."PROPERTY",512)=512 OR BITAND("T"."PROPERTY",8192)=8192)
  62 - access("T"."OBJ#"=:B1)

Tom Kyte
April 02, 2009 - 4:50 pm UTC

column PLAN_TABLE_OUTPUT format a72 truncate


and post that, I cannot read it - OR - better yet - you can lay it next to the row source operation and start to ask the question "why are the estimates so far off on step X?". That is what I would be doing - you can try that too - you might be able to see 'why' it is happening.

Execution Plan of Dictionary View Query

Siu-Ki Chan, April 03, 2009 - 1:09 am UTC

Hi Tom,

Sorry for the oversized lines. Here are the same explain plan results with columns beyond 80 truncated:

Execution Plan
----------------------------------------------------------
Plan hash value: 1552453957

---------------------------------------------------------------------------------
| Id  | Operation                                   | Name              | Rows  |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |                   |     1 |
|   1 |  NESTED LOOPS OUTER                         |                   |     1 |
|*  2 |   TABLE ACCESS CLUSTER                      | COL$              |     1 |
|*  3 |    INDEX UNIQUE SCAN                        | I_OBJ#            |     1 |
|*  4 |   TABLE ACCESS CLUSTER                      | ATTRCOL$          |     1 |
|   5 |  TABLE ACCESS BY INDEX ROWID                | OBJ$              |     1 |
|*  6 |   INDEX UNIQUE SCAN                         | I_OBJ1            |     1 |
|   7 |   TABLE ACCESS BY INDEX ROWID               | OBJ$              |     1 |
|*  8 |    INDEX UNIQUE SCAN                        | I_OBJ1            |     1 |
|   9 |    TABLE ACCESS BY INDEX ROWID              | OBJ$              |     1 |
|* 10 |     INDEX UNIQUE SCAN                       | I_OBJ1            |     1 |
|  11 |     TABLE ACCESS BY INDEX ROWID             | OBJ$              |     1 |
|* 12 |      INDEX UNIQUE SCAN                      | I_OBJ1            |     1 |
|  13 |      TABLE ACCESS BY INDEX ROWID            | OBJ$              |     1 |
|* 14 |       INDEX UNIQUE SCAN                     | I_OBJ1            |     1 |
|  15 |  SORT ORDER BY                              |                   |     1 |
|* 16 |   FILTER                                    |                   |       |
|  17 |    NESTED LOOPS OUTER                       |                   |     1 |
|  18 |     NESTED LOOPS OUTER                      |                   |     1 |
|  19 |      NESTED LOOPS                           |                   |     1 |
|  20 |       NESTED LOOPS                          |                   |     1 |
|  21 |        NESTED LOOPS OUTER                   |                   |     1 |
|  22 |         NESTED LOOPS OUTER                  |                   |     1 |
|  23 |          NESTED LOOPS                       |                   |     1 |
|  24 |           NESTED LOOPS                      |                   |     1 |
|  25 |            NESTED LOOPS OUTER               |                   |     1 |
|  26 |             NESTED LOOPS                    |                   |     1 |
|  27 |              NESTED LOOPS                   |                   |     1 |
|  28 |               NESTED LOOPS                  |                   |     1 |
|  29 |                NESTED LOOPS                 |                   |     1 |
|  30 |                 NESTED LOOPS                |                   |     1 |
|  31 |                  TABLE ACCESS BY INDEX ROWID| USER$             |     1 |
|* 32 |                   INDEX UNIQUE SCAN         | I_USER1           |     1 |
|  33 |                  TABLE ACCESS BY INDEX ROWID| USER$             |     1 |
|* 34 |                   INDEX UNIQUE SCAN         | I_USER1           |     1 |
|  35 |                 TABLE ACCESS BY INDEX ROWID | USER$             |     1 |
|* 36 |                  INDEX UNIQUE SCAN          | I_USER1           |     1 |
|  37 |                TABLE ACCESS BY INDEX ROWID  | OBJ$              |     1 |
|* 38 |                 INDEX RANGE SCAN            | I_OBJ2            |     1 |
|  39 |               TABLE ACCESS CLUSTER          | ICOL$             |     2 |
|* 40 |                INDEX UNIQUE SCAN            | I_OBJ#            |     1 |
|* 41 |              TABLE ACCESS CLUSTER           | COL$              |    11 |
|* 42 |             TABLE ACCESS CLUSTER            | ATTRCOL$          |     1 |
|  43 |            TABLE ACCESS BY INDEX ROWID      | OBJ$              |     1 |
|* 44 |             INDEX RANGE SCAN                | I_OBJ2            |     1 |
|* 45 |           TABLE ACCESS CLUSTER              | COL$              |     1 |
|* 46 |            INDEX UNIQUE SCAN                | I_OBJ#            |     1 |
|* 47 |          TABLE ACCESS CLUSTER               | COLTYPE$          |     1 |
|* 48 |         INDEX RANGE SCAN                    | I_HH_OBJ#_INTCOL# |     1 |
|* 49 |        TABLE ACCESS BY INDEX ROWID          | OBJ$              |     1 |
|* 50 |         INDEX UNIQUE SCAN                   | I_OBJ1            |     1 |
|* 51 |       TABLE ACCESS BY INDEX ROWID           | IND$              |     1 |
|* 52 |        INDEX UNIQUE SCAN                    | I_IND1            |     1 |
|* 53 |      TABLE ACCESS BY INDEX ROWID            | OBJ$              |     1 |
|* 54 |       INDEX RANGE SCAN                      | I_OBJ3            |     1 |
|  55 |     TABLE ACCESS CLUSTER                    | USER$             |     1 |
|* 56 |      INDEX UNIQUE SCAN                      | I_USER#           |     1 |
|  57 |    NESTED LOOPS OUTER                       |                   |     1 |
|* 58 |     TABLE ACCESS CLUSTER                    | COL$              |     1 |
|* 59 |      INDEX UNIQUE SCAN                      | I_OBJ#            |     1 |
|* 60 |     TABLE ACCESS CLUSTER                    | ATTRCOL$          |     1 |
|* 61 |      TABLE ACCESS CLUSTER                   | TAB$              |     1 |
|* 62 |       INDEX UNIQUE SCAN                     | I_OBJ#            |     1 |
---------------------------------------------------------------------------------

The estimated cardinality was quite a bit off for some of the row source operations, in particular operation ids #39 and #45.

I don't have a clue as to why the discrepancy exists, and would appreciate your help in tracking this down.

Thanks.

Execution Plan of Dictionary View Query

Siu-Ki Chan, April 05, 2009 - 1:43 pm UTC

Hi Tom,

Any pointer as to how/where one could research this sort of problem further?

Thanks.
Tom Kyte
April 07, 2009 - 6:04 am UTC

from above...

better yet - you can lay it next to the row source operation and maybe start to ask the question "why are the estimates so far off on step X", that is what I would be doing - you can try that too - you might be able to see 'why' it is happening.

for you see, you chopped off all of the relevant information - I said what my goal was - and you gave something that made it impossible to achieve that goal.

outer join expensive?

A reader, April 07, 2009 - 3:30 am UTC

Tom,

You said "...I've found many times they are there simply 'in case'. outer joins can be very expensive."

We've got some foreign key columns which were previously nullable, so in a query where we want to pull back all the child rows and show a description from the parent, we obviously use an outer join for that purpose.

However, someone in the design dept has read that an outer join is expensive. To eliminate it, we would introduce a "dummy" value in every foreign key table, so that any existing or new foreign key column with a NULL value can be assigned this "dummy" value, effectively making every foreign key column mandatory and outer joins no longer necessary.

What do you think?
Tom Kyte
April 13, 2009 - 11:13 am UTC

I think you model your data as it exists.

If you are using an outer join when you DO NOT NEED TO (because the relationship is either 1:1 mandatory or 1:m mandatory), then you did it wrong.

If you have a 1:1 or 1:m OPTIONAL relationship, then using an outer join is the only right/correct approach.
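
For illustration only (the CHILD/PARENT table and column names below are hypothetical): with an optional relationship, the outer join preserves child rows whose foreign key is NULL, which a dummy parent row would merely disguise:

```sql
-- child.parent_id is a nullable foreign key: the relationship is OPTIONAL.
-- The outer join returns every child row; description is simply NULL
-- for children that have no parent.
select c.child_id,
       c.parent_id,
       p.description
  from child c, parent p
 where c.parent_id = p.parent_id (+);
```

Replacing the NULLs with a dummy parent row would let you drop the (+), but the "no parent" fact would then be encoded in a magic value instead of in the model.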

Execution Plan of Dictionary View Query

Siu-Ki Chan, April 07, 2009 - 9:58 am UTC

Hi Tom,

I must have misunderstood when you said:

column PLAN_TABLE_OUTPUT format a72 truncate

If I gave you something that made it impossible to achieve that goal, it certainly was not my intention. After all, I participated in this exchange seeking help, not to waste your time or mine.

I was able to and have indeed put the output side by side. I just don't know the proper way to send/post it in its entirety so you can read it.

Thanks.

Tom Kyte
April 13, 2009 - 12:41 pm UTC

sorry - I misread the posting, I see the cardinalities are there.

However, the explain plan is different from the row source operation.

the actual plan used at run time is different from the plan that explain plan came up with.

can you get the plan out of v$sql_plan directly instead of using explain plan?
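
A sketch of doing that (the LIKE pattern and the &sql_id / &child_no substitution values are placeholders you would fill in yourself):

```sql
-- locate the cursor of interest in the shared pool
select sql_id, child_number, sql_text
  from v$sql
 where sql_text like 'select /* my_query */%';

-- display the plan actually used at run time for that cursor
select *
  from table( dbms_xplan.display_cursor( '&sql_id', &child_no ) );
```

If you can re-run the statement with a /*+ gather_plan_statistics */ hint, passing 'ALLSTATS LAST' as the format argument to display_cursor will show estimated versus actual row counts side by side, which is exactly the comparison being discussed here.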

sequential read taking more time

Sam, April 15, 2009 - 3:19 am UTC

Hi Tom,

In our production system, I ran a query and it took 6+ seconds to retrieve the result. When I looked at the tkprof report, it said that most of the wait was on 'db file sequential read'. What might be the problem?

select count(:"SYS_B_0") 
from
 UC_OBJ_INV_BILL_MS  where EXT_SUPL_BILL_NUM = :"SYS_B_1" and CRDTR_PRTNR_ID =
   :"SYS_B_2" and bill_stat <> :"SYS_B_3" 


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.37       6.13       2550       2599          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.37       6.14       2550       2599          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 138  

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=2599 pr=2550 pw=0 time=6138648 us)
      1   TABLE ACCESS BY GLOBAL INDEX ROWID UC_OBJ_INV_BILL_MS PARTITION: ROW LOCATION ROW LOCATION (cr=2599 pr=2550 pw=0 time=6138621 us)
   3099    INDEX RANGE SCAN IDX_OBJINVBILL_CRDTRPRTNRID (cr=24 pr=24 pw=0 time=28093 us)(object id 598000)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2       21.94         21.94
  db file sequential read                      2550        0.64          5.97


When I looked at the Oracle documentation, it suggested looking into V$FILESTAT.
Tom Kyte
April 15, 2009 - 9:12 am UTC


ops$tkyte%ORA10GR2> select 6.14/2550 from dual;

 6.14/2550
----------
.002407843




I don't see a problem, you did 2550 physical IO's against the table. Each IO took about 0.002 to 0.003 seconds to execute. Not bad.

Now, the question might be "how can I make this go faster".


Look at the row source operation - you went to the index IDX_OBJINVBILL_CRDTRPRTNRID (no clue what columns that is on really). We used that index - and we discovered 3,099 possible rows (the entire predicate could not be evaluated against the index apparently - we had to go to the table). So, we went to the table 3,099 times (resulting in 2,550 physical IO's - the blocks were not in the cache). Of the 3,099 times we went to the table - we discovered ONLY ONE RECORD!!!!!

Ouch, that index is missing some attributes.

ouch ouch, you are using cursor_sharing = force|similar - so in addition to this poorly performing query, you have a security hole the size of a space shuttle in your application (google sql injection - you are subject to it, no matter what your developers say; you should be very afraid of this code). In addition to the security issue, you have performance and scalability issues (else you wouldn't even be using cursor_sharing=force|similar), and you haven't fixed them with cursor sharing. You may have *lessened* them, but they still exist hugely (you could do many times more work on the same hardware if you removed the "need" for this crutch called cursor_sharing=force|similar).

ouch again, select count(:"SYS_B_0")

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1594885400346999596#1628907000346905841

But back to the query - consider this simple example:

ops$tkyte%ORA10GR2> drop table t;
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set echo on
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t
ops$tkyte%ORA10GR2> as
ops$tkyte%ORA10GR2> select rownum id, x.*
ops$tkyte%ORA10GR2>   from (
ops$tkyte%ORA10GR2> select * from all_objects
ops$tkyte%ORA10GR2> union all
ops$tkyte%ORA10GR2> select * from all_objects where owner = 'SYSMAN'
ops$tkyte%ORA10GR2>        ) x
ops$tkyte%ORA10GR2> order by dbms_random.random;
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter table t add constraint t_pk primary key(id);
ops$tkyte%ORA10GR2> create index t_idx1 on t(owner);
ops$tkyte%ORA10GR2> create index t_idx2 on t(owner,object_name,status);
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> select /*+ index(t t_idx1) */ count(*) from t where owner = 'SYSMAN' and object_name = 'AQ$MGMT_NOTIFY_QTABLE' and status <> 'INVALID';

  COUNT(*)
----------
         2


Execution Plan
----------------------------------------------------------
Plan hash value: 4230998786

------------------------------------------------------------------------
| Id  | Operation                    | Name   | Rows  | Bytes | Cost (%C
------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |        |     1 |    39 |   288
|   1 |  SORT AGGREGATE              |        |     1 |    39 |
|*  2 |   TABLE ACCESS BY INDEX ROWID| T      |     1 |    39 |   288
|*  3 |    INDEX RANGE SCAN          | T_IDX1 |  2232 |       |     7
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OBJECT_NAME"='AQ$MGMT_NOTIFY_QTABLE' AND "STATUS"<>'INVAL
   3 - access("OWNER"='SYSMAN')

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        728  consistent gets
          0  physical reads
          0  redo size
        411  bytes sent via SQL*Net to client
        400  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> select /*+ index(t t_idx2) */ count(*) from t where owner = 'SYSMAN' and object_name = 'AQ$MGMT_NOTIFY_QTABLE' and status <> 'INVALID';

  COUNT(*)
----------
         2


Execution Plan
----------------------------------------------------------
Plan hash value: 2020952945

------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |     1 |    39 |     2   (0)| 00:00:
|   1 |  SORT AGGREGATE   |        |     1 |    39 |            |
|*  2 |   INDEX RANGE SCAN| T_IDX2 |     1 |    39 |     2   (0)| 00:00:
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OWNER"='SYSMAN' AND "OBJECT_NAME"='AQ$MGMT_NOTIFY_QTABLE'
              )
       filter("STATUS"<>'INVALID')

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
        411  bytes sent via SQL*Net to client
        400  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off



You want an index on

EXT_SUPL_BILL_NUM,CRDTR_PRTNR_ID,bill_stat


the first two columns can be in any order that makes sense to you - the last column should be bill_stat.


In that fashion, you'll avoid the table altogether.
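
A sketch of such an index, using the column names from the posted query (the index name is made up; since the table in the trace is partitioned with a global index, you would also decide whether this one should be global or local):

```sql
-- all three predicate columns in the index, so the count can be
-- answered from the index alone with no table access
create index uc_obj_inv_bill_ms_ix
    on uc_obj_inv_bill_ms ( ext_supl_bill_num, crdtr_prtnr_id, bill_stat );
```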



Probably though, the best thing to do would be......


NOT TO EXECUTE THIS QUERY AT ALL - i hate count queries like this. I'll bet your code looks like:

select this silly count into some variable.

if ( variable <> 0 )
then
    process the data
end if



The code should be:


PROCESS THE DATA (if there isn't any, big deal, no problem, the code will just exit, it has to - I mean eventually there isn't any data to process - so just PROCESS THE DATA DO NOT COUNT THE ROWS TO SEE IF YOU SHOULD PROCESS, JUST PROCESS IT ALREADY)
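
As a hypothetical PL/SQL sketch (the bind names are placeholders), the count-then-process pattern collapses into just processing:

```sql
begin
    -- no preliminary count(*) - just fetch and process; if nothing
    -- matches, the loop body simply never executes
    for rec in ( select *
                   from uc_obj_inv_bill_ms
                  where ext_supl_bill_num = :b1
                    and crdtr_prtnr_id    = :b2
                    and bill_stat        <> :b3 )
    loop
        null;  -- process rec here
    end loop;
end;
/
```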



need help on a sql

Reader, June 16, 2009 - 10:50 am UTC

Agreement is a composite partitioned table with 9.6M records, all of which are in just one partition.
Currently the SQL just does not complete, and it hogs I/O as well. Can you please suggest a better way to write this SQL?



SELECT *
FROM AGREEMENT A
WHERE EXISTS
(SELECT 1
FROM AGREEMENT B
WHERE B.END_DTE > TO_DATE ('20081201', 'YYYYMMDD')
AND B.STRT_DTE <= TO_DATE ('20091231', 'YYYYMMDD')
AND A.AGR_ID = B.AGR_ID
GROUP BY B.AGR_ID
HAVING A.END_DTE = MIN (B.END_DTE))



PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 577910687

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 135 | 159M (1)|533:11:28 | | |
|* 1 | FILTER | | | | | | | |
| 2 | PARTITION RANGE ALL | | 10M| 1369M| 39355 (5)| 00:07:53 | 1 |
| 3 | PARTITION LIST ALL | | 10M| 1369M| 39355 (5)| 00:07:53 | 1 | 1
| 4 | TABLE ACCESS FULL | AGREEMENT | 10M| 1369M| 39355 (5)| 00
|* 5 | FILTER | | | | | | | |
| 6 | SORT GROUP BY NOSORT | | 1 | 28 | 15 (0)| 00:00:01 | |
|* 7 | TABLE ACCESS BY GLOBAL INDEX ROWID| AGREEMENT | 19 | 532 | 1
|* 8 | INDEX RANGE SCAN | XIF2AGREEMENT | 19 | | 3 (0)| 00
----------------------------------------------------------------------------------------------------



Thanks a lot in advance
Tom Kyte
June 16, 2009 - 2:38 pm UTC

select * from (
select a.*, min(end_dte) over (partition by agr_id) min_dt
  from agreement a
 where end_dte > TO_DATE ('20081201', 'YYYYMMDD')
   and STRT_DTE <= TO_DATE ('20091231', 'YYYYMMDD')
) where end_dte = min_dt;


or if end_dte is unique within agr_id
select * from (
select a.*, row_number() over (partition by agr_id order by end_dte) rn
  from agreement a
 where end_dte > TO_DATE ('20081201', 'YYYYMMDD')
   and STRT_DTE <= TO_DATE ('20091231', 'YYYYMMDD')
) where rn = 1;

Thanks, this is very useful.

Reader, June 17, 2009 - 5:21 am UTC


Tuning the delete statement

Karthi, July 09, 2009 - 2:24 am UTC

Hi Tom,

I am trying to tune the below delete statement.

Current Script:

DELETE FROM CUSTOMER_ORDER WHERE (NAME = :B1 OR NAME IS NULL)

Plan:

DELETE STATEMENT CHOOSE Cost: 220 Bytes: 10 Cardinality: 1
2 DELETE Test.CUSTOMER_ORDER
1 TABLE ACCESS FULL Test.CUSTOMER_ORDER Cost: 220 Bytes: 10 Cardinality:

Index on the NAME field:

I have created the index on the name field.

create index NAME_IDX on CUSTOMER_ORDER(NAME)

Modified Script:

I have created the index on the name field, but the statement still goes for the FULL table scan. The reason is that the 'WHERE' clause has an 'OR' condition.

So I have divided the delete statement into two parts.

Part 1:
Part 1 works fine and uses the index:

delete FROM CUSTOMER_ORDER WHERE NAME = :B1

DELETE STATEMENT CHOOSE Cost: 1 Bytes: 10 Cardinality: 1
2 DELETE Test.CUSTOMER_ORDER
1 INDEX RANGE SCAN NON-UNIQUE Test.NAME_IDX Cost: 1 Bytes: 10 Cardinality: 1


Part 2:
Part 2 still goes for the full table scan, because it checks for the NULL value:

delete FROM CUSTOMER_ORDER WHERE NAME IS NULL

DELETE STATEMENT CHOOSE Cost: 220 Bytes: 10 Cardinality: 1
2 DELETE Test.CUSTOMER_ORDER
1 TABLE ACCESS FULL Test.CUSTOMER_ORDER Cost: 220 Bytes: 10 Cardinality: 1

Could you please provide some guidance to tune this query.

Regards,
Karthi


Tom Kyte
July 14, 2009 - 2:10 pm UTC

... The reason is the 'WHERE' clause is having the 'OR' condition ...

not entirely, it is not.

The reason is that NAME is nullable, the index is only on NAME, and entirely null keys are NOT entered into b*tree indexes.

http://asktom.oracle.com/Misc/mull-about-null.html
http://asktom.oracle.com/Misc/something-about-nothing.html


ops$tkyte%ORA10GR2> create table t
  2  as
  3  select object_name name, a.*
  4    from all_objects a
  5  /

Table created.

ops$tkyte%ORA10GR2> alter table t modify name null;

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index t_idx on t(name);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable b1 varchar2(30)
ops$tkyte%ORA10GR2> exec :b1 := 'T'

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set serveroutput off
ops$tkyte%ORA10GR2> delete from t
  2  where rowid in
  3  (select rowid from t where name = :b1
  4   union all
  5   select rowid from t where name is null
  6  );

2 rows deleted.

ops$tkyte%ORA10GR2> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7tqunp4j4x42j, child number 0
-------------------------------------
delete from t where rowid in (select rowid from t where name = :b1  union all
select rowid from t where name is null )

Plan hash value: 4195076662

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT             |          |       |       |    67 (100)|          |
|   1 |  DELETE                      | T        |       |       |            |          |
|   2 |   NESTED LOOPS               |          |  1496 | 73304 |    67   (2)| 00:00:01 |
|   3 |    VIEW                      | VW_NSO_1 |     3 |    36 |    64   (2)| 00:00:01 |
|   4 |     SORT UNIQUE              |          |     3 |   111 |    64  (99)| 00:00:01 |
|   5 |      UNION-ALL               |          |       |       |            |          |
|*  6 |       INDEX RANGE SCAN       | T_IDX    |     2 |    74 |     1   (0)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL      | T        |     1 |    37 |    63   (2)| 00:00:01 |
|   8 |    TABLE ACCESS BY USER ROWID| T        |   499 | 18463 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("NAME"=:B1)
   7 - filter("NAME" IS NULL)


27 rows selected.

ops$tkyte%ORA10GR2> rollback;

Rollback complete.

ops$tkyte%ORA10GR2> drop index t_idx;

Index dropped.

ops$tkyte%ORA10GR2> create index t_idx on t(name,0);

Index created.

ops$tkyte%ORA10GR2> delete from t
  2  where rowid in
  3  (select rowid from t where name = :b1
  4   union all
  5   select rowid from t where name is null
  6  );

2 rows deleted.

ops$tkyte%ORA10GR2> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  7tqunp4j4x42j, child number 0
-------------------------------------
delete from t where rowid in (select rowid from t where name = :b1  union all
select rowid from t where name is null )

Plan hash value: 954215705

-----------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | DELETE STATEMENT             |          |       |       |     6 (100)|          |
|   1 |  DELETE                      | T        |       |       |            |          |
|   2 |   NESTED LOOPS               |          |  1496 | 73304 |     6   (0)| 00:00:01 |
|   3 |    VIEW                      | VW_NSO_1 |     3 |    36 |     3   (0)| 00:00:01 |
|   4 |     SORT UNIQUE              |          |     3 |   111 |     3  (34)| 00:00:01 |
|   5 |      UNION-ALL               |          |       |       |            |          |
|*  6 |       INDEX RANGE SCAN       | T_IDX    |     2 |    74 |     2   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN       | T_IDX    |     1 |    37 |     1   (0)| 00:00:01 |
|   8 |    TABLE ACCESS BY USER ROWID| T        |   499 | 18463 |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("NAME"=:B1)
   7 - access("NAME" IS NULL)


27 rows selected.

Tuning the delete statement

Karthi, July 21, 2009 - 9:59 am UTC

Thanks a lot, Tom. It is working fine.

help needed

Ashwani Singh, August 04, 2009 - 1:04 am UTC

Hi Tom,
Please help us identify the bottlenecks in the following query.
INSERT INTO
bi_lab_trans_fact
    (uni_project_id         ,
    event_num         ,
    labor_Invoice_Amount ,
    labor_revenue_Amount ,
    proj_labor_Inv_Amt     ,
    proj_labor_rev_Amt    ,
    project_currency_code,
    line_num         ,
    company_id         ,
    cost_center_id         ,
    currency_id         ,
    account_id         ,
    oracle_office_id ,
    event_type         ,
    event_classification_type,
    gl_date_id         ,
    rec_start_date        ,
    uni_client_id         ,
    uni_solution_id     ,
    transfer_status_code ,
    accrue_through_date ,
    bill_through_date ,
    released_date     ,
    transferred_date,
    Type             ,
    task_Number     ,
    Labor_Comments    ,
     labor_rev_variables ,
     labor_rev_attributes ,
    uni_person_id,
    event_id,
    end_of_cal_month_id,
    request_id
    )
SELECT DISTINCT
    proj.uni_project_id
    ,pe.event_num
    ,NULL
    ,pe.projfunc_revenue_amount
    ,NULL
    ,pe.project_revenue_amount
    ,pe.project_currency_code
    ,pdra.draft_revenue_num
    ,comp.company_id
    ,ccd.cost_center_id
    ,cd.currency_id
    ,ad.account_id
    ,off.oracle_office_id
    ,pet.event_type
    ,pet.event_type_classification
    ,dd.date_id
    ,pe.last_update_date
    ,proj.uni_client_id
    ,proj.uni_solution_id
    ,pdra.transfer_status_code
    ,accrue_through_date
    ,NULL
    ,released_date
    ,transferred_date
    ,'LABOR_REVENUE'
    ,task_Number
    ,pe.description
     ,pe.attribute1||','
    ,pe.attribute2
    ,upi.uni_person_id
    ,pe.event_id
    ,dd1.date_id
    ,pe.request_id
FROM
    pa_draft_revenues_all pdra,
    pa_cust_event_rdl_all pce,
    pa_events pe,
    pa_tasks pt,
    pa_expenditure_items_all peia,
    pa_expenditures_all pea,
    sqlldrusr.sape_expenditure_event_mapping seem,
    pa_projects_all ora_proj,
    proj_dim proj,
    date_dim dd,
    date_dim dd1,
    gl_sets_of_books book,
    hr_organization_information org,
    currency_dim cd,
    gl_code_combinations gcc,
    company_dim comp,
    cost_center_dim ccd,
    account_dim ad,
    ora_office_dim off,
    pa_event_types pet,
    (SELECT uni_person_id,system_data FROM uni_pers_id WHERE active_yn = 'Y' AND system_id = 1) upi,
    (SELECT * FROM per_all_people_f WHERE  trunc(sysdate)-1 BETWEEN effective_start_date AND effective_end_date) papf
WHERE
    eid_pkg_numeric_functions.fnc_cnv_ora_textpid_to_number(ora_proj.segment1) = proj.pyramid_pid
AND    pdra.project_id = ora_proj.project_id
AND    pdra.project_id = pce.project_id
AND    pdra.draft_revenue_num = pce.draft_revenue_num
AND    pce.project_id = pe.project_id
AND    pce.event_num = pe.event_num
AND    pe.attribute10 = TO_CHAR(seem.event_attribute10(+))
AND     seem.expenditure_item_id = peia.expenditure_item_id(+)
AND     peia.expenditure_id = pea.expenditure_id(+)
AND     pea.incurred_by_person_id = papf.person_id(+)
AND    papf.party_id = upi.system_data(+)
AND    NVL(pe.task_id,0) = NVL(pce.task_id,0)
AND    pe.task_id = pt.task_id(+)
AND    pe.event_type = pet.event_type
AND    pt.task_number in ('LABOR BILLABLE','LABOR NON BILLABLE')
AND    TRUNC(pdra.gl_date) = dd.full_date
AND    dd.date_type = 'FULL'
AND    dd.end_of_cal_month = dd1.full_date
AND    dd1.date_type = 'FULL'
AND    pce.org_id = org.organization_id
AND    org.org_information_context = 'Legal Entity Accounting'
AND    TO_NUMBER(org.org_information1) = book.set_of_books_id
AND    book.currency_code = cd.abbreviation
AND    pce.code_combination_id = gcc.code_combination_id
AND    gcc.segment1 = comp.company_code
AND    gcc.segment2 = ccd.cost_center_code
AND    gcc.segment3 = ad.account_code
AND    gcc.chart_of_accounts_id = ad.chart_of_accounts_id
AND    gcc.segment4 = off.office_code
AND    ((pe.last_update_date >=  trunc(sysdate)-1 AND pe.last_update_date <  trunc(sysdate))
    OR
    (pce.program_update_date >=  trunc(sysdate)-1 AND pce.program_update_date <  trunc(sysdate))
    OR
    (pdra.program_update_date >=  trunc(sysdate)-1 AND pdra.program_update_date <  trunc(sysdate))
    )
    AND
     NOT EXISTS (SELECT 1
         FROM
              bi_lab_trans_fact bi
         WHERE
              bi.uni_project_id = proj.uni_project_id
         AND    bi.event_id = pe.event_id
         AND    bi.line_num = pdra.draft_revenue_num
         AND    bi.type = 'LABOR_REVENUE'
    )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.04       0.03          0          0          0           0
Execute      2     42.75      42.21      25266   10600235       1809           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3     42.80      42.25      25266   10600235       1809           0

Misses in library cache during parse: 1
Optimizer goal: RULE
Parsing user id: 21  (EID)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  SORT UNIQUE 
      0   FILTER  
      0    MERGE JOIN OUTER 
      0     SORT JOIN 
      0      NESTED LOOPS OUTER 
      0       NESTED LOOPS OUTER 
      0        NESTED LOOPS OUTER 
      0         MERGE JOIN OUTER 
      0          SORT JOIN 
      0           TABLE ACCESS BY INDEX ROWID PROJ_DIM 
      1            NESTED LOOPS  
      0             NESTED LOOPS  
      0              NESTED LOOPS  
      0               NESTED LOOPS  
      0                NESTED LOOPS  
 228819                 NESTED LOOPS  
 228819                  NESTED LOOPS  
 228819                   NESTED LOOPS  
 228819                    NESTED LOOPS  
 228819                     NESTED LOOPS  
 228819                      NESTED LOOPS  
 228819                       NESTED LOOPS  
 228819                        NESTED LOOPS  
 228819                         NESTED LOOPS  
 448179                          FILTER  
 488210                           NESTED LOOPS OUTER 
 488210                            MERGE JOIN  
     45                             SORT JOIN 
     45                              TABLE ACCESS FULL PA_EVENT_TYPES 
 488210                             SORT JOIN 
 488210                              TABLE ACCESS FULL PA_EVENTS 
 462072                            TABLE ACCESS BY INDEX ROWID PA_TASKS 
 462072                             INDEX RANGE SCAN PA_TASKS_IND1 (object id 439317)
 228819                          TABLE ACCESS BY INDEX ROWID PA_CUST_EVENT_RDL_ALL 
 230576                           INDEX RANGE SCAN PA_CUST_EVENT_RDL_ALL_IND1 (object id 439301)
 228819                         TABLE ACCESS BY INDEX ROWID GL_CODE_COMBINATIONS 
 228819                          INDEX RANGE SCAN GL_CODE_COMBINATION_IND1 (object id 441407)
 228819                        TABLE ACCESS BY INDEX ROWID ORA_OFFICE_DIM 
 228819                         INDEX RANGE SCAN ORA_OFFICE_DIM_IND1 (object id 418257)
 228819                       TABLE ACCESS BY INDEX ROWID ACCOUNT_DIM 
 686285                        INDEX RANGE SCAN ACCOUNT_DIM_IND1 (object id 417684)
 228819                      TABLE ACCESS BY INDEX ROWID COST_CENTER_DIM 
 228819                       INDEX RANGE SCAN COST_CENTER_DIM_IND1 (object id 417926)
 228819                     TABLE ACCESS BY INDEX ROWID COMPANY_DIM 
 228819                      INDEX RANGE SCAN COMPANY_DIM_IND1 (object id 417907)
 228819                    TABLE ACCESS BY INDEX ROWID HR_ORGANIZATION_INFORMATION 
2535695                     INDEX RANGE SCAN HR_ORGANIZATION_INFO_IND1 (object id 441478)
 228819                   TABLE ACCESS BY INDEX ROWID GL_SETS_OF_BOOKS 
 228819                    INDEX RANGE SCAN GL_SETS_OF_BOOKS_IND1 (object id 441405)
 228819                  TABLE ACCESS BY INDEX ROWID CURRENCY_DIM 
 228819                   INDEX RANGE SCAN CURRENCY_DIM_IND1 (object id 417946)
      0                 TABLE ACCESS BY INDEX ROWID PA_DRAFT_REVENUES_ALL 
 228819                  INDEX RANGE SCAN PA_DRAFT_REVENUES_ALL_IND1 (object id 439311)
      0                TABLE ACCESS BY INDEX ROWID DATE_DIM 
      0                 INDEX RANGE SCAN DATE_DIM_IND1 (object id 417959)
      0               TABLE ACCESS BY INDEX ROWID DATE_DIM 
      0                INDEX RANGE SCAN DATE_DIM_IND1 (object id 417959)
      0              TABLE ACCESS BY INDEX ROWID PA_PROJECTS_ALL 
      0               INDEX RANGE SCAN PA_PROJECTS_IND1 (object id 439315)
      0             INDEX RANGE SCAN PROJ_DIM_IND1 (object id 418398)
      0          SORT JOIN 
      0           TABLE ACCESS FULL SAPE_EXPENDITURE_EVENT_MAPPING 
      0         TABLE ACCESS BY INDEX ROWID PA_EXPENDITURE_ITEMS_ALL 
      0          INDEX RANGE SCAN PA_EXPEN_ITEM_ALL_IND (object id 441396)
      0        TABLE ACCESS BY INDEX ROWID PA_EXPENDITURES_ALL 
      0         INDEX RANGE SCAN PA_EXP_ALL_IND (object id 441393)
      0       TABLE ACCESS BY INDEX ROWID PER_ALL_PEOPLE_F 
      0        INDEX RANGE SCAN PER_ALL_PEOPLE_F_IND2 (object id 441377)
      0     SORT JOIN 
      0      TABLE ACCESS FULL UNI_PERS_ID 
      0    INDEX UNIQUE SCAN BI_LAB_TRANS_FACT_IND4 (object id 421671)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT   GOAL: RULE
      0   SORT (UNIQUE)
      0    FILTER
      0     MERGE JOIN (OUTER)
      0      SORT (JOIN)
      0       NESTED LOOPS (OUTER)
      0        NESTED LOOPS (OUTER)
      0         NESTED LOOPS (OUTER)
      0          MERGE JOIN (OUTER)
      0           SORT (JOIN)
      0            TABLE ACCESS (BY INDEX ROWID) OF 'PROJ_DIM'
      1             NESTED LOOPS
      0              NESTED LOOPS
      0               NESTED LOOPS
      0                NESTED LOOPS
      0                 NESTED LOOPS
 228819                  NESTED LOOPS
 228819                   NESTED LOOPS
 228819                    NESTED LOOPS
 228819                     NESTED LOOPS
 228819                      NESTED LOOPS
 228819                       NESTED LOOPS
 228819                        NESTED LOOPS
 228819                         NESTED LOOPS
 228819                          NESTED LOOPS
 448179                           FILTER
 488210                            NESTED LOOPS (OUTER)
 488210                             MERGE JOIN
     45                              SORT (JOIN)
     45                               TABLE ACCESS (FULL) OF 'PA_EVENT_TYPES'
 488210                              SORT (JOIN)
 488210                               TABLE ACCESS (FULL) OF 'PA_EVENTS'
 462072                             TABLE ACCESS (BY INDEX ROWID) OF 'PA_TASKS'
 462072                              INDEX (RANGE SCAN) OF 'PA_TASKS_IND1' (NON-UNIQUE)
 228819                           TABLE ACCESS (BY INDEX ROWID) OF 'PA_CUST_EVENT_RDL_ALL'
 230576                            INDEX (RANGE SCAN) OF 'PA_CUST_EVENT_RDL_ALL_IND1' (NON-UNIQUE)
 228819                          TABLE ACCESS (BY INDEX ROWID) OF 'GL_CODE_COMBINATIONS'
 228819                           INDEX (RANGE SCAN) OF 'GL_CODE_COMBINATION_IND1' (NON-UNIQUE)
 228819                         TABLE ACCESS (BY INDEX ROWID) OF 'ORA_OFFICE_DIM'
 228819                          INDEX (RANGE SCAN) OF 'ORA_OFFICE_DIM_IND1' (NON-UNIQUE)
 228819                        TABLE ACCESS (BY INDEX ROWID) OF 'ACCOUNT_DIM'
 686285                         INDEX (RANGE SCAN) OF 'ACCOUNT_DIM_IND1' (NON-UNIQUE)
 228819                       TABLE ACCESS (BY INDEX ROWID) OF 'COST_CENTER_DIM'
 228819                        INDEX (RANGE SCAN) OF 'COST_CENTER_DIM_IND1' (NON-UNIQUE)
 228819                      TABLE ACCESS (BY INDEX ROWID) OF 'COMPANY_DIM'
 228819                       INDEX (RANGE SCAN) OF 'COMPANY_DIM_IND1' (NON-UNIQUE)
 228819                     TABLE ACCESS (BY INDEX ROWID) OF 'HR_ORGANIZATION_INFORMATION'
2535695                      INDEX (RANGE SCAN) OF 'HR_ORGANIZATION_INFO_IND1' (NON-UNIQUE)
 228819                    TABLE ACCESS (BY INDEX ROWID) OF 'GL_SETS_OF_BOOKS'
 228819                     INDEX (RANGE SCAN) OF 'GL_SETS_OF_BOOKS_IND1' (NON-UNIQUE)
 228819                   TABLE ACCESS (BY INDEX ROWID) OF 'CURRENCY_DIM'
 228819                    INDEX (RANGE SCAN) OF 'CURRENCY_DIM_IND1' (NON-UNIQUE)
      0                  TABLE ACCESS (BY INDEX ROWID) OF 'PA_DRAFT_REVENUES_ALL'
 228819                   INDEX (RANGE SCAN) OF 'PA_DRAFT_REVENUES_ALL_IND1' (NON-UNIQUE)
      0                 TABLE ACCESS (BY INDEX ROWID) OF 'DATE_DIM'
      0                  INDEX (RANGE SCAN) OF 'DATE_DIM_IND1' (NON-UNIQUE)
      0                TABLE ACCESS (BY INDEX ROWID) OF 'DATE_DIM'
      0                 INDEX (RANGE SCAN) OF 'DATE_DIM_IND1' (NON-UNIQUE)
      0               TABLE ACCESS (BY INDEX ROWID) OF 'PA_PROJECTS_ALL'
      0                INDEX (RANGE SCAN) OF 'PA_PROJECTS_IND1' (NON-UNIQUE)
      0              INDEX (RANGE SCAN) OF 'PROJ_DIM_IND1' (NON-UNIQUE)
      0           SORT (JOIN)
      0            TABLE ACCESS (FULL) OF 'SAPE_EXPENDITURE_EVENT_MAPPING'
      0          TABLE ACCESS (BY INDEX ROWID) OF 'PA_EXPENDITURE_ITEMS_ALL'
      0           INDEX (RANGE SCAN) OF 'PA_EXPEN_ITEM_ALL_IND' (NON-UNIQUE)
      0         TABLE ACCESS (BY INDEX ROWID) OF 'PA_EXPENDITURES_ALL'
      0          INDEX (RANGE SCAN) OF 'PA_EXP_ALL_IND' (NON-UNIQUE)
      0        TABLE ACCESS (BY INDEX ROWID) OF 'PER_ALL_PEOPLE_F'
      0         INDEX (RANGE SCAN) OF 'PER_ALL_PEOPLE_F_IND2' (NON-UNIQUE)
      0      SORT (JOIN)
      0       TABLE ACCESS (FULL) OF 'UNI_PERS_ID'
      0     INDEX (UNIQUE SCAN) OF 'BI_LAB_TRANS_FACT_IND4' (UNIQUE)

Nested loop and hash join tendency towards SQL query format

A reader, September 30, 2009 - 10:32 am UTC

I have the following two queries whose output is the same. Both queries work well and use a hash join in 4 out of 5 environments. All environments are similar; there could be differences in the data.

My question is that in one of the environments, the first query uses a nested loop and the second one uses a hash join. The performance of the hash join is good and the nested loop is the worst.

So please tell me the reason behind this, and does it also mean that we should always go with the first query's syntax, since it always works across every environment?

Here is the query:

select nvl(GRP_GROUP_NME, 'NONE') ---into v_portfolio_nme
from G002_GRPS_MEMBRSHP d1,gnvods_stg_taxlot_rsl i
where MBR_ACCOUNT_NO = i.acct_id
and grp_group_typ = 'BO' --and d1.buss_dt='04-Sep-2009'
and run_id=4 and subtask_id=0

select
( select distinct (nvl(GRP_GROUP_NME, 'NONE')) from
G002_GRPS_MEMBRSHP where grp_group_typ = 'BO' and
MBR_ACCOUNT_NO=r.acct_id)
FROM gnvods_stg_taxlot_rsl r
where run_id=4 and subtask_id=0
Tom Kyte
October 07, 2009 - 8:45 am UTC

... there could
be difference in data.
...

then please - please - do not expect the plans to be similar.


look at the plans, they have the estimated cardinalities. Are they different in the different environments? (rhetorical question, of course they are).
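For readers who want to make that comparison, here is a minimal sketch using the poster's first query (table and column names come from the post above): explain the statement in each environment and put the plans side by side.

```sql
-- Run this in each environment and compare the Rows column (the
-- optimizer's cardinality estimate) step by step.
EXPLAIN PLAN FOR
  SELECT NVL(grp_group_nme, 'NONE')
    FROM g002_grps_membrshp d1, gnvods_stg_taxlot_rsl i
   WHERE mbr_account_no = i.acct_id
     AND grp_group_typ = 'BO'
     AND run_id = 4
     AND subtask_id = 0;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Different data means different statistics, which means different estimates - and therefore, quite legitimately, different plans.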



A reader, October 22, 2009 - 8:48 am UTC

Tom,

I have a query running on two tables: DEFECTS (1,386,854 rows; indexes on CREATE_DATE and CBSP) and RPC (164,798 rows; index on CBSP_NO).

I read somewhere that indexes are not used if the data being retrieved is more than 75% of the total table size, and the same is indicated by the explain plan. Hence, for a huge date range, my query takes an hour to execute. Can you please guide me on how to improve its performance?

Also, explain plan says "dynamic sampling used for this statement".

EXPLAIN PLAN SET STATEMENT_ID = 'aa' FOR
SELECT CBSP_CN,
RPC_CN,
PR_CN,
PV_CN,
PL_CN,
RESOLVED,
TOTAL
FROM RPC, DEFECTS
WHERE DEFECTS.CREAT_DATE BETWEEN
to_date('20000101','yyyymmdd') AND to_date('20091021','yyyymmdd') AND RPC.CBSP_NO = DEFECTS.CBSP;

-------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                                |  1545K|  1967M|       | 20369   (1)| 00:04:46 |
|*  1 |  HASH JOIN         |                                |  1545K|  1967M|   166M| 20369   (1)| 00:04:46 |
|   2 |   TABLE ACCESS FULL| PBI_VIEW_RPC                   |   147K|   164M|       |   778   (2)| 00:00:11 |
|*  3 |   TABLE ACCESS FULL| METADATA_RATIO_RESOLVE_DEFECTS |  1545K|   243M|       |  1315   (4)| 00:00:19 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("PBI_VIEW_RPC"."CBSP_NO"="METADATA_RATIO_RESOLVE_DEFECTS"."CBSP")
3 - filter("METADATA_RATIO_RESOLVE_DEFECTS"."CREAT_DATE">=TO_DATE(' 2000-01-01 00:00:00',
'syyyy-mm-dd hh24:mi:ss') AND "METADATA_RATIO_RESOLVE_DEFECTS"."CREAT_DATE"<=TO_DATE(' 2009-10-21
00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

Note
-----
- dynamic sampling used for this statement

22 rows selected.
Tom Kyte
October 23, 2009 - 1:32 pm UTC

... I read somewhere that indexes are not used if the data being retrieved is more than 75%
of the total table size ...

stop reading that site, an index may be used to return every row in a table - 100% of them.

or we might stop using an index if you try to access 0.5% of the data in the table.

then again, we might use an index to get 10% of the data from that same table under different circumstances.


In short: it depends.


If that query takes an hour to execute either

a) your pga settings are way way way way too small and you are doing multi-pass workarea temp space things over and over and over

b) you have the world's slowest disk

c) your system is already heavily loaded


the query looks perfect from where I'm sitting. Get 147k rows from PBI_VIEW_RPC, hash that into memory. Then start full scanning METADATA_RATIO_RESOLVE_DEFECTS to find about 1.5 million records (the entire table) and join that to the hash table in memory and return the row to the client.
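To check hypothesis (a), here is a sketch (assuming you can query the V$ views) that shows how many hash/sort work areas have had to spill to temp:

```sql
-- optimal = done entirely in memory; onepass/multipass = spilled to temp.
-- Many multi-pass executions suggest the PGA work areas are undersized.
SELECT low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
  FROM v$sql_workarea_histogram
 WHERE total_executions <> 0;
```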


dynamic sampling in this case means one or both of your tables do not have statistics.
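If that is the case, a sketch of the fix (the table name is taken from the poster's plan; that it lives in the current schema is an assumption) is simply to gather statistics:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => USER,   -- assumption: table is in the current schema
    tabname          => 'METADATA_RATIO_RESOLVE_DEFECTS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- gather index statistics too
END;
/
```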



trace that query for us and post a tkprof showing it taking an hour - use 10046 level 12 tracing.
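For reference, one common way to produce that trace is the following sketch (the tracefile identifier is arbitrary):

```sql
ALTER SESSION SET tracefile_identifier = 'slow_join';
ALTER SESSION SET events '10046 trace name context forever, level 12';

-- ... run the slow query here ...

ALTER SESSION SET events '10046 trace name context off';

-- then, from the OS shell, format the resulting trace file:
--   tkprof <trace_file_name>.trc report.txt sys=no
```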

Query Performance

A reader, November 16, 2009 - 3:37 am UTC

Hi,
Following is my query which needs to be tuned. The exp_cube table has around 5 billion records; ers_cube_query and country are very small tables, and security contains around 1 million records. This query takes around 5-6 minutes in production.



SELECT /*+ INDEX(ec) ORDERED */
count(*)
FROM ers_cube_query ecq,
exp_cube ec,
country c
WHERE ecq.derive_csa_from IN ( 'COUNTRY', 'PRE_PL')
AND ecq.cube_agg_code = ec.agg_code
AND ec.exp_seq_id = 105329
AND ec.part_no = 1
AND ec.product = DECODE(ec.agg_code, 'MTM', 'IR_DID', ec.product)
AND ec.country = c.rpl_name
AND c.ers_flag = 'Y'
UNION ALL
SELECT /*+ INDEX(ec) */
count(*)
FROM exp_cube ec,
security s,
country c,
ers_cube_query ecq
WHERE ec.exp_seq_id = 105329
AND ec.part_no = 1
AND ec.ccy_1 = s.sec_code
AND s.country_code = c.country_code
AND c.ers_flag = 'Y'
AND ec.product = DECODE(ec.agg_code, 'MTM', 'IR_DID', ec.product)
AND ecq.cube_agg_code = ec.agg_code
AND ecq.derive_csa_from = 'CURRENCY'

Below is the explain plan for this query.

Execution Plan
----------------------------------------------------------
Plan hash value: 2572815200

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |                   |     2 |   145 |    18  (56)| 00:00:01 |       |       |
|   1 |  UNION-ALL                                 |                   |       |       |            |          |       |       |
|   2 |   SORT AGGREGATE                           |                   |     1 |    69 |            |          |       |       |
|*  3 |    TABLE ACCESS BY INDEX ROWID             | COUNTRY           |     1 |    13 |     1   (0)| 00:00:01 |       |       |
|   4 |     NESTED LOOPS                           |                   |     1 |    69 |     9  (12)| 00:00:01 |       |       |
|*  5 |      HASH JOIN                             |                   |     1 |    56 |     8  (13)| 00:00:01 |       |       |
|*  6 |       TABLE ACCESS FULL                    | ERS_CUBE_QUERY    |     7 |   140 |     2   (0)| 00:00:01 |       |       |
|   7 |       PARTITION RANGE SINGLE               |                   |     1 |    36 |     5   (0)| 00:00:01 |     1 |     1 |
|   8 |        PARTITION HASH SINGLE               |                   |     1 |    36 |     5   (0)| 00:00:01 |     1 |     1 |
|*  9 |         TABLE ACCESS BY LOCAL INDEX ROWID  | EXP_CUBE          |     1 |    36 |     5   (0)| 00:00:01 |     1 |     1 |
|* 10 |          INDEX RANGE SCAN                  | EXP_CUBE_IDX      |     1 |       |     4   (0)| 00:00:01 |     1 |     1 |
|* 11 |      INDEX RANGE SCAN                      | COUNTRY_NU01      |     1 |       |     0   (0)| 00:00:01 |       |       |
|  12 |   SORT AGGREGATE                           |                   |     1 |    76 |            |          |       |       |
|  13 |    NESTED LOOPS                            |                   |     1 |    76 |     9   (0)| 00:00:01 |       |       |
|  14 |     NESTED LOOPS                           |                   |     1 |    71 |     8   (0)| 00:00:01 |       |       |
|  15 |      NESTED LOOPS                          |                   |     1 |    56 |     6   (0)| 00:00:01 |       |       |
|  16 |       PARTITION RANGE SINGLE               |                   |     1 |    36 |     5   (0)| 00:00:01 |     1 |     1 |
|  17 |        PARTITION HASH SINGLE               |                   |     1 |    36 |     5   (0)| 00:00:01 |     1 |     1 |
|* 18 |         TABLE ACCESS BY LOCAL INDEX ROWID  | EXP_CUBE          |     1 |    36 |     5   (0)| 00:00:01 |     1 |     1 |
|* 19 |          INDEX RANGE SCAN                  | EXP_CUBE_IDX      |     1 |       |     4   (0)| 00:00:01 |     1 |     1 |
|* 20 |       TABLE ACCESS BY INDEX ROWID          | ERS_CUBE_QUERY    |     1 |    20 |     1   (0)| 00:00:01 |       |       |
|* 21 |        INDEX UNIQUE SCAN                   | ERS_CUBE_QUERY_PK |     1 |       |     0   (0)| 00:00:01 |       |       |
|* 22 |      TABLE ACCESS BY INDEX ROWID           | SECURITY          |     1 |    15 |     2   (0)| 00:00:01 |       |       |
|* 23 |       INDEX UNIQUE SCAN                    | SEC_UK            |     1 |       |     1   (0)| 00:00:01 |       |       |
|* 24 |     TABLE ACCESS BY INDEX ROWID            | COUNTRY           |     1 |     5 |     1   (0)| 00:00:01 |       |       |
|* 25 |      INDEX UNIQUE SCAN                     | COU_PK            |     1 |       |     0   (0)| 00:00:01 |       |       |
----------------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("C"."ERS_FLAG"='Y')
   5 - access("ECQ"."CUBE_AGG_CODE"="EC"."AGG_CODE")
   6 - filter("ECQ"."DERIVE_CSA_FROM"='COUNTRY' OR "ECQ"."DERIVE_CSA_FROM"='PRE_PL')
   9 - filter("EC"."PRODUCT"=DECODE("EC"."AGG_CODE",'MTM','IR_DID',"EC"."PRODUCT"))
  10 - access("EC"."PART_NO"=1 AND "EC"."EXP_SEQ_ID"=105987)
  11 - access("EC"."COUNTRY"="C"."RPL_NAME")
  18 - filter("EC"."PRODUCT"=DECODE("EC"."AGG_CODE",'MTM','IR_DID',"EC"."PRODUCT"))
  19 - access("EC"."PART_NO"=1 AND "EC"."EXP_SEQ_ID"=105987)
  20 - filter("ECQ"."DERIVE_CSA_FROM"='CURRENCY')
  21 - access("ECQ"."CUBE_AGG_CODE"="EC"."AGG_CODE")
  22 - filter("S"."COUNTRY_CODE" IS NOT NULL)
  23 - access("EC"."CCY_1"="S"."SEC_CODE")
  24 - filter("C"."ERS_FLAG"='Y')
  25 - access("S"."COUNTRY_CODE"="C"."COUNTRY_CODE")


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
7214849 consistent gets
655688 physical reads
0 redo size
227 bytes sent via SQL*Net to client
247 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed
a) What are the possibilities to tune this query?
b) How can we evaluate the exact timing of the query? Sometimes running the query a second time
reduces the timing: the above query takes much longer the first time I run it, but less time on the
second run. So how can we know the true timing of the query execution?

Query Performance

A Reader, November 24, 2009 - 6:25 am UTC

Hi Tom,
Is there anything we can do for the above question? This is very urgent.
Can we merge these UNION ALL branches, or is there any other possibility to tune the above query?
Waiting for your earliest response.


Tom Kyte
November 24, 2009 - 11:53 am UTC

I know nothing about your data
I know nothing about your physical schema
I see a really large plan
I see a really large plan with very small row count estimates - but given the amount of work the query actually does - I *know* those estimates must be way off - which leads me to believe your statistics are really wrong (stale, missing, incomplete, whatever)
I see a fairly large query

I see it counting rows
I see you using hints (bad, unless you have some reason to do so, did you, do you?)


I would tune this by not counting rows, it is exceedingly rare where you actually need to count rows. why are you counting rows??? What do you do with this number.

The fastest way to do something is to not do it in the first place.

tuning

A reader, November 24, 2009 - 12:05 pm UTC


Why count rows

Martin, November 24, 2009 - 9:23 pm UTC

I can think of a few.
Count of items sold
Count of items returned faulty
Count of items in stock
Count of items manufactured
Head count


Tom Kyte
November 25, 2009 - 12:14 pm UTC

tell me why you actually need them.

are you counting items sold in your code in order to find out if there are items that were sold and need processing (99.9999% of the time, that is why people count). If so, stop counting and just start processing, if there isn't anything - your code will stop anyway.

are you counting items returned in order to find out.....

are you counting stock items.....



And if you frequently need those values - you would not count them over and over and over. Most people (self included; look at my home page - the numbers at the top showing how many of this, that or the other thing are materialized views maintained in the background - better than good enough) would compute them once and use them over and over, hence you don't really need to count again. That accounts for most of the remaining cases.
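The "count once in the background, reuse many times" idea can be sketched with a fast-refreshable materialized view. ITEMS_SOLD and its STATUS column are hypothetical names for illustration:

```sql
-- Materialized view log so the MV can be fast-refreshed on commit.
CREATE MATERIALIZED VIEW LOG ON items_sold
  WITH ROWID (status) INCLUDING NEW VALUES;

-- The count is maintained incrementally as transactions commit;
-- readers query the small MV instead of re-counting the base table.
CREATE MATERIALIZED VIEW items_sold_counts
  BUILD IMMEDIATE
  REFRESH FAST ON COMMIT
AS
SELECT status, COUNT(*) AS cnt
  FROM items_sold
 GROUP BY status;
```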

I see your point

Martin, November 24, 2009 - 9:29 pm UTC

Most of those would be a sum

Query Performance

A reader, November 25, 2009 - 12:33 am UTC

Hi Tom,
I removed the column names - sorry for that. Now this is the full query. This query returns approximately 300,000 (3 lacs) records. exp_cube is a very large table, last analyzed a year ago. It is a very crucial table used all over, so I am worried that re-analyzing it might impact other queries as well; that is why I am trying to tune this query instead.

Waiting for your earliest response.

WITH
elu1 AS ( SELECT /*+ MATERIALIZE */ DISTINCT elu1.look_up_value, c.rpl_name
FROM epc_look_up elu1,
security s,
country c
WHERE look_up_code LIKE 'ERS_ONSHORE_CUR%'
AND elu1.type_code = 'ERS'
AND elu1.look_up_value = s.sec_short_code
AND s.country_code = c.country_code
AND c.ers_flag = 'Y'
)
SELECT /*+ INDEX(ec) ORDERED */
ec.exp_seq_id,ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class, ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date,ec.gpc_book,ec.product,ec.country,ec.rating,ec.ccy_1,ec.ccy_2,REPLACE(ec.tenor1,'0D','1D'),ec.tenor2,ec.tenor3,ec.sector,ec.agg_code,
ec.risk_amount,ec.source_system,ec.trigger_file,ec.merger_flag,ec.ccdb_issuer,ec.issuer_name,ec.mkt_index,ec.ccdb_id,
ec.buba_id,ec.rr_style_type_code,ec.source_type_code,ec.issuer_code,ec.sc_bm_sec_code,'N/A' as bc_bm_sec_code ,ec.rf_sec_code,
ec.bond_maturity,ec.coupon, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM ers_cube_query ecq,
exp_cube ec,
country c
WHERE ecq.derive_csa_from IN ( 'COUNTRY', 'PRE_PL')
AND ecq.cube_agg_code = ec.agg_code
AND ec.exp_seq_id = 106667
AND ec.part_no = 1
AND ec.product = DECODE(ec.agg_code, 'MTM', 'IR_DID', ec.product)
AND ec.country = c.rpl_name
AND c.ers_flag = 'Y'
UNION ALL
SELECT /*+ INDEX(ec) */
ec.exp_seq_id, ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class, ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date,ec.gpc_book,ec.product,c.rpl_name ,ec.rating,ec.ccy_1,ec.ccy_2,REPLACE(ec.tenor1,'0D','1D'),ec.tenor2,ec.tenor3,ec.sector,ec.agg_code,
ec.risk_amount,ec.source_system,ec.trigger_file,ec.merger_flag,ec.ccdb_issuer,ec.issuer_name,ec.mkt_index,ec.ccdb_id,
ec.buba_id,ec.rr_style_type_code,ec.source_type_code,ec.issuer_code,ec.sc_bm_sec_code,'N/A' as bc_bm_sec_code ,ec.rf_sec_code,
ec.bond_maturity,ec.coupon, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM exp_cube ec,
security s,
country c,
ers_cube_query ecq
WHERE ec.exp_seq_id = 106667
AND ec.part_no = 1
AND ec.ccy_1 = s.sec_code
AND s.country_code = c.country_code
AND c.ers_flag = 'Y'
AND ec.product = DECODE(ec.agg_code, 'MTM', 'IR_DID', ec.product)
AND ecq.cube_agg_code = ec.agg_code
AND ecq.derive_csa_from = 'CURRENCY'
UNION ALL
SELECT /*+ INDEX(ec) ORDERED */
ec.exp_seq_id, ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class, ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date,ec.gpc_book,ec.product,elu1.rpl_name ,ec.rating,ec.ccy_1,ec.ccy_2,REPLACE(ec.tenor1,'0D','1D'),ec.tenor2,ec.tenor3,ec.sector,ec.agg_code,
ec.risk_amount,ec.source_system,ec.trigger_file,ec.merger_flag,ec.ccdb_issuer,ec.issuer_name,ec.mkt_index,ec.ccdb_id,
ec.buba_id,ec.rr_style_type_code,ec.source_type_code,ec.issuer_code,ec.sc_bm_sec_code,'N/A' as bc_bm_sec_code ,ec.rf_sec_code,
ec.bond_maturity,ec.coupon, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM exp_cube ec,
elu1
WHERE ec.exp_seq_id = 106667
AND ec.part_no = 1
AND ec.product = DECODE(ec.agg_code, 'MTM', 'IR_DID', ec.product)
AND ec.agg_code IN (SELECT look_up_value
FROM epc_look_up elu2
WHERE elu2.look_up_code LIKE 'ERS_ONSHORE_AGG%'
AND elu2.type_code = 'ERS'
)
AND ec.ccy_1 = elu1.look_up_value
;

Plan
SELECT STATEMENT CHOOSECost: 29 Bytes: 1,750 Cardinality: 3
42 TEMP TABLE TRANSFORMATION
8 LOAD AS SELECT
7 HASH UNIQUE Cost: 26 Bytes: 118 Cardinality: 2
6 HASH JOIN Cost: 25 Bytes: 118 Cardinality: 2
4 TABLE ACCESS BY INDEX ROWID TABLE GPCPROD.SECURITY Cost: 5 Bytes: 10 Cardinality: 1
3 NESTED LOOPS Cost: 22 Bytes: 172 Cardinality: 4
1 TABLE ACCESS FULL TABLE GPCPROD.EPC_LOOK_UP Cost: 2 Bytes: 132 Cardinality: 4
2 INDEX RANGE SCAN INDEX GPCPROD.SEC_INX Cost: 2 Cardinality: 2
5 TABLE ACCESS FULL TABLE GPCPROD.COUNTRY Cost: 2 Bytes: 2,464 Cardinality: 154
41 UNION-ALL
17 TABLE ACCESS BY INDEX ROWID TABLE GPCPROD.COUNTRY Cost: 1 Bytes: 13 Cardinality: 1
16 NESTED LOOPS Cost: 9 Bytes: 401 Cardinality: 1
14 HASH JOIN Cost: 8 Bytes: 388 Cardinality: 1
9 TABLE ACCESS FULL TABLE GPCPROD.ERS_CUBE_QUERY Cost: 2 Bytes: 140 Cardinality: 7
13 PARTITION RANGE SINGLE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 15 Partitions accessed #1
12 PARTITION HASH SINGLE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 16 Partitions accessed #1
11 TABLE ACCESS BY LOCAL INDEX ROWID TABLE GPCPROD.EXP_CUBE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 17 Partitions accessed #1
10 INDEX RANGE SCAN INDEX GPCPROD.EXP_CUBE_IDX Cost: 4 Cardinality: 1 Partition #: 18 Partitions accessed #1
15 INDEX RANGE SCAN INDEX GPCPROD.COUNTRY_NU01 Cost: 0 Cardinality: 1
30 NESTED LOOPS Cost: 9 Bytes: 419 Cardinality: 1
27 NESTED LOOPS Cost: 8 Bytes: 403 Cardinality: 1
24 NESTED LOOPS Cost: 6 Bytes: 388 Cardinality: 1
21 PARTITION RANGE SINGLE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 23 Partitions accessed #1
20 PARTITION HASH SINGLE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 24 Partitions accessed #1
19 TABLE ACCESS BY LOCAL INDEX ROWID TABLE GPCPROD.EXP_CUBE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 25 Partitions accessed #1
18 INDEX RANGE SCAN INDEX GPCPROD.EXP_CUBE_IDX Cost: 4 Cardinality: 1 Partition #: 26 Partitions accessed #1
23 TABLE ACCESS BY INDEX ROWID TABLE GPCPROD.ERS_CUBE_QUERY Cost: 1 Bytes: 20 Cardinality: 1
22 INDEX UNIQUE SCAN INDEX (UNIQUE) GPCPROD.ERS_CUBE_QUERY_PK Cost: 0 Cardinality: 1
26 TABLE ACCESS BY INDEX ROWID TABLE GPCPROD.SECURITY Cost: 2 Bytes: 15 Cardinality: 1
25 INDEX UNIQUE SCAN INDEX (UNIQUE) GPCPROD.SEC_UK Cost: 1 Cardinality: 1
29 TABLE ACCESS BY INDEX ROWID TABLE GPCPROD.COUNTRY Cost: 1 Bytes: 16 Cardinality: 1
28 INDEX UNIQUE SCAN INDEX (UNIQUE) GPCPROD.COU_PK Cost: 0 Cardinality: 1
40 HASH JOIN Cost: 11 Bytes: 930 Cardinality: 1
37 HASH JOIN Cost: 9 Bytes: 401 Cardinality: 1
32 SORT UNIQUE Cost: 2 Bytes: 132 Cardinality: 4
31 TABLE ACCESS FULL TABLE GPCPROD.EPC_LOOK_UP Cost: 2 Bytes: 132 Cardinality: 4
36 PARTITION RANGE SINGLE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 37 Partitions accessed #1
35 PARTITION HASH SINGLE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 38 Partitions accessed #1
34 TABLE ACCESS BY LOCAL INDEX ROWID TABLE GPCPROD.EXP_CUBE Cost: 5 Bytes: 368 Cardinality: 1 Partition #: 39 Partitions accessed #1
33 INDEX RANGE SCAN INDEX GPCPROD.EXP_CUBE_IDX Cost: 4 Cardinality: 1 Partition #: 40 Partitions accessed #1
39 VIEW GPC_GENIE. Cost: 2 Bytes: 1,058 Cardinality: 2
38 TABLE ACCESS FULL TABLE (TEMP) SYS.SYS_TEMP_0FD9E01E5_6D2984FE Cost: 2 Bytes: 50 Cardinality: 2


Tom Kyte
November 25, 2009 - 12:18 pm UTC

lose all of the hints and ensure that the estimated cardinalities are close to the actuals - if they are not, we cannot get the correct plan, and one of the causes MIGHT BE your very old and perhaps very stale statistics.

If you have the hints in there for a reason, then keep on going - hint it to death; tell it precisely, exactly, verbosely what you want it to do. You have limited the optimizer's set of access paths this way, but not told it what to do precisely. You give incomplete direction here.

I would start by removing that

never ever - NEVER EVER - replace a select list with a count(*)

Not for tuning
Not for "for example"

it completely changes everything


the plan differences between

select * from emp, dept where emp.deptno = dept.deptno

and

select count(*) from the same


are HUGE - the count(*) might not even touch a table! The select * would touch two tables at least.
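You can see the difference Tom describes for yourself - a sketch using the standard SCOTT demo tables EMP and DEPT:

```sql
EXPLAIN PLAN FOR
  SELECT * FROM emp, dept WHERE emp.deptno = dept.deptno;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR
  SELECT COUNT(*) FROM emp, dept WHERE emp.deptno = dept.deptno;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- With EMP.DEPTNO declared NOT NULL and a foreign key to DEPT, the
-- COUNT(*) plan can eliminate the join to DEPT entirely and may read
-- only an index on EMP - a completely different plan.
```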

Query Performance

A reader, November 26, 2009 - 1:32 am UTC

Hi Tom ,
Thanks for the reply. I just wanted to check one thing. You said to check the cardinality - does that mean, for example, that if my query below returns 200,000 (2 lacs) records, like

SELECT /*+ INDEX(ec) ORDERED */
ec.exp_seq_id,ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class, ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date,ec.gpc_book,ec.product,ec.country,ec.rating,ec.ccy_1,ec.ccy_2,REPLACE(ec.tenor1,'0D','1D'),ec.tenor2,ec.tenor3,ec.sector,ec.agg_code,
ec.risk_amount,ec.source_system,ec.trigger_file,ec.merger_flag,ec.ccdb_issuer,ec.issuer_name,ec.mkt_index,ec.ccdb_id,
ec.buba_id,ec.rr_style_type_code,ec.source_type_code,ec.issuer_code,ec.sc_bm_sec_code,'N/A' as bc_bm_sec_code ,ec.rf_sec_code,
ec.bond_maturity,ec.coupon, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM ers_cube_query ecq,
exp_cube ec,
country c
WHERE ecq.derive_csa_from IN ( 'COUNTRY', 'PRE_PL')
AND ecq.cube_agg_code = ec.agg_code
AND ec.exp_seq_id = 106187
AND ec.part_no = 4
AND ec.product = DECODE(ec.agg_code, 'MTM', 'IR_DID', ec.product)
AND ec.country = c.rpl_name
AND c.ers_flag = 'Y'

Plan
SELECT STATEMENT CHOOSECost: 292,648 Bytes: 6,029,870 Cardinality: 14,707
8 HASH JOIN Cost: 292,648 Bytes: 6,029,870 Cardinality: 14,707
1 TABLE ACCESS FULL TABLE GPCPROD.COUNTRY Cost: 3 Bytes: 2,002 Cardinality: 154
7 HASH JOIN Cost: 292,644 Bytes: 2,662,679 Cardinality: 6,707
2 TABLE ACCESS FULL TABLE GPCPROD.ERS_CUBE_QUERY Cost: 2 Bytes: 140 Cardinality: 7
6 PARTITION RANGE SINGLE Cost: 292,640 Bytes: 32,287,411 Cardinality: 85,643 Partition #: 5 Partitions accessed #4
5 PARTITION HASH SINGLE Cost: 292,640 Bytes: 32,287,411 Cardinality: 85,643 Partition #: 6 Partitions accessed #1
4 TABLE ACCESS BY LOCAL INDEX ROWID TABLE GPCPROD.EXP_CUBE Cost: 292,640 Bytes: 32,287,411 Cardinality: 85,643 Partition #: 7 Partitions accessed #25
3 INDEX RANGE SCAN INDEX GPCPROD.EXP_CUBE_IDX Cost: 17,894 Cardinality: 5,052,910 Partition #: 8 Partitions accessed #25


the cardinality of 14,707 should be close to 200,000 (2 lacs)? Or do we have to check at each step whether it reports the correct cardinality, or should we consider only the statement below?

SELECT STATEMENT CHOOSECost: 292,648 Bytes: 6,029,870 Cardinality: 14,707

Tom Kyte
November 28, 2009 - 12:40 pm UTC

what is a lac.


anyway, did you actually check the cardinality? I see a big bunch of basically unreadable information - you should be able to do the check and report back the relevant information in a terse format.



For each step in the plan - are the estimated cardinality values even close to what you observe (eg: compare the explain plan to a tkprof row source operation)
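On 10g and later there is a lighter-weight alternative to tkprof for exactly this check - a sketch against a hypothetical table T:

```sql
-- The hint makes Oracle collect actual per-step row counts at run time.
SELECT /*+ gather_plan_statistics */ *
  FROM t
 WHERE created >= SYSDATE - 7;

-- 'ALLSTATS LAST' prints E-Rows (estimated) next to A-Rows (actual);
-- a large gap marks the step where the optimizer is being misled.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```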


Query Optimization

A reader, November 26, 2009 - 3:31 am UTC

Hi Tom,
I ran this query on the production system. If you look at the explain plan, it does not show a
cardinality column but a Rows column, and it accesses only one row from every table.
How is that possible? The tables contain lots of rows.

SELECT /*+ INDEX(ec) ORDERED */
count(*)
FROM ers_cube_query ecq,
exp_cube ec,
country c
WHERE ecq.derive_csa_from IN ( 'COUNTRY', 'PRE_PL')
AND ecq.cube_agg_code = ec.agg_code
AND ec.exp_seq_id = 106187
AND ec.part_no = 4
AND ec.product = DECODE(ec.agg_code, 'MTM', 'IR_DID', ec.product)
AND c.ers_flag = 'Y'
AND ec.country = c.rpl_name
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------
Plan hash value: 2607312409

----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 388 | 6 (0)| 00:00:01 | | |
| 1 | NESTED LOOPS | | 1 | 388 | 6 (0)| 00:00:01 | | |
| 2 | PARTITION RANGE SINGLE | | 1 | 368 | 5 (0)| 00:00:01 | 1 | 1 |
| 3 | PARTITION HASH SINGLE | | 1 | 368 | 5 (0)| 00:00:01 | 1 | 1 |
|* 4 | TABLE ACCESS BY LOCAL INDEX ROWID| EXP_CUBE | 1 | 368 | 5 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | EXP_CUBE_IDX | 1 | | 4 (0)| 00:00:01 | 1 |

|* 6 | TABLE ACCESS BY INDEX ROWID | ERS_CUBE_QUERY | 1 | 20 | 1 (0)| 00:00:01
|* 7 | INDEX UNIQUE SCAN | ERS_CUBE_QUERY_PK | 1 | | 0 (0)| 00:00:01 | | |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - filter("EC"."PRODUCT"=DECODE("EC"."AGG_CODE",'MTM','IR_DID',"EC"."PRODUCT"))
5 - access("EC"."PART_NO"=1 AND "EC"."EXP_SEQ_ID"=106667)
6 - filter("ECQ"."DERIVE_CSA_FROM"='COUNTRY' OR "ECQ"."DERIVE_CSA_FROM"='PRE_PL')
7 - access("ECQ"."CUBE_AGG_CODE"="EC"."AGG_CODE")

22 rows selected.

Elapsed: 00:00:00.12

Execution Plan
----------------------------------------------------------
Plan hash value: 3013799171

-----------------------------------------------------
| Id | Operation | Name |
-----------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | COLLECTION ITERATOR PICKLER FETCH| DISPLAY |
-----------------------------------------------------

Note
-----
- rule based optimizer used (consider using cbo)


Statistics
----------------------------------------------------------
19 recursive calls
12 db block gets
37 consistent gets
0 physical reads
0 redo size
2275 bytes sent via SQL*Net to client
253 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
22 rows processed
Tom Kyte
November 28, 2009 - 12:46 pm UTC

rows is cardinality. that is the estimated row count.

... - rule based optimizer used (consider using cbo) ...

that is rather interesting. explain that please.



as for the row count = 1, it is saying "given this query, I expect to get 1 row from the index, and then 1 matching row from the table"

The table might have billions of rows, but if you "where primary_key = 5", you will get 1 row. That is the estimated cardinality - 1.

Are the ESTIMATES close to reality?

see this article
https://www.oracle.com/technetwork/issue-archive/2009/09-jan/o19asktom-086775.html

not for dynamic sampling (although that might apply) but for the "estimated cardinality is not close to reality" discussion it has.
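
A common way to compare the optimizer's estimates with the actual row counts is sketched below. This is a generic illustration (not part of the original answer): it assumes 10g or later, a hypothetical table t, and requires running the query once with rowsource statistics collection enabled.

```sql
-- Run the query once with rowsource statistics collection enabled...
select /*+ gather_plan_statistics */ count(*) from t where x = 5;

-- ...then display the last plan, which shows Estimated (E-Rows)
-- next to Actual (A-Rows) for every step.
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Large gaps between E-Rows and A-Rows at a given step are usually where the plan goes wrong.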

explain plan

A reader, December 11, 2009 - 9:30 pm UTC


Explain Plan Question

anon, December 14, 2009 - 5:53 am UTC

Hi Tom,

I have two queries: one takes approximately 1 minute to run and the other between 50 minutes and one hour. Both queries return the same result, but the one that takes less time is a bit more complicated than the other. I have run dbms_monitor.session_trace_enable for both queries and these are the results:

i) Long running query (takes 50-60 mins):

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

BEGIN dbms_monitor.session_trace_enable( waits=>true ); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 16 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 16 0 1

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 89

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************

BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2 0.00 0.00 0 0 0 0
Execute 2 0.00 0.00 0 0 0 2
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 0 0 2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 89

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
SQL*Net message from client 2 2010.31 2034.16
********************************************************************************

select count(*)
FROM src_trn_det trn,
ccdb_cpty_det cpy
WHERE trn.cpty_type_cd = cpy.counterparty_id_src
AND trn.cpty_cd = cpy.counterparty_id
AND trn.trn_link_cd IS NOT NULL
AND trn.cob_dt = TO_DATE ('30-JUN-2008', 'DD-MON-YYYY')
AND trn.run_id = 946804
AND exists (
SELECT null
FROM src_trn_det trn2,
ccdb_cpty_det cpy2
WHERE trn2.cpty_type_cd = cpy2.counterparty_id_src
AND trn2.cpty_cd = cpy2.counterparty_id
AND trn2.cob_dt = trn.cob_dt
AND trn2.cres_sys_id = trn.cres_sys_id
AND trn2.trn_src_cd = trn.trn_src_cd
AND trn2.trn_link_cd = trn.trn_link_cd
and trn2.rowid != trn.rowid
and nvl(cpy2.crms_org_id,-1) != nvl(cpy.crms_org_id,-1)
)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.03 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 2575.58 2797.96 3265 518880791 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 2575.61 2797.99 3265 518880791 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=518880791 pr=3265 pw=0 time=256335263 us)
0 FILTER (cr=518880791 pr=3265 pw=0 time=256335230 us)
61190 NESTED LOOPS (cr=227962 pr=133 pw=0 time=2640509 us)
81153 PARTITION RANGE ALL PARTITION: 1 1 (cr=4464 pr=60 pw=0 time=324749 us)
81153 TABLE ACCESS BY LOCAL INDEX ROWID SRC_TRN_DET PARTITION: 1 1 (cr=4464 pr=60 pw=0 time=324731 us)
81153 INDEX RANGE SCAN SRC_TRN_DET_IDX15 PARTITION: 1 1 (cr=201 pr=25 pw=0 time=81244 us)(object id 11795)
61190 TABLE ACCESS BY INDEX ROWID CCDB_CPTY_DET (cr=223498 pr=73 pw=0 time=1429356 us)
61190 INDEX UNIQUE SCAN CCDB_CPTY_DET_PK (cr=162308 pr=21 pw=0 time=692316 us)(object id 11098)
0 NESTED LOOPS (cr=518652829 pr=3132 pw=0 time=2795336949 us)
164364260 PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=25498945 pr=3132 pw=0 time=823403206 us)
164364260 TABLE ACCESS BY LOCAL INDEX ROWID SRC_TRN_DET PARTITION: KEY KEY (cr=25498945 pr=3132 pw=0 time=658731041 us)
164364260 INDEX RANGE SCAN SRC_TRN_DET_IDX08 PARTITION: KEY KEY (cr=817308 pr=176 pw=0 time=165448137 us)(object id 38519)
0 TABLE ACCESS BY INDEX ROWID CCDB_CPTY_DET (cr=493153884 pr=0 pw=0 time=2059205362 us)
164364260 INDEX UNIQUE SCAN CCDB_CPTY_DET_PK (cr=328789624 pr=0 pw=0 time=1151219537 us)(object id 11098)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 2 0.00 0.00
db file sequential read 3265 0.11 5.70
latch: cache buffers chains 8 0.00 0.00
SQL*Net message from client 2 0.00 0.00
********************************************************************************

BEGIN dbms_monitor.session_trace_disable; END;


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.00 0.00 0 0 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 89



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.03 0.03 0 0 0 0
Execute 5 0.00 0.00 0 16 0 4
Fetch 2 2575.58 2797.96 3265 518880791 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 11 2575.61 2798.00 3265 518880807 0 5

Misses in library cache during parse: 2
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 5 0.00 0.00
SQL*Net message from client 5 2010.31 2034.16
db file sequential read 3265 0.11 5.70
latch: cache buffers chains 8 0.00 0.00


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 0 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0

5 user SQL statements in session.
0 internal SQL statements in session.
5 SQL statements in session.
********************************************************************************
Trace file: cre2frt_ora_22388760.trc
Trace file compatibility: 10.01.00
Sort options: default

1 session in tracefile.
5 user SQL statements in trace file.
0 internal SQL statements in trace file.
5 SQL statements in trace file.
4 unique SQL statements in trace file.
3450 lines in trace file.
13422 elapsed seconds in trace file.

ii) Faster query (takes approximately 1 minute):

select count(*)
FROM src_trn_det trn, ccdb_cpty_det cpy
WHERE trn.cpty_type_cd = cpy.counterparty_id_src
AND trn.cpty_cd = cpy.counterparty_id
AND trn.trn_link_cd IS NOT NULL
AND trn.cob_dt = TO_DATE ('30-JUN-2008', 'DD-MON-YYYY')
AND trn.run_id = 946804
AND EXISTS (
SELECT null
FROM src_trn_det trn2,
ccdb_cpty_det cpy2
WHERE trn2.cpty_type_cd = cpy2.counterparty_id_src
AND trn2.cpty_cd = cpy2.counterparty_id
AND trn2.cob_dt = trn.cob_dt
AND trn2.cres_sys_id = trn.cres_sys_id
AND trn2.trn_src_cd = trn.trn_src_cd
AND trn2.trn_link_cd = trn.trn_link_cd
AND NOT trn2.rowid = trn.rowid
)
AND EXISTS (
SELECT null
FROM src_trn_det trn2,
ccdb_cpty_det cpy2
WHERE trn2.cpty_type_cd = cpy2.counterparty_id_src
AND trn2.cpty_cd = cpy2.counterparty_id
AND trn2.cob_dt = trn.cob_dt
AND trn2.cres_sys_id = trn.cres_sys_id
AND trn2.trn_src_cd = trn.trn_src_cd
AND trn2.trn_link_cd = trn.trn_link_cd
AND NOT(nvl(cpy2.crms_org_id,-1) = nvl(cpy.crms_org_id,-1))
)

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.05 0.06 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 53.67 58.55 3 11044701 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 53.72 58.61 3 11044701 0 1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 89

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=11044701 pr=3 pw=0 time=58553103 us)
0 FILTER (cr=11044701 pr=3 pw=0 time=58553078 us)
61190 NESTED LOOPS (cr=227962 pr=3 pw=0 time=1468698 us)
81153 PARTITION RANGE ALL PARTITION: 1 1 (cr=4464 pr=3 pw=0 time=324725 us)
81153 TABLE ACCESS BY LOCAL INDEX ROWID SRC_TRN_DET PARTITION: 1 1 (cr=4464 pr=3 pw=0 time=243560 us)
81153 INDEX RANGE SCAN SRC_TRN_DET_IDX15 PARTITION: 1 1 (cr=201 pr=0 pw=0 time=81233 us)(object id 11795)
61190 TABLE ACCESS BY INDEX ROWID CCDB_CPTY_DET (cr=223498 pr=0 pw=0 time=987639 us)
61190 INDEX UNIQUE SCAN CCDB_CPTY_DET_PK (cr=162308 pr=0 pw=0 time=602259 us)(object id 11098)
61103 NESTED LOOPS (cr=430432 pr=0 pw=0 time=1960979 us)
61103 PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=247123 pr=0 pw=0 time=1332279 us)
61103 TABLE ACCESS BY LOCAL INDEX ROWID SRC_TRN_DET PARTITION: KEY KEY (cr=247123 pr=0 pw=0 time=1121425 us)
61103 INDEX RANGE SCAN SRC_TRN_DET_IDX08 PARTITION: KEY KEY (cr=186020 pr=0 pw=0 time=782326 us)(object id 38519)
61103 INDEX UNIQUE SCAN CCDB_CPTY_DET_PK (cr=183309 pr=0 pw=0 time=510011 us)(object id 11098)
0 NESTED LOOPS (cr=10386307 pr=0 pw=0 time=54747352 us)
3154524 PARTITION RANGE SINGLE PARTITION: KEY KEY (cr=915080 pr=0 pw=0 time=15913803 us)
3154524 TABLE ACCESS BY LOCAL INDEX ROWID SRC_TRN_DET PARTITION: KEY KEY (cr=915080 pr=0 pw=0 time=12741249 us)
3154524 INDEX RANGE SCAN SRC_TRN_DET_IDX08 PARTITION: KEY KEY (cr=34233 pr=0 pw=0 time=3238110 us)(object id 38519)
0 TABLE ACCESS BY INDEX ROWID CCDB_CPTY_DET (cr=9471227 pr=0 pw=0 time=39812978 us)
3154524 INDEX UNIQUE SCAN CCDB_CPTY_DET_PK (cr=6316703 pr=0 pw=0 time=23050073 us)(object id 11098)

Please can you explain why ii), though more complicated, runs faster than i)?

What changes would I need to make in order to make i) faster?

Thanks very much





Vijai Narayanan SV, January 07, 2010 - 4:25 am UTC

Tom,
The answers you provided were really useful.

I am stuck in a situation: I need to find the plans for all the cursors present in a proc. We need to analyze the plans without executing the proc. Is there any way to find the execution plan for all the cursors present in a proc?

(I have already made a proc to get the cursor SQL separately; all I need to do is find the execution plans for these.) Please help me out.
Tom Kyte
January 11, 2010 - 8:55 am UTC

only if they already exist in v$sql_plan - you can use the program_id and program_line# columns in v$sql to marry up the sql with the procedure; you wouldn't even need to parse the procedure yourself in the first place, if that is what you did.

else, you'll have to actually run the procedure (as explain plan can "lie" sometimes).

The best bet would be to run the procedure with dbms_monitor-enabled tracing with wait events.
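
A minimal sketch of that v$sql lookup. The owner and procedure names here are hypothetical; the key assumption (stated in the answer above) is that v$sql.program_id holds the object_id of the PL/SQL unit that parsed the statement.

```sql
-- Find cursors parsed from a given stored procedure, if they are
-- still in the shared pool.
select s.sql_id, s.child_number, s.program_line#, s.sql_text
  from v$sql s
  join dba_objects o on o.object_id = s.program_id
 where o.owner = 'SCOTT'          -- hypothetical owner
   and o.object_name = 'MY_PROC'; -- hypothetical procedure name

-- Then display the plan for each sql_id/child_number returned above:
select * from table(dbms_xplan.display_cursor('&sql_id', &child_no));
```

If a cursor has aged out of the shared pool, nothing will be returned and you would have to run the procedure (ideally traced) to see the real plans.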

Merge the Union

A Reader, January 07, 2010 - 5:38 am UTC

Hi Tom,
The query below combines two statements with UNION; can we merge these two statements into a single statement?

SELECT /*+ index(ec) */
ec.exp_seq_id, ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class,
ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date, ec.gpc_book, ec.product, ec.country, ec.rating, ec.ccy_1,
ec.ccy_2, REPLACE(ec.tenor1,'1D','0D') AS tenor1, ec.tenor2, ec.tenor3, ec.sector,
ec.agg_code || '_PL' AS agg_code,
(CASE
WHEN ec.agg_code = 'EQ_CASH_GAMMA'
THEN 50
* ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
ELSE ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
END
) AS risk_amount,
ec.source_system, ec.trigger_file, ec.merger_flag, ec.ccdb_issuer,
ec.issuer_name, ec.mkt_index, ec.ccdb_id, ec.buba_id,
ec.rr_style_type_code, ec.source_type_code, ec.issuer_code,
ec.sc_bm_sec_code, ec.bc_bm_sec_code, ec.rf_sec_code, ec.bond_maturity,
ec.coupon, ec.shore_ind, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM tmp_ers_cube ec,
v_ers_shock s
WHERE ec.agg_code = s.cube_agg_code
AND ec.country = s.rpl_name
AND ec.product = s.gpc_prod
AND DECODE(s.tenor,'UNDEFINED', ec.tenor1,s.tenor)=ec.tenor1
AND DECODE (ec.shore_ind,'ON', shock_os,shock) IS NOT NULL

AND s.gpc_prod = 'ALL'
AND ec.product = s.gpc_prod

decode(s.gpc_prod,'ALL',

UNION
SELECT /*+ index(ec) */
ec.exp_seq_id, ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class,
ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date, ec.gpc_book, ec.product, ec.country, ec.rating, ec.ccy_1,
ec.ccy_2, REPLACE(ec.tenor1,'1D','0D'), ec.tenor2, ec.tenor3, ec.sector,
ec.agg_code || '_PL',
(CASE
WHEN ec.agg_code = 'EQ_CASH_GAMMA'
THEN 50
* ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
ELSE ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
END
) AS risk_amount,
ec.source_system, ec.trigger_file, ec.merger_flag, ec.ccdb_issuer,
ec.issuer_name, ec.mkt_index, ec.ccdb_id, ec.buba_id,
ec.rr_style_type_code, ec.source_type_code, ec.issuer_code,
ec.sc_bm_sec_code, ec.bc_bm_sec_code, ec.rf_sec_code, ec.bond_maturity,
ec.coupon, ec.shore_ind, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM tmp_ers_cube ec,
v_ers_shock s
WHERE ec.agg_code = s.cube_agg_code
AND ec.country = s.rpl_name
AND s.gpc_prod = 'ALL'
AND DECODE(s.tenor,'UNDEFINED', ec.tenor1,s.tenor)=ec.tenor1
AND DECODE (ec.shore_ind,'ON', shock_os,shock) IS NOT NULL


If you look, the only difference between the two statements is this part of the WHERE clause:


" AND s.gpc_prod = 'ALL'
AND ec.product = s.gpc_prod "

Please help me out.
Thanks
Tom Kyte
January 11, 2010 - 8:57 am UTC

this is not a complete sql statement.


AND            s.gpc_prod = 'ALL'  
AND            ec.product = s.gpc_prod              
                
decode(s.gpc_prod,'ALL',
                
                UNION
                SELECT   /*+ index(ec) */

Merge the Union (Ignore my last post)

A Reader, January 07, 2010 - 5:46 am UTC

Hi Tom,
Sorry for sending an incorrect query - a typo.

This is the correct query. I just wanted to combine these two statements, currently joined with UNION, into one statement.
Can we use DECODE for this? It takes approximately 3 minutes in production, so my idea is to remove the UNION from the statement below.

SELECT /*+ index(ec) */
ec.exp_seq_id, ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class,
ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date, ec.gpc_book, ec.product, ec.country, ec.rating, ec.ccy_1,
ec.ccy_2, REPLACE(ec.tenor1,'1D','0D') AS tenor1, ec.tenor2, ec.tenor3, ec.sector,
ec.agg_code || '_PL' AS agg_code,
(CASE
WHEN ec.agg_code = 'EQ_CASH_GAMMA'
THEN 50
* ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
ELSE ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
END
) AS risk_amount,
ec.source_system, ec.trigger_file, ec.merger_flag, ec.ccdb_issuer,
ec.issuer_name, ec.mkt_index, ec.ccdb_id, ec.buba_id,
ec.rr_style_type_code, ec.source_type_code, ec.issuer_code,
ec.sc_bm_sec_code, ec.bc_bm_sec_code, ec.rf_sec_code, ec.bond_maturity,
ec.coupon, ec.shore_ind, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM tmp_ers_cube ec,
v_ers_shock s
WHERE ec.agg_code = s.cube_agg_code
AND ec.country = s.rpl_name
AND ec.product = s.gpc_prod
AND DECODE(s.tenor,'UNDEFINED', ec.tenor1,s.tenor)=ec.tenor1
AND DECODE (ec.shore_ind,'ON', shock_os,shock) IS NOT NULL

UNION


SELECT /*+ index(ec) */
ec.exp_seq_id, ec.part_no, ec.pkg_name, ec.rpl_unit_id, ec.risk_class,
ec.report_status_code, ec.rpl_loc, ec.rpl_subsid, ec.rpl_book,ec.book_trad_bank, ec.book_prop_flow,
ec.cob_date, ec.gpc_book, ec.product, ec.country, ec.rating, ec.ccy_1,
ec.ccy_2, REPLACE(ec.tenor1,'1D','0D'), ec.tenor2, ec.tenor3, ec.sector,
ec.agg_code || '_PL',
(CASE
WHEN ec.agg_code = 'EQ_CASH_GAMMA'
THEN 50
* ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
ELSE ec.risk_amount
* NVL (DECODE (ec.shore_ind,
'ON', s.shock_os,s.shock)
, 1)
END
) AS risk_amount,
ec.source_system, ec.trigger_file, ec.merger_flag, ec.ccdb_issuer,
ec.issuer_name, ec.mkt_index, ec.ccdb_id, ec.buba_id,
ec.rr_style_type_code, ec.source_type_code, ec.issuer_code,
ec.sc_bm_sec_code, ec.bc_bm_sec_code, ec.rf_sec_code, ec.bond_maturity,
ec.coupon, ec.shore_ind, ec.issuer_rating, ec.issue_rating_source, ec.issuer_rating_source
FROM tmp_ers_cube ec,
v_ers_shock s
WHERE ec.agg_code = s.cube_agg_code
AND ec.country = s.rpl_name
AND s.gpc_prod = 'ALL'
AND DECODE(s.tenor,'UNDEFINED', ec.tenor1,s.tenor)=ec.tenor1
AND DECODE (ec.shore_ind,'ON', shock_os,shock) IS NOT NULL



Tom Kyte
January 11, 2010 - 9:00 am UTC

 
SELECT <b>DISTINCT</b> .....
 FROM ....
WHERE ec.agg_code = s.cube_agg_code
  AND ec.country = s.rpl_name<b>
  AND (ec.product = s.gpc_prod or s.gpc_prod = 'ALL')</b>
  AND DECODE(s.tenor,'UNDEFINED', ec.tenor1,s.tenor)=ec.tenor1
  AND DECODE (ec.shore_ind,'ON', shock_os,shock) IS NOT NULL



distinct might not be necessary - only you have the metadata to know whether that is true or not. I don't have details on primary/foreign keys or any other data constraints; you'll figure that out yourself.
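
One practical way to confirm a rewrite like this returns the same rows as the original is a two-way MINUS check. This is a generic sanity-check sketch; the angle-bracket placeholders stand for the full statements above and are not runnable as-is.

```sql
-- Both directions should return zero rows if the rewrite is equivalent.
select * from ( <original UNION query> )
minus
select * from ( <merged single query> );

select * from ( <merged single query> )
minus
select * from ( <original UNION query> );
```

MINUS removes duplicates, so this comparison also tells you nothing about differing duplicate counts - test separately with count(*) if that matters to you.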

execute slowly

ninhnq, January 14, 2010 - 1:32 am UTC

Hi Tom,
I have a SQL command. When I run explain plan for it, the plan looks good, but when I execute it, it retrieves rows slowly. (Database version 10.2.0.4)
This is my command:

/* Formatted on 2010/01/13 16:36 (Formatter Plus v4.8.7) */
SELECT a.sub_id, a.cust_id, 2 product_id, a.shop_code, a.NAME, a.isdn,
a.contract_no, a.contract_date, a.sub_type, c.issue_datetime,
'' line_type, c.action_id, c.action_audit_id, '0' prepaid, a.reg_type,
a.status, c.reason_id, a.province, a.user_name, '0' status_reveive,
(SELECT r.NAME
FROM reasons r
WHERE r.reason_id = c.reason_id
AND product_id = 2
AND ROWNUM = 1) reason_name,
(SELECT ap.NAME
FROM ap_domain ap
WHERE ap.TYPE = 'ACTION_TYPE'
AND ap.product_id = 2
AND ap.code = c.action_id
AND ROWNUM = 1) action_name
FROM subscriber@link_cuspostpaid a, action_audit@link_cuspostpaid c
WHERE a.sta_datetime >= TO_DATE ('01/12/2009', 'dd/mm/yyyy')
AND a.sta_datetime < TO_DATE ('30/12/2009', 'dd/mm/yyyy') + 1
AND c.TYPE = '1'
AND c.pk_id = a.sub_id
AND c.action_id IN ('00', '21') --Dang ky moi
AND c.issue_datetime >= TO_DATE ('01/12/2009', 'dd/mm/yyyy')
AND c.issue_datetime < TO_DATE ('30/12/2009', 'dd/mm/yyyy') + 1
Please help me!
Tom Kyte
January 18, 2010 - 4:34 pm UTC

explain to me what "it's a good command" means - how do you know what is good?

Tuning the Update statement

Karthi, July 22, 2010 - 4:02 am UTC

Hi Tom,

We are tuning the SQL scripts in our production environment.  I need your suggestion on the below update statements. 

DB details:
~~~~~~~~~~~

Version: Oracle 9i
Archive mode: Enabled 

Current update script:
~~~~~~~~~~~~~~~~~~~~~

The update statement below is in the current production environment and it updates all the records in the "SV_ORDER" table. We found that it generates a lot of redo, db block gets and consistent gets.

So we planned to reduce that. We added a WHERE condition to update only the records that changed or that are NULL in the "SV_ORDER" table. After adding that condition, the redo size and other statistics are much lower.

SQL> update SV_ORDER             SSW
  2      set SSW.SLSORD_TYP_KEY =
  3  nvl((select SSTW.SLSORD_TYP_KEY
  4  from SV_ORDER_TYP          SSTW
  5  where SSW.SLSORD_TYP=SSTW.SLSORD_TYP_NM AND
  6  SSW.ORG_ID = SSTW.ORG_ID),-1);

519866 rows updated.


Execution Plan
----------------------------------------------------------                      
   0      UPDATE STATEMENT Optimizer=CHOOSE (Cost=302 Card=519866 Byte          
          s=8317856)                                                            
                                                                                
   1    0   UPDATE OF 'SV_ORDER'                                            
   2    1     TABLE ACCESS* (FULL) OF 'SV_ORDER' (Cost=302 Card=51 :Q176407 
          9866 Bytes=8317856)                                          2000     
                                                                                
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'SV_ORDER_TYP' (Cos          
          t=1 Card=1 Bytes=19)                                                  
                                                                                
   4    3       INDEX (UNIQUE SCAN) OF 'SV_ORDER_TYP_UK' (UNIQUE)           


   2 PARALLEL_TO_SERIAL            SELECT /*+ NO_EXPAND ROWID(A1) */ A1.ROWID,A 
                                   1."SLSORD_TYP_KEY",A1."SLSORD_TYP",A         
                                                                                


Statistics
----------------------------------------------------------                      
         20  recursive calls                                                    
     602988  db block gets                                                      
      81070  consistent gets                                                    
      66043  physical reads                                                     
  147850232  redo size                                                          
        468  bytes sent via SQL*Net to client                                   
        523  bytes received via SQL*Net from client                             
          3  SQL*Net roundtrips to/from client                                  
          3  sorts (memory)                                                     
          0  sorts (disk)                                                       
     519866  rows processed                                                     

SQL> rollback;

Rollback complete.


Tuned update statement:  
~~~~~~~~~~~~~~~~~~~~~~

SQL> update SV_ORDER             SSW
  2      set SSW.SLSORD_TYP_KEY =
  3  nvl((select SSTW.SLSORD_TYP_KEY
  4  from SV_ORDER_TYP          SSTW
  5  where SSW.SLSORD_TYP=SSTW.SLSORD_TYP_NM AND
  6  SSW.ORG_ID = SSTW.ORG_ID),-1)
  7      where SSW.SLSORD_TYP_KEY is null
  8            or SSW.SLSORD_TYP_KEY <>
  9  nvl((select SSTW.SLSORD_TYP_KEY
 10  from SV_ORDER_TYP          SSTW
 11  where SSW.SLSORD_TYP=SSTW.SLSORD_TYP_NM AND
 12  SSW.ORG_ID = SSTW.ORG_ID),-1);

0 rows updated.


Execution Plan
----------------------------------------------------------                      
   0      UPDATE STATEMENT Optimizer=CHOOSE (Cost=302 Card=489286 Byte          
          s=7828576)                                                            
                                                                                
   1    0   UPDATE OF 'SV_ORDER'                                            
   2    1     FILTER                                                            
   3    2       TABLE ACCESS* (FULL) OF 'SV_ORDER' (Cost=302 Card= :Q176407 
          489286 Bytes=7828576)                                        7000     
                                                                                
   4    2       TABLE ACCESS (BY INDEX ROWID) OF 'SV_ORDER_TYP' (C          
          ost=1 Card=1 Bytes=19)                                                
                                                                                
   5    4         INDEX (UNIQUE SCAN) OF 'SV_ORDER_TYP_UK' (UNIQUE          
          )                                                                     
                                                                                
   6    1     TABLE ACCESS (BY INDEX ROWID) OF 'SV_ORDER_TYP' (Cos          
          t=1 Card=1 Bytes=19)                                                  
                                                                                
   7    6       INDEX (UNIQUE SCAN) OF 'SV_ORDER_TYP_UK' (UNIQUE)           


   3 PARALLEL_TO_SERIAL            SELECT /*+ NO_EXPAND ROWID(A1) */ A1.ROWID,A 
                                   1."SLSORD_TYP_KEY",A1."SLSORD_TYP",A         
                                                                                


Statistics
----------------------------------------------------------                      
         20  recursive calls                                                    
          3  db block gets                                                      
      43816  consistent gets                                                    
      39949  physical reads                                                     
        920  redo size                                                          
        487  bytes sent via SQL*Net to client                                   
        741  bytes received via SQL*Net from client                             
          3  SQL*Net roundtrips to/from client                                  
          3  sorts (memory)                                                     
          0  sorts (disk)    


Please provide your comments on this. 

My questions:

1. The second update statement performs fewer internal operations. Am I right?
2. Is there any drawback to this update statement?

Regards,
Karthi 



Tom Kyte
July 23, 2010 - 9:12 am UTC

1) define "internal operations" for me. It does less work obviously

2) you will lock only the rows you update now, not all rows in the table. If this is done at the same time others access the table - you will execute transactions in a different mix than before. Is this a drawback? Only you know the answer to that - does your code rely on the fact that it used to lock every row in the table?
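
For comparison only - not what the question used - the "update only rows that would actually change" intent can also be expressed with MERGE. This is an illustrative sketch reusing the question's table and column names as-is; note two assumptions: the WHERE clause on the update branch requires 10g or later, and unlike the original statement this does not apply the -1 default to SV_ORDER rows with no match in SV_ORDER_TYP.

```sql
merge into sv_order ssw
using sv_order_typ sstw
   on (ssw.slsord_typ = sstw.slsord_typ_nm
       and ssw.org_id  = sstw.org_id)
 when matched then
   update set ssw.slsord_typ_key = sstw.slsord_typ_key
    -- touch only rows whose value would actually change,
    -- avoiding redo/undo for rows that are already correct
    where ssw.slsord_typ_key is null
       or ssw.slsord_typ_key <> sstw.slsord_typ_key;
```
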

Thanks a lot

Karthi, July 30, 2010 - 6:03 am UTC

Thanks a lot tom for your guidance!

[ tom ] 1) define "internal operations" for me. It does less work obviously

[karthi] : the internal operations like generation of undo , Redo and writing to the Archive log files.

[ tom ] 2) you will lock only rows you update now, not all rows in the table. If this is done at the same time others access the table - you will execute transactions in a different mix than before. Is this a draw back? Only you know the answer to that - does your code rely on the fact that it used to lock every row in the table?

[karthi] : the above code runs in a DW environment as part of an ETL script, so I hope there will not be any issue on the locking side.

Tom Kyte
August 02, 2010 - 8:32 am UTC

the first one updated a lot of rows.

the second one updated nothing

so, yes, the first one did more work as far as redo and undo are concerned.

Are you sure the second one is correct? Because they sure are not "the same"
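As an aside, from 10g onwards this "skip the unchanged rows" idea is often written as a MERGE. A sketch only, reusing the MAIN/SUB names from the example below; note it is not identical to the NVL form, because rows of MAIN with no match in SUB are left untouched here, whereas the NVL form sets them to -1:

```sql
-- Sketch, 10g+ syntax (WHERE clause on the update branch of MERGE).
-- decode() treats two NULLs as equal, giving a NULL-safe comparison.
merge into main m
using sub s
on (m.id = s.id)
when matched then update
   set m.address = nvl(s.address, '-1')
   where decode(m.address, nvl(s.address, '-1'), 1, 0) = 0;
```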

Checked with some test table

Karthi, August 03, 2010 - 6:22 am UTC

Hi Tom,


Yes, you are correct. In my example above the second update processed zero rows. I have now tested with a sample table.

The second update statement takes less time and generates less redo.

Note: I have used the Oracle 9i DB.

Scripts:
--------
create table main ( id number, Address varchar2(10));

insert into main values (1,'NNNNNNN');
insert into main values (2,'NNNNNNN');
insert into main values (3,'NNNNNNN');
insert into main values (4,'NNNNNNN');


create table sub ( id number, Address varchar2(10));

insert into sub values (1,'A');
insert into sub values (2,null);
insert into sub values (4,'C');


SQL> insert into main
  2  select * from main;
............
.......
....
SQL> insert into main
  2  select * from main;


Now table "Main" has 1048576 records and table "Sub" has 3 records.

SQL> select count(*) from main;

  COUNT(*)                                                                      
----------                                                                      
   1048576                                                                      

SQL> select count(*) from sub;

  COUNT(*)                                                                      
----------                                                                      
         3     


SQL> select address,count(*) from main
  2  group by address;

ADDRESS      COUNT(*)                                                           
---------- ----------                                                           
NNNNNNN       1048576  


SQL> set timing on
SQL> set autotrace traceonly
SQL> update main m
  2  set address = nvl((select address from sub s where m.id=s.id),-1);

1048576 rows updated.

Elapsed: 00:01:44.04

Execution Plan
----------------------------------------------------------                      
   0      UPDATE STATEMENT Optimizer=CHOOSE                                     
   1    0   UPDATE OF 'MAIN'                                                    
   2    1     TABLE ACCESS (FULL) OF 'MAIN'                                     
   3    1     TABLE ACCESS (FULL) OF 'SUB'                                      




Statistics
----------------------------------------------------------                      
        805  recursive calls                                                    
    1073721  db block gets                                                      
       2659  consistent gets                                                    
       2124  physical reads                                                     
  252412780  redo size                                                          
        618  bytes sent via SQL*Net to client                                   
        582  bytes received via SQL*Net from client                             
          3  SQL*Net roundtrips to/from client                                  
          6  sorts (memory)                                                     
          0  sorts (disk)                                                       
    1048576  rows processed  


The statement took 01:44 minutes to complete and generated 252412780 bytes of redo.


SQL> update main m
  2  set address = nvl((select address from sub s where m.id=s.id),-1)
  3  where address is null
  4  or address <>  nvl((select address from sub s where m.id=s.id),-1);

1048576 rows updated.

Elapsed: 00:01:42.05

Execution Plan
----------------------------------------------------------                      
   0      UPDATE STATEMENT Optimizer=CHOOSE                                     
   1    0   UPDATE OF 'MAIN'                                                    
   2    1     FILTER                                                            
   3    2       TABLE ACCESS (FULL) OF 'MAIN'                                   
   4    2       TABLE ACCESS (FULL) OF 'SUB'                                    
   5    1     TABLE ACCESS (FULL) OF 'SUB'                                      




Statistics
----------------------------------------------------------                      
        536  recursive calls                                                    
    1073630  db block gets                                                      
       2611  consistent gets                                                    
       1963  physical reads                                                     
  252411468  redo size                                                          
        627  bytes sent via SQL*Net to client                                   
        672  bytes received via SQL*Net from client                             
          3  SQL*Net roundtrips to/from client                                  
          1  sorts (memory)                                                     
          0  sorts (disk)                                                       
    1048576  rows processed                                                     

The statement took 01:42 minutes to complete and generated 252411468 bytes of redo. There is no big difference when all the records are updated.


SQL> select address,count(*) from main
  2  group by address;

ADDRESS      COUNT(*)                                                           
---------- ----------                                                           
-1             524288                                                           
A              262144                                                           
C              262144                                                           

Elapsed: 00:00:01.03
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00


I have changed a subset of records to check the execution time. Most of the data in the target table is the same; only some records differ.

Only 262144 records need to be updated (the other records are unchanged).

SQL> update sub set address='M' where id =4;

1 row updated.

Elapsed: 00:00:00.02
SQL> commit;

Commit complete.


SQL> set autotrace traceonly
SQL> 
SQL>  update main m
  2   set address = nvl((select address from sub s where m.id=s.id),-1);

1048576 rows updated.

Elapsed: 00:01:51.00

Execution Plan
----------------------------------------------------------                      
   0      UPDATE STATEMENT Optimizer=CHOOSE                                     
   1    0   UPDATE OF 'MAIN'                                                    
   2    1     TABLE ACCESS (FULL) OF 'MAIN'                                     
   3    1     TABLE ACCESS (FULL) OF 'SUB'                                      




Statistics
----------------------------------------------------------                      
        624  recursive calls                                                    
    1072662  db block gets                                                      
       4953  consistent gets                                                    
       2313  physical reads                                                     
  248380252  redo size                                                          
        629  bytes sent via SQL*Net to client                                   
        584  bytes received via SQL*Net from client                             
          3  SQL*Net roundtrips to/from client                                  
          1  sorts (memory)                                                     
          0  sorts (disk)                                                       
    1048576  rows processed                                                     


The statement took 01:51 minutes to complete and generated 248380252 bytes of redo.


SQL> set autotrace off;
SQL> select address,count(*) from main
  2  group by address;

ADDRESS      COUNT(*)                                                           
---------- ----------                                                           
-1             524288                                                           
A              262144                                                           
M              262144                                                           

Elapsed: 00:00:00.09

Roll back the changes.

SQL> rollback;

Rollback complete.

Elapsed: 00:04:40.08
SQL> select address,count(*) from main
  2  group by address;

ADDRESS      COUNT(*)                                                           
---------- ----------                                                           
-1             524288                                                           
A              262144                                                           
C              262144        

SQL> set autotrace traceonly

SQL> update main m
  2  set address = nvl((select address from sub s where m.id=s.id),-1)
  3  where address is null
  4  or address <>  nvl((select address from sub s where m.id=s.id),-1);

262144 rows updated.

Elapsed: 00:00:45.04

Execution Plan
----------------------------------------------------------                      
   0      UPDATE STATEMENT Optimizer=CHOOSE                                     
   1    0   UPDATE OF 'MAIN'                                                    
   2    1     FILTER                                                            
   3    2       TABLE ACCESS (FULL) OF 'MAIN'                                   
   4    2       TABLE ACCESS (FULL) OF 'SUB'                                    
   5    1     TABLE ACCESS (FULL) OF 'SUB'                                      




Statistics
----------------------------------------------------------                      
        176  recursive calls                                                    
     268198  db block gets                                                      
       2421  consistent gets                                                    
       2054  physical reads                                                     
   62104336  redo size                                                          
        628  bytes sent via SQL*Net to client                                   
        672  bytes received via SQL*Net from client                             
          3  SQL*Net roundtrips to/from client                                  
          1  sorts (memory)                                                     
          0  sorts (disk)                                                       
     262144  rows processed   


The statement took only 45.04 seconds to complete and generated 62104336 bytes of redo.

SQL> set autotrace off;
SQL> select 248380252-62104336 redo_diff from dual;

 REDO_DIFF                                                                      
----------                                                                      
 186275916

The second update takes less execution time and generates less redo and fewer db block gets.


Please provide your suggestions.

Regards,
Karthi

Tom Kyte
August 03, 2010 - 8:44 am UTC

suggestion for what? You have done all of the work - analyze the numbers, read them, understand them, make your decision (after of course making sure the updates actually do what you need them to do)

high redo entry

Arvind, September 08, 2010 - 9:19 am UTC

Hello Tom,

Can you suggest something for following insert statement:

insert into temp_all
SELECT /*+ */msd.inventory_item_id,
bom.component_item_id,
msd.organization_id,
msd.schedule_date,
null schedule_designator,
ecd.seq_num,
null schedule_designator,
0 mps_quantity
from msd,
ecd,
bom
where 1=1
and msd.global_name = 'US'
and msd.schedule_level = 2
and bom.ORGANIZATION_ID = msd.organization_id
and bom.inventory_item_id = msd.inventory_item_id
and msd.schedule_designator = (select max(schedule_designator)
from ETK
)
and ecd.calendar_date between msd.schedule_date and msd.schedule_date+6
and ecd.calendar_date > sysdate-8
and ecd.SEQ_NUM is not null --35 sec


5189429 rows created.

Elapsed: 00:21:08.07

Execution Plan
----------------------------------------------------------
0 INSERT STATEMENT Optimizer=CHOOSE (Cost=4326 Card=1925 Bytes
=121275)

1 0 MERGE JOIN (Cost=4326 Card=1925 Bytes=121275)
2 1 SORT (JOIN) (Cost=4310 Card=2589 Bytes=132039)
3 2 HASH JOIN (Cost=4276 Card=2589 Bytes=132039)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'msd' (Cost=4094 Card=25326 Bytes=886410)

5 4 BITMAP CONVERSION (TO ROWIDS)
6 5 BITMAP AND
7 6 BITMAP INDEX (SINGLE VALUE) OF 'IDX_BMPSD'
8 7 SORT (AGGREGATE)
9 8 INDEX (FULL SCAN (MIN/MAX)) OF 'IDX_ERP' (
NON-UNIQUE) (Cost=18 Card=12983 Bytes=142813)

10 6 BITMAP INDEX (SINGLE VALUE) OF 'IDX_BMP_SL'
11 3 INDEX (FULL SCAN) OF 'BOM_PK' (UNIQUE) (
Cost=748 Card=208537 Bytes=3336592)

12 1 FILTER
13 12 SORT (JOIN)
14 13 INDEX (FULL SCAN) OF 'ECD' (UNIQUE
) (Cost=27 Card=297 Bytes=3564)





Statistics
----------------------------------------------------------
1662 recursive calls
144397 db block gets
87438 consistent gets
48126 physical reads
230391184 redo size
356 bytes sent via SQL*Net to client
1673 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
2 sorts (memory)
1 sorts (disk)
5189429 rows processed
Tom Kyte
September 09, 2010 - 8:46 pm UTC

if the table you are inserting into is not indexed, you might consider using a direct path load (insert /*+ APPEND */ ) and nologging (alter table t nologging) - however, if you do that COORDINATE WITH THE DBA - they need to backup after you do that - soon.


If you have lots of indexes and the table is empty when you load and your entire goal is "reduce redo", then disable indexes - load nologging - rebuild indexes nologging - and there you go.

if the table is already big, then you have what you have, 230mb is tiny, small, nothing in the year 2010.
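To spell out the suggestion above as a sketch (temp_all is the table from the question; source_t is a hypothetical stand-in for the big SELECT; nologging changes are not recoverable from the archived redo, hence the backup coordination with the DBA):

```sql
-- Sketch of a minimal-redo bulk load into an empty, unindexed table.
alter table temp_all nologging;

insert /*+ APPEND */ into temp_all      -- direct path load
select * from source_t;

commit;     -- required before this session can read temp_all again

-- if temp_all has indexes: mark them unusable before the load, then
-- alter index temp_all_ix rebuild nologging;
```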

Redo log or sorting

Arvind, September 13, 2010 - 4:13 am UTC

But Tom, it still takes 21 minutes to execute even in nologging mode. Can we do something to reduce the execution time?
Tom Kyte
September 13, 2010 - 2:28 pm UTC

trace it, find out where your time is being spent (likely, it is not the insert part but the select part that is consuming a large amount of time - the way to find that out will be to trace it and see what it is doing and how long it is spending doing what)
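A typical way to capture that trace (sketch; the 10046 event at level 12 records wait events and bind values, the trace file lands in user_dump_dest, and the file name shown is just an example):

```sql
alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';

-- run the slow INSERT ... SELECT here

alter session set events '10046 trace name context off';

-- then format the trace file from the OS prompt, e.g.:
--   tkprof ora_12345.trc report.txt sys=no sort=exeela
```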

temp area?

Arvind, September 15, 2010 - 6:48 am UTC

Hello Tom,

Here is tkprof:

insert /*+ append*/ into temp_all
SELECT msd.inventory_item_id,
  bom.component_item_id,
  msd.organization_id,
  msd.schedule_date,
  null week_start_date,
  ecd.seq_num,
  null schedule_designator,
  0 mps_quantity
 from r_mf_mrp_schedule_dates msd,
  erp_calendar_dates ecd,
  erp_tk_flat_bom bom
 where 1=1
 --and msd.organization_id = 1163
 --and msd.inventory_item_id = 268336 --6477129
 --and bom.component_item_id = 1740046 --5612534 --12121 --17020
 --and bom.organization_id = 362 --1163 --16 --923
 and msd.global_name = 'US'
 and msd.schedule_level = 2
 and bom.ORGANIZATION_ID = msd.organization_id
 and bom.inventory_item_id = msd.inventory_item_id
 and msd.schedule_designator =  (select max(schedule_designator)
                                from    ERP_TK_MPS_SUMMARY
                                     )
 --and ((epw.fiscal_wk_start_date+1 = ecd.week_start_date) or (epw.fiscal_wk_start_date+2 = ecd.week_start_date))
 --and ecd.week_start_date between epw.fiscal_wk_start_date+1 and epw.fiscal_wk_start_date+2
 --and msd.schedule_date+1 = ecd.CALENDAR_DATE
 and ecd.calendar_date between msd.schedule_date and msd.schedule_date+6
 --and msd.fiscal_wk_start_date+2 = ecd.week_start_date
 and ecd.calendar_date > sysdate-8
 and ecd.SEQ_NUM is not null
 --and ecd.calendar_date between ecd.week_start_date  and ecd.next_week_start_date

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      3   1010.22    1653.72    3207896      68701        551     5128037
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4   1010.22    1653.73    3207896      68701        551     5128037

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 60  

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  LOAD AS SELECT  
5128037   HASH JOIN  
2466132    MERGE JOIN  
    863     SORT JOIN 
    863      INDEX FULL SCAN ERP_CALENDAR_DATES_U1 (object id 24234890)
2466132     FILTER  
372651782      SORT JOIN 
 502399       TABLE ACCESS BY INDEX ROWID R_MF_MRP_SCHEDULE_DATES 
 612954        BITMAP CONVERSION TO ROWIDS 
      6         BITMAP AND  
     77          BITMAP INDEX SINGLE VALUE IDX_BMPSD (object id 24335933)
      1           SORT AGGREGATE 
      1            INDEX FULL SCAN (MIN/MAX) IDX_ERP (object id 24336306)
    156          BITMAP INDEX SINGLE VALUE IDX_BMP_SL (object id 24335897)
 208537    INDEX FULL SCAN ERP_TK_FLAT_BOM_PK (object id 24235051)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.29          0.58
  latch free                                     22        0.00          0.03
  db file sequential read                     34809        1.14        142.67
  control file sequential read                    5        0.00          0.00
  direct path write                            2109        0.24          0.93
  direct path read                           142758        2.35        337.58
  log file sync                                   1        0.00          0.00
********************************************************************************

As far as I know, a high direct path read count indicates a sort area problem. If so, how can I calculate what the sort area size or hash area size should be for a session like this? Also, the current values of my *_area_size parameters are:

SQL> show parameter area_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------------------
bitmap_merge_area_size               integer     1048576
create_bitmap_area_size              integer     8388608
hash_area_size                       integer     131072
sort_area_size                       integer     65536
workarea_size_policy                 string      AUTO

Tom Kyte
September 15, 2010 - 8:19 am UTC

your workarea_size_policy is set to auto, your pga_aggregate_target is what you want to look at - not the *_size things - using pga automatic memory management we use the pga_aggregate_target.

You can use statspack or AWR and the pga advisors contained therein to get an idea of what your setting would have to be to do this entirely in memory.


but first, compare the plan to the row source operation in there, does the plan show you an estimate of 372,651,782 rows coming out of that sort join step - that looks big to me, might be a bad plan - and we'd want to fix that first.
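If you do want the advisor numbers directly, here is a sketch of querying the PGA target advice view (requires statistics_level at typical or all, and select privilege on the v$ view):

```sql
select round(pga_target_for_estimate / 1024 / 1024) target_mb,
       estd_pga_cache_hit_percentage                estd_hit_pct,
       estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;
```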

Cost Against Execution Time

Abhisek, October 12, 2010 - 8:38 am UTC

Hi Tom,

I would like to know how closely the cost and the execution time of a query plan are co-related?

Is it lower the cost, lower is execution time? Is it sometime or always.

Thanks a lot
Tom Kyte
October 12, 2010 - 9:17 am UTC

In a perfect world - you could derive perfectly the runtime from the cost.

It is not a perfect world - so there is a loose relationship between cost and time. In theory, lower cost = faster run time. In practice, you will find many exceptions because the cost is an output of a mathematical model whose inputs are sometimes imprecise or fuzzy or guessed at.

One more question

ABhisek, October 12, 2010 - 12:32 pm UTC

Hi Tom,

Thanks for the reply. Now if it is exception that lower cost=faster execution then fine.

But still, I would like to know how the cost is calculated. I see the execution plan where I see the cost but I dont understand how it was calculated. Any mathematical formula with an example will be highly appreciated.
Tom Kyte
October 12, 2010 - 12:53 pm UTC

.. Now if it is exception t ..

I didn't say that, it happens often.

The cost is a function of how many IO's we think we'll do and how much CPU we believe we'll need. It is not a simple "f(x)" function - it is a hugely complex algorithm with lots of side bars to it.

If you are interested in bits and pieces of it, I'll recommend Cost Based Oracle by Jonathan Lewis.
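For a rough feel, the Performance Tuning Guide sketches the top-level shape of the cost (once system statistics are gathered) as an I/O plus CPU mix, normalized to single-block read time:

```
cost = ( #SRds * sreadtim + #MRds * mreadtim + #CPUCycles / cpuspeed ) / sreadtim
```

where #SRds and #MRds are the estimated single- and multi-block reads, and sreadtim, mreadtim and cpuspeed come from the gathered system statistics. The real implementation layers many adjustments on top of this.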

query taking long tim

Supriya Mishra, February 26, 2011 - 8:58 am UTC

Respected Tom,

I have a select query that filters data using some "and", "in", and "exists" conditions against tables in a remote database. Basically, the following query is used inside a package for report generation. When I ran the corresponding report it took a very long time (this select is the only statement inside the report). I then ran it from the SQL prompt; it has been running for more than 4 days and is still going.
I studied the query and found that some "in" conditions are written twice, so I thought to minimize the query for better performance.

Please guide me on how to modify the query so it takes less time.

In particular, the query uses 4 tables (service_agreement, services, language_keys, subscriber).


THE CODE:


select distinct substr(subscriber_no,-8) msisdn, ban
from user.service_agreement
where (ban, subscriber_no) in ( select ban, subscriber_no
from user.service_agreement
where soc like 'XPRESS3%'
and service_type = 'P'
and (expiration_date >= sysdate or expiration_date is null or expiration_date = to_date('31-12-4700','DD-MM-YYYY'))
)
and (ban, subscriber_no) not in ( select ban, subscriber_no
from user.service_agreement
where --soc like 'GPRS%' and soc <> 'GPRSDAWAP'
soc in ( select ltrim(b.language_key,'s') soc
from services a, language_keys b
where a.language_key = b.language_key
and a.language = b.language
and a.soc_type <> 'P'
and service_carrier_group_code like '%GPRS%'
)
and soc <> 'GPRSDAWAP'
and service_type <> 'P'
and (expiration_date >= sysdate or expiration_date is null or expiration_date = to_date('31-12-4700','DD-MM-YYYY'))
and (ban, subscriber_no) in ( select ban, subscriber_no
from user.service_agreement
where soc like 'XPRESS3%'
and service_type = 'P'
and (expiration_date >= sysdate or expiration_date is null or expiration_date = to_date('31-12-4700','DD-MM-YYYY'))
)
)
and exists ( select 1
from user.subscriber
where customer_id = ban
and subscriber_no = subscriber_no
and sub_status in ('A','S')
)
and service_type <> 'P'
and (expiration_date >= sysdate or expiration_date is null or expiration_date = to_date('31-12-4700','DD-MM-YYYY'));



long live Tom,
regards,
supriya

how to use many put_line calls but show the results on the same line?

fatiman, July 21, 2011 - 8:07 pm UTC

Hi.
I want to use several put_line calls at different points in my script, but have their results printed on the same line. I don't want to use put because I want a star written immediately after each update command completes. Please help me. Thanks in advance.

dbms_output.put_line('welcome. if each update command occurs and completes successfully, you see one star');
update student set name='nikolas' where name='tom';
dbms_output.put_line('*');

update teacher set family='kaje' where family='karlos';
dbms_output.put_line('*');
update course set name='physics' where name='math';
dbms_output.put_line('*');
/*************result**********/
*
*
*
but I want to see the resualt as
***
Tom Kyte
July 22, 2011 - 2:17 pm UTC

put_line doesn't put out a '*' "right after each update command completes"

dbms_output writes to an array, a tool like sqlplus will retrieve the contents of this array AFTER the ENTIRE stored procedure completes.

You do not see any dbms_output output until AFTER the entire stored procedure completes.

put and put_line do NOT make that any different.


To prove that to yourself, replace your updates with "dbms_lock.sleep(20)".

You will wait 60 seconds before seeing any output.
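If the real goal is to see progress while the block is still running, a common alternative (a sketch, reusing the hypothetical tables from the question) is dbms_application_info, whose values are visible from another session immediately rather than buffered until the end:

```sql
begin
   dbms_application_info.set_client_info('update 1 of 3');
   update student set name = 'nikolas' where name = 'tom';

   dbms_application_info.set_client_info('update 2 of 3');
   update teacher set family = 'kaje' where family = 'karlos';

   dbms_application_info.set_client_info('update 3 of 3');
   update course set name = 'physics' where name = 'math';
end;
/

-- meanwhile, from another session:
-- select client_info from v$session where sid = <sid of the running session>;
```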

Query tuning reg

Raja, July 22, 2011 - 5:35 am UTC

Hello Mr.Tom,
First my thanks in advance...
I have just joined an MNC as a fresh Oracle developer, and I want to know my responsibilities as an Oracle developer with regard to query tuning and how to proceed. Please help me, sir, as I am very new to this field.
Tom Kyte
July 22, 2011 - 2:37 pm UTC

If you ask me - your responsibilities include getting to know your software. I would start with a reading of the 11gR2 concepts guide - the application developers guide - and the performance guide.

Also, find a mentor you respect and trust and work as closely with them as possible - asking them for feedback on the approach you are taking to problems, how they would do it (and why) and so on.

And keep doing that for about five years... At least :)

Not using dates for bind variables

Jess, August 22, 2011 - 11:16 am UTC

Dear Tom,

In the comments above, you mentioned that

> IMO, you should code:
> select * from t where date_column = to_date( :bind, 'date format' )
> you should NEVER NEVER code
> select * from t where date_column = :bind;
> the second query is a bug waiting to happen.  
> implicit conversions should always be avoided.

If that query is part of a procedure that, say, loops through the table for every available date and does a calculation for it using a bind variable, what you are saying makes sense to me if the date being passed in is a varchar. But if the variable holding the date_column value is a date, not a varchar, why wouldn't it be safe to say "where date_column = :bind"? Wouldn't it actually be an implicit conversion to do to_date() on a date? Many thanks.

Tom Kyte
August 30, 2011 - 2:37 pm UTC

Ok, let me revise,

> IMO, you should code:
> select * from t where date_column = to_date( :bind, 'date format' )
> you should NEVER NEVER code
> select * from t where date_column = :bind;
when :bind is a string or a number
> the second query is a bug waiting to happen.
> implicit conversions should always be avoided.


99.999% of the times, :bind would be a string in a host application. It is very rare to have someone bind a true date.
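So the recommended shape, when the host bind really is a string, is the explicit conversion with a fixed format mask (a sketch; table t, the column, and the mask are examples):

```sql
-- :bind arrives from the application as a VARCHAR2; convert it
-- explicitly rather than relying on the session's NLS_DATE_FORMAT.
select *
  from t
 where date_column = to_date(:bind, 'yyyy-mm-dd hh24:mi:ss');
```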


Jess, September 05, 2011 - 5:40 am UTC

> 99.999% of the times, :bind would be a string in a host application. It is very rare to have someone bind a true date.

Thanks for clarifying, Tom. Why is binding a date rare? Do you deem it to be a good/performant practice or just the opposite? Is there any benefit or downside to passing dates as strings?

I've recently been working on an application where just about all the dates get passed and worked on as dates...

Thank you!
Tom Kyte
September 06, 2011 - 8:41 am UTC

It is rare because there aren't really native "date" types in most programming languages. PLSQL has an Oracle date type as a variable datatype - a native, natural datatype - it is not as "easy" in many other programming languages.

explain plan in 10GR2

A reader, October 26, 2011 - 10:55 pm UTC

Tom:

All the tables in this explain plan are analyzed using the script below, and the pipelined table functions CS_ML_ENC_DX_EO and CS_ML_ENC_EO have a cardinality hint of 25 rows added. But the estimated cardinality in the plan is still far from the actuals. We are on 10.2.0.5. Is there any solution available for me?


begin
  dbms_stats.gather_table_stats(
  ownname=>user,
  tabname=>'table_name',
  method_opt=>'for all indexed columns size 254',
  cascade=>true,
  degree=>4,
  estimate_percent=>dbms_stats.auto_sample_size,
  granularity=>'ALL');
end;
/
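For reference, the (undocumented) cardinality hint on a pipelined table function is normally attached like this (a sketch; the bind parameter and alias are examples, and in 10.2 the hint only sets the estimate for the collection itself, not cardinalities further up the plan, which may be why the overall estimates stay off):

```sql
select /*+ cardinality(t 25) */ *
  from table(CS_ML_ENC_EO(:cur)) t;
```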


testdata@TST02> /

no rows selected

Elapsed: 00:00:00.53

Execution Plan
----------------------------------------------------------
Plan hash value: 157574005

---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name                         | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                              |   392M|    75G|       |    17G  (1)|999:59:59 |       |       |
|   1 |  SORT GROUP BY                       |                              |   392M|    75G|    80G|    17G  (1)|999:59:59 |       |       |
|   2 |   NESTED LOOPS OUTER                 |                              |   392M|    75G|       |    17G  (1)|999:59:59 |       |       |
|   3 |    NESTED LOOPS OUTER                |                              |   392M|    75G|       |  8626M  (1)|999:59:59 |       |       |
|*  4 |     HASH JOIN                        |                              |   392M|    74G|  8280K| 63176   (5)| 00:12:39 |       |       |
|*  5 |      HASH JOIN                       |                              | 42372 |  7779K|    13M| 53727   (1)| 00:10:45 |       |       |
|*  6 |       HASH JOIN                      |                              | 78621 |    13M|       | 14775   (2)| 00:02:58 |       |       |
|*  7 |        HASH JOIN                     |                              |    15 |  2235 |    11M|  4152   (1)| 00:00:50 |       |       |
|   8 |         VIEW                         |                              |   103K|    10M|       |   907   (1)| 00:00:11 |       |       |
|*  9 |          HASH JOIN                   |                              |   103K|    11M|       |   907   (1)| 00:00:11 |       |       |
|* 10 |           TABLE ACCESS FULL          | CS_HCC_MODEL                 | 27726 |  1516K|       |   237   (1)| 00:00:03 |       |       |
|* 11 |           TABLE ACCESS FULL          | CS_ICD9CM                    | 71550 |  4541K|       |   668   (1)| 00:00:09 |       |       |
|  12 |         NESTED LOOPS                 |                              |   526K|    22M|       |  1255   (1)| 00:00:16 |       |       |
|* 13 |          TABLE ACCESS FULL           | CS_PROJECT                   |    11 |   165 |       |     8   (0)| 00:00:01 |       |       |
|  14 |          PARTITION LIST ITERATOR     |                              | 47848 |  1355K|       |   113   (0)| 00:00:02 |   KEY |   KEY |
|* 15 |           TABLE ACCESS FULL          | CS_CHART_ENC_DX              | 47848 |  1355K|       |   113   (0)| 00:00:02 |   KEY |   KEY |
|  16 |        PARTITION LIST SINGLE         |                              |  1910K|    45M|       | 10610   (2)| 00:02:08 |     1 |     1 |
|* 17 |         TABLE ACCESS FULL            | CS_PROJ_CONTENT_MEMBER       |  1910K|    45M|       | 10610   (2)| 00:02:08 |     1 |     1 |
|  18 |       PARTITION LIST ALL             |                              |  7504K|   100M|       | 28934   (1)| 00:05:48 |     1 |     2 |
|  19 |        TABLE ACCESS FULL             | CS_CHART_ENC                 |  7504K|   100M|       | 28934   (1)| 00:05:48 |     1 |     2 |
|* 20 |      INDEX FAST FULL SCAN            | IDX_CS_PROJ_CONT_STAT_FLT_01 |  1259K|    19M|       |  4806   (2)| 00:00:58 |       |       |
|* 21 |     COLLECTION ITERATOR PICKLER FETCH| CS_ML_ENC_EO                 |       |       |       |            |          |       |       |
|* 22 |    COLLECTION ITERATOR PICKLER FETCH | CS_ML_ENC_DX_EO              |       |       |       |            |          |       |       |
---------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("A"."PROJ_KEY"="C"."PROJ_KEY")
   5 - access("DX"."ENCOUNTER_KEY"="ENC"."ENCOUNTER_KEY" AND "DX"."PROJ_KEY"="ENC"."PROJ_KEY")
   6 - access("DX"."PROJ_KEY"="A"."PROJ_KEY")
   7 - access("DX"."DX_CD"="T"."ICD9CM_CD")
       filter("DX"."DOS_THRU_DT">="T"."EFF_FROM_DT" AND "DX"."DOS_THRU_DT"<="T"."EFF_THRU_DT" AND
              "DX"."DOS_THRU_DT">="T"."EFF_DOS_FROM_DT" AND "DX"."DOS_THRU_DT"<="T"."EFF_DOS_THRU_DT")
   9 - access("ICD"."ICD9CM_CD_NODOT"="HCC"."ICD9CM_CD_NODOT")
  10 - filter("HCC"."MODEL_CAT" IS NOT NULL AND NVL("HCC"."MODEL_TYPE",'NoHCCType')='CMS')
  11 - filter("ICD"."TYPE"='1' AND "ICD"."VALIDITY"='C')
  13 - filter("PROJ_ACTUAL_START_DT">=TO_DATE(' 2011-08-29 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "PROJ_ACTUAL_START_DT"<TO_DATE('
              2011-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
  15 - filter("DX"."PROJ_KEY"="PROJ_KEY")
  17 - filter("A"."MBR_HP_PRODUCT"='SHE' OR "A"."MBR_HP_PRODUCT"='SHW')
  20 - filter("C"."CURRENT_STATUS"='CHARTCOMP' OR "C"."CURRENT_STATUS"='CVCOMP' OR "C"."CURRENT_STATUS"='CVQACOMP' OR
              "C"."CURRENT_STATUS"='QACOMP')
  21 - filter("DX"."ENCOUNTER_KEY"=VALUE(KOKBF$))
  22 - filter("DX"."ENCOUNTER_KEY"=VALUE(KOKBF$) AND "DX"."DX_CD"=VALUE(KOKBF$))


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       4088  consistent gets
          0  physical reads
          0  redo size
        574  bytes sent via SQL*Net to client
        339  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          0  rows processed

testdata@TST02>

Tom Kyte
October 27, 2011 - 11:12 am UTC

you don't show me the query
you don't tell me which estimated cardinalities are the problem....


I do know that "join" conditions like this (what we can call a fuzzy join)

filter("DX"."DOS_THRU_DT">="T"."EFF_FROM_DT" AND 
       "DX"."DOS_THRU_DT"<="T"."EFF_THRU_DT" AND
       "DX"."DOS_THRU_DT">="T"."EFF_DOS_FROM_DT" AND 
       "DX"."DOS_THRU_DT"<="T"."EFF_DOS_THRU_DT")


are like the kiss of death for cardinality estimates - what a wacky join to "T" (and the only join - no equality anywhere). I'm skeptical that the optimizer would do anything reasonable with that - I know I wouldn't myself.


Init parameters which influence execution plan

A reader, March 08, 2012 - 11:19 am UTC

Tom,

Other than

•OPTIMIZER_INDEX_CACHING
•OPTIMIZER_INDEX_COST_ADJ
•DB_FILE_MULTIBLOCK_READ_COUNT

are there any other init parameters which have impact on execution plan?

Thanks.
Tom Kyte
March 09, 2012 - 9:13 am UTC

many/most of them.

and do not touch these parameters, let ALL OF THEM DEFAULT.


if you want a complete list, search this site for 10053, see how to generate a 10053 trace, create one and the seven page list of parameters will appear.

and then promptly *forget they exist*
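
As a sketch of generating that 10053 trace (the table name and comment marker are illustrative):

```sql
-- Turn on the optimizer trace (event 10053) for this session
alter session set events '10053 trace name context forever, level 1';

-- Hard-parse the statement of interest; a unique comment forces a fresh parse
select /* 10053_demo */ count(*) from some_table;

-- Turn the trace back off
alter session set events '10053 trace name context off';

-- On recent releases the trace file location can be found with:
select value from v$diag_info where name = 'Default Trace File';
```

The resulting trace file contains, among much else, the multi-page dump of optimizer-related parameters Tom refers to.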

Indexing strategy

santosh, March 16, 2012 - 1:46 pm UTC

Hi Tom,

Looking at the time spent in the explain output of a particular query we are trying to tune, I found that a set of rowids is fetched from the table's index and then the table is looked up to filter on other columns (that aren't indexed).
Almost 100% of the time is spent in the table access operation.

Most of the queries use 3 columns, one being unique and the other 2 much less selective (one having 4 distinct values and the other 16).
Would you advise trying to create the index with the highly selective column as the leading edge and the least selective one last?


Tom Kyte
March 16, 2012 - 4:23 pm UTC

Do you advise if it's worth a try to create the index with the highly selective column as the leading edge, with the least selective last?


the selectivity of a column has no bearing at all on its position in the index. None.

Think about it - say you have a column A with almost unique values and a column B with 3 values.

Suppose you have a query:

select * from t where A > 100 and B = 2;

would you want A or B first? I would want, I would demand, B be first. So we could navigate to the B=2 portion of the index and A > 100 and every single index entry we read would be part of our answer!!

If you put A first and B second - you would navigate to A > 100 and have to look at *each and every* entry such that A > 100. You would look at (in general) 3 times as many index entries as you should have!!!



Or, suppose the query was "where A = 100 and B = 2"

Now, would it matter if A was first or B was first?


Think before you answer - think about what it takes to get a rowid out of an index. We have to navigate the root, some branch and then get to a leaf block. We'd have to do that if A was first or B was first.



Now, if we have an index in (A,B) whose height was 3 (3 IOs to get a rowid) - then the index on (B,A) would be 3 as well probably (or less - more on that in a moment!). In short - it would take 3 blocks, we'd process 3 blocks, to get a rowid regardless of whether A was first or B was first.


Now, I just said the index on (B,A) might not have a height of three, it could be less. How could that be? Index key compression - we remove repetitive leading edge fields - if B repeated a lot (it does), we would store it ONCE per leaf block instead of one per row per leaf block. That could significantly decrease the index size, hence the height, hence the work involved in looking up a key.
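
A hypothetical sketch of that compressed index (the table and column names are assumed for illustration):

```sql
-- Assumed demo table: A is nearly unique, B has only a few distinct values
create table t ( a number, b number, c varchar2(30) );

-- Put the repetitive column first and compress it away:
-- COMPRESS 1 stores each distinct value of B once per leaf block
-- instead of once per row, shrinking the index and possibly its height
create index t_ba_idx on t ( b, a ) compress 1;
```

With (A,B) instead, compression would buy almost nothing, since the nearly unique leading column rarely repeats within a leaf block.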

column Selectivity and position in index

Guddu, March 25, 2012 - 1:08 pm UTC

Hello Tom ji,
I work with a well-known company in India and we have a very experienced DBA team (provided by UK clients). In the good-practices list provided by the UK DBA team, it has been mentioned to keep highly selective columns as the leading ones when creating indexes, probably to avoid range scans. That is correct in a few cases, but it is not a silver bullet.

I'll surely send them your reply tomorrow and I'm sure they will provide a few typical DBA-style reasons to justify it ..... (normally they use wait events, which normal developers don't understand...)
Tom Kyte
March 25, 2012 - 2:05 pm UTC

Ask them this simple question:

how many IO's does it take to retrieve a record via an index *at best*, what are those IO's (describe how an index works).

Now, that said, how is it that an index on (a,b) would be better than (b,a) for a predicate "where a=? and b=?"? Since the index height for both would be the same pretty much, we'd do just as many IO's through one as the other.

Please provide the *science* behind your selectivity claim. I have science that can prove the opposite is many times true. Put the least selective first, use index key compression and reduce the height (overall size in fact) of the b*tree index and you'll reduce the amount of work it takes to get a row from a table via that index.


further, if the predicate is "where a > ? and b = ?" it would be *not smart* at all to put a first, even if a were unique and b had three values.

Rewrite the Predicate Meaning

Jim Cox, April 20, 2012 - 2:25 pm UTC

Hi Tom

Can you answer me a question I have on this message from OEM tuning advisor?
"Rewrite the predicate into an equivalent form to take advantage of indices."

I am not sure what that means exactly. Can you explain it to me in terms of the sql below. How would I re-write it ?

Thanks
Jim

**********
My Sql Code:

select
to_char(a.auditdate,'yyyy') yearofyear,
count (a.auditstr) livelinkevents
from
&X_schemaowner..dauditnew a,
&X_schemaowner..kuaf b
where
b.groupid = :v_groupid
and
a.performerid = b.id
and
a.auditid NOT IN (&X_excludeevents)
and
a.performerid NOT IN (&X_testusers)
group by
to_char(a.auditdate,'yyyy')
order by
to_char(a.auditdate,'yyyy');


OEM Sql from Tuning Advisor:

select
to_char(a.auditdate,'yyyy') yearofyear,
count (a.auditstr) livelinkevents
from
LLPROD.dauditnew a,
LLPROD.kuaf b
where
b.groupid = :v_groupid
and
a.performerid = b.id
and
a.auditid NOT IN (0,9,10,23,29,99,66601)
and
a.performerid NOT IN (27731942,27732162)
group by
to_char(a.auditdate,'yyyy')
order by
to_char(a.auditdate,'yyyy')


Predicate "A"."PERFORMERID"<>27731942 used at line ID 3 of the execution plan is an inequality condition on indexed column "PERFORMERID".
This inequality condition prevents the optimizer from selecting indices on table "LLPROD"."DAUDITNEW".

Predicate "A"."PERFORMERID"<>27732162 used at line ID 3 of the execution plan is an inequality condition on indexed column "PERFORMERID".
This inequality condition prevents the optimizer from selecting indices on table "LLPROD"."DAUDITNEW"

Tom Kyte
April 22, 2012 - 8:58 am UTC

It is just a rule-driven bit of advice. It is using a checklist to offer suggestions. Not all suggestions will always make sense. Even the advice to use an index does not always make sense (it is wrong more often than you think). The advice here is just a hint, a thing to look at.

I could rewrite

and a.performerid not in ( 10, 20 )

as

and ((a.performerid < 10) OR (a.performerid > 10 and a.performerid < 20 ) or (a.performerid > 20))


but I doubt it would make sense. It is just a rule driven off of "inequalities do not use indexes to search for data" but equalities will. If you express an inequality as an equality - you MIGHT open up an access path that makes sense - then again, you might not.

Rewrite the Predicate Meaning - Thanks

Jim Cox, April 23, 2012 - 4:49 pm UTC

Thanks once again Tom

Appreciate it

Jim

SQL Query Exhausts Temp Space Now

Jim Cox, April 26, 2012 - 2:05 pm UTC

Hi Tom

I am running into a TEMP space issue and I think it is because the tables I am querying have now grown.
I normally have 6GB of temp space, but when I run either of these two queries, the temp space gets exhausted. I can increase the TEMP space, but

1. I wanted to ask you how I can determine which of these is the more efficient query.
2. What should I be doing to determine the efficiency of queries I run, especially when I have written them differently and they produce the same results?

for example of these two, which is more efficient ?

Thanks
Jim

--setup

column date4 new_value X_endmon noprint;
select TO_CHAR (LAST_DAY (ADD_MONTHS (sysdate,-1)),'YYYYMMDD') date4 from dual;

variable v_startdate varchar2(14);
variable v_endmon varchar2(14);

exec :v_startdate :='20050101000000';
exec :v_endmon :='&X_endmon'||'235959';

Query 1

select
TO_CHAR (a.auditdate,'yyyy') yeartodate,
COUNT (a.auditdate) livelinklogins,
COUNT (DISTINCT a.performerid) distinctusers
from
&X_schemaowner..dauditnew a,
&X_schemaowner..kuaf b
where
a.auditstr = 'Login'
and
a.auditdate >= TO_DATE (:v_startdate,'yyyymmddhh24miss')
and
a.auditdate <= TO_DATE (:v_endmon,'yyyymmddhh24miss')
and
a.performerid = b.id
and
b.groupid = :v_groupid
group by
TO_CHAR (a.auditdate,'yyyy')
order by
TO_CHAR (a.auditdate,'yyyy');



Query 2

WITH
s0 AS
(
select
a.auditdate auditdate,
a.performerid id
from
&X_schemaowner..dauditnew a,
&X_schemaowner..kuaf b
where
b.groupid = :v_groupid
and
a.performerid = b.id
and
a.auditstr = 'Login'
)
select
TO_CHAR (auditdate,'yyyy') yeartodate,
COUNT (auditdate) livelinklogins,
COUNT (DISTINCT (id)) distinctusers
from
s0
where
auditdate >= TO_DATE (:v_startdate,'yyyymmddhh24miss')
and
auditdate <= TO_DATE (:v_endmon,'yyyymmddhh24miss')
group by
TO_CHAR (auditdate,'yyyy')
order by
TO_CHAR (auditdate,'yyyy');
Tom Kyte
April 26, 2012 - 3:34 pm UTC


1. I wanted to ask you how I can determine which of these is a more efficient
query.
2. What should I be doing to determine efficiency of queries I run, especially
when i have written them differently and they produce the same results

for example of these two, which is more efficient ?


I typically use dbms_monitor.session_trace_enable( waits => true ) and TKPROF...




I recommend avoiding "to_char(dt,'yyyy')", use trunc(dt,'y'). The NLS code to do the character conversion is horribly complex - you don't want to invoke it millions of times. TRUNC() just sets some bytes at the end of a 7 byte field - very very fast.


both of those queries are almost certainly using the same plan, I'd be surprised if they weren't. They are the same.
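
A sketch of that trace-and-TKPROF workflow (the trace file name is illustrative):

```sql
-- Enable extended SQL trace, including wait events, for this session
exec dbms_monitor.session_trace_enable( waits => true, binds => false )

-- ... run the candidate queries here ...

exec dbms_monitor.session_trace_disable

-- Then, from the OS prompt, format the raw trace file, e.g.:
--   tkprof orclsid_ora_12345.trc report.txt sys=no
-- and compare elapsed time, logical I/O and the row source operations
-- of the two queries side by side.
```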

SQL Query Exhausts Temp Space Now-Followup

Jm Cox, May 07, 2012 - 1:02 pm UTC

Hi Tom


I tried to replace the TO_CHAR but
was not able to use your suggestion as I need the year in 'yyyy' format

i.e

Year LiveLink Connects Distinct Logins
---------------- ------------------- ---------------
2005 57,622 403
2006 139,851 520
2007 149,741 592
2008 166,130 851

etc, etc


Thanks
Jim
Tom Kyte
May 07, 2012 - 2:14 pm UTC

so, format it in the select list, don't order by it, don't group by it, don't process it that way.



You can format data any way you like.


select to_char( dt_col, 'yyyy' ), .....
from (your query with group by)
order by dt_col;

do the expensive stuff way at the end - you do not have to use yyyy in the query proper

SQL Query Exhausts Temp Space Now-Select List

Jim Cox, May 14, 2012 - 4:40 pm UTC

Hi Tom

I understand your point, but I do not understand how I would write this query using your guidelines. I am a wee bit inept with the syntax. Can you enlighten me on this one?

Thanks
Jim

example:

select
TO_CHAR (a.auditdate,'yyyy') yeartodate,
COUNT (a.auditdate) livelinklogins,
COUNT (DISTINCT a.performerid) distinctusers
from
&X_schemaowner..dauditnew a,
&X_schemaowner..kuaf b
where
a.auditstr = 'Login'
and
a.auditdate >= TO_DATE (:v_startdate,'yyyymmddhh24miss')
and
a.auditdate <= TO_DATE (:v_endmon,'yyyymmddhh24miss')
and
a.performerid = b.id
and
b.groupid = :v_groupid
group by
TO_CHAR (a.auditdate,'yyyy')
order by
TO_CHAR (a.auditdate,'yyyy');
Tom Kyte
May 14, 2012 - 11:50 pm UTC

WITH
s0 
AS
( select a.auditdate auditdate, a.performerid id
    from &X_schemaowner..dauditnew a,
         &X_schemaowner..kuaf b
   where b.groupid = :v_groupid   
     and a.performerid = b.id
     and a.auditstr = 'Login'
)    
select to_char(yeartodateDt,'yyyy') yeartodate,
       liveLinkLogins,
       distinctUsers
  from (
select trunc(auditdate,'y') yeartodateDt,
       COUNT (auditdate) livelinklogins,
       COUNT (DISTINCT (id)) distinctusers
  from s0
 where auditdate >= TO_DATE (:v_startdate,'yyyymmddhh24miss')
   and auditdate <= TO_DATE (:v_endmon,'yyyymmddhh24miss')
 group by trunc( auditdate, 'y' )
       )
 order by yeartodateDt
/

SQL Query Exhausts Temp Space Now-Select List -Thanks

Jim Cox, May 16, 2012 - 4:58 pm UTC

Thanks Tom

yeah.. no way I would have come up with this. A select within a select always baffles me..

Appreciate It!
Jim

SQL Using TRUNC Formats

Jim Cox, May 22, 2012 - 10:58 am UTC

Hi Tom

I have re-coded a lot of SQL using your example for anything that pulls out data by year, but I am having difficulty trying to use a format other than 'y' for TRUNC to pull out data by year and month.

Currently I get results like this using the code you provided

Year LiveLink Connects Distinct Logins
---------------- ------------------- ---------------
2005 57,584 402
2006 139,748 519
2007 149,499 591
2008 166,340 850
2009 125,760 929
2010 120,661 956
2011 135,008 1,061
2012 52,172 921
-------------------
sum 946,772

I was trying to re-write the code so that I could get a breakdown by month within a year, but that does not seem possible with TRUNC, as it only accepts one format element and not a combination like 'yyyymm'.

Is this output possible with trunc ?

Month of Year LiveLink Connects Distinct Logins
---------------- ------------------- ---------------
2012/01 13,719 768
2012/02 12,587 776
2012/03 12,637 779
2012/04 13,229 763
-------------------
sum 52,172


This is the old code I have been using which you advised not to do

WITH
s0 AS
(
select
a.auditdate auditdate,
a.performerid id
from
&X_schemaowner..dauditnew a,
&X_schemaowner..kuaf b
where
b.groupid = :v_groupid
and
a.performerid = b.id
and
a.auditstr = 'Login'
)
select
TO_CHAR (auditdate,'yyyy/mm') dayofmonth,
COUNT (auditdate) livelinklogins,
COUNT (DISTINCT (id)) distinctusers
from
s0
where
auditdate >= TO_DATE (:v_year,'yyyymmddhh24miss')
and
auditdate <= TO_DATE (:v_endmon,'yyyymmddhh24miss')
group by
TO_CHAR (auditdate,'yyyy/mm')
order by
TO_CHAR (auditdate,'yyyy/mm');


Thanks
Jim



Tom Kyte
May 22, 2012 - 11:01 am UTC

truncate the date to a month level, not a year level then:

ops$tkyte%ORA11GR2> select trunc(sysdate,'mm') from dual;

TRUNC(SYS
---------
01-MAY-12




you can truncate off

seconds, hours, minutes, days, months......

check out the trunc function in the sql language reference manual.
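
Applying that to the earlier query, a sketch (adapted from the rewrite above; the s0 subquery and bind names are as before):

```sql
-- Group on the raw truncated date; apply the 'yyyy/mm' formatting
-- only at the very end, on the handful of summary rows
select to_char( mon, 'yyyy/mm' )  monthofyear,
       livelinklogins,
       distinctusers
  from (
        select trunc( auditdate, 'mm' )  mon,
               count(auditdate)          livelinklogins,
               count(distinct id)        distinctusers
          from s0
         where auditdate >= to_date( :v_year,   'yyyymmddhh24miss' )
           and auditdate <= to_date( :v_endmon, 'yyyymmddhh24miss' )
         group by trunc( auditdate, 'mm' )
       )
 order by mon;
```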

SQL Using TRUNC Formats-Thanks

Jim Cox, May 29, 2012 - 11:07 am UTC

Hi Tom

my bad, I was not selecting the entire set as I was using rownum < 10000, so it looked like I was not getting all the data back, only one month.

Thanks
Jim

Explain Plan Assistance

Jim Cox, May 29, 2012 - 11:16 am UTC

Hi Tom

I am having difficulty determining why, when I run this query the first time, it uses up my TEMP space, but if I accept the new explain plan (99.73% benefit) from OEM and then run the query a second time, it runs fine with the new plan.

I do not understand why it uses the TEMP space the first time and then does not use it after I accept the OEM recommendation.

How can I get the sql to run with the best plan the first time ?

Can you shed some light on this one ?

Thanks
Jim

this is the original run

select
count (1)
from
&X_schemaowner..dauditnew a,
&X_schemaowner..kuaf b
where
b.groupid = :v_groupid
and
a.performerid = b.id
and
a.auditstr = 'Login'
and
auditdate >= TO_DATE (:v_startdate,'yyyymmddhh24miss')
and
auditdate <= TO_DATE (:v_endmon,'yyyymmddhh24miss')
and
rownum < 10000;

Error:

LLPROD.dauditnew a,
*
ERROR at line 4:
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP


Tom Kyte
May 30, 2012 - 12:51 am UTC

I assume when you say:

but if I accept the new explain plan (99.73% benefit)
from OEM and then run the query a second time it runs fine with the new plan.


you mean you generated a sql profile, correct? If so, the answer is "that is what sql profiles are for - to get us to the correct plan" (if the correct plan could be had on the first try every single time without a sql profile ever - why would sql profiles exist??)


In order to get a correct plan, the optimizer needs to be able to more or less accurately estimate cardinalities. Read this:

http://www.oracle.com/technetwork/issue-archive/2009/09-jan/o19asktom-086775.html

first and see what I mean by that. Now, in your case - for whatever reason (I don't have your table definition, I don't know your data, I don't know what inputs you used to run this query and hard parse it, etc etc etc) we must have been getting an incorrect cardinality estimate. Perhaps you didn't gather statistics, perhaps you didn't gather the right type of statistics, perhaps the optimizer had to "guess" like that article shows and needed a bit more help. The sql profile is used to correct that, to supply better estimated cardinalities - which let us get the right plan right off the bat.

think of a sql profile as if it were statistics on a query. you gather statistics on a table, on an index, on a column, on a set of columns, on an expression - and perhaps even on a query.

Explain Plan Assistance -Followup

Jim Cox, June 06, 2012 - 12:56 pm UTC


Hi Tom

appreciate the info you passed to me and the article
and yes i was talking about sql profiles in OEM

I am trying to apply your document to my case
but I do not see any difference between Code Listing 1 and 2

What am I missing ?

When I look at my plans below I see row estimates of 1820 and 269;
is that correct?
Not even close to the 950,000 rows I actually get when the SQL runs with plan 2.

Thanks
Jim

===================
Unanalyzed Segments
===================

select count (*) from dauditnew;


COUNT(*)
----------
179045409

===================
Code Listing 1: Disabling dynamic sampling to see default cardinalities
===================

set autotrace traceonly explain
select /*+ dynamic_sampling(dauditnew 0) */ * from dauditnew;

Execution Plan
----------------------------------------------------------
Plan hash value: 84383378

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 171M| 16G| 351K (11)| 00:12:36 |
| 1 | TABLE ACCESS FULL| DAUDITNEW | 171M| 16G| 351K (11)| 00:12:36 |


===================
Code Listing 2: More-realistic cardinalities with dynamic sampling enabled
===================

select * from dauditnew;

Execution Plan
----------------------------------------------------------
Plan hash value: 84383378

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 171M| 16G| 351K (11)| 00:12:36 |
| 1 | TABLE ACCESS FULL| DAUDITNEW | 171M| 16G| 351K (11)| 00:12:36 |
-------------------------------------------------------------------------------


##################################

******************
No Profile (plan 1)
******************

set autotrace traceonly explain


select
a.auditdate auditdate,
a.performerid id
from
LLPROD.dauditnew a,
LLPROD.kuaf b
where
b.groupid = :v_groupid
and
a.performerid = b.id
and
a.auditstr = 'Login'
and
a.auditdate >= TO_DATE (:v_startdate,'yyyymmddhh24miss')
and
a.auditdate <= TO_DATE (:v_endmon,'yyyymmddhh24miss')
--and
-- rownum < 1000000;


Execution Plan
----------------------------------------------------------
Plan hash value: 1272927440

----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1820 | 60060 | 1010 (2)| 00:00:03 |
|* 1 | FILTER | | | | | |
|* 2 | HASH JOIN | | 1820 | 60060 | 1010 (2)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID| KUAF | 147 | 1470 | 6 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | KUAF_GROUPID | 147 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DAUDITNEW | 15965 | 358K| 1003 (2)| 00:00:03 |
|* 6 | INDEX RANGE SCAN | DAUDITNEW_INDEXDATEEVENT | 28738 | | 384 (3)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(TO_DATE(:V_STARTDATE,'yyyymmddhh24miss')<=TO_DATE(:V_ENDMON,'yyyymmddhh24miss'))
2 - access("A"."PERFORMERID"="B"."ID")
4 - access("B"."GROUPID"=TO_NUMBER(:V_GROUPID))
6 - access("A"."AUDITDATE">=TO_DATE(:V_STARTDATE,'yyyymmddhh24miss') AND
"A"."AUDITSTR"='Login' AND "A"."AUDITDATE"<=TO_DATE(:V_ENDMON,'yyyymmddhh24miss'))
filter("A"."AUDITSTR"='Login')


******************
Accepting profile (plan 2)
******************

Execution Plan
----------------------------------------------------------
Plan hash value: 1272927440

----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 269 | 8877 | 8 (13)| 00:00:01 |
|* 1 | FILTER | | | | | |
|* 2 | HASH JOIN | | 269 | 8877 | 8 (13)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| KUAF | 147 | 1470 | 6 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | KUAF_GROUPID | 147 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| DAUDITNEW | 15965 | 358K| 1 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | DAUDITNEW_INDEXDATEEVENT | 1 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(TO_DATE(:V_STARTDATE,'yyyymmddhh24miss')<=TO_DATE(:V_ENDMON,'yyyymmddhh24miss'))
2 - access("A"."PERFORMERID"="B"."ID")
4 - access("B"."GROUPID"=TO_NUMBER(:V_GROUPID))
6 - access("A"."AUDITDATE">=TO_DATE(:V_STARTDATE,'yyyymmddhh24miss') AND
"A"."AUDITSTR"='Login' AND "A"."AUDITDATE"<=TO_DATE(:V_ENDMON,'yyyymmddhh24miss'))
filter("A"."AUDITSTR"='Login')

Note
-----
- SQL profile "SYS_SQLPROF_014df0a970fa8001" used for this statement





Tom Kyte
June 06, 2012 - 1:05 pm UTC

I have no clue what you are trying to show with that full scan example. You run it once with level 0 (disabled) and you run it again with the default level that won't do anything given your very very simple tiny query with no where clauses or anything????


I don't think this segment is "unanalyzed", you don't show us everything - you cut off the autotrace report that would have told us :(


use this method so we can see actual vs estimated.
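
That method is presumably the gather_plan_statistics approach; in sketch form (the query is illustrative):

```sql
-- The hint collects actual row counts per plan step as the query runs
select /*+ gather_plan_statistics */ count(*)
  from dauditnew
 where auditstr = 'Login';

-- 'ALLSTATS LAST' prints E-Rows (estimated) next to A-Rows (actual)
-- for the last execution of the cursor just run in this session
select *
  from table( dbms_xplan.display_cursor( null, null, 'ALLSTATS LAST' ) );
```

Where E-Rows and A-Rows diverge by orders of magnitude is where the optimizer's estimate, and hence the plan, goes wrong.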

Explain Plan Assistance -Followup

Jim Cox, June 07, 2012 - 4:53 pm UTC

Hi Tom

okay.. I muffed it


I was trying to simulate what you presented in the Dynamic Sampling document by replacing your "t" table with my "dauditnew" table under "Unanalyzed Segments," "Code Listing 1," and "Code Listing 2."

That is what I was trying to accomplish in the first part.

In the second part, with my SQL, I was showing you the explain plan before and after I accepted the SQL profile.
That is all that was reported when I did "set autotrace traceonly explain".

I will make another attempt to get it right.

Thanks
Jim

Excellent forum

Mini, November 01, 2012 - 1:06 am UTC

Hi Tom,
Thank you very much for your guidance. This site really helps me in solving my day to day problem .
I need to tune one query; it's taking more than 30 minutes to get one month of data. Will you please help me understand this?

Query :

SELECT A.LOCATION,A.EQPTYPE, A.EQPID,A.STAGE_ID,A.RECIPE_ID,
SUM (A.CURRENT_LOT_QTY)
,TIME.YYYYMMDD
FROM TIME, (
SELECT
LOT_HST_LOTRUN.LOCATION ,
LOT_HST_LOTRUN.EQPTYPE,
LOT_HST_LOTRUN.EQPID,
LOT_HST_LOTRUN.STAGE_ID,
LOT_HST_LOTRUN.RECIPE_ID,
LOT_HST_WIP.CURRENT_LOT_QTY,
LOT_HST_LOTRUN.LOTID ,
LOT_HST_LOTRUN.STEP_BEGIN_TIME ,
LOT_HST_LOTRUN.REPORT_HOUR_MOVE
FROM
LOT_HST_LOTRUN,
LOT_HST_WIP
WHERE
LOT_HST_LOTRUN.LOT_TYPE != 'TW'
AND LOT_HST_LOTRUN.TRACK_OUT_FLAG = 'Y'

AND LOT_HST_LOTRUN.STEP_BEGIN_TIME =LOT_HST_WIP.STEP_BEGIN_TIME
AND LOT_HST_LOTRUN.LOTID =LOT_HST_WIP.LOTID
AND LOT_HST_LOTRUN.REPORT_HOUR_MOVE <TO_DATE ('30-OCT-2012 12:00:00', 'DD-MON-YYYY HH24:MI:SS')
AND LOT_HST_LOTRUN.REPORT_HOUR_MOVE >= TO_DATE ('01-OCT-2012 12:00:00', 'DD-MON-YYYY HH24:MI:SS')

) A
WHERE
TO_CHAR(TIME.YYYYMMDD ,'HH24')<>'20'
AND A.REPORT_HOUR_MOVE < TIME.YYYYMMDD + 1
AND A.REPORT_HOUR_MOVE >= TIME.YYYYMMDD
AND TIME.YYYYMMDD < TO_DATE ('30-OCT-2012 12:00:00', 'DD-MON-YYYY HH24:MI:SS') + 1
AND TIME.YYYYMMDD >= TO_DATE ('01-OCT-2012 12:00:00', 'DD-MON-YYYY HH24:MI:SS') - 1
GROUP BY
A.LOCATION,A.EQPTYPE, A.EQPID,A.STAGE_ID,A.RECIPE_ID
,TIME.YYYYMMDD


Execution Plan
----------------------------------------------------------
   0  SELECT STATEMENT Optimizer=ALL_ROWS (Cost=13 Card=1 Bytes=106)
   1    0  HASH (GROUP BY) (Cost=13 Card=1 Bytes=106)
   2    1  TABLE ACCESS (BY INDEX ROWID) OF 'LOT_HST_WIP' (TABLE) (Cost=4 Card=7 Bytes=168)
   3    2  NESTED LOOPS (Cost=12 Card=1 Bytes=106)
   4    3  NESTED LOOPS (Cost=8 Card=1 Bytes=82)
   5    4  INDEX (RANGE SCAN) OF 'TIME_PK' (INDEX) (Cost=2 Card=1 Bytes=7)
   6    4  TABLE ACCESS (BY INDEX ROWID) OF 'LOT_HST_LOTRUN' (TABLE) (Cost=6 Card=2 Bytes=150)
   7    6  INDEX (RANGE SCAN) OF 'LOT_HST_LOTRUN_IK07' (INDEX) (Cost=4 Card=2)
   8    3  INDEX (RANGE SCAN) OF 'LOT_HST_WIP_PK' (INDEX (UNIQUE)) (Cost=3 Card=1)



Statistics
----------------------------------------------------------
9 recursive calls
0 db block gets
3599258 consistent gets
276921 physical reads
31140 redo size
5655073 bytes sent via SQL*Net to client
70563 bytes received via SQL*Net from client
6378 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
95648 rows processed

Thanks for your help !
Tom Kyte
November 01, 2012 - 10:11 pm UTC

are your statistics representative of your data?

it would seem not; do you see the card= values in this plan? the optimizer thinks it will get very few rows (like one or two), but obviously it is getting far more.
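
A minimal sketch of gathering representative statistics for one of the tables involved (the table name is taken from the question; the parameters shown are illustrative defaults):

```sql
begin
   dbms_stats.gather_table_stats(
      ownname => user,              -- owning schema
      tabname => 'LOT_HST_LOTRUN',  -- one of the joined tables
      cascade => true );            -- gather index statistics too
end;
/
```

Repeating this for each table in the query, then re-checking the Card= values in the plan, would show whether the estimates move toward reality.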

How to find Row Source Operation without tracing the session?

Moorthy Rekapalli, November 05, 2012 - 9:47 am UTC

Tom,

I learn a lot from your site and thank you very much for your contribution to Oracle community. Please correct me if I am wrong.

Isn't explain plan a parse-time plan? In other words, Oracle will most probably follow this path, but it is not guaranteed 100%. Sometimes I have seen differences between the explain plan and the "Row Source Operation" we get by running tkprof on the trace file generated by - alter session set events '10046 trace name context forever, level 1';

My question is, without tracing the session; is there any other way to find the "Row Source Operation" for a query?

Thanks,
Moorthy.

Tom Kyte
November 05, 2012 - 9:53 am UTC

Isn't explain plan a parse time plan?

no, it is a guess that considers all binds to be varchars (probably wrong) and doesn't bind peek (hence not real world)...


you can see the actual plan used by using v$sql_plan (dbms_xplan.display_cursor - pass it the sql_id of the query in question)
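
In sketch form (the comment marker used to find the cursor is illustrative):

```sql
-- Find the sql_id and child number of the cursor of interest
select sql_id, child_number, sql_text
  from v$sql
 where sql_text like 'select /* my_query */%';

-- Show the plan actually used at execution time for that cursor
select *
  from table( dbms_xplan.display_cursor( '&sql_id', 0 ) );
```

Unlike explain plan, this reports what the optimizer really did, bind peeking and all.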

Optimizer Peeking Binds

Moorthy Rekapalli, November 05, 2012 - 10:38 am UTC

Tom,

Thanks for your quick response. I am glad that you mentioned peeking at the bind value; in my experience, when dealing with filters on columns that have uneven data distribution, this peeking created some surprises.

Based on when optimizer peeks, the value may differ and execution plan may be very different based on that.

I know that mileage will vary for each application and one has to carefully profile/instrument the code before deploying in production. But, I want to know your opinion about optimizer peeking the bind value and the pros and cons that it brings and especially to plan stability.

If the query is aged out of library cache, it may re-peek and re-parse. Does AWR/ADDM also signal re-peeking of bind values for resource intensive queries so that it can self-heal?

Thanks again,
Moorthy.
Tom Kyte
November 05, 2012 - 11:19 am UTC

Does
AWR/ADDM also signal re-peeking of bind values for resource intensive queries
so that it can self-heal?


I don't know what you are trying to say there... ADDM is a reporting facility on top of AWR. AWR is a historical picture of what happened.

In a report/warehouse situation - you typically would not be using binds in the queries and histograms are "good"

In an OLTP situation - this is where it gets dicey. There you want queries that take way less than a second (milliseconds) in duration. And you want consistency.

See http://asktom.oracle.com/Misc/sqltracetrue-part-two.html for a short talk on that. (look for "SO, WHAT THE HECK DO WE DO ABOUT IT?" on that page)


estimated rows vs actual rows

A reader, November 08, 2012 - 11:24 am UTC

Tom,

With /*+ gather_plan_statistics */, dbms_xplan.display_cursor shows that the optimizer thinks an index range scan returns far more rows than the actual (only 4 rows); therefore the join to another table on its PK column is executed as a full table scan (a few million rows) plus a hash join with the first table's index range scan result.

I re-collected the table and index stats, and it still does the same. I can add a hint to force a nested loop with the index, but how can I find out why the estimated rows are so far off from the actual rows?

Thanks
Tom Kyte
November 08, 2012 - 2:16 pm UTC

let's see the output of dbms_xplan - the entire output from top to bottom.
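For reference, a typical way to produce that output, run immediately after the hinted query in the same session (a sketch; the format string is what exposes the E-Rows vs A-Rows columns):

```sql
-- Run the /*+ gather_plan_statistics */ query first, then:
select *
  from table(dbms_xplan.display_cursor(
               sql_id          => null,   -- null = last statement in this session
               cursor_child_no => null,
               format          => 'ALLSTATS LAST'));
```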

A reader, November 08, 2012 - 3:08 pm UTC

Plan hash value: 664945124

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | O/1/M |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | | 2154 (100)| | 4 |00:00:18.65 | 51956 | 51819 | | | |
|* 1 | HASH JOIN | | 1 | 48480 | 5870K| 3648K| 2154 (1)| 55:46:36 | 4 |00:00:18.65 | 51956 | 51819 | 1114K| 1114K| 1/0/0|
|* 2 | HASH JOIN | | 1 | 48480 | 3077K| | 1541 (1)| 39:54:12 | 4 |00:00:14.88 | 41321 | 41232 | 1156K| 1156K| 1/0/0|
|* 3 | TABLE ACCESS FULL| NODE_BRIDGE | 1 | 45268 | 751K| | 1282 (0)| 33:11:48 | 4 |00:00:00.43 | 34470 | 34458 | | | |
|* 4 | TABLE ACCESS FULL| NODE_TREE_ENTRY | 1 | 260K| 11M| | 258 (0)| 06:40:51 | 264K|00:00:11.42 | 6851 | 6774 | | | |
|* 5 | TABLE ACCESS FULL | NODE_MEMBER | 1 | 155K| 8957K| | 399 (0)| 10:19:55 | 150K|00:00:00.33 | 10635 | 10587 | | | |
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$9E43CB6E
3 - SEL$9E43CB6E / NB@SEL$1
4 - SEL$9E43CB6E / NTE@SEL$2
5 - SEL$9E43CB6E / NM@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("NM"."NODE_ID"="NB"."DESCENDANT_NODE_ID")
2 - access("NTE"."NODE_ID"="NB"."DESCENDANT_NODE_ID" AND "NTE"."NODE_TREE_TYPE_ID"="NB"."NODE_TREE_TYPE_ID")
3 - filter(("NB"."DEPTH_FROM_ANCESTOR"<=1 AND "NB"."ANCESTOR_NODE_ID"=5 AND "NB"."NODE_TREE_TYPE_ID"=3))
4 - filter("NTE"."NODE_TREE_TYPE_ID"=3)
5 - filter(("NM"."NODE_TYPE_NAME"<>'POINT' AND "NM"."NODE_TYPE_NAME"<>'CONTROL'))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - (#keys=1) "NM"."NODE_ID"[NUMBER,22], "NTE"."DISPLAY_LABEL"[VARCHAR2,255], "NB"."DEPTH_FROM_ANCESTOR"[NUMBER,22], "NTE"."PARENT_NODE_ID"[NUMBER,22],
"NM"."NODE_TYPE_NAME"[VARCHAR2,255], "NM"."DEFAULT_DISPLAY_LABEL"[VARCHAR2,255]
2 - (#keys=2) "NB"."DESCENDANT_NODE_ID"[NUMBER,22], "NB"."DEPTH_FROM_ANCESTOR"[NUMBER,22], "NTE"."PARENT_NODE_ID"[NUMBER,22], "NTE"."DISPLAY_LABEL"[VARCHAR2,255]
3 - "NB"."DESCENDANT_NODE_ID"[NUMBER,22], "NB"."NODE_TREE_TYPE_ID"[NUMBER,22], "NB"."DEPTH_FROM_ANCESTOR"[NUMBER,22]
4 - "NTE"."NODE_TREE_TYPE_ID"[NUMBER,22], "NTE"."NODE_ID"[NUMBER,22], "NTE"."PARENT_NODE_ID"[NUMBER,22], "NTE"."DISPLAY_LABEL"[VARCHAR2,255]
5 - "NM"."NODE_ID"[NUMBER,22], "NM"."DEFAULT_DISPLAY_LABEL"[VARCHAR2,255], "NM"."NODE_TYPE_NAME"[VARCHAR2,255]

A reader, November 08, 2012 - 3:16 pm UTC

Ignore the previous plan output. To simplify a little, I just query one table, and there is a non-unique index on the 3 columns. See the different plans below:

select /*+ gather_plan_statistics */ * from node_bridge nb
where nb.node_tree_type_id=3 and
nb.ancestor_node_id=5 and
nb.depth_from_ancestor = 1

vs

select /*+ gather_plan_statistics index(nb) */ * from node_bridge nb
where nb.node_tree_type_id=3 and
nb.ancestor_node_id=5 and
nb.depth_from_ancestor = 1

Plan hash value: 3252443986

------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers | Reads |
------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 1282 (100)| | 4 |00:00:01.68 | 34469 | 34457 |
|* 1 | TABLE ACCESS FULL| NODE_BRIDGE | 1 | 45268 | 3094K| 1282 (0)| 33:11:48 | 4 |00:00:01.68 | 34469 | 34457 |
------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1 / NB@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("NB"."DEPTH_FROM_ANCESTOR"=1 AND "NB"."ANCESTOR_NODE_ID"=5 AND "NB"."NODE_TREE_TYPE_ID"=3))

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "NB"."ANCESTOR_NODE_ID"[NUMBER,22], "NB"."DESCENDANT_NODE_ID"[NUMBER,22], "NB"."NODE_TREE_TYPE_ID"[NUMBER,22],
"NB"."TREE_SEQUENCE"[NUMBER,22], "NB"."DEPTH_FROM_ROOT"[NUMBER,22], "NB"."DEPTH_FROM_ANCESTOR"[NUMBER,22],
"NB"."HAS_CHILDREN_IND"[NUMBER,22], "NB"."IS_LAST_SIBLING_IND"[NUMBER,22], "NB"."DEPTH_ANC_MORE_KIDS"[VARCHAR2,255],
"NB"."BREADCRUMB_NODE_IDS"[VARCHAR2,1000], "NB"."ANCESTOR_BASE_NODE"[NUMBER,22], "NB"."DESCENDANT_BASE_NODE"[NUMBER,22],
"NB"."ANCESTOR_IS_SUB_METER_IND"[NUMBER,22], "NB"."DESCENDANT_IS_SUB_METER_IND"[NUMBER,22]

vs


Plan hash value: 1802202345

---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | | 145 (100)| | 4 |00:00:00.13 | 6 | 6 |
| 1 | TABLE ACCESS BY INDEX ROWID| NODE_BRIDGE | 1 | 4 | 280 | 145 (0)| 03:45:17 | 4 |00:00:00.13 | 6 | 6 |
|* 2 | INDEX RANGE SCAN | IDX_NODE_BRIDGE_TREE_ANC_SEQ | 1 | 4 | | 144 (0)| 03:43:44 | 4 |00:00:00.12 | 3 | 3 |
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

1 - SEL$1 / NB@SEL$1
2 - SEL$1 / NB@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("NB"."NODE_TREE_TYPE_ID"=3 AND "NB"."ANCESTOR_NODE_ID"=5 AND "NB"."DEPTH_FROM_ANCESTOR"=1)

Column Projection Information (identified by operation id):
-----------------------------------------------------------

1 - "NB"."ANCESTOR_NODE_ID"[NUMBER,22], "NB"."DESCENDANT_NODE_ID"[NUMBER,22], "NB"."NODE_TREE_TYPE_ID"[NUMBER,22], "NB"."TREE_SEQUENCE"[NUMBER,22],
"NB"."DEPTH_FROM_ROOT"[NUMBER,22], "NB"."DEPTH_FROM_ANCESTOR"[NUMBER,22], "NB"."HAS_CHILDREN_IND"[NUMBER,22], "NB"."IS_LAST_SIBLING_IND"[NUMBER,22],
"NB"."DEPTH_ANC_MORE_KIDS"[VARCHAR2,255], "NB"."BREADCRUMB_NODE_IDS"[VARCHAR2,1000], "NB"."ANCESTOR_BASE_NODE"[NUMBER,22],
"NB"."DESCENDANT_BASE_NODE"[NUMBER,22], "NB"."ANCESTOR_IS_SUB_METER_IND"[NUMBER,22], "NB"."DESCENDANT_IS_SUB_METER_IND"[NUMBER,22]
2 - "NB".ROWID[ROWID,10], "NB"."NODE_TREE_TYPE_ID"[NUMBER,22], "NB"."ANCESTOR_NODE_ID"[NUMBER,22], "NB"."DEPTH_FROM_ANCESTOR"[NUMBER,22]

Note
-----
- cardinality feedback used for this statement

Tom Kyte
November 08, 2012 - 3:37 pm UTC

I'm assuming that for this predicate:

filter(("NB"."DEPTH_FROM_ANCESTOR"=1 AND "NB"."ANCESTOR_NODE_ID"=5 AND
"NB"."NODE_TREE_TYPE_ID"=3))


you have single column statistics on each. We take the probability that depth_from_ancestor = 1 and multiply that by the probability that ancestor_node_id = 5 and so on.

I'll guess it is the same issue as I demonstrated here:

http://www.oracle.com/technetwork/issue-archive/2009/09-jan/o19asktom-086775.html


there is a correlation between those three columns and the statistics don't exist for it.


you could

o use dynamic sampling
o create a sql profile (better if you used binds and reusable sql so probably not totally appropriate here)
o create extended statistics on all three columns in 11g



It figured out the mistake after running the query once - hence the fixed-up plan with cardinality feedback.


give dynamic sampling a whirl.
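That first option can be tried per statement with a hint (a sketch based on the query shown above; the second hint argument is the sampling level, and a higher level means a larger sample):

```sql
-- Ask the optimizer to sample NODE_BRIDGE at parse time (level 4 here)
-- so it can validate its guess for the three correlated predicates
select /*+ gather_plan_statistics dynamic_sampling(nb 4) */ *
  from node_bridge nb
 where nb.node_tree_type_id   = 3
   and nb.ancestor_node_id    = 5
   and nb.depth_from_ancestor = 1;
```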

SQL Tuning

A reader, November 08, 2012 - 4:44 pm UTC

I'm on a 10.2.0.5 DB and running this query:

select a2.bwtdocumentnumber,
 a2.title,
 a2.saprev,
 a2.actualdoctype,
 a2.department,
 a2.owner,
 a0.bwtdocumentnumber,
 a0.description,
 a0.plant
 from v_bscrouter a0,
 WTDocumentUsageLink  a1,
 v_bscdocument a2
 where a0.ida2a2 = a1.ida3a5
 and a1.ida3b5 = a2.ida3masterreference
 and a0.createstampa2 > to_date('24-OCT-2012','DD-MON-YYYY')
 and a0.createstampa2 < to_date('25-OCT-2012','DD-MON-YYYY')
 and a0.latestiterationinfo = 1
 and a2.latestiterationinfo = 1
 and a0.branchiditerationinfo in (select max(branchiditerationinfo)
                                    from v_bscrouter
                                   where latestiterationinfo = 1
                                   group by bwtdocumentnumber)
 and a2.branchiditerationinfo in (select max(branchiditerationinfo)
                                    from v_bscdocument
                                   where latestiterationinfo = 1
                                   group by bwtdocumentnumber);



If I check no of rows in

select max(branchiditerationinfo) from v_bscrouter where latestiterationinfo = 1 group by bwtdocumentnumber

it's 37353

and in

select max(branchiditerationinfo) from v_bscdocument where latestiterationinfo = 1 group by bwtdocumentnumber

is 910270

The explain plan goes like:

--------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name                           | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                 |                                |     1 |   185 | 14746   (1)|
|   1 |  FILTER                          |                                |       |       |            |
|   2 |   NESTED LOOPS                   |                                |  2097 |   378K| 12144   (1)|
|   3 |    NESTED LOOPS                  |                                |  2097 |   344K| 11934   (1)|
|   4 |     NESTED LOOPS                 |                                |   560 | 54880 | 11598   (1)|
|   5 |      NESTED LOOPS                |                                |    52 |  4368 | 11582   (1)|
|   6 |       TABLE ACCESS FULL          | BSCROUTER                      |    52 |  3380 | 11577   (1)|
|   7 |       TABLE ACCESS BY INDEX ROWID| BSCROUTERMASTER                |     1 |    19 |     1   (0)|
|   8 |        INDEX UNIQUE SCAN         | PK_BSCROUTERMASTER             |     1 |       |     1   (0)|
|   9 |      TABLE ACCESS BY INDEX ROWID | WTDOCUMENTUSAGELINK            |    11 |   154 |     1   (0)|
|  10 |       INDEX RANGE SCAN           | WTDOCUMENTUSAGELINK$IDA3A5     |    11 |       |     1   (0)|
|  11 |     TABLE ACCESS BY INDEX ROWID  | BSCDOCUMENT                    |     4 |   280 |     1   (0)|
|  12 |      INDEX RANGE SCAN            | BSCDOCUMENT$PTC1               |     4 |       |     1   (0)|
|  13 |    TABLE ACCESS BY INDEX ROWID   | BSCDOCUMENTMASTER              |     1 |    17 |     1   (0)|
|  14 |     INDEX UNIQUE SCAN            | PK_BSCDOCUMENTMASTER           |     1 |       |     1   (0)|
|  15 |   FILTER                         |                                |       |       |            |
|  16 |    SORT GROUP BY NOSORT          |                                |     1 |    36 |     2   (0)|
|  17 |     NESTED LOOPS                 |                                |     1 |    36 |     2   (0)|
|  18 |      TABLE ACCESS BY INDEX ROWID | BSCROUTERMASTER                | 37326 |   692K|     1   (0)|
|  19 |       INDEX FULL SCAN            | BSCROUTERMASTER$WTDOCUMENTNU$1 |     1 |       |     1   (0)|
|  20 |      TABLE ACCESS BY INDEX ROWID | BSCROUTER                      |     1 |    17 |     1   (0)|
|  21 |       INDEX RANGE SCAN           | BSCROUTER$UNIQ2                |     2 |       |     1   (0)|
|  22 |   FILTER                         |                                |       |       |            |
|  23 |    SORT GROUP BY NOSORT          |                                |     1 |    34 |     2   (0)|
|  24 |     TABLE ACCESS BY INDEX ROWID  | BSCDOCUMENT                    |     1 |    17 |     1   (0)|
|  25 |      NESTED LOOPS                |                                |     1 |    34 |     2   (0)|
|  26 |       TABLE ACCESS BY INDEX ROWID| BSCDOCUMENTMASTER              |   908K|    14M|     1   (0)|
|  27 |        INDEX FULL SCAN           | BSCDOCUMENTMASTER$WTDOCUMENT$1 |     1 |       |     1   (0)|
|  28 |       INDEX RANGE SCAN           | BSCDOCUMENT$PTC1               |     1 |       |     1   (0)|
--------------------------------------------------------------------------------------------------------


I am trying to change it to:

select a2.bwtdocumentnumber,
 a2.title,
 a2.saprev,
 a2.actualdoctype,
 a2.department,
 a2.owner,
 a0.bwtdocumentnumber,
 a0.description,
 a0.plant
 FROM   v_bscrouter a0, WTDocumentUsageLink  a1, v_bscdocument a2, (SELECT bwtdocumentnumber, max(branchiditerationinfo)
       FROM   v_bscrouter
       WHERE  latestiterationinfo = 1
       GROUP  BY bwtdocumentnumber) a3,
(SELECT bwtdocumentnumber, max(branchiditerationinfo)
       FROM   v_bscdocument
       WHERE  latestiterationinfo = 1
       GROUP  BY bwtdocumentnumber) a4
 WHERE a0.ida2a2 = a1.ida3a5
 and a1.ida3b5 = a2.ida3masterreference
 and a0.createstampa2 > to_date('24-OCT-2012','DD-MON-YYYY')
 and a0.createstampa2 < to_date('25-OCT-2012','DD-MON-YYYY')
 and a0.latestiterationinfo = 1
 and a2.latestiterationinfo = 1


but to little use.

The record counts are:

SQL> select count(*) from v_bscrouter;

  COUNT(*)
----------
    429630

SQL> select count(*) from WTDocumentUsageLink;

  COUNT(*)
----------
  10740536

SQL> select count(*) from v_bscdocument;

  COUNT(*)
----------
   6189066


Thanks for all your insights.

A reader, November 08, 2012 - 6:36 pm UTC

Using dynamic sampling does make the execution use the index scan, which is the correct path. But the estimated rows are still a few hundred vs. the actual 4 rows.

Total number of rows in the table: 3193530
ancestor_node_id: distinct rows 92176
node_tree_type_id: distinct rows 6
depth_from_ancestor: distinct rows 12

The selectivity doesn't seem to be the problem here.
Could a histogram be causing the estimate to be off?
I'll try again after removing the histograms.
Tom Kyte
November 09, 2012 - 6:30 am UTC

a few hundred is close enough to get the right plan (if you want better estimates, turn up the sampling level - go to 7 or 8 or 9...)


the *cardinality* estimate was the problem, you have shown that when you get a number reasonably close - you get the right plan.


gather extended statistics on the three columns. leave the histograms alone.
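A sketch of that suggestion in 11g, using the table and column names from the plans above:

```sql
-- Create a column group so the optimizer can estimate the combined
-- selectivity of the three correlated columns instead of multiplying
-- their individual selectivities
select dbms_stats.create_extended_stats(
         ownname   => user,
         tabname   => 'NODE_BRIDGE',
         extension => '(NODE_TREE_TYPE_ID, ANCESTOR_NODE_ID, DEPTH_FROM_ANCESTOR)')
  from dual;

-- Re-gather table statistics so the new extension is populated
exec dbms_stats.gather_table_stats(user, 'NODE_BRIDGE')
```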

Explain Plan

Arvind, November 30, 2012 - 11:02 pm UTC

Hi tom,

What is the meaning of "VIEW VIEW" in an explain plan?

VIEW VIEW SYS.VW_NSO_1 Cost: 1,407 Bytes: 8,857,472 Cardinality: 276,796
3 NESTED LOOPS Cost: 1,407 Bytes: 11,902,228 Cardinality: 276,796


Thanks,

Arvind

Tom Kyte
December 03, 2012 - 8:10 am UTC

full example please. Let's see everything in context.

Explain plan shows incorrect number of rows

Girish, December 06, 2012 - 12:13 am UTC

Hi Tom,

I am generating an EXPLAIN PLAN for the query below, which returns 47 rows, but my plan shows only 25 rows. I also generated the plan with dbms_xplan.display_cursor using the sql_id; even then it shows 25 rows. Why is the optimizer not able to show the exact number of rows returned?

select * from customers
where exists ( select 1 from orders where orders.customer_id=customers.customer_id)
-- returns 47 rows

--------------------------------------------------------------------------------------
| Id  | Operation          | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                 |    25 |  4250 |     5   (0)| 00:00:01 |
|   1 |  NESTED LOOPS SEMI |                 |    25 |  4250 |     5   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| CUSTOMERS       |   319 | 52954 |     5   (0)| 00:00:01 |
|*  3 |   INDEX RANGE SCAN | ORD_CUSTOMER_IX |     8 |    32 |     0   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Tom Kyte
December 14, 2012 - 1:59 pm UTC

the plan is an estimate, a guess, I would be surprised if it showed the exact right numbers in general - not the other way around.



25 is an awesome estimate, I am so seriously happy that it is so close!!! that is fantastic.


You have problems when the estimates are off by orders of magnitude(s). Here 25 is so darn close to 47 - I'm so happy to see it - perfect!!!

Explain plan and tkprof

A Reader, December 17, 2012 - 3:34 am UTC

Hi Tom ,

I have taken an explain plan and a tkprof of a query, both in the same session, and I have observed that the explain plan and the plan in the tkprof output differ greatly.

Explain of the query

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14247 | 695K| | 6603 (4)| 00:01:20 | | |
| 1 | HASH UNIQUE | | 14247 | 695K| 1688K| 6603 (4)| 00:01:20 | | |
|* 2 | HASH JOIN | | 14247 | 695K| | 6422 (4)| 00:01:18 | | |
|* 3 | HASH JOIN | | 16531 | 613K| | 6124 (4)| 00:01:14 | | |
| 4 | PARTITION RANGE ALL| | 16531 | 387K| | 3757 (4)| 00:00:46 | 1 | 11 |
|* 5 | TABLE ACCESS FULL | T1 | 16531 | 387K| | 3757 (4)| 00:00:46 | 1 | 11 |
| 6 | PARTITION RANGE ALL| | 1498K| 20M| | 2345 (3)| 00:00:29 | 1 | 11 |
| 7 | TABLE ACCESS FULL | T2 | 1498K| 20M| | 2345 (3)| 00:00:29 | 1 | 11 |
| 8 | PARTITION RANGE ALL | | 66914 | 784K| | 296 (3)| 00:00:04 | 1 | 11 |
|* 9 | TABLE ACCESS FULL | T3 | 66914 | 784K| | 296 (3)| 00:00:04 | 1 | 11 |
-------------------------------------------------------------------------------------------------------------------------

TKprof of the query :

Rows Row Source Operation
------- ---------------------------------------------------
11569 HASH UNIQUE (cr=101832 pr=61 pw=0 time=1347039 us)
15415 NESTED LOOPS (cr=101832 pr=61 pw=0 time=1449119 us)
15415 NESTED LOOPS (cr=55580 pr=61 pw=0 time=863339 us)
15415 TABLE ACCESS BY GLOBAL INDEX ROWID T1 PARTITION: ROW LOCATION ROW LOCATION (cr=9286 pr=0 pw=0 time=123381 us)
15415 INDEX RANGE SCAN I1 (cr=83 pr=0 pw=0 time=30879 us)(object id 68681)
15415 TABLE ACCESS BY GLOBAL INDEX ROWID T2 PARTITION: ROW LOCATION ROW LOCATION (cr=46294 pr=61 pw=0 time=630911 us)
15415 INDEX RANGE SCAN I2 (cr=30876 pr=50 pw=0 time=230335 us)(object id 68682)
15415 PARTITION RANGE ITERATOR PARTITION: KEY KEY (cr=46252 pr=0 pw=0 time=524909 us)
15415 TABLE ACCESS BY LOCAL INDEX ROWID T3 PARTITION: KEY KEY (cr=46252 pr=0 pw=0 time=366165 us)
15415 INDEX UNIQUE SCAN I3 PARTITION: KEY KEY (cr=30830 pr=0 pw=0 time=173864 us)(object id 64509)

But when I take the explain plan and the tkprof in two different new sessions, I can see both plans are the same.

Explain plan

-------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14247 | 695K| | 6603 (4)| 00:01:20 | | |
| 1 | HASH UNIQUE | | 14247 | 695K| 1688K| 6603 (4)| 00:01:20 | | |
|* 2 | HASH JOIN | | 14247 | 695K| | 6422 (4)| 00:01:18 | | |
|* 3 | HASH JOIN | | 16531 | 613K| | 6124 (4)| 00:01:14 | | |
| 4 | PARTITION RANGE ALL| | 16531 | 387K| | 3757 (4)| 00:00:46 | 1 | 11 |
|* 5 | TABLE ACCESS FULL | T1 | 16531 | 387K| | 3757 (4)| 00:00:46 | 1 | 11 |
| 6 | PARTITION RANGE ALL| | 1498K| 20M| | 2345 (3)| 00:00:29 | 1 | 11 |
| 7 | TABLE ACCESS FULL | T2 | 1498K| 20M| | 2345 (3)| 00:00:29 | 1 | 11 |
| 8 | PARTITION RANGE ALL | | 66914 | 784K| | 296 (3)| 00:00:04 | 1 | 11 |
|* 9 | TABLE ACCESS FULL | T3 | 66914 | 784K| | 296 (3)| 00:00:04 | 1 | 11 |
-------------------------------------------------------------------------------------------------------------------------
TKprof :

Rows Row Source Operation
------- ---------------------------------------------------
11569 HASH UNIQUE (cr=36286 pr=27910 pw=0 time=4472609 us)
15415 HASH JOIN (cr=36286 pr=27910 pw=0 time=4693128 us)
15415 HASH JOIN (cr=34970 pr=26620 pw=0 time=4140966 us)
15415 PARTITION RANGE ALL PARTITION: 1 11 (cr=24573 pr=16247 pw=0 time=3451952 us)
15415 TABLE ACCESS FULL T1 PARTITION: 1 11 (cr=24573 pr=16247 pw=0 time=3411146 us)
1511058 PARTITION RANGE ALL PARTITION: 1 11 (cr=10397 pr=10373 pw=0 time=4546524 us)
1511058 TABLE ACCESS FULL T2 PARTITION: 1 11 (cr=10397 pr=10373 pw=0 time=1667977 us)
86836 PARTITION RANGE ALL PARTITION: 1 11 (cr=1316 pr=1290 pw=0 time=623077 us)
86836 TABLE ACCESS FULL T3 PARTITION: 1 11 (cr=1316 pr=1290 pw=0 time=401641 us)

Can you please explain why there is a difference in the plans when taken from the same session versus from two new sessions?

Why two Nested Loops

Snehasish Das, January 27, 2014 - 2:59 pm UTC

Hi Tom,

Good day.

Can you please guide me as to why there are two nested loops in the plan for a two-table join? Can't one nested loop do the same?

select  row_id,x_mtn_subscription_value_band,x_mtn_net_value,x_mtn_impl_band,value_band,net_value,impl_band
from
s_asset ast,temp_vb vb
where
ast.serial_num = vb.msisdn
and ast.status_cd = 'Active'


The plan is as below.
PLAN_TABLE_OUTPUT                                                                                                                                                                                                                                                                                           
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2439014578                                                                                                                                                                                                                                                                                 
                                                                                                                                                                                                                                                                                                            
-------------------------------------------------------------------------------------------                                                                                                                                                                                                                 
| Id  | Operation                    | Name       | Rows  | Bytes | Cost (%CPU)| Time     |                                                                                                                                                                                                                 
-------------------------------------------------------------------------------------------                                                                                                                                                                                                                 
|   0 | SELECT STATEMENT             |            |   736K|   200M| 30609   (1)| 00:06:08 |                                                                                                                                                                                                                 
|   1 |  NESTED LOOPS                |            |       |       |            |          |                                                                                                                                                                                                                 
|   2 |   NESTED LOOPS               |            |   736K|   200M| 30609   (1)| 00:06:08 |                                                                                                                                                                                                                 
|   3 |    TABLE ACCESS FULL         | TEMP_VB    |   499K|    13M|   597   (2)| 00:00:08 |                                                                                                                                                                                                                 
|*  4 |    INDEX RANGE SCAN          | S_ASSET_M1 |     2 |       |     1   (0)| 00:00:01 |                                                                                                                                                                                                                 
|*  5 |   TABLE ACCESS BY INDEX ROWID| S_ASSET    |     1 |   258 |     1   (0)| 00:00:01 |                                                                                                                                                                                                                 
-------------------------------------------------------------------------------------------                                                                                                                                                                                                                 
                                                                                                                                                                                                                                                                                                            
Predicate Information (identified by operation id):                                                                                                                                                                                                                                                         
---------------------------------------------------                                                                                                                                                                                                                                                         
                                                                                                                                                                                                                                                                                                            
   4 - access("AST"."SERIAL_NUM"="VB"."MSISDN")                                                                                                                                                                                                                                                             
   5 - filter("AST"."STATUS_CD"='Active')                                                                                                                                                                                                                                                                   

18 rows selected.




My DB version is 11.2.0.2.0.

regards,
Snehasish Das

2 Nested Loop

Snehasish Das, April 07, 2014 - 7:48 am UTC

Hi Tom,

Can you please help me with the above request.

Regards,
Snehasish Das
Tom Kyte
April 16, 2014 - 5:39 pm UTC

it is just a formatting thing. It is doing the join before the table access. It is just an artifact of the explain plan report.


I'd be more worried about why there is a nested loop in there at all; this looks more like a candidate for a hash join. Have you set some magic parameter to make this happen?
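A quick way to test that hunch against the query above (a sketch; the hint just forces the join method so the two plans and their runtimes can be compared):

```sql
-- Force a hash join between the two tables and compare its plan/cost
-- with the nested-loops plan the optimizer chose
select /*+ use_hash(ast vb) */
       row_id, x_mtn_subscription_value_band, x_mtn_net_value,
       x_mtn_impl_band, value_band, net_value, impl_band
  from s_asset ast, temp_vb vb
 where ast.serial_num = vb.msisdn
   and ast.status_cd  = 'Active';
```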

Thanks

Snehasish Das, April 17, 2014 - 10:46 am UTC

Hi Tom,

This was just a sample query to understand the two nested loops.

So I shouldn't be worried about the nested loop appearing twice?

Regards,
Snehasish Das
Tom Kyte
April 17, 2014 - 5:28 pm UTC

well, I'm sorry - but you should be worried. Unless you've done some seriously bad init.ora or session settings, I would not want to see a nested loop in there at all.

Need help to choose the best of the plans

PPD, April 20, 2014 - 6:40 pm UTC

Hi Tom,

Need your help to choose the best of these below plans.

=============
SQL
=============

SELECT /*+ leading(b) index_join(b S_ORG_EXT_X_PN1 S_ORG_EXT_X_U1) use_hash(a) */
a.row_id, a.x_cust_id, a.ou_type_cd
FROM siebel.s_org_ext a, siebel.s_org_ext_x b
WHERE b.par_row_id = a.row_id AND b.attrib_41 IS NULL
AND a.bu_id = '1-1RPIZ5'

================
PLAN Option = 1
================

SELECT STATEMENT; Optimizer=All_rows; Cost=67,560; Cardinality=2,234,080; Bytes=129,576,640; Cpu_cost=3,610,191,929; Io_cost=67,327; Time=811
HASH JOIN; Cost=67,560; Cardinality=2,234,080; Bytes=129,576,640; Cpu_cost=3,610,191,929; Io_cost=67,327; Temp_space=67,027,000; Access_predicates="b"."par_row_id"="a"."row_id"; Time=811
VIEW VIEW SIEBEL.index$_join$_002; Object_instance=2; Cost=13,229; Cardinality=2,234,080; Bytes=40,213,440; Cpu_cost=1,688,353,177; Io_cost=13,121; Time=159
HASH JOIN; Access_predicates=Rowid=rowid
INDEX FAST FULL SCAN INDEX SIEBEL.S_ORG_EXT_X_PN1; Optimizer=Analyzed; Cost=210; Cardinality=2,234,080; Bytes=40,213,440; Cpu_cost=12,155,908; Io_cost=209; Filter_predicates="b"."attrib_41" is null; Time=3
INDEX FAST FULL SCAN INDEX (UNIQUE) SIEBEL.S_ORG_EXT_X_U1; Optimizer=Analyzed; Cost=231; Cardinality=2,234,080; Bytes=40,213,440; Cpu_cost=12,654,282; Io_cost=230; Time=3
TABLE ACCESS BY INDEX ROWID TABLE SIEBEL.S_ORG_EXT; Optimizer=Analyzed; Object_instance=1; Cost=40,473; Cardinality=4,314,577; Bytes=172,583,080; Cpu_cost=570,121,792; Io_cost=40,437; Time=486
INDEX RANGE SCAN INDEX (UNIQUE) SIEBEL.S_ORG_EXT_U2; Optimizer=Analyzed; Search_columns=1; Cost=253; Cardinality=4,314,577; Cpu_cost=9,847,561; Io_cost=252; Access_predicates="a"."bu_id"='1-1rpiz5'; Time=4

================
PLAN Option = 2
================

SELECT STATEMENT; Optimizer=All_rows; Cost=189,103; Cardinality=2,234,080; Bytes=129,576,640; Cpu_cost=36,856,346,602; Io_cost=186,730; Time=2,270
HASH JOIN; Cost=189,103; Cardinality=2,234,080; Bytes=129,576,640; Cpu_cost=36,856,346,602; Io_cost=186,730; Temp_space=67,027,000; Access_predicates="b"."par_row_id"="a"."row_id"; Time=2,270
VIEW VIEW SIEBEL.index$_join$_002; Object_instance=2; Cost=13,229; Cardinality=2,234,080; Bytes=40,213,440; Cpu_cost=1,688,353,177; Io_cost=13,121; Time=159
HASH JOIN; Access_predicates=Rowid=rowid
INDEX FAST FULL SCAN INDEX SIEBEL.S_ORG_EXT_X_PN1; Optimizer=Analyzed; Cost=210; Cardinality=2,234,080; Bytes=40,213,440; Cpu_cost=12,155,908; Io_cost=209; Filter_predicates="b"."attrib_41" is null; Time=3
INDEX FAST FULL SCAN INDEX (UNIQUE) SIEBEL.S_ORG_EXT_X_U1; Optimizer=Analyzed; Cost=231; Cardinality=2,234,080; Bytes=40,213,440; Cpu_cost=12,654,282; Io_cost=230; Time=3
TABLE ACCESS FULL TABLE SIEBEL.S_ORG_EXT; Optimizer=Analyzed; Object_instance=1; Cost=162,016; Cardinality=4,314,577; Bytes=172,583,080; Cpu_cost=33,816,276,465; Io_cost=159,839; Filter_predicates="a"."bu_id"='1-1rpiz5'; Time=1,945


I am unsure which of these two plans to choose.
So far, my inclination is to avoid multiple I/Os.
In plan option 1, the plan reads the rowids from index S_ORG_EXT_U2 and then performs another I/O to retrieve the data from the table S_ORG_EXT.

Can you please help me to find the best among them ?

Thanks,
PPD
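One practical way to decide between two candidate plans like these is to execute each and compare actual run-time statistics rather than optimizer estimates. The sketch below is not from the original post; the select list, aliases, and literal value are assumed from the predicates visible in the plan dumps above:

```sql
-- Hedged sketch: run the statement with rowsource statistics enabled,
-- then pull the plan showing actual (A-Rows) vs estimated (E-Rows) figures.
-- Column and alias names are inferred from the plan predicates above.
select /*+ gather_plan_statistics */ a.row_id, b.attrib_41
from   siebel.s_org_ext   a,
       siebel.s_org_ext_x b
where  b.par_row_id = a.row_id
and    b.attrib_41 is null
and    a.bu_id = '1-1rpiz5';

-- Display the last executed plan with its run-time statistics:
select * from table(
  dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
```

Comparing A-Rows against E-Rows, and the Buffers column per step, usually shows which plan does less work in practice; the cost figure alone can be misleading when cardinality estimates are off.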

Language to describe/write an execution plans

Edgar, July 02, 2015 - 12:45 pm UTC

Dear Tom,

Just an idea..
I am not really sure whether it is worthwhile, but it may be worth thinking about a little.

An explain plan output looks quite similar to code in a conventional procedural programming language:
there are loop-like joins and iterators, and a list (sequence) of actions.

1.
For adaptive plans, it would be nice to have conditional branching (if..then..else) in there.

2.
In general, it looks like an execution plan could be represented as procedural language code.
It would be interesting to develop a specific procedural language for describing execution plans.
The SQL optimizer could then work as a translator from SQL into that language (or p-code).

That would open up some new possibilities: execution plans independent of data, version, and statistics, copied from one environment to another in the form of code.
Also direct coding of queries in that language instead of SQL for some sophisticated cases.

However, we already have PL/SQL in the database as a procedural language, so maybe it could be adapted for coding execution plans..

Regards.

why check constraint does not impact optimizer

listong, October 22, 2015 - 8:26 am UTC

Hi Tom,

I know constraints can influence the optimizer in some way.

But I am not sure why the CHECK constraint below does not work.

Basically I create a table with CHECK (id > 10), and my WHERE condition is id < 10. I was expecting it to do nothing, but it still did a full table scan.

Could you please help to elaborate?

Below is my test script:
create table test(id int, a int);
alter table test add constraint con1 check(id>10);
insert into test select rownum+10,rownum+10 from dual connect by level<=10;
select * from test where id<10;

SQL> select * from test where id<10;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     4 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |     4 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"<10)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        329  bytes sent via SQL*Net to client
        408  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

SQL> 

Then I thought the 'FTS' might just appear in the plan while underneath it did nothing.
But a false condition like 1=2 shows a different 'consistent gets' figure: where 1=2 reports 0 consistent gets.
So it looks to me like it still scanned the table.

SQL> select * from test where 1=2;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 766971379

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     4 |     0   (0)|          |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| TEST |    10 |    40 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          0  consistent gets
          0  physical reads
          0  redo size
        329  bytes sent via SQL*Net to client
        408  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed



Then I thought it might be because the column is nullable, but even after modifying it to NOT NULL it still does not work as I expected.
SQL> alter table test modify(id int not null);

Table altered.



SQL> select * from test where id<10;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 1357081020

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     4 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TEST |     1 |     4 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"<10)


Statistics
----------------------------------------------------------
        113  recursive calls
          0  db block gets
         13  consistent gets
          0  physical reads
          0  redo size
        329  bytes sent via SQL*Net to client
        408  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          3  sorts (memory)
          0  sorts (disk)
          0  rows processed


Chris Saxon
October 24, 2015 - 8:34 pm UTC

What version are you using? This is from 12c:

SQL> create table test(id int, a int);

Table created.

SQL> alter table test add constraint con1 check(id>10);

Table altered.

SQL> insert into test select rownum+10,rownum+10 from dual connect by level<=10;

10 rows created.

SQL> set autotrace traceonly explain
SQL> select * from test where id<10;

Execution Plan
----------------------------------------------------------
Plan hash value: 766971379

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    26 |     0   (0)|          |
|*  1 |  FILTER            |      |       |       |            |          |
|*  2 |   TABLE ACCESS FULL| TEST |     1 |    26 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(NULL IS NOT NULL)
   2 - filter("ID"<10)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

SQL>
SQL>




See the 'NULL IS NOT NULL'? The optimizer combined the check constraint (id > 10) with the query predicate (id < 10), recognized the contradiction, and generated an always-false FILTER, so the table scan beneath it is never actually executed.
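The run-time effect of that contradiction filter can be checked the same way the original poster checked the 1=2 case. A sketch, assuming the 12c+ behaviour shown above:

```sql
-- Sketch: with the check constraint in place, the FILTER (NULL IS NOT NULL)
-- short-circuits the child scan at run time, so autotrace should report
-- 0 consistent gets for this query, just as it did for "where 1=2".
set autotrace traceonly statistics
select * from test where id < 10;
set autotrace off
```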

URL provided, shows error

Suman, October 31, 2015 - 7:14 pm UTC

Hi Tom,

The URL http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/toc.htm

throws error:

Invalid URL

The requested URL "/docs/cd/B10501_01/server.920/a96533/toc.htm", is invalid.
Reference #9.1c287c7c.1446318749.b58ae07
Connor McDonald
November 02, 2015 - 1:01 am UTC

Head to docs.oracle.com, and work your way through the most recent documentation, not the 9i stuff. Don't forget that Tom's first reply to this question was *15* years ago.

links are invalid

avinash, April 11, 2017 - 10:27 am UTC

http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/ex_plan.htm#16972
In fact, the entire document:

http://downloadwest.oracle.com/docs/cd/B10501_01/server.920/a96533/toc.htm


Hi Tom, I am getting a message that these links are invalid. Can you share the valid ones?
Connor McDonald
April 12, 2017 - 1:42 am UTC

"server.920" is probably the giveaway here .... those are the docs for Oracle 9.2

Change "downloadwest" to "docs" and you should be good

(I'll arrange a data patch to the ask tom answers as well to back fix this in our database)

"Harnessing the power of Optimizer Hints" Presentation by Maria Colgan

Rajeshwaran Jeyabal, September 19, 2017 - 1:33 am UTC

Team:

There was recently a Twitter post by Maria about her presentation "Harnessing the power of Optimizer Hints".

Can that content/slides be uploaded to AskTom?

Order of Execution Plan

Greg, October 16, 2017 - 9:20 pm UTC

Tom,

How do you find the order of execution in a plan when it does not give you the order by number, as in the output below?

Oracle 11g

I'm running out of temp space on this query.
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.38          2        267         40           0
Fetch        1    544.41     887.01     352968       1463          2           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    544.43     887.40     352970       1730         42           0

Misses in library cache during parse: 0
Optimizer mode: CHOOSE
Parsing user id: SYS
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  TEMP TABLE TRANSFORMATION  (cr=267 pr=2 pw=4 time=382479 us)
         0          0          0   LOAD AS SELECT  (cr=192 pr=0 pw=1 time=8510 us)
        32         32         32    HASH JOIN RIGHT SEMI (cr=192 pr=0 pw=0 time=6894 us cost=34 size=84 card=4)
        32         32         32     VIEW  VW_NSO_1 (cr=177 pr=0 pw=0 time=6725 us cost=28 size=52 card=4)
        32         32         32      NESTED LOOPS  (cr=177 pr=0 pw=0 time=6663 us cost=28 size=124 card=4)
        32         32         32       NESTED LOOPS  (cr=135 pr=0 pw=0 time=6051 us cost=28 size=124 card=4)
        32         32         32        HASH JOIN  (cr=124 pr=0 pw=0 time=5731 us cost=24 size=96 card=4)
        32         32         32         HASH JOIN  (cr=123 pr=0 pw=0 time=4372 us cost=23 size=64 card=4)
       107        107        107          TABLE ACCESS BY INDEX ROWID IX_FLR_PERFORMANCE (cr=112 pr=0 pw=0 time=389 us cost=18 size=162 card=18)
       107        107        107           INDEX RANGE SCAN IX_FLR_PERFORMANCE_I2 (cr=2 pr=0 pw=0 time=158 us cost=1 size=0 card=18)(object id 89471)
       284        284        284          VIEW  index$_join$_003 (cr=11 pr=0 pw=0 time=2029 us cost=5 size=1967 card=281)
       284        284        284           HASH JOIN  (cr=11 pr=0 pw=0 time=1933 us)
       284        284        284            INLIST ITERATOR  (cr=4 pr=0 pw=0 time=323 us)
       284        284        284             INDEX RANGE SCAN IX_FLR_FLOORPLAN_I6 (cr=4 pr=0 pw=0 time=200 us cost=2 size=1967 card=281)(object id 89556)
      1074       1074       1074            INDEX FAST FULL SCAN IX_FLR_FLOORPLAN_P1 (cr=7 pr=0 pw=0 time=433 us cost=4 size=1967 card=281)(object id 89554)
       284        284        284         INDEX FULL SCAN IX_STR_STORE_FLOORPLAN_I1 (cr=1 pr=0 pw=0 time=117 us cost=1 size=2264 card=283)(object id 89465)
        32         32         32        INDEX UNIQUE SCAN IX_STR_STORE_P1 (cr=11 pr=0 pw=0 time=96 us cost=0 size=0 card=1)(object id 89050)
        32         32         32       TABLE ACCESS BY INDEX ROWID IX_STR_STORE (cr=42 pr=0 pw=0 time=321 us cost=1 size=7 card=1)
      1072       1072       1072     TABLE ACCESS FULL NEX_SALES_STORE (cr=15 pr=0 pw=0 time=44 us cost=6 size=8576 card=1072)
         0          0          0   LOAD AS SELECT  (cr=4 pr=1 pw=1 time=1187 us)
         1          1          1    SORT AGGREGATE (cr=4 pr=1 pw=0 time=424 us)
        32         32         32     VIEW  (cr=4 pr=1 pw=0 time=442 us cost=2 size=0 card=4)
        32         32         32      TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=4 pr=1 pw=0 time=410 us cost=2 size=16 card=4)
         0          0          0   LOAD AS SELECT  (cr=11 pr=0 pw=1 time=959 us)
        18         18         18    TABLE ACCESS BY INDEX ROWID IX_SPC_PERFORMANCE (cr=11 pr=0 pw=0 time=295 us cost=83 size=2584 card=152)
        18         18         18     INDEX RANGE SCAN IX_SPC_PERFORMANCE_I1 (cr=3 pr=0 pw=0 time=59 us cost=3 size=0 card=152)(object id 89321)
         0          0          0   LOAD AS SELECT  (cr=60 pr=1 pw=1 time=2017 us)
        18         18         18    NESTED LOOPS  (cr=60 pr=1 pw=0 time=1565 us cost=459 size=3825 card=153)
        18         18         18     NESTED LOOPS  (cr=42 pr=1 pw=0 time=1379 us cost=459 size=3825 card=153)
        18         18         18      VIEW  VW_NSO_2 (cr=4 pr=1 pw=0 time=903 us cost=2 size=1976 card=152)
        18         18         18       HASH UNIQUE (cr=4 pr=1 pw=0 time=900 us)
        18         18         18        VIEW  (cr=4 pr=1 pw=0 time=313 us cost=2 size=912 card=152)
        18         18         18         TABLE ACCESS FULL SYS_TEMP_0FD9D6630_ACCAE0 (cr=4 pr=1 pw=0 time=294 us cost=2 size=2584 card=152)
        18         18         18      INDEX RANGE SCAN NEX_SALES_PROUCT_I2 (cr=38 pr=0 pw=0 time=338 us cost=2 size=0 card=1)(object id 89779)
        18         18         18     TABLE ACCESS BY INDEX ROWID NEX_SALES_PRODUCT (cr=18 pr=0 pw=0 time=184 us cost=3 size=12 card=1)
         0          0          0   PX COORDINATOR  (cr=0 pr=0 pw=0 time=369021 us)
         0          0          0    PX SEND QC (RANDOM) :TQ10007 (cr=0 pr=0 pw=0 time=0 us cost=1530 size=84816 card=152)
         0          0          0     HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us cost=1530 size=84816 card=152)
         0          0          0      PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us cost=1530 size=84816 card=152)
         0          0          0       PX SEND HASH :TQ10006 (cr=0 pr=0 pw=0 time=0 us cost=1530 size=84816 card=152)
         0          0          0        HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us cost=1530 size=84816 card=152)
         0          0          0         HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us cost=1529 size=85374 card=153)
         0          0          0          BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0           PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us cost=61 size=290 card=5)
         0          0          0            PX SEND BROADCAST :TQ10001 (cr=0 pr=0 pw=0 time=0 us cost=61 size=290 card=5)
       239        239        239             VIEW  (cr=162 pr=1 pw=0 time=4631 us cost=61 size=290 card=5)
       239        239        239              HASH GROUP BY (cr=162 pr=1 pw=0 time=4257 us cost=61 size=270 card=5)
       295        295        295               HASH JOIN  (cr=162 pr=1 pw=0 time=3769 us cost=61 size=270 card=5)
      1197       1197       1197                NESTED LOOPS  (cr=160 pr=1 pw=0 time=3975 us cost=59 size=49250 card=985)
      1197       1197       1197                 NESTED LOOPS  (cr=45 pr=1 pw=0 time=1328 us cost=59 size=49250 card=985)
        18         18         18                  MERGE JOIN CARTESIAN (cr=6 pr=1 pw=0 time=671 us cost=4 size=2907 card=153)
         1          1          1                   VIEW  (cr=4 pr=1 pw=0 time=612 us cost=2 size=13 card=1)
         1          1          1                    TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=4 pr=1 pw=0 time=609 us cost=2 size=13 card=1)
        18         18         18                   BUFFER SORT (cr=2 pr=0 pw=0 time=62 us cost=4 size=918 card=153)
        18         18         18                    VIEW  (cr=2 pr=0 pw=0 time=24 us cost=2 size=918 card=153)
        18         18         18                     TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=23 us cost=2 size=918 card=153)
      1197       1197       1197                  INDEX RANGE SCAN NEX_MVW_AGG_SALES_L4WK_I2 (cr=39 pr=0 pw=0 time=548 us cost=0 size=0 card=6)(object id 104647)
      1197       1197       1197                 MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L4WK (cr=115 pr=0 pw=0 time=1477 us cost=1 size=186 card=6)
        32         32         32                VIEW  (cr=2 pr=0 pw=0 time=19 us cost=2 size=16 card=4)
        32         32         32                 TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=19 us cost=2 size=16 card=4)
         0          0          0          HASH JOIN RIGHT OUTER (cr=0 pr=0 pw=0 time=0 us cost=1468 size=76500 card=153)
         0          0          0           BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0            PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us cost=73 size=290 card=5)
         0          0          0             PX SEND BROADCAST :TQ10002 (cr=0 pr=0 pw=0 time=0 us cost=73 size=290 card=5)
       330        330        330              VIEW  (cr=168 pr=0 pw=0 time=7839 us cost=73 size=290 card=5)
       330        330        330               HASH GROUP BY (cr=168 pr=0 pw=0 time=6724 us cost=73 size=275 card=5)
       354        354        354                HASH JOIN  (cr=168 pr=0 pw=0 time=7130 us cost=73 size=275 card=5)
        32         32         32                 VIEW  (cr=2 pr=0 pw=0 time=86 us cost=2 size=16 card=4)
        32         32         32                  TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=54 us cost=2 size=16 card=4)
      1398       1398       1398                 NESTED LOOPS  (cr=166 pr=0 pw=0 time=7832 us cost=70 size=65433 card=1283)
      1398       1398       1398                  NESTED LOOPS  (cr=46 pr=0 pw=0 time=953 us cost=70 size=65433 card=1283)
        18         18         18                   MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=89 us cost=4 size=2907 card=153)
         1          1          1                    VIEW  (cr=2 pr=0 pw=0 time=38 us cost=2 size=13 card=1)
         1          1          1                     TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=2 pr=0 pw=0 time=35 us cost=2 size=13 card=1)
        18         18         18                    BUFFER SORT (cr=2 pr=0 pw=0 time=57 us cost=4 size=918 card=153)
        18         18         18                     VIEW  (cr=2 pr=0 pw=0 time=31 us cost=2 size=918 card=153)
        18         18         18                      TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=30 us cost=2 size=918 card=153)
      1398       1398       1398                   INDEX RANGE SCAN NEX_MVW_AGG_SALES_L13WK_I2 (cr=42 pr=0 pw=0 time=1202 us cost=0 size=0 card=8)(object id 104655)
      1398       1398       1398                  MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L13WK (cr=120 pr=0 pw=0 time=2834 us cost=1 size=256 card=8)
         0          0          0           HASH JOIN OUTER (cr=0 pr=0 pw=0 time=0 us cost=1395 size=67626 card=153)
         0          0          0            BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0             PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us cost=944 size=60741 card=153)
         0          0          0              PX SEND HASH :TQ10003 (cr=0 pr=0 pw=0 time=0 us cost=944 size=60741 card=153)
 351876511  351876511  351876511               HASH JOIN OUTER (cr=1127 pr=352966 pw=3230324 time=373134042 us cost=944 size=60741 card=153)
 145685507  145685507  145685507                HASH JOIN OUTER (cr=948 pr=80941 pw=80941 time=66109109 us cost=866 size=51867 card=153)
   4756734    4756734    4756734                 HASH JOIN OUTER (cr=772 pr=0 pw=0 time=1792046 us cost=768 size=39015 card=153)
    238740     238740     238740                  HASH JOIN OUTER (cr=601 pr=0 pw=0 time=101373 us cost=707 size=30141 card=153)
      9708       9708       9708                   HASH JOIN OUTER (cr=424 pr=0 pw=0 time=19594 us cost=635 size=21267 card=153)
       388        388        388                    HASH JOIN OUTER (cr=239 pr=0 pw=0 time=10940 us cost=558 size=12393 card=153)
        18         18         18                     NESTED LOOPS  (cr=55 pr=0 pw=0 time=545 us cost=458 size=3519 card=153)
        18         18         18                      NESTED LOOPS  (cr=40 pr=0 pw=0 time=416 us cost=458 size=3519 card=153)
        18         18         18                       VIEW  (cr=2 pr=0 pw=0 time=106 us cost=2 size=1672 card=152)
        18         18         18                        TABLE ACCESS FULL SYS_TEMP_0FD9D6630_ACCAE0 (cr=2 pr=0 pw=0 time=104 us cost=2 size=2584 card=152)
        18         18         18                       INDEX RANGE SCAN NEX_SALES_PROUCT_I2 (cr=38 pr=0 pw=0 time=124 us cost=2 size=0 card=1)(object id 89779)
        18         18         18                      TABLE ACCESS BY INDEX ROWID NEX_SALES_PRODUCT (cr=15 pr=0 pw=0 time=49 us cost=3 size=12 card=1)
       386        386        386                     VIEW  (cr=184 pr=0 pw=0 time=8949 us cost=100 size=348 card=6)
       386        386        386                      HASH GROUP BY (cr=184 pr=0 pw=0 time=7878 us cost=100 size=336 card=6)
       389        389        389                       HASH JOIN  (cr=184 pr=0 pw=0 time=8772 us cost=100 size=336 card=6)
        32         32         32                        VIEW  (cr=2 pr=0 pw=0 time=75 us cost=2 size=16 card=4)
        32         32         32                         TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=42 us cost=2 size=16 card=4)
      1746       1746       1746                        NESTED LOOPS  (cr=182 pr=0 pw=0 time=3151 us cost=98 size=85800 card=1650)
      1746       1746       1746                         NESTED LOOPS  (cr=44 pr=0 pw=0 time=1139 us cost=98 size=85800 card=1683)
        18         18         18                          MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=107 us cost=4 size=2907 card=153)
         1          1          1                           VIEW  (cr=2 pr=0 pw=0 time=23 us cost=2 size=13 card=1)
         1          1          1                            TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=2 pr=0 pw=0 time=22 us cost=2 size=13 card=1)
        18         18         18                           BUFFER SORT (cr=2 pr=0 pw=0 time=88 us cost=4 size=918 card=153)
        18         18         18                            VIEW  (cr=2 pr=0 pw=0 time=35 us cost=2 size=918 card=153)
        18         18         18                             TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=16 us cost=2 size=918 card=153)
      1746       1746       1746                          INDEX RANGE SCAN NEX_MVW_AGG_SALES_L52WK_LY_I2 (cr=40 pr=0 pw=0 time=833 us cost=0 size=0 card=11)(object id 104675)
      1746       1746       1746                         MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L52WK_LY (cr=138 pr=0 pw=0 time=2685 us cost=1 size=363 card=11)
       352        352        352                    VIEW  (cr=185 pr=0 pw=0 time=6810 us cost=77 size=348 card=6)
       352        352        352                     HASH GROUP BY (cr=185 pr=0 pw=0 time=5748 us cost=77 size=330 card=6)
       361        361        361                      HASH JOIN  (cr=185 pr=0 pw=0 time=9285 us cost=77 size=330 card=6)
        32         32         32                       VIEW  (cr=2 pr=0 pw=0 time=58 us cost=2 size=16 card=4)
        32         32         32                        TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=56 us cost=2 size=16 card=4)
      1576       1576       1576                       NESTED LOOPS  (cr=183 pr=0 pw=0 time=3072 us cost=75 size=78132 card=1532)
      1576       1576       1576                        NESTED LOOPS  (cr=47 pr=0 pw=0 time=1226 us cost=75 size=78132 card=1532)
        18         18         18                         MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=88 us cost=4 size=2907 card=153)
         1          1          1                          VIEW  (cr=2 pr=0 pw=0 time=19 us cost=2 size=13 card=1)
         1          1          1                           TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=2 pr=0 pw=0 time=19 us cost=2 size=13 card=1)
        18         18         18                          BUFFER SORT (cr=2 pr=0 pw=0 time=71 us cost=4 size=918 card=153)
        18         18         18                           VIEW  (cr=2 pr=0 pw=0 time=37 us cost=2 size=918 card=153)
        18         18         18                            TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=37 us cost=2 size=918 card=153)
      1576       1576       1576                         INDEX RANGE SCAN NEX_MVW_AGG_SALES_L26WK_LY_I2 (cr=43 pr=0 pw=0 time=974 us cost=0 size=0 card=10)(object id 104667)
      1576       1576       1576                        MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L26WK_LY (cr=136 pr=0 pw=0 time=2558 us cost=1 size=320 card=10)
       312        312        312                   VIEW  (cr=177 pr=0 pw=0 time=6590 us cost=72 size=290 card=5)
       312        312        312                    HASH GROUP BY (cr=177 pr=0 pw=0 time=5026 us cost=72 size=275 card=5)
       336        336        336                     HASH JOIN  (cr=177 pr=0 pw=0 time=3933 us cost=72 size=275 card=5)
        32         32         32                      VIEW  (cr=2 pr=0 pw=0 time=32 us cost=2 size=16 card=4)
        32         32         32                       TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=31 us cost=2 size=16 card=4)
      1478       1478       1478                      NESTED LOOPS  (cr=175 pr=0 pw=0 time=5940 us cost=70 size=68442 card=1342)
      1478       1478       1478                       NESTED LOOPS  (cr=43 pr=0 pw=0 time=862 us cost=70 size=68442 card=1377)
        18         18         18                        MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=82 us cost=4 size=2907 card=153)
         1          1          1                         VIEW  (cr=2 pr=0 pw=0 time=23 us cost=2 size=13 card=1)
         1          1          1                          TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=2 pr=0 pw=0 time=22 us cost=2 size=13 card=1)
        18         18         18                         BUFFER SORT (cr=2 pr=0 pw=0 time=43 us cost=4 size=918 card=153)
        18         18         18                          VIEW  (cr=2 pr=0 pw=0 time=10 us cost=2 size=918 card=153)
        18         18         18                           TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=9 us cost=2 size=918 card=153)
      1478       1478       1478                        INDEX RANGE SCAN NEX_MVW_AGG_SALES_L13WK_LY_I2 (cr=39 pr=0 pw=0 time=629 us cost=0 size=0 card=9)(object id 104659)
      1478       1478       1478                       MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L13WK_LY (cr=132 pr=0 pw=0 time=2146 us cost=1 size=288 card=9)
       241        241        241                  VIEW  (cr=171 pr=0 pw=0 time=7371 us cost=61 size=290 card=5)
       241        241        241                   HASH GROUP BY (cr=171 pr=0 pw=0 time=5082 us cost=61 size=270 card=5)
       288        288        288                    HASH JOIN  (cr=171 pr=0 pw=0 time=3111 us cost=61 size=270 card=5)
      1265       1265       1265                     NESTED LOOPS  (cr=169 pr=0 pw=0 time=5094 us cost=59 size=50900 card=1018)
      1265       1265       1265                      NESTED LOOPS  (cr=46 pr=0 pw=0 time=2924 us cost=59 size=50900 card=1071)
        18         18         18                       MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=80 us cost=4 size=2907 card=153)
         1          1          1                        VIEW  (cr=2 pr=0 pw=0 time=37 us cost=2 size=13 card=1)
         1          1          1                         TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=2 pr=0 pw=0 time=36 us cost=2 size=13 card=1)
        18         18         18                        BUFFER SORT (cr=2 pr=0 pw=0 time=27 us cost=4 size=918 card=153)
        18         18         18                         VIEW  (cr=2 pr=0 pw=0 time=28 us cost=2 size=918 card=153)
        18         18         18                          TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=26 us cost=2 size=918 card=153)
      1265       1265       1265                       INDEX RANGE SCAN NEX_MVW_AGG_SALES_L4WK_LY_I2 (cr=42 pr=0 pw=0 time=727 us cost=0 size=0 card=7)(object id 104651)
      1265       1265       1265                      MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L4WK_LY (cr=123 pr=0 pw=0 time=1548 us cost=1 size=217 card=7)
        32         32         32                     VIEW  (cr=2 pr=0 pw=0 time=48 us cost=2 size=16 card=4)
        32         32         32                      TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=47 us cost=2 size=16 card=4)
       389        389        389                 VIEW  (cr=176 pr=0 pw=0 time=7386 us cost=99 size=504 card=6)
       389        389        389                  HASH GROUP BY (cr=176 pr=0 pw=0 time=6013 us cost=99 size=354 card=6)
       391        391        391                   HASH JOIN  (cr=176 pr=0 pw=0 time=4357 us cost=99 size=354 card=6)
        32         32         32                    VIEW  (cr=2 pr=0 pw=0 time=141 us cost=2 size=16 card=4)
        32         32         32                     TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=139 us cost=2 size=16 card=4)
      1630       1630       1630                    NESTED LOOPS  (cr=174 pr=0 pw=0 time=5098 us cost=97 size=91795 card=1669)
      1630       1630       1630                     NESTED LOOPS  (cr=45 pr=0 pw=0 time=821 us cost=97 size=91795 card=1683)
        18         18         18                      MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=105 us cost=4 size=2907 card=153)
         1          1          1                       VIEW  (cr=2 pr=0 pw=0 time=23 us cost=2 size=13 card=1)
         1          1          1                        TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=2 pr=0 pw=0 time=20 us cost=2 size=13 card=1)
        18         18         18                       BUFFER SORT (cr=2 pr=0 pw=0 time=68 us cost=4 size=918 card=153)
        18         18         18                        VIEW  (cr=2 pr=0 pw=0 time=22 us cost=2 size=918 card=153)
        18         18         18                         TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=20 us cost=2 size=918 card=153)
      1630       1630       1630                      INDEX RANGE SCAN NEX_MVW_AGG_SALES_L52WK_I2 (cr=41 pr=0 pw=0 time=796 us cost=0 size=0 card=11)(object id 104671)
      1630       1630       1630                     MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L52WK (cr=129 pr=0 pw=0 time=2198 us cost=1 size=396 card=11)
       366        366        366                VIEW  (cr=179 pr=0 pw=0 time=6489 us cost=77 size=348 card=6)
       366        366        366                 HASH GROUP BY (cr=179 pr=0 pw=0 time=5504 us cost=77 size=330 card=6)
       375        375        375                  HASH JOIN  (cr=179 pr=0 pw=0 time=4583 us cost=77 size=330 card=6)
        32         32         32                   VIEW  (cr=2 pr=0 pw=0 time=81 us cost=2 size=16 card=4)
        32         32         32                    TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=78 us cost=2 size=16 card=4)
      1511       1511       1511                   NESTED LOOPS  (cr=177 pr=0 pw=0 time=4354 us cost=75 size=75837 card=1487)
      1511       1511       1511                    NESTED LOOPS  (cr=46 pr=0 pw=0 time=937 us cost=75 size=75837 card=1530)
        18         18         18                     MERGE JOIN CARTESIAN (cr=4 pr=0 pw=0 time=221 us cost=4 size=2907 card=153)
         1          1          1                      VIEW  (cr=2 pr=0 pw=0 time=113 us cost=2 size=13 card=1)
         1          1          1                       TABLE ACCESS FULL SYS_TEMP_0FD9D662F_ACCAE0 (cr=2 pr=0 pw=0 time=112 us cost=2 size=13 card=1)
        18         18         18                      BUFFER SORT (cr=2 pr=0 pw=0 time=92 us cost=4 size=918 card=153)
        18         18         18                       VIEW  (cr=2 pr=0 pw=0 time=43 us cost=2 size=918 card=153)
        18         18         18                        TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=2 pr=0 pw=0 time=43 us cost=2 size=918 card=153)
      1511       1511       1511                     INDEX RANGE SCAN NEX_MVW_AGG_SALES_L26WK_I2 (cr=42 pr=0 pw=0 time=531 us cost=0 size=0 card=10)(object id 104663)
      1511       1511       1511                    MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_AGG_SALES_L26WK (cr=131 pr=0 pw=0 time=2186 us cost=1 size=320 card=10)
         0          0          0            PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us cost=451 size=6885 card=153)
         0          0          0             PX SEND HASH :TQ10005 (cr=0 pr=0 pw=0 time=0 us cost=451 size=6885 card=153)
         0          0          0              VIEW  (cr=0 pr=0 pw=0 time=0 us cost=451 size=6885 card=153)
         0          0          0               HASH GROUP BY (cr=0 pr=0 pw=0 time=0 us cost=451 size=3978 card=153)
         0          0          0                PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us cost=450 size=15912 card=612)
         0          0          0                 PX SEND HASH :TQ10004 (cr=0 pr=0 pw=0 time=0 us cost=450 size=15912 card=612)
         0          0          0                  NESTED LOOPS OUTER (cr=0 pr=0 pw=0 time=0 us cost=450 size=15912 card=612)
         0          0          0                   BUFFER SORT (cr=0 pr=0 pw=0 time=0 us)
         0          0          0                    PX RECEIVE  (cr=0 pr=0 pw=0 time=0 us)
         0          0          0                     PX SEND ROUND-ROBIN :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
       576        576        576                      VIEW  (cr=6 pr=1 pw=0 time=1041 us cost=10 size=6120 card=612)
       576        576        576                       MERGE JOIN CARTESIAN (cr=6 pr=1 pw=0 time=808 us cost=10 size=15912 card=612)
        32         32         32                        VIEW  (cr=2 pr=0 pw=0 time=108 us cost=2 size=52 card=4)
        32         32         32                         TABLE ACCESS FULL SYS_TEMP_0FD9D662E_ACCAE0 (cr=2 pr=0 pw=0 time=107 us cost=2 size=16 card=4)
       576        576        576                        BUFFER SORT (cr=4 pr=1 pw=0 time=789 us cost=10 size=1989 card=153)
        18         18         18                         VIEW  (cr=4 pr=1 pw=0 time=626 us cost=2 size=1989 card=153)
        18         18         18                          TABLE ACCESS FULL SYS_TEMP_0FD9D6631_ACCAE0 (cr=4 pr=1 pw=0 time=625 us cost=2 size=918 card=153)
         0          0          0                   MAT_VIEW ACCESS BY INDEX ROWID NEX_MVW_STORE_ITEM_INFO (cr=0 pr=0 pw=0 time=0 us cost=2 size=16 card=1)
         0          0          0                    INDEX RANGE SCAN NEX_MVW_STORE_ITEM_INFO_I2 (cr=0 pr=0 pw=0 time=0 us cost=0 size=0 card=10)(object id 104679)



Connor McDonald
October 17, 2017 - 3:13 am UTC

Generally, the most deeply nested operation executes earliest. But the trace file will contain the SQL_ID, which you can then use to mine V$SQL_PLAN for the true ID number of each plan line.
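
As a sketch (the SQL_ID below is a placeholder; substitute the one from your trace file), you could pull the plan lines with their true ID and PARENT_ID numbers like this:

```sql
-- Plan lines with real ID/PARENT_ID numbers for one cursor.
-- '7b2twsn8vgfsq' is a placeholder SQL_ID; use the one from the trace file.
SELECT id, parent_id, depth,
       LPAD(' ', depth) || operation ||
         CASE WHEN options IS NOT NULL THEN ' ' || options END AS operation,
       object_name
FROM   v$sql_plan
WHERE  sql_id       = '7b2twsn8vgfsq'
AND    child_number = 0
ORDER  BY id;
```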

It's good

vineet tyagi, November 22, 2018 - 6:30 am UTC

Execution time is more and

Tony, October 15, 2019 - 9:30 am UTC

I have a query in the AWR report whose elapsed time per execution is ~20s. When the same query is executed in SQL Developer against the same database, the execution time is 2s. When I compare the explain plan of the query with the actual plan used, I find that they are different. Since the estimated plan is better than the actual plan, is there any way to force the optimizer to use the estimated plan instead of the actual plan it used?
Connor McDonald
October 21, 2019 - 6:28 am UTC

Yes, check out SQL Plan Management (SPM). It is designed for exactly this scenario.

Nigel Bayliss has a great presentation on the history of the options at your disposal, but go with the SPM option:

https://www.doag.org/formes/pubfiles/9111274/2017-DB-Nigel_Bayliss-Getting_Creative_with_SQL_Profiles_and_SPM-Praesentation.pdf
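
As a minimal sketch of the SPM approach (the SQL_ID and plan hash value are placeholders; DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE is the documented routine for capturing a plan that is currently in the cursor cache):

```sql
-- Load the "good" plan from the cursor cache as an accepted baseline.
-- SQL_ID and PLAN_HASH_VALUE are placeholders; use your own values,
-- e.g. from V$SQL after running the fast version of the query.
DECLARE
  l_loaded PLS_INTEGER;
BEGIN
  l_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                sql_id          => '7b2twsn8vgfsq',
                plan_hash_value => 1234567890);
  DBMS_OUTPUT.PUT_LINE(l_loaded || ' plan(s) loaded');
END;
/

-- Verify the baseline exists and is accepted.
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines
ORDER  BY created DESC;
```

Once the baseline is accepted, the optimizer will prefer that plan for matching statements even if it costs an alternative plan lower.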



Checking PHV history on ADG

DBA, September 12, 2022 - 7:34 pm UTC

Hi,
On the primary read/write database, it is possible to see which PHV a SQL_ID used in the past (using dba_hist_sqlstat, dba_hist_snapshot, and dba_hist_active_sess_history). But on an ADG instance, which is read-only, is it possible to see the history of Plan Hash Values that a sql_id has used?
Connor McDonald
September 29, 2022 - 12:36 am UTC

You could use "remote" AWR for that, which effectively gives you the typical AWR infrastructure for an Active Data Guard instance.

Community member Yossi wrote a nice piece on this a while back:

https://oracledba.blogspot.com/2018/05/managing-awr-in-adg-standby-databases.html
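
For reference, a sketch of the PHV-history query you would run on a read/write instance (standard AWR views; with remote AWR configured, the same query can be pointed at the repository holding the standby's snapshots). The SQL_ID is a placeholder:

```sql
-- Plan hash value history for one statement, per AWR snapshot.
-- '7b2twsn8vgfsq' is a placeholder SQL_ID; substitute your own.
SELECT s.begin_interval_time,
       st.plan_hash_value,
       st.executions_delta,
       ROUND(st.elapsed_time_delta
             / NULLIF(st.executions_delta, 0) / 1e6, 2) AS elapsed_sec_per_exec
FROM   dba_hist_sqlstat  st
JOIN   dba_hist_snapshot s
  ON   s.snap_id         = st.snap_id
 AND   s.dbid            = st.dbid
 AND   s.instance_number = st.instance_number
WHERE  st.sql_id = '7b2twsn8vgfsq'
ORDER  BY s.begin_interval_time, st.plan_hash_value;
```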
