Oracle 9i generates statistics automatically?

Question and Answer

Tom Kyte

Thanks for the question, M.

Asked: May 10, 2002 - 6:30 pm UTC

Last updated: September 25, 2011 - 11:48 am UTC

Version: 9i


You Asked

We are using an Oracle 9i database. We want to use the Cost Based Optimiser, so we need to generate statistics for objects. I was wondering if Oracle 9i automatically generates statistics whenever the schema/data get updated. If not, do we need to generate statistics manually whenever the schema/data get updated? What is the best way to do it? (using a procedure, a trigger? using the ANALYZE command or something else? how often to do it? ...)

Thanks for any help

and Tom said...

search this site for

dbms_stats monitoring

to read some more info.


I would advise this:

o alter your tables to be "monitoring". we'll record the effects of changes to tables in the dictionary.

o use dbms_stats to collect statistics

o use the gather stale option to gather statistics on tables -- this works in conjunction with the monitoring to gather stats only on objects that NEED it.

o use dbms_job to schedule the stats collection to happen when you want it to (eg: late at night when the load is low)

o if you use the gather stale, you can do it as frequently as you like since only "stale" objects will be analyzed (see the sketch below)
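
A minimal sketch of that recipe, for reference. The schema name SCOTT, the table name T and the 2am schedule are placeholders of mine, not part of the original answer:

alter table t monitoring;

begin
   dbms_stats.gather_schema_stats
   ( ownname => 'SCOTT',
     options => 'GATHER STALE',
     cascade => true );
end;
/

variable jobno number
begin
   dbms_job.submit
   ( job       => :jobno,
     what      => 'dbms_stats.gather_schema_stats( ownname => ''SCOTT'', options => ''GATHER STALE'', cascade => true );',
     next_date => trunc(sysdate)+1+2/24,
     interval  => 'trunc(sysdate)+1+2/24' );
   commit;
end;
/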



Comments

No examples on how to use it

reader, May 13, 2002 - 8:24 am UTC

You should have given examples on how to use it

Tom Kyte
May 13, 2002 - 9:01 am UTC

Sorry, my crystal ball is in the shop -- hence my inability to read your mind and deliver exactly what you wanted.

Read
http://docs.oracle.com/docs/cd/A91202_01/901_doc/appdev.901/a89852/dbms_sta.htm#999107

It goes through this and has examples as well.

Can't believe it...

Jan van Mourik, May 13, 2002 - 10:56 am UTC

Mr. Tom, you are slacking... No examples! Are we now supposed to do some of the research ourselves?!?

Can't believe how demanding some people are! All of us should be grateful for any and all the time Mr. Kyte spends answering our questions, day in day out, even on the weekend. And maybe sometimes we have to do some research ourselves... Isn't that our job anyway???

Where is my application...?

Connor, May 13, 2002 - 11:48 am UTC

"Tom - I need my system designed, my research done, my application coded, my test plan created, and the system implemented. Your site says 'sorry no questions' too often - can you please have all my work done for me by the end of the day"

Man - I cannot believe the audacity of some people...

Back on the original topic, I hear rumours that 9i.2 will do some form of on-the-fly statistics gathering if required (ie certain parameters set). Can you confirm/deny?

Cheers
Connor

Tom Kyte
May 13, 2002 - 1:59 pm UTC

(you should see some of the email I get, makes this look tame)

I believe you mean the more intense stats collection that goes on in order to find problem statements:


<quote>
Setting the Level of Statistics Collection

Oracle9i Release 2 (9.2) provides the initialization parameter STATISTICS_LEVEL , which controls all major statistics collections or advisories in the database. This parameter sets the statistics collection level for the database.

Depending on the setting of STATISTICS_LEVEL, certain advisories and statistics are collected, as follows:

BASIC: No advisories or statistics are collected.

TYPICAL: The following advisories or statistics are collected:

* Buffer cache advisory
* MTTR advisory
* Shared Pool sizing advisory
* Segment level statistics
* PGA target advisory
* Timed statistics

ALL: All of the preceding advisories or statistics are collected, plus the following:

* Timed operating system statistics
* Row source execution statistics

The default level is TYPICAL. STATISTICS_LEVEL is a dynamic parameter and can be altered at the system or the session level.

When modified by ALTER SYSTEM, all advisories or statistics in the preceding list are dynamically turned on or off, depending on the new value of STATISTICS_LEVEL.

When modified by ALTER SESSION, only the following advisories or statistics are turned on or off in the local session only. Their systemwide state is not changed.

* Timed statistics
* Timed operating system statistics
* Row source execution statistics
</quote>
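
For reference, a minimal sketch of how that parameter is typically set, using the values from the quote above:

alter system  set statistics_level = typical;
alter session set statistics_level = all;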


Or you might mean:

<quote>

Dynamic Sampling for the Optimizer

The purpose of dynamic sampling is to improve server performance by determining more accurate selectivity and cardinality estimates. More accurate selectivity and cardinality estimates allow the optimizer to produce better performing plans.

You can use dynamic sampling to:

* Estimate single-table predicate selectivities when collected statistics cannot be used or are likely to lead to significant errors in estimation.
* Estimate table cardinality for tables without statistics or for tables whose statistics are too out of date to trust.

The DYNAMIC_SAMPLING hint lets you control dynamic sampling to improve server performance by determining more accurate selectivity and cardinality estimates. You can set the value of DYNAMIC_SAMPLING to a value from 0 to 10. The higher the level, the more effort the compiler puts into dynamic sampling and the more broadly it is applied. Sampling defaults to cursor level unless you specify a table.
See Also:

* "Dynamic Sampling" for information about when and how to use dynamic sampling
* "DYNAMIC_SAMPLING" for details about using this hint

......

Dynamic Sampling

The purpose of dynamic sampling is to improve server performance by determining more accurate selectivity and cardinality estimates. More accurate selectivity and cardinality estimates allow the optimizer to produce better performing plans.

You can use dynamic sampling to:

* Estimate single-table predicate selectivities when collected statistics cannot be used or are likely to lead to significant errors in estimation.
* Estimate table cardinality for tables without statistics or for tables whose statistics are too out of date to trust.
How Dynamic Sampling Works

The primary performance attribute is compile time. Oracle determines at compile time whether a query would benefit from dynamic sampling. If so, a recursive SQL statement is issued to scan a small random sample of the table's blocks, and to apply the relevant single table predicates to estimate predicate selectivities. The sample cardinality can also be used, in some cases, to estimate table cardinality.

Depending on the value of the OPTIMIZER_DYNAMIC_SAMPLING initialization parameter, a certain number of blocks are read by the dynamic sampling query.

</quote>
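
And a minimal sketch of the hint described in that quote (the table name T and level 2 are placeholders of mine):

select /*+ dynamic_sampling(t 2) */ count(*)
  from t;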



Consultancy for nothing...

Andre Whittick Nasser, May 13, 2002 - 1:59 pm UTC

I myself get upset when I receive some messages from so-and-so asking: "Would you please design a system for ... ", about 40 lines long. And this is no kidding.

IMP ... STATISTICS=SAFE

Andre Whittick Nasser, May 13, 2002 - 2:13 pm UTC

Well, every now and then I happen to find these "vague" terms in Oracle literature.

Oracle says the SAFE option of the ANALYZE parameter imports statistics that are "not questionable"; otherwise, Oracle recalculates them.

How is that ?

Thanks

Tom Kyte
May 13, 2002 - 2:28 pm UTC

When are statistics questionable ?

The precalculated optimizer statistics are flagged as questionable at export
time if:

-> There are row errors while exporting
-> The client character set or NCHAR character set does not match the server
character set or NCHAR character set
-> A QUERY clause is specified
-> Only certain partitions or subpartitions are exported



dynamic sampling

PINGU, May 12, 2003 - 6:00 am UTC

hi

what is dynamic sampling for? I have been reading the doc over and over again, about 5 times, and I still don't see what it is useful for...?

Tom Kyte
May 12, 2003 - 7:57 am UTC

search for optimizer_dynamic_sampling


it basically lets the sql optimizer process interrogate a database table that is not analyzed, but is used in a query with other tables that are, before parsing the query.

In that fashion, the database "can have a clue" as to the statistics regarding the unanalyzed table.

sooooo only useful

PINGU, May 12, 2003 - 8:30 am UTC

when I don't analyze my tables I guess? If that's the case I am fine! I always analyze my tables and indexes :-D

Tom Kyte
May 12, 2003 - 9:14 am UTC

global temporary tables...
DUAL...
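
(A minimal sketch of that point; the GTT name and data below are mine, not from the thread. A global temporary table typically has no statistics, so dynamic sampling is what gives the optimizer "a clue" at parse time.)

create global temporary table gtt ( x number ) on commit preserve rows;

insert into gtt select object_id from all_objects where rownum <= 1000;

select /*+ dynamic_sampling(gtt 2) */ * from gtt where x > 0;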

DBMS_STATS vs. ANALYZE

Arun Gupta, May 14, 2003 - 8:14 pm UTC

Tom
Though Oracle recommends use of DBMS_STATS to gather statistics, I had a strange experience. Using Oracle 9.2.0.3, we use DBMS_STATS to gather schema statistics on a nightly basis. One day, suddenly the CPU utilization on a 4 CPU Sunfire 15k went up to 25% on all CPUs. I looked at the sessions through OEM and found that three sessions were marked as active. All these sessions had been executing the same SQL for the last several hours. I extracted the SQL and ran it from SQL*Plus with the same parameters as entered from the front end. The query returned results within 2 seconds. I did this 100 times in a loop, no problem. I had personally tuned the query and saved the plan and execution statistics. Comparison yielded no difference. The same search fired from the front end just hung. The sessions showed that LIO and PIO were steadily increasing. The sessions were not waiting on a lock, and the queries are select statements. I killed the sessions and CPU utilization went down to 5%. This kept happening for several days. One day, as I was investigating, a colleague said "analyze the tables". I checked the last analyzed date on tables, indexes and columns. They were analyzed the night before. Still, I issued analyze table compute statistics against the three underlying tables. The problem disappeared immediately!! My question here is: does dbms_stats miss out on something which analyze puts back?

Thanks

Tom Kyte
May 14, 2003 - 11:52 pm UTC

did you compare the before/afters of the dictionary tables?

were the tables partitioned?

and most importantly -- what parameters did you use for dbms_stats.



Analysis not required?

Tony, May 15, 2003 - 2:21 am UTC

Do you mean to say that statistics collection using Analyze or dbms_stats is not at all required
if optimizer_dynamic_sampling is enabled?




Tom Kyte
May 15, 2003 - 9:22 am UTC

did i say that?

I'm saying for tables that are not analyzed for whatever reason (global temporary tables pop into mind), this will help correct that.

do you really want to in effect analyze all of your objects for each parse? or would it be better to analyze once and use it over and over....

confused about gather stale and auto ...

Reader, October 10, 2003 - 9:15 pm UTC

Tom, could you please help me in understanding the difference between gather stale and gather auto feature in gathering schema level stats? Thanks.

Tom Kyte
October 11, 2003 - 10:17 am UTC

gather auto does what STALE does PLUS it gathers stats on tables that have no stats.

both require monitoring to be enabled on the tables to work properly.
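
A minimal sketch contrasting the two options (the schema name is a placeholder):

-- only "stale" (monitored, sufficiently changed) tables:
exec dbms_stats.gather_schema_stats( ownname => 'SCOTT', options => 'GATHER STALE' );

-- stale tables PLUS tables that have no statistics at all:
exec dbms_stats.gather_schema_stats( ownname => 'SCOTT', options => 'GATHER AUTO' );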

gather stale and auto procedures

Sam Bernard, October 11, 2003 - 11:37 am UTC

Do these work only at the schema level? I mean, if I wanted to use these procedures for individual tables in a schema after the monitoring flags are set to on and let oracle gather stats, can I? Thanks.

Tom Kyte
October 11, 2003 - 11:52 am UTC

documentation says....

http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_stats2.htm#1003997

you can do it at the database level as well.

it doesn't really "make sense" to do it at the "table" level.
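
For the database level, a minimal sketch would be:

exec dbms_stats.gather_database_stats( options => 'GATHER STALE' );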



Thanks

Sam Bernard, October 11, 2003 - 12:47 pm UTC

Just out of curiosity, in which data dictionary table(s) does oracle store the statistics for tables and indexes? Thanks.

Tom Kyte
October 11, 2003 - 1:11 pm UTC

the view you need is user|dba|all_indexes or user|dba|all_tables

if you are truly curious -- select the view text from all_views to see where the stuff "really" comes from.
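
For instance, a minimal sketch (these are the standard statistics columns in those views):

select table_name, num_rows, blocks, avg_row_len, last_analyzed
  from user_tables;

select index_name, distinct_keys, clustering_factor, last_analyzed
  from user_indexes;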

Thanks. I did and it is amazing! Thanks again for the info.

Sam Bernard, October 11, 2003 - 1:28 pm UTC


Setting For GTT's - 9.2.0.3

Steve, October 19, 2003 - 7:49 pm UTC

Tom,
What value for optimizer_dynamic_sampling would you recommend when using Global Temporary Tables?
Thanks.


Tom Kyte
October 20, 2003 - 8:18 am UTC

at least 2 -- in order to get all unanalyzed tables (the GTT in this case) to be sampled (since 0 disables this and 1 doesn't do anything if an index exists)
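
A minimal sketch of that setting (shown at the session level; it can also be set system wide):

alter session set optimizer_dynamic_sampling = 2;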

What about 'with' clause?

Steve, February 17, 2004 - 9:12 pm UTC

Can dynamic sampling be applied to the temporary table that is created when you use a query inside a 'with' clause? The times that I've used a 'with' clause, the explain has always shown that a temporary table is created prior to running the rest of my query. (9.2.0.4)

Tom Kyte
February 18, 2004 - 7:36 am UTC

the statistics in this case come from the UNDERLYING tables.

It is a chicken and egg problem -- the plan is fixed at parse time -- *before* the "with" subquery would be materialized. hence the dynamic sampling could have no effect.

so, no dynamic sampling on the "materialized temp table" but dynamic sampling on the objects that make up the temp table.

Very impressive

Nasir, February 18, 2004 - 2:17 pm UTC

The term 'Tables' is so deep now. I am glad Oracle does not have term 'Chairs' like 'Tables'
:)

More about 'with' clause

Steve, February 18, 2004 - 7:44 pm UTC

After seeing your response and looking at other related posts, I've come to the conclusion that the way I'm using the with clause is a bad idea. I'm using it to retrieve a variable set of key values from a collection built by a packaged function and using the results in several IN clauses in the main query.

Since the optimizer has no way of knowing how big that set of values is, it will lean toward doing a hash join with full table scans (in my case, a bad choice for an 80 million row table). This issue has been discussed in a number of posts on this site.

It seems that a much better choice for this kind of situation is to populate a global temporary table and join to that table in my main query allowing the DYNAMIC_SAMPLING hint to figure out how best to optimize it.

Am I on the right track here?

Another question: if the GTT is used in a OLTP environment, how does the fact that the GTT may contain rows for multiple users at any point in time affect the statistics created by DYNAMIC_SAMPLING? Does it somehow know to look at only the portion of the GTT that belongs to the current user?

By the way, I would like to say that after referencing this site extensively in the last few months (and posting for the first time yesterday), I have found it to be an extremely valuable source of accurate information and extensive pertinent examples. I commend you for having the patience and willingness to spend the time required to do it and Oracle for providing a venue for it.

Tom Kyte
February 19, 2004 - 8:13 am UTC

Actually..... this is an excellent opportunity for a discussion on the CARDINALITY hint (not quite well documented).

It might just be best for you to say "about how many rows" will be coming out of that plsql procedure -- you should have some sort of idea "about how many" will be returned (does not have to be exact).  

We'll take a look at how to do this, first a little setup:


ops$tkyte@ORA920PC> create or replace type str2tblType as table of varchar2(30);
  2  /
 
Type created.
 
ops$tkyte@ORA920PC> create or replace
  2  function str2tbl( p_str in varchar2, p_delim in varchar2 default ',' )
  3  return str2tblType
  4  as
  5      l_str      long default p_str || p_delim;
  6      l_n        number;
  7          l_data     str2tblType := str2tbltype();
  8  begin
  9          dbms_application_info.set_client_info( userenv('client_info')+1 );
 10      loop
 11          l_n := instr( l_str, p_delim );
 12          exit when (nvl(l_n,0) = 0);
 13                  l_data.extend;
 14                  l_data(l_data.count) := ( ltrim(rtrim(substr(l_str,1,l_n-1))) );
 15          l_str := substr( l_str, l_n+1 );
 16      end loop;
 17      return l_data;
 18  end;
 19  /
 
Function created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> drop table emp;
 
Table dropped.
 
ops$tkyte@ORA920PC> create table emp
  2  as
  3  select object_name ename, max(object_id) empno, max(object_type) ot, max(created) created,
  4         rpad( '*', 80, '*') data
  5    from all_objects
  6   group by object_name;
 
Table created.


ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> alter table emp add constraint emp_pk primary key(empno)
  2  /
 
Table altered.
 
ops$tkyte@ORA920PC> create index ename_idx on emp(ename)
  2  /
 
Index created.
 
ops$tkyte@ORA920PC> analyze table emp compute statistics for table for all indexes for all indexed columns
  2  /
 
Table analyzed.
 
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> variable in_list varchar2(255)
ops$tkyte@ORA920PC> exec :in_list := 'DBMS_PIPE,DBMS_OUTPUT,UTL_FILE';
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA920PC> set autotrace traceonly explain
ops$tkyte@ORA920PC> select *
  2    from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t
  3  /
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'


<b>given that, we can see that we can "select" from a plsql function.  We'd be interested in the cost/card information, so...</b>

ops$tkyte@ORA920PC> select /*+ FIRST_ROWS */ *
  2    from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t
  3  /
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=11 Card=8168)
   1    0   COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'


<b>8,168 -- that's the magic number of rows the optimizer uses against gtt's and this "trick".  That's a big number -- probably many times larger than your reality.

but what if you knew that "in general" your routine returned "about 10-20" rows.  Is there a way to tell the optimizer this information?  Yes..  Consider:</b>



ops$tkyte@ORA920PC> select <b>/*+ cardinality(t 10 ) */</b> *
  2    from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t
  3  /
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=11 Card=10)
   1    0   COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'


<b>Now, the optimizer thinks "10" not "8,168".  But...</b>

ops$tkyte@ORA920PC> select *
  2    from emp
  3   where ename in ( select /*+ cardinality(t 10 ) */ *
  4                      from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t
  5                  )
  6  /
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=306920 Card=1 Bytes=88)
   1    0   NESTED LOOPS (SEMI) (Cost=306920 Card=1 Bytes=88)
   2    1     TABLE ACCESS (FULL) OF 'EMP' (Cost=53 Card=18051 Bytes=1588488)
   3    1     COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'


<b>it didn't "work" in a more complex query  (we'd expect "index emp" if it did) -- that is because the card was not attached to step 3 (as it was not attached to step 1 above).  In fact, we'd find this plan to be "unappealing" for more than just the bad card:</b>


ops$tkyte@ORA920PC> set autotrace traceonly
ops$tkyte@ORA920PC> select *
  2    from emp
  3   where ename in ( select /*+ cardinality(t 10 ) */ *
  4                      from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t
  5                  )
  6  /
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=306920 Card=1 Bytes=88)
   1    0   NESTED LOOPS (SEMI) (Cost=306920 Card=1 Bytes=88)
   2    1     TABLE ACCESS (FULL) OF 'EMP' (Cost=53 Card=18051 Bytes=1588488)
   3    1     COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'
 
 
 
 
Statistics
----------------------------------------------------------
         23  recursive calls
          0  db block gets
        341  consistent gets
          0  physical reads
          0  redo size
        768  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
          3  rows processed
 
ops$tkyte@ORA920PC> set autotrace off
ops$tkyte@ORA920PC> select userenv('client_info') from dual;
<b> 
USERENV('CLIENT_INFO')
----------------------------------------------------------------
18051</b>

<b>hey -- our function (i counted the calls using dbms_application_info) was called 18,050 times -- funny:</b>

ops$tkyte@ORA920PC> select count(*) from emp;
 
  COUNT(*)
----------
     18051

<b>that's the number of rows in the table -- the function was called over and over.  The solution is to use rownum (order by, union, intersect, group by, etc will have the same effect) to have the subquery "materialized"</b>

ops$tkyte@ORA920PC> set autotrace traceonly
ops$tkyte@ORA920PC> select *
  2    from emp
  3   where ename in ( select /*+ cardinality(t 10 ) */ *
  4                 from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t
  5       <b>                              where rownum >= 0</b>
  6                  )
  7  /
 
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=62 Card=10 Bytes=1050)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=88)
   2    1     NESTED LOOPS (Cost=62 Card=10 Bytes=1050)<b>
   3    2       VIEW OF 'VW_NSO_1' (Cost=17 Card=10 Bytes=170)</b>
   4    3         SORT (UNIQUE)
   5    4           COUNT
   6    5             FILTER
   7    6               COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'<b>
   8    2       INDEX (RANGE SCAN) OF 'ENAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)
 </b>
 
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         15  consistent gets
          0  physical reads
          0  redo size
        768  bytes sent via SQL*Net to client
        499  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          3  rows processed
 
ops$tkyte@ORA920PC> set autotrace off
ops$tkyte@ORA920PC> select userenv('client_info') from dual;
 <b>
USERENV('CLIENT_INFO')
----------------------------------------------------------------
1
 </b>

Now we

a) have the card in the plan
b) use the index as expected
c) as a bonus, have our function called once per query -- not per row


sooo, long story short

a) yes, the insertion of the procedure into a gtt and then dynamic sampling on the gtt will work

b) but, you might be able to avoid those steps with the cardinality hint (one of the good hints -- gives the optimizer INFORMATION, does not tell it what to do)


Note that cardinality is 9ir1 and above and dynamic sampling is 9ir2 and above.


 



 

Excellent response

Steve, February 19, 2004 - 4:05 pm UTC

I appreciate your very detailed and helpful response. I wasn't aware of the cardinality hint, so that was a great help. I'm not sure which of the two options (gtt or subquery) will be better in this particular query, so I'll try out both and see how they work.

My subquery will return one row in the majority of cases, but could return hundreds of rows in other cases. The gtt option would give the optimizer the most accurate counts to work with, but it seems that the cardinality estimate would be close enough to get it to make the right choice.

Thanks again for all the help.

Even more about 'with' clause

Steve, February 19, 2004 - 11:11 pm UTC

I tried using the cardinality hint that you suggested and
found that it worked great when I use my with clause query
only once in the main query. However, when I use it twice
(as a subquery in two unioned selects), it no longer
recognizes the hint.

I think this is similar to the example you provided which
did not use the 'where rownum >= 0'. The two queries were
merged together and the table alias t was lost, so the hint
was invalid and ignored. With the rownum clause, the
queries were not merged and the alias stayed valid.

In my case, the subquery is being used twice in the main
query, so it is stripped out of the body of the query and
processed once at the beginning of the process to create a
materialized temp table which is then used in the body of
the query. In the process of all that, the alias appears
to be getting lost once again and the hint becomes invalid.
I tried putting the hint in several places in the query
hoping to find a workaround, but it hasn't worked so far.

Since it's getting late, I'm hoping I can get away with
just including the two queries and their plans, but if
that's not enough, I can put together a better example
tomorrow. Thanks in advance for your help.

Less complex query and plan:

WITH property_list_subquery AS
(SELECT column_value property_id FROM TABLE(CAST(
ds_resstat_table_pkg.load_property_table('1', '98125', '~', 'S')
AS ds_resstat_resort_table_type)) )
SELECT
'A' column1
,1 curr_ctr
FROM pm_resort_s
WHERE resort IN ( SELECT /*+ cardinality(t 5 ) */ * FROM property_list_subquery t where rownum >= 0 )

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=14 Card=1 Bytes=18)
   1    0   HASH JOIN (SEMI) (Cost=14 Card=1 Bytes=18)
   2    1     INDEX (FAST FULL SCAN) OF 'RESORT_PK' (UNIQUE) (Cost=2 Card=5022 Bytes=30132)
   3    1     VIEW OF 'VW_NSO_1' (Cost=11 Card=5 Bytes=60)
   4    3       COUNT
   5    4         FILTER
   6    5           COLLECTION ITERATOR (PICKLER FETCH) OF 'LOAD_PROPERTY_TABLE'

More complex query and plan:

WITH property_list_subquery AS
(SELECT column_value property_id FROM TABLE(CAST(
ds_resstat_table_pkg.load_property_table('1', '98125', '~', 'S')
AS ds_resstat_resort_table_type)) )
SELECT
'A' column1
,1 curr_ctr
FROM pm_resort_s
WHERE resort IN ( SELECT /*+ cardinality(t 5 ) */ * FROM property_list_subquery t where rownum >= 0 )
UNION
SELECT
'B' column1
,2 curr_ctr
FROM pm_resort_s
WHERE resort IN ( SELECT /*+ cardinality(t 5 ) */ * FROM property_list_subquery t where rownum >= 0 )

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=78 Card=31004 Bytes=558072)
   1    2     RECURSIVE EXECUTION OF 'SYS_LE_2_0'
   2    0   TEMP TABLE TRANSFORMATION
   3    2     SORT (UNIQUE) (Cost=78 Card=31004 Bytes=558072)
   4    3       UNION-ALL
   5    4         NESTED LOOPS (Cost=3 Card=15502 Bytes=279036)
   6    5           VIEW OF 'VW_NSO_2' (Cost=3 Card=16360 Bytes=196320)
   7    6             COUNT
   8    7               FILTER
   9    8                 VIEW (Cost=3 Card=16360 Bytes=196320)
  10    9                   TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D666D_D288A10B' (Cost=3 Card=16360 Bytes=196320)
  11    5           INDEX (UNIQUE SCAN) OF 'RESORT_PK' (UNIQUE)
  12    4         NESTED LOOPS (Cost=3 Card=15502 Bytes=279036)
  13   12           VIEW OF 'VW_NSO_1' (Cost=3 Card=16360 Bytes=196320)
  14   13             COUNT
  15   14               FILTER
  16   15                 VIEW (Cost=3 Card=16360 Bytes=196320)
  17   16                   TABLE ACCESS (FULL) OF 'SYS_TEMP_0FD9D666D_D288A10B' (Cost=3 Card=16360 Bytes=196320)
  18   12           INDEX (UNIQUE SCAN) OF 'RESORT_PK' (UNIQUE)


A link from "collection inlist" searches to here would be useful

Bob Lyon, March 19, 2004 - 4:07 pm UTC

Tom,

FWIW, you are one impressive guru!

Showing how to "tune" collections as inlists was very useful, but I stumbled onto this thread more or less by accident. You might consider putting links to this thread in the major threads found by searching for "collection inlist".

I've played with your example on several databases, and the "8168" default cardinality appears to be based on the database block size.

If there is any Oracle documentation anywhere on the CARDINALITY hint, I couldn't find it.

What about an example of doing this in 10g which is supposed to natively support IN ( collection ). Sorry, I'm on windoze servers and 10g is not ready for us yet, or I'd try it myself. Will a CARDINALITY hint still be needed?

Whatever Larry pays you, it's not enough...

Tom Kyte
March 20, 2004 - 9:45 am UTC

well, let's see -- it is not "in (collection)", it is "member of collection", and would look like this:




ops$tkyte@ORA10G> create table t
  2  as
  3  select * from all_objects
  4  /
 
Table created.
 
ops$tkyte@ORA10G> alter table t add constraint t_pk primary key(object_id);
 
Table altered.
 
ops$tkyte@ORA10G> analyze table t compute statistics for table for all indexes for all indexed
  2  columns;
 
Table analyzed.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> create or replace type ArrayType as table of Number
  2  /
 
Type created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> @trace
ops$tkyte@ORA10G> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA10G> variable x refcursor;
ops$tkyte@ORA10G> variable y refcursor;
ops$tkyte@ORA10G> declare
  2      l_data arrayType := arrayType(1,2,3,5,6);
  3  begin
  4      open :x for
  5      select object_name, owner
  6        from t
  7       where object_id member of l_data;
  8
  9          open :y for
 10          select object_name, owner
 11            from t
 12           where object_id in ( select /*+ cardinality( t 10 ) */ * from TABLE(l_data) T );
 13  end;
 14  /
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> print x
 
OBJECT_NAME                    OWNER
------------------------------ ------------------------------
I_OBJ#                         SYS
C_TS#                          SYS
CLU$                           SYS
C_OBJ#                         SYS
 
ops$tkyte@ORA10G> print y
 
OBJECT_NAME                    OWNER
------------------------------ ------------------------------
C_OBJ#                         SYS
I_OBJ#                         SYS
CLU$                           SYS
C_TS#                          SYS
 
member of is not looking index friendly at this point -- even when hinted to use the index, it fast full scans the index -- not range scan:

SELECT OBJECT_NAME, OWNER
FROM
 T WHERE OBJECT_ID MEMBER OF :B1
                                                                                              
                                                                                              
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          6          0           0
Fetch        1      0.08       0.11          4        654          0           4
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.08       0.11          4        660          0           4
                                                                                              
Rows     Row Source Operation
-------  ---------------------------------------------------
      4  TABLE ACCESS FULL T (cr=654 pr=4 pw=0 time=110229 us)
********************************************************************************
                                                                                              
SELECT OBJECT_NAME, OWNER
FROM
 T WHERE OBJECT_ID IN ( SELECT /*+ cardinality( t 10 ) */ * FROM TABLE(:B1 )
  T )
                                                                                              
                                                                                              
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.03          0          0          0           0
Fetch        1      0.00       0.00          0         11          0           4
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.01       0.03          0         11          0           4
                                                                                              
Rows     Row Source Operation
-------  ---------------------------------------------------
      4  NESTED LOOPS  (cr=11 pr=0 pw=0 time=229 us)
      5   SORT UNIQUE (cr=0 pr=0 pw=0 time=101 us)
      5    COLLECTION ITERATOR PICKLER FETCH (cr=0 pr=0 pw=0 time=32 us)
      4   TABLE ACCESS BY INDEX ROWID OBJ#(49694) (cr=11 pr=0 pw=0 time=111 us)
      4    INDEX UNIQUE SCAN OBJ#(49695) (cr=7 pr=0 pw=0 time=70 us)(object id 49695)
                                                                                              
 

IN Clause Performance

Vinnie, July 29, 2004 - 12:47 pm UTC

Tom,

Why would the performance of the IN clause using the function be that much slower?

--drop table unit;
--drop table part;
--CREATE TABLE UNIT
--( ROW_ID NUMBER(19) NOT NULL ,
-- CONSTRAINT PK9 PRIMARY KEY (ROW_ID) USING INDEX PCTFREE 1,
--TIMEID NUMBER(19) NOT NULL ,
--ELA_TIME NUMBER NOT NULL,
--ID VARCHAR2(256),
--NAME VARCHAR2(256),
--GROUP_NAME VARCHAR2(256)
--);

--ALTER TABLE UNIT ADD CONSTRAINT UN23 UNIQUE (TIMEID,ID,ELA_TIME) USING INDEX;

--CREATE TABLE PART
--( ROW_ID NUMBER(19) NOT NULL ,CONSTRAINT PK10 PRIMARY KEY (ROW_ID) USING INDEX PCTFREE 1,
--TIMEID NUMBER(19) NOT NULL ,
--ELA_TIME NUMBER NOT NULL,
--ID VARCHAR2(256),
--NAME VARCHAR2(256)
--);

--ALTER TABLE PART ADD CONSTRAINT UN25 UNIQUE (TIMEID,ID,ELA_TIME) USING INDEX;

--analyze table unit compute statistics;
--analyze table part compute statistics;

/*
CREATE OR REPLACE VIEW UNIT_PART as
SELECT
a1.ID,
a1.TIMEID,
a1.ELA_TIME,
a1.group_name,
a1.NAME,
f1.name part_name
FROM
UNIT a1,
PART f1
WHERE
a1.name = f1.id
and a1.ela_time =
(select
max(ela_time)
from
UNIT
where
timeid = a1.timeid
and id = a1.id
and ela_time <= a1.ela_time
)
and f1.ela_time =
(select
max(ela_time)
from
PART
where
timeid = f1.timeid
and id = f1.id
and ela_time <= a1.ela_time
)
;
*/


alter session set sql_trace=true;
SELECT UNIQUE id,name, part_name
from
unit_part where
name IN (select * from the ( select cast(str2tbl('PE0000020909' ) as mytableType) from dual));

SELECT UNIQUE id,name, part_name
from
unit_part where
name IN (select * from the ( select cast(str2tbl('PE0000020909, PE0000020913' ) as mytableType) from dual));

SELECT UNIQUE id,part_name, part_name
from
unit_part where
name IN ('PE0000020909');

SELECT UNIQUE id,name, part_name
from
unit_part where
name IN ('PE0000020909','PE0000020913');
alter session set sql_trace=false;


SELECT UNIQUE id,name, part_name
from
unit_part where
name IN (select * from the ( select cast(str2tbl(:"SYS_B_0" ) as mytableType) from dual))

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch       34     14.78      14.42         70     649892          0         466
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       38     14.78      14.42         70     649892          0         466

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 314

Rows Row Source Operation
------- ---------------------------------------------------
230 SORT UNIQUE
29726 FILTER
29726 NESTED LOOPS
41918 HASH JOIN
20963 TABLE ACCESS FULL UNIT
10 TABLE ACCESS FULL PART
29726 COLLECTION ITERATOR PICKLER FETCH
29470 SORT AGGREGATE
29470 FIRST ROW
29470 INDEX RANGE SCAN (MIN/MAX) UN23 (object id 37940)
14330 SORT AGGREGATE
14330 FIRST ROW
14330 INDEX RANGE SCAN (MIN/MAX) UN25 (object id 37943)

********************************************************************************


SELECT UNIQUE id,part_name, part_name
from
unit_part where
name IN (:"SYS_B_0")

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       17      1.03       1.01         37     101885          0         230
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       19      1.03       1.01         37     101885          0         230

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 314

Rows Row Source Operation
------- ---------------------------------------------------
230 SORT UNIQUE
29726 FILTER
29726 NESTED LOOPS
14863 TABLE ACCESS FULL UNIT
29726 TABLE ACCESS FULL PART
14863 SORT AGGREGATE
14863 FIRST ROW
14863 INDEX RANGE SCAN (MIN/MAX) UN23 (object id 37940)
27478 SORT AGGREGATE
27478 FIRST ROW
27478 INDEX RANGE SCAN (MIN/MAX) UN25 (object id 37943)

********************************************************************************

SELECT UNIQUE id,name, part_name
from
unit_part where
name IN (:"SYS_B_0",:"SYS_B_1")

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       17      0.92       0.89         38      73507          0         236
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       19      0.92       0.89         38      73507          0         236

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 314

Rows Row Source Operation
------- ---------------------------------------------------
236 SORT UNIQUE
29782 FILTER
29782 HASH JOIN
14891 TABLE ACCESS FULL UNIT
4 TABLE ACCESS FULL PART
29526 SORT AGGREGATE
29526 FIRST ROW
29526 INDEX RANGE SCAN (MIN/MAX) UN23 (object id 37940)
14360 SORT AGGREGATE
14360 FIRST ROW
14360 INDEX RANGE SCAN (MIN/MAX) UN25 (object id 37943)

********************************************************************************

alter session set sql_trace=false





Tom Kyte
July 29, 2004 - 1:54 pm UTC

add a where rownum >= 0 to the subquery with str2tbl and rerun
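
For illustration only, a sketch of what that rewrite might look like for the first query above (same unit_part view, str2tbl function and mytableType from the post):

SELECT UNIQUE id, name, part_name
  FROM unit_part
 WHERE name IN ( SELECT *
                   FROM the ( SELECT CAST( str2tbl('PE0000020909') AS mytableType ) FROM dual )
                  WHERE rownum >= 0 );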

WORKS

Vinnie, July 29, 2004 - 2:44 pm UTC

That worked like a charm!
What did the rownum>=0 actually do for me?

Tom Kyte
July 29, 2004 - 3:05 pm UTC

materialized the plsql function "in temp" so it was not called over and over and over again (if you have Effective Oracle by Design, i go over this "trick" in the effective sql section)

OPTIMIZER_DYNAMIC_SAMPLING 10g

Robert, August 01, 2004 - 10:03 am UTC

From,
http://download-west.oracle.com/docs/cd/B13789_01/server.101/b10752/stats.htm#42992

<quote> For dynamic sampling to automatically gather the necessary statistics, this parameter should be set to a value of 2 or higher. The default value is 2. </quote>

It is quite a change from 9.2.0.4, as in 10g the default is set to 2. Does it mean that for every query, the optimizer is going to check to see if it needs to collect additional data to come up with a better execution plan? Don't you think it is an overhead? Why is 2 made the default setting for this parameter?

Tom Kyte
August 01, 2004 - 11:15 am UTC

since the CBO is the only game in town in 10g -- it means the initial query of an unanalyzed object is going to collect additional data -- yes.

if the tables are analyzed, this is a "no-op"
if the tables are not yet analyzed, this is sort of necessary as the cbo would be flying blind otherwise (you want overhead? run the cbo without stats sometime)

dynamic sampling

Robert, August 01, 2004 - 11:38 am UTC

Great stuff. Thanks.
<quote>it means the initial query of
an unanalyzed object is going to collect additional data </quote>

Tom, does it mean that the first time the optimizer collects stats for an unanalyzed table (because the dynamic sampling parameter is set to 2), the stats are populated in the data dictionary? So next time, if I issue a query that refers to the same table, it would use those stats, right?

Tom Kyte
August 01, 2004 - 11:59 am UTC

that would be the first time the query is hard parsed. it is associated with the query, not stored in the dictionary. to get the stats into the dictionary and have them be used, you are back to "analyze".
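
A minimal sketch of the two ways to get stats into the dictionary for a table (the table name T is a placeholder):

analyze table t compute statistics;

exec dbms_stats.gather_table_stats( user, 'T' );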

Arun Gupta, October 25, 2004 - 9:42 am UTC

Tom,
I need help in understanding how dynamic_sampling works. I have a 9iR2 database with optimizer_dynamic_sampling=1 (the default). I have a query with a GTT as one of the tables. In a 10046 trace session, I ran this query 4 times. Every time I changed the number of rows in the GTT before running this query. The tkprof report shows that the CBO was able to get the exact number of rows for every run. I have two questions:
a) If optimizer_dynamic_sampling is at its default (1), and the CBO can find out the exact number of rows in the GTT, when should I use the dynamic_sampling hint and sampling levels greater than 1?
b) Since the query was hard parsed only once, and soft parsed subsequently, how was the CBO able to determine that the number of rows in the GTT had changed?

I have the tkprof output, but it is a bit long so I am not posting it here. I can if you want to see it.
Thanks

Tom Kyte
October 25, 2004 - 10:52 am UTC

a tkprof report SHOWS ACTUAL numbers (reality) -- not estimated cardinalities!!!




optimizer

reader, October 28, 2004 - 7:11 pm UTC

The Oracle doc talks about the optimizer using IO and CPU stats in coming up with better execution plans. How does it use these stats? Can you give an example scenario? Thanks.

thx

Matthias Rogel, April 08, 2005 - 10:06 am UTC

thanx very much for the
/*+ cardinality ... */ hint

Many Thanks

Amol, May 31, 2005 - 6:31 am UTC

Very very useful tips for using cardinality hint effectively.


is there a dynamic sampling for a collection?

Vladimir Sadilovskiy, June 12, 2005 - 11:55 am UTC

Tom,

This post and the answers you provided most closely match the issue I've been trying to solve for about 2 months. I've found several things I cannot comprehend lying around this issue, and opened several BUGs. But I still didn't resolve it.

I really need your help on this. Hope you can solve it or show me another way. Thank you.

My questions are:
Q1. Is this expected behavior or rather a bug? If expected, why doesn't it do the same thing to normal tables?
Q2. Is there any other workaround besides _always_semi_join=off available?
Q3 (10g only). Why is the estimated cardinality of the sub-query far away from the hinted one?
Q4: Does it have something to do with poor information regarding the uniqueness of values in the collection?
Q5: In case #4 is true, how do I hint the NDV of the collection values?
Q6: Is it possible to dynamically sample a table function or provide more accurate values to the CBO?


Firstly, I've tested this for 9i (I've tried your > 0 workaround in 10g and got rather weird behavior, so I thought I'd divide this follow-up into two pieces). Here is my test case, which you might want to play with.

SQL> create table t1 as select * from all_objects;

Table created.

Elapsed: 00:00:00.05
SQL> insert /*+ append */ into t1 select * from t1;

6111 rows created.

Elapsed: 00:00:00.04
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> insert /*+ append */ into t1 select * from t1;

12222 rows created.

Elapsed: 00:00:00.02
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> insert /*+ append */ into t1 select * from t1;

24444 rows created.

Elapsed: 00:00:00.06
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> create table t2 as select * from all_objects;

Table created.

Elapsed: 00:00:00.05
SQL> insert /*+ append */ into t2 select * from t2;

6111 rows created.

Elapsed: 00:00:00.04
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> insert /*+ append */ into t2 select * from t2;

12222 rows created.

Elapsed: 00:00:00.02
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> insert /*+ append */ into t2 select * from t2;

24444 rows created.

Elapsed: 00:00:00.06
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> create sequence ts;

Sequence created.

Elapsed: 00:00:00.00
SQL> update t1 set object_id=ts.nextval;

48880 rows updated.

Elapsed: 00:00:07.05
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> drop sequence ts;

Sequence dropped.

Elapsed: 00:00:00.00
SQL> create sequence ts;

Sequence created.

Elapsed: 00:00:00.00
SQL> update t2 set object_id=ts.nextval;

48888 rows updated.

Elapsed: 00:00:08.00
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> update t1 set status = 0;

48880 rows updated.

Elapsed: 00:00:03.05
SQL>
SQL> update t2 set status = 0;

48888 rows updated.

Elapsed: 00:00:02.09
SQL>
SQL> create or replace type table_of_number as table of number;
  2  /

Type created.

Elapsed: 00:00:00.08
SQL>
SQL> create or replace function getnumericlist(card_n number) return table_of_number
  2  as
  3     l_numeric_list table_of_number;
  4  begin
  5     select object_id
  6       bulk collect into l_numeric_list
  7       -- from (select distinct object_id from t1)
  8       from t1
  9      where rownum <= card_n;
 10
 11     return l_numeric_list;
 12  end getnumericlist;
 13  /

Function created.

Elapsed: 00:00:00.02
SQL>
SQL> begin
  2     dbms_stats.delete_table_stats(null, 't1');
  3     dbms_stats.gather_table_stats(null, 't1',
  4         estimate_percent => 100,
  5         method_opt => 'for all columns size 254',
  6         cascade => true);
  7
  8     dbms_stats.delete_table_stats(null, 't2');
  9     dbms_stats.gather_table_stats(null, 't2',
 10         estimate_percent => 100,
 11         method_opt => 'for all columns size 254',
 12         cascade => true);
 13  end;
 14  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:13.08
SQL>

Ok, here goes the test. Let's run a query with a sub-query from a collection with low cardinality - 10.

SQL> select count(*) from (
  2  select t1.object_name,t1.object_type, t2.object_name
  3   from t1, t2
  4  where t1.object_id in (select /*+ cardinality(nlist 10) */ *
  5                           from table(cast(getnumericlist(10) as
  6  table_of_number)) nlist)
  7    and t2.object_id = t1.object_id
  8    and t1.status = 0
  9    and t2.status = 0);

  COUNT(*)
----------
        10

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=43 Card=1 Bytes=16)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=1 Bytes=7)
   3    2       NESTED LOOPS (Cost=43 Card=10 Bytes=160)
   4    3         NESTED LOOPS (Cost=23 Card=10 Bytes=90)
   5    4           SORT (UNIQUE)
   6    5             COLLECTION ITERATOR (PICKLER FETCH) OF 'GETNUMERICLIST'
   7    4           TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=1 Card=1 Bytes=7)
   8    7             INDEX (UNIQUE SCAN) OF 'T1_I' (UNIQUE)
   9    3         INDEX (RANGE SCAN) OF 'T2_I' (NON-UNIQUE) (Cost=1 Card=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         48  consistent gets
          0  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

If you ask me, the plan is perfect. Let's see what happens if higher cardinality is provided.

SQL> select count(*) from (
  2  select t1.object_name,t1.object_type, t2.object_name
  3   from t1, t2
  4  where t1.object_id in (select /*+ cardinality(nlist 10000) */ *
  5                           from table(cast(getnumericlist(10000) as
  6  table_of_number)) nlist)
  7    and t2.object_id = t1.object_id
  8    and t1.status = 0
  9    and t2.status = 0);

  COUNT(*)
----------
     10000

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=97 Card=1 Bytes=16)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=1 Bytes=7)
   3    2       NESTED LOOPS (Cost=97 Card=1 Bytes=16)
   4    3         HASH JOIN (SEMI) (Cost=95 Card=1 Bytes=9)
   5    4           TABLE ACCESS (FULL) OF 'T1' (Cost=57 Card=48848 Bytes=341936)
   6    4           COLLECTION ITERATOR (PICKLER FETCH) OF 'GETNUMERICLIST'
   7    3         INDEX (RANGE SCAN) OF 'T2_I' (NON-UNIQUE) (Cost=1 Card=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      11215  consistent gets
         77  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

This was not so perfect. The incorrectly calculated cardinality out of the semi-join makes the CBO employ a nested loop on T2_I rather than using a hash join on T2. Let's reduce the cardinality a bit. Let's do 100. Do you think it would have generated NLs? Well, it should have, but hang on ...

SQL> select count(*) from (
  2  select t1.object_name,t1.object_type, t2.object_name
  3   from t1, t2
  4  where t1.object_id in (select /*+ cardinality(nlist 100) */ *
  5                           from table(cast(getnumericlist(100) as
  6  table_of_number)) nlist)
  7    and t2.object_id = t1.object_id
  8    and t1.status = 0
  9    and t2.status = 0);

  COUNT(*)
----------
       100

Elapsed: 00:00:03.06

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=190 Card=1 Bytes=16)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=1 Bytes=7)
   3    2       NESTED LOOPS (Cost=190 Card=1 Bytes=16)
   4    3         HASH JOIN (SEMI) (Cost=188 Card=1 Bytes=9)
   5    4           TABLE ACCESS (FULL) OF 'T1' (Cost=139 Card=48848 Bytes=341936)
   6    4           COLLECTION ITERATOR (PICKLER FETCH) OF 'GETNUMERICLIST'
   7    3         INDEX (RANGE SCAN) OF 'T2_I' (NON-UNIQUE) (Cost=1 Card=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        698  consistent gets
         77  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

... it did that semi-join trick again. Why? The cost of joining to T2_I with nested loops is much cheaper in the CBO's eyes than the cost if it used NLs as in the previous step. In fact, using the semi-join is not as good as this:

SQL> alter session set "_always_semi_join"=off;

Session altered.

Elapsed: 00:00:00.00
SQL> select count(*) from (
  2  select t1.object_name,t1.object_type, t2.object_name
  3   from t1, t2
  4  where t1.object_id in (select /*+ cardinality(nlist 100) */ *
  5                           from table(cast(getnumericlist(100) as
  6  table_of_number)) nlist)
  7    and t2.object_id = t1.object_id
  8    and t1.status = 0
  9    and t2.status = 0);

  COUNT(*)
----------
       100

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=328 Card=1 Bytes=6)
   1    0   SORT (AGGREGATE)
   2    1     VIEW (Cost=328 Card=100 Bytes=600)
   3    2       SORT (UNIQUE) (Cost=328 Card=100 Bytes=3000)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=1Bytes=14)
   5    4           NESTED LOOPS (Cost=325 Card=100 Bytes=3000)
   6    5             NESTED LOOPS (Cost=125 Card=100 Bytes=1600)
   7    6               COLLECTION ITERATOR (PICKLER FETCH) OF 'GETNUMERICLIST'
   8    6               TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=1Card=1 Bytes=14)
   9    8                 INDEX (UNIQUE SCAN) OF 'T1_I' (UNIQUE)
  10    5             INDEX (RANGE SCAN) OF 'T2_I' (NON-UNIQUE) (Cost=1 Card=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        314  consistent gets
          0  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

Now I'm reverting back and doing your rownum > 0 trick.

SQL> alter session set "_always_semi_join"=choose;

Session altered.

Elapsed: 00:00:00.00
SQL> select count(*) from (
  2  select t1.object_name,t1.object_type, t2.object_name
  3   from t1, t2
  4  where t1.object_id in (select /*+ cardinality(nlist 100) */ *
  5                           from table(cast(getnumericlist(100) as
  6  table_of_number)) nlist where rownum > 0)
  7    and t2.object_id = t1.object_id
  8    and t1.status = 0
  9    and t2.status = 0);

  COUNT(*)
----------
       100

Elapsed: 00:00:00.02

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=196 Card=1 Bytes=27)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (Cost=2 Card=1 Bytes=7)
   3    2       NESTED LOOPS (Cost=196 Card=1 Bytes=27)
   4    3         HASH JOIN (SEMI) (Cost=194 Card=1 Bytes=20)
   5    4           TABLE ACCESS (FULL) OF 'T1' (Cost=145 Card=48880 Bytes=342160)
   6    4           VIEW OF 'VW_NSO_1' (Cost=25 Card=100 Bytes=1300)
   7    6             COUNT
   8    7               FILTER
   9    8                 COLLECTION ITERATOR (PICKLER FETCH) OF 'GETNUMERICLIST'
  10    3         INDEX (RANGE SCAN) OF 'T2_I' (NON-UNIQUE) (Cost=1 Card=1)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        716  consistent gets
         77  physical reads
          0  redo size
        379  bytes sent via SQL*Net to client
        503  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

Unfortunately, no change in semi-join behavior and it has even more LIOs.

The weird things in 10g were happening on a Linux box. So, the test for 10g is coming after I install it on the XP box, where the above test was done.

Again, thanks in advance for helping me with this. 

- Vladimir
 

sorry, setup was so huge forgot the indexes...

Vladimir Sadilovskiy, June 12, 2005 - 12:06 pm UTC

Tom,

I'm terribly sorry I forgot to provide the index statements. Please, execute them before gathering statistics:

create unique index t1_i on t1(object_id);
create index t2_i on t2(object_id);

They are different in my environment - one is unique another is normal. But I don't really think that changes the picture a lot.

Thank you.

- Vladimir

Tom Kyte
June 12, 2005 - 12:20 pm UTC

Have you peeked at

https://www.oracle.com/technetwork/issue-archive/2014/14-sep/o54asktom-2279184.html

and

http://asktom.oracle.com/~tkyte/cardinality.html


with collections -- there are quite simply "no statistics" other than the supplied (burned into the kernel) ones.

this is sort of "big" for a followup/review.

but while in general the goal is reduced LIO, there are other things to consider there -- use tkprof to see which one does less overall work as well -- one does sorting, one does not, and so on.


I don't see anything inherently "evil" happening here. The reduced LIO's are most likely due to the fact that the data was sorted off into temp and read from temp instead of the buffer cache.


eg:

select * from big_table;

will do more LIO than

select * from big_table order by something;

in general, but I can assure you that select * from big_table is easier to process than select * from big_table order by something is.



LIOs are no concern to me here. The incorrectly assumed cardinality out of the semi-join is.

Vladimir Sadilovskiy, June 12, 2005 - 1:11 pm UTC

Tom,

I've read the posts. Thank you. And to supply more details to the CBO I'm using the cardinality hint. But what good is it? The CBO calculates the cardinality of the semi-join as:

semi-join cost: outer card.*outer select.

Am I wrong to say that it should be:

outer card.*outer select.*inner ndv

I think this subquery from a table function is a pretty neat thing (I probably should be proud I found it myself :)) and I did some testing a long time ago when we were looking for a good substitute for IN-LISTs. I drew some charts for several types of queries and in conclusion can say it starts to be more effective than an IN-LIST when the number of elements is more than 100. Since then, developers have employed this in many places. But when somebody used it in the mentioned statement, the semi-join hit us really badly. Of course I've looked through and analyzed the 10046 and 10053 traces. The full table scan used in the semi-join is a bad thing when followed by NLs on a non-unique index range scan.

If you have time, would you please give me a rough direction to look into.

Thanks again.

- Vladimir

Tom Kyte
June 12, 2005 - 1:58 pm UTC

if you believe this to be a bug, the only place I can send you is back to support with the test cases to get a bug filed.

but what about the idea, Tom?

Vladimir Sadilovskiy, June 12, 2005 - 3:38 pm UTC

Tom, I did open an SR. The only reason I'm asking you is because I know you're an artist, someone who is willing and able to find pretty solutions.

Tom, why didn't you answer most of my questions? Do you think the test case isn't good enough? Don't get me wrong. I'm almost desperate to find a good workaround. Even a bit of information I don't see or have might help.

But in case you are interested in solving it, here are the tests that I just ran on 10g.

SQL> select count(*) from (
  2  select t1.object_name,t1.object_type, t2.object_name
  3   from t1, t2
  4  where t1.object_id in (select /*+ cardinality(nlist 100) */ *
  5                           from table(cast(getnumericlist(100) as
  6  table_of_number)) nlist)
  7    and t2.object_id = t1.object_id
  8    and t1.status = 0
  9    and t2.status = 0);

  COUNT(*)
----------
       100

Elapsed: 00:00:00.29

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=174 Card=1 Bytes=16)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes=7)
   3    2       NESTED LOOPS (Cost=174 Card=1 Bytes=16)
   4    3         HASH JOIN (RIGHT SEMI) (Cost=172 Card=1 Bytes=9)
   5    4           COLLECTION ITERATOR (PICKLER FETCH) OF 'GETNUMERICLIST'
   6    4           TABLE ACCESS (FULL) OF 'T1' (TABLE) (Cost=147 Card=47289 Bytes=331023)
   7    3         INDEX (RANGE SCAN) OF 'T2_I' (INDEX) (Cost=1 Card=1)


Statistics
----------------------------------------------------------
         12  recursive calls
          0  db block gets
        813  consistent gets
        274  physical reads
          0  redo size
        393  bytes sent via SQL*Net to client
        508  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

First, the same semi-join with cardinality 1. Just to show where the CBO arrives at this.
...
Join order[2]:  T1[T1]#0  KOKBF$[KOKBF$]#2  T2[T2]#1
Now joining: KOKBF$[KOKBF$]#2 *******
NL Join
  Outer table: cost: 147  cdn: 47289  rcz: 7  resp:  147
  Inner table: KOKBF$  Alias: KOKBF$
    Access Path: table-scan  Resc: 22
    Join:  Resc:  1043033  Resp:  1043033
  Best NL cost: 1043033  resp: 1043033
Semi-join cardinality:  1 = outer (47289) * sel (2.1147e-005) [flag=12]
...

Second, to my question number 3: the collection cardinality is 84 instead of 100. Lack of knowledge about the NDV (question number 4)!? And the ratio keeps falling with higher hinted cardinality, up to the value 255 in the computed cardinality.
...
Join order[5]:  KOKBF$[KOKBF$]#2  T1[T1]#0  T2[T2]#1
    SORT resource      Sort statistics
      Sort width:           5 Area size:      131072 Max Area size:     1257472
      Degree:               1
      Blocks to Sort:       1 Row size:           13 Total Rows:            100
      Initial runs:         1 Merge passes:        0 IO Cost / pass:          0
      Total IO sort cost: 0      Total CPU sort cost: 10888337
      Total Temp space used: 0
Now joining: T1[T1]#0 *******
NL Join
  Outer table: cost: 25  cdn: 50  rcz: 2  resp:  25
  Inner table: T1  Alias: T1
    Access Path: table-scan  Resc: 145
    Join:  Resc:  7262  Resp:  7262
  Access Path: index (unique)
    Index: T1_I
    rsc_cpu: 15773   rsc_io: 1
    ix_sel:  2.1147e-005    ix_sel_with_filters:  2.1147e-005
    NL Join:  resc: 75  resp: 75
  Access Path: index (eq-unique)
    Index: T1_I
    rsc_cpu: 15973   rsc_io: 1
    ix_sel:  0.0000e+000    ix_sel_with_filters:  0.0000e+000
    NL Join:  resc: 75  resp: 75
  Best NL cost: 75  resp: 75
Join cardinality:  84 = outer (84) * inner (47289) * sel (2.1147e-005)  [flag=0]
...

And finally, with rownum > 0 the CBO goes way off the tracks. This is what I meant by weird behavior in 10g.

SQL> select count(*) from (
  2  select t1.object_name,t1.object_type, t2.object_name
  3   from t1, t2
  4  where t1.object_id in (select /*+ cardinality(nlist 100) */ *
  5                           from table(cast(getnumericlist(100) as
  6  table_of_number)) nlist where rownum > 0)
  7    and t2.object_id = t1.object_id
  8    and t1.status = 0
  9    and t2.status = 0);

  COUNT(*)
----------
       100

Elapsed: 00:00:32.45

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=879155 Card=1 Bytes=14)
   1    0   SORT (AGGREGATE)
   2    1     FILTER
   3    2       HASH JOIN (Cost=421 Card=47289 Bytes=662046)
   4    3         TABLE ACCESS (FULL) OF 'T2' (TABLE) (Cost=147 Card=47290 Bytes=331030)
   5    3         TABLE ACCESS (FULL) OF 'T1' (TABLE) (Cost=147 Card=47289 Bytes=331023)
   6    2       FILTER
   7    6         COUNT
   8    7           FILTER
   9    8             COLLECTION ITERATOR (PICKLER FETCH) OF 'GETNUMERICLIST'


Statistics
----------------------------------------------------------
      47290  recursive calls
          0  db block gets
     237755  consistent gets
        953  physical reads
          0  redo size
        393  bytes sent via SQL*Net to client
        508  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL>

I know this thing is "big" as you've mentioned and maybe not so important to the community. So, I understand if you don't comment. I'll try to wait my turn to ask you this question once more.

Thanks again.

- Vladimir
 

Tom Kyte
June 12, 2005 - 3:46 pm UTC

I've had better luck with the (undocumented) cardinality hint via:

with T
as
( select *
from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t where rownum > 0
)
select *
from emp
where ename in ( select /*+ cardinality(10) */ * from t )
/


that technique.



Tom, it didn't work in 10g with the newly appeared "features" ...

Vladimir Sadilovskiy, June 12, 2005 - 5:36 pm UTC

Tom,

I copied the semantics of your example and ran it on both versions.

with nlist
as
( select distinct *
from table(cast(getnumericlist(10000) as
table_of_number)) where rownum > 0
)
select t1.object_name,t1.object_type, t2.object_name
from t1, t2
where t1.object_id in (select /*+ cardinality(nlist 10000) */ *
from nlist)
and t2.object_id = t1.object_id
and t1.status = 0
and t2.status = 0
/

"Goup by cardinality"???. I didn't know CBO can do this. Or is it 10g feature? Always nice to see something new. Anyway, now the cardinality hint is totally suppressed.

Join order[5]: VW_NSO_1[VW_NSO_1]#2 T1[T1]#0 T2[T2]#1
SORT resource Sort statistics
Sort width: 5 Area size: 131072 Max Area size: 1257472
Degree: 1
Blocks to Sort: 25 Row size: 25 Total Rows: 8168
Initial runs: 2 Merge passes: 1 IO Cost / pass: 16
Total IO sort cost: 41 Total CPU sort cost: 16256570
Total Temp space used: 353000
GROUP BY cardinality: 1, TABLE cardinality: 8168
Now joining: T1[T1]#0 *******
...
Join cardinality: 1 = outer (1) * inner (47289) * sel (2.1147e-005) [flag=2]

This group by cardinality = 1. Does it imply that the NDV of the collection is assumed to be 1? Then we really need to show the CBO it's not true. I know you said we can't. Do you think it's a good enhancement to ask for?

The behavior didn't change when trying this in 9i. Still a semi-join with cardinality 1.

Tom, if you happen to come across a pretty workaround or know about a fix of some sort, please post it here. I'll be looking for it as well.

Thanks.

- Vladimir

Tom Kyte
June 12, 2005 - 11:11 pm UTC

If it is not working, global temporary tables would be the next logical choice for you.


10.1.0.3:

ops$tkyte@ORA10GR1> /*
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> drop table t1;
ops$tkyte@ORA10GR1> drop table t2;
ops$tkyte@ORA10GR1> create table t1 as select * from all_objects;
ops$tkyte@ORA10GR1> create table t2 as select * from all_objects;
ops$tkyte@ORA10GR1> create index t1_idx on t1(object_id);
ops$tkyte@ORA10GR1> create index t2_idx on t2(object_id);
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> exec dbms_stats.gather_table_stats( user, 'T1', cascade=>true );
ops$tkyte@ORA10GR1> exec dbms_stats.gather_table_stats( user, 'T2', cascade=>true );
ops$tkyte@ORA10GR1> */
ops$tkyte@ORA10GR1> variable x varchar2(2000)
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> set autotrace traceonly explain
ops$tkyte@ORA10GR1> with nlist
  2  as
  3  ( select /*+ cardinality(x 10) */ to_number(column_value)
  4      from table(cast(str2tbl(:x) as str2tblType)) x where rownum > 0
  5  )
  6  select t1.object_name,t1.object_type, t2.object_name
  7   from t1, t2
  8  where t1.object_id in (select * from nlist)
  9    and t2.object_id = t1.object_id
 10    and t1.status = 0
 11    and t2.status = 0
 12  /
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=34 Card=1 Bytes=96)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T2' (TABLE) (Cost=2 Card=1 Bytes=37)
   2    1     NESTED LOOPS (Cost=34 Card=1 Bytes=96)
   3    2       NESTED LOOPS (Cost=32 Card=1 Bytes=59)
   4    3         VIEW OF 'VW_NSO_1' (VIEW) (Cost=29 Card=10 Bytes=130)
   5    4           SORT (UNIQUE)
   6    5             VIEW (Cost=29 Card=10 Bytes=130)
   7    6               COUNT
   8    7                 FILTER
   9    8                   COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'
  10    3         TABLE ACCESS (BY INDEX ROWID) OF 'T1' (TABLE) (Cost=2 Card=1 Bytes=46)
  11   10           INDEX (RANGE SCAN) OF 'T1_IDX' (INDEX) (Cost=1 Card=1)
  12    2       INDEX (RANGE SCAN) OF 'T2_IDX' (INDEX) (Cost=1 Card=1)
 
 
 
ops$tkyte@ORA10GR1> set autotrace off
ops$tkyte@ORA10GR1>


You might need to play with the placement of the cardinality hint.  It is undocumented.  It would be a nice hint to have (or an equivalent thereof) yes. 
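
For completeness, a rough sketch of the global temporary table alternative mentioned above -- the table name GTT_NUMS is made up for the example, and it reuses the GETNUMERICLIST/TABLE_OF_NUMBER setup from the test case; the idea is simply that a real (temporary) table gives the optimizer something better than the burned-in collection guess to work with:

create global temporary table gtt_nums
( object_id number primary key )
on commit delete rows;

-- tell the optimizer roughly what to expect, once, instead of relying
-- on the 8168-row collection default (dynamic sampling is another option)
exec dbms_stats.set_table_stats( user, 'GTT_NUMS', numrows => 100 );

insert into gtt_nums
select column_value
  from table( cast( getnumericlist(100) as table_of_number ) );

select t1.object_name, t1.object_type, t2.object_name
  from t1, t2
 where t1.object_id in ( select object_id from gtt_nums )
   and t2.object_id = t1.object_id
   and t1.status = 0
   and t2.status = 0;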

It is true and I got the same with 10g. Try higher values; you'll be amazed

Vladimir Sadilovskiy, June 13, 2005 - 1:58 am UTC

I was going to play with the selectivity hint, but it was removed from 10g, right?

Tom Kyte
June 13, 2005 - 10:45 am UTC

yes, it was.

Oh... my. I didn't notice that you have an incorrect cardinality from the NL join.

Vladimir Sadilovskiy, June 13, 2005 - 2:10 am UTC

10053 will probably tell us why. Ok.. I'm off for now or I'll oversleep my Oracle tuning training. :) Good night.

Cardinality hint and rownum > 0

Todd Barry, June 27, 2005 - 4:42 pm UTC

I have had the cardinality hint (with a value of 1) in all of our queries involving collections since we always have far fewer than 8K entries in the collection (usually 1-10).

Today I have discovered (imagine a panic-stricken face here) that these queries have been returning "truncated" results for an unknown time. If there are, say, 10 entries in the collection, the query is behaving as if there are only 2 entries. Furthermore, the behavior is not consistent across our various test and production instances - some work, some do not. In fact, the query works correctly on one instance of a replicated site, but not the other.

Now, what seems to make the queries work consistently is adding the 'rownum > 0'. I understood that this "materialized" the query and impacted performance, but I never thought that it was required in order for the query to return all/correct data.

Still investigating (this is on 9.2.0.6) and have not been able to come up with a small, reproducible test case yet.

Tom Kyte
June 27, 2005 - 5:21 pm UTC

the cardinality hint is a red herring here.

There must be an optimizer plan issue (bug) if this is true. The rownum > 0 prevents view merging and generates very different plans. I would encourage you to attempt to isolate one that gets the wrong answer and work with support (it'll be a high priority bug; wrong answers always are)

Materializing a result set in the temporary tablespace

A reader, June 28, 2005 - 4:58 am UTC

You say here, and in some other posts as well, that materializing a result set (e.g. by adding a rownum > 0 to an inline view) goes into the temporary tablespace. Does it always go into the temporary tablespace? Couldn't Oracle materialize the result set in memory instead, assuming that there's enough memory to accommodate it?

Thanks for any answers.

Tom Kyte
June 28, 2005 - 7:54 am UTC

it materializes it to "temp"; temp might be in memory, might be on disk.

View merging

Todd Barry, June 28, 2005 - 6:25 pm UTC

Tom, yes, I am seeing that the use of either no_merge or 'rownum > 0' does prevent the view merging and "forces" a correct answer.

select distinct
w.work_id,
w.country_id,
wl.country_id
from (select /*+ no_merge cardinality(t 1) */
column_value product_id
from table(cast(pkg_string.f_number_table(:p_id_set) as NumberTable)) t
-- where rownum > 0
) t,
psh, trk, rw, w, wl
where psh.product_id = t.product_id
and trk.side_id = psh.side_id
and rw.recording_id = trk.recording_id
and w.work_id = rw.work_id
and wl.work_id = w.work_id;

Here's the interesting thing about the cardinality hint - if I change the value to something else (even just 2), the correct results are returned (at least in this instance).

I will continue trying to come up with a test case for support.

Tom Kyte
June 28, 2005 - 8:21 pm UTC

optimizer "product issue" (aka bug)

card= is what causes changes in plans, I would expect it to be very sensitive to changes in card= estimations

the cardinality hint is a red herring in that it is not the "cause", just something you can use to get this to happen predictably.

what kind of sampling would be better in this case

Sam, September 30, 2005 - 11:33 am UTC

Tom
In one of my application the scenario is like

1. The application has two schemas. All the flat files arrive at night between 9 PM and 12 PM, so all my batch programs run from 9 PM to 12 PM. The number of records in one flat file varies from 10,000 to 100,000.
2. These flat files get loaded into schema A. Finally, the batch process (PL/SQL) reads from this schema, processes the data, and loads it into schema B. So my schema B contains the processed data.
3. I join a lot of tables from schema A to B to validate this data.
4. SQL*Loader loads the new data each time: it first truncates the table and then loads the fresh data.

In this case, what kind of statistics gathering procedure should I adopt for schema A? And what about schema B?

Because every time the schema A statistics are outdated, the queries run slowly too.



Tom Kyte
September 30, 2005 - 12:04 pm UTC

100,000 records would analyze fast, why not gather stats after the load?


for B, you can use monitoring and gather stale. Just beware of monotonically increasing values - like a timestamp, date, or sequence value - you may need to gather stats on those tables even when "not stale" (or use dbms_stats.set_column_stats to let us know the new high value for that column)
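
To make that last point concrete, a minimal sketch of the set_column_stats idea for a hypothetical ORDERS table whose ORDER_ID is fed by a sequence -- the table, column, and figures are illustrative only, and it assumes statistics already exist; it just refreshes the low/high values so the optimizer knows about the new maximum without a full re-gather:

declare
  l_distcnt number;  l_density number;  l_nullcnt number;
  l_avgclen number;
  l_srec    dbms_stats.statrec;
  l_vals    dbms_stats.numarray;
  l_lo      number;  l_hi number;
begin
  select min(order_id), max(order_id) into l_lo, l_hi from orders;

  -- read the existing column stats so we only change the endpoints
  dbms_stats.get_column_stats
  ( ownname => user, tabname => 'ORDERS', colname => 'ORDER_ID',
    distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
    srec    => l_srec,    avgclen => l_avgclen );

  l_srec.epc    := 2;          -- just two endpoints: low and high
  l_srec.bkvals := null;
  l_vals        := dbms_stats.numarray( l_lo, l_hi );
  dbms_stats.prepare_column_values( l_srec, l_vals );

  dbms_stats.set_column_stats
  ( ownname => user, tabname => 'ORDERS', colname => 'ORDER_ID',
    distcnt => l_distcnt, density => l_density, nullcnt => l_nullcnt,
    srec    => l_srec,    avgclen => l_avgclen );
end;
/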

kind of statistics gathering

sam, October 02, 2005 - 12:51 am UTC

Tom
Our batch process is designed such that after SQL*Loader loads the data into schema A, the PL/SQL batch program starts immediately without a gap, and the business logic is a series of programs run one after another. So I don't have time to analyze the schema A tables immediately after loading.
1. Do you want me to put an ANALYZE TABLE statement immediately after SQL*Loader loads the data, and then start the PL/SQL batch?
2. Do you want me to use dbms_stats to gather statistics with the auto sample option (enabling optimizer dynamic sampling, hints in the batch programs, etc.)?
Which option is better, or are there any other options you would suggest?


Tom Kyte
October 02, 2005 - 10:41 am UTC

No, you mean "i didn't take the bit of time it would take", not "I don't have time". You have a flow - you made it, you can change it.

1) dbms_stats, not analyze. You know what tables need fresh statistics, heck, you don't even have to gather if you don't want to - if you KNOW how much you just loaded, you can use dbms_stats.set_table_stats/index_stats to tell us; the optimizer cares not where the stats come from, only that valid statistics exist.

2) I don't want you to do anything - except test, understand how it works, and see what does best for you.
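
To illustrate both options in the flow described above -- a sketch only, using a hypothetical staging table STG_FEED in schema A; neither call is expensive for ~100,000 rows:

-- option 1: gather real statistics right after the load, before the
-- PL/SQL batch starts (sampling keeps it fast)
begin
   dbms_stats.gather_table_stats
   ( ownname          => 'A',
     tabname          => 'STG_FEED',
     estimate_percent => dbms_stats.auto_sample_size,
     cascade          => true );
end;
/

-- option 2: if you already know how much you just loaded, simply tell
-- the optimizer -- it only needs valid statistics, not gathered ones
begin
   dbms_stats.set_table_stats
   ( ownname => 'A',
     tabname => 'STG_FEED',
     numrows => 100000,
     numblks => 2000 );   -- illustrative figures from the load log
end;
/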

Table Statistics collection in RAC

Krishan Jaglan, December 09, 2005 - 1:38 pm UTC

Hi tom,

I have 2 node 9.2.0.6 RAC on HP-UX 11.23.
case 1:
If both nodes are up and running and we run ANALYZE:
1) V$SESSION_WAIT shows "global cache cr request"
2) 'top' shows that the session is using very little CPU (14-15%)
3) it takes around 10-11 hours to finish for 700 tables of 100 GB total size.

Case 2:
If only one instance is running in the cluster:
1) V$SESSION_WAIT shows a wait on "db file scattered read"
2) 'top' shows heavy CPU utilization (80-90%)
3) it takes only 4-5 hours to finish the same work.

I know that fundamentally in Oracle RAC, if a block is not found on the current instance, it will request it from the other instances before it goes for a disk read, and the wait event shown will be "global cache cr request". In the case of ANALYZE we know the requested block will not be found in another instance, and in the end it will do the disk read anyway.

Now basically it is doing a request to another instance for the required block plus a disk read (because the blocks are not there in the other instance either), which means it is doing double work, and going directly for the disk read would make it faster.

Now my question is: is there any way I can force it to go directly for the disk read rather than going for the global CR request and then the disk read? Or is there some other way to boost the performance? We know that if only one node is running, performance is good. How can we achieve the same if all nodes are running?

FYI: all tables change frequently, so all tables need to be analyzed weekly.

A quick response will be highly appreciated.

Thanks in advance.



Tom Kyte
December 09, 2005 - 2:00 pm UTC

you'd have to convince the other node to CHECKPOINT the blocks it has so you could read them from disk - but that would be called OPS (Oracle Parallel Server).

Problem is - you have dirty blocks over there that need to come over the interconnect (and getting them over the interconnect will be faster than physical IO)


Tables do not need to be analyzed frequently in all cases, even if they change. Also, there is no way you should be using ANALYZE. Perhaps you really mean to estimate with dbms_stats?

Table Statistics collection in RAC

Krishan jaglan, December 09, 2005 - 2:56 pm UTC

Tom,

Thanks for the quick response, but I am testing this activity in an environment where no user is connected and no dirty blocks are available in the other instances, yet it still takes double the time (4-5 hours in the case of one instance, 10-11 hours when both instances are running). There is not a single user connected except the session doing the ANALYZE.

thanks

Tom Kyte
December 10, 2005 - 4:43 am UTC

but in real life there will be. In real life this will take "even more"

recommendation stands

o estimate
o use dbms_stats (can use parallel, analyze cannot)
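
A rough sketch of what that recommendation might look like -- the schema name, sample size and degree are illustrative only:

begin
   dbms_stats.gather_schema_stats
   ( ownname          => 'APP_OWNER',
     estimate_percent => 10,    -- estimate rather than compute
     degree           => 4,     -- gather in parallel; ANALYZE cannot
     cascade          => true );
end;
/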

CBO - Init.ora Parameters.

VIKAS, February 03, 2006 - 4:51 am UTC

Dear Mr. Kyte

Can you please throw some light on the various pfile/spfile parameters that can be changed in order to influence the way the Cost Based Optimizer works to select the best way to execute and fetch results for user queries.

What is the difference between DBMS_STATS.GATHER_*_STATS, DBMS_STATS.GET_*_STATS and DBMS_STATS.SET_*_STATS? When should we use each one, and what for?


Also, I would like to know which parameters can be altered globally and which can be set locally with regard to the CBO.

And what effect does changing init.ora parameters such as

optimizer_index_caching, optimizer_index_cost_adj, etc.

have on the CBO? Do you recommend changing the default values of such init.ora parameters? If yes, what, according to you, might be the ideal values for all such parameters?

Finally, how does changing these parameters affect the way the optimizer works (please, if you could, supply a few examples)?

Can it be said that even the optimizer can be / needs to be optimized?

Thanks in advance for sharing your valuable information on the subject.

Take care, regards...
Vikas.

composite index failing in CARDINALITY hint in varying elements in IN list.

Arul Prasad, March 08, 2006 - 10:34 am UTC

Hi Tom
    The Information u gave abt CARDINALITY hint is very usefull.
but in my case i have a composite index .

    In this case even though i have used the CARDINALITY hint.My index getting falied ur help is needed

    i have a table called test_idx;
    
SQL> desc test_idx;
 Name   Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID             NUMBER(10) 
 CLASS          NUMBER(10)
 PRICE          NUMBER(10)

SQL> select class , price , count(price) from test_idx group by class , price;
 
     CLASS      PRICE COUNT(PRICE)
---------- ---------- ------------
        10         11           39
        10         12           39
        10         23          499
        20          7           85
        20          9         2456
        20         11          160
        20         12          160
        20         13          199
        20         14          199
        20         15          199
        20         16          199
 
     CLASS      PRICE COUNT(PRICE)
---------- ---------- ------------
        20         17           98
        20         23         2001
        20         71          199
        20         72          199
        20        600          999
        20        349            1
        20        414          247
        20        415          199
        20        502          199
        20        505         1700
        20        507         1999
 
     CLASS      PRICE COUNT(PRICE)
---------- ---------- ------------
        20        534         3999
        20        601         3999
        30          7           49
        30          9         1999
        30         10          207
        30         23          499
        30        414           49
        30        501          199
        30        505          299

SQL> alter table test_idx add constraint id_pk_idx primary key(id);
 
Table altered.

SQL> create index idx_cmp on test_idx(class,price);
 
Index created.

SQL> CREATE OR REPLACE TYPE TY_TBL_NUM AS TABLE OF NUMBER;
  2  /
 
Type created.

SQL> ;
  1  CREATE OR REPLACE FUNCTION FN_GET_NUM_LIST(PARAM_STR VARCHAR2) RETURN TY_TBL_NUM
  2  AS
  3    V_STRINGS long default PARAM_STR || ',';
  4    V_INDEX   number;
  5    V_DATA    TY_TBL_NUM := TY_TBL_NUM();
  6    BEGIN
  7    LOOP
  8     V_INDEX := INSTR(V_STRINGS, ',' );
  9     EXIT WHEN (NVL(V_INDEX,0) = 0);
 10     V_DATA.extend;
 11     V_DATA( V_DATA.count ) := ltrim(rtrim(substr(V_STRINGS,1,V_INDEX-1)));
 12     V_STRINGS := SUBSTR( V_STRINGS, V_INDEX+1 );
 13    END LOOP;
 14    RETURN V_DATA;
 15*   END FN_GET_NUM_LIST;

SQL> analyze table test_idx compute statistics for table for all indexes for all indexed columns
  2  /
 
Table analyzed.

SQL> set autotrace traceonly
SQL> set timing on
SQL> rem first case index failing
SQL> select * from test_idx  where price in ( SELECT /*+ cardinality(t 17) */ * FROM  TABLE (CAST(  FN_GET_NUM_LIST('7,2,17,333,349,501,1,320,414,406,1889,3018,1364,1140,10,3052,71') AS TY_TBL_NUM )) t  where rownum >= 0  ) and class = 20
  2  /
 
630 rows selected.
 
Elapsed: 00:00:20.16
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=31 Card=848 Bytes=18
          656)
 
   1    0   HASH JOIN (SEMI) (Cost=31 Card=848 Bytes=18656)
   2    1     TABLE ACCESS (FULL) OF 'TEST_IDX' (Cost=10 Card=19496 By
          tes=175464)
 
   3    1     VIEW OF 'VW_NSO_1' (Cost=17 Card=17 Bytes=221)
   4    3       COUNT
   5    4         FILTER
   6    5           COLLECTION ITERATOR (PICKLER FETCH) OF 'FN_GET_NUM
          _LIST'
 
 
 
 
 
Statistics
----------------------------------------------------------
         69  recursive calls
          0  db block gets
        113  consistent gets
          0  physical reads
          0  redo size
       8337  bytes sent via SQL*Net to client
        564  bytes received via SQL*Net from client
         43  SQL*Net roundtrips to/from client
          6  sorts (memory)
          0  sorts (disk)
        630  rows processed

SQL> rem second case index enabled

SQL>  select * from test_idx  where price in (7,2,17,333,349,501,1,320,414,406,1889,3018,1364,1140,10,3052,71) and class = 20
  2  /
 
630 rows selected.
 
Elapsed: 00:00:19.79
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=8 Card=950 Bytes=855
          0)
 
   1    0   INLIST ITERATOR
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'TEST_IDX' (Cost=8 Card
          =950 Bytes=8550)
 
   3    2       INDEX (RANGE SCAN) OF 'IDX_CMP' (NON-UNIQUE) (Cost=4 C
          ard=950)
 
 
 
 
 
Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        126  consistent gets
          0  physical reads
          0  redo size
       8339  bytes sent via SQL*Net to client
        564  bytes received via SQL*Net from client
         43  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        630  rows processed

SQL> set autotrace off;
SQL> select count(*) from test_idx ;
 
  COUNT(*)
----------
     23374
 
Elapsed: 00:00:00.42

how to enable my index idx_cmp .. ?
 

Tom Kyte
March 09, 2006 - 12:27 pm UTC

srry, I cld nt rd ths. yr kybrd is brk

why after reading the page you read to post this, people still persist in using this stuff that just "are not words in any language", I'll never ever understand.

trace it, I don't see why it would have taken so long from the above - did you see - no physical IO, less logical IO, let's see a tkprof (and please - begging you - use a vowel or two?)

composite index failing in CARDINALITY hint in varying elements in IN list

K.Arul Prasad, March 23, 2006 - 9:39 am UTC

hi Tom

Sorry for the previous message; I typed it fast and missed the vowels.

I have generated the TKPROF for the queries.

1) The query which uses the cardinality hint, but where the full table scan was happening.

select * from test_idx where price in
( SELECT /*+ cardinality(t 17) */ * FROM
TABLE (
CAST(
FN_GET_NUM_LIST(
'7,2,17,333,349,501,1,320,414,406,1889,3018,
1364,1140,10,3052,71'
)
AS TY_TBL_NUM
)
) t where rownum >= 0
)
AND class = 20


call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.06       0.06          0         33          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.06       0.05          0         61          0          6
------- ------  --------  ---------  ---------  ---------  ---------  ---------
total        4      0.12       0.11          0         94          0          6

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
      6   HASH JOIN (SEMI)
  18872    TABLE ACCESS   GOAL: ANALYZED (FULL) OF 'TEST_IDX'
     17    VIEW OF 'VW_NSO_1'
     17     COUNT
     17      FILTER
     17       COLLECTION ITERATOR (PICKLER FETCH) OF 'FN_GET_NUM_LIST'


2) The query without the function and cardinality hint. The range scan was being used.

select * from test_idx where price in
(7,2,17,333,349,501,1,320,414,406,1889,3018,1364,
1140,10,3052,71) and class = 20

call     count       cpu    elapsed       disk      query    current       rows
------- ------  --------  ---------  ---------  ---------  ---------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        2      0.10       0.09          0         62          0          6
------- ------  --------  ---------  ---------  ---------  ---------  ---------
total        4      0.10       0.10          0         62          0          6

Rows     Execution Plan
-------  ---------------------------------------------------
   0     SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=428 Bytes=3852)
   1  0    INLIST ITERATOR
   2  1      TABLE ACCESS (BY INDEX ROWID) OF 'TEST_IDX' (Cost=5 Card=428 Bytes=3852)
   3  2        INDEX (RANGE SCAN) OF 'IDX_CMP' (NON-UNIQUE) (Cost=3 Card=428)


So please guide me; I need to use bind variables for the varying-element IN list, and my query performance also needs to be tuned.

Tom Kyte
March 23, 2006 - 11:12 am UTC

autotrace it, very likely the hint was not being picked up but only the explain plan would show that.

try this syntax:

with T
as
( select *
from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t where rownum > 0
)
select *
from emp
where ename in ( select /*+ cardinality(10) */ * from t )
/


composite index failing in CARDINALITY hint in varying elements in IN list

Arul Prasad, March 28, 2006 - 7:20 am UTC

hi Tom

I just tried your new syntax using the WITH clause, but even then the index was not being used. I was going through your book Expert One-on-One Oracle.

In the index chapter you shared the effects of the clustering_factor on an index.

So I have a doubt - the following is the clustering factor of my index; could this be the reason for going to the full table scan?

INDEX_NAME      NUM_ROWS   BLOCKS   CLUSTERING_FACTOR
-------------  ---------  -------  ------------------
IDX_CMP            22750       58                1573





Tom Kyte
March 28, 2006 - 8:02 am UTC

not following you on this one at all.

A reader, March 28, 2006 - 4:53 pm UTC

TO "Arul Prasad from India" one tiny advice dont use "u" on Tom's Site.

What is the alternative to Cardinality in Oracle 8.1.7?

Sinan Topuz, June 09, 2006 - 10:27 pm UTC

Tom,

PRODUCT VERSION
---------------------------------------------------------------- ---------
NLSRTL 3.4.1.0.0
Oracle8i 8.1.7.4.1
PL/SQL 8.1.7.4.0
TNS for 32-bit Windows: 8.1.7.4.0

I tried using the str2tbl method to remediate an application (I am currently using the where instr(' || col_val || ', col_2) > 0 approach). I think I need something that I can use in Oracle 8.1.7 instead of the cardinality hint (if such a thing exists) for the problem below. I have a view, and when I do:

select col_1 from the_view
where col_2 in ('134450, 127854, 127851, 127658') it works very fast. 1.562 seconds.

but when I try the following, it takes 32 seconds.

select col_1 from the_view
where col_2 in (
select *from THE (select /*+ FIRST_ROWS */ cast(dw_list('134450, 127854, 127851, 127658') as typ_workorderlist ) from dual )
)

typ_workorderlist and col_2 have the same data type (varchar(30)).

When I looked at the execution plan for the second query, I saw the magic number 8,168 under the Cardinality column.

I ran
"analyze table r5personnel compute statistics for table for all indexes for all indexed columns"
for all tables included in the view, but that did not help.

Could you give me any suggestions?

Thanks,

Tom Kyte
June 10, 2006 - 10:10 am UTC

move the first rows out a level.

first rows the query, not the subquery.

Better, but it still takes 23 seconds.

Sinan Topuz, June 10, 2006 - 11:15 am UTC

Tom,

Thanks for the quick response. I tried this:

select col_1 from the_view
where col_2 in (
select /*+ FIRST_ROWS */ *from THE (select cast(dw_list('134450, 127854,
127851, 127658') as typ_workorderlist ) from dual )
)

, and this one:

select /*+ FIRST_ROWS */ col_1 from the_view
where col_2 in (
select *from THE (select cast(dw_list('134450, 127854,
127851, 127658') as typ_workorderlist ) from dual )
)

and it helped in a sense. Now it takes 23 seconds to execute. Would you suggest any more improvements to reduce that magic number in the execution plan?

Thanks,

Tom Kyte
June 11, 2006 - 11:48 am UTC

not in 8i.

But - regardless of the cardinalities (say you got the card= down to 5 for the subquery) - what effect would that have on response time - if you in fact have the plan you desire?

One more thing you might do is:


select /*+ first_rows */ ...
from table
where column in ( select column_value
from (select column_value, rownum
from table( cast( blah as type ) )
where rownum > 0 )
)


if the pickler code is being invoked more than once, this gets it done just one time (the rownum predicate will tend to have the plan materialize the subquery).

Might not make any difference for you in this case.

Down to 10 secs. Could you look at the execution plan?

Sinan Topuz, June 11, 2006 - 1:21 pm UTC

Tom,

Thank you again for your response.

The query execution time is now even better.

If this is the end of it, I guess I will have to stick with the instr solution, but before that, if it is not going to be too much text, I would like to show you the execution plans of both queries. Does it look like I need to consider optimizing my view first?

Thank you.

SQL> select count(1) from dw_workordercard;

  COUNT(1)
----------
    100652

SQL> select /*+ first_rows */ evt_code
  2    from dw_workordercard
  3   where evt_code in ( '134450')
  4  /

EVT_CODE                                                                        
------------------------------                                                  
134450                                                                          

Elapsed: 00:00:00.31

Execution Plan
----------------------------------------------------------                      
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=18 Card=24          
           Bytes=408)                                                           
                                                                                
   1    0   VIEW (Cost=18 Card=24 Bytes=408)                                    
   2    1     UNION-ALL                                                         
   3    2       NESTED LOOPS (Cost=10 Card=12 Bytes=1668)                       
   4    3         NESTED LOOPS (Cost=10 Card=17 Bytes=2312)                     
   5    4           MERGE JOIN (OUTER) (Cost=8 Card=2 Bytes=230)                
   6    5             MERGE JOIN (OUTER) (Cost=7 Card=1 Bytes=107)              
   7    6               NESTED LOOPS (Cost=6 Card=1 Bytes=90)                   
   8    7                 NESTED LOOPS (OUTER) (Cost=3 Card=1 Bytes=47          
          )                                                                     
                                                                                
   9    8                   NESTED LOOPS (Cost=2 Card=1 Bytes=30)               
  10    9                     TABLE ACCESS (BY INDEX ROWID) OF 'R5ACTI          
          VITIES' (Cost=2 Card=1 Bytes=18)                                      
                                                                                
  11   10                       INDEX (UNIQUE SCAN) OF 'R5PRIK_ACT' (U          
          NIQUE) (Cost=1 Card=1)                                                
                                                                                
  12    9                     INDEX (UNIQUE SCAN) OF 'R5PRIK_TSK' (UNI          
          QUE)                                                                  
                                                                                
  13    8                   TABLE ACCESS (BY INDEX ROWID) OF 'R5ADDETA          
          ILS' (Cost=1 Card=492 Bytes=8364)                                     
                                                                                
  14   13                     INDEX (UNIQUE SCAN) OF 'R5PRIK_ADD' (UNI          
          QUE)                                                                  
                                                                                
  15    7                 VIEW OF 'DW_WORKORDERSSUB'                            
  16   15                   NESTED LOOPS (Cost=3 Card=1 Bytes=56)               
  17   16                     TABLE ACCESS (BY INDEX ROWID) OF 'R5EVEN          
          TS' (Cost=2 Card=1 Bytes=44)                                          
                                                                                
  18   17                       INDEX (UNIQUE SCAN) OF 'R5PRIK_EVT' (U          
          NIQUE) (Cost=1 Card=1)                                                
                                                                                
  19   16                     TABLE ACCESS (BY INDEX ROWID) OF 'R5PERS          
          ONNEL' (Cost=1 Card=76 Bytes=912)                                     
                                                                                
  20   19                       INDEX (UNIQUE SCAN) OF 'R5PRIK_PER' (U          
          NIQUE)                                                                
                                                                                
  21    6               TABLE ACCESS (BY INDEX ROWID) OF 'R5ADDETAILS'          
           (Cost=1 Card=1 Bytes=17)                                             
                                                                                
  22   21                 INDEX (UNIQUE SCAN) OF 'R5PRIK_ADD' (UNIQUE)          
  23    5             SORT (JOIN) (Cost=7 Card=2 Bytes=16)                      
  24   23               INDEX (RANGE SCAN) OF 'R5IDX1_BOO' (NON-UNIQUE          
          ) (Cost=1 Card=2 Bytes=16)                                            
                                                                                
  25    4           TABLE ACCESS (BY INDEX ROWID) OF 'R5OBJECTS' (Cost          
          =1 Card=11556 Bytes=242676)                                           
                                                                                
  26   25             INDEX (UNIQUE SCAN) OF 'R5PRIK_OBJ' (UNIQUE)              
  27    3         INDEX (UNIQUE SCAN) OF 'R5PRIK_MRC' (UNIQUE)                  
  28    2       NESTED LOOPS (Cost=8 Card=12 Bytes=1260)                        
  29   28         NESTED LOOPS (Cost=8 Card=17 Bytes=1734)                      
  30   29           MERGE JOIN (OUTER) (Cost=6 Card=2 Bytes=162)                
  31   30             MERGE JOIN (OUTER) (Cost=5 Card=1 Bytes=73)               
  32   31               NESTED LOOPS (Cost=5 Card=1 Bytes=68)                   
  33   32                 NESTED LOOPS (Cost=2 Card=1 Bytes=25)                 
  34   33                   TABLE ACCESS (BY INDEX ROWID) OF 'R5ADDETA          
          ILS' (Cost=2 Card=1 Bytes=17)                                         
                                                                                
  35   34                     INDEX (UNIQUE SCAN) OF 'R5PRIK_ADD' (UNI          
          QUE) (Cost=1 Card=1)                                                  
                                                                                
  36   33                   INDEX (UNIQUE SCAN) OF 'R5PRIK_ACT' (UNIQU          
          E)                                                                    
                                                                                
  37   32                 VIEW OF 'DW_WORKORDERSSUB'                            
  38   37                   NESTED LOOPS (Cost=3 Card=1 Bytes=56)               
  39   38                     TABLE ACCESS (BY INDEX ROWID) OF 'R5EVEN          
          TS' (Cost=2 Card=1 Bytes=44)                                          
                                                                                
  40   39                       INDEX (UNIQUE SCAN) OF 'R5PRIK_EVT' (U          
          NIQUE) (Cost=1 Card=1)                                                
                                                                                
  41   38                     TABLE ACCESS (BY INDEX ROWID) OF 'R5PERS          
          ONNEL' (Cost=1 Card=76 Bytes=912)                                     
                                                                                
  42   41                       INDEX (UNIQUE SCAN) OF 'R5PRIK_PER' (U          
          NIQUE)                                                                
                                                                                
  43   31               INDEX (UNIQUE SCAN) OF 'HELPDESK_PR_KEY' (UNIQ          
          UE)                                                                   
                                                                                
  44   30             SORT (JOIN) (Cost=6 Card=2 Bytes=16)                      
  45   44               INDEX (RANGE SCAN) OF 'R5IDX1_BOO' (NON-UNIQUE          
          ) (Cost=1 Card=2 Bytes=16)                                            
                                                                                
  46   29           TABLE ACCESS (BY INDEX ROWID) OF 'R5OBJECTS' (Cost          
          =1 Card=11556 Bytes=242676)                                           
                                                                                
  47   46             INDEX (UNIQUE SCAN) OF 'R5PRIK_OBJ' (UNIQUE)              
  48   28         INDEX (UNIQUE SCAN) OF 'R5PRIK_MRC' (UNIQUE)                  




Statistics
----------------------------------------------------------                      
          0  recursive calls                                                    
          1  rows processed                                                     

On the other hand, the second query gives the following:

SQL> select /*+ FIRST_ROWS */ evt_code
  2    from dw_workordercard
  3   where evt_code in ( select column_value
  4                       from (select column_value, rownum
  5                               from table( cast( dw_list('134450') as typ_workorderlist ) )
  6                              where rownum > 0 )
  7                    )
  8  /

EVT_CODE                                                                        
------------------------------                                                  
134450                                                                          

Elapsed: 00:00:10.50

Execution Plan
----------------------------------------------------------                      
   0      SELECT STATEMENT Optimizer=HINT: FIRST_ROWS (Cost=44419 Card          
          =419345 Bytes=14257730)                                               
                                                                                
   1    0   HASH JOIN (Cost=44419 Card=419345 Bytes=14257730)                   
   2    1     VIEW (Cost=44383 Card=5134 Bytes=87278)                           
   3    2       UNION-ALL                                                       
   4    3         NESTED LOOPS (OUTER) (Cost=23027 Card=1848 Bytes=256          
          872)                                                                  
                                                                                
   5    4           NESTED LOOPS (Cost=21179 Card=1848 Bytes=225456)            
   6    5             NESTED LOOPS (Cost=21179 Card=2610 Bytes=310590)          
   7    6               NESTED LOOPS (OUTER) (Cost=20863 Card=316 Byte          
          s=30968)                                                              
                                                                                
   8    7                 NESTED LOOPS (OUTER) (Cost=20549 Card=314 By          
          tes=28260)                                                            
                                                                                
   9    8                   NESTED LOOPS (Cost=20235 Card=314 Bytes=22          
          922)                                                                  
                                                                                
  10    9                     NESTED LOOPS (Cost=20235 Card=765 Bytes=          
          46665)                                                                
                                                                                
  11   10                       VIEW OF 'DW_WORKORDERSSUB' (Cost=19470          
           Card=765 Bytes=32895)                                                
                                                                                
  12   11                         NESTED LOOPS (Cost=19470 Card=765 By          
          tes=42840)                                                            
                                                                                
  13   12                           TABLE ACCESS (BY INDEX ROWID) OF '          
          R5EVENTS' (Cost=18463 Card=1007 Bytes=44308)                          
                                                                                
  14   13                             INDEX (RANGE SCAN) OF 'R5IDXF_EV          
          T' (NON-UNIQUE) (Cost=453 Card=1007)                                  
                                                                                
  15   12                           TABLE ACCESS (BY INDEX ROWID) OF '          
          R5PERSONNEL' (Cost=1 Card=76 Bytes=912)                               
                                                                                
  16   15                             INDEX (UNIQUE SCAN) OF 'R5PRIK_P          
          ER' (UNIQUE)                                                          
                                                                                
  17   10                       TABLE ACCESS (BY INDEX ROWID) OF 'R5AC          
          TIVITIES' (Cost=1 Card=100384 Bytes=1806912)                          
                                                                                
  18   17                         INDEX (UNIQUE SCAN) OF 'R5PRIK_ACT'           
          (UNIQUE)                                                              
                                                                                
  19    9                     INDEX (UNIQUE SCAN) OF 'R5PRIK_TSK' (UNI          
          QUE)                                                                  
                                                                                
  20    8                   TABLE ACCESS (BY INDEX ROWID) OF 'R5ADDETA          
          ILS' (Cost=1 Card=3095 Bytes=52615)                                   
                                                                                
  21   20                     INDEX (UNIQUE SCAN) OF 'R5PRIK_ADD' (UNI          
          QUE)                                                                  
                                                                                
  22    7                 INDEX (RANGE SCAN) OF 'R5IDX1_BOO' (NON-UNIQ          
          UE) (Cost=1 Card=95559 Bytes=764472)                                  
                                                                                
  23    6               TABLE ACCESS (BY INDEX ROWID) OF 'R5OBJECTS' (          
          Cost=1 Card=11556 Bytes=242676)                                       
                                                                                
  24   23                 INDEX (UNIQUE SCAN) OF 'R5PRIK_OBJ' (UNIQUE)          
  25    5             INDEX (UNIQUE SCAN) OF 'R5PRIK_MRC' (UNIQUE)              
  26    4           TABLE ACCESS (BY INDEX ROWID) OF 'R5ADDETAILS' (Co          
          st=1 Card=492 Bytes=8364)                                             
                                                                                
  27   26             INDEX (UNIQUE SCAN) OF 'R5PRIK_ADD' (UNIQUE)              
  28    3         NESTED LOOPS (Cost=21356 Card=3286 Bytes=345030)              
  29   28           NESTED LOOPS (Cost=21356 Card=3286 Bytes=318742)            
  30   29             NESTED LOOPS (Cost=21356 Card=4642 Bytes=436348)          
  31   30               NESTED LOOPS (OUTER) (Cost=20794 Card=562 Byte          
          s=41026)                                                              
                                                                                
  32   31                 NESTED LOOPS (Cost=20235 Card=559 Bytes=3633          
          5)                                                                    
                                                                                
  33   32                   NESTED LOOPS (OUTER) (Cost=19470 Card=765           
          Bytes=36720)                                                          
                                                                                
  34   33                     VIEW OF 'DW_WORKORDERSSUB' (Cost=19470 C          
          ard=765 Bytes=32895)                                                  
                                                                                
  35   34                       NESTED LOOPS (Cost=19470 Card=765 Byte          
          s=42840)                                                              
                                                                                
  36   35                         TABLE ACCESS (BY INDEX ROWID) OF 'R5          
          EVENTS' (Cost=18463 Card=1007 Bytes=44308)                            
                                                                                
  37   36                           INDEX (RANGE SCAN) OF 'R5IDXF_EVT'          
           (NON-UNIQUE) (Cost=453 Card=1007)                                    
                                                                                
  38   35                         TABLE ACCESS (BY INDEX ROWID) OF 'R5          
          PERSONNEL' (Cost=1 Card=76 Bytes=912)                                 
                                                                                
  39   38                           INDEX (UNIQUE SCAN) OF 'R5PRIK_PER          
          ' (UNIQUE)                                                            
                                                                                
  40   33                     INDEX (UNIQUE SCAN) OF 'HELPDESK_PR_KEY'          
           (UNIQUE)                                                             
                                                                                
  41   32                   TABLE ACCESS (BY INDEX ROWID) OF 'R5ADDETA          
          ILS' (Cost=1 Card=11344 Bytes=192848)                                 
                                                                                
  42   41                     INDEX (UNIQUE SCAN) OF 'R5PRIK_ADD' (UNI          
          QUE)                                                                  
                                                                                
  43   31                 INDEX (RANGE SCAN) OF 'R5IDX1_BOO' (NON-UNIQ          
          UE) (Cost=1 Card=95559 Bytes=764472)                                  
                                                                                
  44   30               TABLE ACCESS (BY INDEX ROWID) OF 'R5OBJECTS' (          
          Cost=1 Card=11556 Bytes=242676)                                       
                                                                                
  45   44                 INDEX (UNIQUE SCAN) OF 'R5PRIK_OBJ' (UNIQUE)          
  46   29             INDEX (UNIQUE SCAN) OF 'R5PRIK_MRC' (UNIQUE)              
  47   28           INDEX (UNIQUE SCAN) OF 'R5PRIK_ACT' (UNIQUE)                
  48    1     VIEW OF 'VW_NSO_1' (Cost=34 Card=8168 Bytes=138856)               
  49   48       SORT (UNIQUE) (Cost=34 Card=8168 Bytes=138856)                  
  50   49         VIEW (Cost=3 Card=8168 Bytes=138856)                          
  51   50           COUNT                                                       
  52   51             FILTER                                                    
  53   52               COLLECTION ITERATOR (PICKLER FETCH)                     




Statistics
----------------------------------------------------------                      
          0  recursive calls                                                    
          1  rows processed                                                     

SQL> spool off 

Tom Kyte
June 11, 2006 - 3:27 pm UTC

please compare like to like....

create a small table, pop the value into it. gather stats.

run the two queries - so we are comparing apples to apples and not apples to toaster ovens.

You are in fact comparing "equality" versus "set" here.  If you are only using one thing - it would make sense to use equals, not IN - and the optimizer did that for you:

ops$tkyte@ORA10GR2> /*
ops$tkyte@ORA10GR2> drop table t1;
ops$tkyte@ORA10GR2> drop table t2;
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table t1
ops$tkyte@ORA10GR2> as
ops$tkyte@ORA10GR2> select * from all_objects;
ops$tkyte@ORA10GR2> alter table t1 add constraint t1pk primary key(object_id);
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T1' );
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table t2 ( x number );
ops$tkyte@ORA10GR2> insert into t2 values ( 10 );
ops$tkyte@ORA10GR2> insert into t2 values ( 20 );
ops$tkyte@ORA10GR2> insert into t2 values ( 30 );
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T2' );
ops$tkyte@ORA10GR2> */
ops$tkyte@ORA10GR2> column plan_table_output format a80 truncate
ops$tkyte@ORA10GR2> set autotrace traceonly explain
ops$tkyte@ORA10GR2> select * from t1 where object_id in (10);

Execution Plan
----------------------------------------------------------
Plan hash value: 575657410

-------------------------------------------------------------------------------
| Id  | Operation                   | Name | Rows  | Bytes | Cost (%CPU)| Time
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |     1 |    93 |     2   (0)| 00:00
|   1 |  TABLE ACCESS BY INDEX ROWID| T1   |     1 |    93 |     2   (0)| 00:00
|*  2 |   INDEX UNIQUE SCAN         | T1PK |     1 |       |     1   (0)| 00:00
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_ID"=10)

ops$tkyte@ORA10GR2> select * from t1 where object_id in (10,20);

Execution Plan
----------------------------------------------------------
Plan hash value: 2856779526

-------------------------------------------------------------------------------
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |      |     2 |   186 |     4   (0)| 00:0
|   1 |  INLIST ITERATOR             |      |       |       |            |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1   |     2 |   186 |     4   (0)| 00:0
|*  3 |    INDEX RANGE SCAN          | T1PK |     2 |       |     3   (0)| 00:0
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("OBJECT_ID"=10 OR "OBJECT_ID"=20)

ops$tkyte@ORA10GR2> select * from t1 where object_id in (select * from t2);

Execution Plan
----------------------------------------------------------
Plan hash value: 1920265602

-------------------------------------------------------------------------------
| Id  | Operation                    | Name | Rows  | Bytes | Cost (%CPU)| Time
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |      |     3 |   288 |     6  (17)| 00:0
|   1 |  NESTED LOOPS                |      |     3 |   288 |     6  (17)| 00:0
|   2 |   SORT UNIQUE                |      |     3 |     9 |     3   (0)| 00:0
|   3 |    TABLE ACCESS FULL         | T2   |     3 |     9 |     3   (0)| 00:0
|   4 |   TABLE ACCESS BY INDEX ROWID| T1   |     1 |    93 |     1   (0)| 00:0
|*  5 |    INDEX UNIQUE SCAN         | T1PK |     1 |       |     0   (0)| 00:0
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - access("OBJECT_ID"="T2"."X")

ops$tkyte@ORA10GR2> set autotrace off
 

A little suggestion for the web site.

Sinan Topuz, June 11, 2006 - 7:59 pm UTC

Tom,

Thank you very much for the response.

I have not had a chance to try it out yet. When I do it, I will post what I find.

Off topic, I would like to suggest something for asktom.com if I may.

I think it would be nice if we could see the latest followup (from you) or the review on the Question Review page, above the user input form.

Just an idea...



Tom's latest followup

pasko, June 12, 2006 - 9:09 am UTC

Hi Sinan,

regarding your question about seeing Tom's latest follow-up:
just look at the top of this page, and you will see:

'You Asked (Jump to Tom's latest followup)'


I meant the Question Review "entry" page actually.

Sinan Topuz, June 12, 2006 - 9:47 am UTC


PLS-00801: internal error [22914]

Sinan Topuz, November 15, 2006 - 12:43 pm UTC

Tom,

Now this query works fine. I needed to tune up the view.

select evt_code
      from dw_workordercard
     where evt_code in ( select column_value
                         from (select column_value, rownum
                                 from table( cast( dw_list('134450') as typ_workorderlist ) )
                                where rownum > 0 )
                      )

I tried to use this statement in a PL/SQL procedure, and when I compiled the code I got a PLS-00801: internal error [22914]. Is this a bug, or could you suggest a workaround or point out my mistake, please?

SQL> create type Typ_WorkOrder as table of varchar2 (30);
  2  /

Type created.

SQL> 
SQL> create or replace procedure list_wo
  2  ( pwoid in varchar2
  3   )is
  4  
  5    l_typ    typ_workorder;
  6    l_found  number; 
  7  
  8  begin
  9  
 10    SELECT COUNT(1)
 11         INTO l_found
 12         FROM r5events
 13        WHERE evt_code in ( select column_value
 14                              from (select column_value, rownum
 15                                      from table( cast( dw_list( pwoid ) as l_typ ) )
 16                                     where rownum > 0 )
 17                          );
 18  
 19  end;
 20  /

Warning: Procedure created with compilation errors.

SQL> show err
Errors for PROCEDURE LIST_WO:

LINE/COL ERROR
-------- -----------------------------------------------------------------
10/3     PLS-00801: internal error [22914]
10/3     PL/SQL: SQL Statement ignored
SQL> 

Thanks, 

Tom Kyte
November 16, 2006 - 8:07 am UTC

cast(dw_list(pwoid) as l_typ)

l_typ is a variable? that is wrong there.

Sinan Topuz, November 16, 2006 - 11:18 am UTC

Tom,

I replaced the expression
cast( dw_list( pwoid ) as l_typ )
with
cast( dw_list( pwoid ) as Typ_WorkOrder )
where Typ_WorkOrder was created as

create type Typ_WorkOrder as table of varchar2 (30);
/

before the procedure was compiled. I also removed the line
l_typ    typ_workorder;
from the declaration section, but I still get the same error.


SQL> create type Typ_WorkOrder as table of varchar2 (30);
  2  /

Type created.

SQL> create or replace procedure list_wo
  2  ( pwoid in varchar2
  3   )is
  4  l_found  number;
  5  begin
  6  
  7    SELECT COUNT(1)
  8         INTO l_found
  9         FROM r5events
 10        WHERE evt_code in ( select column_value
 11                              from (select column_value, rownum
 12                                      from table( cast( dw_list( pwoid ) as Typ_WorkOrder) )
 13                                      where rownum > 0 )
 14                           );
 15   
 16   end;
 17  /

Warning: Procedure created with compilation errors.

SQL> show err
Errors for PROCEDURE LIST_WO:

LINE/COL ERROR
-------- -----------------------------------------------------------------
7/3      PLS-00801: internal error [22914]
7/3      PL/SQL: SQL Statement ignored
SQL> 


Thank you, 

There is a workaround for the problem I posted!

Sinan Topuz, November 16, 2006 - 2:28 pm UTC

Tom,

The problem has been fixed. There is a workaround, shown below, which I found on a website.

replace the following line of code

from table( cast( dw_list( '134450,134451,134452,142152,145525,144542,123478' ) as typ_workorderlist ) )

with this one

from THE( select cast( dw_list( '134450,134451,134452,142152,145525,144542,123478' ) as typ_workorderlist ) from sys.dual )

and with that change the procedure compiles and works fine.
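
For completeness, here is roughly what the procedure looks like with that workaround in place (just a sketch; dw_list, typ_workorderlist and r5events are the objects from the earlier posts, so the details may differ in your schema):

create or replace procedure list_wo( pwoid in varchar2 )
is
  l_found number;
begin
  select count(1)
    into l_found
    from r5events
   where evt_code in ( select column_value
                         from ( select column_value, rownum
                                  from THE( select cast( dw_list( pwoid )
                                                         as typ_workorderlist )
                                              from sys.dual )
                                 where rownum > 0 ) );
end;
/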

Thanks for your time and help anyway.

Sinan

Counting function calls using dbms_application_info

Shivdeep Modi, January 05, 2007 - 11:02 am UTC

Hi Tom,

I've tried using the example. It gave me a different plan and seems to have picked the correct cardinality. However, I cannot count the function calls. It must be something I am missing, but I can't seem to figure it out.

/NCLDBA/NV02 > variable in_list varchar2(255)
/NCLDBA/NV02 > exec :in_list := 'DBMS_PIPE,DBMS_OUTPUT,UTL_FILE';

PL/SQL procedure successfully completed.

/NCLDBA/NV02 > select *
2 from emp
3 where ename in ( select /*+ cardinality(t 10 ) */ *
4 from TABLE(cast( str2tbl( :in_list ) as str2tblType) ) t
5 )
6 /

3 rows selected.


Execution Plan
----------------------------------------------------------
0     SELECT STATEMENT Optimizer=CHOOSE (Cost=41 Card=10 Bytes=910)
1  0    TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=89)
2  1      NESTED LOOPS (Cost=41 Card=10 Bytes=910)
3  2        SORT (UNIQUE)
4  3          COLLECTION ITERATOR (PICKLER FETCH) OF 'STR2TBL'
5  2        INDEX (RANGE SCAN) OF 'ENAME_IDX' (NON-UNIQUE) (Cost=1 Card=1)





Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
20 consistent gets
0 physical reads
0 redo size
477 bytes sent via SQL*Net to client
340 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
3 rows processed

/NCLDBA/NV02 > set autotrace off
/NCLDBA/NV02 > select userenv('client_info') from dual;

USERENV('CLIENT_INFO')
----------------------------------------------------------------


1 row selected.

/NCLDBA/NV02 >

Regards,
Shivdeep Modi
Tom Kyte
January 06, 2007 - 8:26 am UTC

you didn't call dbms_application_info.set_client_info(0) to set the counter to zero, hence it remained null.

looks like that bit got chopped from my example, just make that call before you run the query to reset the counter.
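
In case that chopped bit is the missing piece, here is a rough sketch of the whole counting technique (assuming a str2tbl/str2tblType pair along the lines of the one used in this thread; your definitions may differ). The function bumps the session's client_info on every call, you zero it before running the query, and read it back afterwards:

create or replace type str2tblType as table of varchar2(30);   -- if not already created
/

create or replace function str2tbl( p_str in varchar2 ) return str2tblType
as
    l_str  long default p_str || ',';
    l_n    number;
    l_data str2tblType := str2tblType();
begin
    -- count this invocation; if client_info was never set, null+1 stays null,
    -- which is why the counter must be reset to 0 before the test
    dbms_application_info.set_client_info( userenv('client_info') + 1 );
    loop
        l_n := instr( l_str, ',' );
        exit when nvl( l_n, 0 ) = 0;
        l_data.extend;
        l_data( l_data.count ) := ltrim( rtrim( substr( l_str, 1, l_n-1 ) ) );
        l_str := substr( l_str, l_n+1 );
    end loop;
    return l_data;
end;
/

exec dbms_application_info.set_client_info(0);    -- reset the counter

select *
  from emp
 where ename in ( select /*+ cardinality(t 10) */ column_value
                    from table( str2tbl( :in_list ) ) t );

select userenv('client_info') from dual;          -- number of times str2tbl ran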

Pickler fetch

Kishore, September 07, 2007 - 5:47 am UTC

Hi Tom

Do you think that under the scenario below the function could be executed the same number of times as there are rows in EMP? I thought it would not, because we first fetch the collection into a local variable and then use it, so the function should be called only once. However, even in a similar case the function is being executed multiple times for us. Could you please let me know any possible reason for it?


declare
    lt_type str2tblType := str2tblType();
    lr      sys_refcursor;
begin
    lt_type := str2tbl( :in_list );
    open lr for
        select *
          from emp
         where ename in ( select /*+ cardinality(t 10) */ *
                            from TABLE( cast( lt_type as str2tblType ) ) t
                        );
end;


Regards
Kishore
Tom Kyte
September 12, 2007 - 9:58 am UTC

I'm not sure what you are observing, it is not clear from your description.

str2tab will be invoked ONCE here.

show us from start to finish everything - and how you determined it is getting invoked more than once.
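
Building on the counting sketch a few reviews above (again only a sketch, and it assumes str2tbl increments the client_info counter as shown there), one way to demonstrate it from start to finish is: zero the counter, run the block once, fetch all the rows, and check the counter, which should read 1 no matter how many rows EMP returns:

exec dbms_application_info.set_client_info(0);

declare
    lt_type str2tblType := str2tblType();
    lr      sys_refcursor;
    l_rec   emp%rowtype;
begin
    lt_type := str2tbl( :in_list );        -- the only place str2tbl is called
    open lr for
        select *
          from emp
         where ename in ( select /*+ cardinality(t 10) */ column_value
                            from table( cast( lt_type as str2tblType ) ) t );
    loop
        fetch lr into l_rec;
        exit when lr%notfound;
    end loop;
    close lr;
end;
/

select userenv('client_info') from dual;   -- expect 1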

cardinality hint

A reader, September 02, 2009 - 12:56 pm UTC

Hi Tom,
I have been reading your post about how to use the cardinality hint, and almost all examples I have come across are those in which the pipeline function is in an 'IN' clause.
Will the cardinality hint also work if we are treating the pipeline function as a table in the from clause with the driving table?

Tom Kyte
September 02, 2009 - 1:13 pm UTC

yes, it can.
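
For example (a sketch reusing the str2tbl/str2tblType setup from earlier in this thread; table and column names are only illustrative), the hint simply names the alias of the collection in the FROM clause:

select /*+ cardinality(t 10) */ e.*
  from table( str2tbl( :in_list ) ) t,
       emp e
 where e.ename = t.column_value;

With the estimate forced down to 10 rows, the optimizer is far more likely to drive from the collection into an index on the other table via nested loops.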

cardinality hint with query block name

Naresh Bhandare, September 24, 2011 - 11:59 pm UTC

Hello Tom,

I used a cardinality hint in combination with the qb_name hint as in below:

select /*+ NO_USE_HASH(X) INDEX_RS(X,order_id_IDX)
cardinality(@sq 100) */
*
FROM DTL_FACT X WHERE X.ORDER_ID IN
( SELECT /*+ qb_name(sq) parallel(orderes_temp1,2) */ ORDER_ID
FROM ORDERS_TEMP1
WHERE FEED_ID IN (7, 8) MINUS
SELECT /*+ parallel ( HDR_TEMP,2) */ ORDER_ID
FROM HDR_TEMP)
/

I wanted the plan to use a nested loops (NL) operation for the "X" table, but it was always going to a hash join (the reason being a high cardinality estimate of 700K for the "minus" sub-query) before I added the cardinality hint.

I know it worked, but the question is: is there any reason not to use this form of the hint, i.e. cardinality(@sq 100)? Is it "illegal" in any way to use an explicitly named query block instead of a table?

Oracle version 11.2.0.2.0.

Tom Kyte
September 25, 2011 - 11:48 am UTC

that is valid, as long as you saw the cardinality change - that is evidence "it is ok that way".
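
One quick way to confirm the hint actually changed the estimate (a sketch; with null arguments dbms_xplan.display_cursor shows the plan of the last statement executed in the session, so run the hinted query first, with serveroutput off):

select *
  from table( dbms_xplan.display_cursor( null, null, 'TYPICAL' ) );

The Rows column on the MINUS sub-query line should now reflect the hinted cardinality rather than the 700K estimate.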


problem posting a review

Naresh Bhandare, September 25, 2011 - 12:07 am UTC

I am actually trying to post the previous review on a question regarding the cardinality hint, but it keeps coming here - no idea what is going wrong.
