Clarification about optimizer_dynamic_sampling
Question and Answer

Tom Kyte

Thanks for the question, amit.

Asked: December 25, 2004 - 5:12 pm UTC

Last updated: May 07, 2013 - 4:51 pm UTC

Version: 9.2.0.5

Viewed 1000+ times

You Asked

Hi,
From the 9.2 Oracle documentation:

"The cardinality statistic is used, if it exists. If there is a predicate, dynamic sampling is done with a table hint and cardinality is not estimated.

To force cardinality estimation even for an analyzed table, you can use a further hint, dynamic_sampling_est_cdn, as in the following example:

SELECT /*+ dynamic_sampling(e 1) dynamic_sampling_est_cdn(t) */ count(*)
FROM employees e;

This forces cardinality estimation for employees, even if the table is analyzed. The following query does both selectivity and cardinality estimation for employees:

SELECT /*+ dynamic_sampling(e 1) dynamic_sampling_est_cdn(e) */ count(*)
FROM employees e
WHERE cols > 3;"

i.e.,

SELECT /*+ dynamic_sampling(e 1) */ count(*)
FROM employees e
WHERE cols > 3;

In the case of the above query, the optimizer would do dynamic sampling, get the sampled selectivity, and compute the cardinality from that instead of using the sampled cardinality. Am I correct?

But when I ran the above query with a 10053 trace, it showed that the optimizer was using both the sampled selectivity and the sampled cardinality:

*** 2004-12-25 17:02:30.734
*** SESSION ID:(123.18420) 2004-12-25 17:02:30.733
QUERY
SELECT /*+ dynamic_sampling(e 1) */ count(3)
FROM emp e
WHERE empno > 3

...
...

** Generated dynamic sampling query:
query text : SELECT /*+ ALL_ROWS IGNORE_WHERE_CLAUSE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NOPARALLEL("E") */ 1 AS C1, CASE WHEN "E"."EMPNO">3 THEN 1 ELSE
0 END AS C2 FROM "EMP" "E") SAMPLESUB
*** 2004-12-25 17:02:30.736
** Executed dynamic sampling query:
level : 1
sample pct. : 100.000000
actual sample size : 14
filtered sample card. : 14
orig. card. : 82
block cnt. : 1
max. sample block cnt. : 32
sample block cnt. : 1
min. sel. est. : 0.0500
** Using dynamic sampling card. : 14
** Using dynamic sel. est. : 1.00000000
TABLE: EMP ORIG CDN: 14 ROUNDED CDN: 14 CMPTD CDN: 14

Where am I going wrong?


thanks

and Tom said...

"To force cardinality estimation even for an analyzed table, you can use a
further hint, dynamic_sampling_est_cdn, as in the following example:"


even for an analyzed table.


so, I created a 1,000,000-row table and queried it:

select /*+ dynamic_sampling(big_table 1) */ count(*)
from big_Table
where id > 3

when the table is UNANALYZED, we get:


*** 2004-12-26 11:58:31.886
** Generated dynamic sampling query:
query text :
SELECT /*+ ALL_ROWS IGNORE_WHERE_CLAUSE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE NOPARALLEL("BIG_TABLE") */ 1 AS C1, CASE WHEN "BIG_TABLE"."ID">3 THEN 1 ELSE 0 END AS C2 FROM "BIG_TABLE" SAMPLE BLOCK (0.212285) "BIG_TABLE") SAMPLESUB
*** 2004-12-26 11:58:31.897
** Executed dynamic sampling query:
level : 1
sample pct. : 0.212285
actual sample size : 2342
filtered sample card. : 2342
orig. card. : 1192774
block cnt. : 14603
max. sample block cnt. : 32
sample block cnt. : 31
min. sel. est. : 0.0500
** Using dynamic sampling card. : 1103233
** Using dynamic sel. est. : 1.00000000
TABLE: BIG_TABLE ORIG CDN: 1103233 ROUNDED CDN: 1103233 CMPTD CDN: 1103233
Access path: tsc Resc: 2218 Resp: 2218
BEST_CST: 2218.00 PATH: 2 Degree: 1


But, when I inject a:

exec dbms_stats.set_table_stats( user, 'BIG_TABLE', numrows => 100 );

and redo it:


** Generated dynamic sampling query:
query text :
SELECT /*+ ALL_ROWS IGNORE_WHERE_CLAUSE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE NOPARALLEL("BIG_TABLE") */ 1 AS C1, CASE WHEN "BIG_TABLE"."ID">3 THEN 1 ELSE 0 END AS C2 FROM "BIG_TABLE" SAMPLE BLOCK (0.212285) "BIG_TABLE") SAMPLESUB
*** 2004-12-26 11:59:46.180
** Executed dynamic sampling query:
level : 1
sample pct. : 0.212285
actual sample size : 2342
filtered sample card. : 2342
orig. card. : 100
block cnt. : 14603
max. sample block cnt. : 32
sample block cnt. : 31
min. sel. est. : 0.0500
** Using dynamic sel. est. : 1.00000000
TABLE: BIG_TABLE ORIG CDN: 100 ROUNDED CDN: 100 CMPTD CDN: 100
Access path: tsc Resc: 17 Resp: 17
BEST_CST: 17.00 PATH: 2 Degree: 1


No dynamically estimated cardinality. But if I run:

select /*+ dynamic_sampling(big_table 1) dynamic_sampling_est_cdn(big_table) */ count(*)
from Big_table
where id > 3


** Generated dynamic sampling query:
query text :
SELECT /*+ ALL_ROWS IGNORE_WHERE_CLAUSE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM (SELECT /*+ IGNORE_WHERE_CLAUSE NOPARALLEL("BIG_TABLE") */ 1 AS C1, CASE WHEN "BIG_TABLE"."ID">3 THEN 1 ELSE 0 END AS C2 FROM "BIG_TABLE" SAMPLE BLOCK (0.212285) "BIG_TABLE") SAMPLESUB
*** 2004-12-26 12:01:01.576
** Executed dynamic sampling query:
level : 1
sample pct. : 0.212285
actual sample size : 2342
filtered sample card. : 2342
orig. card. : 100
block cnt. : 14603
max. sample block cnt. : 32
sample block cnt. : 31
min. sel. est. : 0.0500
** Using dynamic sampling card. : 1103233
** Using dynamic sel. est. : 1.00000000
TABLE: BIG_TABLE ORIG CDN: 1103233 ROUNDED CDN: 1103233 CMPTD CDN: 1103233
Access path: tsc Resc: 17 Resp: 17
BEST_CST: 17.00 PATH: 2 Degree: 1


it starts using it again.

So, the estimated-cardinality caveat is about a table that already has a cardinality associated with it, and your emp table didn't.
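To round out the demonstration (a sketch of my own, not a step shown above; dbms_stats.delete_table_stats is a real API, but this step is an addition): deleting the injected statistics should put BIG_TABLE back in the un-analyzed state, where the sampled cardinality is used again without any extra hint.

-- Sketch only: remove the stats injected earlier so the table is
-- "un-analyzed" again; the 10053 trace should then show
-- "Using dynamic sampling card." once more with just the
-- dynamic_sampling hint.
exec dbms_stats.delete_table_stats( user, 'BIG_TABLE' );

select /*+ dynamic_sampling(big_table 1) */ count(*)
  from big_table
 where id > 3;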




Comments

dynamic_sampling_est_cdn

amit poddar, December 26, 2004 - 12:47 pm UTC

Hi,
You were right. I did not analyze the emp table, but I thought I had. Anyway,

I did some more testing. It seems the only way to force the optimizer to use the sampled cardinality for analyzed tables is the dynamic_sampling_est_cdn hint, since even at optimizer_dynamic_sampling level 10 it did not use the sampled cardinality for analyzed tables.

Am I correct in my conclusion?




Tom Kyte
December 26, 2004 - 12:50 pm UTC

That is what it does, yes. Dynamic sampling is really meant to be used predominantly with un-analyzed tables.

more clarification

amit poddar, December 26, 2004 - 1:37 pm UTC

But in the case of analyzed tables, can't dynamic sampling be used to overcome the predicate independence assumption made by the optimizer?

Consider:
SQL> create table t (id1 number(3), id2 number(3));

Table created

SQL> insert into t values (1,33);

1 row created.

SQL> insert into t values (1,33);

1 row created.

SQL> insert into t values (7,34);

1 row created.

SQL> insert into t values (7,34);

1 row created.

SQL> commit;

Commit complete.

SQL> 
SQL> analyze table t compute statistics;

Table analyzed.


SQL> alter session set optimizer_dynamic_sampling=0;

Session altered.

SQL> explain plan for
  2  select /*+ DYNAMIC_SAMPLING (t &1) */ * from t t where id1=1 and id2=33;
Enter value for 1: 0
old   2: select /*+ DYNAMIC_SAMPLING (t &1) */ * from t t where id1=1 and id2=33
new   2: select /*+ DYNAMIC_SAMPLING (t 0) */ * from t t where id1=1 and id2=33

Explained.

SQL> @?/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |     1 |     4 |     5 |
|*  1 |  TABLE ACCESS FULL   | T           |     1 |     4 |     5 |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("T"."ID1"=1 AND "T"."ID2"=33)

Note: cpu costing is off

14 rows selected.

In this case, id2 is dependent on id1.
With two distinct values for each column, the combined selectivity for id1=:b1 and id2=:b2, assuming predicate independence, is (1/2 * 1/2) = 0.25. Hence the cardinality is (number of rows in t * selectivity) = 4 * 0.25 = 1. But this is incorrect.

Now let's try with dynamic sampling level 1:

SQL> explain plan for
  2  select /*+ DYNAMIC_SAMPLING (t &1) */ * from t t where id1=1 and id2=33;
Enter value for 1: 1
old   2: select /*+ DYNAMIC_SAMPLING (t &1) */ * from t t where id1=1 and id2=33
new   2: select /*+ DYNAMIC_SAMPLING (t 1) */ * from t t where id1=1 and id2=33

Explained.

SQL> @?/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |     2 |     8 |     5 |
|*  1 |  TABLE ACCESS FULL   | T           |     2 |     8 |     5 |
--------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("T"."ID1"=1 AND "T"."ID2"=33)

Note: cpu costing is off

14 rows selected.

SQL> 

Now we get the right cardinality, i.e. 2 rows.
From the 10053 trace:

** Performing dynamic sampling initial checks. **
Column:        ID2  Col#: 2      Table:  T   Alias:  T
    NDV: 2         NULLS: 0         DENS: 5.0000e-01 LO:  33  HI: 34
    NO HISTOGRAM: #BKT: 1 #VAL: 2
Column:        ID1  Col#: 1      Table:  T   Alias:  T
    NDV: 2         NULLS: 0         DENS: 5.0000e-01 LO:  1  HI: 7
    NO HISTOGRAM: #BKT: 1 #VAL: 2
** Dynamic sampling initial checks returning TRUE (level = 1).
*** 2004-12-26 13:19:18.006
** Generated dynamic sampling query:
    query text : SELECT /*+ ALL_ROWS IGNORE_WHERE_CLAUSE */ NVL(SUM(C1),0), NVL(SUM(C2),0) FROM
(SELECT /*+ IGNORE_WHERE_CLAUSE NOPARALLEL("T") */ 1 AS C1, CASE WHEN "T"."ID1"=1 AND "T"."ID2"=
33 THEN 1 ELSE 0 END AS C2 FROM "T" "T") SAMPLESUB
*** 2004-12-26 13:19:18.008
** Executed dynamic sampling query:
    level : 1
    sample pct. : 100.000000
    actual sample size : 4
    filtered sample card. : 2
    orig. card. : 4
    block cnt. : 22
    max. sample block cnt. : 32
    sample block cnt. : 22
    min. sel. est. : 0.2500
** Using dynamic sel. est. : 0.50000000
  TABLE:  T     ORIG CDN: 4  ROUNDED CDN: 2  CMPTD CDN: 2
  Access path: tsc  Resc:  5  Resp:  5
  BEST_CST: 5.00  PATH: 2  Degree:  1

In this case we were able to overcome the predicate independence assumption by using dynamic sampling.

Moreover from Oracle docs:

Level 4: Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks.

It looks like they are referring to the predicate independence assumption.

Am I correct?
Is this one of the intended uses of optimizer_dynamic_sampling?


 

Tom Kyte
December 26, 2004 - 1:50 pm UTC

Yes. SQL profiles in 10g do something similar, more persistently. But the most common use of dynamic sampling is for un-analyzed objects.

As pointed out, if the cardinality information is already there, it'll not be used unless you force it.

thanks

A reader, December 26, 2004 - 1:55 pm UTC


optimizer_dynamic_sampling

A reader, January 26, 2005 - 6:25 pm UTC

The PeopleSoft application uses tables as temporary tables in the application. Since the temporary tables can have varying data (depending on the process parameters), will dynamic sampling help resolve bad execution plans?

I tested it out on 9.2.0.5, but it did not appear to help. Here are my results.

The SQL:
SELECT  
VERSION, A.FIELDNAME, FIELDTYPE, LENGTH, DECIMALPOS, FORMAT, 
FORMATLENGTH, IMAGE_FMT, FORMATFAMILY, DISPFMTNAME, 
DEFCNTRYYR,IMEMODE,KBLAYOUT,OBJECTOWNERID, 
DEFRECNAME, DEFFIELDNAME, CURCTLFIELDNAME, USEEDIT, 
USEEDIT2, EDITTABLE, DEFGUICONTROL, SETCNTRLFLD, 
LABEL_ID, TIMEZONEUSE, TIMEZONEFIELDNAME, CURRCTLUSE, 
RELTMDTFIELDNAME, TO_CHAR(B.LASTUPDDTTM,'YYYY-MM-DD-HH24.MI.SS."000000"'), 
B.LASTUPDOPRID, B.FIELDNUM, A.FLDNOTUSED, A.AUXFLAGMASK, B.RECNAME 
FROM PSDBFIELD_tmp A, PSRECFIELD B 
WHERE B.RECNAME = '1099C_CUST_DATA'  AND A.FIELDNAME = B.FIELDNAME 
AND B.SUBRECORD ='N' ORDER BY B.RECNAME, B.FIELDNUM

Table PSDBFIELD_tmp does not have stats and has an index on column FIELDNAME. I have updated the table so that there are only two distinct values of FIELDNAME.

SQL> select fieldname, count(*) from PSDBFIELD_tmp group by fieldname;

FIELDNAME            COUNT(*)
------------------ ----------
CUST_ID                 49999
TEMP                     7094

SQL> select table_name, last_analyzed from user_tables where table_name='PSDBFIELD_TMP';

TABLE_NAME                     LAST_ANAL
------------------------------ ---------
PSDBFIELD_TMP

PSDBFIELD_TMP does not have statistics.

When optimizer_dynamic_sampling=1 (or 2 or higher), I get the following execution plan.

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=12776 Card=78459 Byt
          es=20320881)

   1    0   SORT (ORDER BY) (Cost=12776 Card=78459 Bytes=20320881)
   2    1     TABLE ACCESS (BY INDEX ROWID) OF 'PSDBFIELD_TMP' (Cost=2
           Card=8267 Bytes=1562463)

   3    2       NESTED LOOPS (Cost=22 Card=78459 Bytes=20320881)
   4    3         TABLE ACCESS (BY INDEX ROWID) OF 'PSRECFIELD' (Cost=
          4 Card=9 Bytes=630)

   5    4           INDEX (RANGE SCAN) OF 'PSFPSRECFIELD' (NON-UNIQUE)
           (Cost=3 Card=9)

   6    3         INDEX (RANGE SCAN) OF 'PSDBFIELD_TMP_INDX' (NON-UNIQ
          UE) (Cost=1 Card=22626)





Statistics
----------------------------------------------------------
          3  recursive calls
          2  db block gets
      50647  consistent gets
       1850  physical reads
          0  redo size
    1259210  bytes sent via SQL*Net to client
      37314  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          1  sorts (memory)
          1  sorts (disk)
      49999  rows processed

SQL> analyze table PSDBFIELD_TMP compute statistics;

Table analyzed.

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=3467 Card=54184 Byte
          s=5960240)

   1    0   SORT (ORDER BY) (Cost=3467 Card=54184 Bytes=5960240)
   2    1     HASH JOIN (Cost=132 Card=54184 Bytes=5960240)
   3    2       TABLE ACCESS (BY INDEX ROWID) OF 'PSRECFIELD' (Cost=4
          Card=9 Bytes=630)

   4    3         INDEX (RANGE SCAN) OF 'PSFPSRECFIELD' (NON-UNIQUE) (
          Cost=3 Card=9)

   5    2       TABLE ACCESS (FULL) OF 'PSDBFIELD_TMP' (Cost=86 Card=5
          7093 Bytes=2283720)





Statistics
----------------------------------------------------------
          0  recursive calls
          2  db block gets
        562  consistent gets
       1269  physical reads
          0  redo size
    1259212  bytes sent via SQL*Net to client
      37314  bytes received via SQL*Net from client
       3335  SQL*Net roundtrips to/from client
          0  sorts (memory)
          1  sorts (disk)
      49999  rows processed


Thanks





 

Tom Kyte
January 27, 2005 - 7:53 am UTC

It is very unclear from the above what level you actually used with dynamic sampling, and whether or not some statistics existed on the "temporary" table prior to running the query.

Without a tkprof (with the row source operations) to review against, it is even harder.

In other words, it is really hard to comment on what happened here.

RBO and dynamic sampling

Jaromir, July 19, 2005 - 2:41 pm UTC

<quote>Dynamic sampling is really meant to be used predominantly with un-analyzed tables</quote>

Am I right in the assumption (optimizer_mode=choose, optimizer_dynamic_sampling = 2, 9.2.0.6) that if *some* of the tables are un-analyzed, dynamic sampling will be performed, but if *all* of them are un-analyzed, nothing happens and a rule-based plan will be executed?
This sounds pretty logical to me, but I didn't consider it until recently.
Will this behavior change in 10g?

Thanks

Jaromir


Tom Kyte
July 19, 2005 - 5:37 pm UTC

In 10g, all_rows is the optimizer mode the database runs in, not choose anymore.

But if the RBO is used, dynamic sampling does not take place.

OK

A reader, November 17, 2005 - 2:36 am UTC

Hi Tom,
What is the need to sample data?

What benefit does it provide?

Tom Kyte
November 17, 2005 - 8:04 am UTC

Sampling data that has not been analyzed gives the optimizer a fighting chance to get the right answer (the right plan).

Otherwise, it has no idea what the data looks like.

Can you clarify this please?

Satheesh Babu.S, September 26, 2006 - 5:37 am UTC

Tom,
  I am on a 10.2.0.2 Windows box. If I set optimizer_dynamic_sampling with an ALTER SESSION command, the query does not do the sampling. But if I give it as a hint in the query, it does the sampling. Am I missing something here?

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('ARYAA','T1');

PL/SQL procedure successfully completed.

SQL> alter session set optimizer_dynamic_sampling=2;

Session altered.

SQL> select * from t1 where id1=1 and id2=33;

       ID1        ID2
---------- ----------
         1         33
         1         33


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     6 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |     6 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID1"=1 AND "ID2"=33)


Statistics
----------------------------------------------------------
        355  recursive calls
          0  db block gets
         65  consistent gets
          1  physical reads
          0  redo size
        498  bytes sent via SQL*Net to client
        380  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         11  sorts (memory)
          0  sorts (disk)
          2  rows processed

SQL> select /*+ dynamic_sampling(t1 2) */ * from t1 where id1=1 and id2=33;

       ID1        ID2
---------- ----------
         1         33
         1         33


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     2 |    12 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    12 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID1"=1 AND "ID2"=33)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
        700  recursive calls
          0  db block gets
        118  consistent gets
          6  physical reads
          0  redo size
        498  bytes sent via SQL*Net to client
        380  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
          2  rows processed

SQL>

Regards,
Satheesh Babu.S 

Tom Kyte
September 26, 2006 - 3:38 pm UTC

incomplete test case.... no create table, no insert, no looky....

Jonathan Lewis, September 26, 2006 - 4:39 pm UTC

Looks like the poster has a table which has got statistics, so the 'alter session' to level 2 has no effect - and level 2 is the 10g default anyway. The query would only be sampled (assuming the table had proper stats) if the parameter setting was 4 - forcing a sample because of the potential for dependent predicates.

Setting the hint to level 2 in the statement - whether query level or table level - forces a sample whether or not it's needed.

It's much more fun trying to guess what the user did to get a result - but then, I only answer a couple of questions a week, so I can spare the time occasionally.
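Jonathan's diagnosis suggests a quick check (a sketch of my own, untested here, reusing the t1 table from the post above): with statistics in place, raising the session-level setting to 4 should be what triggers sampling for the two-column predicate, where level 2 did not.

-- Sketch: level 4 samples tables whose single-table predicates
-- reference 2 or more columns, even when statistics exist.
alter session set optimizer_dynamic_sampling = 4;

select * from t1 where id1 = 1 and id2 = 33;

-- If sampling kicked in, the plan note should read
-- "dynamic sampling used for this statement".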


Tom Kyte
September 26, 2006 - 5:18 pm UTC

... It's much more fun trying to guess what the user did to get a result ...

laughing out loud - that is why I have adopted a strong rule of "ignore them, if I cannot try to reproduce them"

it is not worth the time it takes.

Apologies...

Satheesh Babu.S, September 27, 2006 - 2:53 am UTC

Tom,
   The create and insert script I used is the same as the one in an earlier post above (amit poddar). My fault, I should have put it here.
   My question is: if I set dynamic_sampling with an ALTER SESSION/SYSTEM command, it is not considered for sampling. However, if it is set at the query level as a hint, it is considered for sampling. I am on a 10.2.0.2 Windows box.



SQL> create table t1 (id1 number(3), id2 number(3));

Table created

SQL> insert into t1 values (1,33);

1 row created.

SQL> insert into t1 values (1,33);

1 row created.

SQL> insert into t1 values (7,34);

1 row created.

SQL> insert into t1 values (7,34);

1 row created.

SQL> commit;

Commit complete.

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('ARYAA','T1');

PL/SQL procedure successfully completed.

SQL> alter session set optimizer_dynamic_sampling=2;

Session altered.

SQL> select * from t1 where id1=1 and id2=33;

       ID1        ID2
---------- ----------
         1         33
         1         33


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     6 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     1 |     6 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID1"=1 AND "ID2"=33)


Statistics
----------------------------------------------------------
        355  recursive calls
          0  db block gets
         65  consistent gets
          1  physical reads
          0  redo size
        498  bytes sent via SQL*Net to client
        380  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         11  sorts (memory)
          0  sorts (disk)
          2  rows processed

SQL> select /*+ dynamic_sampling(t1 2) */ * from t1 where id1=1 and id2=33;

       ID1        ID2
---------- ----------
         1         33
         1         33


Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     2 |    12 |     3   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T1   |     2 |    12 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID1"=1 AND "ID2"=33)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
        700  recursive calls
          0  db block gets
        118  consistent gets
          6  physical reads
          0  redo size
        498  bytes sent via SQL*Net to client
        380  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          8  sorts (memory)
          0  sorts (disk)
          2  rows processed

SQL>

Regards,
Satheesh Babu.S 


 

Tom Kyte
September 27, 2006 - 5:03 am UTC

See Jonathan's comments right above. He already diagnosed it...

dynamic sampling

Ryan, November 17, 2006 - 10:53 am UTC

1. We are on 10g R2 using the default dynamic sampling level of 2. We are doing a lot of queries that join to GTTs. In most cases we are not having problems. With some jobs (depending on the data), Oracle thinks there is 1 row in the GTT, and as it progresses through the joins in the plan it thinks there is 1 row returned at each level.

At the top level (we have multi-table joins), Oracle thinks it is joining 1 row to 1 row and ends up doing 1-2 Cartesian joins.

Our GTTs have 1,000 rows in them. I am not sure I am reading the web correctly, but I believe I read that Oracle does not use dynamic sampling if there are fewer than 32 blocks?

2. When I look at tkprof output, Oracle seems to do slightly different dynamic sampling queries for different queries. In some cases it will have a CASE statement that looks at a certain field in the join condition. In other cases the SAMPLE clause will actually contain a bind variable. Is there any public data about the internals of dynamic sampling queries?

Tom Kyte
November 19, 2006 - 4:07 pm UTC

1) not really:


ops$tkyte%ORA10GR2> create global temporary table gtt ( x int );

Table created.

ops$tkyte%ORA10GR2> set autotrace on
ops$tkyte%ORA10GR2> select * from gtt;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 917624683

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| GTT  |     1 |    13 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
         93  recursive calls
          0  db block gets
          8  consistent gets
          0  physical reads
          0  redo size
        270  bytes sent via SQL*Net to client
        374  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          0  rows processed


What is more likely is that the query was hard parsed long ago, when the table was "small".


2) No, other than that it will look at the query and run the right sample based on your predicate. 
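One hedged workaround for the stale-hard-parse problem described in point 1 (a sketch of my own, not part of the answer above): set representative statistics on the GTT so every parse sees a realistic row count, regardless of what the table held when the cursor was first optimized. The numbers below are illustrative, matching the ~1,000-row population from the question.

-- Sketch: give the GTT representative statistics (values are
-- illustrative) so the optimizer never assumes a 1-row table.
begin
   dbms_stats.set_table_stats(
      ownname => user,
      tabname => 'GTT',
      numrows => 1000 );
end;
/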

Clarification - Dynamic Sampling !!!

Nikhil, November 20, 2006 - 4:01 am UTC

Hello Tom,
I got this information from this link:
http://www.dbazine.com/oracle/or-articles/hotka3

We would like to have your comments about this.

<<quote
The minimum number of blocks in a table being considered for dynamic sampling is 32. Dynamic sampling will not occur on tables with less than 32 blocks.
quote>>

<<quote
The default number of blocks that are dynamically sampled is 32. This is the default for both Oracle9 and Oracle10. Perhaps Oracle will allow for this default to be adjusted in future releases but, for now, it is internally set at 32 blocks. Dynamic sampling does not occur on table objects that contain less blocks than this default.
quote>>


Thanks

Tom Kyte
November 20, 2006 - 1:33 pm UTC

It seems to use it on my 8-block table:

ops$tkyte%ORA10GR2> create table t( x int );

Table created.

ops$tkyte%ORA10GR2> exec show_space('T');
Unformatted Blocks .....................               0
FS1 Blocks (0-25)  .....................               0
FS2 Blocks (25-50) .....................               0
FS3 Blocks (50-75) .....................               0
FS4 Blocks (75-100).....................               0
Full Blocks        .....................               0
Total Blocks............................               8
Total Bytes.............................          65,536
Total MBytes............................               0
Unused Blocks...........................               5
Unused Bytes............................          40,960
Last Used Ext FileId....................               4
Last Used Ext BlockId...................           2,609
Last Used Block.........................               3

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set autotrace traceonly explain
ops$tkyte%ORA10GR2> select * from t;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| T    |     1 |    13 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement

ops$tkyte%ORA10GR2> set autotrace off


and even in 9i, note the difference in cardinalities when I change the dynamic sampling level...

ops$tkyte%ORA9IR2> drop table t;

Table dropped.

ops$tkyte%ORA9IR2> create table t( x int );

Table created.

ops$tkyte%ORA9IR2> alter session set optimizer_goal=all_rows;

Session altered.

ops$tkyte%ORA9IR2> alter session set optimizer_dynamic_sampling = 1;

Session altered.

ops$tkyte%ORA9IR2> @plan "select * from t"
ops$tkyte%ORA9IR2> delete from plan_table;

2 rows deleted.

ops$tkyte%ORA9IR2> explain plan for &1;
old   1: explain plan for &1
new   1: explain plan for select * from t

Explained.

ops$tkyte%ORA9IR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |    82 |  1066 |     2 |
|   1 |  TABLE ACCESS FULL   | T           |    82 |  1066 |     2 |
--------------------------------------------------------------------

Note: cpu costing is off

9 rows selected.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> alter session set optimizer_dynamic_sampling = 2;

Session altered.

ops$tkyte%ORA9IR2> @plan "select * from t"
ops$tkyte%ORA9IR2> delete from plan_table;

2 rows deleted.

ops$tkyte%ORA9IR2> explain plan for &1;
old   1: explain plan for &1
new   1: explain plan for select * from t

Explained.

ops$tkyte%ORA9IR2> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------

--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |     1 |    13 |     2 |
|   1 |  TABLE ACCESS FULL   | T           |     1 |    13 |     2 |
--------------------------------------------------------------------

Note: cpu costing is off

9 rows selected.
 

OPTIMIZER_DYNAMIC_SAMPLING to 4

Karthik, May 02, 2007 - 5:08 am UTC

Hello Tom,

On Oracle 9i,
actlink_push table has 20555 rows
actlink_mapping table has 21027 rows
Both tables are analyzed using the ANALYZE command. The DBAs have not given us permission to execute DBMS_STATS; we can only analyze the tables using ANALYZE.

The columns in the WHERE clause are indexed.

Both plans look the same, so why are the consistent gets lower with OPTIMIZER_DYNAMIC_SAMPLING set to 4? What does this mean? I don't want to specify any hint, nor do I want to set OPTIMIZER_DYNAMIC_SAMPLING to 4 in the init.ora.


OPTIMIZER_DYNAMIC_SAMPLING to Default

SQL> SELECT a.actlinkId,actionIndex,a.fieldId,assignShort,assignLong,sampleServer,
2 sampleSchema
3 FROM actlink_push a,
4 actlink_mapping b
5 WHERE a.actlinkId=b.actlinkId
6 AND b.schemaId=4796
7 ORDER BY 1,2 DESC;

9 rows selected.

Elapsed: 00:00:02.01

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=50 Card=1148 Bytes=151536)
   1    0   SORT (ORDER BY) (Cost=50 Card=1148 Bytes=151536)
   2    1     NESTED LOOPS (Cost=26 Card=1148 Bytes=151536)
   3    2       TABLE ACCESS (FULL) OF 'ACTLINK_PUSH' (Cost=26 Card=19025 Bytes=2378125)
   4    2       INDEX (UNIQUE SCAN) OF 'ACTLINK_MAPPING_IND' (UNIQUE)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
21031 consistent gets
0 physical reads
0 redo size
1815 bytes sent via SQL*Net to client
335 bytes received via SQL*Net from client
11 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
9 rows processed


ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING = 4;

SQL> SELECT a.actlinkId,actionIndex,a.fieldId,assignShort,assignLong,sampleServer,
2 sampleSchema
3 FROM actlink_push a,
4 actlink_mapping b
5 WHERE a.actlinkId=b.actlinkId
6 AND b.schemaId=4796
7 ORDER BY 1,2 DESC;

9 rows selected.

Elapsed: 00:00:01.09

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=50 Card=1148 Bytes=151536)
   1    0   SORT (ORDER BY) (Cost=50 Card=1148 Bytes=151536)
   2    1     NESTED LOOPS (Cost=26 Card=1148 Bytes=151536)
   3    2       TABLE ACCESS (FULL) OF 'ACTLINK_PUSH' (Cost=26 Card=19025 Bytes=2378125)
   4    2       INDEX (UNIQUE SCAN) OF 'ACTLINK_MAPPING_IND' (UNIQUE)




Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
21 consistent gets
0 physical reads
0 redo size
1905 bytes sent via SQL*Net to client
335 bytes received via SQL*Net from client
11 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
9 rows processed

Thanks
Tom Kyte
May 02, 2007 - 8:24 am UTC

... The DBAs have not given us permission to execute DBMS_STATS ...

you do not have any DBAs working with you then; you are calling them by the wrong name - DBA is not appropriate for them.

explain plan lies - use tkprof.
http://asktom.oracle.com/Misc/when-explanation-doesn-sound-quite.html

and you can alter your session in your application (you need no permission from those that must not be called DBAs to do that) instead of hinting or setting in the init.ora

OPTIMIZER_DYNAMIC_SAMPLING to 4

Karthik, May 02, 2007 - 10:56 am UTC

Hello Tom,
If a query shows a different plan and reads fewer buffers with OPTIMIZER_DYNAMIC_SAMPLING set to 4, what does that indicate? Should I set OPTIMIZER_DYNAMIC_SAMPLING to 4, or is there anything else I need to do?

Thanks
Tom Kyte
May 02, 2007 - 5:30 pm UTC



do you understand what dynamic sampling does?

Karthik, May 03, 2007 - 4:47 am UTC

Here is my understanding about OPTIMIZER_DYNAMIC_SAMPLING.

Setting a specific value causes statistics to be gathered at hard parse time to generate the execution plan. If tables in the query are not analyzed, setting it to 2 samples all unanalyzed tables referenced in the query.

The reason I am asking is that even after analyzing the tables, I find the consistent gets are higher than when OPTIMIZER_DYNAMIC_SAMPLING is set to 4.

In the example I gave previously, although the explain plan looks the same for both queries, the consistent gets differ. I remember you saying that explain plan can lie, and I read the link you gave.

Thanks
Tom Kyte
May 03, 2007 - 5:42 pm UTC

setting sampling to level 4 causes the hard parse to gather statistics very specific to the query itself - it looks at the where clause and samples based on that.

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:40230704959128#69134369478883

so, the sampling ADDS to the set of statistics available to the optimizer for that specific query - leading to a better plan.
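To experiment with this without touching the init.ora, the level can be raised for the session or for a single statement with a hint - a sketch, reusing the query shape from the example above:

```sql
-- Session level: every hard parse in this session may sample at level 4.
alter session set optimizer_dynamic_sampling = 4;

-- Statement level: the cursor-level form of the hint applies only to
-- this query's hard parse.
select /*+ dynamic_sampling(4) */
       a.actlinkId, actionIndex, a.fieldId
  from actlink_push a, actlink_mapping b
 where a.actlinkId = b.actlinkId
   and b.schemaId  = 4796;
```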

Does Oracle store the samples taken?

Bilal HATIPOGLU, August 02, 2007 - 4:42 am UTC

Hi Tom,

I have a question about dynamic sampling:

Does Oracle store the samples taken, and use them for other, similar queries? If yes, where are they stored? In the dictionary?

Let's look at this:

SQL> create table tk_ds_test as select * from all_objects;

Table created.

SQL>
SQL> set autotrace traceonly
SQL> select * from tk_ds_test where owner='PUBLIC';

19988 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1929618903

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            | 19394 |  2424K|   132   (2)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| TK_DS_TEST | 19394 |  2424K|   132   (2)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"='PUBLIC')

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
        288  recursive calls
          0  db block gets
       1058  consistent gets
        568  physical reads
          0  redo size
     784137  bytes sent via SQL*Net to client
       4762  bytes received via SQL*Net from client
        401  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      19988  rows processed

SQL> select * from tk_ds_test where owner='TEST';

23 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1929618903

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |     6 |   768 |   132   (2)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| TK_DS_TEST |     6 |   768 |   132   (2)| 00:00:02 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"='TEST')

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          4  recursive calls
          0  db block gets
        640  consistent gets
          0  physical reads
          0  redo size
       2455  bytes sent via SQL*Net to client
        373  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         23  rows processed

SQL>

It seems it used dynamic sampling for both queries. But did it really sample again (randomly different blocks), or did it reuse the previous samples? In principle, it could reuse the samples previously taken.

In the documentation, I found this:

"Dynamic sampling is repeatable if no rows have been inserted, deleted, or updated in the table being sampled."
http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#sthref1500
Considering the table did not change after the first query, does this describe what I am saying?




Tom Kyte
August 05, 2007 - 10:18 am UTC

no, dynamic sampling is for a query, a single query - if you were to hard parse that query again, it would dynamically sample it all over again.


in the above, it sampled twice - with altogether different sampling queries/inputs (because your literal was completely different)

When does Oracle use dynamic sampling?

A reader, August 13, 2007 - 6:30 pm UTC

The optimizer_dynamic_sampling parameter can have values from 0 to 10 (in 10g), each specifying a different sampling level.

Now, I create a table and perform regular DML activities on it. However, I have not analyzed it.

How does Oracle collect statistics on that table using dynamic sampling? At what interval does it collect statistics? Does it write them to the data dictionary?

If that table is joined to other tables (some analyzed and some not), will dynamic sampling be applied (during the hard parse) to all the tables that don't have statistics?

What is the impact on the database if we bump up the value of optimizer_dynamic_sampling from its default of 2 in 10g?


Tom Kyte
August 20, 2007 - 10:57 am UTC

dynamic sampling happens at hard parse time - on every hard parse against that table - until statistics exist.

It does not save this information (hence the term dynamic) - it is collected by the optimizer at hard parse time and used exclusively to optimize THAT individual SQL statement.


If you bump the value up, the documented behavior kicks in (not sure what else to say beyond that). It does what it is documented to do - the higher levels cause different sample types to be taken of different (larger) sample sizes. That will be the impact.
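For reference, a sketch of a few of the documented settings (the exact level definitions vary slightly by release - check the Database Reference for your version):

```sql
alter session set optimizer_dynamic_sampling = 0;  -- dynamic sampling disabled
alter session set optimizer_dynamic_sampling = 2;  -- 10g default: sample tables with no statistics
alter session set optimizer_dynamic_sampling = 4;  -- also sample analyzed tables whose single-table
                                                   -- predicates reference two or more columns
alter session set optimizer_dynamic_sampling = 10; -- sample all blocks (the most expensive level)
```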

dynamic sampling and soft parse,

A reader, August 20, 2007 - 12:11 pm UTC

Thanks. If I execute the same SQL again that was hard parsed, and Oracle used dynamic sampling on my tables, will it do a soft parse or a hard parse?
Tom Kyte
August 22, 2007 - 10:37 am UTC

umm, you said it was hard parsed.


so, it will of course be doing a hard parse - because you said "that was hard parsed"

follow up,

A reader, August 22, 2007 - 10:39 am UTC

hard parse for the first time, but when I run the same query again?
Tom Kyte
August 22, 2007 - 2:05 pm UTC

then it will be a soft parse - just reuse the existing plan - no sampling needed since we only sample to determine a plan.

Clarification

Matt, January 07, 2008 - 3:41 am UTC

Tom,

Can I just clarify something - we have a piece of SQL (part of a larger batch routine) which drives from a global temporary table. The table has no stats and we have optimizer_dynamic_sampling=2.

This gtt can have either 10 records or 100,000 records depending on how many items are being processed.

Are you saying that in fact optimizer_dynamic_sampling will not cause the SQL to be re-parsed (hard) each time? So in effect, we could generate a plan the first time for 10 records, which would then be used for the 100,000 records?

Can we force hard parsing each time?
Tom Kyte
January 07, 2008 - 8:10 am UTC

... Are you saying that in fact optimizer_dynamic_sampling will not cause the SQL
to be re-parsed (hard) each time. ...

correct, dynamic sampling is something that affects how a query is hard parsed (the optimizer will sample the table during the hard parse)

dynamic sampling will NOT cause something to be hard parsed - it is something that might be DONE during a hard parse.


You could gather or set statistics on the object before accessing it (setting statistics makes more sense probably, you just filled it, tell us how many rows you put in)
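The "set statistics" route might be sketched like this (the GTT and source table names are made up for illustration):

```sql
declare
   l_rows number;
begin
   insert into my_gtt          -- hypothetical GTT name
   select * from source_table; -- hypothetical source

   l_rows := sql%rowcount;

   -- tell the optimizer how many rows were just loaded
   dbms_stats.set_table_stats( ownname => user,
                               tabname => 'MY_GTT',
                               numrows => l_rows );
end;
/
```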

Or use dynamic SQL - and embed into the statement something that makes it unique - not entirely unique, but maybe something like:


insert into gtt select .....;
l_rows := round( sql%rowcount, -2 ); -- round to the nearest hundred

open l_query for
   'select /* rows around ' || l_rows || ' */ * from .......';



that gives you multiple plans - but not one for each and every single value.

Useful Article

Matt, January 07, 2008 - 6:56 am UTC

I think you've already explained the options available to us here..

http://tkyte.blogspot.com/2007/09/sqltracetrue-part-two.html

Thanks.

To Matt re: clarification

Stew Ashton, January 07, 2008 - 11:15 am UTC


If you decide to write dynamic SQL and use SQL%rowcount, you might consider using the cardinality hint to give the optimizer an approximate count of the rows in the GTT. Just a variant of Tom's answer that he mentions elsewhere...
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7038986332061#29716539084590

Tom Kyte
January 07, 2008 - 11:29 am UTC

well, the dynamic sampling they are using would be my preference over the undocumented hint! I skipped cardinality in this example solution simply because dynamic sampling would provide that estimate automatically.

Understood

Stew Ashton, January 07, 2008 - 12:02 pm UTC


Thanks for the clarification :)

Not hard parsed in each execution?

Mike, September 22, 2010 - 12:40 am UTC

Tom,

"dynamic sampling will NOT cause something to be hard parsed - it is something that might be DONE during a hard parse."

Is it true in Oracle 10g also?

I was thinking that dynamic sampling (default level 2 or 3, 4, etc) will always force statistics gathering for each execution and thus invalidate the previous plan.

Thank You


Tom Kyte
September 23, 2010 - 10:28 am UTC

... Is it true in Oracle 10g also? ...

Yes, dynamic sampling is done during a hard parse, it is part of a hard parse, it does not CAUSE a hard parse to occur, it is a side effect that happens during a hard parse.


Misleading Name

Vinod, September 23, 2010 - 10:47 pm UTC

Tom,

It is advised to use dynamic sampling for volatile tables (GTTs) whose number of rows changes dynamically with each execution. If the sampling is not done for each execution, it defeats the very purpose of using dynamic sampling.
1. Can we expect enhancements in a future release to make it really dynamic?
2. Where exactly are the dynamically sampled stats stored?

Thank you.

Tom Kyte
September 27, 2010 - 10:08 am UTC

... defeats the very purpose of using dynamic sampling. ...

how so? by what definition? dynamic sampling is used to more correctly ESTIMATE cardinalities when you query objects that do not have statistics OR the optimizer looks at a where clause and makes a "guess" (uses rules). The optimizer would verify that guess OR get sampled statistics for un-analyzed objects at hard parse time to fill in the missing gaps. That is the definition of dynamic sampling. It does NOT mean "kill the system by hard parsing every single time"


1) it already is really dynamic.
2) they are not, that is the point, they are dynamic and associated with a single sql statement for the time that sql statement is in the shared pool. If you want them to be "not dynamic" (to be stored in the dictionary) you want to use a SQL Profile.

Hint DYNAMIC_SAMPLING_EST_CDN is listed in Oracle 11g R2 documentation

Matthias Schulz, May 07, 2013 - 4:18 pm UTC

Dear Tom,

the hint DYNAMIC_SAMPLING_EST_CDN is not mentioned in the documentation for Oracle 10g R2 or Oracle 11g R2:

http://www.oracle.com/pls/db102/search?remark=quick_search&word=DYNAMIC_SAMPLING_EST_CDN&tab_id=&format=ranked

http://www.oracle.com/pls/db112/search?word=DYNAMIC_SAMPLING_EST_CDN&partno=


but it is mentioned in the documentation for Oracle 9i:
http://tahiti.oracle.com/pls/db92/db92.drilldown?remark=&word=DYNAMIC_SAMPLING_EST_CDN&book=&preference=


My questions are:
a) Is the DYNAMIC_SAMPLING_EST_CDN hint deprecated or has it just been forgotten in newer documentations?

b) Is it still safe and officially allowed to use this hint with Oracle 11g?

Best regards,
Matthias Schulz

Tom Kyte
May 07, 2013 - 4:51 pm UTC

it looks like it might be one of those "we forgot to document when the documentation moved around" items.

It is a valid hint in 11gr2:

ops$tkyte%ORA11GR2> exec print_table( q'|select * from v$sql_hint where name like '%EST_CDN'|' )
.NAME                         : DYNAMIC_SAMPLING_EST_CDN
.SQL_FEATURE                  : QKSFM_DYNAMIC_SAMPLING_EST_CDN
.CLASS                        : DYNAMIC_SAMPLING_EST_CDN
.INVERSE                      :
.TARGET_LEVEL                 : 4
.PROPERTY                     : 272
.VERSION                      : 9.2.0
.VERSION_OUTLINE              :
-----------------

PL/SQL procedure successfully completed.



and Jonathan Lewis points out that it probably got lost in a shuffle:
http://jonathanlewis.wordpress.com/2009/06/14/undocumented-hints/

I would suggest filing an SR asking for it to be documented and see what support says on that particular one.

No harm can come from it - the worst that can happen is that it wouldn't take effect.
