
Question and Answer

Connor McDonald

Thanks for the question, Harald.

Asked: September 11, 2000 - 10:09 am UTC

Last updated: October 08, 2021 - 1:18 am UTC

Version: 8.1.6

Viewed 10K+ times

You Asked

We are doing performance testing of SQL statements in our project. When running explain plans on the statements, the results are often not realistic because we have not loaded the tables with data, and therefore the indexes, in most cases, will not be used as they will be in production.

I found the DBMS_STATS package and am wondering if I can use it to virtually load my test environment for explain plans. Our Oracle support team doubts that it is possible to extract the data using GET/EXPORT_TABLE/INDEX/COLUMN_STATS, modify it, and then use SET/IMPORT_TABLE/INDEX/COLUMN_STATS to put the new statistics back into the data dictionary tables.

Are there any descriptions of the tables that CREATE_STAT_TABLE makes?

Regards,
Harald

and Tom said...



That table structure is not documented and is not designed to be modified by anything other than the DBMS_STATS package. Its sole purpose is to take stats from one database to another via export/import in the dbms_stats package. Attempting to plug your own values in there would be difficult at best and potentially harmful at worst.

Instead of trying to insert/update rows in that table -- you should just use the other entries in the dbms_stats package. For example, to set the statistics for a table -- you would use the dbms_stats.set_table_stats procedure. It can either put the stats you want into the data dictionary directly OR you can have this procedure stuff those stats into the table created by CREATE_STAT_TABLE. You use the formal named arguments in the set_table_stats procedure and they'll take care of stuffing them into the table itself. You lose NO functionality by doing this -- there would be NOTHING to be gained by accessing the created table directly.

You would use the SET_COLUMN|INDEX|TABLE_STATS to set whatever stats you wanted on the data. There would be no reason to modify existing data. You can see all of the "existing" data by querying the data dictionary.
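
For illustration, a minimal sketch of that approach (the table name and the numbers here are made up; you would plug in whatever you want the optimizer to believe):

begin
   dbms_stats.set_table_stats
   ( ownname => user,
     tabname => 'EMP',        -- hypothetical test table
     numrows => 1000000,      -- pretend one million rows
     numblks => 10000,        -- pretend 10,000 blocks
     avgrlen => 100 );        -- pretend 100 byte average row length
end;
/

After that, explain plans against EMP will be costed as if the table were loaded.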


Comments

what is the utility of dbms_stats.set_table_stats

Wells, September 29, 2001 - 1:40 pm UTC

Is it used to set statistics and test the performance, i.e. the execution path?

Actually, we are interested in knowing how and where we can use dbms_stats.set_table_stats to maximum benefit.

Tom Kyte
September 29, 2001 - 2:02 pm UTC

You can use it to transfer the stats from your production system (with all of the data) to your test system. You might do that to review the query plans generated by new/altered queries in your application before putting them into production -- so you can see what might be different.

I use it to show the effects of having differing numbers of rows/blocks in tables. For example, in order to demonstrate a point -- I might analyze an empty table -- run a query and show the plan. Then, I will use dbms_stats to trick the optimizer into thinking there are LOTS and LOTS of rows and blocks in there and show the different plans. You can use this to see how the optimizer will change its mind about the proper plan over time.

Another use I've found for it is with temporary tables. You cannot really analyze them, but what you can do is create a "real" table -- fill it up with representative data -- and then export these stats using dbms_stats. Later, you create the temporary table and import these other stats you saved. Now the optimizer will use these stats for the temporary table when developing query plans that involve it. This could be important if you put lots of rows into a temp table and then use it in subsequent SQL statements -- the optimizer would otherwise think the temporary table is "small".
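
A minimal sketch of that temporary table trick (names are made up; the "real" table and the temporary table must share the same name for the import to line up):

create table work_t as select * from all_objects;    -- representative data

exec dbms_stats.create_stat_table( user, 'STATS' );
exec dbms_stats.gather_table_stats( user, 'WORK_T' );
exec dbms_stats.export_table_stats( user, 'WORK_T', stattab => 'STATS' );

drop table work_t;
create global temporary table work_t
on commit preserve rows
as select * from all_objects where 1=0;

-- now the optimizer sees the "representative" stats for the temp table
exec dbms_stats.import_table_stats( user, 'WORK_T', stattab => 'STATS' );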

why is dbms_stats slower than analyze

A reader, November 01, 2001 - 12:12 pm UTC

Hi

I have tested a few times and always find that dbms_stats is slower than an ordinary analyze, even when specifying a parallel degree. Why is that? We are using 8.1.6.3.

Also, is it possible to analyze a table in parallel using "analyze table xx compute statistics"?

Tom Kyte
November 01, 2001 - 4:48 pm UTC

Geez, no test case, no parameters, not even "we find it X% slower" -- nothing.

Perhaps the tables you used were trivial in size (parallel can take much longer).  Perhaps you were comparing apples with oranges (eg: dbms_stats by default does "for all columns size 1" -- did your analyze do that too?)

No, you cannot analyze in parallel, only dbms_stats can.

Also, I do not see it being significantly slower (given that dbms_stats will just issue an analyze in this particular case you would expect that to be so!):

ops$tkyte@ORA717DEV.US.ORACLE.COM> variable n number
ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> analyze table t delete statistics;

Table analyzed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> exec :n := dbms_utility.get_time

PL/SQL procedure successfully completed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> exec dbms_stats.gather_table_stats( user, 'T', method_opt => 'for all indexed columns', cascade => true );

PL/SQL procedure successfully completed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> exec dbms_output.put_line( (dbms_utility.get_time-:n) || ' hsecs ' );
22762 hsecs

PL/SQL procedure successfully completed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> analyze table t delete statistics;

Table analyzed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> exec :n := dbms_utility.get_time

PL/SQL procedure successfully completed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> analyze table t
  2  compute statistics
  3  for table
  4  for all indexed columns
  5  for all indexes
  6  /

Table analyzed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> exec dbms_output.put_line( (dbms_utility.get_time-:n) || ' hsecs ' );
22732 hsecs

PL/SQL procedure successfully completed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> select num_Rows from user_tables where table_name = 'T';

  NUM_ROWS
----------
   1093312


and when I do it in parallel -- I see a HUGE difference:


ops$tkyte@ORA717DEV.US.ORACLE.COM> variable n number
ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> analyze table t delete statistics;

Table analyzed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> exec :n := dbms_utility.get_time

PL/SQL procedure successfully completed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> exec dbms_stats.gather_table_stats( user, 'T', method_opt => 'for all indexed columns size 1', cascade => true, degree=>8 );

PL/SQL procedure successfully completed.

ops$tkyte@ORA717DEV.US.ORACLE.COM> 
ops$tkyte@ORA717DEV.US.ORACLE.COM> exec dbms_output.put_line( (dbms_utility.get_time-:n) || ' hsecs ' );
2525 hsecs

PL/SQL procedure successfully completed.

I think you were using tables that were so small as to be insignificant in even timing them.

Btw, table t is:

create table t as select * from all_objects;
insert into t select * from t;
insert into t select * from t;
insert into t select * from t;
insert into t select * from t;
insert into t select * from t;
insert into t select * from t;

create index t_idx1 on t(object_id);
create index t_idx2 on t(object_name);


 

Partitions are a slightly different story...

Connor, November 02, 2001 - 5:01 am UTC

analyze table p1 partition (x1)
compute statistics

********************************************************************************

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.05          0          0          0           0
Execute      3     14.89      44.28      11089       1818        300           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     14.92      44.33      11089       1818        300           0

BEGIN dbms_stats.GATHER_TABLE_STATS('MCDONAC','P1','X1'); END;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       11      0.04       0.08          0          0          0           0
Execute     18      0.01       0.03          2         23         13           9
Fetch       15     50.47     198.18      48473       7511       1723          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       44     50.52     198.29      48475       7534       1736          19

so you do need to be a little wary before converting all of your analyze scripts to use dbms_stats.

Tom Kyte
November 02, 2001 - 8:29 pm UTC

Connor -- you've compared apples to toaster ovens. That dbms_stats call gathers lots more stats than the analyze command does!

The analyze doesn't work 100% on partitions, for example -- analyze that table and look at the LAST_ANALYZED in the USER_TAB_COLUMNS table -- it'll not have changed (nor will the values there). Then, do it with dbms_stats and you'll find that it does. They are two different commands above.





DBMS_STATS slower

A reader, November 02, 2001 - 6:49 am UTC

Hi Tom

I have done a test after reading this thread, on a Tru64 machine with 1 CPU ONLY, Oracle 8.1.7 Enterprise Edition, and a table of 4,500,000 rows:


SQL> analyze table opsc.CALLS_OPSC estimate statistics sample 20 percent for all columns size 1;

Table analyzed.

Elapsed: 00:17:14.11

SQL> exec dbms_stats.GATHER_TABLE_STATS('OPSC', 'CALLS_OPSC', estimate_percent => 20, method_opt => 'for all columns size 1', cascade => true, degree => 2);

PL/SQL procedure successfully completed.

Elapsed: 00:20:05.20


Is DBMS_STATS slower in this case because we only have one CPU? Also, why with

analyze table opsc.CALLS_OPSC estimate statistics sample 20 percent for all columns size 1;

do I see NULL in num_rows? Only column statistics appear in dba_tab_columns.

By the way, there is no index on this table.


cheers 

Tom Kyte
November 02, 2001 - 7:42 am UTC

Your commands are totally different.

You have to compare:

analyze table opsc.CALLS_OPSC estimate statistics sample 20 percent
for table
for all columns size 1
for all indexes

(you didn't analyze the table!! dbms_stats did, analyze did not)

and yes, there is a chance that the overhead of PQ on a single cpu system outweighs any benefit in this particular case. But here, the difference is most likely due to the fact that dbms_stats did more work.

17 minutes to sample 20% of a 4.5 million row table seems long to me. You might want to tkprof that to see if you have tons of waits and if so, play around with your sort area size and such -- both of those should go faster.



Your followup on partitions - dbms_stats vs analyze

Connor McDonald, November 03, 2001 - 5:59 am UTC

I wasn't trying to dispute that dbms_stats and analyze do something very different. I'm just trying to warn people about possible consequences of taking a set of existing ANALYZE scripts and merely converting them (in what would seem a fairly natural sense) to DBMS_STATS - sure you get better stats - but you may get a large resource hit.

Cheers
Connor

Tom Kyte
November 03, 2001 - 8:12 am UTC

ahh, OK, I would think (hope, pray, whatever) that people would test things -- read the docs and realize that these are different implementations -- collected radically different values - did different things.

You are right though -- pointing it out doesn't hurt (but I still feel compelled to point out "apples and oranges" just in case someone reading THOUGHT they were the same!)

Thanks Connor -- always appreciate your input.

misread

A reader, December 11, 2001 - 4:51 am UTC

Hi Tom

I read the reply "DBMS_STATS slower"; it states:

SQL> analyze table opsc.CALLS_OPSC estimate statistics sample 20 percent for all 
columns size 1;

Table analyzed.

Elapsed: 00:17:14.11

SQL> exec dbms_stats.GATHER_TABLE_STATS('OPSC', 'CALLS_OPSC', estimate_percent 
=> 20, method_opt => 'for all columns size 1', cascade => true, degree => 2);

PL/SQL procedure successfully completed.

Elapsed: 00:20:05.20

I am not sure; you said they are different, but I don't see the difference, because method_opt => 'for all columns size 1' is specified in both. Unless, even when you specify only "for all columns", Oracle gathers more than that as well?

Also, is there any way to know the default values of the parameters of this package? How can we know what defaults it is using?

Tom Kyte
December 11, 2001 - 8:03 am UTC

They are different because the analyze command does ONE thing -- gets the column stats.  It does NOT get the table stats, it does NOT get the index stats.  The dbms_stats call does get the table stats (gather_table_stats always does).  In addition it gets stats FOR ALL COLUMNS (what the analyze did).  In addition, it was specified with CASCADE=>TRUE meaning the indexes associated with the table will be analyzed as well.

You can see the difference to confirm it via:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects;
ops$tkyte@ORA817DEV.US.ORACLE.COM> create index t_idx on t(object_id);

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t delete statistics;
Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t
  2  estimate statistics sample 20 percent
  3  for all columns size 1;
Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> select last_analyzed from user_tables where table_name = 'T';

LAST_ANAL
---------


ops$tkyte@ORA817DEV.US.ORACLE.COM> select last_analyzed from user_indexes where index_name = 'T_IDX';

LAST_ANAL
---------


ops$tkyte@ORA817DEV.US.ORACLE.COM> select column_name, last_analyzed from user_tab_columns where table_name = 'T';

COLUMN_NAME                    LAST_ANAL
------------------------------ ---------
OWNER                          11-DEC-01
OBJECT_NAME                    11-DEC-01
SUBOBJECT_NAME                 11-DEC-01
OBJECT_ID                      11-DEC-01
DATA_OBJECT_ID                 11-DEC-01
OBJECT_TYPE                    11-DEC-01
CREATED                        11-DEC-01
LAST_DDL_TIME                  11-DEC-01
TIMESTAMP                      11-DEC-01
STATUS                         11-DEC-01
TEMPORARY                      11-DEC-01
GENERATED                      11-DEC-01
SECONDARY                      11-DEC-01

13 rows selected.

Note that the columns have a last_analyzed -- the table and index DO NOT

ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t delete statistics;
Table analyzed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> begin
  2     dbms_stats.GATHER_TABLE_STATS
  3     (user, 'T', estimate_percent => 20,
  4          method_opt => 'for all columns size 1',
  5          cascade => true, degree => 2);
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select last_analyzed from user_tables where table_name = 'T';

LAST_ANAL
---------
11-DEC-01

ops$tkyte@ORA817DEV.US.ORACLE.COM> select last_analyzed from user_indexes where index_name = 'T_IDX';

LAST_ANAL
---------
11-DEC-01

ops$tkyte@ORA817DEV.US.ORACLE.COM> select column_name, last_analyzed from user_tab_columns where table_name = 'T';

COLUMN_NAME                    LAST_ANAL
------------------------------ ---------
OWNER                          11-DEC-01
OBJECT_NAME                    11-DEC-01
SUBOBJECT_NAME                 11-DEC-01
OBJECT_ID                      11-DEC-01
DATA_OBJECT_ID                 11-DEC-01
OBJECT_TYPE                    11-DEC-01
CREATED                        11-DEC-01
LAST_DDL_TIME                  11-DEC-01
TIMESTAMP                      11-DEC-01
STATUS                         11-DEC-01
TEMPORARY                      11-DEC-01
GENERATED                      11-DEC-01
SECONDARY                      11-DEC-01

13 rows selected.

And now the table, index, and columns have all been analyzed.


As for how to know the default parameters (and how this package works) one would read the documentation on it:
http://docs.oracle.com/docs/cd/A81042_01/DOC/server.816/a76936/dbms_sta.htm#999107
....

 

Follow on questions

Robert Ware, April 18, 2002 - 3:47 pm UTC

Tom 

First, THANK YOU FOR ALL THE INFORMATION SHARING YOU PROVIDE.

1)  Does dbms_stats provide information on chained rows comparable to the analyze command? If so, what do I need to modify in my script below to implement it?

     ---We recently replaced---

RDWARE@ORCL..SQL>BEGIN
  2    FOR s in (select distinct(owner) schema
  3              from dba_tables
  4              where owner not like 'SYS%')
  5    LOOP
  6        dbms_output.put_line('Analyzing the entire '||s.schema||' schema');
  7        dbms_utility.analyze_schema(s.schema,'ESTIMATE',null,20,'for table for all indexed columns size 254');
  8    End Loop;
  9  END;
 10  /
Analyzing the entire LJCAVANO schema
Analyzing the entire OUTLN schema
Analyzing the entire PD6_REPO schema
Analyzing the entire PD8_REPO schema
Analyzing the entire RDWARE schema
Analyzing the entire RPCOCA schema
Analyzing the entire RSPAYNE schema
Analyzing the entire SCOTT schema
Analyzing the entire SDCORP_REPO schema

PL/SQL procedure successfully completed.

Elapsed: 00:01:05.72

           ---with---

RDWARE@ORCL..SQL>BEGIN
  2    FOR s in (select distinct(owner) schema
  3              from dba_tables
  4              where owner not like 'SYS%')
  5    LOOP
  6        dbms_output.put_line('Analyzing the entire '||s.schema||' schema');
  7        dbms_stats.gather_schema_stats(ownname           => s.schema,
  8                                       estimate_percent  => 20,
  9                                       method_opt        => 'FOR ALL COLUMNS SIZE 1',
 10                                       degree            => 8,
 11                                       cascade           => true);
 12    End Loop;
 13  END;
 14  /
Analyzing the entire LJCAVANO schema
Analyzing the entire OUTLN schema
Analyzing the entire PD6_REPO schema
Analyzing the entire PD8_REPO schema
Analyzing the entire RDWARE schema
Analyzing the entire RPCOCA schema
Analyzing the entire RSPAYNE schema
Analyzing the entire SCOTT schema
Analyzing the entire SDCORP_REPO schema

PL/SQL procedure successfully completed.

Elapsed: 00:09:21.68 

Tom Kyte
April 18, 2002 - 9:23 pm UTC

If you do not use parallel (and use size > 1), it'll get the chain count estimated. The parallel stuff seems to disable that.
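
A sketch of that, per the behavior described above (table name assumed; size > 1 and no degree):

exec dbms_stats.gather_table_stats( user, 'T', method_opt => 'for all indexed columns size 254' );

select chain_cnt from user_tables where table_name = 'T';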

DBMS_STATS: I tried and it worked! Made my life easy.. ;)

Yogeeraj, June 29, 2002 - 7:21 am UTC

Hello,

I was trying to analyze a schema in our database, and it seemed like it was not working!!
------------------------------------------------------------
SQL> exec print_table('select * from dba_tables where owner=''CMTSTORE'' and table_name=''STT110''');
OWNER                         : CMTSTORE
TABLE_NAME                    : STT110
TABLESPACE_NAME               : CMTSTORE_LMT_DATA_MEDIUM
CLUSTER_NAME                  :
IOT_NAME                      :
PCT_FREE                      : 10
PCT_USED                      : 40
INI_TRANS                     : 1
MAX_TRANS                     : 255
INITIAL_EXTENT                : 10485760
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
PCT_INCREASE                  : 0
FREELISTS                     : 1
FREELIST_GROUPS               : 1
LOGGING                       : YES
BACKED_UP                     : N
NUM_ROWS                      : 62255
BLOCKS                        : 679
EMPTY_BLOCKS                  : 668
AVG_SPACE                     : 898
CHAIN_CNT                     : 0
AVG_ROW_LEN                   : 73
AVG_SPACE_FREELIST_BLOCKS     : 6942
NUM_FREELIST_BLOCKS           : 5
DEGREE                        :          1
INSTANCES                     :          1
CACHE                         :     N
TABLE_LOCK                    : ENABLED
SAMPLE_SIZE                   : 12451
LAST_ANALYZED                 : 19-apr-2002 10:26:00
PARTITIONED                   : NO
IOT_TYPE                      :
TEMPORARY                     : N
SECONDARY                     : N
NESTED                        : NO
BUFFER_POOL                   : DEFAULT
ROW_MOVEMENT                  : DISABLED
GLOBAL_STATS                  : YES
USER_STATS                    : NO
DURATION                      :
SKIP_CORRUPT                  : DISABLED
MONITORING                    : NO
CLUSTER_OWNER                 :
-----------------

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_schema_stats('cmtstore',method_opt=>'for all indexed columns',cascade=>true,stattab=>'CMTSTORESTATS');

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.create_stat_table('yd','cmtstorestats');

PL/SQL procedure successfully completed.

SQL> exec print_table('select * from dba_tables where owner=''CMTSTORE'' and table_name=''STT110''');
OWNER                         : CMTSTORE
TABLE_NAME                    : STT110
TABLESPACE_NAME               : CMTSTORE_LMT_DATA_MEDIUM
CLUSTER_NAME                  :
IOT_NAME                      :
PCT_FREE                      : 10
PCT_USED                      : 40
INI_TRANS                     : 1
MAX_TRANS                     : 255
INITIAL_EXTENT                : 10485760
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
PCT_INCREASE                  : 0
FREELISTS                     : 1
FREELIST_GROUPS               : 1
LOGGING                       : YES
BACKED_UP                     : N
NUM_ROWS                      : 62255
BLOCKS                        : 679
EMPTY_BLOCKS                  : 668
AVG_SPACE                     : 898
CHAIN_CNT                     : 0
AVG_ROW_LEN                   : 73
AVG_SPACE_FREELIST_BLOCKS     : 6942
NUM_FREELIST_BLOCKS           : 5
DEGREE                        :          1
INSTANCES                     :          1
CACHE                         :     N
TABLE_LOCK                    : ENABLED
SAMPLE_SIZE                   : 12451
LAST_ANALYZED                 : 19-apr-2002 10:26:00
PARTITIONED                   : NO
IOT_TYPE                      :
TEMPORARY                     : N
SECONDARY                     : N
NESTED                        : NO
BUFFER_POOL                   : DEFAULT
ROW_MOVEMENT                  : DISABLED
GLOBAL_STATS                  : YES
USER_STATS                    : NO
DURATION                      :
SKIP_CORRUPT                  : DISABLED
MONITORING                    : NO
CLUSTER_OWNER                 :
-----------------

PL/SQL procedure successfully completed.

SQL>
============================================================

You will note that the last_analyzed column did not change.
  LAST_ANALYZED                 : 19-apr-2002 10:26:00

Also, no records were found in the table which I specified to save the previous statistics:
  SQL> select count(*)
    2  from cmtstorestats;

    COUNT(*)
  ----------
           0
  SQL>
============================================================
Then, as you always recommend, I went back to the Oracle documentation ("Supplied PL/SQL Packages") and found that something was missing -
  "statown  Schema containing stattab (if different than ownname)".

and tried again: (IT WORKED!!)
============================================================
yd@CMTDB.CMT.MU> exec dbms_stats.gather_schema_stats('cmtstore',method_opt=>'for all indexed columns',cascade=>true,stattab=>'CMTSTORESTATS',statown=>'yd');

PL/SQL procedure successfully completed.

Elapsed: 00:00:52.57

yd@CMTDB.CMT.MU> exec print_table('select * from dba_tables where owner=''CMTSTORE'' and table_name=''STT110''');
OWNER                         : CMTSTORE
TABLE_NAME                    : STT110
TABLESPACE_NAME               : CMTSTORE_LMT_DATA_MEDIUM
CLUSTER_NAME                  :
IOT_NAME                      :
PCT_FREE                      : 10
PCT_USED                      : 40
INI_TRANS                     : 1
MAX_TRANS                     : 255
INITIAL_EXTENT                : 10485760
NEXT_EXTENT                   : 1048576
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 2147483645
PCT_INCREASE                  : 0
FREELISTS                     : 1
FREELIST_GROUPS               : 1
LOGGING                       : YES
BACKED_UP                     : N
NUM_ROWS                      : 74026
BLOCKS                        : 807
EMPTY_BLOCKS                  : 472
AVG_SPACE                     : 950
CHAIN_CNT                     : 0
AVG_ROW_LEN                   : 76
AVG_SPACE_FREELIST_BLOCKS     : 7737
NUM_FREELIST_BLOCKS           : 4
DEGREE                        :          1
INSTANCES                     :          1
CACHE                         :     N
TABLE_LOCK                    : ENABLED
SAMPLE_SIZE                   : 74026
LAST_ANALYZED                 : 29-jun-2002 15:21:13
PARTITIONED                   : NO
IOT_TYPE                      :
TEMPORARY                     : N
SECONDARY                     : N
NESTED                        : NO
BUFFER_POOL                   : DEFAULT
ROW_MOVEMENT                  : DISABLED
GLOBAL_STATS                  : NO
USER_STATS                    : NO
DURATION                      :
SKIP_CORRUPT                  : DISABLED
MONITORING                    : NO
CLUSTER_OWNER                 :
-----------------

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.02
yd@CMTDB.CMT.MU>
============================================================

I have two questions:
1. If I set MONITORING on all tables and indexes (??), do I still need to run these steps regularly?
2. If I decide to use only the CBO on my production database, what type of problems (if any) can I anticipate?

Thanks a lot

Best Regards
Yogeeraj.

PS. Since I got your book, I have fewer questions to ask! Thanks. (However, DBMS_STATS is not in the appendices ;) - maybe you don't use it that frequently.)
 

Tom Kyte
June 29, 2002 - 10:28 am UTC

1) Yes.

if you use monitoring on a table, you still use dbms_stats to gather stats -- but only on STALE objects (not everything)

Monitoring on an index is for an entirely different purpose -- it'll tell you if the index was used or not. It is not for statistics purposes.
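
For what it's worth, a sketch of that index-usage monitoring (a 9i feature; t_idx is a hypothetical index):

alter index t_idx monitoring usage;

-- ... run the application workload for a while ...

select index_name, used
  from v$object_usage
 where index_name = 'T_IDX';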

2) None.


I viewed dbms_stats (like backup and recovery) as being 100% a DBA thing. The book is definitely 100% useful to the DBA; however, it was geared towards the developer. At 1,200+ pages -- something had to give (believe me, there is a lot more I left out than I put in).

Thanks a lot for the reply.

Yogeeraj, July 01, 2002 - 4:29 am UTC

thanks for the reply.

BTW, very nice discussion in this link:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1260600445561,%7Bmonitoring%7D

One last Question:
I have a "huge table" with 1.5M records. What will be the best way to update its statistics (a table with lots of inserts/updates, 24x7)? (How do I know if I really need to do it? ;) The current statistics date from when we moved from Oracle 7.2.3 to Oracle 8.1.7.

============================================================
yd@CMTDB.CMT.MU> exec print_table('select * from dba_tables where table_name=''KNT200'' and owner=''SWPROD''');
OWNER                         : SWPROD
TABLE_NAME                    : KNT200
TABLESPACE_NAME               : PFS_TBS_KN
CLUSTER_NAME                  :
IOT_NAME                      :
PCT_FREE                      : 15
PCT_USED                      : 50
INI_TRANS                     : 10
MAX_TRANS                     : 200
INITIAL_EXTENT                : 272629760
NEXT_EXTENT                   : 10485760
MIN_EXTENTS                   : 1
MAX_EXTENTS                   : 240
PCT_INCREASE                  : 0
FREELISTS                     : 1
FREELIST_GROUPS               : 1
LOGGING                       : YES
BACKED_UP                     : N
NUM_ROWS                      : 1319110
BLOCKS                        : 17160
EMPTY_BLOCKS                  : 16119
AVG_SPACE                     : 1226
CHAIN_CNT                     : 0
AVG_ROW_LEN                   : 85
AVG_SPACE_FREELIST_BLOCKS     : 0
NUM_FREELIST_BLOCKS           : 0
DEGREE                        : 1
INSTANCES                     : 1
CACHE                         : N
TABLE_LOCK                    : ENABLED
SAMPLE_SIZE                   : 1054
LAST_ANALYZED                 : 16-dec-2001 10:36:59
PARTITIONED                   : NO
IOT_TYPE                      :
TEMPORARY                     : N
SECONDARY                     : N
NESTED                        : NO
BUFFER_POOL                   : DEFAULT
ROW_MOVEMENT                  : DISABLED
GLOBAL_STATS                  : NO
USER_STATS                    : NO
DURATION                      :
SKIP_CORRUPT                  : DISABLED
MONITORING                    : NO
CLUSTER_OWNER                 :
-----------------

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03

yd@CMTDB.CMT.MU> select count(*) from knt200;

COUNT(*)
----------
1540611

Elapsed: 00:00:09.33

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     INDEX (FAST FULL SCAN) OF 'IND_KNT200_1' (NON-UNIQUE) (Cost=2 Card=1319110)

Statistics
----------------------------------------------------------
          0  recursive calls
          4  db block gets
       5434  consistent gets
       5432  physical reads
          0  redo size
        298  bytes sent via SQL*Net to client
        359  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed
============================================================

Thank you again for you time.
Best Regards
Yogeeraj

Tom Kyte
July 01, 2002 - 7:19 am UTC

well, 1.5 million records with an average row length of 85 bytes is not very big at all.

But, you would alter table T monitoring and let dbms_stats tell you when to do it. I would analyze this table once (fully) and then let monitoring take it from there.

You are only talking about 150 meg or so of data here, pretty small.
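
A minimal sketch of that sequence, using the table from the question:

alter table knt200 monitoring;

-- the one-time full gather
exec dbms_stats.gather_table_stats( user, 'KNT200', cascade => true );

-- thereafter, periodically, gather only what monitoring flags as stale
exec dbms_stats.gather_schema_stats( user, options => 'GATHER STALE', cascade => true );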

bright

Yogeeraj, July 01, 2002 - 8:46 am UTC

hello,
thank you for the precious guidelines.

(We always thought that our 16 GB database and 2M record/200MB tables could be classified as large databases/tables ;))

One little confusion about "monitoring"
you said:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1260600445561,%7Bmonitoring%7D
<quote>
...since you know the table is undergoing lots of inserts during the day, monitoring is not really needed - you know for a fact the stats are going stale frequently and need to be re-gathered.
</quote>

So should I really do an "alter table knt200 monitoring;"?

Also, is it true that a "table" is locked during the whole analysis operation when using DBMS_STATS? - a problem for "online tables".

How can I know how often I need to run DBMS_STATS.GATHER_SCHEMA_STATS with the GATHER STALE option?

thank you again for your precious time and all the coaching you give us here!

Best Regards
Yogeeraj


Tom Kyte
July 01, 2002 - 8:52 am UTC

You can either gather stats every night (if about 10% of the table will change -- that's the threshold monitoring will use) or monitor it and gather stale. It is ultimately up to you. I might monitor and then see how often we do gather stats -- to see if my thoughts about how fast this table changes were in fact correct.

No, a table is not locked during an analyze or dbms_stats.

Watch the user_tab_modifications view. It'll show you how rapidly things are changing. How often you run this will be a function of:

a) how often you WANT to run it
b) how often you CAN run it
c) how often you should run it (based on how fast the data changes)
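
For example, to eyeball how rapidly things are changing:

select table_name, inserts, updates, deletes, timestamp
  from user_tab_modifications
 order by timestamp;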

DBMS_STATS

Suresh, July 01, 2002 - 10:32 am UTC

Tom,
This is with reference to the review posted by
Robert Ware, St. Louis. In that, you mentioned DBMS_STATS will be able to gather statistics on chained rows if we use 'size > 1'.

In the URL I have mentioned below, you said the following.

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4347359891525,%7Bsuresh%7D

"DBMS_STATS include the number of rows, number of blocks currently containing data, and average row length but not the number of chained rows, average free space, or number of unused data blocks."

Could you please let me know which of the statements is correct, or have I misunderstood?

Thanks,
Suresh

Tom Kyte
July 01, 2002 - 10:41 am UTC

When you do size > 1, dbms_stats seems to use analyze instead of doing it itself.

I would not count on that always being the case, but that's the way it is right now.

Reader

A reader, March 25, 2003 - 12:38 am UTC

How does Oracle automatically determine the optimal bucket size based on the data distribution and the application workload on the columns (how does Oracle calculate the workload?) when

method_opt => 'for all columns size AUTO' is used?

Thanks

Using dbms_stats to transport statistics for performance tuning

Wolfgang Breitling, March 25, 2003 - 10:23 am UTC

I use that all the time. I take schema exports of a system (export rows=n) and build an empty tuning sandbox. Then I export the statistics from the production database (dbms_stats.export_xxx_stats followed by an export of the stattab table) and then import the statistics into the sandbox. Then I can do analysis of sql in the sandbox. Given the same input parameters the optimizer will come up with the same access plan - even across OS platforms. Knowing what is wrong with the original plan I can try various things, sometimes with the aid of a 10053 trace, to get the optimizer to choose a different plan. At that point I have to go back to the production system to try out the modification to see if that new plan is actually performing better.
That seems like a lot of work for potentially little gain, but I am a consultant, so this setup allows me to do some of the sql analysis work off-site.

As a note on the claims that dbms_stats is slower than analyze: I have not done any stringent comparisons, but in my first experiences with dbms_stats I had the distinct impression that it was faster than analyze. I am only using dbms_stats to gather statistics, but then I do not constantly analyze tables either.

How can I gather the statistics only into the "create stat table"?

biti_rainy, May 21, 2003 - 10:41 pm UTC

SQL> begin
  2  DBMS_STATS.GATHER_SCHEMA_STATS (
  3     ownname => user,
  4     method_opt  => 'for all indexed columns',
  5     cascade     => true,
  6     stattab     => 'BILLY_STAT', 
  7     statid      => '2003-05-22-01',
  8     statown     => user);
  9  end;
 10  /

PL/SQL procedure successfully completed.


SQL> select count(*) from billy_stat;

  COUNT(*)
----------
         0


There is no data in billy_stat, but the statistics are stored in the dictionary tables.

If I want the statistics stored only in billy_stat, how do I do that?
 

Tom Kyte
May 23, 2003 - 7:40 am UTC

you don't.

you would

o export the stats
o gather them, saving in billy_stat
o import the original ones


(but one would question WHY you wouldn't want them put into the dictionary since they would be more current/accurate)
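
One way to spell that recipe out as concrete calls (the statids here are made-up labels):

exec dbms_stats.export_schema_stats( user, stattab => 'BILLY_STAT', statid => 'ORIGINALS' );

exec dbms_stats.gather_schema_stats( user );

exec dbms_stats.export_schema_stats( user, stattab => 'BILLY_STAT', statid => 'NEW_STATS' );

exec dbms_stats.import_schema_stats( user, stattab => 'BILLY_STAT', statid => 'ORIGINALS' );

After the last step, the dictionary is back to the original statistics and billy_stat holds both sets.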

frequence of gathering statistics

A reader, August 06, 2003 - 2:52 pm UTC

Tom,
The impression I get from you and other articles I have read, is that current statistics are a good thing.

"And if you don't keep your statistics current, the cost-based optimizer may not function effectively for a particular query."
https://asktom.oracle.com/Misc/oramag/back-to-basics-v3.html

And your statement: "but one would question WHY you wouldn't want them put into the dictionary since they would be more current/accurate".

But then I recently read an article by Donald K. Burleson where he states:
"One of the most common mistakes made by Oracle DBAs is to frequently re-analyze the schema. Remember, the sole purpose of doing that is to change the execution plans for your SQL, and if it ain't broke, don't fix it. If you are satisfied with your current SQL performance, re-analyzing 
a schema could cause significant performance problems and undo the tuning efforts of the development staff. In practice, very few shops are sufficiently dynamic to require periodic schema re-analysis."
http://technet.oracle.com/oramag/webcolumns/2003/techarticles/burleson_cbo_pt1_pt2.html <code>

So, I am wondering what your take is on what Mr. Burleson says.

Thanks.


Tom Kyte
August 09, 2003 - 12:19 pm UTC

I disagree... I would say you might consider EXPORTING your statistics frequently (nice to keep the history as a BONUS) so that in the event something does "go bad" -- you can get back to yesterday quickly.

But having current statistics is extremely relevant for even the most trivial of applications -- consider a range scan on time-sensitive data (every day you add more records; the CBO -- which hasn't seen a new stat for 6 months -- assumes there is NO DATA for the last 6 months).


Alter table monitoring
Export old
Gather stale


that way you get the benefits of any advances in performance, coupled with a good and immediate fallback plan. You get a very good history to boot (hey, how did my data change in size/skew over the last 6 months? 12 months? 2 years?)
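
A sketch of that export-for-history idea (the stat table name and statid scheme are made up):

-- one-time setup
exec dbms_stats.create_stat_table( user, 'STATS_HISTORY' );

-- nightly: snapshot the current stats, then refresh the stale ones
exec dbms_stats.export_schema_stats( user, stattab => 'STATS_HISTORY', statid => 'D' || to_char(sysdate,'YYYYMMDD') );
exec dbms_stats.gather_schema_stats( user, options => 'GATHER STALE' );

-- fallback: restore a prior day's statistics
exec dbms_stats.import_schema_stats( user, stattab => 'STATS_HISTORY', statid => 'D20030808' );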


thanks Tom - makes sense

A reader, August 11, 2003 - 8:06 am UTC


method_opt definition from 8.1.7 documentation

A reader, September 11, 2003 - 9:25 am UTC

Hi

I came across this sentence in the 8.1.7 documentation talking about DBMS_STATS:

method_opt:
Method options of the following format (the phrase 'SIZE 1' is required to ensure gathering statistics in parallel and for use with the phrase hidden)

I don't understand what this is talking about at all. Is it saying that if we leave method_opt NULL we cannot gather statistics in parallel...??!!


Tom Kyte
September 11, 2003 - 6:00 pm UTC

big_table@ORA920> alter session set events '10046 trace name context forever, level 12';

Session altered.

big_table@ORA920> begin
  2    dbms_stats.gather_table_stats
  3    ( ownname    => user,
  4      tabname    => 'BIG_TABLE',
  5      method_opt => NULL,
  6      degree     => 4 );
  7  end;
  8  /


and tkprof shows....


select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring parallel(t,
4) parallel_index(t,4)
*/ count(*),sum(vsize("ID")),substrb(dump(min("ID")
,16,0,32),1,120),substrb(dump(max("ID"),16,0,32),1,120),count(distinct
"OWNER"),sum(vsize("OWNER")),substrb(dump(min("OWNER"),16,0,32),1,120),
substrb(dump(max("OWNER"),16,0,32),1,120),count(distinct "OBJECT_NAME"),
sum(vsize("OBJECT_NAME")),substrb(dump(min("OBJECT_NAME"),16,0,32),1,120),
substrb(dump(max("OBJECT_NAME"),16,0,32),1,120),count("SUBOBJECT_NAME"),
count(distinct "SUBOBJECT_NAME"),sum(vsize("SUBOBJECT_NAME")),
substrb(dump(min("SUBOBJECT_NAME"),16,0,32),1,120),
substrb(dump(max("SUBOBJECT_NAME"),16,0,32),1,120),count(distinct
"OBJECT_ID"),sum(vsize("OBJECT_ID")),substrb(dump(min("OBJECT_ID"),16,0,32),
1,120),substrb(dump(max("OBJECT_ID"),16,0,32),1,120),count("DATA_OBJECT_ID")
,count(distinct "DATA_OBJECT_ID"),sum(vsize("DATA_OBJECT_ID")),
substrb(dump(min("DATA_OBJECT_ID"),16,0,32),1,120),
substrb(dump(max("DATA_OBJECT_ID"),16,0,32),1,120),count("OBJECT_TYPE"),
count(distinct "OBJECT_TYPE"),sum(vsize("OBJECT_TYPE")),
substrb(dump(min("OBJECT_TYPE"),16,0,32),1,120),
substrb(dump(max("OBJECT_TYPE"),16,0,32),1,120),count(distinct "CREATED"),
substrb(dump(min("CREATED"),16,0,32),1,120),substrb(dump(max("CREATED"),16,
0,32),1,120),count(distinct "LAST_DDL_TIME"),
substrb(dump(min("LAST_DDL_TIME"),16,0,32),1,120),
substrb(dump(max("LAST_DDL_TIME"),16,0,32),1,120),count("TIMESTAMP"),
count(distinct "TIMESTAMP"),sum(vsize("TIMESTAMP")),
substrb(dump(min("TIMESTAMP"),16,0,32),1,120),substrb(dump(max("TIMESTAMP"),
16,0,32),1,120),count("STATUS"),count(distinct "STATUS"),sum(vsize("STATUS")
),substrb(dump(min("STATUS"),16,0,32),1,120),substrb(dump(max("STATUS"),16,
0,32),1,120),count("TEMPORARY"),count(distinct "TEMPORARY"),
sum(vsize("TEMPORARY")),substrb(dump(min("TEMPORARY"),16,0,32),1,120),
substrb(dump(max("TEMPORARY"),16,0,32),1,120),count("GENERATED"),
count(distinct "GENERATED"),sum(vsize("GENERATED")),
substrb(dump(min("GENERATED"),16,0,32),1,120),substrb(dump(max("GENERATED"),
16,0,32),1,120),count("SECONDARY"),count(distinct "SECONDARY"),
sum(vsize("SECONDARY")),substrb(dump(min("SECONDARY"),16,0,32),1,120),
substrb(dump(max("SECONDARY"),16,0,32),1,120)
from
"BIG_TABLE"."BIG_TABLE" t




It is that if you gather histograms in 8i and want parallel, you would use size 1. Setting method_opt to null bypasses histograms entirely.

am slightly confused

Sravan, September 18, 2003 - 5:24 pm UTC

Tom,
I deleted the statistics on a schema.
Ran a job which gathered statistics on that schema and took 13 hours.
I tried to run it in parallel, by specifying the degree. It took 16 hours.
(Please note that I am not specifying the "method_opt" option.)

Why is the parallel job taking longer? Am I missing something obvious here, or did I forget to include or do some other prep work?

First Job definition
=====================
DECLARE
   v_jobno number;
BEGIN
   DBMS_JOB.SUBMIT(v_jobno,
      'DBMS_STATS.GATHER_SCHEMA_STATS(''SUPPORT'',
       ESTIMATE_PERCENT => 35,
       CASCADE => TRUE);');
   COMMIT;
END;
/
======================================

2nd Job definition with Degree specified(there are 2 CPU's on the unix box)
========================================
DECLARE
   v_jobno number;
BEGIN
   DBMS_JOB.SUBMIT(v_jobno,
      'DBMS_STATS.GATHER_SCHEMA_STATS(''SUPPORT'',
       ESTIMATE_PERCENT => 35,
       CASCADE => TRUE,
       DEGREE => 3);');
   COMMIT;
END;
/
================================

Tom Kyte
September 18, 2003 - 5:58 pm UTC

other things on the box contending with the parallel jobs...

all of the data being on too few disks (so the parallel jobs contend with each other)

parallel has overheads - if the schema has lots of little things, then the overhead of parallel is greater than any benefit you might get.


Use statspack to see what things are "waiting for" and what's going on.

thanks

sravan, September 23, 2003 - 10:39 am UTC

Tom,
will look into it.

dbms_stats

K.venkat, September 25, 2003 - 8:05 am UTC

Tom,

Using dbms_stats, to get an index analyzed, should we use the cascade option? When should one go for index analysis?

Thanks,
Venkat

Tom Kyte
September 25, 2003 - 8:43 am UTC

cascade => true is one method.

You might do individual indexes if you wanted:

o to do them simultaneously
o to gather different levels of stats for different indexes
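
And for the individual index case, a one-liner sketch (the index name is assumed):

exec dbms_stats.gather_index_stats( user, 'T_IDX', estimate_percent => 20 );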

Table lock ....

reader, September 25, 2003 - 6:11 pm UTC

When I use the dbms_stats package to collect statistics on a table, does it lock the table like the analyze table command does? Thanks.

Tom Kyte
September 25, 2003 - 11:35 pm UTC

no but then neither does analyze (unless you are doing validate structure and then you can use validate online in 9i)



Validate structure

Reader, September 27, 2003 - 11:43 am UTC

Is there an equivalent procedure in dbms_stats to validate structure online? Thanks. I am trying my best to digest enormous information that you have provided in your new book Effective Oracle by Design, that is simply superb.

Tom Kyte
September 27, 2003 - 1:05 pm UTC

analyze is slated to do this and this alone at some time in the future. analyze is destined to be the "validate" thing -- analyze validate structure, analyze list chained rows, etc.

dbms_stats will stick with stats and stats only.

Table lock ? Commit ?

A reader, September 28, 2003 - 12:42 am UTC

If there is no table lock, how come we cannot compile a proc using that table when an analyze is in process using dbms_stats?
Also, I see dbms_stats issues commits automatically. Correct?

Tom Kyte
September 28, 2003 - 9:23 am UTC

it doesn't work that way for me.

I just kicked off a dbms_stats.gather_table_stats on a big_table

I then proceeded to create and compile procedures that access that table

no problem.


as dbms_stats updates the data dictionary, it'll commit in order to make those changes visible and so as to not lock up the dictionary entries.

Table Lock ,commit ?

A reader, September 29, 2003 - 10:08 pm UTC

You said
Followup:
no but then neither does analyze (unless you are doing validate structure
and then you can use validate online in 9i).

Are you sure? Because it does lock the table; otherwise, why would the proc wait to compile while the analyze is going? I know this may not be the correct way, just trying to understand the behaviour of analyze and dbms_stats. Yours is the correct way.




Tom Kyte
September 30, 2003 - 7:02 am UTC


it *doesn't* block the procedure.

Ok, so I write a shell script:


#!/bin/csh -vx

echo analyze table big_table compute statistics > analyze.sql
echo / >> analyze.sql
echo exit >> analyze.sql

sqlplus big_table/big_table @analyze &

sleep 5

sqlplus big_table/big_table <<EOF
@showsql
create or replace procedure p
as
begin
for x in ( select * from big_table )
loop
null;
end loop;
end;
/

exit
EOF



that'll kick off an analyze, give it a couple of seconds to start... then it creates a procedure in another session that references the table being analyzed. The output of that was:

$ ./test.sh


echo analyze table big_table compute statistics > analyze.sql
echo analyze table big_table compute statistics
echo / >> analyze.sql
echo /
echo exit >> analyze.sql
echo exit

sqlplus big_table/big_table @analyze &
[1] 28005

sqlplus big_table/big_table @analyze
sleep 5
sleep 5

SQL*Plus: Release 8.1.7.0.0 - Production on Tue Sep 30 06:57:40 2003

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production


sqlplus big_table/big_table << EOF
sqlplus big_table/big_table

SQL*Plus: Release 8.1.7.0.0 - Production on Tue Sep 30 06:57:45 2003

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production

big_table@ORA817DEV>
USERNAME SID_SERIAL STATUS MODULE LAST_CALL_ET
--------------- --------------- ---------- --------------- ------
BIG_TABLE '8,30332' ACTIVE 01@ analyze.sql 4
BIG_TABLE '9,231' ACTIVE 01@ showsql.sql 0

2 rows selected.

that shows the analyze is running and has been for about 4 seconds...

--------------------
BIG_TABLE(8,30332) ospid = 28005 command = 62 program = sqlplus@aria-dev (TNS V1-V3) dedicated server=28007
Tuesday 06:57 Tuesday 06:57 last et = 4
analyze table big_table compute statistics

big_table@ORA817DEV> 2 3 4 5 6 7 8 9
Procedure created.

and at the same time -- we created a procedure.

big_table@ORA817DEV> big_table@ORA817DEV> Disconnected from Oracle8i Enterprise Edition Release 8.1.7.4.0 - Production
With the Partitioning option
JServer Release 8.1.7.4.0 - Production



Monitoring Threshold

Matt, November 07, 2003 - 3:52 am UTC

"You can either gather stats every night (if about 10% of the table will change
-- thats the threshold monitoring will use) or monitor it and gather stale."

Our tests seem to show that monitoring records *all* changes to the table - regardless of the 10% threshold - and that dbms_stats gather stale filters on the 10%.

This is on 8.1.7.4 -- does the behaviour change with 9i?

Tom Kyte
November 07, 2003 - 9:10 am UTC

yes, monitor monitors all changes and we only gather stats when it hits a threshold. I'm not sure what you mean by this tho? the threshold that triggers that gather and resets the monitoring is about 10% change.

oops... the proof

Matt, November 07, 2003 - 4:46 am UTC

SQL> select count(*) from stat_test_table
  2  /

  COUNT(*)
----------
      1000

1 row selected.

SQL>
SQL> select num_rows,to_char(last_analyzed,'DD-MON-RRRR HH24:MI:SS') from all_tables
  2  where table_name = 'STAT_TEST_TABLE'
  3  /

  NUM_ROWS TO_CHAR(LAST_ANALYZE
---------- --------------------
      1000 07-NOV-2003 08:59:45

1 row selected.

SQL>
SQL> select table_name, inserts, updates,deletes from all_tab_modifications
  2  /

no rows selected

SQL>
SQL> insert
  2  into stat_test_table
  3  select rownum+1000,
  4             owner,
  5             object_name,
  6             object_type,
  7             1,
  8             1
  9  from all_objects
 10  where rownum <=5
 11  /

5 rows created.

SQL>
SQL> commit;

Commit complete.

--
-- Force the refresh of all_tab_modifications by restarting instance
-- 

SVRMGR> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SVRMGR> startup
ORACLE instance started.
Total System Global Area                        127309948 bytes
Fixed Size                                         116860 bytes
Variable Size                                    94253056 bytes
Database Buffers                                 32768000 bytes
Redo Buffers                                       172032 bytes
Database mounted.
Database opened.

--
-- Reconnect
--

SQL> conn matt/matt
Connected.

SQL> select table_name, inserts, updates,deletes from all_tab_modifications
/  2

TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ---------- ---------- ----------
STAT_TEST_TABLE                         5          0          0


begin
dbms_stats.gather_schema_stats ('MATT',  options => 'GATHER STALE');
end;

SQL> select table_name, inserts, updates,deletes from all_tab_modifications
/  2

TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ---------- ---------- ----------
STAT_TEST_TABLE                         5          0          0


--
-- Now Insert Higher Than 10%
-- 

 1  insert
  2  into stat_test_table
  3  select rownum+1200,
  4             owner,
  5              object_name,
  6              object_type,
  7                 1,
  8                 1
  9      from all_objects
 10*    where rownum <=110
SQL> /

110 rows created.

SQL> commit;

Commit complete.

SQL> select count(*) from stat_test_table;

  COUNT(*)
----------
      1115

1 row selected.

--
-- BOUNCE THE INSTANCE
--

  1*  select table_name, inserts, updates,deletes from all_tab_modifications
SQL> /

TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ---------- ---------- ----------
STAT_TEST_TABLE                       115          0          0

1 row selected.

SQL> l
  1  begin
  2  dbms_stats.gather_schema_stats ('MATT',  options => 'GATHER STALE');
  3  end;
  4*
SQL> /

PL/SQL procedure successfully completed.

SQL> select table_name, inserts, updates,deletes from all_tab_modifications
  2  /

no rows selected

1* select num_rows,to_char(last_analyzed,'DD-MON-RRRR HH24:MI:SS') from all_tables
SQL> 2 where table_name = 'STAT_TEST_TABLE'

SQL> /

  NUM_ROWS TO_CHAR(LAST_ANALYZE
---------- --------------------
      1115 07-NOV-2003 09:46:01

1 row selected. 

Tom Kyte
November 07, 2003 - 9:34 am UTC

yes, that is the way it works -- dbms_stats uses the 10% threshold and resets the monitoring statistics.

'GATHER STALE' is not working!?

harri, November 24, 2003 - 6:35 am UTC

Hi!

I have this procedure:

SQL> select text from dba_source where name='HK_ANALYZE';

TEXT
--------------------------------------------------------------------------------
procedure hk_analyze
(v_schema in varchar2,
 v_estimate_pct in number default 0,
 v_degree in number default 1)
as
 a dbms_stats.objecttab;
 start_time varchar(30);
 stop_time varchar(30);

begin
  select TO_CHAR(sysdate, 'DD-MON-YYYY HH24:MI:SS') into start_time from dual;
     dbms_stats.gather_schema_stats
          (OWNNAME => v_schema,
           ESTIMATE_PERCENT => v_estimate_pct,
           CASCADE => TRUE,
           GRANULARITY => 'ALL',
           OPTIONS=>'GATHER EMPTY',
           DEGREE => v_degree,
           OBJLIST=>a);
dbms_output.put_line('1. kerran analysoidut');
    for i in 1 .. a.count
    loop
      dbms_output.put_line( a(i).ownname || '.' || a(i).objName || ' (' || a(i).objType || ')' );


      end loop;
dbms_output.put_line('-----');
     dbms_stats.gather_schema_stats
          (OWNNAME => v_schema,
           ESTIMATE_PERCENT => v_estimate_pct,
           CASCADE => TRUE,
           GRANULARITY => 'ALL',
           OPTIONS=>'GATHER STALE',
           DEGREE => v_degree,
           OBJLIST=>a);
dbms_output.put_line('Analysoitu, koska vähintään 10% muuttunut:');
    for i in 1 .. a.count
 loop
      dbms_output.put_line( a(i).ownname || '.' || a(i).objName || ' (' || a(i).objType || ')' );


      end loop;
select TO_CHAR(sysdate, 'DD-MON-YYYY HH24:MI:SS') into stop_time from dual;
dbms_output.put_line('-----');
DBMS_OUTPUT.PUT_LINE('Analysointi alkoi: ' || start_time);
DBMS_OUTPUT.PUT_LINE('Analysointi loppui: ' || stop_time);
end hk_analyze;

45 rows selected.

******

SQL> create table MYYNTI.HK_TESTI as select * from dba_objects where rownum<1000;

Table created.

SQL> exec hk_analyze('MYYNTI', 99,1)
1. kerran analysoidut:
"MYYNTI"."HK_TESTI" (TABLE)
-----
Analysoitu, koska vähintään 10% muuttunut:
-----
Analysointi alkoi: 24-NOV-2003 13:13:14
Analysointi loppui: 24-NOV-2003 13:13:15

PL/SQL procedure successfully completed.

...this is ok. Table is analyzed first time.


SQL> exec hk_analyze('MYYNTI', 99,1)
1. kerran analysoidut:
-----
Analysoitu, koska vähintään 10% muuttunut:
-----
Analysointi alkoi: 24-NOV-2003 13:13:21
Analysointi loppui: 24-NOV-2003 13:13:21

PL/SQL procedure successfully completed.

...and this is ok. No inserts or updates to that table. But then the gather stale part of the procedure is not working:


SQL> insert into MYYNTI.HK_TESTI select * from dba_objects where rownum<110;

109 rows created.

SQL> commit;

Commit complete.

SQL> exec hk_analyze('MYYNTI', 99,1)
1. kerran analysoidut:
-----
Analysoitu, koska vähintään 10% muuttunut:
-----
Analysointi alkoi: 24-NOV-2003 13:13:54
Analysointi loppui: 24-NOV-2003 13:13:55

PL/SQL procedure successfully completed.

SQL> insert into MYYNTI.HK_TESTI select * from dba_objects where rownum<110;

109 rows created.

SQL> commit;

Commit complete.

SQL> exec hk_analyze('MYYNTI', 99,1)
1. kerran analysoidut:
-----
Analysoitu, koska vähintään 10% muuttunut:
-----
Analysointi alkoi: 24-NOV-2003 13:14:11
Analysointi loppui: 24-NOV-2003 13:14:11

PL/SQL procedure successfully completed.

SQL> delete from MYYNTI.HK_TESTI where rownum<400;

399 rows deleted.

SQL> exec hk_analyze('MYYNTI', 99,1)
1. kerran analysoidut:
-----
Analysoitu, koska vähintään 10% muuttunut:
-----
Analysointi alkoi: 24-NOV-2003 13:14:41
Analysointi loppui: 24-NOV-2003 13:14:41

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.

SQL> exec hk_analyze('MYYNTI', 99,1)
1. kerran analysoidut:
-----
Analysoitu, koska vähintään 10% muuttunut:
-----
Analysointi alkoi: 24-NOV-2003 13:14:48
Analysointi loppui: 24-NOV-2003 13:14:48

PL/SQL procedure successfully completed.

SQL> update MYYNTI.HK_TESTI set owner='TESTI' where owner='SYS';

818 rows updated.

SQL> commit;

Commit complete.

SQL> exec hk_analyze('MYYNTI', 99,1)
1. kerran analysoidut:
-----
Analysoitu, koska vähintään 10% muuttunut:
-----
Analysointi alkoi: 24-NOV-2003 13:15:36
Analysointi loppui: 24-NOV-2003 13:15:36

PL/SQL procedure successfully completed.

SQL>


******

Translations of the strings:
"Analysoitu, koska vähintään 10% muuttunut:" =
analyzed because at least 10% has changed

"1. kerran analysoidut" = analyzed for the first time

****


So, what is wrong here?

-Harri

Tom Kyte
November 24, 2003 - 8:52 am UTC

Depending on your version (and you do not seem to mention one), the stats info is flushed every 3 hours or during a normal shutdown. If you have current releases, this is not the case.

Also -- you never set the table to monitoring?? "alter table T monitoring;" -- else it will never be stale.



I tested in 9i with monitoring and it worked fine.

GATHER-problem

HK, November 24, 2003 - 2:27 pm UTC

The version of that db is 8.1.7.4.0.

I didn't know about that "flush" thing. Now, when the flush is done, it analyzes that table:

SQL> exec hk_analyze('MYYNTI', 30,2)
1. kerran analysoidut:
-----
Analysoitu, koska yli 10% muuttunut:
"MYYNTI"."HK_TESTI" (TABLE)
-----
Analysointi alkoi: 24-NOV-2003 21:21:09
Analysointi loppui: 24-NOV-2003 21:21:13

*****

Thanks for Your information!!


Originally I developed the procedure in a 9i db. In the procedure there I have the line:
dbms_stats.ALTER_SCHEMA_TAB_MONITORING(v_schema, TRUE);


That's not working in 8i. So, now I added lines:

begin
  for asd in no_monitor loop
   begin
     execute immediate ('alter table '|| v_schema || '.' || asd.table_name || ' monitoring');
   exception
     when e_busy then
       DBMS_OUTPUT.PUT_LINE('Setting monitoring on table' || v_schema || '.' || asd.table_name || '  was unsuccesfull!');
   end;
  end loop;


...and it seems to work fine. What do you think -- is that a good solution?

Tom Kyte
November 24, 2003 - 3:06 pm UTC

Yup, that's perfect (the execute immediate).

move statistics between users

A reader, December 08, 2003 - 1:41 pm UTC

Hi

we have 2 schemas in our database, one is development and the other query testing.

We analyze the development schema once every 2 days; it takes around 8 hours. How can we move these statistics from the development schema to the query testing schema?

with dbms_stats.get/set or dbms_stats.export/import?

What is the difference?

Tom Kyte
December 09, 2003 - 5:34 am UTC

dbms_stats export and import.

Export and import copy stats from place to place.

Set lets you manually "set" them to whatever values you want.
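
A sketch of the export/import mechanics ('DEV' and 'STATS' are placeholder names; note the stats in the stat table are recorded against the schema they were exported from, so this round trip targets the same schema name):

exec dbms_stats.create_stat_table( user, 'STATS' );

exec dbms_stats.export_schema_stats( 'DEV', stattab => 'STATS', statown => user );

-- carry the STATS table to the other database via exp/imp if needed, then:
exec dbms_stats.import_schema_stats( 'DEV', stattab => 'STATS', statown => user );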

gather_table_stats & gather stale option

Craig, February 06, 2004 - 2:46 pm UTC

Does the gather stale option work only with gather_schema_stats, or could you gather stale for a single table using gather_table_stats?

All the examples and the docs seem to imply gather stale works only with gather_schema_stats.

Tom Kyte
February 07, 2004 - 1:48 pm UTC

gather_schema|database_stats actually.


If you want to use gather_table_stats only on stale tables (presumably because you want different method_opts or something), you would use gather_schema_stats with LIST STALE to get a list of stale objects that you could then run gather_table_stats on, as in the sketch below.
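A minimal sketch of that approach -- LIST STALE only reports what is stale, then you gather each stale table yourself; the method_opt shown is just a placeholder for whatever custom options you want:

declare
    l_stale  dbms_stats.objecttab;
begin
    -- LIST STALE gathers nothing, it just fills l_stale
    dbms_stats.gather_schema_stats( ownname => user,
                                    options => 'LIST STALE',
                                    objlist => l_stale );
    for i in 1 .. l_stale.count loop
        if l_stale(i).objtype = 'TABLE' then
            dbms_stats.gather_table_stats( ownname    => l_stale(i).ownname,
                                           tabname    => l_stale(i).objname,
                                           method_opt => 'for all indexed columns' );
        end if;
    end loop;
end;
/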

Results of a Trace of Gather_Table_Stats for Global Statistics

Doug Wingate, February 09, 2004 - 6:45 pm UTC

I've skimmed through this thread and, although I think my question is germane to the topic, I didn't find that it had already been discussed. If I'm mistaken, please just tell me where I can read your prior response. Thanks.

In the course of my work today, I had occasion to execute this call to Gather_Table_Stats to refresh the global statistics on a partitioned data warehouse fact table. I confess that the database is at level 8.1.7.3. (We're working on upgrading to level 9.2.)

EXECUTE DBMS_STATS.GATHER_TABLE_STATS (-
ownname => 'WAREHOUSE',-
tabname => 'CHARGES',-
partname => NULL,-
estimate_percent => 3,-
block_sample => TRUE,-
method_opt => 'FOR ALL INDEXED COLUMNS SIZE 75',-
degree => 4,-
granularity => 'GLOBAL',-
cascade => TRUE,-
stattab => NULL,-
statid => NULL,-
statown => NULL);

I traced the execution of the procedure and looked at the trace file to see what I could learn about how Gather_Table_Stats does its work of gathering global statistics. I found this ANALYZE statement buried deep in the trace file.

analyze table "WAREHOUSE"."CHARGES"
ESTIMATE statistics sample 3 percent
FOR TABLE
FOR ALL INDEXES
FOR ALL INDEXED COLUMNS
SIZE 75

Setting aside, if possible, the matter of the index statistics also gathered here, are 'global statistics' really merely the same as the statistics one can gather with 'ANALYZE TABLE...ESTIMATE STATISTICS...FOR TABLE;'? I mean, if you or someone else wishes to say that Gather_Table_Stats guides and simplifies the writing of ANALYZE statements by correctly writing ANALYZE statements on users' behalf, in response to simple parameters that allow the user to clearly express his intention, rather than leaving the user to flounder and perhaps write the statement incorrectly, I agree. But is there any more to it than that even in the case of global statistics?

Thanks.

Tom Kyte
February 09, 2004 - 8:44 pm UTC

In 8i, dbms_stats many times dropped down to "analyze" (serial)

In 9i, it does so infinitely less often.

So, you are just seeing the artifacts of an implementation over time. In 8i, anything with the size > 1 wasn't parallelized (done with analyze). Later, when done with sql, it could be.

Very Helpful

Doug Wingate, February 10, 2004 - 2:25 pm UTC

Thanks. It hadn't occurred to me until now that to the extent Gather_Table_Stats uses SELECT statements to gather statistics, its execution plans can be optimized just like any other SELECT statement's plan can be optimized.

global statistics

Marvin, May 06, 2004 - 5:33 am UTC

Hi

I am using dbms_stats for an OLTP database with hash and range partitioned tables. I use the gather stale option, and this leads me to some doubts. When I run dbms_stats I gather statistics only for those partitions which have undergone more than 10% DML (as monitoring suggests), so my GLOBAL_STATS are never updated -- can this lead to problems? The other option is to set granularity to 'ALL' (I have subpartitions too), but this is not very helpful: by setting it to ALL I am forced to analyze all partitions again, which makes gather stale useless (gathering not only modified objects but ALL of them).

Is this how dbms_stats works? Is there any way to do cumulative global stats?

Tom Kyte
May 06, 2004 - 7:41 am UTC

the only way to get global stats is to look at all partitions. The reason is:

if column X has 10 distinct values in each of P1, P2, ... P10 -- does column x have

a) 10 distinct values in the table..
b) 100 distinct values
c) some number between 10 and 100

the answer is any of the above -- you have to look at them all.

So, you can use dbms_stats to LIST STALE and gather global stats on all tables with stale partitions.

partition stats are used by the optimizer when a query is known to evaluate to a FIXED partition. Eg: select * from partitioned_tab where part_key = 55 would use partition stats since the optimizer would know at optimize time which partition it will hit. However, "select * from partitioned_tab where part_key = :x" would not since the partition is not known until runtime (bind time). That does not mean "don't use binds", that does mean "gather global stats"
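So after gathering the stale partitions, refreshing the global numbers might look like this minimal sketch (the table name and sample size are made up):

exec dbms_stats.gather_table_stats( user, 'PARTITIONED_TAB', granularity => 'GLOBAL', estimate_percent => 10 );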




exec dbms_stats.set_table_stats( user, 't', numrows => 10000, numblks => 1000 );

sonali, May 26, 2004 - 3:57 pm UTC

select num_Rows from user_tables where table_name = 'T';

execute dbms_stats.set_table_stats( user, 'T', numrows => 10000, numblks => 1000 );

From what I understand from reading all the discussion, one can set table stats even if the table has not been analyzed (dbms_stats.gather_table_stats...).

Can I use set_table_stats to fake as if the table has a million rows, whereas in reality it has only 500 rows? This way I can force the query plan, table design, indexes etc. at the customer site.
We always have this issue with performance, as we have small development databases and huge customer databases.

Thanks

Tom Kyte
May 26, 2004 - 4:43 pm UTC


you need to have a huge customer database upon which to test with.

what if Oracle said "you know what, we don't really have any large databases to test with, we'll just try 500meg or so. If it works OK with that, we can ship it"

same concept -- we test larger than our customers go, just to know that it has a chance of working. otherwise, we know it has no chance.

So what if you set the stats and tell your test instance (that has 1000 rows) that it should pretend it has 1,000,000.

What would you gain from that?

o not the knowledge of how the query will really perform
o not the knowledge of how the query will scale

the only thing you gain is "i can see the probable plan it'll use". Now what?


get a terabyte of disk. fill it up.

dbms_stats.auto_sample_size ????

Neeraj, May 27, 2004 - 2:29 am UTC

SELECT COUNT(*) FROM GCA_DS_PRS_EXT_PUB
57207324

Execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME =>'GCA_USER',TABNAME => 'GCA_DS_PRS_EXT_PUB',ESTIMATE_PERCENT =>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt=> 'for all INDEXED COLUMNS size 1',cascade => true,degree=>3);

The value of sample_size is 57207323 and num_rows is 57207323 -- why?

Does it mean the estimate was 100%?

Tom Kyte
May 27, 2004 - 9:08 am UTC

auto means auto. auto means we do what we want. there are many triggers that say "hey, we've seen some things that lead us to believe we want to do 50% or more" and once you hit 49%+ -- analyze becomes compute since you are touching pretty much every byte anyway.

DBMS_STATS.AUTO_SAMPLE_SIZE

friend, May 27, 2004 - 11:59 pm UTC

Thanks Tom.
But I still want to share these stats with you.


Execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME =>'GCA_USER',TABNAME => 'PUB',ESTIMATE_PERCENT =>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt=> 'for all INDEXED COLUMNS size 1',cascade => true,degree=>3);

PL/SQL procedure successfully completed.

Elapsed: 00:46:22.25

select sample_size,num_rows from user_tables where table_name='PUB';

SAMPLE_SIZE NUM_ROWS
----------- ----------
57207323 57207323

Elapsed: 00:00:00.03

Execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME =>'GCA_USER',TABNAME => 'PUB',ESTIMATE_PERCENT =>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt=> 'for all INDEXED COLUMNS size 1',cascade => true,degree=>3);

PL/SQL procedure successfully completed.

Elapsed: 00:52:26.88

select sample_size,num_rows from user_tables where table_name='PUB';

SAMPLE_SIZE NUM_ROWS
----------- ----------
57207323 57207323

Elapsed: 00:00:00.08



select sample_size,num_rows from user_tables where table_name='BOOK';

SAMPLE_SIZE NUM_ROWS
----------- ----------
4968487 24842435

Elapsed: 00:00:00.05


Execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME =>'GCA_USER',TABNAME => 'BOOK',ESTIMATE_PERCENT =>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt=> 'for all INDEXED COLUMNS size 1',cascade => true,degree => 3);

PL/SQL procedure successfully completed.

Elapsed: 01:16:55.60



select sample_size,num_rows from user_tables where table_name='BOOK';


SAMPLE_SIZE NUM_ROWS
----------- ----------
24842085 24842085

Elapsed: 00:00:00.01


Execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME =>'GCA_USER',TABNAME => 'BOOK',ESTIMATE_PERCENT =>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt=> 'for all INDEXED COLUMNS size 1',cascade => true,degree => 3);

PL/SQL procedure successfully completed.

Elapsed: 01:16:49.76

SAMPLE_SIZE NUM_ROWS
----------- ----------
24842085 24842085

Elapsed: 00:00:00.02


Execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME =>'GCA_USER',TABNAME => 'BILL',ESTIMATE_PERCENT =>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt => 'for all INDEXED COLUMNS size 1',CASCADE => TRUE,degree => 3);


PL/SQL procedure successfully completed.

Elapsed: 01:14:53.66


select sample_size,num_rows from user_tables where table_name='BILL';


SAMPLE_SIZE NUM_ROWS
----------- ----------
24046022 24046022

Elapsed: 00:00:00.02

Execute DBMS_STATS.GATHER_TABLE_STATS(OWNNAME =>'GCA_USER',TABNAME => 'BILL',ESTIMATE_PERCENT =>DBMS_STATS.AUTO_SAMPLE_SIZE,method_opt => 'for all INDEXED COLUMNS size 1',CASCADE => TRUE,degree => 3);


PL/SQL procedure successfully completed.

Elapsed: 01:10:09.34



select sample_size,num_rows from user_tables where table_name='BILL';

SAMPLE_SIZE NUM_ROWS
----------- ----------
24046022 24046022



Tom Kyte
May 28, 2004 - 8:05 am UTC

thanks?

friend

friend, May 28, 2004 - 8:45 am UTC

You're welcome, sir, but I still have a problem :)
sample_size is equal to num_rows and to count(*) of the base table,

and some very good DBAs (under whom I am working) are pointing at these stats and planning to implement estimate_percent => 20, since according to them this amounts to 100% (num_rows = count(*) of the table).





Tom Kyte
May 28, 2004 - 11:17 am UTC

you either want "auto" or you don't. It is up to you.

dbms_stats

friend, May 28, 2004 - 8:53 am UTC

I mean, according to them, if sample_size = count(*) of the table = num_rows from user_tables, then it amounts to estimate_percent => 100.

I need some words from you to explain it to them. Also, I have to buy your book -- I am getting "Expert One-on-One Oracle" (Wrox publisher) here; please confirm the name. And is "Effective Oracle by Design" different from the above?

Tom Kyte
May 28, 2004 - 11:23 am UTC

auto means just that, you are saying "i don't want to set the details, please use your best judgement". the algorithm is saying "you know what, we want to do it all"

i have two main books, expert one on one oracle (apress is current publisher), and effective oracle by design (oraclepress)

Thiru, June 08, 2004 - 10:35 am UTC

user logged in : my_user
When dbms_stats.delete_schema_stats('MY_USER') is run, the stats are not deleted, as can be verified from user_object_tables and user_indexes, while if I am logged in as system and run dbms_stats.delete_schema_stats('MY_USER') then the stats are gone in the my_user schema.

Any reason for this? No errors come up when it is run by my_user. Also, my_user has the dba role.

Tom Kyte
June 08, 2004 - 11:54 am UTC

can you help me to reproduce the issue?



ops$tkyte@ORA9IR2> drop user my_user
  2  cascade;
 
User dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> grant dba to my_user identified by my_user;
 
Grant succeeded.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect my_user/my_user;
ops$tkyte@ORA9IR2> set termout off
my_user@ORA9IR2> set termout on
my_user@ORA9IR2>
my_user@ORA9IR2> create table t as select * from scott.emp;
 
Table created.
 
my_user@ORA9IR2>
my_user@ORA9IR2> exec dbms_stats.gather_schema_stats( user );
 
PL/SQL procedure successfully completed.
 
my_user@ORA9IR2> select num_rows from user_tables;
 
  NUM_ROWS
----------
        14
 
my_user@ORA9IR2>
my_user@ORA9IR2> exec dbms_stats.delete_schema_stats( user );
 
PL/SQL procedure successfully completed.
 
my_user@ORA9IR2>
my_user@ORA9IR2> select num_rows from user_tables;
 
  NUM_ROWS
----------
 
 
my_user@ORA9IR2>
 

A reader, June 08, 2004 - 12:59 pm UTC

I was checking only user_object_tables and user_indexes and not user_tables.
The delete schema stats is working for heap tables only and not for object tables.
When I query user_object_tables the last_analyzed date still shows up. Also, the user_indexes stats do not get deleted.


Tom Kyte
June 08, 2004 - 1:41 pm UTC

in 9.2 and before, object tables are not done by gather schema stats, they are not removed by delete schema stats. (no, i don't know why)

in 10g, they are. So, in 9i and before, you need to use gather table/index stats for object tables and delete table/index stats. this applies to nested tables as well.

in 10g, you can use schema stats on them as well.

(i've never used object types to store data, never hit that nuance before myself)

DBMS_STATS.AUTO_SAMPLE_SIZE

Arun Gupta, June 24, 2004 - 1:06 pm UTC

Tom,
When I use auto_sample_size, is there a way for me to know what sample size Oracle actually used in gather_schema_stats? Does auto_sample_size affect indexes also? After using this command, I checked the num_rows in a table T1 and its pk index, PK_T1. The num_rows in PK_T1 was much higher than num_rows in T1.

The sample_size of T1=num_rows of T1 and
sample_size of PK_T1=num_rows of PK_T1 but
num_rows of T1 < num_rows of PK_T1.

The last_analyzed_time of T1 and PK_T1 is within 13 seconds of each other, there was no other activity.

My dbms_stats command is:
dbms_stats.gather_schema_stats(
ownname => 'ABC',
estimate_percent =>dbms_stats.auto_sample_size,
options => 'GATHER STALE',
degree => DBMS_STATS.DEFAULT_DEGREE,
cascade => TRUE);
The monitoring option is on. What could be causing this discrepancy?

Thanks

Tom Kyte
June 24, 2004 - 3:13 pm UTC

the data dictionary has "sample_size" as a column.

what is "much higher"

Arun Gupta, June 24, 2004 - 3:41 pm UTC

The num_rows in T1 is 280,855 less than num_rows in PK_T1. The num_rows in T1=8827920 and num_rows in PK_T1=9108775.


Tom Kyte
June 24, 2004 - 3:48 pm UTC

that is a 3% margin of error on an estimate. not bad eh?

Just like to you and me, $1,000,000 would be a lot. (making an assumption on your part)

But, to Warren Buffet -- maybe not.

Look at it in proportion there.

DBMS_STATS on Partitioned Table

Wayne, June 24, 2004 - 8:59 pm UTC

Tom,
I am analyzing a partition (2.5M rows) of a table using
DBMS_STATS.GATHER_TABLE_STATS(...) specifying:
granularity => 'PARTITION' -- no global stats
cascade => true -- indices only take 3-4 minutes
degree =>2

But it took 1 hour and 37 minutes (Unix/12CPU, Ora817).
If I specify estimate_percent =>10, it took 10 minutes.
My questions:
1) Does the number look ok (10 min for 10% data)? Any parameters might need to be tweaked?
* If I set granularity as default, it will take 3.7 hours for that partition (collect global stats).
** It took 3.5 minutes to do the following, am I still comparing apples to oranges (except the degree =2)?
analyze table t partition a estimate statistics sample 10
for all columns
for all local indexes
for all indexed columns

2) DBAs collect global stats on weekends sampling 5% of data (gather_schema_stats). Does 5% look low? Queries seem to run fine though (select count(*) from the_table returns the result (59M) in 3 seconds, provided that all local indexes have fresh stats).

Thanks,

Tom Kyte
June 25, 2004 - 7:39 am UTC

1) PQ is not always the fastest way to the end. I would drop the degree=>2 and see what happens. 2.5M rows is not huge. If the data is not spread over physical devices, pq probably won't be doing too much good.

tkprof is always very revealing to see what is happening and what is taking long and what you are waiting on (10046 level 12 trace)

2) a select count(*) with the CBO will pretty much go for the primary key index using a fast full scan regardless. It would not be a good benchmark query.

If your data is not heavily skewed, if it is nicely distributed, 5% may be better than good enough, sure.

Partition Stats

Wayne, June 25, 2004 - 9:14 pm UTC

Tom,
I retested gather_table_stats on a partition with and without parallelization; it works just fine (3.5 and 4.5 minutes respectively).



migrate

lotus, July 06, 2004 - 2:49 am UTC

Dear Tom,

We are moving from 8i to 9i.
Please suggest the precautions we should take from an application point of view.
Please provide details and links to study.


Tom Kyte
July 06, 2004 - 7:42 am UTC

you must read the Migration guide from cover to cover:

</code> https://docs.oracle.com#index-MIG <code>

and you must test your application upgrade on a *test* machine. test not only functionality -- but scale.

Ultimate link

Neeraj, July 07, 2004 - 12:16 am UTC

Dear All

The above link is a magic box of documentation.
Thanks Tom :)


Best Regards


DBMS_STATS.DEFAULT_DEGREE

Vlado, July 15, 2004 - 9:45 am UTC

I'm trying to figure out which init.ora parameters DBMS_STATS.DEFAULT_DEGREE depends on. I have not found anything in the Oracle documentation or on Metalink. Any ideas?

Tom Kyte
July 15, 2004 - 12:50 pm UTC

the parallel ones -- parallel automatic tuning, parallel max servers and the like.

total size used by a table

apl, August 04, 2004 - 7:30 am UTC

Can we get the total size used by the table and the total number of rows present in the table without analyzing the table? I mean, I don't want to use dbms_stats.gather_database_stats or the analyze command.

Tom Kyte
August 04, 2004 - 10:46 am UTC

select count(*) from t; gives you the number of rows.

select blocks from dba_segments where segment_name = 'T' and owner = user;

gives you the allocated space.

The query is wrong!!!!!!!!

A reader, August 04, 2004 - 6:14 pm UTC

The query provided by Tom is wrong.
Here is the correct query:

select blocks from dba_segments where segment_TYPE = 'TABLE' and owner = user;


I hope it is the right answer. As segement_name = 'T' does not exist in 9iR2. And also it should have been segment_type.

Tom must have been in a hurry. ;)



Sorry I must have been in hurry!!!!!!!

A reader, August 04, 2004 - 6:51 pm UTC

Sorry Tom.
I must have been in hurry. The query is right. I just did not read it properly.
Quite embarrassing I must admit...


Space Used

APL, August 05, 2004 - 12:07 am UTC

But I want the space used by a table.
Actually my requirement is to gather the used space for every table in each user's schema in the database, without using ANALYZE or DBMS_STATS.


Tom Kyte
August 05, 2004 - 12:43 pm UTC

and which part of my response did not meet that requirement?

you asked "number of rows" -- i responded with "select count(*)"

you asked "size consumed" -- i responded with a query against dba_segments.

No stats, no analyze, no dbms_stats - but both answers received.


What was missing?

Size of table

APL, August 06, 2004 - 12:26 am UTC

What about the free space? I want the used space, i.e. excluding the PCTFREE.
I think from your query we will get the allocated blocks.


Tom Kyte
August 06, 2004 - 8:07 am UTC

that is correct -- and what was asked for (you get what you ask for, nothing more, nothing less)


"Can we get the total size used by the table and total number of rows presented
in the table without analyzing the table?"


the total size used by the table is in dba_segments. that is what is allocated to it, therefore "in use"

what is "used space excluding the pctfree"? pctfree is just a number "please keep 10% free during insertions". with pctfree of 90, you can have blocks that are 100% full, 50% full, 10% full, 0% full.

if you want the "bytes consumed by rows in the table"

select count(*), sum( nvl(vsize(c1),0) + nvl(vsize(c2),0) + ... + nvl(vsize(cN),0) ) from t;



what is the correct granularity in this case

Ajeet, August 18, 2004 - 7:08 am UTC

Tom,
I have a table called rdm_f_parameter -- 

SQL> desc rdm_f_parameter ;
 Name                                      Null?    Type
 ----------------------------------------- -------- --------------
 DIM_SEQ_ID                                NOT NULL NUMBER(9)
 PARAM_SEQ_ID                              NOT NULL NUMBER(9)
 FLEET                                     NOT NULL VARCHAR2(10)
 FLIGHT_DATETIME                           NOT NULL DATE
 PARAMETER_VALUE                           NOT NULL NUMBER(15,4)
 CREATION_DATE                             NOT NULL DATE

It is partitioned by range on flight_datetime
and subpartition by list on fleet.

all queries on this table will have flight_datetime and fleet both in the where clause -- a sample SQL is below --

select 
a.flight_datetime
,max( decode( c.parameter_name, 'DEGT', a.parameter_value, NULL ) ) DEGT
,max( decode( c.parameter_name, 'DPOIL', a.parameter_value, NULL ) ) DPOIL
,max( decode( c.parameter_name, 'BRAT', a.parameter_value, NULL ) ) BRAT
from
 rdm.rdm_d_plot b
,rdm.rdm_d_param c
,rdm.rdm_f_parameter a
where a.dim_seq_id = b.dim_seq_id
and   a.param_seq_id = c.param_seq_id
and   c.parameter_name in ('DEGT','DPOIL','BRAT')
and   a.fleet = 'MAS'
and   b.fleet = 'MAS'
and   b.aircraft_id = '9M-MMQ'
and   b.flight_phase = 'CRUISE'
and   b.engine_id = '856295'
and   a.flight_datetime >= trunc(sysdate) - 360
group by a.flight_datetime ; 

The 2 other tables in the above SQL are lookup tables with at most 5000 rows.

I have been trying the different granularity options while collecting stats using dbms_stats. What should be the correct granularity -- PARTITION or SUBPARTITION? I get a (very) different explain plan for each; also, when I use SUBPARTITION granularity I get a slightly better response time.

The reason I asked this is that the rdm_f_parameter table is very large -- it has 120 million rows as of today.

I have 2 bitmap indexes - on rdm_f_parameter 
one on dim_seq_id and another on param_seq_id.

I have kept no index on fleet or flight_datetime as queries will always have these 2 columns in the where clause.

Do you see any design issues?

I don't have a response time or performance issue as of now -- and I am still in a development database, so I wanted to have your opinion.

Thanks and Kind regards,
Ajeet

 

Tom Kyte
August 18, 2004 - 9:21 am UTC

do you use bind variables?

do the plans have "key key" in the partition start/stop or a single fixed number?


IF the query plans are always using "fixed partitions" in the partition start/stop, then partition level stats will be used.  eg given:

ops$tkyte@ORA9IR2> CREATE TABLE t
  2  (
  3    temp_date  date,
  4    x          int,
  5    y int
  6  )
  7  PARTITION BY RANGE (temp_date)
  8  subpartition by hash(x)
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace users,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace tools,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  ;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from plan_table;
 
4 rows deleted.
 
ops$tkyte@ORA9IR2> explain plan for
  2  select * from t where x = 5 and temp_date = to_date('01-jan-2004');
 
Explained.
 
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
------------------------------------
 
-------------------------------------------------------------------------------
| Id  | Operation            |  Name  | Rows  | Bytes | Cost  | Pstart| Pstop |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |     1 |    35 |     2 |       |       |
|*  1 |  TABLE ACCESS FULL   | T      |     1 |    35 |     2 |     3 |     3 |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("T"."X"=5 AND "T"."TEMP_DATE"=TO_DATE('2004-01-01 00:00:00',
              'yyyy-mm-dd hh24:mi:ss'))
 
Note: cpu costing is off
 
15 rows selected.

<b>local partition statistics on partition "3" will be used to optimize that query plan since the optimizer knows "partition 3" but...</b>

 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable x number
ops$tkyte@ORA9IR2> variable y varchar2(20)
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> delete from plan_table;
 
2 rows deleted.
 
ops$tkyte@ORA9IR2> explain plan for
  2  select * from t where x = :x and temp_date = to_date(:y);
 
Explained.
 
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------------
| Id  | Operation              |  Name| Rows  | Bytes | Cost  | Pstart| Pstop |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |      |     1 |    35 |     2 |       |       |
|   1 |  PARTITION RANGE SINGLE|      |       |       |       |   KEY |   KEY |
|   2 |   PARTITION HASH SINGLE|      |       |       |       |   KEY |   KEY |
|*  3 |    TABLE ACCESS FULL   | T    |     1 |    35 |     2 |   KEY |   KEY |
-------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   3 - filter("T"."X"=TO_NUMBER(:Z) AND "T"."TEMP_DATE"=TO_DATE(:Z))
 
Note: cpu costing is off
 
16 rows selected.

<b>will use table (global) statistics to optimize since it is not known what partition(s) will be accessed -- only that some subset of partitions will be</b>

 

Bind variables most of the time

Ajeet, August 19, 2004 - 4:17 am UTC

Tom -
Thanks for the great explanation -- it is now very clear when to use which granularity in dbms_stats for a partitioned table.
In my case most of the queries will use bind variables, as we use a stored procedure which passes in the values for the where clause of the SQL I gave above; so bind variables are used in 95% of the queries on the above mentioned table.
Also, my explain plan shows KEY/KEY as pstart and pstop.
However, in 5% of cases some power user may want to run the above query with hard-coded values for fleet and flight_datetime. As mentioned, my table is partitioned by range on flight_datetime and subpartitioned by list on fleet.

Thanks so much.
Ajeet

Blocks

A reader, August 19, 2004 - 5:04 pm UTC

Hi Tom,
Just wanted to know how to translate blocks into size.
That is, I want to get the size of the tables/indexes in terms of MB, etc.
So how do I interpret blocks in terms of actual size?
Thanks as always

Tom Kyte
August 19, 2004 - 8:07 pm UTC

query v$parameter to see your block size

ops$tkyte@ORA9IR2> select value from v$parameter where name = 'db_block_size'
  2  /
 
VALUE
-------------------------------------------------------------------------------
8192


block size is 8,192 bytes - now multiply. 
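Alternatively, user_segments already carries a BYTES column next to BLOCKS, so you can let the query do the multiplication -- a minimal sketch:

select segment_name, segment_type, blocks, bytes/1024/1024 mbytes
  from user_segments
 order by bytes desc;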

export/import_database_stats

A reader, August 21, 2004 - 10:47 pm UTC

I frequently get errors when using dbms_stats.export_database_stats or import_database_stats.

Consider

$ sqlplus  "/ as sysdba"

SQL*Plus: Release 9.2.0.4.0 - Production on Sat Aug 21 22:28:50 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production


SQL> set timing on
SQL> exec dbms_stats.import_database_stats('cbo_stats','2004-07-25.FULL','cbo')
BEGIN dbms_stats.import_database_stats('cbo_stats','2004-07-25.FULL','cbo'); END;

*
ERROR at line 1:
ORA-20000: Insufficient privileges to analyze an object within the database
ORA-06512: at "SYS.DBMS_STATS", line 4914
ORA-06512: at "SYS.DBMS_STATS", line 4943
ORA-06512: at "SYS.DBMS_STATS", line 5460
ORA-06512: at line 1


Elapsed: 00:02:27.73
SQL> show parameter o7

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
O7_DICTIONARY_ACCESSIBILITY          boolean     TRUE
SQL> 

Why is this?

When this happens, what state is the dictionary left in? Are part of the stats imported, or nothing? How can I tell?

Thanks 

Tom Kyte
August 22, 2004 - 8:14 am UTC

please contact support to see if you are hitting <Bug:3026474> on the import statistics and for guidance on the status of your statistics (you could enable sql_trace and see if there are any XCTEND -- commit -- records in there to see if some stats were committed before the error)

is it necessary to delete statistics before recalculating them

A reader, September 08, 2004 - 9:47 am UTC

I was wondering if it is recommended, as part of the regular maintenance of statistics, to first delete them before recalculating them on a regular interval. I.e., if I have dbms_stats.gather_schema_stats running every night for a specified schema, should I first run dbms_stats.delete_schema_stats as normal practice, or should I only worry about deleting the stats if I change the way/parameters of my analyze job?

Tom Kyte
September 08, 2004 - 10:30 am UTC

it would be:

a) save the old ones (export stats)
b) gather new


no need to remove the old (that leaves you at a point in time with NO stats -- eg: the RBO kicks in -- or, just as bad, the CBO sees a mix of analyzed and unanalyzed tables)
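A minimal sketch of that (a) save, (b) gather routine -- the stat table name STATS_HIST and the statid format are made up:

exec dbms_stats.create_stat_table( user, 'STATS_HIST' );   -- one-time setup
exec dbms_stats.export_schema_stats( user, stattab => 'STATS_HIST', statid => 'B'||to_char(sysdate,'YYYYMMDD') );
exec dbms_stats.gather_schema_stats( user, cascade => true );
-- if the new stats misbehave, restore the saved set, e.g.:
-- exec dbms_stats.import_schema_stats( user, stattab => 'STATS_HIST', statid => 'B20040908' );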

create index ... compute statistics

A reader, October 13, 2004 - 2:50 pm UTC

I have some large tables with indexes. I gathered stats on these tables using dbms_stats (cascade=>true,method_opt=>'for all indexed columns')

Now, I want to create a new index on one of the existing columns

If I do

create index ... compute statistics;

is this the same as

create index ...;

and regathering stats on the table using (cascade=>true,method_opt=>'for all indexed columns')

The latter would do much more work than the former, by gathering stats on the table and all the other indexes, which I don't necessarily want.

[The reason I ask is that I am seeing some queries where the CBO is not properly picking the new index. If I hint the query to use the new index, the LIOs drop by orders of magnitude]

Thanks

Tom Kyte
October 13, 2004 - 3:28 pm UTC

ops$tkyte@ORA9IR2> create table t
  2  as
  3  select * from all_objects;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats(user,'T',method_opt => 'for all indexed columns', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create index t1_idx on t(object_id) COMPUTE STATISTICS ;
 
Index created.
 
ops$tkyte@ORA9IR2> create index t2_idx on t(owner) COMPUTE STATISTICS ;
 
Index created.
 
ops$tkyte@ORA9IR2> column column_name format a15
ops$tkyte@ORA9IR2> select index_name, clustering_factor, num_rows from user_indexes where table_name = 'T';
 
INDEX_NAME                     CLUSTERING_FACTOR   NUM_ROWS
------------------------------ ----------------- ----------
T1_IDX                                     24513      27897
T2_IDX                                       397      27897
 
ops$tkyte@ORA9IR2> select column_name, count(*) from user_tab_histograms where table_name = 'T' group by column_name;
 
COLUMN_NAME       COUNT(*)
--------------- ----------
OBJECT_ID                2
OWNER                    2
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats(user,'T',method_opt => 'for all indexed columns', cascade=>true );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select index_name, clustering_factor, num_rows from user_indexes where table_name = 'T';
 
INDEX_NAME                     CLUSTERING_FACTOR   NUM_ROWS
------------------------------ ----------------- ----------
T1_IDX                                     24513      27897
T2_IDX                                       397      27897
 
ops$tkyte@ORA9IR2> select column_name, count(*) from user_tab_histograms where table_name = 'T' group by column_name;
 
COLUMN_NAME       COUNT(*)
--------------- ----------
OBJECT_ID               76
OWNER                   17
 
<b>it does not gather histograms -- just basic column level statistics</b>

 

outstanding as always

A reader, October 13, 2004 - 4:06 pm UTC


A reader, November 09, 2004 - 9:46 am UTC

OK so if I do

1) dbms_stats.gather_table_stats(user,'t',cascade=>true,method_opt=>'for all indexed columns')

2) create index t_idx on t(i) compute statistics;

3) dbms_stats.gather_table_stats(user,'t',cascade=>false,method_opt=>'for columns i')

that would be like doing (1) with the index already in place?

(3) seems a bit expensive just to gather histograms on the
column. It would re-gather stats on the table as well. Is there a way to do just the histograms on a specific column without re-gathering stats on the table itself?

Thanks

Tom Kyte
November 09, 2004 - 10:59 am UTC

No, the stats gathered on the create index are just column level stats (hi/lo/num distinct). The other, (3), gathers histograms. Consider:



ops$tkyte@ORA9IR2> create table t as select * from all_objects;
Table created.
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade => true, method_opt=>'for all indexed columns' );
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select 'table', table_name, num_rows from user_tables where table_name = 'T'
  2  union all
  3  select 'index', index_name, num_rows from user_indexes where table_name = 'T'
  4  union all
  5  select 'cols', column_name, count(*) from user_tab_histograms where table_name = 'T' group by column_name;
 
'TABL TABLE_NAME                       NUM_ROWS
----- ------------------------------ ----------
table T                                   27928
 
ops$tkyte@ORA9IR2> create index t_idx on t(object_type) compute statistics;
Index created.
 
ops$tkyte@ORA9IR2> select 'table', table_name, num_rows from user_tables where table_name = 'T'
  2  union all
  3  select 'index', index_name, num_rows from user_indexes where table_name = 'T'
  4  union all
  5  select 'cols', column_name, count(*) from user_tab_histograms where table_name = 'T' group by column_name;
 
'TABL TABLE_NAME                       NUM_ROWS
----- ------------------------------ ----------
table T                                   27928
index T_IDX                               27928
cols  OBJECT_TYPE                             2
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade => false, method_opt=>'for columns object_type' );
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select 'table', table_name, num_rows from user_tables where table_name = 'T'
  2  union all
  3  select 'index', index_name, num_rows from user_indexes where table_name = 'T'
  4  union all
  5  select 'cols', column_name, count(*) from user_tab_histograms where table_name = 'T' group by column_name;
 
'TABL TABLE_NAME                       NUM_ROWS
----- ------------------------------ ----------
table T                                   27928
index T_IDX                               27928<b>
cols  OBJECT_TYPE                            26</b>
 
ops$tkyte@ORA9IR2>



3) well -- in order to gather histograms, you are going to scan the table; the table stats sort of "come for free" there.

A reader, November 09, 2004 - 5:31 pm UTC

I got what you meant, but you misunderstood my question

"that would be like doing (1) with the index already in place?"

And the answer is yes

i.e. if my table already had the index and I did cascade=>true,method_opt=>'for all indexed columns', this would gather index stats and histograms

Instead, if I add the index later (create index ... compute statistics) and I want the same end-result, I need to do

cascade=>false(since the index DDL already did this)
method_opt=>'for columns xxx' (this will do the histograms)

Thanks



gathering stats on partition

A reader, November 16, 2004 - 3:03 pm UTC

Hi Tom,
I am not able to understand the purpose of degree:
if I am gathering stats for a partition and there is no global index, will degree be helpful? If yes, what will it do when gathering stats for a partition of a table with no global indexes?
Thanks

Tom Kyte
November 16, 2004 - 11:10 pm UTC

degree is for "parallel"

do you want to do it in parallel? if so, degree is useful, else it is not.

Partition Stats

A reader, November 17, 2004 - 2:27 am UTC

Thanks Tom.
So, even for partition stats and local indices, I can gather stats in parallel?

Tom Kyte
November 17, 2004 - 9:25 am UTC

you can gather statistics on a segment in parallel. partitions are segments.
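For example, a minimal sketch (the table, partition and degree are all made up) gathering one partition and its local indexes in parallel:

exec dbms_stats.gather_table_stats( user, 'BIG_TAB', partname => 'P_2004_11', granularity => 'PARTITION', degree => 4, cascade => true );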

Thanks a lot!

A reader, November 17, 2004 - 2:16 pm UTC


Wrong Sample Size for Indexes with DBMS_STATS

Kiran Ghanta, December 27, 2004 - 4:53 pm UTC

Hi Tom,
I sincerely appreciate your response to my question regarding DBMS_STATS on 9.2.0.4
I am not sure if I am hitting a bug with DBMS_STATS or not; here is what I have observed while using DBMS_STATS:
=========================================================
TableName : TEST_1 (97million rows) Primary Key : TEST_PK_1
With DBMS_STATS ==>
Est.Percent No.Rows SampleSize Sample%
----------------------------------------------------
Est 5% 96,394,065 247,212 0.256459773
Est 7% 102,514,323 262,908 0.256459773
Est 10% 96,139,054 246,558 0.256459773
Est 20% 99,199,183 254,406 0.256459773
Est 40% 102,259,312 262,254 0.256459774
Est 100% 94,949,004 243,506 0.256459773

With Analyze ==>
Est.Percent No.Rows SampleSize Sample%
----------------------------------------------------
Est 5% 97,408,463 5,101,692 5.237421722
Est 7% 97,409,408 7,347,964 7.543382257
Est 10% 97,410,149 11,212,668 11.51078005
Est 20% 97,410,723 18,946,872 19.4504993
Est 40% 97,411,101 34,701,732 35.62400142
Compute 97,349,264 97,349,264 100

Could you please let me know why DBMS_STATS is always gathering stats for only a .25% sample? I am seeing this behavior when gathering stats for indexes with both the gather_table_stats and gather_index_stats procedures.
I have tested this with different volumes of data (43 million rows and 1.5 million rows). The sample percents are constant at .456% and 16% respectively.

Please advise,

thanx!

kiran

Tom Kyte
December 27, 2004 - 5:30 pm UTC

share with us the precise command you used please.

Wrong Sample Size for Indexes with DBMS_STATS

Kiran Ghanta, December 27, 2004 - 6:21 pm UTC

Tom,
Here is the sample test, I ran for the stats
======================================================
rghanta@QA.WORLD > @gather_tbl_stats.sql
rghanta@QA.WORLD > variable n1 number
rghanta@QA.WORLD > exec :n1:=dbms_utility.get_time;
rghanta@QA.WORLD > set echo on
rghanta@QA.WORLD > DECLARE
2 BEGIN
3 dbms_stats.gather_table_stats( ownname => 'RGHANTA',
4 tabname => 'TEST_ALA_NONPART',
5 estimate_percent => 5,
6 block_sample => FALSE,
7 degree => 1,
8 granularity => 'DEFAULT',
9 cascade => TRUE );
10 END;
11 /
rghanta@QA.WORLD > set echo off
Table Test_ALA_NONPART 97million rows Stats gathered in 297.4 secs
rghanta@QA.WORLD > @qindstats

Average Average
Index Distinct Number Leaf Blocks Data Blocks Cluster Sample Date
Name Unique Keys of Rows Per Key Per Key Factor Size SamplePcnt MM-DD-YYYY
--------------- --------- ------------ ------------ ----------- ----------- ------------ ----------- ---------- ----------
ALA_PK UNIQUE 95,204,015 95,204,015 1 1 2,065,821 244,160 .256459772 12-27-2004
ALA_DL_NPART NONUNIQUE 54 98,561,814 8,372 41,855 2,260,202 253,098 .256791134 12-27-2004

============================

Deleted the table stats using DELETE_TABLE_STATS procedure

rghanta@QA.WORLD > @qindstats

Average Average
Index Distinct Number Leaf Blocks Data Blocks Cluster Sample Date
Name Unique Keys of Rows Per Key Per Key Factor Size SamplePcnt MM-DD-YYYY
--------------- --------- ------------ ------------ ----------- ----------- ------------ ----------- ---------- ----------
ALA_PK UNIQUE
ALA_DL_NPART NONUNIQUE

rghanta@QA.WORLD > exec dbms_stats.gather_index_stats(user,'ALA_PK',ESTIMATE_PERCENT=>5);
rghanta@QA.WORLD > @qindstats

Average Average
Index Distinct Number Leaf Blocks Data Blocks Cluster Sample Date
Name Unique Keys of Rows Per Key Per Key Factor Size SamplePcnt MM-DD-YYYY
--------------- --------- ------------ ------------ ----------- ----------- ------------ ----------- ---------- ----------
ALA_PK UNIQUE 93,588,947 93,588,947 1 1 2,036,966 240,018 .256459772 12-27-2004
ALA_DL_NPART NONUNIQUE

==============================

Deleted the index stats using DELETE_INDEX_STATS procedure

rghanta@QA.WORLD > exec dbms_stats.delete_index_stats(user,'ALA_PK');
rghanta@QA.WORLD > @qindstats

Average Average
Index Distinct Number Leaf Blocks Data Blocks Cluster Sample Date
Name Unique Keys of Rows Per Key Per Key Factor Size SamplePcnt MM-DD-YYYY
--------------- --------- ------------ ------------ ----------- ----------- ------------ ----------- ---------- ----------
ALA_PK UNIQUE
ALA_DL_NPART NONUNIQUE

Gathered index stats with analyze command..
rghanta@QA.WORLD > analyze index ala_pk estimate statistics sample 5 percent;
rghanta@QA.WORLD > @qindstats

Average Average
Index Distinct Number Leaf Blocks Data Blocks Cluster Sample Date
Name Unique Keys of Rows Per Key Per Key Factor Size SamplePcnt MM-DD-YYYY
--------------- --------- ------------ ------------ ----------- ----------- ------------ ----------- ---------- ----------
ALA_PK UNIQUE 97,408,462 97,408,463 1 1 2,109,950 5,101,692 5.23742173 12-27-2004
ALA_DL_NPART NONUNIQUE

====================================

Please let me know if you find anything wrong.

thanx!
kiran




temp tables and dbms_stats.set_table_stats

denni50, December 29, 2004 - 2:38 pm UTC

Hi Tom

Happy 2005!

Came upon a question on this thread (dbms_stats.set_table_stats) where you answered:

"Another use I've found for it is with temporary tables. You cannot really analyze them but what you can do is create a "real" table -- fill it up with representative data -- and then export these stats using dbms_stats. Later, you create the temporary table and import these other stats you saved. Now the optimizer will use these stats for the temporary table when developing query plans that involve it. This could be important if you put lots of rows into a temp table and then use it in subsquent SQL statments -- the optimizer would otherwise think the temporary table is "small" "

it brought to mind a recent occurrence when we were in the testing stages of upgrading our canned software to a later version and discovered one of the functionalities had been changed considerably and used a temp table, stored in a separate tablespace, to hold updated data (which can involve 700,000+ rows). When we tested the new module, which generated a summary report, we calculated that an additional 31 1/2 hours was added to the process, entirely attributable to the loading of the new temp table.
(below is the actual time from the report showing the loading phase into the temp table)

time calculation:
total seconds: 113042.75/60 = 1884.0458 minutes
1884.0458/60 = 31.40 hours

The vendor never provided an answer or solution as to why the process was taking so long and we had to scrap the upgrade. When I read your answer about temp tables and stats it rang a bell as our temp tables were not analyzed as you described.

Could you please expand on a "real" table vs temp table.
Does it need the exact same structure as the temp table?
Not sure how I would fill it with representative data since the functionality does the loading through the job process.
Next I would export the "real" table with the stats...create the temp_table..then import the data and stats from the "real" table...is that correct?

I believe this is the reason the process took so long as sql statements were also being run against the temp table in addition to the loading.

thanks




Tom Kyte
December 29, 2004 - 7:39 pm UTC

the process is:

create a REAL table that is identical to your temporary table in name, structure, etc.

fill this REAL table with representative data, what you expect normally.

gather and export the stats for this REAL table.

drop it, create the GTT in its place, import the stats.


(or just use dbms_stats.set_ to set the statistics)
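A minimal sketch of that process -- the table name, columns and row volume are all made up:

create table temp_sales( id number, amt number );   -- REAL table, same shape as the future GTT
insert into temp_sales select rownum, mod(rownum,100) from all_objects where rownum <= 50000;   -- representative data
exec dbms_stats.gather_table_stats( user, 'TEMP_SALES' );
exec dbms_stats.create_stat_table( user, 'GTT_STATS' );
exec dbms_stats.export_table_stats( user, 'TEMP_SALES', stattab => 'GTT_STATS' );
drop table temp_sales;
create global temporary table temp_sales( id number, amt number ) on commit preserve rows;
exec dbms_stats.import_table_stats( user, 'TEMP_SALES', stattab => 'GTT_STATS' );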

thanks Tom!!

denni50, December 30, 2004 - 8:41 am UTC

that is soooo cool....can't wait to test it out.
:~)

btw:
went to NYC for the x-mas(Amtrak train)and had the good
fortune to sit next to(of all things) an Oracle DBA
from Herndon VA. We virtually spent the entire travel
talking about you, your website and your books...it
was very entertaining and engaging to say the least.
(don't worry...we showered you with accolades!...chuckle)


Wrong Sample Size for Indexes with DBMS_STATS

Kiran Ghanta, December 30, 2004 - 9:42 am UTC

Hi Tom,
I was hoping you could advise me on the problem described above. The distinct keys and data blocks per key numbers vary significantly for nonunique indexes between DBMS_STATS and ANALYZE. Do you think it would be a good idea if I used DBMS_STATS for table stats and ANALYZE for index stats? I need to gather stats for the partitioned tables as well.
Pl. advise,

thank you!
kiran



Tom Kyte
December 30, 2004 - 11:20 am UTC

can you summarize the differences you feel are "significant"?

the num of rows example you posted -- they are different by small %'s at most?

Wrong Sample Size for Indexes with DBMS_STATS

Kiran Ghanta, December 30, 2004 - 2:26 pm UTC

Thank you Tom for your response. Here is the comparison between DBMS_STATS and ANALYZE for one of my nonunique indexes.

Estimate BtreeLvl LeafBlks DstnctKeys NumOfRows AvgLfBlkPerKey AvgDatBlkPerKey ClustFact SampleSize SamplePct
----------------------------------------------------------------------------------------------------------------------------------
DBMS_STATS
----------
10% 3 424,859 62 92,619,241 6,852 34,375 2,131,304 237,838 0.256791135
20% 3 438,878 53 95,675,422 8,280 41,807 2,215,808 245,686 0.256791133
40% 3 445,498 59 97,118,618 7,550 38,790 2,288,630 249,392 0.256791133
100% 3 421,744 53 91,940,090 7,957 40,837 2,164,404 236,094 0.256791134

ANALYZE
-------
10% 3 445,530 397 97,124,067 1,122 5,637 2,238,114 10,747,673 11.06592149
20% 3 445,530 586 97,124,866 760 3,758 2,202,672 21,476,121 22.1118668
40% 3 445,530 852 97,125,330 522 2,619 2,231,520 49,907,940 51.38509182
Compute 3 445,308 984 97,077,102 452 1,850 1,820,419 97,077,102 100


thanks,
kiran

Tom Kyte
December 30, 2004 - 2:44 pm UTC

the fundamental difference is the computation of distinct keys (it would be easier to look at this stuff side by side

dbms_stats analyze pct_diff
distinct keys 12341342 432432432 123

it would not wrap and it would show the differences concisely.... I ended up having to do that -- it points only back to distinct keys, as the others are derived from that)

leaf blocks, btreelvl, cluster factor, num rows, no problem.


I would file a tar with support to see why it is not bumping up the sample size however (but unless you have identified a problem with the use of the existing stats, no i wouldn't flip over to analyze -- the important numbers seem to be there)


Wrong Sample Size for Indexes with DBMS_STATS

Kiran Ghanta, December 31, 2004 - 9:20 am UTC

Thanks a lot Tom. Wish you a very happy new year!

dbms_stats.gather_schema_stats

A reader, January 14, 2005 - 3:16 pm UTC

Tom,
I have a few questions about dbms_stats package
1. If I use dbms_stats.gather_schema_stats with "stattab =>", such as execute dbms_stats.gather_schema_stats('prod', stattab => 'mytab', statown => 'dev'), does it mean the schema is analyzed even though the statistics are stored in dev.mytab instead of the data dictionary? Will the statistics be used by the optimizer?
2. I used the above command
execute dbms_stats.gather_schema_stats('prod', stattab => 'mytab', statown => 'dev')
to gather the statistics, but when I first ran the package, dev.mytab had nothing in it, though last_analyzed was not null in all_tables for the 'prod' owner. If I run execute dbms_stats.gather_schema_stats('prod', stattab => 'mytab', statown => 'dev') twice, then the data is populated in the dev.mytab table. Is there something wrong with my code, or is there a bug in dbms_stats.gather_schema_stats?

Thank you,

Daxu





Tom Kyte
January 14, 2005 - 8:58 pm UTC

it specifies where to SAVE existing stats PRIOR to gathering new stats, consider:



ops$tkyte@ORA9IR2> create table t as select * from all_objects;
 
Table created.
 
ops$tkyte@ORA9IR2> set autotrace traceonly explain;
ops$tkyte@ORA9IR2> select count(*) from t;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'T'
 
 
<b>using rbo, no card=</b>

 
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> exec dbms_stats.CREATE_STAT_TABLE( user, 'STATS' );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_schema_stats( user, stattab=>'STATS' );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select count(*) from stats;
 
  COUNT(*)
----------
         0

<b>nothing in the stats table yet..</b>
 
ops$tkyte@ORA9IR2> set autotrace traceonly explain;
ops$tkyte@ORA9IR2> select count(*) from t;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=62 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'T' (Cost=62 Card=27784)
 
<b>but the analyze kicked in the CBO, we have stats</b>
 
 
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> truncate table t;
 
Table truncated.
 
ops$tkyte@ORA9IR2> exec dbms_stats.gather_schema_stats( user, stattab=>'STATS' );
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> set autotrace traceonly explain;
ops$tkyte@ORA9IR2> select count(*) from t;
 
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1)
   1    0   SORT (AGGREGATE)
   2    1     TABLE ACCESS (FULL) OF 'T' (Cost=2 Card=1)
 
<b>new stats....</b>
 
 
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2> select count(*) from stats;
 
  COUNT(*)
----------
        40
 
ops$tkyte@ORA9IR2>

<b>old ones saved for us...</b>
 

dbms_stats

A reader, January 15, 2005 - 1:48 am UTC

Tom.
Thank you very much for the detailed explanation, now I am clear about this.

Daxu

running dbms_stats twice a day?

Eugene, January 15, 2005 - 9:09 pm UTC

Hi Tom,
Would it be beneficial to run statistics gathering twice a day? Will I see noticeable benefits, or will it just be a waste of resources?

Thanks,
Eugene

Tom Kyte
January 15, 2005 - 9:35 pm UTC

do you update about 1% of your data per day

or 100% of your data

or more or less or...................


sort of depends, would the statistics gathered about your data be significantly different if you gathered

a) every minute
b) every hour
c) every N hours (n < 24)
d) every day
e) every Unit of time (unit of time > a day)

that is what you need to think about.

running dbms_stats twice a day?

Eugene, January 16, 2005 - 2:18 pm UTC

Tom,
Thanks for the reply. Is there a "rule" you go by to determine when you need to run stats? For instance, 10% of records are updated so stats need to be run, OR only 3% of rows were updated so there is no benefit in running them again.
I've heard of DBAs writing their own procedures to determine that value (% of changed rows) and decide whether or not to run DBMS_STATS.
Wonder what you'd say...

Thanks,
Eugene

Tom Kyte
January 16, 2005 - 3:50 pm UTC

why would they write their own procedure when this is precisely what "alter table T monitoring" and gather stale do?

Many times, a % change (which is what the database already has code to do) is sufficient -- but there are cases that can be problematic (especially in a system where binds are not used -- data warehouses for example). As you add data, the "upper/lower bound" of some columns might change drastically. So that yesterday:

select .... where x < 5;

was correctly estimated to return "N" rows -- but after an incremental load of a small percentage of the overall data -- now returns 50000*N rows. They may need special consideration (but then again, scenario was a DW, you know when the data is going to change drastically...)
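For reference, the built-in mechanism is just a couple of statements -- a minimal sketch, using the table name T from above:

alter table t monitoring;   -- once per table (or dbms_stats.alter_schema_tab_monitoring)
exec dbms_stats.gather_schema_stats( user, options => 'GATHER STALE', cascade => true );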

running dbms_stats twice a day?

Eugene, January 16, 2005 - 9:33 pm UTC

Thanks Tom,
I will look into "monitoring" and "gather stale" some more.

Eugene

best sample size

sreenivasa rao, January 20, 2005 - 9:48 am UTC

Hi Tom,
The production DB is on version 8.1.5.
SQL> DESC STI.TBL_FAXSTATUS
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------
 DN_JOBID                                  NOT NULL NUMBER(10)
 DC_CUSTID                                          VARCHAR2(20)
 DC_STATUS                                          VARCHAR2(1)
 DC_FAXNO                                           VARCHAR2(12)
 DC_FAXREQTYPE                             NOT NULL VARCHAR2(5)
 DD_SYSDATE                                         DATE
 DC_DOMAIN                                          VARCHAR2(2)

SQL> SELECT * FROM STI.TBL_FAXSTATUS;

  DN_JOBID DC_CUSTID            D DC_FAXNO     DC_FA DD_SYSDAT DC
---------- -------------------- - ------------ ----- --------- --
      7740 4129038080008053     1 923414206    9111  07-JAN-05 CC
      7742 4129038083254092     0 92582349     9111  14-JAN-05 CC
      7741 4129034057113059     1 951634514    9111  12-JAN-05 CC
      7744 5543757680173810     1 926143499    9111  19-JAN-05 CC


SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => 'STI',ESTIMATE_PERCENT => 5,TABNAME => 'TBL_FAXSTATUS',METHOD_OPT => 'FOR COLUMNS DN_JOBID SIZE 4,DC_CUSTID SIZE 4,DC_STATUS SIZE 2,DC_FAXNO SIZE 4,DC_FAXREQTYPE SIZE 1,DD_SYSDATE SIZE 4,DC_DOMAIN SIZE 1',cascade => TRUE);

PL/SQL procedure successfully completed.

Am I correct in gathering statistics for the above table this way?
Please guide me on how to set estimate_percent and method_opt.
I set the sample size to 5 as Oracle recommends it.
Questions:
1. How would you set the above values (estimate_percent, method_opt) in my 8.1.5 database?
2. If there are thousands of distinct values for a column in a million-row table, how can we set the method_opt size for that column, as it only ranges up to 254?
Thanks in advance for your great suggestions.
sreenivas

parameter: method_opt

A reader, March 18, 2005 - 11:34 pm UTC

Hi Tom,

Is the default method_opt good enough? How can I tune the "size" setting in this parameter?

thanks,

Error with dbms_stats

A reader, May 01, 2005 - 6:47 am UTC

Hi Tom,
When I analyze a partition (about 100 million rows) with compute statistics, it works. But with dbms_stats, it fails with this error:
BEGIN
*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P004
ORA-04030: out of process memory when trying to allocate 16384 bytes (sort
subheap,sort key)
ORA-06512: at "SYS.DBMS_STATS", line 9136
ORA-06512: at "SYS.DBMS_STATS", line 9150
ORA-06512: at line 3
____________________
I am confused -- why is it using parallel query? I did not put any parallel clause in the table definition, and even the parallel parameter settings are as follows (please see parallel_automatic_tuning boolean FALSE):

parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2152
parallel_instance_group string
parallel_max_servers integer 6
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2

__________________
The command used is:

DBMS_STATS.GATHER_TABLE_STATS
(OWNNAME => 'TEST',
TABNAME => 'TEST',
PARTNAME => 'TEST_PART8',
METHOD_OPT => 'FOR ALL INDEXED COLUMNS size 1',
CASCADE => TRUE); -- ???
______________
Any help will be useful.
Thanks & Regards

Tom Kyte
May 01, 2005 - 8:47 am UTC

dbms_stats defaults parallelism differently in different releases and you don't say what you are using so.....

suggest you check out the supplied packages guide for your release to see how it is defaulting

and in any case, if you don't want it, you can always say so.

You do have parallel query enabled.
You might have a non-default degree of parallelism on the table.

about the setting of method_opt parameter

Peiyan yu, May 09, 2005 - 9:51 pm UTC

Tom:
I see the following in this thread:
it is that if you gather histograms in 8i and want parallel, you would use size 1. Setting method_opt to null bypasses histograms entirely.

That's what I want. Normally I won't collect histogram statistics, so I planned to set method_opt to null, but I don't get the expected result.

create table t_histogram(id number primary key , name varchar2(10) , depno number) monitoring ;


insert into t_histogram select object_id , substr(object_name,1,10) , mod(rownum,3)
from user_objects where rownum < 21 ;


commit;


exec dbms_stats.gather_table_stats('DCE','T_HISTOGRAM',method_opt => null, cascade=>true );

column column_name format a10
column table_name format a20
SELECT * FROM USER_HISTOGRAMS WHERE TABLE_NAME='T_HISTOGRAM' ;

TABLE_NAME COLUMN_NAM ENDPOINT_NUMBER ENDPOINT_VALUE ENDPOINT_ACTUAL_VALUE
-------------------- ---------- --------------- -------------- --------------------------------------------------------------------------------
T_HISTOGRAM ID 0 49461
T_HISTOGRAM NAME 0 3.649482310901
T_HISTOGRAM DEPNO 0 0
T_HISTOGRAM ID 1 49481
T_HISTOGRAM NAME 1 3.651914664850
T_HISTOGRAM DEPNO 1 2

In Oracle's documentation I can't find a way to bypass the histograms; could you give me some suggestions?

Thank you
Peiyan Yu

Tom Kyte
May 10, 2005 - 7:38 am UTC

method_opt => 'for columns'

in 9i, 10g will not generate histograms.



about the setting of method_opt

Peiyan Yu, May 10, 2005 - 10:17 am UTC

Thank you very much.

I collected the statistics the way you told me. Indeed, with 'for columns' dbms_stats did not collect the histograms, but it also did not collect the column statistics; what I want is only to bypass the histograms, because with the default setting I think the histogram statistics are not exact.

Best regards
Peiyan Yu

Tom Kyte
May 10, 2005 - 10:45 am UTC

which do you want -- histograms of size 1 (column stats) or nothing?

about the setting of method_opt

Peiyan yu, May 10, 2005 - 9:59 pm UTC

Thank you again.

What I mean is that I want to collect only column statistics, not histograms.

With 'for all columns size 1', querying user_tab_histograms still shows histogram rows, and that's not what I want. But it seems that if I collect column statistics, I necessarily get some histogram rows, even if the size is one.

Best regards
Peiyan Yu




Tom Kyte
May 10, 2005 - 11:32 pm UTC

confused, example of what you

a) see
b) want to see

would help.

about the setting of method_opt

peiyan yu, May 11, 2005 - 9:56 am UTC

For example:
SQL> desc t_histogram
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 NAME                                               VARCHAR2(10)
 DEPNO                                              NUMBER

SQL> insert into t_histogram select object_id , substr(object_name,1,10) , mod(object_id,3)
  2  from all_objects where rownum<21 ;

20 rows created.

SQL> commit ;

Commit complete.

SQL> analyze table t_histogram delete statistics ;

Table analyzed.

Now collecting statistics, set the method_opt to 'for all columns size 1':

SQL> exec dbms_stats.gather_table_stats('SIRIUS','T_HISTOGRAM',METHOD_OPT=>'for all columns size 1') ;

PL/SQL procedure successfully completed.

I queried the resulting statistics.

This is the first part.
SQL> select table_name,column_name,density,last_analyzed
  2  from user_tab_columns where table_name='T_HISTOGRAM';

TABLE_NAME      COLUMN_NAM    DENSITY LAST_ANALYZE
--------------- ---------- ---------- ------------
T_HISTOGRAM     ID                .05 11-MAY-05
T_HISTOGRAM     NAME       .083333333 11-MAY-05
T_HISTOGRAM     DEPNO      .333333333 11-MAY-05


This is the second part.
SQL> select count(1) from user_tab_histograms
  2  where table_name ='T_HISTOGRAM' ;

  COUNT(1)
----------
         6

The first part is what I want, but the second part is not. So how should I set the method_opt parameter?

If I set the parameter to 'for columns', then I get no column statistics at all; that is not what I want either.

thanks
peiyan yu
 

Tom Kyte
May 11, 2005 - 12:16 pm UTC

you don't get one without the other -- why do you think you want one without the other?

about the setting of method_opt

peiyan yu, May 11, 2005 - 9:09 pm UTC

</code> http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96533/stats.htm#27065 <code>
<quote>
Histograms are not useful for columns with the following characteristics:

All predicates on the column use bind variables.
The column data is uniformly distributed.
The column is unique and is used only with equality predicates.
</quote>

So if method_opt is set to 'for all columns size 1' in such situations, are these histogram statistics helpful to the CBO?

Based on that, I think there should perhaps be an option that lets us bypass collecting them.

Of course, I can't find such a setting in the Oracle documentation, so I am asking you for help.

Thanks for your reply
peiyan yu


Tom Kyte
May 12, 2005 - 7:29 am UTC

and then you don't get the other statistics either. You either

a) want them
b) don't want them
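
Note (an observation consistent with the USER_HISTOGRAMS output above, offered as a sketch rather than an official statement): with 'for all columns size 1', the column statistics are exposed through USER_TAB_HISTOGRAMS as two endpoint rows per column - the column minimum (ENDPOINT_NUMBER 0) and maximum (ENDPOINT_NUMBER 1) - so rows in that view do not necessarily mean a multi-bucket histogram was built:

-- two endpoints per column = just min/max, i.e. "size 1" column statistics
SELECT column_name, COUNT(*) AS endpoints
  FROM user_tab_histograms
 WHERE table_name = 'T_HISTOGRAM'
 GROUP BY column_name;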



dbms_stats.create_stat_table which table?

Gabriel, August 15, 2005 - 3:42 pm UTC

Hello Tom,

Is it possible to find out which table the dbms_stats package uses to store the data? dbms_stats.create_stat_table lets users create their own table, but how would somebody else figure out which table it is?

Thank you

Tom Kyte
August 15, 2005 - 10:37 pm UTC

They would ask them?


dbms_stats.gather_* routines populate the dictionary.


You can then export/import them to a named table. You have to let others know what the name of this table is if you care to share them.

info not stored anywhere

Gab, August 17, 2005 - 2:54 pm UTC

Hello Tom,

If the person who created the stats table is on vacation and hasn't shared that info before leaving (we've all been there), is there any other way to find out which table it is?
I find it hard to believe that the identity of the table populated by the dbms_stats package is not stored somewhere in the dictionary. How does dbms_stats find out which table to store the info in every time it runs?

Thank you,

Tom Kyte
August 17, 2005 - 5:08 pm UTC

dbms_stats DOES NOT populate that table -- dbms_stats.gather_table_stats populates the dictionary.

you can decide to EXPORT (copy from the dictionary) statistics to a table, that is what this table is.

If the guy that COPIED them into a table is on vacation, so what, you don't "need" them.


I think you are confused about how/where stats go -- when you GATHER - they go into the dictionary (you know what tables those are)

when you EXPORT (copy) them, they go into any table name you desire, you could have hundreds of tables they are copied into. But only the person that copied them needs to know where they were copied to (they are a COPY)

what is this table then?

Gab, August 18, 2005 - 3:32 pm UTC

Hello Tom,

DBMS_STATS.GATHER_DATABASE_STATS (
estimate_percent NUMBER DEFAULT NULL,
block_sample BOOLEAN DEFAULT FALSE,
method_opt VARCHAR2 DEFAULT 'FOR ALL COLUMNS SIZE 1',
degree NUMBER DEFAULT NULL,
granularity VARCHAR2 DEFAULT 'DEFAULT',
cascade BOOLEAN DEFAULT FALSE,
stattab VARCHAR2 DEFAULT NULL,
statid VARCHAR2 DEFAULT NULL,
options VARCHAR2 DEFAULT 'GATHER',
objlist OUT ObjectTab,
statown VARCHAR2 DEFAULT NULL,
gather_sys BOOLEAN DEFAULT FALSE,
no_invalidate BOOLEAN DEFAULT FALSE,
gather_temp BOOLEAN DEFAULT FALSE);

stattab
User stat table identifier describing where to save the current statistics.

The statistics table is assumed to reside in the same schema as the object being analyzed, so there must be one such table in each schema to use this option.


Can you please tell me then what is the purpose of this stattab table and how can I find which one was used?

Thank you again,

Tom Kyte
August 18, 2005 - 4:59 pm UTC

that saves the current stats before they are replaced. 

the new stats go into the dictionary.


we neither care what this name is, nor do we track it.  It is not relevant to our operation, every time you call dbms_stats, it could have a *DIFFERENT NAME*.  If you are using this feature, one would certainly hope you would have a name well known to all in place (eg: dba's just don't log into the database and type in the gather command in sqlplus do they -- you have a job, certified by your team, agreed upon, in some sort of configuration management -- and you can just read that to see how you might name this table in your system)


ops$tkyte@ORA9IR2> exec dbms_stats.create_stat_table( user, 'OUR_FINE_NAME' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', stattab => 'OUR_FINE_NAME' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select count(*) from our_fine_name;

  COUNT(*)
----------
         0

ops$tkyte@ORA9IR2> select num_rows from user_tables where table_name = 'T';

  NUM_ROWS
----------
         0

ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t select * from all_objects;

28001 rows created.

ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', stattab => 'OUR_FINE_NAME' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select count(*) from our_fine_name;

  COUNT(*)
----------
        14

ops$tkyte@ORA9IR2> select num_rows from user_tables where table_name = 'T';

  NUM_ROWS
----------
     28001

<b>after the gather, the current stats - in dictionary, the old stats - in our_fine_name, but we neither care, nor are we dependent on "our_fine_name", it is just data -- data you are managing.  If this is a relevant table to you -- you would know it's name already</b>




  1  select owner, table_name
  2  from all_tab_columns
  3  where column_name in ( 'STATID', 'TYPE', 'VERSION', 'FLAGS' )
  4  group by owner, table_name
  5* having count(*) = 4
ops$tkyte@ORA9IR2> /

OWNER                          TABLE_NAME
------------------------------ ------------------------------
A                              SYSTEM_STATS
Z001698                        ST
OPS$TKYTE                      OUR_FINE_NAME


would be a way to see "candidate" tables -- tables that *might* have some stats saved into them.
 

Statistics by session on temporary tables

Grant in Calgary, September 15, 2005 - 1:42 pm UTC

Hi Tom,
You talked about creating statistics on temporary tables above. Is there a way to import or alter those statistics at the session level? The reason I ask is that I have a procedure, frequently called from multiple sessions concurrently, that uses a temporary table. The problem is that one session may populate the table with 50,000 records and work with it while, at the same time, another session has 20,000,000 records. I want to find a way to import the statistics per session for these temporary tables.

Tom Kyte
September 15, 2005 - 2:33 pm UTC

how many times is this query executed per day.

How do you fake stats for a function cast as a table ?

Rahul, September 16, 2005 - 5:22 pm UTC

Tom,

Learnt more than a few things from your site and your book. Keep it up!

A question regarding stats and function tables. We use queries of the following form :

SELECT fnc.col2,
       tab.col1,
       tab.col2
  FROM TABLE(CAST(my_func(4,'1,AA,2,BB,3,CC,4,DD') AS myTableType)) fnc,
       real_table tab
 WHERE fnc.id = tab.id;

Here my_func could be either a pipelined or a regular function.

Question is, how does the optimizer estimate the number of rows that fnc will return ?

If we know the approx number of rows that fnc will return, is there a way to supply these stats to the optimizer (via DBMS_STATS or hints) ?



Tom Kyte
September 16, 2005 - 6:12 pm UTC
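
A technique commonly used for this (a sketch only - it relies on the undocumented CARDINALITY hint, and may not be the exact answer originally given here) is to tell the optimizer how many rows to expect from the collection:

SELECT /*+ CARDINALITY(fnc 500) */   -- assumption: ~500 rows come back from fnc
       fnc.col2, tab.col1, tab.col2
  FROM TABLE(CAST(my_func(4,'1,AA,2,BB,3,CC,4,DD') AS myTableType)) fnc,
       real_table tab
 WHERE fnc.id = tab.id;

Without such a hint, the optimizer cannot see inside the function and falls back on a fixed default guess for the collection's cardinality.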

Works like a charm !

Rahul, September 16, 2005 - 6:46 pm UTC

Hard to believe, but I had never heard of asktom till I bought Expert one on one.

Since then I have been regularly searching here for solutions to my Oracle problems. I can't remember a time when I did not find a solution already analyzed in depth.

Guess I should have just looked harder for this one :).

Thanks a lot!

Statistics by session on temporary tables

Grant, September 21, 2005 - 4:12 pm UTC

Actually Tom, I continued searching your site after my question and became aware of dynamic sampling, which seems to be the answer to my problem of swapping statistics in and out for temporary tables.

Thanks anyways Tom.

GATHER_SCHEMA_STATS

Deepak, October 11, 2005 - 1:25 am UTC

Hi Tom,

We use the DBMS_STATS.GATHER_TABLE_STATS & DBMS_STATS.GATHER_INDEX_STATS to collect the index and table statistics for ALL of our schema tables & indexes.

Would like to know: if we use DBMS_STATS.GATHER_SCHEMA_STATS instead of the prior approach, will there be any benefits, and vice versa?

Tom Kyte
October 11, 2005 - 6:43 am UTC

do you use a consistent set of options for the gather commands or do they vary by table/index?

GATHER_SCHEMA_STATS

Deepak, October 11, 2005 - 10:39 am UTC

Hi Tom,

We use a consistent set of options for all tables/indexes.

Tom Kyte
October 11, 2005 - 3:38 pm UTC

then, verify you get the same stats doing what you believe to be equivalent commands in a TEST database (to verify the parameters you send to gather schema stats do the same thing as gather table stats does now) and feel free to simplify your process after you validate it.
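
For example, a sketch (substitute whatever consistent options you use for the per-table gathers today):

BEGIN
   DBMS_STATS.GATHER_SCHEMA_STATS
   ( ownname          => USER,
     estimate_percent => 10,
     method_opt       => 'FOR ALL COLUMNS SIZE 1',
     cascade          => TRUE );   -- cascade => TRUE gathers the index stats too
END;
/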

ORA-03113: end-of-file on communication channel

Branka, October 17, 2005 - 1:46 pm UTC


When I run
exec dbms_stats.gather_table_stats( ownname=>'BB', tabname =>'B_TAB', cascade=>true ,method_opt => 'FOR ALL COLUMNS SIZE AUTO')
I get ERROR at line 1:
ORA-03113: end-of-file on communication channel

Any other table gets analyzed without a problem.
When I analyze with
analyze table BB.B_TAB compute statistics for table for all indexes for all indexed columns
I don't have any problem.
Do you have any idea what the problem could be?
Thanks


Tom Kyte
October 17, 2005 - 2:42 pm UTC

3113, 7445, 600 -> all imply "please contact support, something unexpected has happened and we had to "stop""



DBMS_STATS - lots of unique SQL :(

Philip Moore, October 24, 2005 - 4:38 pm UTC

Hi Tom,

I often use DBMS_STATS - but I find that it generates a LOT of SQL when used on partitioned tables with locally partitioned indexes.

The use of DBMS_STATS (at least in 9i Release 2) seems to flood the shared pool with literally hundreds of unique SQL statements. This is due to the fact that DBMS_STATS issues SQL with "PARTITION()" clauses - with literal values for the partition name.

Imagine a star schema where we have a fact table with 12 partitions. We then create a bitmap index (Locally partitioned) on that table for just a single dimension.

When we run DBMS_STATS on the table, we get 26 unique SQL statements: 1 for the table "global" gather, 1 for the index "global" gather, 12 for the table partition gathers, and 12 for the index partition gathers. (3 more are shown in my demo below (1 each for the proc call, table build, and index build)).

This seems to directly refute what you have been trying to instill in us as developers for quite some time - use Bind Variables - and reuse stored SQL! Shouldn't Oracle get rid of the "PARTITION()" clauses - and instead place a predicate such as:

"sale_date BETWEEN :partition_low_value AND :partition_high_value"

Why not rely on Oracle's robust partition pruning capability instead of naming the partition in SQL - causing yet another unique SQL statement to get parsed, stored, etc (especially one that will NOT get executed again for quite some time)?

That would reduce our 26 statements to 4, a drastic difference - saving 22 slots for parsed SQL.

I give an example of this below. This test was done in Oracle 9i Release 2 (9.2.0.6)

phil@ORA9iR2 -> @c:\asktom
phil@ORA9iR2 -> /*----------------------------------------------------------------*/
phil@ORA9iR2 -> PROMPT Let's create our table
Let's create our table
phil@ORA9iR2 ->
phil@ORA9iR2 -> DROP TABLE asktom_test
2 /
DROP TABLE asktom_test
*
ERROR at line 1:
ORA-00942: table or view does not exist


phil@ORA9iR2 ->
phil@ORA9iR2 -> CREATE TABLE asktom_test
2 ( product_id INT
3 , sale_date DATE
4 , sale_amount NUMBER
5 )
6 PARTITION BY RANGE (sale_date)
7 ( PARTITION p_jan_2005 VALUES LESS THAN (TO_DATE('02/01/2005', 'MM/DD/YYYY'))
8 , PARTITION p_feb_2005 VALUES LESS THAN (TO_DATE('03/01/2005', 'MM/DD/YYYY'))
9 , PARTITION p_mar_2005 VALUES LESS THAN (TO_DATE('04/01/2005', 'MM/DD/YYYY'))
10 , PARTITION p_apr_2005 VALUES LESS THAN (TO_DATE('05/01/2005', 'MM/DD/YYYY'))
11 , PARTITION p_may_2005 VALUES LESS THAN (TO_DATE('06/01/2005', 'MM/DD/YYYY'))
12 , PARTITION p_jun_2005 VALUES LESS THAN (TO_DATE('07/01/2005', 'MM/DD/YYYY'))
13 , PARTITION p_jul_2005 VALUES LESS THAN (TO_DATE('08/01/2005', 'MM/DD/YYYY'))
14 , PARTITION p_aug_2005 VALUES LESS THAN (TO_DATE('09/01/2005', 'MM/DD/YYYY'))
15 , PARTITION p_sep_2005 VALUES LESS THAN (TO_DATE('10/01/2005', 'MM/DD/YYYY'))
16 , PARTITION p_oct_2005 VALUES LESS THAN (TO_DATE('11/01/2005', 'MM/DD/YYYY'))
17 , PARTITION p_nov_2005 VALUES LESS THAN (TO_DATE('12/01/2005', 'MM/DD/YYYY'))
18 , PARTITION p_dec_2005 VALUES LESS THAN (TO_DATE('01/01/2006', 'MM/DD/YYYY'))
19 )
20 /

Table created.

phil@ORA9iR2 ->
phil@ORA9iR2 -> /*----------------------------------------------------------------*/
phil@ORA9iR2 -> CREATE BITMAP INDEX asktom_test_product_id_bidx ON asktom_test(product_id)
2 LOCAL
3 /

Index created.

phil@ORA9iR2 ->
phil@ORA9iR2 -> /*----------------------------------------------------------------*/
phil@ORA9iR2 -> PROMPT Now let's insert some data...
Now let's insert some data...
phil@ORA9iR2 ->
phil@ORA9iR2 -> INSERT /*+ append */
2 INTO asktom_test
3 SELECT MOD(LEVEL, 100) AS product_id
4 , TO_DATE('01/01/2005', 'MM/DD/YYYY') + MOD(ROWNUM, 365) AS sale_date
5 , dbms_random.RANDOM AS sale_amount
6 FROM dual
7 CONNECT BY LEVEL <= 120000
8 /

120000 rows created.

phil@ORA9iR2 ->
phil@ORA9iR2 -> COMMIT ;

Commit complete.

phil@ORA9iR2 ->
phil@ORA9iR2 -> /*----------------------------------------------------------------*/
phil@ORA9iR2 -> PROMPT Now let's gather stats (and put a lot of SQL in the shared pool :))
Now let's gather stats (and put a lot of SQL in the shared pool :))
phil@ORA9iR2 ->
phil@ORA9iR2 -> BEGIN
2 DBMS_STATS.GATHER_TABLE_STATS (ownname => USER
3 , tabname => 'ASKTOM_TEST'
4 , partname => NULL
5 , estimate_percent => DBMS_STATS.auto_sample_size
6 , block_sample => TRUE
7 , method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO'
8 , DEGREE => NULL
9 , granularity => 'ALL'
10 , CASCADE => TRUE
11 , stattab => NULL
12 , statid => NULL
13 , statown => NULL
14 , no_invalidate => NULL
15 );
16 END;
17 /

PL/SQL procedure successfully completed.

phil@ORA9iR2 ->
phil@ORA9iR2 -> /*----------------------------------------------------------------*/
phil@ORA9iR2 -> PROMPT Now let's see how many statements were generated for this gather stats proc call...
Now let's see how many statements were generated for this gather stats proc call...
phil@ORA9iR2 ->
phil@ORA9iR2 -> SELECT sql_text
2 FROM v$sqlarea
3 WHERE UPPER(sql_text) LIKE '%ASKTOM_TEST%'
4 /

SQL_TEXT
----------------------------------------------------------------------------------------------------
SELECT sql_text FROM v$sqlarea WHERE UPPER(sql_text) LIKE '%ASKTOM_TEST%'
analyze index "PHIL"."ASKTOM_TEST_PRODUCT_ID_BIDX" ESTIMATE statistics sample 5 percent
select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring*/ count(*) from "PHIL"."ASK
TOM_TEST" sample block ( 1.0000000000,1) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_APR_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_AUG_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_DEC_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_JAN_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_JUL_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_JUN_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_MAR_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_MAY_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_NOV_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_OCT_2005") sample block ( 10.0000000000,1
) t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t
) */ count(*) from "PHIL"."ASKTOM_TEST" partition ("P_SEP_2005") sample block ( 10.0000000000,1
) t

INSERT /*+ append */ INTO asktom_test SELECT MOD(LEVEL, 100) AS product_id , TO_DATE('01/01/200
5', 'MM/DD/YYYY') + MOD(ROWNUM, 365) AS sale_date , dbms_random.RANDOM AS sale_amount FROM dual
CONNECT BY LEVEL <= 120000

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring*/ count(*),count("PRODUCT_ID"),c
ount(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump(min("PRODUCT_ID"),16,0,32),1,120),
substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE"),count("SALE_AMOUNT"),sum(vsize("SA
LE_AMOUNT")) from "PHIL"."ASKTOM_TEST" t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_APR_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_AUG_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_DEC_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_FEB_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_JAN_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_JUL_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_JUN_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_MAR_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_MAY_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_NOV_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_OCT_2005
") t

select /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring noparallel(t) noparallel_index(t)
*/ count(*),count("PRODUCT_ID"),count(distinct "PRODUCT_ID"),sum(vsize("PRODUCT_ID")),substrb(dump
(min("PRODUCT_ID"),16,0,32),1,120),substrb(dump(max("PRODUCT_ID"),16,0,32),1,120),count("SALE_DATE")
,count("SALE_AMOUNT"),sum(vsize("SALE_AMOUNT")) from "PHIL"."ASKTOM_TEST" partition ("P_SEP_2005
") t

BEGIN DBMS_STATS.GATHER_TABLE_STATS (ownname => USER ,
tabname => 'ASKTOM_TEST' , partname => NULL
, estimate_percent => DBMS_STATS.auto_sample_size
, block_sample => TRUE , method_opt
=> 'FOR ALL INDEXED COLUMNS SIZE AUTO' , DEGREE => NU
LL , granularity => 'ALL' , CA
SCADE => TRUE , stattab => NULL
, statid => NULL , statown
=> NULL , no_invalidate => NULL
); END;


29 rows selected.

phil@ORA9iR2 -> spool off

-----------------------------------------------------

This simple test was with a single-dimensioned table, with just 1 locally partitioned bitmap index. Imagine how much SQL would be generated if we had a table with 10 dimension columns, and 10 bitmap indexes.

By the way - I may be missing something here - one doesn't know what one doesn't know - so maybe there is a flag that can "disable" this verbose SQL generation... If so - kindly share.

Thanks for your help sir - have a good day.

Sincerely,

Phil

Tom Kyte
October 25, 2005 - 1:20 am UTC

you call 26 "flooding the shared pool"?

Please bother to read

Philip Moore, October 25, 2005 - 11:28 am UTC

Hi Tom,

No - I don't call 26 "flooding the shared pool". I used a simple example as you require to demonstrate that a table with just one index and 12 partitions created a lot of unique SQL.

If you had bothered to read my response in entirety - you would have noticed that it would create many more with more indexes/dimension columns. We have cases where it generates 1,000's of statements (by refreshing multiple partitioned-Materialized Views with bitmap indexes then gathering stats).

I think you are getting too busy nowadays to monitor this site. Instead of offering a helpful suggestion you answer with a question that intends to mock the questioner. Instead of reading a question in entirety - you now seem to jump on the first "gotcha" (hmm - sounds a lot like Oracle support). How does this help people become successful with Oracle software?

Sincerely,

Philip

Tom Kyte
October 26, 2005 - 7:33 am UTC

"If you had bothered to read my response in entirety"

Ok, fine.

Yes, you are correct. I'll go onto the next thing now.

Seriously - you are correct -- when I have to hit page down over and over and over and over in a comment/followup - you are right. I cannot read them all. I try to take *new* questions - but spend too much time doing this part of it.

So yes, you are absolutely correct - I breeze through the reviews and followups (check out the home page to get a count of how many in the last 4 weeks).




BLOB Statistics

Deepak, October 26, 2005 - 5:04 am UTC

Hi Tom,

I want to know: when I collect statistics on a table using the DBMS_STATS package, are any statistics related to BLOB columns gathered?

Let's suppose
> I gathered the statistics of a table T1 using DBMS_STATS.
> Added a Blob column to the table T1.

Do I need to collect statistics for table T1 again? If I do not, will it affect performance?

Please help.


Tom Kyte
October 26, 2005 - 12:14 pm UTC

the blob data is generally stored out of line (not in the table)

and if you added a blob column, just added it, nothing in the table changed (the values are all null; they are the last column; they consume zero bytes of storage). So nothing would have changed.



Thanks Tom

Deepak, October 27, 2005 - 1:09 am UTC

Thanks Tom.

Capturing schema statistics for all users except SYS and SYSTEM

Vikas Katyal, November 16, 2005 - 3:33 pm UTC

Hi Tom,

Can you please let me know what is the mistake here in the code.


SQL> SET SERVEROUTPUT ON
SQL> DECLARE
  2          l_job  NUMBER;
  3  BEGIN
  4       for x in (Select username from dba_users where username NOT IN ('SYS','SYSTEM')) loop
  5      DBMS_JOB.submit(l_job,
  6                      'BEGIN DBMS_STATS.gather_schema_stats(' || x.username || ')''; END;',
  7                        trunc(SYSDATE),
  8                       'trunc(SYSDATE) + 1');
  9       end loop;
 10       COMMIT;
 11       DBMS_OUTPUT.put_line('Job: ' || l_job);
 12     END;
 13    /
DECLARE
*
ERROR at line 1:
ORA-06550: line 1, column 138:
PLS-00103: Encountered the symbol "; END; END; " when expecting one of the following:
:= . ( % ;
ORA-06512: at "SYS.DBMS_JOB", line 79
ORA-06512: at "SYS.DBMS_JOB", line 136
ORA-06512: at line 5

Thanks 

Tom Kyte
November 16, 2005 - 6:19 pm UTC

the statement you generate would resemble this:

ops$tkyte@ORA10GR2> declare
  2  x_username varchar2(30) := 'SCOTT';
  3  begin
  4          dbms_output.put_line(
  5  'BEGIN DBMS_STATS.gather_schema_stats(' || x_username || ')''; END;'
  6  );
  7  end;
  8  /<b>
BEGIN DBMS_STATS.gather_schema_stats(SCOTT)'; END;
</b>
PL/SQL procedure successfully completed.



missing a quote and a quote in the wrong place... 
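
that is, the generated statement needs the username inside quotes and no stray quote after the closing parenthesis. A corrected sketch of the submit call:

DBMS_JOB.submit( l_job,
                 'BEGIN DBMS_STATS.gather_schema_stats(''' || x.username || '''); END;',
                 trunc(SYSDATE),
                 'trunc(SYSDATE) + 1' );

which generates, for example:

BEGIN DBMS_STATS.gather_schema_stats('SCOTT'); END;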

In RAC env

David, January 15, 2006 - 12:54 pm UTC

Tom, Is there a way to restrict dbms_stats.gather_table_stats to use only one instance to spawn parallel process assuming I specify value 4 for DEGREE attribute. Thanks.

Tom Kyte
January 15, 2006 - 3:54 pm UTC

you have to go out of your way to permit it to do cross instance parallel query - do you have that even set up?

Donald K. Burleson

BH, January 16, 2006 - 9:42 am UTC

From further up the thread:

-------------------------------------------------------

And your statement: "but one would question WHY you wouldn't want them put into
the dictionary since they would be more current/accurate".

But then I recently read an article by Donald K. Burleson where he states:
"One of the most common mistakes made by Oracle DBAs is to frequently re-analyze
the schema. Remember, the sole purpose of doing that is to change the execution
plans for your SQL, and if it ain't broke, don't fix it. If you are satisfied
with your current SQL performance, re-analyzing
a schema could cause significant performance problems and undo the tuning
efforts of the development staff. In practice, very few shops are sufficiently
dynamic to require periodic schema re-analysis."

-------------------------------------------------------

Just commenting on Donald K. Burleson and, for that matter, Mike Ault. Anyone who boasts about being one of the world's leading experts blah blah blah in this, that and the other should be taken with an extremely large truckload of salt. And these two are no exception.

Over the years, I have found numerous mistakes in works by both of these 'experts', ranging from simple editing mistakes upwards. I would strongly dissuade anyone from buying books and other materials from this intrepid duo; instead, stick with authors like Cary Millsap, Steve Feuerstein, Steve Adams, Jonathan Lewis and of course the excellent Tom himself. Take the word of Messrs Burleson and Ault at your peril. You may well risk your own and your company's livelihood!!!


DBMS_STAT

Mukesh, March 25, 2006 - 1:46 am UTC

Dear Tom,
Actually I am new to dbms_stats, and I want to know whether this utility helps in the following situation:
Suppose table M has 100 rows and I insert 2 more rows into it. In this case, will this utility provide complete statistics for that table? If yes, please tell me how.
My second question: when specifying GATHER AUTO in the options clause, how much does it analyze (what percentage of modifications to the table triggers it by default)?
The third thing I would like to ask: what does GATHER AUTO do, estimate or compute?

Tom Kyte
March 25, 2006 - 7:38 am UTC

if you add 2 rows to 100, it is highly unlikely you would consider gathering statistics.

dbms_stats is a package that gathers statistics used by the cost based optimizer to come up with query plans.

auto does whatever auto wants to do. It might estimate, it might not, it might get histograms, it might not.
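
For reference, a sketch of the documented syntax (what "auto" decides internally is release dependent):

BEGIN
   DBMS_STATS.GATHER_SCHEMA_STATS
   ( ownname => USER,
     options => 'GATHER AUTO' );   -- let Oracle decide what and how to gather
END;
/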



Performance Issue after gathering stats on a New Partition

VLS, March 27, 2006 - 3:48 am UTC

Hi Tom,

I have a partitioned table which is around 100GB in size, partitioned by month.
Every month, after I add a partition and gather stats on it once some data
has been inserted, I see a major performance problem in queries involving this table, and
as a workaround I gather stats on the entire table. After that, performance returns to normal. But do I really need to gather stats on the entire table rather than just the one partition? Gathering stats on the whole table every month takes a lot of time and huge resources. The DBAs say it has to do with the global stats of the table when a partition is analyzed.

I would appreciate a workaround that lets me avoid gathering statistics on the entire table every month.

Regards
VLS


Tom Kyte
March 27, 2006 - 9:57 am UTC

if you KNOW the statistics, you can just tell us.

Eg: if you gather stats just on the new partition you know things like how many rows you added, how many blocks it added and so on. You could also determine the new high/low values for certain columns (like timestamps/dates/sequence populated columns) and set them for the table using dbms_stats.set_**** functions.


The optimizer doesn't care WHERE the stats come from, how they get into the dictionary - only that they actually exist.


If Oracle cannot determine precisely what SINGLE partition will be accessed at hard parse time, Oracle will use global table stats - it is likely your queries do not access a single partition (or that partition cannot be determined at hard parse time) and hence use global table stats - not local partition stats.

But you can SET them instead of GATHERING them.
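
A sketch of that approach - the table, partition and numbers below are hypothetical placeholders; you would plug in values you know to be representative of the monthly load:

BEGIN
   DBMS_STATS.SET_TABLE_STATS
   ( ownname  => user,
     tabname  => 'BIG_PART_TABLE',   -- hypothetical table name
     partname => 'P_NEW',            -- the newly added partition
     numrows  => 5000000,            -- hypothetical: rows you expect it to hold
     numblks  => 60000,              -- hypothetical: blocks they occupy
     avgrlen  => 100 );              -- hypothetical: average row length
END;
/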

Performance Issue after adding new partition

VLS, March 27, 2006 - 10:12 pm UTC

Hi Tom,

As a workaround, I started generating the stats on the newly added partition, after some data had been inserted into it, using the following command:

exec dbms_stats.gather_table_stats(ownname=>'VLS',tabname=>'PART_EXAMPLE',partname=>'P4',cascade=>TRUE);

When I execute this command, it generates the stats for the new partition and also updates the global stats of the table.

As per your suggestion, I will follow this policy whenever I add a new partition:

1. Add the new partition when the load on the database is low, which in my case is during night hours when only batch processes run.
2. Flush the shared pool so that queries on the table to which I added the partition are hard parsed.
3. Once enough data to impact performance has been inserted into the partition, gather the stats using the command above.

Will that be OK? The procedure above should not hurt performance. Am I right?

Would appreciate if you can validate this.

Regards
VLS



Tom Kyte
March 28, 2006 - 7:48 am UTC

2) is not necessary, we'll already do that. In fact, flushing the shared pool would be bad since only the affected SQL statements need to go.

3) you might just use dbms_stats.set_* API's after adding the partition to give it some representative statistics to start with. that is, add partition WITH some reasonable statistics associated with it already.

SQL Reference Guide documentation bug (for EXCHANGE PARTITION)???

Phil, March 28, 2006 - 12:38 pm UTC

Hi Tom,

After working with ALTER TABLE ... EXCHANGE PARTITION for loading datawarehouses - I think I discovered either a documentation bug or a database bug with 9iR2 (9.2.0.6).

I had hoped to gather stats on my "little" table prior to the swap - and have those statistics factored into the larger table's global statistics after the exchange, as the documentation states in the 9iR2 SQL Reference Guide.  Here is the excerpt from page 11-82:

"All statistics of the table and partition are exchanged, including table, column, index
statistics, and histograms. The aggregate statistics of the table receiving the new
partition are recalculated."

Here is a demo showing that this does NOT occur:

SQL> @c:\asktom2
SQL> COLUMN table_name FORMAT A30;
SQL> COLUMN last_analyzed FORMAT A20;
SQL> SET linesize 80;
SQL> 
SQL> SELECT *
  2  FROM v$version;

BANNER                                                                          
----------------------------------------------------------------                
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production                
PL/SQL Release 9.2.0.6.0 - Production                                           
CORE    9.2.0.6.0    Production                                                       
TNS for HPUX: Version 9.2.0.6.0 - Production                                    
NLSRTL Version 9.2.0.6.0 - Production                                           

SQL> 
SQL> DROP TABLE big_table;

Table dropped.

SQL> 
SQL> DROP TABLE swap_table;

Table dropped.

SQL> 
SQL> CREATE TABLE big_table
  2  (a INT
  3  )
  4  PARTITION BY RANGE (a)
  5  (PARTITION p1 VALUES LESS THAN (10000)
  6  );

Table created.

SQL> 
SQL> -- Create our swap table
SQL> CREATE TABLE swap_table
  2  AS
  3  SELECT *
  4  FROM big_table;

Table created.

SQL> 
SQL> -- Generate some data
SQL> INSERT INTO swap_table
  2      SELECT       LEVEL
  3            FROM DUAL
  4      CONNECT BY LEVEL < 10000;

9999 rows created.

SQL> 
SQL> INSERT INTO big_table
  2      SELECT       LEVEL
  3            FROM dual
  4      CONNECT BY LEVEL < 5000;

4999 rows created.

SQL> 
SQL> COMMIT ;

Commit complete.

SQL> 
SQL> -- Get thorough stats (global and partitioned)
SQL> BEGIN
  2      DBMS_STATS.GATHER_TABLE_STATS (ownname             => USER
  3                       , tabname             => 'SWAP_TABLE'
  4                       , partname          => NULL
  5                       , estimate_percent      => DBMS_STATS.auto_sample_size
  6                       , block_sample         => TRUE
  7                       , method_opt         => 'FOR ALL INDEXED COLUMNS SIZE AUTO'
  8                       , DEGREE             => NULL
  9                       , granularity         => 'ALL'
 10                       , CASCADE             => TRUE
 11                       , stattab             => NULL
 12                       , statid             => NULL
 13                       , statown             => NULL
 14                       , no_invalidate         => NULL
 15                        );
 16  END;
 17  /

PL/SQL procedure successfully completed.

SQL> 
SQL> BEGIN
  2      DBMS_STATS.GATHER_TABLE_STATS (ownname             => USER
  3                       , tabname             => 'BIG_TABLE'
  4                       , partname          => NULL
  5                       , estimate_percent      => DBMS_STATS.auto_sample_size
  6                       , block_sample         => TRUE
  7                       , method_opt         => 'FOR ALL INDEXED COLUMNS SIZE AUTO'
  8                       , DEGREE             => NULL
  9                       , granularity         => 'ALL'
 10                       , CASCADE             => TRUE
 11                       , stattab             => NULL
 12                       , statid             => NULL
 13                       , statown             => NULL
 14                       , no_invalidate         => NULL
 15                        );
 16  END;
 17  /

PL/SQL procedure successfully completed.

SQL> 
SQL> -- See the stats before the swap
SQL> SELECT table_name
  2        , global_stats
  3        , num_rows
  4        , last_analyzed
  5  FROM user_tables
  6  WHERE table_name IN ('BIG_TABLE', 'SWAP_TABLE');

TABLE_NAME                     GLO   NUM_ROWS LAST_ANALYZED                     
------------------------------ --- ---------- --------------------              
BIG_TABLE                      YES       4999 28-MAR-06                         
SWAP_TABLE                     YES       9999 28-MAR-06                         

SQL> 
SQL> -- Swap
SQL> ALTER TABLE big_table
  2  EXCHANGE PARTITION p1 WITH TABLE swap_table
  3  INCLUDING INDEXES WITH VALIDATION;

Table altered.

SQL> 
SQL> -- See the stats after the first swap
SQL> SELECT table_name
  2        , global_stats
  3        , num_rows
  4        , last_analyzed
  5  FROM user_tables
  6  WHERE table_name IN ('BIG_TABLE', 'SWAP_TABLE');

TABLE_NAME                     GLO   NUM_ROWS LAST_ANALYZED                     
------------------------------ --- ---------- --------------------              
BIG_TABLE                      YES       4999 28-MAR-06                         
SWAP_TABLE                     YES       4999 28-MAR-06                         

SQL> 
SQL> -- Swap the partition back
SQL> ALTER TABLE big_table
  2  EXCHANGE PARTITION p1 WITH TABLE swap_table
  3  INCLUDING INDEXES WITH VALIDATION;

Table altered.

SQL> 
SQL> -- See the stats after the swap back
SQL> SELECT table_name
  2        , global_stats
  3        , num_rows
  4        , last_analyzed
  5  FROM user_tables
  6  WHERE table_name IN ('BIG_TABLE', 'SWAP_TABLE');

TABLE_NAME                     GLO   NUM_ROWS LAST_ANALYZED                     
------------------------------ --- ---------- --------------------              
BIG_TABLE                      YES       4999 28-MAR-06                         
SWAP_TABLE                     YES       9999 28-MAR-06                         

----------------------------------------------

Is this a database bug or a documentation bug?  I know I could use your workaround mentioned above - but just wanted to call this out...

Thanks!

Sincerely,

Phil 

Tom Kyte
March 28, 2006 - 8:05 pm UTC

I'll look into this - have bookmarked this one.

Global statistics update on exchange

Michel Cadot, March 29, 2006 - 2:27 am UTC

I ran many tests with multi-partition tables (to avoid the singularity of a one-partition table), with estimate or compute, and with different method_opt and granularity values (ALL, GLOBAL, PARTITION to get estimated or computed global statistics), in 9.2.0.6 and 10.2.0.1. In all cases only partition statistics are updated (exchanged); the global ones remain the same.
I think it is more a documentation bug, as the purpose of exchange is mainly speed, and recalculating global statistics could be quite a long task.

Regards
Michel

Tom Kyte
March 29, 2006 - 7:15 am UTC

many of the global stats could be UPDATED - no need to recompute.

if there were 1,000,000,000 rows before the exchange and you took away 1,000,000 but added 2,000,000 with the exchange....

if there were N blocks before and you took away M and added L....


so many of them could be maintained - but not all. If you had 10 partitions and a column X has 100 distinct values globally and in the new partition column X has 100 distinct values - after the exchange all you can say is X has between 101 and 200 distinct values
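
A sketch of that kind of arithmetic maintenance, using the BIG_TABLE demo above (the deltas are hypothetical - in practice you would use the row and block counts gathered on the swap table and on the outgoing partition):

DECLARE
   l_numrows NUMBER;
   l_numblks NUMBER;
   l_avgrlen NUMBER;
BEGIN
   -- read the current global statistics
   DBMS_STATS.GET_TABLE_STATS
   ( ownname => USER, tabname => 'BIG_TABLE',
     numrows => l_numrows, numblks => l_numblks, avgrlen => l_avgrlen );

   -- subtract what left with the old partition, add what arrived with the new
   DBMS_STATS.SET_TABLE_STATS
   ( ownname => USER, tabname => 'BIG_TABLE',
     numrows => l_numrows - 1000000 + 2000000,
     numblks => l_numblks - 12000 + 25000 );
END;
/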

Any luck?

Phil, April 12, 2006 - 10:16 am UTC

"I'll look into this - have book marked this one. "

Hi Tom, just wanted to check if you believe I should open a TAR for the database or the documentation set...

(In the meantime, I've written a packaged procedure to remove the old partition's rows and blocks from the global totals for the partitioned table and then add the swap table's rows and blocks to that result. I do the same for each index. I don't touch the "average" statistics such as row length, nor do I touch histograms. Do you think this is OK?)

Thank you sir.

Phil


Tom Kyte
April 12, 2006 - 11:18 am UTC

I think that is OK, I still have this on my todo list...

If you could file a tar that would be great really - to get it started.

ora-12842 and dbms_stats

sudershan, May 18, 2006 - 3:41 pm UTC

ORA-9206
Description
ORA-12842 possible during a Parallel Query if concurrent DDL occurs
against the objects on which the query depends.

Tom,
Have you ever seen ORA-12842 in conjunction with DBMS_STATS
running on a table?
I know DBMS_STATS does not take a lock on the table, but are there any other issues if DML is running against a table while stats are gathered?
Also, could DBMS_STATS be running at the same time through
two different jobs on the same object? Will that cause any issues?

Thanks.



Tom Kyte
May 19, 2006 - 11:40 am UTC

dbms_stats can use parallel query.
dbms_stats uses sql, like a program would.

[tkyte@vmware ~]$ oerr ora 12842
12842, 00000, "Cursor invalidated during parallel execution"
// *Cause: The cursor was invalidated during the parse phase of deferred
// parallel processing, e.g. when set operands are parallelized.
// *Action: Depends on why the cursor was invalidated. Possible causes include
// DDL on a schema object and shared pool being flushed.
[tkyte@vmware ~]$




What happens when the analyze is stopped in between?

Koshal, May 22, 2006 - 10:41 am UTC

We have a big table with 200 million records (175 columns). The table is analyzed weekly, which takes almost 9 hours. Sometimes during a release we cannot allocate 9 hours for the analyze, only 7. In such a scenario, is it better not to analyze that week, or to complete whatever is possible? For reasons best known to them, they are using only ANALYZE, not dbms_stats.


Thanks
Koshal


Exporting statistic from one partition to another

Max, June 06, 2006 - 1:44 pm UTC

Tom,

I have exported statistics from one (range) partition to another, but the query plan is not the same when I change the range in my predicate. I am trying to understand why.

Here is my syntax for exporting statistics:
--export statistics from an old partition (ie, 'PART_20060529')
begin
DBMS_STATS.EXPORT_TABLE_STATS (
ownname => 'ACR_ADMIN',
tabname => 'ACR_MESSAGE',
partname => 'PART_20060529',
stattab => 'ACR_MESSAGE_STAT',
cascade => TRUE);
end;
/

update ACR_MESSAGE_STAT set c2 = 'PART_20060530';
update ACR_MESSAGE_STAT set c3 = 'PART_20060530_US';


--import statistics from an old partition into the new partition (ie, 'PART_20060530')
begin
DBMS_STATS.IMPORT_TABLE_STATS (
ownname => 'ACR_ADMIN',
tabname => 'ACR_MESSAGE',
partname => 'PART_20060530',
stattab => 'ACR_MESSAGE_STAT',
cascade => TRUE);
end;
/

I have reviewed user_tab_partitions, user_tab_subparitions to make sure that the statistics were updated.

I am trying to understand why the same query with a different date predicate does not give me the same plan if the statistics are the same; or are they not the same?

Tom Kyte
June 06, 2006 - 2:07 pm UTC

give full example please.

DBMS_STATS

AD, December 04, 2006 - 2:06 pm UTC

Hi Tom,

Some are of the opinion that we should write a generic procedure that uses DBMS_STATS to gather stats across all tables, with the same parameter values passed in for every situation.

Can you please suggest is this a right approach?

Thanks & Regards


Tom Kyte
December 05, 2006 - 9:32 pm UTC

one of three things could be said:

a) it could be good
b) it could be bad
c) it could be neither good nor bad

I doubt it would be (a) in most cases.

dbms_stats automatic options and estimate value

jean-pierre, February 27, 2007 - 8:10 am UTC

Tom,
We will migrate from 8i to 10g. The majority of our databases are OLTP, some are big databases (with partitioned tables), and others still use the RBO.

1) Is it better to use method_opt = 'for all columns size 1' for an OLTP database (this option currently seems good to us)?
2) For a DSS environment, is it better to use method_opt = 'for all indexed columns size auto'? Is the best option SIZE AUTO or SIZE SKEWONLY?
3) Is AUTO_SAMPLE_SIZE a relevant option, and could the estimate percentage differ between a table and its related indexes?
4) If no statistics are available (new objects), which percentage does Oracle choose the first time?
5) If that option is not relevant, is it a good idea to separate the gathering of index statistics from that of the tables, in order to specify a different percentage for indexes?
6) For tables we can take a percentage between 5 and 10%. Do indexes need a higher percentage (even compute)?
7) In your opinion (experience), which automatic options (often the defaults) are inappropriate (dangerous)?

thanks a lot.
Jean-Pierre

DBMS_STATS, 'LIST STALE' considerations and limitations

jean-pierre, February 27, 2007 - 8:15 am UTC

Tom,
1) Is it a good idea to iterate on each schema, collect the STALE object ('LIST STALE' option in first), and execute for each table the
gather_table_stats procedure with the following options :
estimate_percent=>dbms_stat.AUTO_SAMPLE_SIZE
method_opt=>'FOR ALL COLUMNS SIZE 1'
degree=> dbms_stats.AUTO_DEGREE
cascade=>TRUE
granularity=>'AUTO'
2) 'LIST STALE' option do not returns objects with empty statistics. It is moreover necessary to use 'LIST EMPTY' ?
3) if only one table partition has more than 10 percent of modifications, 'LIST STALE' will return how many rows (one for the concerned partition, and
one for the table level ?). What must I do to gather statistics for the concerned partition and at the table level, and for the related indexes (global or local) ?
Is it enought to execute only one statement to gather statistics : with granularity = AUTO and cascade = true or AUTO_CASCADE ?
4) if a new partition is added, 'LIST STALE' will return no rows ? It is neccessary to use 'LIST EMPTY' or gather statistics explicitely to the partition ?
5) in 10g when an index is created, statistics are implicitely gathered. With which options ? Can I disable this behaviour ?
6) GATHER_STATS_AUTO job has a stop_on_window_close option causing to terminate this job at the end of maintenance window. How can I do the same with the method
I have said before ?
7) GATHER_STATS_AUTO is (according to Oracle documentation) similar to GATHER_DATABASE_STATS with 'GATHER AUTO' option. But in the DB Consol we can owerride different
options. And yet, in documentation we can read that with 'GATHER AUTO', parameters like METHOD_OPT,DEGREE,and others are ignored. Where is the truth ?

thanks a lot.
Jean-Pierre

schema name in dbms_stats.gather_table_stats

A reader, March 16, 2007 - 10:35 am UTC

Tom, please explain the schema-name parameter of DBMS_STATS.gather_table_stats.

SQL> conn scott
Enter password: *****
Connected.
SQL> create table t as select * from dual;

Table created.

-- I try to do this but get an error
SQL> exec dbms_stats.gather_table_stats(scott, 't');
BEGIN dbms_stats.gather_table_stats(scott, 't'); END;

                                    *
ERROR at line 1:
ORA-06550: line 1, column 37:
PLS-00201: identifier 'SCOTT' must be declared
ORA-06550: line 1, column 7:
PL/SQL: Statement ignored

SQL> exec dbms_stats.gather_table_stats(user, 't');

PL/SQL procedure successfully completed.



But when I use USER as the schema name, it succeeds. So what is USER, actually?

Thanks.
Tom Kyte
March 17, 2007 - 3:54 pm UTC

you need to pass a string in.

scott - that is an identifier

'SCOTT' - that is a string with the schema name scott in it.

USER - that is a function that returns the string 'SCOTT' when scott is logged on.
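
In other words, a one-line sketch:

SQL> exec dbms_stats.gather_table_stats( 'SCOTT', 't' );

would also work, because 'SCOTT' is now a string literal rather than an identifier.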

dbms_stats

reader, March 27, 2007 - 4:22 pm UTC

In sqlserver 2005, there is a database option called AUTO_CREATE_STATISTICS. If this is set to ON, statistics are automatically created on columns used in a predicate.
Is there a similar option in oracle when we use dbms_stats package in 9i?
In 10g, does GATHER_STATS_JOB do this?

Thanks.
Tom Kyte
March 28, 2007 - 11:22 am UTC

global_stats

A reader, May 29, 2007 - 12:35 pm UTC

What is global_stats?
How do I make it YES or NO in dba_tables?
Is it important for the optimizer's decisions?

Thanks,
Tom Kyte
May 30, 2007 - 11:04 am UTC

are you using partitioning? if not, they mean nothing to you.

dbms_stats slow on tables with many columns

Udi Karni, June 20, 2007 - 5:28 pm UTC

Hi Tom,

Dbms_stats typically runs very fast, even on partitioned tables. However - on very wide (hundreds of columns) partitioned tables - it switches to a cycle of "fast-slow" which overall takes a long time.

Increasing the parallelism or decreasing the sample size doesn't help.

Ideas?

NUM_ROWS and SAMPLE_SIZE

Chaman, July 30, 2007 - 11:41 am UTC

Tom,
The NUM_ROWS column of ALL_TABLES holds a value nearly equal to the actual number of rows, but the SAMPLE_SIZE column is 5% of the number of rows.

Does this mean the table was analyzed with a sample size of 5%? If so, shouldn't NUM_ROWS then be equal to 5% of the total number of rows?


Tom Kyte
July 30, 2007 - 5:42 pm UTC

why would num_rows be 5%

if we sample 5% of the rows, we extrapolate up (eg: multiply by 20) to arrive at....

the aggregate statistics.

we sample to avoid processing the entire result set and we extrapolate (guess) at the real numbers.
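
for example, if a 5% sample reads 50,000 rows, NUM_ROWS is reported as roughly 50,000 * 20 = 1,000,000, while SAMPLE_SIZE records the 50,000 rows actually inspected.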

granularity and partname in dbms_stats

Jay, July 30, 2007 - 5:05 pm UTC

Hi, Tom,

If I just specify partname without specifying granularity, does gather_table_stats collect stats at the table level or the partition level? The situation is that I have a large partitioned table, with 65 partitions of 1.5 million rows each. For the moment I just wanted to collect stats on a newly inserted partition. But it took a long time, and during the process it looked like it was repeatedly scanning all partitions of the full table (as I could see in v$session_longops), executing something like

insert /*+ append */ into sys.ora_temp_1_ds_26332
select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact
use_weak_name_resl dynamic_sampling(0) no_monitoring */
col1, col2, ..., col19, rowid SYS_DS_ALIAS_0
from myschema.mytable sample ( .1274453176) t;

and then

select /*+ no_parallel(t) no_parallel_index(t) dbms_stats cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring */ count(*),count("COL1"),count(distinct "COL2"),... from myschema.mytable sample ( 1.2744531756) t;

So I guessed it was actually collecting stats on the full table since I didn't specify granularity, even though I specified the partname.

However, at the end of the stats collection, I checked dba_tab_partitions and dba_ind_partitions, and saw that only the partition I specified had a recently updated LAST_ANALYZED column. So I'm confused about why it was running the two SQL statements above at all, in an apparent scan of the full table.

Thanks,
Jay

Tom Kyte
August 02, 2007 - 9:07 am UTC

It'll vary by release possibly, as granularity's default value can and will change. However, in 10gR2 it defaults to AUTO and will gather the table (global) statistics as well as the partition (local) statistics for that single partition; similar results will be seen in 9i (except that get_param did not exist back then..) :

ops$tkyte%ORA10GR2> CREATE TABLE t
  2  (
  3  OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
  4  OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS,
  5  TEMPORARY, GENERATED, SECONDARY
  6  )
  7  PARTITION BY hash (object_id) partitions 8
  8  as
  9  select * from all_objects
 10  /

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> column partition_name new_val PNAME
ops$tkyte%ORA10GR2> select partition_name from user_tab_partitions where table_name = 'T';

PARTITION_NAME
------------------------------
SYS_P737
SYS_P738
SYS_P739
SYS_P740
SYS_P741
SYS_P742
SYS_P743
SYS_P744

8 rows selected.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select dbms_stats.get_param( 'GRANULARITY' ) from dual;

DBMS_STATS.GET_PARAM('GRANULARITY')
-------------------------------------------------------------------------------
AUTO

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', '&PNAME' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select to_char(last_analyzed,'dd-mon-yyyy hh24:mi:ss') la, num_rows from user_tables where table_name = 'T';

LA                     NUM_ROWS
-------------------- ----------
02-aug-2007 08:52:02      49980

ops$tkyte%ORA10GR2> select partition_name, to_char(last_analyzed,'dd-mon-yyyy hh24:mi:ss') la, num_rows from user_tab_partitions where table_name = 'T';

PARTITION_NAME                 LA                     NUM_ROWS
------------------------------ -------------------- ----------
SYS_P737
SYS_P738
SYS_P739
SYS_P740
SYS_P741
SYS_P742
SYS_P743
SYS_P744                       02-aug-2007 08:52:02       6209

8 rows selected.

More explanation needed ...

Chaman, July 31, 2007 - 11:47 am UTC

Tom,

Thanks for your quick response. Sorry to say, I could not understand what you said.

Followup July 30, 2007 - 5pm US/Eastern:

why would num_rows be 5%

if we sample 5% of the rows, we extrapolate up (eg: multiply by 20) to arrive at....

the aggregate statistics.

we sample to avoid processing the entire result set and we extrapolate (guess) at the real numbers.


Tom Kyte
August 02, 2007 - 10:06 am UTC

you asked to gather statistics.


you asked to sample only 5% of the rows.


We need to have statistics that represent the FULL TABLE. So, we sample 5% of the rows in the table.


And then multiply to get a guess as to what the full table would have.

analyze v.s. stats

A reader, August 01, 2007 - 5:54 am UTC

Hello,Mr. Tom!
I have got lots of help from you!

Now I have a new question on analyze and dbms_stats.
I did a test below:
SQL> create table tab1(a number(1));
Table created

SQL> create table tab2(a char(1));
Table created

SQL> insert into tab1 values(1);
1 row inserted

SQL> insert into tab2 values('1');
1 row inserted

SQL> commit;
Commit complete

SQL> analyze table tab1 compute statistics;
Table analyzed

SQL> analyze table tab2 compute statistics;
Table analyzed

SQL> select table_name,AVG_ROW_LEN from user_tables where table_name in ('TAB1','TAB2');
TABLE_NAME                     AVG_ROW_LEN
------------------------------ -----------
TAB1                                     6
TAB2                                     5

SQL> exec dbms_stats.gather_table_stats(user,'TAB1')
PL/SQL procedure successfully completed

SQL> exec dbms_stats.gather_table_stats(user,'TAB2')
PL/SQL procedure successfully completed

SQL> select table_name,AVG_ROW_LEN from user_tables where table_name in ('TAB1','TAB2');
TABLE_NAME                     AVG_ROW_LEN
------------------------------ -----------
TAB1                                     3
TAB2                                     2
I have 2 questions:
1: Why are the results of ANALYZE and DBMS_STATS different?

2: I think the length of the columns of tab1 and tab2 is 1 byte, but the result is 2,3 (using DBMS_STATS) and 5,6 (using ANALYZE). Why?
I would be very thankful if you could give me some explanation!

Best Regards!
      Alan

Tom Kyte
August 05, 2007 - 9:54 am UTC

1) because they are different commands. analyze to gather statistics is deprecated, the optimizer is designed to use dbms_stats generated statistics.

2) these are ROW WIDTHS - and there is necessary control data in a row used to store the data - eg: a varchar2 has a leading length byte. A row is a data structure, more than just your data.
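
You can see the stored column widths for yourself - a minimal sketch against the two test tables above (VSIZE returns the number of bytes Oracle uses to store a value; dbms_stats adds roughly one length byte per column on top of that, giving the 3 and 2 observed):

select vsize(a) from tab1;   -- 2: the NUMBER 1 is stored as an exponent byte plus a mantissa byte
select vsize(a) from tab2;   -- 1: a CHAR(1) occupies exactly one byte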


analyzing indexes,

A reader, August 02, 2007 - 11:07 am UTC

Using DBMS_STATS, analyzing an index can be done by setting CASCADE=TRUE while analyzing the tables, or by using gather_index_stats.

I have been told that gather_table_stats with cascade=>true, does not analyze the index in parallel (even though degree is set to big number like 8 or 12)

In that case, is it advisable to use gather_index_stats to analyze the index? Will this run in parallel?

Another question:

We have a multi-terabyte database (more than 12TB), but not all tables in the database are involved in everyday DML activity.

Is it a good option considering speed and collecting good information in the dictionary, to use DBMS_STATS.GATHER_SCHEMA_STATS with OPTIONS set to GATHER_STALE?

Currently we use dbms_stats.gather_table_stats for each and every table with CASCADE=true, METHOD_OPT=for all indexed columns size 1 and estimate percent varying depending on the size of the table.


Our problem is that the DBMS_STATS job runs for more than 2-3 days and we want to reduce that.

Tom Kyte
August 05, 2007 - 10:39 am UTC

it is documented that way:
http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref8134

nothing is advisable here - you have to state whether

a) you want it parallelized
b) or not

and then the answer is "obvious"

see also
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:68126314591741#383309700346174570

A reader, August 03, 2007 - 12:11 am UTC

Tom,

Which data dictionary tables are used to store database statistics if stat table is not specified.

Thanks.
Tom Kyte
August 05, 2007 - 1:11 pm UTC

ours, the internal undocumented ones.

You see them in user_tables, user_indexes and the like. Where we put them physically - not relevant to you or me. Where you access them - priceless - and those are the documented data dictionary views

dbms_stats tuning

a reader, November 19, 2007 - 1:01 pm UTC

We are using Oracle 10.2.0.2 database. We have a partitioned table that has 900 million records. I have a script to collect statistics on it using dbms_stats procedure. This seems to be taking around 8 hours. Could you tell which parameters I need to look at in order to speed up the stats collection for this table and any other ways to speed up this process. I am not sure what else you need, so please let me know and I will give you all the details.

dbms_stats.gather_table_stats('TEST', 'PART_TABLE', partname => null, estimate_percent => 10, block_sample => false, method_opt => 'FOR ALL COLUMNS SIZE 1', degree => 4, granularity => 'DEFAULT', cascade => true);

Thanks

Tom Kyte
November 21, 2007 - 1:07 pm UTC

there are too many 'details' to list here.

for example, do you EVEN NEED to gather stats (fastest way to do something is to NOT do it).

eg: if you say "well, we bulk load this table fresh every week", then I'll say "maybe you just use the SET_* functions to TELL us the statistics and don't gather them

eg: if you say "well, this is a rolling window partition - our queries ALWAYS specify a single partition key in the predicate (so the optimizer only hits a SINGLE partition at a time, definitely)" - then I'd say gather only local statistics and only on the partitions that have been modified - which should be one since this is a rolling window

eg: if you say "much of the same as right above, but we do not include the partition key, but we have normally distributed data so the global stats that are inferred from the local stats are better than good enough" - I say "same answer"

DBMS_STATS never finishes

Suzanne, November 21, 2007 - 4:52 pm UTC

I have a DBMS_STATS job that runs once a week on one of my test databases. In the last two weeks it has failed to complete. After a couple of days somebody notices that the system performance is slow and one of the DBA's kills the job (this is not normal behavior). I've managed to track down the table that seems to be the road block but I don't know how to debug this issue. Everytime I start a DBMS_STATS subprogram (including delete_table_stats) the process seems to run and run. Since there is no error message generated what Oracle facilities should I use to track down the problem?
Tom Kyte
November 21, 2007 - 4:56 pm UTC

have you tried tracing it with waits (you can tail -f the trace file on unix)

or use v$session_wait to see what it is waiting on, plus v$sql to see what it is doing.

and of course, you've started the tar with support right?
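
A minimal sketch of both suggestions, assuming the hung session's SID is 39 and its SERIAL# has been looked up from v$session (dbms_monitor is 10g and up):

-- what is it waiting on right now?
select event, state, seconds_in_wait
  from v$session_wait
 where sid = 39;

-- switch on sql trace with wait events for that session, then tail -f the trace file
begin
   dbms_monitor.session_trace_enable
   ( session_id => 39,
     serial_num => 1234,     -- from v$session.serial#
     waits      => true );
end;
/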

dbms_stats

parlee, November 22, 2007 - 11:32 am UTC

I am running the dbms_stats procedure (as shown below) to collect stats on a partitioned table with granularity as PARTITION and cascade FALSE (intend to collect statistics on indexes separately). There are around 500 partitions on the table. I've been monitoring its performance. It takes on an average 10 seconds to do each partition. In between, all of a sudden, it jumps to 20 odd minutes per partition. I notice in the trace file that there are lot of db file scattered reads. After investigating more, I found that the scattered reads are on the primary key index of this partitioned table. I wonder,

1. Why does it look at the PK index when I am limiting it to gathering partition stats for the table only?

2. Even if it needs to look at the index, why doesn't it read the index whilst collecting stats for every single partition?

Thanks in advance.

Tom Kyte
November 26, 2007 - 11:05 am UTC

... In between, all of a sudden, it jumps to 20 odd
minutes per partition. ...

what does "in between" mean?

are you using auto for the rest of the inputs, what release, what is the FULL dbms_stats command you are using, what is different perhaps about the partitions in question.

dbms_stats

parlee, November 22, 2007 - 11:33 am UTC

Apologies for the near-duplicate post - here is the dbms_stats command I am using:

dbms_stats.gather_table_stats(ownname => 'TEST', tabname => 'PART_TABLE', partname => 'P1', estimate_percent => 10,
block_sample => false, method_opt => 'FOR ALL COLUMNS SIZE 1', degree => 4, granularity => 'PARTITION', cascade => false);


Tom Kyte
November 26, 2007 - 11:06 am UTC

how about tkprofing that output and posting the SQL that was being executed when you saw the reads?

DBMS_STATS never returns

Suzanne, November 27, 2007 - 12:32 pm UTC

Thank you for the suggestion about v$session_wait. I used these two queries to find my wait event.
select * from v$session_wait where sid = '39';

select sql_text, sql_hash_value, saddr
from v$session
join v$sql
on sql_hash_value = hash_value
where sid = '39';

and then I used this approach, I found on Metalink, to isolate the blocking process.
METHOD 2: EXAMINE THE X$KGLLK TABLE
-----------------------------------

The X$KGLLK table (accessible only as SYS/INTERNAL) contains all the
library object locks (both held & requested) for all sessions and
is more complete than the V$LOCK view although the column names don't
always reveal their meaning.

You can examine the locks requested (and held) by the waiting session
by looking up the session address (SADDR) in V$SESSION and doing the
following select:

select * from x$kgllk where KGLLKSES = 'saddr_from_v$session'

This will show you all the library locks held by this session where
KGLNAOBJ contains the first 80 characters of the name of the object.
The value in KGLLKHDL corresponds with the 'handle address' of the
object in METHOD 1.

You will see that at least one lock for the session has KGLLKREQ > 0
which means this is a REQUEST for a lock (thus, the session is waiting).
If we now match the KGLLKHDL with the handles of other sessions in
X$KGLLK that should give us the address of the blocking session since
KGLLKREQ=0 for this session, meaning it HAS the lock.

SELECT * FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0);

If we look a bit further we can then again match KGLLKSES with SADDR
in v$session to find further information on the blocking session:

SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0)
);

In the same way we can also find all the blocked sessions:

SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ > 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKING SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ = 0)
);

I ended up finding a zombie process that had held a lock on one of my tables since Oct 29 - I, of course, killed the process and my problem went away. Thank you again for your help.

dbms_stats index lookup

parlee, November 28, 2007 - 11:21 am UTC

Sorry Tom, I got busy with other work. Please see below the SQL output and execution plan from tkprof output. It shows that it is doing index scan for collecting statistics for a partition of PART_TABLE table. By "all of a sudden it jumped to 20 odd minutes", I meant, initially it was taking around 10 seconds to collect statistics for each partition and then some partitions took around 20 minutes, then it went down to around 10 seconds again. When it was taking 20 minutes, I traced the session, the output of which is listed below. Please note that it is not the complete output from the tkprof output file, I have just copied the first SQL output. I can send the whole file if needed.

select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*) from "TEST"."PART_TABLE" t where TBL$OR$IDX$PART$NUM("TEST"."PART_TABLE",0,4,0,"ROWID") = :objn

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 0 0.00 0.00 0 0 0 0
Fetch 1 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1 0.00 0.00 0 0 0 0

Misses in library cache during parse: 0
Parsing user id: 75 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE (cr=397784 pr=397772 pw=0 time=1350410697 us)
17424 INDEX FAST FULL SCAN PART_TABLE_PK_1 (cr=397784 pr=397772 pw=0 time=1350393394 us)(object id 226025)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 3710 0.46 571.96
db file sequential read 6 0.00 0.00


Tom Kyte
November 28, 2007 - 11:16 pm UTC


I see lots of scattered reads - which are not single-block IOs but multiblock IOs - and I see them sometimes taking 1/2 second, which is GLACIAL in computer terms.

Why is an IO on your system taking 1/2 second? I think perhaps something else is happening big time on your system at that time.

dbms_stats tuning

parlee, November 29, 2007 - 10:57 am UTC

regarding slow i/o, i know that it was computing statistics using threads, not sure if they were slaves or just other sessions collecting statistics on other partitions in parallel. i am sure there was nothing else running on the server at the time.

how did you find out the i/o taking 1/2 second?

i know that it does db file sequential read for index scan. however, does it do db file scattered read for INDEX FAST FULL SCAN?

Tom Kyte
November 29, 2007 - 7:03 pm UTC

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                       3710        0.46        571.96
  db file sequential read                         6        0.00          0.00


you told me it was 1/2 second.

max wait - 0.46

fast full scans are done just like a full scan - multiblock IO, so yes, they do scattered read waits.

dbms_stats tuning

parlee, December 04, 2007 - 5:34 am UTC

So the question is why it was doing index scan when collecting statistics on individual partitions.

Many Thanks

Tom Kyte
December 03, 2007 - 7:48 am UTC

and did you go back to the tkprof to see which of the sql's it was executing did them - that would give us a good clue as to what it was doing..


dbms_stats tuning

parlee, December 12, 2007 - 10:56 am UTC

I did include the sql that caused it to use the index. Please see below the sql:

select /*+ no_parallel(t) no_parallel_index(t) dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
*/ count(*) from "TEST"."PART_TABLE" t where
TBL$OR$IDX$PART$NUM("TEST"."PART_TABLE",0,4,0,"ROWID") = :objn

Tom Kyte
December 12, 2007 - 12:11 pm UTC

I don't see what the problem is - the optimizer just decided to use an index on some apparently NOT NULLABLE column(s) as if it were a skinny version of the table.

would you have preferred to full scan the table? It instead fast full scanned a smaller index.

dbms_stats tuning

parlee, December 13, 2007 - 7:58 am UTC

this table is very big, around 500 million records. there are three indexes on this table, all being global. i have a script that collects statistics for each partition as a one-off (just before the system goes live). thereafter we will collect statistics on partitions with stale statistics. when collecting statistics the first time on each partition, it runs within 10 seconds for most of them and in between takes 20 minutes for a few partitions (20 mins each). when i monitored during the slow-running partition sql, i found that it was doing an INDEX FAST FULL SCAN. since the index is very large (a global index), it takes a long time, hence the 20 mins. i would not have thought it would do the index scan. please note that i had set CASCADE to FALSE.

Tom Kyte
December 13, 2007 - 9:53 am UTC

the index does not look that big

  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                       3710        0.46        571.96

only 3710 IO's - what, maybe 3gb? But that aside - how big is the partition itself, is the partition bigger or smaller than that global index.

And - almost 1/2 second to perform an IO, that is glacial - is there much contention on this machine, or is the disk just not configured very well?

dbms_stats tuning

parlee, December 13, 2007 - 10:55 am UTC

the index is around 6.5 GB in size. the partitions are daily partitions (one for each day). partitions are very small compared to the index size, max 100MB each. partitions for a month reside in its own tablespace, this is for archiving purpose.

this was run on a test server and i agree the disks were not configured efficiently. having said that, i wouldn't expect it to be as bad as 0.5 second per read. from the statistics, doesn't it mean 0.46 second was the max wait, other times it could have been a lot faster. it actually comes down to average wait of 0.15 seconds per read (571/3710). please correct me if i am interpreting it wrong.

Tom Kyte
December 13, 2007 - 11:20 am UTC

other times could have been faster, yes - the average was 0.15 which is still pretty "glacial"

is it possible the optimizer thinks this index is small? what are the stats on this index

Package Procedures

Rajeshkumar, December 13, 2007 - 6:28 pm UTC

Hi Tom,

I have 1000 PL/SQL packages. I need to find out the different public and private procedures within those packages. How can I do this? Is there any data dictionary view which gives these details?

As these procedures are not stand-alone and are found within the packages, I couldn't find any dictionary table which has their details. Please help me with this.

Thanks
Rajesh
Tom Kyte
December 14, 2007 - 12:56 pm UTC

public, yes - user_arguments

private, no - you'd have to do that yourself.
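
A sketch of the public half, assuming a package named MY_PKG (the name is made up for this example):

select distinct object_name as subprogram
  from user_arguments
 where package_name = 'MY_PKG';

-- in 10g, user_procedures gives the same list:
-- select procedure_name from user_procedures where object_name = 'MY_PKG';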



Package Procedures

Rajesh, December 14, 2007 - 10:53 am UTC

In other words, how does SQL Navigator find the public and private procedures? Can you please send me that query?
Tom Kyte
December 14, 2007 - 3:03 pm UTC

public, they can discover via user_arguments.

private, they must be parsing the plsql code itself.

dbms_stats tuning

parlee, December 18, 2007 - 9:21 am UTC

we had 100% statistics collected on index at the time (we were trying different level of statistics to get the timings). but i am sure the stats were collected with estimate_percent of 100% when we had this problem with index scan for partitions. unfortunately the test server has been rebuilt since then, so i do not have required details.

if for any reason it thought the index scan to be better than the partition scan, then the question is what made it do a partition scan whilst collecting statistics for the other partitions. it was only one in every 30 to 40 odd partitions where it decided to do an index scan.

Public and Private Functions

Rajesh, December 19, 2007 - 3:19 pm UTC

How does SQL Navigator find the public and private functions?
Tom Kyte
December 20, 2007 - 9:22 am UTC

like I said, it can use user_arguments for things in the specification

and it must be reading the code and parsing it to find "private"

Reverse Merge

Rajesh, January 08, 2008 - 1:02 pm UTC

Hi Tom,

After committing a merge statement in Oracle, is there any way to roll back the data from that merge statement?

For example..

Merge into A using B on (A.id = B.id)
When Matched then update A.col1 = B.col1
When Not Matched then insert values(B.col1,B.col2);

commit;

After 2 or 3 hours, I want to roll back the data. I am aware of other ways of doing this rollback, but I would like to know whether there is any option in the Merge statement like...

Merge into A using B
When matched then update
When not matched DELETE;

This is like a reverse Merge statement. Is it possible to perform such a rollback using merge? Or how would you handle this?

Thanks
Rajesh

Tom Kyte
January 08, 2008 - 2:59 pm UTC

... After commiting a merge statement in oracle, Is there anyway to rollback the
data in the same merge statement ? ...

A commit does one thing very well: It makes permanent the changes made by inserts, updates, deletes, and merges....

There is no "rolling back"

That said - there is

a) flashback table
b) flashback query
c) flashback versions query
d) flashback transaction



Sounds like flashback table might be what you are looking for.
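
A sketch of that approach, assuming row movement can be enabled on A and undo retention still covers the 2-3 hour window:

alter table a enable row movement;

flashback table a to timestamp systimestamp - interval '3' hour;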

PL/SQL from UNIX

Rajesh, January 10, 2008 - 10:17 am UTC

Hi Tom,

I was trying to call a PL/SQL function from UNIX. The PL/SQL was giving me an error. But it was not reflected in the UNIX shell.

The shell script will be..

ERROR=0
ALLDETAIL=`get_interface_details $INTERFACECODE`
echo $ALLDETAIL
ERROR=$?


get_interface_details is a UNIX function which calls the PL/SQL function as follows..

get_interface_details()
{
sqlplus -s $SQLCONNECT 2>&1 <<EOF |grep ALLDETAIL|tail -1|awk '{print $2 " " $3 " " $4 " " $5}' |read ALLDETAIL
whenever sqlerror exit 170
SET echo on
SET feedback off
SET heading off
SET linesize 120
SET serveroutput on
SET pagesize 0
SET verify off
SET space 0
spool $LOGDIR/$LOGFILE.SQL.log append

select '****************************COLLECT INTERFACE DETAILS SQLLOG******************************' from dual;
select 'ALLDETAIL: '||NF_BTCH_UTL_PKG1.get_intrfc_dtl('$1') ret from dual
.
l
/
select '****************************COLLECT INTERFACE DETAILS SQLLOG******************************' from dual;
spool off

EOF
RC=$ALLDETAIL
if [ -z $RC ]
then
RC=0
echo $RC
else
echo $RC
fi
return
}

So in this UNIX function call, we track the error by the variable ERROR ($?) as follows...

ERROR=0
ALLDETAIL=`get_interface_details $INTERFACECODE`
echo $ALLDETAIL
ERROR=$?

This ERROR should be 170 as PL/SQL is raising an error as we have the following line in the above code..

whenever sqlerror exit 170

But still I get the ERROR as 0. Where am I going wrong ?

Thanks
Rajesh


Tom Kyte
January 10, 2008 - 2:46 pm UTC

first, let us clear up some semantics here.

Unix does not call PLSQL.

Unix, the operating system, may execute some programs. One of these is the shell (/bin/bash for example). bash has a scripting language that allows it to call its own procedures and such - and use Unix, the operating system, to do things like run other programs.

So, you have a shell script that runs sqlplus and sqlplus dynamically executes a stored procedure. There is no "calling" here - they are all distinctly different beasts with very very limited "communication" capabilities.


now, typically, $? is the error from the last command - and on many systems, echo is - well, a program - a command. You should grab things like $? IMMEDIATELY after the command of interest - IMMEDIATELY.

<b>$ cat test.sh</b>
#!/bin/sh

ERROR=0
sqlplus / <<EOF
whenever sqlerror exit 170
begin
        raise program_error;
end;
/
select * from dual;
EOF
ERROR=$?
echo error = $ERROR
echo Hello World
ERROR=$?
echo error = $ERROR

<b>
$ sh test.sh
</b>
SQL*Plus: Release 10.2.0.2.0 - Production on Thu Jan 10 14:49:30 2008

Copyright (c) 1982, 2005, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

ops$tkyte%ORA10GR2> ops$tkyte%ORA10GR2>   2    3    4  begin
*
ERROR at line 1:
ORA-06501: PL/SQL: program error
ORA-06512: at line 2


Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options<b>
error = 170</b>
Hello World<b>
error = 0</b>
[tkyte@dellpe ~]$



see, echo succeeded, error is ZERO...

but, if you catch error RIGHT AFTER sqlplus exits (do that in your function), you'll get the error code.

PL/SQL from UNIX

Rajesh, January 14, 2008 - 11:51 am UTC

Hi Tom,

As you said, I tried to get the error code 170 within the UNIX function itself. But it didn't work.

get_interface_details()
{
sqlplus -s $SQLCONNECT 2>&1 <<EOF |grep ALLDETAIL|tail -1|awk '{print $2 " " $3 " " $4 " " $5}'
|read ALLDETAIL
whenever sqlerror exit 170
SET echo on
SET feedback off
SET heading off
SET linesize 120
SET serveroutput on
SET pagesize 0
SET verify off
SET space 0
spool $LOGDIR/$LOGFILE.SQL.log append

select '****************************COLLECT INTERFACE DETAILS SQLLOG******************************'
from dual;
select 'ALLDETAIL: '||NF_BTCH_UTL_PKG1.get_intrfc_dtl('$1') ret from dual
.
l
/
select '****************************COLLECT INTERFACE DETAILS SQLLOG******************************'
from dual;
spool off

EOF
ERROR=$?
echo $ERROR <= It still gives the error code as 0
RC=$ALLDETAIL
if [ -z $RC ]
then
RC=0
echo $RC
else
echo $RC
fi
return
}

It still gives the error code 0 even when a PL/SQL error is raised within the NF_BTCH_UTL_PKG1.get_intrfc_dtl function.

Am I going wrong anywhere in the program ?

Thanks
Rajesh
Tom Kyte
January 14, 2008 - 3:48 pm UTC

maybe it just isn't failing the way you think it is.

this is the time to strip out everything not relevant to the problem you are thinking you have....

and make a teeny tiny itty bitty standalone example (just like I did)

that is how I find what I did wrong, by stripping out anything not part of the actual problem.

You have a big example that no one on the planet can run and experience what you see - make it something we can see (eg: you need to run NONE of your code)


hah, I see it, think about this line:

sqlplus -s $SQLCONNECT 2>&1 <<EOF |grep ALLDETAIL|tail -1|awk '{print $2 " " $3

what do you think the error code in shell would be for that if sqlplus fails? Now, what if grep fails? Now, what if tail failed? Then, what if awk fails?

Now, what if awk succeeds, but tail failed - what would be the code?

Now, what if sqlplus exits with non-zero, but awk is fine and dandy - what is the return code the SHELL gets?

if you were to have made the smallest of examples, you would have discovered this, you would have stripped out everything but sqlplus and echo $?

it would have worked

then you would have added in stuff till it broke - which would tell you what went wrong. It would have been obvious at that point...

Search

Rajesh, January 18, 2008 - 4:39 pm UTC

Hi Tom,

This is about searching table data. We can search for any text in a package using dba_source. Similarly, can we search for a text among the table data? I.e., we don't know the table or column name and know only the text to be searched for; we now need to find the table where the text is present.
Is it possible ?

Thanks
Rajesh

Tom Kyte
January 19, 2008 - 10:46 pm UTC

look in the OTHER PLACE you posted this EXACT SAME TEXT.

See, now if I were to trawl for text, I would get two hits due to you and your repetitive stuff!

dbms_stats.gather_system_stats

Stan, September 12, 2008 - 3:02 pm UTC

Hi Tom,

By reviewing:
select * from sys.aux_stats$

It doesn't look like workload stats have been created in the system (SREADTIM, MREADTIM, etc are null). When our high load window starts I want to run:

exec dbms_stats.gather_system_stats('start');

wait until high load window is over and run:
exec dbms_stats.gather_system_stats('stop');

I have 2 questions:

1) When will this potentially start to affect query plans? When the stop is done?

2) How dramatically can the plans change? I'm worried something might change for the worse - is this possible?

Thanks in advance.


Tom Kyte
September 16, 2008 - 2:00 pm UTC

1) yes

2) one of three things will happen....

they go faster
they stay the same
they go slower

that is always true - when you gather (or not gather even) statistics. You can export statistics and revert back as fast as you want.
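
On that last point, a sketch of the safety net - export the current system statistics to a stat table before gathering, so they can be put back if plans regress (STATS_BACKUP is a name made up for this example):

exec dbms_stats.create_stat_table( user, 'STATS_BACKUP' );
exec dbms_stats.export_system_stats( stattab => 'STATS_BACKUP', statown => user );

exec dbms_stats.gather_system_stats( 'start' );
-- ... high load window ...
exec dbms_stats.gather_system_stats( 'stop' );

-- if plans change for the worse, revert:
exec dbms_stats.import_system_stats( stattab => 'STATS_BACKUP', statown => user );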

error while running dbms_stats

Manoj, October 19, 2008 - 10:21 pm UTC

Hi Tom,

Thanks for your time.

I am trying to schedule the dbms_stats to gather the schema stats as follows:

DECLARE
X NUMBER;
BEGIN
SYS.DBMS_JOB.SUBMIT
( job => X
,what => 'sys.dbms_stats.gather_schema_stats(ownname=>''AA'', degree=>4, cascade=>true, options=>''gather auto'');'
,next_date => to_date('23/09/2008 00:05:52','dd/mm/yyyy hh24:mi:ss')
,interval => 'trunc(SYSDATE+1)+(11/48)'
,no_parse => TRUE
);
SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(x));
END;
/

commit;



but the job is failing with the following error:

ORA-04031: unable to allocate nnn bytes of shared memory ("large pool", "unknown object", "large pool hea", "PX large pool")

The large pool size is :

select * from v$sgastat where pool like 'large%pool';

POOL          NAME             BYTES
large pool    PX msg pool      16477504
large pool    free memory      299712

Questions:

a) Can you please tell me why the job is failing? Is the large pool value too low?
b) Is dbms_stats trying to run a parallel query? I'm not sure - I just read that in some posting. Please correct me here.
c) Could it be due to fragmentation in the shared pool?
d) Can you please give an example, if feasible, wherein you show the job failing and then correct the problem by changing the SGA params?

Kind Regards,



Tom Kyte
October 21, 2008 - 11:49 am UTC

a) it would appear so. what is the large pool set to itself, and what is the parallel_execution_message_size?

b) yes, you told it to use parallel query. degree=>4

c) well, it is talking about the large pool here, so - shared pool? no.

d) not even a little bit comprehending why that would be relevant?

error while running dbms_stats ....

Manoj, October 21, 2008 - 9:12 am UTC

Hi Tom

Further inputs to my above question:

select server, count(*) from v$session group by server;

SERVER COUNT(*)
DEDICATED 72

Kind Regards.

error while running dbms_stats ....

Manoj, October 21, 2008 - 2:09 pm UTC

Hi Tom
Many Thanks for your reply.

...a) it would appear so, what is the large pool set it itself and what is the parallel_execution_message_size

Here are the values:
select value from v$parameter where name = 'parallel_execution_message_size';
4096

select value from v$parameter where name='large_pool_size';
16777216

Regards.

Tom Kyte
October 21, 2008 - 3:59 pm UTC

it would appear that your large pool is insufficient for what you are asking to do - 16mb is rather smallish and it is currently allocated.

error while running dbms_stats ....

Manoj, October 21, 2008 - 4:29 pm UTC

Hi Tom,

Many thanks for prompt reply. I keep a tab on "Where is Tom?" and find that in spite of your busy schedule you always take time to answer the questions.
Kudos to you!

coming to the question now ,
you said above -
...it would appear that your large pool is insufficient for what you are asking to do - 16mb is rather smallish and it is currently allocated.

Now what approach shall I follow to get rid of ORA-04031?
I read in some doc that you should make it present_value*2 and then see if you are still being hit by ORA-04031.
If not, reduce it further, and keep reducing... till you get hit.
Then, once hit, set the value just above the point where you no longer get hit.

Regards

Tom Kyte
October 22, 2008 - 8:06 am UTC

or reduce your needs for the resource (eg: parallel query).

Do you need parallel query for this - if not, that would be another perfectly valid approach.

error while running dbms_stats ....

Manoj, October 22, 2008 - 8:26 pm UTC

Hi Tom,

...or reduce your needs for the resource (eg: parallel query).


I will run the stats gathering procedure without the parallel option.

...Do you need parallel query for this - if not, that would be another perfectly valid approach.

We have 16 CPUs in the db box. I was reading this site and found the following at

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:4541159655224

<quote TOM>

2) degree specifies the number of threads/processes that will be used. since these processes spend
lots of time doing IO -- not using CPU, you would find that your CPU is not pegged, not fully
utilized, if you set degree = #cpu -- since the processes spend lots of time in IO waits. In
order to more fully utilize this resource (that you cannot put in the bank and use later -- its a
use or lose resource), you would use a higher degree.

</quote>

Tom, kindly see if my understanding as written below is correct.

If we set degree=4 (actually we should set it to 16-1, i.e. 15), then that many threads/processes will be used to run the SQL (mostly full scans / index scans) issued by the gather_schema_stats procedure. So n threads will occupy n times more of the large pool than if the schema had been gathered without parallelism? Thus, by going serial, we can get around the large pool scarcity - basically we are using the available *small (16M)* large pool sequentially?

Many thanks, and sorry for making my question *thread* longer and longer..

Regards

Tom Kyte
October 23, 2008 - 9:14 am UTC

you do not say if you NEED parallel query.

parallel is going to be either

a) faster
b) slower
c) the same as

serial. Do you have really large things, would they benefit from parallel.


and - very importantly - are you the ONLY THING on the machine or are you competing for resources.

error while running dbms_stats ....

Manoj, October 22, 2008 - 8:36 pm UTC

c/Thus, by going serial, we can get around the large pool scarcity./So, by going without parallel we can get rid of the large pool scarcity./
Tom Kyte
October 23, 2008 - 9:14 am UTC

that was my point, if you don't use parallel, you won't be using the large pool - hence not using parallel is another perfectly valid approach to solving this issue.

error while running dbms_stats ....

Manoj, October 23, 2008 - 1:13 pm UTC

Hi Tom,

.....parallel is going to be either
a) faster
b) slower
c) the same as
serial. Do you have really large things, would they benefit from parallel.

I truly understand your analogy on "going parallel?" - writing a 1-page summary for the new product vs. writing the *book* (the Expert one).

...and - very importantly - are you the ONLY THING on the machine or are you competing for resources.

There are OTHER THINGS as well, so I would prefer to leave the resources for OTHERS.

......you do not say if you NEED parallel query.

So I will run it without parallel.

Many Thanks for your time.

Kind Regards.



error while running dbms_stats ....

Manoj, October 23, 2008 - 7:21 pm UTC

Hi Tom,

I tried to run dbms_stats with degree=>null as follows:

sys.dbms_stats.gather_schema_stats(ownname=>'XXXX', degree=>null, cascade=>true, options=>'gather auto');

Here degree=>null i.e. table default 

But it still ran as a parallel query, and I got the obvious error:

ORA-12012: error on auto execute of job 81
ORA-12801: error signaled in parallel query server P022, instance xxxxx:yyyyy (1)
ORA-04031: unable to allocate 16096 bytes of shared memory ("large pool","unknown object","large pool","PX msg pool")
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10553
ORA-06512: at "SYS.DBMS_STATS", line 10718
ORA-06512: at "SYS.DBMS_STATS", line 10805
ORA-06512: at "SYS.DBMS_STATS", line 10782
ORA-06512: at line 1

Further, 

SQL>select distinct (degree) from user_tables;

1

I found the possible values of degree as
o NULL means use the table default value
o DBMS_STATS.DEFAULT_DEGREE means value based on the init parameters (9i). 


SQL> select name,value from v$parameter where name like '%parallel%';

NAME    VALUE
log_parallelism   1
parallel_server   TRUE
parallel_server_instances 2
recovery_parallelism  0
fast_start_parallel_rollback LOW
parallel_min_percent  0
parallel_min_servers  4
parallel_max_servers  240
parallel_instance_group 
parallel_execution_message_size 4096
parallel_adaptive_multi_user TRUE
parallel_threads_per_cpu 3
parallel_automatic_tuning TRUE


How do I run dbms_stats *without parallel*?



Kind regards.



Tom Kyte
October 24, 2008 - 12:53 pm UTC

240 max parallel servers?!??!?!?!? You really believe each of your cpu's can do 15 things at once...



can you run this in the foreground to verify what you think is getting called is in fact getting called - or query up job 81 out of the dictionary for us to confirm.


error while running dbms_stats ....

Manoj, October 23, 2008 - 7:57 pm UTC

Hi Tom

Further, degree from  user_indexes:

SQL> select distinct (degree) from user_indexes;

0
1


error while running dbms_stats ....

Manoj, October 26, 2008 - 7:15 am UTC

Hi Tom

Many thanks for your time.


...240 max parallel servers?!??!?!?!? You really believe each of your cpu's can do 15 things at once... 

a) For a large DB, should the value be #cpu x 5 (i.e. 16 x 5)?
 

...can you run this in the foreground to verify what you think is getting called is in fact getting called - or query up job 81 out of the dictionary for us to confirm.

b) here is the dbms_stats job submitted, and the rest. (JOB ID is now 82, as I had removed the prev job.)

server1  $ more submit_job.sh
sqlplus XXXX/<YYY> <<EOF
spool jobs_s.lst

DECLARE
  X NUMBER;
BEGIN
  SYS.DBMS_JOB.SUBMIT
    ( job       => X
     ,what      => 'sys.dbms_stats.gather_schema_stats(ownname=>''XXXX'', degree=>null, cascade=>true, options=>''gather auto'');'
     ,next_date => to_date('24/10/2008 05:00:00','dd/mm/yyyy hh24:mi:ss')
     ,interval  => 'trunc(SYSDATE+1)+(5/24)'
     ,no_parse  => TRUE
    );
  SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(x));
END;
/

commit;


spool off;
EOF

server1  $ sh submit_job.sh

SQL*Plus: Release 9.2.0.6.0 - Production on Fri Oct 24 19:51:15 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

SQL> SQL> SQL>   2    3    4    5    6    7    8    9   10   11   12   13
PL/SQL procedure successfully completed.

SQL> SQL>
Commit complete.

SQL> SQL> SQL> SQL> Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
server1  $





SQL>select job,last_date,last_sec,next_date,next_sec,broken,failures,what,nls_env,misc_env,instance  from user_jobs;

JOB LAST_DATE         LAST_SEC NEXT_DATE        NEXT_SEC        BROKEN FAILURES WHAT NLS_ENV MISC_ENV INSTANCE
82 24/10/2008 19:51 19:51:16 24/10/2008 19:54 19:54:04 N 1         sys.dbms_stats.gather_schema_stats(ownname=>'XXXX', degree=>null, cascade=>true, options=>'gather auto'); NLS_LANGUAGE='AMERICAN' NLS_TERRITORY='AMERICA' NLS_CURRENCY='$' NLS_ISO_CURRENCY='AMERICA' NLS_NUMERIC_CHARACTERS='.,' NLS_DATE_FORMAT='DD-MON-RR' NLS_DATE_LANGUAGE='AMERICAN' NLS_SORT='BINARY' 102000200000000 0



server1 $ ls <bdump_dir>/<sid>_p*.trc | wc -l 

200

#in bdump dir many trace files got generated after submitting the job:



server1 $ view <bdump_dir>/alert_<sid>.log 
 Fri Oct 24 19:52:04 2008
Errors in file <bdump_dir>/<sid>_j000_15995.trc:
ORA-12012: error on auto execute of job 82
ORA-12805: parallel query server died unexpectedly
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10553
ORA-06512: at "SYS.DBMS_STATS", line 10718
ORA-06512: at "SYS.DBMS_STATS", line 10805
ORA-06512: at "SYS.DBMS_STATS", line 10782
ORA-06512: at line 1

.....
....

Errors in file <bdump_dir>/<sid>_j000_5974.trc:
ORA-12012: error on auto execute of job 82
ORA-12801: error signaled in parallel query server P016, instance server1:<sid> (1)
ORA-04031: unable to allocate 16096 bytes of shared memory ("large pool","unknown object","large pool","PX msg pool")
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10553
ORA-06512: at "SYS.DBMS_STATS", line 10718
ORA-06512: at "SYS.DBMS_STATS", line 10805
ORA-06512: at "SYS.DBMS_STATS", line 10782
ORA-06512: at line 1



server1 $view <bdump_dir>/<sid>_j000_15995.trc
*** 2008-10-24 19:52:04.704
ORA-12012: error on auto execute of job 82
ORA-12805: parallel query server died unexpectedly
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10553
ORA-06512: at "SYS.DBMS_STATS", line 10718
ORA-06512: at "SYS.DBMS_STATS", line 10805
ORA-06512: at "SYS.DBMS_STATS", line 10782
ORA-06512: at line 1
kjbhistory[0x0.0]
kclassert no lock[0x0.0]
kjbhistory[0x0.0]
kclassert no lock[0x0.0]
kjbhistory[0x0.0]
kclassert no lock[0x0.0]


Kind regards.

Tom Kyte
October 26, 2008 - 10:28 am UTC

a) size of database - not relevant.

size of database machine - relevant.

You should let them all default.



b) verify the objects in the schema in question are not parallel enabled (yes, you ran queries and showed output above, but I'm not sure that it was the same user the job runs as)

run dbms_stats - not in the job queue, do it in the foreground once please.

error while running dbms_stats ....

Manoj, October 26, 2008 - 1:01 pm UTC

Hi Tom,

Many thanks for your time. I am getting prompt reply from you even on Sunday. 

...verify the objects in the schema in question are not parallel enabled (yes, you ran queries and showed output above, but I'm not sure that it was the same user as the job runs as 

SQL> select  table_name,degree from user_tables where degree !=1;
select  table_name,degree from user_tables where degree !=1
                                                 *
ERROR at line 1:
ORA-01722: invalid number
SQL> 

SQL> select distinct(degree) from user_tables;

DEGREE
----------
         1
SQL>


a) Looks like a problem with some of the values in the degree column of user_tables? And therefore it is not taking that value as 1?

Further,

SQL>select * from (select 'Index' , index_name,degree from user_indexes where degree != 1) where rownum<2;

'INDEX' INDEX_NAME               DEGREE
Index SYS_IL0000073101C00008$$ 0

b) we have 145 such LOB indexes with degree != 1; all have degree=0, but this should not be an issue?


c) Yes, it was the same user the job runs as. (Username - XXXX. Sorry, I should have given the username some meaningful name.)


...run dbms_stats - not in the job queue, do it in the foreground once please. 

Presently some batch job is running in the database, so I will run dbms_stats in the foreground when there is low activity in the DB and send the output to you.

Kind Regards.


Tom Kyte
October 26, 2008 - 1:54 pm UTC

degree one is the default.
degree zero is the default for the index on a lob segment.

so that is normal.


error while running dbms_stats ....

Manoj, October 26, 2008 - 8:56 pm UTC

Hi Tom,

Many thanks for your valuable time. I ran dbms_stats in the foreground (out of the job queue) as follows:

server1 $ more forground_run.sh
sqlplus MAN/<pass> <<EOF
spool dbms_job_run_forground.lst;
set time on
exec dbms_stats.gather_schema_stats(ownname=>'MAN', degree=>null, cascade=>true, options=>'gather auto');
commit;
spool off
exit;


server1 $

server1 $ nohup ./forground_run.sh &
[1]     8922
server1 $ Sending output to nohup.out
server1 $
server1 $
[1] +  Done                    nohup ./forground_run.sh &
server1 $  


server1 $ more nohup.out

SQL*Plus: Release 9.2.0.6.0 - Production on Sun Oct 26 23:45:43 2008

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

SQL> SQL> SQL> SQL> 23:45:43 SQL> 23:45:43 SQL> BEGIN dbms_stats.gather_schema_stats(ownname=>'MAN', degree=>null, cascade=>true, options=>'gather auto'); END
;

*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P012, instance server1:instance (1)
ORA-04031: unable to allocate 16096 bytes of shared memory ("large
pool","unknown object","large pool","PX msg pool")
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10553
ORA-06512: at "SYS.DBMS_STATS", line 10718
ORA-06512: at "SYS.DBMS_STATS", line 10805
ORA-06512: at "SYS.DBMS_STATS", line 10782
ORA-06512: at line 1


23:46:04 SQL> 23:46:04 SQL>
Commit complete.

23:46:04 SQL> 23:46:04 SQL> 23:46:04 SQL> 23:46:04 SQL> Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
server1 $



## output from the spool file -- same as nohup.out

server1 $ more dbms_job_run_forground.lst
SQL>
SQL> set time on
23:45:43 SQL>
23:45:43 SQL> exec dbms_stats.gather_schema_stats(ownname=>'MAN', degree=>null, cascade=>true, options=>'gather auto');
BEGIN dbms_stats.gather_schema_stats(ownname=>'MAN', degree=>null, cascade=>true, options=>'gather auto'); END;

*
ERROR at line 1:
ORA-12801: error signaled in parallel query server P012, instance server1:instance1 (1)
ORA-04031: unable to allocate 16096 bytes of shared memory ("large
pool","unknown object","large pool","PX msg pool")
ORA-06512: at "SYS.DBMS_STATS", line 10070
ORA-06512: at "SYS.DBMS_STATS", line 10553
ORA-06512: at "SYS.DBMS_STATS", line 10718
ORA-06512: at "SYS.DBMS_STATS", line 10805
ORA-06512: at "SYS.DBMS_STATS", line 10782
ORA-06512: at line 1


23:46:04 SQL>
23:46:04 SQL> commit;

Commit complete.

23:46:04 SQL>
23:46:04 SQL> spool off
server1$


server1 $ cd <bdump_dir>
server 1 $ ls *_p*.trc 
300
#Many trace file generated 
# format <SID>_p<nnn>_<mmm>.trc

server1 $ view <SID>_p024_22175.trc

....
.....

*** 2008-10-26 23:46:04.374
*** SESSION ID:(80.5116) 2008-10-26 23:46:04.374
=================================
Begin 4031 Diagnostic Information
=================================
The following information assists Oracle in diagnosing
causes of ORA-4031 errors.  This trace may be disabled
by setting the init.ora parameter _4031_dump_bitvec = 0
======================================
Allocation Request Summary Information
======================================
Current information setting:  00654fff
  Dump Interval=300 seconds  SGA Heap Dump Interval=3600 seconds
  Last Dump Time=10/26/2008 23:46:03
Allocation request for:      PX msg pool
Heap: c0000000de140fd8, size: 983136
******************************************************
HEAP DUMP heap name="large pool"  desc=c0000000de140fd8
 extent sz=0xfe0 alt=200 het=32767 rec=9 flg=-126 opc=0
 parent=0000000000000000 owner=0000000000000000 nex=0000000000000000 xsz=0x1
====================
Process State Object
====================
----------------------------------------
SO: c0000000a4474f48, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
  (process) Oracle pid=101, calls cur/top: c0000000b638a8a8/c0000000b638a8a8, flag: (0) -
            int error: 0, call error: 0, sess error: 0, txn error 0
  (post info) last post received: 0 0 163
"<SID>_p024_22175.trc" [Read only] 706 lines, 32832 characters
 parent=0000000000000000 owner=0000000000000000 nex=0000000000000000 xsz=0x1
====================
Process State Object
====================
----------------------------------------
SO: c0000000a4474f48, type: 2, owner: 0000000000000000, flag: INIT/-/-/0x00
  (process) Oracle pid=101, calls cur/top: c0000000b638a8a8/c0000000b638a8a8, flag: (0) -
            int error: 0, call error: 0, sess error: 0, txn error 0
  (post info) last post received: 0 0 163
              last post received-location: kxfprienq: slave
              last process to post me: c0000000a445cee8 122 0
              last post sent: 131896 0 162
              last post sent-location: kxfprienq: QC
              last process posted by me: c0000000a445cee8 122 0
  (latch info) wait_event=0 bits=50
    holding     c0000000ae6e0ab8 Child parallel query alloc buffer level=6 child#=1
        Location from where latch is held: kxfpbalo: allocate a buffer
        Context saved from call: 13835058058893948848
        state=busy
    holding     c0000000d74776b0 Child process queue reference level=4 child#=1760
        Location from where latch is held: kxfprialo: addr process queue
        Context saved from call: 13835058058220266736
        state=busy
    Process Group: DEFAULT, pseudo proc: c0000000a24473e8
    O/S info: user: oracle, term: UNKNOWN, ospid: 22175
    OSD pid info: Unix process pid: 22175, image: oracle@server1 (P024)
=========================
User Session State Object
=========================
----------------------------------------
SO: c0000000a45a39e0, type: 4, owner: c0000000a4474f48, flag: INIT/-/-/0x00
  (session) trans: 0000000000000000, creator: c0000000a4474f48, flag: (48000041) USR/- BSY/-/-/-/-/-
            DID: 0001-0065-00019BC1, short-term DID: 0000-0000-00000000
            txn branch: 0000000000000000
            oct: 3, prv: 0, sql: c0000000db646ad0, psql: 0000000000000000, user: 42/SA
  O/S info: user: <MMM>, term: , ospid: 22175, machine: server1
            program: oracle@server1 (P024)
  application name: SQL*Plus, hash value=3669949024
  last wait for 'latch free' blocking sess=0x0 seq=17 wait_time=347949
          address=c0000000ae6e0ab8, number=c8, tries=0
  temporary object counter: 0
...No current library cache object being loaded
...No instantiation object
----- Call Stack Trace -----
calling              call     entry                argument values in hex
location             type     point                (? means dubious value)
-------------------- -------- -------------------- ----------------------------
ksm_4031_dump()+220  call     ksdfmw()+960         000000000 ?
8                                                  C000000000000916 ?
                                                   40000000028D4380 ?
                                                   000000000 ?
$cold_ksmasg()+112   call     ksdfmw()+960         000654FFF ?
                                                   C0000000DE0C4BA0 ?
                                                   C0000000DE140FD8 ?
                                                   60000000004DE7D0 ?
                                                   60000000004DE934 ?
                                                   60000000004DD174 ?
kghnospc()+544       call     ksdfmw()+960         000001000 ?
                                                   C0000000DE0C4BA0 ?
                                                   0000F0060 ? 0000F0060 ?
                                                   C0000000DE0C4BE8 ?
                                                   9FFFFFFFFFFFBCC0 ?
                                                   002002000 ? 000000000 ?
$cold_kghalo()+1120  call     ksdfmw()+960         6000000000049400 ?

.....
.....


server 1 $ ### no errors reported in Alert log .


Kind Regards.

error while running dbms_stats ....

Manoj, October 26, 2008 - 9:49 pm UTC

Hi Tom,

Further, I tried to check again the degree in user_tables as follows:

SQL> select count(*) from user_tables where degree !=1;
select count(*) from user_tables where degree !=1
*
ERROR at line 1:
ORA-01722: invalid number


SQL>
SQL> select count(*) from user_tables where degree != '1';

COUNT(*)
----------
1074

SQL> select count(*) from user_tables;

COUNT(*)
----------
1074

SQL>

SQL> select distinct(degree) from user_tables;

DEGREE
----------
1

SQL>

Thanks a lot.

Kind regards.

Tom Kyte
October 27, 2008 - 8:02 am UTC

ops$tkyte%ORA10GR2> select distinct '"' || degree || '"' from user_tables;

'"'||DEGREE|
------------
"         1"


degree has in this case leading blanks. It is not a number, it is a string, a formatted string at that.

and I just spotted it, your use of options with 'gather auto':


options

GATHER AUTO: Gathers all necessary statistics automatically. Oracle implicitly determines which objects need new statistics, and determines how to gather those statistics. When GATHER AUTO is specified, the only additional valid parameters are ownname, stattab, statid, objlist and statown; all other parameter settings are ignored. Returns a list of processed objects.



You probably meant to use gather stale.
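
A sketch of what that might look like instead - with GATHER STALE the other parameters are honored, so an explicitly serial degree sidesteps the PX large pool entirely (this assumes table monitoring is enabled so staleness can be detected, which 9i requires):

begin
   dbms_stats.gather_schema_stats
   ( ownname => 'XXXX',
     options => 'GATHER STALE',
     degree  => 1,          -- explicitly serial: no parallel query, no PX msg pool
     cascade => true );
end;
/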

error while running dbms_stats ....

Manoj, October 27, 2008 - 7:48 pm UTC

Hi Tom,

Many thanks for your time.

I will be trying the option gather stale next.


Further,


....ops$tkyte%ORA10GR2> select distinct '"' || degree || '"' from user_tables;

'"'||DEGREE|
------------
"         1"


a) Do these leading blanks cause dbms_stats to go parallel, since the table default degree value (1) is not picked up?

I queried an *other* database than the problem one above for the values of degree, and it also comes back with leading spaces as above, but NO error is thrown when a number comparison is performed against user_tables.

Output as follows:

SQL> select distinct '"' || degree || '"' from user_tables;

'"'||DEGREE|
------------
"         1"

SQL> select count(*) from user_tables where degree !=1;

  COUNT(*)
----------
         0

SQL> select count(*) from user_tables;

  COUNT(*)
----------
      1000

SQL>


So,
b) does it mean that degree in the above query has leading spaces, but the comparison with a number (i.e. 1) still succeeds?

c) we have been running analyze table T compute... so far, since dbms_stats is failing. Can we use dbms_stats with the stale option directly?

Also, we have a 2-node RAC setup which runs in Active-Passive mode (in terms of client connections), i.e. all connections are always directed to the primary instance using the client TNS. So at any given point in time the other instance is idle but up and running (no connections).

So
d) Can I run dbms_stats on the other (secondary, idle) database instance and see if it succeeds? Your suggestion please?
  
Kind Regards,

Tom Kyte
October 27, 2008 - 8:54 pm UTC

a) no - I was just pointing out why your queries above were not returning anything. That is all.

it went parallel because you said "do your auto thing, have at it, do what you feel like doing"



NEVER EVER NEVER EVER compare a string to a number. All it takes is one string that cannot be converted to get the error.

only compare strings to strings
dates to dates
numbers to numbers

and never any other combination.
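
For example, a safe way to write the earlier count (a sketch; TRIM removes the leading blanks in the formatted DEGREE string, and the comparison stays string-to-string, so a value such as DEFAULT cannot raise ORA-01722):

select count(*)
  from user_tables
 where trim(degree) <> '1';   -- string compared to string, no implicit TO_NUMBER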

b) no, it means the "where degree != 1" was really "where TO_NUMBER(degree) != 1" and you got lucky, the word DEFAULT did not appear

c) yes, you can use dbms_stats, yes you can use gather stale - if you understand what it means, what it does and if that satisfies your needs.

d) sure you can, but then it wouldn't be active passive - that is, expect to see a ton of interconnect traffic all of a sudden as many of the blocks will have to be shipped for the first time to the other node.

error while running dbms_stats ....

Manoj, October 27, 2008 - 9:07 pm UTC

Many Thanks Tom for enlightening me.

...c) yes, you can use dbms_stats, yes you can use gather stale - if you understand what it means, what it does and if that satisfies your needs.

I have gathered the docs on the subject and am trying to go through them cover to cover.

Kind Regards.

dbms_stats pkg issue after upgrade from 9i

A reader, December 23, 2008 - 12:51 am UTC

Dear Tom,

We have upgraded our Oracle Applications database from 9i to Oracle 10g, but when I look at the dbms_stats package, I find:
SQL> select dbms_stats.get_param('METHOD_OPT') from dual;

DBMS_STATS.GET_PARAM('METHOD_OPT')
------------------------------------------------------------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE 1


This is the value even after the upgrade to 10g. As far as I know, in 10g it should be FOR ALL COLUMNS SIZE AUTO. Can you please help me out here?

Regards

Balakrishna.
Tom Kyte
December 29, 2008 - 3:06 pm UTC

you can use dbms_stats.set_param to set it to whatever you want.
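
For example (a sketch; the value shown simply restores the 10g default):

begin
    dbms_stats.set_param( 'METHOD_OPT', 'FOR ALL COLUMNS SIZE AUTO' );
end;
/

select dbms_stats.get_param('METHOD_OPT') from dual;   -- verify the change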


Details on statistics

A reader, January 05, 2009 - 11:41 am UTC

We currently have a 120 GB database in production where we run full schema statistics every day and that takes about 5 hours. We have a lot of IOT tables and we are gathering 100% stats on them every day. I need some insights on how statistics work with DBMS_STATS package and how we can determine how to better gather the statistics.

Thanks for your help.
Tom Kyte
January 05, 2009 - 11:48 am UTC

... I need some insights on how
statistics work with DBMS_STATS package ...

well, dbms_stats gathers them - it doesn't really "interact" with them. I think you mean "with the optimizer", not "with the dbms_stats package"


do your tables change frequently? are they growing, shrinking, staying the same?

it could well be that most of them do not need any gathering after a while.

Why do you gather them every day? Why do you do full? version information is relevant too.

A reader, January 05, 2009 - 11:57 am UTC

This system went live in July 2008. In the first few weeks after it went live we saw some performance issues in the front end. So we started gathering statistics daily for the tables accessed from the front end, and then decided we would just do full schema statistics daily to make sure we don't have any problems.
Some tables grow a lot every day and some stay the same.
Initially the full schema stats job took an hour or so; after we changed some tables to IOTs and started doing 100% on them, the time has increased to 5 hours. That is why we want to revisit our stats job and come up with something more efficient.
Our database is Oracle 10.2.0.4.
Tom Kyte
January 05, 2009 - 12:41 pm UTC

probably, for most of your segments - an estimate (1 to 10%) would be more than sufficient, and gather_stale (touching only tables that have grown/shrunk/been modified a lot) would do.

you will want to *test* that - and if you want to understand 'how' statistics are used, Jonathan Lewis wrote a decent book on that - Cost Based Oracle Fundamentals

A reader, January 05, 2009 - 12:57 pm UTC

Does gather_stale gather statistics only when the change in a table exceeds a certain percentage? Do you suggest that we gather table by table instead of doing schema statistics?
Tom Kyte
January 05, 2009 - 1:45 pm UTC

gather stale uses the information in dba_tab_modifications to determine what segments are prime candidates for statistics.

you would typically use that with schema level gathering
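
To see which segments gather stale would consider, you can flush the monitoring data and look at the modification counts yourself (a sketch; these are the standard dictionary views):

exec dbms_stats.flush_database_monitoring_info;

select table_name, inserts, updates, deletes, truncated
  from user_tab_modifications
 order by inserts + updates + deletes desc;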

Degree of parallelism not working

Ashiq, January 05, 2009 - 2:16 pm UTC

Hi Tom,

We have a big table (about 100 GB), and when we tried to gather stats with the degree option it did not gather stats with child (parallel) processes.

It's a 32-CPU server.


Below is my create table statement

CREATE TABLE SLS_ORDR_FACT
(
ACCT_REGN_CODE VARCHAR2(25 BYTE) NOT NULL,
SLS_ORDR_DOC_HDR_NUM_CODE VARCHAR2(25 BYTE) NOT NULL,
SLS_ORDR_DOC_LN_ITM_NUM_CODE VARCHAR2(25 BYTE) NOT NULL,
..
..
..
PARTITION US_FOOTWEAR_REPLEN_FORECAST VALUES (33)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION REMAINDER VALUES (DEFAULT)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
)
NOCOMPRESS
NOCACHE
PARALLEL ( DEGREE 4 INSTANCES 1 )
NOMONITORING

ENABLE ROW MOVEMENT;

CREATE UNIQUE INDEX PK_SLS_ORDR_FACT ON SLS_ORDR_FACT
(SLS_ORDR_DOC_HDR_NUM_CODE, SLS_ORDR_DOC_LN_ITM_NUM_CODE, KEY_CCD_DATE_DIM_ID, RJCT_RSN_CODE)
NOLOGGING
TABLESPACE DM_SMART_IDX
PCTFREE 1
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
NOPARALLEL;


ALTER TABLE SLS_ORDR_FACT ADD (
CONSTRAINT SOF_PK
PRIMARY KEY
(SLS_ORDR_DOC_HDR_NUM_CODE, SLS_ORDR_DOC_LN_ITM_NUM_CODE, KEY_CCD_DATE_DIM_ID, RJCT_RSN_CODE));

=============

####initialization parameters
parallel_automatic_tuning boolean FALSE
parallel_max_servers integer 960
parallel_min_percent integer 0
parallel_min_servers integer 37
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 3
recovery_parallelism integer 0


Tom Kyte
January 05, 2009 - 2:38 pm UTC

you do not tell us what the dbms_stats call was - nor how you ascertained that it is not using parallel...

Degree of parallelism not working

Ashiq, January 05, 2009 - 2:28 pm UTC

This is my statement which I tried to execute

begin
DBMS_STATS.GATHER_TABLE_STATS
( ownname => 'DM_SMART'
, tabname => 'SLS_ORDR_FACT'
, partname => null -- partitions
, estimate_percent => 2
, degree =>8
, cascade => true -- does indexes
);
end;
/

Tom Kyte
January 05, 2009 - 2:43 pm UTC

give me a create I can actually run - and tell us how you arrived at "it did not use parallel"

Ashiq, January 05, 2009 - 2:58 pm UTC

Here it goes--

CREATE TABLE SLS_ORDR_FACT
(
ACCT_REGN_CODE VARCHAR2(25 BYTE) NOT NULL,
SLS_ORDR_DOC_HDR_NUM_CODE VARCHAR2(25 BYTE) NOT NULL,
SLS_ORDR_DOC_LN_ITM_NUM_CODE VARCHAR2(25 BYTE) NOT NULL,
KEY_CCD_DATE_DIM_ID NUMBER(38,2) NOT NULL,
RJCT_RSN_CODE VARCHAR2(25 BYTE) NOT NULL,
KEY_RJCT_RSN_DIM_ID NUMBER(38,2) NOT NULL,
LCL_CRNCY_CODE VARCHAR2(25 BYTE),
SLS_ORG_CODE VARCHAR2(25 BYTE),
DIV_CODE VARCHAR2(25 BYTE),
LFCYCL_CODE VARCHAR2(25 BYTE),
ORDR_ENTRY_MTHD_CODE VARCHAR2(25 BYTE),
ORDR_TYPE_CODE VARCHAR2(25 BYTE),
QLTY_TYPE_CODE VARCHAR2(25 BYTE),
REF_DOC_TYPE_CODE VARCHAR2(25 BYTE),
PRECEDE_DOC_HDR_NUM_CODE VARCHAR2(25 BYTE),
PRECEDE_DOC_LN_ITM_NUM_CODE VARCHAR2(25 BYTE),
CUST_ORDR_NUM_CODE VARCHAR2(25 BYTE),
CNFRMD_UNIT_QTY NUMBER(38,2),
CUST_RJCTD_UNIT_QTY NUMBER(38,2),
FXD_UNIT_QTY NUMBER(38,2),
ON_ORDR_UNIT_QTY NUMBER(38,2),
OPEN_UNIT_QTY NUMBER(38,2),
OPEN_TO_REF_UNIT_QTY NUMBER(38,2),
ORDR_ENTRY_UNIT_QTY NUMBER(38,2),
AUTH_FUTR_UNIT_QTY NUMBER(38,2),
OTHR_RJCTD_UNIT_QTY NUMBER(38,2),
RSRVD_UNIT_QTY NUMBER(38,2),
SCHDLD_FOR_DLVRY_UNIT_QTY NUMBER(38,2),
UNCNFRMD_UNIT_QTY NUMBER(38,2),
ORGNL_ORDR_ENTRY_UNIT_QTY NUMBER(38,2),
UOM_CNVRSN_FCTR NUMBER(38,2),
CNFRMD_AMT NUMBER(38,2),
CUST_RJCTD_AMT NUMBER(38,2),
FXD_AMT NUMBER(38,2),
ON_ORDR_AMT NUMBER(38,2),
OPEN_AMT NUMBER(38,2),
OPEN_TO_REF_AMT NUMBER(38,2),
ORDR_ENTRY_AMT NUMBER(38,2),
AUTH_FUTR_AMT NUMBER(38,2),
OTHR_RJCTD_AMT NUMBER(38,2),
RSRVD_AMT NUMBER(38,2),
SCHDLD_FOR_DLVRY_AMT NUMBER(38,2),
UNCNFRMD_AMT NUMBER(38,2),
ORGNL_ORDR_ENTRY_AMT NUMBER(38,2),
WHLSL_AMT NUMBER(38,2),
WHLSL_UNIT_PRC NUMBER(38,2),
TRL_CAT_CD VARCHAR2(25 BYTE),
GOAL_RFRNC_NUM_CODE VARCHAR2(25 BYTE),
KEY_ACCT_DEPT_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_PRFL_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_REGN_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_SHPTO_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_SLDT_DIM_ID NUMBER(38) NOT NULL,
KEY_BUSNSS_CCD_MNTH_DIM_ID NUMBER(38) NOT NULL,
KEY_BUSNSS_CRD_MNTH_DIM_ID NUMBER(38) NOT NULL,
KEY_BUSNSS_FRZ_CRD_MNTH_DIM_ID NUMBER(38) NOT NULL,
KEY_CCD_SESN_YEAR_DIM_ID NUMBER(38) NOT NULL,
KEY_CNCL_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_CRD_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_CRD_SESN_YEAR_DIM_ID NUMBER(38) NOT NULL,
KEY_DIV_DIM_ID NUMBER(38) NOT NULL,
KEY_FRZ_CRD_DIM_ID NUMBER(38) NOT NULL,
KEY_FRZ_CRD_SESN_YEAR_DIM_ID NUMBER(38) NOT NULL,
KEY_MATRL_DIM_ID NUMBER(38) NOT NULL,
KEY_ORDR_ENTRY_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_ORDR_MAINT_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_ORIGNL_CCD_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_ORIGNL_CRD_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_PRFRM_SLS_REP_DIM_ID NUMBER(38) NOT NULL,
KEY_MGMT_SLS_REP_DIM_ID NUMBER(38) NOT NULL,
KEY_RTRN_RSN_DIM_ID NUMBER(38) NOT NULL,
KEY_SLS_ORG_DIM_ID NUMBER(38) NOT NULL,
KEY_SLSMATRL_DIM_ID NUMBER(38) NOT NULL,
KEY_STNDRD_FLTR_DIM_ID NUMBER(38) NOT NULL,
KEY_AUTH_FUTR_DLNDT_DIM_ID NUMBER(38) NOT NULL,
ZZZ_CREATE_DTTM DATE DEFAULT SYSDATE,
ZZZ_UPDATE_DTTM DATE DEFAULT SYSDATE,
ZZZ_RCRD_CHNG_UID VARCHAR2(25 BYTE) DEFAULT USER,
NET_UNIT_PRC NUMBER(38,2),
USAGE_CODE VARCHAR2(10 BYTE),
DSCNT_PER_UNIT NUMBER,
TTL_DSCNT_AMT NUMBER,
KEY_SEASNL_SNPSHT_DT_DIM_ID NUMBER(38) DEFAULT 0,
SEASNL_SNPSHT_UNIT_QTY NUMBER(38) DEFAULT 0,
SEASNL_SNPSHT_AMT NUMBER(38,2) DEFAULT 0,
KEY_SSN_FRZ_CRD_DT_DIM_ID NUMBER(38) DEFAULT 0
)
TABLESPACE DM_SMART_TAB_LG
PCTUSED 0
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
PCTINCREASE 0
)
NOLOGGING
PARTITION BY LIST (KEY_STNDRD_FLTR_DIM_ID)
(
PARTITION US_REMAINDER VALUES (0)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_FOOTWEAR_FUTURES VALUES (1)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_FOOTWEAR_AT_ONCE VALUES (2)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_FOOTWEAR_B_GRADE VALUES (3)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_FOOTWEAR_CLOSEOUT VALUES (4)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_APPAREL_FUTURES VALUES (5)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_APPAREL_AT_ONCE VALUES (6)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_APPAREL_B_GRADE VALUES (7)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_APPAREL_CLOSEOUT VALUES (8)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_EQUIPMENT_FUTURES VALUES (9)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_EQUIPMENT_AT_ONCE VALUES (10)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_EQUIPMENT_B_GRADE VALUES (11)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_EQUIPMENT_CLOSEOUT VALUES (12)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_FOOTWEAR_REPLEN VALUES (13)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_APPAREL_REPLEN VALUES (14)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_EQUIPMENT_REPLEN VALUES (15)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_APPAREL_REPLEN_FORECAST VALUES (31)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION US_EQUIPMENT_REPLEN_FORECAST VALUES (32)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_REMAINDER VALUES (100)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_FOOTWEAR_FUTURES VALUES (101)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_FOOTWEAR_AT_ONCE VALUES (102)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_FOOTWEAR_B_GRADE VALUES (103)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_FOOTWEAR_CLOSEOUT VALUES (104)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_APPAREL_FUTURES VALUES (105)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_APPAREL_AT_ONCE VALUES (106)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_APPAREL_B_GRADE VALUES (107)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_APPAREL_CLOSEOUT VALUES (108)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_EQUIPMENT_FUTURES VALUES (109)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_EQUIPMENT_AT_ONCE VALUES (110)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_EQUIPMENT_B_GRADE VALUES (111)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_EQUIPMENT_CLOSEOUT VALUES (112)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_FOOTWEAR_REPLEN VALUES (113)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_APPAREL_REPLEN VALUES (114)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION CANADA_EQUIPMENT_REPLEN VALUES (115)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_REMAINDER VALUES (200)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_FOOTWEAR_FUTURES VALUES (201)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_FOOTWEAR_AT_ONCE VALUES (202)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_FOOTWEAR_B_GRADE VALUES (203)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_FOOTWEAR_CLOSEOUT VALUES (204)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_FOOTWEAR_REPLEN VALUES (205)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_FOOTWEAR_OTHER VALUES (206)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_APPAREL_FUTURES VALUES (207)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_APPAREL_AT_ONCE VALUES (208)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_APPAREL_B_GRADE VALUES (209)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_APPAREL_CLOSEOUT VALUES (210)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_APPAREL_REPLEN VALUES (211)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_APPAREL_OTHER VALUES (212)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
BUFFER_POOL DEFAULT
),
PARTITION EMEA_EQUIPMENT_FUTURES VALUES (213)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT
Tom Kyte
January 05, 2009 - 4:22 pm UTC

a sample create table should be small - I don't need your storage clauses - in fact, I cannot use them. I don't have your tablespaces

please - small test case

and tell us how you "determined" what you determined.

Ashiq, January 05, 2009 - 3:08 pm UTC

Since it's a huge create table statement, can I send the create table script directly to your mail id?
Tom Kyte
January 05, 2009 - 4:34 pm UTC

no, make it smaller

Ashiq, January 05, 2009 - 6:11 pm UTC

I've removed storage parameters ---

CREATE TABLE SLS_ORDR_FACT
(
ACCT_REGN_CODE VARCHAR2(25 BYTE) NOT NULL,
SLS_ORDR_DOC_HDR_NUM_CODE VARCHAR2(25 BYTE) NOT NULL,
SLS_ORDR_DOC_LN_ITM_NUM_CODE VARCHAR2(25 BYTE) NOT NULL,
KEY_CCD_DATE_DIM_ID NUMBER(38,2) NOT NULL,
RJCT_RSN_CODE VARCHAR2(25 BYTE) NOT NULL,
KEY_RJCT_RSN_DIM_ID NUMBER(38,2) NOT NULL,
LCL_CRNCY_CODE VARCHAR2(25 BYTE),
SLS_ORG_CODE VARCHAR2(25 BYTE),
DIV_CODE VARCHAR2(25 BYTE),
LFCYCL_CODE VARCHAR2(25 BYTE),
ORDR_ENTRY_MTHD_CODE VARCHAR2(25 BYTE),
ORDR_TYPE_CODE VARCHAR2(25 BYTE),
QLTY_TYPE_CODE VARCHAR2(25 BYTE),
REF_DOC_TYPE_CODE VARCHAR2(25 BYTE),
PRECEDE_DOC_HDR_NUM_CODE VARCHAR2(25 BYTE),
PRECEDE_DOC_LN_ITM_NUM_CODE VARCHAR2(25 BYTE),
CUST_ORDR_NUM_CODE VARCHAR2(25 BYTE),
CNFRMD_UNIT_QTY NUMBER(38,2),
CUST_RJCTD_UNIT_QTY NUMBER(38,2),
FXD_UNIT_QTY NUMBER(38,2),
ON_ORDR_UNIT_QTY NUMBER(38,2),
OPEN_UNIT_QTY NUMBER(38,2),
OPEN_TO_REF_UNIT_QTY NUMBER(38,2),
ORDR_ENTRY_UNIT_QTY NUMBER(38,2),
AUTH_FUTR_UNIT_QTY NUMBER(38,2),
OTHR_RJCTD_UNIT_QTY NUMBER(38,2),
RSRVD_UNIT_QTY NUMBER(38,2),
SCHDLD_FOR_DLVRY_UNIT_QTY NUMBER(38,2),
UNCNFRMD_UNIT_QTY NUMBER(38,2),
ORGNL_ORDR_ENTRY_UNIT_QTY NUMBER(38,2),
UOM_CNVRSN_FCTR NUMBER(38,2),
CNFRMD_AMT NUMBER(38,2),
CUST_RJCTD_AMT NUMBER(38,2),
FXD_AMT NUMBER(38,2),
ON_ORDR_AMT NUMBER(38,2),
OPEN_AMT NUMBER(38,2),
OPEN_TO_REF_AMT NUMBER(38,2),
ORDR_ENTRY_AMT NUMBER(38,2),
AUTH_FUTR_AMT NUMBER(38,2),
OTHR_RJCTD_AMT NUMBER(38,2),
RSRVD_AMT NUMBER(38,2),
SCHDLD_FOR_DLVRY_AMT NUMBER(38,2),
UNCNFRMD_AMT NUMBER(38,2),
ORGNL_ORDR_ENTRY_AMT NUMBER(38,2),
WHLSL_AMT NUMBER(38,2),
WHLSL_UNIT_PRC NUMBER(38,2),
TRL_CAT_CD VARCHAR2(25 BYTE),
GOAL_RFRNC_NUM_CODE VARCHAR2(25 BYTE),
KEY_ACCT_DEPT_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_PRFL_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_REGN_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_SHPTO_DIM_ID NUMBER(38) NOT NULL,
KEY_ACCT_SLDT_DIM_ID NUMBER(38) NOT NULL,
KEY_BUSNSS_CCD_MNTH_DIM_ID NUMBER(38) NOT NULL,
KEY_BUSNSS_CRD_MNTH_DIM_ID NUMBER(38) NOT NULL,
KEY_BUSNSS_FRZ_CRD_MNTH_DIM_ID NUMBER(38) NOT NULL,
KEY_CCD_SESN_YEAR_DIM_ID NUMBER(38) NOT NULL,
KEY_CNCL_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_CRD_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_CRD_SESN_YEAR_DIM_ID NUMBER(38) NOT NULL,
KEY_DIV_DIM_ID NUMBER(38) NOT NULL,
KEY_FRZ_CRD_DIM_ID NUMBER(38) NOT NULL,
KEY_FRZ_CRD_SESN_YEAR_DIM_ID NUMBER(38) NOT NULL,
KEY_MATRL_DIM_ID NUMBER(38) NOT NULL,
KEY_ORDR_ENTRY_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_ORDR_MAINT_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_ORIGNL_CCD_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_ORIGNL_CRD_DATE_DIM_ID NUMBER(38) NOT NULL,
KEY_PRFRM_SLS_REP_DIM_ID NUMBER(38) NOT NULL,
KEY_MGMT_SLS_REP_DIM_ID NUMBER(38) NOT NULL,
KEY_RTRN_RSN_DIM_ID NUMBER(38) NOT NULL,
KEY_SLS_ORG_DIM_ID NUMBER(38) NOT NULL,
KEY_SLSMATRL_DIM_ID NUMBER(38) NOT NULL,
KEY_STNDRD_FLTR_DIM_ID NUMBER(38) NOT NULL,
KEY_AUTH_FUTR_DLNDT_DIM_ID NUMBER(38) NOT NULL,
ZZZ_CREATE_DTTM DATE DEFAULT SYSDATE,
ZZZ_UPDATE_DTTM DATE DEFAULT SYSDATE,
ZZZ_RCRD_CHNG_UID VARCHAR2(25 BYTE) DEFAULT USER,
NET_UNIT_PRC NUMBER(38,2),
USAGE_CODE VARCHAR2(10 BYTE),
DSCNT_PER_UNIT NUMBER,
TTL_DSCNT_AMT NUMBER,
KEY_SEASNL_SNPSHT_DT_DIM_ID NUMBER(38) DEFAULT 0,
SEASNL_SNPSHT_UNIT_QTY NUMBER(38) DEFAULT 0,
SEASNL_SNPSHT_AMT NUMBER(38,2) DEFAULT 0,
KEY_SSN_FRZ_CRD_DT_DIM_ID NUMBER(38) DEFAULT 0
)
TABLESPACE DM_SMART_TAB_LG
PCTUSED 0
PCTFREE 5
INITRANS 13
MAXTRANS 255
STORAGE (
PCTINCREASE 0
)
NOLOGGING
PARTITION BY LIST (KEY_STNDRD_FLTR_DIM_ID)
(
PARTITION US_REMAINDER VALUES (0)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_FOOTWEAR_FUTURES VALUES (1)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_FOOTWEAR_AT_ONCE VALUES (2)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_FOOTWEAR_B_GRADE VALUES (3)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_FOOTWEAR_CLOSEOUT VALUES (4)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_APPAREL_FUTURES VALUES (5)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_APPAREL_AT_ONCE VALUES (6)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_APPAREL_B_GRADE VALUES (7)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_APPAREL_CLOSEOUT VALUES (8)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_EQUIPMENT_FUTURES VALUES (9)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_EQUIPMENT_AT_ONCE VALUES (10)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_EQUIPMENT_B_GRADE VALUES (11)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_EQUIPMENT_CLOSEOUT VALUES (12)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_FOOTWEAR_REPLEN VALUES (13)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_APPAREL_REPLEN VALUES (14)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_EQUIPMENT_REPLEN VALUES (15)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_APPAREL_REPLEN_FORECAST VALUES (31)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_EQUIPMENT_REPLEN_FORECAST VALUES (32)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_REMAINDER VALUES (100)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_FOOTWEAR_FUTURES VALUES (101)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_FOOTWEAR_AT_ONCE VALUES (102)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_FOOTWEAR_B_GRADE VALUES (103)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_FOOTWEAR_CLOSEOUT VALUES (104)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_APPAREL_FUTURES VALUES (105)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_APPAREL_AT_ONCE VALUES (106)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_APPAREL_B_GRADE VALUES (107)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_APPAREL_CLOSEOUT VALUES (108)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_EQUIPMENT_FUTURES VALUES (109)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_EQUIPMENT_AT_ONCE VALUES (110)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_EQUIPMENT_B_GRADE VALUES (111)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_EQUIPMENT_CLOSEOUT VALUES (112)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_FOOTWEAR_REPLEN VALUES (113)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_APPAREL_REPLEN VALUES (114)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION CANADA_EQUIPMENT_REPLEN VALUES (115)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_REMAINDER VALUES (200)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_FOOTWEAR_FUTURES VALUES (201)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_FOOTWEAR_AT_ONCE VALUES (202)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_FOOTWEAR_B_GRADE VALUES (203)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_FOOTWEAR_CLOSEOUT VALUES (204)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_FOOTWEAR_REPLEN VALUES (205)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_FOOTWEAR_OTHER VALUES (206)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_APPAREL_FUTURES VALUES (207)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_APPAREL_AT_ONCE VALUES (208)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_APPAREL_B_GRADE VALUES (209)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_APPAREL_CLOSEOUT VALUES (210)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_APPAREL_REPLEN VALUES (211)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_APPAREL_OTHER VALUES (212)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_EQUIPMENT_FUTURES VALUES (213)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_EQUIPMENT_AT_ONCE VALUES (214)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_EQUIPMENT_B_GRADE VALUES (215)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_EQUIPMENT_CLOSEOUT VALUES (216)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_EQUIPMENT_REPLEN VALUES (217)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION EMEA_EQUIPMENT_OTHER VALUES (218)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION US_FOOTWEAR_REPLEN_FORECAST VALUES (33)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
,
PARTITION REMAINDER VALUES (DEFAULT)
NOLOGGING
NOCOMPRESS
TABLESPACE DM_SMART_TAB_SOF
PCTFREE 5
INITRANS 13
MAXTRANS 255
)
NOCOMPRESS
NOCACHE
PARALLEL ( DEGREE 4 INSTANCES 1 )
NOMONITORING
ENABLE ROW MOVEMENT;


CREATE UNIQUE INDEX PK_SLS_ORDR_FACT ON SLS_ORDR_FACT
(SLS_ORDR_DOC_HDR_NUM_CODE, SLS_ORDR_DOC_LN_ITM_NUM_CODE, KEY_CCD_DATE_DIM_ID, RJCT_RSN_CODE)
NOLOGGING
TABLESPACE DM_SMART_IDX
PCTFREE 1
INITRANS 13
MAXTRANS 255
STORAGE (
INITIAL 16M
NEXT 16M
MINEXTENTS 1
MAXEXTENTS 2147483645
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
NOPARALLEL;


ALTER TABLE SLS_ORDR_FACT ADD (
CONSTRAINT SOF_PK
PRIMARY KEY
(SLS_ORDR_DOC_HDR_NUM_CODE, SLS_ORDR_DOC_LN_ITM_NUM_CODE, KEY_CCD_DATE_DIM_ID, RJCT_RSN_CODE));






Tom Kyte
January 05, 2009 - 6:45 pm UTC

but you left on all of the other "junk"


and - most importantly - I do not see the answer to a question I've asked over and over: how did you come to your conclusion?

Here is my much smaller example showing this "working"

ops$tkyte%ORA10GR2> CREATE TABLE SLS_ORDR_FACT
  2  (
  3    ACCT_REGN_CODE                  VARCHAR2(25 BYTE) NOT NULL,
  4    SLS_ORDR_DOC_HDR_NUM_CODE       VARCHAR2(25 BYTE) NOT NULL,
  5    SLS_ORDR_DOC_LN_ITM_NUM_CODE    VARCHAR2(25 BYTE) NOT NULL,
  6    KEY_CCD_DATE_DIM_ID             NUMBER(38,2)  NOT NULL,
  7    RJCT_RSN_CODE                   VARCHAR2(25 BYTE) NOT NULL,
  8    KEY_RJCT_RSN_DIM_ID             NUMBER(38,2)  NOT NULL,
  9    LCL_CRNCY_CODE                  VARCHAR2(25 BYTE),
 10    SLS_ORG_CODE                    VARCHAR2(25 BYTE),
 11    DIV_CODE                        VARCHAR2(25 BYTE),
 12    ZZZ_CREATE_DTTM                 DATE          DEFAULT SYSDATE,
 13    ZZZ_UPDATE_DTTM                 DATE          DEFAULT SYSDATE,
 14    ZZZ_RCRD_CHNG_UID               VARCHAR2(25 BYTE) DEFAULT USER,
 15    NET_UNIT_PRC                    NUMBER(38,2),
 16    KEY_STNDRD_FLTR_DIM_ID          NUMBER(3)    NOT NULL,
 17    USAGE_CODE                      VARCHAR2(10 BYTE),
 18    DSCNT_PER_UNIT                  NUMBER,
 19    TTL_DSCNT_AMT                   NUMBER,
 20    KEY_SEASNL_SNPSHT_DT_DIM_ID     NUMBER(38)    DEFAULT 0,
 21    SEASNL_SNPSHT_UNIT_QTY          NUMBER(38)    DEFAULT 0,
 22    SEASNL_SNPSHT_AMT               NUMBER(38,2)  DEFAULT 0,
 23    KEY_SSN_FRZ_CRD_DT_DIM_ID       NUMBER(38)    DEFAULT 0
 24  )
 25  PARTITION BY LIST (KEY_STNDRD_FLTR_DIM_ID)
 26  (
 27    PARTITION US_REMAINDER VALUES (0),
 28    PARTITION US_FOOTWEAR_FUTURES VALUES (1),
 29    PARTITION US_FOOTWEAR_AT_ONCE VALUES (2),
 30    PARTITION US_FOOTWEAR_B_GRADE VALUES (3),
 31    PARTITION US_FOOTWEAR_CLOSEOUT VALUES (4),
 32    PARTITION US_APPAREL_FUTURES VALUES (5),
 33    PARTITION US_APPAREL_AT_ONCE VALUES (6),
 34    PARTITION US_APPAREL_B_GRADE VALUES (7),
 35    PARTITION US_APPAREL_CLOSEOUT VALUES (8),
 36    PARTITION US_EQUIPMENT_FUTURES VALUES (9),
 37    PARTITION US_EQUIPMENT_AT_ONCE VALUES (10),
 38    PARTITION US_EQUIPMENT_B_GRADE VALUES (11),
 39    PARTITION US_EQUIPMENT_CLOSEOUT VALUES (12),
 40    PARTITION US_FOOTWEAR_REPLEN VALUES (13),
 41    PARTITION US_APPAREL_REPLEN VALUES (14),
 42    PARTITION US_EQUIPMENT_REPLEN VALUES (15),
 43    PARTITION US_APPAREL_REPLEN_FORECAST VALUES (31),
 44    PARTITION US_EQUIPMENT_REPLEN_FORECAST VALUES (32),
 45    PARTITION CANADA_REMAINDER VALUES (100),
 46    PARTITION CANADA_FOOTWEAR_FUTURES VALUES (101),
 47    PARTITION CANADA_FOOTWEAR_AT_ONCE VALUES (102),
 48    PARTITION CANADA_FOOTWEAR_B_GRADE VALUES (103),
 49    PARTITION CANADA_FOOTWEAR_CLOSEOUT VALUES (104),
 50    PARTITION CANADA_APPAREL_FUTURES VALUES (105),
 51    PARTITION CANADA_APPAREL_AT_ONCE VALUES (106),
 52    PARTITION CANADA_APPAREL_B_GRADE VALUES (107),
 53    PARTITION CANADA_APPAREL_CLOSEOUT VALUES (108),
 54    PARTITION CANADA_EQUIPMENT_FUTURES VALUES (109),
 55    PARTITION CANADA_EQUIPMENT_AT_ONCE VALUES (110),
 56    PARTITION CANADA_EQUIPMENT_B_GRADE VALUES (111),
 57    PARTITION CANADA_EQUIPMENT_CLOSEOUT VALUES (112),
 58    PARTITION CANADA_FOOTWEAR_REPLEN VALUES (113),
 59    PARTITION CANADA_APPAREL_REPLEN VALUES (114),
 60    PARTITION CANADA_EQUIPMENT_REPLEN VALUES (115),
 61    PARTITION EMEA_REMAINDER VALUES (200),
 62    PARTITION EMEA_FOOTWEAR_FUTURES VALUES (201),
 63    PARTITION EMEA_FOOTWEAR_AT_ONCE VALUES (202),
 64    PARTITION EMEA_FOOTWEAR_B_GRADE VALUES (203),
 65    PARTITION EMEA_FOOTWEAR_CLOSEOUT VALUES (204),
 66    PARTITION EMEA_FOOTWEAR_REPLEN VALUES (205),
 67    PARTITION EMEA_FOOTWEAR_OTHER VALUES (206),
 68    PARTITION EMEA_APPAREL_FUTURES VALUES (207),
 69    PARTITION EMEA_APPAREL_AT_ONCE VALUES (208),
 70    PARTITION EMEA_APPAREL_B_GRADE VALUES (209),
 71    PARTITION EMEA_APPAREL_CLOSEOUT VALUES (210),
 72    PARTITION EMEA_APPAREL_REPLEN VALUES (211),
 73    PARTITION EMEA_APPAREL_OTHER VALUES (212),
 74    PARTITION EMEA_EQUIPMENT_FUTURES VALUES (213),
 75    PARTITION EMEA_EQUIPMENT_AT_ONCE VALUES (214),
 76    PARTITION EMEA_EQUIPMENT_B_GRADE VALUES (215),
 77    PARTITION EMEA_EQUIPMENT_CLOSEOUT VALUES (216),
 78    PARTITION EMEA_EQUIPMENT_REPLEN VALUES (217),
 79    PARTITION EMEA_EQUIPMENT_OTHER VALUES (218),
 80    PARTITION US_FOOTWEAR_REPLEN_FORECAST VALUES (33),
 81    PARTITION REMAINDER VALUES (DEFAULT)
 82  )
 83  PARALLEL ( DEGREE 4 INSTANCES 1 )
 84  NOMONITORING
 85  ENABLE ROW MOVEMENT;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE INDEX PK_SLS_ORDR_FACT ON SLS_ORDR_FACT
  2  (SLS_ORDR_DOC_HDR_NUM_CODE, SLS_ORDR_DOC_LN_ITM_NUM_CODE, KEY_CCD_DATE_DIM_ID, RJCT_RSN_CODE) NOLOGGING NOPARALLEL;

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec gen_data('SLS_ORDR_FACT', 100000 );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @mystat parallel
ops$tkyte%ORA10GR2> set echo off

NAME                                               VALUE
--------------------------------------------- ----------
DBWR parallel query checkpoint buffers writte          0
n

queries parallelized                                   0
DML statements parallelized                            0
DDL statements parallelized                            0
DFO trees parallelized                                 0
Parallel operations not downgraded                     0
Parallel operations downgraded to serial               0
Parallel operations downgraded 75 to 99 pct            0
Parallel operations downgraded 50 to 75 pct            0
Parallel operations downgraded 25 to 50 pct            0
Parallel operations downgraded 1 to 25 pct             0

11 rows selected.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> begin
  2  DBMS_STATS.GATHER_TABLE_STATS
  3  ( ownname  => user
  4  , tabname => 'SLS_ORDR_FACT'
  5  , partname => null  -- partitions
  6  , estimate_percent => 2
  7  , degree =>8
  8  , cascade => true  -- does indexes
  9  );
 10  end;
 11  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> @mystat parallel
ops$tkyte%ORA10GR2> set echo off

NAME                                               VALUE
--------------------------------------------- ----------
DBWR parallel query checkpoint buffers writte          0
n

queries parallelized                                   5
DML statements parallelized                            0
DDL statements parallelized                            0
DFO trees parallelized                                 5
Parallel operations not downgraded                     5
Parallel operations downgraded to serial               0
Parallel operations downgraded 75 to 99 pct            0
Parallel operations downgraded 50 to 75 pct            0
Parallel operations downgraded 25 to 50 pct            0
Parallel operations downgraded 1 to 25 pct             0

11 rows selected.

Gather Stats

A reader, March 06, 2009 - 9:47 am UTC

Hi Tom,

1) With a number of local bitmap indexes on a partitioned table, can we gather stats in parallel?
2) If we exchange a partition of a history table with a new table that already has current-month stats, will the stats of the new table be reflected in the history partitioned table after the exchange?

Regards,
Tom Kyte
March 06, 2009 - 11:01 am UTC

1) yes
2) did you gather them? the stats should "swap" on the partition, but global statistics might be in need of some maintenance
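
A sketch of that maintenance - refreshing only the table-level (global) statistics after the exchange (the table name is a placeholder):

begin
    dbms_stats.gather_table_stats
    ( ownname          => user
    , tabname          => 'HIST_TAB'   -- placeholder history table name
    , granularity      => 'GLOBAL'     -- refresh table-level stats only
    , estimate_percent => dbms_stats.auto_sample_size
    );
end;
/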

the best way of statistics gathering after ETL

Fredx Mercy, November 03, 2009 - 7:27 am UTC

Hi Tom,

You are my last chance to accelerate statistics gathering.

Here is my problem:

Let's say I have one large fact table partitioned by range. The ETL procedure changes only a couple (two or three) of the 48 partitions in total (not necessarily the last three partitions).

1. What is the fastest way to gather table statistics?

2. Part of the ETL procedure is also local index rebuilding. When rebuilding indexes, Oracle automatically gathers index statistics (probably local and global; correct me if I am wrong). If that is true, can I use cascade => false when gathering table statistics, rather than cascade => true or DBMS_STATS.AUTO_CASCADE? (The PL/SQL Packages and Types Reference claims that index stats gathering is not parallelized, so skipping the cascade would be faster given that I use degree => 4.)

3. Is it important to have histograms on columns, or is it better/faster to collect table statistics with method_opt => 'FOR ALL COLUMNS SIZE 1' instead of 'FOR ALL COLUMNS SIZE AUTO', which is the default value?

4. I've noticed that if I let Oracle decide the sampling size, statistics gathering takes much longer to finish than when I explicitly set the sample size (for example estimate_percent => 2 instead of estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE). Is it safe to use my own estimate size instead of Oracle's?

5. What does the option granularity => 'AUTO' mean? In the PL/SQL Packages and Types Reference I can only find that 'AUTO' determines the granularity based on the partitioning. What does that mean? Is it better to use the granularity => 'ALL' option instead?


Thanks in advance for your answers.

Fredx

Tom Kyte
November 09, 2009 - 2:40 pm UTC

2) depends on how you gather the index stats. You are hopefully only rebuilding the local index partitions that NEED IT, not all - so it would not gather global stats (since we don't read all of that), but it might be able to update them (infer them).

if you rebuild or create the index, the statistics are computed and you need not cascade=>true.


3) you tell me - only you know. Is your data skewed? If it is, do you use those columns in your where clause often? Only you know.

the default - depends on version and what changes you've made, dbms_stats has a get_param procedure to let you know the current defaults


4) again, only you know. Are the statistics better than good enough when you go with 2%? How's it been working?

5) auto granularity is global and partition level.


Note in the below the last two gathers and see the timestamps that show what levels were gathered...


ops$tkyte%ORA10GR2> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('01-jan-2008','dd-mon-yyyy')) ,
 10    PARTITION part2 VALUES LESS THAN (to_date('01-jan-2009','dd-mon-yyyy'))
 11  )
 12  /

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t select to_date( '01-jan-2007' ) + mod(rownum, 2*365 ), rownum, 'x' from all_objects;

50316 rows created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', granularity => 'AUTO' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select 'table', table_name, num_rows, last_analyzed from user_tables where table_name = 'T'
  2  union all
  3  select 'part', partition_name, num_rows, last_analyzed from user_tab_partitions where table_name = 'T';

'TABL TABLE_NAME                       NUM_ROWS LAST_ANALYZED
----- ------------------------------ ---------- --------------------
table T                                   50316 09-nov-2009 15:39:38
part  PART1                               24937 09-nov-2009 15:39:38
part  PART2                               25237 09-nov-2009 15:39:38

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_lock.sleep(5);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> delete from t where mod(x,2) = 0 and dt >= to_date('01-jan-2008');

12532 rows deleted.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', granularity => 'partition', partname => 'part2' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select 'table', table_name, num_rows, last_analyzed from user_tables where table_name = 'T'
  2  union all
  3  select 'part', partition_name, num_rows, last_analyzed from user_tab_partitions where table_name = 'T';

'TABL TABLE_NAME                       NUM_ROWS LAST_ANALYZED
----- ------------------------------ ---------- --------------------
table T                                   50316 09-nov-2009 15:39:38
part  PART1                               24937 09-nov-2009 15:39:38
part  PART2                               12600 09-nov-2009 15:39:43

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_lock.sleep(5);

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', granularity => 'auto', partname => 'part2' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select 'table', table_name, num_rows, last_analyzed from user_tables where table_name = 'T'
  2  union all
  3  select 'part', partition_name, num_rows, last_analyzed from user_tab_partitions where table_name = 'T';

'TABL TABLE_NAME                       NUM_ROWS LAST_ANALYZED
----- ------------------------------ ---------- --------------------
table T                                   37983 09-nov-2009 15:39:49
part  PART1                               24937 09-nov-2009 15:39:38
part  PART2                               12600 09-nov-2009 15:39:49

incremental global statistics maintenance in 10g database

Josip Voyager, December 15, 2009 - 3:08 am UTC

Hi Tom,

Let's say I have one large table in my data warehouse that is partitioned by range (on a date column).

My ETL procedure executes on a daily basis, and only the last three or four partitions change between two ETL batch runs.

I can maintain partition level statistics for changed partitions and local indexes easily (it’s very fast on my system), but the problem is global statistics maintenance.

The question is: how can I incrementally update/maintain global statistics for a large, range-partitioned table on a 10g Release 2 database running on an IBM AIX server?

Please don't just tell me to upgrade to 11g because incremental updating of global statistics is implemented there - my database is about 20 TB, and Oracle 11g Release 2 has not been released for the IBM AIX platform yet.

I can send you the way I collect statistics right now if you like.

I'd appreciate it if you could provide an example.


Regards

Josip

incremental global statistics maintenance in 10g database

Josip Voyager, December 17, 2009 - 2:37 am UTC

I hope you won't forget my question about incremental updating/refreshing global statistics on range partitioned large fact tables in Oracle 10g.

Although that feature is already implemented in 11g, I've found on Metalink that incremental updating is available in version 10.2.0.5.

The problem is that no 10.2.0.5 patch-set update is available yet.

So I can't collect statistics with the granularity => 'APPROX_GLOBAL AND PARTITION' option, because that option doesn't exist in 10.2.0.4.

Once again, is there any other way to incrementally collect global statistics in 10.2.0.4?


2.
What is the difference between incremental global statistics gathering in 10.2.0.5 with granularity => 'APPROX_GLOBAL AND PARTITION' and the 11g way of collecting statistics by setting the INCREMENTAL option with DBMS_STATS.SET_TABLE_PREFS on a large partitioned table?


Regards,


Josip

Question on "gather auto" option of dbms_stats

A reader, February 05, 2010 - 3:05 pm UTC

Hi - I am trying to understand how the "gather auto" option works with dbms_stats. I have a database that has this running every night

BEGIN
SYS.DBMS_STATS.GATHER_SCHEMA_STATS (
OwnName => '<schema name>'
,Options => 'GATHER AUTO'
,Method_Opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO'
,Degree => DBMS_STATS.AUTO_DEGREE
,Cascade => TRUE
,No_Invalidate => TRUE);
END;

Every day I have been checking the sys.dba_tab_stats_history view, and I see one table, user_tab, getting new stats even though there are hardly any new rows in it. Sometimes 7 or 8 rows are added, sometimes 1. So I am trying to understand what gather auto does and how it determines that this table requires newer stats. Can you please help me?
Tom Kyte
February 08, 2010 - 9:33 pm UTC

how many rows are UPDATED in the table? When about 10% of the data is deemed to be changed - we gather stale on that table.
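
In 10g you can see which tables are currently flagged as stale (a sketch; STALE_STATS is a column of the *_TAB_STATISTICS views, and the monitoring data may need flushing first):

exec dbms_stats.flush_database_monitoring_info;

select table_name, stale_stats, last_analyzed
  from user_tab_statistics
 where stale_stats = 'YES';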

Strange Issue

ravi, February 19, 2010 - 11:28 am UTC

Hi Tom,
Recently we are facing a strange issue. A query executes and returns 200 records, but then suddenly it returns 25 rows.

Analyzing all the tables in the query fixes the problem and the correct number of rows is fetched; then again, after some time, it starts returning the incorrect rows.

My question is: how does analyzing the tables fix the issue?

Thanks & regards
Ravi
Tom Kyte
February 25, 2010 - 12:50 am UTC

please capture your sql plans before/after to see if they change. If the plan changed (and nothing in the tables changed - no modifications to the data that could be the cause), then you are experiencing a bug - support will need to work with you.

suggestion, if it happens again, do this:

a) export statistics using dbms_stats
b) capture the plan being used from v$sql_plan (keep a copy of it)
c) gather stats
d) if 'fixed' then export statistics again and capture the plan.

Now you would have a good set of information to get started with support.
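
Steps (a) and (d) might look like this (a sketch; the stat table, table name and statid label are placeholders):

exec dbms_stats.create_stat_table( user, 'STATS_BACKUP' );

begin
    dbms_stats.export_table_stats
    ( ownname => user
    , tabname => 'T'                 -- placeholder table name
    , stattab => 'STATS_BACKUP'
    , statid  => 'BEFORE_GATHER'     -- label for this snapshot
    );
end;
/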

A reader, February 27, 2010 - 11:43 pm UTC

Hi

Suppose the statistics of two databases are exactly the same and the database version is also the same, but there is a big difference in terms of the amount of data.

Will the response time of the sql queries be exactly the same?

Tom Kyte
March 01, 2010 - 11:48 am UTC

how about you look at the other place you asked this IDENTICAL question?

you can even use the search to find it again!

http://asktom.oracle.com/pls/asktom/asktom.search?p_string=%22Suppose+statistics+of+two+databases+are+exactly+same%22

Index Stats in Oracle 9iR2.

Rajeshwaran, Jeyabal, April 19, 2010 - 6:19 am UTC

Tom,

While reading the information about statistics in the Oracle 9iR2 product documentation, I ended up with the scenario below.

http://download.oracle.com/docs/cd/B10501_01/server.920/a96533/stats.htm#33861

<Quote> The statistics-gathering operations can run either serially or in parallel. Index statistics are not gathered in parallel </Quote>

pwruser@PWTK52> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
PL/SQL Release 9.2.0.8.0 - Production
CORE    9.2.0.8.0       Production
TNS for 32-bit Windows: Version 9.2.0.8.0 - Production
NLSRTL Version 9.2.0.8.0 - Production

Session 1:

pwruser@PWTK52> select sid from v$mystat where rownum<=1;

       SID
----------
        10

pwruser@PWTK52> BEGIN
  2    dbms_stats.gather_index_stats(ownname=>USER,indname=>'PK_STATE_CODE_VALUES',degree=>4,estimate_percent=>dbms_stats.auto_sample_size);
  3  END;
  4  /

PL/SQL procedure successfully completed.

Session 2:
pwruser@PWTK52> SELECT * FROM V$PX_SESSION;

no rows selected

pwruser@PWTK52> SELECT * FROM V$PX_SESSION;

SADDR           SID    SERIAL#      QCSID  QCSERIAL#  QCINST_ID SERVER_GROUP SERVER_SET    SERVER#     DEGREE REQ_DEGREE
-------- ---------- ---------- ---------- ---------- ---------- ------------ ---------- ---------- ---------- ----------
68644CB0         10        103         10
68647270         14          1         10        103          1            1          1          1       2             4
68647BE0         15          1         10        103          1            1          1          2       2             4
68648EC0         17          1         10        103          1            1          2          2       2             4
68649830         18          1         10        103          1            1          2          1       2             4

pwruser@PWTK52> SELECT * FROM V$PX_SESSION;

SADDR           SID    SERIAL#      QCSID  QCSERIAL#  QCINST_ID SERVER_GROUP SERVER_SET    SERVER#     DEGREE REQ_DEGREE
-------- ---------- ---------- ---------- ---------- ---------- ------------ ---------- ---------- ---------- ----------
68644CB0         10        103         10
68647270         14          1         10        103          1            1          1          1       2             4
68647BE0         15          1         10        103          1            1          1          2       2             4
68648EC0         17          1         10        103          1            1          2          2       2             4
68649830         18          1         10        103          1            1          2          1       2             4

pwruser@PWTK52> SELECT * FROM V$PX_SESSION;

no rows selected


Question
1) As per the statement in the product documentation, index statistics should not be gathered in parallel. But the results show that the statistics were gathered in parallel. Am I doing something wrong here? Please help me.

Thanks,
Rajesh.
Tom Kyte
April 19, 2010 - 9:05 am UTC

that looks like a doc bug - they meant that for gather_table_stats, the indexes would not be done in parallel.

Otherwise gather_index_stats would not have a degree setting - it would be meaningless.

dbms_stats changes for histograms

A reader, May 24, 2010 - 3:44 pm UTC

Hi Tom,

This is what we have for computing statistics in our production database.

SYS.DBMS_STATS.GATHER_SCHEMA_STATS (
OwnName => '<schema>'
,estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
,degree => DBMS_STATS.AUTO_DEGREE);

Recently we encountered some concurrency issues in the production database and found out that they were related to a system-generated guid column (which is unique) having histograms. So we have a daily job that deletes the stats on this particular column. I have been analyzing the method_opt parameter of dbms_stats, trying to figure out whether I can avoid computing histograms on this unique column by controlling method_opt. This is what I did:

select column_name, count(*) from dba_tab_histograms
where table_name = 'TAB1'
group by column_name;

COLUMN_NAME   COUNT(*)
----------- ----------
TYCODE               2
TXTVAL             160
GUIDCOL            255
FIELDNAME           17

begin
dbms_stats.delete_column_stats('<schema>','TAB1','GUIDCOL' );
end;
/

begin
DBMS_STATS.GATHER_table_STATS (OwnName => '<schema>', tabname => 'TAB1', method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY');
end;
/

COLUMN_NAME   COUNT(*)
----------- ----------
TYCODE               1
TXTVAL             161
GUIDCOL            255
FIELDNAME           17

I would have expected guidcol to have no histograms after I gathered with the SKEWONLY method_opt, but it came back with the same results as before adding the method_opt parameter. What am I missing? And what should I give for the method_opt parameter if I don't want histograms to be computed, other than naming the column explicitly - because at some point I want to change the overall stats gathering for the entire schema?

Thanks for your help.


Tom Kyte
May 24, 2010 - 7:43 pm UTC

how did this lead to "concurrency issues"?????

define what you mean, give lots of detail.

what column datatype are you using for your guid.

Reg: histogram deletion - followup to above question

A reader, May 25, 2010 - 6:05 am UTC

We had an issue 3 weeks back in production because that stats deletion job failed to run. I opened ADDM for that day and viewed the report. This is what I see for one SQL statement, which comes up as the top sql and references tab1.guidcol a lot. This is the rationale:

Waiting for event "cursor: pin S wait on X" in wait class
"Concurrency" accounted for 60% of the database time spent in
processing the SQL statement with SQL_ID "gfg29tnbf15nj".

When we were seeing these issues for the first time in prod, an SR was opened and based on the analysis of the support engineer the histograms on guidcol were asked to be deleted. Does that answer your question ?

The datatype for guidcol is char(36).

Thanks.

A reader, May 26, 2010 - 8:59 am UTC

Hi Tom - I haven't got a response to my question above yet. What do you think?
Tom Kyte
May 26, 2010 - 9:16 am UTC

We had an issue 3 weeks back in production because that stats deletion job failed to run. I opened
up ADDM from that day and viewed the report. This is what I see for one SQL which comes up as the
top sql which is referencing tab1.guidcol a lot. And this is the rationale

Waiting for event "cursor: pin S wait on X" in wait class
"Concurrency" accounted for 60% of the database time spent in
processing the SQL statement with SQL_ID "gfg29tnbf15nj".

When we were seeing these issues for the first time in prod, an SR was opened and based on the
analysis of the support engineer the histograms on guidcol were asked to be deleted. Does that
answer your question ?

The datatype for guidcol is char(36).

Thanks.


we only use the first few characters of a string when building statistics (histograms)

so, it is likely that the first 30 characters of your 36 character guid are very skewed (I think it is 30 characters, maybe 32 - but in any case <36)



... and based on the
analysis of the support engineer the histograms on guidcol were asked to be deleted. Does that
answer your question ?...

nope, I don't see the connection at all.
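As for the method_opt question itself - one commonly used approach to keep histograms off a single column, while still allowing them elsewhere, is to combine FOR clauses and give the excluded column SIZE 1 (one bucket = no histogram). A sketch using the TAB1/GUIDCOL names from the question - verify the combined-clause parsing on your exact version, as method_opt handling has varied across releases:

begin
   dbms_stats.gather_table_stats(
      ownname    => '<schema>',
      tabname    => 'TAB1',
      -- histograms where skew warrants them, but never on GUIDCOL
      method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY FOR COLUMNS SIZE 1 GUIDCOL' );
end;
/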

same query executed on two clone schemas have different execution plans

A reader, July 17, 2010 - 8:30 pm UTC

Hi Tom,

I cloned (using datapump) a source schema into a target schema (created the same way as the source schema); both schemas are in the same database, which is 10.2.0.4. After all the objects were created in the target schema by impdp (I included the parameter exclude=statistics in the par file so the stats would not be cloned during the import), I ran dbms_stats to collect statistics on all the objects in the target schema using the following format:
dbms_stats.gather_table_stats(ownname=>''||lcur2.owner||'',tabname=>''||lcur2.table_name||'',degree=>4,granularity=>'GLOBAL',estimate_percent=>40,cascade=>TRUE,force=>TRUE);

After finishing setting up the target, I cleaned up the source schema and re-imported all the objects using the same dmp files that I used to load the target schema. Once the import finished, I ran dbms_stats on the source schema using the same command I used for collecting the stats on the target schema. Once the stats finished, I tested the exact same query on the target and the source schemas and the explain plans are completely different - even though the tables are the same (structure, indexes, the stats, etc.), the data is the same, and they are in the same database. The only difference between these two queries is that the schemas owning the objects have different names, but they were set up the same and they are clones.
Could you please guide me on where I need to focus in order to troubleshoot this problem? Do you have any ideas why this is happening if everything is the same between these two schemas?

I would appreciate any help and guidance on this problem.

thanks.
Tom Kyte
July 19, 2010 - 1:59 pm UTC

compare their statistics, are they

a) the same
b) different

look at a 10053 trace from each - compare that output looking for major differences.


define "I cleaned up the source schema", I don't know how to clean a schema - it might mean "I truncated everything", it might mean "I dropped everything", it might mean "I deleted everything" - all of which would have different impacts on this.

same query executed on two clone schemas have different execution plans

A reader, July 19, 2010 - 11:51 pm UTC

Tom,
Thanks for answering my question.

I checked the statistics (I exported the stats from the data dictionary into the table I created) and did a minus on all the columns (with the exception of column D1) between the two statids where I have the statistics for each of the schemas, and 90% of the rows are different.
When I said clean up, I meant I dropped all the objects owned by the source and the target and reloaded both of them using the export I generated before the drop on the source schema. The weird thing here is that these two schemas are clones of each other (exact objects, exact data), and the stats were collected by executing the exact same command (dbms_stats.gather_table_stats with the same parameters and options) - how come the statistics exported are different?

When you said look at the 10053 trace from each, do you mean trace the queries?
What I did was a 10046 on both queries; the one that runs on the target executes right away, while the one on the source only has waits on db file sequential read against different objects, and after one hour I cancelled it. The tkprof did not show anything useful because the trc file only contained db file sequential read waits.

I will deeply appreciate any help you can give me on this problem.
Thanks a lot.
Tom Kyte
July 23, 2010 - 6:48 am UTC

how come the statistics exported are different?


a "for example" would be really nice don't you thing??? give me some statistics to look at.


search this site for 10053 to see the trace I asked you to use, it is an optimizer dump. But since 90% of the stats are different - we already know what is causing it - no need for that.

question datatypes

sat, December 08, 2010 - 12:22 pm UTC

Tom,

Is there any difference/advantage on performance point of view, if we declare a numeric data type with precision and without?

Example :

create table emp (empno number);

vs
create table emp (empno number(10));
thank you
Tom Kyte
December 08, 2010 - 12:46 pm UTC

would it matter? The two definitions above are night and day different from each other - the driving factor should be "do I need a number with as much precision and scale as possible or a number(10) - that is ten digits of precision and no decimals"

ops$tkyte%ORA11GR2> create table t ( x number, y number(10) );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> insert into t values ( 1.2, 1.2 );

1 row created.

ops$tkyte%ORA11GR2> insert into t values ( 1.2, 1.8 );

1 row created.

ops$tkyte%ORA11GR2> select * from  t;

         X          Y
---------- ----------
       1.2          1
       1.2          2


they are (in my opinion) as different as DATE and VARCHAR2.


They both will have edits applied to them - they both will do about the same amount of work.

but... if you do this:

ops$ora11gr2%ORA11GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$ora11gr2%ORA11GR2> begin
  2          for i in 1 .. 100000
  3          loop
  4                  insert into t (x) values ( i+0.5 );
  5                  insert into t (y) values ( i+0.5 );
  6                  insert into t (x) values ( i );
  7                  insert into t (y) values ( i );
  8          end loop;
  9  end;
 10  /



you might find this:

INSERT INTO T (X) VALUES ( :B1 +0.5 )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 100000      2.95       6.74          0        622     102479      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   100001      2.95       6.74          0        622     102479      100000

INSERT INTO T (Y) VALUES ( :B1 +0.5 )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 100000      3.17       7.05          0        622     101850      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   100001      3.17       7.05          0        622     101850      100000
********************************************************************************
INSERT INTO T (X) VALUES ( :B1 )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 100000      3.45       8.88          0       1137     156138      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   100001      3.45       8.88          0       1137     156138      100000

INSERT INTO T (Y) VALUES ( :B1 )

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 100000      3.20       7.00          0        622     101850      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   100001      3.20       7.00          0        622     101850      100000



which indicates to me "it is a wash - they are about the same"

Analysing table daily

Rajeshwaran, Jeyabal, February 19, 2011 - 12:14 pm UTC

Tom:

We have a table having 1700M records with AVG_ROW_LEN=60. Table is HASH Partitioned (8 partitions) on primary key. Every day nearly 15M-18M records gets inserted into this table.

desc xxxx;
Name                            Null     Type       
------------------------------  -------- ------------
AUDIT_KEY                  NOT NULL NUMBER(10) 
DX_KEY                    NOT NULL NUMBER(10) 
ENC_KEY                    NOT NULL NUMBER(10) 
CREATE_DT                       NOT NULL DATE       
DG_CD_KEY                     NOT NULL NUMBER(10) 
REC_ST_KEY                   NOT NULL NUMBER(10) 
REC_STU_KEY                  NOT NULL NUMBER(10)  
SOURCE_KEY                      NOT NULL NUMBER(10) 
COMMENT_KEY                              NUMBER(10) 
UPDATE_USER_KEY                          NUMBER(10) 
OUT_BATCH_KEY                         NUMBER(10) 
HIST_STATUS                              VARCHAR2(1) 
IN_BATCH_KEY                             NUMBER(10)

This table is heavily used in reporting queries. I am using this script to gather statistics / histograms on this table every day at midnight, and it runs for 18-20 hrs.
begin
 dbms_stats.gather_table_stats(
 ownname=>user,
 tabname=>'xxxx',
 estimate_percent=>100,
 method_opt=>'FOR ALL INDEXED COLUMNS SIZE 254',
 degree=>8);
end;
/

If I split this script into 8 (one for each partition), then it takes nearly 4-5 hrs to complete.

I know this table should be range partitioned on the CREATE_DT column (since report queries use that column heavily) with HASH sub-partitions on the primary key column; that would make it easy to gather stats just on the range partition created for that day, rather than doing a FULL TABLE stats gather.

Question
1) Is there any easier approach to gathering stats on this table, rather than spending 18-20 hrs daily? (we are on 10.2.0.4; by the middle of this year we are planning to move to 11g)
Tom Kyte
February 20, 2011 - 12:34 pm UTC

You can use incremental statistics in 11g - we'll gather only local statistics - not local PLUS global - and still maintain both local and global statistics (the globals are derived from the synopses).

and you should consider backing off of the compute. For such a large table a 1% sample is probably sufficient.
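that is, something along the lines of the following (a sketch - your call from above with the compute backed off to a 1% sample):

begin
 dbms_stats.gather_table_stats(
 ownname=>user,
 tabname=>'xxxx',
 estimate_percent=>1,
 method_opt=>'FOR ALL INDEXED COLUMNS SIZE 254',
 degree=>8);
end;
/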

Analysing table daily

Rajeshwaran, Jeyabal, February 21, 2011 - 12:29 am UTC

for such a large table a 1% sample is probably sufficient

Tom:

1) How do you say that a 1% sample is probably sufficient, and why NOT dbms_stats.auto_sample_size (Oracle 10g)?

2) Is it mentioned somewhere in the product documentation that a 1% sample is sufficient for large tables? If yes, can you share the link?

Tom Kyte
February 22, 2011 - 10:58 am UTC

1) you said you computed - you are not using the defaults. 1% of a really big table is a big sample - probably more than sufficient to get the details we need. That is why I said that.


2) experience and knowledge of basic statistics provide that information. You'll be sampling 17 million rows with a 1% sample - that would be sufficient to give someone an idea.

Analysing Table daily

Rajeshwaran, Jeyabal, February 22, 2011 - 11:20 am UTC

Tom:

1% of a really big table is a big sample

So instead of providing estimate_percent=>1, if I say estimate_percent=>dbms_stats.auto_sample_size, will Oracle arrive at this 1% sample size for me?

http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1036461
Tom Kyte
February 23, 2011 - 12:39 pm UTC

maybe - it depends. It is not defined what it will do; it will do what it feels is right (according to the algorithms implemented in your version, which can and will change from release to release).

And you should consider...

Dana, February 22, 2011 - 5:07 pm UTC

I think Tom was being quite clear when he said "...consider backing off...". Based on the description, it is a reasonable suggestion. Someone closer and more familiar with the processing that normally happens could give a more complete test of that consideration. So I think he's saying you've got lots of options. And you have the ability to determine which works best for you in your framework.

I might opt for smaller sample size and executing the index stats in parallel in separate jobs. But that's only a suggestion. I don't have enough information to give more than an opinion.

David Aldridge, February 25, 2011 - 2:11 am UTC

It's rather easy to run experiments to see how valid small estimation percentages might be.

Compare the timings and results of the following:

select 100000*count(*) from test sample(0.001);
select 10000*count(*) from test sample(0.01);
select 1000*count(*) from test sample(0.1);
select 100*count(*) from test sample(1);
select count(*) from test;



Statistics on remote table

Arvind, March 01, 2011 - 4:14 am UTC

Hello Tom,

I am accessing a remote table in our schema using a dblink. How can I gather statistics on that remote table from our schema?

Thanks,

Arvind
Tom Kyte
March 01, 2011 - 12:15 pm UTC

you do not, you collect statistics on the site that owns the table, it is an administrative operation the REMOTE SITE would perform.


technically you could:


dbms_stats.gather_table_stats@remote( user, 'T' );

but you would generally perform table maintenance on the site that owns the table, it is their data, their responsibility.

remote query

Arvind, March 02, 2011 - 12:46 am UTC

Hello Tom,

I am executing a query which accesses remote tables and it takes more than one minute to run, but when I execute the same query with a use_hash hint it takes 8 sec. Please see below for the explain plans of both queries. I am sorry, but due to database restrictions I cannot provide a tkprof. How can I avoid using the hint in this case?

Thanks,

Arvind

---------------------------------------------------------------------------------------------
--QUERY WITHOUT USE_HASH HINT
---------------------------------------------------------------------------------------------
  1  SELECT UNIQUE(TCONT.GT_CLIENT_ID) CSCO_ID, PCONT.CN_ID USER_ID, TCERT.GT_FIRST_NAME FNAME,
  2  TCERT.GT_LAST_NAME LNAME, LSCM.SCEM_ROLE_COURSE_ID EXCR_ID
  3      FROM
  4      pdb_admin.lpms_contact_vw@pdbdb PCONT,
  5      pdb_admin.pdb_contact_qual_vw@pdbdb PQUAL -- PDB tables
  6       , CERT_GT_CONTACT@tcmsdb TCONT,
  7       cert_view@tcmsdb TCERT              -- TCMS certification/contact tables
  8       , LPMS_TCMS_CERT_MAP LTCM,
  9       LPMS_SPEC_CERT_MAP LSCM,
 10       LPMS_SPEC_JOBROLE_MAP LSJM  -- Job Role Mapping Tables
 11      where PCONT.CN_ID = PQUAL.CN_ID(+)
 12      AND PQUAL.CN_QUAL_ID = TCONT.GT_CLIENT_ID
 13      AND TCONT.GT_CONTACT_ID = TCERT.GT_CONTACT_ID
 14      AND LSCM.SCEM_SRM_ID = LSJM.SRM_ID
 15      AND TCERT.GT_TEMPLATE_ANAME = LTCM.CERT_MAP_TCMS_ID AND LTCM.CERT_TCMS_ID = LSCM.SCEM_COURSE_ID
 16      AND LSCM.SCEM_COURSE_TCMS_FLAG = 'T' AND LSCM.SCEM_COURSE_TYPE = 'C' AND LSJM.SRM_SPEC_ID = 3 AND LSJM.SRM_JR_ID = 1
 17      AND TCERT.GT_CERT_STATUS = 'CERT'
 18      AND PQUAL.CN_QUAL_TYPE = 'CERT'
 19*     AND TO_CHAR(PCONT.CN_SITE_ID) IN ('100565')
SQL> /

Execution Plan
----------------------------------------------------------

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name                     | Rows  | Bytes | Cost (%CPU)| Inst   |IN-OUT|
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                          |     1 |   780 |    26  (12)|        |      |
|   1 |  HASH UNIQUE                    |                          |     1 |   780 |    26  (12)|        |      |
|*  2 |   HASH JOIN                     |                          |     1 |   754 |    21  (10)|        |      |
|   3 |    NESTED LOOPS                 |                          |     1 |   698 |    16   (7)|        |      |
|   4 |     NESTED LOOPS                |                          |     1 |   675 |    14   (8)|        |      |
|*  5 |      HASH JOIN                  |                          |     2 |  1272 |    12   (9)|        |      |
|*  6 |       HASH JOIN                 |                          |    10 |  5630 |     9  (12)|        |      |
|   7 |        REMOTE                   | CERT_VIEW                |     5 |   295 |     5   (0)| TCMSDB | R->S |
|   8 |        TABLE ACCESS FULL        | LPMS_TCMS_CERT_MAP       |    36 | 18144 |     3   (0)|        |      |
|*  9 |       TABLE ACCESS FULL         | LPMS_SPEC_CERT_MAP       |     7 |   511 |     3   (0)|        |      |
|* 10 |      TABLE ACCESS BY INDEX ROWID| LPMS_SPEC_JOBROLE_MAP    |     1 |    39 |     1   (0)|        |      |
|* 11 |       INDEX UNIQUE SCAN         | LPMS_SPEC_JOBROLE_MAP_PK |     1 |       |     0   (0)|        |      |
|  12 |     REMOTE                      | CERT_GT_CONTACT          |     1 |    23 |     2   (0)| TCMSDB | R->S |
|  13 |    REMOTE                       |                          |     5 |   280 |     4   (0)|  PDBDB | R->S |
------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("PQUAL"."CN_QUAL_ID"="TCONT"."GT_CLIENT_ID")
   5 - access("LTCM"."CERT_TCMS_ID"="LSCM"."SCEM_ROLE_COURSE_ID")
   6 - access("TCERT"."GT_TEMPLATE_ANAME"="LTCM"."CERT_MAP_TCMS_ID")
   9 - filter("LSCM"."SCEM_COURSE_TCMS_FLAG"='T' AND "LSCM"."SCEM_COURSE_TYPE"='C')
  10 - filter("LSJM"."SRM_SPEC_ID"=3 AND "LSJM"."SRM_JR_ID"=1)
  11 - access("LSCM"."SCEM_SRM_ID"="LSJM"."SRM_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   7 - SELECT "GT_CONTACT_ID","GT_FIRST_NAME","GT_LAST_NAME","GT_CERT_STATUS","GT_TEMPLATE_ANAME" FROM "CERT_VIEW" "TCERT" WHERE "GT_CERT_STATUS"='CERT' (accessing 'TCMSDB.CISCO.COM' )

  12 - SELECT "GT_CONTACT_ID","GT_CLIENT_ID" FROM "CERT_GT_CONTACT" "TCONT" WHERE "GT_CONTACT_ID"=:1 (accessing 'TCMSDB.CISCO.COM' )

  13 - SELECT "A1"."CN_ID","A1"."CN_QUAL_ID","A1"."CN_QUAL_TYPE","A2"."CN_ID","A2"."CN_ID","A2"."CN_SITE_ID" FROM "PDB_ADMIN"."PDB_CONTACT_QUAL_VW" "A1","PDB_ADMIN"."LPMS_CONTACT_VW" "A2" WHERE TO_CHAR("A2"."CN_SITE_ID")='100565' AND "A2"."CN_ID"="A1"."CN_ID" AND "A1"."CN_QUAL_TYPE"='CERT' (accessing 'PDBDB.CISCO.COM' )

--------------------------------------------------------------------------------------------
--QUERY WITH USE_HASH HINT
--------------------------------------------------------------------------------------------
SQL> SELECT /*+use_hash(tcont)*/ UNIQUE(TCONT.GT_CLIENT_ID) CSCO_ID, PCONT.CN_ID USER_ID, TCERT.GT_FIRST_NAME FNAME,
  2  TCERT.GT_LAST_NAME LNAME, LSCM.SCEM_ROLE_COURSE_ID EXCR_ID
  3      FROM
  4      pdb_admin.lpms_contact_vw@pdbdb PCONT,
  5      pdb_admin.pdb_contact_qual_vw@pdbdb PQUAL -- PDB tables
  6       , CERT_GT_CONTACT@tcmsdb TCONT,
  7       cert_view@tcmsdb TCERT              -- TCMS certification/contact tables
  8       , LPMS_TCMS_CERT_MAP LTCM,
  9       LPMS_SPEC_CERT_MAP LSCM,
 10       LPMS_SPEC_JOBROLE_MAP LSJM  -- Job Role Mapping Tables
 11      where PCONT.CN_ID = PQUAL.CN_ID(+)
 12      AND PQUAL.CN_QUAL_ID = TCONT.GT_CLIENT_ID
 13      AND TCONT.GT_CONTACT_ID = TCERT.GT_CONTACT_ID
 14      AND LSCM.SCEM_SRM_ID = LSJM.SRM_ID
 15      AND TCERT.GT_TEMPLATE_ANAME = LTCM.CERT_MAP_TCMS_ID AND LTCM.CERT_TCMS_ID = LSCM.SCEM_COURSE_ID -- OR LSCM.SCEM_ROLE_COURSE_ID IN (SELECT CERT_MAP_TCMS_ID FROM LPMS_TCMS_CERT_MAP WHERE CERT_TCMS_ID = TCERT.GT_TEMPLATE_ANAME))
 16      AND LSCM.SCEM_COURSE_TCMS_FLAG = 'T' AND LSCM.SCEM_COURSE_TYPE = 'C' AND LSJM.SRM_SPEC_ID = 3 AND LSJM.SRM_JR_ID = 1
 17      AND TCERT.GT_CERT_STATUS = 'CERT'
 18      AND PQUAL.CN_QUAL_TYPE = 'CERT'
 19      AND TO_CHAR(PCONT.CN_SITE_ID) IN ('100565');

Execution Plan
----------------------------------------------------------

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name                     | Rows  | Bytes | Cost (%CPU)| Inst   |IN-OUT|
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                          |     1 |   780 |  6941   (1)|        |      |
|   1 |  HASH UNIQUE                  |                          |     1 |   780 |  6941   (1)|        |      |
|   2 |   NESTED LOOPS                |                          |     1 |   780 |  6940   (1)|        |      |
|*  3 |    HASH JOIN                  |                          |     1 |   741 |  6939   (1)|        |      |
|*  4 |     HASH JOIN                 |                          |     1 |   668 |  6936   (1)|        |      |
|*  5 |      HASH JOIN                |                          |     1 |   164 |  6932   (1)|        |      |
|*  6 |       HASH JOIN               |                          |     2 |   210 |  6927   (1)|        |      |
|   7 |        REMOTE                 |                          |     3 |    78 |     4   (0)|  PDBDB | R->S |
|   8 |        REMOTE                 | CERT_GT_CONTACT          |  1361K|    29M|  6909   (1)| TCMSDB | R->S |
|   9 |       REMOTE                  | CERT_VIEW                |     5 |   295 |     5   (0)| TCMSDB | R->S |
|  10 |      TABLE ACCESS FULL        | LPMS_TCMS_CERT_MAP       |    36 | 18144 |     3   (0)|        |      |
|* 11 |     TABLE ACCESS FULL         | LPMS_SPEC_CERT_MAP       |     7 |   511 |     3   (0)|        |      |
|* 12 |    TABLE ACCESS BY INDEX ROWID| LPMS_SPEC_JOBROLE_MAP    |     1 |    39 |     1   (0)|        |      |
|* 13 |     INDEX UNIQUE SCAN         | LPMS_SPEC_JOBROLE_MAP_PK |     1 |       |     0   (0)|        |      |
----------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("LTCM"."CERT_TCMS_ID"="LSCM"."SCEM_ROLE_COURSE_ID")
   4 - access("TCERT"."GT_TEMPLATE_ANAME"="LTCM"."CERT_MAP_TCMS_ID")
   5 - access("TCONT"."GT_CONTACT_ID"="TCERT"."GT_CONTACT_ID")
   6 - access("PQUAL"."CN_QUAL_ID"="TCONT"."GT_CLIENT_ID")
  11 - filter("LSCM"."SCEM_COURSE_TCMS_FLAG"='T' AND "LSCM"."SCEM_COURSE_TYPE"='C')
  12 - filter("LSJM"."SRM_SPEC_ID"=3 AND "LSJM"."SRM_JR_ID"=1)
  13 - access("LSCM"."SCEM_SRM_ID"="LSJM"."SRM_ID")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   7 - SELECT "A1"."CN_ID","A1"."CN_ID","A1"."CN_SITE_ID","A2"."CN_ID","A2"."CN_QUAL_ID","A2"."CN_QUAL_TYPE" FROM "PDB_ADMIN"."LPMS_CONTACT_VW" "A1","PDB_ADMIN"."PDB_CONTACT_QUAL_VW" "A2" WHERE "A2"."CN_QUAL_TYPE"='CERT' AND "A1"."CN_ID"="A2"."CN_ID" AND TO_CHAR("A1"."CN_SITE_ID")='100565' (accessing 'PDBDB.CISCO.COM' )

   8 - SELECT /*+ USE_HASH ("TCONT") */ "GT_CONTACT_ID","GT_CLIENT_ID" FROM "CERT_GT_CONTACT" "TCONT" (accessing 'TCMSDB.CISCO.COM' )

   9 - SELECT "GT_CONTACT_ID","GT_FIRST_NAME","GT_LAST_NAME","GT_CERT_STATUS","GT_TEMPLATE_ANAME" FROM "CERT_VIEW" "TCERT" WHERE "GT_CERT_STATUS"='CERT' (accessing 'TCMSDB.CISCO.COM' )


Tom Kyte
March 02, 2011 - 7:33 am UTC

are those estimated cardinalities anywhere near "correct"? Use your knowledge of the data to answer that question. If they were correct - then the first query should be pretty darn fast as there is almost nothing to be done.
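one way to check that, assuming you can let the slow query run to completion at least once (dbms_xplan.display_cursor is available in 10g and up):

select /*+ gather_plan_statistics */ ... your query here ... ;
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

compare the E-Rows (estimated) column to A-Rows (actual) - the step where they diverge by orders of magnitude is where the statistics are misleading the optimizer.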

INCREMENTAL Statistics - 11GR2

Rajeshwaran, Jeyabal, April 10, 2011 - 2:25 pm UTC


http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_stats.htm#BEIHGEBD

rajesh@11GR2> create table t(
  2     x ,
  3     y ,
  4     z )
  5  partition by range(x)
  6  (
  7             partition p1 values less than (10),
  8             partition p_max values less than(maxvalue)
  9  )
 10  as
 11  select 1, rpad('*',2000,'*'), sysdate
 12  from all_objects;

Table created.

Elapsed: 00:00:08.87
rajesh@11GR2>
rajesh@11GR2> exec dbms_stats.gather_table_stats(user,'T');

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.89
rajesh@11GR2>
rajesh@11GR2> select table_name,partition_name,last_analyzed,num_rows,blocks
  2  from user_tab_statistics
  3  where table_name ='T';

TABLE_NAME                     PARTITION_NAME                 LAST_ANALYZED             NUM_ROWS     BLOCKS
------------------------------ ------------------------------ ----------------------- ---------- ----------
T                                                             10-apr-2011 02:04:36 pm      71480      24026
T                              P1                             10-apr-2011 02:04:36 pm      71480      24026
T                              P_MAX                          10-apr-2011 02:04:36 pm          0       0

Elapsed: 00:00:00.01
rajesh@11GR2> begin
  2     dbms_stats.set_table_prefs(
  3     user,
  4     'T',
  5     'INCREMENTAL',
  6     'TRUE');
  7
  8     dbms_stats.set_table_prefs(
  9     user,
 10     'T',
 11     'GRANULARITY', 'AUTO');
 12  end;
 13  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
rajesh@11GR2>
rajesh@11GR2> select
  2     dbms_stats.get_prefs('INCREMENTAL',user,'T') as incremental,
  3     dbms_stats.get_prefs('GRANULARITY',user,'T') as granularity
  4  from dual;

INCREMENTAL                    GRANULARITY
------------------------------ ------------------------------
TRUE                           AUTO

Elapsed: 00:00:00.01
rajesh@11GR2>
rajesh@11GR2> alter table t split partition p_max at(20) into (partition p_2,partition p_max);

Table altered.

Elapsed: 00:00:00.11
rajesh@11GR2>
rajesh@11GR2> insert into t(x,y,z)
  2  select 11,rpad('#',2000,'#'),sysdate
  3  from all_objects;

71481 rows created.

Elapsed: 00:00:12.98
rajesh@11GR2>
rajesh@11GR2> commit;

Commit complete.

Elapsed: 00:00:00.07
rajesh@11GR2>
rajesh@11GR2> begin
  2     dbms_stats.gather_table_stats(ownname=>user,
  3                     tabname=>'T',
  4                     granularity=>'AUTO',
  5                     estimate_percent=>dbms_stats.auto_sample_size
  6                     );
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:12.67
rajesh@11GR2>
rajesh@11GR2> select table_name,partition_name,last_analyzed,num_rows,blocks
  2  from user_tab_statistics
  3  where table_name ='T';

TABLE_NAME                     PARTITION_NAME                 LAST_ANALYZED             NUM_ROWS     BLOCKS
------------------------------ ------------------------------ ----------------------- ---------- ----------
T                                                             10-apr-2011 02:06:21 pm     142961      48403
T                              P1                             10-apr-2011 02:06:16 pm      71480      24026
T                              P_2                            10-apr-2011 02:06:09 pm      71481      24377
T                              P_MAX                          10-apr-2011 02:06:09 pm          0       0

Elapsed: 00:00:00.03
rajesh@11GR2>


Tom:

<quote>
Oracle will update the global table statistics by scanning only the partitions that have been changed instead of the entire table if the following conditions hold:

1) the INCREMENTAL value for the partitioned table is set to TRUE;
2) the PUBLISH value for the partitioned table is set to TRUE;
3) the user specifies AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT and AUTO for GRANULARITY when gathering statistics on the table.
</quote>

1) When I gathered stats after loading data into partition P_2, it looks like Oracle gathered stats on all partitions rather than just partition P_2 plus the global statistics. Why does this contradict the Oracle docs? Is there anything wrong in the test case?
Tom Kyte
April 13, 2011 - 8:14 am UTC



you have to gather incrementally against the entire table at least ONCE to get the synopsis created.

Your steps were:

a) create table
b) gather non-incremental stats
c) change table and enable incremental
d) gather incremental

(d) has to gather the entire table since we need to build the initial synopsis for the table. If you had:

a) create table and enabled incremental
b) gather incremental
c) change table
d) gather incremental

you would have seen what you "expected" to see.

ops$tkyte%ORA11GR2> create table t(
  2     x ,
  3     y ,
  4     z )
  5  partition by range(x)
  6  (
  7             partition p1 values less than (10),
  8             partition p_max values less than(maxvalue)
  9  )
 10  as
 11  select 1, rpad('*',2000,'*'), sysdate
 12  from all_objects;

Table created.

ops$tkyte%ORA11GR2> begin
  2     dbms_stats.set_table_prefs(
  3     user,
  4     'T',
  5     'INCREMENTAL',
  6     'TRUE');
  7     dbms_stats.set_table_prefs(
  8     user,
  9     'T',
 10     'GRANULARITY', 'AUTO');
 11  end;
 12  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats(user,'T');

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select table_name,partition_name,last_analyzed,num_rows,blocks
  2  from user_tab_statistics
  3  where table_name ='T';

TABLE_NAME                     PARTITION_NAME                 LAST_ANALYZED          NUM_ROWS     BLOCKS
------------------------------ ------------------------------ -------------------- ---------- ----------
T                                                             13-apr-2011 09:12:49      72107      24218
T                              P1                             13-apr-2011 09:12:48      72107      24218
T                              P_MAX                          13-apr-2011 09:12:48          0          0

ops$tkyte%ORA11GR2> select
  2     dbms_stats.get_prefs('INCREMENTAL',user,'T') as incremental,
  3     dbms_stats.get_prefs('GRANULARITY',user,'T') as granularity
  4  from dual;

INCREMENTA GRANULARIT
---------- ----------
TRUE       AUTO

ops$tkyte%ORA11GR2> alter table t split partition p_max at(20) into (partition p_2,partition p_max);

Table altered.

ops$tkyte%ORA11GR2> insert into t(x,y,z)
  2  select 11,rpad('#',2000,'#'),sysdate
  3  from all_objects;

72143 rows created.

ops$tkyte%ORA11GR2> commit;

Commit complete.

ops$tkyte%ORA11GR2> begin
  2     dbms_stats.gather_table_stats(ownname=>user,
  3                     tabname=>'T',
  4                     granularity=>'AUTO',
  5                     estimate_percent=>dbms_stats.auto_sample_size
  6                     );
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select table_name,partition_name,last_analyzed,num_rows,blocks
  2  from user_tab_statistics
  3  where table_name ='T';

TABLE_NAME                     PARTITION_NAME                 LAST_ANALYZED          NUM_ROWS     BLOCKS
------------------------------ ------------------------------ -------------------- ---------- ----------
T                                                             13-apr-2011 09:13:07     144250      48612
T                              P1                             13-apr-2011 09:12:48      72107      24218
T                              P_2                            13-apr-2011 09:13:06      72143      24394
T                              P_MAX                          13-apr-2011 09:13:06          0          0


Statistics on mlog tables

Suresh Ramachandran, June 10, 2011 - 6:35 am UTC

Hi Tom,

Good day :)
I have a query regarding statistics on mlog tables in a database.
When we analyze statistics on our databases, we found that the mlog tables are not being analyzed.
However, they contain at most 100 to 200 rows, since they are replicated every 2 or 3 minutes.
Could you please tell me how much value mlog table statistics have if the tables are not analyzed?
Also, could you please tell me whether they should be analyzed or whether that is really not necessary.
Kindly revert if you have any questions.
Thanks for your support.

Regards
Suresh.R


Tom Kyte
June 10, 2011 - 8:06 am UTC

in 10g and above, we'd sample them upon hard parsing a query - that should be sufficient in almost all cases.
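that sampling is dynamic sampling. A sketch of invoking it explicitly for a single query (MLOG$_T is a hypothetical materialized view log name; level 2 samples all unanalyzed tables):

select /*+ dynamic_sampling(m 2) */ count(*)
  from mlog$_t m;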

Statistics on mlog tables

Suresh Ramachandran, June 10, 2011 - 7:06 am UTC

Tom,

We are using Oracle 9i version(9.2.0.8) database and using Dbms_stats.gather_schema_stats.

Regards
Suresh.R
Tom Kyte
June 10, 2011 - 8:10 am UTC

by default in 9iR2, any unanalyzed table accessed using the cost based optimizer that does not have an index will be sampled at hard parse time. The optimizer will pick up basic statistics for this table then, since it is not indexed, by default.

When dependent cursors will be invalidated

A reader, June 29, 2011 - 8:08 am UTC

When collecting table statistics with cascade equal to true, when will existing dependent execution plans be invalidated? At the end? Or whenever Oracle sees there are new statistics generated for the table or an index?

Thanks.
Tom Kyte
June 29, 2011 - 12:22 pm UTC

in 9i and before, dependent cursors are invalidated at the end of the gathering of statistics on the table and dependent objects.

in 10g and later - we do a lazy invalidation in the background - bit by bit. Rather than invalidate all N queries at the same time - we invalidate a few at a time - over time - in the background. That is to lessen the impact of having to hard parse tons of SQL all at once after a statistics gather.

See the NO_INVALIDATE parameter for more information:
http://docs.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_stats.htm#i1036461
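a sketch of the parameter in use (T is a hypothetical table name; dbms_stats.auto_invalidate is the default, requesting the rolling invalidation described above):

exec dbms_stats.gather_table_stats( user, 'T', cascade => true, no_invalidate => dbms_stats.auto_invalidate );

passing FALSE instead invalidates the dependent cursors immediately; passing TRUE leaves them untouched.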

When dependent cursors will be invalidated

A reader, June 29, 2011 - 1:55 pm UTC

In 10g and above, when will the bit-by-bit invalidation start? Does it start after the entire statistics process finishes, or in the middle? I saw that USER_TABLES.LAST_ANALYZED is a different timestamp from USER_INDEXES.LAST_ANALYZED for the same table. If the dependent cursor is called after the table analysis is complete but not the index analysis, will Oracle still use the existing execution plan, or will it generate a new one based on the new table statistics and the old index statistics?

Thanks.
Tom Kyte
June 29, 2011 - 3:11 pm UTC

whenever we deem it time to start - sometime after the stats gathering call is complete.

How can I get rid of these?

AndyP, July 15, 2011 - 3:23 am UTC

Tom,

Somehow in my 11gR2 user_tab_statistics I have gained 131 entries with table_name similar to 'BIN$prALAdiHBGHgQAB/AQAT0Q==$0'. As you can imagine, I don't apparently own any objects with names of that nature, so:
a) do you have any insight into where these might have come from or what they relate to, and
b) how can I clear them out, given that the name is an invalid identifier and breaks queries?
select * from v$version
  2  /

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

select distinct granted_role from dba_role_privs where granted_role = 'DBA' or granted_role like '%STAT%';

GRANTED_ROLE
------------------------------
DBA
GATHER_SYSTEM_STATISTICS

select table_name from user_tab_statistics;

TABLE_NAME
------------------------------
ACTIVITY
BIN$lWbf0DrM/J7gQAB/AQBE/A==$0
BIN$lbfpdkGhrPLgQAB/AQBpaw==$0
BIN$lfaQWHTgmAfgQAB/AQBd2w==$0
etc

exec dbms_stats.delete_table_stats(user,'BIN$prALAdiHBGHgQAB/AQAT0Q==$0')
BEGIN dbms_stats.delete_table_stats(user,'BIN$prALAdiHBGHgQAB/AQAT0Q==$0'); END;

*
ERROR at line 1:
ORA-20001: BIN$PRALADIHBGHGQAB/AQAT0Q==$0 is an invalid identifier

exec dbms_stats.delete_schema_stats(user)

PL/SQL procedure successfully completed.


but the stats are still present. Maybe I have misunderstood the expected effect of these procedures. Could you advise?
Tom Kyte
July 18, 2011 - 8:57 am UTC

a) they are recyclebin objects


http://docs.oracle.com/docs/cd/E11882_01/server.112/e17120/tables011.htm#ADMIN11679



b) you can use quoted identifiers


ops$tkyte%ORA11GR2> exec dbms_stats.delete_table_stats( user, '"BIN$qFlAWG3BtejgQAB/AQAFpA==$0"' );

PL/SQL procedure successfully completed.


or you can purge your recyclebin if you want to get rid of these dropped tables forever.
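for example (this permanently removes all of your dropped objects):

SQL> purge recyclebin;

Recyclebin purged.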


So something else is breaking my query attempt

AndyP, July 19, 2011 - 8:57 am UTC

That's excellent, thanks Tom - I hadn't come across the recycle bin previously.

The original question arose because I was getting errors about invalid identifiers while trying to query USER_TAB_STATISTICS in a pivot format:
select * from v$version where banner like 'Oracle%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

select table_name,colval from user_tab_statistics
unpivot(colval for columns in(
num_rows,blocks,empty_blocks,avg_space,chain_cnt,avg_row_len,sample_size,last_anal
))
/

select table_name,colval from user_tab_statistics
*
ERROR at line 1:
ORA-00904: "USER_TAB_STATISTICS"."STALE_STATS": invalid identifier

and I also tried using your printtbl format as:
@printtbl 'select table_name,num_rows,blocks,empty_blocks,avg_space,chain_cnt,avg_row_len,sample_size,last_anal from user_tab_statistics'

old  14:     dbms_sql.parse(  l_theCursor, replace( '&1', '"', ''''), dbms_sql.native );
new  14:     dbms_sql.parse(  l_theCursor, replace( 'select table_name,num_rows,blocks,empty_blocks,avg_space,chain_cnt,avg_row_len,sample_size,last_anal from user_tab_statistics', '"', ''''), dbms_sql.native );
declare
*
ERROR at line 1:
ORA-00904: "LAST_ANAL": invalid identifier
ORA-06512: at line 31

select count(*) from user_recyclebin;

  COUNT(*)
----------
         0


I have purged my recycle bin and no longer see those generated table_names in user_tab_statistics, but I still get the errors, so presumably I jumped to the wrong conclusion as to what was responsible. Can you see what's happening here?

Tom Kyte
July 19, 2011 - 10:28 am UTC

misleading error....

last_anal should be last_analyzed.

and you need to have them be the same type:


ops$tkyte%ORA11GR2> select *
  2  from
  3  (
  4  select table_name,
  5  to_char(num_rows) num_rows,
  6  to_char(blocks) blocks,
  7  to_char(empty_blocks) empty_blocks,
  8  to_char(avg_space) avg_space,
  9  to_char(chain_cnt) chain_cnt,
 10  to_char(avg_row_len) avg_row_len,
 11  to_char(sample_size) sample_size,
 12  to_char(last_analyzed,'dd-mon-yyyy hh24:mi:ss') last_analyzed
 13  from user_tab_statistics
 14  )
 15  unpivot(colval for columns in(
 16  num_rows,
 17  blocks,
 18  empty_blocks,
 19  avg_space,
 20  chain_cnt,
 21  avg_row_len,
 22  sample_size,
 23  last_analyzed
 24  ) )
 25  /
TABLE_NAME                     COLUMNS       COLVAL
------------------------------ ------------- ----------------------------------------
BIN$qG57duT1wkngQAB/AQAbSQ==$0 NUM_ROWS      44
BIN$qG57duT1wkngQAB/AQAbSQ==$0 BLOCKS        4
BIN$qG57duT1wkngQAB/AQAbSQ==$0 EMPTY_BLOCKS  0
BIN$qG57duT1wkngQAB/AQAbSQ==$0 AVG_SPACE     0
BIN$qG57duT1wkngQAB/AQAbSQ==$0 CHAIN_CNT     0
BIN$qG57duT1wkngQAB/AQAbSQ==$0 AVG_ROW_LEN   19
BIN$qG57duT1wkngQAB/AQAbSQ==$0 SAMPLE_SIZE   44
BIN$qG57duT1wkngQAB/AQAbSQ==$0 LAST_ANALYZED 19-jul-2011 11:07:43

8 rows selected.



it should not have errored out on "stale_stats", though, but on "last_anal".

Ah - embarrassing but very helpful

AndyP, July 20, 2011 - 2:20 am UTC

Thank you very much Tom. A silly cut'n'paste error led me down completely the wrong path there, but I have gained several very useful insights as a result of your solving it for me, so needless to say I am very appreciative.

DBMS_STATS scheduled to run daily internally? Why isn't the index used?

Alan, July 22, 2011 - 1:12 am UTC

Hi tom,

I am on 10g.
Does Oracle periodically run DBMS_STATS?

I am currently having this issue.

I have this table X.
It is a "live" table with frequent inserts and deletes, and there is an index on the user_id column.

I have a sql
SELECT COUNT(*) FROM X WHERE USER_ID='jadtan' AND TYPE='SMS' AND BOX_TYPE='IN';

1) It used to use the index.
2) After some time, it became a FTS.
3) After I ran dbms_stats again, it began to use the index again.

This has been happening for quite some time already.

---------------------------------------------

q1) why would oracle decide to use a FTS when it was using the index originally?

q2) why would a manual dbms_stats revert it back to using the index, while the daily scheduled GATHER_STATS_JOB doesn't do so?

Regards,
Noob


Tom Kyte
July 22, 2011 - 2:24 pm UTC

does 10g run dbms_stats?

it depends, there is an optional job that can be run to gather stats during your defined maintenance window - yes. It might be running.


q1) if we thought the table was really small - we could easily find a full scan to be the right choice. If we noticed that many of the rows were matching the predicate - we could easily find a full scan to be the right choice.

q2) perhaps when the daily gather stats job runs, the table is very small - leading us to a full scan. Or maybe, when the daily gather job runs, the where clause is returning many of the rows - leading us to a full scan.

But when you run dbms_stats - in the middle of the day - the table is large and the where clause returns few rows.




Here is my advice:

a) first and foremost, stop running the query. There will be NO FASTER way to run this query than to NOT RUN IT. Why are you running this query? Probably for some logic like this:


select count(*)
if the count > 0
then
process the data
end if;


why not just

process the data

instead????
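to make that concrete - a sketch against the table X from above (process_row is a hypothetical procedure standing in for your real work):

declare
   l_cnt number;
begin
   -- the anti-pattern: count first, then process
   select count(*) into l_cnt
     from x
    where user_id = 'jadtan' and type = 'SMS' and box_type = 'IN';
   if l_cnt > 0 then
      for r in ( select * from x
                  where user_id = 'jadtan' and type = 'SMS' and box_type = 'IN' )
      loop
         process_row( r.user_id );
      end loop;
   end if;

   -- better: just process the data; if there are no rows,
   -- the loop body simply never runs - no count(*) needed
   for r in ( select * from x
               where user_id = 'jadtan' and type = 'SMS' and box_type = 'IN' )
   loop
      process_row( r.user_id );
   end loop;
end;
/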

TEMP issue during stats gather

Manoj Kaparwan, September 02, 2011 - 1:32 am UTC

Hi Tom,

Thanks for your time.


The below job has been running successfully

exec dbms_stats.gather_schema_stats(ownname=>'ABC', degree=>2, cascade=>true, options=>'gather');

except yesterday (when we witnessed the below error)

ERROR at line 1:
ORA-12801: error signaled in parallel query server P000, instance
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP
ORA-06512: at "SYS.DBMS_STATS", line 10502
ORA-06512: at "SYS.DBMS_STATS", line 10996
ORA-06512: at "SYS.DBMS_STATS", line 11183
ORA-06512: at "SYS.DBMS_STATS", line 11237
ORA-06512: at "SYS.DBMS_STATS", line 11214
ORA-06512: at line 1



It was suggested to split the table stats gather and the index stats gather
(i.e. run the index stats gathering during low-peak hrs, hence minimising the chances of TEMP running out of space)



table stats gather using ---> exec dbms_stats.gather_schema_stats(ownname=>'SA', degree=>2, cascade=>FALSE, options=>'gather');
and
INDEX stats gather using --> dbms_stats.gather_index_stats




I have a few questions here

a) Is it not good practice to gather the table and index stats at two different points in time?
In the case of a high-transaction table, this might create a big difference in terms of the data available to the optimizer.

b) Does gather_stats do anything differently when it gathers index stats compared to table stats?
I feel it does the same work (no index rebuild/creation happens when stats are gathered),
so there is a chance that we might run out of TEMP space during the table stats gather as well.

c) what activity (at a low level) consumes the TEMP tablespace during a stats gather?




Tom Kyte
September 02, 2011 - 8:40 am UTC

why not just increase temp to satisfy your needs? Why does no one do that? It would be the *first* thing I would do.


it won't matter - we gather them one after the other anyway; they are already done at different times.

In a high-transaction table, it probably won't make a bit of difference - if the table is at a steady state size. You'll have about the same number of rows. You'll have about the same skew of data. You'll have about - well - the same. It would only matter if you were adding millions of rows per hour and never deleting - but then gathering statistics would be the least of your problems (as soon as you gathered - you would be stale, wouldn't you...)



sorting, hashing, aggregating, processing in general consumes temp space. Just allocate more - really - it could be that easy.
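to be concrete, "just allocate more" is a one-liner (the file names and sizes here are hypothetical - use whatever fits your storage):

alter tablespace temp add tempfile '/u01/oradata/ORCL/temp02.dbf' size 10g;

or resize an existing tempfile:

alter database tempfile '/u01/oradata/ORCL/temp01.dbf' resize 20g;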

method_opt

A reader, September 21, 2011 - 9:06 am UTC

Tom,

Is there any difference between "method_opt=>'FOR ALL COLUMNS SIZE 254'" and "method_opt=>'FOR COLUMNS SIZE 254'" when using dbms_stats.gather_table_stats in 10.2.0.3.0?

Thanks.
Tom Kyte
September 21, 2011 - 9:47 am UTC

yes, 'for columns size 254' is just like 'for columns /* none of them */ size 254'

ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', method_opt=>'for columns size 254' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select column_name, count(*) from user_tab_histograms where table_name = 'T' group by column_name order by 1;

no rows selected

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', method_opt=>'for ALL columns size 254' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select column_name, count(*) from user_tab_histograms where table_name = 'T' group by column_name order by 1;

COLUMN_NAME            COUNT(*)
-------------------- ----------
CREATED                     120
DATA_OBJECT_ID              255
GENERATED                     2
LAST_DDL_TIME               120
OBJECT_ID                   255
OBJECT_NAME                 255
OBJECT_TYPE                  28
OWNER                        29
SECONDARY                     2
STATUS                        2
SUBOBJECT_NAME              136
TEMPORARY                     2
TIMESTAMP                   121

13 rows selected.

estimate stats

san4479, December 27, 2011 - 8:06 am UTC

What will happen if we gather stats :-
first with 100% estimate
and next with 30% estimate.

Assuming that there are no additions/deletions/updates, does the re-run of stats at 30% degrade the quality of the stats collected earlier at 100%?
Tom Kyte
December 27, 2011 - 8:55 am UTC

it will overwrite 100% statistics with a 30% sample. It may have no effect whatsoever, it might cause some plans to change. It depends on what the numbers end up being ultimately.

estimate percent

san4479, December 28, 2011 - 12:42 am UTC

so we would end up deteriorating the stats, as the estimate percent is now less than the previous one, assuming no changes to the table? right?
Tom Kyte
December 29, 2011 - 10:47 am UTC

A smaller estimate percent might result in just as good statistics, but in general - it would not be a good idea to take freshly collected statistics and overwrite them with a smaller estimate percent.

If on the other hand, you were overwriting stale statistics - it would likely be a good thing.

Find old stats name

Vinodh, January 11, 2012 - 3:10 pm UTC

Hi,

We export stats every week like
EXEC DBMS_STATS.EXPORT_TABLE_STATS(OWNNAME => 'IKEA', TABNAME => 'T_C_ACC_STATUS', STATTAB =>'T_C_ACC_STATUS_311211',CASCADE => TRUE, STATOWN =>'IKEA');
We have more than 600 tables. I want to delete some old statistics.
I tried to find when the statistics behind a given name (for example T_C_ACC_STATUS_311211) were gathered, but I am unable to find it.

Is there any dba view?

Thanks & Regards,
VN


DBMS_STATS.SET_TABLE_STATS

asktom fan, January 17, 2012 - 4:50 pm UTC

Hi Tom,

In 11gR2, if I fake the stats for a partition like the following, will this invalidate only queries accessing partition P1 of table T1, or will it invalidate all queries accessing table T1, regardless of which partition?

begin
DBMS_STATS.SET_TABLE_STATS (
ownname => 'TEST',
tabname => 'T1',
partname => 'P1',
numrows => 100000000,
numblks => 100000,
no_invalidate => FALSE);
end;


Thanks for all the great work!

Tom Kyte
January 18, 2012 - 6:57 am UTC


it'll get rid of all of them. If you do something like:

CREATE TABLE t
(
  dt  date,
  x   int,
  y   varchar2(30)
)
PARTITION BY RANGE (dt)
(
  PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
  PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
  PARTITION junk VALUES LESS THAN (MAXVALUE)
)
/

insert into t select to_date( '12-mar-2003','dd-mon-yyyy')+mod(rownum,3), user_id, username from all_users;

exec dbms_stats.gather_table_stats( user, 'T' );

select num_rows from user_tables where table_name = 'T';
select partition_name, num_rows from user_tab_partitions where table_name = 'T';

@trace
select /* look for me */ count(*) from t;
select /* look for me */ count(*) from t partition(part1);
select /* look for me */ count(*) from t partition(part2);
exec dbms_stats.set_table_stats( user, 'T', partname => 'PART1', numrows => 1000000, numblks => 10000, no_invalidate => FALSE );
select /* look for me */ count(*) from t;
select /* look for me */ count(*) from t partition(part1);
select /* look for me */ count(*) from t partition(part2);
pause
@tk "aggregate=no"


(@trace just enables trace, @tk is below)


column trace new_val TRACE

select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  from v$process a, v$session b, v$parameter c, v$instance d
 where a.addr = b.paddr
   and b.audsid = userenv('sessionid')
   and c.name = 'user_dump_dest'
/

disconnect
!tkprof &TRACE ./tk.prf &1
connect /
edit tk.prf


you would find in the tkprof report that the sql is hard parsed both times, all of it.

select /* look for me */ count(*) from t

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 629
Number of plan statistics captured: 1
********************************************************************************
select /* look for me */ count(*) from t partition(part1)

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 629
Number of plan statistics captured: 1
********************************************************************************
select /* look for me */ count(*) from t partition(part2)

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 629
Number of plan statistics captured: 1
********************************************************************************
select /* look for me */ count(*) from t

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 629
Number of plan statistics captured: 1
********************************************************************************
select /* look for me */ count(*) from t partition(part1)

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 629
Number of plan statistics captured: 1
********************************************************************************
select /* look for me */ count(*) from t partition(part2)

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 629
Number of plan statistics captured: 1



if you comment out the gather stats, the 'misses in library cache during parse' would be zero the second time around, indicating a soft parse.

all about stats

asktom fan, January 18, 2012 - 9:41 am UTC

Tom, thanks for the excellent answer as always.

Actually, before I tried to fake the stats for a partition, I wanted to purge the query accessing that partition out of the shared pool. To my surprise, the query could not be purged from the shared pool using dbms_shared_pool.purge. Then on metalink I found out this is a bug in Oracle 11.2.0.1, which is our version.

Bug 11829677: Cursors that had V$SQL.IS_SHAREABLE set to 'N' could not be purged from the shared pool. This has been corrected in 11.2.0.3 and 12.1.

Then I thought to use the no_invalidate parameter to force a hard parse the next time the query is run, so that my faked new stats would influence the optimizer to pick a different plan. This trick worked for me.

I have two questions:

1. Is there any other workaround to purge a query out of the shared pool before we upgrade to 11.2.0.3?

2. To check the validity of a cursor, besides using tkprof, can we also use the OBJECT_STATUS column in V$SQLAREA? The Oracle document says this column has the following values:

VALID - Valid, authorized without errors
VALID_AUTH_ERROR - Valid, authorized with authorization errors
VALID_COMPILE_ERROR - Valid, authorized with compilation errors
VALID_UNAUTH - Valid, unauthorized
INVALID_UNAUTH - Invalid, unauthorized
INVALID - Invalid, unauthorized but keep the timestamp

The definition of each value seems too simple for me. Can you please elaborate a little bit more on what these values mean?

Tom Kyte
January 18, 2012 - 10:20 am UTC

1) not that I'm aware of.
2) no, they'll all be valid - they are syntactically correct and you have authorizations in place


incremental statistics gather

A Reader, May 24, 2012 - 5:51 am UTC

Hi Tom,

I have a composite partitioned (Hash and Range) table T with 22 locally partitioned indexes.

The stats gather takes around 55min, and 60% of the time is spent collecting the index statistics.

Now I have enabled incremental stats gathering and the time taken is around 49min, but the time taken for the index stats gather is still almost 58% of the total.

My question is: even though I enabled incremental stats, I don't see much difference in the time taken for the index stats gather. Is this normal behavior, or am I missing anything?
Tom Kyte
May 24, 2012 - 9:16 am UTC

I don't see much difference in the total time to gather - other than the fact you are using the new algorithms now that you have incremental stats on - so it is going a little faster to get a compute than it used to for an estimate.


There is not sufficient data here to analyze. How do you load? What is your process, exactly? You haven't really changed anything yet (55 minutes is the same as 49 minutes as far as I'm concerned). So, I need to know what you are doing.

Do you have to gather stats for most all partitions ? is this a warehouse/oltp system? give details.

A Reader

A Reader, May 25, 2012 - 1:33 am UTC

Hi Tom,

I have done the testing on my local system.
I initially gathered the stats using incremental(first time would take more time for incremental as synopsis would be generated) .
I inserted 1million rows randomly into one partition and then gathered the stats using incremental.
Then i disabled incremental stats and inserted 1million rows into one partition.
Now i gathered the stats without incremental.

I have observed that the index stats gather is taking the same time in both scenarios. Agreed that the table full scans are eliminated with incremental but indexes are taking the same time in both scenarios.

the below is the snippet for index from both trace files

select /*+ no_parallel_index(t, "INDEX_SALE_ID") dbms_stats
cursor_sharing_exact use_weak_name_resl dynamic_sampling(0) no_monitoring
no_substrb_pad no_expand index(t,"INDEX_SALE_ID") */ count(*) as nrw,
count(distinct sys_op_lbid(75465,'L',t.rowid)) as nlb,null as ndk,
sys_op_countchg(substrb(t.rowid,1,15),1) as clf
from
"ADITYA"."SALES_PARTITION" t where ("SALE_ID" is not null) and
(TBL$OR$IDX$PART$NUM("ADITYA"."SALES_PARTITION",0,3,0,"ROWID") = :objn)


Tom Kyte
May 25, 2012 - 9:00 am UTC

... I have observed that the index stats gather is taking the same time in both
scenarios. ...

seems reasonable to me? why wouldn't it be? The gathering of index statistics against a partition would be the same with or without the synopsis, since the synopsis is only generated against the table - it doesn't come into play with the index.


A Reader, May 26, 2012 - 10:41 am UTC

Hi Tom,

The scenarios are with and without incremental stats.

Scenario 1( With incremental enabled) :

Total Time Taken : 3200 sec
Time Taken for Indexes : 2100 sec
Time Taken for Tables : 1100 sec

Scenario 2 ( Without incremental enabled)
Total Time Taken : 3600 sec
Time Taken for Indexes : 2000 sec
Time Taken for Tables : 1600 sec

These timings are from the generated trace files.

So, as far as indexes are concerned, the stats gather on indexes does not depend on whether incremental is enabled. The time taken on indexes would be the same with or without incremental?
Tom Kyte
May 27, 2012 - 11:17 am UTC

give me the entire situation, table creates and all, and tell me WHAT is actually being gathered against in both situations.


if you ask me - unless you ran this 100 times over and over - the times are "just about the same" - meaning, it sounds like we were gathering stats against just about the same amount of data. Splitting hairs here.

incremental is just going to reduce the amount of data we are looking at. If you are comparing "local gather in 10g", with "incremental looking at the same amount of data in 11g", this looks perfectly OK.
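
For readers who want to try this themselves, incremental statistics are enabled per table via a table preference; a minimal sketch, assuming a partitioned table named T:

begin
    -- enable synopsis-based incremental statistics for T
    dbms_stats.set_table_prefs( user, 'T', 'INCREMENTAL', 'TRUE' );
    -- with AUTO granularity, subsequent gathers re-scan only changed partitions
    dbms_stats.gather_table_stats( user, 'T', granularity => 'AUTO' );
end;
/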

DBMS_STATS takes longer time to complete

ramki, September 06, 2012 - 12:24 pm UTC

I have a partitioned table and each partition holds around 50-55 million rows.
I ran the following command to analyze one partition of this huge table:

DBMS_STATS.GATHER_TABLE_STATS('WHLOAD','KW_SALESFACTS',PARTNAME => 'KWSALES_2012_07',DEGREE=>4,CASCADE=>FALSE,ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE);

It took more than 2 hours and still did not complete. The table is created with a degree of parallelism of 8. Looking at the wait events, it was on db file scattered read. Any reason why it takes so much time to get the stats?
Tom Kyte
September 10, 2012 - 7:58 pm UTC

how big is 50-55 million rows? 50MB, 50GB, 50TB? Row counts are so useless in so many cases.

how is your disk attached to your server? what throughput can you get? Let's say you have a 1Gb (gigabit) card - that is like 100MB/sec. So, 10 seconds per gigabyte, 1 minute per 6GB, 1 hour per 360GB (assuming your disks can deliver and assuming you have the CPU to process).

unless you flood it with lots of parallel - then you won't get that throughput, you'll spend a lot of network time with contention on the wire.

so, how big is your data, have you done the math to see how long it would take to just transfer the data from the storage to the computer? (hint: use dd to measure how fast IO "might" be, then use math to figure out how long you'll be waiting for IO)

dbms_stats

Tony, September 13, 2012 - 2:52 pm UTC

Hi Tom,

I have a pl/sql program which does the below

a) truncate table tabA
a) inserts 1 M rows in tabA
b) Do some processing based on the records in tabA
c) update a field in tabA based on the value in some other field - ie update tabA set tabA.status = 'Y' where tabA.fld1=<value fetched from some other table >

Do I need to gather statistics within the PL/SQL program after step b? If I don't gather statistics after step b, will the optimizer have correct statistics for the update statement?

Thank you



Tom Kyte
September 14, 2012 - 6:51 pm UTC

you have two step a's which makes this somewhat confusing...

You might need a stats gather after the second (a) IF the statistics for the newly inserted rows differ dramatically from the rows before the first (a) took place.

tell me this - after the truncate and insert - are the statistics you had representative of the data in that table???

Tony, September 15, 2012 - 7:37 am UTC

Hi Tom,

Sorry for the confusion. The steps should have been a), b), c), d) in order. So the second a) should be b)

Step d) (the update statement) took a long time, and when I checked user_tables, num_rows was 0 for the table. Is it OK to call dbms_stats inside the PL/SQL program before the update? I usually call dbms_stats.gather_table_stats as a script before running the application.

Thank you
Tom Kyte
September 16, 2012 - 4:07 am UTC

yes it would be ok to call dbms_stats, even better, consider calling dbms_stats.set_table_stats - you just loaded the table, you KNOW what the statistics are! You can just tell us.

or, you could gather stats on the table once it is loaded - lock those stats - and just use them forever (assuming the stats are representative of the data)

if num_rows is zero, that indicates someone gathered stats on the empty table and never gathered again...
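
A minimal sketch of both suggestions, assuming the table is named TABA and using placeholder values for the statistics (you would substitute the counts you know from your own load):

begin
    -- you just loaded ~1M rows, so simply tell the optimizer
    dbms_stats.set_table_stats( user, 'TABA',
                                numrows => 1000000,
                                numblks => 20000,    -- placeholder block count
                                avgrlen => 100 );    -- placeholder avg row length
end;
/

-- or: gather once against a representative load and freeze the result
exec dbms_stats.gather_table_stats( user, 'TABA' );
exec dbms_stats.lock_table_stats( user, 'TABA' );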


Is this a bug on oracle11gr2?

A reader, September 26, 2012 - 12:06 am UTC

SQL> create table t as select 1 id, a.* from dba_objects a;

Table created.

SQL> update t set id=2 where rownum=1;

1 row updated.

SQL> create index idx_t on t(id);

Index created.

SQL> exec dbms_stats.gather_table_stats(user,'t',cascade=>true);

PL/SQL procedure successfully completed.

SQL>  select * from t where id = 2;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 36263 |  3541K|   299   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 36263 |  3541K|   299   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=2)

This is the WRONG plan! I want ONE record (id=2) from a table that has nearly 70K rows - why does the CBO choose a full table scan? It is an expensive choice.
SQL> analyze table T compute statistics for table for all indexes for all indexed columns;
SQL> select * from t where id = 2;
Execution Plan
----------------------------------------------------------
Plan hash value: 1594971208

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |   103 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |   103 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_T |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------
Predicate Information (identified by operation id):

Tom Kyte
September 27, 2012 - 8:29 am UTC

no, this isn't a bug.

you created a table with skewed data and then used dbms_stats to gather stats, but didn't ask for any histograms - so you only gave the optimizer:

a) num rows in the table
b) ndv (number distinct values) for each column

if you had that, you would get the same estimated cardinality (i mean you personally, if that was all you knew, that would be your guess too)

and then you use analyze to get a histogram..... hmmmmm


you know, if you had just run dbms_stats again, you'd see we would build a histogram - but only because we now know you use ID in a where clause!!

see:
http://www.oracle.com/technetwork/issue-archive/2006/06-may/o36asktom-099107.html

section: "why does my plan change"

Is this a bug on oracle11gr2?

A reader, September 27, 2012 - 10:52 pm UTC

Tom,
I ran dbms_stats.gather_table_stats again, but the plan is still not correct.
SQL> exec dbms_stats.gather_table_stats(user,'T');
PL/SQL procedure successfully completed.
SQL> set autotrace trace exp;
SQL> select * from t where id=99;
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 36231 |   106K|    36   (3)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    | 36231 |   106K|    36   (3)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ID"=99)
SQL> select * from t where id=1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 36231 |   106K|    36   (3)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    | 36231 |   106K|    36   (3)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ID"=1)
SQL> set autotrace off;
SQL> exec dbms_stats.gather_table_stats(user,'T');
PL/SQL procedure successfully completed.
SQL> set autotrace trace exp;
SQL> select * from t where id=99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 36231 |   106K|    36   (3)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    | 36231 |   106K|    36   (3)| 00:00:01 |
--------------------------------------------------------------------------

Tom Kyte
September 28, 2012 - 6:53 am UTC

did someone change the defaults for dbms_stats? it should be:

ops$tkyte%ORA11GR2> select dbms_stats.get_prefs( 'method_opt' ) from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
----------------------------------------
FOR ALL COLUMNS SIZE AUTO




show all the steps from start to finish like this - you are all of a sudden using 1 and 99 this time around?

ops$tkyte%ORA11GR2> create table t as select 99 id, a.* from all_objects a;

Table created.

ops$tkyte%ORA11GR2> update t set id = 1 where rownum = 1;

1 row updated.

ops$tkyte%ORA11GR2> create index t_idx on t(id);

Index created.

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select * from t where id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 36411 |  3555K|   299   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 36411 |  3555K|   299   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=99)

ops$tkyte%ORA11GR2> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 36411 |  3555K|   299   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 36411 |  3555K|   299   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=1)

ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select * from t where id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 72801 |  7109K|   300   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 72801 |  7109K|   300   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=99)

ops$tkyte%ORA11GR2> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |    13 |  1300 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |    13 |  1300 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |    13 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=1)

ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> 

Is this a bug on oracle11gr2?

A reader, September 28, 2012 - 8:12 pm UTC

Tom,
How can I post a large code block?
It says 'maximum 1000 words'...
I can post just a part of the test you gave me.

Regards
Alan
Tom Kyte
September 29, 2012 - 8:05 am UTC

until I get something fixed here (probably after OOW), posts are limited - can you make it span more than one?

or better yet, just verify for us the method opt default?

Is this a bug on oracle11gr2?

A reader, September 29, 2012 - 8:44 am UTC

Tom,
I am stupid...
C:\Windows\system32>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Sep 29 08:55:27 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> conn test/test
Connected.
SQL> select dbms_stats.get_prefs( 'method_opt' ) from dual;

DBMS_STATS.GET_PREFS('METHOD_OPT')
--------------------------------------------------------------------------------
FOR ALL COLUMNS SIZE AUTO

SQL> drop table t purge;

Table dropped.

SQL> create table t as select 99 id, a.* from all_objects a;

Table created.

SQL> update t set id = 1 where rownum = 1;

1 row updated.

SQL> create index t_idx on t(id);

Index created.

SQL> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

A reader, September 29, 2012 - 8:52 am UTC

I installed the database by default and changed nothing.
continue....
SQL> set autotrace traceonly explain
SQL> select * from t where id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 35738 |  3490K|   295   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 35738 |  3490K|   295   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=99)

SQL> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 35738 |  3490K|   295   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 35738 |  3490K|   295   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=1)

SQL> set autotrace off;
SQL> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.


Is this a bug on oracle11gr2?

A reader, September 29, 2012 - 9:01 am UTC

SQL> set autotrace off;
SQL> set linesize 120;
SQL> col column_name for a20;
SQL> select table_Name,column_name,endpoint_number,endpoint_value from user_histograms where TABLE_NAME='T';

TABLE_NAME                     COLUMN_NAME          ENDPOINT_NUMBER ENDPOINT_VALUE
------------------------------ -------------------- --------------- --------------
T                              ID                              5468             99
T                              OWNER                              0     3.3913E+35
T                              OBJECT_NAME                        0     2.4504E+35
T                              SUBOBJECT_NAME                     0     1.8867E+35
T                              OBJECT_ID                          0              2
T                              DATA_OBJECT_ID                     0              0
T                              OBJECT_TYPE                        0     3.4943E+35
T                              CREATED                            0     2455289.55
T                              LAST_DDL_TIME                      0     2452549.53
T                              TIMESTAMP                          0     2.5558E+35
T                              STATUS                             0     4.4786E+35

TABLE_NAME                     COLUMN_NAME          ENDPOINT_NUMBER ENDPOINT_VALUE
------------------------------ -------------------- --------------- --------------
T                              TEMPORARY                          0     4.0500E+35
T                              GENERATED                          0     4.0500E+35
T                              SECONDARY                          0     4.0500E+35
T                              NAMESPACE                          0              1
T                              OWNER                              1     4.5831E+35
T                              OBJECT_NAME                        1     6.2963E+35
T                              SUBOBJECT_NAME                     1     4.5340E+35
T                              OBJECT_ID                          1          74586
T                              DATA_OBJECT_ID                     1          74586
T                              OBJECT_TYPE                        1     4.5849E+35
T                              CREATED                            1     2456200.37

TABLE_NAME                     COLUMN_NAME          ENDPOINT_NUMBER ENDPOINT_VALUE
------------------------------ -------------------- --------------- --------------
T                              LAST_DDL_TIME                      1     2456200.37
T                              TIMESTAMP                          1     2.6059E+35
T                              STATUS                             1     4.4786E+35
T                              TEMPORARY                          1     4.6211E+35
T                              GENERATED                          1     4.6211E+35
T                              SECONDARY                          1     4.6211E+35
T                              NAMESPACE                          1             64

29 rows selected.

Is this a bug on oracle11gr2?

A reader, September 29, 2012 - 9:03 am UTC

SQL> select column_name,density,num_buckets,histogram from user_tab_col_statistics where table_name='T';

COLUMN_NAME             DENSITY NUM_BUCKETS HISTOGRAM
-------------------- ---------- ----------- ---------------
ID                   7.0364E-06           1 FREQUENCY
OWNER                .033333333           1 NONE
OBJECT_NAME          .000023404           1 NONE
SUBOBJECT_NAME        .00952381           1 NONE
OBJECT_ID            .000013991           1 NONE
DATA_OBJECT_ID       .000149076           1 NONE
OBJECT_TYPE          .026315789           1 NONE
CREATED              .000719424           1 NONE
LAST_DDL_TIME        .000692521           1 NONE
TIMESTAMP            .000660939           1 NONE
STATUS                        1           1 NONE

COLUMN_NAME             DENSITY NUM_BUCKETS HISTOGRAM
-------------------- ---------- ----------- ---------------
TEMPORARY                    .5           1 NONE
GENERATED                    .5           1 NONE
SECONDARY                    .5           1 NONE
NAMESPACE            .058823529           1 NONE
EDITION_NAME                  0           0 NONE

16 rows selected.

SQL>

Is this a bug on oracle11gr2?

A reader, September 29, 2012 - 9:10 am UTC

SQL> set autotrace traceonly explain;
SQL> select * from t where id = 99;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 71469 |  6979K|   295   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 71469 |  6979K|   295   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=99)

SQL> select * from t where id = 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 35738 |  3490K|   295   (1)| 00:00:04 |
|*  1 |  TABLE ACCESS FULL| T    | 35738 |  3490K|   295   (1)| 00:00:04 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"=1)

Tom Kyte
October 08, 2012 - 2:55 pm UTC

I think I see what the issue is, when you gathered, it likely did not do a compute and 'missed' the singleton value (problem with extremely rare data). If you compute, it'll get it. In real life, this typically isn't an issue because you don't have just a single row usually.
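
A minimal sketch of that suggestion - a 100% sample (a compute) plus an explicit histogram request on the skewed column:

exec dbms_stats.gather_table_stats( user, 'T', estimate_percent => 100, method_opt => 'for columns id size 254' );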

Tom Please help

A reader, October 01, 2012 - 9:30 pm UTC

Hi Tom,

I am trying to implement Oracle incremental statistics gathering on a huge data warehouse. My problem is:

1. I don't have a working example and don't know how to verify it.

2. Even with partitioning, each incremental partition is so huge that I would need parallel statistics gathering.

Please help and guide with an example

A reader, October 10, 2012 - 9:49 pm UTC

The link does not work.
Tom Kyte
October 11, 2012 - 8:06 am UTC

sure it does. I just clicked it now - no problem. Please try again.

Is this a bug on oracle11gr2?

A reader, October 20, 2012 - 9:04 am UTC

Hi,Tom
if we had the same situation in real life, would we have to live with a wrong SQL plan?
Why does the CBO in 10g choose the correct plan?

Regards
Alan
Tom Kyte
October 23, 2012 - 11:55 am UTC

it won't always - it depends on the statistics it has and what those statistics tell it. if we have a bad plan, it is almost certainly due to an incorrect cardinality estimate. with each release of the database, more and more capabilities are added to the optimizer to attempt to get the correct estimated cardinality.

Is this a bug on oracle11gr2?

A reader, October 20, 2012 - 9:34 am UTC

Hi,Tom
I tried updating 10 rows and reproduced the test, and again got a wrong plan:
SQL> create table t1 as select object_id,object_name from dba_objects where 1=2;

Table created.


SQL> create index idx_t1_object_id on t1(object_id);

Index created.


SQL> insert into t1 select 1,object_name from dba_objects;

72593 rows created.

SQL> commit;

Commit complete.

SQL> update t1 set object_id=99 where rownum<10;

9 rows updated.

SQL> commit;

Commit complete.


SQL> exec dbms_stats.gather_table_stats(user,'t1',cascade=>true);

PL/SQL procedure successfully completed.

SQL> set autotrace trace exp;

SQL> select * from t1 where object_id=99;

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 36297 |   957K|   103   (1)| 00:00:02 |
|*  1 |  TABLE ACCESS FULL| T1   | 36297 |   957K|   103   (1)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_ID"=99)

SQL>

Tom, if we had such a situation, how should we gather the table's statistics?

Regards
Alan

Tom Kyte
October 23, 2012 - 11:58 am UTC

you have no histograms here, you have only high/low values, number of rows, number of distinct values.

either

a) gather stats again after running the query, we'll see you use object_id in a where clause and since it is skewed, it'll gather histograms

b) since you know you'll be doing this query and that the data is skewed (you are using your knowledge of the data and the usage of said data), explicitly gather histograms yourself the first time around.
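
Whichever route you take, you can verify afterwards that the histogram actually exists; a minimal check along these lines:

select column_name, num_buckets, histogram
  from user_tab_col_statistics
 where table_name = 'T1'
   and column_name = 'OBJECT_ID';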

dbms_stats.gather_table_stats performance

Andre, November 16, 2012 - 12:32 pm UTC

Hi Tom,

I have read quite a bit on using dbms_stats with partitioned tables, and found it to perform at an acceptable level on my laptop - an Intel i7 4-core running Oracle Linux 5 and Oracle 11gR2.

However, on the client's site the execution time of the exact same process is extremely poor.

It is a DW app - and the main FACT table is partitioned and then sub-partitioned, with a total of nearly 80,000 subpartitions.

The data content is approx 5,000 rows per sub-partition, and a new set - i.e. 1 partition with 300 sub-partitions - is added every week.

My laptop does the job in less than 80 sec -

But the client's PROD server - a Sun Star-Fire 880 with 4 CPUs and 25GB RAM - struggles for 2900 sec (48 mins) at midnight when there are no other active sessions. The same process under a somewhat busier load executed in over 4500 sec (75 mins). BTW, it is running Oracle 11.1.0.7 - NOT 11gR2!

I had spotted your comments on this package and a demo of how introducing degree => 8 brings GREAT improvement (sorry, but I cannot find that thread). So I tried all values for degree in the range of 2-8, but achieved very little - i.e. approx. 20% improvement (unlike what you had shown in one of your follow-ups).

I am not sure what - if anything - I can do to improve this, as gathering stats takes nearly 70% of the entire ETL elapsed processing time, and my job may get killed if it exceeds the allocated window of 4 hrs.

Any suggestions ...?

Thank you in advance
Andre
Tom Kyte
November 19, 2012 - 10:10 am UTC

... The data content is approx 5,000 rows per sub-partition ...

that seems *very* low doesn't it? what is the rationale?


take an awr snapshot before and after running the stats job (right before/after) and see what that might tell you (I am *not* asking for the awr report, I'm asking you to look at and analyze it to see if there isn't something obvious).


how are you gathering stats? are you using incrementals?

why are your partitions so teeny tiny?

dbms_stats performance

andre, November 19, 2012 - 12:17 pm UTC

Tom - Thank you

The DW is being accessed by many users via Business Objects, and each BO report retrieves the contents of exactly one sub-partition (between 3k and 10k rows) - the total across 300 sub-partitions is approx 1,800,000.

Other reports retrieve data from several partitions for the same relative sub-partition (same LIST Value).

I had run a few tests to compare and found this structure performed much better than one with partitions + local indexes... so I had opted for this architecture.

But I would be happy to hear from you some specific pros and cons ... if you could, please.

+

As far as gathering stats goes, I build an object list which contains schema_name, partition_name (one per weekly load) and a set of 300 sub-partitions, and I execute dbms_stats.gather_schema_stats driven by this object list.

I will try to follow your advice - by first scheduling a job overnight with AWR - as the daily process varies far too much depending on other activities.

Still - I am really puzzled by this huge performance gap between the Sun 880 and my laptop = 2900 : 80
=> i.e. a 36:1 ratio..!!

Also - I would like to hear your comments on an environment setup that would make the DEGREE factor for gathering stats optimal...?

Is 3 what it should be (for a 4-CPU workgroup server), or another value...?

Regards
Andre
Tom Kyte
November 19, 2012 - 3:10 pm UTC

I would suggest revisiting that approach.

full scan larger partitions - no indexes necessary.


Your laptop has nothing else going on and probably much faster CPUs (I stopped using my workgroup server to answer questions, it is so much slower than my laptop running a VM...).

But let's get numerate here. Get an awr from your laptop too to compare to as well - make sure you are comparing apples to apples.

good starting DOP typically is 2*cpu_count.
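
A minimal sketch of a gather call using that guidance - either a fixed degree, or letting the database pick via the AUTO_DEGREE constant (the table name here is a placeholder):

exec dbms_stats.gather_table_stats( user, 'FACT_TABLE', degree => dbms_stats.auto_degree );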

long-running dbms_stats - performance issue

Andre, November 20, 2012 - 9:02 am UTC

Tom,

Thank you.

I did the following:
(a) clone-copied one partition (17-Nov-12) into (02-JUL-16) by changing the partition date value
(b) executed a PLSQL block to:
... 1. Create AWR snap
... 2. gather_table_stats (for this partition)
... .. w/ granularity = 'ALL' = i.e. all 300 subs
... 3. Got another AWR snap
... 4. got the exec time with get_time = 3204 secs

The AWR report is not that revealing - unless I am missing something.

My stats job had spawned 3 SQL Analyze jobs - and all of them rank rather low in terms of CPU time or DB time or GETS or READS or other...

+++

I will run a similar job over the weekend and schedule a series of such jobs at different times - with hope to pick up very quiet time on this OLD workgroup server.

Then I will do a similar experiment on my laptop.

However - I would REALLY APPRECIATE IT if you could make some suggestions as to WHERE to look.

If this PROD server runs 30-40 times slower than my laptop then there are basically 3 things I would like to know:
(1) Is this the best that machine (w/ 11gR1.0.7) can do?
(2) Are there any possible extremely poor settings?
(3) What other jobs can have a drastic effect on dbms_stats?

+

I believe that dbms_stats does not depend much on I/O - but mainly on CPU and RAM - am I right?

Just to complete the picture - this workgroup server is running concurrently TWO physical databases (with the other being much less used - but still...)

Then there may be some UNIX jobs executing in the background - copy or zip or some other - difficult to assess; however, I could see any of those being the root cause.

So - how do I locate the root cause..?

Thank you again
Regards
Andre

Tom Kyte
November 20, 2012 - 9:09 am UTC

i would not be surprised if your laptop's CPUs were an order of magnitude faster than an old big server's.


(1) who knows. You are trying to compare old SPARC performance with brand new, state of the art, x86 - you cannot even compare the CPUs' times. Maybe find a program that is just CPU intensive as all get out and compile/run it on each machine. See if they take hugely different amounts of time.

(2) sure, memory being the biggest one. but that would pop out in the AWR report, you'd have seen tons of direct path read/writes due to insufficient PGA settings or huge db file sequential/scattered reads due to insufficient buffer cache settings.

(3) all of them - anything else going on on the machine would. Your laptop is undoubtedly an isolated thing - nothing else happening when you gather. Your server - maybe not.


dbms_stats will depend on IO - it has to read everything. And you have tons of teeny tiny partitions.



Maybe set up a benchmark - something you can run - that does a ton of queries and computations, and run it on both machines with sql tracing (with wait events) enabled and compare the tkprofs. See if your old server can do the simplest of things as fast as your laptop.
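
A minimal sketch of enabling that tracing for the current session, assuming you have execute on dbms_monitor:

exec dbms_monitor.session_trace_enable( waits => true, binds => false );
-- ... run the benchmark workload here, then:
exec dbms_monitor.session_trace_disable;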


old 880 vs new x86 Intel-7

Andre, November 21, 2012 - 4:03 am UTC

Many thanks Tom,

You have basically confirmed my thoughts as well as previous findings.

Actually - I had found that significant performance gap many weeks earlier - however, that was in the DW app area. I had tuned whatever I could and achieved very good results - but then dbms_stats won the top prize - and got to the top of the ranking.

So, I thought I should share this with you - as it is pure Oracle system package.

As mentioned earlier - I had tried to run stats with various settings for the DEGREE parameter, and the best I managed to achieve was to reduce the elapsed time by approx. 33%. You - on the other hand - posted a response somewhere demonstrating better than an order of magnitude reduction in elapsed execution time.

+

I will follow your advice and execute a series of tests, including gathering the stats on a single partition without any subs vs the current set-up (i.e. 1 partition w/ 300 subs), and compare the results.

BTW, the two reasons I adopted this approach are:
(1) There are many users who may concurrently wish to generate Business Objects reports - but each one of them is ONLY interested in a subset of data = i.e. the geography that is currently loaded into a subpartition within a partition identified by week#. So BO goes after this tiny (as you called it) segment of data (3k - 10k rows), reads ALL of it, joins with dimension tables, and returns the report within 15 secs or less. The same job with the previous structure (i.e. without subs) would take over 2 mins.
(2) BO has some internal limit of 500,000 rows = max to be returned. So when a series of SQL queries need to run and any one of them returns too many rows - then only PART result is provided = i.e. UNACCEPTABLE.
And I have next to nothing to say about how BO should be set up. So this is an added constraint.

Considering the above I have to let the status quo - be...

But - I WILL TEST the stats against two different structures, as well as other very basic jobs.

I already know that (forget the CPU) as far as I/O goes PROD environment is worse:
A very simple INSERT of approx. 7,100,000 rows = 6 tables total (SQL*Loader via external tables) takes 3500 secs -
- while the same process on my laptop (very slow disk, 160GB/5k rpm - NOT a SAN) is nearly 4 times faster!!!

Had I installed an SSD - I would have achieved an ORDER of magnitude better performance.

Anyway - thanks again Tom

Best regards
Andre


Tom Kyte
November 21, 2012 - 9:32 am UTC

... (2) BO has some internal limit of 500,000 rows = max to be returned. So when a
series of SQL queries need to run and any one of them returns too many rows -
then only PART result is provided = i.e. UNACCEPTABLE. ...

that should affect nothing, the number of rows returned by a query is not dependent on your partitioning scheme, it is dependent on the QUERY!!!



dbms_stats performance

Andre, November 22, 2012 - 7:14 am UTC

Hi Tom,

you said:
< ...
that should affect nothing, the number of rows returned by a query is not dependent on your partitioning scheme, it is dependent on the QUERY!!!
.. >

And I agree in principle.

However, I have no visibility into what BO generates, unless I run a trace, and this would require additional privileges that I have not been granted and would not get.

Anyway - suffice it to say that the main report, executed by any number of users (as many as 40 concurrently), had improved from approx 150 secs to 13-15.

My implementation of sub-partitioning most likely coincided with the BO guys' work...

And this is with even more data in the main FACT table now than what was 2 months ago.

My job now is to reduce the ETL elapsed processing time - which includes dbms_stats processes.

I will follow your advice and run some tests on the TEST machine that is very similar to PROD.

BTW - Oracle has a limit of 1024k partitions.
I thought that creating 300 sub-partitions for every week would not be excessive ..?

Also please keep in mind the following:
a. I had implemented those for a 4-year (until Dec-16)
b. Partitions are stored in dedicated tablespaces by quarter = total 20 (2012 - 2016)
c. Users will be running the reports against the current week (90%) + the last 6 weeks (10%) and sporadically for the entire year.
d. When they move into a new year - they may request BAU team to purge the old data leaving the last 5 quarters

So the most typical scenario right now is that several users access BO and request their reports, and the criteria passed are the current week (or the last 6) + geographic region.

With these two defined, BO accesses the Data Mart and retrieves all data from the corresponding sub-partition, or 6 of them = on average 5k or 30k rows (max 12k or 72k).

And this works even on this very old work group server pretty well.

My rationale behind sub-partitioning was to prune the data sets as required, and I had found 8-10 times better performance gained from that...


Thanks again.
Best regards
A






Tom Kyte
November 29, 2012 - 6:40 am UTC

I wrote:

... (2) BO has some internal limit of 500,000 rows = max to be returned. So when a
series of SQL queries need to run and any one of them returns too many rows -
then only PART result is provided = i.e. UNACCEPTABLE. ...

that should affect nothing, the number of rows returned by a query is not dependent on your partitioning scheme, it is dependent on the QUERY!!!


in reference to your statement. You said "limit of 500,000 rows = max to be returned"


and I said "so what, partitioning would have NO IMPACT ON THE NUMBER OF ROWS RETURNED"

does that make sense? your statement made no sense to me, that could not be part of your reasoning for partitioning, a partitioning scheme will never affect the number of rows returned.



Tom Kyte
November 29, 2012 - 6:52 am UTC

... I thought that creating 300 sub-partitions for every week would not be
excessive ..? ...


it isn't the number so much as the puny size I was commenting on. But the number will tend to affect you in a big way as well.

think about it. there is some amount of time to process a partition regardless of size for stats. Let's call that N. If you have an order of magnitude more partitions in implementation A than implementation B, you'll spend 10*N units of time in A but only N units of time in B.

Furthermore, if your users hit 100 partitions in implementation B, we'll have to load into the dictionary, cache and manage - 100 partitions. In A, 1,000 (more cache, more reads, more stuff to manage).

Just because you can do something doesn't always mean "it is a good good idea"

I was just questioning the use of such teeny tiny partitions; it didn't seem to make sense, but you seem to have reasonable reasons. Just be aware that some things will take longer - just because there are so many partitions to deal with.

Set_table_stats Gives error

Snehasish Das, December 03, 2012 - 1:46 am UTC

Hi Tom,

we each have our own schema, named after our employee ID, so each schema name begins with a number and is often created with double quotes (").

I am facing issues with the below.

begin
  dbms_stats.set_table_stats(
    ownname => '"604560221"',
    tabname => 'X_MRD_CORMIS_CALL_DIRECTION',
    numrows => 10000,
    numblks => 100,
    avgrlen => 80);
end;
/

It gives "ORA-20001: 604560221 is an invalid identifier".
We use the same syntax (OWNNAME=>'"604560221"') for gather_table_stats, but set_table_stats is failing.
I tried with OWNNAME=>user and it is still not working.

Can you please let me know a way to make set_table_stats work?

Regards,
Snehasish Das.
Tom Kyte
December 03, 2012 - 9:00 am UTC

I'll have to refer you to support, this looks like a bug no one has ever encountered before. I do not see a way to work around it.

dbms_stats issues

Andre, December 04, 2012 - 6:59 am UTC

Hi Tom,

Thank you for your comments on teeny-tiny subpartitions.
Sure - I do accept your statement that
>> Just because there is a feature it does not mean that it should be used <<

OK then - are there any rules as to the minimum size?
If 10k rows is too tiny, then what would be a reasonable starting size for a sub-partition..?
100,000 rows, 1 mil, 10 mil...?

What steps should be taken to evaluate the pros derived from pruning vs the cons = i.e. penalty from overhead..?

Is this experimental - and if so what would be your guidance as to how to go about optimizing this..?

+

Another issue:

DBMS_STATS.gather_table_stats on a simple partitioned table (size 500k - 1 mil rows [BTW, no subs here]) sometimes executes OK, but last night it failed.

I am not able to reproduce this - it either happens or it doesn't - most likely caused by some other (unrelated) processes on the database...?

The trace file snippet - see below.

Is there any way to deal with this..?
(a) trap an error can be done with PLSQL Exception
... but then what ..?
(b) detect pro-actively a root cause and alert ..?
(c) run a DELETE_table_stats procedure before GATHER..?
(d) other approach..?

Many thanks in advance
Andre


----- START DDE Action: 'ORA_12751_DUMP' (Sync) -----
CPU time exceeded 300 seconds
Time limit violation detected at:
<-ksedsts()+1228<-kspol_12751_dump()+156<-dbgdaExecuteAction()+344<-dbgerRunAction()+88<-dbgerRunActions()+1380<-dbgexProcessError()+992<-dbgePostErrorKGE()+1128<-kgeade()+968<-kgesem()+64<-OCIKSEC()+208<-ktsaasapol()+1224<-kewrpbcb_purge_bycbf()+1052<-kewrpdc_periodic_cleanup()+20


Tom Kyte
December 14, 2012 - 1:32 pm UTC

If 10k rows is too tiny then what would be the reasonable starting size of a
sub-partition ..?


rows is wrong, size is more like it. You'd want to be dealing with a sizeable chunk of data (gigabytes) at a time.

While you can create thousands and thousands of partitions - you need to have a *really* good reason for it.

think about the dictionary cache needs - each segment would be up there as they are queried. If you have orders of magnitude more partitions in one scheme than another - then you'll have orders of magnitude more stuff to read into the cache, manage in the cache, just "cache" (memory).

I'd try a partitioning scheme that wasn't so - partition intensive - benchmark it.


as for the exceeding cpu - could it be that there is a massive amount of modifications taking place to the table in question when the stats are being gathered?

dbms_stats - obj_filter_list

Andre, December 11, 2012 - 10:16 am UTC

Hi Tom,

I had shared with you some other issues with dbms_stats.

Now - there is another one.

I have opted for the filter = obj_filter_list
>> as provided with 11g1 and had written a PLSQL proc to build a list of all objects with empty stats and then process this list to a specific sub-set to produce what was desirable at any given stage of ETL.

Initially I believed that all worked fine (as the list of objects with STATS=EMPTY was actually what needed to be)

By accident I discovered an issue when I created a new partitioned table (with over 40 mil records) in the same schema.

Suddenly the dbms_stats had taken nearly 3 hours rather than 20 mins.

Upon examination of all tables - I had discovered that the new table had the stats gathered as well IN SPITE OF NOT being included in the obj_filter_list.

Documentation says that if one chooses GATHER AUTO, then most other parameters in dbms_stats.gather_schema_stats are IGNORED - but there is no mention of this when you opt for GATHER EMPTY or STALE.

So I ran tests:
(a) repeated the process with original list => stats done in 20 mins GATHER EMPTY
(b) ran some updates and made a couple of tables STALE - and these were excluded
(c) cloned the large 40mil row table - so the clone had now EMPTY stats and it got included ->> 3 hours process again

Oracle docs say that 'GATHER' is the default - but I think that would RE-gather ALL stats - wouldn't it...?
>> Not what I want - as this would take over 40 hours (which is actually how long it took a couple of months ago)

So - how can I exclude gathering stats for some tables - when obj_filter_list does not appear to do the job.

>>>>> ORACLE FORUMS <<<<<

I had spotted some comments people made that schema-level gather stats does not work correctly with the obj_filter_list, but the database level does.

Well, I would NOT like to risk a process like this only to find out that it would go after ALL other schemas even if none but my own is specified in the obj_filter_list.

Is there any solid verified case that you may know - so that I may adopt - rather than setting up test after test to see what works and what does NOT ...?

Thank you
Regards
Andre


Tom Kyte
December 17, 2012 - 3:10 pm UTC

do you have a reproducible test case, a simple two table schema for example, that demonstrates exactly what you did and shows what happened?

here is a fully worked up example using obj filter list:

https://blogs.oracle.com/optimizer/entry/how_do_i_use_concurrent
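
The pattern from that post, reduced to a minimal sketch (the schema and table names here are placeholders):

declare
    l_filter dbms_stats.objecttab := dbms_stats.objecttab();
begin
    l_filter.extend(2);
    l_filter(1).ownname := user;   l_filter(1).objname := 'T1';
    l_filter(2).ownname := user;   l_filter(2).objname := 'T2';
    dbms_stats.gather_schema_stats( ownname         => user,
                                    obj_filter_list => l_filter,
                                    options         => 'GATHER' );
end;
/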

dbms_stats + pmon + etc...

Andre, December 15, 2012 - 5:47 am UTC

Dear Tom

Thank you for your response posted Dec-14.

I will follow your suggestions and bench-mark several scenarios on my Oracle-Linux + 11gR2 laptop where I have all privs and freedom.

Since my former post re exceeding CPU Other unrelated issues had come up to the surface (please see below)

+

Back to subpartitions - I do accept your general statement and I do not intend to argue the point - however I would like to give you a better picture:
(1) Overall - YES - there are many sub-partitions
(2) The size in MB = between 800 and 1000 MB per partition
... i.e. approx 3M per sub
(3) However, only a few are used in any day as the subs
... represent geo-regions - so approx 10-15 may be read
(4) Every week the time-window moves and the new subs
... get read
(5) Every Sat database goes down for maintenance
(6) DW has only two fact tables + 7 dimension tables
... and only one fact tbl is split into sub-partitions
... making the total number of objects in a range 17-25
... so I consider this very modest

As to bench-marking - I would greatly appreciate your guidance as to HOW to set it up...?

I could create (clone) many more transactions in the main fact table, thus increasing the size of the weekly partition from the current 1GB to 10GB, so sub-partition sizes would be approx 30MB

... or ... generate many more rows in a partition that would be stored in 10 subs rather than 300 and run tests against 10GB / 20GB / 40GB partition with 1-4GB per sub.

What should I compare then...?

+ + +

The other issue = exceeding CPU - popped up again a couple of days ago, and it is now clear that within this shared environment other apps had been causing it.

Alert log shows:
ORA-04030: out of process memory when trying to allocate 258072 bytes
... (many of those)
ORA-07445: exception encountered: core dump
...
ORA-00490: PSP process terminated with error
...
ORA-1092 : opiodr aborting process unknown ospid
...
ORA-1092 : opitsk aborting process
Instance terminated by PMON, pid = 3979

and - database crashed at that point.

Thanks
A

Tom Kyte
December 18, 2012 - 7:40 am UTC

... What should I compare then...?
...

you might start without subpartitions - or with 10 times fewer subpartitions.




for the other errors - please utilize support for a database crash; if someone didn't do a shutdown abort - that should not happen

obj_filter_list - NOT doing its job (11.1.0.7)

Andre, December 18, 2012 - 6:54 am UTC

Dear Tom,

You had made a reference to:
>>>
https://blogs.oracle.com/optimizer/entry/how_do_i_use_concurrent
<<<

Actually I had spotted it too + some comments - before posting this issue on your site

It appears that others had also experienced issues.
Also - the author (Maria Colgan) is talking about concurrent statistics, hence Oracle 11gR2 (NOT 11gR1).

The shop here have not upgraded into 11gR2 yet - so all I can do is to check on my laptop running Oracle-Linux 11gR2 but all Apps are deployed on Sun+11gR1
...
ALSO: Maria's example calls dbms_stats.gather_schema_stats without providing a value for the OPTIONS parameter.
So - considering Oracle docs where Oracle says:
(a) options is default 'GATHER'
(b) GATHER = Gathers stats on ALL objects in the schema
(c) GATHER AUTO = objlist is considered but NOT the filter - Oracle says "ALL other param settings are ignored"
...
...
All I can do is test on 11gR2 - and report to the DBA group if it works on R2 but NOT on R1.

Thanks
A

Here are the comments from that post:
>>>
Hello Maria,
thank you for the very nice article.
I have tried to reproduce the case (using your code, but with other tables, specific to my environment).
The problem is that (because of bug 12754926) on 11.2.0.3 (Linux x64) the code does NOT work!
Bug description:
Bug 12754926 DBMS_STATS.gather_schema_stats does not work with an obj_filter_list parameter specified
This note gives a brief overview of bug 12754926.

The obj_filter_list argument to DBMS_STATS.gather_schema_stats() does
not filter out objects to gather stats upon. *** However it does work
correctly if specified with DBMS_STATS.gather_database_stats()***

The problem is that after implementing the workaround (using DBMS_STATS.gather_database_stats(...) instead of DBMS_STATS.gather_schema_stats(...)), it is still not working ;(

At the end of the day, we have an excellent idea, but a bad implementation.
Best regards. Milen

Extended Statistics

alan tan, December 27, 2012 - 9:16 pm UTC

Hi,Tom
Here is a test of Oracle 11g extended statistics, but it seems not as smart as I expected.
SQL> create table t as select decode(mod(rownum,2),0,0,1) flag1,decode(mod(rownum,2),0,1,0) flag2  from dba_objects where rownum<10000;

Table created.

SQL> select * from t where rownum<10;

     FLAG1      FLAG2
---------- ----------
         1          0
         0          1
         1          0
         0          1
         1          0
         0          1
         1          0
         0          1
         1          0

9 rows selected.

SQL> select dbms_stats.create_extended_stats(user,'t',extension=>'(flag1,flag2)') from dual;

DBMS_STATS.CREATE_EXTENDED_STATS(USER,'T',EXTENSION=>'(FLAG1,FLAG2)')
--------------------------------------------------------------------------------
SYS_STUGVNB7PTIYWAVPJX#YT77WGD

SQL> exec dbms_stats.gather_table_stats(user,'t',method_opt=>'for all columns size 254');

PL/SQL procedure successfully completed.

SQL> set autotrace trace exp;

SQL> select * from t where flag1=0 and flag2=0;

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  2500 | 15000 |     7   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |  2500 | 15000 |     7   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("FLAG1"=0 AND "FLAG2"=0)

SQL> set autotrace off;
SQL> select * from t where flag1=0 and flag2=0;

no rows selected

SQL>


Could you please point out if I got something wrong here?

Thank you very much.
Regards
Alan
Tom Kyte
January 04, 2013 - 11:11 am UTC

see this:

http://jonathanlewis.wordpress.com/2009/04/23/histogram-change/

for what happens when you use a value that falls outside of the values found in a frequency based histogram.

..num_rows vs count(*)

A Reader, January 07, 2013 - 9:18 am UTC

Tom,
Why is this behaviour, in 10gR2?

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production


SQL> select count(*) from t1;

  COUNT(*)
----------
   1511058

SQL> set time on
02:10:40 SQL>
02:10:41 SQL>
02:10:41 SQL> select count(*) from t1;

  COUNT(*)
----------
   1511058

02:10:48 SQL>
02:10:49 SQL> exec dbms_stats.gather_table_stats(user,'&1',cascade=>true);
Enter value for 1: t1

PL/SQL procedure successfully completed.

02:11:31 SQL>  select  num_rows from user_tables where table_name ='t1';

  NUM_ROWS
----------
   1509979

02:11:38 SQL> select count(*) from t1;

  COUNT(*)
----------
   1511058

02:11:46 SQL>  exec dbms_stats.gather_table_stats(user,'&1',cascade=>true);
Enter value for 1: t1

PL/SQL procedure successfully completed.

02:12:12 SQL> select  num_rows from user_tables where table_name ='t1';

  NUM_ROWS
----------
   1510607

i.e. with each successive dbms_stats gather on table t1, num_rows keeps changing...!!! But at the same time no inserts to the table are happening - count(*) is constant throughout the above test.

I upgraded from 10.2.0.1 to 10.2.0.4 thinking it might solve the problem (bug), but no luck.

regards

Tom Kyte
January 14, 2013 - 11:30 am UTC

because statistics are gathered using an estimation - using sampling. It isn't meant to be perfect, it only has to be representative of the data.

there is no bug
there is nothing wrong
everything is working as we intended
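
If you ever do need NUM_ROWS to match count(*) exactly, a 100% sample will do it (at the cost of reading everything); a minimal sketch, assuming the table is named T1:

exec dbms_stats.gather_table_stats( user, 'T1', estimate_percent => 100, cascade => true );
select num_rows from user_tables where table_name = 'T1';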



Cursor invalidation

Rajeshwaran, Jeyabal, January 29, 2013 - 6:44 am UTC

Tom,
(How can we hard parse our Static sql developed for batch process, for each execution)
We have two table T1, T2 each 7G and 120G size. Hash partitioned on PK (8 partitions).
T1 has 5+Million rows and T2 has 1503+Million rows.
During business hours (8am-4pm)data loaded into T2 with T2.x = 3, stats gathered and Histograms on all Indexed columns daily at 6.00pm.
This below procedure run's daily at 8.00 pm (only once per day) and write contents to Text file.
create or replace procedure
get_data as
begin
 for x in (select *
  from t1 , t2
  where t1.c1 = t2.c1
  and t1.z = 5
  and decode(t2.x,3,3) = 3)
 loop
  /* loop over and Write to Text File */
 end loop;
end;
/

Day#1 - The SQL in the procedure was hard parsed and returned 7 rows from T2 for x=3 (so it went for an index range scan on T2 and nested loops to T1 - fine).
Day#2 - The SQL was soft parsed and reused the plan, but now we got 200K rows from T2 for x=3 (so the Day#1 plan went horrible). We expected an index range scan on T2 and a HASH join with T1.

One of our experienced developers suggested going this way; the index T2_IND is used only by this process and nowhere else in the application. Can you share your ideas?
create or replace procedure
get_data as
begin
 dbms_stats.gather_index_stats(user,'T2_IND',no_invalidate=>false);
 /* this way we do hard parse and never-reuse existing plans */
 
 for x in (select *
  from t1 , t2
  where t1.c1 = t2.c1
  and t1.z = 5
  and decode(t2.x,3,3) = 3)
 loop
  /* loop over and Write to Text File */
 end loop;
end;
/

We are on 10.2.0.5
Tom Kyte
January 31, 2013 - 9:26 am UTC

that'll work - as would native dynamic sql with a query that has something unique in it. for example

open l_cursor for '
select /* ' || to_char( sysdate,'yyyymmddhh24miss' ) || ' */ * from t1, ....';

loop
    fetch l_cursor bulk collect into ... limit N;
    for i in 1 .. array.count
    loop
       ...
    end loop;
    exit when l_cursor%notfound;
end loop;
close l_cursor;



and would avoid the need to do the extra gather stats.
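
Fleshed out, that approach could look like the following sketch (assuming, as in the question, that T1 has columns C1 and Z; the timestamp comment makes each day's SQL text unique, so every run is hard parsed against current statistics):

create or replace procedure get_data
as
    type num_tab is table of number index by pls_integer;
    l_c1     num_tab;
    l_cursor sys_refcursor;
begin
    open l_cursor for
        'select /* ' || to_char( sysdate, 'yyyymmddhh24miss' ) || ' */ c1
           from t1
          where z = 5';
    loop
        fetch l_cursor bulk collect into l_c1 limit 100;
        for i in 1 .. l_c1.count
        loop
            null;  /* loop over and write to text file */
        end loop;
        exit when l_cursor%notfound;
    end loop;
    close l_cursor;
end;
/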

stats with older value for optimizer_features_enable

A reader, February 08, 2013 - 3:09 am UTC

If we upgrade our database from 10g to 11g with optimizer_features_enable set to a 10.2.0.x value, do we need to change the way we call dbms_stats.gather_table_stats? Does any new parameter need to be provided?
Tom Kyte
February 11, 2013 - 9:25 am UTC

you do not need to change, however you might want to change.

if you use partitioning, we have new incremental statistics.
https://blogs.oracle.com/optimizer/entry/maintaining_statistics_on_large_partitioned_tables

if you skip estimate_percent and let it default to auto, you'll get better statistics faster than you do with estimate_percent:
http://optimizermagic.blogspot.com/2008/01/improvement-of-auto-sampling-statistics.html
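
For example (a sketch, with a hypothetical partitioned table named BIG_PART_TAB):

-- turn on incremental statistics maintenance for the table
exec dbms_stats.set_table_prefs(user, 'BIG_PART_TAB', 'INCREMENTAL', 'TRUE');

-- let estimate_percent default to the new AUTO_SAMPLE_SIZE (or pass it explicitly)
exec dbms_stats.gather_table_stats(user, 'BIG_PART_TAB', estimate_percent => dbms_stats.auto_sample_size);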

Anomaly detected with accounting for STALE

Andre, April 10, 2013 - 10:11 am UTC

Tom,

I am monitoring the DBA_TAB_MODIFICATIONS view and noticed that 3 tables that used to be STALE are no longer being picked up in the list produced by a schema-level DBMS_STATS LIST STALE.

I noticed this by comparing the logs in the past 24 hours.

So I checked LAST_ANALYZED for these tables, only to find that it shows today's date (2 hours ago) - and then saw that there were 8 tables with the same LAST_ANALYZED timestamp.

I am 99.9% certain that nobody executed DBMS_STATS.

Yet, it appears that something had been done -

Is it at all possible that some activity other than gathering stats may alter LAST_ANALYZED?

Thanks
Andre
Tom Kyte
April 22, 2013 - 2:38 pm UTC

... I am 99.9% certain that nobody executed DBMS_STATS.
...

well... I would be closer to 100% certain that someone did...

last analyzed is updated via dbms_stats or the analyze command...

you might peek into your ASH repository or think about enabling auditing...
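
If you are licensed for the Diagnostics Pack, a query along these lines can show whether anything was running DBMS_STATS in that window (a sketch - adjust the time range to suit):

select h.sample_time, h.session_id, h.sql_id
  from dba_hist_active_sess_history h
 where h.sql_id in ( select t.sql_id
                       from dba_hist_sqltext t
                      where upper(t.sql_text) like '%DBMS_STATS%' )
 order by h.sample_time;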

Extended stats

Rajeshwaran, July 29, 2013 - 3:10 pm UTC

Tom,

I was working on extended stats in 11gR2 (11.2.0.3) and found the index is not getting used. Can you please point out what I am missing here? Below is the script I used for testing.

drop table t purge;
create table t
as
select a.*,
 decode( mod(rownum,2),0,'Y','N') flag1,
 decode( mod(rownum,2),0,'N','Y') flag2
from all_objects a;
create index t_ind on t(flag1,flag2);
begin 
 dbms_stats.gather_table_stats
 (ownname=>user,
  tabname=>'T',
  method_opt=>'for all columns size 254');
end;
/
set serveroutput off;
select * from t where flag1='Y' and flag2='Y';  
select * from table( dbms_xplan.display_cursor );
select dbms_stats.create_extended_stats(user,'T','(FLAG1,FLAG2)')
from dual;
begin 
 dbms_stats.gather_table_stats
 (ownname=>user,
  tabname=>'T',
  method_opt=>'for all columns size 254');
end;
/
select * from t t1 where flag1='Y' and flag2='Y';  
select * from table( dbms_xplan.display_cursor );

Tom Kyte
August 02, 2013 - 5:43 pm UTC

see the comment from:

Extended Statistics December 27, 2012


above, same reason.
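
To confirm the extension exists and has statistics, you can check the dictionary (a quick sketch against the table T from the script above):

select extension_name, extension
  from user_stat_extensions
 where table_name = 'T';

select column_name, histogram, num_distinct
  from user_tab_col_statistics
 where table_name = 'T';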

setting up stats for Global Temp tables

Andre, October 09, 2013 - 3:47 pm UTC

Hi Tom,

I saw you mention setting up table stats for temporary tables by gathering the stats on an ordinary table with the same structure and a typical population of data - useful when the data content is much higher than Oracle would assume it to be - and then importing these stats back with the global temporary table as the target.

2 questions:

(a) Am I correct to assume that the volume of data considered should represent only one session - or would I in fact need to load the equivalent of the total data volume across the most likely number of sessions concurrently using the global temporary table?

(b) Is gathering table stats with histograms etc. needed, or is executing dbms_stats.set_table_stats sufficient?

Thank you

Kindly

Andre
Tom Kyte
October 09, 2013 - 6:08 pm UTC

prior to 12c, stats on global temporary tables (gtt) can be a problem. There would be just one set of stats that all sessions see - which will be an issue if sometimes there are 5 rows in the gtt in one session and 5,000,000 rows in some other session.

In that case you had two basic options:

a) put in place representative "big" statistics - either by loading the gtt and gathering stats and then locking them, or by setting them. All sessions will use "big table" statistics (in the hope that a plan decided upon using big stats will work just fine for a small table).

b) do not have any statistics, rely on dynamic sampling, and force a hard parse in every session so each one dynamically samples. You would have to use dynamic sql to accomplish this and make sure that each session generated a unique sql statement (eg: put the current date/time in the query as a comment for example)



In 12c, you'll have session private global temporary statistics:

http://docs.oracle.com/cd/E16655_01/server.121/e17906/chapter1.htm#FEATURENO09671

here each session could

a) set their own stats
b) gather stats
c) rely on dynamic sampling

without having to use dynamic sql, without having to force a hard parse.



so, in answer to (a) and (b) for pre-12c:

a) you would typically want to put in representative statistics for all sessions - or if there are not representative statistics, you can put in "big table scenario" statistics in the belief that a plan generated for big tables would work fine in the small table case (not always true, but more true than using small table stats for a big table in general!)

or relying on dynamic sampling by forcing a hard parse.


b) depends. is the data in your gtt skewed? do you use that skewed data in your where clause? if so, you may very well need histograms...
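
Option (a) for pre-12c can be as simple as stamping on "big" statistics and locking them (a sketch, using a hypothetical global temporary table named GTT_DEMO):

create global temporary table gtt_demo ( x number ) on commit preserve rows;

begin
    dbms_stats.set_table_stats
    ( ownname => user,
      tabname => 'GTT_DEMO',
      numrows => 5000000,   -- pretend the table is big
      numblks => 100000 );
    dbms_stats.lock_table_stats( user, 'GTT_DEMO' );
end;
/

Every session will now optimize against the "big table" statistics, regardless of how many rows it actually loads.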

GTT stats

Andre, October 10, 2013 - 12:55 pm UTC

Tom - Thank you.

I am reading new features in 12c

A

estimate percent

Tony, November 20, 2013 - 2:58 pm UTC

Hi Tom,

As per the documentation, the default value of estimate_percent is to_estimate_percent_type(get_param('ESTIMATE_PERCENT')).

But when I execute:

select dbms_stats.to_estimate_percent_type(dbms_stats.get_param('ESTIMATE_PERCENT')) from dual;

it returns 0. Does this mean that Oracle uses 0% as the default (which doesn't make sense)? Is there any way to tell the percentage of rows sampled for a table if we use Oracle's default for estimate_percent? Thank you.

Incrementals statistics with granularity = partition

Manish, February 13, 2014 - 9:13 pm UTC

Hi Tom,

We have a large range-partitioned table, with new partitions getting loaded on a daily basis, concurrently, based on the arrival of 70 independent (many of them concurrent) data feeds spread across a window of 6 hours.

Can you please explain whether global stats will be updated incrementally in this case, or whether a full table scan will be needed, based on the following steps (sketched in DBMS_STATS syntax after this question):
1. Create the table and enable incremental stats.
2. Execute dbms_stats.gather_table_stats(..., granularity => 'PARTITION', partname => <specific partition name>) at the completion of the load of a specific partition for a specific source system.
3. To update table/global-level stats: at the end of all source system loads, execute dbms_stats.gather_table_stats(..., granularity => 'AUTO', ...).

Thanks
Manish
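
In DBMS_STATS call syntax, the steps described above would look roughly like this (a sketch, with a hypothetical table BIG_TAB and partition name; note that incremental global stats also require estimate_percent to be left at AUTO_SAMPLE_SIZE):

-- step 1: enable incremental stats on the table
exec dbms_stats.set_table_prefs(user, 'BIG_TAB', 'INCREMENTAL', 'TRUE');

-- step 2: after each partition load
exec dbms_stats.gather_table_stats(user, 'BIG_TAB', partname => 'P20140213', granularity => 'PARTITION');

-- step 3: after all source system loads complete
exec dbms_stats.gather_table_stats(user, 'BIG_TAB', granularity => 'AUTO');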



Monitoring a concurrent statistics gather

Randy, January 23, 2015 - 12:23 am UTC

Tom -

I would like to know how to monitor concurrent statistics gathering. That is, when setting the global preference CONCURRENT to TRUE and then executing DBMS_STATS.GATHER_SCHEMA_STATS, how can I determine how many jobs (for individual tables) are waiting so that I can track progress.

Thanks...
Randy
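
One way to watch this (a sketch; the job class filter is an assumption based on Oracle's concurrent statistics gathering notes) is to look at the scheduler jobs that the concurrent gather creates:

select job_name, state
  from dba_scheduler_jobs
 where job_class like 'CONC%'
 order by state;

Counting rows per STATE (SCHEDULED vs RUNNING) gives a rough progress indicator, and DBA_OPTSTAT_OPERATIONS records the overall operation once it completes.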

Online stats gathering in 12c on bulk loads with VC

Rajeshwaran, Jeyabal, February 13, 2016 - 8:05 am UTC

Team,

Reading through the manuals, online stats gathering during bulk loads in 12c is not possible if the table has a virtual column.

However, when tested in 12c (12.1.0.2), the results were different.
Please correct me if I am wrong.

http://docs.oracle.com/database/121/TGSQL/tgsql_statscon.htm#GUID-CDDB5A54-0991-4E68-A9D7-2305777B608B

rajesh@ORA12C> drop table t purge;

Table dropped.

rajesh@ORA12C> create table t
  2  ( owner varchar2(30),
  3    object_id number ,
  4    object_type varchar2(30),
  5    deptno number);

Table created.

rajesh@ORA12C>
rajesh@ORA12C> alter table t add x generated always as (upper(object_type)) ;

Table altered.

rajesh@ORA12C>
rajesh@ORA12C>
rajesh@ORA12C> select num_rows,blocks,avg_row_len,last_analyzed
  2  from user_tables
  3  where table_name ='T';

  NUM_ROWS     BLOCKS AVG_ROW_LEN LAST_ANALYZED
---------- ---------- ----------- --------------------


1 row selected.

rajesh@ORA12C>
rajesh@ORA12C> insert /*+ append */ into t(owner,object_id,object_type,deptno)
  2  select owner,object_id,object_type,10
  3  from all_objects
  4  where rownum <=1000;

1000 rows created.

rajesh@ORA12C> commit;

Commit complete.

rajesh@ORA12C> select num_rows,blocks,avg_row_len,last_analyzed
  2  from user_tables
  3  where table_name ='T';

  NUM_ROWS     BLOCKS AVG_ROW_LEN LAST_ANALYZED
---------- ---------- ----------- --------------------
      1000          7          23 13-FEB-2016 13:28:47

1 row selected.

rajesh@ORA12C>
rajesh@ORA12C> select column_name,num_distinct,num_nulls,last_analyzed,notes
  2  from user_tab_col_statistics
  3  where table_name ='T' ;

COLUMN_NAME  NUM_DISTINCT  NUM_NULLS LAST_ANALYZED        NOTES
------------ ------------ ---------- -------------------- -------------------
OWNER                   3          0 13-FEB-2016 13:28:46 STATS_ON_LOAD
OBJECT_ID            1000          0 13-FEB-2016 13:28:46 STATS_ON_LOAD
OBJECT_TYPE             6          0 13-FEB-2016 13:28:46 STATS_ON_LOAD
DEPTNO                  1          0 13-FEB-2016 13:28:46 STATS_ON_LOAD
X                       6          0 13-FEB-2016 13:28:46 STATS_ON_LOAD

5 rows selected.

rajesh@ORA12C>

Connor McDonald
February 14, 2016 - 4:17 am UTC

Reproduced here.

Looks to me like a documentation bug. (I don't have a 12.1.0.1 handy, so perhaps it's a restriction that was lifted in 12.1.0.2.)

Please use the "Feedback" option on the docs to pass that back to the documentation team, and they'll get it fixed.

Thanks,
Connor

question about dbms_stats.gather_database_stats

Apr, October 07, 2021 - 5:10 pm UTC

What happens if we run dbms_stats.import_xxxx_stats and statistics already exist for some of the objects in the data dictionary?
Connor McDonald
October 08, 2021 - 1:18 am UTC

Import will overwrite the existing statistics, unless you have locked the stats for the objects you want to retain.
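
For example (a sketch with hypothetical names - a stats table STATTAB and a table KEEP_ME whose current statistics should survive the import):

-- lock the statistics you want to keep
exec dbms_stats.lock_table_stats(user, 'KEEP_ME');

-- import everything else from the stats table; locked objects are skipped
-- unless you pass force => true
exec dbms_stats.import_schema_stats(ownname => user, stattab => 'STATTAB');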
