
Tom Kyte

Thanks for the question, vrishali.

Asked: January 04, 2001 - 12:39 pm UTC

Last updated: February 06, 2012 - 10:52 am UTC

Version: 8.1.6


You Asked

Hi Tom,

I want to know: if I have altered a table with ALTER TABLE ... MONITORING, will it create load on the database if a lot of inserts are occurring on that table during the day?

Thanks
Vrishali

and Tom said...

It appears to have minimal impact (no measurable impact that I could observe) and I see nothing in the problem database about it being a performance issue.

However, if you are concerned about this, and since you know the table is undergoing lots of inserts during the day, monitoring is not really needed - you know for a fact the stats are going stale frequently and need to be re-gathered. Monitoring is all about helping you find the tables that need stats to be gathered. This table would be known to need frequent statistics collection.


Comments

Doesn't MONITORING actually ANALYZE a table?

Andre Nasser, October 02, 2001 - 4:36 pm UTC

I didn't quite understand the use of MONITORING. From what I read in the Oracle books, it enables automatic statistics collection in the SGA, and from time to time the statistics are flushed to the data dictionary.

Seems you're saying it only keeps track of which specific tables are being modified.

Please clarify...

Tom Kyte
October 02, 2001 - 7:17 pm UTC

See
http://docs.oracle.com/docs/cd/A81042_01/DOC/server.816/a76992/stats.htm#25754

for the details. When you designate a table for monitoring, Oracle will watch for DML activity on it. It will capture the approx. number of inserts/updates/deletes for the table.

Later, DBMS_STATS can be used to gather stats on "stale" tables. Tables are classified "stale" by the act of monitoring them.

It takes a combination of MONITORING and DBMS_STATS with a "gather stale" to update the stats for stale tables -- monitoring does not analyze the tables themselves, only counts (approximately) the DML hits on it.
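
For illustration, a minimal sketch of that combination (the schema and table names here are examples only, not from the question):

-- ask Oracle to keep approximate DML counts for this table
alter table emp monitoring;

-- later, re-gather statistics only for the tables flagged as stale
begin
    dbms_stats.gather_schema_stats
    ( ownname => 'SCOTT',
      options => 'GATHER STALE',
      cascade => true );
end;
/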

Reader

Reader, October 02, 2001 - 9:33 pm UTC

Tom,

Is there an algorithm Oracle uses to establish whether the
statistics are fresh or not? Some combination of total
# of rows to total # of DMLs, etc.?

Thanks

Tom Kyte
October 03, 2001 - 8:50 am UTC

The criterion for monitoring implying staleness is that the total number of inserts, deletes and updates for the table exceeds 10% of the number of rows in the table as recorded by the last analyze.
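
So, for example, a table analyzed at 1,000 rows becomes a candidate for "stale" once inserts + updates + deletes exceed 100. A sketch of that check done by hand (assuming a release that exposes user_tab_modifications; in 8i only sys.dba_tab_modifications exists):

select m.table_name,
       m.inserts + m.updates + m.deletes as dml_count,
       t.num_rows
  from user_tab_modifications m, user_tables t
 where m.table_name = t.table_name
   and (m.inserts + m.updates + m.deletes) > 0.10 * t.num_rows;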





Thanks for the followup !!!

Andre Nasser, October 03, 2001 - 9:06 am UTC

Tom,

Now I see what you mean. In fact, the Oracle material and online books are not very clear about this feature. Even the 8i classes I attended did not cover this subject.

Thanks a lot !!!

Table Monitoring

William, October 04, 2001 - 12:16 am UTC

Tom,

If a table is set to MONITORING and its statistics are stale, will Oracle then ignore the current statistics until the tables are re-analyzed?

Tom Kyte
October 04, 2001 - 6:41 am UTC

No.

All monitoring does is count approximately how many rows have been modified. The stats that are in place are used until you re-analyze.

The stats that are in place, even though they are stale, are probably much better than "guessing", which is what the optimizer would do if it ignored them.

Stale statistics

William Lee, November 30, 2001 - 12:19 am UTC

Tom,

Once a table has been tagged as having stale statistics, does the optimizer still use the current statistics?

I was experimenting with this feature and noticed performance problems with queries until I re-analyzed the tables with stale statistics.

Thanks

Tom Kyte
November 30, 2001 - 7:38 am UTC

yes, the "stale" stats are used.

the other, even less desirable, option would be to "guess" at the stats.

The reason you see poor performance with stale stats is that the stats are INCORRECT: instead of there being 1,000 rows, for example, there are now 1,000,000, and the plan of attack for 1,000 rows would be different than for 1 million.

Re: Stale statistics

William, December 05, 2001 - 12:38 am UTC

Tom,

I don't agree with what you mentioned about the stale statistics. Firstly, without using MONITORING, I analyze all my tables once a week. When I set the tables to MONITORING, I had a script analyze the tables with stale statistics every 6 hours.

With MONITORING set, some of the queries would run slowly, but would run OK if I went and manually analyzed the tables with stale statistics.

From this I assumed that it's not using stale statistics. I have given up on this and have gone back to doing a full analyze once a week.

Cheers

Tom Kyte
December 05, 2001 - 4:37 pm UTC

Sorry you don't "agree", but it is what happens. You should have analyzed the problem a little more thoroughly; your assumption is incorrect, and something else was the culprit. Perhaps it was because you were running analyzes while people were querying the data, and the analyze was contending for the same resources (disk) as the queries. You would have had to capture the plans of the queries (a) with the stats "fresh", (b) when they didn't work, and (c) right after making the stats fresh again. If they didn't change between (a) and (b), that means the stats were in fact being used (since the plans were the same). If they changed between (b) and (c), that means the collection of stats was needed -- NOT that the stats were ignored.

Sigh, thus begins another rumor/myth. This is how they start, you know. Pretty soon there will be an army of DBAs refusing to use another tool simply due to hearsay (the same reason people don't use auditing, or reorg tables into a single extent for "performance", and so on)....

SYS.DBA_TAB_MODIFICATIONS

reader, April 17, 2003 - 5:22 am UTC

tom,

I tried the following, and I wonder why there are no rows in SYS.DBA_TAB_MODIFICATIONS. Please let me know where I went wrong.


ge$scott@workflow.us.oracle.com>ALTER TABLE EMP MONITORING;

Table altered.

ge$scott@workflow.us.oracle.com>analyze table SCOTT.EMP compute statistics;

Table analyzed.

ge$scott@workflow.us.oracle.com>select table_name, num_rows, blocks, avg_row_len from dba_tables where table_name='EMP';

TABLE_NAME NUM_ROWS BLOCKS AVG_ROW_LEN
============================== ========== ========== ===========
EMP 28 1 40

ge$scott@workflow.us.oracle.com>alter table SCOTT.EMP monitoring;

Table altered.

ge$scott@workflow.us.oracle.com>select table_name, monitoring from dba_tables where owner='SCOTT' ORDER BY 2
2 ;

TABLE_NAME MON
============================== ===
A NO
B NO
BIG NO
BONUS NO
PLAN_TABLE NO
SALGRADE NO
CHAINED_ROWS NO
DEPT NO
EMP YES

9 rows selected.

ge$scott@workflow.us.oracle.com>INSERT INTO EMP SELECT * FROM EMP;

28 rows created.

ge$scott@workflow.us.oracle.com>COMMIT;

Commit complete.

ge$scott@workflow.us.oracle.com>SELECT * FROM SYS.DBA_TAB_MODIFICATIONS;

no rows selected


I was expecting a record showing 28 inserts. Please clarify.

Tom Kyte
April 17, 2003 - 10:37 am UTC

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1154434873552#8727886618633

you need to

o wait 3 hours or so with 8i
o use dbms_stats to flush modifications at will in 9i
o shutdown/startup cleanly in any release
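
For the second option, a minimal sketch (9i and later):

begin
    -- push the cached DML counters from the SGA into the dictionary
    dbms_stats.flush_database_monitoring_info;
end;
/

select table_name, inserts, updates, deletes
  from sys.dba_tab_modifications
 where table_name = 'EMP';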



Thanks Tom ...

Reader, April 17, 2003 - 10:39 am UTC


What would you do when the table is very volatile?

A reader, April 06, 2004 - 6:10 pm UTC

Dear Tom,

Suppose I have a table T that undergoes a lot of inserts and deletes within a short period of time. dbms_stats.SET_TABLE_STATS may not give the correct info as this table is really volatile. What would you recommend in this case?

1. dbms_stats.SET_TABLE_STATS
2. monitoring turned on for the table, and gather statistics every few minutes if the stats are stale?
3. something else...?

Thanks

Tom Kyte
April 07, 2004 - 8:51 am UTC

3) dynamic sampling in 9ir2....
4) using set table stats to set a "best guess overall guideline for the optimizer to use"
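
A sketch of option 3 -- the table name, predicate and sampling level here are assumptions, not from the question:

select /*+ dynamic_sampling(t 2) */ count(*)
  from volatile_queue t   -- hypothetical volatile table
 where status = 'PENDING';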

Michael Jernigan, July 08, 2004 - 5:42 pm UTC

My question is about your reply to the initial poster. You said that you didn't need monitoring if you know the table is heavily updated.

If I have monitoring on for tables a, b, and c, but off for table d, would the gather stale option only get stats for tables a-c? In which case I would need to use the gather_table_stats for d.

Tom Kyte
July 08, 2004 - 9:00 pm UTC

you would need to gather table stats for d.

but, as I said, nominal impact; I would just monitor them all if that made my life easier.

Data created in _TAB_MODIFICATIONS?

A reader, August 30, 2004 - 12:31 pm UTC

1. Is a record created in USER_TAB_MODIFICATIONS for EACH transaction?

2. If yes, when will this data be deleted? I am concerned that the amount of data might become a load.

3. Can the data in _TAB_MODIFICATIONS be deleted?

4. Does the DBMS_STATS GATHER STALE option pick up all tables whose _TAB_MODIFICATIONS.TIMESTAMP is later than their LAST_ANALYZED, and analyze them?

Tom Kyte
August 30, 2004 - 12:43 pm UTC

1) no

3) gathering stats, removing the object

4) objects are only in the tab modifications view if they are in need of fresh stats. the timestamps are not relevant, the existence of a record in there is.

Need a little clarification on ans 4)

Steve, December 09, 2004 - 9:24 am UTC

Hi Tom,

Could you please clarify the last answer:

   4) objects are only in the tab modifications view 
      if they are in need of fresh stats...

I did the following test (full details further below):

    o gather table statistics on table T
    o update only 1 row (out of 65k) 
    o flush stats
    o query the user_tab_modifications

There was a row present in the user_tab_modifications for
table T (because of the FLUSH). Does this mean that the 
stats are stale? They shouldn't be because only 1 out
of 65K rows was changed. If they are considered stale,
then I guess FLUSH() defeats the whole monitoring purpose
and should be avoided...


SQL> desc t

 Name       Null?    Type
 ---------- -------- ----------
 I                   NUMBER(38)

SQL> select count(*) from t;

  COUNT(*)
----------
     65536

SQL>  exec DBMS_STATS.GATHER_TABLE_STATS('dwadm',tabname=>'T', estimate_percent=>5);

PL/SQL procedure successfully completed.


SQL> exec print_table('select * from user_tab_modifications where table_name=''T''');
TABLE_NAME                    : T
PARTITION_NAME                :
SUBPARTITION_NAME             :
INSERTS                       : 0
UPDATES                       : 1
DELETES                       : 0
TIMESTAMP                     : 09-dec-2004 10:09:35
TRUNCATED                     : NO
DROP_SEGMENTS                 : 0
-----------------

SQL>  update t set i=8 where rownum<2;

1 row updated.


SQL> exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO();

PL/SQL procedure successfully completed.



SQL>  exec print_table('select * from user_tab_modifications where table_name=''T''');
TABLE_NAME                    : T
PARTITION_NAME                :
SUBPARTITION_NAME             :
INSERTS                       : 0
UPDATES                       : 1
DELETES                       : 0
TIMESTAMP                     : 09-dec-2004 10:14:00
TRUNCATED                     : NO
DROP_SEGMENTS                 : 0
-----------------

PL/SQL procedure successfully completed.
 

Tom Kyte
December 09, 2004 - 2:21 pm UTC

guess it was stated "ambiguously" -- they will be in there if they are monitored, have started to be changed, and have had their stats flushed.

Finally found the answer

Steve, December 09, 2004 - 9:58 am UTC

Hi Tom,

I finally found the answer (in the Performance & Tuning Guide). A table can be present in user_tab_modifications, but that does not mean it is stale.

"Objects are considered stale when 10% of the 
 total rows have been changed. When you issue 
 GATHER_TABLE_STATS with GATHER STALE, the procedure 
 checks the USER_TAB_MODIFICATIONS view. If a monitored 
 table has been modified more than 10%, then statistics 
 are gathered again. The information about changes of 
 tables, as shown in the USER_TAB_MODIFICATIONS view, 
 can be flushed from the SGA into the data dictionary 
 with the DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO 
 procedure." 

The following simple test also agreed with the documentation:

SQL>  select table_name,last_analyzed from user_tables 
      where table_name='T';

TABLE_NAME                     LAST_ANALYZED
------------------------------ -------------------
T                              2004-12-09 10:13:10

SQL> exec print_table('select * from user_tab_modifications where table_name=''T''');
TABLE_NAME                    : T
PARTITION_NAME                :
SUBPARTITION_NAME             :
INSERTS                       : 0
UPDATES                       : 1
DELETES                       : 0
TIMESTAMP                     : 09-dec-2004 10:14:00
TRUNCATED                     : NO
DROP_SEGMENTS                 : 0
-----------------

PL/SQL procedure successfully completed.


SQL> select sysdate from dual;

SYSDATE
-------------------
2004-12-09 10:53:13


SQL>  exec DBMS_STATS.GATHER_SCHEMA_STATS('DWADM',OPTIONS=>'gather stale');

PL/SQL procedure successfully completed.

SQL>  select table_name,last_analyzed from user_tables
      where table_name='T';

TABLE_NAME                     LAST_ANALYZED
------------------------------ -------------------
T                              2004-12-09 10:13:10

SQL> exec print_table('select * from user_tab_modifications where table_name=''T''');
TABLE_NAME                    : T
PARTITION_NAME                :
SUBPARTITION_NAME             :
INSERTS                       : 0
UPDATES                       : 1
DELETES                       : 0
TIMESTAMP                     : 09-dec-2004 10:14:00
TRUNCATED                     : NO
DROP_SEGMENTS                 : 0
-----------------

PL/SQL procedure successfully completed.

Thanks 

Steve 

Monitoring for partition

A reader, December 09, 2004 - 3:28 pm UTC

Hi Tom,
We have partitioned DW MVs. There are very rare circumstances in which old data (month_key is the partition key) will be updated, but every day the current partition will be affected. Can I do monitoring only on the partition, so that gathering of stats is done only for the partition with stale stats (which will be the current partition most of the time)? This would save lots of time, instead of gathering stats for the whole table!
Thanks

Tom Kyte
December 09, 2004 - 3:35 pm UTC

but -- do you need local or global stats for this table?

do all of your queries specify the partition key
do they all hit a SINGLE partition
and is that partition KNOWN at query parse time?

if not -- then you do need global statistics.

You can use dbms_stats to set statistics globally if you like, if you know what they should be, but you most likely need global statistics -- even more so than local.

Query by partition key

A reader, December 09, 2004 - 3:54 pm UTC

Hi Tom,
Thanks for quick response.
We have month_key as the partition key, and queries make sense only if the partition key is used in them, so I have created local partitioned indexes. But my problem is: how do I know which partition to gather stats for? I thought monitoring on a partition (if it existed!) would be quite cool!
Thanks

Tom Kyte
December 09, 2004 - 4:03 pm UTC

do you use bind variables?

bind variables?

A reader, December 10, 2004 - 2:40 pm UTC

Hi Tom,
This is a DW system, and users submit ad hoc queries from a reporting tool; the queries have a predicate on month_key.

Tom Kyte
December 10, 2004 - 8:08 pm UTC

that doesn't quite answer the question....

Adhoc queries...

A reader, December 10, 2004 - 8:26 pm UTC

Hi Tom,
We don't have control over the DW ad hoc queries, and the most likely answer is 'NO'.

Tom Kyte
December 10, 2004 - 8:33 pm UTC

the right answer here is "NO" (believe it -- or not ;)

can you find out (check v$sql)?

The reason behind bind variable usage...

A reader, December 11, 2004 - 12:51 am UTC

Hi Tom,
It would be good to know the answer both ways...
Thanks

Tom Kyte
December 11, 2004 - 7:56 am UTC

I'll give it -- I want to know if it will be useful for them or not.


I want them to find out first so that when I tell them how to accomplish their goals -- we'll know if it will be meaningful or not.

No worries!

A reader, December 11, 2004 - 3:01 pm UTC

Hi Tom,
Thanks for your patience, I will check, and get back to you.
Regards

Can I alter a materialized view to Monitoring.....?

Prashant, March 18, 2005 - 10:54 pm UTC

Hi Tom,

Can I alter a materialized view to monitoring, for similar reasons as I would intend to do for a table?

Thanks and Regards
Prashant

Tom Kyte
March 19, 2005 - 6:55 am UTC

no, but you can query v$sql_plan to see what plans are referencing it.

Wasn't Clear....will explain again

Prashant, March 24, 2005 - 1:23 am UTC

Hi Tom,

I am sorry I wasn't clear. For example, if I truncate and reload a partition in a table for which monitoring is turned on, then by looking at user_tab_modifications I can find out which partition to analyze. I believe I can do the same for a partitioned MV as well, by saying:
alter table mv_name monitoring;

Is that correct?

Thanks and Regards
Prashant

Tom Kyte
March 24, 2005 - 8:38 am UTC

ahh, got it -- sorry (don't know why I translated "table" into "index" when I replied)

yes, you can - an MV is just a table underneath.

Thanks a lot ....Tom

Prashant, March 25, 2005 - 5:02 pm UTC




Fast Refresh Materialized Views ...Are they also eligible?

Prashant, April 13, 2005 - 3:26 pm UTC

Hi Tom,

Following up on my question above, I have one more question. I have a materialized view MV A, e.g.:
MV A -> FAST REFRESH
MV A -> altered to MONITORING

If I refresh MV A by fast refresh, will it also show up in user_tab_modifications? Since it is a fast refresh, I believe it will not truncate the MV partitions, so will querying user_tab_modifications be correct in this scenario, to find the changed partitions and then analyze them?

Thanks and Regards
Prashant

Tom Kyte
April 13, 2005 - 4:18 pm UTC

yes, it'll be maintained via inserts/updates/deletes, monitoring would work

clear me

sikanthar, June 28, 2005 - 4:23 am UTC

Hi Tom,

While gathering stats with GATHER STALE, Oracle will look into user_tab_modifications to check for stale tables.

1. If the approximate counts are logged to user_tab_modifications only every 3 hours or so, what happens to the modifications made in between if stats are gathered during that window, since the view does not show them yet?
2. Do we need to flush before gathering stats so that user_tab_modifications is up to date? Otherwise we might lose a bit of the stats, right?


Tom Kyte
June 28, 2005 - 7:52 am UTC

1) in current releases, the statistics are flushed to the dictionary during the dbms_stats run, before it looks at them. You may also flush them manually using dbms_stats.

2) it is done

monitoring tables

Rrrrrrrob, August 25, 2005 - 11:30 am UTC

All I ever wanted to know and more. Thanks for a full explanation on the monitoring and stale stats scenario.





Monitoring dml on individual tables on 10gr2

A reader, September 26, 2006 - 12:47 pm UTC

Hi Tom, how can you know the # of DMLs per table on 10gR2?

There is the view dba_advisor_sqlw_colvol, but even after creating tuning tasks (of updates, for example), the table SYS.WRI$_ADV_SQLW_COLVOL (referenced by that view) is not updated.

How could I get those tables updated, so I could know the DMLs per table?

Thanks


CREATE TABLE TEST AS SELECT * FROM DBA_OBJECTS;
ANALYZE TABLE TEST COMPUTE STATISTICS;
UPDATE OYM.TEST SET OBJECT_ID=-OBJECT_ID;
COMMIT;
GRANT ADVISOR TO CACHITO;
DECLARE
    my_task_name VARCHAR2(30);
    my_sqltext   CLOB;
BEGIN
    my_sqltext   := 'UPDATE OYM.TEST SET OBJECT_ID=-OBJECT_ID WHERE OBJECT_ID > :bnd';
    my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                        sql_text    => my_sqltext,
                        bind_list   => sql_binds(anydata.ConvertNumber(100)),
                        user_name   => 'OYM',
                        scope       => 'COMPREHENSIVE',
                        time_limit  => 60,
                        task_name   => 'my_sql_tuning_task2',
                        description => 'Task to tune a query on a specified employee');
END;
/
BEGIN
    DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task2' );
END;
/
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task2') FROM DUAL;

SELECT task_name FROM DBA_ADVISOR_LOG WHERE owner = 'OYM';
SELECT * FROM DBA_ADVISOR_TASKS WHERE TASK_NAME='my_sql_tuning_task2';

SELECT * FROM dba_advisor_sqlw_colvol;
-- empty
SELECT * FROM SYS.WRI$_ADV_SQLW_COLVOL;
-- empty


Tom Kyte
September 26, 2006 - 4:53 pm UTC

if you want to know the number of DMLs (which, by the way, includes SELECT -- select is DML) you can use fine grained auditing....

but if you are trying to figure out when a table might need new stats, monitoring is as good as anything, stick with it.
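
A minimal fine grained auditing sketch (the schema, table and policy names are placeholders; in 10g dbms_fga can audit DML as well as select):

begin
    dbms_fga.add_policy
    ( object_schema   => 'OYM',
      object_name     => 'TEST',
      policy_name     => 'TEST_DML_AUDIT',
      statement_types => 'SELECT,INSERT,UPDATE,DELETE' );
end;
/

-- audited statements then appear in the FGA audit trail
select db_user, timestamp, sql_text
  from dba_fga_audit_trail
 where object_name = 'TEST';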

A reader, September 27, 2006 - 11:18 am UTC

Tom, thanks for that :)

Best practice

A reader, October 02, 2006 - 8:28 pm UTC

Tom,
We have a volatile table with high insert/update/delete activity. Several thousand queries are executed against this table in an hour. We gather statistics on it every night.
There is a purge job that deletes rows not required by the business.
In one case, statistics were gathered when the table had about 100 rows (just after the purge and an alter table move), so Oracle decided on a full table scan. Within 2 hours the table reached around 100,000 rows, and queries started running very slowly because of the full table scans.
Could you please recommend the best practice for gathering statistics in these cases?
Thank you,
V

Tom Kyte
October 02, 2006 - 9:18 pm UTC

perhaps you never gather stats, you set them to be "representative of the normal case"

that is, if the table normally can have about 100k rows - just use set table stats and say that and never gather again.
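
For example, a minimal sketch (the numbers are placeholders to be replaced with your representative values):

begin
    dbms_stats.set_table_stats
    ( ownname => user,
      tabname => 'T',
      numrows => 100000,   -- the "normal" row count you expect
      numblks => 1000 );   -- an assumed, representative block count
end;
/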

A reader, October 02, 2006 - 8:29 pm UTC

Tom.
Forgot to mention that the database is 9.2.0.6.
Thanks again
V

Great! keep up the good work...

Kalyan, October 16, 2006 - 11:51 am UTC

I was confused about table level monitoring for SAP / SAP BW applications... This helps..

Why are the stats collected even when the modifications are < 10%?

Chinni, February 18, 2009 - 7:46 am UTC

Hi Tom,
As I understand it, when I enable monitoring on a table, the stats job gathers only stale stats (above a 10% change). Please see the listing below; I must be missing something (this is 9i).

SQL> 
SQL> drop table t;

Table dropped.

SQL> 
SQL> create table t (c number);

Table created.

SQL> 
SQL> alter table t monitoring;

Table altered.

SQL> 
SQL> alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';

Session altered.

SQL> 
SQL> INSERT INTO T SELECT ROWNUM FROM DBA_TABLES WHERE ROWNUM<101;

100 rows created.

SQL> 
SQL> BEGIN
  2  DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> 
SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS (ownname => USER, tabname => 'T', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, cascade => TRUE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> SELECT LAST_ANALYZED,NUM_ROWS FROM DBA_TABLES WHERE TABLE_NAME='T';

LAST_ANALYZED          NUM_ROWS                                                                                                                                                                         
-------------------- ----------                                                                                                                                                                         
18-feb-2009 13:40:40        100                                                                                                                                                                         

SQL> 
SQL> INSERT INTO T SELECT ROWNUM FROM DBA_TABLES WHERE ROWNUM<2;

1 row created.

SQL> 
SQL> BEGIN
  2  DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> BEGIN
  2  DBMS_STATS.GATHER_TABLE_STATS (ownname => USER, tabname => 'T', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, cascade => TRUE, method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO');
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> SELECT LAST_ANALYZED,NUM_ROWS FROM DBA_TABLES WHERE TABLE_NAME='T';

LAST_ANALYZED          NUM_ROWS                                                                                                                                                                         
-------------------- ----------                                                                                                                                                                         
18-feb-2009 13:40:42        101                                                                                                                                                                         

SQL> 
SQL> spool off;


When I inserted only one row (which is less than 10% of 100 rows), why did Oracle regather the stats?

Please clarify. Thank you for your time.

Tom Kyte
February 18, 2009 - 3:40 pm UTC

because you TOLD US TO.

you are not saying "gather stats on STALE objects in this schema"

you are rather forcefully saying "gather the statistics on table T right now, get to it"

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#sthref6735

see the gather stale option of gather schema|database stats

Thank you

Chinni, February 19, 2009 - 5:45 am UTC

Hi Tom,
Thank you for clarifying that. I wonder why the "options" parameter is not available with gather_table_stats.
Tom Kyte
February 19, 2009 - 8:38 am UTC

it wouldn't make sense?

The gather stale only makes sense for a set of objects. If you are targeting a single table, you MUST want to gather statistics for it.

You can use LIST STALE to see if a single table is stale if you wanted.
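
A sketch of LIST STALE, which reports what a GATHER STALE would process without gathering anything:

set serveroutput on

declare
    l_objects dbms_stats.objecttab;
begin
    dbms_stats.gather_schema_stats
    ( ownname => user,
      options => 'LIST STALE',
      objlist => l_objects );

    for i in 1 .. l_objects.count
    loop
        dbms_output.put_line
        ( l_objects(i).objtype || ' ' || l_objects(i).objname ||
          ' ' || nvl( l_objects(i).partname, '-' ) );
    end loop;
end;
/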

Are the stats gathered again here?

Chinni, March 17, 2009 - 2:59 am UTC

Hi Tom,
I am using
BEGIN
    DBMS_STATS.GATHER_SCHEMA_STATS
    ( ownname          => 'CRIS',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      method_opt       => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
      options          => 'GATHER STALE',
      degree           => 4,
      cascade          => TRUE,
      granularity      => 'DEFAULT' );
END;
/


to gather stats. I have a table TRA with monitoring ON. I checked in ALL_TAB_MODIFICATIONS that NO partition has changed by more than 10%:

select *
  from ( select a.partition_name,
                ( (a.inserts + a.updates + a.deletes) /
                  (select num_rows
                     from all_tab_partitions b
                    where b.table_name = 'TRA'
                      and b.partition_name = a.partition_name) ) * 100 as percentage
           from all_tab_modifications a
          where a.table_name = 'TRA' )
 where percentage > 10


This returns no rows. Yet when I executed the stats-gathering block above, I noticed something like the following statement being executed:

SELECT /*+ cursor_sharing_exact dynamic_sampling(0) no_monitoring parallel(t,4)  parallel_index(t,4) */
       COUNT (*), COUNT ("TRA_TRAD_AMT"), SUM (VSIZE ("TRA_TRAD_AMT")),
.....
  FROM "CRIS"."TRA" SAMPLE (19.4452929663) t


Are the stats for TRA gathered again here? Could you please clarify?

Thank you.

Tom Kyte
March 17, 2009 - 10:30 am UTC

why don't you just query the dictionary to see the last analyzed time? You can answer whether and when statistics were last gathered right away.

Also, if you are curious, you can always do a LIST STALE first, print that out and then gather stale, so you can see what was done.

But why is it re-analyzing?

Chinni, March 18, 2009 - 8:48 am UTC

Hi Tom,
Thanks for the reply
I checked dba_tables and found that the table was actually analyzed again.

So my doubt is: why are the statistics re-gathered for this table even though the modifications are less than 10%?

I don't know if I have to consider other factors when computing the percentage of modifications?

Thank You
Tom Kyte
March 19, 2009 - 9:47 am UTC

the data in the views is not maintained in anything even remotely near "real time". The modifications information (since it is not crucial/critical/time sensitive) is heavily cached in the instance. Before gathering statistics, we flush that cache *and then* review the information.

Hence, when you look at the views - you are looking at information that could be very stale. There is an API in dbms_stats to flush the monitoring information, but you should just use LIST STALE to generate a report if you are curious about what exactly will be gathered.

Thanks

Chinni, March 19, 2009 - 1:32 pm UTC

Thanks, Tom. I will use the flush monitoring info procedure.

Stale Statistics on individual partitions.

David, April 04, 2009 - 5:59 pm UTC

Tom, in February, Chinni asked why the options parameter is not available with gather_table_stats, and you said that if you are targeting a single table, you MUST want to gather statistics for it.

I would suggest that there are exceptions to that rule. I have a table with weekly partitions; even if someone implements archiving, it will have at least 6 months of rolling data (this is PeopleSoft Time & Labour). This table is processed by a scheduled nightly batch program, as well as by ad hoc processes on smaller subsets of data.

It is necessary that the statistics are up to date on the partition for the current week when the first batch of the week is run. If the stats show that the partition has no rows when in truth it has lots, the database understandably chooses the wrong execution plan and the batch runs several hours longer.

We cannot rely on the automatic stats job to refresh stale stats at the right time (we have run into this problem during testing). Because I know of no way to gather stale stats on just a named table, my solution has been to write a PL/SQL procedure to flush_database_monitoring_info and then use the information in user_tab_modifications to determine which partitions are stale, and gather stats for those partitions. This procedure is then called from the batch process that processes this table.

My point is that I want to target a single table, but I don't want to collect stats for lots of historical partitions that haven't changed. If the automatic stats job has run and refreshed the statistics, then I'm not going to bother doing the work again.

Or, do you think this approach is flawed?
Tom Kyte
April 07, 2009 - 5:29 am UTC

are you sure you want to use the automatic job at all? Seems you are using your knowledge of the data patterns to more intelligently gather statistics by yourself (which is not a bad idea at all)
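
A minimal sketch of the approach David describes (the table name and the 10% threshold here are assumptions):

begin
    -- make sure user_tab_modifications reflects recent DML
    dbms_stats.flush_database_monitoring_info;

    -- gather stats only for partitions whose tracked DML exceeds 10%
    for p in ( select m.partition_name
                 from user_tab_modifications m, user_tab_partitions tp
                where m.table_name      = 'WEEKLY_T'   -- hypothetical table
                  and tp.table_name     = m.table_name
                  and tp.partition_name = m.partition_name
                  and m.inserts + m.updates + m.deletes >
                      0.10 * nvl( tp.num_rows, 0 ) )
    loop
        dbms_stats.gather_table_stats
        ( ownname     => user,
          tabname     => 'WEEKLY_T',
          partname    => p.partition_name,
          granularity => 'PARTITION' );
    end loop;
end;
/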

monitoring sql on table/partitions

Rhodus, November 27, 2011 - 4:10 am UTC

Hi Tom,
With alter table monitoring we can monitor DML only.
What would be the way to monitor SQL?
We have a large historic partitioned table in the database, and we'd like to see whether anyone is accessing it, to take the opportunity to reduce the history.
Thanks
Tom Kyte
November 28, 2011 - 10:47 am UTC

v$segment_statistics would give you information about what segments are being read/written to.
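
For example (the owner and object names are placeholders):

select object_name, statistic_name, value
  from v$segment_statistics
 where owner = 'SCOTT'
   and object_name = 'BIG_HISTORY'
   and statistic_name in ( 'logical reads', 'physical reads' );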

monitoring sql on table/partitions

Rhodus, November 30, 2011 - 7:51 am UTC

Thanks Tom!
This is great. Once we find out that a segment is being accessed, what would be the best way to find out which process was running selects against the segment?
Tom Kyte
November 30, 2011 - 8:02 am UTC

auditing would do that.
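
A minimal sketch (assumes the AUDIT_TRAIL init parameter is enabled; the names are placeholders):

audit select on scott.big_history by access;

-- later, see who issued selects against it
select username, timestamp, action_name
  from dba_audit_object
 where obj_name = 'BIG_HISTORY';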

Analyze table after Truncate

Brijesh, February 06, 2012 - 8:38 am UTC

Hi Tom,

In the nightly batch process we truncate and load 60+ tables. There are underlying materialized views which get completely refreshed after this.

Do we need to collect stats for the table before refresh? Does truncate drop the stats?

What should be the optimal methodology for performing this type of activity?
Tom Kyte
February 06, 2012 - 10:52 am UTC

ops$tkyte%ORA11GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select num_rows from user_tables where table_name = 'T';

  NUM_ROWS
----------
     72386

ops$tkyte%ORA11GR2> truncate table t;

Table truncated.

ops$tkyte%ORA11GR2> select num_rows from user_tables where table_name = 'T';

  NUM_ROWS
----------
     72386



truncate will not remove the statistics.

You might need to update column stats to set new high/low values for some columns. for example, if you have a date column (or sequence) that is always "growing", you might want to set the new high value - so we know the actual range of values. Otherwise we might underestimate cardinalities - leading to potentially bad plans.

dbms_stats.set_**** functions are good for that.
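
For example, a sketch of pushing up the high value on a growing date column (the column name and date values are made up):

declare
    l_distcnt number;
    l_density number;
    l_nullcnt number;
    l_avgclen number;
    l_srec    dbms_stats.statrec;
begin
    -- read the existing column stats so only the extremes change
    dbms_stats.get_column_stats
    ( ownname => user, tabname => 'T', colname => 'LOAD_DATE',
      distcnt => l_distcnt, density => l_density,
      nullcnt => l_nullcnt, srec    => l_srec, avgclen => l_avgclen );

    -- encode the new low/high values for the column
    l_srec.epc    := 2;
    l_srec.bkvals := null;
    dbms_stats.prepare_column_values
    ( l_srec, dbms_stats.datearray( sysdate - 1, sysdate ) );

    dbms_stats.set_column_stats
    ( ownname => user, tabname => 'T', colname => 'LOAD_DATE',
      distcnt => l_distcnt, density => l_density,
      nullcnt => l_nullcnt, srec    => l_srec, avgclen => l_avgclen );
end;
/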