what is "stale"
A reader, March 14, 2002 - 2:55 pm UTC
Hi
What are stale objects?
Can we automate statistics gathering? Will it hit performance hard, though?
March 14, 2002 - 4:13 pm UTC
"stale" objects are objects in need of being analyzed, more then "10%" of the object has changed.
We use "alter table T monitoring" to enable the capture of "stale" tables to the data dictionary.
You can then use dbms_stats to gather stats automagically on just the "stale" objects.
Analyzing does put load on the system -- but consider the performance impact of running with OUT OF DATE stats!
Search this site for
dbms_stats monitoring
for other discussions.
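The workflow described above might look roughly like this (a sketch; the table name T is hypothetical):

```sql
-- Enable modification tracking on the table
alter table t monitoring;

-- ... DML happens over time ...

-- Gather statistics only for objects flagged as stale (>10% changed)
begin
   dbms_stats.gather_schema_stats(
      ownname => user,
      options => 'GATHER STALE'
   );
end;
/
```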
A reader, April 14, 2005 - 9:57 am UTC
I have a similar issue. We gather statistics on a table once the daily load is done. My questions:
1) Does Oracle record every time a row is inserted? (For example, if I load 9 million rows, will it update dba_tab_modifications 9 million times? Does it do so at the same time the row is inserted?)
2) The table was loaded at 4:00 am and updated at 4:02 am. The analyze script ran at 4:03 am but did not analyze the table. When I checked in the morning, a row existed in dba_tab_modifications with a timestamp of 5:30 am. Does that mean this row was inserted/modified only once, at 5:30 am?
April 14, 2005 - 10:01 am UTC
1) that information is cached in the SGA and flushed periodically (it used to be every 3 hours -- it could still be that, but it is periodic, or done when you tell us to)
2) you could use FLUSH_DATABASE_MONITORING_INFO in dbms_stats before gathering stale.
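A sketch of that, assuming you want to flush and then gather stale in one script:

```sql
begin
   -- Push the modification counters cached in the SGA out to the dictionary
   dbms_stats.flush_database_monitoring_info;
   -- 'GATHER STALE' now sees up-to-date counts in *_tab_modifications
   dbms_stats.gather_schema_stats(
      ownname => user,
      options => 'GATHER STALE'
   );
end;
/
```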
A reader, April 14, 2005 - 1:19 pm UTC
Thanks for the info.
Follow-up to comment 1: is there a way to check what it is currently set to?
Thanks
April 14, 2005 - 1:29 pm UTC
do you mean "when it'll flush"? I don't think so, as it'll vary based on the startup time of the instance.
To : A Reader
Vinayak, April 14, 2005 - 11:44 pm UTC
1) Does Oracle record every time a row is inserted? (For example, if I load 9 million rows, will it update dba_tab_modifications 9 million times? Does it do so at the same time the row is inserted?)
- Oracle records every insert, but only in the SGA. It writes this information to dba_tab_modifications periodically: in 8i, it was every 3 hours; in 9i, it is every 15 minutes.
2) The table was loaded at 4:00 am and updated at 4:02 am. The analyze script ran at 4:03 am but did not analyze the table. When I checked in the morning, a row existed in dba_tab_modifications with a timestamp of 5:30 am. Does that mean this row was inserted/modified only once, at 5:30 am?
- I think you are using 8i.
In 9i, when you use dbms_stats.gather_schema_stats with the 'GATHER STALE' option, it automatically flushes the monitoring information, so you do not even need to call flush_database_monitoring_info. Unfortunately, this is not available in 8i.
April 15, 2005 - 8:29 am UTC
1) it is a fast non-latched increment of a counter (meaning some observations can be missed, but that is ok and by design)
Follow-up
Vinayak, April 14, 2005 - 11:52 pm UTC
This is a test I ran on 9.2.0.5 which shows that you do not have to wait for SMON to flush the monitoring information -- GATHER_SCHEMA_STATS with 'GATHER STALE' does that automatically.
vmahajan@ECADEV> select table_name,num_rows from user_tables ;
TABLE_NAME NUM_ROWS
------------------------------ ----------
T1 39
vmahajan@ECADEV> insert into t1 select rownum from all_objects where rownum<20 ;
19 rows created.
vmahajan@ECADEV> commit ;
Commit complete.
vmahajan@ECADEV> exec dbms_stats.gather_schema_stats(ownname=>'VMAHAJAN',options=>'GATHER STALE');
PL/SQL procedure successfully completed.
vmahajan@ECADEV> select table_name,num_rows from user_tables ;
TABLE_NAME NUM_ROWS
------------------------------ ----------
T1 58
A reader, April 15, 2005 - 8:00 am UTC
Hi
I am using Oracle 9.2.0.6.
I use GATHER_TABLE_STATS. There is not a 3-hour delay. The time gap between the first row inserted into the table for which I am gathering stats and the timestamp in dba_tab_modifications for that table is around 1 hour.
To : A Reader
Vinayak, April 15, 2005 - 9:20 am UTC
Since you are using 9.2, you should just use GATHER_SCHEMA_STATS with options=>'GATHER STALE'. That way, you do not have to depend on the information being materialized in dba_tab_modifications -- Oracle does the job for you. It will flush the information from the SGA and then gather statistics for only the stale objects.
Trip the threshold
AC, April 25, 2005 - 5:50 pm UTC
Is there a way to set the threshold to 5% rather than 10% for gather stale?
April 25, 2005 - 6:01 pm UTC
not that I am aware of, no
Gather stale story
Mark, April 25, 2005 - 6:29 pm UTC
We had been running a batch report for over 2 years on a PS HRIS application when it went from a 6-minute to a 27-hour runtime (it broke with an ORA-01555). The fix from PS was the traditional "add more/make bigger rollback segments."
The problem didn't occur on a fresh import to a dev environment. The difference was that fresh statistics were gathered on the import. We reproduced the problem on development by importing the statistics from production.
It turned out that although we had been gathering stale for 2 years, 2 of the tables hadn't been analyzed since June 2004. Once we analyzed the tables, the runtime went back to 6 minutes. The partial set of statistics collected by gather stale was enough to negatively alter the execution plan.
We now export stats so we can import them into development for troubleshooting and run full statistics every weekend to make sure we are only one week stale.
April 25, 2005 - 6:41 pm UTC
that can happen easily on tables where the upper bound of an important column (like a sequence, a date) goes up -- the stats do not cross that threshold -- and the optimizer starts to think "no rows" for some predicates.
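The export/import approach Mark describes can be sketched along these lines (the stat table name STATS_HOLDING and table name T are hypothetical):

```sql
-- In production: create a holding table and export the current statistics
exec dbms_stats.create_stat_table( user, 'STATS_HOLDING' );
exec dbms_stats.export_table_stats( user, 'T', stattab => 'STATS_HOLDING' );

-- Move STATS_HOLDING to development (e.g. via exp/imp), then:
exec dbms_stats.import_table_stats( user, 'T', stattab => 'STATS_HOLDING' );
```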
gather_dictionary_stats and dba_tab_modifications
Sami, August 11, 2005 - 8:40 am UTC
Dear Tom,
I gathered statistics for dictionary segments and fixed objects, however I still see values in dba_tab_modifications as below. As per the definition, only stale objects should be displayed in dba_tab_modifications. Please correct me if I am wrong.
SQL> exec dbms_stats.gather_dictionary_stats;
PL/SQL procedure successfully completed.
SQL> exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
PL/SQL procedure successfully completed.
SQL> select table_owner,table_name,inserts,updates,deletes
  2  from dba_tab_modifications order by inserts desc;
TABLE_OWNER TABLE_NAME INSERTS UPDATES DELETES
------------- ---------------- --------- ---------- ----------
SYS WRH$_LATCH 8352 0 0
SYS WRH$_SYSSTAT 7584 0 0
SYS WRH$_PARAMETER 6192 0 0
SYS WRH$_SQLBIND 5426 0 0
[truncated the output for display purpose]
August 11, 2005 - 5:52 pm UTC
those are real tables, not fixed objects.
That's why I used gather_dictionary_stats
Sami, August 11, 2005 - 11:07 pm UTC
Dear Tom,
Yes, those are real tables -- that's why I did:
SQL> exec dbms_stats.gather_dictionary_stats;
PL/SQL procedure successfully completed.
August 12, 2005 - 8:45 am UTC
sorry, I saw the THING IN CAPS.
but those probably just didn't trigger an analyze (they hadn't been changed enough yet):
ops$tkyte@ORA10GR1> create table t as select * from all_objects where rownum <= 10;
Table created.
ops$tkyte@ORA10GR1> exec dbms_stats.gather_table_stats( user, 'T' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR1> alter table t monitoring;
Table altered.
ops$tkyte@ORA10GR1> insert into t select * from all_objects where rownum = 1;
1 row created.
ops$tkyte@ORA10GR1> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR1> select table_name,inserts,updates,deletes
2 from user_tab_modifications
3 order by inserts desc;
TABLE_NAME INSERTS UPDATES DELETES
------------------------------ ---------- ---------- ----------
T 1 0 0
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> exec dbms_stats.gather_schema_stats( user, options => 'GATHER STALE' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR1> select table_name,inserts,updates,deletes
2 from user_tab_modifications
3 order by inserts desc;
TABLE_NAME INSERTS UPDATES DELETES
------------------------------ ---------- ---------- ----------
T 1 0 0
<b>still there, not analyzed, didn't change enough</b>
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> insert into t select * from all_objects where rownum <= 10;
10 rows created.
ops$tkyte@ORA10GR1> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR1> select table_name,inserts,updates,deletes
2 from user_tab_modifications
3 order by inserts desc;
TABLE_NAME INSERTS UPDATES DELETES
------------------------------ ---------- ---------- ----------
T 11 0 0
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> exec dbms_stats.gather_schema_stats( user, options => 'GATHER STALE' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR1> select table_name,inserts,updates,deletes
2 from user_tab_modifications
3 order by inserts desc;
no rows selected
<b>now gone, did get analyzed, changed enough</b>
What if I use "GATHER AUTO"?
Mike, March 07, 2006 - 3:05 pm UTC
We are on 9.2.0.5 Ent. Edition on Sun boxes.
We have 10 schemas in our warehouse, and all have a mixture of HUMONGOUS partitioned tables (100-350 million rows) AND smaller tables, from 10 rows to millions. All tables in all schemas are set to monitoring.
All schemas have cron jobs set up to collect stats weekly, except for one table with daily partitions that runs nightly.
Eight of the schemas can be analyzed as a whole, e.g.
exec dbms_stats.gather_schema_stats (ownname => 'ECAS', options => 'GATHER AUTO', CASCADE => TRUE);
I always thought that using 'GATHER AUTO' would produce the same result as 'GATHER STALE' in this case. Please set me straight if that is untrue.
Two of the schemas have tables so large that if I try to use the above syntax, Oracle busts a gut and terminates before finishing -- all kinds of PL/SQL errors, among others.
So for those schemas I individually analyze the tables as follows:
For all the tables except the HUMONGOUS ones, I do
exec dbms_stats.gather_table_stats (ownname => 'RECOVERY', TABNAME => 'RECVRY_AGENCY', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, CASCADE => TRUE);
For a HUMONGOUS table, like the following with 200M rows (28G) in 59 partitions, I do
exec dbms_stats.gather_table_stats (ownname => 'RECOVERY', TABNAME => 'MNTHLY_RECVRY1_CNTCT_HIST', ESTIMATE_PERCENT => 10, CASCADE => TRUE);
That table has an evil twin of about the same size, and when that dbms_stats job runs late at night off a cron, it runs about 4 hours doing a 10% sample.
Since there is no 'GATHER STALE' option at the table level (psst, that would be a nice feature), it almost seems like a waste to have the tables in those two schemas set to monitoring. Is that correct?
March 09, 2006 - 11:17 am UTC
describe in more detail "busts a gut and terminates".
these tables are merely "pretty sizable"; they are not by any means really large or humongous.
alter table
jasdeep, October 09, 2006 - 4:41 pm UTC
Does alter table ... monitoring add significant load in an OLTP environment?
October 09, 2006 - 9:22 pm UTC
no, and it is the default (and really only) way in 10g.
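To expand on "on by default" in 10g: table monitoring there is tied to the STATISTICS_LEVEL parameter (enabled when it is TYPICAL or ALL, and TYPICAL is the default) rather than the per-table MONITORING clause. A quick way to check:

```sql
SQL> show parameter statistics_level
```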
alter table
A reader, October 10, 2006 - 12:28 pm UTC
In 9i, does it put load on the system?
October 10, 2006 - 8:09 pm UTC
no, I was just noting that in 10g it is on by default.
tab_modifications can be missed ...?
Andre, April 08, 2013 - 1:29 pm UTC
Hi Tom,
You said:
---------
>>> Followup April 15, 2005 - 8am UTC:
>>> 1) it is a fast non-latched increment of a counter
>>> (meaning some observations can be missed, but
>>> that is ok and by design)
I am rather curious as to how this process compares with Oracle's AUDIT defined "By Access".
Correct me if I am wrong:
whether you issue one update statement that updates 100 rows or you issue 100 statements in a PL/SQL loop to update one row at a time => you will end up with 100 AUDIT records for that table with ACTION# = 6.
The same would be the case with a so-called "Financial-Audit" using a trigger.
So what can we expect from Oracle gathering this kind of info in its SGA that can be then flushed and retrieved by a query of the DBA_TAB_MODIFICATIONS view...?
Thanks
Andre
April 22, 2013 - 2:14 pm UTC
... I am rather curious as to how this process compares with Oracle's AUDIT defined
"By Access". ...
not at all, nothing even remotely similar.
dba_tab_modifications is just trying to keep a rough guesstimate of the percentage of change to a table. Nothing official, very lightweight -- if we miss some, big deal, we don't care.