A reader, September 03, 2008 - 3:13 pm UTC
Thanks for the response.
Database Upgrade
Tim, September 10, 2008 - 10:08 am UTC
How does the autotask architecture affect a database upgrade? For example, if an Oracle 9.2 database was upgraded to Oracle 11.1, the autotask architecture would be entirely new and should be implemented as part of the upgrade. What would happen if an Oracle 10.2 database was upgraded (with the original Scheduler architecture and the GATHER_STATS_JOB implemented in that architecture)?
Thanks.
September 11, 2008 - 10:59 am UTC
it is not entirely "new", it uses the infrastructure of the scheduler.
It does the "right thing" when you upgrade. We detect whether we are upgrading a 9i or a 10g database, and different scripts are run to do the appropriate things.
changing the schedule?
JayMag, May 08, 2009 - 5:13 pm UTC
Hi Tom,
I've been trying to figure out how to change the schedule in 11G for the 'auto optimizer stats collection'. In 10G I would create a new window with DBMS_SCHEDULER.CREATE_WINDOW and then assign the new window to the job:
exec DBMS_SCHEDULER.SET_ATTRIBUTE(name=> 'GATHER_STATS_JOB',attribute => 'SCHEDULE_NAME',value=>'ALL_WEEK_WINDOW');
Reading the documentation on DBMS_AUTO_TASK_ADMIN I cannot seem to figure it out for 11G.
So, how do I change the Automatic Maintenance Task to assign a new window to the 'auto optimizer stats collection' ? (for simplicity let's assume I want to gather stale stats 24/7)
May 11, 2009 - 4:36 pm UTC
define what you mean by "gather stale stats 24/7" - is that "continuously??"
continuously =
JayMag, May 13, 2009 - 12:27 pm UTC
I meant that I need stale statistics gathered throughout the day instead of during the default windows. However, I did not want to edit the default windows since that would affect other tasks.
I believe I found the solution.
First add a newly created window as a window-group-member:
BEGIN
  DBMS_SCHEDULER.ADD_WINDOW_GROUP_MEMBER(
    group_name  => 'MAINTENANCE_WINDOW_GROUP',
    window_list => 'ALL_WEEK_WINDOW');
END;
/
Then DISABLE the window-group-members that are not needed:
BEGIN
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => 'MONDAY_WINDOW');
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => 'TUESDAY_WINDOW');
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => 'WEDNESDAY_WINDOW');
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => 'THURSDAY_WINDOW');
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => 'FRIDAY_WINDOW');
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => 'SATURDAY_WINDOW');
  dbms_auto_task_admin.disable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => 'SUNDAY_WINDOW');
END;
/
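As a sanity check, the per-window state of the client can be confirmed afterwards - a sketch, assuming the 11g autotask views:

```sql
-- Show, per maintenance window, whether the stats client is still enabled
SELECT window_name, optimizer_stats
  FROM dba_autotask_window_clients
 ORDER BY window_name;
```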
May 13, 2009 - 3:32 pm UTC
I seriously, very seriously doubt you really want to gather stale stats 24/7.
What is the reasoning behind that?
stale stats
JayMag, May 15, 2009 - 11:53 am UTC
I don't WANT to gather stale stats all the time.
I have little choice from my perspective.
We have a 3rd party application that creates large tables on the fly and then runs queries against those tables. I cannot change the 3rd party application or how it works (wish I could).
Would I be better off changing the OPTIMIZER_DYNAMIC_SAMPLING parameter? It is currently set to 2, but I have noticed a great deal of hanging on joined queries when the tables either have incorrect statistics (not analyzed after a large load) or have no stats. After a manual analyze the queries fly.
So, am I not configuring something properly or am I approaching this problem the wrong way?
(fyi - I am on 11.1.0.7)
May 15, 2009 - 2:11 pm UTC
why isn't this something you would work on with your application vendor? You are not the only one hitting this issue, I'm sure - have you pinged them to see if they have a process for this already?
the problem with dynamic sampling in your case is that it happens only at hard parse time - so if you hard parse when tables are small, you'll get plans for small tables in the shared pool - and they'll stay there, even when the tables get large.
But, have you talked to the vendor - they may well already have something for you - a suggestion, a switch, a "something"
the problem with gathering stats constantly will be the flushing of dependent sql and subsequent hard parsing you'll be encountering all of the time.
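As an aside, if dynamic sampling is kept, the level can be raised per query with a hint - a sketch only, with a hypothetical table T:

```sql
-- Dynamic sampling runs only at hard parse time; this hint asks for
-- level-4 sampling of the (hypothetical) table T when the cursor is parsed
SELECT /*+ dynamic_sampling(t 4) */ COUNT(*)
  FROM t
 WHERE created > SYSDATE - 1;
```

The plan is still fixed at hard parse, so the caveat above (small-table plans staying in the shared pool) applies regardless of the level.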
More info on your last statement
A reader, May 20, 2009 - 9:42 am UTC
"the problem with gathering stats constantly will be the flushing of dependent sql and subsequent hard parsing you'll be encountering all of the time."
Could you please explain why sql will be flushed when gathering stats?
May 23, 2009 - 12:57 pm UTC
because when you gather statistics, the only way for us to use them is to hard parse
so we invalidate your sql (over time in 10g +, right away in 9i - )
and if we did not - why bother gathering statistics?
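The invalidation behavior is controllable via the no_invalidate parameter of dbms_stats - a sketch, with a hypothetical table T1:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname       => user,
    tabname       => 'T1',                          -- hypothetical table
    -- AUTO_INVALIDATE (the 10g+ default) spreads invalidations over time;
    -- FALSE invalidates dependent cursors immediately (the 9i behavior)
    no_invalidate => DBMS_STATS.AUTO_INVALIDATE);
END;
/
```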
no stats gathered?
JayMag, December 19, 2009 - 11:11 am UTC
Well, the vendor is uninterested (Unica) in implementing a way to analyze their created tables. Their campaign product creates tables on the fly and then queries and subqueries against the data of those created tables.
I also have tables with many rows (8 million) yet no statistics are being gathered on them. They've been sitting around unanalyzed for months.
Am I missing something regarding how oracle determines what tables (those with no statistics) to analyze? We are talking about a table(s) that was created months ago and stats are not locked.
I am going by last_analyzed IS NULL.
(version 11.1.0.7)
December 20, 2009 - 8:43 am UTC
Well, maybe it is time your company started evaluating applications using the same metrics it would use in evaluating a core technology (like a database or operating system...), since applications are becoming today's core technology.
That is, evaluate the product not just on pretty screens, neat graphs and a cool report - but on whether it can actually perform, scale, do the job in your environment.
are you running the default database gather job? If not, you'd need to gather stale statistics yourself if you wanted them gathered.
I AGREE
JayMag, December 21, 2009 - 9:40 am UTC
I agree with your statements. I wish my position in this large company had more clout as to enable this type of change. The best I can do is educate those involved.
I have the default 11G gather stats schedule implemented. Currently I am investigating why it would not be analyzing objects with no statistics and what stale objects it is actually gathering stats for. A generic gather_schema_stats solved the immediate issue.
To:JayMag
A reader, December 21, 2009 - 10:55 am UTC
If you are relying on GATHER STALE to gather stats on newly created tables, you have to manually gather stats on them once to get it going.
SQL> alter session set nls_date_format='dd-MON-yy hh:mi:ss AM';
Session altered.
SQL> drop table t1;
Table dropped.
SQL> create table t1 as select * from all_objects;
Table created.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1
SQL> delete from t1;
36002 rows deleted.
SQL> commit;
Commit complete.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1
SQL> begin
2 DBMS_STATS.GATHER_TABLE_STATS (
3 ownname =>'XYZ',
4 tabname=>'T1',
5 estimate_percent=>10,
6 degree=>1,
7 cascade=>TRUE,
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:44:33 AM
SQL> insert into t1 select * from all_objects;
36002 rows created.
SQL> commit;
Commit complete.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:46:07 AM
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:46:07 AM
SQL> select sysdate from dual;
SYSDATE
---------------------
21-DEC-09 11:48:30 AM
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:46:07 AM
SQL> delete from t1;
36002 rows deleted.
SQL> commit;
Commit complete.
SQL> begin
2 DBMS_STATS.GATHER_SCHEMA_STATS (
3 ownname =>'XYZ',
4 estimate_percent=>10,
5 degree=>1,
6 cascade=>TRUE,
7 options=>'GATHER STALE',
8 no_invalidate=>FALSE);
9
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> select table_name, last_analyzed from user_tables where table_name='T1';
TABLE_NAME LAST_ANALYZED
------------------------------ ---------------------
T1 21-DEC-09 11:49:23 AM
SQL>
11G is not the same as 10G for stats jobs
JayMag, December 23, 2009 - 4:22 pm UTC
Well, it was mentioned earlier in the thread that the 10g statistics "jobs" are not implemented the same way in 11g - they are totally different in how they are scheduled and handled.
I am on 11G (11.1.0.7)
December 31, 2009 - 2:11 pm UTC
JayMag -
thanks for letting us know - but - so? What is your point/question/comment?
It wasn't just mentioned earlier in the thread - the entire thread started with that fact.
I'm not sure what your point was?
Is this way of gathering statistics is advisable
juan carlos reyes, August 13, 2010 - 1:27 pm UTC
Hi tom,
In theory, are these three commands enough to gather all statistics in the database,
and are they the most advisable on 11g?
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
EXEC DBMS_STATS.GATHER_DATABASE_STATS(GATHER_SYS=>TRUE, ESTIMATE_PERCENT=>DBMS_STATS.AUTO_SAMPLE_SIZE, METHOD_OPT=>'FOR ALL COLUMNS SIZE AUTO');
I'm referring especially to the third command, using the auto values. Or do you have some comment, please?
Thank you :)
what's the need?
A reader, October 20, 2010 - 11:32 am UTC
Hi Tom
My question is - what's the logic in separating these 3 jobs (stats gathering, space analysis and SQL Tuning Advisor) to be launched from the auto task framework, while leaving the rest in the scheduler (like baseline stats, HM etc.)?
On metalink it is said "Maintenance operations can potentially use a lot of resource which may, in extreme cases, affect other jobs. To address this, in 11g, maintenance operations are closely linked to resource manager to manage the resources that are used and share them more efficiently"
How does separating these 3 tasks avoid resource consumption when they are finally launched as scheduler jobs anyway?
October 25, 2010 - 3:06 pm UTC
those three jobs are very different - and some people would run one of the three or two of the three or three of the three as they saw fit.
They would typically be done on a very different schedule - perhaps you would gather stats twice a week, look for space to be reclaimed once a month and run sql tuning jobs nightly. For example.
Or maybe you would just gather stats - and never do the other two.
SQL tuning advisor would use an entirely different set of inputs (it would use the output of stats for example, not the same inputs stats use) to figure out what to do to tune (eg: it cannot be combined with gathering stats - it uses the OUTPUT of stats gathering along with other stuff that stats gathering doesn't touch)
And analyzing space utilization is looking at different things than stats gathering does. Stats gathering in general issues queries to look at the data, space analysis is looking at freelists, block utilization and so on - different data.
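The three clients (and their enabled/disabled state) can be seen directly - a sketch against the 11g autotask views:

```sql
-- List the automated maintenance task clients and their status
SELECT client_name, status
  FROM dba_autotask_client;
```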
Why didn't GATHER_STATS_JOB gather stats on my table?
Robert, January 18, 2011 - 11:07 am UTC
Hi Tom,
This is a question about Oracle 10.2.0.4
If the automatic stats collection job doesn't gather stats on my table, is there a way to know what was the criteria it used to determine that the stats were good enough and thus not gather stats?
Thanks,
Robert.
January 23, 2011 - 3:40 pm UTC
look at dba_tab_modifications, if the table data hasn't changed much, the default job doesn't gather again.
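A sketch of that check (the monitoring info is written out only periodically, so flush it first to get current numbers):

```sql
-- Push in-memory DML monitoring counts to the dictionary, then inspect them;
-- the default job regathers once roughly 10% of a table's rows have changed
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_name, inserts, updates, deletes, truncated
  FROM dba_tab_modifications
 WHERE table_owner = user;
```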
Mikhail, October 24, 2011 - 5:38 am UTC
Hi, Tom.
How do you set the number of jobs that AUTOTASK creates for each client? For example, the client "auto optimizer stats collection" uses 5 jobs in SUNDAY_WINDOW and one job in MONDAY_WINDOW. This can be seen in DBA_AUTOTASK_JOB_HISTORY.
Thank you!
High CPU/MEM usage during stats collection.
lalu, January 15, 2012 - 8:27 am UTC
Hello Tom,
We have some 10g and 11g databases. The auto task job for database stats collection runs during the maintenance window, but CPU and memory usage go up during that time and the application suffers.
We have planned to stop the auto task job and collect stats manually - break down the stats gathering and do it on a daily basis on selective tables. My query is:
In 11g, for which database objects do we need to collect stats?
1: Application table stats (for all the tables)
2: System stats (is that for the dictionary objects?)
3: Fixed object stats
Is that all, or do I need to consider any other objects?
Thanks.
January 17, 2012 - 11:44 am UTC
you typically only need to collect fixed table stats after a major parameter change, they stay about the same size in general otherwise.
you only need to gather dictionary stats after the dictionary tables change radically - perhaps after a major application upgrade (lots of DDL)
your application stats are something you need to worry about - not all tables will need all statistics gathered all of the time...
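As a rough sketch of how those three gathers are typically invoked (APP_OWNER is a placeholder schema name):

```sql
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;   -- rarely; e.g. after a major parameter change
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;      -- rarely; e.g. after heavy DDL such as an upgrade
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'APP_OWNER', options => 'GATHER STALE');
```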
auto statistics oracle 11g
lalu, January 18, 2012 - 2:40 am UTC
Hello Tom,
Thanks for your review.
We are thinking of another approach, i.e. keep the auto task job as it is and let it run during its scheduled window.
But we thought of excluding some big tables (sizes in the range of 1TB) from the auto task and gathering stats for those big tables manually.
Is there any better method of avoiding stats gathering for big tables during the auto task?
How can we control auto stats gathering at the table level (which tables to collect for or not)?
Thanks.
January 18, 2012 - 7:38 am UTC
are you sure these tables are even having stats updated - if they are large, it would take a 10% change to trigger a new stats gather against them (the default job is using "gather stale" as an option)
check your last analyzed dates on your segments - see which ones are really getting stats gathered day to day.
the best (and only) way is to gather them manually - yes.
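One common pattern for that: lock the big table's stats so the auto task skips it, then gather manually on your own terms with force => TRUE. A sketch, with placeholder names:

```sql
-- Locked stats are skipped by the default "gather stale" job
EXEC DBMS_STATS.LOCK_TABLE_STATS('APP_OWNER', 'BIG_TABLE');

-- Manual gather, overriding the lock
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'APP_OWNER',
    tabname          => 'BIG_TABLE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    force            => TRUE);
END;
/
```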
auto statistics oracle 11g
A reader, January 18, 2012 - 7:43 am UTC
Yes Tom.
Most of the tables are considered during the auto task,
and it brings down database performance during large-table stats gathering.
So we have planned either to do it manually for the whole database, or to use dbms_stats.lock_table_stats for the large tables, keep the rest of the tables as they are, and gather stats manually with a small sample (1-10%) for the large tables.
January 18, 2012 - 7:47 am UTC
you really change 10% or more of a bunch of 1tb tables every single day? Impressive.
I'd suggest programming your stats gathering after a design analysis of the tables. Even if a table changes 10% (or 50% or even 100%) - it might not need stats again.
If the number of rows stays about the same, if the distribution of values in columns stays stable mostly - you might never need to gather stats. The only thing you might have to do is bump up high values on some columns using SET_COLUMN_STATS (like dates and sequences)
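A sketch of bumping a date column's high value with the get/prepare/set routines (all owner/table/column names are placeholders; check the current values in real code before overwriting them):

```sql
DECLARE
  l_distcnt NUMBER;  l_density NUMBER;
  l_nullcnt NUMBER;  l_avgclen NUMBER;
  l_srec    DBMS_STATS.STATREC;
BEGIN
  -- read the current column stats so we change only the min/max endpoints
  DBMS_STATS.GET_COLUMN_STATS(
    ownname => 'APP_OWNER', tabname => 'T', colname => 'CREATED_DT',
    distcnt => l_distcnt, density => l_density,
    nullcnt => l_nullcnt, srec => l_srec, avgclen => l_avgclen);

  -- rebuild min/max; in real code keep the existing low value
  l_srec.epc    := 2;
  l_srec.bkvals := NULL;
  DBMS_STATS.PREPARE_COLUMN_VALUES(
    l_srec, DBMS_STATS.DATEARRAY(SYSDATE - 365, SYSDATE));

  DBMS_STATS.SET_COLUMN_STATS(
    ownname => 'APP_OWNER', tabname => 'T', colname => 'CREATED_DT',
    distcnt => l_distcnt, density => l_density,
    nullcnt => l_nullcnt, srec => l_srec, avgclen => l_avgclen);
END;
/
```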
Look at stats as part of your program, as part of your application and design how you want to do it based on a design (thought, analysis).
A reader, January 19, 2012 - 7:22 am UTC
Hello Tom,
What are the implications if we disable the auto gather stats job in 11g? This job gathers statistics not only on non-system tables but on system tables as well. Can we restrict this job to gather only system tables?
Thanks in Advance !!
January 19, 2012 - 12:31 pm UTC
the implication is that everything it does will stop being done and you'll have to decide if and when you want to do them yourself.
build and schedule your own job to gather the statistics you want - when you want - including statistics on the data dictionary if you desire.
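A sketch of that (the job name, action and schedule below are illustrative only):

```sql
BEGIN
  -- turn the auto task off entirely (all windows)
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);

  -- schedule your own nightly stale-stats gather instead
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'MY_STATS_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin dbms_stats.gather_database_stats(options => ''GATHER STALE''); end;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/
```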
Gathering fixed object statistics
A reader, June 07, 2012 - 4:00 am UTC
Tom,
We saw a great improvement in some queries against the v$ views used by our monitoring tool after executing (once, during typical load) dbms_stats.gather_fixed_objects_stats.
This is recommended both by Metalink and the Oracle optimizer group.
Metalink says that these gathered statistics will survive an Oracle reboot as well. The problem is that after some time / some reboots the statistics disappear. I cannot explain this behavior - it is just a fact.
SQL> exec dbms_stats.gather_fixed_objects_stats;
....
SQL> select stats_update_time from dba_tab_stats_history where owner like 'SYS' and table_name in ( 'X$KSQRS', 'X$KDNSSF', 'X$KSQEQ', 'X$KTADM', 'X$KTATRFIL', 'X$KTATRFSL');
STATS_UPDATE_TIME
---------------------------------------------------------------------------
07-JUN-12 10.53.16.799227 AM +02:00
07-JUN-12 10.53.16.611530 AM +02:00
07-JUN-12 10.53.24.533421 AM +02:00
07-JUN-12 10.53.24.699773 AM +02:00
07-JUN-12 10.53.24.809293 AM +02:00
07-JUN-12 10.52.29.356066 AM +02:00
I can reboot the instance and the statistics are still there. But within a few weeks they disappear, we have the problem with our monitoring again, and we have to gather fixed object statistics once more.
Can you explain this behavior?
June 07, 2012 - 7:06 am UTC
something or someone is removing them. Statistics are stored in persistent tables, real tables, not in memory. The stats for the fixed objects are in exactly the same place as the stats for SCOTT.EMP. It takes a "delete" statement to remove them.
Perhaps you might want to enable auditing of the dbms_stats package or look at other things done by your monitoring/admin tools you've put together to see if someone is inadvertently removing these.
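A sketch of that auditing approach (assumes standard auditing is enabled via the audit_trail parameter):

```sql
-- Record every execution of the dbms_stats package
AUDIT EXECUTE ON SYS.DBMS_STATS BY ACCESS;

-- Later, see who called it and when
SELECT username, timestamp, action_name
  FROM dba_audit_trail
 WHERE obj_name = 'DBMS_STATS'
 ORDER BY timestamp;
```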
Seems 'auto optimizer stats collection' not running
JonJ, August 16, 2012 - 2:21 am UTC
Hello Tom,
Env: 11.2.0.3
I understand that in 11g the stats are gathered using one-off jobs from the auto task framework.
When I query DBA_AUTOTASK_JOB_HISTORY, it seems they have not been running for a couple of weeks.
There isn't any detail showing whether they are failing or not.
SELECT window_name,job_name, job_status, job_duration,JOB_START_TIME
FROM DBA_AUTOTASK_JOB_HISTORY
WHERE client_name='auto optimizer stats collection'
AND window_start_time >= SYSDATE -30
ORDER BY job_start_time DESC;
WINDOW_NAME JOB_NAME JOB_STATUS JOB_DURATION JOB_START_TIME
------------------------------ ------------------------------ ---------- -------------------- ---------------------------------------------
MONDAY_WINDOW ORA$AT_OS_OPT_SY_1641 SUCCEEDED +000 00:21:32 30-JUL-12 10.00.02.398132 PM CST6CDT
SUNDAY_WINDOW ORA$AT_OS_OPT_SY_1623 SUCCEEDED +000 00:02:47 29-JUL-12 10.01.19.542999 PM CST6CDT
SUNDAY_WINDOW ORA$AT_OS_OPT_SY_1621 SUCCEEDED +000 00:01:13 29-JUL-12 06.01.10.102645 PM CST6CDT
SUNDAY_WINDOW ORA$AT_OS_OPT_SY_1601 SUCCEEDED +000 00:01:32 29-JUL-12 02.01.01.138744 PM CST6CDT
:
select * from (select session_id,LOG_DATE,REQ_START_DATE,ACTUAL_START_DATE,STATUS,RUN_DURATION,ADDITIONAL_INFO from DBA_SCHEDULER_JOB_RUN_DETAILS where JOB_NAME like 'ORA$AT_OS_OPT_SY_%' order by ACTUAL_START_DATE desc ) where rownum<10;
SESSION_ID LOG_DATE REQ_START_ ACTUAL_START_DATE STATUS RUN_DURATION ADDITIONAL_INFO
--------------- ----------------------------------- ---------- --------------------------------------- ---------- --------------- --------------------
140,27067 30-JUL-12 10.21.34.417813 PM -05:00 30-JUL-12 10.00.02.398132 PM CST6CDT SUCCEEDED +000 00:21:32
14,53735 29-JUL-12 10.04.06.844428 PM -05:00 29-JUL-12 10.01.19.542999 PM CST6CDT SUCCEEDED +000 00:02:47
593,53617 29-JUL-12 06.02.23.323713 PM -05:00 29-JUL-12 06.01.10.102645 PM CST6CDT SUCCEEDED +000 00:01:13
718,63519 29-JUL-12 02.02.33.306788 PM -05:00 29-JUL-12 02.01.01.138744 PM CST6CDT SUCCEEDED +000 00:01:32
:
SELECT client_name, status
FROM DBA_AUTOTASK_TASK
WHERE client_name like 'auto optimizer %';
CLIENT_NAME STATUS
----------------------------------- ----------
auto optimizer stats collection ENABLED
It seems to me that the job is not being scheduled; otherwise I should be getting a failed attempt with a FAILED status, shouldn't I?
Performance issue auto stats in 11g
Baiju, October 03, 2012 - 2:56 am UTC
Hello Tom,
We have recently upgraded our QA and DEV environments to 11g. After upgrading, we observed that jobs in one of the schemas have performance issues in this environment. I have disabled auto stats in the 11g databases. Later we exported the 9i stats, imported them into the 11g databases and locked them; after that it is running fine. The problem is that this schema's objects are truncated and reloaded daily. Will locking the stats cause performance issues in the longer run? Which method of stats gathering should I use to avoid this issue? I have tried different methods, but they are causing issues.
11g autostats not happening?
stephan, April 03, 2014 - 7:49 pm UTC
Hi Tom,
I suspect this is something obvious, but I'm far more familiar with 10g than 11g, and not sure what I'm missing here. I just got pinged on a performance problem in one of our environments (11.2.0.3 on Windows x64), and, looking at dba_tables, I see this:
OWNER TRUNC(LAST_ANALY COUNT(*)
------------------------------ ---------------- ----------
LOCATION_ACTIVITY_DB 03-12-2013 00:00 2
LOCATION_ACTIVITY_DB 556
Comparing to the same application in a 10g DB, in the 10g DB, every table has a last_analyzed time.
Looking at the same query you provided in your initial answer, my results are very similar, except no sign that the job has run:
CLIENT_NAME : auto optimizer stats collection
STATUS : ENABLED
CONSUMER_GROUP : ORA$AUTOTASK_STATS_GROUP
CLIENT_TAG : OS
PRIORITY_OVERRIDE : INVALID
ATTRIBUTES : ON BY DEFAULT, VOLATILE, SAFE TO KILL
WINDOW_GROUP : ORA$AT_WGRP_OS
SERVICE_NAME :
RESOURCE_PERCENTAGE : 25
USE_RESOURCE_ESTIMATES : FALSE
MEAN_JOB_DURATION :
MEAN_JOB_CPU :
MEAN_JOB_ATTEMPTS :
MEAN_INCOMING_TASKS_7_DAYS :
MEAN_INCOMING_TASKS_30_DAYS :
TOTAL_CPU_LAST_7_DAYS :
TOTAL_CPU_LAST_30_DAYS :
MAX_DURATION_LAST_7_DAYS :
MAX_DURATION_LAST_30_DAYS :
WINDOW_DURATION_LAST_7_DAYS :
WINDOW_DURATION_LAST_30_DAYS :
-----------------
SQL> select count(*) from dba_scheduler_job_log where
2 additional_info like '%GATHER_STATS_PROG%';
COUNT(*)
----------
0
Is there something obvious I should be looking at or for?
Thanks!