Difference between the Scheduler job GATHER_STATS_JOB and DBMS_STATS

Question and Answer

Tom Kyte

Thanks for the question.

Asked: July 15, 2006 - 5:18 pm UTC

Last updated: December 08, 2010 - 12:23 pm UTC

Version: 10g

Viewed 10K+ times

You Asked

Oracle 10g recommends gathering statistics using the GATHER_STATS_JOB instead of calling the DBMS_STATS package directly.

At the same time, Oracle mentions that the DBMS_STATS package gathers statistics and populates columns in some DBA_xxx views. Does that mean that the GATHER_STATS_JOB won't populate those DBA_xxx views?

Can you tell me what the exact difference is between GATHER_STATS_JOB and DBMS_STATS?

Oracle mentions that in 10g:

"You no longer need to specify the MONITORING keyword in the CREATE TABLE or ALTER TABLE statement to enable the automatic collecting of statistics for a table. Statistics are now collected automatically as controlled by the STATISTICS_LEVEL parameter."

But when I try to check an index's usage, I still need to use the MONITORING USAGE clause of the ALTER INDEX command. Could you tell me what automatically collected statistics Oracle means?

thanks for your cooperation

and Tom said...

there is no "difference"

the gather stats job is simply a stored procedure someone wrote that calls dbms_stats to gather statistics. It uses many defaults, finds new tables that have never been gathered on and gathers stats on them.

It is just an "automated procedure". Very much like one you would write yourself on top of dbms_stats.



MONITORING USAGE is entirely and utterly different from "table monitoring". Index monitoring tells you if an index was used to ACCESS DATA. Table monitoring tells us when about 10% of the table's data has been modified. They are very different features.
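[Editor's note] To make the distinction concrete, the two features are enabled and inspected quite differently - EMP_IDX below is a placeholder index name:

-- index monitoring: was this index used to access data?
alter index emp_idx monitoring usage;
-- ... let the workload run, then query as the index owner:
select index_name, used, monitoring from v$object_usage;

-- table monitoring: how much data changed since the last gather?
-- (on by default in 10g when statistics_level = TYPICAL)
select table_name, inserts, updates, deletes
  from user_tab_modifications;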


Comments

that is nice, one more thing

Reader, July 19, 2006 - 3:43 pm UTC

does GATHER_STATS_JOB gather statistics on the data dictionary as well, or should gathering dictionary statistics be done manually?

Tom Kyte
July 22, 2006 - 4:18 pm UTC

it does it all.
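[Editor's note] Should you want to gather dictionary statistics by hand anyway, 10g has a dedicated routine for it; a minimal sketch:

begin
   dbms_stats.gather_dictionary_stats;
end;
/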

My GATHER_STATS_JOB does not seem to work

cs, April 13, 2007 - 12:25 pm UTC

Tom,
I just upgraded to 10g from 9.2.0.6.
I found a lot of entries in sys.dba_tab_modifications, but after I run
select PROGRAM_ACTION
from dba_scheduler_programs
where PROGRAM_NAME = 'GATHER_STATS_PROG';
PROGRAM_ACTION
----------------------------------------------
dbms_stats.gather_database_stats_job_proc

And I run
SYS @ blabla
SQL> exec dbms_stats.gather_database_stats_job_proc;
PL/SQL procedure successfully completed.

Those entries in sys.dba_tab_modifications are still there; some were analyzed, but not all of them.

Could you tell me what is going on?



Tom Kyte
April 13, 2007 - 7:08 pm UTC

sure, not everything in dba_tab_modifications IS SUPPOSED TO have new statistics.

We use that table to find tables that need them, but not every row in there represents a table that is in need of statistics.
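[Editor's note] A sketch of the kind of staleness check this implies - comparing recorded modifications against the row count from the last gather. SCOTT is a placeholder schema; ~10% is the documented 10g staleness threshold:

select m.table_owner, m.table_name,
       m.inserts + m.updates + m.deletes as changes,
       t.num_rows,
       round( 100 * ( m.inserts + m.updates + m.deletes ) /
              greatest( t.num_rows, 1 ), 1 ) as pct_changed
  from dba_tab_modifications m, dba_tables t
 where t.owner = m.table_owner
   and t.table_name = m.table_name
   and m.table_owner = 'SCOTT';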

GATHER_DATABASE_STATS_JOB_PROC vs DBMS_STATS.GATHER_TABLE_STATS

Carl Bruneau, May 31, 2007 - 2:41 pm UTC

Hello Tom,

you said that there was no difference between gathering statistics automatically (night batch maintenance window using gather_database_stats_job_proc) and gathering them "manually" using DBMS_STATS package methods?

If this is the case, I don't understand why I ended up with histograms on columns of a table when I gathered statistics "manually" (using dbms_stats.gather_table_stats(user, tabname => 'TABNAME', partname => NULL)), but I don't have these histograms if the statistics are gathered automatically in the maintenance window (using gather_database_stats_job_proc).

Regards.

Carl
Tom Kyte
May 31, 2007 - 3:40 pm UTC

the gather job uses DBMS_STATS

if you call it the same way they call it, the results would be the same.

they use AUTO for most all of the inputs.


the gather job is just a layer that calls DBMS_STATS, in exactly the same way you can call dbms_stats.

As dbms_stats is a procedure with inputs that make it do different things depending on what they are set to....
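[Editor's note] For example, a hedged sketch of calling dbms_stats with AUTO for the major inputs, the way the gather job (roughly) does - TABNAME is a placeholder:

begin
   dbms_stats.gather_table_stats(
      ownname          => user,
      tabname          => 'TABNAME',
      estimate_percent => dbms_stats.auto_sample_size,
      method_opt       => 'for all columns size auto',
      cascade          => true );
end;
/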

METHOD_OPT

Carl Bruneau, June 04, 2007 - 8:30 am UTC

Hello Tom,

When gathering statistics "manually" (using dbms_stats) METHOD_OPT is set at its default value: FOR ALL COLUMNS SIZE AUTO.

If it is the same for the "Gather_Stats_Job", I don't understand why we ended up with differences.

Best regards,

Carl


Tom Kyte
June 05, 2007 - 8:45 am UTC

all I said was "gather stats job just used dbms_stats like you and i do, it just is a wrapper on dbms_stats, it does the same things you or I could do"

Now, why could it result in "different things" being gathered?

One logical reason could be that when you let gather stats job do it, it looked at the columns AT THAT POINT in time and said "hmmm, nope, don't want to gather this that and the other thing", and when you ran dbms_stats yourself, the data had changed enough OR you used some other parameters that were slightly different from the gather job (which does not really document what it passes to dbms_stats) like estimate, the percent to estimate and so on - and dbms_stats decided then "we should get this that and the other statistic"


statistics

Carl Bruneau, June 05, 2007 - 12:58 pm UTC

Thanks Tom,

The data did not change. That is one point of which I am sure.

Since I use exactly this command:

dbms_stats.gather_table_stats(user, tabname => 'TABNAME', partname => NULL)

I don't see what would be different from the default values the gather job is supposed to take.

Or, as you just said, the values for some parameters of the gather job are not the default values and are unknown (undocumented), which can lead to confusing results.

Or it's a bug...

Best regards.

Carl

dba_tab_modifications - Validity of entries

Chris Talbot, June 15, 2007 - 4:47 am UTC

Followup April 13, 2007 - 7pm US/Eastern:
sure, not everything in dba_tab_modifications IS SUPPOSED TO have new statistics.

We use that table to find tables that need them, but not every row in there represents a table that is in need of statistics.


Your response above leaves me somewhat confused, could you clarify exactly what you mean by "We use that table to find tables that need them, but not every row in there represents a table that is in need of statistics."

We have a situation where we too are trying to use all_tab_modifications to do an ad hoc analyze of only the tables needing it. The trouble is this view just doesn't seem consistent in its entries. We have seen examples where partitions show with 0 inserts/updates/deletes! What is the point if the data hasn't changed? I'm confused.

Tom Kyte
June 15, 2007 - 7:47 am UTC

why are you using that - just use dbms_stats with GATHER STALE or LIST STALE; that API provides a list of tables we believe are in need of statistics.
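[Editor's note] A minimal sketch of LIST STALE - asking the database which of your own objects it considers stale (needs serveroutput on to see the list):

set serveroutput on
declare
   l_list dbms_stats.objecttab;
begin
   dbms_stats.gather_schema_stats(
      ownname => user,
      options => 'LIST STALE',
      objlist => l_list );
   for i in 1 .. l_list.count
   loop
      dbms_output.put_line( l_list(i).objname );
   end loop;
end;
/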

all_tab_modifications - continued.....

Chris Talbot, June 15, 2007 - 4:52 am UTC

If this view does indeed show tables that need stats, why is it that some of our weekly subpartitioned and partitioned tables do not show when they clearly need analysis?

We have a program that takes 20 hours to run if we do not analyse the new partitions and 20 minutes when the analysis has been done. The trouble is, they never show in all_tab_modifications!
Tom Kyte
June 15, 2007 - 7:49 am UTC

are your segments monitoring or nomonitoring, you do not say.

I will guess you have not enabled monitoring on those segments - and are in 9i where nomonitoring was the default.

Segment Monitoring

Chris Talbot, June 15, 2007 - 8:25 am UTC

Hi Tom,

We are running 10.2.0.2, my understanding is that as of 10g the keywords were deprecated, I have also checked on all_tables and this shows as monitored YES, from what I can see there is nothing on all_part_tables so assume that this is handled at the table level.

Thanks
Tom Kyte
June 15, 2007 - 11:56 am UTC

then insufficient amounts of the table segments were modified to trigger 'staleness'

that happens easily on large segments (where 10% is huge) and you might not be able to do just stale objects if you have monotonically increasing (dates, sequences, etc) fields and we need to be aware of their new high bound.

Stale is an option, it is not a panacea, it is not for everyone, it works many times, in many cases.

At what time does this GATHER_STATS_JOB get executed?

RajaRajan, June 15, 2007 - 1:46 pm UTC

Hi Tom,
At what time does this GATHER_STATS_JOB get executed, and is there a way that we can make it run only during a maintenance window?


regds
Raj

Tmathew

Mathew Tom, June 15, 2007 - 3:19 pm UTC

Hi Tom,

We have databases which run 24x7 and haven't done any statistics collection for more than 8 months. We have primary and logical standby DBs. Can I schedule statistics collection on the fly on the logical standby and primary at any time?

Thanks,

Mathew


Schedule Definitions

Chris Talbot, July 10, 2007 - 4:52 am UTC

Raj,
See dba_scheduler_jobs etc.; it's documented on Metalink.
Chris

DBA_TAB_MODIFICATIONS and the 10% threshold

Mark Brunskill, July 10, 2007 - 7:03 am UTC

Tom,
Can you clarify what you mean when you said
"Table monitoring tells us when about 10% of the table's data has been modified"?

If the table is partitioned, does the 10% threshold restrict itself to each partition that has data loaded, or to the table as a whole? Is there a setting that influences whether partitions are monitored (if the answer to the above is that it's 10% of the table as a whole)?

Many thanks,
MarkB...
Tom Kyte
July 10, 2007 - 11:59 am UTC

DBA_TAB_MODIFICATIONS and the 10% threshold

Chris Talbot, July 11, 2007 - 2:50 am UTC

Hi Tom,

We have found that even though partitions are monitored they don't always appear in the %_TAB_MODIFICATIONS views. We currently have a TAR, 6361309.992, logged with Oracle which hasn't progressed very far in the last 4 weeks. A bug?
Tom Kyte
July 11, 2007 - 8:56 am UTC

why do you care if the partitions "show" in the modifications view if they show with ZERO rows modified? You don't need to understand the point of having them there, just that they are there - for whatever reason. They are 'accurate', they are not inaccurate.


I could not ascertain from the TAR if you are actually experiencing any issues or if you are just wondering why a row with 0 inserts/updates/deletes is showing up in the modifications view.

(and it did seem they worked the TAR pretty well)

your recommendation,

A reader, July 31, 2007 - 6:03 pm UTC

Do you recommend the gather_stats_job, or using dbms_stats with conditions defined around it that meet our needs?

Our environment is more or less a data warehouse environment where tables receive data every hour in batches. Most of the tables have one or more indexes, and some of them need histograms.

Our priority is the speed of the analyze process (the size of the database is more than 12 TB), plus pretty good statistics on tables, partitions, sub-partitions, and their corresponding indexes, so the reporting queries make good decisions.


Tom Kyte
August 05, 2007 - 9:43 am UTC

I think you'll need a custom approach to statistics, you have very specific needs here, the generic "so you have a pretty small database and don't want to do much work on it" approach (of the default gather job with AUTO as the default parameter to most everything) will not work.


You might not even need to gather statistics many times, say you just did a load of data - you know how many rows you just loaded, how big that is, what the new end points of some data values are (eg: dates, timestamps, sequence values) - you can use the dbms_stats.SET_**** routines to tell the optimizer what they are.

You might be able to get away with just gathering statistics on modified partitions and letting the global statistics get rolled up from there (not always, but often you can). In other cases you can once again use the SET routines where appropriate.
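[Editor's note] A hypothetical sketch of that SET approach after a bulk load, when you already know the new row count and size - the table name and numbers here are made up:

begin
   dbms_stats.set_table_stats(
      ownname => user,
      tabname => 'FACT_SALES',
      numrows => 50000000,
      numblks => 625000 );
end;
/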

set_stats is a cool approach,

A reader, August 06, 2007 - 11:54 am UTC

Thanks for letting me know about the SET_STATS routines. I liked them, and they help us with large tables that are involved in heavy loads.

About your last statement you made
<
You might be able to get away with just gathering statistics on modified partitions and letting the global statistics get rolled up from there (not always, but often you can). In other cases you can once again use the SET routines where appropriate
>

Can you please elaborate on that? I did not understand how to let the global statistics get rolled up.

Thanks,


Tom Kyte
August 06, 2007 - 12:28 pm UTC

they do get rolled up, but they might not be sufficient.

eg:

you have 10 partitions.
local stats on each partition says "column X has 1,000 distinct values"

Now, at the global level, does column X have

a) 1,000 distinct values
b) 10,000 distinct values
c) some number between 1,000 and 10,000

We don't know, but you might - you might be able to set the correct value (if needed, it might not make a bit of difference depending on how you access column X)
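[Editor's note] If you do know the true global number of distinct values, a sketch of setting it with the documented set_column_stats routine - T and X are placeholders, 10000 stands in for your known value:

begin
   dbms_stats.set_column_stats(
      ownname => user,
      tabname => 'T',
      colname => 'X',
      distcnt => 10000 );
end;
/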

follow up on set_stats,

A reader, August 06, 2007 - 12:53 pm UTC

Thanks a lot for clearing my doubt.

If we use the SET_***_STATS procedures, which procedures and parameters do you recommend?

There are a lot of parameters in each of the SET_*** procedures.

Thanks,

Tom Kyte
August 07, 2007 - 9:27 am UTC

the parameters - well - you need to sort of supply them, they are very "personal", you are telling Oracle about YOUR data after all.

You need to tell the optimizer about your data - that is all I can really say.

A reader, November 14, 2007 - 8:29 am UTC

Tom,
Thanks for the above explanations. Just one more question on the same topic. I've read somewhere that in 10g statistics are collected automatically, and a re-analyze is triggered when 20% of the rows change.

So, is there a way this value (20%) can be changed? And is it necessary for a system to have its tables re-analyzed by this new feature in spite of existing jobs scheduled to do so over a definite period of time? What concerns me is the time at which these statistics are gathered - it might be happening during the daytime at peak hours.

Also, I am concerned because, as I understand it, this feature would only analyze tables that have changed by the required number of rows. In that case, many of the tables will never be analyzed.
Please correct me if I am wrong.
Thanks in advance.
Tom Kyte
November 19, 2007 - 5:43 pm UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14211/stats.htm#sthref1461
(10%)

if you want automatic, you use automatic - gather stale does whatever gather stale wants to do.

and the auto gather runs as you would like it to - you can stop it from running, black it out during certain times, whatever.


And, you can ALWAYS gather statistics on anything you WANT to - you don't have to wait for an X% change.

A reader, November 21, 2007 - 6:14 am UTC

Tom, Thanks for the valuable information.

Gather STATS

A reader, November 27, 2007 - 6:34 am UTC

Tom, is there any way to tell Oracle that you need to analyze the tables after an X% change rather than after 10%? This is because some tables may be so big that a 10% change may take a long time to accumulate, and we might not be able to afford to wait that long. Though, of course, we can always analyze them ourselves whenever we want.
Tom Kyte
November 27, 2007 - 3:44 pm UTC

you can do it whenever you like.

if "auto" is not what you need, you have "manual" anytime you like.

you can query the *_TAB_MODIFICATIONS views and the *_TABLES views to determine if you want to use dbms_stats.gather_table_stats on the table or not.

can set_param effect dbms_stats.gather_database_stats_job_proc

amihay gonen, December 17, 2007 - 1:46 pm UTC

As you said, dbms_stats.gather_database_stats_job_proc is only a wrapper around dbms_stats.

So I guess we can use dbms_stats.set_param to influence the defaults used by dbms_stats.gather_database_stats_job_proc.

(And yes, I know that in 11g set_param is obsolete... :)
Tom Kyte
December 17, 2007 - 3:27 pm UTC

If you want control like that, suggestion:

a) don't use the default job
b) create your own

problem solved, without any "mystery or magic for the people that come after you"

the danger with changing the defaults is that many people (self included) will be looking at your data in the future not knowing that, and will waste a lot of time.

This is why you should set AS FEW init.ora parameters as possible
And never change the defaults for something like dbms_stats - OVERRIDE THEM explicitly - sure, but don't change them.
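[Editor's note] That is, override in your own call, where the next person can see it - a sketch with a placeholder table and a deliberately non-default method_opt:

begin
   dbms_stats.gather_table_stats(
      ownname    => user,
      tabname    => 'T',
      method_opt => 'for all indexed columns size 254' );  -- explicit, visible override
end;
/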

gather_database_stats_job_proc running while SQL is being fired

Stan, June 03, 2008 - 11:41 pm UTC

Hi Tom,

Can running SQL while gather_database_stats_job_proc is running have an adverse effect on the SQL query plan?

Or, in other words, say you have the same plan before and after running gather_database_stats_job_proc - could the plan be different during the run?
Tom Kyte
June 04, 2008 - 9:42 am UTC

until the statistics are gathered, you won't 'see' them - so "no", it would not necessarily have an "adverse" effect - you just have to know that the plan generated will only use the statistics that are visible when the plan is generated.


The plans pre and post gathering can and will many times be different - that is the goal of gathering statistics, to get plans to change in general.

how to find window_name for job_name of GATHER_STATS_JOB

Pauline, June 10, 2008 - 5:56 pm UTC

Hi Tom,
In Oracle 10g sys.dba_scheduler_jobs (by default), we see that job_name GATHER_STATS_JOB has schedule_name MAINTENANCE_WINDOW_GROUP and schedule_type WINDOW_GROUP.
What dictionary view can we join to learn that the job GATHER_STATS_JOB belongs to window_name WEEKNIGHT_WINDOW?

Although we see that dba_scheduler_windows and dba_scheduler_wingroup_members have the value WEEKNIGHT_WINDOW, there is no useful column to join with dba_scheduler_jobs to identify the window_name for GATHER_STATS_JOB.


In Oracle 10g OEM there is the same issue. From database targets --> Administration --> Oracle scheduler: clicking Jobs, we can't see the job_name GATHER_STATS_JOB; clicking Schedules, Job_classes, Windows, or Window_groups, we can't find that GATHER_STATS_JOB corresponds to WEEKNIGHT_WINDOW.

What are we missing? please help.

Thanks.

Tom Kyte
June 11, 2008 - 7:49 am UTC

the schedule name can be a window or window group.

that can be used to probe into ALL_SCHEDULER_WINGROUP_MEMBERS

Pauline, June 11, 2008 - 3:09 pm UTC

Tom,
Thanks for the response. I noticed that DBA_SCHEDULER_WINGROUP_MEMBERS (or ALL_SCHEDULER_WINGROUP_MEMBERS) has the column WINDOW_GROUP_NAME, which can join to dba_scheduler_jobs.schedule_name, but it is not enough (I consider it not to answer my question). Let us query:

ORACLE@dev1 > set linesize 120 
ORACLE@dev1 > SELECT job_name,schedule_name,schedule_type,job_class,comments FROM dba_scheduler_jobs
  2     where job_name='GATHER_STATS_JOB';

JOB_NAME
------------------------------
SCHEDULE_NAME
------------------------------------------------------------------------------------------------------------------------
SCHEDULE_TYP JOB_CLASS
------------ ------------------------------
COMMENTS
------------------------------------------------------------------------------------------------------------------------
GATHER_STATS_JOB
MAINTENANCE_WINDOW_GROUP
WINDOW_GROUP AUTO_TASKS_JOB_CLASS
Oracle defined automatic optimizer statistics collection job

ORACLE@dev1 > select window_name,schedule_type,last_start_date,comments  from dba_scheduler_windows;

WINDOW_NAME                    SCHEDULE LAST_START_DATE
------------------------------ -------- ---------------------------------------------------------------------------
COMMENTS
------------------------------------------------------------------------------------------------------------------------
WEEKNIGHT_WINDOW               CALENDAR 10-JUN-08 10.00.00.209546 PM US/EASTERN
Weeknight window for maintenance task

WEEKEND_WINDOW                 CALENDAR 07-JUN-08 06.00.03.017171 AM US/EASTERN
Weekend window for maintenance task

ORACLE@dev1 > SELECT * FROM dba_scheduler_wingroup_members;

WINDOW_GROUP_NAME              WINDOW_NAME
------------------------------ ------------------------------
MAINTENANCE_WINDOW_GROUP       WEEKNIGHT_WINDOW
MAINTENANCE_WINDOW_GROUP       WEEKEND_WINDOW

ORACLE@dev1 > select job_name, schedule_name, window_name
  2    from dba_scheduler_jobs j, dba_scheduler_wingroup_members m
  3    where j.schedule_name = m.window_group_name
  4    and j.job_name = 'GATHER_STATS_JOB';

JOB_NAME
------------------------------
SCHEDULE_NAME
------------------------------------------------------------------------------------------------------------------------
WINDOW_NAME
------------------------------
GATHER_STATS_JOB
MAINTENANCE_WINDOW_GROUP
WEEKEND_WINDOW

GATHER_STATS_JOB
MAINTENANCE_WINDOW_GROUP
WEEKNIGHT_WINDOW

From the last query result, we still can't identify that the job GATHER_STATS_JOB is actually in WEEKNIGHT_WINDOW.

Can you show your query to us?

Thanks.


Tom Kyte
June 11, 2008 - 8:57 pm UTC

...
From the last query result, we still can't identify that the job GATHER_STATS_JOB is actually in WEEKNIGHT_WINDOW.

....

why not? I see it:

JOB_NAME
------------------------------
SCHEDULE_NAME
--------------------------------------------------------------------------------
----------------------------------------
WINDOW_NAME
------------------------------
GATHER_STATS_JOB
MAINTENANCE_WINDOW_GROUP
WEEKEND_WINDOW
<b>
GATHER_STATS_JOB
MAINTENANCE_WINDOW_GROUP
WEEKNIGHT_WINDOW</b>


Pauline, June 12, 2008 - 9:15 pm UTC

Sorry, I thought it would be exclusively in one of the groups, so I tried to uniquely identify which group it belongs to. Now I know it can be in both groups.

Thanks

Raj, September 12, 2008 - 8:15 pm UTC

Hi Tom,

How do you alter the default gather_stats_job that is auto-scheduled on DB creation to apply only to SYS and SYSTEM... or, better yet, to skip a specific schema?

The 3rd party owner of the database instance I'm an admin of wants their schema stats gathered only their way (don't get me started!). I think it would be unwise to risk their method and the default method crossing paths... but I do want the default enabled.

Thank You.
Tom Kyte
September 16, 2008 - 5:23 pm UTC

disable the job itself, and then gather statistics manually using your own job.
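[Editor's note] A sketch of that, assuming the 10g default job name SYS.GATHER_STATS_JOB and a placeholder schema and schedule:

begin
   dbms_scheduler.disable( 'SYS.GATHER_STATS_JOB' );
   dbms_scheduler.create_job(
      job_name        => 'MY_STATS_JOB',
      job_type        => 'PLSQL_BLOCK',
      job_action      => 'begin dbms_stats.gather_schema_stats( ''SCOTT'' ); end;',
      repeat_interval => 'freq=daily;byhour=22',
      enabled         => true );
end;
/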

"External" launch of GATHER_DATABASE_STATS_JOB

Christophe, January 27, 2009 - 4:19 am UTC

Hello Tom,

First of all:
Thank you for all we learn from you.
We are still in January, so I can wish you a Happy New Year.

For monitoring reasons (in our company there are lots of rules and mandatory tools), we wish to launch the statistics refresh from an external tool rather than via the Oracle scheduler.
In the Oracle documentation, I don't find anything about launching gather_database_stats_job externally.
I don't see why we could not. Am I right?

Thank you and best regards,

Tom Kyte
January 28, 2009 - 8:28 am UTC

it is just a stored procedure in dbms_stats, you can call it, sure.

can i change the default values in automatic stats gather job

A reader, April 14, 2009 - 8:39 am UTC

Hi Tom,

We are using the 10g GATHER_STATS_JOB to gather stats in our database. I have observed that it uses degree => NULL by default. Is it possible to change the degree to, say, 4 so that this job completes more quickly?
So the question is: how do we change the default parameter values for GATHER_STATS_JOB, which calls the procedure

GATHER_DATABASE_STATS_JOB_PROC?

Thanks
-A


Tom Kyte
April 14, 2009 - 11:46 am UTC

is it possible to change the
degree to say 4 ...

sure, but....

... so that this job will get completed quickly.
...

it might run slower, it might run faster, it might not change at all.

http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1048566

set the degree parameter default to auto, it'll change it by default for *everyone*

ops$tkyte%ORA11GR1> exec dbms_stats.set_param( 'DEGREE', 'DBMS_STATS.AUTO_DEGREE' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR1> select dbms_stats.get_param( 'DEGREE' ) from dual;

DBMS_STATS.GET_PARAM('DEGREE')
-------------------------------------------------------------------------------
DBMS_STATS.AUTO_DEGREE


Christophe, May 07, 2009 - 11:55 am UTC

Hi Tom,

You previously confirmed that it is possible to launch gather_database_stats_job_proc externally.
I would like to use a shell script as that external way.
In this script I wrote
"
sqlplus -S "/ as sysdba" <<EOSQL > "$FIC_SPOOL"
(...)
execute DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC;
(...)
EOSQL
"
If somebody kills this shell, the execution of DBMS_STATS.GATHER_DATABASE_STATS_JOB_PROC will be immediately stopped.

Can you please tell me how I can trap this interruption, or check that the execution completed OK?

Thank you and best regards,


Tom Kyte
May 11, 2009 - 1:32 pm UTC

if someone kills that shell, it won't stop. Did you try that?

the sqlplus session would just become disconnected from the tty and keep going.

and if you killed the sqlplus session, the client would disappear but the server session would almost certainly keep on trucking - it wouldn't know the client died UNTIL it tried to tell the client the outcome of the dbms_stats call.


and scripting at the OS level is very specific to the shell you choose - no one could tell you how to catch the interrupt here - we don't know your shell (but you should!)

gather_statistics_job

A reader, March 12, 2010 - 9:56 am UTC

Tom,

Is there any way to see the old statistics (how they were before the gather_stats_job started) and to restore them?

We have an issue in 10gR2 where, after gathering statistics, the same queries run with another (worse) execution plan. Is there any way to restore the old statistics?
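[Editor's note] Tom's reply is missing from this transcript, but the follow-up below confirms the route taken: 10g retains statistics history (see the DBA_OPTSTAT_OPERATIONS view mentioned there), and the documented restore routines can roll statistics back. A minimal sketch with a placeholder table name, restoring to a week ago:

begin
   dbms_stats.restore_table_stats(
      ownname         => user,
      tabname         => 'T',
      as_of_timestamp => systimestamp - interval '7' day );
end;
/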

A reader, March 17, 2010 - 8:28 am UTC

Tom,

Thanks for the quick response.

We still have the issue that one day exactly the same query runs with one (good - about 3,000 LIOs/execution) plan and another day with another (bad) execution plan...

My first guess was that it is due to the gather_stats_job running on the database daily. The application does not use binds (OK - you already wrote a lot about not using binds), so I did not think it is a bind peeking issue. The queries from the bad and good periods are exactly the same - even with the same literals.

I used dbms_stats.restore to restore statistics from the "good" time. Then I disabled auto execution of dbms_stats (via the scheduler). Then I flushed the shared pool.

All of this had no effect. The optimizer is still choosing a bad execution plan. I checked last_analyzed in both dba_tables and dba_tab_statistics for the tables referenced in the query - both show 19-FEB-2010 as last_analyzed. But I (successfully!) restored statistics from March 2nd. Is last_analyzed simply the wrong place to check, or were the statistics not restored? DBA_OPTSTAT_OPERATIONS shows me statistics gathered every day until I stopped them yesterday - but I do not see this reflected in last_analyzed.
The Oracle documentation says that I can NOT restore user-defined statistics - what does that mean (which statistics are user-defined?).

Do you have any other idea (beyond bind peeking and statistics) what can cause the Oracle optimizer to choose another execution plan for the same query (we actually have one good week, one bad week, then another two good or bad days) while we do not change anything on the system - now not even gathering statistics...
Tom Kyte
March 17, 2010 - 8:44 am UTC

do you have the "good" and the "bad" plans - not explain plans, but v$sql_plan information - the real plans used at runtime.

A reader, March 19, 2010 - 8:15 am UTC

Tom,

I think I found the reason for the unstable behavior described above (having bad and good periods, where the same query produces tens of thousands or only a few hundred LIOs depending on the day of the week, or simply the week). In my opinion the reason is that the application saves some data treated as a date in the application (attached_sla.time_to_violation) in a column of type NUMBER in the database.
The numbers I see are something like
time_to_violation < 1269057600 AND time_to_violation > 1268971200
In addition, the data in the columns is pretty skewed in its distribution - most values occur only once, but some values (a few of them) occur 30-40 times. So on some specific days we get a huge increase in LIOs - from a normal 5,000 LIOs/sec to 90,000 LIOs/sec in the statspack load profile - and this can last for a week, then we get another "good period".
The execution plan is actually the same for both the bad and the good query:
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 32 (100)|
| 1 | SORT UNIQUE | | 1 | 132 | 31 (4)| 00:00:01
|* 2 | TABLE ACCESS BY INDEX ROWID | CALL_REQ | 1 | 77 | 23 (0)| 00:00:01
| 3 | NESTED LOOPS | | 1 | 132 | 30 (0)| 00:00:01
| 4 | MERGE JOIN CARTESIAN | | 1 | 55 | 7 (0)| 00:00:01
| 5 | TABLE ACCESS BY INDEX ROWID | ATTACHED_SLA | 1 | 21 | 3 (0)| 00:00:01
|* 6 | INDEX RANGE SCAN | ATTACHED_SLA_X1 | 1 | | 2 (0)| 00:00:01
| 7 | BUFFER SORT | | 3 | 102 | 4 (0)| 00:00:01
| 8 | TABLE ACCESS BY INDEX ROWID| GRPMEM | 3 | 102 | 4 (0)| 00:00:01
|* 9 | INDEX RANGE SCAN | GRPMEM_X2 | 3 | | 1 (0)| 00:00:01
|* 10 | INDEX RANGE SCAN | CALL_REQ_X5 | 24 | | 2 (0)| 00:00:01
--------------------------------------------------------------------------------------------------


Actually I saw more than 1000 "sqls" in the v$sql area sharing this plan (they differ only in literals). Depending on the literals, step 5 in the plan brings back 0 or, e.g., 30 rows, and v$sql shows 3,000 or 30,000 buffer_gets.

In a "good" period the optimizer chooses another plan (I see this plan and not the plan above):

---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 237 (100)| |
| 1 | SORT UNIQUE | | 12 | 1584 | 236 (2)| 00:00:03 |
|* 2 | HASH JOIN | | 12 | 1584 | 235 (1)| 00:00:03 |
| 3 | TABLE ACCESS BY INDEX ROWID | CALL_REQ | 37 | 2849 | 23 (0)| 00:00:01 |
| 4 | NESTED LOOPS | | 128 | 14208 | 73 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| GRPMEM | 3 | 102 | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | GRPMEM_X2 | 3 | | 1 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | CALL_REQ_X5 | 24 | | 2 (0)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID | ATTACHED_SLA | 498 | 10458 | 161 (1)| 00:00:02 |
|* 9 | INDEX RANGE SCAN | ATTACHED_SLA_X1 | 498 | | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------
As I cannot change the application :-) - they save dates as numbers, do not use binds, etc. - what can I do to improve Oracle's behavior? I think I saw your article describing the same date/number mismatch, and your suggestion was to use dynamic sampling...
If so, is setting optimizer_dynamic_sampling=4 enough, or should I set it higher? Can you give any other hints for the data/execution plans above? (Again - no chance to change the application...)
Should I gather statistics automatically with dbms_stats, or try to save the statistics from the "good" period?
And as I restore statistics with dbms_stats.restore - where can I see the effect? The last_analyzed column of dba_tables or dba_tab_statistics is not updated when I restore statistics (or collect new statistics automatically with dbms_stats)?

Thanks in advance for all your help

DBMS_STATS and Statistics

Robert, December 08, 2010 - 12:00 pm UTC

Tom,

Is there anywhere we can find (after the fact) the exact statistics parameters that were used to gather statistics for any given segment?

In other words can we tell that on December 8, 2010 statistics were gathered for all indexed columns for EMP using default METHOD_OPT and using BLOCK_SAMPLE=TRUE?

This kind of info could be useful for debugging.

Thanks,

Robert.
Tom Kyte
December 08, 2010 - 12:23 pm UTC

... Is there anywhere we can find (after the fact) the exact statistics parameters
that were used ...

not unless you wrote it down somewhere - we do not record the exact parameters.

You could easily write your own 'dbms_stats' package that records this somewhere and then invokes sys.dbms_stats.
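[Editor's note] A sketch of such a wrapper - every name here is hypothetical, and it records only what you choose to log:

create table stats_gather_log
( run_date   date,
  tabname    varchar2(128),
  method_opt varchar2(4000) );

create or replace procedure my_gather_table_stats
( p_tabname    in varchar2,
  p_method_opt in varchar2 default 'for all columns size auto' )
as
begin
   -- record the parameters we are about to use, then delegate
   insert into stats_gather_log values ( sysdate, p_tabname, p_method_opt );
   sys.dbms_stats.gather_table_stats(
      ownname    => user,
      tabname    => p_tabname,
      method_opt => p_method_opt );
end;
/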
