Question and Answer

Connor McDonald

Thanks for the question, SURESH.

Asked: April 04, 2001 - 10:43 am UTC

Last updated: July 19, 2021 - 5:45 am UTC

Version: 8i

Viewed 100K+ times!

You Asked

Dear Tom,

Is it possible to analyze (compute option) only a partition of a table using the DBMS_STATS package? I want to analyze a table having 83 million rows in parallel. When I use the ANALYZE TABLE command to estimate statistics with a 30 percent sample, it takes 90 minutes. The table is a very big DATA WAREHOUSE table.

I heard that using the ANALYZE command we cannot analyze just a partition. In the Oracle documentation I cannot find the exact syntax for analyzing a partition of a table using DBMS_STATS. Can you provide an example of how to do this?

and Tom said...



tkyte@TKYTE816> analyze table emp partition(p1) compute statistics;

Table analyzed.



tkyte@TKYTE816> select table_name, partition_name, num_rows from user_tab_partitions;

TABLE_NAME                     PARTITION_NAME                   NUM_ROWS
------------------------------ ------------------------------ ----------
EMP                            P2
EMP                            P3
EMP                            P4
EMP                            P1                                      0

that shows just one partition is analyzed. But yes, DBMS_STATS supports this as well. See the "partname" parameter to the gather routines.


tkyte@TKYTE816> exec dbms_stats.gather_table_stats( user, 'EMP', 'P2' );

PL/SQL procedure successfully completed.

tkyte@TKYTE816> select table_name, partition_name, num_rows from user_tab_partitions;

TABLE_NAME                     PARTITION_NAME                   NUM_ROWS
------------------------------ ------------------------------ ----------
EMP                            P2                                      6
EMP                            P3
EMP                            P4
EMP                            P1                                      0


for example.


Comments

Pauline, April 04, 2001 - 5:36 pm UTC


Simple and to the point!

Robert, October 15, 2002 - 12:06 pm UTC


method_opt cascade clause of dbms_stats

Prasad, June 17, 2004 - 11:13 am UTC

Hi tom,

I have a table having 9 partitions. Each partition is ~30 million rows. I have a local BITMAP index (having 3 columns) for each partition and a primary key (1 column) index.
I want to analyze this table (and its indexes) periodically, once every month. I want to use the DBMS_STATS package. What's the best way to go about it?
I'm a little confused by the method_opt and cascade clauses.

The following is not working for me:
dbms_stats.gather_table_stats(OWNNAME=>'ABCD', TABNAME=>'TAB1', PARTNAME=>'YRMNTH_200404', METHOD_OPT=> 'FOR ALL LOCAL INDEXES', DEGREE=> 4, CASCADE=> TRUE);

I'm on Oracle 8.1.7.4

Looking forward to hearing from you.

Thanks,
Prasad


Tom Kyte
June 17, 2004 - 2:06 pm UTC

a bitmap index with 3 columns -- very strange. bitmaps work best one column at a time and would then be ANDed/ORed together as needed.


method_opt is for gathering histograms, just like the for columns clause of ANALYZE.

cascade simply says whether to do indexes on the table as well.


unless you always do partition elimination AND the optimizer knows exactly what partition you are going after in a query, you'll want global statistics on that table.
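For reference, a call that gathers both partition-level and global statistics in one pass (the default granularity) might look like the following sketch; the owner, table, and partition names are simply taken from the example in the question above:

```sql
-- Sketch only -- names are placeholders from the post above.
-- The DEFAULT granularity gathers both global table statistics
-- and statistics for the named partition.
begin
  dbms_stats.gather_table_stats(
    ownname     => 'ABCD',
    tabname     => 'TAB1',
    partname    => 'YRMNTH_200404',
    granularity => 'DEFAULT',   -- global + partition stats
    cascade     => true,        -- gather index statistics as well
    degree      => 4 );
end;
/
```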

DBMS_STATS taking time

prasad, June 17, 2004 - 2:34 pm UTC

Hi tom,

These 3 columns are used frequently in the select query, and they have some 4000 distinct values in 30 million rows -- hence the BITMAP index.

The following is the code I'm using for analyzing the 3 tables. But it's taking a really long time, considering the amount of data.
What I want is ALWAYS to analyze only the LATEST partition, not the entire table.
So when I run the following on my test database it takes a long time.
----------------------------------------------------------
declare
n number;

begin

n := dbms_utility.get_time ;

dbms_stats.gather_table_stats(OWNNAME=>'UHDB', TABNAME=>'ICDS_1', PARTNAME=>'ACCOUNTING_PERIOD_200404', DEGREE=> 4, CASCADE=> TRUE);
dbms_stats.gather_table_stats(OWNNAME=>'UHDB', TABNAME=>'INCA_NIA_1', PARTNAME=>'ID_ACP_YEARNO_200404', DEGREE=> 4, CASCADE=> TRUE);
dbms_stats.gather_table_stats(OWNNAME=>'UHDB', TABNAME=>'INCA_OIA_1', PARTNAME=>'ID_ACP_YEARNO_200404', DEGREE=> 4, CASCADE=> TRUE);

dbms_output.put_line('Time elapsed -> '|| (dbms_utility.get_time-n));

end;
-----------------------------------------------------------

So CASCADE=>TRUE will analyze indexes as well, right ?

The first table's partition has "0" rows. (total rows are 10M)
The second table's partition has 15 million rows. (total rows are 15M)
The third table's partition has "0" rows. (total rows are 1M)

What am I missing here ??


Tom Kyte
June 17, 2004 - 3:09 pm UTC

but as I said, it is more normal to

create index bmidx1 on t(a);
create index bmidx2 on t(b);
create index bmidx3 on t(c);

rather than

create index bmidx on t(a,b,c);


now a query on any of a, b, c, or a,b, or b,c, or a,c, or a, or b, or c would just pick up any of the three indexes and AND or OR them.

So,

where a = 5 and b = 6 or c = 7

would and bmidx1 with bmidx2 and or that with bmidx3 to get the rows. not so with your single index. that was my point there.


unless you ALWAYS exclude all but one partition AND the optimizer knows at query parse time which partition that will be -- you want GLOBAL statistics.

say you query:

where partition_key = :bind_variable

that'll be one partition, but which one? not known, use global stats to optimize.

or

where partition_key between A and B

that'll be zero, one or more partitions -- use global stats to optimize

or

a query with no predicate on partition_key at all

that'll be all partitions -- use global stats...



cascade => true will get the indexes, yes. perhaps you want to ESTIMATE, and perhaps you want to use a higher degree of parallelism (2x the number of CPUs is a good place to start), and you want to make sure you are using as many disks on as many controllers as possible (data spread all over the place)





History Database

Prasad, June 18, 2004 - 5:02 am UTC

Hi Tom,

Thanks for clearing up the INDEX thing. I got your point.
But my tables are history tables. Only 1 partition is active (insert/delete/update); the other 8 partitions will never be queried or modified -- they are there just because of the data retention policy. I optimized all the table partitions (~400 million rows) a few days back and the process took 18 hrs. But now the stats are there for all those 8 partitions for all 3 tables. And as I said, these partitions will never be touched.
So in my case I want to optimize ONLY the partition. Am I right? If I ESTIMATE=>49, will it be good?
I'm using 8 CPUs (LIVE), 4 CPUs (test). I'll change the parallel clause to 16.

Looking forward to hearing from you.

Many Thanks,
Prasad

Tom Kyte
June 18, 2004 - 10:45 am UTC

if those partitions will *never* be touched, why do they even exist as partitions? roll them out into their own archive tables and forget about them.

OPTIMIZE=>ANALYZE

prasad, June 18, 2004 - 6:25 am UTC

Tom,

Sorry for the typo above.
Please read ANALYZE in place of OPTIMIZE in the above message.
Sorry for the inconvenience caused.

Cheers,
Prasad

Puzzling.

A reader, June 18, 2004 - 10:58 am UTC

Tom,

This is what the requirement is. On a request (once or twice a year) the data might be queried. But we have got to have them here in the table as partitions.
---------------------------------
I’m little(??) confused by this Partition analyze thing. I ran the following code to analyze ONLY the mentioned (HIGHEST) partition.

declare
n number;

begin

dbms_output.put_line('Analyze of table INCA_NIA_1 started -> '|| to_char(sysdate, 'DD-MON-YYYY hh24:mi:ss'));
n := dbms_utility.get_time ;

dbms_stats.gather_table_stats(OWNNAME=>'UHDB', TABNAME=>'INCA_NIA_1', PARTNAME=>'ID_ACP_YEARNO_MAXVALUE', DEGREE=> 8, CASCADE=> TRUE, ESTIMATE_PERCENT=>49);

dbms_output.put_line('Analyze of table INCA_NIA_1 Completed -> '|| to_char(sysdate, 'DD-MON-YYYY hh24:mi:ss'));
dbms_output.put_line('Time elapsed -> '|| (dbms_utility.get_time-n));
dbms_output.put_line(' ');

end;

So ideally Oracle should have analyzed ONLY this partition and not the rest of the table. (I guess I'm wrong here, but this is what I used to think.)

The output I got from above code is:

Analyze of table INCA_NIA_1 started -> 18-JUN-2004 09:34:49
Analyze of table INCA_NIA_1 Completed -> 18-JUN-2004 10:36:07
Time elapsed -> 367764

So the analyze process took ~ 1Hr 2 Mins.

Then I fired the query:

select table_name, partition_name, last_analyzed, num_rows, blocks, sample_size
from user_tab_partitions
where last_analyzed is not null
and table_name = 'INCA_NIA_1'

TABLE_NAME PARTITION_NAME LAST_ANALYZED NUM_ROWS BLOCKS SAMPLE_SIZE

INCA_NIA_1 ID_ACP_YEARNO_MAXVALUE 18/06/2004 09:34:49 0 0 {null}

So the partition analyzing was finished in a second, as the script start time & last analyzed time is same (09:34:49). But the script finished at 10:36:07.

So I fired a query:

select table_name, last_analyzed, num_rows, blocks, sample_size
from user_tables
where last_analyzed is not null
and table_name = 'INCA_NIA_1'

TABLE_NAME LAST_ANALYZED NUM_ROWS BLOCKS SAMPLE_SIZE

INCA_NIA_1 18/06/2004 10:36:06 15147671.4285714 246177 7422359

So even if I specified to analyze ONLY the partition, Oracle analyzed the entire table. It scanned the whole 246177 blocks for 1 hr.
Is this how it is supposed to behave?

Please clear my confusion (As usual )

Many thanks,
Prasad


Tom Kyte
June 18, 2004 - 11:11 am UTC

the partition didn't take 1 second, it was just done first.


you might want to read the parameters (all of them) for the dbms_stats package.

granularity defaults to DEFAULT which is "global and partition level stats" (which is what I really think you actually do want to have btw.....)

you could set granularity to partition to skip global stats (which I don't think you really want to do necessarily)
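As a sketch (reusing the names from the example above), the partition-only variant would be along these lines:

```sql
-- granularity => 'PARTITION' gathers statistics for the named
-- partition only and skips the global table statistics.
begin
  dbms_stats.gather_table_stats(
    ownname     => 'UHDB',
    tabname     => 'INCA_NIA_1',
    partname    => 'ID_ACP_YEARNO_MAXVALUE',
    granularity => 'PARTITION',
    cascade     => true );
end;
/
```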

Huge time for GLOBAL stats

Prasad, June 20, 2004 - 1:46 pm UTC

Hi Tom,

Thanks for the reply. I got your point regarding the GLOBAL stats. But if I don't use GRANULARITY=>PARTITION, then the analyze process takes about 18 hrs, so it's clearly not very efficient. And if I do only the partition it takes far less time. Also, I have all the stats for the remaining tables from when I ran the analyze the 1st time.
But as you said, when we modify (insert/update) the LATEST partition, the global stats also need to be refreshed, right?
But I really don't want to spend 18 hrs just analyzing the tables.
So if I analyze ONLY the latest partition after modifying it, do you suggest that Oracle will react "expensively/wrongly" because the global stats are not reflecting the correct info?

Thanks for all your help,

Tom Kyte
June 20, 2004 - 4:15 pm UTC

you are confusing me.

You say "if i don't do granularity=> partition...."

but you are telling me you have always been doing:

dbms_stats.gather_table_stats(OWNNAME=>'UHDB', TABNAME=>'INCA_NIA_1',
PARTNAME=>'ID_ACP_YEARNO_MAXVALUE', DEGREE=> 8, CASCADE=> TRUE,
ESTIMATE_PERCENT=>49);


and further you stated:

So the analyze process took ~ 1Hr 2 Mins.


that is far away from 18 hours.



think about this -- as far as global and partition stats go.......


You have a column X.
In partition one it has 50 distinct values.
In partition two it has 50 distinct values.
(and so on, say you have 10 partitions total).

Now, you know "50" because you gathered stats for each partition.

Tell me, what can you say about the number of distinct values of X globally? You can ONLY say that it has somewhere between 50 and 500 distinct values. It could be that each partition has the numbers X=1..50 in it. Or it could be that partition one has 1..50, two has 51..100 and so on.


Only you can tell us if that is "bad" or not. If that will affect you or not. Think about what information YOU would be missing if I told you the characteristics of a data set -- only 1,000 rows at a time (for a 100,000 row data set). You would be able to say lots about a given 1,000 rows. Not so much about the entire thing.

You could estimate the global stats using a much smaller sample size as well (if you went to 50%, just 1% more, Oracle would have said "oh that's such a waste, we are going to compute, it'll be just as much work"). 49% on an estimate is "not saving much"
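The distinct-value point can be demonstrated with a small hypothetical two-partition table -- each partition reports 50 distinct values of X, while the global figure depends entirely on how the partitions overlap:

```sql
-- Hypothetical demo table: two partitions, deliberately disjoint ranges of x.
create table ndv_demo ( pk int, x int )
partition by range (pk)
( partition p1 values less than (2),
  partition p2 values less than (3) );

insert into ndv_demo select 1, mod(rownum,50)      from all_objects where rownum <= 1000; -- x = 0..49
insert into ndv_demo select 2, mod(rownum,50) + 50 from all_objects where rownum <= 1000; -- x = 50..99
commit;

exec dbms_stats.gather_table_stats( user, 'NDV_DEMO' );

-- Each partition: 50 distinct values of x.  Globally: 100 --
-- something the partition-level numbers alone could never tell you.
select partition_name, num_distinct
  from user_part_col_statistics
 where table_name = 'NDV_DEMO' and column_name = 'X';

select num_distinct
  from user_tab_col_statistics
 where table_name = 'NDV_DEMO' and column_name = 'X';
```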



Parallel Analyze

Yogesh B, July 06, 2004 - 1:09 pm UTC

Is there any way to analyze table using parallel clause ?

Tom Kyte
July 06, 2004 - 2:26 pm UTC

nope

Which dictionary table is correct

Stewart W. Bryson, March 30, 2005 - 3:29 pm UTC

SQL> select last_analyzed from dba_part_col_statistics where table_name='COMBINED_STAY_NEW' and partition_name='CMBSTAY_DATA_JAN_2000';

LAST_ANA
--------


Elapsed: 00:00:00.19


SQL> select distinct last_analyzed from dba_part_col_statistics where table_name='COMBINED_STAY_NEW' and partition_name='CMBSTAY_DATA_JAN_2000';

LAST_ANA
--------
20050330

Elapsed: 00:00:15.03
SQL>  

Tom Kyte
March 30, 2005 - 3:41 pm UTC

that looks like a "bug" please contact support.

Automate Partition Stats

Alexandre Matthes, August 04, 2005 - 10:02 am UTC

Tom,

I've altered a partitioned table to be MONITORING, and created a procedure that runs every day (via an Oracle JOB) in order to calculate STALE partition statistics. The job runs every day with no FAILURES. The procedure is OK, not invalid. But when I check the last_analyzed column in user_tab_partitions it shows me no stats. Any ideas?

create or replace procedure calc_stats
as
alt_sess varchar2(200);
begin
alt_sess:='Alter session set NLS_NUMERIC_CHARACTERS='||chr(39)||'.,'||chr(39);
execute immediate alt_sess;
DBMS_STATS.GATHER_SCHEMA_STATS ( ownname => 'USERNAME', estimate_percent => 8, granularity => 'PARTITION', cascade => TRUE, options => 'GATHER STALE');
end;
/

VARIABLE jobno NUMBER;
BEGIN
DBMS_JOB.SUBMIT(:jobno, 'USERNAME.calc_stats;', SYSDATE, 'SYSDATE + 1');
commit;
END;
/

Thanks in advance,

PS: I'm using this alter session command inside the procedure due to a bug in Oracle Database 9.2.0.4

Tom Kyte
August 04, 2005 - 10:10 am UTC

have you gathered the initial set of stats?
and does the data actually change enough to trigger "i am stale" status of the partition?
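One way to check both points (the table name here is a placeholder) is to look at the accumulated modification counts and at the existing statistics:

```sql
-- Flush the in-memory monitoring counters so the view is current,
-- then see how much change has accumulated per partition.
exec dbms_stats.flush_database_monitoring_info;

select table_name, partition_name, inserts, updates, deletes
  from user_tab_modifications
 where table_name = 'YOUR_TABLE';

-- And confirm an initial set of statistics actually exists --
-- GATHER STALE does nothing for objects with no stats at all.
select partition_name, last_analyzed, num_rows
  from user_tab_partitions
 where table_name = 'YOUR_TABLE';
```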

My mistake

Alexandre Matthes, August 08, 2005 - 3:34 pm UTC

Tom,

Great...

My mistake for not watching those points. I've altered my procedure in order to calculate initial stats on new partitions.

create or replace procedure calc_stats
as
alt_sess varchar2(200);
v_part_name varchar2(14);

cursor c1 is
select partition_name
from user_tab_partitions
where last_analyzed is null;

begin
alt_sess:='Alter session set NLS_NUMERIC_CHARACTERS='||chr(39)||'.,'||chr(39);
execute immediate alt_sess;

open c1;
loop
fetch c1 into v_part_name;
exit when c1%NOTFOUND;
DBMS_STATS.GATHER_TABLE_STATS (ownname => 'USERNAME', tabname => 'TABLE_NAME', partname => v_part_name, estimate_percent => 8, degree => 8, granularity => 'PARTITION', cascade => TRUE);
end loop;
close c1;

DBMS_STATS.GATHER_SCHEMA_STATS ( ownname => 'USERNAME', estimate_percent => 8, granularity => 'PARTITION', cascade => TRUE, options => 'GATHER STALE');
end;
/

Jack, October 05, 2005 - 3:01 am UTC

If I use granularity=>partition and don't collect global stats, will partition stats roll up to global?

Thanks.

Tom Kyte
October 05, 2005 - 7:25 am UTC

ops$tkyte@ORA9IR2> CREATE TABLE t1
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(25)
  6  )
  7  PARTITION BY RANGE (dt)
  8  subpartition by hash(x) subpartitions 8
  9  (
 10    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 12    PARTITION junk VALUES LESS THAN (MAXVALUE)
 13  )
 14  /

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert /*+ APPEND */ into t1
  2  select to_date('12-mar-2003','dd-mon-yyyy')+mod(rownum,4), rownum, rpad('*',25,'*')
  3    from all_objects;

30702 rows created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1', partname => 'part1', granularity => 'PARTITION' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select num_rows from user_tables where table_name = 'T1';

  NUM_ROWS
----------


ops$tkyte@ORA9IR2> select count(*) from user_tab_histograms where table_name = 'T1';

  COUNT(*)
----------
         0

ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1', partname => 'part2', granularity => 'PARTITION' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T1', partname => 'junk', granularity => 'PARTITION' );

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> select num_rows from user_tables where table_name = 'T1';

  NUM_ROWS
----------
     30702

ops$tkyte@ORA9IR2> select count(*) from user_tab_histograms where table_name = 'T1';

  COUNT(*)
----------
         6


Normally, yes.  But remember, for histograms - they could be way off

if you have 10,000 distinct values for X in each partition and you have 100 partitions.

Do you have

10,000 distinct values of X or
1,000,000 distinct values of X

globally... 

Partition Stats Question

Mike, May 03, 2006 - 2:56 pm UTC

Hi Tom:

We are on 9.2.0.5 on Solaris 8, 8G ram, 8cpu.

In response to Prasad earlier, you recommended NOT using granularity => partition. We have a large data warehouse with a table which has daily partitions and 3 months of data. Every week, we drop the partitions over 90 days old and add new ones for the following week. There is a nightly load for the previous day's data (received from an outside source). Once data is loaded into a partition, it never changes. Every night after the load for this table is done, we recompute statistics for the entire table, using a 20% sample, cascade => true. (Otherwise, the next day we will have a number of queries which will resort to full table scans.) However, because this is a big table, that takes about 2 hours to complete. We have other jobs running and this one is robbing resources from those.

We are proposing to do

exec dbms_stats.gather_table_stats (ownname => 'INCENTIVE', TABNAME => 'DAILY_ACCOUNT_HISTORY', partname => 'PART060503', granularity => 'PARTITION', cascade => true);

every night immediately after the load completes. This will finish within a few minutes, and the statistics for the current partition and indexes will be updated.

In a warehouse environment with large partitioned tables, where only the current partition will have new data, must we avoid granularity => partition? I need to better understand why all the partitions would need updated statistics if only the latest one changed.


Tom Kyte
May 03, 2006 - 3:19 pm UTC

... In response to Prasad earlier, you recommended NOT using granularity =>
partition. ...

please point that out - give me a bit of a cut and paste so I can read it in context.

Your original comment

Mike, May 03, 2006 - 3:57 pm UTC

Prasad said....

Tom,

This is what the requirement is. On a request (once-twice an year) the data
might be queried. But we got to have them here in the table as partition.
---------------------------------
I'm little(??) confused by this Partition analyze thing. I ran the following
code to analyze ONLY the mentioned (HIGHEST) partition.

declare
n number;

begin

dbms_output.put_line('Analyze of table INCA_NIA_1 started -> '||
to_char(sysdate, 'DD-MON-YYYY hh24:mi:ss'));
n := dbms_utility.get_time ;

dbms_stats.gather_table_stats(OWNNAME=>'UHDB', TABNAME=>'INCA_NIA_1',
PARTNAME=>'ID_ACP_YEARNO_MAXVALUE', DEGREE=> 8, CASCADE=> TRUE,
ESTIMATE_PERCENT=>49);

dbms_output.put_line('Analyze of table INCA_NIA_1 Completed -> '||
to_char(sysdate, 'DD-MON-YYYY hh24:mi:ss'));
dbms_output.put_line('Time elapsed -> '|| (dbms_utility.get_time-n));
dbms_output.put_line(' ');

end;

So ideally Oracle should have analyzed this ONLY partition and not the remaining
table. (I guess I'm wrong here, but this is what I used to think)

The output I got from above code is:

Analyze of table INCA_NIA_1 started -> 18-JUN-2004 09:34:49
Analyze of table INCA_NIA_1 Completed -> 18-JUN-2004 10:36:07
Time elapsed -> 367764

So the analyze process took ~ 1Hr 2 Mins.

Then I fired the query:

select table_name, partition_name, last_analyzed, num_rows, blocks, sample_size
from user_tab_partitions
where last_analyzed is not null
and table_name = 'INCA_NIA_1'

TABLE_NAME PARTITION_NAME LAST_ANALYZED NUM_ROWS BLOCKS
SAMPLE_SIZE

INCA_NIA_1 ID_ACP_YEARNO_MAXVALUE 18/06/2004 09:34:49 0 0 {null}

So the partition analyzing was finished in a second, as the script start time &
last analyzed time is same (09:34:49). But the script finished at 10:36:07.

So I fired a query:

select table_name, last_analyzed, num_rows, blocks, sample_size
from user_tables
where last_analyzed is not null
and table_name = 'INCA_NIA_1'

TABLE_NAME LAST_ANALYZED NUM_ROWS BLOCKS SAMPLE_SIZE

INCA_NIA_1 18/06/2004 10:36:06 15147671.4285714 246177 7422359

So Even if I mentioned to analyze ONLY the partition, oracle analyzed the entire
table. It scanned the whole 246177 blocks for 1 Hr.
Is this how it is supposed to behave?

Please clear my confusion (As usual )

Many thanks,
Prasad



Followup:
the partition didn't take 1 second, it was just done first.


you might want to read the parameters (all of them) for the dbms_stats package.

granularity defaults to DEFAULT which is "global and partition level stats"
(which is what I really think you actually do want to have btw.....)

you could set granularity to partition to skip global stats (which I don't think
you really want to do necessarily)

*******************************

So, I inferred from that last comment that you do not recommend granularity => partition. Our situation is a bit different than his, we do have people who go back and query based on a past date (usually with an account number), so we cannot archive off "unused" partitions, and we do a lot of partition elimination. I try to keep stats up to date as much as possible to avoid full table scans. With tables like ours, those can be disasters.



Tom Kyte
May 03, 2006 - 4:36 pm UTC

before I read further, I just want you to point out where exactly I said what you say I said - so I can read it in context. I did not see it at a glance.

Regarding the post "Puzzling" and your reply

Mike, May 03, 2006 - 8:03 pm UTC

The topic heading was "puzzling" by "a reader" on June 18, 2004. This was actually a continuation of a thread started by Prasad from UK, but at the bottom, your reply was as I pasted it above. It seemed from your comment that you frown on using granularity => partition when you only want to create statistics ONLY on a new partition and its indexes. I observed the same behavior as Prasad. The new partition was analyzed quickly, but then, using default granularity, the session went off and started to scan all the partitions to update the statistics on the indexes. We do not want that to happen, it takes too long and we do not understand why it would be necessary if the indexes are already analyzed and nothing has changed in those partitions.

Tom Kyte
May 04, 2006 - 2:07 am UTC

I never frowned on using anything - I still don't see it.


I was pointing out that just gathering statistics on a single new partition is in many cases not sufficient. It was a caveat, something you need to be aware of.

Okay, let's see if I got it now........

Mike, May 04, 2006 - 7:51 am UTC

You said to Prasad:

"granularity defaults to DEFAULT which is "global and partition level stats" (which is what I really think you actually do want to have btw.....)

you could set granularity to partition to skip global stats (which I don't think you really want to do necessarily)"

I interpreted that to mean you do not approve of granularity => partition. Maybe I misunderstood.

I'm glad I found this discussion, because even though I have read a lot about dbms_stats and have been using it for quite a while, I did not realize that the granularity "default" behavior with cascade => true would re-analyze all the index partitions in the case where I am only collecting table stats for the newest partition.

Is there any downside to using granularity => partition with cascade => true when doing using dbms_stats for the newest partition in a warehouse table, where all previous partitions (and their associated indexes) have been analyzed and where the data in those earlier partitions never changes?




Tom Kyte
May 04, 2006 - 12:20 pm UTC

is there any downside?

Yes, there are possible downsides, look for this text above.

....
Normally, yes. But remember, for histograms - they could be way off

if you have 10,000 distinct values for X in each partition and you have 100
partitions.

Do you have

10,000 distinct values of X or
1,000,000 distinct values of X

globally...
.................

The dog ate my glasses....

Mike, May 04, 2006 - 2:10 pm UTC

Somehow, my aging eyeballs missed that comment. On the table in question, we have approx 90 partitions at any given time, partitioned on a date with the format MMDDYYYY (this comes from outside the company, there is no time data in it). In each partition, there are approx 1.15 million rows. Since it is daily partitions, we would have exactly 1 distinct value per partition on the partition key, and about 90 for the entire table. If that was the only index, I THINK(???) granularity => partition would be okay. HOWEVER, there is another index on account number in that table. There could be some variation in the number of distinct values per partition on that index. So, if I understood your point, that would be a reason to NOT use granularity => partition.

All of our other partitioned tables are monthly partitions, and all are partitioned on a date column. But, they also have one or more indexes on other columns, and the data could skew somewhat, so for those we would definitely want to redo "global" index stats.

Do I have it now?

(BTW, I noticed a new option for this in 10g, called
cascade => dbms_stats.auto_cascade, which will let Oracle decide if the index stats need updating. I cannot upgrade to 10g yet (grrrrr) but when I finally do, I WILL check that out).

David Aldridge, May 04, 2006 - 2:40 pm UTC

>> I did not realize that the
granularity "default" behavior with cascade => true would re-analyze all the
index partitions in the case where I am only collecting table stats for the
newest partition.

Just to be clear, it isn't the index or table partition stats that are updated in the scenario of default granularity and specifying a partition name, it's the global statistics for the table and index and the named partition's statistics.

That makes perfect sense as a default behaviour -- if you modify a single partition then those are the only stats that you need to update -- albeit that you might choose to update global stats manually from the aggregate partition stats (insert histogram caveat here), or you might want to use a lower estimate percent for the global.

create table t1 (col1 number)
partition by range (col1)
(partition p1 values less than (2),
partition p2 values less than (3),
partition p3 values less than (4))
/
create index i1 on t1 (col1) local
/

insert into t1 values (1);
insert into t1 values (2);

exec dbms_stats.gather_table_stats(user,'t1',granularity => 'default', cascade => true)


begin
dbms_lock.sleep(10);
end;
/


exec dbms_stats.gather_table_stats(user,'t1',partname => 'P3',granularity => 'default', cascade => true)

alter session set nls_date_format = 'HH24:mi:ss'
/

select table_name,partition_name,last_analyzed
from user_tab_partitions where table_name ='T1'
union all
select table_name,null,last_analyzed
from user_tables where table_name ='T1'
union all
select index_name,partition_name,last_analyzed
from user_ind_partitions where index_name ='I1'
union all
select index_name,null,last_analyzed
from user_indexes where index_name ='I1'
/



Collecting statistics on partitioned tables

Paresh, June 12, 2006 - 8:43 am UTC

Tom,

Our application has 200-odd tables, most of which are small or medium sized. However, there are five tables that are really big (half a billion to 3 billion records) and are partitioned on a date column (range partitions for each day). Partitions older than 40 months get dropped. I am looking to build a strategy for collecting statistics on the partitioned tables. Please note that most of the activity is on the partition of the current month and there are very few updates to old partitioned data (0.5%). Also, we have a mixture of local and global hash-partitioned indexes.

Most of the processing is done in batch jobs. However, the system is online for small number of business users.

When the system goes live, these partitioned tables will be populated with huge volume data (half a billion to 3 billion records). Could you tell me the following:

1. What method should be used for collecting statistics first time (before the system goes live) - estimate, compute, each partition or global statistics).

2. After the system goes live, how should we collect the statistics on the partitioned tables?

Many Thanks

Paresh


Tom Kyte
June 12, 2006 - 10:06 am UTC

1) what do you have time for, either compute or a 5-10% estimate.

2) that cannot be answered so simply. some people can just gather partition level, some people cannot. so, do your tables need global statistics? do you know when partition level versus global level statistics would be used? do you have columns that are monotonically increasing, or skewed in funny ways?


Suggest you might want to pick up a copy of Jonathan Lewis' book "cost based oracle" and give it a quick read through to see some of the variables involved.

Statistics collection for partitioned tables

Paresh, June 12, 2006 - 12:05 pm UTC

Thanks Tom for the prompt reply. I have one more query on this. I read a few months ago about a new feature of the 10g database: you can enable it to let Oracle collect statistics on big tables when the change is more than 10%, removing the need to collect statistics manually on a regular basis. Could you tell me what feature that is and how to use it?

Thanks again.

Paresh


Tom Kyte
June 13, 2006 - 10:37 am UTC

that is an old feature - you enable table monitoring (alter table t monitoring) which is the DEFAULT in 10g (you don't need to do that)

and then dbms_stats would be used with the GATHER STALE option.
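In outline, that combination might look like this (the table name is a placeholder; on 10g the MONITORING step is unnecessary since it is the default):

```sql
-- Enable monitoring explicitly (needed on 9i and earlier only).
alter table big_fact monitoring;

-- Then gather statistics only for objects whose change counters
-- have crossed the staleness threshold (roughly 10% modified).
begin
  dbms_stats.gather_schema_stats(
    ownname => user,
    options => 'GATHER STALE',
    cascade => true );
end;
/
```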

Statistics of partition

Dilipkumar Patel, November 18, 2006 - 2:37 pm UTC

Hi Tom,

I have a few FACT tables for a data warehouse. The fact tables have local indexes, so the daily loading process involves the following steps.

FACT1 is RANGE/LIST Partitioned table (Daily range partition)
EXCH_FACT1 is LIST partitioned table

For loading data to FACT1 table for date 18-NOV-2006, Partition PART_20061118 is empty.

1. Truncate EXCH_FACT1 table/Drop indexes from EXCH_FACT1.
2. Load/Insert data to EXCH_FACT1 table
3. Build Indexes on EXCH_FACT1 table
4. Analyze EXCH_FACT1 using following

dbms_stats.gather_table_stats(ownname => 'TAB_OWNER',
tabname => 'EXCH_FACT1',
estimate_percent => 1,
degree => 16, CASCADE => TRUE);

5. Exchange partition of FACT1 table partition PART_20061118 with table EXCH_FACT1

6. Data is loaded to FACT1 table !!!!

The problem is that the stats we gathered for the EXCH_FACT1 table are not getting transferred to the FACT1 table after the exchange partition. As statistics are not available, some queries give performance problems, since the partition/subpartitions do not have statistics.

Once data is loaded to a daily partition, the data will never be changed or appended until the partition is dropped.

Can you suggest an effective way to analyze the daily added partitions?

Thanks

Dilipkumar Patel

Global and partition level

Tony, March 01, 2007 - 2:00 am UTC

Version: 9i Release 2
Tom,
Have a question regarding your comment:
-------------My observation begins-------------------------
I have a partitioned IOT.
If my goal is to have Global and Partition level statistics
gathered every time a new partition is added, then I need to run the following every time a new partition gets added, correct:
execute dbms_stats.gather_table_stats(-
>ownname => 'scott',-
>tabname => 'orders',-
>partname => null,-
>estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,-
>block_sample => false,- --> row sampling rather than block sampling
>method_opt => 'FOR ALL COLUMNS SIZE 1',-
>degree => null,-
>granularity => 'DEFAULT',-
>cascade => true ,-
>stattab => null,-
>statid => null,-
>statown => null);

and that will get the new partition..and it will collect the global..

-------------My observation ends-------------------------

------------------Your Comments--------------------------

"think about this -- as far as global and partition stats go.......


You have a column X.
In partition one it has 50 distinct values.
In partition two it has 50 distinct values.
(and so on, say you have 10 partitions total).

Now, you know "50" because you gathered stats for each partition.

Tell me, what can you say about the number of distinct values of X globally? You can ONLY say that
it has somewhere between 50 and 500 distinct values. It could be that each partition has the
numbers X=1..50 in it. Or it could be that partition one has 1..50, two has 51..100 and so on.


Only you can tell us if that is "bad" or not. If that will affect you or not. Think about what
information YOU would be missing if I told you the characteristics of a data set -- only 1,000 rows
at a time (for a 100,000 row data set). You would be able to say lots about a given 1,000 rows.
Not so much about the entire thing.

You could estimate the global stats using a much smaller sample size as well (if you went to 50%,
just 1% more, Oracle would have said "oh that's such a waste, we are going to compute, it'll be
just as much work"). 49% on an estimate is "not saving much"
"
Tom Kyte
March 02, 2007 - 11:11 am UTC

well, you could gather stats on just the new partition and then global stats.

you don't need to re-gather for all partitions

and default as a granularity is deprecated these days...
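
A sketch of what that two-step approach could look like in 9iR2 (the partition name is hypothetical; in 10.2 and later the single granularity value 'GLOBAL AND PARTITION' combines both calls):

```sql
-- 1) local stats for the newly added partition only
begin
   dbms_stats.gather_table_stats(
      ownname     => 'scott',
      tabname     => 'orders',
      partname    => 'NEW_PART',         -- hypothetical new partition
      granularity => 'PARTITION',
      cascade     => true );
end;
/
-- 2) refresh the table-level (global) statistics
begin
   dbms_stats.gather_table_stats(
      ownname     => 'scott',
      tabname     => 'orders',
      granularity => 'GLOBAL',
      cascade     => true );
end;
/
```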

Global and Partition Stats work.

Oxnard, August 17, 2008 - 11:04 am UTC

ver 10.2.0.3

It seems in my case that, depending on the query, sometimes the global stats are helpful and sometimes not. Many different queries come into the db; without committing mass murder it is impossible for me or others on the team to control them. We have 24 partitions, the typical date range; however, it is possible for data in the old partitions to change. Yes indeed, we change history :) a long and painful story.

We can determine all partitions which have data changes.

I have done some testing with dbms_stats and found how to do (or not do) global stats.

Based on the many different SQL statements being sent to the db it seems Global stats are required.

For example last week I had to use the dynamic sampling to get a query to run since in this case the global stats had not been done in a long while and the global stats were leading the optimizer astray. In this case it was an aggregation query against all partitions.

My question is: how much more work is it for the DB to obtain the partition stats if it is already required to do the global stats? It seems that to generate the global stats each partition must be accessed, so how much more work can it be?




Tom Kyte
August 20, 2008 - 9:15 am UTC

if you gather global stats, it reads the entire table as a table - from start to finish, all partitions.

So, with global stats for example, it can come up with "we have 100 distinct values of column X"

But, it cannot say "and there are 10 distinct values in part1, 40 distinct values in part2, 100 distinct in part3, 1 distinct in part4...." from the global stats - so to get accurate local statistics, you gather them on the local partition - reading it from start to finish.

Shil, February 23, 2009 - 9:34 am UTC

Hi - One of our applications is being rewritten right now. This application populates each day's worth of data in a new partition, and this current partition is the only one that changes. The other partitions are static. Is it enough if we just calculate statistics for the current partition every day? Since the partition changes dynamically, how can we find out which is the current partition?
Tom Kyte
February 23, 2009 - 5:33 pm UTC

http://optimizermagic.blogspot.com/2009/02/maintaining-statistics-on-large.html


you should 'know' the current partition - but *_tab_partitions has a position, I presume you always load into the highest partition...
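
For example, a version-safe query to pick the highest-position partition (the table name 'T' is a placeholder):

```sql
-- find the "current" (highest-position) partition of a table
select partition_name
  from user_tab_partitions
 where table_name = 'T'
   and partition_position =
       ( select max(partition_position)
           from user_tab_partitions
          where table_name = 'T' );
```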

analyze partition Vs analyze table

Sita, May 06, 2009 - 7:49 pm UTC

Tom:

I am using the following command to analyze the partitioned table, analyzing each partition in parallel.
Will this gather stats on the table also?
What's the difference between analyzing each partition vs. analyzing at the table level?
Please clarify my question.

exec dbms_stats.gather_table_stats(user, tabname => 'MNTHLY_RSRVS', partname => 'MR_200906', estimate_percent => 15, degree => dbms_stats.default_degree, cascade => DBMS_STATS.AUTO_CASCADE ) ;
Tom Kyte
May 11, 2009 - 11:22 am UTC

since you left granularity alone, you gathered global statistics AND local statistics for a single partition.



ops$tkyte%ORA10GR2> create or replace view stats_view
  2  as
  3  select partition_name, num_rows, to_char(LAST_ANALYZED,'hh24:mi:ss') analyzed, sample_size
  4    from user_tab_partitions
  5   where table_name = 'T'
  6  union all
  7  select 'T', num_rows, to_char(LAST_ANALYZED,'hh24:mi:ss') analyzed, sample_size
  8    from user_tables
  9   where table_name = 'T';

View created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE TABLE t
  2  (
  3    x   int primary key,
  4    dt  date,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('01-jan-2008','dd-mon-yyyy')) ,
 10    PARTITION part2 VALUES LESS THAN (to_date('01-jan-2009','dd-mon-yyyy'))
 11  )
 12  /

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t (x,dt,y)
  2  select rownum, to_date( '01-jan-2007', 'dd-mon-yyyy' )+mod(rownum,365*2), object_name
  3    from all_objects
  4  /

49867 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select 'part1', count(*) from t partition(part1)
  2  union all
  3  select 'part2', count(*) from t partition(part2)
  4  /

'PART   COUNT(*)
----- ----------
part1      25047
part2      24820

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from stats_view;

PARTITION_NAME                   NUM_ROWS ANALYZED SAMPLE_SIZE
------------------------------ ---------- -------- -----------
PART1
PART2
T

ops$tkyte%ORA10GR2> begin
  2          dbms_stats.gather_table_stats
  3          ( user, 'T', granularity => '&1', partname => 'PART1', estimate_percent=>15 );
  4  end;
  5  /
old   3:        ( user, 'T', granularity => '&1', partname => 'PART1', estimate_percent=>15 );
new   3:        ( user, 'T', granularity => 'AUTO', partname => 'PART1', estimate_percent=>15 );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> pause

ops$tkyte%ORA10GR2> select * from stats_view;

PARTITION_NAME                   NUM_ROWS ANALYZED SAMPLE_SIZE
------------------------------ ---------- -------- -----------
PART1                               25113 10:59:41        3767
PART2
T                                   48713 10:59:41        7307

<b>see how we have TABLE and a single PARTITION's statistics....</b>

ops$tkyte%ORA10GR2> begin
  2          dbms_stats.gather_table_stats
  3          ( user, 'T', granularity => '&1', partname => 'PART2', estimate_percent=>15 );
  4  end;
  5  /
old   3:        ( user, 'T', granularity => '&1', partname => 'PART2', estimate_percent=>15 );
new   3:        ( user, 'T', granularity => 'AUTO', partname => 'PART2', estimate_percent=>15 );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from stats_view;

PARTITION_NAME                   NUM_ROWS ANALYZED SAMPLE_SIZE
------------------------------ ---------- -------- -----------
PART1                               25113 10:59:41        3767
PART2                               24667 10:59:49        3700
T                                   49947 10:59:49        7492

<b>and now the table stats have been updated and the other partition gathered...</b>





Now, if the goal is just to have partition statistics gathered...


ops$tkyte%ORA10GR2> CREATE TABLE t
  2  (
  3    x   int primary key,
  4    dt  date,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('01-jan-2008','dd-mon-yyyy')) ,
 10    PARTITION part2 VALUES LESS THAN (to_date('01-jan-2009','dd-mon-yyyy'))
 11  )
 12  /

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t (x,dt,y)
  2  select rownum, to_date( '01-jan-2007', 'dd-mon-yyyy' )+mod(rownum,365*2), object_name
  3    from all_objects
  4  /

49871 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select 'part1', count(*) from t partition(part1)
  2  union all
  3  select 'part2', count(*) from t partition(part2)
  4  /

'PART   COUNT(*)
----- ----------
part1      25051
part2      24820

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from stats_view;

PARTITION_NAME                   NUM_ROWS ANALYZED SAMPLE_SIZE
------------------------------ ---------- -------- -----------
PART1
PART2
T

ops$tkyte%ORA10GR2> begin
  2          dbms_stats.gather_table_stats
  3          ( user, 'T', granularity => '&1', partname => 'PART1', estimate_percent=>15 );
  4  end;
  5  /
old   3:        ( user, 'T', granularity => '&1', partname => 'PART1', estimate_percent=>15 );
new   3:        ( user, 'T', granularity => 'PARTITION', partname => 'PART1', estimate_percent=>15 );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from stats_view;

PARTITION_NAME                   NUM_ROWS ANALYZED SAMPLE_SIZE
------------------------------ ---------- -------- -----------
PART1                               24953 11:01:24        3743
PART2
T

<b>note the lack of global table stats this time....</b>

ops$tkyte%ORA10GR2> pause

ops$tkyte%ORA10GR2> begin
  2          dbms_stats.gather_table_stats
  3          ( user, 'T', granularity => '&1', partname => 'PART2', estimate_percent=>15 );
  4  end;
  5  /
old   3:        ( user, 'T', granularity => '&1', partname => 'PART2', estimate_percent=>15 );
new   3:        ( user, 'T', granularity => 'PARTITION', partname => 'PART2', estimate_percent=>15 );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from stats_view;

PARTITION_NAME                   NUM_ROWS ANALYZED SAMPLE_SIZE
------------------------------ ---------- -------- -----------
PART1                               24953 11:01:24        3743
PART2                               24507 11:01:27        3676
T                                   49460 11:01:28           0

<b>now that all partitions are gathered from - we *appear* to have table statistics - but note the sample size and the fact that the num_rows is exactly the sum of the partition num_rows - we've "inferred" these table stats...</b>





read this:

http://optimizermagic.blogspot.com/2009/02/maintaining-statistics-on-large.html

Continuation of above post

Sita, May 06, 2009 - 8:32 pm UTC

I am on 10.2.0.4

From the documentation, 'AUTO' is the default for GRANULARITY in 10gR2.

Will this gather global/partition/subpartition statistics?

granularity
Granularity of statistics to collect (only pertinent if the table is partitioned).

'ALL' - gathers all (subpartition, partition, and global) statistics

'AUTO'- determines the granularity based on the partitioning type. This is the default value.


Tom Kyte
May 11, 2009 - 11:23 am UTC

if you want to specifically gather certain types of statistics (or do anything specifically) never rely on defaults - they change over time.

AUTO currently gathers global and local.
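
Following that advice, a sketch that spells out the granularity explicitly instead of relying on the default (the table name is hypothetical):

```sql
-- request every level of statistics explicitly, not via the AUTO default
begin
   dbms_stats.gather_table_stats(
      ownname     => user,
      tabname     => 'FACT1',            -- hypothetical table name
      granularity => 'ALL',              -- subpartition + partition + global
      cascade     => true );
end;
/
```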

Global stats on Partitioned index

Dan, April 13, 2011 - 11:19 am UTC

Hi Tom:

Oracle Version 10.2.0.4

I am noticing that the global stats on a partitioned index show num_rows as 0, but all of its local index partitions have stats (num_rows > 0).

I would like to know why global stats are not gathered for the index when all of its index partitions are.

Please advise.

The table has global stats, and its partitions have stats as well.

As part of the ETL process, we load into a temp table, gather stats on the temp table, and then exchange the partition.

We gather stats on the table using the command below:

exec dbms_stats.gather_table_stats('RIMS','TRDPS_CED_XOL_PRCS',estimate_percent => 100,method_opt => 'for all indexed columns size auto',degree => 32, cascade => True);



=== Stats shown ============
SQL> SQL> select table_name, partitioned, num_rows, last_analyzed, sample_size from user_tables where table_name='TPRCS_RIMSID_DOC_RELTN'
  2  /

TABLE_NAME                     PAR   NUM_ROWS LAST_ANAL SAMPLE_SIZE
------------------------------ --- ---------- --------- -----------
TPRCS_RIMSID_DOC_RELTN         YES 1105543388 05-FEB-11   276385847

Elapsed: 00:00:00.07
SQL> select table_name, partition_name, num_rows, last_analyzed from user_tab_partitions where table_name='TPRCS_RIMSID_DOC_RELTN'
  2  /

TABLE_NAME                     PARTITION_NAME                   NUM_ROWS LAST_ANAL
------------------------------ ------------------------------ ---------- ---------
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN1                 0 30-SEP-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN2          83064281 30-SEP-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN13        104793661 29-JAN-11
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN15          8170225 07-APR-11
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN14         91236311 21-MAR-11
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN3          83296502 01-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN4          91883572 02-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN5          98252700 02-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN6          89563221 03-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN7          92457306 03-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN8          92762407 04-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN9          91440758 04-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN10         91888994 05-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN11         98462967 05-OCT-10
TPRCS_RIMSID_DOC_RELTN         PPRCS_RIMSID_DOC_RELTN12         87625612 06-OCT-10

15 rows selected.

Elapsed: 00:00:00.06
SQL> select index_name, num_rows, last_analyzed, sample_size from user_indexes where index_name='TPRCS_RIMSID_DOC_RELTN_PK' ;

INDEX_NAME                       NUM_ROWS LAST_ANAL SAMPLE_SIZE
------------------------------ ---------- --------- -----------
TPRCS_RIMSID_DOC_RELTN_PK               0 30-SEP-10           0

Elapsed: 00:00:00.03
SQL> select  index_name, partition_name, num_rows, last_analyzed , sample_size from user_ind_partitions where index_name='TPRCS_RIMSID_DOC_RELTN_PK' ;

INDEX_NAME                     PARTITION_NAME                   NUM_ROWS LAST_ANAL SAMPLE_SIZE
------------------------------ ------------------------------ ---------- --------- -----------
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN5          98252700 02-OCT-10    98252700
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN1                 0 30-SEP-10
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN2          83064281 30-SEP-10    83064281
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN8          92762407 04-OCT-10    92762407
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN6          89563221 03-OCT-10    89563221
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN7          92457306 03-OCT-10    92457306
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN3          83296502 01-OCT-10    83296502
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN4          87791416 02-OCT-10      538308
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN9          91440758 04-OCT-10    91440758
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN10         91888994 05-OCT-10    91888994
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN11         98462967 05-OCT-10    98462967
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN12         87625612 06-OCT-10    87625612
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN14         86600137 22-MAR-11      536201
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN13        104793661 29-JAN-11   104793661
TPRCS_RIMSID_DOC_RELTN_PK      PPRCS_RIMSID_DOC_RELTN15          8170225 07-APR-11     8170225

15 rows selected.

Elapsed: 00:00:00.06
SQL> 



Global stats on Partitioned index

Dan, April 19, 2011 - 8:04 pm UTC

Hi Tom:

Can you please update when you get a chance for the above question.
Tom Kyte
April 19, 2011 - 8:10 pm UTC

I'd need to know the steps to reproduce; under normal circumstances they would be gathered. What I'd need is a create table (with, say, TWO partitions), some gather stats, load(s), gathers etc. - to reproduce the issue.
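
(In the meantime, one possible workaround, strictly a sketch: if only the index's global statistics are missing, they can be gathered directly on the index. The owner and index name are taken from the posting above.)

```sql
-- gather just the global (index-level) statistics for the index
begin
   dbms_stats.gather_index_stats(
      ownname          => 'RIMS',
      indname          => 'TPRCS_RIMSID_DOC_RELTN_PK',
      granularity      => 'GLOBAL',
      estimate_percent => dbms_stats.auto_sample_size );
end;
/
```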

What's the difference between SQL Analyze(0) and SQL Analyze(1)

Manas, July 16, 2021 - 5:05 pm UTC

What's the difference between SQL Analyze(0) and SQL Analyze(1)? In fact, I see a lot of different numbers.

So how can I find out what the difference between these is? It seems to me that one of them is a scheduled job and the other is run manually for dbms_stats, but I need these details listed somewhere.
Connor McDonald
July 19, 2021 - 5:45 am UTC

Some context would assist
