Question and Answer

Tom Kyte

Thanks for the question, Suresh.

Asked: July 07, 2002 - 8:43 pm UTC

Last updated: September 15, 2007 - 9:12 pm UTC

Version: 9i

You Asked

Tom,
I created a table and tried to use monitoring and the GATHER STALE option on it, as shown below. Everything worked fine, except that the last_analyzed column in user_tables does not get updated when I run DBMS_STATS. Why is it not getting updated? Also, I can see from user_tab_modifications how many rows were inserted, updated, and deleted in a table that has monitoring turned on. Can I tell from any dictionary table that only those modified rows get analyzed when we run the DBMS_STATS GATHER STALE option, and not the other rows that were already there without any modifications?

SQL> create table emp as select * from scott.emp;
Table created.

SQL> select table_name, last_analyzed, monitoring from user_tables where table_name = 'EMP';

TABLE_NAME                     LAST_ANAL MON
------------------------------ --------- ---
EMP                                      NO

SQL> begin
dbms_stats.gather_schema_stats ('test', options => 'gather stale');
end;
/
PL/SQL procedure successfully completed.

SQL> select table_name, last_analyzed, monitoring from user_tables where table_name = 'EMP';
TABLE_NAME                     LAST_ANAL MON
------------------------------ --------- ---
EMP                                      NO

SQL> alter table emp monitoring;
Table altered.

SQL> select table_name, last_analyzed, monitoring from user_tables where table_name = 'EMP';

TABLE_NAME                     LAST_ANAL MON
------------------------------ --------- ---
EMP                                      YES

SQL> insert into emp(empno, ename) select rownum, 'abc' from all_objects where rownum < 1000;
999 rows created.

SQL> commit;
Commit complete.

SQL> begin
dbms_stats.gather_schema_stats ('test', options => 'gather stale');
end;
/
PL/SQL procedure successfully completed.

SQL> select table_name, last_analyzed, monitoring from user_tables where table_name = 'EMP';
TABLE_NAME                     LAST_ANAL MON
------------------------------ --------- ---
EMP                                      YES

SQL> select table_name, inserts, deletes, updates from user_tab_modifications;

TABLE_NAME                        INSERTS    DELETES    UPDATES
------------------------------ ---------- ---------- ----------
EMP                                   999          0          0

Thanks,
Suresh

and Tom said...

answering in reverse.

A table is analyzed -- not just "new" or "modified" rows. A TABLE is analyzed.


As for the results above -- you did not have the table analyzed in the first place, hence GATHER STALE is not able to tell when the table is in fact stale (are 999 new rows 10% of the table? I don't know -- do you???)

Anyway, you need to analyze once to "prime the pump" if you will. Consider:

ops$tkyte@ORA9I.WORLD> drop table emp;

Table dropped.

ops$tkyte@ORA9I.WORLD>
ops$tkyte@ORA9I.WORLD> create table emp as select * from scott.emp;

Table created.

ops$tkyte@ORA9I.WORLD>
ops$tkyte@ORA9I.WORLD> select table_name, to_char(last_analyzed,'dd-mon hh24:mi:ss'), monitoring from user_tables where table_name = 'EMP';

TABLE_NAME                     TO_CHAR(LAST_AN MON
------------------------------ --------------- ---
EMP                                            NO

ops$tkyte@ORA9I.WORLD> exec dbms_stats.gather_schema_stats (user, options => 'gather stale');

PL/SQL procedure successfully completed.

ops$tkyte@ORA9I.WORLD> select table_name, to_char(last_analyzed,'dd-mon hh24:mi:ss'), monitoring from user_tables where table_name = 'EMP';

TABLE_NAME                     TO_CHAR(LAST_AN MON
------------------------------ --------------- ---
EMP                                            NO

ops$tkyte@ORA9I.WORLD> alter table emp monitoring;

Table altered.

ops$tkyte@ORA9I.WORLD> insert into emp(empno,ename) select rownum, 'abc' from all_objects where rownum < 1000;

999 rows created.

ops$tkyte@ORA9I.WORLD> commit;

Commit complete.

ops$tkyte@ORA9I.WORLD> exec dbms_stats.gather_schema_stats (user, options => 'gather stale');

PL/SQL procedure successfully completed.

ops$tkyte@ORA9I.WORLD> select table_name, to_char(last_analyzed,'dd-mon hh24:mi:ss'), monitoring from user_tables where table_name = 'EMP';

TABLE_NAME                     TO_CHAR(LAST_AN MON
------------------------------ --------------- ---
EMP                                            YES

ops$tkyte@ORA9I.WORLD> analyze table emp compute statistics;

Table analyzed.

ops$tkyte@ORA9I.WORLD> select table_name, to_char(last_analyzed,'dd-mon hh24:mi:ss'), monitoring from user_tables where table_name = 'EMP';

TABLE_NAME                     TO_CHAR(LAST_AN MON
------------------------------ --------------- ---
EMP                            07-jul 21:03:10 YES

ops$tkyte@ORA9I.WORLD> insert into emp(empno,ename) select rownum, 'abc' from all_objects where rownum < 1000;

999 rows created.

ops$tkyte@ORA9I.WORLD> commit;

Commit complete.

ops$tkyte@ORA9I.WORLD> exec dbms_lock.sleep(1);

PL/SQL procedure successfully completed.

ops$tkyte@ORA9I.WORLD> exec dbms_stats.gather_schema_stats (user, options => 'gather stale');

PL/SQL procedure successfully completed.

ops$tkyte@ORA9I.WORLD> select table_name, to_char(last_analyzed,'dd-mon hh24:mi:ss'), monitoring from user_tables where table_name = 'EMP';

TABLE_NAME                     TO_CHAR(LAST_AN MON
------------------------------ --------------- ---
EMP                            07-jul 21:03:11 YES

ops$tkyte@ORA9I.WORLD>


See, the last gather stale worked just fine -- because the table was analyzed in the first place.

Comments

You are the best

Suresh, July 07, 2002 - 9:12 pm UTC

Tom,
Thanks for your prompt reply. My understanding is that when you use the GATHER STALE option, only the modified and newly added rows get analyzed, not the full table as a whole. Is my understanding correct? If so, when you say

"A table is analyzed -- not just "new" or "modified" rows. A TABLE is analyzed," then what is the point of using the GATHER STALE option?

Thanks,
Suresh

Tom Kyte
July 07, 2002 - 9:18 pm UTC

Let's say you have 100 tables.

You modify 10 of them. Of the 10, you modify about 10% or more in 5 of them.

You gather stale. Now you gather stats on 5 tables. The other 5 modified tables did not change very much, so we skipped them. The other 90 tables that did not change at all -- we skipped them as well.

The point here: you skipped analyzing 95 tables out of 100 -- pretty good, eh?

The point of GATHER STALE is to gather stats only on those tables that NEED to have stats gathered -- all without you having to figure out which tables those might be. That's all.

We cannot, do not, just analyze "new rows" or "modified rows" (it would be hard to figure out what those DELETED rows used to be in order to "subtract" them from the stats -- wouldn't it? Think of things like histograms, where we would need to track the before and after image of every change -- the overhead would be HUGE)
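For instance, a sketch of how you might eyeball how close a monitored table is to the (roughly) 10% staleness threshold, by joining USER_TAB_MODIFICATIONS to USER_TABLES. On 9iR2 and later you may first need DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO, since monitoring counters are flushed to the dictionary only periodically:

```sql
-- sketch: estimate how "stale" each monitored table is
select m.table_name,
       m.inserts + m.updates + m.deletes as dml_count,
       t.num_rows,
       round(100 * (m.inserts + m.updates + m.deletes) /
             greatest(t.num_rows, 1), 1) as pct_changed
  from user_tab_modifications m, user_tables t
 where m.table_name = t.table_name
   and t.num_rows is not null;
```

Tables whose pct_changed is around 10 or more are the ones GATHER STALE would pick up.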



Totally unrelated...

A reader, October 26, 2002 - 10:21 pm UTC

I am sorry for posting this here... but I thought you might want to have a look at this.

Just wanted to bring to your notice that the number of days shown next to the question when you do a search is not always right.

Score(100) Sun Jul 07, 2002 (Sun Jul 07, 2002, 0 days old)

Tom Kyte
October 27, 2002 - 11:14 am UTC

probably more a bad use of terminology on my part. The "days old" is really the number of days between updates.

DBMS_STATS

Ted, March 18, 2003 - 4:25 pm UTC

Tom,

I was playing around with the DBMS_STATS package and it was failing with ORA-01467. Some of the tables were analyzed by DBMS_STATS before the error was returned. I get the same behavior with "dbms_stats.gather_SCHEMA_stats( user );"

Any ideas?

I am on version 8.1.7.2.

Thanks

USVAL156>begin
2 dbms_stats.gather_SCHEMA_stats( user, NULL );
3 end;
4 /
begin
*
ERROR at line 1:
ORA-01467: sort key too long
ORA-06512: at "SYS.DBMS_STATS", line 4479
ORA-06512: at "SYS.DBMS_STATS", line 4610
ORA-06512: at "SYS.DBMS_STATS", line 4720
ORA-06512: at "SYS.DBMS_STATS", line 4702
ORA-06512: at line 2


USVAL156>

Tom Kyte
March 18, 2003 - 5:19 pm UTC

Hdr: 2336505 8.1.7.3 PERFORMANC 11.5.6 PRODID-510 PORTID-59 1709806
Abstract: GETTING ORA-1467 WHEN PERFORMING FND_STATS.GATHER_TABLE_STATS

basically -- dbms_stats builds a query with lots of aggregates. On tables with lots of large character strings, this query would blow up depending on your block size, whereas an ANALYZE would work on that table. The fix was to substr just the first 32 characters (that's all we use) and it would work. It is fixed in the terminal release of Oracle8i R3. You can contact Support, reference that number, and tell them you hit it.

Until then, you can analyze the tables using dbms_stats one by one -- and find the problem table, use analyze on that one instead.
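That "one by one" approach could be sketched as a PL/SQL loop that catches the error per table, so the offending table identifies itself (requires SET SERVEROUTPUT ON):

```sql
begin
  for t in (select table_name from user_tables) loop
    begin
      dbms_stats.gather_table_stats(user, t.table_name);
    exception
      when others then
        -- the table that raises ORA-01467 is the problem table
        dbms_output.put_line(t.table_name || ': ' || sqlerrm);
    end;
  end loop;
end;
/
```

Then run a plain ANALYZE on whichever table(s) the loop reports.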

Some questions regarding dbms_stats

Riaz Shahid, March 21, 2003 - 3:20 am UTC

Hello Tom !

I've two questions regarding dbms_stats:

(1) does dbms_stats also analyze the indexed columns?

(2) what is the purpose of analyzing indexes???

Hope you don't mind my asking such "silly" questions. :(

Riaz

Tom Kyte
March 21, 2003 - 8:39 am UTC

1) if you ask it to, sure.

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:5792247321358#7417227783861 <code>

2) to see their size, their cluster factor, number of keys, how empty or full they are. same reason you analyze a table.
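As a sketch, asking dbms_stats to gather index stats along with the table is just the cascade parameter, and the results land in USER_INDEXES:

```sql
-- gather table, column, and index statistics together
exec dbms_stats.gather_table_stats(user, 'EMP', cascade => true);

-- inspect what was collected for the indexes
select index_name, num_rows, leaf_blocks, clustering_factor, last_analyzed
  from user_indexes
 where table_name = 'EMP';
```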

dbms_stats use in 9.2.0.3

Baqir Hussain, May 28, 2003 - 1:57 pm UTC

Tom,
One of the tables in production gets more than 10% of its rows modified every night. I am not sure whether I should gather stats every night.
What are the best dbms_stats parameters to use in this case (10%-15% change in data)?

Please advise


Tom Kyte
May 28, 2003 - 7:01 pm UTC

use table monitoring and dbms_stats and let it decide.
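That combination might look like this sketch -- turn monitoring on once, then let a nightly GATHER STALE decide whether the table crossed the (roughly 10%) threshold:

```sql
alter table emp monitoring;

-- nightly: gathers stats only on tables that changed "enough"
exec dbms_stats.gather_schema_stats(user, options => 'GATHER STALE');
```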

dbms_stats monitoring

Baqir Hussain, May 28, 2003 - 7:11 pm UTC

Tom,
Thanks for the prompt response. I did the same as you suggested last week as follows:

1. exec dbms_stats.ALTER_SCHEMA_TAB_MONITORING('BDR',TRUE);
2. Run cron jobs once every night to get the stale tables:

set serveroutput on size 50000
spool /home/oracle/db_scripts/stale_object.list
declare
t_stale_objects dbms_stats.objecttab;
BEGIN
dbms_stats.gather_schema_stats (
ownname => 'BDR',
options => 'LIST STALE',
objlist => t_stale_objects);
FOR i IN 1 .. t_stale_objects.COUNT
LOOP
dbms_output.put_line(t_stale_objects (i).objname);
END LOOP;
END;
/
exit
spool off

I do not know the next step you suggested.
Please advise.
thanks

Tom Kyte
May 28, 2003 - 8:28 pm UTC

all you did was list the stale objects.

now gather their stats. options => gather stale
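So the nightly script changes from LIST STALE to GATHER STALE. A sketch (per the documentation, objlist is also populated in GATHER STALE mode with the stale objects found, so the report is preserved -- verify on your version):

```sql
set serveroutput on size 50000
declare
  t_objects dbms_stats.objecttab;
begin
  dbms_stats.gather_schema_stats (
      ownname => 'BDR',
      options => 'GATHER STALE',   -- gather, not just list
      objlist => t_objects);
  for i in 1 .. t_objects.count loop
    dbms_output.put_line('gathered: ' || t_objects(i).objname);
  end loop;
end;
/
```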

sys stats ?

sravan, August 13, 2003 - 4:05 pm UTC

Tom,
I have a database where the sys tables are analyzed. I know it is not recommended to analyze the sys objects. Now,
. Should I delete the statistics from the sys objects ? or
. just leave them in there ?
. If I leave the statistics in the database, will it hurt the database performace ?

Why shouldn't the sys objects be analyzed in the first place?


Tom Kyte
August 13, 2003 - 4:12 pm UTC

that advice varies by version.

are you experiencing any "bad things"?
is something in need of repair?

sys stats question

sravan, August 13, 2003 - 4:19 pm UTC

No. We haven't seen any performance degradation.
We just realised that some of the tables were analyzed in the past and stats were collected.
We were just curious whether leaving them in there might cause problems.


dbms_stats.objecttab --> where is it defined?

Syltrem, August 15, 2003 - 11:46 am UTC

Example taken from one reviewer of this article:

declare
t_stale_objects dbms_stats.objecttab;
BEGIN
dbms_stats.gather_schema_stats (
ownname => 'BDR',
options => 'LIST STALE',
objlist => t_stale_objects);
...
...

Q1) How do I find out that the datatype for the objlist parameter is dbms_stats.objecttab?
Q2) Where can I get a description of dbms_stats.objecttab? I see (from the example) that it contains a column objname, but what else?

I have spent many an hour looking in the doc and have not happened to stumble over the right page.

Thanks so much!

Tom Kyte
August 15, 2003 - 12:07 pm UTC

in dbms_stats -- it is "dbms_stats.objecttab"

covered in the documentation:
</code> http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_stats.htm#1000552 <code>
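You can also read the definition straight out of the package spec, if you have access to it:

```sql
select text
  from all_source
 where name = 'DBMS_STATS'
   and type = 'PACKAGE';

-- which shows something approximately like this in 9i
-- (paraphrased from memory -- check your own version's spec):
--
-- type ObjectElem is record (
--   ownname     varchar2(30),  -- schema
--   objtype     varchar2(6),   -- 'TABLE' or 'INDEX'
--   objname     varchar2(30),
--   partname    varchar2(30),
--   subpartname varchar2(30),
--   confidence  number);       -- not used
-- type ObjectTab is table of ObjectElem index by binary_integer;
```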





Many thanks for quick reply

Syltrem, August 15, 2003 - 1:17 pm UTC

Can't figure out how I could miss it (I read all of the next page in the manual) - I find printed doc so much easier to browse...

method_opt

Sravan, September 30, 2003 - 5:46 pm UTC

Tom,
1) I looked around for a good explanation of the method_opt parameter of the dbms_stats package, but found nothing.
Can you please point me to a good/detailed explanation resource?

Most of the books/documents explain in only one line what the different options for that parameter are. I am looking for an in-depth look.

2) I read in one of your articles that specifying a "degree" value greater than the CPU count can improve performance. Can you elaborate on that, please?

Thanks

Tom Kyte
October 01, 2003 - 7:44 am UTC

1) </code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:5792247321358#7417227783861 <code>

2) degree specifies the number of threads/processes that will be used. Since these processes spend lots of time doing IO -- not using CPU -- you would find that your CPU is not pegged, not fully utilized, if you set degree = #cpu. In order to more fully utilize this resource (one you cannot put in the bank and use later -- it is use it or lose it), you would use a higher degree.
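As a sketch, degree is just a parameter to the gather calls -- e.g. on a 4-CPU machine you might experiment with something higher than 4 (BIG_TABLE here is a hypothetical table name):

```sql
-- use 8 parallel processes even though the box has 4 CPUs,
-- so the CPUs stay busy while other processes wait on IO
exec dbms_stats.gather_table_stats(user, 'BIG_TABLE', degree => 8);
```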

io also uses cpu.....

Reader, October 01, 2003 - 8:26 am UTC

When we do IO, it also uses cpu. Right?

Tom Kyte
October 01, 2003 - 9:37 am UTC

not with "real" disks, no -- an IDE drive uses the system CPU, but a SCSI drive, for example, won't.

so, on any real system, no, IO does not in general use the system CPU.

table monitoring and performance impact

Jerry, October 01, 2003 - 9:57 am UTC

A question about dbms_stats:
I have a reporting application where agents populate data for the database. When the agent first starts working, the data increased from zero rows to something like several hundred. The report query hangs because of the data change, although minor in absolute number but huge from percentage perspective. I am thinking of using gather_stales. But that would require turn on table monitoring on almost all tables in the database. Would there a strong negative performance impact if I turn on table monitoring for lots of tables?

Thanks!

Tom Kyte
October 01, 2003 - 10:18 am UTC

monitoring is very very non-intrusive, very light weight -- turn it on.

why do we have to turn on table monitoring ....

Reader, October 01, 2003 - 12:17 pm UTC

is this required if I decide to let Oracle make a decision when to collect stats for tables? Thanks.

Tom Kyte
October 01, 2003 - 1:57 pm UTC

yes. if you want it to figure out when the table needs new stats, you need to enable this.

it is a simple call to dbms_stats to enable for a schema.

monitoring

Sam, October 01, 2003 - 5:09 pm UTC

Is it like I have to do "alter table table_name monitoring" for all of the tables in a schema and then use the dbms_stats gather_schema_stats procedure with the GATHER STALE option for that schema? Thanks.

Tom Kyte
October 01, 2003 - 6:20 pm UTC

there is a dbms_stats procedure to enable monitoring for a schema.

</code> http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96612/d_stats2.htm#1012974 <code>
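In 9i that is a one-liner (a sketch, using SCOTT as a stand-in schema name):

```sql
-- enable monitoring for every table in the schema at once
exec dbms_stats.alter_schema_tab_monitoring('SCOTT', true);

-- verify
select table_name, monitoring
  from all_tables
 where owner = 'SCOTT';
```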



RE: Is it like I have to do alter table...

Mark A. Williams, October 01, 2003 - 6:30 pm UTC

> Is it like I have to do alter table...

If you have a release prior to 9i, yes. But a simple script can create a spool file and then run that file. Something like:

select 'alter table ' || owner || '.' || table_name || ' monitoring;'
from all_tables
where owner in ('ABC','XYZ')
order by owner, table_name;

you get the idea...

Plus if you make the tablespace 'monitoring' you don't have to remember to set the 'monitoring' flag on any new tables that get created in that tablespace.

Of course, if you are 9i, then you can just use dbms_stats.alter_schema_table_monitoring as Tom has referenced.

HTH,

Mark

Tom Kyte
October 01, 2003 - 7:00 pm UTC


you cannot make a tablespace monitoring tho?

monitoring

Mark A. Williams, October 01, 2003 - 7:28 pm UTC

Tom,

Yep, my bad... not sure what I was thinking there... there is, of course, no alter tablespace <name> monitoring clause... Please disregard. Now, the question is, "what was I thinking?"

Thanks,

Mark

generate n only

A reader, December 05, 2003 - 4:14 am UTC

Hi

How can we gather histograms only for a table using dbms_stats?

Since METHOD_OPT seems to have only the

FOR COLUMNS
FOR ALL INDEXED COLUMNS
FOR ALL COLUMNS

options, FOR TABLE is always run???? (which I don't want -- I only want column statistics, histograms)

Tom Kyte
December 05, 2003 - 7:29 am UTC

well, why wouldn't you want them? you are going to be scanning the object anyhow?

but no, i don't see a way (never really even thought to look before) to do this with dbms_stats.

because I already have table stats

A reader, December 05, 2003 - 8:28 am UTC

Hi

I want to do it because sometimes I forget to gather histograms after analyzing the table!

With analyze we can -- why not with dbms_stats?

Tom Kyte
December 05, 2003 - 10:58 am UTC

well, if you use dbms_stats to gather the histograms -- the table stats come along with them, so there is nothing to forget.

Move the goal posts here; there is no reason to "forget" (that's not a good reason really -- this is a process, not something you just "key in").

Use dbms_stats to gather histograms and "accidentally" get your table stats.

I don't see how you can "forget".


But bottom line: you are already doing the heavy work of scanning the table to compute the histograms. You would not be "saving" much of anything here. So, basically -- a non-issue.

1* select count(*) from big_table
big_table@ORA920> /

  COUNT(*)
----------
   1833792


Indexes on big_table

Index                          Is
Name                           Unique COLUMNS
------------------------------ ------ --------------------------------
BIG_TABLE_ID                   No     OWNER
OBJECT_TYPE_IDX                No     OBJECT_TYPE
BIG_TABLE_OBJECT_ID_IDX        No     OBJECT_ID


analyze table big_table compute statistics
for table
for all indexed columns

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.20          0          0          0          0
Execute      1     39.10      48.96      31720      22060         38          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2     39.10      49.17      31720      22060         38          0

********************************************************************************

analyze table big_table compute statistics
for all indexed columns

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.02          0          0          0          0
Execute      1     36.96      47.77      28875      22060         37          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2     36.96      47.80      28875      22060         37          0
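For reference, the dbms_stats equivalent of that second ANALYZE would be driven by method_opt -- a sketch (BIG_TABLE is the table from the trace above; as discussed, the table-level stats come along with the column scan anyway):

```sql
begin
  dbms_stats.gather_table_stats(
      ownname    => user,
      tabname    => 'BIG_TABLE',
      method_opt => 'FOR ALL INDEXED COLUMNS');  -- histograms on indexed columns
end;
/
```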


method_opt in 9i

A reader, December 27, 2003 - 7:30 pm UTC

Hi

We have some new METHOD_OPT options in 9i, I have difficulty differentiate between two of them:

AUTO--Oracle determines the columns to collect histograms based on data distribution and the workload of the columns.

SKEWONLY--Oracle determines the columns to collect histograms based on the data distribution of the columns.

The only difference is the word WORKLOAD. What workload are we talking about here -- the number of times these columns are queried...? If that's the case, where is this statistic stored?

from the link you have given

A reader, December 28, 2003 - 1:08 pm UTC

it seems SIZE AUTO is the way to go huh, unless the shared pool is being flushed or aged out continuously
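For what it's worth, using SIZE AUTO is a one-parameter change (a sketch); the "workload" it consults is column-usage information the optimizer records as queries are parsed, which lands internally in sys.col_usage$:

```sql
begin
  dbms_stats.gather_schema_stats(
      ownname    => user,
      method_opt => 'FOR ALL COLUMNS SIZE AUTO');
end;
/

-- the workload data AUTO consults can be inspected (as SYS)
-- in sys.col_usage$ -- an internal table, so treat it as read-only
```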

DM, June 16, 2004 - 2:25 pm UTC

I gathered stats using following options.

exec dbms_stats.gather_schema_stats ( 'AQU_DATA', 100 ) ;
exec dbms_stats.gather_schema_stats ( 'AQU_SEC', 100 ) ;

But after doing that, the CBO takes hours to execute the query (and once returned an error as well), vs. running the same thing under the RBO.

Am I missing anything while gathering statistics?


Details

SELECT table_name, NUM_ROWS, avg_row_len, last_analyzed from user_tables

TABLE_NAME                       NUM_ROWS AVG_ROW_LEN
------------------------------ ---------- -----------
DEAL                                 4804         113
DL_PROD                              6597          81
DL_PROD_ASSOC                         240          57
DL_PROD_BOND                         3477          70
DL_PROD_LOAN                         2701         155
DL_PROD_LOAN_CA                      2699          50
DL_PROD_OWNERSHIP                    7528          45
DL_PROD_OWNERSHIP_SUB_ORIG           5960          54
DL_TEAM                             10898          58
LK_PRODUCT_CLASS                        8          61
SEC_XREF_USERS_DL_ORGS               5168          35
SEC_XREF_USERS_DL_SUB_ORGS          29676          36




========following sql works output in 4 secs===
select /*+ RULE */ COUNT(*)
FROM AQU_DATA.V_DEAL_SEARCH_NEW
WHERE ID_AQU_EMP = '333456'

2.1 SORT(AGGREGATE)
3.1 MERGE JOIN
4.1 SORT(JOIN)
5.1 VIEW - SEC_QUALIFY_DEALS_RDONLY
6.1 SORT(UNIQUE)
7.1 VIEW
8.1 UNION-ALL
9.1 TABLE ACCESS(FULL) - DL_TEAM
9.2 TABLE ACCESS(BY INDEX ROWID) - DL_PROD_OWNERSHIP
10.1 NESTED LOOPS
11.1 TABLE ACCESS(FULL) - SEC_XREF_USERS_DL_ORGS
11.2 INDEX(RANGE SCAN) - DL_PROD_OWNERSHIP_ORG(NON-UNIQUE)
9.3 NESTED LOOPS
10.1 MERGE JOIN
11.1 SORT(JOIN)
12.1 TABLE ACCESS(FULL) - SEC_XREF_USERS_DL_SUB_ORGS
11.2 SORT(JOIN)
12.1 TABLE ACCESS(FULL) - DL_PROD_OWNERSHIP_SUB_ORIG
10.2 TABLE ACCESS(BY INDEX ROWID) - DL_PROD_OWNERSHIP
11.1 INDEX(UNIQUE SCAN) - DL_PROD_OWNERSHIP_PK(UNIQUE)
4.2 SORT(JOIN)
5.1 VIEW
6.1 UNION-ALL
7.1 MERGE JOIN(OUTER)
8.1 MERGE JOIN(OUTER)
9.1 SORT(JOIN)
10.1 NESTED LOOPS
11.1 TABLE ACCESS(FULL) - DEAL
11.2 TABLE ACCESS(BY INDEX ROWID) - DEAL
12.1 INDEX(UNIQUE SCAN) - DEAL_PK(UNIQUE)
9.2 SORT(JOIN)
10.1 VIEW
11.1 SORT(GROUP BY)
12.1 VIEW
13.1 UNION-ALL
14.1 SORT(GROUP BY)
15.1 TABLE ACCESS(FULL) - DL_PROD_BOND
14.2 SORT(GROUP BY)
15.1 TABLE ACCESS(FULL) - DL_PROD_LOAN
14.3 SORT(GROUP BY)
15.1 TABLE ACCESS(FULL) - DL_PROD_LOAN_CA
14.4 SORT(GROUP BY)
15.1 TABLE ACCESS(FULL) - DL_PROD_ASSOC
8.2 SORT(JOIN)
9.1 VIEW
10.1 SORT(GROUP BY)
11.1 NESTED LOOPS
12.1 TABLE ACCESS(FULL) - DL_PROD
12.2 TABLE ACCESS(BY INDEX ROWID) - LK_PRODUCT_CLASS
13.1 INDEX(UNIQUE SCAN) - LK_PRODUCT_CLASS_PK(UNIQUE)
7.2 MERGE JOIN(OUTER)
8.1 SORT(JOIN)
9.1 MERGE JOIN(OUTER)
10.1 MERGE JOIN(OUTER)
11.1 SORT(JOIN)
12.1 NESTED LOOPS
13.1 NESTED LOOPS
14.1 NESTED LOOPS
15.1 NESTED LOOPS
16.1 TABLE ACCESS(FULL) - DEAL
16.2 TABLE ACCESS(BY INDEX ROWID) - DL_PROD
17.1 INDEX(RANGE SCAN) - XIF1DL_PROD(NON-UNIQUE)
15.2 INDEX(UNIQUE SCAN) - LK_PRODUCTS_PK(UNIQUE)
14.2 INDEX(UNIQUE SCAN) - LK_PIPELINE_STATUS_PK(UNIQUE)
13.2 TABLE ACCESS(BY INDEX ROWID) - DEAL
14.1 INDEX(UNIQUE SCAN) - DEAL_PK(UNIQUE)
11.2 SORT(JOIN)
12.1 VIEW
13.1 SORT(GROUP BY)
14.1 VIEW
15.1 UNION-ALL
16.1 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_BOND
16.2 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_LOAN
16.3 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_LOAN_CA
16.4 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_ASSOC
10.2 SORT(JOIN)
11.1 VIEW
12.1 SORT(GROUP BY)
13.1 NESTED LOOPS
14.1 TABLE ACCESS(FULL) - DL_PROD
14.2 TABLE ACCESS(BY INDEX ROWID) - LK_PRODUCT_CLASS
15.1 INDEX(UNIQUE SCAN) - LK_PRODUCT_CLASS_PK(UNIQUE)
8.2 SORT(JOIN)
9.1 VIEW - DL_PROD_DERIVED_COLUMNS
10.1 MERGE JOIN(OUTER)
11.1 SORT(JOIN)
12.1 VIEW
13.1 SORT(GROUP BY)
14.1 NESTED LOOPS
15.1 TABLE ACCESS(FULL) - DL_PROD
15.2 TABLE ACCESS(BY INDEX ROWID) - LK_PRODUCT_CLASS
16.1 INDEX(UNIQUE SCAN) - LK_PRODUCT_CLASS_PK(UNIQUE)
11.2 SORT(JOIN)
12.1 VIEW
13.1 SORT(GROUP BY)
14.1 VIEW
15.1 UNION-ALL
16.1 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_BOND
16.2 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_LOAN
16.3 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_LOAN_CA
16.4 TABLE ACCESS(FULL) - DL_PROD_ASSOC

========following sql takes hrs to execute===
select COUNT(*)
FROM AQU_DATA.V_DEAL_SEARCH_NEW
WHERE ID_AQU_EMP = '333456'

2.1 SORT(AGGREGATE)
3.1 NESTED LOOPS
4.1 VIEW - SEC_QUALIFY_DEALS_RDONLY
5.1 SORT(UNIQUE)
6.1 VIEW
7.1 UNION-ALL
8.1 TABLE ACCESS(FULL) - DL_TEAM
8.2 HASH JOIN
9.1 TABLE ACCESS(FULL) - SEC_XREF_USERS_DL_ORGS
9.2 TABLE ACCESS(FULL) - DL_PROD_OWNERSHIP
8.3 HASH JOIN
9.1 HASH JOIN
10.1 TABLE ACCESS(FULL) - SEC_XREF_USERS_DL_SUB_ORGS
10.2 TABLE ACCESS(FULL) - DL_PROD_OWNERSHIP_SUB_ORIG
9.2 TABLE ACCESS(FULL) - DL_PROD_OWNERSHIP
4.2 VIEW
5.1 UNION-ALL(PARTITION)
6.1 HASH JOIN(OUTER)
7.1 NESTED LOOPS(OUTER)
8.1 NESTED LOOPS
9.1 INDEX(UNIQUE SCAN) - DEAL_PK(UNIQUE)
9.2 TABLE ACCESS(BY INDEX ROWID) - DEAL
10.1 INDEX(UNIQUE SCAN) - DEAL_PK(UNIQUE)
8.2 VIEW
9.1 SORT(GROUP BY)
10.1 HASH JOIN
11.1 TABLE ACCESS(FULL) - LK_PRODUCT_CLASS
11.2 TABLE ACCESS(FULL) - DL_PROD
7.2 VIEW
8.1 SORT(GROUP BY)
9.1 VIEW
10.1 UNION-ALL
11.1 SORT(GROUP BY)
12.1 TABLE ACCESS(FULL) - DL_PROD_BOND
11.2 SORT(GROUP BY)
12.1 TABLE ACCESS(FULL) - DL_PROD_LOAN
11.3 SORT(GROUP BY)
12.1 TABLE ACCESS(FULL) - DL_PROD_LOAN_CA
11.4 SORT(GROUP BY)
12.1 TABLE ACCESS(FULL) - DL_PROD_ASSOC
6.2 HASH JOIN(OUTER)
7.1 HASH JOIN(OUTER)
8.1 NESTED LOOPS
9.1 NESTED LOOPS
10.1 NESTED LOOPS
11.1 NESTED LOOPS(OUTER)
12.1 NESTED LOOPS
13.1 INDEX(UNIQUE SCAN) - DEAL_PK(UNIQUE)
13.2 TABLE ACCESS(BY INDEX ROWID) - DEAL
14.1 INDEX(UNIQUE SCAN) - DEAL_PK(UNIQUE)
12.2 VIEW
13.1 SORT(GROUP BY)
14.1 VIEW
15.1 UNION-ALL
16.1 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_BOND
16.2 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_LOAN
16.3 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_LOAN_CA
16.4 SORT(GROUP BY)
17.1 TABLE ACCESS(FULL) - DL_PROD_ASSOC
11.2 TABLE ACCESS(BY INDEX ROWID) - DL_PROD
12.1 INDEX(RANGE SCAN) - XIF1DL_PROD(NON-UNIQUE)
10.2 INDEX(UNIQUE SCAN) - LK_PRODUCTS_PK(UNIQUE)
9.2 INDEX(UNIQUE SCAN) - LK_PIPELINE_STATUS_PK(UNIQUE)
8.2 VIEW
9.1 SORT(GROUP BY)
10.1 HASH JOIN
11.1 TABLE ACCESS(FULL) - LK_PRODUCT_CLASS
11.2 TABLE ACCESS(FULL) - DL_PROD
7.2 VIEW - DL_PROD_DERIVED_COLUMNS
8.1 HASH JOIN(OUTER)
9.1 VIEW
10.1 SORT(GROUP BY)
11.1 HASH JOIN
12.1 TABLE ACCESS(FULL) - LK_PRODUCT_CLASS
12.2 TABLE ACCESS(FULL) - DL_PROD
9.2 VIEW
10.1 SORT(GROUP BY)
11.1 VIEW
12.1 UNION-ALL
13.1 SORT(GROUP BY)
14.1 TABLE ACCESS(FULL) - DL_PROD_BOND
13.2 SORT(GROUP BY)
14.1 TABLE ACCESS(FULL) - DL_PROD_LOAN
13.3 SORT(GROUP BY)
14.1 TABLE ACCESS(FULL) - DL_PROD_LOAN_CA
13.4 TABLE ACCESS(FULL) - DL_PROD_ASSOC


it returned error
ERROR at line 1:
ORA-04030: out of process memory when trying to allocate 49664 bytes (hash-join
subh,kllcqas:kllsltba)




reader

A reader, May 13, 2007 - 8:53 pm UTC

is fnd_stats a special application of dbms_stats

What does fnd_stats do. Does it use dbms_stats under the cover


Tom Kyte
May 14, 2007 - 2:02 pm UTC

fnd_stats is specific to Oracle Applications; it is the stored procedure they wrote to gather statistics for an Oracle Applications database.

yes, it in turn invokes dbms_stats, after reading configuration tables to determine what to do.

dbms_stats using least TEMP space

Enayet, May 21, 2007 - 5:03 pm UTC

Hi Tom:

I need to pick your expertise.

We are using:
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => '${SCHEMA_TO_ANALYZE}',
options => 'GATHER STALE',
estimate_percent => 0.0000011,
method_opt => NULL,
degree => 4)

Challenge here is: we have a limited amount of TEMP space and our statistics gathering always fails. How can we tweak dbms_stats to use the least amount of TEMP space?

Please note: we have allocated a larger sort_area_size and it still fails. This is a 9206 database.

Appreciate your response.

Tom Kyte
May 22, 2007 - 9:20 am UTC

why not spend a teeny tiny bit of money (on more TEMP space) rather than waste thousands of dollars of your time?

I'll never understand this....

don't do parallel, maybe -- you'll have one serial session using temp instead of 4. Wasting even more time and money, but.....
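A sketch of those suggestions combined -- serial gathering with a modest sample size (note: extremely small estimate_percent values, like the 0.0000011 above, are silently raised to the minimum sample Oracle will accept, so they don't buy anything):

```sql
begin
  dbms_stats.gather_schema_stats(
      ownname          => 'SCOTT',        -- hypothetical schema name
      options          => 'GATHER STALE',
      estimate_percent => 10,             -- sample instead of computing
      degree           => 1);             -- serial: one session using TEMP
end;
/
```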

Which options to use for this case

A reader, May 30, 2007 - 2:29 pm UTC

Tom,
I am using 10gr2 database. Here is what I want to achieve using dbms_stats:
a) All database objects should be analyzed, regardless of whether they are stale, newly created with no stats, or have the most up-to-date stats. I tried to use options=>'GATHER' but the last_analyzed date does not change on some tables. What can I do to ensure that every object gets analyzed?

Thanks
Tom Kyte
May 30, 2007 - 4:12 pm UTC

the database right out of the box already has a job scheduled to do that, you need do nothing.
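In 10g you can verify that out-of-the-box job exists and is enabled (a sketch; requires access to the DBA views):

```sql
select job_name, enabled
  from dba_scheduler_jobs
 where job_name = 'GATHER_STATS_JOB';
```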

DBMS_STATS CPU utilization

A reader, August 29, 2007 - 4:47 pm UTC

Tom,
I am faced with a very strange situation. On the same RAC cluster, there are two databases. One database is 147GB and the second one is about 41GB. I run identical gather_schema_stats statements against both databases at different times. The 147GB database's gather stats finishes faster and uses a lot less CPU. The 41GB database's gather stats takes longer than the 147GB database's, and the CPU utilization shoots very high.

I checked the size of the tables/indexes analyzed in each database on a given day. The size of the objects analyzed in the 147GB database is about 95GB, and the size of the objects analyzed in the 41GB database is about 40GB. No other process is running while gather stats runs.

My question is: why does the CPU utilization go sky high when gather stats runs on the smaller database, and why does it take longer? Is there some database parameter which affects resource consumption when stats are gathered? Other than database parameter differences between these two databases, I cannot seem to find anything else which might cause such behavior. We are on 10.2.0.3.

Thanks
Tom Kyte
September 04, 2007 - 4:46 pm UTC

everything but what is needed....

the actual inputs to dbms_stats.


I'll guess: gather stale -- the 41GB database has lots of stale objects that need gathering and the other one does not.

Or, you are using defaults and the default method opt is gather auto and in the smaller database, you run lots of queries against lots of columns and are gathering lots of histograms as a result - and in the larger database, you are not.

see Why Does My Plan Change?
https://www.oracle.com/technetwork/issue-archive/2006/06-may/o36asktom-099107.html

for more info on that.

I dont get it !!

karthick, September 14, 2007 - 2:07 am UTC

SQL> drop table t1
2 /

Table dropped.

SQL> drop table t2
2 /

Table dropped.

SQL> create table t2(a integer, b integer, c integer)
2 /

Table created.

SQL> create table t1 as select * from t2 where 1 = 2
2 /

Table created.

SQL> alter table t1 add(old_a integer, old_b integer, old_c integer)
2 /

Table altered.

SQL> create index t1_idx on t1(a,b,c)
2 /

Index created.

SQL> insert into t2 select rownum,rownum,rownum from all_objects
2 /

43777 rows created.

SQL> insert into t1
2 select rno,rno,rno,rownum,rownum,rownum
3 from (select rno from (select rownum rno from all_objects) order by rno desc)
4 /

43777 rows created.

SQL> select num_rows from user_tables where table_name = 'T1'
2 /

NUM_ROWS
----------


SQL> set timing on
SQL> analyze table t1 compute statistics
2 /

Table analyzed.

Elapsed: 00:00:02.00
SQL> select num_rows from user_tables where table_name = 'T1'
2 /

NUM_ROWS
----------
43777

Elapsed: 00:00:00.00
SQL> BEGIN
2 DBMS_STATS.DELETE_TABLE_STATS ('SCBADM','T1');
3 END;
4 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.07
SQL> select num_rows from user_tables where table_name = 'T1'
2 /

NUM_ROWS
----------


Elapsed: 00:00:00.00
SQL> BEGIN
2 DBMS_STATS.GATHER_TABLE_STATS ('SCBADM','T1');
3 END;
4 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.03

this takes much more time than ANALYZE

SQL> BEGIN
2 DBMS_STATS.DELETE_TABLE_STATS ('SCBADM','T1');
3 END;
4 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL> select num_rows from user_tables where table_name = 'T1'
2 /

NUM_ROWS
----------


Elapsed: 00:00:00.00
SQL> BEGIN
2 DBMS_STATS.GATHER_TABLE_STATS ('SCBADM','T1',NULL,NULL,FALSE,'FOR ALL COLUMNS SIZE 1',2);
3 END;
4 /

here the 2 means the gathering of stats should happen in 2 concurrent processes, right?

PL/SQL procedure successfully completed.

Elapsed: 00:00:32.05

but see the time it takes.

The best time I have got here is Elapsed: 00:00:02.00.

Can we make it faster than that?

I ran all this in Oracle 8.1.7.
Tom Kyte
September 15, 2007 - 9:12 pm UTC

parallel IS NOT FAST=TRUE

when you have something small and fast, parallel is OFTEN SLOWER!!!!
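To make the degree argument in the asker's positional call explicit, here is the same pair of gathers written with named parameters (a sketch only; the TEST schema/T1 table are stand-ins for the asker's objects, and timings will vary by system):

```sql
-- Serial gather: on a ~44,000-row table this completes in a couple of seconds.
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname    => 'TEST',
      tabname    => 'T1',
      method_opt => 'FOR ALL COLUMNS SIZE 1',
      degree     => 1 );   -- serial: no parallel-query setup cost
END;
/

-- Parallel gather: degree => 2 requests two parallel execution servers.
-- On a table this small, the cost of starting and coordinating those
-- servers usually exceeds any time saved, so this can run SLOWER
-- than the serial gather above.
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname    => 'TEST',
      tabname    => 'T1',
      method_opt => 'FOR ALL COLUMNS SIZE 1',
      degree     => 2 );
END;
/
```

So yes, degree => 2 means two concurrent processes, but as the excerpt below explains, parallelism only pays off when the task is large enough to amortize that coordination overhead.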

<quote src=Expert Oracle Database Architecture>

A Parallel Processing Analogy
I often use an analogy to describe parallel processing and why you need both a large task and sufficient free resources in the database. It goes like this: suppose you have two tasks to complete. The first is to write a one-page summary of a new product. The other is to write a ten-chapter comprehensive report, with each chapter being very much independent of the others. For example, consider this book: this chapter, "Parallel Execution," is very much separate and distinct from the chapter titled "Redo and Undo"; they did not have to be written sequentially.
How do you approach each task? Which one do you think would benefit from parallel processing?
One-Page Summary
In this analogy, the one-page summary you have been assigned is not a large task. You would either do it yourself or assign it to a single individual. Why? Because the amount of work required to "parallelize" this process would exceed the work needed just to write the paper yourself. You would have to sit down, figure out that there should be 12 paragraphs, determine that each paragraph is not dependent on the other paragraphs, hold a team meeting, pick 12 individuals, explain to them the problem and assign them each a paragraph, act as the coordinator and collect all of their paragraphs, sequence them into the right order, verify they are correct, and then print the report. This is all likely to take longer than it would to just write the paper yourself, serially. The overhead of managing a large group of people on a project of this scale will far outweigh any gains to be had from having the 12 paragraphs written in parallel.
The exact same principle applies to parallel execution in the database. If you have a job that takes seconds or less to complete serially, then the introduction of parallel execution and its associated managerial overhead will likely make the entire thing take longer.
Ten-Chapter Report
But consider the second task. If you want that ten-chapter report fast, as fast as possible, the slowest way to accomplish it would be to assign all of the work to a single individual (trust me, I know; look at this book! Some days I wished there were 15 of me working on it). Here you would hold the meeting, review the process, assign the work, act as the coordinator, collect the results, bind up the finished report, and deliver it. It would not have been done in one-tenth the time, but perhaps one-eighth or so. Again, I say this with the proviso that you have sufficient free resources. If you have a large staff that is currently not actually doing anything, then splitting the work up makes complete sense.
However, consider that as the manager, your staff is multitasking and they have a lot on their plates. In that case, you have to be careful with that big project. You need to be sure not to overwhelm them; you don't want to work them beyond the point of exhaustion. You can't delegate out more work than your resources (your people) can cope with, otherwise they'll quit. If your staff is already fully utilized, adding more work will cause all schedules to slip and all projects to be delayed.
Parallel execution in Oracle is very much the same. If you have a task that takes many minutes, hours, or days, then the introduction of parallel execution may be the thing that makes it run eight times faster. But then again, if you are already seriously low on resources (the overworked team of people), then the introduction of parallel execution would be something to avoid, as the system will become even more bogged down. While the Oracle server processes won't "quit" in protest, they could start running out of RAM and failing, or just suffer from such long waits for I/O or CPU as to make it appear as if they were doing no work whatsoever.
If you keep that in mind, remembering never to take an analogy to illogical extremes, you'll have the commonsense guiding rule to see if parallelism can be of some use. If you have a job that takes seconds, it is doubtful that parallel execution can be used to make it go faster; the converse would be more likely. If you are low on resources already (i.e., your resources are fully utilized), adding parallel execution would likely make things worse, not better. Parallel execution is excellent for when you have a really big job and plenty of excess capacity. In this chapter, we'll take a look at some of the ways we can exploit those resources.

</quote>
