index rebuild with Compute Statistics in program
Prakash Eranki, November 24, 2002 - 9:41 pm UTC
Hi Tom
That was a fantastic explanation regarding Indexes. Thanks a ton for the Knowledge Share.
I have a question related to the same. In our programs we pull chunks of data from huge tables into work tables using a bulk fetch and a bulk insert respectively. Just before copying the data from the main table to the work table, I disable the index on the work table using alter index ... unusable and set the session to skip unusable indexes; then bulk insert, then commit, and then rebuild the index with alter index ... rebuild compute statistics. According to your explanation above it should not have an impact. I cannot join the huge tables, so that is why this exercise. I reduced the program time from 45 minutes to 3-4 minutes. I achieved good performance and so far I haven't seen any negative impact. Will it have an impact in the future? Please kindly suggest.
Thanks & Regards
Prakash Eranki
November 25, 2002 - 7:19 am UTC
No, you missed my point.
We are talking about regularly scheduled index rebuilds on a normal system here. "Every week, let's rebuild everything". Well, that might have been a super idea for dbase -- but not for a real database.
What you describe above is more akin to a bulk load. Here, instead of maintaining the index whilst loading, you disable them -- bulk load -- and then rebuild them (but the rebuild is really a CREATE since we have to go to the base table to really build these indexes -- insufficient data in the index structure).
So, I would say my explanation above has no bearing or impact on your situation -- you would be applying apples to oranges if you take the above advice and apply it to your situation which is totally different.
BTW: yes, yes you can join huge tables. Just as efficiently (if not much more so) than joining little tables over and over and over and over as you appear to be.
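For reference, a sketch of the bulk-load pattern described in the question (work_tab, work_tab_idx and the predicate are placeholders, not names from the original post):

alter index work_tab_idx unusable;
alter session set skip_unusable_indexes = true;
insert /*+ append */ into work_tab
select * from main_tab where batch_dt = trunc(sysdate);
commit;
alter index work_tab_idx rebuild compute statistics;

As noted, the final rebuild here is effectively a CREATE -- with the index unusable, Oracle must go back to the base table to build it.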
A reader, December 22, 2002 - 2:13 pm UTC
Your analogy was very interesting. But can you offer a more technical explanation of what is happening? What do you mean by saying that the index rebuild will destroy the equilibrium the system worked so hard to get to? Also, what is the difference between an index rebuild and re-creation of that index?
December 22, 2002 - 4:28 pm UTC
Don't know how to say it better -- the index wants to be big and fat with extra space. It is on a column you update -- moving the index entry from place to place in the index. One day the row has a code of "A", the next day the code is "G", then "Z" then "H" and so on. So the index entry for the row moves from place to place in the index. As it does so, it needs space -- well, if the space isn't there, we split the block into two -- and make space. Now the index is getting fat. Over time the index is 2-3x the size it was when you started and is "half or more empty". But that is OK since you move rows around. Now when we move the rows around, we no longer have to split blocks to make room -- the room is already available.
Then you come along and rebuild or drop and recreate the index (which have the same effects -- just the rebuild is "safer" -- doesn't stand a chance of losing the index and can be faster as the index can be rebuilt by scanning the existing index instead of scanning the table and sorting and building a fresh index). Now, all of that nice space is gone. We start the process of splitting the blocks all over again -- getting us right back to where we started.
You saved no space.
The index is right back the way it was.
You would just be wasting your time to rebuild it again causing this vicious cycle to repeat itself.
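As an aside, the two operations being compared might be sketched like this (t_idx on t(x) is just a placeholder):

alter index t_idx rebuild;
-- versus
drop index t_idx;
create index t_idx on t(x);

The rebuild can scan the existing index structure and never leaves you without an index; the drop and create must scan the table, sort, and build from scratch -- and the index does not exist in between.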
I had it sooo wrong...
Christo Kutrovsky, April 07, 2003 - 10:49 pm UTC
Since the beginning, I was told that deleting a row from an indexed table wastes that space in the index. I was also told (or maybe read it somewhere...) that updating an indexed column is basically deleting the entry and creating it anew. Since then, I had never taken the time to dig in more.
Now, it seems that both those statements are correct, but not the way I imagined it. Please correct me if I make a mistake:
1. Deleting a row from an index "wastes" the space, but this space will be reused when a similar row needs to be inserted.
2. Updating a row in an index deletes the entry in the old place and places a new one at a new place. The space from the old entry is "wasted" but reused when a new similar row is created.
3. Building an index with pctfree 50 is the same as having ALL the blocks "split" and will help in reducing redo as we insert data (as the chance of splitting a block is reduced).
4. When Oracle detects that an index is created on a column whose values are progressing (1, 2, 3, 4), then when splitting a block it keeps 90% in the old block and puts 10% in the new block, not 50/50 as in any other case.
5. Rebuilding an index can be helpful if:
5a. The data is not going to be modified any more (or maybe just a little) to improve the space usage and to improve LIOs.
5b. After deleting many rows (50% for example) from a table (and no plan for reinserting them), rebuilding will help.
5c. If you have a range partitioned table, on a date basis (for example 1 partition per week), and you are progressively inserting and adding new partitions, AND RARELY modify the data in past partitions, it's a good idea to rebuild the indexes on those partitions.
5c - addition - Is that even necessary if we have "progressive" numbers on the indexed columns?
Thanks to you, I will not waste time rebuilding due to my confusion that after DML activity, rebuilding will help the system (in the general case).
April 08, 2003 - 7:33 am UTC
1) when a "similar row" (one that fits on that leaf block) is inserted, or when all rows on a leaf block are deleted (then that block goes onto the freelist and will be reused elsewhere in the index structure)
2) it is not "wasted", no more than space in a table block is wasted when you update 'long value in a varchar2' to 'short'.
3) assuming a normal distribution of values -- consider if this index were on a column populated via a sequence. THEN you would have accomplished that waste of space, as the "older" leaf blocks would never use their 50%
4) only in special cases, else it is more like a 60/40 split. The 90/10 is a special optimization for indexes on columns populated in an increasing fashion (like those on a sequence, for example).
5) (a) sure, like after a load into a data warehouse -- but you would have
disabled them anyway...
(b) coalesce would do the same
(c) why?
why 5c
Christo Kutrovsky, April 08, 2003 - 12:45 pm UTC
As we insert into that table partition (when it's current) the index gets "fat" as we feed it.
So after we finish loading data into that partition, some index blocks will not be full (depending on the data), as they have split as necessary.
Since we've finished loading data, we won't need that "reserved" space in the index anymore, so we're rebuilding that partition (of a local index) to get rid of the empty slots.
From this point in time, this partition is only going to be queried, and we're starting to fill up the next one.
April 08, 2003 - 1:58 pm UTC
tell me -- how much space do you anticipate getting back?
Sure, if you have a sliding window -- in a data warehouse for example -- one would typically DISABLE indexes, load and rebuild. totally different concept.
In a transactional system -- locally partitioned indexes are somewhat less useful -- many partitioned indexes are in fact global -- so no.
You see -- 5c is very vague, very general, you said:
5c. If you have a range partitioned table, on a date basis (for example 1
partition per week), and you are progressively inserting and adding new
partitions, AND RARELY modify the data in past partitions, it's a good idea to
rebuild the indexes on those partitions.
i would say "good idea" -- maybe -- maybe not. give me the details here. global indexes? local indexes? local indexes that are not on an increasing field (they do 90/10, no need there)? local indexes that materially shrink (most indexes you will find are "well fed", neither too fat, nor too skinny).
So, I will stick with "no".
How to determine index characteristics
A reader, May 02, 2003 - 12:19 pm UTC
Tom:
I have a few questions re: index characteristics. I'd greatly appreciate it if you could please explain with the help of some samples.
How can I determine if my index is
1) "Lob sided" and what does it mean ?
2) Index browning - what is it and how to detect it?
3) undergoing block splitting ?
4) sequentially increasing in its key value - and hence I can re-create it with a low pctfree setting.
How can I determine the overhead of a bulk update/insert on the index maintenance (i.e block splitting at leaf level and branch level etc...)?
Thanks
May 02, 2003 - 12:56 pm UTC
1) think you mean "lop sided" and for that I would ask you to ask the person that said it what they mean and how to get one ;)
they really don't get that way.
2) a high percentage of deleted leaf rows to leaf rows (think of leaf rows like leaves on a tree -- dead leaves (deleted row entries) are "brown". If you have lots of dead leaves, the tree looks brown)
This is a somewhat rare case -- it typically occurs when you have an increasing key value (say a column populated by a sequence). So you have index entries 1,2,3,4,..... and so on. You tend to delete SOME but not all of the old entries (leaving you with entries 2,4,6,7,8,10,.... for example -- say every other row). Now, since you'll never insert a low value ever again -- that deleted leaf row may never be used again, it'll be "brown".
These indexes (rare as they are -- really, they are rare) may be coalesced, rebuilding -- nah, not necessary.
3) they do it all of the time, it is how they grow. If you look at your index one day and it is bigger the next, it did some splitting. It is normal; amoebas do it, indexes do it.
4) why? we do a 90/10 split not a 50/50 or 60/40 split on those. Indexes are pretty smart structures and it is the rare index that needs extra special attention.
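For point 2 above, a hedged sketch of how one might measure the ratio of deleted leaf rows (t_idx is a placeholder; note that VALIDATE STRUCTURE locks the underlying table, so run it at a quiet time):

analyze index t_idx validate structure;

select lf_rows, del_lf_rows,
       round(100*del_lf_rows/greatest(lf_rows,1),1) pct_deleted
  from index_stats;

INDEX_STATS holds a single row describing the most recently validated index in your session; if the percentage is persistently high on an increasing-key index, a COALESCE (not a rebuild) is the suggested remedy.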
Michael Dinh, May 29, 2003 - 4:46 pm UTC
According to the article at http://www.dbazine.com/weeg16.shtml the author rebuilds indexes based on certain criteria.
You are against rebuilding indexes.
Who's wrong and who's right?
May 29, 2003 - 7:01 pm UTC
well, let's see, they've set a "4 extent limit" -- one might ask "why"?
why they would rebuild an index based on number of extents is beyond me. I don't see the reasoning.
their goal:
"We want to reclaim space, so we will rebuild all of the indexes that have too much unused space. "
too bad for them that the indexes just grow all over again (see the riddle me this above... not only do they simply tend to grow right back to where they were -- they consume tons of resources in doing so!)
You know what is missing in that article? Science. I don't see any hypothesis, proof, I see nothing that says "see -- my system is so much better", or "see I save X megabytes PERMANENTLY of storage" (sure, they might have X megabytes more today -- but come back in a week)
There are rare degenerate cases when indexes need to be rebuilt.
I see no discussion of that there.
All I see is "I want space" -- but isn't it funny that they need to do this over and over (hence they get the space for like what, 5 minutes - then poof, its going back to the indexes which want to reach their steady state, a state that includes a bit of fluffiness -- but that it will always go back to).
Sort of self defeating eh?
Until I see the article that says:
a) I rebuild indexes
b) for this reason
c) and here is the proof that I achieve this goal in the long term
d) and here are the caveats to doing so, and times you don't want to do this cause it is counter productive
I will keep saying "don't do it"...
I'm not from Missouri but I still say "show me"
I didn't see anything that showed me the "why" and that this did more good than harm.
I can show you this doing more harm than good, however...
Where are the metrics that show "and the very next day our LIO's were down 5% under the same load"?
"And the very next day our redo generation stayed the same under the very same load"
"And we had this extra space for months..." (this is the funniest reason to rebuild, since they just get fat anyway -- all but the degenerate cases stop growing. if you put them on a diet by rebuilding, they just get fat again anyway)
Rebuilding Indexes can be good
Angus McTavish aged 75, May 30, 2003 - 7:13 am UTC
Tom,
Your arguments against index rebuilding seem very sound, so what do you think of this example where index rebuilding made a measurable improvement in an OLTP system? (Sadly, the measurements in question are long gone, but trust me, the index rebuilds made a significant improvement in response times.) The scenario I describe is probably not that uncommon I guess, which is why I'm putting the question, because you suggest that index rebuilds are necessary in rare cases only.
We had a table Table A with about 20 million rows.
Stats were computed on this table weekly.
The primary key on the table is an ascending sequence.
There are six other indexed columns all of which are populated with primary key values on other tables (where the primary keys are sequences).
Basically the values in all indexed columns are going up.
No updates, just inserts. About a year ago we removed all data in this table over 60 days old. This removed about 15 million rows. There was an immediate drop in performance when running standard queries against this table. Reanalyzing had no effect. We rebuilt the indexes and hey presto! performance returned to normal. We have a job which runs daily and removes all data over 60 days old so the size of the table remains fairly constant now. But if we don't rebuild the indexes periodically (I now do it about once every couple of months), performance degrades - not as dramatically as when we first removed 15 million rows, but it does degrade.
Now I imagine that it's quite common to delete data from tables in this way - even non-partitioned tables as in this case. What are your thoughts, old chap?
May 30, 2003 - 8:23 am UTC
My arguments against index rebuilding are not against index rebuilding
they are against the scheduled rebuild of indexes, especially ones that have no metrics after the fact to show that they provided long term, beneficial improvements.
You would be surprised how many rebuilds provide
a) short-term (like hours) space reclamation
b) long-term performance DECREASES
You should have COALESCED the indexes to reclaim free space. Faster, easier, doesn't require 2x the space.
Coalesce... reclaim the free space from mostly empty index leaf blocks that will not be reused otherwise due to your increasing sequence.
"old chap?"
Who does Angus work for?
Richard, May 30, 2003 - 10:31 am UTC
I want to work for Angus's company! A UK business employing 75 year-olds (apart from DIY superstores) is as rare as hen's teeth. Well done, Angus, a job for life!
(by the way, is that your real name?)
A reader, May 30, 2003 - 10:43 am UTC
Good Explanation!!
Coalesce - of course!
Angus McTavish aged 75, June 04, 2003 - 9:31 am UTC
Ah yes, old chap, you are quite right I'm sure. COALESCE would have done the trick just as nicely - better all round in fact, for the reasons you mention.
Please don't take this the wrong way.
md, June 09, 2003 - 2:36 pm UTC
Tom,
I love you man :))
A reader, June 17, 2003 - 2:36 pm UTC
Is there a way to rename an existing index? Thanks.
Reader, August 15, 2003 - 8:06 pm UTC
August 15, 2003 - 8:34 pm UTC
ops$tkyte@ORA920> alter index t3_b_idx rename to x;
Index altered.
ops$tkyte@ORA817DEV> alter index LOGICAL_DATE_PK rename to x;
Index altered.
Renaming an index - is it a new feature in 8i or 9i? Thanks.
A Reader, August 16, 2003 - 11:54 am UTC
August 16, 2003 - 1:58 pm UTC
i did it in 8i and 9i above....
Change of the execution plan after index is recreated
Branka Bowman, September 25, 2003 - 1:46 pm UTC
Following are two execution plans, one before I recreated the index and one after. I am trying to understand what the possible reason could be that Oracle decides not to use the existing index in the execution plan. After I recreate the index, Oracle starts using it again.
Statistics (before index is recreated)
----------------------------------------------------------
0 recursive calls
4 db block gets
3349 consistent gets
2299 physical reads
0 redo size
716 bytes sent via SQL*Net to client
250 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
Statistics (after index is recreated)
----------------------------------------------------------
0 recursive calls
0 db block gets
5 consistent gets
0 physical reads
0 redo size
896 bytes sent via SQL*Net to client
315 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
September 25, 2003 - 11:31 pm UTC
hmmm, i don't know. i don't even know what i'm looking at here.
hows about a test case - from start to finish.
Change of the execution plan after index is recreated
Branka Bowman, September 26, 2003 - 12:30 pm UTC
The index is on one table that is in use all the time. Records are inserted into and deleted from that table. The table has only around 500,000 records.
Following is explanation that DBA gave to me:
“Since Oracle makes these determinations internally taking into account a variety of factors it is difficult to pinpoint with 100% certainty, but one possibility is fragmentation of the index over a period of time. As you add and delete records, which also results in having to update the index, over a period of time the index becomes fragmented and the blocks become scattered around rather than in one contiguous space. At some point, Oracle may decide that it is better to do a full table scan rather than search through all these scattered blocks and ignore the index.”
Will using locally managed tablespaces resolve this problem in the future?
This is the SQL statement. I have the execution plan for the one that is using the index. Unfortunately, I did not save a copy of the one that is not using the index (before the index was recreated).
select TO_CHAR(BATCH01.ACCTG_DT,'YYYY-MM-DD') ,BATCH01.CREATE_USER_ID ,BATCH01.STAT_CD ,BATCH01.DOM_CHECK_AM ,BATCH01.FRGN_CHECK_AM ,
BATCH01.DOM_MONEY_ORDER_AM ,BATCH01.FRGN_MONEY_ORDER_A ,BATCH01.CR_CARD_AM ,BATCH01.ELECT_XFR_AM ,BATCH01.TOT_AM ,
BATCH01.LAST_UPDT_USER_ID ,BATCH01.SEQ_NO ,BATCH01.OPNG_CASH_DRAWER_A ,BATCH01.CASH_AM ,BATCH01.BILLS_AM ,
BATCH01.COINS_AM ,TO_CHAR(BATCH01.OPND_TS,'YYYY-MM-DDHH24.MI.SS') ,BATCH01.PYMNT_NO
from BATCH BATCH01
where (BATCH01.CREATE_USER_ID='AADOFO1'
and BATCH01.STAT_CD='O')
Execution Plan (after index is recreated)
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=100 Card=136 Bytes=8
024)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'BATCH' (Cost=100 Card=13
6 Bytes=8024)
2 1 INDEX (RANGE SCAN) OF 'BATPSK2' (NON-UNIQUE) (Cost=3 Car
d=136)
Statistics (after index is recreated)
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
720 bytes sent via SQL*Net to client
216 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
Statistics (before index is recreated)
----------------------------------------------------------
0 recursive calls
4 db block gets
3349 consistent gets
2299 physical reads
0 redo size
716 bytes sent via SQL*Net to client
250 bytes received via SQL*Net from client
1 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
0 rows processed
September 26, 2003 - 2:50 pm UTC
their explanation is not accurate.
indexes are managed in single blocks always.
they are always "scattered all around" from the very beginning.
i need the other plan.
Change of the execution plan after index is recreated
Branka Bowman, September 26, 2003 - 4:30 pm UTC
I did not save the other plan, but the difference is that the other plan was doing a full table scan.
I have a Statspack report from before the problem started and from while we had the problem.
September 26, 2003 - 8:11 pm UTC
statspack doesn't count.
we need the PLAN. we need to see the estimated cardinality.
Change of the execution plan after index is recreated
Branka Bowman, September 28, 2003 - 1:12 am UTC
Is there any way I can now find out what happened with the execution plan?
What can I monitor to be sure that I find the problem before the customers do?
September 28, 2003 - 9:23 am UTC
i can only keep saying "need to see said plan" before I can comment on possible causes.
Change of the execution plan after index is recreated
Branka Bowman, September 28, 2003 - 12:47 pm UTC
I created one possible scenario (using part of one of your examples) that could be something like what happens. Can you comment on this scenario?
SQL> drop table t;
Table dropped.
SQL> create table t as select mod(rownum,3) X, a.* from all_objects a;
Table created.
SQL> insert /*+ append */ into t select * from t;
24694 rows created.
SQL> commit;
Commit complete.
SQL> insert /*+ append */ into t select * from t;
49388 rows created.
SQL> commit;
Commit complete.
SQL> select count(*) from t;
COUNT(*)
----------
98776
SQL> update t set x = 4 where rownum <= 10;
10 rows updated.
SQL> create index t_idx on t(x) ;
Index created.
SQL> analyze table t compute statistics;
Table analyzed.
SQL> set autotrace traceonly explain
SQL> select * from t where x = 4;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=212 Card=24694 Bytes
=2222460)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=212 Card=24694 Bytes=2222
460)
SQL> drop index t_idx;
Index dropped.
SQL> create index t_idx on t(x);
Index created.
SQL> select * from t where x = 4;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=207 Card=24694 Bytes
=2222460)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=207 Card=24694
Bytes=2222460)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=7 Card=
24694)
September 28, 2003 - 1:04 pm UTC
yes, you took away information -- by dropping the index, you lost the statistics.
bad stats = bad plans.
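A quick way to see the statistics vanish for yourself: after the drop and create (and before any re-analyze), LAST_ANALYZED for the new index comes back null.

select index_name, num_rows, last_analyzed
  from user_indexes
 where table_name = 'T';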
Change of the execution plan after index is recreated
Branka Bowman, September 28, 2003 - 9:26 pm UTC
I did not understand the explanation. When I drop and create the index, the execution plan starts using the index. Before that it was not using the index, and the number of consistent gets and physical reads is much smaller after the index is dropped and created again.
First I had an index that the execution plan should have used, but did not. When I drop and create the index, the execution plan starts using it.
The estimated cardinality is not correct in either case, but in the second one the execution plan is using the index.
The statistics are obviously wrong. What would be the correct statistics for this example? Is there any way I can find out what statistics exist on a table (other than getting that information from the DBA)?
September 29, 2003 - 7:25 am UTC
you took away all of the index statistics. but anyway, use this:
ops$tkyte@ORA920> analyze table t compute statistics
2 for table
3 for all indexes
4 for all indexed columns;
or better yet
ops$tkyte@ORA920> exec dbms_stats.gather_table_stats( user, 'T', method_opt => 'for all indexed columns', cascade => true );
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> set autotrace traceonly explain
ops$tkyte@ORA920> select * from t where x = 4;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=10 Bytes=1000)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=2 Card=10 Bytes=1000)
2 1 INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE) (Cost=1 Card=10)
and you'll find it does the right thing from step 1.
I cannot reproduce your example in either of 8174 or 9203 btw. In both, i get:
ops$tkyte@ORA817DEV> analyze table t compute statistics;
Table analyzed.
ops$tkyte@ORA817DEV> set autotrace traceonly explain
ops$tkyte@ORA817DEV> select * from t where x = 4;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=44 Card=26861 Bytes=2363768)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=44 Card=26861 Bytes=2363768)
ops$tkyte@ORA817DEV> drop index t_idx;
Index dropped.
ops$tkyte@ORA817DEV> create index t_idx on t(x);
Index created.
ops$tkyte@ORA817DEV> select * from t where x = 4;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=44 Card=26861 Bytes=2363768)
1 0 TABLE ACCESS (FULL) OF 'T' (Cost=44 Card=26861 Bytes=2363768)
Now, I have defaults for pretty much everything.
Create partition table
Hanna, October 28, 2003 - 6:07 pm UTC
I have tables in schema A for each year, say Y99, Y00, Y01, Y02, Y03, and I am in the process of creating a partitioned table in schema B with data from all the tables in schema A.
Can you give a demo of creating a partitioned table along with the index creation on each partition? (Say we have ID as the common column on all the tables.)
October 28, 2003 - 9:11 pm UTC
i would "hope" you have more than ID as the commonality. ALL columns would be in common.
not sure what you are looking for?
a single create table that you would insert into
or a method to convert these existing tables into partitions of a partitioned table
I have examples of both on this site.
Reader, October 29, 2003 - 1:58 pm UTC
can you give the search words
October 29, 2003 - 2:29 pm UTC
partition existing table
More on Angus question.
Sai., October 29, 2003 - 6:56 pm UTC
Hi Tom,
In this page Angus said: "We had a table Table A with about 20 million rows.
Stats were computed on this table weekly.
The primary key on the table is an ascending sequence.
There are six other indexed columns all of which are populated with primary key
values on other tables (where the primary keys are sequences).
Basically the values in all indexed columns are going up.
No updates, just inserts. About a year ago we removed all data in this table
over 60 days old. This removed about 15 million rows. There was an immediate
drop in performance when running standard queries against this table.
Reanalyzing had no effect. We rebuilt the indexes and hey presto! performance
returned to normal."
Why would deleting a lot of rows degrade the performance of the standard queries they were running? I thought performance would still remain the same, because Oracle would still have to visit the same number of blocks as before the delete? Can you please explain what could possibly explain the degradation in performance?
Thanks,
October 30, 2003 - 7:00 am UTC
they would have achieved the same goal via a COALESCE of the index.
I'll assume their queries read "min" values lots.
ops$tkyte@ORA920LAP> create table t ( x int );
Table created.
ops$tkyte@ORA920LAP> insert into t select rownum from big_table.big_table;
1000000 rows created.
ops$tkyte@ORA920LAP> create index t_idx on t(x);
Index created.
ops$tkyte@ORA920LAP> analyze table t compute statistics for table for all indexes;
Table analyzed.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> set autotrace on
ops$tkyte@ORA920LAP> select min(x) from t;
MIN(X)
----------
1
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=152 Card=1 Bytes=8)
1 0 SORT (AGGREGATE)
2 1 INDEX (FULL SCAN (MIN/MAX)) OF 'T_IDX' (NON-UNIQUE) (Cost=152 Card=1000000 Bytes=8000000)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
3 consistent gets
0 physical reads
0 redo size
377 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
ops$tkyte@ORA920LAP> set autotrace off
ops$tkyte@ORA920LAP> delete from t where x <= 999999;
999999 rows deleted.
ops$tkyte@ORA920LAP> commit;
Commit complete.
ops$tkyte@ORA920LAP> set autotrace on
ops$tkyte@ORA920LAP> select min(x) from t;
MIN(X)
----------
1000000
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=152 Card=1 Bytes=8)
1 0 SORT (AGGREGATE)
2 1 INDEX (FULL SCAN (MIN/MAX)) OF 'T_IDX' (NON-UNIQUE) (Cost=152 Card=1000000 Bytes=8000000)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4454 consistent gets
2225 physical reads
133560 redo size
377 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
ops$tkyte@ORA920LAP> set autotrace off
ops$tkyte@ORA920LAP> alter index t_idx coalesce;
Index altered.
ops$tkyte@ORA920LAP> set autotrace on
ops$tkyte@ORA920LAP> select min(x) from t;
MIN(X)
----------
1000000
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=152 Card=1 Bytes=8)
1 0 SORT (AGGREGATE)
2 1 INDEX (FULL SCAN (MIN/MAX)) OF 'T_IDX' (NON-UNIQUE) (Cost=152 Card=1000000 Bytes=8000000)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6 consistent gets
2 physical reads
0 redo size
377 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
ops$tkyte@ORA920LAP> set autotrace off
What happened was the mass delete left lots of blocks empty -- but they would not be moved to a freelist until later. So, the min() had to look through lots of "dead blocks". The coalesce moved them (the empty ones) onto the freelist for the index.
freelist
Reader, October 30, 2003 - 11:32 am UTC
<quote>but they would not be moved to a freelist until later</quote>
In the above example, after deleting most of the rows, if I exit the session, log back in, and then perform the query, would Oracle by then have put those free index blocks on the freelist? Thanks.
Indexing
Suhas desai, November 18, 2003 - 1:56 am UTC
Simply fantastic
Execution Plan
Branka, December 31, 2003 - 4:14 pm UTC
What does Cost in the Execution Plan mean?
December 31, 2003 - 5:08 pm UTC
COST in explain plan
Neeraj, April 16, 2004 - 1:41 am UTC
While it's true that Oracle will choose the lower cost, a common trap people fall into is comparing costs and thinking that if I make the cost lower, I make the execution faster.
In a nutshell: don't compare "costs" -- never, never, never.
COST is not directly related to the actual work of the table access operation, so we should not conclude that a query having a low cost in the explain plan is tuned.
index used even when there are no records in the table?
A reader, April 22, 2004 - 5:38 am UTC
Tom,
This might be a simple question for you; I just wanted to understand more about indexes, so I am asking you this. Please read below...
index used even when there are no records in the table?
SQL>delete from emp;
14 rows deleted.
SQL>commit;
Commit complete.
SQL>set autotrace traceonly explain
SQL>select * from emp where EMPNO=7369;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP'
2 1 INDEX (UNIQUE SCAN) OF 'PK_EMP' (UNIQUE)
I have deleted the rows in emp, but why is it still using the index?
If I delete table records, it should delete from the index too, right? Or not?
OK, I will rebuild the index:
SQL>alter index pk_emp rebuild;
Index altered.
SQL>select * from emp where EMPNO=7369;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP'
2 1 INDEX (UNIQUE SCAN) OF 'PK_EMP' (UNIQUE)
But it's still using the index -- why? So, even if there are no records, it will use the index?
SQL>analyze table emp estimate statistics;
Table analyzed.
SQL>select * from emp where EMPNO=7369;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=87)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=87)
2 1 INDEX (UNIQUE SCAN) OF 'PK_EMP' (UNIQUE)
SQL>analyze table emp compute statistics;
Table analyzed.
SQL>select * from emp where EMPNO=7369;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=87)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=87)
2 1 INDEX (UNIQUE SCAN) OF 'PK_EMP' (UNIQUE)
What's your thought on this? So, if there is an index on the predicate column, it will use the index no matter whether there are records or not?
Thanks a lot,
April 22, 2004 - 8:10 am UTC
a primary key index like that -- definitely.. it knows it can get one row exactly by rowid using it.
created a new index, but it's still using the index
A reader, April 22, 2004 - 8:46 am UTC
Tom,
I created a new index, but it's still using the index:
SQL>create index EMP_DEPT_IDX on emp(deptno);
Index created.
SQL>set autotrace traceonly explain;
SQL>select * from emp where deptno=30;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=87)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=87)
2 1 INDEX (RANGE SCAN) OF 'EMP_DEPT_IDX' (NON-UNIQUE) (Cost=1 Card=1)
SQL>analyze table emp estimate statistics for table for all indexes for all indexed columns;
Table analyzed.
SQL>select * from emp where deptno=30;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=87)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=87)
2 1 INDEX (RANGE SCAN) OF 'EMP_DEPT_IDX' (NON-UNIQUE) (Cost=
1 Card=1)
SQL>set autotrace off;
SQL>select count(*) from emp;
COUNT(*)
----------
0
April 23, 2004 - 7:52 am UTC
and it is doing the right thing. Test the full scan against it. You have one of the two extremes here: a really small table, or a large table with few rows being returned and an index that is small with an AWESOME clustering factor.
Needs just a little more data to go over the edge:
ops$tkyte@ORA9IR2> create table emp as select * from scott.emp;
Table created.
ops$tkyte@ORA9IR2> create index emp_dept_idx on emp(deptno);
Index created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> analyze table emp compute statistics for table for all indexes for all indexed columns;
Table analyzed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select * from emp where deptno = 30;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=6 Bytes=240)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=6 Bytes=240)
2 1 INDEX (RANGE SCAN) OF 'EMP_DEPT_IDX' (NON-UNIQUE) (Cost=1 Card=6)
ops$tkyte@ORA9IR2> set autotrace off
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into emp select * from emp;
14 rows created.
ops$tkyte@ORA9IR2> analyze table emp compute statistics for table for all indexes for all indexed columns;
Table analyzed.
ops$tkyte@ORA9IR2> set autotrace traceonly explain
ops$tkyte@ORA9IR2> select * from emp where deptno = 30;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=12 Bytes=480)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=12 Bytes=480)
ops$tkyte@ORA9IR2> set autotrace off
Great explanation .. thanks tom
A reader, April 23, 2004 - 9:16 am UTC
Help
James, April 30, 2004 - 2:27 pm UTC
Hi Tom,
I have marked an index as unusable. But how do I make it usable again?
I used the following, but it fails:
SQL>alter index idx usable;
April 30, 2004 - 5:47 pm UTC
alter index idx rebuild;
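For reference, the full disable/load/rebuild cycle discussed earlier in this thread might look like the following sketch (the index name `idx` is hypothetical):

```sql
-- mark the index unusable before a bulk load
alter index idx unusable;

-- allow DML against the table while its index is unusable
alter session set skip_unusable_indexes = true;

-- ... perform the bulk load and commit here ...

-- there is no "alter index ... usable"; REBUILD is what
-- brings an unusable index back to a usable state
alter index idx rebuild;
```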
coalesce
John, June 16, 2004 - 12:40 am UTC
Tom,
thanks for the informative explanation as usual.
Do we have to coalesce an index even when we truncate the table? I don't think we need to do this, right?
And another general doubt on analyze table: say I have 60,000 rows in a table, and the table is analyzed.
Now if I delete 30,000 rows and re-insert them (similar rows), do I have to analyze again?
June 16, 2004 - 12:22 pm UTC
no, the index is truncated as well -- wiped out.
if the data you inserted was radically different -- sure. Suppose column X had 60,000 unique values. You delete 30k and load up 30k, but this time X is a constant (so it now has 30,001 unique values). That fact would be nice to know.
recreating Indexes
Reader, July 12, 2004 - 12:31 pm UTC
I am bringing up this topic again --
Recreating Indexes ?
Everyone has said that recreation of indexes is not required and is a waste of time.
I picked up an Index and
Did the following :
1.
exec show_space(....)
Free Blocks.............................10
Total Blocks............................118784
Total Bytes.............................973078528
Unused Blocks...........................3671
Unused Bytes............................30072832
Last Used Ext FileId....................45
Last Used Ext BlockId...................540681
Last Used Block.........................425
2.
sum of bytes from dba_segments = 928M
3.
I did a coalesce on the index :NO CHANGE AT All
4.
I rebuilt the index
Free Blocks.............................9187
Total Blocks............................110592
Total Bytes.............................905969664
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................45
Last Used Ext BlockId...................122889
Last Used Block.........................4096
5.sum of bytes from dba_segments = 864M
6. Is there a way to reduce the number of free blocks?
Why this reduction in space
and the increase in free blocks?
So is recreation required?
Please enlighten...
Thanks ,
July 12, 2004 - 8:46 pm UTC
No, I don't think anyone has ever said that -- never. I haven't, at least.
"the scheduled rebuild of all indexes or lots of them 'just cause we always have and everyone knows you should' " is what we are against.
coalesce will never make the index size smaller, it'll put blocks on the freelist IF they are candidates for it.
the point is -- if you rebuild this index, how many hours/minutes/days does it take to become the exact same size again? Allocating space is expensive; why keep doing it over and over (reminds me of the Energizer bunny commercial if you've ever seen that -- but the bunny is pointed into a corner and just keeps going and going and going).
If you ask me you've done real good here, you have:
1) compacted the leaf rows, so if you range scan, that will be improved
2) not destroyed the branches of the index, so as the leaves are put back on -- as they will -- the branches are all there, less split action going on
3) did this totally online with minimal overhead
good job!
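The two operations being weighed here can be sketched side by side (the index name is hypothetical). COALESCE compacts adjacent sparse leaf blocks in place and returns emptied ones to the index's freelist; REBUILD recreates the whole segment:

```sql
-- online, low overhead: merge sparse leaf blocks together and
-- put the freed ones on the index freelist for reuse
-- (the segment itself does not shrink)
alter index t_idx coalesce;

-- heavyweight: recreate the index from scratch, reallocating
-- space that subsequent inserts may simply grow back
alter index t_idx rebuild;
```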
overall size of index
Dratz, August 19, 2004 - 6:39 pm UTC
I'm working on a project that will load approx 70 million rows a month for reporting/analysis. The database designer has a regular table with a 4-column index that responds very well with about 6 million rows (though I caught him stuffing the block buffers and can't bounce his database to prove it).
My contention (I think a proper fact table and dimensions are better here) is that in a year's time (or two or three) rebuilding indexes will not be realistic after we have a couple of billion rows. I'm suggesting partitioning the table by day with local indexes, not only for current performance but for long-term performance and options (aging data out, etc). Am I just off the mark here -- will a multi-billion row table being hit by 3,000 users be just fine as a big fat table (read only except for loads)? I'm also concerned (am I too anal?) that the rows are 10x bigger than they need to be (varchar fields, etc).
I know there's a difference between a big database (even a reporting one) and a datawarehouse, but my management just sees 2 dba's with different ideas. If I'm wrong, let me know and I'll cut my losses, if not can I at least get some confirmation that I'm on the right track?
The indexed columns will, over time, add new values and change the distribution.
Queries will always include a date component (he doesn't have an index there but could add one).
August 19, 2004 - 8:16 pm UTC
if the queries always use date and the data is partitioned by date, using partitioning only makes sense here -- for all of the reasons you state.
also, the proper size fields SHOULD ABSOLUTELY be used, period. no question, none.
thanks
Dratz, August 24, 2004 - 5:54 pm UTC
A reader, October 21, 2004 - 11:28 am UTC
Tom,
In a table with approx 9 million rows, a query on an indexed column such as select * from table where pid='xxx' is taking a lot of time. Full scan with cost=51075. I think it is very expensive. The table is analyzed twice daily. I think the scan should be index (rowid). I do not think it should be analyzed twice daily and am trying to counter that.
I issued "analyze table t compute statistics for table for all indexes for all indexed columns;" and it was taking a lot of time (I think I waited 10 mins before quitting it with ctrl-c).
May I know whats happening here?
1. Could estimate statistics be run twice daily? (In fact it's run on all tables.)
2. Why is compute statistics taking so long? (Is it actually running?)
3. Why is the cost so high?
Please suggest remedies. Thanks for all the help.
October 21, 2004 - 2:56 pm UTC
there is less said here than needed.
10 entire minutes -- that is not very long.
why is "51075" very expensive??
how many rows does "where pid='xxx'" return
o at most
o at least
o on average
what is the clustering factor of the index on this column
how many blocks are in the table
A reader, October 21, 2004 - 12:05 pm UTC
Tom,
I wanted to give some more information regarding last post.
The queries on indexed columns run fast and are performed using an index scan, whereas a simple query on a nonindexed column is not fast and is performed by a full scan.
October 21, 2004 - 2:58 pm UTC
well, doesn't that just about totally conflict with what you said above?
a simple query on a nonindexed column -- hhmmmmm, think about it -- what else COULD it do other than full scan?!?
different execution plan
Branka, January 10, 2005 - 3:05 pm UTC
I ran the same select on the DEV and PROD databases and got different execution plans. The indexes are the same.
SELECT customer.cust_id
FROM customer, all_phone, all_email
WHERE customer.cust_id = all_phone.cust_id
AND all_phone.phone = 4102084800
AND customer.cust_id = all_email.cust_id
AND upper(all_email.email_addr) = 'X'
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=1 Bytes=43)
1 0 NESTED LOOPS (Cost=4 Card=1 Bytes=43)
2 1 MERGE JOIN (CARTESIAN) (Cost=3 Card=3 Bytes=111)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'ALL_EMAIL' (Cost=2 Card=2 Bytes=48)
4 3 INDEX (RANGE SCAN) OF 'NDX_ALL_EMAIL_ADDR' (NON-UNIQUE) (Cost=4 Card=2)
5 2 BUFFER (SORT) (Cost=1 Card=2 Bytes=26)
6 5 INDEX (RANGE SCAN) OF 'PK_ALL_PHONE' (UNIQUE) (Cost=2 Card=2 Bytes=26)
7 1 INDEX (UNIQUE SCAN) OF 'PK_CUSTOMER' (UNIQUE) (Cost=1 Card=1 Bytes=6)
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=46)
1 0 NESTED LOOPS (Cost=2 Card=1 Bytes=46)
2 1 NESTED LOOPS (Cost=2 Card=1 Bytes=33)
3 2 TABLE ACCESS (BY INDEX ROWID) OF 'ALL_EMAIL' (Cost=2 Card=1 Bytes=27)
4 3 INDEX (RANGE SCAN) OF 'NDX_ALL_EMAIL_ADDR' (NON-UNIQUE) (Cost=1 Card=1)
5 2 INDEX (UNIQUE SCAN) OF 'PK_CUSTOMER' (UNIQUE)
6 1 INDEX (UNIQUE SCAN) OF 'PK_ALL_PHONE' (UNIQUE)
On production I have CARTESIAN join.
Can you help me understand what is going on and what I can do to change this?
Thanks
January 10, 2005 - 4:21 pm UTC
are the estimated cardinalities in the plans anywhere near close to reality (cause if they are -- it is doing ok in both places)
different execution plan
Branka, January 10, 2005 - 3:19 pm UTC
I run
analyze table customer compute statistics
for table for all indexes for all indexed columns;
and same for all_phone and all_email.
Now my execution plans are same.
Before, I analyzed the tables and indexes with dbms_stats.gather_table_stats.
I am not sure which one to use, since it seems that analyze table works much better.
I am not sure what I should do from now on.
January 10, 2005 - 4:23 pm UTC
tell you what
delete the stats and regather with dbms_stats and report back.
Perhaps you ran dbms_stats "prematurely", before the data was available to be analyzed.
Gregory, January 10, 2005 - 7:45 pm UTC
Two questions...
What kind of "delete" stats would you suggest above ? Analyze or dbms_stats ?
> Perhaps you ran dbms_stats "prematurely", before the
> data was available to be
> analyzed.
What do you mean by "available to be analyzed"? The problem with running dbms_stats here might be knowing WHEN it's really available?
Thanks.
January 10, 2005 - 8:19 pm UTC
analyze, since you use analyze
what I meant was, perhaps you used dbms_stats before the data was there.
and make sure you do the same things
analyze table t compute statistics;
is most close to dbms_stats.gather_table_stats( user, 'T', cascade=>true );
different execution plan
Branka, January 11, 2005 - 10:57 am UTC
It was the same again.
January 11, 2005 - 1:35 pm UTC
show us the analyze command you use and the information from user_tables, user_indexes and user_tab_histograms it got
then show us the SAME for dbms_stats
and compare them. (i've a feeling you are not gathering EQUIVALENT statistics, you have not shared the commands with us)
for example:
ops$tkyte@ORA9IR2> create table t as select * from all_objects;
Table created.
ops$tkyte@ORA9IR2> create index t_idx1 on t(object_name);
Index created.
ops$tkyte@ORA9IR2> create index t_idx2 on t(object_id);
Index created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> analyze table t compute statistics;
Table analyzed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table stats
2 as
3 select cast( 'ANALYZE TABLE' as varchar2(20) ) what, a.*
4 from table( cols_as_rows( 'select * from user_tables where table_name = ''T'' ' ) ) a;
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into stats
2 select 'ANALYZE/IDX1', a.*
3 from table( cols_as_rows( 'select * from user_indexes where index_name = ''T_IDX1'' ' ) ) a;
50 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into stats
2 select 'ANALYZE/IDX2', a.*
3 from table( cols_as_rows( 'select * from user_indexes where index_name = ''T_IDX2'' ' ) ) a;
50 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select column_name, count(*) from user_tab_histograms group by column_name;
COLUMN_NAME COUNT(*)
--------------- ----------
CREATED 2
DATA_OBJECT_ID 2
GENERATED 2
LAST_DDL_TIME 2
OBJECT_ID 2
OBJECT_NAME 2
OBJECT_TYPE 2
OWNER 2
SECONDARY 2
STATUS 2
SUBOBJECT_NAME 2
TEMPORARY 2
TIMESTAMP 2
13 rows selected.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> analyze table t delete statistics;
Table analyzed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec dbms_stats.gather_table_stats( user, 'T', cascade=>true );
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into stats
2 select 'DBMS TABLE' what, a.*
3 from table( cols_as_rows( 'select * from user_tables where table_name = ''T'' ' ) ) a;
46 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into stats
2 select 'DBMS/IDX1', a.*
3 from table( cols_as_rows( 'select * from user_indexes where index_name = ''T_IDX1'' ' ) ) a;
50 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into stats
2 select 'DBMS/IDX2', a.*
3 from table( cols_as_rows( 'select * from user_indexes where index_name = ''T_IDX2'' ' ) ) a;
50 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select column_name, count(*) from user_tab_histograms group by column_name;
COLUMN_NAME COUNT(*)
--------------- ----------
CREATED 2
DATA_OBJECT_ID 2
GENERATED 2
LAST_DDL_TIME 2
OBJECT_ID 2
OBJECT_NAME 2
OBJECT_TYPE 2
OWNER 2
SECONDARY 2
STATUS 2
SUBOBJECT_NAME 2
TEMPORARY 2
TIMESTAMP 2
13 rows selected.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> column analyzed format a20
ops$tkyte@ORA9IR2> column dbms_stat format a20
ops$tkyte@ORA9IR2> select cname,
2 max(decode( what, 'ANALYZE TABLE', val )) analyzed,
3 max(decode( what, 'DBMS TABLE', val )) dbms_stat
4 from stats
5 group by cname
6 having decode( max(decode( what, 'ANALYZE TABLE', val )), max(decode( what, 'DBMS TABLE', val )),0,1) = 1
7 /
CNAME ANALYZED DBMS_STAT
------------------------------ -------------------- --------------------
AVG_ROW_LEN 96 93
AVG_SPACE 876 0
EMPTY_BLOCKS 115 0
GLOBAL_STATS NO YES
ops$tkyte@ORA9IR2> select cname,
2 max(decode( what, 'ANALYZE/IDX1', val )) analyzed,
3 max(decode( what, 'DBMS/IDX1', val )) dbms_stat
4 from stats
5 group by cname
6 having decode( max(decode( what, 'ANALYZE/IDX1', val )), max(decode( what, 'DBMS/IDX1', val )),0,1) = 1
7 /
CNAME ANALYZED DBMS_STAT
------------------------------ -------------------- --------------------
GLOBAL_STATS NO YES
ops$tkyte@ORA9IR2> select cname,
2 max(decode( what, 'ANALYZE/IDX2', val )) analyzed,
3 max(decode( what, 'DBMS/IDX2', val )) dbms_stat
4 from stats
5 group by cname
6 having decode( max(decode( what, 'ANALYZE/IDX2', val )), max(decode( what, 'DBMS/IDX2', val )),0,1) = 1
7 /
CNAME ANALYZED DBMS_STAT
------------------------------ -------------------- --------------------
GLOBAL_STATS NO YES
<b>shows that the stats are basically "the same" when you use equivalent commands and a change in plans would not be in the cards</b>
create or replace type myScalarType as object
( rnum number, cname varchar2(30), val varchar2(4000) )
/
create or replace type myTableType as table of myScalarType
/
create or replace
function cols_as_rows( p_query in varchar2 ) return myTableType
-- this function is designed to be installed ONCE per database, and
-- it is nice to have ROLES active for the dynamic sql, hence the
-- AUTHID CURRENT_USER
authid current_user
-- this function is a pipelined function -- meaning, it'll send
-- rows back to the client before getting the last row itself
-- in 8i, we cannot do this
PIPELINED
as
l_theCursor integer default dbms_sql.open_cursor;
l_columnValue varchar2(4000);
l_status integer;
l_colCnt number default 0;
l_descTbl dbms_sql.desc_tab;
l_rnum number := 1;
begin
-- parse, describe and define the query. Note, unlike print_table
-- i am not altering the session in this routine. the
-- caller would use TO_CHAR() on dates to format and if they
-- want, they would set cursor_sharing. This routine would
-- be called rather infrequently, I did not see the need
-- to set cursor sharing therefore.
dbms_sql.parse( l_theCursor, p_query, dbms_sql.native );
dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );
for i in 1 .. l_colCnt loop
dbms_sql.define_column( l_theCursor, i, l_columnValue, 4000 );
end loop;
-- Now, execute the query and fetch the rows. Iterate over
-- the columns and "pipe" each column out as a separate row
-- in the loop. increment the row counter after each
-- dbms_sql row
l_status := dbms_sql.execute(l_theCursor);
while ( dbms_sql.fetch_rows(l_theCursor) > 0 )
loop
for i in 1 .. l_colCnt
loop
dbms_sql.column_value( l_theCursor, i, l_columnValue );
pipe row
(myScalarType( l_rnum, l_descTbl(i).col_name, l_columnValue ));
end loop;
l_rnum := l_rnum+1;
end loop;
-- clean up and return...
dbms_sql.close_cursor(l_theCursor);
return;
end cols_as_rows;
/
why extra redo...
Aru, January 12, 2005 - 8:20 pm UTC
Hi Tom,
I have just confirmed that extra redo is generated after index rebuilds but don't understand why. Please can you help?
Thanks,
Aru
January 13, 2005 - 8:08 am UTC
can you re-read the original answer? I addressed that there; that was the crux of the answer, in fact.
OK
Kumar, January 19, 2005 - 12:26 pm UTC
Dear Tom,
Are indexes and tables logically or physically related to
each other? Does any factor come into the picture when we go for
Index Organized Tables (IOT) instead of Heap Organized Tables?
Your response is solicited.
Bye!
January 20, 2005 - 9:28 am UTC
they are physically related to each other.
I'm not sure how that fact would drive you towards an IOT or heap table - if you have either of my books -- I cover the deciding factors in some detail there.
Index Rebuilding
Swapnil, March 16, 2005 - 3:58 am UTC
How do I know that a particular index is corrupted, so that I rebuild only that one?
Why analyze
Shahrouz Darvishzadeh, May 25, 2005 - 10:40 am UTC
Hello Tom
Analyzing tables in a data warehouse with hundreds of millions of rows in each table is an expensive operation. I need about half an hour per table for
dbms_stats.GATHER_TABLE_STATS('scott','t',ESTIMATE_PERCENT=>10). Is there really no way to avoid analyzing every time we have huge DML against our tables? With 10g I once heard a rumor that Oracle analyzes "on the fly" if necessary. Isn't that true?
Thank you in advance
May 25, 2005 - 1:27 pm UTC
well, this could be a book (and Jonathan Lewis is working on such a beast as I understand it, all about the CBO and stats are such a big part of that).
Say you load a new partition. And say your queries all use partition elimination and the optimizer knows at parse time what partition it'll hit (the single partition). You do not need global stats. Just gather on the new partition.
Say you load a new partition and add 1,000,000 rows. Do you need to gather global stats? Maybe, maybe not. You know you just added 1,000,000 rows (a gather stats on the partition would tell you that) and how many blocks -- just SET the global stats with that information. You know the data you loaded isn't very skewed so the distribution of values doesn't change. For some column (say a date field), you know you have a new high bound, you could just tell us that. That is, you could use your knowledge of the data to GIVE the optimizer the information it needs.
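The "just SET the global stats" idea above can be sketched with dbms_stats (the table name and the figures are made up; in practice the row and block counts would come out of your load process):

```sql
begin
   -- tell the optimizer about the new volume without rescanning
   -- the whole table; the numbers are supplied, not gathered
   dbms_stats.set_table_stats
   ( ownname => user,
     tabname => 'FACT_TABLE',     -- hypothetical table
     numrows => 301000000,        -- prior 300m plus the 1m loaded
     numblks => 4200000 );        -- known from the load job
end;
/
```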
Else, you still have parallel -- are you using that? And if not, gather the stats for many tables at the same time.
10g (9i in fact) has dynamic sampling -- at query hard parse time, it can go out and do a mini gather stats in order to parse the query. Not sure you want to be doing that for every hard parse in a data warehouse.
Partition
Shahrouz DARVISHZADEH, May 30, 2005 - 3:54 am UTC
Thank you very much.
One question about partitions: how can I gather stats for a single partition? With dbms_stats it is not possible, is it?
May 30, 2005 - 8:50 am UTC
sure it is, see the Supplied Packages guide. dbms_stats lets you pass the partition name and specify the granularity when gathering stats.
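A sketch of such a partition-level gather (table and partition names are hypothetical); PARTNAME limits the gather to one partition and GRANULARITY controls whether global statistics are touched at all:

```sql
begin
   dbms_stats.gather_table_stats
   ( ownname     => user,
     tabname     => 'FACT_TABLE',     -- hypothetical table
     partname    => 'P_20050530',     -- the just-loaded partition
     granularity => 'PARTITION',      -- partition-level stats only
     cascade     => true );           -- gather on local indexes too
end;
/
```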
Adding a column in index
A reader, September 12, 2005 - 4:52 pm UTC
Tom,
Is it possible to add one more column to an existing unique index? I have a table with 10,000 rows and it has a unique index created using 3 columns. Now I need to add a new column to make a row unique.
1) Do I have to drop and create this index, or can I just do it using an alter index ... statement?
2) Can it be done on a 10,000 row table, or do I need to empty the table first?
Thanks,
September 12, 2005 - 5:42 pm UTC
you have to create a new index; you cannot add a column to an index, as it would require an entire rebuild.
create new index
drop old index
optionally rename new index to old name.
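Those three steps, sketched (table, column, and index names are hypothetical; building the new index before dropping the old one keeps uniqueness enforced throughout):

```sql
-- 1) create the wider unique index under a temporary name
create unique index t_uk_new on t(c1, c2, c3, c4);

-- 2) drop the old three-column index
drop index t_uk;

-- 3) optionally take over the old name
alter index t_uk_new rename to t_uk;
```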
from Sweden
A Reader, September 26, 2005 - 8:02 am UTC
Hi Tom,
Thanks for your efforts in giving such elaborate and easy to understand explanations.
I've been waiting for a long time to ask a question but the link is not enabled yet. I'm not sure whether you'll consider it a new question or not, but here is my problem.
We have a datawarehouse in which operations is experiencing a significant slow down in index rebuild process for bitmap indexes. What they do is disable the bitmap indexes, then load the data and after that rebuild the indexes again. According to them they load 2-3 million rows everyday.
I understand from your explanation above that rebuilding bitmap index after data loading is the right approach in case of bitmap indexes. To suggest some improvement I want to first understand
1) What goes on internally during an index rebuild? What resources are consumed, etc., so that I can start monitoring those.
2) And is there a way to make this process faster using hints, by setting some init parameters, or by following another approach? I tried using nologging and parallel hints but they didn't work, as some of these indexes are INDEX SUBPARTITIONs. This index rebuild process is written in a PL/SQL procedure (which was not written by us) and uses a dynamic query to build the ALTER INDEX .. REBUILD command.
I checked with our AIX team as well, for any possibility of I/O being the problem, but they are helpless in this regard as they also do not have a clue as to where to start from. They were only able to inform me that during the index rebuild they noticed a high amount of read activity going on, which is obvious, I guess.
Please guide me what all can I do to improve the process.
September 26, 2005 - 9:31 am UTC
1) this index rebuild, of an unmaintained index, is really a "create" -- it'll read the table to build the new index.
2) ensuring you have large _area_size settings (if using a serial rebuild, you can set the workarea policy to manual and set big sort sizes).
using nologging (remember to rebuild). nologging works OK with subpartitions? can you explain "didn't work"?
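The serial-rebuild tuning mentioned here might look like this in the session doing the rebuild (the sizes and the index name are illustrative only):

```sql
-- switch this session to manual memory management so the
-- *_area_size parameters take effect for the rebuild
alter session set workarea_size_policy = manual;

-- give the index-creation sort a big work area
alter session set sort_area_size = 104857600;   -- 100 MB

-- hypothetical index; nologging skips redo for the build
-- (remember the implications for media recovery/backups)
alter index big_idx rebuild nologging;
```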
from Sweden
A Reader, September 26, 2005 - 9:46 am UTC
Hi,
Thanks for the prompt reply I've been eagerly waiting for the response :-)
I'll check the options suggested by you in detail.
The nologging parallel command failed with the message:
ORA-14189: this physical attribute may not be specified for an index subpartition
BEGIN
*
ERROR at line 1:
ORA-14189: this physical attribute may not be specified for an index subpartition
ORA-06512: at line 6
Is it because it has been written in a PL/SQL script which is creating the ALTER INDEX command dynamically?
I suggested these hints, and the application users who are responsible for making changes in the script made the change.
September 26, 2005 - 10:55 am UTC
SQL> CREATE TABLE t
2 (
3 dt date,
4 x int,
5 y varchar2(25)
6 )
7 PARTITION BY RANGE (dt)
8 subpartition by hash(x)
9 (
10 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
11 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
12 PARTITION junk VALUES LESS THAN (MAXVALUE)
13 )
14 /
Table created.
SQL> create index t_idx on t(dt,x) local;
Index created.
SQL>
SQL> alter index t_idx nologging;
Index altered.
SQL> select subpartition_name, logging from user_ind_subpartitions where index_name = 'T_IDX';
SUBPARTITION_NAME LOG
------------------------------ ---
SYS_SUBP4578 NO
SYS_SUBP4579 NO
SYS_SUBP4580 NO
you can alter the "index" and it gets everything.
from Sweden
A Reader, September 26, 2005 - 10:01 am UTC
More follow up on the above query.
I just checked: all _AREA_SIZE parameters are 16MB in size. Should I suggest increasing them further for the instance or for that session? I guess increasing at the session level makes sense, but will the alter session command work in the PL/SQL procedure? I've never tried this kind of command in a PL/SQL procedure and can't even test it, as I'm not authorized to change their procedure.
September 26, 2005 - 10:58 am UTC
what is the workarea policy? manual or auto?
(and yes, >16mb for this batch would make sense)
from Sweden
A Reader, September 27, 2005 - 2:59 am UTC
Hi tom,
thanks a lot for the reply
the workarea policy is auto at present and it is Oracle 9.2.0 release.
September 27, 2005 - 11:11 am UTC
what is your pga_aggregate_target - unless you are shared server (which you should not be for batch) the *_size parameters are not used (in 10g, *_size are not used if the workarea policy is auto even in shared server)
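To see how much of the pga_aggregate_target a rebuild session is actually getting under the automatic policy, the 9i dynamic views can be sampled while it runs (a sketch; run as a privileged user):

```sql
-- instance-wide PGA picture: the target, what is allocated,
-- and whether Oracle has had to over-allocate past the target
select name, value
  from v$pgastat
 where name in ('aggregate PGA target parameter',
                'total PGA allocated',
                'over allocation count');

-- currently active sort/hash work areas and their sizes,
-- including any spill to temp segments
select operation_type, actual_mem_used, tempseg_size
  from v$sql_workarea_active;
```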
bitmap index and cumulative inserts
lh, September 27, 2005 - 4:39 am UTC
I have an obscure memory of having read somewhere that 10gR2 has (major) enhancements for using bitmap indexes with cumulative inserts.
Do you have any experience with this?
September 27, 2005 - 11:21 am UTC
if you mean incremental inserts - yes, 10g did in fact...
ops$tkyte@ORA9IR2> create table t ( x int );
Table created.
ops$tkyte@ORA9IR2> create bitmap index t_idx on t(x);
Index created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> begin
2 for i in 1 .. 100000
3 loop
4 insert into t values ( mod(i,100) );
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec show_space( 'T_IDX', user, 'INDEX' );
Free Blocks............................. 4
Total Blocks............................ 512
Total Bytes............................. 4,194,304
Total MBytes............................ 4
Unused Blocks........................... 3
Unused Bytes............................ 24,576
Last Used Ext FileId.................... 1
Last Used Ext BlockId................... 59,913
Last Used Block......................... 125
PL/SQL procedure successfully completed.
ops$tkyte@ORA9IR2> Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
[tkyte@dellpe ~]$ ora10gr2
[tkyte@dellpe ~]$ plus
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 27 11:10:29 2005
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ops$tkyte@ORA10GR2> ops$tkyte@ORA10GR2> set echo on
ops$tkyte@ORA10GR2> @test
ops$tkyte@ORA10GR2> drop table t;
drop table t
*
ERROR at line 1:
ORA-00942: table or view does not exist
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create table t ( x int );
Table created.
ops$tkyte@ORA10GR2> create bitmap index t_idx on t(x);
Index created.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> begin
2 for i in 1 .. 100000
3 loop
4 insert into t values ( mod(i,100) );
5 commit;
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec show_space( 'T_IDX', user, 'INDEX' );
Unformatted Blocks ..................... 0
FS1 Blocks (0-25) ..................... 0
FS2 Blocks (25-50) ..................... 6
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 0
Full Blocks ..................... 22
Total Blocks............................ 32
Total Bytes............................. 262,144
Total MBytes............................ 0
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 4
Last Used Ext BlockId................... 1,673
Last Used Block......................... 8
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR2>
<b>
4m versus 256k.... (10gr1 and 10gr2 would do this)</b>
from sweden
Reader, September 28, 2005 - 2:06 am UTC
Hi
Thanks again for the reply. Here is what you asked for
Here are the value of parameters
pga_aggregate_target 3145728000
bitmap_merge_area_size 16777216
create_bitmap_area_size 16777216
db_16k_cache_size 0
db_2k_cache_size 0
db_32k_cache_size 0
db_4k_cache_size 0
db_8k_cache_size 0
db_block_size 32768
db_cache_size 5251268608
db_keep_cache_size 0
db_recycle_cache_size 0
global_context_pool_size
hash_area_size 16777216
java_max_sessionspace_size 0
java_pool_size 0
large_pool_size 134217728
max_dump_file_size UNLIMITED
object_cache_max_size_percent 10
object_cache_optimal_size 102400
olap_page_pool_size 33554432
oracle_trace_collection_size 5242880
parallel_execution_message_size 16384
sga_max_size 5725201576
shared_pool_reserved_size 10905190
shared_pool_size 218103808
sort_area_retained_size 16777216
sort_area_size 16777216
workarea_size_policy AUTO
The SGA is approximately 5GB.
Now that pga_aggregate_target is approximately 3GB, is it possible that Oracle gives the whole 3GB worth of *_AREA_SIZE memory to this one batch session (it is a night batch and no other session is active)?
There are around 300 million rows in the partition, which receives 5 million new rows daily, and only that index is rebuilt.
Also, the index is rebuilt with parallel degree 22, as it is a big box with 12 (logical) CPUs, and I/O is also quite fast according to the system administrators.
Are there any init parameters that should be revised to take full advantage of these 12 CPUs?
at present
CPU_COUNT 12
parallel_threads_per_cpu 2
September 28, 2005 - 9:45 am UTC
the way pga_aggregate_target works is that any one session gets a percentage of it, never all of it.
how much ram do you have on the machine itself?
when you build the index, what are the "big events" - perhaps take a statspack snap, run the rebuild and then take another.
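Bracketing the rebuild with statspack snapshots, as suggested, would look roughly like this (assuming statspack is installed under PERFSTAT; the index name here is made up):

```sql
exec statspack.snap

alter index big_part_idx rebuild parallel 22;

exec statspack.snap
-- then run ?/rdbms/admin/spreport.sql and pick the two snapshot ids
```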
from sweden
reader, September 29, 2005 - 10:55 am UTC
The machine has 16GB of RAM, of which this instance has a 5+ GB SGA; the other 4 instances have 500MB SGAs.
Following are the top 5 wait event from statspack
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
direct path read 307,859 70,662 92.27
PX Deq: Execute Reply 14,581 3,374 4.41
db file scattered read 7,316 872 1.14
buffer busy waits 8,487 472 .62
PX Deq: Table Q Get Keys 471 471 .61
-------------------------------------------------------------
Here is some of the relevent part of the statspack report
Snap Id Snap Time Sessions Curs/Sess Comment
--------- ------------------ -------- --------- -------------------
Begin Snap: 2948 29-Sep-05 04:04:31 13 288.5
End Snap: 2949 29-Sep-05 05:04:40 57 82.7
Elapsed: 60.15 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 5,008M Std Block Size: 32K
Shared Pool Size: 208M Log Buffer: 3,200K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 5,179.27 4,672,994.00
Logical reads: 744.30 671,541.00
Block changes: 23.37 21,086.00
Physical reads: 1,382.29 1,247,173.25
Physical writes: 28.28 25,512.00
User calls: 0.76 684.25
Parses: 0.67 606.50
Hard parses: 0.02 19.50
Sorts: 0.39 349.50
Logons: 0.01 11.75
Executes: 1.08 972.25
Transactions: 0.00
% Blocks changed per Read: 3.14 Recursive Call %: 91.97
Rollback per transaction %: 0.00 Rows per Sort: 4873.23
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.68 Redo NoWait %: 100.00
Buffer Hit %: 95.43 In-memory Sort %: 100.00
Library Hit %: 99.04 Soft Parse %: 96.78
Execute to Parse %: 37.62 Latch Hit %: 97.11
Parse CPU to Parse Elapsd %: 0.75 % Non-Parse CPU: 99.97
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 88.79 89.21
% SQL with executions>1: 98.69 98.69
% Memory for SQL w/exec>1: 97.65 97.65
Background Wait Events for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file parallel write 867 0 107 123 216.8
control file parallel write 1,199 0 10 8 299.8
db file scattered read 60 0 4 64 15.0
db file sequential read 123 0 2 18 30.8
control file sequential read 480 0 1 3 120.0
log file parallel write 602 0 0 0 150.5
LGWR wait for redo copy 113 0 0 0 28.3
rdbms ipc reply 31 0 0 0 7.8
buffer busy waits 2 0 0 1 0.5
rdbms ipc message 6,934 4,747 15,789 2277 1,733.5
pmon timer 1,203 1,203 3,524 2929 300.8
smon timer 27 8 3,272 ###### 6.8
-------------------------------------------------------------
Buffer Pool Statistics for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> Standard block size Pools D: default, K: keep, R: recycle
-> Default Pools for other block sizes: 2k, 4k, 8k, 16k, 32k
Free Write Buffer
Number of Cache Buffer Physical Physical Buffer Complete Busy
P Buffers Hit % Gets Reads Writes Waits Waits Waits
--- ---------- ----- ----------- ----------- ---------- ------- -------- ------
D 159,004 95.4 2,681,887 122,649 87,071 0 0 8,487
-------------------------------------------------------------
Buffer Pool Advisory for DB: FIN02 Instance: FIN02 End Snap: 2949
-> Only rows with estimated physical reads >0 are displayed
-> ordered by Block Size, Buffers For Estimate (default block size first)
Size for Size Buffers for Est Physical Estimated
P Estimate (M) Factr Estimate Read Factor Physical Reads
--- ------------ ----- ---------------- ------------- ------------------
D 496 .1 15,748 1.47 53,027,287
D 992 .2 31,496 1.35 48,886,853
D 1,488 .3 47,244 1.25 45,200,706
D 1,984 .4 62,992 1.18 42,705,825
D 2,480 .5 78,740 1.14 40,983,924
D 2,976 .6 94,488 1.10 39,725,583
D 3,472 .7 110,236 1.07 38,700,190
D 3,968 .8 125,984 1.04 37,648,903
D 4,464 .9 141,732 1.02 36,781,844
D 4,960 1.0 157,480 1.00 36,158,447
D 5,008 1.0 159,004 1.00 36,104,486
D 5,456 1.1 173,228 0.99 35,628,377
D 5,952 1.2 188,976 0.97 34,975,660
D 6,448 1.3 204,724 0.95 34,461,251
D 6,944 1.4 220,472 0.94 33,928,278
D 7,440 1.5 236,220 0.93 33,545,807
D 7,936 1.6 251,968 0.92 33,095,029
D 8,432 1.7 267,716 0.91 32,847,775
D 8,928 1.8 283,464 0.90 32,618,230
D 9,424 1.9 299,212 0.90 32,457,077
D 9,920 2.0 314,960 0.89 32,309,184
-------------------------------------------------------------
Buffer wait Statistics for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> ordered by wait time desc, waits desc
Tot Wait Avg
Class Waits Time (s) Time (ms)
------------------ ----------- ---------- ---------
data block 8,362 472 56
file header block 76 1 16
undo header 38 0 0
segment header 1 0 0
undo block 1 0 0
-------------------------------------------------------------
PGA Aggr Target Stats for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> B: Begin snap E: End snap (rows identified with B or E contain data
which is absolute i.e. not diffed over the interval)
-> PGA cache hit % - percentage of W/A (WorkArea) data processed only in-memory
-> Auto PGA Target - actual workarea memory target
-> W/A PGA Used - amount of memory used for all Workareas (manual + auto)
-> %PGA W/A Mem - percentage of PGA memory allocated to workareas
-> %Auto W/A Mem - percentage of workarea memory controlled by Auto Mem Mgmt
-> %Man W/A Mem - percentage of workarea memory under manual control
PGA Cache Hit % W/A MB Processed Extra W/A MB Read/Written
--------------- ---------------- -------------------------
100.0 16,554 0
%PGA %Auto %Man
PGA Aggr Auto PGA PGA Mem W/A PGA W/A W/A W/A Global Mem
Target(M) Target(M) Alloc(M) Used(M) Mem Mem Mem Bound(K)
- --------- --------- ---------- ---------- ------ ------ ------ ----------
B 3,000 2,676 1,059.7 0.1 .0 100.0 .0 102,400
E 3,000 2,619 1,424.4 18.9 1.3 100.0 .0 102,400
-------------------------------------------------------------
PGA Aggr Target Histogram for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> Optimal Executions are purely in-memory operations
Low High
Optimal Optimal Total Execs Optimal Execs 1-Pass Execs M-Pass Execs
------- ------- -------------- ------------- ------------ ------------
32K 64K 67 67 0 0
64K 128K 121 121 0 0
128K 256K 86 86 0 0
256K 512K 145 145 0 0
512K 1024K 43 43 0 0
1M 2M 14,132 14,132 0 0
2M 4M 269 269 0 0
4M 8M 116 116 0 0
8M 16M 49 49 0 0
16M 32M 2 2 0 0
-------------------------------------------------------------
PGA Memory Advisory for DB: FIN02 Instance: FIN02 End Snap: 2949
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
where Estd PGA Overalloc Count is 0
Estd Extra Estd PGA Estd PGA
PGA Target Size W/A MB W/A MB Read/ Cache Overalloc
Est (MB) Factr Processed Written to Disk Hit % Count
---------- ------- ---------------- ---------------- -------- ----------
375 0.1 1,907,830.9 739,479.9 72.0 456
750 0.3 1,907,830.9 485,036.4 80.0 135
1,500 0.5 1,907,830.9 134,289.5 93.0 0
2,250 0.8 1,907,830.9 121,272.4 94.0 0
3,000 1.0 1,907,830.9 78,905.4 96.0 0
3,600 1.2 1,907,830.9 70,800.1 96.0 0
4,200 1.4 1,907,830.9 70,800.1 96.0 0
4,800 1.6 1,907,830.9 70,800.1 96.0 0
5,400 1.8 1,907,830.9 70,800.1 96.0 0
6,000 2.0 1,907,830.9 70,800.1 96.0 0
9,000 3.0 1,907,830.9 70,800.1 96.0 0
12,000 4.0 1,907,830.9 70,800.1 96.0 0
18,000 6.0 1,907,830.9 70,800.1 96.0 0
24,000 8.0 1,907,830.9 70,800.1 96.0 0
-------------------------------------------------------------
Enqueue activity for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> Enqueue stats gathered prior to 9i should not be compared with 9i data
-> ordered by Wait Time desc, Waits desc
Avg Wt Wait
Eq Requests Succ Gets Failed Gets Waits Time (ms) Time (s)
-- ------------ ------------ ----------- ----------- ------------- ------------
TC 25 26 0 6 13,893.00 83
TX 16,445 16,452 0 2,358 1.54 4
PS 2,728 2,412 308 11 26.27 0
-------------------------------------------------------------
Latch Activity for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
Consistent RBA 601 0.0 0 0
FOB s.o list latch 3,497 0.9 0.0 0 0
SQL memory manager latch 1 0.0 0 1,199 0.0
SQL memory manager worka 122,813 0.0 0.0 0 0
active checkpoint queue 4,550 0.1 0.0 0 0
archive control 1 0.0 0 0
begin backup scn array 16,201 0.5 0.0 0 0
cache buffer handles 176 0.0 0 0
cache buffers chains 4,758,928 9.2 0.0 0 243,262 0.0
cache buffers lru chain 111,587 0.0 0.4 0 137,776 0.4
channel handle pool latc 30 0.0 0 0
channel operations paren 3,483 0.0 0 0
checkpoint queue latch 1,042,343 0.0 0 4,878 0.1
child cursor hash table 543 3.9 0.1 0 0
commit callback allocati 26 0.0 0 0
dictionary lookup 15 0.0 0 0
dml lock allocation 2,103 3.2 0.0 0 0
dummy allocation 1,194 14.2 0.0 0 0
enqueue hash chains 65,858 1.5 0.1 0 0
enqueues 32,636 2.7 0.0 0 0
error message lists 1,157 4.9 0.2 0 0
event group latch 2 0.0 0 0
hash table column usage 8 0.0 0 0
job workq parent latch 0 0 2 0.0
job_queue_processes para 61 0.0 0 0
ktm global data 29 0.0 0 0
lgwr LWN SCN 1,682 0.0 0 0
library cache 70,699 8.6 0.0 0 54 0.0
library cache pin 22,304 2.6 0.0 0 0
library cache pin alloca 12,206 2.1 0.0 0 0
list of block allocation 418 0.5 0.0 0 0
loader state object free 27,948 0.2 0.0 0 0
longop free list parent 114 0.0 0 114 1.8
message pool operations 38 0.0 0 0
messages 21,159 0.1 0.0 0 0
mostly latch-free SCN 1,692 0.4 0.0 0 0
multiblock read objects 18,998 0.0 0.0 0 0
ncodef allocation latch 57 0.0 0 0
object stats modificatio 38 10.5 0.0 0 0
parallel query alloc buf 94,534 19.3 0.0 1 0
parallel query stats 612 7.2 0.0 0 0
post/wait queue 203 1.0 0.0 0 569 5.6
process allocation 2 0.0 0 2 0.0
process group creation 4 0.0 0 0
process queue 14,811 7.0 0.0 0 0
process queue reference 8,557,485 0.0 0.0 0 82,249 8.1
query server freelists 28,553 28.2 0.0 0 0
redo allocation 73,239 0.8 0.0 0 0
redo copy 0 0 71,023 0.2
redo writing 8,160 0.0 0 0
Latch Activity for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
row cache enqueue latch 109,398 0.1 0.0 0 0
row cache objects 113,028 1.5 0.0 0 0
sequence cache 29 0.0 0 0
session allocation 10,026 11.7 0.1 0 0
session idle bit 5,657 0.2 0.2 0 0
session switching 57 0.0 0 0
session timer 1,203 0.0 0 0
shared pool 11,541 5.7 0.0 0 0
simulator hash latch 123,035 0.0 0.0 0 0
simulator lru latch 721 0.0 0 8,751 2.8
sort extent pool 84 0.0 0 0
transaction allocation 1,730 1.3 0.0 0 0
transaction branch alloc 57 0.0 0 0
undo global data 1,129,228 0.0 0.0 0 0
user lock 2 0.0 0 0
-------------------------------------------------------------
Latch Sleep breakdown for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
cache buffers chains 4,758,928 439,946 1,658 0/0/0/0/0
parallel query alloc buffe 94,534 18,279 154 18127/150/2/
0/0
library cache 70,699 6,100 169 5932/167/1/0
/0
session allocation 10,026 1,169 122 1047/122/0/0
/0
enqueue hash chains 65,858 968 79 890/77/1/0/0
shared pool 11,541 663 1 662/1/0/0/0
library cache pin 22,304 580 26 554/26/0/0/0
undo global data 1,129,228 158 7 151/7/0/0/0
error message lists 1,157 57 9 48/9/0/0/0
cache buffers lru chain 111,587 33 13 21/11/1/0/0
child cursor hash table 543 21 2 19/2/0/0/0
session idle bit 5,657 12 2 0/0/0/0/0
-------------------------------------------------------------
Latch Miss Sources for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
-> only latches with sleeps are shown
-> ordered by name, sleeps desc
NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
cache buffers chains kcbgtcr: kslbegin excl 0 685 607
cache buffers chains kcbchg: kslbegin: bufs not 0 556 435
cache buffers chains kcbrls: kslbegin 0 144 31
cache buffers chains kcbgcur: kslbegin 0 132 118
cache buffers chains kcbzwb 0 58 65
cache buffers chains kcbso1: set no access 0 14 6
cache buffers chains kcbgtcr: fast path 0 3 181
cache buffers chains kcbchg: kslbegin: call CR 0 2 97
cache buffers lru chain kcbzgb: multiple sets nowa 12 13 13
child cursor hash table kksfbc: new child 0 2 2
enqueue hash chains ksqgtl3 0 79 79
error message lists kxfpqsnd 0 8 9
error message lists kxfpqidqr2: KSLBEGIN 0 1 0
library cache kgllkdl: child: cleanup 0 70 4
library cache kglhdgn: child: 0 35 51
library cache kglpin: child: heap proces 0 34 31
library cache kglpnc: child 0 21 12
library cache kglpndl: child: before pro 0 5 67
library cache kglhdgc: child: 0 4 2
library cache pin kglpnal: child: alloc spac 0 26 26
parallel query alloc buf kxfpnfy 0 99 154
parallel query alloc buf kxfpbfre 0 55 0
session allocation ksuxds: KSUSFCLC not set 0 112 114
session allocation ksursi 0 5 7
session allocation kxfpqidqr 0 5 0
session idle bit ksupuc: set busy 0 2 45
shared pool kghalo 0 1 1
undo global data ktudba: KSLBEGIN 0 5 0
undo global data ktubnd:child 0 2 7
-------------------------------------------------------------
Dictionary Cache Stats for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
->"Pct Misses" should be very low (< 2% in most cases)
->"Cache Usage" is the number of cache entries being used
->"Pct SGA" is the ratio of usage to allocated size for that cache
Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_histogram_defs 258 0.0 0 0 17,608
dc_object_ids 2,456 0.1 0 0 1,208
dc_objects 117 0.0 0 40 4,404
dc_profiles 1 0.0 0 0 1
dc_rollback_segments 46,238 0.0 0 0 88
dc_segments 1,300 15.8 0 655 4,056
dc_sequences 13 0.0 0 13 13
dc_tablespace_quotas 26 0.0 0 26 129
dc_tablespaces 2,760 0.0 0 0 313
dc_user_grants 426 0.0 0 0 24
dc_usernames 64 0.0 0 0 16
dc_users 1,472 0.0 0 0 27
-------------------------------------------------------------
Library Cache Activity for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
->"Pct Misses" should be very low
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 5 0.0 5 0.0 0 0
INDEX 972 0.0 564 0.0 0 0
SQL AREA 1,587 1.8 9,319 1.1 27 62
TABLE/PROCEDURE 210 0.0 939 0.0 0 0
TRIGGER 4 0.0 4 0.0 0 0
-------------------------------------------------------------
init.ora Parameters for DB: FIN02 Instance: FIN02 Snaps: 2948 -2949
End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
bitmap_merge_area_size 16777216
compatible 9.2.0
create_bitmap_area_size 16777216
db_block_checksum TRUE
db_block_size 32768
db_cache_size 5251268608
db_file_multiblock_read_count 16
db_files 2000
dml_locks 2440
enqueue_resources 4460
fast_start_mttr_target 0
hash_area_size 16777216
hash_join_enabled TRUE
java_pool_size 0
job_queue_processes 2
large_pool_size 134217728
log_buffer 3276800
log_checkpoint_interval 0
log_checkpoint_timeout 0
max_dump_file_size UNLIMITED
max_rollback_segments 122
open_cursors 300
optimizer_features_enable 9.2.0
optimizer_index_caching 0
optimizer_index_cost_adj 100
parallel_adaptive_multi_user FALSE
parallel_automatic_tuning TRUE
parallel_execution_message_si 16384
parallel_max_servers 80
parallel_min_servers 0
parallel_threads_per_cpu 2
pga_aggregate_target 3145728000
processes 500
query_rewrite_enabled FALSE
remote_login_passwordfile EXCLUSIVE
session_cached_cursors 100
sessions 555
sga_max_size 5725201576
shared_pool_size 218103808
sort_area_retained_size 16777216
sort_area_size 16777216
sql_trace FALSE
star_transformation_enabled TRUE
timed_statistics TRUE
transactions 610
undo_management AUTO
undo_retention 900
undo_tablespace UNDOTBS1
I've not included the File I/O section as it is very large; I'm working on it with the system administrator to make sure that files with a high number of reads are on different disks to reduce I/O.
From the above statspack report I concluded the following. Please let me know if I am correct.
1) There is buffer contention. I think the buffer cache is too large.
2) There is some contention on the dictionary cache and library cache.
I'm unable to conclude anything from the PGA section.
September 30, 2005 - 8:10 am UTC
4 other instances - ugh, why why why. Just to make it really hard to tune? maintain????
too much information here - but was that statspack just during the index rebuild or was there other stuff going on?
it would appear "IO", physical IO is your biggest consumer of time.
from sweden
Reader, September 30, 2005 - 8:49 am UTC
We cannot do anything about the other instances; it is what it is. But I'm sure no major activity was happening in those other instances during this batch run.
Yes, this statspack report is during index rebuild.
This sub-partition is spread over 8 tablespaces, and each tablespace has more than 10 datafiles.
All these datafiles are listed with a high amount of reads in the statspack report. As far as I/O tuning is concerned, I know we need to spread the load over all the available disks, but according to the system admin these are all logical volumes in a SAN, so it does not make much difference to put them on different filesystems.
I tried using NOLOGGING (with parallel degree 12) for these indexes as you suggested, but I was surprised that NOLOGGING didn't help at all.
I'm confused about which section to follow next :-)
I'll be very grateful if you can give some directions to follow.
September 30, 2005 - 10:52 am UTC
your biggest thing was direct path reads, the parallel index build reading from disk. that would seem to be the place to "start", look at how to either
a) reduce io
b) increase speed of io.
I'd be looking at a way to add the new rows without having to rebuild the entire index. You are adding a small number of rows to a big number of rows - does the index really need to be rebuilt? Why not maintain it?
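If the index is locally partitioned, one way to narrow the rebuild rather than redo everything is to confine it to the partition that actually received the 5 million new rows. A sketch with hypothetical names:

```sql
-- rebuild only the one affected index partition,
-- not the whole 300-million-row index
alter index big_part_idx rebuild partition p_current nologging parallel 12;
```

(A global index would not have this option; it is either maintained as rows arrive or rebuilt in full.)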
from sweden
Reader, September 30, 2005 - 8:54 am UTC
Some more information.
The index rebuild activity ran for 3.5 hours and this report covers that time. I'm sure no other activity was running then, as no tablespaces other than the ones holding the sub-partition were listed in the statspack report.
Wipe and Load Database
Mita, November 17, 2005 - 11:45 am UTC
I have a database where most of the tables are wiped out every day and reloaded with current data. Indexes are dropped before the load and created after it. As we are re-creating the indexes every day, the database creates statistics for them (my team says so -- I did not check), but what about the tables? Shouldn't we have a process to gather statistics after the data loads? My team says query performance will not be affected without table statistics because we have index statistics. What's the best approach in this case? The database version is Oracle 10g Release 1.
November 18, 2005 - 9:36 am UTC
in 10g, the create index statement gathers statistics on the indexes and the optimizer dynamic sampling setting is set such that the tables will be dynamically sampled during a hard parse to try and get stats for each query.
Question: how's performance?
Gather Statistics
Mita, November 18, 2005 - 5:18 pm UTC
Well, sometimes it's OK. For one of the queries the optimizer was not using an index, so it was taking about 6 minutes. When I analyzed the table, it started using the index and the query came back in 3 seconds.
November 19, 2005 - 9:53 am UTC
I would recommend using dbms_stats to gather statistics, dynamic sampling is good - but it is not a replacement. You have identified that dynamic sampling isn't sufficient for you - so that would be an indication you need to use dbms_stats.
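A minimal dbms_stats call for such a table might look like the following (table name hypothetical; cascade=>true gathers the index statistics in the same pass):

```sql
begin
   dbms_stats.gather_table_stats(
      ownname          => user,
      tabname          => 'T',
      estimate_percent => dbms_stats.auto_sample_size,
      cascade          => true );
end;
/
```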
rebuild index and drop and create
Parag J Patankar, March 14, 2006 - 5:13 am UTC
Hi Tom,
Is there any difference between ALTER INDEX ... REBUILD and dropping and re-creating the index in terms of performance?
My friend is claiming that rebuilding an index has no effect on the performance of a given set of activities (I agree -- I'd even agree performance can go down), but that performance improves if he drops and re-creates the index (I am not sure).
Pl comment, if possible with an example.
thanks & regards
pjp
March 14, 2006 - 10:58 am UTC
yes, there can be. An index rebuild might be able to use the existing index and skip the full scan of the table and the sort of the key data.
An index rebuild and a drop+create would have the same effect in terms of the resulting index.
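Side by side, the two approaches look like this (names hypothetical):

```sql
-- rebuild: may read the existing index, skipping the table scan and key sort
alter index t_idx rebuild;

-- drop + create: must full scan the table and sort the keys from scratch
drop index t_idx;
create index t_idx on t(object_name);
```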
Great Thread
Roger, March 14, 2006 - 10:28 am UTC
I love how these threads end up going through all the things I am doing on my databases right now.
Great work everyone.
Can I drop index or table while query is running and index is being used
Sarma, March 14, 2006 - 11:20 am UTC
Hi Tom,
I would like to know whether we can drop a table or index while a long-running query is using that index. I tried to reproduce this, but I am not sure what I did is the correct scenario.
SQL> select count(*) from t;
COUNT(*)
----------
13101056
Elapsed: 00:00:14.64
SQL> create index t_idx on t(object_name);
Index created.
Elapsed: 00:02:45.78
SQL> explain plan for select /*+ index(t t_idx) */ count(*) from t where object_name not like '%T';
Explained.
Elapsed: 00:00:00.01
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
---------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
---------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 18 | 52377 (2)|
| 1 | SORT AGGREGATE | | 1 | 18 | |
| 2 | INDEX FULL SCAN| T_IDX | 12M| 215M| 52377 (2)|
---------------------------------------------------------------
SQL> select /*+ index(t t_idx) */ count(*) from t where object_name not like '%T';
COUNT(*)
----------
11456512
In another session I dropped the index while query is running.
SQL> drop index t_idx;
Index dropped.
I was able to drop the index, yet the query still returned results. Similarly, I executed the query in one session and dropped the table in another session. The query returned its results, but when I afterwards described the table I got an error saying the table does not exist.
Is this the correct way to test it? Can I conclude that we can drop a table/index while a query that uses it is still running?
Thank you for your great help to oracle community.
Regards
Sarma
March 15, 2006 - 8:58 am UTC
you can even drop the table.
the query will either
a) run to completion if the space is not reused.
b) get an ORA-08103 ("object no longer exists") if the space is reused
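The two-session test from the question illustrates both outcomes; a sketch:

```sql
-- session 1: start a long-running query that uses the index
select /*+ index(t t_idx) */ count(*) from t where object_name not like '%T';

-- session 2: while session 1 is still running
drop table t;

-- session 1 will either finish normally (its blocks were not reused)
-- or fail with ORA-08103: object no longer exists (blocks reused)
```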
Index rebuild considerations
Edith Konopka, April 18, 2006 - 1:52 pm UTC
Helped us sort out the best way to rebuild two indexes needed by third-party software ArcSDE. ESRI recommends rebuilding periodically, but info on how to do it is scarce.
April 18, 2006 - 3:40 pm UTC
ask ESRI. It is after all their recommendation, they should have
a) the factual evidence to back up their assertion
b) their suggested methodology to do so
for all I know, these could be spatial/locator indexes, and the answers are quite different for b*trees, for bitmaps and so on.
Always consider asking the people that tell you to do something
a) why
b) how
drop index
Branka, May 23, 2006 - 3:08 pm UTC
I am trying to drop an index on a table that is in use all the time, but I get an error.
I want to change the owner of the index. If I cannot drop and re-create the index, what would be another option?
Thanks
Branka
drop index ind_name;
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified
May 24, 2006 - 6:54 am UTC
my question would be "why would you want to drop an index and change the owner"
seems like pure "busy work"
and since this table is SO active, you cannot do it - seems like a real waste (suppose you DID DROP IT, please consider what would happen to performance during the period of time the index did not exist -- seems like an important, active table doesn't it??? )
Raj, May 31, 2006 - 2:52 pm UTC
I deleted 50% of the old data from my database, and rebuilding the indexes (I tried some of them) resulted in increased performance. I am also against rebuilding on a schedule, but after a bulk delete I guess rebuilding does help performance. This database was migrated from Oracle 7 to 8 to 9i. We have locally managed tablespaces which were migrated, but the existing objects in them violate the LMT policy.
Question
If we have to rebuild the indexes, would it be a good idea to create brand new Locally managed tablespace and then rebuild the indexes in it.
Thanks
Raj
May 31, 2006 - 3:44 pm UTC
It would be a good idea to figure out how to use a technique that would permit you to age data more gracefully rather than deleting 50% of it on a recurring basis.
Have you considered THAT?
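The graceful aging Tom is alluding to is usually done with range partitioning, so that removing old data becomes a quick DDL operation instead of a 50% DELETE. A sketch with hypothetical names:

```sql
create table sales
( sale_date  date,
  amount     number )
partition by range (sale_date)
( partition p_2006_q1 values less than (to_date('01-apr-2006','dd-mon-yyyy')),
  partition p_2006_q2 values less than (to_date('01-jul-2006','dd-mon-yyyy')) );

-- instead of deleting 50% of the rows and rebuilding indexes:
alter table sales drop partition p_2006_q1;
-- local indexes need no rebuild; global ones can be kept valid with
-- ... drop partition p_2006_q1 update global indexes
```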
rebuilding multiple indexes on HUGE table
Elaine H, July 13, 2006 - 3:40 pm UTC
I have read this thread several times and I have learned much from it. I am curious about the poster who talked about his bulk insert, how he disabled the index, did the load and then performed a rebuild.
I have a partitioned table (9 partitions) with 9 partitioned global indexes (9 partitions). The table has about 3.6 billion rows (give or take). Each day we insert about 4 million rows using append. The process takes between 10-12 hours.
Do you feel there would be any performance lift from using the disable/rebuild-unrecoverable approach? We haven't tried rebuilding an index, but the last time we had to create one it took 9 hours. Would the rebuild time for each index be comparable? This is also an OLTP system, so if we mark the indexes unusable, won't that force full table scans?
You said above that you didn't feel rebuilds in and of themselves always provide a performance enhancement; in fact, you said performance could be negatively impacted because Oracle would have to split all the blocks again. Since we are doing such large inserts, do you feel this applies to our situation as well?
We are trying to think of anything that could reduce the runtime of the main insert. Our current version is 9.2. Any suggestions are greatly appreciated. Unfortunately reducing the number of indexes is not an option. The users query the database using certain criteria. We are already using some skip scanning on the indexes, but there seems to be a certain saturation point for this where performance becomes negatively impacted.
Lastly, (I know this got long) is there anything in 10g that would be able to provide another course of action to improve performance?
July 13, 2006 - 5:24 pm UTC
with a measly 4m rows added to 3.6b - I doubt you would end up gaining anything from a disable/rebuild.
With an insert /*+ append */, the indexes are maintained rather nicely, in bulk - at the end of the append we build "mini indexes" off to the side and merge them in. If you do slow-by-slow processing (curious that it takes 10-12 hours to load such a small number of rows - are you going slow-by-slow or in bulk????)
describe your load process.
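For reference, the bulk pattern being described is a single direct-path insert from the staging table into the main table (names taken from the follow-up post; treat as illustrative), which maintains every index once via the mini-index merge:

```sql
insert /*+ append */ into di_name2
select * from di_logf;
commit;   -- required before this session can query the table again
```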
my mistake: 400m not 4m
Elaine H, July 17, 2006 - 11:25 am UTC
Sorry, I mistyped. The process is as follows:
the file arrives from the mainframe and is converted to ASCII. Each row in the data is sent to an SSA server (from Identity Systems Corp) to be keyed by name. The keys are then appended to the row at the O/S level. The first 3 characters of the SSA key (I'll call this first3) are broken out, since this is the column on which the partitions are ranged.
Rows from the flat file are loaded into a staging table (di_logf) using sqlldr direct path. This table is created fresh for each run.
At this point, records containing phone numbers have to be checked against the main table to see whether the phone numbers can legally be used. If the phone number on a row is unknown to the system, it is "scrubbed" from the database -- that is, the column is blanked out. One of the indexes on di_name2 includes the telephone number.
rows whose phone numbers were scrubbed are appended to another table.
the main table (di_name2) to be appended to is cached (well as much as can be, given its size).
a file is created for each partition with the appropriate insert statement as shown below.
insert /*+ append noparallel */ into di_name2_part# select ... from di_logf where substr_pkid is within the range for the partition.
Depending on how many arguments are passed to it by the shell script (4 or 6: the beginning and ending partition range values plus the relational operators ">", ">=", "<"), it builds the appropriate SQL statement. For example:
for partition 1: where substr_pkid < partition1_end_value
for partition 2: where substr_pkid < partition2_end_value and substr_pkid >= partition1_end_value
and so on until
partition 9: where substr_pkid > partition8_end_value
an example of an insert to di_name2_part03 would be:
alter session set db_file_multiblock_read_count = 2;
-- this was originally higher, until we found that the horrendous striping on the disk volumes was costing us 40 reads per MB of data
INSERT /*+ APPEND NOPARALLEL(DI_NAME2_PKT) */ INTO DI_NAME2 PARTITION (DI_NAME2_PART03) NOLOGGING SELECT *
FROM DI_LOGF WHERE NA_PN01_FIRST3 < 'A53' AND NA_PN01_FIRST3 >= '9EA';
after all 9 sql stmts complete, the rows which were appended to the need_pk_table are extracted and sent to the consumer credit database on the mainframe for keying. the main load kicks this off but does not wait for an answer from the mainframe.
for each row initially received from the mainframe, there are:
1 call to the SSA server.
2 fields appended to the initial row
1 sql load to the staging table
1 potential append to the non-usable phone table.
1 append to the need_pk table
10 writes when it goes to di_name2 -- one to the table itself and 9 index writes.
now, there is no index on di_logf but I can't decide whether or not the gain from the index might be worth the time and disk space needed to create and maintain one.
As I said, this process can generally run 10-12 hours. The load is lighter on Mondays, since there is less activity creating the rows we process.
Can you make any recommendations, or do you still need clarification? I know the process is pretty convoluted, but hopefully I have described it well enough.
July 17, 2006 - 3:12 pm UTC
400million to 3.x billion - eh. hard to say.
do it yourself parallelism might be called for here, take any serial row by row thing you are doing and have many of them happen at once.
homemade parallel
Elaine H, July 17, 2006 - 4:33 pm UTC
We sort of are doing parallel by hand. The script generates 9 sql statements which are in turn kicked off using an o/s script with nohup.
Do you think monkeying with any of the indexes (disabling/rebuilding main or creating one on the staging table) would be of any benefit?
July 18, 2006 - 7:56 am UTC
hard to say - that is the stuff benchmarks are made of
Size of a single index entry
Arindam Mukherjee, August 04, 2006 - 8:24 am UTC
Respected Mr. Tom,
I have got two contradictory comments on the size of a single index entry if my comprehension is okay.
From Oracle 9i Application Developer Guide – Fundamentals
*********************************************************************
Chapter 5: Selecting an Index Strategy, Page No. 5-4
-------------------------------------------------------------------------
The size of a single index entry cannot exceed roughly one-half (minus some overhead) of the available space in the data block. Consult with the database administrator for assistance in determining the space required by an index.
From Oracle 9i Database Concepts
*****************************************
Chapter 21: Data Integrity, Page No. 21-10/11
-------------------------------------------------------------------------
Composite UNIQUE key constraints have the same limitations imposed on composite indexes: up to 32 columns can constitute a composite unique key.
Note: If compatibility is set to Oracle9i or higher, then the total size in bytes of a key value can be almost as large as a full block. In previous releases key size could not exceed approximately half the associated database's block size.
What is the correct limit on the size of an index key value or single index entry? Please tell me how to measure that entry -- is it based on half of the block size or the full block size (DB_BLOCK_SIZE)?
Waiting for your reply
Arindam Mukherjee, August 06, 2006 - 11:16 pm UTC
Sir! may I expect any response to my previous query?
August 06, 2006 - 11:55 pm UTC
hmm, thought I did.
It is a bit more than 1/2 block. simple test case would prove that out easily. On 8k blocks, 6,398 is the max size (on 10gr2, linux)
just give it a try?
but, if you have a b*tree index on something so big, you really should think 50 times about what you are doing (perhaps you really meant to use a text index?)
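A sketch of such a test case (table and column widths here are made up; the exact boundary depends on block size and per-column overhead, so you would adjust the widths until the create fails with ORA-01450 "maximum key length exceeded"):

```sql
-- should succeed on 8k blocks: combined key width is well under the limit
create table t1 ( a varchar2(4000), b varchar2(2300) );
create index t1_idx on t1 ( a, b );

-- should fail with ORA-01450: combined key width far exceeds the limit
create table t2 ( a varchar2(4000), b varchar2(4000) );
create index t2_idx on t2 ( a, b );
```

The create-time check uses the declared maximum column lengths, so the index creation itself tells you where the boundary is -- no data needs to be loaded.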
Reg Indexes
Kiran, August 07, 2006 - 7:15 am UTC
Hi,
One of my concurrent programs runs every two hours and fetches on average 100+ records, cumulating through the day -- i.e., if the first run of the day fetches 200+ records, the last run of the day fetches almost 2400+ records x 6 tables -- and updates Oracle Applications standard tables through standard Oracle APIs. There are two tablespaces involved, ARD and ARX (for indexes). I have observed that the ARX tablespace is growing more than ARD, and I could not find the reason. Can you please suggest how to find the reason, and a solution for the same?
all the tables are standard Oracle tables, and the indexes on those tables are created by Oracle by default.
August 07, 2006 - 8:12 am UTC
how big is big and how much faster is fast.
likely, one of your indexes is on a monotonically increasing value (a sequence-populated value, a date) and is sweeping from left to right - a coalesce or rebuild might be periodically called for on such a "sweeper" index.
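For such a sweeper index, a periodic coalesce is usually the lighter-weight option compared to a full rebuild (the index name here is hypothetical):

```sql
-- merge adjacent, sparsely-populated leaf blocks back together,
-- online, without rebuilding the whole index structure
alter index ar_trx_date_idx coalesce;
```

A coalesce does not release space back to the tablespace the way a rebuild can, but it makes freed leaf blocks reusable by the index itself, which is exactly what a left-to-right sweeper needs.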
After Mass Delete or Insert, Is an Index Rebuild Beneficial?
Maulesh Jani, November 01, 2006 - 9:41 am UTC
Hi Tom,
After reading the whole thread I got many details and it has been very helpful to me. But I am still not sure about this, so I am asking for your valuable comments:
I have one process which does a mass insert into a few tables, which have B*tree indexes on them. After the whole insertion process I am rebuilding all the indexes. Is this approach correct?
Regarding the response time: rebuilding these indexes takes some extra time, but it is still acceptable. So can you comment on whether this is the right way or the wrong way?
Let me know if you have any suggestions for such a process approach!
Thanks
Maulesh Jani
November 01, 2006 - 9:46 am UTC
It is extremely doubtful that an index rebuild would be necessary after a bulk insert.
However, if you:
a) make unusable all of the indexes
b) then insert
c) then rebuild them
that may (might, maybe) be more efficient in some cases than inserting and maintaining the indexes.
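A minimal sketch of the a/b/c pattern above (the table, index, and load predicate are hypothetical):

```sql
-- a) make the index unusable and tell the session to ignore it
alter index work_tab_ix unusable;
alter session set skip_unusable_indexes = true;

-- b) bulk load without index maintenance
insert /*+ append */ into work_tab
select * from big_tab where load_dt = trunc(sysdate);
commit;

-- c) rebuild -- really a re-create, built from the base table
alter index work_tab_ix rebuild;
```

Whether (a)+(b)+(c) beats simply inserting with the index in place depends on data volumes and the number of indexes -- it is something to benchmark, not assume.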
I can not put Indexes as Unusable
Maulesh Jani, November 01, 2006 - 10:13 am UTC
Hi TOM,
Yes, you are absolutely right that the best approach is to first make the indexes unusable, then do the insertion, and then rebuild the indexes.
But I can not make them unusable at the start, because the process is something like this:
Insert into
Backup table
from Base table
where ( condition with Process table ) <--
In this where clause I am using the indexes of the Base table, and without them it may cause a full scan.
Sorry that I didn't mention this whole situation in my earlier comments.
I want to know: if I leave the indexes in place during the insert and do the rebuild afterwards, are you sure that in NO CASE will it be good?
Thanks
Maulesh Jani
November 01, 2006 - 11:18 am UTC
but for large operations - if you are processing lots of rows - I'd be very very upset if an index were used!
Reclaiming disk space of deleted indexes
A reader, November 01, 2006 - 10:52 am UTC
Hi Tom,
This is from Metalink Note:67522.1. Is this still valid for LMT's in 9iR2?
"Deletes do not necessarily free up allocated index space
Reorganization, Truncation or Deletion of data may or may not have cost implications for queries. Remember that deletes do not necessarily free up allocated space from objects. In addition, deletes do not reset the highwatermark for a table. Truncate does. Empty blocks may make indexes/tables appear more expensive than they potentially could be. Dropping and recreating the object will reorganise the structure and may potentially help (or may hinder). This problem is usually most noticeable when comparing the query performance of two different systems with the same data."
Thanks.
November 01, 2006 - 11:20 am UTC
yes, once allocated to a segment, space is never freed back to the "pool" of free space for any other segment until you
a) drop the segment
b) rebuild the segment
c) shrink space the segment (new 10g feature)
once a segment grabs space, that space is for the use of that segment.
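The three options might look like this in SQL (segment names are hypothetical; option (c) requires 10g, and the table form requires row movement to be enabled):

```sql
-- a) drop the segment entirely
drop index t_big_ix;

-- b) rebuild the segment ('alter table ... move' is the table equivalent)
alter index t_big_ix rebuild;

-- c) 10g: shrink the segment in place, returning space to the tablespace
alter table t enable row movement;
alter table t shrink space;
alter index t_big_ix shrink space;
```

These are alternatives, not a sequence -- (c) has the advantage of being an online operation, while (b) needs space for two copies of the index during the rebuild.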
When to rebuild an index
ocp_kir, November 01, 2006 - 6:03 pm UTC
Hi Tom,
I understand that an index need not (should not) be rebuilt on a schedule. Can you please throw some light on when the different types of indexes (b*tree, bitmap, ...) need to be rebuilt?
Questions on Analyze and Archivelog.
Sujit Mondal, November 15, 2006 - 1:00 pm UTC
Tom,
Can you please give me the answers to the following 3 questions, or a link where I can find them?
1) How often should we analyze a table or index?
2) Are archive logs generated during a dbms_stats or analyze call?
3) If the answer is yes, how can we avoid having archive logs generated for these operations (considering the database is in archivelog mode)?
November 16, 2006 - 8:08 am UTC
1) "it depends"
2) archives are generated when log switches occur, if one occurs during your gather statistics call, sure.
3) that does not even begin to make sense.
Index rebuild and Analyze
Syed, December 15, 2006 - 7:10 am UTC
Hi Tom,
I got very good information from this thread, but I have a few questions based on my project scenario.
It is a data warehouse environment, database 10g R2. The data loaded through ETL every day is approximately 50KB.
Tables will be analyzed once a week -- then what about rebuilding/analyzing the indexes?
My questions are:
1. Is an index rebuild necessary for this amount of data? (Please don't just answer "it depends" -- if yes, please tell me how it depends.)
2. If tables are analyzed once a week, should the indexes be rebuilt/analyzed when the table is analyzed?
3. "Analyze" gives more table information, like average row length and empty blocks, compared with DBMS_STATS, hence I thought of using Analyze rather than DBMS_STATS.Gather_Schema_Stats. Please give me your suggestion.
Thanks
December 15, 2006 - 9:00 am UTC
50kb???
Syed, December 18, 2006 - 5:17 am UTC
Yes... per day data loaded is approximately 50KB.
December 18, 2006 - 8:30 am UTC
no worries then, that is like what, maybe 10-20 rows of data. I think we can handle that without even really thinking about it.
Don't bother with complex ETL - just have someone read you the data over the phone and type it in, probably faster.
Are you sure 50 Kilobytes - I mean, that is *nothing*
Syed, December 18, 2006 - 11:27 am UTC
Hi Tom,
Thank you for guiding us.
Yes... this is a small data warehouse environment, but it is expected to grow over the next 3 years with the addition of more sources.
In general:
1. After data is loaded into the table, is rebuilding the indexes alone sufficient, or does the table need to be analyzed first and then the index? What should be the order?
2. Why is exec DBMS_STATS.gather_schema_stats not gathering statistics for indexes? An index is also an object in the schema, right?
3. "Analyze" gives more table information, like average row length and empty blocks, compared with DBMS_STATS, hence I thought of using Analyze rather than DBMS_STATS.Gather_Schema_Stats. Please give me your suggestion.
December 18, 2006 - 2:36 pm UTC
for 50k of data, you need do nothing. that is nothing, noise, not noticeable.
it would be a long long time before you had to worry about anything.
Some more answers
A reader, December 18, 2006 - 2:49 pm UTC
I'm surprised Tom did not mention anything about analyze vs dbms_stats, he must be very busy (as usual!).
Syed asked:
"2. exec DBMS_STATS.gather_schema_stats is not gathering statistics for Indexes why? Index is also an object in the schema right?"
Yes, indexes are objects in the schema. You need to set cascade=>true on gather_schema_stats to gather index stats. It is set to false by default (at least in 9i).
"3."Analyze" table gives more information like average row length, empty blocks when compared with "DBMS_STATS". Hence thought of using Analyze rather than DBMS_STATS.Gather_Schema_Stats. Please give me your suggestion."
DBMS_STATS is the way to go. Check the documentation for all available arguments. I think you can find more discussions on that topic on this site, and if I recall correctly Tom favors DBMS_STATS over Analyze.
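A typical call with index statistics included might look like this (note the cascade default changed across releases -- false in 9i, AUTO_CASCADE in 10g):

```sql
begin
   dbms_stats.gather_schema_stats(
      ownname => user,
      cascade => true    -- also gather statistics on the indexes
   );
end;
/
```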
But Cary said ...
Harrison Picot, January 22, 2007 - 11:55 am UTC
Hi Tom,
The first time I read this thread, I was sure that Cary Millsap had discussed rebuilding indexes in a paper on why moving to raw disks is faster (just as moving an existing database from raw disks to UFS would be), and I was sure that Cary said rebuilding indexes was part of that. After some digging I found the paper. What he actually said was that moving an existing database would rebuild the indexes, but that was an aside to his point that moving the data would reduce or eliminate chained rows (at the time, this meant both chained and migrated rows), and that would make the database faster. Good thing I didn't bet on being right about what Cary said, ehhh?
Index Status N/A in Status Column
Ram K, February 16, 2007 - 10:57 am UTC
tom..
This is an Oracle 9.2.0.6 DB used in a data warehouse.
My question is: we have an index on a table (the table is partitioned) and this index is also partitioned. After the index is created, I check dba_ind_partitions and see 'N/A' in the Status column.
What does it mean? How do I know if this is a valid index?
Thanks
Ram K
February 17, 2007 - 11:02 am UTC
I'll "guess"
the index is actually subpartitioned; the status of the "index" is 'not applicable' (because the index is just metadata), and the status of the "index partitions" is 'not applicable' (for the same reason).
The status of the subpartitions is what is relevant:
7 PARTITION BY RANGE (dt)
8 subpartition by hash(x)
9 (
10 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
11 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
12 PARTITION junk VALUES LESS THAN (MAXVALUE)
13 )
14 /
Table created.
ops$tkyte%ORA9IR2> create index t_idx on t(y) local;
Index created.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select status from user_indexes where index_name = 'T_IDX';
STATUS
--------
N/A
ops$tkyte%ORA9IR2> select status from user_ind_partitions where index_name = 'T_IDX';
STATUS
--------
N/A
N/A
N/A
ops$tkyte%ORA9IR2> select status from user_ind_subpartitions where index_name = 'T_IDX';
STATUS
--------
USABLE
USABLE
USABLE
Index will be rebuilt when move to the other tablespace?
A reader, February 21, 2008 - 11:54 pm UTC
Hi Tom!
Your answer is very very useful for me.
Due to an accident, many indexes in my DB were created in the SYSTEM tablespace, and I have decided to move them to the USERS tablespace. However, I wonder whether they will be rebuilt when I use a command like the following:
SQL> alter index CCS_YBI.NO_PS_102007 rebuild tablespace USERS;
Thank you very much!
February 22, 2008 - 7:06 am UTC
what do you wonder about them?
yes, that will move them. Is that what you were wondering?
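If many indexes landed in SYSTEM, one approach is to generate the rebuild commands from the data dictionary (the USERS target and the owner filter are assumptions -- adjust for your schemas):

```sql
-- generate one 'rebuild tablespace' command per misplaced index
select 'alter index ' || owner || '.' || index_name ||
       ' rebuild tablespace USERS;'
  from dba_indexes
 where tablespace_name = 'SYSTEM'
   and owner not in ('SYS', 'SYSTEM');
```

Spool the output to a script, review it, and run it.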
Index Rebuilding
vikas atrey, February 22, 2008 - 12:03 pm UTC
Your command contains two key words, "rebuild" and "tablespace", so both actions -- rebuilding the index and moving it to the new tablespace -- will be performed by Oracle.
Please refer to the Oracle documentation for an explanation of the syntax.
Oracle's Recommendation?????
Mark, February 25, 2008 - 4:32 pm UTC
We have a recommendation from Oracle. Is this correct?
The Oracle recommendation is: if an index reaches blevel > 4, the index should be rebuilt.
February 25, 2008 - 4:52 pm UTC
define "oracle" in this context please.
the only answer is "it depends", huge table, rebuilding might do - NOTHING.
small table, I'd be asking "why did it get to 4? and if it rebuilds to 3 and I don't really use it that much - why am I rebuilding it again?"
and so on...
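Finding indexes with a high blevel is the easy part -- it is visible in the dictionary once statistics have been gathered; deciding whether a rebuild would actually change anything is the hard part:

```sql
-- blevel reflects the last statistics gathering, not necessarily "now"
select index_name, blevel, leaf_blocks, num_rows, last_analyzed
  from user_indexes
 where blevel >= 4
 order by blevel desc;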
Mark, February 25, 2008 - 6:15 pm UTC
We have an Oracle consultant on-site. I've pointed out your article, but he says this is Oracle's recommendation; however, he can't show me any evidence apart from a script in Metalink note 122008.1.
February 26, 2008 - 3:27 am UTC
ask them "why", I'm not saying that if height is 4 and you rebuild to height 3 - that wouldn't be a good thing, but if the height is 4 and you rebuild and it is 3 for a short period and then becomes 4 again soon OR stays at 4 - what then?
I'd want to have identified this index as being an "issue" before I willy nilly rebuild it.
Mark Young, February 26, 2008 - 3:57 pm UTC
Excellent advice! Thanks Tom
I would agree with everything that was said but
Gabe Riel, February 26, 2008 - 6:19 pm UTC
there are differences between warehousing and oltp. that said, if the schedule required a warehouse refresh every week, it's possible that some of the affected indexes may benefit from an index rebuild every week - assuming that the data would basically be read-only and that there would be little or no dml run against those segments.
does that make sense?
February 27, 2008 - 2:29 am UTC
doubtful, unless you did the refresh in the most inefficient manner, in which case you have much larger fish to fry.
If you just load new data - nope, no worries.
If you decided to do it on the 'cheap' (air quotes fully intended, as on the 'cheap' usually ends up costing more), and didn't use partitioning and such - so you end up doing lots of scattered deletes and then inserts to refresh - maybe (but doubtful)
and remember, no one has ever said "never rebuild" here - we have said "stop rebuilding all of them on a schedule"
.... There are extreme cases, particularly with regards bitmapped indexes after mass
data loads, where rebuilding is suggested. But in a day to day system that is
transactional -- the need to rebuild indexes is sooooo rare (that I've never
actually done it myself in 15 years except to MOVE an index from one physical
location to another -- and even then, we don't need to do that very much today
at all with logical volumes and their like) ...
too late - indexes were already rebuilt
gabe_riel@yahoo.com, February 27, 2008 - 4:52 pm UTC
So we did an upgrade, and after loading multiple sets of data and endless reams of SQL code and updates we finally wrapped up our exercise with some table truncates and index rebuilds. The problem: now we hit "Bug 6455161 - Higher CPU / Higher cache buffer chains".
Specifically the upgrade now needs to be complemented by the recreation of some indexes that were detrimentally affected by the REBUILD itself. We're prepared to do that.
Only problem, it looks as if some tables owned by SYS are also affected. Any suggestions?
Index rebuilds
Des Browning, February 28, 2008 - 9:43 am UTC
Tom,
We are having this seeming endless debate about index rebuilds here. Our apps chaps maintain that rebuilding indexes has enabled a process that was running overnight to complete in a little over an hour. I maintain that it's more to do with running dbms_stats. Of course without the metrics, as you put it, I wouldn't expect you to comment other than the general remarks you have made before.
However, there are still some Oracle 'gurus' who maintain what seems to be a completely opposite view to you:-
http://www.dba-oracle.com/t_index_rebuilding_issues.htm
Most confusing! How would you answer these quite specific claims?
February 28, 2008 - 11:08 pm UTC
when you come to asktom.oracle.com and see advice, information, facts - what else do you see?
You see some compelling evidence that what is said here is probably true, at least true under the conditions demonstrated.
So, keep that in mind when looking at other sites - if you see no proof points, if you just see things like "gee, we did X once and our life became so grand, you should too. But wait, there is more, if you act now....".... think about it.
It would be easy to measure what is happening at your location. Measure. v$segment_statistics for example - see if the IO against the index goes up, goes down, or stays the same. See if the index is used differently after the rebuild/gather or used in the same way.
There are *cases* where an index rebuild (or even better - a coalesce) is something you want to consider. I call them sweeper indexes.
http://asktom.oracle.com/pls/ask/search?p_string=sweeper
If you have one of those, and you use it in a certain way (eg: find me the oldest data - searching from left to right in the index), a rebuild/coalesce could be beneficial.
What I'm stating on this page is that the scheduled rebuild of all/most indexes is likely a) a waste of time at best, b) harmful/counter productive at worst
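The v$segment_statistics measurement suggested above could start like this, run before and after the rebuild (the index name is hypothetical):

```sql
-- compare these counters across the rebuild to see if IO actually changed
select statistic_name, value
  from v$segment_statistics
 where owner = user
   and object_name = 'MY_APP_IX'
   and statistic_name in ('logical reads', 'physical reads');
```

If the per-execution IO against the index is unchanged after a rebuild, the rebuild bought you nothing.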
view
Reader, February 28, 2008 - 4:20 pm UTC
Tom
Looks like you don't like to rebuild indexes. I have a database where, at the end of the day, we delete a big chunk of data, and the next day before the shift starts we load the same amount of data. I rebuild the indexes and gather statistics every day, and that reduces my query response from 10 minutes to 10 seconds.
February 28, 2008 - 11:30 pm UTC
sigh,
I don't like to rebuild all indexes on a schedule "just because"
In fact,
I don't like to do ANYTHING - a-n-y-t-h-i-n-g - without a sound reason.
I can give you 5,000 examples whereby I do the same exact process you just mentioned, and rebuilding the index does squat, zero, nothing (well, other than waste my time).
So, why are you sure that the index rebuild is what "fixed" stuff here. Couldn't it be - oh, I don't know - the statistics?
If you have the "sweeper index" (see above one entry) and you use the index in the way I describe - then sure, a rebuild (or coalesce) might be useful.
Please -point to anywhere on this page where I said "never rebuild", I can point to places where I described "in this case - it might be something to consider"
Why does everyone see this as "black and white" - that you either
a) rebuild everything in sight
b) rebuild nothing
I don't see that advice *anywhere* here.
Rapid response
DJB, February 29, 2008 - 10:57 am UTC
Tom.
Thanks for your quick reply. Unfortunately I can't find anything on here relating to either sweeper or sweeper indexes.
March 01, 2008 - 10:45 am UTC
did you, well, click on the link that is now TWO up from here? the link that includes the word 'sweeper' in it?
Coalesces on single-value indexes
Andrew, April 10, 2008 - 4:48 am UTC
Hi Tom
We have a table xyz which has a Status column. This column is set to 0 (zero) by the initial process A, and there is another process B which updates it to 1. The index is on decode(status,0,0,null), meaning that we only care about fast retrieval of zeros.
Process B is multithreaded and each session issues the following SQL to grab any record with a zero status:
select *
from xyz
where (DECODE(status, 0, 0, NULL)) = 0
and rownum = 1
for update;
Which is quickly followed by updating the status to 1 and committing (in order unblock the other sessions), then by further processing...
Now the interesting bit: while doing performance testing I found that the behaviour for 100K/200K records is very different to the scenario when there are 1M records. What happens with 1M records is that the processing is much (2-3 times) slower and the CPU usage gets much higher after a while. Also, as a side effect there is more blocking between the sessions.
However, in the middle of the processing, when I did a COALESCE on the index (which was very quick, took less than a second) the performance returned back to normal and the CPU usage dropped!
Any suggestions on how frequently COALESCE should be run automatically? Any other ideas on changing the design to use a different index or an alternative index type? (I was thinking of using partitioning rather than an B-tree index, but there are Oracle licensing issues there)
Thanks in advance
April 10, 2008 - 11:15 am UTC
I don't like the
a) get a row
b) update it
c) commit <<<=== do not want
d) process
you have to clean those up manually when a process fails after committing.
see
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1786336182160
As for the coalesce question, here is what I suggest:
either using awr/statspack or by doing your own snapshot of v$sql, measure the "efficiency" of your query over time (if you copy the v$sql entry for this select over time you can take two of those observations and subtract them from each other - you'll have measured the cpu, elapsed and IO's performed in that time frame, you can divide by the number of executions to get the averages).
Now, chart that over time. How often does it go up, up, up... and how fast
that'll give you an idea of "how often"
and coalesce is a very nice "online friendly" operation...
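The do-it-yourself v$sql snapshot described above could be as simple as the following (the snapshot table and the sql_id value are placeholders):

```sql
-- one-time setup: an empty copy of the v$sql columns we care about
create table my_sql_snap as
select sysdate snap_time, sql_id, executions, cpu_time,
       elapsed_time, buffer_gets
  from v$sql
 where 1 = 0;

-- take an observation; repeat periodically (e.g. from a dbms_job)
insert into my_sql_snap
select sysdate, sql_id, executions, cpu_time, elapsed_time, buffer_gets
  from v$sql
 where sql_id = 'abcd1234wxyz9';   -- the "grab a row" query

-- later: diff two observations to get the per-execution average
select (max(elapsed_time) - min(elapsed_time))
       / nullif(max(executions) - min(executions), 0) avg_elapsed_per_exec
  from my_sql_snap;
```

Charting avg_elapsed_per_exec over time shows how quickly the query degrades, and therefore roughly how often a coalesce pays off.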
RE: Coalesces on single-value indexes
Andrew, April 11, 2008 - 2:43 am UTC
3) I know about this, we have a special cleanup operation to catch all these records in an intermediate state
Anyway, DBMS_AQ seems like the way to go - all we need is a simple push and pop. Thanks a lot for your suggestion.
Index causing a problem ?
Parag J Patankar, May 21, 2008 - 6:52 am UTC
Hi Tom,
we received a pro-cobol program which deletes records. Sorry, I cannot change it to SQL for a bulk delete. While the pro-cobol program was running I generated a statspack report. According to me, X01DZ02 is the main cause of the problem, but I may be wrong. I have attached part of the statspack report for your valuable help.
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
ATHKP1 1057154055 athkp1 1 9.2.0.6.0 NO sinsa1100132
Snap Id Snap Time Sessions Curs/Sess Comment
--------- ------------------ -------- --------- -------------------
Begin Snap: 2 21-May-08 15:49:36 11 2.1
End Snap: 4 21-May-08 16:30:01 13 2.9
Elapsed: 40.42 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 1,664M Std Block Size: 4K
Shared Pool Size: 640M Log Buffer: 2,048K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 121,080.59 97,873,473.33
Logical reads: 20,192.06 16,321,919.00
Block changes: 852.08 688,762.67
Physical reads: 161.74 130,743.00
Physical writes: 163.65 132,286.00
User calls: 153.38 123,981.67
Parses: 0.39 317.00
Hard parses: 0.03 23.00
Sorts: 70.22 56,758.33
Logons: 0.01 4.67
Executes: 142.72 115,363.33
Transactions: 0.00
% Blocks changed per Read: 4.22 Recursive Call %: 2.61
Rollback per transaction %: 0.00 Rows per Sort: 50.62
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.50 In-memory Sort %: 100.00
Library Hit %: 91.98 Soft Parse %: 92.74
Execute to Parse %: 99.73 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 1.63 % Non-Parse CPU: 99.94
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 22.52 23.10
% SQL with executions>1: 97.94 98.67
% Memory for SQL w/exec>1: 89.91 94.73
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 805 41.35
db file sequential read 37,244 778 39.92
db file scattered read 3,657 175 9.01
latch free 947 71 3.65
direct path read 9,040 49 2.53
-------------------------------------------------------------
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file sequential read 37,244 0 778 21 ########
db file scattered read 3,657 0 175 48 1,219.0
latch free 947 117 71 75 315.7
direct path read 9,040 0 49 5 3,013.3
db file parallel write 804 0 46 57 268.0
control file parallel write 979 0 21 22 326.3
log file switch completion 18 0 1 28 6.0
log file parallel write 899 0 0 0 299.7
control file sequential read 744 0 0 0 248.0
LGWR wait for redo copy 40 32 0 8 13.3
SQL*Net more data to client 7,816 0 0 0 2,605.3
log file single write 18 0 0 5 6.0
SQL*Net break/reset to clien 13 0 0 4 4.3
direct path write 439 0 0 0 146.3
log file sync 2 0 0 13 0.7
log file sequential read 18 0 0 1 6.0
PX Idle Wait 4,852 4,852 9,474 1953 1,617.3
SQL*Net message from client 371,752 0 3,383 9 ########
SQL*Net message to client 371,754 0 0 0 ########
SQL*Net more data from clien 70 0 0 0 23.3
-------------------------------------------------------------
^LBackground Wait Events for DB: ATHKP1 Instance: athkp1 Snaps: 2 -4
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file parallel write 804 0 46 57 268.0
control file parallel write 979 0 21 22 326.3
latch free 132 98 6 48 44.0
db file scattered read 278 0 4 14 92.7
direct path read 423 0 1 2 141.0
db file sequential read 98 0 0 4 32.7
log file parallel write 899 0 0 0 299.7
LGWR wait for redo copy 40 32 0 8 13.3
control file sequential read 556 0 0 0 185.3
log file single write 18 0 0 5 6.0
log file sequential read 18 0 0 1 6.0
direct path write 423 0 0 0 141.0
rdbms ipc message 3,838 3,158 10,985 2862 1,279.3
smon timer 10 8 2,481 ###### 3.3
pmon timer 815 808 2,365 2901 271.7
-------------------------------------------------------------
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
46,399,313 56,902 815.4 94.8 693.29 901.43 2253508589
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ2A1 where (((((((((((A0090=:b1 and A1010=:b2) an
d A6000=:b3) and A0230=:b4) and A9000=:b5) and C81001=:b6) and A
0090M=:b7) and A9780=:b8) and A0110=:b9) and A9030=:b10) and A90
10=:b11) and A9110=:b12)
2,138,499 56,902 37.6 4.4 20.74 471.72 523192397
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ121 where (((((((((((A0090=:b1 and A1010=:b2) an
d A6000=:b3) and A0230=:b4) and A9000=:b5) and C81001=:b6) and A
0090M=:b7) and A9780=:b8) and A0110=:b9) and A9030=:b10) and A90
10=:b11) and A9110=:b12)
1,266,641 56,902 22.3 2.6 13.72 421.24 1807881670
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ171 where (((((((((((A0090M=:b1 and A9780=:b2) a
nd A0110=:b3) and A9030=:b4) and A9010=:b5) and A9110=:b6) and A
00904=:b7) and A10104=:b8) and A60004=:b9) and A02304=:b10) and
A9000=:b11) and C81001=:b12)
227,960 56,902 4.0 0.5 4.18 49.44 2686945209
Module: rtsora@sinsa1100132 (TNS V1-V3)
select A0090M ,A9780 ,A0110 ,A9030 ,A9010 ,A9110 ,A00904
,A10104 ,A60004 ,A02304 ,A9000 ,C81001 into :b1,:b2,:b3,:b
4,:b5,:b6,:b7,:b8,:b9,:b10,:b11,:b12 from VDZ171 where ((((((((
(((A0090M=:b1 and A9780=:b2) and A0110=:b3) and A9030=:b4) and A
9010=:b5) and A9110=:b6) and A00904=:b7) and A10104=:b8) and A60
227,676 56,902 4.0 0.5 4.11 4.11 1212515928
Module: rtsora@sinsa1100132 (TNS V1-V3)
select A0090 ,A1010 ,A6000 ,A0230 ,A9000 ,C81001 ,A0090M
,A9780 ,A0110 ,A9030 ,A9010 ,A9110 ,A90101 ,A91101 from
VDZ221 where (((((((((((A0090=:b1 and A1010=:b2) and A6000=:b3)
and A0230=:b4) and A9000=:b5) and C81001=:b6) and A0090M=:b7) an
d A9780=:b8) and A0110=:b9) and A9030=:b10) and A90101=:b11) and
227,612 56,902 4.0 0.5 4.81 4.69 3031624248
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ221 where (((((((((((A0090M=:b1 and A9780=:b2) a
nd A0110=:b3) and A9030=:b4) and A90101=:b5) and A91101=:b6) and
A0090=:b7) and A1010=:b8) and A6000=:b9) and A0230=:b10) and A9
000=:b11) and C81001=:b12)
196,628 1 196,628.0 0.4 81.88 429.26 3976561296
Module: rtsora@sinsa1100132 (TNS V1-V3)
select A0090 ,A1010 ,A6000 ,A0230 ,A9000 ,C81001 ,A0090M
,A9780 ,A0110 ,A9030 ,A9010 ,A9110 ,A9040 from VDZ121 or
der by A0090,A1010,A6000,A0230,A9000,C81001,A0090M,A9780,A0110,A
9030,A9010,A9110
Module: rtsora@sinsa1100132 (TNS V1-V3)
select A0090 ,A1010 ,A6000 ,A0230 ,A9000 ,C81001 ,A0090M
,A9780 ,A0110 ,A9030 ,A9010 ,A9110 ,A9040 from VDZ121 or
der by A0090,A1010,A6000,A0230,A9000,C81001,A0090M,A9780,A0110,A
9030,A9010,A9110
Subobject Obj. Logical
Owner Tablespace Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
ATLAS ATLAS_I01 X01DZ2A INDEX 46,362,304 91.30
ATLAS HISTOCOMP_ X01DZ17 INDEX 682,672 1.34
ATLAS HISTO_I01 X01DZ12 INDEX 456,752 .90
ATLAS ATLAS_I01 X01DZ22 INDEX 454,608 .90
ATLAS HISTO_I01 X06DZ12 INDEX 225,616 .44
-------------------------------------------------------------
Index :
CREATE UNIQUE INDEX "ATLAS"."X01DZ02" ON "TDZ02" ("A0020" , "QCOST" ,
"A0230" , "A0540" , "A0580" , "A0230P" )
SQL> desc tdz02
Name Null? Type
----------------------------------------- -------- ----------------------------
A0020 NOT NULL NUMBER(2)
QCOST NOT NULL CHAR(2)
A0230 NOT NULL CHAR(3)
A0540 NOT NULL NUMBER(2)
A0580 NOT NULL CHAR(2)
A0230P NOT NULL CHAR(3)
A0130 NOT NULL CHAR(8)
QCGAC NOT NULL NUMBER(2)
QXZ308 NOT NULL VARCHAR2(308)
Pl guide me.
thanks & regards
parag
May 21, 2008 - 9:03 am UTC
I don't see anything wrong here.
you had 2425.2 cpu seconds (at least, assuming 1 cpu)
you used 805... you did not do very much.
when you did physical IO, it was pretty fast - 0.02 seconds.
so, I see nothing 'wrong', only that you posted this on a question about rebuilding indexes - that is wrong.
Might you be able to make these do a little less work?
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash
Value
46,399,313 56,902 815.4 94.8 693.29 901.43
2253508589
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ2A1 where (((((((((((A0090=:b1 and A1010=:b2) an
d A6000=:b3) and A0230=:b4) and A9000=:b5) and C81001=:b6) and A
0090M=:b7) and A9780=:b8) and A0110=:b9) and A9030=:b10) and A90
10=:b11) and A9110=:b12)
2,138,499 56,902 37.6 4.4 20.74 471.72
523192397
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ121 where (((((((((((A0090=:b1 and A1010=:b2) an
d A6000=:b3) and A0230=:b4) and A9000=:b5) and C81001=:b6) and A
0090M=:b7) and A9780=:b8) and A0110=:b9) and A9030=:b10) and A90
10=:b11) and A9110=:b12)
1,266,641 56,902 22.3 2.6 13.72 421.24
1807881670
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ171 where (((((((((((A0090M=:b1 and A9780=:b2) a
nd A0110=:b3) and A9030=:b4) and A9010=:b5) and A9110=:b6) and A
00904=:b7) and A10104=:b8) and A60004=:b9) and A02304=:b10) and
A9000=:b11) and C81001=:b12)
probably, but it depends on how many rows they delete. If for example:
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash
Value
46,399,313 56,902 815.4 94.8 693.29 901.43
2253508589
Module: rtsora@sinsa1100132 (TNS V1-V3)
delete from VDZ2A1 where (((((((((((A0090=:b1 and A1010=:b2) an
d A6000=:b3) and A0230=:b4) and A9000=:b5) and C81001=:b6) and A
0090M=:b7) and A9780=:b8) and A0110=:b9) and A9030=:b10) and A90
10=:b11) and A9110=:b12)
that deletes an average of 0 or 1 rows - I would expect gets per exec to be lower - something like:
3/4 gets to read an index to find the row - assumes index on columns in predicate.
1 get to read block
3/4 gets * number of indexes on table to remove all indexed entries.
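Tom's arithmetic can be sketched as a back-of-the-envelope model (the per-probe figures are illustrative assumptions for a shallow B*Tree, not measured values):

```python
# Rough model of the arithmetic above: expected logical I/Os (buffer gets)
# for a single-row DELETE that uses an index to find the row.
# The 3-4 gets per index probe assume a blevel 2-3 B*Tree; illustrative only.

def expected_gets_per_delete(index_probe_gets=3, num_indexes=3):
    """Estimate buffer gets for deleting one row via an index.

    index_probe_gets -- gets to walk root/branch/leaf of one index
    num_indexes      -- total indexes on the table whose entries
                        must be removed
    """
    find_row = index_probe_gets                 # probe the driving index
    read_block = 1                              # get the table block holding the row
    maintain = index_probe_gets * num_indexes   # remove the entry from each index
    return find_row + read_block + maintain

# A table with 3 indexes: roughly 3 + 1 + 9 = 13 gets per execution
print(expected_gets_per_delete(3, 3))  # -> 13
```

Thirteen or so gets per execution is far below the 815 gets/exec reported for the VDZ2A1 delete, which is what suggests each execution is finding (or deleting) far more than one row.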
what happens when the index create is in process
Yuna, May 26, 2008 - 9:39 am UTC
Hi Tom,
We need to copy data from an old table to a new table and create an index on the new table. We will use a scheduled task to do the work (along with other work) and we don't know when it will finish. What will happen if the task is stopped during the process of creating the index? The index will not exist for sure, right?
Thank you very much! Yuna
May 27, 2008 - 8:02 am UTC
if you kill it during the index create, then of course the index will not be created - yes.
Maintenance of Database
Lalit Sharma, June 03, 2008 - 12:42 pm UTC
Hi Tom,
I was going through this article and got confused because of so many opinions going around index rebuilding or index recreation and others.
Basically, we have a huge database and there are some tables where we keep on performing lots of DML operations.
I am worried about space fragmentation across tablespaces, tables, indexes and other objects.
1. How can I reclaim the space back?
2. Will these DMLs tend to make my database slower? If so, how can I improve the performance of my database?
Please provide your expert guidance on what I should do to cover almost all the possible actions for the betterment of the database. I am a developer, not a DBA, but I need to cover all this so that I can take it up and discuss it with the DBA for implementation. Right now I am thinking of some kind of scheduled job that can help me make optimal use of space and get optimal performance from the database, if possible.
Please help by providing as much information as possible considering this to be just any other database on which you can provide some generic inputs.
Thanks in Advance,
Lalit
June 03, 2008 - 1:32 pm UTC
first, define "fragmentation"
next, if you delete - and insert, your tables are almost certainly *fine*, you delete - free up space, we reuse it for subsequent inserts. Since space is never released back to the tablespace - fragmentation would not happen (presuming locally managed tablespaces - never use dictionary)
as for indexes - same story, almost 100% of the time. The only index you might have to think about would be one on a monotonically increasing value (like a sequence populated fake key or a date field populated with sysdate upon insert). If you delete MOST/MUCH but not all of the old values (eg: you inserted sequences 1..100,000 and you delete say 70% of the data), those indexes might get "brown" on the left hand side - you delete 1-75, 101-175, .... but left 76-100, 176-200 and so on. The space that was used by 1, 2, 3... can not be reused. We might - MIGHT have to coalesce those indexes (don't bother with a rebuild). That'll take adjacent blocks on the left hand side and put them together - freeing up blocks that can now be used on the right hand side.
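A minimal sketch of the coalesce described above (the index name is hypothetical - substitute your own sweeper-style index):

```sql
-- Hypothetical: T_CREATED_IDX indexes a SYSDATE-populated column from
-- which most (but not all) old entries have been purged over time.
-- COALESCE merges adjacent, sparsely filled leaf blocks in place and
-- frees the emptied blocks for reuse on the right hand side of the
-- index; unlike REBUILD it does not recreate the segment.
alter index t_created_idx coalesce;
```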
true for iot as well?
Chris, June 03, 2008 - 2:37 pm UTC
Are secondary indexes on IOTs an exception to this rule? I rebuild them because, with IOT row movement, the index key may or may not point to the actual data. My understanding is that the CBO looks at data like the PCT_DIRECT_ACCESS field in USER_INDEXES to determine how many real I/Os a lookup through a secondary index will require.
Empirically, queries use less I/O when they go through a recently rebuilt secondary index on an IOT.
June 03, 2008 - 5:17 pm UTC
yes, they would be one as well - they contain rowid "hints" - a rowid where the index entry was when it was inserted. Due to index splits the rows move around quite a bit in many cases inside the IOT - the hints become stale
that leads to a retrieval that requires TWO range scans - one on the secondary index + follow the rowid hint to find it is "bad" + index range scan on the IOT by primary key to find the record - instead of just "index scan the secondary index, follow rowid hint"
Text indexes would be the same - they can benefit from a rebuild if they are incrementally maintained over time.
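A hedged sketch of how you might check and refresh stale rowid guesses on an IOT secondary index (the index name is hypothetical):

```sql
-- PCT_DIRECT_ACCESS shows what percentage of the rowid "hints" in the
-- secondary index still point at the right place; 100 means all valid.
select index_name, pct_direct_access
  from user_indexes
 where index_name = 'IOT_SECONDARY_IDX';

-- Refresh the stale guesses without rebuilding the entire index:
alter index iot_secondary_idx update block references;
```

UPDATE BLOCK REFERENCES only repairs the guesses; a full rebuild would also compact the structure.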
Index Rebuilding
Thakur Manoj, August 07, 2008 - 5:38 am UTC
Dear Sir
Our application is web based. When we insert a record from the application, there is a button; when we click that button, the insert takes 35 seconds. When we make some changes in the application, two tables are affected: coabalances and coaratios. The application builds a query and fires it against the database. Both coabalances and coaratios have one index on 5 columns. Clicking the save-to-database button performs about 300 inserts and takes 35 seconds; saving changes performs 94 updates and takes 6.5 minutes. Could you please help me understand the cause - why are the updates taking 6.5 minutes?
In a day there are 600-800 records which are coming into the database.
Update COABALANCES set value = '83712', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 10 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '91000', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 10 and StmtId = 4 and coaid = 3 and user_id = 21;
Update coabalances set value = '1000', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 11 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '1000', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 11 and StmtId = 4 and coaid = 3 and user_id = 21;
Update coabalances set value = '1000', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 15 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '1000', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 15 and StmtId = 4 and coaid = 3 and user_id = 21;
Update coabalances set value = '1000', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 16 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '1000', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 16 and StmtId = 4 and coaid = 3 and user_id = 21;
Update coabalances set value = '2134', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 17 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '1800', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 17 and StmtId = 4 and coaid = 3 and user_id = 21;
Update coabalances set value = '76765', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 21 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '3209', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 22 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '7977', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 24 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '84712', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 12 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '43.7331387753024', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 13 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '4134', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 18 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '88846', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 19 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '79974', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 23 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coaratios set value = '87951', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 25 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coaratios set value = '76399', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 27 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coaratios set value = '80591', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 29 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coaratios set value = '8255', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 30 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coaratios set value = '7027', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 32 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '6293', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 34 and StmtId = 3 and coaid = 3 and user_id = 21;
Update coabalances set value = '0', trans_datetime = sysdate where BorrId = 3549 and CoaRow = 39 and StmtId = 3 and coaid =3 and user_id = 21;
The updates listed above are fired into the database, but it takes 6.5 minutes to complete them from the application. Why does it take 6.5 minutes?
Now my question:
This table has indexes on borrid, coarow, stmtid, coaid and user_id.
Since updates are more costly than inserts, why is the update taking 6.5 minutes?
How should I overcome this situation? Since inserts are taking 35 seconds, I would like updates to take 10-15 seconds as well. How can I do that, Sir? I appreciate your help.
Can rebuilding the indexes increase the performance of my application?
August 07, 2008 - 1:27 pm UTC
make it so your application can turn sql trace on and off (with wait event capturing)
and get a tkprof - see if
a) you are spending all of the time in the database - if the insert in the tkprof says "i took 0.01 seconds" but the application says "I took 35 seconds", then problem is not in database.
b) if you are spending the time in the database - what the wait event is, is it enqueue (blocking contention), IO, what...
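Assuming a 10g or later database, one way the application could turn tracing on and off is DBMS_MONITOR (a sketch - session identification and trace file retrieval are left out):

```sql
-- enable tracing with wait event capture for the current session
exec dbms_monitor.session_trace_enable( waits => true, binds => false );
-- ... have the application perform the slow inserts/updates ...
exec dbms_monitor.session_trace_disable;
```

Then run tkprof against the resulting trace file from user_dump_dest.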
Good. One doubt though
babloo, August 07, 2008 - 1:37 pm UTC
Hi Tom,
You mentioned
"either using awr/statspack or by doing your own snapshot of v$sql, measure the "efficiency" of your query over time"
Can you please elaborate? Which columns should we be looking at?
August 07, 2008 - 1:51 pm UTC
logical IO's (LIO)
if you are rebuilding for a query, you must be rebuilding so that it takes significantly less IO to retrieve the requested data.
so, if the LIO's, and likely the cpu time and elapsed times do not decrease - you didn't rebuild that index to make a query perform better.
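A sketch of the sort of snapshot you could take of v$sql before and after a rebuild (the sql_text filter is illustrative - identify your query however suits you):

```sql
select buffer_gets, executions,
       round( buffer_gets / greatest( executions, 1 ) ) gets_per_exec,
       cpu_time, elapsed_time
  from v$sql
 where sql_text like 'select /* MY_QUERY */%';
```

If gets_per_exec does not drop after the rebuild, the rebuild did nothing for that query.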
A reader, May 02, 2009 - 7:06 am UTC
Sir
You do not recommend rebuilding indexes, and you have mentioned it several times.
If the table segment has 100 rows, 100 index entries will be in the index segment.
If 80 of the rows are deleted, then 80 of the index entries will be deleted.
When a select query is performed, Oracle does not have to scan the deleted index entries, right?
It will go directly to the requested index entry, so there is no need to rebuild?
Is that the reason?
May 02, 2009 - 12:18 pm UTC
No, that is not what I said, I've never ever said that. NEVER.
... Tell me -- why why why WHY are you rebuilding them on a schedule? .....
rebuilding all indexes on a schedule - against that.
rebuilding of an index because you have identified a real reason to and observe benefits from doing so - benefits you can actually see and measure - go for it.
In general, there will always be "white space" in most all indexes, however that space in general will be used and reused over time (any perceived benefit of a rebuild like "we got back a bit of space" is very short lived, measured in hours or maybe a few days).
search for "sweeper" on this site to see an exception to that rule.
A reader, May 02, 2009 - 11:04 pm UTC
Sir
Thanks for clarification.
I have a couple of questions in my mind which I am not sure about.
1-) If blocks are deleted in the index segment, does Oracle have to scan the deleted entries, or does it go directly to the requested index entry?
2-) Assume my index map is like:
f:free (which is deleted)
xxxfffxxxff
After the coalesce
Does it become like:
xxxxxxxfffff
3-) If I don't perform a coalesce, can new data be inserted into the deleted index blocks?
4-) I searched the site for sweeper indexes and found your explanation, which is helpful. Are there any SQL statements on this site that show a sweeper index? I couldn't find any.
Here is your enlightening explanation:
" Sweeper indexes (date column populated with sysdate, id column populated by sequence) where
you delete all/most of the old data over time and frequently ask for the "lowest value(s)" could
benefit from rebuilds/coalesces "
May 04, 2009 - 10:37 am UTC
1) I don't know what you mean by "blocks are deleted in an index segment". An index is a collection of blocks with pointers - there is a root block that points to possibly hundreds of branch blocks, each branch block points to potentially hundreds of leaf blocks, and the leaf blocks point to their previous and next leaf blocks in sorted order (if you have access to my book Expert Oracle Database Architecture, I cover this in detail)
as you insert/update/delete from the base table - the entries on the leaf blocks are maintained. If a leaf block becomes entirely empty - we can unlink it from the index structure (from the doubly linked list of leaf blocks) and move it elsewhere in the structure to hold new data.
2) I don't know what an "index map" is. After a coalesce, what you will find is that the leaf blocks are as full as can be, because we've merged together leaf blocks that could fit together and removed the resulting empty leaf blocks. Remember - an index is a complex data structure built from single block entries; the block containing things that start with "A" might be very very very far away from the block that contains things that start with "B" and so on.
3) free space in leaf blocks can always be used by data that belongs there, yes, sure, of course.
4) you know what your sweeper indexes are naturally. They are indexes on surrogate keys populated by sequences and indexes on date columns populated by sysdate - and are on tables where much but not all of the old data is purged over time. You leave some stragglers behind.
A reader, May 05, 2009 - 2:47 am UTC
May 11, 2009 - 8:19 am UTC
The current book would be Expert Oracle Database Architecture.
To me - the act of "tuning" cannot be performed without "understanding how it works". That book describes how Oracle works.
If you know how it works, tuning sort of falls out from there.
A reader, May 05, 2009 - 2:57 am UTC
Sorry
The second one is:
Effective Oracle by Design
Index Rebuild
Shimon Batashvili, July 12, 2009 - 1:01 pm UTC
Hello Tom!
Our DBA has told us (developing team) that increasing the size of a base table column which constructs the unique index, invalidates that index so that we have to rebuild it. We plan to change one column in about 100 different tables in Oracle 7 (Release 7.1.5.2.3) Database, in which the syntax "ALTER INDEX x REBUILD ..." does not work (We get ORA-02243 error: "Invalid alter index or alter snapshot option" when trying to run the command) .
a. Is that true in Oracle 7?
b. If yes, is there any efficient way to get the work done, besides creating scripts to drop and recreate the indexes, using "select 'drop index '||index_name ||';'
from user_indexes
where table_name in( a list of the tables to be altered)
union all
select 'create index '||index_name
from user_indexes
where table_name in(a list of the tables to be altered)
including the storage information?
Best Regards and thanks in advance,
Shimon B.
July 14, 2009 - 5:44 pm UTC
a) I forget.
No, I didn't really, only kidding, index rebuilding was a new feature of 7.3.
over 14 years ago.
b) how about you run the DDL you have in source code control?
but run this first to verify that what your DBA says was true back then - I don't have my way back machine, so I cannot tell if 7.1 actually does what he says it would
ops$tkyte%ORA9IR2> create table t ( x number(5), y varchar2(10) );
Table created.
ops$tkyte%ORA9IR2> create unique index t_x on t(x);
Index created.
ops$tkyte%ORA9IR2> create unique index t_y on t(y);
Index created.
ops$tkyte%ORA9IR2> insert into t values ( 12345, '1234567890' );
1 row created.
ops$tkyte%ORA9IR2> alter table t modify x number(7);
Table altered.
ops$tkyte%ORA9IR2> alter table t modify y varchar2(15);
Table altered.
ops$tkyte%ORA9IR2> insert into t values ( 1234567, '123456789012345' );
1 row created.
No indexes went invalid.
Index Rebuild
Shimon Batashvili, July 16, 2009 - 10:17 am UTC
Hi Tom !
Thanks for your quick answer. I also checked the index status after issuing the "alter table" commands, and inserting new data. The status had not been changed.
In any case I have prepared a script that creates the DROP INDEX and CREATE INDEX clauses just in case of need.
Best Regards,
Shimon B.
Hi Tom
Hi, July 20, 2009 - 10:11 am UTC
Does using different tablespaces for tables and indexes help improve performance?
July 24, 2009 - 11:43 am UTC
yes
no
maybe
it depends
just because they are in different tablespaces does not mean they are on different devices or anything. A tablespace is a logical container, not physical. Think about it, if the tablespaces are:
create tablespace t datafile '/d1/t.dbf';
create tablespace i datafile '/d1/i.dbf';
would it make any difference IO wise? (no)
Even if the index is on a different device from the table, would it make a difference to a single query? No, we access the INDEX and then access the TABLE. They are done in SEQUENCE, not together. It would not matter a whit if the index is on a different device from the table.
the goal - EVEN IO ON ALL DEVICES
Today with raid striping and logical volume managers - we achieve that by striping massive amounts of data over many devices.
In 1980, we might have achieved that with our 100mb, 5 table schema by putting tables on device 1 and indexes on device 2 and hoping that about 1/2 of the IO was against the table and about half was against the index.
in 2009, using tablespaces for "performance tuning" is well beyond "not smart"
Rebuild and Blevel
Saptarshi Basu, August 01, 2009 - 9:49 am UTC
Hi Tom,
All your explanations above about index rebuilds are very useful.
Do you think an index rebuild is useful (and required) in a scenario where such a huge volume of old data has been deleted from the table that there is a chance the index height will decrease? After reading this thread I cannot think of any other scenario that would require an index rebuild.
I have been testing in 10gR2 and see COALESCE does not reduce the height, but REBUILD does.
Thanks and regards,
Saptarshi
============================================
Test case
============================================
create table t1
(
row_num number,
object_name varchar2(400)
);
insert into t1
(
select rownum, object_name
from dba_objects
where rownum <= 3000
);
commit;
create index ind_t1
on t1(row_num)
pctfree 99;
begin
dbms_stats.gather_index_stats('ist','ind_t1');
end;
select BLEVEL from user_indexes where index_name='IND_T1';
delete from t1 where row_num<=2990;
commit;
begin
dbms_stats.gather_index_stats('ist','ind_t1');
end;
select BLEVEL from user_indexes where index_name='IND_T1';
alter index ind_t1 coalesce;
begin
dbms_stats.gather_index_stats('ist','ind_t1');
end;
select BLEVEL from user_indexes where index_name='IND_T1';
alter index ind_t1 rebuild;
begin
dbms_stats.gather_index_stats('ist','ind_t1');
end;
select BLEVEL from user_indexes where index_name='IND_T1';
===================================================
Output
===================================================
create table succeeded.
3000 rows inserted
commit succeeded.
create index succeeded.
anonymous block completed
BLEVEL
----------------------
2
1 rows selected
2990 rows deleted
commit succeeded.
anonymous block completed
BLEVEL
----------------------
2
1 rows selected
alter index ind_t1 succeeded.
anonymous block completed
BLEVEL
----------------------
2
1 rows selected
alter index ind_t1 succeeded.
anonymous block completed
BLEVEL
----------------------
1
1 rows selected
August 04, 2009 - 12:40 pm UTC
... I have been testing in 10gR2 and see COALESCE does not reduce the height, but REBUILD does. ...
that is precisely why I like coalesce over rebuild, rebuild makes us redo a ton of work.
read this:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:6601312252730 see what can happen if you rebuild - you make the index really small - and then spend the next week or two doing a ton of work to make it..... big again.
Be careful of what you wish for sometimes. If you do this large delete - are you
a) going to insert new data that'll cause the index to grow again relatively quickly (that could be like over the course of a year)
b) never really going to insert new data that'll cause the index to grow again.
If you say (a), coalesce is neat - because it saves all of that work. if you say (b), you might consider a rebuild.
A reader, December 29, 2009 - 11:40 am UTC
rebuild local indexes on partitioned tables after bulk insert
Vickye, January 07, 2010 - 11:39 am UTC
Tom,
If I'm bulk loading a billion records into a single partition with bitmap indexes, would it be faster to mark the local indexes unusable and then rebuild them, or to load a separate table, create the bitmap indexes and use partition exchange? Assuming that maintaining the separate table, the index creation and the partition exchange are no overhead - is there any difference in the performance of index rebuild vs. index create? The table is in a data warehouse and availability is not an issue.
January 11, 2010 - 8:34 pm UTC
you know, it all depends.
it depends on what you are trying to speed up.
if you direct path load, it would probably be dandy to keep the indexes in place (best to keep in place) and load. We'll build the indexes in bulk as we load (avoiding the scanning and re-scanning of the partition for each index) and no one will hit undo read consistency issues because you load above the high water mark (they just don't see that data)
if you do this right into the table and don't maintain the indexes - prior to 11gR2, it could cause failures in queries after you load, but before you rebuild the local index partition.
so, are you direct pathing or not? describe your load.
Want to know more
A reader, January 12, 2010 - 10:15 am UTC
Hello Sir,
As per your last comment, when direct path is involved Oracle will build the indexes in bulk. Can you please elaborate on that? I ran a test on 11gR2 on my laptop. Here are some details.
create table big_ind_test nologging as select * from mv_big;
exec dbms_stats.gather_table_stats(user,'BIG_IND_TEST');
create bitmap index id_1_bit on big_ind_test(object_id);
create bitmap index id_2_bit on big_ind_test(object_type);
create bitmap index id_3_bit on big_ind_test(p_id);
Then do a direct path load with the indexes in place
insert /*+ append */ into big_ind_test select * from load_data ;
Here is tkprof
SQL ID: 21b8x0jb91w8g
Plan Hash: 2606305417
insert /*+ append */ into big_ind_test select * from load_data
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 6.86 8.37 3790 1311504 1550460 144972
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 6.87 8.38 3790 1311504 1550460 144972
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 91
Rows Row Source Operation
------- ---------------------------------------------------
0 LOAD AS SELECT (cr=1312200 pr=3796 pw=2185 time=0 us)
144972 TABLE ACCESS FULL LOAD_DATA (cr=2191 pr=2186 pw=0 time=36082 us cost=608 size=14932116 card=144972)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
Disk file operations I/O 3 0.00 0.00
db file sequential read 231 0.00 0.00
db file scattered read 254 0.00 0.01
direct path write 69 0.00 0.00
direct path sync 1 0.05 0.05
log buffer space 3 0.02 0.05
log file switch completion 4 0.10 0.16
reliable message 94 0.00 0.00
rdbms ipc reply 94 0.00 0.00
log file switch (checkpoint incomplete) 2 1.21 1.23
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 3.72 3.72
Also, some raw trace data
PARSING IN CURSOR #1 len=63 dep=0 uid=91 oct=2 lid=91 tim=1263277469086313 hv=1452339471 ad='89056258' sqlid='21b8x0jb91w8g'
insert /*+ append */ into big_ind_test select * from load_data
END OF STMT
PARSE #1:c=85987,e=85288,p=28,cr=734,cu=0,mis=1,r=0,dep=0,og=1,plh=2606305417,tim=1263277469086313
WAIT #1: nam='Disk file operations I/O' ela= 36 FileOperation=2 fileno=4 filetype=2 obj#=74634 tim=1263277469086521
WAIT #1: nam='db file sequential read' ela= 10 file#=4 block#=66698 blocks=1 obj#=74634 tim=1263277469086548
WAIT #1: nam='db file scattered read' ela= 26 file#=4 block#=66699 blocks=5 obj#=74634 tim=1263277469086650
WAIT #1: nam='db file sequential read' ela= 9 file#=4 block#=522 blocks=1 obj#=74629 tim=1263277469086792
WAIT #1: nam='db file sequential read' ela= 8 file#=4 block#=63744 blocks=1 obj#=74629 tim=1263277469086826
WAIT #1: nam='db file sequential read' ela= 7 file#=4 block#=521 blocks=1 obj#=74629 tim=1263277469086847
WAIT #1: nam='direct path write' ela= 90 file number=4 first dba=64544 block cnt=32 obj#=74629 tim=1263277469240990
WAIT #1: nam='db file scattered read' ela= 10 file#=4 block#=88958 blocks=2 obj#=74634 tim=1263277469248033
WAIT #1: nam='db file scattered read' ela= 301 file#=4 block#=88962 blocks=124 obj#=74634 tim=1263277469249027
WAIT #1: nam='direct path write' ela= 91 file number=4 first dba=64576 block cnt=32 obj#=74634 tim=1263277469250666
WAIT #1: nam='db file scattered read' ela= 10 file#=4 block#=90494 blocks=2 obj#=74634 tim=1263277469719968
WAIT #1: nam='db file scattered read' ela= 134 file#=4 block#=90498 blocks=51 obj#=74634 tim=1263277469720769
WAIT #1: nam='direct path write' ela= 92 file number=4 first dba=91968 block cnt=32 obj#=74634 tim=1263277469726573
WAIT #1: nam='direct path write' ela= 122 file number=4 first dba=92000 block cnt=31 obj#=74634 tim=1263277469735227
WAIT #1: nam='direct path sync' ela= 51021 File number=4 Flags=0 p3=0 obj#=74634 tim=1263277469786300
WAIT #1: nam='db file sequential read' ela= 22 file#=4 block#=66443 blocks=1 obj#=74632 tim=1263277469786680
The things which I don't understand are:
1. What is "Disk file operations I/O"?
2. obj#=74634 is the load_data table, but I didn't find obj#=74629 - I checked obj$ also. So is this a temporary object, and if yes, how should we know that?
3. obj#=74634 also shows direct path writes - why would there be writes against a table which is being read? Any reason?
4. How are the indexes updated? From the LIO it seems the table was read a lot, but where is the proof that the indexes were also updated, without any read calls for an index?
January 19, 2010 - 3:49 pm UTC
Can you take the file/block information and verify the obj#s in there?
Disk File operations I/O is "This wait event is set when any of the disk file operations are performed. - e.g., open, close, seek, resize". They are going to add that to the documentation.
I don't understand #4 at all there.
Local Indexes
A reader, February 02, 2010 - 6:20 pm UTC
Hello Sir,
When I tested local indexes on a partitioned table using direct load, it took more time. But yes, with a heap table it's really fast.
Also, one more question regarding 10gR2 onward, where we have automatic memory management. I know you will say "why rebuild the index", but let's say we set workarea_size_policy=manual and set sort_area_size very large: will that speed up the rebuild? I read somewhere that if there is a huge amount of data it will spill to temp and ignore this setting. Is there a limit, and if so, how do we calculate it?
Manual Setting
A reader, February 03, 2010 - 10:19 am UTC
Hello Sir,
Thanks for the update; I will be more careful next time. Can you please let me know about my other question: will setting workarea to manual and sort area to some high number have the same effect as auto, or will it outperform it?
February 03, 2010 - 11:51 am UTC
... Can you please let me know about my other question: will setting workarea to manual and sort area to some high number have the same effect as auto, or will it outperform it? ...
will your manual transmission outperform my automatic?
when you can answer that (you cannot, I'll come up with a case where your example is false) - I'll answer you :)
In general, use automatic pga memory management.
index rebuild
A reader, February 03, 2010 - 10:43 am UTC
Tom:
Is there a way, or a query, that can tell you whether any indexes need rebuilding on an old, inherited Oracle 9i system?
Optimizer confused and not picking index
akas, February 08, 2010 - 12:31 am UTC
I have run dbms_stats.gather_table_stats for a table.
After that, the index is not used.
But the index is used if I drop it and then re-create it after running gather_table_stats. Even rebuilding the index after gather_table_stats does not get the index used.
I have 20 million rows in the table and the indexed column has 4 distinct values. The indexed value used in the where clause matches about 7,000 rows.
What is the cause of the index not being used after running gather_table_stats, and of it not being used after a rebuild, but being used after dropping and re-creating it? How can I run gather_table_stats so that the index is used?
Using the index, the query returns in 6-7 seconds, whereas a full table scan takes 40 seconds.
I have tried:
EXECUTE dbms_stats.gather_table_stats (ownname=>'HR', tabname=>'table_name', estimate_percent=>75, degree =>8, cascade=>true);
Even I tried different combination of estimate_percent.
I also tried method_opt=>'for all columns size 4'. With this option the cost drops from 3000 to 1500, but if I drop and re-create the index the cost becomes just 57. Even rebuilding the index does not reduce the cost.
February 09, 2010 - 6:59 pm UTC
you do not give us anything to look at.
How about this:
I fill my car with gas (gather statistics), it will not start (won't use index). I take that gas out of the car and refill it and it does start. Why does it do this?
We are about even: you don't report any of the statistics, and you don't tell us exactly what the plans were before and after (estimated row counts, costs of doing things).
Some things I can think:
a) stats on the index were stale, stats on the table were stale. With stale stats on both, we decided to use the index.
b) then you freshened the stats on the table, but not on the index. We decide not to use the index. You re-create the index, which computes statistics on the index, and voila - we use the index again.
There is one idea. You would query your statistics and look at the plans before/after to see if this is the case.
should I do rebuild indexes
Amr Rizk, March 14, 2010 - 8:15 am UTC
We have a Point of Sale system. We insert around 200,000 rows daily into Oracle E-Business Suite R12 (Oracle 10gR2, 10.2.0.3.0). We do not delete many rows; most operations are inserts, selects or updates. My questions:
1. Should I rebuild indexes to improve performance only? I don't care about the space used.
2. We have scheduled "Gather Schema Statistics" as a concurrent request to run weekly on all DB schemas. Should we change the frequency? And is it true that after running "Gather Schema Statistics" the indexes will not be used and we should drop and re-create them?
March 15, 2010 - 11:03 am UTC
1) probably not.
2) see the other place you put this identical question, I answered over there already.
Metalink note 989093.1
Ron, April 23, 2010 - 8:35 am UTC
It seems that Metalink has finally caught up with this discussion.
See
Index Rebuild, the Need vs the Implications [ID 989093.1]
And
Script to investigate a b-tree index structure [ID 989186.1]
Both for 8.1.7 to 11.2
index rebuild
A reader, April 27, 2010 - 8:10 am UTC
Hi Tom,
1. I have a table with 15 million rows. As part of an ETL process which runs once a day, 0.5 million rows get updated, and the updated columns are also indexed (B-tree indexes). We are still in the development environment.
I have learned at this site that there can be a better way to do this (that is, not to update, but to partition the table and then insert/select).
But let's assume we go with the update approach: should we plan to rebuild the indexes (on the columns which get updated as part of the ETL)?
2. We have another table (also in development) with a column invoice_flg, which is varchar2(1) and can hold null, 'Y' or 'N' as the valid values.
As part of the business process, nulls can be updated to 'Y', and 'Y' can be updated to 'N'. Much of the time 'N' can be updated back to null as well.
We are not sure exactly how these updates happen, as this is an off-the-shelf package, but we have seen that the index (on invoice_flg) grows over time, and queries using this index show a progressive slowdown.
When we rebuild it, the size becomes very small.
Can you please explain these two cases?
Regards
April 27, 2010 - 8:26 am UTC
... But lets assume that we go with update approach so should we plan to rebuild
the indexes ( on the columns which are getting updated as part of ETL) ?
...
I doubt it; 500,000 rows is tiny. The only indexes you would have to worry about are what I call sweeper indexes - search this site for "sweeper".
Unless one of Y, N or NULL represents a very very very tiny fraction of the data - I question why there is an index on this column at all. Why is this indexed? How is it used.
yes, 'Y' rows will be very rare
A reader, April 27, 2010 - 9:24 am UTC
Hi Tom
Thanks a lot. Yes, on the second question: the number of 'Y' rows will be very small, and most of the queries will search for rows with 'Y'.
Thanks
April 27, 2010 - 10:09 am UTC
well, if this is a third party application, you might not be able to do this but...
create index t_idx on t(case when col='Y' then col end);
If you search on "where case when col='Y' then col end = 'Y'", it will use this tiny index on the 'Y' rows and the 'Y' rows alone. We don't index entirely-null keys, so the N's and NULLs will not be in there.
If the application does not search on that, and cannot be made to search on that, this will not work; you'll have what you have. You can coalesce the index from time to time if need be (don't rebuild, just gently coalesce).
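Tom's suggestion can be sketched end to end like this (the table, column and index names here are made up for illustration; only the shape of the indexed expression matters):

```sql
-- hypothetical table: the application only ever searches for the 'Y' rows
create table invoices
( invoice_id  number primary key,
  invoice_flg varchar2(1)   -- NULL, 'Y' or 'N'
);

-- index only the 'Y' rows: entirely-NULL keys are not stored in a
-- b*tree index, so the 'N' and NULL rows cost nothing here
create index invoices_flg_y_idx
    on invoices ( case when invoice_flg = 'Y' then invoice_flg end );

-- the predicate must match the indexed expression exactly to use it
select invoice_id
  from invoices
 where case when invoice_flg = 'Y' then invoice_flg end = 'Y';
```

Because only the rare 'Y' rows carry a non-null key, the index stays tiny no matter how many 'N'/NULL rows the table accumulates.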
yes, we can now build this index...
A reader, April 28, 2010 - 3:24 am UTC
Hello Tom
Yes, we can add this index after getting approval from the vendor of the package, so we will try that.
I have also read about creating such indexes in your third book, and at that time we tried it on a few tables; it worked very well, especially where the data was mostly read-only (far more selects than total inserts/updates).
Regards
syed zaheerulla, May 01, 2010 - 1:16 pm UTC
Hi Tom,
Thanks for your all help to the world of Oracle.
Could you please let me know when I should opt for rebuilding indexes, and when for drop and re-create?
And which one is better?
When I mark indexes unusable, do a huge data load, and then rebuild the indexes, how does the rebuild work: will Oracle use the existing index structure to rebuild the new index, or do a full table scan on the table to build the new index structure?
May 06, 2010 - 11:16 am UTC
When you mark the index unusable, load, and then rebuild, we of course cannot use the existing index - it is incomplete. We have to scan and sort the data.
I always prefer unusable/rebuild over drop and create.
With drop and create, you stand a chance of the create part never getting done (due to not reading or handling errors properly in your scripts), whereas with a rebuild the unusable index would still be there, sticking out like a sore thumb - you cannot "lose it" by accident.
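For reference, the unusable/rebuild pattern Tom prefers looks roughly like this (the index and table names are illustrative):

```sql
-- make the index unusable and tell the session to ignore it
alter index work_tab_idx unusable;
alter session set skip_unusable_indexes = true;

-- ... bulk load into work_tab here ...

-- the rebuild must scan and sort the base table (the unusable index
-- no longer holds complete data), but the index never leaves the
-- dictionary: a failed rebuild leaves it visibly unusable, not gone
alter index work_tab_idx rebuild;
```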
A reader, May 17, 2010 - 5:40 am UTC
Thanks for your reply TOM.
As you said, you would prefer unusable/rebuild to drop/re-create because the creation part might fail due to some error. My doubt here is: even when we mark the index unusable, load a huge volume of data, and then rebuild it, Oracle needs to create a fresh index structure by doing a full table scan. So here also Oracle is dropping the old index structure and creating a new one, which is similar to dropping and re-creating the index, with the same chance of the creation part failing.
Please correct me if am wrong in my above understanding.
May 24, 2010 - 8:23 am UTC
It would/could fail on the rebuild - but, and this is the key point, the index would STILL BE IN THE DICTIONARY. It would be there - nothing is "lost", no metadata, no "existence"; you cannot "lose" the index this way.
it is all too easy to "lose" an index (or anything in fact) if you drop it and create it - the drop succeeds, someone doesn't notice that the create fails and it is just missing in action.
Mark it unusable, and if the rebuild fails it is still there, staring you in the face; people will report it ("index is unusable" errors will happen). Better to have an error you fix right away than to run for a while missing a key index.
rebuild unusable local indexes vs alter index rebuild partition nologging parallel 4
A reader, June 18, 2010 - 7:53 am UTC
Hi Tom,
Can you explain whether
"alter table <table_name> modify partition <partition_name> rebuild unusable local indexes"
generates redo when rebuilding local bitmap partition indexes?
If the answer is yes, then it would be more efficient to rebuild local partition indexes with
"alter index <index_name> rebuild partition <partition_name> nologging parallel 4"
Is it possible to combine "rebuild unusable local indexes" with the nologging and parallel options?
Regards
Mato
June 22, 2010 - 12:42 pm UTC
If the index is in the default logging mode and the database is in archivelog mode, then yes. If you alter it nologging AND the database permits non-logged operations (force logging not enabled), then you can skip redo generation.
Efficient is a strange term sometimes. If you NEED the redo for standby - it would not be efficient.
The syntax you have is correct; it might not work, however, if the DBA has said "we need redo".
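Putting Tom's caveats together, a non-logged parallel partition rebuild might be sketched as follows (the index and partition names are made up; this only skips redo when the database is not in force logging mode, and only if you can afford to lose that redo, e.g. there is no standby depending on it):

```sql
-- first check whether non-logged operations are permitted at all
select force_logging from v$database;

-- rebuild one local index partition without redo, in parallel
alter index sales_local_idx
    rebuild partition p_2010_q2
    nologging parallel 4;

-- optionally restore the index's normal attributes afterwards
alter index sales_local_idx logging noparallel;
```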
Why Oracle provides feature for rebuild ?
A reader, January 05, 2011 - 11:24 pm UTC
Tom,
I have learned a lot from this site and really appreciate your input on various questions asked by many people.
On one hand, I agree with you that rebuilding indexes is usually not worth it, but then a second thought comes to mind: why did Oracle provide this feature in older versions, and why does it continue to provide it in all new releases?
As always, your answer will help me and others a lot in clearing doubts about it.
Many thanks in advance,
January 06, 2011 - 7:49 am UTC
challenge for you:
please re-read my answer and quote me where I say "never rebuild"
In fact, I can point out places on this page where I say "do rebuild"
understanding complex topics
A reader, March 16, 2011 - 7:07 pm UTC
Tom,
Most of the people asking questions here are asking because they do not know as much as you do about Oracle database technology. An Oracle database is very complex and difficult to understand. We are already aware that you are an expert. Please try to understand that most of us don't know as much as you do about databases. Most of us have other priorities, and have not dedicated years of our lives to mastering these database concepts. Therefore, please focus on what someone is asking, rather than on what they should already know.
Thank you,
Mort
March 17, 2011 - 8:02 am UTC
I'm not sure what you mean. Give some context here.
I'm asking serious questions. Take the original poster who wrote:
We are rebuilding indexes every week . but we found it is causing lot of
fragmentation.
Now, to me, if you are doing something every week - you must have some sound reason for doing it. You must have implemented this regimen on purpose. You did it - now tell me WHY you did it.
That flow of conversation has NOTHING to do with technology, nothing to do with Oracle. It is a valid question in any context.
Someone says to you:
"Every week I go outside and stand on my head for 10 minutes, then I get a headache"
What might you say to them? Might you say "WHY, why do you do that? Do you not see a potential linkage between the two??"
So, give me some context here.
Give Tom a break!
Mark Grady, March 28, 2011 - 5:11 pm UTC
Give Tom a break! He gives us his time to try and help our community and build our knowledge and encouraging us to think for ourselves.
So read the manual, buy his book and then if you don't understand, ask a question. If you are lucky, you will get a response.
About fragmentation
A, November 22, 2011 - 8:40 am UTC
Hello Tom,
While answering a question regarding fragmentation earlier in this post, you said fragmentation would not happen (presuming locally managed tablespaces - never use dictionary-managed).
Currently we archive a lot of data every day, and we are on Oracle 10g with locally managed tablespaces. Does that mean we don't have to reorganise the tables?
November 22, 2011 - 9:04 am UTC
probably not. space is used and reused.
If you are archiving "lots" of data - hopefully partitioning plays a big role in your implementation.
Thanks
A, November 22, 2011 - 11:02 pm UTC
Hello Tom,
Thanks for the reply. Unfortunately, the tables are not partitioned, though they should have been. We are pushing for it.
Ask
Anand, November 26, 2011 - 8:04 am UTC
Hi Tom,
In our system a batch process runs Monday to Friday every night (4 hours), and we have a job to gather stats on all the tables at the weekend. But the DBA has suggested not gathering stats for those tables which are inserted into and deleted from within the batch itself (40,000 rows). Any suggestions? Should we go ahead with this? Thanks a lot.
November 28, 2011 - 10:28 am UTC
insufficient data to answer - no one could answer this.
tell us more.
Tell us the DBA's reasoning for not wanting to gather stats.
Tell us your reasoning for wanting to gather stats.
Tell us as much as you can about the nature of the data in the tables.
For example, if during the weekend, these batch tables are EMPTY - but during batch processing they get FILLED and then EMPTIED - I would pretty much agree with your DBA. You would gather stats on these tables once when they are FULL (during the batch process) and then lock those stats. You might have to come back and periodically set the statistics for columns populated by sequences or dates as the new high values will change over time (set_column_stats).
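The gather-when-full-then-lock approach Tom describes could be scripted roughly as below (the schema and table names are placeholders):

```sql
-- run once, during the batch window, while the work table is FULL
begin
    dbms_stats.gather_table_stats(
        ownname => 'BATCH_OWNER',
        tabname => 'WORK_TAB' );

    -- freeze these representative stats so the weekend job cannot
    -- overwrite them with "empty table" statistics
    dbms_stats.lock_table_stats(
        ownname => 'BATCH_OWNER',
        tabname => 'WORK_TAB' );
end;
/
```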
Gather stats
A reader, November 29, 2011 - 8:58 am UTC
Hi Tom,
Really sorry for the incomplete information.
1. As per the DBA, the number of rows in this table varies from day to day (20,000 to 40,000 rows). So the DBA asked us to delete the statistics, as Oracle will create stats automatically. (Is there any performance benefit to this?) Version: Oracle 10.2.
2. I ran some test cases after dropping the stats, and there was no performance benefit after deleting them either.
3. This table is populated in the batch, and the data is then used for processing (there are 10-15 selects against this table). After that, the batch deletes everything (zero rows).
Yes Tom, the table is empty when stats are gathered over the weekend.
But my question is: should I gather stats when the table is full (during the batch process itself), or should I just not gather stats for this table and delete the existing stats as well?
Thanks a lot Tom.
please revert
A reader, December 10, 2011 - 2:27 am UTC
Hi tom,
Please revert ob above ....thanks
December 11, 2011 - 2:53 pm UTC
I don't know what 'revert ob' means.
sorry
A reader, December 13, 2011 - 10:28 am UTC
Hi Tom,
Sorry. Please revert on last post, query regarding stats.
December 13, 2011 - 10:40 am UTC
please revert what exactly? I really have no idea what you are trying to say - please rephrase using different words.
stats
A reader, December 14, 2011 - 11:08 am UTC
Hi Tom,
Really sorry for the incomplete information.
1. As per the DBA, the number of rows in this table varies from day to day (20,000 to 40,000 rows). So the DBA asked us to delete the statistics, as Oracle will create stats automatically. (Is there any performance benefit to this?) Version: Oracle 10.2.
2. I ran some test cases after dropping the stats, and there was no performance benefit after deleting them either.
3. This table is populated in the batch, and the data is then used for processing (there are 10-15 selects against this table). After that, the batch deletes everything (zero rows).
Yes Tom, the table is empty when stats are gathered over the weekend.
But my question is: should I gather stats when the table is full (during the batch process itself), or should I just not gather stats for this table and delete the existing stats as well?
Thanks a lot.
December 15, 2011 - 7:46 am UTC
1) you should probably gather stats on the table when there are 40,000 rows in it and then lock them. You might have to update the column stats from time to time using dbms_stats.set_column_stats if you have a date or sequence number in there and the high value is changing over time.
2) this table is so tiny - you should just get representative stats in there and leave them be.
Just gather them once, and fix specific column values if necessary over time.
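Adjusting the high value of a date column with dbms_stats.set_column_stats could look roughly like this sketch (the schema, table and column names are placeholders; the date range you set would come from your own data, and force => true is only needed if the table's stats are locked):

```sql
declare
    l_distcnt number;
    l_density number;
    l_nullcnt number;
    l_avgclen number;
    l_srec    dbms_stats.statrec;
    l_dates   dbms_stats.datearray;
begin
    -- read the existing column stats so we only change the low/high values
    dbms_stats.get_column_stats(
        ownname => 'BATCH_OWNER', tabname => 'WORK_TAB', colname => 'CREATE_DT',
        distcnt => l_distcnt, density => l_density,
        nullcnt => l_nullcnt, srec => l_srec, avgclen => l_avgclen );

    -- move the known low/high values forward to cover the current data
    l_dates := dbms_stats.datearray( sysdate - 30, sysdate );
    dbms_stats.prepare_column_values( l_srec, l_dates );

    dbms_stats.set_column_stats(
        ownname => 'BATCH_OWNER', tabname => 'WORK_TAB', colname => 'CREATE_DT',
        distcnt => l_distcnt, density => l_density,
        nullcnt => l_nullcnt, srec => l_srec, avgclen => l_avgclen,
        force   => true );   -- override locked statistics
end;
/
```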
A reader, December 21, 2011 - 1:12 am UTC
Hi Tom,
If we are joining some tables in a query, and we are not selecting any columns from one or two of the joined tables, should we always use EXISTS? Is this a rule of thumb? I read in your article to use EXISTS instead of a join in such cases, but in some cases I have found the cost of the join is lower than that of the EXISTS version.
Unfortunately, I don't have tkprof.
December 21, 2011 - 8:26 am UTC
... If we are joining some tables in a query, and we are not selecting any columns from one or two of the joined tables, should we always use EXISTS? ...

never say never
never say always
I always say
What you want to do is write a query that "makes sense". You might use a join, an exists OR "in".
In this case - you would probably want to use IN or EXISTS - yes.
http://www.oracle.com/technetwork/issue-archive/2006/06-may/o36asktom-099107.html Also, cost is not as relevant as real world experience. How did the queries perform? In general, an exists or in can be rewritten as a join (especially if we have foreign keys and primary keys and not null constraints in place!!)
Sql Tuning
A reader, December 24, 2011 - 6:02 pm UTC
Hi Tom,
As you said "cost is not as relevant as real world experience"
But we don't have tkprof or any performance tuning utility, so we just paste the query into the SQL window and press F5, which automatically opens the explain-plan window. Then we change the query and see whether the I/O cost decreases or not. Do you mean this is the wrong way to tune? If so, what is another way to tune without such a utility?
December 24, 2011 - 8:34 pm UTC
you mean this is wrong way to tune
pretty much.
If so then what is other way to tune
without having utility ?
actually running the query.
Tuning
A reader, December 24, 2011 - 10:43 pm UTC
Hi Tom,
The problem is that once we run the query, it is cached in memory. So when we run it a second time it will obviously have a different time than the actual one. Also, we can't flush the shared pool/buffer cache in production; we don't have access either.
Please suggest.
December 25, 2011 - 10:50 am UTC
What you want to mostly concentrate on is the logical IO's in most cases. A physical IO will be followed by a logical IO (exception: direct path reads for some full scans, in particular with parallel query).
If you concentrate on reducing the logical IOs - in general, the physical IO's will benefit from that as well.
(if you watch your pennies the dollars take care of themselves as they say).
It won't necessarily have a different run time; it would only have a different run time in the event the buffer cache was devoid of blocks relevant to the query, which isn't something that happens much in real life except at database startup time.
And you tune queries in production by trying out alternative ones? Most people would do that in test/development first.
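One lightweight way to see a query's logical I/O from SQL*Plus, without tkprof, is autotrace (the table and predicate here are placeholders; "consistent gets" plus "db block gets" is the logical I/O figure to drive down):

```sql
set autotrace traceonly statistics

-- run the real query; autotrace reports its execution statistics
select count(*)
  from big_tab
 where owner = 'SCOTT';

-- compare "consistent gets" across rewrites of the query:
-- the version doing fewer logical I/Os generally wins
set autotrace off
```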
Stats gathered
A reader, December 25, 2011 - 4:10 am UTC
Hi Tom,
I have a data warehouse database which is around 400GB. The application runs many batch processes 24/7 to provide reports, and there are significant changes in the data too.
Initially I advised them to gather stats with a 100% sample on the segments that are part of the batch process after every load, but they had an issue, and that led me to schedule the stats job below to run every half hour; for the past 3 months no performance issue has been reported. If I disable this job, the application starts performing poorly.
My question: I am planning to get rid of this scheduled stats job. Should I lock the table stats for all the tables and disable the stats-gathering job, or does it have to continue as is? Please advise.
exec DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
exec DBMS_STATS.GATHER_SCHEMA_STATS('AP_DAT', cascade => DBMS_STATS.AUTO_CASCADE, degree => DBMS_STATS.AUTO_DEGREE, options=>'GATHER STALE', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, METHOD_OPT => 'FOR ALL COLUMNS SIZE AUTO');
The stats job takes no more than 5 minutes to complete. Is it OK to stay with this approach forever, or are there alternatives?
Thanks,
Sikki.
December 25, 2011 - 11:01 am UTC
... Initially I advised them to gather stats with a 100% sample on the segments that are part of the batch process after every load, but they had an issue, and that led me to schedule the stats job below to run every half hour; for the past 3 months no performance issue has been reported. If I disable this job, the application starts performing poorly. ...
I'm not following you - are you saying you have a data warehouse whereby you gather stats every 30 minutes?
Tell us more, you say there are significant changes to the data - describe what you mean. Are you doing things like adding new partitions (without statistics) and populating them with lots of data and querying them right away - what does "significant changes" mean exactly?
Stats gathered
Sikki, December 25, 2011 - 1:35 pm UTC
Hi Tom,
Thanks for the prompt response. Yes, I schedule stats gathering every half hour; earlier I had it running every 2 hours, then every hour, and now every half hour, at which point no issues are reported. Before I scheduled this, I went through all the long-running SQL and ensured it all had good execution plans.
"Significant changes" means a lot of data change is happening: many deletes, updates and inserts from other interfaces into this database.
Please advise.
December 25, 2011 - 2:08 pm UTC
You didn't give sufficient detail. It seems strange that you would have to gather stats every 30 minutes because of "deletes, updates, inserts"; something else would have to be happening, like "we create lots of new partitions - which have NO stats - and start querying them".
Tell us more about these modifications: what they do, how they do it, and the volume at which they do it ("many" is extremely vague; for some people 1,000,000 rows is "many", for others it is baby stuff. It depends. Give us numbers, sizes, volumes.)
stats gathered
Sikki, December 25, 2011 - 9:59 pm UTC
Tom,
Sorry for not giving sufficient information. Below is the redo log generation per day, i.e., the changes in MB per day on the database. The DB block size is 32K.
RUNDATE LOGSWITCH REDO PER DAY (MB)
--------- ---------- -----------------
14-DEC-11 116 32389
15-DEC-11 280 78111
16-DEC-11 221 61142
17-DEC-11 129 35398
18-DEC-11 315 87746
19-DEC-11 389 108155
20-DEC-11 262 72983
21-DEC-11 284 78568
22-DEC-11 288 80122
23-DEC-11 266 73194
24-DEC-11 96 26770
25-DEC-11 282 78088
26-DEC-11 140 38933
13 rows selected.
December 26, 2011 - 10:35 am UTC
I don't think you are understanding me.
I want to understand the load process here: numbers of rows, gigabytes of data loaded, the approach to loading (I'm suspecting *new partitions* that *do not have any statistics*).
Please re-read what I've written and see if you can dig up some of that information
stats gathered
sikki, December 26, 2011 - 1:17 pm UTC
Tom,
I'll take another look at the stats on the new partitions and let you know.
Thanks for giving valuable input.
index rebuild/analyze
paul, August 13, 2012 - 11:50 pm UTC
Tom,
I have a query regarding index rebuild/analyze.
We run table and index analyze every day to keep the DB statistics up to date. Do we also have to rebuild the tables and indexes every day?
To address certain performance issues: the tuning advisor complains that certain tables are stale even though their statistics are up to date.
Which one should I run first: rebuild, then table/index analyze?
Please advise.
August 17, 2012 - 1:58 pm UTC
We run table and index analyze every day to keep up to date db statistics.
Should we have to run rebuild on table and index everyday?
NO, do not rebuild your tables and indexes like that. You probably don't even have to gather stats so often.
do not rebuild your tables and indexes. If you identify a scientific, numerically backed reason for rebuilding a segment here or there once in a blue moon - fine. but do not rebuild everything.
Good Explanation ....
INDRANIL DAS, March 20, 2013 - 2:44 am UTC
Hi Tom,
We learned so much from this article; thanks a lot for that. I would really appreciate it if you would read my question here.
I understand that if we use a LOCALLY MANAGED tablespace for an index, then we should not be worried about fragmentation, and hence there is no need for rebuilding, and so on...
I was searching in Google and found the following:
1. http://docs.oracle.com/cd/B19306_01/server.102/b14211/data_acc.htm (section 15.1.7, Re-creating Indexes)
2. Enabling index monitoring usage is a very tedious thing to do (it may have a performance impact; please correct me if I am wrong), e.g. "ALTER INDEX my_index_i MONITORING USAGE;"
3. And then checking:
SQL> SELECT name, height, lf_rows, del_lf_rows, (del_lf_rows/lf_rows)*100 AS ratio FROM index_stats;
If the ratio of deleted to actual leaf entries is more than 20%, you can think about rebuilding them.
( https://forums.oracle.com/forums/thread.jspa?threadID=580451 )
My questions to you:
(1) Is there any easy way to monitor index usage rather than enabling it for all the indexes? (I don't like either "one by one" enabling or enabling "by a script".) [Our production database has more than 7,000 indexes.]
(2) How do I find an index that really qualifies for rebuilding or coalescing?
(3) It seems that coalescing is better than rebuilding (as discussed here).
Could you please help me? Thanks in advance
INDRANIL
March 25, 2013 - 3:31 pm UTC
I don't care if you use local managed or dictionary managed - it doesn't ever impact the decision to rebuild anything.
The type of tablespace is not something to consider when considering a segment reorganization.
1) alter index ... monitoring usage. Why do you call that one-time command tedious??? It really doesn't impact performance at all; it just flips a switch if an index is used as an access method.
And I don't know why you would do it - it wouldn't help you decide whether to rebuild an index or not.
If you don't like enabling by script and you don't like enabling one by one, I don't know what to tell you. Honestly??? We use scripts to automate the repetitive things in life. It is what we do.
Not that I think you want or need to do this...
2) I don't really search for them. There is basically one type of b*tree index that needs this sort of care and feeding, and you know which ones they are (because you know your data). Search this site for "sweeper index" to read about them. Basically, indexes on sequences and dates (always-increasing values) where you delete a lot, but not all, of the old data - they might need help. An index on something like "last_name" is inserted into all over the place and won't need help.
Other special indexes - like a text index - have special API's for you to determine if they need to be rebuilt - refer to their documentation.
And Enterprise Manager will run the segment advisor for you, to point out structures in the database potentially in need of care.
3) coalescing or shrinking - yes. I prefer that to rebuild if possible.
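For the rare index that genuinely needs help, the gentler options Tom prefers look like this (the index name is illustrative; shrink requires 10g or later):

```sql
-- coalesce: merges adjacent, sparsely-filled leaf blocks in place,
-- without needing space for a second copy of the index
alter index orders_dt_idx coalesce;

-- shrink: similar, but can also hand freed space back
alter index orders_dt_idx shrink space;
```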
On the same
INDRANIL DAS, March 20, 2013 - 2:46 am UTC
More: our database is Oracle 10.2.
Estimated cardinality doesnot match up with Actuals
Rajeshwaran Jeyabal, March 26, 2013 - 12:29 pm UTC
Tom,
We just created a global range-partitioned index (on the create_dt column) on one of our transaction tables.
(We are on 10.2.0.5.)
1) Why is the estimated cardinality off from the actual cardinality?
2) What information am I missing to help the optimizer come up with the correct estimated cardinality?
app@ORA10G> select partition_name,partition_position,object_type,num_rows,sample_size,stale_Stats
2 from user_ind_statistics
3 where index_name ='IDX_IR_TEST_01'
4 and ( partition_position is null
5 or partition_position in (12,13,14) )
6 order by 2 nulls first ;
PARTITION_NAME PARTITION_POSITION OBJECT_TYPE NUM_ROWS SAMPLE_SIZE STA
------------------------------ ------------------ ------------ ---------- ----------- ---
INDEX 658760298 658760298 NO
P_NOV_2012 12 PARTITION 20502191 20502191 NO
P_DEC_2012 13 PARTITION 10049446 10049446 NO
P_JAN_2013 14 PARTITION 1992729 1992729 NO
app@ORA10G> select /*+ parallel(e) */ count(*)
2 from claim_data_audit e
3 where create_dt >= to_date('11/01/2012','mm/dd/yyyy')
4 and create_dt < to_date('02/01/2013','mm/dd/yyyy') ;
COUNT(*)
----------
32544366
app@ORA10G> select create_dt
2 from claim_data_audit
3 where create_dt >= to_date('11/01/2012','mm/dd/yyyy')
4 and create_dt < to_date('02/01/2013','mm/dd/yyyy') ;
Elapsed: 00:00:00.66
Execution Plan
----------------------------------------------------------
Plan hash value: 487317143
----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Pstart| Pstop |
----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 23 | 4 (0)| | |
| 1 | PARTITION RANGE ITERATOR| | 23 | 4 (0)| 12 | 14 |
|* 2 | INDEX RANGE SCAN | IDX_IR_TEST_01 | 23 | 4 (0)| 12 | 14 |
----------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("CREATE_DT">=TO_DATE(' 2012-11-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
"CREATE_DT"<TO_DATE(' 2013-02-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
app@ORA10G> l
1 select column_name
2 from user_ind_columns
3* where index_name ='IDX_IR_TEST_01'
app@ORA10G> /
COLUMN_NAM
----------
CREATE_DT
1 row selected.
March 27, 2013 - 3:21 pm UTC
you tell me nothing about the data in this table
you tell me nothing about how you gathered stats (on tables and indexes...)
you tell me nothing about how representative these stats are
so......
Estimated cardinality does not match up with actuals
Rajeshwaran Jeyabal, March 27, 2013 - 3:51 pm UTC
you tell me nothing about the data in this table - Why do you need that? We don't read anything from the table; the index is the skinny version of the table.
you tell me nothing about how you gathered stats (on tables and indexes...) - I told you "we just created the index" (meaning COMPUTE STATISTICS is the default in 10g), so why are you asking how I gathered stats?
you tell me nothing about how representative these stats are - I don't really get this.
look at the num_rows in each partition - 20502191 + 10049446 + 1992729 = 32544366 (which exactly matches the count below).
Meaning the index stats match the actual table data. So why do you ask for table stats? What is the need for them here?
app@ORA10G> select partition_name,partition_position,object_type,num_rows,sample_size,stale_Stats
2 from user_ind_statistics
3 where index_name ='IDX_IR_TEST_01'
4 and ( partition_position is null
5 or partition_position in (12,13,14) )
6 order by 2 nulls first ;
PARTITION_NAME PARTITION_POSITION OBJECT_TYPE NUM_ROWS SAMPLE_SIZE STA
------------------------------ ------------------ ------------ ---------- ----------- ---
INDEX 658760298 658760298 NO
P_NOV_2012 12 PARTITION 20502191 20502191 NO
P_DEC_2012 13 PARTITION 10049446 10049446 NO
P_JAN_2013 14 PARTITION 1992729 1992729 NO
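As a quick arithmetic check of the claim above (a trivial Python sketch, just to confirm the figures quoted from USER_IND_STATISTICS really do add up):

```python
# Partition-level NUM_ROWS figures quoted from USER_IND_STATISTICS above.
partition_rows = {
    "P_NOV_2012": 20_502_191,
    "P_DEC_2012": 10_049_446,
    "P_JAN_2013": 1_992_729,
}

total = sum(partition_rows.values())
print(total)  # 32544366 - matches the earlier COUNT(*) over the same date range
```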
March 27, 2013 - 6:29 pm UTC
- Why do you need that? We don't read anything from the table; the index is the skinny version of the table.
well, that wins the prize for the most, well, unusual (and least useful) feedback of the day.
You are asking me to be an optimizer - to tell you why I didn't give you the right answer - and you give me none of the inputs I would have access to in order to come up with my guess in the first place? Really? Think about it (you've asked a ton of questions on this site - you know I always need "details" and "information"; without them, one is doing something called "guessing", and I don't like to guess).
...
- I told you "we just created the index" (meaning COMPUTE STATISTICS is the default in 10g), so why are you asking about "how you gathered stats"?
...
and you still didn't answer my question. So what if you have index stats - big deal. What sort of COLUMN STATISTICS might you or might you not have? What about the table stats? They drive the cardinality estimates...
consider:
ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> drop table t;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> CREATE TABLE t
ops$tkyte%ORA11GR2> (
ops$tkyte%ORA11GR2> dt date,
ops$tkyte%ORA11GR2> x int,
ops$tkyte%ORA11GR2> y varchar2(30)
ops$tkyte%ORA11GR2> )
ops$tkyte%ORA11GR2> PARTITION BY RANGE (x)
ops$tkyte%ORA11GR2> (
ops$tkyte%ORA11GR2> PARTITION p2012_10 VALUES LESS THAN (20121101),
ops$tkyte%ORA11GR2> PARTITION p2012_11 VALUES LESS THAN (20121201),
ops$tkyte%ORA11GR2> PARTITION p2012_12 VALUES LESS THAN (20130101),
ops$tkyte%ORA11GR2> PARTITION p2013_01 VALUES LESS THAN (20130201),
ops$tkyte%ORA11GR2> PARTITION p2013_02 VALUES LESS THAN (20130301),
ops$tkyte%ORA11GR2> PARTITION p2013_03 VALUES LESS THAN (20130401),
ops$tkyte%ORA11GR2> PARTITION junk VALUES LESS THAN (MAXVALUE)
ops$tkyte%ORA11GR2> )
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t(x,dt,y) select 20121001+ mod(rownum,28), to_date( '01-oct-2012', 'dd-mon-yyyy') + mod(rownum,28), 'x'
ops$tkyte%ORA11GR2> from (select level l from dual connect by level <= 100000),
ops$tkyte%ORA11GR2> (select level l2 from dual connect by level <= 200)
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2> insert into t(x,dt,y) select 20121101+ mod(rownum,28), to_date( '01-nov-2012', 'dd-mon-yyyy') + mod(rownum,28), 'x'
ops$tkyte%ORA11GR2> from (select level l from dual connect by level <= 100000),
ops$tkyte%ORA11GR2> (select level l2 from dual connect by level <= 200)
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t(x,dt,y) select 20121201+ mod(rownum,28), to_date( '01-dec-2012', 'dd-mon-yyyy') + mod(rownum,28), 'x'
ops$tkyte%ORA11GR2> from t partition(p2012_10)
ops$tkyte%ORA11GR2> where rownum <= 100000*100;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t(x,dt,y) select 20130101+ mod(rownum,28), to_date( '01-jan-2013', 'dd-mon-yyyy') + mod(rownum,28), 'x'
ops$tkyte%ORA11GR2> from t partition(p2012_10)
ops$tkyte%ORA11GR2> where rownum <= 100000*20;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert into t(x,dt,y) select 20130201+ mod(rownum,28), to_date( '01-feb-2013', 'dd-mon-yyyy') + mod(rownum,28), 'x'
ops$tkyte%ORA11GR2> from t partition(p2012_10)
ops$tkyte%ORA11GR2> where rownum <= 100000*20;
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> commit;
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T' );
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create index t_idx on t(dt)
ops$tkyte%ORA11GR2> global partition by range(dt)
ops$tkyte%ORA11GR2> (
ops$tkyte%ORA11GR2> PARTITION p2012_10 VALUES LESS THAN (to_date('01-nov-2012','dd-mon-yyyy')) ,
ops$tkyte%ORA11GR2> PARTITION p2012_11 VALUES LESS THAN (to_date('01-dec-2012','dd-mon-yyyy')) ,
ops$tkyte%ORA11GR2> PARTITION p2012_12 VALUES LESS THAN (to_date('01-jan-2013','dd-mon-yyyy')) ,
ops$tkyte%ORA11GR2> PARTITION p2013_01 VALUES LESS THAN (to_date('01-feb-2013','dd-mon-yyyy')) ,
ops$tkyte%ORA11GR2> PARTITION p2013_02 VALUES LESS THAN (to_date('01-mar-2013','dd-mon-yyyy')) ,
ops$tkyte%ORA11GR2> PARTITION p2013_03 VALUES LESS THAN (to_date('01-apr-2013','dd-mon-yyyy')) ,
ops$tkyte%ORA11GR2> PARTITION junk VALUES LESS THAN (MAXVALUE)
ops$tkyte%ORA11GR2> )
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select dt
2 from t
3 where dt >= to_date('11/01/2012','mm/dd/yyyy')
4 and dt < to_date('02/01/2013','mm/dd/yyyy') ;
Execution Plan
----------------------------------------------------------
Plan hash value: 2022905573
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 33M| 255M| 28091 (2)| 00:05:38 | | |
| 1 | PARTITION RANGE ITERATOR| | 33M| 255M| 28091 (2)| 00:05:38 | 2 | 4 |
| 2 | INDEX FAST FULL SCAN | T_IDX | 33M| 255M| 28091 (2)| 00:05:38 | 2 | 4 |
--------------------------------------------------------------------------------------------------
<b>so far, so good.... but</b>
ops$tkyte%ORA11GR2> exec dbms_stats.set_table_stats( user, 'T', numrows => 1000 );
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select dt
2 from t
3 where dt >= to_date('11/01/2012','mm/dd/yyyy')
4 and dt < to_date('02/01/2013','mm/dd/yyyy') ;
Execution Plan
----------------------------------------------------------
Plan hash value: 2022905573
--------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 620 | 4960 | 28091 (2)| 00:05:38 | | |
| 1 | PARTITION RANGE ITERATOR| | 620 | 4960 | 28091 (2)| 00:05:38 | 2 | 4 |
| 2 | INDEX FAST FULL SCAN | T_IDX | 620 | 4960 | 28091 (2)| 00:05:38 | 2 | 4 |
--------------------------------------------------------------------------------------------------
ops$tkyte%ORA11GR2> set autotrace off
<b>so, if I had to guess, I'd guess your table stats are whacked. That is why I asked for them. We actually *use them*.</b>
So, a plan ended up deciding to use an index - you know how it comes to that decision: by using table stats to estimate the output of many of the row sources (before we thought "index", we thought "how many rows" - and the rows came from... the table).
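The dependence on table-level NUM_ROWS can be sketched with a toy model (an illustration only, not Oracle's actual costing formula; the function name and the linear-interpolation selectivity model are assumptions for the example). The 184 days and 54M rows below roughly mirror the test case above: the same predicate against the same index collapses from millions of rows to hundreds when only the table's NUM_ROWS is changed.

```python
def estimated_cardinality(num_rows, col_lo, col_hi, pred_lo, pred_hi):
    """Toy range-predicate estimate: table row count times the fraction of the
    column's [low_value, high_value] range that the predicate covers."""
    covered = max(min(pred_hi, col_hi) - max(pred_lo, col_lo), 0)
    selectivity = covered / (col_hi - col_lo)
    return round(num_rows * selectivity)

# Same predicate (days 31..123 of a 184-day column range), same index -
# only the table-level NUM_ROWS changes:
print(estimated_cardinality(54_000_000, 0, 184, 31, 123))  # 27000000
print(estimated_cardinality(1_000,      0, 184, 31, 123))  # 500 - tiny, like the 620 above
```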
... you tell me nothing about how representative these stats are - I don't really get this ...
.....
tell me: are the COLUMN STATS, the table stats - the statistics actually used to derive this cardinality - representative of the data?
tell you what, post the test case from start to finish - create table, load tables, create indexes, statistics gathering method, etc etc etc and we'll start from there.
no creates
no inserts
no look....
help me help you. make it as small as possible (smaller than most of your other uploads please!! just the barest needed to reproduce the situation!!!!)
Looking at your test case - something seems wacky. You're partitioning by date already (based on the partition names), so why would this index be global anyhow?
rebuild index
A reader, March 29, 2013 - 12:00 pm UTC
Tom:
If you migrate a 9i DB from one machine to another machine with 11g server, is it a good idea to rebuild all the indexes and refresh the statistics for all objects in the schema?
March 29, 2013 - 4:09 pm UTC
not unless it would have been a good idea to rebuild all the indexes and refresh the statistics for all objects in the schema without having done the upgrade.
An exception to that would be for large partitioned tables where you might delete existing statistics, enable incremental statistics and regather local partition (but never global again) statistics.
https://blogs.oracle.com/datawarehousing/entry/managing_optimizer_statistics but rebuild all indexes? no.
Estimated cardinality does not match up with actuals
Rajeshwaran Jeyabal, April 01, 2013 - 1:04 pm UTC
1)
you're partitioning by date already (based on partition names), so why would this index be global anyhow? - Tom, the reason for having a global range partitioned index is that this table (CLAIM_AUDIT) is partitioned by HASH (8 partitions) on the PK. We got a recent requirement from end-users to develop a monthly report based on data changes done in the previous month, so we decided to have a global range partitioned index on the create_dt column (on average, nearly 30M - 35M records get selected for this report). This table grows daily by nearly 1M to 1.5M records (no updates, no deletes, only inserts).
2)
what sort of COLUMN STATISTICS might you or might you not have? what about the table stats? they drive the cardinality estimates... - These are the stats I have.
app@ORA10G> select num_distinct,low_value,high_value,
2 density,num_nulls,num_buckets,sample_size,
3 histogram,global_stats
4 from user_tab_col_statistics
5 where table_name='CLAIM_AUDIT'
6 and column_name ='CREATE_DT' ;
NUM_DISTINCT LOW_VALUE HIGH_VALUE DENSITY NUM_NULLS NUM_BUCKETS SAMPLE_SIZE HISTOGRAM GLO
------------ -------------------- -------------------- ---------- ---------- ----------- ----------- --------------- ---
31864472 786B020E0E1D1D 7871031D0F0637 .000015384 0 254 6586900 HEIGHT BALANCED YES
app@ORA10G> select partition_name,object_type,num_rows,blocks,
2 empty_blocks,avg_space,chain_cnt,avg_row_len,sample_size,
3 user_stats,stale_stats
4 from user_tab_statistics
5 where table_name='CLAIM_AUDIT' ;
PARTITION_NAME OBJECT_TYPE NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_SPACE CHAIN_CNT AVG_ROW_LEN SAMPLE_SIZE USE STA
-------------------- ------------ ---------- ---------- ------------ ---------- ---------- ----------- ----------- --- ---
TABLE 658690000 10409024 0 0 0 105 6586900 NO NO
CLAIM_AUDIT_D_P3 PARTITION 82207500 1302151 0 0 0 105 822075 NO NO
CLAIM_AUDIT_D_P6 PARTITION 82456200 1302151 0 0 0 105 824562 NO NO
CLAIM_AUDIT_D_P1 PARTITION 82541200 1302151 0 0 0 105 825412 NO NO
CLAIM_AUDIT_D_P2 PARTITION 82278900 1302151 0 0 0 105 822789 NO NO
CLAIM_AUDIT_D_P4 PARTITION 82403300 1293967 0 0 0 105 824033 NO NO
CLAIM_AUDIT_D_P5 PARTITION 82258900 1302151 0 0 0 105 822589 NO NO
CLAIM_AUDIT_D_P7 PARTITION 82264600 1302151 0 0 0 105 822646 NO NO
CLAIM_AUDIT_D_P8 PARTITION 82360500 1302151 0 0 0 105 823605 NO NO
April 22, 2013 - 1:09 pm UTC
if you are getting 30-35m records YOU SHALL NOT CONSIDER EVEN REMOTELY THE IDEA OF USING AN INDEX.
think about it - an average single-block physical IO takes 3-5ms, so 1,000,000 of them is 1+ hours. You would have just invented a 30-35 HOUR query. don't even go there.
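Tom's back-of-the-envelope numbers can be reproduced directly (a sketch; the 3-5 ms per single-block read is the assumption stated above, and the function name is mine):

```python
def index_fetch_hours(rows, ms_per_io=3.5):
    """Hours of elapsed time if each row fetched via the index costs one
    single-block physical read at ms_per_io milliseconds."""
    return rows * ms_per_io / 1000 / 3600

print(index_fetch_hours(1_000_000, 5))  # ~1.4 hours - "1,000,000 of them is 1+ hours"
print(index_fetch_hours(32_544_366))    # ~31.6 hours - the "30-35 HOUR query"
```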
why did you hash on primary key? what is that doing for you? what was the logic? what was the reasoning? what technical reason drove you to partition by that approach? what is it doing for you? is it making you more available? speeding up some query (*highly* unlikely)? making administration easier in some fashion (doubtful - other than you have smaller segments)
please don't expect me to look up or down in a review. I look at a review - I cannot be expected to pull all of the bits and pieces together.
and that doesn't even tell me what stats you may or may not have; it shows me a few stats but doesn't really tell me anything. A simple "we gather stats like this, using these parameters, and therefore have x, y and z available" would do it.
sweeper index link is NOT working ...
INDRANIL DAS, April 05, 2013 - 12:42 pm UTC
index rebuild
A reader, June 28, 2013 - 4:09 am UTC
Can rebuilding indexes and reorganizing tables help with performance if the segment size becomes 10-20% smaller? The segments are tens of GBs.
July 01, 2013 - 9:10 pm UTC
how long does it stay 10-20% smaller?
smaller
A reader, July 02, 2013 - 6:10 pm UTC
10-15 hours approximately
July 16, 2013 - 12:17 pm UTC
then why bother?
it is probably slowing things down, not making them faster. think about it - you spent a lot of cycles 'shrinking' the thing and we spent a lot of cycles in the next 10-15 hours making it bigger again.
why not just leave it bigger and be done with it?
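The "why bother" argument can be framed as simple arithmetic (an illustrative sketch; the 50 GB segment size, the weekly reorg cadence, and the linear-regrowth assumption are all mine, not from the thread):

```python
def avg_saved_gb(segment_gb, saved_frac, regrow_hours, period_hours=7 * 24):
    """Average space saved over a period if one reorg's savings decay linearly
    back to zero over regrow_hours (triangle area divided by the period)."""
    return segment_gb * saved_frac * regrow_hours / 2 / period_hours

# A 50 GB segment shrunk by 15% that regrows in ~12 hours saves, averaged
# over a week containing one reorg, only about a quarter of a GB:
print(round(avg_saved_gb(50, 0.15, 12), 2))  # 0.27
```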
Sumit, July 14, 2014 - 7:00 am UTC
Excellent explanation on the Index Rebuild.
Can you help us convince the team that a daily index rebuild is not necessary? Our DB team is rebuilding the indexes daily through cron jobs. Our DB is an OLTP DB in the telecom domain, and fast response time is the top priority.
1. How can we get metrics on an index showing how it has grown over time into its equilibrium state?
2. What performance-impact metrics can we compare between rebuilding and not rebuilding?
Do stale statistics degrade the performance of an index rebuild operation?
LYF, October 17, 2019 - 7:39 am UTC
Hi Tom,
We've migrated our archive DB from an 11g to a 12c instance, then ran the archive job as usual. During the job execution, we found that rebuilding the unusable local indexes after exchanging a partition took much more time than in 11g (25 times slower). We noticed that the table and index statistics were imported from 11g, so we gathered them in 12c. After that, the performance of the index rebuild process returned to normal, as in 11g.
So, do stale statistics really affect the performance of an index rebuild operation? If they do, why and how?
Thanks in advance for your help.
October 21, 2019 - 7:03 am UTC
Is this job run in serial or in parallel?
Do stale statistics degrade the performance of an index rebuild operation?
LYF, October 21, 2019 - 11:10 am UTC
"Is this job run in serial or in parallel?" - It runs in serial. Here is the command.
alter table t_xxx modify partition p_xxx rebuild unusable local indexes;
January 06, 2020 - 2:53 am UTC
I asked around internally and found no obvious reason.
Any chance you could run a trace on the session and send us the tkprof file at asktom_us@oracle.com?
Macy, June 24, 2021 - 3:49 pm UTC
If we have locally managed tablespaces and these kinds of index issues still pop up, what would be the right way to resolve the fragmentation issues?
Would dropping and rebuilding the index be the right way ahead?
June 25, 2021 - 12:49 am UTC
The tablespace structure is unrelated to the fragmentation of an index.