Question and Answer

Tom Kyte

Thanks for the question, Raj.

Asked: February 28, 2002 - 3:44 pm UTC

Answered by: Tom Kyte - Last updated: December 12, 2018 - 3:40 am UTC

Category: Database - Version: 5

Viewed 100K+ times!

You Asked

I know this question has been asked earlier, and I am sorry to take up this question slot, but I am confused regarding rebuilding indexes.

If I am interpreting it correctly, you don't recommend rebuilding indexes at all. I have talked to two seasoned Oracle DBAs and they both recommend rebuilding indexes at regular intervals. We have a couple of tables with one to two million records, but the indexes have never been rebuilt. Also, we don't have that much deletion of records going on, though we do have insertions and quite a few selects on those tables. Do you recommend rebuilding indexes for such a database?

What would be the implications if we did have to rebuild the indexes? Would I need a lot of space for such a task?

Thanks a lot for your help.

Raj

and we said...

Ask them for the technical reasons WHY they rebuild.

When they say "performance, of course" -- ask them for the performance metrics they took BEFORE the rebuild and AFTER the rebuild. They won't have any (no one ever does; no one seems to think about doing that). They will say "it just goes faster, trust me".

When they say "to reclaim space, of course" -- ask them "but how long does it take before the index is just as big as it was before the rebuild?" You see, they have not reclaimed any space -- the index will just grow again, eventually hitting a steady-state size that it likes to be.

If you need to rebuild your indexes, you need 2x the space -- you'll have the old and the new index for a period of time. If you do it online, you'll need additional space to hold the changes that are made during the rebuild as well.

and you rated our response

  (190 ratings)

Is this answer out of date? If it is, please let us know via a Review

Reviews

Is your advice against rebuilding a "blanket" one?

March 01, 2002 - 10:49 am UTC

Reviewer: Mohit Dubey from UK

Hi Tom,

Would rebuilding indexes not help in case you want to re-balance the b*tree? Particularly in case of indexes deriving their values from a monotonically increasing sequence (as I trust most of the unique/foreign keys are)?

Please throw some more light on this issue!

Mohit.

Tom Kyte

Followup  

March 01, 2002 - 1:25 pm UTC

If you can show me measurable performance increases or some other measurable, material benefit -- by all means rebuild.

99.99999999% of the time, no one measures; they just rebuild everything (making a really big deal out of it, wasting time, cutting off access, generating overtime hours, etc.).

To HAVE to rebuild is a rarity -- not the norm. Keep good metrics, show the before and after effects. Be ready to prove that the rebuild was worth the effort (I mean, we have to benchmark ourselves in so many other ways -- why is this any different?).



Rebuilding Indexes

March 01, 2002 - 2:20 pm UTC

Reviewer: Rob from Chicago

I find that I have to watch bitmap indexes very closely.
Our fact tables get loaded every day, and every so often a bitmap index seems to grow very rapidly. I rebuild the index and it then holds at a steady state. I see this often if we delete and reinsert a day's worth of info. The performance improvement comes from the inserts on the table, not the selects using the index.



Tom Kyte

Followup  

March 01, 2002 - 2:37 pm UTC

Agreed -- bitmaps are a special case, easily measurable, definite payback.

Now, about b*trees......

online index rebuild

February 24, 2004 - 12:16 pm UTC

Reviewer: reader

The doc says that when we rebuild an index online, changes are allowed and they are stored in a "journal table". What happens to the changes in the journal table if the rebuild aborts for some reason after I started the index rebuild? Are they lost? Thanks.

Tom Kyte

Followup  

February 24, 2004 - 1:02 pm UTC

they are cleaned up for you.

they are not "lost", they are "not useful anymore" if the rebuild aborts.

You would just start the rebuild again and start journaling anew -- no need for the old stuff, it would be counter productive to attempt to use it.
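The mechanics Tom describes can be sketched as a toy model (an illustration of the idea only, not Oracle's implementation): the rebuild bulk-copies the existing data, DML arriving during the copy is captured in a journal, and the journal is either applied at the end (catch-up phase) or simply discarded if the rebuild aborts.

```python
# Toy model of an online index rebuild: bulk copy, journal concurrent
# changes, then either apply the journal or discard it on abort.
# (Illustrative only -- not Oracle's actual internals.)

table = {1: "a", 2: "b", 3: "c"}

def online_rebuild(table, dml_during_rebuild, abort=False):
    new_index = dict(table)          # bulk copy of the existing rows
    journal = []                     # changes made while we were copying
    for op, key, val in dml_during_rebuild:
        journal.append((op, key, val))
    if abort:
        return None, []              # journal is useless now: just dropped
    for op, key, val in journal:     # catch-up phase: apply the journal
        if op == "insert":
            new_index[key] = val
        elif op == "delete":
            new_index.pop(key, None)
    return new_index, journal

idx, _ = online_rebuild(table, [("insert", 4, "d"), ("delete", 1, None)])
print(idx)   # {2: 'b', 3: 'c', 4: 'd'}
```

On abort, the model returns nothing and drops the journal -- which matches Tom's point that the journaled changes are "not useful anymore" and a fresh rebuild simply starts journaling anew.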

index and table in the same tablespace

February 24, 2004 - 5:37 pm UTC

Reviewer: reader

From http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96521/indexes.htm#310

<quote>Using different tablespaces (on different disks) for a table and its index produces better performance than storing the table and index in the same tablespace. Disk contention is reduced.</quote>

Tom, is the above still recommended? Thanks.

Tom Kyte

Followup  

February 25, 2004 - 7:40 am UTC

no



index hijacking

February 24, 2004 - 7:08 pm UTC

Reviewer: VKOUL from Lacey, WA USA

Hi Tom

Pls. go through the following

SQL> create table abc (x number);

Table created.

SQL> create index abc_non_unique_idx on abc(x);

Index created.

SQL> insert into abc values (1);

1 row created.

SQL> insert into abc values (1);

1 row created.

SQL> select * from abc;

         X
----------
         1
         1

SQL> roll
Rollback complete.
SQL> 
SQL> select * from abc;

no rows selected

SQL> @1.lst

                    INDEX_NAME : ABC_NON_UNIQUE_IDX
                    INDEX_TYPE : NORMAL
                   TABLE_OWNER : VKOUL
                    TABLE_NAME : ABC
                    TABLE_TYPE : TABLE
                    UNIQUENESS : NONUNIQUE
                   COMPRESSION : DISABLED
                 PREFIX_LENGTH :
               TABLESPACE_NAME : USERS
                     INI_TRANS : 2
                     MAX_TRANS : 255
                INITIAL_EXTENT : 1048576
                   NEXT_EXTENT : 1048576
                   MIN_EXTENTS : 2
                   MAX_EXTENTS : 2147483645
                  PCT_INCREASE : 0
                 PCT_THRESHOLD :
                INCLUDE_COLUMN :
                     FREELISTS : 1
               FREELIST_GROUPS : 1
                      PCT_FREE : 10
                       LOGGING : YES
                        BLEVEL :
                   LEAF_BLOCKS :
                 DISTINCT_KEYS :
       AVG_LEAF_BLOCKS_PER_KEY :
       AVG_DATA_BLOCKS_PER_KEY :
             CLUSTERING_FACTOR :
                        STATUS : VALID
                      NUM_ROWS :
                   SAMPLE_SIZE :
                 LAST_ANALYZED :
                        DEGREE : 1
                     INSTANCES : 1
                   PARTITIONED : NO
                     TEMPORARY : N
                     GENERATED : N
                     SECONDARY : N
                   BUFFER_POOL : DEFAULT
                    USER_STATS : NO
                      DURATION :
             PCT_DIRECT_ACCESS :
                    ITYP_OWNER :
                     ITYP_NAME :
                    PARAMETERS :
                  GLOBAL_STATS : NO
                 DOMIDX_STATUS :
               DOMIDX_OPSTATUS :
                FUNCIDX_STATUS :


SQL> 
SQL> alter table abc add constraint pk_abc primary key
  2  (x) using index;

Table altered.

SQL>  insert into abc values (1);

1 row created.

SQL> /
 insert into abc values (1)
*
ERROR at line 1:
ORA-00001: unique constraint (VKOUL.PK_ABC) violated


SQL> 
SQL> @1.lst

                    INDEX_NAME : ABC_NON_UNIQUE_IDX
                    INDEX_TYPE : NORMAL
                   TABLE_OWNER : VKOUL
                    TABLE_NAME : ABC
                    TABLE_TYPE : TABLE
                    UNIQUENESS : NONUNIQUE
                   COMPRESSION : DISABLED
                 PREFIX_LENGTH :
               TABLESPACE_NAME : USERS
                     INI_TRANS : 2
                     MAX_TRANS : 255
                INITIAL_EXTENT : 1048576
                   NEXT_EXTENT : 1048576
                   MIN_EXTENTS : 2
                   MAX_EXTENTS : 2147483645
                  PCT_INCREASE : 0
                 PCT_THRESHOLD :
                INCLUDE_COLUMN :
                     FREELISTS : 1
               FREELIST_GROUPS : 1
                      PCT_FREE : 10
                       LOGGING : YES
                        BLEVEL :
                   LEAF_BLOCKS :
                 DISTINCT_KEYS :
       AVG_LEAF_BLOCKS_PER_KEY :
       AVG_DATA_BLOCKS_PER_KEY :
             CLUSTERING_FACTOR :
                        STATUS : VALID
                      NUM_ROWS :
                   SAMPLE_SIZE :
                 LAST_ANALYZED :
                        DEGREE : 1
                     INSTANCES : 1
                   PARTITIONED : NO
                     TEMPORARY : N
                     GENERATED : N
                     SECONDARY : N
                   BUFFER_POOL : DEFAULT
                    USER_STATS : NO
                      DURATION :
             PCT_DIRECT_ACCESS :
                    ITYP_OWNER :
                     ITYP_NAME :
                    PARAMETERS :
                  GLOBAL_STATS : NO
                 DOMIDX_STATUS :
               DOMIDX_OPSTATUS :
                FUNCIDX_STATUS :


SQL> 
SQL> disconnect
Disconnected from Oracle8i Enterprise Edition Release 8.1.7.3.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.3.0 - 64bit Production
SQL> 


Q1 : How does Oracle implement uniqueness, since the underlying index is NONUNIQUE? Does Oracle recreate the index on primary key creation? If so, why does the index data say it is NONUNIQUE?

Q2 : On your site I have read something like "There is no nonunique B*tree; in the case of non-unique indexes, the rowid is concatenated with the value to make it unique". In this case, does it segregate the rowid from the value after hijacking?

Q3 : What effect does it have on the optimizer in selecting a plan?

Thanks 

Tom Kyte

Followup  

February 25, 2004 - 8:02 am UTC

q1) it doesn't matter "how", only that it "does" at the end of the day.

Since version 8.0, with deferrable constraints, we've only needed AN INDEX to enforce uniqueness.  Not a unique index, just an index.

q2) not relevant to us.  only relevant that "it works".  the internal mechanics are not known to us.  all we need to know is that a primary key/unique constraint will

a) look for the columns to be on the leading edge of any index, if found, use it, consider:


ops$tkyte@ORA920PC> create table t ( x int, y int );
Table created.
 
ops$tkyte@ORA920PC> create index t_idx on t(x,y);
Index created.
 
ops$tkyte@ORA920PC> alter table t add constraint t_pk primary key(x);
Table altered.
 
ops$tkyte@ORA920PC> select index_name from user_indexes;
 
INDEX_NAME
------------------------------
T_IDX
 
ops$tkyte@ORA920PC> drop index t_idx;
drop index t_idx
           *
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key

b) if not found and constraint is not deferrable, create unique index by default
c) if not found and constraint is deferrable, create non-unique index by default

q3) nothing horribly measurable really.  It would be rare for it to affect a plan, in light of statistics, the optimizer will understand about how many rows (1) would come back based on "where pk = value".  That and remember -- indexes are the tuning tools here.  If you needed a unique index for performance reasons, then you most certainly should not have created a non-unique one.  
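The "rowid concatenated with the key" idea from q2 can be sketched outside the database (a simplified model, not Oracle's actual leaf-block format): every entry in a non-unique index is made distinct by its rowid, while uniqueness is enforced by the constraint checking for another entry with the same key value.

```python
# Simplified model of a non-unique B*Tree index: entries are (key, rowid)
# tuples, so even duplicate key values sort as distinct entries.
# (Illustrative only -- the rowid strings below are made up.)

index = []  # kept sorted; stands in for the index leaf entries

def insert(key, rowid, unique_constraint=False):
    # the constraint, not the index structure, rejects duplicates
    if unique_constraint and any(k == key for k, _ in index):
        raise ValueError(f"ORA-00001-style violation: duplicate key {key}")
    index.append((key, rowid))  # (key, rowid) is unique even when key is not
    index.sort()

insert(1, "AAAD+2AAA")      # first '1'  -- fine
insert(1, "AAAD+2AAB")      # second '1' -- fine, the rowid differs
try:
    insert(1, "AAAD+2AAC", unique_constraint=True)
except ValueError as e:
    print(e)
```

This also illustrates why the index in the VKOUL session can stay NONUNIQUE after the primary key is added: the same structure serves, and only the constraint check changes.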


 

Interesting...

February 25, 2004 - 8:12 am UTC

Reviewer: Invisible from UK

<reader>
Should indexes go in a separate tablespace to tables?
</reader>

<Tom>
No.
</Tom>

...any reason WHY this should be so? Just curious...


Tom Kyte

Followup  

February 25, 2004 - 9:39 am UTC

in the olden days....


before logical volumes
before striping
before 500+ table schemas

when you had like 10 tables and 12 indexes and a couple of disks direct attached....


putting indexes on one and data on the other was an attempt to evenly distribute IO over both devices. "couple of index reads - many from cache, one or so from disk" followed by "table access by index rowid" maybe from cache, maybe not -- would tend to put 1/2 the IO on each of your two disks.

The SAME effect could have been achieved by using really small extents and making sure each object was in 100 or so extents -- the data would have been nicely striped (but that ran counter to that extent myth so no one did that)...


Today with 5000 objects in a schema, gigabytes/terabytes of storage, raid striping, logical volumes -- it just isn't relevant in the least.


Me, I use tablespaces to hold applications when appropriate. Your application will get A tablespace, unless your application has some really large tables. Then, and only then, will we carve out extra things to manage.

Cool.

February 25, 2004 - 10:19 am UTC

Reviewer: Invisible from UK

Thanks for that Tom. :-)

I presume from the above that there's also no particular PROBLEM if the DB is already set up with everything separated out?


Tom Kyte

Followup  

February 25, 2004 - 11:23 am UTC

nope, just extra stuff to manage.

use tablespaces as an administrative/bookkeeping tool, not so much as a tuning device.

Not rebuild index definitively?

February 25, 2004 - 10:54 am UTC

Reviewer: A reader

Hi Tom,

What do you think about rebuilding indexes yearly? Or is the answer definitively NO index rebuild necessary, unless something like block corruption in the index or another rarity like that occurs?

Thanks

Tom Kyte

Followup  

February 25, 2004 - 11:33 am UTC

I do not believe in a scheduled rebuild of indexes.


I do believe in

a) determining the root cause of a performance related issue
b) correcting it

and if part of b) is rebuilding an index, so be it. but to rebuild on a schedule? No.

February 25, 2004 - 11:38 am UTC

Reviewer: A reader

Thanks Tom,


Rebuilding online reply

February 25, 2004 - 12:57 pm UTC

Reviewer: Steve from Midwest

Just to clarify one point to your reply to the question:

"What happens to the changes in the journal table if
the rebuild aborts for some reason after i started the index rebuild? Are they lost?"

You reply that they are cleaned up for you... I guess the documentation may say that is the case, but we've run into various issues with both 8i and 9i where the journal table does not get dropped under certain conditions, and future attempts to rebuild the index (or analyze the table) require intervention. In 8.1.7.2, if a session was killed rebuilding an index, we had to go out there and update ind$ to get the index out of "rebuild" status, along with manually dropping the journal table. In 9.2.0.3, if a session is killed, or you ctrl-c out of the rebuild, or you get an unable-to-extend message, you need to manually drop the journal table in order to rebuild it online after the "abort". I'm sure there are other conditions that cause problems, but these are just a couple.

Just want to highlight this feature is a bit buggy if various conditions are hit.

May 14, 2004 - 12:51 pm UTC

Reviewer: A reader

Hi Tom, could you give your opinion please about this paper:
http://www.actoug.org.au/Downloads/oracle_index_internals.pdf

Because I'm still investigating if there are really situations when you need to rebuild the index

Especially about:
"If the index clustering factor is high, an index
rebuild may be beneficial" -- Don Burleson, Inside
Oracle Indexing, December 2003, at
www.DBAzine.com

Thank you

Tom Kyte

Followup  

May 15, 2004 - 11:57 am UTC

I loved it -- already read it. loved it.

perfect.


The one about clustering factor is PARTICULARLY AMUSING isn't it?

the cluster factor is the number of LIOs against the table that would be performed as we range scan through the index. If we are range scanning and the first 15 index elements all point to the same block, that would be 1 LIO. On the other hand, if the 15 index elements all point to DIFFERENT blocks, that would be 15 LIO's...

So, if a table is sorted in the same key order as the index, the cluster factor will be very near the number of blocks in the table.

If a table is not sorted as the index is, the cluster factor will be near the number of rows in the table.


So, tell me -- how could rebuilding an index affect the cluster factor since the cluster factor is a metric based on the TABLE not the index????

How can reorging an index affect a metric that has nothing to do with the structure of the index?

A table rebuild -- sorting the table by the index key columns -- now that would lower the cluster factor of that index, but destroy the cluster factor of the other indexes, since a table can be sorted in only one way...
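Tom's definition can be turned into a toy computation (a simplified model, not Oracle's statistics-gathering code): walk the index in key order and count how often the table block changes.

```python
import random

def clustering_factor(entries):
    """entries: (key, table_block) pairs. Walk them in key order and
    count block changes -- the idea behind Oracle's clustering factor."""
    cf, prev_block = 0, None
    for _, block in sorted(entries):
        if block != prev_block:
            cf += 1
            prev_block = block
    return cf

random.seed(42)

# table stored in key order: 1000 rows, 100 per block -> cf near #blocks
ordered = [(k, k // 100) for k in range(1000)]

# same rows scattered across blocks -> cf near #rows
blocks = [k // 100 for k in range(1000)]
random.shuffle(blocks)
scattered = list(zip(range(1000), blocks))

print(clustering_factor(ordered))    # 10 (the number of blocks)
print(clustering_factor(scattered))  # close to 1000 (the number of rows)
```

Note that nothing about the index's own structure appears in the computation -- only where the table rows live -- which is Tom's point: rebuilding the index cannot change this number.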

I loved that presentation.



Yes, there are times when an index rebuild can be beneficial.
Yes, there are times when an index rebuild can be harmful.
No, there are no times when a scheduled rebuild of all indexes is "good".


Rebuilding an index just "because it might be the cause of some issue" is a fairly "non-professional" thing to do. Sort of like a police officer sticking his hand into a dark room and shooting a gun, hoping to kill the bad guys. Problem is, there are good guys in the room too. You might hit them, you might get the bad guy -- but hit 2 good guys on the way as well.


Let's say you go to the doctor and give them your list of symptoms. Instead of performing a test and diagnosing your case, they say "you know, I've given this medicine to lots of people in the past and it helped some of them. Why don't you try this and tell me what happens."

Would you use that doctor? Or would you want them to be fairly certain of what was wrong before prescribing a treatment -- and not have them prescribe a treatment that, while it works, causes other side effects you would rather not have? (I can fix your broken leg by amputation or by setting it -- which would you pick, given the choice?)







choose your authors carefully ...

May 15, 2004 - 1:03 pm UTC

Reviewer: Alberto Dell'Era from Milan, Italy

My 2 cents from my book/whitepaper addiction experience -- read only information published by reputable authors (who are NOT necessarily the ones who wrote best-selling books).

"Guru suggestions" such as the one above almost destroyed my Oracle career (and especially my motivation!) -- so now I read almost exclusively material by 3 or 4 authors that I've learned to be trustworthy and professional.

Interestingly, the aforementioned trustworthy authors ALL belong to the OakTable network -- but at least two of them had been in my deja query well before the OakTable was born :)

May 15, 2004 - 7:37 pm UTC

Reviewer: A reader

Thanks Tom

Oracle's B-Tree index is always balanced. Always.

August 21, 2004 - 2:12 am UTC

Reviewer: Oracle DBA from Bombay, India

I am a bit more curious about Oracle's implementation of B-Tree indexes. When it's said that "Oracle's B-Tree index is always balanced. Always." -- does it mean that if I have 3 records in the index, then 2 will be the root node and 1 and 3 will be left and right respectively, and that when 4 more records are added to the index, then 4 will "automatically" become the root node with 2 and 6 on its left and right, and so on?
I went through the pdf link given by A reader on May 14, 2004, which described the index structure as sorted linear "leaf" data, with a branch/parent block storing the starting points of its left and right / child (may be more than 2) nodes.
My question is -- will the "tree" built on top of this leaf data be automatically altered so that it will be balanced ALWAYS?
Thanx a ton, Tom almighty.

Tom Kyte

Followup  

August 21, 2004 - 12:08 pm UTC

it means the height of the leaf nodes for all leaves is consistent.

if one leaf node is at level "2", ALL leaf nodes are at level "2"

it would take two IO's to get to ANY leaf node.

it would be unbalanced if some leaf nodes took 3 LIO's, some 4 and some 2.

If one leaf is at level 4, all leaves are.


if there were only 3 index entries, in all probability there would only be a single leaf block (which would also be the root block which would also be the branch block).
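Tom's "all leaves at the same depth" property can be sketched as a quick check (a toy model; a real Oracle branch block holds many keys, not the binary left/right of textbook trees):

```python
def leaf_depths(node, depth=1):
    """Return the depth (blocks read from the root) of every leaf.
    A leaf is modeled as a plain list of keys; a branch block is a
    dict with a 'children' list. Toy model only."""
    if isinstance(node, list):
        return [depth]
    out = []
    for child in node["children"]:
        out.extend(leaf_depths(child, depth + 1))
    return out

# root block pointing at three leaf blocks: every lookup costs 2 reads
idx = {"children": [[1, 2, 3], [4, 5, 6], [7, 8, 9]]}
depths = leaf_depths(idx)
print(depths)                 # [2, 2, 2]
print(len(set(depths)) == 1)  # True -- same cost to reach any leaf
```

A tree where one leaf sat at depth 3 while the others sat at depth 2 would fail this check -- that is what "unbalanced" would mean, and it is the situation Oracle's B*Tree never allows.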



Growth of Index "Tree"

August 21, 2004 - 3:09 pm UTC

Reviewer: Oracle DBA from Bombay, India.

Let me know if my understanding is wrong. When data is added to the base table and an insert requires a new branch block, then rather than creating a child node it will create a peer node, and also a parent node to store the starting points of its child nodes -- i.e., it grows from bottom to top instead of the conventional top-down growth.
For example, consider that each leaf can contain 2 indexed column records with their respective rowids. Then the first 4 records can be accessed with only 1 root/branch node and 2 leaf nodes, like below:

1
3

1 <-> 3
2 4

when 5th row is inserted in the table it will create another leaf and also a peer to the block node and also a parent to the peers like

1
5

1 5
3

1 <-> 3 <-> 5
2 4

with 1 as root node 2 branch nodes and 3 leaf nodes.

Now consider first 8 records inserted into table; skipping 4 which will create an index structure as below.

1
6

1 6
3 8

1 <-> 3 <-> 6 <-> 8
2 5 7 9

which is a complete / fully utilized b-tree. Please let me know if the above demonstrated details are true or not, and if yes, then what will happen if a row with id 4 is inserted in the table.


Tom Kyte

Followup  

August 21, 2004 - 3:33 pm UTC

the branches may have many, many, many entries; this is not a simple "b-tree" with left and right -- this is a fairly complex structure.

for some details, see the CONCEPTS guide (the most important document you'll ever read) or "Expert One-on-One Oracle" -- I cover the structure in there, with pictures and whatnot.

Don Burleson: "New Oracle Myth: Index rebuilding requires careful analysis"

September 07, 2004 - 6:03 am UTC

Reviewer: Jens Bob from Walton Mountains

Hi Tom,

have you read Don Burleson's article on DBAZINE:

http://www.dbazine.com/burleson25.shtml

What is your opinion about his "Myths" and about the whole article?

Bye,

Jens

Tom Kyte

Followup  

September 07, 2004 - 9:17 am UTC

Yes I read it.

Nice graphs. They are pretty.

That sounds like "Forget, what Don wrote"

September 07, 2004 - 9:43 am UTC

Reviewer: Jens Bob from Walton Mountain

I would ask them how he got the title "one of the world’s top Oracle Database experts".

It's not easy to get in-depth knowledge of Oracle if every "expert" tells you different things.

>>>
Donald K. Burleson is one of the world’s top Oracle Database experts with more than 20 years of full-time DBA experience. He specializes in creating database architectures for very large online databases and he has worked with some of the world’s most powerful and complex systems. A former Adjunct Professor, Don Burleson has written 15 books, published more than 100 articles in national magazines, serves as Editor-in-Chief of Oracle Internals and edits for Rampant TechPress. Don is a popular lecturer and teacher and is a frequent speaker at Oracle Openworld and other international database conferences. Don’s Web sites include DBA-Oracle, Remote-DBA, Oracle-training, remote support and remote DBA.
<<<

"experts"

September 07, 2004 - 9:14 pm UTC

Reviewer: Reader from Aus

I'm sure DB is an "expert" based on the definition:

- experts CLAIM things
- scientists PROVE things

I'd much rather be a scientist

Tom Kyte

Followup  

September 08, 2004 - 8:57 am UTC

Yes, I would have personally preferred to see some hard facts in there instead of "I've seen this happen before, not sure why...."

For example:

1. When a index rebuild is combined with a table reorganization (using the dbms_redefinition package). This is especially useful when the data is accessed via index range scans and when the table is re-sequenced into index-key order (using single-table clusters, or via a CTAS with an order by clause).


I would ask some of the following questions:

o when would an index rebuild NOT be combined with a table reorg?

o how many actual case studies has the author done with dbms_redefinition using the brand new orderby_cols feature of 10g in the real world? (in 9i, you could not sort the rows using dbms_redefinition). Are there any serious performance implications of these online operations one needs to be concerned with?

o CTAS is an offline operation -- it cannot allow modification DML to the original table -- so you propose to make my Oracle database unavailable on a recurring basis (note: this is my main issue with scheduled full reorgs. people tend to a) not use online, b) schedule them frequently, c) tell people "well, Oracle cannot do this, Mr. Mgr, so yes, your systems are offline because of the software".)

o why not suggest the permanent solution -- you almost seem to hint sort of at it by mentioning "single table <hash> clusters", but what that would do for data that is accessed "in a range scan" is beyond me. Since "single table" clusters are exclusively hash clusters, and hash clusters do not (by definition) support range scanning by the hash key, I'm left in a state of confusion as to what that reference could be pertaining to.

o but the real solution for data like this would or could be "IOT". If you find that you frequently

a) have to CTAS with order by, or
b) in 10g use dbms_redefinition with the order by option,
c) and rebuild indexes because A and B force you to,

why not use dbms_redefinition (9i or 10g) to rebuild the object into an IOT that will preserve the sorted order naturally for you over time? Do it once, and fix the root cause.

o how do I measure and report to my mgmt the benefits accrued by these expensive, ongoing operations (by expensive, I'm referring to either the scheduled offline rebuild -- CTAS -- or the "not totally for free" implications of an online rebuild)?


Those are the sorts of things I'd need to see for someone to show me "myth busting" I guess.


I'd also look at:

2. When a heavily-updated index is rebuilt. In highly volatile databases in which table and column values change radically, periodic index rebuilds will reclaim index space and improve the performance of index range scans.

so show me? show me how to measure that. Show me WHEN it makes sense (define what you mean by "heavily updated"). Will the space reclamation be permanent, or will the index rapidly get fat again? If my redo jumps by a factor of two or three in the days immediately following this operation -- might there be a tie-in? What is the cost of doing this -- I mean, if the index wants to be "fat" and we make it "skinny", how long till it gets fat again? And, to carry the analogy to an extreme -- is making an index skinny/fat/skinny/fat over and over again as bad for its overall health as it is for humans who bounce their weight back and forth frequently (quite stressful for the body)? To what end is this magic "space reclamation" if the index is just going to need the space again next week (I mean, it isn't like we are going to reuse that space for something else, right?)?

Actually, a heavily modified index -- where the entries fly around -- would be one I don't want to rebuild. That index took lots of work to get to its relatively "fluffy" state, with just enough whitespace in the interior to accommodate the movement of key entries *WITHOUT EXPENSIVE INTERNAL INDEX OPERATIONS*.


<quote>
In the real-world, Oracle professionals will occasionally schedule index rebuilds during off-hours, knowing that the worst possible scenario is that there is no space reclamation or performance improvements.
</quote>

"knowing that the worst possible scenario includes"

o larger than normal redo generation in the days to come.

o increased (yes, increased) space utilization (how so? ask yourself: when does pctfree come into play for an index, and what might pctfree do to the well-packed right-hand side of an index?)





I LOVE the index article he points to, contains awesome facts I was never aware of till now, such as:

o Oracle indexes are not self-balancing.

o When a row is deleted from a table, the Oracle database will not reuse the corresponding index space until you rebuild the index

o Therefore, indexes are always growing and can become very fragmented over time

o This node could spawn a fourth level without the other level-three nodes spawning new levels. That makes the index unbalanced.

o An index could have 90 percent of the nodes at three levels, but excessive splitting and spawning in one area of the index with heavy DML operations could make nodes in that area to have more than three levels.


and oh, it gets better as it goes on. Sage advice such as "del_lf_rows should be less than 20% of the total" and "if index height >= 4, rebuild", all under the heading of "Amazing Results".


I would encourage everyone to take a peek at

http://www.actoug.org.au/Downloads/oracle_index_internals.pdf

it is really nicely presented and even includes a number or two.

Expert ?

September 08, 2004 - 8:49 am UTC

Reviewer: A reader

When experts claim things, they're usually correct.

When Mr Burleson claims things, they're usually wrong.

The person who wrote that article is not an Oracle expert, no matter the definition.

Index Rebuilds - Proof

September 08, 2004 - 9:29 am UTC

Reviewer: Tony Andrews from London, UK

In another of his own articles that he links to -- http://www.dbazine.com/burleson18.shtml -- Don Burleson says:

"To date, none of the world’s Oracle experts has determined a reliable rule for index rebuilding, and no expert has proven that index re-builds “rarely” help."

Yet again and again, he fails to prove that they DO help. All he ever offers is anecdotal evidence like "I have seen cases in which performance radically improves after an index rebuild". He even admits that the perceived improvement may be no more than a "placebo effect".

Surely if someone is selling medicine that claims to cure baldness, the onus is on him to prove it works, not on others to prove that it does not? After all, however many benchmark examples someone like you could come up with that show no improvement, he can still say "ah, but I have seen other cases where it did, so there".

Surely it should be possible for Mr Burleson to construct a test case that proves his point? Assuming it CAN be proved, that is...

Tom Kyte

Followup  

September 08, 2004 - 10:29 am UTC

http://www.computerworld.com/departments/opinions/sharktank/0,4885,95744,00.html

someone emailed me that link this morning.

somehow, just somehow, it seems appropriate.

that ACTOUG presentation looks impressive....

September 08, 2004 - 11:45 am UTC

Reviewer: Mike Stone from Liverpool, UK

...well argued, with examples to back up the assertions. I've taken a copy for closer study.

In passing - and by contrast - what do you make of figure 2 in DB's DBAzine article!?

Leaving aside the meaningless reference to the "second derivative" of an undefined mathematical function "f(x)", in what way does the graph illustrate how the optimal setting for db_cache_size is arrived at?

Tom Kyte

Followup  

September 08, 2004 - 1:29 pm UTC

the only thing that graph shows is the estimated physical IO that would be performed given a cache size of X.

Why is that point the "optimal" point? It seems that if I had 50 gigawads of memory that I wasn't using, the optimal point would be further left?

At the end of the day -- there are many ways to fix problems. The "automatic" stuff looks at metrics and, based on rules, says "go do this, add RAM". Great, perfect. What if you don't have it? What if you could tune a query? You know what is missing in that picture -- on that page? Perhaps the SQL tuning session that says "hey, add an index and your IO on this query will go from 1,000,000 to 4". Might that -- just might that -- perhaps, maybe, slightly impact your performance?

For example, take a look at:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:9843506698920#23327315081899

No matter HOW WELL the database cached my data -- no matter how well or fast -- the "solution" is the last query. The problem that could be fixed a little bit (not all of the way, just a nudge to the left) was a full scan of a table. We could make that a fast full scan of an index (better). The real answer -- look at the problem and fix it.

Now, the pretty graphic would be totally different wouldn't it.

Yes, playing with the buffer cache can have a material effect on response time.

Yes, playing with the algorithms -- the implementation -- is by far going to have the most MASSIVE impacts.



And another interesting bit of food for thought is

"If the machine can do this (ratio based tuning), do I as a human being really need to also be doing it -- or is the machine better at it than I am. Perhaps I as a human being can spend my time adding value elsewhere, doing the things that transcend what the machine can figure out all by itself"



avoid that TOP 'Expert'

September 08, 2004 - 11:39 pm UTC

Reviewer: Winston

I have recommended that people read Tom's books and the Oracle docs instead.

Liked the computerworld article

September 10, 2004 - 10:45 am UTC

Reviewer: A reader

that was funny!

Tom Kyte

Followup  

September 10, 2004 - 10:45 am UTC


Whats the matter with the experts

September 15, 2004 - 5:39 pm UTC

Reviewer: subbu from MN, USA

I was reading http://searchoracle.techtarget.com/tip/0,289483,sid41_gci1006334,00.html?track=NL-93&ad=491728
and thought of looking for what Tom has to say on this, because somehow I was getting the feeling that it talks about the tuning help I refer to at this asktom site, so I searched for "Donald K. Burleson" in asktom.oracle.com.
I quickly got a reference to this one article.
What is right? And what's the matter with the experts?
I'm left confused.
BTW -
the http://www.computerworld.com/departments/opinions/sharktank/0,4885,95744,00.html article helps in identifying the air-conditioning problem my office room is facing.

Tom Kyte

Followup  

September 15, 2004 - 9:03 pm UTC

it would have been very interesting if DKB's article of "tips" actually provided useful information like "how to do this stuff".

I found nothing "actionable" in that article (searchoracle) -- no advice on how to actually *do anything*. Just rambling on as to how a scientific approach (actually discovering what the problem is) is "bad". It seems to boil down to

o increase your buffer caches
o add cpus
o upgrade your network
o buy a new disk subsystem

rather than

o diagnose the issue, find the root cause
o fix it.

for example he says:

Fallacy: The sole purpose of Oracle tuning should be to reduce logical I/O

I'm not aware of anyone that has said "the sole purpose". When asked "hey, I want to tune, where should I start", many reasonable people say "well, look for the high load SQL, that would be SQL with lots of consistent gets/ execution. Lets look at seeing if perhaps an index, the use of analytics, better physical design, an IOT, a materialized view might help"

He says:

"Accessing Oracle data via the data buffers is at least 10x faster (sometimes hundreds of times faster) than disk access, so one should not"....

throw more RAM at it. Me, adding RAM is the last step after diagnosing the issue. Be a pity to buy that new airconditioning unit if all you needed was cardboard :)



He says:

Fallacy: Hourly tuning averages are meaningless

which I believe to be true as far as *tuning* goes -- an hourly average (a 15/30 minute observation taken on the hour -- what's wrong with that?) is useless as a *tuning* device, but extremely useful as a method to see "what's changed" (that's what we use it for). I think I wrote in both of my books that you need to take statspacks "in sickness and in HEALTH"; trying to use them to tune a system *without having them from before, when the system was good* is, well, like reading tea leaves.

but if you had them from when things are good -- awesome, compare them, trend them, see what "changed"

tuning -- no
finding out "something has changed and this is what changed" -- yes.
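That "compare them, trend them" idea can be sketched mechanically. Below is a toy illustration in Python -- the statistic names and numbers are made up for the sketch, not taken from any real Statspack report: given a snapshot from when the system was healthy and one from now, rank the statistics by relative change to surface "what changed".

```python
# Toy sketch: compare two snapshots of system statistics to see "what changed".
# The statistic names and values below are hypothetical, not from a real report.

def diff_snapshots(baseline, current):
    """Rank statistics by how much they moved relative to the healthy baseline."""
    changes = []
    for stat, old in baseline.items():
        new = current.get(stat, 0)
        ratio = (new / old) if old else float("inf")
        changes.append((stat, old, new, ratio))
    return sorted(changes, key=lambda c: c[3], reverse=True)  # biggest movers first

healthy = {"consistent gets": 250_000, "physical reads": 1_200, "parse count (hard)": 40}
sick    = {"consistent gets": 1_600_000, "physical reads": 1_300, "parse count (hard)": 45}

for stat, old, new, ratio in diff_snapshots(healthy, sick):
    print(f"{stat}: {old} -> {new}  ({ratio:.2f}x)")
```

Note this tells you *what moved* (here, consistent gets blew up), not *why* -- exactly Tom's point: good for "something changed and this is what", not for tuning by itself.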



He says:

Fallacy: Only transaction-level tuning will be successful

.... System-level tuning is a reality. Some tuning professionals treat the system-wide issues first and then drill-in to the sub-optimal PL/SQL and SQL. Other shops that do not have the large amount of funds required to pay for detailed application-level changes (tweaking application code & SQL). Here are some examples of real-world system-level Oracle tuning activities:

o Replace sub-optimal RAID - (he says to replace your disk subsystem, that'll save tons of money won't it, this is followed up by another bullet point suggesting raid 0+1 or even that inexpensive solid state disk)

o Change sub-optimal optimizer parameters - Resetting a sub-optimal setting for several optimizer parameters (optimizer_mode, optimizer_index_cost_adj) can double the performance of some systems. Changing optimizer_mode=rule to optimizer_mode=first_rows (or vice-versa sometimes!) can influence the performance of every SQL statement in the application (so, any ideas for us here, any advice? or should we just guess -- close our eyes and shoot the gun into the other room and see if we accidentally hit any bad guys? Thing is, the last sentence sort of understates it -- these settings will influence the performance -- not can, will)

o Fix poor-quality optimizer statistics - The Oracle cost-based SQL optimizer (CBO) will only make good decisions if it has good-quality samples from the schema objects. (i'm curious -- how do you KNOW you have bad stats at the system level, unless you've identified bad sql that has really off cardinalities -- that is what tells you "bad stats". i'm really curious how "system level stats pack" would point to "stats are bad")

o Use a faster network (hmmm, no comment on that one. I always say "use the fastest of everything you can get" -- but sure, we'll throw out our network infrastructure and replace it -- tomorrow. It'll definitely be faster and cost less than, oh, I don't know, looking at an application?)

o Get faster processors - Faster (or more) CPUs may greatly improve performance of Oracle systems that are constrained by CPU (right in that "cheaper, faster, better" mantra -- dig it. Oh, and you can just point, click and ship that, right? And -- you've never seen a 16 cpu machine get upgraded to 32 and simply "fall over"? It was 100% utilized at 16 -- massive latch contention; guess what happened when we gave it more cpu.... maybe we should have fixed the hard parse problem instead -- must be sounding like a broken record, it all comes back to parsing, doesn't it?)


He says:
Fallacy: If an Oracle tuning metric has an exception, it is not valid

and we've said "a metric by itself is not always useful". Just because it is 99% does not mean *GOOD*. 50% might mean "bad", but a high value does not mean "good" -- it just means "maybe ok" -- you have to look elsewhere if you want to see if you are "good".


In a tuning tips sort of thing, it would have been nice to have something actionable, something with a number or two. And if one is going to point to "them" -- one might want to point to referenceable quotes, sort of in the manner of

http://www.actoug.org.au/Downloads/oracle_index_internals.pdf

(excellent paper, love it more each time I read it)

Index Fragmentation ?

September 16, 2004 - 6:50 am UTC

Reviewer: Nilanjan Ray from India

Hi Tom,

Excellent discussions. Shakes off quite a few deep-rooted myths. I don't mean to irritate you further with my questions, but an explanation/correction in your usual witty manner :) would be most helpful.

1. An indexed column undergoing heavy deletes would actually leave a large value for DEL_LF_ROWS in INDEX_STATS, leading to a large number of sparsely populated leaf blocks. Therefore during an index range scan it will involve more IO against the index, decreasing performance (this is exactly where I need your opinion). After a rebuild
DEL_LF_ROWS drops to zero, reducing the IO on the index and thereby improving performance, though temporarily (eventually, as you rightly say, the index would assume its own *steady state*).
2. Redo generation increases after index rebuild. Is it due to the fact that "branch block splitting" occurs in the process of attaining its *good old steady state*?
3. People have been using DEL_LF_ROWS and LF_ROWS to measure index fragmentation, eventually leading to an index rebuild. If rebuild is of little or no use in the long run, can you point out what useful information/analysis/conclusion we can draw out of the DEL_LF_ROWS column in INDEX_STATS?

Thanks in advance

Regards
Ray

Tom Kyte

Followup  

September 16, 2004 - 9:09 am UTC

1) if you heavily delete -- and never ever insert or update, this MIGHT hold water - if we actually always left deleted leaf rows there (we don't).

consider this table:

drop table t;
create table t ( x int, y char(20) default 'x' );
drop sequence s;
create sequence s;
alter table t add constraint t_pk primary key(x);
                                                                                                                    

Ok, I set up one pro*c program doing nothing but this:

void process( int rows )
{
int  i;
                                                                                                                    
    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
    for( i = 0; i < rows; i++ )
    {
        exec sql insert into t (x) values ( s.nextval );
        exec sql commit;
        if ( i%1000 == 0 )
            printf( "inserted row %d\n", i+1 );
    }
    exit(0);
}


and another to do this:

void process( int rows )
{
int  i;
                                                                                                                    
    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
    for( i = 0; i < rows; i++ )
    {
        exec sql delete from t where x = (select min(x) from t);
        exec sql commit;
        if ( i%1000 == 0 )
            printf( "deleted row %d\n", i+1 );
    }
    exit(0);
}


Now I let it run and run (started the insert ahead of the delete) and run and run (until i got bored).  After a couple hundred thousand inserts and a couple hundred thousand deletes -- I ctl-c'ed each one.


ops$tkyte@ORA9IR2> analyze index t_pk validate structure;
 
Index analyzed.
 
ops$tkyte@ORA9IR2> select del_lf_rows, lf_rows from index_stats;
 
DEL_LF_ROWS    LF_ROWS
----------- ----------
         15       2687
 

and whether a large DEL_LF_ROWS would affect your IO at all really, really depends on how you use the index -- on how the index is hit.

What if you 

a) remove old entries (low sequence numbers)
b) were always interested in new entries (high sequence numbers) 
c) so therefore you query:
   select /*+ first_rows */ * from t order by X desc;

and range scanned that way. Well, the high end of the index is going to be very packed -- and you never "page down" into the low end (you show the user the N newest entries and they page down and page down -- you probably never get to the "low end" of the index where the whitespace might be).

2) yup, exactly.

3) If I were to observe over time that DEL_LF_ROWS was always *increasing* -- note "over time" "increasing" -- i would consider that a thing to take interest in.  

If "over time", it increased to say 50,000 and then stayed there, fluctuating up and down a couple of thousand even -- i would do nothing -- even if the number of leaf rows was 50,000 as well.  why ?  well, obviously this index wants to be that fat, with that much white space and <b>no matter how many times I put it on a diet...... it is going to get back to 50,000 deleted and 50,000 not</b>




Just apply some thought to what you are doing day to day, think about "why", think about what you are trying to accomplish. Have a metric or two to validate it. It is always nice to be able to show what really happens.
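The first experiment above (insert ascending keys at the right edge, delete the minimum at the left edge) can be mimicked with a toy model. This is a deliberate simplification -- the fixed leaf capacity and the "a fully emptied leaf goes back on the freelist" rule are assumptions standing in for Oracle's actual space management -- but it shows why DEL_LF_ROWS stayed tiny in that test:

```python
# Toy model of the first experiment: insert ascending keys at the right edge
# of the index while deleting the minimum key from the left edge.
# The fixed leaf capacity and the recycling rule are simplifying assumptions.

BLOCK_CAPACITY = 100   # assumed entries per leaf block

class ToyIndex:
    def __init__(self):
        self.blocks = []   # each block is a list of [key, deleted_flag] entries

    def insert(self, key):
        # keys arrive in ascending order, so inserts always hit the last block
        if not self.blocks or len(self.blocks[-1]) >= BLOCK_CAPACITY:
            self.blocks.append([])            # "block split" at the right edge
        self.blocks[-1].append([key, False])

    def delete_min(self):
        # mark the leftmost live entry deleted; recycle a fully emptied leaf
        for block in self.blocks:
            for entry in block:
                if not entry[1]:
                    entry[1] = True
                    if all(e[1] for e in block):
                        self.blocks.remove(block)
                    return

    def del_lf_rows(self):
        return sum(e[1] for b in self.blocks for e in b)

idx = ToyIndex()
for key in range(50_000):
    idx.insert(key)
    if key >= 1_000:          # let the insert run ahead of the delete
        idx.delete_min()

# deleted entries can only pile up in the one partially swept leaf at the left,
# so the count stays below a single block's capacity
print(idx.del_lf_rows())
```

Because a clean left-to-right sweep completely empties each leaf before moving on, emptied leaves keep getting recycled and the deleted-entry count never accumulates -- matching the tiny DEL_LF_ROWS (15) seen after hundreds of thousands of inserts and deletes.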

Hopelessly stumped

September 16, 2004 - 8:43 pm UTC

Reviewer: ARU from NZ

Hi Tom,
Great thread this. Never will I rebuild indexes needlessly.
BUT yesterday I got a call from a client who says batch jobs which were taking minutes are now taking hours, and he insists on a script with which he can check index fragmentation. He insists that no application changes have been made at all.
What else can be the reason behind the batch jobs suddenly taking so long?
Tom, don't laugh, but the database is Oracle7 and 200GB.
It uses the rule based optimiser, thus the client's emphasis on indexes.
Please do guide me as you do always,
Thanks heaps,
ARU.

Tom Kyte

Followup  

September 17, 2004 - 8:05 am UTC

this is going to be a "hindsight is 20/20" sort of thing.

In order to determine "whats different", we would need to know "what was normal".

Unless you have tkprofs that profile these batch jobs, that show what they used to do -- it'll be hard to know.

Ok, so how would you see this?  I rigged up an example:

drop table t;
create table t ( x int, mod_x int, y char(20) default 'x' );
drop sequence s;
create sequence s;
alter table t add constraint t_pk primary key(x,mod_x);


we are going to populate this table like this:


void process( int rows )
{
int  i;
                                                                                                
    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
    for( i = 0; i < rows; i++ )
    {
        exec sql insert into t (x,mod_x) values 
                 ( s.nextval, mod(s.nextval,100) );
        exec sql commit;
        if ( i%1000 == 0 )
            printf( "inserted row %d\n", i+1 );
    }
    exit(0);
}



so, X will be an increasing sequence and mod_x will run 0..99, 0..99, 0..99 and so on.


The carefully crafted delete will be:

void logon()
{
EXEC SQL BEGIN DECLARE SECTION;
varchar   oracleid[31];
EXEC SQL END DECLARE SECTION;
                                                                                            
    strcpy( (char *)oracleid.arr, "/" );
    oracleid.len = strlen((char *)oracleid.arr);
                                                                                            
    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
                                                                                            
    EXEC SQL CONNECT :oracleid;
    exec sql alter session set events '10046 trace name context forever, level 12';
}
                                                                                            
                                                                                            
void process( int rows )
{
int  i;
                                                                                            
    EXEC SQL WHENEVER SQLERROR DO sqlerror_hard();
    for( i = 1; i < rows; i++ )
    {
        exec sql
            delete from t where (x,mod_x) =
            (select x,mod_x
               from (select x, mod_x from t order by x, mod_x)
              where rownum = 1
                and mod_x <> 1 );
        exec sql commit;
        if ( i%1000 == 0 )
            printf( "deleted row %d\n", i+1 );
        if ( i%10000 == 0 )
        {
            exec sql commit work release;
            logon();
        }
    }
}


I did this in order to get a plan like this:

ops$tkyte@ORA9IR2> explain plan for
  2              delete from t where (x,mod_x) =
            (select x,mod_x
               from (select x, mod_x from t order by x, mod_x)
              where rownum = 1
                and mod_x <> 1 );
  3    4    5    6
Explained.
 
ops$tkyte@ORA9IR2> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
 
--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | DELETE STATEMENT     |             |       |       |       |
|   1 |  DELETE              | T           |       |       |       |
|*  2 |   INDEX UNIQUE SCAN  | T_PK        |       |       |       |
|*  3 |    COUNT STOPKEY     |             |       |       |       |
|   4 |     VIEW             |             |       |       |       |
|*  5 |      INDEX FULL SCAN | T_PK        |       |       |       |
--------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   2 - access(("T"."X","T"."MOD_X")= (SELECT
              "from$_subquery$_002"."X","from$_subquery$_002"."MOD_X" FROM
              (SELECT /*+ */ "T"."X" "X","T"."MOD_X" "MOD_X" FROM "T" "T" WHERE
 
 
              "T"."MOD_X"<>1) "from$_subquery$_002" WHERE ROWNUM=1))
   3 - filter(ROWNUM=1)
   5 - filter("T"."MOD_X"<>1)
 
Note: rule based optimization
 
23 rows selected.




<b>so, this delete will read the index -- scanning to find the first X,MOD_X such that MOD_X is not 1 -- and then delete it.  We are leaving behind in this index structure all of the x,mod_x's with mod_x = 1 (not a clean sweep as before).

Also, I'm logging out every 10k rows and logging back in and turning trace on again.

Here are the first 4 tkprofs:
</b>


delete  from t
where
 (x,mod_x)=(select x ,mod_x  from (select x ,mod_x  from t  order by x,mod_x )
   where (rownum=1 and mod_x<>1))
                                                                                                                                                     
                                                                                                                                                     
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000      3.34       3.74          0     246225      40457       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001      3.34       3.74          0     246225      40457       10000
                                                                                                                                                     
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 158
                                                                                                                                                     
Rows     Row Source Operation
-------  ---------------------------------------------------
  10000  DELETE  (cr=20008 r=0 w=0 time=1411426 us)
  10000   INDEX UNIQUE SCAN OBJ#(30278) (cr=20000 r=0 w=0 time=69076 us)(object id 30278)
********************************************************************************
delete  from t
where
 (x,mod_x)=(select x ,mod_x  from (select x ,mod_x  from t  order by x,mod_x )
   where (rownum=1 and mod_x<>1))
                                                                                                                                                     
                                                                                                                                                     
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000      4.71       5.19          0     681712      40479       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001      4.71       5.19          0     681712      40479       10000
                                                                                                                                                     
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 158
                                                                                                                                                     
Rows     Row Source Operation
-------  ---------------------------------------------------
  10000  DELETE  (cr=20011 r=0 w=0 time=1443698 us)
  10000   INDEX UNIQUE SCAN OBJ#(30278) (cr=20000 r=0 w=0 time=126472 us)(object id 30278)
********************************************************************************
delete  from t
where
 (x,mod_x)=(select x ,mod_x  from (select x ,mod_x  from t  order by x,mod_x )
   where (rownum=1 and mod_x<>1))
                                                                                                                                                     
                                                                                                                                                     
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000      6.36       6.38          0    1130588      40484       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001      6.36       6.38          0    1130588      40484       10000
                                                                                                                                                     
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 158
                                                                                                                                                     
Rows     Row Source Operation
-------  ---------------------------------------------------
  10000  DELETE  (cr=20004 r=0 w=0 time=1460916 us)
  10000   INDEX UNIQUE SCAN OBJ#(30278) (cr=20000 r=0 w=0 time=91020 us)(object id 30278)
********************************************************************************
delete  from t
where
 (x,mod_x)=(select x ,mod_x  from (select x ,mod_x  from t  order by x,mod_x )
   where (rownum=1 and mod_x<>1))
                                                                                                                                                     
                                                                                                                                                     
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000      7.54       8.39          0    1579542      40521       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001      7.54       8.39          0    1579542      40521       10000
                                                                                                                                                     
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 158
                                                                                                                                                     
Rows     Row Source Operation
-------  ---------------------------------------------------
  10000  DELETE  (cr=20016 r=0 w=0 time=1530803 us)
  10000   INDEX UNIQUE SCAN OBJ#(30278) (cr=20000 r=0 w=0 time=117590 us)(object id 30278)



See the LIOs go up from 

  246k
  681k
1,130k
1,579k

and see the increasing CPU times.  Now I rebuilt the index and restarted the "batches":

delete  from t
where
 (x,mod_x)=(select x ,mod_x  from (select x ,mod_x  from t  order by x,mod_x )
   where (rownum=1 and mod_x<>1))
                                                                                             
                                                                                             
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000      4.19       5.26         15     192185      40502       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001      4.19       5.27         15     192185      40502       10000


<b>and it went back down. Funny thing is -- the "classic method for finding fragmented (whatever that means) indexes" would totally miss this</b>

ops$tkyte@ORA9IR2> analyze index t_pk validate structure;
Index analyzed.
 
ops$tkyte@ORA9IR2> select lf_rows, del_lf_rows from index_stats;
 
   LF_ROWS DEL_LF_ROWS
---------- -----------
      6505         194

<b>not many del_lf_rows here eh?</b>

So, ask yourself, do we have any degenerate cases like this -- a sweeping left to right index that leaves stragglers behind?


Now, to be fair, the other example above would show increased LIO's over time -- at a much much much much slower rate however (it took over 10 times the work to get the LIO's as high as we saw jump the first iteration here) -- but again, the "classic approach" (look for del_lf_rows) would have been meaningless!  utterly and totally.



So, look for slow by slow code that scans the left hand side, to find the first row to process, but leaves stragglers behind.   
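The "stragglers" effect can be mimicked the same way. In this toy sketch (again a simplification, not Oracle's actual leaf-block management), deleted entries vanish outright -- so a DEL_LF_ROWS-style counter sees almost nothing, just as the tkprofs showed -- yet every key where key % 100 == 1 is skipped and pins its leaf block in place, so the left-edge scan that finds the next row to delete keeps getting more expensive:

```python
# Toy model of the second experiment: a left-to-right sweep that skips every
# key where key % 100 == 1, leaving one "straggler" behind per leaf block.
# Deleted entries are removed outright here (a simplification), so a
# DEL_LF_ROWS-style counter stays near zero -- yet the scan cost climbs.

BLOCK_CAPACITY = 100

class ToyIndex:
    def __init__(self):
        self.blocks = []              # each block is an ordered list of keys

    def insert(self, key):
        if not self.blocks or len(self.blocks[-1]) >= BLOCK_CAPACITY:
            self.blocks.append([])
        self.blocks[-1].append(key)

    def delete_min_skipping_stragglers(self):
        """Delete the leftmost key with key % 100 != 1; return entries scanned."""
        scanned = 0
        for block in self.blocks:
            for key in block:
                scanned += 1
                if key % 100 != 1:
                    block.remove(key)
                    if not block:     # only a fully emptied leaf is recycled
                        self.blocks.remove(block)
                    return scanned
        return scanned

idx = ToyIndex()
for key in range(20_000):
    idx.insert(key)

first_cost = idx.delete_min_skipping_stragglers()
for _ in range(15_000):
    idx.delete_min_skipping_stragglers()
later_cost = idx.delete_min_skipping_stragglers()

# same statement, same plan -- but it now wades through the straggler blocks
print(first_cost, later_cost)
```

The cost of an identical delete grows from one entry scanned to hundreds, because each swept-over block keeps one live row and is never recycled -- the same LIO creep (246k, 681k, 1,130k, 1,579k) the tkprofs showed, invisible to a DEL_LF_ROWS check.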

Love all , Trust Few , Follow One

September 17, 2004 - 1:09 am UTC

Reviewer: Nilanjan Ray from India

Great example and demonstration as usual. Nothing could have been better.

Follow one -- that's you :). Others in the forum would agree too.

Obviously my trusted list would include greats like: Cary Millsap, Steve Adams, Jonathan Lewis, Craig Shallahamer.

Regards
Ray

Tom Kyte

Followup  

September 17, 2004 - 8:37 am UTC



I disagree, my scope of knowledge is actually quite limited in some respects. I defer to others for many things. I'm a developers DBA (if they are willing to listen), there are people who are much better at certain things....

follow your instincts.
follow that which you can prove.


It is like the difference between Astronomy, the science of the heavens, and Astrology, the art of pretending that if mars is properly aligned with jupiter your cache hit ratio will be superb today.

Follow Astronomers.

Astronomy- Astrology-mars-jupiter-cache hit ratios

September 20, 2004 - 3:34 am UTC

Reviewer: Nilanjan Ray from India

This is definitely Mr.*Modest* Kyte speaking.

"I'm a developers DBA..." People in the industry simply undermine the value/knowlege/capability of Development DBA unless you are someone like "Mr. Kyte"...True..and 100%factual. The server room dba (if I may call production dba-pls I am not generalizing) are looked upon with awe..Look I can change optimizer_* parameters...so and so parameters that can make make your application attain *escape velocity*. PL/SQL and SQL are just not their cup of tea. These simple things are developer's stuff. Then when one fine morning when one transaction/SQL simple refuses to run the dba would remotely loging....compute amazing mathamatical equations...tweak a few parameters. Even then after a couple of days the transaction/sql starts using even more temp segments. Then when this poor development dba takes the call, he is in no hurry-finishes his cup of coffee, does'nt need to tweak parameters..knows the database, system load, knows the access path, understands what went wrong from the tkprof output. Uses *a simple editor* to optimize the code, instruments it, load tests it and releases it.

These are real life scenarios. People might agree, disagree.

And Mr. Kyte, I was not aware of this area of interest of yours "Astronomy- Astrology-mars-jupiter-cache hit ratios" ;).

I would start looking at my cache hit ratios again when they fall below 99% "while mars is properly aligned with jupiter". Please provide me with a best practice link for this. :-D

Ray

Tom Kyte

Followup  

September 20, 2004 - 8:19 am UTC

It is true, I've seen many cases such that when mars and jupiter are well aligned, and solar flares are low (i'm pretty sure this is important, you know how solar flares can affect the computing speed of your cpu and the transfer rates of even solid state drives) -- end users report better response time. Might be the placebo effect, but hey -- who cares. Just align mars and jupiter and things are good.

Now, how do I get them to stop moving away, out of alignment -- that's the tricky part. Need more coffee to think about that one.

one more time

September 22, 2004 - 11:57 pm UTC

Reviewer: subbu from MN,USA

hey tom.. I see one more article
http://www.dba-oracle.com/art_dbazine_idx_rebuild.htm
you've got to read that.. there is some proof there too..
you know, I've also seen cases where analyzing the table was not improving anything, and then analyzing the indexes -- no change.. what helped finally was dropping the stats, rebuilding the indexes, and analyzing the table.
I'm sorry I don't have the proof for this right now, as this happened long back. Maybe that index was requiring a rebuild, which I might have come to know at the end of a detailed analysis..
but at the same time -- incidents like this will always make me think: why not try rebuilding indexes when there is a problem? If the problem is not fixed, then I can move on to detailed analysis. Of course there is no harm in doing this, right?
Now I read what I typed and was wondering how I wrote like this even after reading the articles on "extended sql tuning". The reality is, when there is a performance problem we are under a lot of pressure to do something and make it perform better, and we choose the easy hits if we know they won't take much time.

The case which I quoted above where the index rebuild helped -- I wouldn't have done that if I had known that index rebuilding was going to take more time, in which case I would have done detailed analysis before even thinking of rebuilding..

So can I say that if we have a problem which needs an immediate fix, then it's ok to try an index rebuild, though it may or may not fix the problem?

Do I sound confused?

Tom Kyte

Followup  

September 23, 2004 - 4:18 pm UTC

one time, after rebooting the system and flushing the network -- while looking outside, during a rainstorm, on a tuesday -- before 3pm (very very important, this doesn't apply after 3pm -- and remember, that is EAST COAST time, that is relevant) -- stuff went faster.


when your kid gets sick -- would you like your Dr. to say "well, this medicine worked really good on Sally and she had a 101 degree fever too sooooooo"

You should be able to look at the sql using that index (it is all there) and say "you know, that LIO is really high".

have I told you about the guys who said "reorgs are always good, they never hurt..." -- wow, can a reorg kill you if it has unanticipated side effects...

I'd rather have at least a clue before I tried anything -- at least a clue.

experience?

September 24, 2004 - 4:16 am UTC

Reviewer: A reader

hi tom,
I am a great fan of YOURS, and you always said experience is more important than theoretical learning.
is it not justifiable that we can take his (Don's) point,
as he has a proven track record?

thanks

Tom Kyte

Followup  

September 24, 2004 - 11:01 am UTC

Experience is more relevant than "i know everything I know from books" -- proof: so many books are wrong in so many ways.


If you look at my book, my writings, I show when and where

o things apply (they help)
o things have no application (they do nothing but waste your time)
o things do not apply (they do more harm than good)

Because someone has been talking about Oracle for many years doesn't mean squat.

And then you have the conundrum of "A (who is expert) says 'x' is true", "B (who is expert) says 'x' is false"


Now what?


I believe in numbers.
I believe in proofs.
I believe in showing.

I believe everything has a "good", "bad" and an "ugly"

So, tell me -- go to that page (the fallacies one) and show me one ACTIONABLE item -- a nugget of help, of truth -- that you can take home tonight and apply.

Besides the ones I pointed out he suggested which were:

o get more ram
o buy more cpu
o buy faster cpus
o upgrade your network
o trash your disk array
o heck, just get solid state



Give me something I can use -- all I see there is a bit of bashing "oh, those guys that say you should actually look for a root cause, gosh, I'm far too busy for that. Just buy more hardware -- it'll fix it right up"

I'll stick with the cardboard on the thermostat thank you very much. If you read the computerworld article -- the analogy is awesome. There you have people thinking "heat issue", but it is a configuration issue (cold air, blown on the thermostat). The fix -- buy another AC unit of course. The kicker -- it probably would have "apparently solved the problem" -- but not because they needed another AC, but because this AC's thermostat wouldn't have been placed right above a vent blowing cold air on it.

It is wonderful to fix stuff by accident, ain't it. Too bad the fix cost so much.


This isn't about experience. You can have 20 years of "experience" in something and still not be "expert" (i'm not saying anything about anyone here -- i'm just saying "because you've been doing something for a long time doesn't mean you know how to do it best"). Look around you -- are there people you work with, that have been doing it longer, that you don't think "get it"? I'll bet you there are.


Given two candidates in an interview, I'll tend towards experience -- but hey, I'm STILL GOING TO INTERVIEW them.


Any Extraterrestrial Intelligence listening out there....

September 24, 2004 - 7:28 am UTC

Reviewer: Nilanjan Ray from India

Hi Mr. Tom,

Just remembered that "when mars and jupiter were well
aligned, and solar flares are low" we had UFO's landing on earth as well. But then when the UFO's took off from earth, mars and jupiter fell out of alignment once again and users started reporting poor response times. Mr. tom you should be glad that these people at http://setiathome.ssl.berkeley.edu/ are trying to get the ET's back on earth once again to solve the mysteries of mars-jupiter-cache hit ratios.

Ray

just bought his book

October 11, 2004 - 6:11 am UTC

Reviewer: A reader

hi tom
i have just bought his book on creating a self-tuning database and found it good, but just before that i bought your book
Effective Oracle by Design.
your book is like a holy book for me.
have you read his book? what are your reviews?
should i follow his book, as we people depend on the recommendations of people like you.

Tom Kyte

Followup  

October 11, 2004 - 8:12 am UTC

I have an idea.

Use information that is factual.

Use information that can be proven.

Use information that not only comes with "do this" advice -- but also comes with "but watch out, it doesn't apply in this case" sort of information (eg: nothing is 100% true, nothing is 100% false -- things are not black and white).

Use information that the person giving you the information has convinced you with examples, test cases, *numbers* to be accurate.


Above all, use common sense :)

(i would not suggest using multiple block size tablespaces as a tuning device. i would not suggest rebuilding all of your indexes with a 32k blocksize. i would suggest benchmarking all advice that was not supplied with a benchmark heavily!!!!!)

why ?

October 11, 2004 - 8:19 am UTC

Reviewer: A reader

(i would not suggest using multiple block size tablespaces as a tuning device.
i would not suggest rebuilding all of your indexes with a 32k blocksize. i
would suggest benchmarking all advice that was not supplied with a benchmark
heavily!!!!!)

why ? any specific reason

thanks

Tom Kyte

Followup  

October 11, 2004 - 8:46 am UTC

if you want to "fine tune the keeping/recycling" of objects -- you use the keep/recycle pools.

and most importantly -- because I haven't seen numerical proof of it "being better" (yes, i've seen the 'proof' that LIO's go down -- but, umm, that is sort of obvious, if the index block is 32k instead of 8k -- one might expect that). I only see it as being harder to manage, more to think about.


so, someone (maybe that author?) should do a technical proof -- not "well, i've seen it be good, I think". It should be easy (or not).

But be sure to include "when this isn't good". I can think of one -- a large table with an index that retrieves 1-100 or so rows at a time. There, I can think of how the 32k blocksize on an index would very likely be detrimental to my performance. I should only need 1, maybe 2 leaf blocks to perform that range scan with 8k. I would need 1, maybe 2 leaf blocks with 32k. However, I might have to do 4 times the PIO to do it. I might need 4 times the buffer cache to hold it -- meaning I could definitely be INCREASING MY PIO needs with a 32k blocksize index. My blocks bump out other blocks more often since we use 4x the buffer cache to hold these blocks.


I'd like to see the science that says "if you have this condition (cause things are not "universally applicable"), and you do these things -- then you might expect under this set of circumstances to experience 'this' ('this' needs to be the metrics we should be looking for, what numbers we are looking for)"

That's all -- pretend I'm from Missouri -- "show me"


Multiple blocksizes

October 11, 2004 - 10:20 pm UTC

Reviewer: A reader

"i would not suggest using multiple block size tablespaces as a tuning device. i would not suggest rebuilding all of your indexes with a 32k blocksize"

I completely agree with everything you said and see your point. But, as an Oracle technology person, I see these new features, I see the reasoning behind them and where they might be useful. I would like to benchmark them on my system to see if they would indeed benefit my applications. How do I go about doing that?

Let's take the "recreate an index in a 32k blocksize tablespace" feature. What index stats, usage patterns, cardinality, etc., should I look at to determine if a certain index would indeed benefit from being moved to a 32k blocksize tablespace?

Thanks

Tom Kyte

Followup  

October 12, 2004 - 7:44 am UTC

the reason for multi-block size tablespaces is, was and will be documented as "for support of transporting tablespaces from small block size oltp database to large blocksize data warehouses for ETL processing without the 'E'"



why ?

October 12, 2004 - 12:29 am UTC

Reviewer: A reader

great as always.

i have a question for you. Please tell me, do you answer these questions alone or do you have a team? i think a person like you also needs rest. i wonder if someday i can also be like you.

thanks

Tom Kyte

Followup  

October 12, 2004 - 7:58 am UTC

unless the answer starts with "I asked <name> to answer and they say...."

I answered it.

review

October 12, 2004 - 4:12 am UTC

Reviewer: A reader

tom
have you read this book Oracle SQL Tuning & CBO Internals by kimberly floss.
i was about to buy this book but since it has been co-written by Don Burleson, what are your views about this book? Please shed some light on it to help me make a decision.

thanks


Tom Kyte

Followup  

October 12, 2004 - 8:12 am UTC

you'll have to make your own judgement calls as to whom you get advice from.

When I've read something and liked it, I'll point it out. I haven't read this book, I am not qualified to comment on it one way or the other.

about the book cbo internals ?

October 12, 2004 - 7:48 am UTC

Reviewer: A reader

about the book cbo internals ?

???

October 12, 2004 - 8:08 am UTC

Reviewer: A reader

What do you think about about this book, can you give me your opinion please.
It's called 'Annoy people by asking them to review books!'

Thanks

Multi blocksize tablespaces

October 12, 2004 - 10:20 am UTC

Reviewer: A reader

You are dodging my question

"the reason for multi-block size tablespaces is, was and will be documented as "for support of transporting tablespaces from small block size oltp database to large blocksize data warehouses for ETL processing without the 'E'""

That just says that, say, a 32k blocksize tablespace is likely to be used by customers in a "data warehouse", whatever that means nowadays! But you just advised that just because it is a data warehouse, one shouldn't jump to use 32k blocksize tablespaces. That's why I was trying to get some scientific, methodical way from you, by looking at stats, numbers, etc., to determine if indeed I would benefit from a 32k blocksize tablespace.

Tom Kyte

Followup  

October 12, 2004 - 10:23 am UTC

it means, 32k/16k is normal for DW.

it means, numbers smaller than that are normal for OLTP (all of the way down to 2k).


I DID NOT ADVISE ANY SUCH THING. Please tell me where you saw that? That I said "don't use 32k/16k for DW"

I see "don't use mixed size blocks for tuning"
Where do you see "don't use 32k for anything"

October 12, 2004 - 10:55 am UTC

Reviewer: A reader

OK so you *are* saying use 32k for a data warehouse, always? Surely not. I understand what you mean, I am just trying to do my due diligence to determine if my data warehouse would benefit from this new feature and requesting your help as to what steps I should follow to come to this determination. Don't know how else to ask this.

Tom Kyte

Followup  

October 12, 2004 - 11:25 am UTC

I'm saying

"in general, DW's use bigger blocks -- like 16 and yes, even 32k"

"in general, OLTP's use smaller blocks -- like 8 and yes, even 2k"


You pick -- but given that you picked say "16" for DW and say "8" for OLTP -- multiple block sizes are SUPPORTED so you can take a datafile from "8" and plug it into "16" so you can "ETL" without the "E" from your OLTP's to your DW's.


YOUR DW BENEFITS FROM THIS NEW FEATURE IF AND ONLY IF YOUR DW has a blocksize that is different from the databases you are transporting data from (tried to say that above, hope this clears it up -- multiple blocksizes are there to permit the transportation of a datafile from a database with blocksize X to a database with blocksize Y)
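A sketch of that supported use, with hypothetical names, assuming an 8k-block OLTP source and a 16k-block warehouse; in the target database a cache must exist for the foreign block size before the tablespace can be plugged in:

```sql
-- In the 16k-blocksize warehouse: allocate a buffer cache for the
-- incoming 8k blocks (db_nk_cache_size, one per non-default size)
ALTER SYSTEM SET db_8k_cache_size = 64M;

-- then plug in the datafile transported from the OLTP database,
-- e.g. via import:
--   imp ... transport_tablespace=y tablespaces=SALES_TS
--           datafiles='/u01/oradata/sales_ts01.dbf'
```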



Domain Index

October 12, 2004 - 11:28 am UTC

Reviewer: Kamal from INDIA

Hi Tom

Can you explain the use of Domain Index and where it can be used.

There is column called domixops which is showing failed even when the index is valid...what is the reason...

Thanks in advance..

Regards
kamal

Tom Kyte

Followup  

October 12, 2004 - 11:36 am UTC

Oracle text uses domain indexes... create index t_idx on t(text) indextype is ctxsys.context;

Oracle spatial/locator uses domain indexes....

they are "user defined" or "extensible" indexes -- you could create your own (but the ones we ship are usually enough).




DOMIXOPS is not a name familiar to me. Who owns it? what is it on?

October 12, 2004 - 12:41 pm UTC

Reviewer: A reader

"in general, DW's use bigger blocks -- like 16 and yes, even 32k"

"YOUR DW BENEFITS FROM THIS NEW FEATURE IF AND ONLY IF YOUR DW has a blocksize that is different from the databases you are transporting data from"

We are still not on the same page. I am not talking about transporting tablespaces at all.

I am just trying to justify, rationalize, see if your first statement above is valid in my environment. In other words, I am trying to see if your "in general" would apply to my particular warehouse, that's all. I am NOT asking about why the multiple blocksize feature was invented.

Very simply, all I want to know is how do I go about determining if my warehouse would benefit from a 32k blocksize? You pointed out some caveats above ("a large table with an index that retrieves 1-100 or so rows at a time ..."). How do I go about *testing* and benchmarking a 32k blocksize so as to arrive at a determination that it would/would not benefit me?

Tom Kyte

Followup  

October 12, 2004 - 1:54 pm UTC

you would have to test and benchmark your application, on your system.

If you look at our tpc-h's for example, we use 8k/16k blocksize.

it'll depend on so many factors -- such as "do you always query rows that are next to each other?" -- if so, the blocksize being really large won't impact the buffer cache efficiency so much. If the rows you query are far apart -- occurring every say 64k of data (totally artificial here) -- you have to cache 32k for each row you are working with; I could cache the same amount of data in 1/2 the cache with 16k and 1/4 the cache with 8k.


if your warehouse is built, stop, done. don't rebuild it.

if you are building a warehouse, by all means -- test the typical operations you plan on experiencing and see if you can "measure" any difference (doubtful, not 100% ruled out -- but highly improbable).

8 and 16k are what the vast majority of the rest of the world is using (some people say "just use 8k for everything").

October 12, 2004 - 2:10 pm UTC

Reviewer: David Aldridge from Colorado Springs

For data warehouses you might find that the payoff for larger block sizes comes if you are using data segment compression, for which your rows ought to be physically ordered anyway. It might be worth benchmarking the compression ratio you can achieve on 16k and 32k blocks.

You might like to check metalink for any outstanding issues with 32kb blocks -- i recall that there was a bitmap index corruption problem a little while ago that forced us to migrate our entire DWh back to 16kb blocks. That hurt.

Tom Kyte

Followup  

October 12, 2004 - 2:18 pm UTC

well, all rows are physically ordered :) randomly perhaps but physically ordered in some fashion.

The ordering that achieves maximum compression -- might be the ordering that puts the rows you tend to analyze as far apart as possible.


Consider:

SYM varchar2(4)
DT date
VAL number

(stocks...) date is 7 bytes. SYM is 5 bytes.

The most "repetitive and big" field is DT. If you order by DT, all of the information for ORCL is spread out over as many blocks as possible. But, you query by ORCL for the last N weeks (or for SYM=xxxx in general, you analyze a stock)

So, your sort order for MAX compression is different than the order you would like the data in for analysis.


But yes, 32k will use less storage in a compressed and uncompressed form -- less block overhead (for uncompressed) and more opportunities to compress with a bigger block.


that last point was the reasoning behind saying "8 and 16k are what the vast majority of the rest of the world is using (some people say "just use 8k for everything").". 32k is not mainstream yet, as 16k was not too many years ago, as 8k was not before that.




October 12, 2004 - 3:38 pm UTC

Reviewer: David Aldridge from Colorado Springs

>> So, your sort order for MAX compression is different than the order you would like the data in for analysis.

Yes indeed, and given the choice I'd go for ordering by analytic requirements by default, but everything depends on how many blocks the average query would scan (assuming that the rows are physically ordered by analytic requirements). If the "average" query scans some hundreds of blocks then there's probably still room for further sorting to improve compression ratio. As you say, benchmarking is the thing.

>> that last point was the reasoning behind saying "8 and 16k are what the vast majority of the rest of the world is using (some people say "just use 8k for everything").". 32k is not mainstream yet, as 16k was not too many years ago, as 8k was not before that.

That's a relief -- I thought I might get flamed for mentioning that :)

Domain Index

October 14, 2004 - 4:52 am UTC

Reviewer: Kamal from INDIA

Hi tom

Thanks for the answer.

I am sorry, i mentioned the column name wrongly in my previous post.

the Column name is DOMIDX_OPSTATUS in DBA_INDEXES.

for a domain index, the status is VALID but this column DOMIDX_OPSTATUS is FAILED. What is the reason?

Can you please explain, with an example, how to create a domain index and use it in a query? Or, if not possible, any reference links...

Thanks
kamal

Tom Kyte

Followup  

October 14, 2004 - 10:10 am UTC

my car won't start either....

one would need "more information" -- like "what you did", what the index is on, what TYPE of domain index (spatial, text, what) and so on...

search for

indextype

on this site -- indextype is ctxsys.context for example -- creates a TEXT index.
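Putting that together -- a minimal Oracle Text sketch (assuming Text is installed and the schema has the CTXAPP role; names hypothetical):

```sql
CREATE TABLE docs
( id    NUMBER PRIMARY KEY,
  body  VARCHAR2(4000)
);

INSERT INTO docs VALUES ( 1, 'rebuilding indexes is largely a myth' );
COMMIT;

-- the domain index: INDEXTYPE names the implementation
CREATE INDEX docs_body_idx ON docs(body) INDEXTYPE IS ctxsys.context;

-- and the operator that drives it
SELECT id
  FROM docs
 WHERE CONTAINS( body, 'rebuilding' ) > 0;
```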

Concerning the URL on index rebuilds mentioned above..

October 14, 2004 - 5:34 am UTC

Reviewer: Hans Wijte from The Netherlands

I found this thread highly interesting to read, especially
the comments on the article mentioned in the URL
www.actoug.org.au/Downloads/oracle_index_internals.pdf

Well, I took a peek at this article, but it seems to me I
have read the wrong article, because mr. F. declares the
rebuilding of indexes to be largely a MYTH as well (see the
summary of his article); in fact, if I'm not completely
mistaken, this article states that almost everything you
point out as being myths are also myths in his opinion.

Have I perhaps been reading a second ("updated") version ?

Best regards

Hans

Tom Kyte

Followup  

October 14, 2004 - 10:16 am UTC

i'm confused? what did you mean?

????????

Explanation

October 15, 2004 - 4:58 am UTC

Reviewer: Hans from The Netherlands

In the summary of his article he states the following:
1. The vast majority of indexes do not require rebuilding
2. Oracle B-tree indexes can become "unbalanced" and need to be rebuilt is a myth
3. Deleted space in an index is "deadwood" and over time requires it to be rebuilt is a myth
4. If an index has a height of "X" levels, it's inefficient and must be rebuilt is a myth
5. If an index has a poor CF, rebuilding it will help is a myth
6. To improve performance, indexes must be rebuilt on a regular basis is a myth

To me all this doesn't sound like he's trying to make us
believe that indexes have to be rebuilt

Tom Kyte

Followup  

October 15, 2004 - 11:47 am UTC

1) i concur
2) correct
3) correct
4) correct
5) correct
6) correct


i don't see anything wrong with that list.

the vast majority of indexes do not need to be rebuilt.

b*tree indexes cannot become unbalanced -- it is technically IMPOSSIBLE (so to rebuild because they are out of balance would be foolish, for they are NOT out of balance)

deleted space is not deadwood in general (my tests above show that actually). Only in seriously degenerate, exceptional cases might there be an issue. (the myth is "deleted space is NEVER reused -- many many people believe that", it is NOT true at all, not even close to true)

the index height in general will not be affected by a rebuild.

rebuilding an index cannot in any way shape or form EVER affect the CF - the CF is a measure of the TABLE, not the index so much -- how sorted is the TABLE with regards to the index -- rebuilding the index will NEVER change how the index is sorted, hence it cannot affect the CF
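This is easy to see for yourself -- a sketch with a hypothetical table T and index T_IDX:

```sql
-- The clustering factor is a measure of how the TABLE is ordered
-- relative to the index -- note its value:
SELECT index_name, clustering_factor
  FROM user_indexes
 WHERE table_name = 'T';

ALTER INDEX t_idx REBUILD;

-- regather statistics and re-query: the clustering factor is
-- unchanged, because the table's row order did not change
EXEC dbms_stats.gather_index_stats( user, 'T_IDX' );

SELECT index_name, clustering_factor
  FROM user_indexes
 WHERE table_name = 'T';
```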

Rebuilding all indexes on a recurring basis is a bad bad bad thing. It is a myth.



So, where did either of us go wrong? There are cases -- special, degenerate cases where a rebuild is helpful, even necessary. But it is like 0.01% of the real world cases -- not 99.99%.


Still not understanding the disconnect here?

Genesis of a Myth

October 15, 2004 - 3:34 pm UTC

Reviewer: Alberto Dell'Era from Milan, Italy

> "deleted space is NEVER reused -- many many people believe that"

According to Dave Ensor ("Tales of the Oak Table", bottom of page 50), that used to be true for some prehistoric release of Oracle, at least for (quoting) "completely empty leaf blocks". So, this is a "once-was-true" kind of Myth.

Just noticed that while reading, perhaps someone may be interested.

bye
Alberto

Tom Kyte

Followup  

October 15, 2004 - 5:59 pm UTC

all myths have foundations in long ago behaviour probably -- the explicit cursor myth for example (that you should open/fetch/close instead of select into) came probably from early forms (iag/iap) applications...

Time-Resistant Myths

October 15, 2004 - 6:42 pm UTC

Reviewer: Alberto Dell'Era from Milan, Italy

And what's the foundation of the Myth that says that you don't exist ?

Seriously -- everyone I know, *without exception*, whether an Oracle professional or not, believes that asktom is run by a shadow team of ghost writers and that Tom Kyte is on permanent vacation in the Caribbean, or is perhaps a team acronym (Team for Operations and Maintenance? Could be :) ...

Hard to find a test case against this, so i'm going to believe it myself ;)

Hans, I think you're confused ?

October 15, 2004 - 6:51 pm UTC

Reviewer: A reader

Hans, it's the articles of Mr Burleson that Tom has been questioning, not Mr Foote. Tom uses Mr Foote's article as an example that dispels and proves many of these myths not to be correct.

You might want to re-read this thread.

Tom Kyte

Followup  

October 16, 2004 - 10:32 am UTC

Correct -- I really like the Richard Foote material, it is well done. I encourage all to give it a read through and take it into consideration.

October 16, 2004 - 11:19 am UTC

Reviewer: David Aldridge from Colorado Springs

In Richard Foote's excellent article he mentions a 9i 90-10 block split bug, that results in too-high a pct_free.

Is it fair to say that this is one situation where an occasional rebuild might be justified? Not for performance reasons I'd think, just if reducing the space allocation of the index was useful.

Tom Kyte

Followup  

October 16, 2004 - 1:04 pm UTC

it is a weird case that only affects a really badly written program. i'd rather fix the code :)

but it is indexes on sequences that in general need watching (and the del_lf_rows ratio so commonly used won't work, as shown above!). When they sweep from left to right (like a queue -- put rows in, process them, delete them) but you leave a couple behind -- it can get bad over time. A coalesce would fix them right up.
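For completeness, a sketch of that fix on a hypothetical queue-style index:

```sql
-- rows are inserted on the right, processed and deleted on the left,
-- leaving mostly-empty leaf blocks behind.  COALESCE merges adjacent
-- sparse leaf blocks in place -- cheaper and far less intrusive than
-- a full REBUILD:
ALTER INDEX msg_queue_pk COALESCE;
```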

My mistake

October 18, 2004 - 6:03 am UTC

Reviewer: Hans Wijte from The Netherlands

Tom,

While reading your article I got the impression that you
were also questioning the article by Mr. Foote, but I
couldn't find anything wrong with it. That's why I responded like that. I see the mistake was wholly on my
side; I apologize for the confusion

Best regards

Hans

Jonathan Lewis spot on

October 18, 2004 - 5:49 pm UTC

Reviewer: A reader

The two Jonathan Lewis links you highlight on your home page make the perfect postscript to this thread. Perhaps this Don Burleson dude will read it all and stop writing until he learns what he's writing about !!

Tom Kyte

Followup  

October 19, 2004 - 8:48 am UTC

Just for posterity (links on the home page come and go...)

INCREDIBLY amusing, as well as full of useful information:
Circular References: proof by restatement
http://asktom.oracle.com/pls/ask/z?p_url=http%3A%2F%2Fwww.jlcomp.demon.co.uk%2Fcircular.html&p_cat=CIRCULAR&p_company=10
and That was then, this is now:
http://asktom.oracle.com/pls/ask/z?p_url=http%3A%2F%2Fwww.jlcomp.demon.co.uk%2FThat%20was%20then.html&p_cat=THAT_WAS_THEN&p_company=10
Both of those articles are not only funny, but thoroughly educational. Two thumbs up.

Not likely

October 19, 2004 - 7:36 pm UTC

Reviewer: Jeff from Atlanta

<Perhaps this Don Burleson dude will read it all and
stop writing until he learns what he's writing about !!>

Hasn't stopped him so far. Try reading some of his responses to these things on the newsgroups.

(You can get to them now at groups.google.com easily)

Status N/A

October 20, 2004 - 5:12 am UTC

Reviewer: Yogesh from Pune, India

Please confirm.

If an index is locally partitioned, the status of that index in dba_indexes is N/A



Tom Kyte

Followup  

October 20, 2004 - 7:24 am UTC

in dba_indexes -- sure, because there is no single "index" to have a status; there are many mini-indexes (in dba_ind_partitions) that each have their individual status
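A sketch of where to look, with a hypothetical locally partitioned index:

```sql
-- the "index" itself has no status of its own...
SELECT index_name, status
  FROM dba_indexes
 WHERE index_name = 'T_LOCAL_IDX';       -- status = N/A

-- ...each partition does:
SELECT partition_name, status
  FROM dba_ind_partitions
 WHERE index_name = 'T_LOCAL_IDX';       -- USABLE / UNUSABLE per partition
```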

top oracle expert - myth?

October 20, 2004 - 9:03 am UTC

Reviewer: A reader

I'd like to understand how he became one of the world's top oracle db experts. Circular reference type of thing again ...

"Donald K. Burleson is one of the world’s top Oracle Database experts with more than 20 years of full-time DBA experience. He specializes in creating database architectures for very large online databases and he has worked with some of the world’s most powerful and complex systems. A former Adjunct Professor, Don Burleson has written 15 books, published more than 100 articles in national magazines, serves as Editor-in-Chief of Oracle Internals and edits for Rampant TechPress. Don is a popular lecturer and teacher and is a frequent speaker at Oracle Openworld and other international database conferences. Don’s Web sites include DBA-Oracle, Remote-DBA, Oracle-training, remote support and remote DBA."

Bitmap Indexes rebuild

October 21, 2004 - 5:02 am UTC

Reviewer: Kiril from BG

Hi Tom
What about bitmap indexes on tables with heavy inserts or updates of the bitmap column?
I tested this with 1,000,000 rows, with inserts/updates and a commit every 20,000 rows, and there is unexpected growth in the bitmap index.
After a rebuild, the index size is reduced.
Is this the 0.01% where a rebuild is reasonable?
Thanks
Kiril

Tom Kyte

Followup  

October 21, 2004 - 7:01 am UTC

it is not unexpected -- in fact, it is entirely expected and predictable.

bitmaps are not relevant in a system with lots of row level modifications (they do not "survive" such operations).

(i've written about this elsewhere on this site, jonathan lewis has an excellent series on this on dbazine.com)


this is a case where you set the index unusable, bulk load ALL rows and rebuild the index entirely.
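A sketch of that pattern, names hypothetical:

```sql
-- 1) take the bitmap index out of play
ALTER INDEX fact_status_bix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- 2) bulk load all of the rows
INSERT /*+ APPEND */ INTO fact SELECT * FROM staging;
COMMIT;

-- 3) rebuild the index once, from scratch
ALTER INDEX fact_status_bix REBUILD NOLOGGING;
```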

Unbalance?

October 21, 2004 - 7:23 am UTC

Reviewer: A reader from USA

Tom,

Sorry for asking very naive question but have to.

What do they (or you) really mean by the word "balance" with respect to an index? What is the balance of an index?


Tom Kyte

Followup  

October 21, 2004 - 7:55 am UTC

all leaf nodes are at exactly the same depth from the root -- eg: it always takes 3 IO's to get to the leaf (for example).

An unbalanced index would have it such that some leaf nodes could take 3, or 4, or 5 or 6 or N IO's to reach them.

in oracle -- the b*tree index is always balanced, always the same number of IO's to get to any leaf block in that index.
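Two ways to see that uniform depth for yourself (index name hypothetical):

```sql
-- BLEVEL = number of branch levels between the root and the leaves;
-- it is the same for every leaf, by construction:
SELECT index_name, blevel
  FROM user_indexes
 WHERE index_name = 'T_IDX';

-- or, with more detail (HEIGHT = BLEVEL + 1):
ANALYZE INDEX t_idx VALIDATE STRUCTURE;

SELECT height, lf_rows, del_lf_rows
  FROM index_stats;
```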

Review of Tom's answers on rebuilding indexes

October 21, 2004 - 11:27 am UTC

Reviewer: Cayton Janner from Austin, TX USA

This answer was quite helpful and Tom, I agree -- you should never rebuild indexes except to move them, or because you
have moved the underlying table and they have become "unusable". I did rebuild some to get better balance
and eventually, they do go back to that "stable size"
you mentioned in your first response over two and a half years ago.

Tom Kyte

Followup  

October 21, 2004 - 2:54 pm UTC

(and you didn't "balance them" :)

they are *always* perfectly balanced.

btree index

November 01, 2004 - 2:36 am UTC

Reviewer: A reader

Hi

I am reading a Metalink note:
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=30405.1

I wonder, when will the btree level increase? Under what circumstances?

Also there is a paragraph which confuses me,

Inserting an index entry after index creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
..........
Where an index key value has multiple entries, these entries are made into the leaf block in ROWID order. So all the blocks for the index key value are scanned in turn until an entry is found with a greater ROWID than the new row, and the row is inserted into that block (which may cause extra block splitting).
The reason for this is for queries such as:

SELECT some_columns FROM table WHERE COL_A = valueA and COL_B = valueB;

If COL_A and COL_B both have non-unique indexes, then because the entries in each index are stored in ROWID order, it makes it easy to find the ROWID's that occur in both indexes. Otherwise we would have to sort the ROWID's before we could find the ROWID's that occur in both indexes.

---------------------

I am confused; I understand that an index with multiple key columns is a composite index, correct? From the text it seems that COL_A and COL_B are separate single-column indexes -- how can single-column indexes have multiple index keys...?

I don't understand why the rowid is used. If, say, we have a composite index on (ename, empno):

1000, 10
1002, 20
1003, 10
1005, 30

how are these stored?

Cheers

Tom Kyte

Followup  

November 01, 2004 - 3:45 am UTC

the level will increase when a branch block level becomes "full" and we need another level.

that paragraph was talking about using two indexes simultaneously -- an index join.

November 02, 2004 - 6:10 am UTC

Reviewer: A reader

Just noticed Mr Burleson is doing a presentation at Openworld on indexing, topic II d should be good for a laugh ;-)

...
I Introduction

II Indexing Overview
a. Types of Indexes Available
Type
When to Use.
When it is not appropriate.
b. Indexing NULLs
c. The Need for Statistics
d. Rebuilding Indexes
Myth vs. Reality
Getting Stats for Free on Rebuilds
e. Testing Index Effectiveness
...

Oracle to blame ?

November 03, 2004 - 6:16 am UTC

Reviewer: A reader

After reading this thread, it appears obvious that Burleson understands little about indexes. Yet he writes articles, Oracle authorized books and presentations at Oracle Open World on the topic. No wonder us newbies get confused.

Shouldn't Oracle take some responsibility or am I missing something ?

Tom Kyte

Followup  

November 03, 2004 - 7:15 am UTC

"Oracle authorized books" -- no such thing "exists", Oracle does not "authorize" individuals to write books -- never has.

We cannot legally prevent someone from saying their mind. It is not legal. We cannot discriminate, we cannot blacklist. If we said "we will prevent John Doe from presenting in Oracle forums" -- we would be open to litigation.

Oracle doesn't pick all of the presentations for OOW, they are selected by the user groups as well as I understand it.

The people must vote with their pocket books here.

We can only be responsible for Oracle produced material.


You or I could write a book (totally misleading book) about SQLServer if I wanted to -- no one can really stop us. We are private individuals.



Poor Don!

November 08, 2004 - 7:59 am UTC

Reviewer: Jensbob from Lower Saxony

> Just noticed Mr Burleson is doing a presentation at
> Openworld on indexing, topic
> II d should be good for a laugh ;-)

He will feel like lion food in the Rome colosseum.


oracle authorized books

December 03, 2004 - 2:21 am UTC

Reviewer: A reader

the books are not oracle authorized books, but don has books published by ORACLE PRESS, like statspack tuning
etc.

Tom Kyte

Followup  

December 03, 2004 - 7:56 am UTC

so? you too could probably publish a book under OraclePress.

It is a marketing relationship, pure and simple -- just like MS Press. The books are not 'sanctioned', they are not 'subject to approval' -- no more than anything by any publisher.

December 05, 2004 - 11:41 pm UTC

Reviewer: A reader

that's not fair (a marketing relationship) to readers if nobody is there to check the content, i think

Tom Kyte

Followup  

December 06, 2004 - 11:20 am UTC

Oracle Press is a big company -- McGraw Hill.

They have a staff of editors. These editors hire tech editors (like me -- I've tech edited for them).

These tech editors do "check its matter", as you say. They have upwards of 5 or 6 people reading the book only for technical accuracy (e.g. I don't correct grammar or flow, just find technical inaccuracies).

So, that said, McGraw Hill -- with a long history and reputation -- does tech edit the books. But the tech editors themselves are human beings, like me, with varying levels of experience.

My point is "Oracle the company" doesn't tech edit these books. There are tech editors from time to time that work for Oracle sure.

I believe when you buy a book you should look at the team that tech edited it too. (Look at the team I had on my books -- the names there should instill a high degree of confidence that the material has been vetted by people who know.) I was as proud of the tech edit team on the book as I was of the content itself -- to the point where I had them put their bios in there so you can read about the people who did it.


In fact, that may be as important as the author themselves. I've been in a situation where I've said to the publisher "if this material is printed, I want my name removed from the book; I will not have my name associated with this content -- people will believe I 'believe' it to be true". It is a powerful way to get stuff retracted.




December 06, 2004 - 10:46 pm UTC

Reviewer: jas from mumbai,india

we all like you because of your sincerity.

thanks


So do lions enjoy eating a turkey ?

December 09, 2004 - 4:18 am UTC

Reviewer: A reader

"He will feel like lion food in the Roman colosseum."

Being unable to attend Open World this year, I would be interested in hearing feedback on Don Burleson's index presentation

Based on what I've read here and elsewhere, he doesn't seem to know a great deal. So has he changed his tune and learnt something of what he presents, or were there a lot of lions with a bad case of indigestion in attendance?

Lewis, Kyte, Burleson et al

December 10, 2004 - 10:19 am UTC

Reviewer: A reader from US

I was planning to attend that Burleson session until I discovered who the presenter was!

Tom's and Jonathan's sessions were great, as expected. BTW, many are still waiting for the hands-on lab session that you promised, Tom! :)

Tom Kyte

Followup  

December 10, 2004 - 11:01 am UTC

ha ha... ;)

December 10, 2004 - 12:31 pm UTC

Reviewer: David Aldridge from Colorado Springs

Don Burleson's presentation: https://www.openworld2004.com/published/1082/garmany_doc_1082.pdf

Hi,

December 10, 2004 - 1:15 pm UTC

Reviewer: A reader

I had a situation where the size of the index on two columns of a particular table was 1.5 gig, and the size of the table, with 6 columns, was just 450MB.
After rebuilding the index, the size of the index came down to 240MB.

The table is, however, prone to all kinds of DML activities.

I don't understand why the index grew up to 1.5 gig. Can you please explain? I am using Oracle 9.2.0.5.

Thanks,


Tom Kyte

Followup  

December 10, 2004 - 7:59 pm UTC

it is called "white space"

You could have most likely "reclaimed" the space via a COALESCE as well.


Let's say you fill an index up with the numbers 1...1,000,000.

You delete 75% of the rows (the numbers 1..25 STAY, 26..100 go, 101..125 STAY, 126..200 go, and so on).

Your index will be as big as it was before.
Your table will be as big as it was before.

Now, you insert 750,000 more rows (the numbers 1,000,001 .. 1,750,000).

Your index is forced to put them on the "right hand side"; the number 1,000,001 must go to the RIGHT of 1,000,000. The index grows.

But the table? The number 1,000,001 can certainly go where the row 990,990 *was* -- the table might not grow at all.


Things in an index have a "place"
Things in a (heap) table do not.


So, some "right handed" indexes like that can benefit from a coalesce from time to time, or maybe even a rebuild...
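A sketch of how one might reproduce and observe this (all object names are made up for illustration; run only in a scratch schema):

```sql
-- Build an index on 1..1,000,000, then delete 75% of each hundred,
-- leaving the "white space" described above.
create table t ( x number );
create index t_idx on t ( x );

insert into t select rownum from dual connect by level <= 1000000;
commit;

delete from t where mod(x - 1, 100) >= 25;   -- keep 1..25, 101..125, ...
commit;

-- VALIDATE STRUCTURE populates INDEX_STATS for this one index
-- (note: it locks the table while it runs)
analyze index t_idx validate structure;
select lf_rows, del_lf_rows, btree_space, used_space from index_stats;

-- glue sparse adjacent leaf blocks together and free the empties
alter index t_idx coalesce;
```

Comparing INDEX_STATS (re-validate after the coalesce) shows the leaf blocks handed back without a full rebuild.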

So a 1.5 gig index on a several hundred meg table is ok?

December 12, 2004 - 12:19 pm UTC

Reviewer: Mike Ault from Alpharetta, Ga

Tom,

I take it this is one of the special cases that represent only a fractional percent of cases? Yet I see these at almost all my clients. I also noticed in my reading of the Tales from the Oak Table book that several of your reading-list folks say "rebuilds of indexes" are beneficial.

I agree, they shouldn't be rebuilt on a whim -- only when it is determined they have a problem. The ratio of deleted blocks to full blocks is a simple measure that can indicate this. Is it a 100% indication? No. However, it is a valid tool.

I also agree that the ultimate fix is to fix the code in many cases. Unfortunately over half the client sites I visit have 3rd party code they can't fix. In that case we have to treat the symptoms.

I would like to apologize to folks who read my articles about rebuilds of indexes fixing cluster factors; I believe I rescinded that advice wherever I could, however I have little control over reposting. I do condone reordering the index columns of a concatenated index to achieve better clustering factors and then applying that reorder to the appropriate SQL.

As to hardware fixes versus SQL fixes: on older releases and with third-party software, it may be better money spent to use the judicious application of specific technology to fix performance issues -- especially if the only fix is to re-write a massive amount of SQL, which could cost the company hundreds of thousands of dollars in lost time and productivity. Remember, many companies have bought into poorly designed software that they cannot easily discard. Again, in this case we have to treat the symptoms.

I wish Oracle would take your practices to heart in all of its applications. I hate to take assignments on Oracle Apps sites because of the complex self joins, bad indexes, poor use of bind variables and cryptic use of undocumented parameters with no explanation or reasoning provided.

I wonder also what would happen if you were to play devil's advocate with your examples: instead of designing them to disprove, design them to prove some of these items and see what results you obtain. Sometimes what we see is what we look for.


Tom Kyte

Followup  

December 12, 2004 - 7:30 pm UTC

I've never ever said "they are terrible, don't do them" (please, if anyone can find where I've said that -- point it out)

what I'm against is a scheduled rebuild of every single silly index on the planet -- causing downtime, eating up resources and providing negative benefits in some cases.

I'm not saying "don't rebuild this thing" am I?

The ratio of deleted blocks to full blocks? How do you do that? And what is a deleted block? If you mean del_lf_entries -- that is useless (we clean out del_lf_entries on certain occasions; every deleted-but-not-reused slot is not a del_lf_entry!). Examples are above -- from the Pro*C stuff that shows degenerate cases.


Not sure what to make of your last paragraph there. The examples above clearly showed degenerate cases?!? I carefully crafted them to show a breakdown -- and the type of processing that causes it. I gave examples to show that space is reused efficiently.

did you read the entire posting? (ctrl-f for "hindsight", for example)

I do as many "positive" and "contra-positive" cases here. Many (most, actually, I would say) are "positive" cases -- showing something to be true here.


Are you the Mike Ault that works for Don Burleson ?

December 13, 2004 - 6:01 am UTC

Reviewer: A reader

Because if you are, it explains your comments.

Might I suggest you carefully read the excellent presentation by Richard Foote highlighted by Tom, and learn why some indexes need rebuilding but most don't.

Considering your guest appearance at the start of the presentation, it might do both you and Don Burleson some good.

Keep up the great work, Tom, showing us how it really is.


To the last "A reader" ....

December 13, 2004 - 9:51 am UTC

Reviewer: Gabe

One can read/research both sides of the argument and draw his/her own conclusions ... one can express public opinions or statements (when backed up by proofs/facts).

Dishing out nominal advice <quote>Might I suggest you carefully read ...</quote> without leaving a name might be considered a bit patronizing.


Tom Kyte

Followup  

December 13, 2004 - 10:23 am UTC


I encourage discussion on any and all topics myself.

The last thing about Don Burleson.

December 25, 2004 - 8:48 am UTC

Reviewer: Walter

Don is a funny guy:

</code> http://www.dba-oracle.com/redneck.htm <code>



Tom Kyte

Followup  

December 25, 2004 - 9:18 am UTC

and extremely sensitive:
http://dba-oracle.com/oracle_news/2004_12_22_webb.htm



December 26, 2004 - 2:25 pm UTC

Reviewer: Mike

Tom,
you have stated:
"I'm not saying "don't rebuild this thing" am I?"

Sometimes we do need to rebuild indexes, correct? If so, what are your criteria? Please illustrate them with examples, as you usually do.


Tom Kyte

Followup  

December 26, 2004 - 2:47 pm UTC

did you see the Pro*C examples? They show you what I measure: an actual, verifiable, measurable, large reduction in work performed -- against the type of index that typically benefits from this (an index you range scan from the "left", where over time you leave one or two entries -- deleting most of the others -- in other words, maybe a primary key on a batch table)

December 26, 2004 - 3:47 pm UTC

Reviewer: dave

the Ireland article has been changed; he is a very unprofessional person

temp tablespace for sorting

January 04, 2005 - 7:29 pm UTC

Reviewer: reader

(1) When Oracle rebuilds the index, if it needs more sort space, does it use temp or the index tablespace?
(2) When you create a NEW index, if it needs more sort space, does it use temp or the index tablespace?
Thanks.

Tom Kyte

Followup  

January 05, 2005 - 9:16 am UTC

1) temp. It sorts into temp and then writes into the index tablespace. However, you will see TEMP segments in the INDEX tablespace, since when it is moving the data from TEMP into INDEX, it'll write them into TEMPORARY SEGMENTS (so that if the index create fails, SMON will just nicely clean up the temporary stuff that was being moved).

so, don't be confused when you see temporary extents in the index tablespace; it just means we are writing the index to disk. At the end of the write, we convert the temporary extents into permanent ones.


2) same as #1.
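If you want to watch this happen, a query along these lines (the tablespace name is a placeholder), run while a large index build is in flight in another session, should show the in-progress extents as TEMPORARY in the index tablespace:

```sql
-- During a big CREATE INDEX / REBUILD in another session, the segment
-- being written shows up as TEMPORARY in the *index* tablespace:
select segment_name, segment_type, tablespace_name, bytes
  from dba_segments
 where tablespace_name = 'INDEX_TS'      -- your index tablespace here
   and segment_type = 'TEMPORARY';
```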

good book

January 16, 2005 - 11:33 pm UTC

Reviewer: A reader

Oracle SQL Tuning & CBO Internals by Kimberly Floss is a great book regarding the CBO. Along with your Expert One-on-One -- great reads, must buy.


Oracle Press

February 25, 2005 - 10:00 am UTC

Reviewer: Richard from Oxfordshire, UK

I know you're not Mr. Oracle Press, but I have to say that for Oracle to allow any old hack to publish under their umbrella is both shocking and confusing to the customer, who would (not surprisingly) believe that the *Oracle* name would only be associated with quality.

I recently spent about $70 on an Oracle Press book (the author is somewhat famous for his idiosyncratic ideas about rebuilding indexes!), and, quite frankly, it's a right load of Eartha Kitt!

I bought the book because it was on a subject that is not widely covered (so I was limited to what I could pick from) AND because it was Oracle Press. I have other Oracle Press books which are fine, so it's been an eye-opener to have ended up with a stinker (and I do mean STINKER).

Come on Oracle, pull your finger out! Check out the author and his/her work before lending them any semblance of respectability. I will probably not buy another Oracle Press book again (and feel free to contact me to arrange a refund!).

Index Rebuild

March 14, 2005 - 1:58 am UTC

Reviewer: APL

How can we make a text index rebuild faster? I am using ctx_ddl.sync_index, which is taking hours to finish.

Tom Kyte

Followup  

March 14, 2005 - 8:02 am UTC

look at the options -- parallel, up the memory settings and so on.
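As a sketch (the index name is made up, and exact parameter availability depends on your Oracle Text release -- check the CTX_DDL documentation for your version):

```sql
begin
  ctx_ddl.sync_index(
    idx_name        => 'MY_TEXT_IDX',  -- hypothetical index name
    memory          => '500M',         -- more sort memory per sync
    parallel_degree => 4               -- parallel sync, where supported
  );
end;
/
```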

tech books

March 15, 2005 - 11:41 am UTC

Reviewer: A reader.

Oracle Press books are of varying quality, as are those published by other groups. Oracle provides tech reviewers (and sometimes authors) but the books are ultimately published by a division of McGraw-Hill. In publishing parlance "Oracle Press" is an imprint. The books published under the Oracle Press imprint have Oracle part numbers, and the standard label on the cover lists them as "official" and "authorized".

Regardless of the publisher, all of the parties involved in publishing a book have an interest in providing the highest quality product possible. If you have issues with a book's quality, take it up with the publishers so they'll do better next time. They aren't trolling discussion boards for feedback; if you want your comments about an author or book to influence the future, tell the publisher.



Tom Kyte

Followup  

March 15, 2005 - 11:54 am UTC

:1,$s/Oracle Press/All/g


I believe that you should judge a book not by its cover but rather by:

a) I trust the author who wrote it

b) I feel comfortable with the review team, for the review team makes or breaks the book regardless of who wrote it. I'm very proud of the people who have reviewed my books; they would not be what they are without them. I'm not sure that all technical books have even been through tech edits!

c) the book was available in whole or in part online for technical review (a new way to judge -- most all books should be this way since about 2002/2003, I think)


March 15, 2005 - 4:52 pm UTC

Reviewer: A reader

Couldn't agree more!

Tom & Don

March 16, 2005 - 7:37 am UTC

Reviewer: DBA from Manchester (United !), England.

Not a technical question - just interested in knowing whether you and Don get on well - or at all ? I presume he knows of your disdain for some of his theories.

Don's Mystery about Rebuilding Indexes

March 22, 2005 - 12:39 pm UTC

Reviewer: Jens from Germany

Now I get it. I took a look at Don's page and saw his consulting rates: Mr. SuperOracle wants $550 per hour!

I bet that if he came to us, his first recommendation would be "Hey guys, first I would rebuild the indexes". OK, then he sits, looking very important, in front of the terminal and waits for the end of the rebuild process. And after hours he has earned a lot of money for doing nothing! GREAT JOB!

If Don takes $550 per hour, Tom, what's your price? $10 million for the same time? ;-)

http://www.dba-oracle.com/consulting_prices.htm#burleson

Performance after Index Rebuild

April 01, 2005 - 8:25 am UTC

Reviewer: Mahesh from Delhi India

Hi Tom

could you please let me know the way to measure the query speed improvements, or any other improvement, that result from index rebuilding.

I did this on my database and I released 2GB out of 3GB of index space.

Before Rebuild   After Rebuild   Space Recovered
(Size MB)        (Size MB)       (Size MB)
--------------   -------------   ---------------
3009             775             2234

Thanks


Tom Kyte

Followup  

April 01, 2005 - 8:48 am UTC

do you have measurements (anything) from before the rebuild? Otherwise it is going to be near impossible to measure anything.

what metrics do you have from before?

Index Rebuilding

April 04, 2005 - 9:50 am UTC

Reviewer: Mahesh Kumar from Delhi India

Hi Tom

I have the following information from the INDEX_STATS view, captured before and after rebuilding the indexes.

SQL> desc  BEGIN_INDEX_STATS

 ------------------- ----------------------------
 OWNER                VARCHAR2(30)
 NAME                 VARCHAR2(30)
 HEIGHT               NUMBER
 BLOCKS               NUMBER
 LF_ROWS              NUMBER
 DEL_LF_ROWS          NUMBER
 DISTINCT_KEYS        NUMBER
 BLKS_GETS_PER_ACCESS NUMBER
 INDEXSIZE            NUMBER Calculated from dba_segments

SQL> desc  END_INDEX_STATS
 ------------------- ----------------------------
 OWNER                VARCHAR2(30)
 NAME                 VARCHAR2(30)
 HEIGHT               NUMBER
 BLOCKS               NUMBER
 LF_ROWS              NUMBER
 DEL_LF_ROWS          NUMBER
 DISTINCT_KEYS        NUMBER
 BLKS_GETS_PER_ACCESS NUMBER
 INDEXSIZE            NUMBER  Calculated from dba_segments


Is this information sufficient?

Let me know what other information is missing, so I can collect it before the index rebuild.


Thanks
 

Tom Kyte

Followup  

April 04, 2005 - 11:50 am UTC

You would also want to get the statistics for the queries making use of this index.

For queries utilizing it to access data: did the consistent gets GO DOWN measurably?

For modifications having to maintain the index: did their amount of work performed GO UP measurably?


and in a month, are you right back where you began? So you need "later_index_stats" as well.
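A minimal before/after measurement might look like this (the table name is illustrative; the point is to snapshot the work done by the interesting statements before the rebuild, so there is something to compare against afterwards):

```sql
-- BEFORE the rebuild: capture logical I/O per execution for the
-- statements that use this index
select sql_text, executions, buffer_gets,
       round(buffer_gets / greatest(executions, 1)) gets_per_exec
  from v$sql
 where sql_text like '%MY_TABLE%';       -- your queries of interest

-- ... rebuild (or coalesce), re-run the workload, then run the same
-- query again: did gets_per_exec GO DOWN for the readers, and did the
-- work done by the modifications GO UP?
```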



index performance

April 05, 2005 - 10:44 am UTC

Reviewer: PRS from USA

Hi Tom,
I have an index in an OLTP environment where it is a primary key and its value is generated by an Oracle sequence. The index is analyzed, and it is on separate LUNs on the SAN from the data. I have 500 concurrent users. When I see queries on that table using that primary key index, I see a lot of contention and users waiting on the index lookup. I have rebuilt the index many times but the problem still remains. Any ideas how I can reduce contention on this particular index?
Thanks,
PRS

Tom Kyte

Followup  

April 05, 2005 - 12:17 pm UTC

how are you measuring "contention", what does contention mean to you?

index performance

April 05, 2005 - 1:49 pm UTC

Reviewer: PRS from USA

users have to wait for that index lookup, as so many people are looking at that table.
PRS

Tom Kyte

Followup  

April 05, 2005 - 6:45 pm UTC

tell me how you figured that out

show me some numbers, evidence. how did you determine this to be true.

coalesce and "right handed index" effect

May 04, 2005 - 3:09 am UTC

Reviewer: A reader

Hi

From one of your posts you mentioned that by coalescing an index we can reclaim space. In your example you said this:

"Let's say you fill an index up with the numbers 1...1,000,000.
You delete 75% of the rows (the numbers 1..25 STAY, 26..100 go, 101..125 STAY, 126..200 go, and so on)."

"Now, you insert 750,000 more rows (the numbers 1,000,001 .. 1,750,000).
Your index is forced to put them on the "right hand side"; the number 1,000,001 must go to the RIGHT of 1,000,000. The index grows."

Then you say

"So, some "right handed indexes" like that can benefit from a coalesce from time"

I don't understand how coalesce can reclaim space in this case, since coalesce only merges "leaf blocks within the same branch of the tree". The new index entries will always go to the right-hand side if the index key is sequential -- how can a coalesce help here?

Tom Kyte

Followup  

May 04, 2005 - 9:02 am UTC

it merges them and removes leaf blocks from under the branches, allowing those (now empty) leaves to be put on the freelist, to be reused wherever the index next needs a block....

So, it lets leaf blocks from the left go over to the right.

Great service to Oracle community

May 10, 2005 - 2:13 am UTC

Reviewer: A Tom's fan for his dedication to Oracle community from Dallas

Tom: You are doing a great service to the Oracle community. Any question on Oracle, we are all confident that "ASK TOM" will answer. The discussion on index rebuilding has cleared up a lot of myths. I was also thinking that indexes do not re-use space. I have a follow-up question on this topic. I have 200 tables in one of the DBs, and all of them have Primary Keys or Unique Keys populated from a Sequence by the application. After the Archiver is run, most of the old data is deleted from these tables. Under these circumstances, do I need to do anything to maintain the indexes and their used space?

Tom Kyte

Followup  

May 10, 2005 - 8:13 am UTC

they may be the classic "sweeper" index that could benefit from a coalesce or rebuild. (see examples above)

Index rebuilding

May 10, 2005 - 2:15 am UTC

Reviewer: A Tom's fan for his dedication to Oracle community from Dallas

By the way the oracle version I have is 817416

Index rebuild/coalesce

May 10, 2005 - 9:48 am UTC

Reviewer: Oracle DBA from Dallas

Tom: Thanks for the quick response, as always. 1) Do you recommend coalesce over rebuilding the index? Does coalesce require downtime? What is the best way of doing an index coalesce?
2) I am forced to move a huge index from one tablespace to another at a client location. The decision is not in my hands. Do you recommend rebuilding the index, or dropping the original one and creating a new one? Disk space is not a problem.

Tom Kyte

Followup  

May 10, 2005 - 10:42 am UTC

coalesce is an online operation (a feature of Enterprise Edition). See the ALTER INDEX command (there is only one way to do it).

it works by taking adjacent leaves and putting them together if possible -- freeing leaf blocks to be placed on the freelist of the index. It does not require 2x the space.

an index rebuild can be done online (EE) and requires 2 times the space (or more, or less -- it depends). It requires an exclusive lock on the index before and after the rebuild (so on a busy system, be very very careful) and will have overhead associated with it during the rebuild, as it must remember what has changed in the table too.

2) that is a rebuild case for sure.
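The commands themselves, for reference (the index and tablespace names are illustrative; the edition and locking caveats are as described above):

```sql
alter index my_idx coalesce;         -- online; merges adjacent leaf blocks,
                                     -- needs no extra space
alter index my_idx rebuild online;   -- online (EE); needs roughly 2x space
                                     -- and brief locks at start and end
alter index my_idx rebuild
  tablespace new_ts;                 -- a rebuild can also move the index,
                                     -- as in case 2
```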

rebuild index

May 11, 2005 - 8:24 pm UTC

Reviewer: oracle dba from Dallas

Thanks Tom for detailed info.

Index stagnation

May 18, 2005 - 12:14 pm UTC

Reviewer: Christof Breker from Paderborn, Germany

In the URL http://download-west.oracle.com/docs/cd/B10501_01/em.920/a96675/dba.htm#1009471 the OEM Event Test Reference Manual states:
"When an indexed value is updated in the table, the old value is deleted from the index and the new value is inserted into a separate part of the index. The space released by the old value can never be used again. As indexed values are updated or deleted, the amount of unusable space within the index increases, a condition called index stagnation. Because a stagnated index contains a mixture of data and empty areas, scans of the index will be less efficient."

Would this be a situation when an index should be rebuilt as the manual suggests?

Regards,
C. Breker



Tom Kyte

Followup  

May 18, 2005 - 12:53 pm UTC

I have filed a documentation bug as that statement could not be any further from reality.

Index space *is* reused, constantly. That statement is totally false.

Reason for regular rebuild?

May 24, 2005 - 3:26 am UTC

Reviewer: Michael from Germany

Hi Tom!

We had the following issue: same query, same plan, same result, very different performance with the index:

SQL> set autotrace on
SQL> set timing on
SQL> select count(*)
2 from yb_unit_sort_bin_result
3 where test_run_sk = 1100000;

COUNT(*)
----------
0

Elapsed: 00:00:00.05 <----- Fast, that is what I expected

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=5)
1 0 SORT (AGGREGATE)
2 1 INDEX (RANGE SCAN) OF 'YUBR_YTRS_FK_I' (NON-UNIQUE) (Cost=31 Card=7247 Bytes=36235)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
4 consistent gets
2 physical reads
0 redo size
188 bytes sent via SQL*Net to client
236 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

Now another query which is very slow; it differs only in the filter value for test_run_sk.

SQL> select count(*)
2 from yb_unit_sort_bin_result
3 where test_run_sk = 800000;

COUNT(*)
----------
0

Elapsed: 00:06:21.08 <--- Very, very slow!

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=5)
1 0 SORT (AGGREGATE)
2 1 INDEX (RANGE SCAN) OF 'YUBR_YTRS_FK_I' (NON-UNIQUE) (Cost=31 Card=7247 Bytes=36235)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
135656 consistent gets   <--- Wow, that's a lot!
126204 physical reads
0 redo size
199 bytes sent via SQL*Net to client
238 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

From this table, an archiver process (SELECT-then-DELETE-like behavior) from the 3rd party application deleted many, many rows (about 25% of all rows). (I know it's not a good idea to "archive" like that.)

Support told us to rebuild this index (and other indexes on this table) at regular intervals because the archiver also runs at regular intervals.

Is this a case where a regular index rebuild is inevitable? 

Tom Kyte

Followup  

May 24, 2005 - 7:57 am UTC

looks like a sweeper -- see the example above -- delete most but not all of the left-hand side (old data) and you may well have lots of old values out there.

A coalesce may well solve the issue without necessitating a full rebuild. A coalesce is a lower-impact online operation than a rebuild online.

Suggestion from Oracle, Advance Product Services group.

May 24, 2005 - 8:12 am UTC

Reviewer: whizkid from APAC

We used to rebuild indexes on a weekly basis, and after learning from this site we stopped doing that and moved to a specific problem-solving approach. We recently had a database audit done by a consultant from APS, Oracle. During the audit, the topic of index rebuilding came up and he was saying we should periodically rebuild indexes. When asked why, he said it's because you need to re-use the free space when entries get deleted from the index. He's gone as far as putting it in his recommendations document... part of it is below...

Oracle Support Services
Recommendations
Database maintenance strategy
Currently the performance of the database is looking good, but a few modifications or changes to the database will give better/optimal performance. The suggestions/recommendations are given on that basis only.
Index Rebuild
Observation
Currently, index rebuilding is carried out as an ad hoc process.
Recommendation
Indexes should be rebuilt periodically. The following query can be used to check the need for index rebuilding:

Select height, del_lf_rows/lf_rows from index_stats;

This query should be run immediately after one particular index is analyzed. Any index which has del_lf_rows/lf_rows > 0.2 or height > 3 should be considered for rebuilding.

------

If support from Oracle itself spreads such myths, then customers will believe them. Shouldn't something be done about it??? And we pay a substantial amount for product support. What do you suggest?

P.S. I have their audit report with me in which this is mentioned. I can send it to you if you want to have a look at it yourself..

Tom Kyte

Followup  

May 24, 2005 - 9:39 am UTC

we can only defeat the myths one person at a time. When I see it in the documentation, I get it fixed. When I see it in writing, I point it out.



Myths

May 25, 2005 - 4:05 am UTC

Reviewer: Michael from Germany

You can find this recommendation in MetaLink, too:

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=108573.1 states:

"Then check INDEX_STATS view particularly columns LF_ROWS
and DEL_LF_ROWS which shows current number of entry slots in
leaf blocks. On the other hand LF_ROWS_LEN and DEL_LF_ROWS_LEN shows
the total number of bytes associated with these entries.

As a rule of thumb, when the number of, or space used by, deleted entries is greater than 15 - 20% of the total entries, you should consider rebuilding the index."

Tom Kyte

Followup  

May 25, 2005 - 7:51 am UTC

well, you know what Rule of thumb is abbreviated as ;)

It really depends on the type of data you are indexing. If this index is on a field that is regularly updated -- I wouldn't be worrying. As you update "A" to "M", you'll get a deleted leaf row in the "A" section of the index and either create a new leaf row OR reuse an existing deleted leaf row in the "M"'s. As the index entries move around -- you might at any point in time have 40% of the index "deleted leaf rows" -- but that is OK (many indexes run at 40% 'empty')

And the advice won't find many indexes that are in need of it -- the sweeper above, for example, won't have a high deleted leaf row count, so it would never be found. But it would be a candidate.
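For completeness, the rule of thumb boils down to something like this (index name illustrative) -- with the big caveat, per the discussion above, that a high deleted-row ratio is often perfectly healthy, and the sweeper case it should catch, it misses:

```sql
-- VALIDATE STRUCTURE locks the table while it runs; INDEX_STATS holds
-- only the most recently validated index, per session
analyze index my_idx validate structure;

select name, height, lf_rows, del_lf_rows,
       round(100 * del_lf_rows / nullif(lf_rows, 0), 1) pct_deleted
  from index_stats;
```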

documentation bug?

May 25, 2005 - 11:34 am UTC

Reviewer: SDiZ from Hong Kong

Is it possible to file a documentation bug on this? We shouldn't let a myth like this spread....

I can understand why people rebuild an index when they experience a performance problem... but recommending that people rebuild indexes even if they don't have any problem is really evil...



Tom Kyte

Followup  

May 25, 2005 - 2:11 pm UTC

the email has already been sent, the wheels are in motion.

Meaning of "balance"

June 09, 2005 - 7:50 pm UTC

Reviewer: Brandon Allen from Phoenix, AZ

I've been reading a lot on this subject of index balance lately -- I read all of Richard Foote's excellent paper and much of the others -- and it's occurred to me that much of this argument may be due to the ambiguity of the word "balance".

Tom and many others describe balance as the number of levels between the root and the leaves, which is of course always consistent throughout the index so, by this meaning, an index is most certainly always balanced.

However, if you define balance as the density throughout the leaf blocks, then by this definition an index can certainly become unbalanced. Tom gives a good example of the "sweeper" with the monotonically increasing values on the right side and sparse values remaining on the left. Imagine the dense, pct_used=90 blocks on the right side of a scale and then the sparse, pct_used=5 blocks on the left side and you have an index that is indeed out of balance.


Tom Kyte

Followup  

June 09, 2005 - 10:28 pm UTC

I agree with your premise but:

Well, "balance" in the context of a B*Tree index is not ambiguous.

But, if by "out of balance", someone means "this phenomenon", with no further explanation....


if you google
"out of balance" oracle index

the first hit is:
http://www.devx.com/gethelpon/10MinuteSolution/16596
but there are SOOO many things wrong in that article, it is hard to know even where to begin or what the author meant by "out of balance"

the second is:
http://dba-oracle.com/oracle_news/2004_9_20a_jenkins.htm
but that is a circular reference in a way.

but I suppose it goes back to "saying something in an understandable way". "Out of balance" is not ambiguous (in the context of a B*Tree, which is definitely self-balancing).

It is not "Tom and many others"; it is, in fact, the definition of the structure itself.

This sweeper is one of the very, very few cases where a B*Tree index benefits from a coalesce or rebuild -- but even then you must:

a) hypothesize that this index needs to be rebuilt (say WHY you are going to do it)
b) measure something (take metrics) to see what the index does today (v$sql, executions -- other views to get the I/Os performed while scanning this index)
c) rebuild
d) re-measure, to see if you achieved your goal and to measure how often you might need to do this.

It takes like 10 minutes extra. And it is worth it.


And remember, it depends on how you use the index. Take that 5 and 90. If you do keyed reads -- guess what? It doesn't matter.

If you do "find me the minimum value" -- it may or may not matter.

If you do "start with this key and get the next 5 rows" -- it probably doesn't matter.

It is only if it matters... that we care.

108573.1

June 10, 2005 - 3:13 pm UTC

Reviewer: Tamil from ATLANTA

Hmmm,
Tried to access 108573.1 doc.
And what I got is:

MetaLink V2 - Error Message
Note entry 108573.1.cannot be displayed. Possible reasons are:
1. The article is not classified as publicly accessible ("non-public")
2. The article's content is being updated and it is temporarily unavailable but will be made available again soon.
3. The article has been deleted from this server.
4. The article number does not exist (it was referenced incorrectly).
5. If you still have questions about why you couldn't access this article, please use the Feedback button at the bottom of the left-hand menu bar.
--------------------------------------------------------------------------------

!!! out of balance ?

Tamil

Tom Kyte

Followup  

June 10, 2005 - 3:58 pm UTC

I had them remove that article due to its being "quite technically not correct"

As I hit misinformation -- I either get it corrected, or I try to get the author to correct it, or failing that -- you might see me comment on it here.

I do it for everyone, not to individuals, not to single organizations. Anyone with "bad information" gets a chance to correct it, retract it, or failing that -- have it commented on.

And I hope others do the same for me, I type too fast some days as well -- when you see something questionable or just plain wrong -- call me on it, please.

Meaning of "balance"

June 10, 2005 - 6:24 pm UTC

Reviewer: Brandon Allen from Phoenix, AZ

Agreed, Tom. I wasn't disputing any of the points you have made - I was just noting the observation that much of the confusion may be due to a misunderstanding of the meaning of "balance" as applied to B-Trees. In general both meanings are valid, but technically the one you use is the standard meaning as defined by Oracle so it is obviously the correct one. Interestingly, I just found this article from Jonathan Lewis making my exact same point a few months ago: http://www.dbazine.com/oracle/or-articles/jlewis13

Under the subtitle "Why is the Myth so Strong", he states: "Remember that the definition of the word "balanced" has a very strict meaning when we are discussing B-trees. There is, however, a completely different interpretation that could be used for this word." and then continues to present the same idea I had.

Thanks for your time!

Tom Kyte

Followup  

June 10, 2005 - 6:59 pm UTC

I totally agree, that is why sloppy writing, hyperbole, grandiose statements, things that the national enquirer could have written....

will draw my attention immediately.

You want to talk technical? Be precise, don't make up new meanings. Don't make people guess what you MIGHT have meant, maybe.

And if you talk in generalizations, forget it, game over player one, you lose ;)

So misguided and thankx for all the info

June 27, 2005 - 8:52 am UTC

Reviewer: Eswar

Hi Tom,

I was about to rebuild my indexes [yes, misguided by the dbazine doc] on my test machine. But I accidentally found this. Now I won't; better, I will do an analysis of the indexes and then think about it.

Thanks again..



Skeleton in Tom's closet :-)

June 28, 2005 - 6:59 pm UTC

Reviewer: Brandon Allen from Phoenix, AZ

Tom, I just found a response from you on a Metalink forum note from back in 2002 (DocID 258194.995), in which you refer the user to Metalink Note 77574.1 (among others), which in turn includes the following statement: "The more deleted entries exist on an index, the more unbalanced the index becomes." This statement is followed with advice to rebuild the index if del_lf_rows/lf_rows > 20%.

Thought you might want to get that Metalink note corrected :-)

Also, the same note also says "Leaving indexes with high PCT_DELETED [del_lf_rows/lf_rows] without rebuild might cause excessive redo allocation on some systems."

This doesn't make any sense to me - can you confirm and explain if there is any truth to it?

Thanks,
Brandon

Tom Kyte

Followup  

June 28, 2005 - 8:42 pm UTC

but I've *never* posted on metalink?

can you clarify? I pulled up that posting, I see a David Brinkman, a certain Mr. Burleson, and a Tom Villane.

where exactly do you see me referenced?

Tom vs Tom ?

June 29, 2005 - 2:19 am UTC

Reviewer: A reader

Brandon,

Seems a bit unfair to attribute comments to Tom Kyte when they were actually made by a different Tom, Tom Villane. I hope you don’t criticise Tom Kyte’s acting abilities simply because you didn’t enjoy 'War of the Worlds' :)


Tom vs Tom

June 29, 2005 - 6:01 am UTC

Reviewer: Mancman from UK

I thought it was Tom Cruise in WofW?

Tom Kyte

Followup  

June 29, 2005 - 9:03 am UTC

It is, the poster above you was pointing out that just because someone's first name is Tom, it doesn't mean it was me :)

On the referred to metalink thread, there was a Tom participating, but it was not me.

June 29, 2005 - 9:30 am UTC

Reviewer: dave

is note 77574.1 valid at all?

Tom Kyte

Followup  

June 29, 2005 - 9:46 am UTC

it is mostly benign.

with the 9i caveat - you can ignore the blevel. Although think about it. say you rebuild all height 4 indexes and you end up with.....

height 4 indexes. do you rebuild immediately? That advice is dodgy -- very circular.

I'll just keep saying it:

if you are going to rebuild
have a reason a goal for doing so
so you can collect some metrics, makes measurements
so you can verify you've achieved a stated goal
and can monitor this over time, to help determine if in fact you need
to keep doing this.


There are the sweeper indexes. They can benefit from a periodic rebuild (each has its own period, however). But those, those you should "know" - they are monotonic (left or right hand side) and get 'purged' but not completely purged in general.

You should *know* without even looking what indexes those are!

Many of my applications tend to not delete or to use partitioning to facilitate purging. I don't have sweepers by design.
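The "partitioning to facilitate purging" design might be sketched like this (hypothetical names; dropping a partition removes the old keys from a local index wholesale, so no deleted leaf rows ever accumulate):

```sql
create table audit_log (
    log_date  date,
    payload   varchar2(100)
)
partition by range (log_date) (
    partition p_2005_q1 values less than (to_date('01-apr-2005','dd-mon-yyyy')),
    partition p_2005_q2 values less than (to_date('01-jul-2005','dd-mon-yyyy'))
);

-- a LOCAL index is partitioned along with the table
create index audit_log_idx on audit_log (log_date) local;

-- purge a quarter: no DELETE, hence no sweeper index to rebuild
alter table audit_log drop partition p_2005_q1;
```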

I'm a big dummy

June 29, 2005 - 7:01 pm UTC

Reviewer: Brandon Allen from Phoenix, AZ

Sorry about that - I must've read too fast and mixed up my Toms. I should've known better than to think Tom Kyte would've recommended that article.

My apologies,
Brandon

ORA-08102

July 06, 2005 - 6:25 am UTC

Reviewer: a reader from kuwait

Dear tom , what can i do with this error ORA-08102 ??
i try read alot but reach nothing .!


Tom Kyte

Followup  

July 06, 2005 - 8:02 am UTC

contact support.

Index Rebuild

July 12, 2005 - 4:07 pm UTC

Reviewer: Rob H from Winnipeg MB

I am currently at a new position as a DBA and of course, there are several DBA's here with their rules as to how to manage databases. I am having issues with many of their rules (like the fact most of their databases are 9i and they extend tablespaces manually...). However, in a discussion I was made to feel like quite a newbie due to my stance against index rebuilding. They claim that index rebuilds can (and have) recovered several gigs of index tablespace. That is proof enough that periodic rebuilds can continue to maintain low index space usage (this is confusing to me and using your diet analogy it's like starving the patient). I have been reading a lot on this and the one fact that I'm confused with is this line:

"Index space *is* reused, constantly. That statement is totally false."

Is this in the Oracle Documentation somewhere? I have looked in the Concepts and DBA Guide and I cannot find any comment stating that space is re-used. The best I could find was "All blocks of the B-tree are three-quarters full on the average." and "Inserts, updates, and deletes are efficient, maintaining key order for fast retrieval."

Their claim is that when rows are inserted into the table, the space is added to the index. I agree. You remove those rows from the table and the table 'potentially' could free up the space for new rows, but the index will remain at its size. Continual addition and deletion to a table will "ever-increase" the index. Thus a mandatory rebuild is necessary to reclaim wasted space (is this what you refer to as white space?). I had agreed with this until I saw your post stating index space is reused.

Their rules for rebuild are:
1. report if index levels > 5
2. report if deleted leaf rows/total leaf rows > .10

Their entire argument is around the fact that space is recovered. They have proof. I am trying hard to understand all aspects. You agree that there can be times when a coalesce is beneficial (an example is large deletes, followed by large inserts, and repeat; this is an example of a transaction system keeping x records online, then archiving them to a data warehouse).

Any clarity on that statement?

Tom Kyte

Followup  

July 13, 2005 - 10:51 am UTC

... They claim that
index rebuilds can(and have) recovered several gigs of index tablespace.
...

for how many days, permanently? and how much work was the system doing in the days following the rebuild as these indexes set about getting fat again.....


Stating that index space is reused would be stating the "obvious". Do you have references where it says "we don't reuse it".


and if they rebuild an index with height >5 and it turns into a rebuilt index with height >5 what do they do then?

and the deleted leaf rows is funny, that one doesn't catch the type of index that might need a rebuild -- a sweeper index :) Look for "classic method for finding fragmented (whatever that means) indexes" on this page.


They have proof that space is recovered, for a couple of minutes, and then -- what happens??

tell them to do this, measure the index growth daily after their next rebuild and see if some of the indexes don't just "peak out" and stop "growing" (they attain their weight, their degree of breadth, and stay there). And then do it again next month.

Tell them "well, we don't seem to permanently reclaim squat and these indexes just grow back to what they were and THAT my friends takes work, to have an index grow"
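One way to run that experiment is to snapshot index segment sizes daily and watch for the plateau (a sketch; index_size_history is a hypothetical table, and the insert would be scheduled via dbms_job or similar):

```sql
-- empty shell with the right shape
create table index_size_history as
select owner, segment_name, bytes, sysdate snap_date
  from dba_segments
 where segment_type = 'INDEX' and 1 = 0;

-- run once a day after the next rebuild
insert into index_size_history
select owner, segment_name, bytes, sysdate
  from dba_segments
 where segment_type = 'INDEX';
commit;

-- after a few weeks: which indexes grew back and then flattened out?
select segment_name, min(bytes) low_bytes, max(bytes) high_bytes
  from index_size_history
 group by segment_name
 order by max(bytes) - min(bytes) desc;
```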

Avoid that TOP 'Expert'

July 25, 2005 - 6:17 am UTC

Reviewer: Bill from Brussels, Belgium

Amused at the comments much further up the thread regarding one of the "world’s top Oracle Database experts".

That said: Tom is not exactly shy and unforthcoming when he puts himself "among the most knowledgeable Oracle professionals in the world". Still I guess it's all in pursuit of maxing the sales of his book.

Tom is indeed extremely knowledgeable and I have learnt so much from reading his stuff, but sometimes I have to think is this guy real. In the section on LMTs in his book : "I'm not kept awake at night wondering if a PCTINCREASE is going crazy on me or if my tablespace will be fragmented like a Swiss cheese".

I mean for heaven's sake. There are thousands starving to death in Niger and bombs going off in London and Egypt. However, since the advents of LMTs, Tom at least now gets a peaceful night's slumber, not fretting about his PCTINCREASEs. Come on!

I do enjoy much of the Oak Table's stuff and they really are among the finest sources of Oracle info. But sometimes they do come across as a mutual backslapping, ego-stroking society. Hey guys aren't we good?

What I want to know is how come there are apparently no female members bringing a bottle of whisky to the Oak Table. And considering the amount of whisky being drunk and barfing going on, no wonder 'Brushco' nearly went down the toilet so to speak :-)


Tom Kyte

Followup  

July 25, 2005 - 8:15 am UTC

so, mundane work related tasks never bother your sleep at night? Until August 11th, I can assure you I'll probably wake up at least once or twice and worry if I'm not making my deadline for volume II of the book.

And, since LMT's came out -- I'm not worrying about pctincrease, how about you?

I didn't get that last comment about the ego-stroking thing, seriously. Where do you see that? for me, it is an email list, a place to ask questions and discuss. Outside of that - what sort of backslapping presence does it have?

and there are female members. (no idea what brushco is...)

The band of merry men.

July 25, 2005 - 11:44 am UTC

Reviewer: A reader

Thank you kindly for your reply and good luck with the second volume. I look forward to reading it when it becomes available.

I can't say I really ever wake up in a cold sweat and palpitations fretting about some object setting in a database. Sometimes I wake up in the middle of the night with what I think is a solution to a problem, but worrying? No. I believe in doing my work as conscientiously as I can and if a problem does arise solving it in a calm and methodical manner. But worrying over an Oracle database? I really don't know what that achieves. And relative to all this terrible stuff going on in the world at the moment, my day to day DBA problems are totally insignificant.

Brushco is the 'mythical' company in 'Tales of the Oak Table' that Mogens Nørgaard and his merry men save from calamity, while it seems drinking copious amounts of whisky and patting each other on the back gathered round the Oak Table at his home in Denmark.

It seems to be all boys quaffing back the hard stuff while Mogens' missus appears to do nothing but run around after them all the time!

self proclaimed guru.

August 03, 2005 - 10:54 am UTC

Reviewer: Sandy from London, UK

After reading through this thread, I have realised that
I've been really unlucky to have bought 2 books written by a self-proclaimed Oracle guru whose second name starts with a 'B'... I never understood the books... one was about STATSPACK and is the worst technical book I've ever read! The first thing I plan to do when I get back home is to dump those books in a skip... or maybe first use a shredder, lest some unlucky bin-man finds them!
Sorry to rant on this thread - just trying to calm down :-))

Clustering factor is very high for the table.

August 23, 2005 - 2:01 am UTC

Reviewer: Neeraj Ranjan Rath ,Hyderabad from India

Hi Tom,
I am working in a well-known Telecom Billing system.
For a particular table I found the following clustering factor details:
BLOCKS CLUSTERING_FACTOR
---------- -----------------
196205 6884676
196205 747475
196205 959798
196205 1756130
196205 1277388
196205 115067
196205 1575241
The clustering factor seems to be very high. The query which is using the particular table is using an index scan.
Do we need to reduce this clustering factor?
If yes, please let me know how I can do that.
I have analyzed the table for all indices. But the clustering factor seems to be increasing.
Please advise me!

Tom Kyte

Followup  

August 24, 2005 - 3:47 am UTC

do you have access to Expert one on one Oracle?

You need to understand what a clustering factor is (a measure of how sorted the table is with regards to the index keys).

You have to understand that a table can be "sorted" one way on disk (so in general, a single index will typically have a chance at a good clustering factor and the rest -- well, won't).


Are you doing a LARGE very LARGE index range scan using a single one of these indexes frequently?
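The effect is easy to demonstrate in a scratch schema (a sketch, assuming 10g for the CONNECT BY row generator): the same data loaded sorted versus shuffled yields wildly different clustering factors for the same index definition.

```sql
-- loaded in key order: clustering_factor will be near the table's block count
create table t_sorted as
select level id, rpad('x', 80) pad from dual connect by level <= 100000;
create index t_sorted_idx on t_sorted (id);

-- identical data, shuffled: clustering_factor will be near the row count
create table t_random as
select id, pad from t_sorted order by dbms_random.value;
create index t_random_idx on t_random (id);

exec dbms_stats.gather_table_stats(user, 'T_SORTED', cascade => true);
exec dbms_stats.gather_table_stats(user, 'T_RANDOM', cascade => true);

select index_name, clustering_factor
  from user_indexes
 where index_name in ('T_SORTED_IDX', 'T_RANDOM_IDX');
```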

how to show that my index rebuilding is going on?

January 09, 2006 - 4:30 pm UTC

Reviewer: Matte from Austin,Tx

Hi, Tom
This is something different for a DBA.
Here is a simple procedure to rebuild indexes.
In my environment I send this script to the end users to rebuild their indexes themselves. My environment is just like a distributed database system where the databases are located at the client locations. Because of the time taken to rebuild the indexes, the end user is sometimes confused whether the system hanged or it is still rebuilding indexes.

My requirement is:
I would like to print some "." while rebuilding indexes.
For example, if an index takes 3 minutes to rebuild, I would like to display 6 dots at the prompt (one dot per 30 secs) while the rebuilding is taking place, so that the end user is not confused whether it just hanged or is continuing.

I am not getting any idea how to print the dots and rebuild the indexes in parallel.

Hope you understood my idea!!!!
Thank you in advance.
######################################################
create or replace procedure rebuild_all_indexes
as
    v_indx_name  user_indexes.index_name%type;
    cursor index_cur is
        select index_name from user_indexes where index_type = 'NORMAL';
begin
    open index_cur;
    fetch index_cur into v_indx_name;
    loop
        exit when index_cur%notfound;
        dbms_output.put_line('Rebuilding index ' || v_indx_name);
        execute immediate 'alter index ' || v_indx_name ||
                          ' rebuild online compute statistics';
        dbms_output.put_line('Completed rebuilding index ' || v_indx_name);
        fetch index_cur into v_indx_name;
    end loop;
    close index_cur;
    dbms_output.put_line('Rebuilding of indexes is completed');
end;
#########################################################
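Setting aside whether the rebuild should happen at all, dbms_output cannot show live progress: its buffer is only delivered to the client after the PL/SQL call returns, so all the dots would appear at once at the end. A common workaround (a sketch; v_done and v_total are hypothetical counters the procedure would maintain) is to publish progress for another session to poll:

```sql
-- inside the rebuild loop, after each index completes:
dbms_application_info.set_client_info(
    'rebuilt ' || v_done || ' of ' || v_total || ' indexes');

-- then, from a second session, check progress at any time:
select client_info
  from v$session
 where client_info like 'rebuilt%';
```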

Tom Kyte

Followup  

January 09, 2006 - 6:14 pm UTC

why are you rebuilding them in the first place.

I can fix this problem immediately - just make the first line of code look like this:

...
begin
RETURN; <<<=== new code
open index_cur;
....



January 09, 2006 - 11:05 pm UTC

Reviewer: A reader

Great response, as usual.

Thanks for the reader!!!

January 10, 2006 - 1:28 am UTC

Reviewer: Matte from austin,TX

Thanks for the answer (professional) Tom,

Here is something I would like to tell you (especially for the reader who commented on the answer).

>All environments will not be the same.

>End user expectations change every day and it is our responsibility to provide the best interactive service to my clients.

>It does not make sense to comment on every answer (?professional?)



Rob

April 18, 2006 - 10:36 pm UTC

Reviewer: A reader

Tom,
I have a sql that does an AND_EQUAL join of 2 indexes. Is there a way (like a hint) to force that not to happen? I just want to use one index??
Thanks

Tom Kyte

Followup  

April 19, 2006 - 8:28 am UTC

begs the question "why"


but sure, if you apply some sort of "dummy function" to the indexed column - that would typically preclude the use of an index on that column.

April 19, 2006 - 9:46 am UTC

Reviewer: A reader

the reason is just using one index the query runs faster.. but when the and_equal happens it runs a lot slower.. the table also has STATS gathered.. but it still goes for an AND_EQUAL which puzzles me... Can you please validate the below article..
http://people.aapt.net.au/roxsco/tuning/and%20equal.html
Thanks
Rob

Tom Kyte

Followup  

April 19, 2006 - 10:27 am UTC

show us the autotrace traceonly explains and the tkprofs here.

Summary

April 19, 2006 - 3:33 pm UTC

Reviewer: Daniel

Just trying to summarize all the comments, and see if I have understood it correctly:
----------------------------------------
Your argument:
1) Rebuilding an index may make SELECT queries faster, but usually it won't last long.

2) The structure of the B*Tree will get to an equilibrium point anyway, if your access pattern does not change.

3) INSERT/UPDATEs are going to be more costly during the above process.

4) Usually, people should fix their queries instead of rebuilding the index periodically.

5) In some rare cases, rebuilding the index is the right thing, do it. (e.g. archiving old data with partitioning?)
----------------------------------------
Their argument:
1) Rebuilding an index is not costly, and you can do it online.

2) Database becomes slow again? Just rebuild the index again!

3) Some queries run faster. Don't have to understand why. They just run faster.

4) Rebuilding the index is easy, fixing the queries is hard.
----------------------------------------



Tom Kyte

Followup  

April 19, 2006 - 5:24 pm UTC

1) did not say that...

2) in most all but the rare degenerate cases yes

3) if the index wants to be wide and you make it skinny, it'll take effort to make it wide again.

4) did not say that... I probably did say "people should fix their queries" <period>. Don't see any relation to rebuilding index necessarily.

5) "sweeper indexes" (date column populated with sysdate, id column populated by sequence) where you delete all/most of the old data over time and frequently ask for the "lowest value(s)" could benefit from rebuilds/coalesces IF you cannot permanently solve the problem via some physical structure that takes care of this (like partitioning)


1) the rebuild is just one step of the process...
2) see above
3)
4) but doesn't necessarily fix anything

When to rebuild bitmap indexes

April 25, 2006 - 8:34 pm UTC

Reviewer: Fat Geek from Auckland NZ

Today I had an issue when I was approached because a DW load had failed. The situation appeared obvious - the tablespace couldn't extend due to a company directive that datafiles be no larger than 2GB in size (I won't go into that). However, on closer observation I noticed that the bitmap indexes were huge. I rebuilt them and the tablespace is now only 65% full.

I'm aware that some loads are poorly written and can result in large bitmap indexes. Rather than waiting for them to fail, I'd like to be able to tell which bitmap indexes needed rebuilding. Looking at size alone doesn't work - some are genuinely large.

Is there a way of telling when a bitmap index is blown out and needs rebuilding?

Tom Kyte

Followup  

April 26, 2006 - 12:05 am UTC

just look at the loads - if they do slow by slow processing (row by row), the bitmaps will be huge, if they do large direct path loads, the bitmaps will be maintained efficiently.

You only have to look at the load methodology to decide if the bitmaps should be disabled, then table loaded and then rebuilt.
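That disable/load/rebuild pattern might look like this for a hypothetical fact table SALES with a bitmap index SALES_REGION_BIX (illustrative names):

```sql
-- stop row-by-row maintenance of the bitmap during the load
alter index sales_region_bix unusable;
alter session set skip_unusable_indexes = true;

-- direct-path load
insert /*+ APPEND */ into sales select * from sales_staging;
commit;

-- one efficient pass rebuilds a compact bitmap
alter index sales_region_bix rebuild nologging;
```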

Rebuilding indexes

April 26, 2006 - 5:47 pm UTC

Reviewer: Fat Geek from Auckland NZ

OK, I've done that and identified a couple of loads that need fixing; but there are some large bitmaps I've found that are over a year old and have loads that disable and rebuild the bitmaps, so I suspect that a predecessor started on the same process, fixed future loads but did not rebuild the historic blown indexes.

This does make it a space-reclaiming process rather than an improvement process, but I'd like to get them rebuilt as it makes sizing for the ongoing migration from Solaris/Sparc/8i to Linux/Intel/9i (and a subsequent upgrade to 10g) more accurate.

Is there a query that can be run that gives a hint that the index needs reviewing?

Tom Kyte

Followup  

April 27, 2006 - 2:52 pm UTC

not beyond looking for "really big bitmaps, that seem bigger than they should"

Rebuilding bitmap indexes

May 01, 2006 - 5:45 pm UTC

Reviewer: Fat Geek from Auckland NZ

I found two methods of ferreting out bitmap indexes that benefited from a rebuild. Neither is guaranteed 100% accurate (or even 80%) but both were useful, given the huge number of indexes in a data warehouse.

1. Bitmap index partitions that were larger than their matching table partitions (we have partition naming standards so I could run that comparison) benefited from a rebuild.

2. Bitmap indexes that were blown tended to have a high AVG_DATA_BLOCKS_PER_KEY.

These both were pretty reliable at finding blown indexes but didn't get all of them. However I found that they pointed me at areas of concern.
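For the record, the two checks described above could be expressed roughly like this (a sketch; it relies on the site's partition-naming convention, and the results are starting points to review, not rules):

```sql
-- 1. bitmap index partitions larger than the same-named table partitions
select i.index_name, s_i.partition_name,
       s_i.bytes as index_bytes, s_t.bytes as table_bytes
  from user_indexes  i
  join user_segments s_i on s_i.segment_name   = i.index_name
                        and s_i.segment_type   = 'INDEX PARTITION'
  join user_segments s_t on s_t.segment_name   = i.table_name
                        and s_t.segment_type   = 'TABLE PARTITION'
                        and s_t.partition_name = s_i.partition_name
 where i.index_type = 'BITMAP'
   and s_i.bytes > s_t.bytes;

-- 2. bitmap indexes with suspiciously many blocks per key
select index_name, avg_data_blocks_per_key
  from user_indexes
 where index_type = 'BITMAP'
 order by avg_data_blocks_per_key desc;
```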


create index

June 11, 2006 - 9:52 am UTC

Reviewer: mohannad from jordan

hi
i have a table named car_info with the following fields:
car_no, car_owner_no, car_model, car_value, ......
i usually use the following queries for this table:
1. select * from car_info where car_no = value
2. select * from owner, car_info where car_info.car_owner_no = owner.owner_no
and car_no = value
3. select * from car_info
where car_mode larger than values and CAR_MODEL larger than values
for the first query i create an index on the car_no field
and for the
second query i create three indexes on these fields: car_owner_no, owner_no.
so my question is what is the best approach to make the third query faster (what indexes should i create and why).



Tom Kyte

Followup  

June 11, 2006 - 11:57 am UTC

the only answer is "it depends"

but it seems strange to say "where car_model > :x" - car_model is a surrogate key, isn't it? why would you do that?


you would have to ask yourself "which will likely (stress LIKELY) return less? where car_mode > :x, or, where car_model > :x. Which ever one in general would likely return less - that dictates which I would pref to have on the leading edge of my index"

and the index would be on both columns. It is the order you need to determine.
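Concretely, the choice is between two orderings of the same concatenated index (a sketch; "car_mode" is spelled as in the question above):

```sql
-- if "car_mode > :x" is generally the more selective predicate:
create index car_info_range_idx on car_info (car_mode, car_model);

-- otherwise lead with car_model instead:
-- create index car_info_range_idx on car_info (car_model, car_mode);
```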

90-10 splits example from R. Foote

August 01, 2006 - 5:20 am UTC

Reviewer: yohann from france

Hi Tom,
Document from Richard Foote is very good.
But there is one thing I don't understand with "90-10 splits": what is the influence of the commit inside or outside of the insert loop?
with commit out of the loop: 
SQL> select lf_blks, pct_used from index_stats;
LF_BLKS PCT_USED
---------- ----------
19 94

with commit in the loop:
SQL> select lf_blks, pct_used from index_stats;
LF_BLKS PCT_USED
---------- ----------
36 51

could you explain please why committing after each insert makes a difference.
Thanks
 

Tom Kyte

Followup  

August 01, 2006 - 7:31 am UTC

I believe he was pointing out an odd "product issue" with 9i there. Not something that was supposed to happen.



ops$tkyte%ORA10GR2> create table t ( id number, value varchar2(10) );

Table created.

ops$tkyte%ORA10GR2> create index t_idx on t(id);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> begin
  2     for i in 1 .. 10000
  3     loop
  4        insert into t values (i,'xxxxx' );
  5        commit;
  6     end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> analyze index t_idx validate structure;

Index analyzed.

ops$tkyte%ORA10GR2> select lf_blks, pct_used from index_stats;

                %
   LF_BLKS   Used
---------- ------
        19   94.0
 

another question whith del_lf_rows

August 02, 2006 - 10:23 am UTC

Reviewer: yohann from france

Thank you very much. I tested it: it always happens in 9i but no longer in 10g.

another question:
I read so many web pages advising to rebuild an index when you reach a certain del_lf_rows/lf_rows ratio. I'm very confused because I think it's a bad rule.
I tried the approach of Richard Foote, showing that del_lf_rows is not a good metric for fragmentation:
SQL> begin
  2  for i in 1..1000
  3  loop
  4  insert into t values(i);
  5  end loop;
  6  end;
  7  /
SQL> analyze index t_idx validate structure;
Index analyzed.

SQL> select lf_rows, del_lf_rows from index_stats;
   LF_ROWS DEL_LF_ROWS
---------- -----------
      1000           0

SQL> delete from t where x<540;
539 rows deleted.

SQL> delete from t where x between 541 and 999;
459 rows deleted.

SQL> analyze index t_idx validate structure;
Index analyzed.

SQL> select lf_rows, del_lf_rows from index_stats;
   LF_ROWS DEL_LF_ROWS
---------- -----------
      1000         998

NOW IT'S TIME TO REBUILD , NO? BUT...
SQL> insert into t values(1);
1 row created.

SQL> analyze index t_idx validate structure;
Index analyzed.

SQL> select lf_rows, del_lf_rows from index_stats;
   LF_ROWS DEL_LF_ROWS
---------- -----------
       462         459

SQL> insert into t values(700);
1 row created.

SQL> analyze index t_idx validate structure;
Index analyzed.

SQL> select lf_rows, del_lf_rows from index_stats;
   LF_ROWS DEL_LF_ROWS
---------- -----------
         4           0

with the rule, it's no longer time to reorganize; however, I've only inserted 2 rows, and there are still 2 blocks almost empty, as the treedump shows:
branch: 0x100027c 16777852 (0: nrow: 2, level: 1)
   leaf: 0x100027e 16777854 (-1: nrow: 2 rrow: 2)
   leaf: 0x100027f 16777855 (0: nrow: 2 rrow: 2)


Tom, was it a fair example or something's wrong with my argument?

Thanks 

Tom Kyte

Followup  

August 02, 2006 - 12:08 pm UTC

... I'm very confused because i think it's a bad rule. ...

You are not confused, you are correct.


Not only will that rule flag indexes as being in need of a rebuild when they aren't, it won't catch those that are!

see my example above which includes:

ops$tkyte@ORA9IR2> select lf_rows, del_lf_rows from index_stats;
 
   LF_ROWS DEL_LF_ROWS
---------- -----------
      6505         194

(that index was in need of a coalesce/rebuild) 

Rebuild Index Performance

January 21, 2007 - 3:54 pm UTC

Reviewer: PRS from USA

Tom,
We rebuild the index on a very large weekly-partitioned table. The table holds market data. We receive 50 million rows per day and the partitions are weekly.
We now have only 4 days of data and it takes 2 hrs to rebuild the index. We are on Oracle 9i (9.2.0.7 - 64 bit) and we have an E2900 server (24 CPUs, 96GB RAM).
Is there any way to improve the performance from 2 hrs to less time??? Any pointers are appreciated. Are there INIT.ORA parameters we need to evaluate?
Please let me know if you need any other information related to the database.

Thanks,
PRS

When the index has been rebuilt successfully why am i getting the following error.

February 15, 2007 - 9:33 am UTC

Reviewer: A reader from VIRGINIA,USA

09:21:00 sonardba@SONARP.CLT.NASD.COM> set serveroutput on
09:21:08 sonardba@SONARP.CLT.NASD.COM> set echo on
09:22:00 sonardba@SONARP.CLT.NASD.COM> ALTER TABLE NEWS_STORY MODIFY PARTITION NEWS_STORY_P200612 REBUILD UNUSABLE LOCAL INDEXES;

Table altered.

Elapsed: 00:00:00.00
09:24:29 sonardba@SONARP.CLT.NASD.COM> alter index idx_news_story rebuild partition newS_story_p200612;

Index altered.

Elapsed: 00:00:04.03
09:24:50 sonardba@SONARP.CLT.NASD.COM> SELECT COUNT(*) FROM SONARDBA.NEWS_STORY PARTITION (NEWS_STORY_P200612) ;

COUNT(*)
----------
259588

Elapsed: 00:00:03.05
09:25:44 sonardba@SONARP.CLT.NASD.COM> SELECT COUNT(*) FROM SONARDBA.NEWS_STORY PARTITION (NEWS_STORY_P200612) WHERE contains (STORY_body_bk ,'removed')>0
09:25:51 2 /
SELECT COUNT(*) FROM SONARDBA.NEWS_STORY PARTITION (NEWS_STORY_P200612) WHERE contains (STORY_body_bk ,'removed')>0
*
ERROR at line 1:
ORA-29954: domain index partition is marked LOADING/FAILED/UNUSABLE


Elapsed: 00:00:00.02
09:25:53 sonardba@SONARP.CLT.NASD.COM> spool off;

Tom Kyte

Followup  

February 16, 2007 - 10:42 am UTC

so, what is the status of that index

status of the index is unusable

February 16, 2007 - 5:15 pm UTC

Reviewer: A reader from virginia usa

I believe if rebuilding of the index is successful then
the index status should be valid/usable; am I interpreting something wrongly?


Thanks Tom!


Tom Kyte

Followup  

February 18, 2007 - 8:57 am UTC

if you give me an entire example from start to finish that I can run on my machine, I'll be glad to explain to you what happened. As it is, we just have a snippet of a bit of stuff and no details as how to arrive there.

index reorg on high db file sequential read?

March 01, 2007 - 5:26 pm UTC

Reviewer: ignorant

Hi Tom,
I have been reading your invaluable website very carefully but it looks like I might have missed something.
I am trying to identify cases where you would reindex/coalesce indexes. I do not wish to reindex a table unless it has a strong case because it can't be done online (normally anyway).
In that vein, I am looking at this statspack report of a small (1TB) 9i production database -
period = 4pm-8pm
Sessions approx 142.
Load Profile
~~~~~~~~~~~~
Per Second Per Transaction
--------------- ---------------
Logical reads: 31,947.21 5,352.37
Block changes: 201.97 33.84
Physical reads: 590.61 98.95
Physical writes: 17.30 2.90
Sorts: 33.55 5.62
Transactions: 5.97

We are very efficient with our SQL and buffer. Almost everything is found in cache -
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 98.16 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 98.96
Execute to Parse %: 99.79 Latch Hit %: 99.94
Parse CPU to Parse Elapsd %: 82.13 % Non-Parse CPU: 99.87

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 73.30 78.03
% SQL with executions>1: 97.50 96.91
% Memory for SQL w/exec>1: 88.67 86.23

The sorts are also happening in memory so that is not an issue.
However, our top 5 events are:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
sbtwrite2 2,533,050 130,025 64.86
db file sequential read 8,016,019 47,703 23.80
CPU time 21,742 10.85
sbtinfo2 12 190 .09
log file parallel write 167,913 177 .09

sbtwrite2 is the backup to tape. That will take whatever it has to. The other wait here is "db file sequential read".
We have 8,016,019 waits in total, or 580 waits/sec!
This means that 98.5% of our physical reads have to wait.
This is very high.
To tune this, it seems to me that I should be looking at inefficient indexes that are being scanned repeatedly. I used a lot of methods to try to determine that - index size (rough indicator), indexes with most blocks in memory (another rough indicator), comparing clustering factor to blocks vs num_rows (query1 below), most frequently occurring index operations in V$sql_plan (again, rough idea, query2 below). None of these really show me if I can get anything back by coalescing or reindexing something. Also they all seem to give different results. I based a lot of this analysis on SAP note 771929 (need sap login). But after reading this it seems that I should not worry about indexes at all.
If worst comes to worst, and the clustering factor is closer to the number of rows than to the number of blocks, then I should rebuild the table, not the index. I do not know how much performance I would gain by that, and it would be an outage, so I have to have a strong case for it. So I would avoid it for now.
Other than that I can only watch statspack reports and look for SQL statements that stay up in buffer gets, etc. Here is the query I used to determine cf factor (query 1)-

select i.index_name, i.clustering_factor cf, t.table_name, t.blocks, t.num_rows,
       round(i.clustering_factor/t.blocks) cf_ratio, t.last_analyzed
  from dba_indexes i, dba_tables t
 where i.table_name = t.table_name
   and i.table_owner = t.owner   -- join on owner too, or same-named tables in other schemas cross-match
   and t.blocks != 0
   and i.clustering_factor/t.blocks > 100
 order by i.clustering_factor/t.blocks desc

Here is one candidate from the above query -
D010TAB~1 3035765 D010TAB 15340 4021925 198 28-JUL-06
(admittedly the stats are old for this table)

Here is the query I wrote (might be flawed) to see what the most frequently hit indexes in memory are (query2) -
select * from (
select object_name, count(1)
from v$sql_plan
where operation='INDEX' and (options like 'FULL%' or options like 'RANGE%')
group by object_name order by 2 desc)
where rownum < 20

So if not rebuilding/coalescing indexes, what should be the approach to reduce "db file sequential reads"? And if it is looking at coalescing/rebuilding indexes, which of the above approaches would be the proper one to use? Or is this database performing the best it can given the resources?

I wish I could write a test case for the above but I haven't narrowed down the problem enough yet to be able to do that. I guess that's what I really need help with.

Thank you,
ignorant
Tom Kyte

Followup  

March 02, 2007 - 1:50 pm UTC

and I say "so" to the

... We have 8,016,019 waits in total or 580 waits/sec! ..

you had 4 hours, you had lots of users.


100% of your reads should wait - all IO takes measurable time. 100% of the IO's waited. 100% of physical IO's take MEASURABLE TIME!


I'm not seeing anything horrific here. Say you had 100 users, say the waits were evenly distributed, they each had 5.8 seconds of wait time - in 4 hours. Pretty darn amazingly good don't you think.

io waits

March 02, 2007 - 2:36 pm UTC

Reviewer: ignorant

Thanks Tom, that was succinct and to the point -
"100% of your reads should wait - all IO takes measurable time. 100% of the IO's waited. 100% of physical IO's take MEASURABLE TIME!"
makes perfect sense when I think about it... duh!
So I guess the more important factor is how much wait did each user/session face. Your calculation shows 5.8 seconds of wait time in 4 hours. But I am not sure if I can follow the logic. Can you please elaborate a little bit?
Here is what I am thinking.
580 waits/second over 142 sessions comes out to about 4 waits/second for each session.
Going the other way, the total wait is 47,703 seconds. Per hour, that would be 47,703/4 = 11,925.75 seconds. Spread over 142 sessions, that is about 84 seconds. So that tells me that each session had to wait about 84 seconds every hour.
This is still not bad and clearly no case for index/table reorg/coalesce. But I would still appreciate if you could verify my reasoning here.
Thanks again, your insights are better than Oracle Concepts guide. More like concepts guide + thinking :)
Tom Kyte

Followup  

March 04, 2007 - 6:28 pm UTC

verify which reasoning?



Meta link Note:122008.1

March 02, 2007 - 5:46 pm UTC

Reviewer: Ted from PA

Tom,

Can you comment on the metalink Note:122008.1, "Script: Lists All Indexes that Benefit from a Rebuild"? Is this a script that should be used?
Tom Kyte

Followup  

March 04, 2007 - 6:43 pm UTC

the deleted entries - maybe, but - it will miss many indexes that would benefit from a coalesce/rebuild but don't have a high deleted leaf row count!!!!!

therein lies the problem, it will (for del lf rows) identify some that MIGHT benefit, but it'll miss the vast majority that would (just read this page for an example)

as for height > 4, what if you rebuild and the results are ..... the same...

and for bitmaps - well, no. You might bitmap an almost unique column, you might bitmap a column with 2 values. Then again, if you do single row operations against this table - you would of course never bitmap. One attribute (selectivity) is wholly insufficient to make this decision.


rebuild index

March 16, 2007 - 2:05 am UTC

Reviewer: Suraj Sharma from India

Tom,

We have databases on which STATSPACK is installed; we collect data for about a month and delete anything older than a month. The problem is that these databases always have space issues with the default tablespace of the PERFSTAT schema (TOOLS). Whenever we get such a space issue, our colleagues ask us to rebuild the indexes, and it does free up some space.

I am personally not in favor of this and never recommend rebuilding indexes every week or so. In this case a statspack snapshot is collected every 15 minutes.

Please let me know if rebuilding indexes every week or so is the right approach.

Thanks,
Suraj
Tom Kyte

Followup  

March 17, 2007 - 2:32 pm UTC

do the indexes

a) stay small
b) grow back to big in a week


because if you say "b" then I'd ask

"what was the point, it is not like you could use that space for anything else anyway"

Sparse Indexes and Rebuild

May 08, 2007 - 11:04 am UTC

Reviewer: A reader

Hello Tom,
Thanks for giving the opportunity to ask on this topic.
We had an onsite consultant who ran some assessment reports on our existing Oracle 9i Release 2 database, a 24/7 OLTP application, and gave my manager and team a long list of recommendations. The report says this:

10.11 Sparse Indexes
When a table, especially the indexed columns, is heavily affected by DML operations, the index may become fragmented, i.e. the amount of unused space in the index blocks increases. If the index is accessed, Oracle has to retrieve more index blocks. In general an index should have fewer leaf blocks than the associated table.


Best Practices
- Indexes should be checked regularly to see if they have more leaf blocks than the table.
- Reorganize fragmented indexes.
- Rebuild indexes online. Rebuilding indexes online does not affect the base table. The following advantages are provided by alter index <name> rebuild online:
o DMLs are allowed on the base table
o Although slower compared with coalesce, coalesce cannot be done "online"
o The base table is only locked in shared mode (as opposed to exclusive mode for offline rebuilds)
o Intermediate changes are recorded in a temporary journal table during the index rebuild, which is used to update the new index at the end of the ONLINE processing.


Recommendations

- Reorganize those indexes where Extra_Blocks is high (potential space to reclaim) and the leaf block density is low (i.e., less than 50-60%).

10.12 Sparse Tables

Many delete or update operations on a table can, over time, produce a sparse table.
That means the high water mark (HWM) of the table represents a multiple of the data actually in the table. During full table scans Oracle reads all blocks up to the HWM. Those tables should be reorganized to improve performance and to use less disk space.


Best Practices
- Repair the tables that are commonly subjected to full table scans and have an excessive amount of wasted space. Resetting the HWM on these tables will result in shorter response times during full table scans and increased throughput for the system.
- A new package, DBMS_REDEFINITION, allows for online reorganization of tables (see Note 177407.1: How to Re-Organize a Table Online for more information and restrictions).
- Tables should be checked regularly to see if they are sparse.
- Reorganize sparse tables.

Recommendations

- Review the listed tables for reorganization.


Do you have any script to get a list of sparse indexes? I would like to see such a list, because the list he gave us had around 180 indexes out of the roughly 500 indexes in our application. I know you are against rebuilding indexes 99.999% of the time; what is your suggestion on this now?

Thanks
Tom Kyte

Followup  

May 10, 2007 - 8:38 pm UTC

we already had these exact same comments elsewhere.


I always stop reading at:

coalesce cannot be done "online"

why?

because coalesce in fact cannot be done OFFLINE - it is an online ONLY operation.



and you would think their report, if it were identifying issues, would have all queries you ever need to find these problems. I mean, they would not tell you about problems you don't actually have right?

Index Rebuild for transaction table?

July 23, 2008 - 10:40 am UTC

Reviewer: LIU GaoYuan from Singapore

Tom,

This morning one developer complained about a query performance:

select * from table_a where txn_timestamp<sysdate-93

The final result is zero rows selected after running for two minutes, and the query is using an index on txn_timestamp only.

I did an SQL trace and found that the query read about 700MB of the index (1300MB in total).

The developer later confirmed that they had deleted all the data before May 2008. So I did an index rebuild and the new index is about 250MB, and the same query finishes in 0.01 second.

My question is: if a transaction table has indexes with the transaction date as the first index column, and the transaction data is purged regularly by transaction date, should we "periodically" coalesce or rebuild the index? My understanding is that this would behave the same as the example of indexing on a sequence, and the empty blocks on the left-hand side would never be reused if the index is not coalesced or rebuilt?

Thanks
Tom Kyte

Followup  

July 24, 2008 - 10:25 am UTC

search this site for "sweeper index"


or, just read this page - we talked about it above.

July 24, 2008 - 3:24 am UTC

Reviewer: LIU GaoYuan from Singapore

Tom,

Re: Index Rebuild for transaction table

With reference to Richard Foote's article Oracle B-Tree Index Internals: Rebuilding The Truth, he comments that "Fully emptied blocks are placed on freelist and recycled (although remain in the index structure)".

My own test shows that
1) Oracle will still scan those fully emptied blocks, or "Empty Blocks Not Unlinked" in Richard's article.
2) The fully emptied blocks will be reused when new data is inserted.

To avoid performance surprises, a coalesce or rebuild would be helpful if there is a huge number of fully emptied blocks in the index.

Thanks
Tom Kyte

Followup  

July 24, 2008 - 10:53 am UTC

you are both correct. Yes, the fully emptied blocks are reused when new data is inserted - fully emptied blocks therefore are placed on the freelist and recycled.

only for this "sweeper" index where you DO the actual type of query that causes an issue - it is a rather rare case, and typically only happens on a date column (eg: primary keys populated by a sequence have the same issue in the case of a purge, but you typically never range scan on them like that)
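The "sweeper" pattern Tom describes can be sketched with a small demo. This is an invented illustration (the table, index, data volumes and cutoff are all hypothetical, not from the thread), assuming the usual shape of the problem: rows arrive on the right-hand edge of the index in timestamp order, old rows are purged from the left by date, and the problem query range scans from the far left.

```sql
-- Hypothetical sweeper-index demo; all names and volumes are invented.
create table txn_history
( txn_id        number,
  txn_timestamp date,
  payload       varchar2(100)
);

create index txn_hist_ts_idx on txn_history (txn_timestamp);

-- Rows arrive in timestamp order, always inserting on the right edge of the index.
insert into txn_history
select level, sysdate - 200 + level/3000, rpad('x', 100, 'x')
  from dual connect by level <= 500000;
commit;

-- Purge everything older than 93 days: the left-hand leaf blocks are emptied,
-- go on the freelist for reuse, but remain linked into the index structure.
delete from txn_history where txn_timestamp < sysdate - 93;
commit;

-- The "sweeper" query: the range scan starts at the far left of the index and
-- must walk through all of the emptied leaf blocks before reaching live data.
select count(*) from txn_history where txn_timestamp < sysdate - 93;

-- Coalesce (an online-only operation) compacts those empty leaf blocks:
alter index txn_hist_ts_idx coalesce;
```

After the coalesce, the same range scan touches only a handful of blocks, which matches the 0.01-second result the reviewer saw after his rebuild.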

Basic question on indexes --

September 05, 2008 - 5:56 pm UTC

Reviewer: Apurva from nyc

Hi Tom,

Hope you are doing great. I have a question to ask --

If I have a table with indexes defined on it, and I perform some insert/delete operations on the table (with index defined). Would the B Tree/Bitmap index(s) adjust itself to account for the freshly inserted/deleted records?

Would rebuilding the indexes after insert/delete operations ensure that the index is fresh again?

Or, would you recommend that we drop and recreate indexes after insert/delete operations (something we don't want to do)?

Thanks a lot for your advice
Tom Kyte

Followup  

September 08, 2008 - 8:34 am UTC

... Would the B Tree/Bitmap index(s)
adjust itself to account for the freshly inserted/deleted records?
...

yes.

... Would rebuilding the indexes after insert/delete operations ensure that the
index is fresh again?
...

do not even remotely think about considering this - NO you do not do that.

.. Or, would you recommend that we drop and recreate indexes after insert/delete
operations (something we don't want to do)?
....

NO, do not do that.

Thanks a lot Tom

September 08, 2008 - 10:31 am UTC

Reviewer: A reader


Space Saving

September 08, 2008 - 12:38 pm UTC

Reviewer: Kv from INDIA

Hi Tom,

I know for sure that B*tree indexes don't need a rebuild in their lifetime. However, we have an index based on a sequence number, and we have deleted millions of rows from the billion-row table.

This kind of index needs a rebuild so that the left-hand-side brown leaves just go away and the index becomes compact. This way we will also ensure that index range scans are faster.

Can you please tell me how to calculate the savings in space without actually rebuilding the index? Is that possible?

Thanks
Tom Kyte

Followup  

September 08, 2008 - 4:26 pm UTC

... I know for sure that the B*tree indexes don't need a rebuild in their life
time. ...

no one is saying that either. There are special cases where a coalesce or rebuild would be called for. Most times - no, rare special cases - yes.


... This kind of indexes need rebuild so that the Left hand side brown leaves just
goes away and the index becomes compact. this way we will also ensure that the
index rasnge scans are faster.
....

that is only true if you delete MOST but not ALL of the left hand (old values), if you purge all of the old values - we move the leaf blocks from the left to the right as needed.

and it only affects a very special range scan - one that starts from the very far left - most range scans would NOT be affected.



...Can you please tell me how to calculate the savings in space without actually
re-building the index. Is that possible.

...

how many rows per leaf block could fit in a newly built index (take some of your data and calculate that)

now, how many rows do you have?
how many leaf blocks do you have?

what percentage of rows to blocks do you have now versus what a 'fresh' one would?

there is your 'savings'
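Tom's arithmetic can also be put into a query. One hedged sketch uses Oracle's INDEX_STATS view; the index name here is hypothetical, and note that ANALYZE ... VALIDATE STRUCTURE locks the underlying table against DML while it runs, so try it on a quiet system or a test copy:

```sql
-- Populate INDEX_STATS with the current state of the index.
-- WARNING: locks the underlying table against DML while it runs.
analyze index my_seq_idx validate structure;

-- INDEX_STATS holds exactly one row: the results of the last validation.
select lf_rows,                                   -- leaf rows currently in the index
       del_lf_rows,                               -- of which are deleted entries
       lf_blks,                                   -- leaf blocks currently allocated
       pct_used,                                  -- how full the b-tree space is
       round(lf_blks * (1 - pct_used/100)) est_reclaimable_leaf_blks  -- rough estimate
  from index_stats;
```

Comparing LF_ROWS/LF_BLKS against the rows-per-leaf-block figure of a freshly built index on the same data gives the percentage Tom describes; a PCT_USED well below what a fresh index would show is the "savings".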

Index not used

September 15, 2008 - 1:53 pm UTC

Reviewer: Maverick

Tom, I have a small confusion going on with my table indexes:
I have a table 'X' and has a column fk_case_id [foreign key to another table]
I have an index built on this column
idx_caseid[fk_case_id]
I have another index using this and another column
idx_case_date[fk_case_id,case_date]

When I issue a query like this:
Select * from X where fk_case_id=1

my plan shows it is always doing a FFS on idx_case_date and not using idx_caseid at all.

Not sure why idx_caseid is not used. I checked by dropping idx_case_date, and now it's doing a full table scan [only 1% of the data has fk_case_id=1 in this table]

I checked index statistics and
idx_caseid has the following:
Leaf_blocks 1041
Distinct keys 812
Avg leaf blocks per key 1
Avg Data blocks per key 422

idx_case_date has the following:
Leaf_blocks 1555
Distinct keys 502,952
Avg leaf blocks per key 1
Avg Data blocks per key 1

[Not sure these statistics are of any relevance with my problem]..

Can you point me any reasons why Oracle wouldn't use idx_caseid but instead prefers to use idx_case_date?

Thanks


Tom Kyte

Followup  

September 16, 2008 - 10:19 pm UTC

no create tables
no look


not even a plan, no example of anything.

probably an implicit conversion - but you give us no clues.... :(
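For reference, the implicit-conversion effect Tom is guessing at looks like this. This is a hypothetical example, not the poster's actual schema:

```sql
-- Hypothetical demo: a numeric literal against a VARCHAR2 column disables the index.
create table t ( case_id varchar2(10), padding varchar2(80) );
create index t_case_id_idx on t (case_id);

insert into t
select to_char(level), rpad('x', 80, 'x')
  from dual connect by level <= 100000;
commit;
exec dbms_stats.gather_table_stats( user, 'T', cascade => true );

-- Oracle silently rewrites this predicate as TO_NUMBER(case_id) = 1,
-- which cannot use the plain index on case_id -> full table scan.
select * from t where case_id = 1;

-- With matching datatypes the optimizer can use a normal index range scan.
select * from t where case_id = '1';
```

The predicate section of dbms_xplan output shows the TO_NUMBER() wrapper when the conversion happens, which is one reason Tom asks to see the plan.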

Here is the missing information

September 18, 2008 - 9:06 am UTC

Reviewer: Maverick

Sorry for not giving table information. Here are more details


SET DEFINE OFF;
CREATE TABLE case_data
(
ID NUMBER(10),
FK_CASELOAD_ID NUMBER(10),
CASE_NUMBER NUMBER(10),
ASSIGNED_DATE DATE,
STAFF_ID NUMBER(10),
AGENCY_ID NUMBER(10),
TRANSFERRED_DATE DATE,
CONSTRAINT PK_CASELOAD
PRIMARY KEY
(ID),
CONSTRAINT FK_CASELOAD_CASE_LOADS
FOREIGN KEY (FK_CASELOAD_ID)
REFERENCES CASE_LOADS (ID),
CONSTRAINT FK_CASELOAD_REASON
)
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;



CREATE INDEX IDX_CASELOAD_CN_TRDATE ON case_data
(CASE_NUMBER, TRANSFERRED_DATE, FK_CASELOAD_ID)
LOGGING
NOPARALLEL;


CREATE INDEX IDX_CASELOAD_CASELOADID ON case_data
(FK_CASELOAD_ID)
LOGGING
NOPARALLEL;

select count(*) from case_data;

count(*)
--------
502954



set autotrace on;

select * from case_data
where fk_caseload_id=5;


1042 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=587 Card=1 K Bytes=41 K)
1 0 TABLE ACCESS FULL TEST.CASE_DATA (Cost=587 Card=1 K Bytes=41 K)

Statistics
----------------------------------------------------------
0 recursive calls
0 physical write total multi block requests
0 gcs messages sent
0 db block gets from cache
0 redo entries
0 java session heap collected bytes
1 cursor authentications
0 queries parallelized
0 Parallel operations not downgraded
0 Parallel operations downgraded to serial
1042 rows processed


I do not understand: out of 500K rows, I'm getting only 1,042 rows and still it's not using the index.


So, I forced it with a hint:

set autotrace on;

select /*+ index(idx_caseload_caseloadid) */ * from case_data
where fk_caseload_id=5;

Still I get the same exact plan.

Now I am selecting just one column from the table:

set autotrace on;

select case_number from case_data
where fk_caseload_id=5;


This is the plan I get:

1042 rows selected.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer Mode=ALL_ROWS (Cost=359 Card=1 K Bytes=11 K)
1 0 INDEX FAST FULL SCAN CMS.IDX_CASELOAD_CN_TRDATE (Cost=359 Card=1 K Bytes=11 K)

Statistics
----------------------------------------------------------
1 recursive calls
0 physical write total multi block requests
0 gcs messages sent
0 db block gets from cache
0 redo entries
0 java session heap collected bytes
0 cursor authentications
0 queries parallelized
0 Parallel operations not downgraded
0 Parallel operations downgraded to serial
1042 rows processed

It is somehow using a different index instead of the one I am expecting: idx_caseload_caseloadid.

Where is the problem, Tom?
Tom Kyte

Followup  

September 18, 2008 - 11:23 am UTC

what version and what are you using to run the query,

use dbms_xplan to display the output of the plan if you are using 9i - better information.

Those statistics - what is up with them - none of the useful ones are present. Just use sqlplus please.

Sorry again !!

September 18, 2008 - 4:51 pm UTC

Reviewer: Maverick from US

I keep missing important information.

Version: Oracle 10g. I was using the Toad tool to run these queries.
But here is the SQL*Plus session:

SQL> cl scr
SQL> explain plan for
  2  select * from case_data where fk_caseload_id=5;

Explained.

SQL> select * table(dbms_xplan.display);
select * table(dbms_xplan.display)
         *
ERROR at line 1:
ORA-00923: FROM keyword not found where expected 


SQL> c/table/from table
  1* select * from table(dbms_xplan.display)
SQL> /

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
Plan hash value: 779626952                                                      
                                                                                
--------------------------------------------------------------------------------
--                                                                              
                                                                                
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time    
 |                                                                              
                                                                                
--------------------------------------------------------------------------------
--                                                                              
                                                                                

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |              |  1313 | 42016 |   587   (4)| 00:00:08
 |                                                                              
                                                                                
|*  1 |  TABLE ACCESS FULL| CASE_DATA |  1313 | 42016 |   587   (4)| 00:00:08
 |                                                                              
                                                                                
--------------------------------------------------------------------------------
--                                                                              
                                                                                
                                                                                
Predicate Information (identified by operation id):                             

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
---------------------------------------------------                             
                                                                                
   1 - filter("FK_CASELOAD_ID"=5)                                               

13 rows selected.

SQL> select count(*) from case_data;

  COUNT(*)                                                                      
----------                                                                      
    502954                                                                      

SQL> explain plan for
  2  select case_number from case_data
  3  where fk_caseload_id=5;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
Plan hash value: 2061868224                                                     
                                                                                
--------------------------------------------------------------------------------
---------------                                                                 
                                                                                
| Id  | Operation            | Name                   | Rows  | Bytes | Cost (%C
PU)| Time     |                                                                 
                                                                                
--------------------------------------------------------------------------------
---------------                                                                 
                                                                                

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                        |  1313 | 11817 |   359   
(5)| 00:00:05 |                                                                 
                                                                                
|*  1 |  INDEX FAST FULL SCAN| IDX_CASELOAD_CN_TRDATE |  1313 | 11817 |   359   
(5)| 00:00:05 |                                                                 
                                                                                
--------------------------------------------------------------------------------
---------------                                                                 
                                                                                
                                                                                
Predicate Information (identified by operation id):                             

PLAN_TABLE_OUTPUT                                                               
--------------------------------------------------------------------------------
---------------------------------------------------                             
                                                                                
   1 - filter("FK_CASELOAD_ID"=5)                                               

13 rows selected.

SQL> spool off


Hope this helps [in helping me]. Please  let me know in case of any more information

Thanks,

Tom Kyte

Followup  

September 18, 2008 - 9:56 pm UTC

can you "prove" the indexes are as you say.

do this

create table t as select * from case_data;
create the indexes
do the query


I believe there is something in the way you are creating the index that does not relate to what you say you did.

so, show us the work from start to finish

??

September 19, 2008 - 8:34 am UTC

Reviewer: Maverick

Tom, Not sure I followed your question!!
I think that's what I did in my earlier post. If you look at it, I put in the create statements for the table and indexes. That shows how I am creating the indexes. I have provided the plan generated by autotrace and also by explain plan.

If you use those statements and recreate at your end, does it not show the way I am getting it?


Tom Kyte

Followup  

September 19, 2008 - 9:17 am UTC

all I asked was for you to step by step reproduce this.

Anyway, it is likely due to a poor clustering factor. If the case_data arrives randomly over time - so that each case data record you need is on a different database block - we would tend to (rightly so) avoid the index.

If the data were colocated, we would use the index. Consider, in the following - note how the case data arrives - none of the "5" values are on the same block the first time around:



ops$tkyte%ORA10GR2> create table case_loads( id number(10) primary key );

Table created.

ops$tkyte%ORA10GR2> insert into case_loads
  2  select level from dual connect by level <= 384
  3  /

384 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE TABLE case_data
  2  (
  3    ID                NUMBER(10),
  4    FK_CASELOAD_ID    NUMBER(10),
  5    CASE_NUMBER       NUMBER(10),
  6    ASSIGNED_DATE     DATE,
  7    STAFF_ID          NUMBER(10),
  8    AGENCY_ID         NUMBER(10),
  9    TRANSFERRED_DATE  DATE,
 10    CONSTRAINT PK_CASELOAD PRIMARY KEY (ID),
 11    CONSTRAINT FK_CASELOAD_CASE_LOADS
 12   FOREIGN KEY (FK_CASELOAD_ID)
 13   REFERENCES CASE_LOADS (ID)
 14  );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into case_data
  2  select level, mod(level,384)+1, level, sysdate, level, level, sysdate
  3    from dual connect by level <= 502954
  4  /

502954 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE INDEX IDX_CASELOAD_CASELOADID ON case_data (FK_CASELOAD_ID);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'CASE_DATA' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select a.num_rows, a.blocks, b.clustering_factor,
  2         (select count(distinct dbms_rowid.rowid_block_number(rowid)) from case_data where fk_caseload_id=5) cnt_blocks
  3    from user_tables a, user_indexes b
  4   where a.table_name = 'CASE_DATA'
  5     and b.index_name = 'IDX_CASELOAD_CASELOADID'
  6  /

  NUM_ROWS     BLOCKS CLUSTERING_FACTOR CNT_BLOCKS
---------- ---------- ----------------- ----------
    501422       3142            502954       1310

<b>the clustering factor is near the number of rows in the table - large range scans will not be efficient against this data, it will have to visit a single block per row retrieved - we can see that the "5" data is on 1310 separate blocks - we'll have to use single block IO 1310 times to retrieve this data</b>


ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select * from case_data where fk_caseload_id=5;

1310 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2422329801

-------------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |  1302 | 50778 |  1006   (5)| 00:00:05 |
|*  1 |  TABLE ACCESS FULL| CASE_DATA |  1302 | 50778 |  1006   (5)| 00:00:05 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("FK_CASELOAD_ID"=5)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       3236  consistent gets
          0  physical reads
          0  redo size
      48224  bytes sent via SQL*Net to client
       1341  bytes received via SQL*Net from client
         89  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1310  rows processed

ops$tkyte%ORA10GR2> select /*+ index( case_data IDX_CASELOAD_CASELOADID ) */ * from case_data where fk_caseload_id=5;

1310 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1867324150

-------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                         |  1302 | 50778 |  1315   (1)| 00:00:07 |
|   1 |  TABLE ACCESS BY INDEX ROWID| CASE_DATA               |  1302 | 50778 |  1315   (1)| 00:00:07 |
|*  2 |   INDEX RANGE SCAN          | IDX_CASELOAD_CASELOADID |  1306 |       |     5   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("FK_CASELOAD_ID"=5)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
       1403  consistent gets
          0  physical reads
          0  redo size
      48224  bytes sent via SQL*Net to client
       1341  bytes received via SQL*Net from client
         89  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1310  rows processed

ops$tkyte%ORA10GR2> set autotrace off

<b>the cost of the full scan using multi-block reads is less than the cost of tons of single block reads in this case, if the data were organized on disk differently:</b>

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t as select * from case_data;

Table created.

ops$tkyte%ORA10GR2> truncate table case_data;

Table truncated.

ops$tkyte%ORA10GR2> insert /*+ append */ into case_data select * from t order by fk_caseload_id;

502954 rows created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'CASE_DATA', cascade=> true );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select a.num_rows, a.blocks, b.clustering_factor,
  2         (select count(distinct dbms_rowid.rowid_block_number(rowid)) from case_data where fk_caseload_id=5) cnt_blocks
  3    from user_tables a, user_indexes b
  4   where a.table_name = 'CASE_DATA'
  5     and b.index_name = 'IDX_CASELOAD_CASELOADID'
  6  /

  NUM_ROWS     BLOCKS CLUSTERING_FACTOR CNT_BLOCKS
---------- ---------- ----------------- ----------
    504905       3142              3455          9

<b>now the clustering factor is near the number of blocks in the table, large range scans are efficient - we can see that all of our "5" data is on 9 blocks...</b>


ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select * from case_data where fk_caseload_id=5;

1310 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1867324150

-------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                         |  1391 | 54249 |    16   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| CASE_DATA               |  1391 | 54249 |    16   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IDX_CASELOAD_CASELOADID |  1391 |       |     6   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("FK_CASELOAD_ID"=5)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        192  consistent gets
          0  physical reads
          0  redo size
      48224  bytes sent via SQL*Net to client
       1341  bytes received via SQL*Net from client
         89  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1310  rows processed

<b>and the index is used naturally</b>


Physical organization counts - a lot. Not knowing your data, I'm totally guessing, but probably the case data arrives out of order, you are using a heap organized table, and the data is scattered all over the place...

What could you do? Maybe a b*tree cluster would be called for, to keep all of the related case data together. For example:

ops$tkyte%ORA10GR2> create table case_loads( id number(10) primary key );

Table created.

ops$tkyte%ORA10GR2> insert into case_loads
  2  select level from dual connect by level <= 384
  3  /

384 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create cluster case_data_cluster ( fk_caseload_id number(10) ) size 8k;

Cluster created.

ops$tkyte%ORA10GR2> create index case_data_cluster_idx on cluster case_data_cluster;

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE TABLE case_data
  2  (
  3    ID                NUMBER(10),
  4    FK_CASELOAD_ID    NUMBER(10),
  5    CASE_NUMBER       NUMBER(10),
  6    ASSIGNED_DATE     DATE,
  7    STAFF_ID          NUMBER(10),
  8    AGENCY_ID         NUMBER(10),
  9    TRANSFERRED_DATE  DATE,
 10    CONSTRAINT PK_CASELOAD PRIMARY KEY (ID),
 11    CONSTRAINT FK_CASELOAD_CASE_LOADS
 12   FOREIGN KEY (FK_CASELOAD_ID)
 13   REFERENCES CASE_LOADS (ID)
 14  )
 15  cluster case_data_cluster(fk_caseload_id);

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into case_data
  2  select level, mod(level,384)+1, level, sysdate, level, level, sysdate
  3    from dual connect by level <= 502954
  4  /

502954 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'CASE_DATA', cascade=>true );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>


ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select * from case_data where fk_caseload_id=5;

1310 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3427366092

----------------------------------------------------------------------------------------------
| Id  | Operation            | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |                       |  1303 | 50817 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS CLUSTER| CASE_DATA             |  1303 | 50817 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN  | CASE_DATA_CLUSTER_IDX |     1 |       |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("FK_CASELOAD_ID"=5)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
         97  consistent gets
          0  physical reads
          0  redo size
      48224  bytes sent via SQL*Net to client
       1341  bytes received via SQL*Net from client
         89  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
       1310  rows processed

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2>


In this case, even though we loaded the data "out of order", the b*tree cluster put it all back together - all of the "5"s are colocated.

Thanks a bunch!!

September 19, 2008 - 11:45 am UTC

Reviewer: Maverick

That was really useful. I'll give it a try and see if it changes the plan like you mentioned.

As far as data coming in random order: we are currently migrating our existing data into the new system, and depending on the problems we find, we are doing this multiple times.
We usually do an "insert /* append */" into this table from the source [which is in a staging area].
Do you think this migration is causing data to go into different blocks, and is creating the cluster the way to go for this?

Thanks a lot for all your help
Tom Kyte

Followup  

September 19, 2008 - 4:30 pm UTC

it would depend on how the data is ordered in the first place - select * from source - did source have the data arrive randomly or out of order?

creating a cluster is something to CONSIDER.

along with everything else - I'm sure you execute many other queries; if you organize the data by this foreign key, they might be affected (the old whack-a-mole syndrome)

organizing segments

September 22, 2008 - 4:18 am UTC

Reviewer: A reader

greetings thomas,

I have a table that has a lot of insertions and deletions. After some time of heavy insert and delete activity the system goes slow, so we move the table and rebuild its indexes and the system goes fast again. So I am planning to make a job that reorganizes this table every night or every week.

What do you think?

Thanks.
Tom Kyte

Followup  

September 22, 2008 - 5:09 pm UTC

I think I'd ask myself "what part of this rebuild actually fixed anything - what was slow, and what goes faster, perhaps I don't need to move the table at all, perhaps I don't need to rebuild n-1 index - perhaps there is just one thing that has a large impact and that is all I need to do"

In short, I'd want to understand what about that change causes this massive "good thing" to happen and why. Perhaps I can use a structure (cluster, IOT, partitioning, whatever) to avoid ever having to rebuild or reorganize again.
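Of the structures mentioned, an index organized table is the simplest sketch (names here are made up for illustration): the rows live in the primary key index itself, so rows sharing leading key values stay physically together without any periodic reorg.

```sql
-- hypothetical names; ORGANIZATION INDEX stores the rows in the
-- primary key B*tree, keeping them permanently sorted by (queue_id, seq)
create table work_queue
( queue_id  number,
  seq       number,
  payload   varchar2(100),
  constraint work_queue_pk primary key (queue_id, seq)
)
organization index;
```

Whether an IOT, cluster, or partitioning fits depends on the access pattern the analysis above uncovers.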


Index not used

September 22, 2008 - 9:01 am UTC

Reviewer: Maverick

Tom, I was doing some analysis before creating a cluster for my table CASE_DATA. I ran the query you provided to find how many blocks this data is spread across.

I did 'alter table CASE_DATA move' and found it went to more blocks instead of fewer.

I am not fully aware of this command and am just trying it after reading about it on the net.

Here is the cut paste from sql*plus:

SQL*Plus: Release 10.2.0.1.0 - Production on Mon Sep 22 08:25:45 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select a.num_rows, a.blocks, b.clustering_factor,
  2           (select count(distinct dbms_rowid.rowid_block_number(rowid)) from case_data where 
  3                        fk_caseload_id=5) cnt_blocks
  4       from user_tables a, user_indexes b
  5      where a.table_name = 'CASE_DATA'
  6        and b.index_name = 'IDX_CASELOAD_CASELOADID';

  NUM_ROWS     BLOCKS CLUSTERING_FACTOR CNT_BLOCKS
---------- ---------- ----------------- ----------
    502943       2574            343446        755

SQL> alter table case_data move;

Table altered.

SQL> select a.num_rows, a.blocks, b.clustering_factor,
  2           (select count(distinct dbms_rowid.rowid_block_number(rowid)) from case_data where 
  3                        fk_caseload_id=5) cnt_blocks
  4       from user_tables a, user_indexes b
  5      where a.table_name = 'CASE_DATA'
  6        and b.index_name = 'IDX_CASELOAD_CASELOADID';

  NUM_ROWS     BLOCKS CLUSTERING_FACTOR CNT_BLOCKS
---------- ---------- ----------------- ----------
    502943       2574            343446        768

SQL> 
SQL> 

Why does it show more blocks after MOVE? Shouldn't that be fewer blocks?

Tom Kyte

Followup  

September 23, 2008 - 12:34 pm UTC

you reintroduced the pctfree, you unmigrated rows, you fundamentally changed the data.

The count(dbms_rowid...) thing I did counts the head rowid pieces (it worked for me since I never updated the rows after insertion - I had no migrated rows). If you had migrated rows before, your original count could easily be smaller. Also, you reintroduced the pctfree of X% (whatever you had it set to) - so you could easily have moved rows around.

13 blocks is "noise" here, not really significant.

ops$tkyte%ORA10GR2> create table t ( x int, y varchar2(4000) );

Table created.

ops$tkyte%ORA10GR2> insert into t (x) select rownum from all_users;

58 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select count(distinct dbms_rowid.rowid_block_number(rowid) ) from t;

COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
---------------------------------------------------
                                                  1

ops$tkyte%ORA10GR2> alter table t move;

Table altered.

ops$tkyte%ORA10GR2> select count(distinct dbms_rowid.rowid_block_number(rowid) ) from t;

COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
---------------------------------------------------
                                                  1

ops$tkyte%ORA10GR2> update t set y = rpad('*',4000,'*');

58 rows updated.

<b> obviously, these do not fit on a  block anymore! </b>
ops$tkyte%ORA10GR2> select count(distinct dbms_rowid.rowid_block_number(rowid) ) from t;

COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
---------------------------------------------------
                                                  1

<b>we are only counting the head rowid piece, when we reorg the table, we "unmigrate" the rows as well</b>

ops$tkyte%ORA10GR2> alter table t move;

Table altered.

ops$tkyte%ORA10GR2> select count(distinct dbms_rowid.rowid_block_number(rowid) ) from t;

COUNT(DISTINCTDBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID))
---------------------------------------------------
                                                 58

<b>it takes 58 blocks to hold those rows, not just one, when Y is filled with data</b>


organizing segments

September 23, 2008 - 2:21 am UTC

Reviewer: A reader

greetings thomas,

and thanks like always.

--In short, I'd want to understand what about that change causes this massive "good thing" to happen and why

I do understand what changed: this table is like a work area for the whole system - everyone who wants something from the system goes to that table and issues an insert and a delete.

--Perhaps I can use a structure (cluster, IOT, partitioning, whatever) to avoid ever having to rebuild or reorganize again

I did partition the table, but it is the same - after the rebuild everything goes fast.
Tom Kyte

Followup  

September 24, 2008 - 3:22 pm UTC

... every one want something from the system would go to that table
and issue an insert and a delete.
...

so, that should not be so bad.

why do you not use a global temporary table? Using a permanent table for this is wrong. If you use a global temporary table, the sessions would not need to delete - the segments would be cleaned up upon commit or session end - and you would never need to touch this table again.
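A minimal sketch of that idea (the table and column names are made up for illustration) - with ON COMMIT DELETE ROWS the rows vanish automatically at commit, so no application-level delete and no reorganization is ever needed:

```sql
-- hypothetical names; each session sees only the rows it inserted
create global temporary table work_area
( id      number,
  payload varchar2(100)
)
on commit delete rows;  -- rows disappear at commit; use PRESERVE ROWS to keep them until session end
```

Because the segment is temporary and per-session, there is nothing left behind to fragment or rebuild.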

Index not used

September 23, 2008 - 4:44 pm UTC

Reviewer: Maverick

Tom,
But my scenario is different: we have migrated data, ran the query to find the number of blocks on that table, did "alter table move", and ran the query again. No DML or other DDL took place on that database [not just the table - the whole database is currently locked]. So if I did "alter table [tablename] move", shouldn't it reorg the data and reduce the number of blocks if they are badly fragmented?

I thought we use MOVE to reorganize, but it seems not [I don't have enough experience with it to even comment].
So, please give me your thoughts on this command - why it's used, and how it will help me reduce fragmentation of data.

Thanks for all your help
Tom Kyte

Followup  

September 24, 2008 - 6:42 pm UTC

did it increase the number of blocks in the table? You didn't show us that, you showed us that the data got more spread out - you don't have an order by on that alter table move (dbms_redefinition would let you do that), so the order of the data is "not deterministic" especially if you use ASSM (automatic segment space management)

You do use move to "reorg", it will compact the table, but you haven't shown it not compacting the table - you showed it possibly moving rows around - but the blocks would be "compact" (10% free by default, unless you changed the pctfree)

if the database is "locked" (whatever that means), how did the data get loaded in the first place... must not have been "locked" at some prior point in time.

alter table move is for

a) moving a table - from tablespace 1 to tablespace 2
b) removing "free space" that has been introduced due to "deletes" not followed by inserts

it does "reorganize" the table, it does "compact it". You have not shown it "uncompacting" the table at all.

I somewhat suspect that there were at least some modifications after you loaded and before you moved (move again - what happens then?)
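If the goal is to reorganize the table into a specific physical order, a bare "alter table move" will not do it - one offline approach is the CTAS-and-reload pattern Tom demonstrated earlier in this thread (table and column names are those from that example; dbms_redefinition is the online alternative):

```sql
-- offline reorg that preserves a desired physical order
create table case_data_tmp as select * from case_data;
truncate table case_data;
insert /*+ append */ into case_data
  select * from case_data_tmp order by fk_caseload_id;  -- rows land clustered by the key
commit;
drop table case_data_tmp;
-- re-gather statistics so the optimizer sees the new clustering factor
exec dbms_stats.gather_table_stats( user, 'CASE_DATA', cascade => true );
```

The table is unavailable for DML during this; for a table that must stay online, dbms_redefinition can perform an ordered redefinition instead.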

index internal pdf -new URL

October 26, 2008 - 3:23 am UTC

Reviewer: A reader

The index internals ppt - referred to in many posts above - has been moved to this URL; thought it would be useful to put it here.

http://bbs.wisky.org/doc/internal.docs/Richard%20Foote-Oracle%20Index%20Internal.pdf


Update global index

December 31, 2008 - 1:28 am UTC

Reviewer: Shivdeep Modi from Noida, India

If I am dropping multiple partitions of a partitioned table with global indexes, should I use UPDATE GLOBAL INDEXES with every ALTER TABLE DROP PARTITION command, or only with the last one? Is that more efficient than using it with all of the alter statements?

alter table tab1 drop partition part1;
alter table tab1 drop partition part2;
alter table tab1 drop partition part3 update global indexes;

Tom Kyte

Followup  

January 05, 2009 - 10:12 am UTC

it would be sort of pointless to do it on the 3rd drop.

ops$tkyte%ORA10GR2> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('01-jan-2005','dd-mon-yyyy')),
 10    PARTITION part2 VALUES LESS THAN (to_date('01-jan-2006','dd-mon-yyyy')),
 11    PARTITION part3 VALUES LESS THAN (to_date('01-jan-2007','dd-mon-yyyy')),
 12    PARTITION part4 VALUES LESS THAN (to_date('01-jan-2008','dd-mon-yyyy'))
 13  )
 14  /

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t
  2  select add_months( to_date( '31-dec-2004','dd-mon-yyyy'), mod(rownum,4)*12 ), user_id, username
  3    from all_users;

45 rows created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select 'part1', count(*) from t partition(part1) union all
  2  select 'part2', count(*) from t partition(part2) union all
  3  select 'part3', count(*) from t partition(part3) union all
  4  select 'part4', count(*) from t partition(part4);

'PART   COUNT(*)
----- ----------
part1         11
part2         12
part3         11
part4         11

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index t_idx on t(x) global;

Index created.

ops$tkyte%ORA10GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
T_IDX                          VALID

ops$tkyte%ORA10GR2> alter table t drop partition part1;

Table altered.

ops$tkyte%ORA10GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
T_IDX                          UNUSABLE

ops$tkyte%ORA10GR2> alter table t drop partition part2;

Table altered.

ops$tkyte%ORA10GR2> alter table t drop partition part3 update global indexes;

Table altered.

ops$tkyte%ORA10GR2> select index_name, status from user_indexes where table_name = 'T';

INDEX_NAME                     STATUS
------------------------------ --------
T_IDX                          UNUSABLE



it won't (can't) 'fix' the global indexes on the third drop.


Your index would need to be rebuilt.
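So, against the table T from the example above, the two workable patterns would look like this (a sketch; which is cheaper depends on how much data the drops remove versus the size of the index):

```sql
-- option 1: maintain the global index on every drop - drops are slower,
-- but T_IDX stays VALID throughout
alter table t drop partition part1 update global indexes;
alter table t drop partition part2 update global indexes;

-- option 2: drop quickly without maintenance, then rebuild the
-- now-UNUSABLE global index once at the end
alter table t drop partition part3;
alter index t_idx rebuild;
```

Using the clause only on the final drop, as shown in the example, combines the cost of both options with the benefit of neither.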

Rebuild Indexes - Analyze Index.. validate structure ?

January 04, 2009 - 6:50 am UTC

Reviewer: Harschil Kaparwan from UK

Hi Tom

Many thanks for enlightening me as always.
I am not going to ask when/why we should rebuild the index. I have gone through articles written by you on that.

In order to populate index_stats we need to use:
ANALYZE INDEX <INDEX_NAME> VALIDATE STRUCTURE;

Now questions:

a) We use dbms_stats.gather_schema_stats to gather *things* for the CBO (with cascade=>true).
Now if we use the analyze command, will the stats gathered for the index using dbms_stats be overwritten?

b) The "analyze" command is deprecated now, so why use "analyze" to populate index_stats?

Kind Regards




Tom Kyte

Followup  

January 05, 2009 - 11:13 am UTC

In order to populate index_stats we need to use:
ANALYZE INDEX <INDEX_NAME> VALIDATE STRUCTURE;


and don't forget - that'll LOCK that index - it is not an online operation. And if you use the online option - it doesn't populate index_stats...


a) no, they won't be.
b) analyze is deprecated as a means to gather statistics. analyze is used to validate structures still.
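Putting those two answers together as a sketch (the index name is hypothetical; note the locking caveat above):

```sql
-- locks the index against concurrent DML, but populates index_stats
analyze index my_index validate structure;

-- index_stats holds at most one row: the last index validated in this session
select name, height, lf_rows, del_lf_rows, used_space, btree_space
  from index_stats;

-- the online variant does not block DML, but leaves index_stats empty
analyze index my_index validate structure online;
```

So the validate-structure form of analyze survives precisely because index_stats has no dbms_stats equivalent.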

Rebuilding indexes helps

July 03, 2009 - 6:19 am UTC

Reviewer: RSah from Ontario

I gather you say rebuilding indexes does not help performance as everyone expects. But in my case I can tell you it gave me a 100% performance improvement on tables with 10+ million records. I am not a DBA so I cannot pinpoint what reindexing did. But on more than one occasion I asked the DBA to rebuild an index and it gave the expected performance improvement.
Tom Kyte

Followup  

July 06, 2009 - 7:34 pm UTC

Man, the art of critical reading is dead, isn't it? It has gone out the window along with "being able to phrase sensible questions".



RSah - please, take a few moments and point out anywhere on this page where I've said "I gather you say rebuilding indexes does not help on performance improvement as every one expects."

Anywhere, just one reference, that would be great.


What I said were things like "If you can show me measurable performance increases or some other measurable, material benefit --
by all means rebuild."

which leads me to believe - you didn't actually read this page.

Rebuild index metrix

August 27, 2009 - 4:36 pm UTC

Reviewer: A reader

For all those who question index rebuilds, here is a metric we measured before and after index rebuild.
These measurements were taken on the same database, same workload, same day of week, same time of day (all on a production 24x7 db). db file sequential read waits before the index rebuild = 986,692 (one-hour snapshot); after the rebuild = 49,724. Physical reads before the rebuild = 369.42; after = 22.35. This is the same code; both snapshots had 64 sessions and near-identical Curs/Sess. Why do rebuilds improve performance? I/O improvements and fewer waits.
Tom Kyte

Followup  

August 28, 2009 - 5:06 pm UTC

*perfect*

... here is a metric we measured before and after index rebuild. ...

PERFECT.

If you read my original answer, you'll see that we are in complete and utter agreement. I wrote:


ask them for the performance metrics
they took BEFORE the rebuild and AFTER the rebuild. They won't have any (no
one ever does, no one seems to think about doing that). They will say "it just
goes faster trust me".


You have done what I asked, you have verified that the INDEX you rebuild provides good returns.

And you can use your metric approach to see that the other 99 indexes you have provide NO such return if they are rebuilt.

So, you have proven that the single index you rebuilt

a) provides a return
b) provides the return you were aiming for

perfect.

What about the Automatic Segment Advisor recommendations

September 03, 2009 - 3:00 pm UTC

Reviewer: Mark from Seattle,WA

Tom,

Very interesting thread, thanks.

What are your thoughts on shrinking the indexes identified by the Automatic Segment Advisor findings?

select * from table ( DBMS_SPACE.ASA_RECOMMENDATIONS ) where segment_type like '%INDEX%'
Tom Kyte

Followup  

September 04, 2009 - 3:47 pm UTC

It can be

a) a good thing (unlikely in most cases unless it is an index on a sequence field or a date field populated by seq.nextval or sysdate where you delete many of the old records but not all)

b) a non-event, neither good nor bad

c) a bad thing - since the index will just grow again and growing is expensive - but staying the same size costs nothing.

frequency of execution of CTX_DDL.SYNC_INDEX

November 30, 2009 - 4:49 pm UTC

Reviewer: RG

Hi:
In our application [on Oracle 9i (9.2.0.7.0)] we execute CTX_DDL.SYNC_INDEX every 5 mins (using a DBMS_JOB) to synchronize oracle text indexes on VARCHAR2 columns.
The same job optimizes the indexes at 7 AM, 12 PM and 5 PM to reduce index fragmentation.

The user doesn't want to wait 5 mins to find inserted/updated records in the search result.

I am planning on increasing the frequency of index synchronization to 1 min so that the user does not have to wait 5 mins to search inserted/updated records. My main concern was performance, and based on the information in http://download.oracle.com/docs/html/A95995_01/oraclete.htm it doesn't look like performance will be impacted. There will be index fragmentation, which happens right now anyway, and the index optimization at 7 AM, 12 PM and 5 PM should take care of that.

Do you have any comments on this approach? Is there a better way to solve this problem?

Thanks!
Tom Kyte

Followup  

December 01, 2009 - 3:29 am UTC

If you cannot wait 5 minutes, you cannot wait 5 minutes, if you want the index maintained more frequently so that recently modified data is in there - you have to do that.

You can even go with sync on commit if you want, nowadays (and for quite a while) you can make text indexes transactional.

It would utterly depend on information I don't have (size of existing data, transaction workload) as to whether it would be too cumbersome or not - that would be something you would need to test.
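The "sync on commit" option mentioned above would look something like this (table, column, and index names are hypothetical; this option arrived in 10g, so it would require an upgrade from the 9.2 release in the question):

```sql
-- a transactional Oracle Text index: committed rows become searchable
-- immediately, with no background sync job needed
create index doc_text_idx on documents(body)
  indextype is ctxsys.context
  parameters ('sync (on commit)');
```

The trade-off is that every commit now pays the sync cost, which is why the workload testing Tom mentions matters.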

Link update

October 20, 2010 - 10:27 am UTC

Reviewer: Ray from Winston-Salem, NC USA

The links are no longer valid in the response to A Reader's post above:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:2913600659112#25757722313362

I believe these reference the articles mentioned:
http://www.jlcomp.demon.co.uk/circular.html
http://www.jlcomp.demon.co.uk/That%20was%20then.html

Link Update

January 15, 2011 - 3:11 pm UTC

Reviewer: Nasir L from India

The index internals ppt of Richard Foote - referred to in many posts above - has been moved to this URL; thought it would be useful to put it here.

http://www.dbafan.com/book/oracle_index_internals.pdf

Index Rebuild

February 11, 2011 - 2:01 pm UTC

Reviewer: Hasan Saberedowo from Nigeria

Dear Ask Tom;

I have read your several submissions on index and tablespace fragmentation and on why it is not advisable to make a practice of rebuilding indexes on a schedule.

However, you agreed that if, after performance analysis, it is realised that the performance issue is due to an index problem, then we can rebuild.

How would you recommend we detect a performance problem caused by an index, if all the scripts showing deleted leaf rows etc. are not useful?
Tom Kyte

Followup  

February 14, 2011 - 7:27 am UTC

You would detect it via observing a deterioration in performance of a query over time. Something that ran fast is now (using the same plan) running slower and slower due to increased logical IO activity.
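One way to observe that deterioration, as a sketch: track logical I/O per execution for the statement over time in v$sql (the substitution variable is a placeholder for the statement's sql_id):

```sql
-- buffer_gets/executions creeping up over time for the SAME plan_hash_value
-- is the signature of a degrading index, not a plan change
select sql_id, plan_hash_value, executions, buffer_gets,
       round( buffer_gets / nullif(executions, 0) ) gets_per_exec
  from v$sql
 where sql_id = '&sql_id'
 order by gets_per_exec desc;
```

Comparing snapshots of this over days or weeks gives the before/after metric Tom asks for in the original answer.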

Inconsistent reads using index

May 20, 2011 - 7:43 am UTC

Reviewer: Abel from Buenos Aires, Argentina

Hello, Tom, the problem with an index in this case is not the performance, but the inconsistency between index and table.

Query with default plan (with index use):

SELECT COUNT(*) FROM GA_ABOCEL
WHERE COD_CLIENTE IN (SELECT COD_CLIENTE FROM GA_ABOCEL)

COUNT(*)
--------
0

Plan
SELECT STATEMENT RULE
6 SORT AGGREGATE
5 NESTED LOOPS
3 VIEW SYS.VW_NSO_1
2 SORT UNIQUE
1 TABLE ACCESS FULL TMC_10027_SCL_FE.GA_ABOCEL
4 INDEX RANGE SCAN NON-UNIQUE TMC_10027_SCL_FE.AK_GA_ABOCEL_CLIENTE


Query with a plan that nullifies the index:

SELECT COUNT(*) FROM GA_ABOCEL
WHERE COD_CLIENTE+0 IN (SELECT COD_CLIENTE FROM GA_ABOCEL)

COUNT(*)
--------
4999


Plan
SELECT STATEMENT RULE
8 SORT AGGREGATE
7 MERGE JOIN
2 SORT JOIN
1 TABLE ACCESS FULL TMC_10027_SCL_FE.GA_ABOCEL
6 SORT JOIN
5 VIEW SYS.VW_NSO_1
4 SORT UNIQUE
3 TABLE ACCESS FULL TMC_10027_SCL_FE.GA_ABOCEL


Rebuilding the index fixes the problem. Is this a matter of corruption of the index data? How can we prevent it? Is a scheduled rebuild a solution?

Version 9.2.0.8.0

Thanks in advance.


Tom Kyte

Followup  

May 23, 2011 - 11:25 am UTC

please utilize support for this one, looks like a bug.

reads on empty index

December 01, 2011 - 5:33 am UTC

Reviewer: Michael from Switzerland

hi tom,

concerning this: http://alexanderanokhin.wordpress.com/2011/02/22/when-indexes-degrade/

I tried it in 11.2.0.2 and found Oracle doing 211 buffer gets even though the index is totally empty (that is, it contains only del_lf_rows). This happens even after I restarted the instance.

[28] test@EL000E1> select * from test where owner='SYS'
  2  ;

no rows selected

Elapsed: 00:00:00.00
[28] test@EL000E1> select * from table (dbms_xplan.display_cursor (null, null, 'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  g0ubradf3sybh, child number 0
-------------------------------------
select * from test where owner='SYS'

Plan hash value: 4092295894

----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |      |      1 |        |      0 |00:00:00.01 |     211 |
|   1 |  TABLE ACCESS BY INDEX ROWID| TEST |      1 |      1 |      0 |00:00:00.01 |     211 |
|*  2 |   INDEX RANGE SCAN          | IDX  |      1 |      1 |      0 |00:00:00.01 |     211 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OWNER"='SYS')


19 rows selected.

Question: why 211 buffer gets on an empty index? Couldn't Oracle recognize that the index is empty when touching the (empty) index root block - 2 buffer gets only?

Tom Kyte

Followup  

December 06, 2011 - 10:06 am UTC

You have a strange edge case that doesn't typically happen in real life.

How often do you

a) insert a bunch of rows in a table
b) delete all of them
c) then commit


or do you rather - have some inserts in some sessions, some updates in others, some deletes in others - all committing and working on the same table?

If you are going to insert/delete in the same transaction, I think you meant to use a global temporary table. Or you meant to truncate. You did not mean to delete all of them.


A delete of all rows can leave the blocks marked with deleted rows - which will get reused upon subsequent inserts!



If you do have this issue - do not rebuild, just coalesce.


and please remember - this is important - no one ever said "NEVER rebuild indexes". I have said "it is usually an utter waste of time; make sure you actually need to before you start scheduling them willy-nilly", as a rebuild can make things faster, make things SLOWER, or not change a thing - with the last two happening a lot more often than people think.
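The coalesce alternative suggested above is one statement (the index name is hypothetical):

```sql
-- coalesce merges the contents of adjacent, sparsely populated leaf blocks
-- in place: an online operation that frees empty blocks for reuse within
-- the index, without allocating a new segment the way a rebuild does
alter index idx coalesce;
```

For the insert-then-delete-everything pattern in the example, this cleans up the deleted entries without paying for a full rebuild.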

reads on empty index

December 09, 2011 - 7:12 am UTC

Reviewer: MIchael from Switzerland

hi tom,

thanks for your answer. I know this is a strange edge case.

Anyway, my question remains: why 211 buffer gets? Is there a valid reason for this, or is it an inefficient implementation, or simply a bug, or...?

cheers,
michael
Tom Kyte

Followup  

December 11, 2011 - 2:45 pm UTC

because the index entries would be marked as deleted in your example but not necessarily removed from the index structure - not until they are needed to be reused again - at which point they would be.

It is not a bug
It is not an inefficient implementation
It is the way it works - a couple of inserts later and the "problem" (which you won't really have) goes away.

Or, use truncate, which would be infinitely more appropriate to the task at hand.

indexes

December 13, 2011 - 10:09 am UTC

Reviewer: rizwan from india

Hi tom, 

We did a full import from 10gR1 to 11gR2. But for 2 users I see additional indexes got imported which do not exist in 10g. Those indexes exist in dba_indexes but not in dba_ind_columns.

Output from 10g  :- 

SQL> select owner , object_type, count(*) from dba_objects where owner='BWIP'  and object_type='INDEX' group by owner , object_type ;

OWNER                          OBJECT_TYPE           COUNT(*)
------------------------------ ------------------- ----------
BWIP                           INDEX                      151


Output from 11g :- 

SQL>  select owner , object_type, count(*) from dba_objects where owner='BWIP'  and object_type='INDEX' group by owner , object_type ;

OWNER                          OBJECT_TYPE           COUNT(*)
------------------------------ ------------------- ----------
BWIP                           INDEX                      154

I figured out below are the three indexes which are in 11g but not in 10g 


SYS_IL0000077287C00002$$  
SYS_IL0000077287C00003$$  
SYS_IL0000077287C00004$$

SQL>  select owner, index_name, index_type from dba_indexes where index_name in ('SYS_IL0000077287C00002$$','SYS_IL0000077287C00003$$','SYS_IL0000077287C00004$$') ;

OWNER                          INDEX_NAME                     INDEX_TYPE
------------------------------ ------------------------------ ---------------------------
BWIP                           SYS_IL0000077287C00004$$       LOB
BWIP                           SYS_IL0000077287C00003$$       LOB
BWIP                           SYS_IL0000077287C00002$$       LOB

 select index_name from dba_ind_columns where index_name in ('SYS_IL0000077287C00002$$','SYS_IL0000077287C00003$$','SYS_IL0000077287C00004$$') ;

Any idea why they show up in dba_indexes and not in dba_ind_columns? And how did they get imported even though they didn't exist in 10g?

Tom Kyte

Followup  

December 13, 2011 - 10:34 am UTC

those are automatically generated indexes on any clob/blob columns.


I don't have 10gR1 to test with right now - perhaps 10gR1 'forgot' to list these in dba_indexes.


do this in 10gr1 in a 'clean' (empty) schema:

create table t ( x clob );

and see if that user has an index or not. If they do not, it was a bug in 10gR1 whereby they forgot to list that index in dba_indexes.

indexes

December 13, 2011 - 10:11 am UTC

Reviewer: rizwan from india

Missing output for the above question:

SQL> select index_name from dba_ind_columns where index_name in ('SYS_IL0000077287C00002$$','SYS_IL0000077287C00003$$','SYS_IL0000077287C00004$$') ;

no rows selected

December 14, 2011 - 6:48 am UTC

Reviewer: rizwan from india

Hi Tom,

One correction: it's a 10gR2 database (10.2.0.5.0), not 10gR1. I did a small test scenario and below is the result.

----------In 10G ---------------------

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - Prod
PL/SQL Release 10.2.0.5.0 - Production
CORE    10.2.0.5.0      Production
TNS for 32-bit Windows: Version 10.2.0.5.0 - Production
NLSRTL Version 10.2.0.5.0 - Production

SQL> create user riz identified by riz default tablespace users ;

User created.

SQL> grant create session , create table to riz;

Grant succeeded.

SQL> alter user riz quota 100m on users;

User altered.

SQL> connect riz/riz
Connected.
SQL> create table t ( x clob );

Table created.

SQL> select  object_type , count(*)  from user_objects group by  object_type;

OBJECT_TYPE           COUNT(*)
------------------- ----------
LOB                          1
TABLE                        1

SQL> select count(*) from user_indexes;

  COUNT(*)
----------
         1


--------And in 11g-------------
SQL> SELECT * FROM V$VERSION;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0      Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production



SQL> create user riz identified by riz default tablespace users ;

User created.

SQL> grant create session , create table to riz;

Grant succeeded.

SQL> alter user riz quota 100m on users;

User altered.

SQL> connect riz/riz
Connected.
SQL>
SQL> create table t ( x clob );


Table created.

SQL> SQL>
SQL>
SQL> select  object_type , count(*)  from user_objects group by  object_type;

OBJECT_TYPE           COUNT(*)
------------------- ----------
LOB                          1
TABLE                        1
INDEX                        1

SQL> select count(*) from user_indexes;

  COUNT(*)
----------
         1



So the conclusion is: LOB indexes, which are created automatically along with LOB columns, don't show up in user_objects (or dba_objects) in 10gR2, but are visible in 11gR2. This looks like a bug in 10gR2.
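One way to cross-check which LOB indexes a schema has, independent of how dba_objects behaves, is to read dba_lobs, which records each LOB column's index name directly. A minimal sketch (the BWIP owner is taken from the example above):

```sql
-- dba_lobs maps each LOB column to its system-generated index
-- (the SYS_IL... names), so it can reconcile LOB indexes even on
-- versions where dba_objects omits them.
select owner, table_name, column_name, segment_name, index_name
  from dba_lobs
 where owner = 'BWIP'
 order by table_name, column_name;
```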

Thanks a lot, Tom, for your timely help.

Regards, 

Rizwan

To: Rizwan

December 16, 2011 - 9:48 am UTC

Reviewer: A reader

Please see: "Bug 9935857 LOB indexes are missing from DBA_OBJECTS" on My Oracle Support website (formerly Metalink). This bug is fixed in 11.2.0.3. Versions confirmed as being affected are:

10.2.0.4, 11.2.0.1, 11.2.0.2

Hope this helps...


When to rebuild Index

April 06, 2012 - 11:06 am UTC

Reviewer: pranav from NJ, USA

Hi Tom,

Thank you very much for spending your valuable time in responding to our questions.

We have a query that runs for around 2 hours when the result set is around 2-3 thousand rows. The same query runs very fast (2 minutes) when the result set is about 300 records. When I check the plan (from dbms_xplan.display_cursor), one of the tables is doing 100M buffer gets at the INDEX RANGE SCAN and TABLE ACCESS BY INDEX ROWID steps. I'm not sure what is causing that many buffer gets. This table has around 120M rows. I'm not sure if this index is a good candidate for an index rebuild.
Database version: 11.2.0.2
Could you please let me know what's your approach on this?

Is this link valid from your point of view? Can we use it to determine if the index needs to be rebuilt?
http://damir-vadas.blogspot.com/2010/11/how-to-see-index-usage-without-alter.html

My only take on this is that the author references Jonathan Lewis (the index_quality script).

Thanks for all your help


Tom Kyte

Followup  

April 06, 2012 - 3:07 pm UTC

I doubt it is a rebuild issue.

Probably more of a "you didn't add the right set of columns into the index"


What if you have a query:

select * from t where a = ? and b = ?;

and you have an index on A. And we decided to use the index. We find all of the A=? values in the index (say a lot of the time) - and then have to go to the table to look and see if B=? is satisfied - suppose that happens infrequently (that b=? is true). Then, you would do a lot of IO against the table only to find nothing of relevance.


consider the following and notice the dramatic difference in IO's between the two:


ops$tkyte%ORA11GR2> /*
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> drop table t;
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table t
ops$tkyte%ORA11GR2> as
ops$tkyte%ORA11GR2> select case when mod(rownum,2) = 0 then 0 else rownum end id,
ops$tkyte%ORA11GR2>        all_objects.*
ops$tkyte%ORA11GR2>   from all_objects
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats(user, 'T' );
ops$tkyte%ORA11GR2> create index t_idx on t(id);
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> variable x number;
ops$tkyte%ORA11GR2> variable y varchar2(30)
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec :x := 1; :y := 'SCOTT'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly
ops$tkyte%ORA11GR2> select * from t where id = :x and owner = :y;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |   100 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |   100 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     2 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"=:Y)
   2 - access("ID"=TO_NUMBER(:X))


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
       1239  bytes sent via SQL*Net to client
        409  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> exec :x := 0; :y := 'SCOTT'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly
ops$tkyte%ORA11GR2> select * from t where id = :x and owner = :y;


Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |   100 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |   100 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     2 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OWNER"=:Y)
   2 - access("ID"=TO_NUMBER(:X))


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1143  consistent gets
          0  physical reads
          0  redo size
       1727  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

ops$tkyte%ORA11GR2> set autotrace off



select * from t where id = :x and owner = :y

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.01       0.01          0       1143          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0       1143          0           5

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         5          5          5  TABLE ACCESS BY INDEX ROWID T (cr=1143 pr=0 pw=0 time=9076 us cost=2 size=100 card=1)
     36436      36436      36436   INDEX RANGE SCAN T_IDX (cr=69 pr=0 pw=0 time=23315 us cost=1 size=0 card=2)(object id 114237)


the tkprof shows us why - look at all of the rows we found in the index (36,436) that resulted in 5 rows from the table. that means we did 36,431 possible extra table hits.

I'll bet if you trace your query you'll see something similar. If so, look at your indexing scheme - in my case, if I put an index on (id,owner) instead:

ops$tkyte%ORA11GR2> drop index t_idx;
ops$tkyte%ORA11GR2> create index t_idx on t(id,owner);
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> */
ops$tkyte%ORA11GR2> variable x number;
ops$tkyte%ORA11GR2> variable y varchar2(30)
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec :x := 1; :y := 'SCOTT'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly
ops$tkyte%ORA11GR2> select * from t where id = :x and owner = :y;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |   200 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     2 |   200 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     2 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=TO_NUMBER(:X) AND "OWNER"=:Y)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          2  consistent gets
          0  physical reads
          0  redo size
       1239  bytes sent via SQL*Net to client
        409  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

ops$tkyte%ORA11GR2> set autotrace off
ops$tkyte%ORA11GR2> exec :x := 0; :y := 'SCOTT'

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> set autotrace traceonly
ops$tkyte%ORA11GR2> select * from t where id = :x and owner = :y;


Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     2 |   200 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     2 |   200 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T_IDX |     2 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=TO_NUMBER(:X) AND "OWNER"=:Y)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          7  consistent gets
          0  physical reads
          0  redo size
       1872  bytes sent via SQL*Net to client
        420  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          5  rows processed

ops$tkyte%ORA11GR2> set autotrace off



all is good.

nothing to do with a reorg, everything to do with physics and how many times we have to hit a table.




As for that link, I don't care how many times you rebuild an index - it will not affect the clustering factor. The clustering factor is a measure of how sorted a table is with respect to the indexed attributes. If you rebuild an index - will the table become any more sorted????? (rhetorical question - NO, it won't be).

That is bad advice there.
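The point about the clustering factor can be checked directly: it is a column of the *_indexes views, and it stays essentially unchanged across a rebuild because it measures how ordered the table is, not how tidy the index is. A minimal sketch, assuming statistics are current (table name T is from the demo above):

```sql
-- clustering_factor near t.blocks   => table well ordered by the key
-- clustering_factor near t.num_rows => table scattered w.r.t. the key
-- Rebuilding the index does not reorder the table, so this
-- measure is unaffected by a rebuild.
select i.index_name, i.clustering_factor, t.blocks, t.num_rows
  from user_indexes i
  join user_tables  t on t.table_name = i.table_name
 where i.table_name = 'T';
```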

When to Rebuild Index

April 06, 2012 - 4:55 pm UTC

Reviewer: pranav from NJ, USA

Just can't tell simple thanks. But thank you very much Tom.

I created another index (with the correct column ordering) and it's working like a charm :-)

The previous index was doing a lot of buffer gets and returning only a few hundred rows.

Tom Kyte

Followup  

April 08, 2012 - 5:43 pm UTC

remember this "pattern" - you'll see it a few million times more in your life. The pattern of the "index isn't on enough attributes, causing lots of buffer gets"

Index rebuilding

April 09, 2012 - 1:06 am UTC

Reviewer: A Reader

Hi Tom ,

I am facing a strange issue in my database. I have a table t partitioned by range (column c1) and subpartitioned by list (column c2).

Column c2 has a local bitmap index created on it. After the data load, when I query table t using where c2 = 123 I don't get the result set, but when I use where c2 in (123, 234, 345) I do get the result set corresponding to 123. c2 is of data type number.

When I reported this to my DBA, he said that it is an issue with the index and that an index rebuild would solve it.

Can this be the case?

Tom Kyte

Followup  

April 09, 2012 - 11:33 am UTC

if you are getting the wrong answer it is either

a) an optimizer bug, review the plan and see if the plan looks like it could get the data

b) data corruption


In *either* case, a call to support is definitely in order.

A strong case for rebuilding indexes

April 25, 2012 - 6:30 pm UTC

Reviewer: Keith J Baugh from United Kingdom

I've been a DBA for 20 years and I normally have little time for blogging, however, on this topic I have to make an exception. I am also a great fan of this site and its content and wisdom.

The arguments presented in this thread kind of suggest that the primary reason for having indexes is that they may exist at a size which is pleasing to the index maintenance algorithms.

It loses sight of the real reason we create indexes, which is generally to enhance the response of application queries and help the business to function efficiently.

I am currently providing tuning advice on a large database system with approximately 1800 users. The database is configured with a 24GB SGA, of which 16GB has been allocated to database buffers. One of our main wait events in this system during the peak times of the business day is 'db file sequential read' - a wait event that, I'm sure you'll agree, is related to the fetching of discrete blocks of data from the file system. Our system has over 900 tables and close to 1700 indexes. The total table footprint is 168GB, LOBs occupy around 2.5TB, and the footprint of the largest 389 indexes in the system was 163GB.

AWR reports for average business days at this time revealed that, during peak hours, the top-5-listed 'db file sequential read' event was running at a cluster total of 45,660 seconds per hour across the three nodes (all with the same SGA configuration). This represents over 12 hours of wait time, per hour of peak activity, that the users were kept waiting for single block fetches.

Given that there is simply not enough room in the buffer pool to hold the complete set of indexes/tables the system was busying itself with having to fetch data from I/O for index and table reads.

After performing some analysis on a similar size copy DB, I concluded that the indexes had grown bloated and advised that we should rebuild them in order to save space. The rationale behind this was that we would make their footprints smaller and this would then lead to a greater density of index data being stored per block. Then, given the fixed size buffer cache, the data it would contain would therefore be far more efficient in terms of its usefulness to index look up operations. The ultimate goal was to reduce the amount of read operations, and so a smaller set of core index blocks would translate to a more usefully populated buffer cache and lead to less I/O operations and hence less wasted I/O time.

I had quite a disagreement with the production DBAs who had chosen to hold the view that indexes only grow back to their original sizes over time.

Eventually, I was granted the go ahead.

The footprint of the top 389 indexes shrank from 163GB to 86.7GB - a footprint saving of 9.81 million blocks for the same index set. Now the indexes are contained in about half of the space they originally occupied.
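A footprint comparison like the one described can be captured before and after a rebuild with a query along these lines (a sketch; the owner name is illustrative):

```sql
-- Sum index segment sizes per owner; run before and after the
-- rebuild and diff the results to quantify the space saving.
select owner,
       round(sum(bytes)/1024/1024/1024, 1) as index_gb,
       sum(blocks)                         as index_blocks
  from dba_segments
 where segment_type = 'INDEX'
   and owner = 'APP_OWNER'   -- illustrative owner
 group by owner;
```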

The net result is that AWR stats reported a drop in sequential read wait times from 45,660 to 28,800 seconds in the peak hour of the day (these times are total sums for the whole cluster, not just a single node) - a statistic that still holds today, around 4 weeks later.

Before
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read           1,687,645      12,390      7   40.0   User I/O
CPU time                                          6,897          22.3
enq: TX - row lock contention        12,419       5,431    437   17.5 Applicatio
gc current block 2-way            1,461,583       1,137      1    3.7    Cluster
gc cr block busy                    188,283         865      5    2.8    Cluster

After
 
Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
enq: TX - row lock contention        18,600       8,740    470   36.9 Applicatio
db file sequential read             786,454       5,399      7   22.8   User I/O
CPU time                                          5,200          22.0
gc cr block busy                    240,347       1,075      4    4.5    Cluster
gc current block 2-way              900,849         635      1    2.7    Cluster


AWR reports above reflect the day before the indexes were rebuilt and then the day after they were re-built at the same hour of the business day (10am to 11am), for one of the three nodes. The trend continues across the other nodes and remains, even now, nearly four weeks later.

So, the indexes may eventually grow back to sizes which are bigger than when first rebuilt - but what of it? When that happens, just rebuild the things again. The objective is that the users have a good experience, and if rebuilding the indexes buys those users 4.6 hours of saved wait time per hour because the index data is more densely held in the buffer cache, then rebuild the things. They may grow back in time, but in the meantime the benefits brought to the system by denser index blocks will please your users. Do not forget that it's the business we are trying to please, not the index maintenance algorithm.

Now, I know there are situations where this may not be appropriate, but, as a rule of thumb, I'd say that if you have a system that has core data bursting out of its buffer cache, and you can demonstrate that freshly built indexes will reduce that 'core' footprint substantially, then it would be worth trying.

Additionally, the CPU time from the AWR stats also dropped slightly after the index rebuild. So the worry of increased CPU burn due to additional index block splitting operations just did not materialize (I guess the write-to-read ratio of my system makes write operations negligible in comparison). Again, this phenomenon is not an isolated AWR glitch but shows a continuing trend. I've put this down to reduced CPU requirements to deal with in-cache blocks as opposed to fetched ones.

You asked for statistics; I have hundreds of AWR reports from before and after that demonstrate this, but it is more obviously seen graphically. I'd give you a great picture, but I'm not sure how to upload one. I also captured index sizes before and immediately after their rebuild. Reporting on this now (4 weeks on) shows a grow-back of only 5.28 GB from the initial 76GB saving.

Hope this helps

Keith

Tom Kyte

Followup  

April 26, 2012 - 7:49 am UTC

The arguments presented in this thread kind of suggest that the primary reason for having indexes is that they may exist at a size which is pleasing to the index maintenance algorithms.

It loses sight of the real reason we create indexes, which is generally to enhance the response of application queries and help the business to function efficiently.


and I don't see how those two things are contrary. If the index wants to be 40% empty (because it is on last name and the data arrives randomly and therefore the blocks split frequently and end up being half empty until they fill and split again) - AND given the act of splitting and broadening the index takes a lot of cpu, time, undo and redo - AND given that those things imply "takes a lot of resources, time, energy" - THEN - letting the darn thing alone makes sense.



Now, I know there are situations where this may not be appropriate, but, as a rule of thumb, I'd say that if you have a system that has core data bursting out of its buffer cache, and you can demonstrate that freshly built indexes will reduce that 'core' footprint substantially, then it would be worth trying.

Now that - that was well said and I don't see it running counter to *anything* said here.


IF you can demonstrate
IF you can measure
IF you can figure out when it would make sense to do it again

then go for it - that was said here.


What we were talking about was "do not rebuild on some arbitrary schedule like once a month, once a week, etc" "do not rebuild on a whim" "do not rebuild as a 'hey things seem slow, let us see if a rebuild would do something'"


Use science - facts - numbers. Not "we rebuilt once before and it was super great so let's start doing that every weekend"



you answered all of the questions I said you needed to ask - why do you feel you are running in opposition to what was written here?

Nicely put

April 26, 2012 - 9:47 am UTC

Reviewer: DJB from UK

By both parties. A clear case of where a rebuild works, and a quite reasonable response that it can be useful in specific cases, but not as a regularly scheduled event without the stats to justify it. Myths abound in IT as everywhere, and the production DBAs who said it was a waste of time seem to believe in this as a hard and fast rule.

Used to be a wonderful generator of overtime, though - so I'm told.

A Culture of Fear on Index rebuilds

April 26, 2012 - 11:46 am UTC

Reviewer: Keith J Baugh from United Kingdom

Thanks for your considered comments.

I'm just a little concerned that 'out there' are many DBAs who read this particular thread of index building discussions and become fearful of the act of index rebuilds. You do give a good argument for the negative implications of freshly squeezed indexes and it is good to consider these when approaching production systems.

However, consider systems where updates and inserts are gradual, and where the index maintenance code has, over time, staked out its claim to a comfortable though bloated holding of real estate - at the expense of the massively outnumbering read operations, which are forced to bash the cache to pieces. Under these conditions, and many others that you also air a disclaimer to, rebuilding indexes may well be a very valuable gain to the business.

I have no argument with you on your approach to index rebuilds, but, would like to challenge other DBAs.

Sitting back and accepting that rebuilding indexes is wrong is as great an error as rebuilding them every night without regard to possible negative consequences.

Don't be afraid to consider doing this. See what your indexes could shrink to on a test system. Would the possible reduction in their size significantly improve the efficiency of your buffer cache for its read operations? Can you tolerate an increase in redo generation and block splitting operations on your transactional workload? Would you expect this to be much higher or just a little blip?
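"See what your indexes could shrink to" can be estimated on a test copy with ANALYZE ... VALIDATE STRUCTURE, which populates the index_stats view. A sketch, assuming a test system (the command locks the underlying table, so it is not for production use) and an illustrative index name:

```sql
analyze index app_owner.big_idx validate structure;

-- index_stats holds one row, for the index just validated:
-- a low pct_used suggests the index would shrink if rebuilt.
select name, height, lf_rows, del_lf_rows,
       btree_space, used_space,
       round(100 * used_space / btree_space) as pct_used
  from index_stats;
```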

After one such index rebuild I looked for such increases and 'feared' their appearance, but they never amounted to anything. In contrast, the improvement to the system in general was measurably significant.

I just get the impression that many DBAs take the, perhaps wrongful, view that this thread discourages DBAs from rebuilds. It has, rightly or wrongly, induced a view that periodic rebuilding of indexes is a rather unprofessional manner in which to deal with system performance issues - even though it qualifies itself nicely throughout that where there is a measurable positive difference, rebuilds should be considered. Many DBAs have now unfortunately developed the notion that it's a silly thing to rebuild indexes as they just grow back again. It is to this sentiment that I have to object.

If your index rebuilds shave tens of GBytes from the index footprints, and these indexes are very commonly used in the system, then rebuilding them is like having a bigger buffer cache - and we all know what that's like. Under these conditions, if after rebuilding your indexes you discover that they take two weeks to grow back to their original size, despair not: measure the saving of I/O wait times wrought by smaller indexes during this grow-back time, and if this is worth it, then you are on to a winner.

In my particular case, the index rebuild operation has shaved 70+GB from the index footprint and, as a direct result, has shaved around 20 hours of I/O wait time per day off the business wait time - and this is measured on just one of my three nodes. In contrast, it would take a considerable amount of index block splitting and extra redo operations to offset the savings in sheer read performance before a break-even point is reached!
Tom Kyte

Followup  

April 26, 2012 - 12:22 pm UTC

Don't be afraid to consider doing this.

but I said that time and time and time again on this page - over and over

If you can show me measurable performance increases or some other measurable, material benefit -- by all means rebuild.


I do not believe in a scheduled rebuild of indexes.


I do believe in

a) determining the root cause of a performance related issue
b) correcting it

and if part of b) is rebuilding an index, so be it. but to rebuild on a schedule? No.



I even demonstrated a case of a class of index that would definitely benefit from a re-org:


and see the increasing CPU times. Now I rebuilt the index and restarted the "batches":

...

So, ask yourself, do we have any degenerate cases like this -- a sweeping left to right index that
leaves stragglers behind?









I just get the impression that many DBAs take the, perhaps wrongful, view that this thread discourages DBAs from rebuilds

I hope that is not true, for that was definitely not my intention.





In my particular case, the index rebuild operation has shaved 70+GB from the index footprint and as a direct result has shaved around 20 hours of I/O wait time per day off the business wait time and this is measured on one node of the three I have. In contrast, it would take a considerable amount of index block splitting and extra redo operations to offset the savings in shear read performance, before a break-even point is reached!

that is perfect, that is exactly what I was saying should be done! You are doing exactly what we were talking about doing here.


If you can show me measurable performance increases or some other measurable, material benefit -- by all means rebuild.


April 26, 2012 - 3:32 pm UTC

Reviewer: Alexander

I don't mean to whack the hornets' nest here, but my interpretation of Keith's analysis is that he probably has an undersized buffer cache, given the amount of data they have and the size of the SGA.

I arrive at that because if periodically reducing the size of the indexes yields those kinds of performance results, then the blocks are hot enough that they probably belong in the buffer cache. Since the indexes will eventually grow back to their "happy" sizes, this process will have to be routinely repeated - and rebuilding indexes is expensive.
Tom Kyte

Followup  

April 26, 2012 - 3:39 pm UTC

I'm thinking actually that they have lots of "sweeper" type indexes based on the rate of slow growth afterwards

... Reporting on this now (4 week on) shows a grow-back of only 5.28 GB, from the initial 76GB saving. ....
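Whether hot index blocks actually fit in the buffer cache, as debated above, can be observed rather than guessed: v$bh lists the blocks currently cached, keyed by data object id. A minimal sketch (requires access to the v$ views; the owner filter is illustrative):

```sql
-- Count cached blocks per index to see which indexes dominate
-- the buffer cache right now.
select o.owner, o.object_name, count(*) as cached_blocks
  from v$bh b
  join dba_objects o on o.data_object_id = b.objd
 where o.object_type = 'INDEX'
   and o.owner = 'APP_OWNER'   -- illustrative owner
 group by o.owner, o.object_name
 order by cached_blocks desc;
```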

Keith J Baugh

April 27, 2012 - 10:04 am UTC

Reviewer: Chris from Dallas, TX

Keith,

You should blog here and share your opinion more often! I read AskTom daily (almost religiously), and your posts are among the most interesting, informative, and thought-provoking reads I've had in a while (aside from Tom's, of course, which are customarily thought-provoking!). This type of intellectual engagement and sharing of ideas and experiences is what I love most about forums like this.


Cheers!

Simple query to determine if bitmap indexes need to be rebuilt

May 25, 2012 - 12:07 pm UTC

Reviewer: tonykapp from Mason, OH USA

Tom,

For the limited situations where a rebuild of bitmap indexes is necessary, do you have a query for detecting which bitmap indexes need to be rebuilt?

TonyKapp

Is it an index problem?

June 14, 2012 - 6:44 pm UTC

Reviewer: Cesar Salas

I can't provide you a CREATE statement because I can't reproduce the error with another table. So, I will understand if you delete my question.

owner@machine|instance> desc notifications
 name                            null?    type
 ------------------------------- -------- ----
 noti_id                         not null varchar2(12)
 noti_type                       not null varchar2(10)
 noti_date                       not null date
 noti_send                       not null varchar2(1)
 ...

Primary Key ( noti_id ) using an index ( NOTI_PK_IDX )
check noti_send in ( 'Y', 'N' )

select noti_id, noti_send
from notifications
where noti_id = '20126'
/

ID           D
------------ -
20126        N

So, ID 20126 exists, but:

update notifications
set noti_send = 'THIS_IS_INVALID!'
where noti_id = '20126'
/

0 rows updated.

I guess it is an index problem, because if I change the WHERE clause the update does the right thing:

update notifications
set noti_send = 'THIS_IS_INVALID!'
where noti_type is not null
and noti_id = '20126'
/

update notifications
*
ERROR at line 1:
ORA-01401: inserted value too large for column

And this fails too:

update notifications
set noti_send = 'THIS_IS_INVALID!'
where noti_id is not null
and noti_id = '20126'
/

0 rows updated.

Please, can you tell me where or what to look for? I already tried "ALTER INDEX NOTI_PK_IDX REBUILD" before these queries.

Thank you so much for your help.
Tom Kyte

Followup  

June 15, 2012 - 7:51 am UTC

can you do an

analyze index <that index name> validate structure cascade;

for us?


also, show us the plans used for the queries (set autotrace on explain)

also,

select dump(noti_id), noti_id, noti_send
from notifications
where noti_id = '20126'

and lastly:

update /*+ full( notifications ) */ notifications
set noti_send = 'THIS_IS_INVALID!'
where noti_id = '20126'
/

Index problem?

June 15, 2012 - 11:42 am UTC

Reviewer: César Salas from México

SQL*Plus: Release 8.0.6.0.0 - Production on Fri Jun 15 16:34:34 2012

(c) Copyright 1999 Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

owner@machine|instance> select 
  banner
from v$version
/

Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production

owner@machine|instance> set autotrace on explain
Unable to verify PLAN_TABLE format or existence
Error enabling EXPLAIN report


Fix From http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:111012348061#13310772227878


owner@machine|instance> create table PLAN_TABLE (
    statement_id    varchar2(30),
    timestamp       date,
    remarks         varchar2(80),
    operation       varchar2(30),
    options         varchar2(255),
    object_node     varchar2(128),
    object_owner    varchar2(30),
    object_name     varchar2(30),
    object_instance numeric,
    object_type     varchar2(30),
    optimizer       varchar2(255),
    search_columns  number,
    id      numeric,
    parent_id   numeric,
    position    numeric,
    cost        numeric,
    cardinality numeric,
    bytes       numeric,
    other_tag       varchar2(255),
    partition_start varchar2(255),
        partition_stop  varchar2(255),
        partition_id    numeric,
    other       long,
    distribution    varchar2(30),
    cpu_cost    numeric,
    io_cost     numeric,
    temp_space  numeric,
        access_predicates varchar2(4000),
        filter_predicates varchar2(4000))
/

Table created.

owner@machine|instance> set autotrace on explain

owner@machine|instance> analyze index NOTI_PK_IDX validate structure cascade
                                                                     *
ERROR at line 1:
ORA-01491: CASCADE option not valid

owner@machine|instance> analyze index NOTI_PK_IDX validate structure;

Index analyzed.

owner@machine|instance> select noti_id, noti_send
  2  from notifications
  3  where noti_id = '20126';

NOTI_ID      N
------------ -
20126        N


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=18)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'NOTIFICATIONS' (Cost=2 Car
          d=1 Bytes=18)

   2    1     INDEX (UNIQUE SCAN) OF 'NOTI_PK_IDX' (UNIQUE) (Cost=1 Card
          =749)
          

owner@machine|instance> update notifications
  2  set noti_send = 'THIS_IS_INVALID!'
  3  where noti_id = '20126';

0 rows updated.


Execution Plan
----------------------------------------------------------
   0      UPDATE STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=18)
   1    0   UPDATE OF 'NOTIFICATIONS'
   2    1     INDEX (UNIQUE SCAN) OF 'NOTI_PK_IDX' (UNIQUE) (Cost=1 Card
          =1 Bytes=18)


owner@machine|instance> update notifications
  2  set noti_send = 'THIS_IS_INVALID!'
  3  where noti_type is not null
  4  and noti_id = '20126';
update notifications
*
ERROR at line 1:
ORA-01401: inserted value too large for column


owner@machine|instance> update notifications
  2  set noti_send = 'THIS_IS_INVALID!'
  3  where noti_id is not null
  4  and noti_id = '20126';

0 rows updated.


Execution Plan
----------------------------------------------------------
   0      UPDATE STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=18)
   1    0   UPDATE OF 'NOTIFICATIONS'
   2    1     INDEX (UNIQUE SCAN) OF 'NOTI_PK_IDX' (UNIQUE) (Cost=1 Card
          =1 Bytes=18)


owner@machine|instance> select dump(noti_id)  dumpid, noti_id notiid, noti_send    
notisend
  2  from notifications
  3  where noti_id = '20126';

DUMPID
----------------------------------------------------------------------------------------------------
NOTIID       N
------------ -
Typ=1 Len=5: 50,48,49,50,54
20126        N


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=18)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'NOTIFICATIONS' (Cost=2 Car
          d=1 Bytes=18)

   2    1     INDEX (UNIQUE SCAN) OF 'NOTI_PK_IDX' (UNIQUE) (Cost=1 Card
          =749)

owner@machine|instance> update /*+ full( notifications ) */ notifications
  2  set noti_send = 'THIS_IS_INVALID!'
  3  where noti_id = '20126';
update /*+ full( notifications ) */ notifications
*
ERROR at line 1:
ORA-01401: inserted value too large for column

Thanks in advance

Tom Kyte

Followup  

June 15, 2012 - 1:21 pm UTC

8.0.6.0.0 ??????
9.2.0.1.0?????

really????

I'll agree there is something wrong, but with such unpatched software dating back so far into history - I'll never be able to reproduce or help you.

I might even suspect the client software. Can you try with something that was done in at least the same century? 8.0 is 1997, 9iR2 is circa 2004.





Any idea?

June 15, 2012 - 1:36 pm UTC

Reviewer: César Salas from México

Thanks for your time - I found the cause.

Last change before the problem was:

owner@machine@instance> alter table notifications
2 add (
3 constraint notipayment_ck
4 check ( noti_payment <= noti_total
5 and noti_payment >= 0 )
6 )
7 /

Table altered.

With that constraint in place, the problem is still there:

owner@machine@instance> update notifications
2 set noti_send = 'THIS_IS_INVALID!'
3 where noti_id = '20126';

0 rows updated.


Execution Plan
----------------------------------------------------------
0 UPDATE STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=18)
1 0 UPDATE OF 'NOTIFICATIONS'
2 1 INDEX (UNIQUE SCAN) OF 'NOTI_PK_IDX' (UNIQUE) (Cost=1 Card
=1 Bytes=18)



Testing without that constraint:

owner@machine@instance> alter table notifications
2 drop constraint notipayment_ck
3 /

Table altered.

owner@machine@instance> update notifications
2 set noti_send = 'THIS_IS_INVALID!'
3 where noti_id = '20126';
update notifications
*
ERROR at line 1:
ORA-01401: inserted value too large for column


Using just one condition with that constraint:

owner@machine@instance> alter table notifications
2 add (
3 constraint notipayment_ck
4 check ( noti_payment <= noti_total )
5 )
6 /

Table altered.

owner@machine@instance> update notifications
2 set noti_send = 'THIS_IS_INVALID!'
3 where noti_id = '20126';
update notifications
*
ERROR at line 1:
ORA-01401: inserted value too large for column

Why? I have no idea...

Tom Kyte

Followup  

June 15, 2012 - 1:48 pm UTC

Because you are using unpatched software from over A DECADE AGO? Just a thought.

Have you tried it in 9.2.0.8 - the terminal release for 9iR2 put out many many many years ago?

Thanks

June 15, 2012 - 10:22 pm UTC

Reviewer: César Salas from México

Thank you, I have asked for the database to be updated.

Partitioned index unusable

August 26, 2012 - 3:23 am UTC

Reviewer: Rajeshwaran Jeyabal

Tom,

What privilege do I need in order to make this partitioned index unusable from schema 'B'?

a@ORA11GR2> create table t
  2  ( x number, y number)
  3  partition by list(x)
  4  ( partition p1 values (1),
  5    partition p2 values (2),
  6    partition pmax values (default) );

Table created.

Elapsed: 00:00:00.51
a@ORA11GR2>
a@ORA11GR2> create index t_ind
  2  on t(x) local;

Index created.

Elapsed: 00:00:00.20
a@ORA11GR2>
a@ORA11GR2> grant all on t to b;

Grant succeeded.

Elapsed: 00:00:00.09
a@ORA11GR2> connect b/b;
Connected.
b@ORA11GR2> alter index a.t_ind modify partition p1 unusable;
alter index a.t_ind modify partition p1 unusable
*
ERROR at line 1:
ORA-01418: specified index does not exist


Elapsed: 00:00:00.06
b@ORA11GR2> select * from a.t;

no rows selected

Elapsed: 00:00:00.04
b@ORA11GR2>
b@ORA11GR2> alter index a.t_ind unusable;
alter index a.t_ind unusable
*
ERROR at line 1:
ORA-01418: specified index does not exist


Elapsed: 00:00:00.25
b@ORA11GR2>

Tom Kyte

Followup  

August 29, 2012 - 1:12 pm UTC

http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_1010.htm#SQLRF00805

the SQL commands all start with prerequisites, including the necessary privileges to do things.
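To illustrate what those prerequisites amount to here: there is no object-level privilege that can be granted on an index itself (the earlier GRANT ALL ON T covers only the table), so altering an index in another schema generally requires the ALTER ANY INDEX system privilege. A sketch, to be verified against the linked documentation:

```sql
-- Sketch only: "grant all on t to b" does not allow B to alter A's
-- index, because there is no object-level grant on an index.
-- A DBA grants the system privilege instead:
grant alter any index to b;

-- Now, connected as B, this should succeed:
alter index a.t_ind modify partition p1 unusable;
```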

Will an index rebuild generate too much undo?

September 19, 2012 - 2:46 pm UTC

Reviewer: SreeGanesh from India

Hi Tom, during production hours I rebuilt a particular index, and after some time we found that our undo tablespace was demanding more space, which we kept increasing. My senior DBA said the cause of the undo segment growth was the index rebuild during production hours - that is, I did the index rebuild while queries were executing on that table. I really do not know the concept behind it. What is an online index rebuild - it is something "that we can rebuild online", right? How does it affect the undo tablespace? Please help me here...
Tom Kyte

Followup  

September 20, 2012 - 5:52 am UTC

an online index rebuild has to be able to remember the changes it needs to make to the rebuilt index - so yes, there will be undo, there will be redo, there will be resources being used.
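One way to quantify that on your own system (a sketch; MY_IDX is a placeholder index name) is to snapshot the session statistics around the rebuild:

```sql
-- Snapshot redo/undo statistics before the rebuild ...
select n.name, s.value
  from v$mystat s, v$statname n
 where n.statistic# = s.statistic#
   and n.name in ('redo size', 'undo change vector size');

alter index my_idx rebuild online;

-- ... then run the same query again; the deltas show how much
-- redo and undo the online rebuild generated in this session.
```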

some clarification

September 20, 2012 - 11:35 am UTC

Reviewer: A reader

Tom,

If I have a partitioned table with a local bitmap index, and I drop a few partitions using ALTER TABLE ... DROP PARTITION UPDATE INDEXES:

does the drop partition release space from both the table and the index and reduce the HWM,

or

is the HWM of both the index and the table kept the same, with just the segment being dropped?
Tom Kyte

Followup  

September 26, 2012 - 11:58 am UTC

if you drop, there is no high water mark anymore, the segment doesn't exist!

the partition is the segment, when you drop it - it is gone.

Not a review as such

October 01, 2012 - 4:29 am UTC

Reviewer: djb from UK

But SQL Server users are beginning to reach the same conclusions about the value of index rebuilds.

http://www.sqlservercentral.com/Forums/Topic1366088-263-1.aspx#bm1366413

Quote from the article leading to this discussion:-

"As Brent points out, index defragmentation is easy to do. We can configure SQL Server Maintenance Plans to rebuild all our indexes, on a schedule, regardless of whether they need it or not. Unfortunately, most of the rebuilds will be a waste of time, if not actively harmful. Ola Hallengren's scripts, or any script that interrogates the DMVs to check the fragmentation level, before rebuilding, offers a much better option. However, if the rebuild threshold is some arbitrary value across all indexes, then you'll still do a lot of needless rebuilding.
Index fragmentation can, of course, cause performance problems, but is actually not a huge issue for many indexes. Logical fragmentation (index pages in the wrong order) thwarts SQL Server's read-ahead mechanism and makes it less IO-efficient at reading contiguous pages on disk. However, this only really affects large range scans from disk, with the emphasis on large and disk. Reduced page density (many gaps causes by page splits and deletes) is a knottier form of fragmentation. If pages are "half full", they will take up twice the space on disk, and in memory, and twice the IO bandwidth to transfer the data. Again, though, this won't affect infrequently modified indexes."

creating index online and why

November 01, 2012 - 4:23 am UTC

Reviewer: A reader

hi tom,

I need to create an index, and the documentation mentions that creating an index online will allow concurrent DML.

--------

My question is

- If I am not able to use the "online" feature, will inserts be blocked while an index is being created on the table?

- And why does it need to block DML when creating an index?

Regards,
Noob
Tom Kyte

Followup  

November 01, 2012 - 10:13 pm UTC

yes, if you do not use online, all modifications will be prevented during the index build. that is why we have online as an option.


it needs to block DML when you do not use ONLINE by design. You have two options:

a) offline, uses less resources, does less work but does not permit modifications

b) online, uses more resources, does more work but does permit modifications.
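The difference is just the ONLINE keyword on the build (a sketch; T and T_IDX are placeholder names):

```sql
-- a) offline (the default): less work, but DML on T is blocked
--    for the duration of the build
create index t_idx on t(x);

-- b) online: concurrent DML is permitted; changes made during the
--    build are journalled and merged in at the end, hence more work
create index t_idx on t(x) online;
```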


creating index online and why

November 04, 2012 - 1:27 pm UTC

Reviewer: A reader

hi tom,

I have a table with a few million rows and I am doing something like:

select * From table
where id between somevalue and somevalue
and provider_id = 'xxx'
and status ='0';

-------------------
ID is the primary key and that is the only index I have.

1) If I just use the ID alone, it is always a FTS, which I believe is due to the large range given.

2) I am thinking of creating an index on (provider_id, status, id), but as above, I am not allowed to use the ONLINE option and I am afraid of a long lock that blocks all DML.

I am on 10g Standard Edition; partitioning isn't allowed either.
What should I do?

Regards,
Noob


Tom Kyte

Followup  

November 05, 2012 - 9:38 am UTC

1) it won't always be a FTS - why do you say that???? if the range was sufficiently small, it would use an index.

2) what should you do? I don't know. I don't know if your suggested index would be useful or not.

how many rows would this predicate return in general.

do you have a maintenance window in which to create this index if it is useful?

creating index online and why

November 05, 2012 - 10:20 am UTC

Reviewer: A reader

hi tom,

1) the range is huge, as relevant and non-relevant data share the same primary key range

2) the returned rows range from about 2-3k up to 8-9k rows, among the millions of rows in the table

however, as in 1), rows belonging to other "providers" share the same ID range, so even the relevant data is spread quite far apart within a given range.

3) we just passed our regular monthly maintenance window and the next one is next month, but the report must be out this week.

Just trying to see if there are other alternatives, as I am on 10g Standard Edition.
Tom Kyte

Followup  

November 05, 2012 - 10:28 am UTC

in 10g SE, index creates lock the table.


if you need the index, you'll have to create it offline (no concurrent modifications)

multi-block io

December 05, 2012 - 12:22 pm UTC

Reviewer: Jack Douglas from Maidenhead, England

Hi Tom

I've been having a long discussion on the DBA Stack Exchange site with a well known SQL Server expert (author of this blog: http://sqlblog.com/blogs/paul_white/ ) about index rebuilding.

We agree on the basics: only rebuild if you know you have to. His main contention is that rebuilding allows more read-ahead when range-scanning. Am I right that Oracle will use single-block IO for range scans, or has that changed in 11g?

If I'm wrong and Oracle does now use multi-block io in some cases, would you say that is now a factor in deciding whether a particular index should or should not be a candidate for rebuilding?

Kindest regards
Jack
Tom Kyte

Followup  

December 14, 2012 - 1:47 pm UTC

sql server advice doesn't really apply in Oracle.


why would a read ahead (which is NOT multiblock IO really - it is just reading ahead) be better served by a rebuild in general? what is the physics behind it? two leaf blocks in an index structure cannot be assumed to be next to each other - nor would they necessarily be next to each other after an index rebuild.

(and in a range scan, almost all of the IO's are against the table - indexes pack lots of rows per leaf block - in order to get hundreds/thousands of rows - you do very little IO against the index, lots against the table)

Why index space is not getting reused upon deletion?

August 21, 2013 - 5:18 pm UTC

Reviewer: Prashant from India

We have a table having 10 billion records.

It has 5 indexes, each sized between 250G and 650G.

Every day we insert approx 10 million rows.

Every day we delete approx 15-25% of that data, which is 1.5-2.5 million rows - mostly the ones created the same day, but a fraction of the deleted rows are older rows created in the past.

Despite so much deletion, the space used by these five indexes continues to grow at a consistent rate and we have to keep adding disk space.

After 4 months, when we rebuilt all the indexes, we found that the index size was reduced by 10-25%.

If the space was being reused, why was the index size reduced by almost the same percentage as the rows that were deleted?

Does a table where this much data is getting deleted come under those rare cases where index rebuilding might help?

Regards
Tom Kyte

Followup  

August 28, 2013 - 5:51 pm UTC

sounds like "sweeper" indexes.

use shrink or coalesce, not a rebuild - if needed.


if you are deleting 15-25% of the data every day

AND

space was not being reused

THEN

indexes would have decreased in size by a LOT more than 10-25%


don't you think? Meaning - space was being reused, if the indexes only shrank by 10-25% (which is by the way - the amount you delete everyday!!!!!!) - then space MUST have been reused very efficiently.

think about it! space *must* be getting reused and is actually getting reused pretty darn efficiently.
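For the record, the two lighter-weight alternatives mentioned above look like this (a sketch; MY_IDX is a placeholder name, and SHRINK SPACE assumes an ASSM tablespace):

```sql
-- coalesce: merges adjacent, sparsely populated leaf blocks so the
-- freed blocks can be reused by the index; no offline rebuild needed
alter index my_idx coalesce;

-- shrink: compacts the index and can return freed space to the
-- tablespace, again without taking the index offline
alter index my_idx shrink space;
```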

Rebuilding Partitioned Indexes

October 04, 2013 - 9:26 am UTC

Reviewer: KevB

Hi Tom,

just recently I have noticed this entry in our Alert.Log:

INDEX <name> or some [sub]partitions have been marked unusable;

This is a bitmap partitioned index.

So I ran the following:

ALTER INDEX <index_name> REBUILD PARTITION <partition_name> ONLINE;

It ran very quickly (in seconds) for a total of eight indexes that were marked unusable.

Once completed I ran:

SELECT index_name, status
FROM user_ind_partitions
WHERE status = 'UNUSABLE';

It returned no rows. The views *_INDEXES, *_IND_PARTITIONS and *_IND_SUBPARTITIONS also returned no rows, so I assumed that all indexes were usable.

However, the original error regarding unusable indexes has returned, yet the SELECTs I ran to find them still return no rows!

I am at a loss to figure out why this is happening!
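One thing worth checking (a sketch, assuming the index may be composite-partitioned): for subpartitioned indexes the real status lives at the subpartition level, so query all three levels in one pass - and run it as the index owner, since the USER_ views show only your own schema's objects:

```sql
-- Look for UNUSABLE segments at every level of the index
select index_name, null partition_name, null subpartition_name, status
  from user_indexes
 where status = 'UNUSABLE'
union all
select index_name, partition_name, null, status
  from user_ind_partitions
 where status = 'UNUSABLE'
union all
select index_name, partition_name, subpartition_name, status
  from user_ind_subpartitions
 where status = 'UNUSABLE';
```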

index rebuild needs 2x space

November 14, 2013 - 1:40 pm UTC

Reviewer: A reader

hi tom,

in the start of this thread you explained us that:

If you need to rebuild your indexes, you need 2x the space -- you'll have the old and the
new index for a period of time. If you do it online, you'll need additional space to
hold the changes that are made during the rebuild as well.


Just wanted to ask: does this still hold for an 11g database, given you explained the above in 2002?

rebuilding index using parallel option

August 05, 2017 - 2:21 am UTC

Reviewer: Rajeshwaran Jeyabal

Team:

Is rebuilding an index using the PARALLEL option equivalent to NOLOGGING (minimal redo generation)?

The below test is from 11.2.0.4 database.

demo@ORA11G> create index big_idx_01 on big_table ( object_id,object_name,object_type,owner );

Index created.

demo@ORA11G> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG

demo@ORA11G> column value new_val r
demo@ORA11G> select t2.value
  2  from v$sysstat t1, v$mystat t2
  3  where t1.statistic# = t2.statistic#
  4  and t1.name  = 'redo size';

     VALUE
----------
 329632520

demo@ORA11G>
demo@ORA11G> alter index big_idx_01 rebuild;

Index altered.

demo@ORA11G> select &r as prev_value,t2.value as curr_value,t2.value - &&r as diff , t2.value
  2  from v$sysstat t1, v$mystat t2
  3  where t1.statistic# = t2.statistic#
  4  and t1.name  = 'redo size';
old   1: select &r as prev_value,t2.value as curr_value,t2.value - &&r as diff , t2.value
new   1: select  329632520 as prev_value,t2.value as curr_value,t2.value -  329632520 as diff , t2.value

PREV_VALUE CURR_VALUE       DIFF      VALUE
---------- ---------- ---------- ----------
 329632520  653333092  323700572  653333092

demo@ORA11G> alter index big_idx_01 rebuild NOLOGGING parallel 2;

Index altered.

demo@ORA11G> select &r as prev_value,t2.value as curr_value,t2.value - &&r as diff , t2.value
  2  from v$sysstat t1, v$mystat t2
  3  where t1.statistic# = t2.statistic#
  4  and t1.name  = 'redo size';
old   1: select &r as prev_value,t2.value as curr_value,t2.value - &&r as diff , t2.value
new   1: select  653333092 as prev_value,t2.value as curr_value,t2.value -  653333092 as diff , t2.value

PREV_VALUE CURR_VALUE       DIFF      VALUE
---------- ---------- ---------- ----------
 653333092  654398772    1065680  654398772

demo@ORA11G> alter index big_idx_01 rebuild parallel 2;

Index altered.

demo@ORA11G> select &r as prev_value,t2.value as curr_value,t2.value - &&r as diff , t2.value
  2  from v$sysstat t1, v$mystat t2
  3  where t1.statistic# = t2.statistic#
  4  and t1.name  = 'redo size';
old   1: select &r as prev_value,t2.value as curr_value,t2.value - &&r as diff , t2.value
new   1: select  654398772 as prev_value,t2.value as curr_value,t2.value -  654398772 as diff , t2.value

PREV_VALUE CURR_VALUE       DIFF      VALUE
---------- ---------- ---------- ----------
 654398772  655489424    1090652  655489424

demo@ORA11G>

Connor McDonald

Followup  

August 06, 2017 - 8:43 pm UTC

It is still logged...but the redo will be generated by the *slaves*, so you need to look at all of them combined, eg

SQL> create table t as
  2  select d.* from dba_objects d,
  3    ( select 1 from dual connect by level <= 50 );

Table created.

SQL> create index ix on t ( owner, object_name );

Index created.

SQL> select * from v$sysstat where name = 'redo size';

STATISTIC# NAME                                                                  CLASS      VALUE    STAT_ID     CON_ID
---------- ---------------------------------------------------------------- ---------- ---------- ---------- ----------
       288 redo size                                                                 2  100449028 1236385760          0

SQL> alter index ix rebuild parallel 4;

Index altered.

SQL> select * from v$sysstat where name = 'redo size';

STATISTIC# NAME                                                                  CLASS      VALUE    STAT_ID     CON_ID
---------- ---------------------------------------------------------------- ---------- ---------- ---------- ----------
       288 redo size                                                                 2  329721508 1236385760          0
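To watch that happen live, one could sum the statistic across the query coordinator and its PX slaves while the rebuild is still running (a sketch, run from a second session during the rebuild):

```sql
-- Sum 'redo size' across all active parallel execution sessions
select sum(st.value) as px_redo
  from v$px_session px, v$sesstat st, v$statname sn
 where st.sid        = px.sid
   and sn.statistic# = st.statistic#
   and sn.name       = 'redo size';
```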



Rebuild Index

December 05, 2018 - 12:47 pm UTC

Reviewer: Raj from India

Hi Tom,

We have archive tables on which we perform Daily Housekeeping.
Every day as part of Batch our system performs below.
1) Drop index on few archive tables.
2) Move the records into these archive tables.
3) Create the Index on these tables
4) Delete old records from the tables.
5) Rebuild index on the tables again.

Is the above the correct approach?

Index creation and rebuild are taking 1 hour each.

We are doing it as below. There are around 10 tables.
Create Index IndexName on Table(column) Tablespace Tablespacename ONLINE;

Alter index IndexName REBUILD ONLINE;

Is there any way we can perform this faster?

Connor McDonald

Followup  

December 12, 2018 - 3:40 am UTC

I would do a benchmark on the options, eg

Benchmark 1

- alter index ... unusable
- load new data
- alter index ... rebuild

Benchmark 2

- load new data with indexes left on

And see which is faster, and also what impact each has on things such as redo etc.

The other thing to consider is using parallel capabilities to rebuild your indexes.
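A sketch of Benchmark 1 with a parallel rebuild (ARCHIVE_IDX is a placeholder name). One caveat worth noting: after a parallel rebuild the index retains its parallel degree, which can silently enable parallel query plans later, so reset the attributes afterwards:

```sql
alter index archive_idx unusable;
-- ... load the new data ...

-- nologging means the rebuild is not recoverable from the redo
-- stream - take a backup afterwards if that matters to you
alter index archive_idx rebuild parallel 8 nologging;

-- reset attributes changed by the rebuild
alter index archive_idx noparallel;
alter index archive_idx logging;
```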