Greg, April 03, 2008 - 8:49 am UTC
Unless you run:
exec dbms_stats.gather_table_stats('USER','TABLE_NAME');
In that case, won't the index be recalculated?
April 03, 2008 - 8:12 pm UTC
umm, gather stats will never reduce (or increase) the SIZE of any segment, no.
The redo and undo reasons
Doug, April 03, 2008 - 10:06 am UTC
I guess that rings a bell. But what the undo and redo reasons themselves are is what I had in mind. If I'm a transaction, I know which segments occupy the space, and I know the space is clear - is it merely because someone could roll it back that I have to start fresh with new space for my inserts?
April 03, 2008 - 9:07 pm UTC
it is just that indexes are horribly complex beasts and the space cannot be reused until the transaction that freed the space commits. It is not "someone could roll it back" it is that "the freeing transaction can roll it back"
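A quick way to see this rule in action is a sketch like the following (hypothetical table and index names; exact sizes will vary with block size and data):

```sql
-- Hypothetical demo: delete and re-insert 100,000 rows in ONE transaction.
create table t as
  select rownum id, rpad('x', 100) pad
    from dual connect by level <= 100000;
create index t_idx on t(id);

-- Because the delete has not committed, the space it frees in T_IDX
-- is not reusable by the insert in the same transaction, so the
-- index roughly doubles in size.
delete from t;
insert into t
  select rownum, rpad('x', 100)
    from dual connect by level <= 100000;
commit;

-- Inspect the result (populates the one-row INDEX_STATS view):
analyze index t_idx validate structure;
select lf_blks, del_lf_rows from index_stats;
```

Running the same delete and insert as two separate committed transactions would let the insert reuse the freed space, and the index would stay close to its original size.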
Likewise for LOBs
AJ Allen, April 03, 2008 - 1:22 pm UTC
As I recall, there is a similar phenomenon for update and delete on out of line LOBs. When we update or delete an out of line LOB, undo is not generated. Instead a second version of the LOB is stored and the location of the previous version of the LOB is remembered. When the transaction commits, the previous version of the LOB is released (disremembered, if you will) and the new version of the LOB becomes current. So, the space freed by deleting a LOB cannot, likewise, be reused in the same transaction.
Or did I miss completely??
April 03, 2008 - 9:18 pm UTC
undo is always generated - it is just for lobs that undo is managed in the lob data itself (pctversion controls it)
but yes, you are correct - the versioning of lobs would not permit the modified chunks to be reused in the same transaction.
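For reference, PCTVERSION is set in the LOB storage clause. A minimal sketch (the table and column names are made up for illustration):

```sql
-- Hypothetical table with an out-of-line LOB. PCTVERSION reserves a
-- percentage of the LOB segment to hold old (consistent-read) versions
-- of modified chunks, instead of storing that undo in the undo tablespace.
create table docs (
  id   number primary key,
  body clob
)
lob (body) store as (
  disable storage in row   -- force out-of-line storage
  pctversion 10            -- keep up to 10% of the segment for old versions
);
```

In later releases a time-based RETENTION setting can be used in place of PCTVERSION, but the versioning behavior described above is the same.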
Want to make sure I understand ...
Phil Singer, April 03, 2008 - 6:59 pm UTC
With regard to the indexes (and not LOBs):
a) You delete 100,000 rows from a table with a freshly rebuilt index. Those 100,000 values will leave "holes" scattered all over any associated indexes.
b) You then insert another 100,000 rows. Until yesterday, if you had given me this as a quiz, I would have said that these rows would be able to reuse those holes. Some splits would still be needed, but the indexes would only grow a little bit.
Now that you have given me a hint that something else is happening, and I've been trying to figure out what it could be. I thought it might be that other transactions need the index -- but this can't depend on other transactions.
My thought: if the transaction is rolled back, you don't want to just re-insert the rows and reconstruct the index, since then the indexes' statistics would not be correct. How close am I?
Also -- if every delete of 100,000 rows followed by an insert of 100,000 rows doubled the size of an index, these indexes would seem to be good candidates for an index reorg. But I think that a second delete and insert of 100,000 rows would be able to use the "holes" created by the first, and the index would only grow a little bit. Please let me know if I am right.
April 03, 2008 - 9:30 pm UTC
how close are you? not very really. but don't worry - the ONLY THING you and I really need to know is:
space released in an index in transaction "X" cannot be reused until transaction "X" commits.
that is all - that is all we really need to know.
as for your last paragraph, depends on how you use the index. for range scans that access rows in the tables - you are likely correct. if you MOSTLY fast full scanned the index - you would be a rebuild candidate.
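One way to inspect the deleted-space picture being described here (a sketch; `t_idx` is a placeholder index name):

```sql
-- ANALYZE ... VALIDATE STRUCTURE populates the INDEX_STATS view
-- for the analyzed index (one row, overwritten on each analyze).
analyze index t_idx validate structure;

select lf_rows,                                  -- leaf rows, including deleted
       del_lf_rows,                              -- leaf rows marked deleted
       round(100 * del_lf_rows / lf_rows, 1) pct_deleted
  from index_stats;

-- A persistently high pct_deleted mostly matters if the index is
-- fast full scanned (more blocks to read); for range scans the
-- holes are typically reused by later inserts.
```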
Truncate and fill up again
Readr, April 04, 2008 - 9:58 am UTC
So,
If I have a daily job which starts by
truncating table t (which has an index on it t(k))
insert into table t
do this....
insert again in t
do this ...
and insert into t
select from a,b, t
....
where t.k = :x;
Commit;
Do you suggest reviewing this batch with regard to the index on table t?
Thanks
April 04, 2008 - 12:30 pm UTC
you are not doing a delete, there is no issue with space utilization here.
I would recommend using /*+ APPEND */ in the last insert as select - it'll make the index maintenance easier and can skip undo generation for the table.
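A sketch of what that last insert might look like with the hint (using the table names from the question; the select list is elided as in the original):

```sql
-- Direct-path insert: writes new blocks above the high-water mark,
-- can skip undo generation for the table data (undo for the index
-- is still generated), and does the index maintenance in bulk at
-- the end of the load rather than row by row.
insert /*+ APPEND */ into t
select ...
  from a, b, t
 where t.k = :x;

commit;  -- required before this session can query t again after a direct-path insert
```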
A reader, April 07, 2008 - 4:47 am UTC
Thanks for your answer.
The table t is truncated every day and then is filled with a lot of rows.
Our DBAs have implemented a job which starts every day at 06:00. This job checks whether a table has changed by about 20% and, if so, recalculates its statistics. (I know that this is not a good idea, but as a developer I can't do anything to influence our DBAs.)
This means that when my job runs again the next night and starts by truncating the table, its statistics are not up to date during the job's execution. Since I am doing selects on table t during that same job, would you recommend deleting the table stats using dbms_stats.DELETE_TABLE_STATS just after the truncate?
Regards
April 07, 2008 - 9:31 am UTC
using dbms_stats.set_table_stats to set correct, representative statistics would likely be more reasonable.
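A sketch of what that might look like right after the truncate (the numbers below are placeholders - use values representative of the fully loaded table):

```sql
-- Set statistics that describe the table AFTER the nightly load,
-- so plans stay stable even while the table is momentarily empty.
begin
  dbms_stats.set_table_stats(
    ownname => user,
    tabname => 'T',
    numrows => 100000,   -- representative row count (placeholder)
    numblks => 1000,     -- representative block count (placeholder)
    avgrlen => 100       -- average row length in bytes (placeholder)
  );
end;
/
```

Locking the statistics afterwards (dbms_stats.lock_table_stats) would also keep the 20%-change job from overwriting them.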
can you please be more specific about it...?
jian huang zheng, March 11, 2009 - 12:34 pm UTC
Hi Tom
As you said in the beginning of this thread:
That is, if you
a) delete from 100,000 row table
b) insert 100,000 rows into table
c) commit
each index will be about twice as large after the operation as it was before.
This is just a fact, the fact has to do with read consistency needs and rollback needs.
can you please be more specific about the read consistency needs and rollback needs?
thanks!
March 12, 2009 - 12:21 pm UTC
it is what it is - in a single transaction, the index space you "free up" for a non-unique index is not reusable, not until you commit.