Question and Answer

Chris Saxon

Thanks for the question, Lindsay.

Asked: August 23, 2000 - 5:22 pm UTC

Last updated: July 22, 2024 - 12:52 pm UTC

Version: 8.1.6.1

Viewed 10K+ times

You Asked

I have noticed that the HWM will only go down if the table is truncated. If I do the following will it lower the HWM:

CREATE TABLE temp AS SELECT * FROM table_name;

TRUNCATE table_name;

INSERT INTO table_name SELECT * FROM temp;

COMMIT;

This has been successful at times in lowering the HWM and other times not. I am wondering why the inconsistency?

Is there a better way?

I am measuring the used blocks with the following:

select count(distinct dbms_rowid.rowid_block_number(rowid) || dbms_rowid.rowid_relative_fno(rowid)) "Used"
from table_name;

Thanks



and Tom said...



That's not a good way to measure used blocks. It ignores block chaining and row migration completely. It ignores blocks that were used but aren't anymore (eg: put a million rows in a table, commit, delete from table, commit, run your process above -- it'll say "zero" but the used block count is actually quite high). It can be wildly inaccurate. See

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:231414051079
for a better way to measure used blocks in a table...

The CTAS/truncate/INSERT will lower the high water mark as much as possible (down to MINEXTENTS on the table anyway). It might appear to not work for you since your method of counting used blocks doesn't work (especially if you have rows that span blocks -- you are only counting the head pieces. If you do this with a table that has say a 4,000 byte width and you have a 2k blocksize -- a row might be on 3 blocks, you could be off by a factor of 3 or more in that case).

In 8.1 and up (which you have), look at ALTER TABLE T move storage (...) as a way to avoid lots of moves -- just remember to alter index rebuild for all indexes on the table afterwards. This'll work with RI and all intact.
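A minimal sketch of the move-and-rebuild approach described above (table and index names are hypothetical):

```sql
-- Rebuild the table in place, compacting it and lowering the HWM.
-- (Hypothetical names; the table is unavailable for DML during the move.)
ALTER TABLE t MOVE;

-- Moving the table changes every rowid, so all indexes on it
-- become UNUSABLE and must be rebuilt afterwards.
ALTER INDEX t_pk REBUILD;
ALTER INDEX t_idx1 REBUILD;
```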

Must be a strange application that you need to lower a HWM frequently -- perhaps a candidate for rolling partition windows? Sounds like you might be adding and purging lots of data -- perhaps a partition can be used more efficiently to avoid this? Also -- unless you full scan the table frequently -- why do you care about the HWM? We'll reuse that space over time so unless you full scan it lots -- why bother?




Rating

  (9 ratings)


Comments

A reader, August 17, 2004 - 9:50 am UTC

If we don't reset the HWM, will it have any performance issues?


Tom Kyte
August 17, 2004 - 10:18 am UTC

only if you full scan the object and most of the object is whitespace

or if you do large index range scans for lots of rows -- and you currently have like one row per block -- and the act of compacting the table would reduce the LIOs done by range scanning significantly (eg: you have to be getting more than just a couple of rows for this to be measurably beneficial)

DBA

Paul Song, November 01, 2005 - 4:21 pm UTC

Tom,

We have a need to shrink the HWM of a tablespace which is used by a group of mview logs. We cannot truncate the logs because of 24x7 usage. We delete all records in the mview logs after each refresh. However, after some major updates the tablespace became fully used and the HWM reached the upper limit. This made the tablespace unusable even after all old records were cleaned out. How can we lower the HWM without truncating the logs, keeping the tablespace online?

Thanks.

Tom Kyte
November 02, 2005 - 4:59 am UTC

you'd probably want to move the MV logs to a new tablespace (you need to rebuild them) - prior to 10g, there is no way to "shrink" an existing table.
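For readers on 10g or later: a segment in an ASSM tablespace can be shrunk online, avoiding the rebuild entirely. A sketch with a hypothetical table name:

```sql
-- Row movement must be enabled first, because shrinking
-- relocates rows and changes their rowids.
ALTER TABLE t ENABLE ROW MOVEMENT;

-- COMPACT moves the rows down but leaves the HWM where it is;
-- a subsequent plain SHRINK SPACE then lowers the HWM itself.
ALTER TABLE t SHRINK SPACE COMPACT;
ALTER TABLE t SHRINK SPACE;
```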

script to see the HWM on all the tables of a schema

Reene, April 01, 2008 - 9:06 am UTC

Hi Tom

is there a script to see the HWM of all the tables in a schema? does oracle store this information in some table?

Thanks
Tom Kyte
April 01, 2008 - 9:47 am UTC

search this site for

show_space


HWM

Srikanth Sathya, April 01, 2008 - 10:30 am UTC

Is HWM information stored in some table?

Thanks
Tom Kyte
April 01, 2008 - 12:04 pm UTC

if you gather statistics, you can see "blocks" in user_tables. It is current as of the last "gather"
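So a quick look across a schema, as of the last stats gather, is just a query against the standard USER_TABLES dictionary view:

```sql
-- BLOCKS = blocks below the HWM; EMPTY_BLOCKS = allocated above it.
-- Both are as of LAST_ANALYZED, not real time.
SELECT table_name, blocks, empty_blocks, last_analyzed
  FROM user_tables
 ORDER BY blocks DESC;
```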

HWM again

Reene, April 02, 2008 - 6:59 am UTC

Hi Tom

we gather stats daily, so is it ok to look at blocks in user_tables to see the HWM for a table?

if yes - how do we say that a table has a higher HWM than it should? what would be a good criterion?

the reason I am asking is that we have several applications running off our database - and some sql programs do lots of deletes - so I want to run a script to see the HWM and find such tables.
all_tab_modifications is another view I tried.

thanks
Tom Kyte
April 02, 2008 - 8:10 am UTC

you shouldn't need to monitor tables to find "hwm that are unacceptable"

First of all, in 10g - the segment advisor does that, you don't need to.

Second of all, it would only matter for large tables that you full scan (frequently). And given that you as a dba should know something about the data you are managing, you'd be able to list off the set of tables you need to watch - for you would understand that they are frequently loaded big, then massively purged. And then you would start looking for ways to avoid this DELETE-based purge process, using partitioning for example (so that there is never an issue)

In other words, you should already know what to look at.


I have *never* monitored for this.


So, unless you full scan large tables frequently, put this to bed - forget it, do not go this path.
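A sketch of the partition-instead-of-purge idea from above, with a hypothetical range-partitioned table. Dropping a partition is DDL, so it releases the segment immediately instead of leaving empty blocks below a high HWM:

```sql
CREATE TABLE sales (
  sale_date  DATE,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2008_q1 VALUES LESS THAN (DATE '2008-04-01'),
  PARTITION p_2008_q2 VALUES LESS THAN (DATE '2008-07-01')
);

-- Purge a whole period without DELETE: the segment is gone,
-- so there is no high water mark left behind to worry about.
ALTER TABLE sales DROP PARTITION p_2008_q1;
```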

is there a way to monitor it using a sql

Reene, April 02, 2008 - 9:46 am UTC

Hi Tom

ok, understood, but just curious: is there a simple way to monitor for tables with a very high HWM?
I searched metalink too; there I saw a very big sql I could not quite understand.
show_space tells the HWM, but it does not tell you the tables for which the HWM is inappropriately high.

in my environment I can see a lot of sqls which are deleting, and so the select queries off those tables take quite some time and resources too.

while I plan to change the code and other things, I am trying to put an automated process in place to spot such tables, so this sql will help as an intermediate step.

thanks
Tom Kyte
April 02, 2008 - 10:24 am UTC

you have other metrics in that view as well. If the number of rows times the average row length is much less than the size of the table (YOU decide what "much less" means in this context), then you know that most blocks are empty or almost empty.


ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select num_rows, avg_row_len, round( num_rows*avg_row_len/(blocks*8192) * 100 ) pct_utilized from user_tables where table_name = 'T';

  NUM_ROWS AVG_ROW_LEN PCT_UTILIZED
---------- ----------- ------------
     49889          93           80

ops$tkyte%ORA10GR2> delete from t where mod(object_id,2) = 0;

24932 rows deleted.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select num_rows, avg_row_len, round( num_rows*avg_row_len/(blocks*8192) * 100 ) pct_utilized from user_tables where table_name = 'T';

  NUM_ROWS AVG_ROW_LEN PCT_UTILIZED
---------- ----------- ------------
     24957          93           40



easy to understand

Reene, April 03, 2008 - 1:28 am UTC

Thanks so much for this .

it is now so easy to understand.

Thanks

alter table move storage ()

nagesh, June 19, 2008 - 6:05 pm UTC

The alter table command helped to shrink.
We have a few dozen queue tables. Only one of them is single consumer. On that table we have to shrink every other month, otherwise dequeue takes about 15 seconds; once shrunk it takes about 15 milliseconds.

Any reason for the behavior? All tables are 8i compatible.

Is a TRUNCATE a magic pill to clean waste of block-storage by single row append hint INSERTS

TomS_Ott, July 19, 2024 - 4:59 pm UTC

Every hour I truncate then do single INSERTs with the append hint into a temp table. This I presume is very wasteful of space, since the average byte size of my rows is much smaller than the db's block size (8192 bytes). Btw I am not concerned about the slower speed. I'm not looking for an elegant solution that will involve a lot of code refactoring, so I hope you will turn a blind eye to why my code is this way. I just want to know if each call to truncate is a magic pill that resets the HWM and/or cleans up this wasteful use of blocks.
Chris Saxon
July 22, 2024 - 12:52 pm UTC

i hope you will turn a blind eye to why my code is this way

The current process is very messy, so it's kinda hard to ignore it! Why exactly is it this way and why the resistance to looking into it?

I just want to know if each call to truncate is a magic pill that resets the HWM

Yes, by default truncate resets the HWM and frees blocks above minextents, as Tom said above.
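To make the default explicit, the two storage clauses of TRUNCATE behave like this (hypothetical table name):

```sql
-- DROP STORAGE is the default: deallocates extents above
-- MINEXTENTS and resets the HWM to the start of the segment.
TRUNCATE TABLE t DROP STORAGE;

-- REUSE STORAGE keeps the allocated extents (handy if you are
-- about to reload a similar volume), but still resets the HWM.
TRUNCATE TABLE t REUSE STORAGE;
```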
