What is HWM ....
Sandip, May 18, 2001 - 4:38 pm UTC
Still the definition of HWM is lacking.
HWM
Monica, oraclequeries@hotmail.com, May 20, 2001 - 3:40 am UTC
The High Water Mark definition itself is missing... and required; please provide it if you can.
HWM
vivek, November 19, 2001 - 6:33 pm UTC
What I understand about HWM (INFORMAL) is that it is a mark
to identify how many blocks held data at least once.
This mark will be changed/refreshed only if
1.) the table is dropped
2.) the table is truncated
or
more data is inserted ( the HWM will increase ) -- see the sketch after this note
Thanks,
Vivek
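A minimal sketch of the point above (the table name T, the use of ALL_OBJECTS as a row source and the autotrace approach are illustrative assumptions, not from the original post): a DELETE leaves the HWM where it was, so a full scan still reads the same number of blocks, while a TRUNCATE resets it.
create table t as select * from all_objects;
set autotrace traceonly statistics
select count(*) from t;        -- note "consistent gets": the blocks scanned up to the HWM
delete from t;
commit;
select count(*) from t;        -- about the same consistent gets: we still scan up to the old HWM
truncate table t;
select count(*) from t;        -- consistent gets drop to a handful: truncate reset the HWM
set autotrace off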
I See
A reader, December 26, 2001 - 8:23 pm UTC
excellent explanation!
Good example code showing High Water Mark
Chris Barr, February 21, 2002 - 2:51 pm UTC
Thanks, Tom, for 3-step example showing query results before & after Delete and Truncate ...
I believe I finally "got it" on High Water Mark.
Crystal Clear
Subhro, September 05, 2002 - 9:49 am UTC
HWM was always something that evaded my understanding. Today, ran a search on this site on HWM and now I am HWM aware. Thanks...
In fact, I wanted to ask you all sorts of questions but as I was writing them, I answered all of them... myself...
I wonder if other databases have similar concepts???
Reader
A reader, October 14, 2002 - 5:35 pm UTC
Tom,
RDBMS 8.1.6
DB_BLOCK_SIZE 8192
1. I created a table ( 1st extent 1M next 1M )
2. Inserted 100+ rows
3. Analyzed table
BLOCKS = 2 EMPTY = 127
4. From DBA_EXTENTS
BLOCKS = 130
1 for OS , 1 for segment header, 128 for segments
5. value for '_bump_highwater_mark_count' = 0 , default of 5
6. show_space shows
freelist = 1
I expected that 5 blocks would be put on the freelist ( the default for the HWM bump )
every time Oracle bumps the HWM, but there is only one on the freelist.
Could you explain this for us?
Thanks
SQL> create table t_r (x varchar2(2000)) tablespace RPTSVR
2 storage
3 ( initial 1m
4 next 1m
5 pctincrease 0
6 );
Table created.
SQL> begin
2 for k in 1..111
3 loop
4 insert into t_r values (rpad('*',100,'*'));
5 end loop;
6 commit;
7 end;
8 /
PL/SQL procedure successfully completed.
SQL> analyze table t_r compute statistics;
Table analyzed.
SQL> select TABLE_NAME,BLOCKS,EMPTY_BLOCKS
from dba_tables where table_name = 'T_R';
TABLE_NAME BLOCKS EMPTY_BLOCKS
------------------------------ ---------- ------------
T_R 2 127
SQL> select EXTENT_ID,FILE_ID,BLOCK_ID,BYTES,BLOCKS
from dba_extents where SEGMENT_NAME = 'T_R';
EXTENT_ID FILE_ID BLOCK_ID BYTES BLOCKS
---------- ---------- ---------- ---------- ----------
0 19 2 1064960 130
SQL> select KSPPSTVL from x$ksppcv, x$ksppi
where x$ksppi.INDX = x$ksppcv.INDX and
ksppinm = '_bump_highwater_mark_count';
KSPPSTVL
----------------------------------------------------
0
SQL> exec show_space('T_R');
Free Blocks.............................1
Total Blocks............................130
Total Bytes.............................1064960
Unused Blocks...........................127
Unused Bytes............................1040384
Last Used Ext FileId....................19
Last Used Ext BlockId...................2
Last Used Block.........................3
PL/SQL procedure successfully completed.
October 14, 2002 - 8:20 pm UTC
what is this _bump_blah blah blah thing. did support ask you to use that? if so, why? I've never heard of them asking for someone to use that.
Since its meaning is 100% not relevant, we'll ignore its very existence.
but anyway -- your test doesn't test what you wanted to test. It could very well have bumped the HWM by 5 -- you filled and used 4, leaving one there. You would have to now iteratively add a row, commit, measure, add a row, commit, measure -- to force it to advance the HWM and see what happens THEN.
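A sketch of that iterative add-a-row / measure approach (it assumes the single-argument show_space procedure used earlier in this thread is installed and serveroutput is on; it is not from the original follow-up):
set serveroutput on
begin
    for i in 1 .. 20
    loop
        insert into t_r values ( rpad('*',100,'*') );
        commit;
        dbms_output.put_line( '--- after inserting row ' || i || ' ---' );
        show_space( 'T_R' );    -- measure after every single-row insert/commit
    end loop;
end;
/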
Tom, Thanks very Much
A reader, October 14, 2002 - 10:31 pm UTC
SQL> exec show_space('T_R');
Free Blocks.............................1
Total Blocks............................128
Total Bytes.............................1048576
Unused Blocks...........................122
Unused Bytes............................999424
Last Used Ext FileId....................5
Last Used Ext BlockId...................6881
Last Used Block.........................6
PL/SQL procedure successfully completed.
SQL> insert into t_r values (rpad('*',1000,'*'));
1 row created.
SQL> commit;
Commit complete.
SQL> exec show_space('T_R');
Free Blocks.............................5
Total Blocks............................128
Total Bytes.............................1048576
Unused Blocks...........................117
Unused Bytes............................958464
Last Used Ext FileId....................5
Last Used Ext BlockId...................6881
Last Used Block.........................11
PL/SQL procedure successfully completed.
SQL> select count(*) from t_r;
COUNT(*)
----------
36
It seems that for the first 5 blocks Oracle allocates 1 block at a time, and after
those first 5 it bumps the HWM by 5 blocks at a time, putting them on the
freelist.
Thanks
October 15, 2002 - 8:02 am UTC
Ummm, disagree:
SQL> exec show_space('T_R');
Free Blocks.............................1
Total Blocks............................128
Total Bytes.............................1048576
Unused Blocks...........................122
128 blocks - 122 Unused blocks = 6 blocks "used". 1 block is for the segment header, which means there were 5 blocks in the table. You filled 4 of them all of the way with data and only filled part of the 5th block.
It was *always* 5 blocks.
p001, February 12, 2003 - 10:14 pm UTC
One of my packages truncates a table A once per every 2 hours and then inserts and updates the table with the required data.
1. Truncate table A;
2. Insert into table A;
3. Update table A:
4. No delete statements.
The current size of the table is 340 MB. When I create a copy of this table using
Create table B nologging as select * from A;
Then the target table B size is around 24 MB.
Why was my original table size 15 times higher than the actual size? How can I fix this problem?
Please excuse me if it's a basic question.
February 12, 2003 - 10:40 pm UTC
The original storage parameters of A were such that the initial, next, pctincrease, minextents said "Table A should allocate 340 mb"
The storage parameters of B are different. Consider:
ops$tkyte@ORA817DEV> create table a ( x int ) tablespace users storage ( initial 340m );
Table created.
ops$tkyte@ORA817DEV> select sum(bytes) from user_extents where segment_name = 'A';
SUM(BYTES)
----------
357040128
ops$tkyte@ORA817DEV> create table b tablespace users as select * from a;
Table created.
1* select sum(bytes) from user_extents where segment_name = 'B'
ops$tkyte@ORA817DEV> /
SUM(BYTES)
----------
524288
so, look at A's initial, next, pctincrease, minextents -- that'll tell you what truncate will take it back to.
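For example, a quick way to see what TRUNCATE would take the table back to (a sketch; the table name A is taken from the example above):
select table_name, initial_extent, next_extent, min_extents, pct_increase
  from user_tables
 where table_name = 'A';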
A reader, February 13, 2003 - 1:28 pm UTC
Thanks for your quick answer.
I verified the initial extents for both Table A and B and it was 65536.
Here are the DDLs for my tables
1.
CREATE TABLE A (
Col1 NUMBER (18),
Col2 NUMBER (18),
Col3 NUMBER (18),
Col4 VARCHAR2 (4000),
Col5 VARCHAR2 (4000),
... other columns
...
)
TABLESPACE TOOLS NOLOGGING
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 65536
NEXT 65536
PCTINCREASE 0
MINEXTENTS 1
MAXEXTENTS 10000
FREELISTS 1 FREELIST GROUPS 1 )
NOCACHE;
2.
CREATE TABLE B (
Col1 NUMBER (18),
Col2 NUMBER (18),
Col3 NUMBER (18),
Col4 VARCHAR2 (4000),
Col5 VARCHAR2 (4000),
... other columns
...
)
TABLESPACE TOOLS NOLOGGING
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 65536
NEXT 65536
PCTINCREASE 0
MINEXTENTS 1
MAXEXTENTS 10000
FREELISTS 1 FREELIST GROUPS 1 )
NOCACHE;
I am still not sure why table A used 340 mb and table B used 24 MB.
February 13, 2003 - 5:17 pm UTC
you'll need to give me a full test case with which to reproduce. Not that I doubt you but ....
a) truncate by default releases storage (shrinks the table back to its initial extent or initial + minextents extents -- and since you say minextents is 1....)
b) the insert would not balloon up to 340meg.
So, I think if you study your process step by step by step -- (to give me a test case to reproduce with) you'll find what part diverges from what you think is happening (or we'll have a test case from which to dissect and figure out what is happening)
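On point (a) above, a sketch of the two flavours of TRUNCATE for reference (table name A is from the example; this is standard syntax, not from the original answer):
truncate table a;                  -- default (DROP STORAGE): extents are deallocated back to MINEXTENTS
truncate table a reuse storage;    -- keeps all allocated extents; only the HWM is reset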
On a similar vein....
A reader, February 19, 2003 - 10:02 am UTC
Hi Tom,
I'm pretty new to the DBA role, so excuse me if this question is obvious, but I'm a bit perplexed by the results of show_space for this setup. I need to know what you recommend changing for this (relatively old) application storage setup.
SQL> CREATE TABLESPACE TEST DATAFILE 'C:\TEMP\TEST.DBF' SIZE 100M REUSE autoextend on EXTENT MANAGEMENT LOCAL
2 UNIFORM
3 size 100k ONLINE ;
Tablespace created.
SQL> CREATE TABLE TEMP ( A VARCHAR2(10) ) TABLESPACE TEST;
Table created.
Okay, I've created the table. The HWM (as I expect) is 1.
SQL> exec show_space('TEMP');
Shows segment information.
Tablespace..................................................TEST
Tablespace Block Size.......................................8192
Num Extents.................................................1
Total Blocks................................................13
Total Bytes.................................................106496
Last Used Block (HWM).......................................1
Unused Blocks (blocks above the HWM)........................12
Unused Bytes................................................98304
Free Blocks (blocks on the freelist)........................0
PL/SQL procedure successfully completed.
If I now populate with loads of data...
SQL> BEGIN
2 FOR i IN 1..100000 LOOP
3 INSERT INTO temp VALUES ('XXXXXXXXXX');
4 END LOOP;
5 END;
6 /
PL/SQL procedure successfully completed.
SQL> EXEC SHOW_SPACE('TEMP');
Shows segment information.
Tablespace..................................................TEST
Tablespace Block Size.......................................8192
Num Extents.................................................18
Total Blocks................................................234
Total Bytes.................................................1916928
Last Used Block (HWM).......................................5
Unused Blocks (blocks above the HWM)........................8
Unused Bytes................................................65536
Free Blocks (blocks on the freelist)........................5
PL/SQL procedure successfully completed.
Can you explain to me why the total blocks for the segment has gone to 234, but the HWM is still only 5 (or am I reading this wrong)?
Thanks for all your invaluable help.
February 19, 2003 - 1:57 pm UTC
the unused blocks are the blocks above the HWM
last used block -- is not the "hwm"
the high water mark (number of blocks below the HWM) is total-blocks minus unused blocks
so, there are 234-8 blocks under the HWM here.
A reader, March 02, 2003 - 11:12 pm UTC
Very informative
Why does Oracle Maintain a HWM..?
A reader, March 26, 2003 - 11:52 pm UTC
We have been reading about the HWM, but why does Oracle maintain a HWM that is not the number of blocks currently in use? Why does it need to know the number of blocks that were ever used by a table?
March 27, 2003 - 7:37 am UTC
Well, quite simply put -- they are the blocks that ARE in use by the table. We need that so that if you issue:
select * from T;
we know the set of blocks which we MUST scan in order to retrieve the data for you.
They are not currently in use
A reader, March 27, 2003 - 9:41 pm UTC
Hi,
The blocks are not currently in use by that table; the data has been deleted and the delete has been committed.
Hence, while scanning, we should scan only the blocks which currently have data, not blocks which merely had data at one point in time.
I still can't comprehend why we are scanning the blocks which are not currently in use. Can you please elaborate?
Thanks
March 28, 2003 - 6:58 am UTC
Hence -- tell me -- how the heck do we know the blocks are empty?
You issue :delete from t;
Whilst another session(s) issues: insert into t
would you like us to lock the entire table for the delete, preventing any other DML from occurring? Cause if we didn't -- we would have to, at the end of the delete, LOCK the table (to prevent DML from occurring) and then SCAN the entire table to see "oh, did anyone insert while we were deleting?".
If you would like this behavior -- that's ok, we have it -- it is called TRUNCATE.
The blocks *are in use*. The table is a complex data structure, it is not a fifo/lifo queue or anything like that -- it is a big old heap. Once used -- implies "could still be used".
Still can't understand it
A reader, March 28, 2003 - 10:12 pm UTC
Hi tom,
I still don't get it. I can understand you getting mad at me, as you have said it is a complex structure, not a lifo/fifo methodology. I could understand the concept if I could understand why it needs to do that.
If the data has not been committed I can understand the blocks being read, but if the delete has been committed, only the blocks currently holding data should be read, as the others are not in use.
If we take just 2 sessions:
1) delete from <table1>
deletes rows from blocks 1-25 (hypothetical)
2) insert into <table1> select * from table2
this would insert into blocks 26 to 50 (hypothetical).
We have a third session which is doing a
select * from <table1>
If the first delete has not been committed and the insert has been, it needs to scan blocks 1 to 50; but if the first delete has been committed then it should just read blocks 26 to 50, as they are the ones in use.
Can you please explain in a little more detail?
Thanks for your reply
March 29, 2003 - 9:39 am UTC
why -- who is to say that blocks 1..50 do not have any data on them???
We do not micro-manage blocks like that -- think about the overhead. What you propose would have us look at a table with a row per block to see if we should read that block or not.
The database table is an extremely complex data structure. It is not a simple thing. You are thinking it is a very simple thing.
You cannot just take 2 sessions -- you must take worst case, you must take every case.
We do not delete by block -- we delete by row. We do not manage space in the simplistic manner that would allow something like this.
Some more info pls
A reader, March 30, 2003 - 5:28 pm UTC
Hi Tom,
We do not need to look at each row, but the blocks could be read based on the freelist. I am not trying to propose a solution for the HWM, but I need to know why we have something like the HWM at all, what overhead it reduces and what the advantages are, as up till now I have read more about the disadvantages of the HWM. Can you point me to any documentation that gives details about the complex table structure and why the HWM is used?
Since most of my queries have always been answered by you, that's why I am invariably depending on you. :)
Thanks in Advance
March 30, 2003 - 5:59 pm UTC
That a block is on the freelist by no means implies "it is empty". It just means it has "some space". A block on the freelist typically (more often than not) HAS data on it.
Once a block is formatted and used by the table -- it must be assumed that it contains data -- for there is nothing preventing it from having data.
Now, bear in mind -- what does this affect? Full scans only (has NO relevance for index reads whatsoever). So, how many times do you full scan -- or better yet, what type of systems full scan? Data warehouses. Do they suffer from this issue? Most generally "no". the DW is filled typically by:
o appending, never delete
o rolling windows with partitions (aging of data, no deletes)
o merging in data (update existing add new) -- no delete
So the systems that would be affected by this are not.
Again, we are back to "if you are going to empty the table, and you want to let oracle know decisively 'these blocks do not contain data', then TRUNCATE is the command you are looking for".
Just because stuff is on the freelist for a table does not mean "empty". We have to look at them during a full scan or risk giving you the wrong answer.
If you have my book - I go over the physical structure of tables, blocks, extents, etc. The concepts guide does as well. Short of dumping the blocks and reverse engineering them (not something I'm recommending, bear in mind) that's it.
Think of the advantages during DML -- think of the advantages during a full scan even. What if every other block was "empty" and not to be read? Think what that would do to a full scan (the thought of a manual transmission car being driven by someone who has only used automatic transmissions comes to mind -- go/stop/go/stop/go/stop). You would need two freelists -- "sort of empty and truly 100% empty". You would need to micromanage blocks -- you would have to consult this "truly 100% empty" freelist (think of them as rows) and subtract them from your list of blocks to scan -- instead of "scan extent 5", you would have to do "scan extent 5 except for these 1,000 scattered blocks".
Why so much discussion on HWM?
John, March 31, 2003 - 2:52 am UTC
Hi Tom,
Why do so many people worry about HWM? Is it just a mental exercise to satisfy one's curiosity, or does knowing it have more practical use than I am aware of? All I care to know is that a full table scan will scan up to the HWM. I don't recall ever having to manipulate the HWM.
March 31, 2003 - 8:28 am UTC
It is probably just to gain understanding. Many people apply Access or dBase concepts to Oracle and that just doesn't work -- apples and toaster oven comparison.
In general, you are correct -- all you need know is
a) there is a HWM
b) full scans read up to the high water mark
and maybe
c) this becomes most obvious to people running the RBO who like to constantly issue "select count(*) from t" just to see how many rows this table might contain". They will constantly full scan upto the HWM -- they should use the CBO which will index full/fast full scan instead -- removing this problem (or just stop counting the rows!)
Tim, March 31, 2003 - 9:12 am UTC
"c) this becomes most obvious to people running the RBO who like to constantly
issue "select count(*) from t" just to see how many rows this table might
contain". They will constantly full scan upto the HWM -- they should use the
CBO which will index full/fast full scan instead -- removing this problem (or
just stop counting the rows!) "
How is the index full/fast full scan better? examples please.
March 31, 2003 - 9:27 am UTC
big_table@ORA920> set autotrace on
big_table@ORA920> select count(*) from big_table;
COUNT(*)
----------
3817191
1 row selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=806 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FAST FULL SCAN) OF 'BIG_TABLE_PK' (UNIQUE) (Cost=806 Card=3816192)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
8862 consistent gets
8844 physical reads
0 redo size
382 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
big_table@ORA920> select /*+ RULE */ count(*) from big_table;
COUNT(*)
----------
3817191
1 row selected.
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: RULE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'BIG_TABLE'
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
54699 consistent gets
54684 physical reads
0 redo size
382 bytes sent via SQL*Net to client
499 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
big_table@ORA920> set autotrace off
big_table@ORA920>
Probably that's what you require, but everyone's requirements are different
A reader, March 31, 2003 - 8:32 pm UTC
Hi Tom, Rather John
"Why do so many people worry about HWM? Is it just for mental exercise to satisfy
one's curiosity or does knowing it have more practical use than I am aware of?
All I care to know is that full table scan will scan till HWM. I don't recall
ever have to manipulate HWM. "
The full table scans you are performing must be on small tables;
try performing full table scans on tables with millions of rows. When it comes to tuning a database, I think even a few seconds make a difference.
"Yes, it's faster than before, the data does come up quickly now" -- a lot of users will give you this reaction even if the data comes up only a few seconds faster. And as a proper learning process, I like to know why it is done this way so that we can try to exploit it in a better way, rather than having half-knowledge.
Moreover, what will your reaction be when you try to explain to BAs and project managers that it is doing a full table scan and the insert statement is writing above the HWM because we have used an APPEND hint to increase the performance of the upload? All they will ask is "why is the HWM maintained?" -- what do you do? Raise your two hands in the air and say "don't know, ask Oracle"? I certainly don't like to do that.
Thanks tom for the info
HWM on tablespaces
Trevor, May 14, 2003 - 3:41 am UTC
Tom there is a script on www.oracle-base.com
used to rebuild indexes it talks about a high water mark
on tablespaces..
"Rebuilding indexes within the tablespace raises the high water mark on the tablespace. This chews disk space which cannot be easily recovered. Unix)."
I know about the table HWM, but what the heck is a tablespace
HWM and what are its impacts?
May 14, 2003 - 7:15 am UTC
say you have a tablespace with no freespace (but autoextend datafiles)
it is 500 meg right now. You could say the HWM for the tablespace is 500 meg (you cannot shrink it any smaller)
Now, you rebuild a 10meg index in there. For a moment -- you need 510 meg of space (the index exists twice). So the tablespace grows to 510 -- the index moves to the "end" of the tablespace's files (it'll cause them to grow)
When you are done, you have 500meg used but 510meg of space allocated. You cannot shrink it down since the new index is at the end of the files -- the empty space is interior space in the datafiles, they cannot shrink.
you can think of that as if the tablespace's HWM was advanced to 510m
But not really, cause you could rebuild that index again -- it might move back to the original space and you could shrink the files (so it is a HWM that can go up and go down)
But since rebuilding indexes is mostly a waste of our time -- it is really not too much of an issue, is it? Search for
index rebuild
on this site to read my opinion (backed up with some math) about scheduled rebuilds of indexes.
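A sketch of one way to see that tablespace/datafile "high water mark" -- the highest used block in each file, which is as small as the file could be resized to (this query is an illustration using the standard dictionary views, not something from the original answer):
select e.file_id, f.file_name,
       max(e.block_id + e.blocks - 1) * t.block_size as highest_used_byte
  from dba_extents e, dba_data_files f, dba_tablespaces t
 where e.file_id         = f.file_id
   and f.tablespace_name = t.tablespace_name
 group by e.file_id, f.file_name, t.block_size
 order by e.file_id;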
Thanks Tom
Trevor Welch, May 15, 2003 - 2:10 am UTC
Thanks very much Tom
for the clear explanation of the tablespace HWM.
I will search for index rebuilds now as you suggested.
Kind Regards Trevor
Delete and Insert
A reader, June 19, 2003 - 1:05 am UTC
I have a table which undergoes constant deletes and inserts; this is the table where we store the ids to which the users have access based on their profile, e.g. user X has access to all ids, or to all of these 10,000 ids. We need to populate the table every time the user logs in. The delete on this table is taking quite some time, as the number of rows per user goes up to 0.4 million.
Is there a way to delete without redo, or to speed up the insert and delete? I am currently using bulk insert; is bulk delete faster? I can't use truncate as other users may be using the table, and I don't want to partition and truncate as user creation is through an application. Any ideas?
June 19, 2003 - 8:00 am UTC
why would you not use a global temporary table with on commit preserve rows?
each user gets their own segment.
upon logging out the data "disappears"
a select * from that table would only show that user their data
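A sketch of what that could look like (the table and column names are made up for illustration, not part of the original answer):
create global temporary table user_access_ids
( user_name  varchar2(30),
  access_id  number
)
on commit preserve rows;
-- each session inserts and sees only its own rows;
-- the rows disappear when the session ends, no DELETE needed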
A reader, June 19, 2003 - 7:29 pm UTC
It's a web application where the user does not necessarily get the same account, and it has connection pooling implemented in the app server to ensure the connections are used rationally.
June 20, 2003 - 4:37 pm UTC
then I guess that design decision precludes other design decisions.
We let state tables like that (we use them on asktom) fill up and run a daily batch to clean out old entries in the background. It should be a SINGLE delete like:
delete from state_table where ...... (something to identify the rows to delete)
that is the fastest.
There is NO way to disable redo and undo generation for deletes.
Philip, May 28, 2004 - 3:54 am UTC
Tom,
In the output of the show_space procedure, does the value Total Blocks = Last Used Block + Unused Block?
How do I interpret the following output?
'ATTRIBUTE'
Free Blocks.............................3
Total Blocks............................18432
Total Bytes.............................150994944
Unused Blocks...........................284
Unused Bytes............................2326528
Last Used Ext FileId....................11
Last Used Ext BlockId...................202633
Last Used Block.........................740
May 28, 2004 - 9:03 am UTC
No -- since an object could be in extents all over the place (in many files).
free blocks -- blocks on the freelist.
total blocks/bytes -- blocks/bytes allocated to the segment
unused blocks/bytes -- blocks/bytes allocated to the segment but not yet used to store anything ever (eg: unformatted, above the high water mark)
last used ext file/block id -- a pointer to the front of the "last extent" of the table. Consider:
ops$tkyte@ORA817DEV> create table t ( x varchar2(4000) ) tablespace system storage ( initial 32k next 32k pctincrease 0 );
Table created.
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> exec show_space( 'T' )
Last Used Ext FileId....................1
Last Used Ext BlockId...................30858
Last Used Block.........................1
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV> select block_id first_block, block_id+blocks-1 last_block from dba_extents where segment_name = 'T' and owner = user;
FIRST_BLOCK LAST_BLOCK
----------- ----------
30858 30861
ops$tkyte@ORA817DEV>
ops$tkyte@ORA817DEV> insert into t select rpad('*',4000,'*') from all_objects where rownum <= 5;
5 rows created.
ops$tkyte@ORA817DEV> exec show_space( 'T' )
Last Used Ext FileId....................1
Last Used Ext BlockId...................30867
Last Used Block.........................4
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV> select block_id first_block, block_id+blocks-1 last_block from dba_extents where segment_name = 'T' and owner = user;
FIRST_BLOCK LAST_BLOCK
----------- ----------
30858 30861
30867 30870
ops$tkyte@ORA817DEV> insert into t select rpad('*',4000,'*') from all_objects where rownum <= 5;
5 rows created.
ops$tkyte@ORA817DEV> exec show_space( 'T' )
Last Used Ext FileId....................1
Last Used Ext BlockId...................19841
Last Used Block.........................5
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV> select block_id first_block, block_id+blocks-1 last_block from dba_extents where segment_name = 'T' and owner = user;
FIRST_BLOCK LAST_BLOCK
----------- ----------
30858 30861
30867 30870
19841 19848
ops$tkyte@ORA817DEV> insert into t select rpad('*',4000,'*') from all_objects where rownum <= 5;
5 rows created.
ops$tkyte@ORA817DEV> exec show_space( 'T' )
Last Used Ext FileId....................1
Last Used Ext BlockId...................19841
Last Used Block.........................8
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV> select block_id first_block, block_id+blocks-1 last_block from dba_extents where segment_name = 'T' and owner = user;
FIRST_BLOCK LAST_BLOCK
----------- ----------
30858 30861
30867 30870
19841 19848
ops$tkyte@ORA817DEV> insert into t select rpad('*',4000,'*') from all_objects where rownum <= 5;
5 rows created.
ops$tkyte@ORA817DEV> exec show_space( 'T' )
Last Used Ext FileId....................1
Last Used Ext BlockId...................30875
Last Used Block.........................4
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV> select block_id first_block, block_id+blocks-1 last_block from dba_extents where segment_name = 'T' and owner = user;
FIRST_BLOCK LAST_BLOCK
----------- ----------
30858 30861
30867 30870
19841 19848
30871 30874
30875 30878
ops$tkyte@ORA817DEV> insert into t select rpad('*',4000,'*') from all_objects where rownum <= 5;
5 rows created.
ops$tkyte@ORA817DEV> exec show_space( 'T' )
Last Used Ext FileId....................1
Last Used Ext BlockId...................30879
Last Used Block.........................4
PL/SQL procedure successfully completed.
ops$tkyte@ORA817DEV> select block_id first_block, block_id+blocks-1 last_block from dba_extents where segment_name = 'T' and owner = user;
FIRST_BLOCK LAST_BLOCK
----------- ----------
30858 30861
30867 30870
19841 19848
30871 30874
30875 30878
30879 30882
6 rows selected.
It'll point to the "last" extent.
HWM
David, June 12, 2005 - 12:42 am UTC
When oracle adjusts HWM position, does it lock the table? Thanks.
June 12, 2005 - 7:56 am UTC
raising the HWM does not involve any table locks.
Why segment shrink locks the table in 10g?
David, June 12, 2005 - 3:05 pm UTC
From:
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10739/schema.htm#sthref1783
"As with other DDL operations, segment shrink causes subsequent SQL statements to be reparsed because of invalidation of cursors unless you specify the COMPACT clause."
In 10g, when the segment is shrunk by deallocating space below the HWM, it locks the table, right? That means it locks the table when it moves the HWM. Therefore, does Oracle lock the table when it adjusts the HWM? Maybe I am confused! Any comments? Thanks.
June 13, 2005 - 8:49 pm UTC
it briefly locks the table when lowering the HWM, yes.
before, you just said
...
When oracle adjusts HWM position, does it lock the table? Thanks.
.....
with no versions and against an 8i question - in that case, no, the table is not locked to raise it.
but the alter table command will briefly lock it while lowering it.
the answer to "does Oracle lock the table when it adjusts the HWM" is "generally no, because usually it is just going "up" and no table lock is required. However if you issue the DDL command it will briefly lock the table to lower the HWM"
segment shrink.......
Steve, June 13, 2005 - 2:44 pm UTC
Referring to David's question above, does Oracle lock the table when a segment shrink operation is done? I would be surprised if it did. Thanks.
June 13, 2005 - 8:50 pm UTC
see above.
Identify tables with high water mark
A reader, March 23, 2006 - 4:21 pm UTC
Tom,
1) How do we find out if a given table has a high water mark?
2) The ways to reset a table's high water mark are a) exp/trunc/imp, b) alter table t move plus rebuilding the indexes. Are there any other ways of doing this?
DB version - 9.2.0.6
Thank you
March 23, 2006 - 6:11 pm UTC
1) find me a real table without one! they all have high water marks...
2) I would suggest ONLY alter table T move *or* dbms_redefinition (online) and in 10g, alter table T shrink..
exp/imp - never, nope.
but - I would only suggest this IF it were really needed and many times - it quite simply ISN'T.
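For reference, a sketch of the two suggested approaches (table name T and index name T_PK are placeholders; row movement/shrink assumes 10g and an ASSM tablespace):
-- offline reorg: rebuilds the segment; indexes become UNUSABLE and must be rebuilt
alter table t move;
alter index t_pk rebuild;

-- 10g, ASSM tablespace: online
alter table t enable row movement;
alter table t shrink space;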
Identify tables with high water mark
A reader, March 23, 2006 - 6:42 pm UTC
>>1) find me a real table without one! they all have high water marks...
That's correct, but isn't a high water mark relative? For example, if a table has 20 million records and we delete 17 million of them, a select count(*) still takes a relatively long time because it reads the blocks that once held 20 million records just to count the remaining 3 million. I am of the assumption that if the high water mark is really high, the time taken to do selects goes up and so performance is impacted. Please let me know if my assumption is wrong.
If my assumption is right, I want to find the tables with a really high water mark and fix them -- hence the request to find tables with a high/really high water mark.
Thank you
March 24, 2006 - 8:28 am UTC
yes, but it was asked to find tables with high water marks - that is the set of all tables.
If the question were:
what tables have a high percentage of white space - that is, if upon being rebuilt/reorged, they would consume significantly less space
My answer would have been very different. Look at AVG_SPACE after ANALYZING (dbms_stats won't fill that in). It'll show you the average amount of free space per block.
Or use dbms_space to see how many blocks are currently on the freelist.
10g automates this process.
And unless you frequently full scan - I would say "don't even bother"
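A sketch of that check (table name T is a placeholder; note it is ANALYZE, not dbms_stats, that populates AVG_SPACE):
analyze table t compute statistics;

select table_name, blocks, avg_space,
       round( 100 * avg_space / 8192, 1 ) as pct_free_per_block   -- assumes an 8k block size
  from user_tables
 where table_name = 'T';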
some extra information
Raktim, April 11, 2006 - 1:23 am UTC
Hi Tom,
Why exactly do we need a High Water Mark? Who sets the High Water Mark and why?
April 11, 2006 - 2:25 pm UTC
the high water mark is simply the collection of blocks that at one time contained data.
If you can fit 100 rows per block
and you insert 1,000 records
you have used at least 10 blocks - these 10 blocks are "under the high water mark", they have contained data. They might not anymore (you could have deleted the rows) but they will remain under "the high water mark" - since they were part of the table that contained data at some point in time.
you would have to rebuild, truncate or shrink (new in 10g) the segment to get them to not be under the 'high water mark'
where is HWM information stored
jianhui, April 17, 2006 - 6:47 pm UTC
Hi Tom,
If I have a huge table, I don't want to use analyze or dbms_stats to get the HWM. But it must be stored somewhere so Oracle knows where to stop when doing a full table scan.
(1)
Just curious where the HWM information is stored, so I can get it without consuming lots of resources running analyze or dbms_stats. I guess it's in the segment header and the dbms_space package reads the segment header, right?
(2)
If it's in the segment header, then how can I interpret the data of a block dump in order to find the HWM?
Thanks.
April 18, 2006 - 9:14 am UTC
where is HWM stored?
Jianhui, April 18, 2006 - 11:28 pm UTC
Hi Tom,
I was not asking how to get the HWM; I am not sure whether my questions were clear enough. There were simply two questions:
(1) Where is the HWM information stored? Is it in the segment header, so that the dbms_space package can read it?
(2) How does one interpret a direct segment header block dump to find the HWM, if that's not a business secret?
Thank you as always
April 19, 2006 - 8:30 am UTC
you clearly asked "how to figure out where the current HWM is"
and I answered - use dbms_space. That is all.
I do not dump blocks, if you want to - have fun. It is so totally not necessary in this case. You have the information you need, done.
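For completeness, a sketch of reading it with dbms_space directly (this is essentially what show_space does; the segment name T is a placeholder and the call uses positional parameters):
set serveroutput on
declare
    l_total_blocks   number;   l_total_bytes  number;
    l_unused_blocks  number;   l_unused_bytes number;
    l_file_id        number;   l_block_id     number;
    l_last_block     number;
begin
    dbms_space.unused_space
    ( user, 'T', 'TABLE',
      l_total_blocks, l_total_bytes,
      l_unused_blocks, l_unused_bytes,
      l_file_id, l_block_id, l_last_block );
    dbms_output.put_line( 'blocks under the HWM: ' ||
                          ( l_total_blocks - l_unused_blocks ) );
end;
/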
HWM for tables/datafiles
Thiru, April 25, 2006 - 4:15 pm UTC
Tom,
How does one go about resetting the HWM for a table and a datafile? I have found so many tables that have a few thousand rows that initially had millions of rows.
Thanks
April 25, 2006 - 11:56 pm UTC
in 10g, if you are using ASSM you can "alter table t shrink space compact" and "alter table t shrink space"
else, you would be using
o dbms_redefinition - to online redefine/reorg
o alter table t move - to offline reorg
It is somewhat difficult to reset the high water mark of a datafile - you'd have to query the dictionary to find the segment that has the "last extent" and then MOVE it.
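A sketch of that dictionary query -- finding which segment owns the extent nearest the end of a given file (the &file_id substitution variable is a placeholder, not from the original answer):
select owner, segment_name, segment_type, block_id, blocks
  from dba_extents
 where file_id = &file_id
 order by block_id desc;
-- the first row is the segment "holding up" the end of the file;
-- move/rebuild it and the file can then be resized smaller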
Offline Reorg
Thiru, April 27, 2006 - 4:01 pm UTC
Referring to your comment "alter table t move - to offline reorg"
When I use this command, I see that the 'blocks' column in the view user_tables is reset to null. Only when I analyze again is the actual number of blocks occupied shown, which is much less than before the reorg. That means the table has been reorganized. So I did not understand your comment about using this as an offline reorg.
April 27, 2006 - 4:23 pm UTC
whilst you are doing the move, the table is not allowed to be modified - it is in effect "offline"
immediately after the move, the indexes need to be rebuilt - queries that want the index will now fail as well. It is in effect "offline"
moving a table is an offline operation from an availability perspective.
How to find unused tables list
Ramu, June 12, 2006 - 9:24 am UTC
Hi Tom
Is there any way to get the list of tables that have not been used for some time?
I don't have auditing on the tables.
Thanks
Ram
June 12, 2006 - 10:16 am UTC
not unless and until you employ some form of auditing.
this question always frightens me -- looking for "dead tables" implies a lack of "we know what our stuff does and what is important" knowledge...
Finding "dead" tables
A reader, June 12, 2006 - 5:21 pm UTC
You potentially could keep snapshots of v$segment_statistics somewhere. If a particular object never gets any statistics then it may have never been touched. Though there may always be a chance the stats somehow were aged out or flushed before each snapshot was taken...
Then again you could rename the table and see if anybody eventually yells about it, and hope it's not one of those "it's important we see this data once every 5 years sort of thing" :-)
When to reset HWM
Jdam, July 02, 2006 - 12:36 pm UTC
Tom,
Is there a formula to know if a table needs its HWM reset ( alter table t move )?
July 07, 2006 - 3:03 pm UTC
In sort of theory - sure. When the average free space is too high.
Now, define too high :)
Moving Tables
A reader, July 07, 2006 - 3:59 pm UTC
show_space
ravi, July 19, 2006 - 5:26 am UTC
hwm
alter table shrink performance
Jan van Mourik, August 17, 2006 - 2:42 pm UTC
The "alter table .... shrink space cascade" seems a bit slow to me. I created a test table with 100,000 blocks, deleted all but 100 out of 100,000 rows. Then I ran the shrink, which took 994 seconds (947 in another database). For comparison, I did an "alter table ... move", this took 175 seconds. Is this typical?
Also, it's a pity there's no info in v$session_longops when running the shrink...
August 17, 2006 - 3:00 pm UTC
so what?
it is *online*, if it takes 2 days but the alter table t move takes 1 hour (just to use hyperbole here...) so what?
maintaining indexes during partition operations like truncate, drop, split, merge makes the partition operation take longer.
But so what - you have avoided the downtime.
This is an ONLINE operation.
coalesce of an index generally takes longer than an offline rebuild (if the index is really in need of a coalesce/rebuild).
but so what? it is online, life continues as before.
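If the elapsed time of the single-step shrink is a concern, a sketch of splitting it into the two passes the documentation quoted earlier describes (table name T is a placeholder; 10g and an ASSM tablespace assumed):
alter table t enable row movement;
alter table t shrink space compact;   -- long pass: moves the rows down; cursors are not invalidated
alter table t shrink space;           -- short pass: lowers the HWM and releases the space
-- add CASCADE to shrink dependent indexes as well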
btw... previous comment...
Jan van Mourik, August 17, 2006 - 2:55 pm UTC
Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, Real Application Clusters and Data Mining options
System name: Linux
Release: 2.4.21-32.0.1.ELhugemem
Version: #1 SMP Tue May 17 17:43:22 EDT 2005
Machine: i686
FYI
Great examples and detailed explanation - Thanks
Doug Brown, September 06, 2006 - 8:42 am UTC
HWM for Datafile
oj, September 13, 2006 - 4:11 pm UTC
Thanks for all the useful information on this link.
I am trying to migrate from Oracle8.1.7 to 10g. I created a test DB and am trying to import the structure from the production DB without the data.
The production DB runs into terabytes and all I need right now is to get the structure in there so that I can check all the PL/SQL and compile forms and reports.
But when I try to import, the tablespaces start growing to (possibly) accommodate the data in each table per the statistics that get imported, and I fall short on space on the new system (since this is only a test system and I am limited to 100 gigs).
After my import failed, I tried everything from shrinking tables to shrinking indexes to re-analyzing tables, truncating tables and coalescing tablespaces. All tablespaces now show as empty and all data files are also almost empty. But it is not letting me re-size any of the data files.
What do I do?
September 14, 2006 - 8:57 am UTC
the tablespaces grow because every segment created will create an initial extent.
and how large that initial extent is depends on - well, lots, how you ran export, what the current initial extent in the source database is, what the new tablespaces are defined as (locally managed, uniform size and so on)
You don't have INSTALL SCRIPTS for your application?????
This would be an EXCELLENT time to get them under control - to take time to create a subproject: create installation scripts for our application using whatever sizing techniques we like and get it into source code control.
HWM for datafiles
oj, September 13, 2006 - 5:48 pm UTC
Hi Tom,
An update on the issue above. I just found out that, for some reason, when I run my import, the tables that get created have initial extents running into hundreds of MBs for all the large tables. That is probably why, after trying all the tricks, I am not able to reclaim the space.
How do I resize the tables to a smaller size?
If I use 'create table as...' then not only do I not know how to give a size definition using that syntax, but I also lose all index/constraint information, which I would have to re-create separately.
September 14, 2006 - 9:01 am UTC
like I said
depends on the arguments you used with export...
depends on the initial size in the EXISTING database...
depends on the type of tablespace you have created.
you might be able to "shrink" the tables (only AFTER they are created) via
alter table t move storage (initial 1k next 1k pctincrease 0 minextents 1);
alter index i rebuild storage(same as above);
but I really suggest.....
you get that "here is the set of scripts that actually installs our application!" and get it into source code control and maintain it.
HWM for Datafiles
oj, September 13, 2006 - 6:58 pm UTC
Hi Tom,
I managed to fix that problem by:
alter table T move
storage ( initial 1k next 1k minextents 1 pctincrease 0 );
and
alter index I rebuild
storage ( initial 1k next 1k minextents 1 pctincrease 0 );
Then I executed your 'maxshrink' script and was able to regain about 40 GB of space.
HWM and full scan
Serge Shmygelsky, December 12, 2006 - 3:40 am UTC
Hello Tom,
I saw in this thread that a full scan reads the table up to the HWM, and therefore I have a question.
I created a table:
OPS$SHMYG@REX> create table test (customer_id number);
Then I put there some data:
OPS$SHMYG@REX> begin
for i in 1..100000
loop
insert into test values (i);
end loop;
end;
/
Now the storage consumed by the table looks as follows:
OPS$SHMYG@REX> @space TEST
Free Blocks.............................
Total Blocks............................256
Total Bytes.............................2097152
Unused Blocks...........................91
Unused Bytes............................745472
Last Used Ext FileId....................5
Last Used Ext BlockId...................20617
Last Used Block.........................37
So, from my understanding, HWM is 165 blocks.
But when I run a simple query:
OPS$SHMYG@REX> select * from test;
and tkprof it, I see the following:
select *
from
test
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 101 0.04 0.05 0 258 0 100000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 103 0.04 0.05 0 258 0 100000
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 60
Rows Row Source Operation
------- ---------------------------------------------------
100000 TABLE ACCESS FULL TEST (cr=258 pr=0 pw=0 time=300117 us)
So the question is: why do I have 258 blocks read while (from my understanding) it should be 165 (as HWM implies)?
Thanks in advance
December 12, 2006 - 6:58 am UTC
HWM and full scan
Serge Shmygelsky, December 14, 2006 - 2:06 am UTC
Sum of Blocks doesn't look correct
Sujith Wimalasooriya, February 02, 2007 - 2:04 pm UTC
Hi Tom,
I installed the show_space procedure and ran it for two tables in one of the schemas.
Here are the results I got:
SQL>
Free Blocks.............................
Total Blocks............................4096
Total Bytes.............................67108864
Total MBytes............................64
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................27
Last Used Ext BlockId...................58564
Last Used Block.........................64
PL/SQL procedure successfully completed
Free Blocks.............................
Total Blocks............................4608
Total Bytes.............................75497472
Total MBytes............................72
Unused Blocks...........................509
Unused Bytes............................8339456
Last Used Ext FileId....................27
Last Used Ext BlockId...................276740
Last Used Block.........................3
PL/SQL procedure successfully completed
I was thinking the Total Blocks should be the same as Last Used Block (HWM) + Unused Blocks (blocks above the HWM) + Free Blocks (if there are any empty blocks below the HWM).
I think we have LMT tablespaces, and we are on Oracle 9.2.0.5.
Is my understanding correct, or am I missing or not understanding something?
Thanks for your time and inputs,
Kandy Train
February 03, 2007 - 7:17 pm UTC
the last used block is a relative offset into a SINGLE extent, not the entire structure.
it is an offset into the LAST extent of a segment. You are missing all of the other extents in the count.
Total Blocks
Sujith Wimalasooriya, February 03, 2007 - 12:34 pm UTC
Hi Tom,
Okay, I read through a little bit and understand the equation total blocks = free blocks + unused blocks + last used block is not true.
But in my two examples, my total blocks are huge and the HWM is very low. How would I reclaim those blocks allocated to the segment?
I am planning to move the table and rebuild the indexes, which should reset the HWM, but I am not sure how I can deallocate some of those blocks from 'Total Blocks', as it seems a waste to leave such space unused.
Thanks,
Kandy Train
Total Blocks
sujith Wimalasooriya, February 04, 2007 - 10:55 am UTC
Hi Tom,
How do the total blocks in the show_space output add up? My problem is that total blocks is so huge, and unused blocks + HWM + free blocks do not add up to it.
I would like to understand where all the blocks are that make up the total blocks, how many of them are really used and how many are really free (empty).
Thanks,
Kandy Train
February 05, 2007 - 7:28 am UTC
where do you see "HWM"
total blocks - blocks allocated to segment
I think your confusion lies in the "last used block". It is a pointer into the last extent ONLY.
Say a table has 50 extents
Say each extent is 100 blocks
total blocks = 50*100 = 5,000
Now, one of those 50 extents is the "last used extent" - let us say for ease that it is extent 50. If we've used 5 blocks in that extent, the last used block will point to that block. So, the used blocks = 49*100 (the 49 previously used extents) + 5 blocks. That is the 'HWM'
Total Blocks
Kandy Train, February 05, 2007 - 12:01 pm UTC
Hi Tom,
Thank you for the explanation. Yes, I now understand that the Last Used Block is a relative number and is within the last used extent; that's not the HWM. But if I know the
"Last Used Extent", then, as in your example, I can compute the HWM correctly.
But how do I know the "Last Used Extent"?
This is a small example I have based my questions on,
select segment_name, count(*) num_of_extents, sum(blocks) total_blocks
from dba_extents
where owner = 'SCOTT'
and segment_name = 'INVENTORY_PART_TAB'
group by segment_name;
set serveroutput on
exec show_space('INVENTORY_PART_TAB', 'SCOTT', 'TABLE');
--Results
SQL>
SEGMENT_NAME NUM_OF_EXTENTS TOTAL_BLOCKS
-------------------------------------------------------------------------------- -------------- ------------
INVENTORY_PART_TAB 65 4608
Free Blocks.............................
Total Blocks............................4608
Total Bytes.............................75497472
Total MBytes............................72
Unused Blocks...........................509
Unused Bytes............................8339456
Last Used Ext FileId....................13
Last Used Ext BlockId...................292164
Last Used Block.........................3
PL/SQL procedure successfully completed
February 05, 2007 - 12:23 pm UTC
you need to know all of the other extents - not the last used one.
But the HWM would be total blocks minus unused blocks -- that is the number of blocks under the HWM.
Why no blocks in empty_blocks list ?
parag j patankar, February 15, 2007 - 1:53 am UTC
Hi,
I have a table in a 9.2 database in a dictionary-managed tablespace; after collecting statistics it is showing the following:
0:57:43 SQL> select table_name, initial_extent, num_rows, blocks, empty_blocks, avg_row_len
0:57:52 2 from user_tables
0:57:52 3 where table_name='TWW11';
TABLE_NAME INITIAL_EXTENT NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
-------------- -------------- ---------- ---------- ------------ -----------
TWW11 10485760 37310 387404 0 231
My block size is 4k. That means for an ~8 MB table ( 37310 * 231 bytes ), the total space occupied was about 1.5 GB ( 387404 blocks * 4k ).
empty_blocks is 0, which means there are no blocks allocated above the high water mark. The initial extent size is 10m.
If I try to deallocate, nothing is released ( which is expected if empty_blocks is 0 ):
12:02:33 SQL> alter table tww11 deallocate unused;
Table altered.
12:02:51 SQL> exec dbms_stats.gather_table_stats('A', 'TWW11');
PL/SQL procedure successfully completed.
12:05:08 SQL> select table_name, initial_extent, num_rows, blocks, empty_blocks, avg_row_len
12:05:12 2 from user_tables
12:05:12 3 where table_name='TWW11';
TABLE_NAME INITIAL_EXTENT NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
--------------- -------------- ---------- ---------- ------------ -----------
TWW11 10485760 37310 387404 0 231
Ultimately, I have to use the alter table ... move clause to reduce the number of blocks allocated:
12:05:12 SQL> alter table tww11 move storage (initial 50k next 50k pctincrease 0 minextents 1);
Table altered.
12:09:03 SQL> exec dbms_stats.gather_table_stats('A', 'TWW11');
PL/SQL procedure successfully completed.
12:10:40 SQL> select table_name, initial_extent, num_rows, blocks, empty_blocks, avg_row_len
12:11:10 2 from user_tables
12:11:10 3 where table_name='TWW11';
TABLE_NAME INITIAL_EXTENT NUM_ROWS BLOCKS EMPTY_BLOCKS AVG_ROW_LEN
--------------- -------------- ---------- ---------- ------------ -----------
TWW11 53248 37310 2561 0 231
My question is: why are no blocks showing under empty_blocks? Why have so many blocks been allocated for such a small table?
SQL> select initial_extent, next_extent, pct_increase from dba_tablespaces where tablespace_name='A_D01';
INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE
-------------- ----------- ------------
102400 12288 0
Pl tell me.
thanks & regards
pjp
February 15, 2007 - 11:48 am UTC
deallocate unused - by definition - only deallocates unused blocks - those above the HWM
you confirmed there were no such blocks - hence deallocate unused would ..... do nothing.
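To make the distinction concrete, a sketch (using the table from the example above):
alter table tww11 deallocate unused;          -- frees only space ABOVE the HWM; empty_blocks = 0, so nothing to free
alter table tww11 deallocate unused keep 0;   -- same restriction, it just keeps 0 bytes above the HWM
alter table tww11 move;                       -- rebuilds the segment, so the HWM itself comes down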
empty_blocks
Parag J Patankar, February 18, 2007 - 11:24 pm UTC
Hi Tom,
If my table size is about 8 MB and the total allocated size is 1.5 GB, why are none of those blocks showing under empty_blocks? I am using a 9.2 database and a dictionary-managed tablespace.
Kindly suggest.
thanks & regards
pjp
February 19, 2007 - 9:26 am UTC
because at one point all of the blocks were used; you then deleted the rows. The blocks are not "empty" -- once they have held data, they are pulled under the high water mark and stay there until you shrink or otherwise reorganize the table.
num_freelist_blocks
parag j patankar, February 20, 2007 - 12:14 am UTC
Hi Tom,
Thanks for your explanation of empty blocks; I agree with it. But when I delete the data, Oracle should show me blocks on the freelist. Why is it not showing me any?
1 select table_name, blocks, empty_blocks, num_freelist_blocks, pct_free, pct_used, num_rows,
2 avg_row_len
3 from user_tables
4* where table_name = 'TWW44'
10:40:08 SQL> /
TABLE_NAME BLOCKS EMPTY_BLOCKS NUM_FREELIST_BLOCKS PCT_FREE PCT_USED NUM_ROWS AVG_ROW_LEN
---------- ---------- ------------ ------------------- ---------- ---------- ---------- -----------
TWW44 440917 0 0 10 40 36997 231
What is a reason it is not showing under freelist ?
thanks & regards
pjp
February 20, 2007 - 9:38 am UTC
(you gotta hit that CODE button, it is not readable otherwise)
dbms_stats, which I presume you are using, only gathers information used by the optimizer.
num_freelist_blocks is not included in that. you would have to use ANALYZE to have that filled in (and that only will be filled in for MANUAL segment space managed tablespaces)
Suggest you use
http://asktom.oracle.com/pls/ask/search?p_string=show_space show_space - it uses builtin packages to report on "free space" for ASSM and Manual segment space managed objects without using ANALYZE
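A sketch of the ANALYZE approach for that one column (the table name is from the example above; remember ANALYZE also gathers optimizer statistics, which is why dbms_space/show_space is preferred):
analyze table tww44 compute statistics;

select table_name, blocks, empty_blocks, num_freelist_blocks
  from user_tables
 where table_name = 'TWW44';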
resetting of hwm -- moving/reorganizing data within tablespace and releasing space - oracle 9i
A reader, April 19, 2007 - 5:05 pm UTC
Hi Tom
We have Table News_story.
CREATE TABLE NEWS_STORY
(
STORY_TS DATE NOT NULL,
NEWS_STORY_ID NUMBER,
STORY_TM_CT NUMBER NOT NULL,
NEWS_SRC_NM VARCHAR2(30 BYTE) NOT NULL,
WIRE_SRC_CD VARCHAR2(4 BYTE) NOT NULL,
STORY_HDLN_TX VARCHAR2(160 BYTE) NOT NULL,
STORY_WORD_CT NUMBER(6) NOT NULL,
STORY_BODY_BK CLOB NOT NULL,
SM_STORY_FL CHAR(1 BYTE) NOT NULL,
DOC_TYPE_CD NUMBER,
VISTA_NEWS_ID VARCHAR2(50 BYTE),
LOAD_TS DATE
)
TABLESPACE SONAR_IDX_TS
PCTUSED 50
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 400M
NEXT 200M
MINEXTENTS 1
MAXEXTENTS 500
PCTINCREASE 0
)
LOGGING
PARTITION BY RANGE (STORY_TS)
(
PARTITION NEWS_STORY_P200609 VALUES LESS THAN (TO_DATE(' 2006-10-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE SONAR_IDX_TS
LOB (STORY_BODY_BK) STORE AS
( TABLESPACE CLOB_TS
ENABLE STORAGE IN ROW
CHUNK 16384
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 200M
NEXT 347304K
MINEXTENTS 4
MAXEXTENTS 2147483645
PCTINCREASE 1
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
)
PCTUSED 50
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 400M
NEXT 200M
MINEXTENTS 1
MAXEXTENTS 500
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
),
PARTITION NEWS_STORY_P200610 VALUES LESS THAN (TO_DATE(' 2006-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
LOGGING
NOCOMPRESS
TABLESPACE SONAR_IDX_TS
LOB (STORY_BODY_BK) STORE AS
( TABLESPACE CLOB_TS
ENABLE STORAGE IN ROW
CHUNK 16384
PCTVERSION 10
NOCACHE
STORAGE (
INITIAL 200M
NEXT 330424K
MINEXTENTS 4
MAXEXTENTS 2147483645
PCTINCREASE 1
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
)
)
PCTUSED 50
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 400M
NEXT 200M
MINEXTENTS 1
MAXEXTENTS 500
PCTINCREASE 0
FREELISTS 1
FREELIST GROUPS 1
BUFFER_POOL DEFAULT
),
.......
The CLOB field is stored in a separate tablespace, clob_ts. This table occupies around 600 GB; the CLOB data occupies more than 500 GB. So we thought of getting rid of data prior to 2005 and getting back some space. What we did was create a staging table, copy all the data to the staging table (other than the CLOB field), truncate the partition and then exchange the partition with the staging table. In this way we got back around 230 GB of space. But we were not able to make this space available to other tablespaces, as the freed space is below the HWM of the datafiles.
1. How do we reset the HWM?
2. How do we reorganize the data? We cannot export and import as we don't have that much space.
3. The clob_ts tablespace is already around 400 GB; the rest of the tables are in different tablespaces whose sizes are less than 100 GB. Do you see any possibility of a performance impact if we move tables from the other tablespaces to clob_ts?
Thanks
Koshal
TABLESPACE_NAME FILE_NAME HIGHWATER
CLOB_TS /DB07/SONARP/ORACLE/data01/SONAR_CLOB_TS_01.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data01/SONAR_CLOB_TS_07.dbf 8319639552
CLOB_TS /DB07/SONARP/ORACLE/data01/SONAR_CLOB_TS_13.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data01/SONAR_CLOB_TS_19.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data01/SONAR_CLOB_TS_29.dbf 4208156672
CLOB_TS /DB07/SONARP/ORACLE/data02/SONAR_CLOB_TS_02.dbf 16749125632
CLOB_TS /DB07/SONARP/ORACLE/data02/SONAR_CLOB_TS_08.dbf 8361582592
CLOB_TS /DB07/SONARP/ORACLE/data02/SONAR_CLOB_TS_14.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data02/SONAR_CLOB_TS_20.dbf 10457669632
CLOB_TS /DB07/SONARP/ORACLE/data02/SONAR_CLOB_TS_55.dbf 1553334272
CLOB_TS /DB07/SONARP/ORACLE/data03/SONAR_CLOB_TS_21.dbf 16707182592
CLOB_TS /DB07/SONARP/ORACLE/data04/SONAR_CLOB_TS_04.dbf 8361582592
CLOB_TS /DB07/SONARP/ORACLE/data04/SONAR_CLOB_TS_10.dbf 8361582592
CLOB_TS /DB07/SONARP/ORACLE/data04/SONAR_CLOB_TS_16.dbf 8366907392
CLOB_TS /DB07/SONARP/ORACLE/data04/SONAR_CLOB_TS_22.dbf 8318574592
CLOB_TS /DB07/SONARP/ORACLE/data04/SONAR_CLOB_TS_25.dbf 7815258112
CLOB_TS /DB07/SONARP/ORACLE/data04/SONAR_CLOB_TS_28.dbf 10919043072
CLOB_TS /DB07/SONARP/ORACLE/data05/SONAR_CLOB_TS_05.dbf 8361582592
CLOB_TS /DB07/SONARP/ORACLE/data05/SONAR_CLOB_TS_11.dbf 8318574592
CLOB_TS /DB07/SONARP/ORACLE/data05/SONAR_CLOB_TS_17.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data05/SONAR_CLOB_TS_23.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data05/SONAR_CLOB_TS_27.dbf 1901289472
CLOB_TS /DB07/SONARP/ORACLE/data06/SONAR_CLOB_TS_06.dbf 8319639552
CLOB_TS /DB07/SONARP/ORACLE/data06/SONAR_CLOB_TS_12.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data06/SONAR_CLOB_TS_18.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data06/SONAR_CLOB_TS_24.dbf 4166213632
CLOB_TS /DB07/SONARP/ORACLE/data06/SONAR_CLOB_TS_30.dbf 8192745472
CLOB_TS /DB07/SONARP/ORACLE/data06/SONAR_CLOB_TS_31.dbf 8192745472
CLOB_TS /DB07/SONARP/ORACLE/data07/SONAR_CLOB_TS_03.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data07/SONAR_CLOB_TS_09.dbf 8319639552
CLOB_TS /DB07/SONARP/ORACLE/data07/SONAR_CLOB_TS_15.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data07/SONAR_CLOB_TS_26.dbf 8360517632
CLOB_TS /DB07/SONARP/ORACLE/data07/SONAR_CLOB_TS_40.dbf 20927201280
CLOB_TS /DB07/SONARP/ORACLE/data07/SONAR_CLOB_TS_41.dbf 15681708032
CLOB_TS /DB07/SONARP/ORACLE/data08/SONAR_CLOB_TS_32.dbf 20775657472
CLOB_TS /DB07/SONARP/ORACLE/data08/SONAR_CLOB_TS_33.dbf 20775657472
CLOB_TS /DB07/SONARP/ORACLE/data08/SONAR_CLOB_TS_36.dbf 6095593472
CLOB_TS /DB07/SONARP/ORACLE/data09/SONAR_CLOB_TS_27.dbf 6286303232
CLOB_TS /DB07/SONARP/ORACLE/data09/SONAR_CLOB_TS_37.dbf 20844437504
CLOB_TS /DB07/SONARP/ORACLE/data09/SONAR_CLOB_TS_38.dbf 20831551488
CLOB_TS /DB07/SONARP/ORACLE/data09/SONAR_CLOB_TS_39.dbf 15658369024
CLOB_TS /DB07/SONARP/ORACLE/data10/SONAR_CLOB_TS_42.dbf 15717359616
CLOB_TS /DB07/SONARP/ORACLE/data10/SONAR_CLOB_TS_43.dbf 15655370752
CLOB_TS /DB07/SONARP/ORACLE/data11/SONAR_CLOB_TS_44.dbf 10485276672
CLOB_TS /DB07/SONARP/ORACLE/data11/SONAR_CLOB_TS_54.dbf 1751007232
CLOB_TS /DB07/SONARP/ORACLE/data12/SONAR_CLOB_TS_45.dbf 20943249408
CLOB_TS /DB07/SONARP/ORACLE/data12/SONAR_CLOB_TS_46.dbf 10485235712
CLOB_TS /DB07/SONARP/ORACLE/data12/SONAR_CLOB_TS_47.dbf 15679864832
CLOB_TS /DB07/SONARP/ORACLE/data13/SONAR_CLOB_TS_48.dbf 15705817088
CLOB_TS /DB07/SONARP/ORACLE/data13/SONAR_CLOB_TS_49.dbf 15716311040
CLOB_TS /DB07/SONARP/ORACLE/data13/SONAR_CLOB_TS_50.dbf 17663680512
CLOB_TS /DB07/SONARP/ORACLE/data13/SONAR_CLOB_TS_51.dbf 14117568512
CLOB_TS /DB07/SONARP/ORACLE/data14/SONAR_CLOB_TS_52.dbf 10103119872
CLOB_TS /DB07/SONARP/ORACLE/data14/SONAR_CLOB_TS_53.dbf 9710149632
CLOB_TS /DB07/SONARP/ORACLE/idx02/SONAR_CLOB_TS_35.dbf 20775657472
hwm in GB 566.8251877
NAME GBYTES USED FREE PCT_USED LARGEST
CLOB_TS 621.1875 330.9694748 290.2180252 53.28012473 17.90049744
DRSYS 0.9765625 0.212081909 0.764480591 21.7171875 0.202857971
EDGAR_DATA_TS 4.8828125 3.153709412 1.729103088 64.58796875 1.338478088
EDGAR_IDX_TS 9.765625 2.799858093 6.965766907 28.67054688 3.430610657
EDGAR_STGNG_TS 7.8125 0.806442261 7.006057739 10.32246094 2.54069519
Q_DATA 0.1953125 0.102462769 0.092849731 52.4609375 0.050468445
RBS 23.4375 10.74221802 12.69528198 45.83346354 0.01953125
RBS_BIG 15.625 7.63E-06 15.62499237 4.88E-05 15.03905487
SONAR_IDX_TS 29.296875 27.2549057 2.041969299 93.03007813 0.493614197
SONAR_LG_IDX_TS 85.9375 81.10351563 4.833984375 94.375 4.150390625
SONAR_LG_TBL_TS 100.5859375 93.1640625 7.421875 92.62135922 4.052734375
SONAR_MD_IDX_TS 34.1796875 17.55273438 16.62695313 51.35428571 3.375
SONAR_MD_TBL_TS 16.6015625 9.614257813 6.987304688 57.91176471 2.206054688
SONAR_SM_IDX_TS 0.48828125 0.245483398 0.242797852 50.275 0.242797852
SONAR_SM_TBL_TS 2.44140625 1.432373047 1.009033203 58.67 0.951049805
SONAR_SM_TS 193.8476563 176.3231049 17.52455139 90.95962689 1.175796509
SONAR_STGNG_DAT_TS 4.8828125 0.26171875 4.62109375 5.36 4.620117188
SONAR_STGNG_IDX_TS 1.46484375 0.170898438 1.293945313 11.66666667 1.29296875
SYSTEM 0.29296875 0.259887695 0.033081055 88.70833333 0.030357361
TEMP 34.1796875 2.988327026 31.19136047 8.742991071 0.009765625
TEST 4 7.63E-06 3.999992371 0.000190735 3.999992371
TEXT_IDX_TS 118.1640625 114.0585861 4.105476379 96.52561338 0.1953125
USERS 0.48828125 0.151298523 0.336982727 30.9859375 0.313522339
USER_TOOLS 4.8828125 4.409286499 0.473526001 90.3021875 0.363670349
VSA 0.029296875 0.018554688 0.010742188 63.33333333 0.010742188
1315.646484 877.7952576 437.8512268 1261.695107 68.00608063
April 20, 2007 - 6:32 am UTC
what high water mark are you talking about at this point (lots of unformatted data there that I cannot read through by the way..)
the high water mark of the segment is as low as it will go.
You wrote:
...
What we did was create a staging table copy all the data to the staging table (other than clob field) ,truncate the partition and then exchange the partition by staging table. In this way we got back around 230 GB of space. But this space we were not able to allocate to other tablespace as freed space is below HWM.
.....
and that doesn't make sense.
you created a new compact segment.
you truncated the old one (its space is given back to the tablespace by default unless you said to keep the storage)
you exchanged partition with compact segment
the empty table could be dropped now and there are no segments left with any space below the HWM - since the segment you created as select is fully compact.
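For readers following along, a minimal sketch of that compact-and-exchange sequence; all object, partition and column names here are hypothetical, and the staging table must match the partitioned table's structure for the exchange to succeed:

-- build a compact copy of just the rows you want to keep
create table stage_t as
  select * from part_t partition (p_2006_10)
   where story_date >= date '2005-01-01';

-- give the old partition's space back to the tablespace (the default is DROP STORAGE)
alter table part_t truncate partition p_2006_10;

-- swap the compact segment into the partition
alter table part_t exchange partition p_2006_10 with table stage_t;

-- the now-empty segment left behind in the staging table can be dropped
drop table stage_t;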
exec show_space() vs. blocks/empty_blocks from user_tables
William, June 29, 2007 - 3:06 pm UTC
Hi Tom:
I tried your stored procedure show_space as well as from user_tables to generate the used/free/unused blocks of a table.
Here is what I got.
set serveroutput on
exec show_space('ITM_DIMENSION');
Free Blocks............................. 10,776
Total Blocks............................ 10,880
Total Bytes............................. 89,128,960
Total MBytes............................ 85
Unused Blocks........................... 103
Unused Bytes............................ 843,776
Last Used Ext FileId.................... 94
Last Used Ext BlockId................... 51,977
Last Used Block......................... 25
PL/SQL procedure successfully completed.
But, if I run
select blocks, empty_blocks from user_tables where table_name = 'ITM_DIMENSION';
BLOCKS EMPTY_BLOCKS
---------- ------------
10776 103
The same result comes back even after I run
exec dbms_stats.gather_table_stats(ownname=>'DBA', tabname=>'ITM_DIMENSION', estimate_percent=>100);
How come the stored procedure reports 10776 as the free blocks?
Then, I tried to create a second table
create table ITM_DIMENSION_TEST as select * from ITM_DIMENSION;
I got
exec show_space('ITM_DIMENSION_TEST');
Free Blocks............................. 0
Total Blocks............................ 5,376
Total Bytes............................. 44,040,192
Total MBytes............................ 42
Unused Blocks........................... 25
Unused Bytes............................ 204,800
Last Used Ext FileId.................... 131
Last Used Ext BlockId................... 1,301,641
Last Used Block......................... 103
PL/SQL procedure successfully completed.
exec dbms_stats.gather_table_stats(ownname=>'DBA', tabname=>'ITM_DIMENSION_TEST', estimate_percent=>100);
select blocks, empty_blocks from user_tables where table_name = 'ITM_DIMENSION_TEST';
BLOCKS EMPTY_BLOCKS
---------- ------------
5350 0
I could not interpret the results. Any ideas? Thanks.
July 03, 2007 - 8:40 am UTC
free blocks are blocks in a manual segment space managed tablespace that are on the freelist.
when you copied the data into another table, every block was "packed" (hence the table in your case is 50% of the allocated size of the original)
So, your first table is "half empty" - every block is "half full" - and every block is on the single free list you have for it.
Your second table is "packed full" - every block is filled up and will not accept any more inserts, so those blocks are NOT on the freelist. And this is why the table is 50% of the size.
In the following I:
a) create a fully packed table T1
b) delete 2/3rds of the table (delete rows 1,2, leave 3, delete 4,5, leave 6 and so on)
c) copy T1 to T2 - creating a fully packed copy of T1
d) run showspace to show the same thing you see above
ops$tkyte%ORA10GR2> create table t1
2 tablespace manual
3 as
4 select *
5 from all_objects
6 order by object_id
7 /
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> delete from t1
2 where rowid in
3 (select rid
4 from (select rowid rid, row_number() over (order by rowid) rn
5 from t1
6 )
7 where mod(rn,3) in (0,1)
8 );
34128 rows deleted.
ops$tkyte%ORA10GR2> commit;
Commit complete.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2
2 tablespace manual
3 as
4 select * from t1;
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec show_space( 'T1' )
Free Blocks............................. 705
Total Blocks............................ 768
Total Bytes............................. 6,291,456
Total MBytes............................ 6
Unused Blocks........................... 62
Unused Bytes............................ 507,904
Last Used Ext FileId.................... 10
Last Used Ext BlockId................... 9
Last Used Block......................... 66
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> exec show_space( 'T2' )
Free Blocks............................. 0
Total Blocks............................ 256
Total Bytes............................. 2,097,152
Total MBytes............................ 2
Unused Blocks........................... 20
Unused Bytes............................ 163,840
Last Used Ext FileId.................... 10
Last Used Ext BlockId................... 137
Last Used Block......................... 108
PL/SQL procedure successfully completed.
Could shrink be used to resolve performance problems?
Roberto Veiga, April 24, 2008 - 2:53 pm UTC
Hi Tom.
I have a simple question. Could I use alter table T shrink to improve performance? Let's suppose that my table has 5 million rows and 2 million rows were deleted. Is it possible that table shrink can improve performance in queries that use that table?
Thanks in advance.
April 28, 2008 - 12:26 pm UTC
... Could I use alter table T shrink to improve
performance? ....
a) sure. Say you full scan it a lot, you'll full scan 3/5ths the data now if you have it reset the high water mark after compacting
b) no way. Say you always access this table a row at a time via an index on the primary key. Say the data you deleted was all old data - so the remaining 3/5ths of the data is pretty 'dense' (eg: you did not remove 2 out of every 5 rows, you removed the first 2 million rows - leaving lots of empty blocks and lots of full blocks).
c) maybe. See a and b, now change the assumptions.
In short "it depends"
How much empty space below HWM
Chris, April 29, 2008 - 4:52 pm UTC
Tom,
Specifically, I have a CLOB in a table that my client is slowly deleting the contents of for each record. They want to know how much space will be reclaimed at any given time.
In short, is there no way to tell how many bytes will be reclaimed under the HWM (v9.2.0.7) before the table is reorganized (i.e., copied/truncated/re-inserted, or ALTER TABLE ... MOVE, or in this case just moving the LOB)? I see the freelist count but, as you state, there is no guarantee that those blocks are even remotely empty.
April 30, 2008 - 9:24 am UTC
...Specifically, I have a CLOB in a table that my client is slowly deleting the
contents of for each record....
specifically, that is not clear to me. I don't know what 'deleting the contents of for each record' means.
basically - lobs are versioned in the lob segment. You control how much of the lob segment is used for this via pctversion. If you have say 2gb of lobs allocated - we'll use about 200mb of space to hold 'old versions - from before the modifications' (sort of like undo, lobs use this versioning instead of undo). This versioned data will not be reused until more versioned data comes into play (eg: you could 'delete' 200mb of lobs - then insert 200mb of lobs - we won't reuse the deleted space, it is versioning that stuff)
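To make the pctversion point concrete, a sketch of where that setting lives in the LOB storage clause - hypothetical table, column and tablespace names, mirroring the DDL quoted earlier in this thread:

create table story ( id number, body clob )
  lob (body) store as
  ( tablespace clob_ts
    enable storage in row
    pctversion 10    -- keep roughly 10% of the lob segment for old lob versions
    nocache
  );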
segment size grew unexpectedly
Harry Zhang, July 03, 2008 - 12:17 pm UTC
Hi Tom,
We have a table that is 76 MB in size and stable, but yesterday it grew to 4.3 GB. Please see the show_space output below. Do you know how this table could blow up to 4.3 GB by just deleting the existing 188,965 rows and inserting 189,073 new rows? It has been running like this every day, but only yesterday did it grow unexpectedly. After moving the table it returned to 76 MB. Thanks!
Unformatted Blocks ..................... 534,926
FS1 Blocks (0-25) ..................... 8,468
FS2 Blocks (25-50) ..................... 0
FS3 Blocks (50-75) ..................... 0
FS4 Blocks (75-100)..................... 8,591
Full Blocks ..................... 974
Total Blocks............................ 553,984
Total Bytes............................. 4,538,236,928
Total MBytes............................ 4,328
Unused Blocks........................... 0
Unused Bytes............................ 0
Last Used Ext FileId.................... 7
Last Used Ext BlockId................... 996,361
Last Used Block......................... 256
July 07, 2008 - 10:11 am UTC
insufficient data - something changed, *something* was different.
I'd guess someone stuck an append hint in there and didn't tell you.
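For illustration only, a sketch of the kind of pattern that produces exactly that (table and staging names are hypothetical): a direct-path insert allocates new blocks above the high water mark and ignores the free space left behind by the deletes, so the segment keeps growing until the table is moved or shrunk.

delete from t where load_date < sysdate - 1;
commit;

insert /*+ append */ into t
  select * from staging_t;   -- direct path: writes new blocks above the HWM
commit;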
What can cause an unformatted block below the HWM?
Lennert Bruin, August 12, 2008 - 12:59 pm UTC
Hi Tom,
We are experiencing the same problem. According to bug 7299727 (see metalink) this is related to bug 5890312. Unfortunately, that last bug is not open to the public (yet).
Having said that, I would like to understand how an unformatted block below the HWM can be created. When I just delete records, the blocks are marked empty (FS4), not unformatted. Any ideas?
Example:
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
SQL> create table arbor.lennert (id number) tablespace cat_ref;
Table created.
SQL> @show_space ARBOR LENNERT TABLE
Unformatted Blocks .....................0
FS1 Blocks (0-25) .....................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................0
FS4 Blocks (75-100).....................0
Full Blocks .....................0
Total Blocks............................16
Total Bytes.............................131072
Total MBytes ..........................0
Unused Blocks...........................13
Unused Bytes............................106496
Last Used Ext FileId....................18
Last Used Ext BlockId...................321545
Last Used Block.........................3
PL/SQL procedure successfully completed.
SQL> insert into arbor.lennert (select rownum from dba_objects);
7180 rows created.
SQL> commit;
Commit complete.
SQL> @show_space ARBOR LENNERT TABLE
Unformatted Blocks .....................0
FS1 Blocks (0-25) .....................1 !
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................0
FS4 Blocks (75-100).....................2 !
Full Blocks .....................10 !
Total Blocks............................16
Total Bytes.............................131072
Total MBytes ..........................0
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................18
Last Used Ext BlockId...................321545
Last Used Block.........................16
PL/SQL procedure successfully completed.
SQL> delete from arbor.lennert;
7180 rows deleted.
SQL> commit;
Commit complete.
SQL> @show_space ARBOR LENNERT TABLE
Unformatted Blocks .....................0
FS1 Blocks (0-25) .....................0
FS2 Blocks (25-50) .....................0
FS3 Blocks (50-75) .....................0
FS4 Blocks (75-100).....................13 !
Full Blocks .....................0
Total Blocks............................16
Total Bytes.............................131072
Total MBytes ..........................0
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................18
Last Used Ext BlockId...................321545
Last Used Block.........................16
PL/SQL procedure successfully completed.
Thank you in advance!
August 14, 2008 - 10:47 am UTC
blocks in ASSM are either
a) below the low high water mark, these are all formatted and ready to be used
b) above the low high water mark and below the high water mark - these may or may not be formatted
c) at or above the high water mark, these are unformatted
so, in the case of (b), you have unformatted blocks - when we advance the high water mark in ASSM, we move the low high water mark (only ASSM has this low high water mark) up to the old high water mark - and then advance the high water mark and leave the blocks in between basically unformatted - we format them as we need them
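For reference, the FS1..FS4 / Full / Unformatted breakdown that show_space prints for ASSM segments comes from dbms_space.space_usage; a minimal sketch of calling it directly against the segment used in the example above (set serveroutput on first):

declare
  l_unf   number;  l_unf_b  number;
  l_fs1   number;  l_fs1_b  number;
  l_fs2   number;  l_fs2_b  number;
  l_fs3   number;  l_fs3_b  number;
  l_fs4   number;  l_fs4_b  number;
  l_full  number;  l_full_b number;
begin
  dbms_space.space_usage( 'ARBOR', 'LENNERT', 'TABLE',
                          l_unf, l_unf_b,
                          l_fs1, l_fs1_b, l_fs2, l_fs2_b,
                          l_fs3, l_fs3_b, l_fs4, l_fs4_b,
                          l_full, l_full_b );
  dbms_output.put_line( 'Unformatted blocks : ' || l_unf );
  dbms_output.put_line( 'FS4 (75-100% free) : ' || l_fs4 );
  dbms_output.put_line( 'Full blocks        : ' || l_full );
end;
/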
Ricardinho, July 20, 2009 - 7:45 pm UTC
Sir
Regarding shrinking the HWM,
Assume I have thousands of table,
How can I know which table needs reorganization?
How can I decide wheter I need to shrink the hwm of the tables?
July 24, 2009 - 11:50 am UTC
use the segment space advisor, it tells you which segments would benefit from a shrink.
zahir, July 24, 2009 - 12:11 pm UTC
You can use EM
or
Select * from table (dbms_space.asa_recommendations())
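A slightly fleshed-out version of that query - the column names below are the ones documented for dbms_space.asa_recommendations, so verify them against your release:

select segment_owner,
       segment_name,
       reclaimable_space,
       recommendations
  from table( dbms_space.asa_recommendations() )
 order by reclaimable_space desc;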
A reader, September 15, 2009 - 3:26 pm UTC
Tom ,
If I have 4 billion records in a table, and in this table around 100,000 records are either updated or inserted daily,
and I also perform delete operations on 100,000 records daily,
I presume I will end up with fragmentation.
From reading through the documentation, shrinking the segments will reset the HWM and coalesce the space,
so a full table scan need not go too far (past blocks containing no data).
Would it also help index scans or improve performance?
September 15, 2009 - 4:31 pm UTC
... I presume I will end up with fragmentation ...
you presume incorrectly.
if you free up space for 100,000 records
and you load in 100,000 records
you would expect the table to grow to a certain size and pretty much stay there in a steady state
... Would it also help index scan or improve performance? ...
not really.
A reader, September 15, 2009 - 4:13 pm UTC
Adding to the above question .
We use ASM and we are in 10.2.0.4 on RHEL
September 16, 2009 - 6:55 am UTC
pretend for sake of discussion that ASM is just "file system" and a "file system" that you can add and remove devices from to make any mount point (diskgroup) grow or shrink
Now, thinking that way - plan everything - and then substitute in ASM for "file system" and "diskgroup" for "mount point" and you'll be done.
I think you are getting wrapped around the axle thinking ASM changes "everything", it doesn't. It makes lots of things easier, nicer, but basically it is a volume manager, a file system provider.
Excellent
Vishal Jaiswal, September 16, 2009 - 5:43 am UTC
Excellent explanation of HWM.
:)
BLOB column HWM reset
Abhisek, March 23, 2010 - 7:41 am UTC
hi Tom,
I have 10-15 tables that each have 3-5 BLOB columns and millions of records. Since the data volume had grown very large, it was decided to delete the records that are more than 5 years old. Deleting those records did not reset the HWM, even though a lot of records were deleted.
Performance has degraded because of that. We want to reset the HWM, but a very long downtime is not allowed. How can we achieve that?
March 23, 2010 - 12:40 pm UTC
why do you believe "performance has degraded because" the HWM was not reset?? why would that cause performance to degrade?
are you using automatic segment space management?
Blob Tuning
ABhisek, March 23, 2010 - 4:23 pm UTC
Hi TOm,
thanks for the reply. The requirement is such that we have to reset the HWM anyhow. So I was wondering if we could move the table to another tablespace or deallocate unused space...
Yes, the performance has degraded, as around 80% of the rows were deleted. Please help.
March 24, 2010 - 3:51 am UTC
why, tell me why. what is the technical reason.
tell me how performance degraded due to 80% of the rows being deleted. Give me some metrics - eg: this query used to take this amount of time and do this many IO's, now it takes this amount of time and does this many IO's
and describe WHAT hwm you want to affect here.
(yes, you can move the table and lobs, that would be an offline operation and you'd have to rebuild all indexes. You can use dbms_redefinition to do this online. You might be able to use alter table t shrink, but you didn't answer my questions from above, so I don't know...)
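To make those options concrete, a hedged sketch - the table, column, index and tablespace names are all hypothetical, and the online route also needs an interim table created beforehand with the storage you want:

-- offline: the table and its LOB segment are rewritten compactly; indexes go UNUSABLE
alter table t move lob (doc_body) store as (tablespace clob_ts);
alter index t_pk rebuild;

-- online alternative: dbms_redefinition against a pre-created interim table T_INTERIM
declare
  l_errors pls_integer;
begin
  dbms_redefinition.can_redef_table( user, 'T' );
  dbms_redefinition.start_redef_table( user, 'T', 'T_INTERIM' );
  dbms_redefinition.copy_table_dependents( user, 'T', 'T_INTERIM',
        copy_indexes => dbms_redefinition.cons_orig_params,
        num_errors   => l_errors );
  dbms_redefinition.finish_redef_table( user, 'T', 'T_INTERIM' );
end;
/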
Excellent Work
Sneha, March 24, 2010 - 8:52 am UTC
Thanks Soooo much for such a great site...Really Tom your site is a great help to all the new DBA's like me ;)
The way the HWM concept is explained with example is just too good....Thanks a ton :)
What about blocks that are half filled?
A reader, January 21, 2011 - 10:09 am UTC
Hi Tom,
I have gone through the process you explained previously to find fragmented blocks.
I have a doubt: a block may contain more than one row, say 5 rows per block. If I
delete 4 of those rows, the block then contains only 1 row.
The space released by deleting the 4 rows is fragmented and cannot be used for
further insertions, yet this block is counted among the used blocks (as it still contains 1 row).
So it is effectively a fragmented block. How can we find these kinds of fragmented blocks?
Please advise if I am going about this the wrong way.
Thanks in advance,
Venkat
January 24, 2011 - 7:17 am UTC
... The space released after deleting the 4 rows is fragmented which cannot be
used for
further insertions. ...
false, that is not a true statement. The block is not fragmented, the block would be 80% empty (assuming block can fit 5 rows, if 4 out of 5 rows are gone - the block is 80% empty) and would be on the freelist (assuming defaults and manual segment space management) or showing as mostly free (assuming automatic segment space management) and would definitely be a candidate for inserts until it hits pctfree again. There is NOTHING you need to do here, NOTHING.
A reader, January 24, 2011 - 12:42 pm UTC
Reset HWM
Brijesh, August 01, 2011 - 5:53 am UTC
Hi Tom,
I have a table which gets flushed and reloaded daily. It has close to 1 million records. We want to replicate the changes into a data warehouse, so we have to use delete (not truncate).
How can we reset the HWM before we reload the table?
August 01, 2011 - 11:59 am UTC
alter table t shrink space;
or
alter table t move; (but that would require an index rebuild on all indexes)
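A sketch of that second option for a hypothetical table T with an index T_IDX - the move resets the HWM but leaves the indexes unusable until they are rebuilt:

alter table t move;

select index_name
  from user_indexes
 where table_name = 'T'
   and status = 'UNUSABLE';

alter index t_idx rebuild;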
Impact on sql plan
Mark, September 20, 2011 - 8:20 am UTC
Hi Tom,
Does the "gap" between the number of rows vs. HWM have any impact on sql plan. Example:
1. row count: 1mil, HWM=50,000
2. after delete, row count: 10,000, HWM=50,000 still the same
Will sql plan be different? When the table is involved in a full scan, HWM can have a significant effect on the execution time. So, does the optimizer do any evaluation of the number of blocks with data vs. HWM to determine the cost of the full scan?
September 20, 2011 - 6:49 pm UTC
sql plans could be different - sure. The cost of the full scan will remain the same but the cost of other plans might well decrease.
the number of blocks => cost of full scan
say you have a query "select * from t where x = ?" and when the table is full - it will retrieve 500,000 records. We will likely full scan because the cost of using an index to get 500,000 records from a 1,000,000 table would be very high.
Now, after the delete - the cost of the full scan stays exactly the same. However, the query might only return 500 rows now (three orders of magnitude less). So, the cost of using the index suddenly drops dramatically - and hence the plan changes.
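In practice, that means after a mass delete you would refresh the statistics and re-check the plan; a minimal sketch with a hypothetical table T and bind :x:

exec dbms_stats.gather_table_stats( user, 'T' );

explain plan for select * from t where x = :x;
select * from table( dbms_xplan.display );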
AVD, July 29, 2012 - 9:05 am UTC
Hi Tom,
I have a question regarding usage of HWM
I am trying to copy data from a master table on a remote DB to a local machine. On the first day there are 698 rows and on the second day there are 709 rows. Can I copy the new rows directly, from yesterday's HWM up to today's updated HWM, or can the copy only be done based on a column on the master table that captures the date when a row was updated?
July 30, 2012 - 11:55 am UTC
no, no no no, NO.
We do not just insert new rows at the "end of the table", they go where ever there is space. You cannot just copy one "region" of the table to get all new rows - those new rows could be anywhere in the source table.
and you are describing replication - why wouldn't you just create a materialized view log on the base table and a materialized view at the remote site if in fact you really truly do need to copy this data.
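A minimal sketch of that replication setup, assuming a hypothetical master table MASTER_T and a database link REMOTE_DB - the materialized view log lives on the master site and the local materialized view fast-refreshes from it:

-- on the master (remote) database
create materialized view log on master_t with primary key;

-- on the local database
create materialized view master_t_copy
  refresh fast on demand
  as select * from master_t@remote_db;

exec dbms_mview.refresh( 'MASTER_T_COPY', 'F' );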
How does this relate to REDO logging?
TomS_Ott, October 30, 2024 - 5:26 pm UTC
Ok, so from reading this discussion (great discussion, btw), a main takeaway (and the one most relevant to my work) is that TRUNCATE lowers the HWM. Not mentioned here are inserts that avoid redo logging (i.e. with the append hint -- direct path mode). Such inserts grab unformatted blocks from "above" the HWM. Question: when a truncate "moves" free blocks to above the HWM, do they remain formatted? Do they get re-used by the above-noted append inserts? Or do such inserts instead grab unformatted blocks? i.e. My main question: does the fact that they had previously been formatted affect whether an append insert will re-use them?
October 31, 2024 - 1:53 pm UTC
Truncate deallocates space by default. So the database is free to assign these blocks to any table. So any append inserts will grab new blocks - these might be the ones just freed if nothing else has taken them.
If you use the reuse storage clause the HWM is reset, but the blocks remain allocated to the table. So append inserts will reuse these existing blocks.
Re: How does this relate to REDO logging?
TomS_Ott, October 31, 2024 - 4:16 pm UTC
Are you saying that I should always truncate with "reuse storage" for huge tables that are completely refreshed daily? i.e. Would that save Oracle from having to reformat most of the blocks? And if some of those tables are materialized views that are complete-refreshed in non-atomic mode (i.e. truncate + append insert, with the table nologging), how do I specify "reuse storage" in the mv refresh command?
November 01, 2024 - 5:44 am UTC
Are you saying that i should always truncate as "reuse storage" for huge tables that are complete refreshed daily? i.e. Would that save on Oracle not having to reformat most of the blocks?
No. Those blocks have to be rewritten anyway, because they now apply to a "new" object:
SQL> create table t as select * from dba_objects;
Table created.
SQL> select data_object_id
2 from user_objects
3 where object_name = 'T';
DATA_OBJECT_ID
--------------
266621
SQL> truncate table t reuse storage;
Table truncated.
SQL> select data_object_id
2 from user_objects
3 where object_name = 'T';
DATA_OBJECT_ID
--------------
266622
TomS_Ott, November 01, 2024 - 5:46 pm UTC
Is this a correct paraphrase of the "reuse storage" clause from your two replies? It will keep blocks "allocated" to the table (but not below its HWM). "So append inserts will reuse these existing blocks," even though "Those blocks ... now apply to a 'new' [table] object" (which is always created by invoking truncate, with or without the "reuse storage" clause).
November 04, 2024 - 7:00 pm UTC
Sorry - I should have been more careful with my explanation.
With reuse storage, the database keeps the same number of blocks allocated to the table as before the truncate. No other table can use them. These are new blocks though as Connor demonstrated.
blocks = 0 after "reuse storage"?
TomS_Ott, November 05, 2024 - 2:35 am UTC
create table Z as select * from <some huge table> where rownum < 200001;
select blocks from User_Tables z where z.TABLE_NAME = 'Z';
-- returns <some number>
truncate table Z REUSE STORAGE;
exec dbms_stats.gather_table_stats('<schema name>','Z');
select blocks from User_Tables z where z.TABLE_NAME = 'Z';
-- returns 0
Does the above blocks = 0 jive with your statement "With reuse storage, the database keeps the same number of blocks allocated to the table" ? My db is ASSM; is ASSM vs MSSM relevant?
November 05, 2024 - 4:55 am UTC
We move the high water mark (ie, highest *used* block, which hence becomes zero).
The *extents* are still allocated to the table (so no OTHER table can take that storage)
Hope this clarifies things
oops
TomS_Ott, November 05, 2024 - 5:01 am UTC
Ooops, I see now that "re-use storage" does not deallocate. I later had tried looking at empty_blocks (which is blocks allocated above the HWM --right?) which showed that "re-use storage" moved the count from blocks to empty_blocks so no blocks had been deallocated. Then did a "drop storage" which deallocated all blocks except keeping the "INITIAL" storage 64KB (table definition)
ANALYZE TABLE GEMS.ZEQP COMPUTE STATISTICS FOR TABLE;
select num_rows, blocks, empty_blocks, (blocks+empty_blocks)*8192/1024/1024 as z_MB, .... ;
which returned :
200000 10971 293 88.000000 -- after delete 20k and 20k and all 200k
0 0 11264 88.000000 -- re-use storage -- 88.000000 (MB from user_extents)
0 0 8 0.062500 -- drop storage (default) -- 0.062500 64.000kB (MB,kB from user_extents)
November 05, 2024 - 3:31 pm UTC
Right. Reuse storage preserves the space allocated to the table. Drop storage (the default) removes it (except the first extent).
your show_space() proc
TomS_Ott, November 05, 2024 - 5:29 pm UTC
p.s. I settled on the basic info from user_tables (the query in my last message); I guess it shed enough light on my question. I had tried your fancy show_space() proc but it would not compile: a "table not found" error, even though I could run a SELECT query against it using the same schema/account! Too weird an issue, so I didn't try to get it to work. Wish it were a dummy-proof copy/paste/compile.
November 06, 2024 - 2:55 pm UTC
What exactly did you run?