Question and Answer

Tom Kyte

Thanks for the question, wallacel.

Asked: November 27, 2000 - 8:36 pm UTC

Last updated: August 17, 2012 - 3:09 pm UTC

Version: 8.1.6

Viewed 10K+ times

You Asked

Tom:

when I run the query below, the result does not look
good. How do I improve or correct it?

select segment_name,extents from dba_segments where extents >30

SEGMENT_NAME EXTENTS
---------------------------------------- ----------
SOURCE$ 213
IDL_UB1$ 668
IDL_UB2$ 56
DEPENDENCY$ 36
I_SOURCE1 102
I_DEPENDENCY1 32
I_DEPENDENCY2 32
SYS_LOB0000021814C00002$$ 85


select count(0) from dba_segments where extents>5

COUNT(0)
----------
43






and Tom said...

It is 'better' than mine, where I have 90 objects with more than 30, and


ops$tkyte@ORA8I.WORLD> select count(0) from dba_segments where extents>5
2 /

COUNT(0)
----------
235


In fact, some of mine are nearing 1,000 extents:


SEGMENT_NAME EXTENTS
------------------------------ ----------
ARGUMENT$ 97
SOURCE$ 540
IDL_UB1$ 895
....

and I am perfectly happy. If having hundreds of extents was a problem -- I would not be happy. Since having lots of extents is not a problem, I have no problems with this situation.


This is not fragmentation. Fragmentation is when you have lots of irregular "holes" in a tablespace. In your system tablespace -- since space is never released, only allocated, you cannot have fragmentation. You do not have any HOLES in system -- just lots of stuff.

It should NEVER be your goal to absolutely minimize extents. That is a myth. Your situation above looks fine.
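A minimal sketch of how one could look for such "holes" -- the free extents in each tablespace -- using only DBA_FREE_SPACE (a standard view; interpret the sizes yourself):

select tablespace_name,
       count(*)               free_extents,
       min(bytes)/1024        smallest_kb,
       max(bytes)/1024        largest_kb,
       sum(bytes)/1024/1024   total_free_mb
  from dba_free_space
 group by tablespace_name
 order by tablespace_name;

Many small free extents of irregular sizes that could not satisfy a new extent request is what fragmentation would actually look like; a handful of free extents of uniform or nicely multiple sizes is not a problem.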




Comments

Fragmentation and performance

Reader, December 25, 2002 - 1:17 am UTC

"Fragmentation is when you have lots of irregular
"holes" in a tablespace"...

Does fragmentation affect performance? Or only space utilization?

Thanks

Tom Kyte
December 25, 2002 - 8:37 am UTC

space utilization.

Fragmentation - Performance

Bala, December 26, 2002 - 1:03 am UTC

Doesn't bringing the data into one extent increase performance, since all the data of the segment is available in one place (sequentially)? Suppose an operation needs to sort the whole table (select * from all_objects order by object_name). Please explain.

Sorry for the posting above, I got the details in other threads

Bala, December 26, 2002 - 2:21 am UTC

Please go to
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1667293495336
for the above posting.
Sorry again. Would be doubly sure the next time

fragmentation and performance

Anand Gopalan, April 08, 2003 - 12:55 am UTC

"Does fragmentation affect performance? Or only space utilization?"

I expected the answer "BOTH". You said "only space utilization".

I thought it would impact select queries, since the data now has to be fetched across multiple blocks.

In a similar way it would impact insert/update/delete as well, as these operations span fragmented rows.

Are my assumptions right?

Thanks for your time



Tom Kyte
April 08, 2003 - 7:35 am UTC

need to reread:

<quote>
This is not fragmentation. Fragmentation is when you have lots of irregular
"holes" in a tablespace. In your system tablespace -- since space is never
released, only allocated, you cannot have fragmentation. You do not have any
HOLES in system -- just lots of stuff.
</quote>

and no, having lots of extents does not affect DML performance. But this was NOT about fragmentation at all.

Bubble Frag

Sikandar Hayat, March 25, 2004 - 1:01 am UTC

I have used the following script to check any fragmentation,

SELECT dfsc.tablespace_name tablespace_name,
       DECODE (dfsc.percent_extents_coalesced,
               100, DECODE (GREATEST ((SELECT COUNT (1)
                                         FROM dba_free_space dfs
                                        WHERE dfs.tablespace_name = dfsc.tablespace_name),
                                      1),
                            1, 'No Frag',
                            'Bubble Frag'),
               'Possible Honey Comb Frag') fragmentation_status
  FROM dba_free_space_coalesced dfsc
 ORDER BY dfsc.tablespace_name
/

The query is showing many tablespaces with Bubble Frag even though I have locally managed tablespaces. This is the case on an 8i db, and I am surprised to see that on my latest installation of 9i the SYSTEM and UNDOTBS1 tablespaces are also showing bubble frag.

CWMLITE No Frag
DRSYS No Frag
ESD Bubble Frag
EXAMPLE No Frag
INDX No Frag
ODM No Frag
SYSTEM Bubble Frag
TOOLS No Frag
UNDOTBS1 Bubble Frag
USERS No Frag
XDB No Frag

Please let me know,
1- The reason for this fragmentation even though we are using locally managed tablespaces?
2- How much do these types of frag affect performance?
3- What is the easiest way to get rid of this frag, other than moving data to another tablespace and dropping the old one?
4- How to avoid such frag in the future?





Tom Kyte
March 25, 2004 - 9:12 am UTC

so, tell me, is "bubble" a "problem"? (no, it is not, if all of the bubbles are the same size or nice multiples)


You do not have fragmentation as I define it.

It does not affect your performance.
You do not need (or want) to get rid of it.
You do not need to avoid it.

How to evaluate the fragment after convert tablespace from DLT to LMT

dawei wang, April 22, 2004 - 4:24 pm UTC

I have a question:
1). Suppose the database version is 8i and the tablespace type
is DLT; there are some tables that have more than 1,000 extents.

2). We converted the tablespace from DLT to LMT;
the extents for these tables are still 1,000.
Given the advantages of LMT, do we not have to worry about
their extent counts? Is that right or not?


Tom Kyte
April 23, 2004 - 9:25 am UTC

think you mean "DMT" dictionary managed.

if you used the "turn this DMT into an LMT", you are getting partial benefits from LMTs.

if you used "create a new LMT and alter table move tablespace new_lmt_tablespace" then you are getting the full benefits

and yes, the number of extents in an existing segment is not really relevant as far as access goes.
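A minimal sketch of the "full benefits" route described above -- create a new locally managed, autoallocate tablespace and move the segment into it. The tablespace, datafile and index names are made-up placeholders:

create tablespace new_lmt
  datafile '/u01/oradata/db/new_lmt01.dbf' size 500m
  extent management local autoallocate;

alter table big_table move tablespace new_lmt;

-- moving a table changes its rowids, so each index on it must be rebuilt
alter index big_table_pk rebuild tablespace new_lmt;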

How to evaluate the fragment after convert tablespace from DLT to LMT - Contin

dawei.wang, April 23, 2004 - 3:44 pm UTC

Thanks for your help.

Just to confirm: no matter how many extents the tables have, we do not have to consider doing an export/import to reduce the number of extents after converting our tablespaces from DLT to LMT.

Thanks
-Dawei wang

space utilization by users/objects

Thiru, August 23, 2004 - 10:30 pm UTC

Tom,

I am trying to figure out a way of generating stats for space utilization by users. Is there a way to get this?

My SYSTEM tablespace shows 800+ MB utilized, and I have only SYS, SYSTEM and OUTLN allocated to this tablespace. The usage just keeps going up and up. Please advise how to go about finding out why this is happening.


Tom Kyte
August 24, 2004 - 7:19 am UTC

query dba_segments -- it has everything you need to see about who is using what.
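For example, a sketch of a per-owner breakdown of the SYSTEM tablespace using nothing but DBA_SEGMENTS:

select owner,
       segment_type,
       count(*)               segments,
       sum(bytes)/1024/1024   mb
  from dba_segments
 where tablespace_name = 'SYSTEM'
 group by owner, segment_type
 order by mb desc;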

Is it possible to have fragmentation in indexes

A reader, December 15, 2004 - 3:50 am UTC

Tom,

Fragmentation doesn't affect performance -- right? What if there is fragmentation in indexes?

Is it possible to have fragmentation in indexes, or in the tablespace used by indexes?

What is the effect of it?

Thanks for your consideration.

How can I check whether there is any actual fragmentation?

Maulesh P Jani, November 24, 2006 - 2:37 pm UTC

Hi TOM
All the information and practical examples you have shown on your site regarding index rebuilds and fragmentation are great and extraordinary.
I have just one question: as you have mentioned, it is possible that more than 1 or 10 or 100 extents might be allocated, which means there might be holes.
But then how can I check whether any actual fragmentation exists in an LMT, for tables or indexes?
Regards
Maulesh Jani

Tom Kyte
November 24, 2006 - 6:49 pm UTC

define to me fragmentation first, be very detailed, be very precise.

do not say "holes", describe in a technical fashion precisely what you mean by fragmentation.

Identifying tablespace fragmentation is very subjective.

Vitaliy, November 25, 2006 - 5:42 pm UTC

> But then how can I check whether any actual fragmentation exists in an LMT,
> for tables or indexes?

Identifying tablespace fragmentation is very subjective.

For instance you could perform tablespace analysis looking for fragments of free space that are potentially wasted. The goal here is to find free space fragments that are smaller than the next extent of "active" segments.

Important to note here is that **you** can only tell what the active segments are. Here's an example I put together where the list of active segments is built using collection from DBA_SEGMENTS and then it's analyzed against the free space available in the tablespace:

http://dbatoolz.orapros.com/tp/1249.identifying_tablespace_fragmentation__subjective_.html

Tom Kyte
November 25, 2006 - 5:47 pm UTC

no, it cannot be subjective.

You must have objective, scientific, valid things you are looking for.


(you have LMTs - the "find free space fragments that are smaller than the next..." need not apply)


your script doesn't apply to LMTs

and it is not really subjective, it is "objective", you have a list of things you are specifically looking for there!)

subjective and LMT

Vitaliy, November 25, 2006 - 6:14 pm UTC

> (you have LMTs - the "find free space fragments that are
> smaller than the next..." need not apply)
>
...
> your script doesn't apply to LMTs
>

you must have missed the following:

NOTE:
This is useful when you are dealing with legacy tablespaces
that were converted from dictionary managed to local managed
but have no uniform size defined (typical issue with
ORACLE APPS instance upgraded from pre-11i to a later version).

> and it is not really subjective, it is "objective", you have a
> list of things you are specifically looking for there!)
>

subjective - pertaining to or characteristic of an individual; personal; individual: a subjective evaluation.

REFERENCE:
http://dictionary.reference.com/browse/subjective


Tom Kyte
November 25, 2006 - 6:46 pm UTC

but - fragmentation is NOT subjective, there are objective measures - once you define what you mean. The problem is no one "does"

And the poster said "LMT"

fragmentation is subjective to objective measures

Vitaliy, November 25, 2006 - 9:50 pm UTC

> but - fragmentation is NOT subjective, there are objective
> measures - once you define what you mean.
> The problem is no one "does"
>

I will agree to this - fragmentation is subjective to objective measures.


> And the poster said "LMT"
>

My example is with a "converted" LMT - but it is still an LMT.


khoo, February 23, 2010 - 8:54 pm UTC

Hi Tom,
I am currently trying to do a health check on my database using MetaLink note 122669.1. I came across an item called tablespace fragmentation and I used script 1020182.6 to generate the result. However, I am not sure how to interpret the result: what value is too much, when I need to do something about it, or whether it can be ignored. By the way, I am now on 10g and my tablespaces are LMTs. Thank you for your time.

Khoo

Tom Kyte
March 01, 2010 - 8:40 am UTC

I would ignore that "fragmentation script"

first, if you use locally managed tablespaces, it doesn't really matter if the extents are next to each other or not.

second, use locally managed tablespaces

that script is a holdover from the olden days of dictionary managed tablespaces.
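A quick sketch of how to confirm what kind of tablespaces you actually have, straight from DBA_TABLESPACES:

select tablespace_name,
       extent_management,          -- LOCAL or DICTIONARY
       allocation_type,            -- SYSTEM, UNIFORM or USER
       segment_space_management    -- AUTO or MANUAL
  from dba_tablespaces
 order by tablespace_name;

LOCAL extent management is the case where the old dictionary-era fragmentation scripts simply do not apply.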

Fragment issue

Ashok Chejerla, November 03, 2010 - 10:59 am UTC

I used the following query to find the fragmented space in a table.

select owner, table_name, blocks, num_rows, avg_row_len,
       round(blocks*8/1024, 2)||'MB' "TOTAL_SIZE",
       round(num_rows*avg_row_len/1024/1024, 2)||'MB' "ACTUAL_SIZE",
       round((blocks*8/1024) - (num_rows*avg_row_len/1024/1024), 2)||'MB' "FRAGMENTED_SPACE"
  from all_tables
 where table_name = 'Test1'

It returned total size as 332.35 MB, actual size as 671.19 MB, and fragmented space as -338.84 MB.

The result is confusing: how can fragmented space be negative? Does this have an effect on performance?
Tom Kyte
November 04, 2010 - 12:53 am UTC

define what you mean by "fragmented". I've yet to see a "fragmented" table.

I've seen tables with lots of "white space" which could benefit from a reorganization if you full scan them a lot. But they are not 'fragmented', they just contain a lot of empty space.

I've seen tables with thousands of extents - they are OK with me, I've no problem with them in general.


avg_row_len * num rows = a very misleading number. You didn't/don't account for block overhead. You don't account for pctfree.

If you have 10g and above and access to the ASH/AWR features - just use the segment advisor to see if there are segments that might benefit from a shrink.
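If you want a measure that does account for block overhead and pctfree, a sketch using DBMS_SPACE.SPACE_USAGE (for ASSM segments) follows -- the owner and table name are placeholders, and you would run it with serveroutput on:

declare
  l_unf   number;  l_unf_b  number;
  l_fs1   number;  l_fs1_b  number;
  l_fs2   number;  l_fs2_b  number;
  l_fs3   number;  l_fs3_b  number;
  l_fs4   number;  l_fs4_b  number;
  l_full  number;  l_full_b number;
begin
  dbms_space.space_usage(
    segment_owner      => user,        -- placeholder: segment owner
    segment_name       => 'T',         -- placeholder: table name
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1_b,   -- 0-25%  free
    fs2_blocks => l_fs2, fs2_bytes => l_fs2_b,   -- 25-50% free
    fs3_blocks => l_fs3, fs3_bytes => l_fs3_b,   -- 50-75% free
    fs4_blocks => l_fs4, fs4_bytes => l_fs4_b,   -- 75-100% free
    full_blocks => l_full, full_bytes => l_full_b );

  dbms_output.put_line( 'full blocks              : ' || l_full );
  dbms_output.put_line( 'mostly empty (75-100% fs): ' || l_fs4 );
end;
/

Lots of FS3/FS4 blocks relative to full blocks is the "white space" a shrink or reorganization could give back.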

Fragment issue

Ashok Chejerla, November 04, 2010 - 9:32 am UTC

What I meant by a fragmented table is a table with a lot of empty space. Thanks for the information, it is very useful.

6 million extents in db

Tadi Kumar, November 04, 2010 - 10:59 am UTC

Hi Tom,
We have a third-party vendor app and db that was performing poorly: 6 minutes to update the blob (a NOCACHE blob) on a Sun T5440. We have the blob in the same tablespace as the other app tables. The tablespace was an LMT with uniform allocate 64K chunks. Blob size is 220 GB.
The number of extents for the blob was 3.08 million. In total we have 6 million extents in the db.

What we noticed is that an export and import into an autoallocate LMT tablespace made a significant improvement in performance: 20 seconds for the update to the blob (2,700 extents for the blob).
Blob size went down from 220 GB to 160 GB.

I saw some white papers saying extents do not matter, but our experience is different.

And also we did not see any wait events in the AWR report. We opened an SR with Oracle; they looked at the AWR and ADDM reports and said it was an application issue.

Can you please let us know:
1) What overall impact will we see if we have this many extents? Will all DML be impacted, or are selects also impacted? Are any other processes impacted, such as DBWR writing to the datafiles?
2) What wait events will we see if extents are causing the performance issue?
Our wait events before the reorg were:
Event                         Waits   Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                              1,429                  86.6
db file parallel write        7,726     277         36       16.8               System I/O
db file sequential read      16,206     207         13       12.5               User I/O
control file parallel write   4,095      83         20        5.1               System I/O
log file parallel write       1,602      45         28        2.7               System I/O

Your help is appreciated; our DBA group is looking forward to your response.

Thanks

Tom Kyte
November 05, 2010 - 5:09 am UTC

... The tablespace was an LMT with uniform allocate 64K chunks. ...

well, that was a bad idea wasn't it..


You would have what one would call an EXTREME EDGE CASE. People are worried about 10 extents, a hundred extents, a few thousand extents.

6,000,000 isn't what people normally experience - in fact, you are a first for me. In 23 years.

Your experience is different because it is somewhat unimaginable that someone would do that.

If your lobs are large and you use a 64k extent size - you will have measurable overhead - yes.

Please start using system allocated extent sizes and nothing else - use no other approach. You'll have a few hundred, maybe a thousand extents and life will be good.

6,000,000 is an absurdly large number. I'm amazed that someone would create such a tablespace to hold LARGE OBJECTS.
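A sketch of the kind of change being suggested -- a system-allocated (autoallocate) tablespace for the LOB. Every name and size below is hypothetical:

create tablespace lob_auto
  datafile '/u01/oradata/db/lob_auto01.dbf' size 10g autoextend on
  extent management local autoallocate;

-- MOVE rebuilds the table segment and relocates the LOB segment into the
-- new tablespace; indexes on the table must be rebuilt afterwards
alter table app_table move lob (blob_col) store as (tablespace lob_auto);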

On Extents affecting performance...

Steve C., November 04, 2010 - 11:58 am UTC

In regards to extents affecting DML performance the main situation I can think of is if the extent size is smaller than the multi-block read capability of the system. In that case a SELECT would underperform because it would not be able to leverage the full multi-block capabilities of Oracle.

I say that because I saw in an earlier post (and the following test case suggests) that a multi-block read will not cross extents:

SQL> create table t1 storage(initial 64k next 64k) tablespace users noparallel as
(select rownum id, lpad(' ',4000,' ') text from dual connect by level <= 1000)
Table created.
SQL> -- gather stats
SQL> exec dbms_stats.gather_table_stats(null,'T1')
PL/SQL procedure successfully completed.
SQL> --extents ranging from 64k to 1M.
SQL> select blocks, count(*) from user_extents where segment_name = 'T1' group by blocks
 
    BLOCKS   COUNT(*)
---------- ----------
       128          7
         8         16
 
2 rows selected.
SQL> -- Default of 1M multiblock read.
SQL> select name, value from v$parameter where 
name in ('db_file_multiblock_read_count', 'db_block_size')
 
NAME                              VALUE
--------------------------------  -----
db_block_size                     8192
db_file_multiblock_read_count     128

2 rows selected.
SQL> -- Make sure buffer cache is flushed.
SQL> alter system flush buffer_cache
System altered.
SQL> -- Start trace and ID.
SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 8'
Session altered.
SQL> alter session set tracefile_identifier = 'MBRC_EXTENT_TEST'
Session altered.
SQL> -- Execute query
SQL> select count(*) from t1
 
  COUNT(*)
----------
      1000
1 row selected.
SQL> -- end trace
SQL> alter session set events '10046 trace name context off'
Session altered.
SQL> /****** Trace results
PARSING IN CURSOR #5 len=23 dep=0 uid=5 oct=3 lid=5 tim=370175611036 hv=4235652837 ad='25c65ba8' sqlid='5bc0v4my7dvr5'
select count(*) from t1
END OF STMT
PARSE #5:c=0,e=29,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=370175611028
EXEC #5:c=0,e=96,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=370175611427
WAIT #5: nam='SQL*Net message to client' ela= 0 driver id=1413697536 #bytes=1 p3=0 obj#=380 tim=0

*** 2010-10-14 23:17:57.004
WAIT #5: nam='db file sequential read' ela= 0 file#=4 block#=10635 blocks=1 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10636 blocks=5 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10641 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10650 blocks=7 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10657 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10666 blocks=7 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10673 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10682 blocks=7 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10689 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10698 blocks=7 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10705 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10714 blocks=7 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10721 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10730 blocks=7 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10737 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10746 blocks=7 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10753 blocks=8 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10763 blocks=126 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=10891 blocks=126 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=11019 blocks=126 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=395 blocks=126 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=523 blocks=126 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=651 blocks=126 obj#=74622 tim=0
WAIT #5: nam='db file scattered read' ela= 0 file#=4 block#=779 blocks=126 obj#=74622 tim=0
FETCH #5:c=0,e=488156,p=1001,cr=1005,cu=0,mis=0,r=1,dep=0,og=1,tim=370176099777
*****/

Tom Kyte
November 05, 2010 - 5:18 am UTC

... In regards to extents affecting DML performance the main situation I can think of is if the extent size is smaller than the multi-block read capability of the system. In that case a SELECT would underperform because it would not be able to leverage the full multi-block capabilities of Oracle. ...

In theory that is true, in practice - it will not be true (i'll expand on that in a moment).

If you use system allocated extents - you'll have a few - very few - 64k extents that jump up in size rapidly - so in real life - even with hundreds or thousands of extents, this will NOT be an issue.

As for this being theoretically true - the way multi-block IO is done is not commonly understood. In most real systems - the case will be that the table is sometimes read via an index and sometimes via a full scan. In that case - SOME of the table will be in the cache. If part of a table is in the cache - that'll chop up our multi-block reads anyway.

Suppose the multi-block read count would have us read 32 blocks at a time, and suppose block 16 was in the cache. We would

read blocks 1..15 from disk into the cache.
and then
read blocks 17..32 from disk into the cache.

skipping a read on block 16. In real life - we do not typically hit our multi-block read count (exception: direct path reads - parallel query frequently checkpoints data to disk before reading so it can just bypass the cache entirely - the cache just gets in the way)



and remember - this argument isn't really about having lots of extents - it is about having a sizable number of extents in relation to the size of the table that are not a multiple of the multiblock read count. that is - if a table has 1,000 extents ("a lot" of them), and the first 8 are 64k - and the remaining 992 are multiples of the multi-block read count - this doesn't really affect you.


Now, if you did something incomprehensible - like create an LMT with 64k uniform sized extents to store LARGE OBJECTS - and you had 6,000,000 of them (just a hypothetical situation :) ) - I could see merit to this argument.....

Tadi Kumar, November 05, 2010 - 9:26 am UTC

Thanks, Tom, for the information and example; that helps a lot in understanding how selects are impacted. We might be changing our standard for our tablespaces to autoallocate.

But I did not understand why the blob updates performed so poorly while no wait event was reported in AWR. We traced the sessions, but since it was a NOCACHE blob the trace did not catch the update statement. How do you identify issues with extents while using blobs? (It took some time for us to determine this.)
I did log mining and saw multiple commits for one update-blob statement. At first we thought the application's chunking was doing it, but it seems that because of the extent size the blob update was committing for each extent even though the application was not committing. I was really surprised by that. Do you think there is a correlation between the extent size and the commits we saw in the log mining for the blob update?
There is not much information on how blobs are handled internally by Oracle. Do you have any suggestions for a book or white paper on tuning blobs?

Thanks for your help. With the feedback you provided we are now looking at all our databases to see whether we are putting big objects in small-extent LMTs, and doing a reorg where needed.



Tom Kyte
November 06, 2010 - 7:10 pm UTC

... no wait event was reported in awr ...

because you were not waiting for something, you were just doing a TON of work.

declare
  x number;
begin
  for i in 1 .. 1000000
  loop
     x := ln(i);
  end loop;
end;
/


that won't "wait" for anything other than cpu and assuming it gets all of the cpu it wants - it'll still take a bit of time to run. Because it does a TON of stuff

... Do you think there is a correlation between the extent size and the commits we saw in the log mining for the blob update? ...

sure, you have what I would term an almost absurd situation. 64k uniform extent sizes??? for LARGE OBJECTS???

recursive space management transactions may well have to commit - by default every 10th "nextval" call commits for example.



do NOT use "small extent" tablespaces - just use auto allocate for everything.

Truncate causing fragmentation?

Tom, February 03, 2011 - 10:01 pm UTC

Thank you for the time you put into answering questions on your site. Every time I read a page on your site I feel like I get another PHD in Oracle.


Wanted to see if you could help with an issue that has come up with our customer’s production DBAs that is relevant to this page.

Our developer’s deployment code has been rejected by the production DBA group due to the fact that it utilizes “TRUNCATE” and truncate is DDL, which is not allowed in their OLTP systems.

Originally the DBAs asked if the developers could help reduce the amount of rollback space that was being consumed in an application report during a 1 million-row table delete, insert, commit.

I recommended to the developers to truncate, set table and indexes to nologging, do insert with append hint to help alleviate the DBA’s rollback space concerns and to have some speed. We can’t drop and rebuild indexes since we are forbidden from having DDL in our code.

I thought that this solution would reduce the rollback space since it was holding 1 million rows times 2, plus 2 sets of indexes on a million rows.

Today the DBAs said that the reason for the rejection is not that we are utilizing the forbidden DDL but that it will cause their database server to become fragmented and will crash the server in 3 days.

We did some benchmarking in our own environment and could not produce any fragmentation problems.

Are they correct that a truncate of a million-row table with 4 indexes every 2 hours could cause their space to become fragmented?

Can you shed some light on how the oracle DB reacts to the above type of code in regards to space fragmentation?

Should we seek a different solution?

Thanks! By the way, I utilized one of your 11 cool features from your Oracle 11g presentation last week and it worked out great!

Tom Kyte
February 04, 2011 - 9:05 am UTC

... Today the DBAs said that the reason for the rejection is not that we are
utilizing the forbidden DDL but that it will cause their database server to
become fragmented and will crash the server in 3 days.
....

the production DBA's know not of what they speak.


definition of fragmentation: you have a tablespace whereby dba free space says "there is lots of space available" - but if you try to allocate say a 1mb bit of storage it fails. The cause of this would be that you have tons of tiny free extents - but they are not contiguous - and therefore you cannot make any sizable space requests.


Let me make an assumption, it is not a necessary assumption but makes it easier to "prove" this is not an issue.

That assumption is you are using locally managed tablespaces with either uniform or system allocated extents.


If you are using uniform extents - it is physically impossible - as in NOT POSSIBLE - as in IT CANNOT HAPPEN - that you suffer from fragmentation. If every bit of space is managed using the same unit of allocation - IT IS NOT POSSIBLE to become fragmented. Every extent you give back to dba free space is immediately usable by any segment in that tablespace since every segment uses exactly, precisely the same size extent.


If you are using system allocated extents - it is again not really truly possible to suffer from fragmentation. With system allocated extents (my favorite by far!!!!), the extents are designed to all be multiples of each other (8 blocks, 128 blocks, 1024 blocks and so on). So once again, the space is very easily reused.


And the real kicker here is - you are immediately going to reuse that space with something that is about the same size - the tablespace will not grow during this operation - you'll end up with what you started with basically.

And furthermore - you can truncate and KEEP the storage - so you don't have to worry about the storage here at all - not a little.



It certainly does sound like you should be using truncate+insert /*+ APPEND */ - it'll generate the least amount of UNDO (only undo for the index merges that take place at the end of the insert) and the least amount of REDO. It should also run much faster in general than the delete+insert.
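A sketch of that load pattern; the table names are hypothetical and the segment is assumed to be set NOLOGGING as described above:

truncate table report_t reuse storage;   -- keep the already-allocated extents

insert /*+ append */ into report_t       -- direct path load: minimal undo,
select * from report_source;             -- and minimal redo on a NOLOGGING segment

commit;                                  -- the direct-path loaded data cannot be
                                         -- queried in this session until committed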



If this is a temp table for a report (as it sounds) why aren't you using a global temporary table?


Another Tom from Leesburg - well, Round Hill but close :)

Oracle version

Tom, February 04, 2011 - 6:24 am UTC

Oracle 10g R2.4..

David Aldridge, February 07, 2011 - 4:26 am UTC

Truncating with the "reuse storage" option, or using "drop storage" and the minextents parameter to preserve a minimum segment allocation, seem to be very underused but the case for one or other seems to be unassailable in this context.

Not only do they render the (dubious at best) fragmentation objection moot, but they also correctly indicate for space management purposes that so-many bytes really must be reserved for a particular operation to be successful.

Whether you use REUSE STORAGE or DROP STORAGE and MINEXTENTS is really a matter of whether you want to reserve the maximum space ever used or just a reasonable amount that you are sure you will need.

Fragmentation response

Tom, February 07, 2011 - 8:42 am UTC

Thank very much for your response this will make the developers feel better about their solution.

I will ask the developer to investigate the issue of “reuse storage” and why they did not employ it with their release.

The use of global temporary tables was not employed since the table is utilized by many different users during the 2 hours between the report’s scheduler job executions. We have utilized global temporary tables for reports that need to disappear after the user closes the report but have never considered them for a report that is static for a 2 hour time period and reviewed by many users.

I know you were in Leesburg at one time. Funny, my wife thought I was crazy for keeping one of your books in my car for purposes of getting you to sign it if I bumped into you in Costco.

Thank you and have a great day!

Tom Kyte
February 09, 2011 - 7:04 am UTC

Hah, I do go into Costco from time to time ;)

DBA response to the fragmentation discussion:

Tom, February 08, 2011 - 7:09 am UTC

The concern is about leaf block fragmentation. We realize that when a table is truncated the index is also truncated. However, there is nothing in the truncate command that addresses the leaf blocks. The concern is that new leaf blocks will continually be added as the data is truncated and reloaded.

Is this something that the developer should be concerned with? If this is a problem what is the solution to avoid the issue. Thank You for your time.
Tom Kyte
February 10, 2011 - 3:51 pm UTC

that is the "least smart" comment they could make.

When you truncate the table, the index is truncated and there are NO LEAF BLOCKS left. none.


That concern is not a concern. Tell them to demonstrate the issue - then wait forever for they cannot. But only because the issue doesn't exist.

The lack of understanding how things work by your DBA is scary. They should NOT be fighting you on this, they should be THANKING you.


ops$tkyte%ORA11GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA11GR2> create index t_idx on t(owner,object_type,object_name);

Index created.

ops$tkyte%ORA11GR2> analyze index t_idx validate structure;

Index analyzed.

ops$tkyte%ORA11GR2> create table idx_stats as select 'before' when, index_stats.* from index_stats;

Table created.

ops$tkyte%ORA11GR2> truncate table t;

Table truncated.

ops$tkyte%ORA11GR2> analyze index t_idx validate structure;

Index analyzed.

ops$tkyte%ORA11GR2> insert into idx_stats select 'after' when, index_stats.* from index_stats;

1 row created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select when, height, blocks, lf_rows, lf_blks, br_rows, br_blks, btree_space, used_space
  2    from idx_stats
  3   order by when desc
  4  /

WHEN   HEIGHT BLOCKS LF_ROWS LF_BLKS BR_ROWS BR_BLKS BTREE_SPACE USED_SPACE
------ ------ ------ ------- ------- ------- ------- ----------- ----------
before      3    640   72513     507     506       4     4086084    3638389
after       1      8       0       1       0       0        7996

ops$tkyte%ORA11GR2> 


The truncate takes care of the index - the leaf blocks just go away, the structure is completely "reset"

Fragmentation follow-up

Tom, February 09, 2011 - 8:53 am UTC

I am going to assume that the answer to the leaf block question should be obvious based on your answer above on table fragmentation, and that is why you chose not to follow up on this question. This, coupled with our own benchmarking and research, should give us enough to respond to the DBA group. I recognize that you don't have the bandwidth to get in the middle of every DBA/developer discussion.
As developers we try to study and understand as much as we can about Oracle and its structures, but I must admit that I have not had to think about fragmentation when developing code in years.
Thank you for your continued support of Oracle with this site.

Tom Kyte
February 10, 2011 - 4:46 pm UTC

no, the reason I didn't followup is because I'm in my third city in three days with three different customers doing all day sessions :)

I was tired.


SAFE (Simple Algorithm for Fragmentation Elimination)

Tremegusta, July 26, 2012 - 3:11 pm UTC

Hi Tom!

I have a question about this issue.

What about SAFE Methodology, which says:

"The performance for DML operations is largely independent
of the number of extents in the segment. However, certain DDL operations such as dropping and truncating of segments are sensitive to the number of extents. Performance measures for these operations have shown that a few thousand extents can be supported by Oracle without a significant impact on performance. A reasonable maximum has been determined to be 4096.

The goal of our recommended algorithm is to keep the number of extents below 1024 which is well within the range that
Oracle can efficiently handle."

Is it still reasonable to apply it in production environments to prevent fragmentation and waste of disk space?

SAFE (default sizes): 128K, 4M, 128M extent sizes for the respective tablespaces.


Thanks in advance!
Tremegusta

Tom Kyte
July 30, 2012 - 9:41 am UTC

... However, certain DDL operations such
as dropping and truncating of segments are sensitive to the number of extents. ...

that was true in the dark ages, eons ago.

it is not true with locally managed tablespaces.



SAFE (Simple Algorithm for Fragmentation Elimination)

Tremegusta, July 31, 2012 - 1:29 pm UTC

I see, but my doubts continue...

Is it still reasonable to apply SAFE in production environments to "prevent fragmentation" and waste of disk space?

((I think nowadays the real problem is still wasted space and badly allocated segments))

SAFE (default sizes): 128K, 4M, 128M extent sizes for the respective tablespaces.


Thanks in advance!
Tremegusta
Tom Kyte
July 31, 2012 - 4:16 pm UTC

you don't need to do this at all with locally managed tablespaces.

forget it exists, it is not useful.


TABLE FRAGMENTATION

swapnil, August 17, 2012 - 9:17 am UTC

We are trying to find out if a table is fragmented.
The database version is 10.2.0.5 and the tablespace is ASSM.

Segment space advisor is showing the following results:

Segment  Segment Type  Reclaimable Space (MB)  Allocated Space (MB)  Used Space (MB)
-------  ------------  ----------------------  --------------------  ---------------
TABLE1   TABLE                       1,490.41             10,822.00         9,331.59

We tried to create a test table TEST_TABLE1 using CTAS from TABLE1. Below are the statistics of these two tables:

select blocks from dba_tables where table_name='TABLE1'; 

    BLOCKS
----------
   1376755
   
select NUM_ROWS,AVG_ROW_LEN from dba_tables where table_name='TABLE1';

(the table was not completely analyzed...)

  NUM_ROWS AVG_ROW_LEN
---------- -----------
   3297094        1537

select count(1) from schema1.table1;

  COUNT(1)
----------
   3306062

select (sum(blocks)*8192)/1024/1024 from dba_tables where table_name='TABLE1';

(SUM(BLOCKS)*8192)/1024/1024
----------------
 10755.8984
-----------------------------------------------------------------------------
create table schema1.test_table1 as select * from schema1.table1 nologging; 

This table was analyzed (with analyze table, not dbms_stats) with a 50% estimate.


select NUM_ROWS,AVG_ROW_LEN from dba_tables where table_name='TEST_TABLE1';

  NUM_ROWS AVG_ROW_LEN
---------- -----------
   3306062        2486
   
select (sum(blocks)*8192)/1024/1024 from dba_tables where table_name='TEST_TABLE1'
(SUM(BLOCKS)*8192)/1024/1024
--------------
11111.4141
      


select sum(bytes)/1024/1024 from dba_extents where SEGMENT_NAME='TEST_TABLE1';

SUM(BYTES)/1024/1024
--------------------
  11160



select blocks from dba_tables where table_name='TEST_TABLE1';

    BLOCKS
----------
   1422261

----------------------------------------------------------------------------------
SELECT UT.OWNER "OWNER" , UT.TABLE_NAME "TABLE NAME",
ROUND(UT.NUM_ROWS*UT.AVG_ROW_LEN/1024/1024,2) "Used Space MB",
ROUND(US.BYTES/1024/1024,2) "ALLOCATED SIZE MB",
ROUND(US.BYTES/(UT.NUM_ROWS*UT.AVG_ROW_LEN),2) "TIMES LARGER"
 FROM DBA_TABLES UT, DBA_SEGMENTS US
 WHERE (UT.NUM_ROWS > 0 AND UT.AVG_ROW_LEN > 0 AND US.BYTES > 0)
 AND UT.PARTITIONED = 'NO'
 AND UT.IOT_TYPE IS NULL
 AND UT.IOT_NAME IS NULL
 AND UT.TABLE_NAME = US.SEGMENT_NAME
 AND ROUND(US.BYTES/1024/1024,2) > 1024
 AND ROUND(US.BYTES/1024/1024,2) > (ROUND(UT.NUM_ROWS*UT.AVG_ROW_LEN/1024/1024,2)* 2)
 and ut.table_name='TABLE1'
 ORDER BY 4 DESC;

OWNER     TABLE NAME      Used Space MB  ALLOCATED SIZE MB  TIMES LARGER
--------  --------------  -------------  -----------------  ------------
schema1   TABLE1                4832.87              10758          2.23
schema1   TEST_TABLE1           7838.13              11160          1.42

Segment space advisor on the original table TABLE1 is showing these results:

Segment  Reclaimable Space (MB)  Allocated Space (MB)  Used Space (MB)
-------  ----------------------  --------------------  ---------------
TABLE1                 1,490.41             10,822.00         9,331.59

This tables contains 6 LOB columns.

I understand that using the above SQL is pretty much useless since block overhead and other things are not considered. But the question is:
why do we see this huge difference between allocated and actual used space, and in the segment advisor output?

Honestly, I am confused here. I understand num_rows*avg_row_len is useless,
but how can it show a 5 GB difference compared with the segment advisor?

Further....


SQL> exec show_space('TABLE1','SCHEMA1');
Free Blocks.............................
Total Blocks............................1377024
Total Bytes.............................11280580608
Total MBytes............................10758
Unused Blocks...........................0
Unused Bytes............................0
Last Used Ext FileId....................35
Last Used Ext BlockId...................2129289
Last Used Block.........................8192 

PL/SQL procedure successfully completed.

SQL> exec show_space('TEST_TABLE1','SCHEMA1');
Free Blocks.............................
Total Blocks............................1428480
Total Bytes.............................11702108160
Total MBytes............................11160
Unused Blocks...........................6219
Unused Bytes............................50946048
Last Used Ext FileId....................43
Last Used Ext BlockId...................1774601
Last Used Block.........................1973

PL/SQL procedure successfully completed.

How did 6,219 unused blocks get created in the newly created table?

Tom Kyte
August 17, 2012 - 3:09 pm UTC

We are trying to find out if a table is fragmented.



define fragmented for me please. what does it mean.



as for why the new table has more space allocated to it - you tell us nothing about the extent allocation for the tablespace you are in. I'll assume system allocated extents.

we just decided based on the growth rate of the second table to give it more space - we decide what the extents look like, what size they are.

compare the extents of the two tables.
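
A sketch of that comparison, using the owner and segment names from the post above:

select segment_name,
       bytes/1024             extent_kb,
       count(*)               extents,
       sum(bytes)/1024/1024   mb
  from dba_extents
 where owner = 'SCHEMA1'
   and segment_name in ('TABLE1', 'TEST_TABLE1')
 group by segment_name, bytes
 order by segment_name, bytes;

With system-allocated extents you should see only a few distinct extent sizes (64K, 1M, 8M, 64M), and the mix can differ between the original table and the CTAS copy simply because they grew differently.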