Question and Answer

Chris Saxon

Thanks for the question, Ruben.

Asked: October 19, 2001 - 2:50 pm UTC

Last updated: September 02, 2015 - 1:38 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked


Hi, Tom

There is a lot of debate about the number of extents that should be created for segments. Some people think differently about this myth.
I have read all of your links

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:730289259844

that discuss this point, and I found that it does not matter how many extents a segment has.

I agree with you, because I have run tests showing there is no real difference between having 1, 10, 100, or 1,000 extents per table; a full table scan takes the same time, with only insignificant differences. The following is a test I did.
Both tables have the same structure and the same data, in different locally managed tablespaces.

SEGMENT_NAME EXTENTS BYTES BLOCKS INITIAL_EXTENT
---------------------- ---------- ---------- ---------- --------------
F 1 5242880 1280 5242880
M 167 4788224 1169 28672

tkprof for table F (Full Table Scan)
--------------------------------------

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 1 0 1 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3943 0.84 0.90 1170 5038 4 59129
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3945 0.85 0.91 1171 5038 5 59129



tkprof for table M (Full Table Scan)
------------------------------------

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.01 1 0 1 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3943 0.95 0.89 1166 5038 20 59129
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3945 0.97 0.90 1167 5038 21 59129



But I found the following paragraph on page 429 of the book "Oracle Performance Tuning"
by Mark Gurry and Peter Corrigan.

"Your goal is to minimize the number extents on disk. Access to contiguous
areas of disk is much faster than access to noncontiguous area. In one test
that we did on a 4,000-row table; we found that when the entire table fit
on one extent, it took 0.76 second to scan it; when the table was spread over
10 extents, it took 3.62 seconds.
We have found that the existence of a small number of such extents (fewer than
five) doesn't seem to affect performance very much, but it is a good practice
to store your objects in a single extent."

Regarding the contiguous areas of disk mentioned above, you say:

"As for the "seek time" -- you assume that an extent is contigous on disk. They
are not. When I ask a file system for 100meg of space -- it does not give me
100meg of contigous space, it gives me space from here, there, everything (its
been a loooonngggg time since you had to use CCF (create contigous file) to
create a database file!). Even if everything was in a single extent -- its
still ALL OVER the disk. Add in volume managers, raid arrays, etc etc etc --
your data is no where NEAR contigous."

"PHYSICALLY, on the disk, an extent is not contigous. File systems dole out
space in tiny chunks. They give out space as they find it. They do not
allocate contigous space."

Another question:

I have made space estimates for 10 tables and found the proper initial extent for each one:
8 of them with a 60MB initial extent and 2 of them with 100MB.
This gives me an average of 30 extents per table per year.
But in "HOW TO STOP DEFRAGMENTING AND START LIVING: THE DEFINITIVE WORD ON
FRAGMENTATION", section 2.1.3, the authors say:

2.1.3 The Extents Size for all Data Tablespaces Should be one of 160K, 5120K, or 160M

"These three extent sizes are all that you will ever need for tables and indexes.
Using more than these three extent sizes will
not significantly improve performance or space utilization but it will
complicate administration and potentially induce
fragmentation."

Should I avoid using extent sizes of 60MB and 100MB for my own tables, and
use one of 160K, 5120K, or 160M instead?
Is this a rule of thumb?
Am I wrong with my own extent sizes?

Can you explain this, please? Thanks.


Thanks in advance for your comments.

Regards.

and Tom said...

Man, I don't get it. "Facts is facts" -- the evidence is right in front of our noses. "Opinions, rumors" -- they don't count when you have facts. To all of the people described by "Some people think differently about this myth" -- they must prove it (if it is true, they should be able to do that, right?)

As for:

But I found the following paragraph on page 429 of the book "Oracle Performance Tuning"
by Mark Gurry and Peter Corrigan. ....

Well -- those guys are "wrong". You showed it to yourself. I can show it to you. They made a mistake, they were incorrect. They did the test wrong in some way (did they give you the test to reproduce? If not, question the test. In my book -- if I say something, I give you the code to try and see for yourself, every time).


Here, here is a simple test:

ops$tkyte@ORA717DEV.US.ORACLE.COM> create table t1 storage ( initial 528k ) tablespace dict_managed
2 as
3 select * from all_objects where rownum <= 4000;

Table created.

ops$tkyte@ORA717DEV.US.ORACLE.COM> create table t2 storage ( initial 1k next 1k pctincrease 0 ) tablespace dict_managed
2 as
3 select * from all_objects where rownum <= 4000;

Table created.

ops$tkyte@ORA717DEV.US.ORACLE.COM> select segment_name, extent_id, blocks
2 from user_extents
3 where segment_name like 'T_'
4 order by 1, 2;

SEGMENT_NAME EXTENT_ID BLOCKS
------------------------------ ---------- ----------
T1 0 70
T2 0 2
T2 1 1
T2 2 1
T2 3 1
T2 4 1
T2 5 1
T2 6 1
T2 7 1
T2 8 1
T2 9 1
T2 10 1
T2 11 1
T2 12 1
T2 13 1
T2 14 1
T2 15 1
T2 16 1
T2 17 1
T2 18 1
T2 19 1
T2 20 1
T2 21 1
T2 22 1
T2 23 1
T2 24 1
T2 25 1
T2 26 1
T2 27 1
T2 28 1
T2 29 1
T2 30 1
T2 31 1
T2 32 1
T2 33 1
T2 34 1
T2 35 1
T2 36 1
T2 37 1
T2 38 1
T2 39 1
T2 40 1
T2 41 1
T2 42 1
T2 43 1
T2 44 1
T2 45 1
T2 46 1
T2 47 1
T2 48 1
T2 49 1
T2 50 1
T2 51 1
T2 52 1
T2 53 1
T2 54 1
T2 55 1
T2 56 1

58 rows selected.

alter tablespace dict_managed offline;

alter tablespace dict_managed online;

alter session set sql_trace=true;
set termout off

select * from t1;
select * from t1;
select * from t1;
select * from t1;
select * from t1;

select * from t2;
select * from t2;
select * from t2;
select * from t2;
select * from t2;
set termout on

So, the results are:

select *
from
t1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 5 0.00 0.01 0 0 0 0
Execute 5 0.00 0.00 0 0 0 0
Fetch 1340 0.32 0.33 58 1615 60 20000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1350 0.32 0.34 58 1615 60 20000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 48

Rows Row Source Operation
------- ---------------------------------------------------
4000 TABLE ACCESS FULL T1

********************************************************************************

select *
from
t2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 5 0.00 0.00 0 0 0 0
Execute 5 0.00 0.00 0 0 0 0
Fetch 1340 0.34 0.37 58 1640 135 20000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1350 0.34 0.37 58 1640 135 20000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 48

Rows Row Source Operation
------- ---------------------------------------------------
4000 TABLE ACCESS FULL T2


Hmm, makes me suspicious of their results -- how about you? (BTW: if you run this over and over, sometimes T2 beats T1 and sometimes not -- meaning that any difference you see above is "noise", ignore it -- those results are the SAME.)


Your goal is to NOT worry about tables with fewer than, say, about 1,000 or so extents. It is NOT worth losing sleep over.


It is a BAD practice to store your sizable objects in a single extent (and no, I am not recommending you use 57 extents like I did above for such a trivial table -- 4,000 rows is TINY, that's too small to even test with in my opinion!)

(And as I said above, extents are LOGICALLY contiguous and physically ALL OVER the place. The odds of the blocks being right next to each other are about ZERO in the real world -- it doesn't matter anyway, since we have these "multi-user" systems, and anyone who even dreams the disk head will stay where they left it is wrong.)

As for that paper, they make a compelling argument; however, it predates locally managed tablespaces with the ability to FORCE an extent size. I would use the extent size that makes you happy. 30 extents/year sounds great to me. That should give you a good 30 or 40 years before you would be tempted to reorg it.
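
For illustration, a minimal sketch of forcing an extent size with a locally managed tablespace -- the tablespace name, datafile path, and sizes here are hypothetical, not anything from the question:

-- hypothetical LMT where every extent is forced to 60MB
create tablespace data_60m
  datafile '/u01/oradata/mydb/data_60m_01.dbf' size 1200m
  extent management local
  uniform size 60m;

-- any table created here gets 60MB extents, regardless of its STORAGE clause
create table big_table tablespace data_60m
as
select * from all_objects;

-- verify: every extent in the segment is the same size
select segment_name, count(*) extents, min(bytes) smallest, max(bytes) largest
  from user_extents
 where segment_name = 'BIG_TABLE'
 group by segment_name;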












Comments

Number of Extents

Ruben, October 19, 2001 - 7:05 pm UTC

Thanks for your worthwhile opinion; you always give us
very good examples. Thanks again.

elapsed less than cpu

A loyal reader, October 19, 2001 - 10:14 pm UTC

Tom:

I see this every now and again, and have wondered about it:

tkprof for table M (Full Table Scan)
------------------------------------

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.01 1 0 1 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 3943 0.95 0.89 1166 5038 20 59129
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3945 0.97 0.90 1167 5038 21 59129

Why is the elapsed time less than the cpu time?

Thanks!


Tom Kyte
October 20, 2001 - 7:25 am UTC

Because we are measuring things using a clock that ticks in 1/100ths of a second, but the things we are measuring take place in LESS time than that interval.

Each time we measure an event, therefore, we could be off by +/- 1/100th of a second. Over many observations, this error can add up; with the 3,943 fetch calls above, it easily accounts for the small gap between the CPU and elapsed totals. Basically, when measuring something that takes SO LITTLE time, the timings we see are +/- some value. It is like trying to measure something that takes less than a second using a wrist watch. Sometimes the event will span a second (from 12:01:01 to 12:01:02). Other times, it will appear to be instantaneous (from 12:01:03 to 12:01:03) -- the seconds didn't advance.

If you re-run, the elapsed time will exceed the CPU time; run again and they'll be exactly the same; do it again and elapsed will be less.

(That's why I said in my answer that even though the numbers differ slightly, that is just "noise" -- it's this cumulative error creeping in.)

re: elapsed less than cpu

A loyal reader, October 20, 2001 - 11:36 am UTC

Senior Moment! Senior Moment!

Totally missed ..."even though the numbers differ slightly,
that is just "noise" -- it's this cumulative error creeping in" the first time through. Sorry for the bother! That's what I get for reading 'AskTom' late at night.

Serial Reads and Writes

Bala, June 03, 2003 - 3:40 am UTC

Tom,
What do you mean by
"(and as I said above, extents are LOGICALLY contigous and physically ALL OVER
the place. The odds of the blocks being right next to eachother is about ZERO
in the real world -- doesn't matter anyway since we do have these "multi-user"
systems and anyone who even dreams the head will stay where they left it is
wrong)"

In any given extent the blocks are contiguous, aren't they? (I mean PHYSICALLY -- how could they be contiguous LOGICALLY?) If they are physically contiguous, then isn't reading contiguous blocks faster than reading scattered blocks?
I understand your point about multi-user systems and the head not staying where we left it, etc., but isn't this one of the main reasons Oracle has a redo log (serial writes...)?
All the statistics above prove your point...
I am still not convinced.
Correct me if my understanding is wrong.

Tom Kyte
June 03, 2003 - 7:36 am UTC

If they were physically contiguous, why would you have to defragment your disks?


Files are not contiguous on disk; they are data structures managed by a file system. As files are used, space is allocated, and the space is never 100% contiguous (unless you have a brand new disk, never erase any data, allocate files one at a time, and never grow or shrink files -- e.g., you work in a dust-free lab environment ;)


Files themselves, provided by the OS, are NOT contiguous things -- they are spread all over the place.

How about Redo logs

Bala, June 04, 2003 - 12:12 am UTC

Tom,
On page 94 you say:
"The logs are written to with Sequential I/O as compared to the scattered I/O DBWn must perform....This is one of the main reasons for having a LGWR and redo logs in the first place"
Don't the same problems hold there as well -- fragmentation, the O/S managing the files, etc. -- making the writes not sequential?


Tom Kyte
June 04, 2003 - 7:52 am UTC

You do a 4-row update.

LGWR says to the OS: "please write these 1,000 bytes of redo data out in this file."


DBWR would have to say "please write this block here, this block there, this block over there, this other block over here" (for the data), "this block over yonder, that block back there, this one over here, that other one down there" (for the index), and so on...


One sequential I/O vs. tons and tons of scattered block writes.

Understand

Bala, June 08, 2003 - 10:12 pm UTC

Clear. Period.

Is 1,000 extents the ROT for recreating a table

Glenn Amy, April 17, 2010 - 7:49 am UTC

Hi Tom,

'My' table has 64 million rows, 12.8GB of storage, and 7,237 extents (it was created with NEXT extent = 10MB). Approximately 6 million rows are added per year.

1) Using your ROT of 1,000 extents and 30 extents/year, does the table need to be dropped and recreated?

2) How large should the initial extent be? I'm calculating that the next extent should be greater than 40MB (for 30 extents/year), correct?

Specifics: Oracle 9.2.0.5, IBM p5-595, AIX 5.3

Tom Kyte
April 17, 2010 - 12:55 pm UTC

No -- unless and until you have identified some performance issue, what would be the point?

If you are using dictionary-managed tablespaces, it could take a really, really, really LONG (like really long) time to drop that table. The performance hit of lots of extents is not that they are there -- it is in dropping or adding them.


Unless you have some metrics that say "because we have lots of extents, this is going slow" (and you won't), leave it be. If it is dictionary managed, feel free to alter the next extent to be more aligned with your data load sizes to improve their performance, but that is about it.


I simply used the 30/40 years as hyperbole, not as a ROT; there are no ROTs here.
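
For example, a minimal sketch of aligning the next extent with the load sizes -- the table name and the 40MB figure are just illustrative, not from the question:

-- only really matters in a dictionary managed tablespace
alter table big_table storage ( next 40m pctincrease 0 );

-- confirm the new setting
select table_name, next_extent, pct_increase
  from user_tables
 where table_name = 'BIG_TABLE';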

How can i become a Real DBA

ABDUL TOUFIC, August 18, 2010 - 6:24 pm UTC

Hi Tom,

How are you?
I am new to Oracle and a fresher as a DBA.
My question is: how can I become a real DBA?
Which books should I read, and how should I start?
Please help me.
I hope I will get an answer from you.

Thanks
TOUFIC

Tom Kyte
August 19, 2010 - 1:58 am UTC

http://asktom.oracle.com/Misc/success.html

(Yes, the links to various forums on that page are old or dead, but do not fear -- there are lots of forums out there; just look around for them.)



INCREMENTAL EXTENT for this segment

Rajeshwaran, Jeyabal, May 03, 2011 - 12:50 pm UTC

Tom:

I was reading about logical storage segments in the product documentation:

http://download.oracle.com/docs/cd/E11882_01/server.112/e16508/logical.htm#CIHEFHDF

<quotes>
If the initial extent becomes full, and if more space is required, then the database automatically allocates an incremental extent for this segment
</quotes>

rajesh@ORA11GR2> exec print_table('select * from user_segments where segment_name =''T''');
     SEGMENT_NAME-------- T
     PARTITION_NAME------
     SEGMENT_TYPE-------- TABLE
     SEGMENT_SUBTYPE----- ASSM
     TABLESPACE_NAME----- DATA_11GR2
     BYTES--------------- 131072
     BLOCKS-------------- 16
     EXTENTS------------- 2
     INITIAL_EXTENT------ 65536
     NEXT_EXTENT--------- 1048576
     MIN_EXTENTS--------- 1
     MAX_EXTENTS--------- 2147483645
     MAX_SIZE------------ 2147483645
     RETENTION-----------
     MINRETENTION--------
     PCT_INCREASE--------
     FREELISTS-----------
     FREELIST_GROUPS-----
     BUFFER_POOL--------- DEFAULT
     FLASH_CACHE--------- DEFAULT
     CELL_FLASH_CACHE---- DEFAULT
--------------------------------------------

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14
rajesh@ORA11GR2>


Question:

1) Looking into USER_SEGMENTS, I know the INITIAL_EXTENT defined for table T is 8 blocks (assuming 1 block = 8KB). But how do I know what incremental (next) extent is defined for this segment?

Tom Kyte
May 04, 2011 - 1:40 pm UTC

It depends.

If you are using a locally managed tablespace with UNIFORM extents, then you know what the next extent will be -- it will be the same as the first and, in fact, the same as every other extent in that tablespace.


If you are using a system-managed (autoallocate) tablespace, you don't know what the next extent size will be -- we'll determine that. (It'll probably be 8 blocks for the first few extents and then grow in size from there.)
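
As a quick check, a sketch of the dictionary query that shows which case applies -- here using the DATA_11GR2 tablespace from the listing above:

select tablespace_name, extent_management, allocation_type
  from user_tablespaces
 where tablespace_name = 'DATA_11GR2';

-- ALLOCATION_TYPE = 'UNIFORM' : every extent is the same size
-- ALLOCATION_TYPE = 'SYSTEM'  : the database chooses each next extent size (autoallocate)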

How many extents?

David, September 01, 2015 - 6:52 pm UTC

Tom:
Thanks for the example. You are comparing tables with 1 versus 57 extents. How about 1 versus 350,000 extents as we have in my poorly performing environment? Number of extents has no effect whatsoever?
Chris Saxon
September 02, 2015 - 1:38 am UTC

For any discussion, whether it be extents, or indexes, or <insert anything here>, it's likely that boundary conditions will exist that need to be treated with care.

For example, if you had (say) 350,000 64KB-sized extents, and yet your server I/O infrastructure supported 1MB or larger multiblock read capability, then you *may* notice a difference in full scan performance. I say "may" because a lot of storage technologies cope with that anyway, i.e., you ask for 64K and the storage goes ahead and reads a megabyte anyway.

However, what is more critical than hypothesizing is to conduct a real experiment. For example, take a selection of tables consuming lots of extents in your current tablespaces, copy them to a new tablespace with fewer extents, and have a look at performance. Why guess when we can measure?
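
A rough sketch of that experiment -- the tablespace, datafile, and table names below are purely hypothetical placeholders:

-- new locally managed tablespace with much larger (uniform) extents
create tablespace data_big_extents
  datafile '/u01/oradata/mydb/data_big_01.dbf' size 10g
  extent management local
  uniform size 64m;

-- copy one of the many-extent tables into it
create table t_copy tablespace data_big_extents
as
select * from t_original;

-- compare extent counts
select segment_name, count(*) extents
  from user_extents
 where segment_name in ('T_ORIGINAL', 'T_COPY')
 group by segment_name;

-- then time identical full scans of both (e.g. with sql_trace and tkprof, as earlier in this thread)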

Where things might get more interesting with 350,000 extents is the performance of dictionary related queries. If you are regularly querying DBA_EXTENTS, then 350,000 entries might be an issue.

(And of course, if you've got 350,000 extents in tablespaces that are *not* locally managed, then that's cause for concern... but the main priority would be converting to local management first and then re-assessing.)

Hope this helps.
