Optimal PCTFREE, PCTUSED and INITTRANS settings
Connor McDonald

Thanks for the question, Srikanth.

Asked: April 15, 2010 - 1:57 am UTC

Last updated: June 09, 2017 - 2:27 am UTC

Version: 10.2.0.4.3


You Asked

Hello Tom

We have an online transaction database with huge volumes of inserts/updates.

Initially we set PCTFREE 40, PCTUSED 50 and INITRANS 100 on all the tables in the database schema. However, we now know that some tables have a lot of inserts/updates, whereas other tables only receive inserts. So we want to re-evaluate the PCTFREE, PCTUSED and INITTRANS settings and set them to optimal values on a table-by-table basis.

The question is: what is the best way to go about this? How can we identify the optimal PCTFREE, PCTUSED and INITTRANS settings for a table/index? I looked at various V$ and data dictionary views, but could not find any that shows the "actual" utilization of PCTFREE and PCTUSED rather than the "defined" values.

Thanks and Regards
Srikanth

and Tom said...

if you have a table that only undergoes inserts - use pctfree 0, you need to reserve NOTHING for updates.

if you have a table whose rows double in size after insert - pctfree 50 would make sense - to reserve room for the updates.

you need to understand how your data grows over time. If you insert it and update 10% of the rows and make them 20% larger - you would need to understand

a) how many rows/block you get on average
b) what 20% of one of those rows means block percentage wise
c) use that as your pctfree
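The steps above can be sketched as arithmetic. This is a minimal sketch only - the block size, average row length and update/growth rates below are hypothetical figures, not from the question:

```python
import math

# a) how many rows per block, on average (ignoring block-header overhead)
block_size = 8192        # hypothetical 8k block
avg_row_len = 800        # hypothetical average row length in bytes
rows_per_block = block_size // avg_row_len                  # 10 rows/block

# b) what the expected growth means, block-percentage-wise
update_fraction = 0.10   # 10% of the rows get updated...
growth_fraction = 0.20   # ...and each grows by 20%
growth_bytes = avg_row_len * growth_fraction                # 160 bytes per updated row
rows_updated = math.ceil(rows_per_block * update_fraction)  # ~1 row per block

# c) reserve that much free space as pctfree
pctfree = math.ceil(100 * rows_updated * growth_bytes / block_size)
print(rows_per_block, pctfree)   # 10 rows/block -> pctfree of about 2
```

Note that this per-block average only works when rows are small relative to the block; the exchange with Franck Pachot further down this page shows why whole-row granularity matters when rows are large.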


and just use automatic segment space management (assm) so you don't have to worry about pctused, freelists, freelist groups.

initrans would be based on what you think your maximum concurrent modification rate on a block would be. If you have an insert only table in ASSM - don't sweat it, just let it default. If you tend to have a small table that you update like crazy - then think about it.


Comments

Srikanth Potu, April 16, 2010 - 2:39 am UTC

Hello Tom

Thanks for the response. In some tables, we update almost all of the rows multiple times after insertion. However, finding how much the rows actually grow with each update seems to be a tough job.

I was just wondering: is there a way I can get this information from the Oracle stats / SYS tables? That is, for a table/index, how much of the allocated PCTFREE is actually used by subsequent updates?

Thanks and Regards
Srikanth

Tom Kyte
April 16, 2010 - 10:45 am UTC


You could only analyze the table and list chained rows to see if you suffer from row migration (look at the rows that get listed; if a row could fit on a single block, it is migrated rather than truly chained)


If you have a large percentage of the rows being migrated, you should set pctfree higher for the future (it'll only affect blocks going forward, it won't 'fix' anything that already exists)

Then you could look at your block utilization, to see if most of your blocks are mostly empty - if they are, that would indicate a lower pctfree might be called for.

Please clarify for pctfree

Franck Pachot, April 19, 2010 - 4:51 am UTC

Tom,

Please, can you clarify, maybe with an example, the following:

<quote>
If you insert it and update 10% of the rows and make them 20% larger - you would need to understand
a) how many rows/block you get on average
b) what 20% of one of those rows means block percentage wise
c) use that as your pctfree
</quote>

Because I do not get how the number of rows/block is involved here.
I thought that if 10% of the rows increase by 20%, then on average the used space in the block will increase by 2%.

Thanks,
Franck.
Tom Kyte
April 19, 2010 - 9:02 am UTC

2% of a block isn't really meaningful here.

You need to understand how many rows you can get per block.


I can get seven 1KB rows on an 8k block. If I was to update 10% of them to be 20% larger - does 2% count or is the math more like:

- about 1 row on a block will grow
- if it grows by 20% - then I can only fit six rows on that block
- I'd better set up pctfree so that only six rows per block on average fit.


2% would be misleading because the granularity of your rows is so "large"
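That worked example can be written out numerically. A sketch only - the ~1k of block overhead is my assumption, chosen so that seven 1KB rows fit in an 8k block as in the example:

```python
import math

block_size = 8192
header = 1024            # assumed block/row overhead, so seven 1KB rows fit
row_len = 1024
rows_fit = (block_size - header) // row_len      # seven rows per block

# about one row per block (10% of seven) will grow by 20%
rows_updated = math.ceil(rows_fit * 0.10)        # 1 row
growth = row_len * 0.20                          # ~205 bytes, i.e. ~2.5% of the block

# the whole-row view: plan for one fewer row per block,
# i.e. reserve roughly one full row's worth of space
safe_rows = rows_fit - rows_updated              # 6 rows
pctfree = math.ceil(100 * rows_updated * row_len / block_size)  # 13
print(rows_fit, safe_rows, pctfree)              # 7 6 13
```

Reserving a whole row's worth (~12-13%) rather than the ~2% average is one way to read the answer; the exact safety margin is a judgment call.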

Would it hurt (negatively impact performance) to have a high initrans

Uday, September 25, 2012 - 1:02 am UTC

Very useful information as always!

This is in reference to
"initrans would be based on what you think your maximum concurrent modification rate on a block would be. If you have an insert only table in ASSM - don't sweat it, just let it default"

We have an application that runs certain procedures (doing several inserts, updates and deletes) in 64 to 128 concurrent sessions. This usually happens during a batch we run at the end of the day, every day.

Although all tables in the database (version 11.2.0.3) are in ASSM tablespaces, it was observed that these concurrent sessions spend a lot of time on the wait event "enq: TX - allocate ITL entry". For this we increased the initrans on all the involved tables to 128, and it paid off.

In the coming days, however, it is planned to introduce/enhance some more procedures (working on a different set of tables) and run them in 64 to 128 concurrent sessions. So, to avoid the contention we faced earlier, we plan to set initrans to 128 on all the tables in the application schema. But we are not sure if this will negatively impact access to those tables that are not currently accessed concurrently.

Your inputs on this are much appreciated.
Thanks.
Tom Kyte
September 27, 2012 - 8:17 am UTC

you could well see physical I/Os go up, and more logical I/Os.

you'll have reserved 3k of your probable 8k block for ITLs. So, almost 50% of your block will be used for block management - not data. Your tables will be larger, you'll be able to cache less of your data in the cache, and you'll do more physical and logical I/Os.
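The arithmetic behind that figure, sketched out (the 23-byte ITL entry size appears in the 128*23 calculation later in this thread; the 10% pctfree is the default):

```python
block_size = 8192
itl_entry = 23           # bytes per ITL slot
initrans = 128

itl_bytes = initrans * itl_entry                 # 2944 bytes, ~2.9k
itl_pct = 100 * itl_bytes / block_size           # ~36% of the block

# add the default 10% pctfree reservation (plus the fixed block header,
# ignored here) and you are close to half the block unavailable
# for newly inserted data
pctfree_bytes = block_size * 10 // 100           # 819 bytes
data_space = block_size - itl_bytes - pctfree_bytes   # ~4.4k left for rows
print(itl_bytes, round(itl_pct), data_space)     # 2944 36 4429
```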

so sad to hear - you took a bunch of old-fashioned slow-by-slow processes and ran them in many threads instead of taking advantage of database parallelism (which would be contention free). You turned a batch process (which should be a big bulk operation) into an OLTP application!!!!!

Initrans too high?

David Penington, September 28, 2012 - 1:37 am UTC

Uday & Tom, is there any point in having INITRANS higher than the number of rows per block? My off-the-cuff thinking is that there can't be more than one outstanding transaction per row.

Committing some of the work at an intermediate point (which you should be able to do since you can already divide the work into 128 independent granules) might also reduce the number of outstanding transactions per block.

Overall I agree with Tom - do bulk operations in SQL and it will be enormously faster and more efficient.
Tom Kyte
September 28, 2012 - 7:12 am UTC

is there any point in having INITRANS higher than the number of
rows per block ?


this is true, it would not be useful to have more initrans than rows per block. for most normal index blocks, the number of entries would usually be more than 128; for table blocks, probably less. good point.

be careful with that commit though - the more you commit - the more undo, redo, log file sync waits, etc you will see as well. and make sure your process is restartable so when it fails (when, not if), it can be safely restarted and picks up where it left off...

initrans 128 = 3k of block space, too high indeed.

Uday, September 29, 2012 - 9:49 am UTC

Thanks for taking time to shed more light on this Tom.

The application the database supports is, actually, an OLTP application. To be specific, it's a mix of OLTP and OLAP.

The load is OLTP during business hours. After business hours, however, an end-of-day batch has to be run to consolidate that day's activity and make the application available for business the next day.

So this leads to the application having several tables common to both types of load.

You got the 8K block size spot on.

This
<Quote>
you'll have reserved 3k of your probable 8k block for ITLs
</Quote>
and the question from David,
raise a few more questions for me.

1. I'm curious - how did you calculate this figure of 3k?
2. All the application tables have a default PCTFREE of 10 or more. So the usable block space, for inserts, would come down to less than 4K. Please let me know if this is wrong.
3. Some of the common tables have an avg. row length of 6-7K, so from the calculation, can I consider them as candidates to be moved to a tablespace with a block size greater than 8K?
4. <Quote>
for most normal indexes, this would be more than 128 usually </Quote>
So is it OK to maintain an initrans of 128 for indexes?

Bulk operations in SQL are currently in use in most of the procedures; however, there are still quite a few that could get there. Unfortunately, that is something that can happen only with the next release of the application, the date of which is unknown.
Tom Kyte
October 08, 2012 - 3:02 pm UTC

a batch has to be run to make
the application available for business the next day.


sorry, but your "batch" isn't batch, it is OLTP. Batch is "single thing, using parallel query". You have 128 things going concurrently - bigger than MOST OLTP systems!!! you have concurrency-based waits in your future forever.

128 parallel client threads is not batch, that is OLTP. plain and simple. as I wrote before:

so sad to hear - you took a bunch of old-fashioned slow-by-slow processes and ran them in many threads instead of taking advantage of database parallelism (which would be contention free). You turned a batch process (which should be a big bulk operation) into an OLTP application!!!!!


1) each ITL is 23 bytes

ops$tkyte%ORA11GR2> select 128*23 from dual;

    128*23
----------
      2944



2) that is correct, you would have less than 4k for your data on each block probably.

3) if they have an average row length of 6-7k, why would you even consider increasing initrans????? you get one row per block in the first place, there would be no concurrency on this block by definition. it doesn't make sense to have lots of ITLs on a block that has ONE ROW.

4) you tell me, do you want your indexes to be 50% dead weight? do you have the cache to support caching 2x the data???


Bulk operations in SQL are currently in use in most of the
procedures


i disagree, you have 128 threads! that is not bulky, that is OLTP. I don't care if the individual programs do things 100 rows at a time - they aren't using bulky single SQL statements - just lots and lots of tiny SQLs with an array bound to them.

Relationship between PCTFREE and INITRANS

Michael Donnelly, October 03, 2012 - 2:54 pm UTC

Tom,

We believe the number of ITLs on a table or index block can dynamically grow in a range of INITRANS to 255, and that the value of setting INITRANS is to pre-allocate the space for a minimum number. We also believe that a block can be full, and not have room to reach 255 transactions. If this is the case, is the space in the block into which the transaction count can grow the same space reserved by PCTFREE, or does this have to be some contiguous header space? In other words, if we think that INITRANS 32 is *usually* enough, but want to avoid at all costs the inability to grow to 255, can/should PCTFREE be used for this purpose? (The rows in this table are not updated, and the indexes are all on fixed-length data types, so we would otherwise have PCTFREE 0.)
Tom Kyte
October 09, 2012 - 12:00 pm UTC

pctfree only counts on an index block during the create or rebuild - it means nothing after that - index blocks always fill up to 100% full before they split.


pctfree cannot be used for that purpose in an index for sure.


pctfree only counts during inserts into a table, never updates. updates take the blocks to 100%

so pctfree cannot be used for that purpose in a table either in general.


but if you are just inserting into this table, just use ASSM or multiple freelist groups if using manual segment space management and the inserts will be spread out over the table - not just into the last block. That is how to avoid this issue - not ITLs on the tables really.


(actually, if this table is just being inserted into in batch - why not parallel direct path loads???)



Would larger PCTFREE help updates?

Brian, December 26, 2012 - 1:10 am UTC

Hi Tom,

I’m a bit confused why one needs to reserve space for UPDATEs not INSERTs as you said at the top of this thread – my naïve intuition is telling me the opposite…

We run an OLTP system on Oracle 10g. One table takes a huge number of INSERTs each day, and each new row also gets updated at least once - so it's a lot of updates. The update is on a numeric column, which is not part of a key or any index. ADDM does, however, advise a higher PCTFREE (than the default of 10):

Action Consider rebuilding the TABLE "XXXX.ORDERS" with object id 127216 using a higher value for PCTFREE.

Rationale The UPDATE statement with SQL_ID "d27jma6373z2k" was significantly affected by "buffer busy" waits.

So it comes back to my previous confusion: how would a higher PCTFREE help in this case? I don't see any growth in row size since the update is to a numeric column.

Also, does it make sense to disable ROW MOVEMENT on this table as INSERTs and the above UPDATEs are the only operations on this table? We never delete.

Merry Christmas!

Brian
Tom Kyte
January 04, 2013 - 10:56 am UTC

I’m a bit confused why one needs to reserve space for UPDATEs not INSERTs

well, inserts do not cause an existing row on an existing block to GROW - but an update does (or at least it could)


pctfree tells us when to stop inserting into a block - by default - when the block only has 10% free space - we stop inserting into it and move onto a different block. that 10% free space is hopefully big enough to hold any growth in the rows due to future updates to the existing rows on that block.


A higher pctfree would have two effects for you:

a) more space would be reserved for future updates. Updating a number can grow the size of the number (remember - numbers in Oracle are between 0 and 22 bytes in length - the number 999,999 takes more space to store than 1,000,000 does, since it is the number of significant digits we have to worry about - not the magnitude of the number itself). So, if you are updating a NULL number to a non-null value - you are definitely causing the row to "grow", and if you get say 150 rows per block and have 10% free - you have about 700 bytes free - if each of those numbers needs 4 or 5 bytes - you might cause rows to migrate after they are updated (they won't fit on the block anymore)
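A quick sketch of the arithmetic in (a), using the 150 rows/block and 5 bytes of growth per updated number as the illustrative figures from the answer:

```python
block_size = 8192
pctfree = 10
rows_per_block = 150

# roughly 10% of the block is reserved; after the fixed block header
# it is "about 700 bytes" of genuinely usable free space
free_bytes = block_size * pctfree // 100         # 819 bytes reserved

# if every row's NULL number is eventually updated to a 5-byte value...
growth_per_row = 5
needed = rows_per_block * growth_per_row         # 750 bytes of growth
print(free_bytes, needed, needed > 700)          # 819 750 True
```

So with this many rows per block, a pctfree of 10 sits right at the edge, and migrated rows become likely.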

b) there would be fewer rows per block - decreasing the concurrency on the block when the updates take place, reducing the buffer busy waits for the update. Only one transaction at a time can have the block in current mode, and you need the block in current mode in order to change the bits and bytes on it. So, if two transactions try to update different rows on the same block - one will have to wait BRIEFLY with a buffer busy wait while the first transaction changes the bits and bytes on the block. It will not have to wait for a commit - it just needs to wait for the first transaction to make its changes and put the modified block back into the cache - then it can make its modifications.




row movement only affects things like:

a) updating a partition key, if enabled - you can update the partition key in a manner that causes the row to move from one partition to another (and hence changes its rowid)

b) reorganizing a table with the shrink space command


It has nothing to do at all with inserts, updates or deletes.

Smith Kumar, June 08, 2017 - 4:29 pm UTC

Do tables and indexes share the same pctfree and pctused parameter concepts, or not?

How can I practically implement the pctused and pctfree parameters at table and index level?
Connor McDonald
June 09, 2017 - 2:27 am UTC

pctused is *less* relevant nowadays with ASSM tablespaces (which is the default).

But pctfree is still very relevant.

By default, we reserve 10% of space for subsequent updates to rows in a table, ie, room for them to grow.

A *lot* of tables *never* have updates to rows - they only insert, or they only insert/delete. So that 10% is never used... so your table is 10% larger than it needs to be.

If your table is 1TB, then that's an extra 100G of dead space - space you could use elsewhere - and it also means large scanning queries have roughly 10% more blocks to read.
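The dead-space arithmetic, as a sketch (the 1TB table size is the illustrative figure from the answer):

```python
table_size_gb = 1024     # a 1TB table
pctfree = 10

# space reserved for row growth that, on a never-updated table,
# is simply never used
dead_gb = table_size_gb * pctfree / 100          # ~102 GB of dead space
print(round(dead_gb))                            # 102
```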

So if you have tables that are never/rarely updated, or the updates will never make the rows larger (eg, you are changing a status column from one value to another of the same size), then you could look at (say) pctfree of 2 or 1.

If the data is purely for archival, then perhaps pctfree 0 (plus compression). You'll see that when you compress data (using basic compression), we use pctfree 0 anyway because we assume that's your intent.


For an index, pctfree is applied at build/rebuild time, not during normal operations. Similar guidelines apply in terms of reduced pctfree when the data is ready for archival (ie, is now read only).