
Question and Answer

Tom Kyte

Thanks for the question, G.

Asked: March 25, 2005 - 7:12 pm UTC

Last updated: July 16, 2005 - 8:25 pm UTC

Version: 9i

Viewed 1000+ times

You Asked

My understanding is that multiple block sizes are allowed starting in Oracle 9i. With that in mind, I was wondering if there was an optimal way to design a database.

For example, would it be better to use one block size (say 4K) for small tables such as reference files, another block size (say 8K) for large tables, another block size (say 16K or 32K) for LOBs, and another block size for indexes (say 32K)?

Would multiple block sizes make managing the instance or setting up the SGA overly complex? Are there any other considerations? Or is it better to just set up the database with one block size?

and Tom said...

Using multiple block sizes would make the database harder to manage. You now have to size the buffer cache for each block size individually (and a buffer cache is a pretty impressively robust thing -- it keeps that which you use and loses that which you don't). If you start setting up 4k, 8k, 16k caches -- you are basically saying to the system "yes, even though you could use that memory over in the 16k cache, the memory I'm not effectively using over here in the 4k cache -- YOU CANNOT HAVE IT".
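For illustration, a minimal sketch of what that per-block-size setup involves (the sizes and file names here are hypothetical, not recommendations, and this assumes the database default block size is 8k):

-- each non-default block size needs its own, separately sized buffer cache
alter system set db_4k_cache_size = 32m;
alter system set db_16k_cache_size = 64m;

-- only then can tablespaces be created in those block sizes
create tablespace small_4k datafile '/u01/oradata/small_4k_01.dbf' size 100m blocksize 4k;
create tablespace big_16k datafile '/u01/oradata/big_16k_01.dbf' size 500m blocksize 16k;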

You would be taking away the database's ability to use that memory to best effect. Further, if you have a special table or two, you have the keep and recycle pools for segregating them from the common population as well.
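The keep pool route for a special table or two looks roughly like this (the pool size and table name are made up):

-- carve out a keep pool, then assign the hot segment to it
alter system set db_keep_cache_size = 64m;
alter table hot_lookup storage (buffer_pool keep);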

I would not recommend using multiple block sizes in a single database. If you are trying to eke out a bit more performance via that mechanism -- you have hit the virtual end of the tuning road (there are so many other bits of low-hanging fruit that will pay off with rewards many orders of magnitude larger).


Comments

Block Size

A reader, March 26, 2005 - 5:43 pm UTC

Hi Tom,
Recently, the load time of our DW application (9.2.0.1, AIX 5.2L, 64-bit) increased a lot. On the unix side I observed huge I/O waits. When I informed the unix admin, he said everything is all right at their end but the Oracle parameters are wrong: in particular, the block size should be 32K instead of 16K, the maximum number of parallel processes should be 12 (3 CPUs * 4) instead of 25, etc. Though I asked how we can say 32K will be better than 16K, he is of the opinion that moving to 32K is probably the best solution to this problem, based on his experience with previous DW applications on Oracle. Further, he was very worried about the maximum number of parallel processes (though I am not using parallel processing at this stage). Could you please help (especially with your expertise in performance tuning) -- how should I approach this?
Thanks & Regards

Tom Kyte
March 26, 2005 - 6:07 pm UTC

Did they give you a technical reason why 32k would be better (given that the OS is probably not doing I/O 32k at a time)?

As for parallel, a factor of two comes into play: 2x the number of CPUs would be more normal.

You can "benchmark" it, but you'll need to create a 32k block size database. direct path load some stuff and see if there is significant difference (just a quickie scratch database)

Use of multiple block sizes as a workaround

Steven, July 15, 2005 - 2:48 pm UTC

I've identified a query in our production database (9.2, Solaris) that would benefit from an index.

Unfortunately the size of the column [varchar2(2000)] is too big for the blocksize [4096].

I have tried on our dev system creating a small 8k tablespace and cache just for this one index to work around the limitation of the current blocksize. My analysis is that the benefit of having this index is greater than the extra complexity of maintenance that this will cause.

Would you classify this workaround as a legitimate use of the multiple block size feature?

Tom Kyte
July 15, 2005 - 6:13 pm UTC

is the field really 2000 characters long?? and needs a b*tree index?

would a text index provide yet even more benefit?

yes, a separate tablespace for this would work around the issue, but I'm curious as to the need to index a 2000 character field using a b*tree.
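The workaround would look something like this (the cache size, datafile and index names are hypothetical); the Oracle Text alternative is sketched last and requires the Text option to be installed:

-- an 8k cache and tablespace just for the oversized index
alter system set db_8k_cache_size = 16m;
create tablespace idx_8k datafile '/u01/oradata/idx_8k_01.dbf' size 100m blocksize 8k;
create index problem_table_name_idx on problem_table(name) tablespace idx_8k;

-- or, a text index instead of a b*tree
create index problem_table_name_ctx on problem_table(name) indextype is ctxsys.context;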

Not all are 2000 characters long

Steven, July 15, 2005 - 7:07 pm UTC

select case when length(name) > 1000 then '1000+'
            when length(name) between 500 and 999 then '500-999'
            when length(name) between 250 and 499 then '250-499'
            else '<250'
       end as namelen,
       count(*)
  from problem_table
 group by case when length(name) > 1000 then '1000+'
               when length(name) between 500 and 999 then '500-999'
               when length(name) between 250 and 499 then '250-499'
               else '<250'
          end
 order by decode(namelen, '1000+', 2, '500-999', 3, '250-499', 4, '<250', 5);

NAMELEN   COUNT(*)
------- ----------
1000+          252
500-999       6387
250-499      25048
<250        382015


Unfortunately for me, this table/column is used for different things in different ways depending on how our product is implemented. It was apparently decided back then to just make it big enough for all possible uses. :-P

In this particular implementation it's being used to make a keyed match tens of thousands of times in a pl/sql loop. No index = a full table scan every time.

Additionally the 4k block size would not be my first choice for this database, but changing it is a medium- to long-term strategy.

Thanks for validating the method here. The index ends up taking about 50 MB, which we can handle easily enough on this system. I'll be recommending a saner approach to column sizing for our next version.

Tom Kyte
July 15, 2005 - 9:07 pm UTC

<quote>
In this particular implementation it's being used to make a keyed match tens of
thousands of times in a pl/sql loop
</quote>

well, there is your bug!!! databases were born to join, it is what they do best (borrowed from Tigger, of course)
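A hedged before-and-after sketch of that point (the table and column names are invented):

-- slow: one indexed lookup per iteration of the loop
declare
  l_name problem_table.name%type;
begin
  for r in (select match_key from driver_table) loop
    select p.name into l_name
      from problem_table p
     where p.name = r.match_key;
  end loop;
end;
/

-- better: one join, leaving the lookup strategy to the optimizer
select d.match_key, p.name
  from driver_table d, problem_table p
 where p.name = d.match_key;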



Using a small block size for updates

amihay gonen, July 16, 2005 - 4:46 pm UTC

Hi, I have a case with very big tables: 100M rows, row size ~350 bytes, being read via an index (a timestamp range scan). This situation calls for a 16k or even 32k block size.

But I also have a case (in the same database) of a table of about 100,000 rows, row size ~100 bytes, which is being updated all the time, randomly (access via PK). This table holds an ID as its PK and a score field which is updated by some processes.
IMHO, this case calls for a 4k block size to reduce the amount of DBWR writes to disk (because DBWR works in whole blocks).

What do you think?

Tom Kyte
July 16, 2005 - 4:53 pm UTC

Why does that situation call for 16k or even 32k block sizes? Please don't state something as if it were fact without saying why you believe it.

If I read something via an index scan, the blocks should maybe be small. What if the table isn't clustered very well by the index key you are ranging on? Then your range scan of 1,000 rows needs to hit 1,000 table blocks, and I'd rather have 1,000 8k blocks containing my rows than 1,000 16k blocks containing my data -- how about you?
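One hedged way to check that (the table name is hypothetical, and this assumes statistics have been gathered):

select i.index_name, i.clustering_factor, t.blocks, t.num_rows
  from user_indexes i, user_tables t
 where i.table_name = t.table_name
   and t.table_name = 'BIG_TABLE';

-- clustering_factor near blocks   => rows are well clustered by the key
-- clustering_factor near num_rows => nearly every row fetched is in a different block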

You need to say "why" you believe what you believe.

8k is a really nice size. A really really nice size.

why does that situation call for ...

amihay gonen, July 16, 2005 - 8:19 pm UTC

Hi, I take your point. I should have explained in more detail; my mistake ;).

Please let me add some details:

1. The big table stores CDRs (call detail records). There is an engine which processes those records; it does: select ... from big_table where id between :x1 and :x2 order by id;
The engine fetches all the records. This is the reason I was thinking of using a big block size.

2. The small table has a lot of updates, which means if I update one record (100 bytes) in an 8k block, I end up writing 8k minus 100 bytes more data to disk than needed.
This is the reason I was thinking of a small block size there.
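For scale: at ~100 bytes per row, an 8k block holds roughly 8192/100, i.e. about 80 rows, and DBWR batches dirty blocks rather than writing one block per update. A hedged dictionary query (table name invented, statistics assumed current) shows the actual packing:

select table_name, avg_row_len, num_rows, blocks,
       round(num_rows / blocks) as rows_per_block
  from user_tables
 where table_name = 'SCORE_TABLE';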

Tom Kyte
July 16, 2005 - 8:25 pm UTC

I don't see the need for "big blocks"; you'll really need to say a bit more than "I do where id between ? and ?". That alone does not mean 'big blocks'.

8k -- sounds so nice.

between small and big.

nice isn't it?


