
Question and Answer

Connor McDonald

Thanks for the question, Durga Prasad.

Asked: March 06, 2017 - 4:28 am UTC

Last updated: March 06, 2017 - 4:46 am UTC

Version: 11.2.0.4


You Asked

Hi Tom,

I am looking for guidance on when to choose a block size greater than 8K for an Oracle database. I have seen a few posts from the past indicating that 8K should do fine for most scenarios, but I have always been under the impression that for data warehouse applications 16K is a decent starting point, due to the ability to store more data in compressed blocks; with a proper db_file_multiblock_read_count, this block size should also allow for effective I/O.

I have also heard from a few of my friends that any block size greater than 8K would be inefficient due to operating system limitations on block fetches.

Can you please let me know your thoughts and advice in this regard?

Many Thanks

and Connor said...

There are so many variables in play, that the only way to really answer that question would be a benchmarking exercise.

What I mean by that is: if *I* say to you, "If you are on Linux version X and database version Y, and it's a warehouse-style database, then use 16k (or 32k, or 4k)", then I'm sure it would be possible to come up with

- a scenario where that is very good advice
- a scenario where that is very bad advice

What really matters is the *real* world, i.e., which block size works best in your system, and the only way to know that is to do some representative testing.
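As a rough sketch of such a test (the table, tablespace, and file names below are placeholders, and the cache/datafile sizes are illustrative only), one approach is to create a tablespace with a non-default block size alongside the default, load the same data into both, and time representative queries against each copy:

```sql
-- A non-default block size needs its own buffer cache component
-- before a tablespace with that block size can be created.
ALTER SYSTEM SET db_16k_cache_size = 256M;

-- Hypothetical tablespace using a 16K block size
CREATE TABLESPACE demo_16k
  DATAFILE '/u01/oradata/mydb/demo_16k01.dbf' SIZE 2G
  BLOCKSIZE 16K;

-- Copy a representative table into the default (8K) and 16K tablespaces
CREATE TABLE sales_8k  TABLESPACE users    AS SELECT * FROM sales;
CREATE TABLE sales_16k TABLESPACE demo_16k AS SELECT * FROM sales;

-- Then time representative workload queries against each copy,
-- e.g. in SQL*Plus:
SET TIMING ON
SELECT /*+ full(s) */ COUNT(*) FROM sales_8k s;
SELECT /*+ full(s) */ COUNT(*) FROM sales_16k s;
```

A pair of full-scan counts is only a starting point; the comparison is only meaningful if you run your *actual* queries, several times each, under realistic load.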

I will add, however, that in my experience I've rarely seen a non-8k block size outperform an 8k block size by a margin significant enough to make me consider switching away from the default.

And the reason I like defaults is that if I encounter a problem, I am far more likely to have hit a problem that someone else has already hit, and thus more likely to get better support, a better chance of existing patches, etc.
