Implications of moving from 16k block size to 8k

Question and Answer

Connor McDonald

Thanks for the question.

Asked: September 30, 2015 - 5:08 am UTC

Last updated: October 01, 2015 - 3:17 am UTC

Version: 11.2

Viewed 1000+ times

You Asked

What are the implications, side effects, or concerns, if any, of moving from a 16k block size to an 8k block size from one db to another?

We are using Data Pump (schema-level expdp) to move from the old db, which has a 16k block size, to a new db with an 8k block size and the same character set.

Will the row length increase or decrease?

What query can we run to find the row length of a table's rows, so we can compare the old and new db?

Will the CBO behave differently, will statistics be affected, or will query response change just because of this block size change?

Is there a checklist of adjustments to be made to init.ora params to facilitate this? Memory parameter adjustments?

Do we adjust the SGA to double the size, now that the new block size is 8k?

and Connor said...

You should not need to double the SGA, because (I assume) you are currently specifying your SGA parameters in bytes, not blocks.

There is a fixed overhead in each block, so the *percentage* of space available per block to store data is slightly lower. Based on that you might want to increase your SGA *slightly*.

Obviously, any memory allocations you may have explicitly made for the 16k block size will need to be mapped over to 8k allocations.
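For example, if the old database carried an explicit 16k buffer pool, that memory belongs in the default (8k) pool in the new database. A sketch only - the parameter names (db_16k_cache_size, db_cache_size) are real, but the sizes here are made up, not recommendations:

```sql
-- Sketch: retire any explicit 16k pool and fold the memory into the
-- default pool (which uses the database's 8k block size).
alter system set db_16k_cache_size = 0   scope=spfile;
alter system set db_cache_size     = 4g  scope=spfile;  -- illustrative size
```

Both take effect at the next restart with scope=spfile.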

Row lengths will be unchanged - a row (in simplest terms) is:

length of col1, data for col1, length of col2, data for col2, etc etc
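To answer the "what query" part: once statistics are gathered, the avg_row_len column in the dictionary is the easiest thing to compare across the two databases. A sketch, with illustrative table and column names:

```sql
-- Easiest: compare the dictionary statistic in each database
-- (gather stats first with dbms_stats.gather_table_stats).
select table_name, num_rows, avg_row_len, blocks
from   user_tables
where  table_name = 'EMP';          -- illustrative table name

-- Or measure directly with vsize(); list your actual columns:
select avg( nvl(vsize(empno),0)
          + nvl(vsize(ename),0)
          + nvl(vsize(sal),0) ) as avg_row_bytes
from   emp;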

If you have any compressed data, then it would be worth experimenting with compression rates in 16k versus 8k. Similarly, if you are doing index leading column compression, then you'd want to do some experimentation there.
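A simple way to run that experiment is to load the same data compressed in each database and compare the space used. A sketch - the object names are illustrative:

```sql
-- Basic table compression; names are illustrative.
create table t_comp compress
as select * from big_table;

select segment_name, blocks, round(bytes/1024/1024) as mb
from   user_segments
where  segment_name = 'T_COMP';
```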

But I suppose more important than *anything* is performance benchmarking - you are about to change a *lot* of optimizer statistics, i.e., the number of table blocks, rows per block, leaf blocks in indexes, index levels, etc... You really want to have confidence that your SQL all performs as you expect before you jump in and migrate.
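Those statistics are easy to snapshot on both sides so you can diff them, e.g.:

```sql
-- Capture the optimizer's view of your segments in each database
-- and compare the output.
select table_name, num_rows, blocks
from   user_tables
order  by table_name;

select index_name, blevel, leaf_blocks, clustering_factor
from   user_indexes
order  by index_name;
```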

Hope this helps


Comments

What about I/O

A reader, September 30, 2015 - 12:15 pm UTC

Thanks for the response.

How are I/O operations affected?

How do we test that?
Connor McDonald
October 01, 2015 - 12:20 am UTC

IO will predominantly be

- multiblock/direct reads
- single block reads

For multiblock reads, we are reading up to 1 megabyte of blocks at a time anyway. If you've explicitly set db_file_multiblock_read_count, you might want to consider unsetting it, and letting Oracle determine the optimal value.
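Unsetting it is just a reset of the parameter, e.g.:

```sql
-- Remove the explicit setting so Oracle computes it automatically;
-- with scope=spfile this takes effect at the next restart.
alter system reset db_file_multiblock_read_count scope=spfile sid='*';
```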

For single block reads, these are most typically index access reads - which comes back to my comments about performance benchmarking. Your indexes will be quite different - smaller branch blocks, smaller leaf blocks. You really want to test your SQL carefully to make sure the performance characteristics are unchanged (or better).
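One way to see exactly how different the indexes are is to validate their structure in each database and compare the results. A sketch (the index name is illustrative, and note that validate structure takes a lock, so do this in your test environment, not production):

```sql
-- Populates the index_stats view for the current session.
analyze index emp_pk validate structure;

select name, height, lf_blks, br_blks, pct_used
from   index_stats;
```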

I/O testing script

A reader, October 01, 2015 - 2:18 am UTC

Can you please give an example script of how to test I/O performance?

Should I just insert into a table and then check I/O performance - how, using what SQL query?

Thanks for all your valuable inputs
Connor McDonald
October 01, 2015 - 3:17 am UTC

No, I'm saying test your *application*.

You run your application, or at least the critical parts of it with your existing block size, and then you run your application (in your testing environment) under your new block size.

You then compare results, make any fixes needed, and have a good level of confidence before doing it on your live production system.
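At the simplest level, you can wrap one of your application's critical queries with autotrace in SQL*Plus in each database and compare the logical and physical I/O figures. A sketch - the query shown is just a placeholder for whatever your application actually runs:

```sql
set autotrace traceonly statistics
-- run one of your application's critical queries here, e.g.:
select count(*) from big_table;   -- illustrative
set autotrace off
```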

More to Explore

Data Pump

All of the database utilities including Data Pump are explained in the Utilities guide.