db_block_size and setting tablespace block size

Question and Answer

Connor McDonald

Thanks for the question, Taral.

Asked: February 03, 2009 - 4:37 pm UTC

Last updated: September 02, 2021 - 11:49 am UTC

Version: 10.2.0.3

Viewed 50K+ times!

You Asked

Hello Sir,

Is there any difference between setting db_block_size and setting a tablespace block size?

For example, say I do not create my database with a 16k block size (db_block_size), and later we find we need 16k blocks, so we create one tablespace with a 16k block size and put our object inside it.

Can the performance (block read/write) of db_block_size=16k be achieved by using a 16k tablespace with a db_block_size of 8k?

Also, how will db_file_multiblock_read_count react - what would be the effect on it?

and Tom said...

why would you "need" a different blocksize?


http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:44779427427730


I would not recommend going into a system planning on using multiple blocksizes - they were invented for one thing: to transport data from a transactional system to a warehouse (where you might be going from 8k OLTP to a 16/32k warehouse), and to be used only to extract/transform/load the OLTP data.


(multiblock read count is set automatically in 10g, let it set it - we'll always max out the IO size, we'll cost queries based on history - how many blocks we actually get to read, but we'll always try to use the largest IO size available on your platform)
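
For anyone who does go down that road, the mechanics are worth knowing: a tablespace with a nonstandard block size can only be created after a buffer cache for that block size has been configured. A minimal sketch - the file name and sizes here are made up:

alter system set db_16k_cache_size = 64m;

create tablespace ts_16k
  datafile '/u01/oradata/ts_16k_01.dbf' size 100m
  blocksize 16k;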

Rating

  (17 ratings)


Comments

db_file_multiblock_read_count

Taral Desai, February 03, 2009 - 5:11 pm UTC

Can you please explain this more with some example if possible

"(multiblock read count is set automatically in 10g, let it set it - we'll always max out the IO size, we'll cost queries based on history - how many blocks we actually get to read, but we'll always try to use the largest IO size available on your platform)"

Tom Kyte
February 03, 2009 - 5:31 pm UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14214/chapter1.htm#FEATURENO05506

you can read about it.

basically - we will issue multiblock IOs at the limit of the OS.
We watch to see historically however how large of an IO we actually were able to do.

We use the historical number to cost a full scan.



The old way - you said "multiblock read = 128". So, we would cost a query assuming that we would read 128 blocks at a time. Well, in reality we never read 128 blocks at a time, we read sometimes far far less.


Say we are going to read blocks 1 - 128. In the cache, blocks 5, 10, 15, .... and so on (every fifth block) are already there. Therefore we cannot read block 5, or 10, or ... from disk - we have to use the version in the cache to start with.

So, instead of 128 blocks in a single IO, we read 1-4, 6-9, ..... and so on.

The true multiblock read count = 4, that is what 10g would use to cost the query, but it would always *try* to read whatever the OS supported.
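
A quick way to see what a given system resolved this to (a sketch; the output varies by platform and version):

show parameter db_file_multiblock_read_count

-- left unset in 10g, this typically resolves to the largest IO the platform
-- supports divided by the block size (e.g. 1MB / 8k = 128); the optimizer
-- then costs full scans using the smaller, historically observed value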

Need Information

A reader, February 03, 2009 - 9:32 pm UTC

A few doubts:

1. So db_file_multiblock_read_count is calculated automatically and we don't have to set it - Oracle will decide on its own. Or does it require collecting system stats to help Oracle decide this number automagically?

2. If setting it is not useful at all, since Oracle decides on its own, why do we have this parameter - can't we depreciate it? I may be misunderstanding something; please clear up my doubts.
Tom Kyte
February 04, 2009 - 10:24 am UTC

1) gathering system statistics can help set that, yes. One of the things recorded in aux_stats$ is MBRC.

2) I de-preciate it all of the time :) I do think in some future release it will be fully deprecated but for now, you have the choice of 'auto tune or do it yourself'.

Just like the addition of true undo tablespaces and automatic undo management in 9i did not remove the ability to create and configure your own rollback segments.

And the addition of automatic pga memory management did not prevent you from explicitly setting pga memory parameters...

And automatic SGA memory management in 10g...
And automatic total memory management in 11g...
and so on and so on.
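
To make point 1 concrete, here is a sketch of gathering workload system statistics and checking the recorded MBRC (assuming you can query sys.aux_stats$):

-- start the capture, let a representative workload run, then stop
exec dbms_stats.gather_system_stats('START')
-- ... representative workload runs here ...
exec dbms_stats.gather_system_stats('STOP')

-- MBRC is recorded alongside the other workload statistics
select pname, pval1
  from sys.aux_stats$
 where sname = 'SYSSTATS_MAIN'
   and pname in ('MBRC', 'SREADTIM', 'MREADTIM');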

Oracle Block Size

A reader, February 04, 2009 - 5:46 am UTC

Hi Tom,

Going through your suggestions about the benefits of a single instance over multiple instances: we have an 8K block size for our Enterprise Data Warehouse, and we are discussing whether, if we create the data mart in the same instance, there will be any detrimental performance impact compared to creating a new instance with a 16/32K block size. We are weighing the advantages of a single instance against not having a large block size for the data mart.

Regards,
Tom Kyte
February 04, 2009 - 11:30 am UTC

just use 8k all around.


What is the difference between a 'warehouse' and a 'data mart' in your mind? (a data mart to me is just a cute little name for a warehouse that has been broken up into sometimes smaller chunks - it is still a warehouse/reporting system)


DB Block Size

A reader, February 04, 2009 - 2:08 pm UTC

Hi Tom,

Thanks for the advice.
Unfortunately, the Enterprise Data Warehouse is not performing as expected. I understand that there could be many things to look at, but block size just came up as we are in the process of building the new data mart.

Regards,
Tom Kyte
February 04, 2009 - 3:27 pm UTC

was it designed to perform as expected...

DB block size

A reader, February 04, 2009 - 3:37 pm UTC

Hi Tom,

I just joined, and came to know that performance is a big issue in the EDW, so we are putting a lot of thought into the design of the DM - DB block size being one of those considerations!

Thanks
Tom Kyte
February 04, 2009 - 4:14 pm UTC

my point is - picking a block size isn't design.

thinking about your physical schema is part of design.

Sounds like they are going to start throwing these boutique, cute little data marts out as a way to improve performance - what it will most likely end up doing is increasing complexity.

Data Mart

A reader, February 05, 2009 - 3:42 am UTC

Hi Tom,

The EDW is in 3NF, and the DM design is based upon a dimensional model - more of an EUL (End User Layer). This is the time to make a decision about block size...

Regards,
Tom Kyte
February 05, 2009 - 10:43 am UTC

why put it into a separate database?

A data warehouse in 3rd normal form is not a data warehouse, it sounds more like an operational data store.

Again, this goes back to "design, physical schema"

block size - don't go there, physical schema design, design to actually answer the questions you have - go there first.


DB Block Size

A reader, February 05, 2009 - 1:19 pm UTC

Hi Tom,

Thanks for your help. Just one clarification - once we do the design for performance (as per your book), is block size something that is too late to change mid-way through or after development/test? Or am I missing something and it is not a big thing in the whole scheme of things?

Regards,
Tom Kyte
February 05, 2009 - 2:22 pm UTC

I am of the opinion that it will not make a huge difference one way or the other and 8k is good.

A reader, May 04, 2009 - 10:54 pm UTC

Tom
My block size is 4096 and my db_32k_cache_size=67108864.
I want to create a tablespace with a 32K block size and rebuild all indexes into this tablespace. These are frequently used indexes. Do you think there is any benefit to using a 32K block size in this scenario?
Tom Kyte
May 09, 2009 - 10:45 am UTC

why?

before you do something, you should have an identified goal in mind

so, tell us all - WHY would you do this? Don't say "cause I read on some website it makes things super fast" (it doesn't), tell us WHY you think YOU would derive benefit from this?


I do not think there are, in general, benefits to be gained from using multiple block size tablespaces - short of TRANSPORTING data from one block size to another for an 'extract, transform and load' process.
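
For context, that transport scenario looks roughly like this (a sketch - the names, paths and sizes are made up): on the warehouse you configure a buffer cache matching the OLTP block size, plug the transported tablespace in, load from it into native tables, and drop it.

-- on the warehouse (16k default block size), allow the 8k tablespace to be used
alter system set db_8k_cache_size = 64m;

-- plug in the tablespace transported from the OLTP system, e.g. with Data Pump:
--   impdp system directory=dpump_dir dumpfile=oltp_ts.dmp
--     transport_datafiles='/u01/oradata/wh/oltp_data01.dbf'

-- extract/transform/load into native 16k tables, then discard the 8k tablespace
insert /*+ append */ into wh_orders select * from oltp_schema.orders;
drop tablespace oltp_ts including contents and datafiles;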


A reader, May 19, 2009 - 5:39 am UTC

Hi tom
This link states that:

" Simply by using the new 16K tablespace and accompanying 16K data cache, the amount of logical reads has been reduced by half "

What do you think?


http://www.rampant-books.com/t_oracle_blocksize_disk_i_o.htm
Tom Kyte
May 23, 2009 - 11:41 am UTC

and the amount of physical IO might have gone way up (or not changed at all)

and the amount of logical IO your query does might not have changed very much at all (think about it, when you do logical IO in a large range scan+table access by index rowid - do you really do most of the IO's against the index or THE TABLE??)


Think about the physics here for a minute, the first point in the link was "B-tree indexes with frequent index range scans perform best in the largest supported block size" - well, here is a large range scan:

ops$tkyte%ORA10GR2> /*
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> drop table t;
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t
ops$tkyte%ORA10GR2> as
ops$tkyte%ORA10GR2> select *
ops$tkyte%ORA10GR2>   from all_objects;
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create index t_idx on t(object_type,owner,object_name);
ops$tkyte%ORA10GR2> */
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set linesize 1000
ops$tkyte%ORA10GR2> set arraysize 100
ops$tkyte%ORA10GR2> exec dbms_monitor.session_trace_enable

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> variable x varchar2(30)
ops$tkyte%ORA10GR2> exec :x := 'SYS'

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select /*+ index(t t_idx) */ * from t where owner = :x order by object_type,owner,object_name;

22934 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3778778741

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |   531 | 67968 |   487   (1)| 00:00:04 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |   531 | 67968 |   487   (1)| 00:00:04 |
|*  2 |   INDEX FULL SCAN           | T_IDX |   212 |       |   348   (1)| 00:00:03 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OWNER"=:X)
       filter("OWNER"=:X)

Note
-----
   - dynamic sampling used for this statement


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      16476  consistent gets
          0  physical reads
          0  redo size
    1079076  bytes sent via SQL*Net to client
       2919  bytes received via SQL*Net from client
        231  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      22934  rows processed


the tkprof shows us:

select /*+ index(t t_idx) */ *
from
 t where owner = :x order by object_type,owner,object_name


Rows     Row Source Operation
-------  ---------------------------------------------------
  22934  TABLE ACCESS BY INDEX ROWID T (cr=16476 pr=0 pw=0 time=160693 us)
  22934   INDEX FULL SCAN T_IDX (cr=575 pr=0 pw=0 time=45996 us)(object id 175622)


Now, if we used a block size twice as large for the index - we might halve its 575 consistent gets, chopping as much as 288 IO's off of the query's 16,476. A 1.8% reduction in logical IO.....





Think about the REAL WORLD use cases.

Do you do that COUNT(*)? (if so, why - I'd be looking at a way to NOT DO the count in the first place, almost certainly not needed - but even if so, how often do you do it - is it the thing you really need to tune??)

Most index range scans are followed by table access by index rowid, the index is not the IO consumer, the table is.

Large Scans

Taral Desai, May 27, 2009 - 5:26 pm UTC

Hi Tom,

In relation to this, if we had a large scan, would it be beneficial? Here is a small test:
1. 
Default db block size is 16k
select count(*) from test_16k; --53401088 (16k Block Size )
select count(*) from test_32k; --53401088 (32k Block Size )

2. 

select count(*) 
from
 tdesai_dba.test_16k where owner='SYS'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2     60.03     542.93     415233     415263          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     60.04     542.94     415233     415263          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=415263 pr=415233 pw=0 time=542939889 us)
14015488   TABLE ACCESS FULL TEST_16K (cr=415263 pr=415233 pw=0 time=532594887 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  db file sequential read                         1        0.00          0.00
  db file scattered read                      26004        1.36        519.06
  SQL*Net message from client                     2        0.04          0.05

3.
select count(*) 
from
 tdesai_dba.test_32k where owner='SYS'


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2     66.24     509.83     206236     206265          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4     66.24     509.83     206236     206265          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=206265 pr=206236 pw=0 time=150428252 us)
14015488   TABLE ACCESS FULL TEST_32K (cr=206265 pr=206236 pw=0 time=504606113 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  db file sequential read                        21        0.01          0.06
  db file scattered read                      25844        0.79        484.74
  free buffer waits                              13        0.01          0.17
  SQL*Net message from client                     2        0.05          0.07



Tom Kyte
May 27, 2009 - 5:45 pm UTC

how many times do you do that - how much concurrency is there? is there any real benefit from that? I doubt it.

and you showed it (which one took more cpu time in that particular case?)




think about this please - in real life - how many times do you do that count(*)? If you say "more than once" - you are doing something wrong, you need to rethink what you are doing in the database. Even if you say "once, after a load", you probably need to rethink your approach (sort of wish "select count(*)" with no other attributes would raise an ERROR myself)


so tell me - does it make any sort of real world difference? do you really gain anything from this (the reduction in LIO's would mostly benefit scalability issues - many users doing this simultaneously - but if you have many users doing it at the same time - you have other problems - a seriously bad "algorithm" that needs this count)

It was a Test

Taral Desai, May 29, 2009 - 5:46 pm UTC

Hello Tom,

Well, I was testing performance for 32k and 16k block size tablespaces. We have a default 16k block size.

Here is another test I did, for 9 million rows:
set serveroutput on;
exec runStats_pkg.rs_start;
create table t_32 nologging tablespace taral_32k as select * from test_32 ;
exec runStats_pkg.rs_middle;
create table t_16 nologging tablespace users as select * from test_16 ;
spool test2.txt
exec runStats_pkg.rs_stop;
spool off

Here are the result

Run1 ran in 21074 hsecs (32k)
Run2 ran in 18964 hsecs (16k)
run 1 ran in 111.13% of the time

Name Run1 Run2 Diff
LATCH.library cache 56,888 43,033 -13,855
LATCH.session allocation 36,838 23,126 -13,712
LATCH.row cache objects 21,836 10,909 -10,927
LATCH.enqueue hash chains 11,736 7,147 -4,589
LATCH.enqueues 9,306 5,046 -4,260
STAT...session uga memory 65,408 0 -65,408
LATCH.cache buffers chains 414,600 481,135 66,535
STAT...db block gets 81,598 152,128 70,530
STAT...table scan blocks gotte 71,700 143,880 72,180
STAT...physical writes direct 71,016 143,284 72,268
STAT...db block gets direct 71,016 143,284 72,268
STAT...physical writes non che 71,016 143,284 72,268
STAT...consistent gets from ca 78,441 151,740 73,299
STAT...consistent gets 78,441 151,740 73,299
STAT...no work - consistent re 74,123 148,974 74,851
LATCH.simulator hash latch 94,448 189,332 94,884
STAT...redo size 1,391,092 1,290,768 -100,324
LATCH.object queue header oper 142,832 262,483 119,651
STAT...session logical reads 160,039 303,868 143,829
STAT...session pga memory 4,521,984 65,536 -4,456,448

Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
873,289 1,099,487 226,198 79.43%

Can you please explain:
1. What are the latches "object queue header operation" and "simulator hash latch"?
2. Is it that the 16k block size ran faster on the wall clock, but used heavier latching than the 32k?

If yes, then can you please explain:
3. Why is cache buffers chains higher?

Tom Kyte
June 01, 2009 - 7:35 pm UTC

why didn't you answer a single question I asked?

the difference between 0.32 and 0.16 is approximately ZERO in real life, that is wall clock time - hugely and highly variable - run it 100 times and tell us the average...


If you see something like "10.00 seconds vs 0.10 seconds" - that is a big difference.

0.xx versus 0.yy - with lots of IO involved - probably xx = yy for all intents and purposes.


(I'm telling you - nothing to be gained here....)

Taral Desai, June 03, 2009 - 1:44 pm UTC

Hi Tom,

Actually, we don't use count(*) - this was just a test I performed to evaluate. The second one was just a test as well.

I got your point about the timing information. Yes, it would not make much difference.

I also saw that it won't make any difference in I/O, as the 16k one is doing 16 blocks per read and the other is doing 8 blocks per read (this is from the p3 value).

But I am curious why there is so much latch difference.
Tom Kyte
June 04, 2009 - 12:27 pm UTC

each logical io requires a latch.
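
To see this for yourself, one approach (a sketch) is to snapshot v$latch around each run and diff the numbers - essentially what runstats does. Roughly: "cache buffers chains" protects the hash chains of buffers and is taken on every logical IO (which is why the 16k run, with nearly double the logical reads, shows more of it); "simulator hash latch" protects the buffer cache advisory's simulation structures; "object queue header operation" protects the per-object queues of buffers in the cache.

select name, gets, misses
  from v$latch
 where name in ('cache buffers chains',
                'object queue header operation',
                'simulator hash latch');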

Should ODS be in different database than Datamart because of blocksize

A reader, November 05, 2010 - 11:23 am UTC

Tom,
You are clear that there is no need to pick different block sizes for databases/tablespaces and that 8K is good enough. However, something you said regarding moving data from OLTP to data warehousing might be a reason to have a bigger block size, i.e. 16K/32K. Two questions:
1. In designing a data warehouse, should the ODS layer (3NF) be in its own database and the data mart layer (dimensional) be in its own database?
2. If so, then what are the generally recommended block sizes? If not, and they share one database, then what is the generally recommended block size?
Thanks
Tom Kyte
November 06, 2010 - 7:18 pm UTC

I'd prefer as few databases as I can get away with - as few as possible.

siva, July 11, 2013 - 10:58 am UTC

Hi Tom,
I have a list partitioned table (block size 8k) and partitioned indexes (block size 32k) in 11g.
Now I am unable to add a new partition to the table - I am getting:

ORA-14523: Cannot co-locate [sub]partition of local index with table [sub]partition because local index block size [32768] does not match table block size [8192]

Please help me in this regard.

Tom Kyte
July 16, 2013 - 3:55 pm UTC

why multiple block sizes??????? ugh. what a pain.


and see - you got some pain here. You should never be doing this - using a different block size for your data. The only reason multiple block sizes exist is to transport data from an OLTP system (8-16k block sizes) to a warehouse (16-32k block sizes) - NOT for putting indexes in a different block size from their tables. I have a feeling I know what website you read for that "advice" - bad idea.


ops$tkyte%ORA11GR2> !oerr ora 14523
14523, 00000, "Cannot co-locate [sub]partition of %s %s with table [sub]partition because %s block size [%s] does not match table block size [%s]"
// *Cause: A DDL statement was issued that would require
//         a partition/subpartition of a local index/LOB column to
//         be co-located with the corresponding partition/subpartition
//         of the base table. However, this is not possible because
//         the block sizes of the table and the LOB column/local
//         index are different.
// *Action: Either
//          (1) Specify an object-level default tablespace (or
//              partition-level default tablespace for the
//              appropriate partition, if composite partitioning is
//              used) for the partitioned local index/LOB column
//              and then retry the DDL command, OR
//          (2) Ensure that tablespaces of the correct block size are
//              specified for all new partitions/subpartitions being
//              created.
//          Also ensure that neither of TABLESPACE DEFAULT and
//          STORE IN (DEFAULT) is specified for a local index whose
//          block size does not match that of the base table.





but you'll have to show us what you did to get here - because it would work by default in most normal cases:

ops$tkyte%ORA11GR2> CREATE TABLE t
  2  (
  3    dt  date,
  4    x   int,
  5    y   varchar2(30)
  6  )
  7  PARTITION BY list (x)
  8  (
  9    PARTITION part1 VALUES (1),
 10    PARTITION part2 VALUES (2)
 11  )
 12  /

Table created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create index t_y_idx on t(y) local tablespace tbs16k;

Index created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select table_name, tablespace_name from user_tab_partitions where table_name = 'T';

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
T                              USERS
T                              USERS

ops$tkyte%ORA11GR2> select index_name, tablespace_name from user_ind_partitions where index_name = 'T_Y_IDX';

INDEX_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
T_Y_IDX                        TBS16K
T_Y_IDX                        TBS16K

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter table t add partition part3 values ( 3 );

Table altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select table_name, tablespace_name from user_tab_partitions where table_name = 'T';

TABLE_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
T                              USERS
T                              USERS
T                              USERS

ops$tkyte%ORA11GR2> select index_name, tablespace_name from user_ind_partitions where index_name = 'T_Y_IDX';

INDEX_NAME                     TABLESPACE_NAME
------------------------------ ------------------------------
T_Y_IDX                        TBS16K
T_Y_IDX                        TBS16K
T_Y_IDX                        TBS16K

ops$tkyte%ORA11GR2>

Issue with multiblock size in tablespace

siva karani, July 18, 2013 - 6:48 am UTC

Thanks Tom.

In my case, the index was created like this:

create index t_y_idx on t(y)
local(
partition part1 tablespace tbs16k,
partition part2 tablespace tbs16k);

Now, if I try to add a new partition, I get the ORA-14523 error.
But if I specify the tablespace name globally, I am able to add a new partition:

create index t_y_idx on t(y)
tablespace tbs16k
local(
partition part1 tablespace tbs16k,
partition part2 tablespace tbs16k);

Could you please explain how the index is stored in both cases, and how to find whether the tablespace is specified globally or locally?
Tom Kyte
July 18, 2013 - 5:18 pm UTC

by default, unless the index has a tablespace, it'll use the tablespace of the table it is on.

I really suggest you consider rebuilding this index in a normal tablespace and getting rid of your multiple block sizes.

ops$tkyte%ORA11GR2> create index t_x_idx on t(x)
  2  local
  3  (partition part1 tablespace tbs16k,
  4   partition part2 tablespace tbs16k
  5  )
  6  /

Index created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create index t_y_idx on t(y)
  2  tablespace tbs16k
  3  /

Index created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter table t add partition part3 values less than (to_date('15-mar-2003','dd-mon-yyyy'));
alter table t add partition part3 values less than (to_date('15-mar-2003','dd-mon-yyyy'))
            *
ERROR at line 1:
ORA-14523: Cannot co-locate [sub]partition of local index  with table [sub]partition because local index block size
[16384] does not match table block size [8192]


ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter index t_x_idx modify default attributes tablespace tbs16k;

Index altered.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter table t add partition part3 values less than (to_date('15-mar-2003','dd-mon-yyyy'));

Table altered.
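
Following up on the question above - a sketch of how to tell whether the tablespace was specified at the index level or only per partition, and one way to move everything back to the table's standard block size (names assume the example above):

-- DEF_TABLESPACE_NAME shows the index-level default (if any)
select index_name, def_tablespace_name
  from user_part_indexes
 where table_name = 'T';

-- to be rid of the nonstandard block size entirely:
alter index t_x_idx modify default attributes tablespace users;
alter index t_x_idx rebuild partition part1 tablespace users;
alter index t_x_idx rebuild partition part2 tablespace users;
alter index t_x_idx rebuild partition part3 tablespace users;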

16k or 32k tablespace for storing LOBs

Rodrigo Murillo G., September 02, 2021 - 3:29 am UTC

Hi

I hope I'm writing in the right thread

It is clear there is no benefit from having tablespaces with different block sizes (16k or 32k) for structured data and OLTP databases.

Is there any difference or benefit if we use these tablespaces to store LOB segments?

Thanks
Chris Saxon
September 02, 2021 - 11:49 am UTC

Mixing block sizes in a database generally makes your database harder to manage and is best avoided.

Though if you need to index long text strings, you can hit ORA-1450 errors with the default block size of 8k. Using a larger block size is one solution, and in some cases may be the best one. I discuss other options for the problem at

https://blogs.oracle.com/sql/post/how-to-fix-ora-01450-maximum-key-length-6398-exceeded-errors#blocksize
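
For illustration, a sketch of how that error arises with the default 8k block size (the 6398-byte key limit is roughly three quarters of the block):

create table t_txt ( a varchar2(4000), b varchar2(4000) );

create index t_txt_idx on t_txt(a, b);
-- fails with ORA-01450: maximum key length (6398) exceeded, because the key
-- could reach 8000 bytes; the same index in a 16k or 32k block size
-- tablespace has a proportionally larger limit and would succeed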

Mixing block sizes in tablespaces

Piotr Nogaƛ, September 27, 2021 - 10:26 am UTC

IMHO you can find benefits from mixing block sizes in a DW, especially if BASIC compression is used. Generally, the bigger the block, the better the compression ratio. However, as usual, not all tablespaces contain read-only/rarely modified data, so distinguishing block sizes among different data models could save plenty of storage and IO, providing faster queries with reduced resource usage.
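
If you want to test that claim against your own data, a sketch (ts_32k is a made-up 32k tablespace; results will vary with the data):

create table t_comp_8k tablespace users compress basic
as select * from all_objects;

create table t_comp_32k tablespace ts_32k compress basic
as select * from all_objects;

-- compare the compressed segment sizes (COMPRESS BASIC is 11g syntax;
-- older releases use plain COMPRESS)
select segment_name, round(bytes/1024/1024, 2) mb
  from user_segments
 where segment_name in ('T_COMP_8K', 'T_COMP_32K');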