

Question and Answer

Tom Kyte

Thanks for the question, Juan Carlos.

Asked: July 22, 2005 - 1:35 pm UTC

Last updated: January 09, 2008 - 8:12 am UTC

Version: 9.2

Viewed 1000+ times

You Asked

Hi Tom,
I recently gave this advice about using multiple block size tablespaces in a database:
"Hi,
This works really nicely; it allows you to create separate areas of memory
for different things.
For example: big blob documents in a 32k tablespace,
normal tables in 8k,
indexes in 16k, etc.

You have to set a database memory cache in the init.ora for each
different block size."

Then several DBAs (more experienced than me) told me briefly:
1) The reason for multiple block sizes is transportable tablespaces.
2) Any other use of them should be done with caution.

I remember I read several articles about the benefits of using different block sizes (other than only for transportable tablespaces).
*****added ***
Possibly the only author I read on that was Burleson.

But it makes sense to me: moving indexes to a 16k tablespace helps me keep memory only for indexes, and better control index memory usage.
For me (a small database) it makes sense.
Now I ask you because I want to know if I missed something or I'm wrong.
*****added ***

Anyway, the question is for you, Tom.
When do you (and Oracle) advise using different block sizes in the same database - for transportable tablespaces, tuning block reads, etc.?
And when do you advise against using different block sizes?

Thanks Tom in advance

and we said...

use keep and recycle pools if you want multiple buffer caches and can really honestly PROVE that you actually need them. the cache is a pretty darn efficient thing.

You want to sit there and micromanage? Do you have a physical IO problem with indexes? Have you IDENTIFIED A TRUE PROBLEM that you are trying to fix?

I've not advised to use multiple blocksizes, use the keep and recycle pools if you want to segregate things out -- you've had that for many many many versions -- more than multiple block size tablespaces. Why would they not be the first path to take? (considering you don't even need to MOVE or REBUILD a thing to take advantage of them!)
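For reference, the keep and recycle pools Tom mentions need no reorganization at all. A minimal sketch - the parameter names are real, but the sizes and table names here are purely illustrative, not recommendations:

```sql
-- carve KEEP and RECYCLE pools out of memory alongside the default cache
alter system set db_keep_cache_size    = 200M;
alter system set db_recycle_cache_size = 100M;

-- nominate segments; no MOVE, no REBUILD - only the segment attribute changes
alter table hot_lookup  storage (buffer_pool keep);
alter table old_history storage (buffer_pool recycle);
```

Compare this with multiple block sizes, where the same segregation requires new tablespaces, new caches, and physically moving every segment.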


Comments

A reader, July 24, 2005 - 2:33 am UTC

Thanks Tom.
This is my point: I have some historical tables, and I use the recycle buffer for them.
I have some very frequently accessed small tables, for which I use the keep cache.
I have a table where I store forms and reports, about 1,200 of them, which I use every day. I don't want this data to push other data out of the cache. So it seems reasonable to me to use a 32k tablespace and give it a cache of 20MB, enough to hold that data; and the block size seems better to me, because they are "big" blobs.

I don't micromanage the memory, I don't monitor performance all the time; tuning is not a frequent task, and I don't need it, because it is a small database (up to 6GB) for fewer than 100 OLTP users and about 10 DW users. That is the reason I want to move indexes to a 16k tablespace: it guarantees me that some misconfiguration won't push my indexes out of memory. I want to be sure the indexes have their special place there. And I understand a 16k tablespace is better for indexes, while 8k is better for my data.

Now what you didn't tell me is whether Oracle's multiple block size tablespace feature is mainly for transportable tablespaces only.

Thanks Tom.

Tom Kyte
July 24, 2005 - 9:52 am UTC

You are micromanaging memory - you just said so. If you were not, you would have but a SINGLE CACHE. So, by definition, you are micromanaging memory.

And I'd bet you $5 that if you stopped and gave all of the ram to the default buffer cache, you would notice no significant difference.



Multiple block sizes were implemented to transport tablespaces.

it would be the RARE and EXOTIC system that might even theoretically benefit from multiple block sizes with regards to indexes and such.

You *are* micromanaging here, you have a really small database, I cannot imagine having multiple caches here.
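For context, using a non-default block size at all means configuring a dedicated cache first. Roughly, assuming an 8k default block size (the file name and sizes below are made up for illustration):

```sql
-- a tablespace with a non-default block size cannot be created (or a
-- transported one plugged in) until a cache of that block size exists
alter system set db_16k_cache_size = 64M;

create tablespace ts_16k
  datafile '/u01/oradata/db/ts_16k_01.dbf' size 512M
  blocksize 16K;
```

That extra `db_16k_cache_size` memory is carved out separately and cannot be shared with the default cache - which is the administrative cost Tom keeps pointing at.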

A reader, July 25, 2005 - 9:30 am UTC

Thanks Tom, so the point is, from your words:
"Multiple block sizes were implemented to transport tablespaces."


Tom Kyte
July 25, 2005 - 11:09 am UTC

That is what they primarily were implemented to facilitate, yes.

A reader, July 25, 2005 - 1:09 pm UTC

Thanks again

LOB column tables and migration

hk, February 02, 2006 - 10:28 pm UTC

http://www.dba-oracle.com/art_dbazine_9i_multiblock.htm
From the above article
---- quote
Segregate LOB column tables - For those Oracle tables that contain raw, long raw, or in-line LOBs, moving the table rows to a large block size will have an extremely beneficial effect on disk I/O. Experienced DBAs will check dba_tables.avg_row_len to make sure that the blocksize is larger than the average size. Row chaining will be reduced while at the same time the entire LOB can be read within a single disk I/O, thereby avoiding the additional overhead of having Oracle go out to read multiple blocks.
--- quote
1) We have LOB column tables and I see a lot of chaining/migration. If we move the LOB column tables to a bigger block size tablespace, will that not reduce the migration?
2) I think the max block size possible is 32k on 9i. How do we reduce the migration for such big-column tables when the average row length is more than 100k?

Thanks

Tom Kyte
February 03, 2006 - 1:57 pm UTC

1) lob columns are stored inline ONLY if they are 4000 bytes or less. If they are bigger than that - they are automagically moved out of line into the lob segment.

Insufficient data, even for "experienced DBA's" right here to tell you what to do. It could be that a simple alter table t move with a larger pctfree to set aside sufficient space for updates is what is needed (and that moving to a larger blocksize without changing pctfree would not *fix a darn thing*).

How is it that your rows end up: are they "truly chained" (larger than a block) or are they "migrated"? Migrated means we have to look at space parameters - forget the blocksize (the block is apparently BIG ENOUGH if they are migrated).

2) and only on some platforms


quick question for you - before you do A SINGLE THING - have you identified this to be the cause of some performance related issue for you?


If you are using lobs - I feel pretty confident in stating "the average row length is NOT 100K!!!!", the lobs are stored OUT OF LINE, in the lob segment, when they exceed 4000 bytes.


(and if they were 100k, just give up, I don't care what your block size is, it won't matter)
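A rough sketch of the chained-versus-migrated check Tom describes (the table and index names here are hypothetical):

```sql
-- count chained/migrated rows and compare row length to block size
analyze table t compute statistics;
select chain_cnt, avg_row_len
  from user_tables
 where table_name = 'T';

-- if rows are merely migrated (they would fit in a block), rebuilding
-- with more free space reserved for updates can fix it - no block size
-- change involved
alter table t move pctfree 30;

-- a move marks the table's indexes UNUSABLE; rebuild them afterwards
alter index t_idx rebuild;
```

Only if chain_cnt counts rows genuinely larger than the block would a bigger block size even enter the discussion.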

Confusing

Dakshinamurty, Y, December 22, 2006 - 12:34 am UTC

I want to do an exercise to know whether multi block tablespaces will be PERFORMANCE wise better or not. In that regard I read two contrasting articles
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:44779427427730
and
http://www.dba-oracle.com/oracle_tips_multiple_blocksizes.htm

I am confused. I have a few questions to ask you.
1. Is the exercise worth doing? (we have to spend hardware, software and manpower resources for 4 weeks)
2. Are there any major concerns like maintenance and future performance degradation?

Our database is:
OLTP, Oracle 10gr2 RAC, 4 TB, Huge banking system which will be doing millions of updates per day in some tables.






Tom Kyte
December 22, 2006 - 6:33 am UTC

1) seriously seriously doubt it.

2) absolutely there are. You have block buffer caches (many) that cannot really share memory between themselves, you have to figure out the right sizes, make the tradeoffs of "this one will be smaller so this can be bigger and so on". Consider (if ANYTHING) the keep and recycle pools, but using multiple block sizes for "tuning" isn't really something recommended.


Disadvantage of using multiple blocksize

Tarek, September 18, 2007 - 10:15 am UTC

From this discussion I have understood that multiple block sizes do not give us any advantage.

Could you please list some of the disadvantages of such a configuration, other than manual buffer assignment for the tables? Is there any disadvantage at the Oracle processing level - latching, sorting, ...?
Tom Kyte
September 18, 2007 - 4:48 pm UTC

there is the fact that you might have 50gb of memory free in one buffer cache and none in the other - and we cannot share.

there is the extra administrative overhead.

there is the confusion it will cause when dealing with optimizer related issues.

it makes you *different* from everyone else (hence harder to debug, to support)

rather than list "bad" - which I presume you want in order to prevent someone from using it - ask them for "why it would be so good" and we'll address that.

A reader, September 25, 2007 - 5:18 am UTC

Dear Tom,

I have an OLTP database; in fact most tables are typically OLTP, but one table holds historical data. This table is loaded by overnight jobs, but it is also often queried by a GUI user tool. The queries mostly perform range scans (date intervals) together with a user ID (in fact, every user is listing his own transactions). I have found that the queries are faster with a 16K block size than with an 8K block size (and the number of consistent gets taken with autotrace confirms it).
Tom, can you comment on this please?


Thanks in advance,

Isaac
Tom Kyte
September 26, 2007 - 9:19 pm UTC

what does the number of consistent gets have to do with runtime in this case - I mean, what logic are you using to say "the number of consistent gets confirms that this must be faster"?

got tkprof?

A reader, October 02, 2007 - 12:36 pm UTC

No I haven't used tkprof yet, but here is an example with observing
the real execution time: 


First with the table and the index inside 8k tablespace:

set timing on

SQL> r
  1  select count(MYFIELD) from table_8K where ttime >to_date('27/09/2006','dd/mm/yyyy')
  2* and ttime <to_date('06/10/2006','dd/mm/yyyy')

COUNT(MYFIELD)
-------------------
             164864

Elapsed: 00:00:01.40
...
(This command is executed several times - the execution time was approximately the same ~ 00:00:01.40)

And now the test with the same table, but created together with the index  in 16k tablespace:

SQL> r
  1  select count(MYFIELD) from table_16K where ttime >to_date('27/09/2006','dd/mm/yyyy')
  2* and ttime <to_date('06/10/2006','dd/mm/yyyy')

COUNT(MYFIELD)
-------------------
             164864

Elapsed: 00:00:00.36
(Again, the command is executed several times, the new execution time is approximately the same ~ 00:00:00.36 )

What could be the reason ?

I will also use Tkprof to see what result it shows...

Thanks in advance




Tom Kyte
October 05, 2007 - 10:59 am UTC

I want to see a tkprof, without that - we would only be GUESSING.

it would have the row source operation, showing the plans.
it would have the amount of work performed
it would show physical as well as logical IO's
it would have useful information

I won't comment unless and until you have it.


Brilliant

Santosh Kumar, October 20, 2007 - 6:28 am UTC

I really want to see where the discussion goes. I want Tom to make this concept crystal clear. Thanks a lot.

Waiting for the TKPROF report.............

Need your expert advice.

Laxman Bage, January 08, 2008 - 7:29 am UTC

Database: Oracle 9iR2
OS: HP UX 11i
Currently we are using multiple buffer pools, 8K and 16K.
The 8K cache is 5 GB
and the 16K cache is 4 GB.
The table CDR_DATA and its associated INDEXES are assigned to a 16K block size tablespace. For the last few months our client has been complaining that the billing process is running slow. I did a detailed analysis using SQL_TRACE (event 10046).

I found that the SQL below is leading to a huge number of "db file sequential read" waits:

SELECT CU.CDR_DATA_PARTITION_KEY, TYPE_ID_USG, ELEMENT_ID, PROVIDER_ID, 
  POINT_TARGET, POINT_ORIGIN, POINT_ID_TARGET, POINT_ID_ORIGIN, CU.TRANS_DT, 
  CU.MSG_ID, CU.MSG_ID2, CU.MSG_ID_SERV, RATED_UNITS, JURISDICTION, AMOUNT, 
  CCARD_ID, CCARD_ID_SERV, CUSTOMER_TAG, RATE_PERIOD, POINT_TAX_CODE_ORIGIN, 
  POINT_TAX_CODE_TYPE_ORIGIN, POINT_TAX_CODE_TARGET, 
  POINT_TAX_CODE_TYPE_TARGET, BASE_AMT, UNROUNDED_AMOUNT, BILL_CLASS, 
  ZONE_CLASS, EXT_TRACKING_ID, ANNOTATION, REV_RCV_COST_CTR, AGGR_USAGE_ID, 
  RATE_CURRENCY_CODE, PRIMARY_UNITS, SECOND_UNITS, THIRD_UNITS, 
  CORRIDOR_PLAN_ID, CU.SPLIT_ROW_NUM, OPEN_ITEM_ID, NVL(NUM_RECORDS,1), 
  AMOUNT_REDUCTION, AMOUNT_REDUCTION_ID, ACCESS_REGION_ORIGIN, 
  ACCESS_REGION_TARGET, CU.SUBSCR_NO, CU.SUBSCR_NO_RESETS, TAX_PKG_COUNT, 
  GEOCODE, AUX_TAX_INFO, TAX_RATE_ACTIVE_DT, TAX_RATE_INACTIVE_DT, 
  ADD_IMPLIED_DECIMAL, RATE_CLASS, POINT_CLASS_ORIGIN, POINT_CLASS_TARGET, 
  COMPONENT_ID, EQUIP_CLASS_CODE 
FROM
 CDR_UNBILLED CU, CDR_DATA CD, FILE_STATUS 
WHERE CU.ACCOUNT_NO = :B10 AND 
  CU.TRANS_DT < :B9 AND 
  CU.CDR_DATA_PARTITION_KEY = CD.CDR_DATA_PARTITION_KEY AND 
  CU.MSG_ID = CD.MSG_ID AND 
  CU.MSG_ID2 = CD.MSG_ID2 AND 
  CU.MSG_ID_SERV = CD.MSG_ID_SERV AND 
  CU.SPLIT_ROW_NUM = CD.SPLIT_ROW_NUM AND 
  CDR_STATUS IN (:B8, :B7, :B6, :B5) AND 
  NO_BILL = 0 AND 
  ( :B4 = 0 OR 
      ( :B4 = 1 AND 
       CU.RATE_DT < :B3)
   ) AND 
  CD.FILE_ID = FILE_STATUS.FILE_ID AND 
  CD.FILE_ID_SERV = FILE_STATUS.FILE_ID_SERV AND 
  (FILE_STATUS.FILE_STATUS = :B2 OR 
  FILE_STATUS.FILE_STATUS = :B1) 
  ORDER BY CU.ACCOUNT_NO,CU.TRANS_DT



Rows     Row Source Operation
-------  ---------------------------------------------------
   6497  SORT ORDER BY (cr=58534 r=8771 w=0 time=150209662 us)
   6497   NESTED LOOPS  (cr=58534 r=8771 w=0 time=149973079 us)
   6497    NESTED LOOPS  (cr=32546 r=8771 w=0 time=149525666 us)
   6497     TABLE ACCESS CLUSTER CDR_UNBILLED (cr=6558 r=58 w=0 time=783420 us)
      1      INDEX UNIQUE SCAN CDR_UNBILLED_XC_CCNT_N_TRNS_JN (cr=3 r=0 w=0 time=35 us)(object id 434351)
   6497     PARTITION RANGE ITERATOR PARTITION: KEY KEY (cr=25988 r=8713 w=0 time=148683930 us)
   6497      TABLE ACCESS BY LOCAL INDEX ROWID CDR_DATA PARTITION: KEY KEY (cr=25988 r=8713 w=0 time=148630331 us)
   6497       INDEX UNIQUE SCAN CDR_DATA_PK PARTITION: KEY KEY (cr=19491 r=2562 w=0 time=37627876 us)(object id 75426)
   6497    PARTITION LIST ITERATOR PARTITION: KEY KEY (cr=25988 r=0 w=0 time=333196 us)
   6497     TABLE ACCESS BY LOCAL INDEX ROWID FILE_STATUS PARTITION: KEY KEY (cr=25988 r=0 w=0 time=295733 us)
   6497      INDEX UNIQUE SCAN FILE_STATUS_PK PARTITION: KEY KEY (cr=19491 r=0 w=0 time=203930 us)(object id 257683)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
   6497   SORT (ORDER BY)
   6497    NESTED LOOPS
   6497     NESTED LOOPS
   6497      TABLE ACCESS   GOAL: ANALYZED (CLUSTER) OF 'CDR_UNBILLED'
      1       INDEX (UNIQUE SCAN) OF 'CDR_UNBILLED_XC_CCNT_N_TRNS_JN'
                  (NON-UNIQUE)
   6497      PARTITION RANGE (ITERATOR) PARTITION:KEYKEY
   6497       TABLE ACCESS   GOAL: ANALYZED (BY LOCAL INDEX ROWID) OF
                  'CDR_DATA' PARTITION:KEYKEY
   6497        INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'CDR_DATA_PK'
                   (UNIQUE) PARTITION:KEYKEY
   6497     PARTITION LIST (ITERATOR) PARTITION:KEYKEY
   6497      TABLE ACCESS   GOAL: ANALYZED (BY LOCAL INDEX ROWID) OF
                  'FILE_STATUS' PARTITION:KEYKEY
   6497       INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'FILE_STATUS_PK'
                  (UNIQUE) PARTITION:KEYKEY

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     73928        0.41       1179.61
  latch free                                     94        0.03          0.36
  SQL*Net message to client                      62        0.00          0.00
  SQL*Net more data to client                  1075        0.23          1.03
  SQL*Net message from client                    62        2.68         21.47
  buffer busy waits                               2        0.02          0.02


I am suggesting to move the CDR_DATA and associated index to 8K block size (default) and assign them KEEP BUFFER POOL

Please advise.

Thanks for your time !!!

Tom Kyte
January 08, 2008 - 7:51 am UTC

why would you format the sql statement using the code button - but then take the stuff that really needs to line up and have it not use the code button - making the information unreadable....

The keep pool will do nothing for you - unless the data was already read and still in the cache (the keep pool just holds blocks in much the same way the default pool would, it is just a separate area).

Likely - as is the case with a 'batch job', the data you need to read is so long ago aged out of the cache and back on disk. You end up reading it.

I would enable a better trace - one that shows the row source operation and the amount of IO performed on each step (eg: one that looks like this:

select *
from
 emp


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          4          0          15
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          4          0          15

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 233

Rows     Row Source Operation
-------  ---------------------------------------------------
     15  TABLE ACCESS FULL EMP (cr=4 pr=0 pw=0 time=53 us)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00



In that way, you'll be able to see where you are doing the IO's (guessing right now we are). Then, you can start to think about how to reduce them. Given the tiny number of rows, the amount of IO is surprising - so you might have a non-selective index or something that by simply adding a column or two - you would dramatically reduce the IO's - eg:

ops$tkyte%ORA10GR2> create table t as select * from all_objects;

Table created.

ops$tkyte%ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> create index t_idx on t(object_type);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> set termout off
ops$tkyte%ORA10GR2> select * from t where object_type = 'TABLE' and owner = 'SCOTT' and object_name = 'EMP';


Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU
------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    93 |    87   (2
|*  1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |    93 |    87   (2
|*  2 |   INDEX RANGE SCAN          | T_IDX |  1915 |       |     6   (0
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("OBJECT_NAME"='EMP' AND "OWNER"='SCOTT')
   2 - access("OBJECT_TYPE"='TABLE')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        124  consistent gets
          0  physical reads
          0  redo size
       1201  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> drop index t_idx;

Index dropped.

ops$tkyte%ORA10GR2> create index t_idx on t(object_type,owner,object_name);

Index created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> set termout off
ops$tkyte%ORA10GR2> select * from t where object_type = 'TABLE' and owner = 'SCOTT' and object_name = 'EMP';


Execution Plan
----------------------------------------------------------
Plan hash value: 470836197

------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU
------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    93 |     4   (0
|   1 |  TABLE ACCESS BY INDEX ROWID| T     |     1 |    93 |     4   (0
|*  2 |   INDEX RANGE SCAN          | T_IDX |     1 |       |     3   (0
------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("OBJECT_TYPE"='TABLE' AND "OWNER"='SCOTT' AND
              "OBJECT_NAME"='EMP')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads
          0  redo size
       1201  bytes sent via SQL*Net to client
        385  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

ops$tkyte%ORA10GR2> set autotrace off


what you would be looking for in the tkprof would be something like this:

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID T (cr=124 pr=0 pw=0 time=1315 us)
   1797   INDEX RANGE SCAN T_IDX (cr=8 pr=0 pw=0 time=1837 us)(object id 152586)



lots of rows coming out of the index, few rows coming out from the table...

Details

Laxman Bage, January 08, 2008 - 12:13 pm UTC

The tkprof is generated with option aggregate=yes

SELECT CU.CDR_DATA_PARTITION_KEY, TYPE_ID_USG, ELEMENT_ID, PROVIDER_ID,
POINT_TARGET, POINT_ORIGIN, POINT_ID_TARGET, POINT_ID_ORIGIN, CU.TRANS_DT,
CU.MSG_ID, CU.MSG_ID2, CU.MSG_ID_SERV, RATED_UNITS, JURISDICTION, AMOUNT,
CCARD_ID, CCARD_ID_SERV, CUSTOMER_TAG, RATE_PERIOD, POINT_TAX_CODE_ORIGIN,
POINT_TAX_CODE_TYPE_ORIGIN, POINT_TAX_CODE_TARGET,
POINT_TAX_CODE_TYPE_TARGET, BASE_AMT, UNROUNDED_AMOUNT, BILL_CLASS,
ZONE_CLASS, EXT_TRACKING_ID, ANNOTATION, REV_RCV_COST_CTR, AGGR_USAGE_ID,
RATE_CURRENCY_CODE, PRIMARY_UNITS, SECOND_UNITS, THIRD_UNITS,
CORRIDOR_PLAN_ID, CU.SPLIT_ROW_NUM, OPEN_ITEM_ID, NVL(NUM_RECORDS,1),
AMOUNT_REDUCTION, AMOUNT_REDUCTION_ID, ACCESS_REGION_ORIGIN,
ACCESS_REGION_TARGET, CU.SUBSCR_NO, CU.SUBSCR_NO_RESETS, TAX_PKG_COUNT,
GEOCODE, AUX_TAX_INFO, TAX_RATE_ACTIVE_DT, TAX_RATE_INACTIVE_DT,
ADD_IMPLIED_DECIMAL, RATE_CLASS, POINT_CLASS_ORIGIN, POINT_CLASS_TARGET,
COMPONENT_ID, EQUIP_CLASS_CODE
FROM
CDR_UNBILLED CU, CDR_DATA CD, FILE_STATUS WHERE CU.ACCOUNT_NO = :B10 AND
CU.TRANS_DT < :B9 AND CU.CDR_DATA_PARTITION_KEY = CD.CDR_DATA_PARTITION_KEY
AND CU.MSG_ID = CD.MSG_ID AND CU.MSG_ID2 = CD.MSG_ID2 AND CU.MSG_ID_SERV =
CD.MSG_ID_SERV AND CU.SPLIT_ROW_NUM = CD.SPLIT_ROW_NUM AND CDR_STATUS IN
(:B8, :B7, :B6, :B5) AND NO_BILL = 0 AND ( :B4 = 0 OR ( :B4 = 1 AND
CU.RATE_DT < :B3)) AND CD.FILE_ID = FILE_STATUS.FILE_ID AND CD.FILE_ID_SERV
= FILE_STATUS.FILE_ID_SERV AND (FILE_STATUS.FILE_STATUS = :B2 OR
FILE_STATUS.FILE_STATUS = :B1) ORDER BY CU.ACCOUNT_NO,CU.TRANS_DT


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      122      0.02       0.01          0          0          0           0
Execute    122      0.14       0.15          0          0          0           0
Fetch      121     23.65     997.13      60599     372853          0       41315
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      365     23.81     997.29      60599     372853          0       41315

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 22  (ARBOR)   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
    391  SORT ORDER BY (cr=3529 r=490 w=0 time=8548795 us)
    391   NESTED LOOPS  (cr=3529 r=490 w=0 time=8536181 us)
    391    NESTED LOOPS  (cr=1965 r=490 w=0 time=8509444 us)
    391     TABLE ACCESS CLUSTER CDR_UNBILLED (cr=401 r=7 w=0 time=154478 us)
      1      INDEX UNIQUE SCAN CDR_UNBILLED_XC_CCNT_N_TRNS_JN (cr=3 r=0 w=0 time=32 us)(object id 434351)
    391     PARTITION RANGE ITERATOR PARTITION: KEY KEY (cr=1564 r=483 w=0 time=8350951 us)
    391      TABLE ACCESS BY LOCAL INDEX ROWID CDR_DATA PARTITION: KEY KEY (cr=1564 r=483 w=0 time=8348011 us)
    391       INDEX UNIQUE SCAN CDR_DATA_PK PARTITION: KEY KEY (cr=1173 r=125 w=0 time=1813137 us)(object id 75426)
    391    PARTITION LIST ITERATOR PARTITION: KEY KEY (cr=1564 r=0 w=0 time=18827 us)
    391     TABLE ACCESS BY LOCAL INDEX ROWID FILE_STATUS PARTITION: KEY KEY (cr=1564 r=0 w=0 time=16773 us)
    391      INDEX UNIQUE SCAN FILE_STATUS_PK PARTITION: KEY KEY (cr=1173 r=0 w=0 time=11484 us)(object id 257683)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
    391   SORT (ORDER BY)
    391    NESTED LOOPS
    391     NESTED LOOPS
    391      TABLE ACCESS   GOAL: ANALYZED (CLUSTER) OF 'CDR_UNBILLED'
      1       INDEX (UNIQUE SCAN) OF 'CDR_UNBILLED_XC_CCNT_N_TRNS_JN'
                  (NON-UNIQUE)
    391      PARTITION RANGE (ITERATOR) PARTITION:KEYKEY
    391       TABLE ACCESS   GOAL: ANALYZED (BY LOCAL INDEX ROWID) OF
                  'CDR_DATA' PARTITION:KEYKEY
    391        INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'CDR_DATA_PK'
                   (UNIQUE) PARTITION:KEYKEY
    391     PARTITION LIST (ITERATOR) PARTITION:KEYKEY
    391      TABLE ACCESS   GOAL: ANALYZED (BY LOCAL INDEX ROWID) OF
                  'FILE_STATUS' PARTITION:KEYKEY
    391       INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'FILE_STATUS_PK'
                  (UNIQUE) PARTITION:KEYKEY

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     61576        0.37        990.92
  latch free                                     65        0.03          0.25
  SQL*Net message to client                     121        0.00          0.00
  SQL*Net more data to client                  1010        0.01          0.27
  SQL*Net message from client                   121        1.02         18.58
  buffer busy waits                               2        0.00          0.00
********************************************************************************

Thanks
Tom Kyte
January 08, 2008 - 2:53 pm UTC

what is up with the CODE BUTTON - man, it is so hard to read when everything is flatlined on the left hand side.


Ok, 1,173/391 = 3 - common theme in that row source operation. It takes 3 IO's to get to a leaf in your indexes, we have a bunch of nested loops - we are reading all over the plan - skipping from partition to partition.

You do this query over and over and over - 121 times, fetching about 341 rows each time.

You read all over the place - average physical IO time is about 0.016 seconds, but you do so much of it.

typically, the way to resolve an issue like this will be... to look at your logic - ask yourself "why are we doing this query 121 times; is there a way we can bulk up - erase procedural code, replace it with a single, infinitely more efficient SQL statement"


Even if you moved your indexes into their own cache, you'd have to get them loaded (do the IO)

I do not see a silver bullet here.
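To illustrate the "bulk up" direction - only a sketch, since the actual legacy code is not shown: the per-account loop and the ACCOUNTS_TO_BILL driver table below are hypothetical, and only a few of the joined columns are spelled out:

```sql
-- instead of executing the big CDR join once per account, e.g.
--   for acct in (select account_no from accounts_to_bill) loop
--     ... run the query with :b10 := acct.account_no ...
--   end loop;
-- drive the whole set of accounts through one statement:
select cu.account_no, cu.trans_dt, cd.amount        -- plus the other columns
  from accounts_to_bill ab, cdr_unbilled cu, cdr_data cd, file_status fs
 where cu.account_no             = ab.account_no
   and cu.trans_dt               < :b9
   and cu.cdr_data_partition_key = cd.cdr_data_partition_key
   and cu.msg_id                 = cd.msg_id
   and cd.file_id                = fs.file_id
 order by cu.account_no, cu.trans_dt;
```

A single pass gives the optimizer the chance to choose a bulk plan (hash join, fewer repeated index probes) instead of paying the nested-loop physical IO cost 121 times over.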

Additional Info...

Laxman Bage, January 08, 2008 - 8:52 pm UTC

Thanks a lot for your time.
Regrets - this time I embedded the text in the "code" tags. The process pertaining to the SQL below is from a legacy system. I traced all the session SQLs related to the process for 30 minutes and found that the query below is taking a lot of time. My concern is that almost 90 to 95% of the CDR_DATA table blocks read lead to physical reads, relative to the number of rows read from the table. The block size for this table is 16K. If I move this table to an 8K block size, will there be any performance gain in terms of physical reads?

SELECT CU.CDR_DATA_PARTITION_KEY, TYPE_ID_USG, ELEMENT_ID, PROVIDER_ID, 
  POINT_TARGET, POINT_ORIGIN, POINT_ID_TARGET, POINT_ID_ORIGIN, CU.TRANS_DT, 
  CU.MSG_ID, CU.MSG_ID2, CU.MSG_ID_SERV, RATED_UNITS, JURISDICTION, AMOUNT, 
  CCARD_ID, CCARD_ID_SERV, CUSTOMER_TAG, RATE_PERIOD, POINT_TAX_CODE_ORIGIN, 
  POINT_TAX_CODE_TYPE_ORIGIN, POINT_TAX_CODE_TARGET, 
  POINT_TAX_CODE_TYPE_TARGET, BASE_AMT, UNROUNDED_AMOUNT, BILL_CLASS, 
  ZONE_CLASS, EXT_TRACKING_ID, ANNOTATION, REV_RCV_COST_CTR, AGGR_USAGE_ID, 
  RATE_CURRENCY_CODE, PRIMARY_UNITS, SECOND_UNITS, THIRD_UNITS, 
  CORRIDOR_PLAN_ID, CU.SPLIT_ROW_NUM, OPEN_ITEM_ID, NVL(NUM_RECORDS,1), 
  AMOUNT_REDUCTION, AMOUNT_REDUCTION_ID, ACCESS_REGION_ORIGIN, 
  ACCESS_REGION_TARGET, CU.SUBSCR_NO, CU.SUBSCR_NO_RESETS, TAX_PKG_COUNT, 
  GEOCODE, AUX_TAX_INFO, TAX_RATE_ACTIVE_DT, TAX_RATE_INACTIVE_DT, 
  ADD_IMPLIED_DECIMAL, RATE_CLASS, POINT_CLASS_ORIGIN, POINT_CLASS_TARGET, 
  COMPONENT_ID, EQUIP_CLASS_CODE 
FROM
 CDR_UNBILLED CU, CDR_DATA CD, FILE_STATUS WHERE CU.ACCOUNT_NO = :B10 AND 
  CU.TRANS_DT < :B9 AND CU.CDR_DATA_PARTITION_KEY = CD.CDR_DATA_PARTITION_KEY 
  AND CU.MSG_ID = CD.MSG_ID AND CU.MSG_ID2 = CD.MSG_ID2 AND CU.MSG_ID_SERV = 
  CD.MSG_ID_SERV AND CU.SPLIT_ROW_NUM = CD.SPLIT_ROW_NUM AND CDR_STATUS IN 
  (:B8, :B7, :B6, :B5) AND NO_BILL = 0 AND ( :B4 = 0 OR ( :B4 = 1 AND 
  CU.RATE_DT < :B3)) AND CD.FILE_ID = FILE_STATUS.FILE_ID AND CD.FILE_ID_SERV 
  = FILE_STATUS.FILE_ID_SERV AND (FILE_STATUS.FILE_STATUS = :B2 OR 
  FILE_STATUS.FILE_STATUS = :B1) ORDER BY CU.ACCOUNT_NO,CU.TRANS_DT


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       50      0.00       0.00          0          0          0           0
Execute     50      0.09       0.05          0          0          0           0
Fetch       50     11.07     351.60      29727     185662          0       20581
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      150     11.16     351.66      29727     185662          0       20581

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 22  (ARBOR)   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
    227  SORT ORDER BY (cr=2049 r=297 w=0 time=4083900 us)
    227   NESTED LOOPS  (cr=2049 r=297 w=0 time=4076019 us)
    227    NESTED LOOPS  (cr=1141 r=297 w=0 time=4060889 us)
    227     TABLE ACCESS CLUSTER CDR_UNBILLED (cr=233 r=3 w=0 time=31308 us)
      1      INDEX UNIQUE SCAN CDR_UNBILLED_XC_CCNT_N_TRNS_JN (cr=3 r=0 w=0 time=52 us)(object id 434351)
    227     PARTITION RANGE ITERATOR PARTITION: KEY KEY (cr=908 r=294 w=0 time=4027092 us)
    227      TABLE ACCESS BY LOCAL INDEX ROWID CDR_DATA PARTITION: KEY KEY (cr=908 r=294 w=0 time=4025227 us)
    227       INDEX UNIQUE SCAN CDR_DATA_PK PARTITION: KEY KEY (cr=681 r=76 w=0 time=817931 us)(object id 75426)
    227    PARTITION LIST ITERATOR PARTITION: KEY KEY (cr=908 r=0 w=0 time=11330 us)
    227     TABLE ACCESS BY LOCAL INDEX ROWID FILE_STATUS PARTITION: KEY KEY (cr=908 r=0 w=0 time=9857 us)
    227      INDEX UNIQUE SCAN FILE_STATUS_PK PARTITION: KEY KEY (cr=681 r=0 w=0 time=6714 us)(object id 257683)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
    227   SORT (ORDER BY)
    227    NESTED LOOPS
    227     NESTED LOOPS
    227      TABLE ACCESS   GOAL: ANALYZED (CLUSTER) OF 'CDR_UNBILLED'
      1       INDEX (UNIQUE SCAN) OF 'CDR_UNBILLED_XC_CCNT_N_TRNS_JN' 
                  (NON-UNIQUE)
    227      PARTITION RANGE (ITERATOR) PARTITION:KEYKEY
    227       TABLE ACCESS   GOAL: ANALYZED (BY LOCAL INDEX ROWID) OF 
                  'CDR_DATA' PARTITION:KEYKEY
    227        INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'CDR_DATA_PK' 
                   (UNIQUE) PARTITION:KEYKEY
    227     PARTITION LIST (ITERATOR) PARTITION:KEYKEY
    227      TABLE ACCESS   GOAL: ANALYZED (BY LOCAL INDEX ROWID) OF 
                 'FILE_STATUS' PARTITION:KEYKEY
    227       INDEX   GOAL: ANALYZED (UNIQUE SCAN) OF 'FILE_STATUS_PK' 
                  (UNIQUE) PARTITION:KEYKEY


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     29727        0.33        342.86
  SQL*Net message to client                      50        0.00          0.00
  SQL*Net more data to client                   578        0.00          0.10
  SQL*Net message from client                    50        0.53          6.95
  latch free                                     18        0.00          0.01
  buffer busy waits                               1        0.00          0.00
********************************************************************************


Thanks
Tom Kyte
January 09, 2008 - 7:15 am UTC

... If I move this table to 8K block size will there be any performance gain in terms of physical read
...

it could be that a smaller blocksize will slightly decrease the IO times since

a) you'll be reading 50% of the data on each physical IO.

b) you might be caching blocks more efficiently- depending on how your data is physically spread out in the table - you might be caching 16k right now to cache a single row. (eg: you get 500 rows per block, but your query invocation only cares about 1 of them, you've put 499 rows into the cache that your query doesn't need - you likely bumped another block out of the cache - one that might have rows you will need in a millisecond - and that block will have to be read again)


it could be that a smaller blocksize will slightly increase the IO times since

a) you'll be making two times the number of physical IO calls, adding the overhead of the IO call twice - and only reading 1/2 of the data each time.



Excellent!!!

Laxman Bage, January 09, 2008 - 7:40 am UTC

Thanks a lot TOM for a rapid response.
Looks like a double-edged sword!!!
Your answer really confused me.
There is one more motivation behind moving the table to 8K block. This table is accessed by online application (only read) as well and I can see huge no. of "Buffer Busy Waits" in V$SEGMENT_STATISTICS against this table and associated indexes.
We have added another 4GB of memory against Buffer Cache 16K to make it 8GB.
Will check the performance in next bill run.
Once again thanks for your valuable time.


Tom Kyte
January 09, 2008 - 8:12 am UTC

my answer should fill you with doubt that there is a magic silver bullet (like using multiple block sizes)

one of three things will happen when you make any change:

a) it'll go faster
b) it'll not go faster nor slower
c) it'll go slower


any of the three outcomes are possible - every single time.


The buffer busy waits are due to the IO typically and in this case seem inconsequential. (I don't care about a huge number of times waited on something, if the aggregate time spent waiting for it is small). I don't see large numbers here at all.

99% of all performance gains are to be had by analyzing how the application accesses data and making that access more efficient (eg: review the algorithms, I care not if this is 'legacy' code - ALL CODE is legacy code the millisecond it goes into production).

Moving files, adding memory, faster cpus - shots in the dark, hoping that something might give you a 1% return on investment....

thks..

Laxman Bage, January 09, 2008 - 10:25 am UTC

Thanks a lot!!!
The project is very new to me; the 16K buffer cache was implemented by earlier project members. According to them, it was used for buffer segregation. However, I am unable to see any benefit from using the 16K cache.
I am convinced that there is a need for an algorithm review, and conveying this message to the legacy team is going to be a big challenge.

This is a great website. Your books "Expert One-on-One" and "Effective Oracle by Design" are very helpful.

Regards,