

Question and Answer

Tom Kyte

Thanks for the question.

Asked: July 14, 2001 - 7:45 pm UTC

Last updated: July 19, 2011 - 9:58 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked


Frankly speaking, I don't see anything particularly extraordinary about hash partitioning.

A good DBA should be able to spread his objects onto different disks to balance I/O contention.

If hash partitioning does not make any logical sense to the SQL queries, i.e. I cannot do partition elimination, then why should I use hash partitioning?

Please explain in what circumstances hash partitioning becomes most appropriate.


We are using partitioning for rolling data over to historical tables. Right now there are no primary keys or indexes on the history tables to which the data is being rolled over, or on the tables from which the data is being rolled over.

But I have learned from my lead that very soon we will be adding indexes and a primary key to the tables from which the data is being rolled over.

What impact is that going to have on the process of rolling the data over to the history table?

I have a request: can you come up with a small workbook for all the common problems faced by developers?


and Tom said...

Well, partitioning brings about many things for you. forget about partition elimination for a moment:

- instead of having a 100 gig tablespace to backup, you have 100, 1 gig tablespaces. (each tablespace spends less time in backup mode, reducing the amount of potential extra redo and the amount of manual recovery you need to do if the instance fails during backup). same with restores.

- instead of having an index on a 100 gig table, you have say 100 indexes on 1 gig tables. You can rebuild each one fast.... (and online with minimal overhead). rebuilding a single index on a 100 gig table -- that would take maybe 100 times longer than each individual rebuild (true, parallel query might reduce that).

- you can analyze each partition quickly, instead of running an analyze on a 100 gig table.

- you can reorg each partition independent of any other partition

- you can easily redistribute the load over many disks, you now have evenly distributed the data into 100 1 gig partitions, move them at will. If you did this manually -- you would have a heck of a time moving the stuff around.

- and so on (every single admin option you do to a partition applies to a hash partition).

- as a bonus, you no longer have to load this data into this file in direct path mode, that data into that file and so on. Just let the software do that work for you. True, us humans can do lots of stuff -- but -- when the software can do it for me.... maybe even better than I can... it's worth a shot.

Now, let's re-introduce partition elimination. Say you are an OLTP system. You would pick a PARTITION key that is used in almost all queries (say a CUSTOMER_ID, an ORDER_ID, something). It is very hard to range partition on that (almost impossible on something like an ORDER_ID) but very trivial to hash partition.

Now, all queries of the form:

select * from t where customer_id = :x;

will use partition elimination. You've just re-introduced all of the availability features of partitions as well (in addition to never having lost the admin features).
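Conceptually, the routing behind that elimination is just a pure function of the key. Here's a small Python sketch of the idea (purely illustrative -- Oracle's actual function is ORA_HASH, not Python's `hash`, and the partition count and key value below are made up):

```python
# Illustrative model of hash partition routing (NOT Oracle internals):
# the partition a row lives in is a deterministic function of its key.
NUM_PARTITIONS = 8

def partition_for(key):
    """Map a partition-key value to one of N hash partitions."""
    return hash(key) % NUM_PARTITIONS

# "where customer_id = :x" can be answered from a single partition,
# because the same key always hashes to the same place:
p = partition_for(12345)
assert 0 <= p < NUM_PARTITIONS
assert partition_for(12345) == p   # stable routing => partition elimination
```

The stability of that mapping is the whole trick: an equality predicate on the hash key identifies exactly one partition, which is why hash partitioning supports elimination for `=` (but, as discussed further down, not for ranges).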

Say you are a data warehouse and need to do a bulk update/delete. Well, you can do that in parallel now since the data is partitioned, if it weren't, you could not do PDML.

Say you do a join on the hash partition key -- we can do a parallel partition-wise join....

and so on....

Also, what about when you use range partitioning but the underlying partitions are still too large -- enter hash partitioning to re-introduce the divide and conquer features of partitions.

In short:

Hash partitioning -- all of the admin features of range partitions and many of the partition elimination/query features as well.

Use it on data you might have used in a hash cluster. A census table full of data keyed by SSN -- we either FULL SCAN it or go in by SSN. Perfect for a hash partitioned table....

As for the second part -- you NEED TO ENSURE RIGHT NOW that the partition key is entirely contained within any unique key. Otherwise, the only way to enforce uniqueness is via a GLOBAL INDEX (only if the partition key is in the local index can it be unique). A global index will kill your rolling window as it will go totally invalid every time you add/split/drop a partition.
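A toy model may help see why the partition key must be inside the unique key. This is a hypothetical sketch, not Oracle's mechanism (the table, `order_id`, and `region` columns are invented): a local check can only see rows in its own partition, so duplicates that hash to different partitions slip through.

```python
# Toy model: table hash-partitioned by region, with a supposedly
# unique order_id that does NOT contain the partition key.
NUM_PARTITIONS = 4
partitions = [set() for _ in range(NUM_PARTITIONS)]

def local_insert(order_id, region):
    """Enforce uniqueness the way a LOCAL index could: per partition only."""
    p = hash(region) % NUM_PARTITIONS   # row is routed by region, not order_id
    if order_id in partitions[p]:
        raise ValueError("duplicate order_id in this partition")
    partitions[p].add(order_id)

local_insert(1001, region=1)
local_insert(1001, region=2)   # same order_id, different partition: not caught!
dupes = sum(1 for part in partitions for oid in part if oid == 1001)
assert dupes == 2   # uniqueness silently violated -> a GLOBAL check is needed
```

When the partition key is part of the unique key, two rows with the same "unique" value are guaranteed to land in the same partition, so the local check suffices.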

A small workbook -- actually I can:

o use bind variables, if you do not you will fail.

that's the smallest workbook. the "real" one is:


but it's 1300 pages long.


  (43 ratings)



hash cluster v/s hash partition

Yogesh Purabiya, August 25, 2002 - 7:29 am UTC

Going through your excellent book, I got a good idea of hash-clustered tables. It is now easy to understand why we have to specify SIZE and HASHKEYS beforehand, and how they are used to pre-allocate the space needed.

But I got confused when I reached hash partitions. There we do not specify SIZE and HASHKEYS. Then, how does Oracle pre-allocate space? Or, if it is not pre-allocated, why is the same facility not available with clusters?

Tom Kyte
August 25, 2002 - 9:14 am UTC

We are not pre-allocating space in hash partitions.

With a hash partition, we are attempting to achieve an EVEN distribution of data across all of the partitions while at the same time supporting partition elimination and other features.

So, we take the key and "hash it" to figure out -- does this row belong in partition 1, 2, 3, 4, ... N. Then, we put it there (or get it from there).

So, here, hashing just allows us to evenly distribute the data.

A HASH cluster is very very different in nature and goal.

Hash Cluster v/s Hash Partition

Yogesh Purabiya, August 26, 2002 - 11:33 pm UTC

Though hash partitioning does equal distribution of the rows as per the hash function, I think that each partition does need to know its size in advance; otherwise hash partitioning may not be as efficient as a hash cluster.

Tom Kyte
August 27, 2002 - 8:23 am UTC

You are confusing IDEAS here.

No, you do NOT need to know the size of each partition in a hash partition -- not any more than in a range or list.

You cannot compare in any way shape or form

o hash partitions
o hash clusters

That would be like comparing an apple to a stove. It just makes ZERO sense.

A hash partition -- method to get data evenly distributed over many partitions. Used to distribute IO. Used to allow for partition elimination (partition wise joins). Used to ease administration. Each individual partition is just a normal old HEAP table.

A hash cluster -- method to force physical organization of individual rows. A row goes into the SLOT it hashed to. Does not distribute IO. Does not do partition elimination. Does not reduce administration. The TABLE is not a HEAP but a data structure.

They are totally and wholly orthogonal concepts, nothing whatsoever to do with each other at all.

Got clarity from your book

Yogesh Purabiya, August 28, 2002 - 11:51 pm UTC

I once again went through your book, the chapter on partitioning. Yes, now I understand the matter. But it was the "Range Partition" section from which I got the clarity. In fact, that time I just tried to compare a normal index to a range partition, and somehow it became clear.

Thanks, Tom, for your reply.

Well, one more thing. While giving THIS review, I could see none of my questions.

Thanks again.


hash partitioning and disk striping

Peter Tran, July 28, 2003 - 5:50 pm UTC


1) If you stripe all disks using Raid 0+1, are there any performance benefits for using hash partition other than the administrative benefits you outlined above?

For ex.

create table t (
x number(38))
partition by HASH(X) (
partition p1 tablespace ts_dat,
partition p2 tablespace ts_dat,
partition p3 tablespace ts_dat,
partition p4 tablespace ts_dat);

alter table t
add (
constraint t_pk
primary key (x)
using index local
(PARTITION p1_idx tablespace peter,
PARTITION p2_idx tablespace peter,
PARTITION P3_idx tablespace peter,
PARTITION p4_idx tablespace peter));

Or is the performance better with no partition?


create table t2 (
x number(38))
tablespace peter;

alter table t2
add (
constraint t2_pk
primary key (x)
using index
tablespace peter);

2) From a performance perspective, are there any benefits from partitioning the data but NOT partitioning the index?


create table t3 (
x number(38))
partition by HASH(X) (
partition p1 tablespace ts_dat,
partition p2 tablespace ts_dat,
partition p3 tablespace ts_dat,
partition p4 tablespace ts_dat);

alter table t3
add (
constraint t3_pk
primary key (x)
using index
tablespace peter);

If yes, why?

Your help is greatly appreciated.


BTW, I have both of your Wrox books and they're great. I'm looking forward to your new one - currently on order.

Tom Kyte
July 28, 2003 - 8:02 pm UTC

1) yes, there are partition wise joins, and there are partition eliminations that can take place AND there is higher availability possible.

2) there can be -- you can even partition the index in a different MANNER than the underlying table itself.

there can be performance DETRIMENTS by improperly partitioning an index -- that implies there can be performance BENEFITS by not partitioning it at all.

The next book is entirely different from "expert one on one Oracle" -- that was a fairly large reference, this next book is not "expert one on one Oracle version 2". Check out my asktom home page and you can see the first three chapters to get a feel of what it is like.

Thought of one more question...

Peter Tran, July 28, 2003 - 5:56 pm UTC

Can hash partitioning ever be bad or abused?

If you have a table containing unique IDs (e.g. SSN), then using hash partitioning sounds like the way to go. Is there any reason not to use it for a table with SSN# as the PK?


Tom Kyte
July 28, 2003 - 8:03 pm UTC

of course it can be bad and abused.

everything can.

that SSN table -- suppose you DON'T actually access it by SSN -- then partitioning by SSN would be a bad idea.

Good point on the SSN question

Peter Tran, July 28, 2003 - 9:58 pm UTC

However, if I don't access by SSN, then any type of index on SSN (partition or not) will be a bad idea.

I'm re-reading chap. 14 (slower) to make sure I grasp all of the concept.

I missed the blurb about "Since global indexes may be partitioned by range only, you must use local indexes if you wish to have a hash or composite partitioned index."

I spent several futile minutes trying to force Oracle to create a global hash index.


Partition key & Index key

David, September 29, 2003 - 4:14 pm UTC


Does the hash partition key have to be part of the index key? E.g. if I have a table hash partitioned by deptno and have created partitioned indexes on the partitioned table, is it necessary to have deptno as part of my index key?

Thanks always!

Tom Kyte
September 29, 2003 - 4:43 pm UTC


the hash key (which should be almost unique -- in order to hash nicely) is independent of other indexes.

A reader, September 29, 2003 - 5:14 pm UTC

Tom, thanks for the prompt answer!

"which should be almost unique" -- Do you mean the hash key should be unique in that partition? Do you mean every partition should have only 1 key value for better performance?

Tom Kyte
September 29, 2003 - 6:43 pm UTC

I mean, you want to pick a column that has MANY different values to hash on

Say you have 8 hash partitions
Say you pick a column with 10 different values (like deptno in emp).
Very good chance you have some empty partitions, some very very full partitions and some skimpy ones.

Now, take same data, hash on empno and you'll find a nice even distribution.

ops$tkyte@ORA920> drop table test1;
Table dropped.
ops$tkyte@ORA920> create table test1
  2  (
  3    empno  number,
  4    deptno number
  5  )
  6  partition by hash(empno) partitions 8
  7  /
Table created.
ops$tkyte@ORA920> drop table test2;
Table dropped.
ops$tkyte@ORA920> create table test2
  2  (
  3    empno  number,
  4    deptno number
  5  )
  6  partition by hash(deptno) partitions 8
  7  /
Table created.
ops$tkyte@ORA920> insert into test1 select object_id, mod(rownum,10) from all_objects;
31641 rows created.
ops$tkyte@ORA920> insert into test2 select object_id, mod(rownum,10) from all_objects;
31641 rows created.
ops$tkyte@ORA920> commit;
Commit complete.
ops$tkyte@ORA920> declare
  2          l_cnt number;
  3  begin
  4          for x in ( select partition_name from user_tab_partitions where table_name = 'TEST1' )
  5          loop
  6                  execute immediate 'select count(*) from test1 partition('||x.partition_name||')'
  7                  into l_cnt;
  8                  dbms_output.put_line( x.partition_name || ' ' || l_cnt );
  9          end loop;
 10  end;
 11  /
SYS_P564 3958
SYS_P565 4047
SYS_P566 3941
SYS_P567 3877
SYS_P568 3901
SYS_P569 4073
SYS_P570 3924
SYS_P563 3920
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> declare
  2          l_cnt number;
  3  begin
  4          for x in ( select partition_name from user_tab_partitions where table_name = 'TEST2' )
  5          loop
  6                  execute immediate 'select count(*) from test2 partition('||x.partition_name||')'
  7                  into l_cnt;
  8                  dbms_output.put_line( x.partition_name || ' ' || l_cnt );
  9          end loop;
 10  end;
 11  /
SYS_P572 0
SYS_P573 3164
SYS_P574 6328
SYS_P575 0
SYS_P576 6328
SYS_P577 6328
SYS_P578 6329
SYS_P571 3164
PL/SQL procedure successfully completed.

One more question?

A reader, September 29, 2003 - 5:29 pm UTC

What is the difference between a hash partitioned table with 5 tablespaces (say with around 20 datafiles) and a normal table with one tablespace that consists of all those 20 datafiles?
Which is a good option?

Tom Kyte
September 29, 2003 - 6:44 pm UTC

night and day -- they are as different as you get.

which is "good" -- depends, depends on your needs, what you are trying to do, your system, your design, everything.

A reader, October 03, 2003 - 4:04 pm UTC

Tom, is it possible to know, for a hash partitioned table, which partition contains which keys? Assume a table is hash partitioned by dept into 6 partitions. Can we find what dept values are stored in each of the 6 partitions?

Thanks in advance!

Tom Kyte
October 04, 2003 - 9:29 am UTC

1* select distinct object_id from big_table_hashed partition(sys_p61)
big_table@ORA920> /


just use a partition query on the partitions and you can see what is in each.

how to know in advance

subbu, October 28, 2003 - 9:34 pm UTC

how can I tell which partition a particular dept_id will go into? Is there a way to do this? This would help me in allocating space in advance, knowing the dept_no in advance.

Tom Kyte
October 28, 2003 - 9:46 pm UTC

hashing an emp table by deptno would be a "bad idea"

hashing is best done on a unique or at least a column with TONS of values

hashing is designed to spread data out evenly.

there should be NO difference in size.

if you think "i'll have 8gig of data and 8 hash partitions", then it should be about 1gig/partition.

So, if you follow that advice, you need not know "what partition" it'll go into. they will be the SAME

If you need that fine degree of control, you will use LIST partitioning in 9i and direct specific values to specific partitions

more details

subbu, October 29, 2003 - 12:37 am UTC

I'm having a table where there are going to be millions of rows.
There is a particular id that i'm generating every time i do a dataload into that table.
Say out of 10 million rows of data, the id I'm generating will have only 100 distinct values, assuming I'm doing only 100 loads.
Is it worth hash partitioning based on those 100 values?

If so (even if not), then I want to know how to find out how the data will go to different partitions based on this value. What I've seen is that all rows having the same id go to the same partition. How to predict which partition will be used, assuming I've 8 partitions?

Tom Kyte
October 29, 2003 - 6:53 am UTC

not really.  100 values won't spread the data out nicely/evenly.  

ops$tkyte@ORA920LAP> create table t ( x int )
  2  partition by hash(x) partitions 8
  3  /

Table created.

ops$tkyte@ORA920LAP> insert into t select rownum from all_objects where rownum <= 100;

100 rows created.

ops$tkyte@ORA920LAP> declare
  2      l_cnt    number;
  3  begin
  4      for x in ( select partition_name from user_tab_partitions where table_name = 'T' )
  5      loop
  6          execute immediate
  7          'select count(*) from t partition('||x.partition_name||')'
  8          into l_cnt;
 10          dbms_output.put_line( to_char(l_cnt,'999') || ' ' || x.partition_name );
 11      end loop;
 12  end;
 13  /
  11 SYS_P313
  15 SYS_P314
  13 SYS_P315
  10 SYS_P316
   9 SYS_P317
  13 SYS_P318
  12 SYS_P319
  17 SYS_P320

PL/SQL procedure successfully completed.

see sys_p320 -- with 17% of the data, sys_p317 with 9% of the data. one of the partitions would have almost 2x the data of another!

you want an ALMOST unique or unique value to hash on -- i keep saying that

with hashing -- all rows with the same key by definition hash to the same partition -- that is the very essence of hash partitioning. 
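The skew is easy to reproduce outside the database. Here's a small Python sketch of what happens when only a handful of distinct key values are hashed into 8 partitions, as in the deptno demo above (illustrative hash function and made-up row counts, not ORA_HASH or real data):

```python
from collections import Counter

NUM_PARTITIONS = 8
rows_per_key = 3200                     # roughly the rows-per-deptno in the demo above

counts = Counter()
for deptno in range(10):                # only 10 distinct key values
    counts[hash(deptno) % NUM_PARTITIONS] += rows_per_key

# Pigeonhole: 10 keys into 8 buckets means at least one bucket gets
# two whole keys' worth of rows while others get one key's worth:
assert max(counts.values()) >= 2 * min(counts.values())
assert sum(counts.values()) == 10 * rows_per_key   # every row lands somewhere
```

With thousands of distinct values per partition, the law of large numbers evens things out; with 10 or 100, whole-key chunks collide and you get the 0 / 3164 / 6328 pattern shown in the transcript.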

Number of partitions

Hector, November 04, 2003 - 8:00 am UTC


What factors should you take into account when deciding how many partitions to have in a hash partitioned table.

We will be creating a warehouse table that has over 100 Million rows where the partition key will not be unique (but very very close).

This table will be joined to many smaller tables based on the same key. These smaller tables would typically contain around 100k rows. Would you recommend partitioning these as well?


Tom Kyte
November 04, 2003 - 8:35 am UTC

well, 100 million rows isn't that big.

but one would need to know how you use this data. In a DW, full scans and hash joins might rule. That you join based on that key might be "bad", it might be "good"

you'd have to look at your queries and ask yourself

"will hash partitioning by this key

a) decrease query performance
b) increase it
c) not affect it at all"

for all 3 are possible.

suppose you have this 100m row table T partitioned by T.PK and a lookup table L

you query:

select *
from t, l
where t.fk = l.pk
and l.column = 'some value'

that T is partitioned by T.PK isn't going to help anything. In fact, if you locally partition the indexes on T and index T(fk) and the optimizer used that index - you could end up doing N times the LIO's (where N = number of hash partitions) in a nested loops join. that is because the T.FK values could be in ANY or ALL of the N hash partitions of the index.

You have to understand the physics behind partitioning -- what it is doing, what you want to accomplish. I've written 2 chapters on this topic -- one in "Expert One on One Oracle" and another newer one in "Effective Oracle By Design". You might be interested in that. But you want to prototype, understand, benchmark to see what will work best for you (understand how partitioning works, understand the implications of partitioning).

partition elimination

A reader, November 04, 2003 - 6:19 pm UTC


Was reading Jonathan Lewis book a couple of days back. It stated that partition elimination does not occur in hash partitions... But from your words it seems that it is possible. Hmmmmm

From the optimizer's view, how does hash partitioning help to speed up queries? Is hash partitioning suitable in DWH or OLTP systems? It seems to me in DWH (unless subpartitioned) it is useless. But I don't see much use in OLTP either (let's forget the administration tasks and focus on the optimizer's view).

Recently I have been in a customer site, they are running a in-house OLTP application using hash partitions and local hash partitioned index. One of the huge tables (around 90 million of rows) is partitioned on a date column, according to customer they partitioned that column because it has quite many distinct values (so far so good, nice for hash partitions) but also they have many queries which does range scan such as

x_date between initial_date and end_date

This does not seem right to me. It seems like they are treating hash partitions as if they were range partitions? Do you agree? I don't think hash partitioning would improve this kind of query since most of the time Oracle would have to scan all partitions anyway.

Also I have doubts about how Oracle uses indexes to access hash partitions. Imagine we have local indexes on these hash partitions; if say I want to query rows between the first of June and the end of June, will Oracle need to scan all index partitions? I.e. how are the row values distributed? Does Oracle treat all partitions as a single data structure and scan as if the index were one B-Tree, or does it treat them as four data structures (4 B-Trees)? How does Oracle know which index partition it has to scan to obtain the appropriate values?

Tom Kyte
November 05, 2003 - 8:00 am UTC

where did he state THAT? page references please.

I think you are either taking something out of context or maybe mis-read it.

in OLTP hash partitioning must be carefully used so as to not DAMAGE performance. For read queries in that environment, it is HIGHLY UNLIKELY that it will increase performance. for modifications -- it may well increase SCALABILITY (not necessarily single user performance, but multi-user performance) since you have N table segments and M index segments (reduced contention) instead of 1.

in DW, hash partitioning can dramatically increase the performance (reduce the runtime) of full scans via partition elimination (rare with hash partitions, more common with range or list) or partition wise joins.

If they have a global range partitioned index on X_DATE, that is dandy. If not, they have likely increased the IO on their system by a factor of N if the index on X_DATE is locally partitioned.

If you have access to my new book "Effective Oracle By Design", I actually use an example almost IDENTICAL to what you describe -- and show you how to show it is "wrong" and ultimately how to "fix it"

Yes, we MUST range scan all index partitions if you have a locally partitioned index on a hash partitioned table and use something other than "="

the table is hash partitioned

A reader, November 05, 2003 - 8:15 am UTC

Your comments:

"If they have a global range partitioned index on X_DATE, that is dandy. If not, they have likely increased the IO on their system by a factor of N if the index on X_DATE is locally partitioned."

What does dandy mean? Good?
The table is hash partitioned on X_DATE; I don't think we can create a global range partitioned index on X_DATE, right? In fact they created a local hash partitioned index. How about creating a plain index -- would that have the same effect as a global range partitioned index?

"If you have access to my new book "Effective Oracle By Design", I actually use an example almost IDENTICAL to what you describe -- and show you how to show it is "wrong" and ultimately how to "fix it" "

The example you are talking about is similar to my customer's case? By the way, I ordered the book 3 weeks ago and am waiting for it to arrive.

"Yes, we MUST range scan all index partitions if you have a locally partitioned index on a hash partitioned table and use something other then "=" "

My common sense tells me Oracle has to range scan all partitions if we are not using the "=" operator. My scientific sense, however, doesn't see how. A small example:

I hash partition on a dummy column into 4 partitions, numbered sequentially. If I query

select *
from test
where dummy between 1 and 4

How does Oracle find the corresponding rows? Does it hash dummy (1, 2, 3,4) then find the index value thus rowid and then access the table partitions?

Or it scans 4 indexes since dummy is randomly distributed in index partitions?

Tom Kyte
November 05, 2003 - 9:41 am UTC

dandy means "good".

you can only create a global range partitioned index on X_DATE if you create a global index on X_DATE. Else, it is locally partitioned with the table.

A "plain" index IS a global index with one partition.

The example in the book is virtually identical to the issue you describe.

"where dummy between 1 and 4" -- dummy has an INFINITE set of values!! 1, 1.1, 1.11, 1.111, 1.1111 and so on.

it scans everything.
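A quick Python sketch of Tom's point (illustrative hash function, not Oracle's; the integer key values are made up): values inside a range scatter across the partitions, so a range predicate cannot rule any partition out the way an equality predicate can.

```python
NUM_PARTITIONS = 4

def partition_for(key):
    return hash(key) % NUM_PARTITIONS

# Even restricting to the integers between 1 and 100, the qualifying
# rows are spread over every partition, so a predicate like
# "where dummy between 1 and 100" must scan them all:
hit = {partition_for(x) for x in range(1, 101)}
assert hit == set(range(NUM_PARTITIONS))

# Only an equality predicate pins the lookup to one partition:
assert len({partition_for(42)}) == 1
```

And since a numeric column has (practically) infinitely many values inside any range -- 1, 1.1, 1.11, ... -- there is no finite set of keys to hash, so elimination is impossible for `<`, `>`, and `between`.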

why local index increase I/O?

A reader, November 06, 2003 - 4:00 am UTC


Why can the I/O be increased by a factor of N on a hash partitioned table if the index is locally partitioned?

Imagine we have 8 hash table partitions and 4 hash index partitions (local of course, since there is no such thing as a hash partitioned global index). Partitioned by a date column, order_date. If we query

order_date = to_date('20031010', 'YYYYMMDD')

we would eliminate unnecessary partitions.

if we query

order_date between to_date('20031010', 'YYYYMMDD')
and to_date('20031110', 'YYYYMMDD')

we would scan all 4 index partitions, but where is the factor of N?

Finally which index would you use, a plain index or a range partitioned index on order_date?

Tom Kyte
November 06, 2003 - 7:50 am UTC

you would not be able to have

8 table partitions
4 hash index partitions (local of course)

local of course implies parity -- 8 to 8. you cannot currently HASH partition an index differently than the underlying table.

so, the assumptions are wrong -- you cannot have structures as you describe.

is there a point if..

reader, November 06, 2003 - 11:18 am UTC

the table being hash partitioned only contains a pk and a LOB?

the LOB is just a pointer, so excluding manageability issues, would a query that looks like "select lob from table where pk_col = :x" have any benefit from partition pruning on the pk?

I suspect that a traditional B-Tree index for this same lookup might be faster, or the difference negligible.
I am sure this is why you say that for a query of a transactional nature partition pruning with hash partitions will provide no benefit.

Tom Kyte
November 06, 2003 - 5:14 pm UTC

well, the index on the pk would either be:

a) global by pk
b) local by hash(pk)

so, partition pruning would happen regardless.

and yes, it benefits for that. say the table was (pk,x,lob) and you hashed by (x) into 8 hash partitions. you locally partitioned the index on pk (not possible but assume you did). then you

select * from t where pk = :x

you have to range scan 8 index partitions.

now, assume it is a global index on pk (with a single partition or multiple partitions) -- then select * from t where pk = :x will range scan 1 partition.
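Tom's probe counting can be sketched as a toy model (not Oracle internals -- the table shape, partition count, and `pk * 37 + 5` column are invented for illustration): a LOCAL index on a column other than the hash key must be probed in every partition, while a global index on that column is probed once.

```python
NUM_PARTITIONS = 8

def part(key):
    return hash(key) % NUM_PARTITIONS

# table (pk, x, lob) hash-partitioned by x
local_index = [dict() for _ in range(NUM_PARTITIONS)]  # one pk-index per partition
global_index = {}                                      # one structure for all rows

for pk in range(1000):
    x = pk * 37 + 5                     # some non-pk partitioning column
    local_index[part(x)][pk] = x        # each partition indexes only its own rows
    global_index[pk] = x

def probes_local(pk):
    """'where pk = :x' via the local index: pk tells us nothing about
    which hash(x) partition holds the row, so probe every segment."""
    touched = 0
    for seg in local_index:
        seg.get(pk)
        touched += 1
    return touched

def probes_global(pk):
    global_index[pk]                    # single structure, single probe
    return 1

assert probes_local(123) == NUM_PARTITIONS   # N index probes
assert probes_global(123) == 1               # one probe
```

That factor-of-N difference in index segments touched is the LIO multiplier Tom warns about earlier in the thread.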

One quick question

Subbu, December 03, 2003 - 3:23 pm UTC

Tom, recently we partitioned a few tables by range on column id. After that, data loading got slower because the partition key is not used in the query involved in processing the data load.
I'm thinking of at least having a partitioned index on the column they are using in the processing of the data load.
I found that I can't do a global hash partition, but if I do local, will it help, as the partitioning is still going to be based on the key I used in the table partition?

Tom Kyte
December 03, 2003 - 6:45 pm UTC

not following you. you go from "global hash partition" (of something) to "do local" (of something) and ask will that help?

I don't know, not understanding what you are thinking really -- examples are worth a trillion words. (small concise ones that demonstrate what you are thinking)

clarifying with eg.

subbu, December 03, 2003 - 8:19 pm UTC

partition p1 values less than (100) ,
partition p2 values less than (200) ,
partition p3 values less than (300) ,
partition p4 values less than (400) )

create index part.part_test_ind on part.P_UNIT(u_id)
tablespace PART_DEF2;

The above script created the range partitioned table and my index on u_id is global.

so queries like

select * from part.P_UNIT
where p_id=90;

will run faster.

Will partitioning the index help with queries like

select * from part.P_UNIT where
u_id < 100 ;

Note that i partitioned the table based on p_id

If i create local index will that help? for the above query.

Can I create a global hash partitioned index on u_id? (I'm on a version where I know that is not possible.)
So what is the best that can be done in this case, where I've created a partitioned table and have 2 queries based on 2 different ids but the partitioning is on only one of them?

How to make both queries to run efficiently.

Tom Kyte
December 04, 2003 - 7:39 am UTC

well, lets look at the statements:

so queries like select * from part.P_UNIT where p_id=90; will run faster.

that is true because we'll FULL SCAN a single partition instead of a FULL SCAN of a full table. partition elimination can help that FULL SCAN go faster.

Will partitioning the index help with queries like select * from part.P_UNIT where
u_id < 100 ; Note that i partitioned the table based on p_id

if you globally partition the index on u_id, you could POSSIBLY decrease the size of it (the height) and that would lend *some* to the performance. You would see the MOST benefit in concurrent inserts/updates/deletes as you'll now have N index structures and you'll have spread the points of contention for these modifications out over N structures.

In short, partitioning (globally) the index by u_id probably won't affect queries (negatively or positively) and may have some effect on modifications (positive effects).

Partitioning (locally) the index will kill query performance as each index range scan will really be 4 range scans (as any of the index partitions could have u_id < 100)

You can only create range partitioned global indexes.

That -- and hashing PRECLUDES partition elimination with <, >!!! you would totally defeat the purpose there!

You have already setup structures that allow the object to be queried efficiently by either of those queries.


kuljeet, December 04, 2003 - 12:50 am UTC


hash partitioning key ---> composite

A reader, January 08, 2004 - 12:22 pm UTC


Is it a good idea to use a composite key as the partitioning key (HASH)?

Tom Kyte
January 08, 2004 - 3:20 pm UTC

if it lends more "uniqueness" to the hash key and you use all the columns in the predicate -- sure.

Index Rebuild is Achilles Heel of Partitioning

Robert, March 10, 2004 - 5:11 pm UTC


Partitioning is advertised as improving administration by breaking the table up into manageable chunks.

In our OLTP environment we have a 74G/277M row table.
I plan on partitioning this table into 16, 32, or 64 hash partitions (based on non-unique account_id) which will improve managing the table...
...but this table has 11 indexes.
Any time we do anything to a partition that would invalidate the indexes we will have to rebuild at least 10 of the indexes globally which could take days.
This is a bitter-sweet situation for implementing partitioning.

Could you comment on this, please?

Thanks for all your help!


Tom Kyte
March 10, 2004 - 5:36 pm UTC

or do the partition operation with "maintain global indexes" perhaps?

partition operations in 9i can maintain those indexes.

(and if it takes days for you to rebuild 10 indexes on such a small amount of data, there is something really wrong....)

and, since you are hashing, just what partition level operations are you planning on doing exactly?

Partition Level Operations

Robert, March 10, 2004 - 5:45 pm UTC


Small amount of data!? ... I thought this was a lot in an OLTP environment... :) ...but anyway...
We have a requirement to periodically archive old data (e.g. older than 18 months) off of this table (insert into an archive table, then delete from the live table). I figured that after archiving data in this manner a few times, we would need to reorg the live table.

What do you think?



PS. Will definitely look into 'maintain global indexes'.

Tom Kyte
March 10, 2004 - 6:38 pm UTC

you would just drop/truncate the oldest partition and if you do this with the option to maintain the indexes, it'll not impose any sort of rebuild.

But I'm not partitioning by date

A reader, March 10, 2004 - 6:45 pm UTC


I am partitioning on 'account_id' but I will be archiving by some date field.... so my deletes will be honeycombed throughout all partitions.
Could you please address this scenario?



Tom Kyte
March 10, 2004 - 6:52 pm UTC

then why are you partitioning by account_id? (and in your current "scheme", you would not be doing any partition operations really, you would not need to).

sounds like you want to partition by date (RANGE, not hash)
have global indexes that are range partitioned themselves
and when you want to purge, just drop a partition but maintain the global indexes.
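A sketch of the layout being suggested, with hypothetical names and boundaries:

```sql
-- Range partition the table by the archiving date so the periodic
-- purge becomes a simple partition drop (names/dates are made up).
create table live_data (
  account_id   number        not null,
  archive_date date          not null,
  payload      varchar2(100)
)
partition by range (archive_date) (
  partition p_2003h1 values less than (to_date('2003-07-01', 'yyyy-mm-dd')),
  partition p_2003h2 values less than (to_date('2004-01-01', 'yyyy-mm-dd')),
  partition p_max    values less than (maxvalue)
);

-- A nonpartitioned index on a partitioned table is a global index:
-- the OLTP "direct hit" on account_id is unaffected by the table's
-- date partitioning.
create index live_data_acct_idx on live_data (account_id);
```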

List why you want to partition (what you most want to gain from it) specifically -- not "better admin" -- WHAT part of admin exactly do you want to make "better"

What to be gained by partitioning

A reader, March 10, 2004 - 7:06 pm UTC


1) We cannot partition by our date field... 70% of the queries against this table are OLTP direct hits on the account_id... therefore for performance we *must* partition by this account_id field... to partition by our archiving date field would cause these online OLTP queries to run poorly... would it not? (I do plan on benchmarking this, however, in order to prove it).
2) Why do I want to partition? .... To be able to better manage the reorging that must eventually occur after much archiving/deleting from this table.

Is this not sound?



Tom Kyte
March 10, 2004 - 7:39 pm UTC

1) why -- you take the index on account_id and RANGE partition that.

you'll have one index range scan followed by lots of table access by global index rowids. no different than now.

2) you won't really have to reorg after a purge -- space is reused in objects. If you want to make the purge efficient, you would partition to purge.

Thanks for your help

Robert, March 11, 2004 - 9:43 am UTC


You've convinced me to try partitioning by archiving date... however I don't think we can RANGE partition the account_id index because the distribution is not even.

1) Would HASH partitioning (or even NOT partitioning) the account_id index work as well?

2) I plan to test and benchmark whatever I do on a subset of the table which is approx 10% the size of the real one. Do you think this 10% test table could give reasonable results?



Tom Kyte
March 11, 2004 - 1:32 pm UTC

1) not until 10g as you cannot hash partition a global index until then.

You can range partition it into a SINGLE partition. No reason not to. You want to optimize the purge, you only need a single index partition.

2) not really, partially. extrapolation is "really hard". You'd want to do perhaps 5, 10, 15% to see how it scales as you add data (does it curve, or is it a straight line)
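The single-partition range trick mentioned in 1) might look like this (hypothetical names; relevant before 10g, where global hash partitioned indexes are unavailable):

```sql
-- A global index "range partitioned" into exactly one partition:
-- partition maintenance on the table can then maintain this single
-- index partition instead of forcing a full rebuild.
create index big_table_acct_idx on big_table (account_id)
  global partition by range (account_id) (
    partition p_all values less than (maxvalue)
  );
```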


Reader, May 08, 2004 - 6:32 pm UTC


This has reference to your book "Effective Oracle by Design" Page 183.

I would appreciate it if you could clarify the following:
(4*2 + 6*3 + 1) = 27
The 1 is for the table access by rowid -- no problem.
But as for the others:
it has to probe 10 partitioned indexes -- so 10*3, or 10*2, or 4*2 + 6*3.
Depending upon the height of the index, it will read the root and then the branch.
So it should take only 1 or 2 LIOs to find out that the row does not exist in a given partition.
Why would it ever have to go to the leaf block?
And if it does go, for what information?

I did the same test :
the results are :
0 recursive calls
30 db block gets
27 consistent gets
0 physical reads
0 redo size
661 bytes sent via SQL*Net to client
311 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
1 rows processed

Why am I getting these 30 db block gets?
Also, how do I find out the height of all the partitioned indexes?


Tom Kyte
May 10, 2004 - 7:28 am UTC

sometimes it can find it is not in the index straight from the root/branch -- but not in general.

the root/branches are used to navigate "left and right" in the b*tree. In my example, I had an index on OWNER -- there were 10 indexes with 1/10 of the data in each. We were looking for a value in there. The value we were looking for would be "in the middle" of the index generally. Only if the value we were looking for was LESS THAN all values or GREATER THAN all values could the root/branches rule it out. So, we searched all of the way down.

That is why it needs to go to the leaf block -- the branch block would say "SMITH thru SMITHE over to this leaf" - there are lots of values between SMITH and SMITHE and if our string was between those -- we'd have to go to the leaf to see if it was there. In general, it must traverse the tree.

You are probably using ASSM perhaps (automatic segment space mgmt) and those are bitmap blocks being read.

analyze each index partition.

hash partitioning

Reader, May 10, 2004 - 4:11 pm UTC

Tom ,

Here are my findings :

* My DB is and it is DMT
* All indexes have been analyzed using: analyze table <table_name> estimate statistics for table for all indexes for all indexed columns

select * from big_hash where owner = 'USA'

Index Range Scan

0 recursive calls
0 db block gets
28 consistent gets
0 physical reads
0 redo size
662 bytes sent via SQL*Net to client
210 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Now I change the parallelism on the index from noparallel to parallel .
1 0 PARTITION HASH* (ALL) :Q636910

Cost=122 Card=1273 Bytes=114570) 000

3 2 INDEX* (RANGE SCAN) OF 'ABIG_ID' (NON-UNIQUE) (Cost=23 :Q636910
Card=1273) 000

221 recursive calls
38 db block gets
78 consistent gets
4 physical reads
652 redo size
661 bytes sent via SQL*Net to client
211 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
21 sorts (memory)
0 sorts (disk)
1 rows processed

1. Why do I see the 38 db block gets and the 652 bytes of redo?

2. Based on your earlier comment, can I conclude that it is not always necessary for Oracle to go to the leaf block of the index if it can get the answer from the root and branch blocks -- only in cases where it will not find a rowid in the table?

3. How do I find out the height of all the partitions of any local index, i.e. something like validate index < >?

Thanks a lot !

Tom Kyte
May 10, 2004 - 8:50 pm UTC

you know, without a test case, it is always sort of "hard to guess"

but, redo is generated to clean out blocks on a select all of the time (if you have expert one on one Oracle -- i cover that in some detail in there).

db block gets are blocks read in current mode. You did parallel query, gotta break that stuff up and divide and conquer. current mode gets would have been used to figure out "where the table is, where the index is, right now, and divide it up".

2) the last part, after the -- doesn't compute to me. I've not seen it short circuit the index search, doesn't mean it couldn't happen, but I've not seen it in practice.

3) you can use blevel in the user_ind_partitions view (add one to it)
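That lookup could be written as (the index name is borrowed from the plan above, otherwise hypothetical; BLEVEL is populated by statistics gathering):

```sql
-- Height of each partition of a local index: BLEVEL counts the
-- branch levels, so height = blevel + 1.
select index_name,
       partition_name,
       blevel + 1 as height
  from user_ind_partitions
 where index_name = 'ABIG_ID'
 order by partition_name;
```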

hash partition

Reader, May 10, 2004 - 11:04 pm UTC


I hash partitioned the big table on owner and created a local index on owner. I have also analyzed it.

select /*+ INDEX(A) */ * from big_hash a where owner = 'USA'

0 recursive calls
0 db block gets
2 consistent gets
0 physical reads
0 redo size
661 bytes sent via SQL*Net to client
311 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Pstart 5
Pstop 5

1. Why only 2 LIOs?
One is for the table and one for the index.
Does this mean that it found the rowid of the record from the root block?
How is this possible?

2.How does the optimizer know that this record is in the fifth partition ?
This whole operation is done using only one LIO .

Thanks again ,

Tom Kyte
May 11, 2004 - 8:15 am UTC

1) why not? index height might be 1. depends on how many rows you have in that partition. Hashing on something that is so "dense" (few owners in general, lots of records) will lead to skewed partitions. hashing is best done on a (nearly) unique column.

2) you hashed the table on owner. hence Oracle knows exactly what index/table partition can contain USA -- only one of them can.


Ramesh, January 05, 2006 - 3:44 pm UTC

Do you think hash partitioning an IOT (WH) with 10+ million rows improves performance? (Just looking at performance only.)
The query always uses the primary key. I am thinking that the index height would be less than a regular table's, thus reducing some IO.
Please advise.

Tom Kyte
January 05, 2006 - 4:48 pm UTC

If the query is ALWAYS

select * from table where primary_key = :bind;

then a single table hash cluster could likely be used to increase query retrieval performance of that particular query.

It would not even use an index. It would hash the bind variable, determine the database block address(DBA) from that, read that block and find the data.

(if you have access to any of my books - I cover hash clusters in more detail there, also covered in the concepts guide and admin guide)
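A minimal sketch of such a single table hash cluster (all names and sizing numbers here are hypothetical placeholders -- HASHKEYS and SIZE must be derived from your real row counts and row sizes):

```sql
-- Cluster keyed on the primary key value; SINGLE TABLE tells Oracle
-- only one table will ever live in this cluster, allowing a more
-- optimal block layout.
create cluster lookup_cluster ( id number )
  size 100
  single table
  hashkeys 10000000;

create table lookup_table (
  id   number primary key,
  data varchar2(100)
) cluster lookup_cluster (id);

-- "select * from lookup_table where id = :bind" can now hash :bind
-- straight to a database block address -- no index traversal at all.
```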

Ramesh, January 05, 2006 - 5:34 pm UTC

Do you think a hash cluster is advisable if we do not know the number of keys?

Tom Kyte
January 05, 2006 - 6:49 pm UTC

not unless you have a reasonable upper bound.

Hash Partitioning

Dav, April 24, 2007 - 3:33 pm UTC

We have a table with a batch id and a book id. Some combinations can have 6 mil records, while some of the other book ids will have fewer records.

for ex.

Batch ID    Book ID    Number of records
101         1001       6 mil records
102         1002       150 rows
103         1003       700 rows
104         1001       6 mil records

As you can see, book id 1001 has the maximum number of records and the other book ids have fewer records.

We hash partitioned this table on the Batch ID column with 50 partitions. Some of the partitions have more than one batch ID in them. One partition has batch ids 101 and 104, which are both heavy volume batch ids.

Is there any way to make sure that they go into different partitions? We're on

All the other subsystems that pull our data use batch_id in their queries.

Tom Kyte
April 24, 2007 - 3:45 pm UTC

with hash partitions, you want to hash on something that is almost unique (or at least has lots of values) and into powers of 2

you want 2, 4, 8, 16, 32, 64, 128, .... partitions. 50 is not going to work (you'll always get a bell shape with the partitions at the front and end having the least and the ones in the middle having the most).

hashing on batch id is wrong
hashing into 50 partitions is wrong

use 32 or 64 partitions and find something "more unique" to hash on OR use range partitioning to break up by BATCH and hash subpartitions (in powers of 2) to spread out each batch if you want.
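A sketch of that composite approach (9i-style range/hash composite partitioning; all names and boundaries here are hypothetical):

```sql
-- Range partition by BATCH_ID so each batch is isolated, then hash
-- each batch across a power-of-2 number of subpartitions on the
-- (far more selective) ACCOUNT_ID.
create table batch_data (
  batch_id   number not null,
  book_id    number not null,
  account_id number not null,
  payload    varchar2(100)
)
partition by range (batch_id)
subpartition by hash (account_id) subpartitions 32 (
  partition p_101 values less than (102),
  partition p_102 values less than (103),
  partition p_max values less than (maxvalue)
);
```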

Data load into Hash partitioned table

Karteek, May 02, 2007 - 1:18 pm UTC


Can you provide some insight on data load into a hash partitioned table.

Currently we have a LIST partitioned table (on product). Into this table we load the data from flat files using SQL*Loader. When I tested the same data load into a hash partitioned table, it took more time. I guess this is because, for each record, Oracle needs to apply the hashing algorithm and decide which partition the record needs to be sent to.
**Is there any way to improve the load speed in this case? We use a conventional path load.

Also, ours is a RAC with 4 instances. What we currently do is distribute all the files into 4 groups and load from all 4 instances (we maintain data files by product, so just by looking at a file we know which partition the entire file's records will go into). We chose this file distribution method in order to avoid partition level contention -- if two files of the same product were loaded by 2 instances simultaneously (obviously into the same partition), partition level contention would make the load take more time. Currently the load time is truly impressive.

But now the situation has changed: instead of a list partition on product, we want to go for a hash partition on account_id.

Even loading from a single instance is taking more time... and how can we do a parallel data load from all 4 instances? Since it is hashed on account_id, we would not know which partition a record will be inserted into. Any help that you provide would be truly appreciated.

Many Thanks,
Tom Kyte
May 02, 2007 - 6:14 pm UTC

maybe you can run more than one loader at a time, hashing will likely consume more cpu than a simple list lookup.

partitioning can be used for one of three things:

a) ease of administration
b) query performance (rare, mostly in warehouse)
c) increased availability.

so, what were you hoping to achieve with partitioning from that list and why did you change from an approach that worked great to one that - given your situation - almost certainly won't work as well?

which hash partition will data go in

Saurabh, February 29, 2008 - 6:07 am UTC

Hi Tom,

I was trying to find out hashing algorithm used by Oracle to allocate rows to a particular hash partition. Then I stumbled upon ORA_HASH function.

I observed below behavior

Consider, these 4 variables (part_col, no_of_part, bucket_size, prev_2_power)

part_col be candidate partitioning column

no_of_part is the number of partitions we are considering

bucket_size is derived from no_of_part as follows:

if no_of_part is a power of 2 (2, 4, 8, 16, ...) then
    bucket_size = no_of_part - 1

else if no_of_part is not a power of 2 then
    bucket_size = "next higher power of 2" - 1
    e.g., if you have 24 no_of_part then (32 - 1) = 31

prev_2_power is no_of_part if it is a power of 2, else its previous power of 2. For example, for 24 partitions, 16 would be prev_2_power.

Partition Number =
  case when ora_hash(part_col, bucket_size) + 1 > no_of_part
       then ora_hash(part_col, bucket_size) + 1 - prev_2_power
       else ora_hash(part_col, bucket_size) + 1
  end
  /* +1 since buckets start with 0 but partitions start with 1 */

I tested the above algorithm for a couple of cases and it worked correctly. It also explains why there is skew if we don't use a power-of-2 number of partitions: Oracle buckets the data into the next power-of-2 number of buckets anyway, and then just merges them using the above logic to make the desired number of partitions.

The above expression can be used along with a GROUP BY to evaluate partitioning skew before actually creating the partitioned table.

I am not sure whether this is the actual logic, or whether it is documented somewhere, but it seems to hold good.
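The skew check described above might be written like this (a sketch only -- the table and column names are hypothetical, and the constants assume the 24-partition example: bucket_size = 31, prev_2_power = 16):

```sql
-- Preview how many rows each hash partition would receive, using the
-- reader's formula, before actually creating the partitioned table.
select case
         when ora_hash(account_id, 31) + 1 > 24
         then ora_hash(account_id, 31) + 1 - 16
         else ora_hash(account_id, 31) + 1
       end      as part_no,
       count(*) as row_count
  from candidate_table
 group by case
            when ora_hash(account_id, 31) + 1 > 24
            then ora_hash(account_id, 31) + 1 - 16
            else ora_hash(account_id, 31) + 1
          end
 order by part_no;
```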

hash partition

Reader, July 01, 2011 - 11:03 am UTC

Is hash partitioning the index on a date column (that has unique values inserted every time there is an insert/update) a good idea to minimize/eliminate gc buffer busy waits in a two node RAC ( Thanks.
Tom Kyte
July 01, 2011 - 12:00 pm UTC

not if you are doing range scans on that date. If you are doing equality lookups - great, if you are doing >, < type of lookups - not so great, we cannot really range scan effectively on a global hash partitioned index.

hash partition

reader, July 01, 2011 - 5:26 pm UTC

Thank you for the response.
Inserts on that table take about 10 minutes to complete during the busy hours - we see 40% of the db time on gc buffer busy waits. What can we do to alleviate it? A reverse key index also seems bad, as we do a lot of range scans on the date column. Thanks.
Tom Kyte
July 01, 2011 - 6:17 pm UTC

there is not too much you can do - unless you can isolate the inserts to one node. If you need to range scan - all inserts will be taking place on the hot right hand side.

Even if you isolate to one node, your gc buffer busy waits will turn into slightly faster buffer busy waits.

If you were to reduce concurrency (by lowering your connection pool sizes) - I'd guess that you would see reduced contention and greater throughput (guess based on assumption that you have your connection pool sizes set really large). Most people set their connection pools way way way too high (higher than say 4 times the number of cpus). That introduces contention and run queues and not much else. That is something to seriously consider - instead of using the database as a queueing mechanism - consider using the middle tier (which is what they were designed to do in the first place, believe it or not).

4 times the number of cpus?

Uwe, July 05, 2011 - 8:33 am UTC

sorry for going slightly off-topic, but you stated:
> Most people set their connection pools way way way too high (higher than say 4 times the number of cpus)

Where did you get this number from? Could you suggest any further reading on how to size a connection pool? It's just that all of the production configurations I've seen so far are way higher than 4 times the number of cpus.

Tom Kyte
July 05, 2011 - 12:56 pm UTC

how many things can your CPU - your 'cores' do at the same time?

If you have 4 cores and a connection pool of 100 - what is something that could happen? All 100 try to become active at the same exact time. Can your machine do that? If it can - please send me the specs for that machine - I'd like to have a 4 core machine that can deal with 100 concurrent requests - with a predictable response time (and deal with the fact that 100 more requests are coming right away).

You need something to QUEUE those transactions. You can queue at the OS level on the database server (big run queues). But database cpu is expensive, right?

You can queue at the database level (shared server) - but then that is the expensive cpu again AND shared server has a much much longer code path (it eats cpu itself) than dedicated server.

Or you can queue at the connection pool - where it takes almost zero cpu to do...

but developers don't like to queue at their level - they think "if we just do more - we'll get more done". But that just isn't true - all you do is run slower with more contention...

think about it - it is all about math, just divide (number of requests by cpu)...

it is all about math

Uwe, July 06, 2011 - 8:46 am UTC

probably I haven't made myself clear enough. I'm aware of the facts and principles you mention and I also know how to "do math". My two points are:
1. To do the math, I need numbers. Actually, I'm looking for guidelines to find those numbers (like, how does one find out how many concurrent requests a cpu can handle).
2. I didn't want to say "the facts are against you" but "I find that many connection pools SEEM oversized to me, and I need to find information on how to scientifically PROVE they are oversized."

Kind regards,
Tom Kyte
July 08, 2011 - 11:25 am UTC

1) a cpu can handle ONE CONCURRENT REQUEST. Think about it, if you have something that takes one cpu second to execute, and you have one second, you can AT MOST do one of those things per second (probably less if you have two people trying to do it at the same time - as the OS will take a large portion of the time - probably MUCH less if you have 3, 4, 5, ... N users trying to do it at the same time)

A cpu is doing one thing at a time. How much cpu does your thing NEED? That regulates how many APPARENTLY concurrent things you can do.

2) it is math, please just close your eyes - pretend a cpu is a person. Pretend the person is asked to do things one at a time - one after the other. Now, pretend that same person is asked to do 500 things AT THE SAME TIME and must multi-task between them.

You can prove they are oversized by running a simple test on your own machines. Write a program that does some transaction for you - have it be able to measure how many transactions it accomplishes in some period of time.

then, run one copy of the program for some period of time.
then, run 5 copies, then 10, then 20, then ..... and so on.

then report on transactions accomplished. what you'll find is that as the number of programs you run nears the number of cpus you have - the numbers will increase (the transactions you accomplish). As you near and exceed the number of cpus, the numbers will probably flatline. As you far exceed the number of cpus, you'll see it go way way way down.

If you want to see it in 'real life', you can watch us do this live in Frankfurt this August 2011:

We do exactly that simulation I described - you can watch the transactions per second massively decrease as we increase the size of connection pools.

the other issue is - if your connection pool is 300 large - but you only have 10 active at any time - ask yourself "how many resources like memory, OS time slicing, etc are being wasted by having 290 processes just sitting there"

The only thing that can happen with an over sized connection pool is that they all might become active at the same time - if they do, and you have far less than 300 cpus - you'll be in trouble - won't you....

Connection pools are oversized because connections are held too long

Stew Ashton, July 10, 2011 - 5:22 am UTC

Here's one reason connection pools are oversized: connections tend to be held without being used.

Suppose I hold a connection for one second and actually give the database work for one hundredth of a second: my Application Server could in theory have 100 connections per CPU.

A good DBA comes along and says "all these connections are hardly being used, downsize your connection pool." So the J2EE admin reduces the connection pool to 5, and my throughput goes down to 5 transactions per second...

You may ask why I am holding a connection without using it? The answer is: I, the application developer, am not holding the connection, I am closing it. However, the Application Server is not really putting the connection back into the pool, it is holding it for me in case I ask for it again. The connection only goes back in the pool when I have done all my other work and exited.

Why on earth does the Application Server do this? Because the connection is configured as "shareable".

Here is an excerpt from an IBM article on the subject:
"When the application closes a shareable connection, the connection is not truly closed, nor is it returned to the Free pool. Rather, it remains in the Shared connection pool, ready for another request.." "Shareable" is the default configuration for Websphere AND Weblogic!

I learned this last year the hard way.

The moral of the story: Application Server configuration is just as important as coding practice, for good or ill.

Real Life Performance

Uwe, July 12, 2011 - 8:39 am UTC

that seminar you mention looks pretty interesting. I'll be there, hoping to gain some more insights on that topic. And some figures to argue over with AppServer admins.
Kind regards,

optimizer not using partition

Rakesh, July 19, 2011 - 8:45 am UTC



This table uses list partitioning on CIRCLE_ID. Below is the execution plan of the above statement.

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | UPDATE STATEMENT | | | | 3 (100)| | | |
| 1 | UPDATE | ABC_XYZ | | | | | | |
|* 2 | TABLE ACCESS BY GLOBAL INDEX ROWID| ABC_XYZ | 1 | 49 | 3 (0)| 00:00:01 | ROW L | ROW L |
|* 3 | INDEX UNIQUE SCAN | UK_C0012643_UK | 1 | | 2 (0)| 00:00:01 | | |

1) As you can see, it is not using the partition. I think that, since it is a bind variable, the optimizer is not able to use the partition.

If I hard code the circle_id, it is using the partition.


| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | UPDATE STATEMENT | | 1 | 56 | 3 (0)| 00:00:01 | | |
| 1 | UPDATE | ABC_XYZ | | | | | | |
|* 2 | TABLE ACCESS BY GLOBAL INDEX ROWID| ABC_XYZ | 1 | 56 | 3 (0)| 00:00:01 | 4 | 4 |
|* 3 | INDEX UNIQUE SCAN | UK_C0012643_UK | 1 | | 2 (0)| 00:00:01 | | |

How do I overcome this? How can I force the optimizer to use the partition?

Tom Kyte
July 19, 2011 - 9:58 am UTC

1) huh? It is using a global unique index, to get a global index rowid (a rowid that points to a single partition), to get a single row.

What do you mean "it is not using partition". It is using a global index that apparently has ONE single partition (eg: the index is not partitioned) to get a global index rowid to get a SINGLE row.

It will figure out at run time what SINGLE partition it'll hit (it says that with the row/row in pstart/pstop).

It is "using the partition" in both cases. The performance of those two plans is *identical*

But please continue to use binds (actually, i see sys_b_0 in there - cursor sharing force/similar :( makes me so so so very sad...)

PArtitioned bitmap index

Rakesh, July 20, 2011 - 6:20 am UTC

Hi Tom,

Please help me with this query.

SELECT * FROM ( SELECT unbar_data.unbar_data_request_id AS unbarreqid,unbar_data.sim_number AS simnumber,
unbar_data.msisdn_number AS msisdnnumber,
unbar_data.retailer_org_id AS retailerorgid,
unbar_data.dist_org_id AS distorgid,
unbar_data.api_code AS apicode,
unbar_data.initiated_by_number AS initiatedby,
COALESCE (zon.zone_description,
(SELECT circle_name FROM ABCPROD.ABC_cmn_mst_circle WHERE circle_id = unbar_data.circle_id)) FROM
ABCPROD.ABC_crs_trn_cad_unbar_data unbar_data JOIN ABCPROD.ABC_invt_mst_item_msisdn_dtl msisdn ON
(msisdn.msisdn_number=unbar_data.msisdn_number) JOIN ABCPROD.ABC_invt_sim_msisdn_asso asso ON (msisdn.msisdn_dtl_id=asso.msisdn_id) JOIN
ABCPROD.ABC_invt_mst_sim_dtl simdtl ON (simdtl.sim_dtl_id = asso.sim_dtl_id AND simdtl.sim_no || simdtl.sim_no_checksum = unbar_data.sim_number ) JOIN
ABCPROD.ABC_cmn_mst_ent_org org ON (org.ent_org_id = unbar_data.dist_org_id ) LEFT JOIN ABCPROD.ABC_cmn_mst_zone zon ON
(zon.zone_id = org.zone_id AND zon.circle_id = org.circle_id ) WHERE unbar_data.circle_id =:1 AND msisdn.circle_id = unbar_data.circle_id AND
asso.circle_id = msisdn.circle_id AND org.circle_id = asso.circle_id AND unbar_status = :"SYS_B_0" AND hac1_status = :"SYS_B_1"
AND ROWNUM <= :"SYS_B_2" AND retry_count <= :"SYS_B_3" AND asso.is_paired = :"SYS_B_4" AND msisdn.is_active = :"SYS_B_5" AND
unbar_data.created_date > (SYSDATE - :"SYS_B_6") ORDER BY created_date) A;

Plan hash value: 2146329685

| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1 | 170 | 9137 (3)| 00:01:07 | | |
| 1 | TABLE ACCESS BY INDEX ROWID | ABC_CMN_MST_CIRCLE | 1 | 13 | 1 (0)| 00:00:01 | | |
|* 2 | INDEX UNIQUE SCAN | PKCIRCLEID | 1 | | 0 (0)| 00:00:01 | | |
| 3 | VIEW | | 1 | 170 | 9137 (3)| 00:01:07 | | |
| 4 | SORT ORDER BY | | 1 | 221 | 9137 (3)| 00:01:07 | | |
|* 5 | COUNT STOPKEY | | | | | | | |
| 6 | NESTED LOOPS OUTER | | 1 | 221 | 9136 (3)| 00:01:07 | | |
| 7 | NESTED LOOPS | | 1 | 191 | 9135 (3)| 00:01:07 | | |
| 8 | NESTED LOOPS | | 1 | 176 | 9133 (3)| 00:01:07 | | |
|* 9 | HASH JOIN | | 1500 | 188K| 6131 (4)| 00:00:45 | | |
|* 10 | HASH JOIN | | 1424 | 141K| 2863 (3)| 00:00:21 | | |
| 11 | TABLE ACCESS BY INDEX ROWID | ABC_CRS_TRN_CAD_UNBAR_DATA | 1424 | 115K| 260 (0)| 00:00:02 | | |
|* 12 | INDEX RANGE SCAN | IDX_CRS_UNBAR_US_HS_CI_CD_RC | 256 | | 12 (0)| 00:00:01 | | |
| 13 | PARTITION LIST SINGLE | | 2911K| 52M| 2579 (3)| 00:00:19 | KEY | KEY |
|* 14 | TABLE ACCESS FULL | ABC_INVT_MST_ITEM_MSISDN_DTL | 2911K| 52M| 2579 (3)| 00:00:19 | KEY | KEY |
| 15 | PARTITION LIST SINGLE | | 5337K| 137M| 3227 (4)| 00:00:24 | KEY | KEY |
|* 16 | TABLE ACCESS FULL | ABC_INVT_SIM_MSISDN_ASSO | 5337K| 137M| 3227 (4)| 00:00:24 | KEY | KEY |
|* 18 | INDEX UNIQUE SCAN | PK_SIM_DTL_ID | 1 | | 1 (0)| 00:00:01 | | |
|* 19 | TABLE ACCESS BY INDEX ROWID | ABC_CMN_MST_ENT_ORG | 1 | 15 | 2 (0)| 00:00:01 | | |
|* 20 | INDEX UNIQUE SCAN | PK_C0012205_PK | 1 | | 1 (0)| 00:00:01 | | |
|* 21 | TABLE ACCESS BY INDEX ROWID | ABC_CMN_MST_ZONE | 1 | 30 | 1 (0)| 00:00:01 | | |
|* 22 | INDEX UNIQUE SCAN | PK_ZONE_ID | 1 | | 0 (0)| 00:00:01 | | |

Predicate Information (identified by operation id):

2 - access("CIRCLE_ID"=:B1)
5 - filter(ROWNUM<=TO_NUMBER(:SYS_B_2))
14 - filter("MSISDN"."IS_ACTIVE"=:SYS_B_5)
16 - filter("ASSO"."IS_PAIRED"=:SYS_B_4)
18 - access("SIMDTL"."SIM_DTL_ID"="ASSO"."SIM_DTL_ID")
19 - filter("ORG"."CIRCLE_ID"=TO_NUMBER(:1))
20 - access("ORG"."ENT_ORG_ID"="UNBAR_DATA"."DIST_ORG_ID")
21 - filter("ZON"."CIRCLE_ID"(+)=TO_NUMBER(:1))
22 - access("ZON"."ZONE_ID"(+)="ORG"."ZONE_ID")

----> ABC_INVT_MST_ITEM_MSISDN_DTL (plan line 14) and ABC_INVT_SIM_MSISDN_ASSO (plan line 16) are partitioned, but the full table scans are increasing the cost.
The IS_ACTIVE column on the ABC_INVT_MST_ITEM_MSISDN_DTL table has only two values ('Y' and 'N'), and the same is true of the IS_PAIRED column. I am planning to create
local bitmap indexes on the partitioned tables, but it is an OLTP environment. So can you suggest any query change or something else for this? This is causing a lot of issues in the database.

Hash Partitioning on table with high insert rate

S Chen, June 03, 2014 - 7:19 am UTC

Hi Tom,

I have a table as defined below:

SYS@VDDS1 SQL> desc vdds.stb_config_string
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 STB_CONFIG_ID                             NOT NULL NUMBER
 PARAM_KEY                                 NOT NULL VARCHAR2(1000)
 PARAM_VALUE                                        VARCHAR2(2000)
 MOD_DATE                                  NOT NULL TIMESTAMP(6)

where ID is the PK, a value generated from a sequence. The insert rate is very high, but we cannot batch (we have to insert one row at a time), and most of the queries use STB_CONFIG_ID in the where clause. For example:

delete stb_config_string where stb_config_id=:X;

During the performance test, there is high waiting on "gc buffer busy". Will hash partitioning the table using the PK improve the performance?

