
Question and Answer

Connor McDonald

Thanks for the question, Aakash.

Asked: December 09, 2016 - 7:26 pm UTC

Last updated: January 26, 2017 - 11:36 pm UTC

Version: 11.3.3

Viewed 1000+ times

You Asked

Hi Tom,

This is the first time I'm posting a question to you.
I have been a huge fan of your answers (and your humor, sometimes :)). Keep it up!

Q) I have a schema containing around 50 tables; 5 tables hold around 1.5 billion rows and the remaining 45 hold around 0.5 billion rows. Each table is list partitioned, but there are only 4 values in that list. Further, I have subpartitioned by hash into 8 subpartitions. In addition, there are local indexes on these tables to support the SELECT statements we have in our application.

My worry is that I don't want the application to fail. Am I going about this the correct way?

Note: I can't change the number of tables, i.e. I can't further normalize; there is a GoldenGate replication from the source, and that is the restriction.
Please suggest.

and Connor said...

My question would be: why are you partitioning? That is, what is the problem you are using partitioning to solve? And is the strategy you've adopted solving that problem?

There is no such thing as the "right" way to partition. There is only the "right way for *your* needs and situation".

The only thing I will say is that, assuming your 4 values aren't going to change, your partitions will obviously continue to get larger as you grow. Whether that is a risk or not, only you can say.
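For reference, the layout described in the question (list partitioning on a 4-value column, each partition hash-subpartitioned 8 ways, with local indexes) would look something like this. The table, columns, and list values are hypothetical stand-ins, not the questioner's actual schema:

```sql
-- Hypothetical sketch of the questioner's layout: list partitioning on a
-- 4-value column, each partition hash-subpartitioned 8 ways.
create table big_table (
  region_code  varchar2(2)   not null,   -- the 4-value list key (assumed)
  id           number        not null,
  payload      varchar2(100)
)
partition by list (region_code)
subpartition by hash (id) subpartitions 8
(
  partition p_na values ('NA'),
  partition p_eu values ('EU'),
  partition p_ap values ('AP'),
  partition p_sa values ('SA')
);

-- Local index to support the application's SELECTs, as described
create index big_table_ix on big_table (id) local;
```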


Rating

  (7 ratings)


Comments

A very good question about partitioning

A reader, December 11, 2016 - 7:46 am UTC

Hi,
I read the Jonathan Lewis document below and wonder if it is still valid in 11g and 12c.

https://www.google.fr/url?sa=t&source=web&rct=j&url=@@1@@

The document discusses the pros and cons of keeping the NOVALIDATE option on some constraints to speed up exchanging or dropping partitions. While that makes those operations faster, it risks perturbing execution plans.

1. What changes would you make to the document if it were rewritten for 11g and above?

2. Would RELY NOVALIDATE be the solution for a perfect partition exchange? (In the 9i version, Jonathan suggests NOVALIDATE alone.)

3. What is your advice for avoiding such problems when exchanging partitions, especially on a huge partitioned and subpartitioned table?

Thanks


Connor McDonald
December 12, 2016 - 1:16 am UTC

Can you give us a valid link so we can see what you're referring to?

very useful guidance

Aakash Panjwani, December 12, 2016 - 1:35 am UTC

The answers and guidance have always been useful, as with this question. Thankful to the AskTom community and everyone :)

very useful guidance

Aakash Panjwani, December 12, 2016 - 1:42 am UTC

Thanks, Connor, for asking me to look into the problem. It definitely helps :)

Problem: 2 huge databases are continuously in sync. The source keeps inserting/updating/deleting rows into the target (using a replication engine).

On the target, an application keeps executing queries per the application logic.

There are going to be only 4 list partitions; the list is never going to change :(

My fear is/was that the partitions will keep growing, and the only thing I can do about it is gather stats.

The date columns are blinding the Oracle engine, because they are stored in number format by the source. Hence I cannot partition/subpartition on the date columns.

The only strategy that I could see is List + Hash composite.

Please suggest.
Connor McDonald
December 12, 2016 - 1:55 am UTC

What is critical is choosing the partitioning strategy that best suits all (or at least some) of your business requirements.

For example, let's say I have a SALES table partitioned by SALES_DATE.

If the most important query in my application is:

select * from sales where customer_id = ...

then I'm going to need a GLOBAL index on customer_id.

But if the most important query in my application is:

select * from sales where customer_id = ... and sales_date > "some recent date"

then I'll probably opt for a LOCAL index on customer_id, because even though it might be slightly less efficient than the global index (i.e., I might need to probe a few partitions), it's probably a reasonable trade-off, because local indexes better suit maintenance operations.

etc...

So you need to identify: what are the most important business functions, and how can I exploit my partitions to best serve them?
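The two cases above can be sketched as follows. The SALES table and index names are illustrative, not from a real schema:

```sql
-- Illustrative SALES table, range-partitioned by SALES_DATE
create table sales (
  customer_id  number       not null,
  sales_date   date         not null,
  amount       number(10,2)
)
partition by range (sales_date) (
  partition p2016 values less than (date '2017-01-01'),
  partition p2017 values less than (date '2018-01-01')
);

-- Case 1: lookups by customer alone -> a GLOBAL index, one structure
-- spanning all partitions, so a single probe finds the customer.
create index sales_cust_g on sales (customer_id) global;

-- Case 2: lookups by customer AND a recent date -> a LOCAL index; each
-- partition gets its own index segment, which survives partition
-- maintenance (drop/exchange/truncate) without invalidating the index.
-- (You would pick one of the two for the column, not both.)
-- create index sales_cust_l on sales (customer_id) local;
```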

Re. You asked for proper link

A reader, December 12, 2016 - 6:27 am UTC

Hi,

The link points to a document by your colleague Jonathan Lewis, titled "Partitions in the Real World" (6 pages). I copied the link again below; if it doesn't work, please Google the title and you'll find it. Otherwise I could mail it.

https://www.google.fr/url?sa=t&source=web&rct=j&url=@@1@@

Thanks






Connor McDonald
December 13, 2016 - 3:11 am UTC

For the parent/child example, this is solved by using reference partitioning.

For the other examples, you can simply benchmark on your own system. For me, exchanging 4 million rows is still sub-second (and that is on my laptop):

<code>

SQL> drop table parent cascade constraints purge;

Table dropped.

SQL> drop table parent_ex cascade constraints purge;

Table dropped.

SQL>
SQL> create table parent (
2 id number(12,6),
3 v1 varchar2(10),
4 padding varchar2(100)
5 )
6 partition by range(id) (
7 partition p1000 values less than (1000),
8 partition p3000 values less than (3000),
9 partition p5000 values less than (5000)
10 );

Table created.

SQL>
SQL> create table parent_ex (
2 id number(12,6),
3 v1 varchar2(10),
4 padding varchar2(100)
5 );

Table created.

SQL>
SQL> insert /*+ append */
2 into parent_ex
3 select
4 3000 + trunc((rownum-1)/5000,6),
5 to_char(rownum),
6 rpad('x',100)
7 from
8 ( select 1 from dual connect by level <= 2000 ),
9 ( select 1 from dual connect by level <= 2000 );

4000000 rows created.

SQL>
SQL> alter table parent
2 exchange partition p5000 with table parent_ex;

Table altered.

SQL> alter table parent
2 exchange partition p5000 with table parent_ex;

Table altered.

SQL> set timing on
SQL> alter table parent
2 exchange partition p5000 with table parent_ex
3 without validation;

Table altered.

Elapsed: 00:00:00.02
SQL>
SQL> alter table parent
2 exchange partition p5000 with table parent_ex
3 without validation;

Table altered.

Elapsed: 00:00:00.00
SQL>
SQL> alter table parent
2 add constraint p_pk primary key(id)
3 using index (create index p_pk on parent(id) nologging local);

Table altered.

Elapsed: 00:00:00.10
SQL>
SQL> alter table parent_ex
2 add constraint px_pk primary key(id)
3 using index (create index px_pk on parent_ex(id) nologging);

Table altered.

Elapsed: 00:00:25.98
SQL>
SQL> alter table parent
2 exchange partition p5000 with table parent_ex
3 including indexes without validation;

Table altered.

Elapsed: 00:00:00.90
SQL>
SQL>
SQL> alter table parent
2 exchange partition p5000 with table parent_ex
3 including indexes without validation;

Table altered.

Elapsed: 00:00:00.02
SQL> alter table parent
2 exchange partition p5000 with table parent_ex
3 including indexes without validation;

Table altered.

Elapsed: 00:00:00.94
SQL>
</code>


Partitioning Strategy

Aakash Panjwani, December 12, 2016 - 4:32 pm UTC

All the date columns are stored in number format by the source, so I cannot go for a range partition on any table.

If the hash subpartitions grow too big, what option can I use to further improve performance? Because someday this is going to happen.

The source never purges any data.


Connor McDonald
December 13, 2016 - 2:14 am UTC

"All the date columns are stored in number format ..."

Well... no matter what you do, I foresee problems in the future :-) You could always explore a virtual column which converts this to a date, but that is obviously a non-trivial refactoring exercise.
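As a sketch of the virtual-column idea: assuming the source stores dates as YYYYMMDD numbers (the column names, table name, and format below are all assumptions), something like this exposes a real DATE that the optimizer can reason about:

```sql
-- Assumption: the source stores dates as YYYYMMDD numbers, e.g. 20161209.
-- Table and column names are hypothetical.
alter table orders add (
  order_date as ( to_date(to_char(order_date_num), 'YYYYMMDD') ) virtual
);

-- Statistics can then be gathered on the virtual column, so queries that
-- filter on order_date get sensible cardinality estimates; from 11g on,
-- a virtual column can even serve as the partitioning key of a new table.
```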

If the hash partitions become too large, you can always split them, e.g. from 4 to 8, and then from 8 to 16, and so forth.
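For plain hash partitioning, growing from 4 to 8 is done by adding partitions one at a time; each ADD PARTITION splits one existing partition and redistributes roughly half of its rows. Hash partition counts work best as powers of two, which is why you grow 4 to 8 to 16. A sketch with hypothetical names:

```sql
-- Hypothetical hash-partitioned table starting with 4 partitions
create table t_hash ( id number )
partition by hash (id) partitions 4;

-- Each ADD PARTITION splits one existing hash partition and moves
-- roughly half of its rows into the new partition.
alter table t_hash add partition;

-- Repeat until the count reaches the next power of two (here, 8),
-- so rows stay evenly spread across partitions.
```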

question on exchange SUBpartition

A reader, January 25, 2017 - 8:51 am UTC

Say I need to exchange subpartition P_S of table T1 with table T2.
Table T1 is partitioned by range on a date and subpartitioned by hash on ID; table T2 is partitioned by hash on ID.

T1 and T2 have the same columns/datatypes/indexes/constraints etc.

One thing that differs, however, is that T1 (and so its partitions and subpartitions) has a different PCTFREE from T2.

Does that cause a problem for the exchange?
Thanks



Connor McDonald
January 26, 2017 - 11:36 pm UTC

You should be fine:


SQL>
SQL> create table t1  (x int, y int, z int )
  2  partition by range ( x )
  3  subpartition by hash ( y )
  4  subpartitions 4
  5  (
  6  partition p1 values less than ( 2000 )
  7  )
  8  /

Table created.

SQL>
SQL> insert into t1
  2  select rownum, rownum, rownum
  3  from dual
  4  connect by level <= 1000;

1000 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> create table t2 ( x int, y int, z int ) pctfree 50;

Table created.

SQL>
SQL> select subpartition_name from user_tab_subpartitions;

SUBPARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
SYS_SUBP283
SYS_SUBP282
SYS_SUBP281
SYS_SUBP280

4 rows selected.

SQL> alter table t1 exchange subpartition SYS_SUBP283
  2    with table t2;

Table altered.

SQL>

More to Explore

VLDB

If you are new to partitioning, check out Connor McDonald's introduction series here.

VLDB

Documentation set on VLDB and Partitioning.