Drop Subpartition
Vijay, August 02, 2005 - 2:46 am UTC
Hi Tom,
Please tell me what is wrong with the below construct. I am getting Oracle error "ORA-00926 missing VALUES keyword"
Thanks
declare
a varchar2(100);
begin
for i in 1..3
loop
a:='alter table GL_JOURNAL_DETAILS modify partition default drop subpartition BOOK '|| i;
-- Subpartition is a hash partition
DBMS_OUTPUT.PUT_LINE(a);
execute immediate a;
end loop;
exception
when others then
raise;
end;
August 02, 2005 - 7:41 am UTC
what are you trying to do, what does gl_journal_details look like, and why would you put a space between BOOK and i?
Exception ...
Bob B, August 02, 2005 - 9:10 am UTC
As a side note, the "Exception When Others Then Raise;" can be eliminated, as that is what Oracle does by default.
I believe that line may make things harder to debug (at least for larger programs), because it changes the line number Oracle reports for the exception: instead of the line where the error actually occurred, you get the line where the error is re-raised.
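A minimal illustration of that point (my own example, not from the thread): the re-raised error's ORA-06512 entry points at the RAISE statement, not at the assignment that actually failed.
declare
  n number;
begin
  n := 1 / 0;   -- the ORA-01476 actually occurs here
exception
  when others then
    raise;      -- ...but the ORA-06512 line number in the error stack points here
end;
/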
Drop Subpartition
Vijay, August 03, 2005 - 1:38 am UTC
Hi Tom,
Thanks for your reply.
I have a table with composite range/hash partitioning. The table has 31 columns and is range partitioned by period_id and hash subpartitioned by book_id. The requirement is to maintain only 12 months of data in the table, so I have written a procedure (shown below) to drop subpartitions automatically. But I am not able to drop a subpartition. I have checked the Oracle documentation and found that it is not possible to use drop subpartition in this case, so please suggest some other way to solve this.
-----------------
Table Structure
CREATE TABLE GL_JOURNAL_DETAILS
(
LOCAL_TXN_ID VARCHAR2(30 BYTE),
JE_SOURCE_CD VARCHAR2(25 BYTE) NOT NULL,
JE_HEADER_ID NUMBER(15) NOT NULL,
JE_LINE_NUM NUMBER(15) NOT NULL,
IMPORT_CLASS VARCHAR2(30 BYTE) NOT NULL,
JE_CATEGORY_CD VARCHAR2(25 BYTE),
COA_ID NUMBER NOT NULL,
BOOK_ID NUMBER NOT NULL,
PERIOD_ID NUMBER(6) NOT NULL,
CURRENCY_CD VARCHAR2(4 BYTE) NOT NULL,
PROC_DATE_ID NUMBER,
SRC_GL_DATE_ID NUMBER,
SRC_VAL_DATE_ID NUMBER,
IMPORT_DATE_ID NUMBER,
ACCOUNTING_EVENT VARCHAR2(10 BYTE),
TXN_CR_AMT NUMBER(13,2) NOT NULL,
TXN_DR_AMT NUMBER(13,2) NOT NULL,
DOCUMENT_NBR VARCHAR2(15 BYTE),
EMPLOYEE_NBR VARCHAR2(6 BYTE),
CUSTOMER_NAME VARCHAR2(60 BYTE),
VENDOR_NAME VARCHAR2(60 BYTE),
EXPENSE_ADV_NBR VARCHAR2(9 BYTE),
PROJECT VARCHAR2(15 BYTE),
JE_DETAIL_DESC VARCHAR2(240 BYTE),
LOCAL_TXN_SEQ NUMBER,
ACCT_DATE_ID NUMBER,
VOUCHER_NBR VARCHAR2(15 BYTE),
INVOICE_NBR VARCHAR2(22 BYTE),
CUSTOMER_NBR VARCHAR2(20 BYTE),
VENDOR_NBR VARCHAR2(20 BYTE),
CREATE_DATE DATE,
UPDATE_DATE DATE)
PARTITION BY RANGE (PERIOD_Id)
SUBPARTITION BY HASH (book_id)
(PARTITION defaultpart VALUES LESS THAN(123));
--Procedure
create or replace procedure DropSubpart as
a varchar2(200);
b varchar2(200);
c varchar2(200);
v_load_date date;
v_book_id number;
v_process_params process_ctrl%rowtype;
begin
v_process_params:= proc_ctrl_pkg.get_process_parms(2024);
v_load_date := v_process_params.last_load_date;
v_book_id :=v_process_params.entity_id;
for r in (Select partition_name from all_tab_partitions
where table_name='GL_JOURNAL_DETAILS'
and partition_name <> 'DEFAULTPART')
loop
-- v_load_date is the last loading date
a:='alter table GL_JOURNAL_DETAILS modify partition '||r.partition_name ||' drop subpartition BOOK'|| v_book_id;
DBMS_OUTPUT.PUT_LINE(a);
execute immediate a;
b:='alter table GL_JOURNAL_DETAILS add partition P'||upper(to_char(v_load_date,'MON-RR'))|| ' values less than (' || to_number(to_char(to_DATE(TO_CHAR (v_load_date, 'MON-RR'),'mon-rr'),'RRRRMM'))||')';
DBMS_OUTPUT.PUT_LINE(b);
execute immediate b;
c:='alter table GL_JOURNAL_DETAILS modify partition P'||upper(to_char(v_load_date,'MON-RR')) ||' add subpartition book' ||v_book_id;
DBMS_OUTPUT.PUT_LINE(c);
execute immediate c;
end loop;
Exception
When Others then
Raise;
End DropSubpart;
August 03, 2005 - 10:19 am UTC
why would you drop a single subpartition containing a random subset of the old data????
ops$tkyte@ORA10G> drop TABLE t;
Table dropped.
ops$tkyte@ORA10G> CREATE TABLE t
2 (
3 dt date,
4 x int,
5 y varchar2(25)
6 )
7 PARTITION BY RANGE (dt)
8 subpartition by hash(x) subpartitions 4
9 (
10 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
11 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
12 PARTITION junk VALUES LESS THAN (MAXVALUE)
13 )
14
ops$tkyte@ORA10G> /
Table created.
ops$tkyte@ORA10G> alter table t drop partition part1;
Table altered.
You would drop that entire time period would you not?
Drop Subpartition
Vijay, August 04, 2005 - 6:32 am UTC
Hi Tom,
Thanks for your reply.
The partitioning column (period_id) is a number column, so I can't use to_date(). When I load the book_id 1 data for period 200509, I should delete all of the 200408 book_id 1 data. Other data, such as 200408 book_id 2, should remain in the table, so I can't drop the 200408 partition either. In such a case, how do I proceed?
Appreciate your help in this regard.
August 04, 2005 - 9:36 am UTC
but you are hash partitioning, you don't have any idea WHAT data is in a given hash partition
so, how would this work?
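For what Vijay describes (remove one book's rows for one period while keeping the other books), a hedged sketch of an alternative layout, range on period_id with LIST subpartitions on book_id, would keep each book in a known, individually droppable segment. This is my illustration, not Tom's; the names and values are made up:
create table gl_journal_details_demo
( period_id  number(6)     not null
, book_id    number        not null
, txn_cr_amt number(13,2)
)
partition by range (period_id)
subpartition by list (book_id)
subpartition template
( subpartition book1 values (1)
, subpartition book2 values (2)
, subpartition book_other values (default)
)
( partition p_200408 values less than (200409)
, partition p_200409 values less than (200410)
);

-- removing book 1 for period 200408 is then a single DDL statement
-- (list subpartitions, unlike hash subpartitions, can be dropped):
alter table gl_journal_details_demo drop subpartition p_200408_book1;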
Trouble with the "subpartition by range" syntax
ht, October 12, 2005 - 6:06 pm UTC
Hi Tom,
I'm getting an ORA-14151 when creating a subpartition. I searched your site for the error to no avail.
I'm using 10.2.0.1.
Can you tell me what I'm doing wrong below?
create table testemp
2 (
3 emp_id number ,
4 created date
5 )
6 partition by range(created)
7 subpartition by range(emp_id)
8 subpartition template
9 (
10 subpartition x values less than (6),
11 subpartition y values less than (8),
12 subpartition others values less than (maxvalue)
13 )
14 (
15 partition y2003q1 values less than (to_date('01Apr2003','ddMonyyyy')),
16 partition y2003q2 values less than (to_date('01Jul2003','ddMonyyyy')),
17 partition others less than (maxvalue)
18 );
subpartition by range(emp_id)
*
ERROR at line 7:
ORA-14151: invalid table partitioning method
October 13, 2005 - 10:21 am UTC
Why can't one subpartition by range?
ht, October 13, 2005 - 2:13 pm UTC
Tom,
Thank you for the info. Is there a reason one cannot subpartition by range? Is there a workaround if both my partition and subpartition need to be "by range"?
I tested the syntax below (using "subpartition by list" instead) but, of course, my rows will not be subpartitioned by range:
create table testemp
2 (
3 emp_id number ,
4 created date
5 )
6 partition by range(created)
7 subpartition by list(emp_id)
8 subpartition template
9 (
10 subpartition x values ('6'),
11 subpartition y values ('8'),
12 subpartition pothers values (default)
13 )
14 (
15 partition y2003q1 values less than (to_date('01Apr2003','ddMonyyyy')),
16 partition y2003q2 values less than (to_date('01Jul2003','ddMonyyyy')),
17 partition xothers values less than (maxvalue)
18 );
Table created.
October 13, 2005 - 4:26 pm UTC
because it is not supported - there is no feature of composite "range/range", it doesn't exist, the code isn't there.
if this is truly "empid" type of data - why would not HASH be appropriate?
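A hedged sketch of the hash alternative suggested above, reusing ht's table and partition names (the subpartition count of 4 is an arbitrary choice):
create table testemp
( emp_id  number,
  created date
)
partition by range (created)
subpartition by hash (emp_id) subpartitions 4
( partition y2003q1 values less than (to_date('01Apr2003','ddMonyyyy')),
  partition y2003q2 values less than (to_date('01Jul2003','ddMonyyyy')),
  partition xothers values less than (maxvalue)
);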
Thanks.
ht, October 13, 2005 - 4:51 pm UTC
I'll use the hash method. It would've been nice to use the range method because of the admin limitations of "by hash".
range(a)/range(b) = range(a,b) ?!?!
Gabe, October 13, 2005 - 4:57 pm UTC
<quote>Is there a reason one cannot subpartition by range?</quote>
Let us assume the range for the partitions were by column A and the range for the subpartitions were by column B, where A != B.
It seems to me that would be similar to partitioning by the pair of columns (A, B). What a range/range partitioning scheme would achieve is, essentially, doable right now with range partitioning by pairs.
October 13, 2005 - 9:06 pm UTC
not really -- it gets tricky with the "vectors" when the lead column isn't always equal to partition key value.
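A small illustration of the point (my own example, not from the thread): with multi-column range partitioning the second column is only examined when the first column equals a boundary value, so it cannot stand in for an independent range on the second column.
create table t_pair ( a number, b number )
partition by range (a, b)
( partition p1 values less than (10, 100)
, partition p2 values less than (maxvalue, maxvalue)
);

insert into t_pair values ( 5, 9999);  -- goes into P1: 5 < 10, so B is never looked at
insert into t_pair values (10,   50);  -- goes into P1: A equals the boundary, so B (50 < 100) decides
-- a true range(A)/range(B) scheme would let you place and purge on B independently of A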
thinking straight ...
Gabe, October 13, 2005 - 10:06 pm UTC
<quote>Not really</quote>
You're right of course. Nothing like working an actual example out on paper ...
hmm, what a dummy!
ORA-7445 when subpartitioning on a nullable column?
ht, October 14, 2005 - 11:33 pm UTC
Hi Tom,
I subpartitioned by hash but my select on the table is giving me a 7445. I created a table with no subpartition and selecting doesn't produce the error. Is it possible to subpartition by hash on a column that could be null or am I running into another issue?
TIA,
ht
October 15, 2005 - 9:47 am UTC
ora-7445, ora-3113, ora-600 all imply "please contact support, something unexpected has occurred"
you give no example :(
but please utilize support for this issue.
you can subpartition on a nullable column, yes
ops$tkyte@ORA9IR2> CREATE TABLE t
2 (
3 dt date,
4 x int,
5 y varchar2(25)
6 )
7 PARTITION BY RANGE (dt)
8 subpartition by hash(x)
9 (
10 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
11 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
12 PARTITION junk VALUES LESS THAN (MAXVALUE)
13 )
14 ;
Table created.
ops$tkyte@ORA9IR2> insert into t
2 select to_date( '12-mar-2003')+mod(rownum,3), decode( mod(rownum,2),0,1,null), rownum
3 from all_users;
35 rows created.
Looks like a bug
ht, October 17, 2005 - 11:45 am UTC
Tom,
Thanks again for your assistance. I've been working with support for the last 4 days and this is a 10g bug that hasn't been fixed yet. Guess I'm going back to 9i.
ht
How to partition/subpartition
Sandeep Dubey, February 16, 2006 - 10:03 pm UTC
I have a table with columns act_id, cr_date, ndc, and_other_cols.
There will be 150 million rows in the table. There are currently 4 distinct act_id values, but more will be added. Queries will always include act_id but may or may not include cr_date. We need to purge data from this table, and the purge window varies per act_id: e.g., act_id 1 needs 36 months of data kept and act_id 2 needs 40 months.
Given the requirement, I should partition the table. Please suggest how I can partition and subpartition this table so that I get the benefit of partition pruning and can also drop data for different months for different act_id values.
What type of index on column ndc will be better: local or global?
Regards
Sandeep Dubey
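A sketch of one option, for illustration only (not an answer from Tom): range partition on cr_date with LIST subpartitions on act_id makes each month/act_id pair an individually droppable segment, supporting both pruning on date and per-act_id retention. All names below are invented:
create table act_data
( act_id  number       not null
, cr_date date         not null
, ndc     varchar2(20)
)
partition by range (cr_date)
subpartition by list (act_id)
subpartition template
( subpartition act1 values (1)
, subpartition act2 values (2)
, subpartition act_other values (default)
)
( partition p_2006_01 values less than (to_date('01-feb-2006','dd-mon-yyyy'))
, partition p_2006_02 values less than (to_date('01-mar-2006','dd-mon-yyyy'))
);

-- when act_id 1 ages past its 36-month window for a given month, drop just its segment:
-- alter table act_data drop subpartition p_2006_01_act1;
-- a LOCAL index on ndc is maintained automatically by such drops; a GLOBAL index
-- would need UPDATE GLOBAL INDEXES (or a rebuild) to stay usable.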
Partition requirement
Mike, October 06, 2006 - 10:54 am UTC
We are on 9.2.0.5 on Solaris.
I have been tasked with creating a new table which will hold error information from unix servers. During a recent meeting, the requirement was given to partition this table, which will be HUGE, by Error Level (alert, crit, debug, err, info, notice, and warning) and then subpartition those by daily date. It appears that Oracle does not give an option to do composite partitioning that way.
Regarding the error_level values, I know I can "trick" oracle into thinking it is a range by doing something like the following, then using hash for the date values.
CREATE TABLE data_plog
(
data_log_id NUMBER(10) NOT NULL,
dl_dt_tm DATE NOT NULL,
log_level VARCHAR2(12) NOT NULL,
host VARCHAR2(40) NOT NULL,
facility VARCHAR2(10),
dl_cluster VARCHAR2(100),
application VARCHAR2(30),
dl_descript_1 VARCHAR2(4000),
err_flg CHAR(1)
)
PCTUSED 80
PCTFREE 10
STORAGE (
INITIAL 64M
NEXT 64M
PCTINCREASE 0)
TABLESPACE PARTTABS
PARTITION BY RANGE (log_level)
SUBPARTITION BY HASH (dl_dt_tm)
(
PARTITION p1_alert VALUES LESS THAN ('B')
PCTUSED 80
TABLESPACE PARTTABS,
PARTITION p2_crit VALUES LESS THAN ('C')
PCTUSED 80
TABLESPACE PARTTABS,
PARTITION p3_debug VALUES LESS THAN ('D')
PCTUSED 80
TABLESPACE PARTTABS,
PARTITION p4_err VALUES LESS THAN ('G')
PCTUSED 80
TABLESPACE PARTTABS,
PARTITION p5_info VALUES LESS THAN ('M')
PCTUSED 80
TABLESPACE PARTTABS,
PARTITION p6_notice VALUES LESS THAN ('V')
PCTUSED 80
TABLESPACE PARTTABS,
PARTITION p7_warning VALUES LESS THAN (MAXVALUE)
PCTUSED 80
TABLESPACE PARTTABS
)
PARALLEL ( DEGREE 4 )
MONITORING
An alternative is to partition as list partitions with the same error_level values, and hash subpartition those on the date value.
However, we have a retention requirement that states that for 2 of the error_level values, debug and info (which would be about 60% of the table), the retention period should be 30 days, and 90 days for the other 5.
So if we range partition by date, and subpartition by list (error_level), we cannot drop any partitions for 90 days, meaning our space requirement is much larger. We could only delete the data within the 90 daily partitions that has an error level equal to debug or info.
Any ideas? I'm stumped on this one.
October 06, 2006 - 1:28 pm UTC
you will range partition by date and subpartition by list then.
we have a saying "six one way, half dozen the other"
you'll have achieved the same goal - things partitioned by error level and date.
What I thought....
Mike, October 06, 2006 - 3:44 pm UTC
I figured that would be the case. I agree with the six of one half-dozen... assessment, but wanted to be sure that the only way to do this was as you indicated. So using the subpartition template (awesome feature!) we will do something like :
CREATE TABLE USERMGR.DATA_PLOG
(
DATA_LOG_ID NUMBER(10) NOT NULL,
DL_DT_TM DATE NOT NULL,
LOG_LEVEL VARCHAR2(12 BYTE) NOT NULL,
HOST VARCHAR2(40 BYTE) NOT NULL,
FACILITY VARCHAR2(10 BYTE),
DL_CLUSTER VARCHAR2(100 BYTE),
APPLICATION VARCHAR2(30 BYTE),
DL_DESCRIPT_1 VARCHAR2(4000 BYTE),
ERR_FLG CHAR(1 BYTE)
)
TABLESPACE PARTTABS
PCTUSED 80
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64M
NEXT 64M
PCTINCREASE 0
)
PARTITION BY RANGE (DL_DT_TM)
SUBPARTITION BY LIST (LOG_LEVEL)
SUBPARTITION TEMPLATE
(SUBPARTITION ALERT VALUES ('ALERT'),
SUBPARTITION CRIT VALUES ('CRIT'),
SUBPARTITION DEBUG VALUES ('DEBUG'),
SUBPARTITION ERR VALUES ('ERR'),
SUBPARTITION INFO VALUES ('INFO'),
SUBPARTITION NOTICE VALUES ('NOTICE'),
SUBPARTITION WARNING VALUES ('WARNING'),
SUBPARTITION OTHERS VALUES (DEFAULT)
)
(
PARTITION ULOG_091506 VALUES LESS THAN (TO_DATE(' 2006-09-16 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
NOCOMPRESS
TABLESPACE PARTTABS
PCTUSED 80
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64M
NEXT 64M
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
( SUBPARTITION ULOG_091506_ALERT VALUES ('ALERT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091506_CRIT VALUES ('CRIT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091506_DEBUG VALUES ('DEBUG') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091506_ERR VALUES ('ERR') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091506_INFO VALUES ('INFO') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091506_NOTICE VALUES ('NOTICE') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091506_WARNING VALUES ('WARNING') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091506_OTHERS VALUES (DEFAULT) TABLESPACE PARTTABS ),
PARTITION ULOG_091606 VALUES LESS THAN (TO_DATE(' 2006-09-17 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
NOCOMPRESS
TABLESPACE PARTTABS
PCTUSED 80
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64M
NEXT 64M
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
( SUBPARTITION ULOG_091606_ALERT VALUES ('ALERT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091606_CRIT VALUES ('CRIT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091606_DEBUG VALUES ('DEBUG') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091606_ERR VALUES ('ERR') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091606_INFO VALUES ('INFO') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091606_NOTICE VALUES ('NOTICE') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091606_WARNING VALUES ('WARNING') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091606_OTHERS VALUES (DEFAULT) TABLESPACE PARTTABS ),
PARTITION ULOG_091706 VALUES LESS THAN (TO_DATE(' 2006-09-18 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
NOCOMPRESS
TABLESPACE PARTTABS
PCTUSED 80
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64M
NEXT 64M
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
( SUBPARTITION ULOG_091706_ALERT VALUES ('ALERT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091706_CRIT VALUES ('CRIT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091706_DEBUG VALUES ('DEBUG') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091706_ERR VALUES ('ERR') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091706_INFO VALUES ('INFO') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091706_NOTICE VALUES ('NOTICE') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091706_WARNING VALUES ('WARNING') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091706_OTHERS VALUES (DEFAULT) TABLESPACE PARTTABS ),
PARTITION ULOG_091806 VALUES LESS THAN (TO_DATE(' 2006-09-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
NOCOMPRESS
TABLESPACE PARTTABS
PCTUSED 80
PCTFREE 10
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 64M
NEXT 64M
PCTINCREASE 0
BUFFER_POOL DEFAULT
)
( SUBPARTITION ULOG_091806_ALERT VALUES ('ALERT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091806_CRIT VALUES ('CRIT') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091806_DEBUG VALUES ('DEBUG') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091806_ERR VALUES ('ERR') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091806_INFO VALUES ('INFO') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091806_NOTICE VALUES ('NOTICE') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091806_WARNING VALUES ('WARNING') TABLESPACE PARTTABS,
SUBPARTITION ULOG_091806_OTHERS VALUES (DEFAULT) TABLESPACE PARTTABS ),
etc, etc.
The only concern I have is that if we need to drop the subpartitions ONLY for 'DEBUG' or 'INFO' data older than 30 days (for space reasons), it will totally hose any index that includes that column, in particular the primary key. So I guess we would have to rebuild the PK every time we drop those subpartitions. Right?
October 06, 2006 - 4:11 pm UTC
don't really know what your primary key is...
Ooops
Mike, October 06, 2006 - 4:22 pm UTC
The primary key is fat, but necessarily so.
It consists of log_level, host, dl_dt_tm, and data_log_id.
So if you partition on dl_dt_tm and subpartition on log_level, when you drop two of the log_level subpartitions in each partition older than 30 days, would you not have to rebuild the primary key?
Please forgive punchiness, I'm coming down with some kind of flu and did not sleep well last night.
October 06, 2006 - 4:34 pm UTC
not if dl_dt_tm/host is part of the partition key there.
the unique index can be local.
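A hedged sketch of that point, assuming the range(DL_DT_TM)/list(LOG_LEVEL) layout from Mike's DDL above: because both partitioning columns appear in the primary key, the PK can be backed by a LOCAL index, so dropping a subpartition drops the matching index subpartition and nothing needs rebuilding.
alter table usermgr.data_plog
  add constraint data_plog_pk
  primary key (log_level, host, dl_dt_tm, data_log_id)
  using index
  ( create unique index usermgr.data_plog_pk
      on usermgr.data_plog (log_level, host, dl_dt_tm, data_log_id)
      local );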
partition information
Hans, January 02, 2007 - 3:21 pm UTC
I would like to get the partition/subpartition of a table in which the data is stored, given a data value.
Ex:
Let's say we have a table T(id, acctno)
range partitioned on ID
subpartitions 2
hash partitioned on acctno
I would like the SQL to get the main partition/subpartition information of table T for an acctno passed as a parameter and stored in table T.
Thanks, and I appreciate your help
January 05, 2007 - 7:38 am UTC
while in theory we could do this - it would involve "guessing" at the Oracle algorithm for hashing, something that is subject to change.
So, what is the objective here - why do you need this information. What is the business problem you are trying to solve with this?
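For rows that already exist, one workable variant (a sketch of my own, not part of the original answer) is to skip predicting the hash and instead map each matching row's rowid to its segment through the dictionary. It uses the table T(id, acctno) from the question; the bind variable is a placeholder:
select distinct s.partition_name, s.subpartition_name
  from t,
       user_objects           o,
       user_tab_subpartitions s
 where t.acctno            = :acctno
   and o.data_object_id    = dbms_rowid.rowid_object(t.rowid)  -- segment the row lives in
   and o.object_type       = 'TABLE SUBPARTITION'
   and o.object_name       = 'T'
   and s.table_name        = 'T'
   and s.subpartition_name = o.subobject_name;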
drop sub-partition
anupam, March 08, 2012 - 12:40 pm UTC
Hi,
I have a composite partitioned table (range, hash). When we were designing the table we thought that the number of rows for one particular range partition would be pretty huge, and that the kind of queries hitting the table would always include the column col2. So we decided to hash subpartition each partition on column col2. Now, after pushing the table to production and getting the volume metrics for a week, it looks like the number of rows is not that huge and we don't necessarily need to hash each partition; for the kind of volume we have, range partitioning alone is sufficient.
Now I am left with two questions
1. If I leave the table as it is, what is the harm in having unnecessary subpartitions?
2. If I choose to rearrange the table to remove the subpartitions, is there a way other than dbms_redefinition?
I tried one approach for dropping the sub-partitions by altering the sub-partition template, but it's not giving me the desired result as it leaves the older partitions as they are. Following is the test case I tried.
SQL> DROP TABLE test PURGE
2 /
Table dropped.
SQL> CREATE TABLE test(col_date_part_key NUMBER NOT NULL
2 ,col2 VARCHAR2(2000) NOT NULL
3 )
4 PARTITION BY RANGE(col_date_part_key)
5 SUBPARTITION BY HASH (col2)
6 SUBPARTITIONS 16
7 (PARTITION month_01 VALUES LESS THAN (201202) TABLESPACE USERS
8 ,PARTITION month_02 VALUES LESS THAN (201203) TABLESPACE USERS
9 ,PARTITION month_03 VALUES LESS THAN (201204) TABLESPACE USERS
10 ,PARTITION month_04 VALUES LESS THAN (201205) TABLESPACE USERS
11 ,PARTITION month_05 VALUES LESS THAN (201206) TABLESPACE USERS
12 ,PARTITION month_06 VALUES LESS THAN (201207) TABLESPACE USERS
13 ,PARTITION month_07 VALUES LESS THAN (201208) TABLESPACE USERS
14 ,PARTITION month_08 VALUES LESS THAN (201209) TABLESPACE USERS
15 ,PARTITION month_09 VALUES LESS THAN (201210) TABLESPACE USERS
16 ,PARTITION month_10 VALUES LESS THAN (201211) TABLESPACE USERS
17 ,PARTITION month_11 VALUES LESS THAN (201212) TABLESPACE USERS
18 ,PARTITION month_12 VALUES LESS THAN (201301) TABLESPACE USERS
19 )
20 /
Table created.
SQL> begin
2 for i in 1 .. 1000
3 loop
4
5 INSERT INTO test(col_date_part_key,col2)
6 SELECT to_char(ADD_MONTHS(TO_DATE('11-DEC-2011','DD-MON-YYYY'),l),'YYYYMM')
7 , TRUNC(DBMS_RANDOM.VALUE(0, 100))
8 FROM (SELECT level l FROM dual CONNECT BY level < 13);
9 end loop;
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> COMMIT
2 /
Commit complete.
SQL> begin
2 dbms_stats.gather_table_stats(user,'TEST',granularity=>'ALL');
3 end;
4 /
PL/SQL procedure successfully completed.
SQL> SELECT partition_name,count(*)
2 FROM user_tab_subpartitions
3 WHERE table_name = 'TEST'
4 group by partition_name
5 order by partition_name;
PARTITION_NAME COUNT(*)
------------------------------ ----------
MONTH_01 16
MONTH_02 16
MONTH_03 16
MONTH_04 16
MONTH_05 16
MONTH_06 16
MONTH_07 16
MONTH_08 16
MONTH_09 16
MONTH_10 16
MONTH_11 16
PARTITION_NAME COUNT(*)
------------------------------ ----------
MONTH_12 16
12 rows selected.
SQL> alter table test set subpartition template() ;
Table altered.
SQL> alter table test add partition month_201301 values less than (201302);
Table altered.
SQL> begin
2 for i in 1 .. 1000
3 loop
4
5 INSERT INTO test(col_date_part_key,col2)
6 SELECT to_char(ADD_MONTHS(TO_DATE('11-DEC-2011','DD-MON-YYYY'),l),'YYYYMM')
7 , TRUNC(DBMS_RANDOM.VALUE(0, 100))
8 FROM (SELECT level l FROM dual CONNECT BY level < 14);
9 end loop;
10 commit;
11 end;
12 /
PL/SQL procedure successfully completed.
SQL> SELECT partition_name,count(*)
2 FROM user_tab_subpartitions
3 WHERE table_name = 'TEST'
4 group by partition_name
5 order by partition_name;
PARTITION_NAME COUNT(*)
------------------------------ ----------
MONTH_01 16
MONTH_02 16
MONTH_03 16
MONTH_04 16
MONTH_05 16
MONTH_06 16
MONTH_07 16
MONTH_08 16
MONTH_09 16
MONTH_10 16
MONTH_11 16
PARTITION_NAME COUNT(*)
------------------------------ ----------
MONTH_12 16
MONTH_201301 1
13 rows selected.
SQL> spool off
Thanks,
Anupam
March 09, 2012 - 9:16 am UTC
1) probably none, but we'd need to truly understand how you use this table.
2) you would have to create an entirely new table - one way or the other. dbms redefinition would let you do it online, a create table as select could be used, but that would be offline.
you cannot get rid of the composite partitioning scheme without recreating the table.
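A minimal sketch of the offline create-table-as-select route Tom mentions, reusing the partition names from the test case above (indexes, grants and constraints would have to be recreated, and this assumes an outage window):
create table test_plain
partition by range (col_date_part_key)
( partition month_01     values less than (201202)
 ,partition month_02     values less than (201203)
 ,partition month_03     values less than (201204)
 ,partition month_04     values less than (201205)
 ,partition month_05     values less than (201206)
 ,partition month_06     values less than (201207)
 ,partition month_07     values less than (201208)
 ,partition month_08     values less than (201209)
 ,partition month_09     values less than (201210)
 ,partition month_10     values less than (201211)
 ,partition month_11     values less than (201212)
 ,partition month_12     values less than (201301)
 ,partition month_201301 values less than (201302)
)
as select * from test;

-- then, during the outage window:
-- drop table test;
-- rename test_plain to test;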
drop sub-partition
anupam, March 10, 2012 - 6:34 am UTC
Hi Tom,
Thank you for the reply. This table is loaded on a per-month basis, so we have col_date_part_key as our range partition key. The reporting queries that are going to hit this table will have col2 either in the projection list or in the predicate list.
col2 references a dimension table which is also hash partitioned on the same column, col2, with 16 buckets.
If we decide to go with the current structure, i.e. composite (range, hash), it would allow full partition-wise joins. But the data volume in each subpartition of the test table is very small, so we are skeptical about the partition-wise join benefit.
The other question: if we live with 16 hash subpartitions, will it create any space problem in the shared pool, since the dictionary cache will have to be read and updated every time a new subpartition is created? There are quite a few tables with this kind of structure in our DW.
Let me know your thoughts on the above points.
Thanks,
Anupam
March 12, 2012 - 7:33 am UTC
The other question: if we live with 16 hash subpartitions, will it create any space problem in the shared pool, since the dictionary cache will have to be read and updated every time a new subpartition is created? There are quite a few tables with this kind of structure in our DW.
we only load partition meta data for partitions we hit - as we hit them. Whether this will stress your shared pool or not is a function of your system - I cannot really answer that - I don't know how much of your shared pool is already committed and how much more it would need to handle this.
one word for you:
benchmark
Number of subpartitions (composite table by range/hash) ?
Marcin Jackowski, July 03, 2012 - 10:50 am UTC
I have a composite partitioned table (range, hash), and currently I have 8 hash subpartitions for every time-range (daily) partition. I have on average ~4000 unique integer values of the hashed column in every partition, and I'm wondering what will happen if I define more subpartitions (to get a more granular hash).
My questions related:
1. Am I right to assume I would gain performance by increasing the number of subpartitions? I believe I'd pay for it in disk space, right?
2. What would be the ideal number of subpartitions? How could I estimate it?
3. One of the values in the hashed column appears in 30% of all records, meaning one of my 8 subpartitions is somewhat "overloaded". How could I avoid that? Should I?
regards,
Marcin
July 03, 2012 - 1:58 pm UTC
1) performance would likely ONLY benefit IF and ONLY IF - you currently full scan a subpartition or small set of subpartitions. If you include the hash key in the predicate and if you full scan some of the subpartitions.
If you make the partitions smaller - you would full scan less data.
If you are using an index now and would be using an index still - it won't make a difference really, not too much. You might get marginally better clustering of the data - but that would be about it.
It wouldn't take significantly more or less disk space.
2) you'd have to tell us more, like - do you full scan it? how big is too big for you? what size would you *like* them to be? there is no perfect number (if there were, there would be no other choice)
3) pick a different hash key if you want to avoid that.
Unless of course you put this hash key in the where clause and you full scan - in which case - that just means you need to full scan 30% of your data and there is no escaping that, no matter what the hash key is.
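For question 3, one way to quantify the skew before deciding is to look at per-subpartition row counts from the dictionary after gathering statistics; a sketch only, with T standing in for the real table name:
begin
  dbms_stats.gather_table_stats(user, 'T', granularity => 'ALL');
end;
/

select partition_name, subpartition_name, num_rows
  from user_tab_subpartitions
 where table_name = 'T'
 order by partition_name, subpartition_name;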