Partitions Missing problem
MV, November 11, 2003 - 4:44 am UTC
Hi Tom
I have a range partitioned table based on processing date. Normally this table must hold the last 10 days' data (actually, in our project, 90 days), so in total we have 11 partitions including one max partition. Every day our procedure drops the lowest partition and adds one new partition for sysdate. The lowest partition date, the highest partition date, and the period (i.e., 10 days) are stored in another table.
For example, if today is 11/11/2003, then this table should have partitions for 02/11/2003 through 11/11/2003 (10 partitions) and one max partition. Due to some problem (we don't know exactly whether it is a logic or a data problem), partitions have been created for future dates. For example, when we checked today, it contained partitions in the following order:
02/11/2003
03/11/2003
06/11/2003
07/11/2003
08/11/2003
10/11/2003
11/11/2003
12/11/2003
13/11/2003
14/11/2003
max partition
The 04th, 05th and 09th November partitions are missing, and partitions have been created for future dates. My question is: every day we drop the lowest partition and create a new one. When I drop this 06/11/2003 partition, I will lose the data for 3 days, but I shouldn't, because my client wants exactly the last 10 days' data. I should keep my table in the following order:
02/11/2003
03/11/2003
04/11/2003
05/11/2003
06/11/2003
07/11/2003
08/11/2003
09/11/2003
10/11/2003
11/11/2003
max partition
Can you suggest some alternatives?
November 11, 2003 - 7:30 am UTC
umm, you have a bug in your code that finds the "lowest" partition -- you need to find the bug, fix it, and the problem goes away. There is nothing built in to Oracle that says "get rid of the lowest", so it has to be in your logic.
it could easily be the way you seem to deal with dates using dd/mm/yyyy. If you use that date format mask and are comparing STRINGS, not DATES, things can easily get all messed up.
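To illustrate the trap (a minimal sketch; the real table and column names will differ): comparing dd/mm/yyyy values as strings orders by day-of-month first, so the "lowest" string is not the earliest date.

```sql
-- Hypothetical illustration: as a STRING, '02/11/2003' sorts BEFORE
-- '28/10/2003', even though as a DATE it is two weeks later.
select least('02/11/2003', '28/10/2003')           as string_least,
       least(to_date('02/11/2003','dd/mm/yyyy'),
             to_date('28/10/2003','dd/mm/yyyy'))   as date_least
  from dual;
-- string_least comes back as '02/11/2003' (the wrong "lowest"),
-- date_least comes back as 28-OCT-2003 (the real earliest)
```

If partition-maintenance code picks the "lowest" partition this way, it can skip real low partitions and create partitions for the wrong dates, which matches the symptoms above.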
can i increase the partition key width?
Subu, November 17, 2003 - 8:29 am UTC
Tom
I have a table with 4 million records. The table structure is:
create table cheques
( chq_folio_no varchar2(13),
chq_cheque_no varchar2(19),
...
...
...)
partition by range(chq_folio_no)
(partition part1 values less than ...
..
..
);
This table contains around 4 million records. Now I want to increase the width of the partition key "chq_folio_no" to varchar2(16). I tried to alter the table, but it gives the error "cannot alter the partition key".
Can you suggest some ideas for increasing the width of this column? If it can't be done in the existing table, how should I do it?
November 19, 2003 - 6:30 am UTC
if you are using oracle9i, you can use the online redefinition package..
or you can
a) export the table
b) drop and recreate just the table with the new definition
c) import it
or you can
a) prevent dml from happening on this table somehow
b) create table a new with the modified definition
c) insert /*+ append */ into it, index it, etc
d) drop old
e) rename new to old
or you can use steps like this (you'd have to create indexes on t1/t2 to match your local indexes, and remember to add global indexes and grants to the new table before using it)
ops$tkyte@ORA920LAP> CREATE TABLE t
2 (
3 pkey varchar2(5),
4 data number
5 )
6 PARTITION BY RANGE (pkey) (
7 PARTITION part1 VALUES LESS THAN ('M'),
8 PARTITION part2 VALUES LESS THAN (MAXVALUE)
9 )
10 ;
Table created.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> insert into t values( 'A', 5 );
1 row created.
ops$tkyte@ORA920LAP> insert into t values( 'Z', 10 );
1 row created.
ops$tkyte@ORA920LAP> alter table t modify pkey varchar2(10)
2 /
alter table t modify pkey varchar2(10)
*
ERROR at line 1:
ORA-14060: data type or length of a table partitioning column may not
be changed
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create table new_table
2 (
3 pkey varchar2(15),
4 data number
5 )
6 PARTITION BY RANGE (pkey) (
7 PARTITION part1 VALUES LESS THAN ('M'),
8 PARTITION part2 VALUES LESS THAN (MAXVALUE)
9 )
10 ;
Table created.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> create table t1 ( pkey varchar2(5), data number );
Table created.
ops$tkyte@ORA920LAP> create table t2 ( pkey varchar2(5), data number );
Table created.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> alter table t exchange partition part1 with table t1;
Table altered.
ops$tkyte@ORA920LAP> alter table t exchange partition part2 with table t2;
Table altered.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> alter table t1 modify pkey varchar2(15);
Table altered.
ops$tkyte@ORA920LAP> alter table t2 modify pkey varchar2(15);
Table altered.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> alter table new_table exchange partition part1 with table t1 without validation;
Table altered.
ops$tkyte@ORA920LAP> alter table new_table exchange partition part2 with table t2 without validation;
Table altered.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> drop table t;
Table dropped.
ops$tkyte@ORA920LAP> rename new_table to t;
Table renamed.
ops$tkyte@ORA920LAP>
ops$tkyte@ORA920LAP> select * from t;
PKEY DATA
--------------- ----------
A 5
Z 10
ops$tkyte@ORA920LAP> desc t;
Name Null? Type
----------------------------------- -------- ------------------------
PKEY VARCHAR2(15)
DATA NUMBER
ops$tkyte@ORA920LAP>
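The online redefinition route mentioned at the top could be sketched roughly like this (an assumption-laden sketch, not a tested script: it assumes 9i with a primary key on T, and T_INTERIM is a name invented here; in 9i you would still create indexes, constraints, and grants on the interim table yourself before finishing):

```sql
-- verify T can be redefined online (uses the primary key by default)
begin
  dbms_redefinition.can_redef_table( user, 'T' );
end;
/

-- interim table with the wider partition key
create table t_interim
( pkey varchar2(15),
  data number
)
partition by range (pkey)
( partition part1 values less than ('M'),
  partition part2 values less than (MAXVALUE)
);

begin
  dbms_redefinition.start_redef_table( user, 'T', 'T_INTERIM' );
  -- create indexes/constraints/grants on T_INTERIM here ...
  dbms_redefinition.finish_redef_table( user, 'T', 'T_INTERIM' );
end;
/
drop table t_interim;
```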
Importing nonpartitioned table to partitioned table
Denise, February 23, 2004 - 10:38 am UTC
Good day Tom...
it's been a somewhat frustrating weekend trying to get
the gift.dmp export from 8.1.7 imported to the 9i db.
I created the partitioned gift table on the 9i db.
As you recommended the LIST partition was the best option
for the intended purpose.
PARTITION BY LIST (usercode1)
(partition p_acga VALUES('ACGA') tablespace aclj_data,
partition p_aclj VALUES('ACLJ') tablespace aclj_data)
1 select table_name,partition_name,tablespace_name
2* from user_tab_partitions
SQL> /
TABLE_NAME PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------ -----------------------
GIFT P_ACGA ACLJ_DATA
GIFT P_ACLJ ACLJ_DATA
I tried every import statement under the sun:
imp user/pwd tables=gift file=gift.dmp ignore=y rows=y
that generated an ora-14400 "inserted partition key does
not map to any partition"
I then tried:
imp user/pwd tables=(gift:p_acga,gift:p_aclj) file=gift.dmp
ignore=y rows=y grants=n indexes=n constraints=n
that generated an IMP-00055 error "warning partition or
subpartition not found in export file".
I've tried several other scenarios using samples and examples I researched and can't get this darned .dmp file imported. I keep getting one of the above import errors.
I've read that if a target table is partitioned, partition-level import rejects any rows that fall above the highest partition of the target table, and that you should use the MAXVALUE clause to set this value to unlimited?... not too sure what this means.
I made sure there were no NULL values in the usercode1
column before exporting.
any hints,help much appreciated.
February 23, 2004 - 10:48 am UTC
well, all that means is the data you imported has usercodes other than the ones you set up for. Consider:
ops$tkyte@ORA920PC> drop table t;
Table dropped.
ops$tkyte@ORA920PC> create table t
2 ( usercode1 varchar2(5),
3 data varchar2(10)
4 )
5 PARTITION BY LIST (usercode1)
6 (partition p_acga VALUES('ACGA') ,
7 partition p_aclj VALUES('ACLJ') )
8 /
Table created.
ops$tkyte@ORA920PC> !imp userid=/ full=y ignore=y
Import: Release 9.2.0.4.0 - Production on Mon Feb 23 10:45:57 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table "T"<b>
IMP-00058: ORACLE error 14400 encountered
ORA-14400: inserted partition key does not map to any partition
IMP-00028: partial import of previous table rolled back: 2 rows rolled back
Import terminated successfully with warnings.</b>
<b>that is your error, it is because:
ops$tkyte@ORA817DEV> select * from t;
USERC DATA
----- ----------
ACGA hello
ACLJ world
XXXX ugh
the source table has usercodes we did not anticipate - and your data must also have the same issue. to get this imported:</b>
ops$tkyte@ORA920PC> drop table t;
Table dropped.
ops$tkyte@ORA920PC> create table t
2 ( usercode1 varchar2(5),
3 data varchar2(10)
4 )
5 PARTITION BY LIST (usercode1)
6 (partition p_acga VALUES('ACGA') ,
7 partition p_aclj VALUES('ACLJ') ,<b>
8 partition the_rest values ( DEFAULT ) )</b>
9 /
Table created.
ops$tkyte@ORA920PC> !imp userid=/ full=y ignore=y
Import: Release 9.2.0.4.0 - Production on Mon Feb 23 10:45:58 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table "T" 3 rows imported
Import terminated successfully without warnings.
<b>and there you go! And then you can:
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> select * from t partition(the_rest);
USERC DATA
----- ----------
XXXX ugh
to see what the bad rows are</b>
thanks Tom....
Denise, February 23, 2004 - 11:08 am UTC
Connected to:
Oracle8i Release 8.1.7.0.0 - Production
JServer Release 8.1.7.0.0 - Production
SQL> select count(*) from gift where usercode1 not in('ACGA','ACLJ');
COUNT(*)
----------
0
SQL>
I had also made sure of the values (sorry for not indicating that)... I believe there is a CHECK constraint on that column, since it is crucial that the proper client be identified during an insert.
I noticed with your import commands you used full=y.
I'll review and run your test sample on the 9i machine.
as ever...thanks soo much for your patience and help.
February 23, 2004 - 4:31 pm UTC
I used full=y cause I had only one table in there.
You have *bad* data -- I'm 100% on that. Use the default and you'll see.
1 select count(*),
2 count( case when usercode1 not in('ACGA','ACLJ') then 1 end) x,
3 count( case when usercode1 in('ACGA','ACLJ') then 1 end) y
4* from t
ops$tkyte@ORA9IR2> /
COUNT(*) X Y
---------- ---------- ----------
3 0 2
Hmmm, 3 rows -- none are "not in", but only 2 are "in"....
(beware the NULL)
Beware the nulls
Mike, February 23, 2004 - 3:00 pm UTC
If some of your rows have null values, they could cause the results you are seeing here.
thanks Tom and Mike...
Denise, February 23, 2004 - 5:03 pm UTC
I did test for nulls and did test to see if any other
value existed other than ACGA and ACLJ in usercode1.
I imported the gift.dmp file into a regular nonpartitioned
table just so I could perform data validation in the
event data got corrupted during the export...although
the export generated no errors and I fail to see how
that could have happened.
I'll do what Tom says...then I am going to do another
export just in case something is wrong with the current
export file just to make sure the column I need as a
partition key has the correct values.
thanks again...will let you know when I finally get this
figured out.
Tom.....as always....you are RIGHT!
Denise, February 24, 2004 - 8:44 am UTC
1 select count(*),
2 count( case when usercode1 not in('ACGA','ACLJ') then 1 end) x,
3 count( case when usercode1 in('ACGA','ACLJ') then 1 end) y,
4 count(case when usercode1 is NULL then 1 end) z
5* from gift
SQL> /
COUNT(*) X Y Z
---------- ---------- ---------- ----------
8059479 0 4232800 3826679
SQL>
I ran your sql statement...added another clause for the
NULLS...and 'lo and behold' the NULLS popped up.
Why didn't they show when I ran the original sql
statement: select count(*) from gift where usercode1 IS NULL;???
anyhow...there it is in black and white.
thanks a quintillion... love the case statement to count
the rows. Will add that to my repertoire of sql scripts.
have a great day!
;~D
Never Mind....
Denise, February 24, 2004 - 8:54 am UTC
I answered my own question... I used the wrong operator...
spent 4 days on this because of that... let this be a lesson
for others.
SQL> select count(*) from gift where usercode1 IS NULL;
COUNT(*)
----------
3827277
SQL> ed
Wrote file afiedt.buf
1* select count(*) from gift where usercode1=NULL
SQL> /
COUNT(*)
----------
0
What if multiple partitions' ranges need to change?
Steve, July 12, 2004 - 11:06 pm UTC
Hi Tom,
What if I have table with 12 partitions
PARTITION BY RANGE (DEM_ID)
partition P1 values less than (100)
partition P2 values less than (200)
...
partition P11 values less than (1100)
partition P12 values less than (1200)
I want to change the partitions' ranges to
partition P1 values less than (150)
partition P2 values less than (300)
...
partition P11 values less than (1650)
partition P12 values less than (1800)
Do I have to follow your solution 12 times?
Or do you have a better way to do it in one shot?
1) create a temp table demdata
2) alter table demdata
exchange partition demdata_01
with table empty
including indexes
without validation
/
alter table demdata
exchange partition demdata_02
with table empty
including indexes
without validation
...
alter table demdata
exchange partition demdata_12
with table empty
including indexes
without validation
3) drop 12 partitions.
4) recreate 12 partitions
5) exchange each partition back
Is it STILL the big "rename" trick even though all the partitions need to change and my table has millions of rows?
What if I just
1) create a temp table temp_table as
select * from partition_table;
2) recreate partition table with new ranges
3) insert into new_partition_tables
select * from temp_table.
In this way, there is a lot of data "movement", right?
Thanks!
Steve
July 13, 2004 - 12:04 am UTC
i would do an online or offline reorg. every bit of data would be moved perhaps multiple times.
just create a new table, with the new scheme.
either online redef it into there or just insert append it.
drop the old table and rename the new one.
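The offline route above could be sketched like this (a rough sketch only, reusing the hypothetical DEMDATA/DEM_ID names from the question, with the remaining columns elided):

```sql
-- new table with the new partition boundaries
create table demdata_new
( dem_id number
  -- ... remaining columns ...
)
partition by range (dem_id)
( partition p1  values less than (150),
  partition p2  values less than (300),
  -- ... p3 through p11 ...
  partition p12 values less than (1800)
);

-- direct-path load: each row is written once, straight into its new partition
insert /*+ append */ into demdata_new select * from demdata;
commit;

-- recreate indexes, constraints and grants on demdata_new, then:
drop table demdata;
rename demdata_new to demdata;
```

Compared with exchanging all 12 partitions out, repartitioning, and exchanging back, this moves each row once instead of potentially several times.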
Following up on your answer to my question
Steve, August 26, 2004 - 10:43 am UTC
Can I create the new table by script, based on the old table, keeping all the properties except the partition info?
e.g. keep all the indexes, constraints, triggers, synonyms, grants.
Since I have to change many tables' partitioning, I am wondering if I can do it in a script with a for loop.
Thanks Tom!
Steve
August 26, 2004 - 10:46 am UTC
you would be responsible for indexing, constraining, triggering, granting on this new table (synonyms would be "OK" if you rename the new table).
in 10g, the online redef process automates the indexing, constraints, triggering, etc -- but for a big table, you'd probably want to do the indexing yourself if you think about it (you should have the scripts already, and you can do parallel, nologging, etc sort of stuff that a simple "copy my stuff" won't)
ALTER TABLE
raul, September 23, 2004 - 4:20 am UTC
ALTER TABLE DWFET_CDR_PRE22
MODIFY PARTITION MES200403
(
add SUBPARTITION MES200403_SP5, TABLESPACE TDATA05,
add SUBPARTITION MES200403_SP6, TABLESPACE TDATA06,
add SUBPARTITION MES200403_SP7, TABLESPACE TDATA07,
add SUBPARTITION MES200403_SP8, TABLESPACE TDATA08),
what is the solution, please¿?
September 24, 2004 - 8:01 am UTC
42 -- but now, what is the question (anyone know? i bet some do)
sorry Raul, couldn't resist. You've given us the ultimate answer to the ultimate question -- only we don't know what the question was.
Partition export import blues.
Alvin, October 20, 2004 - 5:40 am UTC
My development people asked me to extract some data from our db to be re-exported to another user's schema on another machine.
My export parameter file is as follows :
---
file =/export/home/oracle/exp_file/rtbetel_100.dat
log =/export/home/oracle/exp_file/rtbetel_100.log
compress=n
tables=(Artbetel.rtbetelcalldetailrecords:MON_2004_09)
query="where groupid=3441 "
---
The export was fine but the import has problems.
---
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'CDR' does not exist
IMP-00057: Warning: Dump file may not contain data of all partitions of this table
IMP-00055: Warning: partition or subpartition "RTBETELCALLDETAILRECORDS":"MON_2004_09" not found in export file
Import terminated successfully with warnings.
---
My Import paramters are as follows :
---
ignore=y
file=rtbetel_100.dat
fromuser=(Artbetel)
touser=(telic_c)
tables=(rtbetelcalldetailrecords:MON_2004_09)
---
Do I really need to re-create a similar tablespace? I thought ignore=y would bypass that and create the segment in the user's default tablespace.
October 20, 2004 - 7:29 am UTC
pre-create the table in the target database PARTITIONED.
you did a partition export, it is looking for a partitioned table.
else, use a query= on the export and use a WHERE CLAUSE to pull the data, then you can import into a simple pre-created, non-partitioned table.
The "Curly Shuffle" saved me a lot of ongoing work
Tim Cuthbertson, January 10, 2005 - 4:09 pm UTC
I have to do yearly maintenance to add partitions for a new year's data to a very large table (currently 60 million rows and growing). When I originally converted the table to range partitioned, several years ago, I short-sightedly coded (MAXVALUE, MAXVALUE) for the range limits of the then-highest partition.
Ever since, I have had to perform a complex split partition maintenance every year in order to add partitions for the upcoming year. The Oracle documentation and the info I researched on MetaLink gave me no other option.
Today, I searched Ask Tom and found this thread. I did the Curly Shuffle to change the highest range from (MAXVALUE,MAXVALUE) to (2004, MAXVALUE). For 2005 and on into the future, all I will have to do will be to add new partitions for the new year's data.
To quote Gomer Pyle, "Thank you, thank you, thank you!"
why you need index on t1 and t2
pingusan, March 03, 2005 - 8:34 am UTC
Hi
Following your steps, I noticed there is no need to add indexes on t1 and t2 to match the local indexes on the partitioned table, where you say:
"you can use steps like this (you'd have to create indexes on t1/t2 to match your local indexes and remember to add global indexes and grants to new before
using it)"
Could you clarify why we need it? I tested it and there is no error
Cheers
March 03, 2005 - 8:57 am UTC
if you'd like your indexes to be there (i was commenting that you would have to index t1 and t2 of course -- if you want them to be indexed)....
was just pointing out that in real life, there would be indices and other things to consider before swapping the partitions in general
why you need index on t1 and t2
A reader, March 03, 2005 - 9:35 am UTC
Hi
I think there is no need to index t1 and t2 in real life either. I mean, we want to modify the partitioned table's column length, not the intermediate tables (t1 and t2 in this case). I think it's enough to index NEW_TABLE, no? (t1 and t2 are going to be dropped afterwards anyway)
March 03, 2005 - 10:02 am UTC
you would sort of like your old indexes to be there, wouldn't you.
that was my point. you have to create a clone of the original, down to the indexes etc, and then swap.
else you have less than you started with when you are done. I'm assuming you would have "including indexes" etc to make this all work, else you'd lose them.
why you need index on t1 and t2
pingusan, March 03, 2005 - 10:12 am UTC
"you would sort of like your old indexes to be there wouldn't you"
Hi
that's correct, but the indexes on the new partitioned table (the clone)! not the intermediate table!
March 03, 2005 - 10:48 am UTC
tell you what, add:
ops$tkyte@ORA9IR2> create index t_idx1 on t(data) local;
Index created.
ops$tkyte@ORA9IR2> create index t_idx2 on t(data,pkey) global;
Index created.
ops$tkyte@ORA9IR2> select index_name from user_indexes where table_name = 'T';
INDEX_NAME
------------------------------
T_IDX1
T_IDX2
to the original example (right after you create T, before anything else)
and see if you have any indexes when you are done..... (you can even add 'including indexes' to the alter exchange)
You cannot get something for nothing here, you either index those intermediate tables or you have no indexes when you are done.
David Aldridge, March 03, 2005 - 10:39 am UTC
In reference to the original Curly Shuffle, would it not have been possible to add a new "3500" partition and then merge it with the "1500"?
March 03, 2005 - 10:54 am UTC
yes, but i believe that'll rewrite the partition(s) -- i believe later releases (this would have to be verified) may be able to do that without rewriting the partition.
why you need index on t1 and t2
pingusan, March 03, 2005 - 11:44 am UTC
Ah I know what you mean now.
I was assuming we'd get the DDL of the original partitioned table, then after exchanging all the data, run the DDL script to create the global and local indexes!
Altering Partition Key column Data Type
Maverick439, January 09, 2008 - 5:07 pm UTC
Tom, I currently have a couple of tables partitioned by range on dates, and a few normal tables. I am in the process of changing data types from Integer to Number, and I am able to dynamically generate scripts to alter the columns' data types. I got stuck at these partitioned tables, though: it fails with an ORA-14060 error, and I see at the beginning of this thread you mentioned a solution of creating another table, exchanging partitions, etc. Is there any way I can do this in a straightforward manner in Oracle 10g Release 2 using ALTER TABLE <tablename> MODIFY? Since I am using a common script to do the alters, I cannot use the approach you showed for the 9i version.
I mean to say: can I automate this process for all those partitioned tables?
January 10, 2008 - 2:16 pm UTC
but that won't fly with datatype issues - the datatypes have to match.
Altering Partition Key column Data Type
Maverick, January 10, 2008 - 2:47 pm UTC
So, what should I do for Data type changes?
Thanks for your help..
January 10, 2008 - 2:51 pm UTC
create table as select if you can do this offline
dbms_redefinition if needed to be online
or....
ops$tkyte%ORA10GR2> CREATE TABLE t
2 (
3 part_key int,
4 y varchar2(30)
5 )
6 PARTITION BY RANGE (part_key)
7 (
8 PARTITION part1 VALUES LESS THAN (100),
9 PARTITION part2 VALUES LESS THAN (200)
10 )
11 /
Table created.
ops$tkyte%ORA10GR2> insert into t select rownum, object_name from all_objects where rownum < 200;
199 rows created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter table t modify part_key number;
alter table t modify part_key number
*
ERROR at line 1:
ORA-14060: data type or length of a table partitioning column may not be
changed
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table part1 (part_key int, y varchar2(30) );
Table created.
ops$tkyte%ORA10GR2> create table part2 (part_key int, y varchar2(30) );
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> alter table t exchange partition part1 with table part1 including indexes without validation;
Table altered.
ops$tkyte%ORA10GR2> alter table t exchange partition part2 with table part2 including indexes without validation;
Table altered.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> drop table t;
Table dropped.
ops$tkyte%ORA10GR2> alter table part1 modify part_key number;
Table altered.
ops$tkyte%ORA10GR2> alter table part2 modify part_key number;
Table altered.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> CREATE TABLE t
2 (
3 part_key number,
4 y varchar2(30)
5 )
6 PARTITION BY RANGE (part_key)
7 (
8 PARTITION part1 VALUES LESS THAN (100),
9 PARTITION part2 VALUES LESS THAN (200)
10 )
11 /
Table created.
ops$tkyte%ORA10GR2> alter table t exchange partition part1 with table part1 including indexes without validation;
Table altered.
ops$tkyte%ORA10GR2> alter table t exchange partition part2 with table part2 including indexes without validation;
Table altered.
ops$tkyte%ORA10GR2> desc t
Name Null? Type
---------------------------------------- -------- ----------------------------
PART_KEY NUMBER
Y VARCHAR2(30)
But why this way?
Maverick, January 10, 2008 - 4:15 pm UTC
Tom, thanks for the info. But why can't Oracle give us a more straightforward way to alter partition keys, like it does for regular tables? [I know I'm missing something.. there must be a reason]
Why can't I say
Alter table T modify partition part1 (part_key number);
Alter table T modify partition part2 (part_key number);
{ not exactly this way.. but just trying to simplify :-)}
?????
Thanks,
January 10, 2008 - 5:32 pm UTC
the partition keys must be homogeneous - all the same.
And right now, you do things at the partition level, which would make them non-homogeneous for some bit of time.
Getting more questions ..
Maverick, January 11, 2008 - 4:46 pm UTC
Tom, as you said, if I take the approach you mentioned, then how do I take care of all the children of this table t? Let's say ->
T1 is a child of T
T2 is a child of T1
and all are related by foreign key constraints. Then how can I simply drop this table T? Will it not delete all the foreign key relations that I have
when I use
"Drop table T Cascade Constraints"?
And simply recreating the table T will not work; I might have to
1) recreate all the foreign key relations from T2->T1 and T1->T etc.?
2) and if I have to manually recreate all the relations like that, then how do I get the list of child tables for each level of parent table, starting from T?
Or am I missing something?
January 14, 2008 - 3:09 pm UTC
well, t->t1=>t2 - whatever you do to t will not affect t2, it doesn't care about t at all.
search this site for "cons" -- I've posted a script many times: you give it a parent table and it'll produce the foreign key constraints (which you should already have, if you ask me, as part of your ddl)
Correct..T2 doesn't get affected
Maverick, January 15, 2008 - 8:51 am UTC
I don't know what I was thinking. t2 is not affected when we drop t. So, I guess I have one more question [more like a clarification than a question]. From the examples you gave me, I understood that "exchange partition" is used to copy the data out to regular tables before the drop, and to put it back into the table after re-creating it. So, why can't we just do this?
create table temp_table as select * from t;
drop table t;
create table t [with all the columns and paritions etc..]
insert into t select * from temp_table;
Do you see any problem with that? I can't really think beyond some level ..so I keep bothering you to get more information :-) [and I know I'll get it]
Thanks for all the help.
January 15, 2008 - 1:01 pm UTC
exchange partition does not copy
it swaps a partition with a table, no data is moved.
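A quick way to see that for yourself (a small sketch with made-up names): the physical block a row lives in does not change across the exchange, because the two segments simply trade places in the data dictionary.

```sql
create table p ( x int )
partition by range (x)
( partition p1 values less than (100) );

insert into p values ( 1 );
commit;

-- note the physical block the row lives in
select dbms_rowid.rowid_block_number(rowid) from p;

create table plain ( x int );
alter table p exchange partition p1 with table plain;

-- same block number after the exchange:
-- the segment changed owners, nothing was copied
select dbms_rowid.rowid_block_number(rowid) from plain;
```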
ORA-14400: inserted partition key does not map to any partition
DJ, July 24, 2008 - 5:57 pm UTC
Tom,
I have a table T1 with more than 5 million rows. I created a temp table t2 and copied the data from T1. Then I dropped T1 and recreated T1, range partitioned on a not-null date column, like this:
CREATE TABLE T1 ("CHHSTD" VARCHAR2(1 byte), "CHPAT"
NUMBER(9), "CHPATY" VARCHAR2(1 byte), "CHTDT" DATE NOT NULL,....)
PARTITION BY RANGE (CHTDT)
(
PARTITION tab_year_2001 VALUES LESS
THAN(to_date('31-dec-2001','dd-mon-yyyy')),
PARTITION tab_year_2002 VALUES LESS
THAN(to_date('31-dec-2002','dd-mon-yyyy')),
PARTITION tab_year_2003 VALUES LESS
THAN(to_date('31-dec-2003','dd-mon-yyyy')),
....
PARTITION tab_year_2012 VALUES LESS
THAN(to_date('31-dec-2012','dd-mon-yyyy'))
);
Table created.
SQL> insert into T1( select * from t2);
insert into T1( select * from t2)
*
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition
So there must be some offending data in the old un-partitioned table.
My questions are:
1. How do you find out these offending data for this huge table and correct it?
2. Can you add a default or "catch-all" partition as the last partition, to hold any rows whose partition key doesn't fit elsewhere?
Thanks a lot.
July 29, 2008 - 9:45 am UTC
ops$tkyte%ORA10GR2> CREATE TABLE t
2 (
3 dt date,
4 x int,
5 y varchar2(30)
6 )
7 PARTITION BY RANGE (dt)
8 (
9 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
10 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy'))
11 )
12 /
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2
2 as
3 select to_date('11-mar-2003','dd-mon-yyyy')+rownum x, rownum y, rownum z
4 from all_users where rownum <= 4;
Table created.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> insert into t select * from t2;
insert into t select * from t2
*
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition
ops$tkyte%ORA10GR2>
<b>one approach to finding "offending rows" is to query the source table for anything greater than or equal to your highest partition key</b>
ops$tkyte%ORA10GR2> select * from t2 where x >= to_date('14-mar-2003','dd-mon-yyyy');
X Y Z
--------- ---------- ----------
14-MAR-03 3 3
15-MAR-03 4 4
ops$tkyte%ORA10GR2>
<b>another is to use dml error logging (10gr2 and above)
</b>
ops$tkyte%ORA10GR2> exec dbms_errlog.create_error_log('T');
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2> insert into t select * from t2 log errors reject limit unlimited;
2 rows created.
ops$tkyte%ORA10GR2> select * from t;
DT X Y
--------- ---------- ------------------------------
12-MAR-03 1 1
13-MAR-03 2 2
ops$tkyte%ORA10GR2> select * from err$_t;
ORA_ERR_NUMBER$
---------------
ORA_ERR_MESG$
-------------------------------------------------------------------------------
ORA_ERR_ROWID$
-------------------------------------------------------------------------------
OR
--
ORA_ERR_TAG$
-------------------------------------------------------------------------------
DT
-------------------------------------------------------------------------------
X
-------------------------------------------------------------------------------
Y
-------------------------------------------------------------------------------
14400
ORA-14400: inserted partition key does not map to any partition
I
14-MAR-03
3
3
14400
ORA-14400: inserted partition key does not map to any partition
I
15-MAR-03
4
4
ops$tkyte%ORA10GR2>
<b>and yes, you can use a "default" partition with maxvalue</b>
ops$tkyte%ORA10GR2> alter table t add PARTITION junk VALUES LESS THAN (MAXVALUE);
Table altered.
ops$tkyte%ORA10GR2> insert into t select * from t2;
4 rows created.
Thanks a lot
DJ, August 07, 2008 - 1:38 pm UTC
Tom,
Thanks and it is very helpful!
Vaibhav, August 11, 2008 - 12:55 pm UTC
Hi Tom,
We have 9 tables, where MHS_TRANSACTIONS is the master table with transaction_id as the primary key. Also, we have another table,
MHS_TRANSACTION_STATUS, where we store the states that a transaction goes through.
Once the transaction state is MARKED_FOR_ARCHIVE, we can delete that transaction from all our 9 tables.
We are currently partitioning these tables on the date on which the records were inserted.
say we have 20 thousand inserts today. In the next three days, the transaction state changes to
MARKED_FOR_ARCHIVE for all these transactions except a few, say 20-30. So, because of those few records,
I won't be able to truncate this partition.
Also, I would not want to update the partition key for these records,
since updating is a costly process (as far as I know).
Not only this: after moving these records into another partition, it might happen that they block
truncation of that partition too (if their state still doesn't change).
1) how do i go about this problem. i suggessted using a new column partition_time in all the tables and wch be set to the same time in all the tables
once the state changes to MARKED_FOR_ARCHIVE. Here, my transactions would be lying in maxvalue partition unless i update their partition key and updating partition
key of 9 tables would be a very costly operation.
2) Also, my master table has a primary key which is a foreign key in all the other tables, so truncation won't be possible unless I disable the constraint.
Is there any other way out of this problem?
Awaiting your comments, Tom.
August 12, 2008 - 8:42 am UTC
... Also, i would not want to update the partition key for these records
since updating is a costly process (as per my knowledge). ...
but everything in life is a tradeoff, see - if you want to truncate, you'll segregate.
maybe range/list partition this data. Yes, your update of the archive flag will cause a MOVE, but slowly - over time - and then you can truncate in one single statement.
don't rule out something because you THINK it will be costly, everything has a cost - is the cost of doing A and then B less than living with doing C instead?
A = update partition key
B = truncate partition
C = another way...
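A minimal sketch of that range/list idea - the table name, column names, and boundary dates here are all invented; only the partitioning scheme is the point:

```sql
-- Hypothetical transactions table: range partition by day, list
-- subpartition by archive status. ENABLE ROW MOVEMENT lets an update
-- of ARCHIVE_FLAG relocate a row into the 'Y' subpartition.
CREATE TABLE txn
( txn_id        NUMBER,
  txn_date      DATE,
  archive_flag  VARCHAR2(1)
)
PARTITION BY RANGE (txn_date)
SUBPARTITION BY LIST (archive_flag)
SUBPARTITION TEMPLATE
( SUBPARTITION archived     VALUES ('Y'),
  SUBPARTITION not_archived VALUES ('N')
)
( PARTITION p_20080811 VALUES LESS THAN (TO_DATE('12-08-2008','DD-MM-YYYY')),
  PARTITION p_max      VALUES LESS THAN (MAXVALUE)
)
ENABLE ROW MOVEMENT;

-- the flag update moves each row into the ARCHIVED subpartition...
UPDATE txn SET archive_flag = 'Y' WHERE txn_id = :id;

-- ...and later the archived rows go away in one DDL statement
-- (template-generated subpartition names are partition_template):
ALTER TABLE txn TRUNCATE SUBPARTITION p_20080811_archived;
```

This is the "update causes a MOVE, slowly, over time" tradeoff Tom describes: each flag update pays a delete+insert, and in exchange the periodic cleanup is a single TRUNCATE instead of a large delete.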
Blob movement
vaibhav, August 22, 2008 - 3:45 am UTC
Hi,
Following up on the above question, I think I would opt for the strategy where we move the UNARCHIVED records to a MAXVALUE partition, TRUNCATE the original partition, and then move the UNARCHIVED records back to their original partition.
To implement this, I would need to change the partition key of the BLOB table as well. I have a different tablespace to store the BLOB data.
My questions are:
1) How costly would this process be?
2) I know moving = delete + insert. Say I have 100,000 records in the table with the BLOB column, out of which 95,000 are ARCHIVED. So I would first move those 5,000 UNARCHIVED records to the MAXVALUE partition (i.e., delete 5,000 + insert 5,000).
Now I would truncate the original partition. Once this is done, I would move those MAXVALUE records back into the original partition (i.e., again delete 5,000 + insert 5,000).
So how is this going to take place? I mean, is the BLOB data going to be deleted and inserted again, or just the contents of the table (i.e., the locator and the other columns)?
I ran a test in which I had 5,000 records moving out of and back into the original partition.
It took 6 minutes to perform this operation.
Please explain what the exact process would be here.
What happens to the BLOB data when I truncate a partition?
What happens to the BLOB data when I move a partition?
How do I get my activity done in the least amount of time?
What strategy would you suggest in such a scenario? Our BLOBs are read only; we never update them.
Please help
Regards,
Vaibhav
August 22, 2008 - 9:44 am UTC
1) depends, doesn't it. How much data are you moving? That'll answer your "how expensive" question.
another approach is
create table as select records to keep;
alter partitioned table exchange partition with new-table;
rather than move them twice, just select what you want to keep into a table and swap that table with the existing partition.
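A sketch of that exchange idea, assuming a range partitioned table T with an ARCHIVE_STATUS column and a partition P1 to purge (all names invented; indexes and constraints on the real table would need matching handling):

```sql
-- keep only the unarchived rows in a scratch table with the
-- same column structure as T
CREATE TABLE t_keep AS
  SELECT * FROM t PARTITION (p1)
   WHERE archive_status = 'N';

-- swap the scratch table with the partition: a fast dictionary
-- operation, no row movement. Afterwards P1 holds only the kept
-- rows and T_KEEP holds the old (mostly archived) partition data.
ALTER TABLE t EXCHANGE PARTITION p1 WITH TABLE t_keep;

DROP TABLE t_keep;
```

The exchange itself is DDL against the dictionary, so the only real work is the CREATE TABLE AS SELECT of the rows to keep.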
Not Feasible
Vaibhav, August 24, 2008 - 2:00 am UTC
Hi,
We will be moving approximately 5000 records per day...
I thought of the idea you mentioned about creating a new table as select records to keep, but I don't think I can go ahead with this approach...
What happens to my inserts when this create table script is in progress? Also, I would need to change the table names all the time, at every execution. I can't afford that...
Ours is a 24-hour system and I can't have any downtime, even at 6 am in the morning...
I will have to look for some other alternative...
If moving 5000 records out and in takes around 5 minutes, then we are happy with this approach, but I am not satisfied, thinking there may still be a better approach to do the same...
Any thoughts on this, Tom?
August 26, 2008 - 8:09 pm UTC
5000 records is small, don't over analyze this, just delete, it is small, tiny, nothing.
vaibhav, September 08, 2008 - 4:27 am UTC
Hi,
We would be deleting almost 50k records every day.
For BLOBS with delete, only the pointer is deleted, the disk space is still not freed up (as per my knowledge).
Will this result in disk issues at some point in time?
Also, I guess I would need to rebuild the table every time, since delete will not free up the space used by the table itself.
Now, I am left with the following 2 options:
1) Use partitioning... partition on time and subpartition on archive status; when the message is ready for archiving, set this status to 'Y' (row movement), i.e., every message coming into my system will undergo row movement at some point in time.
2) Use SQL delete for a batch of 5000 and then rebuild the table (how to eliminate the disk issue here?)
Which option would be better?
Can you please give a detailed explanation of the above scenario?
September 08, 2008 - 4:06 pm UTC
... For BLOBS with delete, only the pointer is deleted, the disk space is still not
freed up (as per my knowledge).
...
that is true of ALL DATA. when you delete from ANY SEGMENT, the space is not "freed up and returned to the system as a whole", the space is released to the segment and free to be reused.
so, when you delete a blob, the space used by the blob is free to be used by other blobs added/modified in that table. Just like when you delete a row, the space used by that row is free to be used by other rows added/modified in that table.
... Also, i guess, i would need to rebuild the table every time since delete will
not free up the space used by the table itself.
...
IMMEDIATELY AND FOREVER WIPE THAT THOUGHT FROM YOUR HEAD period, just stop. That is so far from the truth.
just delete. 50,000 records is tiny
You might well partition for many reasons, if you can partition by some attribute and truncate over time instead of deleting 50k a day - great.
but you are NOT going to rebuild anything "daily" - stop thinking that way.
Great
vaibhav, September 09, 2008 - 4:47 am UTC
Thanks for that great explanation.
That went directly into the KEEP pool of my brain :)
Now I know how to go forward.
Simple Delete, no rebuild, no partitions, no sub partitions, no row movements, nothing. SIMPLE DELETE.
But give me one more piece of advice:
I am thinking of keeping a job that would delete these blobs, scheduled every 1 hour. Thus, I will be deleting in batches rather than waiting till the end of the day and deleting everything in one go.
But will this deletion affect my performance in peak hours? I mean, it should not happen that my job is running in the background, utilizing resources and hampering my application.
What are your comments on this? Also, what would be the most ideal way of deleting a blob in terms of tablespace and all... I have a separate tablespace for storing this blob.
"DELETE from MHS_MESSAGE WHERE archive_status = 'Y'" - is this the ideal approach?
September 09, 2008 - 7:59 am UTC
only you can answer this one - is your peak so "peak" that the addition of this one process would break it down? I don't know, only you do.
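The hourly batch delete described above could be sketched with dbms_scheduler; the table name and predicate come from the thread, while the job name and the per-run batch cap are invented:

```sql
BEGIN
  dbms_scheduler.create_job(
    job_name   => 'PURGE_ARCHIVED_MSGS',   -- hypothetical name
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[
      BEGIN
        -- cap each run to a small batch so no single run holds
        -- locks or consumes resources for long
        DELETE FROM mhs_message
         WHERE archive_status = 'Y'
           AND rownum <= 5000;
        COMMIT;
      END;]',
    repeat_interval => 'FREQ=HOURLY',
    enabled         => TRUE );
END;
/
```

Whether hourly batches beat one end-of-day delete is the workload question Tom raises: only measuring it during your own peak can answer it.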
vaibhav, September 09, 2008 - 9:25 am UTC
That's true.
Maybe it won't make a difference even if I do it during peak hours... let's see.
Anyway, thanks a lot for all your advice.
Cheers
Partition and Keys
vs, March 30, 2011 - 10:22 am UTC
I am using Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit.
I have two issues:
1. Drop a column from the existing partitioning key, i.e., go from a multi-column key to a single-column key. The table given below is an existing table which is partitioned on BD and FD; now I need to drop one partitioning key column and keep just one key, i.e., BD. What is a safe and quick way to do it without any data loss?
Example:
CREATE TABLE "T_R_TMP"
( "R_TMP_SID" NUMBER(20,0) ,
"FSID" NUMBER(20,0) ,
"BD" DATE ,
FD Date,
"PN" NUMBER(38,0) ,
"FC" VARCHAR2(6) ,
"RT" TIMESTAMP (6) ,
"CUSR_ID" VARCHAR2(8) ,
"UUSR_ID" VARCHAR2(8))
PARTITION BY RANGE (BD, FD)
( PARTITION S1 VALUES LESS THAN ( TO_DATE('22/12/2005','DD/MM/YYYY'),TO_DATE('22/12/2005','DD/MM/YYYY'))
TABLESPACE TB01,
PARTITION S2 VALUES LESS THAN ( TO_DATE('07/01/2009','DD/MM/YYYY'), TO_DATE('07/01/2009','DD/MM/YYYY'))
TABLESPACE TB02,
PARTITION S3 VALUES LESS THAN (TO_DATE('10/01/2009','DD/MM/YYYY'),TO_DATE('10/01/2009','DD/MM/YYYY'))
TABLESPACE TB03,
PARTITION S4 VALUES LESS THAN (TO_DATE('13/01/2009','DD/MM/YYYY'),TO_DATE('13/01/2009','DD/MM/YYYY'))
TABLESPACE TB04,
PARTITION S5 VALUES LESS THAN (MAXVALUE, MAXVALUE)
TABLESPACE TB05
);
2. How can I change the max bound date on each partition? For example:
CREATE TABLE "T_R_TMP"
( "R_TMP_SID" NUMBER(20,0) ,
"FSID" NUMBER(20,0) ,
"BD" DATE ,
FD Date,
"PN" NUMBER(38,0) ,
"FC" VARCHAR2(6) ,
"RT" TIMESTAMP (6) ,
"CUSR_ID" VARCHAR2(8) ,
"UUSR_ID" VARCHAR2(8))
PARTITION BY RANGE (BD)
( PARTITION S1 VALUES LESS THAN ( TO_DATE('22/12/2005','DD/MM/YYYY'))
TABLESPACE TB01,
PARTITION S2 VALUES LESS THAN ( TO_DATE('07/01/2009','DD/MM/YYYY'))
TABLESPACE TB02,
PARTITION S3 VALUES LESS THAN (TO_DATE('10/01/2009','DD/MM/YYYY'))
TABLESPACE TB03,
PARTITION S4 VALUES LESS THAN (TO_DATE('13/01/2009','DD/MM/YYYY'))
TABLESPACE TB04,
PARTITION S5 VALUES LESS THAN ( MAXVALUE)
TABLESPACE TB05
);
I need to change the max boundary for partition "S1" from TO_DATE('22/12/2005','DD/MM/YYYY') to TO_DATE('30/01/2006','DD/MM/YYYY').
April 12, 2011 - 9:47 am UTC
1) you have to create a new table. You'll have to either
a) stop write access to the table if you want to do this offline using CREATE TABLE AS SELECT,
or
b) create a new table and use dbms_redefinition to online redefine the existing table into the new structure.
You cannot just drop a partition key, that would cause every row to possibly move - you need to recreate the object.
ops$tkyte%ORA11GR2> CREATE TABLE "NEW_T_R_TMP"
2 ( "R_TMP_SID" ,
3 "FSID" ,
4 "BD" ,
5 FD ,
6 "PN" ,
7 "FC" ,
8 "RT" ,
9 "CUSR_ID" ,
10 "UUSR_ID" )
11 PARTITION BY RANGE (BD)
12 ( PARTITION S1 VALUES LESS THAN ( TO_DATE('22/12/2005','DD/MM/YYYY')),
13 PARTITION S2 VALUES LESS THAN ( TO_DATE('07/01/2009','DD/MM/YYYY')),
14 PARTITION S3 VALUES LESS THAN (TO_DATE('10/01/2009','DD/MM/YYYY')),
15 PARTITION S4 VALUES LESS THAN (TO_DATE('13/01/2009','DD/MM/YYYY')),
16 PARTITION S5 VALUES LESS THAN ( MAXVALUE)
17 )
18 as
19 select * from t_r_tmp;
Table created.
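For the online route in option (b), a dbms_redefinition sketch - it assumes T_R_TMP has a usable primary key, and that NEW_T_R_TMP has been created empty (not via CREATE TABLE AS SELECT) with the new single-column partitioning:

```sql
BEGIN
  -- verify the table can be redefined using its primary key
  dbms_redefinition.can_redef_table( user, 'T_R_TMP' );

  -- start copying rows into the empty interim table while the
  -- original stays fully available for reads and writes
  dbms_redefinition.start_redef_table( user, 'T_R_TMP', 'NEW_T_R_TMP' );

  -- catch up changes made during the copy, then swap the two
  -- tables in the dictionary
  dbms_redefinition.sync_interim_table( user, 'T_R_TMP', 'NEW_T_R_TMP' );
  dbms_redefinition.finish_redef_table( user, 'T_R_TMP', 'NEW_T_R_TMP' );
END;
/
```

After finish_redef_table, T_R_TMP has the new single-key partitioning and NEW_T_R_TMP holds the old structure; dependent indexes and constraints would be carried over with dbms_redefinition's copy routines before finishing.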
2) You'll have to SPLIT S2 and then MERGE the new partition with the existing S1 partition.
ops$tkyte%ORA11GR2> CREATE TABLE "T_R_TMP"
2 ( "R_TMP_SID" NUMBER(20,0) ,
3 "FSID" NUMBER(20,0) ,
4 "BD" DATE ,
5 FD Date,
6 "PN" NUMBER(38,0) ,
7 "FC" VARCHAR2(6) ,
8 "RT" TIMESTAMP (6) ,
9 "CUSR_ID" VARCHAR2(8) ,
10 "UUSR_ID" VARCHAR2(8))
11 PARTITION BY RANGE (BD)
12 ( PARTITION S1 VALUES LESS THAN ( TO_DATE('22/12/2005','DD/MM/YYYY')),
13 PARTITION S2 VALUES LESS THAN ( TO_DATE('07/01/2009','DD/MM/YYYY')),
14 PARTITION S3 VALUES LESS THAN (TO_DATE('10/01/2009','DD/MM/YYYY')),
15 PARTITION S4 VALUES LESS THAN (TO_DATE('13/01/2009','DD/MM/YYYY')),
16 PARTITION S5 VALUES LESS THAN ( MAXVALUE)
17 )
18 /
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter table t_r_tmp
2 split partition s2
3 at ( to_date( '30/01/2006', 'dd/mm/yyyy' ) )
4 into ( partition new, partition s2 );
Table altered.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter table t_r_tmp
2 merge partitions s1, new into partition new;
Table altered.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> alter table t_r_tmp rename partition new to s1;
Table altered.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select dbms_metadata.get_ddl( 'TABLE', 'T_R_TMP' ) from dual;
DBMS_METADATA.GET_DDL('TABLE','T_R_TMP')
--------------------------------------------------------------------------------
CREATE TABLE "OPS$TKYTE"."T_R_TMP"
( "R_TMP_SID" NUMBER(20,0),
"FSID" NUMBER(20,0),
"BD" DATE,
"FD" DATE,
"PN" NUMBER(38,0),
"FC" VARCHAR2(6),
"RT" TIMESTAMP (6),
"CUSR_ID" VARCHAR2(8),
"UUSR_ID" VARCHAR2(8)
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS"
PARTITION BY RANGE ("BD")
(PARTITION "S1" VALUES LESS THAN (TO_DATE(' 2006-01-30 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ,
PARTITION "S2" VALUES LESS THAN (TO_DATE(' 2009-01-07 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ,
PARTITION "S3" VALUES LESS THAN (TO_DATE(' 2009-01-10 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ,
PARTITION "S4" VALUES LESS THAN (TO_DATE(' 2009-01-13 00:00:00', 'SYYYY-MM-DD
HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" ,
PARTITION "S5" VALUES LESS THAN (MAXVALUE) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "USERS" )
How to catch ORA-14400: inserted partition key does not map to any partition before it fails?
Sarayu, September 05, 2012 - 8:43 am UTC
Hi Tom,
I would like to run a SQL every night or every week in our databases to find the candidate tables which might fail with the error "ORA-14400: inserted partition key does not map to any partition", and address them before developers hit this error.
Can you help me with how to find this?
Thank you
Sarayu
September 10, 2012 - 7:44 pm UTC
huh? this makes no sense at all to me.
any table that doesn't have "maxvalue" on a range or default on a list could fail with 14400. think about it.
What "will fail"
A reader, September 11, 2012 - 9:05 am UTC
yes Tom. You are right.
We know what "could fail". My intention is to find what "will fail" for sure on the next business day or next week, and for which partitions we need to take action to avoid those errors.
September 14, 2012 - 4:21 pm UTC
explain in words what it means to "fail" in this case. You must have something in mind.
are you looking, for example, for tables with range partitions such that the high partition's date (assuming it is a single-column date range partition??) is within two days or something like that?
what is your criteria for "failure"
you are right
Sarayu, September 20, 2012 - 10:42 am UTC
yes Tom. You are right.
September 26, 2012 - 11:40 am UTC
this is not fully baked, but the concept is there:
ops$tkyte%ORA11GR2> CREATE TABLE t
2 (
3 dt date,
4 x int,
5 y varchar2(30)
6 )
7 PARTITION BY RANGE (dt)
8 (
9 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
10 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
11 PARTITION junk VALUES LESS THAN (MAXVALUE)
12 )
13 /
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace function get_high_value( p_table_name in varchar2, p_partition_name in varchar2 ) return date
2 authid current_user
3 as
4 l_high_value long;
5 l_date date;
6 begin
7 select high_value
8 into l_high_value
9 from user_tab_partitions
10 where table_name = p_table_name
11 and partition_name = p_partition_name;
12
13 if ( l_high_value <> 'MAXVALUE' )
14 then
15 execute immediate 'begin :x := ' || l_high_value || '; end;' using OUT l_date;
16 end if;
17
18 return l_date;
19 end;
20 /
Function created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select table_name, partition_name, get_high_value( table_name, partition_name )
2 from user_tab_partitions
3 where table_name = 'T';
TABLE_NAME PARTITION_NAME GET_HIGH_
------------------------------ ------------------------------ ---------
T JUNK
T PART1 13-MAR-03
T PART2 14-MAR-03
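Building on get_high_value, the nightly check the poster asked about might look something like the query below. It assumes every range partitioned table in the schema is partitioned on a single DATE column (the function above would fail on a number key), and the two-day threshold is arbitrary:

```sql
-- flag tables whose highest date boundary is about to be exceeded;
-- tables with a MAXVALUE partition return NULL from get_high_value,
-- and MAX() ignores NULLs, so filter those tables out separately
-- if they should not be reported
SELECT table_name,
       MAX( get_high_value( table_name, partition_name ) ) AS high_bound
  FROM user_tab_partitions
 GROUP BY table_name
HAVING MAX( get_high_value( table_name, partition_name ) ) < SYSDATE + 2;
```

Anything this returns is a table whose top partition boundary is less than two days away, i.e. a candidate for an ADD/SPLIT PARTITION before inserts start failing with ORA-14400.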
example needed.
A reader, October 01, 2012 - 10:46 pm UTC
Sir,
Please help me produce examples of the following partition types:
Interval
Interval-Range
Interval-List
Interval-Hash
I have seen the example of interval partitioning, but what about the others? How are they implemented, and what is their usage? Please forgive my lack of knowledge, since I am just in the learning process.
October 09, 2012 - 11:32 am UTC
replace the word interval with range and their implementation and usage is identical to range.
they are used for the same reasons
an interval partitioned table is just a range partitioned table whereby the partitions are created on the fly as we fill them with data.
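Minimal sketches of the composite interval variants asked about, on invented tables (11g syntax; each gets one starting range partition, and further daily partitions appear automatically as data arrives):

```sql
-- interval-hash: each automatic daily partition is split into
-- 4 hash subpartitions on ID
CREATE TABLE t_ih ( dt DATE, id NUMBER )
PARTITION BY RANGE (dt) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
SUBPARTITION BY HASH (id) SUBPARTITIONS 4
( PARTITION p0 VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY')) );

-- interval-list: subpartition each automatic partition by a code
CREATE TABLE t_il ( dt DATE, region VARCHAR2(2) )
PARTITION BY RANGE (dt) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
SUBPARTITION BY LIST (region)
SUBPARTITION TEMPLATE
( SUBPARTITION east VALUES ('E'),
  SUBPARTITION west VALUES ('W'),
  SUBPARTITION oth  VALUES (DEFAULT) )
( PARTITION p0 VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY')) );

-- interval-range: subpartition by a second range, e.g. amount bands
CREATE TABLE t_ir ( dt DATE, amt NUMBER )
PARTITION BY RANGE (dt) INTERVAL (NUMTODSINTERVAL(1,'DAY'))
SUBPARTITION BY RANGE (amt)
SUBPARTITION TEMPLATE
( SUBPARTITION small VALUES LESS THAN (1000),
  SUBPARTITION big   VALUES LESS THAN (MAXVALUE) )
( PARTITION p0 VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY')) );
```

In each case the subpartitioning template is applied to every partition the interval mechanism creates, so the usage is exactly the same as the equivalent composite range table.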
Partition with local index
Shambu, October 16, 2012 - 5:13 pm UTC
Hello Tom,
I have created a table "testpart" from dba_objects and partitioned the table by range on the "created" column. I have added a large number of rows to this table for testing. I have also created a local index on the "created" column, and gathered stats on the table and index.
SQL> select index_name, PARTITION_NAME,NUM_ROWS, HIGH_VALUE,last_analyzed from user_ind_partitions;
INDEX_NAME PARTITION_NAME NUM_ROWS HIGH_VALUE LAST_ANAL
------------------------------ ------------------------------ ---------- -------------------------------------------------------------------------------- ---------
TESTPART_IDX_LOC P08 3997120 TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
TESTPART_IDX_LOC P09 988160 TO_DATE(' 2009-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
TESTPART_IDX_LOC P10 2029440 TO_DATE(' 2010-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
TESTPART_IDX_LOC P11 15680 TO_DATE(' 2011-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
TESTPART_IDX_LOC P12 315840 TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
TESTPART_IDX_LOC P13 133440 TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
TESTPART_IDX_LOC P06 0 TO_DATE(' 2006-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
TESTPART_IDX_LOC P07 0 TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA 16-OCT-12
Here is my question: why is my query not picking up the local index when I try to fetch rows that make up less than 10% of the data? The number of rows in partition P12 is near 315840, and the rows I am trying to fetch (8640 rows) are less than 10% of it. So why is my local index not being used here? I tried to hint it... but that didn't work.
SQL> select OWNER,OBJECT_NAME, STATUS from testpart where created between to_date('01-JAN-2011','dd-mon-yyyy') and to_date('21-JAN-2011','dd-mon-yyyy');
8640 rows selected.
Elapsed: 00:00:00.20
Execution Plan
----------------------------------------------------------
Plan hash value: 2077141103
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8653 | 380K| 974 (1)| 00:00:12 | | |
| 1 | PARTITION RANGE SINGLE| | 8653 | 380K| 974 (1)| 00:00:12 | 7 | 7 |
|* 2 | TABLE ACCESS FULL | TESTPART | 8653 | 380K| 974 (1)| 00:00:12 | 7 | 7 |
---------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("CREATED"<=TO_DATE(' 2011-01-21 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
4982 consistent gets
0 physical reads
0 redo size
346107 bytes sent via SQL*Net to client
6849 bytes received via SQL*Net from client
577 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
8640 rows processed
October 16, 2012 - 7:06 pm UTC
... Why is my query not picking up the local index when I try to fetch the rows which is less than 10% of data? ...
There is no set percentage of the data that will be retrieved via an index. You could have an index whereby selecting 1% of the data would blow off the index and use a full scan.
Consider a table with 10 rows per block.
You want to get 10% of the rows from that table.
You have 10,000 rows in the table (so you have 1,000 blocks). You want to get 1,000 of them.
You might have to read EVERY SINGLE BLOCK from that table to get 10% of the rows - the 1,000 rows you want might be on 1,000 different table blocks.
OR
You might have to read 10 blocks to get those 1,000 rows - if the rows you want are located right next to each other.
To make it more real, think of a table that has a NAME column and a CREATED_DATE column. You insert into this table some new name (arrives randomly) and sysdate.
If you say "give me all of the records that begin with a name of A%", and we decide that will return about 10% of the data we might well full scan since the A's would be spread all over the place.
On the other hand, if you say "where created_date between :x and :y" and we decide that will return about 10% of the data - we might well use an index on that since all of the rows with dates close to each other would naturally be physically close to each other - simply because we add rows to the table in order of sysdate.
ops$tkyte%ORA11GR2> create table organized
2 as
3 select x.*
4 from (select * from stage order by object_name) x
5 /
Table created.
ops$tkyte%ORA11GR2> create table disorganized
2 as
3 select x.*
4 from (select * from stage order by dbms_random.random) x
5 /
Table created.
ops$tkyte%ORA11GR2> pause
ops$tkyte%ORA11GR2> create index organized_idx on organized(object_name);
Index created.
ops$tkyte%ORA11GR2> create index disorganized_idx on disorganized(object_name);
Index created.
ops$tkyte%ORA11GR2> begin
2 dbms_stats.gather_table_stats
3 ( user, 'ORGANIZED',
4 estimate_percent => 100,
5 method_opt=>'for all indexed columns size 254'
6 );
7 dbms_stats.gather_table_stats
8 ( user, 'DISORGANIZED',
9 estimate_percent => 100,
10 method_opt=>'for all indexed columns size 254'
11 );
12 end;
13 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> pause
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> clear screen
ops$tkyte%ORA11GR2> select table_name, blocks, num_rows, 0.05*num_rows, 0.10*num_rows from user_tables
2 where table_name like '%ORGANIZED' order by 1;
TABLE_NAME BLOCKS NUM_ROWS 0.05*NUM_ROWS 0.10*NUM_ROWS
------------------------------ ---------- ---------- ------------- -------------
DISORGANIZED 1065 72939 3646.95 7293.9
ORGANIZED 1066 72939 3646.95 7293.9
ops$tkyte%ORA11GR2> select table_name, index_name, clustering_factor from user_indexes
2 where table_name like '%ORGANIZED' order by 1;
TABLE_NAME INDEX_NAME CLUSTERING_FACTOR
------------------------------ ------------------------------ -----------------
DISORGANIZED DISORGANIZED_IDX 72877
ORGANIZED ORGANIZED_IDX 1040
<b>so, we have two tables, exactly the same data - just in different orders. In the
organized table, the rows are sorted by object_name, in the disorganized - they are not.
Notice the clustering factor statistic there - in the organized table it is close to the
number of blocks in the table, in the disorganized - near the number of rows. the
clustering factor is a measure of how many IO's it would take to read the entire table
via the index. That metric shows that the organized table index would be used to read
more rows out via the index than the disorganized - because the table rows just happen
to be sorted like the index keys are. </b>
ops$tkyte%ORA11GR2> set autotrace traceonly explain
ops$tkyte%ORA11GR2> select * from organized where object_name like 'F%';
Execution Plan
----------------------------------------------------------
Plan hash value: 1925627673
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CP
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 144 | 13968 | 6 (
| 1 | TABLE ACCESS BY INDEX ROWID| ORGANIZED | 144 | 13968 | 6 (
|* 2 | INDEX RANGE SCAN | ORGANIZED_IDX | 144 | | 3 (
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OBJECT_NAME" LIKE 'F%')
filter("OBJECT_NAME" LIKE 'F%')
<b>that query has a very low cost because the optimizer thinks it will do about 3 IO's
against the index and 3 more against the table - which is just about right. My table
gets about 70 rows per block - so it would take three table blocks to read the rows out
- and the index would have to read a root block, branch block and leaf block to get the
rowids for the 144 rows</b>
ops$tkyte%ORA11GR2> select * from disorganized where object_name like 'F%';
Execution Plan
----------------------------------------------------------
Plan hash value: 3767053355
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 144 | 13968 | 148
| 1 | TABLE ACCESS BY INDEX ROWID| DISORGANIZED | 144 | 13968 | 148
|* 2 | INDEX RANGE SCAN | DISORGANIZED_IDX | 144 | | 3
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OBJECT_NAME" LIKE 'F%')
filter("OBJECT_NAME" LIKE 'F%')
<b>but look at the cost of the same query against the other table. It is 148. It would
take 3 IO's against the index as before - but somewhere around 144 IO's against the
table since the data is so spread out! so much higher cost. But both were using
indexes so far.
remember this is a 72,000 row table. 1% of that is 720.</b>
ops$tkyte%ORA11GR2> select * from organized where object_name like 'A%';
Execution Plan
----------------------------------------------------------
Plan hash value: 1925627673
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CP
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1826 | 172K| 39 (
| 1 | TABLE ACCESS BY INDEX ROWID| ORGANIZED | 1826 | 172K| 39 (
|* 2 | INDEX RANGE SCAN | ORGANIZED_IDX | 1826 | | 12 (
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("OBJECT_NAME" LIKE 'A%')
filter("OBJECT_NAME" LIKE 'A%')
<b>if we go after 1,826 rows - this query on this table will still use an index - the
cost is still very low since the 1,826 rows we want are all next to each other...
but...</b>
ops$tkyte%ORA11GR2> select * from disorganized where object_name like 'A%';
Execution Plan
----------------------------------------------------------
Plan hash value: 2727546897
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1826 | 172K| 291 (1)| 00:00:0
|* 1 | TABLE ACCESS FULL| DISORGANIZED | 1826 | 172K| 291 (1)| 00:00:0
-------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("OBJECT_NAME" LIKE 'A%')
<b>getting the same 1,826 rows will not use an index here. the full scan cost is 291 -
way below the cost of getting 1,826 using single block IO's all over the place.
in fact, we can see that if we were to get around 290 rows out of this table - we would
stop using an index. that is about 0.4% of the rows in the table!!!!
we would not use this index to retrieve less than 0.5% of the table - no where near
10%!!!!
</b>
partition
Shambu, October 16, 2012 - 10:24 pm UTC
Thank you Tom for the response. It really makes sense... but I have a few questions to clear my doubts.
<quote>
On the other hand, if you say "where created_date between :x and :y" and we decide that will return about 10% of the data - we might well use an index on that since all of the rows with dates close to each other would naturally be physically close to each other - simply because we add rows to the table in order of sysdate.
</quote>
1) The table that I was referring to earlier, "testpart", is a range-partitioned table (copy of dba_objects)... and I do have a local index on the created column (DATE datatype). So the data should be placed close together, correct? If yes, then why does it do a full partition scan?
2) Does it mean that the partitioned table "testpart" has disorganized data blocks for all/most partitions? How can we fix this issue for a partitioned table?
3) How can you calculate how many rows/block for the organized and disorganized tables?
4) As per your example, the disorganized table does not have related rows near each other... so it has to perform a full table scan even to retrieve a small amount of data.
How can I find out, in my database, which tables are disorganized? How can I fix it? ( alter table <tbl_name> move; )
Thank you very much !!
October 17, 2012 - 3:10 pm UTC
1) why would it be ? You didn't share with us the method for loading. 10% is a large amount, we *might* use an index, we *might* not use an index. It depends. I don't have any idea what your data actually looks like, how it was loaded, how organized the table is with respect to the index key.
2) it could, it could also mean that we decided we were going to get so much of a single partition that full scanning it made sense. You were getting 10% of a table, but how much of a partition were you getting? probably a lot of it.
3) you calculate it the same for any sort of table: how wide are your rows, how big is your block - now divide. Or look at the number of blocks used by the table, take the number of rows in the table, and divide.
4) read about the clustering factor (search this site, search google). Then, ask yourself "how many ways can a table be sorted?"
that is, if you reorganize the EMP table to be sorted by name - it will no longer be sorted by hiredate. If you reorganize it to be sorted by EMPNO, it won't be sorted by ename or hiredate. And so on. You don't necessarily "fix" it, you use this knowledge to understand why an index may or may not be used in certain cases.
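A rough query to spot candidates, following the clustering factor discussion above - a clustering factor near NUM_ROWS means the table is not stored in anything like that index key's order, while one near BLOCKS means it is; the 0.9 cutoff is arbitrary:

```sql
-- indexes whose clustering factor is close to the table's row
-- count, i.e. the rows for consecutive key values are scattered
-- across many different blocks
SELECT i.table_name, i.index_name,
       t.blocks, t.num_rows, i.clustering_factor
  FROM user_indexes i
  JOIN user_tables  t ON t.table_name = i.table_name
 WHERE i.clustering_factor IS NOT NULL
   AND i.clustering_factor > t.num_rows * 0.9
 ORDER BY i.table_name, i.index_name;
```

As the answer notes, a table can only be sorted one way, so a "bad" clustering factor on one index is usually the price of a good one on another; this query identifies them, it does not mean every hit should be reorganized.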
ORA-14299
Dhaval, April 30, 2014 - 9:58 pm UTC
Hello Tom,
I have related question, please see below table definition and error during insert.
CREATE TABLE myTable
(
RANGEPARTKEY NUMBER(20) NOT NULL,
HASHPARTKEY NUMBER(20) NOT NULL,
SOMEID1 NUMBER(20) NOT NULL,
SOMEID2 NUMBER(20) NOT NULL,
SOMEVAL NUMBER(32,10) NOT NULL
)
PARTITION BY RANGE (RANGEPARTKEY) INTERVAL (1)
SUBPARTITION BY HASH (HASHPARTKEY) SUBPARTITIONS 16
(PARTITION myINITPart VALUES LESS THAN (1) NOCOMPRESS );
Insert Into myTable
Values
(65535,1,1,1,123.123)
ORA-14299: total number of partitions/subpartitions exceeds the maximum limit
I am aware of the restriction Oracle has on a table (max of 1024K - 1 partitions, including subpartitions), and that is preventing me from creating a record with a key value of 65535.
Now I am stuck, as I have ids greater than this number (65535); the question becomes: how do I manage storing data for ids greater than 65534?
One alternative I was thinking of is to retire/drop old partitions and modify the first partition, myINITPart, to cover the range of more partitions (which are retired in reality anyway) - this way I could have more ids available to store.
So PARTITION myINITPart VALUES LESS THAN (1) would be replaced with PARTITION myINITPart VALUES LESS THAN (1000), and Oracle would allow me to store data for an additional 1000 ids. The big problem is that Oracle does not allow me to modify this initial partition's attributes.
I would appreciate your quick response.
Bottom line: I would like to store data for ids greater than 65535 without any restrictions.
Thank you very much,
Dhaval
ORA-14299
Dhaval, May 01, 2014 - 1:22 pm UTC
Regarding ORA-14299 error post...
BTW, I forgot to mention in my last post that each partition will have around 4 billion rows. The only alternative I could see (and want to avoid) is to recycle RANGEPARTKEY and start all over from 1 - scary.