
Question and Answer

Connor McDonald

Thanks for the question, Rick.

Asked: May 19, 2008 - 3:57 pm UTC

Last updated: April 27, 2021 - 6:15 am UTC

Version: 10g

Viewed 100K+ times!

You Asked

I am attempting to partition a Table T1 with existing data.
Table T1 is as follows:

COLUMN DATATYPE
-----------------
COLUMN1 NUMBER PK
COLUMN2 NUMBER
COLUMN3 NUMBER

I am using this approach:
1- ALTER TABLE T1 RENAME TO T1_TEMP
2- CREATE TABLE T1 (
COLUMN1 NUMBER,
COLUMN2 NUMBER,
COLUMN3 NUMBER)
PARTITION BY HASH (COLUMN1)
( PARTITION P1 TABLESPACE T1,
PARTITION P2 TABLESPACE T1
);
3- INSERT INTO T1 SELECT * FROM T1_TEMP
4- DROP TABLE T1_TEMP CASCADE CONSTRAINTS

The problem with the above method is that I have tables T2, T3, Tn with a foreign key reference on COLUMN1 of T1. The reference is automatically moved to T1_TEMP but not back to T1.

QUESTION:
---------
How to modify all references to T1_TEMP to point back to T1 after having copied the data in step 3?

THANKS!

and Tom said...

probably the easiest, safest way will be to use dbms_redefinition - it'll copy over the grants, indexes, constraints, everything.

Table P is the parent table.

Table T1 is the existing non-partitioned table.
Table T2 is the new partitioned table.

At the end, we are left with P and T1 - T1 being partitioned.


ops$tkyte%ORA10GR2> create table p ( x primary key )
  2  as
  3  select user_id from all_users;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t1
  2  as
  3  select * from all_users
  4  /

Table created.

ops$tkyte%ORA10GR2> alter table t1 add constraint t1_pk primary key(user_id);

Table altered.

ops$tkyte%ORA10GR2> alter table t1 add constraint t1_fk foreign key(user_id) references p(x);

Table altered.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t2
  2  ( username varchar2(30),
  3    user_id  number,
  4    created  date
  5  )
  6  partition by hash(user_id) partitions 8;

Table created.

ops$tkyte%ORA10GR2> exec dbms_redefinition.can_redef_table( user, 'T1' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec dbms_redefinition.start_redef_table( user, 'T1', 'T2' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> variable nerrors number
ops$tkyte%ORA10GR2> begin
  2          dbms_redefinition.copy_table_dependents
  3          ( user, 'T1', 'T2',
  4            copy_indexes => dbms_redefinition.cons_orig_params,
  5            num_errors => :nerrors );
  6  end;
  7  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec dbms_redefinition.finish_redef_table( user, 'T1', 'T2' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select dbms_metadata.get_ddl( 'TABLE', 'T1' ) from dual;

DBMS_METADATA.GET_DDL('TABLE','T1')
-------------------------------------------------------------------------------

  CREATE TABLE "OPS$TKYTE"."T1"
   (    "USERNAME" VARCHAR2(30) CONSTRAINT "SYS_C0026838" NOT NULL ENABLE NOVALIDATE,
        "USER_ID" NUMBER CONSTRAINT "SYS_C0026839" NOT NULL ENABLE NOVALIDATE,
        "CREATED" DATE CONSTRAINT "SYS_C0026840" NOT NULL ENABLE NOVALIDATE,
         CONSTRAINT "T1_PK" PRIMARY KEY ("USER_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "USERS"  ENABLE NOVALIDATE,
         CONSTRAINT "T1_FK" FOREIGN KEY ("USER_ID")
          REFERENCES "OPS$TKYTE"."P" ("X") ENABLE NOVALIDATE
   ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(
  BUFFER_POOL DEFAULT)
  TABLESPACE "USERS"
  PARTITION BY HASH ("USER_ID")
 (PARTITION "SYS_P1017"
   TABLESPACE "USERS",
 PARTITION "SYS_P1018"
   TABLESPACE "USERS",
 PARTITION "SYS_P1019"
   TABLESPACE "USERS",
 PARTITION "SYS_P1020"
   TABLESPACE "USERS",
 PARTITION "SYS_P1021"
   TABLESPACE "USERS",
 PARTITION "SYS_P1022"
   TABLESPACE "USERS",
 PARTITION "SYS_P1023"
   TABLESPACE "USERS",
 PARTITION "SYS_P1024"
   TABLESPACE "USERS")



ops$tkyte%ORA10GR2> select constraint_name, constraint_type from user_constraints where table_name = 'T1';

CONSTRAINT_NAME                C
------------------------------ -
SYS_C0026838                   C
SYS_C0026839                   C
SYS_C0026840                   C
T1_PK                          P
T1_FK                          R





Comments

Richard Kheir, May 20, 2008 - 1:55 pm UTC


Rick, May 20, 2008 - 5:39 pm UTC

The question was about resolving references "TO" the table being redefined.
This was NOT addressed!
Tom Kyte
May 20, 2008 - 9:27 pm UTC

Rick

you wanted to turn a table T1 into a partitioned table.

Which part of THAT was not addressed? why would I address something that is *moot* when you do it the easy, correct way?


and, well,

ops$tkyte%ORA10GR2> variable nerrors number
ops$tkyte%ORA10GR2> begin
  2          dbms_redefinition.copy_table_dependents
  3          ( user, 'T1', 'T2',
  4            copy_indexes => dbms_redefinition.cons_orig_params,
  5            num_errors => :nerrors );
  6  end;
  7  /



does that - it creates all of the constraints on the new partitioned table for you, grants and all.


with your create table/insert approach - you would have to copy indexes, grants, all constraints, etc. You don't simply "repoint things", you have to put them in place. You cannot "repoint" a foreign key at all, you have to recreate it.

And I, well, sort of demonstrated how to do that - online, no down time, safely.



Rick, May 20, 2008 - 9:41 pm UTC

There was an internal error on my side with regard to creating local indexes. Your solution is perfect and most straightforward!

Thanks Tom!

archiving a table

Rajeshwaran Jeyabal, November 05, 2008 - 1:24 pm UTC

 Hi Tom,
 
         Can you please help me with this approach?
Create table t_part
(
 x Number,
 y DATE
);
insert into t_part values (1,sysdate);
insert into t_part values (2,sysdate-1);
insert into t_part values (3,sysdate-2);
insert into t_part values (4,sysdate-3);
insert into t_part values (5,sysdate-4);
insert into t_part values (6,sysdate-5);
insert into t_part values (7,sysdate-6);
insert into t_part values (8,sysdate-7);
insert into t_part values (9,sysdate-8);
insert into t_part values (10,sysdate-9);
scott@AFRSQA> select x,y, trunc(y-sysdate) from t_part;
         X Y         TRUNC(Y-SYSDATE)
---------- --------- ----------------
         1 04-NOV-08                0
         2 03-NOV-08               -1
         3 02-NOV-08               -2
         4 01-NOV-08               -3
         5 31-OCT-08               -4
         6 30-OCT-08               -5
         7 29-OCT-08               -6
         8 28-OCT-08               -7
         9 27-OCT-08               -8
        10 26-OCT-08               -9
10 rows selected.
 
(a) The table "t_part" will be populated by everyday transactions. I need to archive the data that is 3 days old.
    I thought to use the PARTITION concept in Oracle instead of going for a history-table approach.
    Is this the right approach? Let me know your thoughts.
 
(b) But I am facing the below error. Can you please help me identify and partition the 3-day-old data
   (it is not allowing me to use (y-sysdate) as a partition key, so how do I identify 3-day-old data)?
 
(c) Also, this partitioning should happen at the end of every day, at midnight.
 
scott@AFRSQA> Create table t_p1
  2  (
  3   x Number,
  4   y DATE
  5  )
  6  partition by range(Trunc(y-sysdate))
  7  (
  8   partition p1 values less than (-2)
  9  )
 10  ;
partition by range(Trunc(y-sysdate))
                        *
ERROR at line 6:
ORA-00907: missing right parenthesis


Tom Kyte
November 11, 2008 - 2:13 pm UTC

a) partitioning would be natural for this - and would not necessarily obviate a history table. The two concepts are not really 'tied together' and are definitely not mutually exclusive.

what is a history table? a table with old transactions you would like to have access to but have no reason to be in the live table.

how do you populate a history table? Well, you could:
1) insert as select from live
2) delete from live

but that would be hugely expensive. You could instead:

1) alter table live exchange the oldpartition with an empty table
2) alter table history exchange anemptypartition with now-full-table

which would 'move' the old transactions from live to history without moving a byte - no redo, no undo (short of the dictionary updates), very fast.
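The two exchange steps can be written as plain DDL - a sketch, with illustrative table and partition names:

```sql
-- empty table with the same shape as LIVE
create table swap_tmp as select * from live where 1=0;

-- 1) swap the old partition out of LIVE; SWAP_TMP now holds the old rows
alter table live
  exchange partition p_old with table swap_tmp
  including indexes without validation;

-- 2) swap the now-full SWAP_TMP into an empty partition of HISTORY
alter table history
  exchange partition p_old with table swap_tmp
  including indexes without validation;
```

Both statements are dictionary-only operations - the segments just swap identities, no rows are copied.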


b) you always use fixed dates - you would say "data less than to_date('some-date','some-format')"


think about this, think about what using "sysdate" would mean - and what it would mean 5 seconds from now when sysdate changes. You set up your ranges based on days.

So, every day you would have a job to create tomorrow's partition - alter table t add partition (or use interval partitioning in 11g to automate this)
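For the 3-day-old archiving example, that scheme might look like the following sketch (partition names and dates are illustrative):

```sql
create table t_part
( x number,
  y date
)
partition by range (y)
( partition p_20081104 values less than (to_date('05-nov-2008','dd-mon-yyyy')),
  partition p_20081105 values less than (to_date('06-nov-2008','dd-mon-yyyy'))
);

-- the nightly job adds tomorrow's partition...
alter table t_part add partition p_20081106
  values less than (to_date('07-nov-2008','dd-mon-yyyy'));

-- ...and drops (or archives via exchange) the partition now more than 3 days old
alter table t_part drop partition p_20081104;
```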

SPOOL

Rajeshwaran Jeyabal, November 18, 2008 - 5:52 am UTC


Tom !! Thanks a Ton.

I just want to know how to get the column names in a spool file.
Normally, whenever we use a spool file for listing the selected records, it will contain the column names and the corresponding data.
But if there are no records for a particular select query, the spool file only contains "no rows selected".
Here is my question: can we get the column names above the "no rows selected" message?
For example: a table contains 2 columns and I am running a select query for a particular condition that returns no records. I would like to see
column1 column2
no rows selected.
instead of only "no rows selected".
Please help me on this.

Awaiting your valuable reply, since it is critical in my project.

Tom Kyte
November 18, 2008 - 7:42 pm UTC

see where you asked this previously, i answered over there

Using dbms_redefinition in Oracle 9i R2

Rajeshwaran Jeyabal, November 20, 2008 - 5:23 am UTC

Hi Tom,
     I am using dbms_redefinition to partition a table in Oracle 9i, but am facing this error. Can you please help me understand what this error is about and how it can be solved?

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
PL/SQL Release 9.2.0.1.0 - Production
CORE    9.2.0.1.0       Production
TNS for 32-bit Windows: Version 9.2.0.1.0 - Production
NLSRTL Version 9.2.0.1.0 - Production

SQL> Create table t1_part
  2  as 
  3  select * from all_users;

Table created.

SQL> ALTER TABLE T1_PART ADD CONSTRAINT T1_PK PRIMARY KEY(USER_ID);

Table altered.

SQL> DESC T1_PART;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 USERNAME                                  NOT NULL VARCHAR2(30)
 USER_ID                                   NOT NULL NUMBER
 CREATED                                   NOT NULL DATE

SQL> Create table t2_part
  2  (
  3   User_name Varchar2(30) NOT NULL,
  4   user_id   Number NOT NULL,
  5   Created   DATE NOT NULL
  6  )
  7  partition by hash(user_id) partitions 4;

Table created.

SQL> DESC T2_PART;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 USER_NAME                                 NOT NULL VARCHAR2(30)
 USER_ID                                   NOT NULL NUMBER
 CREATED                                   NOT NULL DATE

SQL> ALTER TABLE T2_PART ADD CONSTRAINT T2_PK PRIMARY KEY(USER_ID);

Table altered.

SQL> BEGIN
  2  dbms_redefinition.abort_redef_table(uname=>'SCOTT',orig_table=>'T1_PART',int_table=>'T2_PART');
  3  END;
  4  /

PL/SQL procedure successfully completed.

SQL> begin 
  2   dbms_redefinition.can_redef_table(uname=>'SCOTT',tname=>'T1_PART');
  3   dbms_redefinition.start_redef_table(uname=>'SCOTT',orig_table=>'T1_PART',int_table=>'T2_PART');
  4   dbms_redefinition.finish_redef_table(uname=>'SCOTT',orig_table=>'T1_PART',int_table=>'T2_PART'); 
  5  end;
  6  /
begin
*
ERROR at line 1:
ORA-12060: shape of prebuilt table does not match definition query
ORA-06512: at "SYS.DBMS_REDEFINITION", line 8
ORA-06512: at "SYS.DBMS_REDEFINITION", line 146
ORA-06512: at line 3


SQL> 

Tom Kyte
November 24, 2008 - 11:29 am UTC

you changed column names - if you do that, you'd have to do a bit more work (column mapping)

if you use the same names, simple:

ops$tkyte%ORA9IR2> Create table t2_part
  2  (
  3   Username Varchar2(30) NOT NULL,
  4   user_id   Number NOT NULL,
  5   Created   DATE NOT NULL
  6  )
  7  partition by hash(user_id) partitions 4;

Table created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> ALTER TABLE T2_PART ADD CONSTRAINT T2_PK PRIMARY KEY(USER_ID);

Table altered.

ops$tkyte%ORA9IR2> exec dbms_redefinition.can_redef_table(uname=>user, tname=>'T1_PART');

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> exec dbms_redefinition.start_redef_table(uname=>user, orig_table=>'T1_PART',int_table=>'T2_PART' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> exec dbms_redefinition.finish_redef_table(uname=>user, orig_table=>'T1_PART',int_table=>'T2_PART');

PL/SQL procedure successfully completed.
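Had you wanted to keep the different column name (USER_NAME instead of USERNAME), start_redef_table accepts a column-mapping string - a sketch, with each pair written as "source_expression target_column":

```sql
begin
  dbms_redefinition.start_redef_table
  ( uname       => user,
    orig_table  => 'T1_PART',
    int_table   => 'T2_PART',
    -- map each source column to its (possibly renamed) interim column
    col_mapping => 'username user_name, user_id user_id, created created' );
end;
/
```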


Partitions

Rajeshwaran, Jeyabal, November 25, 2008 - 6:00 am UTC

Hats off to TOM !!!

A reader, December 11, 2008 - 7:54 am UTC

According to Database Concepts at one place it says:

Partitioning is useful for many different types of applications, particularly applications that manage large volumes of
data. OLTP systems often benefit from improvements in manageability and availability, while data warehousing systems benefit
from performance and manageability.

Then at other place it says:

Partitioning offers these advantages:

Partitioning improves query performance. In many cases, the results of a query can be achieved by accessing a subset of
partitions, rather than the entire table. For some queries, this technique (called partition pruning) can provide
order-of-magnitude gains in performance.

According to Expert one on one Oracle your suggestion is :
Reduces contention on high insert OLTP systems (such as an audit trail table) by
spreading the inserts out across many separate partitions (spreads the hot spots
around).

We have one table on OLTP with records aroud 10978888. Should we implement Pratitioning on it?


Tom Kyte
December 11, 2008 - 8:23 am UTC

you tell me. Honestly - you are the only one that can answer this. (and by the way, 11 million rows is meaningless - that gives NO clue as to the size, we could be talking 100mb or 100gb or a terabyte - but we don't know.... everyone needs to stop saying "X rows" and start saying "Y <T|G|M>bytes")


You know now the physics behind partitioning.

You know now what you can hope to achieve:

o increased availability - by spreading the data out over many partitions - if you suffer a media failure, only a small subset of the data might become unavailable. is this relevant to you, does this apply in your case, do you need this, is this something you need to design into your system, is this your major number one goal?

if not, move onto the next point

o ease of administration - by making the things you manage smaller. is this relevant to you, do you purge data by date, would a range partition on date make sense, do you need to reorganize data/indexes for whatever reason, would the fact the big table is now a series of small tables be relevant to you, would it make your life better?

if not, move onto the next point

o improved DML performance. For warehouse - partition pruning. For OLTP - possible reduced contention. are you a warehouse that could make use of this? are you a transactional system suffering from contention on a single segment - be that a hot right hand side of an index populated by a DATE or a SEQUENCE, or a table with massive concurrent inserts

if not..... we are done

A reader, December 11, 2008 - 10:22 am UTC

Thanks a lot for your detailed reply.

We are an OLTP system using Oracle 10g Release 1, OS Red Hat AS 3 update 3. The database server has 4 CPUs and 4 GB of memory. The problem we are facing is that when some of the reports which use tables (A 800MB, B 200MB, C 500MB) execute, they take about 20-30 minutes to finish. In this period the report process consumes one CPU at 100%, and in the meantime the rest of the application slows down. The ADDM report generated for this time shows high physical/logical IO. One of our DBAs is suggesting that we should use partitioning on these tables to resolve the issue.

Tom Kyte
December 11, 2008 - 7:13 pm UTC

ask the dba "why, will the report be able to use partitioning in order to eliminate a large portion of the data? and if so, how does this partitioning scheme affect the OLTP application - because if we are not careful - it will make OLTP run really really slow - unless we actually design it to work with OLTP and support the partition elimination"


Unless your dba has done this design and analysis, they are only just starting.

If you went to them and they said "hmmm, lots of IO, lets partition" - run away.

If you went to them and they studied the problem and have a design that leads to an X% reduction in IO's and does not negatively affect the OLTP system - then hats off to them, they did it right.


those are fairly small tables - a more likely option is "tune queries" if you ask me

so, which case are we dealing with - case1: silver bullet, I heard partitioning is fast=true or case2: we have studied it and can tell you what to expect when you implement our DESIGN..

dbms_redefinition question

A reader, October 04, 2009 - 11:30 pm UTC

Hi Tom,
I have an audit_trail table that is used by our application to log almost all changes to the network inventory that we keep. The table is 1.5 billion rows (about 100 GB). It has two composite indexes, each 80 GB in size. The table gets 20 to 30 thousand rows a minute in a busy period. If I use dbms_redefinition to partition it, will such an insert rate be possible at all? Or will I have to get application downtime to run dbms_redefinition?

Thank you.
Tom Kyte
October 08, 2009 - 6:25 am UTC

well, since this is an audit trail - you have another alternative I think.

Let me make this rather safe assumption:

your audit trail is an audit trail in the conventional sense - the data that is already inserted will *never* change - you do not update nor delete it. In fact - it is almost certain that your application does not really read it either - you use it for fact finding after the fact (rarely).

Therefore, an option for you would be:

a) create your new partitioned, but empty, audit trail - new_audit_trail. It'll be partitioned by date range I believe
b) pause your application for 1 second while you
c) rename current_audit_trail to old_audit_trail, then rename new_audit_trail to current_audit_trail
d) unpause your application

Now - your application will be populating this new table - you can, at your leisure, break your old audit trail up

create table temp_audit_trail
partition by range (...)   -- your chosen partitioning clause
as
select * from old_audit_trail;

drop table old_audit_trail;

-- then index temp_audit_trail as needed

you now have your old data partitioned (heck, you could even transport the old audit trail to another machine to do this - you'll have the data on production if you need it - you have another machine to work on - after you are done you can transport this newly partitioned data back into production)

now all you need is a non-partitioned table T that looks just like your partitioned table(s). You will do an exchange to move the partitions from temp_audit_trail to current_audit_trail

alter table temp_audit_trail exchange partition part_jan_2005 with table t including indexes without validation ..... whatever other options you want ....;

alter table current_audit_trail exchange partition part_jan_2005 with table t .....;


do that for every partition and you'll have moved your data in likely the most efficient fashion possible - all direct path, all online (except for that one second pause), all can be done without redo or undo generation - in parallel if you like.




Think about the nature of your data and how it is used - in this case I believe that'll open up possibilities for moving the data not normally available.

partitioning for existing tables

su, January 23, 2010 - 7:37 am UTC

hi tom,
i am doing partitioning for an existing table. actually this table is in a Paradox database; i have migrated that table into Oracle using MS Access. i want to know how to hash partition this table. i have got an answer for how to do range partitioning, but i am not getting the hash partitioning using ID. how do i create a tablespace for this table, and how can i transfer the data from the partitioned table to a non-partitioned table?
ex: create table person(personid number, pname varchar2(10))
partition by HASH(personid)
(
Partition p1 tablespace t1
)
Tom Kyte
January 25, 2010 - 6:02 am UTC

this makes no sense.

Do you understand HOW you want to partition? and more importantly WHY you want to partition?

dbms_redefinition in 11g

Developer, July 20, 2010 - 10:02 am UTC

Hello Tom,
I am using Oracle 11.1.0.7 version. I want to use dbms_redefinition to move to using Tablespace Transparent Data Encryption (TDE). So I create new encrypted tablespaces and move the tables/indexes to the new encrypted tablespaces using dbms_redefinition. I observed 3 shortcomings of dbms_redefinition and they are as follows:

1) It does not copy column comments. (I think this is a bug.)
2) It does not copy table statistics even after I set COPY_STATISTICS to TRUE. I tested this to move a range partitioned table.
3) I cannot specify anywhere the new tablespace name for the indexes. I can't really use copy_indexes parameter of the copy_table_dependents procedure.

What do you think?
Tom Kyte
July 23, 2010 - 7:53 am UTC

1) they probably would not classify as bug but rather as enhancement request. I could see that go either way.

2) I confirmed this is a bug - it worked in 10gr2 for partitioned tables but stopped working in 11gr1. I filed bug 9937133 for this

3) that is expected and why we have manual and automatic copy methods. You would create the indexes with different parameters manually and then invoke copy_table_dependents to get everything else using ignore_errors=> true. Query dba_redefinition_errors after copy table dependents to make sure everything got copied correctly.
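A sketch of the manual-index approach described in (3) - the index, column, and tablespace names here are assumptions for illustration:

```sql
-- build the indexes yourself, placing them in the new encrypted tablespace
create index t2_ix1 on t2 (some_col) tablespace enc_ts;

-- then copy the remaining dependents, telling the package not to copy
-- indexes and to keep going past errors (e.g. "already exists")
variable nerrors number
begin
  dbms_redefinition.copy_table_dependents
  ( user, 'T1', 'T2',
    copy_indexes  => 0,        -- 0 = do not copy indexes
    ignore_errors => TRUE,
    num_errors    => :nerrors );
end;
/

-- verify nothing important failed to copy
select * from dba_redefinition_errors;
```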

partition exchange failure

asim, September 17, 2010 - 9:06 am UTC

Hi Tom,
I had a question about partition exchange. I have been having failures when trying to exchange a table partition with a staging table (both have primary/foreign keys). Whenever the partition exchange script is run, the initial run always fails with the below error, but the 2nd run is always successful. Any clues? What should my diagnosis steps be? Thanks in advance.

-600-Error at step- ExchangePartition action ,procedure exchange_partition ORA-00600: internal error code, arguments: [16609]
, [0x70000001A4E9220], [3905], [0], [], [], [], []

Tom Kyte
September 17, 2010 - 12:38 pm UTC

ora-600 => please contact support, it should not happen.

Read only partitions

Rajeshwaran, Jeyabal, October 28, 2010 - 9:24 am UTC

Quote from Oracle product documentation.

<quote>
Once you add the new partition, you can drop the trailing month with the DROP PARTITION statement. The alternative to using the DROP PARTITION statement can be to archive the partition and make it read only, but this works only when your partitions are in separate tablespaces.
</quote>

http://download.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#sthref178

rajesh@10GR2> create table t(
  2     x
  3  )partition by range(x)
  4  (
  5             partition p1 values less than(2) tablespace INDX,
  6             partition p2 values less than(3) tablespace tools,
  7             partition p_max values less than(maxvalue) tablespace users
  8  )
  9  AS
 10  select rownum
 11  from all_objects
 12  where rownum <= 5;

Table created.

Elapsed: 00:00:00.17
rajesh@10GR2>
rajesh@10GR2> exec dbms_stats.gather_table_stats(user,'T');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.25
rajesh@10GR2>
rajesh@10GR2> alter table t move partition p1 tablespace tools read only;
alter table t move partition p1 tablespace tools read only
                                                 *
ERROR at line 1:
ORA-14020: this physical attribute may not be specified for a table partition


Elapsed: 00:00:00.01
rajesh@10GR2>
rajesh@10GR2> alter table t move partition p1 tablespace tools;

Table altered.

Elapsed: 00:00:00.21
rajesh@10GR2> alter table t modify partition p1 read only;
alter table t modify partition p1 read only
                                  *
ERROR at line 1:
ORA-14049: invalid ALTER TABLE MODIFY PARTITION option


Elapsed: 00:00:00.00
rajesh@10GR2>


Tom:

Is it not possible to make partitions read only? Can you please help me with how to achieve this?
Tom Kyte
October 28, 2010 - 12:17 pm UTC

their intent was to have you move the partition (or just have the partition be in a tablespace by itself already) AND THEN mark the tablespace the partition is in read only.

You would set the tablespace read only.
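A minimal sketch, with an illustrative tablespace name:

```sql
-- move the partition into a tablespace of its own
alter table t move partition p1 tablespace arch_ts;

-- then make the whole tablespace - and therefore the partition - read only
alter tablespace arch_ts read only;
```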

dbms_redefinition with data guard logical standby

Richard, February 07, 2011 - 1:38 pm UTC

Tom, can you point me to something in the docs (we are on 10.2.0.4) about the feasibility/availability of using dbms_redefinition on tables protected by data guard in a logical standby environment?

Partition an existing table

Rob Campbell, July 07, 2011 - 8:18 am UTC

This process is very useful. I'm using it to prune data from some very large tables. Thanks for the information.

Appending data in existing partition

A reader, June 26, 2012 - 4:27 am UTC

Hi Tom,

While moving data for archiving from a partitioned base table to a partitioned history table, I was following these two steps only:


1) alter table live exchange the oldpartition with an empty table
2) alter table history exchange anemptypartition with now-full-table



But my case is somewhat different. Here my base table is partitioned by one column, creation_system, while the history table keeps date as the list-partition key so the archived data can be removed very fast.

So for many creation_systems the data should move to a single partition, and the loop will work for each system separately.

I was using exchange partition: from the base table to an intermediate non-partitioned table, and from the non-partitioned intermediate table to the history table.
Now the problem is we are not able to append the data for different creation_systems. Exchange partition removes the already-existing data from the history table for that partition and loads the new data.

Please let me know how this can be achieved.

Regards

Tom Kyte
June 26, 2012 - 8:01 am UTC

turn the partition into a table
and then insert /*+ APPEND */ into archive select * from new_table_you_just_made;

direct path load it into the appropriate partitions.
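In sketch form, with illustrative table and partition names:

```sql
-- empty table with the same shape as the base table
create table tmp_stage as select * from base_tbl where 1=0;

-- turn the base-table partition into a standalone table
alter table base_tbl exchange partition p_sys1 with table tmp_stage;

-- direct-path append those rows into the history table, letting Oracle
-- route them to the appropriate existing partition
insert /*+ APPEND */ into history_tbl
select * from tmp_stage;
commit;

drop table tmp_stage;
```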

Appending data in existing partition

A reader, June 26, 2012 - 4:40 am UTC

Hi Tom

To add to the above: we have almost 100 creation systems. If we create partitions based on creation_system and date, then 100 partitions will be created every day, and if we archive data for 30 days then 300 partitions will be created in total. Is that a wise option?

Regards
Tom Kyte
June 26, 2012 - 8:02 am UTC

I don't have sufficient information/understanding of your schema to comment on that.

but - using your existing schema, just direct path load it. It is only a few million records right? Nothing huge.

partitioning question - 11g

A reader, June 27, 2012 - 2:14 am UTC

Hi Tom,

I have a table where I need to keep the last 13 months of data, including the current month, at any given day. Data will be loaded daily. 95% of the queries will use only the last 3 months of data, and very few will query data older than 3 months. When I say last 3 months, I mean trunc(sysdate) - 90.

I want to partition this table by a date field, or I could have a month field also.

How do I do this - can I use sysdate as my partition range value?

One way would be to merge partitions periodically; is there another way to do this? I would prefer to somehow create just 2 partitions: one for the last 3 months of data (trunc(sysdate) - 90) and one for the rest of the data.

The question is - how can I implement it in the best possible way?

thanks
Ajeet
Tom Kyte
June 27, 2012 - 9:30 am UTC

you do not want to merge anything. You want 13 partitions so you can just drop the oldest one to purge. If you were to "merge", you'd have to delete and that would be horrible.

It is ok, normal, great, fine to have 13 partitions - do not overthink this and do not try to minimize the partitions.


You can either use range partitioning (all releases) and, at the end of each month, add a new partition for next month and drop the oldest partition

or, you can use interval partitioning (11g)

ops$tkyte%ORA11GR2> create table audit_trail
  2  ( ts    date,
  3    data  varchar2(30)
  4  )
  5  partition by range(ts)
  6  interval (NUMTOYMINTERVAL(1,'month'))
  7  store in (users, example )
  8  (partition p0 values less than
  9   (to_date('01-jun-2012','dd-mon-yyyy'))
 10  )
 11  /

Table created.



that'll add a new partition for you every month automagically

You'll be able to drop the jun-2012 partition in july 2013, then jul 2012 in august 2013 and so on
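Interval partitions get system-generated names, but they can be addressed by any value they contain using the "partition for" extended syntax - a sketch of the monthly purge:

```sql
-- in july 2013, drop the partition holding jun-2012 rows without
-- needing to know its SYS_P... name
alter table audit_trail
  drop partition for (to_date('15-jun-2012','dd-mon-yyyy'));
```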

Appending data in existing partition

A reader, June 27, 2012 - 3:35 am UTC

Hi Tom

Thanks for your reply.

I can't use a direct load, as the same table and same partition can be populated and read concurrently, which is not possible in the case of a direct load.

Then in such a case, is a conventional load the only option?

Regards
Tom Kyte
June 27, 2012 - 9:36 am UTC

if you cannot afford to have that segment locked in exclusive mode (no other modifications can happen concurrently to it - reads can, but not writes) then a conventional path load is your only other option.

any impact of updating partition key column

A reader, June 27, 2012 - 10:36 am UTC

Hi Tom,

I have another question related to partitioning. I have a table which stores data for different sources; it is partitioned by source_name (e.g. 'Legacy', 'EMEA', 'REGEN'). I need to reconcile the data loaded in this table with the source data every day, as data in the source table can change (I need to compare the past 2 months of data). What I was thinking:

1. create an additional partition with source_name as null.
2. then every day, as part of reconciliation, update source_name = null in the main table for the last 2 months of data.
3. then load the last 2 months of data again into this main table from the source table.
4. truncate the partition with source_name as null.

reasons - 1) I do not want to delete data daily and reload, as I have bitmap indexes on 8 columns of this table; also if I take the delete approach, I will have to insert the new rows.
I tested it; it is very slow.
2) I do not want to update rows, as many of the fields to be updated are bitmap indexed - could that cause an issue? I did not try, but I find it not appropriate.

do you think that it could create an issue? source_name is a locally bitmap-indexed column.

Thanks
Ajeet
Tom Kyte
June 27, 2012 - 10:59 am UTC

Since this data is partitioned by source name why not just:

a) load the last 2 months of data for the source into a new table
b) index it
c) alter the partitioned table and exchange that source's partition for the table
d) drop the new table (which has the old data)
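The four steps might be sketched like this - source, column, index, and partition names are assumptions for illustration:

```sql
-- a) reload the last 2 months for one source into a fresh table
create table emea_reload as
select * from source_table
 where source_name = 'EMEA'
   and load_date  >= add_months(trunc(sysdate), -2);

-- b) index it to match the main table's local indexes
create bitmap index emea_reload_bx1 on emea_reload (some_col);

-- c) swap it in for that source's partition; the old data lands in EMEA_RELOAD
alter table main_table
  exchange partition p_emea with table emea_reload
  including indexes without validation;

-- d) drop the table that now holds the superseded data
drop table emea_reload;
```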



why does source name have a bitmap index on it? That doesn't seem sensible. All of the values in a given partition have the same source name!!! that index is not useful for anything other than counting the number of rows in that partition. why does it exist???? what logic was used to create it??

Very useful

A reader, June 27, 2012 - 11:35 pm UTC

Hi Tom,

Thanks for your answer above. I think I did not mention clearly that the main table will have 13 months of data and I need to reconcile only the past 2 months. So is a partition exchange not feasible here? In that case I would have only 2 months of data in the main table after the partition exchange.

yes - the bitmap index on the source_name column is not required, as it is the partition key column and all the application queries have source_name in the where clause, so partition elimination will happen without an index on it.

I cannot partition/subpartition this further on the month column, as month is not used in the where clause of most of the application queries, so it would be an overhead on performance (I have read this in one of your books / here on this site as well).

that is why I thought of updating source_name to null - those records would be logically deleted, and I could truncate that partition periodically to remove the data which is not required anymore.

but I am not sure: if I update a partition key column, is there something I should be aware of? (I have enabled row movement for the table)


regards
Ajeet
Tom Kyte
June 28, 2012 - 9:47 am UTC

why isn't it feasible???????

think about it - you want to swap all new data into the current month and last month.


So, load the current month into a table.
exchange that table with the partition that already exists for the current month

You have now put the 'reconciled' current month data into the partitioned table and taken out the old bad data.

repeat for last months partition.


No DML involved, all DDL and direct path loading.


the bitmap index is not only unnecessary, it sounds like a horrifically bad thing to have in place. A performance impediment of a huge degree. I cannot understand why anyone would have thought to have it.


You CAN partition this on month??? why can't you??? so what if that isn't used in the where clause. ????? You want to subpartition by month so you can administer the data.

partition by source
subpartition by month.
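That composite scheme could be sketched like this - names and month boundaries are illustrative. Each source/month combination then lives in its own subpartition, which can be truncated or exchanged independently:

```sql
create table main_table
( source_name varchar2(30),
  load_date   date,
  payload     varchar2(100)
)
partition by list (source_name)
subpartition by range (load_date)
subpartition template
( subpartition m_may12  values less than (to_date('01-jun-2012','dd-mon-yyyy')),
  subpartition m_jun12  values less than (to_date('01-jul-2012','dd-mon-yyyy')),
  subpartition m_future values less than (maxvalue)
)
( partition p_legacy values ('Legacy'),
  partition p_emea   values ('EMEA'),
  partition p_regen  values ('REGEN')
);
```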


yes i can create subpartitions by month

A reader, June 28, 2012 - 11:09 am UTC

Hi Tom,

Thank you! Yes, I can subpartition by month and exchange the subpartitions - I understand this now. I will try this out and let you know.

the reason there was a bitmap index on the source_name column is that this table was earlier not partitioned and almost all app queries were using it in the where clause. Almost all queries can have 1 to 7 where clauses based on the search fields chosen in the app screen. As part of refinement I decided to partition the table, and then this data reconciliation requirement came along. Thank you for your valuable review, suggestions and guidance. In the last 10 years I have come here at least once every week.

thank you.
Ajeet
Tom Kyte
June 29, 2012 - 9:25 am UTC

I don't think that bitmap index was useful back then either - there are way too many rows with the same source value for it to be useful in general, it seems

but now, now it is just deadly. I suggest you look into getting rid of it.

interval subpartitions

A reader, June 29, 2012 - 4:42 am UTC

Hi Tom,

can I subpartition by interval? It seems I cannot. I read through the partition administration chapter of the Oracle documentation; there also I did not see this option.

SQL> create table foo ( ts date,data varchar2(30))
  2  partition by list (data)
  3  subpartition by range(ts)
  4  subpartition template
  5   interval (NUMTOYMINTERVAL(1,'month')
  6  (subpartition p0 values less than ( to_date('01-jun-2012','dd-mon-yyyy')))
  7   )
  8  (partition data1 values ('x'),
  9  partition data2 values ('y')
 10  ) 
 11  /
 interval (NUMTOYMINTERVAL(1,'month')
 *
ERROR at line 5:
ORA-14156: invalid number of subpartitions specified in [SUBPARTITIONS |
SUBPARTITION TEMPLATE] clause

is there a reason we do not support interval at the subpartition level?

regards
Ajeet

Tom Kyte
June 29, 2012 - 10:50 am UTC

you cannot.

update of source_name

A reader, June 29, 2012 - 7:23 am UTC

Hi Tom,

While you answered everything I asked in the above few posts, one thing which I really want to understand is: if I update source_name to NULL for the records I want to delete, and periodically truncate the partition whose partition key value is NULL, is there any impact? Is that not something we can do? It would ease the maintenance effort (I would not have to write a procedure to add partitions/subpartitions, then load the data into a separate table and exchange it with the subpartitions, etc.). I just want to know the implications of it.

also, if I create 18 subpartitions for the 8 partitions I have, it means that just to create the table I need 18*8*8 megabytes of space (the initial extent size is 8 MB), and the 12 local indexes will need even more - around 2 GB just to create the empty table. Nothing wrong with that, I was just thinking; I could use the deferred segment creation option as well.

regards
Ajeet
Tom Kyte
June 29, 2012 - 11:54 am UTC

if you update the source name to null to force the row to move, you will simply be deleting it and inserting it - all just to truncate it later.

why not just delete at that point?


do not update it; just keep each source_name/month of data in its OWN partition - then you don't have to move a single thing.
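The delete-plus-insert behaviour is easy to observe: updating the partition key so that a row belongs in a different partition raises ORA-14402 until row movement is enabled, precisely because the row must physically move. A small sketch with a hypothetical table:

```sql
create table t ( source_name varchar2(30), payload varchar2(30) )
partition by list (source_name)
( partition p_emea values ('EMEA'),
  partition p_null values (null) );

insert into t values ('EMEA', 'x');

-- fails with ORA-14402: updating partition key column would
-- cause a partition change
update t set source_name = null where payload = 'x';

alter table t enable row movement;

-- now succeeds, but is physically a delete from P_EMEA
-- plus an insert into P_NULL
update t set source_name = null where payload = 'x';
```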


never thought of it

A reader, June 30, 2012 - 8:22 am UTC

Hi Tom,

Thanks for taking the time to answer so many queries in this thread. Yes, I had never considered that moving a row from one partition to another is actually a delete from one partition and an insert into another. It is not easy to be as intelligent as you are. Thank you!

regards
Tom Kyte
July 02, 2012 - 7:02 am UTC

it is not easy to be as
intelligent as you are.


yes, yes it is.

Think about how things work - it is really not that hard.

If you have to move a row from one partition to another.... think about what is taking place. Visualize the mechanical system you are working with...

Alter an already partitioned table

A reader, July 13, 2012 - 9:39 am UTC

What would be the best way to alter a column's type (number to varchar2) in an already partitioned table?

It is a table range-partitioned by date, containing 500m rows and 2 years of data (750 dates).

Thanks
Tom Kyte
July 16, 2012 - 3:47 pm UTC

I would be sorely tempted to create table as select (CTAS) from the old table, drop the old table, rename the new table to the old name, and then index/grant/etc. the new table.

if it had to be online, I would use dbms_redefinition.

I would not want to have to update every row in the table to move the data into a column larger than its existing column - many rows would tend to migrate, and the update would just be the wrong way to do it.


so, either CTAS if you can take an outage

or dbms_redefinition otherwise.
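A minimal sketch of the CTAS route - table, column, and index names and the conversion expression are illustrative, and grants, constraints, and statistics would also need to be recreated afterwards:

```sql
-- build the new table with the column converted, keeping the partitioning
create table t_new
partition by range (dt)
( partition p2011 values less than (to_date('01-jan-2012','dd-mon-yyyy')),
  partition pmax  values less than (maxvalue)
)
as
select dt, to_char(num_col) as num_col, other_col
from   t_old;

-- swap the names during the outage window
drop table t_old;
rename t_new to t_old;

-- recreate indexes (and grants/constraints), then gather statistics
create index t_old_ix on t_old (dt) local;
```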

Alex, August 31, 2012 - 11:37 am UTC

Thank you, Tom

I am working with 120 GB+ tables and your explanation helps me do my job more efficiently.

statistics after redef

George Joseph, July 12, 2013 - 3:38 am UTC

Hi Tom,

Would you recommend gathering statistics after redefining a non-partitioned table into a partitioned table? Will the statistics get distributed to the newly created table partitions?
Tom Kyte
July 16, 2013 - 4:06 pm UTC

definitely gather statistics, no doubt about it.

Partitioning

Pradeep Kumar, July 20, 2014 - 6:06 am UTC

Superb Explanation on Partitioning of Regular Tables by TOM.
Thanks
Queries solved before I asked :)

Partition existing very large tables

Roberto, April 03, 2015 - 12:28 am UTC

Tom,
Your answer to use dbms_redefinition to partition an existing table, would it apply to very large tables? 500+ rows, 100 to 200 gigs in size?
We have huge tables in need of partitioning and I am wondering if dbms_redefinition is our best bet.

Thanks!!

PP, June 23, 2015 - 10:18 pm UTC

HI Tom,
I have an audit table and it is range partitioned. I want to convert the range partitioning into hash partitioning without downtime.

Running into this error

Aaron Broad, June 25, 2015 - 5:52 pm UTC

Running into this error when running the begin/end block for copying the constraints...

ORA-01749: you may not GRANT/REVOKE privileges to/from yourself
ORA-06512: at "SYS.DBMS_REDEFINITION", line 884
ORA-06512: at "SYS.DBMS_REDEFINITION", line 2010
ORA-06512: at line 2
01749. 00000 - "you may not GRANT/REVOKE privileges to/from yourself"

Partitioning taking long time

Nathan, December 15, 2020 - 3:21 pm UTC

Dear Sir,

We are partitioning an existing non-partitioned table using the below command in UAT. It executed for almost 10 hours and did not complete. There are 11 indexes, including unique and PK indexes, and the statistics are stale. Three of the indexes also include 2 additional columns. There are around 398,101,093 records present. Please suggest what might have gone wrong. We are doing this in Oracle 12c.

Earlier this error was occurring:
ORA-01652: unable to extend temp segment by 8192 in tablespace TAB_SPC
The error no longer occurs after the DBA enabled auto-extension.

Initially the tablespace size was:
used size - 560 GB, allocated size - 720 GB
after that it increased to:
used size - 820 GB, allocated size - 840 GB


ALTER TABLE TXN_DET MODIFY
 PARTITION BY RANGE (CREATION_DATE)   
(
   PARTITION TD_PROCESSED VALUES LESS THAN (TO_DATE('20200401','YYYYMMDD')),
   PARTITION TD_PENDING VALUES LESS THAN (TO_DATE('20201101','YYYYMMDD'))   
   )   UPDATE INDEXES;


Thank you
Connor McDonald
December 17, 2020 - 4:27 am UTC

That depends on how big TXN_DET is. But because this is an atomic operation, we need to hold space for *all* of the indexes.

Depending on your uptime requirements, it might be better to just do the table and then rebuild the indexes afterwards, because then after each one is built, the temp space will be released.

Or any combination, for example, 2 or 3 at a time etc.
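One way to read that advice, sketched with hypothetical index names: drop the secondary indexes up front so the conversion only needs space for the table data, then recreate them one at a time so temp space is released between builds.

```sql
-- drop secondary indexes first (keep PK/unique if integrity demands it)
drop index txn_det_ix1;
drop index txn_det_ix2;

-- convert just the table
alter table txn_det modify
  partition by range (creation_date)
  ( partition td_processed values less than (to_date('20200401','yyyymmdd')),
    partition td_pending   values less than (to_date('20201101','yyyymmdd'))
  );

-- recreate the indexes individually; temp space is freed after each build
create index txn_det_ix1 on txn_det (col1) local;
create index txn_det_ix2 on txn_det (col2, col3) local;
```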


Rajeshwaran Jeyabal, April 27, 2021 - 5:26 am UTC

Team:

is it not possible to convert a non-partitioned table containing an "xmltype" column to a partitioned one using the 'alter table <tname> modify partition by' technique?

demo@XEPDB1> create table t(x int, y xmltype );

Table created.

demo@XEPDB1> alter table t modify
  2  partition by list(x)
  3  ( partition p1 values (1),
  4    partition p2 values (2) );
alter table t modify
            *
ERROR at line 1:
ORA-14427: table does not support modification to a partitioned state DDL

Connor McDonald
April 27, 2021 - 6:15 am UTC

From the docs

"You cannot partition a non-partitioned table that has an object type."

but you can go with DBMS_REDEFINITION

SQL> create table t(x int primary key, y xmltype );

Table created.

SQL>
SQL> create table t1 (x int primary key, y xmltype )
  2      partition by list(x)
  3      ( partition p1 values (1),
  4        partition p2 values (2) );

Table created.

SQL>
SQL> EXEC DBMS_REDEFINITION.can_redef_table(user, 'T');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.start_redef_table(user, 'T', 'T1');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.sync_interim_table(user, 'T', 'T1');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_REDEFINITION.finish_redef_table(user, 'T', 'T1');

PL/SQL procedure successfully completed.


More to Explore

VLDB

If you are new to partitioning, check out Connor McDonald's introduction series here.

VLDB

Documentation set on VLDB and Partitioning.