Question and Answer

Tom Kyte

Thanks for the question, Senthil.

Asked: September 24, 2002 - 4:20 pm UTC

Last updated: December 09, 2008 - 12:05 pm UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Tom,

I have a situation where I need your expert advice. Our product, at the client site, creates tables and populates them with data. Table creation happens once in a while, depending on the client's requirements.

As Oracle releases a lot of features in each version, we would like to make use of them and convert all regular tables into partitioned tables. We are looking at various options for a fast migration with reduced downtime and no problems. These regular tables might have millions of rows. The following information might give a better idea of what we have now and what we are looking for...

Existing environment:

App ID  Ref.  TableName  Structure
------  ----  ---------  ---------------
1       1     TABLE_1
1       2     TABLE_2    Same as TABLE_1
1       3     TABLE_3    Same as TABLE_1
1       4     TABLE_4
1       5     TABLE_5    Same as TABLE_4
1       6     TABLE_6    Same as TABLE_4
1       7     TABLE_7    Same as TABLE_4
1       8     TABLE_8    Same as TABLE_4
1       9     TABLE_9
1       10    TABLE_10   Same as TABLE_1
1       11    TABLE_11   Same as TABLE_1
1       12    TABLE_12
..      ...   ...

As you may have noticed, several tables have the same structure as previous tables. Now, we want the partitioned tables to be as follows...

Need to be converted as:

App ID  Ref.  TableName  Comments
------  ----  ---------  -----------------------------------------
1       1     TABLE_1    will also hold TABLE_2, TABLE_3,
                         TABLE_10 & TABLE_11 data
1       4     TABLE_4    will also hold TABLE_5, TABLE_6,
                         TABLE_7 & TABLE_8 data
1       12    TABLE_12   only its own data


My questions are:
1. How do we convert all the regular tables into partitioned tables? Hints or basic steps would do fine.
2. We are using one of the date columns as the partition key. What would be the fastest and safest method to migrate?


thanks,
senthil


and Tom said...

Pretend t1, t2, t3 are your existing tables.
T is the table you want (you can drop t1 and rename t to t1 later if you need to).

Then, use the following approach:


ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t1 as select sysdate dt, all_objects.* from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t2 as select add_months(sysdate,-12) dt, all_objects.* from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t3 as select add_months(sysdate,-24) dt, all_objects.* from all_objects;

Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t( dt, OWNER, OBJECT_NAME, SUBOBJECT_NAME,
2 OBJECT_ID, DATA_OBJECT_ID, OBJECT_TYPE, CREATED,
3 LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
4 GENERATED, SECONDARY )
5 partition by range(dt) (
6 partition part2000 values less than ( to_date( '01-jan-2001', 'dd-mon-yyyy') ),
7 partition part2001 values less than ( to_date( '01-jan-2002', 'dd-mon-yyyy') ),
8 partition part2002 values less than ( to_date( '01-jan-2003', 'dd-mon-yyyy') )
9 )
10 as
11 select sysdate dt, all_objects.* from all_objects where 1=0;

Table created.

Just create an empty partitioned table, partitioned by date. The ranges are such that t1 fits in one partition, t2 in another, and so on.

ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t
2 exchange partition part2000
3 with table t3
4 without validation
5 /

Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t
2 exchange partition part2001
3 with table t2
4 without validation
5 /

Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t
2 exchange partition part2002
3 with table t1
4 without validation
5 /

Table altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from t;

COUNT(*)
----------
70848

ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from t1;

COUNT(*)
----------
0

Now, T is a table with partitions formerly known as tables T1, T2, T3!



Comments

Wow that was cool...

A reader, September 24, 2002 - 8:20 pm UTC

Tom,
That was great. Also, can you explain the usage of MAXVALUE (I am not sure... or is it MAXDATA?) in the PARTITION clause.

regards

Tom Kyte
September 25, 2002 - 8:07 am UTC

Maxvalue is used to create a partition that takes "all of the other stuff".

For example, in my table I had:

5 partition by range(dt) (
6 partition part2000 values less than ( to_date( '01-jan-2001', 'dd-mon-yyyy') ),
7 partition part2001 values less than ( to_date( '01-jan-2002', 'dd-mon-yyyy') ),
8 partition part2002 values less than ( to_date( '01-jan-2003', 'dd-mon-yyyy') )
9 )
10 as

so,

part2000 gets negative infinity UPTO but not including 01-jan-2001.
part2001 gets 01-jan-2001 UPTO but not including 01-jan-2002.
part2002 gets 01-jan-2002 UPTO but not including 01-jan-2003

If you tried to insert 01-jan-2003 .. positive infinity -- it would fail. You could set up another partition that takes "up to maxvalue"; it would get all of the rows that do not fit in any other range.

The first partition gets all of the low values.
A last partition with maxvalue gets all of the high values.
the partitions in between have discrete start and stop points.
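
For illustration, the same table could be given such a catch-all partition (a sketch; the partition name THE_REST is made up):

create table t ( dt date, x int )
partition by range(dt) (
partition part2000 values less than ( to_date( '01-jan-2001', 'dd-mon-yyyy') ),
partition part2001 values less than ( to_date( '01-jan-2002', 'dd-mon-yyyy') ),
partition part2002 values less than ( to_date( '01-jan-2003', 'dd-mon-yyyy') ),
partition the_rest values less than ( maxvalue )
);

With that in place, a row dated 01-jan-2003 or later lands in THE_REST instead of raising ORA-14400 (inserted partition key does not map to any partition).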


A reader, September 24, 2002 - 8:51 pm UTC


Partitioning tables

Ashwani Singh, September 25, 2002 - 1:29 am UTC

Hi Tom,
    Great as always!!!!
We also have a situation where we have the following table structure:

SQL> desc category_date_wise_email
 Name                          Null?    Type
 EMAIL_ID                      NOT NULL VARCHAR2(100)
 CATEGORY                      NOT NULL VARCHAR2(100)
 DATE_OF_MAIL                  NOT NULL DATE
 FLAG                                   CHAR(1)
 NAME                                   VARCHAR2(100)
primary key (email_id, category, date_of_mail);

In the above table I would like to create partitions on the date_of_mail column in the following way:
1) Firstly on a yearly basis.
2) Secondly on a monthly basis, as subpartitions of the yearly partitions. Since the table already contains records, could you kindly guide me on how to do this in the most efficient way? Could you also kindly guide me on how to create the subpartitions in the most effective way?
Will I have to create twelve subpartitions for every year partition?
For example, if I create five year partitions, will I have to create 12*5=60 subpartitions in all?

Eagerly waiting for your response.

Regards and thanks in advance,
Ashwani Singh.
 

Tom Kyte
September 25, 2002 - 8:14 am UTC

Since you have a single table, not one table for each partition, you'll be rebuilding the table.

You do not subpartition by range, you have only one layer of partitioning according to your specs above.


Yes, if you have 5 years and want to partition by month, you'll have 60 partitions (not subpartitions).

You'll be doing SOMETHING like:

create table new_table ( email_id, category, date_of_mail, name )
partition by range(date_of_mail)
(
partition jan2001 values less than ( to_date( '01-feb-2001', 'dd-mon-yyyy') ),
partition feb2001 values less than ( to_date( '01-mar-2001', 'dd-mon-yyyy') ),
partition mar2001 values less than ( to_date( '01-apr-2001', 'dd-mon-yyyy') ),
...
partition sep2002 values less than ( to_date( '01-oct-2002', 'dd-mon-yyyy') ),
....
)
as
select * from category_date_wise_email;

drop table category_date_wise_email;

rename new_table to category_date_wise_email;


few more

Senthil, September 25, 2002 - 10:57 am UTC

well done Tom!

I think that would resolve my hassle, but I would like to hear more from you about the following complexity related to partitions.

like I said in my post, the tables are loaded with data. most likely the table data may not be organized. I mean, say for e.g.,

TABLE_1 : having data between 01-jan-2002 and 11-jan-2002
TABLE_2 : having data between 12-jan-2002 and 26-jan-2002
TABLE_3 : having data between 27-jan-2002 and 13-feb-2002
TABLE_4 : having data between 14-feb-2002 and 28-feb-2002
TABLE_5 : having data between 01-mar-2002 and 15-mar-2002
...

my questions would be...
1. can I create tiny partitions to hold each table's data?
2. say I create tiny partitions and exchange each one with a table's data. at the end, would it be possible to merge all the partitions into a single (monthly/quarterly or whatever) partition? I know we could merge the partitions, but I am worried about the downtime -- the time taken by the merge process. or is this the right approach for this environment? could you please enlighten us on this...

thanks.

Tom Kyte
September 25, 2002 - 11:13 am UTC

1) yup

2) if that was your goal, just create the table you wanted and do insert /*+ APPEND */ into it (you'll be rewriting the entire table)

You can just keep these as individual partitions however.
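
A minimal sketch of that merge-by-insert approach (the target table T_MERGED and its columns are made up):

create table t_merged ( dt date, x int )
partition by range(dt)
( partition jan2002 values less than ( to_date('01-feb-2002','dd-mon-yyyy') ),
  partition feb2002 values less than ( to_date('01-mar-2002','dd-mon-yyyy') ) );

insert /*+ APPEND */ into t_merged select dt, x from table_1;
commit;   -- a direct-path insert must be committed before the table is loaded again
insert /*+ APPEND */ into t_merged select dt, x from table_2;
commit;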

Excellent

John, September 25, 2002 - 11:23 am UTC

Great example, as usual.

Partitioning table

ashwani, September 26, 2002 - 1:53 am UTC

Thanx a lot !
Great answer.

Regards,
Ashwani Singh

Still confused with MAXVALUE and composite range partition key

Graham Halsey, September 26, 2002 - 5:46 am UTC

Tom,

I want to partition a large table containing financial transactions by the tax year in which the transaction was created. However, our system is growing at an increasing rate hence the partition for tax year 2002 would still contain a significant percentage of the total number of rows in the table.

We need to keep queries about transactions in the last 2 weeks running very fast, so I want a partition just for that as well as the normal taxyear partitioning. Here's what I'm trying:

- 1 partition contains transactions in the last 14 days
- 1 partition contains transactions in the current tax year but > 14 days old
- 1 partition for each previous tax year.

Here's a small example of what I'm trying:

drop table part_table cascade constraints
/
create table part_table
(
id number(10) not null,
historical number(1) null,
timestamp date null,
constraint pk_part_table
primary key ( id )
)
partition by range ( historical, timestamp )
(
partition active values less than (0,maxvalue),
partition taxyear_2001 values less than (2,to_date('06/04/2001 00:00:00','dd/mm/yyyy hh24:mi:ss')),
partition taxyear_2002 values less than (2,to_date('06/04/2002 00:00:00','dd/mm/yyyy hh24:mi:ss'))
)
/
insert into part_table (id,historical,timestamp) values (1,1,to_date('07/04/2000','dd/mm/yyyy'));
insert into part_table (id,historical,timestamp) values (2,1,to_date('07/04/2001','dd/mm/yyyy'));
insert into part_table (id,historical,timestamp) values (3,1,to_date('07/04/2002','dd/mm/yyyy'));
insert into part_table (id,historical,timestamp) values (4,1,sysdate-15);
insert into part_table (id,historical,timestamp) values (5,0,sysdate-13);
commit;

The plan is that we schedule a daily job to do something like "update part_table set historical=1 where timestamp < sysdate-14 and historical=0;" ... this gives us the "rolling" partition.

The "historical" field takes values 0=false, 1=true.

However, after the above inserts I have this in each partition:

sb_owner@sb_deva> select * from part_table partition (active);

ID HISTORICAL TIMESTAMP
---------- ---------- -------------------
5 0 13/09/2002 10:33:47

sb_owner@sb_deva> select * from part_table partition (taxyear_2001);

ID HISTORICAL TIMESTAMP
---------- ---------- -------------------
1 1 07/04/2000 00:00:00
2 1 07/04/2001 00:00:00
3 1 07/04/2002 00:00:00
4 1 11/09/2002 10:33:47

sb_owner@sb_deva> select * from part_table partition (taxyear_2002);

no rows selected

My "active" partition is working fine, but I'm seeing rows in the "taxyear_2001" partition that should be in the "taxyear_2002" partition.

Have I defined my partitions incorrectly?

Oh, RDBMS 8.1.6.1.0.

TIA,
Graham.

Tom Kyte
September 26, 2002 - 8:22 am UTC

You'll be moving data from partition to partition constantly.

(That is also why your rows land in taxyear_2001: multi-column range keys are compared column by column, and since every historical row has historical = 1, which is already less than the 2 in the taxyear_2001 bound, the timestamp is never even examined.)

what is wrong with a partition for every 2 weeks, or even every 1 month -- at most, you'll be looking at 2 partitions of data when looking for the "last 14 days"

What does this imply?

A reader, September 26, 2002 - 11:58 am UTC

Tom,
A partitioned table on my database is done in this manner.

PARTITION BY RANGE (EFF_DT)
(
PARTITION P_0012 VALUES LESS THAN (MAXVALUE) TABLESPACE PART_DATA
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
STORAGE (
INITIAL 20971520
NEXT 20971520
PCTINCREASE 0
MINEXTENTS 1
MAXEXTENTS 505
)
);

What does this mean? How many partitions are here? Only one?



Tom Kyte
September 26, 2002 - 12:02 pm UTC

one partition.

it means you have one partition.

One more thing...

Ik, September 26, 2002 - 3:29 pm UTC

Thanks a ton, Tom, for the speedy response.

If it has only one partition, then is there any real use for that? What is the data that lies on either side of the partition?
Is that almost like having NO PARTITION at all?



Tom Kyte
September 26, 2002 - 3:42 pm UTC

Well, it opens up the interesting possibility of:

ops$tkyte@ORA817DEV.US.ORACLE.COM> alter table t
  2  exchange partition p_0012
  3  with table NEW_DATA
  4  without validation
  5  /

-- you could easily swap a new set of data in there at the drop of a hat (and not lose grants, etc)....


You could also turn this table with a single partition into a multi-partitioned table without a dump and reload (or using the dbms redefinition package in 9i)

 

performance improvement tips...

Senthil, September 30, 2002 - 10:25 am UTC

Tom-

we would like to go with the insert (with hint) approach, but we have a few questions. say we created a partitioned table with nologging and inserted data from various regular tables into it. now, I want to add a few check constraints, indexes and a primary key, as they exist on the regular tables. I don't have a problem adding the indexes -- I think I can use the nologging option again. but what about the check constraints and primary key? would they take more time to create and enable, or are there hints to improve the performance? assume we have tens of millions of rows and the data are valid.

thanks always.

Tom Kyte
October 01, 2002 - 9:07 am UTC

check constraints do not generate redo, they simply read data and verify it.

a primary key MIGHT create an index (or it might just take over an existing one). In any case, you can index the primary key columns using nologging and then create the constraint, or use the USING INDEX clause of the constraint definition to specify nologging and other attributes.


If you *know* the data is valid, you can enable constraints without validation. Suggest you read through the data warehousing guide for tips and techniques.
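
A sketch of the "index first, constrain second" technique (table, column and constraint names are made up):

-- build the unique index without generating redo
create unique index big_t_pk on big_t(id) nologging;

-- the primary key takes over the pre-built index rather than creating its own
alter table big_t add constraint big_t_pk primary key (id);

-- a check constraint on data known to be clean can skip the validation scan
alter table big_t add constraint big_t_amt_chk check (amt >= 0) enable novalidate;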

Performance implication?

Graham Halsey, October 09, 2002 - 10:00 am UTC

Hi Tom,

Thanks for the advice earlier in this question. I have another one for you:

I'm re-creating my production database as part of a move from one database server to another - mainly so I can change the db_block_size but also to do things like switch DMTs to LMTs etc...

One of the things I'm contemplating is to re-create nearly all my tables with 1 partition in them - just so I can easily make them multi-partitioned later should the need arise (since I can't partition a non-partitioned table quite so easily).

Is there likely to be any performance degradation through having all tables with 1 partition in them, rather than just having them non-partitioned?

Tom Kyte
October 09, 2002 - 4:45 pm UTC

make sure you are ready to use the CBO on everything -- every every everything.

That'll be one of the most obvious impacts. The only optimizer used EVER with partitioned tables is the CBO. (in 10i, that'll be the case as well from what's been announced)


Now, what you might do is reconsider this in light of the fact that in 9i you have online redefs so you can partition a table easily while it is being used.

Also -- since all data would be in one partition, a simple:

create table partitioned ( ......

alter table partitioned exchange the single partition with the existing table

drop the "existing table" (which is now empty)

rename partitioned to existing table
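
A concrete sketch of those steps (all names hypothetical; the column list of T_PART must match EXISTING_TABLE exactly):

create table t_part ( id number, dt date )
partition by range (dt)
( partition pall values less than (maxvalue) );

alter table t_part exchange partition pall
with table existing_table without validation;

drop table existing_table;   -- now empty
rename t_part to existing_table;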



I cannot put my finger on it -- but it just doesn't sound like a good idea -- especially since I can make a non-partitioned table into a partitioned table with all data in a single partition (as you would have) pretty fast.



where to use partition

Nikunj Thaker, October 10, 2002 - 5:12 am UTC

i have oracle EE 8.1.7.
i have below table

SQL> desc item
 Name                            Null?    Type
 ------------------------------- -------- ----
 ITCODE                          NOT NULL VARCHAR2(60)
 ITDES                           NOT NULL VARCHAR2(100)
 BRAND                           NOT NULL VARCHAR2(20)
 SILVER                          NOT NULL VARCHAR2(1)
 SAFFRON                         NOT NULL VARCHAR2(1)
 PACKAGING                       NOT NULL VARCHAR2(20)
 UNITPACK                        NOT NULL NUMBER(12,2)

SQL> desc invoice_mast
 Name                            Null?    Type
 ------------------------------- -------- ----
 SRNO                            NOT NULL NUMBER(20)
 INV_TYPE                                 VARCHAR2(5)
 INV_DATE                                 DATE
 CUSTCODE                        NOT NULL NUMBER(10)
 ORDNO                                    NUMBER(10)
 ORDDATE                                  DATE
 TRANS                                    VARCHAR2(80)
 DISCITY                                  VARCHAR2(80)

SQL> desc invoice_tran
 Name                            Null?    Type
 ------------------------------- -------- ----
 SRNO                            NOT NULL NUMBER(20)
 ITCODE                          NOT NULL VARCHAR2(60)
 NO_OF_CARTON                    NOT NULL NUMBER(13,3)
 UNITRATE                        NOT NULL NUMBER(12,2)
 ASS_VALUE                       NOT NULL NUMBER(12,2)

now I am taking reports through the queries below:

1). select brand,unitpack,packaging,to_date(to_char(inv_date,'Mon-YYYY'),'Mon-YYYY') month,
    sum(no_of_carton) carton from invoice_mast a,invoice_tran b,item c
    where a.srno=b.srno and b.itcode=c.itcode and inv_date between &sdate and &edate
    group by brand,unitpack,packaging,to_date(to_char(inv_date,'Mon-YYYY'),'Mon-YYYY');
2). select silver,to_date(to_char(inv_date,'Mon-YYYY'),'Mon-YYYY') month,
    sum(no_of_carton) carton from invoice_mast a,invoice_tran b,item c
    where a.srno=b.srno and b.itcode=c.itcode and inv_date between &sdate and &edate
    group by silver,to_date(to_char(inv_date,'Mon-YYYY'),'Mon-YYYY');
3). select custcode,to_date(to_char(inv_date,'Mon-YYYY'),'Mon-YYYY') month,
    sum(no_of_carton) carton from invoice_mast a,invoice_tran b,item c
    where a.srno=b.srno and b.itcode=c.itcode and inv_date between &sdate and &edate
    group by custcode,to_date(to_char(inv_date,'Mon-YYYY'),'Mon-YYYY');

shall I use partitioning to improve performance?
if I use partitioning, which type of partitioning do I need to use?

if you can give me the steps, that's fine for me.

Tom Kyte
October 10, 2002 - 6:59 am UTC

Well, at a glance, it would look like a range partition on the inv_date field *MAY* be of use.

The steps you need to use -- that I can do. here is your reading assignment:

o Jonathan Lewis's "Practical Oracle8i -- building efficient databases"
o optional: My chapter on partitioning in "Expert one on one Oracle"
o MANDATORY: Server concepts guide
o MANDATORY: Designing and tuning for performance guide


You need to get a good understanding of the physical mechanics of partitioning before even attempting to use it. Partitioning is predominantly a high availability, easier to manage solution. It can sometimes contribute to better performance.

how to identify performance

Nikunj Thaker, October 10, 2002 - 8:29 am UTC

Thanks tom,

after creating the partition on my table invoice_mast, how can I truly identify whether the performance of the response of the query has increased or not?

Tom Kyte
October 11, 2002 - 7:11 pm UTC

"of the resonce of the query" ???

you would have gathered some metrics before doing this operation (eg: logical IOs per query, cpu time per query, whatever)

you would do the operation

you would then see if, after doing the operation, the logical IOs and cpu times and whatever were improved.

A reader, October 11, 2002 - 2:14 am UTC

I have a question on partition (Oracle 9.0)

ALTER TABLE orders
EXCHANGE PARTITION q1_2000
WITH TABLE orders_q1_2000
UPDATE GLOBAL INDEXES
PARALLEL (DEGREE 4);

Will all valid global indexes on the orders table retain their USABLE status after the statement completes...

TIA...

Tom Kyte
October 11, 2002 - 8:10 pm UTC

that is the purpose of the update global indexes, it'll maintain them during the alter -- yes.

not expert

Nikunj, October 12, 2002 - 1:16 am UTC

>you would then see if, after doing the operation, the logical IOs and cpu times and whatever were improved.

tom, I am not an expert; how can I gather the above information?

thanks,
nikunj

Tom Kyte
October 12, 2002 - 9:11 am UTC

Either read chapter 10 of my book "Expert one on one Oracle" or read

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76992/toc.htm

and learn about sql_trace, tkprof and autotrace
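
In SQL*Plus, a minimal before/after measurement with autotrace might look like this (a sketch; autotrace needs the PLUSTRACE role set up, as described in the guide above):

set autotrace traceonly statistics
select count(*) from invoice_mast;   -- run once before and once after partitioning
set autotrace off

-- compare the "consistent gets" (logical I/O) and elapsed time of the two runs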


A reader, December 03, 2002 - 5:53 pm UTC

I have a range partitioned table. Is it possible to create hash subpartitions on another column within each partition using ALTER TABLE syntax? I have data in the tables and a lot of indexes and constraints.

I know it is possible by creating another table with the new partitioning scheme and then doing an exchange partition. But then I will have to re-create all the indexes and constraints. Is it possible to re-define the partitioning scheme of a table online (just like the 'alter table move' command)? Thanks for your input.

Rgds,

SA

Tom Kyte
December 03, 2002 - 6:46 pm UTC

You would, by definition, have to recreate all of the indexes -- by hashing the data, you will in fact MOVE every row -- they all get new rowids -- bamm, new indexes.

You can put the constraints on with ENABLE RELY to have them go on and ASSUME the existing data is OK (which it must be -- if it was before).



A reader, December 03, 2002 - 10:47 pm UTC

Sorry, but I don't really understand what you say. Are you saying that the partitioning scheme can be redefined online (using an alter table)? If yes, can you give the syntax? Re-creating the indexes would be fine. Thanks.



Tom Kyte
December 04, 2002 - 7:59 am UTC

In 9i, the partitioning scheme can be redefined online using dbms_redefinition, see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4636779130376
for an example.

In 8i and before, it is an offline situation -- you have to reorg it by rebuilding it offline (no modifications while you do it, offline operation -- you are rebuilding the structure)

A reader, December 04, 2002 - 9:24 am UTC

But in 8.1.7, can we use ALTER TABLE to change the partitioning scheme from range to composite (range+hash)? Or do we have to export the data and import it back into an empty table with the new partitioning definition?

I understand it has to be done offline since it is a modification of the structure. But I don't know the syntax. I shall be grateful if you can explain with an example. Thanks again for the help.

Tom Kyte
December 04, 2002 - 12:19 pm UTC

the partitioning scheme would have to be in place from the beginning. there is no alter table command to take a range partitioned table and convert it into a composite. That is a rebuild.
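
For the 8.1.7 case, such a rebuild could look like this (an offline sketch; all names, ranges and the subpartition count are made up):

create table new_t
partition by range (dt)
subpartition by hash (cust_id) subpartitions 4
( partition p2001 values less than (to_date('01-jan-2002','dd-mon-yyyy')),
  partition p2002 values less than (to_date('01-jan-2003','dd-mon-yyyy')),
  partition pmax  values less than (maxvalue) )
as select * from old_t;

drop table old_t;
rename new_t to old_t;
-- then re-create the indexes, constraints and grants on the new table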

Which is better?

Raghu, June 05, 2003 - 12:34 pm UTC

If I can summarize, we have the following ways to convert a non-partitioned table into a partitioned table:
1. CREATE TABLE <new_table> AS SELECT * FROM <old_table> -- where you can create and load the table all at once.
2. CREATE TABLE AS SELECT * FROM <old_table> WHERE 1=0 (empty table) and use INSERT /*+ APPEND */.

Which is the better (more efficient) and faster approach?

Tom Kyte
June 05, 2003 - 1:33 pm UTC

you have

1)
2)
3) dbms_redefinition in 9i
4) export, drop, create new table, imp
5) alter table exchange partition

None of them is "better". If I was to pick between 1 and 2 above, I would choose 1 since it involves less typing.

To partition now or later

Alvin, August 25, 2003 - 2:55 am UTC

I'm the dba of an OLTP system.

Our customers' transactions are flagged as 'failed' if we cannot insert the data into our transaction table within a 6-second time limit. Since our organization earns a certain percentage per transaction, INSERTs are of paramount importance.

Our transaction tables are ever growing and i would like to change the table into a partitioned one.

My dilemma is that to change it to a partitioned table via CTAS, I'd have to

1. Freeze the orig transaction table ( lock on exclusive mode ) so i'd have a consistent snapshot of the transaction table
2. CTAS to another table and define range partitioning
3. Rename and unfreeze orig transaction table
4. Rename New table as transaction table

Our transaction tables have grown (and are still growing), and in doing step 1 (locking) I would be rejecting transactional inserts for a long period of time.

Since there are no parent/foreign keys defined on any of the columns in this transaction table, it seems to be 'independent' of other tables. And because this table is 'independent'...

1. deleting ranges of data from the transactional table would not affect or block inserts, and would therefore minimize or eliminate loss of income for the company.

I was thinking of

1. CTAS a range (annual data) from the transactional table into another table
2. Delete that range from the transactional table.
3. Create another DB containing historical data
and transfer the data from the production to the archive db.

Is this approach viable ?

I need to trim the transaction table because it's not fully normalized and some queries full scan the table. A redesign is not viable at this point.

Tom Kyte
August 25, 2003 - 6:34 am UTC

how about dbms_redefinition

search for that term on this site to read about it.

dbms_redefinition

Alvin, August 25, 2003 - 10:34 pm UTC

I've glanced over it. I tried probing further, but I can't find it in my 8i docs.

Is this one of those undocumented-but-available packages in 8i?

partition table with huge data (120 million rows)

Alex, November 25, 2003 - 5:17 pm UTC

Tom

I want to insert 2 million records into a table that already has 120 million records, and the insert would be for one partition. This table has indexes as well.
Can I use insert .. select, or is there another way to load the data fast, or do I need to disable the indexes before loading the data?

can you please tell me all the steps for a fast load of the data.
thanks

Tom Kyte
November 25, 2003 - 6:45 pm UTC

is this the only data in this partition (eg: are you loading an empty partition)

if so, you can load a table (direct path)
index it (parallel nologging)

and then alter table exchange partition to slide the newly loaded data into the table.


Else, just insert append it; 2 million rows will go in pretty fast -- the major limiter = number of indexes, but 2 million rows -- not too long.

partition table insert

alex, November 26, 2003 - 4:51 pm UTC

Yes, when I am inserting the data the partition is empty. I tried the load, but it took 2:30 hours for 1 million rows.
I could not understand the meaning of this:
"if so, you can load a table (direct path)
index it (parallel nologging)

and then alter table exchange partition to slide the newly loaded data into the
table."

please tell me the steps.

Also, would disabling the indexes and enabling them after the load be helpful or not?
And how much time would enabling the indexes take for a table with 40 partitions and 200 million rows in it? or can we do partition-wise enabling and disabling of the indexes?

a parallel insert into a partitioned table where all the inserts go into one partition -- does it really work?

thanks



Tom Kyte
November 27, 2003 - 10:20 am UTC

i did tell you the steps????


a) load a table using a direct path load

b) index that table in parallel nologging

c) alter table (partitioned table) and exchange the empty partition with the full table.
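
A sketch of a/b/c (all names hypothetical; the load in step a could equally be SQL*Loader direct path):

-- a) load a standalone table via direct path
insert /*+ APPEND */ into load_tab select * from staging_data;
commit;

-- b) index it in parallel, without redo, to match the target's local indexes
create index load_tab_i1 on load_tab(dt) parallel 4 nologging;

-- c) slide the loaded data into the empty target partition
alter table big_tab exchange partition p_new with table load_tab
including indexes without validation;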




Is there a difference

pj, December 03, 2003 - 9:33 am UTC

We use this approach
disable bitmap indexes / truncate partition / insert into partition (append) / rebuild indexes

Would it be better to change this to:
insert into empty table / index / exchange partitions ?

DW -- 600M row insert, only local bitmap indexes, no global indexes on the partitioned table

Tom Kyte
December 03, 2003 - 10:39 am UTC

benchmark it!


(but I think approach 1 is just fine, what you are doing)

exchange partition

alex, December 22, 2003 - 5:00 pm UTC

Tom

create table test1 (x number, y number, z varchar2(30), constraint xpk primary key (x,y))
partition by range(x)
(
partition tab_1 values less than (1),
partition tab_2 values less than (2),
partition tab_3 values less than (3),
partition tab_4 values less than (4),
partition tab_5 values less than (5))
/


insert into test1 values(-1,1,'AB1');
insert into test1 values(0,2,'AB2');
insert into test1 values(1,3,'AB3');
insert into test1 values(2,4,'AB4');
insert into test1 values(3,5,'AB5');

create table temp1(x number, y number, z varchar2(30), constraint xpk1 primary key (x,y));

insert into temp1 values(4,6,'AB6');


alter table test1
exchange partition tab_5
with table temp1;


when I now say
select * from test1 where x = 5;

then I get the following error
*
ERROR at line 1:
ORA-01502: index 'XPK' or partition of such index is in unusable
state


why?
and now I have to rebuild the XPK index on test1; only then does it work.
Test1 has a global index.

thanks

Tom Kyte
December 22, 2003 - 6:35 pm UTC

is there a question in there somewhere?

you have a global index.

you did partition operations on that table and did not maintain the global index.

hence global index is invalid and must be rebuilt by you.

Exchange Partition

Alex, December 24, 2003 - 12:08 pm UTC

I want to say that I am exchanging a partition of a table that has a global index.
It does exchange the partition with the temp table, but then
all the indexes on the main table become unusable.

Why?

Is there any way we can do this? Otherwise the purpose of exchanging the partition is lost; the rebuild would take more time.

hope it is clear now.


Tom Kyte
December 24, 2003 - 1:36 pm UTC

because the global index by default is not maintained.

In 9i, you can do the partition operation AND maintain the global indexes (documentation has all of the details on the syntax)

In 8i and before -- global indexes by definition will go invalid and require a rebuild after partition level operations like that.
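
In 9i, that maintenance looks like this (a sketch using the example tables from above):

alter table test1 exchange partition tab_5 with table temp1
update global indexes;

The global index stays usable; the trade-off is that the index maintenance work happens as part of the DDL.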

A reader, August 09, 2004 - 9:34 pm UTC

Tom,

I have a table t similar to the one you created in your first reply, with global indexes on the owner column.

The partitions are on a monthly basis for the year 2004, plus a maxvalue partition. I want to split the maxvalue partition so I can accommodate 12 more partitions for year 2005.

Tom, is there a way I can do all 12 partitions at once rather than with 12 different split partition statements, and how should I go about the global indexes so there will be index entries for the 12 new ones?

Thanks.

Tom Kyte
August 10, 2004 - 7:20 am UTC

you'd split that last one multiple times.  a split can only specify the attributes of two partitions - the "new one" and the "smaller old one".

as there isn't any data in this partition, splitting it won't "add" any new entries to your global indexes?  only new inserts would do that?

if data does exist in this partition and would cause the global indexes to go unusable, you can:

ops$tkyte@ORA9IR2> alter table t
  2  split partition junk
  3  at ( to_date('15-mar-2003'))
  4  into ( partition part3, partition junk )
  5  update global indexes;
 
Table altered.
 

A reader, August 10, 2004 - 9:24 am UTC

Thanks for your feedback. I forgot to mention that the global index is range partitioned; does that mean I have to split the index as well, like the table, for the new entries?

Tom Kyte
August 10, 2004 - 9:28 am UTC

the global index is range partitioned using its own partitioning scheme.

you need not split it at all if you don't want to -- but even if you did want to split it, it would be split "on its own schedule, for its own reasons", you would not be doing it because you split a table partition.

when you globally index, you are saying "the index and table partitions are independent of one another"

A reader, August 10, 2004 - 9:50 am UTC

I noticed that after the split, if I query a particular (new) partition, the table is being full scanned.

Thanks.

Tom Kyte
August 10, 2004 - 10:07 am UTC

if you use "from table partition(foo)", it is full scanning that partition (not the entire table) -- most likely because "from table partition(foo)" says "hey, use partition level statistics" and you have none, so it says "empty".

you might need to get statistics up to date.

Does partition need to be empty for exchange

Rob, September 16, 2004 - 12:08 pm UTC

Tom:

I have a situation where I want to replace an entire partition with the data from a table whose structure matches the partitioned table exactly. Does the partition need to be empty for partition exchange to work?

Tom Kyte
September 16, 2004 - 12:47 pm UTC

nope

splitting partition

A reader, March 18, 2005 - 12:51 am UTC

Hi

We want to convert a table (over 1000 million rows) to a partitioned table. Can we exchange it into a maxvalue partition and then split it? We don't need to move the data physically, so we are not sure whether exchange/split are only data dictionary operations, i.e. fast (we understand that split needs to validate the rows; is that very resource intensive?)

OR would CTAS be the fastest way?

Tom Kyte
March 18, 2005 - 6:58 am UTC

it would take lots longer to exchange it and split it (a couple of times I would guess)

than it would to either

create an empty table and insert /*+ append */ into it, or

create table as select from it.

Split -vs- Exchange

Brad, April 20, 2005 - 8:24 pm UTC

We have a large table partitioned by month through 2003-01-01, plus a max-value partition that has collected millions of rows we now wish to put into their own partitions.

Essentially, we dropped the ball and didn't add new partitions for over a year.

I've tested a small table using the split method and it worked fine, but am concerned about performance of splitting the max partition > 12 times. I'm going to benchmark it on a 20 million row table tomorrow.

Do you think the following alternative approach is worth trying?

1) Exchange out the max partition into a stand-alone table.
2) Use CTAS to create individual tables that match the partition scheme (i.e. 1 month of data).
3) Exchange in each of the monthly partitions.
4) Add back a new "max" partition
5) rebuild any unusable indexes

I don't have a lot of experience with partition maintenance, so there may be an obvious flaw with this approach. Thanks for your help.

Tom Kyte
April 20, 2005 - 9:21 pm UTC

alternative looks promising.

if you can make this read only for a bit, 1) does not have to happen first.
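
A sketch of that alternative (names and dates made up; note the emptied max partition must be dropped before new monthly ranges can be added above the old high bound):

alter table big_t exchange partition pmax with table held_rows;   -- 1) swap the backlog out
alter table big_t drop partition pmax;                            -- now empty, safe to drop

alter table big_t add partition p2003_01
  values less than (to_date('01-feb-2003','dd-mon-yyyy'));        -- repeat per month

create table m2003_01 as select * from held_rows
 where dt < to_date('01-feb-2003','dd-mon-yyyy');                 -- 2) one month of data

alter table big_t exchange partition p2003_01
  with table m2003_01 without validation;                         -- 3) swap each month in

alter table big_t add partition pmax
  values less than (maxvalue);                                    -- 4) new catch-all

-- 5) rebuild any unusable (global) indexes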

ORA-14130 with partition exchange

Peter, July 19, 2005 - 2:53 pm UTC

Hi Tom,

in your book you said that if you can't resolve a problem in 30 minutes you should go ask somebody; well, I spent all day with this. can you please help me?

It's a simple ETL process:
I have a temporary table,
I disable indexes and constraints (I know my data are ok at this step of the ETL),
I insert the data,
I re-enable the constraints and indexes.

for some reason the second partition exchange will fail with "UNIQUE constraints mismatch in ALTER TABLE EXCHANGE PARTITION"

without the disabling & rebuilding of indexes and constraints, everything works like a charm.

If I have my local indexes for the PK with "compress 1",
even the first partition exchange will fail with ORA-28665: table and partition must have same compression attribute.

thank you for your help

Peter

-- create partitioned table
--DROP TABLE ORG_HIER2;
CREATE TABLE ORG_HIER2
(
ORG_ID INTEGER NOT NULL,
PARENT_ID INTEGER NOT NULL,
DESCRIPTION VARCHAR2(40 BYTE),
AS_OF_DATE DATE NOT NULL
)
PARTITION BY RANGE (AS_OF_DATE)
(
PARTITION P200412 VALUES LESS THAN (TO_DATE('1.1.2005','DD.MM.YYYY')),
PARTITION P200501 VALUES LESS THAN (TO_DATE('1.2.2005','DD.MM.YYYY')),
PARTITION P200502 VALUES LESS THAN (TO_DATE('1.3.2005','DD.MM.YYYY')),
PARTITION P200503 VALUES LESS THAN (TO_DATE('1.4.2005','DD.MM.YYYY'))
);

-- create local pk
ALTER TABLE ORG_HIER2
ADD CONSTRAINT org_hier2_pk
PRIMARY KEY(as_of_date, org_id)
USING INDEX (
CREATE UNIQUE INDEX org_hier2_pk ON ORG_HIER2(as_of_date, org_id) LOCAL
) NOVALIDATE;

-- create table for exchange
--DROP TABLE ORG_HIER_EX;
CREATE TABLE ORG_HIER_EX
AS
SELECT * FROM ORG_HIER2 WHERE 1=2;

-- create pk
ALTER TABLE ORG_HIER_EX
ADD CONSTRAINT org_hier_ex_pk
PRIMARY KEY(as_of_date, org_id)
USING INDEX (
CREATE UNIQUE INDEX org_hier_ex_pk ON ORG_HIER_EX(as_of_date, org_id)
) NOVALIDATE;

-- disable indexes
BEGIN
FOR i IN (SELECT index_name FROM USER_INDEXES WHERE status != 'UNUSABLE' AND table_name = 'ORG_HIER_EX') LOOP
EXECUTE IMMEDIATE 'alter index ' || i.index_name || ' unusable';
END LOOP;
EXECUTE IMMEDIATE 'ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE';
END;
/

ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE;

-- disable constraints
BEGIN
FOR i IN (SELECT constraint_name, table_name FROM USER_CONSTRAINTS WHERE status = 'ENABLED' AND table_name = 'ORG_HIER_EX') LOOP
EXECUTE IMMEDIATE 'alter table '||i.table_name||' disable constraint '||i.constraint_name||'';
END LOOP;
END;
/

-- do some load
INSERT INTO ORG_HIER_EX VALUES(1,2,'My organization','31.3.2005');

-- enable constraints
BEGIN
FOR i IN (SELECT constraint_name, table_name FROM USER_CONSTRAINTS WHERE status = 'DISABLED' AND table_name = 'ORG_HIER_EX') LOOP
EXECUTE IMMEDIATE 'alter table '||i.table_name||' enable constraint '||i.constraint_name||'';
END LOOP;
END;
/

-- enable indexes
BEGIN
FOR i IN (SELECT index_name FROM USER_INDEXES WHERE status != 'VALID' AND table_name = 'ORG_HIER_EX') LOOP
EXECUTE IMMEDIATE 'alter index ' || i.index_name || ' rebuild nologging';
END LOOP;
END;
/

ALTER SESSION SET SKIP_UNUSABLE_INDEXES = FALSE;

-- exchange partition
ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX INCLUDING INDEXES WITHOUT VALIDATION;
-- if i need to exchange back this will fail for sure ORA-14130
ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX INCLUDING INDEXES WITHOUT VALIDATION;

Tom Kyte
July 19, 2005 - 5:52 pm UTC

You have a mixture of validated and non-validated constraints, that is the root cause.


your script, up to and including "alter table org_hier_ex", is run...

ops$tkyte@ORA10GR1> INSERT INTO ORG_HIER_EX VALUES(1,2,'My organization','31-mar-2005');
 
1 row created.
 
ops$tkyte@ORA10GR1> alter table ORG_HIER_EX enable /* novalidate */ constraint org_hier_ex_pk ;
 
Table altered.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> -- exchange partition
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX
  2  INCLUDING INDEXES WITHOUT VALIDATION;
 
Table altered.
 
ops$tkyte@ORA10GR1> -- if i need to exchange back this will fail for sure ORA-14130
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX
  2  INCLUDING INDEXES WITHOUT VALIDATION;
ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX
*
ERROR at line 1:
ORA-14130: UNIQUE constraints mismatch in ALTER TABLE EXCHANGE PARTITION


Your issue, but:

ops$tkyte@ORA10GR1> INSERT INTO ORG_HIER_EX VALUES(1,2,'My organization','31-mar-2005');
 
1 row created.
 
ops$tkyte@ORA10GR1> alter table ORG_HIER_EX enable novalidate constraint org_hier_ex_pk ;
 
Table altered.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> -- exchange partition
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX
  2  INCLUDING INDEXES WITHOUT VALIDATION;
 
Table altered.
 
ops$tkyte@ORA10GR1> -- if i need to exchange back this will fail for sure ORA-14130
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX
  2  INCLUDING INDEXES WITHOUT VALIDATION;
 
Table altered.


works -- your constraint enable validated them and made them "not match" (and only due to the existence of DATA, no data no problems)
 

Peter, July 19, 2005 - 6:39 pm UTC

Thank you Tom,

this is most helpful to me. what about my other issue, when I have the compression level on the PK index set to 1 -- is it because the index rebuild will use nocompression by default?


Tom Kyte
July 19, 2005 - 9:54 pm UTC

can you do a tiny example? (no need for fancy dynamic sql, you know all of the names, i took out the dynamic stuff because it made it hard to see what we were doing exactly)

ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER2
  2  ADD CONSTRAINT org_hier2_pk
  3  PRIMARY KEY(as_of_date, org_id)
  4  USING INDEX (
  5      CREATE UNIQUE INDEX org_hier2_pk ON ORG_HIER2(as_of_date, org_id) LOCAL compress 1
  6  ) noVALIDATE;
 
Table altered.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> DROP TABLE ORG_HIER_EX;
 
Table dropped.
 
ops$tkyte@ORA10GR1> CREATE TABLE ORG_HIER_EX
  2  AS
  3  SELECT * FROM ORG_HIER2 WHERE 1=2;
 
Table created.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER_EX
  2  ADD CONSTRAINT org_hier_ex_pk
  3  PRIMARY KEY(as_of_date, org_id)
  4  USING INDEX (
  5      CREATE UNIQUE INDEX org_hier_ex_pk ON ORG_HIER_EX(as_of_date, org_id) compress 1
  6  ) noVALIDATE;
 
Table altered.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> INSERT INTO ORG_HIER_EX VALUES(1,2,'My organization','31-mar-2005');
 
1 row created.
 
ops$tkyte@ORA10GR1> alter table ORG_HIER_EX enable novalidate constraint org_hier_ex_pk ;
 
Table altered.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> -- exchange partition
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX
  2  INCLUDING INDEXES WITHOUT VALIDATION;
 
Table altered.
 
ops$tkyte@ORA10GR1> -- if i need to exchange back this will fail for sure ORA-14130
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER2 EXCHANGE PARTITION P200503 WITH TABLE ORG_HIER_EX
  2  INCLUDING INDEXES WITHOUT VALIDATION;
 
Table altered.
 
 

Security on partitions

Martin, July 20, 2005 - 6:58 am UTC

Hi Tom,

quick question about partitions. Is it possible to put some form of security on partitions, i.e. restrict users to only be able to select from certain partitions etc.? I know that architecturally this should be approached from other avenues, i.e. FGAC etc., but I am curious more than anything.

Thanks

Tom Kyte
July 20, 2005 - 8:31 am UTC

FGAC would be the approach, grants work at the top level object level -- so just like you grant on a package, not a procedure IN a package, you grant on a table, not a partition of a table.

to restrict access to rows in a table, views or FGAC.
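
A sketch of the view approach (all names made up):

create or replace view recent_sales as
select * from sales
 where sale_dt >= to_date('01-jan-2004','dd-mon-yyyy');

grant select on recent_sales to reporting_user;   -- the user sees only these rows, whatever partition they live in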

A reader, July 20, 2005 - 8:12 am UTC

Hi Tom,

thank you a lot for your answer it was most helpful

here is the test for my last question;
it will fail on the index compression level (both PK indexes
were created with COMPRESS 1):

ALTER INDEX org_hier_ex_pk UNUSABLE;
ALTER TABLE ORG_HIER_EX DISABLE CONSTRAINT org_hier_ex_pk;
ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE;
INSERT INTO ORG_HIER_EX VALUES(1,2,'My ORGANIZATION','31-mar-2005');
ALTER TABLE ORG_HIER_EX ENABLE NOVALIDATE CONSTRAINT org_hier_ex_pk;
ALTER INDEX org_hier_ex_pk REBUILD;
ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE;

But maybe it's better that I explain what I'm trying to do instead of how I am doing it.

Every month we load data from the DWH to our reporting datamart. First, in the DWH, I prepare my tables with the data for a given month in the DWH staging area. Then I want to put them into a partition of the reporting table in the datamart (I do that over a DB link).

I have a big partitioned table (org_hier2), indexed, with constraints. This table has empty partitions for each month for the coming 2 years. I also have a temporary table org_hier_ex that I want to feed with data, create indexes on, calculate stats for, and exchange with an empty partition.

my steps are:
disable indexes and constraints on the org_hier_ex table
feed the table with data over the dblink
re-enable indexes and constraints to match org_hier2
exchange partitions

the whole point is that the data are prepared on staging in the DWH, I know they are good, and I want the select over the DB link to be as fast as possible; that's why I'm disabling both constraints and indexes.

my tables have more indexes and constraints than in the previous example; that's why I was using PL/SQL code to disable/enable the indexes and constraints. However, it seems that my procedures have to be more complicated than a simple

FOR i IN (SELECT constraint_name, table_name FROM USER_CONSTRAINTS WHERE status = 'DISABLED' AND table_name = 'ORG_HIER_EX') LOOP
EXECUTE IMMEDIATE 'alter table '||i.table_name||' enable novalidate constraint '||i.constraint_name||'';
END LOOP;

to put the constraints back into the same state as before.

Same for indexes: if I do disable/rebuild, I will lose information about the compression level (as_of_date is the same for a whole partition, so compress 1 can make a difference in this case).

Is there some simpler and more elegant approach to transport data from the DWH to the datamart over a dblink and put a new partition into the reporting table?

sorry for long reply:-)

Thank you

Peter

Tom Kyte
July 20, 2005 - 8:40 am UTC

the problem here is the "index goes away" and when you enable the constraint, a new index is created (you need not rebuild the index by the way, the enable does that)


 
ops$tkyte@ORA10GR1> ALTER TABLE ORG_HIER_EX ENABLE NOVALIDATE CONSTRAINT org_hier_ex_pk
  2  USING INDEX (
  3      CREATE UNIQUE INDEX org_hier_ex_pk ON ORG_HIER_EX(as_of_date, org_id)
  4          compress 1
  5  ) ;
 
Table altered.


no rebuild (you are doing the index twice).
 

Thanks Tom.

Martin, July 20, 2005 - 8:40 am UTC


Peter, July 20, 2005 - 9:00 am UTC

thank you Tom,

that means I can skip the disable/rebuild PK index part. But if I have some bitmap indexes and a lot of data will be inserted, it's probably better to disable just the bitmap indexes and rebuild them after the insert, right?

Tom Kyte
July 20, 2005 - 12:37 pm UTC

sure (probably).

valent, July 20, 2005 - 9:51 am UTC

Hi Tom,

your last comments were very helpful. The documentation (Data Warehousing Guide, "Using Partitioning to Improve Data Warehouse Refresh") says that I should insert the data into sales_01_2001 and then create the indexes and constraints. Isn't it possible to already have the indexes and constraints on the exchange table and just enable them? Or maybe to automatically put on the same indexes and constraints as on the partitioned table?



Tom Kyte
July 20, 2005 - 12:52 pm UTC

sure, you could already have them, but it is typically faster to load and then index, rather than to load into an already-indexed table.

Convert CLOB non-partition table to partition table

Bala, August 04, 2005 - 12:51 am UTC

Tom,

We have a CLOB table which is growing pretty big (currently 100GB) in size. Per business requirements, we can purge those CLOBs which are older than 90 days. So we want to take advantage of partitioning for purging/archival and performance purposes. This CLOB table is accessed by the package_name column as below :

Query against clob :
--------------------

select <clob column> from <clob_table> where
package_name='ISR03120200926' (for example)

Format: ISRYYMMDDXXXXX
where: ISR - constant
YY - year
MM - month
DD - day
XXXXX - sequence# generated (max : 99999)


This clob table also have "issued_date" and "last_modified_date" date columns. Basically this clob table stores the shopping cart in xml format.

Questions:
-----------

1. We can make the "partition key" column "issued_date / last_modified_date" (this will ease purging), but most of the queries against this CLOB table are by "package_name", which is a character string (varchar2). We want to see the benefit of "partition pruning" also. Can you tell us how to decide on the "partition key" columns in our case?

2. What type of partitioning would be more suitable in our case?
3. Does partitioning help at all for our above requirements?

Tom Kyte
August 04, 2005 - 9:15 am UTC

1) you will partition the table by the DATE field, the one that tells you what is older than 90 days.

you will create a global index on PACKAGE_NAME (or just a "regular" index).

when you go to purge the data (via drop partition), you can use the option to maintain global indexes so there is no loss of access, no downtime.

2) range

3) absolutely.
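
The purge itself then becomes a partition drop that keeps the PACKAGE_NAME index usable (a sketch; table and partition names made up):

alter table clob_tab drop partition p_2005_q1
update global indexes;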

Partition pruning

Bala, August 04, 2005 - 10:51 am UTC

Thanks Tom.

How about partition pruning.. Will i be able to take advantage of it with the above query ?

Tom Kyte
August 04, 2005 - 11:39 am UTC

your index on package name is what matters here.

it will either

a) NOT be partitioned
b) BE partitioned


if (a), then a single index will be scanned and a rowid found and a table access by global index rowid. Read single index, access table by rowid.

if (b), then your predicate will find ONE index partition to read and it basically becomes (a) all over again.

redefine or recreate?

Sandeep, August 09, 2005 - 6:18 am UTC

Hi,
We are using Oracle 9i. There is a group of tables that we need to 'convert' from normal to partitioned. I checked the tables using the 'can_redef..' procedure and it seems they can be redefined online.
However, I've seen some of your examples addressing this issue where new partitioned tables are simply created and renamed.
I would like to know which method is "preferred":
recreate or online redefine? (given there is no major uptime requirement and the application/DB can be brought offline during a maintenance window of 2 hours per fortnight)
Regards,
Sandeep

Tom Kyte
August 09, 2005 - 10:13 am UTC

can you in fact rebuild the table in the 2 hour maintenance window? (table+all indexes)

You'll know after you test it 50 times in test (making sure you have the grants and all worked out perfectly).




redefine or recreate?

Sandeep, August 09, 2005 - 11:39 am UTC

Hi,
Yes. I will be testing that. However, from your question... should I take it that *normally* a redefine is "faster" than a recreate?
Thanks,
Sandeep


Tom Kyte
August 09, 2005 - 11:59 am UTC

redefine is "infinitely fast" since, well, it doesn't matter how long it actually takes, there are just 2 very short periods where the table is locked - right up front and right at the end.


You see, it doesn't really make sense to compare the "speed". Online redefine is used when you cannot have downtime. If you can have downtime, you can definitely do this more efficiently offline (nologging, brute force, direct path, non-intrusive on the performance of the application because the application isn't running)

how about a two tier approach?

Andrew, August 09, 2005 - 11:57 am UTC

I had a situation where we had a date-ranged (months) table used for reporting -- we kept 12 months at a time. There were 'current day at a time' reports off this table that ran slower as the month progressed. Longer report times for historical reporting were acceptable. The problem was they had no window for reorganization time. My solution was to implement a composite partitioned table: date (month), then hash (a default of 35 hash buckets for days). The existing months were implemented with 1 hash bucket for days. The partitions were swapped in instantly during the reorg, and the indexes did not invalidate. The next month was defined with 35 buckets for days, and daily reporting was fast and consistent. What do you think?



Tom Kyte
August 09, 2005 - 12:08 pm UTC

well, you should use 32 or 64, but not 35 for hash partitioning.  You'll find you have a distinctly "bell" shaped distribution.

Actually, in your case, you don't have enough distinct values to hash into so many partitions (you have clumping now, many partitions are empty, others "overfull" and some just right).  

You could use composite range/list as well to ensure a "day is a day".  


<quote from next edition of expert one on one>
Hash Partition Using Powers of Two

I mentioned earlier that the number of partitions should be a power of two. This is easily observed to be true. To demonstrate, we'll set up a stored procedure to automate the creation of a hash partitioned table with N partitions (N will be a parameter). This procedure will construct a dynamic query to retrieve the counts of rows by partition and then display the counts and a simple histogram of the counts by partition. Lastly, it will open this query and let us see the results. This procedure starts with the hash table creation. We will use a table named T:

ops$tkyte@ORA10G> create or replace
  2  procedure hash_proc
  3            ( p_nhash in number,
  4              p_cursor out sys_refcursor )
  5  authid current_user
  6  as
  7      l_text     long;
  8      l_template long :=
  9             'select $POS$ oc, ''p$POS$'' pname, count(*) cnt ' ||
 10               'from t partition ( $PNAME$ ) union all ';
 11  begin
 12      begin
 13          execute immediate 'drop table t';
 14      exception when others
 15          then null;
 16      end;
 17
 18      execute immediate '
 19      CREATE TABLE t ( id )
 20      partition by hash(id)
 21      partitions ' || p_nhash || '
 22      as
 23      select rownum
 24        from all_objects';

Next, we will dynamically construct a query to retrieve the count of rows by partition. It does this using the "template" query defined earlier. For each partition, we'll gather the count using the partition-extended table name and union all of the counts together:

 25
 26      for x in ( select partition_name pname,
 27                        PARTITION_POSITION pos
 28                   from user_tab_partitions
 29                  where table_name = 'T'
 30                  order by partition_position )
 31      loop
 32          l_text := l_text ||
 33                    replace(
 34                    replace(l_template,
 35                          '$POS$', x.pos),
 36                          '$PNAME$', x.pname );
 37      end loop;

Now, we'll take that query and select out the partition position (PNAME) and the count of rows in that partition (CNT). Using RPAD, we'll construct a rather rudimentary but effective histogram:

 38
 39      open p_cursor for
 40         'select pname, cnt,
 41          substr( rpad(''*'',30*round( cnt/max(cnt)over(),2),''*''),1,30) hg
 42            from (' || substr( l_text, 1, length(l_text)-11 ) || ')
 43           order by oc';
 44
 45  end;
 46  /
Procedure created.

If we run this with an input of 4, for four hash partitions, we would expect to see output similar to the following:

ops$tkyte@ORA10G> variable x refcursor
ops$tkyte@ORA10G> set autoprint on
ops$tkyte@ORA10G> exec hash_proc( 4, :x );
PL/SQL procedure successfully completed.
 
PN        CNT HG
-- ---------- ------------------------------
p1      12141 *****************************
p2      12178 *****************************
p3      12417 ******************************
p4      12105 *****************************

The simple histogram depicted shows a nice, even distribution of data over each of the four partitions. Each has close to the same number of rows in it. However, if we simply go from four to five hash partitions, we'll see the following:
ops$tkyte@ORA10G> exec hash_proc( 5, :x );
PL/SQL procedure successfully completed.
 
PN        CNT HG
-- ---------- ------------------------------
p1       6102 **************
p2      12180 *****************************
p3      12419 ******************************
p4      12106 *****************************
p5       6040 **************
This histogram points out that the first and last partitions have just half as many rows as the interior partitions. The data is not very evenly distributed at all. We'll see the trend continue for six and seven hash partitions:
ops$tkyte@ORA10G> exec hash_proc( 6, :x );
PL/SQL procedure successfully completed.
 
PN        CNT HG
-- ---------- ------------------------------
p1       6104 **************
p2       6175 ***************
p3      12420 ******************************
p4      12106 *****************************
p5       6040 **************
p6       6009 **************
6 rows selected.
 
ops$tkyte@ORA10G> exec hash_proc( 7, :x );
PL/SQL procedure successfully completed.
 
PN        CNT HG
-- ---------- ------------------------------
p1       6105 ***************
p2       6176 ***************
p3       6161 ***************
p4      12106 ******************************
p5       6041 ***************
p6       6010 ***************
p7       6263 ***************
7 rows selected.
As soon as we get back to a number of hash partitions that is a power of two, we achieve the goal of even distribution once again:
ops$tkyte@ORA10G> exec hash_proc( 8, :x );
PL/SQL procedure successfully completed.
 
PN        CNT HG
-- ---------- ------------------------------
p1       6106 *****************************
p2       6178 *****************************
p3       6163 *****************************
p4       6019 ****************************
p5       6042 ****************************
p6       6010 ****************************
p7       6264 ******************************
p8       6089 *****************************
8 rows selected.
If you continue this experiment up to 16 partitions, you would see the same effect for the ninth through the fifteenth partitions: a skewing of the data toward the interior partitions, away from the edges, and then, upon hitting the sixteenth partition, a flattening-out again. The same would be true up to 32 partitions, then 64, and so on. This example just points out the importance of using a power of two as the number of hash partitions.
</quote> 

two tier

Andrew, August 09, 2005 - 12:52 pm UTC

This happened a few years back. I don't actually recall the number of hash buckets I used. I do recall going through an analysis similar to yours to come up with a non-clumping hash bucket number to evenly distribute the possible days 1-31; I do recall the answer was not the obvious one (31). In any case, my question was to get your thoughts on the concept of having a table exist in a split configuration: one section of the table essentially date range partitioned (composite date/hash with one bucket) and the other section a true composite date/hash with many buckets. In my mind, many reorganization problems could be solved in such a fashion. What is the downside of this approach?



Tom Kyte
August 09, 2005 - 2:06 pm UTC

I guess I'm not entirely following the "split configuration" part - because a table is either composite partitioned, or not.

Now, the individual partitions may have different numbers of subpartitions (from 1 on up), but the table is composite partitioned (or not).

split partition and local indexes,

sns, August 19, 2005 - 11:41 am UTC

My MAXVALUE partition contained data for 10 weeks. My table is partitioned on week number, and since there were no buckets for the most recent weeks, all 10 weeks ended up in the MAXVALUE partition.

I extracted the data from the MAXVALUE partition into 10 different tables and truncated the MAXVALUE partition.

Later, I split the MAXVALUE partition into 10 different partitions based on week number.

The table has 6 local indexes, and those local indexes now have the new partitions.

Now I need to move the data back from the 10 temp tables into the respective partitions of my base table.

I am thinking of using the "exchange partition" concept. Do I need to disable the indexes if I use exchange partition?

Option 2: does INSERT ... SELECT run faster than exchange partition?

Thanks,

Tom Kyte
August 20, 2005 - 4:29 pm UTC

you would use exchange partition with the "including indexes" option -- just make sure your 10 tables have their indexes all in place first.


examples are on this page.
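For illustration, a minimal sketch of one such exchange, with hypothetical names (WEEK_1 being one of the new partitions and TEMP_WEEK_1 its staging table, which must already carry indexes matching all six local indexes); repeat per partition:

alter table base_table
  exchange partition week_1 with table temp_week_1
  including indexes
  without validation;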

Need Solution

Vikas Katyal, November 15, 2005 - 1:54 pm UTC

Hi Tom,

We have a data warehouse where the nightly ETL process loads data into the database tables.

Recently we partitioned them using RANGE date partitioning so as to keep six months of data.

We have a table named REQUEST_DATA which has been partitioned on REQUEST_TIMESTAMP and has 6 partitions.

CREATE TABLE REQUEST_DATA(
REQUEST_TAG VARCHAR2(16) NOT NULL,
REQUEST_TIMESTAMP TIMESTAMP(0) NOT NULL,
PUBLISHER_ID NUMBER(20, 0),
TREATMENT_ID NUMBER(20, 0) NOT NULL,
CHANNEL_ID NUMBER(20, 0),
REQUEST_COUNT NUMBER(10, 0),
REQUEST_TERMS VARCHAR2(1024),
REQUEST_LANGUAGE VARCHAR2(16),
CLIENT_LOCATION VARCHAR2(255),
CONSTRAINT REQUEST_DATA_PK PRIMARY KEY (REQUEST_TAG)
USING INDEX TABLESPACE INDEX_4M
)
TABLESPACE DATA_8M
PARTITION BY RANGE (REQUEST_TIMESTAMP)(
PARTITION MTH_PART1 VALUES LESS THAN (TO_DATE('30-JUN-2005','DD-MON-YYYY')) ,
PARTITION MTH_PART2 VALUES LESS THAN (TO_DATE('31-JUL-2005','DD-MON-YYYY')) ,
PARTITION MTH_PART3 VALUES LESS THAN (TO_DATE('31-AUG-2005','DD-MON-YYYY')) ,
PARTITION MTH_PART4 VALUES LESS THAN (TO_DATE('30-SEP-2005','DD-MON-YYYY')) ,
PARTITION MTH_PART5 VALUES LESS THAN (TO_DATE('31-OCT-2005','DD-MON-YYYY')) ,
PARTITION MTH_PART6 VALUES LESS THAN (MAXVALUE)
);

On the LAST_DAY of NOV 2005 we want the last partition to have values less than (TO_DATE('30-NOV-2005','DD-MON-YYYY')) and the oldest partition, MTH_PART1, to go away. Besides this, another partition with MAXVALUE should be created to accommodate DEC data.

This will be done on a recurring basis.

Please suggest and let us know how to achieve this.

Tom Kyte
November 15, 2005 - 4:05 pm UTC

you would split mth_part6
you would drop mth_part1

the syntax is documented, there are examples of at least a split on this page.


actually, you probably don't want the MAXVALUE partition at all - you probably just want to ADD a new partition every month at one end and drop the oldest at the other (the split involves rewriting the data)
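A sketch of that rolling-window approach, assuming the MAXVALUE partition is gone (the new partition name is illustrative):

alter table request_data
  add partition mth_200512
  values less than ( to_date('01-jan-2006','dd-mon-yyyy') );

alter table request_data drop partition mth_part1;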

Got your point

Vikas Katyal, November 15, 2005 - 4:27 pm UTC

Hi Tom,

I understand what you said and it is exactly what we want, but can we somehow keep the partition names the same even after dropping the oldest partition, so that partition MTH_2 gets renamed to MTH_1 and so on, and at the end we add a new partition with the name MTH_6?

Should we automate this by writing a procedure and calling it via dbms_job?

What do you say about this approach?

Tom Kyte
November 16, 2005 - 8:27 am UTC

well, you could technically

alter table t rename partition X to Y;

but I would just name the partitions after the month,

create partition mth_200510
mth_200511
mth_200512
mth_200601
.....

over time.


I would not bother renaming them all (unless you have queries that go after specific partitions - using the extended partition name in the from clause)

To Vikas from India

Logan Palanisamy, November 15, 2005 - 8:38 pm UTC

Instead of MTH_PART1, MTH_PART2, etc., my personal preference would be 2005_JUN, 2005_JUL, etc. By looking at the name you know what data it contains.

Anyway, here is the way to achieve what you want by using partition renaming.

SQL> drop table request_data;

Table dropped.

SQL> CREATE TABLE REQUEST_DATA(
  2       REQUEST_TAG            VARCHAR2(16)      NOT NULL,
  3       REQUEST_TIMESTAMP        DATE      NOT NULL,
  4       CONSTRAINT REQUEST_DATA_PK PRIMARY KEY (REQUEST_TAG)
  5  )
  6  PARTITION BY RANGE (REQUEST_TIMESTAMP)(
  7  PARTITION MTH_PART1 VALUES LESS THAN (TO_DATE('30-JUN-2005','DD-MON-YYYY')) ,
  8  PARTITION MTH_PART2 VALUES LESS THAN (TO_DATE('31-JUL-2005','DD-MON-YYYY')) ,
  9  PARTITION MTH_PART3 VALUES LESS THAN (TO_DATE('31-AUG-2005','DD-MON-YYYY')) ,
 10  PARTITION MTH_PART4 VALUES LESS THAN (TO_DATE('30-SEP-2005','DD-MON-YYYY')) ,
 11  PARTITION MTH_PART5 VALUES LESS THAN (TO_DATE('31-OCT-2005','DD-MON-YYYY')) ,
 12  PARTITION MTH_PART6 VALUES LESS THAN (TO_DATE('30-NOV-2005','DD-MON-YYYY'))
 13  );

Table created.

SQL> alter table request_data drop partition mth_part1;

Table altered.

SQL> alter table request_data rename partition mth_part2 to mth_part1;

Table altered.

SQL> alter table request_data rename partition mth_part3 to mth_part2;

Table altered.

SQL> alter table request_data rename partition mth_part4 to mth_part3;

Table altered.

SQL> alter table request_data rename partition mth_part5 to mth_part4;

Table altered.

SQL> alter table request_data rename partition mth_part6 to mth_part5;

Table altered.

-- This is to prove that partition boundaries are still maintained even after renaming.
SQL> alter table request_data add PARTITION MTH_PART6 VALUES LESS THAN (TO_DATE('30-NOV-2005','DD-MON-YYYY'));
alter table request_data add PARTITION MTH_PART6 VALUES LESS THAN (TO_DATE('30-NOV-2005','DD-MON-YYYY'))
                                       *
ERROR at line 1:
ORA-14074: partition bound must collate higher than that of the last partition


-- Now add the new partition with the new boundary.
SQL> alter table request_data add PARTITION MTH_PART6 VALUES LESS THAN (TO_DATE('31-DEC-2005','DD-MON-YYYY'));

Table altered.

SQL> select partition_name, high_value from user_tab_partitions where table_name = 'REQUEST_DATA';

PARTITION_ HIGH_VALUE
---------- --------------------------------------------------------------------------------
MTH_PART1  TO_DATE(' 2005-07-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
MTH_PART2  TO_DATE(' 2005-08-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
MTH_PART3  TO_DATE(' 2005-09-30 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
MTH_PART4  TO_DATE(' 2005-10-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
MTH_PART5  TO_DATE(' 2005-11-30 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
MTH_PART6  TO_DATE(' 2005-12-31 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

6 rows selected.

SQL> spool off
 

Thanks Logan Palanisamy from Vikas INDIA

Vikas, November 16, 2005 - 3:45 pm UTC

Hi Logan,

Thanks for all your help. I also did the same exercise and it worked fine. But here are my questions:

If I need to automate this using a job, how could I achieve it? Should I use the LAST_DAY function to check whether LAST_DAY(SYSDATE) = SYSDATE and then execute the job?

And what will happen if at that point the database is down for some reason, and the new data cannot be loaded because of the partition boundary constraints?

All this will have to be done manually, isn't it? Moreover, how will the partitions be named based on month + year, such as NOV05? Can you provide me with an example?

Thanks

Tom Kyte
November 16, 2005 - 6:21 pm UTC

have a job run every month that creates next month's partition.

to remove data, I'd be afraid to schedule a job personally - but if you want to, go ahead and just have it query the dictionary, drop the oldest partition.


Just run the first job "early" in the month - so if it fails, you have days to fix it, not seconds.
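A minimal sketch of such a job, assuming a procedure add_next_partition exists (a hypothetical name) that issues the ALTER TABLE ... ADD PARTITION for the coming month:

declare
    l_job number;
begin
    -- schedule for the 2nd of next month and monthly thereafter -
    -- early enough in the month to leave days for fixing a failure
    dbms_job.submit( job       => l_job,
                     what      => 'add_next_partition;',
                     next_date => trunc( add_months(sysdate,1), 'MM' ) + 1,
                     interval  => 'trunc( add_months(sysdate,1), ''MM'' ) + 1' );
    commit;
end;
/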

MVs for on-line rebuild

TM, November 16, 2005 - 9:13 pm UTC

Hi Tom,
We have a 100GB table with a BLOB column. We are trying to make it a partitioned table. However, online redefinition comes up with a "snapshot too old" error for big tables - seems like a bug in the 9206 server patch too. Also, in order to avoid the redo/undo for the 100GB table, we want to use a PL/SQL procedure to copy the data with a commit every 1,000 rows. Is it possible to do the same thing the DBMS_REDEFINITION package does using materialized views/logs?
Meaning: define some kind of MV/log on the source/target table, start the PL/SQL copy-with-commit while users are using the source table, then take some (minimal) downtime to refresh the target table with the delta from the source table (due to online users) to sync it up, then rename the table and take care of the constraints and indexes?
If you can point me in a direction on this, that would be a big help. As usual, thanks for all your help.


Tom Kyte
November 17, 2005 - 8:00 am UTC

1555 is not likely a bug; it is likely that it takes a long time to do the redef and either your pctversion is set too low to support that duration of time (you are 1555'ing on the reads of the lobs) or your undo tablespace/undo retention are not large enough (you are 1555'ing on the table itself).


Yes, you could "do it yourself" online redefine - however, your "do 1000 rows at a time" approach will be very problematic. You'll copy 1000 rows at a time - each AS OF A DIFFERENT point in time (unless you use flashback query - but you'll be right back at 1555 again).


To do the "do it yourself", you'd need a consistent view of the base table to start with (but that puts you squarely back into 1555 land) so you can build a MV on a prebuilt table.


Do you have a period where you can schedule some down time - sounds like you would best be served by CREATE TABLE AS SELECT (can be done without any redo/undo for the table data), a drop table and a rename table... (with the corresponding indexes/grants/constraints/.... and all)
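A minimal sketch of that approach, with illustrative names (indexes, grants and constraints would have to be re-created afterwards, and the LOB storage clause set per your needs):

create table big_table_new
nologging
partition by range (dt)
( partition p2004 values less than ( to_date('01-jan-2005','dd-mon-yyyy') ),
  partition p2005 values less than ( to_date('01-jan-2006','dd-mon-yyyy') ),
  partition pmax  values less than ( maxvalue ) )
as
select * from big_table;

-- then: drop table big_table; rename big_table_new to big_table;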

MVs for on-line rebuild

TM, November 17, 2005 - 11:17 am UTC

Thanks Tom for your quick response and valuable advice. We tried to do this with PCTVERSION set to 50% and also with a huge undo tablespace (48GB) and undo_retention set to 120 hours (5 days). After a 22-hour run of dbms_redefinition I got ORA-01555. How could this be? All my undo blocks were well within the undo retention period and should not have been touched. According to Oracle we were stung by BUG 1629230, SNAPSHOT TOO OLD ERROR WHEN TRYING TO DO ONLINE REORG OF LARGE TABLE.

Oracle wants us to go to 9.2.0.7 not because they know the bug is fixed but to find out whether it is fixed. In other words: patch up to 9.2.0.7, try again and let them know. We tried this with the 9207 patch set too and got the 1555 after 32 hours. We let them know too.

We do not have such downtime to do the CTAS, hence all this effort. I agree with your input that 1,000 rows at a time will be at different points in time, but if with such a huge undo tablespace and undo_retention I still run into a bug, then I'm at a loss until the bug is fixed.

The other thing about the PL/SQL (1,000-row) approach is that we can query LONGOPS and see how far it has proceeded. I'm not sure we can see that while doing a CTAS. Is it possible to see it in LONGOPS?

Any more ideas from you is most welcome. Thanks Tom for what all you do for us.
Best Regards.


Tom Kyte
November 18, 2005 - 9:34 am UTC

that bug is very vague - no information at all as to what was "fixed" actually and it is tagged as fixed in 10.0.0 ? there was a 10.1 and a 10.2 - so it is fairly confusing.

As I said "likely" - I didn't have any information as to whether you thought about undo and such while doing this. We have re-orged active tables in the above a terabyte range.

a CTAS will show up in v$session_longops (the full scan rate is reported there).
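For instance, a query along these lines (just a sketch) shows the progress of the running scan:

select sid, opname, target, sofar, totalwork,
       round( 100*sofar/totalwork, 1 ) pct_done
  from v$session_longops
 where totalwork > 0
   and sofar <> totalwork;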

If you do this 1000 rows at a time, you'll end up with each set of rows at a different point in time. You'll have no way to resync the data.



MVs for on-line rebuild

A reader, November 18, 2005 - 8:43 pm UTC

Thanks Tom. Yes, I have thought about the undo and such, and wanted to try different things (the 1,000-row approach, in a TEST environment) when we knew that such downtime was not available and online redefinition did not work (with the bug).

Now, about your comment "We have re-orged active tables in the above a terabyte range" -- I failed to understand whether you did them online with CTAS or with DBMS_REDEF without getting into this bug. Also, I wanted to know how you did it on active tables in the terabyte range and whether they had a BLOB datatype in them. Does it mean no downtime was involved? Can you please provide some steps that I can try?

Thanks again for your valuable time and looking forward to your inputs on this.


Tom Kyte
November 19, 2005 - 10:12 am UTC

dbms_redef - on the Oracle Mail databases - online.

I'm not sure that this bug is in fact "a bug", there is simply nothing in the bug text to indicate a bug was found or that anything was fixed.




MVs for on-line rebuild

TM, November 20, 2005 - 9:57 am UTC

Thanks Tom. I assume here that the Oracle Mail databases do not have BLOBs and that "maybe" DBMS_REDEF handles the BLOB datatype differently; therefore you may not have had issues in the terabyte range.

I agree, the "bug" may be specific to big tables with BLOBs and "may not" apply to non-BLOB tables. Again, this is my assumption.

Thanks for all your input. I will keep trying the other routes - most likely "with a downtime", like you said, with a CTAS to copy - not the entire 100GB, maybe only as much as we really want to retain. Maybe that will be the key question.

Best regards.

Renaming Partition: Index effect

Suvendu, November 24, 2005 - 1:34 am UTC

Hi Tom,
Here I ran into some strange behaviour: renaming a table partition has no effect on the index definition. Could you please look into the following statements:

SQL> conn scott/tiger
Connected.
SQL> show user
USER is "SCOTT"
SQL> drop table t;
drop table t
           *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> create table t ( a int, b char)
  2  partition by range(a)
  3  (partition p1 values less than(10),
  4  partition p2 values less than(20)
  5  );

Table created.

SQL> create index t_indx on t(a) local;

Index created.

SQL> select table_name, partition_name from user_tab_partitions where table_name='T';

TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------
T                              P1
T                              P2

SQL> select index_name, partition_name from user_ind_partitions where partition_name in
  2  (select partition_name from user_tab_partitions where table_name='T');

INDEX_NAME                     PARTITION_NAME
------------------------------ ------------------------------
T_INDX                         P1
T_INDX                         P2

SQL> alter table t rename partition p1 to part_p1;

Table altered.

SQL> alter table t rename partition p2 to part_p2;

Table altered.

SQL> select table_name, partition_name from user_tab_partitions where table_name='T';

TABLE_NAME                     PARTITION_NAME
------------------------------ ------------------------------
T                              PART_P1
T                              PART_P2

SQL> select index_name, partition_name from user_ind_partitions where partition_name in
  2  (select partition_name from user_tab_partitions where table_name='T');

no rows selected

SQL> select index_name, partition_name from user_ind_partitions where index_name='T_INDX';

INDEX_NAME                     PARTITION_NAME
------------------------------ ------------------------------
T_INDX                         P1
T_INDX                         P2

SQL> alter index t_indx rebuild partition p1;

Index altered.

SQL> alter index t_indx rebuild partition part_p2;
alter index t_indx rebuild partition part_p2
                                     *
ERROR at line 1:
ORA-02149: Specified partition does not exist
SQL>

So my question is: why does the index partition name not change after renaming the table partition?

Please, correct me if I'm wrong.

Thanking you.

Regards,
Suvendu 

Tom Kyte
November 24, 2005 - 8:50 am UTC

you renamed a table partition, why would the index partitions change?

sure, they "defaulted" from the table, but after that - there is no relation between their names - once the default is used, that is it.

Oracle Version...

Suvendu, November 24, 2005 - 1:38 am UTC

Sorry, I forgot to provide the Oracle version for the above question: it's Oracle 9.2 on HP-UX.

Regards,
Suvendu


Messing the concepts...

Suvendu, November 24, 2005 - 9:07 am UTC

Thanks a lot for answering my question.

I don't fully agree with you.
I agree in the sense that table/index partitions are individual segments. But after renaming the table partition, the index is still pointing to the old partition name. We cannot tell which table partition an index partition corresponds to, since there is no table_name column in the user/dba_ind_partitions data dictionary views. I finally found out only by getting the DDL script using:

SQL> select dbms_metadata.get_ddl('INDEX','IND_NAME') from dual;

In my view, this violates the concept of normal table/index behaviour.

Please comment on my understanding.

Thank you very much for your kind help; proud to have a mentor like you.

Regards,
Suvendu

 

Tom Kyte
November 24, 2005 - 10:29 am UTC

well, beyond "sorry", I don't know what else to say.  There is no mandatory relationship between an index name and the table name.  There is no mandatory relationship between local index partition names and their corresponding table partition names.

if you don't use the default naming convention - well, then you don't use the default naming convention.  indexes are not named after tables and vice versa.

ops$tkyte@ORA10GR2> select i.index_name, i.partition_name, t.partition_name
  2    from user_ind_partitions i, user_tab_partitions t
  3   where i.index_name = 'T_INDX'
  4     and t.table_name = 'T'
  5     and i.partition_position = t.partition_position
  6  /

INDEX_NAME                     PARTITION_ PARTITION_
------------------------------ ---------- ----------
T_INDX                         P1         PART_P1
T_INDX                         P2         PART_P2


works for the special case of a locally partitioned index of course. 

MVs for on-line rebuild

TM, December 01, 2005 - 9:42 am UTC

Hi Tom,

If you look a couple of posts back, you discussed with me a 100GB BLOB table rebuild to partition it; I was trying 1,000 rows at a time and also DBMS_REDEF, and it kept hitting the 1555 error. Just a recap.

It took 3 weeks to hear back from Oracle Support about partitioning LOB tables. Turns out there is a bug when using Automatic Segment Space Management. They said it was fixed in 9.2.0.4 and thought the fix was in .5, .6, and .7, but found out from me it was not. The solution is simply to not use ASSM. We haven't tried it yet. Thought this might interest you.

While we try this, could you please check on your Oracle Mail databases (where you had no issues rebuilding the big table) whether they use the BLOB datatype and whether they are NOT in an ASSM tablespace? That will help us a lot, at least to understand that, yes, we are heading in the right direction. Thanks much.
regards.

Partitioned vs. non-partitioned tables

A reader, December 06, 2005 - 11:55 am UTC

Tom,
If I create a copy of a non-partitioned table as a partitioned table (identical in terms of data volume and columns), how would a full table scan on the partitioned table compare with that on the non-partitioned table?

I did this experiment and found that an FTS on the partitioned table took almost twice as long as on its non-partitioned copy. T1 is non-partitioned and T2 is partitioned.

16:12:11 System@dbprod@1> select count(*) from schema.t1;

COUNT(*)
----------
43410813

Elapsed: 00:02:36.06

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=16144 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'T1' (Cost=16144 Card=43488114)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
417272 consistent gets
309658 physical reads
0 redo size
200 bytes sent via SQL*Net to client
275 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed


16:22:23 System@dbprod@1> select /*+ full (x)*/ count(*) from schema.t2 x;

COUNT(*)
----------
43410813

Elapsed: 00:04:24.07

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=16157 Card=1)
1 0 SORT (AGGREGATE)
2 1 PARTITION LIST (ALL)
3 2 TABLE ACCESS (FULL) OF 'T2' (Cost=16157 Card=43410910)





Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
417338 consistent gets
408356 physical reads
0 redo size
213 bytes sent via SQL*Net to client
275 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

16:27:09 System@dbprod@1>

Is this kind of difference expected or is something wrong here?

Thanks.

Tom Kyte
December 06, 2005 - 3:05 pm UTC

query 1 ran in 3/5ths the time of query 2.

query 1 did 3/4ths the physical IO of query 2.

I blame physical IO. Use tkprof, run each dozens of times. Then compare.

try to do so in isolation, so other users are not competing with you.

MVs for on-line rebuild from December 01, 2005

Tapas Mishra, December 06, 2005 - 4:33 pm UTC

Tom,

You did not respond to my last post from Dec 01st. I was curious to know whether the Oracle Mail database is NOT in ASSM and is freelist-managed, and whether that is what enabled you to REDEF the big table? Also, does the table use the BLOB datatype?

thanks.
Tapas

Tom Kyte
December 07, 2005 - 1:28 am UTC

I do NOT see every followup - I scan them as quickly as I can and if I miss one, I'll likely never see it again (the way my user interface is set up)

Not sure on the assm, no blob.



BLOB rebuild

Tapas, December 07, 2005 - 10:43 am UTC

Thanks Tom for your reply. So far Oracle has advised rebuilding/REDEFing the big BLOB table (in order to partition it) on non-ASSM tablespaces. We will try it; I just thought I'd keep you posted.

Partition a regular table without dropping the table

SD, April 21, 2006 - 2:36 pm UTC

Hi Tom,

Is there any way to partition an existing regular table without dropping the original table? We need to partition regular tables. The idea of creating a temp partitioned table, exchanging the partition, dropping the original table and renaming temp to original comes with the risk of missing some indexes/grants from the original table. Is there any way I can partition a regular table without dropping it?

Thanks

SD

Tom Kyte
April 21, 2006 - 4:03 pm UTC

dbms_redefinition could be used.

but it will have the same "index could go missing" issue (which is a testing issue in my mind, not a real problem).
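The basic flow looks roughly like this (a sketch, assuming an interim partitioned table T_INTERIM - a hypothetical name - has already been created with the desired layout):

begin
    dbms_redefinition.can_redef_table( user, 'T' );
    dbms_redefinition.start_redef_table( user, 'T', 'T_INTERIM' );
    -- create the indexes/grants/constraints on T_INTERIM here,
    -- then swap the names:
    dbms_redefinition.finish_redef_table( user, 'T', 'T_INTERIM' );
end;
/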




Archive partition data using Exchange to normal empty table

Vin, April 24, 2006 - 8:24 pm UTC

Oracle version 10.2.0.1.0

create table ARC_PT_200601(X number, Y number, Z number, EDATE date);

create unique index PK_ARC_PT_200601 on ARC_PT_200601(X);
alter table ARC_PT_200601 add constraint PK_ARC_PT_200601 primary key (X) using index;
create unique index NI_ARC_PT_200601_Y on ARC_PT_200601(Y);
create unique index NI_ARC_PT_200601_Z on ARC_PT_200601(Z);
create index NI_ARC_PT_200601_EDATE on ARC_PT_200601(EDATE);

create table PT(X number, Y number, Z number, EDATE date)
partition by range(edate)
(
partition PT_200601 values less than (to_date('01-feb-2006', 'dd-mon-yyyy')),
partition PT_200602 values less than (to_date('01-mar-2006', 'dd-mon-yyyy')),
partition PT_200603 values less than (to_date('01-apr-2006', 'dd-mon-yyyy')),
partition PT_200604 values less than (to_date('01-may-2006', 'dd-mon-yyyy'))
);

create unique index GI_PT_X on PT(X) global partition by range(X) (partition GI_PT_X_1 values less than (MAXVALUE));
alter table PT add constraint GI_PT_X primary key (X) using index;
create unique index GI_PT_Y on PT(Y) global;
create unique index GI_PT_Z on PT(Z);
create index LI_PT_EDATE on PT(EDATE) local;

insert into PT values (1,10,100,sysdate-100);
insert into PT values (2,20,200,sysdate-90);
insert into PT values (3,30,300,sysdate-70);
insert into PT values (4,40,400,sysdate-40);
insert into PT values (5,50,500,sysdate-10);
commit;

PARTITION_NAME NUM_ROWS
------------------------- ----------
PT_200601 2
PT_200602 1
PT_200603 1
PT_200604 1

INDEX_NAME PARTITION_NAME iL-iG STATUS
---------------------------- ------------------------- ------ --------
GI_PT_X GI_PT_X_1 GLOBAL USABLE
LI_PT_EDATE PT_200601 LOCAL USABLE
LI_PT_EDATE PT_200602 LOCAL USABLE
LI_PT_EDATE PT_200603 LOCAL USABLE
LI_PT_EDATE PT_200604 LOCAL USABLE

TABLE_NAME INDEX_NAME STATUS
----------------------- ---------------------------- --------
PT LI_PT_EDATE N/A
PT GI_PT_Z VALID
PT GI_PT_Y VALID
PT GI_PT_X N/A
ARC_PT_200601 NI_ARC_PT_200601_EDATE VALID
ARC_PT_200601 NI_ARC_PT_200601_Z VALID
ARC_PT_200601 NI_ARC_PT_200601_Y VALID
ARC_PT_200601 PK_ARC_PT_200601 VALID

Table_Name Total_Records
------------------------------ -------------
PT 5

Table_Name Total_Records
------------------------------ -------------
PT.PT_200601 2

Table_Name Total_Records
------------------------------ -------------
ARC_PT_200601 0

Table_Name X Y Z EDATE
------------------------------ ---------- ---------- ---------- -----------
PT 1 10 100 14-JAN-2006
PT 2 20 200 24-JAN-2006
PT 3 30 300 13-FEB-2006
PT 4 40 400 15-MAR-2006
PT 5 50 500 14-APR-2006

Partition_Name X Y Z EDATE
------------------------------ ---------- ---------- ---------- -----------
PT_200601 1 10 100 14-JAN-2006
PT_200601 2 20 200 24-JAN-2006

ACCT> alter table PT exchange partition PT_200601 with table ARC_PT_200601 including indexes with validation update global indexes parallel 3;
*
ERROR at line 1:
ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION

Questions:
1> Could you please explain why an error resulted when exchanging a partition with rows with an empty table of similar structure and indexes?
I got the same error when trying to attach an empty table to a partition with rows.

2> When archiving data from partitions, should the empty archive table have the same indexes on the same columns as the partitioned table, or can it have no indexes at all?

3> Why does the locality of index GI_PT_Y on the partitioned table not show as GLOBAL even though it was created as global?

Appreciate your time and valuable contributions to the Oracle community.

Tom Kyte
April 25, 2006 - 12:37 am UTC

1) create table ARC_PT_200601(X number, Y number, Z number, EDATE date);

-- create unique index PK_ARC_PT_200601 on ARC_PT_200601(X);
-- alter table ARC_PT_200601 add constraint PK_ARC_PT_200601 primary key (X) using index;
-- create unique index NI_ARC_PT_200601_Y on ARC_PT_200601(Y);
-- create unique index NI_ARC_PT_200601_Z on ARC_PT_200601(Z);
create index NI_ARC_PT_200601_EDATE on ARC_PT_200601(EDATE);


you only create the LOCAL indexes - the global indexes are maintained during the exchange. You had lots of indexes on arc_pt_200601 that didn't make sense in the context of the exchange.

2) if you want to exchange, you need the proper set of local indexes.

3) because it is in fact not partitioned. It is a 'single partition'.

Unusable Local Index after exchange partition

Florin, June 05, 2006 - 11:37 am UTC

Hi Tom,

Would you please have a look in the following example:

create table florin (CREATED,OWNER, OBJECT_NAME,OBJECT_ID,OBJECT_TYPE)
partition by range(CREATED) (
partition part2002 values less than ( to_date( '01-jan-2003', 'dd-mon-yyyy') ),
partition part2003 values less than ( to_date( '01-jan-2004', 'dd-mon-yyyy') ),
partition part2004 values less than ( to_date( '01-jan-2005', 'dd-mon-yyyy') ),
partition part2005 values less than ( to_date( '01-jan-2006', 'dd-mon-yyyy') ),
partition part2006 values less than ( to_date( '01-jan-2007', 'dd-mon-yyyy') ) )
as select CREATED,OWNER, OBJECT_NAME,OBJECT_ID,OBJECT_TYPE from dba_objects;

create index FLORIN_1ix on FLORIN (object_name) local;

create table florin_ex_2003 as select * from florin partition (PART2003);

alter table florin exchange partition PART2003 with table florin_ex_2003 with validation;

select INDEX_NAME,PARTITION_NAME,STATUS from user_ind_partitions where index_name='FLORIN_1IX';

INDEX_NAME PARTITION_NAME STATUS
------------------------------ ------------------------------ --------
FLORIN_1IX PART2002 USABLE
FLORIN_1IX PART2003 UNUSABLE
FLORIN_1IX PART2004 USABLE
FLORIN_1IX PART2005 USABLE
FLORIN_1IX PART2006 USABLE

Would you please explain why the index is in "UNUSABLE" status? It is a LOCAL index, not GLOBAL.

Many thanks in advance.

Florin


Tom Kyte
June 05, 2006 - 12:17 pm UTC

you didn't say "including indexes"


ctl-f for those words to see an example
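A sketch of what that would have looked like: build a matching index on the staging table first, then include the indexes in the exchange (the index name is illustrative):

create index florin_ex_2003_ix on florin_ex_2003 (object_name);

alter table florin exchange partition PART2003 with table florin_ex_2003
including indexes with validation;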

exchange partition function

peter, July 26, 2006 - 11:18 am UTC

Hi Tom,

thanks for all of your very useful answers about partitioning,

I would like to create a function in my etl_utils package
that will perform a partition exchange.

Partition names in the target table are deterministic (e.g. for as_of_date='28.2.2006' the partition name is P200602).

The source table has an as_of_date column, so it is possible to determine the target partition name from it.

CREATE TABLE TEST_P (as_of_date DATE NOT NULL, VALUE INT NOT NULL)
PARTITION BY RANGE (as_of_date)
(
PARTITION p200601 VALUES LESS THAN (TO_DATE('01.02.2006','DD.MM.YYYY')),
PARTITION p200602 VALUES LESS THAN (TO_DATE('01.03.2006','DD.MM.YYYY')),
PARTITION p200603 VALUES LESS THAN (TO_DATE('01.04.2006','DD.MM.YYYY'))
);
ALTER TABLE TEST_P ADD CONSTRAINT test_p_pk
PRIMARY KEY(as_of_date,VALUE) USING INDEX(CREATE UNIQUE INDEX test_p_pk ON TEST_P(as_of_date,VALUE) LOCAL);

CREATE TABLE TEST (as_of_date DATE NOT NULL, VALUE INT NOT NULL);
INSERT INTO TEST VALUES('28.2.2006',10);
ALTER TABLE TEST ADD CONSTRAINT test_pk
PRIMARY KEY(as_of_date,VALUE) USING INDEX(CREATE UNIQUE INDEX test_pk ON TEST(as_of_date,VALUE));
ANALYZE TABLE TEST
COMPUTE STATISTICS
FOR TABLE
FOR ALL INDEXES
FOR ALL INDEXED COLUMNS;

EXEC utils_pkg.exchange_partition('test_p', 'test');

utils_pkg.exchange_partition implementation:

PROCEDURE exchange_partition(v_table_name IN VARCHAR2, v_ex_table IN VARCHAR2) IS
v_part string(30); -- we take partition name from as_of_date in _EX table
BEGIN
DELETE FROM TMP_TABLE;
EXECUTE IMMEDIATE '
INSERT INTO TMP_TABLE
SELECT ''P''||TO_CHAR(as_of_date,''YYYYMM'')
FROM '||v_ex_table||' WHERE ROWNUM=1
';
SELECT Dummy INTO v_part FROM TMP_TABLE WHERE ROWNUM=1;
DELETE FROM TMP_TABLE;
EXECUTE IMMEDIATE '
ALTER TABLE '||v_table_name||'
EXCHANGE PARTITION '||v_part||'
WITH TABLE '||v_ex_table||'
INCLUDING INDEXES WITHOUT VALIDATION
';
END exchange_partition;

problem i see here is use of tmp_table to determine partition name (i think i need to use execute immediate as table name is known just on runtime), is there a way around?

thank you

Tom Kyte
July 26, 2006 - 11:47 am UTC

EXECUTE IMMEDIATE '  
        SELECT ''P''||TO_CHAR(as_of_date,''YYYYMM'')
        FROM '||v_ex_table||' WHERE ROWNUM=1
      ' into v_part;


select into...


ops$tkyte%ORA10GR2> declare
  2          l_dummy dual.dummy%type;
  3  begin
  4          execute immediate 'select dummy from dual' into l_dummy;
  5          dbms_output.put_line( l_dummy );
  6  end;
  7  /
X

PL/SQL procedure successfully completed.
 

Peter, July 26, 2006 - 12:07 pm UTC

thank you TOM,

it worked! I had never seen the construct
EXECUTE IMMEDIATE 'select dummy from dual' INTO v_variable;
before.

to create partitioned table with only structure of old table

Mukul Sharma, July 28, 2006 - 11:39 am UTC

This was helpful, but I need to know the syntax to create a partitioned table with only the structure of the old table, with no rows in the partitioned table.




Tom Kyte
July 28, 2006 - 8:41 pm UTC

ops$tkyte%ORA10GR2> CREATE TABLE t
  2  (
  3    dt  ,
  4    x   ,
  5    y
  6  )
  7  PARTITION BY RANGE (dt)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
 10    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
 11    PARTITION junk VALUES LESS THAN (MAXVALUE)
 12  )
 13  as
 14  select created, user_id, username from all_users
 15  /

Table created.
 

More partitions in table

A reader, July 29, 2006 - 9:00 am UTC

Hi Tom,

Can we have 365 partitions and 8750 subpartitions in a table? If so, is there any performance impact on this table when we do inserts or selects? The idea behind this many partitions is that we can easily truncate partitions when we have more than 50 million records.

Please advise on this.

Tom Kyte
July 29, 2006 - 9:17 am UTC

yes, you can.

the performance impact would come when you create/drop the table. In 10g, we'll page into the library cache only the relevant information as needed (in 9i, the first hard parse might find it takes a while as the entire definition is loaded)

for such a small number of records - only 50,000,000 - I don't think you want so many partitions however. 50,000,000 only sounds big, it isn't that big.

Space constraint on table partition

Bala, July 30, 2006 - 3:35 am UTC

Hi Tom,
Thanks for your answer.

I tried creating 365 partitions with 24 subpartitions on each partition. It took 20 minutes to create the table. The most important thing is that it occupied 9 GB of tablespace.

Why did it take such an enormous amount of space?

Tom Kyte
July 30, 2006 - 8:42 am UTC

look to your initial extent, each segment will of course always allocate at least one initial extent.

You have 8,760 segments, say you have a 1mb initial extent.  Instant 9gb of storage.

ops$tkyte%ORA10GR2> select 8760/1024 from dual;

 8760/1024
----------
 8.5546875
 

Peter, August 01, 2006 - 9:54 am UTC

ops$tkyte%ORA10GR2> CREATE TABLE t
2 (
3 dt ,
4 x ,
5 y
6 )
7 PARTITION BY RANGE (dt)
8 (
9 PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) ,
10 PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) ,
11 PARTITION junk VALUES LESS THAN (MAXVALUE)
12 )
13 as
14 select created, user_id, username from all_users where 1=2
15 /

Tom Kyte
August 01, 2006 - 6:40 pm UTC

indeed, forgot the predicate to get "no rows" :)

thanks

What attributes are carried from partition to the table?

A reader, August 19, 2006 - 7:21 pm UTC



Dear Tom,

When I exchange a partition and convert it to a stand-alone table, what attributes (for lack of a more appropriate term) of the partition are carried over to the stand-alone table?

I have -
compression
bitmap indexes
regular indexes
RLS policies
select and delete privileges
auditing (not FGA)
and
comments on the partition. Out of these, which will be carried over to the table?

Also, if I convert this table to be a partition of another table, which of the above will be carried over?

Thanks.

Tom Kyte
August 19, 2006 - 7:48 pm UTC

when you do the exchange - no data moves

if the data was compressed - it will be compressed.
if you had local indexes, they will be there (the global indexes belong to the partitioned table)

things like grants and security and comments and auditing - they are associated with the table - you made the partition become the table, it (the partition that became the table) will have whatever was in place for the table.

ditto the other way.

physical things come "with the segment"
logical things "stay with the table name"

Question about partitioning

Richard, August 23, 2006 - 4:50 pm UTC

Hi Tom,

I have a problem with a non-partitioned table. My client is a hospital with a non-partitioned table which has a LOB field. The LOB field is in-row, cache and logging, but the size of the LOB segment is about 71 GB of data. The idea is to convert the table into a partitioned table with storage in row disabled and a chunk size of 8KB. The Oracle version is 8.1.7.4 on Sun Solaris. I tried an INSERT into the partitioned table of SELECT * FROM the non-partitioned table, but I got the ORA-1555 with pctversion = 10. I created a private rollback segment for this transaction, but the ORA-1555 nonetheless points to a public rollback segment different from the one I established. The size of the rollback segment I established was 10GB.

The question is, what is the best way to migrate the non partitioned to the partitioned table?

Thank you in advance Tom.


Tom Kyte
August 27, 2006 - 6:54 pm UTC

search around this site for 1555 - you have it totally backwards.

the chance of a QUERY getting a 1555 (and it is your QUERY) is dependent on the size of the SMALLEST rollback segment you have allocated - not the largest.

Your transaction - it generates very very very little UNDO.
Your transaction - very much needs UNDO generated by other transactions.


but that would lead me to believe that other transactions are modifying this data while you are copying it??? how is THAT going to work?

Partitioning

Richard, August 28, 2006 - 3:36 am UTC

Hi Tom,

Thanks. Yes, I already knew that the smallest rollback segment decides the fate of the query. All the rollback segments are equally sized. What I can't understand is why I still got the ORA-1555 after assigning a new private rollback segment to my transaction. That is one of my questions: how is it possible? I have tried launching several bulk inserts in parallel, each one on a range of data.

SQL> desc hc_informetxt
Name      Type       Nullable Default Comments 
--------- ---------- -------- ------- -------- 
CONTENIDO CLOB       Y                         
ID        NUMBER(32)     --> PK                      
IDEXTERNO NUMBER(32) Y                         

 

   declare
    cursor c
    is
    select contenido,id,idexterno from hc_informetxt;

   type miarray is table of c%rowtype index by binary_integer;

   type Arraylob is table of clob index by binary_integer;
   type idArray is table of number index by binary_integer;
   type idextArray is table of number index by binary_integer;

   l_data miarray;
   l_lob Arraylob;
   l_id idArray;
   l_idext idextArray;

   begin
       open c;
       loop
           fetch c bulk collect into l_data LIMIT 100;

           for i in 1 .. l_data.count
           loop 
               l_lob(i) := l_data(i).contenido;
               l_id(i) := l_data(i).id;
               l_idext(i) := l_data(i).idexterno;
           end loop;

           forall i in 1 .. l_data.count
               insert /*append */ /*+nocopy*/ into hc_informetxtp(contenido,id,idexterno) values (l_lob(i),l_id(i),l_idext(i));
           exit when c%notfound;
       end loop;
       close c;
   end;

but it doesn't work in 8.1.7.4, so I have finally decided on several exports in parallel with a "where" clause on different ranges.

    Thank you Tom. 

Tom Kyte
August 28, 2006 - 9:17 am UTC

YOUR TRANSACTION IS NOT PRODUCING GOBS OF UNDO.

Your transaction needs to READ the undo of OTHER TRANSACTIONS.

An ora-1555 means that some UNDO that is needed no longer exists. That undo might have been stored in the smallest OR the largest rollback segment.


You have missed my point:

the odds of an ora-1555 are based on the size of the SMALLEST undo segment. Period.

You get 1555's on the READ OF DATA, not on the write.


that you are "using" a big rollback segment for your writes isn't relevant even a little bit to your READS, your READS need to read from potentially EVERY undo segment.

Partitioning

Richard, August 28, 2006 - 3:49 pm UTC

Thanks Tom,

however, my main problem is how to bulk insert the data from the non-partitioned table, with 71 GB of CLOB data, into a partitioned one with an 8K chunk size and the nologging and cache (or nocache) options. I tried the above PL but it doesn't work in 8.1.7.4, only in 9i. Then I tried INSERT INTO .. SELECT * FROM ..; but when I compare the response times in 9i between the PL and the INSERT .. SELECT, there is no difference, so I thought about the imp command. I found on asktom that imp with a LOB field imports row by row, so I don't know what to do... My questions are:

a) What is the fastest way to insert the data from a non-partitioned table into a partitioned one in 8.1.7.4?

b) If there is no way in 8.1.7.4, is there any way in 9.2.0.7? The architecture is Sun Solaris 9, 4 processors, 4 GB of free RAM.

Tom, please, help me.

Tom Kyte
August 28, 2006 - 4:47 pm UTC

above PL? what is "above PL"

insert /*+ append */ into t select ... to_lob(whatever) from old_t


using parallel if it makes sense
using nologging if it makes sense

Partitioning

Richard, August 29, 2006 - 8:54 am UTC

 
 Hi Tom,

I was referring to this PL:

  SQL> desc hc_informetxt
Name      Type       Nullable Default Comments 
--------- ---------- -------- ------- -------- 
CONTENIDO CLOB       Y                         
ID        NUMBER(32)     --> PK                      
IDEXTERNO NUMBER(32) Y                         

 

   declare
    cursor c
    is
    select contenido,id,idexterno from hc_informetxt;

   type miarray is table of c%rowtype index by binary_integer;

   type Arraylob is table of clob index by binary_integer;
   type idArray is table of number index by binary_integer;
   type idextArray is table of number index by binary_integer;

   l_data miarray;
   l_lob Arraylob;
   l_id idArray;
   l_idext idextArray;

   begin
       open c;
       loop
           fetch c bulk collect into l_data LIMIT 100;

           for i in 1 .. l_data.count
           loop 
               l_lob(i) := l_data(i).contenido;
               l_id(i) := l_data(i).id;
               l_idext(i) := l_data(i).idexterno;
           end loop;

           forall i in 1 .. l_data.count
               insert /*append */ /*+nocopy*/ into 
hc_informetxtp(contenido,id,idexterno) values (l_lob(i),l_id(i),l_idext(i));
           exit when c%notfound;
       end loop;
       close c;
   end;

  which works fine on 9.2.0.7. 

thank you for your help, I'll try the to_lob clause in the insert /*+ append */ into ...

    

Tom Kyte
August 29, 2006 - 3:24 pm UTC

i don't know what "PL" means.


/* append */ IS NOT a well formed hint. And since you are using VALUES in the insert - even if you used /*+ APPEND */ it would do NOTHING.

/*+ nocopy */ - what is that??

that entire piece of code should be erased and you just use

insert into hc_informetxtp(...)
select to_lob(...), ....
from hc_informetxt;

then you could use /*+ APPEND */
then you could use parallel
then you could use nologging

if they make sense.
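Put together, that looks something like the following sketch (TO_LOB would only be needed if the source column were a LONG; CONTENIDO here is already a CLOB):

insert /*+ APPEND */ into hc_informetxtp ( contenido, id, idexterno )
select contenido, id, idexterno
  from hc_informetxt;

commit;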

Partitioning

Richard, August 30, 2006 - 8:06 am UTC


Thank you Tom,

I will follow all your suggestions.


Thank you very much again.

UNpartition -- partition table to regular table

Duke Ganote, September 15, 2006 - 9:36 am UTC

I've inherited some tables with minimal records (less than 100K each), all of which are partitioned. To unpartition them, I've proposed
1. renaming the existing tables
2. creating regular tables with the old names
3. inserting into the regular tables
4. dropping the old, partitioned tables
Sound reasonable?

Tom Kyte
September 15, 2006 - 10:24 am UTC

that is a reasonable approach - as would be

a) using create table as select instead of create+insert
b) using dbms_redefinition if you need to do this "online"
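A minimal sketch of option (a), with an illustrative table name T:

alter table t rename to t_old;

create table t as select * from t_old;

-- re-create the indexes, grants and constraints on T, then:
drop table t_old;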

Changing default attributes before splitting

Doody Atraktsi, November 01, 2006 - 3:39 am UTC

Hi,
I want to control the tablespace in which a new partition and its local index will be created after an 'alter table .. split partition ..'.
The way I do it is via 'ALTER TABLE .. MODIFY DEFAULT ATTRIBUTES TABLESPACE new_tbs', and the same for the index.
Then I run the split partition and the results are: the index is created in new_tbs but the table partition isn't. The workaround is running 'alter table move partition'.

I guess my question is: why doesn't modifying the default attributes for the table act the same as it does for the index?

Tom Kyte
November 01, 2006 - 8:45 am UTC

how's about providing a COMPLETE, yet CONCISE example to work with. starting with a create table and so on.

concise is very very important, as small as you can make it (eg: we need what, maybe 2 columns, we don't need initial, next, and so on...)

Simple test case

Doody Atraktsi, November 02, 2006 - 7:58 am UTC

Hi, sorry I didn't attach it the first time.
Here is a simple test case. Notice that the index partition's tablespace changes but the table partition's tablespace remains as initially defined.

drop table TestPart
/

CREATE TABLE TestPart (
id NUMBER
)
TABLESPACE data01_ts
PARTITION BY RANGE (id)
(
PARTITION p_1_10 VALUES LESS THAN (11)
TABLESPACE data01_ts,
PARTITION p_max VALUES LESS THAN (MAXVALUE)
TABLESPACE data01_ts
)
/

CREATE INDEX TestPart_ix ON TestPart (id)
TABLESPACE data01_ts LOCAL
/
alter index TestPart_ix modify default attributes tablespace data05_ts
/
alter table TestPart modify default attributes tablespace data05_ts
/

ALTER TABLE TestPart
SPLIT PARTITION p_max
AT (21)
INTO (PARTITION p_11_20, PARTITION p_max)
/

select table_name||'...... '||partition_name||'...... '||tablespace_name "obj...partition....tablespace"
from dba_tab_partitions
where table_name = 'TESTPART'
union all
select index_name||'...... '||partition_name||'...... '||tablespace_name
from dba_ind_partitions
where index_name = 'TESTPART_IX'
/

The output of the last sql is:

obj...partition....tablespace
TESTPART...... P_MAX...... DATA01_TS
TESTPART...... P_11_20...... DATA01_TS
TESTPART...... P_1_10...... DATA01_TS
TESTPART_IX...... P_1_10...... DATA01_TS
TESTPART_IX...... P_11_20...... DATA05_TS
TESTPART_IX...... P_MAX...... DATA05_TS

Thank you very much.



Tom Kyte
November 02, 2006 - 8:20 am UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#i2150203

...
Oracle Database splits the corresponding local index partition, even if it is marked UNUSABLE. The database marks UNUSABLE, and you must rebuild the local index partitions corresponding to the split partitions. The new index partitions inherit their attributes from the partition being split. <b>The database stores the new index partitions in the default tablespace of the index partition being split. If that index partition has no default tablespace, then the database uses the tablespace of the new underlying table partitions.</b>
....

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#sthref5372

...
INTO Clause The INTO clause lets you describe the two partitions resulting from the split. In partition_spec, the keyword PARTITION is required even if you do not specify the optional names and physical attributes of the two partitions resulting from the split. If you do not specify new partition names, then Oracle Database assigns names of the form SYS_Pn. Any attributes you do not specify are inherited from current_partition.
.....

So, it looks like it is documented that indexes are handled differently than the table partitions.
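So, to land the new table partitions in the new tablespace as well, name it explicitly in the INTO clause; against the test case above, the split would read (a sketch):

ALTER TABLE TestPart
SPLIT PARTITION p_max
AT (21)
INTO ( PARTITION p_11_20 TABLESPACE data05_ts,
       PARTITION p_max TABLESPACE data05_ts )
/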

Partition exchange with non empty table.

Yogesh Raikar, November 20, 2006 - 9:49 am UTC

Tom,

We have a monthly-partitioned fact table which we are loading on a daily basis. We want to implement an exchange partition strategy for the daily load. The main concern is that the reports should show either a particular day's data in full or not show that day's data at all.
But the exchange partition command will swap the entire month's partition with just the latest daily data. Hence we need to maintain the latest month's partition data in a separate table and swap that table. There are corresponding overheads to this strategy. Do you think this is the right approach? Can you please suggest a better alternative?

e.g., suppose the fact table is loaded up to 10th November, i.e., the November partition has data up to the 10th. When we load data for 11th November into an intermediate table for exchanging, then after the exchange the November partition will have only November 11 data. To avoid that, we propose to maintain another intermediate table with data up to 10th November. We load the 11th's data into this table and then exchange it with the main table's partition. Does this strategy sound OK?

Thanks and Regards

Tom Kyte
November 20, 2006 - 1:42 pm UTC

why not a partition per day then?

Partition per day or per month

Yogesh Raikar, November 21, 2006 - 12:25 am UTC

Thanks very much for your reply, Tom.
Yes, we considered having the partitions on a daily basis. The daily data is on the order of a few hundred thousand records - not much. Also, the DB might store at least 3 years of data live for reporting, hence the number of daily partitions would be around 1,000, which raises administrative issues too.
Or is it possible to keep the current month's data in daily partitions and older data in monthly partitions? Is that a workable solution? Will it take advantage of partition pruning, etc.?

Thanks and Regards
Yogesh

Tom Kyte
November 22, 2006 - 3:10 pm UTC

if there are just a few hundred thousand - why not use just conventional path operations, why bother with all of this? that is tiny.

I am having 92070 with RAC

Star Nirav, December 07, 2006 - 7:33 am UTC


Dear Tom,

I am running Oracle 9.2.0.7 (2-node RAC), with storage on a SAN. Some of the tables have more than 20 lacs rows but are not yet partitioned. These tables are highly utilized/accessed.

Can you let me know how to make those tables partitioned now, or which would be the best practice to improve performance? We have more than 5 indexes on average on these big tables.

Regards,
Star Nirav


Tom Kyte
December 07, 2006 - 1:06 pm UTC

wonder what a lac is.


</code> http://www.google.com/search?q=define%3A+lac&ie=utf-8&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a <code>



but since size is but one minor consideration with regards to partitioning, probably not relevant.


so, do you full scan frequently? if not, HIGHLY UNLIKELY partitioning is going to have any effect. if you do, partitioning MIGHT possibly be something that can help performance.

No FTS in the big tables.

Star Nirav, December 09, 2006 - 6:16 pm UTC

Hi Tom,

Explain plans for those big tables show only nested loops. FTS (full table scans) are not beneficial for us since these tables are big; even if we partition, a single partition would still have more data than we can afford to full scan.

--> My other (partitioned) tables are stored in a tablespace whose datafiles show up in statspack reports as top reads, and per OS feedback there is a very large wait. Can you throw some light on that, please?

Regards,
Star Nirav

Tom Kyte
December 09, 2006 - 6:59 pm UTC

if you do not full scan, HOW do you see partitioning helping.

do you understand what partitioning is? what it does? how it works? If you gain that understanding, you'll be able to see what it can and cannot do.


It is doubtful in your situation that partitioning would be beneficial from a performance perspective. If you have access to my latest book - it goes into more details 'why' this is so.


very huge wait. What is that? 42?

???

Star Nirav, December 09, 2006 - 8:12 pm UTC

Sir...

I would be glad if you shared that link or your book's name.

Since we don't have internet access per our policies, please put that extract in this forum, because our administrators have enabled only Google and your site.

Regards,
Star Nirav


Tom Kyte
December 09, 2006 - 8:27 pm UTC

home page - just go there, right there.

Your latest book

Nikhil, December 11, 2006 - 1:03 am UTC

Hello Tom,
Can you tell me the name of your latest book?
Thanks

Tom Kyte
December 11, 2006 - 8:04 am UTC

go to the home page, it is right there: Expert Oracle Database Architecture.

Space

dav, December 18, 2006 - 12:31 pm UTC

We're planning to partition a 20 GB table. When we partition the table using the dbms_redef package, how much space will we need to have? 20 GB more?

Also, I'm assuming dbms_redef won't make dependent objects invalid. Please confirm.

Thank you for the wonderful site.

Tom Kyte
December 18, 2006 - 2:55 pm UTC

at least 20gb more - if not more, assuming current table is well packed.


it will invalidate dependent objects in current releases of the software when you call finish_redef.

Make indexes unusable for SPLIT performance

Robert Wood, February 15, 2007 - 12:22 pm UTC

Tom,

I am splitting a table partition which has global non-partitioned indexes.
Looking for best performance for the split... should I?....

1. MANUALLY make the indexes unusable before splitting the table partition.
2. Just split the table partition and let Oracle make the indexes unusable automatically.

I am thinking both will have the same effect, since no doubt Oracle will make the indexes unusable right out of the gate... correct?

Thanks,

Robert.

Tom Kyte
February 16, 2007 - 12:37 pm UTC

depends...

do you want to perform the operation OFFLINE or ONLINE

you can do partition operations and have us maintain the indexes, or you can do partition operations and let us invalidate the indexes that need be.

totally your choice, the syntax supports either option.

OFFLINE Partition Split

Robert, February 16, 2007 - 2:40 pm UTC

Tom,

Definitely doing this major repartition (split) OFFLINE (want to get whole thing done as fast as possible... don't want to drag it out online e.g. 'maintain global indexes')

So will it help performance if I 'alter index unusable' before doing the split partitions...Or will Oracle do this automatically as soon as the split partition command begins?

I don't think it will help performance for OFFLINE SPLIT PARTITION by making the indexes unusable myself since Oracle is going to do this anyway.... correct?

Thanks,

Robert.
Tom Kyte
February 18, 2007 - 8:44 am UTC

if you do not maintain the indexes, we'll mark them unusable and it won't really matter if you do it or not.

Exchange from partition to partition

Robert, February 28, 2007 - 6:32 pm UTC

Tom,

Is it possible to exchange table a's partition a1 with table b's partition b1?

Or can you ONLY exchange a partition with a TABLE?

Thanks.
Tom Kyte
February 28, 2007 - 10:05 pm UTC

table with partition, that is it

9.2.0.7 (partition to partition?)

Robert, February 28, 2007 - 6:33 pm UTC

Sorry Tom... database version: 9.2.0.7.

online rebuilding and performance

natasha, April 08, 2007 - 2:29 am UTC

Hi, Tom.

I'm going to rebuild a table online using the dbms_redefinition package. The table is about 100 GB and there is intensive inserting and reading on it. I'm worried about system performance while rebuilding. How much slower will it be? Could you give advice on how I can rebuild the table without reducing performance dramatically during the rebuild?
Tom Kyte
April 09, 2007 - 10:26 am UTC

what is the goal behind rebuilding?

You will have in place the internal triggers used by the materialized view process in order to record changes done to the source table...

You will have the added work of logging/recording the fact that an insert/update/delete is done to the source table

You will have the added IO (read IO) of the redefinition process as it scans the entire 100gb of data.

You will have the added dirtying of the buffer cache as these new rows are placed into the target table.

You will have the additional redo and undo generated

You will have the extra work that DBWR must do to keep the buffer cache clean.


It'll work - it'll have an impact - it is something you want to be really sure you want to do before you do it.

online rebuilding and performance

natasha, April 09, 2007 - 12:06 pm UTC

The goal of redefining is to archive old data. The table is nonpartitioned (the application was designed more than 8 years ago). I want to move the old data to an archive partition and afterwards use the alter table... exchange partition method. All indexes are supposed to be local. The problem is that I can stop the system for just an hour or two, so the only way I see to archive the old data is to use dbms_redefinition first. The best of a bad set of options.

Thank you very much!


Tom Kyte
April 09, 2007 - 1:09 pm UTC

(good reason for redefining by the way :) Just wanted to make sure there was one, so often, there is not...

A reader, July 14, 2008 - 10:31 am UTC

Tom,
I need to convert a table which is partitioned on a monthly basis to a daily basis. What should I do to accomplish this? Please let me know if I should send any specific information regarding this.
Tom Kyte
July 15, 2008 - 9:31 am UTC

you won't convert

you will 'rebuild'

you will create a new table with the new partition scheme.

then you can either


a) use dbms_redefinition (lots of examples on this site, search for them) to online redefine into new structure

b) insert /*+ APPEND */ into new_table select * from old_table;
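
A sketch of option (b), using hypothetical names sales_monthly (old, monthly partitions) and sales_daily (new, daily partitions):

create table sales_daily
partition by range(sale_dt) (
  partition p20080701 values less than ( to_date('02-jul-2008','dd-mon-yyyy') ),
  partition p20080702 values less than ( to_date('03-jul-2008','dd-mon-yyyy') ),
  partition pmax      values less than ( maxvalue )
)
as select * from sales_monthly where 1=0;

insert /*+ APPEND */ into sales_daily select * from sales_monthly;
commit;

-- then build indexes on sales_daily and swap the names:
-- rename sales_monthly to sales_monthly_old;
-- rename sales_daily to sales_monthly;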

Exchange Partition

Mahesh, July 21, 2008 - 4:49 pm UTC


I had a huge table xyz, partitioned by date, quarterly.
The total size of the table is 50GB.
I created an empty table abc via CTAS and did an "exchange partition"
for one of the partitions.

The idea was to perform some join on abc and insert back into xyz.
Due to some issue we have to revert the change, i.e. is there any way to exchange the table back with the partition?

i.e.

ALTER TABLE list_part
EXCHANGE PARTITION northwest WITH TABLE q1_northwest
INCLUDING INDEXES
WITHOUT VALIDATION
EXCEPTIONS INTO uwclass.problems;

The data of partition northwest is now in table q1_northwest. How can I insert the data back from the table into partition northwest?

I do not want to do CTAS for obvious reasons. So is there another way, apart from using some tricks with dbms_redefinition?
Tom Kyte
July 22, 2008 - 11:07 am UTC

you already know how to exchange a table with a partition?? Not sure what you mean here, I did not follow this.

Revert Back for exchange partition

Mahesh, July 22, 2008 - 12:14 pm UTC

Hi Tom,

Sorry for the confusion.

I did this,

ALTER TABLE list_part
EXCHANGE PARTITION northwest WITH TABLE q1_northwest
INCLUDING INDEXES
WITHOUT VALIDATION
EXCEPTIONS INTO uwclass.problems;

So, now,
The data of partition northwest of table list_part is now in table q1_northwest.
How can I get the data from table q1_northwest back into partition northwest of table list_part?

Tom Kyte
July 22, 2008 - 6:29 pm UTC

just run the same command?!?!?!?! if you swap A with B, and then you swap B with A, you have A back in A and B where B was.

you know how to switch a table with a partition, you have a table, you have a partition - switch them.

Exchange Partition

Mahesh, July 22, 2008 - 5:25 pm UTC

Sorry again, Tom! Actually I had a misunderstanding.
What I need is just to run the same command again :)

Thank you,

Partitioning

Sridhara Raju, July 22, 2008 - 5:26 pm UTC

Tom, you are a genius!

partitioned table to regular table

Srinivas, December 05, 2008 - 3:41 am UTC

Tom, great response from you on partitioning, and after performing all the steps you suggested, I now want to convert a partitioned table back to a regular table. Is that possible?
Tom Kyte
December 09, 2008 - 12:05 pm UTC

create table as select - would be one approach.

create table, insert into table would be another.

you can use dbms_redefinition.


If the table has but one partition, you can use alter table EXCHANGE partition as well.
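
For illustration, minimal sketches of the first and last options (part_t, plain_t, and plain_t2 are hypothetical names):

-- create table as select:
create table plain_t as select * from part_t;

-- or, if part_t has exactly one partition p1, a dictionary-only swap:
create table plain_t2 as select * from part_t where 1=0;
alter table part_t exchange partition p1 with table plain_t2;
-- plain_t2 now holds the data as a regular heap table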


