DBMS_REDEFINITION and copying indexes

Question and Answer

Connor McDonald

Thanks for the question, Abhishek.

Asked: July 21, 2016 - 9:44 pm UTC

Last updated: March 04, 2021 - 9:30 am UTC

Version: 12.0

Viewed 10K+ times

You Asked

Hi Tom,
I have a scenario where I need to refresh a partitioned table from another database. The existing process copies the data over a db link and creates a temporary partitioned table. Then we exchange the partitions with the target table, and at the end we rebuild the indexes on the target table.

The problem with this approach is that during the refresh the table data is inconsistent: some partitions have old data and some have new data. Using DBMS_REDEFINITION I am able to copy the data, but the problem is that my source table does not have the indexes I need on the target table. Could you please suggest the best approach in this case?

Thank you in advance.

Regards,
Abhishek.

1. Why is your table data inconsistent? I don't fully understand the issue - could you clarify please?
A: Hi Chris, Thanks for your reply. I am sorry, I should have provided the information in the first place.

Anyway, the table I am working with is a big partitioned table (T1). It is a dimension table loaded from customer information, and it has a bunch of indexes (I1-I23). The file is first loaded into a staging schema (table ST1). Then, if there is any difference between the existing table data and the new data in stage, we refresh the whole table from stage. The refresh is done using partition exchange, and after the refresh all the indexes are rebuilt. The whole process takes around an hour and a half from start to finish.

Now, during this window, anyone who queries T1 can see some old partition data and some new partition data. This is what I meant by inconsistent data. I don't want to stop users from querying T1, so I am going for online redefinition of the table.

Please check the following code snippet. I will use a generic table to simulate the scenario.

SQL> Set serverout on size unlimited timing on
SQL>
SQL> Create table T1 as select object_id, Object_name, Object_type from dba_objects where object_id is not null and rownum < 10;
Table created
SQL> Create table ST1 as select object_id, Object_name, Object_type from dba_objects where object_id is not null;
Table created

SQL>
SQL> Alter table T1 add constraint I_PK Primary key (object_id);
Table altered

SQL> Create index I2 on T1(object_type);
Index created

SQL> select 'S1' as Sample,'T1' as table_name,count(*) from t1
2 union all
3 select 'S1' as Sample,'ST1' as table_name,count(*) from St1;

SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S1 T1 9
S1 ST1 188452

SQL> select 'S1' as Sample,'T1' as table_name,count(*) from user_indexes where table_name = 'T1'
2 union all
3 select 'S1' as Sample,'ST1' as table_name,count(*) from user_indexes where table_name = 'ST1';

SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S1 T1 2
S1 ST1 0

SQL> exec DBMS_REDEFINITION.can_redef_table('AKAHALI','ST1',DBMS_REDEFINITION.CONS_USE_ROWID);
PL/SQL procedure successfully completed

SQL> exec DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'AKAHALI', orig_table => 'ST1', int_table => 'T1', options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);
PL/SQL procedure successfully completed

SQL> select 'S2' as Sample,'T1' as table_name,count(*) from t1
2 union all
3 select 'S2' as Sample,'ST1' as table_name,count(*) from St1;
SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S2 T1 188452
S2 ST1 188452
SQL> select 'S2' as Sample,'T1' as table_name,count(*) from user_indexes where table_name = 'T1'
2 union all
3 select 'S2' as Sample,'ST1' as table_name,count(*) from user_indexes where table_name = 'ST1';
SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S2 T1 3
S2 ST1 0

SQL>
SQL> Declare
2 lvn_errs number;
3 begin
4 DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
5 uname => 'AKAHALI',
6 orig_table => 'ST1',
7 int_table => 'T1',
8 num_errors => lvn_errs
9 );
10 dbms_output.put_line('Errors: '||to_char(lvn_errs));
11 end;
12 /
SQL>
Errors: 0
PL/SQL procedure successfully completed

SQL> select 'S3' as Sample,'T1' as table_name,count(*) from t1
2 union all
3 select 'S3' as Sample,'ST1' as table_name,count(*) from St1;
SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S3 T1 188452
S3 ST1 188452

SQL> select 'S3' as Sample,'T1' as table_name,count(*) from user_indexes where table_name = 'T1'
2 union all
3 select 'S3' as Sample,'ST1' as table_name,count(*) from user_indexes where table_name = 'ST1';
SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S3 T1 3
S3 ST1 0

SQL>
SQL> begin
2 dbms_redefinition.finish_redef_table(
3 uname => 'AKAHALI',
4 orig_table => 'ST1',
5 int_table => 'T1'
6 );
7 end;
8 /
PL/SQL procedure successfully completed

SQL> select 'S4' as Sample,'T1' as table_name,count(*) from t1
2 union all
3 select 'S4' as Sample,'ST1' as table_name,count(*) from St1;
SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S4 T1 188452
S4 ST1 188452
SQL> select 'S4' as Sample,'T1' as table_name,count(*) from user_indexes where table_name = 'T1'
2 union all
3 select 'S4' as Sample,'ST1' as table_name,count(*) from user_indexes where table_name = 'ST1';
SAMPLE TABLE_NAME COUNT(*)
------ ---------- ----------
S4 T1 0
S4 ST1 2

Now the indexes on T1 are moved to ST1. I can recreate them but before that I want to know if there is a way to keep them. Also please advise if there is a better way to achieve this.

Thanks,
Abhishek.

and Chris said...

You can build the indexes on your source table. Then when you do the switch over Oracle preserves these indexes (provided you asked it to!).

But you can't do this if the indexes exist on the destination already. You need to drop them from the target:

SQL> create table T1 as select object_id, Object_name, Object_type from dba_objects where object_id is not null and rownum < 10;

Table created.

SQL> create table ST1 as select object_id, Object_name, Object_type from dba_objects where object_id is not null;

Table created.

SQL>
SQL> alter table T1 add constraint I_PK Primary key (object_id);

Table altered.

SQL> alter table ST1 add constraint SI_PK Primary key (object_id);

Table altered.

SQL>
SQL> Create index I2 on T1(object_type);

Index created.

SQL> Create index SI2 on ST1(object_type);

Index created.

SQL>
SQL> exec DBMS_REDEFINITION.can_redef_table('CHRIS','ST1',DBMS_REDEFINITION.CONS_USE_ROWID);

PL/SQL procedure successfully completed.

SQL> exec DBMS_REDEFINITION.START_REDEF_TABLE( uname => 'CHRIS', orig_table => 'ST1', int_table => 'T1', options_flag => DBMS_REDEFINITION.CONS_USE_ROWID);

PL/SQL procedure successfully completed.

SQL>
SQL> select 'S2' as Sample,'T1' as table_name,count(*) from user_indexes where table_name = 'T1'
  2  union all
  3  select 'S2' as Sample,'ST1' as table_name,count(*) from user_indexes where table_name = 'ST1';

SA TABLE_NAME                       COUNT(*)
-- ------------------------------ ----------
S2 T1                                      3
S2 ST1                                     2

SQL>
SQL> declare
  2  lvn_errs number;
  3  begin
  4  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
  5  uname => 'CHRIS',
  6  orig_table => 'ST1',
  7  int_table => 'T1',
  8  num_errors => lvn_errs
  9  );
 10   dbms_output.put_line('Errors: '||to_char(lvn_errs));
 11  end;
 12  /
declare
*
ERROR at line 1:
ORA-01408: such column list already indexed
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1961
ORA-06512: at "SYS.DBMS_REDEFINITION", line 2486
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3803
ORA-06512: at line 4


SQL>
SQL> drop index I2;

Index dropped.

SQL> alter table t1 drop primary key;

Table altered.

SQL>
SQL> declare
  2  lvn_errs number;
  3  begin
  4  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
  5  uname => 'CHRIS',
  6  orig_table => 'ST1',
  7  int_table => 'T1',
  8  num_errors => lvn_errs
  9  );
 10   dbms_output.put_line('Errors: '||to_char(lvn_errs));
 11  end;
 12  /
Errors: 0

PL/SQL procedure successfully completed.

SQL>
SQL> begin
  2  dbms_redefinition.finish_redef_table(
  3  uname => 'CHRIS',
  4  orig_table => 'ST1',
  5  int_table => 'T1'
  6  );
  7  end;
  8  /

PL/SQL procedure successfully completed.

SQL>
SQL> select 'S4' as Sample,'T1' as table_name,count(*) from user_indexes where table_name = 'T1'
  2  union all
  3  select 'S4' as Sample,'ST1' as table_name,count(*) from user_indexes where table_name = 'ST1';

SA TABLE_NAME                       COUNT(*)
-- ------------------------------ ----------
S4 T1                                      2
S4 ST1                                     2


I'm guessing you need the indexes. So query performance will suffer while you're migrating.

So it's worth asking:

How much data normally changes between loads?

Refreshing the whole table because there's "any difference" seems excessive. If the changes are normally few, why not transfer them over using merge?
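For illustration, such a merge might look roughly like this, using the T1/ST1 tables from the example above (a sketch only; rows deleted from stage would need separate handling, e.g. a follow-up DELETE of target rows missing from ST1):

```sql
-- Sketch: incremental refresh via MERGE instead of a full rebuild.
MERGE INTO t1 tgt
USING st1 src
  ON (tgt.object_id = src.object_id)
WHEN MATCHED THEN UPDATE SET
  tgt.object_name = src.object_name,
  tgt.object_type = src.object_type
WHEN NOT MATCHED THEN INSERT (object_id, object_name, object_type)
  VALUES (src.object_id, src.object_name, src.object_type);
```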

Or, instead of exchanging every partition, can you restrict this to just a few that have changed?

And what precise issue is having "inconsistent data" causing you? After all, if you need to refresh the data because it's changed, then people are already looking at "wrong" information! Why not stick with partition exchange?

Also, remember that DBMS_REDEFINITION is meant for changing table properties while people are also changing its data. It sounds to me like you've got some form of batch ETL. So no one else is changing the data.

In this case, you could just switch the two tables over manually by renaming the tables!

This may cause issues with package invalidations, FKs, grants etc. But it will be much quicker than loading the data twice.
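A minimal sketch of that manual switch-over, using the table names from the example above (the grant and role name are illustrative only):

```sql
-- Load ST1 completely off-line, then swap the names in one short window.
ALTER TABLE t1  RENAME TO t1_old;
ALTER TABLE st1 RENAME TO t1;

-- Grants follow the renamed object, so re-grant on the new T1 as needed:
GRANT SELECT ON t1 TO reporting_role;  -- hypothetical role
```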


Comments

Locking issue with renaming the tables

Abhishek Kahali, August 09, 2016 - 8:20 pm UTC

Hi Chris, thanks for your valuable insight. Please see my responses below.

How much data normally changes between loads?
- We really do not bother to check how much data has changed. It is a costly operation. We instead check the total count and the max for some sample points.

Refreshing the whole table because there's "any difference" seems excessive. If the changes are normally few, why not transfer them over using merge?
- The original table is a 115-column hash-partitioned table with 50 partitions, where any or all of the columns can change. Typically it holds around 6M records. A merge would take a whole lot of time.

Or, instead of exchanging every partition, can you restrict this to just a few that have changed?
- No. Any tuple can change in any partition.

And what precise issue is having "inconsistent data" causing you? After all, if you need to refresh the data because it's changed, then people are already looking at "wrong" information! Why not stick with partition exchange?
- Partition exchange is pretty fast, but if the job fails partway through, some partitions can be left with old data.

In this case, you could just switch the two tables over manually by renaming the tables!
- We don't want to do that because of issues with dependent object invalidations, as you rightly mentioned. Schema compiles are not of much help, as some of these objects belong to a different schema and some use dynamic SQL. It can create excessive locking issues.
Chris Saxon
August 10, 2016 - 10:10 am UTC

You know your process better than me. But you asked for suggestions on what else you could try, so the suggestions were just some things for you to investigate.

We really do not bother to check how much data has changed. It is a costly operation. We instead check the total count and the max for some sample points.

True. But refreshing the whole table is taking you 1.5 hours currently. How long does it actually take you to find out what's changed? And could this eliminate a big chunk of work and bring the total time down?

The original table is a 115-column hash-partitioned table with 50 partitions, where any or all of the columns can change. Typically it holds around 6M records. A merge would take a whole lot of time.

The number of hash partitions should be a power of 2. So 50 isn't a great idea. It also seems like an overly large number for just 6 million rows.

Partition exchange is pretty fast, but if the job fails partway through, some partitions can be left with old data.

Then build your process so it will automatically re-run the failures! Or abort completely and rollback what you've already done.
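One possible shape for such a retry-aware exchange loop, assuming one staging table per partition (the ST1_<partition> naming is hypothetical):

```sql
-- Sketch: exchange each partition, collecting failures for a retry pass.
DECLARE
  l_failed sys.odcivarchar2list := sys.odcivarchar2list();
BEGIN
  FOR p IN (SELECT partition_name
            FROM   user_tab_partitions
            WHERE  table_name = 'T1') LOOP
    BEGIN
      EXECUTE IMMEDIATE
        'alter table t1 exchange partition ' || p.partition_name ||
        ' with table st1_' || p.partition_name || ' including indexes';
    EXCEPTION
      WHEN OTHERS THEN
        l_failed.extend;
        l_failed(l_failed.count) := p.partition_name;
    END;
  END LOOP;
  -- Retry the failures here, or raise to trigger a rollback of the load.
  FOR i IN 1 .. l_failed.count LOOP
    dbms_output.put_line('Failed: ' || l_failed(i));
  END LOOP;
END;
/
```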

case partitioned table

Moris, January 17, 2018 - 2:38 pm UTC

Hi
In the case of a partitioned table, an ORA-14024 occurs during DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS. I found (Doc ID 1961587.1) but have no access to view it. What is the workaround? Because if I create the unique partitioned index (the PK) myself, it would be under a different name!

Chris Saxon
January 17, 2018 - 3:55 pm UTC

If you've got no access to look at the doc, how did you find it?!

Anyway the fix is:

Apply patch 19294302

Or, as it's caused by interval partitioning, re-create the table as a regular range-partitioned one.

Not working

Moris, January 18, 2018 - 6:31 am UTC

Just Google ORA-14024 and you'll find the support patch, but you need an account to access it; that answers your question.

Meanwhile, yesterday I asked the DBA team to patch it, but it's still not working. Same error.

The situation is this: the source table is partitioned by id, i.e. a number, not a date. And the old partitions do not hold equal row counts: P1 has 1,000 rows, P6 has 10,000, and so on. PMAX, the MAXVALUE high_value partition, has hundreds of millions of rows. So I guess the table was not well managed in the past, and my task is to split this PMAX; you can guess the reasons. Rather than splitting the max partition many times and staying with such a badly managed table, I decided to redefine it into a table interval-partitioned on id (the PK) with an interval of 30000000.
Now you know the context and the task. What do you suggest? My problem is the PK and its index. If I create the index on the interim table before the redefinition, it will have a different name than the original; that breaks the organization's naming conventions and defeats the purpose of DBMS_REDEFINITION, as I would have to rename, recreate, or rebuild the index after the refresh. And this is an OLTP table...
Chris Saxon
January 18, 2018 - 10:45 am UTC

Meanwhile yesterday I asked dba team to patch it. But still not working .same error.

Are you sure they've patched it? That's a quick turnaround if they have!

As stated, the interval partitioning causes the problem. So in your redef use normal partitioning; defining the sizes you want.
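For example, the interim table could be created with explicit RANGE partitions rather than an INTERVAL clause (column names and partition boundaries here are illustrative only):

```sql
CREATE TABLE t_int (
  id      NUMBER NOT NULL,
  payload VARCHAR2(100)
)
PARTITION BY RANGE (id) (
  PARTITION p1   VALUES LESS THAN (30000000),
  PARTITION p2   VALUES LESS THAN (60000000),
  PARTITION pmax VALUES LESS THAN (MAXVALUE)
);
```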

Workaround and another question

Moris, January 19, 2018 - 7:56 am UTC

I finally created my PK on the interim table and used REGISTER_DEPENDENT_OBJECT to overcome the issue.
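For readers hitting the same issue, the registration call looks roughly like this (table and index names are hypothetical; a PK may also need its constraint registered with CONS_CONSTRAINT):

```sql
-- Sketch: tell DBMS_REDEFINITION that T_PK_INT on the interim table
-- corresponds to T_PK on the original, so it is swapped in at FINISH time.
BEGIN
  DBMS_REDEFINITION.REGISTER_DEPENDENT_OBJECT(
    uname         => user,
    orig_table    => 'T',
    int_table     => 'T_INT',
    dep_type      => DBMS_REDEFINITION.CONS_INDEX,
    dep_owner     => user,
    dep_orig_name => 'T_PK',
    dep_int_name  => 'T_PK_INT');
END;
/
```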

A question: what risks should I expect on the standby DB while executing the redef on the primary? Is there something to worry about? This is a one-billion-row OLTP table.
Chris Saxon
January 19, 2018 - 10:52 am UTC

All the changes will need to propagate to the standby. So work with your DBAs to see how the transfer will hold up.

Alex A, February 12, 2018 - 8:47 pm UTC

Hi folks,

We are looking to compress our 9TB database using dbms_redefinition. We have a physical DG standby configured. I'm unsure what effect this process is going to have on Data Guard. Obviously we want to test this, but it will take quite a bit of work to even get an environment ready, so I just wanted to get your thoughts on whether this will work at all. My thought is that just replicating the DDL commands will not cause any bottleneck problems and the compression will crank away on the standby. It's going to take a very long time, and we can't be without our standby for the amount of time it takes to run through this process.
Connor McDonald
February 13, 2018 - 1:46 am UTC

Under the covers dbms_redefinition is doing conceptually what we might do if we were doing the task manually:

- create a materialized view log on source table to capture changes
- create a materialized view that reflects the new structure/attributes ("START")
- apply indexes, constraints to the view to match the source ("COPY_DEPENDENTS")
- refresh the view from time to time with the mview log entries ("SYNC")
- finally rename the source table to something else, and rename the mview to the source ("FINISH")

The physical standby will not replicate the *commands*; it will simply replicate what goes into the redo log. So you could easily build a little test harness with (say) a 50GB source table, do a redefine on that, and see how much redo you consume. Then extrapolate out to the true size.
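To put a number on that, one way (a sketch) is to check the session's "redo size" statistic before and after the trial redefinition:

```sql
-- Redo generated by the current session so far, in bytes.
SELECT sn.name, ms.value
FROM   v$mystat   ms
JOIN   v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name = 'redo size';
```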

Error during redefinition.

Rajeshwaran, Jeyabal, March 04, 2021 - 8:21 am UTC

Team:

Could you please help us understand what the error below is about? (The test case below is from an 18.4 database.)

demo@XEPDB1> create table t as
  2  select OWNER   ,
  3  OBJECT_NAME    ,
  4  SUBOBJECT_NAME ,
  5  OBJECT_ID      ,
  6  DATA_OBJECT_ID ,
  7  OBJECT_TYPE    ,
  8  CREATED        ,
  9  LAST_DDL_TIME  ,
 10  TIMESTAMP      ,
 11  STATUS         ,
 12  TEMPORARY      ,
 13  GENERATED      ,
 14  SECONDARY      ,
 15  NAMESPACE      ,
 16  EDITION_NAME
 17  from all_objects;

Table created.

demo@XEPDB1> alter table t add constraint t_pk primary key(object_id);

Table altered.

demo@XEPDB1> create table t_intrim(
  2  OWNER            VARCHAR2(128) ,
  3  OBJECT_NAME      VARCHAR2(128) ,
  4  SUBOBJECT_NAME   VARCHAR2(128) ,
  5  OBJECT_ID        NUMBER        ,
  6  DATA_OBJECT_ID   NUMBER        ,
  7  OBJECT_TYPE      VARCHAR2(23)  ,
  8  CREATED          DATE          ,
  9  LAST_DDL_TIME    DATE          ,
 10  TIMESTAMP        VARCHAR2(19)  ,
 11  STATUS           VARCHAR2(7)   ,
 12  TEMPORARY        VARCHAR2(1)   ,
 13  GENERATED        VARCHAR2(1)   ,
 14  flags varchar2(80) );

Table created.

demo@XEPDB1> exec dbms_redefinition.can_redef_table(user,'T');

PL/SQL procedure successfully completed.

demo@XEPDB1> declare
  2     l_colmap long;
  3  begin
  4     l_colmap :=q'# OWNER ,
  5                                     OBJECT_NAME ,
  6                                     SUBOBJECT_NAME ,
  7                                     OBJECT_ID ,
  8                                     DATA_OBJECT_ID ,
  9                                     OBJECT_TYPE ,
 10                                     CREATED ,
 11                                     LAST_DDL_TIME ,
 12                                     TIMESTAMP ,
 13                                     STATUS ,
 14                                     TEMPORARY ,
 15                                     GENERATED ,
 16                                     SECONDARY ||'/'||
 17                                     NAMESPACE ||'/'||
 18                                     EDITION_NAME as flags #';
 19     dbms_redefinition.start_redef_table(user,'T','T_INTRIM',
 20             col_mapping=>l_colmap,
 21             orderby_cols=>'object_id');
 22  end;
 23  /

PL/SQL procedure successfully completed.

demo@XEPDB1> select constraint_name,constraint_type,table_name,status
  2  from user_constraints
  3  where table_name in ('T','T_INTRIM')
  4  and constraint_type <>'C';

CONSTRAINT C TABLE_NAME STATUS
---------- - ---------- --------
T_PK       P T          ENABLED

demo@XEPDB1> variable n number
demo@XEPDB1> begin
  2     dbms_redefinition.copy_table_dependents(user,'T','T_INTRIM',
  3             num_errors=>:n);
  4  end;
  5  /
begin
*
ERROR at line 1:
ORA-00904: "NAMESPACE": invalid identifier
ORA-06512: at "SYS.DBMS_REDEFINITION", line 2226
ORA-06512: at "SYS.DBMS_REDEFINITION", line 2226
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1712
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1594
ORA-06512: at "SYS.DBMS_REDEFINITION", line 447
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1584
ORA-06512: at "SYS.DBMS_REDEFINITION", line 2205
ORA-06512: at "SYS.DBMS_REDEFINITION", line 3334
ORA-06512: at "SYS.DBMS_REDEFINITION", line 5870
ORA-06512: at line 2


demo@XEPDB1> select constraint_name,constraint_type,table_name,status
  2  from user_constraints
  3  where table_name in ('T','T_INTRIM')
  4  and constraint_type <>'C';

CONSTRAINT C TABLE_NAME STATUS
---------- - ---------- --------
T_PK       P T          ENABLED

demo@XEPDB1>

Chris Saxon
March 04, 2021 - 9:30 am UTC

I'm not sure what the issue is here; speak with support.

Error during redefinition.

Marcelo, March 04, 2021 - 7:20 pm UTC

NAMESPACE is NOT NULL. As you are mapping it into FLAGS, the redefinition fails because the NOT NULL constraint has no counterpart on the new table. Dropping the constraint works around it:

alter table t modify namespace null;
