A reader, January 23, 2017 - 10:36 pm UTC
I followed the above suggested Process. I am successfully able to change the datatype of the table. But it took 30 minutes for 4 millions rows data and I need to change the table with 20 millions rows. So is there any other way that we can reduce the time.
Thank you.
January 24, 2017 - 12:37 am UTC
Yes, but with DBMS_REDEFINITION the duration *does not matter*.
Because the system is *available* whilst you do the change.
Consider virtual column
Gh.., January 24, 2017 - 7:49 am UTC
If the values of sname are always numeric, consider adding a virtual column defined as to_char of ...
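A sketch of that idea (the table name T and column SNAME are taken from earlier in this thread; the virtual column name is made up). No existing data is rewritten:

```sql
-- Add a character representation of the numeric SNAME column as a
-- virtual column; it is computed on the fly, so no rows are touched.
alter table t add (
  sname_char varchar2(40) generated always as (to_char(sname)) virtual
);
```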
A reader, January 24, 2017 - 4:07 pm UTC
I am trying in other way as
create table t_new as select sno, to_char(sname) sname, marks from t22;
But I am getting the error below in 12c. My original table has a CLOB column, but I did not include that column here in table t22.
(It works fine in 11g, but my database has been migrated to 12c.)
Error: SQL Error: ORA-60019: Creating initial extent of size 14 in tablespace of extent size 8
I am able to create a table with a CLOB column in 12c if I specify the storage parameters, but with CTAS we cannot specify storage parameters or column datatypes.
A reader, January 25, 2017 - 3:04 pm UTC
Hi Connor McDonald,
Thanks for your link. Even after reading the document, I did not understand it. If you don't mind, and if I can have a little of your valuable time, could you please give me a few more details regarding the LOB type restrictions from 11g to 12c?
January 26, 2017 - 11:47 pm UTC
how about when duration does matter
Edward Hayrabedian, September 11, 2018 - 6:05 pm UTC
hi,
the online redefinition is not a panacea. I am talking about cases when the application release has to be aligned with the database migration; then we have to make the migration as fast as possible. So the original question is still not answered, i.e. which is the fastest way to alter a column full of data?
thanks in advance!!
September 16, 2018 - 3:08 am UTC
i am talking about cases when the application release has to be aligned with the database migration
There's a difference between how long DBMS_REDEFINITION takes from start to finish and the point at which you need an outage. When you use DBMS_REDEFINITION you do:
- a start command
- intermittent sync commands
- a finish command
You only have application unavailability during the "finish" component, at which point you would deploy your application changes. If the start/syncs take 8 hours it does *not* impact your availability.
Here's an example of its use
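A minimal sketch of that sequence (schema and table names here are placeholders; the interim table must already exist with the target structure):

```sql
-- Start: copies existing rows into the interim table and begins
-- capturing ongoing changes in a materialized view log.
exec dbms_redefinition.start_redef_table('MY_SCHEMA', 'ORIG_T', 'INT_T');

-- Sync: run as often as needed while the application stays online;
-- each call applies the changes accumulated since the last sync.
exec dbms_redefinition.sync_interim_table('MY_SCHEMA', 'ORIG_T', 'INT_T');

-- Finish: the only step needing a brief outage; the two tables
-- swap identities in the data dictionary.
exec dbms_redefinition.finish_redef_table('MY_SCHEMA', 'ORIG_T', 'INT_T');
```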
Redefinition over multiple days
Narendra, February 10, 2021 - 5:10 pm UTC
Hello Connor,
Thanks (once again) for your detailed responses, supported by test cases.
I am planning to use DBMS_REDEFINITION on modifying a large table in our database, mainly due to concerns about the "downtime" with "offline" approach. I had a couple of questions about this
1. I am planning to start the REDEF about a week before the actual "event" and am planning to introduce multiple SYNCs during weekdays, with FINISH to be done during the "event". What challenges, if any, do you see with this approach?
a) How will DB going down/ is restarted during weekdays, affect REDEF? I am guessing no impact but happy to be proved wrong
b) Is there a way to monitor the progress of any of the REDEF operations? I believe 12.2 provides a view v$online_redef, but is there any way to monitor REDEF on 11.2?
Thanks,
February 16, 2021 - 3:57 am UTC
1. I am planning to start the REDEF about a week before the actual "event" and am planning to introduce multiple SYNCs during weekdays, with FINISH to be done during the "event". What challenges, if any, do you see with this approach?
The big question really is volume. You can think of the SYNC as being like a materialized view refresh using mview logs, so less volume (ie, a quiet time) is ideal. And don't forget to factor in the time for subordinate objects (indexes, constraints etc)
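One rough way to gauge how much work the next SYNC will have to do (the table name follows the SCOTT.OLD_EMP example below; the MLOG$_ prefix is the standard materialized view log naming convention that redefinition relies on):

```sql
-- Each row in the change log is a pending change that the next
-- SYNC_INTERIM_TABLE call will have to apply.
select count(*) as pending_changes
from   scott.mlog$_old_emp;
```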
a) How will DB going down/ is restarted during weekdays, affect REDEF? I am guessing no impact but happy to be proved wrong
Shouldn't make any difference, but I strongly encourage you to test in your own environment!
SQL> conn /@db19_pdb1 as sysdba
Connected.
SQL> create table scott.old_emp as select * from scott.emp;
Table created.
SQL> alter table scott.old_emp add primary key ( empno );
Table altered.
SQL> create table scott.new_emp as select * from scott.emp where 1=0;
Table created.
SQL>
SQL> EXEC DBMS_REDEFINITION.start_redef_table('SCOTT', 'OLD_EMP', 'NEW_EMP');
PL/SQL procedure successfully completed.
SQL>
SQL> EXEC DBMS_REDEFINITION.sync_interim_table('SCOTT', 'OLD_EMP', 'NEW_EMP');
PL/SQL procedure successfully completed.
SQL> shutdown immediate
Pluggable Database closed.
SQL> startup
Pluggable Database opened.
SQL> EXEC DBMS_REDEFINITION.sync_interim_table('SCOTT', 'OLD_EMP', 'NEW_EMP');
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_REDEFINITION.finish_redef_table('SCOTT', 'OLD_EMP', 'NEW_EMP');
PL/SQL procedure successfully completed.
b) Is there a way to monitor the progress of any of the REDEF operations? I believe 12.2 provides a view v$online_redef but is there any way to monitor REDEF on 11.2?

The first one is equivalent to a big CTAS, so you should be able to use v$session_longops or SQL Monitoring if you are licensed for it.
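As a sketch, something like this against v$session_longops can show the bulk copy's progress on 11.2 (the filter and column choice are illustrative):

```sql
-- During START_REDEF_TABLE, the initial copy typically appears as a
-- long-running operation; sofar/totalwork give a rough progress gauge.
select sid, opname, target, sofar, totalwork,
       round(100 * sofar / nullif(totalwork, 0), 1) as pct_done,
       time_remaining
from   v$session_longops
where  totalwork > 0
and    sofar < totalwork;
```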
Re: Redefinition over multiple days
Narendra, February 17, 2021 - 11:23 am UTC
Thank you Connor.
In terms of SYNC, we are looking at volumes that could range from a few hundred thousand up to a million rows. So hopefully, SYNC should not affect any BAU processing as long as we time it properly.
Hopefully, I will be able to use DBMS_REDEFINITION to deliver the benefit to the business/release. I will post an update here on how it went if I get the chance to use it eventually.
February 23, 2021 - 6:01 am UTC
Keep us posted - sounds like an interesting exercise.
Re: Redefinition over multiple days
Narendra, May 25, 2021 - 9:28 am UTC
Hello Connor,
We (finally) managed to achieve the data type conversion using DBMS_REDEFINITION. Overall, it was a success and achieved our main objective of reducing the amount of time spent on the data type conversion during the main release. Below is what we did in the end:
1. Two days before the actual release date, we managed to create the interim table and run START_REDEF_TABLE and COPY_TABLE_DEPENDENTS. We had estimated about 5-6 hours for this step, mainly due to data volumes, infrastructure, and having to run it at the same time as another maintenance batch job that loads and archives data from the same table. Thanks to the firepower of parallel processing and RAC, we managed to complete this in a (record) 2 hours.
2. The next day (i.e. one day before the release date) we ran SYNC_INTERIM_TABLE. I was almost certain it would complete in minutes, but it took 2 hours. In particular, I noticed that most of the time was spent on a couple of (internal) SQL statements that did not appear to be using the most appropriate execution plan. I did not want to lose focus and change things in the middle of the main change, so we did not interfere.
3. On the day of the release, we ran SYNC_INTERIM_TABLE a couple of times; the first one took 2 hours again, and the second one, run after the "down time" began, completed in a matter of seconds. The FINISH_REDEF_TABLE also completed in a matter of seconds (in fact, I spent more time verifying the conversion, as I found it too good to be true)
It was great to be able to spread the work over multiple days/runs in order to reduce the work during the "down time" to minutes (as opposed to the initial attempt, which took 24 hours on the TEST environment and failed, while the last successful attempt would have taken about 4-6 hours).
I am not sure whether things have improved in recent versions (we did the conversion on 11.2.0.4), but there is definitely scope for improving the performance of SYNC_INTERIM_TABLE by allowing the internal SQL statements to be optimized.
Thank you for your help (once again)
May 28, 2021 - 7:14 am UTC
Thanks for getting back to us with such a detailed response.
dbms_redefinition not possible on REFERENCE Partitions
Rajeshwaran Jeyabal, March 14, 2023 - 3:35 pm UTC
Team,
Is it not possible to apply dbms_redefinition to REFERENCE partitioned tables? Is any workaround possible? The demo below is from 21c (21.3).
create table t1( x int generated by default as identity primary key, y timestamp default systimestamp not null)
partition by range(y)
interval ( numtodsinterval(1,'day') )
( partition p1 values less than (to_timestamp('02-Jan-2023','dd-mon-yyyy')) ) ;
create table t2( x int generated by default as identity primary key, y number not null,
z timestamp default systimestamp not null ,
constraint t2_fk
foreign key(y) references t1)
partition by reference( t2_fk );
create table t3( x int generated by default as identity primary key, y number not null,
z timestamp default systimestamp not null ,
constraint t3_fk
foreign key(y) references t2)
partition by reference( t3_fk );
insert into t1(y) select systimestamp - rownum from all_users where rownum <=10;
insert into t2(y) select rownum from all_users where rownum <=10;
insert into t3(y) select rownum from all_users where rownum <=10;
commit;
demo@PDB1> exec dbms_redefinition.can_redef_table(user,'T2');
BEGIN dbms_redefinition.can_redef_table(user,'T2'); END;
*
ERROR at line 1:
ORA-23549: table "DEMO"."T2" involved in reference partitioning
ORA-06512: at "SYS.DBMS_REDEFINITION", line 291
ORA-06512: at "SYS.DBMS_REDEFINITION", line 6405
ORA-06512: at line 1