Partitioning is definitely going to be your friend here, because with conventional means (insert/delete), whilst it can be fast to copy the data over to the new target, the cost of removing the data from the source can be very high, eg
SQL> select count(*) from t;
COUNT(*)
----------
3961700
SQL> create table t1 as select * from t where 1=0;
Table created.
SQL> set timing on
SQL> insert /*+ APPEND */ into t1
2 select * from t;
3961700 rows created.
Elapsed: 00:00:08.60
SQL> commit;
Commit complete.
Elapsed: 00:00:00.00
SQL>
SQL> delete from t;
3961700 rows deleted.
Elapsed: 00:00:29.19
So roughly 3.5 times more expensive (in this demo) to delete the rows than to insert them.
Exchange partition is probably not going to help you enormously in terms of efficiency, because the main benefit of an exchange partition is that the data does not move, and of course, in this case, you actually *want* to move the data (to another database).
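For reference, an exchange is just a metadata swap between a partition and a standalone table of the same shape. A minimal sketch (the staging table name is illustrative):

```sql
-- Staging table with the same structure as t, but empty
create table t_stage as select * from t where 1=0;

-- Swap the segments: p1's data now belongs to t_stage, and vice versa.
-- No rows are physically moved, which is why it is so fast.
alter table t exchange partition p1 with table t_stage;
```

The swap is near-instant, but the data is still sitting in the same datafiles on the same database, which is why it does not help with a cross-database move.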
But we've seen from above that inserting data is not particularly expensive, so if we can minimise the cost of the deletes, then we might have a workable solution. For example, the demo below uses yearly partitions to copy a year of data over, and then truncates at the partition level, which is far more efficient than a delete.
SQL> drop table t purge;
Table dropped.
SQL>
SQL> create table t
2 partition by range ( created )
3 (
4 partition p1 values less than ( date '2018-01-01' ),
5 partition p2 values less than ( date '2019-01-01' )
6 )
7 as select d.* from dba_objects d,
8 ( select 1 from dual connect by level <= 50 );
Table created.
SQL>
SQL> set timing on
SQL> insert /*+ APPEND */ into t1
2 select * from t partition (p1 );
3598300 rows created.
Elapsed: 00:00:09.05
SQL> commit;
Commit complete.
Elapsed: 00:00:00.00
SQL> alter table t truncate partition p1;
Table truncated.
Elapsed: 00:00:00.41
If it's hard to copy the data in the first place, eg, that insert is going across a network etc, then another option to explore is transportable tablespaces. In that instance, you would move a partition of data into a new, empty tablespace, and then detach it from the primary database and attach it to the historical one.
So the workflow would be along the lines of:
- move the partition to be archived into tablespace X using 'alter table move partition '
- make tablespace X read only
- unload the metadata with expdp, eg
expdp hr/hr DIRECTORY=dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=X TRANSPORT_FULL_CHECK=YES LOGFILE=tts.log
- copy the datafiles for X to the historical database
- import the metadata
impdp hr/hr DIRECTORY=dir1 DUMPFILE=tts.dmp TRANSPORT_DATAFILES='user01/data/tbs6.dbf' LOGFILE=tts.log
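The first two steps of that workflow might look like this (the tablespace and partition names are illustrative):

```sql
-- Step 1: relocate the partition to be archived into its own tablespace
alter table t move partition p1 tablespace arch_ts;

-- Local index partitions go unusable after a move, so rebuild them
alter table t modify partition p1 rebuild unusable local indexes;

-- Step 2: make the tablespace read only before taking the metadata export
alter tablespace arch_ts read only;
```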