Automatically archive data based on timestamp

Chris Saxon

Thanks for the question, asg.

Asked: July 19, 2016 - 9:55 am UTC

Last updated: December 21, 2018 - 9:09 am UTC

Version: Oracle 12c

Viewed 10K+ times

You Asked

Hi Tom

I need to perform data archival in Oracle 12c, after which the archived data must still be accessible to the application when queried. Is there a way to automatically archive data that is more than 1 year old, based on the 'updated_date' column of each row in a table?

Can this be automated without using triggers? That is, if a row's updated_date is more than 1 year old on any given day, automatically archive that row (delete it from the primary table and insert it into an archive table).

This archived data must be accessible to the application. So my logic would have to be: at a given point in time, 'select * from primary_table' when the updated_date is within the past 1 year, else 'select * from archive_table' when the updated_date is older than 1 year.
Kindly assist.

and Chris said...

Triggers won't help you here. There's nothing to make them fire!

What you need is a (scheduler) job. So your process is:

- Create a procedure to transfer the data from the current to the history table
- Submit a job which runs daily/weekly/monthly/... calling that procedure

The procedure could be a simple insert+delete. For example:
create table t (
  x int,
  up_dt date
);

create table t_hist as
  select * from t;

insert into t
  select rownum, sysdate-400+rownum from dual connect by level <= 400;
  
create index i on t(up_dt);

create or replace procedure arch_t is
  -- capture the cutoff once, so the insert and the delete use the same boundary
  cutoff date := add_months(sysdate, -12);
begin
  insert into t_hist
    select * from t
    where  up_dt <= cutoff;

  delete t
  where  up_dt <= cutoff;
end arch_t;
/
select count(*) from t;

COUNT(*)  
400       

select count(*) from t_hist;

COUNT(*)  
0         

exec arch_t;

select count(*) from t;

COUNT(*)  
366       

select count(*) from t_hist;

COUNT(*)  
34  

If your table is "large", so you're deleting millions of rows on each run, you may want to look at partitioning. If you partition the primary table by updated_date, you can do a "drop partition" instead of a delete. This will be faster and generate less redo.
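
For illustration only (this is not from the original answer), here's a rough sketch of what that could look like, assuming a copy of the example table that is interval-partitioned by month on up_dt; the names t_part and p0 and the dates are made up:

-- Hypothetical table, interval-partitioned by month on up_dt
create table t_part (
  x     int,
  up_dt date
)
partition by range (up_dt)
interval (numtoyminterval(1, 'MONTH'))
( partition p0 values less than (date '2016-01-01') );

-- Ageing out a month is then a metadata operation rather than a row-by-row delete.
-- The partition referenced by FOR (...) must already exist (it is only created
-- once a row maps to it); check user_tab_partitions to see what you have.
alter table t_part drop partition for (date '2016-01-15')
  update global indexes;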

Once you're happy with the procedure, you can schedule it with something like:

BEGIN
    DBMS_SCHEDULER.CREATE_JOB (
            job_name => 'ARCH_JOB',
            job_type => 'STORED_PROCEDURE',
            job_action => 'ARCH_T',
            number_of_arguments => 0,
            start_date => NULL,
            repeat_interval => 'FREQ=WEEKLY',
            end_date => NULL,
            enabled => FALSE,
            auto_drop => FALSE,
            comments => '');
    
    DBMS_SCHEDULER.enable(
             name => 'ARCH_JOB');
END;
/

And Oracle will automatically run it according to the repeat_interval (weekly here).
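
If you want to confirm the job is registered and see when it last ran, one option (not shown in the original answer) is to query the scheduler views:

select job_name, enabled, repeat_interval, last_start_date, next_run_date
from   user_scheduler_jobs
where  job_name = 'ARCH_JOB';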

For more about the scheduler, see:

http://docs.oracle.com/database/121/ADMIN/schedover.htm#ADMIN033
https://oracle-base.com/articles/10g/scheduler-10g


Comments

Thank you very much! This helps a lot.

asg, July 19, 2016 - 12:13 pm UTC


How would a query read the archived table?

Pankaj, December 20, 2018 - 8:26 pm UTC

Hi Tom,

Thank you for sharing your thoughts on automatically archiving the data. I am assuming, though, that when a query runs for the older data it will not read records from the archive table, and we would first have to bring the older records back into the main table.
Is there a way to read archived data?
Partitioning may be helpful, but it is still not as good as archiving.

Thanks!
Chris Saxon
December 21, 2018 - 9:09 am UTC

Is there a way to read archived data?

Run a query against the table with archived data!

If you want to "transparently" include current and archived data, you can build a view over the top:

create view all_data as
  select * from current_data
  union all
  select * from archived_data;
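
Applied to the example tables from earlier (t as current, t_hist as archive), a minimal sketch could look like this (the view name t_all is made up for the example):

create or replace view t_all as
  select * from t
  union all
  select * from t_hist;

-- The application queries one object, wherever the row lives:
select * from t_all where up_dt >= add_months(sysdate, -24);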
