

Question and Answer

Chris Saxon

Thanks for the question, Ajay.

Asked: August 08, 2022 - 7:12 am UTC

Last updated: August 11, 2022 - 1:34 pm UTC



You Asked

Hi Tom,

What approach do you suggest to purge a table of approximately 377 GB?

We are currently following this plan:
- Create a temporary table with the required data (approx. 86 crore, i.e. 860 million, records).
- Drop the constraints (15 constraints).
- Truncate the original table.
- Make the indexes unusable (4 indexes).
- Insert the data from the temporary table into the original table (it took 2 hours).
- Rebuild one index as a test (it took 1 hour 30 minutes).
- Recreate one constraint (it took 52 minutes).

We feel we cannot follow this approach in a production environment. Can you suggest a better plan for this activity?

Please let me know if you need more details.

and Chris said...

Is this a one-off process? Or will you want to keep removing the oldest rows in the future?

If it's a one-off then I'd look at doing something like:

create table temp as 
  select <rows you want to keep>

<create indexes, constraints, etc. on temp>

alter table current rename to backup;

alter table temp rename to current;

This means you're only loading the data once instead of twice. It also gives you a fast rollback if there are any problems: just rename the tables back.

If this will be an ongoing process I'd partition the table too. Then remove the oldest partitions as necessary.
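As a sketch of the partitioned approach (all table and column names here are hypothetical, not from the original question), interval partitioning by date makes removing old data a quick metadata operation rather than a bulk delete:

```sql
-- Hypothetical example: partition by month so old data can be dropped cheaply
create table orders (
  order_id    number primary key,
  order_date  date not null,
  amount      number
)
partition by range (order_date)
interval (numtoyminterval(1, 'MONTH'))
(partition p0 values less than (date '2022-01-01'));

-- Dropping the oldest month is near-instant compared to deleting its rows;
-- UPDATE INDEXES keeps any global indexes usable
alter table orders drop partition
  for (date '2022-03-15') update indexes;
```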

Either way, as a first step I'd look at upgrading to 19c. Not only is 11.2 now end-of-life, but this also opens up more (online) options which may help, such as:

- Filtered table move (alter table ... move including rows where ... )
- Single statement table partitioning (alter table ... partition ...)
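Hedged sketches of those two 19c-era (12.2+) statements, with hypothetical table and column names:

```sql
-- Filtered table move: rebuild the table keeping only the wanted rows,
-- online so DML can continue during the move
alter table big_table
  move online
  including rows where created_date >= date '2022-01-01';

-- Single-statement partitioning: convert a non-partitioned table
-- to range/interval partitioned in one online operation
alter table big_table modify
  partition by range (created_date)
  interval (numtoyminterval(1, 'MONTH'))
  (partition p0 values less than (date '2022-01-01'))
  online;
```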

For more on removing lots of rows fast, see


  (2 ratings)


Purging records of a large table

Ajay, August 09, 2022 - 8:03 am UTC

Hi Chris,

That was very informative. Thank you!

I still have a few doubts though.

We have a space concern at the tablespace level: we are restricted to using either CTAS switch rows or CTAS switch tables, and in both methods we have to create a temp table in a different tablespace, say T2 (the current table is in T1).

Now if I follow:
- CTAS switch rows: there will be two inserts, which will take time.
- CTAS switch tables: I need to move the temp table from T2 back to T1 (to do so I would need to drop or truncate the current table), which will take time.

I am in a tough situation right now; what do you suggest?
Chris Saxon
August 09, 2022 - 1:09 pm UTC

Why do you need to move the table back to the other tablespace? Can you defer this to some time in the future?

As stated originally, you need to prioritise upgrading to 19c. Once you're on this version you can move tables online.
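For illustration, on 12.2+ the move back to T1 can be done without blocking DML (object names here are hypothetical):

```sql
-- Online move of the table back to tablespace T1; DML continues during the move
alter table big_table move online tablespace t1;

-- Indexes can be relocated and rebuilt online too
alter index big_table_pk rebuild tablespace t1 online;
```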

Purging records of a large table

Ajay, August 11, 2022 - 6:00 am UTC

Hey Chris, Thank you for the reply!

We are sticking with the CTAS switch rows method, because we cannot use CTAS switch tables since we have a table with one partition.

So this is what we are doing:
1. Take an expdp table backup.
2. Create a temp table in T2 with the required data.
3. Drop the FK constraints of the current table.
4. Truncate the current table and make its indexes unusable (so that the insert will be faster).
5. Insert from the temp table into the current table.
6. Recreate the dropped FK constraints on the current table.
7. Rebuild the indexes online and gather stats.
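Steps 3 to 7 above might look like the following in SQL; every object name here is hypothetical, standing in for the real constraints and indexes:

```sql
-- Step 3: drop the FK constraints
alter table orders drop constraint orders_customer_fk;

-- Step 4: truncate and mark indexes unusable so the insert skips maintenance
truncate table orders;
alter index orders_idx1 unusable;

-- Step 5: direct-path insert from the temp table
insert /*+ append */ into orders
  select * from orders_temp;
commit;

-- Step 6: recreate the FK constraints
alter table orders add constraint orders_customer_fk
  foreign key (customer_id) references customers (customer_id);

-- Step 7: rebuild indexes online and gather stats
alter index orders_idx1 rebuild online;
begin
  dbms_stats.gather_table_stats(user, 'ORDERS');
end;
/
```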

The only DML operation on this table happens at night, so we are requesting downtime to perform steps 3 to 6 (a downtime of 7 hours).

I just wanted to know, can we do anything to make this better and less time consuming?
Chris Saxon
August 11, 2022 - 1:34 pm UTC

"we cannot use CTAS switch tables since we have a table with one partition."

I don't understand why this stops you using the switch tables method.

Though as you've mentioned partitions - can you use exchange partition as a way to swap the rows over?
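An exchange swaps the partition's segment with the temp table's segment as a metadata operation, so no rows are copied at all. A hedged sketch with hypothetical names:

```sql
-- Swap the single partition's contents with the pre-built temp table;
-- this is a dictionary update, not a row-by-row copy
alter table orders
  exchange partition p1 with table orders_temp
  including indexes without validation;
```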

Have you tried running these operations in parallel?
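If you stay with the copy-based plan, the insert and the rebuilds are the obvious candidates for parallelism. A sketch, again with hypothetical names and a degree of 8 chosen only for illustration:

```sql
-- Allow this session to run DML in parallel
alter session enable parallel dml;

-- Parallel direct-path copy from the temp table
insert /*+ append parallel(8) */ into orders
  select /*+ parallel(8) */ * from orders_temp;
commit;

-- Parallel index rebuild
alter index orders_idx1 rebuild parallel 8 nologging;

-- Reset the degree so ordinary queries don't run parallel unexpectedly
alter index orders_idx1 noparallel;
alter index orders_idx1 logging;
```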

More to Explore


Need more information on Administration? Check out the Administrators guide for the Oracle Database