Question and Answer

Chris Saxon

Thanks for the question, Ajay.

Asked: August 08, 2022 - 7:12 am UTC

Last updated: September 26, 2023 - 11:04 am UTC

Version: 11.2.0.4

Viewed 10K+ times!

You Asked

Hi Tom,

What plan do you suggest to purge a table that is approximately 377 GB in size?

We are currently following this plan:
- Create a temporary table with the required data (approx. 86 crore, i.e. 860 million, records).
- Drop the constraints (15 constraints).
- Truncate the original table.
- Make the indexes unusable (4 indexes).
- Insert the data from the temporary table into the original table. (It took 2 hours.)
- Rebuild one index to test. (It took 1 hour 30 minutes.)
- Recreate one constraint. (It took 52 minutes.)


We feel we cannot follow this approach in the production environment. Can you suggest a good plan for this activity?

Please let me know if you need more details.

and Chris said...

Is this a one-off process? Or will you want to keep removing the oldest rows in the future?

If it's a one-off then I'd look at doing something like:

create table temp as 
  select <rows you want to keep>;

<create indexes, constraints, etc. on temp>

-- swap the tables; the original data stays in backup
alter table current rename to backup;

alter table temp rename to current;


This means you're only loading the data once instead of twice. It also gives you a fast rollback if there are any problems: just rename the tables back.
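For example, the rollback is just the reverse renames (using the placeholder names from the sketch above):

alter table current rename to temp;

alter table backup rename to current;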

If this will be an ongoing process I'd partition the table too. Then remove the oldest partitions as necessary.

Either way, as a first step I'd look at upgrading to 19c. Not only is 11.2 now end-of-life, but this also opens up more (online) options which may help, such as:

- Filtered table move (alter table ... move including rows where ... )
- Single statement table partitioning (alter table ... modify partition by ...); see the sketches below
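As an illustration only, here's roughly what those two commands look like (the table, column, and partition names are made up, and both features need at least 12.2):

-- Filtered move: keep only the rows matching the where clause
alter table big_table move online
  including rows where created_date >= date '2022-01-01';

-- Convert a non-partitioned table to a partitioned one in a single statement
alter table big_table modify
  partition by range (created_date) (
    partition p2021 values less than (date '2022-01-01'),
    partition p2022 values less than (date '2023-01-01')
  ) online;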

For more on removing lots of rows fast, see https://blogs.oracle.com/sql/post/how-to-delete-millions-of-rows-fast-with-sql


Comments

Purging records of a large table

Ajay, August 09, 2022 - 8:03 am UTC

Hi Chris,

That was very informative. Thank you!

I still have a few doubts though.
We have a space constraint at the tablespace level. We are restricted to using either CTAS switch rows or CTAS switch tables, and in both methods we have to create a temp table in a different tablespace, say T2 (the current table is in T1).
Now if I follow:
CTAS switch rows - there will be two inserts, which will take time.
CTAS switch tables - I need to move the temp table from T2 to T1 (to do so I need to drop or truncate the current table), which will take time.
I am in a tough situation right now; what do you suggest?
Chris Saxon
August 09, 2022 - 1:09 pm UTC

Why do you need to move the table back to the other tablespace? Can you defer this to some time in the future?

As stated originally, you need to prioritise upgrading to 19c. Once you're on this version you can move tables online.
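For example, once on a release that supports it (12.2 and later), a move between tablespaces can run online without blocking DML; a sketch with a made-up table name:

-- indexes are maintained and stay usable during an online move
alter table big_table move online tablespace t1;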

Purging records of a large table

Ajay, August 11, 2022 - 6:00 am UTC

Hey Chris, Thank you for the reply!

We are sticking with the CTAS switch rows method, because we cannot use CTAS switch tables since we have a table with one partition.

So this is what we are doing:
1. Take an expdp table backup
2. Create a temp table in T2 with the required data
3. Drop the FK constraints of the current table
4. Truncate the current table and make its indexes unusable (so that the insert will be faster)
5. Insert from temp to current
6. Recreate the dropped FK constraints on the current table
7. Rebuild indexes online and gather stats (see the SQL sketch below)
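In SQL, steps 3 to 7 might look roughly like this (the table, constraint, and index names are hypothetical; your own names and the number of constraints and indexes will differ):

alter table current_table drop constraint fk_current_parent;   -- step 3, one per FK

truncate table current_table;                                  -- step 4
alter index current_table_ix1 unusable;                        -- one per index

insert /*+ append */ into current_table                        -- step 5, direct-path load
  select * from temp_table;
commit;

alter table current_table add constraint fk_current_parent     -- step 6
  foreign key (parent_id) references parent_table (parent_id);

alter index current_table_ix1 rebuild online;                  -- step 7
exec dbms_stats.gather_table_stats(user, 'CURRENT_TABLE')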

The only DML operations on this table happen at night, so we are asking for downtime to perform steps 3 to 6 (a downtime of 7 hours).

I just wanted to know: can we do anything to make this better and less time-consuming?
Chris Saxon
August 11, 2022 - 1:34 pm UTC

"we cannot use CTAS switch tables since we have a table with one partition."

I don't understand why this stops you using the switch tables method.

Though as you've mentioned partitions - can you use exchange partition as a way to swap the rows over?

Have you tried running these operations in parallel?
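As a rough illustration of both suggestions (object names are made up): exchange partition can swap a pre-built table into a partition as a near-instant dictionary operation, and the insert step itself can use a parallel direct-path load.

-- swap the staged table into the (single) partition in one step
alter table current_table
  exchange partition p1 with table temp_table
  including indexes without validation;

-- or run the load itself in parallel with a direct-path insert
alter session enable parallel dml;

insert /*+ append parallel(8) */ into current_table
  select /*+ parallel(8) */ * from temp_table;

commit;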

narayana tarakada, September 25, 2023 - 1:10 pm UTC

If I have a table with 400 million records and I need to delete 30 million records, which of these is the best option?


1) filtered move table
2) CTAS table switch
3) drop partition/truncate partition

Consider 2 cases: a) partitioned table, b) non-partitioned table
Chris Saxon
September 26, 2023 - 11:04 am UTC

"Best" is subjective; read the linked blog post above for the full ins and outs of these methods.

If you're asking which is fastest, drop/truncate partition is quickest, assuming the table is already partitioned with all the rows you want to remove in the same partition(s). If it's unpartitioned, there's (potentially lots of) work checking how partitioning affects the rest of the app.
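For reference, here's roughly what the partition options look like (the table and partition names are made up):

-- remove the whole partition and its rows
alter table big_table drop partition p_old update indexes;

-- or keep the partition definition but remove its rows
alter table big_table truncate partition p_old update indexes;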

