Question and Answer

Connor McDonald

Thanks for the question.

Asked: September 19, 2018 - 2:31 pm UTC

Last updated: October 02, 2018 - 7:18 am UTC

Version: 11g

Viewed 1000+ times

You Asked

Hi Chris, Hi Connor,

Apologies for putting the query in a theoretical manner, but I would request you to please suggest something which I can try in the scenario below.

There is a requirement where I need to clean up a table (e.g. orders) having approx. 25 billion rows.
From that we need to clean up roughly 20 billion rows.

What we were thinking here is, instead of deleting 20 billion rows:


Approach 1:
1. Create a replica (e.g. orders_bkp) of the orders table using CTAS
2. Truncate the original table orders
3. Insert the required 5 billion rows back into the original table orders (a rough sketch is below)
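
For reference, a rough sketch of approach 1 (the retention predicate and date are only placeholders here; the real criteria are given further down):

-- 1. keep a full copy of the table
create table orders_bkp as select * from orders;

-- 2. empty the original table
truncate table orders;

-- 3. re-insert only the rows to be kept (predicate/date are placeholders)
insert /*+ append */ into orders
select * from orders_bkp
where  type = 'VALID'
and    start_ts < date '2018-01-01';
commit;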

Approach 2:
1. Prepare a script to delete 15 billion records
2. Create a new tablespace -- this will be taken care of by our DBA team
3. Move the orders table & indexes to the new tablespace to reclaim space (see the sketch after this list)
4. Move the table back to the old tablespace.
5. Drop the tablespace created in point #2.
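
For steps 3 and 4, my understanding is the commands would look something like this (tablespace and index names are placeholders), but I would like you to confirm:

-- move to the new tablespace (indexes become unusable after a move, so rebuild them)
alter table orders move tablespace new_ts;
alter index orders_pk rebuild tablespace new_ts;

-- and back again
alter table orders move tablespace old_ts;
alter index orders_pk rebuild tablespace old_ts;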

Here, there is a high chance of opting for approach 2. Can you please help me with the Oracle commands for the points below?

a. The delete will be performed based on the order_date column, so is it possible to share any reference or script to perform such a huge delete task?
b. What is the command to move a table to a new tablespace & back?

Also, just for my understanding, I tried a CTAS of the existing table, but the CTAS itself was taking too much time to create a replica of the original table.
Is there anything you suggest here to create a replica of the table?

Yes. We have partitioning license.
Below is the table structure:
Table: orders
Columns: orderid number,
         type varchar2,
         start_ts timestamp

Table: order_details
Columns: orderid number,
         city varchar2,
         amount number

Data to be retained is based on orders.type = 'VALID' and start_ts < input date.

Based on that condition we need to clean up orders, and then clean up order_details based on orderid.

and Connor said...

OK, here are a couple of scenarios.

Scenario #1 - you need to *keep* the rows that you are archiving off

1) create an empty partitioned table

- partition p1 values less than input date
- partition p2 values above input date

each with 2 subpartitions: type = 'VALID' and type = [everything else]
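
For example, something along these lines (column definitions and the boundary date are assumptions based on the structure you posted):

create table orders_part (
  orderid  number,
  type     varchar2(20),
  start_ts timestamp
)
partition by range (start_ts)
subpartition by list (type)
subpartition template (
  subpartition sp_valid values ('VALID'),
  subpartition sp_other values (default)
)
(
  partition p1 values less than (timestamp '2018-01-01 00:00:00'),
  partition p2 values less than (maxvalue)
);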

2) Use DBMS_REDEFINITION to migrate the table into the partitioned one
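
A minimal sketch of that step, assuming the schema is APP and ORDERS has a usable primary key (otherwise use CONS_USE_ROWID):

declare
  l_errors pls_integer;
begin
  dbms_redefinition.can_redef_table('APP', 'ORDERS', dbms_redefinition.cons_use_pk);

  dbms_redefinition.start_redef_table(
    uname        => 'APP',
    orig_table   => 'ORDERS',
    int_table    => 'ORDERS_PART',
    options_flag => dbms_redefinition.cons_use_pk);

  -- bring across indexes, constraints, triggers and grants
  dbms_redefinition.copy_table_dependents(
    uname      => 'APP',
    orig_table => 'ORDERS',
    int_table  => 'ORDERS_PART',
    num_errors => l_errors);

  dbms_redefinition.sync_interim_table('APP', 'ORDERS', 'ORDERS_PART');
  dbms_redefinition.finish_redef_table('APP', 'ORDERS', 'ORDERS_PART');
end;
/

After FINISH_REDEF_TABLE the name ORDERS points at the partitioned structure, and ORDERS_PART holds the old non-partitioned data (which can then be dropped).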

3) 'alter table exchange partition' to remove the partitions you no longer want attached to the table.
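
For example, using the subpartition names generated from the template above (repeat for each segment you want to archive off; names are illustrative):

-- an empty table with the same structure to receive one archive segment
create table orders_archive as
select * from orders where 1 = 0;

-- dictionary-only swap: no rows are physically moved
alter table orders
  exchange subpartition p1_sp_other with table orders_archive;

-- the now-empty subpartition can then be dropped
alter table orders drop subpartition p1_sp_other;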

After this you will have standalone tables containing the archive data, and your ORDERS table will only have the rows you want to keep.

This process will take a long time to run, but the only real downtime is during step 2.


Scenario #2 - you don't need to keep the ~20 billion rows you are removing

1) create table ORDERS_SMALL parallel as select /*+ parallel */ * from orders where type = 'VALID' and start_ts < [input date]

Parallel load and query is about as fast as you can move data in Oracle. And make this a partitioned table, so that future archival will be easier.
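
A sketch of that statement, with the partition bound and input date as placeholders:

create table orders_small
partition by range (start_ts) (
  partition p_hist values less than (timestamp '2018-01-01 00:00:00'),
  partition p_curr values less than (maxvalue)
)
parallel
as
select /*+ parallel */ *
from   orders
where  type = 'VALID'
and    start_ts < timestamp '2018-01-01 00:00:00';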

2) drop table ORDERS

3) rename ORDERS_SMALL to ORDERS

(You'll need to re-apply indexes, grants, etc).
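
In commands, roughly (the index, constraint and grant details are illustrative; re-create whatever the original table had):

drop table orders purge;
rename orders_small to orders;

-- a CTAS copy has none of the original indexes, constraints or grants
alter table orders add constraint orders_pk primary key (orderid);
create index orders_ts_ix on orders (start_ts) parallel;
grant select on orders to app_user;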
