Question and Answer

Connor McDonald

Thanks for the question, Shobhraj.

Asked: September 30, 2016 - 5:36 pm UTC

Last updated: October 01, 2016 - 3:48 am UTC

Version: 11g

Viewed 1000+ times

You Asked

Hi,
We have an application that creates a huge amount of data (20M rows for 20K keys) after key processing. The business does not need the previous data once a key has been reprocessed. Because this is a continuous process, it results in a huge number of parallel deletes and inserts on the table.
We have tried different approaches, such as taking a snapshot of the data in a temporary table and then truncating the original table, and running parallel threads with batched operations.
The deletes/inserts run fast at first but slow down after some time; if we gather stats, they pick up speed again.
Is it advisable to gather stats repeatedly, or are there other steps we should take?

The deletes and inserts themselves are simple, but the number of rows inserted/deleted at one time is huge.

Thanks,
Raj

and Connor said...

How about a partitioning strategy?

Existing keys are in partition p1
New keys go into partition p2

Have a view which is:

select * from my_table partition (p1)

When you have loaded the new data into p2, then change the view to:

select * from my_table partition (p2)

Then you can truncate p1. No expensive deletes needed.
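
To make that concrete, here is a minimal sketch of the idea, assuming a numeric load-slot column drives list partitioning. The table, column, and staging names below are assumptions for illustration, not from the original thread:

-- List-partition the base table on a "load slot" column (names are assumptions).
create table my_table_base
( key_id    number,
  payload   varchar2(100),
  load_slot number
)
partition by list (load_slot)
( partition p1 values (1),
  partition p2 values (2)
);

-- Readers only ever see the "current" partition via the view.
create or replace view my_table as
  select * from my_table_base partition (p1);

-- Load the reprocessed keys into the other slot (direct-path insert into p2).
insert /*+ append */ into my_table_base
  select key_id, payload, 2 from staging_data;
commit;

-- Switch readers over to the freshly loaded partition.
create or replace view my_table as
  select * from my_table_base partition (p2);

-- Discard the old data without any row-by-row delete.
alter table my_table_base truncate partition p1;

Swapping the view definition is a quick DDL operation, and truncating the stale partition avoids the delete and undo overhead that was slowing things down; the next load simply reverses the roles of p1 and p2.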



More to Explore

VLDB

If you are new to partitioning, check out Connor McDonald's introduction series here.

Documentation set on VLDB and Partitioning.