How to delete a billion records without affecting my database; also the ORDER BY clause is taking more time for a "select * from table" statement


Question and Answer

Chris Saxon

Thanks for the question, Shiva.

Asked: August 05, 2021 - 2:48 pm UTC

Last updated: August 05, 2021 - 3:29 pm UTC

Version: 12c

Viewed 1000+ times

You Asked

Dear Tom,

I have a table with more than a billion records that I need to delete as quickly as possible, without affecting my OLTP database and its users, on a 12c database. What are the different ways to do this?

I also have another table with more than a billion records, from which I am trying to retrieve all the columns (20 columns) sorted by a timestamp column using an ORDER BY clause. It takes a huge amount of time to get the output, and sometimes I get no data at all, only a message. Please suggest methods to retrieve this much data faster with ORDER BY.

My query:

select * from table_having_more_than_billion_records order by timestamp desc;

(timestamp is a column in this table)

Thanks in advance,
Hagire shiva.

and Chris said...

I discuss various ways to remove many rows faster at

https://blogs.oracle.com/sql/how-to-delete-millions-of-rows-fast-with-sql

Assuming you're on 12.2 and there are no (enabled) foreign keys referencing your table, a filtered table move is the easiest:

alter table ...
  move including rows
  where rows_to_keep = 'Y'
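
For illustration, here's a fuller sketch of that approach using hypothetical names (big_table and its created_ts column are placeholders, not from your question). The first query checks the precondition above by listing enabled foreign keys that reference the table; the move then rewrites the table, keeping only the rows that match the WHERE clause:

-- List enabled foreign keys referencing BIG_TABLE's primary/unique keys
-- (BIG_TABLE is a placeholder; substitute your own table name).
select c.owner, c.table_name, c.constraint_name
from   all_constraints c
join   all_constraints pk
on     pk.owner           = c.r_owner
and    pk.constraint_name = c.r_constraint_name
where  c.constraint_type  = 'R'
and    c.status           = 'ENABLED'
and    pk.table_name      = 'BIG_TABLE';

-- If none are returned, a filtered move keeps only the matching rows
-- and discards the rest in one pass.
alter table big_table
  move including rows
  where created_ts >= date'2021-01-01';

If your environment supports it, adding the ONLINE keyword to the move keeps the table available for DML while it runs; a plain (offline) move leaves the table's indexes unusable, so plan to rebuild them afterwards.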


Are you really fetching a billion+ rows?! If so that's going to take a looooooooong time whatever you do!

Sorting a data set this large will also be slow. Adding an index on the order by columns can speed this up.
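
As a sketch (again with placeholder names big_table and log_ts, assumed here to be a NOT NULL timestamp column), an index on the sort column lets Oracle read rows already in order instead of sorting the whole table. This helps most when you only need the first rows back quickly:

-- Index on the sort column; Oracle can scan it in either direction.
-- If log_ts is nullable, the index can't cover every row and may not
-- be chosen to satisfy the ORDER BY.
create index big_table_log_ts_i on big_table ( log_ts );

-- Reading the newest rows via the index avoids sorting all billion rows:
select *
from   big_table
order  by log_ts desc
fetch  first 100 rows only;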

But really the question here is - why are you fetching all that data? What are you doing with it?

