Fetching all records from an extremely large Oracle table


Question and Answer

Connor McDonald

Thanks for the question, Ashwini.

Asked: February 21, 2019 - 6:33 am UTC

Last updated: February 26, 2019 - 2:28 am UTC

Version: Oracle 12c

Viewed 1000+ times

You Asked

Hi Tom,
I am working on an ETL project that involves fetching all records from an extremely large Oracle table that contains many millions of records and has a very large number of partitions.

I need to extract all data from the table.

What would be an efficient way to extract all the data from this table? I don't want to miss any records.
Also, there is no field that can be used as a range predicate to filter the records so that only a limited number of rows is read at a time.

Is there a way I can read only one block at a time and ensure that everything is read eventually?

Thanks
Ashwini Sharma.

and Connor said...

I don't want to miss any records.


Not sure what you mean by this? You will always get *all* of the rows, where "all" is the entire table as it existed at the start of your query.

All you need to do is open a cursor on "select * from my_table" and start fetching.
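
A minimal PL/SQL sketch of that idea (my_table and the batch size of 1000 are placeholders; the null is where you would write each batch out to your ETL target):

declare
  cursor c is
    select * from my_table;

  type t_rows is table of my_table%rowtype;
  l_rows t_rows;
begin
  open c;
  loop
    -- fetch in batches to keep memory usage bounded
    fetch c bulk collect into l_rows limit 1000;
    exit when l_rows.count = 0;

    for i in 1 .. l_rows.count loop
      null;  -- write l_rows(i) to the target here
    end loop;
  end loop;
  close c;
end;
/

If the extract is driven from an external client instead of PL/SQL, the same idea applies: keep a single cursor open and use a large array fetch size.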



Comments

Ashwini, February 25, 2019 - 4:17 am UTC

Apologies for not describing the situation properly.

Since this is a huge table, simply executing select * from table won't work for me. The reason: if an error occurs while extracting data from the table and writing it elsewhere, I have to repeat the entire process.
Therefore I have to read the data in chunks, processing a reasonable amount of data every day until I have moved all of it.

Secondly, the table is constantly updated with new information. On average, approximately 3 GB of new data is added to the table every day.

I was wondering if it is possible to read it a set of physical or logical blocks at a time.
Let's say I read 200 blocks of data on the first day, another 200 blocks the next day, and so on until all of the data is read.

Connor McDonald
February 26, 2019 - 2:28 am UTC

And what is to say that the 200 blocks you read yesterday won't be updated today?


happy days (are here again)

Racer I., February 25, 2019 - 6:48 am UTC

Hi,

(Guessing, as there is not much information so far.)
If the many partitions are time-based (say daily) you can export by date (or date range). You just have to store somewhere the date up to which you have extracted, so you can restart from there.
If the 3 GB of daily changes includes UPDATEs to old time ranges, you will likely be unable to get a consistent view by any method (ORA-01555...).
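
A minimal sketch of that restartable, date-driven extract (the column created_date, the target staging_table, and the one-row control table etl_progress are all assumed names for illustration):

-- hypothetical one-row control table remembering the last date already extracted
create table etl_progress (last_extracted date);
insert into etl_progress values (date '2000-01-01');
commit;

-- one extraction run: pull the next date slice, then move the high-water mark
declare
  l_from date;
  l_to   date;
begin
  select last_extracted into l_from from etl_progress;
  l_to := l_from + 30;   -- e.g. roughly one month per run; tune the slice size

  insert into staging_table
  select *
  from   my_table
  where  created_date >  l_from
  and    created_date <= l_to;

  update etl_progress set last_extracted = l_to;
  commit;   -- if the run fails before this commit, nothing is recorded, so just rerun it
end;
/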

regards,

Let the hardware and database engine take care of the problems.

lh, February 27, 2019 - 8:43 am UTC

Hi


3 GB of updates isn't that much anymore.
Long undo_retention times and a large undo area aren't so expensive.

These should be set large enough that one does not have to do an excessive amount of coding.
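
For illustration only (the retention value and the undo tablespace name undotbs1 are assumptions; size them for your own workload):

-- ask Oracle to try to keep roughly 24 hours of undo (value in seconds)
alter system set undo_retention = 86400;

-- optionally guarantee that retention, at the cost of DML failing if undo space runs out
alter tablespace undotbs1 retention guarantee;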


lh
