Ashwini, February 25, 2019 - 4:17 am UTC
Apologies for not describing the situation properly.
Since this is a very large table, simply executing select * from the table won't work for me. The reason being: if an error occurs while extracting the data and writing it elsewhere, I would have to repeat the entire process.
Therefore I have to read the data in chunks, processing a reasonable amount every day until I have moved all of the data.
Secondly, the table is constantly updated with new information; on average, approximately 3 GB of new data is added to the table every day.
I was wondering if it is possible to read it a number of physical or logical blocks at a time.
Let's say I read 200 blocks of data on the 1st, another 200 blocks the next day, and so on until all of the data is read.
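One way to sketch this kind of block-range chunking is Oracle's DBMS_PARALLEL_EXECUTE package, which splits a table into ROWID ranges that can each be extracted (and retried) independently. The table name, task name, and chunk size below are placeholders, not anything from your system:

```sql
-- Sketch only: BIG_TABLE, the task name and the chunk size are
-- placeholders.  create_chunks_by_rowid splits the table into
-- ROWID ranges of roughly the requested number of blocks.
BEGIN
  DBMS_PARALLEL_EXECUTE.create_task('extract_big_table');
  DBMS_PARALLEL_EXECUTE.create_chunks_by_rowid(
    task_name   => 'extract_big_table',
    table_owner => USER,
    table_name  => 'BIG_TABLE',
    by_row      => FALSE,    -- chunk by blocks, not by rows
    chunk_size  => 200);     -- ~200 blocks per chunk
END;
/

-- The generated ranges are visible here; record which chunk_id
-- values are done, so a failure only repeats one chunk rather
-- than the whole extract:
SELECT chunk_id, start_rowid, end_rowid
FROM   user_parallel_execute_chunks
WHERE  task_name = 'extract_big_table';

-- Extract one chunk at a time by ROWID range:
SELECT * FROM big_table
WHERE  ROWID BETWEEN :start_rowid AND :end_rowid;
```

Note this only restarts cleanly for rows that don't change after their chunk is extracted, which is exactly the caveat raised below.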
February 26, 2019 - 2:28 am UTC
And what is to say that the 200 blocks you read yesterday won't be updated today?
happy days (are here again)
Racer I., February 25, 2019 - 6:48 am UTC
Hi,
(Guessing, as there is not much info so far)
If the many partitions are time-based (say daily) you can export by date (or date range). You just have to store somewhere the date up to which you got, so you can restart from there.
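A minimal sketch of that bookmark idea, assuming a DATE column load_date as the partition key and a one-row bookmark table (both names are made up here):

```sql
-- One-row table remembering how far the extract has got.
CREATE TABLE extract_bookmark (last_extracted DATE);

-- Each daily run: export the next day's slice...
SELECT *
FROM   big_table
WHERE  load_date >  (SELECT last_extracted FROM extract_bookmark)
AND    load_date <= (SELECT last_extracted FROM extract_bookmark) + 1;

-- ...and advance the bookmark only after the export succeeded,
-- so a failed run simply repeats the same day.
UPDATE extract_bookmark
SET    last_extracted = last_extracted + 1;
COMMIT;
```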
If the 3 GB of daily changes includes UPDATEs to old time ranges, you will likely be unable to get a consistent view by any method (ORA-01555...).
regards,
Let the hardware and database engine take care of the problems.
lh, February 27, 2019 - 8:43 am UTC
Hi
3 GB of updates isn't that much anymore.
A long undo_retention time and a large undo area aren't so expensive.
These should be set large enough that one does not have to do an excessive amount of coding.
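For illustration, undo retention is a system parameter set in seconds; the value below is an example, not a recommendation, and the undo tablespace must be sized (or allowed to autoextend) to actually honour it:

```sql
-- Keep undo for 24 hours so a long-running consistent read
-- (e.g. a full-table extract) is less likely to hit ORA-01555.
ALTER SYSTEM SET undo_retention = 86400;  -- seconds

-- Check how much undo the current workload generates, to size
-- the undo tablespace accordingly:
SELECT MAX(undoblks) AS max_undo_blocks_per_interval
FROM   v$undostat;
```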
lh