Reduce disk usage
January 07, 2020 - 6:04 am UTC
Reviewer: A reader
You mentioned the table is partitioned. If I am not mistaken, are all the partitions used all the time?
If not, then maybe you can create a new disk group (DG) with external redundancy and move the LOB segments of the older partitions to it.
Once you've moved the older LOB segments, that will give you some breathing space on the current DG. Later, when usage of the table on the current DG is low, you can move the table to the new DG as well.
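One way to sketch that move, with hypothetical names throughout (a new disk group +DATA2, a tablespace TS_LOB_ARCH created on it, a table T with CLOB column DOC and an older partition P_2019 -- substitute your own):

```sql
-- Assumed names: +DATA2 disk group, TS_LOB_ARCH tablespace,
-- table T, CLOB column DOC, older partition P_2019.
CREATE TABLESPACE ts_lob_arch
  DATAFILE '+DATA2' SIZE 10G AUTOEXTEND ON;

-- Moving a partition rewrites its segments; the LOB storage clause
-- lands the LOB segment on the new disk group's tablespace.
ALTER TABLE t MOVE PARTITION p_2019
  LOB (doc) STORE AS (TABLESPACE ts_lob_arch);

-- A partition move invalidates that partition's local indexes,
-- so rebuild them afterwards (index name is an assumption).
ALTER INDEX t_ix REBUILD PARTITION p_2019;
```

On 12.2 and later you may be able to add the ONLINE keyword to the MOVE PARTITION to keep the partition available during the rewrite; test on your version first.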
Of course, nothing can be well established without extensive testing.
But we would still need to move the entire data to another table.
January 07, 2020 - 6:16 am UTC
Reviewer: A reader
Hi, we have a table with more than 50,000,000 records. Is it possible to move the entire table's data to a new table within 30 minutes? We will have only 30 minutes to complete the entire activity. Another factor to keep in mind is that of the 120 GB consumed, only 20 GB is taken by the CLOB segment; the remaining 100 GB is taken by the corresponding table segment. So is it worth moving to SecureFiles, knowing that 100 GB of the 120 GB will remain uncompressed?
January 08, 2020 - 12:52 am UTC
Another factor that needs to be kept in mind is that out of 120 GB consumed, only 20 GB is taken by the CLOB segment
My crystal ball is getting repaired at the moment, so yeah, we didn't know that :-)
There is a difference between the CLOB *segment* and the CLOB *data* (given that you said much of the CLOB data is inline). Anyway... do some *testing*. Grab a subset of the data and see what benefits you get in a non-production environment. Why take guesses when you can get an accurate measurement of how much compression benefit and space reclamation you will get?
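A rough way to run that test, with assumed names (table T, CLOB column DOC, a 10% sample); note that SecureFiles compression requires the Advanced Compression option:

```sql
-- Build a test copy of ~10% of T with the LOB stored as a
-- compressed SecureFile (T, DOC and the sample size are assumptions).
CREATE TABLE t_test
  LOB (doc) STORE AS SECUREFILE (COMPRESS MEDIUM)
AS
SELECT * FROM t SAMPLE (10);

-- Compare segment sizes: the table segment plus its LOB segment,
-- then scale against 10% of the original 120 GB.
SELECT s.segment_name,
       s.segment_type,
       ROUND(s.bytes / 1024 / 1024) AS mb
FROM   user_segments s
WHERE  s.segment_name = 'T_TEST'
   OR  s.segment_name IN (SELECT l.segment_name
                          FROM   user_lobs l
                          WHERE  l.table_name = 'T_TEST');
```

If the combined test segments come in well under one-tenth of the original footprint, the compression is paying off; if not, you have your answer without touching production.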
With dbms_redef, you are not restricted to moving the entire table in 30 minutes. The copy can take hours, but you don't care, because the table is *online* during the process. It is only the last phase (finish_redef) where the table is locked, and that takes seconds, not hours.
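A minimal DBMS_REDEFINITION skeleton for that flow, assuming T is the source table and T_NEW is an interim table you pre-create with the desired SecureFile storage (both names, and the use of primary-key-based redefinition, are assumptions):

```sql
-- T_NEW must already exist with the target storage (SECUREFILE LOB, etc.)
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- Long-running copy; T stays fully available for DML throughout.
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname      => USER,
    orig_table => 'T',
    int_table  => 'T_NEW');

  -- Clone indexes, constraints, triggers, grants onto the interim table.
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname      => USER,
    orig_table => 'T',
    int_table  => 'T_NEW',
    num_errors => l_errors);

  -- Optional catch-up of changes made during the copy.
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(USER, 'T', 'T_NEW');

  -- The only phase that locks the table, and only briefly.
  DBMS_REDEFINITION.FINISH_REDEF_TABLE(USER, 'T', 'T_NEW');
END;
/
```

Check `l_errors` after COPY_TABLE_DEPENDENTS, and if anything fails mid-flight, DBMS_REDEFINITION.ABORT_REDEF_TABLE cleans up so you can retry.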