Consumption of huge disk space in the application


Question and Answer

Connor McDonald

Thanks for the question, Saivishnu.

Asked: January 06, 2020 - 3:50 pm UTC

Answered by: Connor McDonald - Last updated: January 08, 2020 - 12:52 am UTC

Category: Database Administration - Version: 11.2.0

Viewed 100+ times

You Asked

We have an application that runs 24x7. Currently, that application is consuming a lot of disk space because of CLOB data. We can't do much about the CLOB data itself, as it is necessary. First we thought about compressing the CLOB data using the SecureFiles features. For that, we would have to migrate from BasicFile to SecureFile, and that does not seem possible since it requires table redefinition, which in the end results in an exclusive table lock. We cannot let that happen since the application is 24x7. Another option we are considering is to change the redundancy of the disk group from normal to external, thereby saving half the space. The issue here is that we would have to take the datafile offline and then move it to another diskgroup with external redundancy.

1) How long will it take to take a 100 GB datafile offline and then move it to another diskgroup?
(We will have only a 30-minute window.)
2) Are there any other solutions that we didn't consider?

One main point I want to make is that all of our tables are list partitioned and, from our research, most of the CLOB data is stored in the table segment itself (most rows are probably < 4 KB).
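Whether the CLOBs really are stored inline, and how the space splits between the table and LOB segments, can be verified from the dictionary. A minimal sketch, assuming placeholder owner/table/column names (MY_SCHEMA, MY_TABLE):

```sql
-- IN_ROW = 'YES' means CLOB values under ~4000 bytes are stored inline
-- in the table segment; SECUREFILE shows BasicFile vs SecureFile.
SELECT column_name, segment_name, in_row, securefile
FROM   dba_lobs
WHERE  owner = 'MY_SCHEMA'
AND    table_name = 'MY_TABLE';

-- Compare table segment vs LOB segment sizes to see where the space goes.
SELECT segment_name, segment_type,
       ROUND(bytes/1024/1024/1024, 1) AS gb
FROM   dba_segments
WHERE  owner = 'MY_SCHEMA'
AND    (   segment_name = 'MY_TABLE'
        OR segment_name IN (SELECT segment_name
                            FROM   dba_lobs
                            WHERE  owner = 'MY_SCHEMA'
                            AND    table_name = 'MY_TABLE'));
```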

and we said...

"requires table redefinition, which in the end results in an exclusive table lock"

Have you *timed* how long that lock is held? It is just a switch of dictionary definitions, so we are typically talking seconds here. The exact duration depends not on the size of the table but on the amount of dictionary work to be done, so I'd recommend you test it to see how it comes out in your specific environment.

But that will be a lot easier than any other solution.

Another alternative is to defer this task until you've moved to 12.2 or beyond, where you will be able to move datafiles and tables online with no interruption to service. After all, 11.2 is coming to the end of its support life, so it's definitely time to be thinking about moving on...
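For reference, a sketch of what those online moves look like in later releases (diskgroup, path, and object names here are placeholders; online datafile move arrived in 12.1, online table move in 12.2):

```sql
-- 12.1+: move a datafile between ASM diskgroups with no outage
ALTER DATABASE MOVE DATAFILE '+DATA_NORMAL/orcl/datafile/users01.dbf'
  TO '+DATA_EXT/orcl/datafile/users01.dbf';

-- 12.2+: move a table online, converting its CLOB to a
-- compressed SecureFile in the same operation
ALTER TABLE my_schema.my_table MOVE ONLINE
  LOB (clob_col) STORE AS SECUREFILE (COMPRESS MEDIUM);
```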

and you rated our response




Reduce disk usage

January 07, 2020 - 6:04 am UTC

Reviewer: A reader

You mentioned the table is partitioned. Are all the partitions actually in use all the time?

If not, then maybe you can create a new DG with external redundancy and move the LOB segments of the older partitions to it.

Once you've moved the older LOB segments, that will give you some breathing space on the current DG. Later, when usage of the table on the current DG is low, you can move the table itself to the new DG as well.

Of course, nothing can be well established w/o extensive testing.
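That suggestion could be sketched roughly as follows. Diskgroup, tablespace, partition, and object names are placeholders, and note that in 11.2 a partition-level LOB move locks that partition for the duration, so it suits partitions that are no longer written to:

```sql
-- Tablespace on the new external-redundancy diskgroup (placeholder names)
CREATE TABLESPACE lob_ext DATAFILE '+DATA_EXT' SIZE 10G AUTOEXTEND ON;

-- Move the LOB segment of an older, rarely-used partition to it
ALTER TABLE my_schema.my_table
  MOVE PARTITION p_2018
  LOB (clob_col) STORE AS (TABLESPACE lob_ext);

-- Rebuild any local index partitions invalidated by the move
ALTER INDEX my_schema.my_table_ix REBUILD PARTITION p_2018;
```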


But still we need to move the entire data to another table.

January 07, 2020 - 6:16 am UTC

Reviewer: A reader

Hi, we have a table with more than 50,000,000 records. Is it possible to move the entire table's data to a new table in 30 minutes? We will have only 30 minutes to complete the entire activity. Another factor that needs to be kept in mind is that out of the 120 GB consumed, only 20 GB is taken by the CLOB segment; the remaining 100 GB is taken by the corresponding table segment. So, is it worth moving to SecureFile knowing that 100 GB of the 120 GB will remain uncompressed?
Connor McDonald


January 08, 2020 - 12:52 am UTC

"Another factor that needs to be kept in mind is that out of 120 GB consumed, only 20 GB is taken by the clob segment"

My crystal ball is getting repaired at the moment, so yeah, we didn't know that :-)

There is a difference between the CLOB *segment* and the CLOB *data* (given that you said much of the CLOB data is inline). If the CLOBs are mostly inline, then some of that 100 GB table segment is actually CLOB data, so the potential benefit may be larger than the 20 GB LOB segment suggests. I'd recommend some *testing*: grab a subset of the data and see what benefits you get in a non-production environment. Why take guesses when you can get an accurate measurement of how much compression benefit and space reclamation you will get?

With dbms_redef, you are not moving the entire table within the 30 minutes. The copy can take hours, but you don't care, because the table is *online* during the process. It is only the last phase (finish_redef) where the table is locked, and that is seconds, not hours.
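A rough sketch of that sequence, with placeholder schema/table/column names; the interim table is defined with a compressed SecureFile CLOB so the copy also performs the BasicFile-to-SecureFile conversion:

```sql
-- Interim table with the desired SecureFile storage
CREATE TABLE my_schema.my_table_interim (
  id       NUMBER PRIMARY KEY,
  clob_col CLOB
) LOB (clob_col) STORE AS SECUREFILE (COMPRESS MEDIUM);

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(   -- long-running copy; the
    uname      => 'MY_SCHEMA',           -- original table stays online
    orig_table => 'MY_TABLE',
    int_table  => 'MY_TABLE_INTERIM');

  DBMS_REDEFINITION.SYNC_INTERIM_TABLE(  -- optional: apply changes made
    uname      => 'MY_SCHEMA',           -- during the copy, shrinking the
    orig_table => 'MY_TABLE',            -- work left for the final switch
    int_table  => 'MY_TABLE_INTERIM');

  DBMS_REDEFINITION.FINISH_REDEF_TABLE(  -- brief exclusive lock: just a
    uname      => 'MY_SCHEMA',           -- dictionary switch, typically
    orig_table => 'MY_TABLE',            -- seconds
    int_table  => 'MY_TABLE_INTERIM');
END;
/
```

In practice you would also call DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS before finishing, to carry indexes, constraints, triggers, and grants over to the interim table.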

More to Explore


Need more information on Administration? Check out the Administrators guide for the Oracle Database