is gzipping (or compressing) archived logs an option?
A reader, September 15, 2003 - 12:32 pm UTC
Maybe you can gzip archived log files on a schedule (like once or twice an hour). You will be surprised how much space can be saved. It does put some stress on the CPU, so if your CPU is powerful, maybe this is an option.
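A minimal sketch of such a job -- the archive destination /u01/arch, the '*.arc' file pattern, and the 30-minute age threshold are all assumptions to adjust for your system:

#!/bin/sh
# Hypothetical archive destination -- match it to your log_archive_dest setting.
ARCH_DIR=/u01/arch

# Compress archived logs untouched for 30+ minutes, so we never gzip a file
# the archiver is still writing (-mmin requires GNU find).
find "$ARCH_DIR" -name '*.arc' -mmin +30 -exec gzip {} \;

Run it from cron once or twice an hour, as suggested above.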
Can RMAN work with compressed archived log files?
Shailandra, September 15, 2003 - 2:11 pm UTC
Hi Tom,
In our scripts for hot/cold/export backups, we use a pipe to compress the files while taking the backup. But when I use RMAN to back up archived log files, if the files are compressed, RMAN cannot find them and errors out. Is there a way that, if we compress the archived log files, RMAN can still recognize them, back them up, and then delete them as usual?
Shailandra
September 15, 2003 - 4:11 pm UTC
well, only if you name them back to the template name I think -- it is expecting them to follow that naming convention.
that and rman actually tries to "read" those things, so that would probably not work either.
I guess I would be relying on the hardware compression on my tape drives to take care of that and just getting the archives off of the system ASAP.
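for what it is worth, a minimal sketch of "getting them off the system ASAP" with rman itself -- back the archives up to tape (letting the drive's hardware compression do the compressing) and delete them in the same step. it assumes an sbt (tape) channel is already configured:

rman target / <<'EOF'
run {
  allocate channel t1 device type sbt;  # 9i syntax; 8i would be: allocate channel t1 type 'sbt_tape';
  backup archivelog all delete input;   # back up every archived log, then remove it from disk
  release channel t1;
}
EOF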
A reader, September 15, 2003 - 10:33 pm UTC
Tom,
Consider this scenario:
RMAN is used in nocatalog mode.
The entire database is backed up, along with controlfiles and archivelogs.
The filesystem is corrupted or, for whatever reason, the entire filesystem has gone bad.
Can I recover the database using RMAN?
Thanks..
September 16, 2003 - 7:53 am UTC
Nologging or Noarchivelog, Archivelog Maintenance
Nikhil, September 16, 2003 - 6:17 am UTC
Your suggestion #1 is OK, but it is not always practical to ask for the maximum disk space, since it would sit idle 70% of the time. Peak time is the last week of the month.
Re: "DATA LOSS" -- the application has flat files as its source: loaded --> processed --> audited --> unwanted records deleted. So the flow is insert/process/delete, and only 10-15% of the data remains (it is a fraud-management application).
If we lose data, say up to one week's worth, we can reprocess it from the flat files.
#3 RMAN -- we can allocate the maximum number of channels, with dedicated network bandwidth, but most of the time it fails to keep up with the volume during peak periods (a channel-allocation sketch follows this note).
Looking for ways to generate less redo:
-- we have online partition rebuild/purge, but those already run in nologging mode.
-- that leaves some code, but it is in wrapped PL/SQL packages we can't touch, and even the vendor has confirmed the processing is optimal.
Thanks...
Nikhil
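For reference, "maximum channels" in RMAN looks something like the following; the channel count of four is purely illustrative and should match the number of tape drives/streams actually available:

rman target / <<'EOF'
run {
  # one channel per available tape drive -- four here is illustrative
  allocate channel t1 device type sbt;
  allocate channel t2 device type sbt;
  allocate channel t3 device type sbt;
  allocate channel t4 device type sbt;
  backup database;                     # datafiles are spread across the channels
  backup archivelog all delete input;  # then sweep the archived logs off disk
}
EOF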
September 16, 2003 - 8:38 am UTC
remember disk is cheap compared to
a) lost data
b) downtime
c) cpu
d) our salaries in the long term
reprocessing = downtime -- has anyone figured out the cost of that? (it is generally HUGE)
Adrian, September 16, 2003 - 8:03 am UTC
I think the following two sections give a hint at the answer:
Tom:
"You could always look for ways to reduce redo!! look for work you are doing that isn't really necessary (you might be surprised at what you discover!). look to see if there are any batches that process row by row that could do it set based (less redo and undo)."
Nikhil:
"Re : "DATA LOSS" , application has flatfiles as source, loaded --> processed --> audited --> unwanted records deleted. So, flow is insert/process/delete, only 10-15% data remains. ie. (Fraud Management)"
Change the way you load/process your data. Bulk load into nologging tables; during the process phase, insert the 10-15% you actually want to keep into the real tables; then, when done, truncate the nologging tables. This way you get no redo associated with the initial load or with the delete.
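A rough sketch of that flow, with every object name hypothetical (stage_load is the nologging staging table, fraud_keep the permanent one):

#!/bin/sh
# 1) direct-path load the flat file into the NOLOGGING staging table:
#    direct path + nologging means (almost) no redo for the load itself.
sqlldr userid=app/secret control=stage_load.ctl direct=true

# 2) keep the 10-15% that survives processing, then truncate the stage.
sqlplus -s app/secret <<'EOF'
insert into fraud_keep
  select * from stage_load
   where audit_flag = 'KEEP';   -- hypothetical "keep" predicate
commit;
truncate table stage_load;      -- truncate, unlike delete, generates almost no redo
EOF

The staging table itself would be created once, up front, with the NOLOGGING attribute.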
External Tables
Shailandra, September 16, 2003 - 9:15 am UTC
What about using external tables instead of bulk loading? Maybe compare the data-processing timings using external tables versus bulk loading the data and then processing it.
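For reference, a 9i-style external table over the same flat file (all names hypothetical) lets you skip the staging load entirely and do the keep-insert in one set-based statement:

sqlplus -s app/secret <<'EOF'
create directory load_dir as '/u01/feeds';

create table stage_ext (
  txn_id   number,
  payload  varchar2(400)
)
organization external (
  type oracle_loader
  default directory load_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('feed.dat')
);

-- no staging load, no staging redo: read the file in place
insert into fraud_keep
  select txn_id, payload from stage_ext;
commit;
EOF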
Cheers
Correction.
Shailandra, September 16, 2003 - 9:16 am UTC
I am sorry -- he is using Oracle 8i, which does not have external tables, so the above suggestion is of no use here.
Nologging or Noarchivelog, Archivelog Maintenance
Nikhil, September 16, 2003 - 12:05 pm UTC
Hi,
Thanks everyone for your inputs ..
I think ultimately Tom is right: we have to provision a sufficiently large filesystem.
Adrian :
Yes, we are doing almost the same thing: loading into partitions, selecting the 10-15%, inserting it into the IMP tables, and truncating the partitions.
Anyway, it is a scenario for capacity planning.
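The partitioned variant of the staging flow sketched above would look something like this (names hypothetical, as before):

sqlplus -s app/secret <<'EOF'
insert into fraud_keep
  select * from stage_load partition (p_batch_01)
   where audit_flag = 'KEEP';  -- hypothetical "keep" predicate
commit;
alter table stage_load truncate partition p_batch_01;  -- near-zero-redo cleanup
EOF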