Nologging or Noarchivelog, Archivelog Maintenance


Question and Answer

Tom Kyte

Thanks for the question, Nikhil.

Asked: September 15, 2003 - 12:05 am UTC

Last updated: September 16, 2003 - 12:05 pm UTC

Version: 8.1.7

Viewed 1000+ times

You Asked

Hi, Tom,

I got a chance to write after a long time.


We have PRODUCTION databases with a HIGH OLTP workload, so archive log generation is very fast. We have some workarounds/solutions for maintaining the archived logs:

1. Full RMAN backup once a day, and on-demand backups of the archivelogs only. (Maintenance free)
2. BCV configuration, split every 2-6 hours, but it goes to tape only once a week. Needs manual intervention to manage the archives.
3. Manual intervention whenever the archivelog filesystem goes over a threshold. (Not easy to maintain)

Now, the situation is that the filesystem was initially designed to hold 1 day of archivelogs, but after 6 months in production everything has increased 3-4 fold.

So, what could be a practical/real-world approach to managing the archives?

1. Provide enough filesystem space. During peak periods even 300GB is not enough, while 100GB is fine normally.
2. Turn some tables to nologging mode. Personally I don't prefer this, as I have learned from various posts on your site and Metalink.
3. During peak periods, RMAN is not able to handle the volume. We have Veritas NetBackup 4.5 with a StorageTek L700 tape library; per-channel throughput is 300-400MB/min at best and <100MB/min at worst.
4. Turn the DB to noarchivelog mode and increase the REDO LOG size to hold at least 4 hours of transactions, for the case where the data can be recreated and DATA LOSS is not crucial. (HARD to convince management of this)

So, what do you think ..?

Nikhil

and Tom said...



I think #1 is the only reasonable answer -- provide sufficient diskspace.

#2 (nologging) isn't good because it won't even work! nologging is done for bulk operations like insert /*+ APPEND */, direct path loads, create table as select, index builds. It is doubtful you are doing those (see the sketch after these points).

#3 is just a "statement"?

#4 means "our data isn't worth much. we can afford to lose it all -- or at least a day's worth". increasing your redo log size won't do anything for media recovery (not in a SUPPORTED, 100%, will always work fashion in any case!). Data loss that is not crucial? I've yet to see that on a high end system.
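
As a rough illustration of the nologging point (table names here are hypothetical): only direct-path operations against a NOLOGGING table can skip redo, while conventional OLTP-style DML is fully logged no matter what.

-- hypothetical staging table; NOLOGGING matters only for direct-path work
create table stage_load nologging
as select * from source_data where 1 = 0;

-- direct path insert: can skip redo for the data because the table is NOLOGGING
insert /*+ APPEND */ into stage_load select * from source_data;
commit;

-- conventional DML: fully logged regardless of the NOLOGGING attribute
insert into stage_load select * from source_data;
delete from stage_load;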



You could always look for ways to reduce redo!! look for work you are doing that isn't really necessary (you might be surprised at what you discover!). look to see if there are any batches that process row by row that could do it set based (less redo and undo). look for practices like "rebuild indexes on a schedule", that can really increase the amount of redo you generate as the index needs to split all over again. find the causes of your redo generation and look to tune them as you would anything else.
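
One way to start finding the causes is to measure how much redo a given job generates, using the 'redo size' session statistic. A minimal sketch (the v$ views and statistic name are standard; the batch procedure named in the comment is hypothetical):

-- redo generated by the current session so far
select s.value redo_bytes
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';

-- run the job you want to measure, e.g. a hypothetical batch procedure:
-- exec nightly_purge
-- then re-run the query above; the difference is that job's redo.
-- compare a row-by-row loop against a single set-based statement this way.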


Comments

Is gzipping (or compressing) the archive logs an option?

A reader, September 15, 2003 - 12:32 pm UTC

Maybe you can gzip the archived log files on a schedule (like once or twice an hour). You will be surprised how much space can be saved. It puts some stress on the CPU, but if your CPU is powerful, maybe this is an option.

Can RMAN work with compressed archived log files?

Shailandra, September 15, 2003 - 2:11 pm UTC

Hi Tom,

In our scripts for hot/cold/export backups we use a pipe to compress the files while taking the backup. When I use RMAN to back up the archive log files, if the files are compressed, RMAN is not able to find them and errors out. Is there a way that, if we compress the archive log files, RMAN can still recognize them, back them up, and then delete them as usual?

Shailandra

Tom Kyte
September 15, 2003 - 4:11 pm UTC

well, only if you name them back to the template name I think -- it is expecting them to follow that naming convention.

that and rman actually tries to "read" those things, so that would probably not work either.

I guess I would be relying on the hardware compression on my tape drives to take care of that and just getting the archives off of the system ASAP.
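
A sketch of "getting the archives off of the system ASAP" with RMAN; the channel name and tape type below are illustrative and site-specific. Run frequently, it backs the archived logs up to tape and deletes them from disk once they are backed up:

run {
  allocate channel t1 type 'sbt_tape';
  backup archivelog all delete input;
  release channel t1;
}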

A reader, September 15, 2003 - 10:33 pm UTC

Tom,

Consider a scenario:

RMAN is used in nocatalog mode.
The entire database is backed up, with controlfiles and archivelogs.

The filesystem is corrupted, or for whatever reason the entire filesystem has gone bad.

Can I recover the database using RMAN?

Thanks.


Nologging or Noarchivelog, Archivelog Maintenance

Nikhil, September 16, 2003 - 6:17 am UTC

Your suggestion #1 is OK, but it is not always practical to ask for the maximum diskspace, since it will sit idle 70% of the time. Peak time is the last week of the month.

Re: "DATA LOSS": the application has flat files as its source, which are loaded --> processed --> audited --> unwanted records deleted. So the flow is insert/process/delete, and only 10-15% of the data remains. (It is a fraud management application.)

If we lose data, say up to 1 week's worth, we can reprocess it from the flat files.

#3 RMAN - we can allocate the maximum number of channels, with dedicated network bandwidth, but most of the time it fails to handle the volume during peak periods.

Looking for ways to generate less redo:
-- we have online partition rebuilds/purges, but they are already in nologging mode.
-- that leaves some code, but it is wrapped PL/SQL packages we can't touch, and even the vendor has confirmed the processing is optimal.

Thanks...
Nikhil

Tom Kyte
September 16, 2003 - 8:38 am UTC



remember disk is cheap compared to

a) lost data
b) downtime
c) cpu
d) our salaries in the long term


reprocessing = downtime, has anyone figured out the cost of that (it is generally HUGE)

Adrian, September 16, 2003 - 8:03 am UTC

I think the following two sections give a hint at the answer:

Tom:
"You could always look for ways to reduce redo!! look for work you are doing that isn't really necessary (you might be surprised at what you discover!). look to see if there are any batches that process row by row that could do it set based (less redo and undo)."

Nikhil:
"Re : "DATA LOSS" , application has flatfiles as source, loaded --> processed --> audited --> unwanted records deleted. So, flow is insert/process/delete, only 10-15% data remains. ie. (Fraud Management)"


Change the way you load/process your data. Bulk load into no-logging staging tables; during the process phase, insert the 10-15% that you actually want to keep into the real tables; then, when done, truncate the no-logging tables. This way you don't get the redo associated with the initial load or with the delete. A rough sketch of that flow is below.
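
Something along these lines, with hypothetical table and column names (and the usual caveat that a nologging, direct-path loaded table is not recoverable from backups taken before the load, which is acceptable here because the flat files can simply be reloaded):

-- one-time setup: a no-logging staging table shaped like the flat-file rows
create table fraud_stage nologging
as select * from fraud_detail where 1 = 0;

-- each load cycle:
-- 1. bulk load the flat file into FRAUD_STAGE with SQL*Loader direct path
--    (direct=true), which can skip redo because the table is NOLOGGING.

-- 2. keep only the 10-15% you actually want; this insert is logged as normal.
insert into fraud_detail
select * from fraud_stage
 where keep_flag = 'Y';
commit;

-- 3. throw the rest away without generating per-row delete redo.
truncate table fraud_stage;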



External Tables

Shailandra, September 16, 2003 - 9:15 am UTC

What about using external tables instead of bulk loading? Maybe compare the data processing timing using external tables versus bulk loading the data and processing it.

Cheers

Correction.

Shailandra, September 16, 2003 - 9:16 am UTC

I am sorry, he is using Oracle 8i, which does not have external tables, so the above suggestion is of no use.

Nologging or Noarchivelog, Archivelog Maintenance

Nikhil, September 16, 2003 - 12:05 pm UTC

Hi,
Thanks everyone for your inputs.

I think ultimately Tom is right; we have to go for a sufficient filesystem.

Adrian :
Yes, we are already doing almost the same thing: loading into PARTITIONS, selecting the 10-15%, inserting it into the important tables, and truncating the PARTITIONS.
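
For reference, the discard step in that flow would look something like this (the table and partition names are hypothetical); TRUNCATE PARTITION generates almost no redo, unlike a DELETE:

-- hypothetical partitioned staging table, one partition per load cycle
alter table fraud_stage_part truncate partition p_20030916;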

Anyway, it's a scenario for capacity planning.

