Question and Answer

Tom Kyte

Thanks for the question, Bhavani.

Asked: April 12, 2002 - 8:56 pm UTC

Last updated: July 18, 2012 - 3:32 pm UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Hi Tom,

Recently we shifted our database server from one location to another. Here are the steps our DBA performed:

1) On the 4th of April, we exported (using the export and import utilities) all our data to the new machine.
2) On the 6th, we shipped the new machine to the other location.
3) On the 12th, we took the archive log files from the current server (about 30 GB), shut the current server down, and applied the archive logs on the new machine.

But we didn't get all the data up to the 12th -- there is data only up to the 4th of April. So we had to transfer the remaining data manually again.

What might have gone wrong, Tom? Is this the best way to migrate? Is there a better way to do it? (My DBA thought a complete export and import might take a very long time, maybe a day. That's why he did it like that.)

Thank you very much for your valuable answers.

Sincerely
Bhavani

and Tom said...

That is the answer then.

EXP is a point-in-time picture of your database. It is a logical backup, not a physical backup -- a copy of the data as of the moment of the export.

It is physically impossible to do a point-in-time recovery with that -- once the imp is done, that's it.

Your DBA isn't much of a DBA, I'm afraid, if he thought:

a) this would be more efficient
b) that you could apply archives to the imported data
c) that he actually did apply archives to that database (that is IMPOSSIBLE, couldn't have happened!!! I don't know what he did, but there is ZERO chance he applied any of those archives to this other database)


He should have:

a) made sure the database (old one) is running in archive log mode.
b) made a hot or cold backup of it
c) restored that to the new machine
d) started applying the accumulated logs
e) when he was nearly done -- he would shutdown the old database
f) continue applying the archives to the new one and apply the online redo logs from the old one
g) open the database
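
To make those steps concrete, here is a minimal sketch (not from the original answer) of the final recover-and-open commands, assuming a user-managed (non-RMAN) backup restored on the new machine with the archived logs -- and, at the end, the old online redo logs -- copied into place:

STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- supply each archived log as prompted; once the last available log
-- (including the old online redo logs) has been applied, type CANCEL
ALTER DATABASE OPEN RESETLOGS;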


That's how to do what you are trying. EXP plays no role in this unless you:

a) shutdown old
b) startup with restricted session
c) export old
d) shutdown old
e) import into new




Comments

Great Explanation

A reader, April 13, 2002 - 3:37 pm UTC

This answer is really helpful. We all think we know so much; this shows how much we actually know. Thank you very much once again, Tom, for your valuable answers.

control file

A reader, April 14, 2002 - 6:06 am UTC

What about the control file?

Tom Kyte
April 14, 2002 - 9:08 am UTC

That would be covered in:

..
b) made a hot or cold backup of it
c) restored that to the new machine
...

Maybe I'm assuming too much, but I would have to assume a DBA would know how to take a hot backup and restore it to another machine. They'll be doing a cancel-based recovery if they do it this way with a non-binary backup of the control file.

Dave, April 14, 2002 - 7:02 am UTC

hello

1) "e) when he was nearly done -- he would shutdown the old database" -- does this mean the old database is now closed for transactions, or something else?

2) "EXP is a point in time picture of your database. It is a logical backup, not a physical backup." -- if this is a logical picture and not a physical one, don't we get physical data when we import the export file with imp?



Tom Kyte
April 14, 2002 - 9:13 am UTC

means the older database is now closed for business, it is shutdown, no more work to be done against it -- the end goal here is a switchover.

As for the exp thing -- when I say it is a logical backup, you get a DIFFERENT but EQUIVALENT database out of it. It is not the same database you started with. Oh, it might have the same *data*, but it will have the data in different places on disk, with different rowids. PHYSICALLY it is not the same database. Logically it is a full point-in-time copy of the original database. You cannot take archived redo logs from the old database and apply them to this logical copy.

asktom.com

liz, April 14, 2002 - 7:30 am UTC

hi tom
1. do you take backups of the asktom database?
2. what is the size of your database?
3. do you keep it in archivelog mode?
4. how old is the data we can get from your site?

Tom Kyte
April 14, 2002 - 9:22 am UTC

Of course I do. I have a large amount of time invested in the questions/answers here. It would be the biggest mistake in the world NOT to take backups. Seriously, we run all of our production databases like this in archivelog mode. Fred (my DBA/SA) runs a hot backup about every other night.

Asktom's database is pretty small, just about 6gig. But we run 2 other databases on this machine as well -- and have about 10 other databases here and there that he manages as well.

We have a nice tape library that all of the machines are hooked into and practice restores every now and again.

If you click here:

http://asktom.oracle.com/pls/ask/f?p=4950:6

you'll see the data goes back to May of 2000, but it really goes back to January 2000 (jan/feb/mar/apr of 2000 we did it via email; that's when I decided to write the first "asktom" interface on the database -- it wasn't scaling with email)

what to do with archivelogs

liz, April 14, 2002 - 7:37 am UTC

hi tom
when we take a fresh backup, should we delete the archivelogs up to that point, or should every archive log always be kept? what is the rule -- when should archivelogs be deleted?

Tom Kyte
April 14, 2002 - 9:24 am UTC

I would suggest you read:

http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/index.htm

all of the backup and recovery books.

You probably want to use RMAN. You will want to keep at least 2 of the last backups available and all of the archives needed to restore each of them fully. You need AT LEAST two backups because backups are done to tape and tape can go bad, accidents happen.

You CANNOT take a backup over your last backup (ask yourself: what happens if a disk fails in the middle of the backup while I'm overwriting my last backup? The answer is you just toasted your database -- you have no backup now, you cannot complete the current backup, you lose).

I keep the last 5 backups and all archives needed to restore them, the archives are stored redundantly as well.
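
As a sketch of what such a policy can look like in RMAN (9i-or-later syntax; the RMAN shipped with 8.1.7 has no CONFIGURE command, so treat this as illustrative only):

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> REPORT OBSOLETE;    # list backups/archives the policy no longer needs
RMAN> DELETE OBSOLETE;    # remove them only once the report looks right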

archivelogs

liz, April 14, 2002 - 1:19 pm UTC

hi tom
thank you -- really, we know nothing,
but there is big confusion about keeping archive logs.
let's suppose i took a backup on apr 01, then i took a backup on apr 05, and i keep only the last backup, i.e. apr 05. will i have to keep the archive logs from apr 01 to the current day?
i ask this because keeping all archivelogs will make a heap of them

Tom Kyte
April 14, 2002 - 4:41 pm UTC

I won't tell you what HEAP of something you'll be in if you don't get a really good handle on backup and recovery.

I think I was pretty clear -- you cannot keep just the last backup, and you should have redundant copies of the archives.

What happens WHEN (not if, WHEN) the backup from April 5th isn't readable? You'll really be wanting that April 1st backup, won't you.

You need to budget for this. You need to buy the hardware for this.

Hopefully you do not need a catastrophic loss of data to convince you of this.

asktom.com

Mike, April 14, 2002 - 5:18 pm UTC

How did you design the asktom.com application, and what kind of tool did you use? Did you use Oracle Designer? Does an application schema own all of the application objects? Are the tables partitioned?
Thanks.

Tom Kyte
April 14, 2002 - 6:25 pm UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1995355238285
....

no designer (it's not a very big app)
no partitioning (the tables are not huge; I've received 16,000+ questions to date)

One schema owns the "application logic" tables (our framework), another schema owns the "data tables".

RE: thank you, really we know nothing...

Mark A. Williams, April 14, 2002 - 7:05 pm UTC

Liz:

I don't know what part of FL you are in, but Oracle has a really nice training facility right by the airport in Orlando. Also, the complete set of documentation is available on OTN for free!

Sorry for sticking my nose in, but it seems a LOT less expensive to pay approx. $2000 for a nice backup and recovery training course than it would be to lose production data...

- Mark

A reader

A reader, April 15, 2002 - 6:37 am UTC

Hi Tom,
We have a production database and a disaster recovery center database. In production we take a daily hot backup along with the archive log files. We ship this tape to our recovery center (a different location). Now here I have a little confusion. I don't want to restore the datafiles from the tape into the recovery database daily. What I want to do is, let's say today I am setting up the recovery center db:

1) I will take the hot backup and archives from production and apply them (first time).

2) The next day, when I get the next day's tape, I just want to apply the new archives from the tape to the recovery db and do a cancel-based recovery. Or do I have to restore all the database files + controlfile again and then apply the archives?

3) Is my approach of just applying the archive log files every day and doing a cancel-based recovery OK?

Your comments please


Tom Kyte
April 16, 2002 - 7:12 am UTC

This is so popular, we have a name for it -- the standby database.

Read:
http://docs.oracle.com/docs/cd/A87860_01/doc/server.817/a76995/toc.htm

it does this. (#3 -- no, not OK. You have to either save all archives and apply them when you fail over, or use the standby approach.)
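
For reference, a minimal sketch of the manual (8i-style) standby flow that book describes -- the file names here are hypothetical:

-- on the primary, once
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stby.ctl';

-- on the standby, after restoring the datafiles and that standby controlfile
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
RECOVER STANDBY DATABASE;   -- re-run as each shipped archive arrives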



archivelogs

ash k, April 15, 2002 - 11:18 am UTC

hello
modifying liz's question:
backups taken on the 01st, 05th, 10th and 15th of april; we keep the last three backups (05, 10, 15). today is 20 april.
must we keep the archivelogs from the 01st till today (20 april)? am i right or wrong? or must we keep even older backups?

Tom Kyte
April 16, 2002 - 7:25 am UTC

You need the archives to make the April 5 backup consistent -- that is, the archives generated during and after the April 5 backup.

April 5 is the answer.

mike

liz, April 15, 2002 - 11:23 am UTC

mike,
let me clear up my confusion

Cold BackUp and Archive Log Files

Madhu, April 16, 2002 - 11:14 am UTC

Hi Tom,

The same thing happened to us long back. But in our case, we did a cold backup and then applied archive log files for 16 days. The archive logs applied only for the first 4 days; we don't know what happened, but there is no data from the 5th to the 16th day. Yet when we applied the archive log files, it said the archive log files were applied successfully.

We don't know what went wrong. When we analysed the whole process: on the 5th day, we changed one of the tables' structures. Did archiving go wrong because of that, or might some other problem have occurred?

Thank you very much Tom.

Madhu

Tom Kyte
April 16, 2002 - 12:21 pm UTC

Sorry -- but you are comparing apples to oranges here. The above (exp/imp) would not work in any case. Yours should work in all cases.


Not having the details, I would guess that you used the wrong control files or did a point in time recovery. Without seeing what you did, I cannot tell you what went wrong.

No -- a simple schema change will not affect recovery.

Data Guard

Irfan, April 16, 2002 - 1:54 pm UTC

Hi Tom,
Why don't we use the Data Guard facility instead of a standby database (for updating data in a different location)?

Thanks.

Tom Kyte
April 16, 2002 - 9:39 pm UTC

data guard = name of standby in 9i.

they are the same.

Data Guard ???

A reader, April 17, 2002 - 3:46 am UTC

Tom,

From the above comment, do you mean to say there is no difference between Data Guard and standby databases?

I heard something about them being always in sync, without a time lag.

Can you elaborate a bit?

Thanks.

Regards,
Ganesh R

Tom Kyte
April 17, 2002 - 7:32 am UTC

Data Guard is the 9i name for standby.

A TON of new features were added.

There are many things you can do in 9i data guard that you cannot do in 8i "standby".

However, data guard is the next generation "standby" -- the names are different, that is all. As always, what features you have depends on the version you have.

exp/imp works as if in archive mode

Irfan, April 17, 2002 - 3:48 am UTC

Hi Tom,
Sorry, but I would like to mention that our production system is running in archive log mode and the disaster box (under the same roof) is not. Daily, a cron job takes a full export from production and remotely copies it to the disaster box. Then, in a cyclic fashion, the cron on the disaster machine starts importing, and by morning we have the updated data on the disaster machine. We never applied archived logs (since disaster is not in archive mode), and even though we run the application on the disaster box, we get the exact data as in production. I really wonder how that is, since the control file is not updated at all? Please suggest.
Thanks.

Tom Kyte
April 17, 2002 - 7:36 am UTC

You do not have the "exact data as in production". You have a POINT IN TIME COPY of the data from production -- the point in time is the time of the actual export.

Do this -- at 3pm in the afternoon -- look at your "disaster" box, tell me what would happen if you failed over right then and there. Would you lose the work that was performed from 9am to 3pm that day on production? The answer is YES, YES you would. You would have on "disaster" only the data from the last export, nothing more -- nothing less.

If that satisfies your needs, great. I myself would feel more comfortable with a standby instance that is slightly more up to date.

A reader

A reader, April 18, 2002 - 11:25 am UTC

Hi Tom,
Can we use incremental/cumulative exp/imp as a backup/recovery method? I have never used it. The Oracle 8i documentation mentions the method and I don't understand it well. I will never use it for backup/recovery; however, it's good to know why. Can you please explain why we can or cannot use it, or in what cases we can use it? The following is copied from the documentation:

"An incremental export extracts only tables that have changed since the last incremental, cumulative, or complete export. Therefore, an import from an incremental export file imports the table definition and all of its data, not just the changed rows.

It is important to note that, because importing an incremental export file imports new versions of existing objects, existing objects are dropped before new ones are imported. This behavior differs from a normal import. During a normal import, objects are not dropped and an error is usually generated if the object already exists."

Thanks a lot!

Tom Kyte
April 18, 2002 - 12:19 pm UTC

EXP is a logical copy. It is not a physical backup.

You cannot do a full recovery using EXP -- period. It is a point-in-time copy, nothing more, nothing less.

Doing a backup, a real honest backup, is

a) easier
b) faster
c) better

than exp. exp is (in my opinion) NOT A BACKUP TOOL. Backups mean *to me* that I can restore and then roll forward using the archives. You cannot do that with exp.

A Reader

A reader, April 18, 2002 - 2:36 pm UTC

I guess in the case at the beginning of this topic, they could have used an incremental export, because the old server was shut down on the last day. Couldn't they? I will never use incremental/cumulative exp/imp as a backup/recovery tool; that does not mean it should never be used in any case.
Thank you for all the excellent answers!

Tom Kyte
April 18, 2002 - 9:12 pm UTC

Well, the documentation does say it will not be supported for much longer, so don't get used to it (incrementals with exp/imp, that is, not the tool itself).

Archive Log Sizes

Irfan, April 23, 2002 - 10:03 am UTC

Tom, Below is the archive log listing of our three machines running OPS 734.

Node 1
-------
-rw-r----- 1 oracle dba 4196352 Apr 23 11:02 NODE1_9851.log
-rw-r----- 1 oracle dba 4196352 Apr 23 10:55 NODE1_9850.log
-rw-r----- 1 oracle dba 4196352 Apr 23 10:47 NODE1_9849.log
-rw-r----- 1 oracle dba 4196352 Apr 23 10:37 NODE1_9848.log

Node2
-------
-rw-r----- 1 oracle dba 6144 Apr 23 11:00 NODE2_3755.log
-rw-r----- 1 oracle dba 4096 Apr 23 10:33 NODE2_3754.log
-rw-r----- 1 oracle dba 34816 Apr 23 10:06 NODE2_3753.log

Node3
-------
-rw-r----- 1 oracle dba 6144 Apr 23 10:56 NODE3_4357.log
-rw-r----- 1 oracle dba 14336 Apr 23 10:28 NODE3_4356.log
-rw-r----- 1 oracle dba 6144 Apr 23 10:01 NODE3_4355.log

I would like to know:
a) Why don't all the nodes have the same archive log sizes?
From v$log I get bytes = 4194304 (for all).
Node2 and Node3 in particular are not at all uniform compared to Node1.
b) If v$log shows 4194304 bytes, why are Node1's log files 4196352 bytes?
c) For Node1, when I execute:

SVRMGR> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle/admin/PKS/archive/NODE
Oldest online log sequence 9850
Next log sequence to archive 9852
Current log sequence 9852

- The current log sequence is 9852, whereas in the listing above for Node1 the log currently being written is 9851. What is the logic behind this?
d) Yesterday's last archive log and today's first archive log (Apr 22 & Apr 23):

-rw-r----- 1 oracle dba 4196352 Apr 23 07:21 NODE1_9833.log
-rw-r----- 1 oracle dba 3430400 Apr 22 14:13 NODE1_9832.log

- Whenever I see a new archive log created, it is of size 4196352 -- but why is Apr 22's closing log smaller than the normal 4196352 bytes? Was it not fully written?

Thanks.


Tom Kyte
April 23, 2002 - 11:01 am UTC

do you, or someone you work with, issue "alter system switch logfile"?

Only at EOD

A reader, April 23, 2002 - 12:44 pm UTC

Hi Tom,
But we only do a switch at the end of the day, i.e. after about 14 hours. I am sure nobody does it before that. I really wonder -- please guide.
Thanks

Tom Kyte
April 23, 2002 - 1:35 pm UTC

Here we go -- just to show that you can learn something new every day (I always do...)



Article-ID: <Note:45204.1>
Circulation: PUBLISHED (EXTERNAL)
Folder: server.OPS.Parallelserver
Topic: Archives
Title: Inconsistent Archive Redo Log Size with OPS
Document-Type: FAQ
Impact: MEDIUM
Skill-Level: CASUAL
Updated-Date: 05-FEB-2002 12:20:26
References:
Attachments: NONE
Content-Type: TEXT/PLAIN
Products: 0;
Platforms: 89;

This article is filed for OpenVMS but is generic for Oracle7 Parallel Server.


Archive Redo Log Sizes and Oracle Parallel Server
-------------------------------------------------

Redo logs are archived for 2 reasons:

o The current on line log has filled up and the new logfile to be used
needs archiving.

o A user forces the logs to be archived or switched before they have
been filled.

Assuming manual archiving has not taken place, the size of the archived
redo log should be the same as the size of the on-line redo logs being
archived. If manual archiving has not been done, all archived logs will
be the same size.

An exception to the rule is with Oracle Parallel Server. This can cause
smaller archived redo log files to be generated when no manual archiving
has occurred. This note explains why these smaller archived files
are generated.

Whenever a new logfile is generated it will have a log sequence number.
The log sequence number is used for recovery and will be unique across all
instances in the parallel server configuration.

It is important that the log sequence numbers generated by the different
instances do not get too far out of step. The primary reason for this is
to keep the recovery process simple.

By keeping log sequence numbers within a close range it means that recovery
doesn't have to cope with the situation where large amounts of redo come
from one thread and no redo comes from another thread.

A smaller archivelog is generally seen when one of the instances in the
configuration is doing a lot more work than the others.

The smaller archived redo log file is generated for the 'quieter' instance.
The file is not the size of the current online log because only the filled
portion of a redo log is written to the archive location. (For an idle
instance within a Parallel Server Configuration, the generated file will
contain only the redo log file header block)


Review of original question

Irfan, June 16, 2002 - 1:39 am UTC

Going back to the original question.

If export/import is a point-in-time snapshot, will he face any problem running his new db in archive log mode, and in case of a crash will he be able to apply archive logs to his imported DB?



Tom Kyte
June 17, 2002 - 7:27 am UTC

NO.

As I said:

"EXP is a point-in-time picture of your database. It is a logical backup, not a physical backup -- a copy of the data as of the moment of the export."

An export dmp file can never be rolled forward from the logs, NEVER.

logical backup

mo, March 15, 2003 - 10:13 am UTC

Tom:

We had to restore a procedure from a logical backup taken every night. The DBA said you cannot restore just one procedure -- he would have to restore all of them, which is a lot. Is there a way to restore only one procedure from a logical backup?



Tom Kyte
March 15, 2003 - 12:45 pm UTC

imp .... show=y

and you can grab the code out of there (that's in "the book" as well).
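
A sketch of what that might look like (the user, dmp file and log names are hypothetical):

imp userid=scott/tiger file=nightly.dmp full=y show=y log=extract.log

show=y writes the DDL -- including the procedure source -- to the screen and log without actually importing anything; you then cut the procedure's code out of the log.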

But you know what, a DBA that needs to use a "logical backup" (an export) is not a DBA... DMP files are not "backups" in any real sense of the word. Oh -- I know the docs say "backup", but it is not.



recovery question

Reader, May 24, 2003 - 8:03 pm UTC

When Oracle does instance recovery, it uses the "current" online redo log file. When media recovery is needed, Oracle applies both archived logs and online redo log files. In both situations, while applying redo records during recovery, will Oracle generate redo again in the log buffer? It may seem a silly question to you, but I was just wondering. Thanks.

Also, if I use a backup control file to do recovery, I have to restore all of my datafiles from backup. How does Oracle know the current checkpoint status of the database, since I am not using the current control file? Does Oracle get the checkpoint info from the archive log files and update the backup control file with the current status of the database? I may have added or resized datafiles since my last backup of the control file -- how is my backup control file aware of this? Thanks as always.

Tom Kyte
May 25, 2003 - 9:31 am UTC

no, it reads the redo and applies it; it does not generate any.

Oracle *doesn't* know the current status of the stuff -- that's the issue with losing ALL copies of your current control files: it complicates recovery (it cannot be as automatic as usual).

Oracle asks for the archive logs generated after the hot backup was taken.

Belinda, September 11, 2003 - 5:44 pm UTC

Tom,
When I was doing recovery testing, I removed all database files and restored only the datafiles from a previous hot backup, then recreated the controlfile with resetlogs (as there was no controlfile and no online redos), and later recovered the database using backup controlfile until cancel. Oracle asks for the archivelog files generated after the hot backup was taken. I wonder where Oracle gets the archivelog info from, as this info should not be in the headers of the datafiles from the old backup, and we don't have the current controlfile or online redos.

Tom Kyte
September 11, 2003 - 7:56 pm UTC

you said to recover until cancel.

Oracle knows the algorithm used to generate archive log file names,

hence it just asks for files it assumes might exist, until you say "stop, that's enough".
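
That naming algorithm is driven by the LOG_ARCHIVE_FORMAT init parameter, for example (a hypothetical setting):

log_archive_format = arch_%t_%s.dbf    # %t = thread#, %s = log sequence#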



archive logfile in different location if the partition is full

Sean, August 19, 2004 - 1:27 pm UTC

Hi Tom,

We want to write archive log files to one partition and, if that partition fills up, switch to another. By doing this, we will not accidentally hang our database because of a full disk.

Are there ways doing this?

Thanks so much for your help.



Tom Kyte
August 19, 2004 - 5:29 pm UTC

sure -- use an LVM and create a larger volume?

(that was not tongue in cheek: unless you want to monitor when partition 1 gets full and issue the alter system command yourself, you'll want to put these disks together as one larger disk)
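
The "alter system command" alluded to would be along these lines (assuming the 8i+ Enterprise Edition multiple-destination parameters; the path is hypothetical):

ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/u05/arch';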

Which archive log files are used?

Anil Pant, August 25, 2004 - 2:34 am UTC

Suppose I have backups from the 1st, 5th and 10th, and archive log files from the 1st till the 12th (when the database crashed). If the database crashes on the 12th day and I do a complete recovery on the 12th, my questions are:
(a) Are all the archive log files (I mean from day 1) used for recovery? If not, how does Oracle know which ones to use?
(b) Does Oracle use the SCN number from the 10th day backup's control file?

Thanks

Tom Kyte
August 25, 2004 - 7:37 am UTC

a) you tell me -- which files did you restore? if you restore the ones from day 10, only the archives generated since then would be needed. if you restore the files from day 1 -- well, then of course all of the archives since day 1.

the data file headers have checkpoint information; that is what clues the database in as to what redo it would need to bring the files up to date.

b) if you restore the files from the 10th day's backup, that is where the database would get its information from, yes.
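
You can look at that header checkpoint information yourself; a sketch:

select file#, checkpoint_change#, checkpoint_time
  from v$datafile_header;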

Full backup and archive log

Yogesh, October 25, 2004 - 7:48 am UTC

I have a full backup and set of archive log files (after full backup). I want to recover the database using these archive log files up to a specific timestamp.

Full backup timestamp 25-OCT-2004 11:30

One of the test tables had 5 rows.

After this I've done some transactions and created couple of archive log files.

1st with timestamp 25-OCT-2004 12:07
2nd with timestamp 25-OCT-2004 12:09

Now the test tables have 11 rows.

I've used the following set of commands for incomplete recovery:

alter database mount;
Database mounted.
recover database until time '2004-10-25:12:09:30';
Media recovery complete.
alter database open noresetlogs;
statement processed.

sqlplus test/test -- to check if the logs were applied after the incomplete recovery. But I'm getting only 5 rows, which were present before the full backup.

Can you please explain what is the problem?


Tom Kyte
October 25, 2004 - 8:07 am UTC

I hate guessing, but I have to guess since so much of what you did is not actually told here.

I'll guess you restored the entire database -- everything, control files et al.

read note: <Note:161826.1>

you need to include "using backup controlfile" on that recover command, since you are in fact using a backup controlfile.
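
In other words, a sketch (using the timestamp from the example above):

recover database until time '2004-10-25:12:09:30' using backup controlfile;
-- after an incomplete recovery with a backup controlfile, the database
-- must then be opened with resetlogs
alter database open resetlogs;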

Yogesh, October 25, 2004 - 9:08 am UTC

Your guess was correct -- I did not use the "using backup controlfile" clause.

One more small question: if some of the transactions were not written to an archive log file and are present only in the online redo log files, how can we apply those?

Tom Kyte
October 25, 2004 - 9:16 am UTC

since you NEVER back up online redo log files, you would NEVER restore them -- so the online redo logs would be available to be applied during the recovery process (you have them multiplexed over many devices -- a copy should remain on your system somewhere, unless you lost *everything*, but that is what DR is about, not regular media recovery)

Yogesh, October 25, 2004 - 9:31 am UTC

That means, if we use the current online redo log files, Oracle will apply them when we open the instance?

Tom Kyte
October 25, 2004 - 9:38 am UTC

if you do COMPLETE recovery, yes.

if you do cancel-based (until time) recovery -- then maybe yes, maybe no; it depends on whether they need to be included or not.

Yogesh, October 25, 2004 - 12:22 pm UTC

But in the above scenario, where I'm using UNTIL, i.e. incomplete recovery, can't I force it to apply the online redo?

How can I perform complete recovery in this case?

recover database?
recover database using backup controlfile?





Tom Kyte
October 25, 2004 - 1:17 pm UTC

the only way to force it to apply the online redo log would be to specify a time/scn that NEEDED the online redo. it'll happen quite naturally.


to perform a complete recovery -- just "recover database until cancel using backup controlfile", apply all archives and online redos, then cancel.

or, keeping the current control files intact, just "recover database" and it'll happen.
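
As a sketch of those two variants:

-- current controlfiles intact: a complete recovery
recover database;
alter database open;

-- with a backup controlfile: apply every archive plus the online redo, then cancel
recover database until cancel using backup controlfile;
alter database open resetlogs;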

Is this on subject?

Robert, February 15, 2005 - 2:22 pm UTC

Tom,

Is there a way to tell how 'full' an online redo log is?
(e.g.: to anticipate when it is getting ready to switch).

Thanks,

Robert.

Tom Kyte
February 15, 2005 - 3:49 pm UTC

I don't think so; I'm not aware of one.

Checking how full the current online redo log is.....

Mark J. Bobak, February 16, 2005 - 12:22 am UTC

Here's a script to see how full the current redo log is:

column used justify right
alter system checkpoint
/
select
le.leseq log_sequence#,
substr(to_char(100 * cp.cpodr_bno / le.lesiz, '999.00'), 2) || '%' used
from
sys.x$kcccp cp,
sys.x$kccle le
where
le.inst_id = userenv('Instance') and
cp.inst_id = userenv('Instance') and
le.leseq = cp.cpodr_seq
/

And no, I didn't write it.
Credit to Steve Adams for this one.
See:
http://www.ixora.com.au/scripts/redo_log.htm
Bottom of the page, log_file_usage.sql


On another note:
Reading through this thread, I'm kind of scared to
see how little so many folks know about backup and
recovery. I just got done with a complete and total
production crash of three customer-facing production
databases, due to a disk subsystem catastrophe. By the time
a certain well-known disk vendor whose initials are e-m-c
finally fixed the array, we restored over 1.4TB from BCVs,
and applied several GBs of archive logs, it was about 30
hours later. But, I know our backup and recovery process.
I'm comfortable with it. Of the three databases, two were
complete recoveries. One of them had corruption in the
archive log stream, and we were forced to do an incomplete
recovery there. So, the incomplete recovery was not
great, but it really couldn't be avoided in this
circumstance.

My point is, I was *the* guy on-site this past weekend when
it came to all the Oracle questions. I was comfortable,
because I understand backup and recovery, and what can and
should happen in what order.

If you're a DBA, you need to know what a hot and cold backup
is, how logs are applied to a restored database or datafile,
what the difference is between a logical and physical
backup. This is critical stuff. Without it, sooner or
later, you'll be without a database...and soon after that,
without a job.....;-)

Ok, I'll shut up now....;-) Sorry, maybe I'm a little
sensitive due to my recent experience....;-)

Tom Kyte
February 16, 2005 - 8:01 am UTC

<quote>
I'm kind of scared to
see how little so many folks know about backup and
recovery
</quote>

you are -- I'm scared and amazed personally.

I like to say "the only thing a DBA is not allowed to get wrong is recovery (not backup, recovery)"

I can fix pretty much anything else they do wrong, but if they cannot recover, it is all for nothing.

You would quite simply be amazed at how many people have had their first attempt at recovery be precisely the time they needed to recover a real system for the first time!




Archive Log Backups

Paul, February 16, 2005 - 9:07 am UTC

"One of them had corruption in the archive log stream"

Not quite sure what was meant here; surely it's a good idea to keep multiple copies of the archive logs on different media (assuming you need to be sure of being able to recover fully).



Archive log corruption....

Mark J. Bobak, February 16, 2005 - 9:58 am UTC

Paul,

What I meant was, in the list of archive logs that I needed
to apply to get to current point-in-time, there was an
archive log that was corrupted. As soon as I encountered
that, I was no longer able to roll forward, and was forced
to do a resetlogs at that point in time.

In this case, with the catastrophic failure of the disk
array, pretty much everything that was being written to
when the disk failed was corrupted. I think the only way
to avoid it would have been to have a secondary archive
log destination filesystem that was mounted from a different
disk array. That's an interesting thought, and may be worth
further investiagation, but I'll have to talk to my SAs
about how practical it is in our case.

Archive log corruption....

Paul, February 16, 2005 - 10:44 am UTC

Mark,

I see -- they were logs not yet "flushed" onto a backup device. Makes sense; hopefully not too drastic a situation, if not ideal.

The only reason I mentioned it (and as I said, I didn't really quite get what your particular issue was) is that I've seen plenty of people who are willing to believe that their backup media may fail for the "main" database backups, and so keep multiple generations, but when it comes to archive log files they seem to believe the backup media will be perfect. A strange notion to me (but I've seen plenty stranger).

I remember, a good 15 years or so ago, at one site the tape robot managed to "throw" one of the tape cartridges such that it ended up sliding under one of the tape drives. As it was loading it at the time, it was of course needed for access -- a lot of head scratching from the SAs, and panic from those who absolutely needed the content.

hot backup failure

abc, August 12, 2005 - 11:50 am UTC

Tom,

The database crashed while in hot backup mode. What should I do?

Thanks!


Tom Kyte
August 13, 2005 - 9:32 am UTC

start it up, you might have to alter database recover;
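
A sketch of the usual drill (the datafile name is hypothetical; later releases also accept a single ALTER DATABASE END BACKUP for all files at once):

STARTUP MOUNT;
SELECT file#, status FROM v$backup WHERE status = 'ACTIVE';    -- files still in backup mode
ALTER DATABASE DATAFILE '/u03/oradata/users01.dbf' END BACKUP; -- for each such file
ALTER DATABASE OPEN;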

Nice

Raju, August 25, 2005 - 5:34 am UTC

Hi Tom,
You talk about 8 types of files in Oracle.

I know only these: datafiles, controlfiles, redo log files, pfile, spfile, tempfiles.

What are the other files?


Tom Kyte
August 25, 2005 - 8:31 am UTC

the 8 that I described in my book are:

parameter files
trace files
alert file
data files
temp files
control files
redo log files
password file

Archive log file

Senthil, October 21, 2005 - 7:42 am UTC

Hello Tom,

The production server's archive logs have grown to a huge size (30+ GB). No major event took place other than the scheduled cron jobs. I don't know where to start analyzing the problem. Please give me your suggestions.

Regards,
Senthil.





Tom Kyte
October 21, 2005 - 10:06 am UTC

an archive log file won't grow any larger than the online redo log files, so I must assume you mean "over time, my archive destination has accumulated 30gb of archives".

And that is insufficient to comment on -- how often do you back them off to something else and clear them out? what used to be the rate of generation, and what is NOW the rate of generation (eg: 30 gig in 6 minutes, or in 6 months)?...

Scheduled cron jobs, like batch? like big batches that do lots of modifications and generate gobs of redo?

Archives required for recover

jp, October 22, 2005 - 10:22 am UTC

Tom,

How can I find out, in advance, which archive files are required to recover a datafile/tablespace -- I mean, before I start running the recover database command? If I am correct, there is a way to find this information.

thanks

jp

Tom Kyte
October 22, 2005 - 10:48 am UTC

you can use the v$ tables that show the log history -- you'll need all of the logs generated since the point in time right before the backup that you restored took place.

but there is insufficient data to really say -- you don't tell me how/with what tool you back up, or anything.

the tool/method you use would lead you to the answer (eg: if you are backing up "by hand", you KNOW, because you've been putting all of the needed archives with this backup yourself all along)
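
For instance, a sketch against v$log_history (the cutoff, meant to be just before the restored backup began, is hypothetical):

select thread#, sequence#, first_time
  from v$log_history
 where first_time >= to_date('22-OCT-2005 02:00', 'DD-MON-YYYY HH24:MI')
 order by thread#, sequence#;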

Archive Log file

Senthil, October 24, 2005 - 7:00 am UTC

Tom,
Backups are taken on a daily basis and go to tape. 30 GB of logs have accumulated at the end of the day, which is large compared to the usual. While discussing global temporary tables, you pointed out that GTTs do generate some redo. I have converted some temp tables to GTTs. Is this the cause of the increase? How do I overcome this problem?

Thanks.

Tom Kyte
October 24, 2005 - 11:28 am UTC

how do you "make some temp tables as GTTs"? I am not sure what that means exactly (so -- something has changed?)

Archive Log file

Senthil, October 24, 2005 - 1:28 pm UTC

Tom,

The temp tables I used were ordinary tables meant for data movement. I changed those tables to GTTs, which serve as an interface.


Tom Kyte
October 25, 2005 - 1:11 am UTC

what else has changed on your system?


It is rather easy to measure the amount of redo generated by various operations. I would set up a small test that measured the amount generated using a GTT vs a non-GTT, and see what I saw.
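
A sketch of such a measurement, using the session statistic 'redo size' -- run it before and after the operation under test; the difference is the redo that operation generated:

select s.value redo_size
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';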

Archive log file

Senthil, October 25, 2005 - 11:00 am UTC

Tom,

Give me an easy way to measure the amount of log generated.

Thanks,


Tom Kyte
October 26, 2005 - 7:25 am UTC

archive log mode? just count your archives.

else, read your alert log and look for the "switching to a new log" messages -- they all have timestamps.
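
In archive log mode, counting (and sizing) the archives is a simple query against v$archived_log; a sketch:

select trunc(completion_time) day,
       count(*) logs,
       sum(blocks * block_size) / 1024 / 1024 mb
  from v$archived_log
 group by trunc(completion_time)
 order by 1;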

how to apply archive log files

Vivek.Yalla, June 12, 2006 - 6:38 am UTC

hi
i take a weekly backup of my database, which is a cold backup of the database plus the archive log files; the database is in archive mode. can you tell me the right steps to take a cold backup and apply archive logs to it? I have tried ALTER DATABASE BACKUP CONTROLFILE TO TRACE, but I am not able to apply the full set of archive log files. please suggest the best possible way to apply archive logs to a cold backup. eagerly waiting for your reply

thanking

I asked it elsewhere but more relevant here

A reader, June 12, 2006 - 12:42 pm UTC

Hi Tom
Is there any way, apart from the alert log file, to find out how long it takes to apply a log during the recovery process? The objective is to monitor the recovery process and estimate recovery time.
thanks

Tom Kyte
June 13, 2006 - 10:39 am UTC

have you looked at v$instance_recovery and the set of v$recover* views?

Measuring redo apply

Roderick, June 12, 2006 - 5:30 pm UTC

v$recovery_progress perhaps?

Read Appendix A of this paper on OTN.
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gRecoveryBestPractices.pdf

Applying Archive logs

Shawn Brockway, June 14, 2006 - 8:08 am UTC

We have a situation at work similar to the original post. We are moving a production database from one machine to a different machine. We are planning to take a hot backup of the database on hostA that will finish at midnight Friday night. We will begin restoring the database onto hostB as soon as the backup is complete. The applications will still run on hostA until midnight Saturday night. Can we really apply the archive logs produced on hostA after the backup is completed to hostB? Our datafiles and control file restored from hostA would be as of midnight Friday. We need to keep populating changes across to hostB until midnight Saturday, when our outage window occurs, so that we can switch the applications. Your comments lead me to believe that this is possible, but I've not seen it demonstrated in any documentation. How would we do this?

Tom Kyte
June 14, 2006 - 9:00 am UTC

yes, you can really apply the archive logs -- it is just like "restoring to a new host": you take the last backup and all the archives you have, and away you go.

It is just what you would do if:

a) you backup on friday night.
b) your production machine blows up sunday night.


The steps your DBAs would take to restore on Monday morning are the same steps you'll take here: restore the backup, apply all of the logs you have.

Archive Logs too Big in my database

Ravi, January 08, 2007 - 12:37 pm UTC

First of all, I hope you can read this.
I understand that archive logs are as big as the redo log files if the log switch occurs only once the redo log file is full (as opposed to after a manual "switch logfile", I mean).
Our archive logs are being created on average every 15 minutes, sized 75MB.
I understand that the redo log files store all changes made to the database as they occur -- including committed and non-committed transactions -- and that Oracle generates a redo record containing change vectors that describe changes to the data segment block for the table, the undo segment data block, and the transaction table of the undo segments.
My manager's argument is that 75MB is too big for an archive log file, and that it is impossible because we don't do that many transactions (generate that much redo) in 15 minutes.

The whole reason for this discussion: we send these archive log files to our DR site in a different state. Due to our smaller bandwidth, sending two 75MB archive log files every half hour is a big deal for the company, and it is slowing down performance significantly!

Is there any reason that I'm generating too much unnecessary redo that I'm not supposed to (for a reasonably static database)?

We're on 8.1.7.4 on hp-ux 11.00.
Tom Kyte
January 08, 2007 - 1:24 pm UTC

your manager is obviously wrong - why? because you have 75mb redo logs and they are generated every 15 minutes.

use log miner, see what is in there.
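
A sketch of such a log miner session (8i-style flat-file dictionary; file names are hypothetical, and utl_file_dir must permit writing the dictionary file):

-- once: extract a dictionary so object/column names resolve
exec dbms_logmnr_d.build('dict.ora', '/tmp');

-- load an archived log and mine it
exec dbms_logmnr.add_logfile('/arch/arch_1_12345.dbf', dbms_logmnr.new);
exec dbms_logmnr.start_logmnr(dictfilename => '/tmp/dict.ora');

select seg_owner, seg_name, count(*)
  from v$logmnr_contents
 group by seg_owner, seg_name
 order by 3 desc;

exec dbms_logmnr.end_logmnr;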

gzip the arch files before sending

Coen, January 09, 2007 - 7:54 am UTC

No comment on the size of the arch files.

More a question: do you initially write the arch files on the local machine, or do you send them directly to the DR site / let Oracle write directly to that location?

Could you implement the following:
1. write 'destination1' arch files to the local server.
2. generate a second destination of the arch files
(local server as well)
3. gzip the arch files on the second destination
4. send the zipped arch files
5. unzip on the DR site
6. apply the arch files

This would cut down the use of bandwidth, but there is an extra delay for the DR site. And . . . it is not all oracle but also some extra unix scripts.

Log Miner used, but....

Ravi, January 09, 2007 - 9:46 am UTC

I used log miner to see what the logs contain:

insert into "UNKNOWN"."Objn:1160843"("Col[1]","Col[2]","Col[3]","Col[4]","Col[5]","Col[6]","Col[7]","Col[8]","Col[9]","Col[10]","Col[11]","Col[12]") values (HEXTORAW('c33a0c55'),HEXTORAW('c11d'),HEXTORAW('54'),HEXTORAW('42'),NULL,HEXTORAW('3131'),HEXTORAW('313231'),HEXTORAW('3330303736'),HEXTORAW('3030'),NULL,HEXTORAW('524f5357454c4c'),HEXTORAW('46554c544f4e'));

update "UNKNOWN"."Objn:6363" set "Col[2]" = HEXTORAW('786b01080a3c07'), "Col[3]" = HEXTORAW('c105'), "Col[7]" = HEXTORAW('52'), "Col[8]" = HEXTORAW('52'), "Col[47]" = HEXTORAW('c34c3a40'), "Col[48]" = HEXTORAW('786b01080a3c07'), "Col[56]" = HEXTORAW('2f636f6c756d6269612f50524f442f7532322f6170706c6d67722f31312e302f6c6f6750524f442f6c32353532353437372e726571'), "Col[57]" = HEXTORAW('68706e34303434'), "Col[58]" = HEXTORAW('2f636f6c756d6269612f50524f442f7532322f6170706c6d67722f31312e302f6f757450524f442f6f32353532353437372e6f7574'), "Col[59]" = HEXTORAW('68706e34303434') where ROWID = 'AAAW8pAJwAAACH9AAN';

There are 247000+ rows in one archivelog file when log mined.

I don't recognize any of these statements. Maybe Oracle Financials is doing internal updates/inserts. How can this info be made useful?

Coen, I considered zipping the files and shipping them, but even a compressed 25MB file would take up a lot of bandwidth on our MPLS pipe/line.
Tom Kyte
January 11, 2007 - 9:28 am UTC

you didn't have the dictionary loaded - those are object numbers

select * from dba_objects where object_id in ( 1160843, 6363 );

contents of redo log file.

abz, February 12, 2007 - 5:34 am UTC

we only know that the contents of a redo log are "change vectors", but what actually are the contents of a redo file? are they human readable? are they a recording of changes to logical db structures, or to physical db structures?
Tom Kyte
February 12, 2007 - 10:36 am UTC

they are binary

if you want to read redo - use logminer - documented in the plsql supplied packages guide and the administrator guide.

datafiles

abz, February 14, 2007 - 11:56 pm UTC

Is the online redo log applied to the datafiles before it is archived when archivelog is ON? Or is it possible (in the normal case) that there are some entries in the archived redo log file which still must be written to the datafiles?

Tom Kyte
February 15, 2007 - 11:40 am UTC

no, redo is used to recover datafiles in the event of failure, it is not used during normal processing.

Before an online redo log can be reused in archive log mode

a) it must be archived AND
b) all of the blocks it protects must be written to disk.

Recovery Manager

trantuananh, January 25, 2008 - 5:49 am UTC

Hi Tom!
I have a question:

With RECOVERY WINDOW OF 7 DAYS, why does this strategy in the Oracle documentation say the following?

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1007.htm#sthref288

<quote>
In this scenario, the current time is January 30 and the point of recoverability is January 23. Note how the January 14 backup is not obsolete even though a more recent backup (January 28) exists in the recovery window. This situation occurs because restoring the January 28 backup does not enable you to recover to the earliest time in the window, January 23. To ensure recoverability to any point within the window, you must save the January 14 backup as well as all archived redo logs from log sequence 500 to 1150.
</quote>

The Oracle documentation's example is: there is a full backup from January 1st, with a recovery window of 7 days, and on the 23rd the database crashed. Why is the January 14 backup not obsolete? And why does "restoring the January 28 backup not enable you to recover to the earliest time in the window, January 23"?

Back to my strategy: I also have one full database backup with RECOVERY WINDOW OF 7 DAYS. The first backup, in the first week of January 2008, was created on Saturday the 5th. Now the date is 25/01/2008, so why:

RMAN> list backup of database;

starting full resync of recovery catalog
full resync complete

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
981 Full 15.24G DISK 00:12:51 05-JAN-08
BP Key: 983 Status: AVAILABLE Compressed: NO Tag: TAG20080122T123628

Piece Name: /u02/backupVNP/BackupVNP_DB_0lj6pu6c_21_1
List of Datafiles in backup set 981
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 18767412704 05-JAN-08 /u03/oradata/VNP/system01.dbf
2 Full 18767412704 05-JAN-08 /u03/oradata/VNP/undotbs01.dbf
3 Full 18767412704 05-JAN-08 /u03/oradata/VNP/sysaux01.dbf
4 Full 18767412704 05-JAN-08 /u03/oradata/VNP/users01.dbf
5 Full 18767412704 05-JAN-08 /u03/oradata/VNP/users02.dbf
6 Full 18767412704 05-JAN-08 /u03/oradata/VNP/users03.dbf
7 Full 18767412704 05-JAN-08 /u03/oradata/VNP/rman.dbf
8 Full 18767412704 05-JAN-08 /u03/oradata/VNP/undotbs2.dbf

RMAN> report obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to recovery window of 7 days
no obsolete backups found


Please clarify this for me!

Thank you very much!



Tom Kyte
January 25, 2008 - 9:12 am UTC

you want to be able to recover anywhere in the previous seven days.

You have currently:

a) today's database, jan 30th
b) a backup as of jan 28th
c) a backup as of jan 14th
d) a requirement to be able to restore the database as of any point in time between jan 24 and jan 30th (seven days)


if you erase the jan 14th backup, you would not be able to restore as of jan 24, 25, 26, 27 and part of the 28th.

so, therefore, in order to meet your requirements - it is just math.

Restore/Recovery to new server

Gareth Adcock, October 10, 2008 - 6:11 am UTC

I have a similar situation to the one described a few posts back (Applying Archive logs, June 14, 2006 - 8am US/Eastern, Reviewer: Shawn Brockway from Columbus, OH), in as much as I am trying to restore a database from one server to another using an rman online backup and then (once I have got this part to work) trying to apply archivelogs backed up via a standard non-rman directory backup.

This is occurring on an off-site annual disaster recovery trial. We take Saturday evening's tapes and try to restore and recover the system on new machines at a disaster recovery centre (HP-UX 11.11, Oracle 9.2.0.3.0).

This is the backup script that I run from HP Data Protector every evening.

run {
allocate channel 'dev_0' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=JDE,OB2BARLIST=JDE)';
allocate channel 'dev_1' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=JDE,OB2BARLIST=JDE)';
allocate channel 'dev_2' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=JDE,OB2BARLIST=JDE)';
allocate channel 'dev_3' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=JDE,OB2BARLIST=JDE)';
sql 'alter system switch logfile';
backup full filesperset 1
format 'New1<JDE_%s:%t:%p>.dbf'
database
include current controlfile
archivelog all
;
sql 'alter system switch logfile';
}


Later each night I take a cold backup of the recovery catalog database as well as the Oracle directory structure (/u01 and /u02, without the controlfiles, redo log files and datafiles; I include the archivelog files as a sort of 'belt and braces' policy).

When we get onsite, I restore /u01 and the partial backup of /u02 (including the archivelogs, which pre- and post-date both Friday's and Saturday's rman online backups).

These are the commands that I use from the command line to restore and recover the database.

$ rman
RMAN> connect target /
RMAN> connect catalog rman_db1/*****@rcatdb
RMAN> startup nomount
RMAN> list archivelog all;

I take the last sequence number and then add one so that I will recover up to and including the last log that rman knows about (at this point I am only interested in restoring the rman backup and do not specify a later archivelog file, which I have from restoring the later /u01 /u02 directory backup).

RMAN> run {
set until logseq <last sequence +1> thread 1;
allocate channel ch1 type 'SBT_TAPE';
restore controlfile;
alter database mount;
restore database;
recover database;
}

The problem is that Saturday evening's backup is ignored and an attempt is made to restore Friday evening's backup, with a subsequent recovery from the archivelog files (as if it were trying to recover to a point in time before the Saturday evening rman online backup).

This happened on last year's trial as well, and I logged it with Oracle support. We came to the conclusion that because I had not added one to the logfile sequence number, rman had attempted to restore up to but not including the last logfile, and thus had gone back to Friday and then rolled forward.

I have tried a similar procedure on a small Linux system back at the office, and the above logic seems to do what I want it to do, i.e. restore the latest database backup (the logical equivalent of the 'Saturday evening' backup) and then apply the last archive log (the one written during the actual backup itself).

My next option is to set up another database on the live server and be extremely careful which database I specify and where I point the restore, but I am trying to avoid this if possible in case of unfortunate accidents (I like to keep live stuff and testing completely separate). I don't profess to be an expert, far from it. I'm hoping that I've done something stupid, or that I've missed something really obvious. If I get this working, I'm assuming that I can specify a later archivelog sequence number, providing I have it and the previous files in place on disk.

addendum to previous post

Gareth Adcock, October 14, 2008 - 8:00 am UTC

Sorry but I have some more information regarding the previous post.

I think that I have found where I have been going wrong.

When I ran the following command during the disaster recovery trial:

RMAN> list archivelog all;

I got the following output (which I saved in my notes):

List of Archived Log Copies
Key     Thrd Seq     S Low Time  Name
------- ---- ------- - --------- ----
968579  1    37662   A 11-SEP-08 /u02/oradata/JDE/arch/arch_1_37662.dbf
969145  1    37663   A 11-SEP-08 /u02/oradata/JDE/arch/arch_1_37663.dbf
969146  1    37664   A 11-SEP-08 /u02/oradata/JDE/arch/arch_1_37664.dbf
969147  1    37665   A 11-SEP-08 /u02/oradata/JDE/arch/arch_1_37665.dbf
969148  1    37666   A 11-SEP-08 /u02/oradata/JDE/arch/arch_1_37666.dbf
969149  1    37667   A 11-SEP-08 /u02/oradata/JDE/arch/arch_1_37667.dbf
969150  1    37668   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37668.dbf
969151  1    37669   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37669.dbf
969152  1    37670   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37670.dbf
969153  1    37671   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37671.dbf
969154  1    37672   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37672.dbf
969155  1    37673   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37673.dbf
969156  1    37674   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37674.dbf
969157  1    37675   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37675.dbf
969158  1    37676   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37676.dbf
969159  1    37677   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37677.dbf
969160  1    37678   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37678.dbf
969161  1    37679   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37679.dbf
969162  1    37680   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37680.dbf
969163  1    37681   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37681.dbf
969164  1    37682   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37682.dbf
969165  1    37683   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37683.dbf
969166  1    37684   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37684.dbf
969167  1    37685   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37685.dbf
969168  1    37686   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37686.dbf
969169  1    37687   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37687.dbf
969170  1    37688   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37688.dbf
969171  1    37689   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37689.dbf
969172  1    37690   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37690.dbf
969176  1    37691   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37691.dbf
969179  1    37692   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37692.dbf
969538  1    37693   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37693.dbf
969539  1    37694   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37694.dbf
969540  1    37695   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37695.dbf
969541  1    37696   A 12-SEP-08 /u02/oradata/JDE/arch/arch_1_37696.dbf
969544  1    37697   A 13-SEP-08 /u02/oradata/JDE/arch/arch_1_37697.dbf

I looked at this and thought that, as 37697 was the last archivelog that RMAN knew about (when the catalog was shut down and backed up at 23:00), I would restore and recover to this point. (My primary objective at this stage was to get a database back; applying later archivelogs written after the Saturday evening online backup was a secondary consideration.)

I incremented the number 37697 to 37698 (on the set until clause) in order to restore and recover up to and including 37697.

I don't have many screen listings from the offsite trial because there was a lot of time pressure to get this working and my documentation of what I actually did was less than scientific.

I have, however,  just queried rc_backup_redolog on the live recovery catalog and found the following:

select sequence# seq#,
       to_char(completion_time,'dd-Mon-yyyy hh:mm:ss PM') compl_time,
       to_char(first_time,'dd-Mon-yyyy hh:mm:ss PM') first,
       to_char(next_time,'dd-Mon-yyyy hh:mm:ss PM') next,
       blocks
  from rc_backup_redolog
 where completion_time between '13-SEP-2008' and '14-SEP-2008'
 order by next_time
SQL> /

      SEQ# COMPL_TIME              FIRST                   NEXT                        BLOCKS
---------- ----------------------- ----------------------- ----------------------- ----------
     37662 13-Sep-2008 08:09:13 PM 11-Sep-2008 06:09:29 PM 11-Sep-2008 06:09:13 PM       9535
     37663 13-Sep-2008 08:09:37 PM 11-Sep-2008 06:09:13 PM 11-Sep-2008 08:09:22 PM      68615
     37664 13-Sep-2008 08:09:38 PM 11-Sep-2008 08:09:22 PM 11-Sep-2008 09:09:06 PM     204798
     37665 13-Sep-2008 08:09:12 PM 11-Sep-2008 09:09:06 PM 11-Sep-2008 10:09:22 PM     204798
     37666 13-Sep-2008 08:09:12 PM 11-Sep-2008 10:09:22 PM 11-Sep-2008 11:09:39 PM     204798
     37667 13-Sep-2008 08:09:44 PM 11-Sep-2008 11:09:39 PM 12-Sep-2008 12:09:01 AM     204798
     37668 13-Sep-2008 08:09:43 PM 12-Sep-2008 12:09:01 AM 12-Sep-2008 01:09:51 AM     204797
     37669 13-Sep-2008 08:09:22 PM 12-Sep-2008 01:09:51 AM 12-Sep-2008 01:09:33 AM     204798
     37670 13-Sep-2008 08:09:21 PM 12-Sep-2008 01:09:33 AM 12-Sep-2008 01:09:24 AM     204798
     37671 13-Sep-2008 08:09:23 PM 12-Sep-2008 01:09:24 AM 12-Sep-2008 01:09:31 AM     204798
     37672 13-Sep-2008 08:09:07 PM 12-Sep-2008 01:09:31 AM 12-Sep-2008 01:09:24 AM     204795
     37673 13-Sep-2008 08:09:07 PM 12-Sep-2008 01:09:24 AM 12-Sep-2008 01:09:54 AM     204793
     37674 13-Sep-2008 08:09:08 PM 12-Sep-2008 01:09:54 AM 12-Sep-2008 02:09:34 AM     204797
     37675 13-Sep-2008 08:09:07 PM 12-Sep-2008 02:09:34 AM 12-Sep-2008 02:09:41 AM     204798
     37676 13-Sep-2008 08:09:58 PM 12-Sep-2008 02:09:41 AM 12-Sep-2008 02:09:28 AM     204798
     37677 13-Sep-2008 08:09:58 PM 12-Sep-2008 02:09:28 AM 12-Sep-2008 03:09:09 AM     204798
     37678 13-Sep-2008 08:09:57 PM 12-Sep-2008 03:09:09 AM 12-Sep-2008 03:09:38 AM     204798
     37679 13-Sep-2008 08:09:58 PM 12-Sep-2008 03:09:38 AM 12-Sep-2008 03:09:25 AM     204798
     37680 13-Sep-2008 08:09:37 PM 12-Sep-2008 03:09:25 AM 12-Sep-2008 07:09:05 AM     204798
     37681 13-Sep-2008 08:09:39 PM 12-Sep-2008 07:09:05 AM 12-Sep-2008 08:09:08 AM     204794
     37682 13-Sep-2008 08:09:39 PM 12-Sep-2008 08:09:08 AM 12-Sep-2008 09:09:35 AM     204798
     37683 13-Sep-2008 08:09:39 PM 12-Sep-2008 09:09:35 AM 12-Sep-2008 10:09:33 AM     204798
     37684 13-Sep-2008 08:09:28 PM 12-Sep-2008 10:09:33 AM 12-Sep-2008 10:09:58 AM     204795
     37685 13-Sep-2008 08:09:27 PM 12-Sep-2008 10:09:58 AM 12-Sep-2008 12:09:42 PM     204798
     37686 13-Sep-2008 08:09:28 PM 12-Sep-2008 12:09:42 PM 12-Sep-2008 12:09:13 PM     204798
     37687 13-Sep-2008 08:09:28 PM 12-Sep-2008 12:09:13 PM 12-Sep-2008 01:09:23 PM     204798
     37688 13-Sep-2008 08:09:14 PM 12-Sep-2008 01:09:23 PM 12-Sep-2008 02:09:56 PM     204796
     37689 13-Sep-2008 08:09:14 PM 12-Sep-2008 02:09:56 PM 12-Sep-2008 03:09:36 PM     204797
     37690 13-Sep-2008 08:09:14 PM 12-Sep-2008 03:09:36 PM 12-Sep-2008 03:09:57 PM     204798
     37691 13-Sep-2008 08:09:13 PM 12-Sep-2008 03:09:57 PM 12-Sep-2008 04:09:12 PM     204798
     37692 13-Sep-2008 08:09:51 PM 12-Sep-2008 04:09:12 PM 12-Sep-2008 06:09:13 PM     194906
     37693 13-Sep-2008 08:09:49 PM 12-Sep-2008 06:09:13 PM 12-Sep-2008 08:09:01 PM      96756
     37694 13-Sep-2008 08:09:51 PM 12-Sep-2008 08:09:01 PM 12-Sep-2008 10:09:44 PM     204798
     37695 13-Sep-2008 08:09:51 PM 12-Sep-2008 10:09:44 PM 12-Sep-2008 10:09:39 PM     204793
     37696 13-Sep-2008 08:09:05 PM 12-Sep-2008 10:09:39 PM 13-Sep-2008 12:09:12 AM     204794

35 rows selected.

  

I think this means that the last complete archivelog file actually to be backed up by Saturday evening's backup (18:30, 13th September) was 37696, which was completed very early on Saturday morning.

Regardless of what RMAN actually backed up during the Saturday evening backup, it did seem to know about 37697 (see the previous LIST command), and this would also have been available to it on disk from the file system backup restore.

I extended the query on rc_backup_redolog to include Sunday evening's backup in order to get details on 37697 and 37698:



select sequence# seq#,
       to_char(completion_time,'dd-Mon-yyyy hh:mm:ss PM') compl_time,
       to_char(first_time,'dd-Mon-yyyy hh:mm:ss PM') first,
       to_char(next_time,'dd-Mon-yyyy hh:mm:ss PM') next,
       blocks
  from rc_backup_redolog
 where completion_time between '14-SEP-2008' and '15-SEP-2008'
 order by next_time

      SEQ# COMPL_TIME              FIRST                   NEXT                        BLOCKS
---------- ----------------------- ----------------------- ----------------------- ----------
     37662 14-Sep-2008 08:09:55 PM 11-Sep-2008 06:09:29 PM 11-Sep-2008 06:09:13 PM       9535
     37663 14-Sep-2008 08:09:15 PM 11-Sep-2008 06:09:13 PM 11-Sep-2008 08:09:22 PM      68615
     37664 14-Sep-2008 08:09:45 PM 11-Sep-2008 08:09:22 PM 11-Sep-2008 09:09:06 PM     204798
     37665 14-Sep-2008 08:09:44 PM 11-Sep-2008 09:09:06 PM 11-Sep-2008 10:09:22 PM     204798
     37666 14-Sep-2008 08:09:16 PM 11-Sep-2008 10:09:22 PM 11-Sep-2008 11:09:39 PM     204798
     37667 14-Sep-2008 08:09:17 PM 11-Sep-2008 11:09:39 PM 12-Sep-2008 12:09:01 AM     204798
     37668 14-Sep-2008 08:09:52 PM 12-Sep-2008 12:09:01 AM 12-Sep-2008 01:09:51 AM     204797
     37669 14-Sep-2008 08:09:53 PM 12-Sep-2008 01:09:51 AM 12-Sep-2008 01:09:33 AM     204798
     37670 14-Sep-2008 08:09:53 PM 12-Sep-2008 01:09:33 AM 12-Sep-2008 01:09:24 AM     204798
     37671 14-Sep-2008 08:09:27 PM 12-Sep-2008 01:09:24 AM 12-Sep-2008 01:09:31 AM     204798
     37672 14-Sep-2008 08:09:28 PM 12-Sep-2008 01:09:31 AM 12-Sep-2008 01:09:24 AM     204795
     37673 14-Sep-2008 08:09:27 PM 12-Sep-2008 01:09:24 AM 12-Sep-2008 01:09:54 AM     204793
     37674 14-Sep-2008 08:09:12 PM 12-Sep-2008 01:09:54 AM 12-Sep-2008 02:09:34 AM     204797
     37675 14-Sep-2008 08:09:12 PM 12-Sep-2008 02:09:34 AM 12-Sep-2008 02:09:41 AM     204798
     37676 14-Sep-2008 08:09:12 PM 12-Sep-2008 02:09:41 AM 12-Sep-2008 02:09:28 AM     204798
     37677 14-Sep-2008 08:09:13 PM 12-Sep-2008 02:09:28 AM 12-Sep-2008 03:09:09 AM     204798
     37678 14-Sep-2008 08:09:58 PM 12-Sep-2008 03:09:09 AM 12-Sep-2008 03:09:38 AM     204798
     37679 14-Sep-2008 08:09:58 PM 12-Sep-2008 03:09:38 AM 12-Sep-2008 03:09:25 AM     204798
     37680 14-Sep-2008 08:09:58 PM 12-Sep-2008 03:09:25 AM 12-Sep-2008 07:09:05 AM     204798
     37681 14-Sep-2008 08:09:58 PM 12-Sep-2008 07:09:05 AM 12-Sep-2008 08:09:08 AM     204794
     37682 14-Sep-2008 08:09:43 PM 12-Sep-2008 08:09:08 AM 12-Sep-2008 09:09:35 AM     204798
     37683 14-Sep-2008 08:09:43 PM 12-Sep-2008 09:09:35 AM 12-Sep-2008 10:09:33 AM     204798
     37684 14-Sep-2008 08:09:44 PM 12-Sep-2008 10:09:33 AM 12-Sep-2008 10:09:58 AM     204795
     37685 14-Sep-2008 08:09:43 PM 12-Sep-2008 10:09:58 AM 12-Sep-2008 12:09:42 PM     204798
     37686 14-Sep-2008 08:09:28 PM 12-Sep-2008 12:09:42 PM 12-Sep-2008 12:09:13 PM     204798
     37687 14-Sep-2008 08:09:28 PM 12-Sep-2008 12:09:13 PM 12-Sep-2008 01:09:23 PM     204798
     37688 14-Sep-2008 08:09:29 PM 12-Sep-2008 01:09:23 PM 12-Sep-2008 02:09:56 PM     204796
     37689 14-Sep-2008 08:09:28 PM 12-Sep-2008 02:09:56 PM 12-Sep-2008 03:09:36 PM     204797
     37690 14-Sep-2008 08:09:08 PM 12-Sep-2008 03:09:36 PM 12-Sep-2008 03:09:57 PM     204798
     37691 14-Sep-2008 08:09:08 PM 12-Sep-2008 03:09:57 PM 12-Sep-2008 04:09:12 PM     204798
     37692 14-Sep-2008 08:09:08 PM 12-Sep-2008 04:09:12 PM 12-Sep-2008 06:09:13 PM     194906
     37693 14-Sep-2008 08:09:06 PM 12-Sep-2008 06:09:13 PM 12-Sep-2008 08:09:01 PM      96756
     37694 14-Sep-2008 08:09:49 PM 12-Sep-2008 08:09:01 PM 12-Sep-2008 10:09:44 PM     204798
     37695 14-Sep-2008 08:09:49 PM 12-Sep-2008 10:09:44 PM 12-Sep-2008 10:09:39 PM     204793
     37696 14-Sep-2008 08:09:49 PM 12-Sep-2008 10:09:39 PM 13-Sep-2008 12:09:12 AM     204794
     37697 14-Sep-2008 08:09:49 PM 13-Sep-2008 12:09:12 AM 13-Sep-2008 06:09:15 PM     191972
     37698 14-Sep-2008 08:09:02 PM 13-Sep-2008 06:09:15 PM 13-Sep-2008 08:09:13 PM       3681

37 rows selected.

I can now see that 37697 was the archive logfile that was terminated by the 'alter system switch logfile' command at the start of Saturday's backup script, and that 37698 was the new file that was written to immediately afterwards.

I'm guessing that when I kicked off the restore and recovery, RMAN decided to recover up to and including 37697, but no further. This file was completed just before Saturday evening's backup, and so the restore went back to Friday. I'm also guessing that because RMAN knew of the existence of 37697 (even if it had not backed it up as of 23:00 Saturday night), it was able to recover until that point.

Regarding the previous two posts

Gareth Adcock, October 16, 2008 - 7:37 am UTC

Hi Tom,

I am really sorry about this, but I think that I have got to the bottom of it now. Sorry to have wasted your time. For anyone else reading this: please don't base your backup and recovery strategies on this information.

I think I have this sorted out now.

1) My backup script starts.

2) Despite the fact that a log switch takes place at the start of the script, RMAN only backs up the logs that had completed the archive process when the script started. This becomes apparent if I look at the latest record in rc_backup_redolog for the Saturday evening backup (13th September):

      SEQ# COMPL_TIME              FIRST                   NEXT                        BLOCKS
     37696 13-Sep-2008 08:09:05 PM 12-Sep-2008 10:09:39 PM 13-Sep-2008 12:09:12 AM     204794

(last of the 35 rows selected)

Sunday evening's backup picks up the logs written after the point at which the Saturday evening backup script started (37697 is current when the Saturday script starts and is switched immediately; 37698 is current during the backup and is switched when the backup finishes):
      SEQ# COMPL_TIME              FIRST                   NEXT                        BLOCKS
.......
.......
     37696 14-Sep-2008 08:09:49 PM 12-Sep-2008 10:09:39 PM 13-Sep-2008 12:09:12 AM     204794
     37697 14-Sep-2008 08:09:49 PM 13-Sep-2008 12:09:12 AM 13-Sep-2008 06:09:15 PM     191972
     37698 14-Sep-2008 08:09:02 PM 13-Sep-2008 06:09:15 PM 13-Sep-2008 08:09:13 PM       3681
.......
.......

3) A log switch at the start of the backup does not, in itself, update the catalog with any new information, although it does update the control file. However, something in the following commands updates the catalog tables with the details of the latest archivelog file (37697), even though that file will not be backed up by this backup:

backup full filesperset 1
format 'New1<JDE_%s:%t:%p>.dbf'
database
include current controlfile
archivelog all

4) The script runs its course, and the last thing it does is switch a logfile (37698 is switched), but this is done after the controlfile has been backed up. There will be no record of this file in the controlfile backup or the recovery catalog tables.

5) When I come to get the database back, I restore a cold backup of the recovery catalog as of 22:45 Saturday night. I start it up and go into RMAN. I connect to the target database and connect to the catalog database. Then I 'startup nomount'. If I 'list archivelog all' I get a list of all the archivelogs that RMAN knows about (I believe this command, in ordinary usage, will update the catalog tables with the latest information from the control file if they are not up to date). In any case, the tables must have been updated at some point during the backup, because I have not restored the controlfile and RMAN knows about 37697.

My mistake was to assume that I needed to restore and recover up to and including the last archivelog that RMAN knew about from the Saturday online backup. That, however, takes me to a point in time just before the backup, and hence a restore from Friday. I had treated the inclusion of the archivelogs (an optional extra) in the online backup as fundamental to recovery from that particular backup, when really they should be viewed as integral to a recovery from a previous backup, or as a convenient way to restore some archivelogs if any go missing. In a disaster recovery, because I do not have the online redo logs, I need to do an incomplete recovery. In order to do this I need to apply the archivelogs from the period of the backup itself and any that I have postdating it. I take a separate, later file system backup of the archivelogs when I back up the recovery catalog, so in future I will restore these to the relevant directory and then catalog them individually after I have restored the controlfile and mounted the database. I think an SCN-based recovery may be easier and less dependent upon a separate backup.
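
As a sketch of what I mean (the file names follow my archive destination shown earlier; the SCN is a hypothetical placeholder, not a real value from my system):

# register the archivelogs restored from the separate file system backup
catalog archivelog '/u02/oradata/JDE/arch/arch_1_37697.dbf';
catalog archivelog '/u02/oradata/JDE/arch/arch_1_37698.dbf';

run {
  # 1234567890 is a hypothetical SCN - substitute the real target SCN
  set until scn 1234567890;
  restore database;
  recover database;
}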

Hope you didn't waste too much time on this.

Gareth


Scofield, June 28, 2009 - 12:54 am UTC

Dear Tom; I would really appreciate it if you could clear up my doubts.

1) When I issue:
alter system switch logfile;
the status of the previous redo log becomes ACTIVE (dirty blocks are being written to the datafiles).
After this, when I issue:
alter system checkpoint;
the redo log immediately becomes INACTIVE (I don't have to wait).
What is the reason for this?

2) Sir, after the 'alter system switch logfile' statement, which occurs first?
First the checkpoint, and then the log is archived?
Or
First the log is archived, and then the checkpoint?

3) After I issue 'alter system switch logfile',
the current log is archived.
After the log is archived, is it cleared immediately, or is it cleared after it becomes CURRENT again?
And does Oracle clear it, or just overwrite it?

4) If the size of the redo log is 10m, the archivelog should be 10m.
However, this size varies.
Does it mean that Oracle doesn't archive the entire redo log, only the data in the redo log?
Is that correct?



Tom Kyte
July 06, 2009 - 5:24 pm UTC

1) think about it???

when the checkpoint is complete, the redo logs that protected the now checkpointed data are not needed for instance recovery anymore.

alter system switch logfile; <<== initiates a checkpoint to get all blocks protected by that redo log onto disk. This happens in the background, in a lazy fashion. If you just waited a bit, the redo log would change status all by itself.

alter system checkpoint; <<== starts a foreground checkpoint that runs "right now"; you will WAIT (you do not get your prompt back) until it completes. All redo logs not currently being written to (eg: all but the current one) will no longer be needed for instance recovery - everything they protect MUST BE on disk once this checkpoint completes. (There is a small sketch after point 4 demonstrating this.)


2) they are not sequential; things happen concurrently on computers, they happen simultaneously. One might finish before the other, but both are going on at the same time.


3) after the checkpoint has advanced to the point where the redo in a logfile is no longer needed for instance recovery, AND that redo log file has been archived - then we can advance back into that redo log file and reuse it. We do not have to 'clear' it; we just advance back into it and start overwriting the old, no longer needed, redo data.

4) that is not true (that if the online log is 10m, the archive should be 10m). It was never true, and in current releases it will rarely if ever be true. ARCH only copies what it needs to copy; it filters out extra junk that is not needed.
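
If you want to watch the statuses change for yourself, here is a minimal sketch to run as a DBA in a TEST database (statuses and timing will vary with your redo configuration):

select group#, sequence#, status from v$log;

alter system switch logfile;
-- the old group will typically show ACTIVE here: its checkpoint
-- is still happening lazily in the background
select group#, sequence#, status from v$log;

alter system checkpoint;
-- the foreground checkpoint has completed, so the old group
-- should now show INACTIVE
select group#, sequence#, status from v$log;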


A statement like this hurts the head :)

... If the size of the redo log is 10m, the archivelog should be 10m.
However, this size varies.
Does it mean that Oracle doesn't archive the entire redo log, only the
data in the redo log?
....

you write "if X, then Y. However, Y is not true (but X is true)."

You stated a theorem "if X, then Y" and then refuted it immediately. You already knew that "if X, then Y" is FALSE...

A reader, October 03, 2009 - 4:46 pm UTC

Hello Tom;

I know that redo logs are not used in a physical standby database.
So why do I see CLEARING_CURRENT or CLEARING in their status?
I don't know how to interpret their status.
Tom Kyte
October 08, 2009 - 6:14 am UTC

... I know that redo logs are not used in a physical standby database. ...

you know wrong, they are.

lgwr ships redo to the standby
the standby writes to the STANDBY ONLINE REDO LOGS
if real-time apply is on - the standby also reads from and then applies the ONLINE REDO LOGS
else - we wait until they are ARCHIVED


they go through the normal order of operations - just like in production.


same answer I gave to the exact same question you asked elsewhere on this site.

Scofield, November 28, 2009 - 11:08 pm UTC

Hi sir;
As far as I know, the controlfile only keeps track of the start SCN and end SCN of the redo logs (v$log_history).
In addition, the controlfile doesn't keep track of whole transactions (v$archived_log).

When I mount the database, I can query v$archived_log.
Since this view is not stored in the controlfile, how do I get the output?



Tom Kyte
November 29, 2009 - 8:58 am UTC

well, it is stored there.

a lot more than you say is stored in the control files.

quote src=Expert Oracle Database Architecture

...
Control Files

The control file is a fairly small file (it can grow up to 64MB or so in extreme cases) that contains a directory of the other files Oracle needs. The parameter file tells the instance where the control files are, and the control files tell the instance where the database and online redo log files are.

The control files also tell Oracle other things, such as information about checkpoints that have taken place, the name of the database (which should match the db_name parameter), the timestamp of the database as it was created, an archive redo log history (this can make a control file large in some cases), RMAN information, and so on.

Control files should be multiplexed either by hardware (RAID) or by Oracle when RAID or mirroring is not available. More than one copy of them should exist, and they should be stored on separate disks, to avoid losing them in the event you have a disk failure. It is not fatal to lose your control files—it just makes recovery that much harder.

Control files are something a developer will probably never have to actually deal with. To a DBA they are an important part of the database, but to a software developer they are not extremely relevant.
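
You can see that archive history sitting in the controlfile directly. A minimal sketch (both views are populated from the controlfile, which is why they work with the database only mounted):

-- how many archived log records the controlfile currently holds
select type, records_total, records_used
  from v$controlfile_record_section
 where type = 'ARCHIVED LOG';

-- the records themselves, readable in MOUNT state
select sequence#, first_change#, next_change#, completion_time
  from v$archived_log
 order by sequence#;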

Scofield, December 05, 2009 - 7:31 pm UTC

Thanks sir,
Every transaction has an SCN which is stored in the redo logs.
Does the controlfile also store this information?
(Does the controlfile also keep track of the SCN of each transaction?)
Tom Kyte
December 07, 2009 - 2:13 pm UTC

no, it does not, it does not have any need to.

A reader, December 11, 2009 - 6:39 pm UTC

I know that every transaction in the redo has an SCN.
The controlfile must be aware of this, because during media recovery it can stop at the latest SCN in the redo logs.
It can figure out: "hey, this is the last transaction in the redo, I had better stop recovery".

Since the controlfile is not aware of this info, how does it perform this action?



Tom Kyte
December 14, 2009 - 7:53 am UTC

... I know that every transaction in the redo has an SCN ....

no, not really. The SCN associated with a transaction is assigned upon commit; there are transactions in the redo that do not have a commit-time SCN associated with them.

... The controlfile must be aware of this, ...

no, it doesn't. Don't you think there might be a marker in the online redo log that says "this is the end, don't read past here, STOP"? Wouldn't that make more sense?


You said media recovery - you meant to say "crash or instance recovery". Media recovery involves backups, archives and a roll forward.


Let's do a mini crash recovery. Say you have two online redo log files. You just started up the database and we are starting at byte 1 of file 1 in the redo. We fill up redo log #1 and switch into #2. We get #2 50% full and we CRASH.


Now, you get into the database and start it up. We'll look around at the general state of things - control file information to find files, data file headers to see their state and then we'll open the redo logs. Now, we had switched from #1 to #2 and upon reading the control file - we discover that the checkpoint initiated by that switch had not yet completed. That means, we have to read and process log file #1 (if the checkpoint HAD completed, that would be the controlfile telling us "log #1 - not needed, skip it"). So, we open #1 and read it and process it - apply any changes that have not been applied (we can skip checking some changes altogether if the control file tells us "we checkpointed 'this far' and the change was from before that). Now we exhaust log file #1 and move onto log #2. We apply those changes and keep reading that file until......

we hit "stop here, this is where we left off - the next stuff in here is garbage".


and we are done. We now roll back any transactions (specifically, the ones in the redo log that *didn't* have an SCN :) ) that were not committed - but in progress - when we failed.

Database is completely recovered from that crash.

A reader, December 19, 2009 - 5:23 pm UTC

Thanks for clearing my doubts sir.

1-)
...Don't you think there might be a marker in the online redo log that says "this is the end, don't read past here, STOP

Sir, what is this marker like? Is it that, when a transaction commits, the marker gets the SCN of the transaction, so that this becomes the stop point?

2-)
...Now, we had switched from #1 to #2 and upon reading the control file - we discover that the checkpoint initiated by that switch had not yet completed.

Sir, how does Oracle know that the checkpoint initiated by that switch had not yet completed? Where does it check?
Is there a data dictionary view where I can see this info?
Tom Kyte
December 20, 2009 - 8:56 am UTC

1)
It is just data; the redo logs are data, a structured set of data, that we know how to read. It has nothing to do with commits - it is just the last set of data written (we write to redo constantly - you don't need to commit to write redo).

Consider what would happen if all other transactions were suspended for a minute - and you started an update that would run for an hour, and 50 minutes into it the instance crashed. When we start up, we read the redo and we KNOW where to stop - at the last written redo record.


2) we know the status of checkpoints - the datafile headers would be one location, for example; we know the state of the checkpoint against each file. Query v$datafile.
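
A minimal sketch of that comparison (both are documented views; the checkpoint SCNs agree when the files are consistent):

-- checkpoint SCN per datafile as recorded in the controlfile
select file#, checkpoint_change# from v$datafile;

-- checkpoint SCN as recorded in each datafile's own header
select file#, checkpoint_change# from v$datafile_header;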

mfz, January 26, 2010 - 12:47 pm UTC

In Metalink, I have a critical message for this system: "Incomplete Setting of Redo Log Files".
Please let me know what I should do to make it complete.

SQL> select * from v$version ;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
PL/SQL Release 11.1.0.7.0 - Production
CORE    11.1.0.7.0      Production
TNS for 64-bit Windows: Version 11.1.0.7.0 - Production
NLSRTL Version 11.1.0.7.0 - Production



SQL> select log_mode from v$database;

LOG_MODE
------------
ARCHIVELOG



SQL> show parameter log

NAME                                 TYPE        VALUE
------------------------------------ ----------- --------------------
commit_logging                       string
db_create_online_log_dest_1          string
db_create_online_log_dest_2          string
db_create_online_log_dest_3          string
db_create_online_log_dest_4          string
db_create_online_log_dest_5          string
enable_ddl_logging                   boolean     FALSE
log_archive_config                   string
log_archive_dest                     string
log_archive_dest_1                   string
log_archive_dest_10                  string
log_archive_dest_2                   string
log_archive_dest_3                   string
log_archive_dest_4                   string
log_archive_dest_5                   string
log_archive_dest_6                   string
log_archive_dest_7                   string
log_archive_dest_8                   string
log_archive_dest_9                   string
log_archive_dest_state_1             string      enable
log_archive_dest_state_10            string      enable
log_archive_dest_state_2             string      enable
log_archive_dest_state_3             string      enable
log_archive_dest_state_4             string      enable
log_archive_dest_state_5             string      enable
log_archive_dest_state_6             string      enable
log_archive_dest_state_7             string      enable
log_archive_dest_state_8             string      enable
log_archive_dest_state_9             string      enable
log_archive_duplex_dest              string
log_archive_format                   string      ARC%S_%R.%T
log_archive_local_first              boolean     TRUE
log_archive_max_processes            integer     4
log_archive_min_succeed_dest         integer     1
log_archive_start                    boolean     FALSE
log_archive_trace                    integer     0
log_buffer                           integer     68411392
log_checkpoint_interval              integer     0
log_checkpoint_timeout               integer     1800
log_checkpoints_to_alert             boolean     FALSE
log_file_name_convert                string
remote_login_passwordfile            string      EXCLUSIVE
sec_case_sensitive_logon             boolean     TRUE
sec_max_failed_login_attempts        integer     10




    GROUP# STATUS  TYPE       MEMBER                              IS_RECOVER
---------- ------- ---------- ----------------------------------- ----------
         3         ONLINE     D:\APP\ROOT\ORADATA\TEST\REDO03.LOG NO
         2         ONLINE     D:\APP\ROOT\ORADATA\TEST\REDO02.LOG NO
         1         ONLINE     D:\APP\ROOT\ORADATA\TEST\REDO01.LOG NO


Tom Kyte
January 29, 2010 - 2:46 pm UTC

In Metalink, I have a critical message for this system: "Incomplete Setting of Redo Log Files".

where in metalink does it talk about your database specifically?

A reader, January 30, 2010 - 12:05 am UTC

Hi Sir

I know that when I start the database, the SCN in the controlfile and the datafile headers should be the same.

I can compare the SCN of the datafiles from v$datafile_header and of the controlfile from v$datafile.

Just as an experiment,
I shut down the database, restored old redo logs, and created a controlfile with the noresetlogs option.
It fails with:
ORA-01229: data file 4 is inconsistent with logs.

How can Oracle tell whether the current online redo logs are consistent or not?
Is there a view that I can check?
Tom Kyte
February 01, 2010 - 9:28 am UTC

... I know that when I start the database the scn of controlfile,datafile headers
shoud be same.
...

no, it doesn't have to be that way; we'll look and perform crash recovery if we need to.

... How can oracle understand if the current online redologs are consistent or not?
Is there a view that I can check? ...

we can read them and see - the datafiles have their own set of data, and the controlfile you created knows nothing about the state of the datafiles (you just created it, after all!!!)

there need be no view; we told you that they are not in sync. The datafiles have information in them that the redo log files had not seen - we recognize that immediately.

Statistics

Mohannad, March 24, 2010 - 12:55 am UTC

Dears,
I am not a DBA. I need to know how to calculate or query the values of the following:
1- Database size in GB
2- Transaction log size in MB
3- Number of archived logs kept online
4- Read rate in MB/s
5- Write rate in MB/s
6- Block size in kb

Thanks.

Tom Kyte
March 24, 2010 - 5:59 am UTC

if you are not a DBA, why do you need these DBA statistics??

easiest path would be: ask your DBA, they'll have this.

archive log files differ in size

A reader, August 16, 2011 - 5:35 am UTC

As far as I know, a log switch occurs when the redo log file is full, or when logfiles are switched manually using the 'alter system switch logfile' command.

Why do my archivelogs differ in size even though I didn't manually issue a switch logfile command? My redo log files are 200mb.

-rw-r----- 2 oracle oinstall 196191744 2011-08-16 10:02 log_0000020754_1_696705195.arc
-rw-r----- 2 oracle oinstall 196122112 2011-08-16 11:15 log_0000020755_1_696705195.arc
-rw-r----- 2 oracle oinstall 196129792 2011-08-16 12:11 log_0000020756_1_696705195.arc

Tom Kyte
August 16, 2011 - 5:19 pm UTC

Because arch only archives relevant information - lgwr puts stuff in there that doesn't really need to be in the archives - so arch strips it out.


arch doesn't just copy the redo anymore (hasn't for a long time), it reads it, processes it and writes out only what it needs to.


logs can also switch in response to archive_lag_target
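
One way to see the effect for yourself - a minimal sketch comparing the online log size with the bytes actually archived (blocks * block_size gives the archived bytes):

-- size of the online redo logs
select group#, bytes from v$log;

-- bytes actually written to each archived log
select sequence#, blocks * block_size as archived_bytes
  from v$archived_log
 order by sequence#;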

archived log file size alteration

A reader, September 15, 2011 - 9:15 am UTC

Dear Tom,

Case 1) We planned to take scheduled backups of the archived logs generated on a Linux database server. We took the backup of those archived logs onto a Windows machine. But we observed a change in the size of those archived log files, and we couldn't recover with them after transporting them back to another Linux system (since the size had already changed). Can you please let me know why this happened? Is the reason an OS block size mismatch between Linux and Windows?
Case 2) We copied an RMAN backup (generated on a Linux machine) to a Windows machine, but unlike the archived log files, the size of the RMAN backup didn't change at all. We can even restore that RMAN backup from the Windows machine back to a Linux machine. Why is this possible? Are RMAN backup files OS independent?
Tom Kyte
September 16, 2011 - 1:13 pm UTC

1) tell me what you mean by a backup here - what tool did you use.

2) RMAN backups are OS dependent - but bits and bytes in a file are OS independent. You can move a file from OS1 to OS2 and back to OS1 and have the same data. Whatever you did to "backup" in step #1 is broken.

mfz, May 04, 2012 - 9:04 am UTC

Tom -

One of our production databases is in noarchivelog mode. There are no backups for this database. The database is around 400 GB. I am on 10.2.0.1 on Windows.
This database is around 400 G . I am on 10.2.0.1 on Windows .

I found this on my 2nd day in the new job assignment. The DBA who worked on this database is no longer with the company.
I don't want to turn on archive log mode yet, as there is only 180 GB on the server for backups / archived redo logs.


How do I estimate the space needed for archived redo logs? My boss told me to count the number of log switches; I am not sure I agree with that.

What is the best way to size for the archived redo logs / backups (considering I need 2 days of Level 0 backups)?

Tom Kyte
May 06, 2012 - 2:39 pm UTC

How do I estimate the space needed for archived redo logs? My boss told me to count the number of log switches; I am not sure I agree with that.


it will give you the lower bound, but not the upper bound. Turning on archive log mode will cause redo to be generated for things that do not generate redo today - like alter table t move, create table as select, create index, insert /*+ append */, and so on.


It would be a good first estimate.

And since you will be copying your archives somewhere else - you can monitor the space and release it as needed until you get a good handle on it. You do copy your archives over to somewhere else, right?
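
A minimal sketch of that first estimate (it assumes all online log groups are the same size, so switches per day times the log size approximates the daily archive volume - a lower bound, as noted above):

-- log switches per day over the last week
select trunc(first_time) day, count(*) switches
  from v$log_history
 where first_time > sysdate - 7
 group by trunc(first_time)
 order by day;

-- multiply the daily switch count by the online log size
select max(bytes) log_bytes from v$log;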

mfz, May 07, 2012 - 10:57 am UTC

Tom, up till now the database has not been in archive log mode. We are not archiving any logs yet.

I will be turning on archive log mode today, after getting storage from the SAN admin.
Tom Kyte
May 07, 2012 - 11:33 am UTC

I know, you wrote that.


What I'm suggesting is

a) your current redo generation would be a good approximation

b) when you do go to archive log mode - since YOU'LL BE COPYING THOSE ARCHIVES ELSEWHERE REALLY FAST - like you *always* do (right - hint hint - archives should be copied off soon, else you'll lose them when the machine blows up) - you don't need a lot of disk space; you'll just need to monitor it.

A reader, May 07, 2012 - 11:42 am UTC

Got it. Thanks very much.





archivelog file members

A reader, July 12, 2012 - 1:26 pm UTC

hi tom,

just a simple question which I am not able to find answered in the documentation.

q1) Archivelogs are actually "saved" redo logs.
Assuming that I have 3 log groups with 2 members in each group, when a log group turns inactive and archiving begins,

will ARCH archive only 1 member of the log group, or will the archivelog contain both members of the group? (I don't see the point of the latter.)

q2) In the parameter LOG_ARCHIVE_FORMAT, %t represents the thread, but what actually is a thread in the redo log? All I see is the same value: thread# -> 1.

Regards,
Noob
Tom Kyte
July 12, 2012 - 6:26 pm UTC

q1) you get one archived log file per destination. If you want more than one, you set up multiple destinations to archive to (see the sketch below).

q2) threads are only relevant in a clustered environment, where there is more than one instance generating its own thread of redo. Each instance has its own thread.
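
For q1, a minimal sketch of archiving to two destinations (the paths are illustrative only):

alter system set log_archive_dest_1 = 'LOCATION=/u02/oradata/JDE/arch'  scope=both;
alter system set log_archive_dest_2 = 'LOCATION=/u03/oradata/JDE/arch2' scope=both;
-- each log switch now produces one archived log file in each destination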

Generation of redologs too rapidly

A reader, July 18, 2012 - 4:10 am UTC

Hi Tom,

- I am on 10g Linux Standard Edition
- I am running a daily backup and syncing the backup and archive to a standby.

Recently, I noticed a spike in the amount of archive/redo log generated: within a 5 minute interval there is already a log switch.

-rw-r----- 1 oracle oinstall 147M Jul 18 15:29 log1_15478_777187746.arc
-rw-r----- 1 oracle oinstall 147M Jul 18 15:34 log1_15479_777187746.arc
-rw-r----- 1 oracle oinstall 147M Jul 18 15:39 log1_15480_777187746.arc
-rw-r----- 1 oracle oinstall 147M Jul 18 15:45 log1_15481_777187746.arc
-rw-r----- 1 oracle oinstall 147M Jul 18 15:50 log1_15482_777187746.arc
-rw-r----- 1 oracle oinstall 131M Jul 18 15:55 log1_15483_777187746.arc


-----------------------------------

Q1) How do I make sure that the spike is due to an increase in transactions, and not some fault/bug repetitively generating a lot of redo?

Am I able to see the top sessions or top activities that are generating the redo?

Q2) I am quite skeptical and worried about the frequency of redo log generation: it is exactly 5 minutes between logs, even late at night and in the wee hours of the morning, when transactions should be far fewer; yet the size of the redo and the frequency at which it is generated stay the same.

It seems as though some "automated" amount of transactions is generated every 5 minutes, filling up the redo logs.

-------------------------

Q3) Is there any way I can keep the amount of redo log generation lower?

By having more frequent backups?
By having incremental backups?

Hope to hear your advice.
Regards,
Puzzled Noob over the 150mb redolog per 5 min ;/






Tom Kyte
July 18, 2012 - 10:09 am UTC

q1) use log miner, see what is in them...

you could only see the top session if it is still connected. You may be able to use ASH to identify top sessions over a period of time, even if they are not connected, if you have access to that.

q3) it depends on what you are doing, on what is generating the redo. you'll have to find that out first.

backing up will have no impact on this (in fact, if you are not using rman, you might find that backing up frequently generates MORE redo since we log more information when you issue alter database/tablespace begin backup)
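
A minimal sketch of the Log Miner approach (the file name follows the listing above; the path is a placeholder, and using the online catalog as the dictionary assumes you mine on the source database):

begin
  dbms_logmnr.add_logfile(
    logfilename => '/path/to/log1_15478_777187746.arc',
    options     => dbms_logmnr.new);
  dbms_logmnr.start_logmnr(
    options => dbms_logmnr.dict_from_online_catalog);
end;
/

-- what is filling the redo?
select seg_owner, seg_name, operation, count(*)
  from v$logmnr_contents
 group by seg_owner, seg_name, operation
 order by count(*) desc;

exec dbms_logmnr.end_logmnr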


Generation of redologs too rapidly

A reader, July 18, 2012 - 2:43 pm UTC

Hi Tom,

Thanks for the reply.
I will use Log Miner to see what the content is and feed back asap.

-------

However, I am not sure why you said that having backups will generate more redo.

I am using RMAN. I was just thinking that, with more frequent or incremental backups, the changes are already backed up on disk and I would not need any redo to rebuild them.

current day
-01:00 - backup
-02:00 - redologs
-03:00 - redologs
-04:00 - backup again (delete obsolete)

I do not need the archivelogs from 01:00 to 04:00 anymore, do I?
(Of course, it again depends on my backup retention policy.)

Regards,
Noob
Tom Kyte
July 18, 2012 - 3:31 pm UTC

you have still generated the same amount of redo. I don't care how often you back up - you'll still generate the SAME AMOUNT of redo. It won't reduce your redo generation by a single byte.

you'll still be generating a redo log every 5 minutes, as you are now. Backing up cannot affect that (except to make it MORE often, if you don't use RMAN).

Generation of redologs too rapidly

A reader, July 18, 2012 - 2:45 pm UTC

hi tom,

Sorry, I missed out a question.

"you could only see the top session if it is still connected"

Can I check with you: what event or statistic should I be looking at for redo log generation?

(redolog sync?)

Regards,
Noob
Tom Kyte
July 18, 2012 - 3:32 pm UTC

v$sesstat would have the redo size for each session.
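
A minimal sketch of that query (v$sesstat joined to v$statname for the 'redo size' statistic):

select s.sid, s.value redo_bytes
  from v$sesstat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size'
 order by s.value desc;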

Generation of redologs too rapidly

A reader, July 19, 2012 - 5:15 am UTC

hi tom,

Problem solved.
I used Log Miner and looked into the redo logs: they were full of updates setting a column to the value it already had.

I asked the engineers to check the value first and, if it is the same, not to bother updating it.

Thanks a lot.
Regards,
Noob
