Question and Answer

Tom Kyte

Thanks for the question, chng.

Asked: August 22, 2000 - 11:55 pm UTC

Last updated: April 16, 2012 - 3:55 pm UTC

Version: 7.3.4

Viewed 10K+ times
You Asked

My database redo log size is 300MB and my redo log buffer is set
to 100MB (1/3 of the redo log file). My redo logs switch every 20 minutes.

Does Oracle make use of the 100MB of memory, or just 1MB? And what is a
good size for the redo log buffer?


Thanks
RC
23/08/2000

and Tom said...

The redo log buffer is flushed

o every 3 seconds
o when 1/3 full
o when you commit.

It is doubtful that you are using 100meg of redo log buffer. In order to do so, you would have to have enough concurrent transactions to generate 300meg of redo without any of them committing in under 3 seconds.

Since your online redo logs of 300 meg switch every 20 minutes -- this is not the case.

In my opinion, your redo log buffers are much too large; you are only wasting 99 to 99.5 meg of RAM on them. I would prefer to see them in the 1/4-1/2 meg range myself, to keep a steadier flow of data being written to the online redo log (over-buffering data is just as bad as not caching at all in many cases).
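To see what you actually have configured, and whether sessions ever wait for space in the buffer, a quick sanity check along these lines (a sketch against the standard v$parameter and v$sysstat views) would tell you:

-- current log buffer size, in KB
select value/1024 log_buffer_kb
  from v$parameter
 where name = 'log_buffer';

-- have sessions had to wait for buffer space?
select name, value
  from v$sysstat
 where name in ('redo buffer allocation retries', 'redo entries');

If retries stay near zero relative to redo entries, the buffer is already plenty big.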


If you are trying to get your redo logs to switch less frequently (although a steady state at 20 minutes is perfection), you would have to either

o reduce the number of transactions you execute or change the amount of data they affect

o increase the size of your online redo log.
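To see how often you are actually switching, a sketch like this against v$log_history (one row per log switch) shows switches per hour:

select trunc(first_time, 'HH24') hour, count(*) switches
  from v$log_history
 group by trunc(first_time, 'HH24')
 order by 1;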




Comments

Redo log buffer and redo log resize

NP, July 05, 2002 - 1:12 am UTC

Hi Tom
Great help
Thanks,
Our database (9.0.1.0.1) is running in noarchivelog mode.
Its redo log buffer size is 520 KB (532480 bytes) and the redo log size is 100 MB, the default from installation.
Now we want to switch it to archivelog mode with a reduced redo log size of either 1-4 MB.
How do we proceed? Is it to delete all the redo groups and recreate them at 1 or 2 MB? Any special procedure or precautions?
Is there any need to change the size of the redo buffer from 520 KB as well, or is it OK?
Thanks

Tom Kyte
July 05, 2002 - 10:04 am UTC

changing the log buffer is a simple init.ora change, that can be done anytime you have scheduled a restart of the db.

The redo log size change (but why go to 1-4 meg? that is pretty small) can be done anytime. Just add N new logs of the size you want, and then drop the old logs you don't want anymore. You might have to do an alter system switch logfile to advance the logs off of the old ones, but that's it.
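In SQL, that add-then-drop approach would look something like this sketch (the group numbers, sizes, and file names here are placeholders, not your actual ones):

alter database add logfile group 4 ('/u01/oradata/redo04.log') size 100m;
alter database add logfile group 5 ('/u01/oradata/redo05.log') size 100m;
alter database add logfile group 6 ('/u01/oradata/redo06.log') size 100m;

alter system switch logfile;
alter system checkpoint;

-- drop an old group only after v$log shows it INACTIVE
alter database drop logfile group 1;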

Redo log file size

NP, July 05, 2002 - 11:00 pm UTC

Hi Tom.
Thanks,
In my previous question you added:
"The redo log size change (but why go to 1-4 meg? that is pretty small) can be
done anytime."

Is 1-4 MB not an ideal size for a moderately sized database
with high inserts and low updates? Is there any performance problem?
Could you please suggest a fair enough redo log size,
and also how to know how frequently the logs are switching?
Thanks
NP


Tom Kyte
July 06, 2002 - 7:55 am UTC

size your logs so they switch no more frequently than you would like them to during peak. Many people like at least 5 minutes between switches.
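If you want to measure that, something along these lines (the lag analytic over v$log_history) shows the minutes between consecutive switches:

select first_time,
       round((first_time - lag(first_time) over (order by first_time)) * 24 * 60, 1) mins_since_prior
  from v$log_history
 order by first_time;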

Resizing Redo Log

NP, July 12, 2002 - 3:45 am UTC

Hi Tom
Thanks for the previous help.
Recently we switched our database to archivelog mode and resized the redo logs from 100 MB to 10 MB, and now to 1 MB (presently), with three redo groups mirrored on 2 disks.

Our production environment operates under uneven load, 24x7:
heavy in the morning (8 to 10 am) and afternoon (2 to 4 pm);
during these periods it generates 20 MB of redo (40 MB/day total).
The exact archival timings (redo size 10 MB):
date       time
10/07/02   4.04 pm
11/07/02   1.40 am
11/07/02   9.45 am
11/07/02   11.14 am
11/07/02   2.56 pm
Now my question is whether creating a larger redo log (10 MB to 100 MB) is worthwhile in this case or not.
Was shifting to the 1 MB size a correct decision?
In the case of crash/recovery, and for maintaining a standby database, is a smaller redo size worthwhile under uneven load?
Thanks

Tom Kyte
July 12, 2002 - 8:14 am UTC

depends on your version. If you have the fast_start_io_target or fast_start_mttr_target init.ora parameters available, you don't need to use small redo size to limit recovery time.
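For example (assuming a release that has fast_start_mttr_target), capping estimated crash recovery at about five minutes and then watching the estimate would look like:

alter system set fast_start_mttr_target = 300;

select target_mttr, estimated_mttr
  from v$instance_recovery;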



re: redo log buffer

A reader, June 28, 2003 - 6:11 pm UTC

Tom,

Your answer
>>The redo log buffer is flushed
>>
>>o every 3 seconds
>>o when 1/3 full
>>o when you commit.

Say the contents of the redo log buffer are flushed to the redo log files for the following 2 reasons:
o every 3 seconds
o when 1/3 full
- not because of a commit.

After the above scenario (flushing the redo log buffer to the redo log files) happens a couple of times and there has been no commit,
we have uncommitted data in the redo log files, and when a log switch occurs (since the redo log file is full), the contents of the db buffer are written to the db.
a) If we do a rollback at this point, what happens?
b) If we don't do a rollback and continue with the changes, and all the redo log files fill up, what would happen?
c) At what point are the redo logs copied to the archive logs?
d) Since redo logs could contain uncommitted data, will the archive logs also have uncommitted data?

Thanks in advance for clearing my confusion.


Tom Kyte
June 28, 2003 - 8:38 pm UTC

a) we read the rollback data and undo the changes you made to the blocks on disk.

b) we don't need the redo for checkpointed (on disk) blocks, so after the checkpoint completes, we'll just advance into the log file and continue. You can easily run a transaction that generates 1gig of redo with 10m of online redo log -- we'll just keep on rolling rolling rolling.

c) when we switch out of it to the next (assuming automatic archiving is enabled, of course)

d) sure, nothing wrong with that. redo is redo -- not really "uncommitted changes" but a history of what happened, a replayable history.

Good

R.Chacravarthi, June 29, 2003 - 1:25 am UTC

As you said, redo is a replayable history, but the most apt term
would be replayable restricted log history. Committed
transactions can be redone, but uncommitted transactions can
only be rolled back (the basic principle of ARIES/RRH, which means Algorithms for Recovery and Isolation Exploiting Semantics/Repeating Restricted History).
Thanks
R.Chacravarthi

Tom Kyte
June 29, 2003 - 9:21 am UTC


Uncommitted transactions are both redone AND undone by the way. If you mean "hey, you could theoretically skip some of the redo" -- it is not possible in reality in Oracle (or in a real world database really) as you would not be able to flush dirty blocks to disk before commit -- meaning the buffer cache would be a constraint on the size of transactions, the number of concurrent transactions. The redo logs would be as well.

for you see, if you flush some of the dirty blocks, we must now be able to undo the changes to them. In order to undo the changes, we need the undo data but that as well is protected by redo -- meaning we need to apply the redo to get the undo -- catch 22.

re: redo log buffer

A reader, June 29, 2003 - 10:32 am UTC

For the answer to the reader 'A reader from Houston'

The redo logs are filled up and then copied over as archived redo logs; since redo logs have uncommitted data, this uncommitted data goes into the archived redo logs. Is that correct?
The uncommitted data present in the archived redo logs is copied over to a standby database. If we log on to the standby database, wouldn't it be showing uncommitted data? What am I missing here?


Tom Kyte
June 29, 2003 - 10:51 am UTC

a physical standby is either

a) in managed recovery mode (you are not reading it)
b) opened read only (it is "recovered" to a point in time, you see only committed work)

a logical standby uses SQL apply --meaning the redo is turned into insert/update/deletes and is not visible to you until you COMMIT.

re: redo log buffer

A reader, June 29, 2003 - 11:17 am UTC

Thanks a lot.

There is so much activity on this site even on a weekend. Don't you take a break?

Redo logs and DDL'S

A reader, November 18, 2003 - 8:28 am UTC

Tom,
Are DDL changes made to the redo logs? Is redo and undo maintained for DDL changes?

Tom Kyte
November 21, 2003 - 7:49 am UTC

DDL just causes recursive sql to happen (inserts/updates/deletes to the real data dictionary)

so yes, ddl makes changes to the redo stream since the recursive sql is just sql and generates redo as normal

you can also enable supplemental logging (in current versions) to have the actual DDL stored in the redo stream (used by standby databases and such).
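A minimal sketch of enabling that at the database level, and verifying it:

alter database add supplemental log data;

select supplemental_log_data_min from v$database;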

Redo logs and DDLs

A reader, November 21, 2003 - 8:26 am UTC


Switching

Invisible, February 13, 2004 - 10:28 am UTC

Since it appears to be impossible to post a question directly, I'll try here. (Been waiting 2 weeks now - still, there's only one Tom!)

We have a database used for some app we didn't develop. Database was set up before I joined the company. According to the DB person who set it up, in Oracle online redo log files always have to be 1MB in size. (I now know this is false.)

I read somewhere that you should aim to have 1 log switch every 15 mins roughly. (Obviously it varies.) Do you agree with this?

To begin with, we had about 1 log switch per month. Then it went up to 1 per week. Then it went up to as much as 4 per day.

It's still holding at about 4 per day - except that, now and then, I see little "bursts" of activity. Sometimes it appears that up to 5 log switches have occurred in less than 1 second!

Is that even physically possible? (We only have 4 online redo logs - ARCHIVELOG mode of course. ;-)

Speaking to app users (I don't even know what the app is supposed to do!), it appears they gradually fill up the database, but when a "project" is completed, it gets taken out of the database and put into our archives on CD.

I *imagine* this "taking it out of the database" is producing large amounts of redo fairly fast. If the DB went down half way through, Oracle would have to "undelete" all this stuff to get to a consistent state...

Questions:
1. Should I make the online redo logs larger?
2. How much larger?
3. Do I have to switch off the database to do this?
4. How (if at all) will this affect database recovery? (Recommendation from my Oracle book is to take a cold backup any time you do something major to the database.)

Also have questions about hot vs. cold backups, but I'll try to find a more appropriate thread to tack those onto...


Tom Kyte
February 13, 2004 - 10:57 am UTC

15 is as good as 30 is as good as 60 is as good as 5. pick a number, any number.

yes, 5 is possible in a short period of time, even with 4 -- depends on what we have to checkpoint (could be you updated a single block A LOT, generating gobs of redo but very very little work for dbwr)


1) cannot answer that. are you suffering from "checkpoint not complete, cannot allocate new log"?
2) see #1
3) no
4) you won't need to backup since you'll just be adding larger logs or more logs, and you never never never backup online redo!
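(For #1 - besides watching the alert log for that message, a rough check is whether older groups are still ACTIVE when you go to reuse them, something like:

select group#, sequence#, status from v$log order by sequence#;

several groups stuck at ACTIVE around switch time means checkpointing is lagging behind redo generation.)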



Thanks

Invisible, February 13, 2004 - 11:33 am UTC

Well, that's what I call a fast response...

We're not having any particular issues that I know about, other than that (my magic book says) reducing the number of log switches might make things go faster. (Not having tried it, I can't really comment.)

What is there to checkpoint? Well, since the peak activity is (presumably) one big delete, I would imagine it's accessing each block only once... (I won't pretend to have a deep understanding of how Oracle works at the block level.)

Since the problem (if it is one!) is rapid log switches (rather than actually running out of online redo), I presume the thing to do is make the logs bigger rather than allocate more of them...

No need to backup? Yup, I like that idea... It's just I'm told certain events are hard to recover through. (e.g., reset logs is supposedly a *very* hard one. Not something I'm hoping to have to do, like, EVER.)

Just curious... the archived logs are sometimes smaller than the online ones. Does the archiver actually "know" how much real data is in there? I had assumed it just does an OS file copy...


Tom Kyte
February 13, 2004 - 11:39 am UTC

if the archives are smaller, that is usually due to

a) someone doing a switch logfile command
b) using the archive lag init.ora parameter to force log switches on a recurring basis (9i)
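(That parameter is archive_lag_target; setting it is just, for example:

alter system set archive_lag_target = 900;  -- force a switch at least every 15 minutes

the value is in seconds.)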

"redo log buffer", version 7.3.4

VKOUL, February 13, 2004 - 1:29 pm UTC

But I have seen archived log files of different sizes without anyone doing a switch logfile command.

Could it be checkpointing?

Tom Kyte
February 13, 2004 - 2:50 pm UTC

log switches cause checkpoints.

checkpoints do not cause log switches.

Nice

Carl, February 15, 2004 - 8:23 am UTC

Dear Tom,
What does the wait event "log file sync" deal with?
Please do reply.
Thanks.


Tom Kyte
February 15, 2004 - 11:50 am UTC

it is the wait event a client process gets when they issue "commit" and must wait for LGWR to finish the commit event.

Generally occurs with people committing in a tight for loop, in the mistaken thought that committing frequently "is a good thing (tm)". It is a "bad" thing.
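A sketch of that anti-pattern versus the alternative, with t as a hypothetical one-column numeric table:

begin
   for i in 1 .. 10000 loop
      insert into t values (i);
      commit;  -- 10,000 log file sync waits
   end loop;
end;
/

begin
   for i in 1 .. 10000 loop
      insert into t values (i);
   end loop;
   commit;     -- one wait, when the transaction is logically complete
end;
/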

roll foward, roll back, redo, undo

Louis Mallette, April 22, 2008 - 9:02 am UTC

A) The last thing the redo logs contain is uncommitted data.
B) Instance fails
C) Startup
D) Roll forward: replay the uncommitted data
E) Roll back the uncommitted data
F) Everything is OK
H) We have a checkpoint after 3 sec

WE ASK: after H), what is in the redo log files?
The uncommitted data only, again,
or
the uncommitted data + E) the undo info generated in the redo log buffer and transferred to the redo log files?

tx
Tom Kyte
April 23, 2008 - 5:50 pm UTC

rolling back generates redo, the redo log files would have much stuff in them after D)

generate redo

Louis Mallette, April 24, 2008 - 8:51 am UTC

A) The last thing the redo logs contain is uncommitted data.
B) Instance fails
C) Startup
D) Roll forward: replay the uncommitted data
E) Roll back the uncommitted data
F) Everything is OK
H) We have a checkpoint after 3 sec

Let's say the uncommitted info is an update.

You have good answers in your books showing that an update generates more redo than an insert.

Can you tell us more technical information about the redo generated in each step?

After D,
After E,
After H,

1) Explain more about the redo generated in the redo log buffer.

2) Explain more about the redo generated in the redo log file.

What will this information contain: pointers, old values, new values, the update instruction...?
Tom Kyte
April 28, 2008 - 11:48 am UTC

rolling forward is a read of the redo, reapply changes to blocks. (d) - we've just read the redo really.

rolling back is a logical operation, to rollback an insert, we delete the row, to rollback a delete - we insert the row - to rollback an update - we un-update it. (e) - function of what we had to roll back. Same redo that would be generated when you type "rollback" in an active session basically.

(h) - doesn't really generate much of anything in the redo stream, other than to say "hey, we are doing this checkpoint thing"


1) don't understand what you mean. the redo log buffer buffers redo before we write it to disk. everything that ends up in the logs pretty much started in the log buffer

2) we know everything we need to know - it is binary information sufficient to replay/redo our transaction. It is a change vector that says "if you change these bytes on the block to these values, we'll be good"
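You can watch the size of that change-vector stream for your own session - a sketch using the standard v$mystat/v$statname views:

select m.value redo_bytes
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name = 'redo size';

-- do some DML, re-run the query; the delta is the redo your session generated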




Archive log

Aneej, April 25, 2008 - 10:35 am UTC

Hi Tom,
I have entered my archive log parameters as follows in the init.ora file
/*
log_archive_start=true
log_archive_dest="C:\oracle\oradata\ADMIN\archive"
log_archive_format=arch%t%s%u.arc
*/

But when I query my archive log list I get the following:
/*
log_archive_start=true
log_archive_dest="C:\oracle\oradata\ADMIN\archive"
log_archive_format=arch%t%s%u.arc
*/

Tom Kyte
April 28, 2008 - 12:41 pm UTC

I don't know what you mean by this.

undo / redo

Louis Mallette, April 28, 2008 - 3:55 pm UTC

Tx for your answers

----------------------------------------------
If, like you said,
'Same redo that would be generated when you type rollback',
here is what we understand about this process.
Is it right?
----------------------------------------------

The values before the update:

select tk.val
from tom_kyte tk
where tk.id = 1

tk.val is equal to 'LAST SCN COMMIT'

commit;

A) The last thing the redo logs contain is uncommitted data (an update):

update tom_kyte tk
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

B) Instance fails
C) Startup
D) Roll forward: replay the uncommitted data
E) Roll back the uncommitted data
F) Everything is OK
H) We have a checkpoint after 3 sec

After step A)

Redo log buffer: we have sufficient info to replay the update, so we will have something like this:

update tom_kyte tk -- from before the instance failure
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

Redo log file: we have sufficient info to replay the update, so we will have something like this:

update tom_kyte tk -- from before the instance failure
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

Undo tablespace: we have sufficient info to roll back the update:

update tom_kyte tk -- from before the instance failure
set tk.val = 'LAST SCN COMMIT'
where tk.id = 1

AFTER STEP C) STARTUP

DATABASE BUFFER CACHE: has nothing
UNDO tablespace: has nothing
REDO LOG BUFFER: has nothing
REDO LOG FILES: have sufficient info to replay the update:

update tom_kyte tk -- from before the instance failure
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

AFTER STEP D) ROLL FORWARD

DATABASE BUFFER CACHE has the generated info:

TK.VAL = 'UNCOMMIT DATA'

REDO LOG BUFFER has the generated info:

update tom_kyte tk -- created in the roll forward step
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

REDO LOG FILES have the same sufficient info to replay:

update tom_kyte tk -- from before the instance failure
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

UNDO tablespace has the generated info:

update tom_kyte tk -- created in the roll forward step
set tk.val = 'LAST SCN COMMIT'
where tk.id = 1

AFTER STEP E) ROLL BACK

DATABASE BUFFER CACHE has info about:

TK.VAL = 'LAST SCN COMMIT'

REDO LOG BUFFER has the generated info:

update tom_kyte tk -- created in the roll forward step
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

update tom_kyte tk -- created in the roll back step
set tk.val = 'LAST SCN COMMIT'
where tk.id = 1

REDO LOG FILES have the same sufficient info to replay:

update tom_kyte tk
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

UNDO tablespace has info about:

update tom_kyte tk -- created in the roll forward step
set tk.val = 'LAST SCN COMMIT'
where tk.id = 1

AFTER F) H) EVERYTHING IS OK

DATABASE BUFFER CACHE: DBW0 has cleaned it with the LRU algorithm
UNDO tablespace: has nothing ---> we have reused the space
DATAFILE: has TK.VAL = 'LAST SCN COMMIT'
REDO LOG BUFFER: has nothing ---> LGWR has flushed it to the redo log files

REDO LOG FILES have...

update tom_kyte tk -- from before the instance failure
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

+ the new redo log buffer info generated by roll forward and roll back:

update tom_kyte tk -- created in the roll forward step
set tk.val = 'UNCOMMIT DATA'
where tk.id = 1

update tom_kyte tk -- created in the roll back step
set tk.val = 'LAST SCN COMMIT'
where tk.id = 1

IS THIS CORRECT?
Tom Kyte
April 29, 2008 - 8:25 am UTC

this is what we need to understand:

when we perform crash recovery - we read the redo logs. This redo is applied to the database blocks in the datafiles. We will roll forward all transactions bringing the database to the point it was at when the system failed (there will be open uncommitted transactions and closed committed transactions - their blocks will be as they were at the time of the crash - including their UNDO BLOCKS)

Now we roll back.

That is really all there is to it - it doesn't take pages and pages - we apply redo, get the blocks back the way they were when the system failed (including undo blocks - that is the trick, the key thing to know), and then rollback anything that was not committed.

and a rollback generates redo - so, even things that were rolling back when we crashed will be restored to that point - and then finish.

undo / redo

A reader, April 29, 2008 - 11:13 am UTC

I tried to apply your answers, and 2 situations are again on the board.

Which is the correct one in I, J?

A) The last thing the redo logs contain is uncommitted data.
B) Instance fails
C) Startup
D) Roll forward: replay the uncommitted data
E) Roll back the uncommitted data
F) Everything is OK

G) Instance fails again
H) Startup
I) Roll forward: do we have to replay the same uncommitted data again, or do we have nothing to do?
J) Roll back: do we have to roll back the same uncommitted data again, or do we have nothing to do?
K) Everything is OK
Tom Kyte
April 29, 2008 - 11:29 am UTC

I) cannot tell you - did we checkpoint the blocks after E
J) same as I


we use redo to make the blocks in the datafiles consistent with each other - depends on the state of the datafiles.

undo / redo

Louis Mallette, April 29, 2008 - 1:11 pm UTC

I tried to apply your answers, and 2 situations are again on the board.

Which is the correct one in I, J?

A) The last thing the redo logs contain is uncommitted data.
B) Instance fails
C) Startup
D) Roll forward: replay the uncommitted data
E) Roll back the uncommitted data
F) Everything is OK
We have a checkpoint after 3 sec


G) Instance fails again
H) Startup
I) Roll forward: do we have to replay the same uncommitted data again, or do we have nothing to do?
J) Roll back: do we have to roll back the same uncommitted data again, or do we have nothing to do?
K) Everything is OK



Followup April 29, 2008 - 11am US/Eastern:

I) cannot tell you - did we checkpoint the blocks after E
J) same as I

Yes, a checkpoint after E.

tx

Tom Kyte
April 30, 2008 - 8:19 am UTC

if the blocks were checkpointed, they are not in need of having that redo applied again obviously.

We apply redo that needs to be applied. That is all - think about the state of the block, what redo does that block in the file *need*, that is how far back we need to go in the online redo logs - if that block was recovered during instance recovery AND THEN written to the datafile - of course it would not need nor could it have that redo applied *again*. If that block was recovered during instance recovery but not written to the datafile - of course it would need that redo once more, because it wasn't applied permanently to the block.

log buffer

bala, February 04, 2009 - 6:11 am UTC

If data gets written out by LGWR once 1 MB accumulates, then what is the use of making the log buffer larger than 10 MB?
Tom Kyte
February 04, 2009 - 11:39 am UTC

it is a trip point.

Think of the log buffer like a bucket.

You are NOT allowed to overflow this bucket, when it is full, you have to wait for it to be emptied out a bit.

It is a 10 gallon bucket.

Every three seconds, or when you tell me to, or when it is one gallon full, I'll start scooping water out with a ladle and putting it into a pool.

You have a hose, you start filling the bucket. The hose is fast, you are silent. When the bucket is one gallon full - I start scooping. I scoop so fast that by the time you've added another gallon - the first gallon you added is in the pool. Since we are at a gallon again (you are doing a large load of water - "data"), I keep on scooping. You appear to have an infinite bucket because I'm emptying it as fast as you fill it - but there is more than one gallon of water in there from time to time.

Now add more people with more hoses.


There could be 10 gallons in that bucket - then you'd all wait a bit while I scooped some out and so on.


Now add more people with more hoses - lots of standing around. Each of you wants to put 50 gallons in - but you have to wait for me.

Or, we could get a bigger bucket, to buffer more.

database tuning

mitaly dutta, November 30, 2009 - 8:06 am UTC

hi tom!
I need your help with this question: "You are a DBA who has two major clients. One is called The IEEE (Instant Excitement Erotic Escorts) and the other is a firm of divorce lawyers called Paya, Pan, Short and Wiener. Both have integrated web applications that enable their clientele to schedule appointments and perform other interactions directly against the firms’ databases. Both have been experiencing high transaction volumes, and your ongoing monitoring scheme has picked up on the opportunity to optimize performance by doing some instance tuning. Specifically, the IEEE’s transactional activity mostly consists of INSERTs, especially mid-month. Perhaps coincidentally, Paya, Pan, Short and Weiner have an unusually high proportion of UPDATE and DELETE activity late in the month. Assuming both database systems were set up initially with the default configuration from an Oracle 10g General Purpose template, what tuning measures would you consider for both, and what measures would you consider specifically for each?"
Tom Kyte
December 01, 2009 - 3:10 am UTC

easy, you are done.

No one has reported any performance related issues, everyone is happy.


... and your ongoing monitoring scheme
has picked up on the opportunity to optimize performance by doing some instance
tuning ...

that is silly, that would be the answer then - your tool will tell you what to do.

There is no answer to this "question". Lots of buzz words "integrated web applications", as opposed to disintegrated?


... Assuming both database systems ... why two databases - there is your tuning opportunity, consolidate.

I would change nothing in the configuration, there are no facts to act on here.


Silly customer names too; if this were in an interview, I would question the level of the company I was interviewing with.

Alexander the ok, December 02, 2009 - 12:12 pm UTC

"Silly customer names to, if this were in an interview, I would question the level of the company I was interviewing with. "

There you go Tom, another example of how there's an exception to every rule. You would think there's never a case when you shouldn't explain the acronym you are using....

Maybe he's from an area of Canada where prostitution is legal ;)

Log Buffer Theory

Rob H, February 17, 2010 - 10:40 am UTC

From Oracle doc:
---------------------------------------
7.4 Configuring and Using the Redo Log Buffer

Server processes making changes to data blocks in the buffer cache generate redo data into the log buffer. LGWR begins writing to copy entries from the redo log buffer to the online redo log if any of the following are true:
1-The log buffer becomes one third full.
2-LGWR is posted by a server process performing a COMMIT or ROLLBACK.
3-DBWR posts LGWR to do so.
---------------------------------------

In the case of #2, what if you have 3 sessions doing work at different levels (session "A" doing 10,000 row insert then commit, session "B" doing a 5,000 row delete then commit, session "C" doing a single row insert then commit), will the session "C" force the redo log buffer to write ALL the redo (essentially for all 3 sessions) to the redo logs (potentially causing a "log file sync" if it does it frequently enough)?
I'm "assuming" a single row insert committing every second (or less if its multiple sessions), would be a particularly "bad thing".

Is #3 in reference to a timed event (the 3 second rule)? What other events would cause DBWR to do this?
Tom Kyte
February 17, 2010 - 11:20 am UTC

... will
the session "C" force the redo log buffer to write ALL the redo (essentially
for all 3 sessions) to the redo logs (potentially causing a "log file sync" if
it does it frequently enough)?
...

in reality, it wouldn't matter because the redo log is emptying itself out CONSTANTLY during this period of time - the redo log empties way before 1/3 full in practice (it has a size trigger too, like 1mb). It takes about as long to commit 1,000,000 modified records as it does 1 modified record - by the time you go to commit, the other stuff has already streamed out. But lgwr will optimize the write for the commit - it will do group commits (if more than one thing commits at a time) - but it will do just what it needs to do

dbwr is not a timed thing, dbwr would do this before writing out an undo block - it wants to make sure the redo is on disk first.

redo write sync

Rob H, February 17, 2010 - 1:46 pm UTC

documentation:
---------------------------
redo sync writes

Usually, redo that is generated and copied into the log buffer need not be flushed out to disk immediately. The log buffer is a circular buffer that LGWR periodically flushes. Redo sync writes increments when changes being applied must be written out to disk due to a commit.
---------------------------

here is an example
14:37:09 SQL> select name, value from V$SYSSTAT where name like '%redo sync%'

NAME                      VALUE
-------------------- ----------
redo synch writes      78265005
redo synch time       688273425

14:37:10 SQL> /

NAME                      VALUE
-------------------- ----------
redo synch writes      78267623
redo synch time       688297808

14:37:46 SQL>

So, in 35 seconds, redo synch writes incremented by over 2600; that's 75(ish) 'redo synch writes' events PER SECOND. So in this case, is someone literally committing more than 75 times a second?

Is the sysstat entry of "redo writes" the value when the lgwr writes out on its own?

14:41:07 SQL> select name, value from V$SYSSTAT where name like '%redo writes%'

NAME                      VALUE
-------------------- ----------
redo writes            45835641

14:41:09 SQL> /

NAME                      VALUE
-------------------- ----------
redo writes            45835852

14:41:36 SQL>

Here it's like 211 in a 35 second period (or 6 per second).

So, in theory, if the redo is writing out constantly, does the v$sysstat "redo write sync" statistic increment in that case? Here I can see that event increasing rapidly; can I assume this is a direct correlation to users committing far too early (instead of letting the log buffer fill and then the 'redo writes' event incrementing)?

Tom Kyte
February 18, 2010 - 8:13 am UTC

it should say

"Number of times the redo is forced to disk -- usually for transaction commit."

I've asked them to update the doc.


otherwise, this statistic would be the same as commits... but it isn't.
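You can compare the two for yourself with a quick query:

select name, value
  from v$sysstat
 where name in ('user commits', 'redo synch writes');

the counts track each other closely, but they are not the same number.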


redo size

kuldeep, April 09, 2010 - 7:57 am UTC

Hi Tom,

I am updating a table, and when the number of affected rows is small it is not generating redo. But Oracle should generate redo even if only one row changed.

In the example below, when 99 rows changed the redo size is 51004, but when 9 rows changed the redo size is zero.


Thanks and regards,

Kuldeep

SQL> desc sqltemp
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ADDR                                               VARCHAR2(16)
 SQL_TEXT                                           VARCHAR2(2000)
 DISK_READS                                         NUMBER
 EXECUTIONS                                         NUMBER
 PARSE_CALLS                                        NUMBER
 HASHVAL                                            NUMBER
 LOADTIME                                           VARCHAR2(19)
SQL> set autotrace traceonly statistics
SQL> update sqltemp set SQL_TEXT=null where rownum<100;

99 rows updated.


Statistics
----------------------------------------------------------
         92  recursive calls
        113  db block gets
         23  consistent gets
         18  physical reads
      51004  redo size
        800  bytes sent via SQL*Net to client
        704  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
         99  rows processed

SQL> roll
Rollback complete.

SQL> update sqltemp set SQL_TEXT=null where rownum<10;

9 rows updated.


Statistics
----------------------------------------------------------
          1  recursive calls
         11  db block gets
          3  consistent gets
          0  physical reads
          0  redo size          <===== zero bytes of redo generated
        805  bytes sent via SQL*Net to client
        703  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          9  rows processed     <=======

SQL> roll
Rollback complete.

Tom Kyte
April 13, 2010 - 8:54 am UTC

STOP DOING THINGS AS SYS.

sys is special
sys is magic
sys is not for you
sys should never be used for any sort of testing
the number of times sys should be used is so tiny


try your testing as a real user, not as the all magic, all powerful, never to be used sys account.


and post a real test case next time, with create tables and all - I normally don't even bother to respond but this was so obvious.

Zero redo size

kuldeep, April 13, 2010 - 9:49 am UTC

Hi Tom,

In my last post sqltemp table was a non-sys table in a user schema.

New test case is as below. I have created table in my own schema.

----
SQL> create table kultemp as select * from user_objects ;

SQL> set autotrace on

SQL> update kultemp set OBJECT_NAME=rpad('*',128,'*') where rownum<10;

9 rows updated.


Execution Plan
----------------------------------------------------------
   0      UPDATE STATEMENT Optimizer=ALL_ROWS (Cost=9 Card=9 Bytes=594
          )

   1    0   UPDATE OF 'KULTEMP'
   2    1     COUNT (STOPKEY)
   3    2       TABLE ACCESS (FULL) OF 'KULTEMP' (TABLE) (Cost=9 Card=
          1640 Bytes=108240)





Statistics
----------------------------------------------------------
          4  recursive calls
         15  db block gets
         34  consistent gets
          0  physical reads
          0  redo size
        808  bytes sent via SQL*Net to client
        719  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
          9  rows processed

SQL> roll
Rollback complete.

Tom Kyte
April 14, 2010 - 7:30 am UTC

I don't see any create tables - do you? I tried to run it, but it just doesn't work for me. This is a test case? Mine always start with - well - step one: what YOU would need to do in order to reproduce. What would you guys think if I left out every other line of my examples - it would make them infinitely less useful, wouldn't it? And how do we know who you are logged in as - you might be right, you might be wrong - but we'll never know because you don't show your work :(

anything one cannot "see", one has to make assumptions about - the less I see, the more I have to assume - the easiest explanation for 0's in autotrace statistics is you were using SYS.

In this case - it is likely the fact that we use private redo strands in 10g and above and the redo measurement is deferred until later in the transaction if the amount of changes is small

see
http://forums.oracle.com/forums/thread.jspa?messageID=3915905�

for a discussion on this.
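A sketch of how you might observe that deferral yourself (10g and up, reusing the kultemp table from above; the redo sits in a private strand until it is flushed):

select m.value redo_bytes
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic# and n.name = 'redo size';

update kultemp set object_name = 'x' where rownum < 10;

-- the statistic may still read low here...

commit;

select m.value redo_bytes
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic# and n.name = 'redo size';

-- ...and jumps once the private strand is flushed and accounted for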

A reader, April 14, 2010 - 9:55 am UTC

Hello Sir,

the above link does not work


Thanks
Tom Kyte
April 14, 2010 - 3:41 pm UTC

remove the obvious character at the end that somehow got pasted in there...

http://forums.oracle.com/forums/thread.jspa?messageID=3915905

Zero redo size

kuldeep, April 15, 2010 - 1:58 am UTC

Hi Tom,

I am a fan of the asktom site. The site contains great stuff on Oracle and has always been a place to find solutions to problems and clarification on conceptual topics.

With all due respect to this site and Tom, may I ask whether the questions are answered by Tom or by some aide/assistant who is helping him?

Thanks and regards,

Tom Kyte
April 15, 2010 - 8:25 am UTC

There is only one person that has the password to log in to answer questions.

And that is me, Tom.

redo apply time

sunny, June 08, 2010 - 12:45 pm UTC

Hi Tom
I have a question about redo apply times.
We have cloned 2 of our databases (both at 10.2.0.2.0 and of size 5 TB approx) using an offline FS copy on the same OS version, and now want to keep them synched in mount mode for a month using redo apply from the primary sites (initSID.ora files were copied too, for identical parameters).
Sys A has a redo log file size of 600 MB whereas B has 100 MB.
On average over a week, we observed sys A takes about 30 sec for one redo file (600M) apply,
and B takes about 2 min for one archived log file apply.
Since I am not sure what could explain this behaviour, please let me know what other info you'd need before you answer this.
Tom Kyte
June 10, 2010 - 9:50 am UTC

I don't even really know what the question is.

what is an "offline FS".

is "A" the same as "B", if so - how have you proven that? check raw IO rates and cpu capabilities (without Oracle being involved in the mix)

are the databases on A and B configured the same - obviously they differ in some respects (redo log file size for one) but what else differs.

think about logical differences too - does the redo log file for sys a have changes for a small set of blocks (very concentrated modifications, on a small section of a database) and for sys b have a set of changes such that the same block is never modified twice (very scattered all over the place) - things like that.


redo apply times

sunny, June 10, 2010 - 11:22 am UTC

Greetings Tom

Thank you for your response.

i meant they have been copied by bringing the DB & applications offline, stopping all activity on the box, and copying the files by scripts at the OS/storage level.

A & B are not the same, except that they have similar sizes.
A is BW/OLAP (600M redo) whereas B (100M redo) is OLTP. (However, I checked other imp parameters and they look almost the same too.)

As for the underlying setup, both run on identical but individual LPARs carved out of the same server box.

I agree that scattered vs. concentrated blocks involved in the redo changes plays at least a part, though I haven't proved it.

But my question is: are there any Oracle parameters or settings that could be affecting this or be tweaked, since as of now one takes over 20 times the log apply time (per MB of redo) as the other?

Tom Kyte
June 10, 2010 - 2:41 pm UTC

... i meant they have been copied by bringing the DB&applications offline, stopping
all activity on the box and copying files by scripts at OS/storage level.
...

is that what you mean by "offline FS"? Not really clear.


... A & B are not same, only except they have similar sizes,..

precisely how similar is similar. Strange thing about 'similar' - some people think some things are similar, but others look at them and say "you have a toaster oven and a banana here, what do you mean 'similar'"


"IMP" parameters? what does IMPort have to do with this??

... But my question is , is there any Oracle parameters or settings that could be
affecting/tweaked, since as of now it 1 takes over 20 times the log apply
times (per MB of redo) as that of other.
...

until we invent the universal fast=true parameter, you'd really need to have an understanding of "why" something is happening before you could tweak anything.

REDO

A reader, June 25, 2010 - 5:26 am UTC

Hi Tom,

I have read in one of the articles related to REDO that -
"Change Vectors for a transaction REDO record - are generated in PGA "


Is it correct? If so, how does it happen in the PGA? Could you
please explain this? Thanks for your time and help.

I know a change vector is nothing but a description of a change made to a block with a particular version, in bits and bytes (op_code + dba + version + change values, etc.)

Thanks
Tom Kyte
July 06, 2010 - 10:02 am UTC

data is made up by the session and then placed into the SGA where lgwr can see it.

Your session does some work, makes a change. The results of this change (the bits and bytes) are made in your session and then placed in the right places.


log file sync waits

Yousef, April 16, 2012 - 8:39 am UTC

Hello Tom,

we have a production DB with high load; its version is 10.2.0. We use monitoring software (Insider) which sometimes shows warnings on 'redo buffer allocation retries', as this value reached 5-6, and the software recommends increasing the log buffer size since this value should be near 0.
Currently we have 3 groups of redo logs of 50 MB each, and archive logging is enabled as we have a standby DB.
The log buffer size is around 14 MB, and the redo files switch every 4 minutes. What is your recommendation for such a situation?

Thanks,
Yousef
Tom Kyte
April 16, 2012 - 3:55 pm UTC

do you have any significant wait events in an ASH/AWR report to give us?

the number of events isn't really relevant.

the time of the events, the number of the events as opposed to the total events that could trigger it - they would be.

what does:

select r.value retries, e.value attempts , r.value/e.value*100 pct
from v$sysstat r, v$sysstat e
where r.name = 'redo buffer allocation retries' and e.name = 'redo entries';

return?