  • Statspack: Who waits on log file sync and LGWR wait for redo copy events?

Question and Answer

Connor McDonald

Thanks for the question, Miroslav.

Asked: March 16, 2004 - 10:37 am UTC

Last updated: July 25, 2019 - 2:40 am UTC

Version: 9.2.0

Viewed 10K+ times

You Asked

Hello Tom,

I apologize in advance for not being able to make my question shorter.

Here is the portion of the statspack output:

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 207 10-Mar-04 08:39:47 36 581,793.7
End Snap: 208 10-Mar-04 08:50:28 36 583,582.8
Elapsed: 10.68 (mins)

...

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 3,747 66.52
enqueue 2,129 1,350 23.97
db file sequential read 37,842 286 5.07
log file sync 9,743 65 1.15
LGWR wait for redo copy 58,067 61 1.08

I look at the CPU time--it is 3,747s on a system that has 4 HT CPUs (effectively 8 CPUs show up in Windows task manager). This means that for one second real time, there is 5.85s of CPU time spent by many concurrent sessions loading data into the database.

Now, the question on log file sync and LGWR wait for redo copy: if I see 65 seconds of wait time, I believe that is time the background process spent waiting. If the background process waited that long, I wonder how much it affected the user sessions (how much did they wait in the same time)... or perhaps this is in fact the time that end-user sessions waited because redo had to be copied. In other words, is this the time the background processes wait, or the time that user sessions wait for the background processes to finish?

We have just turned our disk cache buffer to write-only as suggested in Jonathan Lewis's "Practical Oracle 8i" (it used to be 50% read / 50% write). For a very similar load on the system we have seen a decrease in these statistics (both in the count and in the time):

log file sync 5,464 21 .44
LGWR wait for redo copy 22,215 18 .38

To ask a question in a different way--if LGWR waited 3 seconds for redo to be copied, but no user session waited because of that (if that's even possible), does that still show up in the timed events?

Regards,
--Miroslav.


and Connor said...

watch out with CPU time. if you have long running calls, it can easily over- and UNDER-count cpu time.


x-----------------x 50cpu seconds
x----------------------x 50cpu seconds
x-----------------------------------------------------x 500cpu secs
x----------------x 30cpu secs
x----------------------------------------x 60cpu secs
^ ^
| |
|------- snapshots taken here ----| 1 minute window

so you have a 1 minute snapshot. Say you had the above "executions" going on. statspack would report 80 cpu seconds used (50+30) because the CPU stat is gathered when the call completes. so the 2nd and 4th lines above contribute their cpu time -- but none of the others do (they did not complete in the window). So, beware.


log file sync is a client wait event. It is the wait event your clients wait on when they say "commit". It is the wait for LGWR to actually write their redo to disk and return to them. You can "tune" this by making lgwr faster (no raid 5, for example), committing less frequently, and generating less redo (BULK operations generate less redo than row-by-row processing does)
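The commit-frequency point can be sketched in PL/SQL. Table names `src` and `tgt` here are hypothetical; the point is that each explicit commit is one potential 'log file sync' wait for the session:

```sql
-- Row-by-row processing, committing every row: the session waits on
-- 'log file sync' once per iteration while LGWR flushes its redo.
begin
  for r in (select id, val from src) loop
    insert into tgt (id, val) values (r.id, r.val);
    commit;
  end loop;
end;
/

-- Bulk alternative: one statement, one commit -- a single 'log file sync'
-- wait in total, and less redo generated per row.
insert into tgt (id, val) select id, val from src;
commit;
```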

the other one is a background wait. LGWR is waiting for foregrounds to finish a current copy affecting the data LGWR is about to process.


HOWEVER, that said, tuning either of these will have no noticeable effect on your system's performance whatsoever! It certainly looks like "enqueue" is your wait, and that is all about application design -- those are heavyweight locks induced by the application logic itself. You would be best served by looking at the *application*, not at the "system", at this point.

do a level 12 10046 trace on your application and see what it is truly waiting for. it is all about tuning the application.
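A level 12 10046 trace (waits plus bind variables) can be enabled for the current session like this; the resulting trace file lands in user_dump_dest and can be formatted with tkprof:

```sql
alter session set events '10046 trace name context forever, level 12';

-- ... run the application workload to be diagnosed ...

alter session set events '10046 trace name context off';
```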











Comments

Still don't understand the LGWR wait for redo copy

Miroslav Dzakovic, March 16, 2004 - 4:10 pm UTC

Thanks for the useful answer! Can you, please, clarify what exactly LGWR waits for -- which "foregrounds to finish a current copy" of what?

Clear on CPU. Our snapshot is 10 minutes, yet most transactions (hence all statement calls) finish in about a minute or less => the under/over count possibility is +/-10%.

Clear on enqueues (I've seen this comment coming, but I didn't want to hide it from the statspack report). However, the enqueues, while they look bad, are not causing problems in our system, only because they are there *by design*. Our system is designed so that some background jobs do the post-processing of data after it's been loaded. And sometimes the background processes wait for the foreground processes to finish (using SELECT...FOR UPDATE) to ensure consistency. The thing is -- while a foreground load job may take up to a minute per transaction, those background jobs take only a couple of seconds. Most of the time our job queue processes (6 of them) are idle, and when they wake up (each minute) to run newly submitted post-processing jobs, those jobs may find that they still need to wait for some other loads to finish. No harm done -- even if some JQ processes are tied down because of that, others are free to process other jobs, and in the worst case it merely delays post-processing by a fraction of a minute. So, while it would be possible to eliminate the enqueues by timing the post-processing jobs differently, it would probably not improve overall system performance.

Finally, you are right that tuning LGWR and log file sync doesn't improve the overall system performance by much in our case (obviously, we are talking about some 2% of overall time). This wasn't really a tuning exercise, as much as a learning exercise, and the proof of Jonathan's point (setting the read/write cache% on disks takes one minute).

Tom Kyte
March 16, 2004 - 6:04 pm UTC

copy of redo information -- foregrounds moving stuff around (as lgwr is trying to flush it out due to one of the triggers that cause lgwr to do that -- 3 seconds, 1/3 full, 1meg full, commit).

but seriously -- these waits are *nothing*, I would encourage you to ignore them UNLESS they show up in an application trace!

Your comment on enqueues is a perfect example of why tuning by statspack is "not the best way" and why you need to look at the application. anyone looking at that statspack would gravitate right to the enqueues -- but you know that in your case, they are "idle-ing waits" :)



Seriously, what foregrounds are moving what stuff around?

Miroslav Dzakovic, March 16, 2004 - 6:36 pm UTC

I get your point, and I'm not asking for tuning anymore. For the sake of knowledge you are so passionately sharing, please clarify. Can you give an example of a "foreground" (user sessions, or something else) and an example of "stuff" it is moving around (data, log buffers, or something else)?

Another one that's a puzzle to me is this one:

Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------- --------
log file sequential read 849 0 10 12 0.0

Why does Oracle read the log file (other than during recovery, which we didn't do in this statspack report time)?

Tom Kyte
March 16, 2004 - 6:58 pm UTC

LGWR is about to write a set of log blocks to disk. It has to wait until the foreground (us, you and me) processes complete any current copy operations which affect the buffers we are about to write out.  It used to be the 'redo copy latch' but there were performance changes made in 8i and up to make this "faster".

so it is log related data.

are you in archive log mode?


ops$tkyte@ORA9IR2> select * from v$system_event where event like 'log file sequential%'
  2  /
 
EVENT                     TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
------------------------- ----------- -------------- ----------- ------------
log file sequential read            4              0           0            0
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @connect "/ as sysdba"
ops$tkyte@ORA9IR2> set termout off
sys@ORA9IR2> REM GET afiedt.buf NOLIST
sys@ORA9IR2> set termout on
sys@ORA9IR2> alter system switch logfile;
 
System altered.
 
sys@ORA9IR2> archive log all;
1 log archived.
sys@ORA9IR2> @connect /
sys@ORA9IR2> set termout off
ops$tkyte@ORA9IR2> REM GET afiedt.buf NOLIST
ops$tkyte@ORA9IR2> set termout on
ops$tkyte@ORA9IR2> select * from v$system_event where event like 'log file sequential%'
  2  /
 
EVENT                     TOTAL_WAITS TOTAL_TIMEOUTS TIME_WAITED AVERAGE_WAIT
------------------------- ----------- -------------- ----------- ------------
log file sequential read           69              0          26            0



;) 

Thank you!

Miroslav Dzakovic, March 16, 2004 - 7:16 pm UTC

Nice and clear!

:-)

CPU Time event in Oracle 9i Statspack

Rathy, April 06, 2004 - 2:07 am UTC

Hi Tom

Please give your expert guidance about how to address the "CPU time" event, which is coming at the top in “Top 5 time event” in Statspack. In my case it's coming at the top of the time event.

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 44 01-Apr-04 09:17:08 50 4.5
End Snap: 47 01-Apr-04 10:47:16 50 3.7
Elapsed: 90.13 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 32M Std Block Size: 8K
Shared Pool Size: 160M Log Buffer: 768K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 43,420.37 3,849.72
Logical reads: 17,304.43 1,534.24
Block changes: 223.17 19.79
Physical reads: 215.36 19.09
Physical writes: 8.05 0.71
User calls: 1,380.56 122.40
Parses: 359.46 31.87
Hard parses: 1.52 0.13
Sorts: 26.28 2.33
Logons: 1.78 0.16
Executes: 470.49 41.71
Transactions: 11.28

% Blocks changed per Read: 1.29 Recursive Call %: 8.07
Rollback per transaction %: 6.84 Rows per Sort: 470.84

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 98.78 In-memory Sort %: 99.98
Library Hit %: 99.81 Soft Parse %: 99.58
Execute to Parse %: 23.60 Latch Hit %: 99.87
Parse CPU to Parse Elapsd %: 92.14 % Non-Parse CPU: 96.18

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 93.22 93.44
% SQL with executions>1: 67.19 48.72
% Memory for SQL w/exec>1: 43.79 47.68

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 2,302 84.60
log file sync 57,524 291 10.71
db file parallel write 692 31 1.14
db file scattered read 147,666 29 1.07
db file sequential read 210,120 16 .57
-------------------------------------------------------------

Thanks for your service

Tom Kyte
April 06, 2004 - 9:10 am UTC

in 90 minutes on a single cpu machine you have 5400 cpu seconds.

you used less than half of that


that is all that is saying. it is not a wait event -- these are TIMED events; we are just showing you the amount of cpu you used in that period of time. this is useful when looking at performance -- for if you used 100% of your CPU (the most you can use), then you might have a problem (since you probably wanted to use (100+n)% of your CPU, and hence people were waiting on cpu).

How did you get 5400 ?

Jan, April 06, 2004 - 10:07 am UTC


Tom Kyte
April 07, 2004 - 8:29 am UTC

90 minutes * 60 seconds in a minute = 5400 available CPU seconds on a single cpu machine.

RE: How did you get 5400

Mark A. Williams, April 06, 2004 - 10:33 am UTC

RE: How did you get 5400...

From the statspack report: 90.13 (mins)

90 (mins) * 60 (secs) = 5400 (secs)

- Mark
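The same arithmetic, done the SQL*Plus way (the figures come from the statspack excerpt above):

```sql
-- 90.13 minutes of elapsed time on 1 CPU, against the 2,302 CPU seconds used
select round(90.13 * 60) available_cpu_secs,
       round(2302 / (90.13 * 60) * 100, 1) pct_used
  from dual;
-- roughly 5408 available seconds, of which about 42.6% was consumed
```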

Log file sync

Dilip Patel, June 24, 2004 - 6:35 am UTC

Hello Tom,

I am getting log file sync as my top event in the statspack report. However, on further drill down, in most of my statspack reports "redo log space requests" and "redo buffer allocation retries" are almost the same.

1. Does it mean that every redo log space request has to wait?
2. Is this solved by increasing the log buffer or by increasing the size of the redo logs?

My present log buffer is 262144 bytes and each redo log file is 10MB.

My database is loading data in batch.
The database server has 8 CPUs.

Following are some useful lines from my statspack report.


Snap Id Snap Time Sessions
------- ------------------ --------
Begin Snap: 189 23-Jun-04 15:03:18 217
End Snap: 190 23-Jun-04 16:16:16 217
Elapsed: 72.97 (mins)


Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
log file sync 176,322 221,244 52.97
log file parallel write 157,386 141,780 33.95

Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session 47,491 10.9 0.3
CPU used when call started 47,490 10.9 0.3

background checkpoints completed 27 0.0 0.0
background checkpoints started 27 0.0 0.0

redo blocks written 545,713 124.7 3.1
redo buffer allocation retries 148 0.0 0.0
redo entries 530,471 121.2 3.0
redo log space requests 151 0.0 0.0
redo log space wait time 3,012 0.7 0.0
redo ordering marks 0 0.0 0.0
redo size 224,686,644 51,321.8 1,275.1
redo synch time 221,537 50.6 1.3
redo synch writes 176,215 40.3 1.0
redo wastage 46,021,268 10,511.9 261.2
redo write time 141,790 32.4 0.8
redo writer latching time 4 0.0 0.0
redo writes 157,385 36.0 0.9

user commits 175,993 40.2 1.0
user rollbacks 218 0.1 0.0


Tom Kyte
June 24, 2004 - 9:56 am UTC

log file sync is the wait you incur when you commit.

this means either

o you commit too frequently (eg: like every statement instead of transactionally)
o your log devices are not sufficient to keep up with your load.

IF, in fact, it is even causing any noticeable performance-related issues.


You see, you had just about the same number of log file sync waits as commits (expected). If you divide the two -- you get about 0.012 seconds of wait per commit on average.

Given your statspack report is about 58 minutes too long in duration (averages over the long haul are not too meaningful), it is really hard to say much about these numbers, other than a casual glance would say "the average wait was 0.012 seconds, the average wait happens once per transaction; will it make any impact on my system if my transactions go 0.012 seconds faster?"
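The per-commit arithmetic can be reproduced from the report (wait times in this statspack are centiseconds, hence the division by 100):

```sql
-- 221,244 cs of 'log file sync' time over 176,322 waits
select round(221244 / 100 / 176322, 4) avg_sync_wait_secs from dual;
-- about 0.0125 seconds per commit

-- and the buffer allocation retries are negligible relative to redo entries
select round(148 / 530471 * 100, 4) retry_pct from dual;
-- about 0.03%
```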




library cache lock

Pravesh Karthik from Chennai, September 22, 2004 - 5:37 am UTC

Tom,

Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (s) Wt Time
-------------------------------------------- ------------ ----------- -------
library cache lock 1,869 5,621 57.94
async disk IO 835,577 1,037 10.69
direct path read 319,379 669 6.90
db file scattered read 131,376 555 5.72
db file parallel write 1,974 319 3.29
-------------------------------------------------------------

The report was taken in a 30 min snapshot interval

I have library cache lock as the top event. Do I need to increase the shared pool? How do I find out who is holding the lock? I see a few queries with higher buffer gets; would optimizing those resolve this issue, or is my shared pool too small?

Thanks,
Pravesh Karthik

Tom Kyte
September 22, 2004 - 8:16 am UTC

are you parsing, or even worse HARD PARSING, like mad on this system?

Pravesh Karthik from chennai, September 22, 2004 - 10:42 am UTC

Tom,

It's a DWH system, querying 30-50 MM records.

Thanks,
Pravesh Karthik


Tom Kyte
September 22, 2004 - 10:56 am UTC

cool, so can you answer my question?

A reader, September 22, 2004 - 12:53 pm UTC


Sure tom,

To the best of my knowledge, and from whatever I can get from the Oracle schema, I can tell you..

Pravesh Karthik

Tom Kyte
September 22, 2004 - 1:09 pm UTC

<quote>
are you parsing, or even worse HARD PARSING, like mad on this system?
</quote>

A reader, September 22, 2004 - 1:59 pm UTC

STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
XXXXXXX 3679640145 XXXXXX 1 9.0.1.1.0 NO XXXXXXXXX

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 8 11-Sep-04 02:31:02 97 5.0
End Snap: 9 11-Sep-04 03:03:01 99 5.6
Elapsed: 31.98 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 3,008M Std Block Size: 8K
Shared Pool Size: 464M Log Buffer: 1,024K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 1,122,530.61 451,506.23
Logical reads: 23,021.52 9,259.75
Block changes: 6,610.73 2,658.98
Physical reads: 6,104.73 2,455.45
Physical writes: 99.63 40.07
User calls: 98.91 39.78
Parses: 308.39 124.04
Hard parses: 0.37 0.15
Sorts: 2.94 1.18
Logons: 0.20 0.08
Executes: 1,747.18 702.76
Transactions: 2.49

% Blocks changed per Read: 28.72 Recursive Call %: 98.52
Rollback per transaction %: 0.00 Rows per Sort: 8605.57

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 90.49 In-memory Sort %: 99.91
Library Hit %: 99.98 Soft Parse %: 99.88
Execute to Parse %: 82.35 Latch Hit %: 99.94
Parse CPU to Parse Elapsd %: 61.61 % Non-Parse CPU: 100.00

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 92.40 91.93
% SQL with executions>1: 59.38 60.35
% Memory for SQL w/exec>1: 66.84 69.05

Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (s) Wt Time
-------------------------------------------- ------------ ----------- -------
library cache lock 1,869 5,621 57.94
async disk IO 835,577 1,037 10.69
direct path read 319,379 669 6.90
db file scattered read 131,376 555 5.72
db file parallel write 1,974 319 3.29
-------------------------------------------------------------


Tom Kyte
September 22, 2004 - 2:58 pm UTC

You still want to say you are a data warehouse?

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Parses: 308.39 124.04



with 308 parses/second?


do you have the ability to trace an application or set of applications and see what they are waiting on?

Top N phy reads

A reader, January 12, 2005 - 8:54 pm UTC

Hi Tom,
I set the i_seg_phy_reads_th threshold to 100000,
but get only the top 5 entries...
how can I make statspack show all segments above the i_seg_phy_reads_th threshold?


EXECUTE STATSPACK.MODIFY_STATSPACK_PARAMETER (i_snap_level=>7, i_seg_phy_reads_th=>100000, i_seg_log_reads_th=>1000000,i_modify_parameter=>'true');


Top 5 Physical Reads per Segment for DB: VWPROD1 Instance: vwprod1 Snaps: 4947
-> End Segment Physical Reads Threshold: 100000

Subobject Obj. Physical
Owner Tablespace Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
DATA_OWNER DBT_U01_DA QUE_FEEDBACK_EVENT TABLE 7,741,299 24.65
DATA_OWNER DBW_U02_SU MV_E_P TABLE 5,003,598 15.93
DATA_OWNER DBW_U02_SU MV_E_O TABLE 4,921,134 15.67
DATA_OWNER DBT_U01_DA STG_FEEDBACK_TRANSAC TABLE 2,774,573 8.84
DATA_OWNER DBW_U03_ST MLOG$_SUM_SESSION_EV TABLE 1,275,325 4.06
-------------------------------------------------------------



Tom Kyte
January 13, 2005 - 8:13 am UTC

it is a top-5 report there.

it is controlled (after looking in sprepins.sql) by top_n_segstat, a define variable at the top of that report.

change that if you like.
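For reference, the define sits near the top of sprepins.sql (under $ORACLE_HOME/rdbms/admin). This is a sketch of the one-line change Tom describes; the exact default may differ by version:

```sql
-- in sprepins.sql
define top_n_segstat = 20;   -- raise from the default to report more segments
```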

Excerpt from my STATSPACK

hans wijte, March 31, 2005 - 9:22 am UTC

STATSPACK report for

DB Name DB Id Instance Inst Num Release OPS Host
------------ ----------- ------------ -------- ----------- --- ------------
EK_PLPRD 3848255163 EK_PLPRD 1 8.1.7.3.0 NO rs9737

Snap Id Snap Time Sessions
------- ------------------ --------
Begin Snap: 1 31-Mar-05 15:10:14 24
End Snap: 2 31-Mar-05 15:25:19 24
Elapsed: 15.08 (mins)

Cache Sizes
~~~~~~~~~~~
db_block_buffers: 16000 log_buffer: 163840
db_block_size: 8192 shared_pool_size: 40000000

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 2,483.54 80,271.57
Logical reads: 2,385.87 77,114.82
Block changes: 9.14 295.57
Physical reads: 7.68 248.07
Physical writes: 1.31 42.50
User calls: 223.90 7,236.61
Parses: 225.12 7,276.14
Hard parses: 0.24 7.71
Sorts: 58.61 1,894.39
Logons: 18.36 593.29
Executes: 232.84 7,525.86
Transactions: 0.03

% Blocks changed per Read: 0.38 Recursive Call %: 79.59
Rollback per transaction %: 17.86 Rows per Sort: 3.64

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.68 In-memory Sort %: 100.00
Library Hit %: 99.90 Soft Parse %: 99.89
Execute to Parse %: 3.32 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 73.42 % Non-Parse CPU: 99.83

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 82.18 79.98
% SQL with executions>1: 79.44 83.30
% Memory for SQL w/exec>1: 71.20 81.61

Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
log file sync 1,108 30,766 37.11
log file parallel write 890 22,068 26.62
control file parallel write 224 21,963 26.49
db file sequential read 4,903 4,367 5.27
db file parallel write 69 1,825 2.20
-------------------------------------------------------------
Wait Events for DB: EK_PLPRD Instance: EK_PLPRD Snaps: 1 -2
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
log file sync 1,108 247 30,766 278 39.6
log file parallel write 890 0 22,068 248 31.8
control file parallel write 224 0 21,963 980 8.0
db file sequential read 4,903 0 4,367 9 175.1
db file parallel write 69 0 1,825 264 2.5
db file scattered read 111 0 1,723 155 4.0
refresh controlfile command 13 0 101 78 0.5
latch free 138 87 44 3 4.9
file open 64 0 33 5 2.3
SQL*Net break/reset to clien 2 0 4 20 0.1
direct path write 6 0 3 5 0.2
SQL*Net more data to client 486 0 2 0 17.4
control file sequential read 79 0 0 0 2.8
direct path read 13 0 0 0 0.5
log buffer space 2 0 0 0 0.1
SQL*Net message from client 202,667 0 1,062,013 52 ######
pipe get 354 354 181,193 5118 12.6
SQL*Net more data from clien 15 0 134 89 0.5
SQL*Net message to client 202,670 0 30 0 ######
-------------------------------------------------------------
Background Wait Events for DB: EK_PLPRD Instance: EK_PLPRD Snaps: 1 -2
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (cs) (ms) /txn
---------------------------- ------------ ---------- ----------- ------ ------
log file parallel write 890 0 22,068 248 31.8
control file parallel write 224 0 21,963 980 8.0
db file parallel write 69 0 1,825 264 2.5
db file sequential read 3 0 5 17 0.1
control file sequential read 39 0 0 0 1.4
rdbms ipc message 2,454 701 511,721 2085 87.6
smon timer 3 3 92,150 ###### 0.1
pmon timer 297 295 90,759 3056 10.6
-------------------------------------------------------------
SQL ordered by Gets for DB: EK_PLPRD Instance: EK_PLPRD Snaps: 1 -2
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100

Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
829,430 16,593 50.0 38.4 3013728279
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0

248,790 16,586 15.0 11.5 3460981388
select 1 into :b0 from ouvbat ,datbat where ((datbat.jour=ouvba
t.joursemaine and datbat.heure between ouvbat.heure_debut and ou
vbat.heure_fin) and ouvbat.file_batch=:b1)
......
etc. etc.
......
Instance Activity Stats for DB: EK_PLPRD Instance: EK_PLPRD Snaps: 1 -2

Statistic Total per Second per Trans
--------------------------------- ---------------- ------------ ------------
CPU used by this session 8,066 8.9 288.1
CPU used when call started 8,066 8.9 288.1
CR blocks created 61 0.1 2.2
Cached Commit SCN referenced 0 0.0 0.0
DBWR buffers scanned 1,395 1.5 49.8
DBWR checkpoint buffers written 261 0.3 9.3
DBWR checkpoints 0 0.0 0.0
DBWR free buffers found 996 1.1 35.6
DBWR lru scans 22 0.0 0.8
DBWR make free requests 23 0.0 0.8
DBWR summed scan depth 1,395 1.5 49.8
DBWR transaction table writes 0 0.0 0.0
DBWR undo block writes 280 0.3 10.0
SQL*Net roundtrips to/from client 136,176 150.5 4,863.4
background checkpoints completed 0 0.0 0.0
background checkpoints started 0 0.0 0.0
background timeouts 754 0.8 26.9
branch node splits 0 0.0 0.0
buffer is not pinned count 1,271,297 1,404.8 45,403.5
buffer is pinned count 697,335 770.5 24,904.8
bytes received via SQL*Net from c 35,125,012 38,812.2 1,254,464.7
bytes sent via SQL*Net to client 44,671,556 49,360.8 1,595,412.7
calls to get snapshot scn: kcmgss 195,836 216.4 6,994.1
calls to kcmgas 884 1.0 31.6
calls to kcmgcs 72 0.1 2.6
change write time 17 0.0 0.6
cleanouts only - consistent read 55 0.1 2.0
cluster key scan block gets 17,479 19.3 624.3
cluster key scans 10,014 11.1 357.6
commit cleanout failures: block l 7 0.0 0.3
commit cleanout failures: buffer 0 0.0 0.0
commit cleanout failures: callbac 4 0.0 0.1
commit cleanout failures: cannot 0 0.0 0.0
commit cleanouts 1,072 1.2 38.3
commit cleanouts successfully com 1,061 1.2 37.9
consistent changes 551 0.6 19.7
consistent gets 1,613,925 1,783.3 57,640.2
cursor authentications 405 0.5 14.5
data blocks consistent reads - un 551 0.6 19.7
db block changes 8,276 9.1 295.6
db block gets 545,290 602.5 19,474.6
deferred (CURRENT) block cleanout 905 1.0 32.3
dirty buffers inspected 35 0.0 1.3
enqueue conversions 6 0.0 0.2
enqueue releases 52,686 58.2 1,881.6
enqueue requests 52,681 58.2 1,881.5
enqueue timeouts 0 0.0 0.0
enqueue waits 0 0.0 0.0
execute count 210,724 232.8 7,525.9
free buffer inspected 36 0.0 1.3
free buffer requested 6,740 7.5 240.7
hot buffers moved to head of LRU 7,830 8.7 279.6
immediate (CR) block cleanout app 55 0.1 2.0
immediate (CURRENT) block cleanou 88 0.1 3.1
index fast full scans (full) 3 0.0 0.1
leaf node splits 25 0.0 0.9
logons cumulative 16,612 18.4 593.3
logons current
messages received 1,000 1.1 35.7
messages sent 1,000 1.1 35.7
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 1,025,157 1,132.8 36,612.8
opened cursors cumulative 202,936 224.2 7,247.7
opened cursors current
parse count (hard) 216 0.2 7.7
parse count (total) 203,732 225.1 7,276.1
parse time cpu 1,340 1.5 47.9
parse time elapsed 1,825 2.0 65.2
physical reads 6,946 7.7 248.1
physical reads direct 496 0.6 17.7
physical writes 1,190 1.3 42.5
physical writes direct 496 0.6 17.7
physical writes non checkpoint 1,149 1.3 41.0
pinned buffers inspected 0 0.0 0.0
prefetched blocks 1,434 1.6 51.2
prefetched blocks aged out before 0 0.0 0.0
process last non-idle time ################ ############ ############
recursive calls 790,268 873.2 28,223.9
recursive cpu usage 3,600 4.0 128.6
redo blocks written 5,049 5.6 180.3
redo buffer allocation retries 2 0.0 0.1
redo entries 4,246 4.7 151.6
redo log space requests 0 0.0 0.0
redo log space wait time 0 0.0 0.0
redo size 2,247,604 2,483.5 80,271.6
redo synch time 30,770 34.0 1,098.9
redo synch writes 855 0.9 30.5
redo wastage 263,820 291.5 9,422.1
redo write time 28,105 31.1 1,003.8
redo writer latching time 0 0.0 0.0
redo writes 890 1.0 31.8
rollback changes - undo records a 0 0.0 0.0
rows fetched via callback 49,335 54.5 1,762.0
session connect time ################ ############ ############
session cursor cache hits 2,148 2.4 76.7
session logical reads 2,159,215 2,385.9 77,114.8
session pga memory 2,131,480,704 2,355,227.3 ############
session pga memory max 2,132,059,420 2,355,866.8 ############
session uga memory 92,405,236 102,105.2 3,300,187.0
session uga memory max 924,264,176 1,021,286.4 ############
sorts (disk) 2 0.0 0.1
sorts (memory) 53,041 58.6 1,894.3
sorts (rows) 193,015 213.3 6,893.4
summed dirty queue length 0 0.0 0.0
switch current to new buffer
table fetch by rowid 628,650 694.6 22,451.8
table fetch continued row 1,203 1.3 43.0
table scan blocks gotten 126,900 140.2 4,532.1
table scan rows gotten 1,492,981 1,649.7 53,320.8
table scans (long tables) 9 0.0 0.3
table scans (short tables) 133,963 148.0 4,784.4
total file opens 64 0.1 2.3
transaction rollbacks 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
transaction tables consistent rea 0 0.0 0.0
user calls 202,625 223.9 7,236.6
user commits 23 0.0 0.8
user rollbacks 5 0.0 0.2
write clones created in backgroun 0 0.0 0.0
write clones created in foregroun 0 0.0 0.0




Hi Tom,

I have 2 questions:

1. It seems my database is mostly busy writing redo log and control file information. What, in general, could be the reasons for this behaviour (I saw that the two redo log file systems were both allocated on the same disk, so there is probably a lot of contention), taking into account that the number of user commits is very low?


2. What could be the reason that the SQL statement performing the largest number of buffer gets is the following:
Buffer Gets Executions Gets per Exec % Total Hash Value
--------------- ------------ -------------- ------- ------------
829,430 16,593 50.0 38.4 3013728279
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0




Best regards

Hans


Tom Kyte
March 31, 2005 - 9:49 am UTC

 
ops$tkyte@ORA9IR2> select 307/1108 from dual;
 
  307/1108
----------
.277075812
 
that is pretty long to be waiting for a commit to complete.  I'd definitely look at that (perhaps the user commits are very low because of this, not because they should be low)....

I'd be looking at those devices, try to dedicate some to just this and if you are using cooked file systems and can mount the logs with forcedirectio, that would be useful for the logs.

As for the query, because it is executed so much?  Your execute to parse number is way low -- are the developers parsing a statement more than executing it?  (v$sql can help you see) 
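One way to spot statements that are parsed more often than they are executed (column names as they appear in v$sql):

```sql
select sql_text, parse_calls, executions
  from v$sql
 where parse_calls > executions
   and executions > 0
 order by parse_calls desc;
```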

forcedirectio not possible

Hans Wijte, April 01, 2005 - 4:08 am UTC

This database is on AIX JFS (Journaled File System), which doesn't have a forcedirectio mount option; I could use the undocumented parameter "_filesystemio_options", but I'm not in favor of using those.

I will however move one of the log groups to another file system and see if that helps.

Thanks,

Hans

Tom Kyte
April 01, 2005 - 8:32 am UTC

check out
http://www.jlcomp.demon.co.uk/statspack_01.html

too, for future reference when reading these -- they are a bit tricky sometimes

Moving log files

hans wijte, April 07, 2005 - 6:15 am UTC

Tom,

it seems the move of the said log files to another file system (i.e. harddisk) has worked out; the following top5 wait events in this database are a sight for sore eyes, when compared to the wait events a couple of days ago:


Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
db file sequential read 3,775 2,158 37.28
latch free 884 1,060 18.31
log file sync 1,116 738 12.75
log file parallel write 1,147 639 11.04
control file parallel write 293 536 9.26
-------------------------------------------------------------
Wait Events for DB: EK_PLPRD Instance: EK_PLPRD Snaps: 29 -30
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> ordered by wait time desc, waits desc (idle events last)


The execute to parse ratio is still awfully low (3%), but this is a purchased application and therefore not so easily changed.

The comment from Jonathan Lewis on my original posting was most instructive as well.

thank you both again



Tom Kyte
April 07, 2005 - 10:20 am UTC

you might look at session cached cursors? if you feel the sql is repeated lots in the same session, it'll softly cache them and could help decrease the latch frees you are seeing if they are library cache related

you would monitor the

session cursor cache hits
session cursor cache count

statistics
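For example (a sketch; these are the system-wide statistic names -- per-session figures would come from v$sesstat joined to v$statname):

```sql
-- Compare cursor cache hits against total parses to gauge the
-- benefit of session_cached_cursors.
select name, value
  from v$sysstat
 where name in ( 'session cursor cache hits',
                 'session cursor cache count',
                 'parse count (total)' );
```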

log file sync

Lizhuohua, April 14, 2005 - 12:29 pm UTC

Hello Tom,
We have a ETL procedure like this:
create or replace procedure prc_etl( p_year_month in number )
is
begin
    delete from table_a where year_month = p_year_month; -- for re-ETL of data
    insert into table_a
    select ....
    commit;
end;

Each SQL statement deals with about 200,000 rows and runs in less than
20 seconds on its own, but this procedure runs for 16 minutes. When I query
v$session_wait while it is running, I find that my session waits on log file sync.

To my surprise, when I run this test on another machine:

create table t2 as select * from all_objects where 1=0;
insert into t2
select a.*
from all_objects a,
(select null from user_objects where rownum <= 10);

commit;

begin
delete from t2;
commit;
insert into t2
select a.*
from all_objects a,
(select null from user_objects where rownum <= 10);
commit;
end;
It takes about 1 minute.
(My ETL procedure runs on Oracle 9.2.0.5 on HP-UX,
and my test case ran on 9.2.0.5 on Windows 2000.)
My questions are:
1. How can I improve the performance of the ETL procedure?
2. Why is there so much difference between my ETL procedure and my test case?



Tom Kyte
April 14, 2005 - 1:22 pm UTC

what is your log buffer size, and how many indexes are on this table you delete+insert into?

(did your test case mimic your real table...)

log file sync

Lizhuohua, April 14, 2005 - 9:29 pm UTC

Hello Tom,
I'm sorry, my test case was not the same as my real table; it had no indexes.
I did the following test:
In my test case I created an index on (OBJECT_NAME,OBJECT_ID) like my real table
and executed the delete+insert; it took 80+ seconds.
On the real table,
executing the procedure was slow and waited on log file sync.
After I dropped the index on the real table, it ran fast
(because I thought something had affected me yesterday).
Then I recreated the index as before, and it ran fast too!
I really don't know how to explain this.

Any Thoughts on this report

Jim, April 21, 2005 - 2:11 pm UTC

what is global cache cr request??



STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
CTMS 821857281 ctms2 2 9.2.0.4.0 YES DBSRV2

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 2097 21-Apr-05 13:26:17 12,934 1.7
End Snap: 2107 21-Apr-05 13:42:32 12,984 1.7
Elapsed: 16.25 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 696M Std Block Size: 8K
Shared Pool Size: 232M Log Buffer: 512K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 42,545.49 7,039.17
Logical reads: 43,814.55 7,249.14
Block changes: 399.83 66.15
Physical reads: 685.83 113.47
Physical writes: 12.66 2.09
User calls: 256.83 42.49
Parses: 567.05 93.82
Hard parses: 5.27 0.87
Sorts: 46.05 7.62
Logons: 1.30 0.22
Executes: 1,141.67 188.89
Transactions: 6.04

% Blocks changed per Read: 0.91 Recursive Call %: 91.22
Rollback per transaction %: 8.21 Rows per Sort: 19.40

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 98.43 In-memory Sort %: 100.00
Library Hit %: 99.56 Soft Parse %: 99.07
Execute to Parse %: 50.33 Latch Hit %: 99.75
Parse CPU to Parse Elapsd %: 22.68 % Non-Parse CPU: 81.42

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 88.45 89.11
% SQL with executions>1: 64.17 79.34
% Memory for SQL w/exec>1: 64.05 65.03

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
global cache cr request 568,659 3,560 33.25
CPU time 2,154 20.12
library cache pin 1,220 2,112 19.73
db file sequential read 112,659 1,475 13.77
SQL*Net message from dblink 5,391 270 2.52
-------------------------------------------------------------

Tom Kyte
April 22, 2005 - 9:51 am UTC

see metalink Note 69048.1 for details on that wait.




Redo Size negative

reddy, April 27, 2005 - 12:48 pm UTC

Hi Tom,

I see strange negative values in my statspack report for redo size. What might be the reason? The snapshot duration was 1 hour. Here is how it looks:

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 1000M Std Block Size: 8K
Shared Pool Size: 2,500M Log Buffer: 30,700K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: -271,324.74 -1,703.85
Logical reads: 283,533.48 1,780.51
Block changes: 6,722.45 42.22
Physical reads: 12,091.67 75.93

Thanks,
Reddy

Tom Kyte
April 27, 2005 - 1:05 pm UTC

the numbers probably rolled over at that point. If you look at a report right before or after that window -- what does it look like?

do the numbers in the v$ tables themselves look reasonable?
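A sanity check along those lines might look like this (a sketch -- statspack reports the delta between two snapshots of this statistic, so a rolled-over counter shows up as a negative difference):

```sql
-- current cumulative value since instance startup; if this looks
-- sane while the statspack delta is negative, suspect rollover
select name, value
  from v$sysstat
 where name = 'redo size';
```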

import

abc, May 21, 2005 - 2:03 pm UTC

Tom,

My import job has been running for the last 20 hours, and it is taking 10 hours to import one table. My mistake: I used commit=y and am now facing log file sync waits. What is the solution, without stopping the job?

Tom Kyte
May 21, 2005 - 9:27 pm UTC

the solution is to "wait", you cannot change the mind of a program already running like that...

log file sync

bipul, June 01, 2005 - 7:08 am UTC

Hi Tom.

We have third party software which does a commit after each insert or delete [no bulk operations, as it is one of those applications that's been written to run on any database!]. As a result, during the peak activity period the main wait event is log file sync. The following AWR report shows that [a 30 minute report for a 2 CPU, 2 node RAC environment]:

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) DB Time Wait Class
------------------------------ ------------ ----------- --------- --------------
CPU time 1,682 19.03
log file sync 31,684 1,355 15.33 Commit
latch: cache buffers chains 9,662 828 9.36 Concurrency
db file sequential read 24,934 330 3.73 User I/O
buffer busy waits 3,759 206 2.33 Concurrency
-------------------------------------------------------------

The redo log files are on RAID-5, which again is not helping. Given that I can't easily change these two things [i.e. frequent commits and RAID-5], is there any other way to improve the performance of LGWR, so that the total wait time is reduced?

Thanks
bipul

Tom Kyte
June 01, 2005 - 10:10 am UTC

lets see, this is the equivalent of

a) tie hands behind back

(commit after every row -- this application wasn't written to run against every database, it was obviously written to destroy data integrity. Hope you don't do any automatic teller machine transactions against it. Let's see: take $100 out of savings, commit, CRASH -- before we put it into checking. No problem, losing $100 isn't a big deal.)

b) securely bind both feet together

(put the high write volume, most critical file on raid 5)


and now swim the English Channel.

Sorry, the only ways to reduce log file syncs are

a) don't commit until your transaction is actually done.
b) speed up the devices the log files are on.

I thought so

bipul, June 01, 2005 - 12:29 pm UTC

Hi Tom.

Thanks for the reply. I knew there was no other way, but wanted your expert advice.

Regarding: "transaction actually done", the transaction is actually done. This software is used to send emails to a mailing list. And it inserts and update data in 2 tables after sending each email. No concpet of bulk operation !

I might be able to do something about "securely bind both feet together". We are changing our storage system, and I think that's the right time to sort out this redo-log-on-RAID-5 issue.

What kind of RAID configuration would you suggest for redo log files?

Thanks
bipul

Tom Kyte
June 01, 2005 - 1:51 pm UTC

but:

which does a commit after each insert or delete

and

And it inserts and update data in 2 tables


are contrary statements - if it commits after each SQL statement, it cannot have a transaction that spans statements.

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:359617936136


"rdbms ipc message" in v$session_wait for log writer session

Sean, June 08, 2005 - 10:55 am UTC

Hi Tom,

The event for log writer session in v$session_wait is always “rdbms ipc message”. According to the reference manual:
---------------------------
rdbms ipc message
The background processes (LGWR, DBWR, LMS0) use this event to indicate that they are idle and are waiting for the foreground processes to send them an IPC message to do some work.
----------------------------

If it is a normal event, why does it show up in the v$session_wait view?

Thanks so much for your help.


Tom Kyte
June 08, 2005 - 11:27 am UTC

because you wait for normal things -- all events are "normal", you are almost always waiting for something. it is so you know "what the heck it is doing"

action for wait event

Sean, June 08, 2005 - 12:37 pm UTC

Hi Tom,

Looking through the wait events in Appendix A of the 9i reference manual, for some events it is not easy to tell whether they are normal or unusual ("unusual" meaning we need to investigate the cause of the wait). It seems the reference manual should explain wait events the way the error message manual does: CAUSE, ACTION.

Thanks so much for your help


daniel, December 22, 2005 - 12:20 pm UTC

Hi Tom,
In our very, very active database, the top sessions always show most of their wait time in log file sync; the other waits are small by comparison. The database is efficient, handling a transaction rate of 200 tran/sec, but this is not enough -- we want the database to perform faster, as we anticipate large volumes of load. A target was set to have the db do 600 trans/sec. The only major block in the statspack top 5 events is always log file sync.
I have seen your notes and opened a tar; all of them say to increase the i/o of the disks where the redo logs are. But as you know, big organizations like us now use a SAN. We have an IBM Shark and the network speed is fast; the db server is connected to the Shark via fibre optic, and of course the Shark comes ready made from IBM with RAID 5 and multiple paths. In this scenario, even creating multiple mount points (/u2 for tables, /u3 for indexes, etc.) becomes more or less meaningless. What we can do is create another mount point (on the same Shark SAN), call it /log, and put our redo logs there -- will that help? We have 512 MB redo logs, 6 groups, and log switching is around 7 per hour.
What if we increase the redo log file size to 1 GB? (Although that is not for log file sync.)

STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
LTS2 3872561801 LTS2 1 9.2.0.5.0 NO tadawuldb

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 25279 13-Dec-05 10:01:15 681 422.6

End Snap: 25280 13-Dec-05 10:16:12 686 430.4

Elapsed: 14.95 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 2,208M Std Block Size:
8K
Shared Pool Size: 1,376M Log Buffer:
5,120K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 1,386,670.31 6,654.81
Logical reads: 81,350.74 390.41
Block changes: 6,642.10 31.88
Physical reads: 360.63 1.73
Physical writes: 154.99 0.74
User calls: 5,715.58 27.43
Parses: 1,385.25 6.65
Hard parses: 0.15 0.00
Sorts: 511.20 2.45
Logons: 1.11 0.01
Executes: 3,198.50 15.35
Transactions: 208.37

% Blocks changed per Read: 8.16 Recursive Call %: 51.95
Rollback per transaction %: 1.66 Rows per Sort: 224.49

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.56 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 99.99
Execute to Parse %: 56.69 Latch Hit %: 99.86
Parse CPU to Parse Elapsd %: 44.95 % Non-Parse CPU: 98.47

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 25.03 25.51
% SQL with executions>1: 87.13 86.65
% Memory for SQL w/exec>1: 72.10 73.32

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
log file sync 151,590 3,353 48.99
CPU time 2,950 43.10
db file sequential read 106,775 333 4.87
log file parallel write 112,500 85 1.24
SQL*Net more data to client 391,963 46 .68
-------------------------------------------------------------
Wait Events for DB: LTS2 Instance: LTS2 Snaps: 25279 -25280
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file sync 151,590 0 3,353 22 0.8
db file sequential read 106,775 0 333 3 0.6
log file parallel write 112,500 60,482 85 1 0.6
SQL*Net more data to client 391,963 0 46 0 2.1
db file parallel write 1,359 0 29 22 0.0
enqueue 1,189 0 26 22 0.0
db file scattered read 40,922 0 16 0 0.2
latch free 1,164 1,037 2 2 0.0
buffer busy waits 6,016 0 1 0 0.0
log file switch completion 47 0 1 15 0.0
control file parallel write 422 0 0 1 0.0
LGWR wait for redo copy 1,783 0 0 0 0.0
direct path write 401 0 0 0 0.0
SQL*Net break/reset to clien 80 0 0 0 0.0
log file single write 6 0 0 4 0.0
row cache lock 5 0 0 3 0.0
direct path read 1,023 0 0 0 0.0
control file sequential read 380 0 0 0 0.0
log file sequential read 6 0 0 0 0.0
buffer deadlock 30 30 0 0 0.0
SQL*Net message from client 5,223,786 0 512,442 98 27.9
SQL*Net more data from clien 99,497 0 348 3 0.5
SQL*Net message to client 5,223,796 0 4 0 27.9
-------------------------------------------------------------
Background Wait Events for DB: LTS2 Instance: LTS2 Snaps: 25279 -25280
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file parallel write 112,513 60,490 85 1 0.6
db file parallel write 1,359 0 29 22 0.0
control file parallel write 422 0 0 1 0.0
LGWR wait for redo copy 1,783 0 0 0 0.0
db file scattered read 6 0 0 11 0.0
direct path write 401 0 0 0 0.0
log file single write 6 0 0 4 0.0
direct path read 1,023 0 0 0 0.0
control file sequential read 270 0 0 0 0.0
latch free 1 0 0 3 0.0
db file sequential read 8 0 0 0 0.0
log file sequential read 6 0 0 0 0.0
rdbms ipc message 175,198 61,794 4,345 25 0.9
smon timer 3 3 900 ###### 0.0
pmon timer 299 299 876 2930 0.0

So what do you suggest? The application is not committing every sql; it is transaction based, but since it relates to shares, stocks, IPOs etc., the commit rates are always high. We have 40,000 concurrent connections and will reach 60k (of course these come from 10gAS; the db has 500 dedicated-mode connections from 3 10gAS (java) clusters).
cheers



Tom Kyte
December 22, 2005 - 12:26 pm UTC

we cannot make your disks faster than they are. If your big shop made the strategic decision to go all "raid 5" knowing that -- well, raid 5 has never been proclaimed "the best thing ever for high throughput writes"....

I can only tell you what will make log file sync waits become "reduced"

o make redo disk faster
o do not commit as frequently (batch up)
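The second point can be illustrated like this (a hedged sketch -- table t is a hypothetical name, and batching assumes the rows really belong to one logical transaction):

```sql
-- slow: one log file sync wait per row committed
insert into t values ( 1 );
commit;
insert into t values ( 2 );
commit;

-- faster: one log file sync wait for the whole unit of work
insert into t values ( 1 );
insert into t values ( 2 );
commit;
```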



daniel, December 22, 2005 - 5:02 pm UTC

Well, I must ask you to please explain why raid5 is bad for writes -- and if not raid 5, then what raid should we use for redo logs and archived logs?

Then I can have a debate with my unix admin: even though they have configured the shark as raid5, they can always go back and reconfigure it to use another raid level.

What if I ask them to let me use raw devices on the shark storage (it would still be raid5) -- would that help?

And after seeing the stats report, do you really think it's worth changing the raid device the redo logs are on?

What about the question I have been asked -- to tune the db to perform 600 trans/sec instead of the current 200? What areas should I look into? Reduce logical I/Os? Redo logs? Etc.?
cheers


Tom Kyte
December 22, 2005 - 5:57 pm UTC

raid 5 is bad for writes by design -- it stripes and computes parity. It is well known for "not being the best thing for writing to".

This explains the various raid levels:

http://www.acnc.com/04_01_10.html

I do not believe that shark can be reconfigured, it is a "black box" as I understand it.

raid 5 is raid 5, you cannot hide it.


If your goal is to remove log file sync (I'd trace the truly important applications - not with statspack - to confirm THEY are log file sync "bound"), then

a) reducing the commit frequency
b) making the log devices faster

are your choices.



As for your last question - it is LIKELY that you need to attack this problem, you are not going to be committing anything any faster.

A reader, December 22, 2005 - 6:13 pm UTC

Well, I guess I'm stuck then, if I can't reconfigure the raid.

About the application design: well, the application is very simple. For every order placed by a client to buy or sell a stock, there are 10 executions (10 sql statements):
inserts into a log table, selects from other tables (5 select statements), then an insert into the order table, then after another couple of updates it commits.

Since the app is very, very oltp intensive and our customer base is increasing (as you know, Saudi Arabia recently joined the WTO, and lots of companies are now offering IPOs and their stocks to the public), there is a lot of liquidity and hence the load. Imagine: our aix db server has 12 cpus and 32 gb of ram, yet we are just OK, and we are not sure what will happen in the next couple of months. We are investing money in app servers (web farms, 10g clusters), but the db is a single point, so what shall we do for capacity planning?

We don't mind scaling up the db server with cpu or ram, but if we do not know anything about the throughput of the db, we will have problems. Even recently we have had two issues for which oracle (tar) confirmed we hit a bug; they issued us a one-off patch and will include it in 9208. This bug was fixed in 9204 but was missing from 9205 until now with us, and oracle has itself asked us not to upgrade to 9207 as it has several other bugs. In short, the problems we have had were library cache contention (parses) and cache buffers chains.
Anyway, sorry to give you the whole story, but the point is we need to do something to our db to increase the throughput, and we are open to all ideas.

We have noted that when you run a very high oltp app, the database features really do sometimes hit back at you.

For example, our cpu went 100% for 15 minutes last week (all 9 cpus). Later, after an oracle tar was opened, they discovered from the alert log that when we had this problem, oracle was bringing around 90 undo segments online (a new feature where oracle takes undo segments offline and online on demand) -- so they asked us to include one event in init.ora to get the same behaviour as 8i, of always having undo online.

The point is, there is no perfect world with oracle for high oltp transaction rates.


thanks

Tom Kyte
December 22, 2005 - 6:32 pm UTC

given increased user base - you have more income, buy "faster" disk.

The people buying the disk had to know what raid-5 means.

A reader, December 22, 2005 - 6:43 pm UTC

Yes, we are thinking of buying faster disks; there are solid state disks available as well.
Our db is almost entirely cached in memory (the cache hit ratio is perfect).

I was wondering if you could point me to any valid document that states RAID5 is not good at all for oracle databases. The people who bought the raid 5 now claim they bought it for the database, as raid 5 offers better read efficiency than any other raid level -- and yes, we do not have a read problem with the db, as you can see from the lack of scattered or sequential read waits.

I am even thinking of attaching a local raid 10 disk to the machine and placing the redo logs there, while the db can still reside on the shark.

Thank you for your input, cheers


Tom Kyte
December 22, 2005 - 8:07 pm UTC

I never said raid-5 wasn't good at all for Oracle databases.

In fact for some people, it does just fine.

However, it is not "the fastest for writing", and log file sync waits are the wait you get for synchronous IO (you have to *wait* for the write to finish).


In a data warehouse with lots of direct path writes during a load - index creates and such - raid 5 could be *death* for the load if used for data files

In a transactional system, with dbwr doing the writing in the background, raid-5 can many times be safely used for datafiles - but likely NOT for log (and archives) since you *wait* for that.

Hmmm

A reader, December 23, 2005 - 4:28 am UTC

According to the statspack info, 'log file parallel write' takes an average of 1 ms.
The average wait for 'log file sync' is 22 ms.
Why bother with getting faster IO? Something else must be wrong here, or am I missing something?

danielmex2010@yahoo.com, December 23, 2005 - 6:00 am UTC

Well, actually this was suggested by the oracle tar engineer: that our db is having waits on log file sync and that we should move our redo logs to faster disks.

If you look at the top 5 waits, log file sync tops the other waits, and this is a 15 minute statspack report -- the first 15 minutes after the trading market opens. Throughout the remainder of the trading time (90 more minutes), it is more or less the same: log file sync tops the waits.

I have a redo log buffer of 6MB and am not using multiple db writers, as aix is doing async io. However, as per tom, the lgwr write is synchronous; raid 5 is fine for data files but not for redo (which makes sense in his oltp and data warehousing examples). Still, I wonder: if raid5 is not good for writing but best for reading, would putting redo on raid10 not be a tradeoff? Because then the archiver process would probably be slower and come up as a top event.
Having redo on raid 5 may not be good for lgwr, but I guess it's best for the archiver. Our db is not in archive mode, by the way, and we plan to move it to archive mode to have dataguard for the dr site.

It seems that for log file sync, oracle has only one suggestion: batch the commits and/or use faster disks. I can't batch the commits as it is already optimised; you can't batch commits in a dynamic application like ours.
For faster disks, I wish I had some kind of proof or stats showing that if we move to raid10, log file sync will drop significantly. I just don't want to go for raid10 (or no raid at all) first and only then see whether I get any benefit, considering my statspack results.
If anyone is interested to know more, write to me at danielmex2010@yahoo.com

Cheers

Frequent commits

ravinder matte, February 03, 2006 - 12:37 pm UTC

Tom,

could you please give me a scenario where frequent commits cause log file sync waits,

with a neat example please.

Thanks Matte.

Tom Kyte
February 03, 2006 - 5:22 pm UTC

every time you commit, you wait for log file sync (outside of plsql anyway)

just run a script like this:

drop table t;
drop sequence s;
create table t ( x char(2000) primary key );
create sequence s;
alter session set events '10046 trace name context forever, level 12';
insert into t values ( s.nextval );
commit;
(repeat last two lines over and over...)

and you'll see it.

High CPU and Log file sync

Dhar, February 05, 2006 - 12:11 am UTC

Hi Tom,

Thanks for your invaluable service to the oracle community.

Apologies for not keeping my text short.

We are running Oracle 9i R2 (9.2.0.4), with MTS and dataguard configured in max performance mode (LGWR ASYNC=8192 NOAFFIRM REOPEN=1800). One application uses Pro*C code and the other is Java. The Java programs use prepared statements in 90% of the code. The Pro*C code has lots of hardcoding in it, and we are currently working on using bind variables; it uses dynamic SQL. It is a transactional system, and its peak usage is from Jan to May.

4 weeks back we moved our database from the old server to a new server which has more CPU and RAM (8 CPUs and 8GB RAM, IBM AIX).

There are severe performance problems. With 50 users the CPU idle time stays around 30 to 40%; when it crosses 100 users the CPU idle time goes down to single digits (sometimes to 0.3). I have been asked to give a quick fix for this. We are in CHOOSE mode, but tables and indexes are not analyzed (except a few tables).

I don't have a baseline of the performance on the old server, as I have just joined this team. No performance stats history is available. The same performance problem was faced 1 year back, but it was not resolved then (the h/w was thought to be the limitation).



==========================================================================================================
1.Statspack report below with 900MB SGA

21 min Statspack report

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 100M Std Block Size: 8K
Shared Pool Size: 368M Log Buffer: 512K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 281,417.89 2,119.57
Logical reads: 41,043.47 309.13
Block changes: 1,730.85 13.04
Physical reads: 6,441.87 48.52
Physical writes: 81.19 0.61
User calls: 7,744.30 58.33
Parses: 413.22 3.11
Hard parses: 24.16 0.18
Sorts: 1,226.31 9.24
Logons: 21.12 0.16
Executes: 4,907.86 36.96
Transactions: 132.77

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read 2,846,176 17,800 71.06
buffer busy waits 1,993,222 2,796 11.16
db file scattered read 694,178 1,782 7.12
log file sync 107,203 1,211 4.84
CPU time 1,070 4.27
-------------------------------------------------------------



The top 3 events were 'db file sequential read', 'buffer busy waits', and 'db file scattered read'. Our disk IO was high, with many physical reads. We increased the SGA from 900MB to 2GB (as we had more than 4GB of RAM unused). Now the top events are CPU time and log file sync. We increased the SGA to 3GB, but it made no difference in performance, so we went back to the 2GB SGA.

==========================================================================================================

Current setup:
2. Statspack report with 2GB SGA

Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 103 02-Feb-06 12:54:34 158 8.4
End Snap: 104 02-Feb-06 13:17:17 146 8.2
Elapsed: 22.72 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 500M Std Block Size: 8K
Shared Pool Size: 1,024M Log Buffer: 512K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 339,685.68 3,384.02
Logical reads: 58,225.31 580.05
Block changes: 2,138.09 21.30
Physical reads: 62.85 0.63
Physical writes: 36.47 0.36
User calls: 10,618.45 105.78
Parses: 449.59 4.48
Hard parses: 27.10 0.27
Sorts: 2,104.74 20.97
Logons: 22.51 0.22
Executes: 6,049.66 60.27
Transactions: 100.38

% Blocks changed per Read: 3.67 Recursive Call %: 6.25
Rollback per transaction %: 16.82 Rows per Sort: 3.72

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 100.00
Buffer Hit %: 99.90 In-memory Sort %: 100.00
Library Hit %: 99.34 Soft Parse %: 93.97
Execute to Parse %: 92.57 Latch Hit %: 99.49
Parse CPU to Parse Elapsd %: 20.99 % Non-Parse CPU: 96.24

Q1. <<Parse CPU to Parse Elapsd %: 20.99>> -- is this an indication of a problem?

Sorts per second have gone up from 1,226.31 (900MB SGA) to 2,104.74 (2GB SGA), even though transactions per second have been lower. What could be the reason for that? No code change was made.
Q2. Our sort area size is 1MB. Is this too small? As we are on MTS, what would be the impact of increasing this value (say to 5M)?

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 1,477 66.56
log file sync 125,155 441 19.89
db file sequential read 24,093 115 5.17
latch free 23,701 59 2.66
LNS wait on SENDREQ 482 48 2.16

==========================================================================================================

Q3. How do I identify if this CPU time is really killing our system?
The CPU time used during this interval of 22.72 min is 1477/60 = about 24.6 min across 8 CPUs, i.e. roughly 3 min/CPU. Is this calculation correct?

How do I reduce the log file sync waits? I read at the top of this thread that tuning this might not help much on overall performance, and that I should be looking into the application rather than system tuning.

Can you please give me a pointer on how to proceed -- where would be the right place to start?

Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file sync 125,155 0 441 4 0.9
db file sequential read 24,093 0 115 5 0.2
latch free 23,701 1,623 59 2 0.2

LNS wait on LGWR 122,915 0 1 0 0.9
library cache lock 1 0 1 1341 0.0
control file sequential read 1,979 0 1 1 0.0
buffer busy waits 4,350 0 1 0 0.0

log file parallel write 114,130 113,866 0 0 0.8


In "SQL ordered by Gets", the following is on top. What does this mean? Is this query beating up the system? Should I start tuning with this query?

CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
15,667,433 1,078 14,533.8 19.7 236.74 370.80 1536701922

Thanks in Advance for your time.

Dhar.

Tom Kyte
February 06, 2006 - 12:30 am UTC

...(thought h/w was the limitation)....

great, perfect. "we have no idea what the cause of the performance issue is, but if we add tons of hardware - it will surely go faster"

If and only if you were actually suffering from lack of hardware resources!

You are the perfect example of why you need to identify the cause of your performance issue before buying new stuff to solve an unknown problem.


You have a bad hard parse problem (look at the latch report of your statspack, bet you have lots of misses - maybe not SLEEPS but misses on library cache latch).

When Oracle wants a latch and cannot get it, we "spin", that is -- we eat cpu trying to get the latch. You have 100 people all trying to hard parse at the same time (and they cannot -- they need to modify this thing called a SHARED POOL -- not "bob's pool", not "mary's pool").

So, they get in line and wait for each other -- spinning on latches trying to get them. The only problem is 100 people are trying to do this -- perhaps bumping the latch holder off of the cpu (so that you have 8 sessions eating cpu waiting for the 9th session to give up the latch, but it cannot since they are on the cpu currently).

Every cpu you add will just go to 100% utilized, until you have more cpus than users.

The fix: the only fix: the only way to fix things: determine the problem, fix the problem.

Given the extremely limited information above - I would guess "hard parse problem, eating your cpu, serializing things imposing a major scaling issue that cannot be solved (in fact may be made WORSE) by adding hardware"

I can imagine shared server making this even incredibly worse. (100 users? On a machine of that size - very little chance I would be looking at shared server)
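The literal-vs-bind difference described here looks like this (a sketch with a hypothetical table and column; this is not the poster's actual Pro*C code):

```sql
-- every distinct literal is a brand new statement: a hard parse,
-- library cache latching, shared pool churn
select * from orders where order_id = 12345;
select * from orders where order_id = 12346;

-- one shared statement, soft parsed after the first execution
select * from orders where order_id = :order_id;
```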




High CPU and Log file sync

dhar, February 06, 2006 - 7:16 pm UTC

Hi Tom,

Thanks for the feedback - it is very useful. I am currently working with the developers to change the code to use bind variables.

Well... I'm happy that I was not part of the team that thought h/w was the limitation (though now I am on that team).

<<You have a bad hard parse problem >>
Did you say this because you saw 'latch free' 23,701 in the statspack report?

I have listed part of the Latch Activity and Latch Sleep info below. Pct Get Miss on the library cache is 0.9% and on the shared pool 2.1%.

(library cache 28,113,098 0.9 0.0 14 113,419 4.2)

Latch Activity for DB: PROD Instance: prod Snaps: 103 -104

Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
..
cache buffers chains 142,989,144 0.1 0.0 0 179,404 0.0
cache buffers lru chain 49,236 0.0 0.0 0 305,232 0.0
checkpoint queue latch 254,014 0.0 0 40,459 0.0
dml lock allocation 338,929 0.1 0.0 0 0
enqueues 317,329 0.1 0.0 0 0
kmcptab latch 21,166 0.0 0.0 0 0
kmcpvec latch 0 0 21,003 0.8
lgwr LWN SCN 114,107 0.0 0.0 0 0
library cache 28,113,098 0.9 0.0 14 113,419 4.2
library cache load lock 1,418 0.0 0 0
library cache pin 22,534,699 0.2 0.0 1 0
library cache pin alloca 3,303,361 0.0 0.0 0 0
list of block allocation 46,456 0.0 0.0 0 0
loader state object free 1,264 0.0 0 0
longop free list parent 2 0.0 0 2 0.0
message pool operations 245,452 0.0 0.0 0 0
messages 765,428 0.0 0.0 0 0
mostly latch-free SCN 114,707 0.2 0.0 0 0
multiblock read objects 14,930 0.0 0 0

post/wait queue 183,533 0.0 0.0 0 124,926 0.0

redo allocation 1,707,756 0.3 0.0 0 0
redo copy 0 0 1,480,651 0.1
redo writing 343,813 0.0 0 0
row cache enqueue latch 2,184,421 0.1 0.0 0 0
row cache objects 2,365,537 0.1 0.0 0 537 0.0

sequence cache 1,236,673 0.3 0.0 0 0
session allocation 498,313 0.2 0.0 0 0
session idle bit 29,157,632 0.1 0.0 0 0

shared pool 28,134,595 2.1 0.0 14 0
sort extent pool 1,843 0.0 0 0

transaction allocation 46,267 0.0 0 0
transaction branch alloc 26 0.0 0 0
undo global data 482,627 0.0 0.0 0 11 0.0
user lock 122,728 0.1 0.0 0 0
virtual circuit buffers 131,877,249 0.4 0.0 8 0
virtual circuit queues 58,456,804 1.0 0.0 11 0
virtual circuits 29,020,993 1.2 0.0 5 0
-------------------------------------------------------------

Latch Sleep breakdown for DB: PROD Instance: prod Snaps: 103 -104
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
virtual circuit queues 58,456,804 592,187 6,134 586056/6128/
3/0/0
shared pool 28,134,595 582,230 6,876 575614/6417/
162/37/0
virtual circuit buffers 131,877,249 519,346 3,074 516273/3072/
1/0/0
virtual circuits 29,020,993 361,631 1,279 360352/1279/
0/0/0
library cache 28,113,098 239,204 4,429 234804/4371/
29/0/0
cache buffers chains 142,989,144 135,784 1,210 0/0/0/0/0
library cache pin 22,534,699 40,372 244 40129/242/1/
0/0
session idle bit 29,157,632 18,145 471 0/0/0/0/0
redo allocation 1,707,756 4,316 18 4298/18/0/0/
0
sequence cache 1,236,673 3,290 2 3288/2/0/0/0
row cache objects 2,365,537 1,734 5 1729/5/0/0/0
row cache enqueue latch 2,184,421 1,254 4 1250/4/0/0/0
session allocation 498,313 795 12 783/12/0/0/0
library cache pin allocati 3,303,361 586 4 582/4/0/0/0
enqueues 317,329 281 2 279/2/0/0/0
user lock 122,728 130 2 128/2/0/0/0
undo global data 482,627 46 1 45/1/0/0/0
-------------------------------------------------------------


Latch Miss Sources for DB: PROD Instance: prod Snaps: 103 -104
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------

library cache kglupc: child 0 1,634 819
library cache kglpnc: child 0 1,480 2,795
library cache kgllkdl: child: cleanup 0 256 22
library cache kglpndl: child: before pro 0 209 79
library cache kglpndl: child: after proc 0 193 2
library cache kglpnp: child 0 139 147
library cache kglobpn: child: 0 92 102
library cache kglhdgn: child: 0 90 258
..
library cache pin kglpnc: child 0 104 139
library cache pin kglupc 0 94 73
library cache pin kglpndl 0 28 7
library cache pin kglpnal: child: alloc spac 0 10 13

shared pool kghalo 0 2,918 3,019
shared pool kghupr1 0 1,711 1,935
shared pool kghfre 0 1,553 1,912
shared pool kghfrunp: alloc: wait 0 670 0
shared pool kghfrunp: clatch: wait 0 111 79
shared pool kghfrunp: clatch: nowait 0 67 0

virtual circuit queues kmcadq 0 4,362 5,999
virtual circuit queues kmcfsq 0 1,772 135
virtual circuits kmcgms: if shared server 0 1,368 3,033
virtual circuits kmcgms: if called with non 0 899 2
virtual circuits kmcgms 0 643 871
virtual circuits kmcmbuf: move ckt 0 627 396

Yes, I agree. There is a lot of hard parsing. There are lots of queries like the one below:

'FROM TABLE i WHERE ((i.type = 'A' AND i.stype = 'B') OR (i.type = 'C' AND i.stype = ' ')) AND i.code = 'ABC' AND i.ccode = 'PER' AND (i.sdate <= 1145160000 AND 1145160000 <= i.edate) AND (i.sflg = 'Y' OR 1145592000 <= i.edate) AND

and it goes on for another half page. Most of them use hard-coded literals; the values are generated at run time. The application uses dynamic SQL.

I educated the developers on the benefits of using bind variables (as learned from this site). I ran a test using bind variables on sample application queries. It did not make much difference in elapsed time, but it made a huge difference in latch activity on the library cache. The developers are modifying the code to use binds.

There are lots of queries; around 60% of them use one single table that has 90 columns in it (one column is varchar2(2048), one varchar2(1024), one varchar2(512), and 6 columns are varchar2 larger than 50).

The select clause goes this way: all 90 columns are picked up, and every nullable column is wrapped in the NVL function.

SELECT col1,col2,col3, nvl(col4, ' '),nvl(col5,' ') ,col6 ..
....coln, nvl(coln+1,' '),nvl(coln+2, ' '),..
...col89,nvl(col90,' ')
'FROM TABLE i WHERE ((i.type = 'A' AND i.stype = 'B') OR (i.type = 'C' AND i.stype = ' ')) AND i.code = 'ABC' AND i.ccode = 'PER' AND (i.sdate <= 1145160000 AND 1145160000 <= i.edate) AND (i.sflg = 'Y' OR 1145592000 <= i.edate) AND

Having this type of SQL (many NVLs in the select clause), would it add load to the system?

Thanks,
Dhar.


Tom Kyte
February 07, 2006 - 1:22 am UTC

you told me you had a bad hard parse problem.
you have lots of hard parses per second.
your soft parse % is low.

that is what led me to a "hard parse problem"


having "no binds" is what adds to the load on the system. nvl is cheap; no binds is expensive.
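To illustrate the fix, here is a sketch of the literal query from the post rewritten with binds (the bind names `:code` and `:as_of` are only illustrative, and the select list is elided as in the original):

```sql
-- Before: the timestamp literals differ on every execution, so
-- every execution is a brand new statement and a hard parse.
SELECT ... FROM t i
 WHERE i.code = 'ABC'
   AND i.sdate <= 1145160000 AND 1145160000 <= i.edate;

-- After: one shared cursor in the library cache; each execution
-- is a cheap soft parse regardless of the values supplied.
SELECT ... FROM t i
 WHERE i.code = :code
   AND i.sdate <= :as_of AND :as_of <= i.edate;
```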

High CPU and Log file sync

Dhar, February 07, 2006 - 12:13 pm UTC

Hi Tom,

Thanks for your clarification and excellent approach to the problems (as always).

It's always an eye opener and a learning experience for me (as I guess it is for most readers), whatever thread I read on this site.


Thanks for your time.

Dhar.

Interesting subset of STATS...

A reader, February 07, 2006 - 9:05 pm UTC

It's kind of intriguing that logical reads per second went up ~50% (excuse my fuzzy math) but the other "per second" stats only went up ~25% or less, while "Transactions" (aka "user commits") went down. It's sort of unlikely that just changing the SGA size might affect an execution plan, but the last poster may want to compare Top SQL in the BEFORE and AFTER reports to see if there's some change in user activity. Perhaps greatly reducing physical IO waits allowed some other "read only" interactions to increase throughput (possibly even in a runaway fashion -- thus "robbing" CPU from other processes like LGWR and LNS). Ideally, you'd have before and after OS stats on CPU usage.

High CPU and Log file sync - stats

Dhar, February 08, 2006 - 12:53 pm UTC

Hi Reader,

I think the increase in SGA size (buffer cache increase) reduced the physical IO to a great extent, and it now shows up as logical reads.

The cache buffers chains misses were higher with the 900MB SGA (refer below), but the library cache stays the same. CPU time was at the bottom of the top 5 events, but the numbers remain almost the same.
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
cache buffers chains 100,069,455 1,253,466 29,711 0/0/0/0/0
shared pool 21,384,191 380,254 2,453 377920/2241/
74/19/0
virtual circuit queues 39,710,107 358,318 2,773 355546/2771/
1/0/0
virtual circuit buffers 89,566,504 342,726 1,285 341444/1279/
3/0/0
virtual circuits 19,704,246 216,011 483 215529/481/1
/0/0
library cache 22,596,303 178,220 1,138 177088/1127/
4/1/0
library cache pin 17,782,325 31,307 53 31255/51/1/0
/0

Tom,

I missed (cache miss!) asking one question about your reply.

<< I can imagine shared server making this even incredibly worse. (100 users? On a machine of that size - very little chance I would be looking at shared server)>>

Can you please elaborate on this (shared server making things worse)?

I have read in your book that MTS suits systems with short transactions that come in and go. (Here our connections come to the database and go away in 1 to 5 seconds.)

Thanks,
Dhar.


Tom Kyte
February 09, 2006 - 4:42 am UTC

in a hard parse situation - once someone jumps onto a shared server, they own it until they are done. If you have contention for a resource (the library cache latch), and the person on the shared server cannot get it, well, you'll have one shared server sitting there waiting for that latch and not doing anything else.

that and shared server is slower than dedicated server in general - much longer code path.

shared server is ONLY appropriate in those types of systems and ONLY WHEN you need to use it to get "one more session logged in". That is what I said in the book(s)...


...
Dedicated/Shared Server Wrap-up
Unless your system is overloaded, or you need to use a shared server for a specific feature, a dedicated server will probably serve you best. A dedicated server is simple to set up (in fact, there is no setup!) and makes tuning easier.
............


If you re-read the sections I wrote on shared server (MTS is the wrong name) - you will see I say "it is ONLY appropriate in OLTP with very short transactions", which is very different from saying "use it if you have very short transactions". I was saying the only time you can even consider using it is with very short transactions (and even then there are issues as I point out)
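For reference, a client can request a dedicated server even while dispatchers remain configured, by adding SERVER=DEDICATED to the connect descriptor - a low-risk way to test dedicated connections before removing shared server. The alias, host, and service name below are made up:

```
-- tnsnames.ora entry forcing a dedicated server connection,
-- even if the instance is running dispatchers (shared server)
PROD_DEDICATED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = prod)
      (SERVER = DEDICATED)))
```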


High CPU and Log file sync - stats

Dhar, February 09, 2006 - 4:49 pm UTC

Tom,

Thanks for the clarification.

I’m going to do lots of homework on why shared server was configured in the first place, and find out if we could go back to dedicated server. I’m also going to read your book Effective Oracle by Design - 'completely'.

Thanks Tom.

Dhar.

Question

Mark, April 05, 2007 - 7:48 am UTC

This is an extract of the STATSPACK for 1hr duration on Oracle 9i.

Top 5 Physical Reads per Segment for DB: ARSDP1 Instance: ARSDP1 Snaps: 28122
-> End Segment Physical Reads Threshold: 1000

Subobject Obj. Physical
Owner Tablespace Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
ARADMIN ARSYSTEM T2217 TABLE 248,435 99.79
ARADMIN ARADMIN_LO SYS_LOB0000242063C00 LOB 202 .08
ARADMIN ARSYSTEM SYS_LOB0000011985C00 LOB 134 .05
ARADMIN ARSYSTEM T2216 TABLE 45 .02
PERFSTAT TOOLS STATS$SQL_PLAN_USAGE TABLE 31 .01
-------------------------------------------------------------

Does 99.79% mean that, out of 1 hour, 99.79% of the time was spent on physical reads? If that is the case, should I worry?

Here is some information about the table.

select table_name, num_rows, chain_cnt from all_tables where table_name = 'T2217';
TABLE_NAME NUM_ROWS CHAIN_CNT
------------------------------ ---------- ----------
T2217 713305 38860

select count(*) from t2217
COUNT(*)
----------
698435

The table has 101 columns.


Tom Kyte
April 05, 2007 - 11:05 am UTC

ARADMIN    ARSYSTEM   T2217                TABLE    248,435  99.79
ARADMIN    ARADMIN_LO SYS_LOB0000242063C00 LOB          202    .08
ARADMIN    ARSYSTEM   SYS_LOB0000011985C00 LOB          134    .05
ARADMIN    ARSYSTEM   T2216                TABLE         45    .02
PERFSTAT   TOOLS      STATS$SQL_PLAN_USAGE TABLE         31    .01 


99.79% of your reads took place against that table.

It has nothing to do with "time", rather "how many reads did you do"
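If you want to see the same breakdown live rather than from a snapshot (9i and later), `v$segment_statistics` exposes per-segment read counts since instance startup:

```sql
-- Top 5 segments by physical reads since instance startup
SELECT *
  FROM (SELECT owner, object_name, object_type, value
          FROM v$segment_statistics
         WHERE statistic_name = 'physical reads'
         ORDER BY value DESC)
 WHERE ROWNUM <= 5;
```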

logfile sync

A reader, January 10, 2008 - 4:50 pm UTC

Tom:

I can wait till you are taking new questions, but my question is related to log file sync.

We have 10 processes writing to the same set of tables, and the current freelists/freelist groups setting is 1. We are about to increase freelists/freelist groups to more than 1 to speed up the inserts. Currently there is a huge wait on log file sync by LGWR. By increasing the freelists, will we make our situation much worse? I have done a small demo on a test box, but could not really simulate the actual production environment.

Thanks in Advance,

Tom Kyte
January 10, 2008 - 5:34 pm UTC

if your main wait is log file sync, and you are not waiting on freelists, why increase them.

I doubt it will do anything.

The only way to reduce the log file syncs is:

a) commit less often (gets my vote!)
b) make your log devices faster, so lgwr is faster


one possible side effect of more freelists will be that the inserts happen faster but really bunch up big time at the redo flush stage - compounding an existing problem making it worse.

but...

A reader, January 11, 2008 - 7:17 am UTC

Tom:

In response to your suggestion for the previous question:

1. Say 6 processes (out of 10) are waiting for resources to insert into the same table. This is a concurrency issue (currently the case in our instance).

2. So if we increase freelists and freelist groups, we increase redo and reduce the concurrency issue. We increase the log file sync wait, but when LGWR flushes to disk, the piggyback commits will help us over the long run, right, in spite of an increase in log file sync waits?

Thanks in Advance,
Tom Kyte
January 11, 2008 - 8:19 am UTC

2) maybe - only one way to find out. Could be that lgwr is at capacity right now.


and unless you are waiting hugely on the freelists (ASSM would be the way to go, not manual segment space management).... I'm doubtful.

One Commit for One Transaction

prthvira, December 02, 2009 - 4:22 am UTC

Hello Tom,

Environment : OS - RHEL4.5 DB – ORACLE 10.2.0.4 EE

We are having 80% of wait time on log file sync.
Transactions per second = 3,051.17
Total seconds during the report = 116.48 x 60 = 6,988
Total transactions during the report duration = 6,988 * 3,051 = 21,320,388

Total user commits = 21,324,617
Accordingly, commits per transaction should be 21,324,617 / 21,320,388 = 1.0001

As user commits and transactions are almost equal, what areas should we look into other than moving the redo logs to faster disks?



Relevant portion of statspack report is given below:-

Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 19324 01-Dec-09 05:59:08 469 .2
End Snap: 19325 01-Dec-09 07:55:37 468 10.5
Elapsed: 116.48 (mins)

Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 3,408M Std Block Size: 8K
Shared Pool Size: 624M
Log Buffer: 13,967K

Load Profile Per Second Per Transaction
~~~~~~~~~~~~ --------------- ---------------
Redo size: 2,032,593.80 666.17
Logical reads: 223,768.82 73.34
Block changes: 15,825.49 5.19
Physical reads: 2.68 0.00
Physical writes: 109.95 0.04
User calls: 3,904.68 1.28
Parses: 113.41 0.04
Hard parses: 0.07 0.00
Sorts: 9,164.10 3.00
Logons: 0.08 0.00
Executes: 3,518.87 1.15
Transactions: 3,051.17

% Blocks changed per Read: 7.07 Recursive Call %: 2.61
Rollback per transaction %: 0.00 Rows per Sort: 1.17

Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.92
Buffer Hit %: 100.00 In-memory Sort %: 100.00
Library Hit %: 99.99 Soft Parse %: 99.93
Execute to Parse %: 96.78 Latch Hit %: 97.91
Parse CPU to Parse Elapsd %: 35.13 % Non-Parse CPU: 99.81

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 30.78 37.17
% SQL with executions>1: 58.62 79.19
% Memory for SQL w/exec>1: 78.05 80.05

Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
log file sync 21,302,870 548,936 26 80.2
CPU time 34,337 5.0
latch: cache buffers chains 721,734 32,979 46 4.8
latch free 501,796 24,084 48 3.5
latch: enqueue hash chains 299,151 14,496 48 2.1
-------------------------------------------------------------




Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
--------------------------------- ------------ ------ ---------- ------ --------
log file sync 21,302,870 0 548,936 26 1.0
latch: cache buffers chains 721,734 0 32,979 46 0.0

user commits 21,324,617 3,051.2 1.0





Tom Kyte
December 02, 2009 - 7:06 am UTC

commit less frequently - if you are committing tiny bits of your transaction - correct that.

otherwise, 26ms is not horrible.

but the best way to reduce log file sync waits is to not do the thing that causes them. If you are doing batch operations of any sort - make the batches much larger and they will literally disappear.
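A sketch of the change being suggested (table names are made up): move the commit out of the row-by-row loop, so one log file sync wait covers the whole batch instead of one wait per row.

```sql
-- Before: one "log file sync" wait per row
BEGIN
  FOR r IN (SELECT * FROM staging) LOOP
    INSERT INTO target VALUES r;
    COMMIT;                      -- waits for LGWR on every row
  END LOOP;
END;
/

-- After: a single statement and a single commit -
-- one log file sync for the entire batch
BEGIN
  INSERT INTO target SELECT * FROM staging;
  COMMIT;
END;
/
```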

Top 5 Timed Events

Daniel, March 05, 2010 - 4:27 am UTC

Hi Tom,

Your answer is superb. But what I would suggest is to separate your articles by version and topic (i.e. wait events). I am not able to navigate your site easily, but I get tons and tons of interesting articles from it. I am also one of your Oracle DBA fans.

Regards,
Daniel

Timed Events

Daniel, March 06, 2010 - 5:07 am UTC

Somehow this link always takes me to the review comment form.

Wait Events

Daniel, March 06, 2010 - 10:27 pm UTC

I have already posted many comments
Tom Kyte
March 09, 2010 - 10:49 am UTC

you have posted one, a suggestion for me to spend hundreds of hours attempting to classify 11,000 documents. What are you looking for exactly?

Jess, July 20, 2012 - 5:48 pm UTC

Hi Tom,

We are observing the following while executing code under test:

Event Waits TotalWaitTime(s) AvgWait(ms) Waits/txn
log file parallel write 10,355,725 4,096 0 0.98
log file sync 191,080 278 1 0.02

Redo size is
Per Second Per Transaction
746,556.21 1,524.85

My first thought was "too many too frequent commits", but then having read this thread it seems that others see high waits in log file sync (higher than log file parallel write waits), which isn't the case here.

Is the reason the parallel write wait appears larger simply that all those 1 ms log file sync waits add up, even though the overall log file sync time does not appear to be that large - or is there something else at play?

Thank you
Tom Kyte
July 30, 2012 - 8:06 am UTC

lgwr waits for log file parallel write

YOU wait for log file sync


If log file parallel write is large, but log file sync is small - that means you are probably running some pretty good sized transactions. You generate redo for a bit (and lgwr is constantly writing this out in the background, hence it is waiting for log file parallel write - but you are NOT waiting for a thing). You then commit this pretty good sized transaction and then you wait for log file sync.

1ms is phenomenal!


Now, if it *makes sense* in this particular application - you could investigate the use of asynchronous commits:

http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_4010.htm#SQLRF53775

but make sure you understand what that means (it basically means we might tell an application 'your commit is done' - but it isn't and if the database crashed right then and there - that transaction might become UNDONE, rolled back, upon restart. You have to make sure this is OK with you before using this)


but for a 1ms wait, I doubt it would change throughput much.
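The asynchronous commit linked above looks like this in 10g and later; as warned, a crash can silently undo a "completed" transaction, so it only suits work that can tolerate or replay the loss (the table name is illustrative):

```sql
INSERT INTO event_log (msg) VALUES ('processed item');

-- Return immediately; LGWR flushes the redo in the background.
-- If the instance crashes before that flush, this transaction is
-- rolled back on recovery even though COMMIT already returned.
COMMIT WRITE BATCH NOWAIT;
```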

Jess, July 22, 2012 - 9:17 am UTC

Hi Tom,

After I left the comment above, we took out the per-row commits and changed the code to commit in 1,000-row batches. Log file sync and sequential reads went down, but db file sequential read and log file parallel write went up, and throughput decreased dramatically (50x). I don't understand why removing per-row commits would make performance perceptibly worse. Do you have any ideas as to what may be happening here?

Thanks again.
Tom Kyte
July 30, 2012 - 8:35 am UTC

because as you remove one bottleneck, another more severe one could arise.


Let's say, for example, you have a system with 500 "concurrent" sessions (you won't, you cannot, not unless you really have 500 cpus). But let's say you do...

right now, you are experiencing IO waits all over the place (you are in effect using IO as a resource manager, to make sure all 500 sessions do not try to use the cpu at the same time). So, you have a ton of IO waits.

Now, let's say you buy super speedy disk - it has a 0ms response time (it is instantaneous). You remove ALL of the IO waits - 100%

You'll go faster right?

probably not, almost certainly not. what you'll do in this case is have a severe nuclear meltdown of your server.


All of a sudden - no more IO wait. So, what happens? Everyone tries to get on a cpu - but they cannot (you don't have 500 cpus). So, cpu waits go way, way up. The machine is now 100% utilized at the cpu level (you cannot even type "ls" at the console; the machine appears to be frozen).

and then someone holding a latch gets kicked off the cpu. in fact, many latch holders do. And they then have to wait until the other 400-odd sessions all eat up cpu trying to get that latch (so all of a sudden the amount of cpu time you need to process a transaction goes up by orders of magnitude).

But you are totally out of cpu.

how to fix that? If you never wait on IO, reduce the number of connections - get the number of active sessions down to be around 1 to 2 times the number of CPUs

see:

http://www.youtube.com/watch?v=xNDnVOCdvQ0

for an example of this, on Exadata - IO isn't the bottleneck, cpu is.


So, you removed one wait event that was acting as a resource manager. If you are not waiting on a commit - you are free to do other stuff.

And it appears you are now waiting on db file sequential reads - which appear to be influencing your log write times (but that doesn't matter since lgwr is waiting on those, NOT YOU).

but you wait on db file sequential reads.


sounds like you are doing slow by slow by slow processing (all index reads) - that's bad.


You might have to decrease the concurrency (not a bad idea) to increase throughput.

but if you want things to go orders of magnitude faster - you'll bulk up your operations - using a single sql statement in place of hundreds or thousands or millions of executions of a teeny tiny silly little piece of SQL.

watch this for example:

http://www.youtube.com/watch?v=2jWq-VUeOGs

Jess, July 31, 2012 - 4:59 am UTC

Thanks as always, Tom. That makes sense. However, I am not sure we can optimise past going row-by-row. The code is essentially dequeuing into a table:

while there are still records in the queue loop
    DBMS_AQ.dequeue([options, payload is a message]);
    commit;
    insert into temp_table ([values for 1 message]);
    commit;
end loop;

Whether or not we leave the AQ commit in, the processing is fast. That said, we took it out because there is no reason to have it, and it appears to be generating control file sequential reads.

The second commit is where we're puzzled. If we leave it as is--committing every record--the processing is fast (about 2.5M records per hour). If we take out that commit and replace it with a check to commit once every 1000 records, the speed drops dramatically (to about 400K records per hour).

What you're saying makes sense, but this situation just seems counter-intuitive. How can committing every row cause the overall throughput to be better?

Tom Kyte
July 31, 2012 - 12:43 pm UTC

trace it and tkprof it. that'll go a long way to helping us figure it out.

I'm going to blame the queue - which is designed to

a) get a message
b) process message
c) commit

it could be an implementation issue with the dequeue inside of a big transaction.
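A minimal way to produce the trace being asked for (10g and later): enable tracing in the session that runs the dequeue loop, run the workload, then format the resulting trace file. The file names here are placeholders:

```sql
-- In the session to be traced:
EXEC DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => TRUE);

-- ... run the dequeue/insert workload ...

EXEC DBMS_MONITOR.session_trace_disable;

-- Then, on the server, format the trace file, e.g.:
--   tkprof <tracefile>.trc out.prf sys=no sort=exeela,fchela
```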

log file sync vs partitioning

Diwakar Krishnakumar, July 22, 2019 - 11:45 pm UTC

Hi Tom,
We are incurring the log file sync wait event on the database during a specific time frame in the AWR, and we are trying to optimize it.
I can also see that the wait event is the result of commits on the database.
Now my question is: would partitioning the table (the one experiencing that wait event) make the wait any lighter on the database? Would that make any difference?
Please let me know, as this would be really helpful.

Regards,
K.Diwakar
Chris Saxon
July 23, 2019 - 12:23 pm UTC

Unless partitioning enables you to make changes that reduce the redo you generate (e.g. switching a delete statement to drop partition), no.

log file sync vs partitioning

Diwakar Krishnakumar, July 23, 2019 - 6:44 pm UTC

Hello Sir,

Thank you for getting back to me on this.
From what you have mentioned, is my understanding below correct:

1) Log file sync wait event has no impact with partitioning - in the current form.

Though the changes are not going to affect the redo quantity, the expectation is that redo generation (changes on the respective object) will be quicker, as we are dealing with the respective partition instead of the whole table...

Can you please point to some examples in line with what you have mentioned:

1) switching a delete statement to drop partition --> Do you mean instead of a delete, drop the entire partition?
2) Also what about the updates - any way to deal with them?
3) partitioning to reduce the redo - any other scenarios/examples available?

Connor McDonald
July 25, 2019 - 2:40 am UTC

1) Log file sync wait event has no impact with partitioning - in the current form.


Partitioning in itself does not change redo, but moving to partitioning means that in some instances you should be able to do "truncate" or "drop" partition *instead* of delete, which means near zero redo for those operations.

2) Also what about the updates - any way to deal with them?


Not without significant changes, e.g., changing to direct load of altered rows rather than in-place updates.

3) partitioning to reduce the redo - any other scenarios/examples available?


See (1)
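A sketch of the swap described in (1), assuming a range-partitioned table (the table and partition names are made up):

```sql
-- Conventional delete: redo and undo generated for every row removed
DELETE FROM sales WHERE sale_date < DATE '2019-01-01';

-- Partition DDL: the same rows disappear with near-zero redo
ALTER TABLE sales DROP PARTITION p_2018;

-- or, to keep the partition definition but empty it:
ALTER TABLE sales TRUNCATE PARTITION p_2018;
```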

log file sync vs partitioning

Diwakar Krishnakumar, July 25, 2019 - 6:28 pm UTC

Awesome and perfect :) thank you.
