
Question and Answer

Chris Saxon

Thanks for the question, Albert.

Asked: April 19, 2007 - 2:48 pm UTC

Last updated: July 15, 2019 - 12:25 pm UTC

Version: 10.2.0.2

Viewed 50K+ times!

You Asked

Hi Tom,

Lucky I've got a chance again.

Can you please explain the db time now shown in statspack reports (10g)? The explanation I know of is: it is the time spent in the database since instance startup. Can you please elaborate on this?

I can see that the db time shown in statspack is more than the elapsed time. For example, a statspack report of snaps taken 15 minutes apart shows db time greater than 15 minutes. How is this possible?

Thanks always.

Regards,

Albert Nelson A.

and Tom said...

db time is the time spent in the database - however, you probably have more than one core...

if you have 2 cpus, you have 2 cpu seconds for every 'real second', for example.


It is sort of like looking at wait events. How can you have 30 minutes of enqueue wait in 15 minutes? Easy - 30 people waited one minute apiece, or two people waited 15 minutes or ......

You have more than one thing contributing "time" to the database.
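As a quick illustration (a minimal sketch, assuming 10g or later - v$sys_time_model reports in microseconds), you can watch the two headline numbers yourself:

select stat_name, round(value/1000000) seconds
  from v$sys_time_model
 where stat_name in ('DB time', 'DB CPU');

with many sessions active at once, DB time can grow by several seconds for every one second of wall clock time.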


Comments

DB TIME - DB CPU

Keyur, July 20, 2007 - 4:47 pm UTC

Hey Tom,
I have an AWR report, and in that report the Time Model Statistics section shows the following numbers for cpu. I have 8 CPUs.
Following are the details of the elapsed time between the two snaps and the total CPU time:

Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 19696 20-Jul-07 10:10:53 41 8.8
End Snap: 19697 20-Jul-07 10:20:55 41 8.7
Elapsed: 10.03 (mins)
DB Time: 9.62 (mins)

Statistic Name Time (s) % of DB Time
sql execute elapsed time 574.54 99.55
DB CPU 568.34 98.48
PL/SQL execution elapsed time 32.31 5.60
parse time elapsed 0.24 0.04
hard parse elapsed time 0.06 0.01
connection management call elapsed time 0.06 0.01
sequence load elapsed time 0.00 0.00
failed parse elapsed time 0.00 0.00
repeated bind elapsed time 0.00 0.00
DB time 577.13
background elapsed time 79.05
background cpu time 13.49

How come "sql execute elapsed time" and "DB CPU" are both nearly 99% of DB Time ? Total of both is greater than total DB time. How can I find Who is using "sql execute elapsed time" and "DB CPU" ? Actually What is "sql execute elapsed time" ? I look at latch and there is nothing significant, most of latch has 0.00 as Pct Get Miss/ Pct NoWait Miss.

The Wait Class section indicates the following:

Wait Class Waits %Time -outs Total Wait Time (s) Avg wait (ms) Waits /txn
System I/O 8,669 0.01 76 9 4.67
User I/O 6,484 0.00 19 3 3.50
Network 28,339 0.00 11 0 15.28
Commit 166 0.00 2 13 0.09
Concurrency 2 0.00 0 107 0.00
Configuration 2 0.00 0 105 0.00
Other 7 0.00 0 24 0.00
Application 462 0.00 0 0 0.2


Is there anything wrong with having high numbers for "sql execute elapsed time" and "DB CPU"?

Thanks

~Keyur

Tom Kyte
July 21, 2007 - 9:11 am UTC

it just means your machine spent most of its time doing good stuff:

executing sql in reaction to end users asking for work to be done.

DB TIME - CPU TIME

keyur, July 23, 2007 - 10:30 pm UTC

Hey Tom,
Would you please explain the following question, and especially what "sql execute elapsed time" is?

How come "sql execute elapsed time" and "DB CPU" are both nearly 99% of DB Time ? Total of both is
greater than total DB time. How can I find Who is using "sql execute elapsed time" and "DB CPU" ?

Actually What is "sql execute elapsed time" ?
Tom Kyte
July 24, 2007 - 9:39 am UTC

that is the time spent executing sql - which is good, on a database server you want

a) the database to be the user of all resources

b) the work being done to be useful work. parsing sql, necessary evil but doesn't actually change data. executing sql - EXCELLENT, this is actual useful work that end users want to have done for them.

this shows the database is using the machine and that the machine is being used to execute sql - awesome.
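if you want to see which sessions are contributing that time (a sketch, assuming v$sess_time_model is available - it is in 10g and up), you could break the same statistics down by session:

select s.sid, s.username, tm.stat_name,
       round(tm.value/1000000,2) seconds
  from v$sess_time_model tm, v$session s
 where tm.sid = s.sid
   and tm.stat_name in ('DB CPU', 'sql execute elapsed time')
   and s.username is not null
 order by tm.value desc;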

DB CPU time

Keyur, July 27, 2007 - 10:51 pm UTC

Excellent Tom, you explain everything in just two lines. Thanks a lot.

Rollback per transaction %

Keyur, July 27, 2007 - 10:59 pm UTC

Hey Tom,
One more question. I was checking my AWR and found a very high "Rollback per transaction %" - 90% - which is definitely not good. I checked on this site and found I need to look at the "user rollbacks" statistic in the V$SESSTAT view. I checked and found that one session (which is an external procedure for Advanced Queuing) increments "user rollbacks" by 8-10 every 2-3 seconds. While listing SID information by joining v$sesstat and v$session, I found the SID in V$SESSTAT. But when I checked in OEM, I did not find that user - not even in the TOAD session browser. My confusion is why? Because it runs as an external procedure? How come the SID is in V$SESSTAT and not in V$SESSION?

Because that user is calling a built-in procedure, how do I tune that? How do I reduce "rollback per transaction %"?

What kind of resources does it use? UNDO segments, redo logs?

Would you please explain this.

Thanks

~Keyur
Tom Kyte
July 28, 2007 - 9:25 am UTC

if you could not find information in v$session, how did you conclude it was an external procedure - what external procedure - give some more information here please.

Rollback per transaction %

Keyur, July 28, 2007 - 11:00 pm UTC

I don't know how the SID number can be in V$SESSTAT and not in V$SESSION. The SID in V$SESSTAT is 128 and that SID is not in V$SESSION. Under what circumstances do we see a SID in V$SESSTAT but not in V$SESSION? How is Oracle incrementing the "user rollbacks" value in v$sesstat when that session's information is not in v$session?


Tom Kyte
July 29, 2007 - 8:10 am UTC

re-read my question to you please.

You have asserted this is "an external procedure"

a) how did you determine that
b) what is your definition of an external procedure

I'm just trying to gather some basic information here - that is all. I could guess, but I don't like doing that. So, you seem to have identified "what" this session is somehow - how was that done?

user rollback goes up with a rollback issued by a client.

Rollback per transaction %

Keyur, July 30, 2007 - 9:56 am UTC

Tom,
The following query gives me the top 2 users by "user rollbacks".

column name format a15
column username format a10
column program format a10
set line 200
column machine format a10
column SQLAD format a10
column program format a20
select *
  from (
        select ses.username, ses.sid, s.value value, n.name name,
               ses.sql_address sqlad,
               round(ses.last_call_et/60,2) last_call_et,
               ses.program program
          from v$sesstat s, v$statname n, v$session ses
         where s.statistic# = n.statistic#
           and ses.sid = s.sid
           --and p.ADDR = ses.PADDR
           and n.name like '%user rollbacks%'
           and ses.username is not null
         order by value desc
       ) x
 where rownum < 3;

USERNAME SID VALUE NAME SQLAD LAST_CALL_ET PROGRAM
---------- ---------- ---------- --------------- ---------- ------------ --------------------
MGWAGENT 121 625067 user rollbacks 00 0 extproc32mgwextproc@
esora01 (TNS V1-V3)

SYSMAN 126 5317 user rollbacks 00 2.1 OMS


2 rows selected.

---------- SAME QUERY RESULT AFTER 45 SEC ----------


USERNAME SID VALUE NAME SQLAD LAST_CALL_ET PROGRAM
---------- ---------- ---------- --------------- ---------- ------------ --------------------
MGWAGENT 121 625169 user rollbacks 00 .05 extproc32mgwextproc@
esora01 (TNS V1-V3)

SYSMAN 126 5317 user rollbacks 00 2.85 OMS


2 rows selected.

=============================================

Then I checked in V$SESSION for SID = 121

set line 200
select SID, serial#, username, program, SQL_HASH_VALUE from v$session where sid=121


SID SERIAL# USERNAME PROGRAM SQL_HASH_VALUE
---------- ---------- ------------------------------ ------------------------------------------------ --------------
121 5553 MGWAGENT extproc32mgwextproc@esora01 (TNS V1-V3) 0


1 row selected.

extproc32mgwextproc@esora01 indicates a connection from an external procedure.
I have the following entry in my listener.ora file for the external procedure:

(SID_DESC =
  (SID_NAME = mgwextproc)
  (ENVS = "EXTPROC_DLLS=ANY,LD_LIBRARY_PATH=/u01/app/oracle/product/10.2.0/prod/jdk/jre/lib/sparc:/u01/app/oracle/product/10.2.0/prod/lib32")
  (ORACLE_HOME = /u01/app/oracle/product/10.2.0/prod)
  (PROGRAM = extproc32)
)

Now, the hash value is showing 0, so how should I determine what that script is doing? What kind of SQL is it executing, and why is it rolling back so much data?

This user is an Oracle Advanced Queuing agent.

Thanks

~Keyur

Tom Kyte
July 30, 2007 - 5:37 pm UTC

please utilize support. I am not familiar with this component - but probably this is a non-issue.

DB TIME and Idle wait events

Sri, February 11, 2008 - 10:56 am UTC

Tom,

From the manuals and other resources I gather that DB Time in an AWR report doesn't include time spent on idle wait events. (If my session is waiting on SQL*Net message from client for 2 hrs, that won't contribute towards db time. Right?) Now here is an awr report. (Not a complete one though - only the portion relevant to my question, which follows.)

The server is SUN O/S 5.9 and it has 32 CPUs.

WORKLOAD REPOSITORY report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
xxxx 983992951 DWH1 1 10.1.0.4.0 NO asedp206

Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 41014 06-Feb-08 00:00:08 93 11.3
End Snap: 41015 06-Feb-08 02:00:32 174 6.7
Elapsed: 120.40 (mins)
DB Time: 2,378.36 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 11,248M Std Block Size: 16K
Shared Pool Size: 1,968M Log Buffer: 2,304K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 2,984,230.00 225,844.25
Logical reads: 56,064.82 4,242.94
Block changes: 17,168.18 1,299.27
Physical reads: 4,064.58 307.60
Physical writes: 660.76 50.01
User calls: 1,064.62 80.57
Parses: 197.48 14.95
Hard parses: 1.16 0.09
Sorts: 114.10 8.63
Logons: 0.78 0.06
Executes: 1,515.56 114.70
Transactions: 13.21

% Blocks changed per Read: 30.62 Recursive Call %: 73.38
Rollback per transaction %: 31.27 Rows per Sort: 721.75

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 95.84 In-memory Sort %: 100.00
Library Hit %: 99.91 Soft Parse %: 99.41
Execute to Parse %: 86.97 Latch Hit %: 99.76
Parse CPU to Parse Elapsd %: 72.49 % Non-Parse CPU: 99.22

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 92.27 91.92
% SQL with executions>1: 90.39 81.21
% Memory for SQL w/exec>1: 92.40 85.78

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) DB Time Wait Class
------------------------------ ------------ ----------- --------- --------------
PX Deq Credit: send blkd 362,618 81,629 57.20 Other
CPU time 39,728 27.84
enq: ST - contention 16,775 4,789 3.36 Configuration
db file sequential read 620,438 3,056 2.14 User I/O
db file scattered read 229,622 2,762 1.94 User I/O
-------------------------------------------------------------
Wait Events DB/Inst: DWH1/DWH1 Snaps: 41014-41015
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
--------------------------- ------------ ---------- ---------- ------- ---------
PX Deq Credit: send blkd 362,618 25,114 81,629 225 3.8
enq: ST - contention 16,775 724 4,789 285 0.2
db file sequential read 620,438 0 3,056 5 6.5
db file scattered read 229,622 0 2,762 12 2.4
SQL*Net more data from dbli 60,877 0 1,371 23 0.6
enq: TM - contention 224 217 641 2861 0.0
log file parallel write 120,217 0 560 5 1.3

Here is the question.

In a span of 120 minutes the DB time is 2378.36 minutes.
Out of this, looking at the Top 5 timed events, I see that CPU time is only 39728 secs (= 662.13 minutes), which is only 27.84% of DB time. I take it that most of the DB Time is then being contributed by other wait events. In the top 5 timed events, we find PX Deq Credit: send blkd to be the major contributor (81629 secs or 1360.48 minutes, about 57.2% of DB time).

But as per many documents on metalink, and even ADDM, it is an idle event. My question is: if it is an idle event, why is it contributing towards DB Time? Or is it really not an idle event? Response is extremely poor as per the users, and it is the same throughout the day.

If you think PX Deq Credit: send blkd is not contributing towards DB time, then which other wait event could be the culprit? The CPU time is only 27.8% of DB time, which is poor.

I have gone through many documents on metalink, some pointing to it being a bug, or something not right with the way some of the idle events in 10g are classed as "Other" rather than as "Idle".

Can you please provide your take on this ?

On the other hand, if I take out the PX Deq Credit wait time of 81629 secs (= 1360.48 minutes) from the DB Time, then the actual DB Time for this span would be
2378.36 - 1360.48 = 1017.88 minutes.
Considering 1017.88 minutes as DB time, the CPU time would then be 662.13 * 100 / 1017.88 = 65%,
the next wait, enq: ST - contention, would be around 7% of DB time, and the other events, db file sequential read and db file scattered read, would also contribute in single digits. I don't know how to proceed here to get a quick remedy, or how to make a quantitative statement as to where the issue is and justify it. The server is idle most of the time, with peak CPU utilization only showing as 30%.

The poorly performing SQL have been identified and given to the concerned team and that initiative is in progress.

By the way, this report has been generated for various times, including short durations of 30-60 minutes. Every report has PX Deq Credit: send blkd as the major contributing factor, and most of the docs say it is an idle event. Does this mean that client processing needs to be looked at for the poor performance of batch loads?

Your input would be very helpful.

DB Time - questions

A reader, December 29, 2009 - 8:16 am UTC

Hi Tom,

Say I have a 4-core cpu on my database host server.
I took snaps at a 30 minute interval,
so the total cpu seconds available is 30*60*4 = 7200 seconds.
Now AWR reported 3600 seconds, or 60 minutes, as DB time.

Now I know that those 3600 seconds were database seconds - used to process sql statements or do the activities needed to execute those statements.

Questions -

1) How do I find a further breakup of those 3600 seconds of DB time?
- wait time, background wait time?
Or background elapsed time, background cpu time?

2) Is there a way to conclude something using these numbers? For example, if I am using 3600 seconds as db time - is it good, bad, poor?

kindly explain.

Regards
Tom Kyte
January 04, 2010 - 8:24 am UTC

it is all there right at the top of the report?

Time Model Statistics          DB/Inst: ORA10GR2/ora10gr2  Snaps: 18313-18314
-> Total time in database user-calls (DB Time): 7.2s
-> Statistics including the word "background" measure background process
   time, and so do not contribute to the DB time statistic
-> Ordered by % or DB time desc, Statistic name

Statistic Name                                       Time (s) % of DB Time
------------------------------------------ ------------------ ------------
sql execute elapsed time                                  5.4         74.8
parse time elapsed                                        2.4         33.6
DB CPU                                                    1.7         24.1
hard parse elapsed time                                   1.3         17.9
PL/SQL execution elapsed time                             0.6          8.2
PL/SQL compilation elapsed time                           0.4          5.0
connection management call elapsed time                   0.4          5.0
hard parse (sharing criteria) elapsed time                0.2          2.4
failed parse elapsed time                                 0.0           .7
repeated bind elapsed time                                0.0           .1
sequence load elapsed time                                0.0           .0
DB time                                                   7.2          N/A
background elapsed time                                  26.9          N/A
background cpu time                                       4.0          N/A
          -------------------------------------------------------------
...... 







2) Is there a way to conclude something using these numbers? For example, if I am
using 3600 seconds as db time - is it good, bad, poor?



absolutely not, not without KNOWLEDGE. If you knew for example:

I had a single user, they loaded 15 records in that hour....

Then you could conclude using your KNOWLEDGE (intuition, smarts, whatever) that perhaps - just perhaps - 3600 cpu seconds of processing was a bit over the top for 15 records.


You need to have an understanding of what was going on.

one more question

A reader, January 06, 2010 - 2:59 am UTC

Hi Tom

Thanks for explaining it.

please allow me to ask a few more questions.

1) So if we add up all the time given in the time model statistics section, this will be the total cpu seconds used by the database during that particular time window, and the rest of the time (cpu seconds) was just not used... is this understanding correct?

2) If we look at the instance activity stats, the numbers there are not attached to any time unit - see below.
The question is how to correlate the data in this section with the data we see in the time model stats.

CPU used by this session 836,097 231.17 45.45
CPU used when call started 829,146 229.25 45.08
CR blocks created 21,748 6.01 1.18
Cached Commit SCN referenced 3,145 0.87 0.17
Commit SCN cached 1 0.00 0.00
DB time 4,052,495 1,120.48 220.32
..........................................
..........................................
..........................................


For the same report, we see the time model statistics as given below:

sql execute elapsed time 12,190.93 93.50
DB CPU 8,366.51 64.17
PL/SQL execution elapsed time 399.01 3.06
parse time elapsed 159.29 1.22
hard parse elapsed time 99.33 0.76
PL/SQL compilation elapsed time 71.12 0.55
sequence load elapsed time 70.53 0.54
failed parse elapsed time 22.03 0.17
hard parse (sharing criteria) elapsed time 2.25 0.02
repeated bind elapsed time 0.68 0.01
connection management call elapsed time 0.33 0.00
hard parse (bind mismatch) elapsed time 0.07 0.00
DB time 13,038.52
background elapsed time 2,630.13
background cpu time 1,017.04


3) we know that

response time = service time + wait time

and service time = cpu used by this session

and we also can infer that

Service Time = CPU Parse + CPU Recursive + CPU Other

Is this understanding correct? If yes, then how and where does db time fit into these formulas?

Or is db time something which is measured only at the instance level and cannot be measured at the transaction level?

4) What is the key difference between cpu used by this session and db time? Is there any direct relation between these 2?

I have just too many doubts while reading AWR reports - a lot of things I do understand, but a lot more I do not, so I am trying to understand them.

thanks

Tom Kyte
January 06, 2010 - 8:29 am UTC

1) would not be used by the database, but they might have been used. Someone might have been playing doom on the server - we don't know.


2) you cannot correlate session level information with system level activity in all cases, why do you need to?

3) service_time = cpu used by this session PLUS some unknown number of seconds spent waiting for that CPU, which won't, which cannot be reported as wait time.

you are missing a component, a component that is unknowable



database time is elapsed time, cpu time is not elapsed time, cpu time should be <= elapsed time.

A reader, February 03, 2010 - 5:38 pm UTC

Tom,
In one AWR report for 1 hour,
DB time in the Instance activity stats section shows 7,613,249, and DB time under time model statistics shows 29,781.43.
Are these two different types of metrics, or how are they related?

Tom Kyte
February 04, 2010 - 12:21 pm UTC

give a tad more context here, give enough of the AWR (assuming that is what WR is) report with heading to show me what you are talking about.


Not a million pages, enough for context.

A reader, March 04, 2010 - 11:11 am UTC

Tom,

What's the difference between cpu used by this session and CPU time/DB CPU in AWR? Why are they not equal?


In my AWR report:
CPU time 22,522 41.0
DB CPU 22,522.06 40.95
CPU used by this session 605,833 170.15 301.86

big blank block.

A reader, March 06, 2010 - 9:28 am UTC

I am very sorry that I made a very big blank block with my wrong pasting.
Please re-edit it.

Thank you !

Tom Kyte
March 09, 2010 - 10:43 am UTC

get all of the cpu information from that report please.

A reader, March 08, 2010 - 2:35 pm UTC

I found a note on Jonathan's website: http://jonathanlewis.wordpress.com/2009/05/26/cpu-used/#comment-33189

Is this difference caused by sampling frequency? That is, "db cpu" is derived from v$sess_time_model while "CPU used by this session" is derived from v$sesstat, and the sample frequencies of v$sess_time_model and v$sesstat are different.

Thanks



=============
Tom,

What's the difference between cpu used by this session and CPU time/DB CPU in AWR? Why are they not
equal?


In my AWR report:
CPU time 22,522 41.0
DB CPU 22,522.06 40.95
CPU used by this session 605,833 170.15 301.86


A reader, March 10, 2010 - 2:40 pm UTC

Got more information from here:
http://www.freelists.org/post/oracle-l/CPU-Time-calculation-in-AWR-Reports

Thank you anyway.

Timed statistics model

A reader, April 09, 2010 - 2:53 am UTC

Here is one of the AWR reports I am analyzing. The Time Model Statistics section is as given below:

Time Model Statistics
• Total time in database user-calls (DB Time): 27388.2s
• Statistics including the word "background" measure background process time, and so do not contribute to the DB time statistic
• Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 27,226.56 99.41
DB CPU 4,024.60 14.69
parse time elapsed 134.84 0.49
hard parse elapsed time 123.82 0.45
PL/SQL execution elapsed time 43.88 0.16
connection management call elapsed time 13.73 0.05
PL/SQL compilation elapsed time 8.79 0.03
hard parse (sharing criteria) elapsed time 2.07 0.01
sequence load elapsed time 0.47 0.00
repeated bind elapsed time 0.26 0.00
hard parse (bind mismatch) elapsed time 0.12 0.00
failed parse elapsed time 0.03 0.00
DB time 27,388.22
background elapsed time 5,772.72
background cpu time 955.62


I want to know 2 things :

1) what is DB CPU.
2) DB CPU is just 14% of DB Time but SQL execute elapsed time is 99.41% - does this indicate that there have been too many waits for most of my SQLs, and that I should look at reducing them, reviewing them?
Is DB CPU the same as "CPU used by this session"?

Kindly explain in some details.

Thanks
Tom Kyte
April 13, 2010 - 8:42 am UTC

(really really hard to read this :( )

1) cpu used by the database

2) means that of the time spent in the database - 14% of it was on the cpu, 86% of it was spent waiting.


that your sql execution is 99% of your database time just means you spent MOST of your time executing sql. You'd need to drill down to see if the wait times are reasonable or not for the SQL you've executed. This is way too high level to say (remember, key word is "You'd" - not "I'd" :) )
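one place to start that drill-down (a sketch; v$system_wait_class exists in 10g and up, time_waited is in centiseconds):

select wait_class, total_waits,
       round(time_waited/100) seconds_waited
  from v$system_wait_class
 where wait_class <> 'Idle'
 order by time_waited desc;

that shows which wait classes account for the non-cpu portion of the database time.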

few questions on awr

A reader, April 09, 2010 - 4:39 am UTC

Hi Tom

I have few questions on AWR report

1) what is the meaning of the following statistics

- user calls
- logons
- executes
- transactions

2) In many sections I observed that
the end value of a statistic is more than the begin value of the same statistic. Is it something we should look for and analyze?

Thanks
Tom Kyte
April 13, 2010 - 8:46 am UTC

1) they are somewhat self-explanatory.

- user calls - number of calls from a user process into the database - things like "parse", "fetch", "execute", "close"

- logons - really means what it means. Number of logons

- executes - same here, number of "execute this statement" calls

- transactions - number of times a transaction was ended, eg: like a commit.


2) One would expect that in most cases as resources are USED over time, not built up.
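if you want to watch those counters yourself between two points in time (a sketch - sample twice and take the difference):

select name, value
  from v$sysstat
 where name in ('user calls', 'logons cumulative',
                'execute count', 'user commits', 'user rollbacks');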

response on the post of January 4, 2010 - 8am Central time zone:

A reader, April 09, 2010 - 7:56 am UTC

Hello Tom

This on your post above on January 4, 2010 - 8am Central time zone:

I think the time model statistics do not give a breakup of CPU seconds - they give a breakup of database time only.

Am I correct? You mentioned that the remaining part of the CPU seconds is given in the time model statistics...

Time Model Statistics
• Total time in database user-calls (DB Time): 27388.2s
• Statistics including the word "background" measure background process time, and so do not contribute to the DB time statistic
• Ordered by % or DB time desc, Statistic name
Statistic Name Time (s) % of DB Time
sql execute elapsed time 27,226.56 99.41
DB CPU 4,024.60 14.69
parse time elapsed 134.84 0.49
hard parse elapsed time 123.82 0.45
PL/SQL execution elapsed time 43.88 0.16
connection management call elapsed time 13.73 0.05
PL/SQL compilation elapsed time 8.79 0.03
hard parse (sharing criteria) elapsed time 2.07 0.01
sequence load elapsed time 0.47 0.00
repeated bind elapsed time 0.26 0.00
hard parse (bind mismatch) elapsed time 0.12 0.00
failed parse elapsed time 0.03 0.00
DB time 27,388.22
background elapsed time 5,772.72
background cpu time 955.62


Somewhere above in your posts you also mentioned that CPU time <= DB time... is that the case? As I see it, most of the time DB time < the total CPU time available, since lots of CPU seconds might simply not be used - idle CPU...

Kindly explain - I am totally confused now.

Tom Kyte
April 13, 2010 - 8:48 am UTC

... I think the Time model statistics does not talk about breakup of CPU
seconds..it is breakup of database time only.
...

I don't know what you mean there. It is a way of reporting metrics. I don't know what it means to "breakup of CPU seconds" - you can report cpu seconds, you don't really "break them up".


CPU time is reported as CPU time USED. Not total CPU that could be used (that would not be useful, that is easy to compute - that would be number of seconds in report * number of cpu's)

last question

A reader, April 13, 2010 - 9:50 am UTC

Thanks a lot Tom for answering all three posts above.
I have one last question:

Why is DB CPU not the same as "cpu used by this session"?

Regards
Tom Kyte
April 14, 2010 - 7:45 am UTC

for two reasons.

consider:


start_time := get_timer();
total_time2 := 0;
for i in 1 .. 1000000
loop
   start_time2 := get_timer();
   do_something();
   total_time2 := total_time2 + (get_timer()-start_time2);
end loop;
total_time := get_timer() - start_time;



Now, in theory, both total_time and total_time2 would be measures of time spent "doing something".

Neither is 100% accurate, both are approximations.

total_time includes time spent doing other things, including time spent timing doing something and looping.

total_time2 only times do_something - but - and this is a big but - it does it 1,000,000 times - and each measurement is inaccurate. Each get_timer() call returns a number that is precise within some range (say 1/1000th of a second or 1/1000000th). So, each call to it might be off by 1/1000th or whatever and you make 2,000,000 calls to it. It'll be *close* probably (law of averages says the errors might zero out over the long run)


so, whenever you compare any two metrics - that are "the same" but collected at differing levels - expect variations.

reason two - db cpu is user cpu only, not background.


cpu used by this session

A reader, April 15, 2010 - 8:56 am UTC

Thanks Tom, now I have a better understanding of AWR metrics, values and their importance.

Is "CPU used by this session" reported in seconds or ms?

Thanks
Tom Kyte
April 15, 2010 - 9:05 am UTC

should be seconds

DB Time can sometimes be less than the CPU time

A reader, May 10, 2010 - 8:21 am UTC

As you explained, DB Time is elapsed time, so it should be >= CPU Time - but sometimes DB Time can be lower than CPU Time, when a session issues long-running recursive calls.
The DB Time statistic doesn't update until the top-level call is finished
(versus the CPU statistic, which updates as each call completes).

This is what I was always confused about - in some of my awr reports I used to find that db time < cpu time... I read the above on metalink today.

How to identify - cpu bound or IO bound.

A reader, May 17, 2010 - 7:55 am UTC

Hi Tom

1) Using an AWR report, how can I say that the database is using a lot of cpu and that waiting for cpu is causing an issue? How can I say that if I added 1 or 2 cpus the database would perform better?

2) Using AWR, how can I say that we are IO bound and most of the db time is taken by IO activities and waits,
and that if we increase the SGA by this amount we can get better performance (response time, tps)?

Please explain what statistics to look at for the above 2 cases.

3) In OEM, I can see memory is almost always 100% utilized but page scan rates are very low... do I still have a memory issue?

Thanks
Tom Kyte
May 24, 2010 - 8:31 am UTC

1) you cannot - not in simple terms. Adding more cpu to a cpu bound system might just make you more cpu bound.

Sometimes, you have to use knowledge of how things work - the database software, the operating system and (important) the application itself.

What if the application is cpu bound because it is hard parsing like mad? Hard parsing is predominantly a serial process (lots and lots and lots of latching). While I'm parsing - in general - you will not be (you'll be spinning on a latch wait, burning cpu while doing so). If you add more cpu, all you'll be doing is allowing more people to spin on the cpu trying to get the latch. You'll need to add more cpus until you have one more cpu than people trying to get the latch simultaneously - then you'll have idle cpu - but - big but - it won't have made you run any faster at all...


You need to understand what is going on.

If the amount of cpu shown as used during the window of observation (say a one hour window of observation) is near the number of cpus * duration of report - then you are nearly using 100% of the cpu.

That cannot, will not, does not take into consideration "spikes"; it is only an average. If your report is for an hour and you have 2 cpus (2*3600 cpu seconds available) and your AWR report says you used 3600 cpu seconds - that does not mean you were always 50% idle during the one hour; you might have been 100% utilized for many many minutes at a time. You need more detailed information than an hourly observation will give you.
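a back-of-the-envelope sketch of that arithmetic (assuming you are licensed for the diagnostic pack; the snap ids 100 and 101 are placeholders for your begin and end snapshots):

select round( (e.value - b.value)/1000000 ) db_cpu_seconds_used
  from dba_hist_sys_time_model b, dba_hist_sys_time_model e
 where b.stat_name = 'DB CPU'
   and e.stat_name = 'DB CPU'
   and b.dbid = e.dbid
   and b.instance_number = e.instance_number
   and b.snap_id = 100
   and e.snap_id = 101;

compare that to cpu_count * the elapsed seconds of the window to get the average utilization - remembering, as above, that the average hides the spikes.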

2) use the advisors, they'll tell you what will probably happen if you adjust some memory setting. that is what they do.

3) what memory - OEM has lots of information about memory there. It is 100% normal on most operating systems (that would be useful knowledge to have - what OS you are using) to be 100% using memory 100% of the time - the OS will use whatever you are not for caching, freeing up cache memory as you request it.

not sure if this is relevant

A reader, May 25, 2010 - 12:49 am UTC

Hello Tom,

Thanks for explaining the above questions.
Not sure whether what I am trying to say here makes sense or is relevant, but I find it really puzzling...

The performance issue I am working on is about tuning the total run time of a bunch of programs - mostly coming from Informatica; many of them are PL/SQL procedures being called from Informatica (an ETL tool).

Some of the facts:

QA and production are both 2 node RAC databases.
The database is more or less a data warehouse.
QA and production are similar, not the same.
In QA we have 4 GB more SGA_TARGET (12 GB at both nodes) than in production.

Now

in QA the "batch load" takes 30 minutes less than in production.
ADDM suggests that in QA we should add 12 GB RAM.
ADDM suggests that in production we should add 2 GB RAM.

We did several things - including increasing the db file multiblock read count. When we increased it in QA we got 30 minutes of improvement; when we did the same in production we had no improvement.

We could see that the logical IO on the segments which are being full table scanned has increased... after we increased the db file multiblock read count.

We know that the primary issue is the excessive full table scans... we cannot help here, as this is an off-the-shelf package and most of these queries come out of that package.

Would you be kind enough to explain this... is this something expected? If yes, why, and what should we look at?
Tom Kyte
May 25, 2010 - 7:29 am UTC

.. mostly comes from informatica ...

yeah, good luck with that. Informatica generates "slow by slow" code, row by row processing. It likes to dump things out of the database, do its slow magic on them, and reload them - that is about the slowest way to accomplish anything. If you can tune it, you'll be a rich person - very much in demand.


... including increasing the db file multiblock read counts ...

I told you to unset that, please do that and do not set it again. You are defeating the major improvement in 10g, just do not set it.


... we could see that the logical IO on the segments which are being full table scanned has increased... after we increased the db file multiblock read count ...


tell me, in PAINSTAKING DETAIL, how you measured that and what the beginning/ending results were. did they increase from 101,102 to 101,103 or what.



and please, unset that parameter.

increase in LIO

A reader, May 25, 2010 - 9:45 am UTC

Hi Tom,

Here is what I did to compare the different stats "before" and "after" changing the setting (it was a desperate measure, as I was also not able to understand why the performance had degraded).

(So LIO_32 is my logical IO when I set db file multiblock read count to 32 - I tried 8, 32 and 128; 8 was set originally - and PIO_32 is the physical IO with dbfmbrc = 32, and so on.)

SELECT object_name,
       max(decode(day,'D128',lio,null)) lio_128,
       max(decode(day,'D32', lio,null)) lio_32,
       max(decode(day,'D8',  lio,null)) lio_8,
       max(decode(day,'D128',pio,null)) pio_128,
       max(decode(day,'D32', pio,null)) pio_32,
       max(decode(day,'D8',  pio,null)) pio_8
  FROM (
        select a.*, 'D128' day
          from ( select b.object_name,
                        sum(logical_reads_delta)  lio,
                        sum(physical_reads_delta) pio
                   from dba_hist_seg_stat a, dba_objects b
                  where a.obj# = b.object_id
                    and a.snap_id between 20318 and 20326
                  group by b.object_name
                  order by 2 desc ) a
         where rownum < 11
        UNION ALL
        select b.*, 'D32' day
          from ( select b.object_name,
                        sum(logical_reads_delta)  lio,
                        sum(physical_reads_delta) pio
                   from dba_hist_seg_stat a, dba_objects b
                  where a.obj# = b.object_id
                    and a.snap_id between 20270 and 20278
                  group by b.object_name
                  order by 2 desc ) b
         where rownum < 11
        UNION ALL
        select b.*, 'D8' day
          from ( select b.object_name,
                        sum(logical_reads_delta)  lio,
                        sum(physical_reads_delta) pio
                   from dba_hist_seg_stat a, dba_objects b
                  where a.obj# = b.object_id
                    and a.snap_id between 20222 and 20229
                  group by b.object_name
                  order by 2 desc ) b
         where rownum < 11
       )
 group by object_name;

The output of the above SQL (the first column, with values 1, 2, 3 ... 11, just numbers the records):

   OBJECT_NAME  LIO_128  LIO_32  LIO_8  PIO_128  PIO_32  PIO_8

1 OE_ORDERLINES_DFACT 25259392 23213840 15971840 2699618 2322103 1590411
2 CX_BACKLOG_DFACT 18992768 9633168 5902416 1457090 126386 51508
3 LIST_OF_VALUES 23959328 24004576 24240160 0 86 885
4 OE_ORDERLINES_DET 6691808 6631824 4564492 4647388
5 CMS_INFOOBJECTS5 9072432 7803280 8853 13378
6 IX_GEO_REGION_LKP_REF1 10679264 10487232 10253744 91 96 101
7 IX_LOC_PROD_REF1 8824496 8445968 8229264 196 204 213
8 SYS_C0028309 7637344 10564992 4324 4368
9 ITEMWAREHOUSE_MAS 7914816 8196480 7848144 2388396 2730745 2373493
10 CX_IIT_CUSTOMER 9565504 9273952 9969232 1165 1131 1133
11 CMS_INFOOBJECTS6 7115216 7718320 7221456 248 2113 1425

I'm not 100% sure if it is the correct way, but this is what I could think through.

Thanks
Tom Kyte
May 25, 2010 - 12:48 pm UTC

no, this is not valid. We don't know what else was happening, we don't know if the plans change, we don't know how many times things were executed.

My guess is, if the LIO for a given query changed, then it was caused by a PLAN CHANGE.

So, please check to see if plans changed - nothing else, just that.
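a sketch of one way to do that (diagnostic pack assumed; &sql_id is a placeholder):

select snap_id, plan_hash_value, executions_delta, buffer_gets_delta
  from dba_hist_sqlstat
 where sql_id = '&sql_id'
 order by snap_id;

if plan_hash_value changes between the snapshot ranges you compared - the plan changed.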

Oh wait, one other thing - unset the darn thing would you?


and try to make sure things line up - it is really hard to read otherwise.

elapsed time of sql in addm report

A reader, July 21, 2010 - 11:48 am UTC

I am viewing an ADDM report and am concerned about a top SQL activity.

I have a SQL with the top elapsed time.


2,372 2,275 128,690,535 0.00 4.28 fphtjnquzzpud radiusd@localhost.localdomain (TNS V1-V3) SELECT USER_ID FROM SUBSCRIBER...



q1) What is the meaning of elapsed time?
The time taken per execution of a sql statement?

q2) Is the elapsed time in the report shown above = the sum of all the elapsed times per execution of the sql?

If that is the case, the more frequently I run a SQL statement, the higher the elapsed time will be, and thus the higher the db time.

But how can we say that it is an inefficient sql just because it has the most elapsed time, when it may simply be run way more frequently than other sqls?
Tom Kyte
July 23, 2010 - 9:03 am UTC

you left out the column headings :(


elapsed time is cumulative in an AWR report.


... but how can we say that it is an inefficient sql ...

no one ever said it was inefficient. All that is being said here is "this is where you spend a lot of your time".

Perhaps then you can focus on a way to make it more efficient from the application perspective. For example, perhaps that "very efficient sql" is being done like this:

for x in (select * from some_big_table)
loop
   select name into l_var from t where t.key = x.key;
   .....



that second select is very "efficient" right? No, it is stupid, it is wrong, it is entirely inefficient - it should be:


for x in (select bt.*, t.name from some_big_table bt, t where t.key = bt.key)
loop
    .....


Now it is efficient.

A small query executed a lot is not efficient.






elapsed time of sql in addm report

A reader, July 25, 2010 - 5:08 am UTC

Oh my god Tom,
not only did you answer my question,
you changed my perspective on "efficient" with a simple yet clearcut example!

Thanks thanks thanks a million. Nothing is ever worth more than your experience.

Thanks a lot. I mean it.

DBMS_ADDM

Rajeshwaran Jeyabal, November 19, 2010 - 10:06 am UTC

Tom:

<quote>
If Oracle Enterprise Manager is unavailable, you can run ADDM using the DBMS_ADDM package. In order to run the DBMS_ADDM APIs, the user must be granted the ADVISOR privilege.
</quote>

http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/diagnsis.htm#sthref333

rajesh@10GR2> select * from session_privs where PRIVILEGE like 'ADVI%';

PRIVILEGE
----------------------------------------
ADVISOR

Elapsed: 00:00:00.03
rajesh@10GR2>
rajesh@10GR2>
rajesh@10GR2> desc dbms_addm;
ERROR:
ORA-04043: object dbms_addm does not exist


rajesh@10GR2>


I also checked the PL/SQL Packages and Types Reference for dbms_addm, but could not find it. Is the dbms_addm API missing, Tom?

http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/toc.htm
Tom Kyte
November 19, 2010 - 12:02 pm UTC

select * from dba_tab_privs where table_name = 'DBMS_ADDM';

and if it is available:

desc SYS.dbms_addm


maybe you do not have a public synonym to it.

DBMS_ADDM

Rajeshwaran Jeyabal, November 19, 2010 - 1:13 pm UTC

Tom:

I don't have entries in dba_tab_privs.

rajesh@10GR2>
rajesh@10GR2> SELECT *
  2  FROM dba_tab_privs
  3  WHERE table_name = 'DBMS_ADDM';

no rows selected

Elapsed: 00:00:00.01
rajesh@10GR2>
rajesh@10GR2> desc SYS.dbms_addm;
ERROR:
ORA-04043: object SYS.dbms_addm does not exist

rajesh@10GR2>
rajesh@10GR2>

Tom Kyte
November 20, 2010 - 5:14 am UTC

do you have the diagnostic pack licensed and installed as part of enterprise manager?


DBMS_ADDM

Rajeshwaran Jeyabal, November 22, 2010 - 2:19 am UTC

Yes Tom.
Tom Kyte
November 23, 2010 - 12:42 pm UTC

ah, hold on, it was new in 11g - the 10g R2 documentation "jumped the gun" as they say (that is why it is not fully documented in 10g either)

http://docs.oracle.com/docs/cd/B28359_01/appdev.111/b28419/whatsnew.htm#sthref6


Buffer is pinned ...

A, November 22, 2011 - 6:45 am UTC

Hello Tom,
What do "buffer is not pinned count" and "buffer is pinned count" signify?

The AWR report which I am looking at is showing very high values for these:

buffer is not pinned count     95,333,737   13,304.69     959.36
buffer is pinned count        195,044,730   27,220.26   1,962.77


Thanks
A

Tom Kyte
November 22, 2011 - 8:47 am UTC

this is a pretty good explanation:

http://dioncho.wordpress.com/2009/04/08/buffer-pinning/



Your high numbers indicate you do a lot of logical IO - reads from the buffer cache. That is all that means ultimately.
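if you want to track them yourself (a sketch - sample twice and take the difference):

select name, value
  from v$sysstat
 where name in ('buffer is pinned count',
                'buffer is not pinned count',
                'session logical reads');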

log buffer

ujjal, January 14, 2012 - 12:25 am UTC

hi Tom,
Why is the log buffer value in awr showing more than the actual?
In awr: Log Buffer: 14,380K
but show parameter log_buffer shows 14,497,792.
Tom Kyte
January 17, 2012 - 11:38 am UTC

it appears awr (but not statspack) is slightly over reporting the log buffer size. It is a tiny amount, not sure of the cause. Please feel free to pursue with support if you would like.

Database Performance

BC, February 14, 2012 - 3:44 pm UTC

Tom,

We are migrating our rac databases to new hardware and were wondering what would be the best way to compare db performance between old and new servers.

Thanks

BC
Tom Kyte
February 15, 2012 - 7:48 am UTC

there is only one way I know - benchmark.

run the same (or extremely similar) loads on both.

Real Application Testing, if you have access to it, would make this pretty easy
http://www.oracle.com/us/products/database/options/real-application-testing/index.html

otherwise, you'll have to set up a scripted load on the existing machine - measure (using AWR or statspack) - do the same on new machine - measure - and compare.

DB performance

nill hasan, May 28, 2012 - 9:19 am UTC

Hi tom,

First of all thanks to your effort this site.



DB running 10g R2 on Linux 5, 8 CPU cores, SGA is 32 GB.

I give you below a snapshot capture (ADDM).

Please comment on the snapshot below - what is my db status?





DB cpu (seconds) 0.00 0.00 0.00
DB time (seconds) 780.41 10.41 26.01
db block changes 28699.00 382.65 956.63
execute count 23987.00 319.83 799.57
global cache cr block receive time (seconds) 0.00 0.00 0.00
global cache cr blocks received 0.00 0.00 0.00
global cache current block receive time (seconds) 0.00 0.00 0.00
global cache current blocks received 0.00 0.00 0.00
global cache get time (seconds) 0.00 0.00 0.00
global cache gets 0.00 0.00 0.00
opened cursors cumulative 2390.00 31.87 79.67
parse count (total) 1721.00 22.95 57.37
parse time cpu (seconds) 1.31 0.02 0.04
parse time elapsed (seconds) 1.57 0.02 0.05
physical reads 215.00 2.87 7.17
physical writes 279.00 3.72 9.30
redo size (KB) 3541.60 47.22 118.05
session cursor cache hits 1913.00 25.51 63.77
session logical reads 31864523.00 424860.31 1062150.77
sql execute cpu time (seconds) 0.00 0.00 0.00
sql execute elapsed time (seconds) 0.00 0.00 0.00
user calls 30809.00 410.79 1026.97
user commits 27.00 0.36 0.90
user rollbacks 3.00 0.04 0.10
workarea executions - multipass 0.00 0.00 0.00
workarea executions - onepass 0.00 0.00 0.00
workarea executions - optimal 1906.00 25.41 63.53

Thanks in advance,
nill hasan


Tom Kyte
May 29, 2012 - 6:12 am UTC

my comment is that this is totally unreadable.

Not that it would matter, you cannot make any judgments from this sort of data.

suggest you run an ADDM report if you want high level "this is what might need looking at"

AWR Report

Dinesh, August 16, 2012 - 12:39 pm UTC

What are the main things that we need to check with respect to performance issues in an AWR report?
Please reply through mail, if possible.
Tom Kyte
August 17, 2012 - 3:01 pm UTC

well, I'll never reply through mail - that would not be a very scalable approach for me now would it :)


there is no single main thing. If you want an automated "what should I look for", please do NOT run an AWR report. Please DO run an ADDM report. That gets you exactly what you are looking for - the "what should I investigate"


so, that begs the question - what is the AWR for. It is for someone to read and find some facts based on a reported issue, a problem they are trying to solve.

Eg: my batch program which is the only thing running is running slower today than yesterday.

great - get me a comparative AWR report for those two time periods and we can see what has changed, then use our knowledge of Oracle and the system we are running to ascertain why they have changed.

Rationale in ADDM report

MC5, November 28, 2012 - 3:42 am UTC

Hi Tom,

Wondering if you could clarify or explain a bit better what is meant by the following sentence in an ADDM report:

Rationale:
At least 10 distinct execution plans were utilized for this SQL statement during the analysis period


Does this mean that when the query was being executed, the optimizer had 10 different plans to choose from, and could have potentially chosen a less-than-optimal plan, which could be the reason why the sql statement is appearing in the ADDM report?

Thanks for the clarification about what is meant by this rationale.


Tom Kyte
November 29, 2012 - 8:01 am UTC

it means there were 10 child cursors with different plans for that sql statement. Each could have been optimal for the set of binds supplied - something like adaptive cursor sharing could do that for example.
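a sketch to count them for yourself (the &sql_id is a placeholder):

select sql_id,
       count(distinct plan_hash_value) distinct_plans,
       count(*) child_cursors
  from v$sql
 where sql_id = '&sql_id'
 group by sql_id;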

what was this the rationale "for", exactly?

rationale for SQL Profile maybe?

MC5, November 29, 2012 - 8:22 am UTC

Hi Tom,

The ADDM report indicated this:

Action
Run SQL Tuning Advisor on the SELECT statement with SQL_ID
"6k187zp4nuxz8".
Related Object
SQL statement with SQL_ID 6k187zp4nuxz8.
select /*+ FIRST_ROWS(500) */
VH.item_id,
VH.item_id
VH.item_id,
VH.reference,
VH.wp_action,
VH.wp_timestamp,
VH.wp_id,
VH.wp_action,
VH.wp_timestamp,
VH.wp_data_id
from item_VH VH
where (( VH.item_id IN
(SELECT accv.item_id
FROM item_acc_v accv) )
AND(((VH.reference) IS NOT NULL
AND UPPER(VH.reference) = :"SYS_B_0")
AND(VH.wp_timestamp=to_date(:"SYS_B_1", :"SYS_B_2"))))

Rationale
The SQL spent 100% of its database time on CPU, I/O and Cluster waits.
This part of database time may be improved by the SQL Tuning Advisor.
Rationale
Database time for this SQL was divided as follows: 100% for SQL
execution, 0% for parsing, 0% for PL/SQL execution and 0% for Java
execution.
Rationale
SQL statement with SQL_ID "6k187zp4nuxz8" was executed 193 times and had
an average elapsed time of 4.1 seconds.
Rationale
At least 10 distinct execution plans were utilized for this SQL statement
during the analysis period.

I was wondering if the "At least 10 execution plans" indicated that the optimizer had 10 distinct execution plans to choose from and perhaps was not choosing the best one, which, if true, was pointing me in the direction of creating a SQL Profile for this query to help the optimizer have more information. I've read in another post of yours that the SQL Profile is like "analyzing a query" and that the generated plan gives the optimizer a better understanding of the relationship between the tables in the query.
Am I barking up the wrong tree here, thinking that the 10 distinct execution plans are cause for concern and a SQL Profile could help?

Many thanks for your advice.

Tom Kyte
November 30, 2012 - 3:51 am UTC

... AND UPPER(VH.reference) = :"SYS_B_0")
AND(VH.wp_timestamp=to_date(:"SYS_B_1", :"SYS_B_2")))) ...

is cursor_sharing set to similar????

(it is either force or similar)

this stinks, your developers have a HUGE security bug potential here (sql injection) in addition to having incurred a severe performance overhead - by not using binds.


I'll suspect cursor_sharing=similar

let me know.

A reader, November 30, 2012 - 5:32 am UTC

... AND UPPER(VH.reference) = :"SYS_B_0")
AND(VH.wp_timestamp=to_date(:"SYS_B_1", :"SYS_B_2")))) ...

is cursor_sharing set to similar????

(it is either force or similar)

this stinks, your developers have a HUGE security bug potential here (sql injection) in addition to having incurred a severe performance overhead - by not using binds.


I'll suspect cursor_sharing=similar

let me know.


Hi Tom,

Thanks for your reply. I forgot to mention previously we're on 11.2.0.3.
cursor_sharing is set to FORCE at the instance level. At one point we tried SIMILAR but couldn't identify any notable performance difference (using AWR and end-user observation) and switched back to FORCE for the Adaptive Cursor Sharing feature, which I believe is only available when set to FORCE. As a side note, we have been receiving periodic ORA-01000 maximum open cursors exceeded errors and have open_cursors set to 3000. From what I understand, open_cursors and cursor_sharing do not influence each other, but perhaps the ORA-01000 error is relevant to the SIMILAR vs FORCE debate.

For the binds, the :"SYS_B_0", :"SYS_B_1", and :"SYS_B_2" should represent the bind variables (?)

... AND UPPER(VH.reference) = :"SYS_B_0")
AND(VH.wp_timestamp=to_date(:"SYS_B_1", :"SYS_B_2")))) ...

at least that's how the AWR/ADDM report is showing them. But I think you're referring to the :"SYS_B_2" bind, which accepts the date format itself, like 'DD/MM/YYYY', which I've seen in another post as a SQL injection threat. Am I correct there?

Many thanks,
MC
Tom Kyte
November 30, 2012 - 6:41 am UTC

Ok, 11.2.0.3 and cursor sharing = force implies that adaptive cursor sharing is kicking in.

the sql injection isn't the date format - that is likely hard coded into the application, the sql injection would be mostly on sys_b_0 in this case.


adaptive cursor sharing could generate the child cursors - as could bind mismatch (different length strings being bound to :sys_b_0) - query up v$sql_shared_cursor to see if you can see a reason for the child cursors.
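a sketch of that check (using the sql_id from your report):

select sql_id, child_number, bind_mismatch,
       optimizer_mismatch, roll_invalid_mismatch
  from v$sql_shared_cursor
 where sql_id = '6k187zp4nuxz8';

any column showing 'Y' names a reason a separate child cursor had to be built.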



you shouldn't need cursor_sharing force if the developers fixed that bug - of not binding correctly.


Maybe they can do that when they fix this bug:
As a side
note, we have been receiving periodic ORA-01000 max cursor exceeded errors and
have open_cursors set to 3000. From what I understand open_cursors and
cursor_sharing are not influencing each other but perhaps the ORA-01000 error
is relevant to the SIMILAR vs FORCE debate.


that has NOTHING to do with what you are looking at. It has everything to do with cursor leaks in the application. Yet another developer introduced bug!!!! 100% sure of this.

see

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5644849400346957147

I'm 100% sure they have this bug in their code too...

binds

A reader, December 03, 2012 - 6:33 am UTC

Thanks Tom for pointing that out, I'll have a word with the developers about it.

I was a bit concerned by your mention of a potential sql injection vulnerability with the bind variable SYS_B_0, so I did some research and saw another post where you say "JUST USE BINDS and guess what - you cannot be, YOU CANNOT BE, sql injected." ( http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1922946900346978163 1st follow up Sept. 3 2009). So in my case here, if you say "the sql injection would be mostly on sys_b_0 in this case", I'm not so sure anymore if the bind at sys_b_0 is vulnerable or not - could you explain a bit more? Thanks.

Also, the only Y value in the cursor view output was for roll_invalid_mismatch; I'll keep poking around to see if I can find more reasons for the child cursors.

Thanks again
MC
Tom Kyte
December 03, 2012 - 9:34 am UTC

cursor sharing = force or similar is NOT YOU USING BINDS.

In fact, it is you *not using binds*.


with cursor sharing, any binding happens AFTER the sql has already been injected!!!

that is, you mean to execute a query like this:


select * from t where upper(vh.reference) = 'some_input'


but someone decides to supply you with the input:

xxx' or 1=1 --


so your sql statement becomes:

select * from t where upper(vh.reference) = 'xxx' or 1=1 --'

which cursor sharing would turn into

select * from t where upper(vh.reference) = :sys_bv1 or :sys_bv2 = :sys_bv3 --'


the "auto binding" done by cursor sharing happens WAY AFTER you've already been injected!!



hence in your code, the most likely sql injection point is where we put sys_b_0 in - BEFORE we put sys_b_0 in.


if YOU bind then cannot be sql injected.

if YOU DO NOT bind then you can be sql injected.
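a sketch of what "YOU bind" looks like (the table t and column reference are made up for illustration):

declare
   l_cnt number;
begin
   -- the input travels only as a bind value, never as sql text,
   -- so it cannot change the shape of the statement
   execute immediate
      'select count(*) from t where upper(reference) = :x'
      into l_cnt
      using 'XXX'' OR 1=1 --';
end;
/

the attempted injection string is just compared as data - it simply matches zero rows.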

ACS with cursor_sharing=FORCE or SIMILAR

MC, December 04, 2012 - 3:57 am UTC

Thanks for sticking with me on this one and helping me understand it better.

If I do understand correctly, the section in my original query showing the bind variables

AND UPPER(VH.reference) = :"SYS_B_0")
AND(VH.wp_timestamp=to_date(:"SYS_B_1", :"SYS_B_2"))))

is using the Adaptive Cursor Sharing feature - which I read in the docs "is enabled by default and cannot be disabled" - and because I also have cursor_sharing=FORCE|SIMILAR I have an SQL injection threat.

So it is actually the cursor_sharing=FORCE|SIMILAR which is introducing the threat, and not the ACS feature...

And to get rid of the sql injection threat I can set cursor_sharing to EXACT, which would still use ACS and make the sys_b_0 a true bind and eliminate the sql injection threat?

Thanks again for your input and for helping me understand the situation better.
Tom Kyte
December 07, 2012 - 4:50 am UTC

and because I also have
cursor_sharing=FORCE|SIMILAR I have an SQL injection threat.



that is not accurate, it would be accurate to say:

and because my developers use simple string concatenation of inputs passed to them in a sql statement, I am subject to sql injection.


cursor sharing = force|similar does NOT introduce (nor take away) a sql injection threat, your developers did that.


OK I think I got it now

MC, December 07, 2012 - 6:35 am UTC

Thanks Tom for your help. I think I'll refer the developers to your previous statement of "this really stinks" and ask them to fix this asap.

MC

CPU usage

Amin, April 17, 2013 - 12:23 am UTC

Hi Tom,

How do I get the CPU usage percentage using a query of active sessions?

It's confusing the use of views:

V$SGASTAT
V$STATNAME
V$SYSSTAT
V$SESSTAT
V$SESSION
V$WAITSTAT

regards
Amin


Tom Kyte
April 22, 2013 - 7:33 pm UTC

define to me what "cpu usage percentage" is or means to you first...


Document giving AWR details

Milind, May 25, 2013 - 4:43 pm UTC

Can you please let me know which Oracle document gives information about reading AWR reports and the meaning of the various events appearing in them?
Tom Kyte
May 29, 2013 - 5:10 pm UTC

http://docs.oracle.com/cd/E11882_01/server.112/e16638/toc.htm

but, make sure you use ADDM before even reading an AWR report.

CPU time

A reader, September 09, 2013 - 5:56 pm UTC

Hello Tom,

In an AWR report, under the top 5 wait events, I see a high CPU time event, at Time(s) = 14,922 and % Total Call Time = 91.1%.

How do I troubleshoot what is causing the high cpu usage, and what does the CPU time event really represent?

Elapsed: 59.41 (mins)
DB Time: 272.92 (mins)

Top 5 Timed Events

Event Time(s) Avg Wait(ms) %TotalCallTime Wait Class
CPU time 14,922 91.1

can you advise?
Tom Kyte
September 10, 2013 - 10:12 am UTC

that is not "wait" on cpu, that is cpu being used.

to find out what is using cpu, look at your sql reports in the awr report - it shows the biggest cpu users there.
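outside of a report, a sketch of the same idea against the live shared pool:

select *
  from ( select sql_id, round(cpu_time/1000000) cpu_seconds, executions
           from v$sql
          order by cpu_time desc )
 where rownum <= 10;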

AWR

amir, August 19, 2014 - 6:39 am UTC

AWR Report... how do I overcome the elapsed time and db time issue?

Andry, September 02, 2016 - 3:58 am UTC

I used the following query to tune at the instance level.


select n.name, s.value
from v$statname n , v$sysstat s
where n.statistic# = s.statistic#
order by n.class, n.name

Output :-

NAME VALUE
1 CPU used by this session 326220259
2 DB time 760462384


I have a confusion: for a value displayed in the output, how can we tell whether it is measured in bytes or seconds?
Connor McDonald
September 06, 2016 - 2:12 am UTC

It depends on the statistic. Some are bytes, others are centiseconds, etc.

Db Time > DB cpu + Waits

A reader, November 28, 2016 - 11:46 am UTC

Hi Tom,

In my awr report the DB time is much higher than the sum of the DB cpu and Wait times.


Elapsed: 30.18 (mins)
DB Time: 2,365.00 (mins)

Top 10 Foreground Events by Total Wait Time

Event Waits Total Wait Time (sec) Wait Avg(ms) % DB time Wait Class
DB CPU 6103 4.3
db file sequential read 6,221 61.5 10 .0 User I/O
enq: TX - index contention 67 8.4 125 .0 Concurrency
log file sync 1,236 7 6 .0 Commit
gc current block 2-way 2,447 5.4 2 .0 Cluster
enq: IV - contention 684 5.1 7 .0 Other
gc cr block 2-way 1,913 4 2 .0 Cluster
gc current grant 2-way 2,396 3.8 2 .0 Cluster
gc current block 3-way 516 1.4 3 .0 Cluster
library cache pin 355 1.2 3 .0 Concurrency

Can you please explain what could be the reason for the same?
Connor McDonald
November 29, 2016 - 1:43 am UTC

That is rare but not impossible. Potential causes are:

1) The OS-level system calls that capture elapsed time and elapsed CPU can have different granularities. So a call that was entirely CPU and took 123456 nanoseconds might register as: 123 microseconds of CPU, and 1 millisecond elapsed.

2) Hyper-threading and cpu counts that vary dynamically under virtualisation can impact things during the AWR period.

3) There were some bugs with AWR data capture under RAC systems.

Basically AWR does the best with what information is made available to it from the OS.

AWR Report:"Elapsed time" is higher than "DB Time"

A reader, July 12, 2019 - 3:48 am UTC


In an AWR report: we know "if Elapsed time is less than DB Time, you don't need to analyze the AWR report" - the issue is not in the database.

But if "Elapsed time" is higher than "DB Time", then what will be the right action to fix the issue?
Chris Saxon
July 12, 2019 - 9:03 am UTC

You can't really say whether there's a "database issue" by looking at these two figures alone.

You need to speak to your customers - the people using the application - and ask them: is performance acceptable?

AWR Report:"Elapsed time" is higher than "DB Time"

A reader, July 12, 2019 - 2:20 pm UTC

Thanks Chris,
You mean if the customer confirmed that performance is okay (acceptable), then we're fine.

But if the customer complained that performance is not acceptable and we need to fix the issue - what will be the right action?

Chris Saxon
July 15, 2019 - 12:25 pm UTC

You mean if the customer confirmed that performance is okay (acceptable), then we're fine.

Yep.

But if the customer complained that performance is not acceptable and we need to fix the issue - what will be the right action?

First question: the performance of which exact action(s) are unacceptable? Searching for products? Viewing customers? Loading a specific product?

Once you've found this, trace the code to see what's going on. Identify the slowest parts of the process, tune them, & repeat until performance is "good enough".

And remember: it's always worth asking why the user is doing action X. It maybe there's a way to avoid doing it altogether!

More on this from Connor:

https://blogs.oracle.com/oraclemagazine/a-higher-level-perspective-on-sql-tuning
https://blogs.oracle.com/oraclemagazine/a-higher-level-perspective-on-sql-tuning-part-2