negative values of execute to parse in Statspack report
Tom Kyte

Thanks for the question, David.

Asked: February 01, 2002 - 10:54 am UTC

Last updated: August 02, 2013 - 7:32 pm UTC

Version: 8.1.7

Viewed 10K+ times!

You Asked

Hi Tom,

I worked with a programmer and designed a schema. Now I am using Statspack to check the performance of the database and application.

There are two questions I would like to get your opinion on.

1. The Execute to Parse value is negative, as shown below in the Statspack report. How do you interpret this value?

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 80.74 In-memory Sort %: 99.10
Library Hit %: 92.95 Soft Parse %: 91.24
Execute to Parse %: -31.89 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 87.32 % Non-Parse CPU: 99.92

2. Below is my SGA info and a section of the Statspack report. I thought I had allocated enough shared memory. Do I need to increase the shared memory further?

Total System Global Area 75796640 bytes
Fixed Size 73888 bytes
Variable Size 47779840 bytes
Database Buffers 27852800 bytes
Redo Buffers 90112 bytes


shared_pool_size = 24000000 in the init.ora file


Shared Pool Statistics Begin End
------ ------
Memory Usage %: 95.02 79.74
% SQL with executions>1: 36.15 37.14
% Memory for SQL w/exec>1: 24.03 39.37


Thanks


and Tom said...

1) that means someone is parsing statements and NEVER executing them. They are just chewing up your CPU, latching the shared pool, killing your performance.

That percentage is computed as:

round(100*(1-:prse/:exe),2)

So, when the :prse > :exe, you get negative numbers. VERY VERY VERY bad.
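The same ratio can be computed directly from the cumulative instance counters; a minimal sketch against v$sysstat (note these counters are since instance startup, whereas Statspack computes the delta between two snapshots):

```sql
-- Execute-to-parse %, using the same formula as the report,
-- but from the since-startup counters in v$sysstat
select round(100 * (1 - prse.value / exe.value), 2) as "Execute to Parse %"
  from v$sysstat prse,
       v$sysstat exe
 where prse.name = 'parse count (total)'
   and exe.name  = 'execute count';
```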

2) you have 24m. that should be sufficient for most applications.

You have 27-ish meg for the block buffer cache. That is considered "small" on most systems today. What is your cache hit ratio? If it is low, you may benefit from more; it depends on the RAM on the system.




Comments

so why some sql statements were parsed but were not executed?

David, February 01, 2002 - 5:52 pm UTC

Thanks, Tom, for your interpretation of the negative values of execute to parse. According to your percentage equation, how could parse be bigger than exec? I thought the purpose of parsing is execution, so the smallest percentage should be 0, that is, one parse per execution.



Tom Kyte
February 01, 2002 - 7:48 pm UTC

There are tools (bad ones) that parse a statement but never execute it. Or they parse it to discover what inputs/outputs it takes and then later parse it again to execute it (so the parse to execute ratio is 2:1).

It is commonly referred to as "inefficient coding technique". I see it with some third-party "generic" libraries. They parse like there is no tomorrow -- works great in a single-user environment -- starts to stink with two users (or more).

Just because you parsed doesn't mean you are obligated to execute.

negative value on Physical reads

June, August 20, 2004 - 9:29 am UTC

Tom,

what may cause the negative Physical reads as following in STATSPACK? Thanks!

----------------------
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 285000 log_buffer: 52428800
db_block_size: 16384 shared_pool_size: 230M

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 64,994.29 3,981.60
Logical reads: 11,386.58 697.55
Block changes: 232.94 14.27
Physical reads: -6,914,215.22 -423,569.86
Physical writes: 106.05 6.50
User calls: 44.82 2.75
Parses: 5.95 0.36
Hard parses: 1.07 0.07
Sorts: 8.90 0.55
Logons: 0.16 0.01
Executes: 54.94 3.37
Transactions: 16.32

% Blocks changed per Read: 2.05 Recursive Call %: 98.72
Rollback per transaction %: 0.91 Rows per Sort: #######

Tom Kyte
August 20, 2004 - 11:31 am UTC

this means the numbers "have rolled" -- and either gone negative or, if they were not using signed integers, hit 4 billion and rolled to zero.

if the next report is OK, they hit 4 billion and rolled to zero.
if the next report is still negative, they are using signed integers.

Is there a hard limit of 4 Billion

Neeraj Nagpal, August 20, 2004 - 1:05 pm UTC

Tom,
When you say the numbers "have rolled", could you please clarify who may or may not be using signed integers, and whether there is a hard limit of 4 billion on these numbers?

Thanks so much,
Neeraj

Tom Kyte
August 21, 2004 - 11:02 am UTC

most of the counters are 32-bit or 64-bit integers, depending on OS/version and so on.

Negative values of execute to parse in Statspack report.

Ramana Mallela, August 20, 2004 - 1:28 pm UTC

Hi Tom,

Extending the original question: please advise how I can identify those SQL statements which PARSE but never EXECUTE.

Thanks
Ramana

Tom Kyte
August 21, 2004 - 11:04 am UTC

v$sql -- look at the parse and execute counts.
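A sketch of such a query against v$sql (column names per the standard dynamic performance views):

```sql
-- Statements that were parsed but never executed, worst offenders first
select parse_calls, executions, sql_text
  from v$sql
 where parse_calls > 0
   and executions = 0
 order by parse_calls desc;
```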

cpu above 100%

RD, August 21, 2004 - 12:36 am UTC

Hi Tom,

Can you please explain the "parse cpu to parse elapsed" percentage and its meaning in simple words? Then maybe I can understand how it is possible for me to get the following percentage in my statspack report.


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.88 In-memory Sort %: 100.00
Library Hit %: 99.55 Soft Parse %: 99.64
Execute to Parse %: 29.78 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 127.27 % Non-Parse CPU: 97.12



Parse CPU to Parse Elapsd is more than 100 percent. It's really got me.

Further I also would like to understand why would oracle parse a sql and not execute it. Under what circumstances is this possible?

Thanks tons as always,
Regards,
RD.


Tom Kyte
August 21, 2004 - 11:46 am UTC

parse cpu = amount of cpu time used to parse
elapsed time parsing = amount of time on the wall clock spent parsing.

decode(:prsela, 0, to_number(null)
, round(100*:prscpu/:prsela,2)) pctval



in a perfect world, with no contention -- parse cpu = parse elapsed.

ratio = 100%

in a bad world, it takes longer to parse (elapsed) than the CPU time used (contention). ratio < 100%


in your case, what this is saying is that the CPU time exceeded the elapsed time, which technically is not possible -- but happens due to the way "small fast things" are measured on computers. It is hard to measure things that happen very rapidly accurately. So this ratio, when > 100%, is the same as "100%" for all intents and purposes.
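The two inputs to that formula come from the 'parse time cpu' and 'parse time elapsed' statistics; a sketch of the instance-wide ratio since startup (both statistics are reported in centiseconds, so the units cancel out):

```sql
-- Parse CPU to Parse Elapsed %, from the cumulative v$sysstat counters;
-- a value over 100 is just measurement granularity -- read it as 100
select round(100 * cpu.value / nullif(ela.value, 0), 2) as "Parse CPU to Elapsed %"
  from v$sysstat cpu,
       v$sysstat ela
 where cpu.name = 'parse time cpu'
   and ela.name = 'parse time elapsed';
```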

If you want to read more about the measurement issue -- I have a brief write-up on it in Effective Oracle by Design -- Cary Millsap has written much more extensively on it in his Oracle performance book (an excellent book, by the way).



As for the last question -- Oracle does what it is told to do. Oracle is told by you to PARSE. If you do not tell Oracle to execute the statement, it won't. That is how you get a parse but no execute.

why would oracle parse a sql and not execute it?

Kamal Kishore, August 21, 2004 - 10:13 am UTC

I guess one way could be you do a DBMS_SQL.PARSE but do not do a DBMS_SQL.EXECUTE after that.
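A minimal sketch of that situation with DBMS_SQL -- PARSE is its own call, and nothing obliges you to follow it with EXECUTE:

```sql
declare
   c integer;
begin
   c := dbms_sql.open_cursor;
   -- this counts as a parse call...
   dbms_sql.parse(c, 'select dummy from dual', dbms_sql.native);
   -- ...but we never call dbms_sql.execute, so executions stays at 0
   dbms_sql.close_cursor(c);
end;
/
```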


negative values of execute to parse in Statspack report

Ramana Mallela, August 30, 2004 - 11:54 am UTC

Hi Tom,

Earlier in your followup you mentioned: "It is commonly referred to as 'inefficient coding techniques'. I see it with some third-party 'generic' libraries. They parse like there is no tomorrow -- works great in a single user environment -- starts to stink with two users (or more)."

Please advise how they can be eliminated or efficiently coded ?

Thanks in advance.

Ramana



Tom Kyte
August 30, 2004 - 12:13 pm UTC

by only parsing a statement ONCE per session,

not once per execution
not once to "describe a table"
not once to "describe the ith column in a table"


by only parsing ONCE -- which is both the least and the most number of times a statement that will be executed needs to be parsed (and if you aren't going to execute it, don't parse it).

negative values of execute to parse in Statspack report

Ramana Mallela, August 31, 2004 - 2:20 pm UTC

Hi Tom, 

Sorry for taking more of your precious time on this issue. Let me re-phrase my question.

As you said, the following are the SQL statements which are parsing, parsing... and parsing. My question is: how differently can the dev team code the following SELECT statements, so that they parse once and execute many times?

Details 
========================

  1   select PARSE_CALLS,EXECUTIONS,SQL_TEXT from v$sql
  2  where executions = 0 and parse_calls > 0
  3* order by parse_calls desc
SQL> /

PARSE_CALLS EXECUTIONS SQL_TEXT
----------- ---------- -------------------------------------------------------
       5102          0 SELECT SKILL_ID FROM WN_JOB_ORDER_SKILLS WHERE JOB_ORDE
                       R_ID = :"SYS_B_0"

       3247          0 SELECT SKILL_ID FROM WN_COMPANY_SKILLS WHERE COMPANY_BI
                       N_ID = :"SYS_B_0"

       2476          0 SELECT TASK_ID, OFFICE, TASK_TYPE, DUE_DATE, TASK_STATU
                       S, MEMO_TEXT, ALARM_WHEN, TO_CHAR(ALARM_WHEN, :"SYS_B_0
                       ") ALARM_TIME FROM WN_TASKS WHERE ASSIGNMENT_ID = :"SYS
                       _B_1" ORDER BY DUE_DATE, TO_CHAR(ALARM_WHEN, :"SYS_B_2"
                       )

        747          0 SELECT * FROM WN_OFFICE_SORT_DEF_BINS WHERE OFFICE = :"
                       SYS_B_0" OR OFFICE IN (SELECT OFFICE FROM WN_OFFICE_LOG
                       INS WHERE LOGIN = :"SYS_B_1" AND ACTIVE = :"SYS_B_2")

        717          0 select srl_id, srl_name, login, table_name, created_dat
                       e from wn_srl where login = :"SYS_B_0"

        635          0 select country_code, country_name from wn_countries
        343          0 SELECT * FROM WN_JOB_ORDERS WHERE JOB_ORDER_ID = :"SYS_
                       B_0"

        120          0 select * from wn_employee_resume where person_id = :"SY
                       S_B_0"


Regards 
Ramana

 

Tom Kyte
August 31, 2004 - 2:55 pm UTC

ask the development team why

a) cursor_sharing=force must be on? why haven't they fixed the bugs in their code?
b) why they parse a statement 5,102 times... but never actually RUN IT??? sort of like compiling a subroutine 5,102 times -- and never actually calling it
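One way the dev team could restructure those statements is to push them into PL/SQL with real bind variables, so the SQL is parsed at most once per session and only when it is actually going to be executed. A hypothetical sketch for the first statement in the report (the function name and collection type are illustrative, not from the original system):

```sql
create or replace function get_job_order_skills(p_job_order_id in number)
   return sys.odcinumberlist
as
   l_skills sys.odcinumberlist;
begin
   -- static SQL with a true bind variable: PL/SQL caches this cursor,
   -- so it is parsed once per session no matter how often this is called
   select skill_id
     bulk collect into l_skills
     from wn_job_order_skills
    where job_order_id = p_job_order_id;
   return l_skills;
end;
/
```

With this in place, cursor_sharing=force also becomes unnecessary for this statement, since the bind is real rather than a literal rewritten as :"SYS_B_0".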




these are my stats

A reader, September 22, 2004 - 3:21 am UTC

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 99.99
Buffer Hit %: 93.16 In-memory Sort %: 100.00
Library Hit %: 99.61 Soft Parse %: 99.59
Execute to Parse %: 11.79 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 11.36 % Non-Parse CPU: 97.69

how can I get rid of the low values of Execute to Parse and Parse CPU to Parse Elapsd %?

thanks

Tom Kyte
September 22, 2004 - 7:57 am UTC

hey, 1 CPU second spent parsing versus 2 elapsed seconds spent parsing would result in a 50% ratio.

Seems really bad, right -- 50%?

but hey, we are talking about 1 CPU second and 2 wall-clock seconds in a 15-minute period.

big deal, nothing to be concerned about


(this is why ratios in general can only be used as a RED FLAG causing you to dive into the details to see if a problem ACTUALLY IN PRACTICE EXISTS)


So, dive into the details here. Is the elapsed time minus the CPU time (the wait time for parsing), divided by the number of users (arg, very ratio-like; averages -- another awesome way to hide information, but statspack is very high level like that), a major portion of their wait event time over the period of observation?


For example, let's say you have a 30-minute statspack report (1,800 seconds).

You have 500 seconds of parse elapsed time.
You have 50 seconds of parse cpu time

450 missing seconds.
But you had 1,000 users.

0.450 average seconds of wait per session in the period of observation. Your end users would not notice that you sped them up by an entire half a second here; it isn't the low-hanging fruit.




This is similar to my analogy:

On my last car trip, I stopped at 500 red lights. Was that

a) good
b) bad
c) neither good nor bad


No matter what you choose, I will prove the other TWO are in fact correct. Why/How? I control the situation and the other numbers you don't have access to, haven't looked at.

BUT WHAT DOES THIS INDICATE

A reader, September 22, 2004 - 8:14 am UTC

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

THIS TARGET (100%)

Tom Kyte
September 22, 2004 - 8:31 am UTC

do you understand what a ratio is?

do you understand what ratios do? how they work?

did you read the example above showing that 10% could be *just fine*?


ratios are ratios

ratio based tuning "get these to 100%" is fruitless.

the devil is in the details, please -- go to the details and actually *analyze the issue*



If you read in the paper


"And in population news, the fastest growing country on the planet is BallyBoo. It grew 500% in the last year"


what can you tell me? List the facts that you can tell me about the country of BallyBoo, its population, its impact on the world economy.



Now, if I tell you that the existing married residents of BallyBoo just had their parents move into the country with them as well as their 2 brothers and 2 sisters -- so the population went from 2 to 10 -- well, hmmm. Who really cares at this point?


Approximately 50% of married people are male. Now what? (other than you could probably infer that the other 50% are not male)




Great Points on Ratios and 3rd Party Tools

Greg Jarmiolowski, October 20, 2004 - 5:16 pm UTC

I have to live with the extra parses that come from using the microsoft ole db provider for oracle. But what I don't understand are all these parses without executions.

I execute
SELECT parse_calls, executions, sql_text
FROM v$sql
WHERE executions = 0
AND parse_calls > 0
AND module IS NULL
ORDER BY parse_calls DESC

And I get several statements similar to this all with 22,479 parse calls.

SQL_TEXT
-----------------------------------------------
select q_name, state, delay, expiration, rowid, msgid, dequeue_msgid, chain_no, local_order_no, enq_time, enq_tid, step_no, priority, exception_qschema, exception_queue, retry_count, corrid, time_manager_info, sender_name, sender_address, sender_protocol from QS_CBADM.QS_CBADM_ORDERS_MQTAB where msgid = :1

Is this normal? Should it be of concern?

Muchas gracias,

Greg

Tom Kyte
October 20, 2004 - 8:43 pm UTC

yes, they are parsing a query and never actually executing it. It could be a problem if it happens a lot in a short time frame (we have some recursive SQL in Oracle that does this too -- it is all parsed up front, but not all of it is used all of the time).

22,000 over a couple of days/weeks isn't too bad, probably. You'd have to quantify that with a "time frame" -- eg: copy the data down, wait an hour, and see how many parses you did in that hour via subtraction.
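That subtraction approach could be sketched like this (the snapshot table name is made up; on 9i and earlier, address plus hash_value identifies a cursor):

```sql
-- 1) snapshot the current counts
create table parse_snap as
   select address, hash_value, parse_calls
     from v$sql
    where executions = 0
      and parse_calls > 0;

-- 2) ...wait an hour, then diff against the live view
select v.sql_text,
       v.parse_calls - s.parse_calls as parses_last_hour
  from v$sql v, parse_snap s
 where v.address    = s.address
   and v.hash_value = s.hash_value
 order by 2 desc;
```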

I think it is Oracle's SQL

Greg Jarmiolowski, October 22, 2004 - 4:34 pm UTC

Tom,

Since module is null and it is querying an Oracle-supplied schema, I am assuming this is Oracle parsing all of its own SQL.

It is happening more than once per minute on about 12 different queries. I think I'll need to research further.

Greg

Tom Kyte
October 23, 2004 - 9:32 am UTC

why "since module is null"??

QS_CBADM.QS_CBADM_ORDERS_MQTAB

is not one of "Oracle's default tables"

parse_calls vs. executions

Robert, January 05, 2005 - 12:18 pm UTC

Tom,

I see your explanation of how we can have v$sqlarea.executions=0 (i.e., by parsing statements but never executing them).

But how can you have the situation in v$sqlarea...
where executions = 0
and parse_calls = 0
and invalidations = 0 ?

I am seeing several of these.

Thank you,

Robert.

Tom Kyte
January 05, 2005 - 12:49 pm UTC

do they look like DDL in 10g?

or do they look like some SYS recursive SQL

(eg: peek at the sql, is it YOUR sql or OUR sql)

Some application / some recursive

Robert, January 05, 2005 - 1:14 pm UTC

Tom,

Version is 9.2

Some of it looks like application sql (11i Apps)
e.g.: SELECT COUNT(*), ROWID InternalRowid FROM RG_REPORT_AXES WHERE AXIS_SET_ID = 1212 AND DISPLAY_FLAG = 'Y'
...and some of it looks like recursive sql
as far as the 'selects' go.
Then there are some 'analyze' statements as well.

Thanks,

Robert

Tom Kyte
January 05, 2005 - 6:14 pm UTC

cool -- thanks for the sql -- that helped. The SQL parser in my brain threw an ORA-937 looking at it, which made this really easy to reproduce:

ops$tkyte@ORA9IR2> select count(*), rowid x from dual;
select count(*), rowid x from dual
                 *
ERROR at line 1:
ORA-00937: not a single-group group function

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select sql_text from v$sqlarea where executions = 0 and parse_calls = 0 and invalidations = 0
  2  /

SQL_TEXT
-------------------------------------------------------------------------------
select count(*), rowid x from dual


it is just erroneous SQL that didn't quite make it through the parse stage; it'll go away soon, when the slot is needed.

FYI -- Follow up from long ago

Greg Jarmiolowski, February 03, 2005 - 9:59 am UTC

FYI

why "since module is null"??
>> Because if it was something like dllhost.exe I would have known it was the ole db driver.

QS_CBADM.QS_CBADM_ORDERS_MQTAB is not one of "oracles default tables"
>> No, it is not "default", but I never said it was -- just that it was Oracle-supplied. It is part of a sample schema shipped with the product. I dropped the schema and now the relentless parsing with no executions is gone.


Mikhail, February 08, 2005 - 4:23 pm UTC

Oracle Advanced Queuing is responsible for such queries (my guess: the notification mechanism)

select q_name, state, delay, expiration, rowid, msgid, dequeue_msgid, chain_no, local_order_no, enq_time, enq_tid, step_no, priority, exception_qschema, exception_queue, retry_count, corrid, time_manager_info, sender_name, sender_address, sender_protocol from QS_ES.QS_ES_ORDERS_MQTAB where msgid = :1

Parse to execute ratio

Alison Ashcroft, July 27, 2005 - 9:31 am UTC

Very informative - my Live system keeps producing high ratios and now I know why. The developers are incurring my wrath/thoughts on their over-parsing!

Execute to parse ratio detective

JK Purtzer, August 08, 2005 - 4:25 pm UTC

We are running a 9.2.0.6 OLTP DB serving a Website using a dot.net app. Here is a 15min stat of our ratios, I am trying to dig to see where the bottlenecks are in execute to parse. In response to your Effective Oracle by Design book about reading statspack in ch2.

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 99.99
Buffer Hit %: 45.63 In-memory Sort %: 99.98
Library Hit %: 99.90 Soft Parse %: 99.74
Execute to Parse %: 42.95 Latch Hit %: 100.00
Parse CPU to Parse Elap. %: 87.60 % Non-Parse CPU: 98.10

earlier in this thread I saw this query:
SELECT parse_calls, executions, sql_text
FROM v$sql
WHERE executions = 0
AND parse_calls > 0
ORDER BY parse_calls DESC

This gives me 40 rows of SQL text which, when summed up, give 274,000 parses w/o executes.
How do I find out where those are happening? The top "10" are mostly "select * from sometable".

I also ran

select parse_calls, executions, sql_text
from v$sql
order by parse_calls desc;

and most of the top 40 on that query are a 1:1 parse_calls to executions. Is this OK, or does it fall into the "bad" category? I'm not even sure what the first two are doing.

1 491875 491875 update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9,flags=:10 where obj#=:1
2 444809 444809 select privilege#,level from sysauth$ connect by grantee#=prior privilege# and privilege#>0 start with (grantee#=:1 or grantee#=1) and privilege#>0


Tom Kyte
August 08, 2005 - 8:34 pm UTC

dot.net isn't that redundant :)

execute to parse is probably just dandy for a web based application (well, not dandy but livable).

However, 45.63% for another number looks a bit low unless this system was just started, but then the others wouldn't/couldn't be so high.

Counting Parses

James K. Purtzer, August 09, 2005 - 3:20 pm UTC

Still want to run down the parses w/o execute. The programmers wanted a quick clarification on counting parses. For example:
If x=1 then
Select * from TableA
Else
Select * from TableB
End If;

Is that two parses for the selects, even though the if-then statement would only run one of them?

Here is the rest of the load profile. I thought we weren't supposed to look at buffer cache! :) This db has been running for 13 days; my graph on buffer advice is getting pretty flat. Advice?
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 112M Std Block Size: 8K
Shared Pool Size: 152M Log Buffer: 512K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 5,947.29 3,490.89
Logical reads: 1,616.66 948.93
Block changes: 29.88 17.54
Physical reads: 884.75 519.32
Physical writes: 7.62 4.47
User calls: 26.02 15.27
Parses: 14.38 8.44
Hard parses: 0.04 0.02
Sorts: 5.90 3.46
Logons: 0.64 0.38
Executes: 25.20 14.79
Transactions: 1.70

% Blocks changed per Read: 1.85 Recursive Call%: 68.54
Rollback per transaction %: 0.00 Rows per Sort: 9.82

Tom Kyte
August 09, 2005 - 3:42 pm UTC

in plsql that would be 1 parse PER SESSION per code path executed.

if you
a) log in
b) call that and only one code path is executed
c) log out

we would expect 1 parse.




What about the parse to execute ratio 100%?

Sejas Patel, December 20, 2005 - 9:12 am UTC

Hi tom, 

My dbms_monitor produced a trace file, which after tkprof shows me something like below;

********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.18       0.20          0         22          0           0
Execute      3      0.01       0.01          0         18          0           3
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.19       0.21          0         40          0           3

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       6        0.00          0.00
  SQL*Net message from client                     6       26.73         27.93


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       13      0.03       0.01          0          0          0           0
Execute     13      0.03       0.02          0          0          0           0
Fetch       45      0.00       0.00          0         50          0          36
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       71      0.06       0.03          0         50          0          36

Misses in library cache during parse: 11
Misses in library cache during execute: 10

    6  user  SQL statements in session.
   10  internal SQL statements in session.
   16  SQL statements in session.
********************************************************************************
Trace file: orcl_ora_1068.trc
Trace file compatibility: 10.01.00
Sort options: default

       1  session in tracefile.
       6  user  SQL statements in trace file.
      10  internal SQL statements in trace file.
      16  SQL statements in trace file.
      12  unique SQL statements in trace file.
     232  lines in trace file.
       1  elapsed seconds in trace file.

the way i got this trace is by doing this;

create or replace procedure model_ref_p
as
t number;
begin
select count(*) into t from model_Ref;
end;
/
procedure created.

conn sys/aaa as sysdba
connected.

exec dbms_monitor.session_trace_enable(146);

conn vehicles/vvv
connected.

SQL> exec model_ref_p

PL/SQL procedure successfully completed.

SQL> exec model_ref_p

PL/SQL procedure successfully completed.

SQL> exec model_ref_p

PL/SQL procedure successfully completed.

Now i totally understand the 10 internal statements as they are looking for the indexes and i have a user audit on as well.

but why did it put my three procedure executions in NON-RECURSIVE??? Isn't it supposed to cache the SQL???

i tried the same thing with individual SQL instead of a bundled procedure, but same effect; so then i tried it in a procedure, but no change at all.

I also get similar output when i trace a session from JDBC (the reason i am asking you the question), which gives a 1:1 parse:execute. But my pl/sql app is about 0.09:1 parse to execute.

So it's funny how a procedure executed from sqlplus won't cache SQL.

But on my production system, when i trace a session on the mod_plsql based website, it actually gives me a 0.09:1 parse:execute ratio. So what's different between sqlplus and a web app via mod_plsql??? Shouldn't Oracle cache the core SQL regardless???

Very frustrated... no way out.

Maybe a suggestion (if you can): should an application be moved to J2EE when it's serving 60Mil page impressions a month?

thanks much for your ongoing support.

regards, 

Tom Kyte
December 20, 2005 - 9:52 am UTC

I've no idea what you were tracing there. The session you wanted to trace didn't exist when you started tracing????


Anyway, look in your tkprof for the SQL statement and you should find IT was parsed once, executed MANY.

Execute to Parse and CURSOR_SHARING

Mark, March 24, 2006 - 1:30 pm UTC

Tom,
In this message
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2654838638693#22995784340413 <code>

you mentioned cursor sharing.

Can you explain to us lesser folk how cursor_sharing = force relates to an Execute to Parse % in Statspack reports being negative?

Also, which is more accurate to get the parsed but not executed sql, using V$SQLAREA or V$SQL. I get very different results:

SELECT a.sql_text, a.parse_calls
FROM v$sqlarea a, v$sqltext t
WHERE a.hash_value = t.hash_value
AND a.address = t.address
AND a.executions = 0
AND t.piece = 0
/

or

SELECT a.sql_text, a.parse_calls
FROM v$sql a, v$sqltext t
WHERE a.hash_value = t.hash_value
AND a.address = t.address
AND a.executions = 0
AND t.piece = 0

Regards,
Mark

Tom Kyte
March 24, 2006 - 4:12 pm UTC

cursor_sharing = force will not budge the execute to parse % at all.

cursor_sharing = force can turn a hard parse problem into a soft parse problem with other side effects.

cursor_sharing = force will NOT change the parse counts (it'll reduce the hard parse count, turning them into soft, but the overall parse count will NOT budge)

v$sqlarea is an aggregation of v$sql, v$sql is typically what you should use.
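And since v$sql exposes sql_text directly, the v$sqltext join in the queries above isn't strictly needed; a simpler sketch:

```sql
select sql_text, parse_calls
  from v$sql
 where executions  = 0
   and parse_calls > 0;
```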

cpu is very high

A reader, June 20, 2006 - 4:24 am UTC

Hi Tom,
Please help me see why CPU is taking a huge amount of time, and why my parse CPU is showing high.
Begin Snap: 1552 20-Jun-06 13:02:01 193 181.0
End Snap: 1564 20-Jun-06 15:02:05 207 183.9
Elapsed: 120.07 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 1,472M Std Block Size: 8K
Shared Pool Size: 416M Log Buffer: 256K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 3,779.09 5,761.81
Logical reads: 141.99 216.48
Block changes: 12.49 19.04
Physical reads: 5.58 8.50
Physical writes: 7.46 11.37
User calls: 93.09 141.93
Parses: 22.49 34.29
Hard parses: 0.17 0.26
Sorts: 3.41 5.21
Logons: 0.45 0.69
Executes: 28.28 43.12
Transactions: 0.66

% Blocks changed per Read: 8.80 Recursive Call %: 25.69
Rollback per transaction %: 53.14 Rows per Sort: 79.03

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.15 In-memory Sort %: 100.00
Library Hit %: 99.49 Soft Parse %: 99.25
Execute to Parse %: 20.49 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 73.30 % Non-Parse CPU: 99.88

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 94.36 94.57
% SQL with executions>1: 38.98 36.67
% Memory for SQL w/exec>1: 65.20 60.74

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Call Time
-------------------------------------------- ------------ ----------- ---------
CPU time 8,798 98.52
process startup 344 83 .93
db file sequential read 8,320 26 .29
latch: library cache 3,699 7 .08
log file parallel write 3,307 4 .04
-------------------------------------------------------------

Tom Kyte
June 20, 2006 - 10:04 am UTC

the cpu time is high quite simply because.....


you used your cpu.

we are just reporting the "fact" that you used 8,798 cpu seconds in that snapshot.

Now, in 2 hours, you have 7,200 cpu seconds per cpu.

Assuming a modest 2 cpu machine, one might say "well done, a 50% average load"

But the answer is always "it depends", totally insufficient data to say a thing about much with a 2 hour stats pack and just the header (but please, do not post the entire thing, that won't help)

You can lower your usage of cpu by doing smart things like "parse less, cache cursors, be more efficient"

but there is nothing glaringly "bad" here.

time(s)

Anand Varadarajan, May 04, 2007 - 8:43 am UTC

db file sequential read 1,941,420 37,703 80.29
enqueue 37,551 3,214 6.84
CPU time 1,945 4.14
SQL*Net break/reset to client 3,340,237 1,069 2.28
async disk IO 38,164 1,030 2.19


Its a 8 cpu machine with a 30 mins snap period.
My question is: are the 37,703 seconds actual seconds, or are they reported in hundredths of a second?
Tom Kyte
May 04, 2007 - 1:08 pm UTC

what does the column heading say.

remember, waits are multiplied by the number of sessions waiting.

If you have 1,000 people
And they wait 1 second each in a one minute window
You have 1,000 seconds of wait time in that 60 second observation.

parse to execute negative with 3rd software

amang nurpraneko, October 24, 2007 - 12:40 am UTC

Dear Tom,

How can we tune the code if we are using 3rd-party software running on Oracle?

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
xxxxxx 3041805073 xxxx 1 9.2.0.6.0 NO

Snap Id Snap Time Sessions Curs/Sess Comment
--------- ------------------ -------- --------- -------------------
Begin Snap: 7538 22-Oct-07 11:32:08 54 6.0
End Snap: 7545 22-Oct-07 14:31:57 50 4.4
Elapsed: 179.82 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 512M Std Block Size: 8K
Shared Pool Size: 512M Log Buffer: 51,200K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 49,923.06 2,556.64
Logical reads: 20,765.13 1,063.42
Block changes: 271.26 13.89
Physical reads: 552.31 28.28
Physical writes: 24.69 1.26
User calls: 135.17 6.92
Parses: 54.40 2.79
Hard parses: 27.02 1.38
Sorts: 1,268.14 64.94
Logons: 0.03 0.00
Executes: 52.35 2.68
Transactions: 19.53

% Blocks changed per Read: 1.31 Recursive Call %: 56.45
Rollback per transaction %: 2.74 Rows per Sort: 1.92

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.99
Buffer Hit %: 97.40 In-memory Sort %: 100.00
Library Hit %: 83.80 Soft Parse %: 50.34
Execute to Parse %: -3.90 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 67.56 % Non-Parse CPU: 71.31

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 87.47 86.46
% SQL with executions>1: 8.95 31.88
% Memory for SQL w/exec>1: 10.83 12.89

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 3,536 32.56
log file sync 213,108 2,045 18.83
db file sequential read 721,170 1,679 15.46
db file scattered read 588,363 972 8.95
LGWR wait on LNS 943,839 696 6.41
-------------------------------------------------------------
Tom Kyte
October 24, 2007 - 9:00 am UTC

wow, you have some really bad 3rd party software.

tell them to google sql injection - they are not using binds, they are very insecure (hack their code maybe and prove it to them, sql-inject them yourself)

they parse sql more than they execute it (the negative ratio)
they hard parse like crazy
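The negative ratio follows directly from the Load Profile above; a quick sanity check using the formula from the original answer (rates copied from the report):

```python
# Execute to Parse % = 100 * (1 - parses/executes); it goes negative
# whenever parse calls outnumber executes.
parses_per_sec = 54.40    # "Parses:" from the Load Profile
executes_per_sec = 52.35  # "Executes:" from the Load Profile

execute_to_parse = round(100 * (1 - parses_per_sec / executes_per_sec), 2)
print(execute_to_parse)  # close to the -3.90 shown in the report,
                         # which is computed from unrounded totals
```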

They should have been discarded out of hand during the technical evaluation phase before purchasing.

There is not much YOU can do

It is entirely in their hands, the only ones that can reduce parsing - the coders
The only ones that can bind properly - the coders


difference between parse time elapsed (time model) and (instance activity) of awr report

andrew, October 13, 2012 - 10:02 pm UTC

Hi Tom,
we have a low execute to parse 35.49.

soft parse 99.46

in a period of 420 mins, in the instance activity:
parse time cpu 186,989 (what is the unit: min, sec, ms?) 7.25 0.13
parse time elapsed 526,817 20.42 0.36

in the time model
pl/sql execution elapsed time 190,151 sec
parse time elapsed 11,024 sec
hard parse elapsed time 7721 sec
db time 205,514 sec

I was reading this thread, and to determine whether parse cpu to parse elapsed is good or not, we also need the following values:
users:
parse elapsed time: <-- this is in both the time model and the instance activity; which one should I use?
parse time cpu: <-- this is not in the time model; where can I find it?
duration

Please advise

thanks

andrew

difference between parse time elapsed (time model) and (instance activity) of awr report

andrew, October 13, 2012 - 10:17 pm UTC

sorry, these are the correct stats; I posted the wrong ones in my previous post.
Hi Tom,
we have a low parse cpu to parse elapsed: 35.49
execute to parse 22.99.
soft parse 99.46

in a period of 420 mins, in the instance activity:
parse time cpu 186,989 (what is the unit: min, sec, ms?) 7.25 0.13
parse time elapsed 526,817 20.42 0.36

in the time model
pl/sql execution elapsed time 190,151 sec
parse time elapsed 11,024 sec
hard parse elapsed time 7721 sec
db time 205,514 sec

I was reading this thread, and to determine whether parse cpu to parse elapsed is good or not, we also need the following values:
users:
parse elapsed time: <-- this is in both the time model and the instance activity; which one should I use?
parse time cpu: <-- this is not in the time model; where can I find it?
duration

Please advise

thanks

andrew


Tom Kyte
October 15, 2012 - 8:46 am UTC

here are the units for statistical metrics:

http://docs.oracle.com/cd/E11882_01/server.112/e25513/stats002.htm#i26940

(tens of milliseconds for parse statistics)
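Assuming that centisecond unit (tens of milliseconds) from the reference above, the instance-activity parse figures posted earlier convert as follows:

```python
# Instance-activity parse statistics are reported in centiseconds
# (tens of milliseconds) per the statistics reference linked above.
CENTISECOND = 0.01  # seconds per unit

parse_time_cpu_units = 186_989      # "parse time cpu" from the post
parse_time_elapsed_units = 526_817  # "parse time elapsed" from the post

cpu_s = parse_time_cpu_units * CENTISECOND
elapsed_s = parse_time_elapsed_units * CENTISECOND
print(f"parse cpu: {cpu_s:.0f} s (~{cpu_s / 60:.0f} min), "
      f"parse elapsed: {elapsed_s:.0f} s (~{elapsed_s / 60:.0f} min)")
```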


the one number is the amount of time spent in aggregate by all of the sessions doing something. So, for example, you could have in one second of elapsed time - 1,000 seconds of elapsed parse time if you had 1,000 people trying to parse all at the same time.

you have a lot of parsing going on and you have a lot of people waiting to parse. reducing the amount of parsing (parse once, execute many) using statement caching or something similar would likely be a good thing for you - you are using about 31 minutes of your cpu time just parsing. the gain won't be astronomical (you are only using about 31 of your 420*cpu_count cpu minutes) though.

difference between parse time elapsed (time model) and (instance activity) of awr report

Andrew, October 16, 2012 - 2:15 am UTC

Thanks for your quick response, Tom. Sorry, I am still confused.

1. you replied that the time model's parse time elapsed means "the amount of time spent in aggregate by all of the sessions doing something"
2. what is parse time elapsed in the instance activity?
3. We have a 2-node RAC; does the parse time elapsed in the time model reflect the aggregates for both nodes?
4. does the parse time elapsed in the instance activity reflect aggregates for node 1 only?
5. why are the values of parse time elapsed in the instance activity and in the time model so different?

6. so in the time model there is NO parse time cpu; is there a formula to calculate it?

Here's the stats again
we have a low parse cpu to parse elapsed: 35.49
execute to parse 22.99.
soft parse 99.46

in a period of 420 mins in the instance activity:
parse time cpu 186,989 (what is the unit: min, sec, ms?) 7.25 0.13
parse time elapsed 526,817 20.42 0.36

in the time model
pl/sql execution elapsed time 190,151 sec
parse time elapsed 11,024 sec
hard parse elapsed time 7721 sec
db time 205,514 sec

thanks again

andrew
Tom Kyte
October 16, 2012 - 9:53 am UTC

parse time elapsed in instance stats is the sum of all session parse times. This is why you can have 1,000 seconds of parse time elapsed in one second - if you had 1000 sessions parsing.

3) depends on how you generate the report, by default it will be for an instance, although you can generate a "global" report as well.

4) see #3

use the instance stats here - but look at how small a bit of time you are spending parsing as opposed to the total elapsed time. You are probably barking up the wrong tree here. I'm always a fan of decreasing parsing - but you don't spend much time doing it so it really isn't going to make anything "super fast"

but it might - if the thing you are trying to make super fast is the only thing parsing.

I would suggest you might not want to over-read this AWR report (let ADDM do that); maybe you would be better off looking at ASH information for the program you are trying to tune.

is this db cpu value OK or not acceptable?

andrew, October 25, 2012 - 3:00 am UTC

Hi Tom,
In 10g,
in the time model, I have 15 oracle sessions,
with the db cpu being 7679.4 sec / 15 sessions
= ~511 sec per session,
running for a duration of 15 mins (test); per session, 511/(15*60) = 50% of the duration

1. is the cpu usage of the system OK or not?
2. how much is too much?
3. how does it relate to the OS stats in the awr report?

See attachment below. Please advise

Statistic Name Time (s) % of DB Time
sql execute elapsed time 11,108.53 96.98
DB CPU 7,679.40 67.04
PL/SQL execution elapsed time 1,377.36 12.02
parse time elapsed 188.85 1.65
hard parse elapsed time 159.96 1.40
connection management call elapsed time 23.68 0.21
PL/SQL compilation elapsed time 13.18 0.12
hard parse (sharing criteria) elapsed time 11.31 0.10
hard parse (bind mismatch) elapsed time 7.74 0.07
repeated bind elapsed time 0.81 0.01
sequence load elapsed time 0.38 0.00
DB time 11,454.35
background elapsed time 543.01
background cpu time 461.01

Operating System Statistics
Statistic Total
AVG_BUSY_TIME 27,720
AVG_IDLE_TIME 91,210
AVG_IOWAIT_TIME 0
AVG_SYS_TIME 3,330
AVG_USER_TIME 24,350
BUSY_TIME 888,397
IDLE_TIME 2,920,117
IOWAIT_TIME 0
SYS_TIME 107,825
USER_TIME 780,572
LOAD 8
OS_CPU_WAIT_TIME 22,300
RSRC_MGR_CPU_WAIT_TIME 0
VM_IN_BYTES 9,560,064
VM_OUT_BYTES 0
PHYSICAL_MEMORY_BYTES 17,045,651,456
NUM_CPUS 32

thanks

andrew
Tom Kyte
October 25, 2012 - 9:09 am UTC

it would be nice if you used the code button to make things fixed width for code samples...


1) I cannot answer that. Are you doing things as efficiently as possible? If not, no - you used too much CPU. If so - yes, you did well.

To say "cpu was utilized 50% over 15 minutes" is too coarse. That could mean it was 100% utilized for half of that time, that would be bad. It could mean it was 100% plus (big run queues) for a long time - that would be bad. 50% over 15 minutes isn't descriptive enough to come to a conclusion.

2) more than about 66% for any period of time for OLTP, more than 90'ish, near 100%, for warehousing where you control the load very very precisely.
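On question 3, a sketch of how the Operating System Statistics section above relates: host-level utilization over the snapshot can be derived from BUSY_TIME and IDLE_TIME (both totals across all CPUs, in centiseconds):

```python
# Host CPU utilization over the snapshot = busy / (busy + idle),
# using the OS statistics from the AWR report above.
busy_time = 888_397    # BUSY_TIME, centiseconds, all CPUs
idle_time = 2_920_117  # IDLE_TIME, centiseconds, all CPUs
num_cpus = 32          # NUM_CPUS

utilization = busy_time / (busy_time + idle_time)
print(f"host CPU utilization: {utilization:.1%}")  # ~23.3%

# Wall-clock time the snapshot covers, per CPU:
snapshot_s = (busy_time + idle_time) / num_cpus / 100
print(f"snapshot covers ~{snapshot_s / 60:.0f} minutes")
```

Note the host as a whole is far less busy than the 50%-per-session figure suggests; DB CPU per session and host utilization measure different things.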


awr analysis

bijunator, July 22, 2013 - 9:05 am UTC

Apologies for being naive, but what could the following values be indicating in the AWR report --
VM_IN_BYTES -- 841,520,495,616
VM_OUT_BYTES -- 499,033,451,520
PHYSICAL_MEMORY_BYTES -- 75,957,678,080
Tom Kyte
July 31, 2013 - 4:05 pm UTC

those are bytes paged in and out due to virtual memory as reported to us by the OS

You paged in about 784GB of virtual memory and paged out about 465GB of virtual memory.

you have about 70GB of real memory.


seems like you could be using memory inefficiently - perhaps (probably) by running too many connections to your database (too many processes) or overallocating your SGA and PGA areas (assuming nothing else on this machine short of this single database...)
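The byte counts above convert to binary gigabytes (GiB) as follows; `to_gib` is a hypothetical helper, and the inputs are the report's values:

```python
def to_gib(n_bytes: int) -> float:
    """Convert a byte count to binary gigabytes (GiB)."""
    return n_bytes / 2**30

vm_in = 841_520_495_616     # VM_IN_BYTES from the report
vm_out = 499_033_451_520    # VM_OUT_BYTES
physical = 75_957_678_080   # PHYSICAL_MEMORY_BYTES

print(f"paged in:  {to_gib(vm_in):.1f} GiB")     # ~783.7
print(f"paged out: {to_gib(vm_out):.1f} GiB")    # ~464.8
print(f"physical:  {to_gib(physical):.1f} GiB")  # ~70.7
```

Paging through roughly eleven times physical memory is what prompts the comment about inefficient memory use.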

thanks a lot...

bijunator, August 02, 2013 - 11:51 am UTC

Thanks a lot Tom; for a second I thought you had ignored my query because it was too silly to answer :-)

Just giving an overview of the story, only if you have time for it --
Our DBAs are importing export dumps of 700GB in size. We've got 4 schemas amounting to 1TB in size. The index list is as below --
COUNT(*) INDEX_TYPE
---------- ---------------------------
1 FUNCTION-BASED NORMAL
2 LOB
131 BITMAP
1609 NORMAL
Memory details -- sga_max_size is 12G and pga_aggregate_target is 7.5G.
It took 15-18 days to perform the import. We were using impdp with parallel 8.

Well, as per the DBAs, it turned out that the parallel channel P000 was the bottleneck.
It was processing the indexes one by one while the other 7 channels were waiting.
We killed the job and did the index creations manually.
It took less time than the impdp utility. I wonder what the root cause might be.
Do we have a selective option to import the data first and then create the indexes later?
Will the process of creating the metadata first with content=metadata_only and then importing the data with content=data_only help in any way?
That way the indexes would get created in the first shot and then the data would get populated into the already-indexed tables.

I have probably wandered off topic here, but thanks anyway for your wonderful guidance...
Tom Kyte
August 02, 2013 - 7:32 pm UTC

http://docs.oracle.com/otndocs/products/database/enterprise_edition/utilities/pdf/datapump11gr2_parallel_1106.pdf


it is true that one worker creates the indexes - but it uses parallel index creation - so it would have been using parallel 8 on the index creates themselves. Unless you had disabled parallel query, you should have seen 8 PX processes building the indexes.


did you see that happening?
do you have parallel query enabled?
when you build the indexes manually, what was the exact process used?
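One hypothetical way to sketch the split the questioner describes (an untested outline, not a definitive recipe; it assumes Data Pump's EXCLUDE parameter and DBMS_METADATA for DDL extraction, and the schema names are placeholders):

```sql
-- Hypothetical sketch: load the data without indexes, then build them in
-- parallel afterwards.
-- Step 1 (shell): run impdp with EXCLUDE=INDEX so the data loads without
--                 index maintenance.
-- Step 2: generate the index DDL from the source database:
SELECT DBMS_METADATA.GET_DDL('INDEX', index_name, owner)
  FROM dba_indexes
 WHERE owner IN ('SCHEMA1', 'SCHEMA2');  -- placeholder schema names
-- Step 3: run each CREATE INDEX with explicit parallelism, then reset it:
--   CREATE INDEX ... PARALLEL 8;
--   ALTER INDEX ... NOPARALLEL;
```

This mirrors what the manual index build apparently achieved: the single coordinating worker is no longer serializing index after index.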