
Question and Answer

Tom Kyte

Thanks for the question.

Asked: June 14, 2001 - 9:23 am UTC

Last updated: December 29, 2008 - 3:29 pm UTC

Version: 8.1.7

You Asked

Tom

How is it possible to avoid a soft parse?

My understanding is that the first time a query is issued it is hard parsed, i.e. it is read from disk. But after that, every time you issue that query it is read from memory, i.e. soft parsed. Isn't that so?
How can I avoid the soft parse?


Further, I observed that when I run a query from SQL*Plus, it is parsed each time, i.e. the parse count equals the execute count.
But when I put the same query in a procedure and execute it again and again, I find that it is parsed once and executed many times.
(I got the count info from v$sqlarea.)

Can you please explain with examples?


and Tom said...

how you avoid a soft parse varies by language.

In PLSQL -- you get it for free. If you have code that does:


for i in 1 .. 100
loop
    for x in ( select * from t where zzz = i )
    loop
        ...
    end loop;
end loop;

the query "select * from t where zzz = :BIND" is parsed ONCE per session -- not per execution.


In Java -- you would use a prepared/callable statement and prepare it ONCE per session -- not once per execution.

In VB -- same, don't be in such a hurry to close those statements; reuse them over and over and over.


In SQLPlus -- being a very generic tool -- it cannot cache open cursors for us like that. You can play with session_cached_cursors to achieve some caching by the server.

PLSQL being very very specific -- can cache the (static) cursors for us. (but not ref cursors)
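
A minimal sketch of how to see this for yourself, assuming a scratch table T with a numeric column ZZZ and SELECT access to V$SQL (the procedure name is made up, and the exact normalized SQL text varies by version):

create table t ( zzz number );

create or replace procedure demo_parse_once
as
begin
    for i in 1 .. 100
    loop
        for x in ( select * from t where zzz = i )
        loop
            null;
        end loop;
    end loop;
end;
/

exec demo_parse_once
exec demo_parse_once
exec demo_parse_once

-- in a freshly started session, parse_calls stays at 1 while executions climbs
select parse_calls, executions, sql_text
  from v$sql
 where upper(sql_text) like 'SELECT * FROM T WHERE ZZZ%';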


Comments

what if i am using servlet?

A reader, October 26, 2001 - 8:00 pm UTC

What if I am using a servlet? Each time a user connects, does some queries and disconnects --
how do those users share the same statement?
Or do I have no choice but to soft parse each time?

Tom Kyte
October 27, 2001 - 8:21 am UTC

If you are connecting/disconnecting -- then yes, each time will be a soft parse. Parsed statements are at the session level. destroy the session, lose the statement.

But...

A reader, March 20, 2002 - 1:54 pm UTC

Hi

When you execute a BEGIN ... END block with an insert in it multiple times, we get:


INSERT INTO SIEBEL.S_ACCNT_POSTN (
CREATED,
ASGN_MANL_FLG,
LAST_UPD_BY,
OU_EXT_ID,
MODIFICATION_NUM,
ROW_ID,
CONFLICT_ID,
ASGN_DNRM_FLG,
ROW_STATUS,
LAST_UPD,
CREATED_BY,
ASGN_SYS_FLG,
POSITION_ID)
VALUES ( :1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse       19      0.46       1.40          0          0          0          0
Execute     19      0.02       0.02          0         22        285         19
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       38      0.48       1.42          0         22        285         19

Misses in library cache during parse: 0
Optimizer goal: RULE


That means 19 soft parses of the insert INSIDE the PL/SQL block...

Igor

Tom Kyte
March 20, 2002 - 3:06 pm UTC

Sorry -- should have been more specific.

Use a stored procedure, only stored procedures can do this caching. If you execute:

begin
insert into t .....
end;

it'll parse the insert once for every execute. OTOH, if you put the insert into a procedure:

create or replace procedure p( ... )
as
begin
insert into ....;
end;
/

then the insert will be parsed upon the first execution of the procedure in your session and not again.
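
A sketch of how you might verify that, assuming a scratch table T and SELECT access to V$SQL (table and procedure names are placeholders):

create table t ( x number );

create or replace procedure ins_t( p_x in number )
as
begin
    insert into t ( x ) values ( p_x );
end;
/

-- call ins_t from the client 19 times, then, in that same (fresh) session:
select parse_calls, executions
  from v$sql
 where upper(sql_text) like 'INSERT INTO T%';
-- expect parse_calls = 1 and executions = 19 for the cached insert, whereas
-- 19 stand-alone "begin insert ... end;" blocks would show 19 parse calls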

But it also makes me a bit confused on parsing

Patricia Hu, March 20, 2002 - 3:45 pm UTC

So does this mean that even if the execution plan is in the SGA, if we issue the exact same query again in SQL*Plus shortly afterwards (the first time was in SQL*Plus too), it would be parsed again because no cursor is cached, whereas only stored procedures cache open cursors? So basically, does a query submitted from SQL*Plus without using a stored procedure not keep the execution plan at all? If so, is there any use for bind variables in SQL*Plus?

Clarification would be very appreciated. Thanks.

Tom Kyte
March 20, 2002 - 7:39 pm UTC

There are two types of parses (well, actually "three" these days).

They are 

o HARD parse -- the query has never been seen before, isn't in the shared pool.  We must parse it, hash it, look in the shared pool for it, don't find it, security check it, optimize it, etc (lots of work).

o SOFT parse -- the query has been seen before, is in the shared poo.  We have to parse it, hash it, look in the shared pool for it and find it (less work then a hard parse but work none the less)

o a kinder, softer SOFT parse -- you are using session_cached_cursors (search this site for that word for info).  We take your query, look in the sessions cursor cache -- find the cursor and reuse it.  Very very little work.

So, after "startup", you go into sqlplus and do this:

SQL> alter session set session_cached_cursors=0; -- just making sure this is off
SQL> select * from emp;  -- hard parse
SQL> select * from emp; -- soft parse
SQL> select * from emp; -- soft parse
SQL> alter session set session_cached_cursors=100; -- enable this feature
SQL> select * from emp;  -- soft parse
SQL> select * from emp; -- kinder, gentler, soft parse

 

How do you distinguish between the last two parses?

Prince, March 20, 2002 - 9:32 pm UTC

Tom,

Does sql_trace (or anything equivalent) show whether a parse was "a kinder, softer SOFT parse" (cached) or just a "SOFT parse" (not cached)?

Tom Kyte
March 20, 2002 - 10:07 pm UTC

you would look into v$mystat/v$statname for "session cursor cache count" to see it go up.

sql_trace won't show that tho. it'll just show as a soft parse. see

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:904605517791 <code>
for an example.
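
To watch it from your own session, a minimal sketch assuming SELECT access to V$MYSTAT and V$STATNAME:

select n.name, m.value
  from v$mystat m, v$statname n
 where m.statistic# = n.statistic#
   and n.name in ( 'parse count (total)',
                   'parse count (hard)',
                   'session cursor cache count',
                   'session cursor cache hits' );

Re-run it between executions of your test query; the 'session cursor cache hits' statistic climbs only for the kinder, gentler soft parses.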


SELECT STATEMENT TAKES TOO LONG

A reader, March 21, 2002 - 9:04 am UTC

I have the following SELECT statement. The cost is low, but
the query does not produce output after 1 hour and then it
fails with "snapshot too old".
SELECT s.swlogin, DECODE (o.cust_id, NULL, 'Prospect', 'Existing Cust') campaign,
COUNT (resp_opportunity_id) total_number
FROM wsi_opportunity o, sw_specialist s
WHERE o.resp_date_created BETWEEN SYSDATE - 600 AND SYSDATE
AND o.opp_quick_note LIKE '%' || TRUNC (SYSDATE)
AND o.opp_assigned_to_id = s.swspecialistid
AND s.whdistributionlist =
(SELECT whdistributionlist
FROM sw_specialist
WHERE swlogin = 'Paul Slayer')
GROUP BY s.swlogin, DECODE (o.cust_id, NULL, 'Prospect', 'Existing Cust')

TKPROF OUTPUT
--------------
TKPROF: Release 8.1.5.0.0 - Production on Wed Mar 20 22:47:32 2002

(c) Copyright 1999 Oracle Corporation. All rights reserved.

Trace file: ././vant8i_ora_9218.trc
Sort options: default

********************************************************************************
count = number of times OCI procedure was executed
cpu = cpu time in seconds executing
elapsed = elapsed time in seconds executing
disk = number of physical reads of buffers from disk
query = number of buffers gotten for consistent read
current = number of buffers gotten in current mode (usually for update)
rows = number of rows processed by the fetch or execute call
********************************************************************************

ALTER SESSION SET SQL_TRACE = TRUE


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      1      0.00       0.05          0          0          0          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        1      0.00       0.05          0          0          0          0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 35
********************************************************************************

SELECT s.swlogin, DECODE (o.cust_id, NULL, 'Prospect', 'Existing Cust') campaign,
COUNT (resp_opportunity_id) total_number
FROM wsi_opportunity o, sw_specialist s
WHERE o.resp_date_created BETWEEN SYSDATE - 600 AND SYSDATE
AND o.opp_quick_note LIKE '%' || TRUNC (SYSDATE)
AND o.opp_assigned_to_id = s.swspecialistid
AND s.whdistributionlist =
(SELECT whdistributionlist
FROM sw_specialist
WHERE swlogin = 'Paul Slayer')
GROUP BY s.swlogin, DECODE (o.cust_id, NULL, 'Prospect', 'Existing Cust')

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.05       0.05          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        1    256.78    2690.24     242504    2268378          4          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3    256.83    2690.29     242504    2268378          4          0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 35

   Rows  Row Source Operation
-------  ---------------------------------------------------
      0  SORT GROUP BY
      0  FILTER
   6613  NESTED LOOPS
   6618  TABLE ACCESS FULL SW_SPECIALIST
  13229  PARTITION RANGE ITERATOR
  33080  TABLE ACCESS BY LOCAL INDEX ROWID WSI_OPPORTUNITY
2816877  INDEX RANGE SCAN (object id 47580)
      1  TABLE ACCESS BY INDEX ROWID SW_SPECIALIST
      1  INDEX RANGE SCAN (object id 45017)

********************************************************************************

BEGIN DBMS_APPLICATION_INFO.SET_MODULE(:1,NULL); END;


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.04          0          0          0          0
Execute      1      0.01       0.05          0          0          0          1
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        2      0.01       0.09          0          0          0          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 35



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        2      0.05       0.09          0          0          0          0
Execute      3      0.01       0.10          0          0          0          1
Fetch        1    256.78    2690.24     242504    2268378          4          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        6    256.84    2690.43     242504    2268378          4          1

Misses in library cache during parse: 2
Misses in library cache during execute: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        0      0.00       0.00          0          0          0          0
Execute      0      0.00       0.00          0          0          0          0
Fetch        0      0.00       0.00          0          0          0          0
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        0      0.00       0.00          0          0          0          0

Misses in library cache during parse: 0

3 user SQL statements in session.
0 internal SQL statements in session.
3 SQL statements in session.
********************************************************************************
Trace file: ././vant8i_ora_9218.trc
Trace file compatibility: 7.03.02
Sort options: default

1 session in tracefile.
3 user SQL statements in trace file.
0 internal SQL statements in trace file.
3 SQL statements in trace file.
3 unique SQL statements in trace file.
58 lines in trace file.




Tom Kyte
March 21, 2002 - 4:14 pm UTC

run the tkprof with explain= so I can see what indexes are really used and get the stats information as well.

The tables/indexes/columns are fully analyzed right?

cost has NO BEARING or relationship to execution time whatsoever.
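
For reference, a tkprof invocation with explain= would look something like this (scott/tiger is a placeholder schema; sys= and sort= are optional):

tkprof vant8i_ora_9218.trc vant8i_ora_9218.prf explain=scott/tiger sys=no sort=fchela,exeela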

A reader, March 21, 2002 - 4:27 pm UTC

Tables and indexes are analyzed properly. We use a cost based optimizer.

Tom Kyte
March 21, 2002 - 7:14 pm UTC

great, so -- post the tkprofs with explain=user/password

Yes, but...

Igor, March 27, 2002 - 3:51 am UTC

Hi

The Siebel client sends an anonymous block in a loop,
and each time it is soft parsed, which takes 90% of the total
execution time.
I tried to play with session_cached_cursors as you
suggested.
On an isolated system I get a slightly better parse time, but
the total elapsed time is much the same
(and sometimes even worse).

What measurable improvements did you get?
(I compared values of 0, 50 and 100)

Igor



Tom Kyte
March 27, 2002 - 8:28 am UTC

You should get MUCH better parse time.  Session cached cursors will not affect run time at all (only tuning the process will do that).

session cached cursors is a softer soft parse; as long as the siebel client doesn't have more queries than you have set in this setting, it'll "softer" soft parse them, resulting in reduced parse times.

So for example:


ops$tkyte@ORA817DEV.US.ORACLE.COM> create table run_stats ( runid varchar2(15), name varchar2(80), value int );
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace view stats
  2  as select 'STAT...' || a.name name, b.value
  3        from v$statname a, v$mystat b
  4       where a.statistic# = b.statistic#
  5      union all
  6      select 'LATCH.' || name,  gets
  7        from v$latch;
View created.


ops$tkyte@ORA817DEV.US.ORACLE.COM> declare
  2      l_start number;
  3          l_block varchar2(1000) :=
  4          'begin
  5                  null;
  6           end;';
  7  begin
  8      insert into run_stats select 'before', stats.* from stats;
  9      l_start := dbms_utility.get_time;
 10  
 11          execute immediate 'alter session set session_cached_cursors=0';
 12      for i in 1 .. 1000
 13      loop
 14                  execute immediate l_block;
 15      end loop;
 16      dbms_output.put_line( (dbms_utility.get_time-l_start) || ' hsecs' );
 17  
 18  
 19          insert into run_stats select 'after 1', stats.* from stats;
 20      l_start := dbms_utility.get_time;
 21  
 22          execute immediate 'alter session set session_cached_cursors=10';
 23      for i in 1 .. 1000
 24      loop
 25                  execute immediate l_block;
 26      end loop;
 27      dbms_output.put_line( (dbms_utility.get_time-l_start) || ' hsecs' );
 28  
 29      insert into run_stats select 'after 2', stats.* from stats;
 30  end;
 31  /
24 hsecs
19 hsecs

PL/SQL procedure successfully completed.

<b>you can see that by the "wall clock" it is faster; however, that's only part of the story (especially in a SINGLE user configuration such as yours)</b>


ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value-a.value run1, c.value-b.value run2,
  2         ( (c.value-b.value)-(b.value-a.value)) diff
  3    from run_stats a, run_stats b, run_stats c
  4   where a.name = b.name
  5     and b.name = c.name
  6     and a.runid = 'before'
  7     and b.runid = 'after 1'
  8     and c.runid = 'after 2'
  9     and (c.value-a.value) > 0
 10     and (c.value-b.value) <> (b.value-a.value)
 11   order by abs( (c.value-b.value)-(b.value-a.value))
 12  /

NAME                                           RUN1       RUN2       DIFF
---------------------------------------- ---------- ---------- ----------
...
STAT...session cursor cache hits                  0       1002       1002
LATCH.shared pool                              1119        112      -1007
LATCH.library cache                           12306       4280      -8026

23 rows selected.

<b>see those last two rows -- well, latches are LOCKS, locks are serialization devices.  As the number of users executing in the database goes UP, concurrency goes DOWN if you lock more.  Anything that uses fewer latches is more scalable from that perspective.  If you run this in a database with lots of users using the session cached cursors -- you'll see a larger overall improvement in parse times SINCE the parse times in that very active database will be MUCH HIGHER than in your single user database (you don't have anyone to wait on! )</b>

Hence, your parse times will see more improvement in the "real" database than in your single user database, since the parse times are WORSE there than in your testing instance.

Runtime, this'll not affect that at all.  Parse time, very much so. 

What will happen if stored procedure is using ref cusrosr

A reader, March 27, 2002 - 5:20 pm UTC

If a stored procedure uses a ref cursor to return a result set to JDBC, we are finding that the same statements are parsed again and again. Do you think we should avoid the use of ref cursors?

Tom Kyte
March 27, 2002 - 5:42 pm UTC

search for session_cached_cursors on this site.

with JDBC, you cannot avoid the ref cursor as that is the method to return result sets from stored procedures.

keef, March 27, 2002 - 10:37 pm UTC

so with jdbc + ref cursors, does that mean that since we do at least the soft parse every time, it does not matter if we reuse the PreparedStatement or re-prepare every execute (so long as we use the same connection/session, with session_cached_cursors set)? in other words, does it not help to "prepare once, execute many times" in this scenario?

tia.

Tom Kyte
March 28, 2002 - 5:04 pm UTC

You still want to prepare once, execute many whenever you can -- always.

A Reader

Drew, March 28, 2002 - 10:19 am UTC

I am using a third party tool - TOAD. In the Server Stats window, there is a stat called parse/execute ratio = 170 and a warning sign saying "high parse to execute ratio". In my init.ora file, session_cached_cursors = 0. No one complains about the performance; the database has been running fine with the warning sign for years. However, the warning sign makes me think about some tuning work. Where should I start? Do you think the "kinder and softer" parse will help if I change session_cached_cursors to a nonzero value? I thought the application programs could also do some tuning. In the applications, there is not a single procedure or function, just SQL. Since no one is complaining, the application programmers may not want to make any changes.
Thanks a lot.

Tom Kyte
March 28, 2002 - 5:13 pm UTC

session cached cursors will not reduce the parse/execute ratio (a ratio over 100% is horrible! it means people parse and never execute!!) but it CAN and will reduce the load on the server.

The soft parse count will still increase, but so will the session cursor cache hits stat. Every time that one goes up -- it's as if you had reduced the soft parse count.

So, yes, I would give serious consideration to setting that parameter, with this caveat: "sometimes, if it ain't broke, don't fix it". It could result in increased memory consumption on the server (for the cached cursors) and that could have a negative impact on performance. So, as with any change -- be watchful (and use statspack to measure performance BEFORE and AFTER the change -- be ready to measure the change so you can tell if it is buying you anything).
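
A sketch of that before/after measurement, assuming statspack is already installed under the PERFSTAT schema:

-- as perfstat, bracket a representative period of load
exec statspack.snap

-- ... let the normal workload run for 15-30 minutes ...

exec statspack.snap

-- then report on the pair of snapshots just taken
@?/rdbms/admin/spreport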

How to set the session_cached_cursors parameter

Juan Carlos Reyes P, April 02, 2003 - 3:00 pm UTC

Hi Tom, based on what I read in the documentation, I wrote these two queries to evaluate whether this parameter is properly set.
Are they right, or is there another way to evaluate this parameter once it has been set?

Thanks :)

1)
SELECT SID, "cursorcachecount-parvalue"
FROM
( SELECT C.SID,
'Increase the current value in session_cached_cursors if ' || (C.VALUE) ||
' is near to ' ||
D.VALUE "cursorcachecount-parvalue"
FROM V$SESSTAT C,V$PARAMETER d
WHERE C.statistic# = 207 -- = 'session cursor cache count'
AND D.NAME = 'session_cached_cursors'
ORDER BY C.VALUE DESC
)
WHERE ROWNUM < 10 -- First 10 cases
-------------------------------------------------------------------------------------------
009 Increase the current value in session_cached_cursors if 21 is near to 100
028 Increase the current value in session_cached_cursors if 15 is near to 100


2)
SELECT 'Soft Parse%:' ||
TO_CHAR(
ROUND (1- ( (select value from V$SYSSTAT WHERE NAME = 'parse count (hard)')
/ (select value from V$SYSSTAT WHERE NAME = 'parse count (total)')),2)*100
,'990D00') || '; Always when this total is not a small number: ' ||
(select value from V$SYSSTAT WHERE NAME = 'parse count (total)')
FROM DUAL;
-----------------
Soft Parse%: 94.00; Always when this total is not a small number: 7820


Tom Kyte
April 02, 2003 - 3:07 pm UTC

1) never use numbers on statistics -- you'll get burned across os's / versions...

all that is saying is that "at that point in time, looks good".

You might be able to better automate this with a logoff trigger. during normal peak time, have a logoff trigger that captures for the current session the session cursor cache count statistic and saves it. Then, just take the average of those and compare to the set value and see if they make sense together.

2) i don't understand that one. you need to look at the parse execute ratio, not the soft parse % here. soft parse %, good for cursor sharing consideration. poor parse execute ratio might indicate you need session cached cursors.

A reader, April 02, 2003 - 7:23 pm UTC

Can you please tell me if they are correct now.

1)
CREATE TABLE SYS.Stat_Session_Historic
(
UUSER VARCHAR2(100),
DDATE DATE,
Sstatistic# NUMBER (6),
VVALUE NUMBER (6)
)
;

CREATE OR REPLACE TRIGGER SYS.TGR_LOGOFF_STATS
BEFORE
LOGOFF
ON DATABASE

INSERT INTO SYS.Stat_Session_Historic
SELECT USER, SYSDATE, statistic# , VALUE
FROM V$SESSTAT C
WHERE C.statistic# = 207 -- = 'session cursor cache count'
AND C.SID = UID
/

-- And the select will be the first 10 of an
average of their statics in a period of time for user

SELECT UUSER, AVGVAL
FROM
( SELECT UUSER, AVG( VVALUE ) AVGVAL
FROM sys.Stat_Session_Historic
where TRUNC(DDATE) = TRUNC(SYSDATE)
GROUP BY UUSER
ORDER BY 2 DESC
)
WHERE ROWNUM < 10 -- First 10 cases

UUSER AVGVAL
--------------
ADM 5
SAF 1


2)
SELECT 'Soft Parse%:' ||
TO_CHAR(
ROUND (1- ( (select value from V$SYSSTAT WHERE NAME = 'parse count (hard)')

/ (select value from V$SYSSTAT WHERE NAME = 'parse count (total)')),2)*100

,'990D00') || '; If is far to 100%, and this total count parse number is not small ' ||
(select value from V$SYSSTAT WHERE NAME = 'parse count (total)') || ' increase the parameter '
FROM DUAL;
-----------------
Soft Parse%: 94.00; If is not near to 100%, and this total count parse number is not small: 7820 increase the parameter

Thanks

Tom Kyte
April 02, 2003 - 9:21 pm UTC

Nope -- totally wrong.

You are creating tables owned by sys. bad idea. don't do that.

I did not read past:

1)
CREATE TABLE SYS.Stat_Session_Historic


that SYS. stopped me right away.

But

O, April 03, 2003 - 5:50 am UTC

Tom

You stated above that a query may be in the "shared POO"
Is there any way this can be flushed?

Tom Kyte
April 03, 2003 - 8:09 am UTC

but what?


a query plan may be in the shared pool -- yes.

the shared pool may be flushed using the alter system flush shared_pool;

if the stored query plan is not being used and can be flushed, it will be (meaning, it of course might NOT be as well)

don't do that on your live system, it'll impact performance somewhat.

What is the downside of using session_cached_cursors

Shailandra, April 03, 2003 - 8:58 am UTC

Hi Tom,

What are the circumstances this parameter should not be used.

Thanks

Tom Kyte
April 03, 2003 - 9:04 am UTC

don't know of any.

Queries to evaluate the value in session_cached_cursors

Juan Carlos Reyes Pacheco, April 03, 2003 - 9:08 am UTC

Sorry, I used SYS because this avoids granting privileges, and because I'm in a test database.
:) Now there is no SYS.
Could you please tell me if both queries are right now?
And if not, how should they be?
thanks
1)
CREATE TABLE Stat_Session_Historic
(
UUSER VARCHAR2(100),
DDATE DATE,
Sstatistic# NUMBER (6),
VVALUE NUMBER (6)
)
;

CREATE OR REPLACE TRIGGER TGR_LOGOFF_STATS
BEFORE
LOGOFF
ON DATABASE

INSERT INTO Stat_Session_Historic
SELECT USER, SYSDATE, statistic# , VALUE
FROM V$SESSTAT C
WHERE C.statistic# = 207 -- = 'session cursor cache count'
AND C.SID = UID
/

-- And the select will be the first 10 of an
average of their statics in a period of time for user

SELECT UUSER, AVGVAL
FROM
( SELECT UUSER, AVG( VVALUE ) AVGVAL
FROM Stat_Session_Historic
where TRUNC(DDATE) = TRUNC(SYSDATE)
GROUP BY UUSER
ORDER BY 2 DESC
)
WHERE ROWNUM < 10 -- First 10 cases

UUSER AVGVAL
--------------
ADM 5
SAF 1


2)
SELECT 'Soft Parse%:' ||
TO_CHAR(
ROUND (1- ( (select value from V$SYSSTAT WHERE NAME = 'parse count (hard)')


/ (select value from V$SYSSTAT WHERE NAME = 'parse count (total)')),2)*100


,'990D00') || '; If is far to 100%, and this total count parse number is
not small ' ||
(select value from V$SYSSTAT WHERE NAME = 'parse count (total)') || '
increase the parameter '
FROM DUAL;
-----------------
Soft Parse%: 94.00; If is not near to 100%, and this total count parse number is
not small: 7820 increase the parameter

Thanks


Tom Kyte
April 03, 2003 - 9:16 am UTC

1) nope, you are using a statistic# = 207.



  1* select * from v$statname where statistic# = 207
ops$tkyte@ORA920> /

STATISTIC# NAME                                CLASS
---------- ------------------------------ ----------
       207 cursor authentications                128

ops$tkyte@ORA817DEV> select * from v$statname where statistic# = 207
  2  /

STATISTIC# NAME                                CLASS
---------- ------------------------------ ----------
       207 PX remote messages recv'd              32


ops$tkyte@ORA9I> select * from v$statname where statistic# = 207
  2  /

STATISTIC# NAME                                CLASS
---------- ------------------------------ ----------
       207 PX remote messages sent                32



ops$tkyte@ORA815> select * from v$statname where statistic# = 207
  2  /

STATISTIC# NAME                                CLASS
---------- ------------------------------ ----------
       207 OS Swaps                               16

 

Methinks there is a stink...

Mr Kleen, April 03, 2003 - 9:17 am UTC

Tom, I think Mr or Mrs O were having a joke at your mistyping of "Shared Poo"..a Poo on this side of the world is another name for a Jobbie, or to put it another way a number 2, or yet another way faeces!

Therefore I was quite relieved to see that it could be flushed!!! ;)



Tom Kyte
April 03, 2003 - 9:21 am UTC

ah hah, i didn't even get it. now I see it, i left the l off and they want to flush the poo. got it ;)

back again

Juan Carlos Reyes P, April 03, 2003 - 10:18 am UTC

Thanks, good point. I didn't know that statistic# could change from release to release.
It was 206, not 207. FIXED.

Well we're going back again.

1)
CREATE TABLE Stat_Session_Historic
(
UUSER VARCHAR2(100),
DDATE DATE,
SstatisticName VARCHAR2 (200),
VVALUE NUMBER (6)
)
;

CREATE OR REPLACE TRIGGER TGR_LOGOFF_STATS
BEFORE
LOGOFF
ON DATABASE

INSERT INTO Stat_Session_Historic
SELECT USER, SYSDATE, 'session cursor cache count', VALUE, C.SID
FROM V$SESSTAT C
WHERE C.statistic# = (select STATISTIC# from v$statname where name = 'session cursor cache count')
AND C.SID = (SELECT SID FROM V$SESSION WHERE USER#= UID )
/

-- And the select will be the first 10 of an
average of their statics in a period of time for user

SELECT UUSER, AVGVAL
FROM
( SELECT UUSER, AVG( VVALUE ) AVGVAL
FROM Stat_Session_Historic
where TRUNC(DDATE) = TRUNC(SYSDATE)
GROUP BY UUSER
ORDER BY 2 DESC
)
WHERE ROWNUM < 10 -- First 10 cases

UUSER AVGVAL
--------------
ADM 5
SAF 1


2)
SELECT 'Soft Parse%:' ||
TO_CHAR(
ROUND (1- ( (select value from V$SYSSTAT WHERE NAME = 'parse count (hard)')



/ (select value from V$SYSSTAT WHERE NAME = 'parse count (total)')),2)*100



,'990D00') || '; If is far to 100%, and this total count parse number is

not small ' ||
(select value from V$SYSSTAT WHERE NAME = 'parse count (total)') || '
increase the parameter '
FROM DUAL;
-----------------
Soft Parse%: 94.00; If is not near to 100%, and this total count parse number is

not small: 7820 increase the parameter

Thanks



Tom Kyte
April 03, 2003 - 11:02 am UTC

well, v$mystat might be easier:

ops$tkyte@ORA817DEV> select * from v$mystat
  2   where statistic# = (select statistic# from v$statname where name =
  3                       'session cursor cache count');

       SID STATISTIC#      VALUE
---------- ---------- ----------
        15        192          0

but yes, that would capture it.


2) I still don't get the relevance of this. This tells me you have too many hard parses, but they cannot be fixed with session cached cursors.
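
Pulling that together, a minimal sketch of the logoff-trigger idea using v$mystat (it assumes the trigger owner has been granted SELECT on v_$mystat and v_$statname directly, has the administer database trigger privilege, and owns the history table in an application schema, not SYS):

create table stat_session_historic
( uuser   varchar2(30),
  ddate   date,
  sname   varchar2(64),
  vvalue  number
);

create or replace trigger tgr_logoff_stats
before logoff on database
begin
    insert into stat_session_historic ( uuser, ddate, sname, vvalue )
    select user, sysdate, n.name, m.value
      from v$mystat m, v$statname n
     where m.statistic# = n.statistic#
       and n.name = 'session cursor cache count';
end;
/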

juancarlosreyesp@yahoo.com, April 03, 2003 - 11:16 am UTC

Thanks 1) OK
About number 2)
Soft parse ratio: This shows whether there are many hard parses on the system. The ratio should be compared to the raw statistics to ensure accuracy. For example, a soft parse ratio of 0.2 typically indicates a high hard parse rate. However, if the total number of parses is low, then the ratio should be disregarded.

1 - ( parse count (hard) / parse count (total) )

I agree, but as it is called the soft parse ratio I thought it could have some relation to it.
When I sent my statspack report to www.oraperf.com they suggested I increase session_cached_cursors; I supposed it was because of this ratio.
Which other ratio or method do you suggest to evaluate this parameter?
Thanks :)



Tom Kyte
April 03, 2003 - 1:12 pm UTC

Think about it -- how could session cached cursors help a hard parse problem??

It cannot -- by definition you hard parse when? when the sql never existed before.

session cached cursors cannot help a hard parse problem. cursor_sharing could, but not cached cursors.

My question to you is -- of what relevance to session cached cursors could the soft parse ratio possibly be?

other methods for tuning session cached cursors

juancarlosreyesp@yahoo.com, April 03, 2003 - 1:55 pm UTC

:) Thanks
My question to you is -- of what relevance to session cached cursors could the
soft parse ratio possibly be?

You are right, the soft parse ratio has no relevance to session cached cursors.

The question now is: is there a ratio or another method to measure the need for tuning session cached cursors?

Thanks

P.S. Other than this one, which you explained in another question:

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        3      0.80       1.72          0          0          0          0
Execute      3      0.00       0.00          0          0          0          0
Fetch        3      0.02       0.05          2        665          0         14
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        9      0.82       1.77          2        665          0         14
Misses in library cache during parse: 0
In these statistics we can see there is too much soft parsing: parse = execute.
In a well tuned application,
parse = 1, execute = some number greater than or equal to 1.
The best way to speed something up is to NOT do it. Hence, don't parse 3 times,
just parse 1 time.


Tom Kyte
April 03, 2003 - 2:49 pm UTC

if you find yourself parsing the same statement more than 3 times and you really cannot fix the code, session cached cursors can be of some assistance. if you do not, it will neither help nor hurt.




A reader, April 03, 2003 - 3:25 pm UTC

thanks

Default value for session_cached_cursors

me again ;) sorry, April 03, 2003 - 3:51 pm UTC

The default value is 0, but I read that it is advisable to set it to 2,
even if you think you don't need it.
Is this right?

Tom Kyte
April 03, 2003 - 3:54 pm UTC

where did you read it and what justification did they give.

A reader, April 03, 2003 - 4:00 pm UTC

:( Sorry, I forgot where I read it, but the reason was this:

session cursor cache count
Total number of cursors cached. This statistic is incremented only if SESSION_CACHED_CURSORS > 0

I was wondering how much memory it could take in an OLTP system, but this could be reasonable for a DSS.

I got I got I got I got I got :)

Juan Carlos Reyes P., April 04, 2003 - 10:07 am UTC

In case you were thinking I was joking or was not serious enough in my questions, here are
the text and the link. As I had been reading a lot of it, I didn't remember it well; it took me a while to find it again.

page
</code> http://www.ixora.com.au/scripts/library.htm

Text
The session cursor cache is an important facility for reducing load on the library cache. In our opinion, the session_cached_cursors parameter should always be set to at least 2. However, a larger value is normally beneficial.

First question: is setting it to 2 not too much memory for a big OLTP? What, in your opinion, should be the default value for this parameter?
Second question:
Is this SQL advisable for measuring the value of this parameter?
http://www.ixora.com.au/scripts/sql/session_cursor_cache.sql <code>

Thanks

Tom Kyte
April 04, 2003 - 11:35 am UTC

if steve adams said it, it is more than likely "true".


as they said -- a larger value is normally beneficial. I am partial (opinion, no true science here) to 100.

the script looked reasonable to me

A reader, April 04, 2003 - 1:03 pm UTC

Thanks.

CLOSE_CACHED_OPEN_CURSORS

Juan Carlos Reyes Pacheco, April 07, 2003 - 1:35 pm UTC

Hi Tom
Could you please explain to me a useful use of this parameter, especially for developing in Oracle Forms.

executing the command
ALTER SESSION SET CLOSE_CACHED_OPEN_CURSORS ...

Or is this obsolete?

Thanks

Tom Kyte
April 07, 2003 - 2:31 pm UTC

very much obsolete.

A reader, April 07, 2003 - 2:36 pm UTC

Thanks :)

Avoiding soft parse completely

Krishnaswami, January 28, 2004 - 5:02 pm UTC

Hi Tom,
I have a question on the following code taken from your earlier reply. I run this code in SESSION S1.
for i in 1 .. 100
loop
for x in ( select * from t where zzz = :BIND )
loop
.....
the query "select * from t where zzz = :BIND" is parsed ONCE per session -- not per execution.

Upon completion of the for loop, another SESSION S2 modifies the table t by dropping one column. If I run the for loop again in session S1, I am sure a soft parse will happen as table t was changed by session S2.
What happens to the cached cursors in session S1 when session S2 fires the Alter Table statement ?
Thanks for your reply
Krishnaswami


Tom Kyte
January 29, 2004 - 7:44 am UTC

it'll be a hard parse actually.

the cursors are automagically taken care of for you -- they are invalidated in the cache.

Soft Parse

Krishnaswami, January 29, 2004 - 8:48 am UTC

Thanks for the reply. I get it.

How to Avoid Soft Parse In Web Environment Between Pages

A reader, April 28, 2004 - 6:24 pm UTC

Hi Tom,

Thanks to you and other experts, many of us now know that we need to use database connection pool for web applications. However, as you have explained in other thread, database connection is not the same as database session. Therefore, it is unclear to me how to avoid the soft parse in a web application (database sessionless between web requests) and "lost update" issues.

Here is an example of the web requests:
1. Web request (e.g. get data to display to the user for update) sends to Oracle9iAS
2. Oracle9iAS gets a LOGICAL connection from the database connection pools to call a package
3. Oracle uses the database session based on the physical database connection to execute the package
4. Oracle returns the results back to Oracle9iAS
5. Oracle9iAS gets the results and closes the LOGICAL database connection
6. Oracle9iAS sends HTML to the browser
7. The user updates the data and sends the updated data to Oracle9iAS
8. repeat Step#2-6 but the package will update the same data originally retrieved

Here are some of my questions. I hope you can help me clarify my understanding or refer me to some manuals about these topics.

1. Is it correct to say that database connection pooling is NOT the same as database session (i.e. sessions are not pooled and reuse)?

2. Does getting the logical database connection from the DB conn pools in Oracle9iAS create a database SESSION?

3. Does closing the logical DB conn to return to the DB conn pools close the database SESSION and cusors? If not, can other web users gain access to each other DB session and cursors if they use the same DB connection from the pool (i.e. the connections in the pool are reused, right)?

4. If database session is close for each web request, how can we reuse the same cursors to avoid multiple soft parse if we execute the package again in the subsequent web requests for update? I assume the session_cached_cursors setting may not help since we use different session for each web request.

5. How would you handle the "lost update" issue in the web environment? I know the users get the data out for update. However, I don't think I can use the "pessimistic" approach using "select for update" in the initial data retrieval, because after I get the data I will need to close the LOGICAL connection. Therefore, is the "optimistic" approach the only way to avoid the "lost update" issue and alert the users that the data has changed? You gave this example in your book:

update t
   set col1 = new_val
 where col1 = old_val;

if sql%rowcount = 0 then
   -- data has changed, raise error???
end if;

Thank you for all your helps.

Regards,
Jon

Tom Kyte
April 29, 2004 - 7:01 am UTC

1) depends on how you define database connection pooling I suppose.

In typical pooling situations -- a SESSION is created as a single user (schema). This session is held open and used/reused by many "real sessions" -- web sessions.

There is a single, long lived database SESSION that is used to service many short lived 'web sessions' (pages)

2) typically the session is created once and used over and over. Using n-tier proxy authentication however can change all of that. Here, instead of a single username connecting to the database, you create a CONNECTION to the database and then create database SESSIONS on that connection (creating a session takes a fraction of the time creating a CONNECTION does). These sessions contain the end users true identity (so you can use roles, grants and auditing again!)

3) typically -- if you do not close the cursors, they are available to the next person that grabs that connection. The developer is responsible for closing all resources they opened (just like a file would not be automagically closed)

4) you generally cannot do efficient cursor reuse in a stateless web environment. (if you do everything in PLSQL, you'll get it as good as you can!)

5) optimistic locking is what I use.

Compute a checksum of the data on the way out and verify it did not change on the way in.

Store the old and new values on the web page and verify they did not change on the way in.

augment the table to have a timestamp maintained by a trigger and read it out on the way out, verify it on the way in.


optimistic, in a stateless environment, is the only option.
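
A minimal sketch of the last approach (a last-modified column maintained by a trigger); the table T, its columns and the bind names are placeholders:

alter table t add last_mod date;

create or replace trigger t_touch
before insert or update on t
for each row
begin
    :new.last_mod := sysdate;
end;
/

-- on the way out: select the data plus last_mod and carry both to the page
-- on the way in:  update only if nobody changed the row in the meantime
begin
    update t
       set col1     = :new_col1
     where id       = :id
       and last_mod = :old_last_mod;

    if sql%rowcount = 0 then
        raise_application_error( -20001,
            'row changed by another session - requery and try again' );
    end if;
end;
/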

soft parse is get/miss?

atul, June 07, 2004 - 1:31 am UTC

Hi Tom,

From your book i read that,
if Oracle finds a statement in the shared pool then it's a soft parse, else it's a hard parse.

So if it's a soft parse, will it be a get or a miss?

Thanks & Regards,
Atul

Tom Kyte
June 07, 2004 - 8:21 am UTC

if we find the statement in the shared pool -- it is a "hit" on the shared pool (eg: Not a miss in library cache). if we do not, it is a "miss" on the shared pool (eg: Misses in library cache will be positive in the tkprof report)



Parsing IF ELSE Conditions within package

Ik, August 17, 2004 - 10:27 am UTC

Tom,

Given below is the feedback given by a DBA on a PL/SQL package that had a series of IF THEN ELSE conditions. He wanted a lookup table instead.

"I see a lot of conditional code, which will become a unique dml statement every time your package is called."
"Select on a lookup table will always run as one statement in the database."

I agree with the lookup table part and the select against that would be parsed only once.

My Question : Is his first statement correct? Would the IF ELSE condition be treated as different DML statements? Is that utter nonsense or am i totally ignorant.

Thanks a ton for the service you are doing for the oracle community.

regards

Tom Kyte
August 17, 2004 - 10:46 am UTC

plsql code is compiled at compile time into pcode.

a series of if then else will not become a "unique dml statement"

no idea what in general they are referring to.

Thanks Tom

Ik, August 17, 2004 - 10:48 am UTC


A reader, September 15, 2004 - 12:11 pm UTC

Tom,

After generating a statspack report I noticed a lot of soft parses, so I increased session_cached_cursors from 0 to 100 and the number went down, but not dramatically. Is it OK if I increase it to 200 and test it out?

And I know this may be difficult to say, but how much would be a safe number to go up to?

Tom Kyte
September 15, 2004 - 12:54 pm UTC

session_cached_cursors will have zero net effect on the soft parse count.

only the programmers can affect that number -- by parsing once, executing many (move all select/insert/update/delete out of their code and into plsql and it'll go way down!)


session_cached_cursors can make a soft parse a "softer soft parse"


Low " Parse CPU to Parse Elapsd %: "

Dilip Patel, September 17, 2004 - 9:19 am UTC

Tom,

On my database, there are some queries which are not using bind variables.

(Reason: we transfer data from another database using a DB link over a WAN. With bind variables the
performance of the data transfer was not acceptable; without bind variables performance is good, so we have to live with not using bind variables.)

In the statspack report, my shared pool is 94% utilized and "Parse CPU to Parse Elapsd %" is always in the range of 10% to 20%.

So how do I work this out?

Following is the part of statspack report.


Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 31,482.03 1,346.01
Logical reads: 1,967.33 84.11
Block changes: 139.82 5.98
Physical reads: 51.88 2.22
Physical writes: 23.67 1.01
User calls: 131.90 5.64
Parses: 5.61 0.24
Hard parses: 0.00 0.00
Sorts: 28.47 1.22
Logons: 0.16 0.01
Executes: 65.47 2.80
Transactions: 23.39

% Blocks changed per Read: 7.11 Recursive Call %: 40.40
Rollback per transaction %: 0.04 Rows per Sort: 2.53

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 97.36 In-memory Sort %: 100.00
Library Hit %: 100.00 Soft Parse %: 100.00
Execute to Parse %: 91.43 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 21.37 % Non-Parse CPU: 100.00

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 94.83 94.82
% SQL with executions>1: 62.15 62.16
% Memory for SQL w/exec>1: 65.69 65.70



opened cursors cumulative 19,983 5.6 0.2
opened cursors current
parse count (hard) 0 0.0 0.0
parse count (total) 20,093 5.6 0.2
parse time cpu 215 0.1 0.0
parse time elapsed 1,006 0.3 0.0

Thanks

Dilip

Tom Kyte
September 17, 2004 - 9:54 am UTC

                     Parses:                  5.61                  0.24
                Hard parses:                  0.00                  0.00

Soft Parse %:  100.00

counters the argument that you are hard parsing.  You are in fact not hard parsing at all during the period of observation here.

are you currently "pretty much cpu bound?"  you are not parsing very heavily, no hard parsing -- yet, you are waiting to get on the CPU when you do parse, that is what I see from these tea leaves (and really -- it is sort of like trying to read tea leaves with snippets of stuff)


also, 

ops$tkyte@ORA9IR2> select 20093/5.61 from dual;
 
20093/5.61
----------
3581.63993
 
ops$tkyte@ORA9IR2> select 3581/60 from dual;
 
   3581/60
----------
59.6833333


<b>an hour is probably too long to observe -- averaging things out over an hour loses the focus of the numbers, they get fuzzier and fuzzier over time</b>


and remember what you are looking at here.

2.15 cpu seconds vs 10.06 wall clock seconds.  (ratios are an awesome method to HIDE INFORMATION).

In the grand scheme of things, is 8 seconds a meaningful wait time during an hour of elapsed time? 

high parse count

alistair, November 18, 2004 - 10:49 am UTC

Hi Tom,
The following query was taken from a tkprof trace file. I was wondering why it has a parse count almost equal to the execute count when the query is using bind variables. The SQL is part of a cursor within a package.
DB version is 9iR2


SELECT
EUC.VIEW_ID,
GREATEST (EUC.DATE_FROM, :B1) AS DATE_FROM,
LEAST (EUC.DATE_TO, :B2) AS DATE_TO,
'N' AS FUND_MANAGEMENT_FLAG,
'N' AS PREFERRED_VIEW_FLAG,
:B4 LEVEL_INDICATOR
FROM EUC_UTL_MIN_CLASS_ORD_VIEW EUC
WHERE EUC.CLASS_ORDER_ID = :B3
AND EUC.DATE_FROM < :B2
AND EUC.DATE_TO > :B1 ORDER BY EUC.DATE_FROM


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse     1943    800.00    1401.69          0          0          0          0
Execute   1955   1200.00    1642.01          0          0          0          0
Fetch     2305    900.00    1189.12          7       3911          0        350
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total     6203   2900.00    4232.82          7       3911          0        350

table definition
CREATE TABLE EUC_UTL_MIN_CLASS_ORD_VIEW
(
CLASS_ORDER_ID NUMBER NOT NULL,
VIEW_ID NUMBER NOT NULL,
DATE_FROM DATE NOT NULL,
DATE_TO DATE NOT NULL
)
/
ALTER TABLE EUC_UTL_MIN_CLASS_ORD_VIEW
ADD ( CONSTRAINT PK_EUC_UTL_MIN_CLASS_ORD_VIEW
PRIMARY KEY (CLASS_ORDER_ID, VIEW_ID, DATE_FROM, DATE_TO))

Tom Kyte
November 18, 2004 - 11:10 am UTC

because the client code controls how many times a statement is parsed....


the client code is what controls the parsing pretty much.

but how?

max, February 18, 2005 - 5:52 am UTC

hi tom,

we 've observed the same as alistair: for a query with bind variables parses equal executes, and tkprof only shows ONE miss in library cache during parse ...

could you please explain how this could have been forced by the CLIENT?

Tom Kyte
February 18, 2005 - 8:53 am UTC

the CLIENT asked us to parse the query. We do what the client asks us to do.


look, your programmers can either code:


loop
    open cursor
    parse cursor   <<=== we sort of have to oblige them
    bind
    execute
    close
end loop;


or, they can code


open cursor
parse cursor
loop
    bind
    execute
end loop
close


if they are not doing things in a loop, they can:


my_nice_procedure
is
    static boolean first_time_i_am_called = TRUE;

    if first_time_i_am_called
    then
        open
        parse
        first_time_i_am_called = FALSE;
    end if;

    bind
    execute



instead of:

my_poor_procedure
is
    open
    parse
    bind
    execute
    close
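
In PL/SQL terms, using DBMS_SQL to make the parse call explicit, the "nice" pattern looks something like this (the table T and column X are placeholders):

declare
    l_cursor  integer := dbms_sql.open_cursor;
    l_rows    integer;
begin
    -- parse ONCE, outside the loop
    dbms_sql.parse( l_cursor,
                    'insert into t ( x ) values ( :x )',
                    dbms_sql.native );

    for i in 1 .. 1000
    loop
        -- bind and execute MANY times against the already parsed cursor
        dbms_sql.bind_variable( l_cursor, ':x', i );
        l_rows := dbms_sql.execute( l_cursor );
    end loop;

    dbms_sql.close_cursor( l_cursor );
end;
/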





thanks. sure.

max, February 18, 2005 - 1:55 pm UTC

it was one hard parse, followed by multiple soft parses in our case ...

from the SQL traces we can see that sometimes clients retrieve result sets in a single row fashion by issuing the same statement (bound with primary key values) over and over again -- fetching one row per execution.

some kind of "generic framework" is responsible for that (don't know any details apart from what has been observed by examining the SQL traces related to certain client interactions). in some cases compared to such an one-row-at-a-time procedural approach retrieving the whole set with just one execution is round about four times faster.

i guess there 's absolutely NO way to improve overall performance without touching that client code, right?

Tom Kyte
February 18, 2005 - 3:01 pm UTC

gotta love "generic frameworks" don't you.


in my experience, it would be orders of magnitude between slow by slow and set based.

90% of tuning starts at home, in the application. physical schema's, algorithms. The last 10% is the magic we can sometimes squeeze out of the server (assuming the basic setup of the server was "common sensed" -- eg: you have larger than a 64k buffer cache and such...)

A reader, August 01, 2005 - 9:19 am UTC

Tom would you say Hard Parsing in a Rule Based optimizer is actually cheaper than with a CBO for the same query, nearly all the time in an OLTP system?
This is because CBO needs to compare so many plans for a query while RBO does much simpler processing?

Tom Kyte
August 01, 2005 - 10:14 am UTC

I would say hard parsing in OLTP is pure evil and if you are doing it, it doesn't matter much to me if you are CBO or RBO as the bug that needs to be fixed is the hard parsing, not the optimizer mode.

hard parsing is *expensive* regardless. There are queries that will consume more resources with the RBO than the CBO and vice versa.

Hard parsing and OLTP do not go together.

Trying to get rid of excessive soft parsing

Rick, October 04, 2005 - 11:09 am UTC

Hello Tom,

I'm having a problem with excessive soft parsing by Oracle Spatial. I know you're not an expert in OS (and neither am I) but this is a problem that could be applicable to third-party software as well.

If I have a set of n points and calculate the distance between them (using SDO_DISTANCE) resulting in n*n results, the trace file shows that OS does the following query n*n times:

select wktext, srid from mdsys.cs_srs where srid = 81989

and the following query 2*n*n times:

select count(*) from mdsys.geodetic_srids where srid = 81989

For your information, the SRID is the coordinate system, and
OS is (I assume) checking that it is a legal one and that is
not of a certain type.

Confronted with a third-party system that does this,
what can we do to cut down on the number of soft parses?
With n sometimes equalling many millions, the CPU needed
for the soft parsing can run into over 10% of the run time.

I've tried setting session_cached_cursors, no effect.
I was considering trying a query rewrite to convert these
into selects from DUAL.

You can use the following to reproduce the problem. After
running it the queries can be found in the trace file.
It simply creates a table with 40 points (defined by x
and y coordinates) and then calculates the distance
between each of them.

drop table cola_markets;

CREATE TABLE cola_markets (
mkt_id NUMBER PRIMARY KEY,
name VARCHAR2(32),
location SDO_GEOMETRY);

delete from USER_SDO_GEOM_METADATA
where table_name = 'COLA_MARKETS';

INSERT INTO USER_SDO_GEOM_METADATA
VALUES (
'COLA_MARKETS',
'LOCATION',
SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', 0, 200, 0.005),
SDO_DIM_ELEMENT('Y', 0, 200, 0.005)
),
81989
);

begin
for i in 1 .. 40
loop
INSERT INTO cola_markets
VALUES(
i,'cola_a',SDO_GEOMETRY(2001,81989,
SDO_POINT_TYPE(i,i,null),null,null
)
);
end loop;
end;
/

commit;

drop table distances;

set timing on

alter session set sql_trace = true;

create table distances as
select t1.mkt_id mkt1, t2.mkt_id mkt2,
sdo_geom.sdo_distance (t1.location,
t2.location,
0.005
) distance
from cola_markets t1, cola_markets t2
;


Many thanks for any thoughts you may have on this.
If you want me to post the relevant bits of the tkprof. I can do that.

Tom Kyte
October 04, 2005 - 4:48 pm UTC

I've sent this to some spatial experts and we'll see what they say..

soft parse.

Mateen mohammed, October 04, 2005 - 5:03 pm UTC

Hi Tom,
I have been trying to ask a question for the past few months but never got a chance. I will appreciate if you can help me.

Question: I would like to find out which SQL statements were executed, running or waiting in the last x minutes. I would like to achieve this without taking snapshots and without turning database statistics on. Is there a column which stores this info?

In v$sqlarea, first_load_time does give the time the SQL was first loaded, but if a SQL statement is executed multiple times it doesn't record the last time it started executing.

I would appreciate it if you could let me know whether there is any column in the Oracle views/tables; it should be compatible with 9.2.0.5.

Tom Kyte
October 04, 2005 - 8:35 pm UTC

no, there is not.

you would have to use a thing like statspack/awr - they can give you this information.



Sorry, forgot to mention the version

Rick, October 05, 2005 - 3:27 am UTC

In your posting guidelines you say "If your followup requires a response that might include a query, you had better supply ...". Perhaps you should add Database Version as well, I forgot!

I'm running on 10.1.0.3.0



Forms

A reader, October 09, 2006 - 11:18 am UTC

Hi Tom,

Is it possible to reduce soft parsing when using an application built with the old Oracle Forms?

If yes, could you give us some guidelines on how to do that?

many thanks

Tom Kyte
October 09, 2006 - 11:33 am UTC

depends on who is doing the parsing - session cached cursors can reduce the impact; in your developed code you can definitely do "parse once, execute many"

you might try otn.oracle.com -> discussion forums for forms - it has been well over a decade now since I've used it personally.

soft parse/explain plan,

A reader, October 09, 2006 - 12:05 pm UTC

The first time I issue an update statement it does a hard parse and then starts executing. On the next attempt to execute the exact same SQL it does a soft parse. The explain plan is going to remain the same in both cases unless there is a change in the statistics.

If there is a parallel degree hint, does the optimizer know at parse time to execute the statement in parallel?

Along the same lines, if I change the degree of parallelism for a table that is involved in my SQL which has already been parsed, does Oracle do a hard parse again?

Normally, how long would the SQL stay in the library cache? How do we know which SQL statements currently reside in the library cache (i.e. have been parsed)?

What is your opinion of having a DEFAULT degree of parallelism on tables which are heavily involved in DML statements?

Thanks,



Tom Kyte
October 09, 2006 - 1:33 pm UTC

if there is a parallel hint and parallel dml is enabled and everything is OK for parallel to happen (all restrictions regarding parallel dml have been met), then the optimizer is free to consider parallel execution.


changing attributes of the table like that will invalidate the shared sql - causing the next parse to be a hard parse, yes.

Normally how long? only sensible answer is "it depends". they could be there for days, weeks, longer. On a bad system (with a hard parse problem) they might only be there for seconds.


In general, PDML (parallel dml) is something rare, something you would want to explicitly ASK to have be done, not really done by default.
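
For completeness, a sketch of explicitly asking for parallel DML (T and BIG_TABLE are placeholders):

alter session enable parallel dml;

insert /*+ parallel( t, 4 ) */ into t
select * from big_table;

-- a parallel DML transaction must be committed (or rolled back) before
-- this session can read the modified table again
commit;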

thanks for the Forms answer

A reader, October 09, 2006 - 12:21 pm UTC

Hi Tom,

thank you very much for the help on the Forms answer.

I am not the developer myself, I am the DBA. I have quite a lot of latch contention on the library cache and shared pool latches.

I created a 20 minute statspack report and found that nearly every statement is being parsed.

Here I have 2 questions please:
1) what do you mean by "depends on who is doing the parsing"?
2) you mentioned in your very interesting book "expert one on one" that cursor_sharing = force can change the execution plan and shouldn't be a permanent solution, but what do you think about the new "similar" option in 9i -- do you think it can do the trick?

thanks in advance

Tom Kyte
October 09, 2006 - 1:37 pm UTC

1) if the programmers of the form wrote lots of procedural code, they are the ones that control when and how often we parse, you can influence that. If the code is being parsed by the forms runtime engine, you have less influence over how that code is executed.

2) cursor sharing WILL NEVER REDUCE THE NUMBER OF PARSE CALLS.

I don't see how cursor sharing fits in here?

yes thanks

A reader, October 09, 2006 - 1:48 pm UTC

Hi Tom,

I am sorry, I didn't get your point about the Forms engine. If possible, could you show us an example? That would be very helpful; if not, please forgive me and ignore that point (I will just ask the developers to check the parsing and hopefully they understand how to do it).

I don't mean to use cursor_sharing for avoiding soft parsing -- I am talking about helping to reduce hard parsing (I don't have much, but I still have some; the current cursor_sharing is exact, and I learnt from your book that force might choose a wrong plan, but does similar have the same drawbacks?)

thanks

Tom Kyte
October 09, 2006 - 1:55 pm UTC

hard to demo a "gui", especially one I haven't used in a decade!

My point was: there is sql YOU control, there is sql OUT OF YOUR CONTROL.

If your programmers are doing the parsing
then
your programmers can reduce the parsing
elsif the tool you are using does the parsing
you have less influence over the number of times it is parsed
end if;

forms is VERY bind friendly, it would be doubtful that forms needs cursor sharing ever.

if you have forms that have excessive HARD parsing then I can absolutely say:

your programmers have a bug they need to fix, they did it.


see also:
</code> https://www.oracle.com/technetwork/issue-archive/2006/06-jan/o16asktom-101983.html <code>

follow up,

A reader, October 09, 2006 - 1:50 pm UTC

thanks a lot for the explanation.

So for my first question, if the optimizer is free to consider parallel execution, will it show that information in my EXPLAIN PLAN? Is it possible to figure out from the explain plan how many parallel slaves would be used (assuming I have a hint with parallel degree 8)?

Thanks,


Tom Kyte
October 09, 2006 - 1:57 pm UTC

sure, you'll see a parallel plan.

you won't see "degree" however, that is many times adjusted at runtime to fit in with the resources actually available.

thanks for the forms help

A reader, October 10, 2006 - 11:19 am UTC

Hi Tom,

Thank you for the kind help in explaining how Forms works. I read the cursor_sharing link as well; it is very interesting.

I need some more of your help please:

From the statspack report I have the following statements in the "SQL ordered by Parse Calls" section:

% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
595,997 595,995 5.72 1532986588
Module: f90runm@sprd (TNS V1-V3
SELECT PAYROLL_M,PAYROLL_Y FROM PAYROLL
WHERE CODE = 1

595,935 595,935 5.72 1279656770
Module: f90runm@sprd (TNS V1-V3
SELECT SUBSTR(USER_NAME,1,35) FROM U_USERS WHERE USER_ID =
(SELECT USER FROM DUAL )

595,746 595,746 5.72 2843921319
Module: f90runm@sprd (TNS V1-V3
SELECT N.NATIONALITY_NAME_A,E.FULL_NAME,E.EMPLOYEE_NO,E.UNIT_COD
E FROM PS_NATIONALITIES N,PS_EMPLOYEES E WHERE N.NATIONALITY_
CODE = E.NATIONALITY_CODE AND E.MILITARY_NO = TO_NUMBER(:b1)

595,719 595,718 5.72 467516886
Module: f90runm@sprd (TNS V1-V3
SELECT X.NAME FROM OCCUPATIONS X,EMPLOYEES E WHERE X.CODE = E.NCODE AND E.NO = TO_NUMBER
(:b1)

1) do you think those kinds of statements are ones under my control, or under the Forms engine's control? If they are under my control, could you please tell me how I can reduce the number of parses?

2) how can I determine the proper value for the session_cached_cursors parameter? Is there a good way to determine whether I should increase this value, and up to what value should I increase it?

many thanks

Tom Kyte
October 10, 2006 - 7:55 pm UTC

1) I cannot answer that, you'd need to look at the form

2) 100... that's a good one. Either
a) don't set it, or
b) put it to 100

and see if your "session cursor cache hits" statistic goes up and your latching goes down
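
A quick sanity check for that - just a sketch against the standard statistic names - is to compare the cache hits to the total parse calls:

select name, value
  from v$sysstat
 where name in ( 'session cursor cache hits',
                 'session cursor cache count',
                 'parse count (total)' );

If "session cursor cache hits" is a tiny fraction of "parse count (total)", the cache is not helping much yet.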

New question about 'session cursor cache count%'

A reader, November 30, 2006 - 4:01 pm UTC

Hi Tom,

select *
  from v$sesstat a, v$statname b
 where a.STATISTIC# = b.STATISTIC#
   and ( b.NAME like '%session cursor cache count%'
      or b.NAME like '%parse%count%' )
 order by sid desc

shows a value of 0 for all SIDs for the statistic 'session cursor cache count'. We have session_cached_cursors set to 0, but I thought the SQLs run by PL/SQL would automatically get cached even though session_cached_cursors is not set. How come we get a value of 0 from the above SQL for 'session cursor cache count' for all SIDs?

thanks
Anto

Tom Kyte
November 30, 2006 - 5:47 pm UTC

but the plsql cursor caching is NOT the session cached cursors - plsql actually has them open.

A reader, November 30, 2006 - 4:04 pm UTC

Even v$sysstat shows a value of 0 for session cursor cache hits. Do the SQLs cached by PL/SQL NOT go into this count?

thanks
Anto

Tom Kyte
November 30, 2006 - 5:49 pm UTC

correct - plsql uses this parameter setting to control how many cursors plsql will keep OPEN (even after you say "close them"), and that is different from the transparent "session cached cursors" stuff that happens after you open the same cursor three times in a session.
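
If you want to see the plsql side of it, here is a rough sketch (it assumes session_cached_cursors is non-zero so the plsql cursor cache is enabled, and just uses DUAL for illustration):

declare
    l_dummy dual.dummy%type;
begin
    select dummy into l_dummy from dual;  -- static SQL, "closed" by plsql right away
end;
/

select sql_text
  from v$open_cursor
 where sid = ( select sid from v$mystat where rownum = 1 )
   and sql_text like 'SELECT DUMMY FROM DUAL%';

The select should still be listed for your session even after the block ends - plsql kept the cursor open (cached) instead of really closing it.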

A reader, December 01, 2006 - 12:00 pm UTC

So I guess plsql caching never shows up there - even though it is happening internally all the time

thanks
Anto

Tom Kyte
December 01, 2006 - 12:46 pm UTC

it shows up in reduced PARSE CALLS

session cached cursors - parse calls are never affected; if you parsed 1,000 times before setting it > 0, you'll still have 1,000 parse calls afterwards.

a little confused

A reader, December 07, 2006 - 2:53 pm UTC

"correct - plsql uses this parameter setting to control how many cursors plsql will keep OPEN (even after you say "close them"), and .."

a) So if session_cached_cursors is 0, plsql will never keep any cursor OPEN?

b) Does it make sense to set this session_cached_cursors parameter to, say, 100 for code executed via pl/sql procedures/packages that does not contain any dynamic SQL like execute immediate or ref cursors?

thanks
Anto

Tom Kyte
December 07, 2006 - 5:50 pm UTC

a) two entirely different concepts going on here.

session cached cursors - transparent, works for all applications regardless of language. a softer soft parse - but a parse.

plsql cursor caching - is a plsql bit of magic, it is NOT session cached cursors, plsql cursor caching AVOIDS THE PARSE entirely.


b) 100 is a "good" number, sure. it controls the size of the plsql cursor caching too.
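
A small sketch showing the parse being avoided entirely (the procedure and query are illustrative only): run a stored procedure containing static SQL many times, then compare parse_calls to executions for that SQL.

create or replace procedure p_demo
as
    l_date date;
begin
    select sysdate into l_date from dual;  -- static SQL, cached by plsql
end;
/

begin
    for i in 1 .. 100 loop
        p_demo;
    end loop;
end;
/

select parse_calls, executions
  from v$sql
 where sql_text like 'SELECT SYSDATE FROM DUAL%';

You should see executions climb to around 100 while parse_calls stays at (about) 1.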

A reader, December 08, 2006 - 9:09 am UTC

Thanks

Anto

questions on shared pool and parsing,

A reader, September 20, 2007 - 2:55 pm UTC

Does a soft parse require a latch? I understand that the cursor is already in memory so it does not require a latch. Please correct me.

In RAC instances, is the cursor in instance 1 shared with instance 2? In other words, if I execute "select * from emp" on instance 1 (assume it does a hard parse) and later execute the exact same statement on instance 2, will it do a hard parse or a soft parse? If cursors are shared between the instances, is it cache fusion that transports the information between them?

Is there an algorithm defined in the Oracle kernel for how a cursor gets aged out of the shared pool? Can we pin the cursor?

In my database, the value of open_cursors is 512. But the count(*) from the v$open_cursor view is more than 600. How is that possible (the count of distinct address||hash_value was around 250)?

How do we know whether the cursor is OPEN or CLOSED? Is there any intermediate state for the cursor?

If the cursor is CLOSED, does it do a soft parse or a hard parse?

I appreciate your help.

Thanks,

Tom Kyte
September 24, 2007 - 7:48 am UTC

a soft parse does a HUGE amount of work, yes, it requires LOTS of latching.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2588723819082

a soft parse needs to do a semantic parse (resolve object references) and that means latching the library/dictionary cache. We also need to search the shared pool, which obviously needs a latch (to prevent others from writing/modifying the data structure we are searching!!!)

soft parses cause TONS of latches.


In RAC, the cursor would be parsed on each node (they could have parallel query enabled on one node and not the other - pga memory could be very different on node 1 from node 2, the optimizer modes could be different and so on). Node 1 and node 2 will KNOW that each has a parsed copy (the hard parse does some coordination) but each will have its own copy.
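
In a RAC database you can see that directly - a sketch, using the example statement from the question as the filter:

select inst_id, parse_calls, executions, sql_text
  from gv$sql
 where sql_text like 'select * from emp%';

Each instance that has parsed the statement shows up with its own row (its own copy of the cursor).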


we use an LRU normally, and might flush tons of stuff under pressure. Yes, there is an algorithm - no, the details are not for you and me to read.

Yes you could pin a cursor, but I would ask "WHY"

If you use it, you will not lose it. Pinning was invented as a temporary workaround to shared pool issues before the large pool was introduced way back in 8.0, when people were using shared server. Not really useful now.


open_cursors is a per-session limit.
v$open_cursor is across all sessions, so #sessions * open_cursors = max size of v$open_cursor
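
To reconcile the two numbers, count per session - a quick sketch:

select sid, count(*) cursors_open
  from v$open_cursor
 group by sid
 order by count(*) desc;

The open_cursors limit applies to each sid individually, not to the total across all sessions.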



re-parse by the same user

Alberto Dell'Era, September 24, 2007 - 1:52 pm UTC

> a soft parse needs to do a semantic parse (resolve object references)

I understand that this has to be done if different users parse the same query - but if the same user re-parses the same statement (and I'd say that this is the most frequent scenario), wouldn't it suffice to search the statement text and the parsing_user_id in the library cache ?


Tom Kyte
September 26, 2007 - 8:27 pm UTC

you are describing what "session_cached_cursors" is about - but just because I parse a query (I'm the parsing user id) doesn't mean you cannot reuse it in general!

ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2> grant select on t to public;

Grant succeeded.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from ops$tkyte.t lookforme;

no rows selected

ops$tkyte%ORA10GR2> connect scott/tiger
Connected.
scott%ORA10GR2> select * from ops$tkyte.t lookforme;

no rows selected

scott%ORA10GR2> connect /
Connected.
ops$tkyte%ORA10GR2> select parsing_user_id, sql_text from v$sql where sql_text like '%lookforme';

PARSING_USER_ID
---------------
SQL_TEXT
-------------------------------------------------------------------------------
             61
select * from ops$tkyte.t lookforme


ops$tkyte%ORA10GR2> select uid from dual;

       UID
----------
        61


Alberto Dell'Era, September 24, 2007 - 4:34 pm UTC

In support of my previous q - it seems that creating an object that hides another invalidates statements that reference the latter.

In 10.2.0.3, user DELLERA2:

create table t (x int);
insert into t(x) values (42);
create or replace public synonym t for t;
grant select on t to dellera;

In user DELLERA:

SQL> select * from t;

         X
----------
        42

SQL> select sql_text  from v$sql where sql_text = 'select * from t';

SQL_TEXT
----------------
select * from t

SQL> create table t (z int);

Table created.

SQL> select sql_text  from v$sql where sql_text = 'select * from t';

no rows selected

If this happens always, it should be safe to search for the statement text and the parsing_user_id in the library cache. If the statement is found, the parsed representation cached there can be reused - thus avoiding the semantic analysis and authorization checks. Then, a child could be created if the sql environment is different.

That doesn't mean that the soft parse is inexpensive, of course (since the library cache has to be latched anyway, and that produces potential contention and cpu consumption for spinning).

Tom Kyte
September 26, 2007 - 8:39 pm UTC

not in general. see above. two users - different parsing user ids...

and before someone says "yeah, well, it was already qualified with an owner" - there are many counter arguments I have to that as well, but here is one without qualification:

ops$tkyte%ORA10GR2> create table t (x int);

Table created.

ops$tkyte%ORA10GR2> create or replace public synonym TTTT for T;

Synonym created.

ops$tkyte%ORA10GR2> grant select on TTTT to public;

Grant succeeded.

ops$tkyte%ORA10GR2> select * from TTTT;

no rows selected

ops$tkyte%ORA10GR2> connect scott/tiger
Connected.
scott%ORA10GR2> select * from TTTT;

no rows selected

scott%ORA10GR2> connect /
Connected.
ops$tkyte%ORA10GR2> select parsing_user_id, uid, sql_text from v$sql where sql_text like '%from TTTT';

PARSING_USER_ID        UID
--------------- ----------
SQL_TEXT
-------------------------------------------------------------------------------
             61         61
select * from TTTT


Alberto Dell'Era, September 27, 2007 - 9:38 am UTC

Uhm, in hindsight I've been concise yet NOT complete, let me be verbose and complete ;)

Assume no DDL (grants/revokes included) is performed in the database - in this case each user "sees" a constant view (scope?) of the database: once an identifier T is resolved as "table T in schema X" for user U1, it will always map to "table T in schema X" for U1. It can't map to anything else.

Hence when no DDL is performed, if the same statement text with the same parsing_user_id as the currently parsing user is found in the library cache, it can be immediately reused - semantically it must be the same [note: maybe a child cursor might be created if the sql environment is different, or if the types of the bind variables are different, etc etc - as that changes the plan, but not the meaning of the identifiers of the SQL statement].

If the text is found with a different parsing_user_id, the stmt has to be semantically analyzed for equivalence and maybe shared, as both your test cases show - no shortcut here.

Of course DDL happens in the database, and that may change the meaning of the identifiers contained in the statement. Now, DDL can be issued on

(a) the objects referenced by the cached statement;
(b) other objects.

For case (a), the DDL will invalidate the statement, so the next parse will find no match and become a hard parse - hence no danger of confusing one object with another.

For (b), the main issue is when you create an object that hides an object referenced by the cached statement [e.g. my "create table t (z int);" above that hides the public synonym pointing to table T in schema DELLERA2]. My test case shows (surprisingly) that this invalidates the cached statement as well, even though it references a completely different object, albeit with the same name. This wouldn't be necessary at all if every soft parse checked the semantic equivalence, re-compiling the statement every time (building the parse tree from the text and then decorating the tree with the semantic information from the data dictionary): that would simply pick up "table t (z int)" at the next parse.

To recap, my hypothesis is that DDL that changes the meaning of an identifier invalidates all statements that use that identifier. Hence, in-between DDL, it is possible to simply search for the statement text (and the parsing_user_id) in the library cache, avoiding the costly re-compilation and semantic analysis of soft-parsed statements. I wonder whether this opportunity is exploited by the kernel; my test case suggests (not proves) that this might be the case (in 10.2.0.3).
Tom Kyte
September 28, 2007 - 3:26 pm UTC

I think you are seeing a side effect of session cached cursors....

but what is your test case?

I set session cached cursors = 0 and see library cache latching going on...

ops$tkyte%ORA10GR2> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> declare
  2          l_cursor sys_refcursor;
  3  begin
  4          for i in 1 .. 10
  5          loop
  6                  open l_cursor for 'select * from tkyte_t';
  7                  close l_cursor;
  8          end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> declare
  2          l_cursor sys_refcursor;
  3  begin
  4          for i in 1 .. 10000
  5          loop
  6                  open l_cursor for 'select * from tkyte_t';
  7                  close l_cursor;
  8          end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec runStats_pkg.rs_stop(9000);
Run1 ran in 1 hsecs
Run2 ran in 61 hsecs
run 1 ran in 1.64% of the time

Name                                  Run1        Run2        Diff
STAT...calls to get snapshot s          13      10,001       9,988
STAT...opened cursors cumulati          15      10,005       9,990
STAT...execute count                    15      10,005       9,990
STAT...parse count (total)              15      10,005       9,990
STAT...recursive calls                  34      30,004      29,970
LATCH.library cache                    136      40,088      39,952
LATCH.library cache lock                89      40,044      39,955
STAT...session pga memory                0     196,608     196,608
STAT...session uga memory max      254,380           0    -254,380
STAT...session pga memory max      262,144           0    -262,144

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
630      80,476      79,846       .78%

PL/SQL procedure successfully completed.


if I set session cached cursors on - for that session, I see reduced latching due to the softer soft parse.


Sometimes ignorance is a blessing

AMIR RIAZ, September 28, 2007 - 9:12 am UTC

Hi Tom

Recently I have been appointed to a project (OLTP). My senior architect (a Java person) is using literals in SQL in place of bind variables and is flushing the shared pool using the command

alter system flush shared_pool;

When I asked him, he reasoned that since the number of users of the application is small (30-50) we can do this, while I believe we must use bind variables. Do bind variables have anything to do with the number of users in an OLTP application? What's your opinion?

He also believes that a cursor is not aged out of the SGA until it is aged out of the session_cached_cursors list. Please describe the internal working of session_cached_cursors with the SGA. I think that if we are hard parsing, session_cached_cursors (in the case of ref cursors) is useless.

I am currently under great stress. Please answer. The project is in its release stage and I have to fine tune it. Our management is not ready to accept my point of view because they have a "perfectly working" product. Tell me how I can proceed with tuning without bind variables.
Tom Kyte
September 28, 2007 - 5:49 pm UTC

tell the java person to google

sql injection


and ask them "so, you have had your code reviewed by someone to verify that you do not have the most insecure product ever right?"


It takes exactly ONE bad sql statement in one application to kill an ENTIRE DATABASE instance. 30-50 users, I don't care, that would be more than enough and if you inflict this application on other databases (you know, server consolidation) you'll kill them.

session_cached_cursors only kicks in when you parse the same sql 3 times in a session, something you are UNLIKELY to do when using literals.

Also, it does NOT prevent the cursor from getting aged out, not at all (it would be useless if it did that). It simply makes it easier for us to re-find the sql statement in the shared pool without doing all of the normal work. But if the cursor isn't there anymore (aged out) no big deal.
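
To make the bind variable point concrete, a small sketch using native dynamic SQL (EMP/EMPNO are placeholders, not from your system): the literal version creates a brand new, never-shared statement for every distinct value - and is wide open to sql injection if the value comes from a user - while the bind version is one shared statement executed over and over.

declare
    l_cnt number;
begin
    -- literal: a different sql statement per value => hard parse each time
    execute immediate
        'select count(*) from emp where empno = ' || 7788 into l_cnt;

    -- bind: one statement in the shared pool, reused for every value
    execute immediate
        'select count(*) from emp where empno = :x' into l_cnt using 7788;
end;
/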

Alberto Dell'Era, September 28, 2007 - 5:43 pm UTC

> I think you are seeing a side effect of session cached cursors....
> but what is your test case? 

My test case, now repeated without session cached cursors, was:

In 10.2.0.3, user DELLERA2:

create table t (x int);
insert into t(x) values (42);
create or replace public synonym t for t;
grant select on t to dellera;

In user DELLERA:

DELLERA> alter session set session_cached_cursors=0;

Session altered.

DELLERA> select * from t;

         X
----------
        42

DELLERA> select sql_text  from v$sql where sql_text = 'select * from t';

SQL_TEXT
-----------------
select * from t

Now I create a table DELLERA.T, which "hides" ("replaces in scope") the public synonym T pointing to DELLERA2.T. Note that I'm not dropping, or altering, any object which the cached statement "select * from t" depends on - I'm just creating a brand new object. 

DELLERA> create table t (z int);

Table created.

Surprisingly, that invalidates the cached statement:

DELLERA> select sql_text  from v$sql where sql_text = 'select * from t';

no rows selected

That cached statement is not invalidated, of course, if I create a table that doesn't hide the public synonym. But create any object that hides the public synonym (I've also tried creating a view T, a private synonym T, even a sequence T) and the cached statement does get invalidated, every time.

This makes sure that the meaning of T is always the same for statements parsed by the same user that parsed the cached statement (which is v$sql.parsing_user_id). So in the most common soft parse scenario (same user re-parsing the same sql text) it's enough to search the statement *text* in the library cache, and if v$sql.parsing_user_id matches, there's no need to recompile the statement (or re-solve the identifier T).

In fact in your test case:

LATCH.library cache                    136      40,088      39,952
LATCH.library cache lock                89      40,044      39,955

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
630      80,476      79,846       .78%

You are latching the library cache only (40,088 + 40,044 = 80,132, approx equal to the total 80,476); if the statement were recompiled, wouldn't it be necessary to latch the dictionary cache as well ? 

Tom Kyte
October 03, 2007 - 1:20 pm UTC

it is not that the statement is "recompiled" entirely - it is a soft parse.

And it does a significant amount of work. I do not want to understate that - a soft parse is a bad thing to do. the only good parse is "absence of a parse".

so, shortcut capable or not, we shall just say "thou shall parse as little as you can" - period.

Alberto Dell'Era, October 03, 2007 - 1:58 pm UTC

> so, shortcut capable or not, we shall just say "thou shall
> parse as little as you can" - period.

But while "we" keep saying that - it's also worth investigating how things work, more knowledge doesn't harm - semicolon close-parenthesis.


clarification needed

karthik, July 10, 2008 - 5:17 pm UTC

Tom,

Please see this excerpt from metalink note 1020427.102 regarding "session cached cursors" in pl/sql

Note that no optimization occurs.  In other words, the cursor caching code does not check to see if a particular query already has a cursor open; it simply creates another one.  This is done until OPEN_CURSORS is reached.  If you have OPEN_CURSORS cached as open, however, then if an identical query occurs, the server reuses the cached cursor.


I have a few questions here:
1) In PLSQL, parsing is totally avoided, compared with session_cached_cursors which is a "softer soft parse" - am I right?
2) The explanation in metalink says no optimization occurs. So if it is going to open a cursor again, it will be doing extra work rather than reusing a cursor that is already open and simply executing the statement in the shared pool if it exists? Also the threshold - that only after reaching open_cursors will it reuse a cached cursor - seems a confusing one (after all, why does it have to wait until all the cursors are used?)
3) Is this explanation about the PLSQL cursor caching that happens automatically, or about the session_cached_cursors parameter set when not using PL/SQL?

Overall this metalink article contradicts the popular belief that cursors are reused and not opened fresh when the same sql statement is executed.
Tom Kyte
July 10, 2008 - 6:17 pm UTC

1) correct

2) they could have said it better. what I believe they mean is:

declare
    c1 sys_refcursor;
    c2 sys_refcursor;
begin
    open c1 for select * from dual;
    close c1;
    open c2 for select * from dual;
    close c2;
end;

would use two cursors; it would not "optimize" to use one.

3) not sure what you mean

cursor reuse

karthik, July 11, 2008 - 1:51 pm UTC

Tom,
1) What I was trying to say in point 3 was: the metalink article discusses this behaviour with respect to the PLSQL cursor caching concept. Is it applicable to statements not fired from a PLSQL block, but rather from plain sqlplus, where I may have to depend on session_cached_cursors?

2) The example you quoted - is it just because the cursor handle name varies (c1 and c2) that it has to open a fresh cursor?

Tom Kyte
July 11, 2008 - 1:59 pm UTC

1) this feature - of complete cursor caching - is a plsql feature, one of the many reasons you should put all sql in plsql.

2) it is because they are in different places, we could see the same thing with implicit cursors as well.

Amedeo

amedeo, December 24, 2008 - 12:24 pm UTC

Hi Tom,

If increasing session_cached_cursors doesn't impact Execute to Parse, what can be done to get that percentage in awr/perfstat reports closer to 100%?

Many databases we support (9.2 and 10.2) show very low (<10) percentages for "Execute to Parse" and "Parse CPU to Parse Elapsed".

Thanks
Amedeo

Tom Kyte
December 29, 2008 - 3:29 pm UTC

you have to fix the application - there is NOTHING else to do. There is nothing that can be done. You have to get the application to call "parse" less often.


Let me make a subtle change to the statement "increasing session_cached_cursors doesn't impact Execute to Parse"

Let me change that to:

increasing session_cached_cursors almost certainly won't impact Execute to Parse in most systems



The only time it will - is when you use lots of plsql. PLSQL uses session_cached_cursors to control the size of its internal cursor cache. Increasing that number will increase the number of cursors plsql will hold open - and could lead to a smaller parse count.

I say "most systems" won't see that because the systems that are encountering poor parse performance are for the most part..... Not written using plsql
