difference between soft parse and hard parse

Question and Answer

Connor McDonald

Thanks for the question.

Asked: January 26, 2002 - 7:17 pm UTC

Last updated: August 15, 2022 - 5:16 am UTC

Version: 8.1.7.2

Viewed 100K+ times!

You Asked

Hi Tom

Can you explain briefly the difference between soft and hard parse?

cheers

and Tom said...

Here is a long-winded answer; it is extracted in part from a new book coming out soon, "Beginning Oracle Programming", that I collaborated on:

Parsing

This is the first step in the processing of any statement in Oracle. Parsing is the act of breaking the submitted statement down into its component parts -- determining what type of statement it is (query, DML, DDL) -- and performing various checks on it.

The parsing process performs two main functions:

o Syntax Check: is the statement a valid one? Does it make sense given the SQL grammar documented in the SQL Reference Manual? Does it follow all of the rules for SQL?

o Semantic Analysis: going beyond the syntax -- is the statement valid in light of the objects in the database (do the tables and columns referenced exist)? Do you have access to the objects -- are the proper privileges in place? Are there ambiguities in the statement? For example, if there are two tables T1 and T2 and both have a column X, the query "select X from T1, T2 where ..." is ambiguous; we don't know which table to get X from. And so on.

So, you can think of parsing as basically a two-step process: a syntax check to verify the validity of the statement, and a semantic check to ensure the statement can execute properly. The difference between the two types of checks is hard for you to see -- Oracle does not come back and say "it failed the syntax check"; rather, it returns the statement with an error code and message. So, for example, this statement fails with a syntax error:

SQL> select from where 2;
select from where 2
*
ERROR at line 1:
ORA-00936: missing expression

While this statement fails with a semantic error -- if the table NOT_A_TABLE existed and we had permission to access it, this statement would have succeeded:

SQL> select * from not_a_table;
select * from not_a_table
*
ERROR at line 1:
ORA-00942: table or view does not exist

That is the only way to really tell the difference between a semantic and a syntactic error -- if the statement COULD have executed given the proper objects and privileges, you had a semantic error; otherwise, if the statement could not execute under any circumstances, you have a syntax error. Regardless -- Oracle will not execute the statement for you!

The next step in the parse operation is to see if the statement we are currently parsing has already in fact been processed by some other session. If it has -- we may be in luck here: we can skip the next two steps in the process, that of optimization and row source generation. If we can skip these next two steps, we have done what is known as a Soft Parse -- a shorter process to getting our query going. If we cannot -- if we must do all of the steps -- we are performing what is known as a Hard Parse: we must parse, optimize, and generate the plan for the query. This distinction is very important. When developing our applications we want a very high percentage of our queries to be Soft Parsed -- to be able to skip the optimize/generate phases -- as they are very CPU intensive as well as a point of contention (serialization). If we have to Hard Parse a large percentage of our queries, our system will function slowly and in some cases -- not at all.
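To see the distinction for yourself, a sketch along these lines contrasts literal SQL, where every distinct statement text is hard parsed, with bind-variable SQL, which is hard parsed once and soft parsed thereafter (the table name here is made up for the demo, and access to v$sql is assumed):

```sql
-- Hypothetical scratch table for the demo.
create table parse_demo ( id number, val varchar2(30) );

-- Literals: each distinct statement text is brand new to the shared pool,
-- so each one is hard parsed.
select val from parse_demo where id = 1;
select val from parse_demo where id = 2;

-- Bind variable: one statement text, hard parsed the first time only;
-- every subsequent parse of it is a soft parse.
variable n number
exec :n := 1
select val from parse_demo where id = :n;
exec :n := 2
select val from parse_demo where id = :n;

-- The literal versions appear as separate rows in v$sql; the bind version
-- appears once, with its executions accumulating.
select sql_text, parse_calls, executions
  from v$sql
 where sql_text like 'select val from parse_demo%';
```

This is a sketch, not a benchmark -- the runstats example further down this page is the way to measure the actual cost difference.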

The way this sharing of SQL in Oracle is accomplished is via the shared pool, a piece of memory in the SGA maintained by Oracle. We covered this topic in chapter 5 but will revisit it again in the context of processing a query. After Oracle parses the query and it passes the syntax and semantic checks -- it will look in the shared pool component of the SGA to see if that same exact query has already been processed by another session. Since it has performed the semantic check, it has already figured out:

o Exactly what tables are involved
o That we have access to the tables (the proper privileges are there)

And so on. Now, it can look at all of the queries in the shared pool that have already been parsed/optimized and generated to see if the work has already been done.
........


Comments

Soft Parse and the session_cached_cursors parameter

A reader, January 28, 2002 - 4:55 am UTC

Can you explain what the default value (0) of the session_cached_cursors parameter means? Oracle always has a cache of SQL in the SGA, and the size of this cache is determined by the init.ora parameter shared_pool_size.

Tom Kyte
January 28, 2002 - 8:46 am UTC

The default value of ZERO means that by default, ZERO cursors will be cached for your session.

They will be cached in the shared pool -- but your session will have to find them there.  Session cached cursors can remove the need to "find" them.  Consider this example that shows the difference in time spent when soft parsing without session cached cursors vs. soft parsing WITH session cached cursors:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table emp as select * from scott.emp;
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create table run_stats ( runid varchar2(15), name varchar2(80), value int );
Table created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace view stats
  2  as select 'STAT...' || a.name name, b.value
  3        from v$statname a, v$mystat b
  4       where a.statistic# = b.statistic#
  5      union all
  6      select 'LATCH.' || name,  gets
  7        from v$latch;

View created.


ops$tkyte@ORA817DEV.US.ORACLE.COM> column name format a40
ops$tkyte@ORA817DEV.US.ORACLE.COM> declare
  2      l_start number;
  3          l_cnt   number;
  4  begin
  5      execute immediate 'alter session set session_cached_cursors=0';
  6      insert into run_stats select 'before', stats.* from stats;
  7  
  8      l_start := dbms_utility.get_time;
  9      for i in 1 .. 1000
 10      loop
 11            execute immediate 'select count(*) from emp' into l_cnt;
 12      end loop;
 13      dbms_output.put_line( (dbms_utility.get_time-l_start) || ' hsecs' );
 14  
 15      execute immediate 'alter session set session_cached_cursors=100';
 16      insert into run_stats select 'after 1', stats.* from stats;
 17  
 18      l_start := dbms_utility.get_time;
 19      for i in 1 .. 1000
 20      loop
 21            execute immediate 'select count(*) from emp' into l_cnt;
 22      end loop;
 23      dbms_output.put_line( (dbms_utility.get_time-l_start) || ' hsecs' );
 24  
 25      insert into run_stats select 'after 2', stats.* from stats;
 26  end;
 27  /
45 hsecs
35 hsecs

PL/SQL procedure successfully completed.

<b>so, session cached cursors RAN faster (I ran this a couple of times; there were no hard parses going on).  But the real good news is:</b>

ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value-a.value run1, c.value-b.value run2,
  2         ( (c.value-b.value)-(b.value-a.value)) diff
  3    from run_stats a, run_stats b, run_stats c
  4   where a.name = b.name
  5     and b.name = c.name
  6     and a.runid = 'before'
  7     and b.runid = 'after 1'
  8     and c.runid = 'after 2'
  9     and (c.value-a.value) > 0
 10     and (c.value-b.value) <> (b.value-a.value)
 11   order by abs( (c.value-b.value)-(b.value-a.value))
 12  /

NAME                                           RUN1       RUN2       DIFF
---------------------------------------- ---------- ---------- ----------
LATCH.checkpoint queue latch                      3          4          1
LATCH.redo allocation                            30         31          1
STAT...consistent gets                         5088       5089          1
STAT...deferred (CURRENT) block cleanout          2          3          1
 applications
STAT...calls to get snapshot scn: kcmgss       5019       5018         -1
STAT...enqueue releases                          10          9         -1
STAT...execute count                           1015       1014         -1
STAT...opened cursors cumulative               1015       1014         -1
STAT...parse count (total)                     1015       1014         -1
STAT...session cursor cache count                 0          1          1
STAT...redo entries                              28         27         -1
STAT...recursive calls                         1180       1179         -1
STAT...physical reads                             1          0         -1
LATCH.direct msg latch                            2          0         -2
LATCH.session queue latch                         2          0         -2
LATCH.done queue latch                            2          0         -2
STAT...free buffer requested                      8          6         -2
STAT...enqueue requests                          11          9         -2
LATCH.messages                                    3          0         -3
STAT...db block changes                          47         44         -3
LATCH.redo writing                                3          0         -3
LATCH.ksfv messages                               4          0         -4
STAT...session logical reads                  17128      17123         -5
LATCH.row cache objects                         184        178         -6
STAT...db block gets                          12040      12034         -6
STAT...parse time elapsed                         9          3         -6
STAT...parse time cpu                            13          4         -9
STAT...recursive cpu usage                       51         38        -13
LATCH.cache buffers chains                    34315      34335         20
STAT...redo size                              23900      24000        100
STAT...session cursor cache hits                  3       1002        999
LATCH.shared pool                              2142       1097      -1045
LATCH.library cache                           17361       2388     -14973

34 rows selected.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 

<b>see the significantly REDUCED number of LATCH counts on the library cache and shared pool.  Since a latch is a lock, a lock is a serialization device, and serialization implies WAITS -- using session cached cursors will increase scalability and performance as you add more and more users.  It's not only FASTER, but more scalable as well...</b>

 


SQL parsing

A reader, January 28, 2002 - 9:16 am UTC

You mean that to parse SQL, Oracle always has to make two steps -- checking the SQL code syntactically and semantically -- and then check the shared pool? I have read that Oracle always saves an unparsed representation of the SQL code and has a hashing algorithm to quickly locate this code, and I thought that Oracle makes this step (checking the shared pool for a matching statement) first, without making any other checks before.

Tom Kyte
January 28, 2002 - 9:51 am UTC

It needs to do the syntax to make sure it can understand it.

It needs to do the semantic check to see what objects are really referenced.

Only then can it safely go to the shared pool to see if the SQL was already parsed. Think about it -- what if we have:

select * from t;

That statement might be in the shared pool -- the one in the shared pool however references the table SCOTT.T. Now, you submit:

select * from t;

Can you reuse that plan? I don't know, you don't know, the database sure doesn't know. It must parse the statement -- figure out that T is a table, determine the owner of the table T in question. Then, it can go to the shared pool to see if the statement is there -- and upon finding the identical "select * from t", it will have to further verify that the T referenced by both queries is the SAME exact T.

Hence, we need to do the "semantic" parse before we can find a statement in the shared pool for you to reuse (there could be 1,000 "select * from t" statements in the shared pool -- none of which you could use, for example). We need to syntactically parse the query before we can semantically parse it. The hash is only good for finding query strings that are the same -- it's not any good for figuring out if the referenced statement is ACTUALLY the same statement in your execution context.
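A small sketch of the scenario described here, assuming two schemas (SCOTT and BLAKE are just example names) that each own their own table T, and assuming the usual v$sql columns:

```sql
-- As SCOTT (owns SCOTT.T):
select * from t;

-- As BLAKE (owns BLAKE.T) -- byte-for-byte identical text:
select * from t;

-- The identical text hashes to the same spot in the shared pool, but the
-- semantic check resolved T to two different objects, so the plans cannot
-- be shared.  v$sql shows one child cursor per distinct "meaning":
select sql_text, child_number
  from v$sql
 where sql_text = 'select * from t';
```

The hash finds the matching text instantly; the child cursors are what record "whose T" each plan was built for.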



A reader, January 28, 2002 - 9:45 am UTC

Thank you for the quick answer to my question "Soft Parse and the session_cached_cursors parameter".
To my understanding:
Does Oracle save these session cursors in the PGA (dedicated server mode), and are these cursors available for other sessions?

Tom Kyte
January 28, 2002 - 10:37 am UTC

UGA, it'll be in the UGA. The PGA is specific to a process and in various configurations (eg: shared server mode, aka MTS) the PGA you use on one query will be different from the PGA you use in another.

In dedicated server mode, the UGA is in the PGA so the difference is subtle (but there). In shared server, the UGA is in the SGA.

A cursor is specific to a SESSION, the query plans and such are shared by all sessions.



Thought I had it, but maybe not

Jon, January 28, 2002 - 10:23 am UTC

Just as I thought I was starting to get this soft vs hard parse, I read this and got confused all over again...

Here you say that, "When developing our applications we want a very high percentage of our queries to be Soft Parsed", but in your book (p 441), you say that "Using bind variables is crucial to the performance; avoiding the soft parse of a query is equally as important". Sounds contradictory...can you clarify?

I thought the pecking order was:

BAD (not using bind variables; therefore "hard parse")

WHERE XXXXX_ID = '123456'

call count
------- ------
Parse 1
Execute 1

Better (using bind variables, but soft parsing, perhaps due to closing cursor and reopening, etc.)

WHERE XXXXX_ID = :b1

call count
------- ------
Parse 1570
Execute 1570

Best (using bind variables, avoids soft parses)

WHERE XXXXX_ID = :b1

call count
------- ------
Parse 1
Execute 1570

Please set me straight. Thanks.

Tom Kyte
January 28, 2002 - 10:43 am UTC

Ok, at the SYSTEM LEVEL (eg: what you see with statspack) we want our ratio of SOFT parses to HARD parses to be very very large. We want to be SOFT parsing at the system level (that means our applications as a whole make use of shared sql). That means a given query that was hard parsed once was soft parsed (reused) over and over.

Within an application -- we want our PARSE to EXECUTE ratio to be very very LOW. We want to parse a statement once, execute it over and over. It is best if all of our queries in our application are soft parsed (meaning someone else did the hard parse) and I would be suspicious if all queries were hard parsed in an application unless the database had just been restarted.


So, system wide -- the ratio of SOFT to HARD should be high (1000 soft parses vs 10 hard parses means each statement was reused 100 times).

In an application -- the ratio of PARSE to EXECUTE should be low (1 parse to 100 executes means our app parsed -- be it soft or hard -- ONCE and executed it 100 times)



You are correct in the above, the last one is the best one IN AN APPLICATION. System wide -- everyone will parse it AT LEAST ONCE (and they should parse it AT MOST once).
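Both ratios can be put into numbers from the dictionary views; a sketch (the statistic names are standard, the thresholds in the second query are arbitrary):

```sql
-- System wide: soft-to-hard ratio.  "parse count (total)" minus
-- "parse count (hard)" approximates the soft parses.
select sum( decode( name, 'parse count (total)', value ) ) total_parses,
       sum( decode( name, 'parse count (hard)',  value ) ) hard_parses
  from v$sysstat
 where name in ( 'parse count (total)', 'parse count (hard)' );

-- Per statement: parse-to-execute ratio.  Statements parsed nearly as
-- often as they are executed are "parse once, execute many" candidates.
select sql_text, parse_calls, executions
  from v$sql
 where executions > 100
   and parse_calls > executions * 0.9
 order by executions desc;
```

A high soft-to-hard ratio in the first query with a high parse-to-execute ratio in the second is exactly the "good system, wasteful application" combination described above.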



What is the new book you are collaborating on?

Nag, January 28, 2002 - 10:26 am UTC

Tom

When will this book you are collaborating on be out in the market?


Tom Kyte
January 28, 2002 - 10:45 am UTC

It is due out in March/April.

It is a beginning Oracle book -- very different from my current book. At a very different level. I've written 3 chapters in it. The entire book will be 1 introduction, 19 chapters, 2 case studies and 5 appendices....

Why is cached_cursors session level ?

Ravi, January 28, 2002 - 3:24 pm UTC

Tom,

Thanks for your time. Please clarify:

1. Why is the cursor caching at session level? Suppose one user fires the query 'SELECT * FROM CT.EMP WHERE ENAME = :p1'. Then if another user makes the exact same query (with the same value of p1), should not the cached cursor be used for her also?

2. Are there any negatives of using cursor caching ? If not, why is the default value 0 ?

3. Could you show an example which avoids soft parsing? For example, I would like to see one hard parse and n executes, without any further parsing. I tried this in SQL*Plus, but I get as many soft parses as executes.

Tom Kyte
January 28, 2002 - 3:58 pm UTC

you are confusing cursors with shared sql.

A cursor points to some shared SQL. Shared SQL is used by all. A cursor is more than just the plan and such from the shared pool; a cursor is a "state" as well. A cursor is your data -- YOUR data. A cursor has your bind variable values -- YOUR values.

Suppose one user fires that query, the plan will be placed into the shared pool for all to use. A CURSOR is a session thing -- you might (in your SINGLE SESSION) have 500 CURSORS all pointing to that same piece of shared sql.

Just remember -- a cursor is a SESSION thing; it's your state, your values, your result set. The plan -- that's a SHARED thing; it's the part of the shared pool that all sessions can share.
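One way to sketch that split in the dictionary views (the v$open_cursor and v$session usage here is an illustration, not the only way to find your own SID):

```sql
-- My session's open cursors: private state, one row per handle I hold.
select sql_text
  from v$open_cursor
 where sid = ( select sid
                 from v$session
                where audsid = userenv('sessionid') );

-- The single shared copy that any number of those handles point at --
-- users_executing counts how many sessions are on it right now.
select sql_text, users_executing, executions
  from v$sql
 where sql_text like 'SELECT%';
```

Many rows in v$open_cursor across many sessions can map to one row in v$sql -- that is the cursor-versus-shared-SQL distinction in a nutshell.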


2) cursor caching increases the amount of memory your session will consume. It is a tuning feature that trades increased RAM for significantly increased performance. I personally see no downside to it.

3) see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2444907911913

...

Quick connections (web) will not benefit from cached cursors ?

Ravi, January 28, 2002 - 4:59 pm UTC

Thanks for your quick response Tom .. I got my answers.

A followup question on cursor caching:
The components in our web servers (IIS) make and break connections quickly. In such cases, cursor caching will not be useful, right?

Tom Kyte
January 29, 2002 - 6:48 am UTC

depends. Do they parse and execute the same statement more than once in this quick connection? If so, they might benefit from this.

if they have code like:


loop
parse "insert into t values ( ? )"
bind
execute
close
end

they would benefit (although that code SHOULD be fixed)

Shared SQLs and execution plan

Andrej, January 29, 2002 - 7:15 am UTC

When I have a very large shared pool and the execution plan for my query is always saved in the shared pool, what happens if I make a very large insert into this table and then analyze the table? Will Oracle change the execution plan?


Tom Kyte
January 29, 2002 - 8:01 am UTC

Yes, Oracle will change the plan -- it is all part of our automatic dependency tracking, we'll flush the plans and redo them the next time they are executed.

so soft parse ALWAYS occurs???

A reader, January 29, 2002 - 5:43 pm UTC

Quoted from above:

-----The next step in the parse operation is to see if the statement we are currently parsing has already in fact been processed by some other session. If it has we may be in luck here, we can skip the next two steps in the process, that of optimization and row source generation. If we can skip these next two steps in the process, we have done what is known as a Soft Parse-----

You mean there is ALWAYS a soft parse????? I thought you could avoid that.

Now I am confused again



Tom Kyte
January 30, 2002 - 7:16 am UTC

if you parse -- you will either SOFT or HARD parse. If you parse, you by definition cannot skip the parse!!! you are doing the parse!!!

You can skip the parse by keeping cursors open. Instead of coding:

parse
bind
execute
close

in a subroutine, do:

if (firsttime)
parse
end if

bind
execute


You can skip the parse by using PLSQL. PLSQL always caches cursors for us. If you have a procedure:

procedure p
is
begin
insert into t values ....
update t set ....
delete from t .....
select * into ... from t where ...
end;


those four DML statements will be parsed ONCE per session -- no matter how many times you call P in your session (all you have to do is make sure you parse the call to P once and you are done with your job)
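You can watch that caching happen; a sketch (the table and procedure names are made up, and v$sql access is assumed):

```sql
-- Hypothetical table; the static SQL inside the procedure is what
-- PL/SQL caches for us.
create table t2 ( x int );

create or replace procedure p2
as
begin
    insert into t2 values ( 1 );
    delete from t2 where x = 1;
end;
/

-- Call it 100 times in this session...
begin
    for i in 1 .. 100 loop
        p2;
    end loop;
end;
/

-- ...and each static statement shows 100 executions against a single
-- parse call (PL/SQL normalizes its SQL to upper case):
select sql_text, parse_calls, executions
  from v$sql
 where sql_text like 'INSERT INTO T2%'
    or sql_text like 'DELETE FROM T2%';
```

The anonymous block itself is parsed once per submission -- it is the statements inside the procedure that PL/SQL keeps open for the life of the session.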



Session cursor cache to avoid soft parse....

Mark J. Bobak, January 30, 2002 - 12:00 am UTC

If a statement is in the session cursor cache, it can
avoid a soft parse. So, absolute best case, it's in the
shared pool (someone else hard parsed it), you execute
the SQL, you soft parse it, it goes to the session cursor
cache, subsequent executions of the same SQL by the SAME
SESSION, will not parse, hard or soft. I believe you can
observe this in TKPROF parse counts. (If a statement is
already in session cursor cache, and you turn trace on and
execute the same statement, TKPROF will report 0 parses.)

As always, correct me if I'm wrong, Tom.

Also, may I recommend Bjorn Engsig's excellent paper on
this stuff and how cursor_sharing relates. It's at:
http://otn.oracle.com/deploy/performance/pdf/cursor.pdf

-Mark


Tom Kyte
January 30, 2002 - 7:20 am UTC

it will not avoid the soft parse, it'll be a softer soft parse.

The soft parse count will still go up, see how the stats:

STAT...parse count (total) 1015 1014 -1


show that -- we parsed that statement 1000 times in both cases. We just parsed it "softer" the second time.....




Two more question to parsing

A reader, January 30, 2002 - 7:41 am UTC

Can the use of public synonyms reduce parsing?
If I have a very large shared pool, is it not easier to hard reparse a small SQL statement than to locate it in memory?

Tom Kyte
January 30, 2002 - 7:45 am UTC

Public synonyms will not reduce the parsing.

ONLY the application can reduce the parsing.

Public synonyms can actually make things more difficult.


As for "is it not easier to hard reparse the small sql statement" -- the answer is 100%, definitely NO in capital letters. It is most definitely NOT easier -- it is much, much harder and more time consuming, AND it increases latching on the shared pool. Since latches are locks, locks are a serialization device, and serialization devices mean "one at a time", parsing greatly reduces concurrency, scalability, and your chances for success.

stored outlines

Olaf, January 30, 2002 - 8:59 am UTC

Does Oracle make any sorts of parsing (hard or soft) if I use stored outlines?

Tom Kyte
January 30, 2002 - 10:43 am UTC

Yes, 100%.

stored outlines is a hinting mechanism. It works via rewriting the query with hints. The normal, same process takes place 100%.

Consider if you stored an outline that referenced an index. You drop the index. The query still executes since the hint to use that index is just totally ignored.

Stored outlines are hints...

When to close statement?

scott watson, January 30, 2002 - 10:42 am UTC

Tom,

You mention that you should code routines in the following way to avoid soft parses; however, I was wondering where I should tell my developers to close their statements.

So if they code their EJB as you suggest

in a subroutine, do:

if (firsttime)
parse
end if

bind
execute

when would they close the statement?

Thanks.

Tom Kyte
January 30, 2002 - 11:34 am UTC

Me -- i never close them ;)

If I have a subroutine that will be called over and over -- I would parse the statement once, and keep it open. When we log off, it'll be closed.

Connection pools and other such stuff mess this up sometimes, it varies by vendor.

so how can the soft parse be avoided

A reader, January 30, 2002 - 12:25 pm UTC

I have tested this:

opened 3 sessions

first session:

CREATE OR REPLACE procedure temp
as
l_deptno number default 30;
l_cnt number;
begin
select count(*)
into l_cnt
from emp
where deptno=l_deptno;
end;
/

exec temp
exec temp
exec temp

Second session monitoring v$sesstat of first session:



NAME VALUE
---------------------------------------------------------------- ----------
parse time cpu 0
parse time elapsed 0
parse count (total) 123
parse count (hard) 33
execute count 148

parse time cpu 0
parse time elapsed 0
parse count (total) 124
parse count (hard) 33
execute count 150

parse time cpu 0
parse time elapsed 0
parse count (total) 125
parse count (hard) 33
execute count 150

The parse count rises

Third session monitoring v$sql:

select sql_text, executions, parse_calls from v$sql where sql_text like '%DEPTNO%';

SQL_TEXT EXECUTIONS PARSE_CALLS

SELECT COUNT(*) FROM EMP WHERE DEPTNO = :b1 1 1

SELECT COUNT(*) FROM EMP WHERE DEPTNO = :b1 2 1

SELECT COUNT(*) FROM EMP WHERE DEPTNO = :b1 3 1

So the parse is a soft parse, I guess. Now my question is: how can I avoid that soft parse?

cheers

Tom Kyte
January 30, 2002 - 4:15 pm UTC

Yes, the parse count rises in order to parse:

exec temp
exec temp
exec temp

which is the same as:

begin temp; end;
begin temp; end;
begin temp; end;

the parsing you are seeing is sqlplus parsing that block of code each time. Now, if you put 10 queries in the procedure temp -- you'll see the same parse count increment -- one for each block -- not 11 or 10....

You are avoiding the parse on the select count(*). In SQL*Plus -- you cannot avoid that one parse; it's just a "dumb" character mode environment. It'll parse (be it soft or hard) every string you submit.





soft and hard parse

MUnzer, January 30, 2002 - 9:42 pm UTC

Tom:

1. How do you make sure in application development code that queries are being soft parsed?

Do you use bind variables all the time?

also, if you have
select * from table where state='VA';
and
select * from table where state='CA';

would those be soft or hard parsed, since the parameter is different?

would you write this as

select * from table where state=:p_state;

and use owa_util to bind variables to run it?

Thank you,





Tom Kyte
January 31, 2002 - 6:58 am UTC

if you use bind variables -- you will be using shared sql.

You can use tools like SQL_TRACE+tkprof and statspack to see that you are (if you have my book -- read chapter 10)

The queries:

select * from table where state='VA';
and
select * from table where state='CA';


would be hard parsed the first time they are parsed -- soft parsed the second through Nth time, just like every other query in the database. I would most likely use a bind variable here if this query was to be run for each state and the state was a parameter to a procedure.

I don't know if I would use owa_util to bind it -- I would only do that if I was using listprint or cellsprint found in the owa_util package.
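A sketch of the bind-variable version wrapped in a procedure, as suggested above (the table, column, and procedure names are hypothetical):

```sql
create or replace procedure count_by_state( p_state in varchar2 )
as
    l_cnt number;
begin
    -- Static SQL in PL/SQL binds p_state automatically, so one shared
    -- statement in the shared pool serves VA, CA, and every other state.
    select count(*)
      into l_cnt
      from my_table
     where state = p_state;
    dbms_output.put_line( p_state || ': ' || l_cnt );
end;
/

exec count_by_state( 'VA' )
exec count_by_state( 'CA' )
```

PL/SQL also caches the cursor, so repeated calls within a session do not even soft parse the query again.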



A reader, February 14, 2002 - 9:59 am UTC

Thank you very much

softer soft parse and soft parse, difference

A reader, December 02, 2002 - 6:19 am UTC

Hi

Just to verify if I understand correctly!

I understand that soft parse does the following

1. syntax check
2. semantics check
3. go to shared pool and look if the SQL is there

softer soft parse does

1. syntax check
2. semantics check
3. if SQL is in my session cache then execute otherwise look in shared pool

From your examples it seems that the part of looking in the shared pool uses the library cache latch and shared pool latch, right?

Am I correct in any sense?

Tom Kyte
December 02, 2002 - 7:25 am UTC

softer soft parse does #3, it has to latch in order to tell the database "hey, i'm going to use this again"

what if sql is flushed from shared pool

A reader, December 02, 2002 - 8:12 am UTC

Hi

so in softer soft parse

3. if SQL is in my session cache then execute otherwise look in shared pool

is used

so if this is what it does, what happens when it tries to execute something which has been flushed from the shared pool? (something nonexistent)

Tom Kyte
December 02, 2002 - 8:22 am UTC

that is why you need the latch to the shared pool in order to tell the database "we want to execute that statement again" (and make sure it is still there). If the sql isn't there anymore, we'll not find it and you'll be doing a hard parse.




then what step does session_cached_cursors save??

A reader, December 02, 2002 - 8:30 am UTC

Hi

Then I don't see what step session_cached_cursors saves...

I mean, all the steps are the same whether you use that parameter or not :-?

Tom Kyte
December 02, 2002 - 8:53 am UTC

No, in a soft parse you have

1. syntax check
2. semantics check
3. go to shared pool and look if the SQL is there

in a softer soft parse you have

3. go to shared pool and look if the SQL is there

re-read the first "review" to this question and you'll see NUMERICALLY the benefits (in reduced latching, increased performance)
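You can also watch the benefit appear in your own session's counters; a sketch (statistic names as they appear in v$statname):

```sql
alter session set session_cached_cursors = 100;

-- Re-run any one statement a few times (the same text must be parsed
-- repeatedly before it is placed in the session cursor cache), then:
select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name in ( 'session cursor cache count',
                   'session cursor cache hits',
                   'parse count (total)' );
```

"parse count (total)" keeps rising either way -- it is the "session cursor cache hits" counter, together with the reduced library cache latching shown in the runstats example above, that reveals the softer soft parse.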

softer soft parse

Sam, January 09, 2003 - 3:57 pm UTC

Tom,

You mentioned:

"No, in a soft parse you have

1. syntax check
2. semantics check
3. go to shared pool and look if the SQL is there

in a softer soft parse you have

3. go to shared pool and look if the SQL is there"

I have a question on which latch gets are saved when doing the softer soft parse. Is avoiding step 2 the main reason why there are fewer latch gets? (I think you need a latch to get the lock to load and lock the object definition during parsing, and doesn't look like step 1 requires any latch). Also, is the cost of step 3 the same in soft parse vs softer soft parse? Or does the cached cursor hold some pointer to the library cache so it does not need to search the library cache? (it just needs to see if the location pointed to is still valid)


Tom Kyte
January 09, 2003 - 6:58 pm UTC

mostly 2.

the cached cursor holds a pointer, it is a little more efficient.

You can use runstats as I did above to see what you save.


Re-generating Plan After Analyze

Jerry, January 19, 2003 - 2:54 pm UTC

If I have a cached cursor, and then analyze the underlying table, it seems to re-generate a new plan.  That's good news, but not what I expected.  Have I modeled this correctly?

SQL> set echo on
SQL> set feedback on
SQL> set autotrace traceonly
SQL>
SQL> drop table t;

Table dropped.

SQL>
SQL> create table t nologging as select * from all_objects;

Table created.

SQL>
SQL> create index t_idx on t(object_id);

Index created.

SQL>
SQL> @call_sql
SQL> select t1.object_name, t2.object_name
  2    from t t1, t t2
  3   where t1.object_id = t2.object_id
  4     and t1.owner = 'SYS';

13480 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T'
   2    1     NESTED LOOPS
   3    2       TABLE ACCESS (FULL) OF 'T'
   4    2       INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      33740  consistent gets
        403  physical reads
          0  redo size
     761191  bytes sent via SQL*Net to client
      10381  bytes received via SQL*Net from client
        900  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      13480  rows processed

SQL>
SQL>
SQL>
SQL> analyze TABLE t compute statistics
  2  FOR TABLE
  3  FOR ALL INDEXES
  4  FOR ALL indexed COLUMNS;

Table analyzed.

SQL>
SQL> @call_sql
SQL> select t1.object_name, t2.object_name
  2    from t t1, t t2
  3   where t1.object_id = t2.object_id
  4     and t1.owner = 'SYS';

13480 rows selected.


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=86 Card=293 Bytes=17287)
   1    0   HASH JOIN (Cost=86 Card=293 Bytes=17287)
   2    1     TABLE ACCESS (FULL) OF 'T' (Cost=42 Card=293 Bytes=11134)
   3    1     TABLE ACCESS (FULL) OF 'T' (Cost=42 Card=29299 Bytes=615279)




Statistics
----------------------------------------------------------
         42  recursive calls
          0  db block gets
       1701  consistent gets
        239  physical reads
          0  redo size
     761191  bytes sent via SQL*Net to client
      10381  bytes received via SQL*Net from client
        900  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      13480  rows processed
 

Tom Kyte
January 19, 2003 - 2:58 pm UTC

that is exactly what is supposed to happen. there is a seriously complex and detailed dependency mechanism inside of Oracle. It takes care of invalidations like this automagically.

consider this analyze to be like "drop index I" -- you would expect the queries against that table with I on it to be "fixed". the analyze is no different.

Can you Substantiate

Lakshmi Narasimhan R, January 19, 2003 - 10:24 pm UTC

Hi Tom

"Public synonyms can actually make things more difficult."

Can you explain this?

At least in most projects I have seen, people use public synonyms for all the tables, with privileges granted to users according to their needs.

Is this a wrong way of designing, or are public synonyms difficult only in a few scenarios?

Your explanation will be appreciated.

Reg



Tom Kyte
January 20, 2003 - 10:26 am UTC

tell me, how many EMP public synonyms can you have?

it is a namespace issue (there is one public namespace).

how many times I've seen "oh, i didn't know that name was taken, our scripts just drop and create that public synonym -- sorry about that".

you should strive to use as few public synonyms as possible.

Parse Count in Tkprof

Sushanta, April 04, 2003 - 5:47 am UTC


This is pseudo code of a program developed in Pro*C:

# ---------------------------------------------------------------

EXEC SQL DECLARE c_parse1 CURSOR FOR   /* outer cursor */
    SELECT abc1.a
      FROM cc_abc_table abc1
     WHERE ccp.e IN (1, 3);
EXEC SQL OPEN c_parse1;   /* fetches 300 records in total */
for (;;)
{
    EXEC SQL FETCH c_parse1 INTO :cc_abc_t INDICATOR :i_cc_abc_t;
    get_abc_data(messagehandle, i_index, runid)
    {
        get_abcdet_segment(messagehandle, i_po_index, runid)
        {
            EXEC SQL DECLARE c_parse2 CURSOR FOR   /* inner cursor */
                SELECT count(*)
                  FROM cc_xyz_table xyz1
                 WHERE xyz1.a_fk = :t_po_number;
            EXEC SQL OPEN c_parse2;
            for (;;)
            {
                EXEC SQL FETCH c_parse2 INTO :cc_podet_t INDICATOR :i_cc_podet_t;
            }
            EXEC SQL CLOSE c_parse2;
        }
    }
}

# -----------------------------------------------------Tkprof of the Query >>

SELECT abc1.a
FROM cc_abc_table abc1
WHERE ccp.e IN (1, 3)


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch       44      0.03       0.05        268        330          5        300
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       46      0.03       0.05        268        330          5        300
--------------------------------------------------------------------------------

SELECT count(*)
FROM cc_xyz_table xyz1
WHERE xyz1.a_fk = :b0

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse      234      0.05       0.07          0          0          0          0
Execute    300      0.01       0.02          0          0          0          0
Fetch      523      0.02       0.01          8       2385          0        385
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total     1057      0.08       0.10          8       2385          0        385



The Outer Cursor
SELECT abc1.a
FROM cc_abc_table abc1
WHERE ccp.e IN (1, 3)

The total number of records fetched is 300.

So the inner cursor is running 300 times. Hence the inner cursor

SELECT count(*)
FROM cc_xyz_table xyz1
WHERE xyz1.a_fk = :t_po_number

is opening and closing 300 times.

But in the tkprof file it is observed that the parse count is 234 (how??).
It should be 1 hard parse (if the SQL is not found in the library cache) plus 299 soft parses; however, it is showing 234.

Can you clarify this?

Thanks


Tom Kyte
April 04, 2003 - 6:39 am UTC

what are your hold cursor and release cursor settings on the proc compile line?

and hopefully that is a "fake" example -- that you really do not do that in real life, and that you know the above should just be a single query, with whatever logic is in get_abcdet_segment(messagehandle,i_po_index,runid) (which probably has its own sql) rolled right in.


Actually the parse count, under optimal circumstances, would be 1 in all cases!


As above

Sushanta, April 07, 2003 - 1:57 am UTC

Hi,

The HOLD_CURSOR and RELEASE_CURSOR settings in Pro*C are at their defaults:



Option name : release_cursor=boolean
Current value : no
Restrictions : yes, no, true, false
Description : Control release of cursors from cursor cache



This is just a part of the program, mentioned for reference; inside it there is a lot of processing going on.
I have now removed the function get_abcdet_segment(messagehandle,i_po_index,runid) which was confusing to you:


------------------------------------------------------------------------------
EXEC SQL DECLARE c_parse1 CURSOR FOR
    SELECT abc1.a
      FROM cc_abc_table abc1
     WHERE ccp.e IN (1, 3);
EXEC SQL OPEN c_parse1;
for (;;)
{
    EXEC SQL FETCH c_parse1 INTO :cc_abc_t INDICATOR :i_cc_abc_t;
    EXEC SQL DECLARE c_parse2 CURSOR FOR
        SELECT count(*)
          FROM cc_xyz_table xyz1
         WHERE xyz1.a_fk = :t_po_number;
    EXEC SQL OPEN c_parse2;
    for (;;)
    {
        EXEC SQL FETCH c_parse2 INTO :cc_podet_t INDICATOR :i_cc_podet_t;
    }
    EXEC SQL CLOSE c_parse2;
}
------------------------------------------------------------------------------


Could you tell us where the problem is, and why the parse count shows such an odd number?

Thanks

Tom Kyte
April 07, 2003 - 8:00 am UTC

that function is not confusing me -- I'm telling you it should not be there -- you are doing what we call "a join". You are doing it "manually" -- eg: wrong. My point is there should be one query here -- not queries in loops in loops in loops.


The default values for those parameters vary by OS. You have release cursor "no" apparently -- meaning that, to a degree, cursors will be cached if there is space. Apparently, you SOMETIMES have space and sometimes NOT. 60-some-odd times, we reused an already opened cursor -- the other 234 times we did not. Nothing strange here -- other than the fact that you are doing procedurally what rightly should be done purely in SQL.
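For readers unfamiliar with the precompiler options being discussed: cursor caching in Pro*C is controlled on the compile line. A hypothetical invocation (file name and values are illustrative only; see the Pro*C precompiler guide for the exact option semantics):

```
proc iname=myprog.pc hold_cursor=yes release_cursor=no maxopencursors=100
```

With RELEASE_CURSOR=NO a closed cursor stays in the cursor cache and can be re-executed without a fresh parse, subject to MAXOPENCURSORS -- which is why the parse count above depends on whether cache slots happened to be free.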

Same as above

Sushanta, April 07, 2003 - 2:04 am UTC

Also, if I comment out this statement:

EXEC SQL CLOSE c_parse2;

the parse count shows as 1.

Tom Kyte
April 07, 2003 - 8:00 am UTC

now that is almost what you want. I would want parse count = 0 for this one as it should just be a single query....

Same as above

Sushanta, April 08, 2003 - 12:21 am UTC

Hi Tom ,
I have not yet gotten the answer to my question.
My question is: why does the parse count show such an odd number (234)?
And when I comment out the CLOSE c_parse2, the parse count is 1.
I should also mention that it is a very large program, about 12,000 lines,
and it is not easy to rewrite the SQL I have mentioned as a single statement,
because between the two cursors there is a lot of processing.

So the main question is now:
when I comment out the CLOSE c_parse2, the parse count is 1,
and
when I uncomment the CLOSE c_parse2, the parse count is 234.

Why so? It is not very practical to approach the solution this way.

Can you clarify this?



A reader, May 21, 2003 - 7:37 am UTC

" Hard parses,
they are very CPU intensive as well as a point of contention (serialization).
"

What do you mean by 'contention (serialization)'? Does Oracle process hard parses only one at a time?
Thank you.

Tom Kyte
May 21, 2003 - 8:59 am UTC

we latch the shared pool among other shared structures.

latches are serialization devices
serialization devices make things run -- well -- serially
things that run serially introduce contention, don't scale as well as things that run concurrently.

when we parse, we update shared data structures. These shared data structures in the SGA are protected by latches. It is not that we parse one at a time, but rather that while parsing, we update lots of stuff in the SGA, read lots of stuff in the SGA -- but can only do that serially -- to avoid two people from updating a linked list at the same time for example.

Hard Parses and Parses in Statspack

Vivek Sharma, May 21, 2003 - 4:08 pm UTC

Dear Tom,

I am going through your book Expert One-on-One and have gained very good knowledge of Oracle internals. On page 480, Tuning Strategies Using Statspack, in the Load Profile, the Parses per second is 16.44 and Hard Parses is 0.17.
How did you calculate that every 6 seconds you are parsing some bit of SQL for the very first time? Also, on page 481, for Non-Parse CPU you mention a formula, round(100 * (1 - PARSE_CPU/TOT_CPU), 2). From where do I take the values of PARSE_CPU and TOT_CPU? Can you please explain.

Thanks for your hard work.
Vivek Sharma

Tom Kyte
May 21, 2003 - 5:07 pm UTC

ops$tkyte@ORA920> select 1/.17 from dual;

     1/.17
----------
5.88235294


if I hard parse 0.17 of something every second, it takes about 6 seconds to hard parse a whole thing

statspack supplies the parse_cpu and tot_cpu, it gets that from v$sysstat. 
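Both inputs are visible to any session with access to the v$ views; a sketch of computing the non-parse CPU percentage yourself (statistic names as of 8i/9i -- verify them against your version's v$statname):

```sql
select round( 100 * (1 - prs.value / tot.value), 2 ) "Non-Parse CPU %"
  from ( select value from v$sysstat where name = 'parse time cpu' ) prs,
       ( select value from v$sysstat where name = 'CPU used by this session' ) tot;
```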

Query parsing in PL/SQL

Tony, May 22, 2003 - 1:37 am UTC

What about the DML statements and queries in packages/procedures? Do they get parsed during compilation, or during each and every execution? What is done during compilation?





Tom Kyte
May 23, 2003 - 7:51 am UTC

that is one of the massive benefits of stored plsql -- the statements are parsed once per SESSION, not once per execution.

during compilation, the statements are not parsed into the shared pool and optimized or anything, you need to actually execute the statement by running the procedure to have that happen.

Is too much of soft parse count an over kill?

Tony, May 22, 2003 - 2:53 am UTC

Is too much of a soft parse count overkill? If yes, how do I reduce the soft parse count?

Tom Kyte
May 23, 2003 - 7:52 am UTC

you should parse a statement once per session...

to reduce the soft parse count -- stop parsing it, you control it, just don't do it
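To see how well you are doing on that score, a session can watch its own parse statistics; a sketch (assumes select access on the v$ views):

```sql
select sn.name, st.value
  from v$mystat st, v$statname sn
 where st.statistic# = sn.statistic#
   and sn.name in ( 'parse count (total)', 'parse count (hard)' );
```

Run it before and after a piece of your workload: if "parse count (total)" grows once per execution, the application is re-parsing statements it should simply be holding open and re-executing.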

Soft Parse - No Soft Parse

Bala, June 03, 2003 - 1:35 am UTC

1.Your answer to the first follow up says
"They will be cached in the shared pool -- but your session will have to find
them there. Session Cached Cursors can remove the need to have to "find" them."
Find what? the cursor or shared SQL
What SESSION_CACHED_CURSORS is caching for me? If it is caching for me my cursor which according to you
"is
more then just the plan and such from the shared pool, a cursor is a "state" as
well. A cursor is your data -- YOUR data. A cursor has your bind variable
values -- YOUR values."
where is the plan now, is it still in the shared pool or in my UGA along with my Cursor(Session state and stuff)?
2. A softer soft parse would avoid
"1. syntax check
2. semantics check"

Does SESSION_CACHED_CURSOR avoid
"3. go to shared pool and look if the SQL is there"
making the SQL available in my UGA and thus lessening the number of latches - serialization.....

It's really confusing Tom....


Tom Kyte
June 03, 2003 - 7:28 am UTC

1) it makes it a softer soft parse. It is helping you get your cursor state instantiated faster, with less latches.

3) no. it lessens the latching required to do so as it set up a dependency that would be invalidated if something happened to the cached sql - but it still needs to see if it is OK.


the example above should help you in understanding. it shows there is still latching, just less of it.
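A sketch of turning on the session cursor cache being discussed (the parameter can also be set instance-wide in the init.ora; 100 is just an illustrative value):

```sql
alter session set session_cached_cursors = 100;

-- once a statement has been parsed 3 or more times, later soft parses
-- of it become "softer" -- and show up in this session statistic:
select sn.name, st.value
  from v$mystat st, v$statname sn
 where st.statistic# = sn.statistic#
   and sn.name = 'session cursor cache hits';
```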

Too many Parses resulting in lesser Database Connections

Vivek Sharma, June 04, 2003 - 2:12 pm UTC

Dear Tom,

I was going through your book Expert One-on-One, in which you mentioned that too many parses take too many resources, resulting in fewer user connections. I totally agree with your points, as I have faced this problem in one of my databases, but I am eager to know the exact reason for this. How is a new user connection related to it? Can you please explain in brief.

Thanks and Regards
Vivek Sharma

Tom Kyte
June 04, 2003 - 2:28 pm UTC

I said you lose the ability to SCALE.

if your machine can do X units of work,

and you make each session do X/100 units of work,
you can have 100 sessions doing work.


if you stop parsing so much and now you do X/200 units of work per user...
you can have 200 sessions doing work.


The less you do, the more you can do.

X Unit of Work

Vivek Sharma, June 04, 2003 - 4:12 pm UTC

Dear Tom,

Sorry, Not much Clear.

Please check whether I am correct.

Suppose my machine has the capability of doing 10000 units of work.

If my sessions do not parse too much and each does only 100 units, then 10000/100 = 100 sessions will be doing the work.
Whereas if I parse too much and each session does about 200 units of work, then 10000/200 = 50 sessions will be doing the work. Hence I lose scalability.

Please correct me if I am wrong; my apologies.

Thanks
Vivek Sharma

Tom Kyte
June 04, 2003 - 4:50 pm UTC

yes, that is my point.

the LESS you do, the MORE you can do.

Optimization During Parsing

Robert, June 04, 2003 - 6:42 pm UTC

Tom,
In this white paper from Oracle Corp
"Efficient use of bind variables, cursor-sharing and related cursor parameters" (8/2001).....
bottom of page 4 in fine print it notes:

"In some cases, optimization may take place during parsing, but this is irrelevent to the discussion in this WP"

Can you elaborate on this a bit, or point me to where (AskTom or Expert One-on-One) you have already done so?
thanks


Tom Kyte
June 05, 2003 - 7:29 am UTC

hard parse = optimize
soft parse = been there, done that, use the plan from the shared pool.


they are referencing the difference between hard and soft parses.

What happens during compilation?

Tony, July 04, 2003 - 8:16 am UTC

You have mentioned that parsing is done even for stored procedures during first execution. Then, what exactly happens during compilation? I've been thinking that parsing of stored procedure is done during compilation itself. Please clarify

Tom Kyte
July 04, 2003 - 8:45 am UTC

the parse of the stored procedure is done during compilation. this generates pcode, an executable if you will.

The SQL contained therein is executed by the plsql engine at runtime and is parsed just like any and all sql is when it first hits the shared pool.

What's the title of the book and if possible ISBN

Mohammd Saleem, August 20, 2003 - 7:34 pm UTC

Tom,
I read your expert reviews/suggestions/explanations from time to time. You always refer to "my book", but hardly ever give the complete title.

I am a DBA. I know the basic concepts decently well. I want to become proficient in performance tuning, backup and recovery, and 8i-to-9i differences. Can you tell me if you have written any books on them?
Please send me the titles and ISBNs.

Tom Kyte
August 21, 2003 - 5:36 pm UTC

see my home page -- asktom.oracle.com.

nothing in backup and recovery (the oracle docs are actually pretty good there)


How much memory?

Shankar, October 13, 2003 - 8:10 pm UTC

Tom, If I have open_cursors=100, how much memory oracle needs to allocate in the pga for a session for each cursor?

(2) Also, what is the difference between open_cursors and session_cached_cursors? Thanks.

Tom Kyte
October 14, 2003 - 6:24 am UTC

the same as if you have it at 64 :)

or 5000

we allocate the cursor array (pointers) 64 at a time.

As you use them - we allocate a cursor structure to be pointed to.

so, if you set it to 100 and never use more than 40, it is as if it were set to 64 (or 63 or .....)


It is a function of how many you actually open, and since each one could be a different size, it is not possible to say how much memory 40 open ones would use (short of getting your 40 together, measuring the PGA use, opening them and fetching a bit, and then measuring again).

Syntax Check of a SQL

Vivek Sharma, October 26, 2003 - 4:50 pm UTC

Dear Tom,

I was asked a question by one of my juniors when I was explaining to him the difference between soft parse and hard parse. He asked where the syntax rules are stored, i.e. when I execute the query
select * form <tablename> it gives me a syntax error:
*
ERROR at line 1:
ORA-00923: FROM keyword not found where expected

But where are these syntax rules stored? Are they in the dictionary cache of the shared pool? If this query had been entered properly, it would have been checked against the library cache for an exact match. But where does Oracle check from that the syntax is unrecognizable?

Thanks and Regards
Vivek Sharma

Tom Kyte
October 26, 2003 - 4:56 pm UTC

syntax checking is part of parsing the query into a data structure. It is part of the procedural code that is the query parser itself. It has a set of rules, a grammar, that describes valid SQL. The string is literally parsed using that grammar into a data structure.

It is just procedural code.

how to reduce soft parses

A reader, November 14, 2003 - 4:01 am UTC

Hi

My execute-to-parse ratio is pretty low, around 25%; there is a high number of parses and a high number of executions. There are few hard parses but a very high number of soft and softer parses. How can we improve in this case? It is a Forms application.

I also have another Forms application, which uses stored packages extensively (the one before does not use stored PL/SQL), and its execute-to-parse ratio is 95%... Is this the only way to increase this ratio? Using stored PL/SQL?

Tom Kyte
November 14, 2003 - 8:11 am UTC

a forms application should have excessively HIGH execute/parse ratios unless the forms application uses and abuses dynamic sql a ton
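The ratio being discussed is the same one statspack reports as "Execute to Parse %"; a sketch of computing it instance-wide from v$sysstat:

```sql
select round( 100 * (1 - prs.value / exe.value), 2 ) "Execute to Parse %"
  from ( select value from v$sysstat where name = 'parse count (total)' ) prs,
       ( select value from v$sysstat where name = 'execute count' ) exe;
```

A value near zero means sessions parse about as often as they execute; a well-behaved application parses once and executes many times.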

'Dynamic' forms SQL

Gary, November 16, 2003 - 5:24 pm UTC

The issue I seem to be seeing more often in Forms programs is people using the facility to set the where clause dynamically for a base-table block, rather than using a pre-query trigger (which is becoming a lost art). And the hard-coding is more like

if :PARAMETER.COL1 IS NOT NULL THEN
    whereclause := whereclause || ' AND col1 LIKE ''%' ||
                   :PARAMETER.col1 || '%''';
end if;

Maybe worth scanning the code for default_where

Tom Kyte
November 17, 2003 - 6:39 am UTC



they should be:


if :parameter.col1 is not null then
    whereclause := whereclause ||
        ' and col1 like ''%''|| :parameter.col1 || ''%'' ';
end if;


so the resulting string looks like:


and c1 like '%' || :block.item || '%'
and c2 = :anotherblock.item
and c3 > :yetanotherBlock.item


you can reference binds in the default where.

What about this?

A reader, November 19, 2003 - 3:01 pm UTC

Tom,
If the statement is soft parsed, and bind variables are resolved afterwards, will it not mislead the optimizer, especially when there is a non-uniform distribution of data? For example, there could be the query: select * from customer where first_name = :first_name. Sometimes the first_names are unique and one row (very few) is returned, but sometimes lots of rows are returned. If there is an index on the name, and the optimizer used the index the first time, then for the second query, with a soft parse, the index scan will be slower -- or am I missing something?
Thanks


Tom Kyte
November 21, 2003 - 3:22 pm UTC

the parse/optimization is deferred to the open, after the bind.

the first hard parse will determine what the plan is for the rest of the soft parses.
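A sketch of that behavior in SQL*Plus (the customer table and values are made up for illustration): whatever plan the first, hard-parsed execution receives is the plan every subsequent soft parse reuses, regardless of the bind value:

```sql
variable first_name varchar2(30)

exec :first_name := 'SMITH'
select count(*) from customer where first_name = :first_name;  -- hard parse: plan chosen here

exec :first_name := 'JOHN'
select count(*) from customer where first_name = :first_name;  -- soft parse: same plan reused
```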

Totally lost as to massive execute time

Mike, November 20, 2003 - 12:35 pm UTC

This is one of the worst SQL statements I've ever seen; may we all forgive the original developer -- he probably knew not what he did...

Aside from the lack of bind variables, it asks to use an index on a table and then applies upper() to the column so that the index can't be used as intended, sticks a PL/SQL function in the where clause, and outer joins like it was the latest fashion...

In the tkprof timings, all the time is spent in the execute phase. The PL/SQL function does not do any DML, just some queries, and there are no triggers that would fire on the selects (although there are triggers).

I don't understand why so much time is spent in the exec phase. I thought the lack of binds would hit the parse phase -- is it the PL/SQL function that is causing it?

If this isn't related enough to the above questions then please ignore it; I did a search for "High executes" and tkprof and this was the only thread that came back.

Thanks,

Mike.

select data_section
from
( select /*+ index (NIM_DAT_ENGINEERING_REQS) index (nim_dat_listof_users
lus_dat_i_pi1) */ ERE_REQUIREMENT_ID||'^U^V'||ERE_expire_yn||'^U^V'||ERE_status
_id||'^U^V'||set1.sst_name||'^U^V'||ERE_class_id||'^U^V'||set2.scl_name||'^U^V'||ER
E_NAME||'^U^V'||ERE_GEN_NAME||'^U^V'||set5.lva_display_value||'^U^V'||set6.pro_na
me||'^U^V'||set7.lva_display_value||'^U^V'||set8.lva_display_value data_section,
omx_oam_utl_03.check_access(ERE_profile_id,ERE_access_id,
spe_update_access_method_id,cast(multiset(select lui_access_id,lui_user_id
from nim_Dat_listof_users_items where lui_list_id (+) = lus_list_id) as
omx_t_id_rec_tab),:fineg_flag,:user_id) check_ok from
NIM_DAT_ENGINEERING_REQS,nim_cfg_source_statuses set1,
nim_cfg_source_classes set2,nim_cfg_list_values set5,nim_sec_profiles set6,
nim_cfg_list_values set7,nim_cfg_list_values set8 ,nim_dat_listof_users,
nim_cfg_source_peripherals where 1=1 and set1.sst_source_id(+) = 53 and
set1.sst_status_id(+) = ERE_STATUS_ID and set2.scl_source_id(+) = 53 and
set2.scl_class_id(+) = ERE_CLASS_ID and upper(ERE_NAME) like
upper('10053_0%') and set5.lva_list_id(+) = 97 and set5.lva_storage_value(+)
= ERE_GEN_REGION and set6.pro_profile_id(+) = ERE_PROFILE_ID and
set7.lva_list_id(+) = 121 and set7.lva_storage_value(+) = ERE_GEN_TECH_TYPE
and set8.lva_list_id(+) = 1181 and set8.lva_storage_value(+) =
ERE_GEN_PROGRAMME_STATUS and lus_owner_id (+) = ERE_REQUIREMENT_ID and
upper(lus_name) = upper(spe_name) and spe_source_id = scl_source_id and
spe_class_id in (scl_class_id,0) and lus_security_yn (+) = 'Y' and
ERE_class_id IN (0,54)) where check_ok = 'Y'


call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.01       0.01          0          0          0          0
Execute      1     11.46      11.52          1          6          0          0
Fetch        1      0.10       0.10        839        856         12          1
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        3     11.57      11.63        840        862         12          1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 22 (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 FILTER
2 FILTER
1 NESTED LOOPS OUTER
2 HASH JOIN OUTER
1 HASH JOIN
19 TABLE ACCESS FULL NIM_CFG_SOURCE_PERIPHERALS
1 HASH JOIN OUTER
1 HASH JOIN OUTER
1 HASH JOIN OUTER
1 HASH JOIN OUTER
1 HASH JOIN OUTER
1 TABLE ACCESS BY INDEX ROWID NIM_DAT_ENGINEERING_REQS
29655 INDEX FULL SCAN (object id 19762)
2 INDEX RANGE SCAN (object id 19583)
6 TABLE ACCESS FULL NIM_CFG_SOURCE_STATUSES
2 TABLE ACCESS BY INDEX ROWID NIM_CFG_LIST_VALUES
3 INDEX RANGE SCAN (object id 19546)
3 TABLE ACCESS BY INDEX ROWID NIM_CFG_LIST_VALUES
4 INDEX RANGE SCAN (object id 19546)
4 TABLE ACCESS BY INDEX ROWID NIM_CFG_LIST_VALUES
5 INDEX RANGE SCAN (object id 19546)
85 TABLE ACCESS FULL NIM_SEC_PROFILES
1 TABLE ACCESS BY INDEX ROWID NIM_DAT_LISTOF_USERS
2 INDEX RANGE SCAN (object id 19780)
1 INDEX RANGE SCAN (object id 19784)

********************************************************************************



Tom Kyte
November 21, 2003 - 4:30 pm UTC

most queries spend their time in the fetch phase.

some during the execute phase. You are seeing the time it took to build the result set.

Soft/Hard vs. Bind vars

Mark, November 21, 2003 - 7:35 pm UTC

In one of your responses way back in this thread you state in regard to the following SQL -

select * from table where state='VA';
and
select * from table where state='CA';

"would be hard parsed the first time they are parsed -- softparsed the 2cnd through N'th time. Just like every other query in the database. I would most likely use a bind variable here if this query was to be run for each state and the state was a parameter to a procedure."

I thought bind variables were irrelevent with stored procedure SQL (except when expressly written as dynamic SQL). If so, what did you mean when you said you'd use bind variables?

Also - I was surprised to see (via v$sqlarea) that whitespace was not ignored, so that two statements which are really the same were stored separately in the shared pool:
SELECT SYSDATE FROM DUAL;
vs.
SELECT SYSDATE FROM DUAL;

You comment?
Thanks.


Tom Kyte
November 21, 2003 - 10:40 pm UTC

with a stored procedure, sure

with a PROCEDURE (VB, Java, C, C++, etc etc etc) -- no, not unless the coder coded it properly.


whitespace, case, EVERYTHING - the queries must be 100% the same.

What affects the parse time?

Jens, November 25, 2003 - 7:45 am UTC

Hi Tom,

I have a cursor FOR loop where I loop through the result set of a "difficult" query. This program has not been changed for months; it runs every month in an acceptable time.

But suddenly the never-changed query takes a lot of parse time:

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        7    786.90     790.33          0          0          0          0
Execute      8      0.19       0.25          0          4        139        122
Fetch       15      8.25      11.57       4998      11655        667        153
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total       30    795.34     802.15       4998      11659        806        275

If I try it with the RBO it's OK, but I don't want to change the code. In the meantime the DBA changed two parameters:

log_buffer from 524288 to 2048000
log_checkpoint_timeout from 0 to 1800

But I cannot see any correlation between these parameters and the optimizer behaviour.

What other settings could have an impact on the parse time? I know OPTIMIZER_MAX_PERMUTATIONS, but this parameter was left unchanged at its default of 80000 (if I set it to a very low value, the performance is also acceptable).

Bye,

Jens







Tom Kyte
November 25, 2003 - 8:16 am UTC

what does the rest of the tkprof look like (how many misses in library cache)

why is this parsed more than one time????????

something else changed on this system -- a table altered, indexes mucked with -- what is different here?

what is the plan, what was the plan?

Parsing Stored Procedure

A reader, November 28, 2003 - 1:25 pm UTC

Hi Tom,

Is parsing of stored procedure calls much slower than parsing of plain SQL statements, for example an INSERT?

I am using ODP.NET in my application right now, and one of its current limitations is that it cannot reuse cursors, so the parse-to-execution ratio is always 1 to 1.

Begin PKG_TXNS.CREATE_TXN(:v0, :v1, :v2, :v3, :v4, :v5, :v6, :v7); End;

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse     1932      0.00      14.78          0          0          0          0
Execute   1932      0.00       0.65          0          0          0       1932
Fetch        0      0.00       0.00          0          0          0          0


INSERT INTO REVERSINGTXNS ( TXNID, ORIGINALTXNID ) VALUES( :"SYS_B_0", :"SYS_B_1" )

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse      924      0.00       0.09          0          0          0          0
Execute    924      0.00       0.24         15         22      11237        924
Fetch        0      0.00       0.00          0          0          0          0

The stored procedure contains only a simple insert statement. What makes it almost 100 times slower than direct SQL statements?

Tom Kyte
November 28, 2003 - 2:18 pm UTC

use session_cached_cursors to offset this.

if that is just "one" of its limitations -- well, gotta say using MS technologies is a wonderful thing isn't it.

but I do not see the wide disparity you do -- no:


drop table t;
create table t ( a number, b number, c number );

create or replace package demo_pkg
as
    procedure foo( a in number, b in number, c in number );
end;
/
create or replace package body demo_pkg
as
    procedure foo( a in number, b in number, c in number )
    is
    begin
        insert into t values ( a, b, c );
    end;
end;
/

create or replace procedure p1
as
begin
    for i in 1 .. 2000
    loop
        execute immediate 'begin demo_pkg.foo( :x, :y, :z ); end;'
            using i, i, i;
    end loop;
end;
/

create or replace procedure p2
as
begin
    for i in 1 .. 2000
    loop
        execute immediate 'insert into t values ( :x, :y, :z )'
            using i, i, i;
    end loop;
end;
/
@trace
exec p1
exec p2
@trace
exec p1
exec p2


begin demo_pkg.foo( :x, :y, :z ); end;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2000 0.49 0.60 0 0 0 0
Execute 2000 1.19 1.55 0 1 20 2000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4000 1.68 2.16 0 1 20 2000

********************************************************************************
insert into t values ( :x, :y, :z )


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 2000 0.23 0.21 0 0 0 0
Execute 2000 1.01 0.99 0 2013 6137 2000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4000 1.24 1.21 0 2013 6137 2000


Clarification

Richard, January 16, 2004 - 5:17 am UTC

You said in your first response in this thread :

"After Oracle parses the query and it passes the syntax and semantic checks -- it will look in the shared pool component of the SGA to see if that same exact query has 
already been processed by another session.  Since it has performed the semantic check it has already figured out:

o Exactly what tables are involved
o That we have access to the tables (the proper privileges are there)

And so on.  Now, it can look at all of the queries in the shared pool that have already been parsed/optimized and generated to see if the work has already been done.  
........ "

Say for example,

SQL> connect user1/test
SQL> select ename from scott.emp where empno = :x ;

Now let's say that this query undergoes PARSING and the SQL is in the shared pool for future use.

SQL> connect scott/tiger
SQL> revoke select on emp from user1;
SQL> connect user1/test
SQL> select ename from scott.emp where empno = :x

So, now, when does Oracle check that the USER1 has no privileges to select SCOTT's EMP table ?

Can I say that every SQL statement has to undergo semantic analysis every time, irrespective of whether the SQL is available in the shared pool or not?
 

Tom Kyte
January 16, 2004 - 11:11 am UTC

it'll check when user1 goes to execute it again.

yes, semantic checking must happen during parse.

Clarification

Richard, January 16, 2004 - 5:22 am UTC

Tom

Please ignore my previous query. You have answered it already in the subsequent follow-ups. So I understand that every SQL statement (whether it is in the shared pool or not) has to undergo a syntax and semantic check every time. Certainly it makes sense, but I was wondering when Oracle would do it if the SQL statement is already available in the shared pool.

Thanks


parsing question

SR, January 17, 2004 - 10:51 am UTC

Tom,

I was hoping if you could shed some light on following. In our production following is observed:

select * from v$statname where name like '%parse%'
STATISTIC# NAME CLASS
177 parse time cpu 64
178 parse time elapsed 64
179 parse count (total) 64
180 parse count (hard) 64

select a.sid, a.program, b.value total_parse, c.value soft_parse, (b.value-c.value) hard_parse,
d.value cpu_time, e.value elapsed_time
from v$session a, v$sesstat b, v$sesstat c, v$sesstat d, v$sesstat e
where a.sid = b.sid and a.sid = c.sid and b.statistic# = 179 and c.statistic# = 180
and d.statistic# = 177 and e.statistic# = 178 and a.sid = d.sid and a.sid = e.sid
order by b.value desc

 SID PROGRAM              TOTAL_PARSE SOFT_PARSE HARD_PARSE CPU_TIME ELAPSED_TIME
  13 oracle@GXPROD (QMN0)     5642612       6988    5635624    12926        77639

I hope the text lines up. But why is there so much hard parsing going on, and can I improve this situation?

I really appreciate your help here.

Thanks

Tom Kyte
January 18, 2004 - 12:37 pm UTC

there are an infinite number of reasons for "why" -- but the most common:

a) the developers of the application did not use bind variables and hence every time they run a query it is a brand new query that has never been seen before

followed by

b) you just started your system and the cache is not warmed up


at the very end of the list, very very end is

c) your shared pool is too small.


i would guess "a" if asked to guess with so little information.
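To make point (a) concrete, here is a minimal sketch -- in Python, purely as an illustration, with a hypothetical emp table -- of why literal SQL manufactures a brand-new statement on every call while a bind placeholder yields one reusable text:

```python
# Hypothetical sketch: why literal SQL defeats statement sharing.
# Each embedded literal produces a distinct statement text (a hard
# parse candidate in Oracle); a bind placeholder produces one text.

def literal_sql(empno):
    # literal embedded in the text -- every call is a "new" statement
    return f"select ename from emp where empno = {empno}"

def bound_sql():
    # bind placeholder -- the text is identical on every call
    return "select ename from emp where empno = :1"

literal_texts = {literal_sql(n) for n in range(1, 101)}
bound_texts = {bound_sql() for n in range(1, 101)}

print(len(literal_texts))  # 100 distinct statements to hard parse
print(len(bound_texts))    # 1 statement, shared after the first parse
```

One hundred executions of the literal form leave one hundred unshareable statements behind; the bound form leaves exactly one.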

Tom, thanks for the feedback. Some more question

SR, January 18, 2004 - 8:55 pm UTC

The high QMN parse count led me to look further. This is an 8.1.7.4 database. I don't know how neatly the following output will look; if an HTML table were allowed it would look better.

I hope it is readable. Almost all the SQLs I grep'ed with '$' in sql_text and sorted by parse_calls were some of the heavy hitters. Something looks really bad in these numbers. I remember looking through some trace files and seeing a very similar thing as well. I have listed the trace file output (a 5 minute trace) and the v$sql rows.

shared pool is 200mb
aq_tm_processes is 1

Following is sample from the trace file I generated by tracing application server

select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags, i.property,
i.pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.distkey, i.lblkkey,
i.dblkkey,i.clufac,i.cols,i.analyzetime,i.samplesize,i.dataobj#,
nvl(i.degree,1),nvl(i.instances,1),i.rowcnt,mod(i.pctthres$,256),
i.indmethod#,i.trunccnt,nvl(c.unicols,0),nvl(c.deferrable#+c.valid#,0),
nvl(i.spare1,i.intcols),i.spare4,spare2,decode(i.pctthres$,null,null,
mod(trunc(i.pctthres$/256),256))
from
ind$ i, (select enabled, min(cols) unicols, min(to_number(bitand(defer,1)))
deferrable#, min(to_number(bitand(defer,4))) valid# from cdef$ where obj#=
:1 and enabled > 1 group by enabled) c where i.obj#=c.enabled(+) and i.bo#=
:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 20 0.06 0.07 0 0 0 0
Execute 66 0.06 0.02 0 0 0 0
Fetch 200 0.04 0.28 98 561 0 134
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 286 0.16 0.37 98 561 0 134


Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 MERGE JOIN (OUTER)
0 SORT (JOIN)
0 TABLE ACCESS (CLUSTER) OF 'IND$'
0 INDEX (UNIQUE SCAN) OF 'I_OBJ#' (NON-UNIQUE)
0 SORT (JOIN)
0 VIEW
0 SORT (GROUP BY)
0 TABLE ACCESS (CLUSTER) OF 'CDEF$'
0 INDEX (UNIQUE SCAN) OF 'I_COBJ#' (NON-UNIQUE)

********************************************************************************


select /*+ rule */ bucket_cnt, row_cnt, cache_cnt, null_cnt, timestamp#,
sample_size, minimum, maximum, distcnt, lowval, hival, density, col#,
spare1, spare2, avgcln
from
hist_head$ where obj#=:1 and intcol#=:2


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 12 0.01 0.01 0 0 0 0
Execute 146 0.05 0.05 0 0 0 0
Fetch 146 0.00 0.17 31 458 0 146
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 304 0.06 0.23 31 458 0 146

Misses in library cache during parse: 0
Optimizer goal: RULE
Parsing user id: SYS (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: HINT: RULE
0 TABLE ACCESS (BY INDEX ROWID) OF 'HIST_HEAD$'
0 INDEX (RANGE SCAN) OF 'I_HH_OBJ#_INTCOL#' (NON-UNIQUE)

********************************************************************************


select intcol#,nvl(pos#,0),col#
from
ccol$ where con#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 17 0.00 0.00 0 0 0 0
Execute 752 0.09 0.14 0 0 0 0
Fetch 1619 0.05 0.09 7 3238 0 867
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2388 0.14 0.23 7 3238 0 867

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS (BY INDEX ROWID) OF 'CCOL$'
0 INDEX (RANGE SCAN) OF 'I_CCOL2' (UNIQUE)

********************************************************************************


select type#,blocks,extents,minexts,maxexts,extsize,extpct,user#,iniexts,
NVL(lists,65535),NVL(groups,65535),cachehint,hwmincr, NVL(spare1,0)
from
seg$ where ts#=:1 and file#=:2 and block#=:3


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 21 0.02 0.00 0 0 0 0
Execute 21 0.00 0.01 0 0 0 0
Fetch 21 0.01 0.18 23 63 0 21
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 63 0.03 0.19 23 63 0 21

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS CLUSTER SEG$
1 INDEX UNIQUE SCAN (object id 9)


select obj#,type#,ctime,mtime,stime,status,dataobj#,flags,oid$, spare1
from
obj$ where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or
remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and
:5 is null)and(subname=:6 or subname is null and :6 is null)


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 18 0.02 0.01 0 0 0 0
Execute 40 0.04 0.03 0 0 0 0
Fetch 40 0.03 0.13 37 107 0 26
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 98 0.09 0.17 37 107 0 26

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS (BY INDEX ROWID) OF 'OBJ$'
0 INDEX (RANGE SCAN) OF 'I_OBJ2' (UNIQUE)



select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,
nvl(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,
scale,183,scale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,
rowid,col#,property, charsetid,charsetform,spare1,spare2
from
col$ where obj#=:1 order by intcol#


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 21 0.09 0.09 0 0 0 0
Execute 70 0.03 0.03 0 0 0 0
Fetch 958 0.06 0.02 5 270 0 888
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 1049 0.18 0.14 5 270 0 888

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 2)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 SORT (ORDER BY)
0 TABLE ACCESS (CLUSTER) OF 'COL$'
0 INDEX (UNIQUE SCAN) OF 'I_OBJ#' (NON-UNIQUE)


select node,owner,name
from
syn$ where obj#=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 10 0.06 0.04 0 0 0 0
Execute 10 0.00 0.00 0 0 0 0
Fetch 10 0.01 0.09 17 30 0 10
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 30 0.07 0.13 17 30 0 10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
0 TABLE ACCESS (BY INDEX ROWID) OF 'SYN$'
0 INDEX (UNIQUE SCAN) OF 'I_SYN1' (UNIQUE)


Following is one of a kind (all above don't show rows as the server was still active when I stopped trace)

select tc.type#,tc.intcol#,tc.position#,c.type#, c.length,c.scale,
c.precision#,c.charsetid,c.charsetform
from
triggercol$ tc,col$ c ,trigger$ tr where tc.obj#=:1 and c.obj#=:2 and
tc.intcol#=c.intcol# and tr.obj# = tc.obj# and (bitand(tr.property,32) !=
32 or bitand(tc.type#,20) = 20) union select type#,intcol#,position#,69,0,0,
0,0,0 from triggercol$ where obj#=:3 and intcol#=1001 union select tc.type#,
tc.intcol#,tc.position#,121,0,0,0,0,0 from triggercol$ tc,trigger$ tr where
tr.obj# = tc.obj# and bitand(tr.property,32) = 32 and tc.obj# = :4 and
bitand(tc.type#,20) != 20


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 23 0.01 0.03 0 0 0 0
Execute 23 0.01 0.01 0 0 0 0
Fetch 254 0.05 0.08 17 2209 0 231
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 300 0.07 0.12 17 2209 0 231

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
6 SORT UNIQUE
6 UNION-ALL
6 NESTED LOOPS
7 NESTED LOOPS
34 TABLE ACCESS CLUSTER COL$
2 INDEX UNIQUE SCAN (object id 3)
39 INDEX RANGE SCAN (object id 122)
6 TABLE ACCESS BY INDEX ROWID TRIGGER$
12 INDEX UNIQUE SCAN (object id 120)
0 INDEX RANGE SCAN (object id 122)
0 NESTED LOOPS
7 INDEX RANGE SCAN (object id 122)
0 TABLE ACCESS BY INDEX ROWID TRIGGER$
12 INDEX UNIQUE SCAN (object id 120)


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
6 SORT (UNIQUE)
6 UNION-ALL
6 NESTED LOOPS
7 NESTED LOOPS
34 TABLE ACCESS (CLUSTER) OF 'COL$'
2 INDEX (UNIQUE SCAN) OF 'I_OBJ#' (NON-UNIQUE)
39 INDEX (RANGE SCAN) OF 'I_TRIGGERCOL2' (NON-UNIQUE)
6 TABLE ACCESS (BY INDEX ROWID) OF 'TRIGGER$'
12 INDEX (UNIQUE SCAN) OF 'I_TRIGGER2' (UNIQUE)
0 INDEX (RANGE SCAN) OF 'I_TRIGGERCOL2' (NON-UNIQUE)
0 NESTED LOOPS
7 INDEX (RANGE SCAN) OF 'I_TRIGGERCOL2' (NON-UNIQUE)
0 TABLE ACCESS (BY INDEX ROWID) OF 'TRIGGER$'
12 INDEX (UNIQUE SCAN) OF 'I_TRIGGER2' (UNIQUE)

select text
from
view$ where rowid=:1


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 14 0.00 0.04 0 13 0 0
Execute 14 0.00 0.00 0 0 0 0
Fetch 14 0.00 0.02 3 28 0 14
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 42 0.00 0.06 3 41 0 14

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
1 TABLE ACCESS BY USER ROWID VIEW$


Rows Execution Plan
------- ---------------------------------------------------
0 SELECT STATEMENT GOAL: CHOOSE
1 TABLE ACCESS (BY USER ROWID) OF 'VIEW$'


There are so many of these I just don't understand why they are getting reparsed. Isn't oracle running it for its own sake, like internal stuff?

I am really puzzled.


SQL_TEXT LOADED_VERSIONS OPEN_VERSIONS USERS_OPENING EXECUTIONS LOADS INVALIDATIONS PARSE_CALLS DISK_READS BUFFER_GETS ROWS_PROCESSED COMMAND_TYPE OPTIMIZER_MODE
insert into sys.aud$( sessionid,entryid,statement,timestamp#, userid,userhost,terminal,action#,returncode, obj$creator,obj$name,auth$privileges,auth$grantee, new$owner,new$name,ses$actions,ses$tid,logoff$pread, logoff$lwrite,logoff$dead,comment$text,spare1,spare2,priv$used) values(:1,:2,:3,SYSDATE,:4,:5,:6,:7,:8,:9, :10,:11,:12,:13,:14,:15,:16,:17, :18,:19,:20,:21,:22,:23) 1 1 965 3294139 3 0 3288922 54569 23560335 3294121 2 CHOOSE
update sys.aud$ set action#=:2, returncode=:3, logoff$time=sysdate, logoff$pread=:4, logoff$lread=:5, logoff$lwrite=:6, logoff$dead=:7 where sessionid=:1 and entryid=1 and action#=100 1 0 0 3280637 1 0 3280637 51910 22976944 3280635 6 CHOOSE
select privilege#,level from sysauth$ connect by grantee#=prior privilege# and privilege#>0 start with (grantee#=:1 or grantee#=1) and privilege#>0 1 0 0 3313241 2 0 3086704 912 67950067 14743279 3 CHOOSE
select text from view$ where rowid=:1 1 0 0 2293306 1 0 2293306 70702 4587224 2293306 3 CHOOSE
select t.name, (select owner_instance from sys.aq$_queue_table_affinities where table_objno = t.objno) from system.aq$_queue_tables t where t.name = :1 and t.schema = :2 for update skip locked 1 1 1 142206332 4 0 1296258 38465 1422072265 142206332 3 CHOOSE
select t.schema, t.name, t.flags, q.name from system.aq$_queue_tables t, sys.aq$_queue_table_affinities aft, system.aq$_queues q where aft.table_objno = t.objno and aft.owner_instance = :1 and q.table_objno = t.objno and q.usage = 0 and bitand(t.flags, 4+16+32+64+128+256) = 0 for update of t.name, aft.table_objno skip locked 1 1 1 1296258 4 0 1296258 6234762 819408698 142206332 3 CHOOSE
select grantee#,privilege#,nvl(col#,0),max(nvl(option$,0)) from objauth$ where obj#=:1 group by grantee#,privilege#,nvl(col#,0) order by grantee# 1 0 0 389865 2 0 359685 222811 2024534 3288041 3 CHOOSE
select col#, grantee#, privilege#,max(nvl(option$,0)) from objauth$ where obj#=:1 and col# is not null group by privilege#, col#, grantee# order by col#, grantee# 1 0 0 374791 1 0 346728 155006 762998 0 3 CHOOSE
select name,intcol#,segcol#,type#,length,nvl(precision#,0),decode(type#,2,nvl(scale,-127/*MAXSB1MINAL*/),178,scale,179,scale,180,scale,181,scale,182,scale,183,scale,231,scale,0),null$,fixedstorage,nvl(deflength,0),default$,rowid,col#,property, charsetid,charsetform,spare1,spare2 from col$ where obj#=:1 order by intcol# 1 0 0 626841 9 0 330999 118565 2238396 8758210 3 CHOOSE
select 1 from aq$_schedules 1 0 0 935059 4 0 300215 160 1201272 0 3 CHOOSE
select 1 from sys.aq$_queue_table_affinities 1 0 0 935059 4 0 300215 36 1501251 300215 3 CHOOSE
select con#,type#,condlength,intcols,robj#,rcon#,match#,refact,nvl(enabled,0),rowid,cols,nvl(defer,0),mtime,nvl(spare1,0) from cdef$ where obj#=:1 1 0 0 537761 6 0 284193 36829 6421503 4393795 3 CHOOSE
update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,cache=:7,highwater=:8,audit$=:9 where obj#=:1 1 0 0 275064 3 0 275061 12408 1101019 275061 6 CHOOSE
select i.obj#,i.ts#,i.file#,i.block#,i.intcols,i.type#,i.flags, i.property,i.pctfree$,i.initrans,i.maxtrans,i.blevel,i.leafcnt,i.distkey, i.lblkkey,i.dblkkey,i.clufac,i.cols,i.analyzetime,i.samplesize,i.dataobj#, nvl(i.degree,1),nvl(i.instances,1),i.rowcnt,mod(i.pctthres$,256),i.indmethod#,i.trunccnt,nvl(c.unicols,0),nvl(c.deferrable#+c.valid#,0), nvl(i.spare1,i.intcols),i.spare4,spare2,decode(i.pctthres$,null,null, mod(trunc(i.pctthres$/256),256)) from ind$ i, (select enabled, min(cols) unicols, min(to_number(bitand(defer,1))) deferrable#, min(to_number(bitand(defer,4))) valid# from cdef$ where obj#=:1 and enabled > 1 group by enabled) c where i.obj#=c.enabled(+) and i.bo#=:1 1 0 0 473428 8 0 263898 536397 3624643 894903 3 CHOOSE
select con#,obj#,rcon#,enabled,nvl(defer,0) from cdef$ where robj#=:1 1 0 0 445245 6 0 246829 127807 2796453 951065 3 CHOOSE
select intcol#,nvl(pos#,0),col# from ccol$ where con#=:1 1 0 0 4393690 6 0 245310 97287 19116995 5164884 3 CHOOSE
select pos#,intcol#,col#,spare1 from icol$ where obj#=:1 1 0 0 671549 3 0 187091 37354 3977817 1317372 3 CHOOSE
select order#,columns,types from access$ where d_obj#=:1 1 0 0 178297 2 0 178297 111980 1048617 345872 3 CHOOSE
select ts#,file#,block#,nvl(bobj#,0),nvl(tab#,0),intcols,nvl(clucols,0),audit$,flags,pctfree$,pctused$,initrans,maxtrans,rowcnt,blkcnt,empcnt,avgspc,chncnt,avgrln,analyzetime, samplesize,cols,property,nvl(degree,1),nvl(instances,1),avgspc_flb,flbcnt,kernelcols,nvl(trigflag, 0),nvl(spare1,0),nvl(spare2,0),spare4,nvl(spare3,0) from tab$ where obj#=:1 1 0 0 194303 7 1 120622 117555 692870 194296 3 CHOOSE
update sys.job$ set failures=0, this_date=null, flag=:1, last_date=:2, next_date=:3, total=total+(sysdate-nvl(this_date,sysdate)) where job=:4 1 0 0 24544 2 0 117026 130 184285 23032 6 CHOOSE
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, d_owner#, nvl(property,0) from dependency$,obj$ where d_obj#=:1 and p_obj#=obj#(+) order by order# 1 0 0 178297 6 4 109876 96065 1087876 309934 3 CHOOSE
select condition from cdef$ where rowid=:1 1 0 0 153913 6 0 85163 2361 184988 92494 3 CHOOSE
select job from sys.job$ where next_date < sysdate and (field1 = :1 or (field1 = 0 and 'Y' = :2)) order by next_date, job 1 0 0 124880 18 0 70911 3797 95738 36733 3 CHOOSE

Thanks for your help

Tom Kyte
January 19, 2004 - 9:05 am UTC

if it were in an html table, it would be word wrapped and virtually (no, make that totally) unreadable. Courier with real line breaks is the only real font in this world -- for code and such, anyway.


I've no idea what I'm to be looking for here -- none. what looks "bad" to you???


These queries obviously use binds -- you can see them. the $ are part of the OBJECT NAME. You should not be looking at SYS sql right now (over which you have very very very little control). Look to your sql.

Tom, my point is why so much parsing is happening

SR, January 19, 2004 - 11:36 am UTC

For example, first SQL
PARSE_CALLS=3288922,
EXECUTIONS=3294139
Is this normal?


SQL_TEXT LOADED_VERSIONS OPEN_VERSIONS USERS_OPENING EXECUTIONS
LOADS INVALIDATIONS PARSE_CALLS DISK_READS BUFFER_GETS
ROWS_PROCESSED COMMAND_TYPE OPTIMIZER_MODE
insert into sys.aud$( sessionid,entryid,statement,timestamp#,
userid,userhost,terminal,action#,returncode,
obj$creator,obj$name,auth$privileges,auth$grantee,
new$owner,new$name,ses$actions,ses$tid,logoff$pread,
logoff$lwrite,logoff$dead,comment$text,spare1,spare2,priv$used) values(:1,:2,:3,SYSDATE,:4,:5,:6,:7,:8,:9, :10,:11,:12,:13,:14,:15,:16,:17,
:18,:19,:20,:21,:22,:23) 1 1 965 3294139 3 0 3288922
54569 23560335 3294121 2 CHOOSE


update sys.aud$ set action#=:2, returncode=:3, logoff$time=sysdate, logoff$pread=:4, logoff$lread=:5, logoff$lwrite=:6, logoff$dead=:7 where sessionid=:1 and entryid=1 and action#=100 1 0 0 3280637 1 0 3280637 51910 22976944 3280635 6 CHOOSE


select privilege#,level from sysauth$ connect by grantee#=prior privilege# and
privilege#>0 start with (grantee#=:1 or grantee#=1) and privilege#>0 1 0
0 3313241 2 0 3086704 912 67950067 14743279 3 CHOOSE


select text from view$ where rowid=:1 1 0 0 2293306 1 0
2293306 70702 4587224 2293306 3 CHOOSE


select t.name, (select owner_instance from sys.aq$_queue_table_affinities where table_objno = t.objno) from system.aq$_queue_tables t where t.name = :1 and t.schema = :2 for update skip locked 1 1 1 142206332 4 0
1296258 38465 1422072265 142206332 3 CHOOSE

Thanks for your help


Tom Kyte
January 19, 2004 - 1:00 pm UTC

recursive SQL is (has to be) parsed during each execution. so yes, in this case, this is normal.

Perfectly Clear.....

Denise, January 19, 2004 - 2:15 pm UTC

thanks Tom...

I also wondered about the differences of hard parse vs
soft parse.

Your explanation made it perfectly clear and explains
why sometimes when I would kick off a query for the first
time it would take a while to process... then any subsequent
calls with that same query would process almost immediately. Great explanation (as usual!!).

I noticed you mentioned a book you collaborated on
noted in this thread that is dated January -- 2002...
being released March/April(I assume year 2002).

What is the name of the book and where can I get a copy
of it?



Excellent

Sikandar Hayat, January 19, 2004 - 10:54 pm UTC


Nice explanation of soft vs hard......

An interesting thing I noticed while visiting the Amazon site is that all your books have prices ending in .99.....

Soft parsed vs No parse

A reader, February 04, 2004 - 7:13 pm UTC

Tom,

In one of your followup, you mentioned the following :

"
you should parse a statement once per session...

to reduce the soft parse count -- stop parsing it, you control it, just don't do it"

1. My understanding after reading through this discussion is that when you use PL/SQL (not dynamic sql like execute immediate, but static sql), it will automatically take care of parsing only once per session and we do not need to do anything special about it.
2. Using static PL/SQL avoids doing soft parses for each execution, but does a parsing (soft or hard) once during the session. Is it what some books term as 'No parse'?
3. If you use any other language other than PL/SQL,
my understanding is that a soft parse take place once for each execution and cannot be avoided. If it is not correct, pls. explain how to avoid soft parses. When you mention "do not parse inside a loop" which parse do you mean ( soft or hard). eg. if you have a language like Java and if you code something like

parse
loop
execute
fetch
end loop
close

My understanding is that it is parsed once per session (hard/soft) and parsed (soft) for every execution, since any language other than PL/SQL does not do caching automatically. Is that correct, or will this also be parsed (hard/soft) once per session and executed many times (since the parse is outside the loop)? Please explain.

5. Consider

Case A
======
parse
loop
execute
fetch
end loop
close

Case B
======
loop
parse
execute
fetch
end loop
close

Case C
======
loop
parse
execute
fetch
close
end loop

My understanding is that
a. Soft/hard parsed once and executed many times
b. Soft/hard parsed once, then soft parsed once for each execution
c. Hard parsed on every execution, since the cursor is closed within the loop. If not, please clarify.

Tom Kyte
February 04, 2004 - 7:21 pm UTC

1) correct (for static sql, not so for dynamic -- that is up to you again)

2) yes.

3) incorrect. a parse takes place each time the coder parses (prepares) the query. if the coder does this more than once per session -- well, they are the one doing that. Would you compile a subroutine before calling it each time? probably not -- so why compile SQL each time?

a parse is a parse - hard or soft - they both should be avoided.

Java, VB, C, all languages can parse once, execute many. Here is a simple example:

import java.sql.*;

class select
{
    static PreparedStatement pstmt = null;

    public static void process_data( Connection conn, String inputs )
        throws Exception
    {
        int empno;
        String ename;
        String job;
        int mgr;
        String hiredate;
        int sal;
        int comm;
        int deptno;

        // parse (prepare) only on the first call -- later calls reuse pstmt
        if ( pstmt == null )
        {
            pstmt = conn.prepareStatement
                    ("select * "+
                     "from emp " +
                     "where ename like ? " );
        }
        pstmt.setString( 1, inputs );
        ResultSet rset = pstmt.executeQuery();

        while( rset.next() )
        {
            empno    = rset.getInt(1);
            ename    = rset.getString(2);
            job      = rset.getString(3);
            mgr      = rset.getInt(4);
            hiredate = rset.getString(5);
            sal      = rset.getInt(6);
            comm     = rset.getInt(7);
            deptno   = rset.getInt(8);
            System.out.println( ename );
            // process( empno, ename, job, mgr, hiredate, sal, comm, deptno );
        }
        rset.close();
    }
}


No matter how many times you invoke that subroutine -- pstmt will only have been parsed the first time around...

Hard Parse vs Soft Parse

A reader, February 05, 2004 - 10:02 am UTC

Tom,

My understanding with regard to Hard/Soft parse in a nutshell is as follows :

1. Soft parse = Syntax + Semantic check + Lookup in the library cache for finding if a similar statement exists

This is less expensive than hard parse and takes less CPU time and cycle, but involves obtaining a latch on :

a. Row cache/Data dictionary cache(for semantic check)
b. Library cache (to determine if a similar statement already exists)
Both of the above steps are a serialization mechanism that prevents scalability

2. Hard Parse = Syntax + Semantic check + lookup in the library cache and if not found, allocating memory in shared pool and generating execution plans and performing row source optimization.
This is more expensive than soft parse and consumes more CPU time and cycle and also involves obtaining latches on
row cache (for semantic check)
library cache (for determining if a parsed version exists)
shared pool (for allocating shared pool area to store the new execution plan)

All of the above steps are a serialization mechanism that prevents scalability

3. No parse (is the ideal way to design an application and code so it provides scalability and does not consume any CPU time/cycles and does not involve latch contention)

Is my understanding correct? If not, please help me.


Tom Kyte
February 05, 2004 - 7:17 pm UTC

that is pretty much it, yup.

to shorten further:

hard parse bad,
soft parse less bad,
softer soft parse (via session_cached_cursors) less bad still,
no parse good.



no parse?

reader, February 05, 2004 - 11:21 am UTC

From above, how do you achieve step 3? You have to do either a hard or a soft parse or, as Tom says, a softer soft parse (session cached cursors) -- there is no such thing as 'no parse', is there?

Tom Kyte
February 05, 2004 - 7:25 pm UTC

means you parse as LITTLE AS POSSIBLE..


parse once, execute the statement over and over and over.

not like most programmers -- parse/execute/close, parse/execute/close and so on..
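The "parse once, execute many" structure Tom describes can be sketched as follows. This uses Python's sqlite3 module purely as a stand-in for Oracle (sqlite3 reuses its compiled statement when the SQL text is unchanged), and the emp table and rows are invented for the example; the point is the program shape, not the API:

```python
import sqlite3

# Illustrative only: sqlite3 standing in for Oracle. The shape to
# notice: ONE statement text, prepared outside the loop, executed
# many times with different bind values.
conn = sqlite3.connect(":memory:")
conn.execute("create table emp (empno integer, ename text)")
conn.executemany("insert into emp values (?, ?)",
                 [(1, "KING"), (2, "SCOTT"), (3, "ADAMS")])

cur = conn.cursor()
sql = "select ename from emp where empno = ?"   # one text, bound input

names = []
for empno in (1, 2, 3):
    cur.execute(sql, (empno,))      # parse NOT repeated: the unchanged
    names.append(cur.fetchone()[0]) # text lets the compiled statement
                                    # be reused on each iteration

print(names)  # ['KING', 'SCOTT', 'ADAMS']
```

The anti-pattern Tom criticizes would move the statement construction (and in Oracle terms, the parse) inside the loop.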


Small clarification

A reader, February 13, 2004 - 2:53 pm UTC

Tom,

I read the article you had mentioned about cursor sharing (Oracle white paper). I am getting clear about the soft/hard parse issues after reading this discussion. However, I still have the following doubt and would be useful if you clarify it for me.

Say for eg. I have the following piece of application code in PL/SQL :

declare
cursor c is select ... from ... where ...
i number;
begin
for i in 1..10
loop
open c;
fetch c into ....
close c;
end loop;
end;

I am aware that it is a bad programming practice. However, for my understanding, please clarify:
1. In the above case, will a hard or a soft parse take place because the cursor is opened/fetched/closed in a loop, or will PL/SQL cache the cursor and parse only once? (The above article says that for PL/SQL an explicit cursor with static SQL will be cached and not parsed.) This is the thing that confuses me when you say "keep the cursor open and do not close it".

2. If I have the same logic as above in programming languages other than PL/SQL (such as VB, Java, C, etc.), what will happen?

Thanks much

Tom Kyte
February 13, 2004 - 3:10 pm UTC

that is not a bad programming practice necessarily -- if the inputs into the query in the loop change that is. It should be a select into, but same difference.

In plsql -- close doesn't really close :) PLSQL is awesome for performance, scaling and doing tons of work for you.


At most there would be 1 hard parse (if the query was never parsed before) OR 1 soft parse, as PL/SQL goes out of its way to cache cursors transparently for us.


In java, vb, C, etc -- you the programmer must code this better -- with a prepared statement that you prepare once and reuse over and over and over


This is why PLSQL rocks, this is why PLSQL rolls -- it is all about doing SQL right. In all other languages you have to

o convert data types
o cache your own cursors
o do all of the work.

same effect when using dbms_sql package ?

pasko, February 14, 2004 - 2:23 am UTC

Hi Tom,

Thanks for your comments above,

Should we put the usage of the DBMS_SQL package and EXECUTE IMMEDIATE in the same bracket as Java, VB, C, etc., although they are in fact PL/SQL?

Thanks in advance.

Best Regards.



Tom Kyte
February 14, 2004 - 5:04 pm UTC

dynamic sql in plsql means -- you are doing it yourself.

with dbms_sql, you can prepare once, bind/execute over and over and then close.

with execute immediate, you cannot. Hence execute immediate is not appropriate in the case where you execute the same statement more than a handful of times.

dbms_sql vs native

James Blanding, February 16, 2004 - 11:57 am UTC

The Application Developer's Guide for 8i compares the advantages of DBMS_SQL and native Dynamic SQL. Your last comment seems to relate to the following tidbit from Chapter 8 of that guide:

Reuse of SQL Statements

"The PARSE procedure in the DBMS_SQL package parses a SQL statement once. After the initial parsing, the statement can be used multiple times with different sets of bind arguments.

In contrast, native dynamic SQL prepares a SQL statement for execution each time the statement is used. Statement preparation typically involves parsing, optimization, and plan generation. Preparing a statement each time it is used incurs a small performance penalty. However, Oracle's shared cursor mechanism minimizes the cost, and the performance penalty is typically trivial when compared to the performance benefits of native dynamic SQL."


So the reuse of a given SQL statement by the DBMS_SQL package is like "parse/execute once, fetch many times", right? Whereas native dynamic SQL using bind variables is more like "hard parse once, soft parse/fetch many times"? Or am I getting things confused here (seems likely, with the latter anyway...)?

Also, which model does a properly written (i.e. utilizes bind variables) ODBC application follow?

I am trying to convince a particular software vendor that one of their programs needs to be rewritten to use bind variables. It produces obscene quantities of SELECT statements that are identical save for a few literal values.

I gathered a sample of these statements via SQL_TRACE last week and tried to quantify the difference in performance between running the statements as-is and running one statement with bind variables. To do this, I put all of the literal values into a table, and then looped over them with a cursor. For each row from the table, I used EXECUTE IMMEDIATE to select from the actual data table. In one case, I just used a single form of the statement with bind variables and the USING syntax, and in the other case, I worked the values from the cursor into a string variable so that the statement would be different each time.

I'm sure there are some reasons why this is not completely accurate (like some overhead from the surrounding PL/SQL code, especially the string concatenation); I was really only trying to get some ballpark numbers. But after reading your last post, I'm wondering if using DBMS_SQL would have been more appropriate. I guess I could just try it myself heh...

Tom Kyte
February 16, 2004 - 12:43 pm UTC

dbms_sql lets you:

parse
loop
bind
execute
<fetch if query>
end loop
close


NDS makes you

loop
parse
bind
execute
<fetch>
close
end loop


The "best way" - the right way - for more than a handful of iterations is the first loop -- dbms_sql will quickly blow away NDS if you are doing dozens/hundreds of iterations.

The best way in ODBC, Proc, VB, Java, whatever is the first way.
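The difference between the two loop shapes can be sketched with a simple counter standing in for the parse work. Here prepare() is a hypothetical stand-in for dbms_sql.parse or a JDBC prepareStatement, and the query text is invented:

```python
# Counter sketch: how many "parses" each loop shape performs.
parse_count = 0

def prepare(sql):
    # hypothetical stand-in for dbms_sql.parse / prepareStatement
    global parse_count
    parse_count += 1
    return sql  # pretend this is a statement handle

values = range(100)

# dbms_sql style: parse ONCE, then bind/execute inside the loop
parse_count = 0
stmt = prepare("select * from t where x = :1")
for v in values:
    pass  # bind v to stmt and execute it here
once = parse_count

# native-dynamic-SQL style: every iteration parses again
parse_count = 0
for v in values:
    stmt = prepare("select * from t where x = :1")
    # execute and close here
per_iter = parse_count

print(once, per_iter)  # 1 100
```

One parse versus one hundred for the same work; the gap only widens as the iteration count grows.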

Comparing

a) literal sql that is parsed each time
to
b) bound sql that is parsed each time is "fair"

it shows what impact not binding has. To go further, you can show them the difference between b and

c) bound sql that is parsed ONCE per session

to show the further gain they can achieve.


A quick fix for them MIGHT be cursor_sharing=force

search for that on this site, read about it.

difference in parse/execute numbers in v$sqlarea

bogi, April 19, 2004 - 12:02 pm UTC

Hi Tom,

I have noticed in our DB that for some statements the number of parses is bigger than the number of executions in the v$sqlarea view. For example,
select * from dba_jobs shows 14 parses and 2 executes.
What could this mean?

Thanks,

Bogi


Tom Kyte
April 19, 2004 - 12:29 pm UTC

the job queue processes parse all of the sql they might need -- then execute only that which they needed to use. In 9i, the architecture is radically different and you should not see this anymore (assuming you are on 8i)

review

bogi, April 20, 2004 - 3:09 am UTC

I am on 9.2.0.4...:-( and we are using job queue processes a lot.

What I can do is to clean the SGA and monitor what will happen after that. But, if you have some additional hint, thanks in advance.

Thanks a lot!

bogi


Tom Kyte
April 20, 2004 - 8:39 am UTC

have you identified this as the cause of a performance issue that is causing you to not meet your processing goals?

review

A reader, April 20, 2004 - 9:18 am UTC

100% sure not...but in my view this can be one of the elements that causes system performance to decrease. We have a lot of these statements where parses are bigger than executions. What I would like to see is one parse-many executions, or at least number of parses = number of executions.
I would like to find the source of this, try to fix it (if I can find it) and get some gains in system performance. Any gain is a good gain.
Thanks,

bogi

Tom Kyte
April 20, 2004 - 9:26 am UTC

this is one of those things that "you have no control over". short of opening a performance related tar with support.

and I disagree with "any gain is a good gain", for you haven't shown that anything would be gained from this and further, you haven't shown that things that NEED to have a gain (your applications) would materially benefit from this.

I follow the method of "find out what my critical apps that are not performing are waiting on, remove those waits, ignore all other things, they are not relevant".

review

A reader, April 20, 2004 - 11:10 am UTC

Thanks for the advice....what you have suggested makes sense.
I will see what I can do.

bogi

hardparse != recursive calls

Sami, April 26, 2004 - 10:16 am UTC

Dear Tom,

I thought a hard parse will always produce non-zero recursive calls. Here my "select count(distinct owner) from t1;" is a hard parse because it is getting parsed for the very first time after creating table t1, but I don't see any recursive calls.


Connected to:
Oracle8i Enterprise Edition Release 8.1.7.3.0 - Production
With the Partitioning and Parallel Server options
JServer Release 8.1.7.3.0 - Production

SQL> drop table t1;

Table dropped.

SQL> set autotrace on
SQL> create table t1 as select * from dba_objects;

Table created.

SQL> select count(distinct owner) from t1;

COUNT(DISTINCTOWNER)
--------------------
45


Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE
1    0   SORT (GROUP BY)
2    1     TABLE ACCESS (FULL) OF 'T1'




Statistics
----------------------------------------------------------
0  recursive calls
15  db block gets
1566  consistent gets
1561  physical reads
0  redo size
379  bytes sent via SQL*Net to client
431  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
1  sorts (memory)
0  sorts (disk)
1  rows processed

SQL> l
1* select count(distinct owner) from t1
SQL> /

COUNT(DISTINCTOWNER)
--------------------
45


Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE
1    0   SORT (GROUP BY)
2    1     TABLE ACCESS (FULL) OF 'T1'




Statistics
----------------------------------------------------------
0  recursive calls
15  db block gets
1566  consistent gets
1  physical reads
0  redo size
379  bytes sent via SQL*Net to client
431  bytes received via SQL*Net from client
2  SQL*Net roundtrips to/from client
1  sorts (memory)
0  sorts (disk)
1  rows processed

SQL>  

Tom Kyte
April 26, 2004 - 10:17 am UTC

"I thought hard parse will always produce non-zero recursive calls. "

false - if the library cache has everything we need -- no recursive sql would be needed.

Still did not get it

A reader, May 25, 2004 - 4:40 am UTC

Excuse me, but I still cannot see the difference between a shared sql and a cursor.

"Just remember - a cursor is a SESSION thing, its your state, your values, your
result set."

If it is a result set, e.g. I have a query:

SELECT * FROM t WHERE id=10;

If I run this query 2 times, does it mean that the second time it will not scan the table's db blocks but just find the result in memory?

If yes, then what if the data in "t" was changed? What if the result set has billions of rows?



Tom Kyte
May 25, 2004 - 7:30 am UTC

shared sql is like a DLL, a shared library. One copy -- everyone points at it, shares it.

a cursor is like a running program; it points at and uses the DLL, the shared library. Every program doesn't get its own copy of the binary text, they share it.

The program is a cursor.
The DLL/shared library is like shared sql.



Everytime you execute a query, it is done from the beginning. There is data in the cache, but it is retrieved all over again.

Misses in library cache during parse: 1

READER, June 15, 2004 - 12:38 am UTC

Misses in library cache during parse: 1

Why is this at least 1 ?


Tom Kyte
June 15, 2004 - 3:23 pm UTC

it isn't, this page shows that

session caching

A reader, July 12, 2004 - 12:15 pm UTC

I read about session caching on your site...wanna verify my understanding....
1. So if you soft parse - you still have to do the shared pool
check (the hashing etc.) whereas if you have
the cursor in the session cache, you skip that
step as well and straightaway execute the statement, correct?

2. regardless of soft parse or softer soft parse,
you will have to do syntax and semantic checks,
correct? (This is where I am not sure - if you know
the exact cursor is stored in your session cache - you
should be able to skip this step as well and go straight
to execution, no?)

Thanx!

Tom Kyte
July 12, 2004 - 8:28 pm UTC

1) you still have to check that the statement is in the sql area - lots is done (that is why I call it a softer soft parse, not a "no parse")

there are:

hard parses
soft parses
softer soft parse with session cached cursors
and the best -- no parse

2) you will skip some of that -- since you are rejoining something in your session -- not something "global to all". we have to parse fully on a soft parse because we are not sure if the select * from emp we find in the shared pool is a select * from emp you can use -- but if we find it in your cache -- well.... we know it is OK.
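A quick way to watch the softer soft parse at work is to enable the session cursor cache and then check the 'session cursor cache hits' statistic for your own session (a sketch using the standard v$ views):

```sql
alter session set session_cached_cursors = 100;

-- run the same statement a few times, then see how often the
-- session cursor cache satisfied a parse call
select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name = 'session cursor cache hits';
```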

thanx!

A reader, July 12, 2004 - 8:43 pm UTC

"you still have to check that the statement is in the sql area - lots is done
(that is why I call it a softer soft parse, not a "no parse")"

An example?
e.g. is the fact that the cursor is invalid
for some reason a situation that fits this scenario?

and how can you use the "no parse" option - that
seems new/interesting!:) (unless you mean to say remove the
useless sql statements!;))

So far what I understand is:
in session based cursor cache, if you have a
cursor pointing to a valid entry in the sql area
then you can skip the syntax and semantic check..
Thus this should be the first step Oracle does -
a. first look in session cache -
b. if you find a cursor - verify that it is a valid
cursor by checking the sql area?
c. if all is ok - you directly go to execute (step e)...
otherwise
d. parse (hard or soft depending on we all know what)
e. execute


Is the above a correct description?

thanx!


Tom Kyte
July 12, 2004 - 11:55 pm UTC

you have a cursor that might be pointing to a valid entry -- the entry might not be there anymore....


it does not PIN the shared sql, it just points to it and hopes it is still there. if not, hard parse city.

no parse means "coders did it the right way in the first place", meaning the need for session cached cursors is not there in the first place. they parse once and just reuse it over and over.
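In PL/SQL this "parse once, reuse over and over" comes for free with static SQL -- the PL/SQL engine caches the open cursor itself. A minimal sketch (table T and its columns are hypothetical):

```sql
create or replace procedure set_x( p_id in number, p_x in number )
as
begin
    -- static SQL: parsed on the first call in this session; the
    -- cursor is then cached and reused by the PL/SQL engine
    update t set x = p_x where id = p_id;
end;
/

begin
    for i in 1 .. 1000
    loop
        set_x( i, i );   -- 1000 executes, one parse
    end loop;
end;
/
```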

thanx!

A reader, July 13, 2004 - 11:04 am UTC

so the steps I gave earlier are correct then?
oracle does look at session cursor cache and
if the cursor exists and is valid - it goes
directly to execute step - otherwise, it
simply does the hard/softparse steps...

Is this summary correct? Any missing steps/details?

Many thanx!

Tom Kyte
July 13, 2004 - 12:27 pm UTC

it has to go to the shared pool after finding it in the session cursor cache still.

another point..

A reader, July 13, 2004 - 11:32 am UTC

"no parse means "coders did it the right way in the first place", meaning the
need for session cached cursors is not there in the first place. they parse once and just reuse it over and over. "

If you are using connection pools (a very common
architecture) then you have to use the JDBC statement
caching to achieve the above...
see http://asktom.oracle.com/pls/ask/f?p=4950:61:7335380096214811059::::P61_ID:10094488919905



thanx!

A reader, July 13, 2004 - 12:48 pm UTC

so the steps would be:
a. check in session based cursor cache, if you have a
cursor - check in the shared pool to
verify that it is a valid entry. if so go to the
execute step.
otherwise
b. parse (hard or soft depending on we all know what)
c. execute
Please confirm...
Basically I am interested in knowing the session cache
related steps...

thanx!



Tom Kyte
July 13, 2004 - 1:40 pm UTC

that is conceptually how it works -- yes.

thank you!

A reader, July 13, 2004 - 1:42 pm UTC


soft parse/ different plan

Andrew Markiewicz, July 28, 2004 - 5:53 pm UTC

Tom.
While investigating a situation with bind variable data types (number, varchar2) matching or not matching the column it is querying, I came across a parsing situation that is confusing me.

From what I can tell, I have a query that hard parses and executes using a bind variable. Then I change the data type of that bind variable, set the value again, and execute the same query.

tkprof shows:
The first hard parsed
Second soft parsed
However, the second has a different execution plan.

1)Shouldn't the plans be the same?
2)Does changing the data type of the bind variable have some effect when it is soft parsed? Or is the bind variable data type not considered when checking the shared pool?


EX: 126(128)-> drop table amat2;

Table dropped.

Elapsed: 00:00:00.13
126(128)->
126(128)->
126(128)-> create table amat2
2 as (
3 select substr(to_char(object_id),1,9) f1
4 , 'AAAAAAAAAAAAAAAAAAAAAAA' f2
5 from dba_objects
6 where object_id is not null
7 union
8 select substr(to_char(object_id + (select count(*) from dba_objects)),1,9) f1
9 , 'AAAAAAAAAAAAAAAAAAAAAAA' f2
10 from dba_objects
11 where object_id is not null
12 )
13 ;

Table created.

Elapsed: 00:00:02.61
126(128)->
126(128)-> alter table amat2 add (primary key (f1));

Table altered.

Elapsed: 00:00:00.46
126(128)->
126(128)-> analyze table amat2 compute statistics
2 for table
3 for all indexes
4 for all columns
5 ;

Table analyzed.

126(128)-> @desc amat2
Name Null? Type
----------------------------------------- -------- ----------------------------
F1 NOT NULL VARCHAR2(9)
F2 CHAR(23)

Run: (note, &&1 substitution variable is just to tag the query. In this case '2_2')
The first query has a numeric bind variable with a varchar2 column.

4998(5000)-> variable b1 number
4998(5000)-> exec :b1 := 39010

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
4998(5000)-> @qry2 &&1
4998(5000)-> select /*+ choose tag&&1 */
2 amat2.*
3 , '&&1' tag
4 from amat2
5 where f1 like nvl(:b1, '%')
6 /

F1 F2 TAG
--------- ----------------------- ---
39010 AAAAAAAAAAAAAAAAAAAAAAA 2_2

Elapsed: 00:00:00.03

The second query has a varchar2 bind variable with the same varchar2 column.

4998(5000)->
4998(5000)-> variable b1 varchar2(9)
4998(5000)-> exec :b1 := '39010'

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
4998(5000)-> @qry2 &&1
4998(5000)-> select /*+ choose tag&&1 */
2 amat2.*
3 , '&&1' tag
4 from amat2
5 where f1 like nvl(:b1, '%')
6 /

F1 F2 TAG
--------- ----------------------- ---
39010 AAAAAAAAAAAAAAAAAAAAAAA 2_2

Elapsed: 00:00:00.01
4998(5000)->



tkprof output:
select /*+ choose tag2_2 */
amat2.*
, '2_2' tag
from amat2
where f1 like nvl(:b1, '%')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 1 4 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.01 0.01 1 4 0 1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 32

Rows Row Source Operation
------- ---------------------------------------------------
1 CONCATENATION (cr=4 r=1 w=0 time=1021 us)
0 FILTER (cr=0 r=0 w=0 time=2 us)
0 TABLE ACCESS BY INDEX ROWID AMAT2 (cr=0 r=0 w=0 time=0 us)
0 INDEX RANGE SCAN SYS_C0059568 (cr=0 r=0 w=0 time=0 us)(object id 168583)
1 FILTER (cr=4 r=1 w=0 time=799 us)
1 TABLE ACCESS BY INDEX ROWID AMAT2 (cr=4 r=1 w=0 time=795 us)
1 INDEX RANGE SCAN SYS_C0059568 (cr=3 r=0 w=0 time=252 us)(object id 168583)

********************************************************************************

select /*+ choose tag2_2 */
amat2.*
, '2_2' tag
from amat2
where f1 like nvl(:b1, '%')

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 2 0.00 0.00 0 4 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.00 0.00 0 4 0 1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 32

Rows Row Source Operation
------- ---------------------------------------------------
1 CONCATENATION (cr=4 r=0 w=0 time=127 us)
0 FILTER (cr=0 r=0 w=0 time=2 us)
0 TABLE ACCESS FULL AMAT2 (cr=0 r=0 w=0 time=0 us)
1 FILTER (cr=4 r=0 w=0 time=104 us)
1 TABLE ACCESS BY INDEX ROWID AMAT2 (cr=4 r=0 w=0 time=100 us)
1 INDEX RANGE SCAN SYS_C0059568 (cr=3 r=0 w=0 time=69 us)(object id 168583)


Thanks for your time.

Tom Kyte
July 29, 2004 - 7:25 am UTC

It is a nuance of the "child cursor".  the change in plan is attributed to the fact that you have two child cursors - each with a different "bind type".

You can see this with a simple test:

ops$tkyte@ORA9IR2> @test
ops$tkyte@ORA9IR2> alter system flush shared_pool;
 
System altered.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t ( y varchar2(2000) );
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> variable a varchar2(1)
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @trace
ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2> select * from t where y = :a;
 
no rows selected
 
ops$tkyte@ORA9IR2> select sql_text from v$sql where sql_text like 'select * from t where y = :a%';
 
SQL_TEXT
-------------------------------------------------------------------------------
select * from t where y = :a
 
ops$tkyte@ORA9IR2> variable a varchar2(2000)
ops$tkyte@ORA9IR2> select * from t where y = :a;
 
no rows selected
 
ops$tkyte@ORA9IR2> select sql_text from v$sql where sql_text like 'select * from t where y = :a%';
 
SQL_TEXT
-------------------------------------------------------------------------------
select * from t where y = :a
select * from t where y = :a
 
ops$tkyte@ORA9IR2> select BIND_MISMATCH
  2    from v$sql_shared_cursor
  3   where KGLHDPAR in ( select address from v$sql where sql_text like 'select * from t where y = :a%' );
 
B
-
N
Y
 

We have two child cursors -- tkprof would show however:
select *
from
 t where y = :a
                                                                                
....
                                                                                
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 199
                                                                                
                                                                                
select *
from
 t where y = :a
                                                                                
....
Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 199


only one "miss"
 

Outstanding

Andrew Markiewicz, July 29, 2004 - 12:19 pm UTC

I would not have thought of that.
Thanks.

Followup questions.
1)Does each child cursor undergo a hard parse at some point? If they each have a different plan I would think they must.
Are you saying that tkprof finds the parent cursor and so it's a soft parse and doesn't record it as a "miss" in the library cache, but it really has to hard parse again because of the bind variable discrepancy?


2)In your example, both bind variables are varchar2 only differing in size (1 and 2000). I was under the impression that it was only type that factored into type mismatch.
Must the size of the bind variable match as well?
Or is that only a sqlplus feature?

I decided to check this with a plsql example. The different size of a plsql variable does not seem to affect the parsing.

598(600)-> create table amat7 (f1 varchar2(2000));

Table created.

Elapsed: 00:00:01.05
598(600)-> @plsql
598(600)-> declare
2 a varchar2(1) := 'A';
3 b amat7.f1%TYPE := 'A';
4
5 cursor c1 is
6 select /*+ choose */ *
7 from amat7
8 where f1 = a;
9
10 cursor c2 is
11 select /*+ choose */ *
12 from amat7
13 where f1 = b;
14
15 begin
16 open c1;
17 open c2;
18 end;
19 /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.04
598(600)-> @qsql
598(600)-> select address, child_number, sql_text
2 from v$sql
3 where sql_text like 'SELECT /*+ choose */ * from amat7 where f1 =%';

ADDRESS CHILD_NUMBER
---------------- ------------
SQL_TEXT
----------------------------------------------------------------------------------------
000000039A640338 0
SELECT /*+ choose */ * from amat7 where f1 = :b1


Elapsed: 00:00:00.13
598(600)-> @qsqladd
598(600)-> select address, kglhdpar, bind_mismatch
2 from v$sql_shared_cursor
3 where kglhdpar in(
4 select address
5 from v$sql
6 where sql_text like 'SELECT /*+ choose */ * from amat7 where f1 =%'
7 )
8 /

ADDRESS KGLHDPAR B
---------------- ---------------- -
00000003976EEDD8 000000039A640338 N


Thanks again Tom. I always learn something on your site.

Tom Kyte
July 29, 2004 - 1:51 pm UTC

1) i would say it is very much "hard parse like", definitely.  sort of in between hard and soft parse, closer to a hard parse -- but the parent cursor is already there and set up.

2) it is the manner in which the binder binds -- and sqlplus bound a varchar2(1) once and varchar2(2000) the next time.  PLSQL on the other hand bound a varchar2(1) both times (cause that is all you had).  consider:

ops$tkyte@ORA9IR2> @connect /
ops$tkyte@ORA9IR2> set termout off
ops$tkyte@ORA9IR2> set termout on
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter system flush shared_pool;
 
System altered.
 
ops$tkyte@ORA9IR2> drop table amat7;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table amat7 (f1 varchar2(2000));
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select address, child_number, sql_text
  2    from v$sql
  3   where upper(sql_text) like 'SELECT%/*+%CHOOSE%*/% * %FROM%AMAT7%WHERE%F1%=%:B1';
 
no rows selected
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace procedure p( p_len in number )
  2  as
  3      a       varchar2(1)     := 'A';
  4      b       amat7.f1%TYPE   := rpad('A',p_len);
  5
  6      cursor c1 is select /*+ choose */ * from amat7 where f1 = a;
  7      cursor c2 is select /*+ choose */ * from amat7 where f1 = b;
  8  begin
  9      open c1;
 10          close c1;
 11      open c2;
 12          close c2;
 13  end;
 14  /
 
Procedure created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec p(1);
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select address, child_number, sql_text
  2    from v$sql
  3   where upper(sql_text) like 'SELECT%/*+%CHOOSE%*/% * %FROM%AMAT7%WHERE%F1%=%:B1';
 
ADDRESS  CHILD_NUMBER
-------- ------------
SQL_TEXT
-------------------------------------------------------------------------------
58B5C4AC            0
SELECT /*+ choose */ * FROM AMAT7 WHERE F1 = :B1
 
<b>one cursor...</b>

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec p(32);
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select address, child_number, sql_text
  2    from v$sql
  3   where upper(sql_text) like 'SELECT%/*+%CHOOSE%*/% * %FROM%AMAT7%WHERE%F1%=%:B1';
 
ADDRESS  CHILD_NUMBER
-------- ------------
SQL_TEXT
-------------------------------------------------------------------------------
58B5C4AC            0
SELECT /*+ choose */ * FROM AMAT7 WHERE F1 = :B1
 
<b>in fact one cursor up to and including 32 bytes of bind... but then:</b>
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec p(33);
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> select address, child_number, sql_text
  2    from v$sql
  3   where upper(sql_text) like 'SELECT%/*+%CHOOSE%*/% * %FROM%AMAT7%WHERE%F1%=%:B1';
 
ADDRESS  CHILD_NUMBER
-------- ------------
SQL_TEXT
-------------------------------------------------------------------------------
58B5C4AC            0
SELECT /*+ choose */ * FROM AMAT7 WHERE F1 = :B1
 
58B5C4AC            1
SELECT /*+ choose */ * FROM AMAT7 WHERE F1 = :B1
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select address, kglhdpar, bind_mismatch
  2    from v$sql_shared_cursor
  3   where kglhdpar in(
  4     select address
  5  from v$sql
  6   where upper(sql_text) like 'SELECT%/*+%CHOOSE%*/% * %FROM%AMAT7%WHERE%F1%=%:B1'
  7  )
  8  /
 
ADDRESS  KGLHDPAR B
-------- -------- -
58F5B6D4 58B5C4AC N
58A785B4 58B5C4AC Y

<b>two, 32 to 33 is a step, there is another at 80 something, 120 something and so on as I recall</b>

 

Great explanation

Andrew Markiewicz, August 02, 2004 - 9:42 am UTC


Binds putting me in a bind....

Andrew Markiewicz, August 03, 2004 - 6:51 pm UTC

Tom,
I have a related situation to my question last week.
The problem originates in one of our Forms. The plan chosen is clearly not the correct one, and that incorrect plan gets used throughout the day.
I've reproduced the problem in a test case which I hope you can shed some light on.

The basic issue, from what I can tell, is that one index is chosen if the bind variables are varchar2 and a different index is chosen if the bind variables are char. It seems the variable type difference causes different statistics to be used in the calculations, as evidenced by the 10053 trace event.


142(144)-> drop table amat2;

Table dropped.

Elapsed: 00:00:00.63
142(144)-> create table amat2
2 as (
3 select rpad(substr(to_char(trunc(object_id/150)),1,9),9,'0') f1
4 , decode(mod(rownum,1000)
5 ,0, '10'
6 , '17'
7 ) f2
8 from dba_objects
9 where object_id is not null
10 )
11 ;

Table created.

Elapsed: 00:00:01.36
142(144)-> @amat2_qry
142(144)-> insert into amat2 (f1, f2)
2 (
3 select rpad(substr(to_char(trunc(object_id/150)),1,9),9,'0')
4 , decode(mod(rownum,1000)
5 ,0, '10'
6 , '17'
7 )
8 from dba_objects
9 where object_id is not null
10 )
11 /

<REPEAT INSERT 3 MORE TIMES>

Elapsed: 00:00:01.16
142(144)->
142(144)-> create index amat2_1 on amat2(f1);

Index created.

Elapsed: 00:00:01.54
142(144)-> create index amat2_2 on amat2(f2);

Index created.

Elapsed: 00:00:01.91
142(144)->
142(144)-> analyze table amat2 compute statistics
2 for table
3 for all indexes
4 for all columns
5 ;

Table analyzed.

<shared pool was flushed...>

22713(22715)-> variable b1 char(9)
22713(22715)-> variable b2 char(2)
22713(22715)->
22713(22715)-> alter session set events '10053 trace name context forever, level 12';

Session altered.

Elapsed: 00:00:00.01
22713(22715)-> exec :b1 := '850000000'

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
22713(22715)-> exec :b2 := '17'

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
22713(22715)->
22713(22715)-> @qry
22713(22715)-> select /*+ choose */
2 f1
3 , f2
4 from amat2
5 where f1 = :b1
6 and f2 = :b2
7 /
...
850000000 17

750 rows selected.

Elapsed: 00:00:00.37
22713(22715)->
22713(22715)-> alter session set events '10053 trace name context off';

Session altered.

Elapsed: 00:00:00.00
22713(22715)->

<RUN SAME SCRIPT CHANGING ONLY THE BIND VARIABLES>
22542(22544)-> variable b1 varchar2(9)
22542(22544)-> variable b2 varchar2(2)
...


sql plans:
25057(25059)-> @vsql %amat2%

SQL_TEXT
------------------------------------------------------------------------------------------------------------------------------------
ADDRESS CHILD_NUMBER FIRST_LOAD_TIME LAST_LOAD_TIME EXECUTIONS MODULE ACTION
---------------- ------------ ------------------- ------------------- ---------- --------------- ---------------
select /*+ choose */ f1 , f2 from amat2 where f1 = :b1 and f2 = :b2
000000039823BE38 0 2004-08-03/16:21:16 2004-08-03/16:21:16 1 andrew sqlplus login

select /*+ choose */ f1 , f2 from amat2 where f1 = :b1 and f2 = :b2
000000039823BE38 1 2004-08-03/16:21:16 2004-08-03/16:22:47 1 andrew sqlplus login


Elapsed: 00:00:00.03
25057(25059)-> @vsql_plan
Enter value for sql_address: 000000039823BE38
Enter value for child_number_default_0:

Depth Plan Steps
----- ----------------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: CHOOSE (Cost=6 Card= Bytes=)
1 TABLE ACCESS BY INDEX ROWID AMAT2 (Cost=6 Card=882 Bytes=9702)
2 INDEX RANGE SCAN AMAT2_1 (Cost=5 Card=883 Bytes=)

Elapsed: 00:00:00.05
25057(25059)-> @vsql_plan
Enter value for sql_address: 000000039823BE38
Enter value for child_number_default_0: 1

Depth Plan Steps
----- ----------------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: CHOOSE (Cost=2 Card= Bytes=)
1 TABLE ACCESS BY INDEX ROWID AMAT2 (Cost=2 Card=1 Bytes=11)
2 INDEX RANGE SCAN AMAT2_2 (Cost=1 Card=1 Bytes=)

Elapsed: 00:00:00.03


10053 trace outputs:
** char variables **
select /*+ choose */
f1
, f2
from amat2
where f1 = :b1
and f2 = :b2
***************************************
SINGLE TABLE ACCESS PATH
Column: F1 Col#: 1 Table: AMAT2 Alias: AMAT2
NDV: 231 NULLS: 0 DENS: 5.9864e-03
HEIGHT BALANCED HISTOGRAM: BKT: 75 #VAL: 76
Column: F2 Col#: 2 Table: AMAT2 Alias: AMAT2
NDV: 2 NULLS: 0 DENS: 3.3902e-06
FREQUENCY HISTOGRAM: BKT: 147485 #VAL: 2
TABLE: AMAT2 ORIG CDN: 147485 ROUNDED CDN: 1 CMPTD CDN: 0
Access path: tsc Resc: 58 Resp: 58
Access path: index (equal)
Index: AMAT2_1
TABLE: AMAT2
RSC_CPU: 0 RSC_IO: 138
IX_SEL: 0.0000e+00 TB_SEL: 5.9864e-03
Access path: index (equal)
Index: AMAT2_2
TABLE: AMAT2
RSC_CPU: 0 RSC_IO: 2
IX_SEL: 0.0000e+00 TB_SEL: 3.3902e-06


** varchar2 variables **
SINGLE TABLE ACCESS PATH
Column: F1 Col#: 1 Table: AMAT2 Alias: AMAT2
NDV: 231 NULLS: 0 DENS: 5.9864e-03
HEIGHT BALANCED HISTOGRAM: BKT: 75 #VAL: 76
Column: F2 Col#: 2 Table: AMAT2 Alias: AMAT2
NDV: 2 NULLS: 0 DENS: 3.3902e-06
FREQUENCY HISTOGRAM: BKT: 147485 #VAL: 2
TABLE: AMAT2 ORIG CDN: 147485 ROUNDED CDN: 882 CMPTD CDN: 882
Access path: tsc Resc: 58 Resp: 58
Access path: index (equal)
Index: AMAT2_1
TABLE: AMAT2
RSC_CPU: 0 RSC_IO: 138
IX_SEL: 0.0000e+00 TB_SEL: 5.9864e-03
Access path: index (equal)
Index: AMAT2_2
TABLE: AMAT2
RSC_CPU: 0 RSC_IO: 801
IX_SEL: 0.0000e+00 TB_SEL: 9.9902e-01


Why are different statistics used, or perhaps a different process, when the bind variable data type changes?

Forms types a text_item as 'char' in Developer. The documentation states that is valid for (char, varchar2, etc.) but its behavior seems to be just that of a char.
Any thoughts?

Thanks.

Tom Kyte
August 03, 2004 - 8:19 pm UTC

quick questions:

a) are you sure histograms are relevant for your system -- they take long to compute and could be in this case part of the problem.

b) if you use size= on the histograms, what then

c) why index f2? doesn't seem like an appropriate column to be indexed?


(we might not go too in depth on this here, this is more like a question, than a followup, could get involved)

Andrew Markiewicz, August 04, 2004 - 9:45 am UTC

Agreed. This is getting off topic.

a) We decided to use method_opt => 'for all columns size auto' to allow Oracle to automagically figure out the histograms needed. It does place histograms on columns that are skewed and removes any that are not skewed, so it seems to do its job. We calculate stats once a week in batch (sufficient for now) and are testing table monitoring on a smaller production system, which works quite well.

b) Prior to testing the bind variables, I removed the histograms on the columns. It produces the desired plan for sqlplus and Forms, so that is an option. (It also validates that those particular statistics are causing the problem.) But I still have that burning question why the bind variable type produces a different plan with the same stats. I digress again.

c) F2 probably would not be indexed were this a real table. I was trying to create a test case that was similar to the real problem we have, (i.e. similar statistics in density and 1/num_distinct).
Our actual table has a concatenated index with the would be f2 leading and a data distribution of:
F2 CNT PCT
-- ---------- ------
10 136723 7.71
12 71884 4.05
14 79938 4.51
15 68491 3.86
16 105229 5.93
17 731246 41.21
18 103312 5.82
19 41120 2.32
20 74131 4.18
21 115027 6.48
22 141736 7.99
26 97093 5.47
27 7514 .42
28 381 .02
29 478 .03
----------
su 1774303
15 rows selected.

Another option we are considering is reversing the columns on the concatenated index. It was determined it won't affect anything else and will make that index less appealing to range scans, although skip scans might come up then. We'll see.

Thanks for your time.

parsing

mo, August 05, 2004 - 11:09 am UTC

Tom:

I chapter 10 of your book "tuning strategies" p441, you say
that the best to do is:
1. use bind variables
2. avoid soft parse of a query

Q1. When you use bind variables you always avoid hard parsing? correct?
Q2. How can you avoid soft parsing, since soft parsing per your description above has to happen to validate the syntax and semantics of the query? Do you mean you do this once and put it in the shared pool so next time it is called you just run it?

Thanks,

Tom Kyte
August 05, 2004 - 1:07 pm UTC

q1) a query is hard parsed the very first time it is seen, binds or not. with binds however, the same query will be seen over and over and over -- eliminating the need for hard parses.

q2) i mean, if you are going to execute the same statement in your program more than once -- just parse it once and reuse that prepared statement.

again clarification

Raaghid, September 23, 2004 - 9:24 am UTC

Even if I reuse the same statement, soft parse will happen again, isn't it? (only hard parse is avoided)

Tom Kyte
September 24, 2004 - 8:33 am UTC

you can skip the parse in any 3gl too, by programming good.

answer is there

Raaghid, September 23, 2004 - 9:29 am UTC

Sorry. answer is available in this page itself.

You can skip the parse by using PLSQL.... etc etc

One small doubt

Raj, October 14, 2004 - 2:23 pm UTC

As you said if there is DYNAMIC SQL in PLSQL then setting session_cached_cursors will be beneficial. Is this true if I am not using Dynamic SQL but using REF Cursors in the pls/sql code

Tom Kyte
October 14, 2004 - 7:41 pm UTC

yes.

A reader, November 24, 2004 - 2:48 am UTC

I was searching through the Oracle documentation and couldn't find a specific chapter about parsing explaining what you have explained above. If I missed it, can you please provide a link.
Thanks.

A reader, November 24, 2004 - 8:10 am UTC

sorry but couldn't find on that page any info about soft/hard parsing

Tom Kyte
November 24, 2004 - 9:04 am UTC

it describes the parsing process - guess you'd come here for the "in depth details"

see also:

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96524/c16sqlpl.htm#574



Runtime Change in execution plan ?

Bipin Dalal, December 08, 2004 - 8:08 pm UTC

Hi Tom,

As you mentioned in this thread, query plan generation is done during hard parse of a SQL.

(...we can skip the next two steps in the process, that of
optimization and row source generation...)

If I understand this correctly, this will NEVER change for a given parsed SQL. Meaning a SQL with bind variables will be re-used and will have its execution plan carved out inside the shared pool, a plan that never changes across calls to this SQL. Is this understanding correct?

One of our experienced DBAs believes that this changed with 9i, and that Oracle now changes the execution plan of a parsed SQL depending on the actual values bound to the bind variables.

Can you please clarify?

I don't know any way of finding out the exact execution plan that was used after a SQL execution. Is there a way to find that? (then I can carry out the experiment myself)

Thanks in advance.

-Bipin.

Tom Kyte
December 09, 2004 - 1:01 pm UTC

search this site for

bind variable peeking


it affects ONLY the first hard parse as well. basically, in 8i the query:

variable x number;
exec :x := 5;
select * from emp where x = :x;

would make a generic assumption about :x and develop a generic plan. In 9i, using bind variable peeking -- if we were to hard parse that query, it would be optimized as if you typed in:

select * from emp where x = 5;

the optimizer peeks at the bind variable, then generates the single plan that would be used by all.
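That peeking behaviour can be modelled abstractly: on the first (hard) parse the optimizer looks at the actual bind value, and the plan it picks is then reused for every later execution regardless of what is bound. Everything below is a toy model -- the plan names and the `choose_plan` rule are invented for illustration, not optimizer logic.

```python
# Toy model of 9i-style bind variable peeking (illustrative only).
shared_pool = {}  # sql text -> plan chosen at hard-parse time

def choose_plan(peeked_value):
    # Pretend a selective value favours an index, anything else a full scan.
    return "INDEX RANGE SCAN" if peeked_value == 5 else "FULL TABLE SCAN"

def run(sql, bind_value):
    if sql not in shared_pool:            # hard parse: peek at the bind value
        shared_pool[sql] = choose_plan(bind_value)
    return shared_pool[sql]               # every later run reuses that plan

q = "select * from emp where x = :x"
assert run(q, 5) == "INDEX RANGE SCAN"     # first execution peeked at 5
assert run(q, 9999) == "INDEX RANGE SCAN"  # plan reused, value ignored
```

This is exactly why peeking "affects ONLY the first hard parse": the value seen at that moment fixes the plan for everyone sharing the cursor.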

Thanks a bunch..

Bipin Dalal, December 09, 2004 - 5:36 pm UTC

Tom,

I am embarrassed that I did not see the already-discussed question before. It is exactly what I was looking for. You're very generous, you know that.

Thanks again.

-Bipin.

excellent thread

Safrin, January 04, 2005 - 8:45 am UTC

you said
"you can skip the parse in any 3gl too, by programming good. "

Are you referring to PL/SQL, Java & C? Please reply.

Tom Kyte
January 04, 2005 - 8:59 am UTC

sure, they would all be 3gls -- but in plsql, it just happens for you with static sql, part of the language itself

so C and Java would have been the real targets of the comment.

Bind parameters

Bob B, January 05, 2005 - 5:03 pm UTC

Does the sql cache recognize that a bind variable is a bind variable regardless of what it is named? For instance:

SELECT *
FROM DUAL
WHERE 1 = :my_var1

SELECT *
FROM DUAL
WHERE 1 = :my_var2

In my mind, those should cache to the exact same statement.

Tom Kyte
January 05, 2005 - 8:05 pm UTC

but they won't -- because we HASH the string.

they need to match

down to the whitespace
down to the case

select * from dual;
select * from Dual;

are two different queries..

select  *  from   dual;

is yet a third.
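Because the library cache lookup starts by hashing the literal SQL text, any textual difference -- bind variable name, case, whitespace -- produces a different hash and therefore a different parent cursor. A quick sketch (MD5 is used here purely as an example hash; Oracle's internal hash function is its own):

```python
import hashlib

def sql_hash(sql_text):
    # Any hash of the raw text will do for the demonstration.
    return hashlib.md5(sql_text.encode()).hexdigest()

a = sql_hash("SELECT * FROM DUAL WHERE 1 = :my_var1")
b = sql_hash("SELECT * FROM DUAL WHERE 1 = :my_var2")
c = sql_hash("select * from dual")
d = sql_hash("select * from Dual")

assert a != b   # different bind names: different cursors
assert c != d   # different case: different cursors
```

The hash is computed before the statement is ever parsed, so the server cannot "see" that the two bind names are semantically interchangeable.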

variable bind variables

Alberto Dell'Era, January 06, 2005 - 6:59 am UTC

I know a product ("generic" library) that can be configured to issue statements such as

select * from t where x = :ph1;
select * from t where x = :ph2;
...
select * from t where x = :phn;

We named it the "variable bind variable".

It sports the "variable table alias" too:

select * from t t1 where t1.x = ...;
select * from t t2 where t2.x = ...;
...
select * from t tn where tn.x = ...;

I've always wondered why they worked so hard to inflict self-damage - but perhaps there's another RDBMS that needs that "sql massaging".

Soft vs no parse

Alex, March 18, 2005 - 10:17 am UTC

Hi,

I'm having trouble finding an explanation for the difference between a soft parse and no parse. I know I read somewhere something like "soft parses are good, but the best way is no parses....".

I've also seen you say "only the programmer can avoid parsing altogether" or something to that effect.

I just don't see how you can do any better than finding your statement in the shared pool and reusing it (soft parse?). Thanks.

Tom Kyte
March 18, 2005 - 10:58 am UTC

soft parses are better than hard parses but the only good parse is no parse at all.

(how to make something go really fast?  don't do it)


Which is faster and more scalable:

a) soft parse:

1) take sql text string
2) HASH IT
3) latch (a latch is a lock; locks are serialization devices, and serialization
          devices inhibit scalability) the shared pool (cannot modify it whilst
          we are walking it)
4) find N possible hits (there could be 50 "select * from emp" queries in there)
5) upon finding a hit, parse the string syntactically and semantically (we must
   see WHAT emp you are accessing -- might not be ANY of the emps in the shared
   pool) -- remember, we are latching to do this.... and using a bit of cpu...
6) upon finding a hit that matches semantically (we are the same EMP table), 
   make sure the rest of the environment is the same (eg: make sure that the 
   plan you found matches your environment, maybe you have optimizer_goal=rule   
   and the plan you found has first_rows).  Do this over all matches actually, 
   since there could be dozens of plans that match you semantically but miss
   on the optimizer environment
7) upon finding one that matches -- link to it (remember, we've been doing 
   lots of latching here)


or 

b) no parse:

<this space intentionally left blank>




<b>Consider the following example, the only difference between p1 and p2 is that in 9i and before -- p1 will soft parse the query every time, p2 will not (in 10g, execute immediate is optimized a little more, it'll parse the statement ONCE in this example)</b>


ops$tkyte@ORA9IR2> create table t1 ( x int );
Table created.
ops$tkyte@ORA9IR2> create table t2 ( x int );
Table created.
 
ops$tkyte@ORA9IR2> create or replace procedure p1( n_rows in number )
  2  as
  3  begin
  4      for i in 1 .. n_rows
  5      loop
  6          execute immediate 'insert into t1 values ( :x )' using i;
  7      end loop;
  8  end;
  9  /
Procedure created.
 
ops$tkyte@ORA9IR2> create or replace procedure p2( n_rows in number )
  2  as
  3  begin
  4      for i in 1 .. n_rows
  5      loop
  6          insert into t2 values ( i );
  7      end loop;
  8  end;
  9  /
Procedure created.
 
ops$tkyte@ORA9IR2> exec p1(1);
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p2(1);
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runstats_pkg.rs_start
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p1(100000)
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runstats_pkg.rs_middle
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec p2(100000)
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec runstats_pkg.rs_stop(10000)
Run1 ran in 1410 hsecs
Run2 ran in 485 hsecs
run 1 ran in 290.72% of the time

<b>faster by the wall clock...</b>
 
Name                                  Run1        Run2        Diff
STAT...consistent gets             100,375         375    -100,000
STAT...parse count (total)         100,060          60    -100,000
STAT...opened cursors cumulati     100,060          60    -100,000
STAT...session logical reads       203,017     103,011    -100,006
LATCH.cache buffers chains         708,765     509,144    -199,621
LATCH.library cache pin alloca     400,272         272    -400,000
LATCH.shared pool                  500,416     100,408    -400,008
LATCH.library cache pin            800,535     200,527    -600,008
LATCH.library cache              1,100,826     200,815    -900,011
 
Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
3,624,579   1,123,818  -2,500,761    322.52%
 
PL/SQL procedure successfully completed.

<b>and lots of latching going on....</b>

 

Lost Recursive calls on 9i!!

Nilendu Misra, March 21, 2005 - 7:51 pm UTC

I was trying to build a simple, SQL*Plus based test case for the cost of parsing (i.e., recursive calls coming back with each different literal). And I had some surprising findings on 9i - yet to explain them or call it a 'bug' - but here goes the test.

Test Bed -

create table t (a number,
        b number,
        c number,
        d number,
        e number,
        f varchar2(200),
        g date);

declare
j pls_integer;
begin
for i in 1..1000000 loop
   j := dbms_random.random;
   insert into t values (j,j+1, j-1, j+7, j/2,'this is a row which will be tested...', sysdate);
end loop;
commit;
end;
/


create index i1 on t(b);

create index i2 on t(g);

analyze table t compute statistics for table
                                     for all indexes
                                   for all columns;


Now on 10g --

SQL> select * from t where b = 233221323 and g > (sysdate - 1);

no rows selected

Elapsed: 00:00:00.00

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=79)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (TABLE) (Cost=1 Card=
          1 Bytes=79)

   2    1     INDEX (RANGE SCAN) OF 'I1' (INDEX) (Cost=3 Card=1)




Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          1  physical reads
          0  redo size
        535  bytes sent via SQL*Net to client
        373  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

on 9i --

SQL> select * from t where b = 2332213523 and g > (sysdate - 1);

no rows selected

Elapsed: 00:00:00.62

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=79)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=2 Card=1 Bytes=
          79)

   2    1     INDEX (RANGE SCAN) OF 'I1' (NON-UNIQUE) (Cost=1 Card=1)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          2  physical reads
          0  redo size
        511  bytes sent via SQL*Net to client
        368  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed


Both were immediate first execution of a SQL after the index creation!!

Is it a SQL*Plus issue or an Oracle version issue? Hmm.. Setting "sql_trace = true" on both beds and changing the literal --

on 10g --
--------
SQL> alter session set sql_trace = true;

Session altered.

Elapsed: 00:00:00.04
SQL> select * from t where b = 2332523 and g > (sysdate - 1);

no rows selected

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=79)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (TABLE) (Cost=1 Card=
          1 Bytes=79)

   2    1     INDEX (RANGE SCAN) OF 'I1' (INDEX) (Cost=3 Card=1)




Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        535  bytes sent via SQL*Net to client
        373  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

on 9i
-----

SQL> alter session set sql_trace = true;

Session altered.

Elapsed: 00:00:00.04
SQL> select * from t where b = 2332523 and g > (sysdate - 1);

no rows selected

Elapsed: 00:00:00.63

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=79)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=2 Card=1 Bytes=
          79)

   2    1     INDEX (RANGE SCAN) OF 'I1' (NON-UNIQUE) (Cost=1 Card=1)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          2  physical reads
          0  redo size
        511  bytes sent via SQL*Net to client
        368  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

Note -- still shows 'recursive calls = 0"!

Let's look at the trace files -


for 10g --

select *
from
 t where b = 2332523 and g > (sysdate - 1)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          3          0           0

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 147

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID T (cr=3 pr=0 pw=0 time=55 us)
      0   INDEX RANGE SCAN I1 (cr=3 pr=0 pw=0 time=48 us)(object id 394056)


for 9i --

select *
from
 t where b = 2332523 and g > (sysdate - 1)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          2          3          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          2          3          0           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 24

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS BY INDEX ROWID T
      0   INDEX RANGE SCAN I1 (object id 95644)

Plus, on 9i  trace file we actually see a recursive call which is missing on 10g file --

select o.owner#,o.name,o.namespace,o.remoteowner,o.linkname,o.subname,
  o.dataobj#,o.flags
from
 obj$ o where o.obj#=:1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          3          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: SYS   (recursive depth: 1)

Overall for 10g --

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          1          3           3
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          1          3           3

Overall for 9i --

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.00          0          0          0           0
Execute      4      0.00       0.00          0          1          5           3
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.00       0.00          0          4          5           4


So, in spite of 9i actually doing more recursive calls than 10g -- 'autotrace' shows 9i doing fewer (actually none!).

I'm a little lost on why 9i autotrace shows '0 recursive calls' at this moment!! Any idea why?

 

Difference between count(*) & count(1)

Sachin, April 07, 2005 - 2:57 am UTC

Hi Tom,

What is the difference between count(*) and count(1)?

Thanks


Exact Number of Cursors?

Jon, April 09, 2005 - 1:32 am UTC

Tom,

In your previous post, you have stated:

<quote>
ops$tkyte@ORA9IR2> create or replace procedure p( p_len in number )
2 as
3 a varchar2(1) := 'A';
4 b amat7.f1%TYPE := rpad('A',p_len);
5
6 cursor c1 is select /*+ choose */ * from amat7 where f1 = a;
7 cursor c2 is select /*+ choose */ * from amat7 where f1 = b;
8 begin
9 open c1;
10 close c1;
11 open c2;
12 close c2;
13 end;
14 /

Procedure created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec p(1);

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select address, child_number, sql_text
2 from v$sql
2 where upper(sql_text) like 'SELECT%/*+%CHOOSE%*/% * %FROM%AMAT7%WHERE%F1%=%:B1';

ADDRESS CHILD_NUMBER
-------- ------------
SQL_TEXT
-------------------------------------------------------------------------------
58B5C4AC 0
SELECT /*+ choose */ * FROM AMAT7 WHERE F1 = :B1

one cursor...
</quote>

Should it be two cursors and zero child cursor?

I think two cursors make sense because there are two cursors declared in the stored procedure, even though both are using the same static SQL. In other words, it is like two runtime programs and one library (i.e. cursors = runtime programs and library = shared SQL).

If querying v$sqlarea and v$open_cursor, it will return two rows for the above SQL statement.

We will get N open cursors when we declare N cursors, even for static SQL (even without using bind variables). For example,

cursor c1 is select 'x' from dual;
cursor c2 is select 'x' from dual;

Please correct me if my understanding is wrong.

Thanks.

Peace,
Jon

Tom Kyte
April 09, 2005 - 7:39 am UTC

we tend to call the things in the shared pool "cursors" for lack of a better term.

In the plsql program, there are two cursor instances, two things pointing to the single thing in the shared pool, correct.

Ravi, April 16, 2005 - 5:04 am UTC

If a package had been loaded to the shared pool by a user, there is a potential for a

a)Soft parse

b) Softer soft parse

Now, what happens to the loaded SQL/Execution plans if an event that changes them happens, like a potential new index that the plan could use becomes available or an index in the plan is dropped.

1) Does Oracle automatically instantiate the SQL so that it is Hard Parsed all over again?

2) Is the primary reason SQL is stored as text in a PL/SQL program unit so that it can be hard parsed when it is loaded for the first time? I.e., there is no point in storing the execution plans along with the package/function/procedure, as they may change before the next execution for reasons like an index drop/create?



Tom Kyte
April 16, 2005 - 8:53 am UTC

a "package" is not parsed. a package is compiled and may have to be loaded into the shared pool

A call to the package is parsed. A package has no "plans".


As for sql, the dependency mechanism is quite sophisticated. If you change anything that could change the plan, add an index, gather new statistics, .... Oracle will automatically invalidate the plans and hard parse the query the next time around.



Ravi, April 17, 2005 - 10:22 am UTC

Thanks.

Just for the record, although you may have implied it already: a PL/SQL program unit (package/function/procedure), when it is first called for the day on a database, is hard parsed.
1) This means that all the SQL (selects, updates) in the unit gets its syntax/semantic checks and execution plan derivation done and is loaded into the library cache (if the SQL has not already been parsed)?

Thanks
Ravi


Tom Kyte
April 17, 2005 - 10:38 am UTC

the first parse of an anonymous block will be hard parsed.

It is not accurate to say that the first call to a plsql program unit is hard parsed. For, if I (right after a startup, flush shared pool, recompile of P):


begin p( :x ); end;
begin if ( some_condition ) then p( :x ); end if; end;
begin p( :x ); end;


P would have been stored compiled (I'll assume it was valid). The code for P may or may not have been in the cache but that isn't relevant yet.

The first block of code will be hard parsed -- not P, but the BLOCK that invokes P. P is already compiled. If P is not in the shared pool, if the code isn't loaded up, it will be, but it was parsed at compile time -- not runtime.

The second block of code will be hard parsed as well, not P, but the block that never had been parsed before.

The third block of code would likely be SOFT parsed as it was already hard parsed and probably is still in the shared pool...


The SQL within P is either hard or soft or NOT AT ALL parsed -- depending.

a) if the SQL isn't in the shared pool, that SQL will be hard parsed.
b) if the SQL is in the shared pool AND that SQL isn't in the sessions PLSQL cursor cache already, it'll be soft parsed
c) if the SQL is in the shared pool AND that SQL is in the sessions PLSQL cursor cache, it'll just be bound and executed, no parsing at all.
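The a/b/c cases can be modelled with two caches: the instance-wide shared pool and the session's PL/SQL cursor cache, checked in that order. The sketch below is only a conceptual model of the lookup, not Oracle internals.

```python
# Toy model of the a/b/c decision above -- not Oracle internals.
shared_pool = set()     # instance-wide cache of parsed statements
session_cache = set()   # this session's PL/SQL cursor cache

def parse_kind(sql):
    if sql in session_cache:
        return "no parse"       # c) cursor held open: just bind and execute
    if sql in shared_pool:
        session_cache.add(sql)
        return "soft parse"     # b) in shared pool, not yet cached locally
    shared_pool.add(sql)
    session_cache.add(sql)
    return "hard parse"         # a) never seen before anywhere

q = "select * from t"
assert parse_kind(q) == "hard parse"
session_cache.clear()           # simulate a fresh session seeing q next
assert parse_kind(q) == "soft parse"
assert parse_kind(q) == "no parse"
```

Note the asymmetry: the hard/soft distinction is a property of the shared pool (instance-wide), while "no parse" is purely a property of your own session's cursor cache.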


IF something happens to the table on which that SQL depends, then that plan in the shared pool will be invalidated, and the next time the PL/SQL code executes that SQL statement it'll either hard or soft parse it (it would have been invalidated; it will automagically pick up the new plan)

A reader, April 17, 2005 - 12:34 pm UTC

Would your answer
"c) if the SQL is in the shared pool AND that SQL is in the
sessions PLSQL cursor
cache, it'll just be bound and executed, no parsing at all.
"
be the SOFTER SOFT PARSE?

Tom Kyte
April 17, 2005 - 3:28 pm UTC

nope, that is "NO PARSE"

ops$tkyte@ORA10G> create or replace procedure p
  2  as
  3  begin
  4          for x in ( select * from scott.emp )
  5          loop
  6          null;
  7          end loop;
  8  end;
  9  /
 
Procedure created.
 
ops$tkyte@ORA10G>
ops$tkyte@ORA10G> @trace
ops$tkyte@ORA10G> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA10G> exec p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA10G> exec p
 
PL/SQL procedure successfully completed.



SELECT *
FROM
 SCOTT.EMP
                                                                                                                  
                                                                                                                  
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      7      0.00       0.00          0          0          0           0
Fetch        7      0.00       0.00          0         49          0          98
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       15      0.00       0.00          0         49          0          98
                                                                                                                  
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 84     (recursive depth: 1)


That shows 1 soft parse, period.  nothing else.  PLSQL is great that way.

A softer soft parse is still a parse, it still increments the parse count.  PLSQL is caching the cursor for us, in hopes we use it again and again. 

A reader, April 17, 2005 - 3:54 pm UTC

When you said
"PLSQL is caching the cursor for us, in hopes we use it again and again."

Did you actually mean that the first time the cursor was OPENED or executed, the ENTIRE data was cached? If that is true, that is a pretty good saving, I guess.

In that case does PL/SQL do only the fetching, i.e. even if a close-cursor statement is present, it only "pretends" to close, while internally the cursor is alive with all its data?

Tom Kyte
April 17, 2005 - 4:31 pm UTC

No, the result set would not be cached (that would be bad).

The fact that the cursor is opened is cached. If you

loop
parse
bind
execute
close
end loop

over and over -- that would be bad, the only good parse is NO parse.

PLSQL implements the "parse it once in your session" and "use it over and over" concept to the max.

close cursor pretends to close it, open cursor just resets it.


The same cursor can be bound/executed (with different inputs) over and over and over.


This saves the parsing that otherwise would happen.
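The "close only pretends to close" behaviour can be sketched as a cursor object whose close keeps the parsed handle around, so re-opening resets state instead of re-parsing. Purely illustrative; the class and method names are invented:

```python
class CachedCursor:
    """Toy model of PL/SQL's session cursor caching (not Oracle internals)."""
    parse_count = 0

    def __init__(self, sql):
        self.sql = sql
        self.parsed = False
        self.is_open = False

    def open_cursor(self):
        if not self.parsed:               # parse only the first time ever
            CachedCursor.parse_count += 1
            self.parsed = True
        self.is_open = True               # a re-open just resets the cursor

    def close_cursor(self):
        self.is_open = False              # "pretend" close: keep the parse

c = CachedCursor("select * from t")
for _ in range(10):
    c.open_cursor()
    c.close_cursor()
assert CachedCursor.parse_count == 1      # ten open/close cycles, one parse
```

Result sets are not cached, only the parsed cursor; each open/bind/execute still fetches fresh data.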

A reader, April 21, 2005 - 6:16 am UTC

What did you mean by LIKELY, taking snippets from your last two postings:

Your example

begin p( :x ); end;
begin if ( some_condition ) then p( :x ); end if; end;
begin p( :x ); end;

And explanation on the third bit of code

"The third block of code would likely be SOFT parsed as it was already hard
parsed and probably is still in the shared pool..."

But in your next example when you run it SEVEN times

ops$tkyte@ORA10G> exec p

I assume it becomes an Anonymous block

begin
p;
end;
/

But you've proved with your stats that it was NEVER soft parsed; is that because even anonymous blocks CAN avoid a soft parse (and become no parse)?

Is that because the anonymous block has moved onto the users' PL/SQL cache?

And be honest, do you ever get exasperated when a particular post keeps bouncing back to you like this one?

Tom Kyte
April 21, 2005 - 6:27 am UTC

Ok, assume the database shared pool does not have any calls to P in it yet...


time t1: begin p( :x ); end; <<<<=== hard parse
time t2: begin if ( some_condition ) then p( :x ); end if; end; <<<-- hard parse
time t3: some other session altered the table P queries, invalidated P
time t4: begin p( :x ); end; <<<<===== hard parse, it was invalidated
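The timeline folds into the same mental model as before: an invalidation simply evicts the cached plan, so the next call hard parses again and picks up the new plan. A toy sketch (the plan names are invented; this is not Oracle internals):

```python
# Toy model: invalidation forces the next call to hard parse again.
shared_pool = {}   # statement text -> plan

def run(sql):
    if sql not in shared_pool:
        shared_pool[sql] = "plan-v%d" % run.version   # hard parse, new plan
        return "hard parse"
    return "soft parse"
run.version = 1

blk = "begin p( :x ); end;"
assert run(blk) == "hard parse"   # time t1: first time seen
assert run(blk) == "soft parse"   # found in the shared pool
# time t3: DDL on the underlying table invalidates the cached cursor
shared_pool.pop(blk)
run.version = 2
assert run(blk) == "hard parse"   # time t4: re-parsed, new plan picked up
```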

My stats did not prove the block wasn't parsed. I didn't show the parsing of the block, there would have been 7 of them.

Can anonymous blocks avoid a soft parse? sure, just like any SQL can.
Does SQLPlus have that optimization? No, it doesn't.


(as for the last paragraph, as long as the questions asked were not already answered in the posting -- not really....)

awesome reply.

Kapil Goyal, April 21, 2005 - 8:40 am UTC

The way you explained the use of session cached cursors was really, really great...
You are simply great, Tom :).

ops$tkyte@ORA817DEV.US.ORACLE.COM> select a.name, b.value-a.value run1,
c.value-b.value run2,
2 ( (c.value-b.value)-(b.value-a.value)) diff
3 from run_stats a, run_stats b, run_stats c
4 where a.name = b.name
5 and b.name = c.name
6 and a.runid = 'before'
7 and b.runid = 'after 1'
8 and c.runid = 'after 2'
9 and (c.value-a.value) > 0
10 and (c.value-b.value) <> (b.value-a.value)
11 order by abs( (c.value-b.value)-(b.value-a.value))
12 /


A reader, April 23, 2005 - 10:56 am UTC

From one of your last answers

"c) if the SQL is in the shared pool AND that SQL is in the sessions PLSQL cursor
cache, it'll just be bound and executed, no parsing at all.
"
Is it ever possible that the SQL is in the PLSQL cursor cache but NOT in the shared pool?

Tom Kyte
April 23, 2005 - 11:15 am UTC

yes, and that'll result in "misses in library cache during execute".

consider:

drop table t;

create table t ( x int );

create or replace procedure p
as
begin
for x in ( select * from t )
loop
null;
end loop;
end;
/

alter session set sql_trace=true;
/* expect this'll hard parse */
exec p
/* expect this'll NOT parse */
exec p
/* expect this'll invalidate the cursor, but not the procedure */
create index t_idx on t(x);
select status from user_objects where object_name = 'P';
/* expect this'll hard parse and cause an "misses in library cache" */
exec p


tkprof says:

select *
from
t


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        3      0.00       0.00          0          9          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      0.00       0.00          0          9          0           0

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 268 (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS FULL T (cr=3 r=0 w=0 time=33 us)


1 parse call by the application. It was hard (1 miss in library cache during parse). However, it was executed three times, and one of those times (after we created the index) it was not found in the library cache and was hard parsed again....



A reader, April 23, 2005 - 11:05 am UTC

Assume I have a program like

Declare

Cursor c1 is

begin

for i in 1..10 loop --- LOOP 1

open c1;

fetch...

end loop;

for i in 1..3 loop
for C3 in (Select * from blah1,blah2 ) --- LOOP 2 -- SQL does not change, but the cursor for loop is invoked thrice.

..... -- fetching from cursor for loop

end loop;
end loop;


end;


Am I right to say Cursor C1 will be parsed once (going by your latest posts) while the Cursor For loop will be parsed every time it is opened?

Suspect this because a cursor FOR loop might treat its SQL as dynamic?

(I wouldn't be coding to the standards above, it's only an example; what I do have is an anonymous block with a cursor FOR loop which is invoked over and over again, but with the same SQL)


Tom Kyte
April 23, 2005 - 11:17 am UTC

in plsql using static sql, it is most probable that the SQL is parsed ONCE for your session....

turn on sql_trace and tkprof the results, you'll see.

PLSQL automatically caches static sql (and in 10g can even do some dynamic sql caching). So, when the cursor is closed, plsql says "yeah, whatever, I'm hanging onto it thank you very much"....

In the anonymous block, it would be cached for the duration of that block. For a real procedure -- for the session.

Anonymous Blocks

Justin, June 17, 2005 - 5:23 am UTC

Hi,

If I found that for a anonymous Block in v$sql:

EXECUTIONS PARSE_CALLS SQL_TEXT
5325 5325 BEGIN BEZSQLPKG.BEZSQLSAVE( :MYREFCURSOR, :1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15, :16, :17, :18, :19, :20, :21, :22, :23, :24, :25, :26, :27, :28, :29, :30, :31, :32, :33, :34, :35, :36, :37, :38, :39, :40, :41, :42 ); END;

is that parse_calls soft or hard parse or both?

now even more strange:
EXECUTIONS PARSE_CALLS SQL_TEXT
102952 0 BEGIN INIGet(:1, :2, :3, :4, :5, :6, :7); END;

3776 621 BEGIN INIGet(:1, :2, :3, :4, :5, :6, :7); END;

Why does it seem to share SQL for those two but not for the first one?


Differences:
The first one is a procedure in a package only used by one session (an interface).

The other two are used by all sessions.
How can I investigate this further?


Running on 9.2.0.6
session_cached_cursors = 30
cursor_sharing= force

Regards,

Justin



Tom Kyte
June 17, 2005 - 2:55 pm UTC

it is both, it is parse calls.

use v$sql_shared_cursor to see why two cursors are not shared. On that one I'd guess a bind mismatch, but it should show you



Upper limit for session_cached_cursors?

Brandon Allen, June 20, 2005 - 1:34 pm UTC

I currently have session_cached_cursors=100 and am wondering if I should increase it. Here are my current system stats:

SQL_baanprd>select max(value) from v$sesstat where statistic# = (select statistic# from v$statname where name = 'session cursor cache count');

MAX(VALUE)
----------
100

So, I can see the users are maxing out their session cached cursors. But, I found a couple unofficial postings on Metalink mentioning that going above 100 is not recommended, e.g.:

"I've heard that going above approx 100 for the session cursor cache is a waste, since at that point it takes just as long to scan the cursor cache as it would be to locate the cursor in the shared pool."

and

"Oracle recomends to set this number at maximum to 100."
(Note, the above quote was not directly from an Oracle employee, just from someone else possibly quoting what Oracle had told them)

Your thoughts?

Thanks!




Tom Kyte
June 20, 2005 - 2:50 pm UTC

Well, since this is a relatively easy setting to change with probable minimal impact, what I'd be tempted to do in this case would be:

gather metrics, what are my transaction response times now, today. how much cpu am I using to parse. what my application is mostly waiting on. how my latching looks (library and shared pool in particular). memory utilization, both sga and pga.

measure that, get a feel for that (a day or two or more of observing)

then alter it upward and measure again.


to get transaction response time typically involves the APPLICATION having thought that (gasp) someone might be interested in that -- so, it might be hard to get unless the developers thought about it.

Nothing concrete I know, but only because I haven't been in a situation yet to do it myself. If you do, it would be interesting to hear back.

A reader, July 15, 2005 - 2:52 pm UTC

How can I avoid hard parsing? Is it by using bind variables?
In statspack, if memory usage % is high, what does it indicate?

Thanks.

Tom Kyte
July 15, 2005 - 6:15 pm UTC

binding is the way to avoid hard parsing yes, we hard parse when presented with a query that isn't already parsed in the shared pool. So, by binding you can present us with queries that were already parsed and are in the shared pool.

there are tons of ratios in statspack. But if a % is high, and it is a "percent used", it can only mean one thing

You are using most of that which you have allocated to be used.
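The bind-variable point above can be sketched outside Oracle entirely. Here is an illustrative Python snippet (no Oracle client involved; the table and column names are made up): literals make every statement text unique, so each would need its own hard parse, while a bind placeholder keeps one shareable text.

```python
# With literal values, every distinct value produces a distinct statement
# text -- the server sees 100 different queries and must hard parse each.
literal_stmts = {f"select * from emp where empno = {n}" for n in range(100)}

# With a bind placeholder, the text is byte-for-byte identical every time,
# so the already-parsed cursor in the shared pool can be reused.
bind_stmts = {"select * from emp where empno = :empno" for _ in range(100)}

print(len(literal_stmts))  # 100 distinct texts to hard parse
print(len(bind_stmts))     # 1 shareable text
```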

check sql before or after checking its existence in shared pool

adolf_h, October 04, 2005 - 1:01 am UTC

in this thread above you mentioned on January 28, 2002 in reply to "SQL parsing" of "A reader" that oracle first checks that the SQL code is "syntactically and semantically" correct and then checks the shared pool [ which should be the case ideally ]... but in the documentation
http://oraclelon1.oracle.com/docs/cd/B19306_01/server.102/b14220/intro.htm#sthref121
point 5 says
<quote>
The server process receives the statement and checks the shared pool for any shared SQL area that contains a similar SQL statement. If a shared SQL area is found, then the server process checks the user's access privileges to the requested data, and the previously existing shared SQL area is used to process the statement. If not, then a new shared SQL area is allocated for the statement, so it can be parsed and processed.
</quote>
it first checks the shared pool and then the user's access privileges. Shouldn't that be done before checking the shared pool?

Tom Kyte
October 04, 2005 - 1:56 pm UTC

I'm not going to rip through the code (cause I never ever have) but whether it is done right before or right after is sort of inconsequential - because it is always done regardless (soft, hard, whatever - the syntax and then semantic check happen).

so, I'll put it into the bucket of six one way, half dozen the other - but it would be more efficient to parse/semantic check and then look - since we might skip the "look" part altogether if the parse fails.



Alberto Dell'Era, October 04, 2005 - 6:27 pm UTC

> so, I'll put it into the bucket of six one way, half dozen
> the other - but it would be more efficient to parse/semantic
> check and then look - since we might skip the "look" part
> all together if the parse fails.

Don't know whether I'm nitpicking - but if you find a same-text sql statement in the library cache, that means that the stmt is syntactically correct so you can skip the syntax check - and if (as I believe) the strict-sense-parse result of any statement (ie, the transformation of the stmt in a tree of keywords and tokens - "select x from t" has "select" as root, "x" as a leaf, "from" as a branch under "select" and "t" as a leaf of the branch) is stored as well, you can skip the cpu-expensive strict-sense-parse; you have "only" to decorate the tree (ie understand that "t" is my table versus your table versus John's view etc) by accessing the dictionary cache.

I believe that applications close and reopen cursors unnecessarily a "tad" more frequently than they submit illegal statements, so I'm ready to bet a bottle of "Brunello di Montalcino" that it's better to "look than check" and not the opposite ...

Tom Kyte
October 04, 2005 - 8:56 pm UTC

but to do the semantic check, we have to build a parse tree - which IS the syntax check :) In order to build the parse tree - it has to be 'correct'.

I don't believe the parse tree "as it is" is saved.

but, we are back to the 6 one way....... (for hashing and searching(LATCH) is expensive too)

it could be either or, not really that important in the grand scheme of things (and without reading the code - hard to determine which)
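Alberto's "tree of keywords and tokens" can be pictured with a toy sketch in Python. This is purely illustrative - nothing like Oracle's real parser - but it shows why building the tree IS the syntax check, and why name resolution (the semantic check) is a separate, later step against the dictionary.

```python
# Toy parser for the single statement shape "select <col> from <tab>".
def syntax_check(stmt):
    words = stmt.lower().split()
    if len(words) != 4 or words[0] != "select" or words[2] != "from":
        raise SyntaxError(stmt)          # e.g. "select from where 2" dies here
    # the "parse tree": root "select" with a column leaf and a "from" branch
    return {"select": words[1], "from": words[3]}

# Semantic check: "decorate" the tree by resolving names in a dictionary.
def semantic_check(tree, dictionary):
    if tree["from"] not in dictionary:
        raise LookupError("table does not exist: " + tree["from"])
    if tree["select"] not in dictionary[tree["from"]]:
        raise LookupError("column does not exist: " + tree["select"])
    return tree

tree = syntax_check("select x from t")
semantic_check(tree, {"t": {"x"}})       # t exists and has column x -- ok
```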

that means no short-circuit

adolf_h, October 05, 2005 - 7:47 am UTC

it uses & and not &&... it will follow all steps irrespective of the result of the previous step.

Tom Kyte
October 05, 2005 - 7:53 am UTC

no one said that??

Parse PLSQL INTO CLOB

Kato, December 23, 2005 - 8:46 am UTC

Hello:

I am storing PL/SQL code (DML and DDL) in a CLOB and I want to check the syntax of this code with DBMS_SQL.PARSE, but that is a procedure and works with VARCHAR2 statements.
Do you have another option for a function that validates PL/SQL code and receives a CLOB argument?

Thanks.

Kato.




Tom Kyte
December 23, 2005 - 12:42 pm UTC

that would be *dangerous*.

dbms_sql.parse with DDL *executes it*, it doesn't just parse it. You *cannot* do that.

As the Pink Floyd used to sing "careful with that axe Eugene" - or the robot in lost in space "danger, danger - warning will robinson".

there is an api to dbms_sql that lets you parse a plsql table of varchar2's (you can substring your clob on newlines and put a line per array element) - BUT don't do that - as you'll actually end up RUNNING bits of code here and there - very very dangerous.


from the docs:

PARSE Procedure

This procedure parses the given statement in the given cursor. All statements are parsed immediately. In addition, DDL statements are run immediately when parsed.

There are two versions of the PARSE procedure: one uses a VARCHAR2 statement as an argument, and the other uses a VARCHAR2S (table of VARCHAR2) as an argument.
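Purely to illustrate the data shape that the VARCHAR2S overload expects (and emphatically not to suggest calling DBMS_SQL.PARSE on DDL - see the warning above), here is a Python sketch of Tom's "substring your clob on newlines, a line per array element" step; the variable names are made up:

```python
# Hypothetical CLOB content -- any PL/SQL source would do.
clob_text = "create or replace procedure demo as\nbegin\n  null;\nend;"

# The VARCHAR2S overload takes an index-by table of lines; PL/SQL index-by
# tables are conventionally 1-based, so model one as a 1-based dict.
varchar2s = {i + 1: line for i, line in enumerate(clob_text.split("\n"))}

for idx in sorted(varchar2s):
    print(idx, repr(varchar2s[idx]))
```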

Adding index and analyzing table not affecting parses (?)

Robert, January 25, 2006 - 2:42 pm UTC

Tom,

Why does adding an index and analyzing the table not affect parsing?

Please consider....

SQL> create table t2 as select * from dba_objects;

Table created.

SQL> select count(*) from t2;

 COUNT(*)
---------
    29621

1 row selected.

SQL> create index i2 on t2(object_id);

Index created.

SQL> select count(*) from t2;

 COUNT(*)
---------
    29621

1 row selected.

SQL> analyze table t2 estimate statistics;

Table analyzed.

SQL> select count(*) from t2;

 COUNT(*)
---------
    29621

1 row selected.

SQL> analyze table t2 compute statistics for all indexed columns;

Table analyzed.

SQL> select count(*) from t2;

 COUNT(*)
---------
    29621

1 row selected.

SQL> col sql_text for a30
SQL> select sql_text, executions, address from v$sql where sql_text = 'select count(*) from t2 ';

SQL_TEXT                       EXECUTIONS ADDRESS
------------------------------ ---------- --------
select count(*) from t2                 4 E1F4D918

1 row selected.

SQL> select * from v$sql_shared_cursor where KGLHDPAR = 'E1F4D918';

ADDRESS  KGLHDPAR U S O O S L S E B P I S T A B D L T R I I R S M
-------- -------- - - - - - - - - - - - - - - - - - - - - - - - -
DCD51148 E1F4D918 N N N N N N N N N N N N N N N N N N N N N N N N

1 row selected.

SQL> 

I thought I would have selected 4 rows from v$sql_shared_cursor (?).

Would you please explain?

Thank you,

Robert. 

Tom Kyte
January 25, 2006 - 4:50 pm UTC

why - you kept wiping out the old ones, they were invalidated - not of any use to anyone.  We would not keep them there.

You have to do something that does not INVALIDATE the existing stuff, but rather causes us to NOT be able to share the existing stuff.  EG:


ops$tkyte@ORA9IR2> create table t as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter session set optimizer_goal = first_rows;

Session altered.

ops$tkyte@ORA9IR2> select * from t;

no rows selected

ops$tkyte@ORA9IR2> alter session set optimizer_goal = all_rows;

Session altered.

ops$tkyte@ORA9IR2> select * from t;

no rows selected

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select sql_text, executions, address from v$sql
  2  where sql_text = 'select * from t';

SQL_TEXT
-------------------------------------------------------------------------------
EXECUTIONS ADDRESS
---------- --------
select * from t
         1 577D2BA4

select * from t
         1 577D2BA4



we cannot share those select * from t's since one is optimized for first rows and the other all rows. 
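A rough mental model of that demo, sketched in Python (a deliberate simplification - Oracle's real sharing criteria are far richer, as the many columns of v$sql_shared_cursor show): a child cursor is only reusable when the whole compile environment matches, not just the statement text.

```python
# Simplified model: shared pool keyed by (sql_text, optimizer_env).
shared_pool = {}
hard_parses = 0

def parse(sql_text, optimizer_goal):
    global hard_parses
    key = (sql_text, optimizer_goal)   # the text alone is NOT the whole key
    if key not in shared_pool:
        hard_parses += 1               # no match: build a new child cursor
        shared_pool[key] = "plan optimized for " + optimizer_goal
    return shared_pool[key]            # match: a soft parse

parse("select * from t", "first_rows")
parse("select * from t", "all_rows")   # same text, different env: new child
parse("select * from t", "all_rows")   # this one IS shared

print(len(shared_pool), hard_parses)   # 2 child cursors, 2 hard parses
```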

SQL in Trigger and Parsing

Charlie Zhu, February 02, 2006 - 6:09 pm UTC

Hi Tom,

I found that the behavior of static SQL in a trigger is different in 10g from 9i.

In 10g, it natively parses one time and executes many times.

Here is the TkProf report:

== == ===
MERGE INTO ABELISTING.XBNIDENTSSALESCOUNTER D USING (SELECT :B3 TIME_ID, :B2
XBNIDENTIFIER, :B1 ITEMPRICE FROM DUAL) S ON (D.ISBN = S.XBNIDENTIFIER AND
D.MOMENTUM_DATE = S.TIME_ID) WHEN MATCHED THEN UPDATE SET TOTALCOUNT =
D.TOTALCOUNT + 1, TOTALPRICE = TOTALPRICE + S.ITEMPRICE WHEN NOT MATCHED
THEN INSERT (MOMENTUM_DATE, ISBN, TOTALCOUNT, TOTALPRICE) VALUES (S.TIME_ID
,S.XBNIDENTIFIER, 1, S.ITEMPRICE)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      9      0.01       0.05          0         20         22           9
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       10      0.01       0.05          0         20         22           9

and the Trigger Code:

== == ===

CREATE OR REPLACE TRIGGER ABEDBA.XBNIDENTSSALESCOUNTER
before INSERT ON abedba.abepoitems FOR EACH ROW
WHEN (new.xbnidentifier is not null)
DECLARE

ld_po_date date;
errmsg varchar2(200);

BEGIN

ld_po_date := Trunc(Nvl(:new.rowadddt, Sysdate),'DD');

-- UPDATE THE SALES COUNTER

MERGE INTO abelisting.xbnidentssalescounter d
USING (select ld_po_date time_id, :new.xbnidentifier xbnidentifier, :new.itemprice itemprice
from dual) S
ON (d.ISBN = s.xbnidentifier
and d.MOMENTUM_DATE = s.time_id)
WHEN MATCHED THEN
UPDATE
SET totalcount = d.totalcount + 1,
totalprice = totalprice + s.itemprice
WHEN NOT MATCHED THEN
INSERT (MOMENTUM_DATE, ISBN, TOTALCOUNT, TOTALPRICE)
VALUES (S.TIME_ID ,S.xbnidentifier, 1, S.itemprice);

Exception
WHEN OTHERS THEN
errmsg := TO_CHAR(SQLCODE)||': '||SQLERRM;
yi.my_utl.p_err('PO_Items','sales count',errmsg);
END;
/




Tom Kyte
February 03, 2006 - 1:52 pm UTC

No, it is not any different. sql executed in a trigger is cached for the duration of the triggering statement (you had ONE triggering statement that caused the row trigger to fire NINE times)

sql executed in a procedure can be cached for the life of the session.

run this (I just did in 10gr2, 10gr1, same effects)

drop table t;

create table t ( x int );

create or replace procedure p
as
begin
for x in (select * from dual in_proc) loop null; end loop;
end;
/

create trigger t_trigger
before insert on t for each row
begin
for x in (select * from dual in_trigg) loop null; end loop;
p;
end;
/
alter session set sql_trace=true;
insert into t values ( 1 );
insert into t values ( 1 );
insert into t select 1 from all_users;


And the tkprof will show the "in_trigg" sql parsed three times for the three triggering statements whereas the in_proc sql parsed once:


SELECT * FROM DUAL IN_TRIGG

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.00          0          0          0           0
Execute     54      0.00       0.00          0          0          0           0
Fetch       54      0.00       0.00          0        162          0          54
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      111      0.00       0.00          0        162          0          54
********************************************************************************
SELECT * FROM DUAL IN_PROC

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     54      0.00       0.00          0          0          0           0
Fetch       54      0.00       0.00          0        162          0          54
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      109      0.00       0.00          0        162          0          54


Move SQL out of Triggers to Reduce Parsing

Charlie Zhu, February 06, 2006 - 1:48 pm UTC

Your answer is so valuable.
We have got many SQLs to be moved out of Triggers.

Thanks for your clarification, practice makes perfect.
(I only had a rough knowledge after reading your book "Effective Oracle by Design")
Now I'm 100% clear.


Identify the frequently used SQL in Triggers

Charlie Zhu, February 06, 2006 - 3:15 pm UTC

I'm just wondering.

How to find the frequently used SQL in Triggers?

I can get the SQL ordered by Parse Calls in AWR report, but don't know if they're in Triggers.

Thanks a lot.

Tom Kyte
February 07, 2006 - 1:15 am UTC

you might be able to use:


ops$tkyte@ORA10GR2> select sql_text
  2  from v$sql
  3  where program_id in ( select object_id from dba_objects where object_type = 'TRIGGER');


in 10g  

Find high parsed SQL

Charlie Zhu, February 08, 2006 - 7:27 pm UTC

Can I use this SQL to find the high parsed SQL?

select executions, PARSE_CALLS,BUFFER_GETS,ELAPSED_TIME,cpu_time, sql_text
from v$sql a
order by PARSE_CALLS desc;

Your query returns lots of unrelated SQL that is not in triggers, even joining on PROGRAM_ID.

Tom Kyte
February 10, 2006 - 10:42 am UTC

probably more of a problem would be SQL's where parse_calls is near execution calls and both are "large"


I know that query returns some false positives, it was a starting point.




Query System dictionary - The top parsed calls SQL

Charlie Zhu, February 10, 2006 - 5:17 pm UTC

23,016 parse calls per hour, and the same number of executions.

select privilege#, level from sysauth$ connect by grantee#=prior privilege# and privilege#>0 start with grantee#=:1 and privilege#>0

Do you know what problem is related to this system call?

Thanks.

Tom Kyte
February 11, 2006 - 1:48 pm UTC

no problem at all, it is recursive sql, sql we do to execute your sql.

You cannot do anything about it (short of making us do recursive sql less often, which goes back to you reducing the amount of times you yourself parse)

why oracle cache some DDLs?

jianhui, February 28, 2006 - 10:39 pm UTC

Dear Tom,
I was wondering why Oracle caches some DDL such as TRUNCATE and ALTER TABLE, while CREATE TABLE is not cached in the library cache when I query v$sqlarea. Anyway, what's the purpose of caching DDL - who's going to reuse it?
Many thanks,


Tom Kyte
March 01, 2006 - 8:12 am UTC

putting the statements in v$sql lets you see the active sql from another session. They were always "there" in earlier releases, we just could not see them. If they did not appear in v$sql - then someone with an active session running long running DDL cannot be "diagnosed"

The creates are actually in there during the execution of the create statement (run a long running create table as select and you'll see it) but they seem to get immediately flushed as part of the creation at the end (probably when their temporary extents are turned into permanent ones)

Code reuse and maintenability

Jon Waterhouse, March 10, 2006 - 12:39 pm UTC

I have three tables with similar structures, and code in package procedures that, apart from the change in table name and one column name, is identical.

Is there a sensible way (is it sensible) to write a generic procedure to which the table_name (actually two table names: the real table and a temp table) and the column_name would be passed?

The benefit is that you only have one place if you need to make a change. The drawback, as I understand it, is that you would have to use dynamic SQL. This is code that is run in a batch every couple of weeks, and works on complete tables so the extra parsing is not a big deal; but you also lose the semantic check.

Just for context: the three tables are client attributes (educ, occupation, employment_status) which change from time-to-time, and this code merges any necessary changes in as new data is loaded. The tables have a client_ID, start and end dates and the "code" (education level,occupation,status).

I don't like having the "duplicated" code, but the dynamic SQL option is messy. Is there some other option I'm overlooking?

Tom Kyte
March 10, 2006 - 8:25 pm UTC

if you have three tables

and they are so similar

down to the fact that they use the same precise logic to modify themselves,

I would ask you to look closer and ask if you don't really have but one table.


I always prefer explicit code over generic stuff, that is my opinion based on the experience of having generic code go awry (since it is generic and not statically compiled). For every maintenance benefit you can propose, there is an anti-benefit.



Insert probs..!!!

Sikkin, March 13, 2006 - 5:15 am UTC

Hi Tom,

to insert 281 rows it is taking 2,625 executes with 2,344 parses. Don't you think that is bad for an insert?
Please give me solutions and suggestions. The rest of the inserts all take a number of executes equal to the number of rows inserted.

insert into AIM_DEPENDENT_ELEMENT(ELEMENT_OID,REQUIRED_ELEMENT_OID,TYPE_CODE)
values
(:1,:2,:3)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     2344      1.14       1.23          0          0          0           0
Execute   2625    194.71     286.73          0          0      11648         281
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     4969    195.85     287.96          0          0      11648         281

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 225 (CPSUSER)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT MODE: CHOOSE

Tom Kyte
March 13, 2006 - 10:19 am UTC

why do you parse it more than.... once?


curious that you inserted 2,625 times but only 281 of them SUCCEEDED!!! isn't it.


What that is saying is that 2,625-281 times, the insert failed. It caused an error. Program caught error and dealt with it.

The program also parsed it - the program holds all of the control here - you need to look to your program.

Remember to insert a row implies we did all of the work - modified the indexes. To insert a row that fails implies further we had to ROLL BACK that work (expensive) and run queries to give you the error message (you'll find lots of recursive sql against CONS$ I would guess as well!)
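The failed-insert pattern is easy to reproduce outside Oracle; here is an illustrative sketch using Python's sqlite3 (standing in for any client program that catches constraint errors and carries on): every duplicate attempt counts as an execute, raises an error, and only the survivors show up as rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (id integer primary key)")

attempts, successes = 0, 0
for i in [1, 2, 2, 3, 3, 3]:           # the duplicates will violate the PK
    attempts += 1
    try:
        conn.execute("insert into t (id) values (?)", (i,))
        successes += 1
    except sqlite3.IntegrityError:
        pass                            # program catches the error, as above

# 6 executes, but only 3 rows made it -- the other 3 were real work
# (attempted, failed, undone), just like the 2,625 vs 281 in the tkprof.
print(attempts, successes)
```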

Update issue..!

Sikkin, March 19, 2006 - 11:58 pm UTC

Hi tom,

the insert problem was resolved..
now it is this update..
can you please say what could be the reason for the high parse count on the update?
Please have your suggestion.. the rows processed was 1,490, and yet the row source shows 3 rows, right?
Total elapsed time is around 2 min. Do you mean to say that the update is failing 1,163 times?

UpDaTe AIM_STORAGE_DEVICE set BLOCK_SIZE_AMOUNT = :1 , BLOCK_SIZE_UNIT = :2 ,
CAPACITY_AMOUNT = :3 , CAPACITY_UNIT = :4 , CAPTION = :5 , CLASS_CODE = :6 ,
CLASS_INDICATOR = :7 , COMPUTER_SYSTEM_OID = :8 , CONTROLLER = :9 ,
DESCRIPTION = :10 , DEVICE_CLASS_CODE = :11 , DOMAIN_NAME = :12 ,
ELEMENT_STATUS = :13 , EXTERNAL_KEY = :14 , FCA_PORT_OID = :15 ,
FIRMWARE_VERSION = :16 , FREE_CAPACITY = :17 , FREE_EXTENTS = :18 ,
HOSTED_SERVICE_OID = :19 , INSTALL_DATE = :20 , LABEL = :21 ,
LIFE_CYCLE_STATE = :22 , LOCATION = :23 , LUN_BINDING_OID = :24 ,
MANUFACTURER = :25 , MODEL = :26 , MOUNTABLE_INDICATOR = :27 ,
NETWORK_DEVICE_OID = :28 , OID = :29 , PARTITION_NUMBER = :30 ,
PARTITION_TAG_CODE = :31 , PATH = :32 , PRIVATE_EXTENT_SIZE = :33 , PROPS =
:34 , READ_ONLY_INDICATOR = :35 , REPLICA_INDICATOR = :36 ,
RESERVED_INDICATOR = :37 , SERIAL_NUMBER = :38 , SPARE_INDICATOR = :39 ,
SS_PORT_OID = :40 , STORAGE_SYSTEM_OID = :41 , STORAGE_TYPE = :42 ,
STORAGE_TYPES = :43 , SYSTEM_RESERVED_INDICATOR = :44 , TYPE_CODE = :45 ,
UPDATED_DATE = :46 , VOLUME_ID = :47 , VOLUME_MANAGER_OID = :48 ,
VOLUME_TYPE = :49
where
OID = :50


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     1163      4.75      12.21          0          0          0           0
Execute   1490     90.70     121.03          0       2980       7350        1490
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2653     95.45     133.25          0       2980       7350        1490

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: CHOOSE
Parsing user id: 237 (CPSUSER)

Rows     Row Source Operation
-------  ---------------------------------------------------
      3  UPDATE
      3   TABLE ACCESS BY INDEX ROWID AIM_STORAGE_DEVICE
      3    INDEX UNIQUE SCAN PK_AIM_STORAGE_DEVICE (object id 174932)


Rows     Execution Plan
-------  ---------------------------------------------------
      0  UPDATE STATEMENT MODE: CHOOSE
      3   UPDATE OF 'AIM_STORAGE_DEVICE'
      3    INDEX MODE: ANALYZED (UNIQUE SCAN) OF
            'PK_AIM_STORAGE_DEVICE' (UNIQUE)


Tom Kyte
March 20, 2006 - 7:10 am UTC

we parse every time you ask us to parse. If you stop asking us to parse so often, we will. Until you do however, we are obliged to parse it for you in response to you asking us to.

same exact answer as for the insert, if you fixed that - you already know how to fix the update.

Another insert probs..!

sikkin, March 20, 2006 - 12:11 am UTC

Hi tom,

Another insert shows 1,220 executes for zero rows processed. Kindly have your say on this also..

Thanks in Advance.
Always Thanking you.
INSERT into AIM_HEALTH_ELEMENT (ELEMENT_OID, REQUIRED_ELEMENT_OID, TYPE_CODE)
VALUES (:b3, :b2, :b1)

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.03       0.08          0          0          0           0
Execute   1220     85.20     113.62          3          0       4880           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1221     85.23     113.70          3          0       4880           0

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: 237 (CPSUSER) (recursive depth: 1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  INSERT STATEMENT MODE: CHOOSE

Tom Kyte
March 20, 2006 - 7:12 am UTC

same precise exact answer as the first time you asked - only the numbers are different. before I wrote:

...
curious that you inserted 2,625 times but only 281 of them SUCCEEDED!!! isn't
it.

What that is saying is that 2,625-281 times, the insert failed. It caused an
error. Program caught error and dealt with it.
.......


Now, change 2,625 to 1220 and 281 to 0 and the commentary is exactly the same!

Curious to know

Reader, March 22, 2006 - 5:33 am UTC

Tom,

1. In the first case the insert failed 2625 - 281 = 2344 times, as we can see in the parse count (2344).
2. In the update case we happened to see the same number of executes and rows processed. How can it have failed? And the parse count is also high....!!!
3. In the second case the insert failed 1220 - 0 times, so why don't we see a parse value of 1220?

Tom Kyte
March 22, 2006 - 3:51 pm UTC

1) that is a statement, no question

2) if OID was the primary key - it would look like every update updated a single row successfully.

as stated many times before, parsing is something Oracle does when a program tells us to. The program told us to parse it that many times, we do not control that, the program does.

3) because the program didn't ask us to parse it that many times.

SQL in Trigger and Parsing again

Charlie Zhu, March 30, 2006 - 8:20 pm UTC

Statement level and session level got the same parse count - only once.

run this (I just did in 10gr1)

drop table t;

create table t ( x int );

create or replace procedure p
as
begin
for x in (select * from dual in_proc) loop null; end loop;
end;
/

create trigger t_trigger
before insert on t for each row
begin
for x in (select * from dual in_trigg) loop null; end loop;
p;
end;
/

alter session set sql_trace=true;

begin
for i in 1..3 loop
insert into t values ( 1 );
end loop;
end;
/

SELECT *
FROM
DUAL IN_TRIGG


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        3      0.00       0.00          0          9          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      0.00       0.00          0          9          0           3

********************************************************************************

SELECT *
FROM
DUAL IN_PROC


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        3      0.00       0.00          0          9          0           3
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        7      0.00       0.00          0          9          0           3

After I changed the code to the following, it parsed multiple times.

begin
insert into t values ( 1 );
insert into t values ( 1 );
end;
/

SELECT *
FROM
DUAL IN_TRIGG

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          6          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.00       0.00          0          6          0           2


Statement level parse and PL/SQL LOOP

Charlie Zhu, March 30, 2006 - 8:41 pm UTC

I found this by benchmarking a loop insert into a single hash cluster table.

Here're more figures:
(I executed the PL/SQL block 2 times, so there are 2 parses)

alter session set sql_trace=true;

begin
for i in 1..3 loop
insert into t values ( 1 );
end loop;
end;
/

begin
for i in 1..3 loop
insert into t values ( 1 );
end loop;
end;
/

SELECT * FROM DUAL IN_TRIGG

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      6      0.00       0.00          0          0          0           0
Fetch        6      0.00       0.00          0         18          0           6
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       14      0.00       0.00          0         18          0           6

begin
for i in 1..3 loop
insert into t values ( 1 );
end loop;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           2
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          0          0           2



Parsing of statements from stored procedure

Shailesh Saraff, April 17, 2006 - 10:18 am UTC

Hello Tom,

you have said in earlier post....

>>in plsql using static sql, it is most probably that the SQL is parsed ONCE for
your session....

>>turn on sql_trace and tkprof the results, you'll see.

I have one query related to this. We did some trials; please let me know if my interpretations are wrong.

Using Ref cursor in Oracle 10.1.0.2

CREATE OR REPLACE PACKAGE PkgTestRefPatient AS
TYPE PATIENTCUR IS REF CURSOR;
PROCEDURE ProGetPatients (CurPat OUT PATIENTCUR);
END;
/

CREATE OR REPLACE PACKAGE BODY PkgTestRefPatient AS

PROCEDURE ProGetPatients (CurPat OUT PATIENTCUR) IS
BEGIN
OPEN CurPat FOR SELECT *
FROM TblPatient WHERE ROWNUM <15;
END ProGetPatients;

END;
/

DECLARE
CurPat PkgTestRefPatient.PATIENTCUR;
rPat TblPatient%ROWTYPE;
BEGIN
PkgTestRefPatient.ProGetPatients( CurPat);
LOOP
FETCH CurPat INTO rPat;
EXIT WHEN CurPat%NOTFOUND;
dbms_output.put_line(rPat.Patientenid||'.....'||rPat.Name||'.........'||rPat.Vorname);
END LOOP;
CLOSE CurPat;
END;
/

An anonymous block is used for retrieving records from the cursor. When I executed this block 3 times with trace on, I got the following info....

DECLARE
CurPat PkgTestRefPatient.PATIENTCUR;
rPat TblPatient%ROWTYPE;
BEGIN
PkgTestRefPatient.ProGetPatients( CurPat);
LOOP
FETCH CurPat INTO rPat;
EXIT WHEN CurPat%NOTFOUND;
dbms_output.put_line(rPat.Patientenid||'.....'||rPat.Name||'.........'||rPat.Vorname);
END LOOP;
CLOSE CurPat;
END;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.05       0.06          0          0          0           0
Execute      3      0.04       0.05          0          0          0           3
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        6      0.09       0.11          0          0          0           3

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 19
********************************************************************************

SELECT *
FROM
TBLPATIENT WHERE ROWNUM < 15


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.01       0.02          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch       45      0.03       0.03          0         42         12          42
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       51      0.04       0.05          0         42         12          42

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 19 (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
     14  COUNT STOPKEY
     14   TABLE ACCESS FULL TBLPATIENT

********************************************************************************

Why does the query in the stored procedure parse 3 times? I thought the query got parsed at the time the object was compiled and stored in the database.

We are planning to use a RefCursor in our .NET application to reduce the parsing of statements; I thought the query would show
Parse: 0
Execute: 3
Fetch: 45

am I doing something wrong? Please let me know.

Thanks & Regards,

Shailesh

Tom Kyte
April 17, 2006 - 11:06 am UTC

that is a ref cursor, they are not cached like that - since they are POINTERS to cursors and you could call that procedure

begin
PkgTestRefPatient.ProGetPatients( CurPat1 );
PkgTestRefPatient.ProGetPatients( CurPat2 );
PkgTestRefPatient.ProGetPatients( CurPat3 );
end;


and have the "same" cursor open many many times (airquotes used because they are in fact NOT the same).

The queries do not get "parsed into the shared pool" at compile time - sure, they get parsed so plsql can understand them in that sense - but they are not parsed and stored in the shared pool or anything like that.

Shailesh Saraff, April 18, 2006 - 6:11 am UTC

Thanks Tom,

But how can I avoid the parsing - PL/SQL tables or anything else?

Please suggest if you have other ideas.

Thanks & Regards,

Shailesh

Tom Kyte
April 18, 2006 - 9:42 am UTC

using a ref cursor will result in a parse. I would NOT ever recommend removing a ref cursor and replacing it with plsql table types or anything like that. ref cursors are the appropriate mechanism to return a result set from a stored procedure.

softer soft/soft parsers?

Matte, May 04, 2006 - 11:43 pm UTC

create table t1 as select * from user_objects;
/
create or replace procedure p as 
begin
for x in ( select count(*) from t1)
loop
    null;
end loop;
end;
/
SQL>  alter session set events '10046 trace name context forever,level 12';

SQL>  exec p;   --Executed 10 Times
********************************************************************************
SELECT COUNT(*)
FROM
 T1
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute     10      0.01       0.00          0          0          0           0
Fetch       20      0.00       0.00          0         30          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       31      0.01       0.00          0         30          0          10

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 696     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
     10  SORT AGGREGATE
    190   TABLE ACCESS FULL T1
********************************************************************************
Parse = 1, which is nice and what I expected!!

See the second test case:
 alter session set events '10046 trace name context forever,level 12';
select count(*) from t1;  --Executed 10 Times

********************************************************************************

select count(*)
from
 t1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       10      0.00       0.00          0          0          0           0
Execute     10      0.00       0.00          0          0          0           0
Fetch       20      0.01       0.00          0         30          0          10
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       40      0.01       0.00          0         30          0          10

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 696

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE
     19   TABLE ACCESS FULL T1


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      20        0.00          0.00
  SQL*Net message from client                    20        2.03          7.58

********************************************************************************

My concern is why the second test case parsed 10 times even though my sql is EXACTLY the same. Does Oracle not recognize it directly, without any parsing?

---------------
please explain!!!
I read all the answers in this link. You said "softer soft parse" - does this not apply to the 1st test case?
I am sure there is some reason that I missed!!

thanks
Matte Ravinder 

Tom Kyte
May 05, 2006 - 1:49 am UTC

Oracle parses when a PROGRAMMER TELLS IT TO.

Oracle does not parse on its own.

sqlplus is a command line utility. It

a) reads inputs
b) parses statement
c) executes it
d) goto a

You typed in 10 select statements, it did step b 10 times. It did not make the micro-optimization (remember, sqlplus is just a command line utility here) to cache the cursor - it would not make sense in SQLPLUS (you might ask yourself why you would do the same sql ten times in a row!)


since sqlplus said "Oracle parse this", Oracle parsed it. This indicates:

Misses in library cache during parse: 1

it was hard parsed ONCE and soft parsed the remaining 9 times.


If you look at the trace file, you'll find your "exec p;" was parsed 10 times as well!!!
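(If you want to verify this yourself - a sketch, assuming you have access to the v$ views - check your own session's parse statistics before and after running the statements:)

```sql
select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and n.name in ( 'parse count (total)', 'parse count (hard)' );
```

Run it, execute the statement ten times, run it again: the delta in "parse count (total)" shows the ten parse calls sqlplus made on your behalf.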

soft parses

A reader, May 05, 2006 - 11:51 am UTC

Why is the select count(*) from t1 in the procedure p parsed only one time even though the exec p is parsed 10 times?

Tom Kyte
May 05, 2006 - 2:56 pm UTC

because plsql rocks and doesn't close the cursor, it caches it in the hopes that you'll use it again as you did.

the more sql you put into plsql - the better your applications will be :)

Alexander the ok, August 04, 2006 - 2:35 pm UTC

Hi Tom,

I would like to know if there is anyway to calculate an estimate of how much added time there is for a given query the first time it is executed (hard parse).

The reason I ask is because I'm in the process of tuning a query. I started to look at it because it took about 5:45 minutes. That was the first run. Afterwards it runs consistently around 2:30. Same execution plans. The only thing I could think of was additional time for the hard parse, but a difference of 3 minutes just for that? That doesn't sound right does it?

Tom Kyte
August 04, 2006 - 2:57 pm UTC

Nope, not really.

the only other things I can think of

the hard parse was instantaneous and...

o the first run did a lot of physical io that the others did not, hence the first run was slower

o the first run was run when something else massive was just finishing up. It was gone by the time the remaining runs went.

o the first run experienced contention for some resource. The others have been lucky so far and haven't experienced the same contention.



Alexander the ok, August 04, 2006 - 3:20 pm UTC

Can you please elaborate on

"o the first run did a lot of physical io that the others did not, hence the first run was slower"

I think that's the explanation but I don't understand why that would be different. Is that because those blocks got loaded in the buffer cache the first pass and the second they were read from the buffer cache for much better performance? The stats seem to agree with what you said.

First run:
48 recursive calls
3 db block gets
28266601 consistent gets
45953 physical reads <-----
540 redo size
620 bytes sent via SQL*Net to client
1552 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed
Second run:
48 recursive calls
3 db block gets
28266601 consistent gets
14820 physical reads <------
540 redo size
624 bytes sent via SQL*Net to client
1552 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
0 rows processed


Tom Kyte
August 04, 2006 - 3:44 pm UTC

absolutely, that is what caches do :)


but I'm worried that you have a query that is using indexes when it shouldn't be. that is a TON of logical IO to discover..... that there is nothing to report back.

indexes != fast
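(A sketch of one way to hunt for such queries - assuming access to v$sql; these columns exist in 9i and later - ordered by logical IO per execution:)

```sql
select sql_text, executions, buffer_gets,
       round( buffer_gets / executions ) gets_per_exec
  from v$sql
 where executions > 0
 order by gets_per_exec desc;
```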

Too many parsing

Kumar, August 10, 2006 - 12:38 am UTC

just trying to improve performance. I noticed several sql's with high parsing
and execution numbers. I pinned several sql's into the shared pool based on cursor
info. All these queries do use bind variables and cursor_sharing is set to
similar. I do not understand why I am still seeing these sql's parsing. Only
67% of the shared pool is being used, so I am at a loss. Can you please tell me what I
am doing wrong or what's going on with the library cache?

Here is the statspack report
===============================

DB version 9.2.0.6
------------------

Elapsed:30.07 (mins)
--------------------

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:     1,104M      Std Block Size:         8K
           Shared Pool Size:       704M          Log Buffer:   102,432K

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:             31,622.97              1,351.14
              Logical reads:              8,669.17                370.40
              Block changes:                189.85                  8.11
             Physical reads:              1,379.45                 58.94
            Physical writes:                  6.75                  0.29
                 User calls:                557.58                 23.82
                     Parses:                117.62                  5.03
                Hard parses:                  0.09                  0.00
                      Sorts:                  7.50                  0.32
                     Logons:                  0.50                  0.02
                   Executes:                360.44                 15.40
               Transactions:                 23.40

  % Blocks changed per Read:    2.19    Recursive Call %:    48.27
 Rollback per transaction %:    5.79       Rows per Sort:  7408.63

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
               Buffer Hit %:   84.12    In-memory Sort %:   99.99
              Library Hit %:   99.79        Soft Parse %:   99.93
         Execute to Parse %:   67.37         Latch Hit %:   99.91
Parse CPU to Parse Elapsd %:   78.75     % Non-Parse CPU:   98.63

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   62.27   62.76
    % SQL with executions>1:   92.65   92.08
  % Memory for SQL w/exec>1:   67.91   67.43
======================================================

SQL ordered by Parse Calls
-> End Parse Calls Threshold: 1000

                                    % Total
 Parse Calls  Executions    Parses  Hash Value
------------ ------------ -------- ----------
37,384 37,384 17.62 1415971694
Module: java@chigfxspwl02 (TNS V1-V3)
begin :1 := JAVA_XA.xa_start_new(:2,:3,:4,:5,:6); end;

37,383 37,381 17.62 217985621
Module: java@chigfxspwl02 (TNS V1-V3)
begin :1 := JAVA_XA.xa_end_new(:2,:3,:4,:5); end;

37,355 37,355 17.61 2970935134
Module: java@chigfxspwl01 (TNS V1-V3)
begin :1 := JAVA_XA.xa_commit_new (:2,:3,:4,:5); end;

37,319 52,895 17.59 2475783927
Module: java@chigfxspwl02 (TNS V1-V3)
select count(*) from DUAL

9,293 9,293 4.38 3736651632
Module: JDBC-1.0-Client
SELECT * FROM dual WHERE :"SYS_B_0"=:"SYS_B_1"

8,457 8,457 3.99 589337545
Module: JDBC Thin Client
SET TRANSACTION READ WRITE

3,607 3,607 1.70 1271383454
Module: JDBC Thin Client
SELECT CLIENT_MNEMONIC as CLIENT_ID, COUNTERPARTY_MNEMONIC as E
XECBANK_MNEMONIC, NOTIFICATION_ID, CLIENT_GCI_NUMBER, COUNTERP
ARTY_GCI_NUMBER,CLIENT_GROUP_ID, CLIENT_TRADE_ID, PRODUCT_TYPE,
TRADE_DATE, SETTLEMENT_DATE, BUY_SELL_TYPE, MARKET_SYMBOL, RATE
, DEALT_CURRENCY,QUANTITY, CONTRA_CURRENCY, CONTRA_QUANTITY,QUA

2,662 2,662 1.25 2508997793
Module: JDBC Thin Client
select value from fxpb_system_V2 where attribute=:"SYS_B_0"

2,661 2,661 1.25 3512471037
Module: JDBC Thin Client
SELECT VALUE FROM FXPB_SYSTEM_V2 WHERE ATTRIBUTE = :"SYS_B_0"

2,660 2,660 1.25 1922758896
Module: JDBC Thin Client
select count(id) as DealCount from fxpb_newdeal_v2

2,408 2,408 1.13 1448472766
Module: JDBC Thin Client
SELECT CLIENT_MNEMONIC as CLIENT_ID, COUNTERPARTY_MNEMONIC as E
XECBANK_MNEMONIC, NOTIFICATION_ID, CLIENT_GCI_NUMBER, COUNTERP
ARTY_GCI_NUMBER,CLIENT_GROUP_ID, CLIENT_TRADE_ID, PRODUCT_TYPE,
TRADE_DATE, SETTLEMENT_DATE, BUY_SELL_TYPE, MARKET_SYMBOL, RATE
, DEALT_CURRENCY,QUANTITY, CONTRA_CURRENCY, CONTRA_QUANTITY,QUA

1,804 1,804 0.85 2038001853
Module: JDBC Thin Client
SELECT CLIENT_MNEMONIC as CLIENT_ID, COUNTERPARTY_MNEMONIC as E
XECBANK_MNEMONIC, NOTIFICATION_ID, CLIENT_GCI_NUMBER, COUNTERP
ARTY_GCI_NUMBER,CLIENT_GROUP_ID, CLIENT_TRADE_ID, PRODUCT_TYPE,
TRADE_DATE, SETTLEMENT_DATE, BUY_SELL_TYPE, MARKET_SYMBOL, RATE
, DEALT_CURRENCY,QUANTITY, CONTRA_CURRENCY, CONTRA_QUANTITY,QUA

1,010 1,010 0.48 1526602886
Module: JDBC-1.0-Client
Select FXO2RefNum ,GTIFRefNum From fxo2deal Where GTIFRefNum is
NOT NULL and (T2RefNum is NULL or T2Refnum =:"SYS_B_0") and DATA
PENDINGSTATUS is NULL and routingfloorcode=:"SYS_B_1"

988 988 0.47 1685098223
Module: JDBC-1.0-Client
Select FXO2RefNum as FXO2RefNum, CoverType, to_char(AVTDealDate,
:"SYS_B_00") as AVTDealDate, to_char(AVTDealDate,:"SYS_B_01")as
AVTDealTime, DealtCCY, ContraCCY, QuoteBasis, substr(to_char(Int
erbankSpotRate),:"SYS_B_02",:"SYS_B_03") as InterbankSpotRate, t
o_char(ValueDate,:"SYS_B_04") as ValueDate, substr(to_char(Dealt

898 898 0.42 1515031132
Module: JDBC-1.0-Client
begin if (user != 'SYSTEM' and user != 'SYS') then INSERT INT
O SYSTEM.CONN_AUDIT values (user,sys_context('USERENV', 'OS_USER
'),sysdate,sys_context('USERENV','HOST'), (SELECT program FROM
v$session WHERE sys_context('USERENV','SESSIONID') = AUDSID));
end if; end;

898 1,796 0.42 2403185938
SELECT USER FROM SYS.DUAL

894 894 0.42 260204310
Module: JDBC-1.0-Client
INSERT INTO SYSTEM.CONN_AUDIT VALUES (USER,SYS_CONTEXT('USERENV'
, 'OS_USER'),SYSDATE,SYS_CONTEXT('USERENV','HOST'), (SELECT PROG
RAM FROM V$SESSION WHERE SYS_CONTEXT('USERENV','SESSIONID') = AU
DSID))

894 894 0.42 1356713530
select privilege#,level from sysauth$ connect by grantee#=prior
privilege# and privilege#>0 start with (grantee#=:1 or grantee#=
1) and privilege#>0

876 876 0.41 2860561840
Module: JDBC Thin Client
ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY HH24:MI:SS'

876 876 0.41 3455468337
Module: JDBC-1.0-Client
SELECT VALUE FROM NLS_INSTANCE_PARAMETERS WHERE PARAMETER =:"SYS
_B_0"

876 876 0.41 3570694802
Module: JDBC-1.0-Client
ALTER SESSION SET NLS_LANGUAGE = 'AMERICAN'

709 709 0.33 201246338
Module: JDBC-1.0-Client
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE

654 654 0.31 2234447347
Module: JDBC Thin Client
call FXPBO_INSERT_CASHNOE (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:1
2,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:2
8,:29,:30,:31,:32,:33,:34,:35,:36,:37,:38,:39,:40,:41,:42,:43,:4
4,:45)

606 606 0.29 2931156591
Module: java@chigfxspwl01 (TNS V1-V3)
insert into TRADEEXATTR (PTE_TRADE_ID, VALUE, TYPEID, TRADEEXATT
R_ID) values (:1, :2, :3, :4)

602 602 0.28 46427555
Module: JDBC Thin Client
SELECT CLIENT_MNEMONIC as CLIENT_ID, COUNTERPARTY_MNEMONIC as E
XECBANK_MNEMONIC, NOTIFICATION_ID, CLIENT_GCI_NUMBER, COUNTERP
ARTY_GCI_NUMBER,CLIENT_GROUP_ID, CLIENT_TRADE_ID, PRODUCT_TYPE,
TRADE_DATE, SETTLEMENT_DATE, BUY_SELL_TYPE, MARKET_SYMBOL, RATE
, DEALT_CURRENCY,QUANTITY, CONTRA_CURRENCY, CONTRA_QUANTITY,QUA

537 620 0.25 249927579
Module: java@chigfxspwl02 (TNS V1-V3)
select * from MODENARECEIPT where ModenaReceipt.fxo2RefNum = :"S
YS_B_0" ORDER BY FXO2REFNUM

537 620 0.25 1720657831
Module: java@chigfxspwl02 (TNS V1-V3)
select * from MODENAPAYMENT where ModenaPayment.fxo2RefNum = :"S


Thanks,
Kumar



Tom Kyte
August 10, 2006 - 9:08 am UTC

this is like deja-vu, I just finished following up to this:

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4032595293314#69782213000036 <code>

The answer for you is the same.

There is exactly one thing that will reduce the number of times a SQL statement is parsed - only ONE thing can reduce this.

That one thing:

Your developers.


Nothing else, no one else. No magic wands, no incantations, no auto-magical sayings.

Just the developers.


Oracle only parses SQL when the developers demand it of us.

Have the developers stop it. I see they are using cursor sharing similar there - you are likely subject to SQL Injection (bad security bug), you are definitely subject to reduced performance and scalability due to this bug IN THEIR CODE.

makes me so sad to see it done over and over and over again. ugh.
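(To make the developers' fix concrete - a sketch using dynamic SQL; the table name is taken from the statspack output above and the attribute value is hypothetical, purely for illustration:)

```sql
declare
    l_value  varchar2(4000);
    l_attr   varchar2(30) := 'SOME_ATTRIBUTE';  -- hypothetical value
begin
    -- BAD: gluing the literal in means a new statement (hard parse)
    -- for every distinct value - and it is open to SQL injection
    execute immediate
        'select value from fxpb_system_v2 where attribute = ''' ||
        l_attr || '''' into l_value;

    -- GOOD: one shareable statement, soft parsed on reuse, not injectable
    execute immediate
        'select value from fxpb_system_v2 where attribute = :a'
        into l_value using l_attr;
end;
/
```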

statements in PLSQL

Federico, August 13, 2006 - 5:40 am UTC

Hi Tom,
I quote you from this thread

"
that is one of the massive benefits of stored plsql -- the statements are parsed
once per SESSION, not once per execution.

during compilation, the statements are not parsed into the shared pool and
optimized or anything, you need to actually execute the statement by running the
procedure to have that happen.
"
I am in a 10g r2 db.

1)I have a package with a procedure that the first time it is executed takes about 40 seconds (it opens a ref cursor
on a "long" statement). The following calls are almost immediate.
After that first execution, if I open another sqlplus (i.e another session), the calls are fast the same.
Even if I disconnect and reconnect, the calls are always fast: it seems that the first execution in the first
session made the hard parse into the shared pool (in fact "session pga memory max" is high for the first session, lower for the others) and the next ones made only the soft parse (soft parse because PLSQL does not cache ref
cursors).
Is it correct?

2) What surprised me is that when someone on another machine (a colleague from his pc, or the application server) makes his first call, it is as long as my first one. I thought that after my first execution the statement would
have been parsed into the shared pool and be available to others, but it seems that PLSQL has a sort of private
shared pool for each client. Or does SESSION_CACHED_CURSORS > 0 make the parsed statement unavailable to other
sessions/clients? But that would not explain why the calls from the other sessions on my pc are fast.

Can you clarify me?

Thanks in advance.

Tom Kyte
August 13, 2006 - 9:30 am UTC

trace it - just trace it, tkprof the resulting trace. It can be (will be) very revealing.

sounds like the query either

a) takes inordinately long to optimize, the hard parse does that. You'd see a high parse time
b) does lots of physical io the first time that it is executed and not so the subsequent times. You'd see wait events to that effect in the tkprof report (assuming you traced waits which you should)



confused with one metalink note

A reader, October 10, 2006 - 11:38 am UTC

Hi Tom,

thanks for your great support. I read Metalink Note 62143.1:
=======================start========================================
To get placed in the session cache the same statement has to be parsed 3 times within the same cursor - a pointer to the shared cursor is then added to your session cache. If all session cache cursors are in use then the least recently used entry is discarded.
If you do not have this parameter set already then it is advisable to set it to a starting value of about 50. The statistics section of the bstat/estat report includes a value for 'session cursor cache hits' which shows if the cursor cache is giving any benefit. The size of the cursor cache can then be increased or decreased as necessary. SESSION_CACHED_CURSORS are particularly useful with Oracle Forms applications when forms are frequently opened and closed.
======================end===============================

i am confused about 4 things , please bear with me

1) "To get placed in the session cache the same statement has to be parsed 3 times within the same cursor", "particularly useful with Oracle Forms applications when forms are frequently opened and closed". What I understand is that Oracle Forms will close the cursor once the block is exited; therefore, how could I have the same statement parsed 3 times before the block is exited?

2) "'session cursor cache hits'" - okay, I have this value; it says about 90 per second. Is that good or bad? Is that an indication to increase session_cached_cursors? Up to which value should I increase it?

3) You mentioned that PLSQL will cache your cursors automatically. Here this parameter also caches the cursor, but in plsql it will parse once, while with this parameter it will just do a softer soft parse - could you explain why?

4) Can one cursor include more than one sql statement? I believe yes - will you please correct me, and if possible show us an example of that?

thank you very much

Tom Kyte
October 10, 2006 - 8:06 pm UTC

1) your session parses the same statement three times, that is what triggers this.

2) un-answerable given information provided. Sort of like saying "I hit three red lights in the last 5 minutes, should I drive faster".

3) plsql caches cursors - avoiding the parse call ALTOGETHER. It is not the softer soft parse you get from session_cached_cursors, it is the "absence of a parse"

4) no, a cursor is in effect an open sql statement. once you close it, it goes away
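(To see whether the session cursor cache is actually paying off - a sketch querying the statistic the metalink note refers to, run from the session of interest:)

```sql
select n.name, s.value
  from v$statname n, v$mystat s
 where n.statistic# = s.statistic#
   and n.name in ( 'session cursor cache hits',
                   'session cursor cache count' );
```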

What is the ideal value for SGA components in SP report ?

A reader, October 10, 2006 - 9:12 pm UTC

Hey TK,

That information is very useful and interesting but we are using oracle 92070 (RAC - 2 instances) so could you Pls. share the ideal/best value for each parameter in SP report, and how to achieve those values... !!

You can briefly give the answer...
but request you to give explaination as and when required.

Regards,
Star Nirav

Tom Kyte
October 11, 2006 - 8:01 am UTC

HUH????

sorry, this makes no sense to me.

Thats why... !!! But

Star Nirav, October 11, 2006 - 2:47 pm UTC

Hey TK...

But it makes sense for us.... Pls. advice and give the ideal values of SP parameters.

Regards,

Tom Kyte
October 11, 2006 - 4:09 pm UTC

what the heck is SP, your question makes no sense to me at all "SN"

what is pls.??

what is SP parameters???

here is an answer for you:

the sky is blue sometimes when the green grass is wet with dew.

That makes as much sense to me as the question, which I just don't understand. It is probably the answer to some question out there to boot.

It really doesn't matter if YOU understand YOUR question, YOU are not trying to answer it.

Soft parse vs Hard parse

A reader, October 12, 2006 - 8:25 am UTC

Hi Tom,

While going through an article at the following link

</code> http://www.dba-oracle.com/t_hard_vs_soft_parse_parsing.htm <code>

I found that the difference between soft and hard parse is only:

> The loading of the SQL into the shared pool.

Syntax and semantics checks are performed in both cases.

But your article says soft parse does not do syntax and semantics checks.

Have I misunderstood the concept?

Please help.

Tom Kyte
October 12, 2006 - 8:35 am UTC

sigh, and just where does my article say that.

It (to me) clearly says quite the opposite.


I described the syntax and semantic checks and then wrote:

...
The next step in the parse operation is to see if the statement we are currently
parsing has already in fact been processed by some other session. If it has -
we may be in luck here, we can skip the next two steps in the process, that of
optimization and row source generation. If we can skip these next two steps in
the process, we have done what is known as a Soft Parse
....................


see that last sentence, it says "the next two steps", not the "prior two steps"



A reader

A reader, October 12, 2006 - 8:42 am UTC

Nice explanation and sorry for misunderstanding your facts.

One more query.

Let's say we have two sessions running, say A and B.

They are connected to schema TESTUSER.

The tables in the schema have old statistics, hence the queries are running in a suboptimal way.

1. Session A fires a query (with old stats).

2. Then Session B collects the latest statistics.

3. Then Session A again fires the query again.

Will Session A's query in step no. 3 use the latest stats to generate an optimal execution plan?




Tom Kyte
October 12, 2006 - 11:33 am UTC

yes (as long as the NO_INVALIDATE parameter of dbms_stats is left at its default)

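(The behavior above hinges on the no_invalidate parameter of dbms_stats - a sketch of the two settings; the table name is illustrative:)

```sql
-- default (auto_invalidate in 10g): dependent cursors are invalidated,
-- so the next execution hard parses and picks up the new statistics
begin
    dbms_stats.gather_table_stats( ownname => user, tabname => 'T1' );
end;
/

-- no_invalidate => true: existing cursors are left valid, so other
-- sessions keep their old plans until those cursors age out
begin
    dbms_stats.gather_table_stats( ownname => user, tabname => 'T1',
                                   no_invalidate => true );
end;
/
```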

SP means statspack sir...!!!

Star Nirav, October 17, 2006 - 10:12 am UTC

Hi TK,

here SP means StatsPack reports. We just would like to know, in RAC with 2 nodes, what the ideal values would be for an OLTP DB.

Thanks...

Tom Kyte
October 17, 2006 - 10:31 am UTC

"SN" (where did this "TK" come from anyway)

statspack has no "parameters", it generates a report.

The report can be interpreted by someone that knows what the numbers mean and the system to which they are being applied.

There are no universal best numbers, well other than:

42

42 is always good (plug it into google search and you'll see!)

SESSION_CACHED_CURSORS

Anurag, October 19, 2006 - 11:10 am UTC

I am looking for a "magic number" I guess... so apologies in advance :)

We are running into ORA-4031 after upgrading to 9iR2 from 8.1.7. From the statspack report, we see a low soft parse ratio of 89%, and are working on bind variables. But we did not get frequent ORA-4031 errors in 8i.

My question is, how do we determine if our existing value of 1024 for SESSION_CACHED_CURSORS is appropriate? Here is the relevant statspack output:

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:     1,280M      Std Block Size:         8K
           Shared Pool Size:       768M          Log Buffer:       512K

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            199,101.12              7,895.96
              Logical reads:             47,794.36              1,895.43
              Block changes:              1,630.98                 64.68
             Physical reads:              4,281.08                169.78
            Physical writes:                591.11                 23.44
                 User calls:                307.26                 12.19
                     Parses:                147.73                  5.86
                Hard parses:                 15.89                  0.63
                      Sorts:                 71.34                  2.83
                     Logons:                  0.14                  0.01
                   Executes:              1,030.03                 40.85
               Transactions:                 25.22

  % Blocks changed per Read:    3.41    Recursive Call %:    85.97
 Rollback per transaction %:    7.17       Rows per Sort:   945.58

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:   99.95       Redo NoWait %:  100.00
               Buffer Hit %:   93.25    In-memory Sort %:   98.32
              Library Hit %:   97.78        Soft Parse %:   89.25
         Execute to Parse %:   85.66         Latch Hit %:   99.90
Parse CPU to Parse Elapsd %:    9.26     % Non-Parse CPU:   97.83

 Shared Pool Statistics        Begin    End
                               ------   ------
             Memory Usage %:   100.00   100.00
    % SQL with executions>1:    94.48    91.97
  % Memory for SQL w/exec>1:    86.30    79.12

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                               % Total
Event                                        Waits    Time (s)  Ela Time
-------------------------------------- ----------- ----------- --------
CPU time                                                 31,459    36.88
latch free                                  55,795       20,411    23.92
db file sequential read                 15,592,692       15,383    18.03
PX Deq Credit: send blkd                   126,656       10,266    12.03
db file scattered read                   1,185,652        1,486     1.74


Tom Kyte
October 19, 2006 - 2:11 pm UTC

you are hard parsing like MAD, why?

You understand that probably up to 1/3 of your cpu time might be spent.....

parsing sql


the management of the shared pool changed radically in 9ir2, how many cpus do you have with that huge shared pool?

Anurag, October 19, 2006 - 4:13 pm UTC

The DB server has 4 CPUs, 1.3 GHz. OS is Solaris 8.

As for hard parsing, I assume it's because of not using bind variables... unless that was a rhetorical question :) We are identifying and fixing them using the remove_constants function from your site. We last did this activity about two years ago on 8i... When we upgraded to 9i, the DBA upped the shared pool from 600 meg to its current value.

We've just started using the following sql to monitor shared pool usage over time:

select sysdate,name,round(sum(bytes/1024/1024),4) mb
from v$sgastat
where pool = 'shared pool'
and name in ('free memory','library cache','sql area', 'miscellaneous')
group by name
order by name

Output so far is:
SYSDATE NAME MB
10/19/2006 2:25:54 PM free memory 80.7315
10/19/2006 2:25:54 PM library cache 428.8761
10/19/2006 2:25:54 PM miscellaneous 99.8225
10/19/2006 2:25:54 PM sql area 156.6411

10/19/2006 2:47:54 PM free memory 80.5285
10/19/2006 2:47:54 PM library cache 432.4615
10/19/2006 2:47:54 PM miscellaneous 99.6677
10/19/2006 2:47:54 PM sql area 153.7231

The shared pool was flushed at 8:30 AM today after we got an ORA 4031. I'm not sure why the lib cache memory is increasing in the above though... lack of bind variables should only cause the "sql area" to grow, right?

One more thing, we have pga_aggregate_target set to 16MB - which the statspack report tells us is too small. The DBA is investigating if this needs to be increased. Could it account for the 4031s? (I know you always say that 4031 = not using bind variables, just wanted to see if anything else could be involved).

I suppose we could try cursor_sharing=SIMILAR until the bind variable thing is done... what do you say.

Many thanks, as always!

Tom Kyte
October 20, 2006 - 3:58 am UTC

can you check your _kghdsidx_count setting?


16mb is way way small for pga, that is a system wide setting, not a per session thing!

but it would not lead to 4031's

Anurag, October 20, 2006 - 10:28 am UTC

_kghdsidx_count is 2. Query used:

select n.ksppinm name, v.ksppstvl value
from x$ksppi n, x$ksppsv v
where n.indx = v.indx
and n.ksppinm = '_kghdsidx_count';

The DBA has opened an iTar on this, and we'll ask support if the value can be decreased to 1 - would you recommend that?

I'm still puzzled by the amount of ram (400 meg+ and growing) taken up by the "library cache" of the shared pool. We see that the "sql area" is growing and shrinking, but library cache pretty much just keeps growing.

I'm also looking to post this as a new question as soon as something opens up... thanks for your time and expertise!

Tom Kyte
October 20, 2006 - 10:53 am UTC

with a value of 2, it is taking your shared pool and slicing it into two 384MB (768/2) subpools, in effect.

In current releases (9.2.0.8 for example), with cpu_count at 4 and shared pool at 768mb, mine is set to 1 by default (newer logic limits the size of the subpools to 512m)..

when you have the subpools, it is possible for one side to fill up (and not the other).

the subpools were added to decrease contention during parsing - a problem you currently also suffer from.

New question!

Anurag, October 20, 2006 - 2:45 pm UTC

Thanks for taking this as a followup... I got a chance to post it as a new question. Look forward to your analysis!

session_cached_cursors and connection pooling

Ray DeBruyn, October 25, 2006 - 12:33 pm UTC

This topic was quite good at clearing up hard vs soft parses. Thank you for making this available to us all.

My question is: will I benefit from setting session_cached_cursors > 0 if I am using connection pooling? I use the PLSQL Web Toolkit, so each hit runs a small amount of code and then the session is reset. I wonder if the session cached cursors would be reset also.

Tom Kyte
October 25, 2006 - 2:03 pm UTC

not so much using the plsql web toolkit as you are using mod_plsql and that will reset the session state at the end of each call.

Have a look at snapshot of one customer.

Star Nirav, October 26, 2006 - 5:02 pm UTC

Hi Tom,

Pls. have a look at this snapshot <trimmed> from one of my customers.

We are having RAC (2-node setup).

---------------- Output of Snapshot ------------
           Snap Id     Snap Time      Sessions Curs/Sess Comment
           ------- ------------------ -------- --------- -------------------
Begin Snap:    282 25-Oct-06 17:00:01    2,413      18.6
  End Snap:    283 25-Oct-06 17:30:00    2,251      18.0
   Elapsed:               29.98 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
               Buffer Cache:     4,096M      Std Block Size:         8K
           Shared Pool Size:     1,440M          Log Buffer:     2,048K

Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:             34,738.92              5,319.20
              Logical reads:            309,246.61             47,351.66
              Block changes:                177.27                 27.14
             Physical reads:              1,065.49                163.15
            Physical writes:                111.91                 17.14
                 User calls:                752.24                115.18
                     Parses:                207.83                 31.82
                Hard parses:                 25.03                  3.83
                      Sorts:                 36.98                  5.66
                     Logons:                  0.74                  0.11
                   Executes:                631.20                 96.65
               Transactions:                  6.53

  % Blocks changed per Read:    0.06    Recursive Call %:    57.52
 Rollback per transaction %:   46.39       Rows per Sort:    73.42

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            Buffer Nowait %:  100.00       Redo NoWait %:  100.00
               Buffer Hit %:   99.68    In-memory Sort %:   99.99
              Library Hit %:   96.58        Soft Parse %:   87.96
         Execute to Parse %:   67.07         Latch Hit %:   99.98
Parse CPU to Parse Elapsd %:   46.15     % Non-Parse CPU:   99.46

 Shared Pool Statistics        Begin   End
                               ------  ------
             Memory Usage %:   96.40   89.29
    % SQL with executions>1:   67.24   55.51
  % Memory for SQL w/exec>1:   61.69   50.70

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                             % Total
Event                                      Waits    Time (s)  Ela Time
------------------------------------ ----------- ----------- --------
db file sequential read                1,489,437       9,107     49.58
CPU time                                               5,785     31.50
ARCH wait on SENDREQ                          13       1,745      9.50
enqueue                                   34,750         862      4.69
global cache cr request                1,025,636         379      2.06

---------------- Output of Snapshot truncated -----------

FAQ:

1) Customer is complaining about slowness.
2) The Instance Efficiency report shows figures that appear up to the mark.
3) Not sure about Execute to Parse, Soft Parse and the top 5 wait events, so I request you to help me understand them.

Your feedback on other parameters/configurations is also appreciated.


***** In the another thread </code> http://asktom.oracle.com/pls/ask/f?p=4950:61:14742910931561883966::::P61_ID:531273103649#74023316185185 <code>

you said that you don't believe that my customer has 1400 active sessions per instance. Please check the above snapshot - you will see both instances have more than 2000 sessions. And there are 4 application servers and 2 DNS servers.

Now, please help me to make my concepts/queries clear.


Regards,
Star Nirav


Tom Kyte
October 27, 2006 - 7:31 am UTC

my feedback: it is a statspack report. what question might you have about it?

statspacks do not show "parameters" or "configuration" in general, statspack shows numbers and ratios about your instance.

and if you remember the prior thread, I said I don't believe you have a middle tier machine whereby 1400 people are hitting enter, I said nothing about the database - it was the middle tier I was questioning - you said you "have AN application server"


and what does this have to do with the difference between a soft and hard parse?

Explaination given by me...

Star nirav, October 31, 2006 - 2:06 pm UTC

I am not asking about the parameters/configurations. I am asking what the ideal percentage for soft parses (and so on) should be.

2) I also said that I have a 3-tier application and 2 instances in RAC. (In peak hours, approximately 1,400 sessions are connected per node.)

3) As we know, a library cache hit ratio of 97% or above is a good sign of performance; what would the equivalent figure be for soft parses and hard parses...?

I hope that I am clear what I wanted to convey to you.

Waiting for the response.

Regards,
Star Nirav

--------------- Prev. thread -------------

Followup:
my feedback: it is a statspack report? what questions might you have about it?

statspacks do not show "parameters" or "configuration" in general, statspack
shows numbers and ratios about your instance.

and if you remember the prior thread, I said I don't believe you have a middle
tier machine whereby 1400 people are hitting enter, I said nothing about the
database - it was the middle tier I was questioning - you said you "have AN application server"


and what does this have to do with the difference between a soft and hard parse?



Tom Kyte
October 31, 2006 - 4:07 pm UTC

ratios - hahaha

there are NO PERFECT ones (well, soft parse % is one exception - I'll say it should be well over 99%, but the rest - they are JUST NUMBERS)

"as we know" - I know nothing of the sort - how do you know this to be true?

Sample of the difference between soft parse/hard parse/cursor sharing/no parse

Carlos, November 25, 2006 - 12:24 pm UTC

Hi,

In case it can be of use to the people of this thread I did a little script to compare the performance between the different possibilities.

http://www.ipoal.com/modules.php?name=Content&pa=showpage&pid=21&page=3

I generated a graph of how many iterations per second were done if you hard parse, if you soft parse, or if you use cursor_sharing, and I explained it a little.

I hope this helps

Metalink Doc ID: Note:32895.1 -- SQL Parsing Flow Diagram

Jan van Mourik, November 29, 2006 - 10:05 am UTC

Tom,

Metalink note 32895.1 shows a nice parsing flow diagram. Based on "A reader"'s review from July 13, 2004, I was wondering the following. For 'YES' on
- Is it in an open cursor?
- cursor in Session Cursor cache
- cursor in Held cursor cache
the flow diagram goes straight into "EXECUTE".
But certainly for "cursor in Session Cursor cache" a validity check on the shared pool will first be made. I assume this holds true for the other two as well?
If the cursor is not valid anymore, does it flow back into "Open a cursor"?

Regards and many thanks, jan

Tom Kyte
November 30, 2006 - 9:09 am UTC

if we are talking about "session_cached_cursors" - it has to verify the statement is still "there" to be executed.

"difference between soft parse and hard parse" - I have learnt another topic today

Kala, November 30, 2006 - 6:40 pm UTC

For any Oracle-related doubts I always refer to this site, and most of the time I find what I want.

But I am unhappy that I am NEVER ever able to post queries when I don't find a solution here.

Excellent site and I am an admirer of Tom.



be patient with this question

A reader, December 19, 2006 - 8:20 am UTC

I am a beginner with Oracle, using Oracle 10g Release 2. As a developer I want to know what Oracle really does with a SQL statement I submit, from submission through returning the result set. Also, if appropriate, which Oracle components (memory, processes, etc.) are involved in each step.

Please give a detailed answer, or point me to a document with a clear explanation (with examples, perhaps).

Many, many thanks in advance.

Tom Kyte
December 19, 2006 - 8:43 am UTC

Well, if you really want to hear it in the way I write - go to the home page and get "Expert Oracle Database Architecture"



Thanks, I will go buy it

A reader, December 19, 2006 - 7:00 pm UTC


Interview Question

Mark Brady, March 02, 2007 - 5:39 pm UTC

Although it may be unfair to distill an interview down to one question...

I ask people representing themselves as senior-level Oracle developers the difference between a hard parse and a soft parse, and much more often than not I get answers bordering on dolphin clicks. It happened again just yesterday. At that point it seems a waste to continue.

But I do continue, and at the end of the interview the most common response is, "I never got into the theoretical aspect the way you have."

Am I exaggerating to say that if one can't explain that difference they're junior level, at least when it comes to Oracle? Is that too simplistic an approach?

If you had 10,000 people apply for a job and you needed to teach an Oracle novice to preliminarily screen candidates, and you could teach them one concept to use as a litmus test, what would your choice be?

Was it on your blog that I saw something about how 99 out of 100 programmers can't program? I feel that way more and more as the number of interviews I do increases.

Tom Kyte
March 04, 2007 - 6:38 pm UTC

"I never got into the theoretical aspect the way you have."


Parsing is not theoretical, it is practical, it happens, it is a big performance "thing", it is something they need to know from day 1.

I like that question however, I wish I would have thought of it. I just try to get a "select count(*) from T" out of candidates sometimes (without success...)

http://asktom.oracle.com/Misc/what-is-your-fizzbuzz-factor.html

Fetch Time Difference?

Thiyag, April 02, 2007 - 1:42 am UTC

Hi Tom:

I have a SQL query executed both in prepared-statement mode and as a standalone statement. I know the statement is really bad in multi-user mode; just for the sake of comparing performance, we compared the two. Here are the TKPROF results for both.

/* Statement Mode */

SELECT area_ID, area_TYPE_ID, area_NAME_START, area_NAME_END,
SEC_area_NAME_START ,SEC_area_NAME_END,ODD_EVEN_IND,VANITY_NAME_FLAG,
OU_ID
FROM
area WHERE ( ( area_type_id = 2 AND ( area_name_start=
'MASSACHUSETTS-BALA' OR area_name_start='MA' OR area_name_start=
'MARANHAO' OR area_name_start='MAB')) OR ( area_type_id = 3 AND (
area_name_start='ORANGE')) OR ( area_type_id = 4 AND (
area_name_start='SALEM')) OR ( area_type_id = 5 AND (
area_name_start='CONGRESS SAINT' OR area_name_start='CONGRESS STREET'
OR area_name_start='CONGRESS ST')) OR ( area_type_id = 6 AND
('60108' between area_name_start and area_name_end)) OR (
area_type_id = 9 AND ('01971-8888' between area_name_start and
area_name_end)) ) AND ( ou_id = 10002201 OR ou_id is null) AND
VANITY_NAME_FLAG = 'N'


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1000 0.01 0.01 0 0 0 0
Execute 1000 0.03 0.04 0 0 0 0
Fetch 1000 8.25 8.05 0 172000 0 4000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3000 8.29 8.11 0 172000 0 4000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 64

/* Prepared Statement Mode */


SELECT area_ID, area_TYPE_ID, area_NAME_START, area_NAME_END,
SEC_area_NAME_START ,SEC_area_NAME_END,ODD_EVEN_IND,VANITY_NAME_FLAG,
OU_ID
FROM
area WHERE ( ( area_type_id = :1 AND ( area_name_start=:2 OR
area_name_start=:3 OR area_name_start=:4 OR area_name_start=:5)) OR
( area_type_id = :6 AND ( area_name_start=:7)) OR ( area_type_id =
:8 AND ( area_name_start=:9)) OR ( area_type_id = :10 AND (
area_name_start=:11 OR area_name_start=:12 OR area_name_start=:13))
OR ( area_type_id = :14 AND (:15 between area_name_start and
area_name_end)) OR ( area_type_id = :16 AND (:17 between
area_name_start and area_name_end)) ) AND ( ou_id = :18 OR ou_id is
null) AND VANITY_NAME_FLAG = 'N'


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1000 0.01 0.01 0 0 0 0
Execute 1000 0.09 0.08 0 0 0 0
Fetch 1000 12.78 12.75 0 172000 0 4000
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3000 12.89 12.85 0 172000 0 4000

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 64

The two TKPROF results look exactly the same except for the significant difference in fetch time. Can you please tell me why that is? I believe the fetch times should be exactly the same since they return the same results. Am I missing something? How do I investigate this issue?

Tom Kyte
April 03, 2007 - 8:31 pm UTC

were the bind inputs identical to the literals?

and you are measuring lots of little (short) things. If you run this over and over - do they "equalize" over time?

Fetch Time Difference

Thiyag, April 04, 2007 - 5:49 am UTC

Yes, the bind values were exactly the same; we verified this by comparing the results they returned. We are using 10.2.0.2 and I ran this test in single-user mode. I have some 80 sets of different queries which do a similar type of fetch, and they all show similar results (a significant difference in fetch time). These queries (80 of them) were executed for a period of 5 minutes to eliminate any warm-up factors. Here are some of the results from executing in multi-user mode:

No. of Users PrepStmt(TPS) Statement(TPS)
1 45.17204 62.48333
3 162.1928 171.8167
4 209.7607 216.4626
5 236.7869 250.6332
10 267.0454 315.3878

Please provide me your insights and where to look for any issues.
Tom Kyte
April 04, 2007 - 10:23 am UTC

but these results are squarely against what you said - the prepared statements are superior.

Please have a closer look

Thiyag, April 04, 2007 - 10:55 am UTC

There was some mess-up in the formatting. Reposting this:

No.of Users   PrepStmt(TPS)   Statement(TPS) 
1             45.17204        62.48333 
3             162.1928        171.8167 
4             209.7607        216.4626 
5             236.7869        250.6332 
10            267.0454        315.3878

Tom Kyte
April 04, 2007 - 11:33 am UTC

315 is greater than 267

for example.


I hardly care about single user situations as they never happen in real life. You are measuring "tps" - I would suggest your code is flawed perhaps (eg: it over parses - with prepared statements you do not call parse over and over as you must with statements)
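The over-parsing Tom suspects can be made visible with a counter. The sketch below uses Python's sqlite3 as a stand-in for an Oracle client API; the CountingConn wrapper and its parse_calls counter are inventions of this example, not part of any driver. It compares re-preparing per execution ("statement" style) with preparing once and executing many times:

```python
# Count client-side "parse calls": statement style re-prepares every
# time; prepared-statement style parses once and executes many times.
import sqlite3

class CountingConn:
    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.parse_calls = 0

    def prepare(self, sql):
        self.parse_calls += 1                  # one parse call per prepare
        return lambda *binds: self.conn.execute(sql, binds)

db = CountingConn()
db.prepare("create table t (x integer)")()

# Statement style: prepare (parse) before every single execute.
for i in range(5):
    db.prepare("insert into t values (?)")(i)
after_statement_style = db.parse_calls        # 1 (create) + 5 inserts

# Prepared-statement style: parse once, execute five times.
insert = db.prepare("insert into t values (?)")
for i in range(5, 10):
    insert(i)

print(after_statement_style - 1, db.parse_calls - after_statement_style)
# prints: 5 1
```

Five executions cost five parse calls in the first loop and only one in the second, which is exactly the gap an execute-to-parse ratio measures.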

Fetch Time?

Thiyag, April 05, 2007 - 12:28 am UTC

Can you please comment on the fetch time difference seen in the TKPROF results?

In the code the query is built dynamically depending on the input values, hence we would have to prepare it every time before executing it.
Tom Kyte
April 05, 2007 - 10:48 am UTC

for that, I'd want to see an averaged out timeline over dozens of invocations.

not a single report.

difference between 32 and 64 bit Oracle database

Sandeep kumar, April 05, 2007 - 7:23 am UTC

What is the difference between 32-bit and 64-bit Oracle databases?

Tom Kyte
April 05, 2007 - 11:02 am UTC

about 32 bits


one is 32 bit pointers, and has the addressability of a 32bit application.

one is 64 bit pointers, and has the rather unlimited addressability of a 64bit application.


one can address much more physical memory than the other.
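The addressability gap is just pointer arithmetic; a quick check (plain arithmetic, nothing Oracle-specific):

```python
# A 32-bit pointer can address 2**32 bytes; a 64-bit pointer, 2**64.
gib = 2**30                  # bytes per GiB
eib = 2**60                  # bytes per EiB

print(2**32 // gib)          # prints: 4   (GiB of 32-bit address space)
print(2**64 // eib)          # prints: 16  (EiB of 64-bit address space)
```

Hence a 32-bit server process tops out around 4 GB of addressable memory, while a 64-bit one is, for practical purposes, unlimited.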

Hard Parse or Soft Parse

Car Elcaro, April 05, 2007 - 10:21 am UTC

When I submit an anonymous PL/SQL block for the second time, will Oracle do a hard parse or a soft parse? I do this on my test database, so I can guarantee no other SQL or PL/SQL is submitted.

Here I do
SQL> declare
  2    x varchar2(10);
  3  begin
  4    select trunc(sysdate) into x from dual;
  5    dbms_output.put_line(x);
  6  end;
  7  /
05-APR-07

PL/SQL procedure successfully completed.

SQL> / <b>-- here is the second run in the same session</b>
05-APR-07

PL/SQL procedure successfully completed.



When another session OR the same session runs it again, will Oracle hard parse that PL/SQL block again?
Tom Kyte
April 05, 2007 - 11:32 am UTC

it'll hard parse it the first time and then soft parse on subsequent parse calls as long as the parsed representation stays in the shared pool
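The first-time-hard, thereafter-soft behavior can be sketched as a cache lookup. The following Python toy is entirely illustrative (Oracle's shared pool, latching, and cursor structures are far more involved): a hard parse is a cache miss that must build the parsed representation, a soft parse is a hit that reuses it.

```python
# Toy model: the shared pool as a cache of parsed statements.
# Hard parse = miss (syntax check, semantic check, optimization);
# soft parse = hit (reuse the stored parsed representation).

class SharedPool:
    def __init__(self):
        self.cursors = {}       # parsed representations keyed by SQL text
        self.hard_parses = 0
        self.soft_parses = 0

    def parse(self, sql_text):
        if sql_text in self.cursors:
            self.soft_parses += 1           # found: reuse it
        else:
            self.hard_parses += 1           # not found: do the full work
            self.cursors[sql_text] = "plan for: " + sql_text
        return self.cursors[sql_text]

pool = SharedPool()
for _ in range(3):                          # same block submitted 3 times
    pool.parse("declare ... begin ... end;")
print(pool.hard_parses, pool.soft_parses)   # prints: 1 2
```

If the cached entry is aged out of the pool, the next parse call misses again and becomes a hard parse, which is what "as long as the parsed representation stays in the shared pool" means.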

Bind Variables

Reader, April 12, 2007 - 11:09 am UTC

Hello Tom,
We are using a 3rd-party application on an Oracle 9i DB. I observed that the Execute to Parse ratio was very low, sometimes in the single digits.

This is the response I got from Remedy support. I am a bit surprised by their response and wondering what I should reply.

I know you would not be interested in reading a 3rd party's response, but I would request you to read it.

<<Response from 3rd party vendor ... Start>>

The 'cursor_sharing = FORCE' feature in Oracle can significantly improve efficiency when an application itself does not use bind variables.
Oracle will analyse SQL statements and, effectively, perform 'auto-binding' so that it can reuse the non-variable portion of the SQL statements.
This should reduce the non-shareable memory. Some customers have also reported that this Oracle feature improved throughput.

In Oracle the 'cursor_sharing' feature can be enabled on a per-instance basis (i.e. you would set this in the "init<sid>.ora" file and restart the Oracle instance).

It would then be in effect for all databases within this instance.
By default it would then also be enabled for all users of this instance although you can issue an SQL command to turn it off for your session ( ALTER SESSION set CURSOR_SHARING = EXACT to turn it off.... and ALTER SESSION set CURSOR_SHARING = FORCE to enable it again).

If 'cursor_sharing = FORCE' is in effect for an Oracle instance to which ARS will connect, then the 'Oracle-Cursor-Sharing: FORCE' setting must be set within the 'ar.conf' file for this ARS Server.
If 'cursor_sharing = FORCE' is not set for the Oracle instance then this 'ar.conf' entry must not be in effect either.
The two must go together... both either on or off.

If 'cursor_sharing=FORCE' is set in 'init<sid>.ora' but not in 'ar.conf', you would find that threads within ARS would hang when making some specific calls to Oracle (i.e. particularly when working with Attachment fields).

It was to overcome this problem that the 'Oracle-Cursor-Sharing: FORCE' option was added to the 'ar.conf' file.

This 'ar.conf' option simply informs the AR Server that you are using 'cursor_sharing=FORCE' in the Oracle instance.

Knowing that 'cursor_sharing=FORCE' is in effect in Oracle causes the AR Server to turn off cursor sharing while making the database calls which would cause our thread to hang.
The option is re-enabled directly after the problem database call has completed.

KM-000000011540 - Does ARS SQL Statements use bind variables and if not this may cause a lot of non-shareable memory usage.

KM-000000010520 - In order to tune Oracle database shared-pool usage, we had setup the init.ora parameter cursor_sharing=FORCE. As a result of this, arserverd hangs when trying to do an insert into a "B" table containing a "LONG RAW" column.

KM-000000022271 - Oracle ALTER SESSION privilege in the context of ARS System

Hope it answer your question. Please let me know if I can assist you any further.

<<Response from 3rd party vendor ... End>>

Does setting CURSOR_SHARING = FORCE do any harm?

Tom Kyte
April 13, 2007 - 12:12 pm UTC

.... It would then be in effect for all databases within this instance. ...

hahaha, that shows a misunderstanding, a fundamental one. An instance will only ever see a single database. A database might be opened by millions of instances over its lifetime, but an instance will only ever mount and open a single database.



and cursor_sharing of any setting, that will have this much effect on the number of times a statement is parsed:

ZERO

if your goal is to affect the execute to parse ratio, cursor sharing would have ZERO effect, none. It will never reduce the number of parse calls - only the application can do that.

Cursor sharing = force MIGHT lower the number of hard parses, turning them into soft parses, but it will never make a parse "go away"


... Knowing that 'cursor_sharing=FORCE' is in effect in Oracle causes the AR Server to turn off cursor sharing while making the database calls which would cause our thread to hang.
The option is re-enabled directly after the problem database call has completed. ....

that makes no sense to me whatsoever; not sure what is meant there. I don't know what an AR server is - but if it is an Oracle instance you have named AR - well,

a) cursor_sharing = force does NOT turn off cursor sharing (nothing can turn that off!!! it is the way oracle works). Cursor sharing=force MAY make it so we can actually SHARE MORE SQL

b) it would not cause a thread to hang




cursor_sharing = force should not be set instance wide in my opinion. It negatively affects any well written application; it is using an atom bomb to crack open a walnut. The application that needs this has a serious bug (google "sql injection": any application that doesn't bind properly is subject to it, and to serious performance issues)
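The bind-variable point can be illustrated outside Oracle. In the sketch below, Python's sqlite3 stands in for the database client, and the emp table and its rows are made up for the example: concatenating literals produces a different statement text for every value, so nothing can be shared, while a placeholder keeps one constant text executed with different binds.

```python
# Literals vs binds: distinct SQL texts cannot be shared; one bound
# text can. (sqlite3 here is only a stand-in for an Oracle client.)
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table emp (id integer, name text)")
conn.executemany("insert into emp values (?, ?)",
                 [(1, "KING"), (2, "SCOTT"), (3, "ADAMS")])

# Literal style: every value yields a brand-new statement text.
literal_texts = {f"select name from emp where id = {i}" for i in (1, 2, 3)}

# Bind style: one statement text, three executions with different binds.
bound_text = "select name from emp where id = ?"
rows = [conn.execute(bound_text, (i,)).fetchone()[0] for i in (1, 2, 3)]

print(len(literal_texts), rows)   # prints: 3 ['KING', 'SCOTT', 'ADAMS']
```

The same property is why binds are the real fix for SQL injection: a user-supplied value never becomes part of the statement text, unlike with concatenated literals.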


UGA and buffer cache data

jdam, May 25, 2007 - 9:10 am UTC

Hi Tom,

What is the difference between the data contained in the UGA and the buffer cache?


Tom Kyte
May 26, 2007 - 11:59 am UTC

buffer cache is cached blocks.

uga is session memory, plsql variables, cursors and the like.

TO A Reader, regarding AR System

chris_c, May 29, 2007 - 11:21 am UTC

The third-party application you are referring to, the Remedy Action Request System, suffers from being database independent. Part of the way the vendor has achieved this is by not using bind variables; however, this is not only insecure, it can cause performance issues, as there will be lots of hard parsing going on.

To get around this the application vendor suggests you set 'cursor_sharing=FORCE'; this, however, causes the application to hang at certain points (when inserting LONG RAW data). To work around that, the application configuration file has a setting that causes the application to alter the session prior to the problem SQL being issued.

There is some documentation here http://documents.bmc.com/supportu/documents/78/31/67831/67831.pdf that describes the setting, but it fails to mention that the cause of the problem is poor coding and an attempt to be database independent, rather than a deficiency in Oracle.

peeking and parsing

Emmanuel, November 29, 2007 - 6:36 pm UTC

Hi Tom,

I did the following (session 1)
drop table t;

create table t tablespace tbs_prueba
as
select 1 X, rpad('*',4000,'*') data
 from all_objects
/

insert into t select 2, rpad('*',4000,'*')
from all_objects where rownum<=10
/

commit
/

create index t_idx on t(x)
/

begin
dbms_stats.gather_table_stats(user,'T',cascade=>true,estimate_percent => null,method_opt=>'for all indexed columns');
end;
/

variable z number

exec :z:=2;

@trace;
select count(data) from t t1 where x=:z
@stop_trace;


At this moment the query has taken the following execution plan due to bind peeking, and that's correct, because this plan describes my data distribution perfectly.

select count(data)
from
 t t1 where x=:z

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.01          0          0          0           0
Fetch        2      0.00       0.00          0         12          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.01       0.01          0         12          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=12 pr=0 pw=0 time=716 us)
     10   TABLE ACCESS BY INDEX ROWID T (cr=12 pr=0 pw=0 time=415 us)
     10    INDEX RANGE SCAN T_IDX (cr=2 pr=0 pw=0 time=60 us)(object id 52914)


Now in another session (session 2)
begin
dbms_stats.gather_table_stats(user,'T',cascade=>true,estimate_percent=>null,method_opt=>'for all columns size 1');
end;
/


So this implies that the knowledge of my data distribution no longer exists. So why, when I ran
select count(data) from t t1 where x=:z

in session 1 again, does it take the same execution plan?


I am aware that if I run the following
select count(data) from t <b>t2</b> where x=:z


I will do a hard parse and the query will use the full table scan, but why doesn't gathering statistics cause that as well?

What am I missing?

Thanks!!






Tom Kyte
November 29, 2007 - 7:22 pm UTC

what I'm missing is - you have two table T's or something here? not sure at all what you are doing.

they both pass USER into the dbms_stats call, they both do not qualify T, and there is more than one T

peeking and parsing

A reader, November 30, 2007 - 12:21 pm UTC

Sorry Tom, I will try to be more specific this time.

I am going to change my example a little.

Session 1
drop table t;

Table dropped.

emmanuel@nirvana>
  1  create table t as
  2  select 1 X, rpad('*',4000,'*') data
  3* from all_objects
emmanuel@nirvana> /

Table created.

emmanuel@nirvana> ed
Wrote file afiedt.buf

  1  insert into t select 2, rpad('*',4000,'*')
  2* from all_objects where rownum<=10
emmanuel@nirvana> /

10 rows created.

emmanuel@nirvana> commit;

Commit complete.

emmanuel@nirvana> create index t_idx on t(x);

Index created.

emmanuel@nirvana> exec dbms_stats.gather_table_stats(user,'T',cascade=>true,estimate_percent => null,method_opt=>'for all indexed columns');

PL/SQL procedure successfully completed.

emmanuel@nirvana> select count(data) from t t1 where x=1;

COUNT(DATA)
-----------
      49779

emmanuel@nirvana> select count(data) from t t1 where x=2;

COUNT(DATA)
-----------
         10
emmanuel@nirvana> set autotrace traceonly exp
emmanuel@nirvana> select count(data) from t t1 where x=1;

Execution Plan
----------------------------------------------------------
Plan hash value: 2966233522

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |  2005 | 11029   (1)| 00:02:13 |
|   1 |  SORT AGGREGATE    |      |     1 |  2005 |            |          |
|*  2 |   TABLE ACCESS FULL| T    | 49779 |    95M| 11029   (1)| 00:02:13 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("X"=1)

emmanuel@nirvana> select count(data) from t t1 where x=2;

Execution Plan
----------------------------------------------------------
Plan hash value: 1789076273

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     1 |  2005 |    11   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE              |       |     1 |  2005 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T     |    10 | 20050 |    11   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T_IDX |    10 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("X"=2)

emmanuel@nirvana> variable z number;
emmanuel@nirvana> exec :z:=2;

PL/SQL procedure successfully completed.

emmanuel@nirvana> select count(data) from t t1 where x=:z;

Execution Plan
----------------------------------------------------------
Plan hash value: 2966233522

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |  2005 | 11030   (1)| 00:02:13 |
|   1 |  SORT AGGREGATE    |      |     1 |  2005 |            |          |
|*  2 |   TABLE ACCESS FULL| T    | 24895 |    47M| 11030   (1)| 00:02:13 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("X"=TO_NUMBER(:Z))


As you can see, at this moment we have histograms on table t and the cardinality is well calculated.
But as you know, when we use explain plan or autotrace for statements with bind variables it is much more difficult to see the 'real' execution plan, as those tools don't perform bind peeking.

So in session 2 I trace
emmanuel@nirvana> variable z number;
emmanuel@nirvana> exec :z:=2;

PL/SQL procedure successfully completed.

emmanuel@nirvana> select count(data) from t t1 where x=:z;

COUNT(DATA)
-----------
         10



In Session 1
emmanuel@nirvana>  exec dbms_stats.gather_table_stats(user,'T',cascade=>true,estimate_percent => null,method_opt=>'for all indexed columns');

PL/SQL procedure successfully completed.


The purpose of doing this is to see that the first execution performs the peek, so an INDEX RANGE SCAN is performed.

Next, in session 1 I gather statistics; I think this will invalidate my execution plan (the INDEX RANGE SCAN).

After doing this I return to session 2 and execute the same statement again, but change my variable z.

session 2
emmanuel@nirvana> exec :z:=1;

PL/SQL procedure successfully completed.

emmanuel@nirvana> select count(data) from t t1 where x=:z;

COUNT(DATA)
-----------
      49779

emmanuel@nirvana> @stop_trace

Session altered.


Now I run TKPROF with aggregate=no:
BEGIN :z:=2; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           1


select count(data)
from
 t t1 where x=:z


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0         12          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0         12          0           1

Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: 61

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=12 pr=0 pw=0 time=493 us)
     10   TABLE ACCESS BY INDEX ROWID T (cr=12 pr=0 pw=0 time=565 us)
     10    INDEX RANGE SCAN T_IDX (cr=2 pr=0 pw=0 time=85 us)(object id 52935)


BEGIN :z:=1; END;


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           1

select count(data)
from
 t t1 where x=:z


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.40       0.39          0      49878          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.40       0.39          0      49878          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 61

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=49878 pr=0 pw=0 time=398314 us)
  49779   TABLE ACCESS BY INDEX ROWID T (cr=49878 pr=0 pw=0 time=995732 us)
  49779    INDEX RANGE SCAN T_IDX (cr=99 pr=0 pw=0 time=250049 us)(object id 52935)



So why, after gathering statistics in session 1, doesn't the execution plan change with value z=1? I think Oracle should hard parse and perform the peek again; I was expecting an index range scan and then a full table scan.

Can you please clarify?


Tom Kyte
November 30, 2007 - 2:20 pm UTC

I'll guess

you are using 10g

you are using defaults

the default in 10g is "auto_invalidate" for "no_invalidate" (a dbms_stats parameter)

if you make the second dbms_stats be:

exec dbms_stats.gather_table_stats(user,'T',cascade=>true,estimate_percent => null,method_opt=>'for all indexed columns', no_invalidate=>FALSE);


you should see the hard parse.

the database decided "stats did not change enough to invalidate, so we won't"


just to make it easier, save this as test:

set echo on

drop table t;

create table t as
select 1 X, rpad('*',400,'*') data
from all_objects
/

insert into t select 2, rpad('*',400,'*')
from all_objects where rownum<=10
/

commit;
create index t_idx on t(x);
exec dbms_stats.gather_table_stats(user,'T',cascade=>true,estimate_percent => null,method_opt=>'for all indexed columns');
set autotrace traceonly exp
select count(data) from t t1 where x=1;
select count(data) from t t1 where x=2;
set autotrace off
prompt run @x 2
pause
exec dbms_stats.gather_table_stats(user,'T',cascade=>true,estimate_percent => null,method_opt=>'for all indexed columns', no_invalidate=>FALSE);
prompt run @x 1




and save this as x
set echo on
alter session set sql_trace=true;
variable z number;
exec :z:=&1;
select count(data) from t t1 where x=:z;



and run @test.sql

peeking and parsing

Emmanuel, November 30, 2007 - 2:45 pm UTC

Thank you, you are right !!

I am very pleased to see people who really love their work.

As always, surprising !!!!!!!

No parse internals

Amir Riaz, December 07, 2007 - 7:44 am UTC

hi tom

I am trying to understand the functionality behind "no parse". As you described, with no parse we have minimal latching in the shared pool, or no latches at all. Does Oracle copy the execution plan to our PGA in the case of static SQL? I think that in the case of static SQL we find everything in the PGA and so we don't have to go to the shared pool, but in the case of a softer soft parse we have one part of the cursor in the session cursor cache while the other part is in the shared pool.

regards
Amirriaz
Tom Kyte
December 10, 2007 - 10:37 am UTC

when you parse, you ALWAYS go to the shared pool.

when you bind/execute, you ALWAYS go to the shared pool again. To verify nothing has changed.

The data structures are stored there, it is a matter of how "seriously" you have to access them - hard parse = WORST, soft parse = slightly better than hard parse, softer soft parse = tiny bit better than soft parse, NO PARSE = best you can do.


but a bind/execute phase will require the cursor to point to stuff in the shared pool and to verify nothing has CHANGED.

consider:

ops$tkyte%ORA9IR2> create table t ( x int );

Table created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> create or replace procedure proc
  2  as
  3  begin
  4          for x in ( select * from t ) loop null; end loop;
  5  end;
  6  /

Procedure created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> exec proc

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> @trace
ops$tkyte%ORA9IR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA9IR2> exec proc

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> exec proc

PL/SQL procedure successfully completed.
<b>
ops$tkyte%ORA9IR2> create index t_idx on t(x);

Index created.
</b>


that'll invalidate the cursors - the execution plan MIGHT change, so we need to implicitly reparse when we execute next...
ops$tkyte%ORA9IR2> exec proc

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> exec proc

PL/SQL procedure successfully completed.


Now, the tkprof will show:


SELECT * FROM T

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      4      0.00       0.00          0          0          0           0
Fetch        4      0.00       0.00          0         12          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.00       0.00          0         12          0           0

Misses in library cache during parse: 0<b>
Misses in library cache during execute: 1</b>
Optimizer goal: CHOOSE
Parsing user id: 61     (recursive depth: 1)


See the misses in library cache during execute? We implicitly re-parsed for you there - to repopulate the shared pool after the create index happened.
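One way to picture what the trace shows (purely as a sketch - these few lines are not Oracle internals) is a cache whose entries are thrown out by DDL, so the next execute has to re-parse:

```python
# Hypothetical simulation of cursor invalidation - not Oracle internals.
class LibraryCache:
    def __init__(self):
        self.cursors = {}          # sql_text -> compiled "plan"
        self.hard_parses = 0

    def parse(self, sql):
        if sql not in self.cursors:
            self.hard_parses += 1              # miss: build a new plan
            self.cursors[sql] = f"plan({sql})"
        return self.cursors[sql]

    def ddl_on(self, table):
        # DDL invalidates every cached cursor that references the table
        for sql in list(self.cursors):
            if table in sql:
                del self.cursors[sql]

    def execute(self, sql):
        return self.parse(sql)     # execute implicitly re-parses if invalidated

cache = LibraryCache()
cache.execute("SELECT * FROM T")
cache.execute("SELECT * FROM T")   # cached - no new hard parse
cache.ddl_on("T")                  # e.g. the CREATE INDEX above
cache.execute("SELECT * FROM T")   # miss during execute: implicit re-parse
print(cache.hard_parses)
```

The "misses in library cache during execute: 1" line in the tkprof corresponds to that last call: the parse count stays at zero, but the execute had to repopulate the shared pool.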

no parsing

Amir Riaz, December 09, 2007 - 8:55 am UTC

hi tom

I am trying to understand the functionality behind "no parse". As you described, with no parse we have minimal latching/contention in the shared pool, or none at all. Does Oracle copy the execution plan to the PGA in the case of static SQL? In that case we would find everything in the PGA - plan, SQL text, security - and so we would not have to go to the shared pool. But with a softer soft parse we have one part of the cursor in the session cursor cache while the other part is in the shared pool.

Another concept in my mind is that an open cursor is a mechanism by which Oracle copies the execution plan into the PGA - when we have a plan in the PGA we say our cursor is open. I know that a cursor is made of different components. What is your opinion of these concepts? If both are false, can you elaborate on the function of the no-parse mechanism?

regards
Amir Riaz

this thread deserves more than five stars

AMIR RIAZ, December 11, 2007 - 9:55 am UTC

Hi tom.

thanks for your reply. Now I understand this concept.

In this thread above you said a SQL statement in a trigger will always do a soft parse. Why? Why does PL/SQL not cache that cursor in the PGA? I am talking here about static SQL, which PL/SQL caches automatically when it is written in a stored procedure - but when written in a trigger we get a soft parse. A trigger is also a PL/SQL block, so why the changed behaviour?

Can you describe the reasons behind it.

regards
Amir Riaz
Tom Kyte
December 11, 2007 - 11:01 am UTC

... In this thread above you said a sql statement in the trigger will always do a soft parse. ...


well, not ALWAYS - if the trigger is fired 1,000 times because you update 1,000 rows in a single SQL statement, it will parse it ONCE. But if the trigger is fired 1,000 times because you execute an update 1,000 times, then it would be parsed once per update (for 1,000 parses)

why? because that is the way the code used to be implemented...

take a script like this:

drop table t;

create table t ( x int );
create or replace trigger t before insert on t for each row
begin
    for x in (select * from dual look_for_me) loop null; end loop;
end;
/

connect /

set feedback off
set echo off
spool x.sql
set termout off
begin
    for i in 1 .. 1000
    loop
        dbms_output.put_line( 'insert into t values(1);' );
    end loop;
end;
/
spool off
set termout on
set echo on
set feedback on

alter session set sql_trace=true;
set termout off
@x
set termout on


and in 10g and before, you'll find:

SELECT * FROM DUAL LOOK_FOR_ME

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse     1000      0.04       0.03          0          0          0           0
Execute   1000      0.05       0.04          0          0          0           0
Fetch     1000      0.07       0.07          0       3000          0        1000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     3000      0.17       0.15          0       3000          0        1000


whereas in 11g, you'll discover:

SQL ID : 92hqqnh3wnwpb
SELECT *
FROM
 DUAL LOOK_FOR_ME


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   1000      0.05       0.05          0          0          0           0
Fetch     1000      0.05       0.07          0       3000          0        1000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2001      0.11       0.12          0       3000          0        1000



they changed the behaviour....
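The before/after tkprof numbers can be modelled with a few lines (a hypothetical simulation, not Oracle internals): pre-11g, the trigger's SQL is parsed once per triggering statement, not once per row:

```python
# Hypothetical simulation of pre-11g trigger parse behaviour.
def run_update(rows_touched, parse_counter):
    parse_counter[0] += 1            # trigger SQL parsed once per statement
    for _ in range(rows_touched):
        pass                         # trigger fires per row, but no re-parse

parses = [0]
run_update(1000, parses)             # one UPDATE touching 1,000 rows
print(parses[0])                     # -> parsed once

parses = [0]
for _ in range(1000):                # 1,000 single-row UPDATEs
    run_update(1, parses)
print(parses[0])                     # -> parsed 1,000 times
```

The 11g change effectively moves the "parse once" behaviour up a level, so even 1,000 separate statements show Parse = 1 in the trace.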



No Parse - Open Cursors - How many are reasonable?

Steve, January 11, 2008 - 9:12 am UTC

Hi Tom,

From above

"You can skip the parse by keeping cursors open. Instead of coding:

parse
bind
execute
close

in a subroutine, do:

if (firsttime)
parse
end if

bind
execute"

Let's say we want to implement this across the board in our calling C++ code. We may have 200 connections doing work, calling potentially 200 different stored procedures. Never closing cursors, this can lead to a maximum of 200x200 = 40,000 open cursors. Is this reasonable?

Thanks in advance!

Tom Kyte
January 13, 2008 - 10:50 pm UTC

the open cursors are a client thing, sure having 200 open cursors is reasonable. I'd have no problem with that at all.

Especially if you drop my parses per second from 1,000 to 100 or something awesome like that.
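The "if (firsttime) parse" pattern quoted above is language-agnostic. As a rough illustration (the StatementCache class and server_parse stand-in below are hypothetical, not an Oracle API), a client-side statement cache turns a thousand executes into a single parse:

```python
# Hypothetical sketch of the "if (firsttime) parse" pattern.
parse_calls = 0

def server_parse(sql):
    """Stand-in for the database parse call."""
    global parse_calls
    parse_calls += 1
    return object()            # an opaque statement handle

class StatementCache:
    """Client-side cache: keep cursors open instead of close/re-parse."""
    def __init__(self):
        self.handles = {}

    def execute(self, sql, *binds):
        handle = self.handles.get(sql)
        if handle is None:     # first time only: parse and keep the handle
            handle = server_parse(sql)
            self.handles[sql] = handle
        # ... bind and execute the cached handle here ...
        return handle

cache = StatementCache()
for isbn in range(1000):
    cache.execute("select * from books where isbn = :1", isbn)

print(parse_calls)     # one parse for a thousand executes
```

Each session holding its own cache of ~200 handles is exactly the trade-off discussed: a little client memory in exchange for dropping the parses-per-second figure.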

Karthick Pattabiraman, February 07, 2008 - 7:55 am UTC

hi,

today, for the first time, I gave a presentation on performance optimization. The main topic I discussed was bind variables.

In it I explained how a SQL statement gets executed; my presentation had something like this:

1. Parsing
   - syntax check
   - semantic analysis
2. Check for the SQL statement in the shared pool
   IF no statement found THEN
3. Optimization
4. Row source generation
   END IF

If only steps 1 and 2 happen, it is called a soft parse. If all of steps 1 through 4 happen, it is called a hard parse.

Then one person stopped me and said the check of the shared pool happens even before parsing, so what he described was like this:

Check for the SQL statement in the shared pool
IF no statement found THEN
1. Parsing
   - syntax check
   - semantic analysis
2. Optimization
3. Row source generation
END IF

So I want to know when this lookup in the shared pool happens.

Can you point me to a place where this is documented.

Thank you,

Karthick.
Tom Kyte
February 07, 2008 - 8:45 am UTC

the looking in the shared pool and parsing (syntax) and semantic check must all be done.

their "if no statement is found" is false, the syntax and semantic check MUST take place.


Suppose you log in and issue:

select * from t;


Suppose we took that to the shared pool straight away - and we found a match. In the shared pool we find exactly "select * from t"

Can we use that query plan now? NO - we cannot.

Why not?

because when I run select * from t - that is select * from TKYTE.T
the one in the shared pool, that is selecting from SCOTT.T


Just because the sql looks the same doesn't mean IT IS the same.


So, even if there is a hit in the shared pool we have to

a) parse the select * from T
b) figure out that T in the context of the current user is TKYTE.T
c) figure out that * in the context of the current user is A,B,C

and see if that matches to what we found in the shared pool.


So, it does not skip that.


soft parse = syntax, semantic, shared pool lookup
hard parse = soft parse + optimization, row source generation


I do not care about the ordering of syntax,semantic,shared pool (they could be reversed - doesn't matter)

I only care that the three happen.
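The "even on a shared pool hit we must resolve T" argument can be sketched as follows (a toy model, not Oracle's actual data structures): the cursor is only shareable once the semantic check has resolved the text to a concrete object per user:

```python
# Hypothetical sketch: why a shared-pool hit on sql text alone is not enough.
shared_pool = {}               # (sql_text, resolved_object) -> plan
hard_parses = 0

def resolve(user, sql):
    """Semantic check: 'T' means a different object to each user."""
    return f"{user}.T"         # e.g. TKYTE.T vs SCOTT.T

def parse(user, sql):
    global hard_parses
    key = (sql, resolve(user, sql))
    if key not in shared_pool:
        hard_parses += 1       # hard parse: optimization + row source generation
        shared_pool[key] = "plan"
    return shared_pool[key]    # else soft parse: reuse the shared cursor

parse("TKYTE", "select * from t")   # hard parse
parse("TKYTE", "select * from t")   # soft parse: same resolved object
parse("SCOTT", "select * from t")   # same text, different T -> hard parse
print(hard_parses)
```

Identical text hashes to the same bucket either way, but the cursor found there is only usable after syntax and semantic checks establish it refers to the same objects with the same privileges.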


Karthick Pattabiraman, February 07, 2008 - 8:58 am UTC

you rock you roll....

Shared pool is at database level

Karthick Pattabiraman, February 07, 2008 - 9:19 am UTC

Oh, from your answer I can understand that the shared pool is at the database level. I was under the assumption that it was at the schema level.

Once you said this I got another question, and I also have a sort of answer for it from reading your previous answer.

I do select * from t;

Now it's in the shared pool.

I drop table t.

Now I believe the query will still be in the shared pool.

So now I again issue select * from t.

So now I obviously have to do a parse even if the query is in the shared pool - otherwise it would fail, as the object t is no longer in the database.

Please correct me if I am wrong.
Tom Kyte
February 07, 2008 - 9:37 am UTC

technically, the shared pool is at the INSTANCE level (a single database may be mounted and opened by many instances, each with their own shared pool)...

when you drop the table, it'll invalidate all cursors in the shared pool that point to it. They'll effectively "leave the pool" and cause a hard parse the next time they are executed.

Parsing

Richard, March 26, 2008 - 8:12 am UTC

A question, but, I think, *new blood*, too:
With regard to *The next step in the parse operation is to see if the statement we are currently parsing has already in fact been processed by some other session*, is there any mileage in Oracle transforming statements (intelligently) to be made the same, if they only differ in case/whitespace?

e.g.
SeLeCt  My_CoLUmn    fRom   My_TablE;
and
select my_column from my_table;


Could Oracle be engineered to *see* that the statements are, in fact, logically identical?
Tom Kyte
March 26, 2008 - 9:04 am UTC

it could (plsql DOES, at compile time, for static sql) - but it does not. At compile time, for plsql, I'm willing to expend the extra cpu to normalize the sql; at parse time it would be yet another step added to a soft parse
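The compile-time normalization PL/SQL performs for static SQL can be approximated like this (a naive sketch - it folds case everywhere, whereas a real normalizer must leave quoted literals and quoted identifiers alone):

```python
import re

def normalize(sql):
    # Naive approximation of compile-time normalization:
    # collapse runs of whitespace and fold everything to one case.
    # CAVEAT: real normalization must not touch quoted strings.
    return re.sub(r"\s+", " ", sql).strip().upper()

a = "SeLeCt  My_CoLUmn    fRom   My_TablE"
b = "select my_column from my_table"
print(normalize(a) == normalize(b))   # both map to the same cursor text
```

Doing this once at compile time is cheap; doing it on every soft parse would add cost to the most frequent code path, which is the trade-off the answer describes.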

A Switch, Then?

Richard, March 27, 2008 - 6:59 am UTC

In that case, an spfile parameter would be nice! i.e. "Yes, I am willing to pay the penalty for the extra parsing" or "No! Never do this!".

Reduce Parse To Execute

KEYUR.PATEL, April 01, 2008 - 12:35 am UTC

Hello Tom,

While looking at statspack, I found we have 0.02% hard parses and the rest, about 99.8%, soft parses. So is that acceptable? Further, looking at the statspack "SQL ordered by Parse Calls" section, many procedures have a parse-to-execute ratio of 1 to 1. What should we look at to tune this?

SQL ordered by Parse Calls for DB: CUSDB  Instance: ngdt1  Snaps: :v_begin_snap -
-> End Parse Calls Threshold: 1000

                            % Total
Parse Calls   Executions     Parses  Hash Value
-----------  -----------  ---------  ----------
      2,085        2,085       6.50  1141354210
BEGIN procedure 1; end;

      1,878        1,878       5.85  1974143919
BEGIN procedure 2; end;

      1,849        1,849       5.76  3319563340


Tom Kyte
April 01, 2008 - 9:25 am UTC

it is an acceptable hard parse percent, but it doesn't mean that your parse statistics are acceptable.

I see you parse for every execute - that is not good. Your execute to parse percent must be very very low, that is bad. You only need to parse a sql statement ONCE in an application, not every time you execute it!!!!!!!



The application controls how often something is parsed, you look at the application code and figure out "why is the developer closing this sql statement when they KNOW it'll be used again and again and again"


why it is so..

Reene, April 02, 2008 - 9:20 am UTC

Hi Tom

I have a query - I ran it in a test environment and then in production (test and production are almost identical: data volume and statistics for the underlying tables of this query). I saw a huge difference in parse time CPU between these two environments for this query.

Here is the tkprof output for this query. Why is that? I could not quite understand the difference.

production tkprof:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     99.33      96.36         18        196          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       11      2.02       1.95       1970      18727          0         148
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       13    101.35      98.32       1988      18923          0         148

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: APPS

test tkprof:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1     11.67      11.82          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch       12      1.88       1.85       1557      95949          0         151
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       14     13.55      13.67       1557      95949          0         151

Misses in library cache during parse: 1
Optimizer mode: CHOOSE
Parsing user id: APPS

kindly suggest.
Thanks


Tom Kyte
April 02, 2008 - 10:13 am UTC

if you generate 10053 traces for this query on both - are the traces hugely different in content/size (indicating something is very different in one environment over the other)

and the environments have identical types of cpu's correct?

(if you don't know what a 10053 trace is, just search for 10053 on this site)

small doubts....

Aman Sharma, April 03, 2008 - 7:25 am UTC

Hi sir,
it may be a very silly question, but I am not clear about the parse id. What is it? I understand it as a roll number given to a student to identify him - if there are two Jacobs, their roll numbers tell us they are distinct.
2) When Oracle checks the issued query, it checks the syntax and the semantic information. The semantic information comes from the data dictionary. Then, using the SQL text, it searches for matching SQL text in the library cache. My doubt is that the data dictionary cache also contains the privileges and the structure of the object. If we have already checked the privileges for the table and then find a match in the shared pool for the query, doesn't that mean the earlier data dictionary check was actually an overhead? Why does it need to be checked in the first steps of parsing? I understand that to finally let the query run it would be required - there is no point wasting time on a query whose table we can't see - but still, for me, it's like a chicken-and-egg question. Please clarify.
3) The last one: why is the object structure kept in the row cache? What purpose does it serve? The data is in any case in the buffer cache, and the selectivity estimates come from the data dictionary cache, so what is the requirement of keeping the object structure in the row cache? Is it to keep the "breakable parse lock" used to invalidate the cursor when the structure is changed by some DDL?
Thanks and regards
Aman....


parent and child cursor

karthik, April 08, 2008 - 6:30 am UTC

Tom,
When shared SQL statements exist in the shared pool, cached cursors exist at the session level, right? Now my confusion is about parent and child cursors. Is it the parent cursor that exists in the shared pool and the child cursor that exists in the UGA? My understanding is that each child cursor can have a different execution plan if the sessions querying the same data have different optimizer settings. So is it the child cursor that is cached in the UGA of each session? Please clarify.
Tom Kyte
April 09, 2008 - 2:07 pm UTC

they all exist in the shared pool as peers of each other, the first one parsed is referred to as the parent, the rest are the 'children' - but they are all really peers of each other.

today a query could be the "parent", and tomorrow that same one could be a "child"

they are handled the same in the shared pool

A reader, May 29, 2008 - 2:36 pm UTC

Tom,
I have an Oracle Form whose query is based on a PL/SQL packaged procedure.
The cursor used in the packaged procedure is not a ref cursor but a 'normal' cursor with open/fetch etc.
If a user in a session were to query the block multiple times, is it possible that from the second iteration onward he is going to get a 'softer' soft parse?
Am I right assuming that happens because the product supplying the answer is PL/SQL, even though the client is Oracle Forms?

2) If I were to change this cursor to a ref cursor and have the form set session_cached_cursors = 100 at startup, I am assuming I will get 'softer' soft parses? The reason is that this query could also be used by an Oracle Report, which can be based on a ref cursor (and of course I will code the Oracle Report to set session_cached_cursors too).

Thanks

Ravi
Tom Kyte
May 29, 2008 - 9:29 pm UTC

1) they'll get "no parse" - plsql caches cursors, it'll be softer than a softer soft parse, it'll be "lack of parse" - which is perfect....


2) a ref cursor will be parsed each time, yes, and can be of the softer soft parse style.

Aman Sharma, June 29, 2008 - 3:26 pm UTC

Hi sir,
I was asked the same question - how soft and hard parse differ. I gave the same answer you have mentioned here. But I got one more counter-question: from where are the privileges on the referenced table/view checked? I mean, what does Oracle do when a query is submitted?
a) Syntax check to make sure the grammar is correct.
b) Semantic check to make sure the user has the proper privileges on the referenced object.
c) With the hash of the query, acquire the library cache latch and start looking in the library cache. If found, mark it as a soft parse and do no optimization. If not, start optimization of the query.
But the guy asked: from where are the privileges checked? His answer was that it is from the data dictionary cache, and that leads to another point: if Oracle has to scan the dictionary cache for the privileges anyway, why not check the query hash in the library cache at the same time and then log a soft or hard parse?
Can you please clarify that counter-question, and also the step where the privileges are checked (semantics)? I guess privileges are still data from the data dictionary, so they should be in the buffer cache, but that does not align with the docs, which say the dictionary cache caches the privileges. So that means shared pool access is inevitable.
Please help to clarify.
Thanks and regards
Aman....
Tom Kyte
July 06, 2008 - 6:21 pm UTC

the semantic check does that - when you resolve the OBJECT itself.

select * from t


You can say

o we syntax parse that - tokenize it - make sure it is SQL (it is)

o next we need to figure out what * and T are - in order to "see" T you need access to T, therein lies the "privs" check. If we cannot figure out what the heck T is - you don't have access to it.



Why bother checking the hash at that point, if we did and we got a hit, guess what we'd have to do anyway????

figure out what T means TO YOU (ask them, what if you and I both log in and have a table T and issue select * from T, that the hash matches - meaningless, we need to know that I am accessing tkyte.t and you are accessing your_table.t)


Aman...., July 06, 2008 - 10:46 pm UTC

Thanks a lot for the answer, sir. So you mean to say that in the semantic check the data dictionary cache is accessed? If so, then again we did scan the shared pool? That was the guy's question: why check the shared pool for dictionary privileges alone? Why not, at the same time, check the hash of the query too and mark the cursor as a soft- or hard-parsed one?
Thanks and regards
Aman....
Tom Kyte
July 07, 2008 - 12:01 pm UTC

sure, in order to figure out what "T" means in

select * from T

we need the library cache information, we need to figure out what T is. That is in the shared pool.


It would be TOO EARLY to hash it and compare it. So what if we 'found a hit', we still need to figure out what T is, then what * is so when we find another 'select * from T' out there - we can decide if it is in fact the same one.


The problem with hashing and finding a hit before you figure all of that other stuff out is that you have no idea if the query you found is one you can use, you haven't gotten that far yet.

SQL in PL/SQL

Car Elcaro, October 27, 2008 - 2:11 am UTC

You said above :
"sql executed in a procedure can be cached for the life of the sesssion."

Are they remain cached even the number of SQL in procedure are greater than session_cache_cursor paramater ? Can you elaborate your statement ?

Thank you.
Tom Kyte
October 27, 2008 - 8:03 am UTC

in older releases (pre 9i) we cached up to open_cursors

now the size of the plsql cursor cache is session_cached_cursors

Highly Informative Parsing Info

Atul Gupta, October 31, 2008 - 9:19 am UTC

Hello Tom, I read the complete post and got a boost of parsing info which I'm sure will help me in writing and understanding code. By the way, are there any improvements in parsing in the Oracle 11g version?

Have nice time ahead and wishing you happy new year 09.

br
atul
Tom Kyte
November 02, 2008 - 4:48 pm UTC

the only good parse in all releases will be.....

the absence of a parse


and as a developer, you have the power, the power to make that happen.

Softer Soft Parse

Car Elcaro, November 11, 2008 - 1:31 am UTC

Hi Tom,

(1) From the tabular statistics of a TKPROF report, can we tell whether a SQL statement got a SOFT parse or a SOFTER SOFT parse? My understanding is that both are still reported in the statistics of the parse database call. Correct?

(2) What is the difference between the session cursor cache and the PL/SQL cursor cache? Are they physically different? Why do their mechanisms differ - the former still does a parse when the SQL is not submitted from PL/SQL, while the latter bypasses the parse?

(3) I do a test here
create table t as select * from all_objects;

select value
from v$diag_info
where inst_id = (select instance_number from v$instance) and
      name = 'Default Trace File';

alter session set timed_statistics = true;

alter session set max_dump_file_size = unlimited;

alter session set events '10046 trace name context forever, level 12';

begin
  for i in 1 .. 100 loop
    execute immediate 'select count(*) from t a';
  end loop;

  for i in 1 .. 100 loop
    execute immediate 'select count(*) from t a';
  end loop;

  for i in 1 .. 100 loop
    execute immediate 'select count(*) from t a';
  end loop;

  for i in 1 .. 100 loop
    execute immediate 'select count(*) from t a';
  end loop;
end;
/

alter session set events '10046 trace name context off';

********************************************************************************

SQL ID : 04w16t6qk80vx
select count(*)
from
 t a


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        4      0.00       0.00          0          0          0           0
Execute    400      0.00       0.01          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      404      0.00       0.01          0          0          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 109     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
      0   INDEX FAST FULL SCAN T_PK (cr=0 pr=0 pw=0 time=0 us cost=467 size=0 card=1000000)(object id 208672)

********************************************************************************


Why do I see 4 parses, when I hoped only 1 parse would occur? I am using the default value for session_cached_cursors, which is 50.


Thanks.
Tom Kyte
November 11, 2008 - 4:27 pm UTC

1) they are both still parses, you have to look at the session statistic "session cursor cache hits "


2) plsql caches to AVOID PARSING - plsql creates the best kind of parse - the absence of a parse.

softer soft parsing just makes a soft parse a little less horrible than it would be - just a little. It is still a parse.


In plsql, when you close a cursor, plsql can say "oh yeah, sure, you think you want to close this but I know you'll be back" and keeps it open. It manages them in a way so that you never hit "ORA-01000: maximum open cursors exceeded" because of this (it will close them before your session gets that). But in general, plsql will avoid the 2nd, 3rd, and Nth parse of frequently executed sql.

3) you used it in four different places. Write modular code and you would see only one parse.


session_cached_cursors in this case (since plsql is not parsing over and over) only sizes the cache PLSQL WILL USE TO KEEP CURSORS OPEN.
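The behaviour described - PL/SQL keeping cursors open but closing them before the session hits its limit - can be sketched as a bounded LRU cache (hypothetical, not Oracle's implementation):

```python
from collections import OrderedDict

# Hypothetical sketch in the spirit of the PL/SQL cursor cache sized by
# session_cached_cursors: keep cursors open, but close the least recently
# used one before the session-wide limit would be exceeded.
class CursorCache:
    def __init__(self, limit):
        self.limit = limit
        self.cursors = OrderedDict()   # sql -> open handle

    def get(self, sql):
        if sql in self.cursors:
            self.cursors.move_to_end(sql)      # cache hit: no parse at all
            return self.cursors[sql]
        if len(self.cursors) >= self.limit:
            self.cursors.popitem(last=False)   # silently close the LRU cursor
        handle = f"open({sql})"                # parse + open once
        self.cursors[sql] = handle
        return handle

cache = CursorCache(limit=2)
cache.get("select count(*) from t a")
cache.get("select count(*) from t b")
cache.get("select count(*) from t c")   # evicts the first cursor
print(len(cache.cursors))
```

This is why the "close" you code in PL/SQL is advisory: the runtime holds the cursor open on your behalf, and the cache size, not your close calls, determines when a real close happens.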

parse enhancement

Car Elcaro, November 13, 2008 - 1:42 am UTC

I believe that question #3 above - where Oracle still parses when the same SQL statement appears in different places - could be another enhancement for a future release.

I thought this enhancement possible because of a new feature in the current release, subprogram inlining: Oracle rewrites a subprogram written in one location into multiple locations, yet it behaves like modular code, which is amazing. Here is the test.

create table t as select * from all_objects;

create or replace procedure foo
as
begin
  for i in 1 .. 100 loop
    execute immediate 'select count(*) from t a';
  end loop;
end;
/

--enable inlining
alter session set plsql_optimize_level = 3;

select value
from v$diag_info
where inst_id = (select instance_number from v$instance) and
      name = 'Default Trace File';

alter session set timed_statistics = true;

alter session set max_dump_file_size = unlimited;

alter session set events '10046 trace name context forever, level 12';

begin
  foo;
  foo;
  foo;
  foo;
end;
/

alter session set events '10046 trace name context off';

********************************************************************************

SQL ID : 04w16t6qk80vx
select count(*)
from
 t a


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          1          1          0           0
Execute    400      0.01       0.01          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      401      0.01       0.01          1          1          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 95     (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
      0   TABLE ACCESS FULL T (cr=0 pr=0 pw=0 time=0 us cost=205 size=0 card=56203)

********************************************************************************


Although foo is called 4 times and is inlined, here I see only 1 parse. I thought the same thing could be possible for question #3.

Thanks Tom.
Tom Kyte
November 14, 2008 - 4:27 pm UTC

it could in general be open in all of those places simultaneously with completely different inputs...

And, if you ask me...

Some things (in my opinion) shouldn't be fixed - the obvious solution here is "modular code, well formed code". I'd hate to have code where the same sql statement appeared all over the place - ugh.

Puzzling Parse !!

Ankit, November 14, 2008 - 8:32 am UTC

Consider the series of SQL queries on table books

SQL> select * from wazza.books where isbn=104;

      ISBN TITLE                PUBLISHER            AUTHOR
---------- -------------------- -------------------- --------------------
QTY_IN_STOCK
------------
       104 Tree of smoke        Little Brown         Denis
          22


SQL> select * from wazza.books where isbn=101;

      ISBN TITLE                PUBLISHER            AUTHOR
---------- -------------------- -------------------- --------------------
QTY_IN_STOCK
------------
       101 Man gone down        Atlantic             Michael Thomas
          10


SQL> select * from wazza.books where isbn=102;

      ISBN TITLE                PUBLISHER            AUTHOR
---------- -------------------- -------------------- --------------------
QTY_IN_STOCK
------------
       102 Out stealing horses  Graywolf Press       Anne Born
          15


SQL> select * from wazza.books where isbn=103;

      ISBN TITLE                PUBLISHER            AUTHOR
---------- -------------------- -------------------- --------------------
QTY_IN_STOCK
------------
       103 The rest is noise    Bantam books         Alex
           4


SQL> select * from wazza.books where isbn=104;

      ISBN TITLE                PUBLISHER            AUTHOR
---------- -------------------- -------------------- --------------------
QTY_IN_STOCK
------------
       104 Tree of smoke        Little Brown         Denis
          22


SQL> ALTER SESSION SET SQL_TRACE =false;

Session altered.

Now we have the tkprof report as follows:


TKPROF: Release 10.1.0.2.0 - Production on Fri Nov 14 15:10:40 2008

Copyright (c) 1982, 2004, Oracle.  All rights reserved.

Trace file: E:\oracle\product\10.1.0\admin\my\udump\my_ora_1040.trc
Sort options: default

********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing 
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************

Error in CREATE TABLE of EXPLAIN PLAN table: sys.plan_table
ORA-00922: missing or invalid option

parse error offset: 1043
EXPLAIN PLAN option disabled.
********************************************************************************

ALTER SESSION SET SQL_TRACE =true


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        1      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 65  
********************************************************************************

select * 
from
 wazza.books where isbn=104


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        4      0.00       0.00          0          4          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.00       0.00          0          4          0           2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 65  

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID BOOKS (cr=2 pr=0 pw=0 time=54 us)
      1   INDEX UNIQUE SCAN SYS_C005532 (cr=1 pr=0 pw=0 time=25 us)(object id 51487)

********************************************************************************

select * 
from
 wazza.books where isbn=101


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 65  

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID BOOKS (cr=2 pr=0 pw=0 time=56 us)
      1   INDEX UNIQUE SCAN SYS_C005532 (cr=1 pr=0 pw=0 time=27 us)(object id 51487)

********************************************************************************

select * 
from
 wazza.books where isbn=102


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          2          0           1

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 65  

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID BOOKS (cr=2 pr=0 pw=0 time=54 us)
      1   INDEX UNIQUE SCAN SYS_C005532 (cr=1 pr=0 pw=0 time=26 us)(object id 51487)

********************************************************************************

select * 
from
 wazza.books where isbn=103


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          2          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          2          0           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 65  

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS BY INDEX ROWID BOOKS (cr=2 pr=0 pw=0 time=55 us)
      1   INDEX UNIQUE SCAN SYS_C005532 (cr=1 pr=0 pw=0 time=26 us)(object id 51487)

********************************************************************************

ALTER SESSION SET SQL_TRACE =false


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 65  



********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        6      0.00       0.00          0          0          0           0
Execute      7      0.00       0.00          0          0          0           0
Fetch       10      0.00       0.00          0         10          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       23      0.00       0.00          0         10          0           5

Misses in library cache during parse: 1


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        0      0.00       0.00          0          0          0           0
Execute      0      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        0      0.00       0.00          0          0          0           0

Misses in library cache during parse: 0

    7  user  SQL statements in session.
    0  internal SQL statements in session.
    7  SQL statements in session.
    0  statements EXPLAINed in this session.
********************************************************************************
Trace file: E:\oracle\product\10.1.0\admin\my\udump\my_ora_1040.trc
Trace file compatibility: 10.01.00
Sort options: default

       1  session in tracefile.
       7  user  SQL statements in trace file.
       0  internal SQL statements in trace file.
       7  SQL statements in trace file.
       6  unique SQL statements in trace file.
      84  lines in trace file.
      24  elapsed seconds in trace file.


The tkprof report generated for the query
select * 
from
 wazza.books where isbn=104

shows 2 parses.
Shouldn't it be 1 parse, since the query already exists in the library cache?

Please help

Tom Kyte
November 18, 2008 - 5:49 pm UTC

sqlplus works like this:
loop forever
   read user input
   if user asked to leave then EXIT
   parse user input <<<<<======
   execute it
end loop


every time a program says "parse this", we parse. The second parse was a soft parse which is very very bad - better than a hard parse which is very very very bad.

remember please - sqlplus is a simple command line interface for one-off ad-hoc queries, it is not a production application like you are coding. In your own code you would parse something only once, not over and over (the programmer has 100% control here, they are the only ones with the control really!)

Clarification

Ankit, November 21, 2008 - 3:01 am UTC

With regard to your last follow up for my question...
I don't understand this... I mean, I thought about it for a few days but I am still not able to make complete sense of it.

The thing I understood is that a soft parse is better than a hard parse, as a hard parse uses more resources.

The thing is, a query which was executed recently has its execution plan and related data in the SGA.

Please correct me if there is anything wrong in my interpretation.
Tom Kyte
November 24, 2008 - 2:06 pm UTC

... The thing I understood is that a soft parse is better than a hard parse, as a hard parse
uses more resources.
...


that is true, but a soft parse is STILL BAD - it is not AS BAD as a hard parse, but it still takes a ton of work. You can avoid many soft parses, most of them.

Let me put it this way. In a program you have to

o parse a sql statement AT LEAST ONCE to execute it.
o parse a sql statement AT MOST ONCE to execute it over and over.

you have to parse once, but only once - you don't need to parse each time you execute a query.


A soft parse is very very bad.
A hard parse is very very very bad.


just one less bad. But bad nonetheless.
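
Tom's two rules can be sketched with a small simulation. This is a hypothetical model for illustration only, not Oracle's implementation, and all the names in it (`Database`, `sqlplus_style`, `well_written_program`) are invented. The point is simply that the *program*, not the database, decides how many parse calls happen.

```python
# Toy model: every parse call is at least a soft parse; it is a hard parse
# only when the statement text is missing from the shared pool.
class Database:
    def __init__(self):
        self.shared_pool = {}      # sql text -> compiled plan
        self.hard_parses = 0
        self.soft_parses = 0

    def parse(self, sql):
        if sql not in self.shared_pool:
            self.hard_parses += 1  # full compile: syntax, semantics, optimize
            self.shared_pool[sql] = "plan for: " + sql
        else:
            self.soft_parses += 1  # cheaper, but still lookups and latching
        return self.shared_pool[sql]

SQL = "select * from books where isbn = :b1"

def sqlplus_style(db, n):
    # sqlplus issues a parse call for every statement it reads
    for _ in range(n):
        db.parse(SQL)

def well_written_program(db, n):
    # parse once, then execute the cached cursor over and over
    cursor = db.parse(SQL)
    for _ in range(n):
        pass                       # execute(cursor) - no further parse call

db1 = Database()
sqlplus_style(db1, 5)
print(db1.hard_parses, db1.soft_parses)   # 1 4

db2 = Database()
well_written_program(db2, 5)
print(db2.hard_parses, db2.soft_parses)   # 1 0
```

In a real program the cursor handle returned by the one parse call is what you would hold onto and re-execute.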

Tk Prof report reference

Ankit, November 25, 2008 - 8:40 am UTC

Tom,
this is what I asked in the first place:


the following query was parsed twice (as seen in the tkprof report)
>>

select *
from
wazza.books where isbn=104


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        4      0.00       0.00          0          4          0           2
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        8      0.00       0.00          0          4          0           2

Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 65



I want to know why it was parsed twice.... I don't know if you missed what I asked originally.

I just want to know why the last query was parsed twice (note: this query was executed twice in quick succession and still it was parsed twice)?
Tom Kyte
November 25, 2008 - 12:32 pm UTC

because you were in SQLPLUS

I already answered your question.

Please re-read it, I gave you pseudo code for sqlplus; every time a program says "parse this", we do

We soft parsed it twice in this case (since misses in library cache were zero, we found it in the shared pool)

Avoid Parse

Atul Gupta, November 25, 2008 - 9:48 am UTC

Hello,
I believe a query is parsed more than once if there has been a change in the state of the database since the query was last run - meaning adding an index, altering the table by adding a column, etc. So when you run the query, it finds internally that the last plan it used is not promising, or is stale, and hence re-parses it and estimates what changes were done since the last time - which calls for parsing more than once.

Of course Tom has the last word on this!

br
atul
Tom Kyte
November 25, 2008 - 12:38 pm UTC

wrong.

it is because the program said "parse this sql statement"

they were both soft parses.


Anytime a programmer/program says "parse", Oracle says "how hard" - and does it. In this case, it was two 'soft' parses.

Suppose

Ankit, November 25, 2008 - 12:53 pm UTC

With reference to the last question...

How am I supposed to execute the query twice but make it parse just once?
Tom Kyte
November 25, 2008 - 2:15 pm UTC

in a real program, you as the programmer would control that, you would call prepare ONCE not twice.

You need to parse at least once in a program
You need to parse AT MOST once in a program

sqlplus is just a dumb little client program - it reads input from the user and executes it. It is not expecting the same sql statement to be executed twice in a row (and therefore does nothing to 'cache' the cursor handle) - they didn't think it was a relevant thing to program for sqlplus.


In your real code, you would have total control.

Parse

Atul, November 26, 2008 - 4:39 am UTC

Hello Tom,
thanks for the clarifications, but why was it parsed twice and not thrice? If I submit a query and it gets parsed N times, then who decides this N and what are the criteria?

br
atul
Tom Kyte
November 28, 2008 - 4:19 pm UTC

<b>SQL> select * from wazza.books where isbn=104;</b>
SQL> select * from wazza.books where isbn=101;
SQL> select * from wazza.books where isbn=102;
SQL> select * from wazza.books where isbn=103;<b>
SQL> select * from wazza.books where isbn=104;</b>
SQL> ALTER SESSION SET SQL_TRACE =false;



you ran it twice, you want thrice, hit slash again after running "where isbn=104" - so you would run it three times.


YOU decided the N times, because YOU submitted it twice, not three times.

parse

A reader, November 27, 2008 - 2:01 am UTC



recursive calls

Jajj, November 28, 2008 - 2:49 pm UTC

Hi Tom,

I am trying to compare two tables using the method you demonstrated here - https://asktom.oracle.com/Misc/oramag/on-injecting-and-comparing.html

But my query is taking too long (up to 58 minutes). Here is the trace it generates; it shows a lot of recursive calls and db block gets. What am I doing wrong? How can I improve the performance? The source and target tables have around 7M rows each.

Thanks in advance.

Jajj.



11:34:24 SQL> set autotrace traceonly;
11:34:32 SQL> @cursor

12945 rows selected.

Elapsed: 00:23:56.07

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=212543 Card=14634363 Bytes=3834203106)
   1    0   WINDOW (BUFFER) (Cost=212543 Card=14634363 Bytes=3834203106)
   2    1     FILTER
   3    2       SORT (GROUP BY) (Cost=212543 Card=14634363 Bytes=3834203106)
   4    3         VIEW (Cost=211713 Card=14634363 Bytes=3834203106)
   5    4           UNION-ALL
   6    5             HASH JOIN (Cost=176679 Card=7320498 Bytes=1947252468)
   7    6               TABLE ACCESS (FULL) OF 'AAAAAAAAAAAAAA' (TABLE (TEMP)) (Cost=2 Card=398 Bytes=6766)
   8    6               HASH JOIN (Cost=176637 Card=7338880 Bytes=1827381120)
   9    8                 TABLE ACCESS (FULL) OF 'BBBBBBBBBBBBBBBBBBBBBB' (TABLE) (Cost=3512 Card=1904315 Bytes=64746710)
  10    8                 HASH JOIN (Cost=101633 Card=7338880 Bytes=1577859200)
  11   10                   TABLE ACCESS (FULL) OF 'CCCCCCCCCCCCC' (TABLE (TEMP)) (Cost=336 Card=361971 Bytes=10135188)
  12   10                   TABLE ACCESS (FULL) OF 'DDDDDDD' (TABLE) (Cost=41161 Card=7338880 Bytes=1372370560)
  13    5             TABLE ACCESS (FULL) OF 'DDDDDDD' (TABLE) (Cost=35034 Card=7313865 Bytes=1155590670)


Statistics
----------------------------------------------------------
       6836  recursive calls
       1267  db block gets
     244492  consistent gets
     760364  physical reads
          0  redo size
    1148229  bytes sent via SQL*Net to client
      12975  bytes received via SQL*Net from client
        864  SQL*Net roundtrips to/from client
         18  sorts (memory)
          1  sorts (disk)
      12945  rows processed

Tom Kyte
November 28, 2008 - 5:36 pm UTC

I see more than two tables in there - care to elaborate on what you are really doing because it is not anything similar to what I was doing - which was "compare two tables"



more details

Jajjanaka, November 28, 2008 - 5:51 pm UTC

Hi Tom,

Thanks for the quick response.

The compare is not a straight compare between the two tables. Let me call them the source and target tables. Not all of the columns are present in the source table, so I am joining 4 different tables to get all the columns and then comparing them with the columns from the target table. That is why the plan shows more tables.

Thanks again for taking the question and quick response.

Jajj.
Tom Kyte
December 01, 2008 - 5:33 am UTC

and so why did you write:

... I am trying to compare two tables using the method you demonstrated here - ...

Ok, so you have a big four table join, to create your "one" table - and you are wondering why it is taking a while?

the db gets = IO's to read your large tables
the recursive sql = probably done in order to allocate temporary storage for the intermediate result sets.


you are doing a "big thing" here. 58 minutes might actually be very fast (not knowing your true data volumes, not knowing your query, not knowing the size of the machine you have to run this on, not knowing very much really...)

output from tkprof

Jajjanaka, November 29, 2008 - 12:18 am UTC

Hi Tom,

Here is the output from tkprof.
Thanks in advance.

Jajjanaka.

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.07       0.11          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      864    199.00    3517.98     755212     244217        997       12945
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      866    199.08    3518.09     755212     244217        997       12945

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 239

Rows     Row Source Operation
-------  ---------------------------------------------------
   12945  WINDOW BUFFER (cr=244217 pr=755212 pw=515047 time=2669083293 us)
   12945   FILTER  (cr=244217 pr=755068 pw=514904 time=2575968736 us)
 7374902    SORT GROUP BY (cr=244217 pr=755068 pw=514904 time=2632665196 us)
14736859     VIEW  (cr=244217 pr=445715 pw=205551 time=1167305450 us)
14736859      UNION-ALL  (cr=244217 pr=445715 pw=205551 time=1078884289 us)
 7369856       HASH JOIN  (cr=137112 pr=339958 pw=205551 time=717955871 us)
     399        TABLE ACCESS FULL AAAAAAAAAA (cr=3 pr=0 pw=0 time=1281 us)
 7389772        HASH JOIN  (cr=137109 pr=339958 pw=205551 time=660569774 us)
 1915700         TABLE ACCESS FULL BBBBBBBBB (cr=10720 pr=10583 pw=0 time=180592538 us)
 7391021         HASH JOIN  (cr=126389 pr=212933 pw=89109 time=692095320 us)
  361913          TABLE ACCESS FULL CCCCCCCCCC (cr=998 pr=11 pw=0 time=1221998 us)
 7392748          TABLE ACCESS FULL DDDDDDD (cr=125391 pr=123813 pw=0 time=318014950 us)
 7367003       TABLE ACCESS FULL DDDDDDD (cr=107105 pr=105757 pw=0 time=176812924 us)




********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.08       0.11          0          0          0           0
Execute      4      0.00       0.04          0          0          0           0
Fetch      864    199.00    3517.98     755212     244217        997       12945
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      870    199.09    3518.14     755212     244217        997       12945

Misses in library cache during parse: 2
Misses in library cache during execute: 2


OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       44      0.05       0.05          0          0         42           0
Execute    349      0.09       0.16          0          0          0           0
Fetch      569      0.03       0.59         25       1201          0        2907
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      962      0.19       0.81         25       1201         42        2907

Misses in library cache during parse: 18
Misses in library cache during execute: 18

     4  user  SQL statements in session.
   349  internal SQL statements in session.
   353  SQL statements in session.
********************************************************************************
Trace file: xxxxxxxxxxxxxx.trc
Trace file compatibility: 10.01.00
Sort options: default

       1  session in tracefile.
       4  user  SQL statements in trace file.
     349  internal SQL statements in trace file.
     353  SQL statements in trace file.
      21  unique SQL statements in trace file.
    4071  lines in trace file.
    3873  elapsed seconds in trace file.



Tom Kyte
December 01, 2008 - 6:02 am UTC

the pw's are physical writes
the pr's are physical reads

it is clear that you 'swapped' to disk (the pw's include writes to temp). You spent most of your time doing physical IO.

Look at your first hash join:
7391021         HASH JOIN  (cr=126389 pr=212933 pw=89109 time=692095320 us)
 361913          TABLE ACCESS FULL CCCCCCCCCC (cr=998 pr=11 pw=0 time=1221998
7392748          TABLE ACCESS FULL DDDDDDD (cr=125391 pr=123813 pw=0 


that did 89,109 writes - to temp - because the resulting table did not fit into memory.

so what you have to ask yourself is "could it fit (into memory)"? If the answer is "yes", then you would look at your pga/hash_area_size settings and adjust them perhaps.

most of your runtime is spent doing physical IO, look for ways to reduce your need to do physical IO.

the query

Jajj, November 29, 2008 - 8:02 am UTC

Sorry for multiple posts. Just wanted to show the query as well.

Thanks.

SELECT MAX (tablename) tablename,
       COUNT (*) OVER (PARTITION BY id) action_count,
       id, col3, col4,......, col9
  FROM (SELECT 'T1' tablename, id, col3, col4, ........, col9
          FROM ddddddd INNER JOIN aaaaaaaaa ON (join conditions)
                       INNER JOIN bbbbbbbbb USING (join condition)
                       INNER JOIN ccccccccc ON (join condition)
         WHERE (conditions)
        UNION ALL
        SELECT 'T2', id, col3, col4, ........, col9
          FROM eeeeeeeee)
 GROUP BY id, col3, col4, ............, col9
HAVING COUNT (*) = 1

Force a Hard Parse

Jack Douglas, December 15, 2008 - 3:12 am UTC

The question about how to force a hard parse has come up a few times, and I haven't seen a good way of doing it that doesn't have nasty side-effects.

I have found that as you mention somewhere, if the 'parse environment' changes this can result in a hard parse, and this seems to be the case when you change something like 'sort_area_retained_size' for the session - each unique value of 'n' can result in a new hard parse, at least in 10gr2.

alter session set sort_area_retained_size = n;


Would you advise against this approach? (obviously the question remains why would you want to hard parse but it can be helpful for testing if you don't want to flush the shared pool for example).
Tom Kyte
December 29, 2008 - 9:57 am UTC

... and I haven't seen a good way of doing it that doesn't have nasty side-effects ...

what nasty side effects?

I would not use the sort_area_size, no.

What have you tried?
What are the "side effects"


For example, say I wanted to have a different cursor area when my status codes changed. I know (because I actually know my data) that when status code is less than 100, the optimizer tends to use plan "1" and when they are greater than 100, it uses plan "2"

I would dynamically open a cursor (in 10g and before)

if ( p_status_code < 100 ) 
then
   open c for select * from t T_lt_100 where status_code = p_status_code; 
else
   open c for select * from t T_gte_100 where status_code = p_status_code;
end if;




if you are testing, flushing the shared pool is ok - it is what I do... Or just modify the query slightly. Or change something about the table (use dbms_stats to set the table stats and invalidate...)
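
Tom's "modify the query slightly" technique, and the two differently-aliased cursors in the PL/SQL above, can be sketched with a toy model (illustrative only, not Oracle internals): a different statement text means a distinct shared pool entry, and therefore an independently optimized plan, while the actual value is still bound.

```python
# Toy model: the shared pool is keyed by the exact SQL text, so the two
# aliases T_lt_100 / T_gte_100 yield two cursors - one plan per range,
# not one plan per literal value.
shared_pool = {}

def open_cursor(sql):
    # distinct text -> distinct cursor -> independently optimized plan
    if sql not in shared_pool:
        shared_pool[sql] = "plan optimized for: " + sql
    return shared_pool[sql]

def query(status_code):
    if status_code < 100:
        return open_cursor("select * from t T_lt_100 where status_code = :b")
    return open_cursor("select * from t T_gte_100 where status_code = :b")

query(5); query(7); query(150); query(200)
print(len(shared_pool))   # 2 - one cursor per status-code range
```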

What does an open cursor imply?

jian huang zheng, December 27, 2008 - 12:52 am UTC

Hi Tom
I read a lot about cached cursors and understand that a stored procedure will cache the cursor and leave it open for us for the life of the session.
Can you elaborate more on the meaning of an open cursor? Does it mean that when a cursor is cached and *open*, Oracle will pin the relevant shared structure pointed to by the cursor in the shared pool and prevent it from being flushed out?
And when the cursor is closed, the shared part becomes a candidate to be flushed out when needed?
Thanks!
Tom Kyte
December 29, 2008 - 3:37 pm UTC

just because an application has a cursor open doesn't mean the information in the shared pool is "pinned, un-removable". Consider:

ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure p
  2  as
  3  begin
  4          for x in ( select * from t )
  5          loop
  6                  null;
  7          end loop;
  8  end;
  9  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> @trace
ops$tkyte%ORA10GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA10GR2> exec /* expect select to hard parse */ p

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec /* expect select to NOT  parse */ p

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec /* expect select to NOT  parse */ p

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> create index t_idx on t(x);

Index created.

ops$tkyte%ORA10GR2> exec /* expect select to IMPLICITLY hard parse */ p

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec /* expect select to NOT  parse */ p

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec /* expect select to NOT  parse */ p

PL/SQL procedure successfully completed.



so, we executed it six times.

I expect tkprof to say:

parse = 1
misses in library cache during parse = 1 (that is the HARD parse)
misses in library cache during execute = 1 (that is the IMPLICIT hard parse)

and it does:

SELECT *
FROM
 T


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          1          0           0
Execute      6      0.00       0.00          0          2          0           0
Fetch        6      0.00       0.00          0         18          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       13      0.00       0.00          0         21          0           0

Misses in library cache during parse: 1
Misses in library cache during execute: 1

SQL Flooding Shared Pool

Harschil Kaparwan, December 30, 2008 - 8:26 am UTC

Hi Tom,

I have identified a SQL statement that is occupying a large amount of shared pool space, with multiple copies of it.
Details as follows:


SQL> show parameter shared_pool_size 


NAME            TYPE              VALUE 
-------------- -----------       --------------------- 
shared_pool_size  big integer     754974720 


SQL>select bytes/1024/1024 FREE_MB 
from v$sgastat 
        where name = 'free memory' 
        and pool = 'shared pool' 
FREE_MB 
---------- 
 32MB 


SQL>SELECT substr(sql_text,1,100) "Stmt", count(*), 
                sum(sharable_mem)    "Mem", 
                sum(users_opening)   "Open", 
                sum(executions)      "Exec" 
          FROM v$sql 
         GROUP BY substr(sql_text,1,100) 
        HAVING sum(sharable_mem) > 10000000 


Stmt                   COUNT(*)        Mem    Open    Exec 
--------------       ----------   ---------     -----   ------ 
select objid,title,   11325       225210490   1   11323    
TO_CHAR(escalate_time,
'YYYY/MM/DD HH24:MI:SS')
from T where                              


I have taken one sample SQL statement from v$sql; there are 11,325 such copies:


select objid,   title, TO_CHAR(escalate_time, 'YYYY/MM/DD 
HH24:MI:SS'),from T where escalate_time < TO_DATE('2008-12-30-03-56-32', 'YYYY-MM-DD-HH24-MI-SS') 
 and FLOOR( flags / 16 ) * 2 = FLOOR( flags / 8 ) 
 order by escalate_time asc, creation_time asc, objid asc 


--  v$sql shows similar SQL with a difference only in the
escalate_time < TO_DATE('2008-12-30-03-56-32', 'YYYY-MM-DD-HH24-MI-SS')
portion.
--  The predicate
escalate_time < TO_DATE('2008-12-30-03-56-32', 'YYYY-MM-DD-HH24-MI-SS')
in all the SQLs has values differing by 2 minutes. This means every 2 minutes this statement is loaded & executed.


This SQL is run by the application code.
Does this show poor use of bind variables? I mean, no bind variable is used and therefore every time the SQL is executed, a new copy of the SQL is loaded and executed.

How can we get rid of the above problem?


Many thanks 



Tom Kyte
January 05, 2009 - 9:11 am UTC

... Does this show poor use of bind variables? ...

absolutely, this is the definition of not using binds here. This is very much an application bug.
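
The effect can be sketched with a toy model (illustrative only, not Oracle internals; the statement text is abbreviated from the one above): with the date literal baked into the SQL text, every 2-minute run produces a distinct statement in the shared pool, while a bind variable yields exactly one shared statement.

```python
# Toy model: the shared pool is a set of distinct statement texts,
# as v$sql would show them.
shared_pool = set()

timestamps = ["2008-12-30-03-56-32", "2008-12-30-03-58-32", "2008-12-30-04-00-32"]

# literal style - the application bug
for ts in timestamps:
    shared_pool.add(
        "select objid from t where escalate_time < "
        "TO_DATE('" + ts + "', 'YYYY-MM-DD-HH24-MI-SS')")
print(len(shared_pool))   # 3 - one copy per execution (11,325 in the report)

# bind style - one statement reused for every value
shared_pool = set()
for ts in timestamps:
    shared_pool.add(
        "select objid from t where escalate_time < "
        "TO_DATE(:ts, 'YYYY-MM-DD-HH24-MI-SS')")
print(len(shared_pool))   # 1 - regardless of how many executions
```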

Force a Hard Parse

Jack Douglas, January 07, 2009 - 5:26 am UTC

Hi Tom,

You asked "what nasty side effects?" other methods of forcing a hard parse would have.

I just meant flushing out everyone else's plans in the shared pool (ok on a test server as you say).

"What have you tried?"

The 'perfect' solution for me is to use dynamic sql - binding the variables I want to bind, but leaving the ones I want to cause a hard parse in the sql (this is a data warehouse). The only problem with this is that for large queries with many binds, the procedures I write become far harder to read or make changes to (and they don't cause compile-time errors when I make mistakes). What I would love is to be able to use bind variables for everything in my PL/SQL, but somehow specify that some of the binds are treated as hard-coded values. In some cases it's practical to split the query up as you suggest but in others it isn't unfortunately.

"Or change something about the table (use dbms_stats to set the table stats and invalidate...)"

This sounds like a much better plan than what I proposed, thanks.

Jack
Tom Kyte
January 07, 2009 - 9:44 am UTC

in a data warehouse, it is likely that you don't want to use binds - you are running queries that take many minutes, not running many queries PER SECOND

queries per second -> must bind
queries that take many many seconds -> probably not

mod_plsql

Chris, January 13, 2009 - 5:32 pm UTC

Tom, a couple questions regarding parsing and mod_plsql. I am seeing (in AWR report) that the call being made to the database from the incoming web request is being parsed for every execution.

1. Is this because the session state is reset at the end of every session?
2. How hard is this on the machine (i.e. is parsing the PL/SQL better than parsing SQL)?
3. This is a highly transactional system (debatable I suppose) in which my 10 minute AWR capture has 205,000 parse/executions of the web call, and another 205,000 for the resetting of package state ("begin dbms_session.modify_package_state(dbms_session.reinitialize); end;"). Is there anything I can do to alleviate any of this parsing or increase scalability as this is across thousands of connections?
Tom Kyte
January 16, 2009 - 4:06 pm UTC

1) yes
2) it is unavoidable - it is not overly hard on the machine.
3) have you observed an inability to scale as yet?

Hard parsing and for loop

Nikhilesh, January 19, 2009 - 2:54 am UTC

Dear Tom,
I have tried to get parse_calls to be 1 for multiple executions, but I could only succeed when I used the statement in a FOR loop, i.e. REST_1 and REST_4. That is, when I use a FOR loop the parse call count is one for 2 executions; otherwise it is as many as the executions.

Please see the stats below:
declare
  a number;
  b number := 67;
begin
  for i in 1..2 loop
    select contract_id into a from customer_contract REST_1 where contract_id=67;
  end loop;
  select contract_id into a from customer_contract REST_2 where contract_id=67;
  select contract_id into a from customer_contract REST_2 where contract_id=67;
  select contract_id into a from customer_contract REST_3 where contract_id=b;
  select contract_id into a from customer_contract REST_3 where contract_id=b;
  for i in 1..2 loop
    select contract_id into a from customer_contract REST_4 where contract_id=b;
  end loop;
end;

SQL_TEXT                                                                LOADS  PARSE_CALLS  EXECUTIONS
SELECT CONTRACT_ID FROM CUSTOMER_CONTRACT REST_1 WHERE CONTRACT_ID=67       1            1           2
SELECT CONTRACT_ID FROM CUSTOMER_CONTRACT REST_2 WHERE CONTRACT_ID=67       1            2           2
SELECT CONTRACT_ID FROM CUSTOMER_CONTRACT REST_3 WHERE CONTRACT_ID=:B1      1            2           2
SELECT CONTRACT_ID FROM CUSTOMER_CONTRACT REST_4 WHERE CONTRACT_ID=:B1      1            1           2

Is it always a hard parse if we don't use the statement in a FOR loop?

Thanks in advance.
Tom Kyte
January 19, 2009 - 9:19 pm UTC

... Is it always a hard parse if we don't use the statement in a FOR loop? ...

NO, NO.

We hard parse when the sql isn't in the shared pool.
We soft parse when it is.

We have no clue if you are in a loop, or what ever.

A parse call is a parse call - it is NOT a hard parse necessarily. In your example you have cursors rest_1..rest_4. Each will be parsed by PLSQL each time they are hit in the code. The first time - that was a hard parse, the second - was a soft parse.

A sql query executed by your program must be parsed at least once and at most once. In your case - you have two cursors for each of REST_2 and REST_3 - each of which must be parsed.
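
Why REST_1 and REST_4 report one parse call for two executions, while REST_2 and REST_3 report two, can be sketched with a toy model (an assumed illustration of PL/SQL's cursor caching keyed by source occurrence, not the actual implementation):

```python
# Toy model: PL/SQL caches a cursor per static SQL *occurrence in the source*.
# A statement inside a loop is one occurrence (one parse call, many
# executions); the same text written twice is two occurrences (two parse
# calls), even though the text is identical.
cursor_cache = set()
parse_calls = {}
executions = {}

def run(call_site, sql_text):
    if call_site not in cursor_cache:          # first time this line runs
        cursor_cache.add(call_site)
        parse_calls[sql_text] = parse_calls.get(sql_text, 0) + 1
    executions[sql_text] = executions.get(sql_text, 0) + 1

REST_1 = "SELECT CONTRACT_ID ... REST_1 WHERE CONTRACT_ID=67"
REST_2 = "SELECT CONTRACT_ID ... REST_2 WHERE CONTRACT_ID=67"

for i in range(2):                             # the FOR loop: one call site
    run("loop body", REST_1)
run("first REST_2 line", REST_2)               # same text, two call sites
run("second REST_2 line", REST_2)

print(parse_calls[REST_1], executions[REST_1])   # 1 2
print(parse_calls[REST_2], executions[REST_2])   # 2 2
```

The first of those parse calls is a hard parse (library cache miss); every one after that is a soft parse.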

CPU elapsed specifically for hard parsing

Graham Halsey, January 22, 2009 - 12:51 pm UTC

I'm a DBA for a 9i instance and trying to make the application developers change their code to reduce hard parsing. Here are a few snippets from a statspack report covering 3.75 days >>

                        Per Second       Per Transaction
                   ---------------       ---------------
     Parses:              1,592.16                  3.06
Hard parses:                  6.12                  0.01

Instance Efficiency Percentages (Target 100%)
[snip]
Parse CPU to Parse Elapsd %:   58.33     % Non-Parse CPU:   97.38

 Shared Pool Statistics        Begin   End
                              ------   ------
             Memory Usage %:   25.68   92.08
    % SQL with executions>1:   64.34   20.11
  % Memory for SQL w/exec>1:   66.28   23.40

Statistic                  Total     per Second    per Trans
-------------------- ----------- -------------- ------------
parse count (hard)     1,981,549            6.1          0.0
parse count (total)  515,858,976        1,592.2          3.1
parse time cpu         4,670,489           14.4          0.0
parse time elapsed     8,006,348           24.7          0.1


By my calculations that means we used up 77.8 minutes of CPU time for (soft + hard) parsing in the statspack period.

The developers want me to justify the urgency for this piece of work by telling them how much CPU time is used up specifically for hard parsing. Is there a way of splitting "parse time cpu" between hard and soft parses in 9i?

Thanks in advance.
Tom Kyte
January 22, 2009 - 1:12 pm UTC

.. statspack report covering 3.75 days >> ...

that is a useless report.


but, tell the developers to google

sql injection

and ask them "so, you guys validate all inputs right - I mean, you parse every input to death before concatenating into your sql right - I mean, you'll be able to probably remove thousands of lines of un-necessary code when you start binding, since you CANNOT sql inject a statement that uses binds - whereas a sql statement built by string concatenation is trivial to sql inject"


Frankly, 77.8 minutes of cpu time over 3.75 days are not numbers you can work with. Too long, WAY too long a period of time.

I'm more worried about the large security issue, try that tack - especially with management.

One of the top hacks of computer code is

jim, January 22, 2009 - 1:39 pm UTC

SQL injection! A recent security study identified SQL injection as a prime source of defects in software (leading to sites getting hacked and data compromised). This is old stuff. It has been in the manuals for well over a decade. (Multiple vendors recommend, encourage, and plead - not just Oracle.)

CPU elapsed specifically for hard parsing

Graham Halsey, January 22, 2009 - 6:13 pm UTC

Thanks for the suggestion about SQL injection, I'll certainly raise that!

I got my maths a bit wrong with that previous statspack report, it actually works out at 778 mins of parse time cpu over 3.75 days, I was a factor of 10 out.

For what it's worth, the shortest statspack period I have is 15 minutes, which shows:

                        Per Second       Per Transaction
                   ---------------       ---------------
Hard parses:                 10.97                  0.01
[snip]
Statistic                                      Total     per Second    per Trans
--------------------------------- ------------------ -------------- ------------
parse count (hard)                             9,870           11.0          0.0
parse count (total)                        3,044,876        3,383.2          3.3
parse time cpu                                30,083           33.4          0.0
parse time elapsed                            30,759           34.2          0.0


So that's just over 5 minutes parse time CPU in a 15 minute period. But again that's (soft + hard) parse time cpu rather than specifically hard parse time.

Thanks for your advice
Tom Kyte
January 22, 2009 - 6:25 pm UTC

what does your execute to parse ratio look like out of curiosity

CPU elapsed specifically for hard parsing

Graham Halsey, January 22, 2009 - 6:43 pm UTC

In that 15 minute period:

Execute to Parse %:   89.24


Tom Kyte
January 23, 2009 - 7:43 am UTC

that is not bad - I was sure with such high parse rates it would be much lower.

This system parses a lot, anything you can do to reduce that will increase your ability to scale, especially with regards to the hard parsing.

Look at your latch report in statspack - do you see shared pool and library cache latch spins/waits? They are parsing related, and every spin burns CPU.
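A sketch of where to look outside of statspack - the same counters are exposed in v$latch (latch names vary a bit by version):

```sql
-- parsing-related latches: every miss means spinning, every sleep means a context switch
select name, gets, misses, sleeps, spin_gets
  from v$latch
 where name in ( 'shared pool', 'library cache', 'library cache pin' )
 order by sleeps desc;
```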

CPU elapsed specifically for hard parsing

Graham Halsey, January 23, 2009 - 8:56 am UTC

Thanks once again Tom!

Yep, there are significant sleeps and spins for the latches you mentioned. This from that 15 minute statspack report:

                                      Get                         Spin &
Latch Name                       Requests      Misses      Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------------------
cache buffers chains        1,159,756,010   2,681,454      34,249 2648819/31576/1113/143/0
library cache                 141,404,768   1,732,314      17,326 1715357/16596/353/8/0
row cache enqueue latch        42,172,984   1,716,871         353 1716519/351/1/0/0
row cache objects              42,812,104     864,624         723 863905/715/4/0/0
session allocation             18,574,323     814,345         592 813753/592/0/0/0
redo allocation                16,438,079     705,749         541 705210/537/2/0/0
library cache pin             100,929,707     295,921         267 295655/265/1/0/0
shared pool                    16,081,568     124,864       7,893 117119/7601/140/4/0
library cache pin allocati     26,398,448      23,250          25 23225/25/0/0/0

Tom Kyte
January 23, 2009 - 9:31 am UTC

those represent a huge chunk of cpu. we use a loop of 2000 iterations to try to get the latch, if we do not get it, we go to sleep, when we wake up we try again.

lots of spins, lots of contention, lots of cpu

"parse time cpu" aware of number of CPUs?

Graham Halsey, February 04, 2009 - 10:59 am UTC

Regarding the "parse time cpu" value in the statspack, is that aware of the number of CPUs in the server?

In the 15 minute statspack report it showed "parse time cpu" of 30,083. That's 300 seconds (out of 900 seconds elapsed), but do I need to split that by the number of CPUs in the box to get the true effect of parsing on the box's total CPU capacity?

i.e. is parsing using up 300/900 of the CPU capacity in my box or 300/(900*12) ... if I have 12 CPUs in the server?

Thanks,
Graham.
Tom Kyte
February 04, 2009 - 12:27 pm UTC

I don't know how anything that simply counts up the amount of cpu time would be, or would have to be, aware of the number of CPUs.


You do, when analyzing the numbers, but the counter - does not.


Say you have 4 cpus.
That means you have a resource of 3,600 cpu seconds in a 15 minute window.

If you have a parse time of 1,200 cpu seconds in that 15 minutes, that means about 1/3 of your cpu resources were spent parsing.
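That arithmetic can be sketched against a running instance as well - assuming cpu_count reflects the CPUs available to Oracle, and remembering that "parse time cpu" is reported in centiseconds:

```sql
-- rough fraction of total CPU capacity spent parsing since instance startup
select s.value / 100                                            parse_cpu_secs,
       p.value                                                  cpus,
       round( (s.value / 100) /
              ( (sysdate - i.startup_time) * 86400 * p.value ) * 100, 2 ) pct_of_capacity
  from v$sysstat s, v$parameter p, v$instance i
 where s.name = 'parse time cpu'
   and p.name = 'cpu_count';
```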


MOD_PLSQL parsing

DayneO, November 26, 2009 - 8:47 am UTC

Hi Tom,

My client is experiencing major performance problems with their web based application. It is running on Oracle IAS with MOD_PLSQL connected to a 10gR1 database.

I used a simple "wall clock" using BeforeProc and AfterProc on the "slow" DAD. The results show procedure runtimes ranging from 0.5 sec to 1.5 minutes (for a given PSP procedure) with 90% of calls executing within 10 sec. So the procedure is able to run quickly but something happens every now and again which makes the wheels come off. Response times are unpredictable and I expect that some other session is causing the problem. The DAD is quite active and receives approximately 30,000 requests per working day over different procedures.

I am trying to track down what is causing the delay, but have no access to statspack (the dba reckons it adds too much overhead). I only have select_catalog_role.

So far I have:
1) Got myself a list of top 5 system waits from (v$system_event):
latch: library cache 32.52 %
db file sequential read 15.91 %
db file scattered read 11.72 %
read by other session 5.15 %
ksfd: async disk IO 4.47 %

Hmmm.. looks like SQL parsing problem to me (bind variables?)

2) Got the top 5 waits for the application schema (across all pooled connections) from v$session_event:
db file sequential read 42.64 %
SQL*Net more data from client 21.83 %
latch: library cache 13.84 %
db file scattered read 10.36 %
latch: shared pool 4.70 %

SQL*Net... I've never seen that one before, and that is likely a network problem, yes?
But more concerning to me is the latch waits again
3) I then checked for the top parsing users using v$sesstat.
The findings were scary and I'm not sure that I have it correct.
parse count (hard)/execute count = 7% on average across the connected sessions, but the
parse count (total) / execute count = 70% on average.
4) I then checked the V$SQLAREA and found that SQL was being parsed for each execute even though bind variables are being used.

The DAD is set up with "PlsqlMaxRequestsPerSession 1000", but I did notice that mod_plsql issues a dbms_session.reset
after every procedure execution. Would this be the cause of the soft parses?

a) From what you can see, is the soft parsing really the cause of the performance problem? Surely MOD_PLSQL would be optimized for parse/execution?
b) How could I reduce my parsing in this environment?

Tom Kyte
November 28, 2009 - 1:15 pm UTC

(the dba reckons it adds too much overhead)

DBA is a, well, I probably shouldn't use that language.

Does this "dba" understand even a little how statspack works? You take a snap - and during that call to statspack.snap - that is the only time statspack is active. In between snaps the overhead of statspack is.......

0.0000%

Tell the "dba" to please close their eyes before touching the keyboard, they'll be as effective as you are now - neither of you can see anything useful to your job.


latch waits on library cache are parsing related - you need to find out if they are
a) hard parse
or
b) soft parse
related. Soft parse - you might not be able to do much about; hard parse - you can do a lot about.


that is a strange way to compute a ratio for those. You want to know what percentage of parses were hard parses.

round( hard/total * 100 )

that should be a tiny number.
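As a sketch of that check, the same percentage can be computed from the instance-wide counters in v$sysstat (statistic names as spelled in the reports above):

```sql
-- percentage of all parses (since instance startup) that were hard parses
select round( hard.value / total.value * 100, 2 ) pct_hard
  from ( select value from v$sysstat where name = 'parse count (hard)'  ) hard,
       ( select value from v$sysstat where name = 'parse count (total)' ) total;
```

If that number is not tiny, the application is not sharing SQL and the hard parsing is worth chasing first.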


4) in mod_plsql that will be the norm unless the application issues the same sql over and over for the same page. mod_plsql does a reset of the session state, closing all cursors, at the end of the page.





find out what percentage of these are hard parses - start there.

A reader, December 04, 2009 - 6:35 pm UTC

Off topic. Speaking of "automagically":

http://www.urbandictionary.com/define.php?term=automagically&defid=1337

I always thought you made that up Tom. At least I heard that first on this site. :)

parsed code and execution plan

Arindam, September 28, 2010 - 12:14 pm UTC

Hi Tom,

I would like to ask a very basic question to you.

How can Oracle constitute a parsed code and how it can decide to choose the best possible way to achieve the result by traversing that parsed code (i.e. execution plan)? What internally works there?
Tom Kyte
September 28, 2010 - 12:22 pm UTC

do you mean a parsed sql statement - how does the optimizer optimize? Is that your question?

parsed code and execution plan

Arindam, September 28, 2010 - 12:32 pm UTC

Hi Tom,

Actually I am new to this world. I heard that in the shared pool's library cache, SQL/PL-SQL statements are matched textually against existing entries (given the default CURSOR_SHARING setting), and if no match is found, Oracle builds a parsed representation of the query - the parsed code. Alongside that it develops the execution plan, which is the best possible way to traverse that parsed code to achieve the desired result.
All I want to know is how Oracle internally develops that. You may be right in your terms, but I am not sure if that is what I wanted.
Tom Kyte
September 28, 2010 - 12:50 pm UTC

see
http://docs.oracle.com/docs/cd/E11882_01/server.112/e16638/optimops.htm#i21299

I believe that is what you are looking for - how a statement is parsed and optimized.

parsed code and execution plan

Arindam, September 28, 2010 - 12:56 pm UTC

Hi Tom,

Thanks, I will read it and let you know if there is any concern. Would that serve my purpose for Oracle 9i as well? I was expecting the reply for the 9i version only.
Tom Kyte
September 28, 2010 - 1:16 pm UTC

it works with mod_plsql against any database that mod_plsql works against (at least 8i and up)

how to identify hard vs. soft parses per sql_id in v$sqlarea

Sigrid Keydana, November 26, 2010 - 5:19 am UTC

Hi Tom,

I'm sorry for asking a possibly rather dumb question, but I'm very much of a newbie DBA.

In our company, we're scanning application-issued sql for the expense caused by a statement, and I would like to refine the query in order to differentiate between the number of hard parses and the number of soft parses, so in the end I have the three categories of

1) statement causing many hard parses -> Java app evidently doesn't use bind variables
2) statement causes many soft parses -> probably using PreparedStatement, but once per loop
3) statement parsed once -> PreparedStatement used "correctly" (if used)

How can I identify the two kinds of parses from v$sqlarea (which is the view I guess I have to use here, views like v$sql_plan_statistics not being populated):

1) hard parse: in principle, this should be the sum of the child cursors per parent I suppose, so sum(version_count) group by sql_id? But then, I miss all child cursors that are not currently in the cache... and would it even be the correct what I assume, namely that:
no. of child cursors = number of different execution plans = number of hard parses?

2) soft parse: would that be parse_calls per sql_id, then?


Also, may I add the question to your basic explanation of hard vs. soft parse - wouldn't there be, after the semantic check, still other tests like bind variable peeking that lead to a new child cursor, and so, a new hard parse?
Or is everything reported as a "mismatch" part of the semantic check?

Thank you very much in advance,
Sigrid

-
Tom Kyte
November 28, 2010 - 2:51 pm UTC

if you see a statement in V$SQL, it was hard parsed once by the application - and only hard parsed once by the application. Once it is in V$SQL, it is already hard parsed and all other parses will be ...... soft

UNLESS, the statement is aged out of the shared pool to make room, or is invalidated because a new index was added or statistics were gathered.

so, if it is in the shared pool - it has been hard parsed and subsequent parses of it should be soft (unless it got aged out for some reason)


bind peeking in 10g and before would never lead to a "new hard parse", once the binds are peeked - the plan is set.

Adaptive cursor sharing (new in 11g) changes that - but it happens in a subtle way. The parsing process is not looking for a reason to reparse as much as the EXECUTION ENGINE is. With adaptive cursor sharing, the parsing routine marks a statement as "suspect" - might be subject to bind peeking issues. The execution engine watches the statement as it is executed and, if it is not performing as predicted, asks the parser to start creating bind aware plans. So it is less of a parse thing than an execution thing.
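On 11g you can watch this happening via the bind-sensitivity flags in v$sql; a sketch (the LIKE pattern is just an illustration):

```sql
-- IS_BIND_SENSITIVE = 'Y': the parser marked the cursor "suspect" (peeked bind + histogram)
-- IS_BIND_AWARE     = 'Y': the execution engine has asked for bind-aware child cursors
select sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
  from v$sql
 where sql_text like 'select * from t where x = %';
```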

A reader, November 28, 2010 - 5:00 pm UTC

Hello Sir,

"bind peeking in 10g and before would never lead to a "new hard parse", once the binds are peeked - the plan is set."

(1) If cursor_sharing is set to SIMILAR, will Oracle hard parse each statement that is not bind safe?

(2) Also, when a new child cursor is created, will Oracle hard parse again?

Thanks


Tom Kyte
November 29, 2010 - 2:13 am UTC

1) with cursor sharing = similar, oracle will

a) rewrite the query with binds, e.g. select * from t where x = 5 will become "select * from t where x = ?"

b) IF it is deemed "not safe to bind against X" - perhaps because it has histograms and is skewed - then the "bind value 5" will be added to the signature of the plan generated for "select * from t where x = ?". That means in order to soft parse and get a shared pool match - you would have to be executing the query "select * from t where x = ?" AND be using a bind variable value of "5"

c) if it is deemed SAFE to bind against X, then you only need to have the query match - not the bind values.



So, in cursor sharing similar, if your auto-bound columns are deemed "not safe to bind against", they will cause a child cursor to be created for every unique set of values.

for example:

ops$tkyte%ORA11GR2> create table t
  2  as
  3  select case when rownum < 100 then 1 else 99 end id1, rownum id2, a.*
  4    from all_objects a;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create index t_idx1 on t(id1);

Index created.

ops$tkyte%ORA11GR2> create index t_idx2 on t(id2);

Index created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select count(*) from t where id1 = 1;

  COUNT(*)
----------
        99

ops$tkyte%ORA11GR2> select count(*) from t where id2 = 1;

  COUNT(*)
----------
         1

ops$tkyte%ORA11GR2> exec dbms_stats.gather_table_stats( user, 'T', estimate_percent=>100 );

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> alter system flush shared_pool;

System altered.

ops$tkyte%ORA11GR2> alter session set cursor_sharing = similar;

Session altered.

ops$tkyte%ORA11GR2> select count(*) from t where id1 = 1;

  COUNT(*)
----------
        99

ops$tkyte%ORA11GR2> select count(*) from t where id1 = 2;

  COUNT(*)
----------
         0

ops$tkyte%ORA11GR2> select count(*) from t where id1 = 3;

  COUNT(*)
----------
         0

ops$tkyte%ORA11GR2> select count(*) from t where id1 = 4;

  COUNT(*)
----------
         0

ops$tkyte%ORA11GR2> select count(*) from t where id2 = 1;

  COUNT(*)
----------
         1

ops$tkyte%ORA11GR2> select count(*) from t where id2 = 2;

  COUNT(*)
----------
         1

ops$tkyte%ORA11GR2> select count(*) from t where id2 = 3;

  COUNT(*)
----------
         1

ops$tkyte%ORA11GR2> select count(*) from t where id2 = 4;

  COUNT(*)
----------
         1

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select child_number, sql_text from v$sql where sql_text like 'select count(*) from t where id_ = _%' order by 2, 1;

CHILD_NUMBER SQL_TEXT
------------ --------------------------------------------------------------------------------
           0 select count(*) from t where id1 = :"SYS_B_0"
           1 select count(*) from t where id1 = :"SYS_B_0"
           2 select count(*) from t where id1 = :"SYS_B_0"
           3 select count(*) from t where id1 = :"SYS_B_0"
           0 select count(*) from t where id2 = :"SYS_B_0"


Notice how there are 4 child cursors for ID1 = ? - because ID1 is seen as skewed by the statistics present on the table - whereas ID2 = ? has just one child cursor because ID2 is known to be "not skewed"


Now, if you execute "select * from t where id1 = 1" 1,000,000 times - it will hard parse ONCE and soft parse the remainder. So, it does not hard parse "each statement which is not bind safe", rather it just adds the bind values to the signature of the query for the matching in the shared pool - it can and will still soft parse given the same query.


2) a child cursor is by definition a "hard parsed thing", it is a new cursor as different from the other child cursors as can be. Yes.

Identify hard vs. soft parses in v$sqlarea, again

Sigrid Keydana, November 29, 2010 - 9:11 am UTC

Hi Tom,

thank you very much for your explanation. Sorry for asking again, but I'm not yet sure what to conclude from this for my purpose of differentiating parse-intensive statements (using categories "many hard parses" vs. "many soft parses").

If I understand correctly your explanations, they do not contradict my original assumptions that

- no. of hard parses of 1 statement = child cursor count for this sql_id
- no. of soft parses = v$sql.parse_calls

do they?
Or, if they are wrong or don't seem to make much sense, could I ask how best to retrieve these different kinds of "expensive statements" issued by an application?

(I am aware that a third category, or second sub-category of "many hard parses", namely when quasi-identical statements don't use bind variables, would have to be identified in still another way, perhaps using regexp functions on the sql_text column).

Sorry for taking your time again, but I'd very much like to get this important topic right...

Thanks a lot,
Sigrid

Tom Kyte
November 29, 2010 - 10:47 am UTC

number of hard parses for a given statement is GREATER THAN OR EQUAL TO number of child cursors (things get invalidated, aged out)

number of soft parses for a given statement is LESS THAN OR EQUAL TO number of parses minus 1.


You shouldn't be focusing on these metrics however at the statement level - in general a statement is hard parsed about once and soft parsed the rest. What you should be interested in are statements with

o VERY LOW execute counts
o VERY HIGH parse counts (near to execute counts)


Those are the evil ones. Ones with VERY LOW execute counts are typically those that do not use bind variables.

Ones with high parse counts - near the number of execute counts - are being parsed way too often and you need to look into statement caching in the application.
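A sketch of queries against v$sqlarea that surface both kinds of offender (the thresholds are arbitrary - pick ones that fit your workload):

```sql
-- parsed nearly as often as executed: candidates for statement caching in the app
select sql_id, parse_calls, executions
  from v$sqlarea
 where executions > 100
   and parse_calls >= 0.9 * executions
 order by parse_calls desc;

-- executed only once or twice: likely literal SQL missing bind variables
select sql_id, executions, sql_text
  from v$sqlarea
 where executions <= 2
 order by first_load_time desc;
```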


APEX and dbms_session.reset_package?

Stew Ashton, April 17, 2011 - 5:25 am UTC


Tom, earlier in this page someone mentioned that mod_plsql calls dbms_session.modify_package_state frequently.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2588723819082#1412723700346538744
Recently, Joel Kallman tweeted a link to a blog stating that APEX calls dbms_session.reset_package frequently, which "closes all session cursors".
http://thebambers.me.uk/oracle/?p=13
Questions:

1) Which procedure is really called in APEX, reset_ or modify_ ?
2) Why does the cursor cache need to be emptied?
3) Does this lead to lots of soft parses, or simply to lots of "softer soft parses"?
Tom Kyte
April 18, 2011 - 10:19 am UTC

A quick query on v$sql reveals that reset_package was invoked and there is no sight of modify being invoked.

In this case, the entire session state is torn down
http://docs.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_sessio.htm#i1010767

It'll lead to soft parses.

APEX and dbms_session.reset_package followup

Stew Ashton, April 18, 2011 - 3:00 pm UTC


Thanks for your quick answer. It's disappointing when a product you recommend doesn't seem to follow one of your most repeated recommendations. I wish I could understand *why* APEX considers it necessary to provoke lots of soft parsing.

Should we reduce session_cached_cursors in APEX installations?
Tom Kyte
April 18, 2011 - 3:30 pm UTC

You can control this:

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14190/confmods.htm#CIHHEBJG

just beware that it might change some behavior (cursors will be held open - logically they are closed, but they will be held open in the plsql cursor cache) - it will also tend to consume more memory as the memory is not freed. It is not the default behavior but is available.

Amazing!

Stew Ashton, April 18, 2011 - 4:08 pm UTC


Where else could I have learned this? Thanks again!

APEX listener and dbms_session

Stew Ashton, April 19, 2011 - 12:38 pm UTC


When I realized your answer was specific to mod_plsql, I asked about the APEX Listener over on OTN. Colm Divilly answered:

"APEX Listener uses dbms_session.modify_package_state(dbms_session.reinitialize) only."

I will be using the Listener, so I'm good to go :)

Parsing and bind variables

Jayadevan, May 06, 2011 - 1:25 am UTC

Hi TOm,
In an AWR report, I came across a query that uses just one table -
select column1 from mytable
WHERE (column1 =:1
AND column2 =:2
AND column3=:3
AND column4 =:4
AND column5 =:5)
OR (column1 =:6
AND column2 =:7
AND column3=:8
AND column4 =:9
AND column5 =:10)
OR (column1 =:11
AND column2 =:12
AND column3=:13
AND column4 =:14
AND column5 =:15)
....
all the way upto
column1 =:5000
In short, the filter is a few pages long. Is it better to write queries this way or dump the values to be filtered into 2-3 temp tables and join the main table with those? I think Oracle will find it easy to parse/optimize a small query with a few joins and that query will be better for the library cache. Is that right?
Tom Kyte
May 06, 2011 - 10:32 am UTC

It would probably make sense to employ a global temporary table or an in memory collection so that the query was simply:


select column1 from mytable
where (column1,column2,column3) in (select ... from global_temp_table );

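A minimal sketch of that approach (all table and column names here are hypothetical):

```sql
-- created once, not per run: a transaction-private scratch table for the filter values
create global temporary table filter_vals
( column1 number, column2 number, column3 number, column4 number, column5 number )
on commit delete rows;

-- per run: array-bind the value sets from the application...
insert into filter_vals values ( :1, :2, :3, :4, :5 );

-- ...then one short, easily parsed query replaces the pages-long predicate
select m.column1
  from mytable m
 where ( m.column1, m.column2, m.column3, m.column4, m.column5 ) in
       ( select column1, column2, column3, column4, column5 from filter_vals );
```

The library cache then holds one small cursor instead of a multi-page statement, and the optimizer has a simple join to work with.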

Mark, September 22, 2011 - 4:57 pm UTC

Hi Tom,
If a sql statement is found in sql area but a new child cursor has to be created due to a "mismatch", is this counted as a soft or as a hard parse?
Tom Kyte
September 22, 2011 - 6:01 pm UTC

it would be going through the entire hard parse process (optimization, row source generation).

Shelee, September 23, 2011 - 8:12 am UTC

For a data warehousing environment, our DBA is saying that bind variables are not required - let it hard parse. Is that ok?
Tom Kyte
September 23, 2011 - 9:55 am UTC

when you run many queries per second - you need to use binds (oltp)

when you run queries that take many seconds - you do not.

It all comes down to math - will you be parsing queries at such a rate that the parsing will inhibit your performance. When you parse and execute a query that takes 1ms (oltp) - it would, and it would kill your scalability.

When you parse and execute a query that takes 10 seconds, it would not.


And further, in a warehouse, you want the optimizer to optimize the query "as best as possible". So much so, that if

select * from t where x = 6;

and

select * from t where x = 7;

should use different plans - you want them to use different plans (because you are hitting large amounts of data).

So, yes, I agree with your dba.


search for:

"queries that take many seconds"

in quotes on this site to read more about this.

Soft Parse

A reader, November 24, 2011 - 5:53 am UTC

Tom,
If I have a query like (for example) in an Oracle Session1

Select count(*) from (Select column1, column2
from many_joined_tables));

I've executed it a few times (say) and assume the query's plan is cached.

If another user executes just the Inline query

Select column1, column2
from many_joined_tables;


Will this other user get the benefits of a "soft parse" for their query?


Tom Kyte
November 28, 2011 - 8:57 am UTC

no, the query would have to match EXACTLY.


select * from dual;
Select * from dual;
select * from DUAL;


would be three different queries each requiring a hard parse just due to the case being different.
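You can verify this yourself - after running the three variants above, each shows up as its own parent cursor (a sketch):

```sql
-- three textually different statements = three shared pool entries, three hard parses
select sql_text, loads, parse_calls, executions
  from v$sqlarea
 where upper(sql_text) like 'SELECT * FROM DUAL%';
```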

Find sqls with hard parses

Jayadevan, July 13, 2012 - 6:58 am UTC

Hi Tom,
Is there a query we can use to find SQLs that are getting hard-parsed many times? We are on 11gR2. We would like to do this for our production environment (so tracing is not an option).
Regards,
Jay
Tom Kyte
July 16, 2012 - 3:13 pm UTC

Hah, a trick question!!!

How many times does a query get hard parsed? Once. So, how can you have a query that is hard parsed many times - in general - you won't!

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1163635055580

it'll take a swipe at your shared pool (v$sql) and look for sqls that are the same if you take out their constants.

OEM has a report of this too - run an ADDM report.

Thank you

Jay, July 19, 2012 - 10:04 pm UTC

I did not realize that it was a trick question. Good catch. Thank you for the solution.

validity of a soft-parsed statement

Sue, August 16, 2012 - 11:17 am UTC

Hello Tom,

Once a query starts getting soft-parsed, how long will it continue that way before Oracle decides that the statement needs hard-parsing again?

Basically, I have this weekly job that processes about 10 million rows of data and takes about 30 hours. During execution, after the first time, the queries in the proc are soft-parsed. Then, after a gap of 5 days, the program is run again - will the queries be hard-parsed again? Or is it likely that the query plan is still in the cache?
Tom Kyte
August 17, 2012 - 2:58 pm UTC

maybe forever.

maybe five milliseconds.


it depends. Do the queries have to be aged out of the shared pool in the next five days ? maybe yes, maybe no.

do the tables the queries are on get analyzed which would invalidate the cursors? maybe yes, maybe no.

and so on.


does something happen in those five days that require the sql to be invalidated or flushed? maybe.


does 11g not do a soft parse when we set session cursor sharing?

alan tan, January 15, 2013 - 1:44 am UTC

Hi,Tom
With Oracle 11gR2, I noticed that when I set session_cached_cursors to a non-zero value, Oracle will not soft parse the subsequent SQLs.
Is this a new feature of 11gR2?
SQL> select  a.name name, b.value
  2            from v$statname a, v$mystat b
  3           where a.statistic# = b.statistic# and a.name like 'parse%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
parse time cpu                                                            0
parse time elapsed                                                        9
parse count (total)                                                      14
parse count (hard)                                                        0
parse count (failures)                                                    0
parse count (describe)                                                    0

6 rows selected.

SQL> alter session set session_cached_cursors=500;

Session altered.

SQL> begin
  2  for i in 1 .. 1000
  3         loop
  4               execute immediate 'select /*+ 1000 */ count(*) from emp' ;
  5         end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select  a.name name, b.value
  2            from v$statname a, v$mystat b
  3           where a.statistic# = b.statistic# and a.name like 'parse%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
parse time cpu                                                            0
parse time elapsed                                                        9
parse count (total)                                                      18
parse count (hard)                                                        1
parse count (failures)                                                    0
parse count (describe)                                                    0

6 rows selected.

SQL>
SQL> alter session set session_cached_cursors=100;

Session altered.

SQL> select  a.name name, b.value
  2            from v$statname a, v$mystat b
  3           where a.statistic# = b.statistic# and a.name like 'parse%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
parse time cpu                                                            0
parse time elapsed                                                        9
parse count (total)                                                      20
parse count (hard)                                                        1
parse count (failures)                                                    0
parse count (describe)                                                    0

6 rows selected.

SQL>
SQL>
SQL> begin
  2  for i in 1 .. 1000
  3         loop
  4               execute immediate 'select /*+ 1000 */ count(*) from emp' ;
  5         end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select  a.name name, b.value
  2            from v$statname a, v$mystat b
  3           where a.statistic# = b.statistic# and a.name like 'parse%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
parse time cpu                                                            0
parse time elapsed                                                        9
parse count (total)                                                      23
parse count (hard)                                                        1
parse count (failures)                                                    0
parse count (describe)                                                    0

6 rows selected.

SQL>
SQL>
SQL> alter session set session_cached_cursors=10;

Session altered.

SQL>
SQL> begin
  2  for i in 1 .. 1000
  3         loop
  4               execute immediate 'select /*+ 1000 */ count(*) from emp' ;
  5         end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select  a.name name, b.value
  2            from v$statname a, v$mystat b
  3           where a.statistic# = b.statistic# and a.name like 'parse%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
parse time cpu                                                            0
parse time elapsed                                                        9
parse count (total)                                                      27
parse count (hard)                                                        2
parse count (failures)                                                    0
parse count (describe)                                                    0

6 rows selected.

SQL>
SQL> alter session set session_cached_cursors=0;

Session altered.

SQL>
SQL> begin
  2  for i in 1 .. 1000
  3         loop
  4               execute immediate 'select /*+ 1000 */ count(*) from emp' ;
  5         end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select  a.name name, b.value
  2            from v$statname a, v$mystat b
  3           where a.statistic# = b.statistic# and a.name like 'parse%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
parse time cpu                                                            0
parse time elapsed                                                        9
parse count (total)                                                    1030
parse count (hard)                                                        2
parse count (failures)                                                    0
parse count (describe)                                                    0

6 rows selected.

SQL>
SQL>
SQL> alter session set session_cached_cursors=1;

Session altered.

SQL> begin
  2  for i in 1 .. 1000
  3         loop
  4               execute immediate 'select /*+ 1000 */ count(*) from emp' ;
  5         end loop;
  6  end;
  7  /

PL/SQL procedure successfully completed.

SQL> select  a.name name, b.value
  2            from v$statname a, v$mystat b
  3           where a.statistic# = b.statistic# and a.name like 'parse%';

NAME                                                                  VALUE
---------------------------------------------------------------- ----------
parse time cpu                                                            0
parse time elapsed                                                        9
parse count (total)                                                    1034
parse count (hard)                                                        3
parse count (failures)                                                    0
parse count (describe)                                                    0

6 rows selected.


Regards
Alan
Tom Kyte
January 15, 2013 - 10:48 am UTC

plsql uses session_cached_cursors since version 9.2.0.4 (I think it was .4, pretty sure) to determine how many cursors to cache open for us.

that always worked for static SQL in plsql... if you were to run your test with static sql - you'd see similar results

Since 10g, execute immediate has been able to cache cursors open as well, as long as the cursor is using the same text time after time...

In 11g - there is a bug (patch exists) whereby parse count is not getting incremented when it should in certain situations - this is one of them. If we run this with sql trace and tkprof we can see this.

so I ran this code (@trace enabled sql trace, @tk finds my trace file and tkprofs it for me)

connect/
show parameter session_cached
@mystat "parse count (total)"
@trace
declare
    l_query varchar2(2000);
    l_data  varchar2(30);
begin
    for i in 1 .. 1000
    loop
        l_query := 'select * from dual d' || mod(i,2);
        execute immediate l_query into l_data;
    end loop;
end;
/
@mystat2
@tk "sys=no"

connect /
show parameter session_cached
@mystat "parse count (total)"
@trace
declare
    l_query varchar2(2000);
    l_data  varchar2(30);
begin
    for i in 1 .. 1000
    loop
        l_query := 'select * from dual d';
        execute immediate l_query into l_data;
    end loop;
end;
/
@mystat2
@tk "sys=no"



the results were:


ops$tkyte%ORA11GR2> connect/
Connected.
ops$tkyte%ORA11GR2> show parameter session_cached

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
session_cached_cursors               integer     50
ops$tkyte%ORA11GR2> @mystat "parse count (total)"
ops$tkyte%ORA11GR2> set echo off

NAME                         VALUE
----------------------- ----------
parse count (total)             17

ops$tkyte%ORA11GR2> @trace
ops$tkyte%ORA11GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA11GR2> declare
  2          l_query varchar2(2000);
  3          l_data  varchar2(30);
  4  begin
  5          for i in 1 .. 1000
  6          loop
  7                  l_query := 'select * from dual d' || mod(i,2);
  8                  execute immediate l_query into l_data;
  9          end loop;
 10  end;
 11  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> @mystat2
ops$tkyte%ORA11GR2> set echo off

NAME                         VALUE DIFF
----------------------- ---------- ------------------
parse count (total)             22                5

ops$tkyte%ORA11GR2> @tk "sys=no"
ops$tkyte%ORA11GR2> column trace new_val TRACE
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  2    from v$process a, v$session b, v$parameter c, v$instance d
  3   where a.addr = b.paddr
  4     and b.audsid = userenv('sessionid')
  5     and c.name = 'user_dump_dest'
  6  /

TRACE
-------------------------------------------------------------------------------
/home/ora11gr2/app/ora11gr2/diag/rdbms/ora11gr2/ora11gr2/trace/ora11gr2_ora_507
9.trc


ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ops$tkyte%ORA11GR2> !tkprof &TRACE ./tk.prf &1

TKPROF: Release 11.2.0.3.0 - Development on Tue Jan 15 11:44:17 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.



ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> edit tk.prf

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> show parameter session_cached

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
session_cached_cursors               integer     50
ops$tkyte%ORA11GR2> @mystat "parse count (total)"
ops$tkyte%ORA11GR2> set echo off

NAME                         VALUE
----------------------- ----------
parse count (total)             17

ops$tkyte%ORA11GR2> @trace
ops$tkyte%ORA11GR2> alter session set events '10046 trace name context forever, level 12';

Session altered.

ops$tkyte%ORA11GR2> declare
  2          l_query varchar2(2000);
  3          l_data  varchar2(30);
  4  begin
  5          for i in 1 .. 1000
  6          loop
  7                  l_query := 'select * from dual d';
  8                  execute immediate l_query into l_data;
  9          end loop;
 10  end;
 11  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> @mystat2
ops$tkyte%ORA11GR2> set echo off

NAME                         VALUE DIFF
----------------------- ---------- ------------------
parse count (total)             21                4

ops$tkyte%ORA11GR2> @tk "sys=no"
ops$tkyte%ORA11GR2> column trace new_val TRACE
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
  2    from v$process a, v$session b, v$parameter c, v$instance d
  3   where a.addr = b.paddr
  4     and b.audsid = userenv('sessionid')
  5     and c.name = 'user_dump_dest'
  6  /

TRACE
-------------------------------------------------------------------------------
/home/ora11gr2/app/ora11gr2/diag/rdbms/ora11gr2/ora11gr2/trace/ora11gr2_ora_508
3.trc


ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> disconnect
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ops$tkyte%ORA11GR2> !tkprof &TRACE ./tk.prf &1

TKPROF: Release 11.2.0.3.0 - Development on Tue Jan 15 11:44:20 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.



ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2> edit tk.prf



so it looks like it does not parse over and over again - but... if we look at the trace file we'll find:


select * from dual d1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      500      0.00       0.00          0          0          0           0
Execute    500      0.00       0.00          0          0          0           0
Fetch      500      0.00       0.00          0       1000          0         500
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1500      0.01       0.01          0       1000          0         500
********************************************************************************
select * from dual d0

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse      500      0.00       0.00          0          0          0           0
Execute    500      0.00       0.00          0          0          0           0
Fetch      500      0.00       0.00          0       1000          0         500
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     1500      0.01       0.01          0       1000          0         500


it does parse the query when we flip flop the values - however:

select * from dual d

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   1000      0.00       0.00          0          0          0           0
Fetch     1000      0.00       0.01          0       2000          0        1000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2001      0.01       0.01          0       2000          0        1000



when we parse the same sql statement - it can cache it open.


so - in short:

since 9.2.0.4 plsql has been using session_cached_cursors to determine how many cursors to cache open (even when you close them). before that, it used open_cursors.

since 10g, plsql has been able to cache native dynamic sql open - as long as you use the same statement in that line of code over and over.


this has always been true for static sql in plsql (put some sql in a stored procedure, have session cached cursors set > 0, call procedure 1,000 times - your sql will only have been parsed ONCE - major benefit of using stored procedures!)

and is now true since 10g for dynamic sql (but not a ref cursor... they are always parsed...)
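The per-call-site behavior described above can be sketched with a toy Python model (purely illustrative, not Oracle internals; the class and counter names are made up). PL/SQL keeps the dynamic-SQL cursor open at a call site only while the statement text stays the same, which is why the flip-flopping `d0`/`d1` texts in the trace were parsed 500 times each while the constant text was parsed once:

```python
# Toy model (NOT Oracle internals) of PL/SQL caching a dynamic-SQL
# cursor open at an execute-immediate call site: the open cursor is
# reused only while the statement text at that call site is unchanged.
class CallSite:
    def __init__(self):
        self.cached_text = None   # text of the cursor currently held open
        self.parse_count = 0

    def execute_immediate(self, text):
        if text != self.cached_text:
            self.parse_count += 1     # different text: must parse
            self.cached_text = text   # cache the new cursor open
        # else: cursor already open for this text -> no parse at all

# Same text every iteration: parsed once, then reused 999 times.
same = CallSite()
for i in range(1000):
    same.execute_immediate('select * from dual d')
print(same.parse_count)   # 1

# Alternating texts defeat the one-slot cache: a parse every iteration.
site = CallSite()
for i in range(1000):
    site.execute_immediate('select * from dual d%d' % (i % 2))
print(site.parse_count)   # 1000
```

This mirrors the tkprof output above: 1 parse for `select * from dual d`, but 500 parses each for `d0` and `d1`.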

does 11g not do a soft parse when we set cursor_cached_curosr?

alan tan, January 15, 2013 - 1:58 am UTC

Sorry,Tom
I mean session_cached_cursors in above post.

Thanks!
Alan

does 11g not do a soft parse when we set cursor_cached_curosr?

alan tan, January 16, 2013 - 12:27 am UTC

Hi,Tom
Thanks for your excellent reply.
But I got into confusion now...

At the beginning of this thread, somebody said
'If a statement is in the session cursor cache, it can
avoid a soft parse'.

you said:
'it will not avoid the soft parse, it'll be a softer soft parse.

The soft parse count will still go up, see how the stats:

STAT...parse count (total) 1015 1014 -1


show that -- we parsed 1000 times that statement in both. We just parsed it "softer" the second
time.....'


But from your test in reply to my post, it looks like when SQL is session cached, Oracle will just parse it once and execute it over and over, with no soft parse or softer soft parse needed.

select * from dual d

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   1000      0.00       0.00          0          0          0           0
Fetch     1000      0.00       0.01          0       2000          0        1000
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2001      0.01       0.01          0       2000          0        1000

So, is it safe to say that when a sql is in the session cursor cache, Oracle will not soft parse it or softer-soft parse it, just execute it?

Regards
Alan

Tom Kyte
January 16, 2013 - 11:54 am UTC

'If a statement is in the session cursor cache, it can
avoid a soft parse'.


that is a false statement, it is not true. that is not what session cached cursors are about. they result in a softer soft parse, but they are a parse nonetheless.
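The distinction can be sketched with a toy Python model (conceptual only, not Oracle internals; all names are invented). With a session cursor cache the parse call still happens every time, so parse count climbs, but the session skips the expensive shared-pool search because it remembers where the cursor lives:

```python
# Toy model (NOT Oracle internals) of the "softer soft parse":
# the parse still happens, but a session-level shortcut avoids
# the full shared-pool search after the first time.
class Session:
    def __init__(self, session_cached_cursors=0):
        self.cache = {}                  # text -> shortcut to shared cursor
        self.limit = session_cached_cursors
        self.parse_count = 0
        self.shared_pool_searches = 0

    def parse(self, text):
        self.parse_count += 1            # a parse happens either way
        if text in self.cache:
            return self.cache[text]      # softer soft parse: shortcut hit
        self.shared_pool_searches += 1   # soft parse: full shared-pool search
        cursor = 'shared cursor for ' + text
        if len(self.cache) < self.limit:
            self.cache[text] = cursor    # remember it for next time
        return cursor

s = Session(session_cached_cursors=50)
for _ in range(1000):
    s.parse('select count(*) from emp')
print(s.parse_count)            # 1000: still a parse every time
print(s.shared_pool_searches)   # 1: searched the shared pool only once
```

This matches the statistics quoted earlier: parse count (total) still goes up by 1000, the work per parse is just smaller.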


But from your test in reply to my post, it looks like when SQL is session cached, Oracle
will just parse it once and execute it over and over, no more soft parse and softer
soft parse needed.


I demonstrated PLSQL's ability to cache cursors open - plsql uses session cached cursors as a bound, a limit on the number of cursors to cache. What you are seeing parse-wise is a demonstration of PLSQL's ability to cache a cursor OPEN, not session cached cursors.




So, is it safe to say that when a sql is in the session cursor cache, Oracle will not soft
parse it or softer-soft parse it, just execute it?


no, it is not safe to say that; it is wrong to say that.




do a test without plsql - in java or something - and you'll see that if you prepare/execute/close in a loop (without statement caching enabled), you'll be parsing every time you prepare.
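The prepare/execute/close pattern can be modeled with a toy Python sketch (illustrative only, not a real driver API; the class and method names are made up). Without a client statement cache, every prepare() is a parse call to the server; with one, close() keeps the cursor open so re-preparing the same text skips the parse:

```python
# Toy model (NOT a real driver) of client-side statement caching.
class Connection:
    def __init__(self, stmt_cache_size=0):
        self.stmt_cache_size = stmt_cache_size
        self.cache = {}          # text -> cursor kept open after close()
        self.parse_count = 0

    def prepare(self, text):
        if text in self.cache:
            return text          # cached open cursor: no parse
        self.parse_count += 1    # round trip to the server to parse
        return text

    def close(self, handle):
        if len(self.cache) < self.stmt_cache_size:
            self.cache[handle] = True   # keep the cursor open instead

SQL = 'select * from dual'

no_cache = Connection(stmt_cache_size=0)
for _ in range(1000):
    h = no_cache.prepare(SQL)
    no_cache.close(h)
print(no_cache.parse_count)   # 1000: a parse per iteration

cached = Connection(stmt_cache_size=20)
for _ in range(1000):
    h = cached.prepare(SQL)
    cached.close(h)
print(cached.parse_count)     # 1: parsed once, then reused
```

Real drivers (JDBC implicit statement caching, for example) provide this behind the same prepare/close calls, which is why enabling the cache changes the parse count without changing application code.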

does 11g not do a soft parse when we set cursor_cached_curosr?

alan tan, January 17, 2013 - 1:25 am UTC

I understand now.

Thank you,Tom.
Great man!

Regards
Alan

nice

kylie, August 21, 2014 - 7:19 am UTC

It's very educational and nice; it is useful for me.

parsing with cursor_sharing=FORCE

Steve, January 13, 2015 - 5:39 pm UTC

Hi Tom,

When cursor_sharing is set to FORCE, do queries which are properly written with bind variables still consistently get soft parsed?

Thanks,

Steve

Sandeep, March 04, 2015 - 9:32 am UTC

Hi Tom,

But ideally the check should happen at the beginning: first see whether the requested query already exists in the shared pool, then check an execution_state flag (imaginary, my own idea). If the state is one, implying that the query executed successfully last time, there would be no need for syntax and semantic checks; this imaginary flag could be maintained in the buffer as is done for result caching. I am sorry for my imagination, if it turns out to be a stupid one.

Regards,

Sandeep, March 04, 2015 - 9:38 am UTC

Thanks Tom

I got it: there may be changes at the schema as well as the table level, so the semantic check is required each time the same query is requested, and it would not hurt to check the syntax again instead of maintaining one more buffer component.

Thank You.

Sandeep, March 04, 2015 - 9:55 am UTC

Hi Tom,

I saw the below, and it is a bit unclear to me.

http://www.dba-oracle.com/t_hard_vs_soft_parse_parsing.htm

In the above it is clearly said that generation of the execution plan is also done in a soft parse. While reading about adaptive cursor sharing from ( http://oracle-base.com/articles/11g/adaptive-cursor-sharing-11gr1.php ), it highlights reuse of the existing execution plan as a common thing, which I find in your explanation as well for soft parse. Kindly let me know where I am missing the link.

Regards

Hints for Hard parsing

Rajeshwaran, Jeyabal, July 18, 2016 - 6:29 am UTC

We are developing a batch job for a DW application which runs every hour, and the data volume differs for each execution. So we are writing the code in such a way that each execution does a hard parse and never reuses the plan from the shared pool.

something like this.

rajesh@ORA11G> create or replace procedure
  2  process_data(x varchar2)
  3  as
  4     l_sql long;
  5     l_value int;
  6  begin
  7     l_sql := ' select /*#inputs#*/ max(object_id) '||
  8             ' from t '||
  9             ' where owner = :x' ;
 10     l_sql := replace(l_sql,'#inputs#', to_char(sysdate,'yyyymmddhhmiss'));
 11     execute immediate l_sql into l_value using x ;
 12     for k in (select * from table(dbms_xplan.display_cursor))
 13     loop
 14             dbms_output.put_line(k.plan_table_output);
 15     end loop;
 16     dbms_output.put_line(' l_value ='||l_value);
 17  end;
 18  /

Procedure created.

rajesh@ORA11G> exec process_data('SYS');
SQL_ID  bsd6csyxdm2at, child number 0
-------------------------------------
 select /*20160718115308*/ max(object_id)  from t  where owner = :x

Plan hash value: 2966233522

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   631 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| T    |  2496 | 27456 |   631   (2)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OWNER"=:X)

 l_value =106502

PL/SQL procedure successfully completed.

rajesh@ORA11G> exec process_data('SYS');
SQL_ID  3t2n1qm9v97ku, child number 0
-------------------------------------
 select /*20160718115312*/ max(object_id)  from t  where owner = :x

Plan hash value: 2966233522

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   631 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    11 |            |          |
|*  2 |   TABLE ACCESS FULL| T    |  2496 | 27456 |   631   (2)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OWNER"=:X)

 l_value =106502

PL/SQL procedure successfully completed.

rajesh@ORA11G>


Do we have any hints available in Oracle to explicitly force a hard parse for each execution of SQL? (something like the bind_aware hint)
Connor McDonald
July 18, 2016 - 10:01 am UTC

You could flush the statement from the library cache:

https://blogs.oracle.com/mandalika/entry/oracle_rdbms_flushing_a_single

Or abuse VPD to assign unique predicates to the statements:

http://oracle-randolf.blogspot.co.uk/2009/02/how-to-force-hard-parse.html

But - is there any particular reasons you need to force the hard parse every time?

Hints for Hard parsing

Rajeshwaran, Jeyabal, July 18, 2016 - 10:57 am UTC

But - is there any particular reasons you need to force the hard parse every time?

Kind of bind variable peeking, but we are still on 10g (adaptive cursor sharing not possible ;) ).
Every hour this procedure runs with some inputs.
    a) on the first execution, we go hard parse and get the best plan.
    b) for subsequent executions, we either use different or the same inputs - but the underlying table may have very different data sets from the previous run,
    so we don't want to reuse the plan; instead we need a hard parse to get the best plan.

Do you have any other ways to get the best plan without hard parsing?
Chris Saxon
July 18, 2016 - 12:17 pm UTC

but the underlying table may have very different data sets from the previous run.

So you've had lots of DML on the table between runs? Or it's just the nature of what you're selecting?

In either case, you could gather stats with the no_invalidate parameter set to false:

The procedure invalidates the dependent cursors immediately if set to FALSE

http://docs.oracle.com/cd/B19306_01/appdev.102/b14258/d_stats.htm#i1035352

java.37

java.37, December 13, 2020 - 11:48 am UTC

There are very good comments, thank you very much for the information.

Parsing in Oracle Explained in detail

Srinivasarao, August 17, 2021 - 4:55 am UTC

Excellent explanation of Parsing, Hard and Soft Parse. one cannot ask for more.
Chris Saxon
August 17, 2021 - 12:23 pm UTC

Glad we could help

Why syntax check every time

Asim, August 14, 2022 - 6:59 pm UTC

I have gone through this discussion and understood a lot, but one thing I can't.

I understand that semantic checks are necessary every time during a soft parse, regardless of cache hit or miss, because user privileges and object names may have changed between one execution and the next. But if the library cache hits, meaning the hash value matched exactly, then why is the syntax check done again?

Connor McDonald
August 15, 2022 - 5:16 am UTC

A syntax check can be done "locally", ie, by the requesting session. No requirement to access shared resources etc.

A library cache hit means access to a shared resource. That's expensive (relatively)
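That division of labor can be sketched as a toy Python model (conceptual only, not Oracle internals; all names are invented): the syntax check touches nothing shared, the library-cache lookup happens under a lock on a shared structure, and only a miss pays for the full hard parse. Note that the syntax check runs on every call, even on a cache hit:

```python
# Toy sketch (NOT Oracle internals) of the parse pipeline discussed
# in this thread: local syntax check, then shared-cache lookup,
# then (only on a miss) the expensive optimization of a hard parse.
import hashlib
import threading

shared_pool = {}                       # text hash -> cached plan
library_cache_latch = threading.Lock() # shared resource: serialized access
counters = {'syntax': 0, 'soft': 0, 'hard': 0}

def syntax_check(text):
    counters['syntax'] += 1            # local, cheap, no shared access
    return text.lower().startswith(('select', 'insert', 'update', 'delete'))

def parse(text):
    if not syntax_check(text):
        raise ValueError('invalid SQL statement')
    key = hashlib.sha1(text.encode()).hexdigest()
    with library_cache_latch:          # relatively expensive: shared structure
        if key in shared_pool:
            counters['soft'] += 1      # soft parse: reuse the cached plan
            return shared_pool[key]
        counters['hard'] += 1          # hard parse: optimize and cache
        shared_pool[key] = 'plan for ' + text
        return shared_pool[key]

for _ in range(3):
    parse('select * from dual')

print(counters)   # {'syntax': 3, 'soft': 2, 'hard': 1}
```

The counters show why the syntax check is done again on a hit: it costs the session almost nothing, whereas skipping it would require consulting the shared cache first, which is the expensive part.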