Question and Answer

Tom Kyte

Thanks for the question, Ravi.

Asked: December 15, 2000 - 3:26 pm UTC

Answered by: Tom Kyte - Last updated: May 14, 2013 - 12:27 pm UTC

Category: Database - Version: 805

Viewed 50K+ times


You Asked

Tom,


In my report.txt, week after week, I am seeing contention in the library cache latch statistics.

In the library cache latch statistics, the SQL AREA GETHITRATIO is only 0.172 and the PINHITRATIO is only 0.685.

In the latch statistics section,
for library cache

Gets 15934992
misses 114277
hit_ratio .993
sleeps 68578
sleeps/misses 0.6

In order to improve this, I increased the shared pool size, set the shared pool reserved size, and pinned the often-reloaded objects. Are these steps OK? What other steps should I take to avoid the latch contention on the library cache?

Thanks
Ravi

and we said...

Library cache latch contention is typically caused by NOT using bind variables; it is due to excessive parsing of statements.

One way to see if this might be the case in your situation is to run a script like:


create table t1 as select sql_text from v$sqlarea;

alter table t1 add sql_text_wo_constants varchar2(1000);

create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
l_query long;
l_char varchar2(1);
l_in_quotes boolean default FALSE;
begin
for i in 1 .. length( p_query )
loop
l_char := substr(p_query,i,1);
if ( l_char = '''' and l_in_quotes )
then
l_in_quotes := FALSE;
elsif ( l_char = '''' and NOT l_in_quotes )
then
l_in_quotes := TRUE;
l_query := l_query || '''#';
end if;
if ( NOT l_in_quotes ) then
l_query := l_query || l_char;
end if;
end loop;
l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
for i in 0 .. 8 loop
l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
end loop;
return upper(l_query);
end;
/
update t1 set sql_text_wo_constants = remove_constants(sql_text);

select sql_text_wo_constants, count(*)
from t1
group by sql_text_wo_constants
having count(*) > 100
order by 2
/



The output of that last query will show you statements that are identical in the shared pool after all numbers and character string constants have been removed. These statements -- and more importantly their counts -- are the potential bottlenecks. In addition to causing the contention, they will be HUGE cpu consumers.
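For readers who want to experiment outside the database, the normalization the PL/SQL function performs can be sketched in plain Python. This is a hypothetical re-implementation for illustration only (the name `remove_constants` just mirrors the original); it collapses quoted string literals to '#' and digit runs to @, the same idea as the function above:

```python
import re

def remove_constants(query: str) -> str:
    """Mimic the PL/SQL function above: collapse every quoted string
    literal to '#' and every run of digits to @, then uppercase."""
    out = re.sub(r"'(?:[^']|'')*'", "'#'", query)  # 'bob', 'it''s' -> '#'
    out = re.sub(r"\d+", "@", out)                 # 5, 1234 -> @
    out = re.sub(r"\s+", " ", out)                 # collapse whitespace runs
    return out.upper()

print(remove_constants("select * from t where x = 5 and name = 'bob'"))
# SELECT * FROM T WHERE X = @ AND NAME = '#'
```

Like the PL/SQL version, this also mangles identifiers that contain digits (T1 becomes T@), a limitation discussed later in this thread.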

If you discover your applications do not use bind variables -- you must have this corrected. You'll never have a good hit ratio if everyone submits "unique" SQL. Your shared pool will never be used properly, and you'll burn excessive CPU (90% of the time it takes to process "insert into t values ( 1 )" is parsing). If you use "insert into t values ( :x )" and bind the value 1, then the next person who runs that insert will benefit from your work and run that much faster.
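The same principle can be demonstrated in any client language. Below is a hedged sketch using Python's built-in sqlite3 module -- SQLite's `?` placeholder stands in for Oracle's `:x` bind variable; the statement-sharing mechanics differ, but the idea of one statement text executed with many inputs is identical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (x integer)")

# Without binds: each value yields a distinct statement text, so a
# server like Oracle would hard-parse every single one.
for i in range(3):
    conn.execute(f"insert into t values ({i})")

# With binds: one statement text, re-executed with different inputs --
# parsed once, reused by everyone who runs it afterwards.
for i in range(3, 6):
    conn.execute("insert into t values (?)", (i,))

print(conn.execute("select count(*) from t").fetchone()[0])  # 6
```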

and you rated our response

  (176 ratings)


Reviews

May 02, 2001 - 7:15 am UTC

Reviewer: Pedro Oliveira from Lisbon, Portugal


May 26, 2001 - 11:25 pm UTC

Reviewer: Sean from Atlanta


review of bstat/estat question

September 07, 2001 - 9:26 am UTC

Reviewer: Wayne from DFW, Texas

Tom,

The query you gave in your answer sheds good light on tuning using the bstat/estat utilities.
Wayne

Performance stuff

December 11, 2001 - 2:12 am UTC

Reviewer: Tania Wroe from South Africa

Excellent

anxious to try this out

April 30, 2002 - 2:25 pm UTC

Reviewer: A reader

I have been trying to understand what is behind my extremely high library cache hit ratio (99.5%), yet relatively poor gethitratio of 60%. From what I read no physical db tuning would improve the gethitratio since there was plenty of free memory for the shared pool.

Error while processing

May 24, 2002 - 5:24 am UTC

Reviewer: Raman Pujani from India

Hi Tom,

Most of the time when I run remove_constants and the query to generate the report, I get:

update t1 set sql_text_wo_constants = remove_constants(sql_text)
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel

What could be the reason ?

Regards,
Raman.

Tom Kyte

Followup  

May 24, 2002 - 9:31 am UTC

Please contact support -- it will be a bug, you may require a patch. I've never seen that behaviour myself.

There will be a trace file on the server they will need to diagnose this issue.

meanwhile, you can do something like:

declare
l_tmp long;
begin
for x in ( select rowid rid, sql_text from t1 )
loop
l_tmp := remove_constants( x.sql_text );
update t1 set sql_text_wo_constants = l_tmp where rowid = x.rid;
end loop;
end;
/

instead of the single update.


Can you please explain ?

June 21, 2002 - 10:03 am UTC

Reviewer: A reader

Hi tom ,

the solution/workaround you have given above -- does it use bind variables?

Because even after all your explanations in other questions and in your book, I still don't know the difference between a cursor variable and a bind variable.

I think we can use bind variables in the following way -- will the performance be better?


/********************************************/
declare
type rc is ref cursor;
x rc;
l_rid rowid;
l_text varchar2(1000);
begin
open x for
'select rowid rid, sql_text from t1'; -- returns 10000 rows
loop
fetch x into l_rid, l_text;
exit when x%notfound;
execute immediate
'update t1 set sql_text_wo_constants = :a where rowid = :y'
using remove_constants( l_text ), l_rid;
end loop;
close x;
end;
/
/*********************************************/

Could you please explain. Thanks..


Tom Kyte

Followup  

June 21, 2002 - 1:13 pm UTC

NO -- that would be a terribly slow, bad way to do it.

It violates my rule #1:

a) if you can do it in a single SQL statement -- DO IT.


All of the code above makes 100% use of bind variables where appropriate. When considering "have I done this bind variable thing right" you only need look at the sql statement:

update t1 set sql_text_wo_constants = remove_constants(sql_text);

and then ask "will I ever submit a statement that looks ALMOST exactly like that except that some constant has changed?"

The answer in this case is "definitely NOT, there aren't even any constants in there".


When you see queries like:

update t set x = 5 where y = 2;
update t set x = 6 where y = 1;
update t set x = 8 where y = 43;

Those statements demand bind variables. That should be a single statement:

update t set x = :x where y = :y;

that is executed with different inputs. As my update:

update t1 set sql_text_wo_constants = remove_constants(sql_text);

has no constants -- there are thusly NO binds to be done.

Tom, that is excellent!

June 21, 2002 - 1:33 pm UTC

Reviewer: Sanjay from Canada

Could you clarify further how to read the output from the query, to make out whether bind variables are being used or not?
SQL> /

Function created.

SQL>  update t1 set sql_text_wo_constants = remove_constants(sql_text);

276 rows updated.

SQL> select sql_text_wo_constants, count(*)  from t1 group by sql_text_wo_constants
  2  having count(*) > 100 order by 2;

no rows selected

SQL> /

no rows selected

SQL> 2
  2* having count(*) > 100 order by 2
SQL> c/100/5
  2* having count(*) > 5 order by 2
SQL> /

SQL_TEXT_WO_CONSTANTS
--------------------------------------------------------------------------------
  COUNT(*)
----------
SELECT OBJECT_NAME FROM ALL_OBJECTS WHERE OBJECT_ID = @
        92

What does @ mean here? 

Tom Kyte

Followup  

June 21, 2002 - 2:33 pm UTC

That means you have 92 copies of the statement in the shared pool and the ONLY difference between them is the number in the object_id =


My remove constants turns any character string literal into '#' and any number into @. It then counts how many statements are in the shared pool that are the same EXCEPT for the constants.
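The counting step can be sketched the same way. The snippet below is illustrative Python (not part of the original answer): it groups a toy "shared pool" by normalized statement text, just as the GROUP BY query does against T1:

```python
import re
from collections import Counter

def normalize(sql: str) -> str:
    # same idea as remove_constants: literals -> '#', digit runs -> @
    sql = re.sub(r"'(?:[^']|'')*'", "'#'", sql)
    return re.sub(r"\d+", "@", sql).upper()

pool = [
    "select object_name from all_objects where object_id = 42",
    "select object_name from all_objects where object_id = 7",
    "select object_name from all_objects where object_id = 1999",
    "select sysdate from dual",
]
shape, count = Counter(normalize(s) for s in pool).most_common(1)[0]
print(count, shape)
# 3 SELECT OBJECT_NAME FROM ALL_OBJECTS WHERE OBJECT_ID = @
```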

If you

select sql_text from v$sql where upper(sql_text) like 'SELECT OBJECT_NAME FROM ALL_OBJECTS WHERE OBJECT_ID%'

you'll see what I mean. In all likelihood, you have someone NOT using bind variables.

ORA-03113

June 21, 2002 - 5:22 pm UTC

Reviewer: Harrison Picot from Haymarket, VA USA

ORA-03113: end-of-file on communication channel

I am sorry that I can't remember the context, but I have seen that message in relation to some kind of a space problem. It wasn't an Oracle bug; it was my bug. I can't remember more about it than that, but I remember thinking that it was "obvious" after I found it. Sorry this is so vague, but had I called Oracle support, I'm not sure that message would trigger an answer to the problem I had. At the time I thought my database had been corrupted by lightning, and restored from backups, but when I got to the same point again, I got the same error and that was when the light went on. Anyway, take this as a warning that ORA-03113 is not a clear message, so checking your logs (as Tom suggests) will be worth a lot more than that error.

Can you explain this ?

June 24, 2002 - 9:49 am UTC

Reviewer: A reader

Hi tom,

Thanks, for your answer.
I would like to still ask you one question on this bind variable thing.

Now, as you've explained:
1.) if I can do it in one statement, I should do it.

OK, my question is about your followup answer:

/***************************************************/
Followup:
Please contact support -- it will be a bug, you may require a patch. I've never
seen that behaviour myself.

There will be a trace file on the server they will need to diagnose this issue.

meanwhile, you can do something like:

declare
l_tmp long;
begin
for x in ( select rowid rid, sql_text from t1 )
loop
l_tmp := remove_constants( x.sql_text );
update t1 set sql_text_wo_constants = l_tmp where rowid = x.rid;
end loop;
end;
/

instead of the single update.

/********************************************************/


don't we have the same situation here, where we are saying

update t set x = l_tmp where y = x.rid

isn't it the same as

update t set x = 4 where y = 5

so won't the SQL statement be distinct at run time, and be executed once per row? Are we using bind variables? If yes, how? If not, would it be better in this case to bind using execute immediate?

Please explain..
thanks in advance

Tom Kyte

Followup  

June 24, 2002 - 9:52 am UTC

No, it is not. I am using host variables. The statement:

update t1 set sql_text_wo_constants = l_tmp where rowid = x.rid;

is really:

update t1 set sql_text_wo_constants = :BV1 where rowid = :BV2;

anywhere there is a PLSQL variable -- it is really a host variable, a bind variable. That is part of the beauty of PLSQL -- it makes binding almost MOOT. You have to absolutely go out of your way to NOT bind in plsql.





Thanks Tom

June 24, 2002 - 10:43 am UTC

Reviewer: A reader

Hi Tom,

Thanks for clearing this up for me.
Now I feel comfortable with bind variables and cursor variables, as you've acknowledged that a PL/SQL cursor variable / PL/SQL variable is itself a bind/host variable.

I will investigate and explore this further, because I know I have seen some illustrations of PL/SQL blocks where PL/SQL variables are used in dynamic statements.

I think, from my point of view, almost all statements can be written using PL/SQL variables (DDL statements excluded), but I will investigate further ...

thanks..



Regarding ORA-03113

June 24, 2002 - 3:56 pm UTC

Reviewer: Colin Davies from Pasadena, CA, USA

I just ran into this over the weekend while building a new instance on a new server. It was apparently the result of not having enough semaphores.

When I reduced the PROCESSES parameter in the init.ora file to 50, the error went away; when I raised it, it came back. We adjusted the kernel parameters in the /etc/system file, rebooted the server, and everything was fine again.

Tom Kyte

Followup  

June 25, 2002 - 8:35 am UTC

Well, let me say something about the 3113 error.

It means "something went wrong, we died".

It can be caused by literally thousands and thousands of things. It is a very generic error message.

A crash (segmentation fault) in the server process could cause it.
Not enough semaphores can cause it (but only on startup, not later).
Network errors can cause it.
and so on.

There is no single answer to "what causes a 3113".


If you look up support note <Note:19790.1> on metalink, you'll see:

Explanation:
There are numerous causes of this error. It is a 'catch all' type
error which means 'I cannot communicate with the oracle shadow process'.
This usually occurs when the Oracle server process has died for
some reason.

Diagnosis:
Oracle support need more information to determine what caused the
error. The most useful items are:

1) Does this occur when trying to (a) make a connection to the
database or (b) on an established connection ?

2) There should be a trace file from the failed session in
USER_DUMP_DEST

3) Does a particular statement reproduce this problem or is it
random ? Use SQL_TRACE on the session to find the problem
statement.


More on ORA error

June 25, 2002 - 3:02 pm UTC

Reviewer: Jose Fco. Cleto from Dominican Republic

I also saw this error once. One of my developers was trying to insert a record into a table that had a before insert/update trigger assigning to an :OLD.Field1 variable in the inserting branch.

create or replace trigger biur_t before insert or update on t for each row
begin
...
if inserting then
...
:OLD.Field1 := sysdate;
else
...
end if;
end;

Guess what??? It looks normal and compiles fine, but every time they tried to insert a record the same message popped up.

We searched the web, looked through some books, opened the trace files and every log, trying to find every string with the offending table or user -- and that's all, folks: we fixed it and it went away.


Determine who submitted the bad SQL?

August 21, 2002 - 9:14 pm UTC

Reviewer: Eric Raskin from Pleasantville, NY USA

I've run your script on my database and found a few statements that are candidates for bind variables. We have a bunch of Oracle Forms applications running, so I'm not sure which one is executing each statement. How do I determine who submitted the offending SQL?

Tom Kyte

Followup  

August 22, 2002 - 8:53 am UTC

Well, the shared pool by its very definition is "application agnostic" - it is after all the "shared pool"

It could be that EVERY application issues the offending sql.
It could be that 15 applications do
Or just one.

If you have the FMT files (text files representing the forms) you could "grep" them looking for references to the tables in question.

If you see the sql running (using showsql.sql available on this site via search for example), you'll be able to see who is doing it.

OEM Is not using Bind variables.

August 22, 2002 - 4:56 pm UTC

Reviewer: A reader

Hi Tom,

Your remove_constants logic is very useful to me.

I was testing your remove_constants function in one of our development environments and found one interesting point.

/* ORACLEOEM */ SELECT MIN_VALUE, MAX_VALUE, INCREMENT_BY, LAST_NUMBER, CYCLE_FL
AG, ORDER_FLAG, CACHE_SIZE FROM SYS.DBA_SEQUENCES WHERE SEQUENCE_OWNER = ('#') A
ND SEQUENCE_NAME = ('#')

SQL_TEXT_WO_CONSTANTS
--------------------------------------------------------------------------------
COUNT(*)
----------
113

/* ORACLEOEM */ SELECT TEXT FROM DBA_SOURCE WHERE OWNER = '#' AND NAME = '#' AND
TYPE = '#' ORDER BY LINE
567

/* ORACLEOEM */ SELECT INITCAP(STATUS) FROM DBA_OBJECTS WHERE OWNER = '#' AND OB
JECT_NAME = '#' AND OBJECT_TYPE = '#'
590

So OEM is not following your rule of thumb (it is not using bind variables).

My doubt is why can't OEM use bind variables?



Tom Kyte

Followup  

August 22, 2002 - 10:46 pm UTC

can't is the wrong word.

didn't is the correct word.

It would be an OEM bug. In fact, #1504636 is filed about this issue. Not everyone has read my book yet ;) Working on that....

Outstanding tuning tool!

August 22, 2002 - 5:35 pm UTC

Reviewer: Jimko from Rochester, NY

Using this great technique, I found and fixed three minor transgressions in my code that were wasting cache space. However the vast majority of SQL that was duplicated in the cache was generated by Oracle Sales Analyzer or Oracle Discoverer. Don't you have any in-house readers, Tom? ;-)

I can understand why OSA might not be able to use bind variables because of its dynamic nature, but Discoverer builds fairly straightforward queries against static tables. Are there any techniques we can use to shield the database from abuse by query-generation tools?

By coincidence, I had read you book up through page 448 last night. I experienced some deja vu when I resumed my reading this evening on page 449 where you cover this very function.

Your book is excellent. I don't know why more technical documentation isn't written in the 1st person. It's the natural way to communicate.

Tom Kyte

Followup  

August 22, 2002 - 10:49 pm UTC

Actually, in a data warehouse (such as you would be using these tools) -- not using Binds is ok.

You need binds when you are parsing X statements / second.

When you are parsing 1 statement every Y seconds, it is not nearly as important.

OLTP = mandatory binds
Mixed use = mostly (99%) binds
DW = binds if you want to (I cannot believe I said that!)

The plans for the data warehouse will in general be better if you use constants/literals. You do not hit the scalability wall, since you are running queries that take a long time (hence you are not parsing X queries per second, and the latching doesn't hit you). Also, the usage of the shared pool is "gentler", so the space problem is not bad at all -- we have time to manage the shared pool; we aren't in as much of a rush.



RE: DW = binds if you want to...

August 23, 2002 - 1:58 pm UTC

Reviewer: Mark A. Williams from Indianapolis, IN USA

DW = binds if you want to (i cannot believe i said that!)

Are you sure you don't have a fever? Body snatched? Did it feel weird to say it?

Just kidding, of course...

- Mark

Bind Variables

June 02, 2003 - 12:36 pm UTC

Reviewer: vinodhps from India,Chennai

Hi Tom ,
The function was pretty useful.. I corrected those queries which were found ........

oiisdba@OIIS8.WORLD> update t1 set sql_text_wo_constants = remove_constants(sql_text);

12867 rows updated.

oiisdba@OIIS8.WORLD> select sql_text_wo_constants, count(*)
2 from t1
3 group by sql_text_wo_constants
4 having count(*) > 100
5 order by 2
6 /

SQL_TEXT_WO_CONSTANTS
--------------------------------------------------------------------------------------------------
COUNT(*)
----------
SELECT * FROM T_FAS_SALE_AWA WHERE DOC_NO=@
115

SELECT ROWID,REAM_ID,PALLET_CODE,BATCH_NO,PART_KEY,QLTY_DEST,USERID,DATESTAMP,PRINT_CNT,RG_VARIETY
D,SMR_FLAG,REAM_STATUS,DEFECT_CD,REAM_WT FROM T_PMS_REAM_DTL WHERE PART_KEY = @ AND REAM_ID > '#'
D REAM_ID < '#' AND (PALLET_CODE=:@) AND (BATCH_NO=:@) ORDER BY REAM_ID
489

Thanks...

Very useful

June 05, 2003 - 4:37 pm UTC

Reviewer: A reader


how can pl/sql code lead to lack of usage of bind variables

June 10, 2003 - 1:48 pm UTC

Reviewer: A reader from ca

"anywhere there is a PLSQL variable -- it is really a host variable, a bind
variable. That is part of the beauty of PLSQL -- it makes binding almost MOOT.
You have to absolutely go out of your way to NOT bind in plsql.
"

well, if you do a simple select as below in PL/SQL, won't it violate the always-use-bind-variables rule (due to the hard-coded constant 'FGL')?
....
select a
into b
from t
where t.x = 'FGL';
....

Thanx!


Tom Kyte

Followup  

June 10, 2003 - 1:52 pm UTC

that is properly bound.

Tell me -- if you run that procedure over and over and over -- how many different SQL statements will you have?

One.


If on the other hand, you code:


open ref_cursor for 'select * from t where t.x = ' || some_variable;


the answer to that question will be:

infinitely many.


That is why you need to use binds in dynamic SQL:

open ref_cursor for 'select * from t where t.x = :x' USING some_variable;


but in static SQL in PL/SQL -- you need not. By definition, in static SQL the values will NEVER change, and you need binds only when the values CHANGE.
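The same static-versus-dynamic distinction exists in any client language. Here is a hedged sqlite3 sketch (illustrative only; `?` stands in for an Oracle bind) contrasting the concatenated and bound forms Tom shows above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (x text, y integer)")
conn.executemany("insert into t values (?, ?)", [("FGL", 1), ("ABC", 2)])

def concatenated(v):
    # like `open ref_cursor for '...' || some_variable`:
    # a brand new statement text for every distinct value
    return conn.execute("select y from t where x = '" + v + "'").fetchone()

def bound(v):
    # like `open ref_cursor for '... where t.x = :x' using some_variable`:
    # one statement text, many inputs
    return conn.execute("select y from t where x = ?", (v,)).fetchone()

print(concatenated("FGL"), bound("FGL"))  # (1,) (1,)
```

Both return the same rows; only the bound form keeps the statement text constant as the value changes.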

thanx for a clear explanation!

June 10, 2003 - 2:46 pm UTC

Reviewer: A reader from ca

now I understand -- in PL/SQL, if the SQL is static, the query does not change anyway; if not, you have to use variables, which are bound. So is there any way you could be violating the "always-bind" rule in PL/SQL?

Thanx!

oops! got it!

June 10, 2003 - 2:48 pm UTC

Reviewer: A reader from ca

I guess using the dynamic SQL you showed is the way you would violate the "always-bind" rule.. thanx! :)

static sql in pl/sql

June 10, 2003 - 3:44 pm UTC

Reviewer: A reader from ca

Tom
You said,
" but in static sql in plsql -- you need not. By definition, in static sql, the
values will NEVER change and you need bind only when the values CHANGE ".

Does this argue for encapsulating all select statements in functions (that could be reused later)? This would ensure that there are no two select statements that are the same except for different values within PL/SQL packages.
e.g.
instead of having

select x
from w
where v = 'F'

at one place
and

select x from w where v = 'G'

at another, you make a function of that select statement and use it.

Do you see any pros/cons of this approach?

Thanx for your wonderful site!:)


Tom Kyte

Followup  

June 10, 2003 - 7:57 pm UTC

That would be taking it to an extreme, perhaps. Not that I'm 100% down on it -- I just see it heading towards "no SQL in your PL/SQL package, you have to add it to this package over here" sorts of rules. A good idea taken too far can become an inhibitor.

Given that it is humans typing in the code, the probability that we'll flood the shared pool with hundreds of statements that *could have been shared* is limited.

So, no, I would not encapsulate all sql in functions.

If I could just get everyone to put all sql in plsql.... that would be a huge step forward

thanx - that makes sense!

June 10, 2003 - 8:03 pm UTC

Reviewer: A reader


bind variables in install statements

June 11, 2003 - 4:06 pm UTC

Reviewer: A reader from CA

A lot of times some data has to be present in the installed tables -- usually metadata...
In our case, those inserts don't use bind values.
For example,

insert into t1 values ( 1, 'metadata 1');
insert into t1 values ( 2, 'metadata 2');
insert into t1 values ( 3, 'metadata 3');

There could be many such statements.

Since they are one-time, they fill up the library cache once, at the beginning.

Do you think they are harmful (since their effect is only at the beginning)?

Tom Kyte

Followup  

June 12, 2003 - 8:19 am UTC

If it is as easy to avoid as not to avoid, you should avoid it.

what would be wrong with a file like:


1,'metadata 1'
2,'metadata 2'
....

and sqlldr or an external table (in 9i)...

thanx!

June 12, 2003 - 11:27 am UTC

Reviewer: A reader from ca

I guess I just wanted to gauge the priority of this work -- from what you say, it seems like it should be done, but it is not such a big problem.

Regards


June 24, 2003 - 5:05 pm UTC

Reviewer: A reader

Tom,

I used your function remove_constants and found that there are a whole bunch of queries that don't use bind variables.

The situation is that I don't have access to the code, so I cannot fix it (3rd party software), and to make matters worse we are running on 8.0.5. Upgrading the systems to 9i will take at least 3-4 months. Till then, can you suggest something I can do?

Thanks.


Tom Kyte

Followup  

June 25, 2003 - 11:36 am UTC

Ask the 3rd party vendor why they write such bad code and when they are going to fix it.

If you cannot fix the code, which has a bug, there is not much you can do in 8.0.

In 8i and up you MIGHT be able to use cursor_sharing, MAYBE.

July 12, 2003 - 2:34 am UTC

Reviewer: Fan Zhang from Palo Alto, CA USA

Tom,
As the function remove_constants converts all the numbers to @, it may also change the names of columns and tables; e.g., both 'select c1 from t1' and 'select c2 from t2' will be converted to 'SELECT C@ FROM T@'. Would it be better to convert only the numbers after the '=' sign? Thanks,

Tom Kyte

Followup  

July 12, 2003 - 9:35 am UTC

no, it is better to not over complicate things.

This is a simple solution to a nasty problem. I assume a sensible human being will be reading the output and upon seeing

select c@ from t@

in the report they would know full well "ok, ignore that one"

I would not want to write a SQL parser -- it is not just = what about:

select substr( x, 1, 40 ) from t;
select substr( x, 1, 41 ) from t;
select substr( x, 33, 20 ) from t;

select 55, a.* from a;
select 56, a.* from a;


select * from t where x > 55;
select * from t where x > 56;

select * from t where x between 1 and 100;
select * from t where x between 32 and 52;


and so on.....

July 13, 2003 - 12:56 am UTC

Reviewer: Fan Zhang from Palo Alto, CA USA

Tom,

Thanks very much for your reply. I agree the situation is much more complicated than I first thought, and it's hard to list all the possibilities where numbers are used as values. On the other hand, numbers are used in names (tables, columns, schemas...) in a limited number of ways. Taking advantage of this, the following modified function seems to work in most cases, though I feel there can be many exceptions (e.g., to_date).

The reason for me to pursue the issue is that I know in some databases, for security reasons, numbers are used intensively in column/table names. If we get 1000 cases of 'select * from t@ where c@ = @', it may come from 1 case of 'select * from t1 where c1=@' and 999 of 'select * from t2 where c2=@'.

*****************
create or replace function
remove_constants( p_query in varchar2 ) return varchar2
as
l_query long;
l_char varchar2(1);
l_in_quotes boolean default FALSE;
l_before_number boolean default FALSE;
l_ascii number;
begin
for i in 1 .. length( p_query )
loop
l_char := substr(p_query,i,1);
l_ascii :=ascii(l_char);

if ( l_in_quotes )
then
if (l_char ='''')
then
l_query := l_query || '''';
l_in_quotes := FALSE;
else
l_query := l_query || '#';
end if;
elsif ( NOT l_in_quotes and l_char = '''' )
then
l_in_quotes := TRUE;
l_query := l_query || '''';
elsif ( (NOT l_in_quotes )
and l_before_number
and (l_ascii between 48 and 57))
then
l_query := l_query || '@';
else
l_query := l_query || l_char;
end if;

if ( l_ascii = 36 -- 36 => $
or l_ascii = 95 -- 95 => _
or l_ascii between 65 and 90 -- 65-90 => A-Z
or l_ascii between 97 and 122 -- 97-122 => a-z
)
then
l_before_number := FALSE;
elsif (not l_before_number
and l_ascii between 48 and 57 -- 48-57 => 0-9
)
then
l_before_number := FALSE;
elsif ( NOT l_in_quotes )
then
l_before_number := TRUE;
end if;

end loop;

for i in 0 .. 8 loop
l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
l_query := replace( l_query, lpad('#',10-i,'#'), '#' );
end loop;
dbms_output.put_line(l_query);
return upper(l_query);

end;
/
************

Tom Kyte

Followup  

July 13, 2003 - 9:38 am UTC

this sentence:

for security raeson, numbers are used intensively in column/table names.


should read:

in a misguided attempt to achieve security via obscurity, numbers are used intensively in column/table names achieving no actual security benefit but removing much value from the database itself.






Which Process?

July 15, 2003 - 8:36 am UTC

Reviewer: Sunil Gururaj from Cyprus

Hi Tom,

Could you let me know which "ADDRESS" column of v$session relates to the address in the v$sqlarea view on which your above query operates.

Put simply, I need to find out which program or process is NOT using binds, and advise the development team accordingly.

Thanks,

Tom Kyte

Followup  

July 15, 2003 - 10:06 am UTC

and you cannot tell by the queries?

The sql_address column is probably what you are looking for -- to go to v$sql.

Nice Tool to catch the culprit SQLs

August 30, 2003 - 7:57 am UTC

Reviewer: SUDIPT SINHA from INDIA

Hi Tom
It is a very nice tool to catch the culprit SQLs. Here I am using Oracle JDeveloper. I have a query like this:

SELECT XYZ.TADIBANK_ID, XYZ.TADIBRANCH_ID, XYZ.TADVCHALLAN_ID, XYZ.TADDCHALLAN_DATE, XYZ.CTDILOGIN_ID, XYZ.CTDDMODIFIED_DATE, XYZ.CTDIOFFICE_ID, XYZ.TADVREGN_ID, XYZ.TADVCST_REGN_ID, XYZ.TADICIRCLE_OFFICE_ID, XYZ.TADIASSMT_YEAR, XYZ.TADVACT_ID, XYZ.TADIPERIOD, XYZ.TADRCHALLAN_AMT, XYZ.TADVSCROLL_ID, XYZ.TADVERROR_ID, XYZ.TADVRECONCILIATION_FLAG, XYZ.TADIRECONCILIATION_COUNT, XYZ.TADVREC_CONFIRM_FLAG_PCR_RCPT, FUNC1('#',XYZ.TADVACT_ID,'#') AS ACT, XYZ.TADDFROM_DATE, XYZ.TADDTO_DATE, FUNC1('#','# .........................
FROM XYZ
WHERE...............

In the where clause I have used bind variables. But in FUNC1 I have used literals, and they should be the same for all users; still, it shows up in that query (which you have given) more than 100 times. Do I have to use binds for those literals also?

Regards
Sudipt

Tom Kyte

Followup  

August 30, 2003 - 10:54 am UTC

are the literal values for the queries the SAME?

what release of the database are you using?

are the queries executed from a stored procedure or directly?

Nice Tool to catch the culprit SQLs.. contd..

September 01, 2003 - 12:47 am UTC

Reviewer: SUDIPT SINHA from India

1. The literal values for the queries are the SAME (for all users)

2. Release of the database -- Oracle 9.2.0.3

3. The queries are executed directly using JDBC (built into Oracle JDeveloper). FUNC1() is a stored function in the Oracle DB.


Tom Kyte

Followup  

September 01, 2003 - 8:29 am UTC

peek into v$sql_shared_cursor for that query and see which columns are flagged with "Y"s -- e.g., this is a short excerpt from my book "Effective Oracle by Design" showing what I mean:

...
We simply set up the four users A, B, C, and D. We grant CREATE SESSION and CREATE TABLE privileges to users A, B, and D, and grant just CREATE SESSION to C. Then we run the following script:

create public synonym emp for d.emp;
alter system flush shared_pool;

connect a/a
create table emp ( x int );
select * from emp;

connect b/b
create table emp ( x int );
select * from emp;

connect d/d
create table emp ( x int );
grant select on emp to c;
select * from emp;

connect c/c
select * from emp;

Now, let's take a quick peek at V$SQL, which shows all of the currently cached SQL statements.

ops$tkyte@ORA920> select address, executions, sql_text
  2    from v$sql
  3   where upper(sql_text) like 'SELECT * FROM EMP%';

ADDRESS  EXECUTIONS SQL_TEXT
-------- ---------- ------------------------------
58DBD9CC          1 select * from emp
58DBD9CC          1 select * from emp
58DBD9CC          2 select * from emp

We can see that there are three versions of the query in the shared pool and that one of the versions was used two times -- that is the query submitted by users C and D.

We can also query V$SQL_SHARED_CURSOR to determine exactly why Oracle developed three query plans, to discover the root cause for the mismatch here. This view has many columns that can be used to determine why a plan in the shared pool was not shared. For example, the ROW_LEVEL_SEC_MISMATCH column would be flagged for two queries that appear to be the same but differed at runtime based on the security predicate returned by Oracle's Virtual Private Database (VPD) feature. The OUTLINE_MISMATCH column would be flagged if one query used a query outline and the other did not. Other columns in this view provide similar mismatch information. In this example, we can see this information regarding our queries:


ops$tkyte@ORA920> select kglhdpar, address,
  2         auth_check_mismatch, translation_mismatch
  3    from v$sql_shared_cursor
  4   where kglhdpar in
  5   ( select address
  6       from v$sql
  7      where upper(sql_text) like 'SELECT * FROM EMP%' )
  8  /

KGLHDPAR ADDRESS  AUTH_CHECK_MISMATCH TRANSLATION_MISMATCH
-------- -------- ------------------- --------------------
58DBD9CC 59D4C9C8 N                   N
58DBD9CC 59E54EB0 Y                   Y
58DBD9CC 5898D42C Y                   Y

In this case, Oracle flagged two columns: AUTH_CHECK_MISMATCH, meaning the authorization/translation check failed for the existing cursor (query plan), and TRANSLATION_MISMATCH, meaning the base objects of the child cursor (query plan) do not match. This shows us exactly why there are three query plans (child cursors): The base objects referenced were different. 
 

Nice Tool to catch the culprit SQLs.. contd..

September 02, 2003 - 9:15 am UTC

Reviewer: SUDIPT SINHA from India

I am sorry; the users I am talking about are the browsing users, not the database users.
In my database I am using only one schema (one DB user). The application is accessed through Oracle 9iAS. Still I am getting so many counts using your query.

Regards
Sudipt

Tom Kyte

Followup  

September 02, 2003 - 10:39 am UTC

use the technique above to find out WHY

Library Latch Cache Contention

October 30, 2003 - 1:31 pm UTC

Reviewer: JHT from NJ, USA

Tom,

Wow, I think this is the best website for interesting and complicated oracle issues!!!

If I am reading my statspack report correctly, I have library cache latch contention. So I ran the "remove_constants" procedure Tom recommended at the top of this thread and no SQL with > 100 counts turned up. I have set cursor_sharing=FORCE in the 8.1.6 db. What are other possible reasons for library cache latch contention? Below are snippets from the statspack report... Thanks.

Snap Length
Start Id End Id Start Time End Time (Minutes)
-------- -------- -------------------- -------------------- -----------
190 191 26-Oct-03 01:07:56 26-Oct-03 01:16:00 8.07


Cache Sizes
~~~~~~~~~~~
db_block_buffers: 131072
db_block_size: 8192
log_buffer: 524288
shared_pool_size: 100000000


Load Profile
~~~~~~~~~~~~
Per Second Per Transaction
--------------- ---------------
Redo size: 151,257.45 2,187.62
Logical reads: 6,142.08 88.83
Block changes: 946.70 13.69
Physical reads: 778.34 11.26
Physical writes: 77.79 1.13
User calls: 1,609.01 23.27
Parses: 1,047.90 15.16
Hard parses: 0.01 0.00
Sorts: 1.05 0.02
Transactions: 69.14

Rows per Sort: 117.75
Pct Blocks changed / Read: 15.41
Recursive Call Pct: 60.36
Rollback / transaction Pct: 0.30


Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait Ratio: 99.94
Buffer Hit Ratio: 87.33
Library Hit Ratio: 100.04
Redo NoWait Ratio: 100.00
In-memory Sort Ratio: 97.84
Soft Parse Ratio: 100.00
Latch Hit Ratio: 89.24


Top 5 Wait Events
~~~~~~~~~~~~~~~~~ Wait % Total
Event Waits Time (cs) Wt Time
-------------------------------------------- ------------ ------------ -------
latch free 2,182,681 251,374 42.78
slave wait 30,702 137,440 23.39
db file sequential read 127,626 74,321 12.65
log file sync 43,839 41,690 7.09
io done 30,386 40,844 6.95
-------------------------------------------------------------



Pct Avg Pct
Get Get Sleeps Nowait Nowait
Latch Name Requests Miss /Miss Requests Miss
------------------------------- ------------ ------ ------ ----------- ------
Active checkpoint queue latch 924 0.0 0
Checkpoint queue latch 121,876 0.0 0.1 0
Token Manager 164 0.0 0
archive control 2 0.0 0
archive process latch 2 0.0 0
cache buffer handles 138 0.0 0
cache buffers chains 5,518,468 0.2 0.1 666,964 0.0
cache buffers lru chain 475,650 0.2 0.0 382,391 0.2
channel handle pool latch 170 0.0 170 0.0
channel operations parent latch 340 0.0 170 0.0
constraint object allocation 243 0.0 0
direct msg latch 93,078 0.0 0.0 0
dml lock allocation 247,685 0.1 0.0 0
done queue latch 29,564 0.0 29,567 0.0
enqueue hash chains 330,628 0.2 0.1 0
enqueues 394,860 0.2 0.1 0
event group latch 170 0.0 0
global transaction 7,431 0.0 0
global tx free list 531 0.0 0
global tx hash mapping 2,090 0.0 0
job_queue_processes parameter l 154 0.0 0
ksfv messages 122,630 0.0 0
ktm global data 2 0.0 0
latch wait list 1,409,267 2.1 0.1 1,653,622 4.0
library cache 11,937,161 22.3 0.8 0
library cache load lock 56 0.0 0
list of block allocation 71,525 0.0 0.0 0



Tom Kyte

Followup  

October 30, 2003 - 9:54 pm UTC

confusion

you ran for 8 minutes (8*60 = 480 cpu seconds)

but you had 2,513 seconds of waits..

are you like a 16/32 cpu machine here?

you do not have a hard parse issue - you have 100% soft parses (PERFECTION, congrats!)

what you might have is an "over parse" issue, soft parses are less expensive than hard, but only "no parse" is free.

details. need more details -- describe the system
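The hard / soft / "no parse" distinction can be sketched with a toy statement cache. This is an illustration of the cost ordering only (the cache dict and counters are hypothetical, not Oracle internals):

```python
# Toy model of the three parse outcomes: hard parse (build a plan),
# soft parse (latch the shared pool and look the plan up), and
# "no parse" (reuse an already-open cursor handle, no lookup at all).

class ToySharedPool:
    def __init__(self):
        self.cache = {}          # shared pool: sql_text -> "plan"
        self.hard_parses = 0
        self.soft_parses = 0

    def parse(self, sql):
        """Every call is at least a soft parse (a latch get + lookup)."""
        if sql not in self.cache:
            self.hard_parses += 1            # build a brand new plan
            self.cache[sql] = "plan(" + sql + ")"
        self.soft_parses += 1                # the lookup itself costs a latch get
        return self.cache[sql]

pool = ToySharedPool()

# Over-parsing: parse before each and every execute.
for i in range(1000):
    pool.parse("SELECT COUNT(*) FROM ipmap WHERE ipaddr = :b1")
assert pool.hard_parses == 1       # binds keep hard parses at one...
assert pool.soft_parses == 1000    # ...but every execute still hits the latch

# "No parse": parse once, keep the handle, execute it over and over.
plan = pool.parse("UPDATE ipmap SET deviceid = :b1 WHERE ipaddr = :b2")
for i in range(1000):
    pass  # execute the cached `plan` handle directly; no shared-pool visit
```

Binds remove the hard parses; only holding the cursor open removes the soft parses as well.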

Ratio query x parse

October 31, 2003 - 8:55 am UTC

Reviewer: Marcio from br

Tom, I'm wondering about you said:
"You need binds when you are parsing X statements / second.
When you are parsing 1 statement every Y seconds, it is not nearly as important."

How can I get such ratio? -- x statements / second.

thanks,
Marcio


Tom Kyte

Followup  

November 01, 2003 - 11:40 am UTC

you would sort of know -- it is your system, are you OLTP with "transactions per second" or are you a warehouse with N concurrently executing massive queries.

But, you can use statspack to see executions/second and transactions/second to get a gist.
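Statspack gets those per-second figures by taking the delta of two cumulative v$sysstat counters and dividing by the snapshot interval. A minimal sketch (the raw counter values are made up; the window length roughly reproduces the 1,047.90 parses/sec in the report above):

```python
# Per-second rates from two statspack snapshots: delta of a cumulative
# counter divided by the elapsed seconds between the snaps.

def per_second(begin_value, end_value, interval_seconds):
    return (end_value - begin_value) / interval_seconds

snap_interval = 8 * 60 + 4          # an ~8 minute window, in seconds

# hypothetical begin/end values of "parse count (total)" and "execute count"
parses_per_sec = per_second(1_000_000, 1_507_187, snap_interval)
execs_per_sec  = per_second(2_000_000, 2_610_000, snap_interval)

print("parses/sec: %.1f, executes/sec: %.1f" % (parses_per_sec, execs_per_sec))
```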

Library Latch Cache Contention

November 03, 2003 - 9:11 am UTC

Reviewer: JHT from NJ USA

Thanks Tom for your earlier response to this issue.

Our DB Server has 8 CPUs. It is an OLTP system where the majority of the processing is this: there are A LOT (26,000 per second) of inserts into a view. An "instead of" trigger fires (and discards about 23,000 inserts because of some application logic). The rest (3,000 inserts) are processed and update/delete/insert into a table (IPMAP) that keeps track of which cable modem owns which ip address. This trigger also inserts into a "history" table to store the same information.

If this is a soft parse issue, how do I go about improving this?

Below is the costly SQL statements (in terms of buffer gets) in this 8 minute window. The first statement is the insert into the VIEW.

Buffer Gets Executes per Exec Total Hash Value
-------------- ------------ ------------ ------ ------------
SQL statement
------------------------------------------------------------------------------
1,848,346 214,866 8.6 62.2 2948872974
insert into LEASEVIEW (IPADDR,REGIONID,DEVICEID,RAREMOTEID,DEVICETYPE,LEASEDAT

859,372 214,946 4.0 28.9 837720088
SELECT COUNT(*) FROM IPMAP WHERE IPADDR = :b1 AND REGIONID = :b2 AND DEVI

216,003 27,029 8.0 7.3 1919955916
SELECT COUNT(*) FROM IPMAP WHERE ((IPADDR = :b1 AND REGIONID = :b2 ) OR (D

205,496 27,019 7.6 6.9 146615416
UPDATE IPMAP SET DEVICEID=:b1,RAREMOTEID=:b2,DEVICETYPE=:b3,LEASEDATE=:b4,EXPD

198,669 27,040 7.3 6.7 1492083554
SELECT AD.ACCTTYPE,DA.DATAACCOUNTID ACCOUNTID FROM ACCTDEVICE AD,DATAACCOUNT




Tom Kyte

Followup  

November 03, 2003 - 2:42 pm UTC

I would not have done it that way -- for such intensive processing -- I would have built a stored procedure that would reject 23k out of 26k records -- inserting the other 3k.

what is the calling application and is it parsing the insert before each and every insert?

also -- the trigger, does it contain the code "in it" or is the trigger a single line -- a call to a stored procedure.  If not, each execute is going to cause a soft parse of all of the trigger SQL.  Consider:

ops$tkyte@ORA920PC> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA920PC> create or replace view v as select * from t;
 
View created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create or replace procedure p
  2  as
  3  begin
  4          for x in ( select * from dual IN_PROCEDURE )
  5          loop
  6                  null;
  7          end loop;
  8  end;
  9  /
 
Procedure created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> create or replace trigger v_trigger
  2  instead of insert on v
  3  begin
  4          for x in ( select * from dual IN_TRIGGER )
  5          loop
  6                  null;
  7          end loop;
  8          p;
  9  end;
 10  /
 
Trigger created.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> alter session set sql_trace=true;
 
Session altered.
 
ops$tkyte@ORA920PC>
ops$tkyte@ORA920PC> insert into v values ( 1 );
 
1 row created.
 
ops$tkyte@ORA920PC> insert into v values ( 1 );
 
1 row created.
 
ops$tkyte@ORA920PC> insert into v values ( 1 );
 
1 row created.
 
ops$tkyte@ORA920PC> insert into v values ( 1 );
 
1 row created.
 
ops$tkyte@ORA920PC> insert into v values ( 1 );
 
1 row created.
 
ops$tkyte@ORA920PC>

tkprof shows us:

select * from dual IN_TRIGGER
                                                                                                  
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        5      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch       10      0.00       0.00          0         15          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       20      0.00       0.00          0         15          0           5
********************************************************************************
select * from dual IN_PROCEDURE
                                                                                                  
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      5      0.00       0.00          0          0          0           0
Fetch       10      0.00       0.00          0         15          0           5
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       16      0.00       0.00          0         15          0           5
                                                                                                  


See how the trigger sql was parsed 5 times for 5 executes, but plsql stored procedures nicely "cache" the sql.

So, my advice:

a) don't use the view
b) use a stored procedure
c) make DARN sure the caller parses the begin p(....); end; block ONCE and executes it over and over and over again.


your parsing will go way way way down.


 

Library Latch Cache Contention

November 03, 2003 - 10:32 am UTC

Reviewer: JHT from NJ USA

Oops...I meant 26,000 inserts per minute (not per second)

January 15, 2004 - 5:19 pm UTC

Reviewer: Pauline from NY, U.S.A.

Tom,
When I use the way you provided to us to find the statements
not using BIND variable, I see something in our production database like:

SQL_TEXT_WO_CONSTANTS
--------------------------------------------------------------------------------
COUNT(*)
----------
SELECT DISTINCT ADDRESS_TO_PTY_ID FROM PARTY_ACCOUNT_ADDRESSES PAA WHERE PAA.ACCT_ID =@ AND PAA.PTY_ID = ( SELECT PTY_ID FROM PARTY_ACCOUNTS PTYA, PRIMARY_ACCOUNTS PRIMA WHERE PTYA.ACCT_ID =@ AND PTYA.ACCT_ID = PRIMA.ACCT_ID AND PTYA.PTYACCTROLE_ID = @ AND NVL(PTYA.DELETED_FLG,'#') <> '#' AND ROWNUM < @ GROUP BY PTYA.PTY_ID ) AND NVL(DELETED_FLG,'#') <> '#' AND PAA.ADDRESS_ID = @ AND PAA.ACCTADDR_USETYP_ID = @
292

Then I try to do:
select sql_text from v$sql
where upper(sql_text) like 'SELECT DISTINCT ADDRESS_TO_PTY_ID FROM PARTY_ACCOUNT_ADDRESSES PAA WHERE PAA.ACCT_ID%';

I see no rows selected.

If I do :

select sql_text from v$sql
where upper(sql_text) like 'SELECT DISTINCT ADDRESS_TO_PTY_ID%';

Then I see 292 rows selected.

Why that upper(sql_text) not working for the longer string?
Is there any limitation for using upper function?

Thanks.




Tom Kyte

Followup  

January 16, 2004 - 1:06 am UTC

you must have white space or something else wrong in your like string (maybe a tab or newline hiding in there that you are not seeing)

like works dandy with long strings. no problems there -- you just need the right input.

Views

January 16, 2004 - 6:43 am UTC

Reviewer: Marcio from br

<quote>
So, my advice:

a) don't use the view
b) use a stored procedure
c) make DARN sure the caller parses the begin p(....); end; block ONCE and
executes it over and over and over again.
</quote>
About a) In this context -- why do I not use views?


Tom Kyte

Followup  

January 16, 2004 - 11:19 am UTC

well, i thought I showed why

the instead of trigger parses the sql for each triggering statement.

the procedure does not.

Views for maintenance.

January 16, 2004 - 2:54 pm UTC

Reviewer: Marcio from br

Sorry, I was talking about VIEWs because I use them instead of granting on TABLES directly -- I keep tables under my own control
and give grants on views... more or less like this:
ops$t_mp00@MRP920> create sequence s;

Sequence created.

ops$t_mp00@MRP920> create table p$ ( id int primary key,
  2    name varchar2(10),
  3    qty_c int default 0 );

Table created.

ops$t_mp00@MRP920> create or replace trigger p$_bifer
  2  before insert on p$ for each row
  3  begin
  4    select s.nextval into :new.id from dual;
  5  end;
  6  /

Trigger created.

ops$t_mp00@MRP920> create or replace view p
  2  as
  3  select id p_id, name, qty_c q from p$;

View created.

ops$t_mp00@MRP920> create table c$ ( id int primary key, name varchar2(10), p_id references p$ );

Table created.

ops$t_mp00@MRP920> create or replace trigger c$_bifer
  2  before insert on c$ for each row
  3  begin
  4    select s.nextval into :new.id from dual;
  5    add_p(:new.p_id);
  6  end;
  7  /

Warning: Trigger created with compilation errors.

ops$t_mp00@MRP920> create or replace view c
  2  as
  3  select id c_id, p_id, name child_name from c$
  4  /

View created.

ops$t_mp00@MRP920> create or replace procedure add_p ( i in number )
  2  as
  3  begin
  4    update p
  5       set q = q + 1
  6     where p_id = i;
  7  end;
  8  /

Procedure created.

ops$t_mp00@MRP920> show error
No errors.
ops$t_mp00@MRP920> insert into p(name) values ( 'Marcio');

1 row created.

ops$t_mp00@MRP920> insert into c(p_id, child_name) values (1, 'Alicia');

1 row created.

ops$t_mp00@MRP920> insert into c(p_id, child_name) values (1, 'Jack');

1 row created.

ops$t_mp00@MRP920> insert into c(p_id, child_name) values (1, 'Nicole');

1 row created.

ops$t_mp00@MRP920> select * from p;

      P_ID NAME                Q
---------- ---------- ----------
         1 Marcio              3

1 row selected.

ops$t_mp00@MRP920> select * from c;

      C_ID       P_ID CHILD_NAME
---------- ---------- ----------
         2          1 Alicia
         3          1 Jack
         4          1 Nicole

3 rows selected.

ops$t_mp00@MRP920> select * from p$;

        ID NAME            QTY_C
---------- ---------- ----------
         1 Marcio              3

1 row selected.

I use this to maybe rename column, change order of columns, whatever... up until now thought this is good way to manage schemas. What do you think?

thanks,



Tom Kyte

Followup  

January 17, 2004 - 1:12 am UTC

if you are talking about views in general -- I've said over and over "views are good"

if you are talking about "should my answer to which you referred be changed to say use the view instead of procedure" -- no, in this case not.

views are not 100% evil
views are not 100% good
views are a tool

and in this case (the case I was answering above) not the right choice IMO.




My mistake

January 17, 2004 - 1:29 pm UTC

Reviewer: Marcio from br

Sorry for my misunderstanding; when I saw your comment about views, I just forgot the context and posted my followup.

I work that way (demonstrated above) following your advice about views, found here on this site.

Thanks.

March 18, 2004 - 4:47 pm UTC

Reviewer: TRS from Plano, TX USA

This was almost as good as the Thai soup they've named after you.... ;)

Oracle Text and bind variable

June 04, 2004 - 1:00 pm UTC

Reviewer: Angel from Barcelona

Hi Tom,

This is a very useful tool for catching the SQLs that do not use binds.

One thing that puzzles me is that always when we run this, it turns out that the biggest offenders are internal sqls of Oracle Text.

Could you tell me if there are any legitimate reasons why Oracle Text is not using bind variables?

Thanks.





DECLARE SEL NUMBER; BEGIN :@ := "CTXSYS"."TEXTOPTSTATS".ODCISTATSSELECTIVITY( S
YS.ODCIPREDINFO('#', '#', '#', @), SEL, SYS.ODCIARGDESCLIST(SYS.ODCIARGDESC(@, N

SQL_TEXT_WO_CONSTANTS
--------------------------------------------------------------------------------
COUNT(*)
----------
ULL, NULL, NULL, NULL, NULL), SYS.ODCIARGDESC(@, NULL, NULL, NULL, NULL, NULL),
SYS.ODCIARGDESC(@, '#', '#', '#', NULL, NULL), SYS.ODCIARGDESC(@, NULL, NULL, NU
LL, NULL, NULL)), @, NULL , NULL, '#', SYS.ODCIENV(@, @)); IF SEL IS NULL THEN :
@ := -@; ELSE :@ := SEL; END IF; EXCEPTION WHEN OTHERS THEN RAISE; END;
725

DECLARE COST SYS.ODCICOST := SYS.ODCICOST(NULL, NULL, NULL, NULL); BEGIN :@ :=
"CTXSYS"."TEXTOPTSTATS".ODCISTATSINDEXCOST( SYS.ODCIINDEXINFO('#', '#', SYS.ODCI
COLINFOLIST(SYS.ODCICOLINFO('#', '#', '#', '#', NULL, NULL)), NULL, @, @), @.@,

SQL_TEXT_WO_CONSTANTS
--------------------------------------------------------------------------------
COUNT(*)
----------
COST, SYS.ODCIQUERYINFO(@, SYS.ODCIOBJECTLIST(SYS.ODCIOBJECT('#', '#'))), SYS.OD
CIPREDINFO('#', '#', NULL, @), SYS.ODCIARGDESCLIST(SYS.ODCIARGDESC(@, NULL, NULL
, NULL, NULL, NULL), SYS.ODCIARGDESC(@, NULL, NULL, NULL, NULL, NULL), SYS.ODCIA
RGDESC(@, '#
816


Tom Kyte

Followup  

June 04, 2004 - 1:33 pm UTC

which release is that -- i know they fixed that.....

Not happening in my database (and i use text heavily)

Oracle Text and bind variable

June 07, 2004 - 5:37 am UTC

Reviewer: Angel from Barcelona, Spain

Hi Tom,

It's Oracle9i Release 9.2.0.3.0 for Windows.

I went to
http://otn.oracle.com/software/products/oracle9i/index.html
to check what was the latest release of 9i and I found that 9.2.0.3 is not available to download (9.2.0.2 is the latest for Windows)

This is strange, because I am sure it was there before. Do you know why it has been removed?

Please tell me if there is something I got wrong. Thanks,





Latch contention algorithm

June 23, 2004 - 12:13 pm UTC

Reviewer: A reader from NJ, USA

Tom,

I reviewed your books (Expert One-on-One Oracle and Effective Oracle by Design) and Cary Millsap's Optimizing Oracle Performance. Based on these, I have come up with a little sketch of how the Oracle kernel's latch acquisition algorithm might look at a very high level. Please confirm if my understanding is correct. The code snippet is shown below:

Code snippet for a latch that surrounds Oracle kernel code
==========================================================
if get_latch(latch_name, Mode) {
    /*
        .....
        Oracle kernel code associated with the latch
        .....
    */
    release(latch_name)
}

Latch acquisition algorithm
===========================
Boolean get_latch (char *latch_name, char Mode) {
    Latch_Acquired := 0;

    If (latch for 'latch_name' available) {
        Latch_Acquired := 1;  /* Mark latch as acquired */
    }

    If (Latch_Acquired) {
        If (Mode = 'I') {
            Increment Gets count for Immediate mode
        }
        else {
            Increment Gets count for Willing-to-Wait mode
        }
    }
    else {
        If (Mode = 'W') {
            Increment MISSES count for Willing To Wait mode
            /*
                The while loop below consumes CPU time as the process
                does nothing but spin for the latch
            */
            While (NOT Latch_Acquired) {
                If (Request_Count++ <= LATCH_SPIN_COUNT) {
                    get_latch(latch_name, Mode);
                }
                sleep(x);  /* This will post a Latch Free WAIT event and
                              causes elapsed time to exceed CPU time */
                Request_Count := 0;
            }
        }
        else {
            Increment MISSES count for Immediate mode
        }
    }
    return (Latch_Acquired);
}

Please confirm if my understanding is correct.

Tom Kyte

Followup  

June 23, 2004 - 1:21 pm UTC

funny how much indentation matters ;) I never could read the C code that used that (very common) style of {}, always had to pretty print it...

anyway, I think the spin loop you have is a bit off, but the gist is there. not going to nit-pick too much -- this is how I might pseudo code that part:

If (Mode = 'W')
{
    Increment MISSES count for Willing To Wait mode
    /*
        The while loop below consumes CPU time as the process consumes
        CPU doing nothing but spinning for the latch
    */
    While (NOT Latch_Acquired)
    {
        for( request_count = 0;
             request_count <= spin_count && !latch_acquired;
             request_count++ )
        {
            try to grab the latch, recursion would be logically wrong...
        }
        if !Latch_Acquired
        {
            sleep (x);  /* This will post a Latch Free WAIT event
                           and causes elapsed time to exceed CPU time;
                           note that an overburdened CPU will do the
                           same in a spin loop (ela > cpu) as we get
                           pre-empted */
            increment sleeps
        }
    }
}
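The spin-then-sleep loop above translates into a small runnable sketch. Python here is purely executable pseudo code: a plain boolean stands in for the atomic test-and-set instruction, so this shows the shape of the algorithm and the v$latch-style counters, not real latching:

```python
# Willing-to-wait latch get: spin up to SPIN_COUNT tries, then record a
# sleep ("latch free" wait) and try again, counting gets/misses/sleeps
# the way v$latch does.

SPIN_COUNT = 2000

class Latch:
    def __init__(self):
        self.held = False
        self.gets = self.misses = self.sleeps = 0

    def try_get(self):
        # stand-in for an atomic test-and-set
        if not self.held:
            self.held = True
            return True
        return False

    def get_willing_to_wait(self, sleep=lambda: None):
        self.gets += 1
        if self.try_get():
            return                            # got it on the fast path
        self.misses += 1
        while True:
            for _ in range(SPIN_COUNT):       # burn CPU spinning
                if self.try_get():
                    return
            self.sleeps += 1                  # post a "latch free" wait
            sleep()

latch = Latch()
latch.get_willing_to_wait()                   # uncontended: get, no miss
assert (latch.gets, latch.misses, latch.sleeps) == (1, 0, 0)

# Contended: the holder releases only when the sleep callback fires.
def release():
    latch.held = False

latch.get_willing_to_wait(sleep=release)      # miss, full spin, one sleep
assert (latch.gets, latch.misses, latch.sleeps) == (2, 1, 1)
```

Note the retry is a loop, not a recursive call, matching Tom's correction below that recursion would be logically wrong.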


Clarification

June 23, 2004 - 2:03 pm UTC

Reviewer: A reader from NJ, USA

Tom,

In the above feedback, you mentioned that recursion would be logically wrong.
1. Can you please elaborate on this?
2. Also, when the process is pre-empted because of a busy CPU, it will not show as a WAIT line but will be represented by "time spent not using the CPU" (which, per Cary Millsap's response time analysis, would be counted as N ==> time spent not utilizing the CPU) and will not appear in any of the wait interfaces, since the wait interfaces do not show time spent waiting for CPU.

Is my understanding of 2 correct?

As always, thanks for your very clear explanation

Tom Kyte

Followup  

June 23, 2004 - 2:47 pm UTC

1) the recursion would go through all of the checks again and as latch acquired is a local variable, it would be an infinite loop

2) correct, it will not be wait time; it will be unaccounted time. elapsed time will show it, but you won't be able to look at the waits and see what it was.

My point was you stated:

This will post a Latch Free WAIT event and
causes elapsed time exceed CPU time


well, spinning can also cause "elapsed time exceed cpu time" if we get bumped off. no wait can be recorded -- but ela will go up while CPU does not.

Remove_Constants

June 23, 2004 - 7:07 pm UTC

Reviewer: Neeraj

Tom,

The remove_constants function is proving to be very helpful. It can tell you right away how many copies of the same SQL statement are in the SQLAREA. Is there a query which can show how much memory is being wasted per SQL statement, compared to the total memory allocated by the shared_pool_size parameter?

Thanks Always for your response.
Neeraj



Tom Kyte

Followup  

June 24, 2004 - 9:03 am UTC

there are columns in v$sql ending in _MEM you can aggregate up.
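For example, the aggregation could be done client-side over rows fetched from v$sql. A rough sketch (the sample rows and memory figures are invented; the grouping key is the constants-stripped text produced by remove_constants, and "waste" assumes binds would leave roughly one shared copy):

```python
# Estimate memory wasted by duplicate copies of the same statement:
# group the _MEM totals by the constants-stripped SQL text and treat
# everything beyond one copy as waste.

from collections import defaultdict

# (sql_text_wo_constants, sharable_mem + persistent_mem + runtime_mem)
rows = [
    ("SELECT * FROM T WHERE X = @", 12000),
    ("SELECT * FROM T WHERE X = @", 11800),
    ("SELECT * FROM T WHERE X = @", 12100),
    ("UPDATE T SET Y = @ WHERE X = @", 9000),
]

by_stmt = defaultdict(list)
for text, mem in rows:
    by_stmt[text].append(mem)

for text, mems in by_stmt.items():
    total, copies = sum(mems), len(mems)
    waste = total - max(mems)    # keep one copy, the rest is duplication
    print("%3d copies  %6d bytes wasted  %s" % (copies, waste, text))
```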

Library Cache Latch and Shared Pool Latch

June 28, 2004 - 2:14 am UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,

I have a Document listed on orapub.com. The Document Titled "Conquering Oracle Latch Contention" is written by Craig A. Shallahamer. In the document on page no 12, the author states about the Library Cache Latch Contention that

"Very generally and incomplete, the library cache kernel code is used to check if a SQL Statement is cached in the library cache".

I think that the author means that the Library Cache Latch Contention occurs to check the parsed version of a SQL Statement.

As per my understanding, the author is wrong in his statement. I regularly read your articles and visit this site and based on this I have understood that

Library Cache Latch contention can occur if there are too many parses in the Library Cache probably due to any of these reasons

1. Not using Bind variables
2. Shared Pool is too small
etc.

whereas the Shared Pool Latch is used for the purpose the author of the above article attributes to the Library Cache Latch, i.e. the Shared Pool Latch is used to scan the shared pool (library cache) to check whether a SQL statement is cached in the library cache. In this case, if the SQL is found, then parsing is avoided, whereas if the statement is not in the library cache, then the library cache latch is acquired to parse the statement. Am I correct?

Regards
Vivek




Tom Kyte

Followup  

June 28, 2004 - 8:20 am UTC

the shared pool latch is used to protect memory allocations; the library cache latch is used to protect modifications to these structures after allocation.

it would be the library cache latch that gets hit on the parse.

Library Cache Latch v/s. Shared Pool Latch

June 29, 2004 - 1:00 am UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,

Thanks for your reply.

You said that the Shared Pool Latch is used to protect memory allocation. What does that mean? Can you explain one scenario where the memory allocation has to be protected, so that it is clear to me?

What I have understood so far was

When a user fires a query then

1. Shared Pool Latch is acquired to check for the parsed version of this SQL.
2. If the Parsed Version is available then Library Cache Latch is acquired to protect the parsed version of that SQL for any modification. (SOFT Parse).
3. If the parsed version is not available, even then the Library Cache latch is acquired to parse the query and generate the execution plan (HARD Parse)

Are my assumptions wrong. If yes, then which all latch will be acquired for each steps. Please let me know for better understanding.

Thanks and Regards
Vivek


Tom Kyte

Followup  

June 29, 2004 - 8:04 am UTC

the shared pool is a data structure.

people (processes, things, like dbwr, lgwr, etc) allocate memory from it to create data structures they need and use.

those that allocate memory from it use a latch -- the shared pool latch. In order to allocate memory from this shared data structure called the shared pool, we need to latch it, serialize access to it -- to prevent two things from trying to modify the overall structure at the same time.

Now, later, after the memory is allocated, other things need to do stuff to it. some of the memory that is allocated is used for a library cache. In order to parse -- we need to access the library cache, a memory structure that happens to reside in the shared pool. We use a library latch for that.


so, we use the library latch later when inspecting the library cache. not the shared pool. when you have parsing issues (too much parsing, no binds, whatever) you see library latch contention.

if you have access to the book "Effective Oracle by Design", I describe the parsing process. Also, "Beginning Oracle Programming" has similar information in a chapter I wrote.
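As a toy analogy only (two Python locks standing in for the two latches; nothing here resembles the real kernel code), the split of responsibilities looks like this:

```python
# One lock serializes carving memory out of the shared pool (the shared
# pool latch); a different lock serializes changes to the library cache
# structure that lives inside that memory (the library cache latch).

import threading

shared_pool_latch   = threading.Lock()
library_cache_latch = threading.Lock()

shared_pool   = []      # raw memory allocations
library_cache = {}      # sql_text -> plan, built inside those allocations

def hard_parse(sql):
    with shared_pool_latch:              # allocate memory for the new plan
        chunk = {"sql": sql, "plan": "..."}
        shared_pool.append(chunk)
    with library_cache_latch:            # then hang it in the library cache
        library_cache[sql] = chunk
    return chunk

def soft_parse(sql):
    with library_cache_latch:            # lookup touches only the lib cache
        return library_cache.get(sql)

hard_parse("select * from emp")
assert soft_parse("select * from emp") is not None
assert soft_parse("select * from dept") is None   # would need a hard parse
```

A soft parse never takes the allocation lock, which is why parse-heavy workloads show up as library cache latch contention rather than shared pool latch contention.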

Shared Pool Latch Contention

June 29, 2004 - 11:03 am UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,

Thanks for your reply. I understood the concept of Library Cache Latch and Shared Pool Latch now. Would like to know what all things to be taken care off if the database is experiencing heavy Shared Pool Latch Contention ? What should be tuned ?

Thanks and Regards
Vivek

Tom Kyte

Followup  

June 29, 2004 - 4:11 pm UTC

have you ever seen heavy contention for this latch?

Shared Pool Latch Contention

June 30, 2004 - 2:08 am UTC

Reviewer: Vivek Sharma from Bombay, India

Dear Tom,

In the statspack report's "Latch Activity" section, it is written that pct misses should be nearly 0, whereas the pct misses on the Library Cache and Shared Pool latches are 1.8 and 2.1 respectively.

If the shared pool latch is used to allocate and protect memory allocation in the shared pool, then will the parameter parallel_automatic_tuning and configuring a large_pool help eliminate the contention for this latch, so that the memory for DBWR, LGWR and other processes is allocated in the large pool?

Thanks and Regards
Vivek

Tom Kyte

Followup  

June 30, 2004 - 10:04 am UTC

and what are the raw INPUT numbers to that percentage.

You know, if I go for something twice, and miss it once -- that is 50% -- big deal.


soooo, what are the raw numbers, what are the waits on this, is it really impacting you.

ratios (pct's) are used to obscure and hide information -- it is near impossible to tune by them. they can be used as red flags (if the pct goes from 0.00001 to 5, that would be a "flag")
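A quick illustration of the point, using the library cache latch numbers from the report at the top of this thread:

```python
# The same "miss percentage" arithmetic can describe two gets or
# sixteen million -- the ratio alone hides the scale entirely.

def miss_pct(gets, misses):
    return 100.0 * misses / gets

assert miss_pct(2, 1) == 50.0            # trivial latch: "50% misses!"

big = miss_pct(15_934_992, 114_277)      # library cache latch, from above
assert 0.7 < big < 0.8                   # ~0.72%, yet 114k real misses
```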

Dynamic SQL

July 15, 2004 - 11:42 pm UTC

Reviewer: Mike

There are some queries not using bind variables, so I went to the developers about the issue. They claim that because they are using dynamic SQL, there is no way to make binds/sharing work.
What do you think of their claim?

Tom Kyte

Followup  

July 16, 2004 - 10:57 am UTC

i think they need to read the documentation for this product they are using, just as much as they read their java or VB books.

using binds in dynamic sql is trivial -- nay, even EASIER than not using binds.


August 10, 2004 - 10:12 am UTC

Reviewer: A reader

Hi Tom,
We are using the Siebel CRM application and I want to use the above technique to find SQL statements which are not using bind variables. Many of the SQL statements generated by Siebel are more than 1000 characters long. In that case, how could I use the remove_constants procedure?

Please help

Thanks

Tom Kyte

Followup  

August 10, 2004 - 10:22 am UTC

you'd have to modify it to use v$sqltext_with_newlines.

you could easily get up to 4000 characters via:

select address, hash_value,
max(decode(piece,0,sql_text)) || max(decode(piece,1,sql_text)) || max(decode(piece,2,sql_text)) ||
max(decode(piece,3,sql_text)) || max(decode(piece,4,sql_text)) || max(decode(piece,5,sql_text)) ||
max(decode(piece,6,sql_text)) || max(decode(piece,7,sql_text)) || max(decode(piece,8,sql_text)) ||
max(decode(piece,9,sql_text)) || max(decode(piece,10,sql_text)) || max(decode(piece,11,sql_text)) ||
max(decode(piece,12,sql_text)) || max(decode(piece,13,sql_text)) || max(decode(piece,14,sql_text)) ||
max(decode(piece,15,sql_text)) || max(decode(piece,16,sql_text)) || max(decode(piece,17,sql_text)) ||
max(decode(piece,18,sql_text)) || max(decode(piece,19,sql_text)) || max(decode(piece,20,sql_text)) ||
max(decode(piece,21,sql_text)) || max(decode(piece,22,sql_text)) || max(decode(piece,23,sql_text)) ||
max(decode(piece,24,sql_text)) || max(decode(piece,25,sql_text)) || max(decode(piece,26,sql_text)) ||
max(decode(piece,27,sql_text)) || max(decode(piece,28,sql_text)) || max(decode(piece,29,sql_text)) ||
max(decode(piece,30,sql_text)) || max(decode(piece,31,sql_text)) || max(decode(piece,32,sql_text)) ||
max(decode(piece,33,sql_text)) || max(decode(piece,34,sql_text)) || max(decode(piece,35,sql_text)) ||
max(decode(piece,36,sql_text)) || max(decode(piece,37,sql_text)) || max(decode(piece,38,sql_text)) ||
max(decode(piece,39,sql_text)) || max(decode(piece,40,sql_text)) || max(decode(piece,41,sql_text)) ||
max(decode(piece,42,sql_text)) || max(decode(piece,43,sql_text)) || max(decode(piece,44,sql_text)) ||
max(decode(piece,45,sql_text)) || max(decode(piece,46,sql_text)) || max(decode(piece,47,sql_text)) ||
max(decode(piece,48,sql_text)) || max(decode(piece,49,sql_text)) || max(decode(piece,50,sql_text)) ||
max(decode(piece,51,sql_text)) || max(decode(piece,52,sql_text)) || max(decode(piece,53,sql_text)) ||
max(decode(piece,54,sql_text)) || max(decode(piece,55,sql_text)) || max(decode(piece,56,sql_text)) ||
max(decode(piece,57,sql_text)) || max(decode(piece,58,sql_text)) || max(decode(piece,59,sql_text)) ||
max(decode(piece,60,sql_text)) || max(decode(piece,61,sql_text))
from v$sqltext_with_newlines
group by address, hash_value
/


else, you are looking at a clob (not impossible, just more work)
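The pivot above can be hard to read; the reassembly logic it performs is just "collect the pieces per (address, hash_value) and concatenate them in piece order." A minimal Python sketch of that logic (illustrative only -- the real work happens in SQL against v$sqltext_with_newlines):

```python
from collections import defaultdict

def reassemble(rows):
    """Reassemble SQL text that a view like v$sqltext_with_newlines
    splits into ordered pieces, keyed by (address, hash_value).

    rows: iterable of (address, hash_value, piece, sql_text) tuples.
    Returns {(address, hash_value): full_sql_text}.
    """
    pieces = defaultdict(dict)
    for address, hash_value, piece, text in rows:
        pieces[(address, hash_value)][piece] = text
    # Concatenate pieces in piece-number order, like the chain of
    # max(decode(piece, n, sql_text)) expressions in the query above.
    return {key: ''.join(t for _, t in sorted(parts.items()))
            for key, parts in pieces.items()}

# Hypothetical rows standing in for the view's output:
rows = [
    ('0xA1', 111, 0, 'select * '),
    ('0xA1', 111, 1, 'from dual'),
    ('0xB2', 222, 0, 'select sysdate from dual'),
]
print(reassemble(rows)[('0xA1', 111)])  # prints: select * from dual
```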

BIND VARIABLE and ORACLE FORMS

August 11, 2004 - 5:21 pm UTC

Reviewer: Deepak from FL, USA

Hi Tom,

I have a following code in Oracle Forms:

GO_BLOCK('BLOCK1') ;
SET_BLOCK_PROPERTY ('BLOCK1',default_where,'ref_key = ' || out_ref_key);
EXECUTE_QUERY;

This code always gets hard-parsed. Please suggest a way to Bind this query.

Thanks
Deepak

Tom Kyte

Followup  

August 12, 2004 - 8:32 am UTC

'ref_key = :block.item'


will auto-bind for you. use a "control block" (a block full of just items, not based on a table). put out_ref_key into a field there and use :control_block.item_name

As Always, TOM RULES !!!

August 12, 2004 - 10:35 am UTC

Reviewer: Deepak from FL, USA

Thank you tom. The Oracle Forms solution works.

I have a select statement on a lookup table which is called from 100's of different forms, Stored Procedures, Oracle Reports, Triggers etc...

Is it possible to FORCE cursor sharing for 1 single statement without changing the database parameter, so as not to affect cursor_sharing for other statements in the Database?

Deepak

Tom Kyte

Followup  

August 12, 2004 - 10:43 am UTC

umm, if you have a single localized sql statement - why not just bind it and do it right in the first place?!?

August 12, 2004 - 3:02 pm UTC

Reviewer: A reader

Because it is called from 1000's of different places. I mean,

Form1
Program Unit 1
select select_list from lu_table
where type = 'TYPE1' and code = 'CODE1' ;


Form2
Program Unit 1
select select_list from lu_table
where type = 'TYPE2' and code = 'CODE2' ;
...
...
Form n
Program Unit X
select select_list from lu_table
where type = 'TYPEX' and code = 'CODEX' ;


same goes with Stored Procedures, Oracle Reports, Table Triggers.




Tom Kyte

Followup  

August 12, 2004 - 3:32 pm UTC

sooo, what you meant to say is:

I have hundreds of forms, stored procedures, reports, triggers etc that each have their own uniquely developed SQL

Not

I have a select statement on a lookup table which is called from 100's of
different forms, Stored Procedures, Oracle Reports, Triggers etc...


You don't have "a select", you have "hundreds of them"

No, there is no magic that says "hey, when you see a query that looks like 'select ... from lu_table where type = ... and code = ...', treat its literals as binds."


Two bad things there -- they appear to have used a single code table for everything (how's that declarative integrity working out for you? not!). They didn't employ modular coding techniques.

Wait three -- they didn't bind...


Sorry -- no magic for this one -- they probably all have different whitespace anyhow, wouldn't make any difference


select *
from t
where x = 5 and y = 10


is different from


select *
from t
where x = 5
and y = 10

is different from


Select *
from t
where x = 5
and y = 10


so -- it wouldn't do much anyway.
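The point about whitespace can be demonstrated outside the database: cursor sharing matches on the exact statement text, so anything that changes the bytes -- whitespace, case -- produces a different key. A Python sketch (the hash here is illustrative, not Oracle's actual hash function):

```python
import hashlib

def cursor_key(sql_text):
    """Illustration of exact-text matching: hash the raw bytes of
    the statement, so any whitespace or case difference yields a
    different key. Not Oracle's real hashing, just the principle."""
    return hashlib.sha1(sql_text.encode()).hexdigest()

a = "select * from t where x = 5 and y = 10"
b = "select * from t where x = 5\nand y = 10"   # same query, newline added
c = "Select * from t where x = 5 and y = 10"    # same query, case changed

keys = {cursor_key(s) for s in (a, b, c)}
print(len(keys))  # prints: 3  (three distinct cursors for one logical query)
```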

August 12, 2004 - 3:52 pm UTC

Reviewer: A reader

In such a case, is it OK to go with CURSOR_SHARING=FORCE ?

Tom Kyte

Followup  

August 12, 2004 - 4:25 pm UTC

I am reminded of a show I watched -- "Extreme Elimination" (if you haven't seen it, it is a Japanese series dubbed in english -- the contestants are made to do some pretty wacky things).


One of the contests was the person has to run full speed through a series of doors. There are 3 walls one after the other. Each wall has 3 doors. The doors are covered in paper. 2 out of 3 of the doors are "open", the 3rd door is closed.

Contestant must run full speed through these doors. If they hit paper all of the way, they win. If they hit a locked wooden door (running at full speed remember), well -- they don't win.

That is what I think of when I think about cursor_sharing.


You need to test that -- everywhere. It is not something you just turn on lightly. Things will be affected by it. I prefer for applications that are getting fixed to turn it on themselves (so as to not affect "good" applications) myself.

September 15, 2004 - 11:54 am UTC

Reviewer: A reader

Tom,

After issuing
select sql_text_wo_constants, count(*)
from t1
group by sql_text_wo_constants
having count(*) > 50
order by 2

I noticed I have a lot of entries like these for different procedures:

BEGIN MY_PROC_P1('#',:@,:RETCODE); END; 120
BEGIN MY_PROC-P2( '#', '#', '#', :@, 150
NULL, NULL, TO_DATE('#','#') );END;

As per my understanding, procedures are auto-bound, so why do I see entries like these?

Thanks.


Tom Kyte

Followup  

September 15, 2004 - 11:58 am UTC

your understanding is wrong.

the sql inside of a stored procedure (static sql) is "nicely bound as each reference to a plsql variable in a sql query is a bind variable"

but the CALLER of such procedures STILL NEEDS TO BIND THE CALL TO THE PROCEDURE itself!

The good news, if p1 has 10 queries, you have 1/10 of the problem you would have. The coders have one statement to fix, not 10!
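The caller-side fix looks the same in any host language: one statement text with placeholders, executed with different values. A sketch using Python's sqlite3 as a stand-in for an Oracle client (the table and data are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table lu_table (type text, code text, val text)")
conn.executemany("insert into lu_table values (?, ?, ?)",
                 [("TYPE1", "CODE1", "a"), ("TYPE2", "CODE2", "b")])

# One statement text, many executions: only the bind values change,
# so a shared-statement cache sees a single entry instead of one
# entry per distinct literal pair.
stmt = "select val from lu_table where type = ? and code = ?"
for t, c in [("TYPE1", "CODE1"), ("TYPE2", "CODE2")]:
    row = conn.execute(stmt, (t, c)).fetchone()
    print(row[0])  # prints: a, then b
```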

September 15, 2004 - 12:07 pm UTC

Reviewer: A reader

Thanks for your prompt reply.

"but the CALLER of such procedures STILL NEEDS TO BIND THE CALL TO THE PROCEDURE
itself! "

Does it mean a bind variable has to be declared and then passed to those procedures?



Tom Kyte

Followup  

September 15, 2004 - 12:45 pm UTC

yes, just like they bound:

BEGIN MY_PROC_P1('#',:@,:RETCODE); END;
                        ^^^^^^^^

they bound retcode -- they should be binding the first two inputs as well (geez, they know how to do this! why didn't they just do it!)

September 15, 2004 - 1:41 pm UTC

Reviewer: A reader

Much clearer now .. Thanks

your final words are required.

December 22, 2004 - 4:26 am UTC

Reviewer: Raaghid from Chenai, India

I attended a seminar (NOT from Oracle Corporation) on database maintenance. The session ended with the following conclusions:

1. Rollback segments to be defined as n/4 ("n" = # of users, NOT exceeding 50)
2. Ideally a database should have 4 control files, each on a separate HDD.
3. Extents calculation: make it a single extent, etc. (I know this is a myth, since I have gone through your forum - so please ignore this point)
4. If DB_BLOCK_BUFFERS is low, users will NOT have enough memory to operate efficiently; if DB_BLOCK_BUFFERS is high, your system may begin to swap and may come to a halt.
5. A read hit ratio below 90 is usually a sign of poor indexing (but distortion of the hit ratio number is possible)
6. Data dictionary cache miss ratio of 90-95% is good.
7. Library cache hit ratio > 95% is good.
8. Increase sort_area_size to reduce I/O; it determines the PER USER space allocated in main memory for each process to sort.


Tom Kyte

Followup  

December 22, 2004 - 9:38 am UTC

1) rollback segments should be figured out by Oracle (automatic undo management)

2) 3 or 4, mostly 3 is what I see

3) silly

4) shows they have a firm grasp of the obvious :)

5) "read hit ratio"?

6) you want to miss 90/95% of the time?

7) and 96 is better

8) per user space, hmm, I thought it would be the per sort space and a user can have many of them -- but anyway -- pga_aggregate_target....



read hit ratio

December 23, 2004 - 5:02 am UTC

Reviewer: Raaghid from India

5) "read hit ratio"?
select (1-(sum(decode(name,'physical reads',value,0))/
(sum(decode(name,'db block gets',value,0))+
(sum(decode(name,'consistent gets',value,0))))))*100 Read_Hit_ratio
from v$sysstat;

6) you want to miss 90/95% of the time?

ohh. it is buffer miss followed by "hit ratio"

select sum(gets) gets, sum(getmisses) misses,
(1 - (sum(getmisses) /(sum(gets)+sum(getmisses))))*100 hit_ratio
from v$rowcache;

Point no. 5: Read hit ratio below 90-95 % means poor indexing. Do you agree? please answer.


Tom Kyte

Followup  

December 23, 2004 - 11:24 am UTC

5) that's a cache hit ratio isn't it.


No, I totally disagree that a hit ratio MEANS ANYTHING.


A hit ratio can be used like a flag. My hit ratio suddenly changed from X to Y -- that is an indication that something "changed" -- maybe for the good, maybe for the bad. It means something changed.


Using your logic, a hit ratio above 95% might mean I've overindexed :)

it might, it might not.
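For reference, the "read hit ratio" formula the reviewer posted is easy to reproduce. Fed with the physical reads, db block gets, and consistent gets figures from the statspack report later in this thread, it gives the same 95.78% that report shows as Buffer Hit %. A Python restatement:

```python
def read_hit_ratio(stats):
    """The v$sysstat formula the reviewer quotes, restated:
    (1 - physical reads / (db block gets + consistent gets)) * 100."""
    logical = stats['db block gets'] + stats['consistent gets']
    return (1 - stats['physical reads'] / logical) * 100

# Figures taken from the statspack report further down in this thread.
stats = {'physical reads': 771_637,
         'db block gets': 670_863,
         'consistent gets': 17_630_040}
print(round(read_hit_ratio(stats), 2))  # prints: 95.78
```

As Tom notes, the number itself proves nothing; a sudden change in it is the useful signal.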

January 19, 2005 - 3:38 am UTC

Reviewer: Helena Marková from Bratislava, Slovakia


bind result

March 25, 2005 - 8:29 am UTC

Reviewer: Safrin from india

I have checked our production server using your remove_constants procedure. The results are:

select count(1) from t1;
result --> 10990

select count(1) from
(select sql_text_wo_constants, count(*) cnt from t1
group by sql_text_wo_constants
having count(*) > 1);
result --> 481

select sum(cnt) from
(select sql_text_wo_constants, count(*) cnt from t1
group by sql_text_wo_constants
having count(*) > 1);
result --> 8157

total parsing sqls - 10,990
sqls having repeated parsing - 481
sql repeated parsing with no. of times - 8157
(ie instead of 481, 8157 times parsed)

ratio = ( 481 / (10990- (8157-481)) )*100
= 14.51 %
Hence our production server's SQL is approx 85% bind enabled, and in 15% of places it still needs to be changed to be "bind enabled".
(So I have to take the list of SQLs repeated in table t1 (arrived at using remove_constants) and take it up with the development team to change them.)

Please confirm my calculation above.

Tom Kyte

Followup  

March 25, 2005 - 6:14 pm UTC

you cannot compute a ratio from that; you can only compute what percentage of the shared pool CURRENTLY is "bind friendly" and "not bind friendly"

for the "not bind friendly" statements will kick bind friendly statements out.

but you do seem to have the ratios backwards.

If 8,157 statements in there are the same except for literals (potentially not bind friendly statements) and you have about 11,000 statements in there -- roughly 75% of them are "not bind friendly"
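The classification Safrin ran can be sketched in a few lines: normalize literals away, then count how many raw statements collapse into each pattern. This is only a loose Python analogue of the remove_constants PL/SQL function earlier in the thread (the real function walks the string character by character and handles quoted quotes; the regexes here are simplifications):

```python
import re
from collections import Counter

def remove_constants(sql):
    """Rough analogue of the remove_constants PL/SQL function above:
    quoted strings become '#', bare numbers become '@'."""
    sql = re.sub(r"'[^']*'", "'#'", sql)   # string literals -> '#'
    sql = re.sub(r'\b\d+\b', '@', sql)     # numeric literals -> @
    return sql

statements = [
    "select * from emp where empno = 1234",
    "select * from emp where empno = 5678",
    "select * from emp where empno = :b1",
]
groups = Counter(remove_constants(s) for s in statements)
# The two literal variants collapse into one pattern seen twice;
# the bound statement stays its own pattern.
print(groups["select * from emp where empno = @"])  # prints: 2
```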

exactly

March 27, 2005 - 7:21 am UTC

Reviewer: riyaz from india

Exactly -- it only tells what percentage of the SQL is bind friendly.
Thanks.

Then how does one find out the ratio?

Tom Kyte

Followup  

March 27, 2005 - 10:22 am UTC

it is a binary thing.

You either have bind friendly sql
or you don't

that sql which is not bind friendly needs to be reviewed to see if it should be.

Library Cache Latch contention

May 12, 2005 - 10:03 am UTC

Reviewer: tom evans from Croydon , Surrey, England

Hi Tom,

We have severe problems revolving around Library Cache latch waits/sleeps. I wonder if you know of any way of finding the amount of time a Library Cache child latch is held (which we have identified as child #106), and of identifying the respective SQL statements, in version 8.1.7.4.

I do appreciate that we are dealing with microseconds, but this child is using approx 97% of the waits on the system and we would hope therefore that it will repeat and / or be holding the child latch for a longer time.

We have already reduced the Library Cache Latch requests by over 40%, by code optimisation (less executions/more soft parses), but the sleeps have not decreased at all.

Tom Kyte

Followup  

May 12, 2005 - 1:08 pm UTC

can you define "severe" -- where you are and where you need to get to.

the executions are not a problem, it is the PARSE

your goal would be to parse once and execute it over and over and over. (soft parses are bad parses, hard parses are just worse bad parses)

do you have a pretty bad hard parse problem still?

The latches are held for incredibly small periods of time (hopefully), they are not tracked at that level.

dynamic sql and bind variable

May 19, 2005 - 12:04 pm UTC

Reviewer: reader from USA

Tom,
We have a very complicated PL/SQL package in which the developer uses dynamic SQL to insert data into a global temp table. This is used all day in our OLTP environment, where application users search for a person with different inputs. When we checked, the execute and parse counts are about the same, around 6000, meaning that many different versions of the SQL are in the cache instead of 1 or 2.

I had used the following query,

SELECT substr(sql_text,1,40) "SQL", count(*) ,
sum(executions) "TotExecs"
FROM v$sqlarea
WHERE executions < 5
GROUP BY substr(sql_text,1,40)
HAVING count(*) > 30
ORDER BY 2;

Given the code below in the PL/SQL package, is there any way we can tune this?

v_InsertStmt := 'INSERT INTO temp_person_search
(sequence_id, ' || 'membership_number, ' || 'account_individuals, ' || 'date_of_births, ' ||
'ages, ' ||
'address, ' ||
'phone_numbers, ' ||
'household_id, ' ||
'sort_individual_id) ' ||
'(SELECT ' || v_SequenceID || ', ' ||
'pkg_person_search.fn_get_current_member_num(h1.household_id), ' ||
'pkg_person_search.fn_get_household_members(h1.household_id), ' || 'pkg_person_search.fn_get_hhld_mem_birth_date(h1.household_id), ' ||
'pkg_person_search.fn_get_hhld_mem_age(h1.household_id), ' ||
'pkg_person_search.fn_get_household_address(h1.household_id), ' || 'pkg_person_search.fn_get_household_phone_nums(h1.household_id), ' || 'h1.household_id, ' ||
'pkg_person_search.fn_get_result_order_indv_id(h1.household_id) ' ||
'FROM household h1 ' ||
'WHERE h1.household_id IN (' || v_SQLStmt || '))';

EXECUTE IMMEDIATE v_InsertStmt;

Tom Kyte

Followup  

May 19, 2005 - 12:54 pm UTC

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3779680732446#15740265481549 <code>

'WHERE h1.household_id IN ( select * from table( cast(str2tbl(:x) as str2tableType ) ) ) ' USING v_sqlstmt;

bind.....
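For context, str2tbl and str2tableType are user-defined PL/SQL objects from the linked answer, not built-ins; the idea is to bind the whole comma-separated list as a single value and split it server-side, so the statement text never changes. The splitting itself is trivial -- a Python sketch of the same idea:

```python
def str2tbl(csv, sep=','):
    """Split one bound string into the list of values an IN-list
    needs, mirroring what the str2tbl PL/SQL function does on the
    server side. Blank pieces are dropped."""
    return [piece.strip() for piece in csv.split(sep) if piece.strip()]

# One bind value carries the whole, variable-length list:
print(str2tbl("1, 2, 3, 4"))  # prints: ['1', '2', '3', '4']
```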

The function won't work in our case

May 19, 2005 - 5:29 pm UTC

Reviewer: reader from USA

Tom,
Thanks for the reply, but in our case the function and casting won't work, as we build the WHERE condition based on the customer input, and that input is different every time.


WHERE h1.household_id IN
( SELECT household_id FROM ( SELECT DISTINCT(h.household_Id) FROM household h WHERE (h.daytime_phone_num = '8454571465' OR h.evening_phone_num = '8454571465') ) WHERE ROWNUM < 253 ))

Another example

WHERE h1.household_id IN ( SELECT household_id FROM ( SELECT DISTINCT(h.household_Id) FROM household h WHERE h.current_bank_acct_num = '03173219' ) WHERE ROWNUM < 253 ))

Tom Kyte

Followup  

May 20, 2005 - 7:05 am UTC

um, sure it can work. anything can work.


</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1288401763279 <code>

getting error while casting in sys_context

May 25, 2005 - 5:26 pm UTC

Reviewer: Vrajesh from USA

I am getting an error while using casting with sys_context.
VAR1
-----
1,2,3,4

SELECT count(*)
FROM work_queue_item wqi, work_queue_status wqs
WHERE wqi.status_id = wqs.status_id
AND wqi.status_id IN (select * from table ( CAST( str2tbl ( sys_context( 'WQ_CTX', 's_status_id' ) ) as str2tableType )))

The wqi.status_id is Number data type.

The context was set in the same session through a PL/SQL procedure. I was getting an ORA-00902: invalid datatype error both ways. Could you please show me where I am going wrong?

Thanks,


Tom Kyte

Followup  

May 25, 2005 - 8:05 pm UTC

type name is probably str2TBLtype, not TABLE



same as above

May 25, 2005 - 5:34 pm UTC

Reviewer: Vrajesh from USA

Continuing from the above question: why do I need to cast?
Why does the following statement not work for the above context value?

SELECT count(*) FROM work_queue_item wqi,work_queue_status wqs WHERE wqi.status_id = wqs.status_id
AND wqi.status_id IN ( to_number(sys_context( 'WQ_CTX', 's_status_id' )) )



Tom Kyte

Followup  

May 25, 2005 - 8:05 pm UTC

cast is needed in older releases and sometimes in plsql, hence I tend to use it.

May 30, 2005 - 4:55 am UTC

Reviewer: shay ittah from Israel

Hi Tom,

I am running this query :
select w.sid,
c.name || ' (child #' || c.child# || ')' latch,
s.program,
s.machine || ', ' || s.process process,
p.spid ospid,
a.sql_text,
a.hash_value,
a.parse_calls,
a.executions
from v$session_wait w,
v$latch_children c,
v$session s,
v$process p,
v$sqlarea a
where w.event = 'latch free'
and c.addr = w.p1raw
and s.sid = w.sid
and p.addr = s.paddr
and a.address (+) = s.sql_address
/


and the output is

397 library iwserver@r3d3 r3d3, 13201 Begin 1055490975 6902 13560
cache (TNS V1-V3) 25409 Ptsm_Vas_General_Services3_Pkg
(child #7) .Ptsm_GeneralServices_Validate
(:txt0, :txt1,:txt2, :a0, :a1,
:a2, :a3, :a4,:b, :r, :s);
End;

567 library iwserver@r3d3 r3d3, 3032 Begin 1055490975 6902 13560
cache (TNS V1-V3) 2681 Ptsm_Vas_General_Services3_Pkg
(child #1) .Ptsm_GeneralServices_Validate
(:txt0, :txt1,:txt2, :a0, :a1,
:a2, :a3, :a4,:b, :r, :s);
End;
The same statement / same hash_value, but different children?

How come?

Tom Kyte

Followup  

May 30, 2005 - 9:02 am UTC

you know, I really cannot make heads or tails of that output.

</code> http://asktom.oracle.com/~tkyte/print_table/ <code>

try printing "down" the page.

but also try using "length" and "dump" on the SQL text and look for obvious differences.



over 22 seconds on nam='latch: library cache'

July 09, 2005 - 5:17 pm UTC

Reviewer: A reader

Here is the version:
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.1.0.3.0 - 64bi
PL/SQL Release 10.1.0.3.0 - Production
CORE    10.1.0.3.0      Production
TNS for Solaris: Version 10.1.0.3.0 - Production
NLSRTL Version 10.1.0.3.0 - Production


Here is the trace file:

=====================
PARSING IN CURSOR #1 len=65 dep=0 uid=220 oct=6 lid=220 tim=387747479508 hv=4170283050 ad='9ae98b90'
update cust set lastLogin = sysdate where customer_id = :cid
END OF STMT
PARSE #1:c=0,e=93,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=387747479495
*** 2005-07-08 14:18:01.567
<b>WAIT #1: nam='latch: library cache' ela= 22360910 </b>p1=24488193544 p2=199 p3=0
BINDS #1:
 bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=00 oacfl2=0000 size=24 offset=0
   bfp=ffffffff7bb6d2a0 bln=22 avl=06 flg=05
   value=103313413
WAIT #1: nam='buffer busy waits' ela= 77925 p1=159 p2=392206 p3=1
WAIT #1: nam='buffer busy waits' ela= 12791 p1=547 p2=710649 p3=1
WAIT #1: nam='enq: TX - index contention' ela= 11388 p1=1415053316 p2=4587522 p3=301972
WAIT #1: nam='latch: enqueue hash chains' ela= 7739 p1=24840189800 p2=20 p3=0
EXEC #1:c=20000,e=22479366,p=0,cr=11,cu=17,mis=0,r=1,dep=0,og=1,tim=387769959129
WAIT #1: nam='SQL*Net message to client' ela= 9 p1=1413697536 p2=1 p3=0
WAIT #1: nam='SQL*Net message from client' ela= 453 p1=1413697536 p2=1 p3=0
STAT #2 id=1 cnt=0 pid=0 pos=1 obj=0 op='COUNT STOPKEY (cr=31 pr=0 pw=0 time=1043 us)'
STAT #2 id=2 cnt=0 pid=1 pos=1 obj=22585 op='TABLE ACCESS FULL WG2K_PLAYER_LOCKOUT (cr=31 pr=0 pw=0 time=1034 us)'
=====================
PARSING IN CURSOR #2 len=105 dep=0 uid=220 oct=3 lid=220 tim=387769961679 hv=1751757214 ad='9c679790'
select work, round_id, version, revision, variation, flags from work_in_progress where customer_id = :1
END OF STMT
PARSE #2:c=0,e=200,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=387769961661
BINDS #2:
 bind 0: dty=2 mxl=22(22) mal=00 scl=00 pre=00 oacflg=00 oacfl2=0000 size=24 offset=0
   bfp=ffffffff7bb5ff78 bln=22 avl=06 flg=05
   value=103313413
EXEC #2:c=0,e=555,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,tim=387769962607
WAIT #2: nam='SQL*Net message to client' ela= 4 p1=1413697536 p2=1 p3=0
FETCH #2:c=0,e=123,p=0,cr=3,cu=0,mis=0,r=1,dep=0,og=1,tim=387769963173
WAIT #2: nam='SQL*Net message from client' ela= 449 p1=1413697536 p2=1 p3=0
XCTEND rlbk=0, rd_only=0
WAIT #0: nam='log file sync' ela= 7090 p1=3563 p2=0 p3=0
WAIT #0: nam='SQL*Net message to client' ela= 5 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 418 p1=1413697536 p2=1 p3=0
XCTEND rlbk=1, rd_only=1
WAIT #0: nam='SQL*Net message to client' ela= 9 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 486 p1=1413697536 p2=1 p3=0
XCTEND rlbk=1, rd_only=1
WAIT #0: nam='SQL*Net message to client' ela= 3 p1=1413697536 p2=1 p3=0
WAIT #0: nam='SQL*Net message from client' ela= 768 p1=1413697536 p2=1 p3=0
STAT #1 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE  (cr=11 pr=0 pw=0 time=114758 us)'
STAT #1 id=2 cnt=1 pid=1 pos=1 obj=16837 op='TABLE ACCESS BY INDEX ROWID cust (cr=5 pr=0 pw=0 time=529 us)'
STAT #1 id=3 cnt=1 pid=2 pos=1 obj=52162 op='INDEX RANGE SCAN cust_IDX001 (cr=3 pr=0 pw=0 time=149 us)'
=====================

Any idea why Oracle spent so long on library cache latch considering that we're already using bind variables?

And why did Oracle emit '*** 2005-07-08 14:18:01.567' before those waits?

Thanks a lot in advance 

Tom Kyte

Followup  

July 10, 2005 - 9:20 am UTC

a parse is a parse is a parse is a parse. There are types of parses, hard and soft, soft being better than hard, but a parse is a parse is a parse. binds or not. It is a parse and as such, must do significant latching into the shared pool to process.

is your system suffering from "lots of parsing" -- tons of parsing?

and don't you wish every line had that nice timestamp, I sure do.

Remove_constants with multibyte characters

July 20, 2005 - 4:18 am UTC

Reviewer: Neelz from Japan

Dear Tom
I was trying to find out the Sql statements not using bind variables with the procedure you had given. When I execute the function it returns with error ORA-06502. It is because of using multibyte characters(Japanese) in the query like table names or column names. Could you please help me with a solution for this.
For example:
select remove_constants('Ｌ') from dual;
gives error ORA-06502.

Thanks & Regards



Tom Kyte

Followup  

July 20, 2005 - 6:28 am UTC

the code is not that complex, can you take a look at it and sort of debug it?

library cache latch contention

July 26, 2005 - 3:10 pm UTC

Reviewer: A reader

we are seeing a lot of 'latch free' waits. It's the #1 wait. The cause is library cache latch contention. It's not practical to change the code at the moment; the system is very complex. Is there any way to reduce the library cache latch waits?

Tom Kyte

Followup  

July 26, 2005 - 3:29 pm UTC

what is causing the library cache latch contention:

a) excessive soft parsing
b) excessive hard parsing

and -- do you have a test system that you can load up to, well, test a parameter change on.

July 26, 2005 - 5:20 pm UTC

Reviewer: A reader

There is no hard parsing at all.
We have cursor_sharing=SIMILAR

We don't have a test system where we can practically test anything. We'll have to do it in production.

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 58,660.73 1,936.94
Logical reads: 10,185.88 336.33
Block changes: 343.58 11.34
Physical reads: 429.88 14.19
Physical writes: 40.21 1.33
User calls: 3,393.25 112.04
Parses: 910.57 30.07
Hard parses: 0.02 0.00
Sorts: 51.59 1.70
Logons: 0.12 0.00
Executes: 1,643.58 54.27
Transactions: 30.29

% Blocks changed per Read: 3.37 Recursive Call %: 34.87
Rollback per transaction %: 1.90 Rows per Sort: 23.03

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.71 Redo NoWait %: 99.97
Buffer Hit %: 95.78 In-memory Sort %: 100.00
Library Hit %: 100.47 Soft Parse %: 100.00
Execute to Parse %: 44.60 Latch Hit %: 98.76
Parse CPU to Parse Elapsd %: 2.10 % Non-Parse CPU: 94.78

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 71.22 71.42
% SQL with executions>1: 77.44 77.36
% Memory for SQL w/exec>1: 76.06 75.95

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
latch free 1,008,928 109,850 33.27
enqueue 126,613 62,231 18.84
buffer busy waits 51,496 38,175 11.56
db file sequential read 738,505 32,484 9.84
free buffer waits 31,495 30,784 9.32
-------------------------------------------------------------


Tom Kyte

Followup  

July 26, 2005 - 5:54 pm UTC

Parses: 910.57 30.07

ouch, that hurts doesn't it.

what do you have session cached cursors set to? what is the session cursor cache hits|count statistic showing?

session wait

July 27, 2005 - 3:34 pm UTC

Reviewer: abc

SID  SEQ#  EVENT                                   P1TEXT              P1          P1RAW             P2TEXT
---  ----- --------------------------------------- ------------------- ----------- ----------------- ------
  1  58267 pmon timer                              duration            300         000000000000012C
  2  39979 rdbms ipc message                       timeout             300         000000000000012C
  3  22189 rdbms ipc message                       timeout             69          0000000000000045
  6    528 rdbms ipc message                       timeout             180000      000000000002BF20
  4  42730 rdbms ipc message                       timeout             300         000000000000012C
  9  47573 db file sequential read                 file#               245         00000000000000F5  block#
 18  31212 db file sequential read                 file#               277         0000000000000115  block#
 13    270 PX Deq: Txn Recovery Start              sleeptime/senderid  200         00000000000000C8  passes
 14    270 PX Deq: Txn Recovery Start              sleeptime/senderid  200         00000000000000C8  passes
 22    270 PX Deq: Txn Recovery Start              sleeptime/senderid  200         00000000000000C8  passes
  5   8597 wait for stopper event to be increased                      0           00
 23    328 SQL*Net message to client               driver id           1413697536  0000000054435000  #bytes
  7   2677 SQL*Net message from client             driver id           675562835   0000000028444553  #bytes
 10   3009 SQL*Net message from client             driver id           1413697536  0000000054435000  #bytes
 16  14370 SQL*Net message from client             driver id           1413697536  0000000054435000  #bytes
 20   1448 SQL*Net message from client             driver id           1413697536  0000000054435000  #bytes
 17   8526 SQL*Net message from client             driver id           1413697536  0000000054435000  #bytes
 11   4241 SQL*Net message from client             driver id           675562835   0000000028444553  #bytes
 12   3078 SQL*Net message from client             driver id           1413697536  0000000054435000  #bytes

Tom, please give your comments on the above.

Tom Kyte

Followup  

July 27, 2005 - 3:51 pm UTC

looks like the output of a query to me? not sure what you want me to say.

July 28, 2005 - 11:43 am UTC

Reviewer: A reader

<Tom>
Parses: 910.57 30.07

ouch, that hurts doesn't it.

what do you have session cached cursors set to? what is the session cursor
cache hits|count statistic showing?
</Tom>

session_cached_cursors is not set. We tried setting it to 100, but it doesn't look like it helped.

what is the session cursor cache hits|count statistic showing? --- Where do I see it?

Tom Kyte

Followup  

July 28, 2005 - 12:15 pm UTC

it would be in the statspack report along with all of the other statistics.

July 28, 2005 - 12:58 pm UTC

Reviewer: A reader

Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
CPU used by this session 980,970 546.5 18.1
CPU used when call started 980,949 546.5 18.0
CR blocks created 5,850 3.3 0.1
Cached Commit SCN referenced 1,218 0.7 0.0
Commit SCN cached 89 0.1 0.0
DBWR buffers scanned 0 0.0 0.0
DBWR checkpoint buffers written 33,798 18.8 0.6
DBWR checkpoints 2 0.0 0.0
DBWR cross instance writes 25 0.0 0.0
DBWR free buffers found 0 0.0 0.0
DBWR fusion writes 10,635 5.9 0.2
DBWR lru scans 18 0.0 0.0
DBWR make free requests 72 0.0 0.0
DBWR revisited being-written buff 1 0.0 0.0
DBWR summed scan depth 0 0.0 0.0
DBWR transaction table writes 230 0.1 0.0
DBWR undo block writes 4,017 2.2 0.1
DFO trees parallelized 19 0.0 0.0
PX local messages recv'd 86 0.1 0.0
PX local messages sent 86 0.1 0.0
PX remote messages recv'd 92 0.1 0.0
PX remote messages sent 69 0.0 0.0
Parallel operations downgraded to 0 0.0 0.0
Parallel operations not downgrade 19 0.0 0.0
RowCR - row contention 25 0.0 0.0
RowCR attempts 1,009 0.6 0.0
RowCR hits 952 0.5 0.0
SQL*Net roundtrips to/from client 5,984,253 3,333.9 110.1
active txn count during cleanout 10,243 5.7 0.2
background checkpoints completed 1 0.0 0.0
background checkpoints started 2 0.0 0.0
background timeouts 2,148 1.2 0.0
branch node splits 5 0.0 0.0
buffer is not pinned count 9,944,004 5,539.8 182.9
buffer is pinned count 4,560,050 2,540.4 83.9
bytes received via SQL*Net from c 530,065,085 295,300.9 9,750.7
bytes sent via SQL*Net to client 841,800,537 468,969.7 15,485.1
calls to get snapshot scn: kcmgss 3,457,742 1,926.3 63.6
calls to kcmgas 252,079 140.4 4.6
calls to kcmgcs 4,396 2.5 0.1
change write time 32,627 18.2 0.6
cleanout - number of ktugct calls 9,568 5.3 0.2
cleanouts and rollbacks - consist 5,216 2.9 0.1
cleanouts only - consistent read 342 0.2 0.0
cluster key scan block gets 260,623 145.2 4.8
cluster key scans 151,531 84.4 2.8
commit cleanout failures: block l 54 0.0 0.0
commit cleanout failures: buffer 25 0.0 0.0
commit cleanout failures: callbac 21 0.0 0.0
commit cleanout failures: cannot 159 0.1 0.0
commit cleanouts 141,756 79.0 2.6
commit cleanouts successfully com 141,497 78.8 2.6
commit txn count during cleanout 2,120 1.2 0.0
consistent changes 10,281 5.7 0.2
consistent gets 17,630,040 9,821.8 324.3
consistent gets - examination 8,444,648 4,704.5 155.3
Instance Activity Stats for DB: ECA Instance: eca1 Snaps: 22323 -22324

current blocks converted for CR 1 0.0 0.0
cursor authentications 14 0.0 0.0
data blocks consistent reads - un 10,047 5.6 0.2
db block changes 616,730 343.6 11.3
db block gets 670,863 373.7 12.3
deferred (CURRENT) block cleanout 66,423 37.0 1.2
dirty buffers inspected 108,718 60.6 2.0
enqueue conversions 405 0.2 0.0
enqueue deadlocks 5 0.0 0.0
enqueue releases 232,629 129.6 4.3
enqueue requests 231,908 129.2 4.3
enqueue timeouts 3 0.0 0.0
enqueue waits 17,058 9.5 0.3
exchange deadlocks 17,159 9.6 0.3
execute count 2,950,234 1,643.6 54.3
free buffer inspected 1,527,907 851.2 28.1
free buffer requested 1,049,648 584.8 19.3
gcs messages sent 956,297 532.8 17.6
ges messages sent 56,557 31.5 1.0
global cache blocks lost 65 0.0 0.0
global cache convert time 658,050 366.6 12.1
global cache converts 73,128 40.7 1.4
global cache cr block build time 6,703 3.7 0.1
global cache cr block flush time 24,781 13.8 0.5
global cache cr block receive tim 65,111 36.3 1.2
global cache cr block send time 11,616 6.5 0.2
global cache cr blocks received 41,016 22.9 0.8
global cache cr blocks served 74,157 41.3 1.4
global cache current block flush 32,034 17.9 0.6
global cache current block pin ti 1,903,565 1,060.5 35.0
global cache current block receiv 354,068 197.3 6.5
global cache current block send t 7,540 4.2 0.1
global cache current blocks recei 103,002 57.4 1.9
global cache current blocks serve 43,077 24.0 0.8
global cache defers 313 0.2 0.0
global cache freelist waits 144 0.1 0.0
global cache get time 1,082,145 602.9 19.9
global cache gets 840,060 468.0 15.5
global cache prepare failures 0 0.0 0.0
global cache skip prepare failure 25,111 14.0 0.5
global lock async converts 733 0.4 0.0
global lock async gets 483 0.3 0.0
global lock convert time 1,712 1.0 0.0
global lock get time 5,553,240 3,093.7 102.2
global lock releases 740,602 412.6 13.6
global lock sync converts 1,311 0.7 0.0
global lock sync gets 740,244 412.4 13.6
hot buffers moved to head of LRU 365,778 203.8 6.7
immediate (CR) block cleanout app 5,577 3.1 0.1
immediate (CURRENT) block cleanou 16,050 8.9 0.3
index fast full scans (full) 1 0.0 0.0
index fetch by key 2,043,240 1,138.3 37.6
index scans kdiixs1 1,258,729 701.2 23.2
leaf node 90-10 splits 22 0.0 0.0
leaf node splits 105 0.1 0.0
logons cumulative 221 0.1 0.0
messages received 17,597 9.8 0.3
messages sent 17,597 9.8 0.3
no buffer to keep pinned count 0 0.0 0.0
no work - consistent read gets 6,807,056 3,792.2 125.2
opened cursors cumulative 1,411,846 786.5 26.0
parse count (failures) 6 0.0 0.0
parse count (hard) 30 0.0 0.0
parse count (total) 1,634,474 910.6 30.1
parse time cpu 51,185 28.5 0.9
parse time elapsed 2,435,356 1,356.7 44.8
physical reads 771,637 429.9 14.2
physical reads direct 95 0.1 0.0
physical writes 72,171 40.2 1.3
physical writes direct 65 0.0 0.0
physical writes non checkpoint 70,309 39.2 1.3
pinned buffers inspected 418,098 232.9 7.7
prefetch clients - default 6 0.0 0.0
prefetched blocks 19,697 11.0 0.4
prefetched blocks aged out before 37 0.0 0.0
process last non-idle time 252,581,568,265 140,713,965.6 4,646,289.1
queries parallelized 19 0.0 0.0
recursive calls 3,261,006 1,816.7 60.0
recursive cpu usage 448,622 249.9 8.3
redo blocks written 220,573 122.9 4.1
redo buffer allocation retries 52 0.0 0.0
redo entries 326,144 181.7 6.0
redo log space requests 89 0.1 0.0
redo log space wait time 1,341 0.8 0.0
redo ordering marks 4,253 2.4 0.1
redo size 105,296,012 58,660.7 1,936.9
redo synch time 1,693,641 943.5 31.2
redo synch writes 54,058 30.1 1.0
redo wastage 4,083,300 2,274.8 75.1
redo write time 140,164 78.1 2.6
redo writer latching time 3,892 2.2 0.1
redo writes 15,688 8.7 0.3
remote instance undo block writes 25 0.0 0.0
rollback changes - undo records a 9,615 5.4 0.2
rollbacks only - consistent read 605 0.3 0.0
rows fetched via callback 1,296,280 722.2 23.9
session connect time 278,351,372,041 155,070,402.3 5,120,329.9
session logical reads 18,283,663 10,185.9 336.3
session pga memory max 52,871,856 29,455.1 972.6
session uga memory 26,238,760 14,617.7 482.7
session uga memory max 107,014,712 59,618.2 1,968.6
shared hash latch upgrades - no w 1,775,368 989.1 32.7
shared hash latch upgrades - wait 635 0.4 0.0
sorts (disk) 0 0.0 0.0
sorts (memory) 92,599 51.6 1.7
sorts (rows) 2,133,009 1,188.3 39.2
summed dirty queue length 916,778 510.7 16.9
switch current to new buffer 83,353 46.4 1.5
table fetch by rowid 5,184,007 2,888.0 95.4
table fetch continued row 88,547 49.3 1.6
table scan blocks gotten 2,853,247 1,589.6 52.5
table scan rows gotten 2,460,559 1,370.8 45.3
table scans (long tables) 0 0.0 0.0
table scans (short tables) 354,522 197.5 6.5
transaction lock background get t 1 0.0 0.0
transaction lock background gets 136 0.1 0.0
transaction lock foreground reque 136 0.1 0.0
transaction lock foreground wait 1,029 0.6 0.0
transaction rollbacks 729 0.4 0.0
user calls 6,090,875 3,393.3 112.0
user commits 53,328 29.7 1.0
user rollbacks 1,034 0.6 0.0
workarea executions - onepass 0 0.0 0.0
workarea executions - optimal 184,874 103.0 3.4
write clones created in backgroun 1 0.0 0.0
write clones created in foregroun 1,542 0.9 0.0
-------------------------------------------------------------


Tom Kyte

Followup  

July 28, 2005 - 1:19 pm UTC

a simple "i don't see it in there" would suffice ;)



select * from v$sysstat where name like 'session%';

wait a bit.

do the query again

subtract the results of the first query from the second, that'll show if you are getting any session cache hits

and you do have the parameter set to a non-zero number, right?
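The before/after arithmetic Tom describes is just a delta of cumulative counters. A minimal Python sketch of the idea (the snapshot values below are made up for illustration; in reality they would come from two runs of the v$sysstat query above):

```python
def stat_delta(before, after):
    """Difference two snapshots of cumulative statistics,
    e.g. two reads of the 'session%' rows of v$sysstat."""
    return {name: after[name] - before.get(name, 0) for name in after}

# Hypothetical snapshots taken a few minutes apart.
before = {"session cursor cache hits": 100, "session cursor cache count": 50}
after  = {"session cursor cache hits": 250, "session cursor cache count": 50}

delta = stat_delta(before, after)
# A growing 'session cursor cache hits' delta means the cache is being used.
```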

July 28, 2005 - 4:06 pm UTC

Reviewer: A reader

STATISTIC# NAME CLASS VALUE
---------- ---------------------------------------------------------------- ---------- ----------
208 session cursor cache hits 64 0
209 session cursor cache count 64 0

Tom Kyte

Followup  

July 29, 2005 - 7:31 am UTC

and, do you have session_cached_cursors *SET* or not.

July 29, 2005 - 10:00 am UTC

Reviewer: A reader

<do you have session_cached_cursors *SET* or not.>
No.
However, we tried setting it to a value of 100 in the QA Stress region, but did not find any difference.

Tom Kyte

Followup  

July 29, 2005 - 12:32 pm UTC

define "did not find any difference"

you obviously won't have any hits or anything if it is not set (I thought that was obvious anyway).



July 29, 2005 - 12:31 pm UTC

Reviewer: A reader

Can someone who is running 9i RAC on Solaris share his statspack report with me? I just want to compare ours to get an idea whether our values are similar or way off. Also, it would be helpful if you could mention how many CPUs you have and how much your CPU utilization is.

Thanks in advance.

Tom Kyte

Followup  

July 29, 2005 - 2:55 pm UTC

statspacks are like fingerprints.

unless they are running YOUR application, there is no chance I would expect them to correlate even a little bit.

August 01, 2005 - 5:23 pm UTC

Reviewer: A reader

<do you have session_cached_cursors *SET* or not.>
No.
However, we tried setting it to a value of 100 in the QA Stress region, but did not
find any difference.


Followup:

define "did not find any difference"

-- The test was done by my colleague and he said he did not find any difference in terms of (i) parses/second, (ii) library cache latch free waits, and (iii) CPU utilization.



Tom Kyte

Followup  

August 01, 2005 - 8:44 pm UTC

did they measure if it was getting "hit", eg: USED

if not, I would agree with their assessment

if it was, I would not.

but that aside, it seems you've already tested that and there is no opportunity for cursor caching here.

back to vendor. no magic.

dave

October 10, 2005 - 7:15 am UTC

Reviewer: uk

Hi Tom, when trying to run your example at the top I get this (9.2.0.5, Red Hat Linux):

SQL> create table t1 as select sql_text from v$sqlarea;

Table created.

SQL> alter table t1 add sql_text_wo_constants varchar2(1000);

Table altered.

SQL> create or replace function
  2  remove_constants( p_query in varchar2 ) return varchar2
  3  as
  4      l_query long;
  5      l_char  varchar2(1);
  6      l_in_quotes boolean default FALSE;
  7  begin
  8      for i in 1 .. length( p_query )
  9      loop
 10          l_char := substr(p_query,i,1);
 11          if ( l_char = '''' and l_in_quotes )
 12          then
 13              l_in_quotes := FALSE;
 14          elsif ( l_char = '''' and NOT l_in_quotes )
 15          then
 16              l_in_quotes := TRUE;
 17              l_query := l_query || '''#';
 18          end if;
 19          if ( NOT l_in_quotes ) then
 20              l_query := l_query || l_char;
 21          end if;
 22      end loop;
 23      l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
 24      for i in 0 .. 8 loop
 25          l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
 26          l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
 27      end loop;
 28      return upper(l_query);
 29  end;
 30  /

Function created.

SQL>
SQL> update t1 set sql_text_wo_constants = remove_constants(sql_text);
update t1 set sql_text_wo_constants = remove_constants(sql_text)
                                      *
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "PERFSTAT.REMOVE_CONSTANTS", line 10


If i change
 
l_char  varchar2(1);

to l_char  varchar2(10);

it works but I am unsure if I have 'broken' anything

Thanks
 

Tom Kyte

Followup  

October 10, 2005 - 9:01 am UTC

you have a multibyte character set.... your fix was good - or you could have

ops$tkyte@ORA10GR2> declare l_char varchar2(1 CHAR ); begin null; end;
  2  /
 
PL/SQL procedure successfully completed.


varchar2(1 CHAR) 

not a bind variable problem

November 01, 2005 - 3:22 pm UTC

Reviewer: A reader from CT USA

I have library cache contention but no bind variable issues, or at least it doesn't look like any serious bind variable issue. Can it be something else? I ran your script; the max number of duplicate statements was 4, closely followed by some 2's, and there were only about 80 statements, yet everything is sitting on a latch free (library cache) event in just 9 sessions. Shared pool is 150MB.

Tom Kyte

Followup  

November 02, 2005 - 4:55 am UTC

how often do you parse. a parse is a parse is a parse is lots of latching.

Native Dynamic SQL

November 03, 2005 - 10:15 am UTC

Reviewer: Andrew Markiewicz from Madison, WI

Tom,
In Ch.9 of your book Effective Oracle By Design, you mention that native dynamic sql is parsed every time. I've noticed an increase in latch free wait events when a program that uses excessive NDS is run even though the sql uses binds.

Could that be the source of the latch contention?



Tom Kyte

Followup  

November 04, 2005 - 2:35 am UTC

that is true in 9i - in 10g, it is not "as true" - 10g can reuse the sql:

for i in 1 .. 100
loop
execute immediate 'insert into t values ( :x )' using i;
end loop;

in 9i - 100 parses, in 10g 1 parse at most


for i in 1 .. 100
loop
execute immediate 'insert into t t' || mod(i,2) || ' values ( :x )' using i;
end loop;

in 9i - 100 parses, in 10g 100 parses - because the sql statement flip flops between 'insert into t t0' and 'insert into t t1' every time.....


In short:

hard parsing - very very bad.
soft parsing - very bad.
no parse - good.......


Yes, the soft parses can definitely do that. In 9i, you would consider dbms_sql so you can parse once and execute many.
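One way to picture the 9i vs 10g difference above is a cache that reuses a cursor only when the statement text matches the previously executed one - a toy model for illustration, not Oracle's actual implementation:

```python
class LastStatementCache:
    """Toy model of 10g native dynamic SQL caching: the cursor is
    reused only when the text matches the previous statement."""
    def __init__(self):
        self.last_sql = None
        self.parses = 0

    def execute(self, sql, bind_value):
        if sql != self.last_sql:      # text changed -> parse again
            self.parses += 1
            self.last_sql = sql
        # ... bind bind_value and execute the cached cursor ...

same = LastStatementCache()
for i in range(100):
    same.execute("insert into t values ( :x )", i)            # 1 parse

flip = LastStatementCache()
for i in range(100):
    flip.execute("insert into t t%d values ( :x )" % (i % 2), i)  # 100 parses
```

With a constant text the model parses once; with text that flip flops it parses every time, matching the 1-vs-100 counts in the example above.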


bind variable

December 02, 2005 - 2:42 am UTC

Reviewer: K.P.VIDYADHARAN from India

I really understood the concept and I was able to find
out the sql which is causing high parsing
Thanks....TOM


transaction allocation latch#134

January 26, 2006 - 12:21 am UTC

Reviewer: jianhui from ca

Hi Tom,
my database had this wait spike on the top latch# wait report, i searched metalink and have not found useful information about the detail of this latch, could you give me some highlight about this latch, what is it used, what activities in applications can cause this latch? Or maybe you have URL of it somewhere?
thanks a lot!

Best ways to reduce the soft parses

February 09, 2006 - 1:07 pm UTC

Reviewer: ravinder matte from austin,TX

hi tom,
I have almost gotten rid of the hard parses, but I still have a lot of soft parses.

What would you recommend to start my war on the soft parses?

Tom Kyte

Followup  

February 10, 2006 - 11:24 am UTC

a pow-wow with the developers.

They are the only ones that can reduce them.


Moving all sql out of the client into stored procedures is a good first start. PLSQL caches SQL transparently. So, if the application has a transaction that takes 5 sql statements - they are probably parsing 5 statements, executing 5 statements every time, over and over and over and over and over.

Move those 5 into a stored procedure, now the application has 1 statement "begin procedure(?,?,?...); end;".

When they call that the first time, Oracle will parse the 5 sql's
When they call that the second time in that session, we just execute them again, no parse.

All the coder has to do now is realize "I only have to parse the begin... end; statement ONCE."

Now you have minimized parsing.


Or, statement caching via the connection pool (Oracle connection pool in java permits this)

Or, </code> http://www.oracle.com/technology/products/oracle9i/daily/jun24.html <code>
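The arithmetic behind moving those 5 statements into a procedure can be sketched as back-of-envelope Python (the counts are illustrative, assuming one session that parses each client-side statement on every transaction):

```python
def client_side_parses(transactions):
    """Each transaction parses its 5 SQL statements anew."""
    return 5 * transactions

def procedure_parses(transactions):
    """The 'begin procedure(...); end;' call is parsed once per
    session; PL/SQL parses the 5 inner statements on the first
    call and transparently caches them after that."""
    return 1 + 5  # independent of the transaction count
```

For 1,000 transactions that is 5,000 parses versus 6.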

February 16, 2006 - 11:56 am UTC

Reviewer: Dave from UK

Hi Tom, confused over the output I am getting from the original script

SQL> select sql_text_wo_constants, count(*)
  from t1
 group by sql_text_wo_constants
having count(*) > 100
 order by 2


SQL_TEXT_WO_CONSTANTS
------------------------------------------------------------------------------------------------------------------------------
  COUNT(*)
----------
GRANT SELECT,INSERT
       226

TRUNCATE TABLE SYS.ORA_TEMP_@_DS_@
       251

But when I try to see what that table is..

SQL> select object_name from dba_objects where object_name like 'ORA_TE%';

no rows selected


Do you happen to know what that ORA_TEMP_@_DS_@ is?

It is also in a few other queries which come up as not using binds
 

Tom Kyte

Followup  

February 17, 2006 - 1:11 pm UTC

well, those are DDL and DDL cannot "bind" (you regularly grant 226 times?? or was this after an install or something?)


we sometimes have recursive sql that generates temporary tables (eg: I know flashback table does), but I'm not immediately familiar with that exact name.

Library cache lock problem

February 22, 2006 - 1:04 pm UTC

Reviewer: Deba from UK

Hi,
I did not find a better related thread for posting my question; that's why I am posting here.

We are facing library cache locks very frequently. We have used the following sql to track which user is waiting on which object due to a library cache lock or pin.

SELECT /*+ ordered */ w1.sid waiting_session, h1.sid holding_session,
w.kgllktype lock_or_pin,
w.kgllkhdl address, DECODE(h.kgllkmod, 0, 'None', 1, 'Null', 2, 'Share', 3,
'Exclusive', 'Unknown') mode_held,
DECODE(w.kgllkreq, 0, 'None', 1, 'Null', 2, 'Share', 3, 'Exclusive',
'Unknown') mode_requested,h1.sql_hash_value ,lock_id1 object_waited_on
FROM dba_kgllock w, dba_kgllock h, v$session w1, v$session h1 , dba_lock_internal dl
WHERE (((h.kgllkmod != 0) AND (h.kgllkmod != 1) AND ((h.kgllkreq = 0)
OR (h.kgllkreq = 1)))
AND (((w.kgllkmod = 0) OR (w.kgllkmod= 1)) AND ((w.kgllkreq != 0)
AND (w.kgllkreq != 1))))
AND w.kgllktype = h.kgllktype AND w.kgllkhdl = h.kgllkhdl
AND w.kgllkuse=w1.saddr AND h.kgllkuse=h1.saddr
and dl.mode_requested<>'None'
and dl.mode_requested<>dl.mode_held
and w1.sid=dl.SESSION_ID

WAIT_SES HOLD_SES LOCK_OR_PIN ADDRESS MODE_HELD MODE_REQUESTED SQL_HASH_VALUE OBJECT_WAITED_ON
110 80 Lock E54B622C Exclusive Share 1057805097 UPMARKET_EI_MV
112 110 Lock E56289D8 Share Exclusive 4103562029 SOLD_INFL_COMP_CPT_MV
76 110 Lock E857A16C Share Exclusive 4103562029 FACT_FORECAST_SUMMARY_MV
76 52 Lock E857A16C Share Exclusive 2349705177 FACT_FORECAST_SUMMARY_MV
52 80 Lock E54B622C Exclusive Share 1057805097 UPMARKET_EI_MV

In the above output you can see that session 110 is holding a share lock on two different mviews at the same time - one is FACT_FORECAST_SUMMARY_MV and the other is SOLD_INFL_COMP_CPT_MV. This is not clear to me. How can a session hold share locks on two different mviews at the same time? Under what circumstances is a share lock on an mview used? Can a programmer control this lock manually - can we keep this type of lock for some time - or is it maintained by Oracle only, with the programmer having no control over it? Could you explain this share lock on mviews with an example, please? How are share locks on two different mviews, held at the same time by a single session, maintained by Oracle? Please give one example.

Thanks
Deba

Tom Kyte

Followup  

February 22, 2006 - 2:06 pm UTC

are they in a refresh group together?

But yes, a program can easily lock multiple objects.

More information - the sessions, what are they doing, are they your programs or are they refresh jobs - what?

Dave from UK's question

February 22, 2006 - 5:49 pm UTC

Reviewer: Philip Douglass from Sunrise, FL

Dave from UK:

Are you using DBMS_STATS? I regularly see ORA_TEMP_@_DS_@ tables in my v$session_longops table when I run dbms_stats.gather_table_stats on large tables.

Library cache lock problem

February 23, 2006 - 6:06 am UTC

Reviewer: Deba from UK

Hi,
We refresh some mviews at specific intervals using dbms_mview.refresh. The mviews are not in a refresh group. This is an ongoing process. Some processes capture data from OLTP (CDC is configured) through a database link, some processes refresh mviews, and some processes calculate data and store it in different tables.

What we want to know is when a share lock on an mview gets generated, so we can review the code accordingly. Is there any issue while refreshing mviews? We tried to simulate this but failed. We don't know when a share lock on an mview gets generated.

Thanks
Deba

Tom Kyte

Followup  

February 23, 2006 - 8:18 am UTC

How did you determine this to be a problem.

what "codes"?

Library cache lock problem

February 23, 2006 - 1:49 pm UTC

Reviewer: deba from UK

Hi,

The use of mviews is as follows:

1) We refresh them at regular intervals
2) We select data from mviews (either using SELECT only or INSERT INTO ... SELECT ...)

We can see a lot of sessions waiting on 2 or 3 sessions. These 2 or 3 sessions generally hold share locks on mviews. Now I want to know when Oracle acquires a share lock on an mview. When we select data there should not be any share lock on the mview. There will be an exclusive lock on an mview while it is being refreshed. So when the blocking sessions hold share locks on mviews, it means no refresh is going on. It puzzled me why a session holds a share lock on an mview though it is not doing any refresh. So I want to know:

a) Under which circumstances does a share lock on an mview get generated? Is it possible to simulate? If so, how?

b) Some mviews are based on other mviews. Under what situations can a share lock on an mview be generated due to a change in the underlying mviews or underlying tables? Changes mean any changes (either data or structure).

Deba

Tom Kyte

Followup  

February 23, 2006 - 7:11 pm UTC

but the share lock will not prevent others from reading - so back to square one.

are these sessions accidentally modifying the mview themselves?

you always have the lock table command.

Library cache lock problem

February 27, 2006 - 8:52 am UTC

Reviewer: deba from UK

Hi,

There should not be any share lock on an mview if I just read from it. Could you tell me when a share lock on an mview is generated?

library cache lock,

March 29, 2006 - 1:57 pm UTC

Reviewer: A reader

What should we look for (and where should we look) when there is a library cache lock while running dbms_stats on a half-million row table?

The table is non-partitioned and has one primary key. The library cache lock happens even if I use "analyze table" instead of DBMS_STATS. Also, when I tried to change the degree value using alter table, it again hangs on the same wait event.

When it is waiting on the library cache lock, even a simple select statement on that table just hangs.

We found a workaround: create a temp table, create a primary key on it, drop the existing table, and rename the temp table back to the existing one. The analyze or dbms_stats on that temp table worked just fine.

The database is 10.0.1 and running on a 5 node RAC.

Interestingly, the library cache lock happens only on that particular table; when I tried analyzing other big tables, they analyze just fine.



Tom Kyte

Followup  

March 29, 2006 - 2:24 pm UTC

please work this one via support - doesn't sound "right", it is not normal.

Could you Please Explain the difference in DIfferent Library Cache Latch

April 21, 2006 - 8:59 am UTC

Reviewer: Raj from Switzerland

Hi Tom,

Could you please explain the difference between the following latch.

LATCH.library cache pin
LATCH.shared pool
LATCH.library cache pin allocation
LATCH.library cache

It is very confusing when each type of latch is acquired.

Thanks and Regards,

Raj

Tom Kyte

Followup  

April 21, 2006 - 9:15 am UTC

what we really need to know is what gets them - parsing, it is all about parsing.

reduce parsing, reduce getting them.

you are free to search metalink to see the notes on them.

Library cache lock

April 25, 2006 - 2:14 pm UTC

Reviewer: Ray from India

This has been an interesting thread for me. I would like to share a problem which I encountered recently.

We have a view which makes quite a number of PL/SQL function calls (it should not; we are trying to get rid of the function calls). A couple of the functions use execute immediate statements. These functions are quite bulky, and the packages containing them are bulky themselves.

Whenever a SQL is issued against this view, it holds the library cache lock for a long time. I have learned from Steve Adams' site that 'a library cache lock is also held when a referenced object is not pinned'.

My questions are:
1. Does a PL/SQL call from a SQL statement result in a library cache lock owing to the context switch?
2. Could it be that the package (in chunks) gets paged out, and the session holds the lock and waits for the package to be pinned again?
3. Is it the 'execute immediate' statements causing the problem?
4. Is an 'execute immediate' statement considered a DML (even if it does no DML operation)? If so, then it might break the existing parse locks (again from Steve Adams) - occurring many times over, for as many rows - requiring a reparse and hence the lock?

Your views and clarification would resolve my state of confusion. Apologies if I have misinterpreted Steve Adams and used his material in the wrong context of my problem.

Thanks in advance for your valuable time and your unconditional service to the Oracle community.

Regards
Ray




Very useful

April 26, 2006 - 5:27 am UTC

Reviewer: Vinayak Pai from Belgium

The code provided to check if there is library cache latch contention was very useful.

session_cached_cursors

May 03, 2006 - 12:08 pm UTC

Reviewer: A reader from austin,tx

Does this parameter have any effect on library cache latch contention?
If yes, could you please explain how it affects it?

Thanks

Tom Kyte

Followup  

May 03, 2006 - 1:27 pm UTC

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2588723819082 <code>

read about a softer soft parse on that page (session cache cursors)

since this can help skip the semantic parse of the query, it can be useful in reducing latches on the library cache, yes.

Error on running library cache latch contention script

August 09, 2006 - 4:30 pm UTC

Reviewer: Anne Joseph from MN

Hi Tom,

I'm trying to run your script to determine whether bind variables are being used or not, and I get ORA-01401. We are running 11i, rdbms 9.2.0.6, Red Hat Linux 2.1. Could you please help me out. Thanks.

SQL> create table t1 as select sql_text from v$sqlarea;

Table created.

SQL> 
SQL> alter table t1 add sql_text_wo_constants varchar2(1000);

Table altered.

SQL> create or replace function 
  2  remove_constants( p_query in varchar2 ) return varchar2
  3  as
  4      l_query long;
  5      l_char  varchar2(1);
  6      l_in_quotes boolean default FALSE;
  7  begin
  8      for i in 1 .. length( p_query )
  9      loop
 10          l_char := substr(p_query,i,1);
 11          if ( l_char = '''' and l_in_quotes )
 12          then
 13              l_in_quotes := FALSE;
 14          elsif ( l_char = '''' and NOT l_in_quotes )
 15          then
 16              l_in_quotes := TRUE;
 17              l_query := l_query || '''#';
 18          end if;
 19          if ( NOT l_in_quotes ) then
 20              l_query := l_query || l_char;
 21          end if;
 22      end loop;
 23      l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
 24      for i in 0 .. 8 loop
 25          l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
 26          l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
 27      end loop;
 28      return upper(l_query);
 29  end;
 30  /

Function created.

SQL> update t1 set sql_text_wo_constants = remove_constants(sql_text);
update t1 set sql_text_wo_constants = remove_constants(sql_text)
       *
ERROR at line 1:
ORA-01401: inserted value too large for column



Thanks. 

Tom Kyte

Followup  

August 09, 2006 - 5:12 pm UTC

make the column we added wider then.



Why column not large enough

August 10, 2006 - 9:42 am UTC

Reviewer: Anne Joseph from MN

Thanks Tom. I increased it and it worked, of course! But I'm curious why I had to increase it. I had read your explanation of the code: "My remove constants turns any character string literal into '#' and any number into @" - so after the translation, the length of t1.sql_text_wo_constants and v$sqlarea.sql_text should be the same, right? Obviously there is something I am missing here - could you please help explain.

The script revealed :


SQL_TEXT_WO_CONSTANTS
------------------------------
  COUNT(*)
----------
SAVEPOINT ORACLE_SVPT_@
       197


SQL> select sql_text from v$sql
  2  where upper(sql_text) like 'SAVEPOINT ORACLE_SVPT_%';

SQL_TEXT
------------------------------------------------------------------
SAVEPOINT ORACLE_SVPT_111
SAVEPOINT ORACLE_SVPT_34
SAVEPOINT ORACLE_SVPT_198
SAVEPOINT ORACLE_SVPT_125
SAVEPOINT ORACLE_SVPT_189
SAVEPOINT ORACLE_SVPT_200
SAVEPOINT ORACLE_SVPT_136
........

This goes on as SAVEPOINT ORACLE_SVPT_<nnnnn>.... 197 rows.

From reading your excellent info on binding, I think the reason this is coming up is that this must be a dynamic statement where the savepoint name is NOT a bind parameter. Please correct me if I am wrong.
This is an 11i application, so I am not sure if I can "fix" the app - is there any other way to fix this?

As always thanks so much for the wealth of info on binding - great eye opener! 

Tom Kyte

Followup  

August 10, 2006 - 9:49 am UTC

savepoint names cannot be bound, correct. But in this case - there will be a fixed number of savepoints, won't there - you cannot change the name of the savepoint (it would radically affect the behavior of the application!!!!!)



If a query has:

where x = ''

I will turn that into

where x = '#'

that is, I might make a string LONGER than it was - and that could result in a string longer than the 1000 characters that was input.
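The growth Tom describes is easy to reproduce. Below is a rough Python port of the remove_constants logic from the top of the thread (a sketch for illustration, not the exact PL/SQL):

```python
import re

def remove_constants(query):
    """Replace string literals with '#' and digit runs with '@',
    mirroring the PL/SQL remove_constants function."""
    out = []
    in_quotes = False
    for ch in query:
        if ch == "'" and in_quotes:
            in_quotes = False          # closing quote: fall through and keep it
        elif ch == "'" and not in_quotes:
            in_quotes = True
            out.append("'#")           # opening quote becomes '#
        if not in_quotes:
            out.append(ch)
    result = "".join(out)
    result = re.sub(r"[0-9]+", "@", result)  # numbers -> @
    result = re.sub(r" +", " ", result)      # collapse runs of blanks
    return result.upper()

# The empty literal grows: "where x = ''" (12 chars) -> "WHERE X = '#'" (13 chars)
```

Because every literal, even an empty one, becomes '#', the normalized text can come out longer than the input - which is exactly why the 1000-byte column can overflow.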

Why column not large enough

August 10, 2006 - 2:11 pm UTC

Reviewer: Anne Joseph from MN

Yes, I see now - I hadn't thought of the empty string, and neither did I think about the effects of changing the name of the savepoint!! Thanks for the prompt response!



How about users_executing?

August 22, 2006 - 2:20 pm UTC

Reviewer: Raghu from San Jose, CA

Instead of
"create table t1 as select sql_text from v$sqlarea;" do you think we should have
"create table t1 as select sql_text from v$sqlarea where users_executing > 0;" ?

SQLs in v$sqlarea that are not currently executing should have no impact on performance at this time, right?




Tom Kyte

Followup  

August 27, 2006 - 3:08 pm UTC

why? you are actually not expecting the sql of interest to be currently being executed! It has literals in it, the supposition is it is executed once and never again.

maybe "users_executing = 0", but not > 0 for sure!!!

No, it should be just as I did it.

Shared_pool_size is Zero

August 27, 2006 - 9:42 am UTC

Reviewer: oracleo from India

The following is the output of v$parameter.

108 shared_pool_size 6 0
110 large_pool_size 6 0
112 java_pool_size 6 0
114 streams_pool_size 6 0
116 shared_pool_reserved_size 6 7759462
404 buffer_pool_keep 2

In one of the databases, it was a revelation that the shared pool size is 0. According to me, that could cause lots of contention in the library cache.
Please advise.

Tom Kyte

Followup  

August 27, 2006 - 9:13 pm UTC

or you could be using automatic sga sizing....


ops$tkyte%ORA10GR2> show parameter pool

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
buffer_pool_keep                     string
buffer_pool_recycle                  string
global_context_pool_size             string
java_pool_size                       big integer 0
large_pool_size                      big integer 0
olap_page_pool_size                  big integer 0
shared_pool_reserved_size            big integer 7969177
shared_pool_size                     big integer 0
streams_pool_size                    big integer 0
ops$tkyte%ORA10GR2> show parameter sga

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 580M
sga_target                           big integer 580M
ops$tkyte%ORA10GR2>


I have a 580m sga, Oracle is figuring out how much to put into each pool and can/will change its mind over time. 

forgot to mention

August 27, 2006 - 9:52 am UTC

Reviewer: oracleo from india

version is 10Gr2 on linux Redhat

Hit ratio very high

September 19, 2006 - 4:06 pm UTC

Reviewer: Richard from Salzburg


Tom,

I recently read a paper by Cary Millsap about how the Buffer Cache works and how a hit ratio > 95% (more or less) could point to bad SQL and very high CPU usage. It is a very good article. I then read that very high hit ratios are not always good. My question is: why is a very high hit ratio in the Shared Pool bad? For the Buffer Cache I can now understand why, but not for the Shared Pool. Could you please clarify this point?

Thanks in advance Tom.

Tom Kyte

Followup  

September 20, 2006 - 2:36 pm UTC

where did you read it about the shared pool?

Hit Ratio

September 21, 2006 - 9:45 am UTC

Reviewer: A reader


Sorry, I was referring to the library cache. When Cary Millsap says that hit ratios, in general, are not useful to lead a tuning effort, the case of study was the Buffer Cache. I don't know whether in other areas like the Shared Pool -> Library Cache, PGA, etc., this issue is similar; i.e., imagine my hit ratio in my library cache is close to 100%. Could I assert that there is no problem with my library cache?

Thank you Tom.

Tom Kyte

Followup  

September 22, 2006 - 2:06 am UTC

his point is - that the GOAL is NOT to get some ratio to 100%. That is not the goal.

You could have a ratio for anything of say "50%" and that might well be perfectly acceptable.

If your library hit ratio is near 100%, you could have a serious problem. You might have code that looks like this:


for i in 1 .. 10000000
loop
parse "insert into t values ( :x )"
bind i to :x
execute statement
close statement
end loop


that is one way to ensure a really good library cache hit - parse way too often, this piece of code:

parse "insert into t values ( :x )"
for i in 1 .. 10000000
loop
bind i to :x
execute statement
end loop
close statement

may well cause your "hit ratio" to decrease - but your overall performance would be so much better.
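Tom's two loops can be modelled with a toy hit-ratio counter (an illustration, not Oracle's real accounting): every parse is a "get", and a get is a "hit" when the statement is already cached.

```python
class LibraryCacheModel:
    """Toy library cache: counts gets and hits per parse call."""
    def __init__(self):
        self.cache = set()
        self.gets = 0
        self.hits = 0

    def parse(self, sql):
        self.gets += 1
        if sql in self.cache:
            self.hits += 1
        else:
            self.cache.add(sql)

bad = LibraryCacheModel()
for i in range(10_000):
    bad.parse("insert into t values ( :x )")   # parse inside the loop

good = LibraryCacheModel()
good.parse("insert into t values ( :x )")      # parse once, execute many
```

The parse-heavy loop scores 9,999 hits out of 10,000 gets - a near-perfect ratio - while the well-written version barely touches the library cache at all.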

Hit ratio

September 22, 2006 - 8:47 am UTC

Reviewer: Richard from Salzburg, Austria


Thank you Tom. Your example is simple and very good. As always.

related ORA-4031 error

November 11, 2006 - 10:08 pm UTC

Reviewer: Aj Joshi from USA

Tom,
When memory is needed, Oracle allocates from the shared pool. If it does not find free memory, it removes unused sql on an LRU basis.
Question: apart from packages/procedures PINNED/KEPT in memory, what else cannot be removed when memory is needed? I am not sure whether session memory, as decided by the parameter session_cached_cursors, can be removed.

About pinning objects: it was for Oracle 7, and things were supposed to improve with 8i/9i since memory can be allocated in pieces. Something like: if the total memory needed is 40K, then it could get 4 chunks of 10K that are not contiguous and satisfy the request. Correct me if I am wrong. So the question is: in that case, why do we keep getting this error in 8i and even on 9i? There must be some explanation. I think there is a minimum allocation. For example, if I give a short sql like 'select * from dual' which does not need much memory, then I think it still allocates a bigger minimum amount. How much is this minimum amount of memory - is it 1K? Is it determined by some params?
Since memory used by old, unused sql can be reused (LRU algorithm or whatever), and at any given time only a small number of sql statements in memory are actively used, Oracle can always remove the old sql and reuse the memory - so there is little chance of ORA-4031 unless there are a lot of active processes. However, we have seen serious ORA-4031 errors even with very few active processes, sometimes needing a bounce.
Other than PINNED objects and session_cached_cursors, I think shared pool sql can be removed, and yet we get this ORA-4031 error. We do see a lot of inactive processes - over 400 inactive and fewer than 30 active. So are the inactive ones holding on to session_cached_cursors? And no, there is no way to reduce the inactive processes.


Tom Kyte

Followup  

November 12, 2006 - 10:19 am UTC

answer: any freeable memory, each component of Oracle when allocating from the shared pool will say whether this has to stay or not. Way too many things to go into any sort of detail on.

Session memory is in the UGA and is not touchable except by the session really (normally, the session memory is actually in a dedicated server - way removed from the SGA all together)

Pinning was sort of introduced to help with shared server (known as MTS - multi-threaded server - way back when). It would allocate the UGA out of the shared pool, and as this was not managed in an LRU fashion (it was explicitly allocated/deallocated) and tended to use lots of different sizes and shapes, you would end up with a shared pool that had lots of unfreeable bits in it - breaking it up entirely. So, enter the large pool - remove the shared server memory allocations from the shared pool.



big parse, small parse

November 14, 2006 - 12:31 pm UTC

Reviewer: John from CA, USA

Tom,

1. Which is worse: parsing a 5000-line SQL statement 2 times, or parsing a 1-line SQL statement 500 times?
2. Between two 5000-line SQL statements, is it possible to say which is worse than the other? What makes one worse than the other in terms of negatively impacting the system?

John

Tom Kyte

Followup  

November 15, 2006 - 6:41 am UTC

eh? how long is a piece of string.

the only sensible answer is:

it depends.

Non Usage of Bind Variables

November 15, 2006 - 1:00 pm UTC

Reviewer: Shivdeep Modi from Newcastle, UK

A handy script and an interesting thread

Just look at that...

November 15, 2006 - 9:06 pm UTC

Reviewer: Reader from Russia

Just came across this on a Metalink forum:

BEGIN PKG_NAME(:DBP_AABEG,:DBP_AABEH,:DBP_AABEI); END;
BEGIN PKG_NAME(:DBP_AABGQ,:DBP_AABGR,:DBP_AABGS); END;
BEGIN PKG_NAME(:DBP_AABMJ,:DBP_AABMK,:DBP_AABML); END;

So - as you can see, just using bind variables sometimes makes no difference from not using them at all :)

Tom Kyte

Followup  

November 16, 2006 - 3:03 pm UTC

eh? you make zero sense here.

Reader

November 16, 2006 - 8:10 pm UTC

Reviewer: Reader from Russia

eh-heh, but you'll get the same hard parsing on every statement regardless...

my point was - "just use bind variables" sometimes doesn't solve hard parsing issues :)

Tom Kyte

Followup  

November 17, 2006 - 3:19 am UTC

yes it does, it allows you to make the number of hard parses as small as possible - which is the only way to "solve" a hard parse issue.

there are many reasons why "you bind but it is not shareable" - from optimizer related settings, to bind mismatches, to nls settings, to security settings, to .....
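On releases that expose it, the "you bind but it is not shareable" reasons can be read straight from V$SQL_SHARED_CURSOR - each Y flag names the mismatch that forced a new child cursor. A hedged sketch (the set of reason columns varies considerably by version, and the hint text here is a placeholder):

```sql
select child_number, bind_mismatch, optimizer_mismatch, language_mismatch
  from v$sql_shared_cursor
 where address = ( select address
                     from v$sqlarea
                    where sql_text like 'select /*+ MY_QUERY */%' );
```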


ok

November 17, 2006 - 4:48 am UTC

Reviewer: Reader from Russia

"there are many reasons why "you bind but it is not shareable" - from optimizer related settings, to bind mismatches, to nls settings, to security settings, to ....."

ok, I suppose you were not looking at my previous review...

here:

BEGIN PKG_NAME(:DBP_AABEG,:DBP_AABEH,:DBP_AABEI); END;
BEGIN PKG_NAME(:DBP_AABGQ,:DBP_AABGR,:DBP_AABGS); END;
BEGIN PKG_NAME(:DBP_AABMJ,:DBP_AABMK,:DBP_AABML); END;

they have *unique* bind variable names for each statement - seems that they just auto-generate them.

no matter what - you'll get hard parse each time here. That was my point.

Tom Kyte

Followup  

November 17, 2006 - 8:03 am UTC

ouch, I cannot believe they did that - didn't even notice.



November 17, 2006 - 11:47 am UTC

Reviewer: Alexander the ok

The bind variable names matter? I thought you could call your bind variable anything and the difference was just that you were using one vs no binds.

Tom Kyte

Followup  

November 19, 2006 - 4:08 pm UTC

the sql matching is, has been, will be by "text match"

unless you have the same text (including "case of text", "white space", etc) - they are different.
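The text-match rule Tom describes is easy to verify for yourself. A hedged sketch (scott/dept as in the example below; any two textually different statements will do):

```sql
variable x number
variable y number

-- Same statement, different bind names: two distinct texts
select loc from dept where deptno = :x;
select loc from dept where deptno = :y;

-- Two rows, two hash values, one hard parse each
select sql_text, hash_value, executions
  from v$sqlarea
 where sql_text like 'select loc from dept%';
```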

Proof...

November 17, 2006 - 1:10 pm UTC

Reviewer: J. Laurindo Chiappa from Sao paulo, SP, Brazil

Alexander, the proof is very easy: remember, Oracle SQL processing IS based on text recognition, and different bind variable names IMPLY different texts, thus DIFFERENT hashes... See, on the SAME machine, SAME user, same everything:

scott@BDDEV:SQL>variable x number;
scott@BDDEV:SQL>variable y number;
scott@BDDEV:SQL>select /*+ TEST_01 */ loc, deptno, deptno + 10 result from dept where deptno = :x;

scott@BDDEV:SQL>select /*+ TEST_01 */ loc, deptno, deptno + 10 result from dept where deptno = :y;


system@BDDEV:SQL>select * from v$sqlarea where sql_text like 'select /*+ TEST_0%';

SQL_TEXT                                                                           HASH_VALUE EXECUTIONS  LOADS
---------------------------------------------------------------------------------- ---------- ---------- ------
select /*+ TEST_01 */ loc, deptno, deptno + 10 result from dept where deptno = :x  3582669877          1      1
select /*+ TEST_01 */ loc, deptno, deptno + 10 result from dept where deptno = :y  4254623821          1      1

(most columns snipped for readability)


==> See? Two DIFFERENT statements, two hard parses, only due to DIFFERENT bind names...

 Regards,
 
   Chiappa 

remove_constants and bind variables

November 21, 2006 - 10:33 am UTC

Reviewer: Tony Baker from Nottingham, England

Thanks for all the information provided.

I suspect that the application that is running against the databases that I look after is not using bind variables.

I have tried running your script remove_constants but I get
the following errors

Function created.

update t1 set sql_text_wo_constants = remove_constants(sql_text)
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYSTEM.REMOVE_CONSTANTS", line 10



SQL_TEXT_WO_CONSTANTS COUNT(*)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------
2798

1 row selected.

I have looked in the table T1, below is an example of the SQL returned

select u.EMP_NO, g.GROUP_ID,g.ACCESS_OWN_DATA,g.CONDITION_NAME,g.COND_SUCCESS_ACTION,g.COND_FAILURE_ACTION, g.REPORTEE_SEC, u.USER_PWD, u.SECURITY_OFFICER, u.ADMIN_OPTIONS, u.MODULE_OPTIONS from PS_USERS u, PS_GROUPS g where u.USER_ID='LWPSE9' and u.GROUP_ID=g.GROUP_ID


select u.EMP_NO, g.GROUP_ID,g.ACCESS_OWN_DATA,g.CONDITION_NAME,g.COND_SUCCESS_ACTION,g.COND_FAILURE_ACTION, g.REPORTEE_SEC, u.USER_PWD, u.SECURITY_OFFICER, u.ADMIN_OPTIONS, u.MODULE_OPTIONS from PS_USERS u, PS_GROUPS g where u.USER_ID='PACE04' and u.GROUP_ID=g.GROUP_ID


select u.EMP_NO, g.GROUP_ID,g.ACCESS_OWN_DATA,g.CONDITION_NAME,g.COND_SUCCESS_ACTION,g.COND_FAILURE_ACTION, g.REPORTEE_SEC, u.USER_PWD, u.SECURITY_OFFICER, u.ADMIN_OPTIONS, u.MODULE_OPTIONS from PS_USERS u, PS_GROUPS g where u.USER_ID='PACE17' and u.GROUP_ID=g.GROUP_ID

The SQL is identical apart from the following

u.USER_ID='LWPSE9'
u.USER_ID='PACE17'
u.USER_ID='PACE04'

If the application was using bind variables would the results in the table T1 show a single entry for this query instead of three and should I expect to see

u.USER_ID='LWPSE9'
u.USER_ID='PACE17'
u.USER_ID='PACE04'

replaced with something like

u.USER_ID=:user

Tom Kyte

Followup  

November 22, 2006 - 3:39 pm UTC

make buffer bigger.
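For reference, "make buffer bigger" means widening the places the rewritten text must fit. A hedged sketch of one such change (4000 is the varchar2 column limit in this era; the variables declared inside remove_constants may need the same treatment):

```sql
alter table t1 modify sql_text_wo_constants varchar2(4000);
```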

gethitratio

December 07, 2006 - 6:26 pm UTC

Reviewer: A reader

Tom,

The gethitratios in the library cache are extremely low. Instead of being around 90% they are less than 1%. Please briefly explain the main reason. Thanks.

NAMESPACE GETHITRATIO

SQL AREA 0.96008153
TABLE/PROCEDURE 0.998642242
BODY 0.998955199
TRIGGER 0.999912234
INDEX 0.46969697
CLUSTER 0.993840985
OBJECT 1
PIPE 1
JAVA SOURCE 1
JAVA RESOURCE 1
JAVA DATA 1


Tom Kyte

Followup  

December 08, 2006 - 7:25 am UTC

multiply by 100 and you'll be happier perhaps.

December 08, 2006 - 10:11 am UTC

Reviewer: A reader

Tom,

When I multiply by 100, all the other gethitratios are ok, but the index one is 46.96%. Is that an indication that latch contention is occurring when indexes are used? Any means to correct it? The schema has tables with unindexed foreign keys. On average, a deadlock occurs once a month in the database. Thanks ...

Tom Kyte

Followup  

December 09, 2006 - 12:28 pm UTC

no, it does not.

bet if you look at the underlying numbers that make the ratio, you'll find them to be of no consequence (eg: small)

so look into the numbers behind the numbers.
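The numbers behind the ratio come straight from the same view. A hedged example (V$LIBRARYCACHE column names): a low GETHITRATIO on a namespace with only a handful of GETS is of no consequence.

```sql
select namespace, gets, gethits, round(gethitratio, 3) gethitratio
  from v$librarycache
 order by gets desc;
```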

Not able to select on table due to library cache

May 03, 2007 - 12:18 am UTC

Reviewer: A reader from UK

Tom,

Last week there was a library cache lock on a table because of which we were not able to even select from that table. We thought that the table had been dropped by someone, but after careful investigation we found it was a library cache lock.

1) How can a library cache lock disable select/describe on a table?

2) What are all the possible reasons to get a library cache lock on a table that disables even read-only access to it?

I used the following steps to find the SID which was causing the block, but interestingly this user has only read-only access on that table. It cannot do DML/DDL on that table.



step1:

select /*+ ordered use_nl(lob pn ses) */
decode(lob.kglobtyp, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER',
4, 'VIEW', 5, 'SYNONYM', 6, 'SEQUENCE',
7, 'PROCEDURE', 8, 'FUNCTION', 9, 'PACKAGE',
11, 'PACKAGE BODY', 12, 'TRIGGER',
13, 'TYPE', 14, 'TYPE BODY',
19, 'TABLE PARTITION', 20, 'INDEX PARTITION', 21, 'LOB',
22, 'LIBRARY', 23, 'DIRECTORY', 24, 'QUEUE',
28, 'JAVA SOURCE', 29, 'JAVA CLASS', 30, 'JAVA RESOURCE',
32, 'INDEXTYPE', 33, 'OPERATOR',
34, 'TABLE SUBPARTITION', 35, 'INDEX SUBPARTITION',
40, 'LOB PARTITION', 41, 'LOB SUBPARTITION',
42, 'MATERIALIZED VIEW',
43, 'DIMENSION',
44, 'CONTEXT', 46, 'RULE SET', 47, 'RESOURCE PLAN',
48, 'CONSUMER GROUP',
51, 'SUBSCRIPTION', 52, 'LOCATION',
55, 'XML SCHEMA', 56, 'JAVA DATA',
57, 'SECURITY PROFILE', 59, 'RULE',
62, 'EVALUATION CONTEXT',
'UNDEFINED') object_type,
lob.kglnaobj object_name,
pn.kglpnmod lock_mode_held,
pn.kglpnreq lock_mode_requested,
ses.sid,
ses.serial#,
ses.username
from v$session_wait vsw,
x$kglob lob,
x$kglpn pn,
v$session ses
where vsw.event = 'library cache lock'
and vsw.p1raw = lob.kglhdadr
and lob.kglhdadr = pn.kglpnhdl
and pn.kglpnmod != 0
and pn.kglpnuse = ses.saddr

Step 2:
select spid as thread, sid, osuser, s.program
from sys.v_$process p, sys.v_$session s
where p.addr = s.paddr
and s.sid=<sid>


Step 3:
kill -9 <pid>

Thanks


library cache lock

July 31, 2007 - 2:44 pm UTC

Reviewer: Sean from Atlanta

Tom,
I was asked to improve the performance of the materialized view (MV) builds. The MVs are rebuilt every night (do not ask why), one after the other. I thought building them in parallel would be much faster, so I have been testing parallel MV builds, and found most of the build sessions wait on "library cache lock". I used this query to find which objects were being locked:
select oracle_username, os_user_name, session_id, process,
locked_mode, o.owner, o.object_type, o.object_name
from sys.dba_objects o,
v$locked_object l
where l.object_id = o.object_id
order by o.owner, o.object_name, session_id

they are the sys owned:
SNAP_LOADERTIME$
SNAP_LOGDEP$
SUMAGG$
SUMDELTA$
MLOG$
OBJ$
SNAP$
SUMINLINE$
SUMJOIN$
SUMKEY$
SUMPARTLOG$
SUMPRED$

How could parallel MV builds avoid the library cache lock?

Thanks in advance.
Tom Kyte

Followup  

August 02, 2007 - 10:11 am UTC

I will ask why.

define what you mean by "rebuilt", describe the process please

Profile of the SQL statements in cache

October 02, 2007 - 5:15 pm UTC

Reviewer: Sandro from Italy

Hi Tom,
I found this script at the http://www.dba-village.com/village/dvp_scripts.ScriptDetails?ScriptIdA=1842 link.
-------------------------- begin --------------------------
ttitle "Count of executions, grouped by minutes cached."
break on exec skip 2
select
exec ,
substr(Min_Cached,5) Min_Cached,
COUNT(*),
ROUND(AVG(Executions),2) Avg_Exec,
sum(executions) Executions,
min(executions) Min_Exec,
max(executions) Max_Exec,
round(sum(OPTIMIZER_COST),0) Cost
,substr(to_char(100 * ratio_to_report(sum(OPTIMIZER_COST)) over (), '999.00'), 2) || '%' pct_Cost
from (
select decode(executions,0,0,1,1,2,2) exec,
DECODE(Min_Cached,
0,' 1) 00-01min',
1,' 2) 01-02min',
2,' 2) 01-02min',
DECODE(SIGN(Min_Cached- 6),-1,' 3) 03-05min',
DECODE(SIGN(Min_Cached- 16),-1,' 4) 06-15min',
DECODE(SIGN(Min_Cached- 31),-1,' 5) 16-30min',
DECODE(SIGN(Min_Cached- 61),-1,' 6) 31-60min',
DECODE(SIGN(Min_Cached- 121),-1,' 7) 1- 2hr',
DECODE(SIGN(Min_Cached- 180),-1,' 8) - 3hr',
DECODE(SIGN(Min_Cached- 240),-1,' 9) - 4hr',
DECODE(SIGN(Min_Cached-12*60),-1,'10) -12hr',
'11) 12hr+ '))))))))) Min_Cached,
Executions,OPTIMIZER_COST
from (
select ROUND((SysDate-
TO_DATE(First_Load_Time,
'yyyy-mm-dd/hh24:mi:ss'))*24*60) Min_Cached,
Executions,OPTIMIZER_COST
from V$SQL
where Parsing_User_ID != 0
and executions>0
)
)
group by exec,Min_Cached;
-------------------------- end --------------------------
The scope of this SQL script is to obtain statistical information about the number of executions of statements, and their time in the cache, in order to profile the statements issued by applications against a database.
Is it a good start for identifying statements not using bind variables?
Tom Kyte

Followup  

October 05, 2007 - 11:03 am UTC

how does this help you find sql statements not using a bind?

I gave you an approach above.

Cursor sharing

October 05, 2007 - 6:27 pm UTC

Reviewer: JAM from CA, USA

Tom,

Just wondering: Does the following statement from a 3rd party vendor make you laugh or cry?

"XXXX does not use bind variables because of the unique way in which users can build their own queries within the application."

Seriously, the "unique way" the vendor refers to is a search screen with about 50 columns into which users can plug various values for one or more columns. (The table, by the way, is about 300 columns, 40M rows.)

Haven't provided any specifics because I'm just curious about your general feeling about the vendor's statement. In your experience, have you found applications with legitimate reasons for not using binds?

Track Hard Parse Statements Over Time

April 23, 2008 - 6:43 pm UTC

Reviewer: Robert from Memphis, USA.

Tom,

I have been using your query/method at the top of the page with great success.
I have shown some of the Developers a snapshot/summary of library cache of all similar statements not using bind variables where count(*) > 100 and it seems to have gotten their attention.
Now we want to have an ongoing report that shows offending SQL ranked in order of severity ('duplicate' statements).

Here is my question: Your script takes a snapshot of a single moment in time but V$SQL can change from hour to hour... and I want to get a summary of ALL major offenders for an entire week. How can I collect those hard parse statement stats/queries for an entire week?

What about creating a table with a primary key of first_load_time, hash_value, and substr(sql_text), then every hour, say, taking a snapshot of v$sql based on your query above... but capturing ALL statements which match your criteria. The first_load_time would ensure I didn't double up on SQL that stayed in v$sql between snapshots.

Then for the weekly report I can summarize them all up again with counts.

What do you think?

Thanks,

Robert.
Tom Kyte

Followup  

April 28, 2008 - 9:39 am UTC

... Now we want to have an ongoing report that shows offending SQL ranked in order
of severity ('duplicate' statements).
..


why - it only takes ONE occurrence of ONE bad sql statement to kill everything.


You cannot get what you want - the bad sql is coming and going so fast - that you are missing many sql statements likely in your hourly snapshots as it is. (eg: the worst offender could appear to be the "least used one" just because your shared pool is flushing so often...)

Stick with what you have and keep nailing them one by one till they are all gone.

What about General/Summary Hard Parse Stats?

April 29, 2008 - 7:53 am UTC

Reviewer: Robert from Memphis, USA.

Tom,

Thank you for the insight into my question on tracking hard parses.

What stats can I look at, either Statspack report, v$ stats, etc. that would indicate or suggest lots of hard parses going on at a given point, a given hour, etc.?

Thank you,

Robert.
Tom Kyte

Followup  

April 29, 2008 - 8:49 am UTC

statspack - right at the top - shows this very clearly.

there is a total parse count, a hard parse count and a soft parse %


total parse count - shows if you are parsing like crazy (soft and hard parsing)

the total + hard parse count are used to generate a soft parse percentage (what percent of parses were SOFT vs HARD). This number should pretty much be very high - above 99% in general. One of the few few few ratios you can say "it should definitely be in this range".
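The soft parse percentage can also be computed directly from the two parse counters. A hedged sketch (statistic names as they appear in V$SYSSTAT):

```sql
select round(100 * (1 - hard.value / total.value), 2) soft_parse_pct
  from (select value from v$sysstat where name = 'parse count (hard)')  hard,
       (select value from v$sysstat where name = 'parse count (total)') total;
```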

April 29, 2008 - 10:43 am UTC

Reviewer: Robert from Memphis, USA.


June 13, 2008 - 7:45 am UTC

Reviewer: Sam Mousleh from France

Tom, I ran the script you provided to find library cache latch contention. As a result I can see many queries where the count is high; for example, the count for the last query is 2158 (please look at the last section for the script results). And the hit ratios from statspack are very good:
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 94.80 In-memory Sort %: 100.00
Library Hit %: 99.99 Soft Parse %: 99.70
Execute to Parse %: 96.60 Latch Hit %: 99.58
Parse CPU to Parse Elapsd %: 92.98 % Non-Parse CPU: 99.63

I don't really understand the purpose of it; in the statspack report I do see many latch sleeps:

Latch Activity for DB: SABO Instance: SABO Snaps: 28176 -28178
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
Consistent RBA 108,273 0.0 0 0
FAL request queue 4 0.0 0 0
FIB s.o chain latch 132 0.0 0 0
FOB s.o list latch 2,265 0.0 0 0
SQL memory manager latch 2 0.0 0 1,191 0.0
SQL memory manager worka 1,158,591 0.0 0.0 0 0
X$KSFQP 28 0.0 0 0
active checkpoint queue 12,053 0.0 0.0 0 0
alert log latch 14 0.0 0 0
archive control 424 0.0 0 0
archive process latch 80 0.0 0 0
begin backup scn array 1 0.0 0 0
cache buffer handles 1,395,954 0.0 0.0 0 0
cache buffers chains 837,999,907 0.1 0.0 4 46,634,519 0.0
cache buffers lru chain 343,842 0.5 0.0 0 67,019,084 0.2
cas latch 32,205,261 0.1 0.0 0 0
channel handle pool latc 217 0.0 0 0
channel operations paren 2,800 0.0 0 0
checkpoint queue latch 2,201,940 0.0 0.0 0 57,609 0.0
child cursor hash table 29,824 0.0 0 0
commit callback allocati 12,781 0.0 0 0
dictionary lookup 9 0.0 0 0
dml lock allocation 493,698 0.0 0.0 0 0
dummy allocation 195 0.0 0 0
enqueue hash chains 779,023 0.0 0.0 0 0
enqueues 363,025 0.0 0.0 0 0
event group latch 95 0.0 0 0
global tx hash mapping 292 0.0 0 0
hash table column usage 141 0.0 0 1,706,189 0.0
job_queue_processes para 59 0.0 0 0
ktm global data 23 0.0 0 0
lgwr LWN SCN 108,276 0.0 0.0 0 0
library cache 99,847,263 1.0 0.0 15 11,817 4.6
library cache load lock 1,542 0.0 0 0
library cache pin 91,085,186 0.2 0.0 2 0
library cache pin alloca 7,150,475 0.0 0.0 0 0
list of block allocation 4,247 0.0 0 0
loader state object free 194 0.0 0 0
longop free list parent 1,271 0.0 0 59 0.0
message pool operations 14 0.0 0 0
messages 369,323 0.0 0.0 0 0
mostly latch-free SCN 491,595 0.3 0.0 0 0
multiblock read objects 18,762,943 0.1 0.0 0 0
ncodef allocation latch 58 0.0 0 0
object stats modificatio 12 0.0 0 0
post/wait queue 202,484 0.0 0.0 0 105,514 0.0
process allocation 95 0.0 0 95 0.0
process group creation 195 0.0 0 0
redo allocation 957,456 0.0 0.0 0 0
redo copy 0 0 741,027 0.0
Latch Activity for DB: SABO Instance: SABO Snaps: 28176 -28178
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
redo on-disk SCN 343,716 0.0 0 0
redo writing 339,347 0.0 0.0 0 0
row cache enqueue latch 62,201,242 0.2 0.0 0 0
row cache objects 62,209,555 4.2 0.0 0 190 0.0
sequence cache 101,827 0.0 0.0 0 0
session allocation 685,027 0.0 0.0 0 0
session idle bit 1,563,084 0.0 0.0 0 0
session switching 58 0.0 0 0
session timer 1,212 0.0 0 0
shared pool 41,912,797 1.0 0.0 1 0
sim partition latch 0 0 3,138 0.0
simulator hash latch 33,381,090 0.0 0.0 0 0
simulator lru latch 629,084 0.0 0.4 0 1,672,383 0.1
sort extent pool 77 0.0 0 0
transaction allocation 747,057 0.0 0 0
transaction branch alloc 316 0.0 0 0
undo global data 434,038 0.0 0.0 0 0
user lock 282 0.0 0 0
-------------------------------------------------------------
Latch Sleep breakdown for DB: SABO Instance: SABO Snaps: 28176 -28178
-> ordered by misses desc

Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
row cache objects 62,209,555 2,636,047 1,501 2634552/1489/6/0/0
cache buffers chains 837,999,907 1,054,910 1,255 1053666/1233/11/0/0
library cache 99,847,263 960,617 6,712 954009/6505/102/1/0
shared pool 41,912,797 401,849 2,742 399208/2543/95/3/0
library cache pin 91,085,186 215,764 317 215450/311/3/0/0
row cache enqueue latch 62,201,242 112,183 77 112106/77/0/0/0
cas latch 32,205,261 23,737 21 23716/21/0/0/0
multiblock read objects 18,762,943 22,925 48 22877/48/0/0/0
cache buffers lru chain 343,842 1,682 43 1640/41/1/0/0
mostly latch-free SCN 491,595 1,558 3 1555/3/0/0/0
library cache pin allocati 7,150,475 546 5 541/5/0/0/0
redo allocation 957,456 394 2 392/2/0/0/0
session allocation 685,027 259 8 251/8/0/0/0
simulator lru latch 629,084 179 68 111/68/0/0/0
dml lock allocation 493,698 170 2 168/2/0/0/0
-------------------------------------------------------------

Here is result of the script I run on our dabase:
-------------------------------------------------
SQL_TEXT_WO_CONSTANTS COUNT(*)
------------------------------------------------------------ ----------
R_PACKAGE_BALANCES PKGBAL ,ER_RESOURCES RES WHERE PKGBAL.SUB
SCRIPTION_OBJ_ID IN (@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@
,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@
,@,@

SELECT PURSRV.PURCHASED_SERVICE_OBJ_ID ,PURSRV.SUBSCRIPTION 121
_OBJ_ID ,PURSRV.STATUS_OBJ_ID ,PURSRV.PROV_STATUS_OBJ_ID ,PU
RSRV.SERVICE_ID ,PURSRV.PROVISIONING_TAG ,PURSRV.TIME_STAMP
,PURSRV.NON_REFUND_DESCRIPTION FROM ER_PURCHASED_SERVICES PU
RSRV WHERE PURSRV.SUBSCRIPTION_OBJ_ID IN (@,@,@,@,@,@,@,@,@,
@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,

SQL_TEXT_WO_CONSTANTS COUNT(*)
------------------------------------------------------------ ----------
@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@,@

SELECT OPTS.SUBSCRIPTION_OBJ_ID ,OPTS.DURATION ,OPTS.PROMO_ 122
CODE ,OPTS.USER_GROUP ,OPTS.PAYMENT_TYPE ,OPTS.SUPPLIER_ID ,
OPTS.PREMIUM_LEVEL ,OPTS.CHANNEL ,OPTS.CHARGING_METHOD ,OPTS
.ACCESS_DEVICE ,OPTS.OPTION_OBJ_ID ,OPTS.OPTS_LOCK_ID ,MAX(O
PTS.TIME_STAMP) FROM ER_OPTIONS OPTS WHERE OPTS.SUBSCRIPTION
_OBJ_ID IN (@) GROUP BY OPTS.SUBSCRIPTION_OBJ_ID ,OPTS.DURAT
ION ,OPTS.PROMO_CODE ,OPTS.USER_GROUP ,OPTS.PAYMENT_TYPE ,OP
TS.SUPPLIER_ID ,OPTS.PREMIUM_LEVEL ,OPTS.CHANNEL ,OPTS.CHARG
ING_METHOD ,OPTS.ACCESS_DEVICE ,OPTS.OPTION_OBJ_ID ,OPTS.OPT

SQL_TEXT_WO_CONSTANTS COUNT(*)
------------------------------------------------------------ ----------
S_LOCK_ID

SELECT PKGBAL.AMOUNT ,PKGBAL.THRESHOLD ,PKGBAL.SUBSCRIPTION 122
_OBJ_ID ,RES.DESCRIPTION ,RES.RESOURCE_CODE ,RES.RESOURCE_TY
PE ,PKGBAL.BALANCE_OBJ_ID ,PKGBAL.PKG_BALANCE_LOCK_ID FROM E
R_PACKAGE_BALANCES PKGBAL ,ER_RESOURCES RES WHERE PKGBAL.SUB
SCRIPTION_OBJ_ID IN (@) AND PKGBAL.RESOURCE_OBJ_ID = RES.RES
OURCE_OBJ_ID

SELECT DISTINCT SUBS.SUBSCRIPTION_OBJ_ID , SUBS.ACCOUNT_OBJ 238
_ID , SUBS.STATUS_OBJ_ID , SUBS.PACKAGE_ID , SUBS.START_DATE

SQL_TEXT_WO_CONSTANTS COUNT(*)
------------------------------------------------------------ ----------
, SUBS.END_DATE , SUBS.TIME_STAMP , SUBS.NEXT_CYCLE_DISCOUN
T , SUBS.IS_REFUNDABLE , SUBS.NON_REFUND_DESCRIPTION, SUBS.S
UBS_LOCK_ID , SUBS.SUPER_CREDIT_ID , SUBS.SUPER_CREDIT_BALAN
CE , DRM_DETAILS.DRM_DESCRIPTION, DRM_DETAILS.DRM_TYPE , DRM
_DETAILS.DRM_ID , SUBS.INTERACTIVE_USAGE_FLAG , COUNTRY.COUN
TRY_OBJ_ID, SUBS.BATCH_PROCESSOR

SELECT DISTINCT SUBS.SUBSCRIPTION_OBJ_ID , SUBS.ACCOUNT_OBJ 243
_ID , SUBS.STATUS_OBJ_ID , SUBS.PACKAGE_ID , SUBS.START_DATE
, SUBS.END_DATE , SUBS.TIME_STAMP , SUBS.NEXT_CYCLE_DISCOUN
T , SUBS.IS_REFUNDABLE , SUBS.NON_REFUND_DESCRIPTION , SUBS.

SQL_TEXT_WO_CONSTANTS COUNT(*)
------------------------------------------------------------ ----------
SUBS_LOCK_ID , SUBS.SUPER_CREDIT_ID , SUBS.SUPER_CREDIT_BALA
NCE , PUR_SRV.SERVICE_ID , DRM_DETAILS.DRM_DESCRIPTION , DRM
_DETAILS.DRM_TYPE , DRM_DETAILS.DRM_ID , SUBS.INTERACTIVE_US
AGE_FLAG , SUBS.PROMOTIONAL , SUBS.BATCH_PROCESSOR , SUBS.PR
OMOTIONAL_EXPIRY_DATE ,

SELECT DISTINCT SUBS.SUBSCRIPTION_OBJ_ID , SUBS.TIME_STAMP 2158
FROM ER_ACCOUNTS ACC, ER_SUBSCRIPTIONS SUBS, ER_OPTIONS OPTS
, ER_COUNTRY COUNTRY WHERE ACC.ACCOUNT_OBJ_ID = SUBS.ACCOUNT
_OBJ_ID AND SUBS.SUBSCRIPTION_OBJ_ID = OPTS.SUBSCRIPTION_OBJ
_ID(+) AND ACC.COUNTRY_OBJ_ID = COUNTRY.COUNTRY_OBJ_ID AND C

SQL_TEXT_WO_CONSTANTS COUNT(*)
------------------------------------------------------------ ----------
OUNTRY.COUNTRY_CODE ='#' AND SUBS.STATUS_OBJ_ID=@ AND SUBS.S
UBSCRIPTION_OBJ_ID > @ AND SUBS.BATCH_RETRY_DATE IS NOT NULL
AND SUBS.BATCH_RETRY_DATE <= TO_DATE('#','#') AND OPTS.CHAR
GING_METHOD=@ ORDER BY SUBS.SUBSCRIPTION_OBJ_ID ASC

8 rows selected.

Thank you for your help
Sam

Tom Kyte

Followup  

June 13, 2008 - 10:08 am UTC

you don't say how long this is for

we don't see how many parses you do (a parse is a parse - soft or hard - both require lots of latching, hard is just worse, soft is very bad, hard is very very bad)

you cannot tell anything from this - but it does look like lots of repeated sql there, doesn't it.

June 13, 2008 - 10:41 am UTC

Reviewer: Sam Mousleh from France

Sorry, I did not include the whole report, because it's so big and we are limited to 1000 words maximum.
I will try again to send it.

Sam
Tom Kyte

Followup  

June 13, 2008 - 10:54 am UTC

it is 32k, not words...

June 13, 2008 - 10:42 am UTC

Reviewer: Sam Mousleh from France

Here is the first report section...

STATSPACK report for

DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
SABO 310423053 SABO 1 9.2.0.6.0 NO su1162

Snap Id Snap Time Sessions Curs/Sess Comment
--------- ------------------ -------- --------- -------------------
Begin Snap: 28176 05-Jun-08 13:00:03 489 28.1
End Snap: 28178 05-Jun-08 14:00:03 484 27.8
Elapsed: 60.00 (mins)

Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 7,008M Std Block Size: 8K
Shared Pool Size: 304M Log Buffer: 1,024K

Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 68,109.68 1,936.06
Logical reads: 126,505.31 3,596.00
Block changes: 403.43 11.47
Physical reads: 6,577.85 186.98
Physical writes: 17.19 0.49
User calls: 167.84 4.77
Parses: 365.27 10.38
Hard parses: 1.09 0.03
Sorts: 515.41 14.65
Logons: 0.03 0.00
Executes: 10,744.48 305.42
Transactions: 35.18

% Blocks changed per Read: 0.32 Recursive Call %: 99.48
Rollback per transaction %: 1.30 Rows per Sort: 5.69

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 94.80 In-memory Sort %: 100.00
Library Hit %: 99.99 Soft Parse %: 99.70
Execute to Parse %: 96.60 Latch Hit %: 99.58
Parse CPU to Parse Elapsd %: 92.98 % Non-Parse CPU: 99.63

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 95.36 97.92
% SQL with executions>1: 69.27 69.89
% Memory for SQL w/exec>1: 64.22 64.56

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 27,990 76.37
db file sequential read 5,853,053 6,502 17.74
log file sync 105,494 793 2.16
log file parallel write 108,260 698 1.90
db file scattered read 4,206,900 538 1.47
-------------------------------------------------------------
Wait Events for DB: SABO Instance: SABO Snaps: 28176 -28178
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file sequential read 5,853,053 0 6,502 1 46.2
log file sync 105,494 0 793 8 0.8
log file parallel write 108,260 0 698 6 0.9
db file scattered read 4,206,900 0 538 0 33.2
buffer busy waits 12,756 0 52 4 0.1
latch free 12,800 0 22 2 0.1
log file sequential read 300 0 17 57 0.0
sbtwrite2 1,191 0 11 10 0.0
control file parallel write 1,504 0 9 6 0.0
sbtbackup 4 0 5 1307 0.0
SQL*Net more data to client 33,777 0 4 0 0.3
control file sequential read 4,669 0 2 0 0.0
sbtclose2 4 0 1 327 0.0
sbtinfo2 4 0 1 273 0.0
control file single write 32 0 1 26 0.0
SQL*Net break/reset to clien 19 0 1 27 0.0
enqueue 26 0 0 18 0.0
sbtremove2 2 0 0 220 0.0
log file switch completion 6 0 0 55 0.0
SQL*Net message from dblink 365 0 0 1 0.0
switch logfile command 5 0 0 46 0.0
library cache pin 24 0 0 6 0.0
sbtinit 8 0 0 14 0.0
local write wait 9 0 0 11 0.0
log file single write 8 0 0 11 0.0
single-task message 1 0 0 40 0.0
LGWR wait for redo copy 157 0 0 0 0.0
wait list latch free 1 0 0 25 0.0
log buffer space 3 0 0 8 0.0
direct path read 1,048 0 0 0 0.0
direct path write 307 0 0 0 0.0
sbtend 8 0 0 0 0.0
SQL*Net message to dblink 365 0 0 0 0.0
async disk IO 116 0 0 0 0.0
sbtinit2 8 0 0 0 0.0
buffer deadlock 1 1 0 0 0.0
SQL*Net message from client 782,399 0 1,434,920 1834 6.2
queue messages 4,039 1,431 21,057 5213 0.0
virtual circuit status 120 120 3,489 29074 0.0
pipe get 21 8 30 1441 0.0
SQL*Net message to client 782,394 0 1 0 6.2
SQL*Net more data from clien 1,132 0 0 0 0.0
-------------------------------------------------------------
Background Wait Events for DB: SABO Instance: SABO Snaps: 28176 -28178
-> ordered by wait time desc, waits desc (idle events last)

Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file parallel write 108,260 0 698 6 0.9
control file parallel write 1,269 0 8 6 0.0
log file single write 8 0 0 11 0.0
db file sequential read 44 0 0 1 0.0
rdbms ipc reply 12 0 0 3 0.0
control file sequential read 796 0 0 0 0.0
LGWR wait for redo copy 157 0 0 0 0.0
enqueue 4 0 0 1 0.0
direct path read 1,048 0 0 0 0.0
latch free 4 0 0 0 0.0
direct path write 306 0 0 0 0.0
log file sequential read 8 0 0 0 0.0
buffer busy waits 1 0 0 0 0.0
rdbms ipc message 110,090 4,210 22,846 208 0.9
pmon timer 1,212 1,212 3,510 2896 0.0
smon timer 19 10 3,465 ###### 0.0
-------------------------------------------------------------

Tom Kyte

Followup  

June 13, 2008 - 10:56 am UTC

Parses: 365.27

you parse a lot, and that requires the library cache latch. In this report, latch sleeps are not a major contributor to the wait time - but the latch gets are contributing to the CPU time.

June 13, 2008 - 10:49 am UTC

Reviewer: Sam Mousleh from France

Here is another important section of the report..
Thank you

Latch Miss Sources for DB: SABO Instance: SABO Snaps: 28176 -28178
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
cache buffers chains kcbrls: kslbegin 0 517 334
cache buffers chains kcbgtcr: kslbegin excl 0 381 591
cache buffers chains kcbgtcr: fast path 0 270 311
cache buffers chains kcbchg: kslbegin: bufs not 0 19 9
cache buffers chains kcbgcur: kslbegin 0 17 3
cache buffers chains kcbzsc 0 16 0
cache buffers chains kcbzgb: scan from tail. no 0 12 0
cache buffers chains kcbzib: multi-block read: 0 9 0
cache buffers chains kcbchg: kslbegin: call CR 0 6 3
cache buffers chains kcbgtcr: kslbegin shared 0 5 0
cache buffers chains kcbcge 0 1 0
cache buffers chains kcbzib: finish free bufs 0 1 3
cache buffers chains kcbzwb 0 1 0
cache buffers lru chain kcbzgb: multiple sets nowa 44,700 23 0
cache buffers lru chain kcbbiop: lru scan 374 13 0
cache buffers lru chain kcbzar: KSLNBEGIN 15,164 3 0
cache buffers lru chain kcbzgm 3,866 3 0
cache buffers lru chain kcbzgb: posted for free bu 2,732 1 41
cas latch kgsloupd1 0 21 21
dml lock allocation ktaiam 0 2 2
library cache kglic 0 2,932 66
library cache kglupc: child 0 1,396 2,324
library cache kglpnc: child 0 1,101 3,183
library cache kglpndl: child: before pro 0 247 341
library cache kglpnp: child 0 125 321
library cache kgllkdl: child: cleanup 0 113 62
library cache kglpin: child: heap proces 0 107 27
library cache kglhdgc: child: 0 56 41
library cache kglobpn: child: 0 49 8
library cache kglpndl: child: after proc 0 47 10
library cache kglhdgn: child: 0 36 78
library cache kglget: child: KGLDSBRD 0 30 126
library cache kgldti: 2child 0 16 2
library cache kgldte: child 0 0 7 27
library cache kglobld 0 2 1
library cache kglpin 0 1 1
library cache pin kglpnc: child 0 131 136
library cache pin kglupc 0 105 116
library cache pin kglpndl 0 47 24
library cache pin kglpnp: child 0 28 29
library cache pin kglpnal: child: alloc spac 0 6 12
library cache pin alloca kgllkdl 0 3 4
library cache pin alloca kglpnal 0 2 1
mostly latch-free SCN kcs01 0 2 3
mostly latch-free SCN kcs02 0 1 0
multiblock read objects kcbzib: mbr get 0 27 29
multiblock read objects kcbzib: normal mbr free 0 21 19
redo allocation kcrfwr 0 2 0
row cache enqueue latch kqreqd 0 58 22
row cache enqueue latch kqreqa 0 19 55
row cache objects kqrpfl: not dirty 0 1,273 372
row cache objects kqrpre: find obj 0 228 1,129
session allocation ksufap: active sessions 0 3 0
Latch Miss Sources for DB: SABO Instance: SABO Snaps: 28176 -28178
-> only latches with sleeps are shown
-> ordered by name, sleeps desc

NoWait Waiter
Latch Name Where Misses Sleeps Sleeps
------------------------ -------------------------- ------- ---------- --------
session allocation ksucri 0 2 6
session allocation ksuxds: not user session 0 2 2
session allocation ksuxds: KSUSFCLC not set 0 1 0
shared pool kghalo 0 1,351 4
shared pool kghupr1 0 1,103 2,737
shared pool kghfrunp: clatch: nowait 0 289 0
shared pool kghfrunp: alloc: wait 0 269 0
shared pool kghfrunp: clatch: wait 0 158 94
shared pool kghalp 0 13 0
shared pool kghfre 0 4 1
shared pool kghfen: not perm alloc cla 0 2 0
simulator lru latch kcbs_simulate: simulate se 0 68 68
-------------------------------------------------------------
Dictionary Cache Stats for DB: SABO Instance: SABO Snaps: 28176 -28178
->"Pct Misses" should be very low (< 2% in most cases)
->"Cache Usage" is the number of cache entries being used
->"Pct SGA" is the ratio of usage to allocated size for that cache

Get Pct Scan Pct Mod Final
Cache Requests Miss Reqs Miss Reqs Usage
------------------------- ------------ ------ ------- ----- -------- ----------
dc_database_links 192 0.0 0 0 1
dc_files 7,600 0.0 0 0 152
dc_free_extents 40 0.0 0 0 4
dc_global_oids 330,886 0.0 0 0 87
dc_histogram_defs 174,807 0.1 0 0 1,231
dc_object_ids 495,926 0.0 0 0 466
dc_objects 30,967 0.3 0 8 951
dc_profiles 81 0.0 0 0 1
dc_rollback_segments 5,134 0.0 0 0 130
dc_segments 53,259 0.2 0 14 423
dc_sequences 1,642 0.1 0 1,642 11
dc_tablespace_quotas 26 11.5 0 26 10
dc_tablespaces 706,661 0.0 0 0 42
dc_user_grants 900 0.0 0 0 26
dc_usernames 132,405 0.0 0 0 14
dc_users 29,161,512 0.0 0 0 29
-------------------------------------------------------------


Library Cache Activity for DB: SABO Instance: SABO Snaps: 28176 -28178
->"Pct Misses" should be very low

Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 736 0.7 856 0.6 0 0
CLUSTER 382 0.5 302 1.3 0 0
INDEX 130 2.3 67 4.5 0 0
PIPE 52 23.1 65 18.5 0 0
SQL AREA 112,152 3.0 42,045,433 0.0 577 8
TABLE/PROCEDURE 478,013 0.0 3,765,359 0.0 382 0
TRIGGER 248 0.0 248 0.0 0 0
-------------------------------------------------------------
Shared Pool Advisory for DB: SABO Instance: SABO End Snap: 28178
-> Note there is often a 1:Many correlation between a single logical object
in the Library Cache, and the physical number of memory objects associated
with it. Therefore comparing the number of Lib Cache objects (e.g. in
v$librarycache), with the number of Lib Cache Memory Objects is invalid

Estd
Shared Pool SP Estd Estd Estd Lib LC Time
Size for Size Lib Cache Lib Cache Cache Time Saved Estd Lib Cache
Estim (M) Factr Size (M) Mem Obj Saved (s) Factr Mem Obj Hits
----------- ----- ---------- ------------ ------------ ------- ---------------
176 .6 161 41,309 485,708 1.0 181,573,120
208 .7 192 50,285 485,711 1.0 181,574,848
240 .8 223 59,446 485,714 1.0 181,575,905
272 .9 255 68,780 485,720 1.0 181,579,173
304 1.0 286 74,811 485,741 1.0 181,618,438
336 1.1 318 78,529 485,763 1.0 181,687,097
368 1.2 346 84,684 485,770 1.0 181,724,048
400 1.3 362 89,476 485,771 1.0 181,724,742
432 1.4 362 89,633 485,771 1.0 181,724,768
464 1.5 362 89,633 485,771 1.0 181,724,768
496 1.6 362 89,633 485,771 1.0 181,724,768
528 1.7 362 89,633 485,771 1.0 181,724,768
560 1.8 362 89,633 485,771 1.0 181,724,768
592 1.9 362 89,633 485,771 1.0 181,724,768
624 2.1 362 89,633 485,771 1.0 181,724,768
-------------------------------------------------------------
SGA Memory Summary for DB: SABO Instance: SABO Snaps: 28176 -28178

SGA regions Size in Bytes
------------------------------ ----------------
Database Buffers 7,348,420,608
Fixed Size 752,080
Redo Buffers 1,323,008
Variable Size 3,137,339,392
----------------
sum 10,487,835,088
-------------------------------------------------------------


SGA breakdown difference for DB: SABO Instance: SABO Snaps: 28176 -28178

Pool Name Begin value End value % Diff
------ ------------------------------ ---------------- ---------------- -------
large free memory 33,554,432 33,554,432 0.00
shared 1M buffer 2,099,200 2,099,200 0.00
shared Checkpoint queue 4,106,240 4,106,240 0.00
shared FileOpenBlock 6,773,240 6,773,240 0.00
shared KGK heap 7,000 7,000 0.00
shared KGLS heap 1,606,168 1,131,072 -29.58
shared KQR L PO 1,561,320 1,562,344 0.07
shared KQR M PO 1,179,624 1,196,008 1.39
shared KQR S SO 7,424 7,424 0.00
shared KQR X PO 28,368 28,368 0.00
shared KSXR pending messages que 853,952 853,952 0.00
shared KSXR receive buffers 1,034,000 1,034,000 0.00
shared MTTR advisory 298,328 298,328 0.00
shared PL/SQL DIANA 1,769,864 1,371,744 -22.49
shared PL/SQL MPCODE 5,543,416 5,633,280 1.62
shared PLS non-lib hp 3,216 3,216 0.00
shared VIRTUAL CIRCUITS 2,073,680 2,073,680 0.00
shared branch 355,512 355,512 0.00
shared db_block_hash_buckets 30,574,480 30,574,480 0.00
shared db_handles 1,044,000 1,044,000 0.00
shared dictionary cache 3,229,952 3,229,952 0.00
shared enqueue 1,654,896 1,654,896 0.00
shared errors 52,504 400,880 663.52
shared event statistics per sess 11,255,440 11,255,440 0.00
shared fixed allocation callback 1,168 1,168 0.00
shared free memory 19,458,928 8,740,712 -55.08
shared joxs heap init 4,240 4,240 0.00
shared krvxrr 253,056 253,056 0.00
shared ksm_file2sga region 370,496 370,496 0.00
shared ktlbk state objects 586,384 586,384 0.00
shared library cache 70,989,544 73,428,504 3.44
shared miscellaneous 60,497,992 60,358,832 -0.23
shared parameters 157,784 164,384 4.18
shared pl/sql source 640 584 -8.75
shared processes 1,180,800 1,180,800 0.00
shared sessions 2,690,480 2,690,480 0.00
shared sim memory hea 7,301,888 7,301,888 0.00
shared sim trace entries 393,216 393,216 0.00
shared sql area 177,103,016 185,982,000 5.01
shared table definiti 6,552 1,344 -79.49
shared transaction 954,448 954,448 0.00
shared trigger defini 544 544 0.00
shared trigger inform 1,808 1,680 -7.08
shared trigger source 168 168 0.00
shared type object de 322,360 321,216 -0.35
buffer_cache 7,348,420,608 7,348,420,608 0.00
fixed_sga 752,080 752,080 0.00
log_buffer 1,312,768 1,312,768 0.00
-------------------------------------------------------------
init.ora Parameters for DB: SABO Instance: SABO Snaps: 28176 -28178

End value
Parameter Name Begin value (if different)
----------------------------- --------------------------------- --------------
aq_tm_processes 0
background_dump_dest /product/oracle/sgbd/SABO/bdump
compatible 9.2.0.6
control_files /database/oracle/SABO/CTL_SABO_00
core_dump_dest /product/oracle/sgbd/SABO/cdump
db_block_size 8192
db_cache_size 7348420608
db_domain
db_file_multiblock_read_count 16
db_name SABO
dispatchers (PROTOCOL=TCP) (SERVICE=SABOXDB)
fast_start_mttr_target 300
hash_join_enabled TRUE
instance_name SABO
java_pool_size 0
job_queue_processes 10
large_pool_size 33554432
local_listener LSNR_SABO_01_LOC
log_archive_dest /product/oracle/sgbd/SABO/arch
log_archive_format arc_SABO_%s.arc
log_archive_start TRUE
log_buffer 1048576
open_cursors 5000
optimizer_mode CHOOSE
os_authent_prefix ops$
pga_aggregate_target 7340032000
processes 900
query_rewrite_enabled FALSE
remote_login_passwordfile EXCLUSIVE
service_names su1162_SABO, SABO
sga_max_size 10487835088
shared_pool_size 318767104
star_transformation_enabled FALSE
timed_statistics TRUE
undo_management AUTO
undo_retention 50000
undo_tablespace UNDO_001_SABO
user_dump_dest /product/oracle/sgbd/SABO/udump
utl_file_dir /varsoft/sabo/trace/tmp, /varsoft
workarea_size_policy auto

Tom Kyte

Followup  

June 13, 2008 - 10:57 am UTC

and according to that, increasing the shared pool would do nothing. You just parse a lot, soft parsing

you might utilize session_cached_cursors if an application tends to parse and reparse the same sql over and over

June 13, 2008 - 12:23 pm UTC

Reviewer: Sam Mousleh from France

Tom, I thought soft parsing is good and that we should avoid only hard parsing. Parses: 365.27 per second and 10.38 per transaction.
1. Do you mean we parse a lot per transaction?
2. Are you saying that if I increase the shared pool, nothing will change?

About session_cached_cursors: I have read many articles saying it does not improve response time. It might improve parsing a little, but not response time.

Can I set session_cached_cursors at the database level, I mean in the spfile or init.ora file?

Also, you said at the beginning that "a parse is a parse - soft or hard - both require lots of latching, hard is just worse, soft is very bad, hard is very very bad", so how can we avoid both hard and soft parses?

Thank you again
Sam
Tom Kyte

Followup  

June 13, 2008 - 1:25 pm UTC

there are three types of parses in Oracle:

a hard parse. It is very very very bad.
a soft parse. It is simply very very bad.
the absence of a parse, this is the only good parse going.


1) you parse a lot per second! that was a parse per second count, each parse takes lots of CPU, latching and you do 365.27 of them on average every single second.


2) that is what the shared pool advisor - which watches your system, your workload and what you do - is saying.




The programmers are the only ones that can avoid a parse. They have code that looks like:


parse sql
bind (hopefully this is there!!!!)
execute sql
close sql



in a routine that gets called frequently, that should be:

if (first_time)
then
   parse sql
   first_time gets false
end
bind sql
execute sql
/* do not close it */




If they move all of their SQL into PLSQL - this will happen almost automagically !!! PLSQL caches sql - when you say "close cursor" in plsql, plsql laughs at you and says "no way, you are just going to execute it again, I'll keep it open for you". The developers would still have to parse the CALL to plsql once and cache that statement handle, but they wouldn't have to worry about the rest of the SQL.
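To make that concrete, here is a minimal sketch (the table t and the procedure name are hypothetical): static SQL inside PL/SQL gets the parse-once behavior automatically, because PL/SQL caches the cursor behind the scenes.

```sql
create or replace procedure lookup_demo
as
    l_cnt number;
begin
    for i in 1 .. 10000
    loop
        -- parsed (at most softly) on the first iteration only; PL/SQL
        -- keeps the cursor open and cached across iterations and calls
        select count(*) into l_cnt from t where x = i;
    end loop;
end;
/
```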


session_cached_cursors is best set at the init.ora level, yes.

June 16, 2008 - 6:30 am UTC

Reviewer: Sam Mousleh from France

I am sorry Tom, but I didn't really get what you were saying about not closing the sql after executing it!!! How can you close or open a sql statement?

if (first_time)
then
parse sql
first_time gets false
end
bind sql
execute sql
/* do not close it */

All of the SQL statements are in PL/SQL procedures... and there are so many procedures in this application - many procedures calling many other procedures, with loops and conditions and SELECT COUNT(*) and so on.

I am trying to improve the performance of a procedure that, of course, calls other procedures to do the job. We are reaching a TIMEOUT at over 1 minute and 10 seconds.

Thank you again,
Sam
Tom Kyte

Followup  

June 16, 2008 - 12:10 pm UTC

your programmers know how to do this - they are the ones you need to talk to. If you don't know how to open and close sql - well, they do.

Since everything is in plsql, you as the "dba" control the size of the plsql cursor cache via session_cached_cursors - what is yours set to? You might consider increasing it.
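As a sketch of how to check and adjust it (the value 100 is purely illustrative, not a recommendation):

```sql
-- see what it is currently set to
select value from v$parameter where name = 'session_cached_cursors';

-- raise it instance-wide via the spfile; takes effect at the next restart
alter system set session_cached_cursors = 100 scope = spfile;

-- or experiment in a single session first
alter session set session_cached_cursors = 100;
```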


It is unlikely this is the cause of you taking more than 1:10 to execute, that would more likely be "inefficient code"

Library latch contention

July 29, 2008 - 12:22 pm UTC

Reviewer: Chris from Louisville, KY USA

Tom,
I have a 10.2.0.3 database that I believe is experiencing library cache latch contention (among other things). In the Top SQL Statements section of the ASH report, 51.67% is attributed to the PL/SQL block generated by a web server request. That 51.67% breaks down as follows (in the %Event column):

18.97 cursor: pin S wait on X
16.11 CPU + Wait for CPU
12.54 kksfbc child completion

I thought I read that "cursor: pin S wait on X" was a new term for library cache latching?

Two other very concerning entries come from the AWR. The application is an OLTP type app that does tens of millions of transactions over a several hour time period. When I compare stats from the new code to benchmarked code (for a 10 or 15 minute period) I see the following:

Parse CPU to Parse Elapsd %: went from 62.13 to 4.74
Soft Parse %: went from 99.99 to 81.65
% Non-Parse CPU: went from 99.61 to 93.38

I have used your code to strip literals and numbers from v$sqlarea and look for bad SQL (without binds) and found nothing (which I sort of expected since the application is 99% PL/SQL and the binding should be done for me and there is very little dynamic SQL).

The noticeable difference is that machine CPU is now being pegged whereas it used to run much more efficiently. Looking for any suggestions on where to look and am I even on the right track.

I will note that with the new code that has been added, there are a lot of selects from PL/SQL tables using the TABLE(CAST(...)) idiom with CARDINALITY hints. I have no idea whether that should affect anything. I have given thought to recoding using global temp tables, but that would be a ton of programming effort (man-months) I don't want to go through if it's not even the right track.

I know you love details and this may sound somewhat vague, I'm just at a loss as to what details to provide because I'm not sure where to go with this. The system is a shop floor type app with all action coming through thousands of simultaneous web server connections from plant floor devices so tracing individual sessions isn't very practical, but I could do it at the system level for a couple minutes if I knew what to then go look for in the monstrous trace file I'd get.

Thoughts on how to proceed?

10.2.0.3 - RAC

August 13, 2008 - 10:59 am UTC

Reviewer: Dirtbag1402 from Bournemouth, Dorset, UK

If you're hitting this:

18.97 cursor: pin S wait on X
16.11 CPU + Wait for CPU
12.54 kksfbc child completion


I'd say you're hitting Oracle bug 6477136 - but it appears to be a Real Application Clusters issue. We get this problem when the shared pool has been flushed or the DB bounced, and I think it's related to Oracle's ASMM, where the various memory structures are resized. It is possibly also related to the cursor_sharing setting.

I'd look into this bug; sadly the base bug isn't visible, so an SR may be required.

simulate library cache latch,

August 21, 2008 - 1:30 pm UTC

Reviewer: A reader

How can we simulate library cache latch contention in our 10gR2 environment?

Also, I once executed a query on the v$ views (a pretty big SQL statement, more than 400 characters) and it was waiting for the library cache latch. In the raw trace file, the session showed as waiting on a different object (a user-defined index) that I am not using in my query at all.

How is this possible?

At the time, there was heavy parsing by a user application, and perhaps one of its SQL statements was waiting for the library cache latch (not sure).

In this situation, will my big query also end up waiting on the library cache latch? I tried a small query, something like "select * from v$session where sid=nnn", and that worked without any delay.

Does library cache latch contention happen when there is not enough room (not enough free chunks of memory) in the shared pool?
Does it happen when the latches allotted at the database level are exhausted latching other cursors?

How do the CURSOR_SHARING parameter values (all three of them) affect the library cache latch?

Appreciate your response.

Thanks,

Tom Kyte

Followup  

August 21, 2008 - 10:13 pm UTC

I don't know what you mean by simulate a latch.

I don't know what was in your trace file, you'd have to give us an example to look at.

What do you mean by "a pretty big sql with more than 400 characters"???? do you mean "just a long sql string" - if you do, 400 is small and even if it were 40,000 or 400,000 that doesn't matter - that won't increase the time we hold a latch really - we use latches to serialize access to data structures, things we go in and get out really fast. We'd take your big query and break it into tokens - really small bits and pieces - and then look up the bits and pieces, bit by bit. There would not be a long latch held for a 'big' query.



the library cache latch is used to safely access the shared data structure that is the library cache in the SGA. When I'm reading from it, you cannot be modifying (loading into) it. So, we use a latch to serialize - very very quickly.
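For what it's worth, that serialization is visible in the v$ views - a quick, read-only look at the library cache latch, as a sketch:

```sql
-- observe gets, misses and sleeps on the library cache latch(es)
select name, gets, misses, sleeps
  from v$latch
 where name like 'library cache%'
 order by name;
```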

40000 long string?

August 22, 2008 - 4:29 am UTC

Reviewer: reader from KL

I don't think Oracle supports a 40,000-character-long SQL string as a single SQL statement.
Tom Kyte

Followup  

August 22, 2008 - 9:47 am UTC

sure it does, there is actually no documented limit.

You can do a sql statement as big as you like........

even in plsql - you have to use dbms_sql to do it there, but even in plsql (there is a parse API that takes a plsql table of strings, each string is a "line" of text and you can have really large strings).



SQL query length limit

August 23, 2008 - 3:01 pm UTC

Reviewer: A reader

"sure it does, there is actually no documented limit. "

My application submitted a 744,265 character long SQL query via JDBC. It's been hanging for over 20 hours. V$SQL doesn't show any activity, no executions, disk reads, buffer gets, nothing. V$SESSION shows the query waiting on "SQL*Net more data from client".

One of Millsap's case studies featured in his book says the following about the "more data from client" event:

"The long SQL text strings wouldn't fit into a single SQL*Net packet, so the Oracle kernel spent a considerable amount of time awaiting second and subsequent SQL*Net packets during parse calls."

Any chance this is what's holding up my query from running?

Thanks.
Tom Kyte

Followup  

August 26, 2008 - 8:04 pm UTC

given what you supply us - not much - I cannot comment on what is happening. We don't know (and don't really want to see) what a 750k sql statement looks like - if it was

select *
from dual, dual, dual, dual, dual, ....... lots and lots of duals, dual;

then it would be the optimizer going into high gear and doing lots of work.

Huge IN list

August 27, 2008 - 12:16 am UTC

Reviewer: A reader from FL

The 750K query has an IN list of 43,000 items. We are going to change the query to avoid having a large IN list but I'd like to know if such a large query could be "hanging" due to a more data from client wait event.

Tom Kyte

Followup  

August 27, 2008 - 8:31 am UTC

you can see what a session is waiting on in the v$ views, query them.

That needs to be fixed - like soon. That query should be tiny, select * from t where c in ( select * from global_temporary_table );

that inlist should never have been generated, it should have been at best stuffed into a global temporary table and used from there.
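A minimal sketch of that rewrite (table and column names are hypothetical):

```sql
-- created once, as a permanent object; each session sees only its own rows
create global temporary table inlist_gtt ( id number )
on commit delete rows;

-- the application binds and (array-)inserts its 43,000 values...
insert into inlist_gtt ( id ) values ( :id );

-- ...and the 750k-character query collapses to one small, shared statement
select * from t where c in ( select id from inlist_gtt );
```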

Global temp table

August 27, 2008 - 9:14 am UTC

Reviewer: A reader

That's exactly what we did - we used a global temp table and it works great. The session was waiting for "more data from client" back when it was hanging.

No new lines

October 09, 2008 - 10:08 am UTC

Reviewer: A reader from FL

A few more questions regarding the long SQL statement I mentioned above.

The scenario I described had the application "hanging" and the database waiting for "more data from client". I recently discovered the application's Java code builds the SQL statement as a single string of characters with no embedded new lines. If we were to print the SQL it would show as one very long line.

Does a very long SQL string (with no new lines) cause the database to wait on "more data from client"?

Can I assume the entire SQL was received by the database because I found it in V$SQL?

We already fixed this issue by using a global temp table. My questions are more curiosity than anything else.

Thanks.


Tom Kyte

Followup  

October 09, 2008 - 11:18 am UTC

... Does a very long SQL string (with no new lines) cause the database to wait on
"more data from client"?
..

only in the event of a bug, one that I am not personally aware of.

eg: there is no restriction on the length of a sql statement - with or without newlines.

...
Can I assume the entire SQL was received by the database because I found it in
V$SQL?
.....

yup - how else could it have gotten there?

remove_constants: Not in 10.2.0.3

November 20, 2008 - 1:54 pm UTC

Reviewer: Richard from London, UK

Try as I might, I cannot get remove_constants to work in 10g (10.2.0.3) on Linux.

The problem lies with:
update t1 set sql_text_wo_constants = remove_constants(sql_text);

Here's the error:
ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYSTEM.REMOVE_CONSTANTS", line 10

Works on 9i.
Tom Kyte

Followup  

November 24, 2008 - 1:20 pm UTC


if your line 10 is my line 10, that is:

ops$tkyte%ORA10GR2> create or replace function
  2  remove_constants( p_query in varchar2 ) return varchar2
  3  as
  4      l_query long;
  5      l_char  varchar2(1);
  6      l_in_quotes boolean default FALSE;
  7  begin
  8      for i in 1 .. length( p_query )
  9      loop
 10          l_char := substr(p_query,i,1);
 11          if ( l_char = '''' and l_in_quotes )
 12          then


and I would suspect it has nothing to do with the version - and everything to do with multi-byte character sets.

varchar2(1) is one byte - maybe it needs to be


  1  create or replace function
  2  remove_constants( p_query in varchar2 ) return varchar2
  3  as
  4      l_query long;
  5      l_char  varchar2(1 CHAR);
  6      l_in_quotes boolean default FALSE;
  7  begin



for your multibyte data..

Never Doubted You!

February 17, 2009 - 1:12 pm UTC

Reviewer: Richard from London, near Jack the Ripper's Old Haunts!

Worked! Thanks!

What about...

June 11, 2009 - 6:15 am UTC

Reviewer: Richard from London, UK

I love the Am I Using Binds query, and have used it to great effect on many occasions.
I use it to prove that a shop is not being as efficient as it could be, and it is usually well received... BUT... what I'd also like to be able to do, based (I expect) on the script, is to show how much shared pool, in megabytes or whatever, is being wasted.

Is there a way to do that?

This is a new question, but, I feel, is a natural corollary to the replies thus far given

Tom Kyte

Followup  

June 11, 2009 - 10:52 am UTC

.. what I'd also like to be able to do based (I
expect)
on the script is to show how much Shared Pool, in megabytes/whatever, is being
wasted.
....

the answer to that is simple:

100%, 100% of the shared pool is being destroyed by this practice.


Here is a script I run to "prove" that out, to demonstrate that:

ops$tkyte%ORA11GR1> create table t ( x int primary key );

Table created.

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> select count(*) from v$sql where sql_text NOT LIKE 'select /* nobind */ * from t where x = %';

  COUNT(*)
----------
       684

ops$tkyte%ORA11GR1> pause

ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> set timing on
ops$tkyte%ORA11GR1> declare
  2      l_cursor sys_refcursor;
  3  begin
  4      for i in 1 .. 10000
  5      loop
  6          open l_cursor for 'select /* nobind */ * from t where x = ' || i;
  7          close l_cursor;
  8      end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:13.70
ops$tkyte%ORA11GR1> declare
  2      l_cursor sys_refcursor;
  3  begin
  4      for i in 1 .. 10000
  5      loop
  6          open l_cursor for 'select /* bind */ * from t where x = :x' using i;
  7          close l_cursor;
  8      end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.85
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> set linesize 121
ops$tkyte%ORA11GR1> select count(*), to_char(sum(sharable_mem),'999,999,999') sm,
  2         to_char(sum(sharable_mem)/ decode( count(*), 0, to_number(null), count(*) ), '999,999,999' ) per_stmt
  3    from v$sql
  4   where sql_text like 'select /* nobind */ * from t where x = %';

  COUNT(*) SM           PER_STMT
---------- ------------ ------------
       353    3,107,841        8,804

Elapsed: 00:00:00.19
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> select count(*), to_char(sum(sharable_mem),'999,999,999') sm
  2    from v$sql
  3   where sql_text = 'select /* bind */ * from t where x = :x';

  COUNT(*) SM
---------- ------------
         1       12,892

Elapsed: 00:00:00.03
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> select count(*) from v$sql where sql_text NOT LIKE 'select /* nobind */ * from t where x = %';

  COUNT(*)
----------
       433




things to note:

1) we did 10,000 hard parses - why are there only 353 in the shared pool? because we filled it up, flushed it, filled it, flushed it, etc....

2) why did the count of the other sql (everything NOT LIKE the nobind statement) drop from 684 to 433? because when we flush, we flush - anything we can. That nobind sql came in and basically kicked out a lot of 'good' sql - sql that will now be implicitly hard parsed the next time it is needed.



One bad unbound sql statement
In one bad application
executed frequently enough

is sufficient to destroy the entire shared pool

free global transaction table entry

July 21, 2009 - 4:20 am UTC

Reviewer: Mozammel Hoque from TX USA

SQL> SELECT event, state, p1, p2, p3
  2  FROM   v$session_wait
  3  WHERE  sid = 2807;

EVENT                                    STATE                       P1         P2         P3
---------------------------------------- ------------------- ---------- ---------- ----------
free global transaction table entry      WAITING                  47497          0          0


SQL> SELECT event#, event_id, name, parameter1, wait_class_id, wait_class#, wait_class 
  2  FROM v$event_name WHERE name = 'free global transaction table entry';

    EVENT#   EVENT_ID NAME                                   PARAMETER1   WAIT_CLASS_ID WAIT_CLASS# WAIT_CLASS
---------- ----------  -------------------------------------- ------------ ------------- -----------  -----
       629 1932123989 free global transaction table entry    tries           1893977003           0 Other

How can we get rid of wait event "free global transaction table entry"?

library cache load lock

November 04, 2009 - 1:28 pm UTC

Reviewer: kulbhushan

Hi Tom, we faced a database hang on a 3-node 11i ERP RAC database. We saw "library cache load lock" timed-out events reported in the alert log, then a few ORA-600 errors, and later a "ROW CACHE ENQUEUE LOCK" timed-out event. Eventually the database was hung and we had to bounce services.
We created support SR 7845542.992 for the RCA.
Support says to increase the shared pool size, to avoid shared pool fragmentation and reloads, and additionally to upgrade to a 10g database.

I am not convinced that adding shared pool or upgrading to 10g would solve this; furthermore, even 10g has such issues reported.
I saw a couple of bugs mentioning that such an issue can happen due to a deadlock of sessions holding latches. Kindly let me know your view on the issue.

Regards


Regexp version

May 26, 2010 - 4:47 am UTC

Reviewer: Donat Callens from Belgium

Hi Tom,

Hereby a regular expression version of the query to find statements that are identical in the shared pool:
SELECT COUNT (*),
   sql_text_wo_constants
FROM
   (SELECT  regexp_replace (
               regexp_replace (
                  regexp_replace (
                     sql_text,
                     '([^'']+'')[^'']+''', -- find quoted strings
                     '\1#''' -- replace them with '#'
                  ),
                  '([^[:alnum:]])[[:digit:]]+([^[:alnum:]])', -- find numbers
                  '\1@\2' -- replace them with @
               ),
               ' +', -- find blank spaces
               ' ' -- replace them with single space
            ) AS sql_text_wo_constants
   FROM v$sqlarea
   )
GROUP BY sql_text_wo_constants
HAVING COUNT (*) > 100
ORDER BY 1 DESC;

Notes:
1. The sub-query is only there to avoid typing the regular expression statement twice.
2. This query avoids the issue with identifiers that have digits in their names, e.g. 'select value11, value12, (...)', which is not handled correctly by the remove_constants function.
3. Your version is much, much faster... I know!
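The same normalization can be sketched outside the database. A minimal Python equivalent of the three substitutions (the regexes are simplified and the sample statement is illustrative, not from the thread):

```python
import re

def remove_constants(sql: str) -> str:
    """Collapse literals so statements differing only in constants compare equal."""
    s = re.sub(r"'[^']*'", "'#'", sql)  # quoted strings -> '#'
    s = re.sub(r"\b\d+\b", "@", s)      # standalone numbers -> @ (digits inside
                                        # identifiers like value11 are untouched,
                                        # since \b needs a word/non-word boundary)
    return re.sub(r" +", " ", s)        # collapse runs of blanks

print(remove_constants("select * from t where id = 42 and name = 'bob'"))
# -> select * from t where id = @ and name = '#'
```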

Donat

Using bind variables in proc

June 30, 2011 - 11:59 pm UTC

Reviewer: rakesh from Canada

Hi Tom,

select sql_text_wo_constants, count(*)
from t1
group by sql_text_wo_constants
having count(*) > 100
order by 2
/


SQL_TEXT_WO_CONSTANTS COUNT(*)
---------------------------------------------------------------------------------------------------- ----------
BEGIN :L_RET_VALUE := ABC.P_PROVISIONING_ABC.ABC_ANRAX(@ ,@ ,'#' ,'#'); END; 2880
BEGIN :L_RET_VALUE := ABC.P_PROVISIONING_ABC.ABC_LAC(@ ,@ ,'#' ,'#'); END; 4156
BEGIN :L_RET_VALUE := ABC.P_PROVISIONING_ABC.ABC_DIRECTORY_NUMBER(@ ,@ ,'#' ,'#'); END; 8813
BEGIN P_PROVISIONINGMANAGER_API.UPDATE_SOP_OBJECTS ('#','#','#'); END; 13464


I got the above output. As the counts run to five digits, I really need to work on this to stabilize my database. The above procedures are called by the application.

1) How do we pass bind variables while calling these procedures?

2) If that is not possible, is there any other workaround? Please advise, as I am very new to being a DBA and have a lot of issues on my database.
Tom Kyte

Followup  

July 01, 2011 - 9:16 am UTC

1) the same way you pass bind variables to anything - look - they know how to bind for the output - it is pretty much similar for the inputs.

2) it is very possible. The developers already know how to bind, they are getting the outputs! tell them just to bind the inputs as well.

The syntax for that is dependent on the language you are using/programming in. But binding to the plsql block will be the same as binding to SQL.
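For instance, in Python with the python-oracledb driver (the package and function names come from the poster's output; the connection details and input values are placeholders), binding the inputs to the PL/SQL block looks just like binding the output:

```python
import oracledb  # python-oracledb client driver

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/pdb1")
cur = conn.cursor()

ret = cur.var(int)  # the OUT bind the application already uses
cur.execute(
    "BEGIN :ret := abc.p_provisioning_abc.abc_lac(:p1, :p2, :p3, :p4); END;",
    ret=ret, p1=101, p2=202, p3="A", p4="B")  # IN binds instead of literals
print(ret.getvalue())
```

With the literals replaced by :p1..:p4, all 4,156 of those calls would collapse into one shared cursor.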

Wait Event - latch: cache buffers chains

January 12, 2012 - 10:53 am UTC

Reviewer: Jim Cox from El Segundo, CA USA

Hi Tom

I have been trying to figure out why some vendor SQL at times takes 2 minutes to run and at other times takes hours.
The SQL does not use bind variables, so I know that is an issue. I ran your SQL from above and I do not see the long-running SQL listed, so I am missing something.

Here is the OEM wait history for the SQL when it does not complete properly. Is there any reason other than not using bind variables that would cause "latch: cache buffers chains" waits?

What do you suggest I do next?

Thanks
Jim


Wait Class Wait Event P1 Text P1 P2 Text P2 P3 Text P3 Wait Time
Concurrency latch: cache buffers chains address number 124 tries 0
Concurrency latch: cache buffers chains address number 124 tries 0
Concurrency latch: cache buffers chains address number 124 tries 1
Concurrency latch: cache buffers chains address number 124 tries 2
Concurrency latch: cache buffers chains address number 124 tries 4
Concurrency latch: cache buffers chains address number 124 tries 1
Concurrency latch: cache buffers chains address number 124 tries 1
Concurrency latch: cache buffers chains address number 124 tries 1
Concurrency latch: cache buffers chains address number 124 tries 1
Concurrency latch: cache buffers chains address number 124 tries 0

Tom Kyte

Followup  

January 13, 2012 - 9:13 am UTC

when you run it and it runs 'fast', what else is going on in the database - when slow - what else is going on?

I'd guess that when 'slow', there are lots of other users or some big batch job - and they are doing IO and they are competing with each other for the buffer cache.


Wait Event - latch: cache buffers chains - Followup

January 13, 2012 - 2:26 pm UTC

Reviewer: Jim Cox from El Segundo, CA USA

Not really much SQL being run, Tom, and no batch jobs.

I broke out the SQL and ran it on a test database.

Both servers and databases have the same issue with the same SQL.

I entered a follow-up question under Performance Tuning,
since you were assisting me last year on a similar issue.

Hope you can assist me

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3083446555294#4298956900346743544

Thanks
Jim

Diff between "library cache lock" and "library cache pin"

January 21, 2012 - 7:01 am UTC

Reviewer: Rakesh from Canda

Hi Tom,

1) What is the difference between a "library cache lock" and a "library cache pin"?

2) Can you give us some examples of when there will be a "lock" and a "pin"?
Tom Kyte

Followup  

January 21, 2012 - 10:40 am UTC

here is a pretty good existing article on this subject:

http://orainternals.wordpress.com/2009/06/02/library-cache-lock-and-library-cache-pin-waits/

Library cache pin

January 23, 2012 - 11:34 pm UTC

Reviewer: Rakes from Canada

Hi Tom,

I am still not clear on the library cache pin. Your link says:

"an underlying objects should not be modified when a session is executing or accessing a dependent object (SQL). So, before parse locks on a library cache object can be broken, library cache pins must be acquired in Exclusive mode and then only library cache objects can be dropped. If a session is executing an SQL, then library cache pins will not be available and there will be waits for library cache pins. Typically, this happens for long running SQL statement.
"
If i am running "select * from emp" from session 1,where emp table is a pretty huge table and it takes around 1 hr to complete.

If i run the same query from session 2, will session 2 wait on "library cache pin" till the session 1 gets completed?


Thanks a lot

Tom Kyte

Followup  

January 24, 2012 - 1:24 pm UTC

consider a pin a lock, a lock a pin, the semantic difference to us is too minor.

If i am running "select * from emp" from session 1,where emp table is a pretty
huge table and it takes around 1 hr to complete.

If i run the same query from session 2, will session 2 wait on "library cache
pin" till the session 1 gets completed?


absolutely NOT.

It is DDL they are trying to prevent, not DML

latch : shared pool wait event

February 07, 2012 - 1:27 am UTC

Reviewer: aliyar from bangalore

Dear Tom,

O/S: Linux
Database: 11gR2

I have taken an AWR report for a one-hour interval.

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:      5236 06-Feb-12 18:00:33       126       4.2
  End Snap:      5237 06-Feb-12 19:00:53        93       3.2
   Elapsed:               60.33 (mins)
   DB Time:            1,295.40 (mins)



                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
cursor: pin S                        51,214      32,594    636   41.9 Concurrenc
latch: shared pool                   23,610      17,529    742   22.6 Concurrenc
cursor: mutex S                      23,827      13,235    555   17.0 Concurrenc
DB CPU                                           10,688          13.8
log file sync                       877,048       4,089      5    5.3 Commit

I found that "cursor: pin S" and "latch: shared pool" are the top waits in the database.

I tried to understand the "latch: shared pool" wait and its fixes from Google and Metalink, but I could not find anything useful.

Can you please help me understand the above wait event and the fix for it?

Thanks
Aliyar
Tom Kyte

Followup  

February 08, 2012 - 1:19 am UTC

there are two 'fixes'

a) cut down on your concurrent users - 126 sessions - so you have 32 to 64 cores on your server?

b) cut down on your parsing - especially hard parsing, you don't share your soft parse %, but I'll bet it is a number way lower than 90%. If it is, you not only have a performance bug inflicted by your programmers, but a severe security bug as well (called sql injection)


latch : shared pool wait event

February 13, 2012 - 1:04 pm UTC

Reviewer: aliyar from bangalore

Dear Tom,

Thanks for your immediate reply to my previous question about the "latch: shared pool" wait event. I have taken AWR reports from a good time and a bad time. The results are as follows:


Good time report taken 01 hour interval

Host Name        Platform                         CPUs Cores Sockets Memory(GB)
---------------- -------------------------------- ---- ----- ------- ----------
xxxxxxxxxx Linux x86 64-bit                    4     4       2      31.28


              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:      5403 13-Feb-12 17:00:15       163       4.8
  End Snap:      5404 13-Feb-12 18:00:36       149       5.1
   Elapsed:               60.34 (mins)
   DB Time:              363.79 (mins)




Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):                6.0                0.0       0.01       0.01
       DB CPU(s):                1.4                0.0       0.00       0.00
       Redo size:        2,787,450.0            6,975.8
   Logical reads:           33,314.5               83.4
   Block changes:           10,532.5               26.4
  Physical reads:            4,662.5               11.7
 Physical writes:              654.3                1.6
      User calls:              509.0                1.3
          Parses:               85.2                0.2
     Hard parses:                0.9                0.0
W/A MB processed:                1.0                0.0
          Logons:                0.5                0.0
        Executes:              445.5                1.1
       Rollbacks:                0.0                0.0
    Transactions:              399.6



Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
db file sequential read           4,013,902       6,481      2   29.7 User I/O
DB CPU                                            4,874          22.3
log file sync                     1,442,402       4,405      3   20.2 Commit
direct path read                    284,571       1,595      6    7.3 User I/O
latch: shared pool                   19,107       1,239     65    5.7 Concurrenc
Host CPU (CPUs:    4 Cores:     4 Sockets:    2)
~~~~~~~~         Load Average
               Begin       End     %User   %System      %WIO     %Idle
           --------- --------- --------- --------- --------- ---------
                5.09      6.00      30.1       7.6      26.5      60.1




=======================================================
=========================================================

Bad Time report 01 hour interval

              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:      5236 06-Feb-12 18:00:33       126       4.2
  End Snap:      5237 06-Feb-12 19:00:53        93       3.2
   Elapsed:               60.33 (mins)
   DB Time:            1,295.40 (mins)



Load Profile              Per Second    Per Transaction   Per Exec   Per Call
~~~~~~~~~~~~         ---------------    --------------- ---------- ----------
      DB Time(s):               21.5                0.1       0.09       0.07
       DB CPU(s):                3.0                0.0       0.01       0.01
       Redo size:        1,879,295.1            7,753.1
   Logical reads:           10,841.6               44.7
   Block changes:            6,955.0               28.7
  Physical reads:              104.3                0.4
 Physical writes:              388.2                1.6
      User calls:              294.2                1.2
          Parses:               47.4                0.2
     Hard parses:                0.3                0.0
W/A MB processed:                0.2                0.0
          Logons:                0.0                0.0
        Executes:              249.2                1.0
       Rollbacks:                0.0                0.0
    Transactions:              242.4


Top 5 Timed Foreground Events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                                           Avg
                                                          wait   % DB
Event                                 Waits     Time(s)   (ms)   time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
cursor: pin S                        51,214      32,594    636   41.9 Concurrenc
latch: shared pool                   23,610      17,529    742   22.6 Concurrenc
cursor: mutex S                      23,827      13,235    555   17.0 Concurrenc
DB CPU                                           10,688          13.8
log file sync                       877,048       4,089      5    5.3 Commit


Host CPU (CPUs:    4 Cores:    4 Sockets:    2)
~~~~~~~~         Load Average
               Begin       End     %User   %System      %WIO     %Idle
           --------- --------- --------- --------- --------- ---------
               25.17      6.48      54.4      23.8       2.3      20.8



SQL> show parameter cursor_sharing

NAME_COL_PLUS_SHOW_P TYPE                             VALUE_COL_
-------------------- -------------------------------- ----------
cursor_sharing       string                           force
SQL>


The concurrent user count is lower in the bad time compared with the good time, and I don't see any hard-parsing issue in the above report.

Could you please give me some more guidance on this issue?

Thanks
Aliyar

Tom Kyte

Followup  

February 13, 2012 - 2:09 pm UTC

during 'good' time you had an AVERAGE of 6 seconds in the database for every one second of real time.

Given you have four cores - that is OK (maybe). Think about it - it means that ON AVERAGE over that one hour period - every single second had six people active constantly.

Your CPU utilization looks like it was in the 40-50% range.


Now, during your 'bad' time - you had an AVERAGE of 21.5 seconds of database time per every single second. That is 21.5 sessions all trying to do stuff (on a four core machine remember) at exactly the same time!

Given you have four cores, that is not OK. Think about it - it means that ON AVERAGE over that one hour period - every single second had 21/22 people active constantly.



I've emphasized ON AVERAGE - for I'd bet a lot of money that there were times during that hour that WAY MORE than 21 people were trying to be active.


See how you do more work in each second on average when your average concurrency is six/second than you do when it is 21.5/second? You can do MORE WORK with LESS CONCURRENT users.

I'm not talking about sessions, I'm talking about real concurrency. Concurrent active sessions.




I'll stand by my original answer.

You do have a parsing problem - you have cursor sharing=force trying to hide from it. Your developers did it wrong. SQL Injection is something you have to worry about as well!


Why do you have over 100 connected sessions to a 4 core machine?

Why do you let so many sessions (way way way more than 21, 21 is just an average - I'll bet there were times you had many times more than that trying to be active) try to be active on a small four core machine?



I'd suggest lowering your connection pool to about 16 from its current setting of well over 100.


watch this for some evidence that what I'm saying is true:

http://www.youtube.com/watch?v=xNDnVOCdvQ0


latch : shared pool event

February 13, 2012 - 3:57 pm UTC

Reviewer: aliyar from bangalore

dear Tom,

Thanks a lot for the clear explanation. From your reply and the YouTube session I got the following points:

1) As we have more active calls (approx 22 active calls per second with just 4 CPU cores), we see the "latch: shared pool" and "cursor: mutex S" waits in the top 5. So here we need not worry about the waits themselves, but rather the heavy load on the system.


2) The permanent fix is either to ask the application team to cut the concurrency down to a value acceptable for 4 CPU cores, or to increase the CPU core count to a value acceptable for a load of 22 active calls per second.

In a nutshell, our system performance came down due to load that was more than the server's capacity.

Please correct me if I am wrong.

Again, thanks a lot Tom. I have learnt a lot from your site - thanks for such a useful site for DBAs.

Thanks
Aliyar
Tom Kyte

Followup  

February 13, 2012 - 4:33 pm UTC

1) not really high load - but lots of concurrent requests on a machine with limited resources. Consider what happens when you have dozens of users trying to parse at the same time - all needing a latch - a latch held by someone that is currently NOT ON THE CPU. This is what causes your big waits for these things (coupled with the fact that dozens of sessions are waiting for it at the same time!)




2) or.... you implement a resource manager plan that limits the number of active sessions to N, where N is a reasonable number.


otherwise, yes - you are seeing contention based waits because you are trying to do a lot concurrently (decrease concurrency and waits go away - as evidenced by the fact you were doing MORE with LESS when it was "good").


You would like to queue in the middle tier, as queuing on the database will take.... database resources! If you want to do more in the database, you would use your middle tier as it was designed - as CICS and Tuxedo have been used for decades: to LIMIT the amount of concurrent work submitted to the database, releasing work to the database in a manner that allows it to reach maximum efficiency.

SQL statement not in execution phase,

April 23, 2012 - 8:48 am UTC

Reviewer: A reader

We have a scenario happening now in our production database where a particular SID has blocked other sessions. I can see this from v$session using the blocking_session attribute.

I tried to trace a couple of the blocked sessions using dbms_monitor. I didn't see any trace files generated in the dump location, and when I checked v$session_event there wasn't any event associated with the SID.

I thought the SQL statement was stuck in the library cache. When I queried the v$librarycache view, I found the following:

NAMESPACE            GETS    GETHITS GETHIT_RATIO      PINS    PINHITS PINHIT_RATIO  RELOADS INVALIDATIONS
--------------- --------- ---------- ------------ --------- ---------- ------------ -------- -------------
SQL AREA STATS    1071031     129221        12.07   1071031     129221        12.07        0             0
ACCOUNT_STATUS    1223355    1222313        99.91         0          0          100        0             0
SQL AREA BUILD    1078080     256446        23.79         0          0          100        0             0
SQL AREA          8244880    7994569        96.96 285141834  283524257        99.43   242289        182992


I also checked locking in library cache and found thousands of rows with Namespace as SQL AREA BUILD.

I didn't understand this concept. Could you please explain what SQL AREA BUILD, SQL AREA STATS mean and why such high invalidations for SQL AREA in the above output?

Thanks,


Tom Kyte

Followup  

April 23, 2012 - 1:30 pm UTC

I tried to trace a couple of the blocked sessions using dbms_monitor. I
didn't see any trace files generated in the dump location, and when I checked
v$session_event there wasn't any event associated with the SID.


well, if they are blocked - they are blocked and will be blocked (and hence not really writing anything out to trace) until they become unblocked.


I thought the SQL statement is stuck in library cache. When I queried
v$librarycache view, I found the following:


why - why do you think that, what evidence did you gather to suggest that? why couldn't it just be "session 1 locked a row that sessions 2, 3, 4, 5, .... are trying to lock"

follow up,

April 23, 2012 - 2:46 pm UTC

Reviewer: A reader

I have no evidence that it got stuck in library cache. You are right - multiple sessions are waiting for the lock to get released.

Well, I checked with the development team. They migrated from 11.1 to 11.2 recently and started having performance issues. The other difference is that on the older system the DB was a single instance, and now it is a 2-node RAC.

The UPDATE statement updates one row, since they have the primary key attribute in their "where" clause. The users access the database through a web URL and perform the updates. The chances are very slim that multiple users would try to update the same invoice ID. However, the team mentioned that users might close the web session abruptly if they don't see results quickly.

I also checked the explain plan and it is very simple (INDEX UNIQUE SCAN on the primary key).

Since they are on RAC now, could hash partitioning the primary key help?


Tom Kyte

Followup  

April 24, 2012 - 7:31 am UTC

Since they are on RAC now, could hash partitioning the Primary key help?


it sounds like they are going after the same records to me - nothing will help that short of "don't let them go after the same record"



Let me ask you this - are they updating a row they have not already locked? If so, they are almost surely suffering from "lost updates" (google that up if you are not familiar with it, a very classic database issue).

Perhaps they should consider using "select for update NOWAIT" or "select for update WAIT <N>" to attempt to lock and inspect the record before updating it - if the record is locked - they will no longer block and they can inform the user that 'hey, the thing is being worked on by someone else'
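One way that pessimistic-locking pattern looks from a client - a sketch in Python assuming the python-oracledb driver and a hypothetical invoices table; ORA-00054 is the "resource busy" error that NOWAIT raises when someone else holds the row lock:

```python
import oracledb  # python-oracledb client driver

def try_lock_invoice(cur, invoice_id):
    """Attempt to lock the row before updating it.

    Returns True if we now hold the lock, False if another session has it
    (so the app can tell the user "being worked on by someone else").
    """
    try:
        cur.execute(
            "SELECT invoice_id FROM invoices "
            "WHERE invoice_id = :1 FOR UPDATE NOWAIT",
            [invoice_id])
        return cur.fetchone() is not None
    except oracledb.DatabaseError as exc:
        (error,) = exc.args
        if error.code == 54:   # ORA-00054: resource busy, NOWAIT specified
            return False
        raise
```

The caller would then issue the UPDATE only when `try_lock_invoice` returns True, which both avoids indefinite blocking and closes the lost-update window.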



Question on Parsing

August 10, 2012 - 5:13 am UTC

Reviewer: Ankit from India

Hi Tom,

You have mentioned in an earlier followup that:
a hard parse. It is very very very bad.
a soft parse. It is simply very very bad.
the absence of a parse, this is the only good parse going.

Consider an OLTP environment with 70-80 different SQL statements running daily, catering to different phases of business transactions like order processing, order cancellation, etc. Each SQL statement is defined using bind variables to avoid hard parses. One type of business transaction uses some 15 SQL statements over its lifecycle.

In such an environment, won't it be impossible not to soft parse a SQL statement again and again, since the business transaction lifecycle is repeated many times, leading to a repeated soft parse of the associated SQL for each execution?

Hard parses can be removed by using bind variables, but in the above environment, where the SQL statements need to be run again during each transaction lifecycle, what can be done to stop/avoid soft parses?

Please forgive my naiveness, as I do not have any development experience.


Tom Kyte

Followup  

August 17, 2012 - 1:42 pm UTC

if you have a method representing a transaction and you KNOW that method will be invoked over and over again

why would you have that method

open
bind
execute
close

a cursor - when it could

if first_time open end if;
bind
execute


????



or, just move the SQL into plsql stored procedures and we'll do that for you. You'll still have them parsing the call to plsql if they won't keep it open - but if the plsql does five sql statements - you'll have reduced the parsing to 20% of what it used to be!!!

plsql keeps them open for you automagically.

jdbc statement caching can keep them open for you automagically.

developers can do it if they think about it.
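The effect of parse-once/execute-many is easy to see even without a database. A toy Python sketch in which the "database" merely counts how often it is asked to parse (all names here are illustrative):

```python
class FakeDB:
    """Toy stand-in for the server: it only counts parse calls."""
    def __init__(self):
        self.parse_count = 0

    def parse(self, sql):
        self.parse_count += 1
        return sql  # pretend this is an open cursor handle


class StatementCache:
    """Parse each distinct statement once; keep the cursor open and reuse it."""
    def __init__(self, db):
        self.db = db
        self.open_cursors = {}

    def execute(self, sql, *binds):
        cur = self.open_cursors.get(sql)
        if cur is None:                 # first time only: open/parse
            cur = self.db.parse(sql)
            self.open_cursors[sql] = cur
        return cur, binds               # execute the already-open cursor


db = FakeDB()
cache = StatementCache(db)
for i in range(1000):
    cache.execute("UPDATE emp SET sal = :1 WHERE empno = :2", i, i)
print(db.parse_count)  # -> 1 (one parse for a thousand executions)
```

This is exactly what PL/SQL's cursor cache and JDBC statement caching do for you "automagically": the open/parse happens once, and every subsequent call is bind + execute only.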

August 14, 2012 - 3:31 am UTC

Reviewer: Ankit from India

Hi Tom,

Logged in after 4 days to see no followup on this question.

With all due respect, do you think my question is irrelevant or incorrect?

Thanks,
Ankit
Tom Kyte

Followup  

August 17, 2012 - 2:06 pm UTC

with all due respect, sometimes I travel and spend a lot of time on airplanes, or sleep, and things like that.


:)

August 21, 2012 - 1:18 am UTC

Reviewer: Ankit from India

Dear Tom,

I understand your hectic schedule. I just thought that my question was irrelevant and you might have deemed it absurd and chosen not to answer it.

I really appreciate and am thankful for all the help you provide on this site.

Thanks again

Infrastructure for large concurrent users

September 20, 2012 - 7:42 am UTC

Reviewer: Ankit from India

Hi Tom,

I read your article - 'On Connection Pools, Cursor Differentiation, and Optimal Ordering' in Oracle Magazine edition.

The user problem described was due to a large number of concurrent users trying to access an application hosted on a machine with 4 CPU cores; once the number of concurrent users was reduced, performance increased.

Consider a scenario where the Application is a web-based shopping site like Amazon.com and the database is running on Oracle.

-- There will be many users, ~100K, using the application (browsing the product catalogue). This can be considered read-only activity.
-- Some 5K users will actually be doing transactions (buying products from the site) - typical OLTP.

For this type of usage, and to avoid performance issues like latching, won't a lot of CPU cores be required, given the number of concurrent users - in the above case some 400-600 CPU cores?

Is RAC and Exadata the only solution for supporting this kind of application?

Thanks,
Ankit
Tom Kyte

Followup  

September 26, 2012 - 11:08 am UTC

For this type of usage, and to avoid performance issues like latching, won't a lot of CPU cores be required, given the number of concurrent users - in the above case some 400-600 CPU cores?


no, it would not be 400-600.

We can do 10's of thousands of transactions per second on a small 12 core machine (revisit those links). If we do 10's of thousands of transactions *per second* - and you have 100k users (who do some number of transactions per MINUTE at most - if even that)... then think about how many cores you might need.


it is not the size of the user base, it is the rate of transactions. assuming a click every 10 seconds or so, you'd need 10,000 transactions per second, something you can do on a small machine

IF you get your connection pool under control and you don't overwhelm the machine trying to do a gazillion things all at once.

it is simple math at the end, forget user counts, get transaction rates - that is what counts.

size the pool based on your resources (you cannot get blood from a stone, you cannot get an extra transaction per second by exceeding the optimum number of connections - in fact you'll LOSE transactions per second by doing that)
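The "forget user counts, get transaction rates" arithmetic can be written out explicitly. A small Python sketch using the figures from the question (100K browsing users, a click roughly every 10 seconds):

```python
def transactions_per_second(active_users: int, think_time_seconds: float) -> float:
    """Each user submits roughly one transaction per think-time interval."""
    return active_users / think_time_seconds

# 100K users clicking about once every 10 seconds:
print(transactions_per_second(100_000, 10))  # -> 10000.0
```

So the 100K-user catalogue works out to about 10,000 transactions per second - a rate, not a user count - which is what the hardware actually has to be sized for.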

library cache wait

September 28, 2012 - 1:12 pm UTC

Reviewer: Peter Parker from NewYork

Hello Tom

We have several ETL loads running against our DB; it's a usual warehouse setup: truncate, load data, recompile some packages, etc., at night and on weekends, and the next day reports run during business hours. For the last couple of days, the load seems to hang indefinitely, especially when it runs in overlapping business hours (that's when the reports are being fetched from Discoverer).

When we checked with the DBAs, they sent us the sessions that are running (ACTIVE) and causing our load to wait forever:

This is what they sent:

Active session holding the library cache lock:

SQL> select sid,substr(event,1,30),wait_time
  2  from v$session_wait
  3  where sid in (select sid from x$kglpn , v$session
  4  where KGLPNHDL in (select p1raw from v$session_wait
  5  where wait_time=0 and event like 'library cache%')
  6  and KGLPNMOD <> 0
  7  and v$session.saddr=x$kglpn.kglpnuse );

  SID SUBSTR(EVENT,1,30)                                                WAIT_TIME
----- ---------------------------------------------------------------- ----------
   52 SQL*Net message from client                                               6
  190 SQL*Net message from client                                              -1
  292 SQL*Net message from client                                              -1



Session waiting due to above listed:



SQL> select sid Waiter,  p1raw,
  2         substr(rawtohex(p1),1,30) Handle,
  3         substr(rawtohex(p2),1,30) Pin_addr
  4  from gv$session_wait where wait_time=0 and event like 'library cache%';

    WAITER P1RAW            HANDLE                                       PIN_ADDR
---------- ---------------- -------------------------------------------- ----------------------
        11 070000007E48A4E8 C9332D0411045513384D                         C9332D041104394B6111
        53 070000007C7D0030 C9332D04110437071855                         C9332D04110449501405
       213 070000007E48A4E8 C9332D0411045513384D                         C9332D0411043D5A4339
       220 070000007C7D0030 C9332D04110437071855                         C9332D0411045A5A0749
       253 070000007D81D600 C9332D04110448104151                         C9332D041104583839
       255 070000002ABCDD70 C9332D04105A53343D51                         C9332D041104471F4309
       306 070000007E276AD8 C9332D04110453015119                         C9332D0410570B16460D
       340 070000007E276AD8 C9332D04110453015119                         C9332D0411043E300521
       383 070000002ABCDD70 C9332D04105A53343D51                         C9332D04106048424329


My question is:

The sessions which are ACTIVE and shown as holding the locks are running a bunch of SELECT statements. Why would a SELECT statement hold a LIBRARY CACHE LOCK and not let my load go ahead? I'm not getting a satisfactory answer after reading around and checking Metalink; can you please help explain?

Right now we investigate these sessions and have them killed, and the load works - but why?
Tom Kyte

Followup  

September 28, 2012 - 4:52 pm UTC

sorry, but those sessions are waiting for the client to tell them to do something - sql*net message from client means "we are waiting"

how about your DBA's use ADDM and ash/awr instead of trying to query up x$ views?

LIbrary Cache Wait

September 30, 2012 - 3:39 pm UTC

Reviewer: Peter Parker from NewYork


The waiting sessions are the ones listed at the end of the post (not the start).

The ones you are looking at are the holding sessions, which show as ACTIVE in v$session, and their sql_ids lead to SELECT statements.

I'm sure they are causing the library cache waits, because when we kill those sessions (listed at the start of the post), the waiting ones start working.

My question is why SELECTs are causing library cache waits (one guess was that they were being called from procedures/functions which the load is trying to compile). Is there really any other reason?
Tom Kyte

Followup  

October 08, 2012 - 3:12 pm UTC

if they are in sql net message FROM CLIENT - they are not active.

are you sure they are reporting 'active'?

and if the wait time is zero - those other guys are not "stuck", they just started waiting??

i'm really not sure what I'm looking at there.

CPU cores for supporting high transaction rate

October 01, 2012 - 5:51 am UTC

Reviewer: Ankit from India

Hi Tom,

Thanks for the detailed explanation. I have revisited your article, and from it I get the impression that:

Size of connection Pool = 8 * Number of CPU cores

From your article -->

"I see that you had 134 sessions in the database running on a total of 4 CPU cores. Because it is not really possible for 4 CPU cores to allow 134 sessions to be concurrently active, I recommend that you decrease the number of concurrent sessions by reducing the size of your connection pool—radically. Cut your connection pool to something like 32." --> 32 = 8 * 4

"I know that this fewer-does-more suggestion sounds counterintuitive, but I encourage you to watch this narrated Real World Performance video. In this video, you’ll see what happens in a test of a 12-core machine running transactions and decreasing the size of a connection pool from an unreasonable number (in the thousands) to a reasonable number: 96. At 96 connections on this particular machine, the machine was able to do 150 percent the number of transactions per second and took the response time for these transactions from ~100 milliseconds down to ~5 milliseconds." --> 96 = 8 * 12

Coming back to the scenario where we have 10,000 transactions/second: the size of the connection pool could be 1000 ( not sure if there is any factor for relationship between Transaction/second and Connection Pool size), so the number of CPU cores would be 125 ( 1000 = 8 * 125)

125 CPU Cores seem too high to me considering that you mentioned that 10,000 Transactions/second can be run on a 12 core machine.

Can you please point out the gaps in the above analysis.

Thanks,
Ankit

Tom Kyte

Followup  

October 08, 2012 - 3:19 pm UTC

In reality - the size of connection pool = N * number of cores

where N is a very personal real number greater than zero and almost surely less than 16.

Yes, N could be less than one, but probably shouldn't be above 16, normally - somewhere between 2 and 8 would be 'best'.


( not sure if there is any factor for relationship between Transaction/second and Connection Pool size),

there is, the MORE connections you have over the sweet spot (typically around 2-8*cores), the LESS transactions per second you will achieve.
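That rule of thumb can be written out as a minimal sketch. The function name and the 2x/8x defaults are just illustrations of the "sweet spot" band described above, not anything Oracle-defined:

```python
def pool_size_band(cores: int, low_mult: int = 2, high_mult: int = 8):
    """Suggested connection-pool size band for a machine: roughly
    2x to 8x the number of CPU cores, per the rule of thumb above."""
    return low_mult * cores, high_mult * cores

# the 12-core machine from the video: the top of the band is 96 connections
low, high = pool_size_band(12)
```

Pushing the pool above the top of this band is what costs throughput, per the followup above.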

did you watch the video? It shows you everything you need to know.



You don't need 125 cores to do 10k transactions per second as long as your transactions are short. do the math.

If your transaction takes 5ms, a core could in theory do 200 transactions in a second (in a perfect world). So, 50 cores could do your 10,000 transactions per second.

If your transaction takes about 1ms, then a core could do 1,000 transactions a second (in a perfect world) and you'd need 10 cores.

If that 1ms is partially spent waiting for something, then a single core might be able to do more than 1,000 transactions in a second and you'd need less cores.
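The arithmetic in the followup can be sketched in a few lines. It is idealized in the same way the followup is: it assumes a core does nothing but run transactions back to back, with no waits:

```python
def cores_needed(target_tps: float, txn_seconds: float) -> float:
    """In a perfect world a single core completes 1/txn_seconds transactions
    per second, so the cores required for a target rate is simply
    target_tps * txn_seconds."""
    return target_tps * txn_seconds

# 10,000 tps with 5 ms transactions -> 50 cores; with 1 ms -> 10 cores
```

If part of the transaction time is spent waiting rather than on CPU, fewer cores are needed than this sketch reports.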


Thanks Tom !

October 25, 2012 - 6:03 am UTC

Reviewer: Ankit from India

Hi Tom,

I finally watched the video today; I now understand the logic.

Thanks a lot !

Oracle text not using binds

October 26, 2012 - 1:30 pm UTC

Reviewer: Bertran from Buenos Aires, Argentina

Hi Tom,

Searching the net I found this old thread but don't see any answer from you except that "it doesn't happen to you".

I see in our RAC ( 11.2.0.3.0 ) a lot of blocks from Oracle Text like :

declare
cost sys.ODCICost := sys.ODCICost(NULL, NULL, NULL, NULL);
arg0 VARCHAR2(1) := null;

begin
:1 := "CTXSYS"."TEXTOPTSTATS".ODCIStatsIndexCost(
sys.ODCIINDEXINFO('GIO_OWNER',
'CX_PROV#PROVIDER_NAME',
sys.ODCICOLINFOLIST(sys.ODCICOLINFO('GIO_OWNER', 'PROVIDER', '"PROVIDER_NAME"', 'VARCHAR2', NULL, NULL, 0, 0, 0, 0)),
NULL,
0,
0, 0, 0),
0.00001403,
cost,
sys.ODCIQUERYINFO(2, NULL, sys.ODCICOMPQUERYINFO(NULL, NULL)),
sys.ODCIPREDINFO('CTXSYS',
'CONTAINS',
NULL,
0),
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(5, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'PROVIDER', 'GIO_OWNER', '"PROVIDER_NAME"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
:6,
NULL
, :7,
sys.ODCIENV(:8,:9,:10,:11));
if cost.CPUCost IS NULL then
:2 := -1.0;
else
:2 := cost.CPUCost;
end if;
if cost.IOCost IS NULL then
:3 := -1.0;
else
:3 := cost.IOCost;
end if;
if cost.NetworkCost IS NULL then
:4 := -1.0;
else
:4 := cost.NetworkCost;
end if;
:5 := cost.IndexCostInfo;
exception
when others then
raise;
end;


Do you know why Oracle Text is not using binds ? Looking at the blocks in memory the only difference seems to be the parameter "sel" in "CTXSYS"."TEXTOPTSTATS".ODCIStatsIndexCost ( 0.00001403 in the example ).

Thanks in advance,
Bertran.
Tom Kyte

Followup  

October 28, 2012 - 11:15 pm UTC

that is using binds, :1, :2, ... - they are binds.

how many of these blocks do you have that only differ by "SEL" - that could be an oversight (in which case you'd have to open an SR with support)

but this statement is definitely using binds.

Oracle text not using binds... more info

October 29, 2012 - 7:23 am UTC

Reviewer: Bertran Saragusti from Buenos Aires, Argentina

Tom,

I do know it is using "some" binds. What I'm concerned about are the ones not being bound.

We have this "call distribution" :

select inst_id,count(1) num_entries, sum(sharable_mem) sum_shar_mem
from gv$sqlarea
where sql_text like '%ODCIStatsIndexCost%'
and command_type=47
group by inst_id
order by 1;

INST_ID NUM_ENTRIES SUM_SHAR_MEM
1 570 37357952
2 348 25585205
3 223 14671841
4 345 20356261

Maybe this is not a big issue to make a fuss about, but I was just curious why Oracle does not bind all the parameters it uses in one of its own products.

Thanks,
Bertran.
Tom Kyte

Followup  

October 29, 2012 - 7:39 am UTC

how many different copies are you seeing?

sometimes we (we being the collective we - all of us, you - me - them) do not bind something *on purpose* - we want a different (or we want the ability to have a different) plan because of the different value.

so - what we need to determine is - is this a problem, is this flooding the shared pool - or is it the desired side effect of a programmer purposely NOT BINDING a specific attribute because it leads to a small set of really good plans.


for example, if I had a skewed column that had say 50 or less distinct values - I might "on purpose" not bind against that column in order to get the best plan.

you said "it is not using binds" - but it is

you said "a lot" - but I have no idea what "a lot" is

please quantify

concurrency

April 08, 2013 - 3:32 pm UTC

Reviewer: A Reader

Hi Tom,


Library cache contention - Concurrency
DB version : 10.2.0.4

Summary:

issue :
a) database load (CPU) goes high - 0% idle, and load on the DB host is as high as we have seen - when the number of hits from the application to the DB increases.
b) we see the DB unable to accept new connections
c) no issue with the DB listener

Observations:
a) AWR is scary
b) Sessions waiting on latches are too, too scary
c) Issues are seen when there is more load on the application, hence more active sessions.

Root cause - as per our analysis
a) Excessive parsing
b) A special SQL_ID running on behalf of someone... 2hbdag3dfzj9u -- we have seen people hunting for the SQL_TEXT of this SQL_ID;
actually this is a cursor, not SQL - it refers to table_1_ff_159_0_0_0
c) we have seen excessive calls to one application procedure ( pkg_sess_role ) which in turn does many things and does 'set role <role_name>' many times.
d) DB-level auditing is enabled and these 'set role' calls are logged into AUD$. AUD$ has a CLOB column, and we suspect table_1_ff_159_0_0_0 is a child cursor related to
this AUD$...

We know the concurrency issue is (should be) resolved at the application level, but we see the role of AUD$ and child cursors for
2hbdag3dfzj9u - table_1_ff_159_0_0_0
ady0t9s6h5x5n - table_1_ff_154_0_0_0




Load Profile
~~~~~~~~~~~~                            Per Second       Per Transaction
                                   ---------------       ---------------
                  Redo size:            476,983.24             57,112.89
              Logical reads:             72,541.85              8,686.00
              Block changes:              2,409.76                288.54
             Physical reads:                270.08                 32.34
            Physical writes:                127.94                 15.32
                 User calls:                854.40                102.30
                     Parses:                557.59                 66.76
                Hard parses:                  1.17                  0.14
                      Sorts:                174.12                 20.85
                     Logons:                  0.30                  0.04
                   Executes:              4,055.11                485.55
               Transactions:                  8.35

Top 5 Timed Events                                         Avg %Total
~~~~~~~~~~~~~~~~~~                                        wait   Call
Event                                 Waits    Time (s)   (ms)   Time Wait Class
------------------------------ ------------ ----------- ------ ------ ----------
latch: library cache                 47,588     180,599   3795   41.9 Concurrenc
cursor: pin S                    50,096,103      63,829      1   14.8      Other
latch: library cache lock            50,437      28,447    564    6.6 Concurrenc
CPU time                                         27,485           6.4
latch: cache buffers chains         161,600      16,543    102    3.8 Concurrenc


SQL ordered by Elapsed Time  

  Elapsed      CPU                  Elap per  % Total
  Time (s)   Time (s)  Executions   Exec (s)  DB Time    SQL Id
---------- ---------- ------------ ---------- ------- -------------
   183,760      5,469          732      251.0    42.7 ad4t1g35gz0pd
BEGIN pkg_sess_role.proc1( :1, :2, :3, :4, :5, :6 ,:7); END;


-- Tanel's Latchprof... to get who is messing up


SQL> get checklatch
  1  set linesize 900
  2  spool latch1.lst
  3  @latchprof name,sqlid % "library cache" 10000
  4* spool off
SQL> get latchprof
  1  DEF _lhp_what="&1"
  2  DEF _lhp_sid="&2"
  3  DEF _lhp_name="&3"
  4  DEF _lhp_samples="&4"
  5  COL name FOR A35 TRUNCATE
  6  COL latchprof_total_ms HEAD "Held ms" FOR 999999.999
  7  COL latchprof_pct_total_samples head "Held %" format 999.99
  8  COL latchprof_avg_ms HEAD "Avg hold ms" FOR 999.999
  9  COL dist_samples HEAD Gets
 10  COL total_samples HEAD Held
 11  BREAK ON lhp_name SKIP 1
 12  DEF _IF_ORA_10_OR_HIGHER="--"
 13  PROMPT
 14  PROMPT
 15  COL latchprof_oraversion NOPRINT NEW_VALUE _IF_ORA_10_OR_HIGHER
 16  SET TERMOUT OFF
 17  SELECT DECODE(SUBSTR(BANNER, INSTR(BANNER, 'Release ')+8,1), 1, '', '--') latchprof_oraversion
 18  FROM v$version WHERE ROWNUM=1;
 19  SET TERMOUT ON
 20  WITH
 21      t1 AS (SELECT hsecs FROM v$timer),
 22      samples AS (
 23          SELECT /*+ ORDERED USE_NL(l) USE_NL(s) USE_NL(l.gv$latchholder.x$ksuprlat) NO_TRANSFORM_DISTINCT_AGG  */
 24              &_lhp_what
 25            &_IF_ORA_10_OR_HIGHER , COUNT(DISTINCT gets)      dist_samples
 26            , COUNT(*)                    total_samples
 27            , COUNT(*) / &_lhp_samples    total_samples_pct
 28          FROM
 29              (SELECT /*+ NO_MERGE */ 1 FROM DUAL CONNECT BY LEVEL <= &_lhp_samples) s,
 30              v$latchholder l,
 31              (SELECT
 32                      sid                                     indx
 33                    , sql_hash_value                          sqlhash
 34                    , sql_address                             sqladdr
 35                    &_IF_ORA_10_OR_HIGHER , sql_child_number  sqlchild
 36                    &_IF_ORA_10_OR_HIGHER , sql_id            sqlid
 37               FROM v$session) s
 38          WHERE
 39              l.sid LIKE '&_lhp_sid'
 40          AND (LOWER(l.name) LIKE LOWER('%&_lhp_name%') OR LOWER(RAWTOHEX(l.laddr)) LIKE LOWER('%&_lhp_name%'))
 41          AND l.sid = s.indx
 42          GROUP BY
 43              &_lhp_what
 44          ORDER BY
 45              total_samples DESC
 46      ),
 47      t2 AS (SELECT hsecs FROM v$timer)
 48  SELECT /*+ ORDERED */
 49      &_lhp_what
 50    , s.total_samples
 51    &_IF_ORA_10_OR_HIGHER , s.dist_samples
 52    --  , s.total_samples_pct
 53    , s.total_samples / &_lhp_samples * 100 latchprof_pct_total_samples
 54    , (t2.hsecs - t1.hsecs) * 10 * s.total_samples / &_lhp_samples latchprof_total_ms
 55    --   s.dist_events,
 56    &_IF_ORA_10_OR_HIGHER , (t2.hsecs - t1.hsecs) * 10 * s.total_samples / dist_samples / &_lhp_samples latchprof_avg_ms
 57    FROM
 58      t1,
 59      samples s,
 60      t2
 61    -- WHERE ROWNUM <= 30
 62  /
 63* COL name CLEAR

18:11:17 SQL> @checklatch
NAME                                SQLID               Held       Gets  Held %     Held ms Avg hold ms
----------------------------------- ------------- ---------- ---------- ------- ----------- -----------
library cache                       ad4t1g35gz0pd       6345         31   63.45   78665.310    ########
library cache lock                  ady0t9s6h5x5n       1135       1127   11.35   14416.770      12.792------<< pseudo cursor
library cache                       ady0t9s6h5x5n        583        360    5.83    7405.266      20.570------<< pseudo cursor
library cache                                            273         55    2.73    3467.646      63.048----?
library cache pin                                         26         25     .26     330.252      13.210
library cache                       fcn2tzk76bpqg         25         12     .25     317.550      26.463
 

18:16:20 SQL> select sql_text from v$sql  where sql_id='ad4t1g35gz0pd';

SQL_TEXT
---------------------------------------------------------------------------------------------------------
BEGIN pkg_sess_role.proc1( :1, :2, :3, :4, :5, :6 ,:7); END;

 

SQL> SELECT *
  2    FROM (  SELECT datatype_string, name, position
  3              FROM dba_hist_sqlbind
  4             WHERE sql_id = 'ad4t1g35gz0pd'
  5          ORDER BY snap_id DESC, position ASC)
  6   WHERE ROWNUM <= 7;

DATATYPE_STRING NAME                             POSITION
--------------- ------------------------------ ----------
VARCHAR2(32512) :1                                      1
NESTED TABLE    :2                                      2
RAW(32512)      :3                                      3
VARCHAR2(32512) :4                                      4
VARCHAR2(32512) :5                                      5
VARCHAR2(32512) :6                                      6
VARCHAR2(32512) :7                                      7



NESTED TABLE is causing this…


And the other SQL ID


SQL> select distinct sql_text, count(*)   from v$open_cursor where sql_id = 'ady0t9s6h5x5n' group by sql_text;

SQL_TEXT                                                       COUNT(*)
------------------------------------------------------------ ----------
table_1_ff_154_0_0_0                                                307


--- Pseudo cursors

What it is referring to today…

SQL> SELECT owner,
  2         object_name,
  3         object_id,
  4         object_type
  5    FROM dba_objects
  6   WHERE object_id = TO_NUMBER ('154', 'xxx');

OWNER                          OBJECT_NAME                     OBJECT_ID OBJECT_TYPE
------------------------------ ------------------------------ ---------- -------------------
SYS                            KOTTB$                                340 TABLE
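The hex decode done with TO_NUMBER above can be sketched as a small helper. The table_<n>_ff_<hex object_id>_... name layout is inferred from the pseudo-cursor names shown in this thread, not from any documented API:

```python
def pseudo_cursor_object_id(cursor_name: str) -> int:
    """Pull the object_id out of a pseudo-cursor name such as
    'table_1_ff_154_0_0_0': the field after 'ff' is the object_id in hex,
    so 154 (hex) -> 340 decimal, which dba_objects maps to SYS.KOTTB$."""
    hex_id = cursor_name.split("_")[3]
    return int(hex_id, 16)
```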



Further 

PACKAGE BODY pkg_sess_role AS
…
…

PROCEDURE proc1(
               I_USERDN       IN      VARCHAR2,
               I_ROLELIST     IN      T_SESS_ROLELIST,
               I_LOCALE       IN      LONG RAW,
               I_TIMEZONE     IN      VARCHAR2,
               I_ERRORNUMBERSACTIVATED IN VARCHAR2,
               I_USERASSOTICKET IN VARCHAR2,        




SQL> SELECT u.NAME user_name, o.NAME object_name,
  2  CASE
  3  WHEN o.type# = 13
  4  THEN 'TYPE'
  5  WHEN o.type# = 14
  6  THEN 'TYPE BODY'
  7  ELSE 'UNKNOWN'
  8  END AS obj_type
  9  FROM SYS.kottd$ kd, SYS.obj$ o, SYS.user$ u
 10  WHERE o.oid$ = kd.sys_nc_oid$
 11  AND u.user# = o.owner#
 12  AND o.NAME = 'T_SESS_ROLELIST';

USER_NAME                      OBJECT_NAME                    OBJ_TYPE
------------------------------ ------------------------------ --------------------------------
USERA                         T_SESS_ROLELIST             TYPE




      
ady0t9s6h5x5n -- pseudo cursor...

Further,

1. we asked Oracle ( I have raised SR 3-7011120501 ) if the pseudo-cursor contention is a bug, and they confirmed it is a bug.
bug 7648406 also exists for 10.2.0.4
2. And to add to it:
open_cursors = 32000 !
and
session_cached_cursors = 12000 !
these two parameters are prime candidates to revisit.



Questions
a) To what extent would this bug fix resolve the issue? I feel it should, as nested table data types in the package call and CLOB columns in AUD$ are involved.
b) Or is the actual bug in the code... the PL/SQL code? I am going to re-read your Effective Oracle by Design, chapter on effective PL/SQL, tonight.

PROCEDURE proc1(
               I_USERDN       IN      VARCHAR2,
               I_ROLELIST     IN      T_SESS_ROLELIST,----- nested table type... leading  pseudo cursors..
               I_LOCALE       IN      LONG RAW,
               I_TIMEZONE     IN      VARCHAR2,
               I_ERRORNUMBERSACTIVATED IN VARCHAR2,
               I_USERASSOTICKET IN VARCHAR2,        
               I_ASMSESSIONTICKET IN VARCHAR2) IS
        FN CONSTANT VARCHAR2(30)  := 'proc1';
    BEGIN
        IF pkg_log.HASACTIVELOGGER(GRP,PKG,FN,pkg_log.LEVEL_ENTRY) THEN
          DECLARE 
            V_BUFFER VARCHAR2(32512) := pkg_log.FORMATBEGINTABLE('i_rolelist','T_SESS_ROLELIST');
          BEGIN
            IF I_ROLELIST IS NOT NULL THEN ----code from this line onwards looks disaster to me....
              FOR I IN 1..I_ROLELIST.COUNT LOOP
                V_BUFFER := pkg_log.CONCAT(V_BUFFER,
                  pkg_log.FORMATBEGINOBJECT('['||I||']','o_session_rolelist'), --- specifically this one... causing it to die for library cache latch?
                  pkg_log.FORMATSTRINGPARAMETER('role',I_ROLELIST(I).ROLE),
                  pkg_log.FORMATENDOBJECT);
              END LOOP;
            END IF;
            V_BUFFER := pkg_log.CONCAT(V_BUFFER, pkg_log.FORMATENDTABLE);
          
            pkg_log.ENTRY(GRP,PKG,FN,pkg_log.CONCAT(
              pkg_log.FORMATSTRINGPARAMETER('i_userdn',I_USERDN),
              V_BUFFER,
              pkg_log.FORMATSTRINGPARAMETER('i_locale','[LONG RAW]'),
              pkg_log.FORMATSTRINGPARAMETER('i_timezone',I_TIMEZONE),
              pkg_log.FORMATSTRINGPARAMETER('i_errornumbersactivated',I_ERRORNUMBERSACTIVATED),
              pkg_log.FORMATSTRINGPARAMETER('i_USERAssoticket',I_USERASSOTICKET),
              pkg_log.FORMATSTRINGPARAMETER('i_asmsessionticket',I_ASMSESSIONTICKET)
            ));
          END;
        END IF;
        
        SETROLES(
            pkg_sessioncontext.proc1(I_USERDN, I_ROLELIST, I_LOCALE, I_TIMEZONE,I_ERRORNUMBERSACTIVATED,I_USERASSOTICKET,I_ASMSESSIONTICKET));
        pkg_log.EXIT(GRP,PKG,FN,NULL);
    END; 

is it because the FOR loop above causes excessive parsing?
 
 
thanks 
 

Tom Kyte

Followup  

April 22, 2013 - 2:26 pm UTC

".... AWR is scary ... "

?????????????????????????????? I have no idea how to even parse that statement.


before I look at this, how many connections do you have (count(*) from v$session) and how many cores do you have ( not sockets, not threads - actual cores)

..concurrency

April 08, 2013 - 3:43 pm UTC

Reviewer: A Reader

Tom
AWR snip is taken from 1 hr AWR
no of cpu cores 32

..concurrency contd

May 04, 2013 - 10:42 pm UTC

Reviewer: A Reader

Thanks Tom

Please find the details below.


1. how many connections do you have (count(*) from v$session) ?

we have a total of 3 databases running on the same host.
1.1 count from v$session for the database in discussion above:
during the normal period ~ 800
during the problem period ( when we have concurrency waits ) ~ 1200
1.2 counts from v$session for the other 2 databases running on the same host are ~1800 and ~1300


2. how many cores do you have ( not sockets, not threads - actual cores)

$ psrinfo -p
16

16 cores.


regards


Tom Kyte

Followup  

May 06, 2013 - 7:21 pm UTC

this video was made for you:

http://tinyurl.com/RWP-OLTP-CONNECTIONS


are you KIDDING!!!! 16 poor little cores. 16 teeny tiny little cores.


and 3,000 to 5,000 connections!!!!!

are you *kidding*?

who thought that was a good idea????

..concurrency contd

May 07, 2013 - 3:29 am UTC

Reviewer: A Reader

You are right Tom,
:)
it is a poor design issue, and it is being worked out.
:(

In the meantime, the need of the hour is how to live with it until it is fixed/re-designed.

Calls to the package have added to the existing problem,
i.e. whenever there are more concurrent calls to the package, concurrency-related wait events ( library cache latch ) are the top ones.

-- package.procedure details with my observations/comments

PROCEDURE proc1(
I_USERDN IN VARCHAR2,
I_ROLELIST IN T_SESS_ROLELIST,----- nested table type... leading pseudo
cursors..
I_LOCALE IN LONG RAW,
I_TIMEZONE IN VARCHAR2,
I_ERRORNUMBERSACTIVATED IN VARCHAR2,
I_USERASSOTICKET IN VARCHAR2,
I_ASMSESSIONTICKET IN VARCHAR2) IS
FN CONSTANT VARCHAR2(30) := 'proc1';
BEGIN
IF pkg_log.HASACTIVELOGGER(GRP,PKG,FN,pkg_log.LEVEL_ENTRY) THEN
DECLARE
V_BUFFER VARCHAR2(32512) := pkg_log.FORMATBEGINTABLE('i_rolelist','T_SESS_ROLELIST');
BEGIN
IF I_ROLELIST IS NOT NULL THEN ----code from this line onwards looks disaster to me....
FOR I IN 1..I_ROLELIST.COUNT LOOP
V_BUFFER := pkg_log.CONCAT(V_BUFFER,
pkg_log.FORMATBEGINOBJECT('['||I||']','o_session_rolelist'), --- specifically
this one... causing it to die for library cache latch?
pkg_log.FORMATSTRINGPARAMETER('role',I_ROLELIST(I).ROLE),
pkg_log.FORMATENDOBJECT);
END LOOP;
END IF;
V_BUFFER := pkg_log.CONCAT(V_BUFFER, pkg_log.FORMATENDTABLE);

pkg_log.ENTRY(GRP,PKG,FN,pkg_log.CONCAT(
pkg_log.FORMATSTRINGPARAMETER('i_userdn',I_USERDN),
V_BUFFER,
pkg_log.FORMATSTRINGPARAMETER('i_locale','[LONG RAW]'),
pkg_log.FORMATSTRINGPARAMETER('i_timezone',I_TIMEZONE),
pkg_log.FORMATSTRINGPARAMETER('i_errornumbersactivated',I_ERRORNUMBERSACTIVATED),
pkg_log.FORMATSTRINGPARAMETER('i_USERAssoticket',I_USERASSOTICKET),
pkg_log.FORMATSTRINGPARAMETER('i_asmsessionticket',I_ASMSESSIONTICKET)
));
END;
END IF;

SETROLES(
pkg_sessioncontext.proc1(I_USERDN, I_ROLELIST, I_LOCALE,
I_TIMEZONE,I_ERRORNUMBERSACTIVATED,I_USERASSOTICKET,I_ASMSESSIONTICKET));
pkg_log.EXIT(GRP,PKG,FN,NULL);
END;



Questions
a) your inputs on the above procedure code in view of the library cache latch issue ( with 1 or 2 calls per minute to the above package we don't have issues, but if the number of calls goes beyond 20 we start seeing concurrency-related issues, worsened by
pseudo-cursor bug 7648406 - Child cursors not shared for "table_..." cursors (that show as "SQL Text Not Available") when NLS_LENGTH_SEMANTICS = CHAR
)

b) whether not binding I in loop below is the issue?
FOR I IN 1..I_ROLELIST.COUNT LOOP

c) We are unable to get the bind values for the given procedure call:
select * from dba_hist_sqlbind where sql_id=<sqlId>;
the WAS_CAPTURED column is NO in the output.

regards


Tom Kyte

Followup  

May 07, 2013 - 2:54 pm UTC

cut back the size of your connection pool. no design required, just common sense and math (divide number of cores into sessions, see big number, know that is bad)

a) I cannot comment on a snippet of code that calls a gazillion other things, without knowledge of what is trying to be accomplished (what the goal is), why the code even exists. (and you cannot really supply all of that here.... way too much)


I don't know how a table_.... cursor would be impacting this, it doesn't seem to be doing any lob.

b) I don't see any dynamic sql, hence you are binding everywhere possible since static sql in plsql can only bind inputs.

c) you don't show how it was called - and we don't capture all or even most binds in there. and that is a snapshot of v$sql_bind_capture, which only captures:

http://docs.oracle.com/cd/B19306_01/server.102/b14237/dynviews_2114.htm

... One of the bind values used for the bind variable during a past execution of its associated SQL statement. Bind values are not always captured for this view. Bind values are displayed by this view only when the type of the bind variable is simple (this excludes LONG, LOB, and ADT datatypes) and when the bind variable is used in the WHERE or HAVING clauses of the SQL statement. ...


..concurrency contd

May 14, 2013 - 5:07 am UTC

Reviewer: A Reader

Hello Tom,
Thanks for your time.

regarding your question
 
............I don't know how a table_.... cursor would be impacting this, it doesn't seem to be doing any lob. 


In our scenario the table_... cursor is due to the NESTED TABLE data type.

details follows.


SQL_ID for this package.procedure  ( pkg_sess_role.proc1 )  in discussion is - ad4t1g35gz0pd





18:16:20 SQL> select sql_text from v$sql  where sql_id='ad4t1g35gz0pd';

SQL_TEXT
---------------------------------------------------------------------------------------------------------
BEGIN pkg_sess_role.proc1( :1, :2, :3, :4, :5, :6 ,:7); END;

 

SQL> SELECT *
  2    FROM (  SELECT datatype_string, name, position
  3              FROM dba_hist_sqlbind
  4             WHERE sql_id = 'ad4t1g35gz0pd'
  5          ORDER BY snap_id DESC, position ASC)
  6   WHERE ROWNUM <= 7;

DATATYPE_STRING NAME                             POSITION
--------------- ------------------------------ ----------
VARCHAR2(32512) :1                                      1
NESTED TABLE    :2                                      2
RAW(32512)      :3                                      3
VARCHAR2(32512) :4                                      4
VARCHAR2(32512) :5                                      5
VARCHAR2(32512) :6                                      6
VARCHAR2(32512) :7                                      7



NESTED TABLE is causing this…


other SQL ID


SQL> select distinct sql_text, count(*)   from v$open_cursor where sql_id = 'ady0t9s6h5x5n' group 
by sql_text;

SQL_TEXT                                                       COUNT(*)
------------------------------------------------------------ ----------
table_1_ff_154_0_0_0                                                307


--- Pseudo cursors

What it is referring to today…

SQL> SELECT owner,
  2         object_name,
  3         object_id,
  4         object_type
  5    FROM dba_objects
  6   WHERE object_id = TO_NUMBER ('154', 'xxx');

OWNER                          OBJECT_NAME                     OBJECT_ID OBJECT_TYPE
------------------------------ ------------------------------ ---------- -------------------
SYS                            KOTTB$                                340 TABLE



Further 

PACKAGE BODY pkg_sess_role AS
…
…

PROCEDURE proc1(
               I_USERDN       IN      VARCHAR2,
               I_ROLELIST     IN      T_SESS_ROLELIST,  -----<<<<<<<<<<<<<< Nested Table DATA TYPE
               I_LOCALE       IN      LONG RAW,
               I_TIMEZONE     IN      VARCHAR2,
               I_ERRORNUMBERSACTIVATED IN VARCHAR2,
               I_USERASSOTICKET IN VARCHAR2,        




SQL> SELECT u.NAME user_name, o.NAME object_name,
  2  CASE
  3  WHEN o.type# = 13
  4  THEN 'TYPE'
  5  WHEN o.type# = 14
  6  THEN 'TYPE BODY'
  7  ELSE 'UNKNOWN'
  8  END AS obj_type
  9  FROM SYS.kottd$ kd, SYS.obj$ o, SYS.user$ u
 10  WHERE o.oid$ = kd.sys_nc_oid$
 11  AND u.user# = o.owner#
 12  AND o.NAME = 'T_SESS_ROLELIST';

USER_NAME                      OBJECT_NAME                    OBJ_TYPE
------------------------------ ------------------------------ --------------------------------
USERA                         T_SESS_ROLELIST                 TYPE



               
ady0t9s6h5x5n -- pseudo cursor...



bug 7648406 information 
Abstract : Child cursors not shared for "table_..." cursors (that show as "SQL Text Not Available") when NLS_LENGTH_SEMANTICS = CHAR

we have 

13:48:14 SQL> show parameter nls_length_

NAME_COL_PLUS_SHOW_PARAM       TYPE                             VALUE_COL_PLUS_SHOW_PARAM
------------------------------ -------------------------------- ------------------------------
nls_length_semantics           string                           CHAR
 

and DB version 10.2.0.4

13:48:17 SQL > select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Solaris: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production





bottom line - using the NESTED TABLE data type also causes a table_.... cursor [ the CLOB data type also does this ]


   Here are the stats, during the problem window:
   at one point in time t1
     100 concurrent calls to package pkg_sess_role.proc1
     300 pseudo-cursor calls.
     problem persists
   later at some point in time t2
     10 concurrent calls to package pkg_sess_role.proc1 ( concurrent calls reduced due to varying application load )
     300 pseudo-cursor calls.
     problem persists
   later at some point in time t3
     2 concurrent calls to package pkg_sess_role.proc1 ( concurrent calls reduced due to varying application load )
     200 pseudo-cursor calls.
     problem persists
   later at some point in time t4
     0 concurrent calls to package pkg_sess_role.proc1 ( concurrent calls reduced to zero due to varying application load )
     200 pseudo-cursor calls.
     problem persists
   later at some point in time t5
     0 concurrent calls to package pkg_sess_role.proc1
     3 pseudo-cursor calls.
     problem GONE. DB load came down. everything ok



question
a) as we see huge library cache contention against the package ( pkg_sess_role.proc1 ), is the contention contributed mainly by the table_... cursors?
b) if (a) is TRUE, then the non-sharing of pseudo cursors alone is the cause of the contention here - we need not fix anything in the package from a parsing point of view, and just the fix for bug 7648406 would help?
c) this is a question similar to (a): would library cache latch waits accumulate when there are calls to a table_... cursor, if such pseudo-cursor calls are made on behalf of the package ( pkg_sess_role.proc1 in our case )?



 

Tom Kyte

Followup  

May 14, 2013 - 12:27 pm UTC

16 cores

100 things trying to be concurrently active

do the math


the cause of your contention is 100 things trying to use 16 resources. Say a latch holder gets kicked off the CPU holding a latch while the other 99 try to get it... think about what happens.


back off the size of your connection pool, this is not a design detail, this is a configuration change.


I can instantly cut your waits by 50%, cut your connection pool in half. You want the waits to go away? have a reasonable number of concurrent sessions on your 16 core machine.

you have more active sessions that are dying for cpu than that machine can handle. At least benchmark this in your test environment...

remove_constants simpler with regexp

May 30, 2013 - 10:04 pm UTC

Reviewer: John-Paul Busick from USA

The reason for using bind variables has finally been nailed home after trying this function on our test DB and finding 3,749 different sql_texts for the exact same statement - all because the code was inserting variables as literals into the SQL before executing it to get the desired result.

It took me a few minutes to understand what the remove_constants function was doing with each character in the loop, but once I did, I thought it might be easier to read, and more extensible, using regexp_replace like this:

CREATE OR REPLACE FUNCTION remove_constants (p_query IN VARCHAR2)
RETURN VARCHAR2
AS
l_query LONG;
BEGIN
--this regexp pattern matches a single quote, then all non-single-quote
-- characters up to the closing single quote, and replaces the literal
-- with the uniform hash symbol
l_query := REGEXP_REPLACE (p_query, '[''][^'']+['']', '''#''');

--this replaces any numeric value not immediately preceded by a colon
-- with the at symbol; the preceding character is captured and kept via \1
-- so it is not consumed by the replacement (this may change numbers inside
-- column names and such, but since we are just trying to eliminate
-- differences due to unbound literals, this still gives us the count we
-- want for similar sql_text values that are not binding correctly)
l_query := REGEXP_REPLACE (l_query, '([^:0-9])[0-9]+', '\1@');

--this consolidates whitespace by collapsing each run of spaces into
-- a single space
l_query := REGEXP_REPLACE (l_query, '[ ]+', ' ');

RETURN UPPER (l_query);
END;
/
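For quick experiments outside the database, the same normalization can be sketched in Python. This is a hypothetical port of the approach above, nothing Oracle-supplied; note the number pattern captures the character before each digit run so that character is not consumed by the replacement, and numbers preceded by a colon (bind placeholders like :1) are left alone:

```python
import re

def remove_constants(query: str) -> str:
    """Normalize a SQL text so statements differing only in literals
    compare equal: collapse string literals to '#', unbound numbers
    to '@', and whitespace runs to a single space."""
    # replace quoted string literals with a uniform '#'
    q = re.sub(r"'[^']+'", "'#'", query)
    # replace numbers not preceded by a colon, keeping the captured
    # preceding character so it is not swallowed
    q = re.sub(r"([^:0-9])[0-9]+", r"\1@", q)
    # collapse runs of whitespace
    q = re.sub(r"\s+", " ", q)
    return q.upper()
```

Two statements that differ only in literal values then normalize to the same text, which is exactly what makes the GROUP BY / COUNT(*) query useful for spotting missing binds.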

remove_constants query without function

June 05, 2013 - 2:05 pm UTC

Reviewer: John-Paul Busick from USA

I didn't have access to create a function in the database I was trying to help tune, so I tried the regexp_replace directly in a query instead of in a function, and it worked out for me. In case anyone else is in a similar boat, here is the statement:

SELECT REGEXP_REPLACE (
REGEXP_REPLACE (
REGEXP_REPLACE (sql_text, '[''][^'']+['']', '''#''')
, '([^:0-9])[0-9]+', '\1@'),
'[ ]+', ' '),
COUNT (*)
FROM v$sqlarea
GROUP BY REGEXP_REPLACE (
REGEXP_REPLACE (
REGEXP_REPLACE (sql_text, '[''][^'']+['']', '''#''')
, '([^:0-9])[0-9]+', '\1@'),
'[ ]+', ' ')
HAVING COUNT (*) > 100
ORDER BY 2 DESC;