Developers sent me a link... AGAINST BINDING!
Alvin, June 05, 2003 - 1:27 am UTC
http://www.oreilly.com/catalog/jorajdbc/chapter/ch19.html
I have a difficult time convincing them to use prepared statements (PS) in Java, and in response I get the following.....
================================
STATSPACK report for
DB Name DB Id Instance Inst Num Release OPS Host
------------ ----------- ------------ -------- ----------- --- ------------
xxxx xxxxxxxxxxx 1 8.1.7.0.0 NO dbname
Snap Id Snap Time Sessions
------- ------------------ --------
Begin Snap: 31 05-Jun-03 03:02:19 139
End Snap: 32 05-Jun-03 04:09:33 139
Elapsed: 67.23 (mins)
Cache Sizes
~~~~~~~~~~~
db_block_buffers: 87500 log_buffer: 163840
db_block_size: 8192 shared_pool_size: 512000000
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 8,970.49 2,589.41
Logical reads: 6,458.52 1,864.31
Block changes: 47.08 13.59
Physical reads: 68.60 19.80
Physical writes: 27.95 8.07
User calls: 301.12 86.92
Parses: 52.53 15.16
Hard parses: 10.26 2.96
Sorts: 21.27 6.14
Logons: 0.14 0.04
Executes: 52.37 15.12
Transactions: 3.46
% Blocks changed per Read: 0.73 Recursive Call %: 17.58
Rollback per transaction %: 0.00 Rows per Sort: 5.01
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.98
Buffer Hit %: 98.94 In-memory Sort %: 99.94
Library Hit %: 87.04 Soft Parse %: 80.47
Execute to Parse %: -0.30 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 83.51 % Non-Parse CPU: 99.99
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 84.24 86.20
% SQL with executions>1: 51.80 65.49
% Memory for SQL w/exec>1: 22.83 42.57
================================
How come I have a high non-parse CPU percentage? Shouldn't the CPU be spending most of its time parsing, based on these stats?
=================================
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
library cache 3,310,698 8,945 11,174 3036/1648/3356/905/0
shared pool 1,556,136 1,570 365 1329/134/95/12/0
redo writing 77,126 1,018 2 1016/2/0/0/0
row cache objects 1,527,365 488 7 482/5/1/0/0
cache buffers chains 51,607,754 231 30 211/17/1/2/0
session idle bit 2,631,371 129 46 86/41/1/1/0
session allocation 408,936 34 7 27/7/0/0/0
cache buffers lru chain 198,693 10 4 6/4/0/0/0
enqueues 275,358 8 2 6/2/0/0/0
latch wait list 6,111 7 2 5/2/0/0/0
-------------------------------------------------------------
It's bad, I know... and I get ORA-04031 errors!! And I'm not a bit surprised!!
1. My question is: why does exec statspack.snap take around 10+ minutes to finish?
2. What does this mean: "library cache kglhdgn: child: 0 10,531 4,082"?
The head Java lead is getting annoyed at me asking them to use PS. I was surprised that they found the above link.
June 05, 2003 - 7:53 am UTC
1) because your database is a mess and the java guys are KILLING YOU. Literally KILLING YOU.
It is funny you know. I use that exact example in my next book -- to prove that the java guy in that chapter has got not a single clue about what he is talking about -- not at all
The test is 100% flawed. Here is a short excerpt from my forthcoming book on this topic:
Bind Variables and Java
One of the recent questions on the website asktom.oracle.com was this interesting one about programming in Java with JDBC. The discussion applies equally to Visual Basic programmers using ODBC, since the concepts of "Statement" and "PreparedStatement" exist in ODBC in more or less the same fashion. It questioned the use of Statements versus PreparedStatements. When using Statements in JDBC, you must use the "string concatenation" approach; using PreparedStatements allows you to use bind variables. The question was:
Tom -- Please briefly skim this link (link omitted for obvious reasons) which gives an excerpt on JDBC performance. It says to always use Statements (no bind variables allowed) instead of PreparedStatements because they perform better -- without discussing the impact on the database, only a single app's metrics. Is this accurate, or is this information just extremely short-sighted with regard to overall DB impact?
Well, that was easy - I gave them the proof from above - case closed. PreparedStatements with bind variables are absolutely without question the only way to do it. Of course, later on, I got a followup:
For a moment, keep aside the shared pool, hard parse and soft parse, and talk about PreparedStatement and Statement as they are the only way to execute statements from Java. I wrote this benchmark code that shows a Statement performs better than a PreparedStatement unless you execute the same statement a whole lot of times. So, I reproduced the findings of the above link and proved that Statements are better than PreparedStatements.
I had some issues with this one - they missed the point. They start with "for a moment keep aside shared pool, hard parse and soft parse". Well, if we were to ignore those - we totally miss the boat on this topic as they are the only things to consider. The facts are:
o Hard Parsing incurs many latches
o Latches are serialization devices
o Serialization is not a scalable thing
o Therefore as you add users, the system that uses Statements instead of PreparedStatements with bind variables will fail.
I quite simply could not honor their request to put aside the shared pool and hard/soft parsing -- they are the relevant topics; they must be considered. That was my initial response. But you know, this bothered me so much that I had to explore it further. So, starting with their benchmark code, which simply inserted into a database table, I made it a multi-user benchmark to demonstrate that if you expand this simple, single-user benchmark into a real-world example with multiple users, you will see clearly what the issue is and why you need to avoid Statements.
But, an interesting thing happened. I could not reproduce their finding that a Statement in JDBC without bind variables outperforms a PreparedStatement using bind variables. When I ran my version of their code, I found that a single statement executed via either a Statement or a PreparedStatement took about the same amount of time initially, and if we executed the SQL over and over, the PreparedStatement was always much faster. This conflicted with their observations totally.
So, I set out to find out why. We'll walk through this process here because it does cover two interesting things:
o If for some reason your test isn't meeting your hypothesis - either your hypothesis is wrong or your test is flawed
o The seemingly simple, convincing test can be the most misleading thing in the world
We'll walk through this simple benchmark now, starting with their original test case and working up to the "real thing". It used a single table TESTXXXPERF which was created using the script perftest.sql as follows:
scott@ORA920> drop table testxxxperf;
Table dropped.
scott@ORA920> create table testxxxperf
2 ( id number,
3 code varchar2(25),
4 descr varchar2(25),
5 insert_user varchar2(30),
6 insert_date date );
Table created.
scott@ORA920> exit
Then, the main java code was supplied. It consisted of three subroutines basically - a main that connected to the database and then called a routine to insert into that table using statement and then called a routine to do the same with prepared statements. The code piece by piece is:
import java.sql.*;
import oracle.jdbc.OracleDriver;
import java.util.Date;

public class perftest
{
    public static void main (String arr[]) throws Exception
    {
        Connection con = null;
        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
        con = DriverManager.getConnection
              ("jdbc:oracle:thin:@aria-dev:1521:ora920", "scott", "tiger");
        con.setAutoCommit(false);

        Integer iters = new Integer(arr[0]);
        doStatement (con, iters.intValue() );
        doPreparedStatement(con, iters.intValue() );

        con.commit();
        con.close();
    }
That is the main routine which simply connects to my Oracle 9iR2 instance as scott/tiger - disables the autocommit JDBC uses by default and then invokes the subroutine to execute a Statement N times and then a PreparedStatement N times. I set it up to allow us to pass "N" into the Java routine so we can run multiple simulations. Next, we'll look at the doStatement routine:
    static void doStatement(Connection con, int count)
    throws Exception
    {
        long start = new Date().getTime();
        Statement st = con.createStatement();
        for (int i = 0; i < count; i++)
        {
            st.executeUpdate
            ("insert into testxxxperf " +
             "(id, code, descr, insert_user, insert_date)" +
             " values (" + i + ", 'ST - code" + i + "'" +
             ", 'St - descr" + i + "'" + ", user, sysdate ) ");
        }
        long end = new Date().getTime();
        st.close();
        con.commit();
        System.out.println
        ("statement " + count + " times in " +
         (end - start) + " milli seconds");
    }
Very straightforward -- it simply creates a Statement object, then loops "count" times, building a unique, never-before-seen INSERT statement and executing it. It is somewhat scaled back from reality in that it is not checking for quotes in strings and escaping them -- but we'll let that go for now. Also note that it retrieves the time before and after executing the statements and prints out the results. Next, we look at the prepared statement version:
    static void doPreparedStatement (Connection con, int count)
    throws Exception
    {
        long start = new Date().getTime();
        PreparedStatement ps =
            con.prepareStatement
            ("insert into testxxxperf " +
             "(id, code, descr, insert_user, insert_date)" +
             " values (?,?,?, user, sysdate)");
        for (int i = 0; i < count; i++)
        {
            ps.setInt(1,i);
            ps.setString(2,"PS - code" + i);
            ps.setString(3,"PS - desc" + i);
            ps.executeUpdate();
        }
        long end = new Date().getTime();
        ps.close();
        con.commit();
        System.out.println
        ("pstatement " + count + " times in " +
         (end - start) + " milli seconds");
    }
}
Basically the same code but this uses a PreparedStatement to insert "count" rows. It accomplishes the same exact task as the doStatement routine - just using a PreparedStatement. Lastly, I set up a shell script to execute this:
#!/bin/csh -f
sqlplus scott/tiger @perftest
java perftest $1
A CMD file for Windows might look like:
sqlplus scott/tiger @perftest
java perftest %1
Now, I ran this with inputs of 1 (do one statement/prepared statement), 10, 100 and 1,000 and the results were:
Rows to Insert      Statement        PreparedStatement
             1      0.05 seconds     0.92 seconds
            10      0.34 seconds     1.03 seconds
           100      2.69 seconds     2.35 seconds
          1000     26.68 seconds    15.74 seconds
So, at first glance, it looks like they might have something here -- if you were to ignore the database (which I'm not inclined to do, personally). If I just looked at this test, I might conclude that unless I'm going to execute the same statement over and over -- about 100 times -- I would best be served by using a Statement.
The problem is, there is a FLAW in our test! I discovered this flaw when I rewrote the code a little to go "multi-user". I knew that in a multi-user test, using System.out.println would not be a very "scalable" testing tool -- it would be hard to collect and analyze the results. So, I did what I always do when benchmarking and set up a database table to hold the timing results. The slightly modified Java code had an extra subroutine, "saveTimes", to save the timing information into the database. That routine, which you can add to the test program above, is:
    static PreparedStatement saveTimesPs;

    static void saveTimes( Connection con,
                           String which,
                           long elap ) throws Exception
    {
        if ( saveTimesPs == null )
            saveTimesPs = con.prepareStatement
                          ("insert into timings " +
                           "( which, elap ) values " +
                           "( ?, ? )" );
        saveTimesPs.setString(1,which);
        saveTimesPs.setLong(2,elap);
        saveTimesPs.executeUpdate();
    }
Then, I modified the doStatement and doPreparedStatement routines like this:
    static void doStatement (Connection con,
                             int count) throws Exception
    {
        long start = new Date().getTime();
        Statement st = con.createStatement();
        for (int i = 0; i < count; i++)
        {
            st.executeUpdate
            ("insert into testxxxperf " +
             "(id, code, descr, insert_user, insert_date)" +
             " values (" + i +
             ", 'ST - code" + i + "'" +
             ", 'St - descr" + i + "'" +
             ", user, sysdate ) ");
        }
        st.close();
        con.commit();
        long end = new Date().getTime();
        //System.out.println( "STMT" + " (" + (end-start) + ")" );
        saveTimes( con, "STMT", end-start );
    }
And I did likewise for the PreparedStatement routine. This would simply save the
times in a database table:
create table timings ( which varchar2(10), elap number );
so we could run a query to get average/min/max timings from multiple users. So, remembering that the only thing I changed was to comment out the System.out.printlns and add a routine to record the time - I ran this in single user mode to test. I found:
Rows to Insert      Statement        PreparedStatement
             1      0.05 seconds     0.05 seconds
            10      0.30 seconds     0.18 seconds
           100      2.69 seconds     1.44 seconds
          1000     28.25 seconds    15.25 seconds
That's different -- very different. Surprisingly different. All of a sudden, there is not only no penalty ever for using a PreparedStatement -- it quickly benefits us, even in single-user mode, to use one. What could be the cause?
The code being timed was no different -- not a single byte of code was changed. Sure, we commented out a System.out.println and added a call to saveTimes, but that code was never inside the timed section.
So, what did change? It turns out the saveTimes routine was the culprit here. If you look at that code, it uses a PreparedStatement. It "warmed up" the PreparedStatement class -- it paid the one-time penalty to load that class (Java dynamically loads classes as you use them). The simple act of connecting did the same for the Statement class (it is used during the connection to Oracle). Once the timing of the initial load of the PreparedStatement class was factored out, it turns out that a PreparedStatement is no more expensive to execute than a Statement in JDBC. The entire premise of a Statement being "lighter weight" or "more efficient" for small numbers of statements was flawed -- wrong. If you used a single PreparedStatement anywhere in your code, you would have paid this "load" penalty (which is pretty small when you look at it overall) for ALL PreparedStatements.
That was the interesting part of this example -- the basic test itself was flawed; we were timing an unrelated "thing". Since most non-trivial Java JDBC programs are going to use a PreparedStatement somewhere, they all pay this "load" penalty. Not only that, but this "load penalty" isn't a penalty at all -- it is simply the price of admission to building a scalable application on Oracle. If you don't use PreparedStatements -- if you insist on using Statements and "gluing the values in", opening yourself up to the SQL injection security flaw and buggy code -- your application will not scale as you add users. There is no "maybe" here, no "might not scale": your application will not scale -- period.
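The class-loading explanation above is an instance of a general micro-benchmarking pitfall: the first timed run silently pays one-time costs (class loading, JIT compilation) that have nothing to do with the code under test. A minimal sketch of the pitfall in plain Java, with no database involved (class and method names here are mine, not from the chapter):

```java
// Illustrates the "first-use penalty" pitfall: the first timed pass
// through any code path may pay one-time costs that get wrongly
// charged to the code being measured.
public class WarmupDemo
{
    // The "work" we measure: building a short string, as a trivial
    // stand-in for the JDBC calls in the benchmark.
    static String doWork(int i)
    {
        StringBuilder sb = new StringBuilder();
        sb.append("row ").append(i);
        return sb.toString();
    }

    // Time 'count' iterations, in milliseconds.
    static long time(int count)
    {
        long start = System.currentTimeMillis();
        for (int i = 0; i < count; i++)
            doWork(i);
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args)
    {
        // The first timed run pays one-time costs -- just as the original
        // benchmark charged the loading of the PreparedStatement class
        // to the first timing.
        long cold = time(100000);
        // The second run measures only the work itself.
        long warm = time(100000);
        System.out.println("cold=" + cold + "ms warm=" + warm + "ms");
    }
}
```

The fix is always the same: do an untimed warm-up pass first -- which is exactly what the accidental PreparedStatement inside saveTimes() did.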
Amazing!
Christo Kutrovsky, June 05, 2003 - 9:03 am UTC
That's a very important piece of code to know.
I am not yet involved in JAVA, but I sure will be some day, and I'll need to know how to prove the BIND variables point.
Thank you Tom.
Lots of PreparedStatement questions left unanswered
Old, but young at SQL, March 15, 2004 - 10:26 pm UTC
Thanks for the education about bind variables and the chapter from your book (and many related similar pages). I'm getting ORA-00604 and ORA-04031 errors, but I don't know why. It doesn't happen here, just in the field, with the exact same environment and near-identical usage.
Perhaps it's the lack of PreparedStatement use in places, but I don't use connections that long, yet the errors keep happening- it seems like closing connections (not pooling, at least in our code...) isn't releasing the memory!
Q: Shouldn't all these resources be released when I close the connection?
I get the multi-vs-single user issue, which doesn't apply to this project, but not the shared memory issue.
Q) If I do sequential inserts using Statement (not PreparedStatement), does it keep around the memory and the latches after I use the statement object a second time?
Q) How about if I close the statement and then create a new one?
The Java docs imply that re-use and closing both release associated Java and database resources.
I also get the parse-time issue- that's big. But I don't get why more latching occurs with Statements.
Q) Does some latching occur during the parse? Why?
I have only 3 processes using the database at a time, and only a few very lightly used tables are possibly read and written at the same time.
Note: I prefer Statement over PreparedStatement because the latter are so hard to debug. If there's a bad value, it doesn't even say which field! The errors are obscure. I guess I should subclass it and all the setType methods and salt away the values and reconstruct them for printing if there's an exception-
Q) Has anyone done this?
About 90% of the time PreparedStatement seems to work. So I went back to Statement for the rest. Statement is so much easier to use/debug that it saves a lot of development time, ie, money, so I use it for selects, which I do many of, but each only once per hour (and never multi-user).
I understand the lower parsing time, but so many of my statements are infrequent.
Q) Sometimes I have constants in my PreparedStatements. Is this bad? (I put jobs in a table: time,0,0,0,0,0. Later on when parts complete, they update the column.) What if I put the whole string literal in a PreparedStatement.
Q) Is a PreparedStatement containing no ? fields still more efficient than a Statement? Is it less efficient than putting some/all the fields in later with ps.setInteger()?
I appreciate the help, and will try using PreparedStatements again where they failed before. I'd like some more help at times- I know you're busy. In the 6 months I've been on this project this site has refused questions...
Q) Where can I go to buy some help, per question or per hour?
(other pet peeve: you can't query a connection for the number of open statements. If this were possible, I could leave connections open with confidence... I guess I have to take the time to subclass everything and keep track myself, sigh..., and trap exceptions in the class, and when one happens, close the statement and then re-throw...)
Thanks,
obyas
March 16, 2004 - 7:36 am UTC
if connections are not being released, you are using a connection pool of some sort. but -- you would know it as you would be setting it up.
yes, if you log off, all resources associated with that connection are released. However, if you are seeing ORA-04031s and you haven't been using prepared statements everywhere and you have lots of activity, well, it is pretty straightforward what is happening (you are doing it to yourself by using Statements).
Prepared statements are *not* just about single user vs. multi-user. They are about "doing it right" vs. "doing it wrong". Statements kill performance in both cases. Not only that, but the "common" solution is to make the shared pool really large, which just pushes the problem back a couple of feet and makes it BIGGER (the correct approach is to downsize the shared pool -- make it smaller, not bigger -- while you fix your bug).
If you are doing inserts with a statement -- there IS no second time. If you are doing inserts with a statement -- each insert is unique, never before seen. It is quite simply the very *worst* thing on the planet you can do. Nothing worse really.
Latches are quick in/out type of locks. latches are held very brief moments while we access a shared data structure (like the shared pool). You use them very briefly. You don't even need to finish the statement to give them up.
Using statements is bad bad bad. cannot say it any other way.
Why is it hard to understand that statements result in higher latching than prepared statements? With statements, every execute is a parse, and a parse takes many latches. With prepared statements, only a parse is a parse -- you can execute over and over and over without parsing.
Tell me -- would you compile your "methods" (aka subroutines) before each invocation and then throw out the code? Would you compile your subroutines thousands of times in a single program execution? No, you wouldn't. So, why are you doing just that??? That is what you are doing by using a Statement.
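As a rough illustration of why "every execute is a parse" hurts under concurrency: a latch behaves much like a spin lock, held only for the few instructions needed to touch a shared structure. The sketch below is my own analogy in Java, not Oracle internals:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Rough analogy for a latch: a spin lock held only briefly while a
// shared structure is touched. Every hard parse takes latches like this
// on the library cache and shared pool, so concurrent hard parsers
// serialize behind one another.
public class LatchDemo
{
    private final AtomicBoolean latch = new AtomicBoolean(false);
    private int sharedCounter = 0;   // stand-in for a shared-pool structure

    void touchSharedStructure()
    {
        while (!latch.compareAndSet(false, true))
            ;                        // spin: this is the serialization point
        try {
            sharedCounter++;         // brief critical section
        } finally {
            latch.set(false);        // release immediately
        }
    }

    int value() { return sharedCounter; }

    public static void main(String[] args) throws Exception
    {
        final LatchDemo d = new LatchDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10000; j++)
                    d.touchSharedStructure();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(d.value());   // 40000: every update waited its turn
    }
}
```

All four threads finish safely, but every increment waits its turn at the one latch -- that is the serialization that hard parsing multiplies across the library cache and shared pool latches shown in the report above.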
Show me how a statement makes "debugging" easier -- do that for me ok? So, you get an ora-1401. How the HECK does a statement help you out there????? How how how -- just give me a teeny tiny small example??? I don't get it.
It wastes time
It is harder to develop with
It costs $$$$$$$ to use statements. Look at the money you are wasting right here, right now. This is cheaper?????????????????
(can you guess what MY pet peeve is by now?)
You do understand that statements introduce a HUGE security risk right? (search google for "sql injection")
You do understand that statements consume HUGE java resources right? (betcha just create String temporaries all over the place)
You do understand that statements literally can kill the database right? (you are there, you should know that -- you are doing it and seeing the results)
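The injection risk is easy to demonstrate without a database: with string concatenation, the "value" can rewrite the statement text itself. A tiny, self-contained illustration (the table and column names are made up):

```java
// Shows why concatenated SQL is an injection risk: the "value" can
// rewrite the statement. With a PreparedStatement, the same input would
// be a bind value, never SQL text.
public class InjectionDemo
{
    static String buildQuery(String userName)
    {
        return "select * from users where name = '" + userName + "'";
    }

    public static void main(String[] args)
    {
        System.out.println(buildQuery("scott"));
        // A hostile "name" that turns the query into "return every row":
        System.out.println(buildQuery("x' or '1'='1"));
    }
}
```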
A prepared statement with no "?" does not make much sense. I cannot believe you do the same exact insert (what happened to the primary key fields, eh?) over and over. But yes, if in fact you have a statement with all literal values that you execute over and over -- a prepared statement is more efficient than a statement -- but I cannot see the use case for such a beast.
In the 6 months you've been on this project this site has taken
ask_tom@ASKUS> select count(*) from WWC_ASK_SUBMITTED_QUESTIONS$
2 where timestamp > add_months(sysdate,-6);
COUNT(*)
----------
1013
new questions and performed:
ask_tom@ASKUS> select count(*) from WWC_ASK_REVIEW_FOLLOWUPS$
2 where created_on > add_months(sysdate,-6);
COUNT(*)
----------
5387
followups like this one.......
but what resources are out there? how about
o groups.google.com -> comp.databases.oracle.* The internet usenet newsgroups. There you'll get lots of people willing to provide you advice.
o otn.oracle.com -> get support -> discussion forums. Ditto
Sorry, we did not write the JDBC spec, we just implemented it. However, it would be somewhat trivial, I believe, to keep a hash table based on your connection: every time you parse, you add to the hash table; when you close, you remove. Not very difficult (sigh)
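That hash-table idea might be sketched like this -- a hypothetical helper, not part of any JDBC driver, that you would call from your own wrappers around createStatement/prepareStatement and close:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the suggestion above: track open statements per connection
// yourself, since JDBC exposes no such count. "Object" stands in for
// your real Connection objects; any unique key would do.
public class OpenStatementTracker
{
    private final Map<Object, Integer> openCount = new HashMap<Object, Integer>();

    // Call from your createStatement/prepareStatement wrapper.
    synchronized void opened(Object connection)
    {
        Integer n = openCount.get(connection);
        openCount.put(connection, (n == null ? 0 : n.intValue()) + 1);
    }

    // Call from your Statement.close() wrapper.
    synchronized void closed(Object connection)
    {
        Integer n = openCount.get(connection);
        if (n != null && n.intValue() > 0)
            openCount.put(connection, n.intValue() - 1);
    }

    synchronized int open(Object connection)
    {
        Integer n = openCount.get(connection);
        return n == null ? 0 : n.intValue();
    }

    public static void main(String[] args)
    {
        OpenStatementTracker tracker = new OpenStatementTracker();
        Object conn = new Object();   // stand-in for a java.sql.Connection
        tracker.opened(conn);
        tracker.opened(conn);
        tracker.closed(conn);
        System.out.println("open statements: " + tracker.open(conn)); // prints 1
    }
}
```

Before handing a connection back, you could assert open(conn) == 0 to catch leaked statements early.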
Tell me this -- when you used to process FILES in the olden days -- did you leak file handles? or did you close them when you were done, open them when you needed them. statements/prepared statements are no different than a file.
Answers to your questions, and a few things are not yet clear
Old, but young at SQL, March 16, 2004 - 2:26 pm UTC
It seems like from your answer, if I create a statement, use it and close it, it keeps using more and more shared memory until I close the connection. I can accept that. (Note that it still doesn't make sense- why would Oracle keep it around? If people aren't getting it, perhaps you're not sharing a piece of the reason. I can take it on faith, but I can internalize it much more deeply if I know why.)
Maybe the problem is different though- You say:
> statements/prepared statements are no different than a file.
So which has more memory in use at a single time:
Statement foo = conn.createStatement();
foo.execute("select foo from bar where xxx");
foo.execute("select foo2 from bar where yyy");
foo.close();
Or
Statement foo;
ResultSet rs;
foo = conn.createStatement();
rs = foo.executeQuery("select foo from bar where xxx");
foo.close();
foo = conn.createStatement();
rs = foo.executeQuery("select foo2 from bar where yyy");
foo.close();
I was assuming they were the same, based on the docs, that re-using foo closes rs AND any resources associated with the first select. But my results would be explained if it were not true, that Statement.execute() opens a file till the close() (besides the ResultSet), rather than a createStatement() opens a file.
> if connections are not being released, you are using a connection pool
Nope, no pooling:
Open a connection,
create statements A and B
do 3-10 queries with statement A
each with 1-20000 inserts with statement B
close the statements
close the connection
So I'll change the inserts to all use PreparedStatements. If I can't get them to work, I'll close/reopen the statement every 50 or so inserts.
You asked why PreparedStatement is hard to debug.
With a Statement, I can print out the String either before the exception happens, or as part of the exception. Then I can copy/paste it into SQLPlus and get a real error message that points me to some exact place in the query.
With a PreparedStatement, I have to write my own code that assembles the string and prints it out. Sometimes I assembled the statement by hand and it worked fine, but the PreparedStatement still complained about a bind variable (whoever left WHICH variable and its value out of the error msg should be shot). I know it's not the way to run a project, but I don't have much time; the Statement worked, and there's no hint in the JDBC API docs that there was anything wrong with Statement -- I figured it took a lot longer, but performance was fine. Until now. Yes, I'll fix it.
Often in development I'd work and re-work queries. Doing this in a string, the Select query LOOKS like itself. In my queries, which return lots of rows, the "parameters" configure the query rather than pinpointing one answer, so seeing them in the query helps a lot. In some of them the database table was a parameter. So semantically, the "variables" are different from what Oracle thinks of as the "bind variables." Having it all laid out is clearer.
Putting in a ?, it's easier to set things in the wrong order. Say I rework three things in the query and then go fix the order and oops -- I fixed the last two stmt.setString() calls but not the first. Since the data in my queries is meaningful to the programmer, separating the data from the query makes it more error-prone. Being called in to learn a new industry and their poorly designed database and undocumented database/application/industry is bad enough; using programming constructs that obscure clarity is a hindrance. (In fact, were I to do it over again, the only change I'd make is to PLAN to change to PreparedStatements at the END.)
Does it really matter if the 1-10 Selects are Statements vs PreparedStatements? Each happens once per connection so it won't save any time. I can easily change to closing each one after I use it. I don't have time to change all the Selects, and it has to ship, so too many changes is destabilizing.
Q) How do I get Oracle to tell me the amount of connections and statements that are open and the amount of shared mem in use?
Again, many thanks for the help, and the pointers to others.
I have two days to fix a problem that takes 4 days to manifest...
Thanks,
obyas
March 16, 2004 - 2:40 pm UTC
that is not what I said at all -- if you use a statement, close it, it is gone from the client side.
But over in your shared pool you have a bunch of stuff that has to be managed, but that no one is ever going to use again. A waste of energy -- we compiled a subroutine that'll never get used again, and now we have to manage it until we figure that out and hopefully age it out of the shared pool subpool it is in over time...
We keep it around in the hopes that the applications were designed to be efficient ;) sometimes we are wrong and the applications are not designed that way.
I cannot compare those two snippets as the first piece of code is not "sensible". you don't just "execute" a SELECT. You have to get and process a result set (else -- why bother??)
If you executed that select AND you lost the "result set" (which is not the statement, but yet another object), that would be like opening a file and not closing it and letting the file handle go out of scope. For all we know, you'll come back in 5 minutes and want that result set -- we cannot close it; only the well-formed program can do that.
Maybe what you really want to do (if you want things fast, easy to code, easy to debug, maintainable, BOUND) is to use PLSQL actually. Java is a seriously hard language to write code (especially database code) in by hand. If you code your data logic in PLSQL you'll be coding circles around the other developers.
a) you query the v$ tables -- a wealth of information exists therein.
http://docs.oracle.com/docs/cd/B10501_01/server.920/a96536/ch3.htm#1109131
The end is near, thanks!
Old, but young at SQL, March 16, 2004 - 8:25 pm UTC
> But over in your shared ... A waste of energy, we compiled a subroutine .. and .. manage it until .. age it out of the shared pool subpool ...
> We keep it around in the hopes that the applications were designed to be
So if I am running the same query once every hour, that's fine to use as a Statement, because the procedure et al will either have aged out or will be re-used?
It just seems that when available memory in the pool gets low, Oracle should jettison these cached procedures, instead of throwing an out-of-memory error...
But thanks, that explains most of it. But it still leaves 2 questions:
1. If my PreparedStatement contains literals, like "SELECT foo FROM table WHERE a=0 AND b=?", does the use of '0' cause the same sort of memory pool loss as a Statement? Is it better? Perhaps when this is closed the procedure made for it is completely freed?
2. How can I monitor the shared memory pools in SQLPlus?
Thanks much,
obyas
After this ships, I will register and follow that link and learn about Pl/Sql.
March 16, 2004 - 8:29 pm UTC
if it is EXACTLY the same query -- yes. a statement would be OK in that particular case.
Oracle tries to jettison them but...
the shared pool is sub-divided into subpools, especially as it gets larger and larger (soon it takes longer to age things out than to actually do the work you ask of us). So, the larger you make it in a system that isn't binding, the worse it actually gets.
1) a=0 is GOOD when coupled with "and b=?". the a=0 is invariant, the b=? is variant. you bind things that change (like you parameterize inputs to subroutines)....
2) select * from v$sgastat
PLSQL rocks, you will like it.
One last question (I promise)
Old, but young at SQL, March 17, 2004 - 12:29 am UTC
If I make a PreparedStatement with no bind variables, but still execute it as a prepared statement, is this exactly the same as a Statement, or will the PreparedStatement's procedures et al be nuked when the PreparedStatement is closed (where the Statement's procedure et al is cached in the shared pool)?
March 17, 2004 - 7:17 am UTC
The prepared statement, when closed, will be just like a closed statement.
Thanks
Old, but young at SQL, March 17, 2004 - 12:50 pm UTC
Running out of memory due to inability to flush a cache seems like a serious Oracle bug. I'd submit it if I knew how.
Thanks much for the education. It sounds like the way the customer used my program in testing aggravated the problem, that the cache would have aged out most queries in production. With the fixes, using PreparedStatements for the voluminous queries, there should be no problems.
Long term, I'll subclass PreparedStatement to handle errors better and systematically, and track the number of open statements, to support keeping connections open. Or go to PL/SQL (I'll surf this site for a book rec.)
Thanks MUCH- your fast responses were a godsend on this one.
-obyas
How long does it take for something to age out of the cache?
Old, but young at SQL, March 21, 2004 - 12:12 am UTC
How long does it take for a query to age out of the cache?
(I worked 70 hours last week, and there's a ton of other things to do -- I don't want to take the extra time to change 20 complex queries into PreparedStatements; each is used once per hour (most of the usage would be in testing). The changes I already made reduced the number of Statements by a factor of 100 so far, but my boss is afraid there'll be pressure to change all of them.)
March 21, 2004 - 9:57 am UTC
that is like asking how long does it take ice to melt or paint to dry.
it depends.
they may never age out.
they may age out really fast.
they may age out pretty soon.
they may age out after a long time.
Old, but young at SQL, March 22, 2004 - 2:53 pm UTC
Can you give me any sense of average times,
or the algorithm used to determine leaving the queue,
or how to use the output of
select * from v$sgastat
to determine this? The customer wants a guarantee that it won't happen again (but wants it shipped this week, with other tasks, as well.)
------
CREATE table tmp_DS AS
(SELECT * FROM DS WHERE ts BETWEEN ? and ? )
ORA-01027: bind variables not allowed for data definition operations
Interesting. I suppose I could make a temporary table, put the values in there, and use its values for this query... I can see why you recommend PL/Sql
March 22, 2004 - 3:40 pm UTC
there is no way to gauge this.
depends on the size of your shared pool
which subpool your statement hashes to
what other statements just happen to get assigned to the same place.
etc....
DDL, btw, is not kept in the shared pool; therefore bind variables are not used for DDL, as it is not shareable.
One more, sigh
Old, but young at SQL, March 22, 2004 - 7:53 pm UTC
Okay, I wrote 3 subclasses which delegate to
Connection, Statement and PreparedStatement
For PreparedStatement I also make an array of the
parameters that come in so if an exception happens,
I can print out the query and paste it into SQLPlus.
Here's what my routine produces:
Error: prep.executeUpdate() with query
INSERT INTO LastDataFeed
(t1, s1, t1p, s1p, t2, s2, periodMin, curtime, num)
values (?, ?, ?, ?, ?, ?, ?, ?, ?)
INSERT INTO LastDataFeed
(t1, s1, t1p, s1p, t2, s2, periodMin, curtime, num)
values (1069804800000, '2003-11-26 00:00:00.000', 1069805100000, '2003-11-26 00:05:00.000', 1070496000000, '00:00:00', 5, '23:28:44', 0);
java.sql.SQLException: ORA-01006: bind variable does not exist
And when I paste it into SQLPlus, it works!
Hmmm, if in my code I respond to the exception by calling
prep.executeUpdate(failedQueryString);
I get the same error. But if I make a Statement and call
stmt.executeUpdate(failedQueryString);
IT WORKS!
For completeness, the table is:
Name Type
-------------------
T1 NUMBER
S1 VARCHAR2(25)
T1P NUMBER
S1P VARCHAR2(25)
T2 NUMBER
S2 VARCHAR2(25)
PERIODMIN NUMBER
CURTIME VARCHAR2(20)
NUM NUMBER(9)
(I did a search on ORA-01006, but the 3 results were just for Pl/Sql...
Maybe I should just hack- if the prepStatement doesn't work, maybe I should submit the mfg'd query as a Statement, sigh...)
-obyas
March 22, 2004 - 9:16 pm UTC
that means your subclass isn't doing the binding right?
java.sql.SQLException: ORA-01006: bind variable does not exist
you are "binding a variable that doesn't exist".. doesn't matter if you can print a pretty string, you are not calling the bind routine with the proper inputs.
But what does that mean?
Old, but young at SQL, March 22, 2004 - 11:40 pm UTC
Does this mean one of the field names is wrong?
If that were true, the string produced from the query+args would produce errors as well.
I think the problem was a combination of incomplete docs and presumptuousness on my part. I was also using the PreparedStatement as a Statement-- it inherits Statement's
executeUpdate(queryString)
routine, so I assumed it would execute a query (with no variables) without disturbing the prepared-query. I guess the queryString overwrote the earlier one that had the '?'s, and it was complaining that it couldn't bind the vars I gave it to the string with no '?'s...
'My bad.
Thanks,
-obyas
March 23, 2004 - 6:58 am UTC
No, it means you have a bug in the code you layered on top of the code.
New, related problem
Old, but young at SQL, March 30, 2004 - 5:00 pm UTC
Maybe I didn't explain the previous one well. Executing a statement using a PreparedStatement object was nuking the PreparedStatement.
1 PreparedStatement p = con.prepareStatement("select x from foo where bar=?");
2 ResultSet rs = p.executeQuery("select count(*) from foo");
...
3 ResultSet rs = p.executeQuery();
throws an error.
Making a separate Statement object for executing line 2 allows line 3 to succeed.
------
The code also makes temporary views that focus on different aspects of an hour of data, then queries use these views. The view name changes, to avoid multithreading problems. My guess is that even preparedStatement queries won't be reused from hour to hour because each references a different viewname. True?
One solution would be to pool and reuse the view names.
Thanks,
Obyas
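The guess above about view names is worth making concrete: the shared pool (and any statement cache) shares statements only on their exact SQL text, so a fresh view name each hour means a fresh, unshareable statement each hour. Here is a minimal, database-free sketch -- the view names are hypothetical, and a plain Set stands in for the shared pool's "share on identical text" behavior:

```java
import java.util.HashSet;
import java.util.Set;

public class SharedPoolSketch {
    // Stand-in for the shared pool: a statement is shared only when its
    // text matches an existing entry exactly.
    static int distinctStatements(String... sqlTexts) {
        Set<String> pool = new HashSet<>();
        for (String sql : sqlTexts) {
            pool.add(sql);
        }
        return pool.size();
    }

    public static void main(String[] args) {
        // A new view name every hour => a new, unshareable statement each hour.
        int fresh = distinctStatements(
            "select avg(x) from hourly_view_01 where id = ?",
            "select avg(x) from hourly_view_02 where id = ?");

        // A pooled, reused view name => one statement, reused every hour.
        int pooled = distinctStatements(
            "select avg(x) from hourly_view_pooled where id = ?",
            "select avg(x) from hourly_view_pooled where id = ?");

        System.out.println(fresh + " vs " + pooled); // 2 vs 1
    }
}
```

So yes: pooling and reusing the view names is what lets the prepared statements themselves be reused from hour to hour.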
SQL Injection
Reader, May 18, 2004 - 7:28 am UTC
Dear Tom,
Great answers.... Can you please explain, with regard to the Oracle architecture, how the use of bind variables protects against SQL injection?
May 18, 2004 - 2:39 pm UTC
if you have my book Effective Oracle by Design -- i go into this in great detail.
Basically:
select * from emp where ename = :x;
-- no matter what you put into :x, it'll be treated as a string to be compared to ENAME.
select * from emp where ename = '&X';
and I supply
KING' or 'a' = 'a
as &X, all of a sudden, I dump the entire table instead of a row, or even:
KING' or (select some_function_I_normally_wouldnt_run(x) from dual) = '1
and I run a function I shouldn't and so on. You are injecting ANY SQL you want into the system without binds.
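For the Java developers, the same point can be sketched without touching a database at all. This is a hypothetical illustration (reusing the EMP/ENAME example above): with string concatenation the user's text becomes part of the SQL statement itself, while with a bind variable the statement text is a constant and the input is only ever data.

```java
public class InjectionDemo {
    // UNSAFE: splice the user's input into the statement text itself,
    // the way a java.sql.Statement built by concatenation would.
    static String concatenated(String ename) {
        return "select * from emp where ename = '" + ename + "'";
    }

    // With a bind variable the statement text never changes; whatever the
    // user types is only ever a VALUE compared to ENAME, never parsed as SQL.
    static final String BOUND = "select * from emp where ename = :x";

    public static void main(String[] args) {
        String attack = "KING' or 'a' = 'a";
        // The concatenated version now contains an always-true predicate:
        System.out.println(concatenated(attack));
        // select * from emp where ename = 'KING' or 'a' = 'a'
        System.out.println(BOUND); // unchanged, no matter what the user sends
    }
}
```

The concatenated text dumps the whole table; the bound statement can only ever look for an employee literally named `KING' or 'a' = 'a`.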
Strange CPU x Elapsed time
A reader, May 18, 2004 - 5:19 pm UTC
Why might CPU time be greater than elapsed time in tkprof ? I'm using 9ir2. I'm not using binds -- this is only a test case.
SELECT COUNT(*) FROM ticket
join historico on ticket.ID_ULTIMO_HISTORICO = historico.id_historico
join problema on problema.id_problema = historico.id_problema
join status on status.id_status = historico.id_status
join RESP_CAUSA on ticket.ID_RESP_CAUSA = RESP_CAUSA.ID_RESP_CAUSA
--join RESP_CAUSA tbl_resp_item on historico.ID_RESP_ITEM = tbl_resp_item.ID_RESP_CAUSA
join causa on causa.id_causa = ticket.id_causa
--join previsao_retorno on historico.id_prev_retorno = previsao_retorno.id_previsao
join qry_clientes on qry_clientes.id = ticket.id_cliente
join qry_operadores on historico.id_pessoa=qry_operadores.id
join qry_empresa on qry_operadores.id_empresa=qry_empresa.id_empresa where ( qry_clientes.id in ( 575, 2571, 22720, 2
15, 1390, 1104, 36220, 1917, 26481, 20740, 3824, 170, 349, 33100, 231, 34000, 2128, 21103, 393, 850, 250, 2627, 67, 447, 676,
884, 2173, 1531, 921, 3360, 5873, 352, 30740, 1453, 38104 ) and ( ( historico.id_status<>2 and historico.id_status<>5 ) or (
historico.id_status in (2,5) and historico.data_evento > to_date('2004/05/14 13:50:24', 'YYYY/MM/DD HH24:MI:SS') ) ) )
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 1.06 1.03 0 0 0 0
Execute 1 0.00 0.00 0 0 0 0
Fetch 1 0.94 0.91 593 1424 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 3 2.00 1.95 593 1424 0 1
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 95
Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
195 NESTED LOOPS
195 HASH JOIN
195 NESTED LOOPS
195 NESTED LOOPS
195 HASH JOIN
7783 NESTED LOOPS
7783 NESTED LOOPS
7783 HASH JOIN
32 NESTED LOOPS
35 INLIST ITERATOR
35 INDEX RANGE SCAN PK_EMPRESA (object id 42459)
32 INDEX UNIQUE SCAN PK_CLIENTE (object id 42596)
7783 INDEX FAST FULL SCAN IDX_T_UL_H (object id 44783)
7783 BITMAP CONVERSION TO ROWIDS
7783 BITMAP INDEX SINGLE VALUE NDX_TBL_RESP_CAUSA (object id 41387)
7783 INDEX UNIQUE SCAN CAUSA_PK11049574450818 (object id 33567)
19429 TABLE ACCESS FULL HISTORICO
195 INDEX UNIQUE SCAN STATUS_PK11049574471311 (object id 33576)
195 INDEX UNIQUE SCAN PROBLEMA_PK11049484157278 (object id 33573)
7430 VIEW
7430 TABLE ACCESS FULL PESSOA
195 INDEX UNIQUE SCAN PK_EMPRESA (object id 42459)
May 19, 2004 - 7:33 am UTC
it is "measurement errors" and different granularities of the clocks.
If you have either
"Effective Oracle by Design" or
"Optimizing Oracle Performance" by Cary Millsap
we both go into this -- he does more so.
Basically, when you measure lots of tiny events -- you can sometimes get just 0's and 1's back (if they take about the time your clock ticks take) -- or they could be off by 1 clock tick in general. add up a couple thousand of them and they tend to average out -- but it can be off by a little.
basically -- it just means in your case that cpu and ela are the same in effect.
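The clock-tick effect is easy to see with a toy timer. Here is a small sketch (not Oracle's actual instrumentation) of a timer that can only observe whole ticks: the very same sub-tick event measures as 0 or 1 depending purely on where it happens to fall relative to a tick boundary, so two independently-ticking clocks (cpu vs. elapsed) can each be off by a tick per event.

```java
public class ClockTickDemo {
    // A timer with one-tick resolution: the measured duration of an event
    // is just the number of tick boundaries the event happens to straddle.
    static long measuredTicks(double start, double trueDuration) {
        return (long) Math.floor(start + trueDuration) - (long) Math.floor(start);
    }

    public static void main(String[] args) {
        // The very same 0.6-tick event, measured at two different starting phases:
        System.out.println(measuredTicks(0.1, 0.6)); // 0 (no boundary crossed)
        System.out.println(measuredTicks(0.5, 0.6)); // 1 (one boundary crossed)
    }
}
```

Sum thousands of these and the errors mostly cancel, but any one total can land a little high or a little low -- which is all a cpu-greater-than-elapsed line in tkprof means.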
SQL Injection
Reader, May 19, 2004 - 12:37 am UTC
Thanks Tom for the explanation you have provided. But I would appreciate it if you gave the same example in JAVA, because I am having a hard time convincing the JAVA developers of the same thing. I managed to convince them regarding bind variables and Shared Pool usage, but I am not able to convince them the same for SQL Injection, since they are asking how a Statement object versus a PreparedStatement object matters. I don't have much experience with JAVA, so I hope you can give me an example which I can show to my JAVA developers. Regarding your new book, that is the first thing I am going to purchase this weekend.
May 19, 2004 - 9:49 am UTC
get them my book "Effective Oracle by Design"
there I not only benchmark it in that "language du-jour", but explain the sql injection in detail.
they cannot make the leap? they don't understand the FUNDAMENTAL difference between:
select * from emp where ename = :x
and
"select * from emp where ename = '" & anything_the_end_users_want_to_send_us & "'";
??????? really? they don't get that simple concept -- that the end user filling in a form can send anything they want?
have they googled
"sql injection"
on the web????
as a database manager, I reject out of hand as buggy any code that isn't making use of bind variables. developers (of which I happen to be one actually) -- can do whatever they want in test, but in production -- they don't make the rules. really.
Here is a snippet from the book:
<quote>
(3)Without Bind Variables, Your Code Is Less Secure
Another concern when you don't use bind variables revolves around security, specifically the risk of SQL injection. To understand how it works, suppose that we have an application that asks a user for a username and password. We execute this query:
select count(*) from user_table where username = THAT_USER
and password = THAT_PASSWORD
This seems innocent enough right? Well, let's use SQL*Plus to test that theory and see what happens with and without bind variables.
ops$tkyte@ORA920> create table user_table
2 ( username varchar2(30), password varchar2(30) );
Table created.
ops$tkyte@ORA920> insert into user_table values
2 ( 'tom', 'top_secret_password' );
1 row created.
ops$tkyte@ORA920> commit;
Commit complete.
ops$tkyte@ORA920> accept Uname prompt "Enter username: "
Enter username: tom
ops$tkyte@ORA920> accept Pword prompt "Enter password: "
Enter password: i_dont_know' or 'x' = 'x
Notice the password we just used. It incorporates a little SQL there doesn't it? Since we are just gluing strings together, not binding, the end user can actually type in arbitrary SQL and have it executed! Our application takes this string now and continues on:
ops$tkyte@ORA920> select count(*)
2 from user_table
3 where username = '&Uname'
4 and password = '&Pword'
5 /
old 3: where username = '&Uname'
new 3: where username = 'tom'
old 4: and password = '&Pword'
new 4: and password = 'i_dont_know' or 'x' = 'x'
COUNT(*)
----------
1
Look at that. Apparently, the password 'i_dont_know' or 'x' = 'x' is our password. But if we use bind variables instead and accept the exact input from the end user, we see this:
ops$tkyte@ORA920> variable uname varchar2(30);
ops$tkyte@ORA920> variable pword varchar2(30);
ops$tkyte@ORA920> exec :uname := 'tom';
ops$tkyte@ORA920> exec :pword := 'i_dont_know'' or ''x'' = ''x';
PL/SQL procedure successfully completed.
ops$tkyte@ORA920>
ops$tkyte@ORA920> select count(*)
2 from user_table
3 where username = :uname
4 and password = :pword
5 /
COUNT(*)
----------
0
We get the correct answer.
Think about this the next time you put an application out on the Internet. How many hidden vulnerabilities might you have lurking in there if you develop your application using string concatenation instead of bind variables? Think of the "neat" side effects someone could have on your system. Consider this password:
ops$tkyte@ORA920> accept Pword prompt "Enter password: "
Enter password: hr_pkg.fire_emp( 1234 )
Whoops, this person just executed a stored function as the user who is connected to the database. While he might not get logged on, he nonetheless got connected to your system and fired someone. Is this unbelievable? Absolutely not. Search www.google.com for SQL Injection, and you'll see results 1 through 10 of about 15,800. Just consider the implications.
If you don't believe the performance-related arguments for using bind variables in your system, maybe this last bit will be enough to persuade you. Bind variables add security.
</quote>
SQL Injection
Reader, May 20, 2004 - 6:58 am UTC
Thanks. It was enough to convince them this time.
Your script doesn't give me the "problem" queries
Adrian, December 28, 2004 - 12:44 pm UTC
I have the same problem that Trevor had: a very low Parse CPU to Parse Elapsd (4.31%) and a very high time waited for latch free. That should indicate a bind variable problem, but when I run your script, it only gives me two queries with executions > 100, and the sum of both executions is about 1000.
Is it possible that only these two queries are slowing down my database? What am I missing?
Here is an excerpt of my statspack report:
STATSPACK report for
DB Name DB Id Instance Inst Num Release Cluster Host
------------ ----------- ------------ -------- ----------- ------- ------------
MEMSET 2106035274 MEMSET 1 9.2.0.4.0 NO memorynet
Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 151 28-Dic-04 11:26:24 582 5.0
End Snap: 152 28-Dic-04 11:40:12 645 5.3
Elapsed: 13.80 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 112M Std Block Size: 8K
Shared Pool Size: 448M Log Buffer: 160K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 10,300.76 1,466.48
Logical reads: 38,414.01 5,468.84
Block changes: 66.57 9.48
Physical reads: 13,867.40 1,974.24
Physical writes: 5.88 0.84
User calls: 45.09 6.42
Parses: 282.92 40.28
Hard parses: 0.71 0.10
Sorts: 54.54 7.76
Logons: 0.17 0.02
Executes: 1,622.57 231.00
Transactions: 7.02
% Blocks changed per Read: 0.17 Recursive Call %: 98.77
Rollback per transaction %: 0.79 Rows per Sort: 398.62
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.96 Redo NoWait %: 100.00
Buffer Hit %: 63.90 In-memory Sort %: 100.00
Library Hit %: 99.95 Soft Parse %: 99.75
Execute to Parse %: 82.56 Latch Hit %: 99.79
Parse CPU to Parse Elapsd %: 4.31 % Non-Parse CPU: 99.10
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 93.53 95.29
% SQL with executions>1: 63.36 66.12
% Memory for SQL w/exec>1: 69.38 74.86
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
latch free 35,897 8,464 42.86
CPU time 3,259 16.50
buffer busy waits 12,660 2,397 12.14
enqueue 656 1,777 9.00
db file sequential read 918,246 1,724 8.73
December 28, 2004 - 2:33 pm UTC
<quote>Is it possible that only this two queries are slowing down my database? What am
I missing?
</quote>
more than possible, yes.
however:
Soft Parse %: 99.75
looks good but....
Parses: 282.92 40.28
283 parses/second indicates "soft parse heck" here. Why are you parsing "so much", no chance for you to reuse a cursor?
what is your session cached cursors set to?
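As an aside, the Execute to Parse % line in these reports is generally derived as 100 * (1 - parses/executes), so it is easy to sanity-check against the Load Profile. A quick back-of-the-envelope check with the per-second figures from the report above (treat the formula as the commonly documented one, not quoted from Oracle source):

```java
public class ExecuteToParse {
    // Execute to Parse % = 100 * (1 - parses/executes):
    // 100% would mean every parse is amortized over many executes ("parse
    // once, execute many"); 0% means one parse call per execute.
    static double executeToParsePct(double parsesPerSec, double executesPerSec) {
        return 100.0 * (1.0 - parsesPerSec / executesPerSec);
    }

    public static void main(String[] args) {
        // Load Profile above: Parses 282.92/s, Executes 1,622.57/s
        System.out.printf("%.2f%n", executeToParsePct(282.92, 1622.57)); // ~82.56
    }
}
```

That matches the reported Execute to Parse %: 82.56 -- roughly one parse call for every six executes, which is still a lot of parsing at 283 parses per second.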
Parsing
Adrian, December 29, 2004 - 3:40 am UTC
Thanks for your response
Session_cached_cursors is set to 100. Should I raise it?
90% of the database work is done running Oracle Portal, with all our portlets developed in plsql. I thought that using plsql properly there should not be parsing issues. Is there anything specific I can do to avoid the excessive soft parsing?
December 29, 2004 - 10:00 am UTC
depends on how the guys that wrote Portal did it. unfortunately, it always comes back to "how did the people writing the code accessing the database do things"
what is your session cursor cache hits currently, at 100? I've a feeling, unfortunately, that they are closing out all cursors in the session each time -- and you probably are not running a lot of the same queries in the same "page", so it might not be of much use.
Parsing
Adrian, December 29, 2004 - 10:52 am UTC
I obviously can't view or modify Oracle Portal code, and our page generation code doesn't repeat queries for the same page, so am I stuck with this parsing problem? Is there anything I can do in the database to alleviate this? Or should I start thinking about a hardware upgrade?
From the same statspack report:
opened cursors cumulative 223,069
session cursor cache hits 129,085
December 29, 2004 - 6:47 pm UTC
you are getting a good session cache hit; it might be interesting to see if increasing it to 150 positively impacts the number of hits.
any use of web cache here (to reduce how many times you actually hit the database)?
have you confirmed the latch free's are for the library cache?
Another asktom success story
Dan Loomis, May 10, 2005 - 10:06 am UTC
Hi Tom - I thought you might appreciate some positive feedback, as this thread is making a pretty large impact at my company.
A really, really smart developer here is writing a java database connection pooler from scratch, which will be implemented enterprise-wide as the standard db pooler for all of our jdbc applications. He's got some really cool features wired in, many of which are mechanisms for auditing/logging various scenarios, such as average # of connections used, auto detection of leaked cursors and orphaned connections, performance monitoring, etc.
Being an asktom undergraduate (I'm nowhere near graduating yet) I understand one of the top issues on your mind - using bind variables to maximize sql sharing, increase scalability, and eliminate sql injection risks. So I thought it would be neat if he could add a feature to flag all jdbc Statement usage as being a "Bad Practice", for all the reasons identified in this thread. Wouldn't you know it, he comes back and says:
<quote>
I may agree with this only if its not the default behavior and rather a flag you can set in the properties file.
The reason being, I read an interesting article not too long ago where a developer did a really good job of benchmarking the various methods of executing SQL, and the surprising conclusion was that non-bound dynamic SQL executed faster than binding, up to a point. Binding has an initial chunk of overhead and doesn't make sense until you cross the X rows of data you are trying to insert, etc.
Check this article out:
http://www.oreilly.com/catalog/jorajdbc/chapter/ch19.html
Statement beats PreparedStatement up to 65 rows of data.
So in summary, for simple stuff, we go faster using dynamic SQL with a regular Statement. It's the easiest to code and the fastest for small queries, so I would hate to have the pooler complain about this.
</quote>
Well, you can guess this jdbc book's conclusion was rather shocking to me, as it directly conflicts with years of Oracle conventional wisdom. I tried to reproduce the benchmark and ultimately failed to debunk the premise, for the same reasons you call out.
Then I find this thread...
Long story short, not only has the db pooler developer done a complete 180 on the issue, but now he's one of our biggest advocates for using PreparedStatement and all the benefits thereof. It should go a long way to stamping out dynamically-constructed, non-bound sql in our database environments.
All it took was a willingness to question authority. And of course having asktom in my back pocket ;->
Thanks again for all of your hard work - I bet it's making an impact far greater than you realize.
May 10, 2005 - 10:45 am UTC
Thanks -- I know that book, I wrote *directly* about it in Effective Oracle by Design.
It was totally wrong in its conclusion.
thanks for the feedback, appreciate that.
(tell him to make sure to support STATEMENT CACHING!!!!!!!! to reduce the soft parse as well -- the only good parse is:
NO PARSE
)
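Tom's parenthetical deserves a sketch. Statement caching in a pooler just means: prepare each distinct SQL text once per connection and hand back the same handle afterwards, so repeated executions skip the parse entirely. The class below is a hypothetical, database-free skeleton of the idea -- in real code T would be java.sql.PreparedStatement, the prepare function would be conn::prepareStatement, and you would also need eviction and close handling:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class StatementCache<T> {
    private final Map<String, T> cache = new HashMap<>();
    private final Function<String, T> prepare; // e.g. conn::prepareStatement
    int parseCount = 0;                        // how many real parses we paid for

    StatementCache(Function<String, T> prepare) {
        this.prepare = prepare;
    }

    // Return the cached handle for this SQL text, preparing (parsing) it
    // only the first time it is seen. Every later call is "no parse".
    T get(String sql) {
        return cache.computeIfAbsent(sql, s -> {
            parseCount++;
            return prepare.apply(s);
        });
    }
}
```

With this in place, the second request for the same text costs a map lookup rather than a round trip to parse -- the "NO PARSE" Tom is after, and something session_cached_cursors alone cannot give you.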
Statement Caching?
Shankar, May 10, 2005 - 7:28 pm UTC
Is it achieved using the session_cached_cursors parameter? How do I ensure "no parse"? Thanks.
May 10, 2005 - 11:30 pm UTC
only the program can ensure "no parse"
session cached cursors is a softer soft parse, but far far removed from no parse.
Only the PROGRAM can stop telling us to parse.
A reader, October 06, 2005 - 8:52 am UTC
Dear Tom
Elapsed: 30.02 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 2,560M Std Block Size: 8K
Shared Pool Size: 400M Log Buffer: 512K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 17,226.11 6,440.57
Logical reads: 9,217.40 3,446.24
Block changes: 116.69 43.63
Physical reads: 9.44 3.53
Physical writes: 8.11 3.03
User calls: 294.83 110.23
Parses: 94.58 35.36
Hard parses: 0.07 0.03
Sorts: 35.80 13.39
Logons: 0.46 0.17
Executes: 130.69 48.86
Transactions: 2.67
% Blocks changed per Read: 1.27 Recursive Call %: 45.43
Rollback per transaction %: 0.79 Rows per Sort: 30.10
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.90 In-memory Sort %: 100.00
Library Hit %: 99.97 Soft Parse %: 99.93
Execute to Parse %: 27.63 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 0.89 % Non-Parse CPU: 96.06
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 93.53 93.48
% SQL with executions>1: 35.21 35.20
% Memory for SQL w/exec>1: 36.81 36.80
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
latch free 26,209 7,321 88.27
row cache lock 192 368 4.44
CPU time 262 3.16
db file sequential read 16,982 199 2.40
SQL*Net break/reset to client 3 60 .72
-------------------------------------------------------------
Wait Events for DB: ORA9 Instance: ora9 Snaps: 2111 -2112
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
latch free 26,209 397 7,321 279 5.4
row cache lock 192 104 368 1919 0.0
db file sequential read 16,982 0 199 12 3.5
SQL*Net break/reset to clien 3 0 60 20038 0.0
enqueue 73 4 33 446 0.0
log file sync 5,404 0 18 3 1.1
buffer busy waits 41 10 17 420 0.0
db file parallel write 590 0 10 17 0.1
control file parallel write 587 0 2 3 0.1
control file sequential read 351 0 1 2 0.1
process startup 12 0 0 32 0.0
LGWR wait for redo copy 121 28 0 3 0.0
db file scattered read 9 0 0 14 0.0
log buffer space 2 0 0 61 0.0
SQL*Net more data to client 1,719 0 0 0 0.4
log file parallel write 8,984 8,982 0 0 1.9
local write wait 3 0 0 6 0.0
direct path write 10 0 0 1 0.0
direct path read 9 0 0 0 0.0
buffer deadlock 1 1 0 0 0.0
SQL*Net message from client 581,543 0 60,322 104 120.7
wakeup time manager 56 56 1,666 29750 0.0
jobq slave wait 274 259 806 2941 0.1
SQL*Net message to client 581,560 0 1 0 120.7
-------------------------------------------------------------
Background Wait Events for DB: ORA9 Instance: ora9 Snaps: 2111 -2112
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file parallel write 590 0 10 17 0.1
control file parallel write 587 0 2 3 0.1
control file sequential read 239 0 0 2 0.0
LGWR wait for redo copy 121 28 0 3 0.0
latch free 2 0 0 35 0.0
rdbms ipc reply 5 0 0 10 0.0
log file parallel write 8,984 8,982 0 0 1.9
buffer busy waits 2 0 0 5 0.0
direct path write 9 0 0 1 0.0
direct path read 9 0 0 0 0.0
rdbms ipc message 35,648 26,564 7,033 197 7.4
pmon timer 628 605 1,758 2800 0.1
smon timer 8 2 1,691 ###### 0.0
-------------------------------------------------------------
SQL ordered by Gets
A reader, October 06, 2005 - 8:56 am UTC
follow above
SQL ordered by Gets
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
--------------- ------------ -------------- ------ -------- --------- ----------
11,967,046 642 18,640.3 72.1 101.43 110.22 1952116255
SQL ordered by Executions
CPU per Elap per
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
24,801 24,784 1.0 0.00 0.04 4197364295
Module: JDBC Thin Client
SQL ordered by Parse Calls
% Total
Parse Calls Executions Parses Hash Value
------------ ------------ -------- ----------
24,826 24,801 14.57 4197364295
Module: JDBC Thin Client
===================================================
Parses: 94.58 35.36
Soft Parse %: 99.93
Execute to Parse %: 27.63
looks good but ...
Parse CPU to Parse Elapsd %: 0.89
latch free 26,209 7,321 88.27
Is this also indicating a bind variable problem?
Is that high buffer gets sql related to the problem?
October 06, 2005 - 9:27 am UTC
Soft Parse %: 99.93
would indicate you are doing well by binds - not bad.
A reader, October 06, 2005 - 11:03 am UTC
Dear Tom
Elapsed: 30.02 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 2,560M Std Block Size: 8K
Shared Pool Size: 400M Log Buffer: 512K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 17,226.11 6,440.57
Logical reads: 9,217.40 3,446.24
Block changes: 116.69 43.63
Physical reads: 9.44 3.53
Physical writes: 8.11 3.03
User calls: 294.83 110.23
Parses: 94.58 35.36
Hard parses: 0.07 0.03
Sorts: 35.80 13.39
Logons: 0.46 0.17
Executes: 130.69 48.86
Transactions: 2.67
% Blocks changed per Read: 1.27 Recursive Call %: 45.43
Rollback per transaction %: 0.79 Rows per Sort: 30.10
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.90 In-memory Sort %: 100.00
Library Hit %: 99.97 Soft Parse %: 99.93
Execute to Parse %: 27.63 Latch Hit %: 99.97
Parse CPU to Parse Elapsd %: 0.89 % Non-Parse CPU: 96.06
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 93.53 93.48
% SQL with executions>1: 35.21 35.20
% Memory for SQL w/exec>1: 36.81 36.80
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
latch free 26,209 7,321 88.27
row cache lock 192 368 4.44
CPU time 262 3.16
db file sequential read 16,982 199 2.40
SQL*Net break/reset to client 3 60 .72
-------------------------------------------------------------
Wait Events for DB: ORA9 Instance: ora9 Snaps: 2111 -2112
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
latch free 26,209 397 7,321 279 5.4
row cache lock 192 104 368 1919 0.0
db file sequential read 16,982 0 199 12 3.5
SQL*Net break/reset to clien 3 0 60 20038 0.0
enqueue 73 4 33 446 0.0
log file sync 5,404 0 18 3 1.1
buffer busy waits 41 10 17 420 0.0
db file parallel write 590 0 10 17 0.1
control file parallel write 587 0 2 3 0.1
control file sequential read 351 0 1 2 0.1
process startup 12 0 0 32 0.0
LGWR wait for redo copy 121 28 0 3 0.0
db file scattered read 9 0 0 14 0.0
log buffer space 2 0 0 61 0.0
SQL*Net more data to client 1,719 0 0 0 0.4
log file parallel write 8,984 8,982 0 0 1.9
local write wait 3 0 0 6 0.0
direct path write 10 0 0 1 0.0
direct path read 9 0 0 0 0.0
buffer deadlock 1 1 0 0 0.0
SQL*Net message from client 581,543 0 60,322 104 120.7
wakeup time manager 56 56 1,666 29750 0.0
jobq slave wait 274 259 806 2941 0.1
SQL*Net message to client 581,560 0 1 0 120.7
-------------------------------------------------------------
Background Wait Events for DB: ORA9 Instance: ora9 Snaps: 2111 -2112
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file parallel write 590 0 10 17 0.1
control file parallel write 587 0 2 3 0.1
control file sequential read 239 0 0 2 0.0
LGWR wait for redo copy 121 28 0 3 0.0
latch free 2 0 0 35 0.0
rdbms ipc reply 5 0 0 10 0.0
log file parallel write 8,984 8,982 0 0 1.9
buffer busy waits 2 0 0 5 0.0
direct path write 9 0 0 1 0.0
direct path read 9 0 0 0 0.0
rdbms ipc message 35,648 26,564 7,033 197 7.4
pmon timer 628 605 1,758 2800 0.1
smon timer 8 2 1,691 ###### 0.0
-------------------------------------------------------------
SQL ordered by Gets
October 06, 2005 - 12:06 pm UTC
please stop posting this - this is like the 3rd copy.
Parse CPU to Parse Elapsd is very low
Raj, April 19, 2006 - 11:04 am UTC
Hi Tom,
I have a similar issue: Parse CPU to Parse Elapsd is very low, but I don't see any latch free event in my top 5 events. Also, the number of parses seems high, and the soft parse ratio is not very effective either.
Here is an extract of my statspack report:
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
553936198 db01 1 18-Apr-06 15:39 10.2.0.1.0 NO
Host Name: DATABASESERVER Num CPUs: 8 Phys Memory (MB): 15,991
~~~~
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 275 19-Apr-06 15:00:08 33 21.4
End Snap: 276 19-Apr-06 16:00:05 33 22.6
Elapsed: 59.95 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 4,320M Std Block Size: 8K
Shared Pool Size: 400M Log Buffer: 4,096K
Load Profile Per Second Per Transaction
~~~~~~~~~~~~ --------------- ---------------
Redo size: 8,843,135.05 20,607.13
Logical reads: 206,433.93 481.05
Block changes: 60,902.08 141.92
Physical reads: 130.89 0.31
Physical writes: 747.13 1.74
User calls: 1.89 0.00
Parses: 130.03 0.30
Hard parses: 13.89 0.03
Sorts: 27.60 0.06
Logons: 0.03 0.00
Executes: 3,576.07 8.33
Transactions: 429.13
% Blocks changed per Read: 29.50 Recursive Call %: 99.98
Rollback per transaction %: 0.00 Rows per Sort: 647.01
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 99.62
Buffer Hit %: 99.94 In-memory Sort %: 100.00
Library Hit %: 99.50 Soft Parse %: 89.32
Execute to Parse %: 96.36 Latch Hit %: 99.88
Parse CPU to Parse Elapsd %: 11.13 % Non-Parse CPU: 96.31
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 72.18 79.24
% SQL with executions>1: 79.59 76.89
% Memory for SQL w/exec>1: 84.32 74.85
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
CPU time 7,692 22.2
db file sequential read 208,049 6,908 33 20.0
log file switch (checkpoint incomplete) 87,534 3,937 45 11.4
log buffer space 24,709 3,617 146 10.5
log file parallel write 162,260 2,469 15 7.1
-------------------------------------------------------------
Host CPU (CPUs: 8)
~~~~~~~~ Load Average
Begin End User System Idle WIO WCPU
------- ------- ------- ------- ------- ------- --------
27.04 0.54 72.41
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 27.36
% of busy CPU for Instance: 99.20
%DB time waiting for CPU - Resource Mgr:
Memory Statistics Begin End
~~~~~~~~~~~~~~~~~ ------------ ------------
Host Mem (MB): 15,991.3 15,991.3
SGA use (MB): 4,768.0 4,768.0
PGA use (MB): 104.8 148.7
% Host Mem used for SGA+PGA: 30.5 30.7
-------------------------------------------------------------
Time Model System Stats DB/Inst: db01/db01 Snaps: 275-276
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % of DB time
----------------------------------- -------------------- ------------
sql execute elapsed time 29,402.0 99.9
DB CPU 7,686.4 26.1
parse time elapsed 2,972.8 10.1
hard parse elapsed time 2,523.5 8.6
hard parse (sharing criteria) elaps 1,052.5 3.6
PL/SQL execution elapsed time 169.5 .6
repeated bind elapsed time 7.2 .0
PL/SQL compilation elapsed time 1.1 .0
connection management call elapsed 0.1 .0
hard parse (bind mismatch) elapsed 0.0 .0
sequence load elapsed time 0.0 .0
DB time 29,416.8
background elapsed time 6,065.8
background cpu time 229.8
-------------------------------------------------------------
Could you please provide some tips on how to solve this issue?
Thanks,
Raj
April 19, 2006 - 11:13 am UTC
to solve "what issue"
I can say a large % of your 7,692 cpu seconds is spent parsing - every second 13 to 14 new statements that never existed before are hard parsed.
If you go to your latch report section, you'll get an idea of how much time is wasted there by looking at the latch MISSES.
Parse CPU to Parse Elapsd is very low but Latch Wait is not high
Raj, April 21, 2006 - 8:51 am UTC
Hi Tom,
I checked the wait events, but latch free is not high, and still Parse CPU to Parse Elapsd is very low.
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
553936198 data01 1 20-Apr-06 10:11 10.2.0.1.0 NO
Host Name: dataserver Num CPUs: 8 Phys Memory (MB): 15,991
~~~~
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 283 20-Apr-06 11:00:07 32 21.3
End Snap: 284 20-Apr-06 12:00:04 34 19.7
Elapsed: 59.95 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 6,880M 6,864M Std Block Size: 8K
Shared Pool Size: 1,264M 1,280M Log Buffer: 4,096K
Load Profile Per Second Per Transaction
~~~~~~~~~~~~ --------------- ---------------
Redo size: 9,616,604.17 20,530.74
Logical reads: 225,910.02 482.30
Block changes: 66,736.40 142.48
Physical reads: 119.61 0.26
Physical writes: 821.14 1.75
User calls: 1.70 0.00
Parses: 144.57 0.31
Hard parses: 15.15 0.03
Sorts: 27.09 0.06
Logons: 0.04 0.00
Executes: 3,897.73 8.32
Transactions: 468.40
% Blocks changed per Read: 29.54 Recursive Call %: 99.98
Rollback per transaction %: 0.00 Rows per Sort: 715.22
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.99 Redo NoWait %: 99.60
Buffer Hit %: 99.95 In-memory Sort %: 100.00
Library Hit %: 99.50 Soft Parse %: 89.52
Execute to Parse %: 96.29 Latch Hit %: 99.87
Parse CPU to Parse Elapsd %: 11.26 % Non-Parse CPU: 96.28
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 19.08 20.71
% SQL with executions>1: 69.15 79.03
% Memory for SQL w/exec>1: 87.37 91.57
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
CPU time 8,265 24.0
db file sequential read 217,022 5,555 26 16.2
log file switch (checkpoint incomplete) 82,780 5,239 63 15.2
log buffer space 27,907 4,427 159 12.9
log file parallel write 701,824 2,165 3 6.3
-------------------------------------------------------------
Host CPU (CPUs: 8)
~~~~~~~~ Load Average
Begin End User System Idle WIO WCPU
------- ------- ------- ------- ------- ------- --------
29.06 0.76 70.18
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 29.48
% of busy CPU for Instance: 98.88
%DB time waiting for CPU - Resource Mgr:
Memory Statistics Begin End
~~~~~~~~~~~~~~~~~ ------------ ------------
Host Mem (MB): 15,991.3 15,991.3
SGA use (MB): 8,192.0 8,192.0
PGA use (MB): 129.1 134.8
% Host Mem used for SGA+PGA: 52.0 52.1
-------------------------------------------------------------
Time Model System Stats DB/Inst: data01/data01 Snaps: 283-284
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % of DB time
----------------------------------- -------------------- ------------
sql execute elapsed time 29,378.9 100.0
DB CPU 8,254.1 28.1
parse time elapsed 2,947.4 10.0
hard parse elapsed time 2,469.7 8.4
hard parse (sharing criteria) elaps 994.4 3.4
PL/SQL execution elapsed time 181.6 .6
repeated bind elapsed time 7.5 .0
PL/SQL compilation elapsed time 0.8 .0
connection management call elapsed 0.3 .0
sequence load elapsed time 0.2 .0
DB time 29,386.0
background elapsed time 5,537.0
background cpu time 279.5
-------------------------------------------------------------
Wait Events DB/Inst: data01/data01 Snaps: 283-284
-> s - second, cs - centisecond, ms - millisecond, us - microsecond
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
--------------------------------- ------------ ------ ---------- ------ --------
db file sequential read 217,022 0 5,555 26 0.1
log file switch (checkpoint incom 82,780 5 5,239 63 0.0
log buffer space 27,907 2 4,427 159 0.0
log file parallel write 701,824 0 2,165 3 0.4
log file switch completion 10,516 6 1,848 176 0.0
db file parallel write 214,614 0 1,784 8 0.1
db file scattered read 26,174 0 1,352 52 0.0
log file sync 8,598 1 741 86 0.0
control file sequential read 112,201 0 716 6 0.1
rdbms ipc reply 23,793 0 442 19 0.0
library cache pin 576 10 394 684 0.0
enq: RO - fast object reuse 7,450 0 247 33 0.0
enq: CF - contention 976 1 246 252 0.0
local write wait 8,167 0 199 24 0.0
buffer busy waits 98,430 0 173 2 0.1
control file parallel write 12,050 0 154 13 0.0
enq: CI - contention 1,217 0 138 114 0.0
enq: HW - contention 367 1 67 182 0.0
direct path read 20,850 0 64 3 0.0
reliable message 7,531 0 43 6 0.0
log file sequential read 1,398 0 25 18 0.0
direct path write 59,323 0 18 0 0.0
enq: TX - index contention 780 0 14 18 0.0
read by other session 163 2 14 84 0.0
log file switch (private strand f 188 0 11 60 0.0
LGWR wait for redo copy 262,017 0 9 0 0.2
latch: library cache 14,212 0 7 0 0.0
log file single write 1,398 0 4 3 0.0
latch: In memory undo latch 135 0 4 27 0.0
kksfbc child completion 60 100 3 55 0.0
Data file init write 48 0 2 52 0.0
kkdlgon 24 38 1 45 0.0
os thread startup 47 0 1 18 0.0
latch: cache buffers chains 2,124 0 1 0 0.0
undo segment extension 541,980 100 1 0 0.3
row cache lock 1 0 0 461 0.0
enq: FB - contention 659 0 0 1 0.0
latch: cache buffers lru chain 565 0 0 1 0.0
latch free 983 0 0 0 0.0
latch: checkpoint queue latch 176 0 0 1 0.0
latch: redo allocation 543 0 0 0 0.0
latch: object queue header operat 288 0 0 0 0.0
enq: TX - allocate ITL entry 52 0 0 1 0.0
SQL*Net break/reset to client 310 0 0 0 0.0
latch: shared pool 181 0 0 0 0.0
SGA: allocation forcing component 3 67 0 13 0.0
latch: row cache objects 29 0 0 1 0.0
latch: library cache pin 15 0 0 0 0.0
buffer deadlock 954 86 0 0 0.0
latch: session allocation 22 0 0 0 0.0
Could you please explain which area I should concentrate on to improve performance here? Is the Soft Parse % reasonable?
Thanks and Regards,
Raj
April 21, 2006 - 8:54 am UTC
did you miss my comment above from before? just reposting a big statspack isn't going to make me say something different.
so what about the wait - look at your misses - every time you miss, you go into a loop trying to get it (burning CPU in the effort). Only after you spin some 2000 times in a loop (burning cpu) will you "go to sleep and wait" for a latch.
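A hedged sketch of that spin-then-sleep behavior in Python (the 2000 spin count mirrors the classic spin-count default mentioned above; everything here is illustrative, not Oracle's actual latch code):

```python
import threading

SPIN_COUNT = 2000  # illustrative; mirrors the "spin some 2000 times" above

def get_latch(latch: threading.Lock) -> str:
    # Spin phase: repeatedly try a non-blocking acquire, burning CPU,
    # on the bet that the holder will release the latch very soon.
    for _ in range(SPIN_COUNT):
        if latch.acquire(blocking=False):
            return "got it while spinning"
    # Sleep phase: give up spinning and block until the latch is free.
    # This is the point at which a "latch free" wait is recorded.
    latch.acquire()
    return "slept for it"

latch = threading.Lock()
print(get_latch(latch))  # an uncontended latch is grabbed on the first spin
```

The point of the spin phase is that going to sleep and being woken up costs far more than a few thousand cheap retries when the holder usually releases the latch within microseconds.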
A confirmation
Reader, October 12, 2006 - 9:10 am UTC
Hello Tom,
The following is extract of statspack report. The SQL ordered by Executions part of the statspack report is as follows.
What do the zero values for Rows Processed and CPU per Exec indicate?
CPU per Elap per Old
Executions Rows Processed Rows per Exec Exec (s) Exec (s) Hash Value
------------ --------------- ---------------- ----------- ---------- ----------
401 0 0.0 0.00 0.00 181027966
Module: javatest.exe
SELECT COL1,COL2,COL3 FROM SYSREF_JAVA
Thanks
Still waiting
Reader, October 13, 2006 - 12:28 am UTC
Hello Tom,
I'm waiting for a reply to my previous query.
October 13, 2006 - 7:08 am UTC
geez - give me a break.
I on occasion
o sleep
o eat
o do my day job
why didn't you provide me the requested information within 30 seconds of my asking for it yourself??? I waited - but you didn't reply, I was still waiting for you.
Statement Caching
Tim, October 13, 2006 - 1:29 pm UTC
With respect to your comment a few posts above:
<quote>
only the program can ensure "no parse"
session cached cursors is a softer soft parse, but far far removed from no
parse.
Only the PROGRAM can stop telling us to parse.
</quote>
I understand (I think) the hard parse - basically - use binds appropriately. But "no parse" - don't quite get. Could you explain further? Particularly - how do I get there? I am working in PLSQL and PowerBuilder. In either of these is it realistically under my control to "no parse" - or if I am using binds appropriately - will I just "get what I get" based on how Oracle and Sybase have programmed PLSQL and PowerBuilder to behave?
Thanks.
October 13, 2006 - 2:46 pm UTC
let us say you have 1 million rows to load using "some language", you could either:
loop
read a record
exit loop when no more data to read
prepare insert
bind insert
execute insert
close insert
end loop
OR you could
prepare insert
loop
read a record
exit loop when no more data to read
bind insert
execute insert
end loop
close insert
first one parses 1,000,000 times, second parses 1 time.
if you use PLSQL - plsql caches your cursors very nicely, you don't have to deal with it, it is done. If you use some other language, you have to be careful not to over parse - don't be in such a hurry to close that sql statement if you don't have to.
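For languages that do not cache cursors for you, the second pattern can be sketched like this - using Python's stdlib sqlite3 driver as a stand-in for an Oracle client (the table and data here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER, y TEXT)")

# pretend this is the million-row file being loaded (smaller here)
records = [(i, "row %d" % i) for i in range(1000)]

# prepare once, bind and execute per record: executemany compiles the
# INSERT a single time and reuses it for every row, instead of
# re-parsing the statement inside the loop
conn.executemany("INSERT INTO t (x, y) VALUES (?, ?)", records)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1000
```

The statement text never changes; only the bind values do - which is exactly what lets the driver parse once and execute a million times.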
Parse CPU to Parse Elapsd %
Key, July 08, 2008 - 4:09 pm UTC
Hello Tom,
I think I am not able to get your point on Parse CPU to Parse Elapsed %. Would you please explain it again? How is "Parse CPU to Parse Elapsed %" derived, and how can it indicate a significant performance problem when the ratio is low?
Thanks
July 08, 2008 - 5:32 pm UTC
,'Parse CPU to Parse Elapsd %:' dscr
, decode(:prsela, 0, to_number(null)
, round(100*:prscpu/:prsela,2)) pctval
that is straight from statspack.
take the CPU time
divide by elapsed time
get a ratio.
if cpu = elapsed, then we waited for nothing - just did work on the cpu and parsed and then you ran the query.
as elapsed goes up and cpu stays the same - say 1 cpu second, 2 elapsed seconds - 50% ratio - that means we spent a considerable amount of time waiting for something
waiting for cpu - maybe, system could have been overloaded - look at other metrics to see if this is the case
waiting for latches - could be. we might have gone to sleep waiting for the library cache latch - that would increase elapsed but cpu would not go up with it.
as this number goes to zero that means the elapsed time to parse is very big compared to the cpu time for parsing - that is bad, we are waiting for something big time.
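The decode above boils down to this (a direct transcription of the statspack arithmetic, nothing more):

```python
def parse_cpu_to_parse_elapsed(prscpu, prsela):
    """Parse CPU to Parse Elapsd %, as statspack computes it."""
    if prsela == 0:
        return None  # mirrors decode(:prsela, 0, to_number(null), ...)
    return round(100 * prscpu / prsela, 2)

print(parse_cpu_to_parse_elapsed(1, 1))    # 100.0 - no waiting during parse
print(parse_cpu_to_parse_elapsed(1, 2))    # 50.0  - half the parse time was a wait
print(parse_cpu_to_parse_elapsed(1, 100))  # 1.0   - parsing spent almost all its time waiting
```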
Parse CPU to Parse Elapsd %
A reader, July 09, 2008 - 12:03 pm UTC
That is an excellent explanation. Thanks a lot for that.
Now next questions are:
1. What other metrics in statspack should we look at to check whether the system could have been overloaded, and whether that is the reason for a low "Parse CPU to Parse Elapsed %" ratio?
2. You said that could indicate a latch problem. But what if "latch free" is not in the top 5 wait events? Do we still need to look into the latch statistics and try to tune them, or should we look at some other part in order to improve this ratio?
In my statspack, I see this ratio is 3-4% most of the time (even when the CPU is more than 50% idle). I don't see the "latch free" wait event in the top 5.
Thanks
July 09, 2008 - 2:26 pm UTC
1) all of them. Statspack is full of "data". Our brains (our experience, our knowledge, our skill) can turn that data into something meaningful.
One person's "overload numbers" would be another person's "perfectly tuned".
2) what if latch free was in the top 10. why stop at 5, 5 is just a number. do you or do you not have a latching issue?
Don't tune ratios - use a ratio to say "hey, this might mean something - maybe, let's look at it".
If you are spending a lot of time waiting to parse - you need to get to the root cause, it doesn't matter if latch free is event #15 in a list, it might be the root cause.
Also remember - if the numbers are really small, then a ratio is really not very useful. If in an hour you spent 3 seconds of cpu and 12 seconds elapsed time parsing - so what (bad looking ratio, but ho-hum numbers)
Use math when you approach this, delve into the formula behind the ratio (edit sprepins.sql in $ORACLE_HOME/rdbms/admin, they are all there).
And most importantly, only tune that which is causing you a problem. Say you are at the customer site and they say "tune this application". If you run a statspack, see this ratio, and attempt to influence this ratio - are you doing good?
Only if the application you are asked to 'tune' parses a lot and its major wait is for...... parsing.
Otherwise, you are not.
key, July 22, 2008 - 11:28 am UTC
Thanks Tom for expert explanation.
I noticed a very low (< 10%) Parse CPU to Parse Elapsed in statspack. After further research, I found the DBA had implemented "alter system flush shared_pool" every 30 minutes. I think this causes high library cache latch activity every half hour, because most sessions then have to re-parse their SQL and PL/SQL - and that's why we see this ratio is so low. It is, in a way, artificially kept low.
Please elaborate on your thoughts about this situation and how it would affect performance.
Thanks
July 22, 2008 - 6:27 pm UTC
hah, my thoughts do not need a lot of elaboration:
stop, don't
Now, I won't comment on the ratio (for all I know, the 10% number came from a 30 minute statspack and represents 1 cpu second and 10 elapsed seconds of parse time - eg: yawn, so what - ratios, you need the numbers behind the number to see if it is an issue) - but the practice of flushing the shared pool every 30 minutes is up there in the ranks of "a really bad idea"
Latch free
Steve, August 04, 2008 - 10:58 am UTC
I have a system that is having issues and I'm sure it's to do with lack of bind variables. The script to query sql_text certainly throws a few out. But in the statspack report, although execute to parse ratio is very low, latch free is not in the top 5 wait events. Is that to be expected? Should I be looking elsewhere as well?
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 46.51 Optimal W/A Exec %: 99.97
Library Hit %: 80.54 Soft Parse %: 55.85
Execute to Parse %: 14.18 Latch Hit %: 99.99
Parse CPU to Parse Elapsd %: 94.15 % Non-Parse CPU: 50.12
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 76.12 71.99
% SQL with executions>1: 45.15 28.44
% Memory for SQL w/exec>1: 52.97 33.26
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
----------------------------------------- ------------ ----------- ------ ------
db file scattered read 56,876 99 2 27.0
CPU time 97 26.4
direct path read 9,835 62 6 16.8
db file sequential read 8,432 45 5 12.2
log file parallel write 2,311 18 8 4.9
-------------------------------------------------------------
August 04, 2008 - 2:08 pm UTC
Soft Parse %: 55.85
that indicates that binds are not being used, however the waits are so tiny here - I'd say this system was fairly "idle", unless the snapshot was for 5 seconds or something.
insufficient data to say anything else.
How come Soft Parse 99% and There are waits on Latches
Emad Kehail, August 25, 2008 - 7:22 am UTC
Hello Tom,
I am kindly asking to check this output from a Statspack report.
Snap Id Snap Time Sessions Curs/Sess Comment
------- ------------------ -------- --------- -------------------
Begin Snap: 1656 25-Aug-08 09:55:20 740 30.6
End Snap: 1666 25-Aug-08 10:12:13 817 27.8
Elapsed: 16.88 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 600M Std Block Size: 8K
Shared Pool Size: 256M Log Buffer: 2,048K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 42,760.79 2,394.25
Logical reads: 90,257.39 5,053.66
Block changes: 371.88 20.82
Physical reads: 451.07 25.26
Physical writes: 165.96 9.29
User calls: 816.59 45.72
Parses: 1,791.00 100.28
Hard parses: 13.90 0.78
Sorts: 191.37 10.72
Logons: 17.32 0.97
Executes: 5,569.57 311.85
Transactions: 17.86
% Blocks changed per Read: 0.41 Recursive Call %: 92.42
Rollback per transaction %: 78.79 Rows per Sort: 1356.63
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.67 In-memory Sort %: 99.36
Library Hit %: 99.85 Soft Parse %: 99.22
Execute to Parse %: 67.84 Latch Hit %: 99.10
Parse CPU to Parse Elapsd %: 10.49 % Non-Parse CPU: 90.49
Shared Pool Statistics Begin End
------ ------
Memory Usage %: 71.93 75.85
% SQL with executions>1: 82.52 86.42
% Memory for SQL w/exec>1: 84.75 90.99
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
CPU time 7,023 87.93
db file sequential read 47,877 260 3.26
db file scattered read 32,880 249 3.12
latch free 43,696 132 1.65
direct path read 71,792 125 1.56
-------------------------------------------------------------
The Soft Parse % is fine, I guess, but the Parse CPU to Parse Elapsd is bad. Also, latch free shows up among the top 5 wait events.
Is this an indication of a badly sized shared pool? The shared pool advisor shows no improvement if I increase the shared pool size!
One final note: the report was captured during heavy load, when most of the users were accessing the database through the web site.
Kindly asking for clues from you.
August 26, 2008 - 8:40 pm UTC
I like to say, there are three kinds of parses in Oracle:
a) the hard parse, it is very very very bad.
b) the soft parse, it is simply very VERY bad (only two very's)
c) the absence of a parse, no parse, this is the only good kind of parse
ALL PARSING, ALL OF IT, TAKES A LOT OF LATCHES, hard parsing is simply worse than soft parsing. soft parsing is *bad*
and you do more parsing than I've seen in a while. wow wow wow, 1700+ per second! wow - ouch and wow.
I'd be looking for ways to reduce the parsing - use stored procedures (they help a lot), use statement caching (jdbc), use better code (a program needs to parse a sql statement at least once and at most once, it is that second bit they botch entirely)
session_cached_cursors - you might set that to 100 if you haven't already.
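A toy model of why the parsing advice above matters - a Python set standing in for the shared pool, counting how many distinct statements must be hard parsed with literals versus binds (all table and column names invented):

```python
# A set stands in for the shared pool: each distinct SQL text it has
# never seen before costs a "hard parse"; a repeat is a cheap hit.
shared_pool = set()

def submit(sql):
    if sql not in shared_pool:   # miss -> "hard parse"
        shared_pool.add(sql)

# literals: 1000 users produce 1000 distinct statements
for user_id in range(1000):
    submit("SELECT * FROM orders WHERE user_id = %d" % user_id)

# binds: 1000 users share one statement, reused every time
for user_id in range(1000):
    submit("SELECT * FROM orders WHERE user_id = :1")

print(len(shared_pool))  # 1001 - 1000 literal variants plus one bind version
```

The real shared pool also pays latch acquisitions on every lookup, which is why even the soft-parse path is worth eliminating where the program allows it.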
Independent Database Systems with standard SQL features
Emad Kehail, August 31, 2008 - 6:15 am UTC
Hello Tom,
Many thanks for your valuable reply.
1) We are currently using ASP.Net to develop our web site and applications. I believe this will help the developers use statement caching in their .Net code:
http://download.oracle.com/docs/html/B14164_01/featOraCommand.htm#CIHCAFIG
2) I have already set session_cached_cursors to 50, but up to what point can I benefit from this? The SQL statements come from a web site where every page is a new session with a new connection. Would it be good to increase it to 100?
3) I do believe in your suggestion regarding stored procedures and packages. However, our manager is a believer in distributed systems, and he insists we develop all web applications with the concept of "Independent Database Systems", where SQL statements must not be written for specific database features....
I believe we have to use every possible feature in Oracle Database... Oracle must execute the SQL statement.. Oracle has a great optimizer that will run the SQL commands as best as possible..
I am kindly asking for your opinion regarding the "Independent Database Systems"
August 31, 2008 - 9:46 am UTC
2) The SQL statements are coming from web site where every page
is a new session with a new connection.
you are doomed. A new connection for each page. Ouch, it is like 1995 with cgi-bin all over again. Why are you not using a connection pool of some sort? Do you know what the most expensive operation in database land is?
connecting, making the physical connection, to a database.
3) hahahahahahahahaha
thanks for the laugh. Does your manager understand that databases are different, we have different sql syntax, we have MASSIVELY different concurrency models.
Look at the big applications that support database independence - say "peoplesoft". It supports "any" database right?
as long as it is one of three
as long as it is a very specific dot release of the three, with specific patches for each of the three
as long as it is on a very short list of operating systems
as long as they are using the version for that database (eg: there are differences) and database version and database patch level and OS platform.
You are doomed to mediocre performance at best on one database and horrible performance, probably entirely unreliable behavior, on the rest.
In my experience, the best way to achieve this cross database capability is to go 100% stored procedures and have someone that actually understands how to use database A write them for database A and someone else that knows how database B works to write them for database B. The application can just call stored procedures to do stuff and the stored procedures can do the transactions in the correct and proper way for each database.
Independent Database Systems
Emad Kehail, September 04, 2008 - 3:13 pm UTC
Many thanks Tom...
Your words made me more confident. I will ask all the parties to look over your chapter "Developing Successful Applications" in your book Expert One on One...
Again.. thanks so much
May you clarify me about version count?
trantuananh24hg, May 25, 2010 - 10:45 pm UTC
Good morning, sir!
As I understand it, the version_count in v$sqlarea reflects hard parsing, doesn't it? If so, why does it sometimes exceed the number of executions? I understood the flow as:
hard parse => caching, then soft parse => execution.
However, for the same statement, the version_count is double or triple the execution count, especially for statements that go over a database link.
Here is an example:
logvnp@VNP> col sql_text format a30
logvnp@VNP> select sum(version_count), sum(executions), sql_text
2 from v$sqlarea
3 having sum(version_count)>100
4 group by sql_text;
SUM(VERSION_COUNT) SUM(EXECUTIONS) SQL_TEXT
------------------ --------------- ------------------------------
110 5017 INSERT INTO LICHSU_DULIEU_CT (
THAYDOI_ID,TEN_TRUONG,GIATRI_C
U,GIATRI_MOI) VALUES(:B3 ,'NGA
Y_CN',:B1 ,:B2 )
169 13087 update sequences set max_id=:"
SYS_B_0" where tenseq=:"SYS_B_
1"
163 488 SELECT MAX (ngay_active) ngay_
active, MAX (ngay_ptc) ngay_pt
c, MAX (ngay_can) ngay
_can FROM (SELECT DECOD
E (a.kmkt_id, :"SYS_B_0", a.ng
ay_kt) ngay_active,
DECODE (a.kmkt_id, :"SYS_B_1"
, a.ngay_kt) ngay_ptc,
DECODE (a.kmkt_id, :"SYS_B
_2", a.ngay_kt) ngay_can
FROM ccs_common.km_dieukie
ns@dbl_2_billing a WHE
RE a.matinh = :"SYS_B_3" AND a
.kmht_id = :"SYS_B_4")
530 255 select max(ten_tt) ten_tt,
max(diach
i_tt) diachi_tt,
max(ms_thue) ms_thue
, max(m
a_kh) ma_kh,
sum(nogoc_thuc) nogoc_th
uc, sum
(thue_thuc) thue_thuc,
to_char(sum(to
ngno_thuc),:"SYS_B_000") tongn
o_thuc,
sum(nogoc_km) nogoc_km,
sum(thue_km)
thue_km,
to_char(sum(tongno_km),:"SY
S_B_001") tongno_km,
sum(nogoc) nogo
c, sum(
thue) thue,
to_char(sum(tongno),:"SYS
_B_002") tongno,
max(ma_tb) ma_tb,
to_char(
,
1052 303 select distinct qltn_tamthu.ch
uanhoa_string(kh.ten_tt) ten_t
t, qltn_tamthu.c
huanhoa_string(kh.diachi_tt) d
iachi_tt, kh.ms_
thue,kh.ma_kh,
nvl(no.nogoc,:"SY
S_B_000") nogoc_thuc,
nvl(no.thue,:"SYS_B_001"
) thue_thuc, to_
char(nvl(no.nogoc,:"SYS_B_002"
)+nvl(no.thue,:"SYS_B_003"),:"
SYS_B_004") tongno_thuc,
nvl(km.nogoc,:"SYS_B_
005") nogoc_km,
nvl(km.thue,:"SYS_B_006") thue
_km, to_char( nv
l(km.nogoc,:"SYS_B_007")+nvl(k
m.thue,:"SYS_B_008"),:"SYS_B_0
09") tongno_km,
(nvl(no.nogoc,:"SYS_B_010")-nv
l(km.nogoc,:"SYS_B_011")) nog
oc, (nvl(no.thue
,:"SYS_B_012")-nvl(km.thue,:"S
YS_B_013")) thue,
to_char( nvl(no.nogoc,:"SYS_
B_014")-nvl(km.nogoc,:"SYS_B_0
15")+nvl(no.thue,:"SYS_B_016")
-nvl(km.thue,:"SYS_B_017"),:"S
YS_B_018") tongno,
nvl(no.ma_tb,db.somay) ma_t
b, to_char(nvl(da_tra.
tra,:"SYS_B_019"),:"SYS_B_020"
) datra,
180 13076 SELECT max_id FROM sequences w
here tenseq =:"SYS_B_0"
427 473 Select :"SYS_B_0" from CCS_HT
H.QUYENLV a where HUY = :"SYS
_B_1" AND nam
e = :"SYS_B_2" AND THANGKYNO L
IKE :"SYS_B_3" and rownum<=:"S
YS_B_4"
1176 89 select a.ma_tinh from ccs_com
mon.thue_bao a where a.so_tb=:
"SYS_B_0"
172 104 SELECT count(*) FROM subadmin.
lich_su_tb@db_2_nsa
WHERE so_tb = :
"SYS_B_0" AND ngay_thang > ADD
_MONTHS (SYSDATE, -:"SYS_B_1")
/*AND thao_tac = 1*/ AND ma_
dv IN (:"SYS_B_2", :"SYS_B_3")
2051 128 SELECT COUNT (:"SYS_B_0")
FROM (SELECT
MAX (ngay_thang) ngay_thang, s
o_tb
FROM (SELECT a.so_tb, a.nga
y_thang
FROM ccs_common.
lich_su_tb a
WHERE a.so_t
b = :"SYS_B_1" AND a.ma_dv = :
"SYS_B_2")
GROUP BY so_tb)
WHERE ngay_thang >= :"
SYS_B_3"
15 rows selected.
The DB uses cursor_sharing=FORCE, and all of the above statements were called from PL/SQL - so "the literal string was not shared in the PL/SQL block". Did I understand that right?
Thank you!
May 26, 2010 - 7:38 am UTC
cursor sharing force, how sad. I know I don't like your development team at all - they must be very new to application development to make such a basic mistake (assuming this is your developed code).
...
180 13076 SELECT max_id FROM sequences w
here tenseq =:"SYS_B_0"
...
good gosh- they even invented their OWN SEQUENCES!!!!! How "brilliant" of them.
Guess what, you have much larger problems than worrying about version counts - much much much larger problems.
You have SQL Injection issues to be concerned with.
You have serialization issues to deal with (developer implemented sequences! ouch, that has got to hurt painfully)
You have "really bad coding practices" such as "lets count silly rows before deciding we want to do something"
172 104 SELECT count(*) FROM subadmin.
lich_su_tb@db_2_nsa
WHERE so_tb = :
"SYS_B_0" AND ngay_thang > ADD
_MONTHS (SYSDATE, -:"SYS_B_1"
I can see the code now:
select count
if count > 0
then
do something
end if
I would of course just code:
do something
if nothing exists to be acted on - you know what do something does? It returns... There is no need to COUNT first - it is not only a bad performance idea - it is a huge bug.
who is to say that between the time you count and the time you do something - the count doesn't change? anyone that relies on that count to print out something like "we processed N items" is wrong, that is definitely a bug.
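A minimal sketch of "just do something and see what it did", using Python's stdlib sqlite3 as a stand-in (table and column names invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lich_su (so_tb TEXT, processed INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO lich_su (so_tb) VALUES (?)",
                 [("a",), ("a",), ("b",)])

# no SELECT COUNT(*) first - the UPDATE itself reports what it touched,
# atomically, so "we processed N items" cannot go stale between two steps
cur = conn.execute("UPDATE lich_su SET processed = 1 WHERE so_tb = ?", ("a",))
print(cur.rowcount)  # 2
```

One round trip, one statement, and the count comes from the action itself rather than from a separate query whose answer may already be wrong by the time it is used.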
this is a mere 15 statements, I'm not too worried about them - they could be caused by many things - here I'll show you one example:
ops$tkyte%ORA11GR2> create table t ( x varchar(4000), y varchar2(4000), z varchar2(4000) );
Table created.
ops$tkyte%ORA11GR2> alter session set cursor_sharing=force;
Session altered.
ops$tkyte%ORA11GR2> alter system flush shared_pool;
System altered.
ops$tkyte%ORA11GR2> declare
2 type array is table of number;
3
4 l_data array := array( 30, 100, 1000 );
5 begin
6 for i in 1 .. l_data.count loop
7 for j in 1 .. l_data.count loop
8 for k in 1 .. l_data.count loop
9 execute immediate
10 'insert /* LOOK FOR ME */ into t (x,y,z) values ( ' ||
11 '''' || rpad( '*', l_data(i), '*' ) || ''', ' ||
12 '''' || rpad( '*', l_data(j), '*' ) || ''', ' ||
13 '''' || rpad( '*', l_data(k), '*' ) || ''') ';
14 end loop;
15 end loop;
16 end loop;
17 end;
18 /
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> column sql_text format a30
ops$tkyte%ORA11GR2> column address new_val ADDR
ops$tkyte%ORA11GR2> select parsing_user_id puid, parsing_schema_id psid,
2 sql_text, address, child_address
3 from v$sql
4 where sql_text like 'insert %LOOK FOR ME%';
PUID PSID SQL_TEXT ADDRESS CHILD_AD
---------- ---------- ------------------------------ -------- --------
304 304 insert /* LOOK FOR ME */ into 33271810 3DDF1844
t (x,y,z) values ( :"SYS_B_0",
:"SYS_B_1", :"SYS_B_2")
304 304 insert /* LOOK FOR ME */ into 33271810 3CB5C994
t (x,y,z) values ( :"SYS_B_0",
:"SYS_B_1", :"SYS_B_2")
304 304 insert /* LOOK FOR ME */ into 33271810 3E2413D0
t (x,y,z) values ( :"SYS_B_0",
:"SYS_B_1", :"SYS_B_2")
304 304 insert /* LOOK FOR ME */ into 33271810 3E3CEF7C
t (x,y,z) values ( :"SYS_B_0",
:"SYS_B_1", :"SYS_B_2")
304 304 insert /* LOOK FOR ME */ into 33271810 3E101A58
t (x,y,z) values ( :"SYS_B_0",
:"SYS_B_1", :"SYS_B_2")
304 304 insert /* LOOK FOR ME */ into 33271810 3EABFC8C
t (x,y,z) values ( :"SYS_B_0",
:"SYS_B_1", :"SYS_B_2")
304 304 insert /* LOOK FOR ME */ into 33271810 3EF77E08
t (x,y,z) values ( :"SYS_B_0",
:"SYS_B_1", :"SYS_B_2")
7 rows selected.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select bind_mismatch, BIND_LENGTH_UPGRADEABLE
2 from v$sql_shared_cursor
3 where address = '&ADDR'
4 /
old 3: where address = '&ADDR'
new 3: where address = '33271810'
B B
- -
N N
N Y
N Y
N Y
N Y
N Y
N Y
7 rows selected.
you can use the view v$sql_shared_cursor to see why the sql was not shared (refer to the Reference Guide for the meaning of the columns in this view). Note that BIND_LENGTH_UPGRADEABLE is new in 11g - it won't be there in 10g, where BIND_MISMATCH would be set instead.
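The "length upgradeable" behavior stems from Oracle graduating character binds into length buckets rather than binding exact lengths, so the 30/100/1000-byte literals in the demo above land in different buckets and spawn new child cursors. A sketch of that mapping - the 32/128/2000/4000 thresholds are an assumption from commonly cited documentation, not something this transcript proves:

```python
# Hypothetical sketch of Oracle's bind-length graduation for character data.
# Assumed bucket thresholds: 32, 128, 2000, then 4000 bytes.
def bind_bucket(length_bytes: int) -> int:
    for limit in (32, 128, 2000):
        if length_bytes <= limit:
            return limit
    return 4000

# The demo's three literal lengths fall into three distinct buckets,
# which is why the cursors could not all be shared:
print([bind_bucket(n) for n in (30, 100, 1000)])  # [32, 128, 2000]
```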
But again, this is about the least of your worries - this application is so poorly constructed security-wise that you should be very, very afraid of it in production. Hopefully none of the end users know this yet - else they'll be exploiting it.
Parse CPU to Parse Elapsd % > 100 %
sp00n, April 12, 2011 - 9:30 am UTC
Hi, how could this be possible?
Instance Efficiency Percentages (Target 100%)
---------------------------------------------------
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 97.08 In-memory Sort %: 100.00
Library Hit %: 98.01 Soft Parse %: 99.49
Execute to Parse %: 2.81 Latch Hit %: 99.96
Parse CPU to Parse Elapsd %: 138.30 % Non-Parse CPU: 95.39
April 13, 2011 - 9:20 am UTC
go look at the underlying numbers - look at the raw parse cpu time and parse elapsed time. Probably, they are very small.
cpu times and elapsed times are (necessarily) measured at DIFFERENT points in time and hence have small errors in them. If you have small errors in small numbers, the ratios you make from those numbers can be "off"
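A back-of-the-envelope illustration of that point, using made-up millisecond values: when two clocks with different granularities measure the same short event, the ratio can easily exceed 100%.

```python
# Suppose the parse really took about 5 ms. The CPU-time counter ticks in
# coarser units than the elapsed counter (exaggerated here for illustration),
# so the two measurements of the same event disagree:
parse_cpu_ms = 10      # rounded up by a coarse CPU-time clock
parse_elapsed_ms = 5   # captured by a finer wall-clock timer
ratio = round(100 * parse_cpu_ms / parse_elapsed_ms, 2)
print(ratio)  # 200.0 -- an "impossible" >100% born of tiny absolute errors
```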
Parse CPU to Parse Elapsed % higher than 100%
Balachandran, July 09, 2013 - 8:33 am UTC
Hi Tom,
One of the Statspack reports did show a > 100 figure for Parse CPU to Parse Elapsed %, even using the exact values of parse time cpu and parse time elapsed from sysstat. As you said, could this be because they are calculated at different times?
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 99.96 Redo NoWait %: 100.00
Buffer Hit %: 99.53 In-memory Sort %: 100.00
Library Hit %: 99.46 Soft Parse %: 95.78
Execute to Parse %: 81.49 Latch Hit %: 99.04
Parse CPU to Parse Elapsd %: 114.14 % Non-Parse CPU: 99.29
Statistic Total per Second per Trans
--------------------------------- ------------------ -------------- ------------
parse time cpu 52,832 1.8 0.1
parse time elapsed 46,288 1.6 0.0
More importantly, what should we do next if we see a > 100 value - brush it aside and continue analyzing, or take a cue from it in some way?
I ask because I do see "latch free" among the top foreground wait events:
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
---------------------------- ------------ ---------- ---------- ------ --------
PL/SQL lock timer 4,432 4,430 35,190 7940 0.0
async disk IO 7,783,486 0 33,034 4 7.0
db file parallel write 1,963,142 0 7,115 4 1.8
db file sequential read 8,332,129 0 5,098 1 7.5
latch free 955,963 32,835 3,410 4 0.9
And, the latch activity details show high immediate get miss % for library cache:
Pct Avg Wait Pct
Get Get Slps Time NoWait NoWait
Latch Requests Miss /Miss (s) Requests Miss
------------------------ -------------- ------ ------ ------ ------------ ------
cache buffers chains 8,834,119,102 0.9 0.0 396 77,335,589 1.1
cache buffers lru chain 13,598,852 0.2 0.0 10 121,418,085 0.9
library cache 524,479,302 0.7 0.1 187 38,642 37.4
row cache objects 54,168,799 1.3 0.8 2878 52,461 3.3
session allocation 575,230,884 5.3 0.0 1 0
Latch Sleep Breakdown:
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
cache buffers chains 8,834,119,102 81,520,387 32,835 81489436/318
95/278/105/0
session allocation 575,230,884 30,314,591 3,355 30311240/334
7/4/0/0
library cache 524,479,302 3,430,444 348,964 3083212/3455
10/1715/7/0
row cache objects 54,168,799 727,351 562,084 166366/55988
6/1099/0/0
redo allocation 157,262,127 1,792,433 1,162 1791275/1154
/4/0/0
shared pool 61,459,893 62,122 3,926 58240/3842/3
8/2/0
Library Cache Activity
Get Pct Pin Pct Invali-
Namespace Requests Miss Requests Miss Reloads dations
--------------- ------------ ------ -------------- ------ ---------- --------
BODY 23,586 0.5 23,587 0.7 0 0
CLUSTER 609 0.3 1,092 0.4 0 0
INDEX 285 47.7 285 51.9 0 0
SQL AREA 190,050 1.5 199,038,655 0.0 3,797 32,583
TABLE/PROCEDURE 11,508,533 10.3 21,919,885 5.5 1,904 0
TRIGGER 2,459,329 0.0 2,459,390 0.0 25 0
By the way, the data cover an 8 hour period when the batch was running. The database is running Oracle 9.2. Here's the Load Profile, in case you want to look at it:
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
--------------- ---------------
Redo size: 1,140,006.47 29,506.37
Logical reads: 141,783.92 3,669.74
Block changes: 9,373.33 242.61
Physical reads: 677.06 17.52
Physical writes: 372.47 9.64
User calls: 213.58 5.53
Parses: 985.50 25.51
Hard parses: 41.55 1.08
Sorts: 425.22 11.01
Logons: 1.15 0.03
Executes: 5,324.10 137.80
Transactions: 38.64
Thanks in advance for your time to address my question(s).
July 16, 2013 - 3:05 pm UTC
... As you said, could this be because, they are calculated at different times? ...
sure. and at different levels of granularity (some at 1/1000th of a second, others at 1/1000000th of a second)
it is computed as:
,'Parse CPU to Parse Elapsd %:' ch28n
, decode(:prsela, 0, to_number(null)
, round(100*:prscpu/:prsela,2)) pctval
and since your parse CPU is a little larger than parse elapsed, it'll be greater than 100%.
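Plugging the raw counters from the report above into that formula reproduces the headline number:

```python
# "parse time cpu" and "parse time elapsed" from the Statspack statistics section
prscpu, prsela = 52832, 46288
# mirrors the decode(): NULL if elapsed is zero, else round(100*cpu/elapsed, 2)
pctval = round(100 * prscpu / prsela, 2) if prsela else None
print(pctval)  # 114.14 -- matches "Parse CPU to Parse Elapsd %" in the report
```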
an 8 hour statspack is not very useful.
Too many versions_count for a sql
Richard, November 22, 2013 - 4:14 pm UTC
Hi,
We are using the 11.2.0.1 JDBC thin driver. For a particular SQL statement ( INSERT into I_XXXXX_LLLLLLLLLLLLLL ( isp_xxxxxx_reference, isp_xxxxxx_name, isp_xxxx_value ) ), we can see around 30,000 child cursors. All the inserts happen through a Java application. The main reasons for this, per v$sql_shared_cursor, are PURGED_CURSOR and BIND_LENGTH_UPGRADEABLE.
Could you explain how Java can cause the generation of child cursors due to PURGED_CURSOR and BIND_LENGTH_UPGRADEABLE? Please explain with an example if possible. We have already had 2 system outages from reaching the maximum limit of child cursors, and we are trying to fight it out with the Java developers.
When will your new book come out in the UK market? I am a big fan of your books.
Thanks
Richard
Too many versions_count for a sql
Richard, November 25, 2013 - 11:11 am UTC
Hi,
I know you are very busy. Could you please reply to my previous post, as I am trying to understand the cause?
Thanks
Richard