Thanks !!
Kunal Ojha, June 12, 2007 - 1:55 pm UTC
Thanks Tom!! That was certainly a useful piece of information.
Benchmarking response time for a select query.
Kunal Ojha, June 13, 2007 - 3:34 pm UTC
Hi Tom,
Let me put the problem statement another way.
What would be the best approach to benchmark the response time for a particular query?
The problem that we generally face is :
Once we have run a query with approach 1 and recorded its time as, say, t1 seconds, and then run the query with approach 2, we do not know for sure whether approach 2 is faster (if it is) because it is the better approach or simply because the data was already cached from the previous run.
June 13, 2007 - 3:38 pm UTC
run query q1 over and over (with many different inputs)
run query q2 over and over (with many different inputs)
discard first couple of observations, and last couple
use the observations in the middle
eg: emulate real life as much as possible.
By the way, in general your goal for most queries will be to reduce logical IO.
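The steps above can be sketched as a simple SQL*Plus harness. This is a minimal sketch, not Tom's actual script: the table T, the OBJECT_ID predicate, and the choice of 10 runs are all assumptions made for illustration. The idea is to time each run, vary the input, and report only the middle observations.

```sql
-- Hypothetical benchmark harness (table T and its OBJECT_ID column
-- are made-up names for illustration).
set serveroutput on
declare
    l_start    number;
    l_elapsed  number;
    l_dummy    number;
begin
    for i in 1 .. 10
    loop
        l_start := dbms_utility.get_time;
        -- vary the input on each run to emulate real life
        select count(*) into l_dummy from t where object_id > i;
        -- get_time returns hundredths of a second
        l_elapsed := dbms_utility.get_time - l_start;
        -- discard the first two and last two observations
        if i between 3 and 8
        then
            dbms_output.put_line( 'run ' || i || ': ' || l_elapsed || ' hsecs' );
        end if;
    end loop;
end;
/
```

Averaging the reported middle runs gives a number far less polluted by cold-cache effects on the first runs and by whatever else the system was doing at the edges of the test window.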
Thanks !
Kunal Ojha, June 13, 2007 - 3:47 pm UTC
Thanks a lot.
Our approach is more or less the same as what you suggested.
Discarding the first couple and last couple of test runs is definitely a good idea which we will incorporate !!
Thanks again for such a prompt response !
Mirjana, June 14, 2007 - 5:47 am UTC
Ramesh, June 29, 2007 - 1:36 am UTC
But how does this work?
July 02, 2007 - 2:08 pm UTC
how WHAT works?
Similar issue...
Manas, May 11, 2012 - 10:49 am UTC
We have a table on which we are executing multiple UPDATEs. The script is run through SQL*Plus (sqlplus -S on UNIX) and will be executed once only. The script is something like:
UPDATE <<table1>>
SET <<values1>>
WHERE <<clauses1>>;
UPDATE <<table1>>
SET <<values2>>
WHERE <<clauses2>>;
.
.
.
.
UPDATE <<table1>>
SET <<values500000>>
WHERE <<clauses500000>>;
COMMIT;
Now, since we are not using bind variables here, I tried setting CURSOR_SHARING to FORCE for the session. The first run (with CURSOR_SHARING=FORCE) took almost 26 minutes and the second (with CURSOR_SHARING=EXACT) took 27 minutes. Also, TERMOUT is not set to OFF; can that cause a performance issue?
Or should we follow the same approach mentioned above in this case too: record the execution time over many runs and ignore the first and last ones?
May 11, 2012 - 11:54 am UTC
you want cursor sharing force for this - not to make this job run faster - but to NOT WIPE OUT your entire shared pool - killing future performance.
printing to the screen could be an issue - sure, it adds to the elapsed time.
But I'd look at the process that causes me to do something so very very bad - and fix that.
I would want
a) a file with the "key" and the new values
b) create an external table on that
c) use merge to merge the external table into the existing table
bam, done, one sql statement, probably a few seconds. You are spending more time parsing sql than you are executing it here.
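Steps (a) through (c) might look like the following sketch. Everything here is hypothetical: the directory name, the file name upd_data.csv, the external table UPD_EXT, and the columns of T1 are all assumptions standing in for the real key and value columns.

```sql
-- (a) assumes a comma-separated file /tmp/upd_data.csv of "key,new value" rows
create directory dir as '/tmp';

-- (b) external table over that file (names are hypothetical)
create table upd_ext
( pk_col   number,
  new_val  varchar2(40)
)
organization external
( type oracle_loader
  default directory dir
  access parameters
  ( records delimited by newline
    fields terminated by ','
    ( pk_col, new_val )
  )
  location ( 'upd_data.csv' )
);

-- (c) one MERGE replaces the 500,000 individual UPDATE statements
merge into t1
using upd_ext x
on ( t1.pk_col = x.pk_col )
when matched then
    update set t1.col1 = x.new_val;
```

One parse, one execution, one round trip: the database joins the file to the table and applies every change in a single set-based statement, instead of parsing and executing half a million literal-SQL updates.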