Flushing Out Database Buffer Cache/parsed sql query from shared pool.

Question and Answer

Tom Kyte

Thanks for the question, Kunal.

Asked: June 11, 2007 - 3:07 pm UTC

Last updated: May 11, 2012 - 11:54 am UTC

Version: 9.0.5

Viewed 1000+ times

You Asked

Hi Tom,

On many occasions during load testing or other PL/SQL code testing, we run queries using different approaches to find the optimum one. In such cases we want to start fresh with each approach and do not want data or a parsed query to pre-exist from the previous run.

My question is:
Is there a way we can flush out a known data set from the database buffer cache?
Similarly, can we flush out the parsed version of a specific SQL query from the shared pool?

and Tom said...

no you don't

in real life, the cache would never be empty.

and you are almost certainly double cached - using a traditional file system - so your physical IOs are not real physical IOs after you flush (the link below shows how to flush in 9i and before). You would need to reboot the server to empty the OS filesystem cache!!! (else the physical IOs are artificially SUPER FAST because they are not true physical IOs!)

It is true that 10g introduced an alter system flush buffer_cache, but it is not really worthwhile.
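
For reference, the commands in question (standard syntax; both require the ALTER SYSTEM privilege, and both flush the entire cache or pool - there is no option here to flush just one data set or one query):

alter system flush buffer_cache;
alter system flush shared_pool;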

Having an empty buffer cache is AS FAKE as, IF NOT MORE SO than, what you are currently doing.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:12257624552691#12828851201155


Comments

Thanks !!

Kunal Ojha, June 12, 2007 - 1:55 pm UTC

Thanks Tom!! That was certainly a useful piece of information.


Benchmarking response time for a select query.

Kunal Ojha, June 13, 2007 - 3:34 pm UTC

Hi Tom,
Let me put the problem statement another way.
What would be the best approach to benchmark the response time for a particular query?
The problem that we generally face is:
Once we have run a query with approach 1 and recorded the time as, say, t1 seconds, then when we run the query with approach 2, we do not know for sure whether approach 2 is faster (if it is) because it is the optimum approach, or simply because the data already existed in the cache from the previous run.
Tom Kyte
June 13, 2007 - 3:38 pm UTC

run query q1 over and over (with many different inputs)
run query q2 over and over (with many different inputs)

discard first couple of observations, and last couple

use the observations in the middle

e.g., emulate real life as much as possible.


By the way, in general your goal will be to reduce logical IO (for most queries).
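
A minimal harness for that, as a sketch - big_table and the predicate are hypothetical stand-ins for the query under test, and dbms_utility.get_time returns elapsed time in centiseconds:

set serveroutput on

declare
  l_start number;
  l_cnt   number;
begin
  for i in 1 .. 12 loop
    l_start := dbms_utility.get_time;
    select count(*) into l_cnt
      from big_table
     where owner = 'SYS';   -- stand-in for the query under test
    dbms_output.put_line('run ' || i || ': ' ||
                         (dbms_utility.get_time - l_start) || ' cs');
  end loop;
end;
/

Discard the first and last couple of runs and compare the middle ones. To see the logical IO side, "set autotrace traceonly statistics" in SQL*Plus reports consistent gets per execution.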

Thanks !

Kunal Ojha, June 13, 2007 - 3:47 pm UTC

Thanks a lot.
Our approach is more or less the same as what you suggested.
Discarding the first and last couple of test runs is definitely a good idea, which we will incorporate!
Thanks again for such a prompt response!

Ramesh, June 29, 2007 - 1:36 am UTC

But how does this work?
Tom Kyte
July 02, 2007 - 2:08 pm UTC

how WHAT works?

Similar issue...

Manas, May 11, 2012 - 10:49 am UTC

We have a table on which we are executing multiple UPDATEs. The script is executed through SQL*Plus (sqlplus -S on UNIX) and will be executed only once. The script is something like:
UPDATE <<table1>>
SET <<values1>>
WHERE <<clauses1>>;

UPDATE <<table1>>
SET <<values2>>
WHERE <<clauses2>>;
.
.
.
.
UPDATE <<table1>>
SET <<values500000>>
WHERE <<clauses500000>>;

COMMIT;

Since we are not using bind variables here, I tried setting CURSOR_SHARING for the session to FORCE. The first run (with CURSOR_SHARING=FORCE) took almost 26 minutes and the second one (with CURSOR_SHARING=EXACT) took 27 minutes. Also, TERMOUT is not set to OFF; can that cause a performance issue?

Or should we follow the same approach as mentioned above in this case as well: take the execution time many times and ignore the first and last runs?
Tom Kyte
May 11, 2012 - 11:54 am UTC

you want cursor sharing force for this - not to make this job run faster - but to NOT WIPE OUT your entire shared pool - killing future performance.
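
For reference, the session-level setting in question (standard syntax):

alter session set cursor_sharing = force;

With FORCE, Oracle replaces the literals with system-generated binds, so the 500,000 otherwise-distinct UPDATE statements can share cursors instead of each leaving its own parsed copy in the shared pool.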

printing to the screen could be an issue - sure, it adds to the elapsed time.

But I'd look at the process that causes me to do something so very very bad - and fix that.

I would want

a) a file with the "key" and the new values
b) create an external table on that
c) use merge to merge the external table into the existing table

bam, done, one sql statement, probably a few seconds. You are spending more time parsing sql than you are executing it here.
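
A sketch of that external-table-plus-merge approach - the directory, file, and column names here (data_dir, new_values.dat, key_col, val) are hypothetical placeholders:

create or replace directory data_dir as '/data/loads';

create table updates_ext (
  key_col number,
  new_val varchar2(100)
)
organization external (
  type oracle_loader
  default directory data_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
    missing field values are null
  )
  location ('new_values.dat')
)
reject limit unlimited;

merge into table1 t
using updates_ext u
on (t.key_col = u.key_col)
when matched then
  update set t.val = u.new_val;

commit;

One parse and one execution instead of 500,000 of each.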