which is faster? static cursor or dynamic cursor
Question and Answer

Tom Kyte

Thanks for the question, Rohil.

Asked: September 02, 2002 - 11:21 pm UTC

Last updated: July 16, 2004 - 1:58 pm UTC

Version: 9i

Viewed 1000+ times

You Asked

hi tom,
"which is faster "a static cursor OR a dynamic cursor(using cursor variables and REF cursor)" ?
is there any significant difference in execution speeds? (or does it depend on anything else?)

Given an option of using static and dynamic cursor, which one should be used?

thanx in advance
bye
rohil

and Tom said...

STATIC code is always better.

You use dynamic sql only when there is quite simply NO WAY to do it statically.

Static sql has the advantages of

o speed
o ease of coding
o ease of maintenance
o automatic dependency tracking (if you statically reference something, it is
  recorded in the data dictionary and you can query up that fact. If you
  dynamically reference it -- it is NOT recorded)
o ease of debugging/testing (with dynamic sql, the query isn't generated until
  runtime -- will that query work? I don't know until I run EVERY possible
  combination. With STATIC sql I know at compile time that it's valid sql)


Always use static sql until you cannot.
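
For illustration, here is a minimal sketch of the two styles (EMP is just a stand-in demo table, not anything from the question). The static cursor is checked when the block is compiled; the ref-cursor version is only parsed when the OPEN runs:

-- static cursor: validated at compile time, dependency recorded in the dictionary
declare
    cursor c is select ename from emp where deptno = 10;
    l_ename emp.ename%type;
begin
    open c;
    fetch c into l_ename;
    close c;
end;
/

-- dynamic (ref cursor opened for a string): the text is only parsed at runtime
declare
    c       sys_refcursor;
    l_ename emp.ename%type;
begin
    open c for 'select ename from emp where deptno = :d' using 10;
    fetch c into l_ename;
    close c;
end;
/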


Comments

Static vs Dynamic Cursor

Ashok Shinde, September 03, 2002 - 8:27 am UTC

This has helped me a lot. But can you use a static cursor with datatypes like LONG, BLOB, and LOB?
Thanks,
Ashok

Tom Kyte
September 03, 2002 - 8:41 am UTC

Yes, Yes, and finally -- yes.



But what about this case

Ed, September 03, 2002 - 11:53 am UTC

Tom,

I saw this SQL statement as a static piece of SQL in a procedure (that works). Surely the NVLs are a bad idea, and this is a case for using Native Dynamic SQL even though the static version works.

SELECT forename,
       surnames .... etc
  FROM az_data
 WHERE surname LIKE UPPER(p_surname)||'%'
   AND id = NVL(p_id,id)
   AND cat_code = NVL(p_cat_code,cat_code)
   AND amt BETWEEN NVL(p_low_amt,0) AND NVL(p_hi_amt,9999999999999)
   AND req_date BETWEEN NVL(p_low_date,TO_DATE('20000101','YYYYMMDD'))
                    AND NVL(p_hi_date,TO_DATE('29990101','YYYYMMDD'))
   AND postcode LIKE UPPER(p_postcode)||'%'
 ORDER BY surname, forenames;


Tom Kyte
September 03, 2002 - 12:35 pm UTC

This is a case where dynamic sql is called for, PROBABLY, maybe, it depends...

It is quite probable that this statement would run better if the where clause (which apparently isn't really known until runtime) was generated at runtime.

Now, if the query is using an index on SURNAME or POSTCODE or SURNAME,POSTCODE -- that would not be the case. The STATIC sql (as it is) would be better in all probability.

But yes, thanks for making me tone it down a tad. Common sense must be applied. See

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1288401763279

for how I might code this if such an index was not being used (or if ID was highly selective and indexed for example -- meaning if the query had "where id = p_id" it should use the index on ID)
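
For what it is worth, here is a rough sketch (not the exact code from that link) of how the where clause might be generated at runtime while still using bind variables -- only the predicates the caller actually supplied are added, and a harmless placeholder keeps the USING list the same in every case. The p_surname, p_id and p_cat_code parameters are the ones from the query above; the procedure name and the ref cursor out parameter are just for illustration:

create or replace procedure get_az_data(
    p_surname  in varchar2,
    p_id       in number,
    p_cat_code in varchar2,
    p_rc       out sys_refcursor )
as
    l_query varchar2(4000);
begin
    l_query := 'select forename, surname
                  from az_data
                 where surname like upper(:p_surname)||''%'' ';

    if p_id is not null then
        l_query := l_query || ' and id = :p_id ';
    else
        l_query := l_query || ' and (1=1 or :p_id is null) ';
    end if;

    if p_cat_code is not null then
        l_query := l_query || ' and cat_code = :p_cat_code ';
    else
        l_query := l_query || ' and (1=1 or :p_cat_code is null) ';
    end if;

    l_query := l_query || ' order by surname, forename';

    -- three placeholders in every version of the text, so always three binds
    open p_rc for l_query using p_surname, p_id, p_cat_code;
end;
/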


Query

San, September 03, 2002 - 1:17 pm UTC

Hello Tom,

I have a table to record employee activity,

Table: Employee_activity

EmployeeId  Start_time  End_time(date)  Activity  Day
1           9.00 am     10.00 am        Worked    11/01
1           10.00 am    11.00 am        Lunch     11/01
2           9.00 am     12.00 am        Worked    11/01
1           9.00 am     10.30 am        Worked    11/02
1           11.00 am    12.00 am        Worked    11/02


I would like to get output like this (with a record for the unaccounted time):

EmployeeId  Start_time  End_time(date)  Activity     Day
1           9.00 am     10.00 am        Worked       11/01
1           10.00 am    11.00 am        Lunch        11/01
1           11.00 am    5.00 pm         Unaccounted  11/01

2           9.00 am     12.00 am        Worked       11/01
2           12.00 am    5.00 pm         Unaccounted  11/01

1           9.00 am     10.30 am        Worked       11/02
1           10.30 am    11.00 am        Unaccounted  11/02
1           11.00 am    12.00 am        Worked       11/02
1           12.00 am    5.00 pm         Unaccounted  11/02

The total working hours are 8.

I am using Oracle 8.1.7. Can you please explain how I can do this in a single query? If it cannot be done in a single query, please suggest other options with an example.

Thanks Tom.
Have a blessed and highly favored day.
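
One possible approach, sketched against the sample table above, uses the LEAD analytic function (available in 8.1.7) to generate an "Unaccounted" row for each gap; it assumes START_TIME and END_TIME are DATE values and that the day ends at 5 PM, as in the desired output:

select employeeid, start_time, end_time, activity, day
  from (
        select employeeid, start_time, end_time, activity, day
          from employee_activity
        union all
        -- an "Unaccounted" row from each end_time to the next start_time
        -- (or to 5 PM when there is no later row for that employee/day)
        select employeeid,
               end_time as start_time,
               nvl( lead(start_time) over (partition by employeeid, day
                                           order by start_time),
                    trunc(day) + 17/24 ) as end_time,
               'Unaccounted' as activity,
               day
          from employee_activity
       )
 where start_time < end_time
 order by employeeid, day, start_time;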

Thanks for the question regarding "which is faster? static cursor or dynamic cursor", version 9i

butch larue, March 02, 2004 - 5:21 pm UTC

this is interesting BECAUSE according to the Oracle 8i Application Developer's Guide, chapter 9, dynamic SQL, performance improvements: "The performance of native dynamic SQL in PL/SQL is comparable to the performance of static SQL because the PL/SQL interpreter has built-in support...." I am becoming disillusioned, as we have a system that creates and runs dynamic SQL that is waaaaaaaaaay sloooooooooooow. I am trying to remedy the situation but don't know where to begin. As the time on the clock runs along, fewer and fewer statements per time increment are run. What are the first tuning steps to take? Memory? Disk?

Tom Kyte
March 02, 2004 - 7:19 pm UTC

dynamic SQL in PL/SQL is comparable to the performance
^^^^^^^^^^


dynamic sql is not necessarily any slower than static sql for raw query performance. not at all.


have you tkprof'ed?
have you shown that "static sql is waaaaaaaaaaaaaaay faster"?
have you used statspack at all?
have you begun to identify where the bottleneck(s) are?

that is the first step (memory, disk -- not a chance. find out what the problem is and then address it -- not the other way around)
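
For reference, one way to gather that evidence (the procedure call and trace file name below are only placeholders -- the real trace file appears in user_dump_dest):

alter session set timed_statistics = true;
alter session set sql_trace = true;

-- run whatever is slow (placeholder name)
exec run_the_slow_dynamic_sql_job

alter session set sql_trace = false;

-- then format the trace file at the operating system prompt, e.g.
-- tkprof ora_<spid>.trc report.txt sys=no sort=prsela,exeela,fchela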

OK

Srinivas, March 03, 2004 - 12:50 am UTC

Dear Tom,
Is there any better way to write this query?
SQL> select * from worker where age > 35;
Please do reply.


Tom Kyte
March 03, 2004 - 9:42 am UTC

why?

Dynamic SQL and Query block in tkprof

Govind, July 16, 2004 - 7:53 am UTC

Hi Tom,

I have a procedure which uses dynamic SQL to fetch data from a set of tables, and it is running very slowly.

In the tkprof output it does not show much time in CPU or elapsed, but it shows a very high 'query' figure (number of buffers gotten for consistent read) -- up to 60,000 for every fetch.

Would it be any better if it were changed to normal static SQL?

Is the high number of query buffers due to the fact that the SQL is dynamic?

Can having that many query buffers reduce performance?

Many Thanks!!

Cheers!!

Tom Kyte
July 16, 2004 - 11:18 am UTC

there will be no difference in query performance, no.


you need to tune the query -- you most likely have some really poor indexes out there.

Dynamic SQL performance

Govind, July 16, 2004 - 11:56 am UTC

Tom

I have checked the indexes and table fragmentation, and they look fine.

I have added the query we are using below for your perusal. One more thing worth mentioning: this procedure, which has about 10 such dynamic SQL statements (similar to the one below), is called in a loop, and for a single execution of the parent/calling procedure it is likely to be called about 500 to 1000 times.

Not sure if that will make any difference.

The SQL used is the one below, with the difference that it is built dynamically, as can be seen from the hard-coded values.

SELECT scccha.identifier scccha_id,
       scccha.cfrequ_identifier,
       scccha.client_charge,
       greatest(scccha.start_date, to_date('01-APR-04', 'DD-MON-RRRR')),
       least(scccha.end_date, to_date('31-MAR-05', 'DD-MON-RRRR')),
       scccha.start_date,
       scclie.end_date,
       scserv.end_date,
       scclie.identifier scclie_id,
       scccha.created_by,
       scserv.identifier scserv_id
  FROM sp_cont_client_charges scccha,
       sp_contract_clients scclie,
       sp_contract_services scserv
 WHERE scccha.scclie_identifier = scclie.identifier
   AND scclie.scserv_identifier = scserv.identifier
   AND scserv.scontr_identifier = 50
   AND scclie.scontr_identifier = 50
   AND scclie.self_funding = 'N'
   AND scccha.start_date <= to_date('31-MAR-05', 'DD-MON-RRRR')
   AND nvl(scccha.end_date, '31-DEC-9999') >= to_date('01-APR-04', 'DD-MON-RRRR')
   AND NOT EXISTS (SELECT 1
                     FROM sp_cont_serv_costs
                    WHERE scserv_identifier = scserv.identifier
                      AND contract_type IN ('C', 'L'))
   AND EXISTS (SELECT 1
                 FROM sp_cont_client_acodes
                WHERE scccha_identifier = scccha.identifier
                  AND rownum = 1)
 ORDER BY scccha.identifier


Tom Kyte
July 16, 2004 - 1:58 pm UTC

sorry, if you are doing 6,000 lio's per row, you are suffering from major league index abuse. indexing data that isn't selective enough or not having the proper number of columns in the index itself.

your algorithm sounds to be the culprit here -- you'll want to document in English what it needs to do and then look at a set-based solution -- don't loop yourself, let the server do what the server does best -- join, query, read, process.
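
To illustrate the "set based" point with a sketch (CONTRACTS_TO_PROCESS, CHARGE_RESULTS and PROCESS_ONE_CONTRACT are made-up names standing in for whatever the real loop does; the sp_ tables and columns are from the query above):

-- row-by-row ("slow by slow"): the procedure and its ~10 dynamic
-- queries are re-executed for every contract
begin
    for x in ( select scontr_identifier from contracts_to_process )
    loop
        process_one_contract( x.scontr_identifier );
    end loop;
end;
/

-- versus a single set-based statement that handles every contract at once
insert into charge_results ( scontr_identifier, scccha_id, client_charge )
select scserv.scontr_identifier, scccha.identifier, scccha.client_charge
  from sp_cont_client_charges scccha,
       sp_contract_clients    scclie,
       sp_contract_services   scserv
 where scccha.scclie_identifier = scclie.identifier
   and scclie.scserv_identifier = scserv.identifier
   and scserv.scontr_identifier in ( select scontr_identifier
                                       from contracts_to_process );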