
Question and Answer

Chris Saxon

Thanks for the question, Sonali.

Asked: December 27, 2001 - 2:09 pm UTC

Last updated: July 06, 2020 - 2:34 pm UTC

Version: 8.1.7/ 8.0.6

Viewed 100K+ times!

You Asked

I am trying to use bind variables for the 1st time.

I have a part of the code here from a test stored procedure-

Declare
x number;
y number;
z number;
a date;
b date;
c number;
d number;
e number;

execute immediate 'insert into Auth (AUTH_ID, Auth_Res_ID, Auth_Work_ID, Auth_Role_ID, Auth_Status,
Auth_Start_Date, Auth_Fin_Date, Auth_Restricted, Auth_Timecard,
Auth_Secondary_Status, Auth_Amount, Auth_Rem_Amount, Auth_Schedule)
Values (Auth_SEQ.NextVal, :x, :y, :z, 70, :a, :b , :c, :d, 40, :e, 0, 10)'
Using inResID, inWorkID, inRoleID, dtStartDate, dtEndDate , inRestricted, inTimecard, inAmount;


I keep getting error -

MGR-00072: Warning: PROCEDURE PR_SONALI created with compilation errors.
SQLWKS>
SQLWKS> show errors
Errors for PROCEDURE PR_SONALI:
LINE/COL ERROR
--------------------------------------------------------------------------------
112/22 PLS-00103: Encountered the symbol "insert into Auth (AUTH_ID, Auth_


What is wrong with my code? I am reading Expert One-on-One Oracle by Tom Kyte and was trying this.

Also, if my stored procedure is already compiled, then how is a bind variable used inside the procedure going to make a difference as far as parsing goes?

I tried this on Oracle 8.1.7.

Also, if I have to do this on Oracle 8.0.6, I will need to use the
DBMS_SQL package, right?
How do I write this code to use the DBMS_SQL package?

Thanks
Sonali


and Tom said...

Well, in order to use bind variables -- just use PLSQL variables!!! PLSQL is awesome for this reason: every reference to a PLSQL variable is in fact a BIND VARIABLE.

Just

insert into auth ( ... ) values ( auth_seq.nextval, inResId, inWorkId, .... );

inResid IS a bind variable.
inWorkId IS a bind variable.
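
For reference, here is a minimal sketch of the original insert rewritten as static SQL inside a procedure (the parameter names and datatypes are assumed from the question; adjust them to the real table definitions). Every reference to a parameter or local variable below is automatically a bind variable:

create or replace procedure pr_sonali (
    inResID      in number,
    inWorkID     in number,
    inRoleID     in number,
    dtStartDate  in date,
    dtEndDate    in date,
    inRestricted in number,
    inTimecard   in number,
    inAmount     in number )
as
begin
    -- static SQL: no execute immediate needed, every PL/SQL variable is bound
    insert into auth ( auth_id, auth_res_id, auth_work_id, auth_role_id, auth_status,
                       auth_start_date, auth_fin_date, auth_restricted, auth_timecard,
                       auth_secondary_status, auth_amount, auth_rem_amount, auth_schedule )
    values ( auth_seq.nextval, inResID, inWorkID, inRoleID, 70,
             dtStartDate, dtEndDate, inRestricted, inTimecard, 40, inAmount, 0, 10 );
end;
/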




Comments

I did not know PL/SQL variable is a BIND variable !!

Sonali, December 27, 2001 - 2:39 pm UTC

Great thanks !!!
So one should use bind variables when passing SQL statements directly to the server, right?
Unfortunately our architecture does not support Oracle bind variables, because it supports both SQL Server and Oracle.



Tom Kyte
December 27, 2001 - 2:48 pm UTC

guess what -- SQL SERVER does bind variables too. You can use them (and SHOULD USE THEM) in both environments.

Bind variables are not an Oracle'ism.


Is the following approach right

Kulguru, December 27, 2001 - 3:20 pm UTC

Tom

I observed that my predecessor developers have coded a lot of stuff like

create or replace procedure ....

begin

....
....
select ..... from ..... where
....
...
col3='hard coded variable'

...
...
group by
order by

....other code
end;


What I have done is simply declare a variable for the hardcoded value.

like

lv_var varchar2(30) := 'hardcoded value';


and then substituted this variable for the hardcoded value in the queries as shown above.

I have done this for all such occurrences of hardcoded values in SQL.

Do you think this is the right way of replacing hardcoded values with bind variables?

I have done this in a test instance; after taking your opinion I will move it into prod.

Further, can you give the link to the thread where you have discussed the adverse impact of using hardcoded values on the library cache memory.


Tom Kyte
December 27, 2001 - 3:34 pm UTC

In this case -- the bind variable is not necessary (believe it or not).

With this query -- the value that is hard coded is also IMMUTABLE, it'll never change. This query is shareable -- everyone that runs the procedure runs the same exact sql. It's perfect. No need to bind in this particular case.

It is when you are dynamically building a query -- putting hard coded literals in there -- that you MUST bind.

The link is my book actually. I go over it time and time and time again. There are many things to consider - increased parse times due to hard parses instead of soft, long lines to get into the library cache due to excessive latching caused by hard parses, excessive CPU used to optimize queries, trashing of the shared pool since it fills up with unique statements that will never be reused.....
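
To make that last distinction concrete, here is a minimal sketch (hypothetical table T and column COL3) of the dynamic-SQL case being described -- first built with a concatenated literal, then with a bind:

declare
    l_val varchar2(30) := 'SOME VALUE';
    l_cnt number;
begin
    -- hard coded literal: every distinct l_val produces a brand new statement to hard parse
    execute immediate
        'select count(*) from t where col3 = ''' || l_val || '''' into l_cnt;

    -- bind variable: one shareable statement, the value is supplied at execute time
    execute immediate
        'select count(*) from t where col3 = :x' into l_cnt using l_val;
end;
/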



Related to bind variable

A reader, December 27, 2001 - 3:49 pm UTC

Case 1:
Procedure xxyy( param1 ) is
begin
open ref_cur_a for 'select * from tab1 where col=:y' using param1;
.............

Case 2:
Procedure xxyy( param1 ) is
begin
open ref_cur_a for select * from tab1 where col=param1;
...

Will a bind variable be used in case 2 for col?



Tom Kyte
December 27, 2001 - 5:27 pm UTC

sure will -- you can see this via TKPROF and SQL_TRACE:

ops$tkyte@ORA817DEV.US.ORACLE.COM> create or replace procedure p( param in varchar2 )
  2  as
  3          type rc is ref cursor;
  4          l_cursor rc;
  5  begin
  6          open l_cursor for select * from dual THIS_IS_OUR_QUERY where dummy = param;
  7  end;
  8  /

Procedure created.

ops$tkyte@ORA817DEV.US.ORACLE.COM> 
ops$tkyte@ORA817DEV.US.ORACLE.COM> alter session set sql_trace=true;

Session altered.

ops$tkyte@ORA817DEV.US.ORACLE.COM> exec p( 'xxx' );

PL/SQL procedure successfully completed.

tkprof shows us:

SELECT *
FROM
 DUAL THIS_IS_OUR_QUERY  WHERE DUMMY = :b1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

it was rewritten as a bind.  ALL plsql variable references will be "bound" 

Well, I feel as if I am running around and around the same bush

Kulguru, December 27, 2001 - 4:01 pm UTC

****************************

Can you go into the specifics, or in other words can you dig further into your statement

"With this query -- the value that is hard coded is also IMMUTABLE, it'll never change. This query is shareable -- everyone that runs the procedure, runs the
same exact sql. Its perfect. No need to bind in this particular case.
"

Every time the sql given in the procedure executes, doesn't it have to be parsed, as a hard coded value has been used in the where clause ... HOW IS THE QUERY SHAREABLE TO EVERYONE THAT RUNS THE PROCEDURE.. HOW?


AND

Out of all the consequences of not using bind variables, how will the below 2 happen , please explain

1.long lines to get into the library cache due to excessive latching caused by hard parses,
2.excessive CPU used to optimize queries( why in this world do we have to use excessive CPU to optimize queries).

Lastly..

give us the link for an example for the statement

"It is when you are dynamically building a query -- putting hard coded literals in there -- that you MUST bind."


Sorry Tom, I might be sounding a bit frustrated but believe me I am. I want to get this into my head. Please help.


Tom Kyte
December 27, 2001 - 5:32 pm UTC

A query like this:

select * from emp where empno = 5;

If executed over and over and over -- is just as shareable as:

select * from emp where empno = :x;


You need not bind that particular one -- if you execute it OVER and OVER and OVER. It is when people build queries on the fly (using EXECUTE IMMEDIATE or DBMS_SQL in plsql) that you want to take care to BIND values.

Anytime you have STATIC sql in PLSQL -- your job is done. All plsql variable references will be bound and any literals in there are OK cause the query will be reused over and over again.

The query is shareable.


As for #2, excessive CPU.

Ok, instead of hard parsing "select * from emp where ename = :x" once and then soft parsing it 1,000 times (thus SKIPPING the optimization step 1,000 times) you instead submit 1001 queries like:

select * from emp where ename = 'A';
select * from emp where ename = 'B';
...
select * from emp where ename = 'KING';
...
select * from emp where ename = 'AFDAFDFSDAS';

You have now run through query optimization (a cpu intensive task) 1,000 more times -- using excessive CPU (1,000 times MORE cpu than you needed to use).




How should I know which sql's are not using bind variables

Kulguru, December 27, 2001 - 5:53 pm UTC

Tom

Can you give me the sql which will list out the sql's where bind variables are not being used ,and are being parsed again and again like your example

select * from emp where ename = 'A';
select * from emp where ename = 'B';
...
select * from emp where ename = 'KING';
...
select * from emp where ename = 'AFDAFDFSDAS';

Which v$ views do I need to query( V$sql, V$sqlarea, or v$sqltext, by the way what is the difference b/w the 3), or v$library_cache.

Thank you


Why does this happen, is there a limit on number of sql's which can get ...

Kulguru, December 27, 2001 - 8:08 pm UTC

hard parsed at a given time, and (doesn't this happen when too many queries are issued at a given time, even though they carry bind variables)

How does using of bind variables avoid the following..

"long lines to get into the library cache due to excessive latching caused by hard parses"



Tom Kyte
December 28, 2001 - 9:02 am UTC

A hard parse happens when the SQL you submitted isn't found in the shared pool and we have to do ALL of the steps necessary to process it -- including optimization.

A soft parse happens when the SQL you submitted IS found in the shared pool and we can skip some steps in the processing of it.

It does not have anything to do with how many queries are submitted simultaneously.


Using bind variables avoids those long lines by the fact that it promotes the sharing of SQL (the reuse of plans) and allows you to SKIP steps that would otherwise cause you to get a latch (lock) on the library cache. You skip processing -- you avoid waiting.
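
If you want to watch this in your own session, a quick illustrative query against V$MYSTAT/V$STATNAME (you need select access to the V$ views) shows how many of your parses were hard -- run it before and after your test and compare the deltas:

select a.name, b.value
  from v$statname a, v$mystat b
 where a.statistic# = b.statistic#
   and a.name in ( 'parse count (total)', 'parse count (hard)' );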

Hard Parses eat up CPU

Jim Kennedy, December 28, 2001 - 8:57 am UTC

Hard parses are very CPU intensive. I worked on a system where an import process benchmarked at 26 minutes and pegged the server's CPU due to not using bind variables.

Got the developer to switch to using bind variables and the same benchmark took under 6 minutes and had 35% CPU peaks with an average CPU usage of 5%.

Quite a difference.

With repeated DML if you can avoid hard parses and soft parses as much as possible you will be amazed at how much faster your DML will be (after the 1st hard parse) and how much more scalable (more users on the same box) your application will be.

Reader

Reader, December 28, 2001 - 11:09 am UTC

Tom,

In your script to find literal SQLs in the Library Cache,

URL
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1163635055580 <code>

why do you replace "'" with "'#"

in the script
<excerpt>
elsif ( l_char = '''' and NOT l_in_quotes )
then
l_in_quotes := TRUE;
l_query := l_query || '''#';
end if;
</excerpt>
Thanks

Tom Kyte
December 28, 2001 - 11:33 am UTC

I am turning a string like:


select * from emp where ename = 'KING'

into

select * from emp where ename = '#'

As I'm going through the string, byte by byte -- I look for an opening quote and when I find it -- I turn it into '#, then I "eat" the rest of the bytes until I find the close quote. Then we stick that on. That turns 'KING' into '#' so that all character string literals look exactly the same.
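
For readers who cannot reach the linked script, here is a simplified sketch of such a function, reconstructed around the excerpt above (the authoritative version is in the linked thread and the book); it also collapses runs of digits so numeric literals look the same:

create or replace function remove_constants( p_query in varchar2 ) return varchar2
as
    l_query     long;
    l_char      varchar2(1);
    l_in_quotes boolean default FALSE;
begin
    for i in 1 .. length( p_query )
    loop
        l_char := substr( p_query, i, 1 );
        if ( l_in_quotes and l_char = '''' )
        then
            l_in_quotes := FALSE;
        elsif ( l_char = '''' and NOT l_in_quotes )
        then
            l_in_quotes := TRUE;
            l_query := l_query || '''#';
        end if;
        if ( NOT l_in_quotes )
        then
            l_query := l_query || l_char;
        end if;
    end loop;
    -- make all numeric literals look the same as well
    l_query := translate( l_query, '0123456789', '@@@@@@@@@@' );
    for i in 0 .. 8
    loop
        l_query := replace( l_query, lpad('@',10-i,'@'), '@' );
        l_query := replace( l_query, lpad(' ',10-i,' '), ' ' );
    end loop;
    return upper(l_query);
end;
/

One common way to use it is to copy SQL_TEXT from V$SQLAREA into a scratch table, add a column populated with remove_constants(sql_text), and group by that column -- the "shapes" that occur many times are the statements being generated with literals instead of binds.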

How to use bind variables

Mark, January 30, 2002 - 3:11 pm UTC

Most useful for fixing legacy code.

performance gain look up in Trace file TKPROF results When using Bind Variables

Vikas Sharma, September 02, 2002 - 6:08 am UTC

Hi Tom,
Thanks for my last answer.
I have a PL/SQL block like :
declare
v varchar2(30);
begin
select dname into v from dept where DEPTNO = :x;
end;
/
I am using a bind variable in this. So as per my understanding, the SELECT stmt in this block should get parsed only once and should be shareable by other users.

Now in SQL*Plus I create a new session and execute this block for the first time, and then a second time, and in the results of my trace with tkprof I don't see much difference. Following are the results:
When I run it the first time:
********************************************************************************

declare
v varchar2(30);
begin
select dname into v from dept where DEPTNO = :x;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.01          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.01          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41
********************************************************************************

SELECT DNAME
FROM
DEPT WHERE DEPTNO = :b1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          1          4           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          1          4           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41 (recursive depth: 1)
******************************************************************

When I run it the second time:
********************************************************************************

declare
v varchar2(30);
begin
select dname into v from dept where DEPTNO = :x;
end;

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.02          0          0          0           0
Execute      1      0.00       0.00          0          0          0           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.02       0.02          0          0          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 41
********************************************************************************

SELECT DNAME
FROM
DEPT WHERE DEPTNO = :b1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          1          4           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          1          4           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 41 (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL DEPT

********************************************************************************

The cpu usage was more when I executed it the second time. But I can see one more thing extra, that is:

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL DEPT

Would you please explain this behaviour to me?

Regards,
Vikas




Tom Kyte
September 02, 2002 - 10:24 am UTC

When dealing with timings -- you have to take them with a grain of salt. There is no difference between 0.00 and 0.02 here. The timings are gathered with 1/100th of a second (1/1000th in 9i) precision. Things that take less than 1/100th of a second can result in 0 being contributed -- or 0.01 being contributed. It depends on WHEN the event being measured took place.

Hence, these two run times are 100% the same in all respects.


We can see the shared sql kicking in with:

Misses in library cache during parse: 0


in the second example -- that indicates a library cache hit. Now as to why the one has the plan and the other doesn't -- the plans are kicked out when the cursor is totally closed which for the plsql stuff probably won't happen UNTIL YOU LOG OUT.

So, I'll guess that you ran tkprof on the trace file the first time BEFORE exiting sqlplus (so the trace was incomplete). The second time, you exited sqlplus and then ran it (and the trace file was complete).
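
In other words, the workflow (sketched here with an illustrative trace file name) is: trace, exit, then tkprof --

alter session set sql_trace = true;

-- run the PL/SQL block (twice, if you want to compare executions) ...

exit

-- the session has ended, so the PL/SQL cursors are closed and the trace is complete;
-- now run tkprof at the OS prompt, for example:
--   tkprof <your_trace_file>.trc report.txt sys=no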



Thanks

Vikas Sharma, September 03, 2002 - 12:20 am UTC

Thanks a lot.

Regards,
Vikas

Shareable queries...

Kashif, October 29, 2002 - 11:21 am UTC

Hi Tom,

I'm using EXECUTE IMMEDIATE inside one of our procedures to generate the next value of a sequence as follows:

--
FUNCTION GetNextSeqVal (
vFldName IN VARCHAR2,
vDbid IN VARCHAR2)
RETURN NUMBER
IS
vSeq NUMBER;
vString VARCHAR2 (256);
BEGIN
vString := 'select '||vFldName||'_'||vDbid||'_'||'SEQ.nextval From Dual';
EXECUTE IMMEDIATE vString
INTO vSeq;
RETURN (vSeq);
END;
--

An example of the generated sql:

select vartxt_main1_seq.nextval from dual;

My question:

Will the generated sql be shareable/reusable inside the shared pool when a subsequent user calls this function with the same arguments? I'm thinking yes, but I just need you to re-confirm.

Thanks.

Kashif

Tom Kyte
October 29, 2002 - 2:10 pm UTC

That is a terrible approach if you want my opinion ( and you must, cause you put something here ;)

never never never do dynamically what you can do statically in plsql!!!  since the caller obviously has knowledge of the sequence name -- they should just do the select from dual.  Watch this:

ops$tkyte@ORA920.LOCALHOST> drop sequence seq;

Sequence dropped.

ops$tkyte@ORA920.LOCALHOST> create sequence seq;

Sequence created.

ops$tkyte@ORA920.LOCALHOST> 
ops$tkyte@ORA920.LOCALHOST> drop table run_stats;

Table dropped.

ops$tkyte@ORA920.LOCALHOST> create table run_stats ( runid varchar2(15), name varchar2(80), value int );

Table created.

ops$tkyte@ORA920.LOCALHOST> 
ops$tkyte@ORA920.LOCALHOST> grant select any table to ops$tkyte;

Grant succeeded.

ops$tkyte@ORA920.LOCALHOST> create or replace view stats
  2  as select 'STAT...' || a.name name, b.value
  3         from v$statname a, v$mystat b
  4        where a.statistic# = b.statistic#
  5       union all
  6       select 'LATCH.' || name,  gets
  7         from v$latch;

View created.

ops$tkyte@ORA920.LOCALHOST> 
ops$tkyte@ORA920.LOCALHOST> alter system flush shared_pool;

System altered.

ops$tkyte@ORA920.LOCALHOST> 
ops$tkyte@ORA920.LOCALHOST> declare
  2       l_start number;
  3       l_run1  number;
  4       l_run2  number;
  5           l_num   number;
  6  begin
  7       insert into run_stats select 'before', stats.* from stats;
  8  
  9       l_start := dbms_utility.get_time;
 10       for i in 1 .. 1000
 11       loop
 12               select seq.nextval into l_num from dual;
 13       end loop;
 14       l_run1 := (dbms_utility.get_time-l_start);
 15       dbms_output.put_line( l_run1 || ' hsecs' );
 16  
 17       insert into run_stats select 'after 1', stats.* from stats;
 18       l_start := dbms_utility.get_time;
 19       for i in 1 .. 1000
 20       loop
 21               execute immediate 'select seq.nextval from dual' into l_num;
 22       end loop;
 23       l_run2 := (dbms_utility.get_time-l_start);
 24       dbms_output.put_line( l_run2 || ' hsecs' );
 25       dbms_output.put_line
 26       ( 'run 1 ran in ' || round(l_run1/l_run2*100,2) || '% of the time' );
 27  
 28       insert into run_stats select 'after 2', stats.* from stats;
 29  end;
 30  /
13 hsecs
21 hsecs
run 1 ran in 61.9% of the time

PL/SQL procedure successfully completed.


<b>so obviously static sql is faster -- but, more importantly:</b>

ops$tkyte@ORA920.LOCALHOST> select a.name, b.value-a.value run1, c.value-b.value run2,
  2          ( (c.value-b.value)-(b.value-a.value)) diff
  3    from run_stats a, run_stats b, run_stats c
  4   where a.name = b.name
  5      and b.name = c.name
  6      and a.runid = 'before'
  7      and b.runid = 'after 1'
  8      and c.runid = 'after 2'
  9      and (c.value-a.value) > 0
 10      and (c.value-b.value) <> (b.value-a.value)
 11   order by abs( (c.value-b.value)-(b.value-a.value))
 12  /

NAME                                     RUN1       RUN2       DIFF
---------------------------------- ---------- ---------- ----------
...
STAT...recursive calls                   3164       2240       -924
STAT...parse count (total)                127       1072        945
STAT...opened cursors cumulative          108       1072        964
STAT...redo size                        63932      67056       3124
LATCH.shared pool                        3160       6659       3499
LATCH.library cache pin allocation        566       4376       3810
LATCH.library cache pin                  5089      10682       5593
LATCH.library cache                      6744      16104       9360

68 rows selected.

ops$tkyte@ORA920.LOCALHOST> 
ops$tkyte@ORA920.LOCALHOST> spool off

<b>static sql is more scalable -- remember that latches are a serialization device, serialization devices are also known as LOCKS, locks inhibit scalability and performance and .......</b>


Your dynamic sql is shareable.  it is not as good as it should be tho! 

Function called from numerous places...

Kashif, October 29, 2002 - 3:59 pm UTC

Thanks for the response Tom. The deal is that this function is called from 70+ different database packages, AND from various screens/application level code. Originally this function generated 'sequence' numbers by incrementing the value of a record in a table by 1, updating the table, and then returning the new incremented value. You guessed it, third party app, one code fits all databases philosophy.

We had performance issues, obviously, and decided that it would be more efficient to use sequences. However, there are potentially 70+ different sequences, and I thought I would simply use dynamic sql to construct the name of the sequence, and then call that sequence. Your results however humble me.

I think I'll code the if-then-else statement, for each potential set of arguments passed in, e.g. something like:

--
if vFldname = 'VARTXT' and vDbid = 'MAIN1' then
select vartxt_main1_seq.nextval
into vseq
from dual;
...
end if;
return (vseq);
--

Any other ideas you might have of getting around this would be helpful.

Also, the reason I decided not to do the 'select seq.nextval from dual' in the individual packages was to avoid having to maintain the packages ourselves (those packages are also owned by the third party). The same reason applied for the forms.

Kashif


Tom Kyte
October 30, 2002 - 6:47 am UTC

don't use 70+ different sequences then -- they just need random numbers right, for surrogate keys. You can get away with a single sequence, no problem.

Just change the function to:


select myseq.nextval into l_returnval from dual;
return l_returnval;
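
Spelled out, a sketch of the replacement function (keeping the original signature so the 70+ callers do not change; MYSEQ is an assumed single sequence):

create or replace function GetNextSeqVal (
    vFldName in varchar2,
    vDbid    in varchar2 )
return number
is
    l_returnval number;
begin
    -- one static, shareable statement and one sequence for every caller;
    -- the parameters are ignored but kept so existing call sites still compile
    select myseq.nextval into l_returnval from dual;
    return l_returnval;
end;
/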



NDS and BV

Robert C, October 30, 2002 - 10:17 am UTC

Tom, i want to confirm with you on this, altho I
read your chapter on NDS like 5 times....

NDS uses BV in a "positional" fashion correct ?
meaning :

sql_stmt := 'INSERT INTO dept VALUES (:1, :2, :3)';
EXECUTE IMMEDIATE sql_stmt USING id, dept, location;

the BVs : id, dept, location - are mapped to - :1, :2, :3
- by position, right ?

so if i am to build an DYNAMIC SQL at run-time like:

'insert into GTT
select * from emp_a where hiredate between to_date(:s) and to_date(:e)
UNION ALL
select * from emp_b where hiredate between to_date(:s) and to_date(:e)
UNION ALL
.....'

and I DON'T know how many emp_? tables to UNION at compile time,
I can NOT make use of NDS with the USING clause, correct ?

The DBAs - infected with the FUD disorder - are sitting on my request for app context so I have to make do w/o it for now...
Thanks



Tom Kyte
October 31, 2002 - 4:40 pm UTC

yes, positional.


you can definitely use DBMS_SQL for that without an application context.
if you can pack S and E into a varchar2(64) -- you can use dbms_application_info.

The following shows the issue and how you can use dbms_application_info in your case (without waiting for the DBA)


ops$tkyte@ORA920.US.ORACLE.COM> create table t ( x date, y date );

Table created.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> declare
  2      l_x date := sysdate;
  3      l_y date := sysdate;
  4  begin
  5      execute immediate
  6      'insert into t ( x, y )
  7       select :x, :y from dual' using l_x, l_y;
  8  end;
  9  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> declare
  2      l_x date := sysdate;
  3      l_y date := sysdate;
  4  begin
  5      execute immediate
  6      'insert into t ( x, y )
  7       select :x, :y from dual
  8       union all
  9       select :x, :y from dual' using l_x, l_y;
 10  end;
 11  /
declare
*
ERROR at line 1:
ORA-01008: not all variables bound
ORA-06512: at line 5


ops$tkyte@ORA920.US.ORACLE.COM>
ops$tkyte@ORA920.US.ORACLE.COM> declare
  2      l_x date := sysdate;
  3      l_y date := sysdate;
  4  begin
  5      dbms_application_info.set_client_info( to_char(l_x,'yyyymmddhh24miss')||to_char(l_y,'yyyymmddhh24miss') );
  6
  7      execute immediate
  8      'insert into t ( x, y )
  9       select to_date(substr(userenv(''client_info''),1,14),''yyyymmddhh24miss''),
 10              to_date(substr(userenv(''client_info''),15),''yyyymmddhh24miss'')
 11         from dual
 12       union all
 13       select to_date(substr(userenv(''client_info''),1,14),''yyyymmddhh24miss''),
 14              to_date(substr(userenv(''client_info''),15),''yyyymmddhh24miss'')
 15         from dual';
 16  end;
 17  /

PL/SQL procedure successfully completed.

ops$tkyte@ORA920.US.ORACLE.COM> select * from t;

X         Y
--------- ---------
31-OCT-02 31-OCT-02
31-OCT-02 31-OCT-02
31-OCT-02 31-OCT-02

ops$tkyte@ORA920.US.ORACLE.COM> 

SUBSTR on sys_context('userenv','client_info') - still a BV ?

Robert C, November 01, 2002 - 9:29 am UTC

>>if you can pack S and E into a varchar2(64) -- you can >>use dbms_application_info.
thank you, Tom,
Yes, that's exactly what i am resorting to doing now...
and i'd pack segments of data into that 64 byte space
like....
l_Param := pdate_start || pdate_end || pClientID || psState;
DBMS_APPLICATION_INFO.SET_CLIENT_INFO(l_Param);

and then I'd have to SUBSTR on the sys_context
like...hiredate BETWEEN
To_Date(substr(SYS_CONTEXT ('userenv','client_info'),1,11))

Now is Oracle still treating this as BV ?

Tom Kyte
November 01, 2002 - 3:36 pm UTC

yes -- it will (treat like bind variable)

How to pass NULL to a bind variable?

A Reader, March 31, 2003 - 1:07 pm UTC

We have a select statement like the following:

select * from t where some_value=:judge_value;

The judge_value could be NULL. Since NULL cannot be compared with "=", how should we pass the NULL to use the bind variable?
Thank you for your help.


Tom Kyte
March 31, 2003 - 1:21 pm UTC

well, you'll have to tell me what you want to happen when you bind null?

many times, people want "all rows"
some times, people want "null rows"

which do you want?

How to pass NULL to a bind variable?

A Reader, March 31, 2003 - 1:34 pm UTC

Since you mentioned it... *BOTH* ... ;)

Tom Kyte
March 31, 2003 - 2:40 pm UTC

Ok, this would be it then:

scott@ORA817DEV> create or replace package demo_pkg
  2  as
  3      type rc is ref cursor;
  4
  5      procedure get_data( p_cursor in out rc,
  6                          p_filter in varchar2 default NULL,
  7                          p_all    in boolean  default FALSE );
  8  end;
  9  /

Package created.

scott@ORA817DEV>
scott@ORA817DEV> create or replace package body demo_pkg
  2  as
  3      procedure get_data( p_cursor in out rc,
  4                          p_filter in varchar2 default NULL,
  5                          p_all    in boolean  default FALSE )
  6      is
  7      begin
  8          if ( NOT p_all )
  9          then
 10              if ( p_filter is not null )
 11              then
 12                  open p_cursor for select empno, ename from emp where job like p_filter;
 13              else
 14                  open p_cursor for select empno, ename from emp where job is null;
 15              end if;
 16          else
 17              open p_cursor for select empno, ename from emp;
 18          end if;
 19      end;
 20  end;
 21  /

Package body created.

scott@ORA817DEV>
scott@ORA817DEV> variable x refcursor
scott@ORA817DEV> set autoprint on
scott@ORA817DEV> exec demo_pkg.get_data( :x, '%A%' );

PL/SQL procedure successfully completed.


EMPNO ENAME
---------- ----------
7499 ALLEN
7521 WARD
7566 JONES
7654 MARTIN
7698 BLAKE
7782 CLARK
7788 SCOTT
7844 TURNER
7902 FORD

9 rows selected.

scott@ORA817DEV> exec demo_pkg.get_data( :x );

PL/SQL procedure successfully completed.


no rows selected

scott@ORA817DEV> exec demo_pkg.get_data( :x, p_all => TRUE );

PL/SQL procedure successfully completed.


EMPNO ENAME
---------- ----------
7369 SMITH
7499 ALLEN
7521 WARD
7566 JONES
7654 MARTIN
7698 BLAKE
7782 CLARK
7788 SCOTT
7839 KING
7844 TURNER
7876 ADAMS
7900 JAMES
7902 FORD
7934 MILLER

14 rows selected.


Procedure params as Bind Vars.

Clark, August 18, 2003 - 1:32 pm UTC

Hi Tom,

I understand BV's - I think! - but...

My front end app calls a stored proc with an acctno as the parameter. I can make that acctno a BV upon calling Oracle.

However, as far as I can tell, there is no way to SET that BV without at some point registering a different piece of SQL in the cache. Is that correct?

Example.

I can exec a stored proc thus from SQL*Plus:
BEGIN ACCT_PROC(:ACCBV); END;

But to set :ACCBV from SQL*Plus before executing ACCT_PROC I have to
BEGIN :ACCBV := 123; END;

So although I've made the first 'query' shareable, I've had to introduce a different hardcoded, non-reusable 'query' into the pool in order to set the BV.

Is there a way of setting the :ACCBV bind var without at some point hardcoding the value? I feel sure I must be missing something, but I've run out of ideas...

Thanks.

Tom Kyte
August 18, 2003 - 3:03 pm UTC

only in sqlplus and since sqlplus is NOT A DEVELOPMENT TOOL really - just a casual interface to the database -- you would not be doing this.


you would be using a real programming language to build a real program.

Development tools/ bind vars

Clark, August 19, 2003 - 9:21 am UTC

Tom,

I know sqlplus isn't a dev tool, but nevertheless the front-end tool, Delphi, Developer, whatever, must at some point issue a command to Oracle to set a bindvar. If it's not to 'clog up' the sql cache, like the sqlplus example would, how does it do it?

Fancy GUI says
"Please enter the account number: _______"

GUI posts a command to Oracle along the lines of
BEGIN ACCT_PROC(123); END;
but presumably using bindvars
BEGIN ACCT_PROC(:ACCBV); END;

So what command does the GUI send to Oracle to set that bindvar, :ACCBV, to 123? There must always be an interface (yes?) between the client and the server:
Client -> Issue Oracle command
-> Oracle/server processes command
-> Returns result to client
irrespective of whether the client is sqlplus or the best dev tool in the world.

How/when did the client GUI set any bindvars - does Oracle bind to variables in memory in the client?

I'm just trying to understand properly how these things work so I can use Oracle exactly how it is meant to be used.

Cheers.


Tom Kyte
August 19, 2003 - 10:28 am UTC

no, the front end tools use APIs to bind values to statements programmatically.

think about it. you execute:

begin :x := 55; end;


there is a bind variable there already!!! :x!!! sqlplus programmatically bound :x on input (as null) -- just like delphi would.

binding doesn't require executing SQL

executing sql can require binding.

consider if you execute:

begin :y := 55; end;


begin :x := :y; end;

that second statement -- uses binds totally right? no unique sql must be parsed for that.


don't get lost in the weeds.

bind variable

reader, September 16, 2003 - 5:39 pm UTC

declare cursor c1
....

begin

for rec in c1 loop

select col1, col2
from ...
where col3 = rec.col2;

end;

In the above case, rec.col2 is a bind variable or not?

Tom Kyte
September 16, 2003 - 6:36 pm UTC

in plsql with static sql it is IMPOSSIBLE to not bind correctly.

yes, rec.col2 is a bind variable -- 100%

you want to scale, use static sql in plsql, all binds -- all the time.

Refcursor and execute immediate ....

reader, September 16, 2003 - 8:54 pm UTC

In general, if I use a ref cursor or dynamic sql using 'execute immediate' in pl/sql procedures, there is no way I can use bind variables. Is this correct? Thanks.

Tom Kyte
September 16, 2003 - 8:59 pm UTC

false, totally 110% false.


execute immediate p_string using l_input1, l_input2, l_input3;



open p_cursor for l_string using l_input1, l_input2;
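
Putting that second form into context, a minimal sketch of a procedure that opens a ref cursor for dynamic SQL with a bind (SYS_REFCURSOR assumes 9i or later; on 8i declare your own ref cursor type in a package):

create or replace procedure get_emps( p_job in varchar2, p_cursor out sys_refcursor )
as
    l_query varchar2(200) := 'select empno, ename from emp where job = :j';
begin
    -- :j is bound to p_job when the cursor is opened; the statement text never changes
    open p_cursor for l_query using p_job;
end;
/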



Why refcursor cannot be cached?

reader, September 16, 2003 - 9:53 pm UTC

my colleague says, if I use a ref cursor in a pl/sql procedure, it cannot be cached. Does it mean that Oracle has to do a hard parse every time the ref cursor is used, even though I open the ref cursor for the same statement?

Tom Kyte
September 17, 2003 - 6:24 am UTC

not a HARD parse, just a parse. we don't know if it will be a

a) hard parse
b) soft parse
c) softer soft parse via "session_cached_cursors"


the reason it cannot be cached (i go into more detail in my new book "Effective Oracle by Design") in a nutshell? consider:


is
open l_cursor for p_string;
....
end;

or

is
if ( x = 5 )
then
open l_cursor for select * from emp;
else
open l_cursor for select * from dept;
end if;
...
end;

so there, the query changes from call to call, not really possible to "cache", the query associated with the cursor is not static -- but dynamic (confusing, the second example is STATIC sql, but the sql associated with the ref cursor is not "static" itself).


or, say you have a procedure P that opens a ref cursor and returns it. you code:


variable x refcursor
variable y refcursor

exec p( :x );
exec p( :y );

the cursor is now opened TWICE at the same time -- if it were cached, they would both point to the same result set, set cursor -- but they cannot.


so, the flexibility of the ref cursor comes at a cost -- that of incurring a PARSE (be it hard, soft or softer soft) each time.





how to use binding on a Function.

jane, September 26, 2003 - 9:17 am UTC

Hi Tom,

I would like to know how to use binding on a function like this that is throughout all my code.

FUNCTION GET_JOB_STUDIES (P_JOB IN VARCHAR2)
RETURN CURSORTYPE_COMPONENT
IS
V_LOG CURSORTYPE_COMPONENT;
STR VARCHAR2(3000);
BEGIN
STR:='SELECT SAMPLE_pkg.get_study_number(samples_fk_id) STUDY_NUMBER FROM
JOB_SAMPLES WHERE jobs_fk_id='''||P_JOB||'''
and record_status = ''CURRENT''
AND SAMPLE_TRACK_JAVA_PKG.get_study_number(samples_fk_id) NOT IN (
SELECT NVL(SUBSTR(COMPONENT_FK_ID,1,INSTR(COMPONENT_FK_ID,''_'',1,2)-1),COMPONENT_FK_ID )
FROM TP_COMPONENTS
WHERE TEMPLATE_PREP_FK_ID LIKE ''%'||P_JOB||'%''
AND RECORD_STATUS=''CURRENT'')';

OPEN V_LOG FOR STR;
RETURN V_LOG;
CLOSE V_LOG;
END GET_JOB_STUDIES;

Tom Kyte
September 26, 2003 - 10:44 am UTC

i am so afraid of all of the java/plsql functions I see in your code -- this "java_pkg" stuff keeps popping up. not a good thing. used judiciously, when you have NO OTHER CHOICE, ok. used all over the place, not good.

tell me -- why is this a dynamic query at all? looks like a static sql query to me.

but it would be:

STR:='SELECT SAMPLE_pkg.get_study_number(samples_fk_id) STUDY_NUMBER
FROM JOB_SAMPLES
WHERE jobs_fk_id= :x
and record_status = ''CURRENT''
AND SAMPLE_TRACK_JAVA_PKG.get_study_number(samples_fk_id)
NOT IN (SELECT NVL(SUBSTR(COMPONENT_FK_ID,1,INSTR(COMPONENT_FK_ID,''_'',1,2)-1),COMPONENT_FK_ID
)
FROM TP_COMPONENTS
WHERE TEMPLATE_PREP_FK_ID LIKE :y
AND RECORD_STATUS=''CURRENT'')';

open v_log for str using p_job, '%'||p_job||'%';




How to unbind?

A reader, December 19, 2003 - 2:41 pm UTC

Hi tom, I have queries, used in forms, functions, in loops, etc.
like this

FOR A IN (SELECT CODE FROM TABLE ) LOOP
SELECT SOMETHING FROM TABLE2 WHERE CODE=A.CODE;
END LOOP;

SELECT SOMETHING FROM TABLE2 WHERE CODE=A.CODE;
is bound as
SELECT SOMETHING FROM TABLE2 WHERE CODE=:1

What can I do to get the parse to not bind the value in the code field?
That is:
SELECT SOMETHING FROM TABLE2 WHERE CODE='valu1'
SELECT SOMETHING FROM TABLE2 WHERE CODE='value2'
SELECT SOMETHING FROM TABLE2 WHERE CODE='value3'

Something like

SELECT /*+ not bind column CODE, because I want to use an
histogram, create a distinct execution plan for every different value I use */
SOMETHING FROM TABLE2 WHERE CODE=:1

Thanks Tom.


Tom Kyte
December 20, 2003 - 9:29 am UTC

that is horrible.

it should just be:


for a in ( select table.code, table2.something from table, table2 where table.code = table2.code )

never never never do what you are doing. LET THE DATABASE DO THAT WHICH IT WAS CODED TO DO. (eg: databases were BORN to join)


but yes, "where code = a.code" is using binds and running at a fraction of the speed, using orders of magnitude more resources than it should. but, at least it uses binds.

do it in a join and don't sweat it. if you do a big bulky query (bulky meaning "let sql do ITS JOB", not "bulky awkward") -- you won't even have to concern yourself with the stuff you are trying to now.

You'd have to dynamically build, prepare, execute, fetch and close a query to "avoid binds"


A reader, December 22, 2003 - 8:09 am UTC

Thanks Tom.

ref cursor all very well but ....

Simon, January 08, 2004 - 8:38 am UTC

Hey Tom!
Thanks for all over the years man.

I want to create an API with a ref cursor returning function, a bit like a view; the advantage is that I can keep the schema private. But I can not seem to get it to work like a view in terms of returning the data into another table...


SQL> insert into tasksx select  test_pkg.fn_gettasks(123) from dual;
insert into tasksx select  test_pkg.fn_gettasks(123) from dual
            *
ERROR at line 1:
ORA-00947: not enough values

-- This is weird as the package just opens a cursor on the tasks table, the tasksx table was created as "select * from tasks where 1=2" so they should be the same.
 

Tom Kyte
January 08, 2004 - 1:59 pm UTC

a ref cursor is a query, a cursor.

specifically -- it is not a TABLE (and a view is just a special kind of table).


seems to me you really DO want a view here. You have the same "security and encapsulation" niceness with a view.

view vs ref cursor

Simon, January 08, 2004 - 5:06 pm UTC

I don't think a view shields a client from the underlying schema? Surely they can look at the code it is based on?
The ref cursor must be usable from the package as an output parameter, I just can't seem to find an example. Typically, would be great to see how to put the data into a table if it is possible - hey maybe it isn't?

Tom Kyte
January 08, 2004 - 8:28 pm UTC

you are saying "i want to insert into"

they are unshielded right there. which way do you want it? shielded (means they call a procedure that does the insert into) or not (means they need a database TABLE to use)


A cursor is procedurally processed. You would have to actually write code, fetch rows, insert them.

from view / ref cursor to client app/database

Simon, January 09, 2004 - 3:52 am UTC

Hi Tom
The situation is that we have an off the shelf app and schema. We wish to expose some data, but using a view allows the client to easily see where it comes from. Using a package with a function that returns a ref cursor seems great, particularly as we can parameterise the function. But the issue I have is how a user would use the function, i.e. what code would be used to call the function and loop through the results or insert them into the customer's own database. I am sure it is possible but can't find any code examples.
Thanks for addressing this.

Tom Kyte
January 09, 2004 - 8:32 am UTC

a view does not allow that?

not anymore than your code will.

If you want them to be able to use SQL, they will NEED a view or table. Period.


If you want them to procedurally process this data, you will need to read the PL/SQL docs (it is literally FULL of cursor loops that fetch from an open cursor and do stuff with the data -- very very basic stuff)


</code> http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96624/06_ora.htm#1510 <code>

Krishna, March 22, 2004 - 12:17 pm UTC

>if you can pack S and E into a varchar2(64) -- you can use
>dbms_application_info.

What if it is not possible to pack the vars into a varchar2(64)? What are the other options I have of easily binding the vars?



Tom Kyte
March 22, 2004 - 12:38 pm UTC

well, bind variables to start (you can almost always use them)

CREATE context MY_CTX using MY_PROC;


is the other way -- there you can set as many variables as you want in your own context and access them via:

sys_context( 'my_ctx', 'variable_name' );


that is the "other option". You can write a plsql function as well -- that returns the "parameter" and then


select * from t where x = f();


or -- to make it get called once per query (to help ensure it gets called once per query):

select * from t where x = ( select f() from dual where rownum > 0 );


but the application context would be the right way.
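
A compact sketch of that application-context option (all names here are illustrative):

-- a trusted package -- the only code allowed to set values in the context
create or replace package my_ctx_pkg
as
    procedure set_param( p_name in varchar2, p_value in varchar2 );
end;
/

create or replace context my_ctx using my_ctx_pkg;

create or replace package body my_ctx_pkg
as
    procedure set_param( p_name in varchar2, p_value in varchar2 )
    is
    begin
        dbms_session.set_context( 'my_ctx', p_name, p_value );
    end;
end;
/

-- set as many values as you need, then reference them in the query;
-- sys_context() behaves like a bind, so the SQL text stays constant
exec my_ctx_pkg.set_param( 'start_date', '20040101000000' )

select *
  from t
 where x > to_date( sys_context( 'my_ctx', 'start_date' ), 'yyyymmddhh24miss' );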

Another method besides using an "if" statement to check for a null parameter?

ht, March 24, 2004 - 11:05 pm UTC

Tom,
One of the threads above discusses using "if" statements to handle joins that include nulls:
if in_x is null then
update y set a=1 where x is null;
else
update y set a=1 where x=in_x;
end if;
Is this the only way to handle parameters that may pass a null? If I have an update statement with 10 columns that join on 10 parameters that are passed, I would need 10 queries?

TIA

ht

Tom Kyte
March 25, 2004 - 9:09 am UTC


update t
set a = 1
where x = in_x or (x is null and in_x is null);


However, remember entirely NULL keys = not indexed. Soooo, that update will always full scan, soooo, if you want it using an index -- you will avoid NULLs in the first place.

(that's not really "a join", it is a "predicate" technically...)

How does one avoid nulls?

ht, March 25, 2004 - 11:23 am UTC

Tom,
Thanks for the solution and it's performance impact. I'm wondering if I should be using a combination of SQL*Loader / pl/sql or another method to implement this:

1. pass all dba_users recs into x_dba_users to a stored proc.
2. the stored proc updates a date column if no recs have changed or inserts if the rec has changed
3. query x_dba_users to monitor changes in users

Since multiple-versioned instances will be inserting into this table so I can alert on password changes, new users created, account status', etc., nulls need to be handled in the predicate since v7, v8-8.06, and 8.06+ instances have varying dba_users columns (not to mention that a v9 instance may have a null dba_users.account_status that needs to be handled).

Since performance is an issue when the stored proc checks if account_status (or another column) has changed since the last time the stored proc ran, can you suggest a better methodology?

Thanks again,
ht


Tom Kyte
March 25, 2004 - 9:13 pm UTC

sorry -- not sure what you are doing here entirely.

don't know why you would be updating by anything other than 'username' and that would sort of be "constant"


To find changes (null or otherwise) would be a simple "MINUS"


select * from dba_users
MINUS
select * from your_table;

would give everything in dba_users not in your_table and the reverse would give the reverse...

refcursor with dynamic predicate

Sami, April 20, 2004 - 9:27 am UTC

Dear Tom,

I am using the following package to return the result set to the java application.
The issue here is that it is getting parsed every time for different i_region, i_userstatusid and i_processed_by.

Do I have to use "OPEN o_resultset FOR query_str USING :bind1, :bind2, :bind3"?
The number of predicates that will be added to my query is NOT constant (maybe 1, 2 or 3).

Do I need to code like

if (i_region is not null and i_userstatusid is not null and i_processed_by is not null)
then
OPEN o_resultset FOR query_str using :region_bind,:userstatus_bind and :processedby_bind;
elsif (i_region is not null and i_userstatusid is not null)
OPEN o_resultset FOR query_str using :region_bind,:userstatus_bind;
elsif (i_region is not null and i_processed_by is not null)
OPEN o_resultset FOR query_str using :region_bind,:processedby_bind;
elsif (i_region is not null)
:
:
end if;

Number of IF CONDITIONS= 2 power number of dynamic predicates ....;-)

OR

Is there any other better way to solve this problem?
Your help will be really appreciated.



CREATE OR REPLACE PACKAGE BODY TEST_Pkg AS
PROCEDURE GetValues(
i_region IN VARCHAR2 DEFAULT NULL,
i_userstatusid IN VARCHAR2 DEFAULT NULL,
i_processed_by IN VARCHAR2 DEFAULT NULL,
i_rownum IN PLS_INTEGER DEFAULT 20,
o_resultset OUT ref_cursor_type )
IS
query_str VARCHAR2(10000) :=NULL;
where_clause VARCHAR2(5000) := NULL;
BEGIN

query_str := '
SELECT * FROM (
SELECT /*+ FIRST_ROWS */ e.userstatusid,
p.processed_by,
p.last_name,
p.first_name,
p.company_name,
c.countryname,
e.customerid,
p.userid
FROM table1 p,
table2 e,
table3 c
WHERE p.pk_id= e.pk_id
AND p.business_country_id = c.countryabbrev
AND c.category IN (''CATEGORY1'',''CATEGORY2'') ';

IF i_region IS NOT NULL THEN
where_clause := where_clause||' AND p.region in('||i_region||') ';
END IF;


IF i_userstatusid IS NOT NULL THEN
where_clause := where_clause ||' AND e.userstatusid in ('||i_userstatusid||') ';
END IF;

IF i_processed_by IS NOT NULL THEN
where_clause := where_clause ||' AND p.processed_by in ('||i_processed_by||') ';
END IF;

where_clause := where_clause||' ORDER BY e.LASTUPDATEDATE desc) WHERE ROWNUM <='||i_rownum;

query_str := query_str||where_clause;

OPEN o_resultset FOR query_str;

END;
END;
/


Tom Kyte
April 20, 2004 - 9:42 am UTC

see

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1288401763279

for the technique you probably want to apply here, coupled with:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:210612357425 <code>

for the variable inlists.

dbms_session.set_context with variable number in_list

Sami, April 20, 2004 - 4:14 pm UTC

Dear Tom,

Thanks for your help. 

I don't know where to put in_list_number function in dbms_session.set_context.

CREATE OR REPLACE PROCEDURE My_Procedure( p_name   IN VARCHAR2 DEFAULT NULL,
                         p_hiredate  IN DATE DEFAULT NULL,
                         p_status       IN NUMBER DEFAULT NULL)
 AS
     TYPE rc IS REF CURSOR;
     l_cursor rc;
     l_query  VARCHAR2(512)
              DEFAULT 'select name,hire_date,status from test_tab where 1 = 1 ';
     CURSOR l_template IS SELECT name,hire_date,status FROM TEST_TAB;
     l_rec  l_template%ROWTYPE;
BEGIN
     IF ( p_name IS NOT NULL ) THEN
         dbms_session.set_context( 'MY_CTX', 'NAME','%'||UPPER(p_name)||'%');
         l_query := l_query ||' and name like  sys_context( ''MY_CTX'', ''NAME'' ) ';
     END IF;
     IF ( p_hiredate IS NOT NULL ) THEN
         dbms_session.set_context( 'MY_CTX', 'HIREDATE',
                   TO_CHAR(p_hiredate,'yyyymmddhh24miss'));
         l_query := l_query ||
               ' and hire_date >
                   TO_DATE(
                      sys_context( ''MY_CTX'',
                                   ''HIREDATE'' ),
                      ''yyyymmddhh24miss'') ';
     END IF;
     


------------ having trouble here starts-------------

IF ( p_status IS NOT NULL ) THEN
         dbms_session.set_context( 'MY_CTX', 'STATUS',' ('||p_status||')');
         l_query := l_query ||
               ' and status in  in_list_number(sys_context( ''MY_CTX'',''STATUS'')) ';
     END IF;

------------ having trouble here ends-------------



    Printquery( l_query );
     OPEN l_cursor FOR l_query;
     LOOP
         FETCH l_cursor INTO l_rec;
         EXIT WHEN l_cursor%NOTFOUND;
         dbms_output.put_line( l_rec.name || ',' ||
                               l_rec.hire_date || ',' ||
                               l_rec.status);
     END LOOP;
     CLOSE l_cursor;
 END;
/

create or replace type myTableType_number as table      of number(25);


CREATE OR REPLACE FUNCTION In_List_number( p_string IN VARCHAR2 ) RETURN myTableType_number
 AS
     l_string        LONG DEFAULT p_string || ',';
     l_data          myTableType_number := myTableType_number();
     n               NUMBER;
 BEGIN
   LOOP
       EXIT WHEN l_string IS NULL;
       n := INSTR( l_string, ',' );
       l_data.extend;
       l_data(l_data.COUNT) :=
             LTRIM( RTRIM( SUBSTR( l_string, 1, n-1 ) ) );
       l_string := SUBSTR( l_string, n+1 );
  END LOOP;

  RETURN l_data;
END;
/




  1  select * from THE (
  2*         select cast( in_list_number('1,2,3') as mytableType_number) from  dual)
usssd429@HSBC SQL> /
 
COLUMN_VALUE
------------
           1
           2
           3
 



SQL> select * from test_tab;
 
     EMPNO NAME                 HIRE_DATE             STATUS
---------- -------------------- ----------------- ----------
         1 SAMINATHAN           04/20/04 14:38:45          1
         2 TOM                  01/11/04 14:39:15          2
         3 ORACLE               03/17/03 14:39:55          3
 
SQL> exec my_procedure(p_name=>'Tom');
select name,hire_date,status from test_tab where 1 = 1  and name like  sys_context( 'MY_CTX', 'NAME' )
TOM,01/11/04 14:39:15,2
 
PL/SQL procedure successfully completed.

 
Now I am trying to form a query like below.
select name,hire_date,status from test_tab where 1 = 1  and status like (1,2,3);

SQL> exec my_procedure(p_status=>'1,2,3');
BEGIN my_procedure(p_status=>'1,2,3'); END;
 
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 1


Kindly look at the snippet of the code below to accomplish the same.

     IF ( p_status IS NOT NULL ) THEN
         dbms_session.set_context( 'MY_CTX', 'STATUS',' ('||p_status||')');
         l_query := l_query ||
               ' and status in  in_list_number(sys_context( ''MY_CTX'',''STATUS'')) ';
     END IF;


Please enlighten me, how to use dbms_session.set_context with in_list_number to fulfill my requirement. 

Tom Kyte
April 21, 2004 - 11:59 am UTC

IF ( p_status IS NOT NULL ) THEN
dbms_session.set_context( 'MY_CTX', 'STATUS',' ('||p_status||')');
l_query := l_query ||
' and status in in_list_number(sys_context(
''MY_CTX'',''STATUS'')) ';

should be more like


IF ( p_status IS NOT NULL ) THEN
dbms_session.set_context( 'MY_CTX', 'STATUS', p_status );
l_query := l_query ||
' and status in (select * from TABLE( cast( in_list_number(sys_context(
''MY_CTX'',''STATUS'') as myTabletype_number ) ) ) ';


might not have the () matched up but you are going for:

where column in
(select * from TABLE( cast( f( sys_context('a','b') ) as your_type ) ))



Excessive parsing

Matt, April 20, 2004 - 10:09 pm UTC

oracle 9.2.0.5:

Tom,

What wait event will you see due to excessive hard parsing?

Tom Kyte
April 21, 2004 - 7:39 pm UTC

latch free, library cache

but stats pack will show you -- soft parse %.
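
The statspack "soft parse %" figure can also be computed directly from V$SYSSTAT (instance-wide, cumulative since startup) -- a quick sketch:

select round( 100 * ( 1 - hard.value / total.value ), 2 ) soft_parse_pct
  from v$sysstat total, v$sysstat hard
 where total.name = 'parse count (total)'
   and hard.name  = 'parse count (hard)';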

Thank you so much Tom

Sami, April 21, 2004 - 3:21 pm UTC

Dear Tom,

Thanks for your immediate response.

<asktom>
might not have the () matched up but you are going for:
</asktom>

")" was missing before "as myTabletype_number ".

IF ( p_status IS NOT NULL ) THEN
dbms_session.set_context( 'MY_CTX', 'STATUS', p_status );
l_query := l_query ||
' and status in (select * from TABLE( cast(
in_list_number(sys_context(
''MY_CTX'',''STATUS'')) as myTabletype_number ) ) ) ';



Need confirmation

Sami, April 21, 2004 - 9:13 pm UTC

Dear Tom,
I have 3 conditional predicates(like below) in my procedure which returns refcursor.

My understanding is that a maximum of 8 (2 to the power of 3) hard parses may still occur. I mean the same refcursor can be used to execute 8 different QUERIES based on the parameters.

Is that correct?

IF ( p_name IS NOT NULL ) THEN
dbms_session.set_context( 'MY_CTX', 'NAME','%'||UPPER(p_name)||'%');
l_query := l_query ||' and name like sys_context( ''MY_CTX'', ''NAME'' ) ';
END IF;


IF ( p_hiredate IS NOT NULL ) THEN
dbms_session.set_context( 'MY_CTX', 'HIREDATE', TO_CHAR(p_hiredate,'yyyymmddhh24miss'));
l_query := l_query ||' and hire_date > TO_DATE( sys_context( ''MY_CTX'',''HIREDATE'' ),''yyyymmddhh24miss'') ';
END IF;

IF ( p_status IS NOT NULL ) THEN
dbms_session.set_context( 'MY_CTX', 'STATUS', p_status );
l_query := l_query || ' and status in (select * from TABLE( cast( in_list_number(sys_context( ''MY_CTX'',''STATUS'')) as myTabletype_number ) ) ) ';
END IF;

Tom Kyte
April 21, 2004 - 9:32 pm UTC

sure, but remember, 8 is a lot less than "infinity".

8 is not much. even with 4, 5, 6, ... it won't be all of them -- the end users will most likely never pick EVERY combo. they will settle in on a smallish number of useful combinations over time.


How to do this?

Stephen, April 22, 2004 - 2:56 am UTC

Hi Tom,
How to make words of a String appear in each line?For example

SQL> var x varchar2(32)
SQL> exec :x := 'How now brown cow'

PL/SQL procedure successfully completed.

I want the String to appear as
How 
now 
brown
cow

Could you please help?
Bye!
 

Tom Kyte
April 22, 2004 - 7:34 am UTC

replace( string, ' ', chr(10) )


or chr(13)||chr(10) if you are on windows.
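
For example, continuing Stephen's own SQL*Plus session:

select replace( :x, ' ', chr(10) ) as words from dual;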


here's a way...

Justin, April 23, 2004 - 12:04 pm UTC

var x varchar2(100)

exec :x := 'How'||CHR(10)||'now'||CHR(10)||'brown'||CHR(10)||'cow';

print x

Hope that helps

What am I missing here?

A reader, April 26, 2004 - 11:04 am UTC

Dear Tom,

Thanks for your help.

SQL > var rs refcursor

SQL> get s1
  1* BEGIN rst_pkg.GETRST(I_REGION=>'4050',I_PROCESSED_BY=>null,I_USERSTATUSID=>'2,3,5',I_BUSINESS_COUNTRY_ID=>null,I_COMPANY_NAME=>null,I_CUSTOMERID=>null,I_ROWNUM=>100,O_RESULTSET=>:rs); END;

my rst_pkg.GETRST will generate dynamic refcursor based on the parameters using dbms_session.set_context.

SQL> @s1
SELECT * FROM (
SELECT  e.userstatusid,
                p.processed_by,
                p.last_name,
                p.first_name,
                p.company_name,
                c.countryname,
                e.customerid,
                TO_CHAR(e.LASTUPDATEDATE,'dd MON, yy ')||'AT
'||TO_CHAR(e.LASTUPDATEDATE,'hh24:mi ')||'GMT' formattedlastupdatedate,
                p.userid
FROM    PROFILEDUSER p,
                EXTENDEDATTRIBUTES e,
                COUNTRIES  c
WHERE   p.profileduserid= e.profileduserid
AND     p.business_country_id = c.countryabbrev
AND     hsbc_user_category IN ('GIB','HIBM')  
and p.region  in (select * from TABLE( cast( in_list_number(sys_context('RST_CTX','p.region')) as
rst_tabletype_number ) ) )  
and e.userstatusid  in (select * from TABLE( cast(
in_list_number(sys_context('RST_CTX','e.userstatusid')) as rst_tabletype_number ) ) )  ORDER BY
e.LASTUPDATEDATE desc
) 
WHERE ROWNUM <=100
 
PL/SQL procedure successfully completed.
 
 
SQL> select sys_context('RST_CTX','p.region') from dual;
 
SYS_CONTEXT('RST_CTX','P.REGION')
--------------------------------------------------------------------------------
4050
 
SQL> select sys_context('RST_CTX','e.userstatusid') from dual;
 
SYS_CONTEXT('RST_CTX','E.USERSTATUSID')
--------------------------------------------------------------------------------
2,3,5
 
 
SQL> select * from (
SELECT  e.userstatusid,
                p.processed_by,
                p.last_name,
                p.first_name,
                p.company_name,
                c.countryname,
                e.customerid,
                TO_CHAR(e.LASTUPDATEDATE,'dd MON, yy ')||'AT
'||TO_CHAR(e.LASTUPDATEDATE,'hh24:mi ')||'GMT' formattedlastupdatedate,
                p.userid
FROM    PROFILEDUSER p,
                EXTENDEDATTRIBUTES e,
                COUNTRIES  c
WHERE   p.profileduserid= e.profileduserid
AND     p.business_country_id = c.countryabbrev
AND     hsbc_user_category IN ('GIB','HIBM')  
and p.region  in (4050)
and e.userstatusid in (2,3,5)
)
where rownum <=100

1)The above SQL statement returned 100 records.
2)print rs also returned 100 records

BUT
3) the SQL generated using my print query is returning 0 records (below query). What could be wrong here? I checked the context values using select sys_context('context_name','variable') from dual; (see above)

SQL> SELECT * FROM (
SELECT  e.userstatusid,
                p.processed_by,
                p.last_name,
                p.first_name,
                p.company_name,
                c.countryname,
                e.customerid,
                TO_CHAR(e.LASTUPDATEDATE,'dd MON, yy ')||'AT
'||TO_CHAR(e.LASTUPDATEDATE,'hh24:mi ')||'GMT' formattedlastupdatedate,
                p.userid
FROM    PROFILEDUSER p,
                EXTENDEDATTRIBUTES e,
                COUNTRIES  c
WHERE   p.profi  2    3    4    5    6    7    8    9   10   11   12   13   14   15  leduserid= e.profileduserid
AND     p.business_country_id = c.countryabbrev
AND     hsbc_user_category IN ('GIB','HIBM')  and p.region  in (select * from
TABLE( cast( in_list_number(sys_context('RST_CTX','p.region')) as
rst_tabletype_number ) ) )  and e.userstatusid  in (select * from TABLE( cast(
in_list_number(sys_context('R
ST_CTX','e.userstatusid')) as rst_tabletype_number ) ) )  ORDER BY
e.LASTUPDATEDATE desc) WHERE ROWNUM <=100
SQL> /
 
no rows selected

 

Tom Kyte
April 26, 2004 - 2:17 pm UTC

give me a full test script (table creates, inserts and all), code to reproduce with, and I'll be glad to take a look -- otherwise I'd just guess that the context values were not set in the session you cut and pasted the query into....

OR

in_list_number(sys_context('R
ST_CTX','e.userstatusid'))

really did break a line like that so the context name was

R\nST_CTX

not just

RST_CTX



Dear Tom, you are so great

Sami, April 26, 2004 - 3:28 pm UTC

R\nST_CTX was my problem. It is working now.
Thanks a million.


hardparse Vs variable_in_list

Sami, April 29, 2004 - 5:34 pm UTC

Dear Tom,

To avoid the hard parse I am converting my code from (A) to (B).
But method (B) gives a more complex plan because of the variable in-list & CAST.


A)
IF i_processed_by IS NOT NULL THEN
query_str := query_str||' AND p.processed_by in ('||i_processed_by||') ';
END IF;

B)
IF i_processed_by IS NOT NULL THEN
dbms_session.set_context('RST_CTX', 'p.processed_by',i_processed_by);
query_str :=query_str|| ' and p.processed_by in (select * from TABLE( cast( Rst_Pkg.in_list(sys_context(''RST_CTX'',''p.processed_by'')) as rst_tabletype ) ) ) ';
END IF;

Q1) Do you see any performance degradation of method (B) [due to the complex plan] compared to method (A)?

Q2)
This plsql package which returns a refcursor is invoked by a java program using a connection pooling mechanism. Do you see any
drawback of using sys_context in a connection pooling environment?

Thanks in advance

Tom Kyte
April 29, 2004 - 6:30 pm UTC

q1) read
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:3779680732446#15740265481549 <code>

you may well find the cardinality hint to be very useful with this technique.

q2) nope. by the time the ref cursor is opened, the use of the sys_context stuff is "done".

as long as you explicitly set all of the context values you use in your "session", it'll be OK.
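
For readers following along, a minimal sketch of the pattern under discussion, assuming the in_list_number function and rst_tabletype_number collection type from the question above; the context name MY_CTX, package MY_PKG, table T and column STATUS_ID are made up for illustration:

create or replace context my_ctx using my_pkg;

create or replace package my_pkg
as
    type rc is ref cursor;
    procedure get_rows( p_status_csv in varchar2, p_result out rc );
end;
/
create or replace package body my_pkg
as
    procedure get_rows( p_status_csv in varchar2, p_result out rc )
    is
    begin
        -- stash the comma separated list in the context once per call;
        -- sys_context() in the query below then behaves like a bind variable
        dbms_session.set_context( 'MY_CTX', 'status_list', p_status_csv );

        open p_result for
            select *
              from t
             where status_id in
                   ( select *
                       from TABLE( cast(
                            in_list_number( sys_context('MY_CTX','status_list') )
                            as rst_tabletype_number ) ) );
    end;
end;
/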

Is there any workaround(to use cardinality hint) in 8i db

Sami, April 30, 2004 - 8:34 am UTC

Dear Tom,

One will not get this kind of information anywhere else. Really appreciated.

I went thru the link you have given. At the end it says
"Note that cardinality is 9ir1 and above and dynamic sampling is 9ir2 and above."

We are still using 8.1.7.3. Is there a way to handle it in 8i?

Thanks in advance.





Tom Kyte
April 30, 2004 - 10:18 am UTC

no, there is not.

What is predicate selectivity? Thanks.

reader, May 01, 2004 - 2:21 pm UTC


Tom Kyte
May 01, 2004 - 5:52 pm UTC

the selectivity of the predicate :)


predicate = where clause
a selective predicate is one that retrieves "few" rows from "alot of rows"
a non-selective predicate is one that retrieves "lots" of rows


air quotes intentional -- "few", "alot", "lots" are all relative terms.

Bind Variables and Hard Parses

denni50, May 11, 2004 - 4:38 pm UTC

Hi Tom

Speaking of bind variables and hard parses (this will probably sound like a lame question), with the below code, does the declaration section of the block also get hard parsed?

declare
v_start_date VARCHAR2(12) :='01-FEB-2004';
v_end_date VARCHAR2(12) :='29-FEB-2004';
v_clnt VARCHAR2(4) :='ACGA';
begin
for i in(select idnumber,usercode1,giftdate,giftamount,transnum
from gift where idnumber in(select g.idnumber
from gift g
where g.usercode1=v_clnt
and g.giftdate < to_date(v_start_date,'DD-MON-YYYY')
having sum(g.giftamount)=0
group by g.idnumber)
and giftamount>0
and giftdate between to_date(v_start_date,'DD-MON-YYYY')
and to_date(v_end_date,'DD-MON-YYYY')
)loop
insert into gift2(idnumber,giftdate,giftamount,transnum,usercode1)
values(i.idnumber,i.giftdate,i.giftamount,i.transnum,i.usercode1);
end loop;
end;

The first 2 variables will definitely get changed every month, the 3rd one may or may not. I find it easier to be able to make changes in one section where the new changes are declared. Am I using the concept of bind variables correctly in this situation?

thanks



Tom Kyte
May 11, 2004 - 8:59 pm UTC

yup, 100%


once a month - that would be OK, I would not lose a ton of sleep over a single statement executed in sqlplus once a month (since I myself probably type in a couple hundred a day....)

application code that runs over and over and over = gotta bind.


HOWEVER............

why have code *at all*.

insert into gift2( ... )
select idnumber,usercode1,giftdate,giftamount,transnum
from gift where idnumber in(select g.idnumber
from gift g
where g.usercode1=:v_clnt
and g.giftdate < to_date(:v_start_date,'DD-MON-YYYY')
having sum(g.giftamount)=0
group by g.idnumber)
and giftamount>0
and giftdate between to_date(:v_start_date,'DD-MON-YYYY')
and to_date(:v_end_date,'DD-MON-YYYY')
/


hmmm....



SQL from Forms

Hans Z., May 12, 2004 - 3:15 am UTC

Hi,

we use Forms (6i) with "DEFAULT_WHERE" on data blocks. We also have PL/SQL libraries where we append SQL to the original DEFAULT_WHERE like this:

append_where := ' AND arendeid IN (SELECT arendeid
FROM trofo038_v16
WHERE lankod LIKE '''||NVL(TO_CHAR(lv_lankod),'%')||'''
AND komkod LIKE '''||NVL(TO_CHAR(lv_komkod),'%')||'''
AND traktkod LIKE '''||NVL(lv_traktkod,'%')||'''
AND rblock LIKE '''||NVL(lv_rblock,'%')||'''
AND LTRIM(enhet,0) LIKE '''||NVL(lv_enhet,'%')||''')';

The DEFAULT_WHERE sent by Forms is bound nicely, but is there any way that we can bind the appended SQL?

Thanks,
Hans Z.

Tom Kyte
May 12, 2004 - 7:43 am UTC

yes, use a control block (non database block) and pop lv_lankod into :control-block.item1.

then reference :control-block.item1 in the where clause instead of lv_lankod. anytime lv_lankod is modified -- update the control-block item.
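
A rough sketch of what that could look like in the library code, assuming a non-database control block called CTRL with one item per former lv_ variable (the item names here are invented):

-- populate the control items whenever the PL/SQL variables change
:ctrl.lankod   := NVL(TO_CHAR(lv_lankod), '%');
:ctrl.komkod   := NVL(TO_CHAR(lv_komkod), '%');
:ctrl.traktkod := NVL(lv_traktkod, '%');

-- then reference the items instead of concatenating their values
append_where := ' AND arendeid IN (SELECT arendeid
                                     FROM trofo038_v16
                                    WHERE lankod   LIKE :ctrl.lankod
                                      AND komkod   LIKE :ctrl.komkod
                                      AND traktkod LIKE :ctrl.traktkod)';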

thanks Tom...

denni50, May 12, 2004 - 8:28 am UTC

I like your code better..all SQL..no PL/SQL.
;~)


Tom..

denni50, May 12, 2004 - 9:38 am UTC

I think it's time to write that third book: "Bind Variables"

I tried your code and got the SP2-0552 error:
SP2-0552: Bind variable "V_END_DATE" not declared.

The solution is:
Action: "Run the VARIABLE command to check that the bind variables you used in your SQL statement exist. Before running a SQL statement with bind variables, you must use the VARIABLE command to declare each variable"

What is the VARIABLE command?

My understanding from what I am reading is that bind variables are only used within PL/SQL and not straight
SQL..so these bind variables do have to be declared.
When I ran your code(see previous post) I was expecting
the :v_start_end to behave like a substitution variable
where the user is prompted for the value.

I know you have explained this a gazillion times but can
you go over it again (in kindergarten terms).. the difference
between host and bind variables and their usage in SQL (if applicable) and PL/SQL.

thanks


Tom Kyte
May 12, 2004 - 6:51 pm UTC

that is a sqlplus ism -- in this case, you would not even really need to use a bind variable (since you run this once a month)

the very act of setting a bind in sqlplus unfortunately requires a hard parse itself?!

variable x number
exec :x := 55; <=== plsql block in disguise.


in plsql, plsql variables in sql are binds.

in pro*c, you would use :x to use a host variable x in a query as a bind.

in vb/jdbc you use ? to represent a bind

You could in this case just use &v_start_end to use a substitution since the hard parse happens but once in a blue moon and is sort of unavoidable if your environment is sqlplus.
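
In SQL*Plus terms, the full bind-variable pattern described above looks like this (reusing the column names from the earlier gift example purely for illustration):

variable v_clnt       varchar2(4)
variable v_start_date varchar2(12)
variable v_end_date   varchar2(12)

-- the exec is itself a tiny PL/SQL block, which is what does the assigning
exec :v_clnt := 'ACGA'; :v_start_date := '01-FEB-2004'; :v_end_date := '29-FEB-2004'

-- the binds are then referenced with a leading colon in plain SQL
select count(*)
  from gift
 where usercode1 = :v_clnt
   and giftdate between to_date(:v_start_date,'DD-MON-YYYY')
                    and to_date(:v_end_date,'DD-MON-YYYY');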

binds in vb

Mark A. Williams, May 12, 2004 - 7:42 pm UTC

> in vb/jdbc you use ? to represent a bind

Using a "?" in VB to represent a bind follows the ODBC convention. However, depending on the data access method, it is quite possible/desirable to use the familiar ":placeholder" syntax. This syntax or convention is available via the Oracle Data Provider for .NET, for example, but that will have to wait for the book...

I realize this could be viewed as pedantic - the important point, of course, is to USE binds where appropriate regardless of the syntax or convention. That will also be in the book...

- Mark

ok Tom....

denni50, May 13, 2004 - 8:31 am UTC

It's finally sinking in....

I understood the concept and reason for using bind variables, it was "how and when" to use them that was kind of fuzzy with me.

You "bind" a variable when you have code that is run
over and over and over again...with many users using that
code repeatedly with different values at any given time.

Code that is run sparingly or a value that is constant can be 'hard coded'.

So if you need to write a procedure using bind variables
the method to pass ever-changing values would be via
a function?....correct.

thanks for having the patience to explain this very important concept once again.

:~)











Tom Kyte
May 13, 2004 - 10:49 am UTC

...
So if you need to write a procedure using bind variables
the method to pass ever-changing values would be via
a function?....correct
......

don't understand the reference to "a function"?

Function

denni50, May 13, 2004 - 1:52 pm UTC

the function would pass the values to the stored procedure
that is coded to use bind variables.

I thought I saw an example somewhere (here or in a book)..
when I get a chance I'll post a sample.

thx

Tom Kyte
May 13, 2004 - 3:10 pm UTC

well, a function is just a procedure that can return a value so...... it would be a procedure calling a procedure.

;)

thanks Tom...(as always)

denni50, May 13, 2004 - 4:07 pm UTC

I'll have to look into that a little bit more about
using bind variables.

The latest developments here is the vendor has finally
agreed to upgrade the prod & development systems
to 9.2.0.4. That is scheduled for this month
(development first then prod after testing).

They sent me all the cd's and the patch set.
I've re-done the old testserver that the
organization said I could have for my personal
use, had the SA do a clean install of Windows 2000
Server...removed 9.0.1 that wasn't working on NT
and had the organization upgrade memory to another gig.

Got everything installed and running on 9.2.0.4..and
everything is flying. There is so much new stuff to learn
and I finally have ANALYTICS!!!!!!!

I am so happy I told my boss "I don't wanna go home!"..
she said "we can arrange for a sofa to be delivered"..
(chuckle!)

I can login from home though and "play".
just thought I'd let you know for what it's worth...
you know how much I've been belly-aching over this.

;~)








negative performance with bind variable approach

Sami, May 19, 2004 - 5:26 pm UTC

Dear Tom,

I have always had a strong belief in the advantages of bind variables because I had never seen any negative side till now.

As per
</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:2320123769177#17653632995613 <code>

I changed my refcursor package to make use of bind variables using dbms_session.set_context and sys_context.
The new modified package (using bind variables) is 10 times slower than the old package (without bind variables).

New Package Query String comes as
===================================

SELECT * FROM (
SELECT e.userstatusid,
p.processed_by,
p.last_name,
p.first_name,
p.company_name,
c.countryname,
e.customerid,
TO_CHAR(e.LASTUPDATEDATE,'dd MON, yy ')||'AT '||TO_CHAR(e.LASTUPDATEDATE,'hh24:mi ')||'GMT' formattedlastupdatedate,
p.userid
FROM P1 p,
E1 e,
C1 c
WHERE p.P1id= e.P1id
AND p.business_country_id = c.countryabbrev
AND user_category IN ('CAT1','CAT2')
and p.region in (select * from TABLE( cast( My_Pkg.in_list_number(sys_context('MY_CTX','p.region')) as rst_tabletype_number ) ) )
and e.userstatusid in (select * from TABLE( cast( My_Pkg.in_list_number(sys_context('MY_CTX','e.userstatusid')) as rst_tabletype_number ) ) )
ORDER BY e.LASTUPDATEDATE desc) WHERE ROWNUM <=:b_rownum;

Old Package Query String comes as
===================================

SELECT * FROM (
SELECT e.userstatusid,
p.processed_by,
p.last_name,
p.first_name,
p.company_name,
c.countryname,
e.customerid,
TO_CHAR(e.LASTUPDATEDATE,'dd MON, yy ')||'AT '||TO_CHAR(e.LASTUPDATEDATE,'hh24:mi ')||'GMT' formattedlastupdatedate,
p.userid
FROM P1 p,
E1 e,
C1 c
WHERE p.P1id= e.P1id
AND p.business_country_id = c.countryabbrev
AND user_category IN ('CAT1','CAT2')
AND p.region in(4050)
AND e.userstatusid in (1)
ORDER BY e.LASTUPDATEDATE desc) WHERE ROWNUM <=20;

Is it because I am not implementing the "cardinality" hint in my query (it is not supported in 8i)?





Tom Kyte
May 19, 2004 - 6:18 pm UTC

you way over bound.

cat1, cat2 - don't bind them, they are constants, never changing
20 -- don't bind that!

I don't see anything that varies in there! binds are for things that vary -- you have nothing in there to change.


You bind ONLY things that change from execution to execution. since you change nothing here, everything that needs to be bound is already in fact bound.

I did not over bind

Sami, May 19, 2004 - 6:28 pm UTC

Dear Tom,

Looks like I did not explain properly.

<<You bind ONLY things that change from execution to execution. >>
I have gone thru your book many times. So I have an idea when to use bind variables


I did not over bind.
I did not bind cat1, cat2 as you mentioned in your comment. Look: even in my new code 'CAT1' and 'CAT2' are unchanged. They are not touched.


<<20 -- don't bind that! >>
That is the number of records the user wanted to see. It will vary from time to time. The user has control over it. That is why I did bind it.

<<I don't see anything that vary's in there!>>

AND p.region in(4050)
AND e.userstatusid in (1)

In the above predicate, region "4050" and userstatusid "1" are variables. They will vary from time to time. That is why I used bind variables.

Tom, kindly comment on this issue. Thanks for your time



Tom Kyte
May 19, 2004 - 6:41 pm UTC

and then have you tried hinting it -- this is one of the times you might need to do that.

typically /*+ FIRST_ROWS */ does what you need.

(sorry, just saw the in's replaced with selects and extrapolated them all up)
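
For reference, the hint just goes in the outermost query block of the ref cursor's query; a trimmed-down sketch assuming the E1 table, in_list_number function, rst_tabletype_number type and MY_CTX context from the post above:

select /*+ FIRST_ROWS */ *
  from ( select e.userstatusid, e.customerid, e.lastupdatedate
           from e1 e
          where e.userstatusid in
                ( select *
                    from TABLE( cast(
                         My_Pkg.in_list_number( sys_context('MY_CTX','e.userstatusid') )
                         as rst_tabletype_number ) ) )
          order by e.lastupdatedate desc )
 where rownum <= :b_rownum;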

When NOT TO USE bind variables

Sami, June 11, 2004 - 2:54 pm UTC

Dear Tom,

Since we do not have the "cardinality" hint in 8i you advised using the "FIRST_ROWS" hint. But even that one is not helping me.

Here I see better performance without bind variables compared to with bind variables.

So my question is: what are the times we should not use bind variables, other than the few scenarios which I am aware of?

Bind variables are not good:
1) if we use a refcursor in 8i (as the same thread explains),
because of the query complexity:

userstatus in (select * from TABLE( cast(
My_Pkg.in_list_number(sys_context('MY_CTX','userstatus')) as rst_tabletype_number ) )

2) not suitable for a data warehouse

3) if the value is 100% static

4) if the data is skewed (histogram statistics cannot be utilized if the statement has a bind variable)

Is there any other scenarios?

Because recently we used bind variables but the performance got worse.


Tom Kyte
June 11, 2004 - 4:42 pm UTC

1) use a global temporary table then. you can use dbms_stats to set the stats on that (see the sketch after this list). bind variables *are relevant*

2) not always what you want to use in a DW. they are totally suitable for DW. There are times you don't want to use them in a DW.

3) correct.

4) that is a DW; you would not have an OLTP system where sometimes the query should full scan and sometimes not. this would be an infrequently executed query.
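
A minimal sketch of the global temporary table idea from point 1, assuming the same in_list_number function and rst_tabletype_number type as before (the table name and row count are invented):

create global temporary table in_list_gtt
( id number )
on commit delete rows;

-- hand the optimizer a representative row count, since it cannot
-- gather statistics on the temporary data itself
exec dbms_stats.set_table_stats( user, 'IN_LIST_GTT', numrows => 10 );

-- at run time: fill the GTT once per call, then reference it with a plain subquery
insert into in_list_gtt ( id )
select *
  from TABLE( cast( in_list_number( :status_csv ) as rst_tabletype_number ) );

select *
  from e1 e
 where e.userstatusid in ( select id from in_list_gtt );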

SP2-0552: Bind variable "1" not declared.

Branka, June 22, 2004 - 3:54 pm UTC

Tom,
Can you help me to understand what is going here? I run simple command

select &1 from dual;
SP2-0552: Bind variable "1" not declared.

I know that I can declare the variable, but I would like to use interactive mode and I do not understand why it does not work.


Tom Kyte
June 22, 2004 - 9:34 pm UTC

show me a cut and paste, i get something entirely different.

Bind variable "1" not declared

Branka, June 22, 2004 - 4:43 pm UTC

I found the problem.
Someone updated the glogin.sql file with the following line of code:
set define '^'

SP2-0552: Bind variable "1" not declared

Branka, June 23, 2004 - 3:50 pm UTC

[oracle@mictlan admin]$ vi glogin.sql

COLUMN name_col_plus_show_sga FORMAT a24

-- Defaults for SHOW PARAMETERS
COLUMN name_col_plus_show_param FORMAT a36 HEADING NAME
COLUMN value_col_plus_show_param FORMAT a30 HEADING VALUE

-- Defaults for SET AUTOTRACE EXPLAIN report
COLUMN id_plus_exp FORMAT 990 HEADING i
COLUMN parent_id_plus_exp FORMAT 990 HEADING p
COLUMN plan_plus_exp FORMAT a60
COLUMN object_node_plus_exp FORMAT a8
COLUMN other_tag_plus_exp FORMAT a29
COLUMN other_plus_exp FORMAT a44

set define '^'
--column orasid new_value _instance
--define _instance=SQL
select lower(user)||' teedev' orasid,
substr(to_char(sysdate, 'fmMonth DD, YYYY'),1,18) today
from dual;

--set termout on
--set sqlp '^_instance > '

"glogin.sql" 52L, 1300C written
[oracle@mictlan admin]$ sqlplus

SQL*Plus: Release 9.2.0.1.0 - Production on Wed Jun 23 16:02:39 2004

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Enter user-name: teedev
Enter password:

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production


ORASID                                TODAY
------------------------------------- ------------------
teedev teedev                         June 23, 2004

SQL> select &1 from dual;
SP2-0552: Bind variable "1" not declared.
SQL> 

how to insert value like "Ernst & Young"

Cris, July 21, 2004 - 8:19 am UTC

Tom, 

a very simple question for you:

I have to execute a very large script of our customer with a lot of insert like:


SQL> Insert into MYTABLE (SUP_CODE,SUP_TITLE) values ('EY1','Ernst & Young');
Enter value for Young:

I cannot edit the script - it is too large.
What can I do at the plsql level?

Cris
 

Tom Kyte
July 21, 2004 - 8:56 am UTC

SQL> set define off 

Bind Variables Revisited.....

denni50, August 16, 2004 - 10:33 am UTC

Hi Tom

I'm back to the topic that I wrote to you about a while back in this thread, where you replied with the answer below.
*****************************************************************************************
Followup:
yup, 100%


once a month - that would be OK, I would not lose a ton of sleep over a single statement executed in sqlplus once a month (since I myself probably type in a couple hundred a day....)

application code that runs over and over and over = gotta bind.


HOWEVER............

why have code *at all*.

insert into gift2( ... )
select idnumber,usercode1,giftdate,giftamount,transnum
from gift where idnumber in(select g.idnumber
from gift g
where g.usercode1=:v_clnt
and g.giftdate < to_date(:v_start_date,'DD-MON-YYYY')
having sum(g.giftamount)=0
group by g.idnumber)
and giftamount>0
and giftdate between to_date(:v_start_date,'DD-MON-YYYY')
and to_date(:v_end_date,'DD-MON-YYYY')
/


hmmm....
*******************************************************************************************

I understand what you're saying "why have code *at all*."

What I am coming back to here is how would I supply the values:
:v_clnt,:v_start_date and :v_end_date at execution time.
The only reason I was using DECLARE was so I could assign the values to the variables.
(see below):


declare
v_start_date VARCHAR2(12) :='01-MAR-2003';
v_end_date VARCHAR2(12) :='31-MAR-2003';
v_clnt VARCHAR2(4) :='ACLJ';
begin
for i in(select gi.idnumber,gi.usercode1,gi.giftdate,gi.giftamount,gi.transnum
from gift gi
where ((gi.idnumber in(select g.idnumber
from gift g
where g.usercode1=v_clnt
and g.giftdate < to_date(v_start_date,'DD-MON-YYYY')
group by g.idnumber
having sum(g.giftamount)=0))
or
(gi.giftdate in (select min(g.giftdate)
from gift g
where g.idnumber=gi.idnumber)
and gi.usercode1=v_clnt))
and gi.giftamount>0
and gi.giftdate between to_date(v_start_date,'DD-MON-YYYY')
and to_date(v_end_date,'DD-MON-YYYY')
order by giftdate)
loop
insert into gift2(idnumber,giftdate,giftamount,transnum,usercode1)
values(i.idnumber,i.giftdate,i.giftamount,i.transnum,i.usercode1);
end loop;
commit;
end;

thanks









Tom Kyte
August 16, 2004 - 7:39 pm UTC

if you are using sqlplus -- there will be NO AVOIDING the hard parse -- whether you use a plsql block to assign 3 sqlplus variables or whatever.

only if you use a 3gl language to write a program can you skip the hard parse.

never mind Tom...

denni50, August 16, 2004 - 3:44 pm UTC

I found the answer going through the other threads here
on bind variables.

I did the following:
variable v_clnt varchar2(4)
variable v_start_date varchar2(12)
variable v_end_date varchar2(12)
exec :v_clnt:='ACLJ'; :v_start_date:='01-FEB-2003'; :v_end_date:='28-FEB-2003'

then ran the insert statement.


Binding & tuning

RD, August 16, 2004 - 11:51 pm UTC

Hi Tom,
I'm reading your book Effective Oracle By Design at the moment and came upon this

"I make sure to mention the bind
variables, so that I'm not given the query select * from some_table where id = 55; to tune, when the application actually executes select * from some_table where col = :bind_variable. You cannot tune a query with literals and expect a query that contains bind variables to have the same performance characteristics."

Here now I have the following query for example:-

Select * from emp e, dept d where e.deptno = d.deptno and
e.empno = :x ;

Here I myself do not know what and how many empno's the end users are going to input for processing.
So how would I go about trying to tune it.

Thanks,
RD

Tom Kyte
August 17, 2004 - 7:52 am UTC

well, that query either

a) will use an index on empno
b) will not use an index on empno

so, in your application -- where this query is executed thousands of times -- which is the path that makes sense? two full scans and a hash join or index range scans?

Here, the answer is pretty straightforward. empno is the primary key of EMP, it'll index unique scan, table access by index rowid into emp, get the deptno and then index unique scan the deptno primary key and table access by index rowid the dept table.



Temporarily turn off binding?

Andy, August 19, 2004 - 3:06 am UTC

Hi Tom - apologies if this is a bit off-topic. I'm investigating "truncated" timestamps that are being inserted into the database via a JDBC OCI provider (which I know nothing about). I've set up a logon trigger to trace the activity of a specific user, and have isolated the INSERT statement to show what the JDBC provider is actually sending the database. That worked OK, except that the TKPROF INSERT statement generated by JDBC only shows binds instead of literals - the binds seem to be created automatically by JDBC for inserts/updates. For the purposes of identifying what timestamp value is in the insert, can I TEMPORARILY turn off binding in the database? Or is there some other way to "get at" the literals? (I'm on 9.2.0.1.0).

Tom Kyte
August 19, 2004 - 9:36 am UTC

is it a TIMESTAMP or DATE in the database?

(no we cannot turn off binding, that is a client thing entirely)

Date value and binds

Andy, August 19, 2004 - 9:55 am UTC

It's a DATE field. The client code (not written by me so my knowledge is limited) uses a Java Statement object, and tries to pass the date and time to the database. Only the date gets stored (with default time). The intention was to enter the date and time the row was inserted, so my first workaround was to suggest a trigger on the row that fires on insert. The customer didn't want that, though, so we are currently updating the row directly from the client (update table set column = sysdate where id = etc.) once it has been inserted, instead of entering a java date value in the statement object. I was hoping to isolate the problem (i.e. either in client or in DB) rather than wading through the java code, hence my question about finding the literal.


Tom Kyte
August 19, 2004 - 10:12 am UTC

ops$tkyte@ORA9IR2> alter session set events '10046 trace name context forever, level 12';
 
Session altered.
 
ops$tkyte@ORA9IR2> declare
  2          l_x date := sysdate;
  3  begin
  4          insert into t values ( l_x );
  5  end;
  6  /
 
PL/SQL procedure successfully completed.


<b>use that trace and the tracefile will show you:

=====================
PARSING IN CURSOR #11 len=28 dep=1 uid=156 oct=2 lid=156 tim=1067308758695649 hv=1547522806 ad='5c877254'
INSERT into t values ( :b1 )
END OF STMT
PARSE #11:c=0,e=1800,p=0,cr=4,cu=0,mis=1,r=0,dep=1,og=0,tim=1067308758695645
BINDS #11:<b>
 bind 0: dty=12 mxl=07(07) mal=00 scl=00 pre=00 oacflg=13 oacfl2=1 size=8 offset=0
   bfp=0ada0444 bln=07 avl=07 flg=09
   value="8/19/2004 7:2:48"</b>
EXEC #11:c=0,e=422,p=0,cr=1,cu=20,mis=0,r=1,dep=1,og=4,tim=1067308758696123
EXEC #1:c=10000,e=8996,p=3,cr=53,cu=20,mis=0,r=1,dep=0,og=4,tim=1067308758696173
WAIT #1: nam='log file sync' ela= 4499 p1=2423 p2=0 p3=0
WAIT #1: nam='SQL*Net message to client' ela= 2 p1=1650815232 p2=1 p3=0
WAIT #1: nam='SQL*Net message from client' ela= 259 p1=1650815232 p2=1 p3=0


<b>what is getting sent over</b>
 

Andy, August 20, 2004 - 4:14 am UTC

Just what I needed - thanks Tom (JDBC was indeed just sending over a date without the time).

Diagnosis

A reader, September 16, 2004 - 10:13 pm UTC

How can I determine if the (non) use of bind variables is actually hurting my overall system? I mean, suppose I see a very high (>50%) parse/execute ratio in my statspack reports, lots of hardcoded literals all over the shared pool, but no one is complaining!

I don't want to 'fix it if it ain't broke'. So, what statistic or behaviour can I look at to conclude that, yes, not using bind variables is really hurting my system and using them would result in an x% boost?

Thanks

Tom Kyte
September 17, 2004 - 8:26 am UTC

what are your major wait events.

Diagnosis

A reader, September 17, 2004 - 12:43 pm UTC

Let's say my major wait events have nothing to do with bind variables. So, from this can I conclude that even though my system/apps are not using binds, I am not really adversely affected by it? It is just a future "scalability inhibitor" that needs to be addressed?

Tom Kyte
September 17, 2004 - 1:28 pm UTC

at the "system level" sure, but what if you systems

a) major waits are for batch processes
b) not the end users

and if you look at only what end user processes wait for (application level tracing) -- you might find a different story, you might find that while your batches wait 1 hour for IO, the end users wait 15 minutes for latching related issues.


IO appears to be the bottleneck, but to the end users, even if IO were made infinitely fast -- it would have no effect on them.

ADO.NET, etc

A reader, November 16, 2004 - 8:37 pm UTC

I have a bunch of sps that my developers call from ADO.NET, ASPX pages, etc.

Do they need to take special care to use bind variables when calling the sp or does the sp call automatically use bv's?

Thanks

Tom Kyte
November 16, 2004 - 11:43 pm UTC

they NEED to bind the call to the sp, the sql done by the SP is "bound" nicely

so, instead of having to "worry" about 15 sql calls to bind, they just need to worry about "one"

and they BETTER -- especially if the end user is the one supplying the values they would otherwise just "accept"

think about this string:

stmt = "begin p( '" & input1 & "', '" & input2 & "' ); end;"


hmm, what happens if user types in:

input1 = x', 'y' ); execute immediate 'drop table t'; end; --
input2 =

so, now stmt to execute is:

begin p( 'x', 'y' ); execute immediate 'drop table t'; end; -- <junk....>

think about that for a second or two -- tell the developers "ok, binds are not about performance dudes, binds are about SECURITY!!!!!"

google

sql injection

read away....
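
The same point in PL/SQL terms, for anyone doing the string glue-up inside the database instead -- a sketch assuming a procedure p that takes two varchar2 parameters:

declare
    l_input1 varchar2(200);
    l_input2 varchar2(200);
begin
    -- pretend these arrive straight from the end user
    l_input1 := 'x'', ''y'' ); execute immediate ''drop table t''; end; --';
    l_input2 := 'whatever';

    -- DANGEROUS: the user's text becomes part of the statement being parsed
    -- execute immediate 'begin p( ''' || l_input1 || ''', ''' || l_input2 || ''' ); end;';

    -- SAFE: placeholders plus USING -- the values travel as data, never as code
    execute immediate 'begin p( :1, :2 ); end;' using l_input1, l_input2;
end;
/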

Ref Cursor vs Static Embedded SQL

David, November 18, 2004 - 5:00 am UTC

Hi Tom,

This is a question about soft parsing - kind of related. We have Java with embedded static SQL (PreparedStatements) all using bind variables - nicey, nicey, avoiding multiple parses etc. However, I am a great advocate of never having the data model directly exposed to application code and I believe you are too - "... don't put SELECT, INSERT, UPDATE, DELETE statements in Java" you said (or have I misquoted you?)

To replicate the SELECT embedded SQL I thought of using Ref Cursors to minimise the amount of Java re-work. Just replace the embedded SQL statement with a PLSQL Stored Procedure Call returning a Ref Cursor.

However, using ref cursors like this will mean that the statement is parsed every time, like you say. The static SQL with binds from Java will not (or will it?)

Which method do you suggest if the statement will be executed 000's of times per hour (with binds)?

I don't know how to benchmark to prove it one way or the other, as I don't know how to trace the app when calling Java.

Any help/comments appreciated.



Tom Kyte
November 18, 2004 - 10:46 am UTC

no misquotes there....

session_cached_cursors will help offset the softparse -- and returning values (instead of ref cursors) is always an option as well.
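
For reference, that is just a parameter; at the session level it can be set like this (100 is an arbitrary value):

alter session set session_cached_cursors = 100;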

RE: Ref Cursor vs Static Embedded SQL

David, November 19, 2004 - 5:21 am UTC

Tom,

Helpful as ever!

Returning values is not an option, as the SELECT will return multiple rows (although the number of rows will be small). I could return an OBJECT which would be a collection of database records, but I always try to avoid using object types if at all possible. However, this would allow the SQL to be static - granted. Would there be a performance problem in passing back a fully populated collection as opposed to a ref cursor? Which method do you recommend given the soft-parsing involved with ref cursors?



Tom Kyte
November 19, 2004 - 11:00 am UTC

I would use ref cursors and session cached cursors

OR (after benchmarking) letting the java programs:

select * from TABLE( your_plsql_pipelined_function );
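
A bare-bones sketch of that pipelined-function option (the types, function and use of the EMP table here are invented for illustration; pipelined functions need 9i or later):

create or replace type emp_row as object
( empno number, ename varchar2(10) );
/
create or replace type emp_tab as table of emp_row;
/
create or replace function emps_for_dept( p_deptno in number )
    return emp_tab
    pipelined
as
begin
    -- static SQL inside, so p_deptno is automatically a bind;
    -- each row is piped back to the caller as it is produced
    for r in ( select empno, ename from emp where deptno = p_deptno )
    loop
        pipe row( emp_row( r.empno, r.ename ) );
    end loop;
    return;
end;
/

-- the java side then issues one static, bound statement, e.g.
--   select * from TABLE( emps_for_dept( ? ) )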




How Could I use Bind in this Scenario

Alex, November 19, 2004 - 12:23 pm UTC

Hi Tom, I have a function that is used in a huge database against any Change Log Table to get the creator ID - that is, the user id who initially created that record.

FUNCTION F_GET_ORIGINAL_CREATOR_ID ( P_BASE_TABLE_NAME VARCHAR2,
P_PRIMARY_KEY_ID NUMBER)
RETURN NUMBER
IS
lv_creator_id pse_sys_usr.pse_sys_usr_id%TYPE;
lv_sql_Statement VARCHAR2(1000);
BEGIN
IF P_BASE_TABLE_NAME IS NOT NULL AND P_PRIMARY_KEY_ID IS NOT NULL THEN
lv_sql_Statement := ' SELECT EFFECTIVE_USR_ID FROM '||P_BASE_TABLE_NAME||'_CL'||
' WHERE '|| P_BASE_TABLE_NAME||'_ID = '||P_PRIMARY_KEY_ID||
' AND EFFECTIVE_DTE_STAMP = (SELECT MIN(EFFECTIVE_DTE_STAMP) FROM '||P_BASE_TABLE_NAME||'_CL'||
' WHERE '|| P_BASE_TABLE_NAME||'_ID = '||P_PRIMARY_KEY_ID||')';
Execute Immediate lv_sql_Statement INTO lv_creator_id;
END IF;
RETURN lv_creator_id;
EXCEPTION
WHEN OTHERS THEN
RETURN NULL;
END F_GET_ORIGINAL_CREATOR_ID;

In V$SQL, I see it loads a new version of the SQL every time a value or the base table name is changed. Can you please help me to use BIND variables in this scenario.

Thanks.

Back Ground: Every Base table (EMP) will have a change log table (EMP_CL) and it has effective_usr_id to store the initial creator id through the triggers.

Tom Kyte
November 19, 2004 - 12:55 pm UTC

ugh, dynamic sql.

SELECT EFFECTIVE_USR_ID
FROM '||P_BASE_TABLE_NAME||'_CL'||
' WHERE '|| P_BASE_TABLE_NAME||'_ID = '||P_PRIMARY_KEY_ID||
' AND EFFECTIVE_DTE_STAMP = (SELECT MIN(EFFECTIVE_DTE_STAMP)
FROM '||P_BASE_TABLE_NAME||'_CL'||
' WHERE '|| P_BASE_TABLE_NAME||'_ID =
'||P_PRIMARY_KEY_ID||')'


at the very very very least should be:

SELECT EFFECTIVE_USR_ID
FROM '||P_BASE_TABLE_NAME||'_CL'||
' WHERE '|| P_BASE_TABLE_NAME||'_ID = :PK1 '||
' AND EFFECTIVE_DTE_STAMP = (SELECT MIN(EFFECTIVE_DTE_STAMP)
FROM '||P_BASE_TABLE_NAME||'_CL'||
' WHERE '|| P_BASE_TABLE_NAME||'_ID = :PK2 )'

open that using p_primary_key_id, p_primary_key_id


But better yet, instead of even HAVING such a function, simply:

create view base_table_name_vw
as
select a.*, ( SELECT substr(
min( to_char(effective_dte_stamp,'yyyymmddhh24miss') ||
EFFECTIVE_USR_ID ), 15 )
FROM base_table_name_cl
where base_table_name_id = a.base_table_name_id ) orig_creator
from base_table_name a
/

and avoid the dynamic sql/function stuff all together. if they want the original creator, they select that column - if not, they don't
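
Putting the "at the very least" version together as a whole function, a sketch (same logic as the original; only the primary key value is bound, since identifiers such as the table name cannot be bind variables):

create or replace function f_get_original_creator_id
( p_base_table_name in varchar2,
  p_primary_key_id  in number )
return number
is
    l_creator_id number;
begin
    execute immediate
        'select effective_usr_id
           from ' || p_base_table_name || '_cl
          where ' || p_base_table_name || '_id = :pk1
            and effective_dte_stamp =
                ( select min(effective_dte_stamp)
                    from ' || p_base_table_name || '_cl
                   where ' || p_base_table_name || '_id = :pk2 )'
       into l_creator_id
       using p_primary_key_id, p_primary_key_id;
    return l_creator_id;
end;
/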

Using View

Alex, November 19, 2004 - 1:18 pm UTC

What did you mean by using a view instead of the function? The function is used so that we can call it for any base table, but views would have to be made for every single base table. In our database we have about 3000 base tables and the same number of _CL tables.

Please elaborate on what you meant by using the view.

Thanks.

Tom Kyte
November 19, 2004 - 2:13 pm UTC

that is exactly what I meant. create 3000 views (if you can write a generic function, I'm sure you can write a generic procedure to create a view of a given table since they all have the same structure)


Better yet ...

Gabe, November 19, 2004 - 4:53 pm UTC

To Alex ...

Better yet ... have no function and have no views ... add a "created_by" column to each table and populate via the row trigger upon insert. Change Log tables are _good_ for auditing type activities ... trying to use their content for regular type processing is just a performance drain.

Tom Kyte
November 19, 2004 - 7:54 pm UTC

they want the *original creator* -- the min effective date by "key"

it would require a data model change (in my opinion) to do that -- you don't want to repeat the original creator in each and every row.

am I missing something?!

Gabe, November 20, 2004 - 11:04 am UTC

<quote>they want the *original creator* -- the min effective date by "key"</quote>

Yes ... and I was suggesting not to ever have to perform a MIN ... "base_table.created_by" would be populated upon insert and not changed afterwards. You know, the typical basic row auditing:

created_by
created_dt
last_changed_by
last_changed_dt

An insert will modify all 4 columns; updates and deletes will modify only the last 2.

<quote>it would require a data model change</quote>
The change they want to do has a wide impact (whether it is the function or the views or a combination of both, the code will have to be revisited and touched) ... there is no doubt about it (and if they coded "select * from base_table" then they deserve their fate). To me it looks like they built a multi-user system with 3000+ tables and _forgot_ to add basic row-level auditing (and more, they actually need it) ... hence, now would be the time to cut their losses and fix the model ... else, they will have this frequent and unnecessary join forever (not a Paradise I would like to be in :)



Tom Kyte
November 20, 2004 - 11:23 am UTC

it looks like a versioned table -- no updates - just inserts for a change history.

It seems like we're talking about different things ...

Gabe, November 20, 2004 - 11:38 am UTC

<quote>Back Ground: Every Base table (EMP) will have a change log table (EMP_CL) and it has effective_usr_id to store the initial creator id through the triggers.</quote>

I'm suggesting adding columns to the base tables ... not to the change history tables.


Tom Kyte
November 20, 2004 - 11:45 am UTC

ahh, i see, yes -- that would make sense, if the first user to edit and the "last user" to edit are relevant and queried often, that would make perfect sense.

(forest, trees, yadda yadda yadda sometimes.)

To Gabe

A reader, November 20, 2004 - 2:56 pm UTC

"I'm suggesting adding columns to the base tables ... not to the change history tables."

Ok so you add "creator" to the base table (emp_base), how does that help to get it in all subsequent changes to the row in the emp_cl table? (which is what the person wanted?)

Tom Kyte
November 20, 2004 - 3:34 pm UTC

no, they KEEP the emp_cl table -- they are most interested in the original creator -- Gabe is right here.

The EMP table would have "created_by"

The EMP_CL table would have the history of the changes.

They would no longer need a function of any sort.

A reader, November 20, 2004 - 7:33 pm UTC

Right, but the idea behind the original person's trigger was that they seemed to want the original creator on every subsequent change record in emp_cl. That would store it needlessly for each row. So, in that sense, Gabe's idea of storing it once in the base table is good.

But, if they need to access it on every change record, that is a join from the emp_base to emp_cl table?

Instead of having to incur the expense of that join for each emp_cl record, it would make sense to capture it on each new before insert trigger on emp_cl record, right? Which brings us right back to where we started?

Tom Kyte
November 21, 2004 - 8:34 am UTC

It seems at the end of the day

a) the original creator should be an attribute of the base "EMP" table
b) the trigger would version the rows into the EMP_CL table
c) there would be no need for this function, its dynamic sql or overhead.

strange query

raajesh, December 09, 2004 - 10:18 am UTC

Tom,

please help me out with this query..

When I give a query with the where condition on a number field in ' ' (quotes), it is taking a long time to execute, whereas if I don't give it in quotes it executes faster.. why?

like
select <fieldvalue> from table where <32digit number field>=<value in quotes> is taking much more time than
select <fieldvalue> from table where <32digit number field>=<value without quotes>...

This is not observed, however, in other environments.

where am I making a mistake...

thanks much

srr



Tom Kyte
December 09, 2004 - 3:13 pm UTC

ops$tkyte@ORA10GR1> create table t ( x varchar2(5) );
 
Table created.
 
ops$tkyte@ORA10GR1> create index t_idx on t(x);
 
Index created.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> delete from plan_table;
 
0 rows deleted.
 
ops$tkyte@ORA10GR1> explain plan for select * from t where x = '5';
 
Explained.
 
ops$tkyte@ORA10GR1> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
Plan hash value: 1247842438
 
--------------------------------------------------------------------------
| Id  | Operation        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT |       |     1 |     4 |     1   (0)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| T_IDX |     1 |     4 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - access("X"='5')
 
Note
-----
   - dynamic sampling used for this statement
 
17 rows selected.
 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> delete from plan_table;
 
2 rows deleted.
 
ops$tkyte@ORA10GR1> explain plan for select * from t where x = 5;
 
Explained.
 
ops$tkyte@ORA10GR1> select * from table(dbms_xplan.display);
 
PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------
Plan hash value: 749696591
 
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |     4 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |     4 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter(TO_NUMBER("X")=5)
 
Note
-----
   - dynamic sampling used for this statement
 
17 rows selected.
 
<b>sounds like what you did, implicit conversions....</b>

 

Is the bind variable used???

sean, December 09, 2004 - 11:44 pm UTC

Tom,
found the sql statement from an application.
it seems to be using a bind variable ---> "A1"."TASK_ID"=:TASK_ID
But the stats shows:
parse 7227
execute 7227

I am very confused.
thank you very much

SELECT "A1"."TASK_ID","A1"."CONV_TASK_IND","A1"."EXPENDITURE_TYPE",
"A1"."COST_ORG_CODE","A1"."ACCT_PERIOD","A1"."APPROP_DEPT_CODE",
"A1"."APPROP_SYMBOL","A1"."APPROP_FISCAL_YEAR","A1"."CIV_CCS_CODE",
"A1"."TRANSACTION_DATE","A1"."TRANS_HRS","A1"."RAW_COST_AMT",
"A1"."BURDEN_COST_AMT"
FROM
"P2_COST_TRANS_VIEW" "A1" WHERE "A1"."TASK_ID"=:TASK_ID ORDER BY
"A1"."TASK_ID"


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 7227 3.78 5.80 0 0 0 0
Execute 7227 4.14 4.74 0 0 0 0
Fetch 8520 954.01 970.90 85 91293300 0 1293
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 22974 961.93 981.44 85 91293300 0 1293


Tom Kyte
December 10, 2004 - 1:05 am UTC

that means the coder parsed it 7,227 times?

they control that -- they were soft parses but the only good parse is no parse. ask the coder "why did you do that"


then, ask them -- and why would you run a query that takes


91293300/1293
-------------
70605.8005


70,605 LIO's per row - and think "this is an OK thing"

Strange query continued...

raajesh, December 10, 2004 - 4:06 am UTC

Tom

I tried the same thing you gave, my field being a number..

but it is using the index in both cases and the output is taking the same time.

why is this misbehaviour...

Thanks much..
srr

Tom Kyte
December 10, 2004 - 10:56 am UTC

prove it. lets see the tkprof. present it like I would present it to you -- full disclosure.

(in other words - throw me a bone, can't you see i'm totally guessing because you give me no example to work with????)


and if the output is taking the same time, what is the problem exactly??

statistics

Raajesh, December 10, 2004 - 12:34 pm UTC

Tom,

this is what I tried...
SQL> desc t;
 Name                                      Null?    Type
 ----------------------------------------- -------- -----------------
 X                                                  NUMBER

SQL> select * from t;

         X
----------
         1
         2
         3

SQL> set autotrace on;
SQL> select * from t where x=3;

         X
----------
         3


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        189  bytes sent via SQL*Net to client
        276  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select * from t where x='3';

         X
----------
         3


Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   INDEX (RANGE SCAN) OF 'T_IDX' (NON-UNIQUE)




Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          1  consistent gets
          0  physical reads
          0  redo size
        202  bytes sent via SQL*Net to client
        276  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

Though my x is a number field, it is using the same index for both the x=3 and x='3' conditions and not a full table scan as in your case...

For a similar condition (though in this case the time is the same), x='3' is not working in the other environment and is taking a longer time.

Am I doing anything wrong..

I just followed the procedure which you have given..

Thanks much
srr
 

Tom Kyte
December 10, 2004 - 7:53 pm UTC

number is the opposite of varchar2 -- with a NUMBER column it is the character literal that gets converted, not the column, so the index can still be used


you are confusing the heck out of me here.

first you say:

<quote>
When I give a query, with the where condition to a number field in ' ' (quotes)
it is taking a long time to execute wheras if I dont give it in quotes it is
executing in a faster way.. why?
</quote>

but now you say "they are the same speed"

so -- which is it?

show us the EXAMPLE that is going bad, throw me a bone.

Sean, December 11, 2004 - 12:26 am UTC

Tom,
you stated:
"
that means the coder parsed it 7,227 times?

they control that -- they were soft parses but the only good parse is no parse.
ask the coder "why did you do that"
"
then I asked the developer, and he said native dynamic SQL is used and Oracle has to soft parse native dynamic SQL. I asked why not use DBMS_SQL, and he said because DBMS_SQL would not work for DDL statements.




Tom Kyte
December 11, 2004 - 7:54 am UTC

dbms_sql works for DDL

but that is not even remotely RELEVANT here. what does DDL have to do with DML?

they are doing DML, they are parsing it each and every time -- they are the cause of the problem here.

(it probably shouldn't even be executed that many times -- it probably needs to be executed once. probably the developer has implemented their own nested loop join, running 10's of thousands of queries when a single one would suffice -- not only suffice but run faster with tons less resources!)

Bind variable question

Yogesh, December 24, 2004 - 8:56 am UTC

Tom, I have to do an update/delete of around 15K records in two tables.
For this I am writing a PL/SQL block which contains a main cursor that will select the id's (primary key) of the records that need to be updated/deleted.

I use the following code to update/delete the id's returned from the main cursor...

FOR rec_mvmt_error IN c_mvmt_error LOOP


update shipment_movement set validate_flag = 'V', last_updated_by = 'TRLR_SCRIPT' where movement_id = rec_mvmt_error.movement_id ;

delete from movement_error where movement_id = rec_mvmt_error.movement_id;

i_total_cnt_updated := (i_total_cnt_updated +1);
IF (MOD(i_total_cnt_updated,500)=0) THEN
COMMIT;
END IF;
END LOOP;

Does this approach to the update/delete use bind variables or is there a better way to do it!!!

Thanks and wishing you a Merry Christmas Tom!!!

Tom Kyte
December 24, 2004 - 9:20 am UTC

why would you write CODE TO DO THIS??????????????????????????


you need exactly, precisely two sql statements to do this, NO CODE PLEASE.


15k records -- tiny, teeny tiny, small.
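
To make that concrete, a sketch of the two statements; the error-selection predicate is hypothetical, since the original c_mvmt_error cursor was not posted:

-- one UPDATE and one DELETE, no cursor loop, no incremental commits
update shipment_movement
   set validate_flag   = 'V',
       last_updated_by = 'TRLR_SCRIPT'
 where movement_id in ( select movement_id
                          from movement_error
                         where error_status = 'RESOLVED' );  -- hypothetical predicate

delete from movement_error
 where error_status = 'RESOLVED';                            -- hypothetical predicate

commit;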

Bind variables

Yogesh, December 27, 2004 - 6:07 am UTC

Because it gives me a rollback segment error when I try to delete them using a sql statement ....

Tom Kyte
December 27, 2004 - 10:14 am UTC

then someone hasn't set up your system right -- find a dba and have them correct the issue.

Bind varaiables

Yogesh, December 28, 2004 - 7:43 am UTC

No, actually, we don't need to carry out such mass deletes very often.. it is just a once-in-a-while activity carried out from the back end.. so we cannot ask the DBA to change the system settings...

can you please let me know if the above script approach will make use of bind variables.. or is there any better approach...

Thanks Tom...



Tom Kyte
December 28, 2004 - 10:44 am UTC

i already outlined the best approach and you've rejected it.

Bind variables and static procedure calls

Martin, January 07, 2005 - 3:56 am UTC

Hello Tom,

Simple question, but difficult (for me) to prove conclusively,
but let's say you have a procedure which is called at the beginning of loads of other procedures (it's actually a debug function), where the name of the current procedure is "passed in", i.e.

PROCEDURE p
AS
BEGIN
debug('p');
END p;

PROCEDURE q
AS
BEGIN
debug('q');
END q;

Is there any advantage at all in declaring a variable ensuring that each call to the debug procedure is the same?, i.e.

PROCEDURE p
AS
proc_name VARCHAR2(10) := 'p';
BEGIN
debug(proc_name);
END p;

Thanks in advance

Tom Kyte
January 07, 2005 - 9:17 am UTC

nope..

So...

Martin, January 07, 2005 - 9:24 am UTC

I'm thinking mostly of the parsing of statements like this, in a PL/SQL context: does Oracle convert statements like the above, i.e. debug('p'), debug('q'), to the same thing as debug(var), and subsequently "reuse" this?

I know it's a minor thing, but I'm finding it difficult to prove this either way.

Thanks in advance.

Tom Kyte
January 07, 2005 - 9:40 am UTC

no, only if you were:

execute immediate 'begin debug( ''p'' ); end;';
execute immediate 'begin debug( ''q'' ); end;';

would you need to do anything special (bind !)

Thanks

Martin, January 07, 2005 - 9:57 am UTC

Yep, I've done investigation looking at the parse count for procedures doing the two approaches, and I've proved there's no difference.

So, worrying about bind variables really DOES only apply to dynamic SQL or PL/SQL.

Thanks in advance

Tom Kyte
January 07, 2005 - 10:07 am UTC

correct

Can bind variables be used here

abhay, February 11, 2005 - 12:23 pm UTC

Is there any scope for using bind variables in the following code?

CREATE OR REPLACE PROCEDURE devn( p_batch_id number,p_rule_code varchar2 )
is
begin

for rec in (select * from rule_status where batch_id = p_batch_id and rule_code = p_rule_code )
loop
...............
end loop;

end;
/

Tom Kyte
February 12, 2005 - 8:02 am UTC

you are fully utilizing bind variables.

that is the sheer and utter beauty of plsql -- variables referenced in SQL are in fact *binds*


think about it, every time you run that query, it feeds the values for p_batch_id and p_rule_code in there -- it doesn't build a unique query that never existed before.

Binding in Java

Alex, February 17, 2005 - 2:34 pm UTC

Hi Tom,

I'm trying to help identify a performance problem with our app and I suspect it's due to a lack of bind variables. The problem is, I don't know for sure because it's dynamic sql in Java that I don't understand. Here's a snippet:

Stmt = "SELECT * FROM (SELECT DISTINCT ";
Stmt += "substr(CODE, 0,2) cc, emp ";
Stmt += "FROM dept ";
Stmt += "WHERE KEYS_HERE AND ";
Stmt += "emp = 'KEY_0_HERE' ";
Stmt += "UNION ";
Stmt += "SELECT DISTINCT ";
Stmt += "substr(CODE, 0,2) cc, emp ";
Stmt += "FROM dept2 ";
Stmt += "WHERE KEYS_HERE AND ";
Stmt += "emp = 'KEY_0_HERE' ";
Stmt += ") ORDER BY cc, emp ";

Where KEY_0_HERE is determined by statements like

for(int k=0; k < keyColumns.length; k++){
stmtWithKeys = replace(stmtWithKeys,"KEY_"+k+"_HERE", rs[keyColumns[k]]);

and KEYS_HERE is determined by statements like

if(stmtWithKeys.indexOf("KEYS_HERE") != -1){
stmtWithKeys = replace(stmtWithKeys,"KEYS_HERE", KeyAnd);
stmtWithKeys = replace(stmtWithKeys,"alias.","");

Questions
1) Can you tell from this if you think bind variables are being used?
2) If they are not, how would recommend correcting this? In Effective Oracle by Design you mention "host variables that you assign values to and bind". An example of that if applicable would be outstanding. Much obliged.

Tom Kyte
February 17, 2005 - 2:53 pm UTC

1) not a bind variable to be seen!

2) steps to correct:

a) outlaw select, insert, update, delete, merge in the java code.
b) only procedure calls allowed

look at the query:



SELECT *
FROM (SELECT DISTINCT substr(CODE, 0,2) cc, emp
FROM dept
WHERE KEYS_HERE
AND emp = 'KEY_0_HERE'
UNION
SELECT DISTINCT substr(CODE, 0,2) cc, emp
FROM dept2
WHERE KEYS_HERE
AND emp = 'KEY_0_HERE'
) ORDER BY cc, emp


funny thing is -- does the developer understand that there is a sort/distinct applied to this implicitly (union does that). distinct is "not needed" at all.

substr starting from 0?

replace KEY_0_HERE with ? and bind it.

find out what "keys_here" is (magic? it is just SQL somewhere hiding in the code) and put it in there.


or just turn it into a stored procedure that they pass inputs into and get ref cursors from. that'd be my approach, revoke ALL on all tables, create a schema to hold the code -- put plsql in there and let them have at the plsql....
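
A bare-bones sketch of that last approach, using the tables and columns from the snippet above (the package name is invented, and the KEYS_HERE piece is left out since it was not shown):

create or replace package emp_api
as
    type rc is ref cursor;
    procedure get_codes( p_emp in varchar2, p_result out rc );
end;
/
create or replace package body emp_api
as
    procedure get_codes( p_emp in varchar2, p_result out rc )
    is
    begin
        -- static SQL: p_emp is automatically a bind, and no DISTINCT is
        -- needed because UNION already removes duplicates
        open p_result for
            select substr(code, 1, 2) cc, emp from dept  where emp = p_emp
            union
            select substr(code, 1, 2) cc, emp from dept2 where emp = p_emp
            order by cc, emp;
    end;
end;
/

-- the java side then makes one bound call:  begin emp_api.get_codes( ?, ? ); end;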




unfortunately......

Alex, February 17, 2005 - 4:01 pm UTC

The first thing I said when I saw this class, which is filled with stuff like this, was "this should not be done in Java." I don't think it's realistic to redo the entire thing, it's too big. I was hoping binding everything would help enough so our client could actually use the functionality it's supposed to provide.

So, to do that, would I replace Stmt = "emp = 'KEY_0_HERE'";
with
Stmt = "emp = :x";?

I was having problems trying to rewrite them with binds.
I can:

variable emp number;
exec :emp := 1;

Then do a query using emp, but you say not to do that in the book. I need the app specific way to bind for a sql statement. Thanks for your time.


Tom Kyte
February 17, 2005 - 7:08 pm UTC

in java, they use ?, not :x

abhay, February 22, 2005 - 3:17 pm UTC

I have to update rows using dynamic WHERE conditions:

select
(
case when RULE_CODE = '1' then
whr = ' where NAME = ' || rec.name
case when RULE_CODE = '2' then
whr = ' where PERM = ' || rec.perm
............
else whr = '1 = 2'
)
from dual;

upd_str = 'update ... ' || WHR_STR;
EXECUTE IMMEDIATE upd_str;

My problem is that this is generating very low ( ~5% ) EXECUTE /PARSE ratio.

a. Is this because there are no bind variables ?

b. Is there a different approach ?








Tom Kyte
February 22, 2005 - 5:19 pm UTC

you are having a parse every time you execute it -- that is why you have a low parse/execute ratio

your low SOFT PARSE % is due to the lack of binds

curious, why not just code:

if ( rule_code = '1' )
then
update .... where name = rec.name;
elsif ( rule_code = '2' )
then
.....


static sql seems to totally apply here, would fix the parse/execute AND the lack of binds bug you have.

Binds

Alex, February 23, 2005 - 1:35 pm UTC

Tom,

Something just dawned on me. In the example I gave I think the developer made a mistake and this:

Stmt = "emp = 'KEY_0_HERE'";

should not have single quotes around KEY_0_HERE, it was meant to be a value not a literal string. If that is the case, is it still missing binds? I know in pl/sql something like emp = v_emp is being bound, but with a string in Java, I don't know how that works. Sorry for the silly questions.

Tom Kyte
February 24, 2005 - 5:06 am UTC

it is definitely missing binds, you don't see any "bind" calls in the code do you?

in java/jdbc, a bind would be a '?' in the query and they would call 'bind' instead of 'replace' (eg: it is the same amount of work from the coder's perspective to bind or replace -- actually replacing is harder, but from the database's perspective it is a TON of extra work if the coders use replace instead of binding)

Sooo helpful

Alex, February 24, 2005 - 9:46 am UTC

You are correct, no bind calls and they're using replace. Although I'm not even sure what a 'bind call' would look like, setInt(1, i); maybe. In all the examples I can find, I don't see where the "?" gets its value from. I'm under the impression I have to do more than just stick a ? where the string is. Do you have examples kicking around to give me an idea how to do this, or would you mind whipping one up with the query I gave? It's so frustrating knowing how badly this affects the database and not knowing how to code it (I'm a database programmer doing Java). Thanks a lot.

Tom Kyte
February 24, 2005 - 9:53 am UTC

setInt(1,i) would be one.


</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:5180609822543 <code>

has some examples..

possible issues

Alex, February 24, 2005 - 10:35 am UTC

Ok I think I finally know now what I must do. If I could trouble you with two more quick ones:

1) Currently "stmt" is a string, It looks like I have to change it to a PreparedStatement, could that cause issues?

2) I'm guessing that I have to match the type of whatever I'm passing the set method with it's respected type, ie pass an int to setInt, strings to setString?

Tom Kyte
February 24, 2005 - 4:56 pm UTC

1) stmt will be a string, you will use a PreparedStatement instead of a Statement to execute it so you can bind

2) correct.

sqlj? java SP?

Alex, February 25, 2005 - 3:09 pm UTC

Hi Tom,

This is turning into a real nightmare. I forgot to even ask if using SQLJ would have been any better, and also, would running the java classes in the database as stored procedures (not pl/sql) help a lot by reducing calls back and forth?

Tom Kyte
February 25, 2005 - 6:43 pm UTC

sqlj is dead (going away, desupported). java programmers were not using it (guess it was too easy :)

keep the java out of the database -- but by all means exploit stored procedures whenever possible!

Bind variables - less parsing but uniform and worse plan

Oleg Oleander, February 25, 2005 - 8:10 pm UTC

-- bind variables stored in context
SQL> select TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_L')), TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_H'))
  2  from dual;

TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_L')) TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_H'))             
----------------------------------------------------- -----------------------------------------------------             
                                                 5000                                                  5000             

-- in both Query-A and Query-B we filter on 'PROJECT_ID' and 'NET_SUM'
-- in Query-A we use constants, in Query-B bind variables thru context

-- Query-A:    this uses index: IX_T_BILLS_NETSUM on B.NET_SUM.VALUE
SQL> set timing on
SQL> select  *
  2  from V#F_PROJECT_BILLS B
  3  where    ( B.NET_SUM.VALUE between 5000 and 5000);

no rows selected

Elapsed: 00:00:00.02

EXECUTION PLAN
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   VIEW OF 'V#F_PROJECT_BILLS'
   2    1     UNION-ALL
   3    2       FILTER
   4    3         NESTED LOOPS
   5    4           NESTED LOOPS
   6    5             NESTED LOOPS
   7    6               MERGE JOIN (CARTESIAN)
   8    7                 FIXED TABLE (FULL) OF 'X$CONTEXT'
   9    8                   FIXED TABLE (FULL) OF 'X$CONTEXT'
  10    7                 BUFFER (SORT)
  11   10                   TABLE ACCESS (BY INDEX ROWID) OF 'T_BILLS'
**12   11                     INDEX (RANGE SCAN) OF 'IX_T_BILLS_NETSUM' (NON-UNIQUE)
  13    6               TABLE ACCESS (BY INDEX ROWID) OF 'T_PROJECTS'
  14   13                 INDEX (UNIQUE SCAN) OF 'PK_T_PROJECTS' (UNIQUE)
  15    5             INDEX (UNIQUE SCAN) OF 'PK_T_PLS_RIGHTS__PRO_EMP' (UNIQUE)
  16   15               FIXED TABLE (FULL) OF 'X$CONTEXT'
  17    4           INDEX (UNIQUE SCAN) OF 'PK_T_BSLS_RIGHTS_EMP_BS' (UNIQUE)
  18    3         FIXED TABLE (FULL) OF 'X$CONTEXT'
  19   18           FIXED TABLE (FULL) OF 'X$CONTEXT'
  20    3         COLLECTION ITERATOR (PICKLER FETCH) OF 'F_GET_CTX_GENERIC_LIST'
  21    2       FILTER
  ...left out.

-- Query-B:    this chooses PK_T_PROJECTS on P.PROJECT_ID (it is inside V#F_PROJECT_BILLS)
SQL> select  *
  2  from V#F_PROJECT_BILLS B
  3  where ( B.NET_SUM.VALUE between TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_L')) and TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_H')));

no rows selected

Elapsed: 00:00:11.04

EXECUTION PLAN
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   VIEW OF 'V#F_PROJECT_BILLS'
   2    1     UNION-ALL
   3    2       FILTER
   4    3         FILTER
   5    4           NESTED LOOPS
   6    5             NESTED LOOPS
   7    6               NESTED LOOPS
   8    7                 NESTED LOOPS
   9    8                   FIXED TABLE (FULL) OF 'X$CONTEXT'
  10    9                     FIXED TABLE (FULL) OF 'X$CONTEXT'
  11    8                   TABLE ACCESS (BY INDEX ROWID) OF 'T_PROJECTS'
**12   11                     INDEX (UNIQUE SCAN) OF 'PK_T_PROJECTS' (UNIQUE)
  13    7                 INDEX (UNIQUE SCAN) OF 'PK_T_PLS_RIGHTS__PRO_EMP' (UNIQUE)
  14   13                   FIXED TABLE (FULL) OF 'X$CONTEXT'
  15    6               TABLE ACCESS (BY INDEX ROWID) OF 'T_BILLS'
  16   15                 INDEX (RANGE SCAN) OF 'IX_BILLS_PROJECT_ID' (NON-UNIQUE)
  17    5             INDEX (UNIQUE SCAN) OF 'PK_T_BSLS_RIGHTS_EMP_BS' (UNIQUE)
  18    3         FIXED TABLE (FULL) OF 'X$CONTEXT'
  19   18           FIXED TABLE (FULL) OF 'X$CONTEXT'
  20    3         COLLECTION ITERATOR (PICKLER FETCH) OF 'F_GET_CTX_GENERIC_LIST'
  21    2       FILTER
  ...left out.

Please tell me if I am right with the following:
1. With Query-A the optimizer looked at the constant values
   and index stats, and decided that using the index on NET_SUM was the better plan.

2. With Query-B the optimizer did not look at the values bound thru SYS_CONTEXT,
   so it took the "safer" plan and used the - otherwise rather selective - index on PROJECT_ID.

3. If the optimizer looked at the bound values it would have changed the plan, but that means
   different plans for the same SQL, a different plan maybe for every execution. This cannot happen
   since the library cache holds one execution plan per SQL statement to avoid parsing too much.

4. So in this case, by using bind variables we avoid parsing but get a uniform and worse execution plan.

Thank you very much for teaching.

Oleg 

Tom Kyte
February 26, 2005 - 7:51 am UTC

well, there is a lot going on here -- you have a view, you have plsql functions going on -- lots of stuff.


suggest you try a FIRST_ROWS hint if you want the query to be optimized for response time (getting the first rows back as fast as possible)
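
For example, a sketch only (whether FIRST_ROWS actually helps here depends on the real data and can only be confirmed by testing it):

select /*+ FIRST_ROWS */ *
  from V#F_PROJECT_BILLS B
 where B.NET_SUM.VALUE between TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_L'))
                           and TO_NUMBER(SYS_CONTEXT('VIR_SESSION_CTX','FBV_1_1_H'));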

the crutch

Alex, March 01, 2005 - 11:50 am UTC

Tom,

For my dynamic sql bind problem, would setting cursor sharing to force help me in this situation? I was thinking of altering the session before running this monster; I would expect a big improvement, but what do you think? I don't think the side effects of cursor sharing would be a problem for this case. At the very least, if I saw improvement, I would know binds are at least one of the problems?

Tom Kyte
March 01, 2005 - 12:25 pm UTC

binds will improve the scalability of your application mostly, you'd have to set up a multi-user test to see this.


only set cursor sharing at the session (not system) level and only for the applications that need it

APL, March 04, 2005 - 2:30 am UTC

SELECT ir.investigation_id InvestigationId,ir.file_no FileNo,
to_char(ir.occ_from,'MM/DD/YYYY') OccDate, ir.occ_from OccFromDate,
gu.first_nm FirstName,gu.mi_nm MiddleName,gu.last_nm LastName,
ls.description Status, irm.hfb_flag HFBFlag, irm.del_flag DelFlg,
to_char(irm.assigned_date,'MM/DD/YYYY') AssiDate, irm.create_by,
irm.status_cd StatusCd, ir.loc_fac_desc FacDesc, ir.class_category_desc
ClassDesc , irm.assigned_id AssignedID
FROM
investigation_report ir JOIN investigation_report_mtd irm ON
ir.investigation_id = irm.investigation_id AND ir.version_no =
irm.version_no LEFT OUTER JOIN grit_user gu ON irm.assigned_id=gu.user_id
LEFT OUTER JOIN lookup_status ls ON irm.status_cd = ls.code where
IR.Version_NO=0 AND IRM.Status_Cd = SYS_CONTEXT('INVESTIGATIONSEARCH_CTX',
'IRM.Status_Cd') ORDER BY InvestigationId DESC


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       63      0.13       0.08          0          0          0           0
Execute     64      0.03       0.05          0          0          0           0
Fetch      189     22.50      22.15        504      49863          3        1890
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      316     22.66      22.29        504      49863          3        1890

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 141 (GRITSOX) (recursive depth: 1)

Rows Row Source Operation
------- ---------------------------------------------------
30 SORT ORDER BY (cr=779 r=0 w=0 time=352486 us)
9631 MERGE JOIN OUTER (cr=779 r=0 w=0 time=257309 us)
9631 HASH JOIN (cr=777 r=0 w=0 time=200591 us)
9631 HASH JOIN OUTER (cr=222 r=0 w=0 time=67341 us)
9631 TABLE ACCESS FULL INVESTIGATION_REPORT_MTD_BT PARTITION: 1 1 (cr=210 r=0 w=0 time=20592 us)
413 TABLE ACCESS FULL GRIT_USER_BT (cr=12 r=0 w=0 time=846 us)
10217 TABLE ACCESS FULL INVESTIGATION_REPORT_BT PARTITION: 1 1 (cr=555 r=0 w=0 time=37224 us)
9631 SORT JOIN (cr=2 r=0 w=0 time=34073 us)
1 TABLE ACCESS BY INDEX ROWID LOOKUP_STATUS (cr=2 r=0 w=0 time=53 us)
1 INDEX UNIQUE SCAN PK_LOOKUP_STATUS (cr=1 r=0 w=0 time=23 us)(object id 110428)


I am getting the above plan from the tkprof output. I am using this for a search condition in our application. Here I am calling a stored procedure, and inside the stored procedure I am executing this query, using a context to set the bind variable. Can you tell me why the parse ratio and parse CPU time are high in this case even after using bind variables?

Tom Kyte
March 04, 2005 - 8:04 am UTC

because your code requested this statement to be parsed 63 times?

binding does not eliminate parsing, only programmers can eliminate parsing. binding changes the type of parse from HARD (really bad) to SOFT (just bad).

The only good parse is *no parse* -- and you would have to do that (or use plsql of course, which will automagically cache static sql so it gets parsed once per session instead of once per execution)
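
To make that concrete, a minimal sketch using the usual EMP demo table (the procedure name is made up for illustration):

create or replace procedure count_by_name( p_ename in varchar2 )
as
    l_cnt number;
begin
    -- static SQL: p_ename is automatically a bind variable, and the PL/SQL
    -- engine caches the cursor after the first execution in the session
    select count(*) into l_cnt from emp where ename = p_ename;
end;
/

begin
    for i in 1 .. 100 loop
        count_by_name( 'KING' );
    end loop;
end;
/

A tkprof of that session would typically show the SELECT with a parse count of 1 and an execute count of 100.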

Parse ratio

APL, March 05, 2005 - 1:50 am UTC

This query is executed from PL/SQL, i.e. I am setting values into the context and using those values to build the query. We are following the method of using bind variables from your book 'Expert One on One' (Chapter 16 - Dynamic Queries). Even after that, the parse ratio is not coming down.
Are the parse counts shown above hard parses or soft parses? Is there any way to differentiate between the two from the trace files?
The load test of my system is showing red in the parse to execute ratio. How should I bring down the parse count?


Tom Kyte
March 05, 2005 - 6:58 am UTC

with dynamic SQL -- unless you use dbms_sql (or 10g in some cases, there is an optimization new in there), every execute is a parse as well.

See the chapter on dynamic sql in that same book -- I discuss why dbms_sql is perhaps not as dead as it looks.


The parse counts above show:

Misses in library cache during parse: 0


they were all soft.
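
As an aside, a minimal sketch of the dbms_sql "parse once, bind and execute many" pattern referred to above (the table, columns and values are only for illustration):

declare
    l_cursor  integer := dbms_sql.open_cursor;
    l_rows    integer;
begin
    -- parse exactly once, outside the loop
    dbms_sql.parse( l_cursor,
                    'update emp set sal = sal + :raise where empno = :empno',
                    dbms_sql.native );
    for i in 1 .. 100
    loop
        -- re-bind and re-execute the already parsed cursor
        dbms_sql.bind_variable( l_cursor, ':raise', 100 );
        dbms_sql.bind_variable( l_cursor, ':empno', 7000 + i );
        l_rows := dbms_sql.execute( l_cursor );
    end loop;
    dbms_sql.close_cursor( l_cursor );
end;
/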

PARSE COUNT

APL, March 06, 2005 - 9:05 am UTC

Is there any change in parse count if I modify the parameter cursor_sharing to similar? Now it is in exact mode.

Tom Kyte
March 06, 2005 - 2:57 pm UTC

nope, cursor_sharing does not, cannot fix an over parse situation. that is why cursor_sharing isn't an end solution, it is a step on the road to getting better.

If you need cursor sharing set to anything other than exact, it means you are seriously in a parse/execute/close situation...

it'll turn hard into soft, but the only good parse is no parse.
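
If you do fall back on it as a stop-gap, the earlier advice about scoping it to just the session that needs it would look like this (a sketch; EXACT is the default setting):

alter session set cursor_sharing = force;
-- run the literal-laden application code here, then put it back:
alter session set cursor_sharing = exact;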

parse count

APL, March 07, 2005 - 4:14 am UTC

If I change the parameter session_cached_cursors to some higher value, is there any gain in performance or any change in soft parsing?

Tom Kyte
March 07, 2005 - 8:19 am UTC

you will reduce the amount of latching if you have lots of session cursor cache hits, due to the "softer soft parse" (if you search for that term you'll find other discussions on this topic).

A single program running in isolation won't run any faster, but take that same program and run 10 copies at the same time and you'll see them run more concurrently and faster overall. In other words, it helps you scale up.
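
A minimal way to try that out for one session (the value 100 is arbitrary; 'session cursor cache hits' is the statistic to watch before and after the test):

alter session set session_cached_cursors = 100;

select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'session cursor cache hits';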

don't use cursor sharing

riyaz, March 07, 2005 - 7:12 am UTC

Hi buddy, my suggestion:

It is not advisable to use cursor sharing; it is BETTER to change the code to make it bind enabled. In my experience, it has sometimes had a drastic effect the other way (slowing things down instead of speeding them up).



Can you "reuse" the bind placeholder?

Nick, March 30, 2005 - 2:17 pm UTC

A simple question-
I have written some code to count the number of NULL values in a row. It generates dynamic sql that ends up looking something like this:

select sum(v.n) from
( select 0 as n from dual
union all
select 1 from ORDERS where LINE_TR is NULL and ID= :1
union all
select 1 from ORDERS where LINE_CR is NULL and ID= :1
) v ;

Can I do that and use just one 'using' substitution like:

EXECUTE IMMEDIATE mysql INTO null_count USING '1234';

In other words, can my ':1' substitution be used in more than one place, or is it just a one-for-one substitution?

Thanks.



Tom Kyte
March 30, 2005 - 2:31 pm UTC

select sum( decode(line_tr,null,1,0) +
            decode(line_cr,null,1,0) +
            .... )
  from orders
 where id = :1
/

that'll return 0 or some value, always.
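
To tie it back to the original question: with EXECUTE IMMEDIATE, placeholders in a SQL statement are matched to the USING list by position, so a repeated :1 would need its value supplied once per occurrence. With the rewrite above there is only one placeholder anyway; a sketch using just the two columns shown (the value 1234 is only an example):

declare
    l_null_count number;
begin
    execute immediate
       'select decode(line_tr,null,1,0) + decode(line_cr,null,1,0)
          from orders
         where id = :1'
       into l_null_count
       using 1234;
end;
/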

Confused about binds

Richard, April 01, 2005 - 7:45 am UTC

I'm studying Dynamic SQL and am confused about how binds are populated in certain circumstances.

I'm looking at p. 715 of One-on-One. On that page, your code says (about half way down, in a for loop):

p_cnames(i) || ' ' ||
p_operators(i) || ' ' ||
':bv' || i;

How does :bv get populated?

I can see how the values in arrays p_cnames and p_operators are accessed (i.e. by position), but not those in p_values; yet p_values is an array, too.

Any help most gratefully received.

Tom Kyte
April 01, 2005 - 8:46 am UTC

last bit of code on that page, that reads "Now, we can bind the inputs to the query"

Thank You

Richard, April 01, 2005 - 9:37 am UTC

Thank you! My brain had obviously gone to sleep!

How to use bind variables here?

A reader, April 29, 2005 - 2:25 pm UTC

I am constructing a dynamic SQL based on user input.

Something like

declare
q long;
begin
q:='select .... from table where 1=1'
q:=q||' and ....';
end;

One of the fields is a description.

If user enters say "Red Truck", I want the SQL to read

and lower(item_desc) like '%Red%Truck%'

How can I use bind variables here?

Thanks

Tom Kyte
April 29, 2005 - 6:36 pm UTC

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1288401763279 <code>

you'll put in the context the value %Red%Truck%

Optional Parameters

A reader, June 21, 2005 - 5:13 pm UTC

Tom,
I am trying to pass parameters to a procedure which takes in 2 IN parameters, and one of them is optional (Default )

Here is the procedure declaration

p_emp_data(empno in integer,
deptno in integer default -1,
v_rc out sys_refcursor);

I tried to execute it
exec p_emp_data(1234,10,:x) ..it works fine

If I say
exec p_emp_data(1234,:x), I get a "Wrong number or types of arguments in call" error. I thought that since both are integers, it might need a different data type, so I changed p_deptno's type to varchar2 and tried; still the same message.

I was looking at your package mentioned above (demo_pkg) in the thread "How to pass NULL to a bind variable?"
and you were using it the way I wanted to, and I'm not able to get it to work.
How can I make this work :-(

Thanks,


Tom Kyte
June 21, 2005 - 9:07 pm UTC

exec p_emp_data( 1234, v_rc => :x );

if you want to "skip" a parameter, you need to drop down to named notation to let it know, else it is trying to bind the :x to the number, it has no other choice but to do that.

Skip Parameters

A reader, June 21, 2005 - 10:03 pm UTC

Tom, thanks for your reply. But I looked at your example and you are executing the procedure by skipping parameters. Only in the last statement are you using named notation. Mainly the second statement: you skipped the varchar2 and the boolean. How can I do that? Do I have to keep the ref cursor parameter as the first one?

exec demo_pkg.get_data( :x, '%A%' );
exec demo_pkg.get_data( :x );
exec demo_pkg.get_data( :x, p_all => TRUE );

Thanks again for your help.



Tom Kyte
June 21, 2005 - 11:58 pm UTC

2 as
3 type rc is ref cursor;
4
5 procedure get_data( p_cursor in out rc,
6 p_filter in varchar2 default NULL,
7 p_all in boolean default FALSE );
8 end;
9 /

I did not skip any


exec demo_pkg.get_data( :x, '%A%' ); <<<== p_cursor, p_filter IN ORDER
exec demo_pkg.get_data( :x ); <<<== p_cursor IN ORDER
exec demo_pkg.get_data( :x, p_all => TRUE ); <<<== NAMED notation since I
skipped p_filter

I did not "skip over" any, you can leave off trailing defaults, but you cannot "SKIP OVER" them without using named notation.

Thanks Tom. I got it now

A reader, June 22, 2005 - 8:40 am UTC


Bind Variables... all the way?

Rich, June 28, 2005 - 9:34 am UTC

Hi Tom,

Should we always try to use bind variables? In the following article, the author seems to suggest that by using bind variables, the optimizer may not have all the important information it needs to build the optimal execution plan.

http://www.rittman.net/archives/000832.html

What do you think?

Rich

Tom Kyte
June 28, 2005 - 10:15 am UTC

No, you shouldn't always do anything!

Never say Never, Never say Always, I always say.

From my book Effective Oracle by Design:


2) There Are Exceptions to Every Rule
In this section, we've seen mathematical, scientific, reproducible proof that as a general rule, you should use bind variables:

Bind variables are a sound method to reduce the number of latches (read that word as locks) your application will use when parsing queries. 

Soft parsing uses significantly less CPU time than hard parsing, and bind variables are the way to achieve soft parsing. 

Stringing literals into your SQL statements, rather than using bind variables, opens your system to the SQL injection security bug.

Stringing literals into your SQL statements can cause the statement to fail if the user inputs some unanticipated characters such as quotation marks.

No matter what environment you are using-PL/SQL, Java and JDBC, or some other language-using bind variables is not only at least as fast, if not faster, than not using bind variables, but the code is also easier to write.


In short, if you need to execute hundreds or thousands or more SQL statements per minute or second, use bind variables in place of literals that change in the SQL. The effect of not using them can be quite dramatic. All it takes is one bad query in your system that fails to use a bind variable, but is executed frequently differing only in the literal values, to bring it to its knees.  

Does that mean that you should always use bind variables in every circumstance? No, those kinds of rules of thumb are dumb. For every rule of thumb out there, there are cases where they do not hold true. In some cases on systems that measure queries per second (which we've been focusing on in the examples here), you don't want to use bind variables. On a system where you are measuring seconds per query (a data warehouse, for example), rather than queries per second, bind variables are something you may actually want to avoid using.

Queries-per-Second Systems

As you've seen, in a typical system, where a Statspack report tells you that you execute tens of queries or more per second on average, you should use bind variables. But I have seen binding taken to an extreme, when developers seek and destroy all occurrences of literals in queries wherever they exist (this is also an unfortunate side effect of cursor_sharing=FORCE, a session or init.ora setting). The problem with this approach is that you could be removing from the optimizer crucial information that it would have used to generate an optimal plan at runtime.

For example, suppose we have a table that has a STATUS column with the value Y (to indicate the record was processed) or N (to indicate the record was not processed). Most of the records in such a table would be in the processed state, with a small percentage not processed. We have an application that must pick off the unprocessed (N) records, process them, and update their status. Therefore, we have the following simple query in our code:

select * from records_to_be_processed where status = 'N';

Now, suppose that someone imposed the rule that bind variables are mandatory, so we must change that 'N' into a bind variable. The problem we discover after doing so is that performance of our application just went down the tubes; what used to take seconds now takes minutes. The reason is simple: our query stopped using an index and is now full-scanning a rather large table. Consider this simple simulation:

ops$tkyte@ORA920> create table records_to_be_processed
  2  as
  3  select decode( mod(rownum,100), 0, 'N', 'Y' ) processed, a.*
  4    from all_objects a;
Table created.

ops$tkyte@ORA920> create index processed_idx on records_to_be_processed(processed);
Index created.

ops$tkyte@ORA920> analyze table records_to_be_processed compute statistics
  2  for table
  3  for all indexes
  4  for all indexed columns
  5  /
Table analyzed.

ops$tkyte@ORA920> variable processed varchar2(1);
ops$tkyte@ORA920> exec :processed := 'N'
PL/SQL procedure successfully completed.

ops$tkyte@ORA920> set autotrace traceonly explain
ops$tkyte@ORA920> select *
  2    from records_to_be_processed
  3   where processed = 'N';

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=97)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'RECORDS_TO_BE_PROCESSED' 
                                          (Cost=2 Card=1 Bytes=97)
   2    1     INDEX (RANGE SCAN) OF 'PROCESSED_IDX' (NON-UNIQUE) 
                                          (Cost=1 Card=1)

ops$tkyte@ORA920> select *
  2    from records_to_be_processed
  3   where processed = :processed;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE 
                                 (Cost=46 Card=16220 Bytes=1573340)
   1    0   TABLE ACCESS (FULL) OF 'RECORDS_TO_BE_PROCESSED' 
                                 (Cost=46 Card=16220 Bytes=1573340)

When we remove the fact that processed = 'N' from the query and just generically say where processed = :some value, the optimizer must make some assumptions. Here, it assumes that half of the table on average will be returned, since the statistics indicate there are only two values. In this case, the index should be avoided. In general, it did the right thing, but specifically, it did the wrong thing. It should be noted, however, that there is a chance that autotrace is lying to us here! There is a chance, due to bind variable peeking, that the optimizer will rethink that plan when asked to really execute the query and would use the index. We'll visit that in more detail below in the section on Bind Variable Peeking.

But, in this case, the only correct solution here is to put the literal back. No matter how many times we execute that query in our application, and no matter how many copies of our application we invoke, the query will always be where processed = 'N'. The value is a true constant that never changes. 


If this query sometimes used processed = 'N', sometimes used processed = 'Y', and sometimes used processed = 'Something else', using a bind variable would be the correct approach. In short, you need to use bind variables only when the value supplied to the query changes at runtime. 

Another way to look at this is if you are using static SQL-SQL that is known fully at compile time-you do not need to use a bind variable; the values in the query can never change, no matter how many times you execute the query. On the other hand, if you are using truly dynamic SQL, whereby the query is dynamically constructed at runtime, you should use bind variables for those columns for which inputs change. For example, suppose you have some Java/JDBC code that looks like this:

    PreparedStatement pstat =
    conn.prepareStatement
    ("select ename, empno "+
       "from emp " +
      "where job = 'CLERK' " +
        "and ename like '" + ename_like + "'" );

That will result in a query like this being generated at runtime:

select ename, empno
  from emp
 where job = 'CLERK'
   and ename like '%A%'

At first glance, it might seem like you should bind both the literal 'CLERK' and '%A%'. In fact, only '%A%' needs to be bound. No matter how many times that query is executed by that application or how many times you run that application, the predicate where job = 'CLERK' is constant. Only the predicate against ENAME is variable, so only it needs to be bound. Here is the correct code in Java/JDBC:

    PreparedStatement pstat =
    conn.prepareStatement
    ("select ename, empno "+
       "from emp " +
      "where job = 'CLERK' " +
        "and ename like ?" );

    pstat.setString( 1, ename_like );

It accomplishes the same goal, but properly uses bind variables where necessary. 
The same concept applies in PL/SQL stored procedures as in any other language: If the literal in the query is invariant at runtime, it need not be bound. Consider this query:

for x in ( select *
             from emp
            where job = 'CLERK'
              and ename like p_ename ) ...

You do not want to replace the literal 'CLERK' with a bind variable. Again, no matter how many times you run that query, the predicate will always be where job = 'CLERK'. 

In PL/SQL, the only time you need to worry about bind variables is when you use dynamic SQL and build a query at runtime. In other languages that support only dynamic SQL, such as Java with JDBC and VB with ODBC, you need to take care to properly bind each query when and where appropriate. This is yet another reason to consider using stored procedures: They actually make it very hard to misuse or skip using bind variables!  Bear in mind, the developers still must bind in their call to the stored procedure; they cannot skip it altogether.  But by using stored procedures, you will reduce the number of times they must bind - typically a single stored procedure call will execute many different SQL statements - each of which must be bound.  They will bind once to the stored procedure and PL/SQL will take it from there.


Seconds-per-Query Systems

In what I'll loosely refer to as data warehouses, instead of running say 1,000 statements per second, they do something like take an average of 100 seconds to run a single query. In these systems, the queries are few but big (they ask large questions). Here, the overhead of the parse time is a tiny fraction of the overall execution time. Even if you have thousands of users, they are not waiting behind each other to parse queries, but rather are waiting for the queries to finish getting the answer.

In these systems, using bind variables may be counterproductive. Here, the runtimes for the queries are lengthy: in seconds, minutes, hours, or more. The goal is to get the best query optimization plan possible to reduce the runtime, not to execute as many OLTP-style, one-tenth-second queries as possible. Since the optimizer's goal is different, the rules change.

As explained in the previous section (considering the example with the status = 'N' query against the RECORDS_TO_BE_PROCESSED table), sometimes using a bind variable forces the optimizer to come up with the best generic plan, which actually might be the worst plan for the specific query. In a system where the queries take considerable time to execute, bind variables remove information the optimizer could have used to come up with a superior plan. In fact, some data warehouse-specific features are defeated by using bind variables. For example, Oracle supports a star transformation feature for data warehouses that can greatly reduce the time a query takes (we discuss this feature briefly in Chapter 6, the CBO). However, one restriction that precludes star transformation is having queries that contain bind variables. 


In general, when you are working with a seconds-per-query system, use of bind variables may well be the performance inhibitor!

Sequence number problem in PL/SQL

Jamil Qadir, August 22, 2005 - 12:04 pm UTC

Hi

I have two remote databases with the same sequence name and tables. I want to get the sequence number created at the time of insert, for further processing in my PL/SQL.


Tom Kyte
August 24, 2005 - 3:17 am UTC

insert into table_name ( pk, x, y ) values ( s.nextval, 55, 42 )
returning pk into l_pk;


use the returning clause on the insert.
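
A slightly fuller sketch (the table, column and sequence names are the placeholders from the snippet above):

declare
    l_pk  number;
begin
    insert into table_name ( pk, x, y )
    values ( s.nextval, 55, 42 )
    returning pk into l_pk;
    -- l_pk now holds the sequence value this insert used,
    -- ready for further processing later in the block
end;
/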

Bind variables

Anne, October 17, 2005 - 5:26 pm UTC

Hi Tom,

Which method is preferable for the use of bind variables :
Method 1:
declare
l_ename varchar2( 20) := 'KING';
begin
for x in ( select * from emp where ename = l_ename )
loop
.....
end loop;
end;


OR
Method2 :
declare
l_query varchar2(2000);
begin
l_query := 'select * from emp where ename = :x';
execute immediate l_query into .... using emp_array(i);
end;
OR
Method 3 :
declare
cursor c (i_ename...) is
select *
from emp
where ename = i_ename;

begin
l_query := open c for :x;
execute immediate l_query into .... using emp_array(i);

end;

(I am not sure about the syntax for method3 - apologize for that)

Tom Kyte
October 18, 2005 - 8:21 am UTC

method 1, absolutely.


never use dynamic sql UNLESS YOU ABSOLUTELY HAVE NO OTHER METHOD to achieve your goal. dynamic sql in plsql (in PLSQL) is expensive, breaks the dependency chain, removes the compile time checks - in short it REMOVES functionality and adds "nothing" and makes the code harder to maintain.

method 3 isn't valid syntax - but I think you are trying to use a ref cursor and they should only be used WHEN NECESSARY as well.




Bind Variable

Anne, October 18, 2005 - 8:41 am UTC

Thanks so much Tom!

using Bind variables

SVS, October 21, 2005 - 11:48 am UTC

Hi Tom

I understand the importance of using bind variables and want to start implementing them wherever possible.

We are using Forms 9i for development.

Below is the existing code

...
...
...
l_select := l_select ||
'AND ee.tax_nbr = '||:nblk_criteria.tax_nbr;
l_select_cpty := l_select_cpty || ' AND ee.tax_nbr = '||:nblk_criteria.tax_nbr;
l_select_2 := l_select_2 || '
AND ee.tax_nbr = '||:nblk_criteria.tax_nbr;
...
...
l_select := l_select || l_select_cpty || l_select_2;
...
...

SET_BLOCK_PROPERTY('blk_main',QUERY_DATA_SOURCE_NAME,'('||l_select||')');
EXECUTE_QUERY;

Using BIND VARIABLES
====================
...
...
...
l_select := l_select ||
'AND ee.tax_nbr = :x using '||:nblk_criteria.tax_nbr;
l_select_cpty := l_select_cpty || ' AND ee.tax_nbr = :x using '||:nblk_criteria.tax_nbr;
l_select_2 := l_select_2 || '
AND ee.tax_nbr = :x using '||:nblk_criteria.tax_nbr;
...
...
l_select := l_select || l_select_cpty || l_select_2;
...
...

SET_BLOCK_PROPERTY('blk_main',QUERY_DATA_SOURCE_NAME,'('||l_select||')');
EXECUTE_QUERY;


My question is very very trivial I believe :)

can I use :x over and over again in the script or do I have to use different names e.g. x1, x2, x3, x4......... at different places.

Thanks


Tom Kyte
October 21, 2005 - 11:53 am UTC

in forms, the query would just be:


'and ee.tax_nbr = :nblk_criteria.tax_nbr'


just put the :block.item reference right in the predicate, forms will bind it.

Does this code use bind variables?

ht, November 14, 2005 - 1:00 am UTC

Tom,
Thank you for your time.

I've read your discussion on bind variables and I have 2 questions about my code below (10.2.0.1):
1. Am I using bind variables? I believe the PL/SQL code makes this inherent but OEM is telling me that the code can be optimized by using bind vars.
2. Besides using a minus to grab only segments that have changed, is there a more efficient way to write the code below (given I will be passing in segment info 1 at a time)?

TIA,
ht
drop table segment_history;

Table dropped.


create table segment_history as select * from dba_segments where rownum=0;

Table created.


alter table segment_history add modified date;

Table altered.


alter table segment_history add created date;

Table altered.


desc segment_history
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER VARCHAR2(30)
SEGMENT_NAME VARCHAR2(81)
PARTITION_NAME VARCHAR2(30)
SEGMENT_TYPE VARCHAR2(18)
TABLESPACE_NAME VARCHAR2(30)
HEADER_FILE NUMBER
HEADER_BLOCK NUMBER
BYTES NUMBER
BLOCKS NUMBER
EXTENTS NUMBER
INITIAL_EXTENT NUMBER
NEXT_EXTENT NUMBER
MIN_EXTENTS NUMBER
MAX_EXTENTS NUMBER
PCT_INCREASE NUMBER
FREELISTS NUMBER
FREELIST_GROUPS NUMBER
RELATIVE_FNO NUMBER
BUFFER_POOL VARCHAR2(7)
MODIFIED DATE
CREATED DATE


create or replace procedure track_segment
2 (
3 in_segment_name varchar2,
4 in_segment_type varchar2,
5 in_bytes number
6 )
7 as
8 begin
9 update segment_history set modified=sysdate
10 where
11 segment_name=in_segment_name
12 and segment_type=in_segment_type
13 and bytes=in_bytes;
14
15 if sql%rowcount=0 then
16 insert /*+append*/ into segment_history
17 (
18 segment_name,
19 segment_type,
20 bytes,
21 created
22 )
23 values
24 (
25 in_segment_name,
26 in_segment_type,
27 in_bytes,
28 sysdate
29 );
30 end if;
31 end;
32 /

Procedure created.


show errors;
No errors.

set autotrace on

exec track_segment('seg1','proc',1024000);

PL/SQL procedure successfully completed.

exec track_segment('seg1','proc',1024000);

PL/SQL procedure successfully completed.

exec track_segment('seg1','proc',10240000);

PL/SQL procedure successfully completed.


select segment_name,segment_type,bytes, to_char(created,'ddMonyyyy hh24:mi:ss'), to_char(modified,'ddMonyyyy hh24:mi:ss')
2 from segment_history;

SEGMENT_NAME
--------------------------------------------------------------------------------
SEGMENT_TYPE BYTES TO_CHAR(CREATED,'D TO_CHAR(MODIFIED,'
------------------ ---------- ------------------ ------------------
seg1
proc 1024000 13Nov2005 21:42:50 13Nov2005 21:42:50

seg1
proc 10240000 13Nov2005 21:42:50


2 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 932560795

--------------------------------------------------------------------------------------
| Id  | Operation         | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |                 |     2 |   168 |     3   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| SEGMENT_HISTORY |     2 |   168 |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------


Note
-----
- dynamic sampling used for this statement


Statistics
----------------------------------------------------------
28 recursive calls
0 db block gets
18 consistent gets
0 physical reads
0 redo size
952 bytes sent via SQL*Net to client
469 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
2 rows processed




Tom Kyte
November 14, 2005 - 9:08 am UTC

In plsql - it is virtually impossible to not bind correctly using static sql! So, the plsql routine is perfectly bound - but I do have issues with it (more on that in a minute)

In plsql, all variables referenced in static SQL statements *are bind variables*! It is perfect (plsql is) in that regard...

The CALLS to the procedure are not bound:

exec track_segment('seg1','proc',1024000);

and in sqlplus, it would be hard to bind properly (sqlplus is NOT a programming environment). In your REAL program that really calls this (I'm assuming sqlplus is just a demo here ;) you would use bind variables for the call to the procedure as well.



The issue I have with the procedure is the use of append. Fortunately for you it does not work with INSERT VALUES. Append only works with INSERT as SELECT (large bulk loads).

The reason it is fortunate? Twofold -- first, if it worked, you would end up with a single row per block (direct path inserts always write above the high water mark) and you would never reuse space in the table. It would be excessively wasteful.

Secondly, you would have to commit after the insert if it did work as you could not read nor write that table again until you did!



merge into segment_history sh
using ( select in_segment_name sname, in_segment_type stype, in_bytes bytes
          from dual ) x
on ( sh.segment_name = x.sname and sh.segment_type = x.stype and sh.bytes = x.bytes )
when matched then update set modified = sysdate
when not matched then insert ( segment_name, segment_type, bytes, created )
                      values ( x.sname, x.stype, x.bytes, sysdate );

would be another way to code this.

Using pl/sql to call proc; is merge faster?

ht, November 14, 2005 - 10:58 am UTC

Tom,
As always, you have provided quick and informative advice.

I call the proc by creating (begin exec proc(in1,in2,in3); end;) strings using SQL*Plus from a "select ... dba_segments ...".

After removing the append hint in the "insert into" portion of the proc, would the proc be slower without using the "merge" clause? I will probably migrate to the "merge" clause because it is more readable but am wondering if there is a performance reason to expedite the change.

ht

Tom Kyte
November 14, 2005 - 2:04 pm UTC

You would find the code to run about the same (I did).

The append hint just isn't "doing anything"; its removal will not change a thing.


As for the binding - if you are generating this from a query selecting on dba_segments - I would just


merge into your_table
USING dba_segments
ON ( join-condition )
when matched...
when not matched .....


eg: this is a single statement that does all of the work - no procedural code whatsoever.
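
A sketch of what that single MERGE could look like against the SEGMENT_HISTORY table from above (the join columns simply mirror the procedure's matching logic; adjust as needed):

merge into segment_history sh
using ( select segment_name, segment_type, bytes
          from dba_segments ) ds
on (     sh.segment_name = ds.segment_name
     and sh.segment_type = ds.segment_type
     and sh.bytes        = ds.bytes )
when matched then
    update set modified = sysdate
when not matched then
    insert ( segment_name, segment_type, bytes, created )
    values ( ds.segment_name, ds.segment_type, ds.bytes, sysdate );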

Thank you.

A reader, November 14, 2005 - 4:35 pm UTC


Binding with C++

TM, November 23, 2005 - 2:23 pm UTC

Tom,

If you can help shed some light on this: even though we use bind variables from C++, in the TKPROF output we see as many parses as executes for the query.

This is an application written using Borland's C++ Builder 5.0. It uses the BDE (Borland Database Engine) to communicate with the database. There is a TDatabase Object that uses an ODBC connection to connect to the database. Then the application uses TQuery objects to execute the different queries within the application.

Here is the contents of an example TQuery object:

select distinct
nvl(AoProducts.Id, 0) PRODUCTID,
nvl(AoAdType.Id, 0) ADTYPEID
from AoAdOrder, AoAdRunSchedule, AoAdInfo, AoProducts, AoAdType, RtChargeEntryElem
where
AoAdOrder.Id = AoAdRunSchedule.AdOrderId and
AoAdOrder.Id = RtChargeEntryElem.AdOrderId and
AoAdRunSchedule.Id = RtChargeEntryElem.AdRunScheduleId and
RtChargeEntryElem.CategoryCode = 0 and
RtChargeEntryElem.CategorySubCode = 1 and
RtChargeEntryElem.PublishedFlag = 1 and
AoAdRunSchedule.ProductId = AoProducts.Id and
AoAdRunSchedule.AdId = AoAdInfo.Id and
AoAdInfo.AdTypeId = AoAdType.Id and
AoAdOrder.Id = :ADORDERID

:ADORDERID is the parameter in this query that is bound in the code snippet below:

This query is inside a loop of another query that is looping through a list of ads. That outside loop sets the value of orderid that is bound.

FindPublishableInsertsQuery->Close();
FindPublishableInsertsQuery->Params->Items[ 0 ]->AsInteger = orderid;
if (FindPublishableInsertsQuery->Prepared == false)
{
FindPublishableInsertsQuery->Prepare();
}
FindPublishableInsertsQuery->Open();
while (FindPublishableInsertsQuery->Eof == false)
{
// Get the ProductID and AdTypeID of the published insert
int nPublishedProductId = FindPublishableInsertsQuery->Fields->Fields[FindPublishableInsertsQueryPRODUCTID->Index]->AsInteger;
int nPublishedAdTypeId = FindPublishableInsertsQuery->Fields->Fields[FindPublishableInsertsQueryADTYPEID->Index]->AsInteger;

// Next
FindPublishableInsertsQuery->Next();
}
FindPublishableInsertsQuery->Close();

Then after all orders have been processed we do a unprepare.

if (FindPublishableInsertsQuery->Prepared)
{
FindPublishableInsertsQuery->UnPrepare();
}


The developers have stepped through the code, and the Prepare() and UnPrepare() methods are only called one time per query. The Prepared property of the query gets set to true the first time Prepare() is executed, and it stays set to that value until UnPrepare() is executed at the end of the application. Yet when we view the logfile, the query above is parsed as many times as it is executed, rather than being parsed only one time.

Here is the TKPROF output for the query:

select distinct
nvl(AoProducts.Id, 0) PRODUCTID,
nvl(AoAdType.Id, 0) ADTYPEID
from AoAdOrder, AoAdRunSchedule, AoAdInfo, AoProducts, AoAdType, RtChargeEntryElem
where
AoAdOrder.Id = AoAdRunSchedule.AdOrderId and
AoAdOrder.Id = RtChargeEntryElem.AdOrderId and
AoAdRunSchedule.Id = RtChargeEntryElem.AdRunScheduleId and
RtChargeEntryElem.CategoryCode = 0 and
RtChargeEntryElem.CategorySubCode = 1 and
RtChargeEntryElem.PublishedFlag = 1 and
AoAdRunSchedule.ProductId = AoProducts.Id and
AoAdRunSchedule.AdId = AoAdInfo.Id and
AoAdInfo.AdTypeId = AoAdType.Id and
AoAdOrder.Id = :1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse       54      2.16       2.61         16         79          0           0
Execute     54      2.80       6.47          0          0          0           0
Fetch      103      2.13       6.19        133       2620          0          96
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      211      7.10      15.27        149       2699          0          96

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 85

Rows Row Source Operation
------- ---------------------------------------------------
0 SORT UNIQUE
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
0 NESTED LOOPS
1 INDEX UNIQUE SCAN PK_AOADORDER (object id 75468)
0 TABLE ACCESS BY INDEX ROWID RTCHARGEENTRYELEM
0 INDEX RANGE SCAN IDX_RTCHARGEENTRYELEM1 (object id 76717)
0 TABLE ACCESS BY INDEX ROWID AOADRUNSCHEDULE
0 INDEX UNIQUE SCAN PK_AOADRUNSCHEDULE (object id 75497)
0 INDEX UNIQUE SCAN PK_AOPRODUCTS (object id 75806)
0 TABLE ACCESS BY INDEX ROWID AOADINFO
0 INDEX UNIQUE SCAN PK_AOADINFO (object id 75441)
0 INDEX UNIQUE SCAN PK_AOADTYPE (object id 75510)


Elapsed times include waiting on following events:
Event waited on                              Times   Max. Wait  Total Waited
----------------------------------------    Waited  ----------  ------------
SQL*Net message to client                       103        0.00          0.00
db file sequential read                         133        0.02          1.44
SQL*Net message from client                     103        0.03          0.63
*******************************************************************************
We are trying to find out why this is happening and how we could "parse once and execute many times". Any input on this will be really helpful.
regards.


Tom Kyte
November 23, 2005 - 8:04 pm UTC

parsing has nothing to do with binding.

You parse a statement with bind variables.
You parse a statement without bind variables.

You can however parse less often with binding!! but you have to do that!!

You parsed 54 times. 1 of them was a HARD parse (misses in library cache during parse :1 shows that). 53 of them were SOFT parses.


...
This query is inside a loop of another query that is looping through a list of
ads. That outside loop sets the value of orderid that is bound.
.....

there is the problem right there!! You meant to join! You are doing your own inefficient nested loops join!!!


(sorry, I do not know BDE, I don't do C++, I don't know their API, I cannot tell you how to cache a statement in this environment. Usually it involves *NOT* calling close, just calling the bind/execute statements of the API over and over)


but, did you really mean to join?

Binds with c++

TM, November 24, 2005 - 9:37 pm UTC

Tom,

The way I understand it from (expert 1-on-1 and perf-by-design), "parsing has something to do with binding"... in the sense that if one binds then the hard parse is eliminated, but there could still be many soft parses, which are much more efficient than hard parsing all the time with literals (no binds). Is my concept OK, or did I misunderstand something?

In the above, thanks for reminding me to look at "library cache misses: 1", which I somehow skipped. Yes, I agree, we did 1 HARD parse and the rest are SOFT parses, and that should be OK.

Now, on your question of "did we really mean to do nested loops" - kind of YES, since we are still in RBO mode although on Oracle 9i. We are yet to figure out how to switch to the CBO (if you have any good pointers, please let me know).

Also, the parent query that supplies the orderid to the inner query does some processing in the client PC using the BDE C++ and then determines which rows are best to go to the inner query. Probably we could have done that with a single SQL, but it appears it would have done that anyway with the join too. Since we are not using the CBO and its features, we had to go that route.

Now, I wanted to know what prompted you to say "the problem is right there and it is inefficient"? Is it due to the difference between the parse and CPU time, or the disk read/query figures associated with each fetch? Could you point this out to let me know what seems to be inefficient here.

Thanks much and I really appreciate the time you find to answer our concept issues and allow us to "think" productively.
Best Regards.

...
This query is inside a loop of another query that is looping through a list of
ads. That outside loop sets the value of orderid that is bound.
.....

there is the problem right there!! You meant to join! You are doing your own
inefficient nested loops join!!!


(sorry, I do not know BDE, I don't do C++, I don't know their API, I cannot tell
you how to cache a statement in this environment. Usually it involves *NOT*
calling close, just calling the bind/execute statements of the API over and
over)


but, did you really mean to join?


Tom Kyte
November 25, 2005 - 9:38 am UTC

A hard parse is very very very bad.
A soft parse is simply very very bad.

The only good parse is NO PARSE at all (parse once, execute many)...


You should have had ONE parse. Do you see how much CPU you burned during the parse? That should be much closer to ZERO.




My comment about nested loops was - you are doing your own "do it yourself" nested loops. you have code that looks like:


for x in ( query1 )
loop
for y in (query2 that refers to query1 data)
loop
....


that should just be a JOIN, a single query to get all information. You have too much procedural code and not enough non-procedural code!

pro*c

A reader, November 25, 2005 - 2:46 am UTC

Hi tom,
I am listing few ways to execute statement in pro*c.

1) SELECT * FROM EMP WHERE EMPNO = '%s'.
This is in Method 3 where we prepare this statement
and execute. And use the variable to replace the empno.
2) SELECT * FROM EMP WHERE EMPNO = :empno
This is in Method 3 where we prepare this statement
and execute.
3) Static sql statement where we use like
SELECT * FROM EMP WHERE EMPNO = :empno
where empno is variable name.

I assume that (1) won't use bind variables and (2) uses bind variables. Can you tell me whether (3) uses bind variables or not? If not, how can we make it use bind variables?

Thanks



Tom Kyte
November 25, 2005 - 10:08 am UTC

#1 is wrong.

#2 and #3 both bind.

Binds with c++

TM, November 25, 2005 - 1:51 pm UTC

Thanks Tom on the above subject. I know that our goal is to have "parse once and execute many" - and also not to have any soft parses, since that also burns CPU power. The only thing is we are not finding any way to do it from BDE against the Oracle database. Do you think I would also see 1 parse and 53 executes if I do it from PL/SQL and call the procedure from BDE? Do you think that is the right approach?

Also, in the event of 1 parse and execute many, will I see the same in the TKPROF output where I see 54 and 54 both for parse and execute, OR will I see parse = 1 and execute = 54? I hope it will be the latter, but wanted to see your opinion.

Yes, we do understand there is too much "do-it-ourself" code, and the only way we could avoid a lot of it is by coding all of it in PL/SQL and also reducing the network traffic with the client.

Please let me know on the parse and execute.
thanks and best regards.

Tom Kyte
November 25, 2005 - 3:30 pm UTC

if you move the single sql statement into a stored procedure, you would likely see:

53 parses of the plsql
53 executes of the plsql
1 parse of the sql
53 executes of the sql
--------------------------
54 parses


now, if you can move your entire TRANSACTION into plsql - you'll have achieved something since you get payback for every cached statement above one. Say you have 3 sql statements in some transaction and you execute it 53 times, you'll go from

53*3 = 159 parses

to

53 + 1 + 1 + 1 = 56 parses...

the more sql your transaction has, the better the payoff.
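
A minimal sketch of what moving the transaction into plsql means (the procedure, table names and statements here are invented purely for illustration):

create or replace procedure process_order( p_order_id in number )
as
begin
    -- three statements the client used to parse and execute individually;
    -- inside PL/SQL each one is parsed once per session and then re-executed
    update orders set status = 'PROCESSED' where id = p_order_id;

    insert into order_log( order_id, logged_on )
    values ( p_order_id, sysdate );

    delete from order_queue where order_id = p_order_id;
end;
/

The client then ideally parses only "begin process_order(:1); end;" once, binds :1, and executes it over and over.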

Binds with c++

TM, November 25, 2005 - 4:01 pm UTC

Thanks Tom for such a clear explanation; it cleared up some of my PL/SQL concepts too. Now, are the 53 parses and 53 executes of the PL/SQL due to the fact that it is not a stored procedure/function, i.e. it is NOT a named procedure? And had it been a function or a procedure, would it have been "1 parse and 53 executes" for that part too? Meaning, I'm trying to be 100% efficient on this and trying to parse as little as I can. Is that possible while still using database-side coding with PL/SQL?

I agree, like you said in the books, binding and "parse once and execute many" are "burned" into PL/SQL itself, and we would like to do as many data-centric TRANSACTIONS in the database itself rather than anywhere else. We obviously have more than 3 statements in this case that repeat as many times as the one we talked about already, and would surely benefit from PL/SQL.

Also, Chap. 10 of Perf-by-design helps to set the OIC (90) and OICA (10) parameters to try to change RBO apps to CBO apps, instance-wide. But any more pointers on that for a foolproof attempt would be helpful. If you think this is not the right area and is a new question (which you do not take any more), please advise me accordingly :-) or if you have written about it before, I'm willing to read the links.

Regards.

Tom Kyte
November 25, 2005 - 4:09 pm UTC

No, it is a matter of "I cannot tell you how, in BDE, to avoid re-parsing". If you can figure that out, you can get rid of 52 of the 53 parses - for ANY statement!




binds with c++

TM, November 25, 2005 - 4:19 pm UTC

Sorry for insisting on clarification, but your answer of "NO" - does it mean named stored procedures/functions will also parse and execute an equal number of times? Is it not the case that stored procs are compiled once and executed many times?

Not asking about BDE any more, as we will try sample programs to try it out on a test basis.

Since you are not answering my RBO to CBO mode query, I will try to search on your site for the same to see if there are any discussions before on this.

thanks Tom.

Tom Kyte
November 25, 2005 - 4:26 pm UTC

the program controls the number of times any statement it issues is parsed.


I can parse "begin procedure( ?, ?, ?); end;" ONCE in my code and execute it 500 tmies.

I can also parse it 500 times and execute it 500 times....


(I'm not in favor of setting the optimizer_* parameters, gathering statistics and using system statistics is typically what you want - there is a very good book by Jonathan Lewis linked to from my home page that you will be interested in getting - so you can understand the CBO and how it works before you use it)

bind with c++

TM, November 25, 2005 - 4:43 pm UTC

Thanks Tom for all the clarifications. I noticed that book by Jonathan Lewis - I will preorder it and go thru it. Thanks again.

Another question about bind variables

RP, December 06, 2005 - 6:08 am UTC

Hi Tom,

having just read your blog about not having a problem with people asking stupid questions......

I try to use bind variables where I can, but I have 2 questions that have been bugging me:

1. If I used no bind variables and every sql statement went through the full parsing regime - surely that would decrease performance on its own - wouldn't the cache just use LRU/FIFO or something and carry on? The cache full of sql that's never reused is a shame, but why would it bring the db to its knees?

2. You recommended a JDBC book elsewhere on this site, and I have read a chapter from it where the author advocates always using bind variables. He has a diagram showing the hard parse steps as:

a. check the syntax.
b. construct query plan.
c. execute sql.

and by using bind variables you avoid a and b, with b being quite cpu intensive.

My question is: surely the query plan can change for different values of bind variables? For example, if the column has value '99' use a full table scan, but if it's '01' use the index.

Hope you are not letting out a huge sigh as you read this....

Cheers

RP

Tom Kyte
December 06, 2005 - 8:14 am UTC

1) it would KILL IT.

ask yourself, would you compile a subroutine, run it, and then erase it. And, do this at runtime, that is, your program would have a string that is some code, you would write that code to disk, you would run a compiler, you would link in that subroutine, you would call it, you would then erase the compiled code - never to use it again. You would do this for every subroutine, every bit of code.

Would you even consider doing something like that?

That is what you just described to me, guess what parsing sql is? It is invoking a compiler to always:

a) parse the string for syntax, set up a parse tree. validate the sql is sql.
b) qualify all of the referenced objects in the string (eg: select * from t would be turned into select a, b, c, d from owner.t).
c) hash the string
d) search v$sql for that hash

and if we cannot find an EXACT match for that string - including optimizer environment and all - we add to that list:

e) generate a query plan.

that is what a parse is - the equivalent of compiling a subroutine.

Oh yeah, one more thing, in order to access the library cache (like for (b) or (d)) we have to LATCH these SHARED structures.

A latch is a type of lock
A lock is a serialization device
Serialization devices inhibit scalability
The more you latch, the less you can do concurrently.

parsing is HUGELY EXPENSIVE, there are three types of parses in Oracle

1) hard parses that do a-e above. They are very very very bad.
2) soft parses that do a-d above. They are simply very very bad.
3) no parse (you parsed the statement once in your session and use it as often as you can). These are the only good parses ;)


Guess what else - users don't care about parsing, they care about EXECUTING the sql. parsing is something meaningless to them. but you would find that for many sql's - 4 out of 5 units of time would be spent HARD PARSING the sql and 1 unit out of 5 spent EXECUTING IT (or even worse).

So, do you want your end users spending 80% or more of their time waiting for something they do not care about?


hard parsing and excessive soft parsing

a) runs many times slower in elapsed time
b) utilizes many times more cpu
c) eats ram in the shared pool like crazy
d) probably means you are horribly subject to SQL INJECTION in your application (meaning - security is non-existent)
https://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html

it is slow, it is inefficient, it makes horrible use of memory, it is insecure.

Do I have anything good to say about it? :)


2) are you talking about the menon book - do you have a full page reference? a soft parse can skip (b), but only "no parse" can skip a and b - re-check that reference, he might have been talking about parse ONCE execute MANY - not the difference between hard and soft parsing.


the query plan doesn't change with binds, that is the point.


In a transactional system where you do many queries per second, you are using binds, period.

In a data warehouse, the data warehouse query might not use a bind variable (but all queries leading up to the execution of the DW query - such as the query to authenticate, authorize, get preferences, read metadata, etc -- they will)

Further clarification

RP, December 06, 2005 - 8:43 am UTC

Hi Tom,

"the query plan doesn't change with binds, that is the point."

this bit I don't fully understand. Surely, different data distributions may lead to different execution plans? Did you see my example where 90% of the rows have one particular value, so it would make sense to avoid the index (if doing a select *), but if they choose one of the 10% values then use the index?

RP

Tom Kyte
December 06, 2005 - 9:44 am UTC

but when using binds, you get one plan for all.

surely in a transactional system, you would not want a full scan for something and an index range for something else.


there are "we do queries per second" - they must, MUST bind.
there are "we do queries that take many many many seconds (dw)", they might not bind.

Parse in Session

Tapas Mishra, December 06, 2005 - 11:09 am UTC

Hi Tom, the explanation above on RP's question was very good. I need a little further clarification on the comment
" 3) no parse (you parsed the statement once in your session and use it as often as you can). These are the only good parses ;)"

How is Oracle parsing the statement once "in the session"? Are you saying that in OLTP, the queries that run more often, coming from different sessions, will bind in their own session's memory, so that when it ultimately comes to the library cache it looks all the same hash-wise? And how do you do it in the session? This is a pure concept thing that I'm missing. If you can clarify, that will be great!

Regards,
Tapas

Tom Kyte
December 06, 2005 - 2:53 pm UTC

I'm saying programmers should not be so quick to close cursors that can and should be reused.

They have all of the control over how many times a statement is parsed, no one else does.

What will no parse do?

Charlie Zhu, February 08, 2006 - 5:31 pm UTC

The answer to RP is good material for training Java developers;
I'll show it to our Java developers.

From a) to e), the Parse path.

What will the no parse do?

Thanks,
Zhu1

Tom Kyte
February 10, 2006 - 9:00 am UTC

... What will the no parse do? ...

exactly "nothing", that is the point :)

Package procedure with parameters

Totu, April 19, 2006 - 4:42 am UTC

Dear Tom.

I use the policy below for DML operations in my application:

I create a package with 3 procedures:
Insert, Edit, Delete.
Each procedure has parameters.

The client software only sends the procedure name and its parameters to the server. Example:

CREATE TABLE ks
(ks_id NUMBER(3,0) NOT NULL,
ks_kodu NUMBER(3,0) NOT NULL,
ks_adi VARCHAR2(100),
kst_id NUMBER(3,0))
/

CREATE OR REPLACE
PACKAGE pkg_work_with_ks
IS
PROCEDURE AddNewks(
Prm_ks_kodu IN ks.ks_kodu%TYPE,
Prm_ks_adi IN ks.ks_adi%TYPE,
Prm_kst_id IN ks.kst_id%TYPE
);
PROCEDURE UpdateExistingks(
Prm_ks_id IN ks.ks_id%TYPE,
Prm_ks_kodu IN ks.ks_kodu%TYPE,
Prm_ks_adi IN ks.ks_adi%TYPE,
Prm_kst_id IN ks.kst_id%TYPE
);
PROCEDURE DeleteExistingks(
Prm_ks_id IN ks.ks_id%TYPE
);
END;

CREATE OR REPLACE
PACKAGE BODY pkg_work_with_ks IS

PROCEDURE AddNewks(
Prm_ks_kodu IN ks.ks_kodu%TYPE,
Prm_ks_adi IN ks.ks_adi%TYPE,
Prm_kst_id IN ks.kst_id%TYPE
) IS

BEGIN

INSERT INTO ks(ks_id, ks_kodu, ks_adi, kst_id)
VALUES(seq_ks_ks_id.NEXTVAL, Prm_ks_kodu, Prm_ks_adi, Prm_kst_id);

END;



PROCEDURE UpdateExistingks(
Prm_ks_id IN ks.ks_id%TYPE,
Prm_ks_kodu IN ks.ks_kodu%TYPE,
Prm_ks_adi IN ks.ks_adi%TYPE,
Prm_kst_id IN ks.kst_id%TYPE
) IS

BEGIN

UPDATE ks
SET
ks_id = Prm_ks_id,
ks_kodu = Prm_ks_kodu,
ks_adi = Prm_ks_adi,
kst_id = Prm_kst_id

WHERE ks_id = Prm_ks_id;

END;



PROCEDURE DeleteExistingks(
Prm_ks_id IN ks.ks_id%TYPE
) IS

BEGIN

DELETE FROM ks
WHERE ks_id = Prm_ks_id;

END;

END;
/

I use VS 2003 and ODP.Net for Oracle 10G.
I use the OracleCommand object to pass the procedure name and its params to the server.
So, is this a best practice regarding using bind variables? Is it a scalable solution?

Thanks in advance.




Tom Kyte
April 19, 2006 - 8:41 am UTC

I look at this package and ask myself "why", why bother with a layer like this, to what end? to what purpose?


Now, if you built transactional API's that did truly useful work (taking the database from one consistent state to the next) I would be overjoyed - there would be more than one sql statement per routine in general and they would have names like:

process_new_hire
fire_employee
transfer_employee


I call that a TAPI (transactional API). What you have is a tAPI (table API). table API's - ugh, never saw the point - unless you are paid by SLOC (source lines of code) as they result in lots of code but provide no benefit.


Yes, your code above uses bind variables - and YES the programmers could still totally mess that up by not binding the stored procedure calls but rather using literals.

Is this a best practice? not according to me.

Is this scalable? No, not really, not much different from just having in the sql in the application - in fact, since you are just doing single row insert/update/deletes - probably LESS scalable as you added a "do nothing" layer.
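
To make the distinction concrete, here is a minimal sketch of what a transactional API in that spirit might look like - emp, emp_history and emp_seq are made-up names for illustration; the point is simply that one call performs a complete unit of business work instead of a single-row insert:

create or replace procedure process_new_hire(
    p_ename  in emp.ename%type,
    p_deptno in emp.deptno%type,
    p_sal    in emp.sal%type )
as
    l_empno  emp.empno%type;   -- emp, emp_history and emp_seq are illustrative names only
begin
    -- create the employee record
    insert into emp( empno, ename, deptno, sal, hiredate )
    values ( emp_seq.nextval, p_ename, p_deptno, p_sal, sysdate )
    returning empno into l_empno;

    -- and record the business event, so the caller gets one consistent transaction
    insert into emp_history( empno, action, action_date )
    values ( l_empno, 'HIRED', sysdate );
end;
/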

Let me clarify more

totu, April 19, 2006 - 1:27 pm UTC

Dear Tom.

Thank you for the golden advice. But let me tell you more:

1. "I look at this package and ask myself "why", why bother with a layer like this, to what end? to what purpose?"

-: Tom, these are very simple tables in my DB, on which the application will only do simple DML (insert, edit, delete). So, I agree with you: these can be called table APIs.
But for the complex operations (like your examples: process_new_hire,
fire_employee,
transfer_employee)
there will be many lines of code - you call these transactional APIs. For example, in my application, closing an existing account in the bank is very complex.

So, I tried grouping both simple (tAPI) and complex (TAPI) issues inside the package. I tried to make it the APPLICATION POLICY.

Of course, I could do these tAPI operations on the client side using dynamic SQL. But let me know: is it OK to trust the OracleDataSet/OracleDataAdapter to automatically generate SQL for you? I think NO - there is no control over that.

2. "Is this scalable? No, not really, not much different from just having in the sql in the application - in fact, since you are just doing single row insert/update/deletes - probably LESS scalable as you added a "do nothing"
layer."

-: Let's say, in the future I have to add another SQL statement to these tAPIs. In my example I will only have to change my package once. But in your way I have to do it as an upgrade: change application code, compile, test and install again. Tom, please let me know if I am wrong and why...

3. "Yes, your code above uses bind variables - and YES the programmers could still totally mess that up by not binding to the the stored procedure calls but rather by using literals"
-: Tom, sorry, maybe because I am not a native English speaker, I couldn't completely understand this one.

Again, as always, I am waiting for your GOLD ADVICE!!!
Thanks in advance.


Tom Kyte
April 19, 2006 - 5:14 pm UTC

1) i would not use the above package, period. no use to it.

No one would find the tAPIs (lower case t as I just invented the convention above) useful.

"application policy" doesn't mean anything to me, please elaborate?


I don't think the client should be doing things other than well formed transactions and if you need a "table editor" to modify singleton rows - sure, just let the client do it.

2) then these tAPIs become TAPIs and they are good. Right now, they are not.

3) the plsql you wrote: used bind variables perfectly.

The VB code your programmers will write: it remains to be seen if they use binds or not. That is where the binding is important (it is nearly impossible to incorrectly do binding IN PLSQL - you have to drop down to dynamic sql to do it wrong; in VB it is very easy to NOT BIND correctly - you always have to watch the VB coders)

Bind Variables w/ Stored Procedures

Tim, May 04, 2006 - 12:36 pm UTC

I saw your reply two posts (or so) up:
"...and YES the programmers could still
totally mess that up by not binding to the the stored procedure calls..."

I was wondering on the level of impact of not binding stored procedure calls? I did a trace of my client program and saw that although I had done proper coding to use bind variables in my stored procedure calls - the programming language I am working in had replaced the binds with literals - just on the stored procedures - not with the SQL.

It turns out there is a database connection parameter I must turn on that will bind input arguments to stored procedures - apparently the default is not to do so. (And yet the default is to use binds for SQL - a little inconsistent - but whatever.) After I turned on the parameter, I could clearly see binds being used in the trace.

So my question - how much of a performance impact would you expect this to be? The stored procedure gets executed many times in a session and there are many sessions concurrently.

If this were SQL - I would expect large impact.

But stored procedures - I don't know if they generate plans or need parsing - so does it really even matter? I thought not - until I saw your comment, and so was wondering if you would mind explaining a bit further.

Thanks.

Tom Kyte
May 04, 2006 - 5:00 pm UTC

... I was wondering on the level of impact of not binding stored procedure calls? ...

as bad as not binding
- an insert
- an update
- a delete
- a merge
- a select

they generate the equivalent of plans - they need parsing.



ops$tkyte@ORA9IR2> create or replace procedure p( x in number default null, y in number default null )
  2  as
  3  begin
  4          null;
  5  end;
  6  /

Procedure created.

ops$tkyte@ORA9IR2> create or replace procedure p1
  2  as
  3  begin
  4          for i in 1 .. 1000
  5          loop
  6                  execute immediate 'begin p('||i||'); end;';
  7          end loop;
  8  end;
  9  /

Procedure created.

ops$tkyte@ORA9IR2> create or replace procedure p2
  2  as
  3  begin
  4          for i in 1 .. 1000
  5          loop
  6                  execute immediate 'begin p(:i); end;' using i;
  7          end loop;
  8  end;
  9  /

Procedure created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec runStats_pkg.rs_start;

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> exec p1

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> exec runStats_pkg.rs_middle;

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> exec p2

PL/SQL procedure successfully completed.

ops$tkyte@ORA9IR2> exec runStats_pkg.rs_stop(1000);
Run1 ran in 236 hsecs
Run2 ran in 15 hsecs
run 1 ran in 1573.33% of the time

Name                                  Run1        Run2        Diff
LATCH.enqueue hash chains            2,004           6      -1,998
LATCH.enqueues                       2,004           6      -1,998
LATCH.library cache pin alloca      12,026       4,039      -7,987
LATCH.child cursor hash table        8,008          19      -7,989
LATCH.library cache pin             17,055       8,074      -8,981
LATCH.library cache                 48,105      12,164     -35,941
LATCH.shared pool                   61,135      10,176     -50,959

Run1 latches total versus runs -- difference and pct
Run1        Run2        Diff       Pct
150,549      34,594    -115,955    435.19%

PL/SQL procedure successfully completed.
 

How to avoid Parsing ?

Reader, May 11, 2006 - 6:05 pm UTC

create table test (xx number ,yy varchar2(10));

create or replace procedure p1 (id number ,iname varchar2 ) as
begin
insert into test (xx,yy) values (id,iname) ;
end;

create or replace procedure p2 (id number ,iname varchar2 ) as
V_ID test.XX%TYPE;
v_INAME test.YY%TYPE;
begin
V_ID := ID ;
V_INAME := INAME ;
insert into test (xx,yy) values (V_id,V_iname) ;
end;


When we execute the above code there is always a parse and the CPU spikes.
TKPROF also reveals this, as every time it shows
"Misses in library cache during parse".
What am I doing wrong, and why would it not use the bind variables?

Thanks


Tom Kyte
May 11, 2006 - 8:23 pm UTC

eh? demonstrate or at least "say" how to see what you are saying...


case 1: you run p1 and p2 over and over in the same session. I would expect AT MOST one hard parse and probably - that is it, just one parse.

case 2: you run p1 and p2 once in many sessions. I would expect AT MOST one hard parse and then the other sessions all do soft parses - but they must do at least one parse to get the statement (the insert) ready to go


but I'm not at all sure what you are

a) seeing
b) measuring
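
One simple way to see and measure this - a sketch; run it in the session that executes p1 and p2, before and after, and compare the numbers - is to look at the session's own parse statistics:

select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where ms.statistic# = sn.statistic#
   and sn.name in ( 'parse count (total)', 'parse count (hard)' );

If the "hard" count climbs with every execution, something is generating unique SQL; if only the "total" count climbs, you are soft parsing each time.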

Laly, May 16, 2006 - 3:42 am UTC

Hello,

What if I don't want a PL/SQL embedded query to use a bind variable automatically for some column (in the case of a very skewed data distribution)?

I can see only dynamic SQL as a way to avoid the automatic binding.


Thanks,

Laly.

Tom Kyte
May 16, 2006 - 7:06 am UTC

That is correct, you would need to generate a unique sql statement.

beware sql injection if you do that!
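
A minimal sketch of that approach, assuming a release that has dbms_assert and using a hypothetical table ORDERS with a very skewed STATUS column - if you deliberately glue the value into the SQL text as a literal, at least sanitize it so a malicious value cannot break out of the string:

declare
    l_cursor  sys_refcursor;
    l_status  varchar2(30) := 'OPEN';   -- hypothetical skewed value, e.g. taken from user input
begin
    -- a distinct literal per value means a distinct (hard parsed) statement and plan per value;
    -- enquote_literal wraps the value in quotes, and the replace doubles any embedded quotes first
    open l_cursor for
        'select * from orders where status = ' ||
        dbms_assert.enquote_literal( replace( l_status, '''', '''''' ) );
    -- fetch from and close l_cursor as usual ...
    close l_cursor;
end;
/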

Bind Variable

Dinesh, May 26, 2006 - 1:52 am UTC

Hi TOM

I have to capture the multiple (max 4) values selected in a COMBO/DROPDOWN box in the GUI (.NET, VB). Based on these values I have to write a query in a procedure; these values I am taking as IN parameters to the procedure.
I want to know how we can do this using BIND VARIABLES ???
Say
Create or replace procedure Proc1(p_capture IN Varchar) is
Begin
.
.
.
Select col 1, col2 .....from table123 where prod_id in :p_capture;
..
.
.
.
.
.

Exception
.
.
.

End Proc1
Would it work? I have just Briefed u about the scene.
Please help
Thanx in Advance


Tom Kyte
May 26, 2006 - 8:38 am UTC

create or replace procedure proc1( p_in1 in varchar2, p_in2 in varchar2,
p_in3 in varchar2, p_in4 in varchar2 )
as
begin
.....
select ... from t where prod_id in ( p_in1, p_in2, p_in3, p_in4 );
....


immediately comes to mind.


Have you met this "U" person? Please - begging you here - have them get in touch with me. I have so many requests for their time - it is simply amazing. They have a ton of work to do. If you have had the opportunity to brief them - maybe you can help me get in touch with them?

Who's U?

Jack, May 26, 2006 - 9:02 am UTC

One possibility:
http://en.wikipedia.org/wiki/U_Thant

Tom Kyte
May 27, 2006 - 6:50 pm UTC

Probably not, he has been dead for some 32 years. We'll have to keep looking.

Re: Tom's answer to Tim's Question

John, May 26, 2006 - 1:00 pm UTC

Tom,

What if we have a dynamic list of PL/SQL programs to call? We have a PL/SQL program that pulls a list of "sub jobs" to run from a table and executes like so:

-- process each method call in the run list
FOR list_item IN run_list
LOOP
sSubjArea := list_item.subj_area_id ;
sMethod := list_item.method_id ;

pr_upd_subj_area_status (sSubjArea, sMethod, FALSE) ;

PLSQL_block := 'BEGIN '||sMethod||'; END;' ;
EXECUTE IMMEDIATE PLSQL_block ;

pr_upd_subj_area_status (sSubjArea, sMethod, TRUE) ;
END LOOP ;

Is there a way to bind in the way we use NDS here? We are using Oracle 9.2.0.6

Tom Kyte
May 27, 2006 - 9:28 pm UTC

what isn't bound here?



Bind Variables !

Dinesh, May 29, 2006 - 12:52 am UTC

Thanx Tom !
This is probably not the reply I was looking for!!!
I mean, I have to capture all the values as a string in one variable, split them using a delimiter, say ',', and then put them in the IN condition.

Tom Kyte
May 29, 2006 - 7:36 am UTC

Umm,

YES - of course you do. One might question why you would have to PARSE THE STRING in the first place. The person that is creating this string, tell them they can "stop doing that" and just pass formal parameters like any good program would expect.

You could do this:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:110612348061

but that would be silly for 4 in list items (slight overkill no?) and since it shouldn't be a string in the first place (JUST FOUR ITEMS - these are called formal parameters)

Bind Variable

Dinesh, May 29, 2006 - 2:38 am UTC

Hi Tom
this is what I want:
How to bind IN lists ( i.e. how to pass a list of values in one parameter and have Oracle treat the single parameter as a list )


Tom Kyte
May 29, 2006 - 7:38 am UTC

see above, but in your case, I'll say it again "you don't want to, you want to actually do it 'right'"

but at the end of the day, that'll be your decision of course...

This made me laugh:

http://asktom.oracle.com/pls/ask/search?p_string=how+to+bind+in+lists


"how to bind in lists" - who would have thought that might return hits :)

How to get the type of a bind variable?

Fernando, July 05, 2006 - 10:54 am UTC

Hi Tom,

How can I get the type of a bind variable?

Tom Kyte
July 08, 2006 - 8:03 am UTC

trick question.

You don't - YOU TELL US what you would like to bind!


select * from t where :x = :y;


what possible types could we tell you here? None really, you are free to bind whatever you want.


select * from dual where dummy = :x;

If you have a string to bind for :x, tell us "i have a string to bind here", if you have a date - tell us "I've a date", if you have a number to bind, you tell us that...

In short, you tell US what you are willing to bind.

capture bind variable

A reader, July 12, 2006 - 8:26 am UTC

Hi

For migration reasons we have to test around 300 SQL statements in Oracle 9i.

We captured the queries in oracle 8i and we have them in plain text files. All queries use bind variables.

We have been thinking about how to execute these queries using sqlplus in 9i. We don't know how to generate text files such as:

var b1 number
var b2 varchar2(10)

exec :b1 := 1
exec :b2 := 'ENG'

select
from emp
where id_national = :b2
and id_emp = :b1;

Does Oracle supply any tool to do this?

Tia

Tom Kyte
July 12, 2006 - 4:48 pm UTC

nope - we don't have any way to know that you bind :b1 as a number and :b2 as a string.

Just because id_emp is a number doesn't mean the program is binding a number - it could bind whatever it feels like.




bind variables

sam, August 02, 2006 - 6:50 pm UTC

Tom:

I have 3 queries. How can I use one query instead and substitute a bind variable for warehouse_id,
if possible?

l_query1 varchar2(2000) default 'select a.stock_number,b.qty
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id like ''ABC%''

l_query2 varchar2(2000) default 'select a.stock_number,b.qty
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id in ('W1','T2','R3')';

l_query3 varchar2(2000) default 'select a.stock_number,b.qty
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id = 'BIG_WAREHOUSE';

open l_cursor for l_query1;
open l_cursor for l_query2;
open l_cursor for l_query3;

Tom Kyte
August 03, 2006 - 9:13 am UTC

well, you do not need the outer join at all.

Just lose the (+).

It cannot be necessary. For you see - if we make up a row in B (if we really needed the outer join) then b.warehouse_id would be NULL. Meaning it is not like 'ABC%', it is not in 'W1', 'T2', 'R3', it is not equal to 'BIG_WAREHOUSE'

So, lose the (+)


Now, in query1, query2, query3 - what varies, do you change the values supplied?

bind

sam, August 03, 2006 - 12:16 pm UTC

Tom:

1. Well, I wanted the page to print every single "stock item" in the main table. If I drop the (+) then a stock item that is in the main table and not in table B will not print.

Are you saying that the query would return the wrong results if a stock number is in A but not in B? Would it run like this?

select stock number from A where B.stock_number is NULL and B.warehouse_id like 'ABC%';


2. The difference between the 3 queries is that the first uses "like", the second uses "IN" and the third uses "=". I am not sure how I can use a bind variable and then replace the appropriate value with it.

Tom Kyte
August 03, 2006 - 4:23 pm UTC

sorry, but re-read your predicate - it doesn't DO THAT

if you code:

where a.col = b.col(+)
and b.another_col <operator> something

all we can say is either:

a) you don't need the (+)
b) or you made a big mistake.

...
select stock number from A where B.stock_number is NULL and B.warehouse_id like
'ABC%';
......

perhaps you meant "or" there... (b.stock_number is NULL OR b.warehouse_id like '...' )


2) eh? you are losing me here totally and entirely.

You have three queries. So what if they use like, =, <, whatever.

What "varies" between them - I'm assume you want to run all three.





bind

sam, August 03, 2006 - 6:52 pm UTC

Tom:

1. I see what you are saying. The actual query is longer than that. It basically looks at each stock number in the stock item master table and then calculates the quantity for each bin in each warehouse.

I guess you are saying that even with the (+) it is not printing all stock numbers in the item master table because it is joining with a NULL stock number in B. If this is correct, how do I modify it to print every single stock number in the item master table even if it does not have any records in inventory?

There are several warehouses per location. I created a query for all the warehouses in each location.


'select f.stock_number,f.description,loc#1,qty#1,loc#2,qty#2,
loc#3,qty#3,loc#4,qty#4,loc#5,qty#5
from
(select stock_number,description,
max(decode(seq,1,storage_code,null)) Loc#1,
max(decode(seq,1,qty_Available,null)) qty#1,
max(decode(seq,2,storage_code,null)) Loc#2,
max(decode(seq,2,qty_Available,null)) qty#2,
max(decode(seq,3,storage_code,null)) loc#3,
max(decode(seq,3,qty_Available,null)) qty#3,
max(decode(seq,4,storage_code,null)) loc#4,
max(decode(seq,4,qty_Available,null)) qty#4,
max(decode(seq,5,storage_code,null)) loc#5,
max(decode(seq,5,qty_Available,null)) qty#5
from (
select stock_number, storage_code,description,qty_available,
row_number() over(partition by stock_number order by stock_number nulls last) seq
from (select a.stock_number, b.storage_code,a.description,
compute_qty_stored(b.warehouse_id,a.stock_number,b.storage_Code) qty_available
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id like ''ABC%'' and a.stock_number <> ''99999''
and withdrawn is null
and stock_type in (''I'',''B'')
union all
select a.stock_number, b.storage_code,a.description,
compute_qty_stored(b.warehouse_id,a.stock_number,b.storage_Code) qty_available
from stock_item a, storage_receipt b
where a.stock_number = b.stock_number(+)
and b.warehouse_id like ''ABC%'' and a.stock_number <> ''99999''
and withdrawn is null
and stock_type in (''I'',''B'')
)
where qty_available > 0
group by stock_number,storage_Code,description,qty_available)
where seq <=5
group by stock_number,description ) e,
(select stock_number,description from stock_item where stock_type in (''I'',''B'') and withdrawn is null and stock_number <> ''99999'' ) f
where e.stock_number(+)=f.stock_number order by f.description';


2. If I understand you correctly, you are saying these are 3 different queries and you cannot really use one query since they do NOT all use "=" or "like" or "in".

Tom Kyte
August 03, 2006 - 6:59 pm UTC

you can use a single query - but you sort of have to explain the requirement a bit more clearly.

You gave me three queries:

l_query1 varchar2(2000) default 'select a.stock_number,b.qty
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id like ''ABC%''

l_query2 varchar2(2000) default 'select a.stock_number,b.qty
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id in ('W1','T2','R3')';

l_query3 varchar2(2000) default 'select a.stock_number,b.qty
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and b.warehouse_id = 'BIG_WAREHOUSE';


This is the single query that gets all three in one and fixes your outer join bug:

select a.stock_number,b.qty
from stock_item a,
physical_inventory b
where a.stock_number = b.stock_number(+)
and (b.stock_number IS NULL
OR
b.warehouse_id like 'ABC%'
OR
b.warehouse_id in ('W1','T2','R3')
OR
b.warehouse_id = 'BIG_WAREHOUSE'
)


but, you still haven't said what needs to be bound. If it is all of the literals - JUST BIND them.


But I think you really have a case where you have one of three queries you want to execute - one with a equals and if not that one maybe the in and if not that one maybe the like

but we won't know till you clear it up :)

bind

sam, August 04, 2006 - 3:10 pm UTC


Tom:

I have three groups of users in three states. Each state has its own warehouses. When a user from Ohio logs in I want him to see the data that pertains to warehouses in Ohio. The same thing applies to users in California and Washington.

Thus I do not need one query for all.

What I was trying to check is whether there is a way to bind the warehouse ID value into one query. The value of the bind would depend on where the user is located. But I think I have to have three queries.

2. Are you saying that I should change my query now to:

l_query1 varchar2(2000) default 'select a.stock_number,b.qty
from stock_item a, physical_inventory b
where a.stock_number = b.stock_number(+)
and (b.stock_number is NULL or b.warehouse_id like ''ABC%'')

What I am trying to print is all items in the master table regardless of whether they have physical inventory records or not. But if they do not have any inventory records then it should print blanks in the report.

Tom Kyte
August 04, 2006 - 3:43 pm UTC

actually, that is not quite right yet.

select ..
from stock_item a,
(select * from physical_inventory where warehouse_id like 'ABC%') b
where a.stock_number = b.stock_number(+)


is more what you seem to want - all stock items and if they happen to have a physical inventory record - that too.
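
Assuming each location's warehouses can be selected with a single LIKE pattern (the IN-list case would need the in-list technique shown earlier, or its own bind per value), a sketch of a single bound query could look like this - remember that in PL/SQL the local variable IS the bind variable:

declare
    l_cursor   sys_refcursor;
    l_pattern  varchar2(30) := 'ABC%';   -- hypothetical: would be derived from the user's state at login
begin
    open l_cursor for
        select a.stock_number, b.qty
          from stock_item a,
               ( select stock_number, qty
                   from physical_inventory
                  where warehouse_id like l_pattern ) b
         where a.stock_number = b.stock_number(+);

    -- fetch from and close l_cursor as usual ...
    close l_cursor;
end;
/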

How to extend this for an IN-Clause?

J. Sieben, October 30, 2006 - 5:03 am UTC

I've been using bind variables for quite some time now and I fully understand the reasoning behind them. But as a matter of fact, I wasn't able to extend this technique to support IN clauses in SQL.
So, for instance, if I try to use a bind variable in a statement like

select *
from emp
where ename in ('SCOTT', 'BLAKE')

I find it hard to substitute the namelist with a bind variable especially as I'm not aware of the number of names I'm going to search for.

Passing in the list of values as a comma-separated string doesn't seem to be the solution either. That's because I think that if you pass this variable in, it will either cause a hard parse or it will try to find the complete comma-separated list in a single column value.

I don't know whether I was able to explain the issue properly but would appreciate any comment or help on this anyway....

Best regards,

Jürgen

Tom Kyte
October 30, 2006 - 9:07 am UTC

Bind Variable

kishore, November 29, 2006 - 12:08 pm UTC

Hi Tom,

I executed the query below 4 times from the same session. I expected it to get parsed only once and executed and fetched 4 times. However, the output is as shown below. Could you please let me know what other factors could be impacting this?

SELECT COUNT(polref)
FROM
po010 WHERE polref = 3667729


call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 4 0.00 0.00 0 0 0 0
Execute 4 0.00 0.00 0 0 0 0
Fetch 4 0.00 0.00 0 8 0 4
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 12 0.00 0.00 0 8 0 4

Misses in library cache during parse: 1
Optimizer goal: FIRST_ROWS
Parsing user id: 1957

Rows Row Source Operation
------- ---------------------------------------------------
1 SORT AGGREGATE
1 INDEX UNIQUE SCAN PO010_AM (object id 4003796)


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 8 0.00 0.00
SQL*Net message from client 8 0.00 0.02

Thanks & Regards
Kishore Negi

Tom Kyte
November 30, 2006 - 9:22 am UTC

because you used sqlplus which is a "dumb command line tool that you give sql statements to and it just does a) parse, b) execute, c) display of results"

if you used - say - plsql and wrote a program, you would have control over how many times it was parsed.

if you use a simple dumb command line tool that

a) reads input
b) parses/executes/displays results

you will not have that control.

Bind Variables with Dynamic SQL

John Gilmore, December 04, 2006 - 10:58 am UTC

I'm writing a packaged procedure which will return a ref cursor. It will be used by a reporting tool to perform a query for a specific report.

The report has six parameters, all except one being optional. Additionally, most of the parameters will appear multiple times in the query text as they are required to match multiple columns of the tables being queried. So while it's no real problem building up the dynamic SQL query as a string, the problems start when I have to figure out the number of parameters needed for the "open ... for ... using" statement. Each of the five parameters may or may not occur which gives way too many different combinations.

I wanted to use bind variables for the parameters but due to the number of optional parameters involved and the fact that they appear multiple times in the query, this seems to be way too complex and it now seems that I will have to forget about bind variables and just concatenate the parameter values directly into the dynamic SQL string.

Can you see an easy way around this problem?
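
One way around it (a sketch only, with hypothetical names - REPORT_CTX, REPORT_PKG and the EMP columns - and assuming you may create an application context) is to append the predicate only for the parameters that were actually supplied, but reference each value via SYS_CONTEXT instead of a positional bind. The values are picked up by name at execution time, so repeated or optional parameters no longer matter to the USING clause, and the statement still contains no literals:

create or replace context report_ctx using report_pkg
/
create or replace package report_pkg as
    function get_report( p_ename in varchar2 default null,
                         p_dept  in number   default null ) return sys_refcursor;
end;
/
create or replace package body report_pkg as
    -- report_ctx, report_pkg and the emp columns are illustrative names only
    function get_report( p_ename in varchar2 default null,
                         p_dept  in number   default null ) return sys_refcursor
    is
        l_query  varchar2(4000) := 'select * from emp where 1 = 1';
        l_cv     sys_refcursor;
    begin
        if p_ename is not null then
            dbms_session.set_context( 'report_ctx', 'ename', p_ename );
            l_query := l_query || ' and ename like sys_context( ''report_ctx'', ''ename'' )';
        end if;
        if p_dept is not null then
            dbms_session.set_context( 'report_ctx', 'dept', to_char( p_dept ) );
            l_query := l_query || ' and deptno = to_number( sys_context( ''report_ctx'', ''dept'' ) )';
        end if;
        open l_cv for l_query;
        return l_cv;
    end;
end;
/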

Performance Problem using "like" with Contexts

John Gilmore, December 11, 2006 - 12:19 pm UTC

I was experiencing performance problems with a large dynamic PL/SQL query.

I narrowed it down to using a combination of the "like" predicate while using sys_context functions.

A simplified version of the query appears below, where bill_of_lading_vw is a large and complex view and container is a base table.

SQL> select count(*) from bill_of_lading_vw;

  COUNT(*)
----------
 3,825,888

Elapsed: 00:10:05.07

SQL> select count(*) from container;

  COUNT(*)
----------
 7,430,981

Elapsed: 00:00:11.08

SQL> select sys_context('COASTS_REPORTS_CTX','container_no') from dual;

SYS_CONTEXT('COASTS_REPORTS_CTX','CONTAINER_NO')
-------------------------------------------------------
2336965

Elapsed: 00:00:00.00

SQL> set autotrace on statistics

-- Query 1
--
SQL> select count(*) from
  2    bill_of_lading_vw bolv,
  3    (select co.man_seq_no, co.co_seq_no, co.bill_of_lading_no
  4     from container co
  5     where co.container_no = '2336965'
  6     group by co.man_seq_no, co.co_seq_no, co.bill_of_lading_no) c
  7  where bolv.man_seq_no        = c.man_seq_no
  8  and   bolv.co_seq_no         = c.co_seq_no
  9  and   bolv.bill_of_lading_no = c.bill_of_lading_no;

  COUNT(*)
----------
         3

Elapsed: 00:00:16.09

Statistics
----------------------------------------------------------
          0  recursive calls
         24  db block gets
       2919  consistent gets
       8153  physical reads
          0  redo size
        211  bytes sent via SQL*Net to client
        276  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          1  sorts (disk)
          1  rows processed


-- Query 2
--
SQL> select count(*) from
  2    bill_of_lading_vw bolv,
  3    (select co.man_seq_no, co.co_seq_no, co.bill_of_lading_no
  4     from container co
  5     where co.container_no = sys_context('COASTS_REPORTS_CTX','container_no')
  6     group by co.man_seq_no, co.co_seq_no, co.bill_of_lading_no) c
  7  where bolv.man_seq_no        = c.man_seq_no
  8  and   bolv.co_seq_no         = c.co_seq_no
  9  and   bolv.bill_of_lading_no = c.bill_of_lading_no;

  COUNT(*)
----------
         3

Elapsed: 00:00:18.08

Statistics
----------------------------------------------------------
          0  recursive calls
         24  db block gets
       2919  consistent gets
       8156  physical reads
          0  redo size
        211  bytes sent via SQL*Net to client
        276  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          1  sorts (disk)
          1  rows processed


-- The only difference between Queries 1 and 2 is that
-- Query 1 uses a literal and Query 2 uses sys_context.
--
-- Query 3 below is similar to Query 1 except that it
-- uses "like" rather than "="

-- Query 3
--
SQL> select count(*) from
  2    bill_of_lading_vw bolv,
  3    (select co.man_seq_no, co.co_seq_no, co.bill_of_lading_no
  4     from container co
  5     where co.container_no like '2336965'
  6     group by co.man_seq_no, co.co_seq_no, co.bill_of_lading_no) c
  7  where bolv.man_seq_no        = c.man_seq_no
  8  and   bolv.co_seq_no         = c.co_seq_no
  9  and   bolv.bill_of_lading_no = c.bill_of_lading_no;

  COUNT(*)
----------
         3

Elapsed: 00:00:18.09

Statistics
----------------------------------------------------------
          0  recursive calls
         24  db block gets
       2919  consistent gets
       8151  physical reads
          0  redo size
        211  bytes sent via SQL*Net to client
        276  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          1  sorts (disk)
          1  rows processed


-- Likewise, Query 4 below is similar to Query 2 except
-- that it uses "like" rather than "=".
--
-- Notice that there has been a significant performance 
-- degradation here.

-- Query 4
--
SQL> select count(*) from
  2    bill_of_lading_vw bolv,
  3    (select co.man_seq_no, co.co_seq_no, co.bill_of_lading_no
  4     from container co
  5     where co.container_no like sys_context('COASTS_REPORTS_CTX','container_no')
  6     group by co.man_seq_no, co.co_seq_no, co.bill_of_lading_no) c
  7  where bolv.man_seq_no        = c.man_seq_no
  8  and   bolv.co_seq_no         = c.co_seq_no
  9  and   bolv.bill_of_lading_no = c.bill_of_lading_no;

  COUNT(*)
----------
         3

Elapsed: 00:01:56.02

Statistics
----------------------------------------------------------
          0  recursive calls
         24  db block gets
     219935  consistent gets
     216643  physical reads
          0  redo size
        211  bytes sent via SQL*Net to client
        276  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          1  sorts (memory)
          1  sorts (disk)
          1  rows processed


Any idea what's happening here?  As far as I can see, Queries 3 and 4 are identical apart from the fact that one uses a literal and the other uses sys_context. 

Tom Kyte
December 11, 2006 - 1:10 pm UTC

one of them "knows the number and knows there are no percents"

and one doesn't.

huge difference. put '%' in there instead of the number, then tell us what happens.

It depends where you put it.

John Gilmore, December 15, 2006 - 6:11 am UTC

If you place the wildcard anywhere except at the start of the string you get a similar disparity in the times. However, placing it at the start of the string gives you roughly similar times.

I should have mentioned that this column is indexed. So obviously the time difference occurs because the optimiser doesn't ever use the index when a bind variable is used.

This raises a broader issue. I'd always blindly assumed that bind variables are superior; however, in this case at least, using a literal in the query results in times that can be an improvement by up to an order of magnitude.

OLTP & DW... sort of!

Richard, February 19, 2007 - 12:00 pm UTC

You said: "surely in a transactional system, you would not want a full scan for something and an index range
for something else.

there are "we do queries per second" - they must, MUST bind.
there are "we do queries that take many many many seconds (dw)", they might not bind."

Scenario: BIG table (millions of rows). Data is skewed.
1. Shared Pool is flushed.
2. SQL issued, and Execution Plan, etc. generated and now in Shared Pool. Execution Plan says "Use index abc".
3. Same SQL issued, but (in reality) FTS would be the best option (owing to values used). However, owing to Execution Plan in the Shared Pool, index abc is used.
Question: Is this pretty much what happens? i.e no extra, smart parsing/checking is done by the CBO, leading to "incorrect" Execution Plan being used? If so, is the only real way around this to have 2 SQL statements; 1 for index and 1 for FTS? (maybe using uppercase & lowercase to distinguish them). If so, that's tantamount to second-guessing the CBO, isn't it?
Tom Kyte
February 19, 2007 - 2:18 pm UTC

you would be hard parsing each time if your "smart" parsing was happening.

OLTP - bind, period.

Alberto Dell'Era, February 19, 2007 - 2:59 pm UTC

This page is almost completely in bold font - from
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2320123769177#22435244068154
to the bottom.

Don't like "forever bold"?

Stew Asthon, February 19, 2007 - 5:22 pm UTC

Me neither.

What about sqlplus

Rahul, February 28, 2007 - 6:22 pm UTC

>>(sqlplus is NOT a programming environment). You
>> would in your REAL program that really calls this (I'm assuming sqlplus is just a demo here ;) you
>> would use bind variables for the call of the procedure as well.

Tom, this is part of your previous response up in this thread.
Let's say that sqlplus is not a demo and I am using sqlplus (in a script) to run some of my backend stored procedures;
Then, does it not use Bind Variables when I do this?
1)
exec my_procedure ('1000');

Or should I have used something like this:
2)
var v number
exec :v := 1000;
exec my_procedure (:v);

Or should I do something like this:
3)
var v number
begin
:v := 1000;
my_procedure (:v);
end;
/


Thanks,
Rahul.
Tom Kyte
February 28, 2007 - 10:05 pm UTC

but:

exec :v := 1000;
exec my_procedure (:v);

is identical to:

begin :v := 1000; end;
/
begin my_procedure(:v); end;
/


that first block - hard parse

again - sqlplus is NOT a programming environment, it is a scripting environment, you will not be efficient with a program written as a sqlplus script in general.

V, March 11, 2007 - 2:16 pm UTC

Hi Tom,
I work for an ERP software company and we support major databases like SQL Server, Oracle, etc.

In Oracle designs,
We create database schemas and application users access these objects by public synonyms.

My question is a comparison with SQL Server.
The following is an extract from a Microsoft document:

"Batches with unqualified object names result in non-reuse of query plans. For example, in "SELECT * FROM MyTable", MyTable may legitimately resolve to Alice.MyTable if Alice issues this query, and she owns a table with that name. Similarly, MyTable may resolve to Bob.MyTable. In such cases, SQL Server does not reuse query plans. If, however, Alice issues "SELECT * FROM dbo.MyTable", there is no ambiguity because the object is uniquely identified, and query plan reuse can happen."

We increased application scalability by 25% by implementing the above procedure.

In Oracle, since we use public synonyms, this kind of problem does not exist - but could you please confirm?

Thank you,
V

Tom Kyte
March 12, 2007 - 7:21 pm UTC

the same problem "exists"

If "select * from mytable" would be

select * from user1.mytable when executed by user1
and
select * from user2.mytable when executed by user2

it would "not share" because those two sqls are obviously not shareable.

however, if that sql statement "select * from mytable" always resolved to

select * from X.mytable

for both user1 and user2 - we would share the sql.


AVOID public synonyms at all costs.

use ALTER SESSION set current_schema=schema if you want, but avoid public synonyms.

it makes it impossible to install your application in a database with other applications - we'll almost certainly hit a name that you have!!!
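
A minimal illustration of that alternative, assuming the application objects live in a schema named APP_OWNER (a hypothetical name):

-- at the start of the session, instead of relying on public synonyms:
alter session set current_schema = app_owner;

-- an unqualified reference now resolves to app_owner.mytable for every user,
-- so all sessions share the same SQL
select * from mytable;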

How to bind variables

Kathy Tao, June 02, 2007 - 11:32 pm UTC

Hi Tom,
I am trying to dynamically append a variable v_orgCheck to a SELECT statement as follows:

PROCEDURE pr_PopulateCount
(p_userid varchar2)

IS
v_orgCheck varchar2(300):='';
v_org varchar2(255):=null;
v_ctr number:=0;
v_superUser boolean:=false;

cursor role_org is
select role_id, role_org
from USER_ROLE_ROLEORG_XREF
where user_seqno = to_number(p_userid)
;
BEGIN

FOR role_rec in role_org LOOP
If role_rec.role_id = 1 then
v_orgCheck := '';
v_superUser :=true;

else
if (role_rec.role_id = 2 or role_rec.role_id = 3 or role_rec.role_id = 4 or role_rec.role_id = 5 or role_rec.role_id = 7)
and not v_superUser then
v_org := ''''||role_rec.role_org||'''';

if v_ctr > 0 then
v_orgCheck := v_orgCheck || ' or x.role_org = '||v_org;
else
v_orgCheck := v_orgCheck || ' And (x.role_org = '||v_org;
end if;

v_ctr := v_ctr + 1;
end if;
End if;

exit when role_rec.role_id = 1;
END LOOP;

if v_superUser or v_ctr = 0 then
v_orgCheck := '';
else
v_orgCheck := v_orgCheck || ')';
end if;

DELETE FROM RPT_UERCOUNT;
INSERT INTO RPT_UERCOUNT
SELECT * FROM
(
SELECT u.user_id, INITCAP(u.user_lname) lname, INITCAP(u.user_fmname) fname,
r.role_description, x.role_org
FROM user_info u, user_role_roleorg_xref x, lu_role r
WHERE u.user_seqno = x.user_seqno
AND x.role_id = r.role_id
|| '''' || v_orgCheck ||''''
ORDER BY role_org, role_description, lname, fname);
End pr_PopulateCount;

When I compile it, I get error messages ORA-01722 (invalid number) and ORA-06512, which complain about that variable v_orgCheck. Can I concatenate the variable like this?

Thanks,
Kathy

order by with a variable

Guna, February 11, 2008 - 4:50 pm UTC

I have a stored proc with some parameters, and one of them is sort_index, an integer type. The procedure returns a ref cursor, and in that ref cursor's select I am giving ORDER BY sort_index. I am getting the record set, but the record set is not ordered no matter what I give for the sort_index parameter. I have tried passing 1 for the sort_index parameter, and the name of the first column itself after making sort_index a varchar parameter.

The code is below.
create or replace procedure usp_proc1(p_zip integer, p_sort_index integer, my_ref out sys_refcursor)
is
begin
open my_ref for
select * from customer c where c.zip_code = p_zip order by p_sort_index;
end;

Please tell me whether I have to use dynamic SQL or whether there is a way I could do this without using dynamic SQL.
Thanks a lot.
Tom Kyte
February 11, 2008 - 10:49 pm UTC

order by p_sort_index

is just like saying

order by 'hello world'

the VALUE is bound in, not the "ordinal number 1 meaning column one"

you need something like:
order by decode( p_sort_index, 1, c1 ), 
         decode( p_sort_index, 2, c2 ),
         ...
         decode( p_sort_index, N, CN )

Binding order by value

Guna, February 12, 2008 - 10:08 am UTC

Thanks Tom, it helped. I was able to give
order by decode(:p_value, 1, col1) and it worked. I am just curious why it doesn't work when we say
order by decode(:p_value, 1, 1).
Thanks once again.
Tom Kyte
February 12, 2008 - 1:26 pm UTC

because

order by decode( :p_value, 1, 1 ) is the SAME AS
order by decode( :p_value, 1, 'HELLO WORLD' )

the order by <ordinal number> ONLY works in this very very specific context:

order by 1, 2, 3 -- order by column 1 then 2 then 3.


if you

order by f(x), f(y), f(z)

it orders by whatever f(x), f(y), f(z) return - they are not ORDINAL NUMBERS anymore, they are just function return values to be evaluated and sorted by.

using by bind in order by condition

A reader, February 12, 2008 - 10:43 pm UTC

Tom,

Is it still a problem if I pass a literal for the order by condition and use a bind variable for the deptno predicate?



scott@test>create or replace procedure myproc (x out sys_refcursor, deptno int, y int default 1)
  2  as
  3  begin
  4  open x for 'Select * from emp where deptno = :dno order by '||y using deptno;
  5  end;
  6  /

Procedure created.

scott@test>

scott@chiora@SID-140>exec myproc(:x,10,2)

PL/SQL procedure successfully completed.

scott@chiora@SID-140>print

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- ---------- ---------- --------- ------ ------ ----------
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7839 KING PRESIDENT 17-NOV-81 5000 10
7934 MILLER CLERK 7782 23-JAN-82 1300 10

scott@test>exec myproc(:x,10,6)

PL/SQL procedure successfully completed.

scott@test>print

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- ---------- ---------- --------- ------ ------ ----------
7934 MILLER CLERK 7782 23-JAN-82 1300 10
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7839 KING PRESIDENT 17-NOV-81 5000 10

scott@test>

bad bind variable

aa153, August 18, 2009 - 11:14 am UTC

Hi, I would appreciate it if someone could help me understand why I get PLS-00049: bad bind variable 'X' when compiling the function below. Thanks in advance.

create or replace FUNCTION bindTest(acct in varchar2) return ACCOUNTTypes.cursorType AS
v_resultSet ACCOUNTTypes.cursorType;
v_query VARCHAR2(1000);
BEGIN
v_query := '
SELECT DISTINCT acct
FROM ACCOUNT
WHERE cardnum = ' || :x;
execute immediate v_query
into v_resultSet using acct;
return v_resultSet;
END;
Tom Kyte
August 24, 2009 - 4:40 pm UTC

because DDL does not accept binds.

You are attempting to create a procedure that would be created with a bind variable

You want to create a procedure that uses a sql statement that has a bind variable


MOVE the :x inside the quote, make the bind variable part of the SQL string itself.


.... where cardnum = :x';
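
A sketch of the corrected function, assuming ACCOUNTTypes.cursorType is a weak REF CURSOR type - OPEN ... FOR ... USING is the natural way to return a dynamically opened cursor, and the :x placeholder now lives inside the SQL string itself:

create or replace function bindTest( acct in varchar2 )
    return ACCOUNTTypes.cursorType
as
    v_resultSet ACCOUNTTypes.cursorType;
begin
    -- the bind placeholder is part of the SQL text; the value is supplied via USING
    open v_resultSet for
        'select distinct acct from account where cardnum = :x'
    using acct;
    return v_resultSet;
end;
/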


RA, January 21, 2010 - 12:20 pm UTC

Hi Tom,

I came across this in the oracle applications concept guide:
http://download.oracle.com/docs/cd/B53825_03/current/acrobat/121devg.pdf

page 4-16 has this as an example:

PACKAGE EMP IS
PROCEDURE Insert_Row;
PROCEDURE Lock_Row;
PROCEDURE Update_Row;
PROCEDURE Delete_Row;
END EMP;

PACKAGE BODY EMP IS
PROCEDURE Insert_Row IS
CURSOR C IS SELECT rowid FROM EMP
WHERE empno = :EMP.Empno;
BEGIN
INSERT INTO EMP(
empno,
ename,
job,
mgr,
hiredate,
sal,
comm,
deptno
) VALUES (
:EMP.Empno,
:EMP.Ename,
:EMP.Job,
:EMP.Mgr,
:EMP.Hiredate,
:EMP.Sal,
:EMP.Comm,
:EMP.Deptno
);
OPEN C;
FETCH C INTO :EMP.Row_Id;
if (C%NOTFOUND) then
CLOSE C;
Raise NO_DATA_FOUND;
end if;
CLOSE C;
END Insert_Row;


I am confused by this - in this case where would the bind variable values come from? There are no parameters for the procedure. Is this a special kind of Apps programming or something? I always thought the normal way is to pass the values as parameters to the procedure and then use them simply as PL/SQL variables in the insert, and not even worry about bind variables because PL/SQL takes care of it. Is the above a valid construct at all or am I delirious?

Thanks for your time as usual.
Tom Kyte
January 25, 2010 - 1:25 am UTC

that is Oracle forms - not server side - pl/sql. the :emp.whatevers - are references to fields on blocks. They are like global variables in forms - outside of plsql itself.

I would ignore that document unless you are doing Oracle Forms programming.

is this plsql code using bind variables or not?

A reader, April 22, 2010 - 10:11 am UTC

Dear Tom,

I remember when I met you in Paris on 3-4 December 2008 and asked you a question about bind variables within a stored PL/SQL procedure. Your answer was 'Yes, you don't have to care about bind variables when you are doing PL/SQL, but be careful that the program calling your stored proc should do it using bind variables'. And here we are. I have got a 10046 trace file which, when tkprofed, shows this:

"BEGIN
the_stored_procedure
(p1 => 4597,
p2 => to_date ('08/04/2010', 'dd/mm/yyyy'),
p3 => 129,
p4 => to_date ('08/04/2010', 'dd/mm/yyyy')
);
END;

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.02 0 192 0 0
Execute 1 0.00 0.00 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.03 0.02 0 192 0 1

Misses in library cache during parse: 1
Optimizer goal: ALL_ROWS
Parsing user id: 46

Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.00 0.00
********************************************************************************

But the code that exists into this stored procedure is executed with bind variables as shown into the same trace file

UPDATE the_table
SET a1 = :b15,
b1 = :b14,
c1 = :b13,
d1 = :b12
WHERE
id = :b18
AND dat = :b17

call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 0 0.00 0.00 0 0 0 0
Execute 132 1.18 1.38 26 2516 10799 740
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 132 1.18 1.38 26 2516 10799 740

Misses in library cache during parse: 0
Optimizer goal: ALL_ROWS
Parsing user id: 47 () (recursive depth: 1)

So is this plsql code using bind variables or not?

Thanks

Tom Kyte
April 22, 2010 - 12:49 pm UTC

The PLSQL code is using bind variables.


The client that called PLSQL is not.


The client that called this code should have prepared:

begin the_stored_procedure( p1 => ?, p2 => to_date(?,'dd/mm/yyyy'), ... ); end;

and bound in 4597, 08/04/2010, ... etc ...



Try to check tablespace size - error is coming

Ather Hussain, August 06, 2010 - 12:40 am UTC

SQL> SELECT tablesspace_name, largest_free_chunks,
nr_fre  2  e_chunks, sum_alloc_blocks, sum_free_blocks,
to_char(10  3  0*sum_free_blocks/sum_alloc_blocks, '09.99') || '%'
as pct_free
F  4    5  ROM ( SELECT tablespace_name,
(SELECT tablespace_name AS f  6  s_ts_name,
max(blocks) AS largest_free_chunk,
c  7    8  ount(blocks) AS nr_free_chunks,
sum(blocks) AS sum_free_blocks
FROM dba_  9   10  free_space
GROUP BY tablespace_name )
WHERE tablespac 11   12  e_name = fs_ts_name
 13  ;
WHERE tablespace_name = fs_ts_name
*
ERROR at line 12:
ORA-00923: FROM keyword not found where expected

Tom Kyte
August 06, 2010 - 8:30 am UTC

well, so what am I supposed to do?

The query you pasted here is utterly unreadable and unformatted. I cannot find your bugs if you provide something so hard to read now can I???


so I take your "work" and with a little joining/splitting

SELECT tablesspace_name, largest_free_chunks, nr_fre
2  e_chunks, sum_alloc_blocks, sum_free_blocks, to_char(10
3  0*sum_free_blocks/sum_alloc_blocks, '09.99') || '%' as pct_free F
4
5  ROM ( SELECT tablespace_name, (SELECT tablespace_name AS f
6  s_ts_name, max(blocks) AS largest_free_chunk, c
7
8  ount(blocks) AS nr_free_chunks, sum(blocks) AS sum_free_blocks FROM dba_
9
10  free_space GROUP BY tablespace_name ) WHERE tablespac
11
12  e_name = fs_ts_name
13  ;


which we can format to:

SELECT tablesspace_name, largest_free_chunks, nr_free_chunks, sum_alloc_blocks, sum_free_blocks,
       to_char(100*sum_free_blocks/sum_alloc_blocks, '09.99') || '%' as pct_free
  FROM
( SELECT tablespace_name,
         (SELECT tablespace_name AS fs_ts_name,
                 max(blocks) AS largest_free_chunk,
                 count(blocks) AS nr_free_chunks,
                 sum(blocks) AS sum_free_blocks
            FROM dba_free_space GROUP BY tablespace_name )
    WHERE tablespace_name = fs_ts_name
 ;



and I see nothing resembling SQL :)

you

select a,b,c
from (select X, (scalar subquery)
where predicate)


a,b,c = your outer select list.

x = tablespace name in your query

(scalar subquery) = the fact that you are selecting a SELECT - but you've done that wrong, a scalar subquery can select AT MOST one column - yours selects 4)

You are missing a from clause in your inline view - you go select ... where - no from clause there


Not having any idea what you are trying to do - I cannot parse this any further.
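
For what it is worth, a guess at what was probably intended (an assumption - in particular, sum_alloc_blocks has to come from somewhere, and dba_data_files is the usual place) would be two aggregations joined by tablespace name, rather than a scalar subquery:

select df.tablespace_name,
       fs.largest_free_chunk,
       fs.nr_free_chunks,
       df.sum_alloc_blocks,
       fs.sum_free_blocks,
       to_char( 100 * fs.sum_free_blocks / df.sum_alloc_blocks, '09.99' ) || '%' as pct_free
  from ( select tablespace_name, sum(blocks) sum_alloc_blocks
           from dba_data_files
          group by tablespace_name ) df,
       ( select tablespace_name,
                max(blocks)   largest_free_chunk,
                count(blocks) nr_free_chunks,
                sum(blocks)   sum_free_blocks
           from dba_free_space
          group by tablespace_name ) fs
 where df.tablespace_name = fs.tablespace_name(+);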

Oracle WITH+clause

Abhirama, October 05, 2010 - 1:06 pm UTC

Hi Tom, thank you for all your posts; they have helped me with my understanding of bind variables.

We are trying to use stored procedures from Oracle to use within Crystal reports.
We have an Oracle 11g database and our DBAs do not allow us to create temp tables with stored procedures. So the option I was considering was to use the Oracle WITH clause:

create procedure a(per_num IN NUMBER, resultQuery OUT sys_refcursor ) AS
BEGIN
open v_cur for
WITH
T1 AS ( SELECT col1, col2 FROM table1 WHERE col3 = per_num ) ,
T2 AS (SELECT col3, col4 FROM table2 LEFT OUTER JOIN table3 )
SELECT * FROM T2
END;

The stored procedure seemed to work fine initially but is taking up a lot of time now.
The query itself (the part from WITH clause) when run separately from the Stored Procedure performs consistently well.

After reading this thread on bind variables, I understand that bind variables help with performance.
Since the WITH clause is being used within a ref cursor,
would you recommend that I put the entire clause in a string
and then use a bind variable for the per_num variable?
The WITH clause is a long query touching different tables and using Oracle functions like REGEXP_LIKE etc.
I have simplified the query here in this example for your reference.


Please let me know your suggestions on this approach and how to improve the performance of this stored procedure.

I am a newbie with Oracle, so if there are any DB settings that will also need to be factored in to improve performance with this query, would you please let me know.

Thank you for your valuable time and would greatly appreciate your opinion on this.
Tom Kyte
October 06, 2010 - 4:32 am UTC

in your example, the use of t1 is confusing. You don't use it anywhere?

do you have the ability to sql-trace and tkprof?

WITH clause

Abhirama, October 06, 2010 - 10:22 am UTC

Thank you, Tom, for responding. Sorry about the example earlier. Here is the real code I am working with. I am thinking about modifying it to use the WITH clause as a string variable. I'll check if I can use the sql-trace and tkprof commands you mentioned from SQL Developer.

create or replace
PROCEDURE DP2(prov_id IN NUMBER, age IN varchar2, resultQuery OUT sys_refcursor) IS

start_age NUMBER;
end_age NUMBER;
BEGIN
start_age := 0;
end_age := 0;

IF prov_id = 0 THEN
raise_application_error(-20101, 'Provider ID is 0.');
END IF;

IF age = 'YES' THEN
start_age := 18;
end_age := 75;
ELSE
start_age := 0;
end_age := 200;
END IF;

BEGIN
open resultQuery for

WITH

PROVIDERS AS (
SELECT ppr.person_id, ppr.prsnl_person_id, p.name_full_formatted as provider_name, pa.alias as npi
FROM V500.PERSON_PRSNL_RELTN ppr
LEFT OUTER JOIN V500.PERSON p
ON ppr.prsnl_person_id = p.person_id
LEFT OUTER JOIN V500.PRSNL_ALIAS pa
ON pa.person_id = ppr.prsnl_person_id
WHERE pa.prsnl_alias_type_cd IN (11365300.00)
AND ppr.prsnl_person_id IN (prov_id)
AND ppr.person_prsnl_r_cd = 1115.00
AND ppr.active_ind =1 AND PA.active_ind = 1
)
,

PROVIDER_DM_ALL_PATIENTS AS (
SELECT DISTINCT p.*, pa.alias AS cmrn,
ph.phone_num, pp.name_full_formatted AS patient_name,
floor(months_between(trunc(sysdate),trunc(pp.birth_dt_tm) )/12) age_by_floor, pp.deceased_cd
FROM PROVIDERS p
LEFT OUTER JOIN V500.PERSON_ALIAS pa
ON p.person_id = pa.person_id
AND pa.person_alias_type_cd = 2.0
AND pa.active_ind = 1
LEFT OUTER JOIN V500.PHONE ph
ON ph.phone_type_cd = 170.00
AND ph.parent_entity_id = p.person_id
and ph.active_ind = 1
LEFT OUTER JOIN V500.person pp
ON pp.person_id = p.person_id
and pp.active_ind = 1

)
,

PROVIDER_DM_PATIENTS AS (
SELECT p.* FROM PROVIDER_DM_ALL_PATIENTS p
WHERE p.age_by_floor between start_age and end_age and
p.deceased_cd = 0
)

select * from PROVIDER_DM_PATIENTS;

EXCEPTION WHEN OTHERS THEN
close resultQuery;
END;

END;
Tom Kyte
October 07, 2010 - 1:47 am UTC

EXCEPTION WHEN OTHERS THEN
 close resultQuery; 
END;


http://asktom.oracle.com/pls/asktom/asktom.search?p_string=%22i+hate+your+code%22

I really do feel that way on this one in particular. Let the client KNOW there was a failure, re-raise the exception.

In a funny way, this code is actually OK probably - since the failure would likely happen during the open - hence the cursor would not be open - hence the code would fail anyway because you try to close a "not open" cursor. stop using when others like that.


We'll need to compare the plans - cannot do very much without it.


but a question for you - why are you using with here at all - I don't see you reusing any of the result sets - why isn't this just a single query?


WITH instead of derived table.

Shannon Severance, October 07, 2010 - 12:39 pm UTC

but a question for you - why are you using with here at all - I don't see you reusing any of the result sets - why isn't this just a single query?

I can't speak for Abhirama, but I use the with keyword, even without reuse of the named subquery, as a way to split a big query into smaller parts so I can think about the parts separately, and then in the final select just worry about how the pieces I've already built up work together. Sort of like views that are local to that query.

Of course, without reuse, an equivalent query could be written with derived tables.

Here is a simple example, to find how many times did the first day of the month fall on a Sunday during the twentieth century. ( http://projecteuler.net/index.php?section=problems&id=19 )

SQL> with Years as (select level + 1900 as Y
  2          from dual
  3          connect by level <= 100)
  4      , Months as (select level as M
  5          from dual
  6          connect by level <=12)
  7      , Dates as (select to_date(to_char(Y)
  8             || to_char(M, '00') || '01'
  9                 , 'YYYYMMDD') as FirstOfMonth
 10          from Years cross join Months)
 11  select count(*)
 12  from Dates
 13  where to_char(FirstOfMonth, 'D') = '7'
 14  /

  COUNT(*)
----------
       171

Tom Kyte
October 11, 2010 - 11:19 am UTC

... I can't speak for Abhirama, but I use the with keyword, even without reuse of the named subquery, as a way to split a big query into smaller parts so I can think about the parts separately, ...

I'd use inline views for that instead.

ops$tkyte%ORA11GR2> select count(*)
  2    from (select to_date(to_char(Y) || to_char(M, '00') || '01', 'YYYYMMDD') as FirstOfMonth
  3            from (select level + 1900 as Y
  4                    from dual
  5                 connect by level <= 100) Years
  6           cross join
  7                 (select level as M
  8                    from dual
  9                  connect by level <=12) Months
 10         )
 11          where to_char(FirstOfMonth, 'D') = '7'
 12  /

  COUNT(*)
----------
       171


WITH subquery factoring is a 'hint' to the optimizer to materialize the data - we typically benefit from NOT materializing the data if we do not reuse the result over and over.

Dynamic Update, Apex style

A reader, October 13, 2010 - 9:50 am UTC

Oracle Application Express supports PL/SQL blocks containing references to "session state" using bind variable notation. When the engine executes the block, it replaces references to the bind variables with the appropriate value from session state using the "v" function.

I would like to do something similar in my own application. How is this done? I see that there are utility functions like wwv_flow_utilities.get_binds that look like they may be used but they are not documented.

For example,
update mytab set col1=:P1_COL1,col2=:P1_COL2 where pk1=:P1_PK1 and (col1 != :P1_COL1 or col2 != :P1_COL2)


Note that my example intentionally uses the same bind variable twice, so that would need to be taken into account when passing in bind variables by either position or name.

Help? Thanks
Tom Kyte
October 15, 2010 - 7:04 am UTC

I wrote that bit a while ago :)

You may make it do whatever you like - that is, if it doesn't do precisely what you want it to do - fix it, make it do so yourself...

It is really only good for "by name" binding - via dbms_sql.


function get_binds( p_stmt in varchar2 ) return dbms_sql.varchar2_table
as
    l_binds  dbms_sql.varchar2_table;
    l_ch     varchar2(2);
    l_str    varchar2(255);
    l_stmt   long default p_stmt;
    l_added  long default ':';

    n        number;
    m        number;
    o        number;
begin
    -- strip /* ... */ comments
    loop
        n := nvl( instr( l_stmt, '/*' ), 0 );
        exit when (n=0);
        m := nvl( instr( l_stmt, '*/', n+1 ), length(l_stmt) );
        if ( m = 0 ) then m := length(l_stmt); end if;
        l_stmt := substr( l_stmt, 1, n-1 ) || substr( l_stmt, m+2 );
    end loop;
    -- strip -- comments (to end of line)
    loop
        n := nvl( instr( l_stmt, '--' ), 0 );
        exit when (n=0);
        m := nvl( instr( l_stmt, chr(10), n+1 ), length(l_stmt) );
        if ( m = 0 ) then m := length(l_stmt); end if;
        l_stmt := substr( l_stmt, 1, n-1 ) || substr( l_stmt, m+1 );
    end loop;
    -- strip quoted string literals (honoring doubled quotes)
    loop
        n := nvl( instr( l_stmt, '''' ), 0 );
        exit when (n=0);
        o := n;
        loop
            m := nvl( instr( l_stmt, '''', o+1 ), length(l_stmt) );
            if ( m = 0 ) then m := length(l_stmt); end if;
            if ( substr(l_stmt,m+1,1) = '''' )
            then
                o := m+1;
            else
                exit;
            end if;
        end loop;
        l_stmt := substr( l_stmt, 1, n-1 ) || substr( l_stmt, m+1 );
    end loop;
    -- collect each distinct :NAME that remains
    loop
        n := nvl( instr( l_stmt, ':' ), 0 );
        exit when ( n = 0 );
        l_str := upper(substr(l_stmt,n,31));
        for j in 2 .. length(l_str)
        loop
            l_ch := substr( l_str, j, 1 );
            if ( l_ch not between 'A' and 'Z' and
                 l_ch not between '0' and '9' and
                 l_ch <> '_' )
            then
                l_str := substr(l_str,1,j-1);
                exit;
            end if;
        end loop;
        if ( length(l_str) > 1 )
        then
            if (instr( l_added, l_str||':' ) = 0)
            then
                l_added := l_str || l_added;
                l_binds(l_binds.count+1) := l_str;
            end if;
        end if;
        l_stmt := substr( l_stmt, n+1 );
    end loop;
    return l_binds;
end get_binds;
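
A rough usage sketch (the statement is borrowed from the question above; the cursor handling is ordinary dbms_sql, and the bound value is just a placeholder - you would substitute whatever your application's "session state" holds):

declare
    l_stmt   varchar2(4000) :=
        'update mytab set col1=:P1_COL1,col2=:P1_COL2 where pk1=:P1_PK1 and (col1 != :P1_COL1 or col2 != :P1_COL2)';
    l_binds  dbms_sql.varchar2_table;
    l_cursor integer;
    l_rows   integer;
begin
    l_binds  := get_binds( l_stmt );
    l_cursor := dbms_sql.open_cursor;
    dbms_sql.parse( l_cursor, l_stmt, dbms_sql.native );
    for i in 1 .. l_binds.count
    loop
        -- look up the real value for l_binds(i) here; 'some value' is a placeholder
        dbms_sql.bind_variable( l_cursor, l_binds(i), 'some value' );
    end loop;
    l_rows := dbms_sql.execute( l_cursor );
    dbms_sql.close_cursor( l_cursor );
end;
/

Because the binding is done by name, a bind that appears more than once in the statement (like :P1_COL1 above) only needs to be bound once.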

wwv_flow_utilities.get_binds

A reader, October 15, 2010 - 10:57 am UTC

I don't really need the source code - wwv_flow_utilities.get_binds works well; if it is good enough for Apex, it is good enough for me.

Ah, looks like I had the same question for Scott on the Apex OTN forum and he had answered it http://forums.oracle.com/forums/message.jspa?messageID=1719889#1719889 with an example.

need to change the query having bind variable

rucha pathak, December 16, 2011 - 3:36 am UTC

Hi Tom,
I am very much new to this field.
I need to tune the following query, which is doing a full table scan and also using bind variables.

UPDATE COM_PROCESS_LOG SET NISRUNNING = 0, DTPROCESSEND = SYSDATE,
STRUPDATEDBY = :B2 , DTUPDATED = SYSDATE WHERE STRPROCESSCD = :B1 AND
DTPROCESSEND IS NULL
plz provide me the solution
thanks in advance

regards




Tom Kyte
December 16, 2011 - 5:34 pm UTC

full scans are cool, they are awesome, they are great.

why do you think the full scan is evil here?

plz provide me the facts.

how many rows in this table.
how many rows get updated in general by this update
what does this table look like
what indexes exist on this table
why do you think an index would be the right path?

multiset operations

Badris, January 18, 2012 - 8:14 am UTC

I have a block that has its nested tables declared as below

SQL> declare
  2      type vt is table of varchar2(20);
  3      a vt:=vt('a','b','c','d','e','f');
  4      b vt:=vt('c','f','s');
  5      c vt:=vt();
  6  begin
  7      c.extend(b.count);
  8      c:=a multiset intersect distinct b;
  9      dbms_output.put_line(c.count);
 10  end;
 11
 12  /
2

PL/SQL procedure successfully completed.

SQL>

This script works fine. To improve performance, I would like to use bind variables for the nested table objects, as they contain a huge number of items. Kindly guide me.

Thanks in advance
Badri

Tom Kyte
January 18, 2012 - 9:43 am UTC

well, in sqlplus - it is not really going to be possible since it is a very simple reporting tool.

from a real language, from a programming language, it would be easy, just bind the collections as your inputs - tons of examples on the internet - plug in something like:

<your language> binding a collection in oracle


and you'll likely find a lot of examples.


it would have been useful to know what language you are going to program in.

multiset operations

Badris, January 18, 2012 - 10:29 am UTC

Thanks for your quick response, I am using PL/SQL as the programming interface as part of our ETL processing in our project. Oracle is only used to build the ETL functionality.

So I will try to use some other alternative instead of collections in this scenario.

Thanks
Badri
Tom Kyte
January 18, 2012 - 12:24 pm UTC

no, collections are correct, where is the data coming from for the collections in the first place?

where is the data coming from that will fill the collections?

multiset operations

Badris, January 19, 2012 - 12:50 am UTC

Sorry, ours is a DW-ETL (staging area) process, which pulls the data from the Oracle GL tables through DB links/text file feeds. We have populated our collection (in the staging area) with that data using procedures built in PL/SQL. So we do not use another language such as Java, .NET etc. for our processing (in case you are looking for this info).

Thanks
Badri
Tom Kyte
January 19, 2012 - 5:15 am UTC

so, just read the data into your collections, they will be 'binds' at that point.

ops$tkyte%ORA11GR2> declare
  2      type vt is table of varchar2(30);
  3      a vt;
  4      b vt;
  5      c vt;
  6  begin
  7          select username bulk collect into a
  8        from all_users;
  9  
 10      select distinct owner bulk collect into b
 11        from all_objects
 12       where object_type = 'FUNCTION';
 13  
 14      c := a multiset intersect distinct b;
 15  
 16      dbms_output.put_line( 'after intersect there are ' || c.count);
 17      for i in 1 .. c.count
 18      loop
 19                  dbms_output.put_line( c(i) );
 20          end loop;
 21  end;
 22  /
after intersect there are 11
OPS$TKYTE
APEX_030200
SYSMAN
OLAPSYS
MDSYS
ORDSYS
XDB
CTXSYS
EXFSYS
WMSYS
SYS

PL/SQL procedure successfully completed.

multiset operations

Badris, January 19, 2012 - 6:02 am UTC

Thanks for that response. Imagine, in your script, that the collections are loaded within a for loop with a different set of data per iteration - that is exactly our current scenario, and it really consumes a lot of time to execute. That is why I was looking to use BIND VARIABLES in place of collections, so that every collection load and multiset intersect operation becomes reusable and hence there should be a performance increase, as per the concept of bind variables.

Thanks
Badri
Tom Kyte
January 19, 2012 - 7:02 am UTC

but they are perfect "as is" - you are trying to 'bind' where 'binding' does not make sense.

You bind into a sql statement:


select * from t where database_column = plsql_variable;

that uses binding.

you bind into a plsql statement from a client program:

prepareStatement( 'begin p( :some_parameter ); end;' );


but you don't 'bind' in your procedural code, you just code it - it is already 'optimal'


Three Web Application Rules

Alan, March 14, 2012 - 11:57 am UTC

1) All SQL must be written in stored procedures (no exceptions).
2) Some SQL returning generic result sets must be assembled as a string first and then executed (within the procedures).
3) All SQL reads are to use bind variables (maybe some exceptions, but not for the dynamically generated SQL returning the generic result sets to the web application).

There are alternatives (AQ, FOO etc.) but given the above constraints is there a simple elegant solution given that we only know the number of bind variables at run time?

Bind Variables for Tuning

Rajeshwaran, Jeyabal, March 28, 2012 - 1:11 pm UTC

Tom:

I got some SQL from front-end development along with bind variable values. If they are single values then I can declare and use bind variables directly (like below).

How can I use bind variables for the values provided in the IN clauses?

1) If I use in (select * from table(parse_inlist(:b_ename,',')) ) to get the values from the bind variable, am I not modifying the original SQL provided?

2) Is this the right approach to handle bind variables for IN clauses? If possible please share any alternative methods.


rajesh@ORA10GR2> variable b_ename varchar2(30);
rajesh@ORA10GR2> exec :b_ename := q'|SMITH,WARD|';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
rajesh@ORA10GR2> print b_ename;

B_ENAME
--------------------------------
SMITH,WARD

rajesh@ORA10GR2>
rajesh@ORA10GR2> select * from emp
  2  where ename in (:b_ename);

no rows selected

Elapsed: 00:00:00.01
rajesh@ORA10GR2>
rajesh@ORA10GR2> select * from emp
  2  where ename in (select * from table(parse_inlist(:b_ename,',')) );

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80       1800                    20
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30

Elapsed: 00:00:00.12
rajesh@ORA10GR2>
rajesh@ORA10GR2> select sql_text,executions,loads
  2  from v$sql
  3  where sql_id in ('6s3j229v9cq08','0gv9nmwxgtab9');

SQL_TEXT                                 EXECUTIONS      LOADS
---------------------------------------- ---------- ----------
select * from emp where ename in (:b_ena          1          1
me)

select * from emp where ename in (select          1          2
 * from table(parse_inlist(:b_ename,',')
) )


Elapsed: 00:00:00.01
rajesh@ORA10GR2>


rajesh@ORA10GR2> variable b_deptno number;
rajesh@ORA10GR2> exec :b_deptno := 10;

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
rajesh@ORA10GR2> select empno from emp where deptno = :b_deptno;

     EMPNO
----------
      7782
      7839
      7934

Elapsed: 00:00:00.46
rajesh@ORA10GR2>

Tom Kyte
March 28, 2012 - 1:27 pm UTC

show me the original query you got.

it wasn't "where ename in (:x)", it was something else.

Bind Variables for Tuning

Rajeshwaran, Jeyabal, March 29, 2012 - 4:48 am UTC

Tom, Here is the sql along with Bind values.

select proj.proj_key
from rv_proj_content projcont,
  rv_chart chart,
  rv_proj_content_status_flt stsflt,
  rv_project proj
where projcont.current_chart_key    = chart.chart_key (+)
and stsflt.proj_content_barcode    = projcont.proj_content_barcode
and proj.proj_key                  = projcont.proj_key
and upper(fnc_get_full_name (stsflt.intakecomp_by_fname, stsflt.intakecomp_by_lname)) = ? ----USER,APPLOAD
and proj.proj_status_cd         in (?, ?, ?) --- CNA, PNP, NEW
and ( chart.chart_status        in ( ?, ?) -- NEW, RELEASED
or projcont.retrieval_request_status   IN ( ?, ? ) --- UNMATCHED, INTAKE

Tom Kyte
March 29, 2012 - 7:03 am UTC

so, how does that relate to what you wrote above?

I don't see any occurrences of "x in (:y)"



you would set up a bind variable for each "?" and set a value for it

variable b1 varchar2(20)
variable b2 varchar2(20)
....
variable bN varchar2(20)

exec :b1 := 'USER'; :b2 := 'APPLOAD'; .... :BN := 'INTAKE'

select proj.proj_key
from rv_proj_content projcont,
rv_chart chart,
rv_proj_content_status_flt stsflt,
rv_project proj
where projcont.current_chart_key = chart.chart_key (+)
and stsflt.proj_content_barcode = projcont.proj_content_barcode
and proj.proj_key = projcont.proj_key
and upper(fnc_get_full_name (stsflt.intakecomp_by_fname,
stsflt.intakecomp_by_lname)) = :b1
and proj.proj_status_cd in (:b2,:b3,:b4) --- CNA, PNP, NEW
and ( chart.chart_status in ( ....

Bind Variables for Tuning

Rajeshwaran, Jeyabal, March 29, 2012 - 9:04 am UTC

So, sometimes the application will send two bind variable values for the IN clause and sometimes three.

Given this, we get two different SQL statements in the shared pool based on the number of bind variable values in the IN clause.

1) Do you think it is good to have different SQL statements in the shared pool based on the number of bind variable values, or can we achieve a single SQL statement in the shared pool regardless of the number of values we receive from the application?

rajesh@ORA10GR2> variable b1 varchar2(20);
rajesh@ORA10GR2> variable b2 varchar2(20);
rajesh@ORA10GR2> variable b3 varchar2(20);
rajesh@ORA10GR2>
rajesh@ORA10GR2> exec :b1 :='a'; :b2 :='b'; :b3 :='c';

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
rajesh@ORA10GR2>
rajesh@ORA10GR2> select /* test */ * from emp
  2  where ename in (:b1,:b2,:b3);

no rows selected

Elapsed: 00:00:00.01
rajesh@ORA10GR2>
rajesh@ORA10GR2> select /* test */ * from emp
  2  where ename in (:b1,:b2);

no rows selected

Elapsed: 00:00:00.01
rajesh@ORA10GR2>
rajesh@ORA10GR2> column sql_text format a40;
rajesh@ORA10GR2>
rajesh@ORA10GR2> select sql_text
  2  from gv$sql
  3  where sql_id in ('dcangmaqdd44w','3sawa1qgmpukd')
  4  /

SQL_TEXT
----------------------------------------
select /* test */ * from emp where ename
 in (:b1,:b2,:b3)

select /* test */ * from emp where ename
 in (:b1,:b2)


Elapsed: 00:00:00.03
rajesh@ORA10GR2>

Tom Kyte
March 29, 2012 - 10:11 am UTC

http://asktom.oracle.com/Misc/varying-in-lists.html

that is my approach to it in the various releases.

cost difference with bind and non bind

Biswaranjan, May 14, 2012 - 12:49 pm UTC

Hi Tom,

Hope you are doing well.

I have read your binding.pdf where you have shown a nice example using bind variables (using your runstats_pkg package).
I understood that example.
But I have a doubt about the following test results.


SQL> create table tune_a(col1 number,col2 number);

Table created.

SQL> insert into tune_a select level,level+1 from dual connect by level<10000001;

10000000 rows created.

SQL> commit;

Commit complete.

SQL> set autotrace on
SQL> alter session set optimizer_dynamic_sampling=0;

Session altered.

SQL> var col1 number
SQL> exec :col1:=675390
PL/SQL procedure successfully completed.
SQL> select * from tune_a where col1=:col1;

      COL1       COL2
---------- ----------
    675390     675391


Execution Plan
----------------------------------------------------------
Plan hash value: 395449195

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 19078 |   484K|  5238   (3)| 00:01:03 |
|*  1 |  TABLE ACCESS FULL| TUNE_A | 19078 |   484K|  5238   (3)| 00:01:03 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=TO_NUMBER(:COL1))


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      23370  consistent gets
       5008  physical reads
          0  redo size
        468  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select * from tune_a where col1=6798345;

      COL1       COL2
---------- ----------
   6798345    6798346


Execution Plan
----------------------------------------------------------
Plan hash value: 395449195

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 19078 |   484K|  5205   (2)| 00:01:03 |
|*  1 |  TABLE ACCESS FULL| TUNE_A | 19078 |   484K|  5205   (2)| 00:01:03 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=6798345)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      23370  consistent gets
      10683  physical reads
          0  redo size
        470  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> select * from tune_a where col1=986543;

      COL1       COL2
---------- ----------
    986543     986544


Execution Plan
----------------------------------------------------------
Plan hash value: 395449195

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 19078 |   484K|  5205   (2)| 00:01:03 |
|*  1 |  TABLE ACCESS FULL| TUNE_A | 19078 |   484K|  5205   (2)| 00:01:03 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=986543)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      23370  consistent gets
      10844  physical reads
          0  redo size
        468  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> exec :col1:=954231

PL/SQL procedure successfully completed.

SQL> select * from tune_a where col1=:col1;

      COL1       COL2
---------- ----------
    954231     954232


Execution Plan
----------------------------------------------------------
Plan hash value: 395449195

----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        | 19078 |   484K|  5238   (3)| 00:01:03 |
|*  1 |  TABLE ACCESS FULL| TUNE_A | 19078 |   484K|  5238   (3)| 00:01:03 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("COL1"=TO_NUMBER(:COL1))


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      23370  consistent gets
      10896  physical reads
          0  redo size
        468  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

####################################################################
In the above results, when I am using a bind variable, why is the cost of the bind variable query higher than
the non-bind-variable query (5238 > 5205)?
I also had a look into the trace file's output using tkprof.

Do bind variables only impact performance in PL/SQL, and not with the SQL statements I ran above?

thanks as always,
Biswaranjan.


Tom Kyte
May 14, 2012 - 12:55 pm UTC

marginally less cpu - MARGINALLY (those two costs are frankly - THE SAME - they are so very close - they are THE SAME) with a literal. Not enough to even CONSIDER using a literal.


And your statement about "impact on performance in plsql only" is utterly false. What from the above led you to say that? Please don't say "cost" - cost is just a number, it doesn't reflect the actual resources used, it is a guess.

doubt cleared

Biswaranjan, May 14, 2012 - 1:35 pm UTC

Hi Tom,

thanks for your quick and nice reply.

Hmm, I wrote it wrong.
I am learning performance tuning and get good things from your site every day.
Many doubts are yet to come and be cleared too :).

thanks again






bind variables doubt for strange output.

Biswaranjan, May 15, 2012 - 4:44 pm UTC

Hi Tom,


I have run some anonymous block as below......


SQL> alter system flush shared_pool;
SQL> set serveroutput on;

declare
      type rc is ref cursor;
      l_rc rc;
      l_dummy all_objects.object_name%type;
      l_start number default dbms_utility.get_time;
  begin
      for i in 1 .. 1000
      loop
          open l_rc for
          'select object_name
             from all_objects
            where object_id = ' || i;
          fetch l_rc into l_dummy;
          close l_rc;
          -- dbms_output.put_line(l_dummy);
      end loop;
      dbms_output.put_line
       (round((dbms_utility.get_time-l_start)/100, 2) ||
        ' Seconds...' );
  end;
/

35.69 Seconds...

PL/SQL procedure successfully completed.
SQL> /
34.78 Seconds...

PL/SQL procedure successfully completed.
SQL> /
37.89 Seconds...            

PL/SQL procedure successfully completed.      (when I tried this yesterday, after 15 runs I got .16 and .13 seconds, but could not
reproduce it :( )

Then I tried to run like below......
declare
      type rc is ref cursor;
      l_rc rc;
      l_dummy all_objects.object_name%type;
      l_start number default dbms_utility.get_time;
  begin
      for i in 1 .. 1000
      loop
          open l_rc for
          'select object_name
             from all_objects
            where object_id = ' || i;  ---- here I understand this will be a hard parse
          fetch l_rc into l_dummy;
          close l_rc;
          -- dbms_output.put_line(l_dummy);
      end loop;
      dbms_output.put_line
       (round((dbms_utility.get_time-l_start)/100, 2) ||
        ' Seconds...' );
    ---second loop replica
     l_start :=dbms_utility.get_time;
     for i in 1 .. 1000
      loop
          open l_rc for
          'select object_name
             from all_objects
            where object_id = ' || i; ---- but here I have a doubt: why a hard parse again?
          fetch l_rc into l_dummy;
          close l_rc;
          -- dbms_output.put_line(l_dummy);
      end loop;
      dbms_output.put_line
       (round((dbms_utility.get_time-l_start)/100, 2) ||
        ' Seconds...' );
  end;
/

31.29 Seconds...
33.81 Seconds...

PL/SQL procedure successfully completed.

I have executed the above two-loop block 5 to 10 times and found no change.

Can you please tell me why, for the second loop replica (the second time the anonymous
block's loop runs), the soft parse didn't happen? (The queries for object_id 1 to 1000 should already be in the shared pool.)


Now I am testing with the bind variable concept.

SQL> declare
  2        type rc is ref cursor;
  3        l_rc rc;
  4        l_dummy all_objects.object_name%type;
  5        l_start number default dbms_utility.get_time;
  6    begin
  7        for i in 1 .. 1000
  8        loop
  9            open l_rc for
 10            'select object_name
 11               from all_objects
 12              where object_id = :x'
 13            using i;
 14            fetch l_rc into l_dummy;
 15            close l_rc;
 16            -- dbms_output.put_line(l_dummy);
 17        end loop;
 18        dbms_output.put_line
 19         (round((dbms_utility.get_time-l_start)/100, 2) ||
 20          ' Seconds...' );
 21  end;
 22  /
.13 Seconds...

PL/SQL procedure successfully completed.

SQL> /
.14 Seconds...

PL/SQL procedure successfully completed.

SQL> /
.13 Seconds...

PL/SQL procedure successfully completed.

I understand that for the above case, though we use a bind variable, the first loop iteration will do a hard parse
and for the other 999 iterations a soft parse will happen.


Now for 1 to 1000 with a new bind variable name each time, as below.
(I used some awk programming to generate the :a1 through :a1000 branches, since I was not able to write a dynamic sql for it.)

declare
      type rc is ref cursor;
      l_rc rc;
      l_dummy all_objects.object_name%type;
      l_start number default dbms_utility.get_time;
  begin
      for i in 1 .. 1000
      loop
         if i=1 then
            open l_rc for
             'select object_name
             from all_objects
      where object_id =:a1'
            using i;
           fetch l_rc into l_dummy;
           close l_rc;
         elsif i=2 then
           open l_rc for
             'select object_name
             from all_objects
      where object_id =:a2'
            using i;
           fetch l_rc into l_dummy;
           close l_rc;
         .     .     .
         .     .     .
          so**********on  
         .     .     .

     elsif i=1000 then
         open l_rc for
             'select object_name
             from all_objects
      where object_id =:a1000'
            using i;
           fetch l_rc into l_dummy;
           close l_rc;
      end if;
          
   end loop;

       dbms_output.put_line
       (round((dbms_utility.get_time-l_start)/100, 2) ||
        ' Seconds...' );
  end;
/
36.45 seconds...                                  ---- here I understand the hard parses, as we use 1000 different bind variables
PL/SQL procedure successfully completed.
SQL> /
36.24 seconds...                                  ---- but here, the second time, why does it cause hard parses though the same 1000
PL/SQL procedure successfully completed.               different bind variables are used?


I then thought that the if/else conditions might add a little extra time, so I ran the last anonymous block with
the if/else conditions but made the bind variable just ":a" everywhere, and
it took .17 or .16 seconds - the same as the soft-parsing bind variable block's output.


Can you please look into this and tell me why I got the above strange output, or did I miss something in understanding
the correct outputs?

thanks as always,
Biswaranjan

Tom Kyte
May 16, 2012 - 1:08 am UTC

I'm not really sure what you are trying to ask.


In your examples - you should use dbms_utility.get_cpu_time - not elapsed time.


Can you please tell me why, for the second loop replica (the second time the anonymous
block's loop runs), the soft parse didn't happen? (The queries for object_id 1 to 1000
should already be in the shared pool.)



also, you are assuming your shared pool is big enough to hold all of the parsed sql - I'll bet it is not. Hence you'll always be hard parsing since the sql you parsed last time won't be present in the shared pool by the time you re-run the example. so, I'll just guess you are hard parsing - look in the shared pool and see what you see. Use v$mystat joined to v$statname to print out the parse statistics and see how many hard parses you do each time.


but here, the second time, why does it cause hard parses though the same 1000 different bind variables are used?


for the same reason - you are ASSUMING they are soft parses the second time. I'll bet they are not - you've flooded your shared pool and by the time you do "i=1" the second time around - the sql isn't there anymore.
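
For reference, the check suggested above (v$mystat joined to v$statname) can be as simple as this sketch - run it before and after your block and compare the deltas:

select n.name, s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name in ( 'parse count (total)', 'parse count (hard)' );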

bind variable doubt got cleared :)

Biswaranjan, May 16, 2012 - 2:59 am UTC

Wowwwwwww... long live Tom. I have become a real fan of yours.

I saw that "shared_pool_size" was very small on my system,
and I increased it to 500M, and everything (every case I wrote about in my recent post) came out as expected (I used CPU time as you said).

Great.

I hope I have no more doubts about bind variables.

You rock,
Biswaranjan

What does ":variable1:variable2" mean?

Aditya, October 18, 2012 - 4:39 am UTC

Hi Tom,

I came across a situation where I saw something like this.

EXEC SQL FOR :indx
UPDATE table_name
SET col1 = <some_value>,
col2 = :variable1:variable2
WHERE <some_condition>;

Here variable1 and variable2 are two arrays of type double and short respectively.

Please let me know, what "col2 = :variable1:variable2" means here.
Tom Kyte
October 18, 2012 - 8:23 am UTC

it is an indicator variable - which allows you to tell us "the other bind variable is NOT NULL or is NULL"

http://docs.oracle.com/cd/B28359_01/appdev.111/b28427/pc_04dat.htm#i12463

if variable2 is "set", then the host variable variable1 is NULL. If not set, then it is NOT NULL

it is a flag.

a PL/SQL block of code should be created as functions instead of as procedures?

Piero, January 08, 2013 - 9:32 am UTC

Why should a PL/SQL block of code be created as a function instead of as a procedure if it references host or bind variables?

Tom Kyte
January 14, 2013 - 11:42 am UTC

because it pleases you to do so.
because it makes the resultant code more readable, more functional.
because you want to.


that is why - nothing really technical. As in most languages - a function is a procedure that can return its last OUT parameter, rather than passing it as an OUT parameter.


procedure p( x in out number, y out number )
function f( x in out number ) return number

are for all intents and purposes "the same" as far as what they can accomplish.


You might do p as f if you wanted to simplify this bit of code:

declare
    l_x number := 42;
    l_y number;
begin
    p(l_x,l_y);
    dbms_output.put_line( 'the answer is ' || l_y );
end;


into

declare
    l_x number := 42;
begin
    dbms_output.put_line( 'the answer is ' || f(l_x) );
end;




a PL/SQL block of code should be created as functions instead of as procedures?

Piero, February 09, 2013 - 3:53 am UTC

I am truly grateful for the answer.
Maybe I'm not clear on what a host variable or a bind variable is,
so if possible I'd ask you again:
Is it true that a stored procedure cannot use a host variable because it does not see anything on the client side? Is a host variable a variable defined with the VARIABLE command in sqlplus? I often read confusion between a :bind variable ("used in execute immediate only and declared in the declare section of a pl/sql block") and a host variable, and in the example that you've kindly explained, I do not see either of them.
Again, thank you for having responded to me.
Piero
Tom Kyte
February 11, 2013 - 10:00 am UTC

the variable command is just a sqlplus thing.

jdbc does bind variables
odbc does bind variables
php does binds
perl does binds
<pretty much every language> does binds.


plsql uses bind variables -

declare
   l_empno  number := 1245;
begin
   for x in (select * from emp where empno = l_empno )
   ...



l_empno will be a bind variable in the query - it will be passed into the query, not hard coded into the query. If you enabled tracing, the actual sql statement would look like:

SELECT * FROM EMP WHERE EMPNO = :BV01

or something similar.
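
One way to see that for yourself (a sketch - it assumes you can enable sql_trace and read the resulting trace file):

alter session set sql_trace = true;

declare
   l_empno number := 1245;
begin
   for x in ( select ename from emp where empno = l_empno )
   loop
      null;
   end loop;
end;
/

alter session set sql_trace = false;

-- the trace file shows the statement with a bind placeholder,
-- something like: SELECT ENAME FROM EMP WHERE EMPNO = :B1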


what language are you programming in? I'll point you to an example of how to bind in that language.

Alexander, February 13, 2013 - 1:41 pm UTC

Is it possible for Oracle to have a 99% soft parse ratio when not using bind variables? I'm supporting a vendor product that does not use them; there are over 100k SQL statements in v$sql, but I pulled a random AWR report for an hour and it's at 99.59%. I can't explain this, unless Oracle does not re-parse statements if the exact same SQL and values are already in the cache? We have 50GB of shared pool.


Tom Kyte
February 13, 2013 - 2:32 pm UTC

what is your setting of cursor_sharing.


if it is set to force/similar - we will soft parse - which sounds good, but isn't.

so - tell us your setting for cursor_sharing.



... can't explain this, unless Oracle does not re-parse statements if
the exact same sql and values are already in cache? ...

this is true, if the same sql is in the shared pool, we can reuse that.


50gb of shared pool, wow, just wow.

Alexander, February 13, 2013 - 2:39 pm UTC

It's the default, exact. I tried changing it to force as a workaround for now and it triggered some kind of weird bug in RMAN and we couldn't run backups.

I misspoke about the shared pool; the SGA max memory parameters are set to 50GB. Of that, the shared pool is using 20GB. Still huge, right?
Tom Kyte
February 13, 2013 - 3:41 pm UTC

20gb is large, yes.

If you look into v$sql - do you see statements with executions much greater than one? do you see statements with bind variable placeholders in them?


unless they are very simple SQL however, I don't see how you could fit 100,000 of them into 20gb - that is only about 200KB per statement, that seems a bit low.

Alexander, February 14, 2013 - 8:27 am UTC

There are no bind placeholders. It does look like a lot of it is being reused:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options

SQL> select count(*) from v$sql;

  COUNT(*)
----------
    570966

SQL> select count(*) from v$sql where executions > 1 ;

  COUNT(*)
----------
    358265

SQL> select component, current_size from v$memory_dynamic_components;

COMPONENT                                                        CURRENT_SIZE
---------------------------------------------------------------- ------------
shared pool                                                        2.1609E+10
large pool                                                          134217728
java pool                                                           134217728
streams pool                                                        134217728
SGA Target                                                         3.6373E+10
DEFAULT buffer cache                                               1.4093E+10
KEEP buffer cache                                                           0
RECYCLE buffer cache                                                        0
DEFAULT 2K buffer cache                                                     0
DEFAULT 4K buffer cache                                                     0
DEFAULT 8K buffer cache                                                     0
DEFAULT 16K buffer cache                                                    0
DEFAULT 32K buffer cache                                                    0
Shared IO Pool                                                              0
PGA Target                                                         1.9462E+10
ASM Buffer Cache                                                            0

16 rows selected.


Tom Kyte
February 14, 2013 - 10:32 am UTC

are you looking at sql_fulltext - you might not be seeing it all.


and see what the executions are really


select execs, count(*)
from (
select case when executions = 1 then '1 execution'
when executions between 2 and 10 then '<10 execs'
when executions between 11 and 100 then '<100 execs'
else 'more than 100 execs'
end execs
from v$sql
)
group by execs
order by execs;

Alexander, February 14, 2013 - 11:00 am UTC

I'm familiar with the code they're running from previous research. I actually asked you about it because I was seeing gazillions of statements like this; I wondered if there was any way I could force it to bind:

INSERT INTO WN1PDC.CAXNTNOD  VALUES ( 8696, 56891, 30.0000000000, 218.067719,
17.796875, 6271.309082, 6265.778320, 5.530556, 5.530556, 95.910889, 2.186172,
334.884918, 334.884918, 56.150795, 1.523889, 4.006667, 183.627777, 145.382217,
1276.927734, 23500708.000000, 232311.468750, 602494.437500, 9569.182617,
20083.105469, 72.741538, 26.339194, 0.919271, 1212.142944, 329.010010,
-1.0000000000, 0.000000, 2.638889, 0.000000, 0.061849, 0.012153, 45.901249,
0.212912, 0.000000, 0.000000, 4.425661)


SQL> select execs, count(*)
  2  from (
  3  select case when executions = 1 then '1 execution'
  4  when executions between 2 and 10 then '<10 execs'
  5  when executions between 11 and 100 then '<100 execs'
  6  else 'more than 100 execs'
  7  end execs
  8  from v$sql
  9  )
 10  group by execs
 11  order by execs;

EXECS                 COUNT(*)
------------------- ----------
1 execution             154035
<10 execs               280702
<100 execs               73506
more than 100 execs       6709

Tom Kyte
February 14, 2013 - 11:55 am UTC

so, almost none of your SQL is soft parsed very often according to that.

430+K of SQL is executed probably just once or twice - and 80k is parsed more than 10 times with almost none being regularly executed.

that insert should have an execute count of 1 - unless there is no primary key on that table...

the only way to force something to bind is to write it that way in the first place.

cursor sharing=force/similar can "auto bind" after the fact but doesn't remove the huge security hole nor does it decrease parse count.
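
To see what that "auto bind" looks like (a sketch; the bind names are generated by the database):

alter session set cursor_sharing = force;

select * from dual where dummy = 'X';

-- in v$sql the literal is replaced with a system-generated bind, roughly:
--   select * from dual where dummy = :"SYS_B_0"

alter session set cursor_sharing = exact;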

a PL/SQL block of code should be created as functions instead of as procedures?

Piero, February 26, 2013 - 10:47 am UTC

Thanks again, and sorry for my late response to your offer of help.

I "try" to program with PL/SQL.


Full table scan?

pranav, March 11, 2013 - 6:23 pm UTC

Hi Tom,

Please check the below two test cases. One of our applications is running the SQL provided in "test case 2". The problem is with the ":B3" bind variable. Though we are passing a value to the bind variable :B3, it is still going for the full table scan. This is a dynamic query and we couldn't fix it using code changes (as it is not developed in-house). DB version: 11.2.0.2. In fact I tried a similar query in 11.1.0.7 and the plan is the same (full table scan).

The reason I have shown "test case 1" is that with those bind variables it is using an index scan.

Please let me know how we can fix this type of problem. Thanks.

Test Case 1:
============
explain plan for
select * from TESTTBL where COLA = :B1 or COLB = :B2;

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 545K| 36M| 634 (1)| 00:00:09 |
| 1 | CONCATENATION | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| TESTTBL | 8142 | 564K| 46 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | TESTTBL_PK | 8142 | | 6 (0)| 00:00:01 |
|* 4 | TABLE ACCESS BY INDEX ROWID| TESTTBL | 537K| 36M| 588 (1)| 00:00:09 |
|* 5 | INDEX RANGE SCAN | TESTTBL2 | 537K| | 124 (1)| 00:00:02 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("COLA"=:B1)
4 - filter(LNNVL("COLA"=:B1))
5 - access("COLB"=:B2)

Test Case 2:
============
explain plan for
select * from TESTTBL where COLA = :B1 or COLB = :B2 or :B3 is null;

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2318K| 156M| 69077 (3)| 00:16:08 |
|* 1 | TABLE ACCESS FULL| TESTTBL | 2318K| 156M| 69077 (3)| 00:16:08 |
-------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(:B3 IS NULL OR "COLB"=:B2 OR "COLA"=:B1)
Tom Kyte
March 12, 2013 - 8:23 am UTC

think about this for a moment.


If :b3 is null - every single row in the table will be returned. All of them, everything. And it is likely that cola/colb allow for NULLs so an index on them wouldn't point to every row - hence an index COULD NOT BE USED.


I'm very surprised it is using an index on colb - since it is getting 537,000 records out of 2,318,000 - I would have presumed a full scan there.


What plan did you expect from this? How would you, as a programmer, have programmed a bit of code to do this (using SQL plans as your programming language)?

Full table scan

pranav, March 12, 2013 - 10:33 am UTC

Thanks Tom.

What I did not understand, or expected it to do, is this: whenever we pass some value to :B3, why is the optimizer not considering the other two predicates and using the indexes? In this case shouldn't it be the same as case 1? Why is it not peeking into the bind values to decide the plan? Please clarify.
Tom Kyte
March 12, 2013 - 11:31 am UTC

a query has a plan.

that plan has to be able to work for any - repeat *any* - set of inputs.

It can peek the binds, but it cannot develop a plan that *would not work* for a subsequent execution with different bind inputs.


the plan is developed on the first execution, that single plan has to be able to deal with ANY set of inputs. Therefore, it has to be able to deal with ":b3 is null" being true

Full table scan

pranav, March 15, 2013 - 11:05 am UTC

Thanks Tom.

Is there a way to rewrite the below code? I think we can use UNION ALL here. But if there are more such conditions it is going to be a complex query.

select * from cust_act where
(actno = :B1 or custno = :B1 or :B1 is null) and
(actnm = :B2 or custnm = :B2 or :B2 is null) and
custdt = :B3
Tom Kyte
March 25, 2013 - 9:58 am UTC

if any of B1, B2 are null - you want a full scan - or just an index range scan on (:B3) if appropriate.

why are you trying to avoid the best plan possible.

not sure what your requirements are here.

Full table scan

pranav, March 25, 2013 - 10:22 am UTC

Thanks Tom.

Yes, that is the exact requirement. But the plan always goes for a full table scan. It is very rare that the :B1 or :B2 values are NULL, so ideally I want it to use index range scans.
Tom Kyte
March 25, 2013 - 3:01 pm UTC

but, do you understand, that if :b1 or :b2 could be null - that the index on custdt is the ONLY one that can be used right?

so, do you have an index on custdt


bind variables

Lal Cyril, August 23, 2013 - 1:20 pm UTC

Tom,
I have a query which is executed from the application server (WebLogic) for a report module which uses bind variables. The query, as shown in v$sql, is below.

select /*+ first_rows(100)*/ col1,col2 from tab where c1=:1 and c2 >=:2 and c3<=:3.

c1 is a varchar2 column and the bind value set is a string. c2 and c3 are date columns and the bind values set are date objects.

The problem I am facing is that, for the same input values for the bind variables, I am getting different execution plans for the query fired from the application and from SQL Developer.

From SQL Developer I couldn't substitute values for the date variables, so I executed the query by substituting the bind variables with literals converted to dates using to_date.

But my doubt remains: a hard parse from the application tier and from SQL Developer should give the same execution plan, right?

I tried the set autotrace traceonly statistics method from sqlplus, which also gave the same values as the execution from SQL Developer.

Can you give me a hint as to the reason for this?
This particular query returns around 400,000 rows as shown by the autotrace traceonly statistics, but from the application tier only 26 rows are fetched and shown on one page (the rows processed column in v$sql shows only 26).

Will the fetch size in JDBC impact query plans?
Or is it the first_rows hint which is causing the issue?

Also, if I want to execute the query by setting bind variables in sqlplus, what is the option for date variables?

As always, expecting your expert advice.






Tom Kyte
August 28, 2013 - 6:27 pm UTC

... From SQL Developer I couldn't substitute values for the date variables, so I executed the query by substituting the bind variables with literals converted to dates using to_date.
...

yes you can, sqldeveloper supports bind variables.


https://forums.oracle.com/thread/888443


... But my doubt remains: a hard parse from the application tier and from SQL Developer should give the same execution plan, right?
...

absolutely...... NOT



suppose each of c1,c2,c3 has an index on them (three separate single column indexes)


suppose c2 has a maxvalue of 01-jan-2000.
suppose c3 has a min value of 01-jan-2013.
suppose c1 has very few distinct values....


Now, you run a query:

select * from t where c1 = 'hello' and c2 > to_date( '01-jan-2001' ) and c3 < to_date( '01-aug-2013' )

what index would you like it to use? I'd like the index on c2 (it'll find ZERO rows - the index on c1 or c3 would find LOTS of rows)

now you run:

select * from t where c1 = 'hello' and c2 > to_date( '01-jan-1900' ) and c3 < to_date( '01-aug-2012' )

Now I'd like the index on c3 to be used as it would find zero rows whereas c1 and c2 would find "lots"

now you run...... and so on.


search for "bind peeking" to see how a query with bind variables is optimized and reused...

bind variables

Lal, August 29, 2013 - 5:30 am UTC

Tom,
Thanks for the response. It seems it's not possible to define a date bind variable in sqlplus or SQL Developer.
To execute the above query using bind variables from SQL Developer I tried the following.

variable v1 varchar2
variable d1 date
variable d2 date
exec :v1 := 'A'
exec :d1 := sysdate-1
exec :d2 := sysdate+1
select * from t1 where c1 = :v1 and c2>:d1 and c3<=:d2;

This caused the following error
Usage: VAR[IABLE] [ <variable> [ NUMBER | CHAR | CHAR (n [CHAR|BYTE]) |
VARCHAR2 (n [CHAR|BYTE]) | NCHAR | NCHAR (n) |
NVARCHAR2 (n) | CLOB | NCLOB | REFCURSOR |
BINARY_FLOAT | BINARY_DOUBLE ] ]

The date datatype is not shown in the available list.
Are there any workarounds to execute the query using bind variables in SQL Developer or SQL*Plus?

Also regarding bind peeking i will rephrase my question

From application tier the query executed is
select /*+ first_rows(100)*/ * from t1 where c1 = ? and c2>? and c3<=?;

where the bind values are passed as:
'A' for the bind variable for c1,
a date object corresponding to 01-Aug-2013 for the bind variable for c2, and
a date object corresponding to 10-Aug-2013 for the bind variable for c3.

To ensure a hard parse from the application, I made a grant on table t1 to public, then revoked it, and then executed the query from the application tier.

Now from SQL Developer I executed the following query with the same values as those used for the application tier, but as literals.
select * from t1 where c1 = 'A' and c2>=to_date('01-Aug-2013','dd-mon-yyyy') and c3<=to_date('10-Aug-2013','dd-mon-yyyy');

Since I gave literals in SQL Developer it should go for a hard parse.
Since the values for the parameters from the application tier and SQL Developer are the same and a hard parse also happens for each execution, this should use the same execution plan, right?

I am getting different execution plans for the above two executions.
Can you explain why this happens?
Referring to my previous post, is this due to fetch size issues, or the first_rows hint?
SQL Developer fetches the first 50 rows only, and then when I use the page down key I think it executes the query for the next page. The application tier fetches 26 rows for one page.
Tom Kyte
September 04, 2013 - 6:01 pm UTC

but you can use a string and either rely on the implicit conversion that will take place or to_date() it explicitly.
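
For example, a minimal sketch of that workaround for the query above (column names as in your posts; the formats are illustrative):

variable v1 varchar2(10)
variable d1 varchar2(11)
variable d2 varchar2(11)

exec :v1 := 'A'; :d1 := '01-aug-2013'; :d2 := '10-aug-2013'

select /*+ first_rows(100) */ *
  from t1
 where c1 = :v1
   and c2 >  to_date( :d1, 'dd-mon-yyyy' )
   and c3 <= to_date( :d2, 'dd-mon-yyyy' );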


... hard parse also happens for each execution, this should use the same
execution plan right?
...


why would a hard parse happen for every execution. the bind variables in the application are there to make sure it DOES NOT hard parse for every execution!


in your application - you are using bind variables, the bind variables will be used to optimize the query for the first hard parse. After that - we will reuse that plan for subsequent soft parses/executes. If you later hard parse a query with some literals in it (or just *different* bind variables) - it may or may not have the same plan.


search for "bind peeking"
search for "adaptive cursor sharing"

to read more on this. tons already written.

Tom

A reader, September 05, 2013 - 2:51 am UTC

SQL was initially developed at IBM by Donald D. Chamberlin and Raymond F. Boyce in the early 1970s. This version, initially called SEQUEL (Structured English Query Language), was designed to manipulate and retrieve data stored in IBM's original quasi-relational database management system, System R, which a group at IBM San Jose Research Laboratory had developed during the 1970s.[13] The acronym SEQUEL was later changed to SQL because "SEQUEL" was a trademark of the UK-based Hawker Siddeley aircraft company.[14]

In the late 1970s, Relational Software, Inc. (now Oracle Corporation) saw the potential of the concepts described by Codd, Chamberlin, and Boyce and developed their own SQL-based RDBMS with aspirations of selling it to the U.S. Navy, Central Intelligence Agency, and other U.S. government agencies. In June 1979, Relational Software, Inc. introduced the first commercially available implementation of SQL, Oracle V2 (Version2) for VAX computers.

After testing SQL at customer test sites to determine the usefulness and practicality of the system, IBM began developing commercial products based on their System R prototype including System/38, SQL/DS, and DB2, which were commercially available in 1979, 1981, and 1983, respectively

bind variables

Lal, September 18, 2013 - 1:51 pm UTC

Tom,
Thanks for your reply. I did not put my question the correct way.
I understand the concept of bind variables - to avoid hard parses on subsequent executions.

My question is
I have the following query executed from a java code using jdbc.

select /*+ first_rows(100)*/ * from t1 where c1 = ? and c2>? and c3<=?;

I executed the same query from SQL Developer also, but with literal values, since c2 and c3 are date columns and I cannot use bind variables in SQL Developer or sqlplus for date columns.

For the same query, i got different execution plans from java and sql developer, for the same set of values for c1,c2 and c3.
What could be the reason for that?
Tom Kyte
September 24, 2013 - 11:25 pm UTC

because you were binding a string in one environment and a date in another. those will always result in two child cursors - one a plan for when you bind strings, another for when you bind dates.

And since they are parsed with potentially different inputs - bind peeking could lead to different plans.

you'll want to see what the query using dates was parsed with - what binds were peeked

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3186342300346635761

and use them to have the same sort of inputs used to hard parse and optimize with.
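
For example, one way to see the peeked binds for a given cursor (a sketch - the sql_id and child number are placeholders you would fill in):

select *
  from table( dbms_xplan.display_cursor( '&sql_id', 0, 'TYPICAL +PEEKED_BINDS' ) );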

Bind Variables

Lal, October 03, 2013 - 4:16 am UTC

Tom,

If I use to_date when I execute the query from SQL Developer, can the plans still differ for the first execution?

Case A
i execute the following query
select /*+ first_rows(100)*/ * from t1 where c1 = ? and c2>? and c3<=?;
from the java application with values for binds set as 'A' (string in java),'01-Oct-2010' (date in java) and '01-Oct-2010' (date in java) respectively.

Case B
i execute the following query from sql developer
select /*+ first_rows(100)*/ * from t1 where c1 = 'A' and c2>to_date('01-Oct-2010','dd-mon-yyyy')
and c3<=to_date('01-Oct-2010','dd-mon-yyyy');

I am not bothered about the subsequent query executions and plan reuse.
I understand that the above two queries are different, since i used literals in the query for the second case.
But logically these two queries are the same, i.e. for case A I used bind variables and for case B I used literals with a date function for the date columns.

For this, is there a chance that the optimiser selects two different plans for these two queries on the first execution?

Dhamodaran, November 21, 2015 - 2:56 am UTC

Greetings.
Thank you very much for all your support in understanding Oracle the best way.

We have a simple update statement as below.

1) update DIST_TABLE set EVENT_ID=:1 where REF_ID=:2
2) update DIST_TABLE set DIST=:1 ,DIST_ID=:2 where REF_ID=:3

The REF_ID is primary key and has unique index.

#1 runs fast with an index scan.
#2 runs very slowly and causes high CPU too.

Why is #1 using the index unique scan, but #2 using a full table scan?

The bind variable values are straightforward string values, as you can see in trace (2) below.

This is an Oracle 11g database. The table has around 5 million records and is used for OLTP.
The query is coming from the application via a JDBC client.

I would expect query #2 to use the unique index, so that the performance would be good.

Kindly provide your valuable input on what Oracle is doing here.

=========(1) PK UNIQUE INDEX USED =========
PARSING IN CURSOR #47165654821760 len=67 dep=0 uid=41 oct=6 lid=41 tim=1447221961862941 hv=3328185741 ad='1acd057f0' sqlid='4mvzf3v3605cd'
update DIST_TABLE set EVENT_ID=:1 where REF_ID=:2
END OF STMT
PARSE #47165654821760:c=0,e=46,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=3317714573,tim=1447221961862938
EXEC #47165657270024:c=0,e=147,p=0,cr=2,cu=4,mis=0,r=1,dep=1,og=1,plh=2622476710,tim=1447221961863412
CLOSE #47165657270024:c=0,e=4,dep=1,type=3,tim=1447221961863461
EXEC #47165654821760:c=0,e=462,p=0,cr=7,cu=17,mis=0,r=1,dep=0,og=1,plh=3317714573,tim=1447221961863493
STAT #47165654821760 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE DIST_TABLE (cr=7 pr=0 pw=0 time=398 us)'
STAT #47165654821760 id=2 cnt=1 pid=1 pos=1 obj=17426 op='TABLE ACCESS BY INDEX ROWID DIST_TABLE (cr=4 pr=0 pw=0 time=19 us cost=3 size=95 card=1)'
STAT #47165654821760 id=3 cnt=1 pid=2 pos=1 obj=17427 op='INDEX UNIQUE SCAN DIST_TABLE_PK (cr=3 pr=0 pw=0 time=10 us cost=2 size=0 card=1)'
XCTEND rlbk=0, rd_only=0, tim=1447221961863624
WAIT #47165654821760: nam='log file sync' ela= 863 buffer#=4238 sync scn=585632617 p3=0 obj#=-1 tim=1447221961864595
WAIT #47165654821760: nam='SQL*Net message to client' ela= 2 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1447221961864654
WAIT #47165654821760: nam='SQL*Net message from client' ela= 1487 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=1447221961866178
CLOSE #47165654821760:c=0,e=28,dep=0,type=1,tim=1447221961866244


===============(2) PK UNIQUE INDEX NOT USED FOR SIMILAR SQL ==============
PARSING IN CURSOR #47476858596296 len=90 dep=0 uid=41 oct=6 lid=41 tim=1447766926343294 hv=232681020 ad='f15b7008' sqlid='a7xs3xh6xwvjw'
update DIST_TABLE set DIST=:1 ,DIST_ID=:2 where REF_ID=:3
END OF STMT
PARSE #47476858596296:c=0,e=71,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=4185332264,tim=1447766926343291
BINDS #47476858596296:
Bind#0
oacdty=01 mxl=128(36) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000010 frm=01 csi=873 siz=384 off=0
kxsbbbfp=2b2e114aa618 bln=128 avl=09 flg=05
value="23.005 KM"
Bind#1
oacdty=01 mxl=128(40) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000010 frm=01 csi=873 siz=0 off=128
kxsbbbfp=2b2e114aa698 bln=128 avl=10 flg=01
value="DIS0000137"
Bind#2
oacdty=01 mxl=128(60) mxlc=00 mal=00 scl=00 pre=00
oacflg=03 fl2=1000010 frm=01 csi=873 siz=0 off=256
kxsbbbfp=2b2e114aa718 bln=128 avl=15 flg=01
value="REF000005780796"

*** 2015-11-17 17:28:49.809
WAIT #47476858596296: nam='db file scattered read' ela= 233037 file#=4 block#=462296 blocks=2 obj#=17426 tim=1447766929809814
.....................
EXEC #47476858596296:c=3636228,e=132445714,p=46,cr=116363,cu=3,mis=0,r=1,dep=0,og=1,plh=4185332264,tim=1447767058789121
STAT #47476858596296 id=1 cnt=0 pid=0 pos=1 obj=0 op='UPDATE DIST_TABLE(cr=116363 pr=46 pw=0 time=132445463 us)'
STAT #47476858596296 id=2 cnt=1 pid=1 pos=1 obj=17426 op='TABLE ACCESS FULL DIST_TABLE (cr=116363 pr=46 pw=0 time=132445365 us cost=30603 size=37 card=1)'
XCTEND rlbk=0, rd_only=0, tim=1447767058789272
......
WAIT #47476858596296: nam='SQL*Net message from client' ela= 239889 driver id=1952673792 #bytes=1 p3=0 obj#=17426 tim=1447767059172614
CLOSE #47476858596296:c=0,e=10,dep=0,type=1,tim=1447767059172701
============ END ============
Connor McDonald
November 22, 2015 - 4:34 am UTC

Please log a new question with:

- the full DDL for the table (and the indexes)
- the output of the following (after you have run each update)

select * from table(dbms_xplan.display_cursor(sql_id=>'4mvzf3v3605cd',format=>'ALL LAST'))

select * from table(dbms_xplan.display_cursor(sql_id=>'a7xs3xh6xwvjw',format=>'ALL LAST'))


index is used instead of full table scan in case of bind variable

Ranjan, July 09, 2017 - 10:51 am UTC

create table t2 as select level a,level+1 b from dual connect by level <=100000;
create index ind_t2 on t2(a);   -- implied by the IND_T2 range scan in the plan below
insert into t2 select 1,2 from dual connect by level <=100000;
commit;


SQL> set autotrace traceonly;
SQL> variable var1 number;
SQL> exec :var1:=1;

PL/SQL procedure successfully completed.

SQL> select * from t2 where a=1;

100001 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1513984157

--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 98201 | 671K| 93 (9)| 00:00:02 |
|* 1 | TABLE ACCESS FULL| T2 | 98201 | 671K| 93 (9)| 00:00:02 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("A"=1)


Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
6917 consistent gets
0 physical reads
0 redo size
1347071 bytes sent via SQL*Net to client
73707 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
100001 rows processed

SQL> select * from t2 where a = :var1;

100001 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 591771271

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |     2 |    14 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2     |     2 |    14 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | IND_T2 |     2 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("A"=TO_NUMBER(:VAR1))


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
6917 consistent gets
0 physical reads
0 redo size
1347071 bytes sent via SQL*Net to client
73707 bytes received via SQL*Net from client
6668 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
100001 rows processed

SQL>

Table and index statistics are already gathered.
I have no idea why, when the bind variable holds the value 1 for column a, the query uses an index range scan instead of a full table scan.
I am definitely missing something.

SQL> set autotrace off;
SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

Thanks,
Ranjan.
Chris Saxon
July 10, 2017 - 3:56 pm UTC

The plan shown by autotrace is an explain plan, not the execution plan, so it may show something different from the plan the statement really uses:

https://asktom.oracle.com/Misc/when-explanation-doesn-sound-quite.html

Notice that the stats for the two runs are the same, suggesting they used the same plan. If you get the real plan, what do you see?

https://blogs.oracle.com/sql/how-to-create-an-execution-plan
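
For example, a minimal sketch of pulling the real plan from the cursor cache (using the t2 / :var1 names from your test; the gather_plan_statistics hint and the ALLSTATS LAST format are one common way to do this):

SQL> set autotrace off
SQL> select /*+ gather_plan_statistics */ * from t2 where a = :var1;
SQL> select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

With no sql_id passed in, display_cursor reports on the last statement the session ran, so you see the plan the optimizer actually used along with its run-time row counts.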

cont.. to last post

ranjan, July 13, 2017 - 4:37 am UTC

Yeah, I got it right after posting the question.
I remembered one of Tom's posts: EXPLAIN PLAN is blind to binds :)

Bind Variables in For Loop

Rajasekhar, July 02, 2020 - 10:50 am UTC

Hello Tom,
Thanks for educating us. I have a question related to bind variables.
Do you think it is possible to use bind variables in a for loop?
In the code below, I want to get the latest data from table2 based on the column from the outer (table1) for loop. Is it possible to use a bind variable there? It may reduce parsing and memory consumption.
If not, is there any way to optimize this kind of query?

Table2 is updated frequently and has 100 million records.

declare
begin

for i in (select columns from table1)
loop
for j in ('select columns from table2 where num=:x') using i.column name
loop
some validations
end loop;

end loop;
end;
/
Chris Saxon
July 02, 2020 - 3:48 pm UTC

Yes it's possible. In PL/SQL, variables in SQL are bind variables.

Use static SQL in the inner loop, referencing the value from the outer loop:
for j in ( select columns from table2 where num= i.column_name )


Of course, it's better still to join the tables and have one loop:

select ...
from  table1
join   table2
on ....
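
As a minimal sketch of that single joined loop (table1, table2, num and column_name are just the placeholder names from your example):

begin
  for r in (
    select t1.column_name, t2.*
    from   table1 t1
    join   table2 t2
    on     t2.num = t1.column_name
  )
  loop
    null;  -- the validations go here, using the fields of r
  end loop;
end;
/

That way the database runs one statement once, instead of one query for every row of table1.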

Performance tuning

Rajasekhar, July 03, 2020 - 6:49 am UTC

Hello Tom,

Thanks for clearing up the doubt regarding bind variables.
I can't join the tables here. Let me explain my problem.

We are trying to map an item to an invoice; one item can be mapped to any number of invoices.
The outer loop table is the item table, and the inner loop table holds the possible combinations of items and invoices.

itemno  itemamount  invno  invoiceamount
------  ----------  -----  -------------
1       20          test1  30
2       15          test1  30

So when the outer loop sends the item number, the inner loop queries the combination table to get the items and the associated invoice.

for i in item_table --select * from item_table
for j in (combination table=i.item no) --select * from table where itemno=i.item no
invoiceamount:=invoiceamount-itemamount;
updating the same table to get the updated invoice amount for next item.
inserting the mapped data into another table.
end loop;
end loop;

It is taking a long time because of the large data volume (100 million rows), and we are trying to optimize the code.
I thought using bind variables might improve the performance, but the code is already written the way you said, so I am stuck on what to do next.
Now I am thinking about switching approaches, as it performs many DML operations with context switches between the two engines (PL/SQL and SQL).
If you have any suggestions for reducing the time, or any optimization techniques, it would be very helpful.
Chris Saxon
July 03, 2020 - 10:14 am UTC

I can't join the tables here

Yes you can, that's what you're doing!

Putting DML inside a loop is the best way to write SLOOOOOOW code.

Ideally you'd have one SQL statement that changes all the rows. I'm not sure that's possible here.

I suggest you look into bulk processing. In basic terms, this replaces the loop with something like:

select ... 
bulk collect into item_array
from  item_table i
join  combination_table c
on    i.item_no = c.item_no;

forall i in 1 .. item_array.count 
  update ...

forall i in 1 .. item_array.count 
  insert ...


Given the large amount of data you're processing, you'll want to fetch in batches though. Read more about how to do this at:

https://blogs.oracle.com/oraclemagazine/bulk-processing-with-bulk-collect-and-forall
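
To make that concrete, here is a minimal sketch of the batched version. The table and column names (item_table, combination_table, item_no, item_amount, inv_no, invoice_amount, item_invoice_map) are assumptions based on your description, and 1000 is just an example batch size:

declare
  cursor c_map is
    select c.item_no, c.item_amount, c.inv_no
    from   combination_table c
    join   item_table i
    on     i.item_no = c.item_no;

  type t_num_tab is table of number;
  type t_inv_tab is table of combination_table.inv_no%type;

  l_item_nos     t_num_tab;
  l_item_amounts t_num_tab;
  l_inv_nos      t_inv_tab;
begin
  open c_map;
  loop
    -- fetch the next batch of item/invoice combinations
    fetch c_map
      bulk collect into l_item_nos, l_item_amounts, l_inv_nos
      limit 1000;
    exit when l_item_nos.count = 0;

    -- subtract each item amount from its invoice, one round trip per batch
    forall j in 1 .. l_item_nos.count
      update combination_table
      set    invoice_amount = invoice_amount - l_item_amounts(j)
      where  inv_no = l_inv_nos(j);

    -- record the item-to-invoice mapping
    forall j in 1 .. l_item_nos.count
      insert into item_invoice_map (item_no, inv_no)
      values (l_item_nos(j), l_inv_nos(j));
  end loop;
  close c_map;

  commit;
end;
/

Each fetch pulls a batch of rows and each forall sends the whole batch to the SQL engine in a single call, which is where the big saving over the row-by-row loop comes from.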

indexes

Rajasekhar, July 06, 2020 - 7:24 am UTC

Hello Tom,

Thank you for your suggestion.
Do you think disabling the indexes (about 5 of them) on the combination table and re-enabling them after all the operations will help us?
If so, does it impact SELECT and DML operations on that table while it still holds 100 million records?

thanks in advance.
Chris Saxon
July 06, 2020 - 2:34 pm UTC

It's Chris, but hey ;)

If you disable indexes while you load the data, then none of your queries can use those indexes! This likely makes your application unusable while this process runs.

Start by changing the code to use bulk processing first. If this is still too slow, then look into other tuning tricks.

