native dynamic sql, dynamic sql and dbms_sql

Question and Answer

Tom Kyte

Thanks for the question, sam.

Asked: August 24, 2001 - 5:02 pm UTC

Last updated: November 12, 2010 - 9:00 am UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

I. What is the difference between native dynamic SQL, dynamic SQL and dbms_sql? Believe me Tom, there must be scores of developers who want this cleared up for them.

II. select deptno, count(*)
    from emp
    group by deptno
    union
    select to_number(null), count(*)
    from emp;

    DEPTNO   COUNT(*)
---------- ----------
        10          3
        20          5
        30          6
                   14

Does to_number of a null have any meaning?

III. I have a procedure which does a delete, then an insert and then an update. These DML operations are done depending upon the lookup value_ids in a table called t.

desc t;

( value_id     number
, proc_status  varchar2(10)
, start_dtm    date
, end_dtm      date
)

Now, while the procedure is doing the delete, insert, and update into different tables, we want to show the status of that particular value_id as CURRENT (prior to that the status will be SLEEPING), and after all the operations are over the status should be OVER.
Further, start_dtm and end_dtm should get populated with the date and time when the said operations start and end for a particular value_id.
We have tried a couple of methods but the results have not been satisfactory. Can you propose an efficient method to do this?

thank you




and Tom said...

I. In my book -- i go into some 35 pages on this very subject. The summary is:

In Closing

In this chapter we thoroughly explored dynamic SQL in stored procedures. We explored the differences between Native Dynamic SQL and DBMS_SQL -- showing when to use one over the other. Both implementations have their time and place.

Dynamic SQL allows you to write procedures that are otherwise impossible -- generic utilities to dump data, to load data, and so on. Further examples of dynamic SQL-based routines can be found on the WROX web site, such as a utility to load dBase III files into Oracle via PLSQL, printing the results of a query down the page in SQLPlus (see the Invokers/Definers rights chapter), pivoting result sets (see the analytic functions chapter for details on that) and much more.

In a nutshell -- there is Native dynamic sql (execute immediate) and dbms_sql. They are BOTH dynamic sql (sql that is not known until runtime). It really does take quite a bit of space to go over when to use one over the other -- the small checklist I have in that chapter is:

DBMS_SQL will be used when:

o You do not know the number or types of columns you will be working with. DBMS_SQL includes procedures to "describe" a result set. Native Dynamic SQL does not. With Native Dynamic SQL you need to know what the result set will look like at compile time if PLSQL is to process the results.

o You do not know the number or types of possible bind variables you will be working with. DBMS_SQL allows us to procedurally bind inputs into our statements. Native Dynamic SQL requires us to be aware of the number and types of bind variables at compile time (but we will investigate an interesting workaround to this issue).

o You will be fetching or inserting 1,000's of rows and can employ array processing. DBMS_SQL permits "array" processing, the ability to fetch N rows in a single call instead of a row at a time. Native Dynamic SQL in general does not, there is a workaround demonstrated below.

o You will be executing the same statement many times in the same session. DBMS_SQL will allow us to parse the statement once and execute it many times over. Native Dynamic SQL will cause a soft parse with each and every execution. See the section on Tuning for why this extra parsing is not desirable.

Native Dynamic SQL should be used when:

o You know the number and types of columns you will be working with.
o You know the number and types of bind variables (we can also use application contexts to use the easier native dynamic SQL to get around the fact we don't know the number and types)
o You will be doing DDL.
o You will be executing the statement a very few (one optimally) times.


The book has the "why" behind the advice (and many examples/benchmarks)

II) As for "Does to_number of a null have any meaning?" -- yes, it does.

If you try it without it:

1 select deptno, count(*)
2 from emp
3 group by deptno
4 union
5 select null, count(*)
6* from emp
scott@ORA817.US.ORACLE.COM> /
select deptno, count(*)
*
ERROR at line 1:
ORA-01790: expression must have same datatype as corresponding expression


you'll find that a NULL by default will be treated as a varchar2 -- the select lists for unions must be exactly the same types.
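The to_number(null) simply gives that NULL a type matching the other branch of the UNION. An explicit CAST does the same job if you prefer it:

select deptno, count(*)
  from emp
 group by deptno
union
select cast(null as number), count(*)
  from emp;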

III) Sounds like you might want to use an autonomous transaction. Again, I have a big chapter on this particular feature in my book. Here:

http://asktom.oracle.com/~tkyte/autonomous/index.html

i have a bit of what they are about as well. You might have a procedure:

create or replace procedure update_t( p_value_id  in number,
                                      p_status    in varchar2,
                                      p_start_dtm in date default null,
                                      p_end_dtm   in date default null )
as
    pragma autonomous_transaction;
begin
    update t
       set proc_status = p_status,
           start_dtm   = nvl( p_start_dtm, start_dtm ),
           end_dtm     = nvl( p_end_dtm, end_dtm )
     where value_id = p_value_id;

    commit;
end;
/

when you start, call update_t( 123, 'CURRENT', sysdate );
when you end, call update_t( 123, 'OVER', null, sysdate );
when you go from current to sleeping call update_t( 123, 'SLEEPING' );

is that what you are looking for?






Comments

If the job is broken or job fails

Sam, August 24, 2001 - 7:03 pm UTC

In the above procedure the requirement is that there is also a status called FAILED: if my insert fails then the status of the value_id should be set to FAILED, and likewise if the job fails for some other reason the status should be set to FAILED.

Further, this job wakes up every minute, and the procedure which gets executed picks up all the value_ids whose status is SLEEPING.

Now each time it will find 10 to 15 value_ids. The procedure has a loop which first sets the status of the value_id to CURRENT, and then does the said operations.

But the problem is that this job cannot finish, within one minute, all the processing for all the value_ids picked up by the cursor.
When the next job wakes up a minute later, some of the previous value_ids might get selected again. How should I handle this situation?

In my given situation, how can I run parallel jobs to share the value_ids available at a given point in time, such that the next job will not pick up the same value_ids?

I hope I was clear, if not let me know.

Tom Kyte
August 25, 2001 - 1:18 pm UTC

Then catch the exception and call update_t with those status values.
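For example, something along these lines (a sketch only -- update_t is the routine from the answer above and FAILED is the extra status you describe):

begin
    update_t( 123, 'CURRENT', sysdate );

    -- ... the delete / insert / update work for value_id 123 goes here ...

    update_t( 123, 'OVER', null, sysdate );
exception
    when others then
        -- record the failure (the autonomous commit survives any rollback),
        -- then let the error propagate to the caller
        update_t( 123, 'FAILED' );
        raise;
end;
/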

It looks like you are trying to implement a process queue in the database ( a queue of things to do).

I would strongly encourage you to use the existing QUEUEING software that comes with the database:

http://docs.oracle.com/docs/cd/A81042_01/DOC/appdev.816/a76938/toc.htm

It'll save you a TON of effort and code.
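If you do go the AQ route, the setup is roughly as follows (a sketch only -- the type, queue and column names are made up for illustration; see the linked documentation for the many options, including multiple consumers for the parallel-job case):

create or replace type value_msg_t as object ( value_id number );
/

begin
    dbms_aqadm.create_queue_table(
        queue_table        => 'value_q_tab',
        queue_payload_type => 'VALUE_MSG_T' );
    dbms_aqadm.create_queue(
        queue_name  => 'value_q',
        queue_table => 'value_q_tab' );
    dbms_aqadm.start_queue( queue_name => 'value_q' );
end;
/

-- producer: enqueue a value_id to be processed
declare
    l_enq_opts dbms_aq.enqueue_options_t;
    l_props    dbms_aq.message_properties_t;
    l_msgid    raw(16);
begin
    dbms_aq.enqueue(
        queue_name         => 'value_q',
        enqueue_options    => l_enq_opts,
        message_properties => l_props,
        payload            => value_msg_t( 123 ),
        msgid              => l_msgid );
    commit;
end;
/

-- consumer (the job): with the default options the dequeue waits for a
-- message, and two concurrent jobs will never receive the same message
declare
    l_deq_opts dbms_aq.dequeue_options_t;
    l_props    dbms_aq.message_properties_t;
    l_msgid    raw(16);
    l_msg      value_msg_t;
begin
    dbms_aq.dequeue(
        queue_name         => 'value_q',
        dequeue_options    => l_deq_opts,
        message_properties => l_props,
        payload            => l_msg,
        msgid              => l_msgid );
    -- ... process l_msg.value_id here ...
    commit;
end;
/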

Why should there be a call to pragma autonomous transaction

SAM, August 25, 2001 - 12:38 am UTC

-Why should there be a call to pragma autonomous_transaction?
Why can't we do the commit in the calling procedure itself, instead of doing it in a separate procedure as you have done and then calling it?
Just before I do an insert I will do the update statement and commit, and the same after the insert; why do I have to code a separate procedure for that with pragma autonomous_transaction?
(and as you keep telling us, something which can be done with pure SQL should not be done using PL/SQL)


-Why are we doing an nvl in the following
start_dtm = nvl( p_start_dtm, start_dtm ),
end_dtm = nvl( p_end_dtm, end_dtm )



Tom Kyte
August 25, 2001 - 1:06 pm UTC

I used an autonomous transaction because I thought the original questioner wanted this status table to be readable by other sessions as a "status" of sorts (hence we need to commit) but I didn't want to interleave this utility routine's transactional logic with the larger program as a whole. That is, i wanted this update to be a transaction (i needed to commit) but i didn't want to commit any outstanding work in the caller.

Hence, this is NOT something that could have been done in SQL since you need a plsql block to do an autonomous transaction.

Also, don't take the "do it in sql, if not able do it in plsql, if not able ..." mantra of mine and try to twist it into "don't write modular code". I put a single update into a utility routine that is callable from many places. I'm still doing all of the logic in SQL here (reference the very next comment where I explain the usage of NVL() in there). This is a perfect example of "if you can do it in a single sql statement -- do it".
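A tiny sketch of that point (work_tab here is just a stand-in for whatever the caller is really doing; update_t is the autonomous routine from the answer above):

begin
    insert into work_tab ( x ) values ( 1 );  -- the caller's outstanding work
    update_t( 123, 'CURRENT', sysdate );      -- autonomous: commits only its own update
    rollback;                                 -- rolls back the insert,
                                              -- but the status row stays CURRENT
end;
/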

nvl

SAM, August 25, 2001 - 12:45 am UTC

start_dtm = nvl( p_start_dtm, start_dtm ),
end_dtm = nvl( p_end_dtm, end_dtm )

The question actually is: for the null value substitution, why are you using start_dtm and end_dtm, because when no value is passed even these will be null.

Can you give a clarification please.

Thank you

Tom Kyte
August 25, 2001 - 1:03 pm UTC

I did that because of the way I demonstrated the "update_t" routine. It has the ability to set the status, start_dtm and end_dtm fields, but start_dtm and end_dtm are not always to be set.

when you start, call update_t( 123, 'CURRENT', sysdate ); updates start_dtm, not end_dtm

when you end, call update_t( 123, 'OVER', null, sysdate ); updates end_dtm, not start_dtm

when you go from current to sleeping call update_t( 123, 'SLEEPING' ); updates NEITHER

Instead of having 3 different updates, i just used one:

update t
   set proc_status = p_status,
       start_dtm   = nvl( p_start_dtm, start_dtm ),
       end_dtm     = nvl( p_end_dtm, end_dtm )
 where value_id = p_value_id;


If p_start_dtm is NULL, we don't update start_dtm in the database (well, technically we do, but we don't change its value). If p_end_dtm is passed as NULL, we don't change the current value of end_dtm in the database. That is the meaning of that update.

Also as for "why are you using start_dtm , and end_dtm, becasue when no value is passed even these will be null." that is not the case. When I call:

when you go from current to sleeping call update_t( 123, 'SLEEPING' );

start_dtm most certainly will not be NULL in the database.




autonomous transactions

Vladimir Zakharychev, August 25, 2001 - 11:16 am UTC

To do a small clarification of this topic (and free Tom's time for more important stuff ;):

an autonomous transaction in this particular case ensures that only this particular update will be committed (and thus guaranteed to always happen) no matter what happens in the calling procedure. Right now you are sure that you do a commit immediately before this procedure is called, so it actually doesn't make much sense to envelope the status update in an autonomous transaction, but in the future you may well decide not to do that commit. In that case you won't have to rewrite the status update - it will still commit its own work no matter what happens at the upper level.

As Tom's article on autonomous transactions shows, they are most useful for implementing custom audit or logging, and, of course, in row-level triggers, where you might want to alter other data in the same table that activated the trigger but are prevented from doing so by the mutating table restriction - autonomous transactions give you the ability to alter that data: although the table is mutating, only one particular row is actually being changed, so the other rows are pretty static and accessible at this point. An autonomous transaction allows you to access those other rows from the trigger, alter them and commit your work. Keep in mind though that, although the trigger may result in an exception and the transaction which fired it will probably be rolled back, the autonomous transaction's changes WILL NOT be rolled back in this case, so you should be very careful here.

An obvious use of this ability is a hierarchical table. For example, you might want to delete a particular leaf in such a table and want its siblings to be automatically moved to its parent, not deleted in cascade. An autonomous transaction in a before-delete trigger would easily do this, by setting the parent id of all siblings of the deleted leaf to the leaf's parent id and then allowing the delete to proceed.

regards,
Bob.


OK

Kumar, December 21, 2005 - 1:06 am UTC

Hi Tom,
I would like to get the table names dropped and created in the
following code.

SQL> begin
       execute immediate 'drop table a cascade constraints';
       
       -- I would like to have a statement that says that
       -- so and so table got dropped

       -- Can we use trigger specific ora_dict_obj_name event attribute function here???
       
       execute immediate 'create table b(x int,y varchar2(30))';
       
        -- Here also I want to have..

Any workaround for this requirement?

Please do reply.
Bye!!

 

Tom Kyte
December 21, 2005 - 7:22 am UTC

you can play with the triggers - sure. Sort of a "non feature" if you ask me. I'd just say substr out the first 40 characters of the DDL statement and say:


dbms_output.put_line
( 'Successful execution of "' || substr( l_ddl, 1, 40 ) || '..."' );


lo-tech, simple, not fancy - but gets the job done :)
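If you want that in one place rather than after every statement, a tiny wrapper does it (do_ddl is just a made-up name for this sketch):

create or replace procedure do_ddl( p_ddl in varchar2 )
as
begin
    execute immediate p_ddl;
    dbms_output.put_line
    ( 'Successful execution of "' || substr( p_ddl, 1, 40 ) || '..."' );
end;
/

begin
    do_ddl( 'drop table a cascade constraints' );
    do_ddl( 'create table b(x int,y varchar2(30))' );
end;
/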

Does performance of NDS vs DBMS_SQL depend on the machine?

Marc Blum, March 30, 2006 - 4:38 am UTC

We are developing a product which at some stage has to do some computations with formulas only known at runtime. Therefore we have to use dynamic SQL. The Oracle documentation says that Native Dynamic SQL is better than the use of DBMS_SQL because of better performance and lower resource consumption. We wrote a script to benchmark NDS against optimized use of DBMS_SQL (1 parse, <n> executes).

Now we find that, depending on which machine we run the benchmark, we get results from "NDS/DBMS_SQL = 2/5" through "100/100" up to "4/1". We have tested a multitude of machines, from laptops with Centrino CPUs and Windows, to 64-bit servers with Linux, to 32-bit servers with Windows. We didn't find any rule for when NDS will be faster than DBMS_SQL! A deviation of 10% may be OK, but we can't go into production if the performance of NDS depends on the hardware by a factor of 4!

We need some criteria to decide when NDS will perform slower than DBMS_SQL.

Please!

================================================

--
-- verifies the claim in the Oracle documentation that Native
-- Dynamic SQL is faster than optimized dynamic SQL via
-- DBMS_SQL
--
--
CLEAR SCREEN

DROP TYPE num_type;
DROP PACKAGE pack;
DROP PROCEDURE proc;

CREATE TYPE num_type
IS
TABLE OF NUMBER;
/

CREATE OR REPLACE PACKAGE pack
AS

PROCEDURE calc
(i_anzahl IN NUMBER,
i_art IN VARCHAR2);

END pack;
/

CREATE OR REPLACE PACKAGE BODY pack
AS

PROCEDURE calc
(i_anzahl IN NUMBER,
i_art IN VARCHAR2)
IS
l_num_tab num_type := num_type();
l_return NUMBER;
--
l_cur INT;
l_rows INT;
--
l_pos VARCHAR2(100);
BEGIN
l_num_tab.EXTEND;
l_num_tab(1) := i_anzahl;
l_num_tab.EXTEND;
l_num_tab(2) := i_anzahl+1;
l_num_tab.EXTEND;
l_num_tab(3) := i_anzahl+2;
--
IF i_art = 'DBMS_SQL'
THEN
l_pos := 'open';
l_cur := dbms_sql.open_cursor;

l_pos := 'parse';
dbms_sql.parse(l_cur, 'select (:b1 + :b2)/:b3 as ergebnis from dual', dbms_sql.NATIVE);

l_pos := 'define';
dbms_sql.define_column(l_cur, 1, l_return);

END IF;


FOR i IN 1..i_anzahl
LOOP
IF i_art = 'NDS'
THEN

EXECUTE IMMEDIATE
'begin
:b1 := (:b2(1) + :b2(2))/:b2(3) ;
end;'
USING OUT l_return, l_num_tab;



ELSIF i_art = 'DBMS_SQL'
THEN
l_pos := 'bind';
dbms_sql.bind_variable(l_cur, 'b1', l_num_tab(1));
dbms_sql.bind_variable(l_cur, 'b2', l_num_tab(2));
dbms_sql.bind_variable(l_cur, 'b3', l_num_tab(3));

l_pos := 'execute';
l_rows := dbms_sql.EXECUTE_and_fetch(l_cur, TRUE);

l_pos := 'value';
dbms_sql.COLUMN_VALUE(l_cur, 1, l_return);
END IF;
--
END LOOP;

IF i_art = 'DBMS_SQL'
THEN
l_pos := 'close';
dbms_sql.close_cursor(l_cur);
END IF;


l_num_tab.DELETE;
--

dbms_output.put_line('- - - - - - - - - -');
dbms_output.put_line('Art: '||i_art);
dbms_output.put_line('Anzahl: '||i_anzahl);
dbms_output.put_line('letztes Ergebnis: '||l_return);

EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(l_pos);
RAISE;
END calc;
END pack;
/
SHOW ERRORS




------------------------------------------------

SET SERVEROUTPUT ON
SET TIMING ON
SET FEEDBACK OFF

EXEC pack.calc(10,'NDS')
EXEC pack.calc(10,'DBMS_SQL')
EXEC pack.calc(100,'NDS')
EXEC pack.calc(100,'DBMS_SQL')
EXEC pack.calc(1000,'NDS')
EXEC pack.calc(1000,'DBMS_SQL')
EXEC pack.calc(10000,'NDS')
EXEC pack.calc(10000,'DBMS_SQL')
EXEC pack.calc(100000,'NDS')
EXEC pack.calc(100000,'DBMS_SQL')




Tom Kyte
March 31, 2006 - 11:18 am UTC

I wrote exactly about this in expert one on one oracle (the premature "death" of dbms_sql)... Oh wait, that was in the original answer :)

...
o You will be executing the same statement many times in the same session.
DBMS_SQL will allow us to parse the statement once and execute it many times
over. Native Dynamic SQL will cause a soft parse with each and every execution.
See the section on Tuning for why this extra parsing is not desirable.
.......


dbms_sql can cache cursors (because you program it). So dbms_sql can "parse once, execute many"

since parsing is hugely hugely expensive - dbms_sql can definitely best native dynamic sql (NDS) if you execute the same statement over and over and over.


Note that in 10g, this "changes" a bit. For example:


create table t ( x int );
alter session set sql_trace=true;
declare
l_cnt number;
begin
for i in 1 .. 100
loop
execute immediate 'select count(*) from dual' into l_cnt;
execute immediate 'select count(*) from dual d' || mod(i,2) into l_cnt;
end loop;
end;
/



In 9i - you would see 100 parses for

select count(*) from dual;

and 50 parses for each of

select count(*) from dual d0;
select count(*) from dual d1;


In 10g however, you'll observe:

select count(*) from dual
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 100 0.00 0.00 0 0 0 0
Fetch 100 0.00 0.00 0 0 0 100
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 201 0.01 0.00 0 0 0 100

one parse for the first, but:

select count(*) from dual d1
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 50 0.00 0.00 0 0 0 0
Execute 50 0.00 0.00 0 0 0 0
Fetch 50 0.00 0.00 0 0 0 50
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 150 0.00 0.00 0 0 0 50
********************************************************************************
select count(*) from dual d0
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 50 0.00 0.00 0 0 0 0
Execute 50 0.00 0.00 0 0 0 0
Fetch 50 0.00 0.00 0 0 0 50
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 150 0.00 0.00 0 0 0 50


still 50 for each of the d1/d0's

Execute immediate in 10g will hold open the last statement it executed (on that line) and if you re-execute the same statement - it doesn't have to parse. That is why the first query was parsed ONE TIME - it didn't change from execute to execute.

However the second query changed each time from "dual d1" to "dual d0" and so on.
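The practical takeaway: keep the statement text constant and pass the changing values as binds, so each EXECUTE IMMEDIATE call site can reuse its cached cursor. A sketch of the difference (nothing assumed beyond dual):

declare
    l_cnt number;
begin
    for i in 1 .. 100
    loop
        -- constant text + bind: this call site can reuse its cached cursor
        execute immediate
            'select count(*) from dual where dummy <> :b'
            into l_cnt using to_char(i);

        -- literal concatenated in: the text changes every iteration,
        -- so this call site soft parses every time
        execute immediate
            'select count(*) from dual where dummy <> ''' || i || ''''
            into l_cnt;
    end loop;
end;
/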

Further questions and back to topic

Marc Blum, April 03, 2006 - 7:52 am UTC

First thank you for the insights regarding the behavioural changes between 9 and 10.

What I don't understand:

You wrote that EXECUTE IMMEDIATE recognizes if the statement to be executed didn't change, and avoids extra parsing. The example loop

for i in 1 .. 100
loop
execute immediate
'select count(*) from dual' into l_cnt;

execute immediate
'select count(*) from dual d' || mod(i,2) into l_cnt;

end loop;

generates this series of statements:

select count(*) from dual
select count(*) from dual d0
select count(*) from dual d1
select count(*) from dual
select count(*) from dual d0
select count(*) from dual d1
select count(*) from dual
select count(*) from dual d0
select count(*) from dual d1
...

So, there are NEVER two identical statements in a row. I checked the raw trace file and found that for "select count(*) from dual" the extra parsing is avoided and the open cursor is re-executed; for the other two this does not happen. I don't see any reason for this.

----------------------------------------

back to my question: our problem is that the ratio of Execution_time(NDS):Execution_time(DBMS_SQL) varies depending on instance and machine, NOT depending on the Oracle version, as we tested only against 10.2.0.2.

On some instances/machines NDS executes faster; on some it is really, really slower, with a ratio of 4:1.

So, do you know about, or have you experienced, platform/hardware/OS dependencies regarding NDS?

Thank you very much in advance
Marc


Tom Kyte
April 03, 2006 - 8:19 am UTC

No, it generated TWO series of statements - each execute immediate call can cache.


there is the series:

select * from dual;
select * from dual;
select * from dual;
....


and then the other, that goes back and forth between d1 and d0


you are missing my point about nds vs dbms_sql. dbms_sql is avoiding parsing; it is going to scale much better than nds in your case IF you can, in real life, avoid parsing.

forget execution time for a moment, look at latching and other resource usage - more important here.

I was too fast,

Marc Blum, April 03, 2006 - 8:02 am UTC

The SQL statements are:

select count(*) from dual
select count(*) from dual d0
select count(*) from dual
select count(*) from dual d1
select count(*) from dual
select count(*) from dual d0
select count(*) from dual
select count(*) from dual d1

But this does not change anything.

Tom Kyte
April 04, 2006 - 9:25 am UTC

there are two execute immediates there - each has its own "cache"


the sequence is:

execute immediate 1 select count(*) from dual;
execute immediate 2 select count(*) from dual d0;
execute immediate 1 select count(*) from dual;
execute immediate 2 select count(*) from dual d1;
execute immediate 1 select count(*) from dual;
execute immediate 2 select count(*) from dual d0;
execute immediate 1 select count(*) from dual;
execute immediate 2 select count(*) from dual d1;
execute immediate 1 select count(*) from dual;
execute immediate 2 select count(*) from dual d0;
execute immediate 1 select count(*) from dual;
execute immediate 2 select count(*) from dual d1;


Marc Blum, April 03, 2006 - 9:09 am UTC

Tom wrote:

each execute immediate call can cache.

Marc answered:

pfff... is this documented behaviour? From looking at the trace file I really don't get any clue about this one.

But back to my problem: I have several instances on 10.2.0.2 on different machines, and the test script IS optimized. But I get very different ratios between execution of NDS and DBMS_SQL. Please, the problem is NOT that there is a difference, but that the degree of difference varies from instance to instance.

Thank you for your patience.


Tom Kyte
April 04, 2006 - 9:34 am UTC

got test case?

dbms_sql will always give YOU the ability to control what cursors are open/closed.

execute immediate will try to cache - but depending on the order of execution, open_cursors, session_cached_cursors and what other cursors the application opens - it will vary in its ability to do so.
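One quick way to see how each approach is parsing on a given machine, without a full tkprof, is to look at the shared pool afterwards (assumes you can query v$sql; the LIKE pattern would be whatever your benchmark statement looks like):

select sql_text, parse_calls, executions
  from v$sql
 where sql_text like 'select (:b1 + :b2)/:b3%';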

Please revisit my posting from March 30

Marc Blum, April 04, 2006 - 10:58 am UTC

always Ora 10.2.0.2
always same parameterization (different dev instances)
varying OS and Hardware
optimized test script (see above)

but a metalink SR is on its way

thank you for your time and interest


Tom Kyte
April 04, 2006 - 7:26 pm UTC

paraphrase - remember I spend like 30 seconds on these here.

I'll reiterate:

with NDS - you are subject to many factors in the caching of cursors: open_cursors, the application, session_cached_cursors, how the data flows through the execute immediate.

with DBMS_SQL you have ABSOLUTE 100% control over the parsing.


that is what this will come down to - or just provide a test case with very very concrete (but small) directions to run with.

I give up

Marc Blum, April 05, 2006 - 3:39 am UTC

sadly, again I experience a hard wall when exchanging posts with you

*sigh*

bye bye, Tom

Tom Kyte
April 05, 2006 - 5:48 pm UTC

not a hard wall - I just need things bundled up compactly. I cannot (will not) re-read large sections of pages to get the gist.


Problem with Execute immediate

Thiyag, July 07, 2006 - 6:20 am UTC

Hi Tom:

I have "Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod" and have the following script to check number of records between two databases using db link(I have put here the scaled down version of script to simulate the problem).
create database link DLNK_SSTIDB_29797 connect to <user> identified by <pwd> using '<tns>'
/
DECLARE
proddblink VARCHAR2 (40) := 'DLNK_SSTIDB_29797';
v_db_version_test NUMBER;
v_db_version_prod NUMBER;
v_table_name VARCHAR2 (40) := 'TJ';
BEGIN
EXECUTE IMMEDIATE 'select count(1) from '
|| v_table_name
INTO v_db_version_test;
/* Get db version of test db */
EXECUTE IMMEDIATE 'select count(1) from '
|| v_table_name
|| '@'
|| proddblink
INTO v_db_version_prod;

IF (v_db_version_test <> v_db_version_prod)
THEN
BEGIN
EXECUTE IMMEDIATE 'drop database link '
|| proddblink;
raise_application_error (-20004, 'Data Version Mismatch.');
END;
END IF;
END;
/

This script is supposed to dynamically drop the dblink if there is an error (the dblink is also created dynamically but is not shown in the code). This script errors out saying:
ORA-02018: database link of same name has an open connection
ORA-06512: at line 20.

My opinion is that after the execution of execute immediate the cursor is closed automatically (implicit cursor). Let me know how I can get around this?

I have the following setting for the cursor in the database:

select name,value from v$parameter where name like '%cursor%'

NAME VALUE
cursor_space_for_time FALSE
session_cached_cursors 0
cursor_sharing EXACT
open_cursors 300

Appreciate your help.

Tom Kyte
July 08, 2006 - 10:45 am UTC

The fact is that database links stay open for the duration of the session unless and until you close them...

alter session close database link <linkname>


so, close it.


(and why do you want to count one's? don't you really want to count records ?? that is count(*) - but probably OK since we turn count(1) into count(*) to make the count(1) not take so long... just looks funny is all)

Still having issue

thiyag, July 11, 2006 - 2:19 am UTC

Tom: I modified the script as per your suggestion and I force closed the database link. But it does not allow me to close the link and errors out "ORA-02080: database link is in use ORA-06512: at line 20". (I changed the count(1) to count(*) to look more logical :))

DECLARE
proddblink VARCHAR2 (40) := 'DLNK_SSTIDB_29797';
v_db_version_test NUMBER;
v_db_version_prod NUMBER;
v_table_name VARCHAR2 (40) := 'TJ';
BEGIN
EXECUTE IMMEDIATE 'select count(*) from '
|| v_table_name
INTO v_db_version_test;
/* Get db version of test db */
EXECUTE IMMEDIATE 'select count(*) from '
|| v_table_name
|| '@'
|| proddblink
INTO v_db_version_prod;

IF (v_db_version_test <> v_db_version_prod)
THEN
BEGIN
EXECUTE IMMEDIATE 'alter session close database link '
|| proddblink;
EXECUTE IMMEDIATE 'drop database link '
|| proddblink;
raise_application_error (-20004, 'Data Version Mismatch.');
END;
END IF;
END;
/

Tom Kyte
July 11, 2006 - 7:50 pm UTC

might need two blocks - your block of code is "using it", might still be open in that block - didn't really test it out. break into two blocks and see what you see with that..



One more post note

thiyag, July 11, 2006 - 2:23 am UTC

I have this same script working in 9.2.0.4 and started facing the issue when we moved to 10g. Did the behaviour change due to the version upgrade, or is there some other parameter setting which makes this difference in behaviour? Just curious.

two blocks works without the alter session..

Thiyag, July 12, 2006 - 2:12 am UTC

When split into two blocks it works even without the "alter session close database link", as shown below. But the problem is I want the db link to be dropped only when there is an error. Two different blocks will force me to drop the link irrespective of the exception. (I have these blocks running as sqlplus scripts, and whenever there is an error I drop the db link and exit the sqlplus script using "whenever sqlerror exit -1".)

DECLARE
proddblink VARCHAR2 (40) := 'DLNK_SSTIDB_29797';
v_db_version_test NUMBER;
v_db_version_prod NUMBER;
v_table_name VARCHAR2 (40) := 'TJ';
BEGIN
EXECUTE IMMEDIATE 'select count(*) from '
|| v_table_name
INTO v_db_version_test;
/* Get db version of test db */
EXECUTE IMMEDIATE 'select count(*) from '
|| v_table_name
|| '@'
|| proddblink
INTO v_db_version_prod;
END;

PL/SQL procedure successfully completed

BEGIN
EXECUTE IMMEDIATE 'drop database link '||'DLNK_SSTIDB_29797';
END;

PL/SQL procedure successfully completed

Tom Kyte
July 12, 2006 - 4:19 pm UTC

no they won't.

variable x number
declare
    ...
begin
    :x := 0;
    execute immediate ....
exception
    when whatever then
        :x := 1;
end;
/

exec if :x = 1 then execute immediate '.....'; end if;
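Putting that together with the original script, the shape would be something like this (a sketch only; link and table names as in your post, "whenever sqlerror" handling left out):

variable x number

declare
    proddblink   varchar2(40) := 'DLNK_SSTIDB_29797';
    v_table_name varchar2(40) := 'TJ';
    v_cnt_test   number;
    v_cnt_prod   number;
begin
    :x := 0;
    execute immediate 'select count(*) from ' || v_table_name
        into v_cnt_test;
    execute immediate 'select count(*) from ' || v_table_name || '@' || proddblink
        into v_cnt_prod;
    if v_cnt_test <> v_cnt_prod then
        :x := 1;   -- remember that the drop + error must happen in the next block
    end if;
end;
/

begin
    if :x = 1 then
        execute immediate 'drop database link DLNK_SSTIDB_29797';
        raise_application_error( -20004, 'Data Version Mismatch.' );
    end if;
end;
/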



Does Oracle 9i/10g have a change of behaviour?

Thiyag, July 13, 2006 - 12:09 am UTC

Thanks Tom for your suggestion. Yeah, that will help me to handle it. I have this same code in 9i which had been working fine all the while, and I started facing this problem when we moved to 10g. I wanted to find out if there is a change in behaviour between the two versions, or if there is something else (how to find this?) which might be the issue.

drop user in execute immediate

Ana, August 02, 2006 - 10:29 am UTC

I am trying to automate the recreation of a development database from production data using export/import. Below is the part where I drop all users before importing data into development.

#!/sbin/sh
ORACLE_HOME=/usr/oracle; export ORACLE_HOME
ORACLE_SID=testdb; export ORACLE_SID 

$ORACLE_HOME/bin/sqlplus /nolog << EOF > /usr/local/oracle/${ORACLE_SID}/recreate-database.log
connect / as sysdba
set serveroutput on
begin
for x in ( select username from dba_users where username not in ( 'SYS', 'SYSTEM','OUTLN','DBSNMP' ))
loop
execute immediate 'drop user x.username cascade';
end loop;
end;
/
exit
EOF

But it comes out with something like:

SQL> SQL> SQL> Connected.
SQL> SQL> 2 3 4 5 6 7 8 9 User: PERFSTAT
drop user PERFSTAT
begin
*
ERROR at line 1:
ORA-01936: cannot specify owner when creating users or roles
ORA-06512: at line 6

A "drop user <username> cascade" from command line works fine. But I couldn't figure out why this ORA-01936 when doing the same from an EXECUTE IMMEDIATE. I would appreciate if you can let me know what I am missing. 

Thanks
Anandhi

 

Tom Kyte
August 02, 2006 - 12:09 pm UTC

why not just restore your backups from production? It'd be a great way to know absolutely that you "can"

drop user in execute immediate

Ana, August 04, 2006 - 9:56 am UTC

I thought I replied to that 2 days ago, but don't see it posted yet...maybe it was me...anyway...

Export is our method of backup on our less critical databases. And by doing an import into dev, I will actually be testing the production backups.

Also, it was pointed out to me that my execute immediate statement was wrong. It should actually have been:

execute immediate 'drop user '||x.username||' cascade';

as x.username is a variable

Thanks a lot for your time.

Anandhi

Tom Kyte
August 04, 2006 - 12:12 pm UTC

export is a poor choice for backups. Ask around if anyone has ever had a dmp file not work for some reason or another. Might just as well not do it if you are not going to do it right (there is only one way to backup a database - that is to back it up. exp is a logical data copy - good for moving data from A to B, extremely poor for backing up A)


unless and until you post the actual script you use, not going to comment.

Other than to say you are NOT doing backups of these "less critical databases" at all - never have been - but should be.

(i'd be interested in the exact command you are using to do these LOGICAL COPIES using exp - odds are it is not a consistent export OR you are using sysdba to export, which definitely means it is not a consistent export regardless of any other command line options - which means it really is true garbage in the dmp file)

When to use dynamic sql

Lise, October 02, 2009 - 5:10 am UTC

Hi,
I understand the differences between NDS and DBMS_SQL, and when they can be useful.
However, say that I know all the SQL criteria I need to code for and this results in say 500 lines of code.

I could also code this in one simple dynamic sql statement by altering the WHERE clause.

So - do you code lots of lines of code or do you use dynamic sql?
Tom Kyte
October 07, 2009 - 3:20 pm UTC

I use static sql as a rule, dynamic sql is usually used only for utilities (like dump_csv) or a search screen where I build the where clause based on the inputs.

dynamic sql is the path LAST taken. I'd like to see the hyperbolic example whereby 500 lines of code is reduced to a small page.
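For the search-screen case, a minimal sketch of the pattern (assumes the scott emp table; note that binds in native dynamic SQL are positional, so each occurrence of a placeholder needs its own entry in the USING list -- with many optional criteria the application-context trick mentioned in the original answer scales better):

create or replace function search_emp(
    p_ename  in varchar2 default null,
    p_deptno in number   default null )
    return sys_refcursor
as
    l_cur sys_refcursor;
begin
    -- constant statement text: criteria that were not supplied collapse
    -- to ":bind is null" and do not filter anything out
    open l_cur for
        'select empno, ename, deptno
           from emp
          where ( :ename  is null or ename like :ename )
            and ( :deptno is null or deptno = :deptno )'
        using p_ename, p_ename, p_deptno, p_deptno;
    return l_cur;
end;
/

-- e.g. from SQL*Plus:
--   variable rc refcursor
--   exec :rc := search_emp( p_deptno => 10 )
--   print rc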

Thank you

Lise, October 08, 2009 - 9:20 am UTC

I was hoping you would say that.
Thanks!

dynamic procedure

Raghav, October 15, 2009 - 6:17 am UTC

Hi Tom

As you said, "use static sql as a rule; dynamic sql is usually used only for utilities (like dump_csv) or a search screen where I build the where clause based on the inputs".
Can we also accept the parameters for a procedure dynamically?

See the below scenario.

I am trying to create and execute the procedure through another procedure.

create table temp
(c1 varchar2(20),
c2 varchar2(20),
c3 number
);

create or replace procedure mainproc (p1 in varchar2)
is
begin
procstr := 'create procedure '||p1||'.proc1(b1 in varchar2, b2 in varchar2, b3 in number) is '
||' select * from temp where c1 = '||b1
||' and c2 = '||b2
||' and c3 = '||b3;
execute immediate procstr;

exeprocstr := 'BEGIN '||p1|| '.proc1 (:b1,:b2,:b3); END;';
execute immediate (exeprocstr) ;
end;


When I execute the main procedure with
exec mainproc ('scott');
it should execute scott.proc1 (:b1,:b2,:b3) <accepting values for the bind variables>.

Is there any way to accomplish this?

Thanks & regards
Raghav

dynamic procedure

Raghav, October 26, 2009 - 7:49 am UTC

Hi Tom

Please let me know: is there any chance/option of getting dynamic parameters (accepting parameters at run time) as per the scenario given above?

The point to be noted is that if we execute the dynamically created procedure independently, it asks for the parameter values and executes successfully.

It is not working when we call it from the other (mainproc) procedure.

Thanks in advance

Thanks & Regards
Raghav
Tom Kyte
October 26, 2009 - 2:18 pm UTC

In order to run a stored procedure, you need a client application.

the client application can interact with the end user, get inputs, run the procedure and display the outputs.


Now, what is your client application. This is a big page and when you say "given above" I have no idea what you mean.

dynamic procedure

Raghav, October 27, 2009 - 8:51 am UTC

Hi Tom

Thank you for your reply. Sorry for the inconvenience caused. I thought of avoiding duplicate info on the page and said "given above" because the last post was by me. However, I am posting the same query here.

There is no client application and I am trying to get it done through SQL * Plus or pl/sql.

Below is the scenario.

I am trying to create and execute the procedure through another procedure.

create table temp
(c1 varchar2(20),
c2 varchar2(20),
c3 number
);

create or replace procedure mainproc (p1 in varchar2)
is
begin
procstr := 'create procedure '||p1||'.proc1(b1 in varchar2, b2 in varchar2, b3 in number) is '
||' select * from temp where c1 = '||b1
||' and c2 = '||b2
||' and c3 = '||b3;
execute immediate procstr;

exeprocstr := 'BEGIN '||p1|| '.proc1 (:b1,:b2,:b3); END;';
execute immediate (exeprocstr) ;
end;

When I execute the main procedure with
exec mainproc ('scott');
it should execute scott.proc1 (:b1,:b2,:b3) <accepting values for the parameter variables>.

The point to be noted is that if we execute the dynamically created procedure independently, it asks for
the parameter values and executes successfully.

It is not working when we call it from the other (mainproc) procedure.

Is there any way to accomplish this in SQL * Plus or Pl/Sql?

Thanks & Regards
Raghav
Tom Kyte
October 27, 2009 - 12:02 pm UTC

... There is no client application and I am trying to get it done through SQL *
Plus or pl/sql. ...

there is a client application.

the client application is called SQL*Plus.

plsql is a stored procedure language, embedded in the server - with no ability to interact with an end user (assuming no oracle forms).




I will not tell you how to generate that code since that code is so bad - so very very very bad.

What are you trying to do - don't tell me HOW you are trying to do what you are trying to do

Tell me the end goal here - I don't like your approach of creating a procedure here at all. What are your requirements here (do not confuse 'your solution' with 'your requirements' - that is important - forget your solution for a minute, it is the wrong way)

dynamic procedure

Raghav, October 28, 2009 - 7:56 am UTC

Hi Tom

Thanks for your reply. Actually I mean to say there is no other front-end application/tool other than sql * plus, and I was executing the procedure in sql * plus.

The requirement is:

In the first place get the schema name and table name from the user as inputs.

Then, based on the inputs (schema and table), get the primary key columns from the data dictionary for that table and create the sub-procedure with the primary key columns as parameters and also in the where clause.

As different tables will/may have different numbers of primary key columns, I am creating the procedure dynamically and executing it.

Once the sub-procedure gets executed, accepting the parameters (i.e., the primary key values), we can arrive at a particular record and do a DML operation on it.

Hope this will give a clear picture of the requirement. Please let me know if any option available to achieve this.

Thanks & Regards
Raghav
Tom Kyte
October 28, 2009 - 8:48 am UTC

... I was executing the procedure
in sql * plus.
...

that is the client application and you would be reading the sqlplus guide to determine what sort of interaction that client can have with an end user (and you will find it to be very sparse at best, sqlplus is a rather simple command line tool - if you want anything even remotely fancy you will have a slightly more sophisticated client program, probably one you WRITE)


... Then, based on the inputs (schema and table), get the primary key columns from
the data dictionary for that table and create the sub-procedure with the primary
key columns as parameters and also in the where clause. ...

tell me WHY you are doing this, what is your GOAL - why do you think this is a good thing, what are you DOING?




Given what you wrote, I have no idea at all why you feel the need to create a procedure on the fly. I can select a record, I can perform dml on the record all without doing DDL - doing DDL like you are on the fly is a very very very very VERY bad idea.
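For what it's worth, here is a sketch of the "DML without DDL" alternative -- DBMS_SQL fits because the number of primary-key binds is not known until runtime (see the checklist in the original answer). Everything here (set_inactive, the active_yn column, passing the key values as a sys.odcivarchar2list of strings) is an assumption for illustration, not the poster's design:

create or replace procedure set_inactive(
    p_table in varchar2,
    p_keys  in sys.odcivarchar2list )   -- key values, in primary-key column order
as
    l_stmt   varchar2(4000);
    l_cursor integer;
    l_rows   integer;
    i        pls_integer := 0;
begin
    -- in real code, validate p_table against the data dictionary first
    l_stmt := 'update ' || upper( p_table ) || ' set active_yn = ''N'' where 1 = 1';

    -- one predicate (and one bind) per primary-key column
    for c in ( select cc.column_name
                 from user_constraints  uc,
                      user_cons_columns cc
                where uc.constraint_type = 'P'
                  and uc.table_name      = upper( p_table )
                  and cc.constraint_name = uc.constraint_name
                order by cc.position )
    loop
        i := i + 1;
        l_stmt := l_stmt || ' and ' || c.column_name || ' = :b' || i;
    end loop;

    l_cursor := dbms_sql.open_cursor;
    dbms_sql.parse( l_cursor, l_stmt, dbms_sql.native );
    for j in 1 .. i
    loop
        dbms_sql.bind_variable( l_cursor, ':b' || j, p_keys( j ) );
    end loop;
    l_rows := dbms_sql.execute( l_cursor );
    dbms_sql.close_cursor( l_cursor );
end;
/

-- e.g. exec set_inactive( 'SOME_TABLE', sys.odcivarchar2list( '123' ) )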

dynamic procedure

Raghav, October 28, 2009 - 10:09 am UTC

Hi Tom

Thank you once again for your reply.

Supposing there is a common column "Active_YN" which has to be updated to 'N' for a particular record in a given table (basically to mark the record as inactive).

I agree with what you say. If we know the key columns of a given table, then we can accept the values for them and update that record. And, you know, the number of primary key columns differs from table to table and is not uniform. So every time the user has to write a separate DML query.

Or, the other way, we can create different procedures - one procedure per table - and call that particular procedure, pass the parameters and update the record. Suppose there are 100 tables; that means we will have 100 procedures (maybe grouped in a package).

If we do it in a dynamic way, we will have only 2 procedures - the main one to create the sub, and the sub-procedure to update a particular record in a given table.

Here we are sure that the column names are correct as they are picked from the dictionary tables, and the user will be executing the same procedure for such an update of any given table.

Your views and comments please.

Thanks & Regards
Raghav

Tom Kyte
October 28, 2009 - 10:44 am UTC

end users do not write code, application developers do.

and application developers should be generating transactional api's that do entire transactions - not an update by a primary key of a single record, not unless that is the entire transaction.

and it would not be something you would want to prompt anyone for 100 times either.

No, I do not agree with this approach, it doesn't seem to do anything for you really.


you wrote:

... If we do it in a dynamic way, we will have only 2 procedures - the main one to create
the sub, and the sub-procedure to update a particular record in a given table.

...


ummm, if your main procedure CREATES A PROCEDURE, how is it that you'll have only two procedures?!?!?!?

dbms_sql with ORA-00936

deba, November 11, 2009 - 7:30 am UTC

Hi Tom,

My database is 10.2.0.4, OS is solaris 10.5. My objective is to handle unicode data dynamically. 
My database character set is WE8ISO8859P1 and national character set is AL16UTF16.

I have one table which was created using following staement
create table deba ( name nvarchar2(2000) );
insert into deba values ('Àðò¸ì Ðîäèí');
commit;

1) I tried to use "execute immediate" with an nvarchar2 variable. It does not work. Does "execute immediate" support a variable of nvarchar2?

2) I am trying to use dbms_sql as it supports national character set data. If you see the following, you can see that the 1st pl/sql unit works but not the 2nd one. I am not sure why. Could you please let me know why?
   
SQL> select * from deba;

NAME
--------------------------------------------------------------------------------
Àðò¸ì Ðîäèí

SQL> DECLARE  
  2    stmt_str nvarchar2(2000);
  3    cur_hdl number;
  4    rows_processed number;
  5    lname nvarchar2(2000);
  6    d number;
  7  BEGIN
  8  cur_hdl := dbms_sql.open_cursor; -- open cursor 
  9  stmt_str := 'INSERT INTO DEBA VALUES (''Àðò¸ì Ðîäèí'')';
 10  dbms_sql.parse(cur_hdl, stmt_str, dbms_sql.native); 
 11  
 12  -- describe defines 
 13  --dbms_sql.define_column(cur_hdl, 1, lname, 2000); 
 14  
 15  rows_processed := dbms_sql.execute(cur_hdl); -- execute 
 16  
 17  /*LOOP
 18      d := dbms_sql.fetch_rows(cur_hdl);
 19      dbms_sql.column_value(cur_hdl, 1, lname);
 20   dbms_output.put_line(lname);
 21      exit ;--when d != 10;
 22  END LOOP;*/
 23    
 24  dbms_sql.close_cursor(cur_hdl); -- close cursor
 25  END;
 26  /

PL/SQL procedure successfully completed.                -->>>>> EXECUTING FINE WITH UNICODE DATA

SQL> select * from deba;

NAME
--------------------------------------------------------------------------------
Àðò¸ì Ðîäèí
Àðò¸ì Ðîäèí

SQL> rollback;

Rollback complete.

SQL> select * from deba;

NAME
--------------------------------------------------------------------------------
Àðò¸ì Ðîäèí

SQL>    

Now I am changing the above with one select statement which is "SELECT name FROM deba WHERE where name like '%ðò%'". Please see below :

SQL> select * from deba;

NAME
--------------------------------------------------------------------------------
Àðò¸ì Ðîäèí

SQL> DECLARE  
  2    stmt_str nvarchar2(2000);
  3    cur_hdl number;
  4    rows_processed number;
  5    lname nvarchar2(2000);
  6    d number;
  7  BEGIN
  8  cur_hdl := dbms_sql.open_cursor; -- open cursor 
  9  stmt_str := 'SELECT name FROM deba WHERE where name like ''%ðò%''';
 10  dbms_sql.parse(cur_hdl, stmt_str, dbms_sql.native); 
 11  
 12  -- describe defines 
 13  --dbms_sql.define_column(cur_hdl, 1, lname, 2000); 
 14  
 15  rows_processed := dbms_sql.execute(cur_hdl); -- execute 
 16  
 17  /*LOOP
 18      d := dbms_sql.fetch_rows(cur_hdl);
 19      dbms_sql.column_value(cur_hdl, 1, lname);
 20   dbms_output.put_line(lname);
 21      exit ;--when d != 10;
 22  END LOOP;*/
 23    
 24  dbms_sql.close_cursor(cur_hdl); -- close cursor
 25  END;
 26  /
DECLARE
*
ERROR at line 1:
ORA-00936: missing expression
ORA-06512: at "SYS.DBMS_SYS_SQL", line 909
ORA-06512: at "SYS.DBMS_SQL", line 39
ORA-06512: at line 10


SQL> 

I have just changed the sql, but it is unable to parse. Could you please let me know why?

Thanks
Deba
   
   

dbms_sql with ORA-00936

deba, November 17, 2009 - 5:43 am UTC

Hi Tom,

If you have time, could you please tell me why I am getting the error in the post above?

Thanks
Deba

utl_file

manju, November 12, 2010 - 4:55 am UTC

Hi Tom,
this is the code for reading data from a flat file using the utl_file package. The procedure is created, but at execution time it shows an error like "invalid operation". Please give a response.

the abc.dat looks like this:
base number memo interest reserve amount country code currency system name
1234 10.5 15.5 'IND' 'RS' 'kast'
1235 10.6 15.6 'US' 'USD' 'kas1'
2345 10. 15.0 ' EU' 'EUR' 'cls'



create or replace procedure utl_file_read(
p_filedir in varchar2,
p_filename in varchar2,
p_totalrowsinserted in out number)
is
v_filehandle utl_file.file_type;
--input line
v_newline varchar2(100);
v_base number memo_interest. base number%type;
v_memo interest memo_interest. memo interest%type;
v_reserve amount memo_interest . Reserve amount%type;
v_country code memo_interest. Country code%type;
v_currency memo_interest. currency%type;
v_system name memo_interest. system name%type;
POS1 NUMBER(2);
POS2 NUMBER(2);
POS3 NUMBER(2);
POS4 NUMBER(2);
POS5 NUMBER(2);
BEGIN
v_filehandle:=utl_file. fopen(p_filedir,p_filename,'r');
p_totalrowsinserted :=0;
LOOP
BEGIN
FILE.GET_LINE(v_filehandle,v_newline );
EXCEPTION
WHEN NO_DATA_FOUND THEN
EXIT;
END;
POS1 := INSTR(v_newline ,',',1,1);
POS2:= INSTR(v_newline ,',',1,2);
POS3:= INSTR(v_newline ,',',1,3);
POS4:= INSTR(v_newline ,',',1,4);
POS5:= INSTR(v_newline ,',',1,5 );

v_base number := SUBSTR(v_newline ,1,POS1-1);
v_memo interest := SUBSTR(v_newline,POS1+1,POS2 - POS1 -1);
v_reserve amount:=SUBSTR(v_newline,pos2+1,pos3-pos2-1);
v_country code:=SUBSTR(v_newline,pos3+1,pos4-pos3-1);
v_currency:=SUBSTR(v_newline ,pos4+1,pos5-pos4-1);
v_system name:= RTRIM(SUBSTR(v_newline,pos5+1);
insert into MEMO_INTEREST(base number,memo interest,reserve amount,country code,currency,system name)values(v_base number,v_memo interest,v_reserve amount,v_country code,v_currency,v_system name);
p_totalrowsinserted :=p_totalrowsinserted +1;
end loop;
utl_file. fclose(v_filehandle);
commit;
exception
when utl_file. invalid_operation then
utl_file. fclose(v_filehandle);
raise_application_error(-20051,'memo_interest:invalid operation');
when utl_file.invalid_filehandle then
utl_file.fclose(v_filehandle);
raise_application_error(-20052,'memo_interest:invalid filehandle');
when utl_file.read_error then
utl_file.fclose(v_filehandle);
raise_application_error(-20053,'memo_interest:read error');

when others then
utl_file.fclose(v_filehandle);
raise;
end utl_file_read;

Tom Kyte
November 12, 2010 - 9:00 am UTC

I'm not really here to debug your code.

You don't show us what line of code fails on your system, you have a LOT of code in your example that quite simply "isn't relevant" to the problem (meaning you haven't whittled it down to the bare minimum yet - to try to resolve this yourself)


Your code doesn't compile - FILE.GET_LINE()???? indicates to me that your code differs significantly from this code.

code works fine for me, IF the file exists:
ops$tkyte%ORA11GR2> create or replace procedure utl_file_read
  2  ( p_filedir in varchar2,
  3    p_filename in varchar2,
  4    p_totalrowsinserted in out number
  5  )
  6  is
  7      v_filehandle utl_file.file_type;
  8      v_newline varchar2(100);
  9  BEGIN
 10      v_filehandle:=utl_file.fopen(p_filedir,p_filename,'r');
 11      LOOP
 12      BEGIN
 13          utl_FILE.GET_LINE(v_filehandle,v_newline );
 14      EXCEPTION
 15          WHEN NO_DATA_FOUND THEN
 16          EXIT;
 17      END;
 18                  dbms_output.put_line( v_newline );
 19      end loop;
 20      utl_file. fclose(v_filehandle);
 21  end utl_file_read;
 22  /

Procedure created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace directory test as '/tmp';

Directory created.

ops$tkyte%ORA11GR2> host echo hello > /tmp/test.dat

ops$tkyte%ORA11GR2> host echo world >> /tmp/test.dat

ops$tkyte%ORA11GR2> variable n number
ops$tkyte%ORA11GR2> exec utl_file_read( 'TEST', 'test.dat', :n );
hello
world

PL/SQL procedure successfully completed.


if the file doesn't exist, I see what you probably see:

ops$tkyte%ORA11GR2> !rm /tmp/test.dat

ops$tkyte%ORA11GR2> exec utl_file_read( 'TEST', 'test.dat', :n );
BEGIN utl_file_read( 'TEST', 'test.dat', :n ); END;

*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at "OPS$TKYTE.UTL_FILE_READ", line 10
ORA-06512: at line 1




so -

a) prove to us the file exists
b) prove to us it is in the right directory
c) prove to us the directory is ON THE DATABASE SERVER (not your pc)
d) prove to us oracle can access it (the oracle OS account, not your account)



why don't you just use an external table - it would be so so so much easier.
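A sketch of what the external-table version might look like (column names and types are guesses from the description of abc.dat; the TEST directory is the one from the demo above):

create table memo_interest_ext (
    base_number    number,
    memo_interest  number,
    reserve_amount number,
    country_code   varchar2(10),
    currency       varchar2(10),
    system_name    varchar2(30)
)
organization external (
    type oracle_loader
    default directory test
    access parameters (
        records delimited by newline
        fields terminated by ','
        missing field values are null
    )
    location ( 'abc.dat' )
)
reject limit unlimited;

-- the "load" is then just SQL -- no utl_file, no substr/instr parsing:
insert into memo_interest
    select * from memo_interest_ext;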
