Handling errors by inserting in a table

Question and Answer

Tom Kyte

Thanks for the question, Serge.

Asked: October 25, 2004 - 9:40 am UTC

Last updated: September 13, 2005 - 11:44 am UTC

Version: 9.2.0.5


You Asked

Hi Tom,

First of all, I apologize for asking more than one question, but it is very hard to get access to the "ask a question" link, so I don't know when the next open slot for that link will come up.

1. Many discussions about handling exceptions in exception blocks seem, in one way or another, to share a common approach of auditing/logging the error message into a table.

Something like the following pseudocode:
--------------------------------------------------
pkg1.proc1:

begin

-- code

-- code

-- optionally call pkg2.log_proc to log the successful run

exception

-- optionally do something
-- call pkg2.log_proc to log the exception

end;


pkg2.log_proc:

begin
-- some code
insert "error message" into log_table;
-- some other code

exception

-- what do we do if something goes wrong inside log_proc? What if, for some reason, log_table is unavailable?

end;
--------------------------------------------------
Please give us the right, professional approach to a problem like this.


2. The problem of ORA-04068: existing state of packages has been discarded.

After so many versions of Oracle (I'm referring to 8i onwards), it seems we still have the ugly problem of packages becoming invalid at runtime. Although the Oracle documentation claims that automatic revalidation takes place whenever the package is called, that does not always happen, and the workaround offered everywhere (on the Oracle Discussion Forums and Metalink) is to reinitialize the session/application -- in other words, "try one more time". It is definitely NOT A SOLUTION to ask the Java team to try calling a package again because of the error mentioned above.

Here are (at least) 2 examples:

A.
create table t3 (col1 number);

Session 1: Create the package and body for package p_pack:
==========================================================
create or replace package p_pack as
p_var varchar2(1);
procedure p;
procedure p2;
end p_pack;
/
create or replace package body p_pack as
procedure p2
is
begin
EXECUTE IMMEDIATE 'drop table t3';
EXECUTE IMMEDIATE 'create table t3 (col1 number)';
end;

procedure p is
begin
p_var := 1;
p2;
insert into t3 values (1); --any DML operation !!!
commit;
end;
end p_pack;
/
Execute the package:
SQL> exec p_pack.p ;
PL/SQL procedure successfully completed.

LEAVES THE PACKAGE INVALID!!!


Session 2: Execute the package:
===============================

SQL> exec p_pack.p ;
begin p_pack.p; end;
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package "SCOTT.P_PACK" has been invalidated
ORA-04065: not executed, altered or dropped package "SCOTT.P_PACK"
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1

Now the workaround is to tell the client application: TRY AGAIN !!!
SQL> exec p_pack.p ;
PL/SQL procedure successfully completed.
SQL>

B. If, for this particular case of DDL invalidating the package, there is a workaround -- perform that DML operation in procedure p via EXECUTE IMMEDIATE -- here is another example where I don't have a workaround:

truncating a partition of a table that is referred to in a package leaves the package invalid again.

My point here is that we don't need workarounds for a chronic problem that seems to stem from a conceptually (or architecturally) wrong approach in Oracle. What we really need is a GLOBAL SOLUTION to this problem.

Thank you in advance for your opinion.

and Tom said...

1) i hate error handling like that -- an exception block MUST be followed by a RAISE if it is in fact an error (else it is an exception like no_data_found which may or may not be an error)

but if the error routine itself gets an error, you should DEFINITELY not have an exception block -- just let it propagate back up to the client.


exception blocks should only be there when you EXPECT an error, not for "just any error"
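
As an illustration only (the log table, its columns and the procedure names here are invented; the autonomous transaction is simply a common way to let the log row survive the caller's rollback, not something mandated above), the "log, then re-raise" shape could look like this:

create table error_log
( logged_at   timestamp default systimestamp,
  err_code    number,
  err_msg     varchar2(4000),
  call_stack  varchar2(4000)
);

create or replace procedure log_error( p_code in number, p_msg in varchar2 )
as
    pragma autonomous_transaction;  -- the log row commits even if the caller rolls back
begin
    insert into error_log ( err_code, err_msg, call_stack )
    values ( p_code, p_msg, dbms_utility.format_call_stack );
    commit;
    -- deliberately NO exception handler here: if logging itself fails,
    -- the error propagates straight back to the client
end;
/

create or replace procedure proc1
as
begin
    null;  -- real work goes here
exception
    when others then
        log_error( sqlcode, sqlerrm );
        raise;  -- ALWAYS re-raise; the client must still see the error
end;
/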

2) http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:286816015990#25644572543759

the documentation is not wrong at all -- what is wrong is

a) you started running code
b) that code has global variables
c) you set those variables -- YOU BECAME DEPENDENT ON THEM
d) someone in another session said "hey, let's take this production system and erase its memory" -- they recompiled your code
e) the next time you went to call that package -- the one that has had its MEMORY WIPED OUT -- we graciously told you that fact.


What I'm constantly amazed by is people compiling code into a running production system.


do YOU SEE in your example how the package state WAS TOTALLY WIPED OUT?

do you see that you had a variable whose value just went "poof"?


I'm amazed that people are amazed by that.


Think about it -- try to replace a Windows DLL that is being used. Take your web browser and reinstall it while it is running. Think about what happens there.



Comments

Oracle really needs to do something about this

Christo Kutrovsky, October 25, 2004 - 2:39 pm UTC

Tom,

I know you are always very protective of Oracle, and that a lot of the bashing out there comes from lack of knowledge. But sometimes, just sometimes, some things have a point. And Oracle might be doing it the "right" way, but it is far from the most convenient. Like in this case.

I mean, what's the point of using Oracle? To make things easier. Then why doesn't Oracle listen to what's causing a lot of trouble for its users?

True, when you use global variables your package state is invalidated and thus you need to be aware that this has happened.

But Oracle needs to provide some way of doing rolling upgrades of packages in a running production system. Something similar to what happens in the Unix world when you replace a library that is currently in use: as long as a process has the file open, it still exists.

In a similar way, in Oracle only new sessions would use the new package. Most 24/7 systems are running web servers, which can automatically reconnect. One option could be a "force disconnect" of all sessions that hold a package state for the recompiled package and have been idle for, say, 5+ seconds. The web servers would automatically reconnect. True, not the very best way, but it is a way that does not shut down the whole system for a "package upgrade" or "bug fix".

A major problem we have with a 24/7 OLTP system that stores historical data in a partitioned table through a package is that we can't drop partitions of that table, because doing so invalidates a lot of packages and clients get the "package state discarded" error, which is not really acceptable. And we can't shut down the system just to "drop a partition"... The workaround we're forced to use is to truncate the partitions (which does not invalidate the package in 9i -- note to the original question), which is not the most elegant way of doing this. Oracle should have purity analysis of packages: if a package does not refer to specific partitions of a table, it should not be invalidated by a dropped partition.

Also, why is the memory "WIPED OUT" when only the BODY of the package is invalidated? The specification is still there... it is still the same?

As far as the original problem goes, I guess a workaround would be to put all queries referring to the T3 table into separate procedures or packages, so that the package with global variables does not get invalidated.

But that's a really ugly workaround, especially if you have a lot of calls against the T3 table.



Tom Kyte
October 25, 2004 - 3:23 pm UTC

Christo -- tell me how to patch my OS, patch my running binaries. Come on -- patch my system. Forget Oracle for a minute -- patch my running applications. My .so file is bad, here is the new one.

Think about it.



<quote>
I mean, what's the point of using Oracle? To make things easier. Then why doesn't Oracle
listen to what's causing a lot of trouble for its users?
</quote>

Here is the skinny -- you have someone (lots of people) running around plopping code into production frequently -- lots, all over the place -- and then being utterly surprised when it "does not work". What happened to change management? What happened to test cycles? It seems our profession is falling into the "you don't need to think about anything -- just throw junk out there" mode.

<quote>
But Oracle needs to provide some way of doing rolling upgrades of packages in
a running production system. Something similar to what happens in the Unix
world when you replace a library that is currently in use: as long as a process
has the file open, it still exists.
</quote>

it doesn't exist -- can you spell "page fault"? If you try to zap in a library that is being opened and closed by applications, well, bummer for you. It'll either not let you overwrite the file (file busy error) or you are setting yourself up for this thing called a "page fault" -- or worse.

If you are building a system where you are going to lobotomize the running code on a recurring basis -- design your applications for it. Don't use globals -- then there is no "state" to be discarded. But remember -- if you use globals, you have very clearly stated, in no uncertain terms: I have a state to maintain.
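
As a sketch only (the names here are invented), "no state" simply means there is nothing at the package level -- values travel as parameters instead:

create or replace package stateless_pkg
as
    -- no package-level variables: nothing for ORA-04068 to discard
    procedure p( p_flag in varchar2 );
end;
/

create or replace package body stateless_pkg
as
    procedure p( p_flag in varchar2 )
    is
    begin
        null;  -- use p_flag right here; the caller supplies it on every call
    end;
end;
/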

Why is the memory wiped out when only the body of the package is invalidated? Globals live there too, you know -- heck, the global you were relying on might not even be there anymore.


and don't even get me started on DDL in a package, ugh -- man oh man.

Sorry -- I'm not "defending" Oracle here -- I am pointing out what I thought was obvious. You have globals, you have decided in your production system to wipe them out -- bamm, there you go.



Ok how would you do it?

Christo Kutrovsky, October 25, 2004 - 4:35 pm UTC

Ok, so how would you do the following tasks:

- upgrade a package with package state on a 24/7 system (after the test cycle is complete, etc.)?
- drop partitions on a table which is referenced by packages with global variables?

Ok, maybe I did not express myself right. I am saying that you can replace an executable (by "rm"-ing it first) that is currently running or is open, with a new one, without any problems (on Linux/Unix). You can do this because an open file still keeps its space. In the same way you can write to a removed DBF file: http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:19013077250095

That's how we patched a custom application at one site. There were no problems at all. But we only replaced single executables, which actually had many personalities (same as Oracle's concept).

<quote>
it doesn't exist -- can you spell "page fault". ...
</quote>

Sure I can! :) Many times. Although we used to call it a "coredump" or a "segmentation fault" ;) And no, it wasn't because we'd updated the application.

Since we used that technique -- not really a technique, but still -- why shouldn't Oracle have something similar? It already has some sort of this, since a package that invalidates itself doesn't crash.

For example, a very nice feature would be an init.ora parameter, session modifiable, which would force all packages with a package state to remain unchanged until "dbms_session.modify_package_state(dbms_session.reinitialize)" is called. That would be neat. It would have to be paired with a way to "force reinitialize" from another session.


<quote>
You have globals, you have decided in your production system to
wipe them out -- bamm, there you go.
</quote>

No, I did not decide to. Oracle had to invalidate the package because:
- I did DDL (ok)
- I dropped a partition (not ok... I *should* be able to drop partitions without invalidating)

And no, I will not go on about DDL in a package, although we have to use it, especially for partition maintenance and truncate. If only truncate weren't DDL.


Tom Kyte
October 25, 2004 - 4:48 pm UTC

<quote>
Ok, maybe I did not express myself right. I am saying that you can replace an
executable (by "rm"-ing it first) that is currently running or is open,
with a new one, without any problems (on Linux/Unix).
</quote>

you are kidding, right? Without any problems? You were kidding, right? Please tell me that. With shared libraries getting replaced -- oh, I see it now: "yes, well, that is what happens when version 1 of our code tries to use a version 2 library. Happens every time the .so is loaded up after someone rm'ed it." You are thinking too simply here. You cannot (especially in something you keep calling "24x7" -- that means correctness and availability are fairly important concepts to you) do that.

Tell me how many systems you have really, truly done that on (and many Unixes will, wisely, not let you do that). Really -- a complex system with shared libraries, lots of users, stuff happening for real.

upgrades that change code will involve some pause.

<quote>
That's how we patched a custom application on a site. There were no problems at
all. But we only replaced single executables, which actually had many
personalities (same as oracle's concept).
</quote>

you got *lucky* -- very very exceptionally *lucky*.

Hey -- you know, if I back up my database at 3am, it works -- no need for alter tablespace begin backup. Yeah, it works -- sometimes, when you get lucky.



A page fault is not a segmentation fault, although depending on your system -- it could lead to a coredump being produced.


You want to drop in version 2 of your code for new users? We can use schemas to accomplish that easily. Put the code into a new schema, have the application issue "alter session set current_schema=THAT_SCHEMA", change a parameter file. You now have "v2" of your code installed; new connections use v2 of the code (it still scares the $#&@)@ out of me to have v1 of the code still running for some people -- chills running up and down my spine). Old connections use v1 until they die.
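
A rough sketch of that flow (the schema name APP_V2 and the password are invented; the build script is whatever you already use to install your packages):

-- as a DBA: create the new schema, then run your normal build script while
-- connected as APP_V2 and grant execute on the packages to the application user
create user app_v2 identified by some_password;
grant create session, create procedure to app_v2;

-- what the application issues right after it connects, driven by a parameter file:
alter session set current_schema = APP_V2;

-- new sessions now resolve unqualified package names against APP_V2;
-- sessions that connected earlier keep running the copy they started with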


There are ways to do things -- replacing EXISTING RUNNING CODE is "not smart" -- even if you got really lucky once.


You have to invalidate when dropping a partition, however -- DDL has to do that. Your code could have "select * from t partition (one_i_dropped)". If you know you are going to do that -- and you really cannot stand the package being invalidated -- you can

o use dynamic sql, no dependency

or

o separate out this code into a package that doesn't have a state, return a ref cursor, whatever.



If you think about the tools you have, you can construct such a system. Personally, I find the multiple-schemas approach very attractive.

DDL operations

Serge, October 25, 2004 - 5:00 pm UTC

Tom,

Maybe my example was not clear enough, or I don't know, but we are still skipping an important point.
I am not talking about recompiling or reinstalling the package in production. So from my example please ignore the part where SESSION 1 creates the package.

ASSUMING THE PACKAGE IS ALREADY THERE, and nobody and nothing upgrades, changes or (re)compiles the package -- they just RUN it -- the scenario is:

SESSION 1 executes the package. SUCCESSFULLY.

SESSION 2 executes the package. ORA-04068.
SESSION 2 executes the package again. SUCCESSFULLY.

SESSION 3 executes the package. ORA-04068.
SESSION 3 executes the package again. SUCCESSFULLY.

and we can keep going like that.

<quote>
What I'm constantly amazed by is people compiling code into a running production system.
</quote>

We are not compiling the code. We just try to run the package !! That's it.

Now let's forget about global variables; let's take that global variable out of our example.
My opinion is that the package is left invalid BECAUSE of the DDL OPERATION that takes place inside it while the package is running. That is one of the problems.

And I also mentioned "truncate partition" on a table as another cause.

These kinds of operations leave the package invalid while it is running !!! Nobody compiles or deploys or upgrades the package.

And finally, I didn't say the Oracle documentation is wrong. What I meant was that Oracle claims that whenever an object in the shared pool is flagged INVALID, it will take a fresh copy of that object from the right place. That doesn't happen in our example: Sessions 2 and 3 only "take a fresh copy" on the second attempt.

Packages (leaving aside the issue of global variables) should be able to handle DDL operations correctly and, after finishing the work, still remain in a valid state -- or at least other threads/sessions should be able to go and get a valid copy of that package. This is my point.
Thank you.

Tom Kyte
October 25, 2004 - 7:42 pm UTC

you are doing things (DDL) that invalidate the package -- this is no different than compiling the package.

This timeline:


SESSION 1 executes the package. SUCCESSFULLY.

SESSION 2 executes the package. ORA-04068.
SESSION 2 executes the package again. SUCCESSFULLY.

SESSION 3 executes the package. ORA-04068.
SESSION 3 executes the package again. SUCCESSFULLY.

<b>cannot happen (i challenge you to give me a test case).  Only if the timeline is:</b>

SESSION 2 executes the package. 
SESSION 3 executes the package. 

SESSION 2 executes the package. ORA-04068.
SESSION 2 executes the package again. SUCCESSFULLY.

SESSION 3 executes the package. ORA-04068.
SESSION 3 executes the package again. SUCCESSFULLY.


<b>if you do not have a session state, you will NOT get the 4068.  If you do, that indicates YOU HAD A SESSION state, you had run the code, you were totally relying on that state, you threw it out the window</b>


Take out the global and see what happens...  for you see, without the globals, the 4068 isn't a factor (you haven't any state to lose, you see)

ops$tkyte@ORA9IR2> create table t ( x int ) ;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package pkg1
  2  as
  3          global  varchar2(1);
  4          procedure p;
  5  end;
  6  /
 
Package created.
 
ops$tkyte@ORA9IR2> create or replace package pkg2
  2  as
  3          procedure p;
  4  end;
  5  /
 
Package created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package body pkg1
  2  as
  3          procedure p
  4          is
  5          begin
  6                  for x in ( select * from t )
  7                  loop
  8                          null;
  9                  end loop;
 10
 11                  global := 'Y';
 12          end;
 13  end;
 14  /
 
Package body created.
 
ops$tkyte@ORA9IR2> create or replace package body pkg2
  2  as
  3          procedure p
  4          is
  5          begin
  6                  for x in ( select * from t )
  7                  loop
  8                          null;
  9                  end loop;
 10          end;
 11  end;
 12  /
 
Package body created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec pkg1.p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec pkg2.p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> drop table t;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table t ( x int );
 
Table created.
 
ops$tkyte@ORA9IR2> exec pkg1.p
BEGIN pkg1.p; END;
 
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package body "OPS$TKYTE.PKG1" has been
invalidated
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1
 
 
ops$tkyte@ORA9IR2> exec pkg2.p
 
PL/SQL procedure successfully completed.


<b>see pkg2 -- no worries.  pkg1 -- we discarded its state, gonzo</b>



You are compiling the package by performing DDL in a live running system.


Sessions 2 and 3 are not behaving as you indicate -- the only way for them to get that message would be if they had already RUN that code and were ALREADY relying on the state which you just wiped out.



<b>that is my point -- you did compile the code (by doing DDL), sessions 2 and 3 did rely on that state.  Everything is working "normally" here</b>

(and I did mention quite a few things for you to consider -- such as putting the affected SQL into a stateless package, such as native dynamic sql) 

DBMS_SESSION.RESET_PACKAGE

Gary, October 25, 2004 - 6:36 pm UTC

I think it is worth pointing out that if the transaction can cope with simply re-trying the package call after a 4068, then it wasn't REALLY dependent on whatever session variables it had stored in that package.

In that case, you may consider using DBMS_SESSION.RESET_PACKAGE to clear out your session state when appropriate.
This will also ensure that later transactions do not carry the 'residue' of earlier transactions in that session.

Tom Kyte
October 25, 2004 - 7:50 pm UTC

(the funny thing is -- I'll betcha half of the people that get this don't even KNOW whether they are really dependent on that state or not :( they just redo the call and hope for the best -- wondering why things work "strangely" sometimes)

DDL operations in packages

Serge, October 25, 2004 - 9:05 pm UTC

Tom,
I understand the "session state" explanation. I agree: this is how things should be for a session relying on some variables and/or objects in memory when another session wipes them out or causes them to be flagged as invalid.
What I was expecting from Oracle, in the background, is that when an object is flagged as invalid, it would go and get a fresh copy of that object, with automatic revalidation if necessary, so the client session would always get the valid object back on the first attempt, not the second.

Anyway, on Metalink this problem seems to be endless.
But this discussion was very instructive, and I thank you for the clarifications.

Tom Kyte
October 26, 2004 - 7:35 am UTC

you agree, but then you state 100% the opposite???

<quote>
I understand the "session state" explanation. I agree: this is how things should be
for a session relying on some variables
</quote>

but you then say:

<quote>
so the client session would always get the valid object back
on the first attempt, not the second.
</quote>


Well, which is it?


The only time you get "so sorry, you wiped out your package state" is, well, when you have a package state. Therefore, you had a package state and thus your first point ("I agree") applies. But then you say "even though I agree, I want you to NOT TELL me my state was wiped out".



The problem is not "endless" -- it is quite clear.

a) you have a package
b) this package has a state
c) you used this package, established a state
d) you do something to cause this package to recompile. State goes poof
e) we nicely tell you "state went poof, now you decide what to do"



If this is undesirable -- if you DON'T REALLY NEED THIS STATE -- don't use any global variables. (clean ending to a long story)

or

If this is an application upgrade -- use multiple schemas (another ending)

or

If you really do need the state and you plan on doing DDL -- use native dynamic sql for that limited piece of affected code (yet another ending)



But your paragraph above where you "agree" yet "disagree" seems to indicate we haven't arrived at understanding yet.


Oracle does "get a fresh copy" IF and ONLY IF getting a fresh copy doesn't mean not telling you something vitally important to the correct operation of your code.

Oracle will NOT get "a fresh copy" if doing so means you wouldn't be told "oh, by the way -- the variables you set last time, well, they've gone 'poof'".



i have modified your Test by adding DBMS_SESSION.RESET_PACKAGE

pasko, October 26, 2004 - 5:20 am UTC

Hi Tom,
Thanks for a Great Thread!

I note that I don't get the invalid package state error when I add DBMS_SESSION.RESET_PACKAGE in the initialization section... are there any implications of doing this in all my packages?


create or replace package body pkg1
as
procedure p
is
begin
for x in ( select * from t )
loop
null;
end loop;

global := 'X';
end;
begin
DBMS_SESSION.RESET_PACKAGE;
end;
/

Tom Kyte
October 26, 2004 - 7:55 am UTC

you want to do that ONCE after you grab a connection from your connection pool or whatever you are using.

It would be an utterly terrible idea to put it in the elaboration code (and it would not fix the problem -- elaboration code runs ONCE)


dbms_session.reset_package is expensive (time consuming). do it ONCE when you grab a connection if at all.


and remember, it'll wipe out the package state of EVERY SINGLE PACKAGE. Are there implications? ONLY YOU can tell us the answer to that. One might ask "well, why do you have globals?" - if you do not need them.
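
As a concrete sketch, the call then belongs in whatever runs once per logical check-out from the pool (nothing here is specific to any particular pool implementation):

begin
    -- issued by the application once, right after it grabs a connection;
    -- this discards the state of EVERY package in the session
    dbms_session.reset_package;
end;
/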

let's discuss these one by one.

Christo Kutrovsky, October 26, 2004 - 8:22 am UTC

Going back to my partition problems and package states.

If you have a package with global variables accessing a partitioned table, you cannot drop partitions from that table, since you risk your clients getting the package state error.

Even if you hide the partitioned table behind a view, it still invalidates your package.

I understand why the package is invalidated -- I could be accessing specific partitions -- but I am not, and more often than not I won't be. And even if I needed to access a specific partition, I would use partition ranges, not partition names.

Any workarounds for this? Using a second package without global variables to access the table is not a viable workaround.

What I was suggesting is that Oracle use purity checks -- the same way it uses purity checks to see whether there is a package state to maintain -- and not invalidate packages (when dropping partitions) that access a partitioned table without referencing specific partitions.



Tom Kyte
October 26, 2004 - 9:01 am UTC

<quote>
If you have a package with global variables accessing a partitioned table, you cannot
drop partitions from that table, since you risk your clients getting the package state error.
</quote>
                                                                                
correct and I think I've beaten that dead horse to death.  I've told you
exactly how to avoid that if you anticipate doing that
                                                                                
a) move that particular query OUT of that package into a stateless package.
b) use native dynamic sql.
                                                                                
                                                                                
how can you say "using a second package is not viable".  how can you say
that?  it certainly is, will be and -- well, it is.  "not viable" means not
doable.  it is totally *doable*.
                                                                                
It doesn't matter what you -- a specific individual -- would or would not
do.  It is frankly a matter of "what can be done".



ops$tkyte@ORA9IR2> CREATE TABLE t
  2  (
  3    temp_date  date,
  4    x          int,
  5    y int
  6  )
  7  PARTITION BY RANGE (temp_date)
  8  (
  9    PARTITION part1 VALUES LESS THAN (to_date('13-mar-2003','dd-mon-yyyy')) tablespace users,
 10    PARTITION part2 VALUES LESS THAN (to_date('14-mar-2003','dd-mon-yyyy')) tablespace tools,
 11    PARTITION junk VALUES LESS THAN (MAXVALUE)
 12  )
 13  ;
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package pkg1
  2  as
  3          global varchar2(1);
  4          procedure p;
  5  end;
  6  /
 
Package created.
 
ops$tkyte@ORA9IR2> create or replace package body pkg1
  2  as
  3          procedure p
  4          is
  5          begin
  6                  for x in ( select x from t )
  7                  loop
  8                     null;
  9                  end loop;
 10                  global := 'Y';
 11          end;
 12  end;
 13  /
 
Package body created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package pkg2
  2  as
  3          global varchar2(1);
  4          procedure p;
  5  end;
  6  /
 
Package created.
 
ops$tkyte@ORA9IR2> create or replace package body pkg2
  2  as
  3          procedure p
  4          is
  5                  l_cursor sys_refcursor;
  6                  l_x      number;
  7          begin
  8                  open l_cursor for 'select x from t';
  9                  loop
 10                          fetch l_cursor into l_x;
 11                          exit when l_cursor%notfound;
 12                  end loop;
 13                  close l_cursor;
 14                  global := 'Y';
 15          end;
 16  end;
 17  /
 
Package body created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package protect_me_from_state
  2  as
  3          function foo return sys_refcursor;
  4  end;
  5  /
 
Package created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package body protect_me_from_state
  2  as
  3
  4  function foo return sys_refcursor
  5  is
  6          l_cursor sys_refcursor;
  7  begin
  8          open l_cursor for select x from t;
  9          return l_cursor;
 10  end;
 11
 12  end;
 13  /
 
Package body created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace package pkg3
  2  as
  3          global varchar2(1);
  4          procedure p;
  5  end;
  6  /
 
Package created.
 
ops$tkyte@ORA9IR2> create or replace package body pkg3
  2  as
  3          procedure p
  4          is
  5                  l_cursor sys_refcursor;
  6                  l_x      number;
  7          begin
  8                  l_cursor := protect_me_from_state.foo;
  9                  loop
 10                          fetch l_cursor into l_x;
 11                          exit when l_cursor%notfound;
 12                  end loop;
 13                  close l_cursor;
 14                  global := 'Y';
 15          end;
 16  end;
 17  /
 
Package body created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @invalid
ops$tkyte@ORA9IR2> break on object_type skip 1
ops$tkyte@ORA9IR2> column status format a10
ops$tkyte@ORA9IR2> select object_type, object_name, status
  2  from user_objects
  3  where status = 'INVALID'
  4  order by object_type, object_name
  5  /
 
no rows selected
 
ops$tkyte@ORA9IR2> exec pkg1.p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec pkg2.p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec pkg3.p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> alter table t drop partition junk;
 
Table altered.
 
ops$tkyte@ORA9IR2> @invalid
ops$tkyte@ORA9IR2> break on object_type skip 1
ops$tkyte@ORA9IR2> column status format a10
ops$tkyte@ORA9IR2> select object_type, object_name, status
  2  from user_objects
  3  where status = 'INVALID'
  4  order by object_type, object_name
  5  /
 
OBJECT_TYPE        OBJECT_NAME                    STATUS
------------------ ------------------------------ ----------
PACKAGE BODY       PKG1                           INVALID
                   PROTECT_ME_FROM_STATE          INVALID
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> exec pkg3.p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec pkg2.p
 
PL/SQL procedure successfully completed.
 
ops$tkyte@ORA9IR2> exec pkg1.p
BEGIN pkg1.p; END;
 
*
ERROR at line 1:
ORA-04068: existing state of packages has been discarded
ORA-04061: existing state of package body "OPS$TKYTE.PKG1" has been
invalidated
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 1


Pkg2, Pkg3 -- maintain a state, but are not affected by the drop partition.

The dependency mechanism (quite complex actually when you get down to it, quite sophisticated) tracks at the top level object (the table).


It is like if you create some procedures that access table T and you have been granted:

o select any table
o select on t directly


if you get select on t revoked -- procedure goes invalid.  why?  because there was a check at compile time to see if you could access T, we don't remember the details of "why" (and in fact, why is unclear here - was it select any table or select on t).  So, when you revoke on T, we invalidate so we can check next time "do you have access to T".  

Same with "select any table", revoke that from a user and see what happens (if you are really daring and have a test database -- one you don't care about, grant select any table to public and then revoke it).


 

that's exactly the point

Christo Kutrovsky, October 26, 2004 - 9:58 am UTC

Look at how much extra code you had to add -- not because you logically needed it, but to work around something.

And that's the point: it adds extra complexity when you don't need it. Now I have to think about 2 packages when I only have 1 task.

And what about when people are using the package that is to be recompiled? You can't recompile a package that is being used... so if it is a long-running package, everyone has to wait for whoever is running it to finish before it can be recompiled. It causes tremendous lockups.

And what about triggers? They get invalidated too. There's no workaround for that -- you can't hide the trigger dependency with dynamic SQL.

And if you are to use dynamic SQL, it again adds extra complexity. Can you imagine if all the SQL you wrote in your PL/SQL packages had to be dynamic? How much longer would it take you to code it? It also adds extra parses to the system -- insignificant, but still... they are there!

And how would I explain the cost to the already overworked developers? "Sorry guys, you have to rework all these packages so that we can use partitioning." It just won't happen. Should all developers just put their queries in a separate package, just in case some day the table needs to be partitioned? Even then, the recompiling is an issue.

Serge: I tried truncating a partition, and it does not invalidate the package for me (9.2.0.5). That's my only solution: just leave the partitions empty behind me and eventually drop them in some maintenance window, which will have to involve shutting down all web server connections.

Tom, all my respect to you, but neither of these workarounds is acceptable. To recap:

- Separate package -- can't do that; when the package becomes invalid, all current users executing it have to finish before it can be revalidated.
- Dynamic SQL -- I can't rewrite all my SQL as dynamic. It will become a mess... and it doesn't help with triggers.


Tom Kyte
October 26, 2004 - 10:12 am UTC

Look -- at the end of the day tell me -- what do you want?

a) the package state to go away
b) or not

You decide, you make the call.

I didn't have two packages in all cases. did you read pkg2?

(don't get me started on triggers, but -- do you have triggers that maintain state? whats up with triggers all of a sudden?)


It is what it is, you know what I prefer -- I prefer to know what it does, how it works, what the implementation implies -- and then write my system accordingly. I understand that it works the way it works, not the way I'd personally like it to work, not the way I think it should work. It works the way it does - and we have very viable, easy methods to solve this

It all comes back to two things

o can you really update code in a running system -- not whilst someone is running it, i don't care how lucky you got once upon a time, it is totally not a stable reliable thing to do. an update to a production, running system is something very tricky, i don't care if it is java code, c code, plsql code -- code is code is code is code. But, I do believe I told you how to install "version 2" of your code with multiple schemas. It is an implementation I've used -- for me, I update a DAD (mod_plsql thing) and there we go -- switched over.

o can you do DDL without losing a package state. I believe I've shown you how -- various ways. How many tables exactly do you have that you do this to? It is not a large number, so we are not talking about rewriting your entire system -- don't go there.

I give up, whatever.



As for:

- Separate package -- can't do that; when the package becomes invalid, all
current users executing it have to finish before it can be
revalidated.

umm, you don't need separate packages, but if you choose the static SQL approach (I would favor it), tell me how long you think this code takes to execute:

create or replace package body protect_me_from_state
as
    function foo return sys_refcursor
    is
        l_cursor sys_refcursor;
    begin
        open l_cursor for select x from t;
        return l_cursor;
    end;
end;
/
?


- Dynamic SQL -- I can't rewrite all my SQL as dynamic. It will become a mess
... and it doesn't help with triggers.

I'm not saying rewrite all of your SQL -- I'm saying you have a small set of tables this affects. IN THOSE PACKAGES that for whatever reason maintain a state -- this is your other choice.



But you can sit there and watch the package state go invalid and get mad all you want. I prefer to make things work smoothly. It is up to you.


just to clear the confusion

Christo Kutrovsky, October 26, 2004 - 10:24 am UTC

My concern is not with package states as such, but rather that dropping partitions invalidates packages that do not refer to specific partitions.

At the end of the day, we either use native dynamic SQL or we don't drop partitions. But it's a major disadvantage and a lot of extra work -- it makes using PL/SQL with partitioning not that attractive.



don't use the database poorly and it'll work better

Scott Swank, October 26, 2004 - 10:26 am UTC

What if we all just follow the simple rule that you shouldn't drop a table:

EXECUTE IMMEDIATE 'drop table t3';
EXECUTE IMMEDIATE 'create table t3 (col1 number)';

If you're planning to select from it later in the same connection (much less the same transaction -- lovely).

insert into t3 values (1);

What on earth is anyone thinking, writing the above? Truncate the table if you must, use a global temporary table, do something sensible.

Scott

Swapping Code

Bob B, October 26, 2004 - 11:53 am UTC

Hopefully this is along the same vein as this topic. Say there are 10 users connected to a system using the code in Schema A. New code is developed and compiled into Schema B, which was a replica of Schema A. At this point, the users need to be gradually pushed over to use the new code.

Case 1, no connection pool:
In this case, each new connection sets the application schema to Schema B and uses the new code. Is there a way to alter the sessions of the currently logged-in users to use Schema B? Basically, the instant they commit their current transaction, they should be moved over to Schema B.

Case 2, connection pool:
In this case, does the application schema have to be set each time a user requests a connection from the pool?

Ideally, it would be nice to set it up so that any new db requests use Schema B. Any db requests that were still active during the changeover can continue what they were doing.

Tom Kyte
October 26, 2004 - 11:56 am UTC

not unless you've coded support for that in your application. If the technique was:

a) grab connection from pool
b) look in application parameter settings for current schema
c) if different, then issue alter session set current_schema

then it would migrate them over as it grabbed connections -- after you update a parameter file.
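
As a sketch, steps (b) and (c) could even live in the database itself (the APP_PARAMETERS table, its columns and the procedure name are all invented here):

create or replace procedure set_code_schema
as
    l_schema all_users.username%type;
begin
    -- the target schema comes from our own configuration table, i.e. trusted input
    select param_value
      into l_schema
      from app_parameters
     where param_name = 'CODE_SCHEMA';

    execute immediate 'alter session set current_schema = ' || l_schema;
end;
/

-- the middle tier simply calls set_code_schema each time it checks a connection out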





Do you really?

Vladimir Andreev, October 26, 2004 - 12:11 pm UTC

<quote>
It is an implementation I've used -- for me, I update a DAD (mod_plsql thing) and there we go -- switched over.
</quote>

Well, not always, as can be seen here:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:7913917986287#21367027710149

Gotcha!
:-)
Flado

Tom Kyte
October 26, 2004 - 12:50 pm UTC

hey, i was just zapping a query in an htmldb report -- that doesn't count :)

Updating code in production

Bob B, October 26, 2004 - 12:56 pm UTC

I'm a little confused. Are you saying that in a connection pool environment, if I change the session of a particular connection, that connection's session remains changed when the connection is released back to the pool?

e.g.

Connection 1: Schema A
Connection 2: Schema A

Code change from Schema A to Schema B

User Joe grabs connection 1. Schema is different, alter session to Schema B. Joe finishes.

Connection 1: Schema B
Connection 2: Schema A

User Tom grabs connection 1. No change made to session.

Connection 1: Schema B (in use by Tom)
Connection 2: Schema A

User George grabs connection 2. Schema is different, alter session to Schema B. George finishes. Tom finishes

Connection 1: Schema B
Connection 2: Schema B

If that is the case, wouldn't there be a way to create a stored procedure to go through all connections in the pool and update their session to the new schema?

Tom Kyte
October 26, 2004 - 1:05 pm UTC

a connection pool is a pretty "simple piece of software" actually.

but most likely -- the connection pool "object", "class", "subroutine" whatever you want to call it doesn't issue "alter session set" statements to put things back -- i would seriously doubt it.

but it could definitely be connection pool specific.


A connection pool is a middle tier thing, a stored procedure is a database thing, you'll not have a stored procedure that modifies a middle tier state like that, no.

Reply for Scott -> don't use the database poorly and it'll work better

Serge Floca, October 26, 2004 - 2:43 pm UTC

Scott,

Using DDL such as dropping and recreating a table does not come from poor control or knowledge of the database. You have to understand there are justified REASONS behind choosing some solutions over others. One reason: performance expectations -- high speed is needed when loading/manipulating tens of millions of rows !!!

First, I'm not using a simple 'create table...' in my packages but a CTAS command. Using truncate and insert is indeed a viable solution in some cases, but not in others. I don't have the link handy to the other AskTom thread where Tom explains that there are situations where CTAS is faster than bulk insert. But when it is a performance issue, it matters what you choose !!

Second, a global temporary table is definitely NOT the right choice when we are talking about a middle tier that is not using dedicated connections. In other words, in certain situations global temporary tables are not applicable.

What I'm trying to say is that YES, there are justified situations where you have to use DDL like drop-and-recreate, and none of the alternatives, some of them suggested by you, represent a viable substitute.

Thank you.

Tom Kyte
October 26, 2004 - 3:02 pm UTC

dropping and creating tables makes sense in an ETL process, not anywhere else that I've ever seen in Oracle, period.

we'll have to agree to vehemently disagree on the DDL stuff -- that is a no-no.

I'm confused

Mikito Harakiri, October 26, 2004 - 5:45 pm UTC

There is drop-and-recreate table, there is truncate, and there is delete from table -- 3 choices! Which one is better to use?

Tom Kyte
October 26, 2004 - 6:58 pm UTC

all rows = truncate
some rows = delete

drop table is, well, drop table. it is not comparable to truncate or delete -- which are two methods to remove rows.

so, if you want to remove some or all rows -- delete and truncate are commands to consider.

if you want to drop a table forever, drop is a command to look at.

ETL process

Serge Floca, October 26, 2004 - 7:07 pm UTC

Yes, Tom, I'm using that DDL in an ETL process where I have a window of only 2 hours for loading millions of items into the database. So performance matters a lot !!

And I agree with the definition you gave Christo, which stipulates something like: Oracle does what it does, not what I WISH it would do, and I have to follow its rules even if I don't like some of them.

Tom Kyte
October 26, 2004 - 7:14 pm UTC

I would strongly encourage you to separate the DDL out, well away from the PL/SQL. You can "limit" your exposure to package invalidation using dynamic SQL -- but if you do all of your DDL up front and only then start invoking the code, there will be no issues whatsoever.
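
A sketch of that ordering for an ETL window (the object and package names are invented):

-- step 1: ALL of the DDL first, from a plain SQL*Plus script or a stateless
--         "maintenance" procedure that uses execute immediate
alter table hist_data truncate partition p_old;

-- step 2: only now start calling the (possibly stateful) load packages; the
--         objects they depend on no longer change underneath them
exec load_pkg.run_load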

High Availability

Mike Friedman, December 13, 2004 - 10:09 am UTC

Tom,

Let's face it - high availability is becoming more and more of an issue.

DBAs need ways to do patches and upgrades without shutting down running systems.

This ranges from the demand for rolling upgrades of Oracle databases to the current arguments people are having with you.

Let's take a simple example: I want to add a column to a table and then change my web forms so people can store and retrieve data in that column.

Well, adding a nullable column is unlikely to cause problems with PL/SQL packages. As long as I explicitly tell the database not to clear the package state, and I am willing to take the risk, I should be able to do so.

Modifying the packages to use the new column is more complicated -- there may be dependencies throughout the system -- but I should be able to do it through some mechanism, such as providing a set of new package definitions to the database as a unit and having all new sessions use the new packages while existing sessions continue to use the old ones.

None of this will be easy for Oracle to provide, but long term it will be necessary -- in a few years, no one will accept having to shut down a major business application to apply a minor upgrade or patch.



Tom Kyte
December 13, 2004 - 10:30 am UTC

it is not as simple as "package state"

you have select * issues (code must be recompiled)
you have insert into t values ( .... ) issues (code must be recompiled)

you have

declare
l_data t%rowtype;


issues, code must be recompiled..........
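
For instance, a minimal sketch using a T table like the ones above -- both the "*" expansion and the %rowtype record shape are fixed at compile time, so adding a column to T invalidates this procedure and forces a recompile:

create or replace procedure show_t
as
    l_rec t%rowtype;            -- record shape is frozen at compile time
begin
    select * into l_rec         -- the column list behind "*" is frozen too
      from t
     where rownum = 1;
exception
    when no_data_found then
        null;                   -- an empty T is fine for this demo
end;
/

-- after:  alter table t add ( some_new_col int );
-- SHOW_T goes INVALID and is recompiled on its next call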

Mike, I agree partly....

Bill Schwartz, December 14, 2004 - 3:20 pm UTC

But you have to understand that there are things that affect major portions of your application, and you must take the app down in order to accomplish your goals effectively. You would not expect your mechanic to try to fix your car while you were going 70 mph on the interstate, right?

A reader, July 08, 2005 - 7:22 am UTC

cdi-20039


Tom Kyte
July 08, 2005 - 7:57 am UTC

r2-d2

c3po

R2D2, July 08, 2005 - 10:54 am UTC

From the Oracle documentation:
http://download-west.oracle.com/otn_hosted_doc/designer/misc/284341/insgde/appc.htm

CDI-20039: This user does not have an installed Repository.

Repository not present.

If you are about to install a new Repository, click Yes.

If you are trying to start the product, ensure that the Repository has been installed correctly and that you are using the correct username for the Repository owner.


Tom Kyte
July 08, 2005 - 12:53 pm UTC

I knew it was a Designer error; I was just wondering "why", or in what possible context.

so, I threw r2-d2 out there :)

Package State Invalid

Kumar, September 12, 2005 - 10:08 pm UTC

We are getting "package state invalid" errors in our three-tier application. We are not doing any DDL changes to the package spec or package body. This happens only when we test the package in multiuser testing with the connection pool.
Yes, we have a global variable defined in the package spec.

Is there any other reason why the package state would be invalidated, other than recompiling the procedure?


E.g.:

Package spec:
    g_emp_id number;
    procedure calla( emp_id in number );

Package body:
    procedure calla( emp_id in number )
    is
    begin
        g_emp_id := emp_id;
    end;



Tom Kyte
September 13, 2005 - 11:44 am UTC

grants, ddl changes -- something is happening.
