Question and Answer

Tom Kyte

Thanks for the question, Jesse.

Asked: May 12, 2002 - 10:48 pm UTC

Last updated: June 10, 2011 - 5:15 pm UTC

Version: 8.1.7

Viewed 10K+ times

You Asked

Hi Tom,
Is there a way to check the dependencies of stored procedures (procedures, functions, triggers)? If I compile a stored procedure, the other stored procedures which call it become invalid, and I need to recompile them too. Is there a table which stores the dependencies of stored procedures? Thanks.

and Tom said...

Yes there is HOWEVER

o they will recompile themselves (so you need not waste your time/energy)
o you shouldn't use procedures and functions (standalone) in a real database!!!

Read:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:286816015990

and use PACKAGES. Avoid the issue in the first place! You'll be glad you did.

(*_DEPENDENCIES where * = all, user, dba)
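
For example, to see everything that depends on a given procedure you can query that view directly. A minimal sketch against USER_DEPENDENCIES (MY_PROC is just a placeholder name):

select name, type, referenced_name, referenced_type
  from user_dependencies
 where referenced_name = 'MY_PROC'
   and referenced_type = 'PROCEDURE';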

Comments

Sometimes an automatic re-compile is unavoidable

AB, May 13, 2002 - 12:45 pm UTC

Tom

Even when using packages, an automatic re-compile of dependents is unavoidable. For example, my PKG_TYPES package of shared user-defined types is referenced by several others. Now when someone needs to add a new shared type to PKG_TYPES that they hadn't thought of at original version time, the spec needs to be re-compiled, so everything that references it becomes invalid. I guess it's something we have to live with.



Tom Kyte
May 13, 2002 - 2:14 pm UTC

sometimes -- but not often. I still encourage you to use packages exclusively in any real system. avoid standalone procedures and functions!

Procedures don't ALWAYS recompile themselves.

IS, May 13, 2002 - 12:55 pm UTC

Tom, on 8.1.7 we use describe <procedure_name> to find out what the field sizes are before calling the procedure. If a procedure is INVALID the describe gets an error and the procedure does not get automagically compiled. (The procedure does get manually recompiled without any compilation errors.)
Also, a manual recompile of a stored procedure sometimes changes the order of the parameters. Is there a fix for that? We just replace the procedures when they become invalid.

Tom Kyte
May 13, 2002 - 2:20 pm UTC

A manual compile cannot in any way, shape or form change the order of parameters.

Only you, by writing NEW code with the parms in a different order, can do that.


What I do in this case is:


attempt to describe
if error due to invalid
    parse "begin procedure_name; end;"    <<=== this might fail, no parameters!
                                                might even be a function. It'll
                                                VALIDATE the procedure/function tho
    attempt to describe
    if error then it really is broken, not going to run it
end if


I do it with code like this:


-- l_overload .. l_spare are locals of the dbms_describe table types
-- (dbms_describe.number_table / dbms_describe.varchar2_table), exec_cursor is
-- an integer, and :ObjectName is a bind variable supplied by the caller;
-- their declarations are not shown in this fragment
for i in 1 .. 2 loop
    begin
        -- first attempt: describe the procedure (fails if it is invalid)
        dbms_describe.describe_procedure( :ObjectName,
                                          NULL, NULL,
                                          l_overload, l_position,
                                          l_level, l_argument_name,
                                          l_datatype, l_default_value,
                                          l_in_out, l_length, l_precision,
                                          l_scale, l_radix, l_spare );
        exit;
    exception
        when others then
            if ( i = 1 ) then
                -- parse (never execute) a block that references the object;
                -- the parse revalidates it if it can be revalidated
                begin
                    exec_cursor := dbms_sql.open_cursor;
                    dbms_sql.parse( exec_cursor,
                                    'begin if (1=0) then ' || :ObjectName || '; end if; end;',
                                    dbms_sql.native );
                    dbms_sql.close_cursor( exec_cursor );
                exception
                    when others then
                        if dbms_sql.is_open(exec_cursor) then
                            dbms_sql.close_cursor(exec_cursor);
                        end if;
                end;
            else
                -- second describe also failed -- it really is broken
                raise;
            end if;
    end;
end loop;





package invalid caused database hang

Windy, May 13, 2002 - 3:54 pm UTC

Tom,
We use packages in the database. There are a lot of dependencies among packages in our database. Sometimes, when developers push a new version of a core package (changing its spec), it invalidates some other packages. Since we are an online retailer, we constantly have customer activity that needs to call packages, and sometimes Oracle is unable to recompile an invalid package in time. Then the database is in a deadlock: one session has a library cache lock and waits for a library cache pin, while some other sessions hold library cache pins waiting for library cache locks, and the session trying to recompile packages never finishes its job. As soon as we block further customer connections, kill all waiting sessions and recompile the invalid packages, we are fine. But this is not a solution we like. Could you please give me some suggestions on this situation?

Thank you very much.

Tom Kyte
May 13, 2002 - 6:56 pm UTC

It is not in deadlock -- we would detect and roll that back.

It is locked up. You cannot compile a package until the dependencies are not being used -- you cannot use the package you are attempting to compile until after it has been compiled. Classic traffic jam.

In 9i, we have a "quiesce" function -- that effectively "stops" what is going on. It is a cleaner way -- but still involves downtime. Code upgrades do and will always involve some amount of downtime.

For now, the best approach is to schedule downtime -- get people out -- do the upgrade and let them back in.
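
For reference, the 9i quiesce is done at the system level -- a sketch only; it assumes the Database Resource Manager is configured, and it is still, in effect, a maintenance window:

alter system quiesce restricted;
-- do the code upgrade here
alter system unquiesce;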

...cannot in any way shape or form change the order of parameters...

Padders, May 14, 2002 - 5:59 am UTC

Funnily enough there was bug 782989 in earlier versions of Oracle 8 and 8i that used to do exactly that during compilation. I believe the fix was to flush the shared pool, although I don't know why. I suspect this MUST have been fixed by now, however!

Unix code management is easier

Paul Kelley, November 26, 2002 - 12:23 pm UTC

It took a while for the Unix contingent at my shop to convince me about this, but from an ease of management perspective, Oracle has a lot to learn. Shutting down the database, shutting out users, etc. prior to code upgrades seems crude if you compare it with what is involved in the same sort of operation in Unix.

A rudimentary example:

cc hello.c
mv -f a.out my.executable
my.executable &

(the background process prints "hello" every 5 seconds)

cc goodby.c
mv -f my.executable my.executable.bak
mv -f a.out my.executable
my.executable &

There are now two background processes, one printing
"hello" every 5 seconds, one printing "goodbye" every 5 seconds. The inodes are preserved, the directory entries are changed, both new and old can execute at the same time, I don't have to kill processes running the old version, etc.




Tom Kyte
November 26, 2002 - 12:59 pm UTC

boy oh boy -- you've been lucky in the past, haven't you.

I see "page fault" written all over your future.

You are scaring me actually. If you really believe the OS keeps each and every text file open, forever. Or that it even opens all files at the same time. Or that programs are all trivial. Please -- don't you ever do a database upgrade (don't laugh -- I've had people do OS upgrades, while the database is running and wonder "why did it crash all of a sudden with strange errors?". Well, you CHANGED THE OS on us)

what about the program that uses two dynamic libraries? So you start the process "a.out" and it loads version 1.0 of lib1. You decide to upgrade the libraries to version 2. Now a.out (v1) is still "running" (you are getting lucky) and calls something that invokes functionality in lib2 for the first time. a.out (v1) now loads the v2 library code. You have a mish-mash.

most unixes have this concept of SHARED TEXT as well. Yikes on that one...

Many unixes (eg: AIX i believe, HP I'm almost positive) protect you from this and say "text busy" if you try to recompile an already running program.

Even windows got this right.

I can say, if this is your approach to system upgrades -- I'm glad I don't work on your systems. I'd be ripping my hair out. I'm really hoping you are not an SA.


Perhaps the "lot to learn" thing is a two way street???



To Be Or Not To Be

Mac, November 26, 2002 - 3:43 pm UTC

Besides, why would one want two versions of the same executable to be running simultaneously and doing different things? If you change a program, you do so because you need different behavior. Wouldn't you want *all* runs of this changed program to produce the new behavior?

Unix has a better idea? hmm

Paul Kelley, November 26, 2002 - 3:49 pm UTC

Don't be scared, we have all of our hair, I'm not an SA. I wasn't talking about OS or Oracle product upgrades. By code upgrade I meant our homegrown apps, not Oracle or Os. Most of our homegrown apps are statically linked, and the multi-versioning thing actually works, according to the Unix/Oracle guys who have been torturing me. I forwarded your response to them. Perhaps they will feel like responding to you, or perhaps they will stop asking me why Oracle doesn't work the way they think it should. Your dynamic libraries example was the kind of response that should interest them.

BTW I am one of the guys who told you years ago you should write a book and I would buy it. You did and I did.

Tom Kyte
November 26, 2002 - 6:47 pm UTC

but by extension -- I should be able to, according to you -- it is that simple? I mean, if it works on a single binary, why not the world?

(i have a chapter in that book that addresses that: "it works the way it works -- the way it works is documented -- it does not work the way you feel that it should -- cause the guy next to you thinks it should work differently than you do and it can only work ONE way" ;)

And unix systems in general won't let you do this -- "text busy" is the message you get.

we'll have to agree to disagree on this point.

to be or not to be

Paul Kelley, November 26, 2002 - 4:08 pm UTC

Pl/sql aside, for the moment, assume we're talking about a system with hundreds of users with many reports, some of which run for hours. Users can submit a report request at any time, it gets put in the queue for execution. Assume I have a new version of a report to install. The new version has some enhancements requested by the user community, it is better than the old report, but the old report has been ok for years and it happens to be running at the moment in 3 or 4 databases. Should I have to keep checking each of those databases to find just the right moment to install? Should I have to go into the report request system and postpone execution of this report in all databases until I get the new version installed? Wouldn't it be better if I could just do the installation and let the processes that are running the report just finish up?





Tom Kyte
November 26, 2002 - 6:39 pm UTC

yes you should, yes you should with your binaries on the OS as well.

Consider that this report calls 5 packages to process. Consider that (for example) this report takes 1 hour to run. Consider that the first 10 minutes is in package 1, then package 2, .... and so on. Consider you upgrade the code during minute 30. So -- would you want the NEW packages 4 and 5 to execute or the NON-EXISTENT old packages 4 and 5 to execute -- given that you've already used old packages 1, 2 and 3?

Wouldn't you be really mad if every user loaded their own copy of the code? (that's what shared text is all about). We used to do that -- we called it version 6 of Oracle. No shared pool, every sql statement compiled into the pga and was lost forever when you disconnected. That would be a huge step backwards.



Or, incorporate a naming convention that lets you do this (eg: the way, umm the way unix does with .....so.1 .....so.2 and so on)

Still scaring me.

How much rope?

Paul, November 26, 2002 - 9:30 pm UTC

Not sure why you're scared. Oracle protects me from myself, doesn't it? And I'm only playing Unix advocate. Anyhow, speaking on behalf of some Unix/Oracle developers in my shop, I would want executing programs (or queries retrieving data from views) to continue to use the versions of the referenced programs that were in scope at the start of the session. I wouldn't want every user to load his own version, I'd want the system to manage the versioning and do the usual Oracle paging bits with the versions that should be in scope for the given session.

We have several flavors of Unix and at least one version of Linux, all of which give us enough rope to do this type of installation with Unix apps, many of which are Pro*C programs. I have only done this with a trivial program on Sequent Dynix/PTX. I'm told it also works in-house on Solaris and Linux. Apparently we have some staff members who are knowledgeable about this at the dll level, and who admit that you have to be very careful with dynamic linking.

I'll admit that to me, the dynamic linking scenario seems complex even for a simple case. With many sessions and hundreds of programs it would certainly be daunting. I'm not sure why a group of people smarter than myself and with enough time couldn't make it work, though. Then again there's the question of how much rope do you want to give me, and of course, what's in it for Oracle.

I'll have to finish with this - In the face of continued mild abuse from the Unixians, I have maintained all along that we may have to settle for 24X6, 23X1 availability in some of our databases. I have been told that this is painful when you need 24 hour web access and ( possibly more to the point in this case ) when you have to log in at odd hours or else hire more staff to manage the off hours installations. This is the manageability point that they have tried to make to me.

In any case congratulations on a great web site and an excellent book.

to be or not to be

Jim, November 26, 2002 - 10:29 pm UTC

Paul,
I find the thread you started interesting.
I see your point of view and concerns
however, I agree with Tom on this one.
Things are just waaaaay too prone to something going wrong.



To Be Or Not To Be

Mac, November 27, 2002 - 9:17 am UTC

I see Paul's point regarding long-running reports. But consider another situation. Suppose we are dealing with transactions that modify data. Suppose we create a new version of a program to fix the current version which creates faulty data. In this case, wouldn't we want the new version to be used by everybody (and specifically prevent the use of old version for the sake of data integrity)? Thus we would have 2 classes of programs -- tolerant and non-tolerant -- to manage.

But, as Tom said, it works the way it works.

Thank you Tom for providing a stimulating and rich environment full of treasures!

What if invalid code still works?

Mark Schifferli, March 26, 2003 - 3:17 pm UTC

Hi Tom,

Thanks for your wonderful expertise. We have a situation where the package body is invalid, yet is still in use, much like Windy from USA describes above. However, the invalid package still seems to be executing fine, sort of. From a SQLPLUS session, any attempt to execute or recompile the package just hangs:

14:44:36 LISTENER> alter package q compile;
alter package q compile
*
ERROR at line 1:
ORA-04021: timeout occurred while waiting to lock object LISTENER.Q

For the most part, this package is an interface to DBMS_JOB, and the dbms_jobs started via this package call a procedure in the package to check for things to do. If there is nothing to do, they wait and check again in a little bit. As far as I can tell, these jobs are running fine, no invalidation problems at all. How can this be? It appears that there is invalid code for the sqlplus prompt, and working code (therefore valid code?) for DBMS_JOB.

We're using 8.1.7.0.0.

Thanks for your help,

Mark

Tom Kyte
March 26, 2003 - 4:35 pm UTC

a package body cannot be invalid yet in use. that doesn't make sense.

What if invalid code still works? (2)

Mark Schifferli, March 26, 2003 - 5:37 pm UTC

I agree, it doesn't make sense to me either. Perhaps I'm misinterpreting the situation? Here are the facts that lead me to that conclusion:

* The package body is invalid, though the specs are valid.
* I can't get a lock on the package to recompile it.
* the only sessions that can use the package are those in DBMS_JOBS (some of which are running now)

17:06:00 LISTENER> select object_type, status from user_objects where object_name = 'Q';

OBJECT_TYPE STATUS
------------------ -------
PACKAGE VALID
PACKAGE BODY INVALID

17:06:07 LISTENER> alter package q compile;
alter package q compile
*
ERROR at line 1:
ORA-04021: timeout occurred while waiting to lock object LISTENER.Q

17:31:25 TDM.LISTENER> select uj.this_date, uj.this_sec, uj.failures, uj.what from dba_jobs_running djr, user_jobs uj where uj.job = djr.job;

THIS_DATE THIS_SEC FAILURES WHAT
--------- -------- ---------- --------------------------------------------------
26-MAR-03 08:35:51 BEGIN q.listen(1099); /* Q1099--*/ END;
26-MAR-03 08:34:53 BEGIN q.listen(204); /* Q204--*/ END;
26-MAR-03 08:34:53 BEGIN q.listen(301); /* Q301--*/ END;
26-MAR-03 08:34:53 BEGIN q.listen(302); /* Q302--*/ END;
26-MAR-03 08:34:53 BEGIN q.listen(200); /* Q200--*/ END;

Any insight in how to interpret this situation is welcome.

Thanks

P.S. Your book is awesome

Tom Kyte
March 26, 2003 - 6:50 pm UTC

ahh -- ok, the spec has some variables or types in it.

Others are using those variables or types.

The body is not being executed....

can you alter package q compile BODY;




What if invalid code still works? (3)

Mark Schifferli, March 26, 2003 - 7:23 pm UTC

Thanks for your response. Unfortunately, it didn't work.

alter package q compile BODY
*
ERROR at line 1:
ORA-04021: timeout occurred while waiting to lock object LISTENER.Q




Tom Kyte
March 26, 2003 - 7:38 pm UTC

see page 119 of Expert One-on-One Oracle and query up dba_ddl_locks to see who's got it.
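
Something along these lines will show who is holding or waiting on it (a sketch; the owner and object name come from the example above):

select session_id, owner, name, type, mode_held, mode_requested
  from dba_ddl_locks
 where owner = 'LISTENER'
   and name  = 'Q';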

What if invalid code still works? (4)

Mark Schifferli, March 27, 2003 - 11:48 am UTC

Done (yet another reason to appreciate your book).

So from dba_ddl_locks, I get both these for sessions 7-36
LISTENER Q Table/Procedure/Type Null None
LISTENER Q Body Null None

And from a version of your showsql script (thanks again) each of these sessions is doing something like:

DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN := FALSE; BEGIN
BEGIN q.listen(303); /* Q303--*/ END;
:mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

Which to me looks like the syntax of a DBMS_JOB (thanks again, ref. page 1069). Each job is calling a procedure in the package that is invalid, q.listen. Just in case the particulars make a difference, that procedure essentially queries a table to find out if there is a task to execute. If it finds one it hosts out via java, and if it doesn't find one it sleeps for 10 seconds.

I still can't help but think that these sessions are executing this procedure, even though the package body is invalid. Querying dba_jobs_running shows this_date is not null, which implies they are executing, doesn't it?

Thanks again.

Mark

Tom Kyte
March 27, 2003 - 12:35 pm UTC

yup -- it would appear the package is valid, but the view is "wrong".

I'll have to refer you to support on that one, I haven't seen that myself.

What if invalid code still works? (5)

Mark Schifferli, March 27, 2003 - 12:44 pm UTC

Thanks for all your help

mode_held is null in dba_ddl_locks

Vijay, July 01, 2003 - 8:07 am UTC

Hi Tom,
Good day to you. I have gone through your suggestion of dba_ddl_locks, and what I see is like this:

session_id  owner  name       type                  mode_held  mode
13          test   proc_test  Table/Procedure/Type  null       none

There are four session IDs other than mine which have records in this view and are using the object, but mode_held is null for all of them.

But when I try to compile the package I still get the
ORA-04021: timeout occurred while waiting to lock object

What could be the reason and how do I resolve it? As always, thanks to you.

Tom Kyte
July 01, 2003 - 8:55 am UTC

that means one of them was "using" it while you were trying to rebuild it.

Thanks Tom for your quick reply

Vijay, July 02, 2003 - 12:49 am UTC

Hi Tom,
Good day. Sorry to ask you about this again, but I am a bit confused here: do rows in dba_ddl_locks mean all those users are using the object? Please help.

Thanks as always.

Regards,
Vijay

Tom Kyte
July 02, 2003 - 8:08 am UTC

it means they "have at least used", they might not be currently using but at the time you tried the operation -- they were.

remember it shows you "right now", you got the error "back then".

Regarding "Dependency of Stpred procedure"

Raj Jamadagni, July 02, 2003 - 9:21 am UTC

Vijay,

instead of ddl_locks ... I prefer to use the procedure "who_is_using" supplied by Oracle and available on metalink.

Have your DBA install it and give you rights on it, as it needs to be installed as sys. Run that with the name of your package in question and it will give you a list of sessions (sid only) that are actively executing the code.

HTH
Raj

Yogesh, July 29, 2003 - 11:54 am UTC

One more question related to this ... One of the members of the testing team was testing a procedure using TOAD's Debug feature. I tried making changes to the same procedure at the same time, and when I compiled it, it gave me

ORA-04021: timeout occurred while waiting to lock object TEST.PKG_MIGRATE

I could understand this. But when the testing user finished his debug session and came out, I tried once more and still got the same error... not only that, even after one hour, the same error. I even killed all the active and inactive sessions of the test user using TOAD, but to no avail. What could be the reason?

There was one session which was showing as killed but was still appearing in the list even an hour later. Was this session blocking that package?

What are the ways to get rid of this error?

Tom Kyte
July 29, 2003 - 12:14 pm UTC

use dba_ddl_locks to see what sessions might be pointing to it.

Over 10,000 packages become invalid

A reader, September 30, 2003 - 7:38 pm UTC

11.5.7 Apps, 8.1.7.4 database.

This is the 2nd time it has happened: about 10,000 APPS-owned packages became invalid. We recompiled all of them, and no problem.

Question: what could have triggered that many invalidations? No, there have been no upgrades or patches applied. If it were a small number of invalid stored objects, I might be able to figure out the root cause, but not with such a large number.

What could it be?

Thanks In Advance

Tom Kyte
October 01, 2003 - 8:26 am UTC


someone could have revoked something from PUBLIC -- that's a sure fire way to invalidate a database.

someone could have revoked something from the owner of the packages.

it is most likely a grant being revoked that did it. you could enable auditing to capture the culprit in the future.
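
A minimal sketch of that kind of auditing (it assumes the AUDIT_TRAIL initialization parameter is already set, e.g. to DB, and that you pick the audit options appropriate for your site):

audit system grant;                     -- records GRANT/REVOKE of system privileges and roles
-- audit grant on <schema>.<object>;   -- per-object GRANT/REVOKE auditing, if needed

-- later, see who revoked what
select timestamp, username, action_name, obj_name
  from dba_audit_trail
 where action_name like '%REVOKE%'
 order by timestamp;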

great input !!!!

A reader, October 01, 2003 - 9:35 am UTC


package compilations timing out

Menon, July 26, 2005 - 1:55 pm UTC

Hi Tom
"All of a sudden" we have many package compilations being
timed out in our database. When we compile a package, it times out without compiling..
This seems like a locking issue. There is a central package that is being invoked in all cases so far. Can you tell me how we can find out what is being locked (if that is the case) or give some other pointers?

Thanx!

Tom Kyte
July 26, 2005 - 3:21 pm UTC

are you trying to compile code in a running production system with lots of users using it while you are doing this?

problem resolved

Menon, July 26, 2005 - 3:15 pm UTC

It was the problem of someone else running a dependent package whose spec was being changed. We thought we had taken care of this, but apparently we did not.

thanx.

Menon, July 26, 2005 - 3:28 pm UTC

"are you trying to compile code in a running production system with lots of users
using it while you are doing this? "

Not really - I am aware of the issues with that:)

The thing is that all developers in my company share the same database for development (I am hoping it will change soon.) Since it is a small team, we can coordinate our way around this, but hopefully it will be resolved the right way (one dev db per developer) sometime in the future...


Tom Kyte
July 26, 2005 - 3:34 pm UTC

SQL> desc dba_ddl_locks


;)  you'll like that view. 

thanx...

Menon, July 26, 2005 - 4:37 pm UTC

Well, I will have to do it in my local database
(to check the view out). In the shared dev environment
I get:

myuser@commonDEV> desc dba_ddl_locks
SP2-0749: Cannot resolve circular path of synonym "dba_ddl_locks"

The corresponding installation may not have been done
properly perhaps...


Tom Kyte
July 26, 2005 - 4:45 pm UTC

catblock.sql creates them (the extra lock views)
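
For reference, the script lives in the rdbms admin directory and is normally run as SYS (the "?" is SQL*Plus shorthand for ORACLE_HOME; adjust to your install):

SQL> connect / as sysdba
SQL> @?/rdbms/admin/catblock.sql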

yup..

Menon, July 26, 2005 - 5:58 pm UTC

learnt that from an asktom search...will do catblock.sql in my own database (can't do it in the dev database as it is shared.)

Thanx!

dba_ddl_locks

Menon, September 28, 2005 - 5:05 pm UTC

Hi Tom
Is there any other way (some other view or such) to find out the DDL lock when compiling a procedure? I don't have privileges to create the dba_ddl_locks view currently.

Thanx!

Tom Kyte
September 28, 2005 - 5:39 pm UTC

you'll want to get the privilege?

you'll need access to certain views in order to see this information

all right

Menon, September 28, 2005 - 6:14 pm UTC

I asked the dba to send me the scripts with the right privileges - thanx!

stored functions, packages, procedures will compile themselves

A reader, December 15, 2006 - 7:12 am UTC

Hi

I don't think that is entirely true.

I have worked in Oracle Applications where packages in one schema reference other schemas' objects.

When one package becomes invalid it gets recompiled when executed, but if that package calls another package which was invalid, the compilation is not recursive! You would need to manually compile them.



Tom Kyte
December 15, 2006 - 9:08 am UTC

nope, you are not correct.

if you believe you are - you can set up a small test case in a controlled environment to demonstrate.


ops$tkyte%ORA9IR2> create user a identified by a;

User created.

ops$tkyte%ORA9IR2> create user b identified by b;

User created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> grant create session, create procedure to a;

Grant succeeded.

ops$tkyte%ORA9IR2> grant create session, create procedure to b;

Grant succeeded.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> connect a/a
Connected.
ops$tkyte%ORA9IR2> create or replace function standalone return number
  2  as
  3  begin
  4          return 0;
  5  end;
  6  /

Function created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> create or replace package my_pkg
  2  as
  3          global number := standalone;
  4
  5          procedure p;
  6  end;
  7  /

Package created.

ops$tkyte%ORA9IR2> create or replace package body my_pkg
  2  as
  3          procedure p
  4          is
  5          begin
  6                   dbms_output.put_line( standalone );
  7          end;
  8  end;
  9  /

Package body created.

ops$tkyte%ORA9IR2> grant execute on my_pkg to b;

Grant succeeded.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> connect b/b
Connected.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> create or replace package my_pkg
  2  as
  3          global number := a.my_pkg.global;
  4          procedure p;
  5  end;
  6  /

Package created.

ops$tkyte%ORA9IR2> create or replace package body my_pkg
  2  as
  3          procedure p
  4          is
  5          begin
  6                  a.my_pkg.p;
  7          end;
  8  end;
  9  /

Package body created.

ops$tkyte%ORA9IR2> show errors
No errors.
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> connect /
Connected.
ops$tkyte%ORA9IR2> alter function a.standalone compile;

Function altered.

ops$tkyte%ORA9IR2> column owner format a1
ops$tkyte%ORA9IR2> column object_name format a10
ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> select owner, object_name, object_type, status
  2    from all_objects where owner in ( 'A', 'B' );

O OBJECT_NAM OBJECT_TYPE        STATUS
- ---------- ------------------ -------
B MY_PKG     PACKAGE            INVALID
B MY_PKG     PACKAGE BODY       INVALID
A MY_PKG     PACKAGE            INVALID
A MY_PKG     PACKAGE BODY       INVALID
A STANDALONE FUNCTION           VALID

ops$tkyte%ORA9IR2> exec b.my_pkg.p

PL/SQL procedure successfully completed.

ops$tkyte%ORA9IR2> select owner, object_name, object_type, status
  2    from all_objects where owner in ( 'A', 'B' );

O OBJECT_NAM OBJECT_TYPE        STATUS
- ---------- ------------------ -------
B MY_PKG     PACKAGE            VALID
B MY_PKG     PACKAGE BODY       VALID
A MY_PKG     PACKAGE            VALID
A MY_PKG     PACKAGE BODY       VALID
A STANDALONE FUNCTION           VALID

 

if packages get compiled when they are executed, how come...

A reader, December 15, 2006 - 6:00 pm UTC

Hi

We have several schemas in the application. Sometimes when we make a new software release, which can be anything from a single small package to several, it invalidates quite a few packages. If we don't manually compile some of them (most would automatically compile but some don't!), users start complaining that they are getting errors.

How can this be?

Tom Kyte
December 16, 2006 - 5:52 pm UTC

you would have to sort of emulate this for us. then we can make an intelligent answer.


I would guess this:

o your change made it IMPOSSIBLE for the other code to compile, period. And when you fixed it manually, you did a lot more than just recompile the existing packages.
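
One hedged way to sweep up stragglers like that after a release is to recompile the whole schema from a privileged account -- a sketch only, and 'APPS' is just a placeholder schema name here:

exec dbms_utility.compile_schema( 'APPS' )      -- recompiles the PL/SQL in the schema

-- or, in 10g and later, recompile just the invalid objects in dependency order:
exec utl_recomp.recomp_serial( 'APPS' )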

No longer getting the timeout error

Rahul, August 08, 2007 - 12:07 pm UTC

Hi Tom,

We recently upgraded our Development database from 9i to 10gR2 on HP-UX. On 9i, when a developer tried to compile a package that was in use by a long running transaction, we would see a timeout error on the compile. The developer waited for a while and re-compiled. I don't remember the exact error message, but I think it was the "ORA-04021: timeout occurred while waiting to lock object" mentioned above.

After the upgrade to 10gR2 we have not been getting the timeout error (at least not within 15 minutes). The compile just hangs waiting for the execution to complete. We see a wait for "library cache lock" or "library cache pin". We have to kill the session that is executing the package to compile it.

Do we need to configure the timeout so that we get an error instead of the long wait? If yes, what parameter/compile option governs this?

Thanks,
Rahul

DBA_DDL_LOCKS in 10g RAC

User, October 16, 2007 - 2:48 pm UTC

Hi Tom,

Is there a global view for DBA_DDL_LOCKS which has info from all nodes/instances in a clustered database?

Like v$session & gv$session where you can query gv$session to get session info from all instances.

Thanks & Regards.

10.2: Package body invalidation

Mark, July 14, 2010 - 8:22 am UTC

Hi Tom,
we are experiencing frequent invalidation of the same package body. The dependency of this package includes local tables/synonyms and remote tables via:
private synonym -> dblink (owned by local schema) -> remote table. When the body invalidates, I run the dependency tree and see no changes in any objects. I found a script on Metalink that shows the need to recompile based on a comparison of obj$.stime and dependency$.p_timestamp. I noticed that when the body invalidates, obj$.stime < dependency$.p_timestamp for the objects in my local database representing the remote tables. Once I compile the body, obj$.stime picks up the correct value, but later the body goes invalid again and the old stime shows up. Oracle Support tells me those timestamps are no longer involved in dependencies in 10g. However, we created local copies of the remote tables and changed the synonyms to point to the local tables, and the body is no longer invalidating. Oracle Support is not giving me any constructive leads on what's happening. I also noticed some "not exist" objects that seem to "represent" remote tables; some of those have exactly the same stime as the one I see when the body is invalid, but those "not exist" objects are not referenced in it.
I would appreciate it if you would give your thoughts. The query I run:
prompt
prompt "Depends on:"
prompt "~~~~~~~~~~~"

select o.obj# "Obj#",
decode(o.linkname, null,
nvl(u.name,'Unknown')||'.'||nvl(o.name,'Dropped?'),
o.remoteowner||'.'||nvl(o.name,'Dropped?')||'@'||o.linkname) "Object",
decode(o.type#, 0, 'NEXT OBJECT', 1, 'INDEX', 2, 'TABLE', 3, 'CLUSTER',
4, 'VIEW', 5, 'SYNONYM', 6, 'SEQUENCE',
7, 'PROCEDURE', 8, 'FUNCTION', 9, 'PACKAGE',
10, '*Not Exist*',
11, 'PKG BODY', 12, 'TRIGGER',
13, 'TYPE', 14,'TYPE BODY',
19,'TABLE PARTITION', 20,'INDEX PARTITION',
21,'LOB',22,'LIBRARY',
23,'DIRECTORY', 'UNDEFINED') "Type",
decode(sign(stime-P_TIMESTAMP),
1,'*NEWER*',-1,'*?OLDER?*',null,'-','-SAME-') "TimeStamp",
decode(o.status,0,'N/A',1,'VALID','INVALID') "Status",
to_char(stime,'yyyy/mm/dd hh24:mi:ss') stime,
to_char(P_TIMESTAMP,'yyyy/mm/dd hh24:mi:ss') P_TIMESTAMP,
to_char(D_TIMESTAMP,'yyyy/mm/dd hh24:mi:ss') D_TIMESTAMP
from sys.dependency$ d, sys.obj$ o, sys.user$ u
where P_OBJ#=obj#(+) and o.owner#=u.user#(+) and D_OBJ#='&&OBJID'
;
Thanks

Tom Kyte
July 19, 2010 - 12:54 pm UTC

what is remote_dependencies_mode set to?

A reader, August 03, 2010 - 3:38 pm UTC

remote_dependencies_mode = TIMESTAMP

We now have some additional info. There are two schemas with several similar packages:
A.PKG1, A.PKG2, A.PKG3
B.PKG1, B.PKG2, B.PKG3
Their bodies never stay valid. If I compile the body of A.PKG1, the bodies in schema B are invalidated, and the other way around. We tried renaming the packages to make sure there is no "name confusion"; still the same. Running the dependency tree shows no dependency between those two schemas. However, if we remove the remote tables from the picture and replace the synonyms to point to local tables, this invalidation stops.
Tom Kyte
August 04, 2010 - 7:25 am UTC

... If I compile the body of A.PKG1, the bodies in schema B are invalidated, ...

that should not happen - bodies are NOT dependent on other bodies. Bodies are dependent on other SPECIFICATIONS. Give a concrete example demonstrating otherwise


give an example to work with - as small an example as possible.


you might want remote dependencies to be signature

A reader, August 05, 2010 - 11:01 am UTC

Hi Tom,
I was able to identify the conditions when this happens:
1. one local db and two remote databases
2. local db has two schemas
3. both local schemas have their own dblinks (one in each local schema) with the same name but pointing to different remote db
4. both local schemas have a package which references a table with the same name over their own dblink (same name) pointing to different remote db.

Below is the test case (quite lengthy, sorry), oracle version is 10.2.0.4:
1. on DB1 (remote)
create user tstu1 identified by tst999999
default tablespace users
temporary tablespace temp1;

grant … to tstu1;
alter user tstu1 quota unlimited on users;

create table tstu1.tab1 (f1 varchar2(30));
insert into tstu1.tab1 values ('tstu1.tab1 db1');
commit;


2. on DB2 (remote)
create user tstu1 identified by tst999999
default tablespace users
temporary tablespace temp1;

grant … to tstu1;
alter user tstu1 quota unlimited on users;

create table tstu1.tab1 (f1 varchar2(30));
insert into tstu1.tab1 values ('tstu1.tab1 db2');
commit;

3. on DB3 (local)
create user localu1 identified by localu1
default tablespace users
temporary tablespace temp1;

grant ... to localu1;
alter user localu1 quota unlimited on users;

create user localu2 identified by localu2
default tablespace users
temporary tablespace temp1;

grant … to localu2;
alter user localu2 quota unlimited on users;

connect localu1/localu1
CREATE DATABASE LINK dbl1.world
CONNECT TO tstu1 IDENTIFIED BY tst999999 USING 'db1.WORLD';

select * from tab1@dbl1.world;
-- should return: "tstu1.tab1 db1"

connect localu2/localu2
CREATE DATABASE LINK dbl1.world
CONNECT TO tstu1 IDENTIFIED BY tst999999 USING 'db2.WORLD';

select * from tab1@dbl1.world;
-- should return: "tstu1.tab1 db2"

4. on db3 (local) create scripts
4.1. pkg_tst1.sql:
set echo on
spool pkg_tst1.log
connect localu1/localu1
create or replace package pkg_tst1 as
procedure proc_tst1;
end pkg_tst1;
/
show errors

create or replace package body pkg_tst1 as
procedure proc_tst1 is
v1 number;
begin
select count(*) into v1 from tab1@dbl1.world;
end;
end pkg_tst1;
/
show errors

4.2. pkg_tst2.sql:
set echo on
spool pkg_tst2.log
connect localu2/localu2
create or replace package pkg_tst2 as
procedure proc_tst1;
end pkg_tst2;
/
show errors

create or replace package body pkg_tst2 as
procedure proc_tst1 is
v1 number;
begin
select count(*) into v1 from tab1@dbl1.world;
end;
end pkg_tst2;
/
show errors
Exit

4.3. compile1.sql:
set echo on
set lines 100
col object_name for a30
col owner for a20
select owner, object_name, object_type, status from dba_objects where object_name like 'PKG_TST%'
and owner in ('LOCALU1','LOCALU2')
order by 1,2,3
/
alter package localu1.pkg_tst1 compile body;
select owner, object_name, object_type, status from dba_objects where object_name like 'PKG_TST%'
and owner in ('LOCALU1','LOCALU2')
order by 1,2,3
/
Exit

4.4. compile2.sql
set echo on
set lines 100
col object_name for a30
col owner for a20
select owner, object_name, object_type, status from dba_objects where object_name like 'PKG_TST%'
and owner in ('LOCALU1','LOCALU2')
order by 1,2,3
/
alter package localu2.pkg_tst2 compile body;
select owner, object_name, object_type, status from dba_objects where object_name like 'PKG_TST%'
and owner in ('LOCALU1','LOCALU2')
order by 1,2,3
/
Exit

5. on DB3(local) - run tests
5.1. create packages:
sqlplus localu1/localu1 @pkg_tst1.sql
sqlplus localu2/localu2 @pkg_tst2.sql

5.2. test compiles:

sqlplus / @compile1.sql
sqlplus / @compile2.sql

Observe pkg body status changes in the outputs



Tom Kyte
August 05, 2010 - 1:04 pm UTC

confirmed, reproduces on 11gr2 - do you have access to metalink/my oracle support to file a bug?

A reader, August 05, 2010 - 2:20 pm UTC

Hi Tom,
I really appreciate your quick response and confirmation. I do have an SR with Oracle Support. So far they haven't been helpful. I posted this test case today on my SR, hoping that now they may identify the root cause. We found a workaround. If we encapsulate remote tables in views it all works fine, no more invalidations. The view would be:
create view tab1_v as select * from tab1@dbl1.world;
and then use tab1_v in the packages.
We are planning to test it in the app, however, using views in complex sql may alter execution plans, so we would consider this as the last resort if no fix is available.
Do you have any idea why this creates such a dependency anomaly?

Tom Kyte
August 05, 2010 - 3:00 pm UTC

a simple "select * from table" view should not impact a query plan.

I'm not sure why the anomaly - just that it exists. If support does not file a bug, please do let me know and I'll file it directly myself.

A reader, August 16, 2010 - 12:49 pm UTC

Hi Tom,
After I provided the test case, Oracle Support found: Note 415983.1 - Compiling Object In One Schema Invalidates Object In Another.
Apparently this bug was closed as not feasible to fix.

Tom Kyte
August 19, 2010 - 1:16 am UTC

thanks for the followup, the base bug was interesting to read through ;)

Dependency of package body

vivian, January 21, 2011 - 2:46 pm UTC

Our production db is oracle 10.2.0.4.0.

1. Will a view get invalidated after loading data into its base table? The loading process includes truncating the base table, dropping indexes and recreating indexes. I can not reproduce this in a test case, but we observe view invalidation in production every time after data is loaded into the base table by the Ab Initio ETL process.
2. After the package body gets invalidated due to the referenced view invalidation, Oracle tries to recompile the package body when job Y tries to run the package. However, since the package is still being executed by job X, job Y is blocked by job X, waiting on library cache pins related to the package. What is the default timeout before job Y gets the ORA-04021 timeout error? Can we adjust the timeout value manually in this case to let job Y wait until job X releases the package?
3. Job X and job Y run separate procedures in the same package, and in our case the package body always gets invalidated during execution of job X and before job Y starts. Do you recommend creating separate packages if both jobs are required to run in parallel?

Tom Kyte
February 01, 2011 - 9:42 am UTC

1) loading does not invalidate views.

ops$tkyte%ORA11GR2> drop table t;

Table dropped.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table t ( x int );

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create or replace view v as select * from t;

View created.

ops$tkyte%ORA11GR2> insert /*+ append */ into t select rownum from all_users;

41 rows created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select status from user_objects where object_name = 'V';

STATUS
-------
VALID

ops$tkyte%ORA11GR2> truncate table t;

Table truncated.

ops$tkyte%ORA11GR2> select status from user_objects where object_name = 'V';

STATUS
-------
VALID

ops$tkyte%ORA11GR2> insert /*+ append */ into t select rownum from all_users;

41 rows created.

ops$tkyte%ORA11GR2> select status from user_objects where object_name = 'V';

STATUS
-------
VALID



Ab Initio ETL must be doing something else.


2) It is a few seconds and is untunable. The operation must be retried.

3) I would investigate what is causing the invalidation - a simple truncate/load would not do that. You must be doing some other DDL which is likely not necessary.
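
A quick sanity check when that happens is to look at what actually went invalid and when it was last touched -- a sketch; the owner is a placeholder:

select object_name, object_type, status, last_ddl_time
  from dba_objects
 where owner = 'ETL_OWNER'
   and status = 'INVALID'
 order by last_ddl_time;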

dependency

sam, June 08, 2011 - 11:21 am UTC

Tom:

I want to clean up a large number of UNUSED and OLD tables in a schema in a 9i database.

I assume there is no way to track the last SELECT, UPDATE, etc or know if it is used or not other than checking if code uses it or not.


Is the best way for this to check this VIEW and determine if any code references those tables or not before dropping them?

select referenced_owner, referenced_name, referenced_type
from dba_dependencies
where name= 'MYPROC'
and owner = 'SCOTT'
order by referenced_owner, referenced_name, referenced_type


Tom Kyte
June 08, 2011 - 12:20 pm UTC

I assume there is no way to track the last SELECT, UPDATE, etc or know if it is
used or not other than checking if code uses it or not.


or, if you enable auditing.


Is the best way for this to check this VIEW and determine if any code
references those tables or not before dropping them?


oh my gosh. you scare me to death sam, you really and truly do. totally.

think dynamic sql
think any sql not in plsql



The only way to do this would be to have system documentation - period. Nothing else works. Nothing. Just because a table hasn't been accessed in six months (or even longer) doesn't mean "you can drop it"



There is no simple answer to this question. If you proceed, you are likely (HIGHLY likely) to mess up stuff big time. I would encourage you to rethink this busy work and do anything else.
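
If you do go down the auditing route mentioned above, a minimal sketch (it assumes the AUDIT_TRAIL parameter is set; SCOTT.OLD_TABLE is a placeholder):

audit select, insert, update, delete on scott.old_table by access;

-- some months later, see whether anything touched it
select timestamp, username, action_name
  from dba_audit_trail
 where owner = 'SCOTT'
   and obj_name = 'OLD_TABLE';

But as above -- the absence of audit records still does not prove a table is safe to drop.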

tables

sam, June 08, 2011 - 1:39 pm UTC

Tom:

There is no system documentation.

Let us say XYZ hired you to do this task. Would you turn them down because they don't have documentation? If they agree to pay big $$$, how do you proceed? I am sure you would need to study the applications and do reverse engineering.

Most of these tables I already know about, because I use BK_XXX or TEMP_XXX or TEST_XXX, but I was just making sure that no PL/SQL uses them. I might as well leave them since there is no harm other than some storage taken.

We do not use much DSQL.


Tom Kyte
June 08, 2011 - 1:45 pm UTC

I would turn XYZ down because they would be suing me for destroying their data later.


Unless they hired me to fully document the system - what all components do, what they access etc. And to deploy the application in a sandbox environment - creating an installation process for it - so that we'd have a pristine environment from which we could copy over the production data and test/evaluate with. And then take a maintenance window to copy the data over one last time and switch over to the new system.

We do not use much DSQL.


That is like someone saying they are only a little pregnant. If you have a client application - you have dynamic sql - period. And all it takes is "a little"


I'm not going to help you on this one Sam, I've been asked this question before and I've always said the same thing.

tables

sam, June 08, 2011 - 5:05 pm UTC

Tom:

Interesting! I thought DSQL is SQL created at runtime; the SQL is not known when you compile.

So all "Client" applications create the SQL at runtime and not at compile time?

Not sure how you can destroy the data. You always have backups of the data before you do anything, right?

If you take a copy of the production system on a test box, drop the tables that you think are unused, and test everything out and it works, then you know the tables are most likely unused.

This is not an easy thing to do, of course, because you have to pick the tables that you *think* are unused, drop them, test everything, then fix stuff (i.e. add tables back) and retest, etc. It can be a never-ending process.

so yes, documentation is the best thing, unless someone needs the money badly and is not worried about being sued.
Tom Kyte
June 08, 2011 - 5:19 pm UTC

... So all "Client" applications create the SQL on runtime and not compile time?
...

The only thing that registers in *_dependencies with its "static" sql is plsql. I put air quotes around static, because even static sql is 'dynamic' - there is no such thing as static sql in Oracle truly. It is just a convention to call 'static' sql in plsql 'static'. Under the covers, it is executed just like dynamic sql is (only we know what it is at compile time)

pro*c can do "static" sql too - but it won't be in *_dependencies

j/sql (or is it sql/j) can do "static" sql too - but it won't be in *_dependencies.

Not sure how you can destroy the data. You always have backups of the data before you do anything, right?


and when you discover 5 months from now that you nuked something relevant - do you have your backups from 5 months ago? And do you test all of your backups to know you can actually restore them? This is just a recipe for disaster what you are talking about.


so yes, documentation is the best thing, unless someone needs the money badly and is not worried about being sued.


tell me how you get into the position of not worrying about the last bit? I'd love to know, I want to be there. Maybe not sued, but fired, blacklisted, bad word of mouth, transferred to a "not nice" position, etc.

PROC BLOCK

Duke Ganote, June 08, 2011 - 9:39 pm UTC

Of course, I'm using PL/SQL for ETL, but I've noticed that some DDL can block re-compilation of the proc/package. For example:

CREATE OR REPLACE PACKAGE CIOPDBA.z_pkg as
   curr_stack_id INTEGER;
   PROCEDURE load_test;
   FUNCTION current_stack_id RETURN INTEGER;
END z_PKG;
/
CREATE OR REPLACE PACKAGE BODY CIOPDBA.z_pkg as
PROCEDURE load_test as
  cnt INTEGER;
BEGIN
  <<drop_work_table>>
  BEGIN
    EXECUTE IMMEDIATE 'drop table Z_test purge';
  END drop_work_table;
  EXECUTE IMMEDIATE q'<create table z_test compress nologging as
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000
union all
select level lvl from dual connect by level < 1000000>';
select count(*) into cnt from z_test; -- package depends on z_test existing!
DBMS_OUTPUT.PUT_LINE(cnt);
END load_test;
FUNCTION current_stack_id RETURN INTEGER IS
BEGIN
  RETURN curr_stack_id;
END current_Stack_id;
END z_pkg;
/


Session one fires up:

22:16:02 one> exec z_pkg.load_test;

PL/SQL procedure successfully completed.

Elapsed: 00:00:29.16
22:16:32 one>


While that was running, I called the proc/package in session two... but it blocks until session one finishes. Then session two can re-compile and use the package:

22:16:06 two> select z_pkg.current_stack_id from dual;

CURRENT_STACK_ID
----------------


Elapsed: 00:00:25.62 -- it waits... and waits....
22:16:32 two> select z_pkg.current_stack_id from dual;

CURRENT_STACK_ID
----------------


Elapsed: 00:00:00.15 -- much faster normally
22:17:39 two>


Session one invalidates the package by initially dropping the table Z_TEST. Session two can't use the invalidated package until Z_TEST is re-created. Then the package can re-validate/re-compile.
Tom Kyte
June 09, 2011 - 9:32 am UTC

that is to be expected. I'd much rather use truncate and insert/*+APPEND*/, especially in light of stuff like this:

"Why does my plan change"
http://www.oracle.com/technetwork/issue-archive/2006/06-may/o36asktom-099107.html

if you drop and recreate the table, you may well have to run a bunch of queries BEFORE you gather statistics on the table.

And - it'll avoid things like this.

(it is a really bad idea to have a piece of code have static references to something it personally does DDL on... Having a dynamic reference would be ok)
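
A sketch of the truncate-and-append shape described above, using the names from the example (it keeps Z_TEST in place, so the static reference never invalidates the package; the row source is shortened here for brevity):

PROCEDURE load_test AS
  cnt INTEGER;
BEGIN
  EXECUTE IMMEDIATE 'truncate table z_test';     -- table stays; only the data goes
  INSERT /*+ APPEND */ INTO z_test ( lvl )
  SELECT level FROM dual CONNECT BY level < 1000000;
  COMMIT;                                        -- must commit before reading a direct-path loaded table
  SELECT COUNT(*) INTO cnt FROM z_test;          -- the static reference is safe now
  DBMS_OUTPUT.PUT_LINE( cnt );
END load_test;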

really bad idea

Duke Ganote, June 09, 2011 - 10:43 am UTC

RE: <tom>it is a really bad idea to have a piece of code have static references to something it personally does DDL on</tom>

<duke>The intent was (a) easy/lazy way to accommodate additional new columns in the source table (b) retaining Oracle's awareness of dependencies, e.g. in xxx_DEPENDENCIES.

For those goals, I'm satisfied. However, I've been working on making our custom pl/sql processes run concurrently. And there I found the trade-off of inhibiting package/proc concurrency.</duke>

spinning?

Duke Ganote, June 09, 2011 - 1:05 pm UTC

I assume the second session is "spinning", that is, waiting for a latch? As discussed in this question "About latch wait time":
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:409308200346402947
Tom Kyte
June 09, 2011 - 1:19 pm UTC

it'll only spin so much and then sleep

select z_pkg.current_stack_id waiting
from
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.01       0.04          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.02      39.01          0          0          1           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.03      39.05          0          0          1           1

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 237
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  FAST DUAL  (cr=0 pr=0 pw=0 time=5 us cost=2 size=0 card=1)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  library cache pin                               1       38.98         38.98
  SQL*Net message from client                     2        0.00          0.00


as opposed to when it didn't have to wait:

select z_pkg.current_stack_id nowaiting
from 
 dual


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          0          0           1
  
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 237
Number of plan statistics captured: 1
 
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  FAST DUAL  (cr=0 pr=0 pw=0 time=4 us cost=2 size=0 card=1)
 
 
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00



it didn't burn too much CPU - it spent most of the time sleeping

The Sleeper Awakes.................................................(H.G.Wells, ISBN 1-4264-5876-2)

Duke Ganote, June 10, 2011 - 8:29 am UTC

Is there some v$ view or some such to find out which sessions are dozing off instead of working?
Tom Kyte
June 10, 2011 - 2:44 pm UTC

define dozing off?

do you mean inactive for more than N seconds?

select * from v$session where status = 'INACTIVE' and last_call_et > N;

or you mean from above maybe - then v$session_wait would do.

dozing off == sleeping while waiting for a latch

Duke Ganote, June 10, 2011 - 3:51 pm UTC

I can find who's blocking whom, e.g.
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:1184139484917
How do I find who's sleeping waiting for a latch?

Tom Kyte
June 10, 2011 - 5:15 pm UTC

v$session_wait

if they are waiting, they be sleeping

spins don't count as a wait, if we spin and get the latch before sleeping - it will be a "miss" but not a "sleep"
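
A sketch of the sort of query that shows them (columns as in V$SESSION_WAIT; in 10g and later the same wait columns also appear on V$SESSION):

select sid, event, state, seconds_in_wait
  from v$session_wait
 where event in ( 'library cache pin', 'library cache lock' );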

