ok.. tried v$session_longops ... no result.
Alex, March 31, 2003 - 12:14 am UTC
ok.. thanks Tom.
I had a look at:
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:7115955140126
I could not seem to get v$session_longops to work on the test sample I gave it...
Test:
DROP TABLE ztable;
CREATE TABLE ztable(x number, d date);
BEGIN
  FOR i IN 1 .. 1000000 LOOP
    INSERT INTO ztable VALUES (i, SYSDATE);
  END LOOP;
  COMMIT;
END;
/
While this was running... I queried the following...
select substr(lo.sid||'-'||lo.serial#,1,10) "SID-Ser#",
       lo.last_update_time "Last Update",
       lo.elapsed_seconds secs,
       lo.sql_address||'-'||lo.sql_hash_value addhash
  from v$session_longops lo
 where lo.username not like 'SYS%';
Yet my query did not return anything.
BTW... I ordered your book from Amazon last Thursday!
March 31, 2003 - 8:07 am UTC
well -- you didn't make a single call to dbms_application_info.set_session_longops! And since you did no single long-running database operation (you did lots of little fast things instead), the database said "hey, nothing long going on here"
try this perhaps:
declare
l_nohint number default dbms_application_info.set_session_longops_nohint;
l_rindex number default l_nohint;
l_slno number;
begin
for i in 1 .. 1000000
loop
insert into ztable values (i, sysdate);
dbms_application_info.set_session_longops
( rindex => l_rindex,
slno => l_slno,
op_name => 'my long running operation',
target => 1234,
target_desc => 'msg description',
context => 0,
sofar => i,
totalwork => 1000000,
units => 'loops'
);
end loop;
end;
/
and then in another session run:
select * from v$session_longops where time_remaining > 0;
Alex, March 31, 2003 - 9:48 pm UTC
hmmmm...
You mean dbms_application_info doesn't magically manifest the process progress data itself!?!?!?
Best to be facetious in the face of my own stupidity! :)
Thank you for the sample code.
If I can ask one further base question on dbms_application_info's capabilities...
The FOR .. LOOP example works because there is a counter the code can increment as it loops.
What if you just executed a single SQL statement on a very large table with group by aggregate functions and nested selects etc etc?
In this scenario you would not get an opportunity to post dbms_application_info.set_session_longops updates!
April 01, 2003 - 7:28 am UTC
a long running sql statement that does bulk operations (full scans and such) is automagically "long operation enabled".
if it uses "first rows" techniques however -- it will not be (but that is OK cause you get the rows back and would be calling dbms_application_info yourself)
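For example (a sketch -- big_t is a hypothetical large table, and the scan has to run for more than a few seconds before it is reported):

```sql
-- session 1: a long-running bulk operation, no instrumentation at all
select /*+ full(t) */ count(*) from big_t t;

-- session 2: the full scan shows up automatically once it runs long enough
select opname, target, sofar, totalwork, units
  from v$session_longops
 where time_remaining > 0;
```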
client overhead in pinging v$session_longops
A reader, July 13, 2004 - 12:22 pm UTC
Hi tom
To actually show the progress on the client side
- one needs to poll the server by doing the
select from v$session_longops, correct? This does
require round trips - are there any ways to avoid this
overhead? (I think not - but just thought I would
run it by you in case you can suggest some other alternative...)
thanx!
July 13, 2004 - 12:34 pm UTC
well, if the client needs the information -- you sort of have to let the client go to the server and back.
hence the roundtrips are not overhead (overhead is something that can be removed) but rather the price of admission here!
thanx!
A reader, July 13, 2004 - 12:35 pm UTC
yeah - i have to be careful with that word - "overhead"!
Work OK in C/S & dedicated Connections, but in WEB/3-tiers ?
J. Laurindo Chiappa, July 13, 2004 - 12:49 pm UTC
Yes, the technique works great if you are in client/server, using a persistent connection. But in a web/3-tier environment there is normally (AFAIK) some kind of connection pooling/reuse involved, so the sid/serial# will change over time. How could I use DBMS_APPLICATION_INFO in this situation?
July 13, 2004 - 1:42 pm UTC
what does sid/serial# have to do with it specifically?
the sid/serial will point you to the database session -- but you are filling in the other fields (action/module/client_info) with your own information -- put whatever you like in there (eg: action/module = step you are processing, client_info = whatever you like to identify "what 3 tier session" this is)
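A minimal sketch of that idea (the identifier string and the module/action values here are made up):

```sql
-- at the start of each logical web request, tag the session you borrowed
begin
  dbms_application_info.set_client_info('web_user:ALICE sess:42');
  dbms_application_info.set_module(module_name => 'ORDER_ENTRY',
                                   action_name => 'validate items');
end;
/

-- later, find whichever database session is serving that web session
select sid, serial#, module, action
  from v$session
 where client_info = 'web_user:ALICE sess:42';
```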
I mean...
J. Laurindo Chiappa, July 13, 2004 - 8:16 pm UTC
You see, commonly I have many users running the same procedure P. In P, I put tracking information into client_info (like 'Routine X started at hh:mm:ss', 'main SELECT started at hh:mm:ss'), etc.
So, doing a SELECT sid, serial#, client_info FROM v$session WHERE client_info IS NOT NULL, I can see what each session (identified by sid/serial#) is doing; if some session is slow, I fire up Top Sessions in OEM, or check in the V$ wait views what that session is waiting for.
In a 3-tier environment, as I said, AFAIK the sid/serial# will change each time for each user: so if I do that select of client_info from v$session, for user A routine X was executed in one sid/serial#, the main select in another... How can I know which sid/serial#s were used by user A's SQL, and which by user B's?
I imagine this: I will need to make some kind of unique identifier for the user when he/she starts the app, and in each call to dbms_application_info I put this identifier along with the tracking info I need -- in other words, I forget about sid/serial#, right? With this approach I lose the possibility of using v$session_wait and similar views, but is this to be expected in web/3-tier, or not? And if yes, what tool can I use to get statistics for the current execution of my program for each user? Like I said, in client/server it is easy -- the session stays for the whole execution, so I simply consult the V$ views for that session (or even trace the session) -- but in web/3-tier?
Regards,
Chiappa
July 14, 2004 - 11:13 am UTC
use client info to id the client
use action and module to identify what the client is doing
the last_call_et tells you how long they've been in that call.
sure, you can use v$session_wait: client_info finds the session you are interested in from the web side; that gives you the sid/serial#, which tells you what that session is currently waiting on
what you won't have would be "for this web session, what are the cumulative waits" -- not unless you collect that yourself.
in 10g, we can automate that -- you can trace "sessions" over "database sessions"
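In 10g that looks roughly like this (a sketch -- 'web_user:ALICE' is a made-up client identifier; dbms_session.set_identifier tags the calls, and dbms_monitor turns on tracing/statistics for that tag across whatever pooled sessions serve it):

```sql
-- tag every database call made on behalf of this end user (10g)
begin
  dbms_session.set_identifier('web_user:ALICE');
end;
/

-- enable tracing and statistics gathering for that identifier,
-- regardless of which database sessions end up serving it
begin
  dbms_monitor.client_id_trace_enable(client_id => 'web_user:ALICE');
  dbms_monitor.client_id_stat_enable (client_id => 'web_user:ALICE');
end;
/

-- cumulative statistics for the web session
select stat_name, value
  from v$client_stats
 where client_identifier = 'web_user:ALICE';
```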
how to delete Rows from v$session_longOps
pasko, July 14, 2004 - 6:37 am UTC
Hi Tom,
Thanks for a great technique above.
I have just tested it on my 9iR2 test database.
I note that for each loop counter a single row is created in v$session_longops.
I also note that when a user session terminates, only one row is left, corresponding to the last loop step, and all the other rows disappear magically :)
So, if I have a bunch of users monitoring batch jobs using this technique, do I have to find a way of periodically purging this view?
Q1: Do we need to purge it periodically, or do the rows disappear after a database bounce?
Q2: Is the same information duplicated/aggregated in some other dynamic performance view somewhere?
Thanks in advance.
Best Regards.
July 14, 2004 - 11:34 am UTC
you do not delete from V$ tables at all -- they are in general memory structures.
the other rows do not automagically disappear (this view is fixed size -- it always has the same number of rows).
only rows for active sessions contain anything meaningful; ignore the rest -- do NOT bounce the database to "clear" it, it'll have the same number of rows regardless.
how to instrument a pipelined table function?
j., September 20, 2004 - 3:37 pm UTC
hi tom,
in order to instrument a long-running pipelined table function with "longop" tracking, i put it into a package and added the variables needed as global package variables (outside the table function). since this approach failed, i used context entries instead to store/retrieve those variables' values -- and succeeded.
did i do something wrong? how would you code this?
September 20, 2004 - 4:28 pm UTC
why did you need globals?
(sorry, yesterday i was totally wrong)
j, September 21, 2004 - 3:40 pm UTC
today i've placed all those variables (needed for the calls to DBMS_Application_Info.Set_Session_LongOps) inside the table function itself, outside the nested cursor loops that generate the results ... and everything worked fine ...
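A minimal sketch of that arrangement (gen_rows is a made-up pipelined function; the longops variables are locals of the function, so they live for the duration of the fetch):

```sql
create or replace function gen_rows(n in number)
  return sys.odcinumberlist pipelined
as
  -- locals, not package globals: they stay in scope across every pipe row
  l_rindex binary_integer := dbms_application_info.set_session_longops_nohint;
  l_slno   binary_integer;
begin
  for i in 1 .. n loop
    pipe row (i);
    dbms_application_info.set_session_longops(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'piping rows',
      sofar     => i,
      totalwork => n,
      units     => 'rows');
  end loop;
  return;
end;
/
```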
How I implement it in the client side (Forms R4.5)?
Alejandro, March 04, 2005 - 5:06 pm UTC
I am trying to implement this solution in Forms R4.5 to show the client the progress of its task. I would be thankful for any suggestions.
Thanks in advance, Alejandro
March 04, 2005 - 6:22 pm UTC
you would have to run the long running process as a job (via dbms_job) in forms and have a timer that would periodically query to populate the status bar or whatever you want.
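Roughly like this (a sketch -- my_long_running_proc and the opname value are placeholders):

```sql
-- submit the long-running procedure as a background job
declare
  l_job binary_integer;
begin
  dbms_job.submit(l_job, 'my_long_running_proc;');
  commit;  -- the job does not start until the submitting session commits
end;
/

-- the Forms timer then periodically polls something like:
select sofar, totalwork,
       round(100 * sofar / totalwork) pct_done
  from v$session_longops
 where opname = 'my long running operation'
   and time_remaining > 0;
```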
How I implement it in the client side (Forms R4.5)?
Alejandro, March 07, 2005 - 10:40 am UTC
Thanks for the quick answer.
Lamentably we cannot run the long running process in the background, because the user must confirm the results before the COMMIT. During the execution of the long running process, Forms R4.5 does not fire the WHEN-TIMER-EXPIRED trigger, so I cannot consult v$session_longops.
I have thought that, just before initiating the long running process, I could call another form over the same session, in charge of monitoring it -- something like the reports server -- which would periodically query v$session_longops until the long running process finishes.
What do you think about this idea? It may seem a little absurd, but there are few options in the version we have.
March 07, 2005 - 7:40 pm UTC
then it would take another session -- another forms instance on that client.
beyond what you described, that is the only way I would know to do it.
Apologize
Alejandro, March 07, 2005 - 12:50 pm UTC
The word "lamentably" does not exist in the English language; I apologize for my limited knowledge of the language.
I meant "Unfortunately we cannot run the long running process in background..."
March 07, 2005 - 7:58 pm UTC
SIDs
Neeraj Nagpal, April 06, 2006 - 1:21 pm UTC
Tom,
I have a question regarding SIDs in Oracle. What underlying algorithm does Oracle use when assigning new SIDs to processes, and how are they reassigned after use?
Also, looking at the select output from v$session_longops below: SID 29 was used several times, on 22-Mar-2006, 24-Mar-2006 and 31-Mar-2006, and I know for a fact that the displayed messages are from entirely different processes, not even remotely connected with each other. So, if my understanding is correct, the same SID 29 was reassigned more than once to different processes in a matter of a few days (the database was not bounced during that time), yet v$session_longops shows the different jobs' messages under the same SID. Could you please explain why this is happening?
Thanks,
Neeraj
SQL> select sid, message, LAST_UPDATE_TIME
from
v$session_longops where sid=29
SID MESSAGE LAST_UPDATE_TIME
--- ---------------------------------------------------------------------- -----------------------
29 Sort Output: : 404 out of 404 Blocks done 22-mar-2006 01:21:57 pm
29 Index Fast Full Scan: AVM.Z_SCORE: 1250 out of 1250 Blocks done 24-mar-2006 09:25:01 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:26:04 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:26:28 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:26:51 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:31:53 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:32:14 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:32:36 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:33:01 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:33:23 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:38:39 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:39:01 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:39:27 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:39:52 pm
29 Table Scan: FNCPROD.DELETE_IMB2: 47288 out of 47288 Blocks done 31-mar-2006 08:40:10 pm
29 Sort Output: : 114 out of 114 Blocks done 06-apr-2006 09:57:08 am
April 07, 2006 - 4:13 pm UTC
the assignment is not documented, and entirely out of our control.
only look at rows in that view where time_remaining is greater than zero; the rest are leftover or uninitialized rows.
Implementation of progress bar.
Mathew Butler, July 03, 2006 - 11:52 am UTC
I'm estimating the amount of work to be done by a set of PL/SQL processing and queries, and then producing a progress bar based on the contents of v$session_longops. If my estimate is less than the real amount of work to do, I get two entries in v$session_longops -- one for the "completed" progress, marked as 100% complete, and another that progresses past the totalwork mark.
<b>What is the general approach in this case?</b> Options are:
1) Leave the progress bar "waiting" at 100% whilst the process catches up
2) Attempt to group the rows in v$session_longops and "re-tune" the v$session_longops view.
Many thanks as always.
The following code simulates the above
declare
  rindex    binary_integer;
  slno      binary_integer;
  totalwork number;
  sofar     number;
  obj       binary_integer;
begin
  rindex := dbms_application_info.set_session_longops_nohint;
  sofar := 0;
  totalwork := 10;  -- deliberate under-estimate; the real work is 100 units
  while sofar < 100 loop
    -- update obj based on sofar
    -- perform task on object target
    sofar := sofar + 1;
    dbms_application_info.set_session_longops(rindex, slno,
      'Operation Y', obj, 0, sofar, totalwork, 'table', 'tables');
  end loop;
end;
/
SQL> select round((t.sofar/t.totalwork)*100) pct
2 ,t.sid
3 ,t.serial#
4 ,t.sofar
5 ,t.totalwork
6 ,t.time_remaining
7 ,t.elapsed_seconds
8 ,start_time
9 ,last_update_time
10 from v$session_longops t
11 where 1=1
12 and opname = 'Operation Y';
PCT SID SERIAL# SOFAR TOTALWORK TIME_REMAINING ELAPSED_SECONDS START_TIME LAST_UPDATE_TIME
---------- ---------- ---------- ---------- ---------- -------------- --------------- ------------------- -------------------
100 10 5306 10 10 0 0 03/07/2006 16:37.29 03/07/2006 16:37.29
1000 10 5306 100 10 0 03/07/2006 16:37.29 03/07/2006 16:37.29
SQL>
July 07, 2006 - 7:15 pm UTC
Most things attempt to "re-tune" the progress - including v$session_longops itself!
You send it a number of units of work to complete - and as you tick them off, it adjusts its guess of the time remaining.
Just like dragging and dropping a big directory in Windows will - the estimated time at the end is always correct :) The estimated time in the beginning - always very much WAY off.
Ah, yes...
Mathew Butler, July 10, 2006 - 11:26 am UTC
Gotcha.
I avoided getting multiple rows in v$session_longops for my session by ensuring that sofar was not published as being equal to totalwork until all the work had been done -- incrementing totalwork and sofar appropriately. This way the record in v$session_longops didn't reach 100% until ALL the work was complete.
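That fix can be sketched like this (hypothetical loop; the real work is 100 units but the initial estimate is only 10 -- totalwork is kept one step ahead of sofar until the loop truly finishes):

```sql
declare
  l_rindex    binary_integer := dbms_application_info.set_session_longops_nohint;
  l_slno      binary_integer;
  l_totalwork number := 10;   -- initial (under-)estimate
  l_sofar     number := 0;
begin
  while l_sofar < 100 loop            -- 100 = the real amount of work
    l_sofar := l_sofar + 1;
    -- keep the estimate ahead of actual progress until we are truly done,
    -- so the row never reports 100% early
    if l_sofar >= l_totalwork and l_sofar < 100 then
      l_totalwork := l_sofar + 1;
    end if;
    dbms_application_info.set_session_longops(
      rindex => l_rindex, slno => l_slno,
      op_name => 'Operation Y',
      sofar => l_sofar, totalwork => l_totalwork, units => 'tables');
  end loop;
end;
/
```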
Thanks again.
dbms_application_info.set_session_longops
saradha, September 25, 2006 - 1:50 pm UTC
Hi Tom,
If I use
declare
l_nohint number default dbms_application_info.set_session_longops_nohint;
l_rindex number default l_nohint;
l_slno number;
l_emp number;
l_dept number;
cursor c1 is select * from emp;
begin
for rec in 1 .. 10 loop
select count(*) into l_emp from emp;
select count(*) into l_dept from dept;
dbms_lock.sleep(2);
dbms_application_info.set_session_longops
( rindex => l_rindex,
slno => l_slno,
op_name => 'insert into collection table',
sofar => rec,
totalwork => 10,
units => 'loops'
);
end loop;
end;
it is working.
But If I change the loop to
for rec in c1 loop
it is not working...
Why does it not work with the cursor loop?
Thanks
September 26, 2006 - 2:00 am UTC
in this example, rec is a NUMBER.
when you code
for rec in c1 loop
rec becomes a RECORD (of c1's row type), and this api wants "a number" for sofar -- think about it for a moment....
you tell set_session_longops how many units of "work" you are performing (my example was 10 units of work, where a unit of work was a "loop").
You then come back and tell it how many units of work you have done so far (when I did i in 1 .. 10, i was my count of completed units of work).
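So with a cursor loop you keep a separate counter (a sketch against the emp table from the example; the total is counted up front):

```sql
declare
  l_rindex binary_integer := dbms_application_info.set_session_longops_nohint;
  l_slno   binary_integer;
  l_total  number;
  l_done   number := 0;
begin
  select count(*) into l_total from emp;   -- total units of work, up front
  for rec in (select * from emp) loop
    -- ... process rec here ...
    l_done := l_done + 1;                  -- the counter, not the record
    dbms_application_info.set_session_longops(
      rindex => l_rindex, slno => l_slno,
      op_name => 'processing emp rows',
      sofar => l_done, totalwork => l_total, units => 'rows');
  end loop;
end;
/
```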
l_nohint?
Jack Douglas, March 12, 2007 - 10:57 am UTC
Hi Tom,
Why
declare
l_nohint number default dbms_application_info.set_session_longops_nohint;
l_rindex number default l_nohint;
and not
declare
l_rindex number default dbms_application_info.set_session_longops_nohint;
?
Sorry if this has been answered elsewhere - I looked and couldn't find any explanation.
When I test, I find they both 'work', but the second version bizarrely seems to delay appearing in longops until part-way through (at least on 10g).
Thanks,
Jack
March 12, 2007 - 9:11 pm UTC
should not have any effect
got test case?
Sorry
Jack Douglas, March 26, 2007 - 8:34 am UTC
Sorry, I should have tested more thoroughly - no effect like you say.
Was the longhand version required in a past version?
March 26, 2007 - 11:09 am UTC
no, it was not
Question arising from Neeraj's Apr 6th 2006 entry above
Neil, August 18, 2011 - 2:59 am UTC
I am interested to read about SIDs being reassigned. I have been experimenting with dbms_application_info with regard to inter-session communication, and have used it to determine whether or not an update emanated from a certain package. I set the module name in the instantiation section of the package and then use sys_context('userenv', 'module') to retrieve it in a trigger. If the name of the module is the one I've set, the theory is that the update came from the package; otherwise it did not.
If, as Neeraj points out, sessions are re-used, then this is unreliable, because some other unrelated process that does not overwrite the module name may inherit it. Is that the case? If so, is setting the module via dbms_application_info reliable? Surely only the session's id is re-used, and all previous session information is scrubbed...
August 23, 2011 - 2:27 am UTC
if your connection pool doesn't reset the session state, doesn't de-instantiate packages, then the instantiation code is run only once.
In APEX (application express), we reset the session state by default - so it would work.
but it would depend on how YOUR connection pooling software works.