Why don't you suggest using a trigger to solve this?
Because I never lead with triggers - ever, never.
http://www.oracle.com/technetwork/issue-archive/2008/08-sep/o58asktom-101055.html

Let me list the deficiencies in even considering a trigger for this:
a) How would this practically work, given what they wanted:
How do I count the number of records inserted/updated/deleted that have not yet been committed in a session?
That means a trigger on every table they touch, in every schema they might touch a table in - not just 'their' schema: any schema, every schema. The code changes over time - it might use dynamic SQL - you will never have the exhaustive list of all tables for all time.
b) Where is your logic to reset the counter to zero after they commit? (I can show you such logic - a sketch follows - but it adds even more overhead to an already bad idea.)
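For the sake of argument, here is a minimal sketch of that reset logic, reusing the my_pkg package and t2_aiudfer trigger names from the demo further down and leaning on dbms_transaction.local_transaction_id to notice that a new transaction has started. Even this only resets the count on the NEXT modification after a commit - read the counter right after committing and it is still wrong:

create or replace package my_pkg
as
        g_cnt    number := 0;
        g_txn_id varchar2(100);
end;
/

create or replace trigger t2_aiudfer
after insert or update or delete on t2 for each row
declare
        l_txn_id varchar2(100) := dbms_transaction.local_transaction_id;
begin
        -- a different transaction id means the prior count is stale - start over
        if my_pkg.g_txn_id is null or my_pkg.g_txn_id <> l_txn_id then
                my_pkg.g_cnt    := 0;
                my_pkg.g_txn_id := l_txn_id;
        end if;
        my_pkg.g_cnt := my_pkg.g_cnt + 1;
end;
/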
c) Everyone is penalized, performance wise - in a big way.
d) insert /*+ APPEND */ ceases to work in the presence of a trigger - it silently falls back to a conventional insert (a quick demo follows).
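You can see that for yourself with a throwaway table - the names t and t_trig here are just for this demo. After a true direct path load you cannot query the table in the same transaction (ORA-12838); with an enabled trigger on the table, the APPEND hint is silently ignored and the query works:

create table t ( x int );

create or replace trigger t_trig
before insert on t for each row
begin
        null;   -- any enabled trigger on the table disables direct path loads
end;
/

insert /*+ append */ into t
select rownum from dual connect by level <= 10;

select count(*) from t;   -- succeeds: the insert was conventional
                          -- drop the trigger, repeat, and this raises ORA-12838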
e) The massive performance overhead - remember, they are updating lots of stuff:
ops$tkyte%ORA11GR2> create table t1 ( x int primary key, y varchar2(30) );
Table created.
ops$tkyte%ORA11GR2> create table t2 ( x int primary key, y varchar2(30) );
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert /*+ append */ into t1
2 select rownum, rownum
3 from dual
4 connect by level <= 150000;
150000 rows created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> insert /*+ append */ into t2
2 select rownum, rownum
3 from dual
4 connect by level <= 150000;
150000 rows created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace package my_pkg
2 as
3 g_cnt number := 0;
4 end;
5 /
Package created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace trigger t2_aiudfer
2 after insert or update or delete on t2 for each row
3 begin
4 my_pkg.g_cnt := my_pkg.g_cnt+1;
5 end;
6 /
Trigger created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> declare
2 l_start_cpu number := dbms_utility.get_cpu_time;
3 begin
4 for i in 1 .. 150000
5 loop
6 update t1 set y = lower(y) where x = i;
7 end loop;
8 dbms_output.put_line( 'cpu hsecs = ' || ( dbms_utility.get_cpu_time-l_start_cpu ) );
9 end;
10 /
cpu hsecs = 832
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> declare
2 l_start_cpu number := dbms_utility.get_cpu_time;
3 begin
4 for i in 1 .. 150000
5 loop
6 update t2 set y = lower(y) where x = i;
7 end loop;
8 dbms_output.put_line( 'cpu hsecs = ' || ( dbms_utility.get_cpu_time-l_start_cpu ) );
9 end;
10 /
cpu hsecs = 1133
PL/SQL procedure successfully completed.
ops$tkyte%ORA11GR2> exec dbms_output.put_line( my_pkg.g_cnt );
150000
PL/SQL procedure successfully completed.
I know I wouldn't want that overhead (1133 vs 832 cpu hsecs above - roughly 36% more CPU just to count rows) - and think about it - it would be for EVERYTHING, not just this one package.
f) And even if you had such a counter - so what? What could an external session do about it? You'd have to pop that counter into an application context or into v$session (yet MORE overhead, as your per-row counting and trigger runtime just keeps going up and up and up).
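Just to show what that plumbing would look like (and the extra work it adds on every single row), a minimal sketch - this assumes you are willing to call dbms_application_info on each trigger fire, which publishes the value in v$session.client_info where another session could read it:

create or replace trigger t2_aiudfer
after insert or update or delete on t2 for each row
begin
        my_pkg.g_cnt := my_pkg.g_cnt + 1;
        -- expose the running count to other sessions via v$session.client_info
        dbms_application_info.set_client_info( 'uncommitted changes: ' || my_pkg.g_cnt );
end;
/

-- from another session:
select sid, client_info
  from v$session
 where client_info like 'uncommitted changes:%';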
g) And most of all - because counting records is a bad approach to the underlying problem. They are trying to contain the use of certain resources (almost certainly the undo space being consumed). Controlling this by the resource itself - not by a rowcount - is much more appropriate.

Think about it - I could generate orders of magnitude more undo updating 150,000 rows than you might, or vice versa. There is no true relationship between the number of rows modified and the use of some resource. It is wholly dependent on the nature of the modifications - NOT the number of modifications.
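If undo really is the resource they care about, it can be measured directly, with no triggers anywhere. A minimal sketch, assuming the session has been granted access to v$transaction and v$session:

-- undo blocks and undo records consumed by this session's open transaction
select t.used_ublk, t.used_urec
  from v$transaction t, v$session s
 where s.taddr = t.addr
   and s.sid   = to_number( sys_context( 'userenv', 'sid' ) );

That returns no rows when the session has no active transaction, and used_ublk/used_urec track the undo actually consumed - which is the thing they apparently want to limit.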