Helena Markova, May 24, 2001 - 1:51 am UTC
VJ, May 31, 2001 - 11:13 am UTC
Kenan BEKTAS, June 04, 2001 - 5:56 pm UTC
Yuval Lavi, July 12, 2001 - 4:21 am UTC
A reader, July 25, 2001 - 3:30 pm UTC
Froggy, August 29, 2001 - 8:12 am UTC
ali, October 15, 2001 - 8:08 am UTC
Not quite dynamic enough.
Steve Rock, October 26, 2001 - 6:24 pm UTC
I've been reading several related scenarios to this one. As usual, you do a great job of answering them and I greatly appreciate your insights.
My problem is that I need something much more dynamic. In my situation, I need to identify changes on a sub-set of columns based on a customer id. The column list can change at any time. I am not allowed to keep building/rebuilding triggers or sp's.
I want to store the columns of interest in a meta-data table by customer id and, when the trigger fires, determine if any of that customer's columns of interest have changed.
Any ideas?
Thanks in advance!
October 27, 2001 - 8:20 am UTC
well, simply add the logic to do that then. Sounds easy enough to me. Change check_val to always take a customer_id as input and add a check on the column name after seeing if the value changed:
procedure check_val( l_cust_id in number,
l_tname in varchar2,
l_cname in varchar2,
l_new in varchar2,
l_old in varchar2 )
is
begin
if ( l_new <> l_old or
(l_new is null and l_old is not NULL) or
(l_new is not null and l_old is NULL) )
then
for x in ( select *
from LOOKUP_TABLE
where cust_id = l_cust_id
and table_name = l_tname
and instr( column_list, '/' || l_cname || '/' ) > 0
and rownum = 1 )
loop
insert into audit_tbl values
( sysdate, user, upper(l_tname), upper(l_cname),
l_old, l_new );
end loop;
end if;
end;
Now, if you have a lookup table like:
CUST_ID TABLE_NAME COLUMN_LIST
1 EMP /ENAME/HIREDATE/
2 EMP /JOB/SAL/
eg:
create table lookup_table
( cust_id     number,
  table_name  varchar2(30),
  column_list varchar2(4000),
  primary key( cust_id, table_name )
)
ORGANIZATION INDEX;
you'd have what you want.
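For illustration only, a sketch of how a generated trigger might call this customer-aware check_val -- assuming the application exposes the current customer id somehow (the SYS_CONTEXT attribute below is hypothetical) and that the DATE/NUMBER overloads from the original answer gained the same extra parameter:
create or replace trigger aud#emp
after update on emp
for each row
declare
    -- hypothetical: however your application exposes the customer id
    l_cust_id number := to_number( sys_context( 'my_app_ctx', 'cust_id' ) );
begin
    audit_pkg.check_val( l_cust_id, 'emp', 'ENAME',    :new.ename,    :old.ename );
    audit_pkg.check_val( l_cust_id, 'emp', 'HIREDATE', :new.hiredate, :old.hiredate );
    audit_pkg.check_val( l_cust_id, 'emp', 'SAL',      :new.sal,      :old.sal );
end;
/
The lookup against LOOKUP_TABLE happens inside check_val, so the generated triggers never need to change when a customer's column list does.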
A reader, December 31, 2001 - 2:13 pm UTC
view? sql?
A reader, March 04, 2002 - 3:50 pm UTC
Is there a way to use the audit table for viewing the record history? Like:
Empno Ename Hiredate Sal Version
233 AA 2000-02-01 1000 1
233 BB 2000-03-01 800 2
March 04, 2002 - 4:26 pm UTC
yes, you could do whatever you want. Going this far, however, you probably want to start looking at the automated "workspace mgmt" feature supplied for 817 and included in 9i and above.
http://otn.oracle.com/products/workspace_mgr/content.html
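(Not part of the original answer, but as a rough sketch of the do-it-yourself route: if the audit table were extended with the changed row's primary key -- the PK_VALUE column below is hypothetical; see the follow-ups further down about capturing keys -- each change-set could be pivoted into a version row. Note unchanged columns come back NULL per version; carrying prior values forward takes more work.)
select pk_value                                 empno,
       max( decode( cname, 'ENAME',    new ) )  ename,
       max( decode( cname, 'HIREDATE', new ) )  hiredate,
       max( decode( cname, 'SAL',      new ) )  sal,
       row_number() over ( partition by pk_value
                           order by timestamp ) version
  from audit_tbl
 where tname = 'EMP'
 group by pk_value, timestamp
 order by empno, version;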
...
Dynamic binding
reader, June 26, 2002 - 7:55 am UTC
Tom,
We have gotten a lot of solutions to our problems from your site. We appreciate your valued suggestions.
We have the following questions:
1) We are using the package you have given for the audit trail, calling it statically for each column as you specified in your article, i.e.:
audit_pkg.check_val( 'dept', 'DEPTNO', :new.DEPTNO, :old.DEPTNO);
audit_pkg.check_val( 'dept', 'DNAME', :new.DNAME, :old.DNAME);
audit_pkg.check_val( 'dept', 'LOC', :new.LOC, :old.LOC);
How can we dynamically get the old and new values for all the columns of a particular table using a cursor, as given below?
...
FOR x IN (select column_name from user_tab_columns where table_name = upper('dept'))
LOOP
AUDIT_PKG.CHECK_VAL(l_tname => 'DEPT',
l_cname => x.column_name,
l_new => ?,
l_old => ?,
l_primary => 'DEPTNO',
l_primary_value => :old.deptno,
l_tpid => TPID,
l_trigger => TRIG_CD) ;
END LOOP;
.....
As you said
":new and :old are like bind variables to the trigger, they are not 'regular' variables. you cannot dynamically access them, only 'statically'."
Is there any way or workaround to access these old and new values for a particular table? The number of columns in a table can be more than 50 and we want to audit all of them.
2) We are thinking of storing the audit trail of all the tables in one table. Or is it better to maintain different audit tables for different tables?
please don't mind this inane question.
Thanks in advance.
June 26, 2002 - 8:32 am UTC
Well, I thought that I was clear in the original answer:
...
and we said...
:new and :old are like bind variables to the trigger, they are not 'regular'
variables. you cannot dynamically access them, only 'statically'.
.......
*you cannot dynamically access them, only statically*
The solution I provided cares NOT if there are 5, 50, 500 or 1000 columns -- it generates the code for you so YOU do not care if there are 5, 50, 500 or 1000 columns. To you it is the same amount of effort (the software does the heavy lifting).
2) storing all of the audit information in a single table is perfectly acceptable. In fact, that is exactly what this example does!
Use of advanced queue
David Pujol, November 06, 2002 - 10:49 am UTC
Good. I need something similar. My system is 24x7 and there are many concurrent processes. I need to save all the old values and all the new values of each updated record into another table, along with some other data manipulations. The problem is the performance of the updates. Do you think I could use Advanced Queuing for this additional work (the data manipulation)?
Thanks in advance
November 06, 2002 - 3:20 pm UTC
Yes -- if the manipulations take a while, can be done after the transaction, and you don't want to make the end user wait while they are performed, AQ is a good approach (that's what message queues were developed for: this, and building loosely coupled systems)
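A minimal sketch of that approach, with hypothetical names throughout (audit_msg_t, audit_q, audit_qtab): the trigger enqueues a small message and a background consumer performs the heavy manipulation later.
-- one-time setup: a payload type, a queue table and a queue
create or replace type audit_msg_t as object
( tname   varchar2(30),
  cname   varchar2(30),
  old_val varchar2(2000),
  new_val varchar2(2000) );
/
begin
    dbms_aqadm.create_queue_table( queue_table        => 'audit_qtab',
                                   queue_payload_type => 'AUDIT_MSG_T' );
    dbms_aqadm.create_queue( queue_name  => 'audit_q',
                             queue_table => 'audit_qtab' );
    dbms_aqadm.start_queue( queue_name => 'audit_q' );
end;
/
-- inside the trigger (or check_val), enqueue instead of doing the work inline
declare
    l_enq_opts  dbms_aq.enqueue_options_t;
    l_msg_props dbms_aq.message_properties_t;
    l_msgid     raw(16);
begin
    dbms_aq.enqueue( queue_name         => 'audit_q',
                     enqueue_options    => l_enq_opts,
                     message_properties => l_msg_props,
                     payload            => audit_msg_t( 'EMP', 'SAL', '100', '200' ),
                     msgid              => l_msgid );
end;
/
By default the enqueue is transactional -- the message becomes visible only when the triggering transaction commits, which is exactly what you want here.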
generic trigger for auditing column level changes
A reader, November 06, 2002 - 12:38 pm UTC
A reader, January 17, 2003 - 10:51 am UTC
A reader, January 17, 2003 - 10:57 am UTC
A reader, January 30, 2003 - 4:57 pm UTC
It works in 8.1.7!!!
Damon, February 18, 2003 - 5:05 pm UTC
Thanks Tom,
The fact that workspace management works in 8.1.7 may have just saved me hours and hours and hours of coding, and had it not been for your site I would never have known that...
February 18, 2003 - 6:51 pm UTC
Test it out and let us know how it goes
how to add keys to the audit_tbl
Tong Siew Pui, April 14, 2003 - 9:58 pm UTC
your example is very useful and simple. thanks.
but how can we add additional info, like the primary key, to the audit_tbl, since this is going to be the 'generic table' keeping track of changes on all audited tables?
thanks/rgds
tsp
the key of the current updated row
A reader, April 21, 2003 - 2:29 am UTC
I actually wanted to keep the key of the audited row in audit_tbl (the audit table),
but I have a problem finding out what the key(s) of the currently updated record is/are.
How can I achieve that?
Thanks/Rgds
April 21, 2003 - 7:23 am UTC
umm, when you generate the trigger as I did, I used user_tab_columns. You can extend the example to meet your needs and include the use of USER_CONS_COLUMNS and USER_CONSTRAINTS to determine what the primary key columns are.
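For instance, a query along these lines (standard dictionary views, not from the original answer) returns the primary key columns the generator would need:
select cc.column_name, cc.position
  from user_constraints c, user_cons_columns cc
 where c.table_name       = upper('&1')
   and c.constraint_type  = 'P'
   and cc.constraint_name = c.constraint_name
 order by cc.position;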
I've found a little bug...
Marcio, July 31, 2003 - 12:45 am UTC
Tom, take a look at your first code post on this thread. Where you use to_char to convert dates, there is a tiny bug: hh23:mi:ss should be hh24:mi:ss.
<quote>
( sysdate, user, upper(l_tname), upper(l_cname),
to_char( l_old, 'dd-mon-yyyy hh24:mi:ss' ),
to_char( l_new, 'dd-mon-yyyy hh23:mi:ss' ) );
end if;
</quote>
but -- anyway! my review is...
I used your example to audit some tables, so consider that package created as-is.
ops$marcio@MARCI9I1> @askq
ops$marcio@MARCI9I1> drop table c;
Table dropped.
ops$marcio@MARCI9I1> drop table p;
Table dropped.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> create table p (
2 id int constraint p_pk primary key,
3 y varchar2(20),
4 z date )
5 /
Table created.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> create table c (
2 id int constraint c_pk primary key,
3 p_id int references p on delete cascade,
4 z varchar2(20) )
5 /
Table created.
-- I changed the trigger to catch update, delete and insert.
ops$marcio@MARCI9I1> set echo off
create or replace trigger aud_u#c
after update on c
for each row
begin
audit_pkg.check_val( 'c', 'ID', :new.ID, :old.ID, 'UPDATE');
audit_pkg.check_val( 'c', 'P_ID', :new.P_ID, :old.P_ID, 'UPDATE');
audit_pkg.check_val( 'c', 'Z', :new.Z, :old.Z, 'UPDATE');
end;
/
create or replace trigger aud_i#c
after insert on c
for each row
begin
audit_pkg.check_val( 'c', 'ID', :new.ID, :old.ID, 'INSERT' );
audit_pkg.check_val( 'c', 'P_ID', :new.P_ID, :old.P_ID, 'INSERT' );
audit_pkg.check_val( 'c', 'Z', :new.Z, :old.Z, 'INSERT' );
end;
/
create or replace trigger aud_d#c
after delete on c
for each row
begin
audit_pkg.check_val( 'c', 'ID', :new.ID, :old.ID, 'DELETE' );
audit_pkg.check_val( 'c', 'P_ID', :new.P_ID, :old.P_ID, 'DELETE' );
audit_pkg.check_val( 'c', 'Z', :new.Z, :old.Z, 'DELETE' );
end;
/
Trigger created.
Trigger created.
Trigger created.
create or replace trigger aud_u#p
after update on p
for each row
begin
audit_pkg.check_val( 'p', 'ID', :new.ID, :old.ID, 'UPDATE');
audit_pkg.check_val( 'p', 'Y', :new.Y, :old.Y, 'UPDATE');
audit_pkg.check_val( 'p', 'Z', :new.Z, :old.Z, 'UPDATE');
end;
/
create or replace trigger aud_i#p
after insert on p
for each row
begin
audit_pkg.check_val( 'p', 'ID', :new.ID, :old.ID, 'INSERT' );
audit_pkg.check_val( 'p', 'Y', :new.Y, :old.Y, 'INSERT' );
audit_pkg.check_val( 'p', 'Z', :new.Z, :old.Z, 'INSERT' );
end;
/
create or replace trigger aud_d#p
after delete on p
for each row
begin
audit_pkg.check_val( 'p', 'ID', :new.ID, :old.ID, 'DELETE' );
audit_pkg.check_val( 'p', 'Y', :new.Y, :old.Y, 'DELETE' );
audit_pkg.check_val( 'p', 'Z', :new.Z, :old.Z, 'DELETE' );
end;
/
Trigger created.
Trigger created.
Trigger created.
-- several inserts and deletes (cascades included) to populate our audit_dml
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> begin
2 for i in 1..5
3 loop
4 insert into p values ( i, 'DESC-'||i, sysdate );
5
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> begin
2 for i in 1..10
3 loop
4 insert into c values ( i, mod(i,2) + 1, 'VALOR: '||i );
5 end loop;
6 end;
7 /
PL/SQL procedure successfully completed.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> set echo off
ops$marcio@MARCI9I1> select * from p;
ID Y Z
---------- -------------------- ---------
1 DESC-1 31-JUL-03
2 DESC-2 31-JUL-03
3 DESC-3 31-JUL-03
4 DESC-4 31-JUL-03
5 DESC-5 31-JUL-03
5 rows selected.
ops$marcio@MARCI9I1> select * from c;
ID P_ID Z
---------- ---------- --------------------
1 2 VALOR: 1
2 1 VALOR: 2
3 2 VALOR: 3
4 1 VALOR: 4
5 2 VALOR: 5
6 1 VALOR: 6
7 2 VALOR: 7
8 1 VALOR: 8
9 2 VALOR: 9
10 1 VALOR: 10
10 rows selected.
ops$marcio@MARCI9I1> delete from p where id = 2;
1 row deleted.
ops$marcio@MARCI9I1> select * from p;
ID Y Z
---------- -------------------- ---------
1 DESC-1 31-JUL-03
3 DESC-3 31-JUL-03
4 DESC-4 31-JUL-03
5 DESC-5 31-JUL-03
4 rows selected.
ops$marcio@MARCI9I1> select * from c;
ID P_ID Z
---------- ---------- --------------------
2 1 VALOR: 2
4 1 VALOR: 4
6 1 VALOR: 6
8 1 VALOR: 8
10 1 VALOR: 10
5 rows selected.
-- here is my generic table, so...
ops$marcio@MARCI9I1> set lines 50
ops$marcio@MARCI9I1> desc audit_dml
Name Null? Type
----------------------- -------- ----------------
TIMESTAMP DATE
WHO VARCHAR2(30)
OP VARCHAR2(6)
TNAME VARCHAR2(30)
CNAME VARCHAR2(30)
OLD VARCHAR2(2000)
NEW VARCHAR2(2000)
ops$marcio@MARCI9I1> select * from audit_dml;
TIMESTAMP WHO OP TNAME CNAME OLD NEW
--------- ---------- ------ ----- ---------- ---------- --------------------
31-JUL-03 marcio INSERT P ID 1
31-JUL-03 marcio INSERT P Y DESC-1
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:17:36
31-JUL-03 marcio INSERT P ID 2
31-JUL-03 marcio INSERT P Y DESC-2
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:17:36
31-JUL-03 marcio INSERT P ID 3
31-JUL-03 marcio INSERT P Y DESC-3
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:17:36
31-JUL-03 marcio INSERT P ID 4
31-JUL-03 marcio INSERT P Y DESC-4
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:17:36
31-JUL-03 marcio INSERT P ID 5
31-JUL-03 marcio INSERT P Y DESC-5
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:17:36
31-JUL-03 marcio INSERT C ID 1
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 1
31-JUL-03 marcio INSERT C ID 2
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 2
31-JUL-03 marcio INSERT C ID 3
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 3
31-JUL-03 marcio INSERT C ID 4
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 4
31-JUL-03 marcio INSERT C ID 5
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 5
31-JUL-03 marcio INSERT C ID 6
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 6
31-JUL-03 marcio INSERT C ID 7
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 7
31-JUL-03 marcio INSERT C ID 8
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 8
31-JUL-03 marcio INSERT C ID 9
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 9
31-JUL-03 marcio INSERT C ID 10
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 10
31-JUL-03 marcio INSERT P ID 1
31-JUL-03 marcio INSERT P Y DESC-1
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:18:46
31-JUL-03 marcio INSERT P ID 2
31-JUL-03 marcio INSERT P Y DESC-2
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:18:46
31-JUL-03 marcio INSERT P ID 3
31-JUL-03 marcio INSERT P Y DESC-3
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:18:46
31-JUL-03 marcio INSERT P ID 4
31-JUL-03 marcio INSERT P Y DESC-4
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:18:46
31-JUL-03 marcio INSERT P ID 5
31-JUL-03 marcio INSERT P Y DESC-5
31-JUL-03 marcio INSERT P Z 31-jul-2003 01:18:46
31-JUL-03 marcio INSERT C ID 1
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 1
31-JUL-03 marcio INSERT C ID 2
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 2
31-JUL-03 marcio INSERT C ID 3
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 3
31-JUL-03 marcio INSERT C ID 4
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 4
31-JUL-03 marcio INSERT C ID 5
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 5
31-JUL-03 marcio INSERT C ID 6
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 6
31-JUL-03 marcio INSERT C ID 7
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 7
31-JUL-03 marcio INSERT C ID 8
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 8
31-JUL-03 marcio INSERT C ID 9
31-JUL-03 marcio INSERT C P_ID 2
31-JUL-03 marcio INSERT C Z VALOR: 9
31-JUL-03 marcio INSERT C ID 10
31-JUL-03 marcio INSERT C P_ID 1
31-JUL-03 marcio INSERT C Z VALOR: 10
31-JUL-03 marcio DELETE C ID 1
31-JUL-03 marcio DELETE C P_ID 2
31-JUL-03 marcio DELETE C Z VALOR: 1
31-JUL-03 marcio DELETE C ID 3
31-JUL-03 marcio DELETE C P_ID 2
31-JUL-03 marcio DELETE C Z VALOR: 3
31-JUL-03 marcio DELETE C ID 5
31-JUL-03 marcio DELETE C P_ID 2
31-JUL-03 marcio DELETE C Z VALOR: 5
31-JUL-03 marcio DELETE C ID 7
31-JUL-03 marcio DELETE C P_ID 2
31-JUL-03 marcio DELETE C Z VALOR: 7
31-JUL-03 marcio DELETE C ID 9
31-JUL-03 marcio DELETE C P_ID 2
31-JUL-03 marcio DELETE C Z VALOR: 9
31-JUL-03 marcio DELETE P ID 2
31-JUL-03 marcio DELETE P Y DESC-2
31-JUL-03 marcio DELETE P Z 31-jul-200
3 01:18:46
I would like to build a query that returns this:
insert into p ( id, y, z ) values ( 1, 'DESC-1', to_date('31-jul-2003 01:18:46', 'dd-mon-yyyy hh24:mi:ss') )
/
after that
insert into c ...
after that
delete from c ...
after
delete from p
sorted by timestamp to respect RI (Referential Integrity).
thanks,
Marcio
July 31, 2003 - 7:08 am UTC
you might consider then adding a sequence to the mix and:
create or replace trigger aud_i#p
after insert on p
for each row
declare
l_seq number;
begin
select s.nextval into l_seq from dual;
audit_pkg.check_val( l_seq, 'p', 'ID', :new.ID, :old.ID, 'INSERT' );
audit_pkg.check_val( l_seq, 'p', 'Y', :new.Y, :old.Y, 'INSERT' );
audit_pkg.check_val( l_seq, 'p', 'Z', :new.Z, :old.Z, 'INSERT' );
end;
/
passing it into the audit pkg. that would let you group the statements together.
bear in mind, concurrent transactions are going to KILL you as the timestamp shows the modification order, NOT the commit order! be careful there.
Performance.
Marcio, July 31, 2003 - 9:05 am UTC
Yes, I know this will kill my performance; I've read enough about that here. However, this is a client requirement and I didn't see any other way to do it.
I'm doing a data conversion/migration from Excel sheets into Oracle tables for each branch of the company. I've already finished around 60% of the work. But the company wants to start working with the data before I'm done, so I will carry on with the migration while they make modifications. Well, we have a problem here (two schemas out of sync). How can I keep them in sync? The company uses 817.
Marcio
July 31, 2003 - 9:42 am UTC
no, it is not that it'll kill performance -- it is that it'll kill you trying to come up with the serial order of operations from this audit trail in a multi-user environment.
Ordering by TIMESTAMP isn't good enough in a multi-user system in every case. Something to watch out for is all.
What do you suggest?
Marcio, July 31, 2003 - 11:53 am UTC
Any suggestion for this case?
July 31, 2003 - 12:37 pm UTC
it starts to get *really* complicated.
we call it replication and it took a long time to code.
streams in 9ir2 may be of use if you want to get down and dirty.
Query on audit trail
Marcio, August 03, 2003 - 10:25 pm UTC
Again! I've investigated the environment and there are no mass update/delete/insert events on these tables. They are not critical tables and only the head office has rights to change them. So I went back to your old solution (the DML event trigger) -- I put the sequence in the trigger as you advised, re-ran the test, and got something like this:
ops$marcio@MARCI9I1> @audit_dml
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> drop table audit_dml
2 /
Table dropped.
ops$marcio@MARCI9I1> create table audit_dml
2 ( timestamp date,
3 who varchar2(30),
4 op varchar2(6),
5 seq number(6),
6 tname varchar2(30),
7 cname varchar2(30),
8 old varchar2(2000),
9 new varchar2(2000)
10 )
11 /
Table created.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> drop sequence x#_seq;
Sequence dropped.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> create sequence x#_seq;
Sequence created.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> create or replace package audit_pkg
2 as
3 procedure check_val(
4 l_tname in varchar2,
5 l_cname in varchar2,
6 l_new in varchar2,
7 l_old in varchar2,
8 l_op in varchar2,
9 l_seq in number
10 );
11
12 procedure check_val(
13 l_tname in varchar2,
14 l_cname in varchar2,
15 l_new in date,
16 l_old in date,
17 l_op in varchar2,
18 l_seq in number
19 );
20
21 procedure check_val(
22 l_tname in varchar2,
23 l_cname in varchar2,
24 l_new in number,
25 l_old in number,
26 l_op in varchar2,
27 l_seq in number
28 );
29 end;
30 /
Package created.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> create or replace package body audit_pkg
2 as
3
4 procedure check_val( l_tname in varchar2,
5 l_cname in varchar2,
6 l_new in varchar2,
7 l_old in varchar2,
8 l_op in varchar2,
9 l_seq in number)
10 is
11 begin
12 if ( l_new <> l_old or
13 (l_new is null and l_old is not NULL) or
14 (l_new is not null and l_old is NULL) )
15 then
16 insert into audit_dml values
17 ( sysdate, sys_context('USERENV','OS_USER'), l_op, l_seq, upper(l_tname), upper(l_cname),
18 l_old, l_new );
19 end if;
20 end;
21
22 procedure check_val( l_tname in varchar2, l_cname in varchar2,
23 l_new in date, l_old in date, l_op in varchar2, l_seq in number )
24 is
25 begin
26 if ( l_new <> l_old or
27 (l_new is null and l_old is not NULL) or
28 (l_new is not null and l_old is NULL) )
29 then
30 insert into audit_dml values
31 ( sysdate, sys_context('USERENV','OS_USER'), l_op,l_seq, upper(l_tname), upper(l_cname),
32 to_char( l_old, 'dd-mon-yyyy hh24:mi:ss' ),
33 to_char( l_new, 'dd-mon-yyyy hh24:mi:ss' ) );
34 end if;
35 end;
36
37 procedure check_val( l_tname in varchar2, l_cname in varchar2,
38 l_new in number, l_old in number, l_op in varchar2, l_seq in number )
39 is
40 begin
41 if ( l_new <> l_old or
42 (l_new is null and l_old is not NULL) or
43 (l_new is not null and l_old is NULL) )
44 then
45 insert into audit_dml values
46 ( sysdate, sys_context('USERENV','OS_USER'), l_op, l_seq, upper(l_tname), upper(l_cname),
47 l_old, l_new );
48 end if;
49 end;
50
51 end audit_pkg;
52 /
Package body created.
ops$marcio@MARCI9I1> set echo off
ops$marcio@MARCI9I1> @askq
ops$marcio@MARCI9I1> drop table c;
Table dropped.
ops$marcio@MARCI9I1> drop table p;
Table dropped.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> create table p (
2 id int constraint p_pk primary key,
3 y varchar2(20),
4 z date )
5 /
Table created.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> create table c (
2 id int constraint c_pk primary key,
3 p_id int references p on delete cascade,
4 z varchar2(20) )
5 /
Table created.
ops$marcio@MARCI9I1> set echo off
create or replace trigger aud#c
after update or insert or delete on c
for each row
declare
l_seq number;
l_dml varchar2(6);
begin
select x#_seq.nextval into l_seq from dual;
if inserting
then l_dml := 'INSERT';
elsif updating
then l_dml := 'UPDATE';
else l_dml := 'DELETE';
end if;
audit_pkg.check_val( 'c', 'ID', :new.ID, :old.ID, l_dml, l_seq);
audit_pkg.check_val( 'c', 'P_ID', :new.P_ID, :old.P_ID, l_dml, l_seq);
audit_pkg.check_val( 'c', 'Z', :new.Z, :old.Z, l_dml, l_seq);
end;
/
Trigger created.
create or replace trigger aud#p
after update or insert or delete on p
for each row
declare
l_seq number;
l_dml varchar2(6);
begin
select x#_seq.nextval into l_seq from dual;
if inserting
then l_dml := 'INSERT';
elsif updating
then l_dml := 'UPDATE';
else l_dml := 'DELETE';
end if;
audit_pkg.check_val( 'p', 'ID', :new.ID, :old.ID, l_dml, l_seq);
audit_pkg.check_val( 'p', 'Y', :new.Y, :old.Y, l_dml, l_seq);
audit_pkg.check_val( 'p', 'Z', :new.Z, :old.Z, l_dml, l_seq);
end;
/
Trigger created.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> begin
2 for i in 1..5
3 loop
4 insert into p values ( i, 'DESC-'||i, sysdate );
5
6 end loop;
7 end;
8 /
PL/SQL procedure successfully completed.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> begin
2 for i in 1..10
3 loop
4 insert into c values ( i, mod(i,2) + 1, 'VALOR: '||i );
5 end loop;
6 end;
7 /
PL/SQL procedure successfully completed.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> delete from p where id = 2;
1 row deleted.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> update c set z = 'MUDEI' where id = 10;
1 row updated.
ops$marcio@MARCI9I1>
ops$marcio@MARCI9I1> col WHO format a10
ops$marcio@MARCI9I1> col OP format a6
ops$marcio@MARCI9I1> col TNAME format a4
ops$marcio@MARCI9I1> col CNAME format a4
ops$marcio@MARCI9I1> col OLD format a30
ops$marcio@MARCI9I1> col NEW format a30
ops$marcio@MARCI9I1> select * from audit_dml;
TIMESTAMP WHO OP SEQ TNAM CNAM OLD NEW
--------- ---------- ------ ----- ---- ---- ------------------------------ ------------------------------
03-AUG-03 marcio INSERT 1 P ID 1
03-AUG-03 marcio INSERT 1 P Y DESC-1
03-AUG-03 marcio INSERT 1 P Z 03-aug-2003 23:19:08
03-AUG-03 marcio INSERT 2 P ID 2
03-AUG-03 marcio INSERT 2 P Y DESC-2
03-AUG-03 marcio INSERT 2 P Z 03-aug-2003 23:19:09
03-AUG-03 marcio INSERT 3 P ID 3
03-AUG-03 marcio INSERT 3 P Y DESC-3
03-AUG-03 marcio INSERT 3 P Z 03-aug-2003 23:19:09
03-AUG-03 marcio INSERT 4 P ID 4
03-AUG-03 marcio INSERT 4 P Y DESC-4
03-AUG-03 marcio INSERT 4 P Z 03-aug-2003 23:19:09
03-AUG-03 marcio INSERT 5 P ID 5
03-AUG-03 marcio INSERT 5 P Y DESC-5
03-AUG-03 marcio INSERT 5 P Z 03-aug-2003 23:19:09
03-AUG-03 marcio INSERT 6 C ID 1
03-AUG-03 marcio INSERT 6 C P_ID 2
03-AUG-03 marcio INSERT 6 C Z VALOR: 1
03-AUG-03 marcio INSERT 7 C ID 2
03-AUG-03 marcio INSERT 7 C P_ID 1
03-AUG-03 marcio INSERT 7 C Z VALOR: 2
03-AUG-03 marcio INSERT 8 C ID 3
03-AUG-03 marcio INSERT 8 C P_ID 2
03-AUG-03 marcio INSERT 8 C Z VALOR: 3
03-AUG-03 marcio INSERT 9 C ID 4
03-AUG-03 marcio INSERT 9 C P_ID 1
03-AUG-03 marcio INSERT 9 C Z VALOR: 4
03-AUG-03 marcio INSERT 10 C ID 5
03-AUG-03 marcio INSERT 10 C P_ID 2
03-AUG-03 marcio INSERT 10 C Z VALOR: 5
03-AUG-03 marcio INSERT 11 C ID 6
03-AUG-03 marcio INSERT 11 C P_ID 1
03-AUG-03 marcio INSERT 11 C Z VALOR: 6
03-AUG-03 marcio INSERT 12 C ID 7
03-AUG-03 marcio INSERT 12 C P_ID 2
03-AUG-03 marcio INSERT 12 C Z VALOR: 7
03-AUG-03 marcio INSERT 13 C ID 8
03-AUG-03 marcio INSERT 13 C P_ID 1
03-AUG-03 marcio INSERT 13 C Z VALOR: 8
03-AUG-03 marcio INSERT 14 C ID 9
03-AUG-03 marcio INSERT 14 C P_ID 2
03-AUG-03 marcio INSERT 14 C Z VALOR: 9
03-AUG-03 marcio INSERT 15 C ID 10
03-AUG-03 marcio INSERT 15 C P_ID 1
03-AUG-03 marcio INSERT 15 C Z VALOR: 10
03-AUG-03 marcio DELETE 16 C ID 1
03-AUG-03 marcio DELETE 16 C P_ID 2
03-AUG-03 marcio DELETE 16 C Z VALOR: 1
03-AUG-03 marcio DELETE 17 C ID 3
03-AUG-03 marcio DELETE 17 C P_ID 2
03-AUG-03 marcio DELETE 17 C Z VALOR: 3
03-AUG-03 marcio DELETE 18 C ID 5
03-AUG-03 marcio DELETE 18 C P_ID 2
03-AUG-03 marcio DELETE 18 C Z VALOR: 5
03-AUG-03 marcio DELETE 19 C ID 7
03-AUG-03 marcio DELETE 19 C P_ID 2
03-AUG-03 marcio DELETE 19 C Z VALOR: 7
03-AUG-03 marcio DELETE 20 C ID 9
03-AUG-03 marcio DELETE 20 C P_ID 2
03-AUG-03 marcio DELETE 20 C Z VALOR: 9
03-AUG-03 marcio DELETE 21 P ID 2
03-AUG-03 marcio DELETE 21 P Y DESC-2
03-AUG-03 marcio DELETE 21 P Z 03-aug-2003 23:19:09
03-AUG-03 marcio UPDATE 22 C Z VALOR: 10 MUDEI
Can you show me whether it is possible to write a query that returns the inserts... deletes... and updates...
I was thinking about
select op || case op
when 'INSERT' then ' INTO ' || tname || ' VALUES ( '|| new
when 'UPDATE' then tname || ' set ' || cname || ' = ' || new
when 'DELETE' then ' from ' || tname || ' where ' || cname || ' = ' || old
end
from audit_dml
but the UPDATE case won't work as written, and for DELETE I would have to put in some WHERE clause.
Thanks in advance
Marcio
August 04, 2003 - 8:28 am UTC
yes it is possible -- it is just "more code".
you'll probably want to code a stored procedure for it.
Tips
Marcio, August 04, 2003 - 10:01 am UTC
Tom, I wouldn't want you to do it for me; I just want some tips/clues on how to write this stored procedure. Could you?
Thanks,
August 04, 2003 - 10:19 am UTC
short of writing the code? no, not really. come on -- this is just some simple code here, nothing very complex. you have everything you need.
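Purely as an illustration of the kind of procedure being alluded to (a sketch, not Tom's code): walk AUDIT_DML grouped by SEQ and print replay DML with DBMS_OUTPUT. It quotes everything as a string (date columns would need TO_DATE handling) and, as noted above, the UPDATE branch cannot emit a proper WHERE clause until the audit table also captures the primary key.
create or replace procedure gen_replay
as
    l_list varchar2(32767);
    l_vals varchar2(32767);
begin
    for r in ( select seq, op, tname
                 from audit_dml
                group by seq, op, tname
                order by seq )
    loop
        l_list := null; l_vals := null;
        for c in ( select cname, old, new from audit_dml where seq = r.seq )
        loop
            if    r.op = 'INSERT' then
                l_list := l_list || ',' || c.cname;
                l_vals := l_vals || ',''' || c.new || '''';
            elsif r.op = 'UPDATE' then
                l_list := l_list || ',' || c.cname || ' = ''' || c.new || '''';
            else  -- DELETE: rebuild the old row image as the where clause
                l_list := l_list || ' and ' || c.cname || ' = ''' || c.old || '''';
            end if;
        end loop;
        if    r.op = 'INSERT' then
            dbms_output.put_line( 'insert into ' || r.tname ||
                ' (' || substr( l_list, 2 ) || ') values (' ||
                substr( l_vals, 2 ) || ');' );
        elsif r.op = 'UPDATE' then
            -- incomplete on purpose: needs a PK-based where clause
            dbms_output.put_line( 'update ' || r.tname ||
                ' set ' || substr( l_list, 2 ) || ';' );
        else
            dbms_output.put_line( 'delete from ' || r.tname ||
                ' where 1=1' || l_list || ';' );
        end if;
    end loop;
end;
/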
Really Useful stuff
Abhijit, August 04, 2003 - 12:02 pm UTC
Performance
michael, October 17, 2003 - 1:05 pm UTC
Tom,
What is better for performance in an update trigger - the first IF condition or the second IF condition?
The second is definitely easier to code and read.
create or replace trigger bur$t1
before update on t1 for each row
begin
IF ((:new.name IS NULL AND :old.name IS NOT NULL) or
(:new.name IS NOT NULL AND :old.name IS NULL) or
(:new.name <> :old.name)) THEN
update t2 set name=:new.name where id=:old.id;
END IF;
IF (NVL(:new.name,'0') <> NVL(:old.name,'0')) then
update t2 set name=:new.name where id=:old.id;
END IF;
end;
/
Thanks.
October 17, 2003 - 2:04 pm UTC
*benchmarking*
it is so *easy*....
when I am faced with a question like this -- it is the first thing I do. Runstats makes it fairly easy. Consider:
ops$tkyte@ORA920> /*
DOC>
DOC>drop table t1;
DOC>drop table t2;
DOC>
DOC>create table t1 ( name varchar2(30) );
DOC>
DOC>create or replace trigger t1
DOC>before update on t1 for each row
DOC>begin
DOC>IF ((:new.name IS NULL AND :old.name IS NOT NULL) or
DOC> (:new.name IS NOT NULL AND :old.name IS NULL) or
DOC> (:new.name <> :old.name)) THEN
DOC> null;
DOC>END IF;
DOC>end;
DOC>/
DOC>
DOC>create table t2 ( name varchar2(30) );
DOC>
DOC>create or replace trigger t2
DOC>before update on t2 for each row
DOC>begin
DOC>IF(NVL(:new.name,'0') <> NVL(:old.name,'0')) then
DOC> null;
DOC>END IF;
DOC>end;
DOC>/
DOC>*/
ops$tkyte@ORA920>
ops$tkyte@ORA920> truncate table t1;
Table truncated.
ops$tkyte@ORA920> truncate table t2;
Table truncated.
ops$tkyte@ORA920>
ops$tkyte@ORA920> insert into t1 select decode( mod(rownum,25), 0, null, object_name ) from all_objects;
31665 rows created.
ops$tkyte@ORA920> insert into t2 select * from t1;
31665 rows created.
ops$tkyte@ORA920>
ops$tkyte@ORA920> exec runstats_pkg.rs_start
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> update t1 set name = substr(name,1,29)||'x';
31665 rows updated.
ops$tkyte@ORA920> exec runstats_pkg.rs_middle;
PL/SQL procedure successfully completed.
ops$tkyte@ORA920> update t2 set name = substr(name,1,29)||'x';
31665 rows updated.
ops$tkyte@ORA920> exec runstats_pkg.rs_stop(10000)
Run1 ran in 377 hsecs
Run2 ran in 667 hsecs
run 1 ran in 56.52% of the time
Name Run1 Run2 Diff
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
269,830 272,593 2,763 98.99%
PL/SQL procedure successfully completed.
that shows that if you do a couple of extra keystrokes -- avoiding the nvl function calls and the comparison of temporaries returned from functions -- it is faster
It can be more useful if ...
Anil Sharma, October 28, 2003 - 1:11 am UTC
This can be even more useful if we can use :new and :old dynamically.
Is it possible to create one row per changed record, i.e.
old: f1->value1||f2->value2 ... etc.; new: f1->newval1||f2->newval2 ... etc.?
Thanks for the useful solution.
Workspace Management
A reader, November 04, 2003 - 3:05 am UTC
Hi Tom,
What would you recommend: Workspace Management to audit a table, or creating DML triggers like these?
Please tell us, if only updates are to be audited, would a trigger be the better option, or would Workspace Management still be better?
Also, do the changes Oracle makes to enable Workspace Management have side effects? For example, the original table becomes a view -- how do we view the constraints that applied to the original table after enabling Workspace Management? And there are restrictions, e.g. after enabling Workspace Management the primary key cannot be updated; if that may sometimes happen, would a trigger be better?
Thanks for your time.
November 04, 2003 - 8:20 am UTC
If workspace mgmt met my requirements and I could work within its documented implementation constraints -- I would certainly choose that 100% of the time over writing lots of code. Absolutely.
There is an entire "application developers guide for workspace mgmt"; read it, see if it meets your needs. If so, absolutely use it. If not, write lots of code that you will maintain yourself forever.
It is the "make vs buy" decision. You buy lots of stuff with compromises -- meaning, even if workspace mgmt does not meet 100% of your needs (maybe 90%) you should give some serious consideration to changing your requirements to permit its use. Using builtin functionality -- in order to avoid writing scads of code -- is to me always a good idea.
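For reference, version-enabling a table with Workspace Management is a one-liner per table (DBMS_WM is the documented API; the generated history view follows the <table>_HIST naming convention):
-- keep full row history, with no overwriting of prior versions
exec dbms_wm.enableversioning( 'EMP', 'VIEW_WO_OVERWRITE' )

-- the row history is then queryable from the generated view
select * from emp_hist;

-- and it can be turned off again
exec dbms_wm.disableversioning( 'EMP' )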
using UPDATING() call
vll, December 04, 2003 - 3:09 pm UTC
We are using
if updating( 'field_name' ) then
{insert something in audit table}
end if;
instead of
IF ((:new.name IS NULL AND :old.name IS NOT NULL) or
(:new.name IS NOT NULL AND :old.name IS NULL) or
(:new.name <> :old.name)) THEN
{insert something in audit table}
end if;
To me it's easier. Am I missing something? Maybe there are performance issues with using UPDATING()?
Thank you very much, Tom!
December 04, 2003 - 4:26 pm UTC
that is dandy -- the original question that prompted this only wanted to audit changed data (some applications update ALL columns to update a single column)
IF updating ('colname') - doesn't do the job...
Mark, December 04, 2003 - 7:22 pm UTC
If the update statement references the column, it will still write the audit row even if :old.colval = :new.colval.
Using UPDATING() - it depends on what we really need
vll, December 05, 2003 - 2:14 pm UTC
I think it's a question of what we really need. If we really need to track only CHANGES, then you are absolutely right; it will create redundant records in the audit table. But in all the cases I have dealt with before, we needed to record the UPDATE EVENT. In other words, we NEEDED this record in the audit table. Say a user updates some column in the whole table to some value. With the UPDATING() call we get a record in the audit table for each row, so we can tell who updated this column, and when, for ANY record. With the :new.value <> :old.value (plus NULL checking) approach we have this data only for the rows where the column actually changed, and we lose the UPDATE EVENT by that user.
Again -- it's a question of what we really need. For me it's better to have MORE data in the audit table. If we want to get rid of redundant records we can always do it later by deleting all duplicate records except the very first one historically.
Thank you all!
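One possible shape for that later cleanup (a sketch only; with no key column in this particular audit table the partitioning is approximate, and you would add the row's key to the PARTITION BY if you capture one):
delete from audit_tbl
 where rowid in
 ( select rid
     from ( select rowid rid,
                   row_number() over ( partition by tname, cname, old, new
                                       order by timestamp ) rn
              from audit_tbl )
    where rn > 1 );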
Audit Dilemna
Mike, December 16, 2003 - 4:14 pm UTC
I have found many of your solutions out here both highly informative and entertaining. I especially love the rants about barnyard database design.
I have read the entire thread above and found it very useful, but I have a question about how to find out what value(s) changed in a row by examining audit trail records. However, let me give some background information. I inherited this scheme upon arrival.
The developers are programmatically generating the audit table ddl and triggers for each audited table from the Rose Object Model they are using. These are simple audit triggers, such that all inserts, updates, and deletes result in the entire row being inserted into an audit table that has all of the production table columns. In addition to those columns, however, there are 5 extra audit columns. OPS specifies the operation (insert, update, delete) and then there are columns to capture the system time, application name, os user, and terminal. In the production tables themselves, there are extra columns in every table for the name of the user who created the row and date, and the name of the user who updated the row and date. This is a web front end and uses a pooled connection, but the developers have written code to capture the real user id in the app and it gets passed down to the security columns in the production table when they commit some work. Whenever they do so, as noted above, the entire row is sent to the counterpart audit table.
Sorry for the long setup, but now here is my immediate problem. For updates only, the requirement is to capture only what changed in a row by looking at a row in the audit trail table and comparing it to the last version of that row. I have been told that the dumb (primary) keys will never be updated. Please picture this: a row is inserted with a dumb id as the PK. Then, Column 2 is updated from "nice" to "naughty". So now there is an audit row showing column 2 with "naughty" as the value. It has timestamp and user id. Column 5 is then updated from "check" to "coal". So now there is a row showing Column 2 as "naughty" and Column 5 as "coal". How can I generate a report that will only show what value(s) changed when a row has been labeled update in the audit table? The old and new values are required. So they would want to see something like
PKVal: 123 Column2 OldValue: Nice NewValue: Naughty Updt User: John Updt Date: 03-DEC-2003
PKVal: 123 Column5 Old Value: Check New Value: Coal Updt User: Mary Updt Date: 04-DEC-2003
In addition, they would like to see the entire history of a specific row (or rows) when requested, including every update with old and new values as shown above.
I work with a developer who, inspired by one of your solutions above, developed complex triggers with logic to do this up front, but the PM voted it down. In our alternate approach, the audit tables had old and new columns for each column in the production tables. For inserts, only the NEW values were populated. For deletes, the OLD values were populated. For updates, we captured the new and old values ONLY for the column(s) that changed. We thought it was a good approach, but the PM signalled "thumbs down" and said we should stick to the simple triggers and extract the deltas (column value differences) with code at the back end. Any ideas about how to get these "deltas" will be appreciated.
December 16, 2003 - 6:45 pm UTC
using lag you can do this pretty easily. consider:
ops$tkyte@ORA9IR2> select * from t;
ID DT C1 C2 C3
---------- --------- ---------- --------- --------------------
1 27-OCT-03 1 16-DEC-03 nice
1 28-OCT-03 1 16-DEC-03 naughty
1 29-OCT-03 1 14-DEC-03 naughty
1 30-OCT-03 2 14-DEC-03 naughty
1 30-OCT-03 3 13-DEC-03 nasty
2 27-OCT-03 11 16-NOV-03 xnice
2 28-OCT-03 11 16-NOV-03 xnaughty
2 29-OCT-03 11 14-NOV-03 xnaughty
2 30-OCT-03 12 14-NOV-03 xnaughty
2 30-OCT-03 13 13-NOV-03 xnasty
10 rows selected.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select id, dt,
2 nullif( c1, lag(c1) over (partition by id order by dt) ) c1,
3 nullif( c2, lag(c2) over (partition by id order by dt) ) c2,
4 nullif( c3, lag(c3) over (partition by id order by dt) ) c3
5 from t;
ID DT C1 C2 C3
---------- --------- ---------- --------- --------------------
1 27-OCT-03 1 16-DEC-03 nice
1 28-OCT-03 naughty
1 29-OCT-03 14-DEC-03
1 30-OCT-03 2
1 30-OCT-03 3 13-DEC-03 nasty
2 27-OCT-03 11 16-NOV-03 xnice
2 28-OCT-03 xnaughty
2 29-OCT-03 14-NOV-03
2 30-OCT-03 12
2 30-OCT-03 13 13-NOV-03 xnasty
10 rows selected.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select id, dt,
2 decode( c1, old_c1, null, ' c1 was: ' || old_c1 || ' is: ' || c1 ) ||
3 decode( c2, old_c2, null, ' c2 was: ' || old_c2 || ' is: ' || c2 ) ||
4 decode( c3, old_c3, null, ' c3 was: ' || old_c3 || ' is: ' || c3 )
5 from (
6 select id, dt, c1, c2, c3,
7 lag(c1) over (partition by id order by dt) old_c1,
8 lag(c2) over (partition by id order by dt) old_c2,
9 lag(c3) over (partition by id order by dt) old_c3
10 from t )
11 ;
ID DT
---------- ---------
DECODE(C1,OLD_C1,NULL,'C1WAS:'||OLD_C1||'IS:'||C1)||DECODE(C2,OLD_C2,NULL,'C2WAS:'||OLD_C2||'IS:'||C2)||DECODE(C3,OLD_C3,NULL,'C3WA
-----------------------------------------------------------------------------------------------------------------------------------
1 27-OCT-03
c1 was: is: 1 c2 was: is: 16-DEC-03 c3 was: is: nice
1 28-OCT-03
c3 was: nice is: naughty
1 29-OCT-03
c2 was: 16-DEC-03 is: 14-DEC-03
1 30-OCT-03
c1 was: 1 is: 2
1 30-OCT-03
c1 was: 2 is: 3 c2 was: 14-DEC-03 is: 13-DEC-03 c3 was: naughty is: nasty
2 27-OCT-03
c1 was: is: 11 c2 was: is: 16-NOV-03 c3 was: is: xnice
2 28-OCT-03
c3 was: xnice is: xnaughty
2 29-OCT-03
c2 was: 16-NOV-03 is: 14-NOV-03
2 30-OCT-03
c1 was: 11 is: 12
2 30-OCT-03
c1 was: 12 is: 13 c2 was: 14-NOV-03 is: 13-NOV-03 c3 was: xnaughty is: xnasty
10 rows selected.
I'll leave any more formatting to you -- but you get the gist.
A reader, December 17, 2003 - 9:45 am UTC
Just a quick thought for Mike to take to his PM.
We have the same situation with regard to auditing.
Our requirement is further complicated in that the client,
when viewing audit histories, wishes to be able to see what a "record" would have shown at a given point in time.
This sounds no different from what has already been described.
However, consider foreign key lookup values.
Eg a lookup table 'COLOUR'
Point In Time ID CODE
1 1 RED
1 2 BLUE
2 1 GREEN
2 2 YELLOW.
Then table T (id, f_colour_id) changes from
(1,1) to (1,2). What will the client see when viewing the "full record" audit trail?
Does he see the change as RED to BLUE,
or GREEN to YELLOW,
or RED to YELLOW?
A new wrinkle
Michael, January 13, 2004 - 3:20 pm UTC
Thanks for letting me know about the analytics, somehow I missed those. I need an opinion regarding the audit triggers I am working on. There are two choices. Both triggers collect all of the new column values for inserts, and all of the old column values for deletes. The problem is how to record updates. In one trigger, I could record all of the old values for columns that did not change, only showing a new value in the column that changed. (P.S. I am not allowed to have "old" and "new" column values in the same row, and I cannot audit to a single table.) So in those rows, it would not be obvious which column changed; that could only be pulled out through code. In the other approach, whenever an update is done, only the primary key value and the new value of the column that changed are recorded (of course there are timestamp and userid columns in the audit table) but all of the other columns are left null. So consider the following data in the audit table:
EMP# FNAM LNAM HIREDATE JOBID SALY OPR UPDUSR UPDDAT
135 KI GEE 12-DEC-99 STCLK 2400 INS LDORAN 12-DEC-99
135 KI GEE 12-DEC-99 STSPV 2600 UPD LDORAN 10-MAR-01
135 KI GEE 12-DEC-99 STSPV 2800 UPD LDORAN 12-APR-02
135 KI POPP 12-DEC-99 STSPV 2800 UPD SBAIDA 15-JUN-02
135 KI POPP 12-DEC-99 STMGR 3200 UPD LDORAN 01-JAN-03
or in the second kind of trigger:
EMP# FNAM LNAM HIREDATE JOBID SALY OPR UPDUSR UPDDAT
135 KI GEE 12-DEC-99 STCLK 2400 INS LDORAN 12-DEC-99
135 STSPV 2600 UPD LDORAN 10-MAR-01
135 2800 UPD LDORAN 12-APR-02
135 POPP UPD SBAIDA 15-JUN-02
135 STMGR 3200 UPD LDORAN 01-JAN-03
Which is the better of the two sets of audit data in terms of meeting two requirements for pulling data from audit tables:
1. Rebuild an entire history of changes for the entire history of the table, or a range of time, e.g. July
2. Show what the data for a specific EMP looked like on a given date. NOTE: We cannot compare the audit table to the production table data.
Any advice/help will be greatly appreciated.
WM not feasible
Michael, January 14, 2004 - 10:47 am UTC
First, thanks for responding quickly, I did see your home page notice that you are very busy right now...
We have a number of "alternate keys" in the schema which have unique constraints on them. There will be additional unique indexes built on some columns as time goes by (still in development stages, but production rollout in June of first modules). PM wants audit strategy in cement now, or I will be in cement instead. We will not be going to 10g soon. They will not even let me patch up from 9.2.0.1 to 9.2.0.4...afraid something will break (I told them this was very unlikely).
I oversimplified the previous data example. We currently have about 280 tables, and will eventually have over 1000. It is a web based front end with connection pooling, so the application only has one user, which is the schema owner (not my decision!). However, the developers have written functionality into the front end that captures the real user id of the person who logged in. When that person changes anything in a table, if it is an insert, there are two columns in every production table, cr_usr_id and cr_usr_dt, that get populated. If someone deletes a row or changes data, there are columns upd_usr_id and upd_usr_dt that get populated. According to the developers, this is the only way they could capture the real id of the user inserting or updating a table. In addition, I have been working on triggers that fire whenever any inserts, updates, or deletes are made to the production tables. These triggers populate audit tables that are identical to the production tables, except that there are additional columns:
OPS (operation) SYS_TM, APP_NM, OS_USR, TERM
These columns will be populated by the triggers for transactions that occur both inside the application or outside of it, as in SQL*PLUS, TOAD, etc.
So I am stuck in a box, basically my only choice is to structure the triggers so that they populate the audit tables in one of the two ways shown earlier. Since I am new to analytics, and do not have a lot of time to experiment, which of those audit tables would be the easiest to build code for if I needed to extract change reports for a period of time or as of a given date?
Right now, we are leaning heavily toward the first example, recording all column values in the updated row, even though the changed column (new value) would not be obvious. The other approach, just recording the new value of the column that changed, was due to storage concerns, along with the idea that it *might* be easier to pull out audit reports for data changes.
Once again, thanks for your comments.
what other sql*plus settings are you using?
reader, January 14, 2004 - 4:43 pm UTC
Tom,
sqln and sqlprompt text is appearing in my spooled file, as well as the original prompt commands and prompt output from your original trigger generation script.
I think I need to include:
set define off
set sqln off
set sqlprompt ''
but when I do my sql*plus spool file looks like this:
prompt create or replace trigger aud#&1
create or replace trigger aud#&1
prompt after update on &1
after update on &1
prompt for each row
for each row
prompt begin
begin
select ' audit_pkg.check_val( ''&1'', ''' || column_name ||
''', ' || ':new.' || column_name || ', :old.' ||
column_name || ');'
from user_tab_columns where table_name = upper('&1')
/
prompt end;;
end;
prompt /
/
spool off
how do I make this look like the output you generate?
January 14, 2004 - 5:57 pm UTC
set serveroutput on
set feedback off
set verify off
set embedded on
set heading off
spool tmp.sql
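Put together, the fragments above assemble into something like this (a reconstruction, so treat it as a sketch; save it as, say, gen_aud.sql and run it as @gen_aud DEPT):
set serveroutput on
set feedback off
set verify off
set embedded on
set heading off

spool tmp.sql
prompt create or replace trigger aud#&1
prompt after update on &1
prompt for each row
prompt begin
select '   audit_pkg.check_val( ''&1'', ''' || column_name ||
       ''', :new.' || column_name || ', :old.' || column_name || ');'
  from user_tab_columns
 where table_name = upper('&1')
/
prompt end;;
prompt /
spool off

set feedback on
set verify on
set heading on
@tmp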
What about the 4 'who' columns?
Robert, January 14, 2004 - 6:44 pm UTC
Tom,
Knowing that...
1) You are very busy.
2) This question involves a slightly different level of auditing
Here goes...
What do you think of a shop standard that insists that every table put into production, no matter how transitory the data is, MUST contain the 4 'who' columns (user_inserting, date_inserted, user_updating, date_updated), which are populated by a database trigger?
Can you please share the philosophy and/or reasoning behind your answer?
Thanks,
Robert
January 14, 2004 - 7:01 pm UTC
i can see the user|date_inserting......
but the user|date_updated -- that does not make sense......
consider:
connect bob
insert row
connect mary
update row set x = 55
connect sue
update row set y = 66
hmmm -- who updated x = 55?? that's sort of lost -- so, what point "last updated"?
Good One
Abhijit, January 14, 2004 - 11:14 pm UTC
Hi Tom,
Is there any way of knowing the name of the column that is getting changed in the trigger? I am using Oracle 8.1.7
I have a table MAIN_TBL with 165 columns. Another table TRAC_TBL contains COLUMN_NAME varchar(30) and AUDIT varchar2(1). The columns that need to be tracked are marked by the user in the TRAC_TBL by setting the value of the AUDIT column = "Y". However the user also desires an interface where she can change this audit value to "N" at any point of time. How do I keep track without resorting to 165 if conditions?
Warm Regards,
Abhijit
January 15, 2004 - 3:16 am UTC
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4529781014729
sometimes -- it is just easier/faster to "do them all". seems it would be much easier for the end user to analyze as well if they had the row history -- not just deltas (which make it virtually impossible to analyze in retrospect).
short of that -- you'll be doing 165 if's.
User|Date Updated 'Who' columns
Robert, January 15, 2004 - 9:48 am UTC
Tom,
The purpose of the User|Date Updated columns is simply to see if the record has been touched... and who was the last to touch it.
It is not as complete as full column level auditing, but it is simple to implement and use.
This information has helped me many times in debugging a tangled problem, and reconstructing a sequence of events...And it is much easier to implement and use than full column level auditing....sort of the 80/20 rule... 20% of the effort for 80% of the results.
Will you please comment on my reasoning?
Thanks,
Robert.
January 15, 2004 - 10:25 am UTC
maybe v$logminer contains more than enough information for your debugging purposes without adding any additional runtime overhead then?
but even so -- the runtime overhead of 165 if's and all of that procedural code may be "huge"....
(how is full column auditing "harder" -- it is a single INSERT -- versus hundreds of lines of procedural code to check each and every column......)
must be something misconfigured in my sql*plus
reader, January 15, 2004 - 9:50 am UTC
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.3.0 - Production
SQL> set serveroutput on
SQL> set feedback off
SQL> set verify off
SQL> set embedded on
SQL> set heading off
SQL> spool tmp.sql
SQL> prompt create or replace trigger aud#&1
Enter value for 1: prompt after update on &1
create or replace trigger aud#prompt after update on &1
SQL> prompt for each row
for each row
SQL> prompt begin
begin
SQL>
SQL> select ' audit_pkg.check_val( ''&1'', ''' || column_name ||
2 ''', ' || ':new.' || column_name || ', :old.' ||
3 column_name || ');'
4 from user_tab_columns where table_name = upper('&1')
5 /
Enter value for 1: prompt end;;
Enter value for 1: prompt /
SQL>
SQL> spool off
SQL> @tmp dept
SP2-0734: unknown command beginning "SQL> promp..." - rest of line ignored.
SP2-0734: unknown command beginning "Enter valu..." - rest of line ignored.
SQL> @tmp dept
SP2-0734: unknown command beginning "SQL> promp..." - rest of line ignored.
SP2-0734: unknown command beginning "Enter valu..." - rest of line ignored.
16 /
SQL> prompt for each row
*
ERROR at line 2:
ORA-04079: invalid trigger specification
SQL> l
1 create or replace trigger aud#prompt after update on &1
2 SQL> prompt for each row
3 for each row
4 SQL> prompt begin
5 begin
6 SQL>
7 SQL> select ' audit_pkg.check_val( ''&1'', ''' || column_name ||
8 2 ''', ' || ':new.' || column_name || ', :old.' ||
9 3 column_name || ');'
10 4 from user_tab_columns where table_name = upper('&1')
11 5 /
12 Enter value for 1: prompt end;;
13 Enter value for 1: prompt /
14 SQL>
15* SQL> spool off
SQL>
--
January 15, 2004 - 10:28 am UTC
did you just type that in or are you running a script -- it is meant to be run as a script (eg: @audit_pkg_create_script DEPT)
disregard prior problem
reader, January 15, 2004 - 10:31 am UTC
Tom,
Sorry, i am a little slow today. disregard my previous problem. I get it.
How to compare spatial data
Irina, February 10, 2004 - 5:48 pm UTC
Hi Tom,
Thanks a lot for your site. I found it the most useful.
This discussion helped me to create my audit package. It works fine for all data types except for the spatial data columns.
I wonder if it is possible to compare spatial data in the same manner as other data types. I need something like:
if ( l_new_geom <> l_old_geom or
     (l_new_geom is null and l_old_geom is not null) or
     (l_new_geom is not null and l_old_geom is null) )
Cheers
Irina
February 11, 2004 - 8:47 am UTC
since the mdsys type does not have a map or order method, you'll have to compare them attribute by attribute if you want to compare them at all.
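A sketch of that attribute-by-attribute comparison, assuming MDSYS.SDO_GEOMETRY (only SDO_GTYPE, SDO_SRID and the ordinate array are compared here; SDO_POINT and SDO_ELEM_INFO would need the same treatment):
create or replace function geom_changed( l_new in mdsys.sdo_geometry,
                                         l_old in mdsys.sdo_geometry )
return boolean
is
begin
    -- both missing: no change; exactly one missing: changed
    if l_new is null and l_old is null then return false; end if;
    if l_new is null or  l_old is null then return true;  end if;

    -- the scalar attributes
    if nvl( l_new.sdo_gtype, -1 ) <> nvl( l_old.sdo_gtype, -1 ) or
       nvl( l_new.sdo_srid,  -1 ) <> nvl( l_old.sdo_srid,  -1 )
    then
        return true;
    end if;

    -- the ordinate arrays, element by element
    if l_new.sdo_ordinates is null and l_old.sdo_ordinates is null then
        return false;
    elsif l_new.sdo_ordinates is null or l_old.sdo_ordinates is null then
        return true;
    elsif l_new.sdo_ordinates.count <> l_old.sdo_ordinates.count then
        return true;
    end if;

    for i in 1 .. l_new.sdo_ordinates.count
    loop
        if l_new.sdo_ordinates(i) <> l_old.sdo_ordinates(i) then
            return true;
        end if;
    end loop;

    return false;
end;
/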
Providing names of the columns dynamically !!
Dipesh Khakhkhar, June 02, 2004 - 5:40 pm UTC
Hi,
I tried to follow the solutions mentioned here, but as you said, ":new and :old are like bind variables to the trigger, they are not 'regular' variables. you cannot dynamically access them, only 'statically'."
So I tried the following roundabout way of arriving at a solution that provides the column names to be audited dynamically:
1) Writing a package and defining a plsql table in it.
2) Writing a statement level before update trigger which will empty this table.
3) Writing a row level before update trigger which will populate the plsql table defined in the package with rowids of the updated rows.
4) Writing a statement-level after update trigger in which I will create a dynamic SQL statement by:
a) reading the column names, their datatypes, and the journaling/updating flags from a table, and storing this in an array.
b) Using the column names and their datatypes to build the string of columns to be fetched dynamically.
c) Then creating an INTO string holding an array of old values.
d) Then looping through the rowids in the PL/SQL table and passing the dynamically generated SQL statement and the old-array name to an autonomous function, collecting the old array through the second argument, an IN OUT variable.
(While collecting this array in the function, I will collect it using the name of the new array.)
e) Since the SQL mentions the name of the new array in the INTO clause, running it inside the trigger will give me the new values.
f) Then comparing the old and new values and, depending on the result of the comparison, taking the necessary action.
Is this approach okay?
If it is, I faced some problems which I will describe once you approve the design; otherwise it won't be of any use.
Thanks!
Regards,
June 02, 2004 - 7:23 pm UTC
see above, i demonstrate the efficient way to do this.
your approach, involving lots of "generic stuff", won't be efficient.
i would look at workspace mgmt as well (there is an application developers guide covering the whole thing).
might be you have to write 1 line of code to do what you need.
Shalu, July 28, 2004 - 3:31 pm UTC
The Procedures don't need to overloaded?
Celia Wang, August 11, 2004 - 4:58 pm UTC
Too bad, I have to write the triggers to catch all the changes -- the company doesn't like any new features. This really helps me a lot.
However, I just tested and found the procedures don't need to be overloaded by data type. I am surprised to see I can pass a NUMBER or a DATE into the procedure with a VARCHAR2 IN parameter. It didn't give me any error. Did I miss anything?
August 12, 2004 - 8:29 am UTC
ahh, the old thought process of "treat software like fine wine, gotta let it age". How "prudent" (sarcasm dripping).
Betcha this same company flings new windows versions onto desktops at the drop of a hat. oh well.
the overloading is there so you can FORMAT the information. the date passed as varchar2 would be converted to a string using the default date format (which generally "strips the time" component off).
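A quick way to see the difference (the first column uses the session's default NLS_DATE_FORMAT, which is also what the implicit DATE-to-VARCHAR2 conversion uses):
select to_char( sysdate )                           default_format,
       to_char( sysdate, 'dd-mon-yyyy hh24:mi:ss' ) explicit_format
  from dual;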
here is the example I have tested
celia wang, August 11, 2004 - 5:17 pm UTC
create table a(column_name varchar2(30), old_value varchar2(30), new_value varchar2(30));
create or replace procedure p_ins_a (in_name in varchar2, in_old in varchar2, in_new varchar2)
is
begin
insert into a values(in_name, in_old, in_new);
end;
/
execute p_ins_a('test1', 'a', 'a');
execute p_ins_a('test2', sysdate, sysdate);
execute p_ins_a('test3', 1, 1);
execute p_ins_a('test4', 128, 128);
select * from a;
4 row(s) retrieved
One procedure takes every data type. But, in theory, the procedure should be overloaded.
August 12, 2004 - 8:30 am UTC
see above
How about the performance?
JeeKay, August 11, 2004 - 5:20 pm UTC
Tom,
Thanks for the great, detailed answer. I have a question though. Let us consider your original answer for this question, and assume I have a table A that contains 30 columns or so. For each I/U/D operation on this table, the audit trigger puts 30 records in the audit table. So does that mean every I/U/D operation that took N units of time before now takes N + 30*N = 31N units? In a scenario where the screen may be updating 100,000 records in table A or so, how do you see that affecting the app's performance?
August 12, 2004 - 8:31 am UTC
you get nothing for free in this world.
yes, this has obvious performance implications.
Ways to determine table data changes???
LK, August 16, 2004 - 3:38 pm UTC
Tom,
Thanks for the code snippets. Here's my question.
After running a process, I want to know what tables have undergone data changes in the entire schema. Let's assume that there are no other processes running.
* For a developer, what is the best way to determine which tables have undergone data modifications? What are all the ways to determine table modification info? Getting info from all_tab_modifications requires the tables to be analyzed constantly -- well, correct me if I am wrong.
Also, if there's already some discussion on this, please send the link.
Thanks
August 16, 2004 - 8:03 pm UTC
all tab modifications would not be the right approach.
auditing would. If I had a serious -- bona fide need to know, auditing would be the only game in town.
LK, August 17, 2004 - 11:01 am UTC
Yes, there is a serious need. This is for the testers to do their system/integration tests. So it appears that the foolproof approach is creating an audit table and having triggers for all the tables in the schema to populate the audit table whenever there is a data modification....
Thanks, I will use your script (the 1st one at the top) with some modifications to achieve this.
August 17, 2004 - 11:15 am UTC
why not just "audit"???
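That is, the builtin AUDIT command. It needs the AUDIT_TRAIL initialization parameter set (e.g. to DB) and records who touched which table and when -- no column values, but that is all the testers need here:
-- record every insert/update/delete statement against the table
audit insert, update, delete on scott.emp by access;

-- afterwards, see which tables were touched, by whom, and when
select username, obj_name, action_name, timestamp
  from dba_audit_trail
 where owner = 'SCOTT';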
is it a bug in the example?
Rich, August 17, 2004 - 8:01 pm UTC
Hi Tom,
I was looking at your example and noticed that you had:
to_char( l_new, 'dd-mon-yyyy hh23:mi:ss' ) );
------------------------------------------^
Q: is the hh23:mi:ss correct? Or should it be hh24:mi:ss ?
Thanks,
Rich
This is your complete example:
procedure check_val( l_tname in varchar2, l_cname in varchar2,
l_new in date, l_old in date )
is
begin
if ( l_new <> l_old or
(l_new is null and l_old is not NULL) or
(l_new is not null and l_old is NULL) )
then
insert into audit_tbl values
( sysdate, user, upper(l_tname), upper(l_cname),
to_char( l_old, 'dd-mon-yyyy hh24:mi:ss' ),
to_char( l_new, 'dd-mon-yyyy hh23:mi:ss' ) );
end if;
end;
August 18, 2004 - 7:38 am UTC
that was pointed out in "I've found a little bug... " above.
Update & Insert Old Value To Another Table For Backup
Dulal, August 27, 2004 - 5:13 am UTC
Hi Tom,
I have a client/server database and have to develop a system in which the old value of any updated column, in any table, will be inserted into a common table by a post_update or on_update or any database-level trigger or package at the time of update. The main task is to find out the updated old value and the table and column names.
I am giving a sample -
I have 3 tables, Table2, Table3 and Backup_Table, with structures like:
Create table Table2(
T1C1 VARCHAR2(10),
T1C2 DATE,
T1C3 NUMBER);
Create table Table3(
T2C1 VARCHAR2(10),
T2C2 DATE,
T2C3 NUMBER,
T2C4 VARCHAR2(25));
Create table Backup_Table(
Table_Name VARCHAR2(6),
Column_Name VARCHAR2(4),
Old_Value VARCHAR2(25),
Updated_Date Date default sysdate,
Updated_By_User VARCHAR2(10) default user);
--values like as --
insert into table2 values('Mr Aman','22-Aug-04',350.00);
insert into table2 values('Mr Zaman','23-Aug-04',450.00);
insert into table3 values('Mr Wahid','22-Aug-04',1020.00,'For Sale');
insert into table3 values('Mr Ahad','23-Aug-04',450.00,'By Cash');
commit;
When I update a column, or all columns, of a record in Table2 or Table3 via Forms (*.fmb) or SQL*Plus, then the old values of the columns of that updated record should be inserted into Backup_Table, like:
Table_Name Column_Name Old_Value Updated_Date Updated_By_User
---------- ----------- --------- ------------ ---------------
Table1 T1C1 Mr Aman 24-Aug-04 User1
Table1 T1C3 500.50 24-Aug-04 User1
Table2 T2C2 22-Aug-04 25-Aug-04 User3
Table1 T2C3 350.00 26-Aug-04 Rira
Table2 T2C1 Mr Wahid 26-Aug-04 User3
Table1 T1C1 Mr Imam 27-Aug-04 User5
Table1 T1C2 24-Aug-04 27-Aug-04 User5
Table1 T1C3 1000.00 27-Aug-04 User5
Table2 T2C4 By Cash 28-Aug-04 User2
i.e. updated values of all types (Varchar2, Date, Number) will be stored as strings in the same column, Old_Value of table Backup_Table. There are no PK/FK relations between the tables.
How is it possible?
My system is Oracle9i (Release 2), Dev6i, Windows 2000.
Best regards.
August 27, 2004 - 7:50 am UTC
did you read the top of this here page????
that was in fact the original question -- and the original answer is right there?
Problems solved
Dulal, August 29, 2004 - 3:44 am UTC
Thanks Tom.
My problems are solved.
How To Insert Primary Key In Audit_Tbl
Dulal, August 30, 2004 - 5:34 am UTC
Hi Tom,
I am now facing a new problem. I wish to insert the primary key into a column of the 'audit_tbl' table, for comparing the original table record with the audit_tbl record. The primary key may consist of one or more columns.
How is it possible? Kindly help me.
Best regards.
August 30, 2004 - 8:48 am UTC
generate audit table with necessary columns and populate it??
it is just code -- you have the spec. turning spec into code is what we do.
How To Insert Primary Key Value In Audit_Tbl
Dulal, August 30, 2004 - 11:36 pm UTC
Sorry Tom,
I could not understand. Kindly help me insert the primary key's old value into a new column of the audit_tbl table, associated with the updated row (the primary key may consist of one or more columns' values).
Best regards.
August 31, 2004 - 8:43 am UTC
as stated, you'll have to write "fancier code"
something that
a) reads the data dictionary to determine what the PK of the table is so
b) it can create an audit table specifically for that table so
c) it can create a package for that table that
d) is called by the triggers on that table
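for step (a), a dictionary query along these lines would do (a sketch - it assumes the table belongs to the current schema):

select cc.column_name
  from user_constraints c, user_cons_columns cc
 where c.table_name = upper('table2')
   and c.constraint_type = 'P'
   and c.constraint_name = cc.constraint_name
 order by cc.position;

the generator would loop over that result to build the pkval expression (e.g. :old.T1C4, or a concatenation of columns for a multi-column key) into the trigger it writes.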
How To Insert Primary Key Value In Audit_Tbl
Dulal, August 31, 2004 - 6:35 am UTC
Hi Tom,
Sorry to disturb due to I am in great problem.
Here is my code. The problem is in the primary key value. The package does not capture the primary key's old value (because the primary key's value is unchanged). Kindly help me.
Create table Table2(
T1C1 VARCHAR2(10),
T1C2 DATE,
T1C3 NUMBER,
T1c4 VARCHAR2(10),
constraint pk_table2 primary key (t1c4));
create table audit_tbl
( timestamp date,
who varchar2(30),
tname varchar2(30),
pkval varchar2(200),
cname varchar2(30),
old varchar2(200),
new varchar2(200));
CREATE OR REPLACE PACKAGE AUDIT_PKG
as
procedure check_val(
l_tname in varchar2,
l_pkval in varchar2,
l_cname in varchar2,
l_new in varchar2,
l_old in varchar2 );
procedure check_val(
l_tname in varchar2,
l_pkval in varchar2,
l_cname in varchar2,
l_new in date,
l_old in date );
procedure check_val(
l_tname in varchar2,
l_pkval in varchar2,
l_cname in varchar2,
l_new in number,
l_old in number );
end;
/
CREATE OR REPLACE PACKAGE BODY AUDIT_PKG as
procedure check_val(
l_tname in varchar2,
l_pkval in varchar2,
l_cname in varchar2,
l_new in varchar2,
l_old in varchar2 )
is
begin
if ( l_new <> l_old or
(l_new is null and l_old is not NULL) or
(l_new is not null and l_old is NULL) )
then insert into audit_tbl values
( sysdate, user, upper(l_tname), l_pkval, upper(l_cname), l_old, l_new );
end if;
end;
procedure check_val(
l_tname in varchar2,
l_pkval in varchar2,
l_cname in varchar2,
l_new in date,
l_old in date )
is
begin
if ( l_new <> l_old or
(l_new is null and l_old is not NULL) or
(l_new is not null and l_old is NULL) )
then insert into audit_tbl values
( sysdate, user, upper(l_tname), l_pkval, upper(l_cname),
to_char( l_old, 'dd/mm/yyyy hh:mi:ss pm' ), to_char( l_new, 'dd/mm/yyyy hh:mi:ss pm' ) );
end if;
end;
procedure check_val(
l_tname in varchar2,
l_pkval in varchar2,
l_cname in varchar2,
l_new in number,
l_old in number )
is
begin
if ( l_new <> l_old or
(l_new is null and l_old is not NULL) or
(l_new is not null and l_old is NULL) )
then insert into audit_tbl values
( sysdate, user, upper(l_tname), l_pkval, upper(l_cname), l_old, l_new );
end if;
end;
end audit_pkg;
/
--- TRIGGER
create or replace trigger aud#table2
after update on table2
for each row
begin
audit_pkg.check_val( 'table2', :old.T1C4, 'T1C1', :new.T1C1, :old.T1C1);
audit_pkg.check_val( 'table2', :old.T1C4, 'T1C2', :new.T1C2, :old.T1C2);
audit_pkg.check_val( 'table2', :old.T1C4, 'T1C3', :new.T1C3, :old.T1C3);
audit_pkg.check_val( 'table2', :old.T1C4, 'T1C4', :new.T1C4, :old.T1C4);
end;
/
Best regards.
August 31, 2004 - 9:57 am UTC
don't know what you need really?!?
DULAL - Are you looking to initially load the PK values?
YAR (Yet Another Reader), August 31, 2004 - 11:51 am UTC
Dulal,
It sounds like you are looking to do an initial load to your audit table of existing primary key values. Since your package is looking for "changed" values, you can't use it to do the first load. You need to write code (query or procedure) to initially load your pk values into your audit table. You can then use your package to track changes.
Problems solved.
Dulal, September 01, 2004 - 11:11 pm UTC
Hi Tom,
My problems are solved.
Output from audit_tbl & user_tab_columns tables
Dulal, September 04, 2004 - 6:37 am UTC
Hi Tom,
I am in problem like this-
My audit_tbl is like:
create table audit_tbl
( timestamp date,
who varchar2(30),
tname varchar2(30),
cname varchar2(30),
old varchar2(200),
new varchar2(200));
My data table 'Table2' is like:
Create table Table2(
T1C1 VARCHAR2(10),
T1C2 DATE,
T1C3 NUMBER);
After updating Table2 using audit_pkg, my audit_tbl looks like:
SQL:> select * from audit_tbl;
timestamp who tname cname old new
-----------------------------------------------------
01-SEP-04 VIS TABLE2 T1C1 MR XXX MR XX
01-SEP-04 VIS TABLE2 T1C1 MR XX MR YY
01-SEP-04 VIS TABLE2 T1C3 450 500
01-SEP-04 VIS TABLE2 T1C3 500 700
And now I want output from Table2 and audit_tbl like:
tname timestamp who T1C1 T1C2 T1C3
----------------------------------------------------
TABLE2 01-SEP-04 VIS MR XXX
TABLE2 01-SEP-04 VIS MR XX
TABLE2 01-SEP-04 VIS 450
TABLE2 01-SEP-04 VIS 500
(i.e. I want all the time-to-time update information, column-wise, with the column sequence the same as Table2's column sequence: T1C1, T1C2, T1C3).
How is it possible?
Kindly help me.
Best regards.
September 04, 2004 - 11:09 am UTC
select tname, timestamp, who,
decode( cname, 'T1C1', old ) t1c1,
decode( cname, 'T1C2', old ) t1c2,
decode( cname, 'T1C3', old ) t1c3
from audit_tbl;
Again
Dulal, September 04, 2004 - 6:51 am UTC
Sorry,
There were tab problems
And now I want output from Table2 and audit_tbl like:
tname timestamp who T1C1 T1C2 T1C3
----------------------------------------------------
TABLE2 01-SEP-04 VIS MR XXX <null> <null>
TABLE2 01-SEP-04 VIS MR XX <null> <null>
TABLE2 01-SEP-04 VIS <null> <null> 450
TABLE2 01-SEP-04 VIS <null> <null> 500
here <null> means null value
Dulal, September 04, 2004 - 11:20 pm UTC
Thanks Tom,
My problem is solved.
Does Oracle Audit Column Level changes
LK, October 19, 2004 - 10:54 am UTC
Oracle has audit functionality. We get the table name and the action (select, update etc). Can we get the names of the columns that underwent changes? I don't need to know the old value and new value; just the name of the column. If there is a way, please let me know what view has this info and any other helpful info.
Thanks
October 19, 2004 - 10:57 am UTC
there is not, you either updated the table or -- not.
You'd have to "do it yourself" via a trigger at the level of a column
Trigger
Branka, October 21, 2004 - 5:16 pm UTC
I created a table, a procedure and a trigger. I am updating the same row in the OL table, and the trigger should stop me from making a second update, but it does not. Actually it fires 2 times. Can you help me understand where I made a mistake?
In the procedure I have dbms_lock.sleep( 3 ); because I would like to have the same timestamp for both updates.
SQL> DESC OL
 Name                               Null?    Type
 ---------------------------------- -------- --------
 ID                                 NOT NULL NUMBER
 NAME                               NOT NULL VARCHAR2(55)
 VREME                                       TIMESTAMP(6)
CREATE OR REPLACE PROCEDURE ol_prvi (id1 in number,comment1 in varchar2)IS
l_out_time timestamp;
BEGIN
update ol set name=comment1 where id=id1;
dbms_lock.sleep( 3 );
commit;
END ol_prvi;
/
CREATE OR REPLACE TRIGGER BRANKA.OL_UPDATE
BEFORE UPDATE
ON BRANKA.OL
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
declare temp_time timestamp;
BEGIN
dbms_output.put_line(:old.vreme||' old_vreme');
dbms_output.put_line(:new.vreme||' new_vreme');
IF (:new.vreme!=:old.vreme) then
raise_application_error (-20000, 'failure');
end if;
:new.vreme:=systimestamp;
END ol_update;
SQL> EXEC ol_prvi(1,'TEST2');
21-OCT-04 04.46.36.400307 PM old_vreme
21-OCT-04 04.46.36.400307 PM new_vreme
PL/SQL procedure successfully completed.
SQL> EXEC ol_prvi (1,'TEST1');
21-OCT-04 04.46.36.400307 PM old_vreme
21-OCT-04 04.46.36.400307 PM new_vreme
21-OCT-04 04.48.57.532633 PM old_vreme
21-OCT-04 04.48.57.532633 PM new_vreme
PL/SQL procedure successfully completed.
October 22, 2004 - 3:43 pm UTC
vreme is always NULL
null = null is NEVER true
null = null is NEVER false
null != null is NEVER true
null != null is NEVER false
null <op> null is always "i don't know, unknown"
to compare two variables AND to consider NULLness, you have to:
if ( c1 <> c2
OR (c1 is null and c2 is not null)
OR (c1 is not null and c2 is null) )
then
they are different
else
they are not different
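you can see the null behaviour straight from dual - a quick demonstration:

select case when null = null  then 'true' else 'unknown, so else branch' end is_eq,
       case when null <> null then 'true' else 'unknown, so else branch' end is_ne
  from dual;

both comparisons evaluate to unknown, so both CASE expressions fall into the ELSE branch - which is why the trigger's :new.vreme != :old.vreme test could never raise the error when vreme was null.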
trigger
branka, October 26, 2004 - 11:08 am UTC
I do not understand how vreme is null all the time. What, then, is the value that I print?
I think the problem is with the SP that I run second: it takes the time when it starts, but then, since the time changes while the update is waiting for the commit, it ends up with a second set of values.
The problem I would like to solve is to have the old value from the beginning of the transaction, and the new value from the end. Having 2 sets of values cannot work.
Branka
October 26, 2004 - 11:18 am UTC
i did not see the assignment at the bottom of the trigger, i was looking at the update. (hard to scan unindented code..)
but -- since you are not updating vreme -- HOW could it possibly be that :old was not the same as :new??????
you haven't UPDATED vreme at all. :old and :new are IDENTICAL (until you changed :new at the bottom of the trigger, but you've already done the check by then)
perhaps you mean to simply code:
if (:old.vreme IS NOT NULL) /* then we already did this row */
then
raise_application_error( .... )
else
:new.vreme := systimestamp;
end if;
A reader, November 04, 2004 - 12:48 pm UTC
System Level Trigger or Use of an Oracle supplied Package
denni50, November 30, 2004 - 3:02 pm UTC
Hi Tom
I would like to be able to audit any changes that are
made to procedures,packages and/or triggers that are
compiled after modification.
Users came into my office stating that a procedure that
was working earlier in the day was no longer working
after lunch. After some investigating I noticed some
code had been altered. The modified timestamp on the file
that compiles the procedure indicated the change had been
made earlier that day. When I approached the only other
person that has some knowledge of sqlplus, they
emphatically denied making any changes...and there is
only 1 other person besides myself who knows all the
procedures, packages and triggers.
I would like to be able to audit the machine and OS user the change came from, not just the oracle user... since there is a 'super' user that is the owner of the objects, and when a change is made to an object the user must log in as the super user.
thanks for any suggestions/help
Sukumaran Moorthy, December 07, 2004 - 11:02 am UTC
Exactly what I was looking for!
History/Audit
Rich, January 31, 2005 - 2:42 pm UTC
In the Healthcare application we are developing, the client
requires a history of changes/modifications (who changed
it, new and old values, etc) for all columns in several
tables.
We are looking at 2 options:
1) Create a generic trigger/proc that will store the
changes into a central audit trail (similar to your example
at the start of this thread).
2) Create a history table for each of the table requiring
the audit. Use a simple trigger (INSERT into history_tbl)
for these tables to populate their respective history
table.
I tend to choose option#2. My reasoning is that we will
try to put as little overhead as possible into the
triggers so that there is little performance impact during
DMLs. Then just create a query (might be complex) to
read the history tables. I would think that users will
spend most of their time doing DMLs rather than querying
the audit/history tables.
I would appreciate your thoughts. Or any alternative
options you might have. We are also looking at Workspace
Mgr but came across the non-PK unique key limitation (we
are using 9i).
Thank you.
January 31, 2005 - 2:49 pm UTC
#2 wins for me.
(or 10g which allows for more than one key on a versioned table and will be supported lots longer with tons more stuff than 9i has -- given that "you are developing")
You are great Tom
Anjana, February 09, 2005 - 12:48 pm UTC
My question is related to the original question. I have four tables: A, B, Comment_events and Comment_text. B is the child of A, Comment_events has a foreign key link to A, and Comment_text is a child of Comment_events. I want to audit table B. I do not want to store columns' old and new values as you have demonstrated in your example. I want to store an event (INSERT or UPDATE) in the comment_events table along with the PK of table A, and a long text about the columns added or changed (something like "Referr# was changed from 100 to 200, referral date was changed to Jan 20' 2005 from Dec 20' 2004" and so on, where referr# and referral date are columns in table B) in the comment_text table, along with the PK of the comment_events table.
I will really appreciate your help in solving my problem. I have almost 20 tables to audit which are children of table A.
Thanks
Anjana
February 09, 2005 - 3:01 pm UTC
code a trigger it sounds like to me.
Generic trigger for auditing column level changes(incorporating column addition/removal)
Vivek, February 25, 2005 - 4:06 am UTC
Hi Tom, thanks for your reply indicating that :new and :old can't be dynamically accessed, as they are bind variables. The code given by you at the start does work, but I need a more generic solution. We are hard-coding for a set of columns in a table (eg. DEPTNO, DNAME in our example), but I need to audit all columns that are in the table. Even if a column is added/dropped from the table, that should not affect my code. Can we have that generic code? Otherwise each time we add a column to the table, we need to add code to audit that column separately. Can we avoid this problem?
Thanks in advance.
Vivek
February 25, 2005 - 5:08 pm UTC
check out
</code>
http://docs.oracle.com/docs/cd/B10501_01/appdev.920/a96628/toc.htm <code>
(and remember, adding/dropping columns is like a majorly HUGE change, I would expect you do this in development, with some tests, and you would change any and all affected code correspondingly -- meaning there is no need for "generic" code here at all -- you would (if using the above) REGENERATE the trigger of course in your dev and test environment so you can roll it out to production)
A reader, March 08, 2005 - 10:49 am UTC
A reader
A reader, March 18, 2005 - 2:49 am UTC
Hi,
We are in need of auditing where the complete row (before update) has to be written to some other table. We know a trigger can do this; I was just wondering if Oracle has some inbuilt feature to do this in the db. We are using 9i/10g.
Cheers
March 18, 2005 - 7:00 am UTC
workspace manager
search this site for dbms_wm or see the application developers guide for workspace manager.
it does not write to another table, it keeps it all together, but it does exactly that.
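enabling it is basically a one-liner (a sketch - the table needs a primary key, and EMP is just an example name):

execute dbms_wm.enableversioning( 'EMP', 'VIEW_WO_OVERWRITE' );

after that, the generated EMP_HIST view exposes every version of every row with who/when metadata, while the application keeps selecting from and updating "EMP" exactly as before.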
How to Audit DML statements
Yousef, April 16, 2005 - 11:22 am UTC
Dear Tom:
I am sorry for asking this silly question, but
I need to know how I can track all the DML or select statements that have been executed.
I have tried
audit all by access;
or
audit select table,update table,insert table, delete table;
but it was wrong
We have a billing system, and we want to add an extra locally developed form, but we don't know how the data is updated, so I have to track the changes at the database level; that billing system is integrated with Oracle Financials.
Thanks in advance
April 16, 2005 - 11:29 am UTC
do you want to "audit" or simply "figure out what SQL transaction X runs"
to audit, it would be
o dbms_fga for SELECT's in Oracle 9i plus a TRIGGER using ora_sql_txt to capture the DML
o dbms_fga for both in Oracle 10g
To figure out what SQL transaction X runs, SQL_TRACE=TRUE and read the tkprof report/trace files.
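a minimal fine grained auditing policy would look something like this (a sketch; the schema/table names are placeholders - and note the statement_types parameter is 10g, in 9i dbms_fga audits SELECT only):

begin
    dbms_fga.add_policy(
        object_schema   => 'BILLING',
        object_name     => 'ACCOUNTS',
        policy_name     => 'ACCOUNTS_DML',
        statement_types => 'SELECT,INSERT,UPDATE,DELETE' );
end;
/

-- the captured statements, including the SQL text, appear in:
select db_user, timestamp, sql_text from dba_fga_audit_trail;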
I want to know what statatment has been Executed
Yousef, April 17, 2005 - 9:48 am UTC
Dear Tom
Thanks a lot for helping all these people through their daily work with Oracle.
I need to know what statement has been executed (select, delete, update, ...). What I need to know is the logic the application is using to update the database; I need to track what tables are being modified. Afterwards I will write a trigger to audit the :old and :new values.
So how can I do this auditing?
And in general, what is the audit statement that enables me to track all the operations in all schemas?
Thank you so much in advance
April 17, 2005 - 9:55 am UTC
see the review/followup *directly* above this one?
Thank you
Yousef, April 18, 2005 - 2:37 am UTC
Dear Tom:
Thank you so much, you are really the best of the best.
I hope some day I become like you
Regards
Yousef
more Audting questions ..
Anthony, May 24, 2005 - 1:06 pm UTC
Tom,
I have a quick question for you.
I am trying to do an audit on a table. I need to capture new and old values in an audit_detail table, and at the same time capture some other fields that are sent to a procedure (from the client) that's doing an insert on this table, into an audit_main table. The audit main and detail tables are linked by a key (id).
So I was following this approach.
I am writing an after insert/update trigger on this table and capturing those values (old and new) in package variables (global variables) along with the table name (as auditing has to be done for more than one table).
From the procedure, after doing the insert into this table, I am passing the fields (that are sent from the client) into the audit parent table and trying to get those captured values from the package, with the id from the parent table, into the audit detail table.
Is this a good solution? Or can we do it in a better way?
Thanks,
May 24, 2005 - 1:44 pm UTC
if you are going to do code (not a bad idea), kill the trigger and just do it in code.
why bother with the trigger at all?
insert -- you have the "new" values -- no trigger needed (you can use insert returning if there are triggers that assign values).
for the update: select for update to get the pre-values, then do the update.
for the delete: delete with the returning clause to get the old values.
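putting that together, a sketch (table t and procedure do_audit are made-up names):

declare
    l_old  t.val%type;
begin
    -- update: lock and read the pre-image first, then change it
    select val into l_old from t where id = 1 for update;
    update t set val = 'new value' where id = 1;
    do_audit( 'T', 1, l_old, 'new value' );

    -- delete: the returning clause hands back the old value
    delete from t where id = 2 returning val into l_old;
    do_audit( 'T', 2, l_old, null );
end;
/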
How about old values?
Anthony, May 24, 2005 - 2:19 pm UTC
You are right, i can get new values like that without trigger.
But to get old values, I need to write queries to catch pre-insert/pre-delete/pre-update values and then do an insert/update or delete and use the returning clause for new values. I am using the MERGE update-insert method on some, and returning new values like that is not possible, so I need to change them and write regular queries (to see if the data exists; if it exists then update, else insert... which adds more queries and database trips).
So, I thought if a trigger could get me both new and old then why not use it instead of writing tons of code.
Does it make sense? Can you suggest a more definite approach ?
And also, my approach was giving some problems:
what happens if there are 2 or 3 inserts on the same table, one after the other? Since I am storing only the table name with the old and new values, I will not be able to get the right data from the package variables, right? How do I handle this?
Thanks for your Time
May 24, 2005 - 3:56 pm UTC
what is a "pre-insert" old value?
i said to use SELECT FOR UPDATE for the update (to get old)
and RETURNING on the delete to get old
less code than triggers.
How about UPSERT
Anthony, May 25, 2005 - 12:12 pm UTC
I think i got it now. I left my brains at home yesterday (Pre-insert and old value?):-)
But part of my question is still not answered: since I'm using UPSERT (merge update-insert) syntax for updates and inserts, returning a value is not possible. Do you think it's better to go back to the good old method instead of using the MERGE update/insert command?
Thanks
May 25, 2005 - 3:11 pm UTC
</code>
http://asktom.oracle.com/~tkyte/wc.html <code>
if you read that, you might be even more willing to stay away from the trigger route...
Since you are doing this auditing thing, I'd probably just
try the update with returning
if (sql%rowcount = 0 )
then
do the insert
end if
audit(.....);
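roughly, as a procedure (a sketch - the table t, its columns, and the audit call are hypothetical):

create or replace procedure save_row( p_id in number, p_val in varchar2 )
is
begin
    update t set val = p_val where id = p_id;
    if ( sql%rowcount = 0 )
    then
        insert into t ( id, val ) values ( p_id, p_val );
    end if;
    audit_pkg.record_change( 'T', p_id, p_val );  -- whatever your audit call is
end;
/

you lose the single-statement elegance of MERGE, but you gain sql%rowcount and the returning clause - which is what the auditing needs.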
Auditing columns - prevent updates
A reader, June 03, 2005 - 7:08 pm UTC
Tom,
We would like to prevent updates on certain columns in a given table. When the user tries to update the column, the update should be rolled back, but the details of the user who tried to update need to be captured and stored in a different table. I would appreciate it if you could suggest the best way to achieve this.
Thank you
June 03, 2005 - 8:04 pm UTC
well, you can grant update on columns to users (so don't grant update on these)
and you can audit failed updates (via audit command).
does that get enough information for you?
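for instance (a sketch - it assumes auditing is enabled and SOME_USER is the grantee; the names are illustrative):

grant select, insert on emp to some_user;
grant update( ename, job ) on emp to some_user;   -- sal/comm deliberately left out

audit update on emp by access whenever not successful;

an attempt by SOME_USER to update SAL fails with ORA-01031, and the failed attempt leaves a row in DBA_AUDIT_TRAIL identifying who tried.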
Auditing columns - prevent updates
A reader, June 04, 2005 - 1:15 am UTC
Tom,
>>.....grant update on columns to users ....
The login name/user that might be updating the column is the owner of the table. In this case the owner would have all the permissions on this table. This option will not work in this case.
>>and you can audit failed updates (via audit command).
I do remember reading that audits are expensive (I might be wrong on this..). I would like to get your opinion on this.
Could pragma autonomous_transaction be used as one of the options to rollback the updates on the main table and commit the user details to the log table?
Would appreciate it if you can let me know your comments.
Thank you
June 04, 2005 - 8:25 am UTC
well, I have a problem with "the owner of the table is the one that must be prevented from doing things to it". You might want to rethink your entire approach here.
Auditing as a feature of the database is cheaper than "do it yourself auditing"
Auditing adds to the resources needed by a statement (obviously I hope). Is auditing more expensive than not auditing? ONLY IF YOU DON'T NEED TO AUDIT. Otherwise, it is what is called a "necessary thing".
Can you really rethink the approach here - why the "owner"? Seems the table might just be in the wrong schema.
Auditing columns - prevent updates
A reader, June 04, 2005 - 9:22 am UTC
Tom,
It is true that there are problems with the basic design of the application. This application has been in production for quite sometime now and I have recently joined this team to maintain this application. There is nothing much I can do in the short term about "the owner of the table is the one that must be prevented from doing things to it".
While searching through Google and this site, I have come across this link on auditing columns(option 2 below). I would like to know which is a better approach for auditing columns.
Option 1: Auditing with audit statement...
Option 2: </code>
http://asktom.oracle.com/~tkyte/autonomous/ <code>
Thank you for your time.
June 04, 2005 - 12:15 pm UTC
but the owner of the table will in fact be able to get around just about anything you do (heck, I'll just delete and insert if you don't let me update; even better, I'll just get mad at you and create table copy as select, drop the old table and rename the new one).
This is not "maintain" you are doing, this is enhancement request processing.
Putting a trigger on there only makes you feel better, it would be relatively easy to work around. A trigger would be the only way to do it -- but, I could find ways around it as the owner of the table.
Get the update time for certain record without auditing.
Sean, June 20, 2005 - 11:21 am UTC
Hi Tom,
We didn't implement auditing on one table in the past. Are there ways to know when a certain record was updated, through a system view? We are investigating one record in a critical table.
Thanks so much for your help.
June 20, 2005 - 12:50 pm UTC
log miner might be useful if you have the redo logs from the period of interest. It (log miner) is documented.
no version information..... 10g has flashback capabilities you could investigate as well if you are on that release.
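on 10g, a flashback version query is a quick way to look (a sketch - the undo for the period of interest must still be available, and the table/column names are made up):

select versions_starttime, versions_endtime, versions_operation, t.*
  from critical_table versions between timestamp
       systimestamp - interval '1' hour and systimestamp t
 where id = 42;

versions_operation shows I/U/D, and the start/end times bracket when each change happened.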
Save additional info in (via) the _application_ layer ?
MaxU, July 05, 2005 - 6:44 pm UTC
Hi Tom,
Thanks for your brilliant ideas !
I'd like to store some additional information about the "who" (the modifier) in the audit table. Unfortunately this information is accessible on the application layer _only_.
I'm talking about a JAVA application using a "connection pool" to the DB.
How would I pass additional info from the application so that it can be used from within a trigger/procedure in the DB later?
May I use "sys_context" inside our application for that?
How would I bind the additional info to the session id then?
Or do you have a better idea? (i'm sure you do!)
;)
Thank you!
July 05, 2005 - 7:14 pm UTC
dbms_session.SET_IDENTIFIER
is sort of cool, it gets audited with normal auditing too.
if you are interested in reading a lot more on this topic, "Links I like" above -> Effective Oracle Security by Design by D. Knox.
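a sketch of the round trip (the identifier value is whatever your mid-tier knows about the real end user):

-- mid-tier, right after grabbing a pooled connection:
begin
    dbms_session.set_identifier( 'real_app_user_42' );
end;
/
-- inside the audit trigger/procedure, read it back:
select sys_context( 'userenv', 'client_identifier' ) from dual;
-- and clear it before handing the connection back to the pool,
-- or the next borrower inherits the old identity:
begin
    dbms_session.clear_identifier;
end;
/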
Thank you Tom !
MaxU, July 06, 2005 - 1:06 am UTC
Audit without trigger
karma, July 18, 2005 - 10:04 pm UTC
We have a Data Warehouse and daily we apply delta changes to the warehouse. We have a need to audit/track changes (I/U/D) to 3-5 tables, where users define which columns they would like to track, and we need to know the previous and new value at any point in time. Is there a generic solution to achieve this without using triggers (since in a warehouse env. this could slow us down - the tables being tracked are huge)?
July 19, 2005 - 7:24 am UTC
Normally, this is handled as part of the load, using dimension tables that are effective date stamped (effective to and from dates). It would mean there is no such thing as an update or delete really on these tables - other than to update the end date. Inserts would rule supreme.
You would update dim set end_date = sysdate where key in (select key from rows to be updated/deleted)
and then insert all of the I/U/D records with sysdate as the effective date.
That is one way - it is not an audit so much as a "data model".
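in SQL, the load step would look something like this (a sketch - dim, stage_deltas and the column names are made up):

update dim
   set end_date = sysdate
 where key in ( select key from stage_deltas )
   and end_date is null;

insert into dim ( key, attr1, attr2, start_date, end_date )
select key, attr1, attr2, sysdate, null
  from stage_deltas
 where op in ( 'I', 'U' );   -- deletes just keep their closed-out row

point-in-time queries then become "where :as_of >= start_date and ( :as_of < end_date or end_date is null )".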
"ASKTOM" is the best place to make sure you are on the right way...
Sergey Soy, December 02, 2005 - 11:50 am UTC
Tom is definitely the best Oracle Guru.
I'm an Oracle Developer and it's a rule for me to read all related asktom.oracle.com topics whenever I design any minor or major thing. It helps to avoid tons of mistakes.
History/Audit
Oliver, December 28, 2005 - 12:18 pm UTC
On January 31, 2005 Rich from Chicago asked whether the history of several tables should be stored in a single table or in one table for each source table. You opted for the second possibility. My first question now is directed at the number of columns: you chose to store all values in a single varchar column. Could it make sense to use different columns for numbers, dates and strings?
create table audit_tbl
( timestamp date,
who varchar2(30),
tname varchar2(30),
cname varchar2(30),
old_date date,
new_date date,
old_number number,
new_number number,
old_string varchar2(2000),
new_string varchar2(2000)
)
/
And how would you treat BLOBs and CLOBs?
With the improvements of 10g, Oracle Workspace Manager seems to become the solution to our auditing problems. Still, there is one thing I could not find yet. We already have some auditing data. It is far from complete but it still helps us a lot. Is it possible to feed this information into the workspace manager to have a combined history?
Thanks in advance,
Oliver
December 28, 2005 - 5:44 pm UTC
I said #2 wins for me - meaning "a history table per table to be 'historied'"
...
2) Create a history table for each of the table requiring
the audit. Use a simple trigger (INSERT into history_tbl)
for these tables to populate their respective history
table.
....
These history tables would have the same exact columns as the base table, plus "effective" date columns.
Meaning - that single table - wouldn't be the approach of my choosing.
I know of no way to "feed old history into the workspace manager", no.
History/Audit
Oliver, December 29, 2005 - 7:25 am UTC
Thanks for your fast response. I'm glad to hear that you would use the structure of the original tables for the history tables instead of putting all information into a single column. At least one thing that I can keep in my system ;-)
I remember vaguely that you wrote somewhere to put DML statements used in triggers into packages to avoid unnecessary parsing. But what about the control logic (if/then/elsif ...)? Unlike the creator of this thread I need to track inserts and deletes too. For inserts and updates :NEW is used for the audit. When deleting, :OLD is saved in the history (at least the primary keys should be stored). Should this logic be placed in the trigger or in the package? Or would you code this logic by using a trigger for delete and another trigger for insert+update? At the moment it looks like this:
REM One sequence for all history tables
CREATE SEQUENCE "CHANGE_ID" INCREMENT BY 10 START WITH 10;
CREATE TABLE "PROJECT"
( "PROJECT_ID" NUMBER NOT NULL ENABLE,
"CREATED_BY" VARCHAR2(20) NOT NULL ENABLE
) ;
ALTER TABLE "PROJECT"
ADD CONSTRAINT "PK_PROJECT"
PRIMARY KEY ("PROJECT_ID");
CREATE TABLE "PROJECT_HIST"
( "PROJECT_ID" NUMBER,
"CREATED_BY" VARCHAR2(20),
"CHANGE_ID" NUMBER(*,0),
"CHANGE_KIND" CHAR(1) NOT NULL ENABLE,
"CHANGE_DATE" DATE NOT NULL ENABLE,
"CHANGE_USER" VARCHAR2(30) NOT NULL ENABLE
) ;
REM PK necessary for replication
ALTER TABLE "PROJECT_HIST"
ADD CONSTRAINT "PK_PROJECT_HIST"
PRIMARY KEY("CHANGE_ID");
ALTER TABLE "PROJECT_HIST"
ADD CONSTRAINT "CK_PROJECT_HIST"
CHECK (change_kind IN ('D', 'I', 'U'));
CREATE OR REPLACE TRIGGER "PROJECT_HIST"
BEFORE INSERT OR UPDATE OR DELETE ON project
FOR EACH ROW
DECLARE
l_change_kind CHAR(1);
l_user VARCHAR2(30);
BEGIN
IF from_remote() = 0 THEN
l_user := sampledb_context.get_attribute('USER_NAME');
IF INSERTING THEN l_change_kind:='I';
ELSIF UPDATING THEN l_change_kind:='U';
ELSIF DELETING THEN l_change_kind:='D';
END IF;
IF DELETING THEN
INSERT INTO project_hist(
project_id,
created_by,
change_kind, change_date, change_user, change_id)
VALUES(
:OLD.project_id,
:OLD.created_by,
l_change_kind, SYSDATE, l_user, change_id.NEXTVAL);
ELSE
IF (
( :NEW.project_id = :OLD.project_id OR
( :NEW.project_id IS NULL AND
:OLD.project_id IS NULL)) AND
( :NEW.created_by = :OLD.created_by OR
( :NEW.created_by IS NULL AND
:OLD.created_by IS NULL)) ) THEN
NULL;
-- No value has changed.
ELSE
INSERT INTO project_hist(
project_id,
created_by,
change_kind, change_date, change_user, change_id)
VALUES(
:NEW.project_id,
:NEW.created_by,
l_change_kind, SYSDATE, l_user, change_id.NEXTVAL);
END IF;
END IF;
END IF;
END;
/
Because we need the history of all tables (>60) of a schema, I hesitate to use one big package for this. And I guess using dynamic SQL is no good choice either. So there will be one package per table - or would you suggest something different?
Thanks and regards
Oliver
December 29, 2005 - 12:02 pm UTC
In real life, I would prefer a trigger that is a one line call (or as few lines as possible) to a packaged procedure or function.
The trigger code is stored compiled, but the SQL would only be cached per statement execution - meaning, you would be softparsing the SQL over and over again in the trigger (but not so in the packaged procedure)
Please verify my approach for putting in sequence number and change comments in audit table
M Kapoor, January 19, 2006 - 9:36 am UTC
Dear Tom,
I have followed the set up as suggested by you for putting a generic trigger for auditing column level changes, in terms of the audit_pkg, script for creating audit trigger, etc.
Could you please verify whether the following approach would work fine in a bulk load scenario as well as in normal transaction mode:
For inserting sequence number per update statement in audit table:
- declared a variable in audit_pkg header as number
- created a sequnce
- created a before update, statement level trigger on party which populates the pkg vriable with a sequence number.
- the pkg variable value thus set as above, is assigned to a number column in audit_tbl via audit_pkg procedures.
For inserting user comments for any update in the audit table:
- declared a variable in audit_pkg header as varchar2
- User does insert/update/delete on the audited table via a procedure which sets the audit_pkg variable.
- the pkg variable value thus set as above, is assigned to the comments column in audit_tbl via the audit_pkg procedures.
Thanks and regards,
Kapoor
January 19, 2006 - 1:43 pm UTC
as you test it, what do you discover......
why would you need this package variable when the trigger that calls this package can pass it the value directly.
can u send me the sample queries and how to delete duplicate rows in a table
Penchal Reddy, February 02, 2006 - 12:36 am UTC
i am checking to see how to delete duplicate rows in a table. but unfortunately i got a triggers explanation. it is very useful for me.....
can u send me the sample queries and how to delete duplicate rows in a table. i hope that u will respond to me immediately with the confirmation of this mail.
February 02, 2006 - 11:50 am UTC
"U" is not around right now (I am really looking forward to meeting "U" some day. I've been so curious about them, not actually having met them)
However, I did do this search:
</code>
http://asktom.oracle.com/pls/ask/search?p_string=delete+duplicate+rows <code>
it seems to bring up articles relating to deleting duplicate records....
how to implement such "custom-built logging"?
max, February 05, 2006 - 1:43 pm UTC
we've the requirement to provide some kind of "change log" to keep track of changes to the content of some columns that are spread across several tables (the whole data model).
therefore we don't think that workspace manager is appropriate to meet this requirement. and we don't want to incorporate an additional column into every table's primary key and have versions of rows either.
in addition we have to store some additional metadata for every "log entry" (e.g. creator, timestamp, user defined comment) ...
we could have decided to create a "common" log table containing all the abovementioned information and a primary key referencing table name, column name, source row (primary keys of different structures instead of rowids) and a sequence number to mark consecutive "versions" ...
how would you implement such "logging"?
February 06, 2006 - 12:39 am UTC
I would not have a common table, no chance (no way to provide data integrity).
there would be a log per table.
I would prefer to implement the entire interface via stored procedures (NOT triggers). Therefore, applications would not use insert/update/delete against these objects but rather a package that encapsulates the well formed transactions that may take place here.
there would not be any reference to rowids for sure, primary keys would be the only thing even considered.
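a sketch of that shape (emp/emp_log and the column names are stand-ins; one log table per base table, as above):

create or replace package emp_api as
    procedure update_sal( p_empno in number, p_sal in number, p_comment in varchar2 );
end;
/
create or replace package body emp_api as
    procedure update_sal( p_empno in number, p_sal in number, p_comment in varchar2 )
    is
        l_old emp.sal%type;
    begin
        -- lock and read the pre-image, then apply the change
        select sal into l_old from emp where empno = p_empno for update;
        update emp set sal = p_sal where empno = p_empno;
        -- the log row carries the extra metadata (creator, timestamp, comment)
        insert into emp_log( empno, old_sal, new_sal, who, ts, user_comment )
        values ( p_empno, l_old, p_sal, user, sysdate, p_comment );
    end;
end;
/

grant execute on the package and no DML privileges on the table itself - that is what makes the log trustworthy.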
How does Oracle know which audit_pkg.check_val to use?
BL, February 27, 2006 - 1:10 pm UTC
Great idea for logging changes. One thing that's left me puzzled is how in the world Oracle knows which check_val proc to use (no need to get extremely technical) - does it have to try one till it gets to the one with the right data types?
Also do you think this adds much overhead to the transaction?
Thanks in advance.
February 27, 2006 - 2:40 pm UTC
it is called overloading and it is resolved at compile time. It does data type matching.
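a tiny demonstration (set serveroutput on first):

create or replace package demo_ovl as
    procedure p( x in number );
    procedure p( x in date );
end;
/
create or replace package body demo_ovl as
    procedure p( x in number ) is begin dbms_output.put_line( 'number version' ); end;
    procedure p( x in date )   is begin dbms_output.put_line( 'date version' );   end;
end;
/
exec demo_ovl.p( 42 )        -- prints "number version"
exec demo_ovl.p( sysdate )   -- prints "date version"

the caller's compile-time datatypes pick the version; nothing is "tried" at runtime.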
how to log application accessing my database
Bez, March 02, 2006 - 1:23 pm UTC
Along with column changes, is there a way to capture the name of the program the user has connected with to make the change in the database? (for example ACCESS.exe, sqlplusw.exe, excel.exe, etc.)
In my 8i db I can query v$session.program to get that but it looks like 9.2 no longer populates that field.
thanks!
March 02, 2006 - 1:32 pm UTC
v$session has this.
ops$tkyte@ORA9IR2> select program from v$session;
PROGRAM
------------------------------------------------
oracle@vmware (PMON)
oracle@vmware (DBW0)
oracle@vmware (LGWR)
oracle@vmware (CKPT)
oracle@vmware (SMON)
oracle@vmware (RECO)
sqlplus@vmware (TNS V1-V3)
7 rows selected.
but beware!!! I can just copy your program to something else.... give it a different name.
Any benefit for this type of approach for audit
Rajeswari, March 03, 2006 - 8:29 am UTC
Sorry for long story.
<begin>
I am instructed to help a DB Developer remotely to improve the performance of a particular function which takes 2 hours to insert 2 million records. All the code is written in PL/SQL.
For auditing they are using 4 columns (created_date, created_user, last_modified_date, last_modified_user). They are using a trigger for loading these columns and for sequence generation. The above 4 columns are available in all their tables.
I suggested them to do the following changes after analyzing tkprof report
1. Use of bulk fetch,insert
2. No frequent commit
3. use of analytical functions
4. Reduce the function calls
5. Remove the below trigger and implement the logic in procedure itself
But they implemented only 2, 3 and 4. I asked for the reasons, but they are saying that for implementing (1) they didn't have time,
and for (5) their senior db person instructed them to call the procedure for modularity.
Now their time is reduced to 30 minutes by applying 2, 3 and 4. They are content with the current time. Really, their answer worried me a lot.
The very sad part is that they are truncating the date for created_date and modified_date.
<end>
My question is whether any advantage for following the below approach. Please share your thoughts.
CREATE OR REPLACE TRIGGER tr_com_ded_bri
BEFORE INSERT ON com_deduction
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
begin
IF :NEW.ded_id IS NULL THEN
SELECT SQ_COM_DED_ID.Nextval INTO :NEW.ded_id FROM Dual;
END IF;
APP_PACK_UTIL.PRC_Create_Audit(:NEW.created_date, :NEW.created_user, :NEW.last_mod_date, :NEW.last_mod_user);
end;
/
PROCEDURE PRC_CREATE_AUDIT
(
pCreatedDate IN OUT DATE,
pCreatedUser IN OUT VARCHAR2,
pLastModDate IN OUT DATE,
pLastModUser IN OUT VARCHAR2
)
IS
ldtmDate DATE := Trunc(SYSDATE);
lvarUser VARCHAR2(30):= USER;
BEGIN
--Remove the prefix OPS$
IF(substr(lvarUser,1, 4) = 'OPS$')
THEN
lvarUser := substr(lvarUser, 5);
END IF;
--Remove trailing 0 from the user name
IF(substr(lvarUser, -1, 1) = '0')
THEN
lvarUser := substr(lvarUser, 1, length(lvarUser)-1);
END IF;
IF pCreatedDate IS NULL THEN
pCreatedDate := ldtmDate;
END IF;
IF pCreatedUser IS NULL THEN
pCreatedUser := lvarUser;
END IF;
IF pLastModDate IS NULL THEN
pLastModDate := ldtmDate;
END IF;
IF pLastModUser IS NULL THEN
pLastModUser := lvarUser;
END IF;
END PRC_CREATE_AUDIT;
/
The other data type
Yiming Li, May 10, 2006 - 4:49 pm UTC
Tom,
I used your pkg for auditing the tables and everything is fine. Currently I have tables with long and clob types. Does your pkg work on these types if I modify the pkg a little, such as:
procedure check_val( l_tname in varchar2,
l_cname in varchar2,
l_new in long, <--
l_old in long, <--
l_pt_id in number,
l_ip_address in varchar2 );
procedure check_val( l_tname in varchar2,
l_cname in varchar2,
l_new in clob, <--
l_old in clob, <--
l_pt_id in number,
l_ip_address in varchar2 );
Thank you for this website.
yiming
Just what I was looking for
Russell Steel, June 21, 2006 - 11:19 am UTC
Just wondering about using the method on a table that already has a trigger defined on it that generates a sequence number. Would I be best adding this code to the existing trigger, or should I create a new trigger, which would be able to reference sequence_value.CURRVAL?
Will the triggers fire simultaneously or sequentially?
June 22, 2006 - 11:27 am UTC
not sure what you mean but.....
multiple same type triggers fire in "an undefined order". If you have 2 "before for each row" triggers, you cannot RELY on the order in which they fire. IF there is some ordering you need - then you should have one trigger that invokes the code in the right order.
Just what I was looking for
Russell Steel, June 23, 2006 - 5:51 am UTC
Our database already has audit triggers that monitor changes that occur on tables. Though we now have a client that would like their own additional auditing, which I was hoping we could achieve by creating client-only triggers.
The problem is, we use sequence values to generate audit numbers, but this number is not visible across triggers on the same table. Also the client wants before and after snapshots of the data; our current triggers only log the new image, not the before image.
Is it possible to dynamically read a table's current trigger code, then append the additional package calls?
June 23, 2006 - 10:19 am UTC
clarify your last paragraph please. Like is this a "one time thing" or do you want to do this in the trigger itself or what?
Just what I was looking for
Russell Steel, June 26, 2006 - 4:57 am UTC
As I already have AFTER insert triggers defined on the tables, the solutions I have are: to dynamically amend the current triggers to add the code required for column changes, or to change the current triggers to fire BEFORE insert and then create new AFTER insert triggers that will be able to reference the sequence value generated by the BEFORE insert.
send email message when trigger condition satisfied
Baqir Hussain, July 13, 2006 - 7:28 pm UTC
I would like to get your advise on how to send an email message to a user when trigger condition is satisfied as follows:
create or replace trigger aud_del_stoptype
after delete on LINESTOP
for each row
begin
if (:old.STOPTYPE = 'N' ) then
audit_pkg.check_val( 'LINESTOP', 'STOPTYPE', :new.STOPTYPE, :old.STOPTYPE, 'DELETE' );
audit_pkg.check_val( 'LINESTOP', 'SIGNID', :new.SIGNID, :old.SIGNID, 'DELETE' );
audit_pkg.check_val( 'LINESTOP', 'LINEDIRID', :new.LINEDIRID, :old.LINEDIRID, 'DELETE' );
audit_pkg.check_val( 'LINESTOP', 'NODEID', :new.NODEID, :old.NODEID, 'DELETE' );
audit_pkg.check_val( 'LINESTOP', 'STOPID', :new.STOPID, :old.STOPID, 'DELETE' );
end if;
end;
/
Please explain it with a short example. Thanks.
July 13, 2006 - 8:41 pm UTC
are you just asking how to send mail from a trigger?
your trigger condition seems to be satisfied "always" upon delete???
do you really want to send them an email for every row deleted?
be more precise.
send mail from trigger
Baqir Hussain, July 14, 2006 - 11:51 am UTC
Thanks for the prompt reply. When stoptype=N, the trigger will fire and insert records into the audit table. This condition (stoptype=N) will happen when a user tries to change the bus route, by mistake or at a scheduled time. This could happen either once a day or once a week. In order to find out, in time, who did the deletion -- an email notification is required to the application engineer so that prompt action can be taken.
All the mail packages you have written work fine, but I have no idea how to make it work within a trigger.
Thanks in advance.
July 14, 2006 - 12:46 pm UTC
just call it? it is just a stored procedure
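a sketch using 10g's utl_mail (it assumes utl_mail is installed and smtp_out_server is configured; on 9i you would use utl_smtp instead - the addresses are made up):

create or replace trigger aud_del_stoptype
after delete on LINESTOP
for each row
begin
    if ( :old.stoptype = 'N' )
    then
        -- ... the audit_pkg.check_val calls from your trigger go here ...
        utl_mail.send( sender     => 'db@yourcompany.com',
                       recipients => 'app.engineer@yourcompany.com',
                       subject    => 'LINESTOP stop deleted',
                       message    => 'stopid ' || :old.stopid ||
                                     ' deleted by ' || user ||
                                     ' at ' || to_char( sysdate, 'dd-mon-yyyy hh24:mi:ss' ) );
    end if;
end;
/

be aware the mail goes out even if the deleting transaction later rolls back - if that matters, queue the message (e.g. into a table read by a job) instead of mailing directly from the trigger.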
Joe, July 14, 2006 - 2:18 pm UTC
I'm not sure if this qualifies as a new question.
How would you modify the trigger if you do not care which column was updated? All you need to know is that a column was updated. Then the primary key and date would be written to another table. A history of the previous information for the row is not needed. Only the newest information is used.
July 14, 2006 - 2:51 pm UTC
create trigger whatever
after update on table for each row
begin
insert into your_audit (pk,dt) values (:new.pk,sysdate);
end;
followup to my previous question (from 07/14/2006)
Joe, July 17, 2006 - 2:22 pm UTC
I am unable to do a blind insert. A program whose code I do not have access to updates the table. It will call the update procedure whenever the user hits the save button, even if no data has changed. Because of this I need to see if a change is actually made.
It is my understanding that when an update command runs it does not check to see if any data is being modified. It just overwrites what is there.
How would the process above be modified so the primary key and date would be written to another table only when a value has changed?
July 17, 2006 - 3:17 pm UTC
you would call check_val on each old/new value and look at the return from it.
rewrite check_val to return a boolean true/false and have the generated trigger look like:
if ( check_val(:old.c1,:new.c1) or
check_val(:old.c2,:new.c2) or ....
)
then
do_insert....
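one way to write such a function (a sketch - you would overload it per datatype, as the original package does):

create or replace function changed( l_old in varchar2, l_new in varchar2 )
return boolean
is
begin
    return (  l_old <> l_new
           or ( l_old is null and l_new is not null )
           or ( l_old is not null and l_new is null ) );
end;
/

the generated trigger then ORs the calls together and does the single insert only when at least one column really changed.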
Other way for generic trigger ?
Cari Ricci, July 18, 2006 - 5:27 pm UTC
Thanks for your information. Congratulations on this site.
Sorry if I insist on a dummy question, but I am not clear.
I am trying to develop an audit tool with PL/SQL for Oracle 9i and 10g.
Do you know another alternative for generating a generic trigger?
Why can I not use dynamic SQL to create the trigger?
I am doing it, but I receive error PLS-00049, and I read your answer; would you please explain better?
Do you know another way?
--
Create or replace procedure test(v_tname varchar2)
IS
begin
execute immediate ' CREATE OR REPLACE TRIGGER trg_' || v_tname ||
' BEFORE INSERT
ON ' || v_tname || '
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
SELECT pkg_audit.p_det_action(column_name ,:old.column_name, :new.column_name)
FROM user_tab_columns WHERE table_name = upper('|| v_tname ||');
EXCEPTION
WHEN others then
null;
End;';
end;
/
July 19, 2006 - 9:00 am UTC
I'm not sure where to even begin....
when others then null - STOP IT, JUST DON'T DO IT, WHY BOTHER WRITING CODE IF YOU DO. Sorry for shouting, but hey - you basically are saying "if we encounter an error that prevents our code from running, so what, just ignore it". If you can say that - I say "just don't ever run your code, it is not useful apparently"
So, DON'T EVER DO THAT.
as for the body of the trigger, I'm not sure what you are trying to do at all?
And unless you really know you want an autonomous transaction and why you want one, don't use that either.
Sorry
Cari Ricci, July 19, 2006 - 10:06 am UTC
Tom,
I am so sorry for my dummy question.
July 20, 2006 - 7:43 am UTC
you don't have to be sorry - but you could explain in "text" what your goal was. You do understand that if you give someone code that "just doesn't work" and ask them to "make it work", it would be extremely useful to tell them what it actually is supposed to be doing.
some help on the approach to be taken.
Vijay, July 26, 2006 - 6:39 am UTC
Hi Tom,
good day to you as always. I need your help on a requirement we have. The requirement is as follows: the application users will be in two groups, makers and approvers; any data added or modified by a maker is not final unless an approver approves the changes.
So the scenario will be something like this: user A (maker) adds a new customer; then user B (approver), when he logs into the system, should see the changes done (i.e. the add in this case), and when he approves, the changes become final.
To handle this, can we have something as follows: a customer table as normal, and any add/update should go to another table that has a clob column; all the data entered from the screen would be in this clob column with delimiters. When the approver opens his screen we show him the data from this table, and once he approves, the data is moved to the customer (or other respective) table.
The other way I can think of is having two tables of similar structure; changes would first go to the clone table and then, after approval, to the master table.
I am not sure if workspace manager would be a solution to this requirement. Can you please suggest something on this? Your help is much appreciated.
thanks and regards,
Vijay'S
July 26, 2006 - 11:24 am UTC
sounds like a queue to me, have you considered using a message queue (AQ).
What about application users?
RP, August 06, 2006 - 3:07 pm UTC
Hi Tom,
i really like the method for tracking column level changes to specific tables that you have at the beginning of this thread.
I may be losing the argument with my team on using REAL oracle accounts though (they use the ONE BIG APPLICATION user model), and so wanted to know if the trigger could use information in the application context (which i will populate every time a request comes in from the mid tier)????
Thanks
August 06, 2006 - 4:12 pm UTC
absolutely - in fact if their application were to call dbms_session.set_client_identifier - we'd be able to audit that for you (we do, that shows up in the audit trail)
Using object definitions for tables
Billy, October 09, 2006 - 4:47 am UTC
I like object types. Methinks it is one of the most underestimated and under utilised features in Oracle.
With regards to dealing with :NEW/:OLD as static references - one can treat them dynamically when dealing with object tables, using the (generic/standard) OBJECT_VALUE.
The following simplistic example illustrates
SQL> create or replace type TUser is object
2 (
3 id number,
4 name varchar2(30)
5 );
6 /
Type created.
SQL>
SQL> create table application_audit
2 (
3 datetime date,
4 username varchar2(30),
5 table_name varchar2(30),
6 operation varchar2(1),
7 audit_row TUser
8 )
9 /
Table created.
SQL>
SQL>
SQL>
SQL> create or replace procedure AuditUser( cTableName varchar2, cOperation varchar2, userObj TUser ) is
2 begin
3 insert
4 into application_audit
5 values(
6 SYSDATE,
7 USER,
8 cTableName,
9 cOperation,
10 userObj
11 );
12 end;
13 /
Procedure created.
SQL>
SQL>
SQL>
SQL> create table users of TUser
2 (
3 constraint pk_users primary key( id )
4 )
5 organization index
6 /
Table created.
SQL>
SQL>
SQL> create or replace trigger audit_trig_users
2 before insert or delete or update on USERS
3 for each row
4 declare
5 userObj TUser;
6 begin
7 userObj := :NEW.Object_Value;
8 if userObj is NULL then
9 userObj := :OLD.Object_value;
10 end if;
11
12 case
13 when INSERTING then AuditUser( 'USERS', 'I', userObj );
14 when DELETING then AuditUser( 'USERS', 'D', userObj );
15 when UPDATING then AuditUser( 'USERS', 'U', userObj );
16 end case;
17 end;
18 /
Trigger created.
SQL> show errors
No errors.
SQL>
SQL>
SQL>
SQL> insert into users values( 100, 'Tom');
1 row created.
SQL> insert into users values( 200, 'Dick');
1 row created.
SQL> insert into users values( 300, 'Hary');
1 row created.
SQL> insert into users values( 400, 'Hary' );
1 row created.
SQL> update users set name = 'Harry' where id = 200;
1 row updated.
SQL> delete from users where id = 400;
1 row deleted.
SQL> commit;
Commit complete.
SQL>
SQL> col OPERATION format a10
SQL> col AUDIT_ROW format a50
SQL> col USERNAME format a20
SQL> col TABLE_NAME format a20
SQL> select
2 a.datetime,
3 a.username,
4 a.table_name,
5 case a.operation
6 when 'I' then 'INSERT'
7 when 'U' then 'UPDATE'
8 when 'D' then 'DELETE'
9 end as OPERATION,
10 a.audit_row as AUDIT_ROW
11 from application_audit a
12 /
DATETIME USERNAME TABLE_NAME OPERATION AUDIT_ROW(ID, NAME)
------------------- ---------- ---------- ---------- --------------------
2006/10/09 10:38:07 BILLY USERS INSERT TUSER(100, 'Tom')
2006/10/09 10:38:07 BILLY USERS INSERT TUSER(200, 'Dick')
2006/10/09 10:38:07 BILLY USERS INSERT TUSER(300, 'Hary')
2006/10/09 10:38:07 BILLY USERS INSERT TUSER(400, 'Hary')
2006/10/09 10:38:07 BILLY USERS UPDATE TUSER(200, 'Harry')
2006/10/09 10:38:07 BILLY USERS DELETE TUSER(400, 'Hary')
6 rows selected.
SQL>
October 09, 2006 - 8:41 am UTC
...
I like object types. Methinks it is one of the most underestimated and under
utilised features in Oracle.
.....
methinks I disagree entirely :) the inefficiencies introduced by this generic layer, that could be very efficiently implemented with relational structures yourself (has to be true, we ONLY HAVE relational structures under the covers) are numerous. Start adding collection types and it is all over.
Syntatic sugar.
Object types
Billy, October 09, 2006 - 9:50 am UTC
Tom said:
> methinks I disagree entirely :) the inefficiencies
> introduced by this generic layer, that could be very
> efficiently implemented with relational structures
> yourself (has to be true, we ONLY HAVE relational
> structures under the covers) are numerous. Start adding
> collection types and it is all over.
I would *not* swap relational design for an o-r design - or at least not before Oracle v12.OD (Object Database).. ;-)
But I do find object types/ADTs extremely useful in PL/SQL code.
The problem however is that when dealing with an ADT in PL/SQL, you want the table to share that same definition with the data being accessed from the database. It makes maintenance a lot easier. It means that you can use the VALUE() function to fetch the object instead of manually having to code a type constructor in the SQL's projection clause.
So do you do a CREATE TABLE as per normal, or instead a CREATE TABLE OF <TYPE>...?
I have not yet made up my mind on this.. but it is very tempting to do it at times as it allows you to, for example, reference a static like :NEW in a trigger dynamically.
My current take is to use it when needed (which thus far has been few and far between in production) - but while practising safe hex at all times. ;-)
October 09, 2006 - 10:13 am UTC
..
But I do find object types/ADTs extremely useful in PL/SQL code..
....
ABSOLUTELY, as a programming construct, definitely.
quote from Expert one on one Oracle:
<quote>
Reasons For Using These Features
The reason I use the object relational features in Oracle is predominantly as a means to extend the PL/SQL language in a natural fashion. The object type is an excellent way to extend PL/SQL with new functionality in the same way a class structure is in C++ or Java. We will take a look at an example of doing this in the following section.
</quote>
After explaining why I don't like the "magic" of types for implementing tables, I do offer this up:
<quote>
Object Relational Views
This is a fairly powerful feature for those of you who want to work with the object relational features, but still must present a relational view of the data to many applications. This allows you to use the standard VIEW mechanism to synthesize objects from relational tables. You don't have to create tables of a TYPE, with all of the mysterious columns and such – you can create a view of standard tables you have created (and probably already have). These views will behave just like an object table of that type would – without much of the overhead of hidden keys, surrogate keys, and other nuances.
</quote>
set_identifier or set_client_identifier
Anwar, October 10, 2006 - 12:52 am UTC
This is regarding your followup to RP's question above. I only see the set_identifier procedure in dbms_session. Is this a typing error, or am I on a different version of the database (9.0.1.5.1)?
regards
Anwar
October 10, 2006 - 7:38 am UTC
just a typo
Is there downsides to this approach?
RP, December 03, 2006 - 4:25 am UTC
Hi Tom,
I really wanted to use this technique but our DBA just shook his head and mumbled something about not liking triggers.
I think what he said was that triggers can go invalid and they might not run when they should. Is there anything in this at all?
I attended your 2 day seminar in London earlier this year and you did mention one case with triggers that was not good. That related to over long transactions that aborted half-way (I think!).
But in this auditing trigger scenario - is there anything that could go wrong?
Thanks
RP
December 03, 2006 - 7:44 am UTC
I don't like triggers either and avoid them at all costs.
If the trigger goes invalid AND it cannot be automagically recompiled (Oracle will try to recompile it without any effort on your part), then something is horribly broken and frankly - anything would start failing (not just the trigger)
Old and new values for CLOBs
A reader, January 19, 2007 - 12:02 pm UTC
Hi Tom,
Your site has been very quiet for the past week - very unusual - I hope everything is fine on your side.
Question - can we modify your procedure audit_pkg.check_val to capture the old and new values for CLOBs?
thanks
Anto
CLOBs - old and new values
A reader, January 19, 2007 - 12:44 pm UTC
Hi,
I have modified your get_val procs for CLOBs, but only the old values are getting captured, not the :new values - any idea?
thanks
Anto
just a little bit
AK, March 05, 2007 - 5:05 pm UTC
Tom, this is the first time I am writing to this site; it has been a very helpful and efficient source of knowledge to me and keeps me motivated to be a DBA. Coming back to the topic: I just wanted to know if this was solved?
<quote>
Can you show me if is possible write a query that return inserts... deletes... and updates...
I was thinking about
select op || case op
when 'INSERT' then ' INTO ' || tname || ' VALUES ( '|| new
when 'UPDATE' then tname || ' set ' || cname || ' = ' || new
when 'DELETE' then ' from ' || tname || ' where ' || cname || ' = ' || old
end
from audit_dml
but the update won't work, and for the delete I would have to put in some WHERE clause.
</quote>
I am stuck in the same situation and I know a stored procedure will be helpful, but some more input from your side would be much appreciated.
March 05, 2007 - 8:46 pm UTC
I don't see why this is hard? This audit dml table has everything you need to make these simple sort of things.
the hard part is that the values SPAN rows, but if you work on it a bit - I'm sure you can get it.
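For the update case, something along these lines could work (a sketch, assuming AK's audit_dml table has columns ts, who, tname, cname, old, new and op - the names are illustrative - and using listagg, which needs 11gR2; earlier releases would need a user-defined string aggregate):
select 'update ' || tname || ' set ' ||
       listagg( cname || ' = ''' || new || '''', ', ' )
         within group (order by cname) as stmt
  from audit_dml
 where op = 'UPDATE'
 group by tname, who, ts;   -- rows written by one UPDATE share who/ts here
Note the generated statement still lacks a WHERE clause - that is exactly the "values span rows" problem: you would have to carry the audited row's key in the audit table to build it.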
Different approaches to track history
Lino Bort, March 14, 2007 - 3:01 pm UTC
Hi Tom,
We are designing a new database and now we are facing the problem of history tracking. We know that we have several approaches:
1- Generic triggers as you suggested a long time ago in this thread.
2- Workspace Manager
3- Advanced Queuing
4- Generic Audit or FGA
We don't like the first one because, as you said, we have to develop and maintain new code.
Workspace Manager has a lot of functionality we don't need to use. We only want to maintain a history of changes made in some tables. Moreover, the execution plans generated are huge and we cannot manage them. For us, the response time of some queries is critical, and using this approach we lose control over the plans.
Advanced Queuing could be right, but we also have to write a trigger to call the PL/SQL AQ interface and insert into some table to enqueue messages. In this case, history tracking can be done in batch.
With generic audit or FGA we can track SQL statements executed but our application does need to access the actual columns modified. So this approach is not appropriate for us.
Do your think we have other alternatives? What do you think about the alternatives we mention?
Thanks in advance,
Lino Bort
March 15, 2007 - 7:22 am UTC
transactional APIs - that is, auditing is part of the application, part of the transaction.
It is not an afterthought
it is not something to be added via magic
it is a requirement of your processing, as valid - as needed - as necessary as anything else.
Hence, just make it PART OF the transaction from the very beginning.
So, take your existing transaction code and have it do this, just like you would in response to any new requirement.
I am more and more of the mind that trying to graft audit on after the fact is just so wrong for so many reasons, it is as important as everything else, don't relegate it to secondary status.
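To make that concrete, a minimal sketch of a transactional API (update_dept_loc is an illustrative name, reusing the dept table and the audit_tbl structure from this thread) - the audit insert is part of the same transaction as the change:
create or replace procedure update_dept_loc(
    p_deptno  in dept.deptno%type,
    p_new_loc in dept.loc%type )
as
    l_old_loc dept.loc%type;
begin
    -- lock and read the current value as part of this transaction
    select loc into l_old_loc from dept where deptno = p_deptno for update;
    update dept set loc = p_new_loc where deptno = p_deptno;
    -- the audit record is a requirement of the transaction, not an afterthought
    insert into audit_tbl( timestamp, who, tname, cname, old, new )
    values ( sysdate, user, 'DEPT', 'LOC', l_old_loc, p_new_loc );
end;
/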
PL/SQL API to track history
Lino Bort, March 15, 2007 - 2:28 pm UTC
Ok, it is true that history tracking is part of our application. But we only wanted to know if Oracle Database 10g Release 2 had a wonderful functionality to do that - and, most important, one appropriate for our application.
We think the best solution for us is to create a PL/SQL package to execute DML statements, in which we effectively execute the DML the application wanted to execute plus an insert into a history table. In this manner we can also control the way applications execute DML because we actually execute it for them, avoiding bad practices. Indeed, we must control the order of insertion into a clustered table, and we can achieve that with this kind of package. The problem here is how to prevent applications from executing the DML directly.
What do you think about this solution?
Thanks
March 16, 2007 - 2:55 pm UTC
well, there is - but you already said you didn't want to use it.
workspace manager does what you require.
... to create a PL/SQL package to execute DML sentences, ...
no, make transactional api's
do not make 'table' api's
in that way, you can outlaw bad practices since all would be done properly in a stored procedure under your control.
Can we decrease number of tables?
Igor, May 27, 2007 - 6:53 pm UTC
Hello Tom,
After reading this thread I believe I understand the concepts described in it, but I still have a question for you.
The question is not discussed in this thread, so I am prone to believe it is ridiculous, and I apologize if it is so.
If the auditing approach is trigger based, and changed data is stored in the same structure as the original data, is there some way to decrease the number of tables necessary for the audit functionality, while keeping the fast querying of current data that the separation into two tables offers?
I was thinking about the possibility of using one table (the original one), but partitioning it on the value of an additional field named CURRENT.
The CURRENT field would be NULL-able and only the record representing the current value would have 'X' as its value. Because of this, the partitioning of the table would include MAXVALUE.
Every change to an existing entity would result in an update of the record that has CURRENT = 'X' with the new values, and an insertion (by trigger) of a record representing the previous version with CURRENT = NULL. This way row movement would be avoided.
So, please can you tell me which drawbacks you see in the approach with one partitioned table (maybe compared with option #2 from Rich's review from January 31st, 2005)?
Thank you in advance for sharing your knowledge, experience and expertise with us.
Igor
May 30, 2007 - 9:26 am UTC
if you were to do that, you might as well just use workspace manager which does this for you. It would not use partitioning, but would use a single table.
drawbacks - depends on the implementation. I would presume you would use a view to query the partition key so that partition elimination would always kick in, so you would end up with two views anyway, not sure if it would be conceptually any different to the application from the two table approach in the end.
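For illustration, the single-table layout might look like this (a minimal sketch, assuming 9iR2+ list partitioning; all names are illustrative):
create table emp_versions
( empno    number,
  ename    varchar2(10),
  sal      number,
  cur_flag varchar2(1)   -- 'X' on the current row, NULL on history rows
)
partition by list (cur_flag)
( partition p_current values ('X'),
  partition p_history values (default) );
-- query current data through a view on the partition key, so that
-- partition elimination always kicks in
create or replace view emp_current as
select empno, ename, sal
  from emp_versions
 where cur_flag = 'X';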
Generic statement triggers
pkot, October 15, 2007 - 2:08 am UTC
Hi Tom,
I have written two after-insert row triggers on one table.
Can I specify the order of execution of the same type of triggers?
Thanks
1 trigger AFTER UPDATE per column or only 1 trigger for the table?
Pierre LEJEUNE, November 22, 2007 - 5:17 am UTC
Hi Tom,
what is the most efficient code: generating one trigger AFTER UPDATE OF <column> per column, or only one trigger for the table which tests all the columns?
I'm sure that my application only updates the modified columns (Oracle Forms).
November 26, 2007 - 10:41 am UTC
well, it'll depend.
Given that I hate triggers in general (except for data validation of the individual row), I would tend to go with "keep it simple - a single trigger that calls a packaged procedure - the trigger, there is one, the lines of code in it - one, a call to a procedure"
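A minimal sketch of that shape (aud_dept and audit_pkg.on_dept_update are illustrative names; the packaged procedure would hold all of the real logic, so the trigger itself never needs to change):
create or replace trigger aud_dept
after update on dept
for each row
begin
    -- one line: hand every old/new pair to the package and let it decide
    audit_pkg.on_dept_update( :old.deptno, :new.deptno,
                              :old.dname,  :new.dname,
                              :old.loc,    :new.loc );
end;
/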
How can I use "bulk collect" for inserting to the audit table ?
ido mosseri, December 11, 2007 - 5:20 am UTC
Hello,
I have a table with 20 columns.
It looks to me (maybe I'm wrong) that it will slow the system down when it inserts 20 new records into the audit table.
My questions:
1. Would it be faster if I used bulk collect?
2. How can I use bulk collect to insert all the records at once?
Many Thanks,
December 11, 2007 - 9:00 am UTC
bulk collect is for FETCHING
we are not fetching here, we are inserting.
so, bulk collect does not apply.
you could write a series of complicated triggers - a before statement, after for each row, and an after statement trigger, that use a package to store a state, and do a forall insert in the AFTER trigger
but, by the time you do that, you've added so much procedural code, you are slow anyway.
pay me now, pay me later. It won't matter significantly.
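For what it is worth, a minimal sketch of that statement/row/statement pattern, assuming the audit_tbl structure used throughout this thread (trigger, package and table names are illustrative):
create or replace package audit_state
as
    -- statement-scoped buffer of audit rows
    type audit_tab is table of audit_tbl%rowtype index by binary_integer;
    g_rows audit_tab;
end;
/
create or replace trigger t_aud_bs
before update on t
begin
    audit_state.g_rows.delete;   -- reset the buffer for this statement
end;
/
create or replace trigger t_aud_fer
after update on t for each row
declare
    i binary_integer := audit_state.g_rows.count + 1;
begin
    audit_state.g_rows(i).timestamp := sysdate;
    audit_state.g_rows(i).who       := user;
    audit_state.g_rows(i).tname     := 'T';
    -- ... fill in cname/old/new from :old and :new, as check_val does ...
end;
/
create or replace trigger t_aud_as
after update on t
begin
    forall i in 1 .. audit_state.g_rows.count
        insert into audit_tbl values audit_state.g_rows(i);
    audit_state.g_rows.delete;
end;
/
As noted above though, the extra plumbing tends to eat whatever the bulk insert saves.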
If 11g is an option, then
http://www.oracle.com/technology/products/database/oracle11g/pdf/flashback-data-archive-whitepaper.pdf is something to look at..
using SYSTIMESTAMP and RETURN in trigger
matthias, February 21, 2008 - 10:18 am UTC
Hi Tom,
I cannot explain the following behaviour using "SYSTIMESTAMP" and "RETURN" in a before insert trigger:
prj@PRJ> create table t (da date, ts timestamp);
Table created.
prj@PRJ> create or replace trigger BI_T before INSERT on T for each row
2 BEGIN
3 :new.da := SYSDATE;
4 :new.ts := SYSTIMESTAMP;
5 END;
6 /
Trigger created.
prj@PRJ> insert into t values (null, null);
1 row created.
prj@PRJ> select * from t;
DA TS
------------------- ------------------------
21.02.2008 16:01:45 21.02.08 16:01:45,321000
prj@PRJ> create or replace trigger BI_T before INSERT on T for each row
2 BEGIN
3 :new.da := SYSDATE;
4 :new.ts := SYSTIMESTAMP;
5 return;
6 END;
7 /
Trigger created.
prj@PRJ> insert into t values (null, null);
1 row created.
prj@PRJ> select * from t;
DA TS
------------------- ------------------------
21.02.2008 16:01:45 21.02.08 16:01:45,321000
21.02.2008 16:02:05
Why is TS not set in the second record? Just because of the "return" in my trigger? How come?
TIA - Matthias
February 21, 2008 - 11:28 am UTC
oh, that is not good, filed bug 6837497 on that.
Hansjörg Tengg, May 19, 2008 - 2:32 am UTC
Thank you very much Tom for your answer to this question. We have now implemented our auditing mechanism with the help of this thread. The structure of the audit table looks as follows:
-------------------------------------------------------
TABLE_NAME | COLUMN_NAME | GUID | DATE | OLD_VALUE | NEW_VALUE
-------------------------------------------------------
We now want to get the column values of a certain record (GUID) at a certain timestamp/date (in order to reset this record if something went wrong). Do you know any simple solution to this problem, other than writing an SQL query for each column of this table and getting the first record before the given timestamp?
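The best we have come up with so far is a sketch along these lines (audit_tab and audit_date are illustrative names for the table and date column above): for each column, take the last NEW_VALUE written at or before the target time:
select column_name,
       max(new_value) keep (dense_rank last order by audit_date) as value_asof
  from audit_tab
 where guid       = :guid
   and table_name = :tname
   and audit_date <= :asof
 group by column_name;
Columns with no audit row at or before :asof would have to come from the OLD_VALUE of their first later change, or from the current row.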
problem with an audit trigger after update for a large number of columns
Amh, February 05, 2009 - 8:33 am UTC
I'm facing a problem with an "audit trigger" (an after-update trigger) for a large number of columns: the length of the list of columns is more than 2000 characters, and I got this error message:
"ORA-04086: trigger description too long, move comments into triggering code"
February 05, 2009 - 11:27 am UTC
if you have that many columns - why bother listing them?
problem with an audit trigger after update for a large number of columns
Amh, February 05, 2009 - 6:27 pm UTC
I must list all the columns because I created a form which allows the user to choose the table and the columns which he wants to be audited. And if I split the trigger into two triggers, I will not be able to insert all the other columns' values, because only one trigger will be executed.
February 05, 2009 - 7:58 pm UTC
give me an example trigger you are coding and I'll show you how to do it. Give me a create table and a trigger that currently does not work
problem with an audit trigger after update for a large number of columns
Amh, February 06, 2009 - 4:39 am UTC
The table which I want to audit has 400 columns, but the user wants to audit only 300 columns, so I created a procedure to generate a dynamic trigger. In this case the generated trigger description has a list of the columns chosen by the user; the length of this list exceeds 2000 characters, which is the maximum length for a trigger description, so I got the error "ORA-04086". If I don't write the list of columns, all the columns will be audited, and I don't want that.
February 06, 2009 - 3:33 pm UTC
give me an example of your template trigger please - say for 5 columns. I'll show you *another way* as I said.
A reader, February 06, 2009 - 5:49 am UTC
Can we save information about table changes (column added, column dropped, table created) and procedure changes (terminal/user/procedure) via trigger auditing? Or is there some other way to achieve it?
problem with an audit trigger after update for a large number of columns
Amh, February 06, 2009 - 5:54 pm UTC
CREATE OR REPLACE TRIGGER AIKBMLS.WRETRM_AUDIT_TRG
AFTER UPDATE OR DELETE OF WRETRM_CONS_CODE,WRETRM_TITLE ON AIKBMLS.WRETRM
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
V_OPERATION VARCHAR2(10);
V_COUNTER NUMBER;
V_VALUE VARCHAR2(400);
BEGIN
----------TO SET V_OPERATION UPDATE OR DELETE-------
IF UPDATING THEN
V_OPERATION := 'U';
ELSIF DELETING THEN
V_OPERATION := 'D';
END IF;
--------COUNT NUMBER OF UPDATES OR DELETES ON TABLE------
SELECT COUNT(distinct A.GXRAUDT_TRNS_SEQ)+1
INTO V_COUNTER
FROM GXRAUDT A
WHERE A.GXRAUDT_OWNER = 'AIKBMLS'
AND A.GXRAUDT_TABLE_NAME = 'WRETRM'
AND A.GXRAUDT_TRNS_TYPE= V_OPERATION;
-------LOOP FOR COLUMNS ON AUDIT SETUP TABLE GXBAUDT-------
---CHECK VALUES IF UPDATED OR NOT DEPENDING ON CASES---
----1-IF NEW VALUE NULL AND OLD VALUE IS NOT NULL OR----
----2-IF NEW VALUE NOT NULL AND OLD VALUE NULL OR----
----3-IF NEW AND OLD VALUES ARE NOT NULL AND NOT EQUAL------
IF UPDATING THEN
IF ((:NEW.WRETRM_CONS_CODE IS NULL) AND (:OLD.WRETRM_CONS_CODE IS NOT NULL)) OR (:NEW.WRETRM_CONS_CODE IS NOT NULL AND
:OLD.WRETRM_CONS_CODE IS NULL) THEN
BEGIN
INSERT INTO GXRAUDT (GXRAUDT_OWNER,GXRAUDT_TABLE_NAME,
GXRAUDT_COL_NAME,GXRAUDT_TRNS_SEQ,GXRAUDT_TRNS_DATE,GXRAUDT_TRNS_TYPE,
GXRAUDT_TRNS_USER,GXRAUDT_OLD_VALUE)
VALUES (
'AIKBMLS','WRETRM','WRETRM_CONS_CODE',V_COUNTER,SYSDATE,V_OPERATION,USER,:OLD.WRETRM_CONS_CODE);
END;
ELSIF (:NEW.WRETRM_CONS_CODE IS NOT NULL) AND
(:OLD.WRETRM_CONS_CODE IS NOT NULL) THEN
BEGIN
IF :NEW.WRETRM_CONS_CODE <> :OLD.WRETRM_CONS_CODE THEN
BEGIN
INSERT INTO GXRAUDT (GXRAUDT_OWNER,GXRAUDT_TABLE_NAME,
GXRAUDT_COL_NAME,GXRAUDT_TRNS_SEQ,GXRAUDT_TRNS_DATE,GXRAUDT_TRNS_TYPE,
GXRAUDT_TRNS_USER,GXRAUDT_OLD_VALUE)
VALUES (
'AIKBMLS','WRETRM','WRETRM_CONS_CODE',V_COUNTER,SYSDATE,V_OPERATION,USER,:OLD.WRETRM_CONS_CODE);
END;
END IF;
END;
END IF;
ELSIF DELETING THEN
INSERT INTO GXRAUDT (GXRAUDT_OWNER,GXRAUDT_TABLE_NAME,
GXRAUDT_COL_NAME,GXRAUDT_TRNS_SEQ,GXRAUDT_TRNS_DATE,GXRAUDT_TRNS_TYPE,
GXRAUDT_TRNS_USER,GXRAUDT_OLD_VALUE)
VALUES (
'AIKBMLS','WRETRM','WRETRM_CONS_CODE',V_COUNTER,SYSDATE,V_OPERATION,USER,:OLD.WRETRM_CONS_CODE);
END IF;
IF UPDATING THEN
IF ((:NEW.WRETRM_TITLE IS NULL) AND (:OLD.WRETRM_TITLE IS NOT NULL)) OR (:NEW.WRETRM_TITLE IS NOT NULL AND
:OLD.WRETRM_TITLE IS NULL) THEN
BEGIN
INSERT INTO GXRAUDT (GXRAUDT_OWNER,GXRAUDT_TABLE_NAME,
GXRAUDT_COL_NAME,GXRAUDT_TRNS_SEQ,GXRAUDT_TRNS_DATE,GXRAUDT_TRNS_TYPE,
GXRAUDT_TRNS_USER,GXRAUDT_OLD_VALUE)
VALUES (
'AIKBMLS','WRETRM','WRETRM_TITLE',V_COUNTER,SYSDATE,V_OPERATION,USER,:OLD.WRETRM_TITLE);
END;
ELSIF (:NEW.WRETRM_TITLE IS NOT NULL) AND
(:OLD.WRETRM_TITLE IS NOT NULL) THEN
BEGIN
IF :NEW.WRETRM_TITLE <> :OLD.WRETRM_TITLE THEN
BEGIN
INSERT INTO GXRAUDT (GXRAUDT_OWNER,GXRAUDT_TABLE_NAME,
GXRAUDT_COL_NAME,GXRAUDT_TRNS_SEQ,GXRAUDT_TRNS_DATE,GXRAUDT_TRNS_TYPE,
GXRAUDT_TRNS_USER,GXRAUDT_OLD_VALUE)
VALUES (
'AIKBMLS','WRETRM','WRETRM_TITLE',V_COUNTER,SYSDATE,V_OPERATION,USER,:OLD.WRETRM_TITLE);
END;
END IF;
END;
END IF;
ELSIF DELETING THEN
INSERT INTO GXRAUDT (GXRAUDT_OWNER,GXRAUDT_TABLE_NAME,
GXRAUDT_COL_NAME,GXRAUDT_TRNS_SEQ,GXRAUDT_TRNS_DATE,GXRAUDT_TRNS_TYPE,
GXRAUDT_TRNS_USER,GXRAUDT_OLD_VALUE)
VALUES (
'AIKBMLS','WRETRM','WRETRM_TITLE',V_COUNTER,SYSDATE,V_OPERATION,USER,:OLD.WRETRM_TITLE);
END IF;
END;
/
February 09, 2009 - 5:58 pm UTC
how many other people saw an obvious multi-user problem with this trigger in 5 seconds or less?
It jumped right off the screen and punched me in the nose.
this code is as dysfunctional as you get:
SELECT COUNT(distinct A.GXRAUDT_TRNS_SEQ)+1
INTO V_COUNTER
FROM GXRAUDT A
WHERE A.GXRAUDT_OWNER = 'AIKBMLS'
AND A.GXRAUDT_TABLE_NAME = 'WRETRM'
AND A.GXRAUDT_TRNS_TYPE= V_OPERATION;
first - what a waste of resources - that is a really good way to slow down the audit and not provide any useful information whatsoever.
Ok, the 'solution' is not to use the OF clause (which will result in tons of false positives anyway - many "frameworks" update columns even when they do not actually change them)
Instead - generate this as the first bit of code:
if (updating)
then
select decode( :new.wretrm_cons_code, :old.wretrm_cons_code, null, 'x' ) ||
decode( :new.wretrm_title , :old.wretrm_title, null, 'x' )
into l_something
from dual;
if (l_something) is null
then
return;
end if;
end if;
it works with null values and all. If you update, and you do not modify any of the columns of interest - it returns.
and absolutely lose that USELESS AND WRONG count you are doing - think for a moment about what happens when more than one person uses your database....
problem with an audit trigger after update for a large number of columns
Amh, February 08, 2009 - 4:01 am UTC
The Last version of my trigger:
CREATE OR REPLACE TRIGGER AIKBMLS.WRETRM_AUDIT_TRG
AFTER UPDATE OR DELETE OF WRETRM_ACTIVITY_DATE,WRETRM_ADDR_CITY,WRETRM_ADDR_COUNTRY,WRETRM_ADDR_STATE,WRETRM_ADDR_STREET_LINE1,WRETRM_ADDR_STREET_LINE2,WRETRM_ADDR_STREET_LINE3,WRETRM_ADDR_ZIP_CODE,WRETRM_CODE,WRETRM_DESC,WRETRM_DICD_CODE,WRETRM_STATSCAN_CDE3,WRETRM_SYSTEM_REQ_IND,WRETRM_VR_MSG_NO ON AIKBMLS.WRETRM
REFERENCING NEW AS NEW OLD AS OLD
FOR EACH ROW
DECLARE
OP VARCHAR2(10);--Operation Type (update,delete)
CN NUMBER;--count
OW VARCHAR2(10) := 'AIKBMLS';--OWNER OF TABLE
TB VARCHAR2(10) := 'WRETRM';--TABLE NAME
V_VALUE VARCHAR2(400);
BEGIN
----------TO SET OP UPDATE OR DELETE-------
IF UPDATING THEN
OP := 'U';
ELSIF DELETING THEN
OP := 'D';
END IF;
--------COUNT NUMBER OF UPDATES OR DELETES ON TABLE------
SELECT COUNT(distinct A.GXRAUDT_TRNS_SEQ)+1
INTO CN
FROM GXRAUDT A
WHERE A.GXRAUDT_OWNER = 'AIKBMLS'
AND A.GXRAUDT_TABLE_NAME = 'WRETRM'
AND A.GXRAUDT_TRNS_TYPE= OP;
-------LOOP FOR COLUMNS ON AUDIT SETUP TABLE GXBAUDT-------
---CHECK VALUES IF UPDATED OR NOT DEPENDING ON CASES---
----1-IF NEW VALUE NULL AND OLD VALUE IS NOT NULL OR----
----2-IF NEW VALUE NOT NULL AND OLD VALUE NULL OR----
----3-IF NEW AND OLD VALUES ARE NOT NULL AND NOT EQUAL------
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ACTIVITY_DATE',CN,OP,:OLD.WRETRM_ACTIVITY_DATE);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ADDR_CITY',CN,OP,:OLD.WRETRM_ADDR_CITY);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ADDR_COUNTRY',CN,OP,:OLD.WRETRM_ADDR_COUNTRY);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ADDR_STATE',CN,OP,:OLD.WRETRM_ADDR_STATE);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ADDR_STREET_LINE1',CN,OP,:OLD.WRETRM_ADDR_STREET_LINE1);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ADDR_STREET_LINE2',CN,OP,:OLD.WRETRM_ADDR_STREET_LINE2);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ADDR_STREET_LINE3',CN,OP,:OLD.WRETRM_ADDR_STREET_LINE3);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_ADDR_ZIP_CODE',CN,OP,:OLD.WRETRM_ADDR_ZIP_CODE);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_CODE',CN,OP,:OLD.WRETRM_CODE);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_DESC',CN,OP,:OLD.WRETRM_DESC);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_DICD_CODE',CN,OP,:OLD.WRETRM_DICD_CODE);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_STATSCAN_CDE3',CN,OP,:OLD.WRETRM_STATSCAN_CDE3);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_SYSTEM_REQ_IND',CN,OP,:OLD.WRETRM_SYSTEM_REQ_IND);
P_INSERT_GXRAUDT(OW,TB,'WRETRM_VR_MSG_NO',CN,OP,:OLD.WRETRM_VR_MSG_NO);
END;
/
--------------------------------------------------------------------------
CREATE OR REPLACE PROCEDURE P_INSERT_GXRAUDT
(P_GXRAUDT_OWNER IN GXRAUDT.GXRAUDT_OWNER%TYPE,
P_GXRAUDT_TABLE_NAME IN GXRAUDT.GXRAUDT_TABLE_NAME%TYPE,
P_GXRAUDT_COL_NAME IN GXRAUDT.GXRAUDT_COL_NAME%TYPE,
P_GXRAUDT_TRNS_SEQ IN GXRAUDT.GXRAUDT_TRNS_SEQ%TYPE,
P_GXRAUDT_TRNS_TYPE IN GXRAUDT.GXRAUDT_TRNS_TYPE%TYPE,
P_GXRAUDT_OLD_VALUE IN GXRAUDT.GXRAUDT_OLD_VALUE%TYPE)
IS
BEGIN
INSERT INTO GXRAUDT (GXRAUDT_OWNER,
GXRAUDT_TABLE_NAME,
GXRAUDT_COL_NAME,
GXRAUDT_TRNS_SEQ,
GXRAUDT_TRNS_DATE,
GXRAUDT_TRNS_TYPE,
GXRAUDT_TRNS_USER,
GXRAUDT_OLD_VALUE)
VALUES (P_GXRAUDT_OWNER,
P_GXRAUDT_TABLE_NAME,
P_GXRAUDT_COL_NAME,
P_GXRAUDT_TRNS_SEQ,
SYSDATE,
P_GXRAUDT_TRNS_TYPE,
USER,
P_GXRAUDT_OLD_VALUE);
END P_INSERT_GXRAUDT;
/
I'm waiting for your response about my problem
Amh, February 09, 2009 - 6:48 am UTC
I'm waiting for your response about my problem
February 09, 2009 - 7:20 pm UTC
good gosh...
let's see, you posted on a friday night and early on monday morning you are impatiently waiting....
wow. ok. whatever.
Would you answer the original question the same way today?
David Weigel, February 24, 2009 - 1:44 pm UTC
The original question had to do with creating triggers on lots of tables to audit data changes. The question and answer were in 2000. Would you answer today with "Don't create triggers on tables; use DBMS_FGA"?
Real-world application: We have a web-based VB6 application that channels its data-changing SQL through a common routine. We resell this application. Some customers would like to be able to audit changes to particular columns. In a previous version of our software we had a screen where a suitably authorized user could pick tables and columns, and the code would create triggers much like your original answer. Today, we could have a screen like that which would (to steal from the docs) call DBMS_FGA.ADD_POLICY (object_schema => 'scott', object_name => 'emp', policy_name => 'mypolicy1', audit_column => 'comm, credit_card, expirn_date', enable => TRUE, statement_types => 'INSERT, UPDATE');
One problem with the FGA solution is that it's only included with Enterprise Edition and some of our customers bought Standard Edition. Maybe we just tell customers that the more they pay, the more they get. (I also haven't yet figured out what sort of security is necessary to read the audit tables, as it'd be nice to be able to dump the information with standard reports.)
What would you recommend these days?
February 24, 2009 - 5:18 pm UTC
dbms_fga wouldn't capture the values, it would just capture the DML.
In 11gR1 - I would say "use flashback data archive instead of doing it yourself"
But to capture values - you'd need triggers (or have the front end code do it). Auditing doesn't capture the changed data values.
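For reference, a minimal 11g flashback data archive sketch (audit_fla and the retention period are illustrative):
create flashback archive audit_fla
    tablespace users
    retention 5 year;
alter table emp flashback archive audit_fla;
-- later, history is just a flashback query away:
select * from emp as of timestamp systimestamp - interval '1' hour;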
I'm still waiting for you response about my problem
Amh, March 09, 2009 - 12:20 pm UTC
I'm still waiting for you response about my problem
March 09, 2009 - 1:11 pm UTC
Amh
use the page up feature of your browser.
I already posted commentary on your posting a while ago.
trigger
A reader, March 09, 2009 - 4:00 pm UTC
trigger
A reader, April 02, 2009 - 12:38 pm UTC
Tom:
I posted on another thread by mistake. This is the thread I wanted to post on. I will post the answer too. I hope it does not change.
Great info on this thread. I have a few small questions.
1. You have a generic package that can build an audit trail for any table and any transaction and be called via a trigger.
Why do that if you have this workspace manager that seems to give you a similar thing without writing any code?
Do they serve different purposes?
2. I have a requirement to audit table updates for 4 tables. One of the tables has about 50 columns.
Do you think the before-update trigger with those 50 procedure calls will have a great impact on performance?
The system does not have that many transactions.
3. Do you think your one-table design is much easier and more manageable than having 4 different audit tables (one for each production table, adding a NEW_COL_VAL for each OLD_COL_VAL to store the row's old and new values)?
A 50-column table would have a 100-column audit table.
4. Would you use DB standard auditing for this instead of a trigger? When do you usually use the DB AUDIT command?
Followup April 2, 2009 - 9am US/Eastern:
1) if you refer back to that, you'll find the requirements were different. They didn't want an entire row versioned. They wanted just the modified columns. Compare the questions, you'll find they are somewhat "not the same at all"
2) if it doesn't do much work, well, think about it.
And you should see this coming:
benchmark it. prototype it, simulate it, measure it, figure out what the penalty will be. Anything short of that is called "a guess"
3) depends - think about it again. If you were auditing all columns every time, a table per table would make sense (I call that a shadow table). If you are auditing JUST the columns that were modified and doing it in a fashion that has a row per modified column, then a single table makes sense.
4) you want to capture row changes - modified data - we do not audit that (not until 11g with the flashback data archive that is)
https://asktom.oracle.com/Misc/oramag/on-redefinition-nature-and-triggers.html
trigger
A reader, April 02, 2009 - 12:46 pm UTC
Tom:
1. You said if you need to audit the entire row, regardless of whether the columns changed or not, you would create a shadow table for each transaction table. Would your shadow table have double the columns (old_val, new_val) for each column? That would result in very many audit tables if you are auditing many production tables.
Have you discussed that kind of design somewhere else? I assume the concept of the trigger and package above would still apply to the shadow table design.
2. I would like to store the "org code", which is the primary key for one of the tables I am auditing, so I can query the audit table for all changes for that organization. I also want to track the IP address of the client machine (browser).
Would you just add these two columns and store these two values in the table above? The problem comes when another table has more than one column in its PK. Also, trying to capture the IP address in the DB from an environment variable gave me the IP of the application server (not the user's machine).
auditing
A reader, April 03, 2009 - 12:18 pm UTC
Tom:
Is this how you would implement auditing if the requirement is to log the whole row in the audit trail (instead of just the modified data), or would you have an old_col and new_col for each field?
http://www.developer.com/db/article.php/3326371
April 03, 2009 - 12:35 pm UTC
I already answered that for you SMK.
3) depends - think about it again. If you were auditing all columns every time, a table per table
would make sense (I call that a shadow table). If you are auditing JUST the columns that were
modified and doing it in a fashion that has a row per modified column, then a single table makes
sense.
if the requirement is to log the entire row, a row would seem to be the answer.
A reader, April 03, 2009 - 1:56 pm UTC
Tom:
I understood that part. But how would you do it? Meaning:
if you want to audit EMPLOYEE table
CREATE TABLE EMPLOYEE
(EMPNO CHAR(6) NOT NULL,
FIRSTNME VARCHAR(12) NOT NULL,
MIDINIT CHAR(1) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3) ,
PHONENO CHAR(4) ,
HIREDATE DATE ,
JOB CHAR(8) ,
EDLEVEL SMALLINT NOT NULL,
SEX CHAR(1) ,
BIRTHDATE DATE ,
SALARY DECIMAL(9,2) ,
BONUS DECIMAL(9,2) ,
COMM DECIMAL(9,2),
PRIMARY KEY (EMPNO))
Would you create AUDIT_EMPLOYEE with same number of columns and just adding 3 new columns
for OPERATION, WHO, and WHEN
or would you create an AUDIT_TABLE table like this:
CREATE TABLE AUDIT_EMPLOYEE
(OLD_EMPNO CHAR(6) NOT NULL,
OLD_FIRSTNME VARCHAR(12) NOT NULL,
OLD_MIDINIT CHAR(1) NOT NULL,
OLD_LASTNAME VARCHAR(15) NOT NULL,
OLD_WORKDEPT CHAR(3) ,
OLD_PHONENO CHAR(4) ,
OLD_HIREDATE DATE ,
OLD_JOB CHAR(8) ,
OLD_EDLEVEL SMALLINT NOT NULL,
OLD_SEX CHAR(1) ,
OLD_BIRTHDATE DATE ,
OLD_SALARY DECIMAL(9,2) ,
OLD_BONUS DECIMAL(9,2) ,
OLD_COMM DECIMAL(9,2) ,
NEW_EMPNO CHAR(6) NOT NULL,
NEW_FIRSTNME VARCHAR(12) NOT NULL,
NEW_MIDINIT CHAR(1) NOT NULL,
NEW_LASTNAME VARCHAR(15) NOT NULL,
NEW_WORKDEPT CHAR(3) ,
NEW_PHONENO CHAR(4) ,
NEW_HIREDATE DATE ,
NEW_JOB CHAR(8) ,
NEW_EDLEVEL SMALLINT NOT NULL,
NEW_SEX CHAR(1) ,
NEW_BIRTHDATE DATE ,
NEW_SALARY DECIMAL(9,2) ,
NEW_BONUS DECIMAL(9,2) ,
NEW_COMM DECIMAL(9,2))
April 03, 2009 - 3:16 pm UTC
think about this, think it through, think about what you would need, think about if it would be valuable or silly (in software produced this century) to keep both the old and new. SMK - you do this for me, you draw up the list of pro's and con's.
I'll give you a hint, if you have a "when" column and you have access to a database analytic function called lag() ...... end of hint.
would you need the "new" values - aren't the old values of one row someone else's new values?
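To make the hint concrete: against a shadow table that stores one row per change (audit_employee and when_ts are illustrative names), lag() recovers each row's "old" value from the prior version, so storing separate OLD_ columns buys you nothing:
select empno, when_ts, who,
       lag(salary) over (partition by empno order by when_ts) as old_salary,
       salary                                                  as new_salary
  from audit_employee
 order by empno, when_ts;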
trigger
A reader, April 03, 2009 - 4:11 pm UTC
Tom:
I think your answer is not to add NEW columns but to use the lag function instead. You are saying use #1 and then create a pivot view using lag(0) to get the old/new values.
The reason I am asking is that I have seen designs like this, with multiple columns (even in an Oracle book).
But for large tables this may get tedious.
In summary, what you are saying is:
1) if you want to audit modified data only, use one table like the above and track the table name and column name in a single row.
2) if you want to audit the whole row, then create a shadow table (replica) for each table you want to audit.
Correct?
April 03, 2009 - 4:19 pm UTC
well, technically, lag(0) would be the current row.
"even in oracle book" - anyone is allowed to write a book about anything.
I'm not saying what you say I'm saying..
What I always say is "based on the REQUIREMENTS, I will come up with an efficient implementation that satisfies them using my knowledge of how things work"
If you have a requirement to record the before and after image of a column - at the column level - for every change, then one of the answers I've given on this web site applies.
If you have a requirement to record the changes to entire rows (record a row when it is modified), then I have another answer.
If you modify the requirements, I'll modify the answer accordingly.
trigger
A reader, April 03, 2009 - 4:36 pm UTC
<<If you have a requirement to record the changes to entire rows (record a row when it is modified), then I have another answer. >>
Yes, this is the answer i want to know.
If i have 20 tables and i have a requirement to record changes to the entire row, is not your answer to create 20 AUD_XXXX tables, 20 trigges (after insert or update or delete) and then sticking the audit data into the AUD_XXXX table.
April 04, 2009 - 2:33 am UTC
ugh
My answer as to HOW to do this would be threefold
a) 11gr1 - flashback data archive
b) 10gr2 and before - evaluate workspace manager, see if it applies and if the overhead is acceptable. ease of implementation - high, no code to write.
c) 10gr2 and before - evaluate coding 20 triggers on 20 tables to record the audit trail into 20 shadow tables.
trigger
A reader, April 04, 2009 - 10:03 am UTC
Tom:
There is not much code to write: one small trigger with a different INSERT statement for each audit table.
Do you usually keep the audit data in a separate SCHEMA or DATABASE, away from the production tables?
CREATE OR REPLACE TRIGGER AUD_DEPT
AFTER INSERT OR DELETE OR UPDATE ON DEPT FOR EACH ROW
DECLARE
v_operation VARCHAR2(10) := NULL;
BEGIN
IF INSERTING THEN
v_operation := 'INS';
ELSIF UPDATING THEN
v_operation := 'UPD';
ELSE
v_operation := 'DEL';
END IF;
IF INSERTING OR UPDATING THEN
INSERT INTO AUDIT_DEPT (col1,col2,col3,col4,col5,aud_Action,aud_timestamp,aud_user)
VALUES (:new.col1,:new.col2,:new.col3,:new.col4,:new.col5, v_operation,SYSDATE,USER);
ELSE
INSERT INTO AUDIT_DEPT (col1,col2,col3,col4,col5,aud_Action,aud_timestamp,aud_user)
VALUES (:old.col1,:old.col2,:old.col3,:old.col4,:old.col5, v_operation,SYSDATE,USER);
END IF;
END;
/
April 07, 2009 - 5:27 am UTC
depends (you should be used to that answer by now)
Do you have a security need to keep it separate? Are you afraid that the schema owner of the base table would try to modify the audit trail?
why bother auditing an insert? You have the inserted data already.
Seems you only need to do updates and deletes...
Why bother auditing an insert?
Mike, April 08, 2009 - 8:06 am UTC
Because, after the next update, you will no longer have the data that was inserted...
For your audit trail, consider recording both old and new on updates. It may sound silly, but if someone updates the primary key and you have saved only the :new, you will not really be able to fathom where the old one went.
I am looking forward to 11g and flashback data archive - I am getting tired of these hand-crafted audit trails.
April 13, 2009 - 1:32 pm UTC
after the next update - you WILL have the data that was inserted - it will be the :old record.
good point on the primary key - if you permit the modification of it, that would be problematic - for many many reasons. This would be reason #5213523 why you do not update primary keys.
audit
A reader, April 08, 2009 - 9:46 pm UTC
Tom:
Mike made an interesting point about updating the primary key.
If I save the whole updated ROW and use LAG() to get the OLD and NEW data, instead of creating NEW and OLD columns,
and the PK gets updated, then I won't be able to group the changes per EMP to see the history of changes.
April 13, 2009 - 3:34 pm UTC
so, don't change the primary key - or if you do - sit down and *design* your system to *your* needs.
SMK - you are a developer, you know your own unique needs and requirements, please, ask yourself "what do I need to support", and design from there.
IF the PK gets updated ...
Sokrates, April 12, 2009 - 3:16 pm UTC
... you have a serious issue with your data model
a PK which is subject to updates isn't a PK at all
Update of primary key
Mike, April 13, 2009 - 3:06 pm UTC
Whether your design allows/encourages updates of the primary key or not - it is often physically possible.
One reason people want to use the trigger approach for logging these updates is to be sure it is always logged, even if the update does not come from the 'application' itself. For example, an update may come from a SQL script that is run during an application migration or conversion.
If you say 'no need to consider that possibility when designing this logging function, because we don't update primary keys'... it takes actual effort to make that a physical impossibility. If you have not made it impossible, it might happen, so it's worth thinking about.
April 13, 2009 - 5:44 pm UTC
well, this trigger approach also suffers from the fact that triggers are easily disabled and even bypassed (direct path loads = no triggers for you)
trigger
A reader, April 13, 2009 - 6:32 pm UTC
Tom:
You seem to favor a transactional API for logging audit data because a trigger can be easily disabled?
In one of your friends' books, David Knox's "Effective Oracle Database 10g Security by Design", they seem to favor triggers, since application coding applies only to one application while triggers centralize the auditing - any application will be audited.
I thought you like to do stuff at the server level instead of the client level - close to the data.
It is interesting to know that direct path loads will not fire triggers; the book does not mention that.
April 14, 2009 - 9:58 am UTC
I hate triggers, that is all. They are evil.
https://www.oracle.com/technetwork/issue-archive/2008/08-sep/o58asktom-101055.html they are magic, people abuse them, people misuse them
Am I aware that when used properly they can do good? Yes
Am I aware that some people know how to use them properly? Yes
However the fact remains that most implementations of triggers are *bad*, they are used when they shouldn't be, they are coded improperly, they do things as a side effect and mess up tons of other stuff, they get skipped during certain operations (and people don't know that), they can easily be fired more than once (and people do non-transactional things in them, leading to lost integrity of data) in a single statement (if you have a one row table, your before update for each row trigger can fire twice - even if you just update a single row)
I do stuff at the server level, where do you think this transactional API would live?
As for the direct path stuff, let me expand on that. You can do direct path loads via sqlldr:
ops$tkyte%ORA11GR1> create table emp as select * from scott.emp where rownum <= 5;
Table created.
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> !sqlldr_exp / emp > emp.ctl
ops$tkyte%ORA11GR1> !cat emp.ctl
LOAD DATA
INFILE *
INTO TABLE emp
REPLACE
FIELDS TERMINATED BY '|'
(
empno
,ename
,job
,mgr
,hiredate
,sal
,comm
,deptno
)
BEGINDATA
7369|SMITH|CLERK|7902|17-DEC-80|800||20
7499|ALLEN|SALESMAN|7698|20-FEB-81|1600|300|30
7521|WARD|SALESMAN|7698|22-FEB-81|1250|500|30
7566|JONES|MANAGER|7839|02-APR-81|2975||20
7654|MARTIN|SALESMAN|7698|28-SEP-81|1250|1400|30
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1>
ops$tkyte%ORA11GR1> create or replace trigger emp_trigger
2 before insert on emp for each row
3 begin
4 :new.ename := lower(:new.ename);
5 end;
6 /
Trigger created.
ops$tkyte%ORA11GR1> !sqlldr / emp
SQL*Loader: Release 11.1.0.7.0 - Production on Tue Apr 14 09:38:54 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Commit point reached - logical record count 5
ops$tkyte%ORA11GR1> select ename from emp;
ENAME
----------
smith
allen
ward
jones
martin
ops$tkyte%ORA11GR1> !sqlldr / emp direct=y
SQL*Loader: Release 11.1.0.7.0 - Production on Tue Apr 14 09:38:54 2009
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Load completed - logical record count 5.
ops$tkyte%ORA11GR1> select ename from emp;
ENAME
----------
SMITH
ALLEN
WARD
JONES
MARTIN
sqlldr - with direct=y - will not fire triggers, we can observe that - the ename is not in lower case afterwards. Trigger did not happen.
http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/ldr_modes.htm#sthref1587 documentation states that.
The other direct path load is insert /*+ APPEND */ - add a trigger and you lose the ability to direct path load using SQL. consider (same example):
ops$tkyte%ORA11GR1> truncate table emp;
Table truncated.
ops$tkyte%ORA11GR1> insert /*+ APPEND */ into emp select * from scott.emp where rownum <= 5;
5 rows created.
ops$tkyte%ORA11GR1> select ename from emp;
ENAME
----------
smith
allen
ward
jones
martin
<b>that we could read emp without an error shows that the direct path insert WAS NOT done, this was a conventional path insert - see below for the error you would expect if you try to read a table after direct pathing it without committing...</b>
ops$tkyte%ORA11GR1> alter trigger emp_trigger disable;
Trigger altered.
ops$tkyte%ORA11GR1> insert /*+ APPEND */ into emp select * from scott.emp where rownum <= 5;
5 rows created.
ops$tkyte%ORA11GR1> select ename from emp;
select ename from emp
*
ERROR at line 1:
ORA-12838: cannot read/modify an object after modifying it in parallel
<b>when we disable trigger, we can direct path it...</b>
auditing
sam, April 15, 2009 - 10:08 pm UTC
Tom:
One nice thing about triggers (even though you hate them) is that if a customer decides to disable auditing, you can do it in a second. If the auditing code is inside the application, it will take much longer.
For auditing purposes they seem to be OK if you use them properly.
Your example above is all based on that. I bet many people use that method too.
April 16, 2009 - 9:25 am UTC
unless you are a smart programmer. Think about this please SMK - if you want your application to have the ability to disable auditing, might you - just MIGHT YOU - make your application configurable that way?
auditing
A reader, April 16, 2009 - 10:37 am UTC
Tom:
It has nothing to do with being SMART. You simply have customers who do not know what they need. One month it is "TURN ON AUDITING", the next month "TURN IT OFF", etc.
Bottom line: the requirements are continuously changing. You can only code an application to the requirements known now.
April 16, 2009 - 11:06 am UTC
You have completely missed the point.
It has everything to do with smart. You came up with the idea that "auditing might be something to turn on and off", you already know you need this.
It is like archiving. You already know that for an audit trail, you need archiving - probably partitioning. So design it in.
Disabling triggers - hugely bad idea, that is NOT configuring an application, that is hacking it. You upgrade your application and bamm - more triggers, new triggers, maybe someone added something to the audit trigger that isn't just auditing.
Disabling triggers - that would be a huge mistake. Potentially a disaster if you think about it.
Making auditing configurable - smart.
Disabling triggers - not smart, really a bad idea in fact.
Using your line of thinking - that you don't know about this sort of requirement - how is it you were smart enough to ONLY PUT audit code in the trigger?
trigger
A reader, April 16, 2009 - 5:46 pm UTC
Tom:
Triggers do make sense sometimes.
If the application has so many places from which it INSERTs and UPDATEs a table, why do I need to go and update a dozen procedures when I can simply go to one place and have one piece of code to maintain? I do not want to mess with old spaghetti code that someone wrote, figure out where the audit code should go, and retest that module.
Try selling the Oracle database without triggers and see if customers will like the idea.
April 17, 2009 - 9:29 am UTC
you are calling modular, linear, easy to read code spaghetti code and triggers "good"
wow, flabbergasted.
http://en.wikipedia.org/wiki/Spaghetti_code ... Spaghetti code is a pejorative term for source code which has a complex and tangled control structure, especially one using many GOTOs, exceptions, threads, or other "unstructured" branching constructs. ...
Triggers are the very DEFINITION of spaghetti code.
Do I know that triggers can be useful? Yes
Do I know that triggers can be safely implemented? Maybe
Do I know how to use triggers safely myself? Yes
Do most - or even MANY - developers know how to use triggers safely? no, most certainly not.
We used to sell the database without triggers, did so for as long as we've been selling it with triggers in fact. It was better back then.
Mubeen, April 22, 2009 - 10:19 am UTC
Your answers are excellent; your website has become a central point for real-time solutions/tips.
I have a related query. Let me refer to your structure used to record changes:
create table audit_tbl
( timestamp date,
who varchar2(30),
tname varchar2(30),
cname varchar2(30),
old varchar2(2000),
new varchar2(2000)
)
I would also like to record the value of the primary key. For example, my table MILT_PERSONS:
Milt_id Join_date Status Salary
123 12-jan-06 0 8000
321 12-jan-04 1 6000
230 12-jan-01 99 4000
If I change the STATUS of Milt_id=123, then it is going to record
TIMESTAMP, WHO, TNAME, CNAME, OLD, NEW
---------- --- ------ ---- --- ---
10-10-08 MOD, MILT_PERSONS, status, 0, 99
This record tells me that the status changed, but it doesn't say which person's status changed. It would be more helpful if it could also store the primary key value of the record, in this case Milt_id.
Please help, thanks
April 24, 2009 - 3:35 pm UTC
sure, you can do that, just read user constraints to get the primary key column names and modify the code.
I'd suggest writing a package, that generates the code - and then execute immediate the code.
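The dictionary part of that is straightforward - the generating package could loop over something like:
select cc.column_name
  from user_constraints c, user_cons_columns cc
 where c.table_name       = 'MILT_PERSONS'
   and c.constraint_type  = 'P'
   and cc.constraint_name = c.constraint_name
 order by cc.position;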
trigger
sam, April 26, 2009 - 10:39 pm UTC
Tom:
1. If I want to implement the trigger for auditing 10 tables, and I want to capture the primary key for the record: if some tables have 2- or 3-column natural keys, how do you change the table structure and code to store the PK?
Would you just add three columns and store the values of the primary key there?
2. If I want to track "DELETE" too, even though it is seldom used, would this mean one row per column for the table audited?
3. Is this trigger solution suitable or not suitable for auditing, since you say you know when to use triggers effectively?
auditing
sam, April 28, 2009 - 8:19 pm UTC
Tom:
The above table tells us who, what and when something changed in a table. But it does not tell you the ROW that changed, which is very important in auditing.
Many people followed up asking how to track the primary keys. One solution was to create
an audit table specifically for that table, and another solution was to use the same table and a generic package.
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
as stated, you'll have to write "fancier code"
something that
a) reads the data dictionary to determine what the PK of the table is so
b) it can create an audit table specifically for that table so
c) it can create a package for that table that
d) is called by the triggers on that table
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
sure, you can do that, just read user constraints to get the primary key column names and modify the code.
I'd suggest writing a package, that generates the code - and then execute immediate the code.
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
It seems they are two different solutions though?
How would this extra package fit into the solution above?
Can't you just add the value of the PK like this (assuming DEPTNO is the PK)?
create or replace trigger aud#dept
after update on dept
for each row
begin
audit_pkg.check_val( 'dept', 'DEPTNO', :old.DEPTNO, :new.DEPTNO, :old.DEPTNO);
audit_pkg.check_val( 'dept', 'DNAME', :old.DEPTNO, :new.DNAME, :old.DNAME);
audit_pkg.check_val( 'dept', 'LOC', :old.DEPTNO, :new.LOC, :old.LOC);
end;
/
2. I was trying to see if workspace manager would track all the data changes and I get this -
any ideas?
sql>execute dbms_wm.enableversioning('emp','view_wo_overwrite');
BEGIN dbms_wm.enableversioning('emp','view_wo_overwrite'); END;
*
ERROR at line 1:
ORA-20080: invalid 'hist' parameter for EnableVersioning
ORA-06512: at "SYS.LTDDL", line 441
ORA-06512: at "SYS.LTDDL", line 1177
ORA-06512: at "SYS.LTDDL", line 1163
ORA-06512: at "SYS.LT", line 8033
ORA-06512: at line 1
April 29, 2009 - 8:44 am UTC
... But i
does not tell you the ROW that changed which is very important in auditing.
...
to you maybe, not to someone else. SMK - everyone has *their* requirements, I demonstrate *concepts* and expect people will take the *concept* and using that *concept* apply it to their requirements *making changes* (since we are all developers and DBA's, we do this for a living).
... It seems they are two different solutions though? ...
how so?????
one says - read dictionary, generate code.
the other says - read dictionary, generate code.
hmmm, they look - I don't know - the same??
I suggested writing a package that generates code - period.
2) I can only imagine that your cut and paste is not a cut and paste, that you did something entirely different in real life
ops$tkyte%ORA10GR2> alter table emp add constraint emp_pk primary key(empno);
Table altered.
ops$tkyte%ORA10GR2> execute dbms_wm.enableversioning('emp','view_wo_overwrite');
PL/SQL procedure successfully completed.
audit
sam, May 02, 2009 - 11:43 pm UTC
Tom:
It seems workspace manager can audit every row if you enable it on a table. My questions:
1. Why would I need a trigger and code to do auditing if this is enabled via a simple command?
Would this not give you the shadow table solution (an audit table per table)? Are there any implementation constraints in 9i, other than that you can't do it if the table has a unique index? Would it audit all columns?
2. Where is the data actually stored for versioning - in the same production table? Would it impact performance significantly for a low-volume OLTP system?
3. Would I be able to capture the application userid and other environment variables for each row (i.e. IP address, host name, etc.)?
4. Do you think it is easier for the end user to analyze audit data based on the entire row or just the modified columns?
May 04, 2009 - 10:39 am UTC
1) obviously you wouldn't SMK
read the documentation, all restrictions are there. Unique keys are permitted (since all tables have a unique key - it would be useless if they were not)
2) in the table. in the same table.
3) check out the documentation - it shows you everything.
4) it depends, what do you need to ascertain from your auditing. Think about it - if there were one answer and it was always the best way - the other ways would not exist.
auditing
sam, May 05, 2009 - 11:06 pm UTC
Tom:
I see now why you refer everyone to the documentation.
I will take the trigger solution over workspace manager in 9i for the following reasons:
1. I have 200,000 records in one master table I want to audit. If I turn on versioning I get an additional row per update and an additional row per insert. Based on that, I may end up with 1 million records of versioned data very soon.
Basically I would be turning my production table into an audit table, and I do not know what happens to performance.
In addition, I do not like mixing up current data with audit data. I do not think they belong in the same table.
2. Not only that, I would have to turn ON versioning for every child table that has a foreign key to my master table.
3. You cannot perform DDL on version-enabled tables.
4. You cannot add referential integrity constraints to a versioned table.
5. You cannot version a table with a LONG column.
6. Workspace Manager was not really created for auditing. The intention was to have a virtual workspace where you can get a copy of the data for long transactions while other users are updating production data. Each workspace or user can work on his own version of the data, so I can have one long transaction running on Thursday's version while another runs on Wednesday's version, etc.
Do you agree with these points?
I do not know why you do not mention DBMS_FGA for fine-grained auditing.
aduit
sam, May 11, 2009 - 4:59 pm UTC
Tom:
A quick question on your above solution:
If you are tracking updates/deletes on 4 tables: TABLE1, TABLE2, TABLE3, TABLE4
Table 1 has empid as its primary key
Table 2 has seq_no as its primary key
Table 3 has deptcode,seq as its primary key
Table 4 has budgetid as its primary key
Would you add generic columns "pkey1", "pkey2", "pkey3" to the table and store the values only (not the column names),
or would you add "empid", "seq_no", "deptcode", "seq", "budgetid" columns and fill them based on the TNAME value?
Please advise.
Versioning
A reader, June 26, 2009 - 4:44 pm UTC
I am not sure what the latest Oracle technologies are to support this sort of thing; we are still on version 9.2.0.8.
I have a set of tables which need to be highly audited i.e. anytime there is a change to any of the columns, create a new "version" of the record and store it in a "history" table by appending as_of_dt to the primary key of the record.
Given a set of such tables and a date parameter, how would one go about writing a query joining the tables using the latest "effective" version of each record involved in the query as of that date?
June 26, 2009 - 5:58 pm UTC
sort of automatic way: search for dbms_wm on this site and read the documentation on the workspace manager, it can version in a single table (has certain performance considerations of course)
to do this manually to say the EMP table you would modify the emp table:
create table emp
( empno number(10,0) not null enable,
ename varchar2(10),
job varchar2(9),
mgr number(10,0),
hiredate date,
sal number(7,2),
comm number(7,2),
deptno number(10,0),<b>
asof date,
modby varchar2(30)</b>
)
/
and then create a shadow table:
create table emp_hist
( empno number(10,0) not null enable,
ename varchar2(10),
job varchar2(9),
mgr number(10,0),
hiredate date,
sal number(7,2),
comm number(7,2),
deptno number(10,0),
asof date,
modby varchar2(30),
enddate date,
dmltype varchar2(1)
)
/
alter table emp_hist add constraint emp_hist_pk primary key(empno,asof);
of course, you likely would rename the table to EMP_TABLE and add a view EMP that selected just the original columns
then you would need a package:
create or replace package emp_hist_pkg
as
procedure set_fields( p_asof OUT date, p_modby OUT varchar2 );
procedure save_old( p_empno in emp_hist.empno%type,
p_ename in emp_hist.ename%type,
p_job in emp_hist.job%type,
p_mgr in emp_hist.mgr%type,
p_hiredate in emp_hist.hiredate%type,
p_sal in emp_hist.sal%type,
p_comm in emp_hist.comm%type,
p_deptno in emp_hist.deptno%type,
p_asof in emp_hist.asof%type,
p_modby in emp_hist.modby%type,
p_dmltype in emp_hist.dmltype%type );
end;
/
create or replace package body emp_hist_pkg
as
g_user varchar2(30) := USER;
procedure set_fields( p_asof OUT date, p_modby OUT varchar2 )
is
begin
p_asof := sysdate;
p_modby := g_user;
end;
procedure save_old( p_empno in emp_hist.empno%type,
p_ename in emp_hist.ename%type,
p_job in emp_hist.job%type,
p_mgr in emp_hist.mgr%type,
p_hiredate in emp_hist.hiredate%type,
p_sal in emp_hist.sal%type,
p_comm in emp_hist.comm%type,
p_deptno in emp_hist.deptno%type,
p_asof in emp_hist.asof%type,
p_modby in emp_hist.modby%type,
p_dmltype in emp_hist.dmltype%type )
is
begin
insert into emp_hist
( empno, ename, job, mgr, hiredate, sal, comm,
deptno, asof, modby, enddate, dmltype )
values
( p_empno, p_ename, p_job, p_mgr, p_hiredate, p_sal, p_comm,
p_deptno, p_asof, p_modby, SYSdate, p_dmltype );
end;
end;
/
and then three triggers:
create or replace trigger emp_hist_trigger_AIFER
before insert on emp for each row
begin
emp_hist_pkg.set_fields( :new.asof, :new.modby );
end;
/
create or replace trigger emp_hist_trigger_BUFER
before update on emp for each row
begin
emp_hist_pkg.set_fields( :new.asof, :new.modby );
emp_hist_pkg.save_old
( :old.empno, :old.ename, :old.job, :old.mgr,
:old.hiredate, :old.sal, :old.comm,
:old.deptno, :old.asof, :old.modby, 'U' );
end;
/
create or replace trigger emp_hist_trigger_BDFER
after delete on emp for each row
begin
emp_hist_pkg.save_old
( :old.empno, :old.ename, :old.job, :old.mgr,
:old.hiredate, :old.sal, :old.comm,
:old.deptno, :old.asof, :old.modby, 'D' );
end;
/
and then to query the history, you would:
create or replace view emp_view
as
select empno, ename, job, mgr, hiredate, sal,
comm, deptno, asof, modby
from emp
where asof <= to_date( userenv('client_info'), 'yyyymmddhh24miss')
UNION ALL
select empno, ename, job, mgr, hiredate, sal,
comm, deptno, asof, modby
from emp_hist
where asof <= to_date( userenv('client_info'), 'yyyymmddhh24miss')
and enddate >= to_date( userenv('client_info'), 'yyyymmddhh24miss')
and dmltype <> 'D'
/
and to query "as of" a date you would:
dbms_application_info.set_client_info( yyyymmddhh24miss )
where yyyy.... is of course, just a date in a string. Then select * from emp_view and it'll show you the table "as of that time"
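For example (the timestamp is illustrative):
exec dbms_application_info.set_client_info( '20090601120000' )
select * from emp_view;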
auditing
A reader, June 28, 2009 - 12:29 pm UTC
Tom:
great example.
The problem with workspace manager is that it stores every audit row in the same production table (EMP) instead of a separate AUDIT_EMP like your manual method. There are also other limitations, like you can't change the table structure, etc., after you turn it ON.
But, a small question on your example.
Why do you need to join EMP with EMP_HIST?
You have 3 triggers: before insert, before update, before delete.
Your EMP_HIST should contain the original record + all audits, correct?
Is it not easier to just store everything in the AUDIT table - the original record plus the audited records - instead of having the current record in EMP and the audits in EMP_HIST?
July 06, 2009 - 5:46 pm UTC
Sam/SMK
please stop making things up, here are two things right off you "made up"
a) where do you see a join?
b)
... There is also other limitations like you cant change the table structure, etc. after you turn it ON...
that is not true, have you read the workspace manager documentation? there are limitations due to the implementation but what you say is patently FALSE.
http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14253/long_intro.htm#sthref197 read the bit about "The following DDL operations related to version-enabled tables are supported:"
Also, please read the example and follow the logic, if you did, you would see that the insert trigger does not populate the history table, it is not history yet - it is the current record. current record is in the EMP table, only the update/delete put data into HIST.
Since in most/many use cases - a record is inserted and not updated - no, it would not make sense in general to put it into both places. I did not do that.
A reader, June 28, 2009 - 1:06 pm UTC
Great example. Questions, in addition to the ones asked by the last reviewer.
1. What is the BEFORE INSERT trigger doing exactly?
2. To see the latest/current version of the table, would querying EMP work or would I need to query EMP_VIEW for sysdate?
On a related note, I have a set of tables related to an insurance policy: policy, claims, coverage, limits, etc. - a complex normalized data model. When a claim is logged, all policy information should be as of the effective date of the claim. In this case, a "workspace"-like technique like the one you demonstrated would be appropriate, i.e. jump into the time machine and flash back to query all tables as of any point in time.
But another typical audit requirement is "show me all the changes that happened to any aspect of the policy between dates D1 and D2". To answer such a question, it might be better to create a table like (table_name,pk,column_name,old_value,new_value,date,user) and have row-level triggers on tables save data to it.
Given these 2 types of audit requirements, would you suggest both implementations at the same time or is one of them redundant? i.e. can one implementation serve both requirements with some SQL voodoo? Your thoughts?
July 06, 2009 - 5:52 pm UTC
1) setting the fields in the EMP table so we know who created the record and when.
2) the current version is only in the EMP table, if you don't need to "flashback" you would never use the view. The union view is for querying in the past
I disagree with your "it might be better to create a table like ...." - to see all of the changes in a period of time (between d1 and d2), you only need to see the rows that were valid during that period of time, the schema I've provided gives that EASILY.
You can easily query up all versions of a record from this schema between points A and B in time
and if you needed to, you could EASILY use the lag() function to marry up the current record in that set with the prior record in that set and use decode() to only print out values that changed if you needed:
instead of just selecting column, you would:
decode( column, lag(column) over (order by date_column), null, column )
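Put together against an unfiltered union-all of EMP and EMP_HIST (emp_all_versions is a made-up name for such a view), that might look like:
select empno, asof,
       -- print a column only when it differs from the prior version of the row
       decode( sal, lag(sal) over (partition by empno order by asof), null, sal ) sal,
       decode( job, lag(job) over (partition by empno order by asof), null, job ) job
  from emp_all_versions
 order by empno, asof;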
Reference data
A reader, June 29, 2009 - 7:42 am UTC
Also, using either implementation, how would changes to referential data be captured? i.e. emp.deptno is a FK to dept.deptno. If dept.dname changes from Sales to Sales/Marketing, the "audit report" of all affected employees should show that but the EMP table has not changed! This came up earlier in this thread but there was no resolution.
July 06, 2009 - 6:24 pm UTC
tell us what problem you foresee using the most recent approach mentioned (as there are more than TWO on this page)
how would the EMP/EMP_HIST table affect this at all?????
what "audit report" are you talking about - the one you refer to so empathetically with "but the EMP table has not changed!"
The emp data did not change, the dept data did. There was no change to the employee record at all.
One set of changes per day
A reader, June 29, 2009 - 4:30 pm UTC
I have front-end apps that "touch" a record without actually changing any data. I guess I can add a row to history only if :old/:new is different (for columns of interest)
Also, I would like to save only one version of the record per date i.e. if deptno changes from 10 to 20 to 30 on a day, I want to save only the 30 to the history table. What is the best way to do this? Modify your save_old procedure to update a row in emp_hist if one already exists for emp_no,trunc(sysdate)? If it doesn't, add it with asof=trunc(sysdate)?
July 06, 2009 - 6:33 pm UTC
for your first issue - yes, you would have to compare the :new to :old column by column.
for your last issue, you would have to modify the code to MERGE into the _hist table using the primary key of the current table PLUS the sysdate-populated column (after changing the set_fields routine to set the date column to trunc(sysdate), not sysdate).
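A minimal sketch of that reworked save_old, cut down to EMPNO and SAL (a real version would carry every audited column plus whatever audit columns you keep):
-- keep at most one history row per employee per day
create or replace procedure save_old_daily(
    p_empno in emp_hist.empno%type,
    p_sal   in emp_hist.sal%type )
as
begin
    merge into emp_hist h
    using ( select p_empno empno, trunc(sysdate) asof from dual ) d
       on ( h.empno = d.empno and h.asof = d.asof )
    when matched then
        update set h.sal = p_sal
    when not matched then
        insert ( empno, sal, asof )
        values ( d.empno, p_sal, d.asof );
end;
/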
Reference data
A reader, July 07, 2009 - 9:12 pm UTC
> The emp data did not change, the dept data did. There was no change to the employee record at all.
Right, but from an end-user perspective, John's department is now Sales/Marketing instead of Sales and this is a reportable change.
To report this on a employee change/history report we would need to join EMP_VIEW and DEPT_VIEW with all those decode/lag expressions? Is there any other way?
July 08, 2009 - 3:04 pm UTC
then you would just need to write a query that took that into consideration?? It is just SQL.
In a typical situation, the EMP table would have many relations out to many "dimensions" - and some of them could be 1:1, some 1:M - in general, a single query isn't going to "cut it"
And from my perspective (as an end user) John's department *DID NOT CHANGE*. If your rules say "John's dept changed because the name of it was updated", then I'd say you modelled your data wrong if you actually update the name. You would create a new department and reassign people to it - otherwise, if you ask me, the department DID NOT change. An attribute of the department changed - but nothing at all about John changed.
What if table column changes
Farhan Ali, August 20, 2009 - 3:12 am UTC
Dear Tom,
Your answers and followups are of great use, to me at least. What I want to ask is: what if we create new column(s) on a table? We will be required to execute the trigger script once again. Isn't there any way to generate a trigger once on a table that selects the table columns available, compares their new and old values, and inserts rows into the audit table accordingly?
Thanks,
Farhan
August 24, 2009 - 5:09 pm UTC
no, all references to the columns are static, by name, if you added a column and the trigger needs to access that column the trigger must change.
Possible solution
JerrQ, November 07, 2009 - 5:50 pm UTC
We have a similar problem - i.e. it is possible that new columns or tables get added and DBAs forget to (re)create audit triggers.
We haven't implemented this yet, but you should be able to use a DDL trigger to ensure the audit triggers are correct.
CREATE TRIGGER audit_db_object AFTER CREATE
ON SCHEMA
begin
  -- identify new table
  -- execute create audit trigger procedure
  null;
end;
November 11, 2009 - 1:51 pm UTC
... We have a similar problem - i.e. it is possible that new columns or tables get added and DBAs forget to (re)create audit triggers. ...
the DBAs didn't forget a thing. You - the developer - gave them a buggy upgrade script. It is not the DBA's responsibility to do that. It is yours - or it is a serious breakdown in your release-to-production procedures.
and no, you cannot do what you describe as the create audit trigger is ddl itself and you cannot do ddl in a trigger - not without using an autonomous transaction - but not only would that not work, it would be a hugely bad thing if it did work.
but, in short, you either have bad "here is your upgrade script" developers OR you have a serious breakdown in your release to production techniques. I fail to see how something like this could be "forgotten", it is part of your UPGRADE SCRIPT.
But I am the DBA...
JerrQ, March 12, 2010 - 12:04 pm UTC
Just saw this while trawling for ways to improve our auditing with as little impact on performance as possible. Regarding the reply above: we have quite a few DBAs worldwide supporting the system - sometimes scripts get past one of us.
You could use a DDL trigger to log the change / send an email / etc so at least someone will know that a change to the auditing may be required.
We run a number of jobs at night that do fairly basic checks - triggers and constraints enabled, audit triggers for all production tables, possible data discrepancies, backups successfully run, etc. - and email a report to us.
We're on 10.2, but thinking of moving to 11g. The Flashback Archive looks pretty compelling.
March 12, 2010 - 4:02 pm UTC
as I'm saying - you the DBA haven't forgotten a thing, the developers did. Do they not have any QA? No testing? No set of rules (you know, based on mistakes made in the past) applied to their code?
... sometimes scripts get past one of us.
...
it seems you need a "tightly run ship" - you certainly have the need for one anyway. I don't see how something could "get past one of us" - wouldn't a script run by a DBA to change production have gone through multiple layers already??
Yes, you could send someone an email (but who? and wouldn't that person get to hate email really fast). You could log a change - and that might not be a bad idea, for auditing in general, audit the changes - but if YOU know that a change must have some other changes - wouldn't the people making the change?
PLS-00307: too many declarations of Check_Val matches this call error
Lam Nguyen, March 12, 2010 - 12:53 pm UTC
Hi Tom,
I got this error when I do unit testing on the Check_Val procedure with both the old and new values set to NULL.
This may not affect the audit logic, but I would like to know if there is a way to avoid it other than using the NVL function on these values.
Thanks a lot
March 12, 2010 - 4:09 pm UTC
give example. We look at the base type being passed in, in real life you would never pass in NULL, NULL, you would pass in two variables that have a distinct type known at compile time that have the value of null in them.
Big difference. If you are unit testing with literals as parameters, you are doing it wrong in this case.
ops$tkyte%ORA11GR2> begin
2 audit_pkg.check_val( 'x', 'y', null, null );
3 end;
4 /
audit_pkg.check_val( 'x', 'y', null, null );
*
ERROR at line 2:
ORA-06550: line 2, column 2:
PLS-00307: too many declarations of 'CHECK_VAL' match this call
ORA-06550: line 2, column 2:
PL/SQL: Statement ignored
ops$tkyte%ORA11GR2> declare
2 l_vc1 varchar2(20) := null;
3 l_vc2 varchar2(20) := null;
4 begin
5 audit_pkg.check_val( 'x', 'y', l_vc1, l_vc2 );
6 end;
7 /
PL/SQL procedure successfully completed.
when testing overloaded functions/procedures, you best use variables for that. Else we cannot always tell the datatype, when you use a variable - we can - we use the datatype of the variable itself.
auditing
sam, December 13, 2010 - 11:47 am UTC
Tom:
What is the best way to audit the "UPDATE" and "DELETE" statements for a 9i database, or for specific tables?
Can you refer me to a link if you have already provided a solution.
December 14, 2010 - 3:37 am UTC
Sam also known as SMK -
you should know by now that when you ask "what is the best way", there is exactly one answer you will get:
it depends
Define what YOU mean by audit. Do you just want to audit the fact that Joe updated the table? the AUDIT command does that.
Do you need to audit the STATEMENT that Joe used to update the table? Fine grained auditing does that in 10g and up (nothing for 9i)
Do you need to audit the VALUES that Joe updated? workspace manager might do that for you in 9i/10g and Flashback data archive would do it in 11g.
auditing
A reader, December 16, 2010 - 8:41 am UTC
Tom:
I need to know the SQL statement that John executed from backend (not application) and when he ran it. like
On Oct 10, 2010 10:40 AM, john ran this.
Update table
set sal = 100
where userid='john';
Currently users use one db account too. John is an application user account. Since they log in from the backend I would need to capture the machine name or Windows O/S user.
I have 9i for now. My preference would be to do it via PL/SQL as I do not have access to AUDIT commands and it takes time to get those enabled.
December 16, 2010 - 9:14 am UTC
Sam also known as SMK,
I answered already above, just re-read it. If it existed for 9i (since you already said 9i) I would have mentioned it. I was not ambiguous:
Do you need to audit the STATEMENT that Joe used to update the table? Fine grained auditing does that in 10g and up (nothing for 9i)
and this by the way, is something I cannot even parse:
John executed from backend (not
application)
what is a backend and how does it differ from the application - give that it takes an application to submit sql to the database in the first place...
Sam also known as SMK
Martin, December 16, 2010 - 9:11 pm UTC
How do you always know it is 'Sam also known as SMK'? :)
December 16, 2010 - 9:25 pm UTC
ip addresses, contact info.
Build application by Delphi language connected with current Oracle
Kamal, December 20, 2010 - 7:02 am UTC
I made a software application by Delphi language, this application has installed on the local University network and connected with the Oracle 11g by ODBC.
When I try to run this application some times work very good but some times is not and I get an error message which is
"ORA-12557: TNS:protocol adapter not loadable"
What is the problem please.
December 20, 2010 - 7:47 am UTC
are you trying to connect to a local (on the same machine) database or over a network.
SQL auditing
Sam, December 27, 2010 - 4:39 pm UTC
Tom:
Merry Christmas to you! It is good to still see you in good physical shape and running the site.
You do not need to publish my initials every time unless you like it a lot!
What I mean by backend execution is using a tool like sql*plus installed on the server. The user does not go through the client application installed on his PC or a browser-based application.
I guess for Oracle it does not matter - it receives a SQL statement from some client.
There must be a way to do this in 9i using custom PL/SQL code. I am sure many have done it before. Can't some trigger be created on V$SQL that copies each SQL statement that is added and stores it into an AUDIT_SQL table?
December 28, 2010 - 10:05 am UTC
Sam/SMK - I do that just so people know how often it is you - asking very similar questions and getting the same response (usually "it depends, there is no single BEST way for anything"). You frequently post as "a reader" these days. It gives context.
sqlplus IS a client application. Backend execution is some term you made up - on your own. It is things like that in our industry that make communication hard, if not impossible.
If a user is using sqlplus - guess what they are doing? they are using a client application installed on their PC (or they could be using the old isqlplus which is a browser based application).
If you could do it in 9i, then the new feature for DBMS_FGA - fine grained auditing, introduced in 10g, would have been sort of boring - wouldn't it?
v$sql isn't a table - it is a v$ view, it isn't "real", it doesn't consume storage. No triggers.
Even if it were a real table, it is owned by SYS.. No triggers allowed on sys owned objects.
Even if it was allowed for SYS owned objects to have triggers - it still wouldn't work since no one ever inserts/updates/deletes from a v$ view - they are in memory data structures that are modified by direct memory access - not SQL.
auditing
sam, December 28, 2010 - 10:19 am UTC
Tom:
OK, if I understand you correctly the answer is, "No, you can't do FGA and audit the SQL statement in Oracle 9i". Right?
<<<If a user is using sqlplus - guess what they are doing? they are using a client application installed on their PC (or they could be using the old isqlplus which is a browser based application). >>>
Most of the time sql*plus is installed with Oracle on the server. I think many developers refer to data manipulation done on the server as backend updates.
December 28, 2010 - 10:30 am UTC
... "No, you cant do FGA and audit
the SQL statement in oacle 9i". right? ...
sometimes I just go "wow". Like now. Here are some quotes from prior responses by me:
o If you could do it in 9i, then the new feature for DBMS_FGA - fine grained auditing, introduced in 10g, would have been sort of boring - wouldn't it?
o Do you need to audit the STATEMENT that Joe used to update the table? Fine grained auditing does that in 10g and up (nothing for 9i)
o I answered already above, just re-read it. If it existed for 9i (since you already said 9i) I would have mentioned it. I was not ambiguous:
I'll let you figure out if I was trying to say "No, you can't do FGA and audit the SQL statement in Oracle 9i"
Most of the time sql*plus is installed with Oracle on the server. I think many developers refer to data manipulation done on the server as backend updates.
No, they don't.
Many times sqlplus is installed on every machine a developer touches - most times. So most times - sqlplus is not on a server - it is on a desktop. And regardless of where sqlplus is installed - the term "backend updates" is a made up term - it doesn't map to any common nomenclature in use by developers to my knowledge.
Which do you think makes more sense, is more universal, says BEST what you were trying to say:
a) I need to know the SQL statement that John executed from backend (not application) and when he ran it
b) I need to know the SQL statement that John executed from sqlplus and when he ran it
auditing
A reader, January 12, 2011 - 3:56 pm UTC
January 12, 2011 - 4:15 pm UTC
Sam/SMK -
it is in 9i, however, as I said
Do you need to audit the STATEMENT that Joe used to update the table? Fine grained auditing does that in 10g and up (nothing for 9i)
for you see, fine grained auditing audited SELECT statements in 9i, but not modifications.
If you read the links you just posted, you would have read things like:
Another important thing to note here is, as of Oracle 9i, the FGA policies define how the auditing is done for SELECT statements only, not for other statements.
Critical reading is a skill - an important one - for people in our industry.
A reader, January 12, 2011 - 4:55 pm UTC
The original question dates back to 2000.
Please correct me if I'm wrong...
If the question had been asked now, as of 11g, then Fined-Grained Auditing (FGA) would have been the straightforward answer, right ?
January 13, 2011 - 7:12 am UTC
No, fine grained auditing would allow us to see what SQL has been performed against objects - but not the changes that SQL made.
In 11g, the answer would be "flashback data archive, part of the total recall option. You can query the data 'as of' any point in time in the past - back to the time the table was placed into the flashback data archive"
auditing
sam, January 12, 2011 - 5:24 pm UTC
Tom:
Ok, I was planning to read it in more depth tonight. I thought you were saying FGA was introduced in 10g.
What do you recommend to audit all updates/deletes/inserts in a 9i database (all tables)? Let us say capture the "data" changes for now.
Triggers will take too long and slow things down. Application auditing will also.
Is there anything out of the box (i.e. the AUDIT DATABASE command) that allows me to do that quickly?
January 13, 2011 - 7:15 am UTC
Ok, I was planning to read it in more depth tonight. I thought you were saying FGA was introduced in 10g.
I was not saying fine grained auditing was introduced in 10g, I was saying it did not do the DML statements you explicitly asked for.
what do you recommend to audit ...
re-read the posting from a month ago - December 13, 2010
Triggers will ...
You have to pay the price. Until 11g with the flashback data archive, you will have to do this one way or another in real time. In 11g, it can happen in the background AFTER your transaction concludes.
auditing
sam, January 13, 2011 - 12:43 am UTC
Tom:
Isn't this link saying that I can audit SQL or DML in 9i?
http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/audit.htm
To audit all successful SELECT, INSERT, and DELETE statements on the dept table owned by user jward, BY ACCESS, enter the following statement:
AUDIT SELECT, INSERT, DELETE
ON jward.dept
BY ACCESS
WHENEVER SUCCESSFUL;
Fine grained auditing seems to be used to capture data changes.
January 13, 2011 - 7:26 am UTC
do you know what audit does, I suggested audit myself.
Audit will tell you that someone inserted into a table, that someone updated a table, that someone did a delete on a table.
Audit will NOT tell you what statement they issued, that is what fine grained auditing does.
Audit captures actions (you did an insert)
Fine grained auditing captures SQL (you issued INSERT INTO T values (1,2)) - in 10g and above. In 9i, fine grained auditing captures only SELECT statements.
Flashback data archive in 11g captures data changes. Workspace Manager in 9i and before can do something similar but has a high performance hit.
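To make the contrast concrete, a sketch (the FGA policy needs 10g or later; schema, table and policy names are made up):
-- plain AUDIT: records the fact that an insert/update/delete happened
audit insert, update, delete on scott.emp by access whenever successful;

-- fine grained auditing: records the statement text as well (10g and up)
begin
    dbms_fga.add_policy(
        object_schema   => 'SCOTT',
        object_name     => 'EMP',
        policy_name     => 'EMP_DML_SQL',
        statement_types => 'INSERT,UPDATE,DELETE' );
end;
/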
auditing
sam, January 16, 2011 - 11:56 am UTC
Tom:
1) How practical is it to use LogMiner to capture SQL_TEXT and data column changes from the database logs and store them in application AUDIT_TABLES? It seems to me this feature is more of an on-demand thing to investigate a current problem than a long-term auditing solution.
http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/logminer.htm
2) It seems the most practical option in 9i and 10g to capture column changes after INSERT, UPDATE, DELETE statements is to use AFTER triggers (one trigger per table). Do you agree?
FGA only captures SQL statements (SELECT in 9i; INSERT, UPDATE, DELETE, SELECT in 10g).
January 23, 2011 - 3:11 pm UTC
1) i would agree, i would not be using log miner for that purpose.
2) if you need to capture the actual values - which is something entirely NEW here Sam, not what you were talking about at all before - not even close to what you were asking about (you do that a lot you know, change the question after a long discussion), then in 10g and before you will likely be looking at triggers and in 11g at flashback data archive.
auditing
sam, February 04, 2011 - 6:25 pm UTC
Tom:
Is this a typo, or does 9i only capture full SQL statements (all DML) for SYSDBA, in O/S files? See the update statement captured below.
http://download.oracle.com/docs/cd/B10500_01/server.920/a96521/audit.htm
When SYS auditing is enabled, both the ALTER SYSTEM and UPDATE statements are displayed in the operating system audit file as follows:
Thu Jan 24 12:58:00 2002
ACTION: 'CONNECT'
DATABASE USER: '/'
OSPRIV: SYSDBA
CLIENT USER: scott
CLIENT TERMINAL: pts/2
STATUS: 0
Thu Jan 24 12:58:00 2002
ACTION: 'alter system flush shared_pool'
DATABASE USER: ''
OSPRIV: SYSDBA
CLIENT USER: scott
CLIENT TERMINAL: pts/2
STATUS: 0
Thu Jan 24 12:58:00 2002
ACTION: 'update salary set base=1000 where name='myname''
DATABASE USER: ''
OSPRIV: SYSDBA
CLIENT USER: scott
CLIENT TERMINAL: pts/2
STATUS: 0
February 06, 2011 - 12:01 pm UTC
Hey Sam,
when you tested it - what happened?
and remember, sysdba isn't you, it won't solve your problem.
auditing
sam, March 17, 2011 - 8:15 pm UTC
Tom:
How would you determine if triggers would have any performance impact on the system?
My expectation is that there is none, or perhaps 0.01 sec of overhead. I have to test it of course.
But every time you INSERT/UPDATE/DELETE a ROW, I want to store a copy of the whole ROW into a shadow table with a few audit columns. I am not doing 50 IF statements in the trigger if a table has 50 columns... just making another insert if a few values got updated.
This system is small in size and has a low number of transactions/users.
What would be the best way to measure or estimate performance before you implement it?
Would you use a script to simulate a double insert versus a single insert?
March 18, 2011 - 7:29 am UTC
How would you determine if triggers would have any performance impact on the system?
It is obvious Sam, they - LIKE ANY OTHER ADDITIONAL BIT OF CODE PATH - would definitely have a measurable impact on the execution of code.
If you have a trigger that does this:
begin
update one_million_row_table set x = 5;
end;
it will obviously have much more than a 0.01 second impact.
But every time you INSERT/UPDATE/DELETE a ROW, I want to store a copy of the whole ROW into a shadow table with a few audit columns. I am not doing 50 IF statements in the trigger if a table has 50 columns... just making another insert if a few values got updated.
Your point is?
It is not overhead if it is something you HAVE to do.
Enabling Oracle AUDIT commands adds to runtime. I refuse to call it overhead however since it is something you HAVE to do. If you did not HAVE to do it, you would not.
What would be the best way to measure or estimate performance before you implement it?
Sam - you are a long time reader of this site. You know the answer to that. After lurking around here for years - you know what the answer and the approach is.
Ideas for column log storage
Tailor, March 25, 2011 - 3:04 pm UTC
Hello.
I'm using triggers to store column data changes, as suggested at the very beginning of this thread. I defined a single table to store all logs from the "audited" tables.
The structure is like this:
SQL> desc wsi_auditlog
Name Type Nullable Default Comments
----------- -------------- -------- ------- --------
DATE_TIME DATE Y
USER VARCHAR2(30) Y
TERMINAL VARCHAR2(30) Y
PROGRAM VARCHAR2(48) Y
TABLE VARCHAR2(30) Y
COLUMN VARCHAR2(30) Y
ID_ROW VARCHAR2(2000) Y
VALUE_OLD VARCHAR2(4000) Y
VALUE_NEW VARCHAR2(4000) Y
COMMENTS VARCHAR2(4000) Y
MODULE VARCHAR2(40) Y
The column sizes for the new and old values, plus the ID (from the application's view) of the row and the comments, are very rarely used at full capacity; let's say an average of 30 chars each.
(I translated the column names, since they were in Portuguese.)
While this is working, it is hard to SELECT against it to show the "grouped" changes of a particular row, bringing those rows back to columns (like a hand-made pivot).
The performance in general is not very good either. I have indexes on the table and column attributes, and on ID and date.
I'm trying to figure out a different way to store and recover this information, but not tied specifically to one table; it must cover a group of different tables.
The target database is 10.1 and 10.2 .
What could you suggest as a better way to do this than the current one?
Thanks for your attention.
March 28, 2011 - 5:53 am UTC
either to:
a) get to 11g and use the flashback data archive, which makes it very easy to query up the data using either flashback query (point in time) or flashback versions query (show me all of the versions of a row(s) between two points in time. Impact on performance: almost nothing, the versioning happens AFTER you commit - not as part of your transaction
b) "do it yourself" and log the entire row - you would have a shadow table for every table you want audited and a trigger that populates the entire before (:old) version of the row after update or delete on the row. You would have to then 'teach' people how to query this data properly - to get a point in time view or a versions view. Impact on performance: moderate to high, a trigger will add overhead and you'll be doubling the size of some of your transactions.
c) if your system is not very high volume, you can also look at workspace manager (comes with database since v8) with the knowledge that it will impact performance. Impact on performance: High
http://docs.oracle.com/docs/cd/B19306_01/appdev.102/b14253/toc.htm
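For reference, enabling option (a) is just DDL - a sketch (the archive name, tablespace and retention period are made up):
-- one-time setup: an archive with a five year retention
create flashback archive emp_fda
    tablespace users
    retention 5 year;

-- start versioning the table
alter table emp flashback archive emp_fda;

-- later: query the table at any point within the retention window
select * from emp as of timestamp systimestamp - interval '1' hour;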
auditing
sam, March 28, 2011 - 9:41 am UTC
Tom:
<<<the versioning happens AFTER you commit - not as part of your transaction>>>
Is this why it would have no performance impact versus triggers that insert data before commit as part of the transaction?
I think the Flashback Archive is an optional feature that has to be enabled in 11g, and the data needs to be copied from the flashback tables to audit tables since it does not stay there for a long time. Correct?
March 29, 2011 - 3:12 am UTC
Is this why it would have no performance impact versus triggers that insert
data before commit as part of the transaction?
pretty much
I think the Flashback Archive is an optional feature that has to be enabled in 11g, and the data needs to be copied from the flashback tables to audit tables since it does not stay there for a long time. Correct?
instead of guessing, you could have read the documentation....
total recall - including flashback data archive - is an option to enterprise edition, yes.
the data in the flashback data archive is generated from the UNDO your transaction generated. There is no "copy from flashback table to audit tables" - I don't even know what those things "are" - we do not have them. "flashback table" is a feature, a capability - not a storage mechanism. "Audit tables" hold the audit trail - like "who selected from this table, who updated that table" - they do not hold the flashback data archive...
triggers
sam, March 29, 2011 - 8:23 am UTC
Tom:
Well, the Oracle audit trail does not support capturing column data changes, so you say to capture them from the flashback data archive.
Does data stay in the flashback archive forever?
We want to have our own audit tables and store the data there. Can't we copy it from the flashback archive to our audit tables, just like you can copy data from the Oracle audit table to your own private area and then flush the Oracle audit table?
March 30, 2011 - 1:21 am UTC
I'm not sure what your point in the first paragraph was sam. You really need to read up on the technology before commenting on it.
Data stays in the (well documented, much written about) flashback data archive for as long as you configure it to stay there. YOU set the retention period. You could have some tables with a 5 year retention, some with a 5 day retention. YOU decide, YOU tell us.
We want to have our own audit tables and store the data there.
why, you would lose the ability to use a simple flashback query to see the data, you would lose the ability to use a simple flashback versions query to see the data. You would have to re-implement everything. In a word - it would be (ok, two words) "NOT SMART"
There is NO AUDIT TABLE involved here, use the right terminology, stop making words up (you do that a lot - make up your own terminology - it makes having a discussion hard at best).
Can you copy it from the flashback data archive? sure
Should you? no
Would it show innovation and intelligence on your part? no
Should it stay in the flashback data archive for as long as you need it? yes
Audit
lalu, April 14, 2011 - 5:17 am UTC
Hi Tom,
1: Can we track/log all the SQL (DDL, DML) and PL/SQL coming into the database?
2: We have some SQL/PL/SQL based applications which reside in jar files.
Can we track which objects (tables/columns) are affected when running the application?
Ex: The application does some bill generation; can we track which query is executed and which objects are affected through the application?
Thanks.
lalu.
April 14, 2011 - 9:55 am UTC
1) pretty much, you have the AUDIT command, you have fine grained access control.
2) if your application is kind enough to identify itself (dbms_application_info, dbms_session), yes - if they just connect and do not identify themselves, not necessarily.
Audit sql
lalu, April 14, 2011 - 2:50 pm UTC
Hi Tom,
For 1: Can we track/log all the SQL (DDL, DML) and PL/SQL coming into the database?
I checked fine grained auditing, but found that SQL can be tracked for DDL/DML on only a single table per FGA policy.
I am looking for some method by which I can track all the incoming DDL/DML/select statements for all the tables using a single FGA policy.
Thanks.
April 14, 2011 - 5:44 pm UTC
you can use the AUDIT command for that, audit all the types of statements you would like audited, use the extended audit trail.
audit
Sam, April 14, 2011 - 6:41 pm UTC
Tom:
He did not say what Oracle version he is using, but you are assuming 10g or 11g (this is not supported in 9i).
I thought all you need is to set AUDIT_TRAIL=EXTENDED in 10g/11g to capture all the SQL statements coming into the DB.
The AUDIT command is more for auditing specific tables, users and statements.
I don't think PL/SQL can be audited, per his question.
April 14, 2011 - 7:03 pm UTC
Sam,
I always assume current software if no version is mentioned. As a frequent reader and questioner - you know that. I've told you that specifically on more than one occasion as you often do not mention that crucial information.
http://docs.oracle.com/docs/cd/E11882_01/server.112/e17110/initparams017.htm#REFRN10006
setting audit trail to db,extended or xml,extended (there is no setting it to just extended as documented...) will
"Performs all actions of AUDIT_TRAIL=db, and also populates the SQL bind and SQL text CLOB-type columns of the SYS.AUD$ table, when available. These two columns are populated only when this parameter is specified."
meaning, only things you AUDIT will be audited.
you can audit execute...
audit
sam, April 15, 2011 - 7:34 pm UTC
Tom:
Thanks, I see now that you still have to issue AUDIT commands (in addition to setting audit_trail=extended).
If we want to capture all the SQL coming in, that means we have to AUDIT insert, update, delete on all tables in the schema.
April 18, 2011 - 9:58 am UTC
SQL> audit all statements;
Sam - before guessing, you might just want to "peek" at the documentation?
Ideas for column log storage
Tailor, April 18, 2011 - 9:47 am UTC
Thanks for your advice.
I should have given you more information on the usage scenario. We develop an application, and there is a handful of databases, each at a different customer, all of them Standard Edition.
So Workspace Manager is not available... and some of these databases are not exactly small.
An upgrade to 11g is not feasible for all installations in a short time (and even then they'll probably stay on Standard Edition, I guess, for cost reasons), so I guess there are few to no options to do better there.
This audit need is primarily for our own usage, for uncovering the real truth behind the old excuse "the system did that, not me".
If you have any other points to show us, I would be glad to read them.
trigger
sam, May 19, 2011 - 12:17 pm UTC
Tom:
Let us say I want to audit data updates and deletes on an existing table EMP_TAB that has a few hundred thousand records.
I created a shadow table Emp_tab_audit and added a few audit columns:
CREATE TABLE Emp_tab (
Empno NUMBER NOT NULL,
Ename VARCHAR2(10),
Job VARCHAR2(9),
Mgr NUMBER(4),
Hiredate DATE,
Sal NUMBER(7,2),
Comm NUMBER(7,2),
Deptno NUMBER(2) NOT NULL);
CREATE TABLE Emp_tab_audit (
Seq NUMBER,
Operation VARCHAR2(3),
Username VARCHAR2(30),
Audit_time DATE,
Ip_address VARCHAR2(25),
Terminal VARCHAR2(10),
Empno NUMBER NOT NULL,
Ename VARCHAR2(10),
Job VARCHAR2(9),
Mgr NUMBER(4),
Hiredate DATE,
Sal NUMBER(7,2),
Comm NUMBER(7,2),
Deptno NUMBER(2) NOT NULL);
I am mostly interested in UPDATES and DELETES, but I decided to add INSERTS to have the full history for each employee in one table (audit schema) instead of querying two tables all the time (production table and audit table) to see the changes.
I created this AFTER INSERT, UPDATE, DELETE trigger.
I decided to copy the :NEW values for INSERT and UPDATE and the :OLD values for DELETE.
*******************************************************************
CREATE OR REPLACE TRIGGER TRG_EMP_AUDIT
AFTER INSERT OR UPDATE OR DELETE ON EMP_TAB
FOR EACH ROW
DECLARE
v_operation VARCHAR2(3);
v_user VARCHAR2(30);
v_timestamp DATE;
v_ip_address VARCHAR2(25);
v_terminal VARCHAR2(10);
BEGIN
v_user := USER;
v_timestamp := SYSDATE;
v_ip_address := SYS_CONTEXT('USERENV','IP_ADDRESS');
v_terminal := SYS_CONTEXT('USERENV','TERMINAL');
IF INSERTING THEN
v_operation := 'INS';
ELSIF UPDATING THEN
v_operation := 'UPD';
ELSE
v_operation := 'DEL';
END IF;
IF INSERTING OR UPDATING THEN
-- :new values for inserts and updates
INSERT INTO EMP_TAB_AUDIT
( seq, operation, username, audit_time, ip_address, terminal,
empno, ename, job, mgr, hiredate, sal, comm, deptno )
VALUES
( audit_seq.nextval, v_operation, v_user, v_timestamp,
v_ip_address, v_terminal,
:new.empno, :new.ename, :new.job, :new.mgr,
:new.hiredate, :new.sal, :new.comm, :new.deptno );
ELSE
-- :old values for deletes
INSERT INTO EMP_TAB_AUDIT
( seq, operation, username, audit_time, ip_address, terminal,
empno, ename, job, mgr, hiredate, sal, comm, deptno )
VALUES
( audit_seq.nextval, v_operation, v_user, v_timestamp,
v_ip_address, v_terminal,
:old.empno, :old.ename, :old.job, :old.mgr,
:old.hiredate, :old.sal, :old.comm, :old.deptno );
END IF;
END;
/
*******************************************************************************
so when insert happens, the first audit row is created in EMP_TAB_AUDIT.
when update happens, the 2nd new row is created in EMP_TAB_AUDIT.
The problem I am facing is the old records that currently exist. If someone updates an old row I am copying the :NEW values, so I won't have a copy of the :OLD values unless I create 2 ROWS (one for the old and one for the new).
Do you think I should copy all the hundreds of thousands of records to the AUDIT table for this to work? I am hesitant to do that.
Any better ideas? I am applying this solution to several tables (not just one).
This is also in 9i and I don't have flexibility beyond using a trigger to track data changes.
May 23, 2011 - 10:41 am UTC
Union all and create a view, Sam. Just audit the updates and deletes. Capture only the :old values.
create view all_of_the_stuff
as
select ...., X.* from current_table X
union all
select * from your_audit_trail;
Otherwise you obviously have to copy the current table - you have to stop people from accessing it, create table "new table" as select from the old, put the triggers in place and then let things go again.
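For what it is worth, the cut-down trigger would be roughly this (a sketch reusing the EMP_TAB_AUDIT table and AUDIT_SEQ sequence from the review above - update/delete only, :old values only):
CREATE OR REPLACE TRIGGER TRG_EMP_AUDIT
AFTER UPDATE OR DELETE ON EMP_TAB
FOR EACH ROW
DECLARE
v_operation VARCHAR2(3);
BEGIN
IF UPDATING THEN
v_operation := 'UPD';
ELSE
v_operation := 'DEL';
END IF;
-- capture only the :old image; the current row lives in EMP_TAB itself
INSERT INTO EMP_TAB_AUDIT
( seq, operation, username, audit_time, ip_address, terminal,
empno, ename, job, mgr, hiredate, sal, comm, deptno )
VALUES
( audit_seq.nextval, v_operation, USER, SYSDATE,
SYS_CONTEXT('USERENV','IP_ADDRESS'),
SYS_CONTEXT('USERENV','TERMINAL'),
:old.empno, :old.ename, :old.job, :old.mgr,
:old.hiredate, :old.sal, :old.comm, :old.deptno );
END;
/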
triggers
sam, May 23, 2011 - 12:23 pm UTC
Tom:
It seems it is a catch-22 situation.
I don't want to copy 100,000 records to the audit table, as some records are very old and not needed.
Similarly, I don't like the idea of copying OLD values with an after update trigger. The whole purpose is to see what the user updated (the NEW values). I know you can compare the current records to the audited OLD values, but doesn't it look awkward to show OLD values with an update statement and compare them to the current records to compute what has been updated?
For example if the records now is
EMPID, Col1, Col2, Col3, CREATE_BY, CREATED_DATE, ACTION
-----------------------
100, A, B, C, John, 01-MAY-2011, INS
Mike updates col2 and col3, so this will be in the AUDIT table
---------------------------------
100, A, B, C, mike, 10-MAY-2011, UPD
The current record in MAIN table will be
----------------------------------------------
100, A, B1, C1
Susan updates col1, so this will be in the audit table
---------------------------------------------
100, A, B1, C1, Susan, 15-MAY-2011, UPD
Main table will be
-------------------------
100, A1, B1, C1
It might be useful to copy the new records created from 6 months ago, or just assume the first update will not have a baseline record to compare to since we are storing NEW values, or maybe with the 1st initial UPDATE insert two records into the AUDIT table (one for the old and one for the new).
Do you agree that capturing NEW with update is clearer?
What method would you recommend implementing?
May 23, 2011 - 12:48 pm UTC
it is not a catch 22, it is a decision you are making. A catch 22 is a no-win situation and I've already told you (more than once, more than just today) what to do. You just don't want to do it. There is a win situation here.
You should just log the update/delete :OLD values, nothing else and union all the tables together for reporting.
done, easily, efficiently.
last I'll have to say on it.
What the user updated to is rather *easy* sam, you have that information in my model. It is right there - it is the "next" record which is either in the audit trail OR in the base table itself. If you wanted, add a "last updated by" column to the base table and maintain that. Pretty simple.
trigger
sam, May 23, 2011 - 3:45 pm UTC
Tom:
1) Is the "last updated by" column in main table to sort the records in the view and get the last current one at the bottom of the list?
create view all_of_the_stuff
as
select ...., X.* from current_table X
union all
select * from your_audit_trail;
order by last_updated_by
2) Don't you think that having all records for an EMPLOYEE in the audit trail table (i.e. auditing INSERTs too) can be better for reporting performance?
3) Have you demonstrated this auditing technique or design in any other thread? I am trying to see how you use the LAG function to see the values updated by each user in single-row format.
With the example above, I want to see what columns/values Mike and Susan updated.
May 23, 2011 - 5:26 pm UTC
1) no, it was to more readily see the changes sam, look at what I wrote again:
What the user updated to is rather *easy* sam, you have that information in my model. It is right there - it is the "next" record which is either in the audit trail OR in the base table itself. If you wanted, add a "last updated by" column to the base table and maintain that. Pretty simple.
what were we discussing there - getting the record with the changes associated with a user. I said "If you wanted, add a "last updated by".... That would show you right away what changes that person made to that record.
2) Sam, union all. I'll just say it - union all. Just union all the two things together Sam. I've said it over and over. If you don't like my suggestion (which I will not be changing by the way, you should know that about me by now), then go off and do whatever you like.
3) Sam, you are ON THIS PAGE - right here, right now - where I audit... ta-da... just the changes.
If you cannot figure out how to use LAG() - (you personally) - after reading so many things on this site, I don't know what to tell you.
You have records.
They have primary keys.
They have dates associated with them (a fixed date in the audit table, sysdate perhaps in the 'real' table).
you have a union all view:
select x.*, sysdate mod_date, ..... whatever else you need ....
from real_table x
union all
select * from audit_table;
you can therefore:
select a.*, lag( username ) over (partition by primary key order by mod_date DESC) user_who_made_these_changes
from that_view;
Or, as I wrote above, just mod your real table to have a "last update by" column that is the username of the person that modified the record. That'll make it rather easy wouldn't it.
broken links
volker, May 24, 2011 - 11:52 am UTC
May 24, 2011 - 1:06 pm UTC
I had to change the URLs when we moved to apex.oracle.com - I could not have a public synonym anymore, so the URL is:
http://asktom.oracle.com/pls/asktom/asktom.search
now, schema qualified and the DAD is more "specific"
Segregating data changes with flashback query
Mike, July 05, 2011 - 7:49 am UTC
We are looking at Flashback Data Archive versus Oracle Workspace Manager for some of our control needs.
For retaining past versions of data, and for identifying changes to data from one point in time to another, the Flashback Data Archive seems to meet the need very well without the enormous performance penalties introduced by OWM.
However, I need to be able to segregate the data changes, when I have multiple batch jobs that have updated the same table. These batch jobs have intermediate commits, so they are not necessarily the same SCN. All of the commits/SCNs for a job would have occurred over the same session, though.
I have tried to figure out if there was a way to tie together the SCNs (i.e. the VERSIONS_STARTSCN and VERSIONS_ENDSCN) that belong to the same session, but I don't see a way to do it. (I would have an opportunity to run a procedure at the end of the job, from the session that actually performs those changes.)
Before I resign myself to adding a column on each table to track this, are there any other avenues I should pursue?
July 05, 2011 - 12:53 pm UTC
there is an undocumented thing that could work for you - we can record the commitscn of a transaction - so if your batch job was to insert ONE extra record in some log table right before its commit - we can tell you all of the commit scns for a given business transaction.
I am not going to recommend this - since it is undocumented and subject to change over time - it would be high risk.
I think the only 'good' way to do it would be to tag the records with a business transaction ID of some sort.
Provide Logged in user id instead of 'user' schema
prashant bhate, July 06, 2011 - 11:03 am UTC
Hi Tom,
In a web application scenario, how do we provide the currently logged-in user name (not the db 'user') to the trigger? It doesn't always make sense to record the schema name, as multiple 'web' users connect to the same schema.
insert into audit_tbl values
( sysdate, user, upper(l_tname), upper(l_cname),
l_old, l_new );
Here, "user" should be the login id of the end user rather than the schema name.
July 08, 2011 - 11:52 am UTC
what is the "logged in" user? You say "not db user", but as far as the database is concerned - that is the logged in user.
Does your application currently supply the "real identity" via a session context or anything???
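If the application does identify its end users, the usual pattern looks roughly like this (a sketch - 'jsmith' and the literal old/new values are made up; AUDIT_TBL is the table from the original example):
-- the middle tier tags the session with the real end user once per request
begin
    dbms_session.set_identifier( 'jsmith' );
end;
/

-- the audit code then records that identifier instead of the schema name
insert into audit_tbl values
( sysdate,
  nvl( sys_context( 'userenv', 'client_identifier' ), user ),
  'EMP', 'SAL', '100', '200' );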
performance issue
Raj, July 14, 2011 - 8:20 am UTC
Hi Tom,
Can we use a trigger like the one below instead of calling a procedure from the trigger?
Will it improve performance, being only one INSERT statement with multiple SELECT branches?
Please provide your feedback; thanks in advance.
CREATE OR REPLACE TRIGGER T1_LOG
AFTER UPDATE ON T1
FOR EACH ROW
DECLARE
V_ACTION VARCHAR2(3) := 'UPD'; -- this trigger fires only on UPDATE
BEGIN
INSERT INTO CHG_LOG_TABLE ( COLUMN_ID, UPDATE_USER_ID, UPDATE_TIME, ACTION, TABLENAME, KEY1, KEY2 ,KEY3, COLUMN_NAME, OLD_VALUE, NEW_VALUE )
select COLUMN_ID,user,SYSDATE,V_ACTION,'T1',NULL,NULL,NULL,'C1', :OLD.C1, :NEW.C1
from dual where :NEW.C1 <> :OLD.C1 OR (:NEW.C1 IS NOT NULL AND :OLD.C1 IS NULL) OR (:NEW.C1 IS NULL AND :OLD.C1 IS NOT NULL)
union all
select COLUMN_ID,user,SYSDATE,V_ACTION,'T1',NULL,NULL,NULL,'C2', :OLD.C2, :NEW.C2
from dual where :NEW.C2 <> :OLD.C2 OR (:NEW.C2 IS NOT NULL AND :OLD.C2 IS NULL) OR (:NEW.C2 IS NULL AND :OLD.C2 IS NOT NULL)
union all
select COLUMN_ID,user,SYSDATE,V_ACTION,'T1',NULL,NULL,NULL,'C3', :OLD.C3, :NEW.C3
from dual where :NEW.C3 <> :OLD.C3 OR (:NEW.C3 IS NOT NULL AND :OLD.C3 IS NULL) OR (:NEW.C3 IS NULL AND :OLD.C3 IS NOT NULL)
union all
.....
July 15, 2011 - 9:05 am UTC
you would need to explicitly convert dates to strings using a format mask, so your approach is probably a little incomplete...
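For a hypothetical DATE column D1, that branch of the UNION ALL might look like this (a sketch with a fixed format mask):
union all
select COLUMN_ID, user, SYSDATE, V_ACTION, 'T1', NULL, NULL, NULL, 'D1',
       to_char( :OLD.D1, 'yyyy-mm-dd hh24:mi:ss' ),
       to_char( :NEW.D1, 'yyyy-mm-dd hh24:mi:ss' )
  from dual
 where :NEW.D1 <> :OLD.D1
    or ( :NEW.D1 is not null and :OLD.D1 is null )
    or ( :NEW.D1 is null and :OLD.D1 is not null )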
varray
Jonie, July 19, 2011 - 8:35 am UTC
How do I pass varrays to the above package procedure? If I have 10 different types on 10 different tables, should I create that many procedures?
July 19, 2011 - 9:54 am UTC
if you have 10 different types, you'd need 10 different procedures.
why so many types? I don't see how varrays fit in at all actually - since it is a for each row trigger - only one row at a time in the first place.
VARRAYS
Jonie, July 20, 2011 - 12:09 am UTC
Hi,
I have 10 tables which we have to audit.
Each table has 1 or 2 varrays in it.
For example, 14 telephone numbers need to be stored for a user.
How do I audit these columns? Even if I write 10 different procedures for the 10 varrays in the different tables, if only 1 value in a single varray changes, how do I audit it?
Please help me asap.
July 22, 2011 - 12:40 pm UTC
ugh, I would never have used varrays to persist data. I wish we didn't even offer it as a method.
You'd probably want to implement a much more complex object type - with a MAP and/or ORDER methods (that you code yourself) so that you can ask if the two varrays are equal or not.
But me, I'd fix the data model, these phone numbers belong in their own table... As rows.
varray
Jonie, July 25, 2011 - 5:57 am UTC
Hi ,
Thanks for the reply.
You mean I should store these phone nos. as 14 columns for 1 user instead of keeping them in 1 column for the user as a varray?
That way the trigger creation would also be pretty simple.
July 27, 2011 - 8:05 pm UTC
No, I mean their phone numbers should be in a child table as rows.
Generic Trigger.
Marcelo, August 02, 2011 - 2:46 pm UTC
Hi Tom,
Similar to the example below, can I create a trigger that captures ALL DML operations on tables, but only for a particular schema?
Note: My version of Oracle is Standard Edition.
create or replace trigger DBAUNIMED.audit_ddl_trg
AFTER DDL ON DATABASE
BEGIN
INSERT INTO dbaunimed.audit_ddl(Data, OsUser, Ora_Login_User, Host, Terminal, Module, Owner, Type, Name, SysEvent)
VALUES
(
sysdate,
sys_context('USERENV','OS_USER') ,
ora_login_user,
sys_context('USERENV','HOST') ,
sys_context('USERENV','TERMINAL') ,
sys_context('USERENV','MODULE') ,
ora_dict_obj_owner,
ora_dict_obj_type,
ora_dict_obj_name,
ora_sysevent
);
END;
August 02, 2011 - 4:03 pm UTC
after ddl on schema
you can do these things on the schema level too.
Think you meant all DDL, not all DML...
ops$tkyte%ORA11GR2> create table audit_ddl (sysevent varchar2(255) );
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace trigger audit_ddl_trg
2 AFTER DDL ON schema
3 BEGIN
4 INSERT INTO audit_ddl(SysEvent) values ( ora_sysevent );
5 END;
6 /
Trigger created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select * from audit_ddl;
no rows selected
ops$tkyte%ORA11GR2> create table t ( x int );
Table created.
ops$tkyte%ORA11GR2> select * from audit_ddl;
SYSEVENT
-------------------------------------------------------------------------------
CREATE
ops$tkyte%ORA11GR2>
Bala, August 04, 2011 - 5:42 pm UTC
How do I create a trigger that captures ALL DML operations for a particular schema? Not at the table level with an after-insert trigger.
August 04, 2011 - 7:22 pm UTC
what do you mean by "all dml operations"
do you want a history of all changes for all tables
do you want a history of all SQL executed against all tables
need help
oumer, August 12, 2011 - 1:38 pm UTC
I have a table called OFFER that is a parent table, and I have a child table called OFFER_CACHE. If an update changes columns or a row in the OFFER table, the same columns should be updated in the child table as well. I wrote a trigger for that; here is my trigger, but it is not updating. Can someone help me out with it?
CREATE OR REPLACE TRIGGER EMJO.OFFER_CACHE_UPDATE
BEFORE UPDATE ON EMJO.OFFER_CACHE
FOR EACH ROW
BEGIN
UPDATE OFFER_CACHE SET
CUSTOMER_FRIENDLY_PROGRAM_ID = NVL(:NEW.CUSTOMER_FRIENDLY_PROGRAM_ID,CUSTOMER_FRIENDLY_PROGRAM_ID)
,SERVICE_PROVIDER_NM = NVL(:NEW.SERVICE_PROVIDER_NM,SERVICE_PROVIDER_NM)
,EXTERNAL_OFFER_ID= NVL(:NEW.EXTERNAL_OFFER_ID,EXTERNAL_OFFER_ID)
,OFFER_STATUS_TYPE_ID= NVL(:NEW.OFFER_STATUS_TYPE_ID,OFFER_STATUS_TYPE_ID)
,DISPLAY_EFFECTIVE_START_DT= NVL(:NEW.DISPLAY_EFFECTIVE_START_DT,DISPLAY_EFFECTIVE_START_DT)
,DISPLAY_EFFECTIVE_END_DT= NVL(:NEW.DISPLAY_EFFECTIVE_END_DT,DISPLAY_EFFECTIVE_END_DT)
WHERE OFFER_ID = :NEW.OFFER_ID;
END OFFER_CACHE_UPDATE;
August 14, 2011 - 8:28 pm UTC
you have a data model mess up here.
the solution is NOT A TRIGGER
the solution is: do not store this data in the child table. You are done.
You give no example, there is no create tables, no inserts, nothing to see here - no one can tell you why your code doesn't work without such an example.
But - the solution is so so so very very very simple:
DO NOT DO THIS
Just join.
Kumar, September 14, 2011 - 12:14 am UTC
Hi Tom,
As per a client requirement I am facing a problem. Let me explain, as I need some help.
Actually I have 2 master tables (Item_Cost__mst, Item_mst) and a detail table (Item_dtl). Whenever a value changes in the two master tables for some columns, like (cost_price, special_price), it should automatically change in the detail table (Item_dtl) in the respective columns. Can you please help me in this regard?
thanks
Kumar
September 14, 2011 - 7:12 pm UTC
You have a data model problem. You do not want to have this redundant data. You will just keep the value in the master table.
Your new requirement: explain to the client that that just isn't the way it is done, you have the data you need, just join.
seriously, I'm not being sarcastic at all. In a transaction system (which it sounds like here) you do NOT want this redundant data. Period.
Just join, databases were born to join.
auditing
A reader, October 03, 2011 - 10:54 am UTC
Hello Tom:
<<<if you need to capture the actual values then in 10g and before you will likely be looking at triggers and in 11g at flashback data archive. >>>
Based on my reading of the flashback features in 11g (Flashback Query, Flashback Table, Flashback Database), it seems they provide more of a short-term backup/restore solution than a long-term data-capture auditing solution (similar to exp/imp with a timestamp).
The data is read from the "UNDO" tablespace or rollback segments, governed by the retention period. Even though you can probably specify a retention period of years, that is not practical; most likely it should be *days*.
Am I correct in thinking that if I need to store audit data (updates and deletes) for table EMP for the *LONG TERM and PERMANENTLY*, for the whole ROW, the shadow table (EMP_AUDIT) and triggers would still be the best solution in Oracle 11g?
The audit data can be captured either via triggers or via flashback (periodically).
The only benefit of flashback is that you can copy the data later, AFTER the transaction commits, while a trigger would have to do it before the COMMIT. I assume there is no database performance impact because the database has to write everything to UNDO regardless of whether FLASHBACK is ON or OFF.
For LOW-VOLUME tables I see triggers as the most practical solution in 11g.
Do you agree or not? What do you suggest.
Thanks in advance.
October 03, 2011 - 11:28 am UTC
flashback data archive is a long term thing. flashback "query" introduced in 9i is 5 days into the past max. flashback data archives go months, years into the past.
http://www.oracle.com/us/products/database/options/total-recall/index.html
I disagree with your overall analysis; I was talking about the data archive, not just flashback query.
triggers
A reader, October 03, 2011 - 1:09 pm UTC
Tom:
ok, I will check the flashback data archive features in more detail.
You did not explain why you disagree with the analysis, or what you would do in Oracle 11g Enterprise Edition to get the data changes for auditing purposes.
If I am correct, in 11g you can use either of these for auditing data changes:
1) flashback data archive (no shadow tables)
2) table triggers (requires shadow tables)
and you seem to favor #1 for a moderate/high volume database mainly because:
1) there is no performance hit (data is read from the rollback segments)
2) there is no need to write or maintain code; it is a built-in feature.
Am I correct?
October 03, 2011 - 6:20 pm UTC
your analysis in a nutshell was:
a) that what I said earlier about the flashback data archive was synonymous with the five-day flashback query. That is wrong. The flashback data archive goes days, months, years into the past. You configure how long. So, I disagreed with that.
b) you wrote: Am I correct in thinking that if I need to store audit data (updates and deletes) for table EMP for the *LONG TERM and PERMANENTLY*, for the whole ROW, the shadow table (EMP_AUDIT) and triggers would still be the best solution in Oracle 11g?
To which I totally disagreed. You missed out on what the flashback data archive is/does. For *LONG-TERM and PERMANENT* storage of the entire row - in 11g - flashback data archive is the way to go. Less impact performance wise than your triggers. Less impact on end user response time than your triggers. Easier to query up (using the "as of" or "versions between" syntax) than your approach. Easier to manage (we use partitions to roll off old data). Easier to implement - a command, rather than writing code.
c) you wrote: The only benefit of flashback is that you can copy the data later, AFTER the transaction commits, while a trigger would have to do it before the COMMIT.
which is not correct.
So, to your three main points, I disagree. They are not correct, they are not valid.
I prefer total recall to code because I always prefer "no code to code".
Flashback data archive does use shadow tables by the way, we create them, we manage them, we populate them, we transparently query them for you - you never really have to see them.
I don't know why your 1) no performance hit 2) no need to write/maintain code would prevent you from considering this in all cases. I sort of like no performance hit when I can get it, and I always prefer "no code over code"
auditing
A reader, October 03, 2011 - 7:15 pm UTC
Tom:
Now I am 80% convinced.
No audit tables to create, no audit code to create and maintain, no performance impact at all.
Sounds too good to be true.
I just printed the whole chapter on flashback. I will read it tonight and see what kind of work is involved versus the old trigger approach.
I guess I would also need to educate ad hoc users on how to run queries on the archive, because this is different from regular SQL queries. I can also create views for them. Let me check.
October 04, 2011 - 11:19 am UTC
it adds a simple "as of timestamp" or "versions between timestamp and timestamp" to your query - nothing more, nothing less. It is *trivial* to use flashback query.
Now, on the other hand, if you do it yourself, you'll be training your end users for a while on how to query your data model to get the right version of the data (and they'll probably never be 100% sure they are getting the right data....) It is fairly complex to do a "version" query across two tables. It would typically look something like:
select empno, ename, job, mgr, hiredate, sal,
comm, deptno, asof, modby
from emp
where asof <= :date
UNION ALL
select empno, ename, job, mgr, hiredate, sal,
comm, deptno, asof, modby
from emp_hist
where asof <= :date
and enddate >= :date
and dmltype <> 'D'
for every table, instead of 'select * from emp as of timestamp :date'
total recall
A reader, October 04, 2011 - 3:04 pm UTC
total recall
A reader, October 04, 2011 - 10:39 pm UTC
Tom:
It seems Oracle has made a huge jump in 11g auditing capabilities with the flashback data archive.
I would be reinventing the wheel here if I created audit tables and triggers, etc. and spent months on work that can be done in a couple of days.
This is a piece of cake to implement: just create a tablespace, create a flashback data archive, and ALTER the table I want to track history data for, and I am done.
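(For reference, that three-step setup looks roughly like this - the names fda_ts and fda_one_year are made up for illustration:)
create tablespace fda_ts datafile 'fda_ts01.dbf' size 1g;
create flashback archive fda_one_year
tablespace fda_ts
retention 1 year;
-- enable history tracking for the table of interest
alter table emp flashback archive fda_one_year;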
1) Is it possible though to add columns to the internal history table, if I want to track additional environment variables such as IP address, terminal id, O/S user, etc., like we can using the trigger approach?
2) Are we stuck with the flashback queries "AS OF" and "VERSIONS BETWEEN" to query the historical data, or can we see all the data (if we desire) the same way we query an EMP_AUDIT table?
October 05, 2011 - 10:42 am UTC
1) no, and if you could, it would be done using triggers - back to square one.
2) you can query the base tables if you want, but why. You say "stuck with", tell me what you cannot do with them that you could with the base table?
flashback
A reader, October 05, 2011 - 10:58 am UTC
Tom:
You are making me worried now, after I am 100% convinced that FDA is the way to go for auditing with 11g. It would take months/years and lots of $$$ to write all these features that are already built in, and they might never work anyway (risk).
<<no, and if you could, it would be done using triggers - back to square one.>>
The client has a requirement to capture some of these for a web or client/server application that uses one DB account. I need somehow to capture the IP address or O/S user or application userid.
Do not tell me I can't do this and need to create an audit data model and write 100 triggers. I am sure Oracle has thought about this.
2) I was thinking of doing something like (select * from T1_AUDIT where empno in (1000,1001)), which shows ALL audit data for these two employees. Can you do the same with FDA?
October 05, 2011 - 11:05 am UTC
Sam (time for you to start using other sites too isn't it? You have collectively asked more things than anyone else on the *entire planet* here)
I'll have to tell you that we haven't thought of every possible combination and silly set of attributes to collect - you are correct. For example, the IP address is utterly useless (ever hear of a proxy server) for identifying anyone. As is the OS user (guess who that would be - why that would be the application server!) (do not even go down your "well, it would be good enough probably" path - just don't)
We can capture application userids IF and ONLY IF they constantly use dbms_session.set_identifier to set it in our standard audit trails. But that information is only as good as your programmers. If your middle tier hides the identity from the database..
2) read the docs. yes is the answer. You just say ... from table versions between ... where ....
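For example, something along these lines (the 30-day window is just a placeholder):
select empno, ename, sal,
       versions_starttime,   -- when this version of the row became current
       versions_operation    -- I(nsert), U(pdate) or D(elete)
  from emp
       versions between timestamp
         systimestamp - interval '30' day and systimestamp
 where empno in (1000, 1001)
 order by empno, versions_startscn;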
How about...?
A reader, October 05, 2011 - 11:54 am UTC
What about a bit of a hybrid approach? Let FDA do what it is intended to do and audit changes, and use the triggers to get you the session information. I don’t know what your environment looks like, but would you be able to do something like this?
Create a new table for the session information you want to keep. Populate this table with a LOGON trigger, giving it a unique “session id” value, and populating a Package variable with that value (so it will persist for the length of the session). Then add the session id column to each table you want to audit this information for and create a simple trigger for that table to set the session id on insert and update to the value stored in the package variable. The trigger should be generic enough that you could build your DDL script with a simple SQL query on the data dictionary.
Now you would have a table that stores every connection to the database, giving you the information you want to track (even if they do not do any DML statements). You can easily join that session table back to your data table to see who made the most recent change. Looking into the past, the value in the session id field will contain the value of the user that made that change. In other words, if I inserted a row with session 100 three weeks ago and Bob changed the record yesterday with session 150, the current value will be 150, but looking at the record a week ago will show you my session id of 100.
This way, you get all the features of FDA without having to reinvent the wheel. You get to track session data without having to store duplicate data for every insert/update. And when you decide in the future that you need to store extra session data, you simply have to add the column to your session table and adjust your LOGON trigger (or probably a stored procedure in your session package, since you need the package to persist the session id variable anyway), rather than modifying hundreds of tables and triggers.
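A rough sketch of that idea (the names session_log, session_pkg and the triggers are all made up; it assumes a session_id column has been added to each audited table):
create sequence session_log_seq;
create table session_log (
  session_id  number primary key,
  db_user     varchar2(128),
  os_user     varchar2(128),
  machine     varchar2(128),
  logon_time  date );

create or replace package session_pkg as
  g_session_id number;   -- persists for the life of the session
end;
/
create or replace trigger logon_audit
after logon on database
begin
  select session_log_seq.nextval into session_pkg.g_session_id from dual;
  -- record every connection (in practice you may want an autonomous transaction here)
  insert into session_log
  values ( session_pkg.g_session_id,
           sys_context('userenv','session_user'),
           sys_context('userenv','os_user'),
           sys_context('userenv','host'),
           sysdate );
end;
/
-- and per audited table, a tiny trigger stamps the session id:
create or replace trigger emp_session_trg
before insert or update on emp for each row
begin
  :new.session_id := session_pkg.g_session_id;
end;
/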
auditing
A reader, October 11, 2011 - 7:28 pm UTC
This is *not* a question.
FYI, I put in a request for a few licenses of total recall.
The long conversations/questions have paid off. I convinced everyone to pay 10K and save over 100K in software development costs. Thanks for your patience.
For your note on other sites, I will take that as a joke.
I tried other sites with some who call themselves experts. Many are amateurs who have no clue about oracle.
I normally like to learn from the BEST. That is how I can become the best.
In oracle, there is only one world heavyweight champion, that is thomas kyte. You will keep that belt for a long long time.
Flashback
A reader, October 12, 2011 - 4:58 pm UTC
Tom:
small question on your statement.
<<<We can capture application userids IF and ONLY IF they constantly use dbms_session.set_identifier to set it in our standard audit trails. But that information is only as good as your programmers. If your middle tier hides the identity from the database.. >>
I am talking about a client/server application (not web).
Do you mean the client application (i.e. Visual Basic, PowerBuilder) has to do this when it initially connects to the DB? I think the standard Oracle audit trail is different from the Flashback Data Archive (two different locations/files).
Would it be much better/easier for the Flashback Data Archive if we converted to individual database accounts instead of application accounts?
October 12, 2011 - 6:04 pm UTC
if it is client server, it isn't a problem.
everyone logs in as themselves. There is no middle tier "super user" account.
oh wait, it is Sam I am again. The one who didn't think having a single shared account was the security issue to be worried about.
If you have client server, and you have a shared account, don't even bother auditing, you'll never figure out who did what.
Use separate login/passwords for every client, problem solved. Use a single account whereby everyone knows the credentials - don't even bother with security, it would all be fake anyway.
auditing
A reader, October 12, 2011 - 11:14 pm UTC
<<If you have client server, and you have a shared account, don't even bother auditing, you'll never figure out who did what. >>
You told me before that if the client application pushes the userid to the server, you can. If the users log in using application accounts, you can call dbms_application_info.set_client_info(:user_id) immediately after the user successfully logs into the application.
I am not sure why I can't. I use SQL*Plus on my Windows PC.
I monitor the V$SESSION view. I can see my terminal id, the name of the client application, IP address, O/S info, etc.
The client does not even push it in this case. Oracle seems to collect it.
I did use this once to capture some environment variables when a user logs in from the web or client/server:
CREATE OR REPLACE PROCEDURE test IS
lvIPAddress VARCHAR2(32);
lvUser VARCHAR2(30);
BEGIN
IF owa.num_cgi_vars > 0 THEN
lvIPAddress := owa_util.get_cgi_env(param_name => 'REMOTE_ADDR');
ELSE
lvIPAddress := SYS_CONTEXT('USERENV',
'IP_ADDRESS');
END IF;
lvUser := SYS_CONTEXT('USERENV', 'SESSION_USER');
IF owa.num_cgi_vars > 0 THEN
htp.p(lvIPAddress);
htp.p(lvUser);
ELSE
dbms_output.put_line(lvIPAddress);
dbms_output.put_line(lvUser);
END IF;
END;
/
October 13, 2011 - 7:27 am UTC
I was always thinking "three tier"
client server - all bets off, I have the userid/password to the database, I can use any tool I want to do whatever I want.
and you won't
know
who
i
am
Sam - you keep using htp.p examples, when talking about client/server. htp.p implies three tier with a web server. client server it ain't
auditing
A reader, October 13, 2011 - 8:42 am UTC
Tom:
Good morning.
I have both applications: two tier(client/server) and 3 tier (web).
I think you mean that a user is smart enough to use a binary editor to figure out the password in the application, or to turn SQL trace ON and capture it from the trace file on his PC.
Yes, that can be done (though not by a normal user), but I think you can still know who hacked into the box and did what, using the machine id.
My point: when I log in to SQL*Plus from my Windows machine, I see this in V$SESSION. Oracle already knows my machine name, OS name, etc. If I go to another machine and use the same db userid/password, it will show the different machine name, OS user name, etc.
PID SID SER# LOGON_TIME BOX USERNAME STATUS OS_USER PROGRAM
--------- ----- ----- -------------------- ------------ ---------- -------- ------------ -----------
10739800 13 3843 13-oct-2011 09:16:07 sun19 EMCADMIN INACTIVE oraS10AS oracle@sun19
5161050 25 1438 13-oct-2011 09:18:08 EMC\LMS28A1 EMCADMIN ACTIVE emclaptop sqlplusw.exe
If this data can be tied to the audit records in the Flashback Data Archive for the table, then we meet the requirement of knowing who updated or deleted the record using the client application that uses one database account.
Of course it would be better to change this setup to one db account per user, but there would be some political issues with account management. It is much easier now to do it in the application rather than letting the DBA do it for you. I think this is probably why most client applications use application accounts.
October 13, 2011 - 10:49 am UTC
Sam, you are in love with security via obscurity, we've had that discussion lots of time.
do whatever you like.
FDA
A reader, February 01, 2012 - 10:53 am UTC
Tom:
just a simple question.
I am using FDA to audit records for an 11g mod_plsql web application.
Since this is a 3-tier web app, it uses a shared database account and individual application accounts (custom table).
FDA will track all data updates & deletes for me for a given table.
But how do I know which user did the update or delete, since FDA only sees one database user?
Do I need to update every webpage to include dbms_session.set_identifier, to send the application userid to the standard audit trail and link that somehow to the record captured by FDA,
or
I can tell using the FDA row since the production row will have the update userid for the new row. However, I think FDA copies the old ROW.
Any ideas?
February 01, 2012 - 11:16 am UTC
Sam
the FDA doesn't regulate Oracle - it does food and drugs as far as I know.
FGA might be what you mean perhaps? Fine Grained Auditing?
You sort of need to have the middle tier tell us WHO is running the application - yes. If you set the identifier using dbms_session, it'll appear in the client_id column of the FGA audit trail view.
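For example, once per request in the middle tier (the value 'mike' being whatever the application knows the end user as):
begin
  -- tell the database who the application-level user really is
  dbms_session.set_identifier( 'mike' );
end;
/
-- visible for the rest of the session:
select sys_context('userenv','client_identifier') from dual;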
I can tell using the FDA row since the production row will have the update userid for the new row. However, I think FDA copies the old ROW.
i have no idea what you might have been trying to say there.
audit
A reader, February 01, 2012 - 11:49 am UTC
Tom:
No, not FGA.
FDA = Flashback Data Archive (as spelled out in the Oracle docs)
http://www.oracle.com/technetwork/issue-archive/2008/08-jul/flashback-data-archive-whitepaper-129145.pdf
It is part of the total recall option.
There is no middle tier other than Mod_plsql gateway. Everything is stored procedure in the database.
so if I enabled the flashback data archive on the EMP table, and the mod_plsql application uses a database account (empadmin) while user ids like "mike", "tom", "sheila" log in to update/delete rows, would the audit archive tell me that mike updated this row, tom deleted that row, etc.?
If I understand correctly, the flashback data archive is totally separate from the standard audit trail. So even if I set dbms_session.set_identifier in every web page to push the application userid to the standard audit trail, I am not sure I can link the audit record in FDA to the standard audit trail to know which user deleted the record.
The permanent archive will have a copy of the deleted ROW, but that will not have the userid of who deleted the row.
February 01, 2012 - 4:47 pm UTC
Sam - as a *frequent* user of this site you know exactly and precisely why I hate acronyms.
SPELL IT OUT, always and forever SPELL IT OUT. First time, every time, SPELL IT OUT. Even if you see it already on the page - SPELL IT THE HECK OUT.
I am not a mind reader.
You are not a mind reader.
They are not mind readers.
Use acronyms WITHOUT SPELLING THEM OUT - and forget it, someone is going to miss the point or get it wrong. Just stop it (everyone - all of you - all of us - every single one of us)
*JUST LIKE THE DOCUMENTATION DOES* eg: search for FDA and what do you find?
The DBMS_FLASHBACK_ARCHIVE package contains two simple procedures for disassociation and reassociation of a Flashback Data Archive (FDA) enabled table from/with its underlying FDA respectively.
......
The DBMS_FLASHBACK_ARCHIVE package contains two simple procedures for disassociation and reassociation of a Flashback Data Archive (FDA) enabled table from/with its underlying FDA respectively.
.....
Contains procedures for disassociation and reassociation of a Flashback Data Archive (FDA) enabled table from/with its underlying FDA respectively.
....
The DBMS_FLASHBACK_ARCHIVE package contains two simple procedures for disassociation and reassociation of a Flashback Data Archive (FDA) enabled table from/with its underlying FDA respectively.
...see how it is *never* used without defining it in the context of the page you are on????
There is no middle tier other than Mod_plsql gateway. Everything is stored procedure in the database.
mod_plsql *IS YOUR MIDDLE TIER*. Your stored procedures *ARE YOUR MIDDLE TIER CODE*.
Just the same as java running in an application server would be. It (plsql) is your middle tier code.
read this
http://technology.amis.nl/blog/10911/database-transaction-recorder-adding-who-to-when-and-what-to-make-flashback-take-over-from-journalling-tables
and never use an acronym in any writing, about anything, ever - without defining it first.
transaction id
A reader, April 25, 2012 - 4:47 pm UTC
Tom:
if a table T1 has an after update trigger to audit updates and deletes, and then transaction TRANS_01 performs 3 updates on that table, is there a way to log the same transaction id with the 3 records in the audit log, so we know the audit happened in one transaction?
Would you select the internal Oracle transaction number from a view in the trigger, or use an environment variable, or how would you do this?
April 25, 2012 - 5:35 pm UTC
see dbms_transaction.local_transaction_id
use the return value from that function.
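A minimal sketch (audit_tbl and its txn_id column are illustrative):
create or replace trigger aud#t1
after update or delete on t1
for each row
declare
  l_txn_id varchar2(48);
begin
  -- same value for every row changed in one transaction
  l_txn_id := dbms_transaction.local_transaction_id;
  insert into audit_tbl( txn_id, ts, who )
  values ( l_txn_id, sysdate, user );
end;
/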
trigger
A reader, April 26, 2012 - 10:23 am UTC
Tom:
Yes, forgot about that.
Would you store this dotted-decimal string in a VARCHAR2(48) column? I doubt a number field will work.
DBMS_TRANSACTION.STEP_ID seems to be a unique number. Would you use that, assuming it is always unique?
April 26, 2012 - 11:01 am UTC
I would use the transaction id as the unique thing.
step id would be unique within a transaction and monotonically increasing for sorting purposes. the wording for the function only ensures uniqueness within a transaction (no matter what else we observe)
Audit trigger and query
Tony Fernandez, May 10, 2012 - 12:53 pm UTC
Dear Tom,
Hopefully the question below is explained correctly.
An audit table "AUDIT_TABLE" is listed below with data.
This table is populated by a trigger from another "source" table not listed in this question.
-- Audit table
CREATE TABLE audit_table
(
ID NUMBER NOT NULL,
DISTRIBUTOR_ID VARCHAR2(13 BYTE) NOT NULL,
DIST_BRAND_ID VARCHAR2(25 BYTE) NOT NULL,
DIST_BRAND_NAME VARCHAR2(30 BYTE) NOT NULL,
ACTION_CODE VARCHAR2(1 BYTE) NOT NULL,
PROCESSED_DATE DATE
);
Insert into audit_table
(ID, DISTRIBUTOR_ID, DIST_BRAND_ID, DIST_BRAND_NAME, ACTION_CODE, PROCESSED_DATE)
Values
(26, 'DO-2001-00470', 'NOT PROVIDED', 'Tonys test 5', 'I',
TO_DATE('05/09/2012 16:19:53', 'MM/DD/YYYY HH24:MI:SS'));
Insert into audit_table
(ID, DISTRIBUTOR_ID, DIST_BRAND_ID, DIST_BRAND_NAME, ACTION_CODE, PROCESSED_DATE)
Values
(3, 'DO-2001-00729', 'Brand-123', 'test for brand 123', 'D',
TO_DATE('05/09/2012 16:34:12', 'MM/DD/YYYY HH24:MI:SS'));
Insert into audit_table
(ID, DISTRIBUTOR_ID, DIST_BRAND_ID, DIST_BRAND_NAME, ACTION_CODE)
Values
(3, 'DO-2001-00729', 'Brand-123', 'test for brand 123', 'I');
Insert into audit_table
(ID, DISTRIBUTOR_ID, DIST_BRAND_ID, DIST_BRAND_NAME, ACTION_CODE)
Values
(4, 'DO-2001-00729', 'Brand-123', 'test for brand 123', 'I');
Insert into audit_table
(ID, DISTRIBUTOR_ID, DIST_BRAND_ID, DIST_BRAND_NAME, ACTION_CODE)
Values
(4, 'DO-2001-00729', 'Brand-123', 'test for brand 123', 'D');
COMMIT;
select * from audit_table;
13:19:08 GPO @ devvol >select * from audit_table;
ID DISTRIBUTOR_I DIST_BRAND_ID DIST_BRAND_NAME A PROCESSED
-- ------------- -------------- ------------------- - ---------
1 DO-2001-00470 NOT PROVIDED Tonys test 5 I 09-MAY-12
2 DO-2001-00729 Brand-123 test for brand 123 D 09-MAY-12
3 DO-2001-00729 Brand-123 test for brand 123 I
4 DO-2001-00729 Brand-123 test for brand 123 I
5 DO-2001-00729 Brand-123 test for brand 123 D
13:19:37 GPO @ devvol >
We have a "sweep" process that goes into a large table ( millions of rows ), if we have an ACTION_CODE='I' then need to tag
corresponding rows, and conversely with 'D' to remove the tags if tagged.
The challenge is locate "pairs" of rows not yet processed ( PROCESSED_DATE = null ) with one being 'I' and the second 'D' or 'D' and later 'I'.
If the pair or pairs exist, just stamp them as processed without going to the large table and tag since it would be redundant.
Remove tags to later add them or add them to later remove them.
We would like to have a query that returns something like this:
ID DISTRIBUTOR_I DIST_BRAND_ID DIST_BRAND_NAME A PROCESSED
-- ------------- -------------- ------------------- - ---------
4 DO-2001-00729 Brand-123 test for brand 123 I
5 DO-2001-00729 Brand-123 test for brand 123 D
or this:
ID DISTRIBUTOR_I DIST_BRAND_ID DIST_BRAND_NAME A PROCESSED
-- ------------- -------------- ------------------- - ---------
3 DO-2001-00729 Brand-123 test for brand 123 I
5 DO-2001-00729 Brand-123 test for brand 123 D
So the process can go to row IDs 4 and 5 ( or 3 and 5 in second result set ) and time stamp them without going to the large table,
since it would be unnecessary.
The rows are identified by DISTRIBUTOR_ID and DIST_BRAND_ID being the same and unique.
We have tried a few queries with INTERSECT and analytic group-bys, but still no luck.
Your help in creating the query is appreciated,
Audit trigger and query
Tony Fernandez, May 14, 2012 - 2:26 pm UTC
Tom,
Please disregard the question above, as the solution was created by adding a pk_id on the source table. So when a row is deleted, an audit log row is also created with the same columns and repeating pk_id. So in the entire life cycle of a row, when inserted and later deleted, only 2 rows (and 2 only) will be created in the audit: with "I" for insert and "D" for delete. Thus making the rows to delete easier to locate.
Kind regards,
capturing commit time of record in a column of the table that is inserted or updated
Sean, May 16, 2012 - 10:51 am UTC
Hi
for auditing purposes, we usually put a before insert/update trigger in which we do something like
:new.insert_timestamp := sysdate;
this records the time the insert statement was called and not the time commit is called. commit can happen way after the insert statement.
Is there any way to capture the commit time of a record in the record itself? Is there any ON COMMIT trigger type concept?
Thanks
May 17, 2012 - 2:43 am UTC
even if there were an on commit trigger - it would not really help you - you'd have to go back and update all of the rows you updated with the timestamp - but by then - you'd have spent time doing that and it wouldn't be the commit timestamp anymore.
How about doing your own "commit" routine and have it insert the transaction id from the dbms_transaction package and systimestamp into a table - and then commit. You'd tag your records with the dbms_transaction id. Then you can just join.
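A minimal sketch of that idea (txn_log and do_commit are made-up names):
create table txn_log (
  txn_id      varchar2(48) primary key,
  commit_time timestamp );

create or replace procedure do_commit as
  l_txn_id varchar2(48);
begin
  -- null if no transaction is currently open
  l_txn_id := dbms_transaction.local_transaction_id;
  if l_txn_id is not null then
    insert into txn_log values ( l_txn_id, systimestamp );
  end if;
  commit;
end;
/
-- rows tagged with their transaction id can then be joined to txn_log
-- for a very close approximation of their commit time.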
DDL triggers
Deepa, May 23, 2012 - 6:22 am UTC
We are facing a scenario where we have 100 tables and we have created 100 history tables.
Now the situation is that we can change the definition of any main table at any time - e.g. we can add a new column or modify an existing column.
Our requirement is that if we alter any of these 100 tables, the same change should apply to the history table automatically.
for example
we have table
create table emp(
name varchar2(20),
empno number)
we have the same history table
create table emp_history(
name varchar2(20),
empno number)
now we changed the emp table
alter table emp add(dept varchar2(30));
alter table emp modify(name varchar2(40));
We always have the history table named with the suffix _history added to the table name.
Now how can this be done without manually making these changes in the history table? The main table and history table should always be in sync.
Regards
May 23, 2012 - 10:21 am UTC
change your requirement. seriously.
Making a schema change is a serious thing. It does not seem to me to be onerous to you to make the appropriate changes to all of the affected tables.
You'd have to use a ddl trigger, use the event functions to determine what was being done
http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/triggers.htm#LNPLS798
and then schedule a job via dbms_job to execute shortly AFTER the DDL change occurred.
It cannot be done "concurrently"; it has to be done sequentially - and frankly - it would be a heck of a lot clearer if you did the DDL straight out in your upgrade script, checked into your source code control system.
A reader, May 24, 2012 - 12:02 am UTC
Thanks for your reply.
Can you please elaborate more on events and how to implement this for the above requirement, as I have never worked with events until now.
Regards
May 24, 2012 - 8:48 am UTC
to elaborate:
I think this is a horrifically bad idea, I do not support it, I do not think you even want to consider it.
If you do - all of the event functions and what they return are fully documented, I gave you a link to it all. You'll need to read the function descriptions so you know what is available to you and write code.
I do not think you should even consider this, this is a horribly bad idea. You are trying to do 'magic', magic is something we IT people should be avoiding like the plague.
DDL triggers
Deepa, May 30, 2012 - 7:07 am UTC
Thanks very much .
But I still think there should be some easy solution for this in Oracle.
Regards
May 31, 2012 - 12:03 am UTC
A reader, May 30, 2012 - 9:27 am UTC
@Deepa ...
As with most other features , this is available only in Enterprise Edition.
May 30, 2012 - 9:41 am UTC
most features are actually available with SE, many more cool "big" features are available with EE, the total recall option - necessary for flashback data archive - is an option to EE.
A reader, May 31, 2012 - 12:00 am UTC
Thanks..
This actually satisfied my curiosity.
Regards
flashback
A reader, June 15, 2012 - 8:46 pm UTC
Tom:
Is there a retention limit in 11gR2 for flashback query and versions (data in undo)?
I was told it is years, but I recall you said it is 5 days (probably in 9i only).
If the retention is years, then doesn't this defeat the purpose of total recall, since the data can be copied to audit tables?
June 16, 2012 - 4:26 am UTC
for flashback query, for undo based flashback query, the theoretical limit is five days into the past in all releases. I say theoretical because practical limits will make it less than five days (do you have five days of undo retained on disk in your systems??)
for a flashback data archive - a feature available with the total recall option to the enterprise edition of the database - the limit is limitless. There, the flashback capability is not based on undo, it is based on data we store (table by table, not database wide) in the database itself. It does not have the five day limit.
flashback
A reader, June 16, 2012 - 8:14 am UTC
Tom:
I was arguing with some DBAs about this, who said there is no limit other than the UNDO_RETENTION parameter.
Let us say I have 1 TB of disk for undo allocation. Does that mean I can always go back months and years for a flashback query?
I was reading this
http://docs.oracle.com/cd/B28359_01/server.111/b28310/undo002.htm#BJFFIACJ
It seems there is a GUARANTEE option, but it sounds like Oracle overwrites the undo blocks as needed even if there is disk space available.
June 16, 2012 - 11:19 am UTC
Sam,
working link is:
http://docs.oracle.com/cd/B28359_01/server.111/b28310/undo002.htm#ADMIN10180
The ability for us to translate a date into an SCN is gated to be about five days (of uptime) in the past. In theory you could go further using direct SCNs; however, it is beyond the realm of practical. Five days is beyond the realm of practical. More than five days is just "not even close to reasonable".
In order to flashback query five days ago - we need to roll back every single change that happened in the last five days to the affected blocks. ONE by ONE, one after the other, in order - for every block - every time we needed that block. That is, the further back in time you flashback - the longer and longer and longer it will take. I don't care how much undo you save, you don't have enough time to do it. You'd have to roll back every single block you touch - modification by modification (not even transaction by transaction - MODIFICATION by MODIFICATION). Think about this for a minute. The limit is theoretical because practically speaking *it isn't going to happen*. Wouldn't you love to run a query for two days only to get the ora-1555.
Flashback data archive (total recall option) fixes this obvious scalability issue by rolling back the change ONCE shortly after you commit and logging the before image of the modified row in the flashback data archive. To flashback query using total recall allows you to flashback 5 years ago as fast as 5 minutes ago.
Using undo, it *just would not happen*
tell you what, why don't you go ahead and prototype it out and let us know how it turns out?
1TB is a pretty trivial amount of disk, you generate more undo than you realize.
five days is not practical for flashback query without the flashback data archive.
How did you get the perception that guarantee would still overwrite???? It seemed pretty clear to me that it would not overwrite:
To guarantee the success of long-running queries or Oracle Flashback operations, you can enable retention guarantee. If retention guarantee is enabled, the specified minimum undo retention is guaranteed; the database never overwrites unexpired undo data even if it means that transactions fail due to lack of space in the undo tablespace. If retention guarantee is not enabled, the database can overwrite unexpired undo when space is low, thus lowering the undo retention for the system. This option is disabled by default.
identifying the changed column names only
Stuart, July 02, 2013 - 1:51 am UTC
Hi Tom,
I have a 10 million row table with ~160 columns.
I want to populate an audit table of columns that change when DML against the table is done.
If someone updates, say, 8 columns affecting 200k rows, I only want to write 8 records - owner, table_name, and column_name - to an audit table.
Given that the triggers are processed 'after|before every row', is there a way to do this without getting the table populated with 8x200k entries?
July 02, 2013 - 4:52 pm UTC
sorry - i cannot parse this sentence:
Given that the triggers are processed 'after|before every row', is there a way to do this without getting the table populated with 8x200k entries?
I thought you wanted the eight modified columns - for each of the 200k rows?
identifying the changed column names only
Stuart, July 03, 2013 - 8:26 pm UTC
Yes, apologies for the way the question was formulated.
The trigger is fired after|before each row, but I just want to know the distinct column(s) that had DML changes.
July 16, 2013 - 12:45 pm UTC
that was the goal of the original post? isn't that already answered way up there at the top of the page?
Option to Stuart
J. Laurindo Chiappa, July 04, 2013 - 5:04 pm UTC
Hi, Stuart :
pardon me for jumping in, but let me try to help: first, check the Oracle docs and you will find that DML triggers CAN fire (after/before) for each row OR not; it's a choice.... We need to specify FOR EACH ROW if (and only if) we need to use the values in each record via :OLD and/or :NEW - if that is not needed (it appears to be your case: you don't want to write individual record values to the audit, just an audit message) you DO NOT NEED to use FOR EACH ROW, and thus the trigger will fire just ONCE....
In second place, to make a DML trigger fire only when a given column is updated, you will use the UPDATE OF syntax - as you say there will be 8 columns to audit (let's say, columns A, B, C, D, E, F, G, H) you could write 8 triggers, in this way:
create trigger TRG_UPD_A before UPDATE OF A ON tablename
BEGIN
insert into audittable values (...., 'Column A updated!!');
END;
/
create trigger TRG_UPD_B before UPDATE OF B ON tablename
BEGIN
insert into audittable values (...., 'Column B updated!!');
END;
/
create trigger TRG_UPD_C before UPDATE OF C ON tablename
BEGIN
insert into audittable values (...., 'Column C updated!!');
END;
/
create trigger TRG_UPD_D before UPDATE OF D ON tablename
BEGIN
insert into audittable values (...., 'Column D updated!!');
END;
/
create trigger TRG_UPD_E before UPDATE OF E ON tablename
BEGIN
insert into audittable values (...., 'Column E updated!!');
END;
/
create trigger TRG_UPD_F before UPDATE OF F ON tablename
BEGIN
insert into audittable values (...., 'Column F updated!!');
END;
/
create trigger TRG_UPD_G before UPDATE OF G ON tablename
BEGIN
insert into audittable values (...., 'Column G updated!!');
END;
/
create trigger TRG_UPD_H before UPDATE OF H ON tablename
BEGIN
insert into audittable values (...., 'Column H updated!!');
END;
/
okdoc? With this technique you will be able to do what you requested, i.e. make an UPDATE trigger fire just once for the UPDATE command, and fire only if certain columns are updated...
Needless to say, it would be so much better if you could use some built-in, such as the AUDIT command (triggers HAVE side effects, and can cause overheads/negative performance issues), but here is the answer to your question...
best regards,
J. Laurindo Chiappa
identifying the changed column names only
Stuart, July 04, 2013 - 9:32 pm UTC
Thanks! I have only ever dealt with row level triggers,...
I really don't want to be creating ~160 triggers for this table though, so it probably isn't that practical.
If we take a small example:
create table patient
( patient_id number,
name varchar2(10),
address varchar2(20),
city varchar2(10) );
insert into patient values ( 1, 'joe' , '1 long rd' , 'new york' );
insert into patient values ( 2, 'fred' , '22 ebay cres' , 'chicago' );
insert into patient values ( 3, 'amber' , '4 broadway' , 'toronto' );
commit;
create table column_updates_audit
(
table_name varchar2(30),
column_name varchar2(30),
date_updated date
);
I can specify many columns in the 'create trigger' clause, but how do I identify the changed columns within this trigger?
(:OLD and :NEW can't be used at this level).
create trigger patient_audit_trg before update of patient_id, name, address, city on patient
begin
?????????
end;
/
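(As an aside, one possible sketch: a statement-level trigger can test the UPDATING('COLUMN_NAME') predicate, which reports whether a column appears in the SET clause of the triggering UPDATE - note that is not quite the same as the value actually changing:)
create or replace trigger patient_audit_trg
before update on patient   -- no FOR EACH ROW: fires once per statement
declare
  procedure log_col( p_col in varchar2 ) is
  begin
    insert into column_updates_audit values ( 'PATIENT', p_col, sysdate );
  end;
begin
  if updating('PATIENT_ID') then log_col('PATIENT_ID'); end if;
  if updating('NAME')       then log_col('NAME');       end if;
  if updating('ADDRESS')    then log_col('ADDRESS');    end if;
  if updating('CITY')       then log_col('CITY');       end if;
end;
/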
July 16, 2013 - 1:24 pm UTC
see original answer.
Identifying changed columns - trigger options
J. Laurindo Chiappa, July 05, 2013 - 8:52 pm UTC
Via DML/table-level triggers your options really are limited (and by the way, this is a major reason for my suggestion of using OTHER audit options, such as the AUDIT command, FGA - Fine Grained Auditing - etc.)....
Thinking again about DML/table-level triggers: yes, you can specify a list of columns to fire the trigger, but you **don't know** which of the columns present in the list caused the firing, i.e. which columns were changed... Thus, your options would be:
a. create one non-row-level trigger for each column: the manual work involved could be reduced via SQL*Plus scripting such as
select 'CREATE TRIGGER TRG_UPD_' || column_name
|| ' BEFORE UPDATE OF ' || column_name || ' ON ' || table_name
|| ' BEGIN ...commands... END;'
FROM DBA_TAB_COLUMNS
WHERE OWNER = 'tableowner'
AND TABLE_NAME = 'tablename'
AND COLUMN_NAME IN ('listofthedesiredcolumns');
Due to the fact that not all the columns will always be changed, not all 160 triggers will fire on each UPDATE command, so the overhead/performance cost (always present when we speak about triggers) would be less than option b below, I think.
OR
b. in a rowlevel/for each row trigger, ask for changed columns, this way :
create trigger trg_audit before update of column1, column2, .... on tablename for each row
BEGIN
if flagindicatingfirstrow is FALSE then
--
if :new.column1 <> :old.column1 then
INSERT into audittable Values (...., 'column1 updated!!');
elsif :new.column2 <> :old.column2 then
INSERT into audittable Values (...., 'column2 updated!!');
elsif :new.column3 <> :old.column3 then
INSERT into audittable Values (...., 'column3 updated!!');
....
end if;
flagindicatingfirstrow := TRUE;
end if;
END;
/
flagindicatingfirstrow could be a global variable, a context, some control table, whatever... It will be YOUR work to write the code needed to control multi-user access and the like....
The point here is: inside DML triggers we don't know which columns were changed; we ONLY know individual record values inside a FOR EACH ROW trigger, and FOR EACH ROW triggers ALWAYS fire for each row - what you can do is "mark"/flag/control somewhere/somehow in your code to make the audit logic run only once, at the first record execution of the trigger...
My final words: do yourself a favor and please, please look carefully at the database audit built-ins, avoiding the performance hit and the complex code-controls expected from DML triggers....
best regards,
J. Laurindo Chiappa
Identifying changed columns - trigger options
Stuart, July 06, 2013 - 9:25 am UTC
Thanks very much. Your response confirmed my thoughts on this matter.
Who&When&How audit
Alexander, August 01, 2013 - 1:00 pm UTC
Hi Tom!
Please clarify some aspects.
I need to know Who (username), When (date & time) and How (operation and values) makes changes.
The number and size of those tables is big (even huge). And the depth of stored history - 1-3 years.
As far as I understood there are a few methods of auditing:
1) FGA - Fine Grained Auditing
Cons: doesn't take old and new values into account
2) FDA - Flashback Data Archive
Cons: - can't find references about storing user-related information (uid or username). (Might it be possible in conjunction with the audit trail?)
- stores SCNs instead of date/time information
3) Triggers
Cons: large overhead
Am I right with the conclusion that the only tool which meets all my requirements is triggers?
And one more question. With respect to the huge sizes of the history tables, are there any opportunities to store them in a remote db (dblink, export/import) to exclude them from the backup strategy?
August 02, 2013 - 7:17 pm UTC
flashback data archive, plus a pair of columns in the table typically named last_updated_by, last_updated_on, get you everything you need to know.
You'll know who created a row, that is in the production table.
Using a versions query, you can determine who modified a row (the modifier is on the "most current row"; modified values can be seen using lag() to get the prior row).
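For example, a sketch of such a versions query (empno 7788 is a placeholder):
select empno, sal,
       last_updated_by   as modified_by,
       lag(sal) over ( partition by empno
                       order by versions_startscn ) as previous_sal
  from emp versions between scn minvalue and maxvalue
 where empno = 7788;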
the only problem would be a delete and recording who did the delete.
so, you can use flashback data archive for most of the work, just always set last_updated_by, last_updated_on. I don't like triggers - so you can encapsulate this logic in a package.
Then you can either turn a delete into an insert (into a log table, so you know the user) or an update+delete - the update would just set the last_updated_by, last_updated_on fields. Again, in a packaged API (see the sketch below). I'd probably go with the update+delete in order to keep all of the archive data in one place.
flashback data archive will be a feature in 11.2.0.4 (it already is in 12.1 and above, you don't need advanced compression to use it)
http://docs.oracle.com/cd/E16655_01/license.121/e17614/editions.htm#sthref68
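A rough sketch of such a packaged API (emp_api is a made-up name; last_updated_by/last_updated_on are the columns described above):
create or replace package emp_api as
  procedure del( p_empno in emp.empno%type );
end;
/
create or replace package body emp_api as
  procedure del( p_empno in emp.empno%type ) is
  begin
    -- stamp the row first, so the flashback data archive captures WHO...
    update emp
       set last_updated_by = sys_context('userenv','client_identifier'),
           last_updated_on = sysdate
     where empno = p_empno;
    -- ...then delete it; both versions end up in the archive
    delete from emp
     where empno = p_empno;
  end;
end;
/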
Thanks Tom !
A reader, August 30, 2013 - 3:09 am UTC
Thanks Tom, this helped me to finish my task in no time. Your solutions are best always. Thanks !!
Dynamic values in Trigger
Saurabh, September 25, 2013 - 4:56 am UTC
Hi Tom ,
I am creating a trigger that calls a function to dynamically generate the insert/update script using :NEW and :OLD. But when using EXECUTE IMMEDIATE to execute the script,
it's treating those as bind variables and throwing the error "01008. 00000 - "not all variables bound"".
The purpose of the function is to generate the values to be inserted/updated by querying other tables, hence the need to dynamically generate the script.
Please let me know any other workaround you can suggest.
FDA
sam, January 22, 2015 - 7:42 pm UTC
Tom,
Can you show how to do this in FDA (your comment of 8/2/2013 above), so that FDA can know who deleted the record?
<<<
Then you can either turn a delete into an insert (into a log table, so you know the user) or an update+delete - the update would just set the last_updated_by, last_update_on fields. Again, in a packaged API. I'd probably go with the update+delete in order to keep all of the archive data in one place.
>>>>
I have an existing large application. It would be a huge job to go back and change EVERY single DELETE into an update/delete. It would also be a monster job to write a trigger on every table to insert the deleted record into another table. We used FDA to avoid all that trigger work and maintenance.
Isn't there a way to add a column to the FDA history table and log the userid or other columns?
I think that is possible in 12c but I am using 11.2.0.3.
FDA is a simple and elegant way to do it
Bala, June 09, 2015 - 7:02 pm UTC
Tom has mentioned that you just add last_updated_by, last_updated_on columns to the actual table for tracking who has changed/deleted.
That's all.
ddl impact?
Bob, April 12, 2016 - 8:07 pm UTC
I'm sick of the shadow table/trigger approach. I thought that FDA might be just what I need, but then I saw this caveat. So, does this mean if I have a VARCHAR2(10) that I need to change to a VARCHAR2(15), I can't do so without losing the history? Am I misunderstanding this limitation?
Remember that DDLs that alter the structure of a table (such as drop/modify column, move table, drop partition, truncate table/partition, and add constraint) invalidate any existing undo data for the table. If you try to retrieve data from a time before such a DDL executed, error ORA-01466 occurs. DDL operations that alter the storage attributes of a table (such as PCTFREE, INITRANS, and MAXTRANS) do not invalidate undo data.
April 13, 2016 - 1:26 am UTC
With each version of the database, we relax the restrictions on FDA more and more, so it depends on what version you are on.
But in those cases, where the change you want to do is not implicitly handled by FDA, you temporarily detach your table, ie
1) DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA
2)
- do the structural change on your table
- do the structural change on the FDA partner so they are in sync
3) DBMS_FLASHBACK_ARCHIVE.REASSOCIATE_FBA
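For example (SCOTT.EMP as a placeholder):
exec dbms_flashback_archive.disassociate_fba( 'SCOTT', 'EMP' )
alter table scott.emp modify ( col1 varchar2(15) );
-- make the matching change to the history table, then:
exec dbms_flashback_archive.reassociate_fba( 'SCOTT', 'EMP' )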
Hope this helps
Very HelpFul
Somdutt Harihar, February 15, 2018 - 11:59 am UTC
Hi,
I have created the trigger:
CREATE OR REPLACE TRIGGER AUDITEMP1
AFTER UPDATE ON EMP1
FOR EACH ROW
DECLARE
--ABC VARCHAR2(100);
BEGIN
SELECT ' audit_pkg.check_val( ''emp1'', ''' || COLUMN_NAME || ''', ' ||
':new.' || COLUMN_NAME || ', :old.' || COLUMN_NAME || ');'
-- into abc
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = UPPER('emp1');
END;
I am getting the error:
Error(5,3): PLS-00428: an INTO clause is expected in this SELECT statement ;
When I write the INTO clause in the above trigger, it also shows an error.
I didn't understand how to compile this trigger successfully.
February 15, 2018 - 2:26 pm UTC
When you have a select statement in PL/SQL, you need a target variable to store the result!
Provided your query returns at most one row, by selecting into the variable:
select col
into var
from ....
Audit table
Somdutt Harihar, February 16, 2018 - 3:51 am UTC
Hi Tom,
you are provided the code
create or replace trigger aud#dept
after update on dept
for each row
begin
audit_pkg.check_val( 'dept', 'DEPTNO', :new.DEPTNO, :old.DEPTNO);
audit_pkg.check_val( 'dept', 'DNAME', :new.DNAME, :old.DNAME);
audit_pkg.check_val( 'dept', 'LOC', :new.LOC, :old.LOC);
end;
the above code is working,
but for this we have to define the columns for every table inside the above trigger.
For dynamic DML into audit_tbl
I used your code:
create or replace trigger aud#&1
after update on &1
for each row
begin
select ' audit_pkg.check_val( ''&1'', ''' || column_name ||
''', ' || ':new.' || column_name || ', :old.' ||
column_name || ');'
from user_tab_columns where table_name = upper('&1')
end;
but how is the above code going to function?
Please explain.
February 17, 2018 - 1:54 am UTC
No - the objective here is to use SQL to create the trigger *ddl* automatically. You can then use this to create your audit trigger, eg
SQL> connect hr/hr
Connected.
SQL> col sortkey nopri
SQL> col tn nopri
SQL> select table_name tn, -10 sortkey, 'create or replace trigger aud#'||table_name ddl from user_tables
2 union all
3 select table_name tn, -9, 'after update on '||table_name||' for each row ' from user_tables
4 union all
5 select table_name tn, -8, 'begin' from user_tables
6 union all
7 select table_name tn, column_id sortkey, ' audit_pkg.check_val( '''||table_name||''', ''' || column_name || ''', ' || ':new.' || column_name || ', :old.' || column_name || ');'
8 from user_tab_columns
9 union all
10 select table_name tn, 9998, 'end;' from user_tables
11 union all
12 select table_name tn, 9999, '/' from user_tables
13 order by 1,2;
DDL
----------------------------------------------------------------------------------------------------------------------------------
create or replace trigger aud#COUNTRIES
after update on COUNTRIES for each row
begin
audit_pkg.check_val( 'COUNTRIES', 'COUNTRY_ID', :new.COUNTRY_ID, :old.COUNTRY_ID);
audit_pkg.check_val( 'COUNTRIES', 'COUNTRY_NAME', :new.COUNTRY_NAME, :old.COUNTRY_NAME);
audit_pkg.check_val( 'COUNTRIES', 'REGION_ID', :new.REGION_ID, :old.REGION_ID);
end;
/
create or replace trigger aud#DEPARTMENTS
after update on DEPARTMENTS for each row
begin
audit_pkg.check_val( 'DEPARTMENTS', 'DEPARTMENT_ID', :new.DEPARTMENT_ID, :old.DEPARTMENT_ID);
audit_pkg.check_val( 'DEPARTMENTS', 'DEPARTMENT_NAME', :new.DEPARTMENT_NAME, :old.DEPARTMENT_NAME);
audit_pkg.check_val( 'DEPARTMENTS', 'MANAGER_ID', :new.MANAGER_ID, :old.MANAGER_ID);
audit_pkg.check_val( 'DEPARTMENTS', 'LOCATION_ID', :new.LOCATION_ID, :old.LOCATION_ID);
end;
/
create or replace trigger aud#EMPLOYEES
after update on EMPLOYEES for each row
begin
audit_pkg.check_val( 'EMPLOYEES', 'EMPLOYEE_ID', :new.EMPLOYEE_ID, :old.EMPLOYEE_ID);
audit_pkg.check_val( 'EMPLOYEES', 'FIRST_NAME', :new.FIRST_NAME, :old.FIRST_NAME);
audit_pkg.check_val( 'EMPLOYEES', 'LAST_NAME', :new.LAST_NAME, :old.LAST_NAME);
audit_pkg.check_val( 'EMPLOYEES', 'EMAIL', :new.EMAIL, :old.EMAIL);
audit_pkg.check_val( 'EMPLOYEES', 'PHONE_NUMBER', :new.PHONE_NUMBER, :old.PHONE_NUMBER);
audit_pkg.check_val( 'EMPLOYEES', 'HIRE_DATE', :new.HIRE_DATE, :old.HIRE_DATE);
audit_pkg.check_val( 'EMPLOYEES', 'JOB_ID', :new.JOB_ID, :old.JOB_ID);
audit_pkg.check_val( 'EMPLOYEES', 'SALARY', :new.SALARY, :old.SALARY);
audit_pkg.check_val( 'EMPLOYEES', 'COMMISSION_PCT', :new.COMMISSION_PCT, :old.COMMISSION_PCT);
audit_pkg.check_val( 'EMPLOYEES', 'MANAGER_ID', :new.MANAGER_ID, :old.MANAGER_ID);
audit_pkg.check_val( 'EMPLOYEES', 'DEPARTMENT_ID', :new.DEPARTMENT_ID, :old.DEPARTMENT_ID);
end;
/
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'EMPLOYEE_ID', :new.EMPLOYEE_ID, :old.EMPLOYEE_ID);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'JOB_ID', :new.JOB_ID, :old.JOB_ID);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'MANAGER_ID', :new.MANAGER_ID, :old.MANAGER_ID);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'DEPARTMENT_ID', :new.DEPARTMENT_ID, :old.DEPARTMENT_ID);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'LOCATION_ID', :new.LOCATION_ID, :old.LOCATION_ID);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'COUNTRY_ID', :new.COUNTRY_ID, :old.COUNTRY_ID);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'FIRST_NAME', :new.FIRST_NAME, :old.FIRST_NAME);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'LAST_NAME', :new.LAST_NAME, :old.LAST_NAME);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'SALARY', :new.SALARY, :old.SALARY);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'COMMISSION_PCT', :new.COMMISSION_PCT, :old.COMMISSION_PCT);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'DEPARTMENT_NAME', :new.DEPARTMENT_NAME, :old.DEPARTMENT_NAME);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'JOB_TITLE', :new.JOB_TITLE, :old.JOB_TITLE);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'CITY', :new.CITY, :old.CITY);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'STATE_PROVINCE', :new.STATE_PROVINCE, :old.STATE_PROVINCE);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'COUNTRY_NAME', :new.COUNTRY_NAME, :old.COUNTRY_NAME);
audit_pkg.check_val( 'EMP_DETAILS_VIEW', 'REGION_NAME', :new.REGION_NAME, :old.REGION_NAME);
create or replace trigger aud#JOBS
after update on JOBS for each row
begin
audit_pkg.check_val( 'JOBS', 'JOB_ID', :new.JOB_ID, :old.JOB_ID);
audit_pkg.check_val( 'JOBS', 'JOB_TITLE', :new.JOB_TITLE, :old.JOB_TITLE);
audit_pkg.check_val( 'JOBS', 'MIN_SALARY', :new.MIN_SALARY, :old.MIN_SALARY);
audit_pkg.check_val( 'JOBS', 'MAX_SALARY', :new.MAX_SALARY, :old.MAX_SALARY);
end;
/
create or replace trigger aud#JOB_HISTORY
after update on JOB_HISTORY for each row
begin
audit_pkg.check_val( 'JOB_HISTORY', 'EMPLOYEE_ID', :new.EMPLOYEE_ID, :old.EMPLOYEE_ID);
audit_pkg.check_val( 'JOB_HISTORY', 'START_DATE', :new.START_DATE, :old.START_DATE);
audit_pkg.check_val( 'JOB_HISTORY', 'END_DATE', :new.END_DATE, :old.END_DATE);
audit_pkg.check_val( 'JOB_HISTORY', 'JOB_ID', :new.JOB_ID, :old.JOB_ID);
audit_pkg.check_val( 'JOB_HISTORY', 'DEPARTMENT_ID', :new.DEPARTMENT_ID, :old.DEPARTMENT_ID);
end;
/
create or replace trigger aud#LOCATIONS
after update on LOCATIONS for each row
begin
audit_pkg.check_val( 'LOCATIONS', 'LOCATION_ID', :new.LOCATION_ID, :old.LOCATION_ID);
audit_pkg.check_val( 'LOCATIONS', 'STREET_ADDRESS', :new.STREET_ADDRESS, :old.STREET_ADDRESS);
audit_pkg.check_val( 'LOCATIONS', 'POSTAL_CODE', :new.POSTAL_CODE, :old.POSTAL_CODE);
audit_pkg.check_val( 'LOCATIONS', 'CITY', :new.CITY, :old.CITY);
audit_pkg.check_val( 'LOCATIONS', 'STATE_PROVINCE', :new.STATE_PROVINCE, :old.STATE_PROVINCE);
audit_pkg.check_val( 'LOCATIONS', 'COUNTRY_ID', :new.COUNTRY_ID, :old.COUNTRY_ID);
end;
/
create or replace trigger aud#REGIONS
after update on REGIONS for each row
begin
audit_pkg.check_val( 'REGIONS', 'REGION_ID', :new.REGION_ID, :old.REGION_ID);
audit_pkg.check_val( 'REGIONS', 'REGION_NAME', :new.REGION_NAME, :old.REGION_NAME);
end;
/
86 rows selected.
SQL>
SQL>
Audit table
Somdutt Harihar, February 17, 2018 - 8:59 am UTC
In the above code you are creating a trigger for each and every table, e.g. for DEPARTMENTS you are declaring all of the dept table's columns inside it.
I want only one DML trigger which can be applied to all the tables in the database. If any DML occurs on any table, it should fire that one trigger and insert the log into the audit_tbl table.
Thank you.
February 19, 2018 - 3:26 am UTC
That is not how triggers work.
Audit Log
somdutt Harihar, February 19, 2018 - 5:53 am UTC
Hi,
First of all, I would like to say sorry for not understanding your code.
Now I understand that it is dynamic creation of triggers through the select statement below.
select table_name tn, -10 sortkey, 'create or replace trigger auddel'||table_name ddl from user_tables
union all
select table_name tn, -9, 'after delete on '||table_name||' for each row ' from user_tables
union all
select table_name tn, -8, 'begin' from user_tables
union all
select table_name tn, column_id sortkey, ' audit_pkg.check_val( '''||table_name||''', ''' || column_name || ''', NULL, :old.' || column_name || ');'
from user_tab_columns
union all
select table_name tn, 9998, 'end;' from user_tables
union all
select table_name tn, 9999, '/' from user_tables
order by 1,2
Thank you so much for this post. It helps me a lot.
Regards,
Somdutt Harihar
February 19, 2018 - 8:05 am UTC
glad we could help
support for other data type
Mahendra, July 28, 2020 - 12:37 pm UTC
Currently this solution gives examples for VARCHAR2, DATE and NUMBER. What about when we have:
LONG
TIMESTAMP(6) WITH TIME ZONE
FLOAT
TIMESTAMP(6)
CHAR
NUMBER
CLOB
BARCODE_FIELD
DATE
VARCHAR2
BLOB
INTERVAL DAY(2) TO SECOND(6)
Will it get adjusted in the above three procedures, or do we have to write a different procedure for each data type?
Can I define a type with columnName, oldValue, newValue
and, instead of calling a procedure, call a function to get this type object returned - will this work?
July 29, 2020 - 2:55 am UTC
I've added a revised version to the question answer - but really, with the exception of LOBs, this really is a no brainer.... With regard to data types:
LONG => don't use these
TIMESTAMP(6) WITH TIME ZONE => done
FLOAT => already works with number
TIMESTAMP(6) => done
CHAR => works with varchar2
NUMBER => already there
CLOB => done
BARCODE_FIELD => what is this?
DATE => already there
VARCHAR2 => already there
BLOB => done
INTERVAL DAY(2) TO SECOND(6) => done
And I stress - if the auditing is so critical, then I'd be looking at custom triggers per table, and custom tables rather than a simple generator like this one
Even do not delete "DDL Trigger"
Suuny, September 09, 2020 - 9:07 am UTC
I have created an "after DDL on database" trigger in Oracle 12c called "track_trigger".
All DDL is captured by the "track_trigger" trigger, but when I drop "track_trigger" itself, the trigger is dropped from the database. I want to make sure the trigger itself cannot be dropped. What should I do?
September 10, 2020 - 6:49 am UTC
This is a security issue. If you are tracking changes with this trigger, no one except an admin should be able to drop this trigger.
If you have an issue with who has admin rights... well, then you've got much bigger problems to handle :-)
Thanks - and some tweaks
Barry, March 03, 2021 - 11:32 am UTC
Many thanks.
Just my 2 pence:
(1) We have some really long column names; the trigger definition wraps and prevents compilation.
Add: set linesize 1000
(2) We have timestamps without time zones. This causes the problem "PLS-00307: too many declarations of".
So add another procedure to the package:
PROCEDURE Check_val(l_tname IN VARCHAR2,
l_cname IN VARCHAR2,
l_new IN TIMESTAMP,
l_old IN TIMESTAMP)
USER_TAB_COLS not showing the latest changes from the trigger
Venkat, March 07, 2022 - 4:42 pm UTC
hi team,
I'm trying to capture modifications to any table (ALTER) and sync the corresponding audit table with the change.
But while querying DBA_TAB_COLS inside the after-DDL trigger, it's showing me the earlier version of the column instead of the statement's changes.
Could you tell me what causes this behavior?
Many thanks in advance
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> set time on
19:30:07 SQL> select banner from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE 12.2.0.1.0 Production
TNS for Linux: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production
19:30:20 SQL> CREATE OR REPLACE TRIGGER audt_temp
19:30:55 2 after alter on schema
19:30:55 3 declare
19:30:55 4 --oper ddl_log.operation%TYPE;
19:30:55 5 sql_text ora_name_list_t;
19:30:55 6 i pls_integer;
19:30:55 7 v_sql_text varchar2(4000);
19:30:55 8 v_clob clob;
19:30:55 9 PRAGMA AUTONOMOUS_TRANSACTION;
19:30:55 10 begin
19:30:55 11 i := sql_txt(sql_text);
19:30:55 12 for n in 1..i
19:30:55 13 loop
19:30:55 14 v_sql_text := v_sql_text|| sql_text(n);
19:30:55 15 end loop;
19:30:55 16
19:30:55 17 select listagg(column_name||' '||data_type||' '||data_length,';') within group (order by 1)
19:30:55 18 into v_clob
19:30:55 19 from user_tab_cols
19:30:55 20 where table_name=ora_dict_obj_name
19:30:55 21 group by table_name;
19:30:55 22
19:30:55 23 insert into temp1(col1, col2, col3) values(v_sql_text, v_clob, sysdate);
19:30:55 24
19:30:55 25 commit;
19:30:55 26 exception
19:30:55 27 when others then
19:30:55 28 raise;
19:30:55 29 null;
19:30:55 30 end audt_temp;
19:30:55 31 /
Trigger created.
19:30:57 SQL> show errors
No errors.
19:31:03 SQL> select listagg(column_name||' '||data_type||' '||data_length,';') within group (order by 1) output
19:31:22 2 from user_tab_cols
19:31:22 3 where table_name='TAB1'
19:31:22 4 group by table_name;
OUTPUT
--------------------------------------------------------------------------------
COL1 VARCHAR2 32
19:31:24 SQL> select col1, to_char(col2), col3 from temp1;
no rows selected
19:33:03 SQL> alter table tab1 modify col1 varchar2(34);
Table altered.
19:33:18 SQL> select col1, to_char(col2), col3 from temp1;
COL1
--------------------------------------------------------------------------------
TO_CHAR(COL2)
--------------------------------------------------------------------------------
COL3
---------
alter table tab1 modify col1 varchar2(34)
COL1 VARCHAR2 33
07-MAR-22
19:33:33 SQL> select listagg(column_name||' '||data_type||' '||data_length,';') within group (order by 1) output
19:34:09 2 from user_tab_cols
19:34:09 3 where table_name='TAB1'
19:34:09 4 group by table_name;
OUTPUT
--------------------------------------------------------------------------------
COL1 VARCHAR2 34
19:34:11 SQL> drop trigger audt_temp;
Trigger dropped.
19:36:53 SQL>
March 08, 2022 - 1:29 pm UTC
The trigger starts an autonomous transaction - this can't see the new columns because they're not committed yet!
Get the column list first,
then call an autonomous transaction to log the change, e.g.:
create table temp1 (
col1 varchar2(4000), col2 clob, col3 date
);
create or replace trigger audt_temp
after alter on schema
disable
declare
sql_text ora_name_list_t;
i pls_integer;
v_sql_text varchar2(4000);
v_clob clob;
procedure log as
pragma autonomous_transaction;
begin
insert into temp1(col1, col2, col3) values(v_sql_text, v_clob, sysdate);
commit;
end log;
begin
i := sql_txt(sql_text);
for n in 1..i
loop
v_sql_text := v_sql_text|| sql_text(n);
end loop;
select listagg(column_name||' '||data_type||' '||data_length,';') within group (order by 1)
into v_clob
from user_tab_cols
where table_name=ora_dict_obj_name
group by table_name;
log();
end audt_temp;
/
alter trigger audt_temp enable;
create table t (
c1 int
);
alter table t add ( c2 int );
select * from temp1;
COL1 COL2 COL3
------------------------------ ------------------------------ -----------
alter table t add ( c2 int ) C1 NUMBER 22;C2 NUMBER 22 08-MAR-2022
Thanks for the response, but it's showing the previous data type for modify statements
Venkat, April 10, 2022 - 3:19 pm UTC
Thanks Chris,
Your example works, but with the same trigger enabled (I created the trigger that you shared), when I tried to modify the data type of the column, it shows the old data type and not the new one in the log.
Table altered.
select * from temp1;
COL1
--------------------------------------------------------------------------------
COL2
----------------------------------------------------------------------------
COL3
---------
alter table t modify ( c2 varchar2(10) )
C1 NUMBER 22;C2 NUMBER 22;C3 NUMBER 22
10-APR-22
1 row selected.
Could you tell me what I'm missing?
April 12, 2022 - 1:32 pm UTC
So it does. I'm not sure why this is or how to work around it.
Though I'm wondering - you are capturing the alter table statement. This shows what you're changing the column to. Wouldn't storing the original state be more useful?
Thanks Chris
Venkat, April 13, 2022 - 7:36 am UTC
The intention is to have a trigger make changes in the audit table for any changes that happen to the main table. The audit tables don't have any constraints (no NOT NULL, no CHECK, no DEFAULT values), whereas the main table may get changes involving these that must not be executed against the respective audit table. Taking the actual DDL and rewriting it (for the audit table) was a bit complex, with many lines of code, so I thought this approach would be comparatively simpler.
April 19, 2022 - 3:06 am UTC
Check out my audit utility - in particular the "post_install" routine which looks to bring a source table and its audit table into alignment.
Maybe start with that code and use it to serve your need, and then schedule it to run from time to time to pick up changes.
audit table to capture :old records value
Pat, November 09, 2022 - 2:30 am UTC
Greetings!
Sorry if this is repetitive question.
Our database version is on 19.11. on Exadata platform
We have a requirement: every time a value changes in the table (in any column), we update the table but need to insert the entire record (the old record) with the OLD values and the old timestamp into an audit table. The audit table will be storing the previous versions of the data over a period of time.
Just wondering if there is already a feature in Oracle 19c that will do it. Could you please advise on the best and most efficient way to implement this?
Thanks!
November 09, 2022 - 2:43 pm UTC
If you want an out-of-the-box solution instead of triggers, look into Flashback Data Archive. This stores the change history tables for a specified period of time.
audit table to capture :old record values
Pat, November 09, 2022 - 4:46 pm UTC
Hi Connor,
Thanks.
>> If you want an out-of-the-box solution instead of triggers, look into Flashback Data Archive. This stores the change history tables for a specified period of time:
Will there be an overhead from enabling FDA on highly active transaction tables?
November 11, 2022 - 3:25 am UTC
On your overall server, yes, a slight increase.
But not on your transactions, because this is done asynchronously to the session doing the transactions.
Flashback Context on Autonomous
Simon, June 21, 2024 - 1:38 pm UTC
Following on from your suggestion and blog posts, I'm looking to use Flashback Time Travel (formerly Flashback Data Archive) on Autonomous DB to satisfy an audit compliance request.
The last sticking point is the ability to view the historic context info. It seems there is no ability on Autonomous to set the context level to ALL - although I see no limitation in the Autonomous Flashback documentation. I'm unable to run the following, even as ADMIN:
dbms_flashback_archive.set_context_level('ALL');
Am I missing something or is this an oversight in the implementation in Autonomous?
Thanks!
June 24, 2024 - 5:04 am UTC
That looks like an oversight or a security restriction - I will ask around internally