You Asked
We use OCI GoldenGate (Microservices) to replicate tables from an Oracle database to a big data platform.
I was tasked with creating a process to replicate any "ALTER TABLE ... ADD COLUMN" statement so that the big data team can modify their schema safely.
A solution using a database trigger in a dedicated schema, tested in our test environment, was rejected because:
1. Triggers are evil.
2. Database patching can have unpredictable consequences when a database trigger is present. I wonder if that is true.
I used a similar trigger to log my own DDL for journaling for 5 years, in a database that went through multiple rounds of patching without any issues.
Still, my experience is not statistically valid proof.
My colleague suggested using the aud$unified table, but that involves a procedure executed by a frequently running scheduled job, because releases are not 100% regular.
Is there a better approach?
With regards,
Alex
Here is the setup:
prompt create table utility.ddl_log
--drop table utility.ddl_log purge;
create table utility.ddl_log (
  ddl_time timestamp(6) default systimestamp not null
, ddl_by varchar2(30) default sys_context('userenv', 'current_user')
, os_user varchar2(100) default sys_context('userenv','os_user')
, host varchar2(100) default sys_context('userenv','host')
, ip_address varchar2(100) default sys_context('userenv','ip_address')
, module varchar2(100) default sys_context('userenv','module')
, terminal varchar2(100) default sys_context('userenv','terminal')
, operation varchar2(100)
, owner varchar2(50)
, object_name varchar2(50)
, object_type varchar2(50)
, sqltext clob
);
create or replace trigger utility.after_gg_table_ddl
after alter on database
declare
l_2run binary_integer := 0;
l_sql_id varchar2(16);
l_sql clob;
begin
--p('ora_dict_obj_name = '||ora_dict_obj_name);
select count(*)
into l_2run
from dba_users u
where 1 = 1
and profile = 'APP_USER' -- only application schemas to exclude any other schemas
and ora_dict_obj_type = 'TABLE'
and exists (
select 1
from all_log_groups a
where a.owner = u.username
and a.table_name = ora_dict_obj_name
and log_group_type is not null
)
and ora_dict_obj_owner = u.username
;
--p('l_2run = '||to_char(l_2run)||'; sid = '|| sys_context('userenv','sid'));
if l_2run > 0 then
select sql_id
into l_sql_id
from gv$session
where sid = sys_context('userenv','sid')
and rownum = 1
;
select lower(sql_fulltext)
into l_sql
from gv$sql
where sql_id = l_sql_id
and rownum = 1; --to be on the safe side as many sessions and children can have the same sql_id
-- This part is not required
insert into utility.ddl_log (operation, owner, object_name, object_type, sqltext)
values (ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name, ora_dict_obj_type, l_sql);
if instr(l_sql, ' add') > 0 and l_sql not like '%supplemental%' and l_sql not like '%overflow%' then
insert into replicated_table_altered values (ora_dict_obj_owner, ora_dict_obj_name, l_sql);
end if;
end if;
exception
when others then
l_sql := sqlerrm;
insert into utility.ddl_log (operation, owner, object_name, object_type, sqltext)
values (ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name, ora_dict_obj_type, l_sql);
end;
/
and Connor said...
Let's take the two statements.
1) Triggers are evil
"Triggers are evil" is, like any absolute statement, rarely correct. "Triggers are evil in the wrong circumstances" is perhaps a better phrasing. I don't think I've ever encountered an Oracle feature where I've said "100% of the time you should avoid it". As always, there is more nuance to the discussion.
2) Database patching
This is part of the nuance bit. Database patching *does* do a lot of dictionary DDL and the like, and when a database is upgraded, we might use a mode that impacts trigger firing etc. In this way, triggers are like any piece of code - they require process and management to ensure they are being used in the correct way.
Just as an aside on your trigger - any reason you're not just using the "ora_sql_txt" event attribute function? eg
create or replace
trigger ddl_grabber
after create or alter or drop on database
declare
  l_string   varchar2(32000);
  l_sql_text ora_name_list_t;
  l_n        number;
begin
  -- ora_sql_txt returns the triggering statement as a collection of lines
  l_n := ora_sql_txt(l_sql_text);
  for i in 1 .. l_n
  loop
    l_string := l_string || l_sql_text(i);
  end loop;
  insert into ddl_log (sqltext) values (l_string);
end;
/
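A quick smoke test of a trigger like this might look as follows (the table name `t` and column names here are purely illustrative, and this assumes a `ddl_log` table with a `sqltext` column already exists):

```sql
-- Illustrative only: fire the DDL trigger a couple of times
create table t ( x int );

alter table t add ( y varchar2(10) );

-- Both statements should now be visible in the log
select sqltext
from   ddl_log;
```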
Back to the question - assuming you're grabbing the DDL and inserting it "somewhere", presumably something is monitoring that table downstream for later processing. Based on that, is there any reason why the unified audit trail is *not* an option for you?
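For reference, a unified-audit-based approach might be sketched like this. The policy name and bind variable are illustrative, and the exact columns available in UNIFIED_AUDIT_TRAIL depend on your database version, so treat this as a starting point rather than a finished implementation:

```sql
-- Capture ALTER TABLE statements database-wide (policy name is illustrative)
create audit policy capture_alter_table
  actions alter table;

audit policy capture_alter_table;

-- A scheduled job could then poll the trail for new entries since the
-- last run, e.g. filtering for ADD COLUMN style statements
select event_timestamp,
       object_schema,
       object_name,
       sql_text
from   unified_audit_trail
where  action_name = 'ALTER TABLE'
and    event_timestamp > :last_poll_time
and    lower(sql_text) like '%add%';
```

This keeps the capture logic out of trigger code entirely; the trade-off, as noted in the question, is that something has to poll the trail on a schedule.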