Question and Answer

Chris Saxon

Thanks for the question, Mario.

Asked: March 10, 2026 - 12:06 pm UTC

Last updated: March 11, 2026 - 3:41 pm UTC

Version: 19c

You Asked

I need to propagate changes on a few tables to an external system. I have to do it in near real time, and ultimately it will end up on Kafka from my end. The target is Oracle 19c, and the solution needs to be installed at multiple clients on premises.

Example problem:
Let's say I have a table of messages. I have a column called change_id which is populated by a trigger from a sequence on insert and update. In theory, I always know which row was changed last. I might avoid needing to track deletes; not sure yet.

Now I create a procedure to be run as a batch that collects the rows changed since the last run. First it gets the max(change_id), then selects all rows between last_run_max_change_id and new_max_change_id, exports the data as JSON to another table, writes new_max_change_id as last_run_max_change_id, and commits. Another process will handle delivery of the JSON to where it needs to be.
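Roughly what I mean (a sketch; the export_control and export_json tables, and the use of JSON_OBJECT over example columns, are illustrative):

```sql
-- Sketch of the batch export. Assumes a one-row control table
-- export_control (last_run_max_change_id) and a staging table export_json.
create or replace procedure export_changes as
  l_last number;
  l_new  number;
begin
  select last_run_max_change_id into l_last from export_control;
  select max(change_id) into l_new from messages;

  -- Collect everything changed since the last run as JSON
  insert into export_json (payload)
  select json_object('message_id' value m.message_id,
                     'change_id'  value m.change_id,
                     'text'       value m.message_text)
  from   messages m
  where  m.change_id > l_last
  and    m.change_id <= l_new;

  update export_control set last_run_max_change_id = l_new;
  commit;
end;
```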

The problem is that another long-running transaction might have consumed lower sequence numbers but not yet committed when the batch ran, so those change_ids will never be exported. Another problem is that I don't have deleted rows.

Solution 1: GoldenGate replication, OpenLogReplicator, or something similar. I would have to convince all clients to commit to paying the GG licence, create tables as replication targets, export from those tables, and delete from them. The licence and getting all clients on board is difficult, because I need one solution for all of them. Also, security, stability, and maintenance concerns will likely make clients want to reject such ideas, and every one of them has to be on board. I also tried to use Oracle Streams on another project and had stability issues and ORA-600 errors that were never resolved.

Solution 2: use the SCN instead of change_id. SCNs (ora_rowscn) are not indexed, and selecting by SCN in a where clause on billions of rows is too slow.

Solution 3: Flashback. Have something like SELECT * FROM messages VERSIONS BETWEEN SCN :last_scn AND :current_scn;. My concern is that if the program doesn't run for a while, for whatever reason, the flashback data will be lost. I would need a backup solution.

Solution 4: a trigger after insert/update/delete on the messages table that writes to an export table. This handles insert, update, and delete, and nothing will be skipped. I select 10k ids ordered, export them to JSON, delete those 10k records, and commit. I'm worried about the additional load on big transactions from writing to the export table, and the per-row trigger overhead. Additional context switching. Index contention on the export table, with rows inserted by inserts and updates on the original table at the same time as rows are deleted by the export batch process. Exports will have to be ordered by change_id.
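For clarity, the kind of trigger I mean (a sketch; table and column names are illustrative):

```sql
-- Export table fed by a row trigger on messages.
create table messages_export (
  change_id  number,
  message_id number,
  operation  varchar2(1)
);

create or replace trigger messages_export_trg
after insert or update or delete on messages
for each row
begin
  insert into messages_export (change_id, message_id, operation)
  values (change_seq.nextval,
          coalesce(:new.message_id, :old.message_id),
          case when inserting then 'I'
               when updating  then 'U'
               else 'D' end);
end;
```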

Conclusion: The only stable and data-consistent solution I can think of is solution 4, a trigger. But I'm worried about the overhead.

Instead of a trigger, I could go through the code and add the inserts into the export log table there, so the total number of updates might be lower, though not by much.

Any other ideas or advice? Is there some auditing feature or clever strategy I didn't think of?
Thanks

edit:
Solution 5: Instead of a change_id sequence column written by a trigger, I could write a change-flag column set by a trigger only when relevant columns change. Then I would select for update the rows whose change flag is not null, propagate those records to the external system, and update the change flag to null.
Alternatively, I could set the existing change_id column to null when processed, which would also drastically reduce the index size on change_id.
With this I would avoid creating additional tables, and avoid the overhead of multiple updates generating rows in a log table, as it is only important to send the latest value as soon as possible.
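The processing loop I have in mind would be something like this (a sketch; change_flag and the column names are illustrative):

```sql
-- A trigger on relevant columns sets the flag:
--   :new.change_flag := 'Y';

-- The export batch locks flagged rows, propagates them, then clears the flag.
declare
  cursor c is
    select message_id
    from   messages
    where  change_flag is not null
    for update skip locked;
begin
  for r in c loop
    -- propagate the current row to the external system here
    update messages
    set    change_flag = null
    where  current of c;
  end loop;
  commit;
end;
```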

and Chris said...

Here are some thoughts:

Solution 1: Replicating data is exactly what GoldenGate is for; many customers use this product. If licensing it really will be a blocking issue, you could look at LogMiner to read changes from the redo and archive logs. There are products that can help you do this; I've not used any, so I can't make recommendations.

Solution 2: By default, ora_rowscn is tracked at the block level, not the row level. To make it row level you need to specify ROWDEPENDENCIES when you create the table. Plus you can't use it to capture deletes, should you need to in the future. This likely rules it out for existing systems.
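For example, ROWDEPENDENCIES has to be set at creation time, so an existing table would need to be rebuilt (column names are illustrative):

```sql
-- With ROWDEPENDENCIES each row carries its own commit SCN,
-- at the cost of a few extra bytes per row.
create table messages (
  message_id   number primary key,
  message_text varchar2(4000)
) rowdependencies;

select message_id, ora_rowscn from messages;
```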

Solution 3: Flashback Time Travel (previously Flashback Data Archive) enables you to keep a permanent store of changes up to the retention period. This avoids the problem of losing history.
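A minimal setup looks like this (a sketch; the archive name, tablespace, and retention are illustrative):

```sql
-- Create an archive with a retention long enough to cover any outage
-- of the export process, then enable it on the table.
create flashback archive messages_fba
  tablespace fba_ts
  retention 1 year;

alter table messages flashback archive messages_fba;

-- Changes within the retention period can then be read back reliably:
select * from messages
versions between scn :last_scn and :current_scn;
```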

Solution 4: As you say, triggers have overhead. The best way to see if this is viable is to benchmark on your system to see whether this overhead is acceptable. Note that triggers can be disabled, so changes can be lost using this method.

Solution 5: If you've ruled out GoldenGate or LogMiner, this is my favourite. You can make the column DEFAULT ON NULL, meaning you only need triggers to set the flag again on updates and to capture deletes. You may still need additional tables to capture old values for updates; you'll definitely need extra tables to store delete details.
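For example (a sketch; the flag column and the "relevant" column list are illustrative):

```sql
-- DEFAULT ON NULL sets the flag automatically on insert,
-- so a trigger is only needed to re-set it on update.
alter table messages add (
  change_flag varchar2(1) default on null 'Y'
);

create or replace trigger messages_flag_trg
before update of message_text on messages  -- only relevant columns
for each row
begin
  :new.change_flag := 'Y';
end;
```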

A column like this also gives you a simple way to re-run the process if the changes are lost downstream for some reason. You just set the flag again.

Another option to consider: Oracle Advanced Queuing. This gives you a way to propagate changes to other systems. Release 21c added Transactional Event Queues, which can work with Kafka clients: https://docs.oracle.com/en/database/oracle/oracle-database/21/adque/interoperability_TEQ_AQ.html You'll need to write to the queue as part of your transactions. You could do this in the app code or with triggers, making it similar to option 4.
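As a rough illustration, enqueueing a change as part of the transaction with classic AQ might look like this (a sketch; the queue names and JSON payload are illustrative):

```sql
-- One-time setup: queue table, queue, start.
begin
  dbms_aqadm.create_queue_table (
    queue_table        => 'msg_changes_qt',
    queue_payload_type => 'SYS.AQ$_JMS_TEXT_MESSAGE');
  dbms_aqadm.create_queue ('msg_changes_q', 'msg_changes_qt');
  dbms_aqadm.start_queue ('msg_changes_q');
end;
/

-- In the transaction (app code or trigger): enqueue the change.
declare
  l_opts  dbms_aq.enqueue_options_t;
  l_props dbms_aq.message_properties_t;
  l_id    raw(16);
  l_msg   sys.aq$_jms_text_message;
begin
  l_msg := sys.aq$_jms_text_message.construct;
  l_msg.set_text('{"message_id":1,"op":"I"}');
  dbms_aq.enqueue ('msg_changes_q', l_opts, l_props, l_msg, l_id);
  commit;  -- the message becomes visible with the rest of the transaction
end;
/
```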

More to Explore

PL/SQL demos

Check out more PL/SQL tutorials on our LiveSQL tool.

PL/SQL docs

PL/SQL reference manual from the Oracle documentation library