General Question for your thought or Experience with the Outbox Pattern and viable alternatives


Question and Answer

Connor McDonald

Thanks for the question, Mike.

Asked: August 01, 2025 - 4:29 pm UTC

Last updated: August 20, 2025 - 8:12 am UTC

Version: 19c

Viewed 1000+ times

You Asked

Question
We're evaluating different approaches to implement the Outbox Pattern in Oracle 19c for reliable event publishing in our microservices architecture, but we're concerned about the significant I/O overhead and performance implications. Could you provide guidance on the best practices and alternatives?
Current Implementation Options We're Considering
1. Traditional Polling Approach

Method: Standard outbox table with application polling using SELECT ... FOR UPDATE SKIP LOCKED

Concerns:
- Constant polling creates unnecessary database load
- Potential for high latency in event delivery
- Resource consumption even when no events exist
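For concreteness, the usual shape of a SKIP LOCKED polling consumer is sketched below; the outbox table, column names, and batch size are illustrative assumptions, not from the question. Each concurrent poller claims only unlocked rows, so pollers never block each other.

```sql
-- Illustrative outbox table (names are assumptions)
CREATE TABLE outbox_events (
  event_id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  payload    CLOB NOT NULL,
  created_at TIMESTAMP DEFAULT SYSTIMESTAMP NOT NULL
);

-- A polling consumer: claim a batch of rows, publish, delete, commit.
-- SKIP LOCKED lets concurrent pollers bypass rows another poller holds.
DECLARE
  CURSOR c_batch IS
    SELECT event_id, payload
      FROM outbox_events
     ORDER BY event_id
       FOR UPDATE SKIP LOCKED;
BEGIN
  FOR r IN c_batch LOOP
    -- ... publish r.payload to the message broker here ...
    DELETE FROM outbox_events WHERE CURRENT OF c_batch;
    EXIT WHEN c_batch%ROWCOUNT >= 100;  -- cap the batch size
  END LOOP;
  COMMIT;  -- the deletes become visible only after successful publication
END;
/
```

Note that even with an empty table, every poll still costs an index scan and a round trip, which is exactly the idle-load concern raised above.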
2. Change Data Capture (CDC) with Debezium

Method: Using Debezium to mine Oracle redo logs for outbox table changes

Concerns:
- Additional complexity in deployment and monitoring
- Dependency on external CDC infrastructure
- Potential log mining overhead on the database
3. Oracle Advanced Queuing (AQ) with Sharded Queues

Method: Leveraging Oracle's native messaging with 19c sharded queue improvements

Concerns:
- Learning curve for development teams familiar with table-based approaches
- Potential vendor lock-in
- Queue management complexity
Primary Concerns

I/O Impact: All approaches seem to significantly increase database I/O:
- Polling creates constant read operations
- CDC requires continuous log scanning
- Queuing systems add their own storage and processing overhead

Scalability: As our event volume grows, we're worried about:
- Database performance degradation
- Increased storage requirements for outbox/queue tables
- Network bandwidth consumption

Specific Questions

1. Performance Optimization: What Oracle 19c specific features or configurations can minimize the I/O overhead of outbox pattern implementations?
2. Alternative Architectures: Are there Oracle-native alternatives to the traditional outbox pattern that provide similar transactional guarantees with better performance characteristics?
3. Hybrid Approaches: Would a combination approach (e.g., AQ for high-priority events, polling for batch operations) be advisable?
4. Monitoring and Tuning: What specific metrics should we monitor, and what tuning parameters are most critical for outbox pattern performance in Oracle 19c?
5. Resource Planning: How should we size our database resources (I/O capacity, storage, memory) when implementing outbox patterns at scale?

Environment Details

- Oracle Database 19c Enterprise Edition
- Microservices architecture with moderate to high event volume
- Requirements for exactly-once delivery semantics
- Mixed OLTP and event-driven workloads
Any insights on Oracle-specific optimizations, alternative patterns, or architectural recommendations would be greatly appreciated.

and Connor said...

Obviously I have a bias :-) but I would be opting for AQ, or more precisely, Oracle Transactional Event Queues, which are more scalable than traditional AQ.

https://www.oracle.com/database/advanced-queuing/
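As a rough sketch of what this looks like in practice: on 19c the scalable queues are created via DBMS_AQADM.CREATE_SHARDED_QUEUE (later releases expose the same facility as CREATE_TRANSACTIONAL_EVENT_QUEUE). The queue name, table, and payload below are illustrative assumptions. The key point for the outbox use case is that the enqueue participates in the business transaction, so no separate outbox table or poller is needed.

```sql
-- 19c: create and start a sharded (Transactional Event) queue
-- (queue name is illustrative)
BEGIN
  DBMS_AQADM.CREATE_SHARDED_QUEUE(
    queue_name         => 'ORDER_EVENTS_Q',
    multiple_consumers => TRUE);
  DBMS_AQADM.START_QUEUE(queue_name => 'ORDER_EVENTS_Q');
END;
/

-- Enqueue in the same transaction as the business DML: the row and
-- the message commit (or roll back) atomically, which is the guarantee
-- the outbox pattern is built to simulate.
DECLARE
  l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_id    RAW(16);
  l_msg   SYS.AQ$_JMS_TEXT_MESSAGE;
BEGIN
  INSERT INTO orders (order_id, status) VALUES (42, 'CREATED');

  l_msg := SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT;
  l_msg.set_text('{"orderId":42,"event":"ORDER_CREATED"}');
  DBMS_AQ.ENQUEUE(
    queue_name         => 'ORDER_EVENTS_Q',
    enqueue_options    => l_opts,
    message_properties => l_props,
    payload            => l_msg,
    msgid              => l_id);

  COMMIT;  -- one commit covers both the business row and the event
END;
/
```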

My concerns with (1) and (2) are

- the cost of polling in (1)

- the risk profile with (2). Every time you patch or upgrade Oracle, you're now thinking "Gee, I hope my CDC is unaffected by this". What are you going to do if a security patch for Oracle comes out that breaks your CDC? That's a tough decision point

- in both, all of the guarantees you'd normally delegate to a queueing system (exactly-once delivery, message ordering, message correlation, error retry/processing, etc.) now fall to you to home-build. What starts out looking simple often is simple for the first 80% of the functionality you want, but a quagmire of complexity to get to 100%

I'm a fan of "Hybrid Approaches", but in a slightly different context. I once did an implementation for a client (using AQ) where most app tasks would result in a small number of messages being put on the queue, but some app tasks would (conceptually) result in millions of messages.

For the latter, we still used the queue, but we'd put a single modified message on the queue, which contained info to tell our queue processor "Hey, this message means go and get the 1 million data artefacts that this message represents".
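That pointer-message idea is essentially a claim check: stage the bulk data in an ordinary table, and enqueue one small message that references it. A minimal sketch, with all table, sequence, and queue names hypothetical:

```sql
-- Stage a million artefacts once, then enqueue a single "pointer" message
-- that tells the consumer where to find them.
DECLARE
  l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_id    RAW(16);
  l_msg   SYS.AQ$_JMS_TEXT_MESSAGE;
  l_batch NUMBER;
BEGIN
  -- One batch id for the whole workload (sequence is hypothetical)
  l_batch := bulk_batch_seq.NEXTVAL;

  -- Bulk artefacts go to a plain table, not onto the queue
  INSERT INTO bulk_artefacts (batch_id, artefact_data)
  SELECT l_batch, source_data
    FROM source_table;

  -- One small message stands in for the entire batch
  l_msg := SYS.AQ$_JMS_TEXT_MESSAGE.CONSTRUCT;
  l_msg.set_text('{"type":"BULK_BATCH","batchId":' || l_batch || '}');
  DBMS_AQ.ENQUEUE('ORDER_EVENTS_Q', l_opts, l_props, l_msg, l_id);

  COMMIT;  -- artefacts and the pointer message commit atomically
END;
/
```

The queue itself then only ever carries tiny messages, regardless of batch size, which keeps queue-table I/O and storage flat as event volume grows.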


Comments

Message propagation between databases

Stew Ashton, August 13, 2025 - 9:32 am UTC

Connor, is there any point where Oracle Transactional Event Queues might use distributed transactions? If a message is propagated to a remote database, doesn't that require a database link, and if so isn't the commit or rollback using two phase commit?

If two phase commits are completely avoided, how is the "exactly once" promise kept?

Thanks in advance,
Stew
Connor McDonald
August 20, 2025 - 8:12 am UTC

"If two phase commits are completely avoided"

I'm not sure we say that in the docs in the general sense of TEQ. We mention this only specifically on propagation.

"Optimized propagation happens in batches. If the remote queue is in a different database, then Oracle Database Advanced Queuing uses a sequencing algorithm to avoid the need for a two-phase commit"

https://docs.oracle.com/en/database/oracle/oracle-database/21/adque/aq-performance-scalability.html#GUID-689E9C50-5647-49BA-8BFF-AB0DEE6432EE

But even so, I'll ask around internally for more details, because you'd imagine that once a database link comes into play, you're looking at some sort of two-phase commit.