Oracle Advanced Queues dequeue throughput

Question and Answer

Chris Saxon

Thanks for the question, Paul.

Asked: September 12, 2017 - 8:50 am UTC

Last updated: October 11, 2017 - 1:58 pm UTC

Version: 11

Viewed 1000+ times

You Asked

I am a developer, not a DBA, so know just enough about this to be dangerous.
I have set up a queue using the fairly standard scripts available on multiple sites. I used this one: http://www.oracle-developer.net/display.php?id=411
The trouble starts when it is put under load. I have about 270 requests enqueued within a 10-second period. The first 10 are dequeued very quickly and processed. The task the dequeue procedure performs can last anywhere from 15 seconds to an hour. All DML is committed immediately, so there should be no growing transactions. I've read every document I can find, but can't pin down what controls the number of asynchronous tasks the queue will run concurrently.
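For context, the enqueue side of a setup like the one described above usually looks something like this. This is a hedged sketch, not the poster's actual code; the queue name, payload type, and payload contents are hypothetical:

```sql
-- Hypothetical names throughout: demo_queue, demo_payload_t.
-- Single-consumer queue: each message is dequeued exactly once.
declare
  l_enq_opts  dbms_aq.enqueue_options_t;
  l_msg_props dbms_aq.message_properties_t;
  l_msg_id    raw(16);
begin
  dbms_aq.enqueue(
    queue_name         => 'demo_queue',
    enqueue_options    => l_enq_opts,
    message_properties => l_msg_props,
    payload            => demo_payload_t('some work item'),
    msgid              => l_msg_id);
  commit;  -- the enqueue is transactional; notification fires after commit
end;
/
```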
Ideally, I would like to start 200+ concurrent processes, so all the items in the queue are dequeued and processed at the same time. As it is, all 200 jobs take about 8 hours to finally clear, as time taken to dequeue gets longer and longer.
I have looked at multiple subscribers, but my understanding is that this will process each row in the queue multiple times, once per subscriber. I used the parameter multiple_consumers => TRUE when creating the table, but again I think this is for multiple processing of the same row, not parallel processing of a single task. I only have one task that I need to process for each row in the queue; I just need more concurrent processes running at the same time. I thought it would be a simple parameter, but can't find it. When I look at the logging, it appears enqueued tasks sit waiting to be dequeued until a resource becomes available.
So to summarize, I need to increase the throughput of the queue. Any help on this would be greatly appreciated.
Many Thanks

and Chris said...

Are you using PL/SQL notifications for dequeueing? If so, you can't control how many dequeue processes there are:

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:8178644100346656339
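For reference, PL/SQL notification is typically set up with dbms_aq.register, and the registration call itself illustrates the point: nothing in it lets you set the number of callback processes. The schema, queue, and callback names below are hypothetical:

```sql
-- Registering a PL/SQL callback for a queue (hypothetical names).
-- Note: no parameter here controls how many concurrent callback
-- (EMON slave) processes run; the database manages that itself.
begin
  dbms_aq.register(
    sys.aq$_reg_info_list(
      sys.aq$_reg_info(
        'queue_owner.demo_queue',              -- queue (queue:consumer for multi-consumer queues)
        dbms_aq.namespace_aq,                  -- AQ namespace
        'plsql://queue_owner.demo_callback',   -- callback procedure
        hextoraw('FF'))),                      -- user context
    1);                                        -- number of registrations in the list
end;
/
```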

But if you want to run 270 processes at the same time, there's a more fundamental question:

Does your system have the capacity to process 270 concurrent requests?

In simple terms, if each process consumes one CPU thread and you only have eight available, you can only process eight jobs at once. So the other 262 will have to wait. The real world will be more complicated than this, with processes doing I/O, etc. But it's likely there will be contention for hardware resources.

So unless you have masses of spare capacity, chances are your database won't cope with starting all the dequeues together.

Which brings another question:

Why are these separate processes? Can you combine them into one?

e.g. instead of running 200 single-row updates, can you run one update that changes 200 rows?
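As an illustration of that point (the table and column names here are hypothetical):

```sql
-- Instead of 200 executions of a single-row update:
update orders set status = 'PROCESSED' where order_id = :one_id;

-- ...run one set-based statement that processes all staged rows at once:
update orders
set    status = 'PROCESSED'
where  order_id in (select order_id from staged_requests);
```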

Rating

  (2 ratings)


Comments

Increasing dequeue processes

Paul Brown, September 12, 2017 - 1:49 pm UTC

Very useful, thanks. I read that note, and here is the part that answers part of my question.

We are using PL/SQL notifications, so:

"Instead, AQ Notification automatically starts appropriate number of AQ Notification background processes (a.k.a EMON slaves) depending on the total workload and time taken to execute user PL/SQL callbacks."

Is there a view I can check against to see how many dequeue processes are currently running?


Why so many processes: we have a small time window to move data from hundreds of different schemas to DynamoDB. The idea was to submit each of these into a queue in order to get them running in parallel. I already know that running them serially will take too long. I was pondering running multiple queues, but I take your point that physical resources are required. This may be a very ignorant question: is it as simple as each running dequeue process will always need its own CPU thread?
Chris Saxon
September 12, 2017 - 2:01 pm UTC

I don't know of a dedicated view. But the dequeue notification processes run as scheduler jobs. So you can look for the number of these running:

select * from dba_scheduler_running_jobs
where  job_name like 'AQ$_PLSQL_NTFN%';


is it as simple as each running dequeue process will always need its own CPU thread?

No. It depends on what your processing does! If you're scraping all the data out of your application, chances are a fair bit of the time is spent reading it from disk. So then you're using disk I/O, not CPU.
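One rough way to see where the time is going is to sample the wait classes of the active sessions while the dequeues are running. This is a sketch only; querying v$session requires the appropriate privileges:

```sql
-- Snapshot of what active user sessions are doing right now.
-- Large counts in 'User I/O' suggest disk-bound work rather than CPU.
select wait_class, count(*)
from   v$session
where  status = 'ACTIVE'
and    type   = 'USER'
group  by wait_class;
```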

Is this a one-off migration, or something you're going to do regularly?

Dynamo

Paul Brown, September 12, 2017 - 2:11 pm UTC

The plan is to switch propagation to CQNs once we are happy we have the correct data propagated. But I think we may need to run this load a number of times (Dynamo is just an upsert). There is potentially a fair bit of trial and error across hundreds of schemas in different databases on different boxes, which is why I am trying to design a concurrent, centralized solution. Only a subset of the data in the schemas is being moved, and we are using bulk REST services from Dynamo to move it all across. The solution works nicely in dev/test; I just can't seem to get the throughput needed in the required time window to scale this from test to production.
Chris Saxon
October 11, 2017 - 1:58 pm UTC

I don't really understand why you're using AQ to do the initial load. Could you clarify?

More to Explore

PL/SQL demos

Check out more PL/SQL tutorials on our LiveSQL tool.

PL/SQL docs

PL/SQL reference manual from the Oracle documentation library