Advanced Queues for background Reporting

Question and Answer

Tom Kyte

Thanks for the question.

Asked: March 14, 2004 - 10:24 am UTC

Last updated: November 11, 2004 - 10:23 am UTC

Version: 9.2

Viewed 1000+ times

You Asked

In a web-based reporting app, can we use Advanced Queues for running long-running reports in the background? The only thing I don't understand is how I would notify the users (a few thousand of them) that their report has completed.
The web application would be using connection pooling.

Am I on the right track, or completely off base in thinking of using Advanced Queues?

Thanks


and Tom said...


Yes you can -- and yes, it makes total sense.


The notification methods are limitless.

My phone accepts SMS messages sent by email to phonenumber@service.provider. You could use utl_smtp to send such a message, or just send them mail upon completion to let them know.

You could just send email to their accounts -- email them a link: "click here to see your report".

You could integrate it into whatever portal you are using -- having a little area on "their" home page show them the status of their reports.

You could let them pick up the status of the report at a web page anytime they wanted.

I myself use the "email them a link" option -- although the SMS page is excellent as well (I have my system email me an SMS page when I get voicemail, for example -- so my one phone tells me "all").
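
For illustration only, here is a bare-bones sketch of the "email them a link" notification via utl_smtp. The procedure name, mail host, sender address and URL format are all made up, and error handling is omitted:

create or replace procedure notify_report_done(
    p_recipient  in varchar2,
    p_report_url in varchar2 )
as
    l_conn utl_smtp.connection;
begin
    -- talk SMTP to a relay; 'mailhost' is a placeholder for your mail server
    l_conn := utl_smtp.open_connection( 'mailhost', 25 );
    utl_smtp.helo( l_conn, 'mailhost' );
    utl_smtp.mail( l_conn, 'reports@example.com' );
    utl_smtp.rcpt( l_conn, p_recipient );
    utl_smtp.data( l_conn,
        'Subject: your report is ready' || utl_tcp.crlf || utl_tcp.crlf ||
        'Click here to see your report: ' || p_report_url );
    utl_smtp.quit( l_conn );
end;
/

The background process that dequeues and runs the report would simply call something like this (or enqueue a "done" message to a notification queue) when it finishes.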


Comments

Excellent reply, but what about scaling?

Mohit Dubey, March 15, 2004 - 7:56 am UTC

Hi Tom,

This is one of the very few questions I have found on this site that deals with AQ and performance, so I am going to take the opportunity to steer this thread more toward performance, but from a different perspective.

I think my situation is somewhat similar to the original post... I have a multi-consumer queue which accepts payloads targeted at one or more consumers (Oracle 9.2.0.4). The number of consumers is quite large and they are constantly polling for "work", which they fetch using a simple PL/SQL routine.

Each consumer calls the routine, which attempts a dequeue in "wait forever" mode and blocks until work becomes available. The cycle repeats after the consumer has finished the dequeued work and wants more.
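
For reference, a rough sketch of the kind of routine described above -- the queue name WORK_Q, payload type WORK_T and function name are hypothetical:

create or replace function get_next_work( p_consumer in varchar2 ) return work_t
as
    l_opts  dbms_aq.dequeue_options_t;
    l_props dbms_aq.message_properties_t;
    l_msgid raw(16);
    l_work  work_t;
begin
    l_opts.consumer_name := p_consumer;        -- multi-consumer queue: dequeue as this subscriber
    l_opts.wait          := dbms_aq.forever;   -- block until a message is available
    l_opts.navigation    := dbms_aq.first_message;

    dbms_aq.dequeue( queue_name         => 'work_q',
                     dequeue_options    => l_opts,
                     message_properties => l_props,
                     payload            => l_work,
                     msgid              => l_msgid );
    commit;
    return l_work;
end;
/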

The trouble I am facing when scaling up is that the queue itself becomes a point of contention at high volumes (60,000+ messages and 900+ consumers). I have tried the usual tricks of manipulating PCTFREE/PCTUSED to "spread" the data more thinly, but I am wondering if there is something else I can do to boost performance.

One thing that comes to mind is partitioning the queue table somehow, but I could find no pointers on this in the manuals or on the web. BTW, partitioning worked wonders when the AQ was a "normal" table (I think it distributed the rows across different objects, thus spreading them across multiple hash chains in the SGA -- which reduced contention). I may be barking up the wrong tree here if the contention is for the "head" of the queue...

I also see a lot of enqueue waits and dequeue message waits which I am not sure how to tune.

Any pointers would be much appreciated.

As tons of people before me have said, thanks for this *wonderful* site... It's a veritable treasure-trove of Oracle knowledge, tips, facts and tricks!

Mohit.

Tom Kyte
March 15, 2004 - 8:14 am UTC

well, the dequeue message waits are going to be hard to consider here - you will be getting them as people just naturally "wait" for a new message to come along.

have you identified (by tracing a single session with 10046 trace at level 12) what the enqueues are on?
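
(For reference, the level 12 trace Tom refers to can be enabled in the session doing the queue work with something like the following; the tracefile identifier is optional, just to make the trace file easy to find:)

alter session set tracefile_identifier = 'aq_consumer';
alter session set events '10046 trace name context forever, level 12';
-- ... run the enqueue/dequeue workload ...
alter session set events '10046 trace name context off';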

data is cached in the SGA based on a hash of the DBA (data block address) -- we need not have things in different segments to achieve what you describe with the hashing (the block number/file number differences are more than sufficient)

900 dequeue processes -- they are not all running on the same machine are they?

Here are some more details.....

Mohit, March 16, 2004 - 11:33 am UTC

Hi Tom,

Apologies for the delayed reply, but here's some data (derived from the StatsPack report) to help nail the waits down. Some salient points first:

o The "consumers" mentioned in my previous post were running on multiple servers different from where DB is hosted.

o The "enqueues" I mentioned earlier were found to be mainly undo segment related -- this was fixed (Thanks for the tip).

o In the snapshot below, SYS_IOT_TOP_16272 is an "overflow" segment created as part of the AQ table, specifically for an IOT called AQ$_<QueueName>_NR. I have no idea what this does.

o I have ignored the log file sync waits as (I believe) they can be improved by moving the redo logs to a RAID 0 device. Redo allocation has already been tuned, so the "physical" write time (and the fact that the transactions are very small) is the main culprit here.

Selected sections from the report are as below.

Load Profile
~~~~~~~~~~~~                            Per Second    Per Transaction
                                   ---------------    ---------------
                  Redo size:          2,982,933.70           6,975.28
              Logical reads:             35,519.57              83.06
              Block changes:             21,779.11              50.93
             Physical reads:                  7.61               0.02
            Physical writes:                209.50               0.49
                 User calls:              3,799.58               8.88
                     Parses:                261.96               0.61
                Hard parses:                  0.01               0.00
                      Sorts:                 45.95               0.11
                     Logons:                  0.02               0.00
                   Executes:              7,173.91              16.78
               Transactions:                427.64

  % Blocks changed per Read:   61.32    Recursive Call %:   59.17
 Rollback per transaction %:    0.01       Rows per Sort:    3.34

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 98.21 Redo NoWait %: 99.99
Buffer Hit %: 99.98 In-memory Sort %: 100.00
Library Hit %: 100.01 Soft Parse %: 100.00
Execute to Parse %: 96.35 Latch Hit %: 97.69
Parse CPU to Parse Elapsd %: 81.11 % Non-Parse CPU: 99.70

Shared Pool Statistics Begin End
------ ------
Memory Usage %: 73.46 73.46
% SQL with executions>1: 89.94 93.85
% Memory for SQL w/exec>1: 78.64 93.24

Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                   % Total
Event                                          Waits     Time (s)   Ela Time
-------------------------------------------- ------------ ----------- --------
log file sync                                     48,233       2,166     33.33
buffer busy waits                                 71,799       1,995     30.70
latch free                                       123,801       1,452     22.35
CPU time                                                         726     11.17
.
.
.

Wait Events for DB: MEMX9ID0 Instance: MEMX9ID0 Snaps: 234 -235
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
log file sync                      48,233        165      2,166     45      1.0
buffer busy waits                  71,799          2      1,995     28      1.5
latch free                        123,801    114,389      1,452     12      2.5
db file parallel write                342          0         59    173      0.0
enqueue                             1,976          0         46     23      0.0
db file sequential read               785          0         24     31      0.0
.
.
.

Latch Sleep breakdown for DB: MEMX9ID0 Instance: MEMX9ID0 Snaps: 234 -235
-> ordered by misses desc
Get Spin &
Latch Name Requests Misses Sleeps Sleeps 1->4
-------------------------- -------------- ----------- ----------- ------------
cache buffers chains 13,237,415 157,268 118,235 0/0/0/0/0
redo allocation 1,355,068 141,333 1,836 139508/1814/
11/0/0
library cache 2,838,152 114,317 2,715 111647/2625/
45/0/0
dml lock allocation 817,347 64,033 950 63088/940/5/
0/0
shared pool 791,255 60,891 253 60640/249/2/
0/0
.
.
.

Top 10 Logical Reads per Segment for DB: MEMX9ID0 Instance: MEMX9ID0 Snaps: 23
-> End Segment Logical Reads Threshold: 10000
Subobject Obj. Logical
Owner Tablespace Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
BLADE01 BOM_DATA SYS_IOT_TOP_16272 INDEX 223,632 8.10
SYS SYSTEM DUAL TABLE 170,016 6.16
BLADE01 INFRA_DATA ITAUDIT0 TAUDIT0_P3 INDEX 158,832 5.76
BLADE01 ARC_DATA ITAUDIT0 TAUDIT0_P1 INDEX 144,224 5.23
BLADE01 INFRA_INDE ITAUDIT0 TAUDIT0_P4 INDEX 143,440 5.20
BLADE01 ARC_INDEX ITAUDIT0 TAUDIT0_P2 INDEX 135,728 4.92
BLADE01 INFRA_DATA ITAUDIT ITAUDIT_P3 TABLE 119,936 4.35
BLADE01 ARC_DATA ITAUDIT ITAUDIT_P1 TABLE 108,736 3.94
BLADE01 INFRA_INDE ITAUDIT ITAUDIT_P4 TABLE 106,992 3.88
BLADE01 ARC_INDEX ITAUDIT ITAUDIT_P2 TABLE 100,336 3.64
-------------------------------------------------------------


Top 10 Physical Reads per Segment for DB: MEMX9ID0 Instance: MEMX9ID0 Snaps: 2
-> End Segment Physical Reads Threshold: 1000
Subobject Obj. Physical
Owner Tablespace Object Name Name Type Reads %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
BLADE01 ARC_INDEX ITWORKFLOWQUEUEOBJEC EOBJECT_P4 TABLE 101 27.08
BLADE01 ARC_DATA ITWORKFLOWQUEUEOBJEC EOBJECT_P3 TABLE 62 16.62
BLADE01 BOM_DATA ITWORKFLOWQUEUEOBJEC EOBJECT_P1 TABLE 51 13.67
BLADE01 BOM_INDEX ITWORKFLOWQUEUEOBJEC EOBJECT_P2 TABLE 49 13.14
BLADE01 ARC_INDEX ITWORKFLOWQUEUEOBJEC OBJECT1_P4 INDEX 11 2.95
BLADE01 BOM_DATA ITWORKFLOWOBJECT TABLE 10 2.68
BLADE01 BOM_DATA SYS_IOT_TOP_16272 INDEX 9 2.41
BLADE01 BOM_INDEX ITWORKFLOWOBJECT0 INDEX 9 2.41
BLADE01 INFRA_DATA ITWORKFLOWCREATIONAD TABLE 8 2.14
BLADE01 ARC_DATA ITWORKFLOWQUEUEOBJEC OBJECT0_P3 INDEX 8 2.14
-------------------------------------------------------------


Top 10 Buf. Busy Waits per Segment for DB: MEMX9ID0 Instance: MEMX9ID0 Snaps:
-> End Segment Buffer Busy Waits Threshold: 100

Buffer
Subobject Obj. Busy
Owner Tablespace Object Name Name Type Waits %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
BLADE01 BOM_DATA SYS_IOT_TOP_16272 INDEX 17,771 30.38
BLADE01 ARC_DATA ITAUDIT0 TAUDIT0_P1 INDEX 3,509 6.00
BLADE01 INFRA_DATA ITAUDIT0 TAUDIT0_P3 INDEX 3,442 5.88
BLADE01 INFRA_INDE ITAUDIT0 TAUDIT0_P4 INDEX 3,394 5.80
BLADE01 BOM_INDEX ITWFPINFRASTACK0 INDEX 3,148 5.38
BLADE01 ARC_INDEX ITAUDIT0 TAUDIT0_P2 INDEX 2,874 4.91
BLADE01 ARC_DATA ITWORKFLOWQUEUEOBJEC OBJECT0_P3 INDEX 2,349 4.02
BLADE01 BOM_DATA ITWORKFLOWQUEUEOBJEC OBJECT0_P1 INDEX 1,538 2.63
BLADE01 ARC_INDEX ITWORKFLOWQUEUEOBJEC OBJECT0_P4 INDEX 1,523 2.60
BLADE01 ARC_DATA ITWORKFLOWQUEUEOBJEC EOBJECT_P3 TABLE 1,511 2.58
-------------------------------------------------------------


Top 10 Row Lock Waits per Segment for DB: MEMX9ID0 Instance: MEMX9ID0 Snaps: 2
-> End Segment Row Lock Waits Threshold: 100
Row
Subobject Obj. Lock
Owner Tablespace Object Name Name Type Waits %Total
---------- ---------- -------------------- ---------- ----- ------------ -------
BLADE01 BOM_DATA SYS_IOT_TOP_16272 INDEX 7,666 89.27
BLADE01 INFRA_DATA ITAUDIT0 TAUDIT0_P3 INDEX 258 3.00
BLADE01 INFRA_INDE ITAUDIT0 TAUDIT0_P4 INDEX 210 2.45
BLADE01 ARC_DATA ITAUDIT0 TAUDIT0_P1 INDEX 202 2.35
BLADE01 ARC_INDEX ITAUDIT0 TAUDIT0_P2 INDEX 157 1.83
BLADE01 ARC_INDEX ITWORKFLOWQUEUEOBJEC OBJECT1_P4 INDEX 28 .33
BLADE01 BOM_DATA ITWORKFLOWQUEUEOBJEC OBJECT1_P1 INDEX 13 .15
BLADE01 BOM_INDEX ITWORKFLOWOBJECT0 INDEX 10 .12
BLADE01 ARC_INDEX ITAUDITARCHIVE0 RCHIVE0_P4 INDEX 7 .08
BLADE01 ARC_DATA ITAUDITARCHIVE0 RCHIVE0_P3 INDEX 5 .06
-------------------------------------------------------------

Your comments are most welcome!

Mohit.

Tom Kyte
March 16, 2004 - 1:38 pm UTC

zero in on the buffer busy waits (if you believe you can fix the log file sync already).

find out what they are about. a level 12 10046 trace for an application suffering from them would be relevant (actually -- I'd prefer not seeing statspacks, but TKPROFS! much more relevant to tuning *an application*. this system level stuff contains tons of noise that just isn't relevant to tuning your application)
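
(For completeness: once a level 12 10046 trace file exists on the server, a TKPROF report of the kind Tom prefers can be produced along these lines -- the file names are placeholders, and sys=no with those sort options is just one reasonable choice:)

tkprof ora_12345_aq_consumer.trc aq_consumer.prf sys=no sort=exeela,fchela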

How about partitioning the queue table?

christine, November 11, 2004 - 9:28 am UTC

An option mentioned in the question was the possibility of partitioning the underlying queue table. Is this doable? Can we drop old partitions like we do with a regular database table? I couldn't find any references in the Oracle documentation or on Metalink. I'd appreciate your insight into this.

Thanks,

Tom Kyte
November 11, 2004 - 10:23 am UTC

see the alter table command -- it shows you can drop, split, merge, add, truncate, etc etc etc partitions.
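
(For illustration, the partition maintenance operations Tom is referring to look like this against an ordinary range-partitioned table. The table and partition names are made up, and whether your release supports partitioning the AQ-generated queue table itself is something to verify with the documentation or Oracle Support:)

alter table report_hist add partition p_2005_01
    values less than (to_date('01-02-2005','dd-mm-yyyy'));
alter table report_hist drop partition p_2004_01;
alter table report_hist truncate partition p_2004_02;
alter table report_hist split partition p_max
    at (to_date('01-03-2005','dd-mm-yyyy'))
    into (partition p_2005_02, partition p_max);
alter table report_hist merge partitions p_2004_03, p_2004_04
    into partition p_2004_q2;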