How to know SGA and PGA size is insufficient?

Tom Kyte

Thanks for the question, Albert.

Asked: October 12, 2007 - 12:15 am UTC

Last updated: October 19, 2011 - 6:07 pm UTC

Version: 10.2.0.2

Viewed 10K+ times

You Asked

Hi Tom,

This is a continuation of the follow-up at http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7739845026791#485777700346728968. As I found it more appropriate to start a new thread, I am asking this as a new question.

We are using Oracle 10.2.0.2 with automatic memory management. Only one instance is running on the box. The box has 10 GB of RAM. The SGA target is set at 3 GB and the PGA target also at 3 GB.

1) Assuming that there is no swapping happening at the OS level, we want to know if the memory is sufficient for Oracle. What is the definitive indicator that shows whether or not there is memory contention?

Following are some details from an AWR report taken over a 15-minute interval:

Instance efficiency percentages:

Buffer Nowait %:              99.98    Redo NoWait %:       100.00
Buffer Hit %:                 99.71    In-memory Sort %:    100.00
Library Hit %:                99.33    Soft Parse %:         98.25
Execute to Parse %:           75.08    Latch Hit %:          99.89
Parse CPU to Parse Elapsd %:  51.72    % Non-Parse CPU:      98.76

Top 5 timed events:

Event                                                   Waits  Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
db file sequential read                               174,972    1,833            10               40.7  User I/O
CPU time                                                         1,499                             33.3
log file sync                                         201,738      370             2                8.2  Commit
Streams capture: waiting for subscribers to catch up      210      216         1,029                4.8  Configuration
log file parallel write                               187,311      183             1                4.1  System I/O


The SGA memory advice and PGA memory advice in OEM show no improvement from increasing memory.

2) We are expecting the application load to roughly double (the transaction rate is to be doubled) in the near future, for about 5 days. We want to know whether we need additional memory to sustain the load surge. What is the scientific and correct way to determine this (short of simulating it in another environment)? Is it possible to estimate how much more memory is required?

Thanks always for your guidance.

Regards,

Albert Nelson A.

and Tom said...

1) you would use the memory advisors (part of enterprise manager) or use statspack itself.

It shows you what would happen if you increase a memory region. For example, the PGA advisor would tell you "hey, if you increase the PGA target by X, then you would expect to see Y less sorts to disk".

Same with SGA memory.
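On 10g, the same advice the Enterprise Manager screens display can be pulled straight from the dynamic performance views. A minimal sketch (run as a suitably privileged user against a live instance; the column selections are trimmed for readability):

```sql
-- PGA advice: estimated cache hit % and over-allocations at candidate targets
select round(pga_target_for_estimate/1024/1024) as target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
  from v$pga_target_advice
 order by pga_target_for_estimate;

-- SGA advice: estimated DB time at candidate overall SGA sizes
select sga_size, sga_size_factor, estd_db_time
  from v$sga_target_advice
 order by sga_size;
```

The row with sga_size_factor = 1 is the current setting; if estd_db_time barely changes at larger factors, growing the SGA buys little, which is exactly the "no improvement" the advisors reported here.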

You sort of answered your own question here in the last sentence where you said "the advisors say nothing would change by increasing memory"

I can say you are hard parsing too often - if this was done during normal heavy load. (If it was done during no or light load, then the hard parses were probably from the AWR report itself - but that would be a useless time to measure anything, so I assume this was during high load.)

2) Why "short of"? What better could you think of doing? Buffer cache - impossible to say. If the new users query the same data - no change; if the new users query entirely different data - maybe more (maybe not; we don't know how much of the cache you are really using).

the pga advisor will tell you peak memory usage pga wise, you can use that as a swag - assume it might be double that peak usage.
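For the swag itself, v$pgastat records the high-water mark of PGA use since instance startup; doubling that peak is the kind of estimate described above. A sketch:

```sql
-- Current and peak PGA allocation since instance startup
select name, round(value/1024/1024) as mb
  from v$pgastat
 where name in ('total PGA allocated', 'maximum PGA allocated');
```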

Comments

memory advisors

David, October 14, 2007 - 1:48 pm UTC

My senior DBA tells me that the Oracle buffer cache memory advisor may not be accurate, because it counts a read satisfied by the OS/SAN cache as a read from disk, so increasing memory per the advisor may not yield any benefit. Is this true? Thanks.
Tom Kyte
October 14, 2007 - 2:18 pm UTC

the advisor simply tells you "if you increase the buffer cache by this much, we will decrease the number of physical IO requests to the OS by this much"

I fail to see how a cache in front of a SAN or a file system cache would "render that advice incorrect"

the slowest - a physical IO that results in actual disk access.
the next slowest - a physical IO satisfied by some cache outside of oracle.
the next slowest - a physical IO request we can skip because it is in our cache already
the only good IO - lack of an IO, efficient SQL

The cache advisor is accurate. Whether decreasing the physical IO will have a material effect will be influenced somewhat by the OS/SAN cache - which just looks like slightly faster IO in the end (but is still an IO)
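The buffer cache advice described here is exposed in v$db_cache_advice; estd_physical_reads is precisely the count of physical IO requests made to the OS, regardless of whether the OS or SAN later satisfies them from its own cache. A sketch:

```sql
-- Predicted physical reads at candidate buffer cache sizes
select size_for_estimate as cache_mb,
       size_factor,
       estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
 order by size_for_estimate;
```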

How about top wait events?

Albert Nelson, October 14, 2007 - 6:04 pm UTC

Thanks Tom.

Despite the SGA and PGA advisors showing us there is no improvement from increasing SGA/PGA, we see db file sequential read as the top wait event. I do not understand this. As far as I understand, db file sequential read is typically a single-block IO wait. If the buffer cache is sufficient and there is no improvement from increasing its size, then why should db file sequential read be the top wait event?

That is why I wanted to know the 'definitive indicator' that shows whether or not there is memory contention. Can we fully rely on the advisors alone?

And yes this statspack was taken during high load time.

Regards,

Albert Nelson A.
Tom Kyte
October 14, 2007 - 9:02 pm UTC

... we see db_file_sequential read to be the top wait event ...

and to that I say "so what", there will ALWAYS be a top wait.

You do not quantify it with anything - no one here can comment as to whether you have an IO issue or not.

Basically, it is telling you "we have watched the blocks you read and you know what, you read lots of blocks once - and then don't touch them again for a long long time - therefore, caching them would do nothing - since you read them once, they sit there and we don't use them again"

The advisors are pretty accurate, yes.

Clarifications

Albert Nelson A, October 15, 2007 - 7:54 am UTC

Hi Tom,

Thanks for confirming that advisors are accurate.

On whether I have IO issue or not, I was thinking along this line:

I see the wait event db file sequential read. That means Oracle has to do physical IO. It is doing physical IO because the block is not in the buffer cache. It is not there because maybe the memory is not sufficient. Hmm... but why does the advisor say there is no improvement from increasing memory?

Can you please say what is wrong in the above logic? You said that I did not quantify it with anything. If so, how do I know whether there is an IO issue?

Sorry Tom, I am unable to understand the relevance of your comment - "we have watched the blocks you read and you know what, you read lots of blocks once - and then don't touch them again for a long long time - therefore, caching them would do nothing - since you read them once, they sit there and we don't use them again" - to the db file sequential read wait event.

How can you tell that a block is read once and not touched again just by seeing the db file sequential read wait event?

Thanks always.

Regards,

Albert Nelson A.
Tom Kyte
October 15, 2007 - 12:13 pm UTC

I tried to tell you why the logic could be "not logical"

You read block #5 from file #42 for your query.
You are the only one interested in that block.
No one else is going to use it.
In fact, you use it once and don't go back to it.

Therefore, the benefit of caching block #5 from file #42 is what?
[this space intentionally left blank]

there is no benefit to be derived from caching that block.

In order for a cache to be effective, you have to have a need to read and re-read the data over and over and over again.



that is what I tried to say in the statement you could not see the relevance in.

Oracle does more than look at db file sequential read wait events in the advisors (in fact, it doesn't even look at that). It looks at information in the buffer cache about how you've been using blocks over time.


Oh...

Albert Nelson A., October 19, 2007 - 10:14 am UTC

Thanks Tom,

Now I see some light.

I understand your point as below:

We are searching for a block 'x'. It is not in buffer cache. So a physical io is done and we have an increment in db_file_sequential_read event. What you are saying is we are now searching for block 'y'. This is also not in buffer cache and hence another increment in db_file_sequential_read event. Similarly the next time we are searching for another block 'z'.
We are never again searching for block 'x'. That is why db_file_sequential_read event is high.
Am I right in understanding you?

But,
assume that buffer cache is not sufficient. We search for block 'x' and we do a physical io to get it in buffer cache with an increment of db_file_sequential_read event. Now since the buffer cache is small, suppose the block 'x' is flushed to disk and no longer exists in buffer cache. If we again search for block 'x' we will have to do a physical io with another increment in db_file_sequential_read event.

So in this case also db_file_sequential_read event is getting increased even if we query for same block because of insufficient buffer cache. How do I differentiate between the two?

Regards,

Albert Nelson A.

Tom Kyte
November 02, 2007 - 1:04 pm UTC

... Am I right in understanding you? ...
yes


... How do I differentiate between the two? ...

Earlier you wrote:

Despite the SGA and PGA advisors showing us there is no improvement from increasing SGA/PGA, we see db file sequential read as the top wait event.


If the advisor came back and said "increase buffer cache by x% and you will decrease physical IO by y%", that would be the differentiator.

We did the grunt work, we watched the buffer cache, we are telling you what would happen if you made it larger.

I wrote earlier:

Basically, it is telling you "we have watched the blocks you read and you know what, you read lots of blocks once - and then don't touch them again for a long long time - therefore, caching them would do nothing - since you read them once, they sit there and we don't use them again"



Just a reminder

Albert Nelson A, November 01, 2007 - 10:51 pm UTC

Thanks Tom for answering so far.

I think you missed my followup. Can you please give your valuable comment?

Albert Nelson A.

OK. Got it.

Albert Nelson A, November 03, 2007 - 4:10 pm UTC

Thanks Tom.

Advisor

A Reader, January 15, 2009 - 1:46 pm UTC

Hi Tom

I had a couple of questions on the 'advisor' that can suggest whether increasing the SGA and PGA would help.

(a) I understand the advisor bases its advice on monitoring the usage of the system over a period of time. How long a period of usage is typically used for giving the advice? The reason I am asking is that we are currently testing 10g on a test server (an M4000 loaner from Sun) on which we are testing our batch process, reporting load, etc., but the usage on the test box is sporadic, with large periods (hours, sometimes days) of inactivity - basically the times when no one is testing anything.

If the advisor takes those periods of inactivity into consideration, can't that skew the advice? How can we avoid that?

(b) Memory tuning rookie question - how does one run the advisor? Are these just scripts against some performance views? If so, any pointers to those scripts (for getting advice on the SGA and PGA) would be greatly appreciated.

Thanks as always !!!
Tom Kyte
January 16, 2009 - 6:12 pm UTC

for example, an easy interface to the 'advice' is statspack - and there you pick the time range to review. You would take a snap, run something, take another snap and then report on it.
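Mechanically, that looks like the following from SQL*Plus, with statspack installed (the snapshot ids you enter into spreport bracket the window of interest, so idle periods outside the window never enter into the advice):

```sql
-- take a snapshot before the workload
exec statspack.snap

-- ... run the workload to be measured ...

-- take a snapshot after, then report on the interval between the two
exec statspack.snap
@?/rdbms/admin/spreport
```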



Wanted to add some questions in this topic

Pinaki Prasad Das, October 16, 2011 - 3:49 am UTC

In such a scenario, if I look at the buffer headers, check the touch counts, and plan keep and recycle pools accordingly, would that help use the buffer cache more efficiently?

Thanks,
Pinaki
Tom Kyte
October 16, 2011 - 10:26 pm UTC

You shouldn't really be planning for a keep/recycle pool - they are of very limited use in practice.

However, how do you think looking at the touch count would help you?


I would just look at the buffer cache advisor in statspack/AWR and see what it says would help me, it does all of the work.

I don't see how a touch count would help you plan anything at the block level really.

Buffer header

Pinaki Prasad Das, October 17, 2011 - 2:03 am UTC

First of all, let me thank you, and please accept my apologies, as I failed to explain my intention.
Consider the following situation:

1) If the buffer advisory shows that there will not be any improvement in DB time, by any factor, from increasing the buffer cache, should I then plan to configure recycle and keep pools as follows?

a) Buffers brought into the default buffer cache by full table scans of large tables, or by index fast full scans, will be flushed out very quickly, since they are placed at the LRU end of the list. They may also put pressure on buffer space.

For these I was thinking of the recycle pool.

b) Small tables that are frequently read by full table scan --> keep pool.

So in a real-world scenario, would this help us manage the default buffer pool efficiently?

Thanks again in advance for your valued clarification on this.

Regards,
Pinaki
Tom Kyte
October 17, 2011 - 4:00 am UTC

1) I would not, no, the use of the recycle pool/keep pool is in general so unique, so few and far between, that for all practical purposes and intents - you should ignore their existence.

a) those buffers *already are* flushed out very rapidly, you don't need to worry about them in general.

b) if you use them frequently, they'll already be in buffer cache, that is what the buffer cache does.


I think you would get about 10 orders of magnitude more improvement tuning wise by looking at the APPLICATION and how it interacts with the database.

Does it do really tiny queries - but does them a billion times in a loop - fix it, make it one big sql statement. Get rid of slow by slow procedural code

Does it not use binds appropriately? fix it, fix it NOW - don't tell me "we cannot" fix it - that is never true. What is true is that you CHOOSE NOT TO fix it - big difference.

Does it have inefficient sql? tune it.

Does it do silly things like:

select count(*) from table where ....
if (count(*) was greater than zero)
then
do something;
end if

if so, fix it, it should just be:

do something;

you never need to count first. Look for ANTI-PATTERNS like that - learn to recognize and eradicate them
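In PL/SQL the anti-pattern and its fix look roughly like this (the table and predicate are made up for illustration):

```sql
-- Anti-pattern: an extra query just to decide whether to do the work
declare
  l_cnt number;
begin
  select count(*) into l_cnt from t where status = 'PENDING';
  if l_cnt > 0 then
    update t set status = 'PROCESSED' where status = 'PENDING';
  end if;
end;
/

-- Fix: just do the work; if no rows match, the update touches nothing,
-- and sql%rowcount reports how many rows it did touch if you need it
begin
  update t set status = 'PROCESSED' where status = 'PENDING';
end;
/
```

The count-first version reads the data twice and, worse, the answer can change between the count and the update.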



that'll get you going 1000000000 times faster - twiddling with the keep and recycle pool - maybe 1%, 2% - single digit percents.


Our time (mine and yours) is spent better elsewhere.

PGA supersizing

pavan.agrawal, October 18, 2011 - 2:05 am UTC

Hi Tom,

Can you please help me to understand the Oracle internal algorithm for PGA based on _pga_max_size, _smm_max_size and _smm_px_max_size settings ?

For example: pga_aggregate_target = 8gb
_pga_max_size = 4gb
_smm_max_size = 1.6gb
_smm_px_max_size = 1gb

Thanks in advance ... !

Regards,
Pavan
Tom Kyte
October 18, 2011 - 2:24 am UTC

Nope - I won't even go there.


You set pga_aggregate_target and you are done. A line from a famous movie comes to mind:


Pay no attention to that man behind the curtain.



Do the same with underscore parameters. You will never get me to talk about them. If you decide to go down that path, I personally put you in the class of the most dangerous people - and I actively speak out against such people.

If you have read a paper "undocumented secrets for super sizing your pga", I would recommend you just forget its existence. We've covered that article, and its inaccuracies, in detail a long time ago.

Rogerio, October 18, 2011 - 6:09 pm UTC

Hi Tom. We have an intensive SQL statement that reads information from the database to generate a Unix binary file for a batch process to consume. This query has been taking very different times over recent months. We ran a test on Day 1 (D1) and on Day 2 (D2) with all applications off (only this query was running).
On D1 the query took 4 hours to finish; on D2 it took 9 hours.
On D1 the query did 3,807,902 physical reads and 282,169,035 logical reads.
On D2 it did 4,890,886 physical reads and 286,664,400 logical reads.

The only difference between the two dates is that on D1 we had shut down the whole environment (Unix and database). As it is reading almost the same amount of data from the SGA (95%) and the times are very different, can I assume that we are probably facing problems related to memory configuration, kernel parameters, or swap?

Your help will be really appreciated.

Thanks in advance.
Tom Kyte
October 19, 2011 - 7:45 am UTC

search this site for:

"secondary sga"

that is what is happening to you.

On Day 1 - you had data in the SGA and you had data *in the buffered file system cache you were using*. Many of your physical IO's were not true physical IOs, they were reads from the file system cache. To Oracle - they look the same however.

On Day 2 - you reset the entire machine, so there was nothing in the file system cache. All physical IOs were true physical IOs. All of them hit the disk.

No kernel settings need adjustment.

No 'swap' problems.

Just an understanding of buffered file systems is needed - or just make your buffer cache larger so we can do things even faster (getting from buffer cache is MUCH faster than getting from OS file system cache, getting from OS file system cache is MUCH faster than getting from disk - which one would you like to do?)

I'd be worried about a query that took

286,664,400 Logical IOs

it sounds to me that some developer hinted that query to use indexes - when it OBVIOUSLY should not be. That it only needs less than 5 million physical IOs - but needs almost 300 MILLION logical IOs tells me "this query has a hint and the developer did that and it is just so wrong"

read this:

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6749454952894#6760861174154

for why I say that...

Rogerio, October 19, 2011 - 8:37 am UTC

Hi Tom, thanks for your feedback. Actually the query does have some problems, but there is no hint. You misunderstood the days (hehe).
On D1 we had shut down the database/Unix, and it was faster than D2, when we had not shut down the environment.

The query follows.
select
distinct CO.CO_ID ,
decode(CR.PARTY_ROLE_ID,
null ,
CO.DEALER_ID,null ) ,
CO.PLCODE ,
nvl(CD.HLCODE,0) ,
nvl(CO.ARPCODE,0) ,nvl(CO.AN_PACKAGE_ID,0) ,nvl(CO.CO_ARCHIVE,'N') ,
to_char(CO.TRIAL_END_DATE,'YYYYMMDDHH24MISS') ,nvl(CO.CO_REL_TYPE,'N') ,
nvl(COH.LEC_BILLING_IND,'N') ,CO.CO_CODE ,
CO.AGREEMENT_TYPE ,nvl(CP.IVR_LNG_ID,0) ,CO.CUSTOMER_ID ,
CB.CUSTOMER_SET_ID ,CA.PRGCODE ,BCAH.BILLCYCLE,
IDN.DAP_ID ,
DN.UB_ID ,DN.FLEET_ID ,DN.MEMBER_ID ,IDN.HLCODE
from
CONTRACT_ALL CO ,
CONTRACT_OPTIONS_HISTORY COH ,
CONTR_DEVICES CD ,
CUSTOMER_BASE CB ,
CUSTOMER_ALL CA ,
CUSTOMER_ROLE CR ,
BILLCYCLE_ASSIGNMENT_HISTORY BCAH ,
CONTR_PREPAID_LC_PROFILE CP ,
CONTR_SERVICES_CAP CSC ,
DIRECTORY_NUMBER DN ,
(select CI.CO_ID ,DAP_ID ,IDF.HLCODE from IDEN_FLEET IDF ,CONTR_IDEN CI where (IDF.UB_ID=CI.UB_ID and IDF.FLEET_ID=CI.FLEET_ID)) IDN
where
(((((((((((((((((
CO.CONTRACT_TEMPLATE is null and
CO.NOT_VALID is null ) and
CO.CO_ID=COH.CO_ID(+)) and
CO.CO_ID=CD.CO_ID(+)) and
CO.CO_ID=CP.CO_ID(+)) and
CO.CUSTOMER_ID=CB.CUSTOMER_ID) and
CB.CUSTOMER_ID=CA.CUSTOMER_ID) and
sysdate > COH.VALID_FROM(+)) and
(COH.VALID_FROM is null or
COH.VALID_FROM=(select max(VALID_FROM) from CONTRACT_OPTIONS_HISTORY where ((VALID_FROM<= sysdate and CO_ID=CO.CO_ID) and
LEC_BILLING_IND is not null )))) and
CO.SCCODE in (select SCCODE from MPDSCTAB where nvl(RATE_NETWORK_SERVICES,'N')<>'N')) and
CA.CUSTOMER_ID=BCAH.CUSTOMER_ID) and
BCAH.SEQNO=(select max(SEQNO) from BILLCYCLE_ASSIGNMENT_HISTORY where (CUSTOMER_ID=CA.CUSTOMER_ID and VALID_FROM<=sysdate))) and
CO.CONTRACT_TYPE_ID not in (2,3,13)) and
CO.DEALER_ID=CR.CUSTOMER_ID(+)) and
CR.PARTY_ROLE_ID(+)= 14) and
CO.CO_ID=IDN.CO_ID(+)) and
CSC.CO_id=CO.CO_ID(+)) and
CSC.DN_ID=DN.DN_ID(+)) order by 1;

The plan follows too.
----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 443 | | 126K (2)| 00:25:20 |
| 1 | SORT UNIQUE | | 2 | 443 | | 126K (67)| 00:25:20 |
| 2 | UNION-ALL | | | | | | |
|* 3 | FILTER | | | | | | |
| 4 | NESTED LOOPS OUTER | | 1 | 179 | | 42836 (3)| 00:08:35 |
| 5 | NESTED LOOPS OUTER | | 1 | 170 | | 42832 (3)| 00:08:34 |
| 6 | NESTED LOOPS | | 1 | 162 | | 42831 (3)| 00:08:34 |
| 7 | NESTED LOOPS | | 1 | 157 | | 42830 (3)| 00:08:34 |
| 8 | NESTED LOOPS | | 5 | 720 | | 42821 (3)| 00:08:34 |
| 9 | NESTED LOOPS | | 5 | 675 | | 42811 (3)| 00:08:34 |
| 10 | NESTED LOOPS OUTER | | 5 | 635 | | 42801 (3)| 00:08:34 |
|* 11 | HASH JOIN RIGHT OUTER | | 5 | 505 | | 42801 (3)| 00:08:34 |
|* 12 | TABLE ACCESS FULL | CONTRACT_OPTIONS_HISTORY | 1 | 24 | | 2 (0)| 00:00:01 |
|* 13 | TABLE ACCESS FULL | CONTRACT_ALL | 256 | 19712 | | 42798 (3)| 00:08:34 |
| 14 | TABLE ACCESS BY INDEX ROWID | CONTR_PREPAID_LC_PROFILE | 1 | 26 | | 0 (0)| 00:00:01 |
|* 15 | INDEX UNIQUE SCAN | PK_CONTR_PREPAID_LC_PROFILE | 1 | | | 0 (0)| 00:00:01 |
| 16 | TABLE ACCESS BY INDEX ROWID | CUSTOMER_BASE | 1 | 8 | | 2 (0)| 00:00:01 |
|* 17 | INDEX UNIQUE SCAN | PK_CUSTOMER_BASE | 1 | | | 1 (0)| 00:00:01 |
| 18 | TABLE ACCESS BY INDEX ROWID | CUSTOMER_ALL | 1 | 9 | | 2 (0)| 00:00:01 |
|* 19 | INDEX UNIQUE SCAN | PKCUSTOMER_ALL | 1 | | | 1 (0)| 00:00:01 |
| 20 | TABLE ACCESS BY INDEX ROWID | BILLCYCLE_ASSIGNMENT_HISTORY | 1 | 13 | | 2 (0)| 00:00:01 |
|* 21 | INDEX UNIQUE SCAN | PK_BILLCASSHIST | 1 | | | 1 (0)| 00:00:01 |
| 22 | SORT AGGREGATE | | 1 | 18 | | | |
|* 23 | TABLE ACCESS BY INDEX ROWID | BILLCYCLE_ASSIGNMENT_HISTORY | 1 | 18 | | 4 (0)| 00:00:01 |
|* 24 | INDEX RANGE SCAN | PK_BILLCASSHIST | 1 | | | 3 (0)| 00:00:01 |
|* 25 | TABLE ACCESS BY INDEX ROWID | MPDSCTAB | 1 | 5 | | 1 (0)| 00:00:01 |
|* 26 | INDEX UNIQUE SCAN | PKMPDSCTAB | 1 | | | 0 (0)| 00:00:01 |
|* 27 | INDEX UNIQUE SCAN | UC_CUSTOMER_ROLE | 1 | 8 | | 1 (0)| 00:00:01 |
| 28 | TABLE ACCESS BY INDEX ROWID | CONTR_DEVICES | 3 | 27 | | 4 (0)| 00:00:01 |
|* 29 | INDEX RANGE SCAN | CODEVCDV | 3 | | | 2 (0)| 00:00:01 |
| 30 | SORT AGGREGATE | | 1 | 24 | | | |
|* 31 | INDEX RANGE SCAN | IU_CONOHIS_COVAL | 1 | | | 0 (0)| 00:00:01 |
|* 32 | FILTER | | | | | | |
| 33 | NESTED LOOPS OUTER | | 1 | 264 | | 83775 (2)| 00:16:46 |
| 34 | NESTED LOOPS OUTER | | 1 | 234 | | 83771 (2)| 00:16:46 |
| 35 | NESTED LOOPS OUTER | | 1 | 225 | | 83767 (2)| 00:16:46 |
|* 36 | HASH JOIN OUTER | | 1 | 207 | | 83765 (2)| 00:16:46 |
| 37 | TABLE ACCESS BY INDEX ROWID | CONTR_SERVICES_CAP | 3 | 36 | | 3 (0)| 00:00:01 |
| 38 | NESTED LOOPS | | 11 | 2013 | | 83763 (2)| 00:16:46 |
|* 39 | HASH JOIN | | 3 | 513 | | 83754 (2)| 00:16:46 |
| 40 | NESTED LOOPS OUTER | | 4 | 664 | | 83751 (2)| 00:16:46 |
| 41 | NESTED LOOPS OUTER | | 4 | 632 | | 83749 (2)| 00:16:45 |
| 42 | NESTED LOOPS | | 4 | 528 | | 83749 (2)| 00:16:45 |
| 43 | NESTED LOOPS | | 7 | 343 | | 83720 (2)| 00:16:45 |
|* 44 | HASH JOIN | | 7 | 287 | 15M| 83706 (2)| 00:16:45 |
|* 45 | HASH JOIN | | 400K| 10M| 11M| 65867 (2)| 00:13:11 |
| 46 | VIEW | VW_SQ_1 | 400K| 7440K| | 10520 (5)| 00:02:07 |
| 47 | HASH GROUP BY | | 400K| 9398K| 27M| 10520 (5)| 00:02:07 |
|* 48 | TABLE ACCESS FULL | BILLCYCLE_ASSIGNMENT_HISTORY | 404K| 9475K| | 7679 (6)| 00:01:33 |
|* 49 | TABLE ACCESS FULL | CUSTOMER_ALL | 6884K| 59M| | 47814 (1)| 00:09:34 |
|* 50 | TABLE ACCESS FULL | BILLCYCLE_ASSIGNMENT_HISTORY | 8086K| 100M| | 7394 (3)| 00:01:29 |
| 51 | TABLE ACCESS BY INDEX ROWID| CUSTOMER_BASE | 1 | 8 | | 2 (0)| 00:00:01 |
|* 52 | INDEX UNIQUE SCAN | PK_CUSTOMER_BASE | 1 | | | 1 (0)| 00:00:01 |
|* 53 | TABLE ACCESS BY INDEX ROWID | CONTRACT_ALL | 1 | 83 | | 17 (0)| 00:00:01 |
|* 54 | INDEX RANGE SCAN | FKICOCSID | 23 | | | 2 (0)| 00:00:01 |
| 55 | TABLE ACCESS BY INDEX ROWID | CONTR_PREPAID_LC_PROFILE | 1 | 26 | | 0 (0)| 00:00:01 |
|* 56 | INDEX UNIQUE SCAN | PK_CONTR_PREPAID_LC_PROFILE | 1 | | | 0 (0)| 00:00:01 |
|* 57 | INDEX UNIQUE SCAN | UC_CUSTOMER_ROLE | 1 | 8 | | 1 (0)| 00:00:01 |
|* 58 | TABLE ACCESS FULL | MPDSCTAB | 6 | 30 | | 3 (0)| 00:00:01 |
|* 59 | INDEX RANGE SCAN | PK_CONTR_SERVICES_CAP | 3 | | | 2 (0)| 00:00:01 |
|* 60 | TABLE ACCESS FULL | CONTRACT_OPTIONS_HISTORY | 1 | 24 | | 2 (0)| 00:00:01 |
| 61 | TABLE ACCESS BY INDEX ROWID | DIRECTORY_NUMBER | 1 | 18 | | 2 (0)| 00:00:01 |
|* 62 | INDEX UNIQUE SCAN | PKDIRECTORY_NUMBER | 1 | | | 1 (0)| 00:00:01 |
| 63 | TABLE ACCESS BY INDEX ROWID | CONTR_DEVICES | 3 | 27 | | 4 (0)| 00:00:01 |
|* 64 | INDEX RANGE SCAN | CODEVCDV | 3 | | | 2 (0)| 00:00:01 |
| 65 | VIEW PUSHED PREDICATE | | 1 | 30 | | 4 (0)| 00:00:01 |
| 66 | NESTED LOOPS | | 1 | 27 | | 4 (0)| 00:00:01 |
| 67 | TABLE ACCESS BY INDEX ROWID | CONTR_IDEN | 1 | 13 | | 3 (0)| 00:00:01 |
|* 68 | INDEX UNIQUE SCAN | PK_CONTR_IDEN | 1 | | | 2 (0)| 00:00:01 |
| 69 | TABLE ACCESS BY INDEX ROWID | IDEN_FLEET | 25085 | 342K| | 1 (0)| 00:00:01 |
|* 70 | INDEX UNIQUE SCAN | PK_IDEN_FLEET | 1 | | | 0 (0)| 00:00:01 |
| 71 | SORT AGGREGATE | | 1 | 24 | | | |
|* 72 | INDEX RANGE SCAN | IU_CONOHIS_COVAL | 1 | | | 0 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------------------------

Unfortunately, we already know that this query is quite expensive. I'm thinking of creating materialized views in another schema with different tablespaces/disks and creating a trigger to make this query run in parallel when the application logs into the system.
Tom Kyte
October 19, 2011 - 6:07 pm UTC

post tkprofs that include wait events from both environments.
