How to approach?
A reader, July 28, 2004 - 1:20 am UTC
 
 
Hi Tom,
That was a very useful input. I was interested in knowing what your approach would be. How would you begin? Say your manager comes and says we need to prepare something for capacity planning... where do we begin?
o Should we first look into the storage part? Starting with business questions like the number of bookings in a year, and then based on these bookings estimate the number of rows per annum. Then calculate the size of each table, then the tablespace, based on the number of rows. Then estimate the size of the database.
o Then run some transactions, measure the CPU usage of each, and sum them up to estimate the total CPU requirement, etc.
I know this is a big topic to answer briefly. I just need guidance on how to approach capacity planning, maybe as steps. Any good link to a document would be very helpful.
Thanks and Regards
 
 
July 28, 2004 - 8:11 am UTC
 
 
tell them "well, we need a good test machine and some load generation software so we can get some data points to see how the real live system will behave".
You can calculate potential worst case and best case scenarios but that is about it.
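That worst-case/best-case arithmetic is just multiplication; here is a minimal sketch (the booking volumes, row sizes, and index/overhead factors are made-up placeholders you would replace with your own business estimates):

```python
# Rough storage bounds from a business estimate: bookings/year -> rows -> bytes.
# Every input below is a hypothetical placeholder, not a measurement.

def size_range_gb(rows_per_year, years, min_row_bytes, max_row_bytes,
                  index_factor=0.5, overhead_factor=1.2):
    """Best-case and worst-case database size in GB.

    index_factor    - assumed extra space for indexes, as a fraction of table data
    overhead_factor - assumed multiplier for block overhead / pctfree
    """
    rows = rows_per_year * years
    best = rows * min_row_bytes * (1 + index_factor) * overhead_factor
    worst = rows * max_row_bytes * (1 + index_factor) * overhead_factor
    gb = 1024 ** 3
    return best / gb, worst / gb

best, worst = size_range_gb(rows_per_year=10_000_000, years=5,
                            min_row_bytes=80, max_row_bytes=400)
print(f"estimated size: {best:.1f} GB to {worst:.1f} GB")
```

The spread between the two bounds is exactly why a benchmark is the only way to narrow the answer down.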
 
 
 
 
capacity/performance improvements 8i vs 9i vs 10g
Johann, March 14, 2006 - 5:44 am UTC
 
 
Hi Tom,
Thanks for this great site!  Learned a lot by just doing a search and reading the posts.  Although I never figured out how to post a new question - whenever I log on, status is always "please ask a question later" and I can't find a link to ask a new question.  So I hope you don't mind me posting this as a followup.
I have quite a predicament here.  We're currently hosting a large (well, by local standards it is quite large) yellow-pages site, which uses JDBC and 8i.  For various reasons, it's been stuck on 8i for years now.  Management is considering investing in an upgrade, but wants to see projected performance benefits before they do.  They can't wait for me to do a lab test - they said they will be satisfied with "rated" capacities.
From your experience, is there any measurable performance benefit from just upgrading the database, without changing any code?  (I know, I know - to maximize the investment we should change some code to take advantage of the new features, but for now I will have to present that as an added benefit.)  Is there any document/case study/etc. I can use?
Thanks a lot
Johann 
 
March 14, 2006 - 11:00 am UTC
 
 
No.  because no one else is using your system as their benchmark.
tell them the answer is 1.53215
it is as meaningful as their request really. 
 
 
 
Thanks!
Johann, March 14, 2006 - 7:52 pm UTC
 
 
Read your post first thing in the morning.  And you just made my day =D 
 
 
reda, March 15, 2006 - 2:48 am UTC
 
 
Hello Tom,
Tell them "well, we need the best size of a table where the count of rows = 4,000,000",
so you can calculate the best size.
Thanks
 
March 15, 2006 - 4:39 pm UTC
 
 
the answer will be between 8,000,000 and infinity.  That is as close as you'll get given "4,000,000 rows"
4,000,000 rows is meaningless in this context. 
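To illustrate why (a hypothetical calculation, since the question never gives an average row length): the same 4,000,000 rows can occupy wildly different amounts of space depending on what is in each row.

```python
# 4,000,000 rows at a few hypothetical average row lengths.
rows = 4_000_000
sizes_mb = {b: rows * b / 1024 ** 2 for b in (2, 100, 4000)}
for row_bytes, mb in sizes_mb.items():
    print(f"{row_bytes:>5} bytes/row -> {mb:>10,.1f} MB")
```

At a couple of bytes per row you get roughly the 8,000,000-byte floor mentioned above; with wide rows there is no meaningful upper bound at all.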
 
 
 
Number of extents
reda, May 07, 2006 - 9:23 am UTC
 
 
Hello Tom,
Tell them "well, we need the best size of a table where the count of rows = 4,000,000",
so you can calculate the best size - and please tell me what I should do about the 100 extents in this table.
Thanks
 
 
May 07, 2006 - 11:39 am UTC
 
 
er? 
 
 
Extents
redaead@yahoo.com, May 07, 2006 - 2:28 pm UTC
 
 
Hello Tom,
I need the best size for a table where the count of rows = 4,000,000,
so you can calculate the best size - and please tell me what I should do about the
100 extents in this table.
select owner "Owner",
       segment_name "Segment Name",
       segment_type "Type",
       tablespace_name "Tablespace",
       extents "Ext",
       max_extents "Max"
from dba_segments
where ((max_extents - extents) >=3) 
and owner ='ADMIN'
order by extents;
Segment Name            Type    Tablespace    Ext    Max
POS_INVOICES_ITEMS    TABLE    NAHDI_TRANS    93  47483645
Please reply quickly.
thanks
 
 
May 07, 2006 - 4:16 pm UTC
 
 
I don't get the "rows=4000000" bit - are you trying to ask "how much space is this table with about 4.7 million records taking?"
If you only have 100 extents, you are doing just fine, if you need to add more - no problem.  100 extents is great, so is 1,000, so is 1.  They are all just fine. 
 
 
 
extents 
A reader, May 08, 2006 - 8:41 am UTC
 
 
Hello Tom,
OK: how much space is this table with about 4.7 million records taking?
And if I want it to have only 3 extents, how do I do that? How should I tune the table and the SGA?
Many thanks
 
 
 
May 08, 2006 - 10:33 am UTC
 
 
query dba_segments, it'll tell you allocated space in bytes, blocks...
Why would you want "3" - what possible logic did you use to come up with "3". 
 
 
 
Tuning
reda, May 09, 2006 - 8:57 am UTC
 
 
Hello Tom,
I want to reduce the number of extents.
This is what I did:
1-CREATE TABLESPACE POS_TRANS DATAFILE 
  'F:\ORACLE\ORADATA\PROD\POS_TRANS_01.DBF' SIZE 500M AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED,
  'F:\ORACLE\ORADATA\PROD\POS_TRANS_02.DBF' SIZE 500M AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED,
  'F:\ORACLE\ORADATA\PROD\POS_TRANS_03.DBF' SIZE 500M AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED
LOGGING
DEFAULT STORAGE ( 
                  INITIAL     40K
                  NEXT        40K
                  MINEXTENTS  1
                  MAXEXTENTS  505
                  PCTINCREASE 50
                )
ONLINE
PERMANENT
EXTENT MANAGEMENT DICTIONARY;
2-
alter table P_INV_ITEMS move tablespace POS_TRANS;
ALTER INDEX ADMIN.PK_P_INV_ITEMS REBUILD;
The number of extents for P_INV_ITEMS went from 39 down to 1 after this.
Will that resolve the slowness of the database? What steps would you suggest? Please help me.
Thanks
 
May 09, 2006 - 9:51 am UTC
 
 
No.
You.
Don't.
The number of extents is not affecting your runtime query performance.   
 
 
 
How did you decipher that??
Kevin, May 09, 2006 - 10:23 am UTC
 
 
That's the worst case of stuck keyboard I've seen... ;0) 
 
 
performance
A reader, May 10, 2006 - 7:44 am UTC
 
 
Hi Tom,
OK, tell me:
what is affecting my runtime query performance?
Thanks in advance
Regards
reda 
 
May 10, 2006 - 9:20 am UTC
 
 
everyone else using the database?
 
 
 
Basic Metrics for Forecasting
Suraj Sharma, May 04, 2007 - 2:24 am UTC
 
 
Hi Tom,
First of all, thanks a lot once again for sharing all the knowledge you have here.
I am new to capacity planning and forecasting. Please let me know the general metrics we have to consider during capacity forecasting, or some questions we can put to customers when they come to us for the same.
Thanks,
Suraj 
 
capacity planning
jaypee, October 01, 2007 - 6:02 am UTC
 
 
Hi Tom,
Can you guide me through the steps for hardware estimation/capacity planning for a new Oracle database (I need to know the RAM (SGA) and appropriate CPU speed)? The statistics I have in hand are: Oracle database size = 300 GB and number of transactions per day ~ 70,000.
Thanks in advance for your contributions and patience!!
October 03, 2007 - 2:45 pm UTC
 
 
hah.
the size of the database and number of transactions... not very useful.
what if 90% of the transactions happen at 3pm?
what if most of the transactions affect 300 rows (versus someone else whose transactions affect 1 row, versus someone else at 3,000 rows)?
ops$tkyte%ORA10GR2> create table t ( x int );
Table created.
ops$tkyte%ORA10GR2> insert into t values ( 0 );
1 row created.
ops$tkyte%ORA10GR2> set timing on
ops$tkyte%ORA10GR2> begin
  2          for i in 1 .. 70000
  3          loop
  4                  update t set x = x+1;
  5                  commit;
  6          end loop;
  7  end;
  8  /
PL/SQL procedure successfully completed.
Elapsed: 00:00:15.97
given your requirements, I can apparently satisfy your needs on my laptop in about 16 seconds. 
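The same demonstration can be sketched outside Oracle with Python's bundled SQLite (timings will differ, but the point - 70,000 trivial transactions are nothing - survives the translation):

```python
# 70,000 single-row update-and-commit "transactions", SQLite edition.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (0)")

start = time.perf_counter()
for _ in range(70_000):
    conn.execute("UPDATE t SET x = x + 1")
    conn.commit()
elapsed = time.perf_counter() - start

(x,) = conn.execute("SELECT x FROM t").fetchone()
print(f"{x} transactions in {elapsed:.2f}s")
```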
 
 
 
capacity planning
A reader, May 24, 2010 - 11:42 am UTC
 
 
Using data dictionary views like DBA_HIST%, how do I calculate the CPU used, memory used, number of I/Os, and I/O latency (in ms) over a 3-hour interval?
Is there a way to point out which user_id (or session_id) contributed most to CPU usage or any other metric?
I can get this in an AWR report, but I would like to store it in a user-defined table so I can plot a graph of my choice.
Thanks,
  
May 24, 2010 - 1:13 pm UTC
 
 
averaging over many hours isn't going to be very helpful for capacity planning, is it?  What would it mean for capacity if you used N units of cpu in 3 hours?  If you were really busy for 30 minutes and not at all busy for 2.5 hours - you would not have a very good picture of what is going on, would you...
You'd want detailed data (snapshots as close together as you can get them) and to graph those - and that is already what EM does.
I don't have a query on hand for you - I'd just use the existing user interfaces if I wanted to look at that over time.
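The averaging trap described above is easy to see with made-up numbers: a half-hour burst that saturates the CPU disappears into a comfortable-looking 3-hour average.

```python
# 36 five-minute CPU samples: 30 minutes at 95%, then 2.5 idle hours at 5%.
samples = [95] * 6 + [5] * 30
average = sum(samples) / len(samples)
peak = max(samples)
print(f"3-hour average: {average:.0f}% CPU, peak 5-minute sample: {peak}% CPU")
```

Planning capacity from the 20% average would miss the fact that the box is already at its limit for half an hour a day.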
 
 
capacity planning
A reader, May 24, 2010 - 1:28 pm UTC
 
 
Hi Tom,
Thanks for the quick response.  EM gives the information I want.  However, if I want history going back a month or two, I thought it would be helpful to store the related information in user-defined tables; that way we can draw a graph as needed.
If you get time to generate the SQL (or at least the list of data dictionary views required for this) and share it with us, that would be really helpful.
Thanks,
 
May 24, 2010 - 1:31 pm UTC
 
 
You can extend the amount of data that EM keeps to be as much as you want, that would be the best way - you don't have to write any code at all. 
 
 
Production Server Design
Rajeshwaran, Jeyabal, December 12, 2011 - 10:50 pm UTC
 
 
Tom:
We are designing an application; below are the memory details and the explain plan for a query which executes for more than 2 hours.
a) I know that you would ask to look at TKPROF output for tuning this query, but unfortunately the DBA is unavailable to get the trace files.
b) The explain plan shows TempSpc as 602MB, but the PGA memory is 512MB - do you think we need to increase the PGA and SGA memory settings?
c) Can you provide any documentation link about how to set the optimal size for the PGA / SGA memory settings?
app@appdev>
app@appdev> show parameter sga;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 1G
sga_target                           big integer 1G
app@appdev>
app@appdev> show parameter pga;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target                 big integer 512M
app@appdev>
app@appdev>
app@appdev>
app@appdev>
app@appdev> select sum(bytes)/1024 Kbytes_alloc,
  2        sum(maxbytes)/1024 Kbytes_max,
  3        tablespace_name
  4  from sys.dba_temp_files
  5  group by tablespace_name
  6  /
KBYTES_ALLOC KBYTES_MAX TABLESPACE_NAME
------------ ---------- ------------------------------
     5120000          0 TEMP
Elapsed: 00:00:00.36
app@appdev>
app@appdev> exec show_space(user,'IDX_STG_INBOUND','INDEX PARTITION','P_4');
l_total_blocks****************  13216
l_total_bytes*****************  108265472
l_unused_blocks***************  0
l_unused_bytes****************  0
l_last_used_extent_file_id****  6
l_last_used_extent_block_id***  412800
l_last_used_block*************  928
l_unformatted_blocks**********  0
l_unformatted_bytes***********  0
l_fs1_blocks******************  0
l_fs1_bytes*******************  0
l_fs2_blocks******************  5
l_fs2_bytes*******************  40960
l_fs3_blocks******************  0
l_fs3_bytes*******************  0
l_fs4_blocks******************  0
l_fs4_bytes*******************  0
l_full_blocks*****************  13055
l_full_bytes******************  106946560
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.17
app@appdev>
app@appdev>
Execution Plan
----------------------------------------------------------
Plan hash value: 682711079
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name            | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                 |  2699K|   502M|       |   367K  (1)| 01:13:35 |       |       |
|   1 |  WINDOW SORT                |                 |  2699K|   502M|   602M|   367K  (1)| 01:13:35 |       |       |
|   2 |   WINDOW SORT               |                 |  2699K|   502M|   602M|   367K  (1)| 01:13:35 |       |       |
|   3 |    WINDOW SORT              |                 |  2699K|   502M|   602M|   367K  (1)| 01:13:35 |       |       |
|   4 |     WINDOW SORT             |                 |  2699K|   502M|   602M|   367K  (1)| 01:13:35 |       |       |
|*  5 |      VIEW                   |                 |  2699K|   502M|       | 23095   (1)| 00:04:38 |       |       |
|   6 |       WINDOW SORT           |                 |  2699K|    64M|    93M| 23095   (1)| 00:04:38 |       |       |
|   7 |        PARTITION LIST SINGLE|                 |  2699K|    64M|       |  3557   (1)| 00:00:43 |   KEY |   KEY |
|   8 |         INDEX FAST FULL SCAN| IDX_STG_INBOUND |  2699K|    64M|       |  3557   (1)| 00:00:43 |    56 |    56 |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter("CLM_RNK" IS NOT NULL)
app@appdev>
app@appdev>
app@appdev> select tablespace,segtype,segfile#,segblk#,extents,blocks
  2  from v$tempseg_usage;
TABLESPACE                      SEGTYPE     SEGFILE#    SEGBLK#    EXTENTS     BLOCKS
------------------------------- --------- ---------- ---------- ---------- ----------
TEMP                            LOB_DATA        1502     287872          1        128
TEMP                            SORT            1502     284672        119      15232
TEMP                            DATA            1502     283520          1        128
TEMP                            LOB_DATA        1502     283392          1        128
TEMP                            INDEX           1502     283264          1        128
TEMP                            LOB_DATA        1502     283136          1        128
6 rows selected.
Elapsed: 00:00:00.40
app@appdev>
app@appdev> select event#,event,p1text,p1,p2text,p2,p3text
  2  from gv$session
  3  where osuser ='testuser'
  4  /
EVENT# EVENT                          P1TEXT                             P1 P2TEXT                    P2 P3TEXT
------ ------------------------------ ------------------------------ ------ -------------------- ------- ----------
   347 SQL*Net message to client      driver id                      ###### #bytes                     1
   197 direct path read temp          file number                      1502 first dba             267591 block cnt
   351 SQL*Net message from client    driver id                      ###### #bytes                     1
   351 SQL*Net message from client    driver id                      ###### #bytes                     1
Elapsed: 00:00:00.20
app@appdev> 
 
December 13, 2011 - 7:17 am UTC
 
 
the temp area referenced in a plan is a total guess.  It is based on the estimated cardinality and a 'generic' guess at how much memory might be available at runtime (which might not actually be available at runtime...)
if your DBA is "unavailable", you won't be able to fix anything - suggest you make them "available"
Then they, as the DBA, would know to review an AWR or statspack report to see what the PGA advisor says - it would be the thing that would tell you if increasing the PGA settings would reduce single and multi-pass sorts to disk.
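For intuition only, the arithmetic behind "will this sort spill?" can be sketched like so. The 5%-of-target cap on a single serial work area used here is a commonly cited rule of thumb, not a documented guarantee - the PGA advisor in AWR is the authoritative source for your system:

```python
# Compare the plan's estimated sort size with what one work area might get.
pga_aggregate_target_mb = 512   # from "show parameter pga" above
estimated_sort_mb = 602         # TempSpc column from the plan above

workarea_cap_mb = pga_aggregate_target_mb * 0.05  # assumed ~5% rule of thumb
spills = estimated_sort_mb > workarea_cap_mb
print(f"~{estimated_sort_mb}MB sort vs ~{workarea_cap_mb:.0f}MB work area: "
      f"{'expect a one-pass or multi-pass spill to temp' if spills else 'may fit in memory'}")
```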
 
 
 
Production Server Design   
Rajeshwaran, Jeyabal, December 15, 2011 - 1:55 am UTC
 
 
Tom:
This is what I see on the development machine.
1) How does workarea_size_policy become AUTO when pga_aggregate_target = 0? This doesn't match what the Oracle docs say. We have not changed any parameter settings on the development machine; we just created a database accepting the defaults.
2) I am unable to understand this quote from the AWR PGA advisor:
"When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value where Estd PGA Overalloc Count is 0". Does that mean I have to set pga_aggregate_target = 86MB (based on the AWR PGA Memory Advisory) to achieve better performance? Is my understanding correct? Please confirm.
rajesh@ORA11GR2> show parameter workarea;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
workarea_size_policy                 string      AUTO
rajesh@ORA11GR2>
rajesh@ORA11GR2> show parameter pga;
NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------
pga_aggregate_target                 big integer 0
rajesh@ORA11GR2>
The Oracle docs say this:
http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams189.htm#CHDBJCDE
<quote>
Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL
</quote>
Running the above query in Development database and PGA Advisor from AWR shows this.
PGA Memory Advisory                     DB/Inst: ORA11GR2/ora11gr2  Snap: 3247
-> When using Auto Memory Mgmt, minimally choose a pga_aggregate_target value
   where Estd PGA Overalloc Count is 0
                                       Estd Extra    Estd P Estd PGA
PGA Target    Size           W/A MB   W/A MB Read/    Cache Overallo    Estd
  Est (MB)   Factr        Processed Written to Disk   Hit %    Count    Time
---------- ------- ---------------- ---------------- ------ -------- -------
        22     0.1            405.6            535.8   43.0        5 1.4E+06
        43     0.3            405.6            132.4   75.0        1 7.7E+05
        86     0.5            405.6            132.4   75.0        0 7.7E+05
       129     0.8            405.6            132.4   75.0        0 7.7E+05
       172     1.0            405.6            132.4   75.0        0 7.7E+05
       206     1.2            405.6            132.4   75.0        0 7.7E+05
       241     1.4            405.6            132.4   75.0        0 7.7E+05
       275     1.6            405.6            132.4   75.0        0 7.7E+05
       310     1.8            405.6            132.4   75.0        0 7.7E+05
       344     2.0            405.6            132.4   75.0        0 7.7E+05
       516     3.0            405.6            132.4   75.0        0 7.7E+05
       688     4.0            405.6            132.4   75.0        0 7.7E+05
     1,032     6.0            405.6            132.4   75.0        0 7.7E+05
     1,376     8.0            405.6            132.4   75.0        0 7.7E+05
          ------------------------------------------------------------- 
 
December 16, 2011 - 6:15 am UTC
 
 
it would be using manual at runtime since there is no setting for the pga aggregate target.
it should not be zero - why are you using manual pga memory management?
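As for reading the advisory the poster quotes, it is mechanical: take the smallest estimated target whose "Estd PGA Overalloc Count" is zero. A sketch, with the first few rows copied from the report above:

```python
# (pga_target_est_mb, estd_overalloc_count) pairs from the AWR advisory.
advisory = [(22, 5), (43, 1), (86, 0), (129, 0), (172, 0)]
minimal = min(mb for mb, overalloc in advisory if overalloc == 0)
print(f"smallest pga_aggregate_target with no estimated over-allocation: {minimal} MB")
```

That makes 86MB the floor below which Oracle estimates it would be forced over the target; it says nothing about 86MB being the best-performing setting - the Estd Time column is what speaks to that.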
 
 
Capacity Planning
Parag Patankar, February 02, 2012 - 1:36 am UTC
 
 
Hi Tom,
Somebody asked me a generic question: there is currently a mutual funds system running on Oracle 10g with approximately 1,500 users, and in the month of May they are planning to migrate to a new server, expecting a load of 5,000 users. Before migrating they want to ensure that they have done their homework "very well" and that it is a successful activity.
1. Can you suggest what things to look into for capacity planning?
2. From the Oracle DB side, what things are a "must" to study?
Please suggest.
thanks & regards
PJP
 
February 02, 2012 - 7:50 am UTC
 
 
they should benchmark it, simulate 5000 users on the new hardware.  That is the only way I know to do something like this.
 
 
 
Benchmarking
Parag J Patankar, February 02, 2012 - 11:12 pm UTC
 
 
Hi Tom,
I agree we should do benchmarking. But can you guide us in detail on how to do benchmarking, especially in terms of hardware and from the Oracle DB point of view?
regards
PJP 
February 03, 2012 - 9:59 am UTC
 
 
you get representative hardware
you program a simulation, something that simulates as close as possible what you are going to be doing in real life
you run it and measure, you probably run it many many many times with different user loads - to get many data points.
will it involve a lot of work?
yes
will it be fairly difficult?
probably
will it take a significant amount of time?
only if you do it right
do we need to do it?
only if you want to really know what probably will happen 
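The run-and-measure loop in those steps might be skeletonized like this; the transaction function is a placeholder (in a real benchmark it would drive your actual application against representative hardware), and the user counts are arbitrary:

```python
# Measure throughput at several simulated user loads.
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    time.sleep(0.001)  # stand-in for one unit of real application work

def measure(users, per_user=50):
    """Run users*per_user transactions on `users` workers; return tx/sec."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users * per_user):
            pool.submit(transaction)
    # leaving the with-block waits for all submitted work to finish
    elapsed = time.perf_counter() - start
    return users * per_user / elapsed

for users in (1, 5, 10):
    print(f"{users:>3} users -> {measure(users):8.0f} tx/sec")
```

Plotting throughput against user count across many such runs gives the data points the benchmark exists to collect.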
 
 
capacity planning
Arvind, April 26, 2013 - 1:23 pm UTC
 
 
Hi Tom,
Can you please give me some links, and maybe some book names, that are good for learning Oracle capacity planning.
Thanks,
Arvind 
 
More automated approach to Capacity Planning is available
Steve Lemme, February 27, 2024 - 12:00 am UTC
 
 
February 27, 2024 - 5:08 am UTC
 
 
Thanks for stopping by, Steve.