Question and Answer

Connor McDonald

Thanks for the question, Rahul.

Asked: September 13, 2018 - 6:41 am UTC

Last updated: September 18, 2018 - 3:17 am UTC

Version: 11.2.0.4

Viewed 1000+ times

You Asked

Hello,

Thanks for taking up this question.


I am interested in understanding how to optimize the hardware resources (cores, memory, disk space) required for Oracle without impacting performance.

There are multiple virtual machines on VMware, each with an Oracle database installed. Suppose I want to add another Oracle VM with the following specifics; how should I estimate the hardware resources required for it?

Concurrent Users: 20
Database Size: 500 GB
Database Server Processor Model: Intel Xeon
Database connected to an application (front end) in development environment.

Could you please let me know where to start from? Are there any basic principles/formulas to ascertain the hardware resource requirement?

and Connor said...

You need (hold hands 50cm apart) this much :-)

I'm being flippant, but there is also a touch of seriousness here. The number of users etc. is almost irrelevant. One user can destroy a server; conversely, a small server might easily handle thousands of users. Similarly, just one bad application can hammer a machine, whereas 20 well-written applications might only consume 20% of a server's resources.

What really matters is *load*. The challenge here is estimating that load when sometimes you don't have any real indicators as to what that load might be. And I've been in your position - where someone (often a manager :-)) demands to know exactly what specifications to purchase/requisition when all the information that is available is: "We have 10 developers who will use this".

So really, it comes down to how close you can get to a realistic estimate. But I stress: don't be forced by anyone into saying "Yes, this is a rock-solid number that we have got exactly accurate", because that will come back and bite you.

As a *rough* measure you could commence along the following lines:

- a common buffer cache size is 1-10% of database size, since *typically* applications focus on 1-10% of the total data. So for a 500 GB database, start with a 5-50 GB allowance for buffer cache.

- look at PGA usage for similar users/applications on your existing databases, and use that as a model for the new one. So if a typical PGA is 2 MB per session, then you're looking at 2 MB * n connected users.

- look at CPU consumption/load from existing databases and use that to map to your new proposed one.
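The rough measures above can be sketched as a back-of-envelope calculation. Note that the percentages and the per-session PGA figure are the illustrative starting points from the answer, not fixed rules - plug in values from your own existing databases:

```python
# Back-of-envelope memory estimate based on the rough measures above.
# cache_pct_* and pga_mb_per_session are illustrative defaults only.

def estimate_memory_gb(db_size_gb, sessions,
                       cache_pct_low=0.01, cache_pct_high=0.10,
                       pga_mb_per_session=2):
    """Return (low, high) estimates in GB: buffer cache plus total PGA."""
    pga_gb = sessions * pga_mb_per_session / 1024
    low = db_size_gb * cache_pct_low + pga_gb
    high = db_size_gb * cache_pct_high + pga_gb
    return low, high

# The scenario from the question: 500 GB database, 20 concurrent users.
low, high = estimate_memory_gb(db_size_gb=500, sessions=20)
print(f"Rough memory estimate: {low:.1f} GB to {high:.1f} GB")
```

The PGA term is tiny here (20 sessions at 2 MB each), so the estimate is dominated by the buffer cache range of roughly 5 to 50 GB.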

One more thing I'll add - if you have no real clues about the anticipated load, an obvious solution would be to lease some resources from a cloud provider who offers Oracle (because the Oracle database runs the same anywhere). Use that initially, just paying for the resources you use, and then if you need to, you have a perfect measure of what you'd need in house. You might even find you ultimately don't need to go in-house (for development).


Comments

Villy, September 17, 2018 - 7:19 am UTC

How can we look at CPU consumption and load from existing databases, for a specific time as well?
Connor McDonald
September 18, 2018 - 3:17 am UTC

Sources

- AWR reports (if you are using it)
- OS level information
- Statspack (if you are using it)
- day to day monitoring - pick a typical time and monitor v$sysstat et al

CPU consumption and load from existing databases

Rajeshwaran, Jeyabal, September 17, 2018 - 9:56 am UTC

Perhaps an AWR could help us.

CPU consumption is available under the "Time Model Statistics" section of the AWR report.

Load from existing databases is available under the "Load Profile" section of the AWR report.
Connor McDonald
September 18, 2018 - 3:16 am UTC

nice input

More to Explore

Performance

Get all the information about database performance in the Database Performance guide.