What is the relationship of CPU and memory to database performance?


Question and Answer

Connor McDonald

Thanks for the question, Jason.

Asked: June 07, 2018 - 3:29 am UTC

Last updated: June 22, 2018 - 2:27 am UTC

Version: Oracle Database 12c

Viewed 1000+ times

You Asked

Hi Tom,

Frequently, when planning a new server setup for databases, I get asked: how many CPU cores should I get? How much memory should I get?
Normally I answer: just get the most cores and memory your budget allows. But I'd like to know exactly how this hardware impacts database performance. Does having more CPU cores mean better performance for a database that runs a lot of parallelism? Does bigger memory mean a larger SGA, so more work can be done in memory?
I know that to judge a database's performance we also need to look at the underlying storage while the database is doing reads and writes, but let's say I don't have any problems with storage and network connections. I just want to understand how CPU cores and memory impact a database's performance.

and Connor said...

There are 2 simple things I like to keep in mind when it comes to resources and performance

1) It only makes sense to tune a bottleneck

If 95% of your response time is disk I/O, then all the CPU in the world isn't going to help (much). In that case, you'd be much better off throwing resources at the disk infrastructure. That *might* mean more memory so you access the disk less often, or it might mean better disk. But I'm using disk as an *example* here - the point is: tackle the bottleneck. If your CPU is maxed out, then more memory won't help (it would probably hinder).
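
As a rough illustration (just a sketch, not something you have to run), you can see where a database is spending its time by comparing DB CPU against the non-idle wait classes - if one wait class dominates, that's the bottleneck worth attacking:

-- Time on CPU vs each non-idle wait class since instance startup.
-- v$sys_time_model reports microseconds; v$system_wait_class reports centiseconds.
select 'CPU' as category, round(value / 1e6) as seconds
from   v$sys_time_model
where  stat_name = 'DB CPU'
union all
select wait_class, round(time_waited / 100)
from   v$system_wait_class
where  wait_class <> 'Idle'
order  by 2 desc;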

2) Law of diminishing returns

(Notwithstanding point #1) Doubling your CPU might get you twice the performance. But doubling it again is unlikely to get you 4 times the original performance. So *over*-spending on resources is generally not a smart option, in particular with CPU, because every core = license dollars, and one of an administrator's jobs is to get the best "bang for the buck" - so why have 32 cores if 16 will do the job? (Conversely, of course, don't get 8 cores if you need 16 to meet your business requirements.) But I've lost track of the number of places I've worked where a monthly report comes out saying "Look how awesome this is - we're averaging 30% CPU utilization". That's not awesome... that's wasting money.
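
One way to picture those diminishing returns (an illustration using Amdahl's law, nothing Oracle-specific): if only a fraction p of the workload can actually use extra cores, the speedup on n cores is roughly

S(n) = 1 / ((1 - p) + p/n)

With p = 0.9, that gives S(8) of about 4.7, S(16) of about 6.4, S(32) of about 7.8 - each doubling of cores buys less than the one before.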

So in terms of CPU, cores, disk, memory etc... I can only say this - understand your target, and buy/optimize resources accordingly. If your target is totally unknown (as is sometimes the case nowadays), then that is where cloud solutions come in - where you have elasticity on the resources you need/consume.



Comments

Jason Oon, June 20, 2018 - 9:19 am UTC

Thanks for the reply, Tom.

I did some random checking on one of my production databases and pulled AWR reports for one whole day and also for a one-hour window. After some checking and calculation based on the IDLE_TIME & BUSY_TIME values, I found that the average CPU utilization is only about 12%.

The production DB is running on a 24-core CPU machine - does that mean the DB is actually oversized?
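
Roughly, that calculation from the AWR data looks like this (just a sketch against DBA_HIST_OSSTAT, with :start_snap and :end_snap as placeholder snapshot IDs and a single instance assumed; BUSY_TIME and IDLE_TIME are cumulative centisecond counters, so the two end points are compared):

-- Average host CPU utilisation between two AWR snapshots.
select round(100 * (b2.value - b1.value) /
       nullif((b2.value - b1.value) + (i2.value - i1.value), 0), 1) as avg_cpu_util_pct
from   dba_hist_osstat b1, dba_hist_osstat b2,
       dba_hist_osstat i1, dba_hist_osstat i2
where  b1.stat_name = 'BUSY_TIME' and b1.snap_id = :start_snap
and    b2.stat_name = 'BUSY_TIME' and b2.snap_id = :end_snap
and    i1.stat_name = 'IDLE_TIME' and i1.snap_id = :start_snap
and    i2.stat_name = 'IDLE_TIME' and i2.snap_id = :end_snap;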
Connor McDonald
June 22, 2018 - 2:27 am UTC

Averages tend to smooth out peaks, so you might need to monitor CPU at the OS level at a more granular interval (eg vmstat every 30 seconds or similar).

But if it's a legitimate 12% *and* you are monitoring at the right time (ie, at month end you might be busier, etc), then it's possible you have more headroom than you really require.
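
Or, staying inside AWR, something like this per-snapshot breakdown (again just a sketch - it takes deltas of the cumulative BUSY_TIME/IDLE_TIME counters in DBA_HIST_OSSTAT between consecutive snapshots, so the peaks aren't hidden by a whole-day average; single instance assumed, instance restarts ignored):

-- CPU utilisation per AWR snapshot rather than one number for the whole day.
select snap_id,
       round(100 * busy / nullif(busy + idle, 0), 1) as cpu_util_pct
from (
  select snap_id, stat_name,
         value - lag(value) over (partition by stat_name order by snap_id) as delta
  from   dba_hist_osstat
  where  stat_name in ('BUSY_TIME', 'IDLE_TIME')
)
pivot (max(delta) for stat_name in ('BUSY_TIME' as busy, 'IDLE_TIME' as idle))
order by snap_id;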
