Limit to the number of instances on a RAC cluster


Question and Answer

Connor McDonald

Thanks for the question, Neil.

Asked: November 21, 2011 - 9:23 pm UTC

Last updated: September 21, 2017 - 2:53 am UTC

Version: 10.2.0.5

Viewed 10K+ times

You Asked

Hi Tom,

I have a two node RAC cluster for PROD and another two node RAC cluster for DEV / TEST. The PROD cluster has 7 databases (14 instances) across the 2 nodes.

However the DEV/TEST cluster has 34 databases (68 instances) across the two nodes.

I have no hard proof, but I just "feel" that this is too many databases for Grid Infrastructure to manage, and I think the problem may be the inter-process communication among so many instances.

Do you have any evidence or internal documentation that has any limitations or best practice etc for the number of databases / instances per node in a RAC cluster?

The databases are 10.2.0.5 and I am using Grid Infrastructure 11.2.0.1.

I would appreciate your thoughts on this setup and whether RAC was designed to be used in this way.

Many thanks

and Tom said...

There are two schools of thought on this one.


a) The best number of instances on a given host is one.

b) (a) is wrong - go with however many you want.

The software supports both A and B. I am more a fan of A than B - however, with a machine like an Exadata box, that has to be rethought. With Exadata you have a rather large box with lots of resources, and you might not have a database big enough to "fill it up". With Exadata, however, you have the ability to control 1) the memory allocated to each instance, 2) the CPU used by each instance, with instance caging, and 3) the IO used by each instance, with the IO Resource Manager. So you can pretty much segment the machine into smaller machines and effectively resource manage them.

In 10gR2 the only thing you truly have is 1) memory allocated to each instance.

In 11gR2 you'd have 1) and 2).

Only on Exadata do you get 3).
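
For (2), instance caging in 11gR2 comes down to two parameters. A minimal sketch, with illustrative values:

-- cap this instance at 4 CPUs (instance caging, 11gR2 and later)
alter system set cpu_count = 4 scope=both;
-- caging is only enforced while a resource manager plan is active
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope=both;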


One of the reasons you run RAC, I presume, is for high availability. With 7 databases, have you put everything in place to ensure that if one instance becomes anti-social, it will not put the other instances on that node at risk? That is, all it takes is for instance-A to get a runaway query (or two or three) and consume 100% of the CPU on that node, leading to a node eviction, which will kill the other six instances on that node.


You definitely need to be looking at resource management here. You can control the memory used by each instance (SGA - definitely, PGA - mostly). You need to control the amount of CPU used by each instance as well. In 11gR2 you have instance caging - a very easy way to do that. In 10g, you'll have to use the Resource Manager to limit the maximum number of active sessions by service (sketched after the next paragraph). By doing that, you'll ensure that a single instance cannot consume all of the CPU on that node.

However, you'll have to figure out a 'fair' way to do this. Say you have 16 cores on that node. If you allow each instance four active sessions, each instance will be able to consume one quarter of that machine's CPU at any time. If four or more of them become 100% busy, you'll have pretty much knocked that node out. If three of them become busy, you'll be OK.
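
For the 10g route, here is a sketch of a Resource Manager active session pool - all of the names (LIMIT_PLAN, APP1_GROUP, the APP1 service) and the limit of 4 are illustrative, not from the original question:

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'APP1_GROUP',
    comment        => 'sessions arriving via the APP1 service');
  dbms_resource_manager.create_plan(
    plan    => 'LIMIT_PLAN',
    comment => 'cap concurrently active sessions per group');
  dbms_resource_manager.create_plan_directive(
    plan                => 'LIMIT_PLAN',
    group_or_subplan    => 'APP1_GROUP',
    comment             => 'at most 4 active sessions, the rest queue',
    active_sess_pool_p1 => 4);
  dbms_resource_manager.create_plan_directive(
    plan             => 'LIMIT_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'mandatory catch-all directive');
  -- route sessions to the group based on the service they connect through
  dbms_resource_manager.set_consumer_group_mapping(
    attribute      => dbms_resource_manager.service_name,
    value          => 'APP1',
    consumer_group => 'APP1_GROUP');
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/

alter system set resource_manager_plan = 'LIMIT_PLAN';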



Comments

Instance Caging

Neil, November 22, 2011 - 8:38 pm UTC

Thanks Tom,

I had not come across Instance Caging before, but will read up on that with interest, plus it gives me extra ammunition for a case to upgrade from 10gR2 to 11gR2.

Do you know if Oracle supports this as a method for licensing, in the same way as hard partitioning such as LPARs?

Again, thanks for the response
Tom Kyte
November 23, 2011 - 8:16 am UTC

It does not affect the license, no.

You need to hard partition for that.


It (instance caging) controls the user sessions only (dedicated/shared servers). It does not affect lgwr, dbwr, smon, etc. So your instance can in fact use more than the prescribed number of CPUs - but it will keep the instance "under control".

In fact, even if you were running a single instance on a host, you might use instance caging to restrict the instance to say 80% of the cpu on that machine - leaving some aside for the clustering stuff and other OS processes - to avoid node eviction.
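
Concretely, on a 16-core node that might look like this (value illustrative):

-- leave roughly 20% of a 16-core node for Clusterware and the OS
alter system set cpu_count = 13 scope=both;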

Limiting Memory Usage

stephan, November 23, 2011 - 9:22 am UTC

Hi Tom,

<quote>
With Exadata however you have the ability to control 1) memory allocated to each instance

In 10gr2 the only thing you truly have is 1) memory allocated to each instance.
</quote>

Can you say more about this? I'm aware of SGA_MAX_SIZE to limit the SGA, and PGA_AGGREGATE_TARGET - but that does not set a hard cap on the PGA. Is there any way to truly cap the memory usage for an instance? Or are you always subject to whatever the users might do, and the amount of PGA that might be allocated to support that?

Thanks!
Tom Kyte
November 23, 2011 - 9:38 am UTC

You can 100% control the size of the SGA.

The PGA, however, you control only to a degree; there is no way to totally limit it. For example, if you allowed 1,000 users to connect simultaneously (a really bad idea regardless of machine size, by the way), we'd have to allocate 1,000 PGAs using dedicated server connections. If they all decided to open 5 queries that sorted, they'd add 5 work areas to that PGA allocation. If they then decided to run a bit of code:

for i in 1 .. 1000000 loop
   l_array(i) := rpad('*',32000,'*');
end loop;


that would get tacked on - and so on.
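
To make the contrast concrete, a sketch of the knobs involved (sizes purely illustrative):

-- the SGA has a hard ceiling; the PGA is only a target in 10g/11g
alter system set sga_max_size = 8g scope=spfile;        -- hard cap, needs a restart
alter system set sga_target = 8g scope=spfile;
alter system set pga_aggregate_target = 2g scope=both;  -- a target, not a hard cap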


If you switch to shared server, then you could limit the amount of memory being used since the UGA would be in the large pool of the SGA now. That would probably cause that loop to blow up in individual sessions.

You'd still have work areas in the shared servers to consider - but since you control the number of shared servers, you would in fact be able to control the PGA memory very tightly.



So, under normal circumstances (dedicated server) - no, you cannot totally control the PGA.

Under shared server - yes, you can - because the UGA (user global area, session memory) is moved into the SGA's large pool, and the PGAs for the shared servers tend to be rather fixed in size.
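
A sketch of the shared server parameters that make that possible (values illustrative):

alter system set dispatchers = '(protocol=tcp)(dispatchers=2)' scope=both;
alter system set shared_servers = 10 scope=both;       -- fixed pool of server processes
alter system set max_shared_servers = 20 scope=both;   -- hard upper bound on that pool
alter system set large_pool_size = 512m scope=both;    -- where the UGAs now live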

RAC node Support

Rahim Khan, September 20, 2017 - 1:47 pm UTC

Hi

I would like to know: what is the maximum number of nodes supported by Oracle RAC?
Connor McDonald
September 21, 2017 - 2:53 am UTC

Depends on the database version, but from the 12.2 docs:


"MAXINSTANCES Clause

Specify the maximum number of instances that can simultaneously have this database mounted and open. This value takes precedence over the value of initialization parameter INSTANCES. The minimum value is 1. The maximum value is 1055. The default depends on your operating system."
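
For illustration only - MAXINSTANCES is fixed when the controlfile is created, for example as part of CREATE DATABASE (the name and value here are made up):

-- controlfile limits such as MAXINSTANCES are set at creation time
create database proddb
   maxinstances 8;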


about instances in RAC

Rajeshwaran Jeyabal, September 21, 2017 - 1:03 pm UTC

Connor,

Just to add: this was from the OTNYathra 2017 event at Chennai ( http://otnyathra.in/chennai/ ).

It was "Sandesh Rao" (Senior director -RAC Developement) during the session 'Troubleshooting and Diagnosing Oracle Database 12.2 and Oracle RAC' at Room#1 he mentioned that the biggest implementation of RAC configuration for a banking customer has 48 nodes in it.