all in one database
A reader, June 06, 2003 - 7:48 am UTC
On our production system, we have a staging database, a warehouse, and an oltp database. We may also be adding a data mart. Would you really roll all of those databases into one instance?
June 06, 2003 - 8:05 am UTC
I would most certainly not have them on the same machine if they were to be separate instances.
If they were to be on the same machine, absolutely they would be all in one.
Unfortunately there is only one machine
A reader, June 09, 2003 - 7:46 am UTC
Since there is only one production machine, all instances at our shop must reside on that one machine. We have a mix of oltp, staging, and dw instances. Due to the differences in init.ora parameters, how could we justify combining them all?
June 09, 2003 - 8:51 am UTC
because performance will improve?
because it will be easier to manage?
because it is the only right thing?
How do you account for the differences in init.ora parameters?
A reader, June 09, 2003 - 8:55 am UTC
June 09, 2003 - 9:00 am UTC
give me a "for example".....
if it is like "sort_area_size", you should use workarea_size_policy = auto in 9i -- especially in a mixed workload.
if it is like "buffer cache sizes", well -- you don't need to worry about that, you just gave yourself a memory upgrade by shutting down the redundant instances.
So, give me a concrete example and we'll see what we can do....
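As a sketch of the workarea advice above, the switch from manual sort areas to automatic PGA memory management might look like this (the target size is an illustrative assumption, not a recommendation):

```sql
-- Hypothetical example: enable automatic PGA memory management
-- (9i and later) instead of per-session sort_area_size settings.
-- The 1G target is made up for illustration.
alter system set workarea_size_policy = auto scope=both;
alter system set pga_aggregate_target = 1G scope=both;

-- Individual sessions no longer need their own sort_area_size;
-- the instance divides pga_aggregate_target across the workload,
-- which is exactly what you want in a mixed OLTP/DW instance.
```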
Version Restrictions
Tim Hall, June 09, 2003 - 9:18 am UTC
If you want to run portal then you need to have infrastructure. Infrastructure currently runs against 9.0.X (not sure about the exact subversion). If you want your client database to run at 9.2.X then you need to have multiple installations due to the restriction on portal. I believe the 9.0.4 release of 9iAS will use 9.2.X, which will solve this issue.
Easy solution is to buy more machines :)
June 09, 2003 - 10:12 am UTC
somewhere else on this site i have a discussion on this type of topic.
There are two types of databases:
o those we build our own apps in ("our" being the universal "us" -- you and me, not "oracle")
o those that come with packaged applications.
For those that we build our own applications in (like asktom for example, an application built internally, custom development) -- I like to keep it down to as few instances as humanly possible. Like one. In this single database that houses asktom, we also house promedmail.org, the oai database and a couple of other applications. When we upgrade -- it is one for all, all for one (we all go or none of us do)
For packaged applications -- and I would argue the infrastructure database is part of the portal install, much like there is an oracle database behind HR or CRM or ERP -- you will in general run an instance in support of that application. That would hold true for SAP, Siebel, whatever as well -- the database is sort of a black box that goes hand in hand with those application infrastructures.
Parameters
A reader, June 09, 2003 - 9:29 am UTC
db_block_size
db_file_multiblock_read_count
optimizer_index_caching
optimizer_index_cost_adj
optimizer_percent_parallel
June 09, 2003 - 10:15 am UTC
you do have multi-block size support if you feel that is relevant, however I think you would find 8k to be an excellent all around block size.
db_file_multiblock_read_count is settable at the session level. HOWEVER, since your OLTP won't be full scanning and "that which is good for peter is good for paul", one size really does fit all there.
the last one is obsolete.
the other two are settable at the session level. Also, in 9iR2, using SYSTEM STATISTICS may more or less remove the need for them altogether.
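As a sketch of the session-level approach above, the same instance can serve both workloads by adjusting these parameters per session (the values below are assumptions for illustration, not tuning advice):

```sql
-- Illustrative only: a DW/reporting session favoring full scans
-- might set full-scan-friendly values for itself...
alter session set db_file_multiblock_read_count = 128;
alter session set optimizer_index_cost_adj = 100;
alter session set optimizer_index_caching = 0;

-- ...while an OLTP session could inherit the instance defaults, or
-- set index-friendly values (again, the numbers are made up):
alter session set optimizer_index_cost_adj = 25;
alter session set optimizer_index_caching = 90;
```

These could be set in a logon trigger keyed off the application or username, so neither workload needs its own instance just for optimizer settings.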
Thanks!
A reader, June 09, 2003 - 10:16 am UTC
So, should we gather system stats on a regular basis, as is done for other schemas?
Kamal Kishore, June 09, 2003 - 10:25 am UTC
Should we call DBMS_STATS.gather_system_stats on a regular basis?
Thanks,
June 09, 2003 - 10:44 am UTC
you should gather system stats during your "typical" processing times.
You could consider switching stats in and out during the day (you probably have different load characteristics at night than during the day).
You should regather from time to time -- as your performance characteristics will change over time as well.
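A sketch of the gather-and-swap approach described above, using DBMS_STATS (the stats table name and the DAYTIME/NIGHTTIME labels are made-up names for illustration):

```sql
-- Gather system stats over a representative window
-- (interval is in minutes):
begin
   dbms_stats.gather_system_stats( gathering_mode => 'INTERVAL',
                                   interval       => 60 );
end;
/

-- Save the current system stats under a label for later reuse:
begin
   dbms_stats.create_stat_table( user, 'MY_STATS' );
   dbms_stats.export_system_stats( stattab => 'MY_STATS',
                                   statid  => 'DAYTIME' );
end;
/

-- When the night batch window opens, swap in a set gathered
-- during a previous night run:
begin
   dbms_stats.import_system_stats( stattab => 'MY_STATS',
                                   statid  => 'NIGHTTIME' );
end;
/
```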
What about on client side
Bryan Wells, May 08, 2007 - 8:07 pm UTC
Tom -
Love your site. We are talking multiple oracle homes on the server side, but what about the client side? I work in a small shop and we are migrating up from 9203 to 10g. To continue meeting customer requirements we will be simultaneously testing in the existing environment, but want to use the 10g client to test against a 10g database before rolling live. Any thoughts or direction are greatly appreciated!
Thanks
Bryan
May 11, 2007 - 9:21 am UTC
vmware :)
you can have multiple client installs, but it gets hard. If you start using vmware, you have the best client testing environment ever.
single vs multiple oracle homes 11gR2
Connell, February 05, 2010 - 5:05 pm UTC
Hi Tom,
Similarly to Bob, I have the opportunity to migrate a legacy application from its existing desupported 9.2.XX backend to a shiny new SUN server and also upgrade to 11gR2 at the same time.
I need to migrate the databases for 10 countries (each country's DB currently sits on its own old desupported windows hardware - one server per country)
Options under consideration are as follows:
1) Use a single global zone, install one 11gR2 Oracle Home, and run all 10 instances from there.
2) Use a single global zone, install 10 Oracle homes (all 11gR2) and run each instance in its own home, with its own listener, or
3) Create 10 local zones (effectively 10 VMs), install an 11g home on each and run each database in its own zone with its own Oracle Home.
I'm kinda leaning towards option 1) - a colleague of mine prefers option 3).
Can you think of any compelling arguments for one over the other - or why any of these is a really bad idea?
Thanks,
Connell.
February 08, 2010 - 9:35 pm UTC
one instance per host.
No more, no less.
If you do a ps -aef | grep pmon, you should see ONE pmon - on a production box.
That is the only way you can limit the memory used by an instance, throttle resources like IO and ensure cpu is used as you deem it should be (there is instance caging in 11gr2 for cpu, but that only affects the user processes, not the backgrounds, so it doesn't have the same limiting factors as a VM would)
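A rough sketch of the 11gR2 instance caging mentioned above: cap the CPU visible to the instance and enable a resource manager plan (the values and plan name are illustrative; as noted, this throttles foreground processes only, not the backgrounds):

```sql
-- Illustrative instance caging setup (11gR2). Caging takes effect
-- only when a resource manager plan is active.
alter system set cpu_count = 4 scope=both;
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope=both;
```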
one pmon only...
Paul, February 15, 2010 - 2:34 am UTC
Dear Tom,
You said "If you do a ps -aef | grep pmon, you should see ONE pmon - on a production box. "
Could one consider running a 'real' PMON and a physical standby pmon on one box the "least of evils"? (As in two machines, each with a primary DB and the other one's standby)
Knowing the standby will be running with a minimal resource footprint. (Once the standby needs to be activated, there will be other issues than 'the db running optimal')
Paul
February 16, 2010 - 4:58 pm UTC
I'd still rather use virtualization - since the standby could be active standby nowadays, it could be open read only regardless of version, it does do work, it is something else running on that host, it might be used during a failover.
Your choice in the end, but I'd still probably want it separate.
Mohamed amin, February 22, 2010 - 6:50 am UTC
Hi Connell,
What did you do? I face a problem like yours. Did you go with the first choice?
Please advise.
Mohamed Amin
March 01, 2010 - 6:51 am UTC
@Mohamed
What does it matter what Connell did - there are implementations out there that have done it every single way. That will never change the fact that the correct number of instances on a host is.... will be.... has been......
One.
DW and OLTP
A reader, March 24, 2010 - 9:58 pm UTC
Wouldn't we suffer performance issues combining OLTP and DW in a single instance using a single library cache? Typically DW queries aren't shared, compared to OLTP cursors using bind variables.
Can we isolate the OLTP library cache from the DW library cache?
March 26, 2010 - 11:24 am UTC
but DW queries take many seconds/minutes/whatever to execute - so - do the math.
With the DW queries, will you be attempting to hard parse hundreds per second? (doubt it). Tens per second? (probably not, unless you had thousands of DW users). Even 1 per second might be high.
So, what would the numbers be - it takes a single sql statement, executed by one application, many times per second to kill an instance. I don't see a true DW type of situation being capable of doing that.
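To do that math on your own system, one way is to sample the parse counters (a sketch; sample twice over a known interval and difference the values to get a per-second rate):

```sql
-- Snapshot the instance-wide parse counters; take this twice,
-- a known number of seconds apart, and divide the deltas by the
-- elapsed time to estimate hard parses per second.
select name, value
  from v$sysstat
 where name in ('parse count (hard)', 'parse count (total)');
```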
Multiple ORACLE_HOME with same version on same server
Pintu, August 18, 2011 - 1:01 am UTC
Hi,
For our testing server, we need three databases: OLTP, W/H and HISTORY. We want to keep them all on the same Oracle version. But the problem is some CPU patches need to be applied to specific databases only. So is it possible to have multiple ORACLE_HOMEs of the same oracle version (10.2.0.4) on the same server?
Thanks,
August 23, 2011 - 2:21 am UTC
they are NOT at the same version then - the CPUs would change the versions, in fact.
If you patch a binary, you've patched the binary. If you want three different patch levels (which you really don't in my opinion), you'd need three installs.
Different backup/recovery requirements?
David, August 30, 2011 - 3:03 am UTC
I have always agreed with the one host - one instance approach, but I now need to store a current active schema and a dozen large historical archived schemas. The backup and recovery requirements are very different - an old backup of the historic schemas is fine, but I need to rapidly recover the current schema to current point in time, and Dataguard will be run, but it's only really required for the current schema.
I have never tried backing up different tablespaces rather than the whole DB. How much complexity does it add to recovery? Is using a separate instance a reasonable alternative approach? We have a two node RAC. Does that make having two instances more difficult? I have plenty of memory, so the extra memory consumption is not an issue.
August 31, 2011 - 8:21 am UTC
if you wanted to use dataguard with physical standby - you'll be looking at more than one instance (you'd have to use logical standby to do just a slice of a database - more complex, some restrictions).
If you are using RAC, running more than one instance per host reduces your availability prospects. If one of the instances becomes "anti-social", then it would/could cause that node to be evicted from the cluster - including the other "well behaved" instance. Instance caging in 11g with the resource manager can help offset that (but doesn't do much regarding physical IOs - one instance can still slam the IO pretty hard if they wanted to).
Restoring a tablespace to a point in time is not that hard - the process (what happens under the covers) is that you set up an auxiliary instance elsewhere, you restore system/undo and the tablespace in question - point in time recover it and then transport it over to the 'real' database.
http://www.oracle.com/pls/db112/search?remark=quick_search&word=tspitr
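As a hedged sketch of the process just described, RMAN can automate the auxiliary-instance dance for tablespace point-in-time recovery (the tablespace name, timestamp and destination path below are made up for illustration):

```sql
-- Run from the RMAN prompt, connected to the target and catalog.
-- RMAN creates the auxiliary instance, restores SYSTEM/UNDO plus the
-- tablespace, recovers to the point in time, and plugs it back in.
recover tablespace hist_data
  until time "to_date('2011-08-30 09:00','yyyy-mm-dd hh24:mi')"
  auxiliary destination '/u01/aux';
```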