Been there, done that
djb, September 01, 2009 - 3:31 pm UTC
Pay attention to what Tom says here: You'll get much more ROI by tuning/fixing your app. You have a fairly high likelihood that all your money and efforts will give you at best a 1% improvement in speed by changing your disk architecture.
Thank you
Chidambaram Velayudham, September 01, 2009 - 4:14 pm UTC
Hi Tom and djb
Thank you for taking the time to review. No disagreements on tuning applications and programs. We have done that to the best of our ability; a lot of programs we brought down to a third of the time they were consuming. In fact, I have used all the features Tom is talking about except clusters, and will continue to do so. Otherwise I would be out of a job :-)
Thank you
A reader, September 01, 2009 - 5:37 pm UTC
...
1. Database is running on a server (SERVER A) with 1 CPU (4 cores) and 48GB of RAM, and communicates directly with the disks through two disk controllers (wish I could draw a picture here). Asynchronous I/O. The SGA size is 25 GB, and another development instance is using 12 GB.
...
it sounds like you have production instance and development instance on the same box. --- bad idea
If true, your test could be flawed since you'd be comparing a 'production' server on iSCSI to a 'production + development' server on local drives.
test a copy of production on the best hardware you have without iSCSI, and then rerun the same test on the same box with iSCSI.
I have had some bad experience with iSCSI for non-DB stuff. If you still decide to go that route, make sure your NICs can be used to offload the iSCSI requests.
Also, try your database on 11g. (I don't have any real numbers, but...) My 11g 'upgrade-test-box' seems to outperform my 9i on identical hardware.
if you are running in NOARCHIVELOG mode, then why are you worried about physical points of failure when you have a major "logical" one?
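For anyone following along, checking and changing the logging mode is quick. A sketch, assuming a standard Oracle instance and SYSDBA access in SQL*Plus (take a backup and verify your log_archive_dest settings before switching):

```sql
-- Check the current logging mode
SELECT log_mode FROM v$database;

-- Switching to ARCHIVELOG mode requires a clean shutdown and a mounted
-- (not open) database -- sketch only, back up the database first
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```

Once in ARCHIVELOG mode, make sure something is actually backing up the archived logs, or the archive destination will eventually fill and hang the instance.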
I/O performance
LDS, September 03, 2009 - 10:21 am UTC
iSCSI is not faster than local disks - unless the local disks are really slow - and it adds protocol and network transmission overhead. Its performance depends on the quality of the iSCSI server and the drivers used.
There are NICs with onboard iSCSI accelerators, but you always need Ethernet adapters, and you should have at least two dedicated ones for fault tolerance. Switches and NICs must be properly configured.
IMHO iSCSI is a decent alternative to more expensive FibreChannel connections, but with current Ethernet networks it's not really my choice when performance is vital.
The performance gain you saw probably comes from the huge cache and from having CPUs dedicated to I/O - the disk controller can also play a role, depending on whether it can handle I/O by itself or needs CPU cycles to be driven by software. Fast disks need a fast controller.
As Tom usually says, having two instances on the same production machine is bad - if you have two servers, you should get better performance (and security) by putting the production instance on one server and the development one on the other.
"Asynch I/O" does not mean "cached I/O". With asynchronous I/O, a process/thread can ask for an I/O operation and then do something else until the OS notifies it that the I/O has "completed", instead of just waiting and doing nothing. But "completed" does not mean the data has been written to the physical disks - it could still be in the OS cache, the controller cache, etc.
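As a side note, whether Oracle actually issues asynchronous (and/or direct) I/O against filesystem datafiles is governed by the FILESYSTEMIO_OPTIONS initialization parameter. A sketch of how to inspect and change it, assuming SYSDBA access:

```sql
-- Show how Oracle performs filesystem I/O:
-- NONE, ASYNCH, DIRECTIO, or SETALL
SELECT name, value
  FROM v$parameter
 WHERE name = 'filesystemio_options';

-- SETALL enables both asynchronous and direct I/O (bypassing the OS cache).
-- It is a static parameter, so it only takes effect after a restart.
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE = SPFILE;
```

Direct I/O removes the OS-cache layer mentioned above, which narrows the gap between "completed" and "on disk" - the redo log sync on commit is still what actually guarantees durability.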
Running a DB in NOARCHIVELOG mode without a UPS and the like is really asking for trouble :)