Question and Answer

Tom Kyte

Thanks for the question, Chidambaram.

Asked: September 01, 2009 - 10:30 am UTC

Last updated: September 01, 2009 - 12:12 pm UTC

Version: 10.2.0.3.0

Viewed 1000+ times

You Asked

Hi Tom,
We are planning to switch to new, faster disks because I/O is our bottleneck. As part of evaluation and testing, our architect suggested that we run the database on an iSCSI architecture.
Our present configuration:
1. The database runs on a server (SERVER A) with one 4-core CPU and 48 GB of RAM, and communicates directly with the disks through two disk controllers (wish I could draw a picture here). Asynchronous I/O. The SGA is 25 GB, and another development instance on the same box uses 12 GB.

Suggested configurations:

1. iSCSI architecture: The database will run on a server (SERVER B) with two dual-core CPUs and 32 GB of RAM, which will communicate with SERVER A, which in turn will communicate with the disks via the two disk controllers.


Pros: The architect suggests this will be faster. (I am inclined to agree, as the tests show it is; I believe that is because of all the buffered reads and writes from the huge 43 GB buffer.)

Cons: We are adding another point of failure by adding another physical cable.

2. Don't change the architecture at all. Run the database on SERVER B (its CPU is faster than SERVER A's) against the new disks, without adding the middle computer.


My questions are:

1.) What are your views on an iSCSI architecture? (I was not able to find your follow-ups on this subject in the other threads.)

2.) Are there any other advantages to this architecture, considering the risks involved?

3.) Considering the database is not in archivelog mode: what happens when there is a power failure and blocks that are supposed to be on physical disk are still sitting in the middle computer's buffer (could this happen with asynchronous I/O)? Will this behave like the normal architecture, or should we back the system with a battery, as we found suggested in other forums?

Thank you very much
Chidambaram

and Tom said...

... We are planning to switch to new, faster disks because I/O is our bottleneck ...

hopefully you exhausted other routes first, like tuning queries and the physical storage of data in the database (clusters, partitioning, IOTs, indexing, materialized views, etc.)


I'm not a hardware guy at all - I don't have a comment on your physical disk setup at this level of detail; your hardware vendor is best suited to counsel you there and should be able to tell you how to set up a disk system that is resilient to failure.


if you have a setup whereby the OS tells us "block written" and the block is not actually written and may never be written - you cannot use that for a database.

archivelog mode would not change that comment.
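
(Not part of Tom's answer: a minimal sketch in C of the durability contract being described. The file name and block size are made up for illustration; the point is that when the synchronous flush returns, the block must really be on stable storage, and any layer - OS cache, an iSCSI target's RAM, a controller cache without a battery - that acknowledges earlier breaks the guarantee the database depends on.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 8192                 /* illustrative 8 KB "database block" */

int main(void)
{
    char block[BLOCK_SIZE];
    memset(block, 'x', sizeof block);

    /* "datafile.dbf" is a made-up name for this sketch only */
    int fd = open("datafile.dbf", O_WRONLY | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    if (write(fd, block, sizeof block) != (ssize_t)sizeof block) {
        perror("write");                /* data may still only be in the OS page cache here */
        return 1;
    }

    /* The database's guarantee hinges on this call: once fdatasync()
     * returns 0, the block must survive a power failure.  A storage
     * stack that lies about this cannot be used for a database.      */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

    close(fd);
    return 0;
}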


Comments

Been there, done that

djb, September 01, 2009 - 3:31 pm UTC

Pay attention to what Tom says here: you'll get much more ROI by tuning/fixing your app. There is a fairly high likelihood that all your money and effort will buy you at best a 1% improvement in speed by changing your disk architecture.

Thank you

Chidambaram Velayudham, September 01, 2009 - 4:14 pm UTC

Hi Tom and djb
Thank you for taking the time to review. No disagreement on tuning applications and programs. We have done that to the best of our ability; a lot of programs we brought down to a third of the time they were taking. In fact, I have used all the features Tom is talking about except clusters, and will continue to do so. Otherwise I would be out of a job :-)

Thank you

A reader, September 01, 2009 - 5:37 pm UTC

...
1. The database runs on a server (SERVER A) with one 4-core CPU and 48 GB of RAM, and communicates directly with the disks through two disk controllers (wish I could draw a picture here). Asynchronous I/O. The SGA is 25 GB, and another development instance on the same box uses 12 GB.
...

It sounds like you have the production instance and the development instance on the same box --- bad idea.
If true, your test could be flawed, since you'd be comparing a 'production' server on iSCSI to a 'production + development' server on local drives.
Test a copy of production on the best hardware you have without iSCSI, and then rerun the same test on the same box with iSCSI; a rough check like the sketch below can also be run on both configurations.
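
(Not part of the original comment: a crude write-latency probe in C that could be compiled and run against the local disks and against the iSCSI-backed volume. The file name, block size, and iteration count are arbitrary, and it is no substitute for replaying your real workload or using a purpose-built I/O test tool.)

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK 8192                      /* arbitrary block size */
#define COUNT 1000                      /* arbitrary number of writes */

int main(void)
{
    char buf[BLOCK];
    memset(buf, 'x', sizeof buf);

    /* "probe.dat" is a made-up file name - create it on the volume under test */
    int fd = open("probe.dat", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < COUNT; i++) {
        /* force every block to stable storage so caches cannot hide the latency */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf || fdatasync(fd) != 0) {
            perror("write/fdatasync");
            return 1;
        }
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d synchronous %d-byte writes in %.3f s (%.3f ms per write)\n",
           COUNT, BLOCK, secs, secs * 1000.0 / COUNT);

    close(fd);
    return 0;
}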

I have had some bad experiences with iSCSI for non-database workloads. If you still decide to go that route, make sure your NICs can offload the iSCSI processing.

Also, try your database on 11g. (I don't have any real numbers, but...) My 11g 'upgrade-test-box' seems to outperform my 9i box on identical hardware.

If you are in NOARCHIVELOG mode, then why are you worried about physical points of failure when you have a major "logical" one?

I/O performance

LDS, September 03, 2009 - 10:21 am UTC

iSCSI is not faster than local disks - unless the local disks are really slow - and it adds protocol overhead and network transmission overhead. Its performance depends on the quality of the iSCSI server and the drivers used.
There are NICs with an onboard iSCSI accelerator, but you always have Ethernet adapters, and you should have at least two dedicated ones for fault tolerance. Switches and NICs must be properly configured.
IMHO iSCSI is a decent alternative to more expensive FibreChannel connections, but with current Ethernet networks it's not really my choice when performance is vital.
The performance gain you saw is probably due to the huge cache and to having CPUs dedicated to I/O - the disk controller can also play a role, depending on whether it can handle I/O by itself or needs CPU cycles to be driven by software. Fast disks need a fast controller.
As Tom usually says, having two instances on the same production machine is bad - if you have two servers, you should get better performance (and security) by putting the production instance on one server and the development instance on the other.
"Async I/O" does not mean "cached I/O". With asynchronous I/O, a process/thread can issue an I/O request and then do something else until the OS notifies it that the I/O has "completed", instead of just waiting and doing nothing. But "completed" does not mean the data is on the physical disks - it could be in the OS cache, the controller cache, etc. (the small sketch after this comment illustrates the point).
Running a DB in NOARCHIVELOG mode without a UPS and the like is really asking for trouble :)
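
(Not part of the comment above: a minimal sketch in C of that asynchronous I/O point, using POSIX AIO. The file name is made up, a real program would use aio_suspend() instead of busy-waiting, and on older systems the program needs to be linked with -lrt. The request "completes" from the OS's point of view, yet the data is not guaranteed to be on the platters until it is explicitly flushed.)

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char block[8192];
    memset(block, 'x', sizeof block);

    /* "aio_demo.dat" is a made-up file name for this sketch */
    int fd = open("aio_demo.dat", O_WRONLY | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }

    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf    = block;
    cb.aio_nbytes = sizeof block;
    cb.aio_offset = 0;

    /* submit the write and go do something else */
    if (aio_write(&cb) != 0) { perror("aio_write"); return 1; }

    /* ... useful work could happen here instead of just waiting ... */

    while (aio_error(&cb) == EINPROGRESS)
        ;                               /* real code would use aio_suspend(), not busy-wait */

    if (aio_return(&cb) != (ssize_t)sizeof block) { perror("aio_return"); return 1; }

    /* "completed" is not "durable": the data may still sit in the OS or
     * controller cache, so force it to stable storage explicitly.        */
    if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

    close(fd);
    return 0;
}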
