Question and Answer

Connor McDonald

Thanks for the question.

Asked: December 20, 2007 - 5:48 pm UTC

Last updated: January 23, 2019 - 3:44 am UTC

Version: 9.2

Viewed 10K+ times

You Asked

I recently tested an Oracle database on a Solid state disk.

Here is the link http://www.superssd.com/products/ramsan-400/

The performance is really great if you compare it with regular disks.

I couldn't find many documents or threads with more information about these disks. I am mainly concerned about their stability, as my data is not read-only and I can't afford to lose even a single transaction. These disks are very pricey, too.

Do you have any experience with these systems? How reliable are they for holding the data files?


and Tom said...

... The performance is really great if you compare it with regular disks. ...

and the fastest way to do something is....

not to do it.

Your mileage will vary IMMENSELY with this stuff.

reduce the time it takes to do physical IO and you might well DECREASE the overall response time to your end users:

http://jonathanlewis.wordpress.com/2007/12/18/analysing-statspack-9/

Or, it might not have any noticeable effect:
http://kevinclosson.wordpress.com/2007/07/21/manly-men-only-use-solid-state-disk-for-redo-logging-lgwr-io-is-simple-but-not-lgwr-processing/

there be no silver bullets - only one of three things will happen if you sink a ton of money into solid state disk:

a) things might stay the same (whoops, bummer)
b) things might get much worse (ouch, that would hurt)
c) things might go somewhat faster (maybe that is OK, but if you spend 100x for the disk, would you not like a 100x return - doubtful you would get that)


solid state disks are reliable - I'm not worried about losing data so much.

I'd be more worried about making a huge investment in something that won't fix the problem (I suppose you must have one or we wouldn't be talking?) - or might even make it worse.

Rating

  (31 ratings)


Comments

Alberto Dell'Era, December 21, 2007 - 4:18 pm UTC

I've always wondered why a dedicated disk (or disk array) with a good battery-backed write-back cache shouldn't provide the same (or nearly the same) performance as a Solid State Disk.

Maybe a naive question; maybe I haven't found yet a good (independent) comparative benchmark between the two alternatives.
Tom Kyte
December 21, 2007 - 10:22 pm UTC

reads....

they are all about reads...

Oops ...

Alberto Dell'Era, December 22, 2007 - 6:33 am UTC

... sorry, I forgot four words at the end (I was thinking about the LGWR case covered by Kevin's paper) :

"I've always wondered why a dedicated disk (or disk array) with a good battery-backed write-back cache shouldn't provide the same (or nearly the same) performance as a Solid State Disk *for online redolog writes* ".

I might add - reads for archiving the log files happen in the background, and are sequential, hence they can benefit from the usual read-ahead optimizations.
Tom Kyte
December 22, 2007 - 12:11 pm UTC

in cases of large bursts of redo - you could overwhelm a cache - but in general I would tend to agree with you.

I would like solid state disk for my laptop though - that would rock :) lower power consumption (longer battery) and possibly faster windows

Steve, December 24, 2007 - 5:09 am UTC

Windows?? You use windows?? Shame on you!

Merry Xmas!
Tom Kyte
December 24, 2007 - 9:19 am UTC

as a desktop OS :) yes.

so I can run a linux virtual machine and drive powerpoint at the same time.

Solid State Laptop

Greg, December 24, 2007 - 11:02 am UTC

Tom said: I would like solid state disk for my laptop though - that would rock :) lower power consumption (longer battery) and possibly faster windows

Well, there's always: http://laptop.org/en/laptop/hardware/highlights.shtml

SSD vs. Disk performance.

Russ, December 25, 2007 - 6:22 pm UTC

I've always wondered why a dedicated disk (or disk array) with a good battery-backed write-back
cache shouldn't provide the same (or nearly the same) performance as a Solid State Disk.

---

No. The physical disk array will ultimately need to de-stage the cache to the disks. In a write intensive environment, eventually the cache will fill. When this happens, it will operate only as fast as the underlying disks do. If you average fewer write operations per time period than the disks can handle (about 200/s per disk spindle), then the cache does help.
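A back-of-the-envelope sketch of that de-staging arithmetic (all figures illustrative, including the 200 IOPS/spindle and 8 KB write size):

```python
def cache_fill_seconds(cache_gb, incoming_mb_s, spindles,
                       iops_per_spindle=200, io_kb=8):
    """Seconds until a write-back cache fills under a sustained burst.

    Assumes each spindle de-stages ~iops_per_spindle small writes/sec
    of io_kb each -- illustrative figures, not vendor data.
    """
    destage_mb_s = spindles * iops_per_spindle * io_kb / 1024.0
    net_mb_s = incoming_mb_s - destage_mb_s
    if net_mb_s <= 0:
        return float("inf")  # the disks keep up; the cache never fills
    return cache_gb * 1024.0 / net_mb_s

# 16 GB cache, 100 MB/s of incoming writes, 16 spindles:
# de-stage rate = 16 * 200 * 8 / 1024 = 25 MB/s, so the cache fills in
# 16 * 1024 / 75 ~ 218 s -- after that, writes run at raw disk speed.
```

Once the cache is full, the array degrades to spindle speed - which is exactly the write-intensive scenario where the comparison with SSD starts to matter.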

Reads depend on the disk, since they probably won't be in the cache. Even if they are, the data still has to cross the IO bus (FC/SCSI), the HBA, and the kernel before reaching the database - overhead an FC-attached SSD pays as well, which is another strike against SSD storage.

Sequential reads will do better on modern disks than SSDs. A modern FC 15k disk will read at about 100 MB/s, over an 8 Gbit bus (usually many disks per bus). The best SSD controllers can only do about 15 MB/s. Sure, you can get more and stripe them, but you can do the same thing with disks.

The best case for SSD is a highly random, small IO workload. It would be faster using an interface other than FC/SCSI. For a well tuned, sequential workload, the disks will be faster, while the SSD may be slower and cost 50-100 times more.

SSDs have limited write cycles (on the order of 10,000 to 100,000 writes per block), so the controllers must re-balance IO constantly. They eventually will fail - under heavy load, about as often as physical disks do, if not sooner. Of course, the replacement cost is much higher.

SSDs are becoming available for laptops now. They can use less power, but are smaller. They do not make things 100x faster. Reading a 2 GB hibernation file is a 2 GB sequential IO. Even the super-slow 4500 RPM disks in laptops do this in about 100 seconds; the SSDs take about 80 seconds (faster, but not by much, and not really worth the price). They do much better on random IO. Then again, laptops don't do a ton of that, and they have the absolute slowest disks made anyway.

I have seen many environments that would run SLOWER with SSD disks, and some that would be faster. As these devices become faster, they may become more appropriate in more environments. (I boot Linux from a USB SSD on my laptop, something I wouldn't have done 2 years ago.)

Kevin Closson, February 13, 2008 - 8:51 pm UTC

Folks, let's not confuse NAND Flash SSD with DRAM SSD. The former is currently available from folks like Samsung in SFF and LFF drives; the latter is extremely expensive enterprise-grade stuff from folks like Texas Memory Systems. If I'm not mistaken, Texas Memory Systems has a new hybrid NAND/DRAM SSD which replaces the round spinning things backing the DRAM with NAND Flash, making it entirely solid state.

There is a revolution ahead ...

Michael, July 14, 2008 - 4:15 am UTC

Unbelievable, but reality ...

IODrive:

700 MB/s read, 600 MB/s write, 100,000 IOPS

30 $/GB

http://fusionio.com/products.aspx

In the not too distant future traditional, mechanical disk drives will only be found in the museums, next to a steam engine and a hand axe.

IO memory

BitBucket, August 08, 2008 - 2:12 pm UTC

HP, SUN and others are moving ahead with SSD's in their servers. HP is said to be going with the Fusion IOdrive in their blade server. This shows that even as pricey as SSDs are, they have reached a point where total operating costs and performance are in their favor.

I've been looking at options for a few days. The notebook SSDs are on the small end for price, performance and capacity.
The DRAM equipment is super fast, super expensive. The IOdrive hits a great mid point of price/performance/capacity and brings back the simplicity of direct attached storage ( PCIe ).

Concerns about reliability are a dead issue. The drives should last under constant writes for several (20+) years.


The end of tuning consultants !!

A reader, March 20, 2009 - 3:34 pm UTC

Hi,

I've heard that the price of SSDs will decrease amazingly next year, and some DB experts think that by then we will no longer need tuning experts!!!

What's happening?
Sofiane.

Tom Kyte
March 24, 2009 - 10:43 am UTC

laughing out loud.

Sure, it'll make IO go faster, but so what?

Did you know that if you disable logging (just have oracle do everything BUT write the redo), the approximate speedup is near......

zero.

most of our time is spent doing other things. IO can be a contributor, but it is just that.

And SSD does NOT make you "infinitely fast" as far as IO goes. It just makes you faster. It might turn a one minute wait into a 45 second wait, but it will *not* make the wait go away.
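This is Amdahl's law in disguise: faster IO helps only in proportion to the share of response time that IO actually accounts for. A minimal sketch (the figures are illustrative):

```python
def sped_up_total(total_s, io_fraction, io_speedup):
    """Amdahl-style estimate: only the IO portion of the response
    time gets faster; everything else is unchanged."""
    io = total_s * io_fraction
    rest = total_s - io
    return rest + io / io_speedup

# If 25% of a 60 s response is physical IO and SSD makes IO 10x faster:
# 45 + 15/10 = 46.5 s -- nowhere near the 6 s a "10x faster" pitch implies.
```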

SSD for redo logs

Alwin, October 10, 2011 - 7:49 am UTC

Hi Tom,

We have done our best to reduce activity on our redologs, but they are still a bottleneck and we still have many "log write" wait events. Datafile write is not a real issue as most of the blocks that get changed are already in the cache and get updated several times before writing.

Until now we didn't implement the recommendation to have a dedicated disk for the redo logs, and we want to do so using SSD. The advantage is that we need only 2 such disks (for 2 redo groups) instead of, ideally, 1 separate disk for each redo member.

What's your opinion with such a configuration ?

Regards
-- Alwin

Tom Kyte
October 10, 2011 - 10:54 am UTC

unfortunately SSD doesn't really write much faster - if any faster at all - than spinning magnetic disks - especially for the type of writes we do to redo (sequential writes - magnetic spinning disks are great at that).

SSD might actually write slower than magnetic disk as they have to take great care to not write and rewrite the same areas all of the time. The first time you write to an SSD - fast. Over time however - they have to re-initialize areas, make sure they are not using the same areas over and over.

so, I would suggest a pretty intense benchmark (that makes sure you fill up the SSD many times over - write it a second time) to see if it would even be worth it.
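A deliberately crude sketch of one such benchmark pass (a real test would use a purpose-built tool such as fio, direct IO, and a file sized at several times the drive's capacity; the path and sizes below are placeholders):

```python
import os
import time

def time_write_pass(path, total_bytes, block_size=1024 * 1024):
    """Write total_bytes to path, fsync, and return the rate in MB/s.

    Run several passes over a file sized near the drive's capacity and
    compare the first pass against later ones, when the controller must
    erase and rebalance -- that is where SSD write rates degrade.
    """
    buf = os.urandom(block_size)
    start = time.time()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += block_size
        f.flush()
        os.fsync(f.fileno())
    elapsed = max(time.time() - start, 1e-9)
    return (total_bytes / (1024 * 1024)) / elapsed

# e.g. watch for degradation across passes (path is a placeholder):
# for i in range(5):
#     print(i, time_write_pass("/ssd/testfile", 512 * 1024 * 1024))
```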

SSD Opinion

Rob Burton, October 14, 2011 - 1:47 pm UTC

Hi Tom,

Do you have any opinion on where you think SSDs might be able to greatly improve performance? It obviously depends on the total size of your database, but being able to do somewhere in the region of 100,000 IOPS - whereas my disk benchmark from 20 years ago was 100 IOPS - means there really are areas where SSDs can make a difference. Where do you think the biggest gains can be found? I also think it reinforces your general idea of minimising logical IO, as something like an SSD really closes the gap between logical and physical IO. From experience, the big performance problems I have encountered are inefficient application design leading to massive logical IO, which can be very hard to correct in a performance tuning mode.
Tom Kyte
October 14, 2011 - 11:37 pm UTC

they would benefit most

in single block IO's

when your cache cannot hold all of the data you need (you'd want them in the cache as that will be many times faster than SSD)

when you have the bandwidth to actually transfer the data from the ssd to your buffer cache - if you don't, don't even bother making IO's go faster - you need the bandwidth to move the data from disk to ram.

From experience the big performance problems I have encountered are
inefficient application design leading to massive logical IO which can be very
hard to correct in a performance tuning mode.


if you have that, ssd won't do anything for you except perhaps make that worse :)

SSD

A reader, October 15, 2011 - 10:46 am UTC


in-memory database.

Amir Riaz, October 16, 2011 - 3:15 pm UTC

Hi Tom,

Four years ago we built an in-memory system (TimesTen Cache). I think we were the first to do that, with your assistance and help.

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:676698900346506951

I work in the telecom domain and the transaction load is unbelievable throughout the day, but TimesTen has always been able to get us through, and this system is still the fastest one.

However, a few days ago at OOW, Oracle Exalytics was announced, and after some research I found out that it's based on TimesTen.

Now the question is: is it OLTP + data warehouse, or just an in-memory data warehouse?

In my opinion, Oracle Database 12g (a future release) really needs a columnar engine (EHCC is columnar storage with Exadata). Oracle has done well, but the competition is very hard.


Tom Kyte
October 16, 2011 - 10:31 pm UTC

You were hardly the first - TimesTen has been around for a while.

we have columnar storage already - an implementation that permits incremental loads in a very efficient fashion - not sure why you think we need a columnar engine.

Exalytics is geared towards analytics - analysis - as the name implies.

http://www.oracle.com/us/solutions/ent-performance-bi/business-intelligence/exalytics-bi-machine/features/index.html

(and incorporates columnar compression as well)

Exalytics is to compete with SAP HANA

Amir Riaz, October 17, 2011 - 9:09 am UTC

Tom,

Exalytics is to compete with SAP HANA, which is a real-time analytics appliance. Real-time analytics means it gets data from the ERP system in real time and stores it in memory as compressed columnar data. When a report runs, transformation functions are applied to simulate ETL, aggregation is done in real time, and then the report is presented. SAP HANA does all this with the help of its columnar storage and some columnar-based joins it does, therefore it is lightning fast.

Oracle Exadata, on the other hand, has columnar storage in the storage cells, and the processing of data is still done at row level. That is why I suggest a hybrid engine which contains row- and column-level kernels, with the optimizer deciding the optimal engine based on cost.

However, these are just my suggestions; I cannot see others catching Oracle.
Tom Kyte
October 17, 2011 - 6:14 pm UTC

therefore it is lightning fast.

I always laugh when I hear things like that.


"we do X, therefore we are lightning fast"

which seems to imply that in order to be lightning fast - you have to do X. Which of course is a falsehood.

The processing of data is done in the storage cell AND the database instance. And furthermore, if the processing is done solely in the database instance - the database instance works with our compression units - and works at a columnar level, only going to rows of data when it needs to (just like any other columnar data store).

So, can SAP HANA push predicates and processing down to the disk? We do - and therefore we are lightning fast. :)

Thanks Tom

A reader, October 18, 2011 - 2:23 am UTC

Thanks Tom for clearing things up.

Fusion IO 1TB

Amir Riaz, October 28, 2011 - 6:46 am UTC

Hi Tom,

How does the extended SGA work? We have a 1 TB Fusion IO SSD and I am looking to put it into the data warehouse system.

Questions are:
1. How does DBWR write to the flash cache? I think it writes first to disk and then to the flash cache created by the extended SGA.

2. How does a server process read? I think it first checks the SGA; if the block is not found, it looks in the flash cache; if still not found, it goes to the disk system.

3. How does a block get out of the flash cache? Can I lock a table in the extended SGA? What algorithm is used to take a block off the SSD? An Oracle representative told me that something is left on disk when a block is taken out of the flash cache - what is that something?

4. Can in-memory parallel execution benefit, since our SGA size has effectively increased through the extended SGA? I think it should. When deciding on in-memory parallel execution, does Oracle look for data in the SGA cache only, or in both the SGA cache and the extended cache on SSD? According to the Oracle optimizer group, 80% of the data should be in the SGA, but they could not tell me whether that means the database cache only, or both the database cache and the SSD. This is the most important question for our team, so please answer.
Tom Kyte
October 28, 2011 - 7:11 am UTC

you need to be on Solaris or Oracle Enterprise Linux.

are you?


1) it'll use async io to write to both at the same time basically.

2) it'll look in the SGA and if the block is there - fine, you are done. If not, is the pointer to the block on the SSD there - if it is, do an IO against the SSD to get it. If not, do the physical IO

3) we keep a pointer in the SGA to the block on SSD, there is nothing on disk regarding it, it is a pointer in the SGA (so if you are using this, you'll need to allocate a bit more SGA to accommodate this extra information)

4) no, in memory parallel query requires real memory for the SGA. Adding SSD for a 'flash cache' does not increase the SGA size (in fact, it effectively makes it a bit smaller for the pointers).

I don't know what this 80% number is or means, sounds made up to me.
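The lookup order Tom describes can be modeled as a toy two-tier cache; the class and method names here are mine, not Oracle internals:

```python
class FlashCacheSketch:
    """Toy model of the lookup order: buffer cache first, then (via a
    pointer kept in the SGA) the flash cache, then disk."""

    def __init__(self, disk):
        self.disk = disk            # block_id -> block contents (persistent)
        self.buffer_cache = {}      # blocks held in the SGA
        self.flash = {}             # blocks staged on the SSD
        self.reads = {"sga": 0, "flash": 0, "disk": 0}

    def get_block(self, block_id):
        if block_id in self.buffer_cache:
            self.reads["sga"] += 1
            return self.buffer_cache[block_id]
        if block_id in self.flash:          # the pointer lives in the SGA
            self.reads["flash"] += 1
            block = self.flash[block_id]
        else:
            self.reads["disk"] += 1
            block = self.disk[block_id]
        self.buffer_cache[block_id] = block  # cache it for next time
        return block

    def evict_to_flash(self, block_id):
        # on aging out of the buffer cache, stage the block on the SSD
        self.flash[block_id] = self.buffer_cache.pop(block_id)
```

Either way a block is found, it costs at most one IO - the flash cache only changes whether that IO goes to SSD or to magnetic disk.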

SSD requirements

AJ, October 28, 2011 - 7:55 am UTC

My customer is wanting to use Fusion IO to make their poorly written OTB system go fast. Yeah, yeah, I know. Let's not go there right now. You seem to say that to use SSD (or at least this particular SSD) "you need to be on solaris or Oracle enterprise linux.". Where can I read more about this? I looked on support and could not find anything. Thanks.
Tom Kyte
October 28, 2011 - 10:13 am UTC

Exadata vs Fusion IO

Stefan Afendoulis, October 29, 2011 - 12:43 am UTC

AJ,

If you use an HP DL980 with 60 cores and two Fusion IO Octal 5 TB cards, and your workload is OLTP, you can easily beat Exadata for OLTP.

Fusion IO systems are built on the motherboard's system bus, so to the processor it's just like ordinary RAM. It does not have to go to an HBA, then over InfiniBand, then to the cells, retrieve the output and send it to the database.

When we used it the first time, the actual bottleneck was the processors. We tried it with an 8K block size.

It cost us just 1/3 of the cost of Exadata, and it's a far better solution than Exadata.

SSD as means to do FTS faster

Max, October 29, 2011 - 6:40 pm UTC

Tom,

>> Sure, it'll make IO go faster, but so what?

I want to hope that SSD can speed up FULL TABLE SCANS dramatically. Can you confirm this hypothesis?

Why is it so important?
The most unpleasant thing about the Oracle optimizer, in my experience, is its unwillingness to use indexes to join two big tables, using hash joins instead.

The optimizer seems to believe that I have very fast disks!
With all statistics gathered, both at TABLE/COLUMN and SYSTEM level, it still calculates hash-join plans as the best ones.

Actual execution time sometimes degrades 100-fold.
I used to employ the OPTIMIZER_INDEX_COST_ADJ parameter to combat this, but afterwards reset it to the default value, because there is no "correct for all cases" value for this parameter.

I am a very lazy person, and I usually do not tune queries which return results fast... So, if we get SSD, then (I hope) we will get execution times as if indexes were in use, and we can ignore excessive logical IO in those queries where the optimizer is at fault?

Best regards,
Max


Tom Kyte
October 31, 2011 - 11:04 am UTC

I want to hope that SSD can speed up FULL TABLE SCANS dramatically. Can you confirm this hypothesis?


extremely doubtful. spinning magnetic disks are good at doing 1mb IO's - the bottleneck almost always (as in always practically speaking) is not the ability of the disk to retrieve data, but rather the inability to transfer that data from the disk to the server (that is, the SAN or network or whatever is wholly insufficient). Given enough disks (spindles), sufficient IO bandwidth (network, hba's, whatever) - IO doesn't have to be a problem.

In the real world performance day I've been doing, we demonstrate how - in a properly set up system - reading from disk during a massive full scan versus reading from flash cache (faster even than SSD) after it's been filled up - doesn't really amount to very much.

Where SSD/flash come into play - lots of single block IO's. Big full scans - not so much.


The most unpleasant thing about the Oracle optimizer, in my experience, is its unwillingness to use indexes to join two big tables, using hash joins instead.


wow, do you have a lot to learn - really. I'm not kidding. I am TOTALLY not kidding.

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:6749454952894#6760861174154

SSD ain't going to solve your problems. Architecting the right hardware will get you a bit of the way there - but you have a way to go.

I love full scans.
I love hash joins.

Indexes scare me to death when you start talking about 100,000 - 1,000,000 - or more rows being processed. totally to death

Just to fix CBO inaccuracy

Max, November 01, 2011 - 12:56 am UTC

Well, I confess I know almost nothing about server hardware, because I am a modest database developer.

Fortunately, I learnt a lot about CBO from your books ;-)
So, I know that sometimes we should love indexed access, and sometimes full table scans. Of course, I was speaking about queries which are supposed to return tiny amounts of rows, and need to see only a few rows from big tables to build the result set.
Then I do not understand why the CBO rates the plans in such a way that no correlation exists between "cost" and execution time. The CBO constantly tends to prefer FTSs, even when the big tables are to be hash-joined many times.
The only reason I can see is the CBO's very high expectations of our IO performance, but this is where SYSTEM statistics should assist, shouldn't it?

I trust your statement, Tom, that SSD will not save us, but it (or an enhancement to the existing disk system) could help.
Optimizer hints and optimizer-related parameters are useful, but we want a "global" solution.

Best regards,
Max
Tom Kyte
November 01, 2011 - 3:39 am UTC

Well, I confess I know almost nothing about server hardware, because I am a modest database developer.


get to know it, else a modest developer you'll remain...


So, I know that sometimes we should love indexed access, and sometimes full table scans. Of course, I was speaking about queries which are supposed to return tiny amounts of rows, and need to see only a few rows from big tables to build the result set.

It is not that it needs to return tiny amounts of rows - it is that it is to PROCESS a tiny amount of rows.


select count(*) from one_billion_row_table where column_with_five_distinct_values = 'x';

that will count 20% of the rows in the table. It returns a TINY number of rows - namely 1. Should it use an index?
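Rough IO arithmetic shows why an index would be a disaster here (a sketch with assumed figures - 100 rows per block, 128-block multiblock reads - ignoring caching and clustering entirely):

```python
def io_counts(rows, selectivity, rows_per_block=100,
              multiblock_read_blocks=128):
    """Worst-case IO arithmetic: an index path pays roughly one
    single-block IO per matching row; a full scan pays one multiblock
    IO per multiblock_read_blocks table blocks. Figures are assumed."""
    table_blocks = rows // rows_per_block
    index_ios = int(rows * selectivity)
    scan_ios = -(-table_blocks // multiblock_read_blocks)  # ceiling
    return index_ios, scan_ios

# One billion rows at 20% selectivity:
# index path ~ 200,000,000 single-block IOs;
# full scan  ~ 78,125 multiblock IOs -- thousands of times fewer.
```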



produce for us a concrete example that we can talk through.


I trust your statement, Tom, that SSD will not save us, but it (or an enhancement to the existing disk system) could help.


Not much. On a well balanced system you already have the ability to transfer and process the data - and that is the brick wall you are going to hit: the ability to TRANSFER the data. You can probably already READ the data faster than you can a) TRANSFER it, b) PROCESS it - this is why you as a modest developer need to become "unmodest", else you'll be chasing rabbits down holes forever. On such a system, an SSD isn't going to do much for your full scans.

On a system that already cannot transfer the data fast enough (insufficient bandwidth) and even if it could could not process it fast enough (insufficient CPU) - SSD will do zero for you.


The "global" solution is to understand that we "architect" systems - well, at least we are *supposed* to. We don't just say "it must be good - we don't need to understand it - we'll just use whatever is there and assume it was well architected"

Optimizer hints and optimizer-related parameters are useful, but we want a "global" solution.


I disagree, my first step in "tuning" a system is to basically remove both and get to the root cause.

The root causes will either be:


a) incorrect cardinality estimates due to missing or non-representative statistics, or SQL written in a way such that no one could correctly guess the estimated rows. We would use tools such as SQL profiles, extended statistics, and dynamic sampling to conquer that. If we get the right cardinalities - we get the right plan.

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4043225300346897444

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4043033500346640019

b) inefficient sql written by people that don't know all of sql well enough to code something efficiently. For example the developer that does 15 self outer joins to PIVOT data in an "EAV" (entity attribute value) model. Two problems with that - problem 1 is the model itself, problem 2 is you do not need 15 outer joins to pivot - you can use either a "max(decode())" technique in a single pass OR the PIVOT command in 11g. there are tons of other examples of really 'poorly' written sql.

c) the hardware as configured just isn't going to do it - no matter what. and fixing "one thing" (magical SSD for example) isn't going to fix it - you need balanced resources from the disk on up to the server itself
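The single-pass pivot mentioned in (b) is easy to see outside SQL as well; the EAV rows below are hypothetical sample data:

```python
from collections import defaultdict

# EAV rows: (entity_id, attribute, value) -- the shape that tempts
# developers into one self outer join per attribute.
rows = [
    (1, "name", "Alice"), (1, "dept", "Sales"),
    (2, "name", "Bob"),   (2, "dept", "Ops"), (2, "title", "Mgr"),
]

def pivot(rows):
    """One pass over the data -- the same idea as
    SELECT entity, MAX(DECODE(attr, 'name', value)) ... GROUP BY entity,
    instead of 15 self outer joins."""
    out = defaultdict(dict)
    for entity, attr, value in rows:
        out[entity][attr] = value
    return dict(out)
```

One scan of the EAV table replaces N scans, one per pivoted attribute - the same reasoning applies whether the pass happens in SQL or in application code.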




OK, No need for SSD for now

Max, November 03, 2011 - 2:30 am UTC

Hi Tom,

Thank you for the help. I think I got your position regarding the role SSD plays in multiblock reads.

And I cannot neglect an opportunity to learn from you (the reason why one obstinate query does not perform well).
I will try to post the problem to "Reduce LIO" topic:
http://asktom.oracle.com/pls/apex/f?p=100:11:0
- because the problem is not related to SSD subject.


extended SGA

A reader, November 05, 2011 - 8:23 am UTC

in above you wrote,

2) it'll look in the SGA and if the block is there - fine, you are done. If not, is the pointer to the block on the SSD there - if it is, do an IO against the SSD to get it. If not, do the physical IO

3) we keep a pointer in the SGA to the block on SSD, there is nothing on disk regarding it, it is a pointer in the SGA (so if you are using this, you'll need to allocate a bit more SGA to accommodate this extra information)

Why do we do physical IO against disk? Why keep disk at all, since the SSD is there?

3) Are you saying that the pointer to the block on SSD resides in the SGA? That means that if you reach the block via the pointer, you would need only one IO to get the data.

Why do you recommend Solaris 11? I think Oracle is moving away from Linux. Solaris is too good.
Tom Kyte
November 07, 2011 - 11:47 am UTC

we use the SSD purely as a *cache*, not for persistent storage when using the flash CACHE. That is what it does by design.

You have 10's maybe low 100's of gigabytes of SGA

You have 100's of gigabytes of SSD

You have 1000's (or more) gigabytes of Magnetic spinning disk <<<=== persistent



As for:


3) Are you saying that the pointer to the block on SSD resides in the SGA? That means that if you reach the block via the pointer, you would need only one IO to get the data.


Yes, that is pretty much what I typed. But not the "you would need only one IO to get data", you always need only one IO to read a given block. The flash cache allows that IO to be against the SSD instead of magnetic disk.



Why do you recommend Solaris 11? I think Oracle is moving away from Linux. Solaris is too good.


where did I recommend an OS ?

We are not moving away from linux at all - what gives you that impression?

SSD with data warehouse

Amir Riaz, November 05, 2011 - 3:39 pm UTC

Our experiment using a 1 TB SSD with the Oracle 11g R2 extended SGA has been a big success. We used it for data warehouse purposes, to tune our star transformations. We partitioned the disk into two parts: on one part we placed our redo logs, and the second part was added to the SGA.

Oracle ETL time reduced by 3 hours and now we are able to get our reports within 1 minute.

The Oracle extended SGA works for us, even though Oracle recommends it for OLTP purposes. In fact, I found that SSD inside the database server is better than SSD as used in Exadata (but that is just my observation).

Thanks Tom for the valuable help and suggestions.

hmm... persistent storage used for cache

A reader, November 08, 2011 - 7:13 am UTC

That's extremely confusing: SSD is a storage device, and you are using it for cache but not recommending it for storage.

I think extended sga and flash cache are your cache implementations.

We could not use the extended SGA with Oracle 11g R1. Tom, is it possible that Oracle 11g R1 supports the extended SGA with some patches, or has a similar feature to 11g R2?

Can users who have licenses for Oracle 11g R1 switch to Oracle 11g R2 without paying any additional license fees?

Please recommend further reading on the extended SGA.
Tom Kyte
November 08, 2011 - 7:51 am UTC

SSD's - the cheap ones we are talking about here, something you would buy at best buy - exhibit:

a) excellent random read (no seek time)
b) average large, sequential read (your transfer rates come into play)
c) average to less than average write performance (you have to be careful where you write to)
d) typically higher failure rates (although this is getting much better)
e) significantly higher cost


So, if you have some SSD, and wanted to put it to use, using it as a backing store - a cache - makes sense.


And the flash cache is a new feature of 11.2 - it has not been backported.

You get access to patches and upgrades with a license - yes.


http://docs.oracle.com/docs/cd/E11882_01/server.112/e25494/memory005.htm#ADMIN13391

A reader, November 08, 2011 - 1:46 pm UTC

You get access to patches and upgrades with a license - yes.

If a user already has a license for Oracle 11g R1, can he move to Oracle 11g R2 under the same license? There are a lot of features in Oracle 11g R2 which are not in R1.

Or is it possible to make R1 equivalent to R2 through patches?

It would be a killer if users had to pay extra for minor releases.

Can you elaborate or clarify?
Tom Kyte
November 08, 2011 - 1:56 pm UTC

you get upgrades (and patches). Sorry, I don't know how to say it more clearly?

You would upgrade 11.1 to 11.2. As long as you are licensed, you have upgrades. You get the base set of features - not any new options or features of a higher edition.

Meaning, if you have 11.1 enterprise edition with partitioning - you can upgrade to 11.2, you'll have every 11.2 enterprise edition feature (see http://docs.oracle.com/docs/cd/E11882_01/license.112/e10594/toc.htm for what are features of each edition) and still have access to the partitioning option - but not any new options that you don't license

So what about the SSD Racks ?

Greg, November 08, 2011 - 2:28 pm UTC

You mentioned:
SSD's - the cheap ones we are talking about here, something you would buy at best buy

So how much do things change if we look at these larger SSD Racks that are available?

ie
http://www.cybernetics.com/storage/iSAN_10TB.php

I keep seeing this stuff thinking I'd love to see how our Oracle install would run on a SAN rack of SSDs :)

I'm sure they'd still suffer a bit in the "ahh, too much, slow down, please!" type logic hitting the CPUs and network and such, but would the SAN racks make the difference?
(that and I'm assuming these SSDs are higher quality than the "best buy" ones, are they not?) :)


Tom Kyte
November 08, 2011 - 4:52 pm UTC

If they are in a SAN it'll depend on how they are cabled to the server - doesn't matter how fast the IO is if you cannot transfer it from point A to point B.

And again- single block IO - they'll be really good. Having it in the buffer cache - better, but that would be a good second.

I don't know anything about this particular company's products - so best if I do not comment on the 'best buy' comparison one way or the other :)

SAN problems

A reader, November 10, 2011 - 12:04 pm UTC

If they are in a SAN it'll depend on how they are cabled to the server - doesn't matter how fast the IO is if you cannot transfer it from point A to point B.

Can you please tell us how SANs should be connected to servers?
Tom Kyte
November 10, 2011 - 1:48 pm UTC

with lots of cables

that are dedicated to you

with sufficient switches/hbas whatever so as to not be a single concentration point.


For example, if you have a GigE network - you can transfer (on the very very very best of days) about 100MB/sec. That is nothing. So you get a couple of those - but they all go into a single card. Or they hit a single switch. Or a single anything.


You need to make sure that you have put together the hardware in your environment to ensure you can move the data from the disk to the server - without a choke point.
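The choke-point arithmetic can be sketched in a few lines (numbers illustrative):

```python
def scan_minutes(data_gb, disk_read_mb_s, link_mb_s):
    """Time to move data_gb from storage to server: the slower of the
    aggregate disk read rate and the network path wins."""
    effective_mb_s = min(disk_read_mb_s, link_mb_s)
    return data_gb * 1024.0 / effective_mb_s / 60.0

# A 500 GB scan: the storage can deliver 2000 MB/s, but one GigE link
# moves ~100 MB/s -> min(2000, 100) = 100 MB/s, about 85 minutes.
# Faster disks (or SSD) change nothing until the link is fixed.
```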

single block vs multi block reads

A reader, November 13, 2011 - 2:32 am UTC

Tom,

Do both conventional and direct path reads perform single-block IO from the SGA?

Or does Oracle always do single-block reads from the SGA and multi-block reads from disk?


Tom Kyte
November 15, 2011 - 7:33 am UTC

blocks are scattered in the buffer cache - so blocks that are logically next to each other on disk are not stored next to each other in the buffer cache.

We get blocks one at a time from the cache.


SSD performance and stability

stephane, June 07, 2012 - 5:30 pm UTC

For testing/dev purposes I have an Oracle 10g DB running in a Windows 7 Ultimate virtual machine (VMware Fusion) on a MacBook Pro, OS X 10.7.4.

Since I replaced my old 750 GB disk with a 512 GB SSD, performance is just amazing.

But since I replaced it, many java.sql.SQLException: ORA-01578: ORACLE data block corrupted errors appear.

So I am a little bit annoyed at times.
All feedback welcome


Tom Kyte
June 07, 2012 - 7:00 pm UTC

Please utilize support.

Is this answer outdated?

Mittu Thomas, January 22, 2019 - 1:31 pm UTC

This was answered 10 years back. SSD technology has changed a lot in the last 10 years. What is your latest view on this?
Connor McDonald
January 23, 2019 - 3:44 am UTC

I don't think the *answer* has changed, in that, as with *any* technology advancement, you should always test the claimed benefits against some real world analysis that is applicable to your particular business case.

Having said that, it is not really the ssd tech that has changed, more our way of accessing data. 10 years ago, vendors slammed flash into disk arrays, but you still accessed them as if they were standard disks, typically over a network of some sort, and often it was the network and/or controllers that were the dominant component of response time.

So a lot of people got "burnt" by flash, ie, it didn't match the hype.

With the advent of PCI attached disks, with near memory-speed style access, we are starting to see the true benefits of flash tech.

But our answer remains the same - if you don't have an I/O problem, throwing a chunk of money into flash is just money wasted.

If you *do* have an I/O problem, then an investigation/benchmark on flash tech would be wise before spending that cash.
