
Question and Answer

Tom Kyte

Thanks for the question, AMIR.

Asked: February 20, 2008 - 3:14 pm UTC

Last updated: October 23, 2012 - 12:32 pm UTC

Version: 11.1.0

Viewed 10K+ times

You Asked

hi Tom

A few months back I designed an architecture for one of our clients. The main problem was that their central database, an Oracle 9i, had so many problems that it was not possible for them to shut it down or to ask us to redesign the system before we completed another project on the same database. I tried different architectures and Java frameworks to no avail, until I found Oracle TimesTen. What I did was install Oracle TimesTen and the application server on the same machine and cache query results from the main database server. Fortunately it worked to some extent and I was able to complete the project. My question to you is:

Why did Oracle not try TimesTen on the middleware? Instead they introduced a new feature, the SQL result cache, which is pretty cool, but not as cool as if some lighter version of TimesTen had been used for caching data in the middleware. Why did Oracle opt to create a new feature when an existing product could already fulfil that need?

Also, while designing this project I saw many possibilities for caching data and for combining different frameworks; for example, when I combined it with TopLink I got a fast response time, maybe because the data was available on the same machine.

My project manager considers that in the above project I went too extreme in terms of design and architecture. Do you consider it extreme? Because if it were that extreme, Oracle would not have introduced the SQL result cache. What's your opinion?

regards
Amir Riaz

and Tom said...

because more than one database is complex. because there are replication and synchronization issues with times ten.

and because if people put 1% of the time into DESIGN that they put into slamming code out, we'd probably not need any of this stuff.

With the sql result cache, the plsql function result cache and the client result cache in 11g, the need for extra moving bits and pieces goes away in almost all cases.

the client result cache - client issues query once and as long as the database doesn't change the answer, transparently keeps reusing it.

the server result cache - like a just-in-time materialized view; every client that connects to oracle takes advantage

the function result cache - see above
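For illustration, a minimal sketch of the first two (the table, column and function names are made up; the client result cache needs no code change at all - just the client_result_cache_size parameter on the server and an 11g OCI-based client):

-- server result cache: fold many rows into one small, reusable answer
select /*+ result_cache */ deptno, sum(sal) total_sal
  from big_emp
 group by deptno;

-- plsql function result cache: results keyed by the IN arguments
create or replace function get_dept_name( p_deptno in number )
return varchar2
result_cache relies_on ( dept )
as
    l_dname dept.dname%type;
begin
    select dname into l_dname from dept where deptno = p_deptno;
    return l_dname;
end;
/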


if you can do what you need with one thing, then one thing will make life less complicated.


And you are asking the wrong person if a middle tier was over designed or architected. I tend to think they are over engineered (glued together bit by bit) rather than over designed. I wish for over designed sometimes :)


Comments

Michal Kuratczyk, February 22, 2008 - 10:02 am UTC

> Why did Oracle not try TimesTen on the middleware?

Well, we actually have something like that - it's called Oracle Coherence (formerly Tangosol Coherence). It is not a SQL database (in fact it's not a database as such at all, because you don't get any persistent storage out of the box, though you can integrate it with an RDBMS, Berkeley DB and whatnot), but very often, when customers ask about TimesTen, Coherence is the better answer. Really cool product and a great scalability/performance solution.

Result cache

Anil, February 24, 2008 - 1:59 pm UTC

Hi

Oracle 11g's new result cache looks good at first glance. An immediate test shows it is really fast if used appropriately.

But while searching I have seen some not very encouraging reports about its scalability:

http://www.pythian.com/blogs/683/oracle-11g-result-cache-tested-on-eight-way-itanium

What is your take on this?

Rgds
anil
Tom Kyte
February 24, 2008 - 2:38 pm UTC

well, it is sort of a not-real-world test, isn't it.


Q) why would you use the result cache?
A) to take a complex query and save its results - you know, something that takes 1,000,000 rows and aggregates them into 1,000

So, what do they test with?

SQL> begin
  2   for cur in (select /*+ result_cache */ * from t)
  3   loop
  4    null;
  5   end loop;
  6  end;
  7  /



I'll ask you - would you expect a simple "select * from table" to be a realistic test here?

would you cache that? (I would not)


How about the results of "select count(*) from t where <condition>" or something more realistic - where you turn MANY ROWS into ONE ROW



To put it another way, I call the server result cache a JIT MV - a just-in-time materialized view.

So, would you ever:

create or replace materialized view mv
enable query rewrite
refresh fast
as
select * from big_table;

would you do that? (no, please say "no")

if you did - would you expect it to be better (no, please say "no")


So, a suggestion: go back to their web site and leave a follow-up saying "hey, if you use this for a query that goes from running in a second down to 1/1000th of a second - you know, something we'd really use this for - what do you see then?????"

The server result cache makes sense where a materialized view might have made sense; it is a logical extension of that.

for select * from t or select * from t where indexed_column = value, you are not going to see much benefit.

their graphs show that, the red and the blue are equal in the beginning - why would you use the result cache if it didn't do anything for you from the beginning??!?!?

some people simply don't get it

AMIR RIAZ, February 25, 2008 - 1:41 am UTC

Hi Tom

just read the above article about the result set cache. I think Alex Fatkulin's test case is not valid, not because he is doing a simple

select /*+ result_cache */ n from t

but because of the way he has coded the test case. Let's go through the test case.

SQL> variable rc refcursor;
SQL> exec open :rc for select /*+ result_cache */ n from t;
PL/SQL procedure successfully completed.

SQL> create or replace procedure rc_fetch(
2 p_rc in sys_refcursor
3 ) is
4 l_n number;
5 begin
6 fetch p_rc into l_n;
7 dbms_output.put_line(to_char(l_n));
8 end;
9 /

he logs into one session and runs:

SQL> set serveroutput on
SQL> exec rc_fetch(:rc);

then in another session he runs the same with tracing on and gets the following enqueues:

enq: RC - Result Cache: Contention 6 87.15 137.17

At first he was not able to understand why he had these waits, and he went through two more posts. Well, in my humble opinion, these waits occur because Oracle has to lock the result set cache when we open the cursor; the purpose is to prevent its invalidation during the fetch if the table data changes. What Alex did is open the cursor but never close it, so Oracle still considers that he needs more fetches and does not release that lock. So RC locks are used to prevent data integrity issues.

he himself says

So you see, our first session is holding the RC enqueue in exclusive mode. The second session was trying to acquire this enqueue in shared mode until a certain amount of timeouts (six, according to our trace). The good news is that subsequent executions in any other session (including our second session) will not try to acquire RC enqueue again. The bad news is that they will not use Result Cache. As long as our first session has fetched all rows from a refcursor all other sessions will suddenly start to use Result Cache without incrementing the total_req# stat for RC enqueue in the v$enqueue_stat view.

But why did Oracle use exclusive locks on RC? Why did they not use shared locks?

regards
Amir Riaz

result cache

Anil, February 25, 2008 - 1:00 pm UTC

Tom,

100% agreed. When I read that article completely, I realized that the test was not reasonable:
"wrong use of the feature"


Rgds
Anil

does an open cursor lock the result set cache?

AMIR RIAZ, February 27, 2008 - 7:12 am UTC

hi

In the review above, I suggested that in Oracle 11g with the result set cache, opening a cursor locks the result set cache for integrity reasons, and if the close cursor statement is never run, Oracle 11g keeps a lock on it; that is why Alex is getting RC locks. Do you think my observation is right? I agree with you that the result set cache is like a materialized view and that a simple select statement cannot show its true capability, but I want to add that in the above test case the cursor was never closed on the result set cache, and that is why he is seeing so many waits. Do you think I am right?

regards
Amir riaz
Tom Kyte
February 27, 2008 - 11:14 am UTC

I haven't read the entire paper yet. But as long as a cursor is opened against the result cache - it would have to stay there, yes. But that does not prevent concurrency at all - so I'm not sure what the big deal would be.

that cached result could be invalidated by another session (it would not be used by future "opens")

that cached result can be accessed by other sessions (until it is invalidated).

so, not sure the relevance.

let me explain

AMIR RIAZ, February 27, 2008 - 1:40 pm UTC

tom

Let me explain with a timeline. For this explanation I am assuming the coder did not close the cursor and that the open cursor statement has locked the result set cache. The objective here is to understand whether concurrency is affected by this, i.e. whether Oracle locks the result set cache in share mode or in exclusive mode. Suppose a user opens a cursor at time t1 in session s1; during the result set retrieval (fetch), at time t2, an update is performed in another session s2 on the table with which the result set cache is associated; then in session s3 a user tries to retrieve the same result set from the result set cache. The timeline will be something like this:

t1 ============================== s1
t2 ====== s2
t3 ========================= s3

Now, if we don't close the cursor in session s1, the result set cache will still be there even after the update in session s2. Since session s3 should see the updated data (it queries at time t3, after t2 has committed), it has to either wait or query the database; from that I derived that the result set cache should be locked in exclusive mode to prevent other sessions from querying it.

If the result set cache is locked when the cursor is opened, then the open should be followed by a close statement, or we should use an implicit cursor as you did in your examples; otherwise the same problem will occur as in Alex's test case.

regards
Amir Riaz
Tom Kyte
February 27, 2008 - 7:16 pm UTC

there can be, and commonly are, multiple copies of the result in the cache, based on different points in time.

other sessions are not prevented from querying anything - they will however query the CORRECT CACHED result.

other sessions are not prevented from updating anything - they will however invalidate the current server result cache (which prevents others from hitting it, they will not use what was there - they have to REBUILD the cached result)
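A sketch of how you can watch that happen (11g dynamic performance views; "t" and its column "n" are placeholders, and querying v$result_cache_objects needs the appropriate privileges):

-- session 1: seed the cache
select /*+ result_cache */ count(*) from t;

-- session 2: any committed DML against t invalidates the entry
update t set n = n where rownum = 1;
commit;

-- any session: the entry's status flips from Published to Invalid;
-- the next execution of the query rebuilds the cached result
select name, status
  from v$result_cache_objects
 where type = 'Result';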


Is it necessary?

Andrew Markiewicz, May 06, 2008 - 5:35 pm UTC

I received an email from Oracle with a link proclaiming the virtues of TimesTen in 11g. I'd never heard of this feature.

http://www.oracle.com/newsletters/information-indepth/database-insider/may-08/inmemory.html?msgid=6520781

What struck me was the apparent need to get the data closer to the application .... in the application server. Why not get the application closer to the data ... in the database?

"By caching performance-critical subsets of an Oracle Database in main memory in the application tier, the software dramatically reduces application response times and increases throughput by relying on memory-optimized algorithms and avoiding communication delays between computers and processes."

I've been reading some information about the TimesTen database. There's a good pdf on OTN. It seems to get some of its performance benefit by using different optimizer algorithms that require the data to be entirely in memory, thus being able to access memory addresses directly instead of developing plans that may need to be optimized for disk access. Fair enough; I can see that being a benefit. But how much of the performance benefit is gained from reduced network traffic between the OAS and the DB? And if it's all in memory, what about read consistency and concurrency?

http://download.oracle.com/otn_hosted_doc/timesten/703/TimesTen-Documentation/intro.pdf
The documentation says it supports read committed isolation and has multiversioning, so readers are not blocked; but it does still allow non-repeatable reads. This is not the way I've come to expect Oracle to work.

"Read committed isolation provides increased concurrency because readers do not
block writers and writers do not block readers. This isolation level is useful for
applications that have long-running scans that may conflict with other operations
needing access to a scanned row. However, the disadvantage when using this
isolation level is that non-repeatable reads are possible within a transaction or
even a single statement (for example, the inner loop of a nested join)."


I don't have any experience with this feature, but my first impression is that it is necessary in response to some poorly performing Java applications residing on an app server that tried to use the database as a non-proprietary black box instead of designing database applications close to the data within the database using plsql.

Andrew


Tom Kyte
May 07, 2008 - 1:14 am UTC

Times ten is all about "latency".

In a not-in-memory database you typically have

a) disk
b) network

involved - both of those can and will make response times fluctuate. Times Ten can remove both of them - so the only thing that will really make a query's response time fluctuate is CPU, and if you determine the CPU required to do query X, you can remove these variations and get a consistent, constant response time.

that is what it is best for. I agree that I would not want to use it to try and fix a middle tier application.

as for "but it allows for non-repeatable read" - so does Oracle in read committed - no difference there. Not sure what you meant by that bit

non-repeatable read

Andrew Markiewicz, May 07, 2008 - 9:19 am UTC

Tom,
This is what I was getting at.

Excerpt from the TimesTen documentation:
"However, the disadvantage when using this
isolation level is that non-repeatable reads are possible within a transaction or
even a single statement (for example, the inner loop of a nested join).
"

My understanding from your book "Expert One On One", Ch. 3, p. 127, is that Oracle does not do this, since it will rebuild the data as of the point in time when the single SQL statement began.

"In Oracle, using multi-versioning and read consistent queries, the answer we get from the accounts query is the same in READ COMMITTED as it was in the READ UNCOMMITTED example. Oracle will reconstruct the modified data as it appeared when the query began, returning the answer that was in the database when the query started."

That is different than TimesTen isn't it?
Tom Kyte
May 08, 2008 - 2:45 am UTC

Sounds like it could be, I haven't had a need to utilize it myself, so I haven't really tested it at all.

Got a pointer to the full documentation you are quoting? (always best to see things in context)

ResultCache research

Alex Fatkulin, May 07, 2008 - 11:21 am UTC

ok, I will answer this one by one.

Amir Riaz,

"first he was not able to understand why he has these waits and went through two more posts"

I assume you did not read all of my posts. First of all -- I did three additional posts, not two :). Second -- the last three posts are NOT related to the first one, because the first one is about an enqueue while the subsequent ones are about latches. When someone is "not able to understand why", he does more research on the same topic; how completely different topics got transformed into "not able to understand why" is not clear to me. Taking into account that you later posted my own explanation (which means I did understand the issue) makes your statement even more confusing to me.

And how does the fact that I did not close the cursor invalidate the test? Does the documentation say anywhere "you have to close your cursor or we will stop you cold dead for 6 seconds on your first try but not on your second try (but you will not use the RC cache)"?

Tom,

I appreciate your feedback. Since you said that you haven't read the entire paper, I'll skip right to the paper's goal.

The research was around two things:

(1) We have only one ResultCache latch
(2) This latch can only be acquired in an exclusive mode, even if you only need to read from a result cache.

The test cases were selected to put as much pressure on that single latch as possible and, mind you, I explicitly state that. Nowhere in the paper am I stating that this is the best application for ResultCache. The goal was to find the limits and implications of a single latch and to contribute to the community by showing a case where RC should not be used and explaining why. I am not hiding how I got these results nor pretending it was the best use; I simply provided my observations.

The statement that "ResultCache is not appropriate for small things" is not obvious, is it? It's sort of like saying "don't use the Oracle buffer cache to cache small tables".

Oracle's documentation says:

"Query retrieval from the query result cache is faster than rerunning the query."

It doesn't mention whether it should be big or small, or what "real world application" means. I have seen many real world applications where things were used in ways they were never designed for :) And nowhere does it say "don't use RC for small queries", just as nowhere does it say "RC is more appropriate for DWH-like queries". All it says is "running a query from the result cache is faster".
Tom Kyte
May 08, 2008 - 3:21 am UTC

... I am not hiding how I got these results nor pretending it was the best use, ...

that is where I think this research falls short. Rather than putting a damper on the entire thing - maybe demonstrate how it works, where it should be used, and (in this rather obvious case - you would never use it for this) where it should not be.

If you print something relatively negative, but your example is so "non-real world", well...?

The result cache would be used to take a query that needs to process a bit of information into a smaller bit of information (sort of like a materialized view) and cut out the processing time. A simple "select * from t" - I don't see that making real world sense - do you?

TimesTen Intro

Andrew Markiewicz, May 08, 2008 - 9:44 am UTC

Tom Kyte
May 12, 2008 - 9:28 am UTC

it would appear that way - the times ten read committed isolation does not provide consistent reads the way Oracle does.

Real World Sense

Alex Fatkulin, May 08, 2008 - 12:18 pm UTC

"The result cache would be used to take a query that needs to process a bit of information into a smaller bit of information (sort of like a materialized view) and cut out the processing time. A simple "select * from t" - I don't see that making real world sense - do you?"

I see the existence of RESULT_CACHE_MODE = FORCE as a claim that "we are good to cache everything", because if you are not -- why have a parameter to cache everything in the first place?

There is a difference between "will not be faster" and "will be slower", and I would tell you that there is no "real world sense" in why taking something out of the result cache should be slower than getting the same thing out of the buffer cache (regardless of how much you "fold" it). In fact, the result cache could be optimized for doing exactly what it is intended to do -- while the buffer cache has many more things to care about. This is what I was expecting from the result cache back in 2007, but not what I got.

There is also no "real world sense" in having a single latch without shared mode gets (anyone here care to remember the shared pool latch in Oracle 7?) and, truth be told, if they had allowed shared gets -- the result cache could be on par with the buffer cache in every case, and not only "if you do this, that and that".
Tom Kyte
May 12, 2008 - 9:49 am UTC

Alex - we do not recommend using that and it is not the default.

but - suppose you have a system whereby every query:

... "The result cache would be used to take a query that needs to process a bit of
information into a smaller bit of information (sort of like a materialized
view) and cut out the processing time. A simple "select * from t" - I don't see
that making real world sense - do you?"
...

satisfied that.

I do not agree with your results as posted; you can do with that whatever you like. They are not real world - they cause people to ask "why, why did you do that, why not test something we'd actually use?"



you did not read all of my posts

Amir Riaz, May 10, 2008 - 1:49 am UTC

hi

I read all of your posts before posting anything on this thread, and I understand what the enqueue waits are. But you, on the contrary, did not read this thread and just went on. Again, you did not give any logic, just simple replies to defend a wrong test case. Here is a cut and paste of my first post:

SQL> variable rc refcursor;
SQL> exec open :rc for select /*+ result_cache */ n from t;
PL/SQL procedure successfully completed.

SQL> create or replace procedure rc_fetch(
2 p_rc in sys_refcursor
3 ) is
4 l_n number;
5 begin
6 fetch p_rc into l_n;
7 dbms_output.put_line(to_char(l_n));
8 end;
9 /

he logs into one session and runs:

SQL> set serveroutput on
SQL> exec rc_fetch(:rc);

then in another session he runs the same with tracing on and gets the following enqueues:

enq: RC - Result Cache: Contention 6 87.15 137.17

At first he was not able to understand why he had these waits, and he went through two more posts. Well, in my humble opinion, these waits occur because Oracle has to lock the result set cache when we open the cursor; the purpose is to prevent its invalidation during the fetch if the table data changes. What Alex did is open the cursor but never close it, so Oracle still considers that he needs more fetches and does not release that lock. So RC locks are used to prevent data integrity issues.

he himself says

So you see, our first session is holding the RC enqueue in exclusive mode. The second session was trying to acquire this enqueue in shared mode until a certain amount of timeouts (six, according to our trace). The good news is that subsequent executions in any other session (including our second session) will not try to acquire RC enqueue again. The bad news is that they will not use Result Cache. As long as our first session has fetched all rows from a refcursor all other sessions will suddenly start to use Result Cache without incrementing the total_req# stat for RC enqueue in the v$enqueue_stat view.

But why did Oracle use exclusive locks on RC? Why did they not use shared locks?


Then I realised that, to be consistent, at some point in time Oracle has to lock the result set cache in exclusive mode, so my second question was: is there only one result set cache, or multiple? So I posted this:

Let me explain with a timeline. For this explanation I am assuming the coder did not close the cursor and that the open cursor statement has locked the result set cache. The objective here is to understand whether concurrency is affected by this, i.e. whether Oracle locks the result set cache in share mode or in exclusive mode. Suppose a user opens a cursor at time t1 in session s1; during the result set retrieval (fetch), at time t2, an update is performed in another session s2 on the table with which the result set cache is associated; then in session s3 a user tries to retrieve the same result set from the result set cache. The timeline will be something like this:

t1 ============================== s1
t2 ====== s2
t3 ========================= s3

Now, if we don't close the cursor in session s1, the result set cache will still be there even after the update in session s2. Since session s3 should see the updated data (it queries at time t3, after t2 has committed), it has to either wait or query the database; from that I derived that the result set cache should be locked in exclusive mode to prevent other sessions from querying it.

If the result set cache is locked when the cursor is opened, then the open should be followed by a close statement, or we should use an implicit cursor as you did in your examples; otherwise the same problem will occur as in Alex's test case.


As Tom describes, there can be multiple result set caches; you can read the correct reply in that post. So in all cases it is your test case which is wrong, and it does not even prove what you said above. You are just trying to defend yourself on the modesty of Tom. If you really have something, PLEASE POST IT ON THIS THREAD.

Alex Fatkulin, May 10, 2008 - 1:07 pm UTC

Amir Riaz,

I think your entire confusion stems from the fact that you did not realize that I kept the cursor open on purpose -- to demonstrate my observation about RC enqueue waits.

Alex Fatkulin, May 10, 2008 - 2:17 pm UTC

I read all of your posts before posting anything on this thread, and I understand what the enqueue waits are

I still fail to see what you did.

The purpose of this article: http://www.pythian.com/blogs/594/oracle-11g-query-result-cache-and-the-rc-enqueue

...was solely to demonstrate how and when you can wait on the RC enqueue, because this behavior is documented nowhere. This article has nothing to do with the scalability problems discussed in the following articles.

This article:
http://www.pythian.com/blogs/598/oracle-11g-query-result-cache-rc-latches

...introduces RC latches (not enqueues) and points out that the entire RC cache is protected by a single latch without shared mode gets at all. This article is on a totally different matter from the previous one, and it uses completely different test cases, constructed with only one goal -- to put as much pressure on the RC latch as possible -- in order to expose its limits. The test cases used to stress the RC latch were read-only, without anyone trying to modify anything, and I also made sure everything could fit in the RC cache so there were no age-in and age-out problems -- which means I did everything I could think of to allow Oracle to get the RC latch in a shared mode.

Unfortunately, back in 2007 I didn't have any appropriate hardware running 11g to really test concurrency, so when I got my hands on a quad core box I redid my tests on it:

http://www.pythian.com/blogs/660/does-oracle-11gs-result-cache-scale-poorly

...but even that was not enough to saturate this single latch; however, from these tests you can already spot how badly the RC latch waits grow as concurrency increases.

And finally:

http://www.pythian.com/blogs/683/oracle-11g-result-cache-tested-on-eight-way-itanium

...where you can see the limits exposed.

The important point about these articles (which you seem to have totally missed or ignored) is that the test case which demonstrated RC enqueue waits has NOTHING to do with questioning scalability. The articles which do question RC scalability use completely different test cases.

Other members may comment

Amir Riaz, May 11, 2008 - 2:06 am UTC

hi

http://www.pythian.com/blogs/594/oracle-11g-query-result-cache-and-the-rc-enqueue

In your article you did not mention anywhere that you were keeping the cursor open with the intention of showing RC waits. Further up in your article you gave the following comment:

I'm going with the simplest example I could think of. You are free to apply that to any real-world situation you can imagine.

variable rc refcursor;
exec open :rc for select /*+ result_cache */ n from t;


On one side you are saying that you did this intentionally to demonstrate the RC enqueue, yet in your article you yourself say that YOU ARE FREE TO APPLY THAT TO ANY REAL WORLD SITUATION???

THIS IS BECAUSE YOU DIDN'T KNOW THAT KEEPING THE CURSOR OPEN WOULD INTRODUCE RC WAITS. IT POINTS OUT THAT YOU WERE TREATING THIS EXAMPLE AS A REAL WORLD EXAMPLE, AS YOU YOURSELF WROTE IN THE ARTICLE.

Further proof is down in the article, where you yourself admitted:

It's a pity that no one bothered to add those enqueues here (probably in the desperate belief that no one will notice their presence). You can discover that any user session connected to a database is holding the AE enqueue in shared mode. The purpose of that enqueue is unknown to me for now. RC stands for Result Cache.

This clearly states that you don't know what you are doing.

I accept that Oracle missed that documentation, but it's just a documentation bug.

You did not mention anywhere what you were trying to prove by keeping the cursor open. You did not mention anywhere in the article that you were keeping the cursor open intentionally for the purpose of demonstration. Rather, you were applying that piece of code to any real world example.


Alex Fatkulin, May 11, 2008 - 2:40 am UTC

ANY REAL WORLD SITUATION???

Easily. Imagine a paginated report based on aggregated data. You decided to use RC for it. So your user fires up the report and looks at the first page -- he hasn't viewed all the pages yet. Now, if you have a stateful environment, you will naturally want to keep the cursor open in order to be able to fetch the next page when the user requests it. This is also the first user, the one who started to initialize RC with that result set. Now, if a second user requests the same report -- he will be forced to wait 2 minutes for the RC enqueue timeout.

This clearly states that you don't know what you are doing.

I am talking about the AE enqueue there -- the AE enqueue is not documented anywhere. I hope you always know what you are doing when you see undocumented stuff, because I don't, and you had better share with us how you manage it.

You did not mention anywhere what you were trying to prove by keeping the cursor open. You did not mention anywhere in the article that you were keeping the cursor open intentionally for the purpose of demonstration. Rather, you were applying that piece of code to any real world example.

At the very beginning I am saying:

Way before 11G made its way to production, I joked that this new feature called "Query Result Cache" would just implicitly create a materialized view, protecting dependent objects with some new enqueue while doing so.

...

Query Result Cache was the first new feature that I put to the test, only to discover that you can spend more than two minutes waiting on... guess what."

This is the first paragraph, and it clearly says that I was expecting this enqueue and that I am going to show you how to see two minutes of waiting on it (I hope the "guess what" part wasn't rocket science).

Sorry, but I don't know how else I can help you except to repeat "please read what I am writing properly".

Alex Fatkulin, May 11, 2008 - 3:14 am UTC

YOU ARE FREE TO APPLY THAT TO ANY REAL WORLD SITUATION???

ok, I think your confusion with that statement is different than I initially thought, so I am going to comment a second time on that one.

I suppose you understood it as "every real world case will experience this wait". That is not what I am saying. I am saying "go ahead and apply this test case to what you are doing and see if you are affected by it".

I think what we really have here is some sort of a language barrier -- you are misunderstanding the meanings of words.

Another reason for using Times Ten over Result Cache

Arup Nanda, May 11, 2008 - 10:04 am UTC

Skipping the discussion of whether the tests done by Alex are real-world enough or not, I have another answer to the original post - why you would choose TimesTen over Result Cache.

Tom already gave one compelling reason - the network latency between the app and db servers is a huge motivation for using TimesTen. The other reason is redundancy. Typically you have several app servers for a single database server. Using TimesTen on these app servers reduces the need to go to the database at all - buffer cache or not. One example (yes, real world), one I am very closely associated with, is an application that provides rates information on a webpage after doing some lengthy calculations. This application is served by 6 app servers around the world, and the database is in one place - Boston, USA. Using Result Cache would not really help; the data is fairly static, but the app server would still need to check the result cache every time a query came in for rates. With up to 100,000 queries per second, that would simply cripple the database.

In this case, we used an application server level data cache. Since TT was not available then, we wrote our own tool. Right now we are doing a conversion to TT to replace our homegrown tool. I deliberately chose TT even though I knew very well about Result Cache, for this specific reason.

So, the moral is obvious - Result Cache is not a replacement for TT; they each serve a different purpose.

resultset cache real world example

daniel, May 12, 2008 - 4:38 am UTC

http://www.oracle-developer.net/display.php?id=503

in the performance considerations section

We can see that there is little performance difference between the cached and uncached lookups. In fact, the result cache uses more latches (note the two million hits on the new Result Cache latch), although it is marginally quicker overall. These results are probably to be expected when we consider the work to be done in a single primary key lookup. First, the index and table data will be in the buffer cache and second, the amount of I/O saved by a lookup in the query result cache is minimal.

The best performance gains will be achieved when the result cache is preventing a large amount of database work from being repeated (such as in our aggregate queries earlier). Unfortunately, these types of queries are going to be rare in the result cache as the base tables will probably be frequently updated.


This is perhaps what Tom and the other members are trying to say. A result set cache is like a materialized view: it will best suit queries with aggregation, as with a materialized view, and will save the database work.

Perhaps now Alex understands why his test case is not real world. Under these conditions all of his posts are flawed, so they are not valid.

Alex Fatkulin, May 12, 2008 - 11:07 am UTC

Alex - we do not recommend using that and it is not the default.

Can you share a link (if it would be different from this page)?

but - suppose you have a system whereby every query...

This is what we are talking about here and now, and I am cool with that. But back in 2007 these "suppose" bits and pieces were not clear.

I disagree with defining the "real worldliness" of a test as "it has to fold the original query".

1. Even if you cache a query which does nothing but single row lookups -- RC is still able to serve these requests slightly faster than a perfectly sized single-table hash cluster (sketched below), which is a darn good result, and this is why I don't see the "folding" factor as a driving condition -- you could cache anything and benefit from anything. It is concurrency which kills RC, not the pure speed of getting something out.

2. Why do we have the CBO in the database? If the processing-elimination factor is the principle -- why not let the CBO decide what is good for RC caching and what is not? You do this with regular MVs.
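For reference, a minimal sketch of the kind of single-table hash cluster mentioned in point 1 (names and sizes are illustrative only):

-- hash cluster keyed on id; SINGLE TABLE optimizes the layout
-- for exactly one table in the cluster
create cluster lookup_cluster ( id number )
    size 64
    single table
    hashkeys 100000;

create table lookup
( id   number,
  val  varchar2(30)
)
cluster lookup_cluster ( id );

-- an equality lookup on id hashes straight to the block,
-- with no index traversal:
select val from lookup where id = 42;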
Tom Kyte
May 12, 2008 - 1:12 pm UTC

just read the reference guide - it is not the default.


You may print whatever you like, however, when you compare a

select * from t uncached
versus
select * from t cached

it makes people go "hmmm", what was the point? I'd never do that.


Sort of like if you tested a materialized view in that fashion. You might find the materialized view to be worse than using the table directly - the rewrite activity and everything else that would have to go on.


If you demonstrated this feature in a meaningful way, we might have a different view of the paper - as it is, it isn't showing something anyone would really do. You can say all you want "but wait, you could do it, nothing stops you" - but so what? That doesn't mean anything. People would rather look at something they would actually consider doing, that is all.


we have a CBO in the database to optimize queries, indeed. And every release it gets better at doing that. Not sure I get your point there.

Alex Fatkulin, May 12, 2008 - 3:57 pm UTC


just read the reference guide - it is not the default.

I was talking about "we do not recommend using that" part.

Not sure I get your point there.

The only way we can use RC is through hinting or setting result_cache_mode to force.

Documentation says:

The use of the result cache is a database-wide decision

... and that makes sense given that one should avoid using hints where possible, so result_cache_mode=force suggests itself as the preferred method of using Result Cache (that's why I became curious where you read that this is not recommended).

Read further:

OLTP applications can benefit significantly from the use of the result cache

Which is contrary to what we have in reality, no? Aren't OLTP applications characterized by small queries with a high degree of concurrency -- and this is exactly where Result Cache falls short. I understand your position as "Result Cache is mostly for DSS-like queries" and I agree with it. But this is not what the documentation says.

My test cases show exactly this -- if you have something which you are going to execute on a frequent basis with a high degree of concurrency (thinking of OLTP, no?) -- Result Cache is going to kill your scalability. You can replace my query with whatever query you like -- OLTP-style execution will kill you regardless of the query you are using.
Tom Kyte
May 13, 2008 - 10:14 am UTC

turnabout: show me where you see us ever recommending it?? negatives are hard to prove sometimes. do you see us saying "hey, this is a good idea, flip this on..."


.... and that makes sense given that one should avoid using hints where possible, ....


that is a patently and entirely FALSE statement. There are good hints (result cache is one of them, all_rows is another, dynamic sampling, append) - there are many good hints, hints that give the optimizer more latitude, more information, more options.


... Which is contrary to what we have in reality, no? ...

no, not really. You are thinking of the update component - there are many frequently executed and cacheable queries in many (I might even say most) transactional applications



I still think you need a better example, caching a full scan - ??

CRC is different from DRC

Amir Riaz, May 13, 2008 - 1:42 am UTC

Hi

From the above and many other articles on the internet I have seen the same thing. Even on OTN the RC articles are questionable; even this article is not based on realism.

http://www.oracle-developer.net/display.php?id=503

why?

This is because the database result set cache is not designed for concurrency. It is designed for frequently executed queries whose base tables are updatable; it is not designed for lookup tables with a huge number of users. We are building test cases on wrong assumptions. So here are a few assumptions I developed when designing the above system, and they should apply equally to RC:

Database result set cache (designed for):
frequently executed queries (but not a large number of users)
queries that have some sort of aggregation
behaves like a materialized view
base tables of the query are updatable

Database result set cache (not designed for):
a huge number of concurrent users
read-only lookup tables

Client result set cache (designed for):
a huge number of concurrent users
read-only lookup tables

Client result set cache (not designed for):
updatable tables

These are the lessons I learned when designing the above system. The problems are virtually the same with RC.

What most papers are doing is using lookup tables, treating them as read-only, and testing the database result set cache, which is not designed for that purpose, nor can it be.

If you run your test on the client result set cache you will see that it is highly concurrent for read-only lookup tables, so no problem with CRC (see the sketch below).

OLTP applications can benefit significantly from the use of the result cache

is completely true. For OLTP read-only lookup tables you can use the client result set cache, which is highly concurrent. For a small number of queries which execute often, are expensive for the database, and have updatable base tables, you can use the database result set cache. To understand my assumptions you have to know how the DRC and CRC work.

If you build your test case based on these assumptions you will see that RC is beneficial for OLTP applications. I must also say that all of these assumptions are based on experience and my understanding of how RC works.
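Here is a rough sketch of the CRC setup described above (11g static parameters; the sizes and the lookup table name are illustrative only):

-- server side; both parameters are static, so a restart is needed
alter system set client_result_cache_size = 32M  scope = spfile;  -- per client process
alter system set client_result_cache_lag  = 3000 scope = spfile;  -- staleness check, ms

-- an 11g OCI-based client then reuses this result locally,
-- without a round trip to the database:
select /*+ result_cache */ code, description
  from lookup_codes;  -- a read-mostly lookup table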

regards
Amir Riaz

A better example

Alex Fatkulin, May 13, 2008 - 4:40 pm UTC

do you see us saying "hey, this is a good idea, flip this on..."

I don't know how else one can interpret "it will result in a faster response time for your frequently executed functions and queries" and "it is a database-wide decision". You may argue about how appropriate something is or is not for Result Cache -- and I hear you. But the guys who wrote the documentation didn't.

I still think you need a better example, caching a full scan - ??

I still think that one should read the entire thing before commenting on it :)

In my scalability test I was not caching a full scan -- I was caching a lookup into a table.

Of course, you will say "this is not a real world use", despite the fact that the Performance Tuning Guide is perfectly OK with it. Heck -- you can even see an article published in Oracle Magazine:

https://asktom.oracle.com/Misc/oramag/on-oracle-database-11g.html

...which does exactly the same -- it caches a function which does nothing but single row lookups into a table, and you spot "best practices" and "real world" there (and, again -- this is a perfectly legitimate use of this feature, given the way it is described in the documentation). Care to see how well it scales (they didn't do scalability tests, for whatever reason)?
Tom Kyte
May 14, 2008 - 10:27 am UTC

sigh, I was talking about the init.ora parameter, the one you say we are recommending to set - which we don't.


did you or did you not lead with a full scan - it is a rather large article, pieced together in bits. I find it misleading.


Alex Fatkulin, May 13, 2008 - 4:48 pm UTC

this is because the database result set cache is not designed for concurrency. It is designed for frequently executed queries whose base tables are updatable; it is not designed for lookup tables with a huge number of users. We are building test cases on wrong assumptions.

And please tell me -- how would you know that if we hadn't done all these tests to actually show you that the "result set cache is not designed for concurrency"? You use the words "wrong assumptions", but you need to understand -- it was all these tests that showed those assumptions to be wrong, not the other way around.

check this out

Amir Riaz, May 15, 2008 - 2:25 am UTC

It's turning out that the result set cache is perhaps the most misunderstood topic in Oracle 11g. However, Tom is, as usual, right. As proof, here is his paper on OTN; it advocates exactly what he is saying here.

https://asktom.oracle.com/Misc/oramag/on-oracle-database-11g.html


Tom Kyte
May 19, 2008 - 11:36 am UTC

well, that is a recursive proof :)

You are using my material to prove me right or wrong...

paper from tom is needed here

Jayson, May 15, 2008 - 1:35 pm UTC

Here is another article from Alex. He is persistent in his viewpoint and has written another article; here is the link.

http://www.pythian.com/blogs/1004/oracle-11g-result-cache-in-the-real-world

I think Tom should now write an article explaining what a real world test case should look like, with similar graphs and data. There is so much confusion on this topic that even the gurus are making mistakes, so why not prove things in Tom Kyte style, with a test case showing what the real world is?

only proving that you are right

Amir Riaz, May 20, 2008 - 10:43 am UTC

just proving that you are right

A reader, February 24, 2011 - 9:01 pm UTC

How is the RC scalability in 11gR2? One blogger remarks, "I was quite surprised just how good these results are! ... There is an enormous improvement when it comes to Result Cache scalability in 11GR2." ( http://afatkulin.blogspot.com/2010_06_01_archive.html )

Pythian Group... full of FUD information

Bob Webb, March 07, 2011 - 1:48 am UTC

Hi,

I don't rely much on Pythian group postings; their site is full of "FUD" information.

Caching

A reader, September 26, 2011 - 10:41 am UTC

Tom:

1) Is "results caching" in 11g different that data caching before 11g. Was not oracle caching the database blocks in the buffer cache when you ran a SQL select statement?

2) Does not this feature removes the need for MVs or make them less important since they are used to stored aggregates and query results and you can cache it now instead?

3) Not sure if you heard about a popular product called "MEMCACHE" that people keep talking about

http://memcached.org/

http://www.search-this.com/2007/07/24/an-introduction-to-memcached/

Do you see any cons to this when used with database applications or with oracle? or it is more for static websites where data is mostly static.

Thanks in advance,
Tom Kyte
September 26, 2011 - 6:46 pm UTC

1) yes, there was always the buffer cache - caching of blocks. These blocks can be processed to create a result set.

result set caching is the caching of the answer. Say you run a query like "select count(*) from a_big_table". That might need to process 100,000 blocks (which we might cache). But if you run it again without result set caching - you still need to process 100,000 blocks. With result set caching - you just get the answer back (we cached the ANSWER to the query, not just the blocks that could be used to recreate the answer)
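A quick way to see the difference, as a sketch (a_big_table is the placeholder name from above; autotrace is just one way to watch the gets):

set autotrace traceonly statistics

-- without the result cache: run it twice - the second run still
-- processes every block, so consistent gets stay high
select count(*) from a_big_table;

-- with the result cache: the second run returns the cached ANSWER
-- and consistent gets drop to nearly zero
select /*+ result_cache */ count(*) from a_big_table;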


2) no, not at all. There is limited memory and relatively unlimited disk. materialized views will still be used in warehouse/reporting environments. Not everything will fit in memory every time, all of the time.

3) I know of it, we have something we find a bit easier to use in a relational database situation - it is called IMDB - the in memory database, also known as times ten.

caching

A reader, September 26, 2011 - 7:47 pm UTC

Tom:

Great explanation for result caching.

1) Would you recommend IMDB over MEMCACHED?
2) Why would you need IMDB cache if the database already has caching (buffer cache, library cache, etc.)?
3) Is IMDB an oracle product?
4) Can it be used with a MySQL database too?
Tom Kyte
September 27, 2011 - 8:21 am UTC

1) That is like asking "would you recommend a house over a banana". They are somewhat different.

times ten is an in memory relational database.

memcached is an API that allows you to manage a set of your own objects in memory.


2) because there is no physical IO at all with IMDB/times ten. With a buffer cache - there is going to be true physical IO at some point.

IMDB can remove latency due to disk IO and network IO (you typically would put IMDB right next to your application). You get constant, flat response times from it - assuming you do not exhaust your CPU. From a disk based, across the network database - you'll get latencies due to physical IO and network response times.

3) yes.
http://www.oracle.com/us/products/database/timesten/index.html

4) it is a database itself. It would load the contents of another database into its memory and do the work itself.

Data size in TimesTen

Lal Cyril, March 23, 2012 - 9:06 am UTC

Hi Tom,
I have a doubt regarding storage in the TimesTen in-memory db versus Oracle. I have a table with around 3,400,000 rows.
It has 14 indexes on it including the primary key index.

When I queried user_segments to get the size of the indexes plus the table, the size was found to be 1.7 GB.

When this table was loaded into TimesTen, the memory used was found to be 4.5 GB.

Why is TimesTen taking more space?
Can you please explain the reason?

It's almost 2.5 times the storage in Oracle.


I used the following query to get the memory usage in TimesTen:
select PERM_ALLOCATED_SIZE,PERM_IN_USE_SIZE,PERM_IN_USE_HIGH_WATER ,TEMP_ALLOCATED_SIZE,TEMP_IN_USE_SIZE from  SYS.MONITOR;

Tom Kyte
March 24, 2012 - 10:07 am UTC

main memory databases, by their very definition, organize data quite differently than disk based (page/block based) databases.

They are just different.



TimesTen performance

Lal Cyril, March 28, 2012 - 2:05 am UTC

Hi Tom,
I have one more doubt about TimesTen.

I was comparing the performance of a report in Oracle and in the TimesTen in-memory db cache.

When I executed the report against Oracle it took 14 minutes, of which 11 minutes were for queries (based on the AWR report).

When I executed the same report against the TimesTen in-memory db cache, it took 13 minutes.

I expected a much better response time from TimesTen, since the tables for the report were completely available in memory.

Similar to the AWR and top SQL features in Oracle, are any utilities available with TimesTen to find the top queries?


Tom Kyte
March 28, 2012 - 9:01 am UTC

it took 14 minutes, of which 11 minutes were for queries (based on the AWR report)


and of the 11 minutes, how much was spent doing physical IO

TimesTen performance

Lal Cyril, March 30, 2012 - 5:05 am UTC

Thanks Tom for your time.
Sorry for the late response.
Out of the 11 minutes in the db, one query was found to be executed 5000 times, and it was doing a full scan. This query alone was taking around 10 minutes.

I verified the I/O statistics in the AWR report. The above query was not listed in the following sections:

SQL ordered by user I/O wait time
SQL ordered by Reads
SQL ordered by Physical reads (Unoptimised)


But the query was shown at the top of the "SQL ordered by Gets" section.


Tom Kyte
March 30, 2012 - 7:18 am UTC

so it was basically an in memory query. In Oracle it was an in memory query - no physical IO. In memory versus in memory....

Isn't TimesTen still supposed to win that one?

Matthew McPeak, March 30, 2012 - 8:52 am UTC

Tom,

I thought the idea behind TimesTen was that it is faster than a disk-based RDBMS, even when that disk-based RDBMS is operating out of its cache. That was supposed to be because TimesTen *knows* all of its data is in memory and can therefore use more efficient algorithms than the disk-based RDBMS, which must still operate as though some or all of its data could be on disk.

If that's not a significant advantage, why have TimesTen vs. an Oracle RDBMS with lots of SGA?

Thanks!
Matt

Tom Kyte
March 30, 2012 - 10:17 am UTC

I thought the idea behind TimesTen was that it is faster than a disk-based
RDBMS, even when that disk-based RDBMS is operating out of its cache.


No, the idea behind timesten and in memory databases in general is to

a) remove latency due to disk access
b) remove latency due to network access

They act as a middle tier cache - bootstrapping themselves (loading) from a disk based system - to provide guaranteed response times (assuming you keep your cpu utilization reasonable, say 60% or less).

If the data is in the cache, the rdbms finds it there and doesn't execute any of the disk code. It is true that times ten might be able to find the data faster (their rowids are really pointers; they don't have to search the buffer cache) - but in some cases that doesn't matter either.


In this case, there was no physical IO - how often does that happen in real life?


If you do not have IO latency,
if you do not have network latency,

then you might not need something that removes those.

From Oracle docs

Matthew McPeak, March 30, 2012 - 10:22 am UTC

I totally got the wrong impression from Oracle's documentation then. You are saying that what is described below is NOT significant in practice? Thanks as always!

http://docs.oracle.com/cd/E21901_01/doc/timesten.1122/e21631/overview.htm#BABCIACC

Why is Oracle TimesTen In-Memory Database fast?
Much of the work that is done by a conventional, disk-optimized RDBMS is done under the assumption that data primarily resides on disk. Optimization algorithms, buffer pool management, and indexed retrieval techniques are designed based on this fundamental assumption.

Even when a disk-based RDBMS has been configured to hold all of its data in main memory, its performance is hobbled by assumptions of disk-based data residency. These assumptions cannot be easily reversed because they are hard-coded in processing logic, indexing schemes, and data access mechanisms.

TimesTen is designed with the knowledge that data resides in main memory and can take more direct routes to data, reducing the length of the code path and simplifying algorithms and structure.

When the assumption of disk-residency is removed, complexity is dramatically reduced. The number of machine instructions drops, buffer pool management disappears, extra data copies are not needed, index pages shrink, and their structure is simplified. The design becomes simple and more compact, and requests are executed faster. Figure 1-1 shows the simplicity of the TimesTen design.


Tom Kyte
March 30, 2012 - 11:19 am UTC



better link
http://docs.oracle.com/cd/E21901_01/doc/timesten.1122/e21631/overview.htm#TTCIN119

that might be true if you execute tiny queries millions or billions of times - there is (as I mentioned above) the extra work of finding the block in the buffer cache.

But in this example here - the double-digit-minutes report - I would hazard a guess that the bulk of the work is not in getting blocks, but in actually processing those blocks.

So, for really small, short, sweet, simple sql statements - times ten will have definite advantages due to no physical IO, lack of network IO (assumed to be on same machine), and lack of many of the buffer cache mechanisms. The disk based RDBMS would find it spends a significant portion of the query runtime on getting blocks from the cache - whereas times ten would not. If you take something you run millions of times and make it a little faster - that'll shave off a lot of runtime.

For really big, sort intensive, aggregation heavy queries - the time spent accessing the buffer cache becomes - not so big of a deal.

I have a feeling these reporting queries were of the latter, not the former, type.


Also, I *definitely* think their use of the word hobbled is a tad hyperbolic personally.

Performance of Result Cache

A reader, September 12, 2012 - 9:53 am UTC

Hi Tom,

I was looking at this excellent feature provided by Oracle to solve some of the caching problems people face in the DW world. The expectation was that once the result set is cached in the shared pool, the next execution/invocation will take milliseconds.

However, it seemed otherwise; see below the executions I have done.

1st execution took 14 secs.

2nd execution (from Result Cache) 00:00:06.06 secs
Result Cache Information (identified by operation id):
------------------------------------------------------

Statistics
----------------------------------------------------------
732 recursive calls
9 db block gets
8461 consistent gets
3 physical reads
1228 redo size
55 sorts (memory)
0 sorts (disk)

3rd Execution took (Elapsed: 00:00:04.71)
Result Cache Information (identified by operation id):
------------------------------------------------------

Statistics
----------------------------------------------------------
14 recursive calls
9 db block gets
16 consistent gets
3 physical reads
764 redo size
3 sorts (memory)
0 sorts (disk)

My questions are:
1) Why are the autotrace statistics on subsequent invocations still showing consistent gets / recursive calls / physical reads etc.?
2) I think they should all be ZERO.

It's just a matter of hashing the SQL statement to find whether the hash exists in the result cache pool and, if yes, returning the result set.

Why still 4 secs?

Can you please explain?
Tom Kyte
September 14, 2012 - 6:32 pm UTC

can you give more information, describe the schema, describe the result set, describe the settings you have in place, describe the query, etc.

you don't even tell us something as simple and relevant as the size of the result set!!! you removed (you went out of your way!!!!) all statistics that would have even told us that most basic of things!!!!!!!!

can you tell me why?

Performance of Result Cache

A reader, September 16, 2012 - 2:16 am UTC

Hi Tom,

The LIMIT of 1000 words restricted me from posting the question in detail.

I have the query, explain plan, statistics and table structures, but I am not sure how to paste them here; moreover, there is no utility to attach a file.

Please let me know how to do that, I will be more than happy to do that.

Thanks
Tom Kyte
September 16, 2012 - 4:28 am UTC

You have 32k, your challenge: present your issue in 32k of text or less.

My challenge: proving your number 1000 is a made up number


[tkyte@localhost ~]$ !wc
wc test.dat
157 2342 14830 test.dat
[tkyte@localhost ~]$ cat test.dat
more than
two thousand words two thousand words two thousand words two thousand words two thousand words
two thousand words two thousand words two thousand words two thousand words two thousand words
two thousand words two thousand words two thousand words two thousand words two thousand words
(... the same line repeats for the rest of the file - 157 lines and 2,342 words in all, per the wc output above ...)
[tkyte@localhost ~]$


I win

your turn

Performance of Result Cache

A reader, September 17, 2012 - 1:15 am UTC

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> set linesize 1000
SQL> set pagesize 1000
SQL> set timing on
SQL> set autotrace on
SQL> SELECT /*+ RESULT_CACHE */ a.*, b.rn
2 FROM ( SELECT f1.DATE_KEY,
3 d4.USER_LOGGED_IN_STATE_ID,
4 d5.PROPERTY_ID,
5 f1.MOD_INSTANCE_TIT_ID,
6 SUM (f1.LINKVIEW) lnk_vw
7 FROM V_FD_PG_LINK_LEGO f1
8 INNER JOIN V_DIM_USER_LOGGED_IN_STATE d4
9 ON (f1.USER_LOGGED_IN_STATE_KEY =
10 d4.USER_LOGGED_IN_STATE_KEY)
11 INNER JOIN V_LKP_PROPERTY d5
12 ON (f1.PROPERTY_KEY = d5.PROPERTY_KEY)
13 INNER JOIN V_DIM_DATE d6
14 ON (f1.DATE_KEY = d6.DATE_KEY)
15 WHERE d4.USER_LOGGED_IN_STATE_ID = 'T'
16 AND d5.PROPERTY_ID = 114
17 AND f1.DATE_KEY IN (20120628, 20120629)
18 GROUP BY f1.DATE_KEY,
19 d4.USER_LOGGED_IN_STATE_ID,
20 d5.PROPERTY_ID,
21 f1.MOD_INSTANCE_TIT_ID
22 ORDER BY f1.DATE_KEY DESC) a,
23 (SELECT DATE_KEY,
24 USER_LOGGED_IN_STATE_ID,
25 PROPERTY_ID,
26 MOD_INSTANCE_TIT_ID,
27 rn
28 FROM ( SELECT f2.DATE_KEY,
29 d4.USER_LOGGED_IN_STATE_ID,
30 d5.PROPERTY_ID,
31 f2.MOD_INSTANCE_TIT_ID,
32 SUM (f2.LINKVIEW) lnk_vw,
33 RANK () OVER (OR

Performance of Result Cache

A reader, September 17, 2012 - 1:19 am UTC

DER BY SUM (f2.LINKVIEW) DESC) rn
FROM V_FD_PG_LINK_LEGO f2
35 INNER JOIN V_DIM_USER_LOGGED_IN_STATE d4
36 ON (f2.USER_LOGGED_IN_STATE_KEY =
37 d4.USER_LOGGED_IN_STATE_KEY)
38 INNER JOIN V_LKP_PROPERTY d5
39 ON (f2.PROPERTY_KEY = d5.PROPERTY_KEY)
40 INNER JOIN V_DIM_DATE d6
41 ON (f2.DATE_KEY = d6.DATE_KEY)
42 WHERE d4.USER_LOGGED_IN_STATE_ID = 'T'
43 AND d5.PROPERTY_ID = 114
44 AND f2.DATE_KEY = 20120628
45 GROUP BY f2.DATE_KEY,
46 d4.USER_LOGGED_IN_STATE_ID,
47 d5.PROPERTY_ID,
48 f2.MOD_INSTANCE_TIT_ID)
49 WHERE rn < 10 + 1) b
50 WHERE a.USER_LOGGED_IN_STATE_ID = b.USER_LOGGED_IN_STATE_ID
51 AND a.PROPERTY_ID = b.PROPERTY_ID
52 AND a.MOD_INSTANCE_TIT_ID = b.MOD_INSTANCE_TIT_ID
53 ORDER BY a.DATE_KEY DESC, b.rn
54 /

18 rows selected.

Elapsed: 00:01:34.97

Execution Plan
----------------------------------------------------------
Plan hash value: 1367100206

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop | TQ |IN-OUT| PQ Distrib
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I am not able to paste more than this.

Help!
Tom Kyte
September 20, 2012 - 1:48 am UTC

what browser are you using?

I'm using chrome and IE 9, no problems here.

Performance of Result Cache

A reader, September 17, 2012 - 1:21 am UTC

Hi Tom,

Is there a way for us to upload a file capturing all the information?

Thanks

Tom Kyte
September 20, 2012 - 1:48 am UTC

you have 32k, everyone has 32k, no one is limited to less than 32k, no one can provide more than 32k.


"Enter your review: (maximum 1,000 words)"

jannovak, September 17, 2012 - 9:40 am UTC

more than
two thousand words two thousand words two thousand words two thousand words two thousand words
two thousand words two thousand words two thousand words two thousand words two thousand words
(... the same line repeats until the post cuts off abruptly below ...)
two
"Enter your review:
(maximum 1,000 words)"

Form takes 32K in IE 9, but only 2,000 characters in Chrome & Firefox.

Shannon Severance, September 21, 2012 - 12:39 am UTC

           Chrome : Version 21.0.1180.89 m
Internet Explorer : Version: 9.0.8112.16421
                    Version: 9.0.8112.16421 64-bit Edition
          Firefox : 7.0.1


All running on Windows 7 Enterprise Service Pack 1 64-bit.

To generate the test string I used PerlClip, by James Bach and Danny Faught. PerlClip is available at http://satisfice.com/tools.shtml . Within PerlClip I used the command
counterstring 65536
to place a string of 65,536 characters of the form "*3*5*7*10*...65536*" on the clipboard. The number before each asterisk gives that asterisk's position in the string, so you can tell exactly where a string gets cut off.

Then I pasted the string from the clipboard into the "Enter your Review" field of the "Write a Review" form.

The field to enter the review text was limited to 2,000 characters in both Chrome and Firefox. Both versions of Internet Explorer allowed the full 65,536 characters to be pasted into the field.

Note: this was just testing whether the "Enter your review:" field would take 32,768 characters, and nothing more. No attempt was made to save any of the test strings.
Tom Kyte
September 26, 2012 - 12:13 pm UTC

I only allow for 32k.

I use only chrome

I posted more than 2,000 words

now what.


I don't care what size the field allows, I STORE 32k, period. I have many bits of text that are near 32k.


eh?

Shannon Severance., September 27, 2012 - 9:49 pm UTC


"now what" For me: "eh?" If I did not value your writing and respect you for what you've written, I would have written neither my earlier post nor this one.

"now what" For you, apparently: denial of what other users of this application see.

"A reader from INDIA" wrote, "The LIMIT of 1000 words, restricted me NOT to post the question in detail." ( http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:676698900346506951#5438534800346235479 ) and you responded, "My challenge: proving your number 1000 is a made up number". OK, it is a made-up number. BUT to the left of THIS field where I am entering THIS text at THIS moment, YOUR website states, "Enter your review: (maximum 1,000 words)". The number was made up by someone who wrote asktom.oracle.com.

After that, the posts from you and him generated more smoke than light. Neither claim of behavior matched what I saw. "A reader" was not getting anywhere close to 1,000 words in his posts. You never demonstrated posting via a review. So I tested, observed, and reported my observations, along with details of how I did my testing.

"I posted more than 2,000 words" ... as a review? I've only seen you post "and we said..." and "Followup", never reviews. I'm guessing the forms are different*. So the behavior you report could be entirely irrelevant in challenging the veracity of my report.

* It is entirely possible that I am wrong and that the difference is due to something besides the forms being different. I do not have access to the entire website, nor to your machine, to test my hypothesis in this matter.

"I don't care what size the field allows, I STORE 32k, period. I have many bits of text that are near 32k."

If the field size in the web browser is less than 32k, it is difficult for us to give your application 32k of text to store.

Now what?

You can choose to investigate what users are seeing. You can open an incognito window in Chrome (shift-ctrl-N on Windows, shift-command-N on OS X), visit asktom.oracle.com, and add a review of more than 2,000 characters.

You can choose to investigate in collaboration with users who have access to different viewpoints, OSes and browsers. That would require people to be careful and clear about what they have done and what they have seen.

You can continue claiming that we can do what we cannot. You can continue to answer with the non sequitur of what you can do.

Either way, I hope you continue the great work writing about Oracle. Your writing has been immensely valuable in my study of the Oracle database and to my work with Oracle.

I mean this post to be helpful. I am sorry if it is not.

PS, I was trying to fit under the 2000 character limit on Chrome, but gave up. For this review I am using IE. (Hate it. Come on, Microsoft, where's the built-in spell check?)

PPS: I pulled up the page source for the write-a-review page (again, not the "And we said..." or "Followup" pages) in both Chrome and IE. The following bit of HTML looks relevant.

In IE:
<tr><td nowrap="nowrap" align="right"><label for="P12_REVIEW" tabindex="999"><span class="t18OptionalLabel">Enter your review:<br /><span style="font-weight:normal">(maximum 1,000 words)</span></span></label></td><td  align="left"><input type="hidden" name="p_arg_names" value="101719954001532873" /><fieldset id="P12_REVIEW_fieldset" class="textarea" tabindex="-1"><textarea name="p_t05" rows="10" cols="60" maxlength="2000" wrap="virtual" id="P12_REVIEW" class="textarea"  ></textarea></fieldset><script type="text/javascript">
 <!--
    var textEd = $x('P12_REVIEW');
    var textName = 'P12_REVIEW';
 -->
</script>
</td></tr>


In Chrome:
<tr><td nowrap="nowrap" align="right"><label for="P12_REVIEW" tabindex="999"><span class="t18OptionalLabel">Enter your review:<br /><span style="font-weight:normal">(maximum 1,000 words)</span></span></label></td><td  align="left"><input type="hidden" name="p_arg_names" value="101719954001532873" /><fieldset id="P12_REVIEW_fieldset" class="textarea" tabindex="-1"><textarea name="p_t05" rows="10" cols="60" maxlength="2000" wrap="virtual" id="P12_REVIEW" class="textarea"  ></textarea></fieldset><script type="text/javascript">
 <!--
    var textEd = $x('P12_REVIEW');
    var textName = 'P12_REVIEW';
 -->
</script>
</td></tr>


Both have
maxlength="2000"
HTML is outside my circle of competence, but I think that says to limit the input field to 2,000 characters.
Tom Kyte
September 28, 2012 - 6:47 am UTC

your review is 3,704 characters.

I'll have to have someone look into it - this looks like it could be a new thing they added. I wasn't aware of any limits, it always used to be 32k forever.

(your review is over 4,000 characters though...)

Yes, I entered more than 2,000 characters using IE.

Shannon Severance, September 28, 2012 - 12:29 pm UTC

My first review on the matter stated that in IE the web form's field allowed at least 64k (I have not made any claim as to what the database accepts), whereas both Chrome and Firefox limited the field to 2,000 characters.

In the second review, the one that is 4,000 characters, I said, "PS, I was trying to fit under the 2000 character limit on Chrome, but gave up. For this review I am using IE." (emphasis added) I had initially started writing in Chrome, but ran out of space. Using IE, I navigated to the review page, copied and pasted, and then continued writing.
Tom Kyte
September 28, 2012 - 4:50 pm UTC

I'll have the APEX guys look at it after OOW. It seems to be something new. thanks for being persistent.

interesting usage of Times Ten

A reader, September 28, 2012 - 10:05 pm UTC

Tom,

in the original post TimesTen is used as a cache - why?

second, Oracle has its own cache - why is it not being used?

I just attended a session on Exalytics; it is being used in a similar fashion, with some enhancements. Quite confused, actually. I used to think the implementation above - TimesTen as a cache - was not a valid one, but now Oracle itself is following it. The poster was doing this way back in 2007. What is your opinion?
Tom Kyte
September 29, 2012 - 8:07 am UTC

because that is what times ten is really good at. It is an in-memory database; it can avoid writing redo to disk (which makes it not so good at totally preserving data).

the buffer cache is very different from times ten. times ten loads everything into memory - in its indexes there are no rowids, there are pointers to memory. there is no physical IO once it is started and loaded. It has long startup times, but after that - no physical IO.

and we do (and have done) times ten on the middle tier, in addition to Coherence on the middle tier, in addition to the client side result cache on the middle tier.
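(A minimal sketch of that last option on 11.2 - the parameter values and the table name LOOKUP_T are made up - the client result cache is sized on the server and then either hinted per query or forced per table:)

-- server side, spfile: size the client result cache (static parameters)
alter system set client_result_cache_size = 64M scope=spfile;
alter system set client_result_cache_lag  = 3000 scope=spfile;   -- milliseconds

-- then either hint individual queries...
select /*+ result_cache */ * from lookup_t where code = :x;

-- ...or annotate the table so every client caches its result sets
alter table lookup_t result_cache (mode force);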

A reader, September 28, 2012 - 10:08 pm UTC

And the nice thing is, he wanted to implement TimesTen in the middleware - the same thing Oracle is doing nowadays with TimesTen and Fusion Middleware. Quite futuristic and clever.
Tom Kyte
September 29, 2012 - 8:08 am UTC

but we were doing that way before this article was posted?

Oracle Exadata x3

A reader, October 01, 2012 - 9:40 am UTC

Tom,

At Oracle Open World, Oracle announced Exadata X3-2 and Oracle 12c. Larry said the Oracle database has been designed and developed again during the last 4 years. What's this secret? Are you going to change the database architecture?

second, he told us that Exadata X3-2 is an in-memory database. From this article and from your feedback, I can see the database will be in-memory - but TimesTen is different from Oracle; Oracle is not an in-memory database, as you told us.

thirdly, if Exadata X3-2 is in-memory for warehouses, then what is Exalytics? Exalytics is an extension of TimesTen - so confusing. What are the purposes of all these technologies?

Larry also said that disks will be a thing of the past and data will be stored on flash disks now - and you always told us that flash disks are not good for writing.

Please clarify
Tom Kyte
October 08, 2012 - 3:36 pm UTC

He was talking about a new architecture for database consolidation - the pluggable database.

http://www.computerworld.com/s/article/9232120/How_Oracle_s_pluggable_databases_will_work


as for the in memory bit - he was talking about the fact that the X3 machine has terabytes and terabytes of flash memory. It is not an in memory database ala times ten, it is a database that is cached in memory. The physical IO's will, after some time, be to the flash cache - not as fast as the SGA, but still faster than disk for those OLTP single block reads.


The in memory part is most crucial for OLTP applications. Believe it or not, but spinning magnetic disk is pretty darn fast for large IO's (warehouse scans). Flash won't make them orders of magnitude faster (warehouses), but it can do so for single block IO's (oltp)


I said solid state disk drives only offer up OK write speeds. flash memory is entirely different.

hmm....

A reader, October 01, 2012 - 10:43 pm UTC

Another interesting thing I have read in the TimesTen documentation: if your application layer and the TimesTen database are on the same machine, TimesTen bypasses the networking overhead and makes a direct connection with the application layer using IPC.

Tom, correct me if I am wrong.
Tom Kyte
October 09, 2012 - 11:26 am UTC

yes, it does that. it removes disk and network latency.

ctl-f for "Times ten is all about latency" on this page to see more.
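(For the curious, a minimal sketch of that direct-mode connection from TimesTen's ttIsql utility - the DSN name my_tt_db is made up. A direct connection attaches the database to the application's address space via shared memory/IPC; the client/server utility, ttIsqlCS, would go over the network instead:)

$ ttIsql
Command> connect "dsn=my_tt_db";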

what an intellectual response for a stupid question

kumar, October 02, 2012 - 1:32 am UTC

what is this Tom,

two thousand words two thousand words two thousand words two thousand words two thousand words
(... and so on, for a dozen lines ...)

I think you remove the questions which are stupid. You did this once to me.

Tom Kyte
October 09, 2012 - 11:33 am UTC

I was testing.


You can now post long things again, it was an inadvertent change caused by an APEX update

https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:456820211101#5530289300346211030

dump Oracle X3 as you have dumped sparc

Robert Brumfield, October 10, 2012 - 9:35 pm UTC

Hi Friends,

I am pleased to see that someone on this site is really innovative and creative. The person understands the problems faced by modern architectures, which are network and disk latencies. Oracle had no idea of these problems in the past and they are still doing the same thing. Let me prove it with an example.

I will start with Oracle Exadata X3-2 and compare it with Fusion IO technology. Oracle Exadata X3-2 has 22TB (raw) of flash storage for OLTP systems. Warehouse systems will not benefit from flash storage. Now check how Exadata performs IO. For each IO, it has to perform network calls at the OS level to read a block from disk or flash. These network calls are a huge overhead with flash disks, since the database server and the flash disks are hosted on different machines.

With Fusion IO technology, that has been eliminated. The database servers and flash disks are hosted on the same machines. Further, the network latency for IO has been removed, since the flash disks are embedded on the motherboard and the processor reads a block just as it reads from memory. In reality there is little network-call overhead involved; with Fusion IO the flash disk sits on the big system IO bus, which does not require network protocol calls.

Please check this architecture and give your feedback. Remember, whatever Oracle does is not the final word. There are far better solutions available; you just have to check.

http://www.fusionio.com/blog/extreme-performance-lower-cost-hps-new-data-accelerator-solution-for-oracle-database/

My question to Tom is: how would you defend Exadata now? He has been the biggest critic, and now Oracle is using this technology, and in a wrong way. I hope he is able to answer the question, because I find no answer to it myself - or I will just say Tom is just another paid Oracle employee.
Tom Kyte
October 11, 2012 - 7:52 am UTC

Now check how Exadata performs IO. For each IO, it has to perform network calls at the OS level to read a block from disk or flash. These network calls are a huge overhead with flash disks, since the database server and the flash disks are hosted on different machines.


someone doesn't understand how Exadata works. What network call is involved in that IO? where did the network come into play at this point? In Exadata - there are database servers running RAC in the middle of the machine, connected to storage cells (independent computers themselves, running in an MPP/shared nothing fashion) by a private infiniband network. Now, when we do an IO - the IO happens on the storage cell, to locally attached storage (no net). In the case of using storage indexes - perhaps zero IO's take place (cannot get much faster than that). In the case of a smart scan (a query offloaded to the storage cells) - the IO's take place locally, with direct attach storage (no net - and either flash or disk or both) and the where clause (among other things) is applied locally (no net yet). Only the data (rows and specific columns) that needs to be returned to the database servers for further processing will be - this is the first time a network might come into play.

where is the database software on fusion io? did you get some Oracle software onto the flash itself? No, you don't have storage indexes, you don't have MPP, you don't have query offloading, you don't have hardware assisted decryption, you don't have hybrid columnar compression with 10:1 up to 50:1 compression rates (I could keep going...)

You seem to miss the fact that with Exadata and query offloading - you have local storage and the bulk of the processing happens down there - not up at the database server itself. We don't bring terabytes back to the server to process - we aim to bring back just the tiny bit of data we need after scanning/aggregating/filtering/processing down on the storage cells.

In short, the bulk of the database processing takes place on the *same machines* as the storage and when it doesn't - we have 70+ GB/s of data transfer rates. Couple that with compression and you are full scanning more than a TB of data a second.
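(A minimal way to see that offloading from the database side, using two of the cell statistics v$sysstat exposes on 11.2; run your query, then check how many bytes came back via smart scan and how much IO the storage indexes avoided:)

select name, value
  from v$sysstat
 where name in ('cell physical IO interconnect bytes returned by smart scan',
                'cell physical IO bytes saved by storage index');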


Please check this architecture out and get back. Where is the MPP/shared nothing component in your approach? Exadata combines the best of MPP/shared nothing with RAC/shared everything. We push as much processing as we can down to the storage (why move the mountain? you only need to move a few rocks to get the answer).


Of course I am just another Oracle paid employee. do you think I do this for free? No one is trying to hide anything here. This is asktom on ORACLE.COM - it should come as no surprise.




If you compare non-Exadata Oracle on conventional SAN storage with non-Exadata Oracle on local flash storage - sure, you'll see a benefit. When you get around to comparing Oracle Exadata with non-Exadata Oracle on local flash storage - then feel free to come back and we'll talk. Until then - you haven't compared anything.

Just another Clarification

Robert Brumfield, October 10, 2012 - 9:47 pm UTC

Fusion IO disks with Dell servers would be much much cheaper than Exadata and super fast, since there is no network involved between the database server and the disks - the Fusion IO disks are embedded in the motherboards.

Further, Fusion IO flash disks are many times faster than Oracle flash disks, and Fusion IO flash disks are much much better at writing than Oracle flash disks.

Further, with one server and no RAC involved, you also get rid of the RAC overhead. A server with Fusion IO disks can have 80 cores and 24TB of flash storage - that is one server only. With an Oracle Exadata full rack, the database cores are 64 for the X3-2 (if I am not wrong), the flash storage is 22TB (raw), and there is TONs TONs and TONs of overhead involved in the network IO and in RAC.

Please say no to such systems.
Tom Kyte
October 11, 2012 - 8:06 am UTC

Please say yes to "provide real numbers"

"much much cheaper"
"super fast"
"many times faster"
"much much better"
"TONs TONs and TONs of overhead"

that sounds so marketing and communications (MARCOM) oriented.

Not a single number anywhere - overuse of the superlatives "much", "many", "TONs".


you are wrong on your Exadata hardware specs too. It isn't really hard to find this stuff

http://www.oracle.com/us/products/database/exadata-db-machine-x3-2-1851253.pdf
http://www.oracle.com/us/products/database/exadata-db-machine-x3-8-1851252.pdf

one of them has 128 cores for database plus 168 cores for SQL query processing on the storage - that would be 296 cores of things that do database work.

the other has 160 cores for database plus 168 cores for SQL query processing, or 328 cores.

and up to 224TB of user data (before compression - so it is actually a lot more, with redundancy)


so, when you do the numbers, please make sure to supply them for fully redundant storage (flash memory fails too) and a quarter of a petabyte of storage please - 24TB is too small.



Numbers are *very* important in real life, use them.


are you the same Robert referenced here as a 'press contact'?

http://www.fusionio.com/contact/general/

it would be nice for everyone to know that perhaps you are just another paid employee of FusionIO :) Just like I am an Oracle Employee.

Oracle is for OLTP only - don't think warehouse with it

Robert Brumfield, October 11, 2012 - 2:13 am UTC

I have just confirmed with one of our clients, who has been briefed by the Oracle team about Oracle Exadata X3-2. They say that the flash cache (22TB) will also improve data warehouse queries. That is completely wrong and completely contradicts what Tom has been teaching for many years. Even with flash disk drivers as good and as fast as Fusion IO's, warehouse performance will not improve.

Oracle's strategy is: if others do something innovative, they are wrong (like Fusion IO), or it has already been done in Oracle - while they have no idea what they are doing. The Oracle team doesn't have the slightest idea of what data warehousing is, and they are making such big claims.

But if Oracle does the same thing via some workaround technique which improves performance just by a margin, they have done a great thing.
Tom Kyte
October 11, 2012 - 8:14 am UTC

Well, the flash cache WILL improve data warehouse queries as well..

first - it gives us more bandwidth. We can perform IO's to disk and to flash memory at the same time for different bits of data - more paths to the data - that is always good.

second - the flash memory response time can and will be faster than spinning magnetic disks - even for large sequential reads (until we saturate the IO bandwidth, which you will do with your fusionio stuff too - at which point we'll use the IO bandwidth of the disk as well as the flash memory).

Will it improve a data warehouse query to the same rate it will improve an OLTP situation? Unlikely. Will it improve a data warehouse query? Sure.


Robert, as a marketing communications director for FusionIO
http://www.linkedin.com/pub/robert-brumfield/22/736/619

I would hope you have numbers? Access to real information? (please spot check it before posting - like you could have with the Exadata specs which are really easy to find)

Since only 50% of our Exadata customers are using it for OLTP - and the other 50% use it for warehousing, you might be slightly off in your opinion.

Poor Marketing Exadata X3

A reader, October 11, 2012 - 7:28 am UTC

Tom,

Please confirm, if you can.

1. Is Exadata an in-memory data warehouse or NOT?
2. Will the Exadata flash cache improve data warehouse performance? If yes, how?

An in-memory system means SAP HANA. Why is Oracle claiming that Exadata is an in-memory system? Please elaborate.
Tom Kyte
October 11, 2012 - 8:32 am UTC

1) asked and answered above.

as for the in memory bit - he was talking about the fact that the X3 machine has terabytes and terabytes of flash memory. It is not an in memory database ala times ten, it is a database that is cached in memory. The physical IO's will, after some time, be to the flash cache - not as fast as the SGA, but still faster than disk for those OLTP single block reads.


2) asked and answered right above, answer is yes. gives additional bandwidth (I don't care how fast your IO is, you need bandwidth) as well as faster IOs even for large sequential reads.



An in-memory system means SAP HANA. Why is Oracle claiming that Exadata is an in-memory system? Please elaborate.



It is not positioned as an in-memory database, it is positioned as a database with a ton of memory (not SSD, 4TB DRAM, 22TB Flash memory) to cache information in-memory. With writes taking place to the flash memory storage (1,000,000 writes/second) as well as reads.


HANA is a hybrid system, disk and memory, too.


This is fun, but...

Thomas Kyte, October 11, 2012 - 8:34 am UTC

This is fun and all - debating with marketing people - but is not a useful use of my time.

If you have a technical question - I'll be glad to address them. If you just come and say "this is better, you work for Oracle, your stuff is slow", I'm just basically going to ignore it.

If you have examples with numbers, or technical questions about how things work - I'll be glad to address them. Even if you are from marketing.

Tom, it's not marketing, it's your team which is spreading myths.

A reader, October 12, 2012 - 1:34 am UTC

Hi Tom, you just clarified that:

as for the in memory bit - he was talking about the fact that the X3 machine has terabytes and terabytes of flash memory. It is not an in memory database ala times ten, it is a database that is cached in memory. The physical IO's will, after some time, be to the flash cache - not as fast as the SGA, but still faster than disk for those OLTP single block reads

Please check your LinkedIn Oracle Exadata (IOUG SIG) group. They are claiming that Exadata is an in-memory database. They are also claiming that it would improve warehouse performance, which completely contradicts what you have said in this thread. It's not marketing; it's just about getting the facts right - which your team is spreading wrongly.

Tom, I am a SAP HANA specialist. HANA only keeps logs on disk; the rest of the data is in-memory. In-memory means DRAM, not some flash cache (NAND SLC). Please check what Larry Ellison said - it made me jump from my seat, but when I saw what you introduced, I was much, much disappointed. It's the Oracle team doing marketing on myths, not me. I am just asking you to correct them, since that part of the world is now much more educated than the small part of the world you live in.

Exadata is not an in-memory system (it is now a proven fact).
Exadata will only improve OLTP processing, not warehouse processing.

Now, if you only have to improve OLTP performance, why not try a simple, cheap solution with much better performance, like Fusion IO? Fusion IO doesn't make claims about warehouse performance, since we know the technology and we are honest.

Not like: first build myths, then write a book or two to bust those myths, and call yourself a Mythbuster.

Tom, on no grounds and in no way will you be able to prove the things Larry said at Open World. You are spreading things which are not good for this industry, so I had to make some posts. I hope you are liberal enough to answer and issue a correction, since you are a man of science, not a man of myths.
Tom Kyte
October 12, 2012 - 8:04 am UTC

No one ever said it was an in memory database ala times ten, in fact, I said "it is not an in memory database ala times ten".

If you think someone on another site is saying something wrong, why the heck are you not over there asking about it? I don't do linked in.


They are also claiming that it would improve warehouse performance, which completely contradicts what you have said in this thread.


no it doesn't. did you even read what I wrote above?? I'll put it here for the 3rd time, in response to "will this help in a data warehouse":

answer is yes. gives additional bandwidth (I don't care how fast your IO is, you need bandwidth) as well as faster IOs even for large sequential reads.


so, what sort of contradiction do you see - I see me saying "yes, it will improve warehouse performance", I see them saying "yes, it will improve warehouse performance". I see you saying "those two statements contradict". Please clarify.

Tom, I am a SAP HANA specialist

and how much experience do you have with exadata? You see, I talk about things I know about...

HANA only keeps logs on disk; the rest of the data is in-memory.


false. It maintains indexes in memory, data is on disk - data is persisted on disk.

In-memory means "in memory" - you don't get to make up definitions. Flash memory is.... memory - tah-dah. If you have something in flash, it is in memory, it is not on disk. In memory means "in the memory, eg: not on disk". Flash *is memory*




You have presented zero numbers. Go away, get some, come back with factual information that can be discussed.


Exadata is all about data warehouse processing. Exadata is all about OLTP. Exadata is built for both - with specific features for one area (smart scan processing, offloading the query to the storage cells, for example) or the other. That is - there are OLTP features, there are warehouse features, and the two do not necessarily cross the boundary - we use smart scans in warehousing, not OLTP'ing.



I hope you are scientific enough to come back with numbers, facts, figures - not just BS "of course exadata isn't X, it is obvious". That just doesn't work.

With Respect to the Great Thomas Kyte

A reader, October 13, 2012 - 7:13 am UTC

Tom,

I myself have learned a lot from you, but this statement of yours:

"Hana only keep logs in on disks the rest of
data is in-memory.

false. It maintains indexes in memory, data is on disk - data is persisted on disk.

In-memory means "in memory" - you don't get to make up definitions. Flash memory is.... memory - tah-dah. If you have something in flash, it is in memory, it is not on disk. In memory means "in the memory, eg: not on disk". Flash *is memory"

IS COMPLETELY WRONG. PLEASE CHECK WHAT YOU ARE SAYING.

EXALYTICS IS AN IN-MEMORY DATABASE. TIMESTEN IS AN IN-MEMORY DATABASE. HANA IS AN IN-MEMORY DATABASE.

Tom, in-memory means DRAM. Flash is not DRAM; there are two types of flash technologies (DRAM, NAND), and NAND can be divided into SLC and MLC.

Exadata uses NAND SLC. HANA keeps all the data in-memory; it compresses the data before keeping it in memory. There is no concept of active and cold data - in fact that concept is completely wrong when you work with data warehouses, where analytics sometimes require historical data.

Please see this video. I know the complete architecture of HANA, and this video confirms it.

http://www.youtube.com/watch?v=o0eTL2K7lII&feature=related




Tom Kyte
October 13, 2012 - 9:16 am UTC

sorry, you don't get to make up definitions.

flash is memory, flash is not disk.

in-memory doesn't specify the type of memory.

http://en.wikipedia.org/wiki/In-memory_database

quote: Non-volatile random access memory (NVRAM), usually in the form of static RAM backed up with battery power (battery RAM), or an electrically erasable programmable ROM (EEPROM). With this storage, the MMDB system can recover the data store from its last consistent state upon reboot.

flash memory caches data in memory instead of on disk, in a non-volatile fashion (so we can use it for reliable WRITES as well as reads)


Not that I said Exadata is an in-memory database in the first place, now did I. I said it has terabytes of flash memory to cache reads and writes from and to disk - and I said that can have an impact on data warehouse performance (a positive impact) in the form of reduced IO times as well as massively increased bandwidth.


where did hot and cold come from all of a sudden and what the heck does it have to do with anything???



you want me to watch a marketing video? seriously? sorry, I didn't quite make it through the marketing bit - it wasn't really very technical at all.


tell me, does hana or does hana not store data on disk. (if not, where does said data come from?)

does hana or does hana not load this data on disk into "in memory" (cache it). (fill the cache)

does hana or does not hana maintain the indexes for this data in memory (cache it).

does hana or does not hana periodically flush in memory data to disk (dirty buffers)?

okay

A reader, October 14, 2012 - 1:52 am UTC

Tom,

For HANA, with the current release, the data is replicated - there are 3 techniques of data replication in HANA. HANA replicates the data from OLTP systems at start time, just like Exalytics and TimesTen do. Although Exalytic and Time Tens are far far primitive than Hana.

The data is compressed 4-5x in columnar fashion, and memory indexes are built (a concept like storage indexes). All the data in Hana is in DRAM.

When a transaction occurs, it is applied to the data in memory; if data becomes uncompressed, it is re-compressed when the server is free.

NAND (SLC and MLC) is far, far slower than DRAM, with extremely poor write capability. I don't know how you are making such claims, even at the data warehouse level - warehouses are mostly write-intensive and require throughput rather than IOPS.

Further, when I investigated, I found out that Exadata is not using SLC but eMLC flash, which is even slower than SLC flash at writing. Please check the Oracle documentation on read and write latencies.

it's a complete mess.

Tom Kyte
October 15, 2012 - 8:54 am UTC

Although Exalytic and Time Tens are far far primitive than Hana.


thank you for making me smile and laugh out loud. I love it!! You would be the perfect marketeer!! you are really good at that.

"far far primitive" - awesome, that is so excellent.

The data is compressed 4-5x in columnar fashion

only 4-5x? that is pretty poor. We get that with block compression of rows. It should be on the order of a magnitude at least, shouldn't it. Just 5x?

We typically get 10-20x - at 10x compress a TB becomes a 100GB problem - something easily cacheable all of a sudden.


All the data in Hana is in DRAM.

and disk, we know it is on disk too....


I don't know how you are making such claims.

please, let's review the factually true statements I have made regarding this:

a) we get additional bandwidth - that is good.
b) we get slightly increased read speed for large sequential IO - that is good
c) we get much faster read/write times for OLTP operations - 1,000,000 IOPS, assuming an 8k OLTP block. One million write IOPS, for example. How's Hana doing there? do you have a SINGLE NUMBER - just a NUMBER, a proven, evaluated, reproducible-by-the-rest-of-the-planet NUMBER - or are you just going to spew stuff like "you are so primitive"???


I keep putting out numbers or links to documents with numbers, for example:

http://www.oracle.com/us/products/database/exadata-db-machine-x3-2-1851253.pdf

There are tables in there showing IO rates.

the only thing that is a complete mess is your repeated inability to show a single relevant *number*

How you define active data.

A reader, October 14, 2012 - 2:31 am UTC

Another thing Tom,

Larry Ellison's Exadata briefing clearly says Exadata keeps active data in-memory. Please tell me, for a warehouse, how you define active data.

Exadata X3 is the creation of a mind that is more OLTP-oriented than data-warehouse-oriented - OLTP is where active data really exists.

Please just define what Oracle means by active data in warehouse terms. As I said previously, I accept that active data on flash will improve performance for OLTP, but if you can just explain to me how active data on flash, with the rest of the data on disk, will improve warehouse performance, I will leave the discussion.

Thanks for your patience.
Tom Kyte
October 15, 2012 - 8:57 am UTC

Please tell me, for a warehouse, how you define active data.

who said that was a data warehouse feature???


why does everything have to be a data warehouse feature on a machine designed for OLTP *and* data warehousing?


but - think about it. In many warehouses (many, not all, not every, but many) - there is a tendency to frequently query newer data - it is more 'interesting', the old is there - but not as frequently queried.

But the ILM features of hot, warm, cold lend themselves towards an OLTP type of system more often than a warehouse.

When you can tell me how a bitmap index helps OLTP, I can tell you how redo generation assists warehousing.


get real.

dear friend please get real

A reader, October 15, 2012 - 11:34 pm UTC

Tom,

In General.

Active data only exists in OLTP systems.

Warehouses don't have any active data at the reporting level. However, at the ETL level your current data may be considered active data - but the reporting system may never use the current data, due to the ad-hoc nature of a warehouse.

Tom, I would have no objection if, instead of flash (NAND type), Oracle had used DRAM in the Exadata cells. I would happily have accepted that as an in-memory warehouse. Accept it: Oracle is bluffing its users. They have no idea of data warehousing; they are doing experiments and trying to learn, but still trying to sell.
Tom Kyte
October 16, 2012 - 9:44 am UTC

I do not believe you are actually reading anything here, you are so focused on "hana is better, your stuff is no good" in your head, you are not even reading..

but - think about it. In many warehouses (many, not all, not every, but many) - there is a tendency to frequently query newer data - it is more 'interesting', the old is there - but not as frequently queried.

maybe you just haven't worked in enough different environments to have a broader view of things. If you've worked for a bank for your entire career, however long that may have or have not been, you will have a very myopic view of the world. The longer you work in the industry, the more places you work at - the more broad your view will become.

but the reporting system may never use the current data, due to the ad-hoc nature of a warehouse.

did you know there are some warehouses that are not necessarily ad-hoc?

did you know that there are some warehouses whereby the old data is kept there solely for the purpose of meeting some mandate (thou shall keep seven years of this data online sort of stuff) and the current data is the only interesting stuff. Think of a warehouse of audit trails for example, the old stuff is kept - but only the recent stuff is actually queried on a day to day basis.

Not that it really matters, since I did say the hot, warm, cold concept was primarily an OLTP feature anyway - not always - but primarily. And not everything about Exadata is about warehousing since..... Exadata is an OLTP machine and a warehouse machine.

OLTP won't use smart scans for example (probably, not often anyway, maybe for a report or something)

DW won't typically use the data optimization capabilities (hot/warm/cold) - but it can. Think of that audit trail warehouse: old audit trail data is archive compressed - maybe 40 or 50:1 compression; relatively new audit trail data is query compressed - maybe 10 or 15:1 compression; and the newest audit trail data, which is being constantly ETL'ed into the database, uses OLTP compression (yeah, an OLTP feature used in a DW, who would have thunk it). The newest data is queried most often (seriously, it is), the warm data is infrequently queried, and the cold data won't be touched unless some event causes us to have to go way back in time.
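
To make that concrete, here is a minimal sketch of such a tiered audit-trail table using partition-level compression. The table, column and partition names are made up for illustration, and the ARCHIVE/QUERY clauses assume Exadata (HCC) storage:

CREATE TABLE audit_trail
( audit_ts  DATE,
  username  VARCHAR2(30),
  action    VARCHAR2(1000)
)
PARTITION BY RANGE (audit_ts)
( PARTITION p_cold VALUES LESS THAN (DATE '2011-01-01')
    COMPRESS FOR ARCHIVE HIGH,  -- cold: kept only to satisfy the mandate
  PARTITION p_warm VALUES LESS THAN (DATE '2012-07-01')
    COMPRESS FOR QUERY HIGH,    -- warm: infrequently queried
  PARTITION p_hot  VALUES LESS THAN (MAXVALUE)
    COMPRESS FOR OLTP           -- hot: constantly ETL'ed into and queried
);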

There are many other cases of hot/warm/cold in warehousing - the fact that you personally have never experienced them doesn't change that.


We cannot use DRAM in the storage cells - we cannot use volatile memory there, as I've told you more than once.

And frankly, that you have an objection to it based not on numbers, but just on your gut, doesn't really bother me too much.

When you have some facts, feel free to come back.



Entertaining and inspiring

Another reader, October 16, 2012 - 7:32 am UTC

I find this discussion quite entertaining, and I am very impressed by the host's patience.

It makes me think of a drinking game where the same guy ends up having to drink on every round, but never seems to notice that the longer he plays, the worse off he gets.

A reader, October 17, 2012 - 9:48 am UTC

Tom, this is a cut and paste of a post. Please reply.

Please check the following specs provided by Oracle, or Google the following:
Oracle Exadata Database Machine X3-2 Data Sheet

The information provided is from Oracle.

A full rack contains 22.4TB of raw flash. The link provided above clearly shows the guy does not know the internal workings of Exadata, because he computes

22.4TB x 10 (HCC compression) = 224TB

which is wrong. The Exadata flash cache is multiplexed (mirrored), which decreases its capacity. Each Exadata storage cell has 4 flash cards of 400GB each (eMLC NAND), which gives 4 x 400GB = 1.6TB. In a full rack there are 14 storage cells, so 1.6 x 14 = 22.4TB. That is why the documentation says it is 22.4TB raw.

With dual multiplexing this capacity is halved: 11.2TB raw, or roughly 100TB with 10x compression - which is nothing, considering that the usable disk capacity of an Exadata X3 full rack is 45TB.

With 10x compression, the possible data is 45 x 10 = 450TB.

Now compute the percentage of data on Flash.

Without multiplexing (an impossible case): (200/450) x 100 ≈ 44.4%
With dual multiplexing: (100/450) x 100 ≈ 22.2%
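
As a sanity check, here is a quick query re-running that arithmetic from the unrounded inputs (all figures are the poster's assumptions, not validated Exadata specifications - and note the 10x compression factor cancels out, so the ratio is simply flash capacity over disk capacity):

-- 22.4TB raw flash (2-way mirrored in the second column) vs 45TB usable
-- disk, both with the assumed 10x compression
SELECT ROUND(100 * (22.4 * 10) / (45 * 10), 1)       AS pct_unmirrored, -- 49.8
       ROUND(100 * ((22.4 / 2) * 10) / (45 * 10), 1) AS pct_mirrored    -- 24.9
FROM dual;

The ~44% and ~22% figures above come from first rounding 224TB down to 200TB and 112TB down to 100TB.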

Just with dual multiplexing, the proportion of the data that can be present in the flash cache is reduced to about 22.2%. It also means that if any two flash cards fail in one Exadata cell, you would lose data.

And with raw (unmirrored) flash caching about 44% of the data, you would lose data with the failure of just one flash card.

Further, you compare Exadata with a system which caches 100% of its data not in eMLC NAND but in DRAM, and still claim that it would outperform SAP HANA. Hard to believe, and completely unscientific of Oracle.

As I told you previously, this system is more for OLTP purposes, especially for cloud computing, and would give no improvement in data warehousing.

Q1. By this math, if 22% of the data is cached in the flash cache, that is barely a quarter of the data Exadata can host. If an analytic query is not hitting that quarter of the data, you get a performance hit, since a query on flash runs fast and one on disk runs slower. So query times will vary while hitting the same number of rows. Do you agree with that or not? If yes, tell me how many one-million-US$ data warehouses have this problem. I tell you.... NONE. SAP HANA definitely does not, because all (100%) of the data is in memory. You don't get variation in speed while hitting the same amount of data.

Q2. Oracle promotes Exadata as a mixed-load machine. What would happen under those conditions?

Tom, accept it: Oracle has really messed up Exadata X3. There is no scientific logic there to defend what you say.
Tom Kyte
October 17, 2012 - 3:26 pm UTC

wow, you totally win :) cheers.


we do have 40%+ market share in data warehousing worldwide in 2011, they must all be wrong...


This system, Exadata, with the combination of software on the storage cells (something, ummm, Hana doesn't do, does it? if you ask me, if you don't do that, you cannot claim to be a big data warehouse engine... so there) and the hardware above it all, is in fact a database machine.

Period.


the flash MEMORY gives us faster IO's (much faster single block IO's and faster multi-block IO's) as well as much increased bandwidth - the storage cells with query offloading make it so MOST of the IO's don't ever leave the storage cells (which gives us MUCH apparent increased bandwidth - nothing is faster than NOT DOING something). Exadata melds massively parallel shared nothing with parallel shared everything to give you a data warehouse database unlike any other.
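
For what it's worth, the offload effect is observable on a running Exadata system; here are a couple of the relevant statistics (these v$sysstat statistic names exist in 11.2, though exact names can vary by version):

-- Bytes eligible for offload vs bytes actually shipped back over the
-- interconnect - a large gap means the storage cells did the filtering.
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('cell physical IO bytes eligible for predicate offload',
                'cell physical IO interconnect bytes returned by smart scan');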

Where is hana's MPP bit? shared nothing bit? If you ask me - if you don't have that you don't have a database (said with sarcasm, of course you can have a database without that - but you keep coming from a mentality of "if you ain't like us, you ain't no good" - forget it)


I guess when and if Hana starts publishing numbers, we'll have something to compare.


Alexander, October 18, 2012 - 10:01 am UTC

"As I told you previously. That this system is more for OLTP purposes. Especially for cloud
computing and would give no improvment in Data warehousing."

Seriously?

Smart Scan? Hybrid Columnar Compression? I think the reader should look at what Exadata actually offers. I would argue the opposite, that it is more geared towards data warehousing. Heck, even the name is obviously meant to compete with Teradata, a strictly DW product.
Tom Kyte
October 23, 2012 - 10:55 am UTC

<sarcasm>
but you see, it doesn't strictly imitate the architecture and implementation of what he knows (Hana) so it cannot be what it says to be - because obviously there is exactly one best way to do everything!
</sarcasm>

Repeating again - perhaps new friends did not get the real topic

A reader, October 20, 2012 - 2:38 am UTC

Alexander

I already know the usage of smart scan, storage indexes and hybrid columnar compression. I know that they are for data warehousing,

but the points we are discussing here are:

1. Exadata is an in-memory system.
2. Flash cache will improve data warehouse performance.

Oracle is claiming that X2 is fast and X3 is an in-memory, super-fast system. When I evaluated those claims, I found little truth in them.

1. Exadata is not an in-memory system.
2. Flash cache will improve OLTP performance, but not warehouse performance.

The reason is simple: Oracle quite strangely introduced the concept of active/warm/cold data, while in a data warehouse there are ad-hoc queries which work on the whole data set. There is no active/warm/cold data in a 45TB data warehouse.

If you work with the telecom and banking sectors you usually see such data warehouses. For example, a financial analytics report may contain data going up to 3 years back (36 months). Now let's suppose that only the current 12 months of data is on flash. That means that while running a parallel query, your slave processes reading from flash will finish faster, while the slave processes reading disk data will take time. It means you will not get any improvement in response time,

since in parallel processing the time to finish the query is the maximum of the times taken by all the slave processes.
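
A toy query illustrating that point (the slave timings are invented, purely for illustration):

-- The query finishes only when the slowest slave does.
SELECT MAX(seconds_to_finish) AS query_elapsed_seconds
  FROM (SELECT 2 AS seconds_to_finish FROM dual  -- slave reading flash
        UNION ALL
        SELECT 9 FROM dual);                     -- slave reading disk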

That will also make warehouse response times vary hugely. For example, SAP HANA's response time is 0.35-0.5 seconds, since all the data is in DRAM. If some of your data is in flash and some on disk, the response time may vary by a huge amount.

The more we discuss this topic, the more it proves how stupid X3 is.

Tom Kyte
October 23, 2012 - 11:34 am UTC

1) no one here has ever said that. What has been said (over and over and OVER I might add) is that exadata uses a huge amount of flash memory.

2) and I've said that - and it does for two reasons (which I've repeated over and over)

a) increased bandwidth. When full scanning - we can utilize the bandwidth of flash and of disk - and in fact we do. Even when all of the data is in flash - we still use the spinning magnetic disks - BECAUSE WE CAN - and we benefit from the increased bandwidth.

b) reading from flash, even for large IO's, can be, and generally is, faster than disk.
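
To put toy numbers on point (a) - the throughput figures below are invented placeholders, not Exadata specifications:

-- Time to scan 45TB at a hypothetical 20 GB/s from disk alone versus
-- disk and flash concurrently (20 + 40 GB/s): the bandwidths add.
SELECT ROUND(45 * 1024 / 20)        AS disk_only_seconds,       -- ~2304
       ROUND(45 * 1024 / (20 + 40)) AS disk_plus_flash_seconds  -- ~768
FROM dual;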



Oracle quite strangely introduced the concept of active/warm/cold data.


I think you need to read more critically; critical reading is important. As stated (repeatedly - as in over and over), this is mostly an OLTP feature (just as smart scan is pretty much a reporting/warehouse feature). It can definitely have some use in certain warehouse applications - not all - but some. Just because you don't see the need for it doesn't imply in any way shape or form that it is "strange".

Now let's suppose that only the current 12 months of data is on flash.


this shows you don't understand the purpose of the flash cache... data isn't "on" flash, data is cached in flash - any data, all data, this data, that data. Not just "this special stuff right here"
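
For what it's worth, the only per-segment control is a hint to the cache, not a placement directive. A minimal sketch (the table names are hypothetical; CELL_FLASH_CACHE is the Exadata storage attribute):

-- Ask the Smart Flash Cache to try to keep this segment cached...
ALTER TABLE orders STORAGE (CELL_FLASH_CACHE KEEP);
-- ...or to skip caching this one. Neither clause *places* data on flash.
ALTER TABLE orders_history STORAGE (CELL_FLASH_CACHE NONE);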


for example SAP HANA's response time is 0.35-0.5 seconds since all the data is in DRAM.


Now, I usually don't use this word, but I will here.

That is the stupidest thing you could ever say. It really is. Are you actually saying you cannot write a query that takes longer than 0.5 seconds in Hana, that all queries, no matter how much data it takes - it will be no longer than 0.5 seconds? Wow, that would be amazing - given that just crunching a terabyte of data would likely take a tad bit longer than that - for an ad-hoc query.



good joke Tom

A reader, October 20, 2012 - 2:47 am UTC

we do have 40%+ market share in data warehousing worldwide in 2011, they must all be wrong...

Tom, releasing Exadata was a good move by Oracle, but during the last 3-4 years Oracle made few improvements to it, while vendors like SAP worked aggressively: they released SAP HANA, then the SAP HANA cluster, which can hold up to 1 petabyte of data.

Period....

Larry Ellison at OpenWorld was completely wrong that SAP HANA can only hold up to 5TB of data.

http://www.sacbee.com/2012/10/16/4916071/sap-brings-massive-scale-to-sap.html#storylink%3Dcpy

Tom, just look, and please tell this to Larry Ellison.

This move by Oracle is more out of panic, since they never put much importance on data warehousing. But now, with the emergence of new technologies like Fusion-io and SAP HANA, people have started to question Exadata. So they did a little FUD marketing at OpenWorld, but it did not work in their favour.
Tom Kyte
October 23, 2012 - 12:32 pm UTC

since they never put much importance on data warehousing.


huh? boy is that wrong.. I've been at Oracle for over 19 years, and I can definitely say this is wrong.


and you are right, Hana just recently released an 8TB machine allowing for a theoretical 40TB of uncompressed data (assuming they can in fact compress it at the rate they assume they can).

and if you think 40TB is a big data warehouse, well..... ok, more power to you.

My friend Tom, a great man and Oracle advocate

A reader, October 26, 2012 - 1:01 pm UTC

Tom said
--------------------------------------------------------
That is the stupidest thing you could ever say. It really is. Are you actually saying you cannot write a query that takes longer than 0.5 seconds in Hana, that all queries, no matter how much data it takes - it will be no longer than 0.5 seconds? Wow, that would be amazing - given that just crunching a terabyte of data would likely take a tad bit longer than that - for an ad-hoc query.
-----------------------------------------------------

Tom, my friend,
You are, respectfully, wrong again. Hana supports batch processing and real-time processing (which Oracle Exadata cannot do by any means; real-time data warehouse ETL loading is quite challenging).

The response time of 0.35-0.5 seconds is for real-time processing, meaning that as soon as a transaction is done on the OLTP system, Hana replicates that transaction and applies it to its data in memory. Further, you are wrong again, since Hana keeps no uncompressed data. All the data in memory is compressed 5x, so a node with 1 terabyte of memory can support 5 terabytes of data. Hana has a very advanced clustering system, so you can go up to 1 petabyte of data. Since Hana replicates a small number of rows to the data warehouse, it easily ensures this response time. As for nightly batch mode, there is no question of response time: it is a batch window, and your ETL should complete within that batch window. Dear Tom, you need to learn a lot about data warehousing, I am afraid. Please work hard, since I don't want you to spend another 19 years in darkness.

Yes Tom, any analytic query on Hana ensures a response time of 0.3 to 0.5 seconds. Hana does it the same way Oracle does, but in a much more advanced way. The data is compressed in memory, and indexes (just like Exadata storage indexes, but much more advanced) are in memory, which helps Hana trace which nodes have the data and what the degree of parallelism will be.

Further, there is a binding between data and CPU in Hana to ensure this response time at each node level.

Since an Exadata parallel query runs partially in the database and partially in the Exadata storage cells, it cannot do this, so its response times vary.

However, SAP has introduced cloud services using Hana, where users like you (who have just spent 19 years spreading myths and then writing books to eliminate them) may be entertained. You can test my claim about response times there, on any amount of data. An open challenge to my friend and Oracaholic Tom.

But Tom, Oracle is too old; you don't have any people who understand data warehousing. Take yourself as an example.



oracle vision

A reader, October 26, 2012 - 11:57 pm UTC

Another thing which always makes me wonder about Oracle's vision in IT.

Oracle 11g does not have a Mac OS X release. Tom always expresses his concerns about Windows, and with Windows 8's Metro most Windows users will be moving to Mac. But there is no Mac installer.

To Oracle users: if you want to live in this world of disasters, adopt Windows.

benchmark

mani, December 12, 2013 - 6:36 pm UTC

Has there ever been a real benchmark, on the same volume and type of data, performed against Exadata and Hana to identify which is better on which parameters?
