MIT Prof Claims RDBMS Model is All Wrong
Question and Answer

Chris Saxon

Thanks for the question, Alex.

Asked: January 11, 2017 - 8:35 pm UTC

Last updated: January 12, 2017 - 12:22 pm UTC

Version: 12.1.0.2

Viewed 1000+ times

You Asked

Hi Team,

My manager recently sent out this interesting talk by the founder of Postgres and VoltDB, Michael Stonebraker.

https://blog.jooq.org/2013/08/24/mit-prof-michael-stonebraker-the-traditional-rdbms-wisdom-is-all-wrong/

I am curious how much of this is true for Oracle, and what problems exist with the model he describes in his VoltDB product. This is a very broad question since it's an hour-long talk, so I'm interested in hearing whatever you have time for. This is the kind of thing Tom seemed to be at his best with and thrive on. Also, if Jonathan Lewis is out there lurking, I'd like to hear your thoughts as well.

and Connor said...

Last time I looked... H-Store and VoltDB have not taken over the database market :-)

But in reality, my view is pretty simple - people like technology that is mature, functionally complete, a good fit for their requirements, and cost effective.

For analytics, column-based stores are great - it's a reason we have HCC and In-Memory, and other vendors have similar offerings. But do I want (solely) a column-based store for high-volume transactions? ... probably not. That's a key differentiator for our In-Memory product - you get column-based access *without* having to run a second database alongside your OLTP system.
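As a rough illustration (the table name and sizing below are hypothetical, and this assumes the Database In-Memory option is licensed with INMEMORY_SIZE set on the instance), enabling columnar access on an existing row-store table is a one-line change:

  -- Assumes the instance has an in-memory area, e.g. INMEMORY_SIZE = 4G (needs a restart).
  -- The SALES table here is purely illustrative.
  ALTER TABLE sales INMEMORY PRIORITY HIGH;

  -- OLTP keeps using the normal row format; analytic scans like this one
  -- can be satisfied from the columnar copy of the same table:
  SELECT prod_id, SUM(amount_sold)
  FROM   sales
  GROUP  BY prod_id;

No second, separately loaded analytic database is involved - the column format is just another representation of the same data, maintained by the same instance.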

But I do disagree with the content in two areas:

1) That customers need exclusively RAM-based systems for transactions. A stock-standard Oracle database (and I'm not being biased here - other vendors I'm sure can do this too) can crank out tens of thousands of transactions per second on commodity hardware. If you need *millions* of transactions per second, then fine... look at niche solutions, because guess what... you're a niche customer. I'd contend that 99.99% of database customers in the world do *not* need that scale.

2) The concept of "this database is for OLTP", "this database is for OLAP", etc. I think that is rubbish nowadays. Every database is a transactional database, every database is an analytic database - the line between them is so blurred. Once again, I think that's why the In-Memory option is such a cool piece of tech for us.


Rating

  (1 rating)


Comments

RE

George Joseph, January 12, 2017 - 7:37 am UTC

Just out of curiosity, have you worked with customers who required millions of transactions per second?

What would be a real-world use case requiring millions of TPS?

I guess retail isn't going to need millions of TPS?
Chris Saxon
January 12, 2017 - 12:22 pm UTC

Folks from CERN gave an interesting talk at UKOUG about processing data from the Large Hadron Collider. This generated millions of rows/transaction:

http://www.tech16.ukoug.org/default.asp?p=14778&dlgact=shwprs&prs_prsid=12656&day_dayid=102

But yes, your typical retail app won't need this scale - along with 99.99% of other applications, as Connor says.