Question and Answer

Chris Saxon

Thanks for the question, Rajeshwaran.

Asked: September 02, 2016 - 6:56 am UTC

Last updated: September 05, 2016 - 11:00 am UTC

Version: 12.1.0.2

Viewed 1000+ times

You Asked

Team,

Please let us know your comments on this post: http://www.johndcook.com/blog/2009/07/06/brewer-cap-theorem-base/

Connor - you've been in this industry since 1990 - please share your view on this.
...
Ultimately I think the peak of the relational database era is on the horizon (in the next 5-10 years) 
and we’ll see “databases” founded on different models rise and become dominant 
(due to the fundamental scalability problems of relational data more than anything). 
....


I believe the first comment on that blog post has been addressed in Oracle since version 4 through multi-version read consistency. Please correct me if I'm getting it wrong.



and Chris said...

It's true we've seen new database models in recent years.

But that article was written 7 years ago. Looking at the popularity of different DB systems we can see:

Sept 2016:

Rank  DBMS                  DB Model           Score
----  --------------------  -----------------  -------
 1    Oracle                Relational DBMS    1425.56
 2    MySQL                 Relational DBMS    1354.03
 3    Microsoft SQL Server  Relational DBMS    1211.55
 4    PostgreSQL            Relational DBMS     316.35
 5    MongoDB               Document store      316
 6    DB2                   Relational DBMS     181.19
 7    Cassandra             Wide column store   130.49
 8    Microsoft Access      Relational DBMS     123.31
 9    SQLite                Relational DBMS     108.62
10    Redis                 Key-value store     107.79

Source: DB-Engines ranking:
http://db-engines.com/en/ranking

Relational databases take the top 4 spots and 7 of the top 10! So the new models have hardly become dominant.

Non-relational stores may continue to rise in popularity. But they won't take over from relational completely. The added complexity makes eventually consistent systems harder to work with. Indeed, Google found this to be such a burden they developed their own relational DB, F1:

"We also have a lot of experience with eventual consistency systems at Google. In all such systems, we find developers spend a significant fraction of their time building extremely complex and error-prone mechanisms to cope with eventual consistency and handle data that may be out of date. We think this is an unacceptable burden to place on developers and that consistency problems should be solved at the database level."

http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en/us/pubs/archive/41344.pdf

The CAP theorem is about consistency, availability and partition tolerance in distributed systems, i.e. when you have multiple systems communicating across a network. So multi-version read consistency doesn't "solve" it, because the guarantee only applies within a single database.
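To see what that single-database guarantee looks like, here's a minimal two-session sketch of statement-level read consistency in Oracle (the accounts table and its values are made up for illustration):

-- Session 1: seed some data
create table accounts ( id number, balance number );
insert into accounts values ( 1, 100 );
commit;

-- Session 1: a query starts now; it sees the data
-- as of the moment it began
select balance from accounts where id = 1;   -- returns 100

-- Session 2: meanwhile, another session changes the row and commits
update accounts set balance = 200 where id = 1;
commit;

-- Session 1: a query already running when session 2 committed
-- still returns 100, rebuilt from undo; only a query started
-- after the commit sees the new value
select balance from accounts where id = 1;   -- returns 200

This consistency is enforced inside one database using undo data. It says nothing about keeping copies of the data in sync across several servers on a network, which is the situation the CAP theorem describes.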

But the theorem isn't as black-and-white as "either your data are consistent or your system is available". There's much more to it than that. I suggest reading Brewer's own article on this (he originated the CAP theorem):

https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed

