Performance improvement by placing tables on a virtual RAM disk


Question and Answer

Connor McDonald

Thanks for the question, Nagarajan.

Asked: March 22, 2016 - 10:50 am UTC

Last updated: March 24, 2016 - 2:15 am UTC

Version: Oracle 11.2

Viewed 1000+ times

You Asked

I am trying to improve the performance of queries by placing the tables on a virtual RAM disk. I am using Oracle 11.2 Standard Edition with 4 GB of SGA and 2 GB of PGA.

I created a virtual RAM disk of 140 GB using ImDisk, then moved a single table, which has 350 million rows and is 32 GB in size, onto this RAM disk.
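
A table can be relocated onto the RAM disk by creating a tablespace there and moving the table into it; a minimal sketch, with hypothetical tablespace name and datafile path:

-- a sketch; the tablespace name and Windows path are hypothetical
create tablespace ram_ts datafile 'R:\oradata\ram_ts01.dbf' size 40g;
alter table test_table move tablespace ram_ts;
-- any indexes on the table become unusable after the move, e.g.:
-- alter index my_index rebuild;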

Now I am accessing this table with a simple, straightforward query:

SELECT PRICE, QUANTITY FROM TEST_TABLE;

It is taking around 5-6 minutes to read the entire table.

There is no difference in performance whether the table is on the hard disk (SSD) or the RAM disk. Given that RAM read speeds are many times faster than disk, why is this not reflected in the query performance?

Am I missing something here?

Any help would be appreciated.

Explain Plan

Plan hash value: 2982911821

--------------------------------------------------------------------------------
| Id  | Operation         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |            |   348M|    11G|  1311K  (2)| 04:22:14 |
|   1 |  TABLE ACCESS FULL| TEST_TABLE |   348M|    11G|  1311K  (2)| 04:22:14 |
--------------------------------------------------------------------------------

I am trying to see whether a RAM disk can really improve performance. Since this is a full table scan that has to fetch so many rows, I thought it would be an appropriate use case to check the effects of a RAM disk with Oracle.

Moreover, we already have a mechanism in place to handle the volatility of RAM, so that is not a concern right now. Also, the entire database is on SSD (and we are still not satisfied with the performance). Data should be fetched far faster from RAM than from SSD; the one question I cannot answer is why that is not reflected here (even though the sample query is I/O bound).

and Connor said...

Take a look here

https://connormcdonald.wordpress.com/2013/11/24/once-youve-done-your-io-theres-still-more-to-do/

So even if you *read* from disk super fast, you still need to consume (and, in your case, deliver) those rows back to the calling environment.

Something to try:

exec dbms_monitor.session_trace_enable(waits=>true)

select max(PRICE), max(QUANTITY) from TEST_TABLE;

so there's little or nothing to fetch. That gives you an idea of how fast you can actually *process* the data coming from RAM, and the trace will give you a breakdown of the waits. It *might* actually be a lack of CPU.
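
If you're not sure where the resulting trace file lands, you can locate it from the session and then format it with tkprof; a minimal sketch (the .trc file name below is hypothetical):

select value from v$diag_info where name = 'Default Trace File';

-- then from the OS prompt, e.g.:
-- tkprof your_trace_file.trc report.txt sys=no waits=yes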

If it's still being reported as IO waits, try cranking up the parallelism and repeat.
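
For example, a parallel hint might look like this (a sketch only; the degree of 8 is an arbitrary choice, and bear in mind that parallel query is an Enterprise Edition feature, so it may not apply on Standard Edition):

select /*+ parallel(t 8) */ max(PRICE), max(QUANTITY)
from TEST_TABLE t;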

Keep us posted with the results.

I am curious - why not just allocate all that RAM to the buffer cache?
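
For illustration, that could look something like this (a sketch only: the sizes are arbitrary, an spfile is assumed, and the SGA change requires an instance restart):

-- a sketch; sizes are illustrative and assume the server has the RAM
alter system set sga_max_size = 48g scope=spfile;
alter system set sga_target   = 48g scope=spfile;
-- after the restart, optionally give the table its own cache pool:
alter system set db_keep_cache_size = 36g scope=both;
alter table test_table storage (buffer_pool keep);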


Comments


Nagarajan Umapathy, March 23, 2016 - 2:39 pm UTC

Please find the trace output.

SQL ID: g7y5wu6409mwv
Plan Hash: 1401355061
SELECT MAX(REG_PRICE), MAX(TOT_MOVEMENT)
FROM
TEST_TABLE

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1    133.56     134.34    4771867    4771886          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3    133.56     134.34    4771867    4771886          0           1


Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 85

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE (cr=4771886 pr=4771867 pw=0 time=0 us)
347989956   TABLE ACCESS FULL TEST_TABLE (cr=4771886 pr=4771867 pw=0 time=113824488 us cost=1311127 size=2786103200 card=348262900)


Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                       1        0.00          0.00
  Disk file operations I/O                        2        0.00          0.00
  direct path read                               15        0.00          0.00
  db file sequential read                         1        0.00          0.00
  SQL*Net message from client                     1     2908.01       2908.01
********************************************************************************

Chris Saxon
March 24, 2016 - 2:15 am UTC

There you go.

134 seconds elapsed, of which 133 were CPU activity.

This is why our (12c) In-Memory option is not *just* storing stuff in RAM; it's a total restructure of the way we hold the data in RAM.

Because whilst rows in RAM are faster than rows on disk, there's a CPU cost (as you've just shown) to digesting all of those rows.

With 12c In-Memory

Rajeshwaran Jeyabal, March 24, 2016 - 11:08 am UTC

"This is why our (12c) In-Memory option is not *just* storing stuff in RAM; it's a total restructure of the way we hold the data in RAM."

Thanks, this helps me understand better. The test below is from Oracle 12c In-Memory, with big_table holding 10M rows.

rajesh@ORA12C> select min(object_id), max(object_id) from big_table ;

MIN(OBJECT_ID) MAX(OBJECT_ID)
-------------- --------------
             2         104464

1 row selected.

Elapsed: 00:00:13.38
rajesh@ORA12C> alter table big_table
  2  inmemory
  3  no inmemory(OWNER ,
  4  OBJECT_NAME       ,
  5  SUBOBJECT_NAME    ,
  6  DATA_OBJECT_ID    ,
  7  OBJECT_TYPE       ,
  8  CREATED           ,
  9  LAST_DDL_TIME     ,
 10  TIMESTAMP         ,
 11  STATUS            ,
 12  TEMPORARY         ,
 13  GENERATED         ,
 14  SECONDARY         ,
 15  NAMESPACE         ,
 16  EDITION_NAME      ,
 17  SHARING           ,
 18  EDITIONABLE       ,
 19  ORACLE_MAINTAINED ,
 20  ID   );

Table altered.

Elapsed: 00:00:00.32
rajesh@ORA12C> select min(object_id), max(object_id) from big_table ;

MIN(OBJECT_ID) MAX(OBJECT_ID)
-------------- --------------
             2         104464

1 row selected.

Elapsed: 00:00:00.03
rajesh@ORA12C>
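
One way to confirm that the segment was actually populated into the In-Memory column store is v$im_segments; a minimal sketch, assuming the inmemory_size initialization parameter has been set:

select segment_name, populate_status, bytes, inmemory_size
from   v$im_segments;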

