
Question and Answer

Connor McDonald

Thanks for the question.

Asked: July 29, 2016 - 9:36 am UTC

Last updated: August 01, 2016 - 10:07 am UTC

Version: 11.2.0.3.0


You Asked

Dear Sirs,

My question is about a throughput/bandwidth problem.

We have a data warehouse solution: the database is around 1 TB in size, and the nightly/morning run takes around 15 hours. The optimal outcome would be for users to have their daily generated reports at the beginning of the work day, not at noon or later as they do now.

We see a lot of large reads (90%); full table scans are the typical load because of the business logic (sure, we could/should optimize this as well, but that is another question). We also do a lot of CTAS and direct-path loads to bypass redo, and thus archive log generation, to reduce I/O (backups do not run in parallel with the loads); before these redo-bypass optimizations we generated 200-300 GB of archive logs, nowadays it is only 10-20 GB. We think this type of operation needs large bandwidth rather than the high IOPS that matter more for OLTP workloads.
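For illustration, the redo-bypass pattern is a NOLOGGING target plus a direct-path (APPEND) insert; a minimal sketch, with made-up table names:

-- empty NOLOGGING copy of the source table's structure
create table stage_sales nologging
as select * from src_sales where 1 = 0;

-- direct-path insert: loads above the high-water mark and
-- generates minimal redo for the table data
insert /*+ append */ into stage_sales
select * from src_sales;

commit;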

The disk system is a SAN with a high-bandwidth (16 Gb/s) FC connection. The server has two disk arrays where the datafiles reside (and other drives for the OS, swap file, etc.); both arrays are around 1 TB at the OS level. Each array consists of 5 or 6 SAS disks, 10k (or 15k, can't remember) RPM, 300 GB each, in RAID 5. When we test the drives at the OS level with sequential reads/writes we get around 300-400 MB/s, but in Oracle the speed is somehow restricted to 4-5 MB/s.
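A quick way to see what Oracle itself reports for scan I/O is to query the cumulative wait events; these are standard v$ views, queried as a privileged user:

select event, total_waits, time_waited_micro
from   v$system_event
where  event in ('db file scattered read',
                 'direct path read',
                 'direct path write');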

For example, we ran the following CTAS in NOARCHIVELOG mode with no other load on the server. The table is around 4 GB in size: 20 million records, no indexes, no partitions, no exotic data types, 76 columns (1 date column, 1 varchar2(256) column, 3 varchar2(1) columns, 19 number(16,2) columns, and the rest number(14)). We even changed the STORAGE clause from the default, but it had no effect. The tablespace has plenty of free space.

create table inibig
STORAGE (
  INITIAL 5G
  NEXT 100M
)
as select * from bigtable;

IOPS is around 10-20; throughput/bandwidth is always around 4-5 MB/s.

We also use sqlldr.exe (SQL*Loader) to load data into Oracle, with these options to maximize performance (a sketch of the control file and invocation follows the list):
direct: yes
UNRECOVERABLE: yes
load method: truncate
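In control-file terms, that combination looks roughly like this; the file, table, and column names here are illustrative only:

-- daily_feed.ctl (names are hypothetical); direct path is requested
-- on the command line, e.g.:
--   sqlldr userid=dwh_user/password control=daily_feed.ctl direct=true
UNRECOVERABLE
LOAD DATA
INFILE 'daily_feed.dat'
TRUNCATE
INTO TABLE bigtable
FIELDS TERMINATED BY ';'
( trade_date  DATE "YYYY-MM-DD"
, amount      DECIMAL EXTERNAL
)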

Still, the speed cannot get above 4 MB/s. And if we start other loads in parallel, each of them also runs at this 4 MB/s. We tried tuning the SQL*Loader parameters (BINDSIZE and others), but no matter what we changed, every test we ran with SQL*Loader, CTAS, or any other insert method hit this 4 MB/s limit.

The datafile block size is 16K and db_file_multiblock_read_count is 64, so full scans should be able to issue 1 MB reads. The server is Windows Server 2008 R2 x64 with 8 cores and 72 GB RAM, and Oracle is 11gR2 x64 (11.2.0.3.0).
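A sanity check we can run from inside the database, using standard v$sysstat statistics (divide bytes by requests to estimate the average read size Oracle actually achieves):

select name, value
from   v$sysstat
where  name in ('physical read total bytes',
                'physical read total IO requests');
-- with a 16K block size and db_file_multiblock_read_count = 64,
-- a healthy full scan should approach 1 MB per read request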

We would appreciate any help or suggestions you could give us to improve throughput, if that is possible.
Thank you!

and Connor said...

I would download SLOB

https://kevinclosson.net/2012/02/06/introducing-slob-the-silly-little-oracle-benchmark/

It's a perfect tool for getting an accurate measurement of how much I/O bandwidth your storage can provide.

If SLOB can only give you 'n' MB/sec, then you can be sure that this is the best your infrastructure can give you. (Then it's a battle with your storage admins!)

If it gives you much better performance than you are getting with your warehouse code, then it becomes a case of tracking down the issues in the code.

But start with SLOB
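A quick built-in complement to SLOB is Oracle's own calibration routine, DBMS_RESOURCE_MANAGER.CALIBRATE_IO (it needs TIMED_STATISTICS=TRUE and asynchronous I/O enabled on the datafiles); a minimal sketch:

set serveroutput on
declare
  l_max_iops  pls_integer;
  l_max_mbps  pls_integer;
  l_latency   pls_integer;
begin
  -- num_physical_disks: you described 2 arrays of 5-6 disks, so ~10;
  -- max_latency is the tolerated per-I/O latency in milliseconds
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 10,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  dbms_output.put_line('max_iops='||l_max_iops||
                       '  max_mbps='||l_max_mbps||
                       '  actual_latency='||l_latency);
end;
/

max_mbps is the number to compare with the 300-400 MB/s you see at the OS level.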


Comments

storage/disks were underperforming

A reader, September 25, 2017 - 11:38 am UTC

We recently ended up upgrading the hardware infrastructure for this DWH, moving to all-flash storage in place of the (8-year-old) storage with old-fashioned mechanical disks. We measured a 2-3x increase in performance, so instead of 15 hours the run now takes only 6 hours.