Advanced Compression Options in Oracle 11g
Question and Answer

Chris Saxon

Thanks for the question.

Asked: March 10, 2017 - 9:55 am UTC

Last updated: April 07, 2017 - 2:33 am UTC

Version: 11.2

Viewed 1000+ times

You Asked

Hi Chris/Connor,
Can you please share any documentation on the Advanced Compression option in Oracle 11g?
I have been searching on this and am finding only theoretical material.
Could you please share a reference or blog to study so I can understand its use on OLTP or data warehouse systems?

and Chris said...


Comments

Need inputs

A reader, March 14, 2017 - 3:05 pm UTC

Hi Chris,

Does compression slow down DML operations, and does decompression slow down SELECTs?

Below are the points I got from various articles:


Benefits:
• Up to a 3x disk saving (i.e. compressed tables reside on fewer disk blocks).
• Less disk I/O because the data resides in fewer data blocks.
• Faster full scan/range scan operations - because tables reside on fewer data blocks, full table scans and index range scans can retrieve the rows with less disk I/O.
• Reduced network traffic - due to fewer disk blocks.

Overhead:
• Compression/decompression trades I/O for CPU: writes are reduced while CPU use increases, which can slow SQL throughput.
• Overhead at DML time (i.e. compression on write).
• Decompression on read (i.e. data fetch during SQL execution).

Chris Saxon
March 14, 2017 - 5:32 pm UTC

Yep, a good overview of the costs/benefits otherwise.

The exact disk savings entirely depend on your data - you'll need to test yourself to see how much it shrinks your data. Same for CPU overheads.
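A minimal sketch of such a test, assuming an 11gR2 schema with the Advanced Compression option licensed (the table names are illustrative):

```sql
-- Build an uncompressed and an OLTP-compressed copy of the same data.
CREATE TABLE t_plain AS
  SELECT * FROM all_objects;

CREATE TABLE t_comp COMPRESS FOR OLTP AS
  SELECT * FROM all_objects;

-- Compare the space each copy actually consumes.
SELECT segment_name, blocks, ROUND(bytes/1024) AS kb
FROM   user_segments
WHERE  segment_name IN ('T_PLAIN', 'T_COMP');
```

The ratio you see depends entirely on how much repetition your data has; run it against a copy of your own tables, not a demo schema.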

If you want to know more about this, I recommend reading Richard Foote's compression articles:

https://richardfoote.wordpress.com/category/advanced-index-compression/
https://richardfoote.wordpress.com/category/index-compression/

Tom

A reader, March 14, 2017 - 5:47 pm UTC

• Compression/decompression trades I/O for CPU: writes are reduced while CPU use increases, which can slow SQL throughput.
• Overhead at DML time (i.e. compression on write).
• Decompression on read (i.e. data fetch during SQL execution).


If everyone should go to richardfoote or Tim Hall's blog, then why should we come to AskTom?

Why don't you provide some test cases to show us the overhead associated with DML and the slower writes?

If this was Tom Kyte he would have answered these types of questions with a couple of test cases and tkprof results.

you lazy guys !!!
Connor McDonald
March 15, 2017 - 2:37 am UTC

If everyone should go to richardfoote or Tim Hall's blog, then why should we come to AskTom?

Yeah, I can probably think of one person that I'd prefer didn't come to AskTom :-)

But everyone else is always welcome
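For what it's worth, a rough sketch of the kind of DML-overhead test case being asked for, assuming 11gR2 (compare elapsed and CPU time yourself with SET TIMING ON, autotrace or tkprof; the numbers will vary with your data and hardware):

```sql
CREATE TABLE t_plain (x NUMBER, y VARCHAR2(100));
CREATE TABLE t_comp  (x NUMBER, y VARCHAR2(100)) COMPRESS FOR OLTP;

SET TIMING ON

-- Conventional-path inserts: the compressed table pays extra CPU
-- each time a block fills towards PCTFREE and gets (re)compressed.
INSERT INTO t_plain
  SELECT level, RPAD('x', 100, 'x')
  FROM   dual CONNECT BY level <= 100000;

INSERT INTO t_comp
  SELECT level, RPAD('x', 100, 'x')
  FROM   dual CONNECT BY level <= 100000;

COMMIT;
```

Repeat with UPDATEs against each table to see the write-side cost there too.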

Advanced Compression

Jeff, March 14, 2017 - 6:10 pm UTC

A Reader:

"Overhead:
• Compression/decompression trades I/O for CPU: writes are reduced while CPU use increases, which can slow SQL throughput.
• Overhead at DML time (i.e. compression on write).
• Decompression on read (i.e. data fetch during SQL execution)."

You need to re-read the documentation.

1: There is no overhead on reads, because blocks are not "decompressed" when read. The RDBMS reads the compressed data out of the block natively. If you think about it for a minute, you'll realize it has to be that way, because we're still working with fixed block sizes in the buffer cache. If the block had to be decompressed to be read, you wouldn't be able to do that.

2: Overhead on writes. Again, it depends. There isn't overhead on all writes. Block compression only happens when an update/insert causes the space consumed in the block to reach the PCTFREE threshold. (Interesting side note: on insert, the compression is done before the actual insert, so you will almost always have compressed and uncompressed data co-mingled in the same block.)

3: The level of compression achieved is heavily dependent on the data being processed, or even the way the data is sorted prior to being loaded. So test cases on a generic sample database won't tell you much of anything about how it will behave in your specific situation.

Having said that, after implementing Advanced Compression in our production EBS database, the gross size of the database reduced to almost exactly 50% of what it was pre-compression. But individual tables in the database saw compression ratios ranging from roughly 1:1 (no compression at all) to almost 8:1.
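If you want that kind of per-table estimate before committing to anything, 11gR2 ships DBMS_COMPRESSION.GET_COMPRESSION_RATIO, which samples the table into a scratch tablespace. A sketch (the tablespace, schema and table names are placeholders):

```sql
SET SERVEROUTPUT ON

DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'USERS',   -- scratch tablespace for the sample
    ownname        => 'SCOTT',   -- placeholder schema
    tabname        => 'EMP',     -- placeholder table
    partname       => NULL,
    comptype       => DBMS_COMPRESSION.COMP_FOR_OLTP,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);

  DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio);
END;
/
```

As Jeff says, the estimate is only as good as the sample: run it per table, on your real data.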

Connor McDonald
March 15, 2017 - 2:35 am UTC

Thanks for your input Jeff.

So entitled :)

Tubby, March 14, 2017 - 6:51 pm UTC

To "a reader" who feels the need to say things like
"you lazy guys !!! "

You were pointed to great resources which you could (and should) have found yourself with a simple google search, instead you come here asking to be spoon fed the information.

Someone in this thread is certainly lazy, if you're looking to point fingers maybe grab a mirror first :)
Connor McDonald
March 15, 2017 - 2:35 am UTC

:-)

To a reader

Gh.., March 14, 2017 - 7:37 pm UTC

Indeed. Perhaps he'll be beaten by his boss if he fails to provide the dish.

PS. Good, patient behaviour from Jeff.

Humm

Tommy, March 14, 2017 - 7:51 pm UTC

Luckily for you it was not Tom; otherwise you would have had the beating that you deserve, silly "reader".

Compression removes duplicates from the block!!

A reader, April 06, 2017 - 1:34 pm UTC

Hi Chris/Connor,

In the Oracle docs it is mentioned: "Advanced Row Compression works by eliminating duplicate values within a database block, even across multiple columns".

Could you please help me understand what exactly is meant by eliminating duplicates?
Connor McDonald
April 07, 2017 - 2:33 am UTC

For a table T (firstname, lastname, role), a block might be stored (conceptually) as:

row1: CONNOR,MCDONALD,ASKTOM
row2: CHRIS,SAXON,ASKTOM
row3: MARIA,COLGAN,ASKTOM

and so forth. Compression might do something like this:

compression element #1: ASKTOM, rows 1,2,3
row1: CONNOR,MCDONALD,
row2: CHRIS,SAXON,
row3: MARIA,COLGAN,

so we remove the duplicated "ASKTOM" and store a mapping to it instead.

A little dated, but there's a nice presentation on the internals of it all here:

www.juliandyke.com/Presentations/DataSegmentCompression.ppt
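You can see the de-duplication described above reflected in segment sizes: a column of repeated values compresses far better than a column of distinct values. A sketch (exact block counts depend on your block size):

```sql
-- Same row count; one table repeats a value, the other doesn't.
CREATE TABLE t_dup COMPRESS FOR OLTP AS
  SELECT 'ASKTOM' AS role, rownum AS id
  FROM   dual CONNECT BY level <= 100000;

CREATE TABLE t_uniq COMPRESS FOR OLTP AS
  SELECT TO_CHAR(rownum, 'FM0999999') AS role, rownum AS id
  FROM   dual CONNECT BY level <= 100000;

-- The table with the repeated value should occupy noticeably fewer blocks.
SELECT segment_name, blocks
FROM   user_segments
WHERE  segment_name IN ('T_DUP', 'T_UNIQ');
```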


Simple and precise. Thanks a lot!!

Nikhil, April 07, 2017 - 3:16 am UTC