

Question and Answer

Connor McDonald

Thanks for the question, Piotr.

Asked: October 11, 2003 - 3:21 pm UTC

Last updated: November 21, 2022 - 11:56 am UTC


Viewed 10K+ times

You Asked


I have a table with BLOBs; they are actually character logs generated by fax transmission software. They are about 4k-16k big and pretty redundant; they compress with zlib to just about 1k-2k. And I want to store them compressed in the DB. Now the question: what is better in your opinion?
To use a server-side compressor lib in on-insert triggers for storing (they will never be updated) and decompressing views? Or maybe you would code the compressor/decompressor logic on the client side.

I wrote the compressor/decompressor as a Java SP and use the first approach, thus there has to be no compressor-specific client-side code (Perl, Java and C++), but the drawback is that there is more network traffic than necessary.

Any comments?

Thank you in advance


and Tom said...

I would do the compression in the database -- easier to manage and the amount of network traffic probably won't be an issue (they trickle in all day long right...)

In 10g, the package utl_compress will be of keen interest to you to replace all of the custom code you have.

make sure to enable inline row storage -- else the lobs will be stored on a block all by itself (eg: a 1k lob stored out of line will consume at least ONE block all by itself, stored inline, in the row, it'll just take its size)
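For reference, a minimal sketch of what that storage clause looks like (the table and column names here are hypothetical, not from the question):

```sql
-- Hypothetical fax-log table: after compression the LOBs are small
-- enough to fit inline (inline storage applies up to roughly 4000
-- bytes), so they stay in the row instead of consuming a whole block.
create table fax_log (
  fax_id   number primary key,
  log_data blob
)
lob (log_data) store as (
  enable storage in row
);
```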


  (53 ratings)



Enabling inline row storage

Piotr Jarmuz, October 11, 2003 - 4:13 pm UTC

I agree. Interesting comments.

Especially this enable inline row storage hint. I will do that. Can I enable it post factum with alter and/or rebuild or do I have to re-create the table?

Seems like 10g will render obsolete most of my custom Java code in Oracle DB (compression, regex).

If you want I can post my compression code (Java and PLSQL wrappers) so that other people might use it in 8i or 9i.

Thanks a lot.


Tom Kyte
October 11, 2003 - 4:38 pm UTC

believe you must rebuild...

if the code is "small", post it.
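For anyone reading along, the rebuild can be done with ALTER TABLE ... MOVE rather than a drop and re-create; a sketch, assuming a hypothetical table fax_log with BLOB column log_data and primary-key index fax_log_pk:

```sql
-- Rewrite the table and its LOB segment with inline storage enabled.
alter table fax_log move
  lob (log_data) store as (enable storage in row);

-- a MOVE marks the table's indexes UNUSABLE, so rebuild them afterwards
alter index fax_log_pk rebuild;
```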

It is small...

Piotr Jarmuz, October 11, 2003 - 4:57 pm UTC

...and self-explanatory.

Java part:

create or replace and compile java source named asco."LobCompressor" as
import java.io.*;
import java.util.zip.*;
import oracle.sql.*;

/**
 * A simple class for LOB compression and decompression in Oracle Database. Will work in 8i and better.
 * @author Piotr Jarmuz
 */
public class LobCompressor {
    /**
     * Compresses the CLOB into BLOB
     * @param clob the source CLOB (plain text)
     * @param blob the target BLOB (will hold compressed binary data); it should be an empty BLOB retrieved for example with dbms_lob.createtemporary(l_blob,true);
     * @throws Exception mostly I/O exception if ever
     */
    public static void compress(CLOB clob, BLOB blob) throws Exception {
        InputStream in = clob.getAsciiStream();
        DeflaterOutputStream z = new DeflaterOutputStream(blob.getBinaryOutputStream());
        byte[] buffer = new byte[clob.getBufferSize()];

        int cnt;
        while ((cnt = != -1) {
            z.write(buffer, 0, cnt);
        }
        in.close();
        z.close();
    }

    /**
     * Decompresses the BLOB into CLOB
     * @param blob the source BLOB (compressed binary data)
     * @param clob the target CLOB (will hold plain text); it should be an empty CLOB retrieved for example with dbms_lob.createtemporary(l_clob,true);
     * @throws Exception mostly I/O exception if ever
     */
    public static void decompress(BLOB blob, CLOB clob) throws Exception {
        OutputStream out = clob.getAsciiOutputStream();
        InflaterInputStream z = new InflaterInputStream(blob.getBinaryStream());
        byte[] buffer = new byte[blob.getBufferSize()];

        int cnt;
        while ((cnt = != -1) {
            out.write(buffer, 0, cnt);
        }
        z.close();
        out.close();
    }

    /**
     * Compresses the BLOB into BLOB
     * @param slob the source BLOB (plain binary data)
     * @param blob the target BLOB (will hold compressed binary data); it should be an empty BLOB retrieved for example with dbms_lob.createtemporary(l_blob,true);
     * @throws Exception mostly I/O exception if ever
     */
    public static void compress(BLOB slob, BLOB blob) throws Exception {
        InputStream in = slob.getBinaryStream();
        DeflaterOutputStream z = new DeflaterOutputStream(blob.getBinaryOutputStream());
        byte[] buffer = new byte[slob.getBufferSize()];

        int cnt;
        while ((cnt = != -1) {
            z.write(buffer, 0, cnt);
        }
        in.close();
        z.close();
    }

    /**
     * Decompresses the BLOB into BLOB
     * @param blob the source BLOB (compressed binary data)
     * @param slob the target BLOB (will hold plain binary data); it should be an empty BLOB retrieved for example with dbms_lob.createtemporary(l_blob,true);
     * @throws Exception mostly I/O exception if ever
     */
    public static void decompress(BLOB blob, BLOB slob) throws Exception {
        OutputStream out = slob.getBinaryOutputStream();
        InflaterInputStream z = new InflaterInputStream(blob.getBinaryStream());
        byte[] buffer = new byte[blob.getBufferSize()];

        int cnt;
        while ((cnt = != -1) {
            out.write(buffer, 0, cnt);
        }
        z.close();
        out.close();
    }
}

PL/SQL package specification:

create or replace package compressor is
  function clob_compress(p_clob clob) return blob;
  function clob_decompress(p_blob blob) return clob;
  function blob_compress(p_blob blob) return blob;
  function blob_decompress(p_blob blob) return blob;
end compressor;
/

And PL/SQL package implementation:

create or replace package body compressor is

  procedure clob_decompress(p_blob blob, p_clob clob)
  as language java
  name 'LobCompressor.decompress(oracle.sql.BLOB, oracle.sql.CLOB)';

  procedure clob_compress(p_clob clob, p_blob blob)
  as language java
  name 'LobCompressor.compress(oracle.sql.CLOB, oracle.sql.BLOB)';

  procedure blob_decompress(p_slob blob, p_blob blob)
  as language java
  name 'LobCompressor.decompress(oracle.sql.BLOB, oracle.sql.BLOB)';

  procedure blob_compress(p_slob blob, p_blob blob)
  as language java
  name 'LobCompressor.compress(oracle.sql.BLOB, oracle.sql.BLOB)';

  function clob_compress(p_clob clob) return blob is
    l_blob blob;
  begin
    if p_clob is null then
      return null;
    end if;
    dbms_lob.createtemporary(l_blob, true);
    clob_compress(p_clob, l_blob);
    return l_blob;
  end;

  function clob_decompress(p_blob blob) return clob is
    l_clob clob;
  begin
    if p_blob is null then
      return null;
    end if;
    dbms_lob.createtemporary(l_clob, true);
    clob_decompress(p_blob, l_clob);
    return l_clob;
  end;

  function blob_compress(p_blob blob) return blob is
    l_blob blob;
  begin
    if p_blob is null then
      return null;
    end if;
    dbms_lob.createtemporary(l_blob, true);
    blob_compress(p_blob, l_blob);
    return l_blob;
  end;

  function blob_decompress(p_blob blob) return blob is
    l_blob blob;
  begin
    if p_blob is null then
      return null;
    end if;
    dbms_lob.createtemporary(l_blob, true);
    blob_decompress(p_blob, l_blob);
    return l_blob;
  end;

end compressor;
/

I made PL/SQL wrapper functions so that they can be called from the SQL layer, e.g. in views hiding the (de)compression.



lucatoldo, October 22, 2004 - 5:17 am UTC

I needed precisely the same, since I am not running 10g but only 9i2. Therefore, this posting saved a lot of my time.
It took me 30 minutes to find the code, but it would have taken me much more than that to develop and debug it.
Thank you so much Piotr, for posting it!


reader, December 18, 2004 - 1:34 pm UTC

Re your reply above <quote>make sure to enable inline row storage -- else the lobs will be stored on a block all by itself</quote>

To use utl_compress package, do I have to enable inline row storage? If I kept the LOBS in another tablespace, can it be compressed as well?

Is this package for all LOBS - BLOB, CLOB? Thanks. You are awesome.


Tom Kyte
December 18, 2004 - 3:54 pm UTC

if you want the blob to be stored inline, you need to make sure you allowed for inline storage

I don't see any connection between utl_compress and this really. whether the blob is compressed or not -- if you want it inline (up to 4000 bytes) make sure you allow for inline storage.

utl_compress is currently setup to do binary lob compression only.



reader, December 19, 2004 - 11:56 am UTC

If I read a BLOB, would it uncompress automatically? Thanks.

Tom Kyte
December 19, 2004 - 12:00 pm UTC

no. please see above link, it describes the API, there are compress and decompress routines.
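A minimal sketch of that API as it appears in 10g (the table and column names are hypothetical); compression and decompression are explicit calls, never automatic:

```sql
declare
  l_plain blob;
  l_zip   blob;
begin
  select log_data into l_plain from fax_log where fax_id = 1;

  l_zip   := utl_compress.lz_compress(l_plain);   -- deflate the blob
  l_plain := utl_compress.lz_uncompress(l_zip);   -- and get it back
end;
/
```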

Read blob data - time

Jairo Ojeda, February 14, 2005 - 1:46 pm UTC

Tom, I have a new issue,
We have an application (C#) that stores JPG images on 9iR2. These images are document pages, where a document can have 1..N pages. So the application shows the user a preview (in a listview) of all pages of a specific document and the user can select one and see the whole page (image), but the application is slow.
We debugged it and found that the query is not the problem (less than a second) but the blob data reading is too slow (around 10 seconds) getting 5 images,

//Read blob data into byte array
int i = blob.Read(byteBLOBData,0,System.Convert.ToInt32(blob.Length));

the develop guys said that the solution is to CTAS from the image table and store the same images in lower quality to show in the listviewer, and use the original image table to get the whole image to the client, so the images will be stored twice (original and lower quality) in the database (redundancy).

Do you know another way to perform the code to read blob data?
Is there another solution? Point me to the link, if there is one.
Are the develop guys right?

Tom Kyte
February 14, 2005 - 5:39 pm UTC

why not use the image type that you have and ask it to scale the images? you can store the full size image AND the thumbnail and the database can do it all...

storing a thumbnail is not "redundancy", the thumbnail is a different piece of data. Sort of like storing "First_Name" and "Know_As" -- in the HR database for me, they store "Thomas" and "Tom" -- full image and thumbnail.

A reader, February 14, 2005 - 4:01 pm UTC


Jairo Ojeda, February 15, 2005 - 10:59 am UTC

I was thinking of redundancy, I mean having two tables with the same structure, "same data" (image/thumbnail), but you clarified it for me, so I will add a new column to the image table to store the thumbnail, thanks.

A reader, July 28, 2005 - 2:42 pm UTC

Tom, it seems that most of the BLOB compression functionality requires the BLOB already be in the database. Is there any way to compress the BLOB as it's being inserted (from client app) and remove the need to save it uncompressed and then compress it?

Tom Kyte
July 28, 2005 - 3:45 pm UTC

the client can compress the blob in any way it wants to, nothing preventing the client from doing that.

but there is no "stream and compress this via sqlnet" sort of functionality.

Performance impact ?

ManchesterNH, August 01, 2005 - 11:44 am UTC

We are on 9i2 and are thinking of using this solution to save disk space. Ours is an insert-heavy system and we wanted some data on the performance impact of using these compression routines on inserts.

Tom Kyte
August 01, 2005 - 12:33 pm UTC

do you use insert /*+ append */ and bulk insert or just inserts?

Performance impact ?

A reader, August 01, 2005 - 12:59 pm UTC

It is just inserts.. Size of LOB is about 50-100KB.. We will have to call these compression routines about 1-2 million times a day..

Tom Kyte
August 01, 2005 - 1:30 pm UTC

doh, I was thinking about segment compression, did not read the title...

I doubt it would be "useful" to compress 50-100kb objects 1-2 million times a day.

Disk costs much less than CPU.

Call to free temporary LOB required?

Glenn A. Santa Cruz, August 01, 2005 - 3:03 pm UTC

Thanks to Piotr Jarmuz for sample code above. Using the functions to decompress LOBs on-the-fly in views, do we need to free the temporary LOBs created by the decompression functions? For example:

We have an underlying table containing a compressed BLOB. Data is inserted via a procedure call from a Java-based client, similar to:
tempBlob = BLOB.createTemporary( conn, true, BLOB.DURATION_SESSION );
tempBlob.open( BLOB.MODE_READWRITE );
OutputStream os = tempBlob.getBinaryOutputStream() ;
... < read file contents into outputstream > ...
cs = (CallableStatement) conn.prepareCall( "begin LobUtility_pkg.insertFile( ?, ? ); end;" );
cs.setObject(1, fileid);
cs.setObject(2, tempBlob);
The "insertFile" packaged procedure handles compressing the data and persisting it into our underlying table (filequeue).

For subsequent access to the BLOB via SQL, we have a view:
create or replace view v as
select fileid, lobutility_pkg.blob_decompress( filecontent ) as filecontent from filequeue;

Would this decompression-on-the-fly view work as simply as this, or do we need to be concerned with freeing temporary LOBs?

I've run brief testing from SQL*Plus, selecting from the view, and then checking (in another session) v$temporary_lobs -- all looks fine to me, but I'm concerned that I may be overlooking something.

Tom Kyte
August 01, 2005 - 3:05 pm UTC

if the duration is for a session, yes, you better free them (especially if you connection pool where a session is a really really long thing)

Freeing temporary

Glenn A. Santa Cruz, August 01, 2005 - 4:20 pm UTC

Just to be sure I understand which LOB needs freeing:

1) The insertion calls a packaged procedure, passing in a temporary LOB. This LOB is created and closed() by the client, then bound to a callable statement. Calls to freeTemporary() on this side cause ORA-22922: nonexistent LOB value ( maybe because the LOB was already closed()? )

2) The view encapsulating the decompression function returns a temporary blob via SQL. Clients could:
for rec in (select fileid, filecontent from v)
loop
  if dbms_lob.istemporary( rec.filecontent ) = 1 then
    dbms_lob.freetemporary( rec.filecontent );
  end if;
end loop;

You're suggesting that we need to close the temporary blobs returned to us via the view, as in example # 2 above, correct? Anything to worry about on the insertion side (example # 1) ?

Tom Kyte
August 01, 2005 - 8:41 pm UTC

if you have an explicit allocate temporary of session duration, that is like "returning an open file", you need to close it.

else, it'll just hang onto the temp space.

Freeing temporary

Glenn A. Santa Cruz, August 01, 2005 - 5:42 pm UTC

Found the documentation regarding the freeing of temporary lobs in SQL calls here:


Returned temporary LOBs automatically get freed at the end of a PL/SQL program block

Given this tidbit, I think it's still a good practice to free the temporary LOB when processing a resultset containing LOBs, but at least this helps alleviate my concerns over temporary LOBs returned from PL/SQL function calls.

Tom Kyte
August 01, 2005 - 8:46 pm UTC

no, you need to look at the text preceding that

in PL/SQL, C (OCI), and Java, SQL query results return temporary LOBs for operation/function calls on LOB columns. For example:

SELECT substr(CLOB_Column, 4001, 32000) FROM ...


that is not a temporary clob EXPLICITLY allocated by you the programmer, that is what you are dealing with, not with temporary clobs that are implicitly set up in the context of a plsql routine.

Compressing a CLOB

Doug, August 04, 2005 - 10:06 am UTC

I have data in a CLOB column that I'd like to compress using the new-fangled UTL_COMPRESS. As you (and the documentation) point out, UTL_COMPRESS only handles RAW and BLOB.

So far I've been unsuccessful in figuring out how to coerce my CLOB data into a BLOB so I can then use UTL_COMPRESS on it... where should I be looking?

If all else fails, I can use Piotr's kindly contributed Java stored proc.

Tom Kyte
August 04, 2005 - 10:13 am UTC

since the compressed data will be a blob (it is binary), you'll have to use a blob ultimately anyway -- you cannot store the compressed data in a clob

You can use a temporary blob, read the clob 32k at a time and, using utl_raw.cast_to_raw, you can stash the string data in the blob.

These clobs would have to be literally huge and infrequently accessed before you would really want to start doing this.
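A sketch of the steps described above, assuming a single-byte character set (with multibyte data the substr/offset arithmetic gets more delicate) and a hypothetical docs table:

```sql
declare
  l_clob  clob;
  l_blob  blob;
  l_chunk pls_integer := 8000;   -- characters per read, well under 32k
  l_off   pls_integer := 1;
  l_raw   raw(32767);
begin
  select doc_text into l_clob from docs where doc_id = 1;

  dbms_lob.createtemporary(l_blob, true);
  while l_off <= dbms_lob.getlength(l_clob) loop
    -- read a slice of the clob and cast it to raw
    l_raw := utl_raw.cast_to_raw(dbms_lob.substr(l_clob, l_chunk, l_off));
    dbms_lob.writeappend(l_blob, utl_raw.length(l_raw), l_raw);
    l_off := l_off + l_chunk;
  end loop;

  -- now the character data is in a blob and utl_compress applies
  l_blob := utl_compress.lz_compress(l_blob);
end;
/
```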

Automatic compress of BLOB column with a trigger

George Sogrwig, August 05, 2005 - 7:34 am UTC

Hello. I am trying to create a trigger that will
automatically compress blobs while they are inserted
in the database (I use 10g). In:


there is an example...

"Example: Modifying LOB Columns with a Trigger" clearly states that "Formerly, you could examine LOB
columns within a trigger body, but not modify them. Now,
you can treat them the same as other columns".

I have had ABSOLUTELY no luck modifying a blob value while
it is inserted in the db. No errors are generated, the
value just won't change! I know my problem is the way
I insert binary data in the database.

My code:

CREATE OR REPLACE TRIGGER document_insert_trigger
BEFORE INSERT ON document
FOR EACH ROW
DECLARE
  l_compressed_blob BLOB := TO_BLOB('1'); -- Initial value just in case...
BEGIN
  -- Compress the data.
  UTL_COMPRESS.lz_compress (src => :new.document_binary_data,
                            dst => l_compressed_blob);

  -- update the blob value
  SELECT l_compressed_blob INTO :new.document_binary_data FROM dual;

  -- *** this does not work either ***
  --:new.document_binary_data := l_compressed_blob;
END;
/


Never works with the sql loader, but works nicely when I:

insert into document (document_binary_data)
values (TO_BLOB(UTL_RAW.CAST_TO_RAW('1234567890123456789012345678901234567890')));


How can I change this? (I am on windows, but I would
also like to know how to make it work on various unix/linux
platforms). I use direct=false (conventional method), the
trigger gets fired. It's only the blob column that does not
get updated.

Your help is INVALUABLE. Thank you.

Tom Kyte
August 05, 2005 - 11:24 am UTC

not going to happen.

think about how blobs are typically placed into the database:

insert into table ( ..., blob_col ) values ( ..., empty_blob() );

Now, that is when the trigger fires -- the client gets the empty lob back and then "streams" the data into it.

That is, the LOB data *isn't there* when you insert, it comes LATER. SQLLDR for sure does it like that.

you'll need to come up with another approach, the trigger isn't going "to happen". You could use the trigger to schedule a JOB to run after the row is committed or you could run a procedure AFTER loading the data (you do have a "is_compressed" flag in this table right....)
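A sketch of the "procedure after loading" approach suggested above, assuming the document table carries the "is_compressed" flag mentioned:

```sql
-- Post-load pass: compress any rows that were loaded uncompressed.
create or replace procedure compress_loaded_documents as
begin
  for r in (select rowid rid, document_binary_data
              from document
             where is_compressed = 'N'
               for update)
  loop
    update document
       set document_binary_data = utl_compress.lz_compress(r.document_binary_data),
           is_compressed        = 'Y'
     where rowid = r.rid;
  end loop;
  commit;  -- after the loop: a FOR UPDATE cursor cannot fetch across a commit
end;
/
```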


George Sogrwig, August 22, 2005 - 9:14 am UTC

Hello again. This is exactly what I do now. I run
a stored procedure to compress the BLOB data after
the import has finished.

What I am not really sure is if this saves space on
the hard disk, or if the space is never freed after
compressing the blob data. (Sorry if this is a faq
question, but I am a newbie on oracle)

Thank you very much.

Tom Kyte
August 23, 2005 - 3:48 am UTC

The space is managed in the database. As you load the blob (uncompressed) it'll allocate some storage in the lobsegment. You will then read this loaded data and rewrite it (smaller) somewhere else in that lobsegment. The space that was occupied by your original blob will become available over time (how fast depends on your pctversion setting for blobs) for subsequent lobs loaded into your table.

Good stuff...this gets 6 stars.

Bill, September 08, 2005 - 5:29 pm UTC

Thanks Tom for the insight. Also, thanks to Piotr for sharing his Java and PL/SQL source in the third post.

is the use of empty_blob mandatory?

A reader, June 13, 2006 - 6:14 pm UTC

It appears that inserting or updating a blob column as NULL is a valid statement. Why would/should one use empty_blob() instead of NULL? What is the effect?

Tom Kyte
June 14, 2006 - 8:35 am UTC

NULL is the absence of a value.

an empty_blob() is a blob of zero bytes in length.

You cannot "write", using dbms_lob.write and so on, to a non-existent blob, you may however write to an empty one.
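A small sketch of the difference, using a hypothetical table t(id number, data blob):

```sql
declare
  l_blob blob;
begin
  -- empty_blob() gives back a zero-length locator you can write into
  insert into t (id, data) values (1, empty_blob())
  returning data into l_blob;

  dbms_lob.write(l_blob, 3, 1, hextoraw('AABBCC'));  -- works

  -- had the column been NULL there would be no locator at all,
  -- so there is nothing to pass to dbms_lob.write
end;
/
```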

very good -why i am not able to save space here

Ajeet, October 10, 2006 - 5:32 am UTC

Hi Tom,

I created a table as a copy of an existing table which has a LOB data type along with RAW data types also.

then I tried to compress the LOB-typed column using utl_compress..
below is the complete test case.

SQL> create table t1 as select * from svsuser.SIGNMAINTENANCE
  2  where rownum < 10 and sign is not null;

Table created.

SQL> declare
  2    l_lob blob;
  3  begin
  4    for x in ( select * from t1 ) loop
  5      select utl_compress.LZ_COMPRESS (x.sign) into l_lob from dual ;
  6      insert into t2 ...
  7      commit ;
  8    end loop ;
  9  end ;
 10  /

PL/SQL procedure successfully completed.

SQL> select sum(bytes)/1024 from dba_segments where segment_name = 'T1' ;

SQL> select sum(bytes)/1024 from dba_segments where segment_name = 'T2' ;

please note t2 is a copy of t1.


SQL> Desc t1
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SIGNID                                             NVARCHAR2(30)
 SIGNGRPID                                          NVARCHAR2(30)
 SIGN                                               BLOB
 SALTVAL                                            RAW(8)
 ENCKEYS                                            RAW(48)
 EFFECTIVEDATE                                      TIMESTAMP(6)
 EXPIRYDATE                                         TIMESTAMP(6)
 ACTIVE                                             CHAR(1)
 DELETED                                            CHAR(1)
 VERIFIED                                           CHAR(1)
 LASTACTIVITY                                       NVARCHAR2(50)
 CREATEDBY                                 NOT NULL NVARCHAR2(50)
 CREATEDDATE                               NOT NULL TIMESTAMP(6)
 MODIFIEDBY                                         NVARCHAR2(50)
 MODIFIEDDATE                                       TIMESTAMP(6)
 VERIFIEDBY                                         NVARCHAR2(50)
 VERIFIEDDATE                                       TIMESTAMP(6)
 REMARKS                                            NVARCHAR2(650) 

Tom Kyte
October 10, 2006 - 8:07 am UTC

T1 and T2 are the names of table segments.

lobs over 4000 bytes are stored out of line in their OWN segments.

You measured the table and the table was not "compressed"

ops$tkyte%ORA10GR2> select segment_name from user_segments;

no rows selected

ops$tkyte%ORA10GR2> create table t ( x clob );

Table created.

ops$tkyte%ORA10GR2> select segment_name from user_segments;
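Since the lobsegment carries its own system-generated name, a query along these lines shows where the LOB bytes actually live; a sketch:

```sql
-- user_lobs maps each LOB column to its segment; join to user_segments
-- to measure the lobsegment's size rather than just the table's.
select l.table_name, l.column_name, s.bytes/1024 as kbytes
  from user_lobs l, user_segments s
 where s.segment_name = l.segment_name
   and l.table_name in ('T1', 'T2');
```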


CLOB compression

tom, December 22, 2006 - 7:03 pm UTC

I have the following table with a clob
SQL> desc smown.ms_usagesoapmsgs
Name Null? Type
----------------------------------------- -------- ----------------------------

How can I compress the clob of this table? It is 200k rows with 7 GB size.
Tom Kyte
December 24, 2006 - 8:53 am UTC

you do not, unless you "do it yourself" (eg: use utl_compress in 10g or some other external compression routine on the data before you stick it in there).

And remember, you'll be using a BLOB not a CLOB at that point.


tom, December 22, 2006 - 10:52 pm UTC

) ;

Above is the ddl

Freeing implicit Temp LOBS

David L., February 28, 2007 - 12:25 pm UTC

Similar to Glenn A's posting above, I've also set up a decompression-on-the-fly view :

create or replace view v as
select logid, compressor.clob_decompress( b1 ) as c1 from tab1;

select * from v;
returns the desired data, but each time I select from the view, the v$temporary_lobs.NOCACHE_LOBS count increases for each row returned. The view will be queried through a Python client via ADODB. How can I free the temporary lobs created here?
Tom Kyte
February 28, 2007 - 3:07 pm UTC


Freeing implicit Temp LOBS

David L., February 28, 2007 - 8:24 pm UTC

Sorry, I'm still not clear. I see examples in this and other threads using dbms_lob.freetemporary to free temporary lobs that are explicitly created and assigned to a variable (like in a PL/SQL block). But it appears that implicit temporary lobs are created (with session duration) when you use a sql function returning a LOB in a select statement. For example, each time I execute the following statement in SQL Navigator or from a script, CACHE_LOBS increases by 1 till the session ends:

select substr(to_clob(lpad('X',2000,'X')), 1, 100) from dual;

The same thing happens using decompress-on-the-fly view.

The goal here is to have the client script use a simple select statement on the view, retrieving the result as an ADODB Recordset. It doesn't appear as if I can call dbms_lob.freetemporary on the recordset Fields. Is there some way to use it in a view definition?

Tom Kyte
February 28, 2007 - 10:08 pm UTC

if you were to free the data as it was returned - it could not be returned - think about it.

the client sort of has to say "I am done with this, it is ok to let it go"

until they do that, we have to keep it around, we don't know when the client is DONE with it.

Freeing Implicit Temp LOBS - Correction

David L., February 28, 2007 - 8:39 pm UTC

Oops, above example increases CACHE_LOBS by 2 each time it runs, not 1.

Freeing Implicit Temp LOBS

David L., February 28, 2007 - 10:56 pm UTC

Yep, makes sense. But is this 'so-long-thanks-for-all-the-LOBS' call possible in SQL? Or ADODB code? Can you please provide an example?
Tom Kyte
March 01, 2007 - 8:28 am UTC

it is just a stored procedure call? like any other
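In other words, once the client has finished with the locator it can issue something along these lines (the bind name is only illustrative):

```sql
begin
  -- release the temporary LOB the earlier select handed back
  dbms_lob.freetemporary( :lob_locator );
end;
/
```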

LOBSEGMENT Compression in 9iR2

whizkid420, April 27, 2007 - 1:35 am UTC

Hi Tom,

This post has been very useful. In our scenario, we have this table which is used for helpdesk, bug reporting, change requests etc. In this we have a BLOB column which will store all kinds of attachments, right from zips to office documents to snapshots. Day by day the activities are increasing and this lobsegment has grown very big: from 2 gb to 40 gb (about a couple of months ago) and is now 73 gb. This is becoming unmanageable and we are running short of disk space. We are on 9iR2 currently. Given the scenario, what do you think is the best approach towards achieving max compression? Is Piotr's the only solution for us? Also, will each compress/decompress consume that much CPU?

Thanks in advance.
Tom Kyte
April 27, 2007 - 10:47 am UTC

disk is pretty darn cheap.

We have multiple terabytes of that sort of stuff here internally. Have you considered just right sizing your system?

else, yes, using utl_compress or something like it, rewriting all of your access routines (at great cost, and the subsequent performance hit...)....

Re: LOBSEGMENT Compression in 9iR2

whizkid420, April 27, 2007 - 11:10 am UTC

With respect to production, there is no issue of disk space or CPU. Our system is sized well for the next 3 years. But the disk problem comes when we want to replicate this production environment for UAT or development or any other environment. Those are not sized as per production and we have to keep increasing the SAN allocation every time we want to refresh those environments. Is there any way in which we can exclude this particular table from the export dump?

Tom Kyte
April 27, 2007 - 11:24 am UTC

well whizkid420 - why did you write what you wrote above then? I'll never understand....

you can do table level exports (skipping that table)

you can "trick it out" by putting a fine grained access control policy on it that "hides" the data for the user doing the export (eg: returns NULL for all users except the one doing the export, for whom it returns '1=0' instead)

Re: LOBSEGMENT Compression in 9iR2

A reader, April 27, 2007 - 11:35 am UTC

I was referring to manageability & disk space issue while replicating to the other environments.. just read what i wrote.. sorry for not being clear...

can you help with some links to get started with the fine grained access control policy? have never tried that but seems like a good idea.
Tom Kyte
April 27, 2007 - 11:48 am UTC

PLS-00306 in call to LZ_COMPRESS

Paulie, August 26, 2007 - 2:30 pm UTC

Hi Tom,

I'm learning a lot from this thread, and tried to implement it myself but received an error in the call to UTL_COMPRESS.LZ_COMPRESS stating "PLS-00306: wrong number or types of arguments in call to lz_compress". Pointed to line 44 of the following snippet (tried to be as concise as possible, but not sure if this is enough to work with):

image_A ORDSYS.ORDIMAGE; -- source BLOB 
image_B ORDSYS.ORDIMAGE; -- source BLOB    
image_C ORDSYS.ORDIMAGE; -- source BLOB    
image_D ORDSYS.ORDIMAGE; -- source BLOB
l_compressed_A ORDSYS.ORDIMAGE; -- destination BLOB   
l_compressed_B ORDSYS.ORDIMAGE; -- destination BLOB    
l_compressed_C ORDSYS.ORDIMAGE; -- destination BLOB
l_compressed_D ORDSYS.ORDIMAGE; -- destination BLOB
TEMP_ID RAW(32):='';
TEMP_BLOB BLOB := P_DATA; -- input from procedure  
TEMP_BLOB_LENGTH number :=P_FileSize; -- input from procedure
    INSERT INTO images ... VALUES
    -- I'm initializing the BLOBs so they're not NULL
...) RETURN IMAGE_ID into TEMP_ID;     
image_A,image_B,image_C,image_D FROM images WHERE IMAGE_ID = TEMP_ID FOR UPDATE;    

-- line 44 starts here
utl_compress.lz_compress(image_A, l_compressed_A);
utl_compress.lz_compress(image_B, l_compressed_B);
utl_compress.lz_compress(image_C, l_compressed_C);

According to the documentation, all I need is a source (image_real) and a destination (l_compressed_real), which I think I've declared correctly(?) but I'm guessing I haven't, since I do have the correct "number of arguments" so it must be the "type of argument". Maybe I can't declare the destination BLOB like this?

Can you tell what I'm doing wrong from this information?

Thanks for your help.
Tom Kyte
August 27, 2007 - 4:24 pm UTC

get this down to a tiny example - like 10 lines you should be able to do that (a complete example that just demonstrates your issue)

I find when I do that, I almost always find the problem.

typo, sorry

Paulie, August 26, 2007 - 2:43 pm UTC

doh! typo . . .

"According to the documentation, all I need is a source (image_real) and a destination (l_compressed_real)"

Should have said:

"According to the documentation, all I need is a source (image_A) and a destination (l_compressed_A)"

sorry about that

no luck

A reader, August 28, 2007 - 11:05 am UTC

( "AUDIO_ID" RAW(32),

  v_src_audio ORDSYS.ORDAudio;
  v_compressed_audio blob;
  ctx RAW(64) :=NULL;
  SELECT audio INTO v_src_audio FROM audio_tbl WHERE audio_id = p_audioID FOR UPDATE;
    dbms_lob.createtemporary(v_compressed_audio,true); -- make temp LOB
    obj.getContentInLob(ctx,v_compressed_audio); -- write from ORDAudio into LOB
    utl_compress.lz_compress(v_src_audio, v_compressed_audio); -- compress LOB
  UPDATE audio_tbl SET audio = v_compressed_audio WHERE audio_id = p_audioID; 
    dbms_lob.freetemporary(v_compressed_audio); -- free temp LOB

Warning: Procedure created with compilation errors.

SQL> show error

8/5 PL/SQL: Statement ignored
8/9 PLS-00302: component 'GETCONTENTINLOB' must be declared
9/5 PL/SQL: Statement ignored
9/5 PLS-00306: wrong number or types of arguments in call to 'LZ_COMPRESS'

10/3 PL/SQL: SQL Statement ignored
10/28 PL/SQL: ORA-00932: inconsistent datatypes: expected NUMBER got BLOB

Tried to use the example in the docs here

where it's explained how to write out an ORDAudio type to a BLOB, which seems to be necessary to use UTL_COMPRESS.

I just can't see what I'm doing wrong here.
Tom Kyte
September 04, 2007 - 2:16 pm UTC

well, the real problem will be....

that you are not storing audio anymore, you are storing a compressed file - there would be NO purpose in using this datatype anymore since you are just storing "stuff", not an audio file.

So, you would just use a blob, upon retrieval and decompression - you would be able to use it in a local variable of the audio type.

Also, bear in mind, most audio is already compressed, compressing compressed stuff doesn't typically result in anything smaller - in fact, sometimes larger.

To the reader immediately above

Chris Poole, August 30, 2007 - 7:20 pm UTC

Hi 'A Reader'

In your example you changed the name of the ORDSYS.ORDAudio variable from 'obj' in the documentation example to 'v_src_audio' but you didn't change this line:

obj.getContentInLob(ctx,v_compressed_audio);

needs to be

v_src_audio.getContentInLob(ctx,v_compressed_audio);

That's why you are getting the error

8/9 PLS-00302: component 'GETCONTENTINLOB' must be declared

And then your next problem will be not declaring the two OUT variables required to hold the mime type and format.




Paulie (I remembered to put it in), September 04, 2007 - 6:22 pm UTC

Hi Tom, Chris -

Thanks for that insight, it's very useful.

I'll give your suggestions a try, thanks again.

Paulie (forgot to put it in last time)

How can we compress an image

A reader, November 17, 2008 - 4:12 am UTC

Hi tom,

How can we compress an image using Oracle 10g?

Thanks in advance
Tom Kyte
November 18, 2008 - 7:20 pm UTC

utl_compress - documented in the plsql packages and types guide.

python clob

god_boy, June 05, 2009 - 10:27 am UTC

In Python, how do I do select, insert, update and delete against the database?

How to find if image compressed

A reader, November 01, 2009 - 10:43 am UTC


In 10gR2.
1. Could you suggest a way to determine whether an image stored in a BLOB is already compressed; is there perhaps a header string (hex, ascii) that would indicate that?
2. Is there a way to find the file type stored inside a BLOB (.doc, .jpg, .pdf, ..)?

Thanks in advance
Tom Kyte
November 09, 2009 - 12:06 pm UTC

images are compressed already, the popular image formats are compressed.

we have an image datatype, it supports these types

see that document's table of contents for more information on all of this.
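On the reader's header-string question: most common formats can be identified from their first few bytes (fetched with DBMS_LOB.SUBSTR, for instance). A sketch of the idea in Python - the signature values are standard, well-known "magic bytes", not Oracle-specific:

```python
# Map of well-known "magic byte" signatures to file types. Reading the
# first few bytes of a BLOB is enough to classify most common formats.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg (already compressed)",
    b"\x89PNG\r\n\x1a\n": "png (already compressed)",
    b"GIF8": "gif (already compressed)",
    b"%PDF": "pdf",
    b"\x1f\x8b": "gzip stream",
    b"PK\x03\x04": "zip container (also .docx/.xlsx)",
}

def sniff(blob_prefix: bytes) -> str:
    """Classify a BLOB by its leading bytes; 'unknown' if no match."""
    for magic, name in SIGNATURES.items():
        if blob_prefix.startswith(magic):
            return name
    return "unknown"
```

For example, `sniff(b"%PDF-1.4 ...")` reports a PDF, and a `\x1f\x8b` prefix indicates a gzip stream, so data with that prefix is already compressed.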

How Advance compression

Vivek Yalla, November 11, 2009 - 10:43 pm UTC

Hi tom ,

I have around 15 TB of high resolution images which I need to store in the database. I am using 11g with SecureFiles, where compression is available. I want to know how BLOBs are compressed in SecureFiles.
Tom Kyte
November 15, 2009 - 1:23 pm UTC

almost every image format is already compressed

and you cannot compress compressed data. Usually, it just gets larger if you try.

If you need to keep the high resolution, you likely "have what you have".

You don't mention the format - but try this:

compress the file using any tool you want at the operating system - can you find ANY that work? I'm thinking "probably not" - because the image is already compressed.

Jpegs for example - the only way to make them smaller is to lose some of the detail, you choose - hi-res or small.

Any update on LOB compression?

Clark Pearson, February 10, 2010 - 8:06 am UTC

Hi Tom,

I am planning to store VARCHAR(4000) columns compressed into BLOBs. I've read the full thread but note it dates back many years. I just wondered if your followup dated 4th Aug, 2005, to Doug's question, would be any different now in 2010?

Are there any other compression/decompression functions, ones that work on character data? (Haven't found any.)

On a related note, is there anything similar to DBMS_UTILITY.GET_HASH_VALUE that guarantees a unique return value? It could be a character return value instead of a numeric. (Haven't found one.)

I am currently working with a db but *may* be given an 11g one. I note that the documentation for UTL_COMPRESS looks the same in 11gR2 as it does in 10gR2.

Thanks in advance,
Tom Kyte
February 15, 2010 - 3:42 pm UTC

... I am planning to store VARCHAR(4000) columns compressed into BLOBs. ...

but that would make them larger????

you can use securefiles, which are a supertype of lobs, they can be compressed. But, since lobs are stored in chunks when out of line (at least ONE BLOCK) and have a lob index - you would find they are going to be LARGER.

utl_compress would work.

ops$tkyte%ORA10GR2> create table t ( x varchar2(4000), y blob )
  2  /

Table created.

ops$tkyte%ORA10GR2> declare
  2          l_data varchar2(4000) := dbms_random.string( 'a', 4000 );
  3          l_raw  blob;
  4  begin
  5          l_raw := utl_compress.lz_compress( cast( utl_raw.cast_to_raw( l_data ) as blob) );
  6          insert into t (x,y)
  7          values ( l_data, l_raw );
  8  end;
  9  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select length( x ), dbms_lob.getlength(y) from t;

 LENGTH(X) DBMS_LOB.GETLENGTH(Y)
---------- ---------------------
      4000                  2933

but I don't see you saving massive amounts of disk space - not at all - small strings just don't compress very well in general since there is the overhead of storing the "dictionary".
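The overhead Tom describes is easy to see with any DEFLATE-family compressor (Python's zlib here, standing in for utl_compress):

```python
import zlib

# A tiny payload: the stream header, checksum and Huffman tables cost
# more than the few bytes of redundancy that could be removed, so the
# "compressed" form is LARGER than the original.
tiny = b"hello"
assert len(zlib.compress(tiny, 9)) > len(tiny)

# A larger, repetitive payload amortizes that fixed overhead and
# compresses well.
big = b"the quick brown fox jumps over the lazy dog. " * 100
assert len(zlib.compress(big, 9)) < len(big)
```

The fixed per-stream cost is why compressing individual short strings rarely pays off, while compressing sizeable documents does.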

Clark Pearson, February 22, 2010 - 7:49 am UTC

I think in all likelihood you are right, the savings might not be worth the effort - but - I think your example is a little flawed. A random string of characters doesn't exhibit much in the way of patterns the way that real data does. I tried your example, but with 3980 characters copied from this thread in place of a random string, and the compression came in at about 50% of the original size (1932 bytes to be exact) instead of c.75% of the original size.
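Clark's observation checks out with a quick experiment (Python's zlib as a stand-in; exact ratios vary with the data):

```python
import random
import zlib

random.seed(42)  # reproducible "random" text
letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
random_text = "".join(random.choice(letters) for _ in range(4000)).encode()

# "Real" data: repetitive, pattern-heavy text (an SQL-like phrase).
real_text = (b"insert into t (x,y) values ( l_data, l_raw ); " * 90)[:4000]

random_ratio = len(zlib.compress(random_text, 9)) / 4000
real_ratio = len(zlib.compress(real_text, 9)) / 4000
# random_ratio lands around 0.7 (little structure to exploit);
# real_ratio is far smaller, because repeated patterns compress well.
```

So Tom's random-string test is close to the worst case; real character data, like the fax logs in the original question, generally fares much better.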

Also, the very first followup mentions 'inlining' the lob storage to avoid using a block for every lob and just using the actual size of the compressed data.

How is that achieved? Is it a db config parameter or an option on the CREATE TABLE statement?

Finally, I know it's not quite on the same subject, but is there anything similar to DBMS_UTILITY.GET_HASH_VALUE that guarantees a unique return value?

Many thanks,
Tom Kyte
March 01, 2010 - 6:55 am UTC

it just isn't going to be that much in real life - depending on the string. It isn't worth the effort to compress a varchar2 - it is way too small in general (using zip like, binary like compression). You'll spend more time compressing and decompressing than would offset any benefit.


ops$tkyte%ORA11GR2> create table t ( x clob );

Table created.

ops$tkyte%ORA11GR2> select dbms_metadata.get_ddl( 'TABLE', 'T' ) from dual;


   (    "X" CLOB

In general, hashing cannot return a unique value - you are mapping an infinite set of strings into a finite set, there are more possible strings than hashes - unless you make the namespace of hashes at least as big as the namespace of the strings - but then you would have hashes as big as strings :)

There are things known as "perfect hashes", but they are very very data dependent and not going to be of benefit here.
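The pigeonhole argument is easy to demonstrate with a deliberately tiny hash (a toy example, not any Oracle function):

```python
# A toy 8-bit hash: only 256 possible outputs, so among any 257 distinct
# inputs at least two MUST collide (pigeonhole principle). Real hash
# functions only make the output space huge, never collision-free.
def tiny_hash(s: str) -> int:
    return sum(s.encode()) % 256

seen = {}
collision = None
for i in range(257):
    s = f"string-{i}"
    h = tiny_hash(s)
    if h in seen:
        collision = (seen[h], s)
        break
    seen[h] = s

# A collision is guaranteed before the loop finishes all 257 strings.
```

Scaling the output space up (as DBMS_UTILITY.GET_HASH_VALUE or a cryptographic hash does) makes collisions rare, but the pigeonhole principle means they can never be ruled out.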

Compressing OUT OF LINE BLOBs

Daniel, March 14, 2010 - 9:13 am UTC

So if I understand correctly ... if you're storing BLOBs OUT OF LINE as SecureFiles and using Advanced Compression on those out of line BLOBs, there is no benefit at all and probably the BLOBs will be larger!?!?

Is that correct?
Tom Kyte
March 15, 2010 - 11:05 am UTC

where did I say that???????? really - where did you get that impression from?

We were discussing recently REALLY TINY, TEENY TINY, SMALL INLINE STORED lobs - specifically NOT discussing big ones.

I said "small things will not compress very much, if at all and may in many cases become larger"

We were talking small and only small - where did you see anything to indicate that large blobs won't compress well - because that is not true in general. A JPG - won't compress well, it is already compressed. A word document - compresses very well.

Probably read this wrong

A reader, March 24, 2010 - 5:34 am UTC

Followup February 15, 2010 - 3pm Central time zone:
... I am planning to store VARCHAR(4000) columns compressed into BLOBs. ...

but that would make them larger????

you can use securefiles, which are a supertype of lobs, they can be compressed. But, since lobs are stored in chunks when out of line (at least ONE BLOCK) and have a lob index - you would find they are going to be LARGER.

I guess I read that as "You can compress your securefile LOBs but because of the associated LOB index it will actually make the LOB larger". Sorry if I misunderstood that one.

But, if I store unstructured content inside the database then I have two downsides. First, loss of storage space, because everything is allocated in CHUNK-sized pieces, so there is always wasted storage (if I have 1 million 1k files to store, then I need 1 million 8k chunks, wasting roughly 7k per LOB stored in the database). Second, how to distribute these files around the world: can Oracle ASM with SecureFiles be used like a distributed file system and store content around the globe for quicker access? Maybe it's a bit off topic, but if you have any information on this it would be helpful.

Sorry again about the confusion
Tom Kyte
March 26, 2010 - 10:56 am UTC

they were going to take a single varchar column and store it as a compressed inline blob - which would not typically work, you need something sizeable to compress.

using securefiles stored OUT OF LINE would make it even worse.

How to fixed flag in Oracle

Sourabh, May 18, 2010 - 7:54 am UTC

In my job I am facing one issue. Whenever a new product comes into the product table I want to assign the flag 'N', and if that product already exists in the product table I assign 'Y'; if the same product comes in from a new state, it sees that it is mapped with the 'Y' flag, automatically maps it to yes, and I never see it in the product table again. The whole idea behind the request is to have all new product alerts automatically mapped to no, and any existing codes that are already yes automatically mapped to yes when they come in. My question is how to fix this. Right now I am doing this manually, but it takes too much time and I don't want to keep doing it manually. Can anybody help me with that? My ID is
Tom Kyte
May 24, 2010 - 9:26 am UTC

problem with your thought process, what if something comes into that table and you say "N" (product did not exist) but then someone inserts product (so it should be "Y")

Basically - you do NOT want to do what you say you want to do.

What you want to do is OUTER JOIN this table to the product table when you retrieve the row from this table to see if the product exists.

you do not want this flag - really - you do not.

If it is taking too long, you are doing something wrong, for a given row - it should be nearly instantaneous. So, show us what you are doing and we'll help you correct it.

utl_compress for xmltype column, not as good as expected

julio, August 09, 2010 - 4:23 am UTC

I am trying the utl_compress approach for an xmltype column.
Average value for the column is around 250 Kb

xml is a good candidate for compression, but I don't get a good performance:
Original Length : 264266
Compressed Length : 129944

50% compression ratio - it isn't worth the effort.
(I tried quality = 9 and it's the same crap)

If I take that same xml, export it to file, and compress it with winrar, I get 11Kb : 3% compression ratio!!

Is there something wrong with utl_compress?
The performance difference is so huge that it makes it barely usable.
Tom Kyte
August 09, 2010 - 1:48 pm UTC

... (I tried quality = 9 and it's the same crap)

ummm, bye.

I answer myself: It's a known bug

julio, August 09, 2010 - 6:26 am UTC

At last I found it is a known bug in this version of Oracle


Fixed in version 11.2

The bug seems to be limited to the oracle lz_compress implementation of the algorithm, as I have tested that the lz_uncompress procedure works fine with external (java) 20:1 ratio compressed blobs.

I spent quite some time looking for what I had done wrong, when a simple footnote in the online Oracle 10 utl_compress documentation stating the poor performance of utl_compress would have saved me quite some time of web surfing and SQL testing.
As bad as it is shipping utl_compress in this poor shape, it is far worse not to point it out in the documentation.
Tom Kyte
August 09, 2010 - 1:54 pm UTC

... I spent quite some time looking for what I had done wrong, when a simple footnote in the online Oracle 10 utl_compress documentation stating the poor performance of utl_compress would have saved me quite some time of web surfing and SQL testing.

that is utter nonsense.

You do a test
you observe results

there was no need for a note - give me a break.

will utl_compress in v 11.2 uncompress a BLOB compressed in v.

Mike P, August 14, 2010 - 11:36 am UTC

utl_compress functionality suggested in original answer looks quite useful!
But before 11.2 it gives a bad compression ratio due to bug #7337717 (fixed in 11.2).

N.B.: I tested this on BLOB holding about 700Mb of text data (flat handoff file). UTL_COMPRESS gives 1:3 compression rate, comparing to 1:10 by WinZip in "fastest mode" and 1:16 by WinRAR in "good mode".

Well, 1:3 is better than 1:1, so we are going to implement utl_compress even while we are on 10g.
But we plan to migrate to v 11 next year.

So, the concern is whether UTL_COMPRESS in 11.2 (with the bug fixed) will be able to uncompress BLOBs that were compressed by UTL_COMPRESS in 10G (affected by the bug) or not?
I understand that we'll need to re-compress existing data manually to get better rate... but will we be able to so using only 11.2's UTL_COMPRESS?

Thanks & regards,
Tom Kyte
August 19, 2010 - 12:51 am UTC

backwards compatibility is a big thing to us, yes, it will successfully uncompress the legacy data.
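This is also a general property of DEFLATE-family compressors: the compression level (or a bug that weakens the match search) changes how small the output is, not the stream format, so one decompressor reads them all. Illustrated with Python's zlib (a stand-in, not Oracle code):

```python
import zlib

data = b"handoff file contents " * 500

fast = zlib.compress(data, 1)   # weak compression: bigger output
best = zlib.compress(data, 9)   # strong compression: smaller output

# One decompressor handles both streams; the level is not a different
# format, it only governs how hard the compressor searched for matches.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
assert len(best) <= len(fast)
```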

A reader, November 15, 2010 - 10:15 pm UTC

I have been using the PL/SQL-wrapped Java compression utility to great effect. We have some very large extracts - 1 GB uncompressed - that compress extremely well, to 50 MB, using the Java compress (we are on 10g, so PL/SQL utl_compress does not really work).

I tried to improve the code provided by adding a new Java procedure, wrapped in PL/SQL, to append 32k RAWs to a compressed BLOB in place.

This was to remove the need for the large uncompressed BLOB. It appears that PL/SQL Java wrapping in the database does not allow non-static methods. Any way around this? The BLOB needs to be held on the Java side in order to add more 32k chunks to the end.
Tom Kyte
November 16, 2010 - 3:52 am UTC

have you considered using an array - and have the java stuff return a "handle" to the lob or passing the lob locator back to plsql so it can pass it back to you again?
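The streaming idea behind Tom's suggestion - compress fixed-size chunks incrementally so the full uncompressed BLOB never has to exist - can be sketched with Python's zlib (illustrative only; the Java side would keep the equivalent Deflater state between calls):

```python
import zlib

def compress_chunks(chunks):
    """Feed arbitrary-size chunks through one compressor object; only
    the compressor's small internal state lives between calls - the
    full uncompressed input is never materialized in one buffer."""
    co = zlib.compressobj(9)
    out = bytearray()
    for chunk in chunks:
        out += co.compress(chunk)
    out += co.flush()
    return bytes(out)

chunks = [(b"row data %03d " % i) * 1000 for i in range(10)]  # ten ~14k chunks
compressed = compress_chunks(chunks)
assert zlib.decompress(compressed) == b"".join(chunks)
```

The handle Tom mentions would play the role of the `co` object here: whatever holds the compressor state must survive between the 32k append calls.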

Streaming is what you want

Piotr Jarmuz, November 16, 2010 - 2:38 pm UTC

Hi, I am the original author of this Java-PL/SQL thing. I am flattered that it has lived for so many years. I DO believe my idea and code were wrong - seriously wrong - because of the size limit: it handled only some 16k, under the somewhat artificial 32k limit for varchar2/raw. In short, it was wrong!

I am going to provide the C code that does better than Oracle utl_compress (no bugs), and an alternative version in Java which does comparably.

Best regards,


LOB with deletion of rows

Tom, January 27, 2011 - 2:11 pm UTC

In 10g, the package utl_compress ... will this also regain space back from LOB content being removed?

we have a table with X million rows, each row has a LOB column containing a file. Various rows from this table are deleted all the time BUT we don't get back the space from the LOB data that is removed!

I wanted to know if utl_compress (or some other way) could compress or regain this space.

btw, we also ran into an issue (ORA-600) because (we think) the LOB index became too large and errors out... we would also love to rebuild or compress this - something like running a job on the table that removes/reclaims dead data and rebuilds the LOB indexes (ORA-600 [25027]). Have you seen a LOB index get too large?
Tom Kyte
February 01, 2011 - 3:20 pm UTC

... BUT we dont get
back the space from the LOB data that is removed! ...

YES YOU DO!!! It gets reclaimed for the segment (but not given back to dba_free_space - that takes a reorganization)

space is reused - how it is reused depends. There is pctversion OR retention in use here. Both are documented.

If you went and compressed all of your data - and you were using retention - you would initially reclaim .... NOTHING.

Because we retain the old versions for as long as your retention policy told us to.

After a while - after the retention period passed - you would get the space BACK for FUTURE inserts (but it would not be "reclaimed" - segments do not shrink ever without a reorganization).

If you are hitting an ora-600 please contact support immediately. An index getting large should NEVER ora-600.

Unable to open compressed files in winzip

vaibhav, March 22, 2012 - 6:35 am UTC

I'm using this block of code to compress files residing on my linux box.

DECLARE
  in_filename VARCHAR2(100) := 'somefile.dat';  -- source file name
  src_file    BFILE;
  v_content   BLOB;
  v_blob_len  INTEGER;
  v_file      utl_file.file_type;
  v_buffer    RAW(32767);
  v_amount    BINARY_INTEGER := 32767;
  v_pos       INTEGER := 1;
BEGIN
  src_file := bfilename('MY_DIR', in_filename);
  dbms_lob.fileopen(src_file, dbms_lob.file_readonly);
  v_content := utl_compress.lz_compress(src_file, 9);
  v_blob_len := dbms_lob.getlength(v_content);
  -- open in binary write mode
  v_file := utl_file.fopen('MY_DIR', in_filename || '.zip', 'wb', 32767);
  WHILE v_pos < v_blob_len LOOP
    dbms_lob.read(v_content, v_amount, v_pos, v_buffer);
    utl_file.put_raw(v_file, v_buffer, TRUE);
    v_pos := v_pos + v_amount;
  END LOOP;
  IF utl_file.is_open(v_file) THEN
    utl_file.fclose(v_file);
  END IF;
  dbms_lob.fileclose(src_file);
END;
/
A file of 9MB gets compressed to approx 6MB which is then emailed to users via utl_smtp.
Users are unable to open this file using winzip upon email receipt. Winzip raises an error saying: "File is unrecoverable / not a valid zip file". The file can be opened using WinRaR though.
However, as majority of the users have winzip installed on their machines, attachments fail to open.

Could you let me know what am i doing wrong here, and how to rectify it?
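A likely explanation (my reading, not confirmed in the thread): UTL_COMPRESS.LZ_COMPRESS produces a gzip-format stream (RFC 1952), not a ZIP archive. Naming the output file .zip makes strict ZIP tools reject it, while format-sniffing tools like WinRAR open it anyway; naming it .gz should fix the problem. The distinction, sketched in Python:

```python
import gzip
import io
import zipfile

payload = b"report line\n" * 1000

# A gzip-format stream starts with the 1F 8B magic bytes (RFC 1952).
gz = gzip.compress(payload)
assert gz[:2] == b"\x1f\x8b"

# ZIP is a different container (with "PK" signatures); a ZIP tool
# rejects a gzip stream no matter what the file is named.
assert not zipfile.is_zipfile(io.BytesIO(gz))

# gzip-aware tools read it back fine - so the extension should be .gz.
assert gzip.decompress(gz) == payload
```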


Sukhmeet, July 10, 2012 - 12:49 pm UTC

Hi Tom,
I want the deflate compression algorithm for compressing my BLOB, as utl_compress.lz uses the Lempel-Ziv algorithm. From my table, a subscriber accepts only deflate-compressed BLOBs.

will u plz hep?

Tom Kyte
July 11, 2012 - 5:18 pm UTC

I couldn't find U, sorry.

have your client program compress it then - or find some java routines that do that and use java in the database.
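One terminology note that may help here: DEFLATE *is* a Lempel-Ziv (LZ77) algorithm combined with Huffman coding; what usually differs between tools is the wrapper around the DEFLATE stream (raw, zlib, or gzip). If the subscriber expects a raw DEFLATE stream, client-side code would produce it like this (Python sketch; in Java, `new Deflater(level, true)` yields the same raw stream):

```python
import zlib

data = b"payload for the subscriber " * 200

# Raw DEFLATE stream (no zlib/gzip header or checksum): negative wbits.
co = zlib.compressobj(9, zlib.DEFLATED, -15)
raw_deflate = co.compress(data) + co.flush()

# Decompressing requires the matching "raw" setting.
do = zlib.decompressobj(-15)
assert do.decompress(raw_deflate) + do.flush() == data

# For contrast, the default wrapper puts a zlib header (0x78 ...) and
# an Adler-32 trailer around the very same DEFLATE data.
wrapped = zlib.compress(data, 9)
assert wrapped[:1] == b"\x78"
```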

UTL_COMPRESS and Advanced Compression Option

Sergey, November 21, 2018 - 1:55 pm UTC

Does utl_compress require the Advanced Compression Option?

I searched MOS and other authoritative sources and found no definitive information on this question, only conflicting claims from non-authoritative sources.

Can you help?
Connor McDonald
November 27, 2018 - 4:19 am UTC

I've checked internally.

The routine is available for use in all versions of Oracle database without ACO.

Bon-Minh Lam, November 16, 2022 - 3:08 pm UTC

Hello, I have a follow-up question on compressing CLOB column in a table.
I assume that there is a program to do the compression, and let's assume that it is UTL_COMPRESS. Looking at the signatures of its subprograms in the 12c documentation, the src LOB is always a BFILE, RAW or BLOB.
So somehow the CLOB would have to be converted to a BLOB first. But when I look at DBMS_LOB.CONVERTTOBLOB (again, another assumption) the procedure needs a few inputs such as BLOB_CSID and LANG_CONTEXT.
Now my question: how does Oracle ensure that during this chain of conversion - CLOB to BLOB, BLOB to compressed, and the reverse - no information is lost?

Of course, I may be worrying too much and Oracle just dumps the CLOB somehow to a file, compresses the file, and puts it back into the CLOB column.
Connor McDonald
November 21, 2022 - 11:56 am UTC

how does Oracle ensure that during this chain of conversion

Without being too flippant, that's basically the same as saying:

"I don't want to compress my Word document with WinZip because how do I know WinZip will be able to decompress it?"

There are plenty of lossless compression algorithms - and it's our job, of course, to make sure that they work as intended.
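The encoding half of the question can also be illustrated outside the database: the chain is lossless as long as the same character set is used for the CLOB-to-BLOB conversion and back, which is exactly what DBMS_LOB.CONVERTTOBLOB's blob_csid and lang_context arguments pin down. A Python sketch of the round trip:

```python
import zlib

# A "CLOB" containing non-ASCII characters.
clob_text = "r\u00e9sum\u00e9 - na\u00efve fa\u00e7ade \u263a " * 100

# CLOB -> BLOB: encode with an explicit character set (the analogue of
# the blob_csid argument to DBMS_LOB.CONVERTTOBLOB).
blob = clob_text.encode("utf-8")
compressed = zlib.compress(blob, 9)

# Reverse chain: decompress, then decode with the SAME character set.
round_trip = zlib.decompress(compressed).decode("utf-8")
assert round_trip == clob_text  # lossless end to end
```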

More info here

More to Explore


Complete documentation on Securefiles and Large Objects here