how to extract BLOB data back out again

Question and Answer

Tom Kyte

Thanks for the question, Peter.

Asked: March 23, 2004 - 10:00 am UTC

Last updated: November 12, 2005 - 10:46 am UTC

Version: 9.2

Viewed 10K+ times

You Asked

My BLOB data consists of image files (GIF, JPEG, etc...), Word documents, spreadsheets, PDF files, etc.
How do I extract these files back out into a directory for viewing?

and Tom said...

you would write a client application in C, Java, VB, whatever -- see

http://docs.oracle.com/cd/B10501_01/appdev.920/a96591/toc.htm

if you have mod_plsql and Apache, you can use PL/SQL and a web browser easily:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:232814159006#6891533553691
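
For the mod_plsql route, the usual shape of such a download procedure is sketched below (a minimal sketch only -- the MY_DOCUMENTS table and its CONTENT/MIME_TYPE columns are hypothetical, and it assumes a mod_plsql DAD is already configured to invoke the procedure):

create or replace procedure get_blob( p_id in number )
as
    l_blob  blob;
    l_mime  varchar2(100);
begin
    select content, mime_type
      into l_blob, l_mime
      from my_documents
     where id = p_id;

    -- send the mime type and content length, then stream the blob to the browser
    owa_util.mime_header( l_mime, false );
    htp.p( 'Content-length: ' || dbms_lob.getlength( l_blob ) );
    owa_util.http_header_close;
    wpg_docload.download_file( l_blob );
end;
/

The browser then receives the BLOB as an ordinary HTTP download -- the browser is the "client application" doing the writing.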


Rating: (9 ratings)

Comments

how to extract blob data back out again

Peter, March 24, 2004 - 8:32 am UTC

None of your examples does me any good. I need a way -- or at the least Oracle needs to add an enhancement to PL/SQL -- to extract a BLOB with a simple SELECT statement. How hard could it be?


Tom Kyte
March 24, 2004 - 9:34 am UTC

plsql runs on the *server*
plsql can write to the *server*
plsql cannot in any way shape or form write to the *client*

plsql can deliver to the client easily so the client can write to the client.

(think about it -- what would the implications be of the database server being able to write to just any client? YOU NEED a client application -- the easiest would be your BROWSER, as you probably have one of those)
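
To illustrate the "plsql can write to the *server*" point, one possible sketch follows (the EXPORT_DIR directory object and the table/column names are hypothetical assumptions, and it assumes a release whose UTL_FILE supports PUT_RAW):

create or replace procedure blob_to_server_file( p_id in number, p_filename in varchar2 )
as
    l_blob    blob;
    l_output  utl_file.file_type;
    l_buffer  raw(32767);
    l_amount  binary_integer;
    l_pos     integer := 1;
    l_length  integer;
begin
    select content into l_blob from my_documents where id = p_id;
    l_length := dbms_lob.getlength( l_blob );

    -- EXPORT_DIR must be a directory object the *server* can write to
    l_output := utl_file.fopen( 'EXPORT_DIR', p_filename, 'wb', 32767 );

    -- read the blob in 32k chunks and write each chunk as raw bytes
    while l_pos <= l_length
    loop
        l_amount := 32767;
        dbms_lob.read( l_blob, l_amount, l_pos, l_buffer );
        utl_file.put_raw( l_output, l_buffer, true );
        l_pos := l_pos + l_amount;
    end loop;

    utl_file.fclose( l_output );
end;
/

The file lands on the database server's file system, not on your PC -- which is exactly the distinction being made above.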

how to extract BLOB data back out again

peter, March 25, 2004 - 8:37 am UTC

What if users want a quick and dirty way of extracting files from a BLOB back out again (just like a SELECT statement...), either from the client or the server?

All that I want is a SIMPLE WAY (nothing more... nothing less...) to extract BLOB data back out again. Why make it difficult...

We have SQLLDR to load MASSIVE DATA into the database. Why not have a 'BLOBEXTRACT' or 'BLOBLOADER' (not to be confused with the export/import commands) to extract BLOB data back out again or to load BLOB data back in again?



Tom Kyte
March 25, 2004 - 9:25 am UTC

blobs are BINARY DATA -- think about it.

what tools does a client have...... hmmmm.... sqlplus. what does sqlplus do? binary? no, it displays text.


sqlldr loads blobs (so there is a blobloader)

sorry, what you are looking for just "doesn't exist". a client needs to handle BINARY data.

now, if you want a clob out -- sqlplus does that, binary -- not possible.
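
For the CLOB case, a bare-bones SQL*Plus spool along these lines is usually enough (a sketch only; T2 with a CLOB column Y is the hypothetical table used in the examples later in this thread):

set long 1000000
set longchunksize 32767
set pagesize 0
set trimspool on
set feedback off
spool doc.txt
select y from t2 where x = 1;
spool off

That works because a CLOB is text and SQL*Plus displays text; there is no equivalent for binary data.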

TOAD can be used

Roger, March 25, 2004 - 11:26 am UTC

TOAD is a client that handles BLOB data and can create files on the client computer. You can write a select statement in the SQL Editor, run it, and in the result grid right-click and choose Export Blobs. One file per row will be created. There are also other ways to create files from BLOB columns, and to load a file into a BLOB column.

Tom Kyte
March 25, 2004 - 9:13 pm UTC

TOAD is a piece of client software; MANY clients can do this (as I've been trying to say)....

Is this a database issue?

Susan, April 02, 2004 - 3:34 pm UTC

Tom,

We store image files in a LONG RAW column (our driver doesn't support BLOBs; we'll be changing soon) and view them as JPEGs from our application. We discovered that some of the JPEGs were not viewable by our app. I exported/imported the data, and then our app could view them. After reading this thread I tried viewing the problem images with TOAD and could (I don't think it's really doing an export, because I'm doing this from a user that doesn't have export rights). I'm not sure now if this is a database problem or if it relates to how we retrieve the image into our application. We're running an 8.1.7 database. Thanks for any suggestions.

Tom Kyte
April 02, 2004 - 3:47 pm UTC

What driver doesn't support BLOBs (available in Oracle since version 8.0, a long long time ago)?

Are you using "strings" to export the long raws? if so, you are getting some character set conversion issues.

what version of the DB are you using?
what is its character set
what is your CLIENTS character set (nls_lang)

Not sure..

Susan, April 02, 2004 - 4:26 pm UTC

Microsoft ODBC for Oracle :(

I'm not sure what you mean by using "strings" to export the data. We upload from the db to the app in an ASP page that selects the image column and then uses a GetChunk method to "read" the data from the recordset.

our NLS_LANGUAGE setting on the database is AMERICAN
on the client (webserver) it's "NA" (in the registry)

we've got 600 images and we're only seeing this problem on 3 or 4 of them.

I'm guessing it's a problem with our ASP method b/c it's happening on both devl and production; we can't reproduce it yet. The developer is convinced it's a db problem b/c deleting the image and reloading the XML file (sometimes more than once) fixed the problem. Also, as I mentioned, export/import "fixed" the problem.



Tom Kyte
April 02, 2004 - 7:21 pm UTC

you need to use singe that is "RAW", not a "string" -- strings (varchar's, char's) are subject to character set conversion.

american is the language, the character set is something more obscure like US7ASCII...


if just delete + reload "fixed it", highly unlikely to be a "db problem". something like that leads me more to "application issue"

sorry

Susan, April 02, 2004 - 4:34 pm UTC

forgot the version 8.1.7.3 (we'll be upgrading soon to 9i)

Tom Kyte
April 02, 2004 - 7:21 pm UTC

also, take the problem images

load them, dump them and compare them -- see what is different.

sorry still not getting it

Susan, April 04, 2004 - 12:10 pm UTC

what does singe mean?

<quote>
you need to use singe that is "RAW", not a "string"
</quote>

we're using an ado method getChunk that returns binary data

load the problem files (that sometimes fixes the problem), dump them, compare them -- compare those binary files to what?

I think people are pointing at the database b/c reloading or export/import fixes the problem; without touching the application code, they're assuming there's some type of corruption or storage problem on the db side

Thanks for your help Tom.

Tom Kyte
April 05, 2004 - 9:36 am UTC

compare the unloaded files to the files you loaded -- see what is "different" about them.


there isn't a corruption issue on the DB side. I'll point the finger at the developed code (and reloading 'works' because the program is loaded into a different place in memory) OR a configuration issue (eg: not really using a binary interface after all)


I remember working with a C programmer. there was this huge static variable declared but not used anywhere. When asked "why" he responded -- "it is well known that compilers have bugs, the solution is to occasionally define really large variables when you hit a compiler bug. you define the variable as big as you need until the problem goes away"

Flabbergasted I was -- of course what they had done was just move memory around enough so that the buffer overruns they had all over the place didn't corrupt "real" memory in such a way as to crash the program. Here they were just corrupting big chunks of static memory.
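
Going back to the "load them, dump them and compare them" suggestion: if you would rather do the compare inside the database than with an OS diff tool, one possible sketch is below (purely illustrative -- it assumes the image is in, or has been copied into, a BLOB column, that a directory object IMG_DIR points at the original files, and the table/column names are hypothetical):

declare
    l_stored  blob;
    l_file    bfile := bfilename( 'IMG_DIR', 'problem_image.jpg' );
    l_temp    blob;
begin
    select img into l_stored from images where id = 42;

    -- pull the original OS file into a temporary blob...
    dbms_lob.createtemporary( l_temp, true );
    dbms_lob.fileopen( l_file, dbms_lob.file_readonly );
    dbms_lob.loadfromfile( l_temp, l_file, dbms_lob.getlength( l_file ) );
    dbms_lob.fileclose( l_file );

    -- ...and compare it byte for byte with what is stored in the table
    if dbms_lob.compare( l_stored, l_temp ) = 0 then
        dbms_output.put_line( 'stored copy matches the file' );
    else
        dbms_output.put_line( 'stored copy differs from the file' );
    end if;

    dbms_lob.freetemporary( l_temp );
end;
/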

Export Import and blob

Mahesh Kumar, April 14, 2004 - 2:46 am UTC

Hi Tom

Please let me know how BLOBs and CLOBs behave with the EXPORT/IMPORT utilities. In simple words, when I have an export of a table containing BLOB or CLOB columns, how will they be imported into the database?

Thanx & Regards



Tom Kyte
April 14, 2004 - 8:30 am UTC

conceptually you can think "they are like numbers, dates or strings -- they just go out and come back in"

so, in general, they are "just done"

physically however, there are some differences....

o a table with a lob is a multi-segment object that has more than one tablespace specifier (like a partitioned table, iot with overflow and so on).  So, if you take an exp from database1 and you try to import into database2 -- you had BETTER have the same tablespace names in place, else the CREATE will fail.  Oracle will only rewrite the tablespace name in a single segment object upon import.

what that means is if you execute this on db1:

create user X ... default tablespace bar;

create table t1 ( x int, y varchar2(200) ) tablespace foo;
create table t2 ( x int, y clob ) tablespace foo;


and export it -- and try to import it on db2 where you have:

create user X .... default tablespace ABC;

and db2 does not have tablespaces FOO and BAR, the CREATE of T1 will succeed, the create of T2 will fail. You'll have to PRECREATE t2.  Consider:

ops$tkyte@ORA9IR2> create tablespace foo datafile size 1m;
Tablespace created.
 
ops$tkyte@ORA9IR2> create tablespace bar datafile size 1m;
Tablespace created.
 
ops$tkyte@ORA9IR2> create user a identified by a default tablespace bar;
User created.
 
ops$tkyte@ORA9IR2> grant create session, create table, unlimited tablespace to a;
Grant succeeded.
 
ops$tkyte@ORA9IR2> @connect a/a

a@ORA9IR2> create table t1 ( x int primary key, y varchar2(4000) ) tablespace foo;
Table created.
 
a@ORA9IR2> create table t2 ( x int primary key, y clob ) tablespace foo;
Table created.
 
a@ORA9IR2> host exp userid=a/a 'tables=(t1,t2)'
 
Export: Release 9.2.0.4.0 - Production on Wed Apr 14 08:19:59 2004
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
 
About to export specified tables via Conventional Path ...
. . exporting table                             T1          0 rows exported
. . exporting table                             T2          0 rows exported
Export terminated successfully without warnings.
 
a@ORA9IR2> @connect /

ops$tkyte@ORA9IR2> drop user a cascade;
User dropped.
 
ops$tkyte@ORA9IR2> drop tablespace foo;
Tablespace dropped.
 
ops$tkyte@ORA9IR2> drop tablespace bar;
Tablespace dropped.
 
ops$tkyte@ORA9IR2> create user a identified by a default tablespace users;
User created.
 
ops$tkyte@ORA9IR2> grant create session, create table, unlimited tablespace to a;
Grant succeeded.
 
ops$tkyte@ORA9IR2> @connect a/a

a@ORA9IR2> host imp userid=a/a full=y
Import: Release 9.2.0.4.0 - Production on Wed Apr 14 08:20:02 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
 
Export file created by EXPORT:V09.02.00 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing A's objects into A
. . importing table                           "T1"          0 rows imported

<b>Import rewrote the CREATE TABLE T1 to use the default tablespace but...</b>

IMP-00017: following statement failed with ORACLE error 959:
 "CREATE TABLE "T2" ("X" NUMBER(*,0), "Y" CLOB)  PCTFREE 10 PCTUSED 40 INITRA"
 "NS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABL"
 "ESPACE <b>"FOO"</b> LOGGING NOCOMPRESS LOB ("Y") STORE AS  (TABLESPACE <b>"FOO" </b>ENABL"
 "E STORAGE IN ROW CHUNK 8192 PCTVERSION 10 NOCACHE  STORAGE(INITIAL 65536 FR"
 "EELISTS 1 FREELIST GROUPS 1))"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'FOO' does not exist
Import terminated successfully with warnings.
 
<b>it WILL NOT do that for a multi-segment object! You'd have to precreate that table in db2 first and then import with IGNORE=Y (you can get the ddl from the dmp file using "imp userid=a/a tables=t2 indexfile=t2.sql"; t2.sql will have the ddl for t2).</b>

That is the longest bullet point ever :)


o Tables with LOBS or LONGS will be done a row at a time upon export and upon insert.  This can be *slow* but due to the large nature of the data, unavoidable.  We can see this with another test:

ops$tkyte@ORA9IR2> create table t1 ( x int primary key, y varchar2(4000) );
Table created.
 
ops$tkyte@ORA9IR2> create table t2 ( x int primary key, y clob );
Table created.
 
ops$tkyte@ORA9IR2> insert into t2
  2  select rownum, to_lob(text)
  3    from all_views;
 
2612 rows created.
 
ops$tkyte@ORA9IR2> insert into t1
  2  select x, dbms_lob.substr(y,4000,1)
  3    from t2;
 
2612 rows created.
 
ops$tkyte@ORA9IR2> grant alter session to ops$tkyte;
Grant succeeded.
 
ops$tkyte@ORA9IR2> create or replace trigger logon_trigger after logon on database
  2  begin
  3          execute immediate 'alter session set sql_trace=true';
  4  end;
  5  /
 
Trigger created.
 
ops$tkyte@ORA9IR2> host exp userid=/ 'tables=(t1,t2)'
 
Export: Release 9.2.0.4.0 - Production on Wed Apr 14 08:13:36 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
 
About to export specified tables via Conventional Path ...
. . exporting table                             T1       2612 rows exported
. . exporting table                             T2       2612 rows exported
Export terminated successfully without warnings.

<b>tk is just a shell script I have that finds the last trace and tkprofs it into tk.prf</b>
 
ops$tkyte@ORA9IR2> host tk
/home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_21922.trc
 
TKPROF: Release 9.2.0.4.0 - Production on Wed Apr 14 08:13:38 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
 
ops$tkyte@ORA9IR2> host mv tk.prf tk_exp.prf
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t1;
Table dropped.
 
ops$tkyte@ORA9IR2> drop table t2;
Table dropped.
 
ops$tkyte@ORA9IR2> host imp userid=/ full=y
 
Import: Release 9.2.0.4.0 - Production on Wed Apr 14 08:13:41 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
 
Export file created by EXPORT:V09.02.00 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                           "T1"       2612 rows imported
. . importing table                           "T2"       2612 rows imported
Import terminated successfully without warnings.
 
ops$tkyte@ORA9IR2> host tk
/home/ora9ir2/admin/ora9ir2/udump/ora9ir2_ora_21937.trc
 
TKPROF: Release 9.2.0.4.0 - Production on Wed Apr 14 08:13:45 2004
 
Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.
 
 
 
ops$tkyte@ORA9IR2> host mv tk.prf tk_imp.prf
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop trigger logon_trigger;
 
Trigger dropped.


<b>Now, when we inspect the tk_exp.prf file we see:</b>

SELECT /*+NESTED_TABLE_GET_REFS+*/ "OPS$TKYTE"."T1".*
FROM
 "OPS$TKYTE"."T1"
                                                                                                           
                                                                                                           
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      523      0.02       0.03          0        730          1        2612
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      525      0.02       0.03          0        730          1        2612


<b>It array fetched that at 5 rows/fetch (we control that; if you rerun with buffer=10000000 on the exp, you'd see:

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.02          0        253          1        2612
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.03          0        253          1        2612

A single fetch!

But when we get to T2:</b>


SELECT /*+NESTED_TABLE_GET_REFS+*/ "OPS$TKYTE"."T2".*
FROM
 "OPS$TKYTE"."T2"
                                                                                                           
                                                                                                           
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch     2613      0.09       0.08          0       2637          0        2612
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2615      0.09       0.08          0       2637          0        2612


<b>it is forced into single row mode to retrieve the data (regardless of the buffer size).  Looking at the import we see:</b>

INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T1" ("X", "Y")
VALUES
 (:1, :2)
                                                                                                           
                                                                                                           
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute    374      0.07       0.12          0        374       3068        2612
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      375      0.07       0.12          0        374       3068        2612

<b>again, array inserts, but for t2:</b>



INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T2" ("X", "Y")
VALUES
 (:1, :2)
                                                                                                           
                                                                                                           
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2612      0.25       0.32          0        188       3985        2612
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     2613      0.25       0.32          0        188       3985        2612
                                                                                                           
Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 121
********************************************************************************
                                                                                                           
SELECT /*+NESTED_TABLE_GET_REFS+*/ "Y"
FROM
 "T2" WHERE ROWID = :1
                                                                                                           
                                                                                                           
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute   2612      0.11       0.12          0          0          0           0
Fetch     2612      0.05       0.09          0       2612          0        2612
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total     5225      0.16       0.21          0       2612          0        2612


<b>Not only a single row insert but -- a select as well (to get the lob locator, to write the data into the lob after inserting)</b>
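
A minimal PL/SQL sketch of that per-row pattern (the table and values are hypothetical; import itself works through OCI, but conceptually it is the same insert-then-write sequence the trace shows):

declare
    l_rowid  rowid;
    l_lob    clob;
begin
    -- the row goes in with an empty lob locator...
    insert into t2 ( x, y ) values ( 1, empty_clob() )
      returning rowid into l_rowid;

    -- ...then the locator is fetched back (the ROWID-based SELECT in the trace
    -- above) and the lob content is written through it
    select y into l_lob from t2 where rowid = l_rowid for update;
    dbms_lob.writeappend( l_lob, 11, 'hello world' );

    commit;
end;
/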



 

Java stored proc

Chris, November 11, 2005 - 4:47 pm UTC

Is it possible to use java in a stored procedure to export binary objects to the client?

Tom Kyte
November 12, 2005 - 10:46 am UTC

only if the client has a program running on it that the java stored procedure can connect to and talk to over the network.

Otherwise, the java stored procedure is just a program running on the database server - it cannot do anything to the client's file system, not unless the client has something in place to facilitate that (like a network share, ftp server, whatever)
