Import a subset of data using Oracle exp/imp


Question and Answer

Tom Kyte

Thanks for the question, Simon.

Asked: August 28, 2004 - 12:11 pm UTC

Last updated: August 31, 2004 - 8:37 am UTC

Version: 9.2.0

Viewed 1000+ times

You Asked

Hi Tom,

For some specific needs we have, I would like to be able to import a subset of data from an export dumpfile (schema dump). Our databases run on 9i and I know that 10g (datapump) will probably help us to do this but we're not there right now. Ideally, we don't want to extract the subset of data (again) from our production systems because:

- the data we need has already been extracted into an export file we have
- it may affect production systems during peak hours
- etc...

Here is a simple example of how I think it could be done. Let's say I have the following objects in my production database:

CREATE TABLE PARENT
(
NO NUMBER(12) NOT NULL,
THEDATE DATE)
tablespace production_data;

CREATE UNIQUE INDEX PARENT_P ON PARENT(NO) tablespace production_data;

ALTER TABLE PARENT ADD (
CONSTRAINT PARENT_P PRIMARY KEY (NO)
USING INDEX );

CREATE TABLE CHILD
(
NO NUMBER(12) NOT NULL,
LANGUAGE NUMBER(3) NOT NULL,
DESCR VARCHAR2(4000 BYTE))
tablespace production_data;

CREATE UNIQUE INDEX CHILD_P ON CHILD(NO, LANGUAGE) tablespace production_data;

ALTER TABLE CHILD ADD (
CONSTRAINT CHILD_P PRIMARY KEY (NO, LANGUAGE)
USING INDEX );

ALTER TABLE CHILD ADD (
CONSTRAINT PARENT_CHILD_F FOREIGN KEY (NO)
REFERENCES PARENT (NO));

So here I have a parent table and a child table. A referential integrity constraint is also there.

The "subset import process" will do the following with an export of the preceding structure and data:

1 - Create an indexfile based on the export dumpfile
2 - Create tables (no indexes, no constraints, no data) in the destination schema
3 - For the parent table and for each tables that reference that parent table (using dictionnary constraints from source system), do the following:
3a - Rename the table to xxx_tablename
3b - Create a view named tablename for xxx_tablename
3c - Create an instead of trigger on the view that contains the necessary code to validate if the rows to be inserted by the import process (later) respect the "conditions" of our needed subset
4 - Import the data
5 - Drop the views/triggers and rename the tables
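
A minimal sketch of steps 3a-3c for the PARENT table might look like this (the xxx_ prefix, the trigger name and the predicate NO <= 1000 are only placeholders for whatever defines the subset):

-- 3a: move the real table out of the way
ALTER TABLE PARENT RENAME TO XXX_PARENT;

-- 3b: put a view with the original name in front of it
CREATE OR REPLACE VIEW PARENT AS SELECT * FROM XXX_PARENT;

-- 3c: only let through the rows that belong to the subset
CREATE OR REPLACE TRIGGER PARENT_SUBSET_IOT
INSTEAD OF INSERT ON PARENT
FOR EACH ROW
BEGIN
  IF :NEW.NO <= 1000 THEN
    INSERT INTO XXX_PARENT (NO, THEDATE) VALUES (:NEW.NO, :NEW.THEDATE);
  END IF;
END;
/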

Everything works fine until one of the tables into which a subset must be loaded contains a LOB column. Using this procedure, we get the following error from imp:

IMP-00028 partial import of previous table rolled back: %lu rows rolled back

I've logged a TAR on Metalink about this, but they finally told me it was beyond the scope of Support Services. However, I am able to prove that this technique works when it is used with a tool other than imp (for example, with a PL/SQL or SQL script).

I'm pretty sure the problem is related to the way imp manages LOB columns from a dumpfile. However, nobody at Oracle Support has been able to answer that. With a bit more information, we may be able to make it work, and it would be a very nice tool for us.

Would you have any information/ideas on how we may bypass this problem?

If you would like a complete reproducible test case for this, I will be pleased to provide it.

Thanks.

Simon





and Tom said...

you cannot do lobs in an "instead of trigger on a view". it doesn't work, never has and won't. Not everything is supported with instead of triggers.

think about it - the clob could be 4gig in size. the clob data in general is "not there" in the insert statement -- an empty_clob() is, and the real locator would be retrieved by the "returning" clause on the insert, but that doesn't fly with instead of triggers.

so, import does not -- cannot -- do something as simple as:


ops$tkyte@ORA9IR2> create table t ( x int, y clob );

Table created.

ops$tkyte@ORA9IR2> insert into t values ( 1, 'hello world' );

1 row created.

ops$tkyte@ORA9IR2> commit;

Commit complete.

ops$tkyte@ORA9IR2> delete from t;

1 row deleted.

it has to do something like this:

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
2 l_clob clob;
3 begin
4 insert into t values ( 1, empty_clob() ) returning y into l_clob;
5 dbms_lob.writeAppend( l_clob, length( 'hello world' ), 'hello world' );
6 end;
7 /

PL/SQL procedure successfully completed.

else, it could never get more than 4000 characters in there! (that's the maximum length of a string in SQL -- 4000 characters)

So you see, if you turn this into a trigger:



ops$tkyte@ORA9IR2> drop table t;

Table dropped.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t2 ( x int, y clob );

Table created.

ops$tkyte@ORA9IR2> create view t as select * from t2;

View created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger t instead of insert on t
2 begin
3 insert into t2 values ( :new.x, :new.y );
4 end;
5 /

Trigger created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( 1, 'hello world' );

1 row created.

that is probably what you tested and said "this should work", but...


ops$tkyte@ORA9IR2> delete from t;

1 row deleted.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
2 l_clob clob;
3 begin
4 insert into t values ( 1, empty_clob() ) returning y into l_clob;
5 dbms_lob.writeAppend( l_clob, length( 'hello world' ), 'hello world' );
6 end;
7 /
declare
*
ERROR at line 1:
ORA-22816: unsupported feature with RETURNING clause
ORA-06512: at line 4


that is what IMP is getting... or it could even be more insidious. IMP might:
a) insert the row
b) try to select out the lob locator in order to write to it -- but hey, it ain't there! and it has to be there


So, this trick will just not work with imp on those tables with lobs.


Why would you not export with "query=" to get a slice of the data?
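
For example, a table-mode export driven by a parameter file might look something like this (the table list, the predicate and the file names are only placeholders; query= applies to every table listed and cannot be used with direct=y). The parameter file, say subset.par, would contain:

tables=(parent, child)
query="where no <= 1000"
file=subset.dmp
log=subset.log

and the export would be run as:

exp userid=scott/tiger parfile=subset.par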




Comments

Simon, August 30, 2004 - 10:53 pm UTC

Thank you Tom for your quick answer. It makes a lot of sense.

It would be hard for us to use the query= parameter of export because of some of its restrictions (no user-mode exports; the query applies to all tables specified, even though the relevant column names may differ from table to table; no direct path).

I'm pretty sure that imp does not use a returning clause. I did a trace of an import process into a "standard" table containing a LOB column. Here are the SQL statements I see:

INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "TEST" ("NO", "THEDATE", "THELOB") VALUES (:1, :2, :3)

SELECT /*+NESTED_TABLE_GET_REFS+*/ "THELOB" FROM "TEST" WHERE ROWID = :1

So it looks like imp wants to get the lob locator and then probably use dbms_lob.writeappend to write into it.
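
Written out as a PL/SQL sketch, the functional equivalent of those two statements would be something like this (the literal values and the returning clause are only stand-ins for the binds imp uses):

declare
  l_rowid  rowid;
  l_clob   clob;
begin
  -- the insert, with an empty locator in place of the real LOB data
  insert into test ( no, thedate, thelob )
  values ( 1, sysdate, empty_clob() )
  returning rowid into l_rowid;

  -- re-select the locator by rowid, then stream the real data into it
  select thelob into l_clob
    from test
   where rowid = l_rowid
     for update;

  dbms_lob.writeappend( l_clob, length('hello world'), 'hello world' );
end;
/

And that second step, finding the freshly inserted row by rowid, is presumably what breaks once a view with an instead of trigger sits in front of the real table, as you described above.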

Do you have any information about these 2 hints?

Thanks

Simon

Tom Kyte
August 31, 2004 - 8:37 am UTC

it does the functional equivalent -- it is equivalent.

if you have my book "Expert One-on-One Oracle", I discuss the nested table hints there. They let you query a nested table as if it were a standalone table (and that is one of the reasons I don't like nested tables: you ALWAYS want to query them as a standalone table, and the only way to do that is via this undocumented hint, which sort of makes it clear that you really didn't want a nested table in the first place...)
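
For what it's worth, a small made-up example of that use of the hint (the hint is undocumented, so treat this purely as an illustration):

create type emp_t as object ( empno number, ename varchar2(30) );
/
create type emp_tab_t as table of emp_t;
/
create table dept_and_emps
( deptno number primary key,
  emps   emp_tab_t )
nested table emps store as emps_nt;

-- selecting from the storage table directly is normally not allowed,
-- but the hint lets you treat it as a standalone table:
select /*+ nested_table_get_refs +*/ empno, ename
  from emps_nt;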

A reader, July 17, 2005 - 11:26 am UTC

Hi Tom,

Just wanted to let you (and eventually others) know the solution I found for the problem I had with LOBs. Instead of creating an instead of trigger that decides whether a row must be inserted or not, I changed the table to a partitioned table, but ONLY for the tables that contain a LOB (I kept the "instead of trigger" solution for the other tables). One partition contains the data I want to keep; the other, the data I want to get rid of. As soon as the import is completed, I can drop the unwanted partition and get back to an unpartitioned table.

This solution doesn't allow me to "reject" the rows I don't want BEFORE inserting them into the database. However, I can get rid of those rows pretty fast (drop partition).
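
As a rough sketch of what I mean, using the TEST table from my trace above (range partitioning and the cutoff value are only an example; any scheme that separates the "keep" rows from the "discard" rows would do):

create table test
( no      number(12) not null,
  thedate date,
  thelob  clob )
partition by range (no)
( partition p_keep    values less than (1001),
  partition p_discard values less than (maxvalue) );

-- import with ignore=y so imp loads into this pre-created table, then:
alter table test drop partition p_discard;

-- and, if an unpartitioned table is wanted afterwards:
create table test_plain as select * from test where 1 = 0;
alter table test exchange partition p_keep with table test_plain;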

Thank you again for your help on this.

Simon
