You Asked
Hi Tom,
For some specific needs we have, I would like to be able to import a subset of data from an export dumpfile (a schema-level dump). Our databases run on 9i; I know that 10g (Data Pump) would probably help us do this, but we're not there yet. Ideally, we don't want to extract the subset of data (again) from our production systems because:
- the data we need has already been extracted into an export file we have
- it may affect production systems during peak hours
- etc.
Here is a simple example of how I think it could be done. Let's say I have the following objects in my production database:
CREATE TABLE PARENT
(
NO NUMBER(12) NOT NULL,
THEDATE DATE)
tablespace production_data;
CREATE UNIQUE INDEX PARENT_P ON PARENT(NO) tablespace production_data;
ALTER TABLE PARENT ADD (
CONSTRAINT PARENT_P PRIMARY KEY (NO)
USING INDEX );
CREATE TABLE CHILD
(
NO NUMBER(12) NOT NULL,
LANGUAGE NUMBER(3) NOT NULL,
DESCR VARCHAR2(4000 BYTE))
tablespace production_data;
CREATE UNIQUE INDEX CHILD_P ON CHILD(NO, LANGUAGE) tablespace production_data;
ALTER TABLE CHILD ADD (
CONSTRAINT CHILD_P PRIMARY KEY (NO, LANGUAGE)
USING INDEX );
ALTER TABLE CHILD ADD (
CONSTRAINT PARENT_CHILD_F FOREIGN KEY (NO)
REFERENCES PARENT (NO));
So here I have a parent table and a child table, with a referential integrity constraint between them.
The "subset import process" will do the following with an export of the preceding structure and data:
1 - Create an indexfile based on the export dumpfile
2 - Create tables (no indexes, no constraints, no data) in the destination schema
3 - For the parent table and for each table that references it (using the dictionary constraints from the source system), do the following:
3a - Rename the table to xxx_tablename
3b - Create a view named tablename for xxx_tablename
3c - Create an instead of trigger on the view containing the code needed to check whether the rows inserted by the import process (later) meet the "conditions" of our needed subset (see the sketch after this list)
4 - Import the data
5 - Drop the views/triggers and rename the tables
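For example, steps 3a to 3c would look something like this for the PARENT table (a rough sketch only -- the trigger name and the "NO < 1000" condition are just placeholders for our real subset rules):
RENAME PARENT TO XXX_PARENT;
CREATE OR REPLACE VIEW PARENT AS SELECT * FROM XXX_PARENT;
CREATE OR REPLACE TRIGGER PARENT_SUBSET_IOI
INSTEAD OF INSERT ON PARENT
FOR EACH ROW
BEGIN
   -- keep only the rows that belong to the wanted subset;
   -- silently discard everything else inserted by imp
   IF :NEW.NO < 1000 THEN
      INSERT INTO XXX_PARENT ( NO, THEDATE )
      VALUES ( :NEW.NO, :NEW.THEDATE );
   END IF;
END;
/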
Everything works fine until one of the tables into which a subset must be loaded contains a LOB column. Using this procedure, we get the following error from imp:
IMP-00028 partial import of previous table rolled back: %lu rows rolled back
I've logged a TAR on Metalink about this, but they finally told me it was beyond the scope of Support Services. However, I'm able to prove that this technique works when it is driven by something other than imp (for example, a PL/SQL or SQL script).
I'm pretty sure the problem is related to the way imp handles LOB columns from a dumpfile, but nobody at Oracle Support has been able to answer that. With a bit more information we might be able to make it work, and it would be a very nice tool for us.
Would you have any information/ideas on how we might get around this problem?
If you would like a complete reproducible test case, I will be pleased to provide it.
Thanks.
Simon
and Tom said...
you cannot do lobs through an "instead of trigger on a view". it doesn't work, never has and won't. Not everything is supported with instead of triggers.
think about it - the clob could be 4gig in size. the clob data in general is "not there" in the insert statement -- an empty_clob() is, and the lob locator would be retrieved by the "returning" clause on the insert, but that doesn't fly with instead of triggers.
so, import does not -- cannot -- do something as simple as:
ops$tkyte@ORA9IR2> create table t ( x int, y clob );
Table created.
ops$tkyte@ORA9IR2> insert into t values ( 1, 'hello world' );
1 row created.
ops$tkyte@ORA9IR2> commit;
Commit complete.
ops$tkyte@ORA9IR2> delete from t;
1 row deleted.
it has to do something like this:
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
2 l_clob clob;
3 begin
4 insert into t values ( 1, empty_clob() ) returning y into l_clob;
5 dbms_lob.writeAppend( l_clob, length( 'hello world' ), 'hello world' );
6 end;
7 /
PL/SQL procedure successfully completed.
else, it could never get more than 4000 characters in there! (that's the maximum length of a string in SQL -- 4000 characters)
So you see, if you turn this into a view with an instead of trigger:
ops$tkyte@ORA9IR2> drop table t;
Table dropped.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t2 ( x int, y clob );
Table created.
ops$tkyte@ORA9IR2> create view t as select * from t2;
View created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger t instead of insert on t
2 begin
3 insert into t2 values ( :new.x, :new.y );
4 end;
5 /
Trigger created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( 1, 'hello world' );
1 row created.
that is probably what you tested and said "this should work", but...
ops$tkyte@ORA9IR2> delete from t;
1 row deleted.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> declare
2 l_clob clob;
3 begin
4 insert into t values ( 1, empty_clob() ) returning y into l_clob;
5 dbms_lob.writeAppend( l_clob, length( 'hello world' ), 'hello world' );
6 end;
7 /
declare
*
ERROR at line 1:
ORA-22816: unsupported feature with RETURNING clause
ORA-06512: at line 4
that is what IMP is getting... or it could be even more insidious. IMP might
a) insert the row
b) try to select out the lob locator in order to write to it -- but hey, it ain't there! yet it has to be there
So, this trick will just not work with imp on those tables with lobs.
Why would you not export with "query=" to get a slice of the data?
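a minimal sketch of that approach using a parameter file (scott/tiger and the "no < 1000" predicate are just placeholders for your own account and subset condition; remember that query= is only honoured for conventional-path, table-mode exports, and the same predicate is applied to every table in the list):
# subset.par
tables=(parent, child)
query="where no < 1000"
file=subset.dmp

$ exp scott/tiger parfile=subset.par
the resulting dumpfile contains only the subset rows, so a plain imp can load it -- lobs included -- without any view/trigger tricks.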