Question and Answer

Tom Kyte

Thanks for the question.

Asked: May 19, 2000 - 8:59 am UTC

Last updated: May 01, 2008 - 2:57 pm UTC

Version: 8.1.5


You Asked

Can I export the database in Oracle 8i and then use the *.dmp file to import into Oracle 8?

and Tom said...

Short answer -- yes, sort of.

Longer answer -- you'll want to make sure to use the Oracle8 (not 8i) EXP tool to export from 8i and the Oracle8 IMP tool to import it.

The EXP files from an earlier release of a database may be imported using the current release's IMP tool (e.g. I can take a v7.x exp and import it with the v8.1 IMP). The EXP files from a later release cannot be imported using an earlier IMP (e.g. I cannot take a v8.x export and import it using a v7.x tool -- even v8.0 cannot import v8.1 exports).

You use the EXP tool of the release you want to "downgrade" to in order to export the data. If you need to go to a version 7 database from version 8, there are views (created by catexp7.sql, found in $ORACLE_HOME/rdbms/admin) provided to allow the v7 export tool to run against the version 8 database.
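For example (a sketch only -- the connect string and file name are illustrative):

-- in the version 8 database, connected as SYS:
SQL> @$ORACLE_HOME/rdbms/admin/catexp7.sql

-- then, from a machine with the version 7 tools installed:
$ exp userid=system/manager@v8_db full=y file=for_v7.dmp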

The problem with using an 8i export against 8.0 import is that there are new objects in the 8i export that 8.0 simply doesn't recognize.


Comments

importing data exported by 9.0 into 8.1.7

Nikunj, July 22, 2003 - 5:42 am UTC

Dear Sir,

I have Oracle EE 8.1.7 in one place and Oracle EE 9.0 in another place.
The two are not connected to each other and are in different places.

Now I want to import data exported by the 9.0 server. How can I do that? Please give me detailed steps.

Thanks,
Nikunj

Tom Kyte
July 22, 2003 - 8:32 am UTC

you cannot use export and import in this case.

you'll need to

a) install the 817 client on the machine hosting 9i
b) use that export tool over sqlnet on that machine to create the dmp file
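For example (a sketch -- the connect string and schema are illustrative; this is the 8.1.7 exp binary on the machine hosting 9i, pointed at the 9i database over SQL*Net):

$ exp userid=scott/tiger@the_9i_db owner=scott file=for_817.dmp

The resulting dmp file can then be read by the 8.1.7 IMP tool.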



Speaking of import/export... how do I exp/imp JOBs?

pasko, July 22, 2003 - 10:00 am UTC

Hi Tom,

Thanks for the above follow-up...

I have a question slightly related to this.

My DBA just did an exp/imp of all user schemas from one database to the other, using his own USERNAME (which I guess had the sysdba/sysoper role granted), and imported a few dozen schemas using the FROMUSER/TOUSER options.

Everything worked fine, except that all user jobs from the imported schemas went into his schema:
dba_jobs shows the log_user for all jobs as his name, instead of the jobs being created in their respective schemas.

I also tried to import twice, expecting to see duplicated system-generated constraints (those starting with SYS...), but I was surprised to see that they were not duplicated during import.

Do you have a script for getting all JOBs from one schema to another -- I mean something similar to your popular getallviews/getallcode/getcode scripts?
Something which would go by the name getalljobs...


Thanking you in advance,

Regards

Tom Kyte
July 22, 2003 - 2:53 pm UTC

No I don't -- but it shouldn't be too hard to generate.
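For example, a minimal sketch of such a generator, run as the schema owner (it relies on dbms_job.user_export, which returns the dbms_job call needed to recreate a job):

set serveroutput on
declare
    l_stmt varchar2(4000);
begin
    for j in ( select job from user_jobs )
    loop
        -- user_export fills l_stmt with the dbms_job.isubmit call
        -- that would recreate this job
        dbms_job.user_export( j.job, l_stmt );
        dbms_output.put_line( l_stmt );
    end loop;
end;
/

Spool that output and run it in the target schema.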

Scheduled "Exp" using .bat file

Fazle Rabbi, July 23, 2003 - 1:26 am UTC

I export data every day automatically using an Export.bat file via the Task Scheduler on the Windows 2000 platform.

In the Export.bat file I've used the following command line:

EXP SCOTT/TIGER@ORCL FILE=EXPDAT.DMP

In this case I have to copy the EXPDAT.DMP file to a different location to avoid overwriting it.

Now I want to export using "sysdate+dat.dmp" (that format) as the FILE parameter of the EXP utility. How is this possible?

Please help.

Thanks
F Rabbi
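A common approach, sketched here: build the file name from the date inside the .bat file before invoking EXP. Note that the output of %date% depends on regional settings, so the token positions below (which assume a "Day MM/DD/YYYY" format) may need adjusting:

@echo off
rem derive a YYYYMMDD stamp from %date%, then pass it to exp
for /f "tokens=2-4 delims=/ " %%a in ("%date%") do set STAMP=%%c%%a%%b
exp scott/tiger@orcl file=exp_%STAMP%.dmp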


dbms_job.user_export does that..

pasko, July 24, 2003 - 7:47 am UTC

Hi Tom,

I finally found another alternative..

dbms_job.user_export  Procedure 

Example :

SQL> var text varchar2(250) ;
SQL> var job  number  ;
SQL> exec dbms_job.user_export(:job , :text ) ;
SQL> print text

TEXT
--------------------------------------------------------------------------------
dbms_job.isubmit(job=>237,what=>'begin null; end; ',next_date=>to_date('4000-01-
01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),interval=>'trunc( sysdate + 4000)',no_pars
e=>TRUE);


 

Export

Ignatius Cardoz, April 16, 2004 - 8:01 am UTC

Dear Tom,

Question No. 1) I have a complete export dump file as of last night, and the next morning my database crashes. I don't know the names of the tablespaces and datafiles. How do I import using the dump file, with the same tablespace names? Is there a way?

Question No. 2) I have two databases (ver 9.0 and ver 9.2). I have installed the 9.2 client version on my PC. Now, when exporting from my PC (client PC), export against the 9.2 database works fine, but the 9.0 database gives me the following error:

EXP-00056: ORACLE error 942 encountered
ORA-00942: table or view does not exist
EXP-00000: Export terminated unsuccessfully

Thank you very much
Ignatius Cardoz

Tom Kyte
April 16, 2004 - 9:08 am UTC

1) it'll create them. the exp dmp file contains create tablespace statements.

but, that would be the worst of the worst ways to "backup". Please -- use BACKUPS for BACKUPS. exports are *not* a good way (I would go as far as to say they are a terrible way) to back up a database. only a backup -- data file copies -- is good for backups.


2) use v9.0 export against v9.0. use the version of the tool for that database.

Easier and more efficient way to export/import

John Miller, August 31, 2004 - 12:39 am UTC

When you have a large production database, you might do better to unload tables into text files with a tool like WisdomForce FastReader (www.wisdomforce.com). It will also generate a control file for SQL*Loader. Very quick and no overhead.

incremental

Alay, September 30, 2005 - 9:16 am UTC

Hi tom
How can we take an incremental backup using the export/import utilities?

Tom Kyte
September 30, 2005 - 11:01 am UTC

you cannot, really. first, it is deprecated, and second, an incremental export would have exported any table that had even a single row changed. it exported every table that had any modification -- the entire table, all of it.

export is not a database backup tool.

export is a data movement tool, move data from here to there.

BACKUPS are the only backup tools

export- import

Alay, October 03, 2005 - 9:03 am UTC

Hi tom,

Can you explain me with example that how can we take first deprecated and then second export for a table?

Thanks in advance.

Tom Kyte
October 03, 2005 - 11:16 am UTC

eh?

deprecated means "the incremental export feature is deprecated"


If you want to read about it, it is still documented in the Utilities guide, but it is not a useful tool for anything.

Export/Import, Tablespace & Database Relationship

VIKAS SANGAR, October 19, 2005 - 5:33 am UTC

Dear Mr. Kyte

Can you please get me out of the disarray related to the relationship between export/import, tablespaces, and databases?
Let me give you a detailed scenario. Suppose...

I have an Oracle9i R2 database "D1" on one of my machines, and another Oracle9i R2 database "D2" on another machine. Now I am required to import the entire user "U1" from D1 into a user in D2 (it can be a user with the same name, U1, or any other name, say U2).

Given that all the objects (tables/views) owned by U1 lie in different tablespaces "T1, T2, T3", in order to import the user U1 from database D1 into database D2, I first export the user U1 and then import the same (ExpDat.dmp file) into a fresh user U2 in D2 whose default tablespace is "T4".

As I start the import process, some of the tables and objects get imported and some of the objects don't, as I am flashed with messages of the kind -> Failed to import table Tab_x: tablespace T1 does not exist; failed to import table Tab_y: tablespace T2 does not exist; failed to import table Tab_z: tablespace T3 does not exist.

Now can you please explain to me why this happens?
1) Why can't I import all the objects owned by a user to another user just using that user's default tablespace?

2) Why do I have to create tablespaces T1, T2, T3 in order to successfully import the user U1 of database D1 into user U2 of database D2?

3) How can I get rid of this problem of creating identical tablespaces in the other database D2, and successfully import all the objects owned by user U1 (irrespective of the tablespaces which hold them in D1) completely into database D2's user U2 -- and that too just into U2's default tablespace, rather than creating the other tablespaces in which the objects of the user to be imported are lying?

4) Is there any means by which the details of an export file, such as the name of the user exported, password, names of objects and corresponding tablespaces etc., can be known just before importing it into another database/user?

5) Lastly, what is the relationship between export/import, tablespaces, and databases?

These are a lot of questions, but I feel a detailed explanation from you will help break the jinx for many other people and help us all understand and implement the related concepts much better, and confidently.

Take care, Regards.
VIKAS.

Tom Kyte
October 19, 2005 - 7:17 am UTC

(do you have access to my first book Expert one on one Oracle? I cover this in great detail in the chapter on exp/imp)


I'll have to assume (guess, hypothesize, due to lack of information) that the objects that fail are MULTI-SEGMENT objects (tables with clobs, partitioned tables, IOT's with overflow segments for example)...

imp ... SHOW=Y will just show you what the import WOULD do (not actually do it). No, you will not see passwords this way - that would be a problem.
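For example (user names and file name illustrative):

imp userid=u2/pw file=expdat.dmp fromuser=u1 touser=u2 show=y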


There is no real relationship between exp/imp and tablespaces/databases -- exp creates a LOGICAL COPY of data with CREATE statements in. imp reads this logical copy and attempts to run the CREATES and do the INSERTS -- just like a script.


<quote src=expert one on one oracle>
Multi-Tablespace CREATES

In the beginning, CREATE TABLE statements were relatively simple. Over the years, they have gotten progressively more and more complex. The 'train tracks' or 'wire diagram' for the simple CREATE TABLE now spans eight pages. One of the newer features of tables is the ability for bits and pieces of them to exist in various tablespaces. For example, a table with a CLOB column will have a table segment, CLOB index segment, and CLOB data segment. We can specify the location of the table and the locations of the CLOB data. An index-organized table (IOT) can have the index segment, and an overflow segment. Partitioned tables of course may have many partitions, each in a separately specified tablespace.

With this complexity comes some confusion for EXP/IMP. It used to be that if you tried to import an object and it failed due to the tablespace either not existing, or because you exceeded your quota on that tablespace, IMP would rewrite the SQL for you to create the object in your DEFAULT tablespace. IMP will not do this with multi-tablespace objects as it will with single tablespace objects, even if all of the tablespaces specified in the CREATE are the same. An example will demonstrate the problem, and then I'll describe how we can work around the situation.

First, we'll start with a schema that has a couple of multi-tablespace objects and a simple single tablespace table in a tablespace:

tkyte@TKYTE816> create tablespace exp_test
2 datafile 'c:\oracle\oradata\tkyte816\exp_test.dbf'
3 size 1m
4 extent management local
5 uniform size 64k
6 /
Tablespace created.

tkyte@TKYTE816> alter user tkyte default tablespace exp_test
2 /
User altered.

tkyte@TKYTE816> create table t1
2 ( x int primary key, y varchar2(25) )
3 organization index
4 overflow tablespace exp_test
5 /
Table created.

tkyte@TKYTE816> create table t2
2 ( x int, y clob )
3 /
Table created.

tkyte@TKYTE816> create table t3
2 ( x int,
3 a int default to_char(sysdate,'d')
4 )
5 PARTITION BY RANGE (a)
6 (
7 PARTITION part_1 VALUES LESS THAN(2),
8 PARTITION part_2 VALUES LESS THAN(3),
9 PARTITION part_3 VALUES LESS THAN(4),
10 PARTITION part_4 VALUES LESS THAN(5),
11 PARTITION part_5 VALUES LESS THAN(6),
12 PARTITION part_6 VALUES LESS THAN(7),
13 PARTITION part_7 VALUES LESS THAN(8)
14 )
15 /
Table created.

tkyte@TKYTE816> create table t4 ( x int )
2 /
Table created.

So, we started by creating a tablespace and making this our DEFAULT tablespace. We then created an IOT with two segments - the index and overflow. We created a table with a CLOB that has three segments. Then we have a partitioned table with seven segments. Lastly, we have the normal, simple 'table'. We export this schema:

tkyte@TKYTE816> host exp userid=tkyte/tkyte owner=tkyte

and proceed to drop that tablespace:

tkyte@TKYTE816> drop tablespace exp_test including contents;
Tablespace dropped.

tkyte@TKYTE816> alter user tkyte default tablespace data;
User altered.

When we import the schema we discover most of the tables won't come back in:

tkyte@TKYTE816> host imp userid=tkyte/tkyte full=y

Import: Release 8.1.6.0.0 - Production on Tue Mar 20 19:03:18 2001

(c) Copyright 1999 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning option
JServer Release 8.1.6.0.0 - Production

Export file created by EXPORT:V08.01.06 via conventional path
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing TKYTE's objects into TKYTE
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "T2" ("X" NUMBER(*,0), "Y" CLOB) PCTFREE 10 PCTUSED 40 INITRA"
"NS 1 MAXTRANS 255 LOGGING STORAGE(INITIAL 65536) TABLESPACE "EXP_TEST" LOB "
"("Y") STORE AS (TABLESPACE "EXP_TEST" ENABLE STORAGE IN ROW CHUNK 8192 PCT"
"VERSION 10 NOCACHE STORAGE(INITIAL 65536))"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'EXP_TEST' does not exist
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "T3" ("X" NUMBER(*,0), "A" NUMBER(*,0)) PCTFREE 10 PCTUSED 40"
" INITRANS 1 MAXTRANS 255 LOGGING TABLESPACE "EXP_TEST" PARTITION BY RANGE ("
""A" ) (PARTITION "PART_1" VALUES LESS THAN (2) PCTFREE 10 PCTUSED 40 INIT"
"RANS 1 MAXTRANS 255 STORAGE(INITIAL 65536) TABLESPACE "EXP_TEST" LOGGING, P"
"ARTITION "PART_2" VALUES LESS THAN (3) PCTFREE 10 PCTUSED 40 INITRANS 1 MA"
"XTRANS 255 STORAGE(INITIAL 65536) TABLESPACE "EXP_TEST" LOGGING, PARTITION "
""PART_3" VALUES LESS THAN (4) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 25"
"5 STORAGE(INITIAL 65536) TABLESPACE "EXP_TEST" LOGGING, PARTITION "PART_4" "
"VALUES LESS THAN (5) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE"
"(INITIAL 65536) TABLESPACE "EXP_TEST" LOGGING, PARTITION "PART_5" VALUES LE"
"SS THAN (6) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL "
"65536) TABLESPACE "EXP_TEST" LOGGING, PARTITION "PART_6" VALUES LESS THAN ("
"7) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536) TA"
"BLESPACE "EXP_TEST" LOGGING, PARTITION "PART_7" VALUES LESS THAN (8) PCTFR"
"EE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536) TABLESPACE "
""EXP_TEST" LOGGING )"
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'EXP_TEST' does not exist
. . importing table "T4" 0 rows imported
IMP-00017: following statement failed with ORACLE error 959:
"CREATE TABLE "T1" ("X" NUMBER(*,0), "Y" VARCHAR2(25), PRIMARY KEY ("X") EN"
"ABLE) ORGANIZATION INDEX NOCOMPRESS PCTFREE 10 INITRANS 2 MAXTRANS 255 LOG"
"GING STORAGE(INITIAL 65536) TABLESPACE "EXP_TEST" PCTTHRESHOLD 50 OVERFLOW "
"PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 LOGGING STORAGE(INITIAL 6553"
"6) TABLESPACE "EXP_TEST""
IMP-00003: ORACLE error 959 encountered
ORA-00959: tablespace 'EXP_TEST' does not exist
Import terminated successfully with warnings.

Specifically, the only table that came back in without an error was the simple 'normal' table. For this table, IMP rewrote the SQL, blanking out the first TABLESPACE EXP_TEST that it came across, and retried the CREATE. This rewritten CREATE succeeded. The other CREATEs, when likewise rewritten, did not succeed. The only solution to this is to create the tables beforehand and then import with IGNORE=Y. If you do not have the DDL for the CREATE TABLEs, you can of course retrieve it from the DMP file, using IMP with the INDEXFILE option, and modify it. In this case, since I had the DDL readily at hand, I just created the three tables with new tablespaces specified where necessary:

tkyte@TKYTE816> create table t1
2 ( x int primary key, y varchar2(25) )
3 organization index
4 overflow tablespace data
5 /
Table created.

tkyte@TKYTE816> create table t2
2 ( x int, y clob )
3 /
Table created.

tkyte@TKYTE816> create table t3
2 ( x int,
3 a int default to_char(sysdate,'d')
4 )
5 PARTITION BY RANGE (a)
6 (
7 PARTITION part_1 VALUES LESS THAN(2),
8 PARTITION part_2 VALUES LESS THAN(3),
9 PARTITION part_3 VALUES LESS THAN(4),
10 PARTITION part_4 VALUES LESS THAN(5),
11 PARTITION part_5 VALUES LESS THAN(6),
12 PARTITION part_6 VALUES LESS THAN(7),
13 PARTITION part_7 VALUES LESS THAN(8)
14 )
15 /
Table created.

and was able to import cleanly:

tkyte@TKYTE816> host imp userid=tkyte/tkyte full=y ignore=y

Import: Release 8.1.6.0.0 - Production on Tue Mar 20 19:03:20 2001

(c) Copyright 1999 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production
With the Partitioning option
JServer Release 8.1.6.0.0 - Production

Export file created by EXPORT:V08.01.06 via conventional path
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing TKYTE's objects into TKYTE
. . importing table "T2" 0 rows imported
. . importing partition "T3":"PART_1" 0 rows imported
. . importing partition "T3":"PART_2" 0 rows imported
. . importing partition "T3":"PART_3" 0 rows imported
. . importing partition "T3":"PART_4" 0 rows imported
. . importing partition "T3":"PART_5" 0 rows imported
. . importing partition "T3":"PART_6" 0 rows imported
. . importing partition "T3":"PART_7" 0 rows imported
. . importing table "T4" 0 rows imported
. . importing table "T1" 0 rows imported
Import terminated successfully without warnings.

Now, I have seen an issue with some objects, whereby the above workaround does not work. IMP still raises the ORA-00959 tablespace 'name' does not exist. The only way I have found to work around this temporarily, is to do the above step (precreate the object) and then create very small tablespaces with the names that IMP demands. You would create these tablespaces with a file too small to actually create anything. Now the IMP will work and you can drop the tablespaces afterwards.


Export/Import, Tablespaces, Databases - Relationship

VIKAS SANGAR, October 19, 2005 - 8:49 am UTC

Dear Mr. Kyte

Thanks a ton for such a marvellous explanation, and that too within such a short time span of my putting the query before you. That was really very nice of you.

We really appreciate the good work put in by you and your team to help the entire Oracle fraternity by sharing your knowledge, and for giving a sense of security and closeness to all Oracle users. Just keep the spirits up, as always.

No, I don't have access to Expert One-on-One Oracle, but I am surely waiting for the newer edition (covering 10g) to be available for purchase in India.

Take care, Regards.
Vikas.

Export/Import, Tablespaces, Databases - Feedback

VIKAS SANGAR, October 19, 2005 - 9:01 am UTC

Dear Mr. Kyte

The user that I want to import into another database/machine does not have any CLOB/LONG columns in any of its tables, nor are the tables partitioned. Some of the tables do have explicit indexes (besides primary key and foreign key constraints), and no overflow segments seem to be present.

Take care, regards..
Vikas


Tom Kyte
October 19, 2005 - 9:44 am UTC

then it should work, give us a cut and paste of a failed statement to look at please.

Export/Import, Tablespaces, Databases - Related Question

VIKAS SANGAR, October 19, 2005 - 9:30 am UTC

Dear Mr. Kyte

A few more things struck me on this. Suppose I have several records in my tables that somehow could not get imported. Now, as per your instructions, I create these tables manually (meaning I now have only the structure of the tables, with no records in them), and I have also created the small tablespace previously demanded by the import utility.

Now, when I run the import utility again with ignore=y, will import insert the records into this table which was manually created by me and holds no records? Or do I have to insert those records manually too?

And what will be the case if I have entered a few new records of my own and then run the import -- will the old records get inserted? What will be the reaction of import?

Alternately, after a (somehow) successful import, if I drop the tablespace that was created solely for the purpose of the import (not the default tablespace of the user, but the tablespace demanded by import in the previous run), what impact will it have on the related tables and their records?

Take care, Regards...
Vikas.

Tom Kyte
October 19, 2005 - 9:46 am UTC

import will insert the records for you, yes.


the old records will be inserted, yes.

if the data was in the tablespace you dropped, the data will be gone. the only reason imp would "demand" a tablespace is because it tried to actually create something in there -- so dropping that tablespace would either fail (objects in there), or you would say 'including contents' and that would drop the objects as well.

Export-Import

Vin, October 19, 2005 - 2:57 pm UTC

Won't revoking the 'resource' role from the user doing the import allow the objects to reside in the importing user's default tablespace? ...Regards


Tom Kyte
October 19, 2005 - 4:32 pm UTC

only for single segment objects.

you would only need to revoke resource IF the tablespace existed - but you didn't WANT the objects to go into that tablespace. If the tablespace doesn't exist, you do not need to revoke it. (you are trying to get rid of the unlimited tablespace privilege that comes with resource - to make it so you could put a quota of 0k on the tablespace you don't want the objects in)
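For example (a sketch; the user and tablespace names are illustrative):

revoke unlimited tablespace from u2;
alter user u2 quota 0k on t1;
alter user u2 quota unlimited on t4;

With no quota on t1, the CREATE for a single segment object destined for t1 fails, and IMP rewrites it to go into U2's default tablespace instead.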

Export/Import, Tablespaces, Databases - Feedback.

VIKAS SANGAR, October 20, 2005 - 7:12 am UTC

Dear Mr. Kyte

At the time (12-15 days back) when I ran the import utility, I did not take a spool/snapshot of the import process, but since you pointed out the facts, after a deep analysis and after consulting the dba_tab_cols view, I found that one of the tables did contain a CLOB datatype in one of its columns.

Also, while carrying out a tablespace analysis (which was, however, unsuccessful) I got the following messages:

Using Previously Collected: Row Chaining Data...
Collecting: Index Fragmentation Data...
This means row chaining does exist, but since the analysis process fails for some reason, I could not extract the detailed report.

Processing: --some statements... and finally:
VBO-1508 : An error has occurred and the server object could not be initialized. Please disconnect and try again. ORA-08100: index is not valid - see trace file for diagnostics
:/* OracleOEM */ ANALYZE INDEX "OPTUSR"."IX_CREATEDBY" VALIDATE STRUCTURE
Processing failed.
During a previous attempt, the details window of the Tablespace Analyzer also showed some CLOB indexes with names like SYS#11227768A; these were not created by me.

And while importing a particular table I am confronted with the following:

Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Import file: EXPDAT.DMP > leads.dmp
Enter insert buffer size (minimum is 8192) 30720>
Export file created by EXPORT:V09.02.00 via conventional path
Warning: the objects were exported by OPTUSR, not by you
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
List contents of import file only (yes/no): no >
Ignore create error due to object existence (yes/no): no > yes
Import grants (yes/no): yes >
Import table data (yes/no): yes >
Import entire export file (yes/no): no > yes

. importing OPTUSR's objects into TST
. . importing table "LEADS"

IMP-00058: ORACLE error 29861 encountered
ORA-29861: domain index is marked LOADING/FAILED/UNUSABLE
Import terminated successfully with warnings.

I hope it is now clearer for you to give your valuable comments on the matter.

Can you please tell me why CLOBs, overflow segments, and partitions cause such problems related to tablespace coordination? What is the internal logic behind all this?

Take Care, Regards...
Vikas.


Tom Kyte
October 20, 2005 - 8:35 am UTC

I wrote just about all I could on this subject right above???

Export/Import

VIKAS SANGAR, October 20, 2005 - 9:06 am UTC

Thank you Mr. Kyte.

Your explanation above has made many things clear to me, to a much greater extent. It's just that the more we hear from you, the more curious we get about other related things.

Thank you once again.
Take care...
Regards.
Vikas.



How to exp/imp partial data set from a table?

RB, October 25, 2005 - 5:45 pm UTC

Tom:

I have a number of large tables with daily and monthly network usage data.

Ex: Table A has 365 days worth of network usage data. I have an ID field which is the key (1 to 365).

Table B has some configuration information per day -- meaning for every day I have a configuration snapshot associated with the usage data in table A.

What I would like to do is the following:

exp one or more days worth of usage data from table A and its corresponding configuration snapshot data from table B.

Then delete that data from the tables.

Later I may have to imp this data back into the same tables if the customer asks for that data.

I was thinking of creating a partition based on that ID, taking that partition offline, and bringing it online when they need it.

But the reason the customer wants to do this is to save space on the production DB.

So what they have asked me to do is to exp N days worth of data (the ID key represents the day: 1-365) from tables A and B, move the exported file to another system, and then delete the data from the production tables. If they want the data back, we have to imp it back.

Would you please help with how to do this?

When we do this, after the delete statement, should I do anything to reclaim the space?

Thanks
RB

Tom Kyte
October 26, 2005 - 11:34 am UTC

is partitioning available to you?

if so, partition by date - you can easily take that in and out of the database.

Else, you can use query= with export to export a slice of a table, delete it. Subsequent inserts will reuse that space.
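For example, a sketch using a parameter file to sidestep command-line quoting (table, column, and file names are illustrative). With a file exp_slice.par containing:

tables=a
query="where id between 1 and 30"
file=days_1_30.dmp

you would run:

$ exp userid=u/p parfile=exp_slice.par
SQL> delete from a where id between 1 and 30;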

import performance

dost, November 15, 2005 - 2:32 am UTC

Tom,

I am running an imp job which takes 30 hours. Please suggest something to improve performance:

buffer=26214400 file=mgr_fpa.dmp constraints=y indexes=y fromuser=mgr_fpa touser=mgr_fpa ignore=y log=mgr_fpa_imp.log


Tom Kyte
November 15, 2005 - 8:37 am UTC

commit=n

don't have it commit each batch insert


and if possible, you would import in parallel (having exported in parallel, creating many dump files).

and if possible use datapump with 10g as it can do parallel and direct path operations.

import performance

dost, November 15, 2005 - 12:13 pm UTC

1. You mean I should put commit=n, right?
2. How do I do the export in parallel?

Thanks!

Tom Kyte
November 15, 2005 - 12:21 pm UTC

1) yes

2) by running many exports - each getting a separate set of schema objects (different sets of tables, or different user accounts, whatever - do it yourself parallelism, something data pump in 10g does for you)
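For example (a sketch -- the table lists are illustrative; run these at the same time from separate sessions):

$ exp userid=u/p tables=big_t1,big_t2 file=part1.dmp
$ exp userid=u/p tables=big_t3,big_t4 file=part2.dmp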

import performance

Dost, November 15, 2005 - 12:26 pm UTC

Size of import will be 250GB so do you think I need a really big big big undotablespace if I use commit=n

My concern is what will be the effect of commit=n ?
Can it cause of failure of job because of rollback generation or something else?

Tom Kyte
November 15, 2005 - 12:37 pm UTC

no you won't.

the only thing not committed are the inserts and inserts generate the least amount of undo on unindexed tables. Each table will be committed (because of the subsequent DDL to create the next table) in turn.


At 250g, you really want to rethink using export and import.

Or, do it yourself parallelism...


Or having a long weekend ;)

import performance

dost, November 15, 2005 - 12:42 pm UTC

Ha ha :)
Honestly speaking, this weekend really is long for me :)
Friday 08 am till Sunday 12 pm, and I am the father of a 1 month old baby :))

I would appreciate it if you could explain the concept of a parallel export.

Tom Kyte
November 15, 2005 - 2:14 pm UTC

the concept of the parallel export is a "do it yourself" thing.

You use owner= and export individual schemas

or for big schemas, you use table= and export a couple of tables at the same time, in parallel.

It is all "do it yourself"


(congrats on the new child)

import performance

dost, November 16, 2005 - 12:41 am UTC

Thanks! Tom

right now log_buffer is 10m. Do you think I should change this value to increase the performance of the import?



Tom Kyte
November 16, 2005 - 8:56 am UTC

doubtful. you have a single session importing; lgwr is constantly writing out the log buffer contents as it is -- when it is 1/3 full, every three seconds, and at every commit.

As a newbie...

Tom Fox, November 16, 2005 - 8:20 am UTC

As a newbie, I read the manuals, then listen to others with more experience. When those two collide, I have to ask why. So, is there a certain reason COMMIT=n will not require a lot of undo/rollback?

<QUOTE 9i Manual>
...
Specifying COMMIT=y prevents rollback segments from growing inordinately large and improves the performance of large imports.
...
</QUOTE 9i Manual>

<QUOTE from AskTom>
...
Size of import will be 250GB so do you think I need a really big big big
undotablespace if I use commit=n

My concern is what will be the effect of commit=n ?
Can it cause of failure of job because of rollback generation or something
else?


Followup:

no you won't.
...
</QUOTE from AskTom>

Again, I'm not arguing against your statements but want to understand what is being said and why. Am I missing something here or do I just need more coffee?

Tom Kyte
November 16, 2005 - 5:52 pm UTC

the manuals have it backwards.


commit=y will decrease performance.
commit=y will negatively impact the performance of large imports.



every time IMP commits, you have to wait for lgwr to write redo to disk.  If you load 100,000 rows and commit 10,000 times - you'll wait 10,000 times for that flush.

If you load 100,000 rows and commit ONCE - you'll wait once (and wait just a short period of time, a commit returns in about the same time regardless of the amount of data you are committing).  I've written about (and demonstrated) this lots in Expert Oracle Database Architecture (and expert one on one Oracle)

As for the undo stuff - yes, it will use more undo - but not a HUGE amount.  inserts into unindexed tables generate the *least* amount of undo.  deletes tend to generate the *most*.  The undo for an insert - is just "delete+rowid".  Small.


Here, let's set up a procedure to array insert 100 rows at a time:

ops$tkyte@ORA10GR2> create table t as select * from all_objects where 1=0;

Table created.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> create or replace procedure do_inserts( p_num_times in number )
  2  as
  3          type array is table of t%rowtype index by binary_integer;
  4          l_data  array;
  5          l_value number;
  6  begin
  7          select * bulk collect into l_data
  8            from all_objects
  9           where rownum <= 100;
 10
 11          for i in 1 .. p_num_times
 12          loop
 13                  forall i in 1 .. l_data.count
 14                          insert into t values l_data(i);
 15          end loop;
 16          dbms_output.put_line( (100 * p_num_times) || ' rows inserted.' );
 17  end;
 18  /

Procedure created.

now, if we call it to do 1 or 10 iterations:

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec do_inserts( 1 );
100 rows inserted.

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select used_ublk from v$transaction;

 USED_UBLK
----------
         1

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> exec do_inserts( 10 );
1000 rows inserted.

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select used_ublk from v$transaction;

 USED_UBLK
----------
         1

ops$tkyte@ORA10GR2> commit;

Commit complete.

one block (8k) of UNDO... let's do more:


ops$tkyte@ORA10GR2> exec do_inserts( 1000 );
100000 rows inserted.

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select used_ublk from v$transaction;

 USED_UBLK
----------
        43

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> exec do_inserts( 10000 );
1000000 rows inserted.

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select used_ublk from v$transaction;

 USED_UBLK
----------
       428

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> exec do_inserts( 100000 );
10000000 rows inserted.

PL/SQL procedure successfully completed.

ops$tkyte@ORA10GR2> select used_ublk from v$transaction;

 USED_UBLK
----------
      4268

ops$tkyte@ORA10GR2> commit;

Commit complete.

ops$tkyte@ORA10GR2> select 4268*8/1024 from dual;

4268*8/1024
-----------
   33.34375

so, for 10,000,000 rows - 33.3 mb of undo - not too large....

So, I feel confident in saying it won't grow to outrageous sizes, since imp will

a) create table
b) load table
c) AND THEN index table


You don't need more coffee, but I have another doc bug to file :)

 

import performance

dost, November 16, 2005 - 12:16 pm UTC

Tom,

There is one table import in this job which takes 3-4 hours:
. importing table          "AVENUE_BY_TOMER"   83837644 

SQL> select sid,event   from v$Session_wait;

       SID EVENT
---------- ----------------------------------------------------------------
         1 pmon timer
         2 rdbms ipc message
         3 rdbms ipc message
         6 rdbms ipc message
         7 rdbms ipc message
         4 rdbms ipc message
        11 direct path read
         5 smon timer
        19 SQL*Net message to client
         8 wakeup time manager


sid is 11
 

Tom Kyte
November 16, 2005 - 6:12 pm UTC

could be indexing it... have to read it to index it.

import performance

dost, November 16, 2005 - 1:00 pm UTC

Would it be a good idea if I disable logging of tables and indexes before the export, and drop the indexes of this big table and recreate them once it is imported back?

Thanks!

Tom Kyte
November 16, 2005 - 6:15 pm UTC

typically the table to be imported into doesn't exist???


doing any of this before EXPORT won't have any effect.


I would assume the table and indexes are not in place when you import, if they are - get rid of them? import creates them

import performance

dost, November 16, 2005 - 7:16 pm UTC

Tom,

The objects are already not there.

1. I will disable logging and change the degree of indexes and tables to 5. Is it a good idea?

2. I will drop indexes on big tables.

3. Start the export.
4. Drop the user and tablespaces, then recreate the user and LMTs.
5. Import.
6. Recreate indexes.

Please advise on 1.

Thanks!

Tom Kyte
November 16, 2005 - 7:43 pm UTC

if objects are not already there, there is nothing to disable logging on, there is nothing to change the degree of, there are no indexes to drop.

that, and imp does conventional path inserts -- they are ALWAYS logged.




IMPORT PERFORMANCE

dost, November 17, 2005 - 4:03 pm UTC

Tom,

I am planning to drop the indexes on big tables before the export, and once the import finishes I can recreate the indexes manually.

Please do let me know:
is it a good idea?

Thanks!


Tom Kyte
November 18, 2005 - 9:59 am UTC

why would dropping an index before an EXPORT make sense?


it will not materially affect the export.


If you just leave the index in place, export will export the definition of the index and import will create the index AFTER loading the data.

space management

dost, November 18, 2005 - 1:49 am UTC

create TABLESPACE FPAIND DATAFILE '/REP/indx03/fpaind01.dbf' size 2000m EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO;

Is it a good idea to create a tablespace like the above?

Tom Kyte
November 18, 2005 - 10:35 am UTC

only one answer for a question like this:

a) yes
b) no
c) maybe


or more simply "it depends"


sure, it is ok if you want about 2gig of storage allocated and wish to use system allocated extents (i like them in most cases) with ASSM.

import performance

dost, November 18, 2005 - 2:14 pm UTC

If I drop the indexes, then the import job time will be reduced by the index creation time, and I can create the indexes in parallel later on.

Does that sound good?


Tom Kyte
November 18, 2005 - 3:48 pm UTC

why does the table and index even exist, import creates them


You kept saying above

I am planning to drop index on big tables before export
^^^^^^




why does date disallow array insert?

Joel Garry, November 18, 2005 - 6:58 pm UTC

In the 9i utilities manual under commit it says:

"For tables containing LONG, LOB, BFILE, REF, ROWID, UROWID, or DATE columns, array inserts are not done. If COMMIT=y, Import commits these tables after each row."

I'm wondering what it is about DATE columns that causes that? Aren't they a stable known format? Or another doc bug?

Tom Kyte
November 19, 2005 - 10:08 am UTC

Documentation bug, I believe; using sql_trace we can see that it is array binding with dates.

ops$tkyte@ORA9IR2> create table t as select * from all_objects;

Table created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> host exp userid=/ tables=t

Export: Release 9.2.0.6.0 - Production on Sat Nov 19 09:55:35 2005

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                              T      30818 rows exported
Export terminated successfully without warnings.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace trigger trace_trigger
  2  after logon on schema
  3  begin
  4  execute immediate 'alter session set events ''10046 trace name context forever, level 12'' ';
  5  end;
  6  /

Trigger created.

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table t;

Table dropped.

ops$tkyte@ORA9IR2> host imp userid=/ full=y

Import: Release 9.2.0.6.0 - Production on Sat Nov 19 09:55:36 2005

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"      30818 rows imported
Import terminated successfully without warnings.

ops$tkyte@ORA9IR2> drop trigger trace_trigger;

Trigger dropped.



INSERT /*+NESTED_TABLE_SET_REFS+*/ INTO "T" ("OWNER", "OBJECT_NAME",
  "SUBOBJECT_NAME", "OBJECT_ID", "DATA_OBJECT_ID", "OBJECT_TYPE", "CREATED",
  "LAST_DDL_TIME", "TIMESTAMP", "STATUS", "TEMPORARY", "GENERATED",
  "SECONDARY")
VALUES
 (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13)


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.03          0          0          0           0
Execute    230      0.31       0.71          0        580       2695       30818
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      231      0.32       0.74          0        580       2695       30818

It used an array size of about 134 rows per execute.

If you change the create to:
create table t as select all_objects.*, (select rowid from dual) r from all_objects;

then you would see 30818 executes for 30818 rows (showing that a ROWID column does prevent the array insert).



 

Why level 12? Why not sql_trace=true?

Jagjeet Singh, November 19, 2005 - 11:21 am UTC


Tom Kyte
November 19, 2005 - 1:41 pm UTC

because that is my generic trigger I use and I like to have binds and waits dumped to the trace file.

sql_trace=true would have been sufficient.

Export / Import Between Oracle10g(Solaris) and Oracle9iR2(Windows)

VIKAS SANGAR, November 23, 2005 - 6:37 am UTC

Dear Mr. Kyte

As we know, there is no Oracle9i database version that supports Solaris 10 (or is it the other way round?).


Therefore, all I want to know is: is it possible to export/import a database between two different versions of Oracle (9iR2 for Windows and 10g for Solaris) running on different operating systems, Windows and Solaris 10?

I mean, export/import of DATABASE/USERS/OBJECTS between an Oracle9i database for Windows and an Oracle10g database for Solaris in a bi-directional way (from the 9i Windows database to the 10g Solaris database and vice-versa).

If yes, can you please explain/point out how this can be achieved, and what rules/prerequisites and steps one should follow in order to do so?

Take care, regards...
Vikas.

Tom Kyte
November 23, 2005 - 10:00 am UTC

export/import are platform agnostic - yes, you can export from "X" and import into "Y"



You must use the 9i export tool to create a dmp file that 9i import can read. So, to export from 10g, you'll use 9i export.


However, if you are going to be moving data back and forth and back and forth and back and forth repeatedly - you might consider, well, that you need a single database and be done with it. (or dblinks or something other than export import)


However, metalink says....


Sun Solaris OS 10 Information:

* The following database products are certified on the Sun Solaris 10 operating system:
o Oracle Database 9iR2 (with 9.2.0.6 Patchset) for Sun Solaris SPARC32
o Oracle Database 9iR2 (with 9.2.0.6 Patchset) for Sun Solaris SPARC64
o Oracle Database 10gR1 (with 10.1.0.3 Patchset) for Sun Solaris SPARC64

Export / Import Between Oracle10g(Solaris) and Oracle9iR2(Windows)

VIKAS SANGAR, November 24, 2005 - 1:12 am UTC

Dear Mr. Kyte

Thanks a lot for the above information.

That was all related to the Sun Solaris SPARC 32/64 architecture.

How should people (having their databases, 9i for Windows and 10g for Solaris) working with Windows and Solaris 10 both on the Intel 32-bit architecture handle the situation?

Will the Oracle9iR2 export utility work for the 10g database on Solaris 10, and vice-versa? How can we overcome this platform/version/compatibility issue between Oracle9iR2 (Windows Intel 32-bit) and Oracle10g (Solaris 10 Intel 32-bit)?

Can we get any help on this?

Take Care, regards...

Tom Kyte
November 24, 2005 - 8:48 am UTC

I told you that 9i is on solaris 10???


but anyway, you would use the 9i export tool against "database x" in order to create a dmp file that the 9i import tool can read.


you would use 9i OR 10g export tool against "database x" in order to create a dmp file that 10g can read.


You can also use - well, consolidation (sounds truly like you want a single database), database links, anything you want.

Export / Import Between Oracle10g(Solaris) and Oracle9iR2(Windows)

VIKAS SANGAR, November 24, 2005 - 1:31 am UTC

Hi,
It's me once again, as an annexure to the above...

I would also like to ask (or please provide a link if this query does not match this thread): how can we create DB links for Oracle10g on Solaris 10, and what steps should we follow to make them work with both the 9iR2 Windows database and, of course, 10g for Solaris 10?

Regards.

Tom Kyte
November 24, 2005 - 8:49 am UTC

you just create database links, they work - create database link foo .....;


there is nothing special about this configuration at all.
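For example (a sketch; the link name, credentials, and TNS alias are illustrative):

create database link win9i_link
connect to scott identified by tiger
using 'tns_alias_of_the_9i_db';

select count(*) from emp@win9i_link;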

Export / Import Between Oracle10g(Solaris) and Oracle9iR2(Windows)

VIKAS SANGAR, November 25, 2005 - 12:36 am UTC

Dear Mr. Kyte.

Well, that's great. Thanks a ton for your explanation and the information provided in answer to my queries.

Take Care, Regards..
Vikas.

Import

dost, November 29, 2005 - 2:39 pm UTC

Tom,

If we can afford single file system then what you think how much file system size we need for 225 GB?
and Will it affect the performance?
Will there be any issue on single file of approx 250GB



Tom Kyte
November 30, 2005 - 10:59 am UTC

I did not understand:

If we can afford single file system then what you think how much file system
size we need for 225 GB?


import

Dost, November 30, 2005 - 1:00 pm UTC

My bad...

I meant to say 225 Gb schema size will take How much space?

Tom Kyte
November 30, 2005 - 8:41 pm UTC

sorry, I still don't understand the intent of the question.

a 225gb schema would take - 225gb.

so I must be missing something in the question here.

Metalink note on commit

dost, December 21, 2005 - 6:21 pm UTC


Doc ID: Note:239001.1
Subject: How to Improve IMP Performance for Large Imports
Type: HOWTO
Status: PUBLISHED
Content Type: TEXT/X-HTML
Creation Date: 28-AUG-2002
Last Revision Date: 20-MAY-2003




goal: How to improve IMP performance for large imports
fact: IMPORT



fix:

Use COMMIT=Y either on the command line or in the parameter file.

From the Oracle Utilities documentation.
-----------------------------------------
Specifying COMMIT=Y prevents rollback segments from growing inordinately large
and improves the performance of large imports. Specifying COMMIT=Y is advisable
if the table has a uniqueness constraint. If the import is restarted, any rows
that have already been imported are rejected with a nonfatal error.


Is it ok ?

Tom Kyte
December 21, 2005 - 8:02 pm UTC

It'll slow performance....

I'll note it (add my comment to it)

Exp / imp different operating systems

A reader, April 18, 2006 - 8:57 pm UTC

Tom, is there any way I can use the 8i Tru64 export tool to get data into a 10gR2 Solaris 9 machine?

It always keeps complaining with ORA-03134: Connections to this server version are no longer supported.

Thank you

The goal: export data that is on 8i, Tru64 to 10gR2 Solaris 9.

Tom Kyte
April 19, 2006 - 8:27 am UTC

you can use the 8i export tool to create the dump file and then the 10gr2 imp tool to import it of course.

I know of no reason why using the 8i export would not work - are you sure about what you said above? That would be a message I might expect if the 10g CLIENT (10g export) tried to connect to 8173 or before.

but if you use the 8i export tool and the 10gr2 import tool - it should work

RE: Exp / imp different operating systems

A reader, April 19, 2006 - 8:51 am UTC

Yeah, the problem is that we're running out of space on the server that 8i is on. That is the reason why we can't do the export directly on that server.

Tom Kyte
April 19, 2006 - 9:16 am UTC

nfs, smb - attach some disk temporarily to do this (say from the server that will want to have the dmp file in the first place)

or find another machine with 8i on it (install the software if you have to) and use the 8i export over the network.

Importing full db from Windows to Linux

Ashish, May 02, 2006 - 9:19 am UTC

Hi Tom,

I'm trying to import a full database; the source database is Oracle9iR2 on Windows and the target database is Oracle9iR2 on Red Hat Linux 9. Now my question is: can I import with the command line options FULL=y and DIRECT=y? Do I need to create any tablespaces, users, or other objects in order to import the data successfully? Is there any risk involved?

Thanks a lot,

Ashish

Tom Kyte
May 02, 2006 - 3:31 pm UTC

direct=y does not apply to import, hence - ?

How to import file where export is taken from 10g

A reader, May 25, 2006 - 6:31 pm UTC

How can I import into 9i from a file where the export was taken from 10g?

Tom Kyte
May 26, 2006 - 8:29 am UTC

you get the 9i export tool and create a new dmp file that can be read by the 9i IMP tool.

In other words - you cannot import the 10g DMP file, which is using 10g specific features, using the 9i imp tool.

You can however use the 9i EXP tool against 10g to create a file that the 9i imp tool can read.

How to import file where export is taken from 10g

A reader, May 26, 2006 - 12:56 pm UTC

for that, wouldn't I require a 9i installation on the same server?

Tom Kyte
May 27, 2006 - 9:27 pm UTC

for that you require nothing on that server.

exp userid=u/p@10gdatabase_over_network




Undo Generation While Import

Deepak, December 12, 2006 - 6:24 am UTC

Hi Tom,

Have few very basic questions. Hope you will consider for answering. My questions are:

> While importing a table using COMMIT=N into a schema (where the table did not exist before), what kind of UNDO information will be generated? Will it be the rowids of the newly added rows (to be deleted if a rollback happens), or some other information along with that?

> When exactly is the ROWID generated when we perform an insert? Is it generated at the time of flushing the block from the buffer to disk, or at some stage before? I would like to know how the insert statement is really processed by Oracle (stages of processing).

Tom Kyte
December 12, 2006 - 7:15 am UTC

there will be undo generated for the DDL executed.
then there will be undo generated for the inserts (delete + rowid for each row inserted basically)


the rowid of a row is known from the instant the row is placed on a block. That block belongs to a file and the row occupies some slot on that block - that is all we need - instant rowid.

Exp utility works internally

Suraj Sharma, January 16, 2007 - 11:34 pm UTC

Hi Tom,

Thanks a lot, as always, for helping us understand Oracle better!

When I take an export and then import the same dump with the show=y parameter, imp shows all the DDL statements, but it doesn't show the insert statements. I have two questions:
1. How does the export utility work internally to get all these DDL statements?
2. When I export, it shows something like "20000 rows exported", but when I import with the imp show=y option it shows only DDL statements. How does the imp utility work internally to insert/update the data?

Import to Oracle 10g

A reader, January 29, 2007 - 5:50 am UTC

Hi Tom,

My DBA exported the data from Oracle 9.2.0.6 and tried to import it into Oracle 10gR2 using the exp/imp utilities. The import into 10gR2 failed with the following errors.

An excerpt from the import log file is below.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V09.02.00 via conventional path

Warning: the objects were exported by SERVD4, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
export client uses UTF8 character set (possible charset conversion)
. importing DEFAULT_PROTOCOL_0405_DB's objects into DEFAULT_PROTOCOL_0405_DB
. . importing table "ACTIONS" 97 rows imported
. . importing table "ALERT" 5 rows imported
. . importing table "ALL_LISTS" 11499 rows imported
. . importing table "AUDIT_LOG" 298532 rows imported
. . importing table "AUDIT_PROPERTIES" 298532 rows imported
. . importing table "BPTEST_TO_COMPONENTS" 0 rows imported
. . importing table "BP_ITERATION" 0 rows imported
. . importing table "BP_ITER_PARAM" 0 rows imported
. . importing table "BP_PARAM" 0 rows imported
. . importing table "BP_STEP_PARAM" 0 rows imported
. . importing table "BUG" 9078 rows imported
. . importing table "BUG_MULTIVALUE" 0 rows imported
. . importing table "BUG_TOKENS" 326858 rows imported
. . importing table "CACHE" 0 rows imported
. . importing table "CHANGE" 0 rows imported
. . importing table "CHANGE_COVER" 0 rows imported
. . importing table "CHANGE_ENTRY" 0 rows imported
. . importing table "COMMON_SETTINGS" 40 rows imported
. . importing table "COMPONENT" 0 rows imported
. . importing table "COMPONENT_FOLDER" 3 rows imported
. . importing table "COMPONENT_STEP" 0 rows imported
. . importing table "COMPONENT_STEP_PARAMS" 0 rows imported
. . importing table "CROS_REF" 29100 rows imported
. . importing table "CYCLE" 1889 rows imported
. . importing table "CYCLE_MULTIVALUE" 0 rows imported
. . importing table "CYCL_FOLD" 429 rows imported
. . importing table "DATACONST" 15 rows imported
. . importing table "DESSTEPS" 338291 rows imported
. . importing table "FRAMEWORK_PARAM" 0 rows imported
. . importing table "GROUPS" 13 rows imported
. . importing table "HISTORY"
IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column "DEFAULT_PROTOCOL_0405_DB"."HISTORY"."HS_NEW_VALUE" (actual: 256, maximum: 255)
Column 1 BUG
Column 2 12
Column 3 BG_DESCRIPTION
Column 4 21-JAN-2004:00:00:00
Column 5 14:27:17
Column 6 David Legge
Column 7 On the Welsh information pages, the links to the o...
Column 8
Column 9
IMP-00019: row rejected due to ORACLE error 12899
IMP-00003: ORACLE error 12899 encountered
ORA-12899: value too large for column "DEFAULT_PROTOCOL_0405_DB"."HISTORY"."HS_NEW_VALUE" (actual: 256, maximum: 255)
column "DEFAULT_PROTOCOL_0405_DB"."HISTORY"."HS_NEW_VALUE" (actual: 256, maximum: 255)
....
....
Import terminated successfully with warnings.


Any help is appreciated.
Thanks


Tom Kyte
January 31, 2007 - 1:30 pm UTC


import server uses AL32UTF8 character set (possible charset conversion)
export client uses UTF8 character set (possible charset conversion)


looks like you have multibyte data and in the conversion - one of your strings needs one more byte to be stored in the new characterset.

compress=y parameter

Shivdeep Modi, January 30, 2007 - 6:38 am UTC

Hi Tom,

COMPRESS -- import into one extent (Y).
How do I see it?

/NCLDBA/NV02 > create table t as select * from dba_objects where 1=2;

Table created.

/NCLDBA/NV02 > select segment_name,segment_type,extent_id,block_id,bytes
2 from dba_extents where segment_name = 'T'
3 /

SEGMENT_NAME SEGMENT_TYPE EXTENT_ID BLOCK_ID BYTES
------------------------------ ------------------ ---------- ---------- ----------
T TABLE 0 37625 65536

/NCLDBA/NV02 > alter table t allocate extent;

Table altered.

/NCLDBA/NV02 > /

Table altered.

/NCLDBA/NV02 > /

Table altered.

/NCLDBA/NV02 > /

Table altered.

/NCLDBA/NV02 > select segment_name,segment_type,extent_id,block_id,bytes
2 from dba_extents where segment_name = 'T'
3 /

SEGMENT_NAME SEGMENT_TYPE EXTENT_ID BLOCK_ID BYTES
------------------------------ ------------------ ---------- ---------- ----------
T TABLE 0 37625 65536
T TABLE 1 54369 65536
T TABLE 2 54385 65536
T TABLE 3 54937 65536
T TABLE 4 54961 65536

/NCLDBA/NV02 > host exp file=t.dmp tables=t

Export: Release 9.2.0.8.0 - Production on Tue Jan 30 11:29:00 2007

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Username: ncldba
Password:

Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning option
JServer Release 9.2.0.8.0 - Production
Export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table T 0 rows exported
Export terminated successfully without warnings.

/NCLDBA/NV02 > host grep -i table t.dmp
RTABLES
TABLE "T"
CREATE TABLE "T" ("OWNER" VARCHAR2(30), "OBJECT_NAME" VARCHAR2(128), "SUBOBJECT_NAME" VARCHAR2(30), "OBJECT_ID" NUMBER, "DATA_OBJECT_ID" NUMBER, "OBJECT_TYPE" VARCHAR2(18), "CREATED" DATE, "LAST_DDL_TIME" DATE, "TIMESTAMP" VARCHAR2(19), "STATUS" VARCHAR2(7), "TEMPORARY" VARCHAR2(1), "GENERATED" VARCHAR2(1), "SECONDARY" VARCHAR2(1)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "TOOLS" LOGGING NOCOMPRESS

/NCLDBA/NV02 >

The exp dmp file still has initial=65536. Wasn't it supposed to have 65536*5 as the initial extent?

Regards,
Shivdeep
Tom Kyte
January 31, 2007 - 1:06 pm UTC

I'll guess: you are using locally managed tablespaces, right? (initial, next, pctincrease - phooey - you don't need or want them with locally managed tablespaces)

A reader, February 01, 2007 - 9:49 am UTC

Hi Tom,

"import server uses AL32UTF8 character set (possible charset conversion)
export client uses UTF8 character set (possible charset conversion)


looks like you have multibyte data and in the conversion - one of your strings needs one more byte to be stored in the new characterset."

What do you reckon should be done to facilitate the import?
Tom Kyte
February 01, 2007 - 1:15 pm UTC

do you want to change charactersets like that?

if so, your data doesn't fit, create the table (pre-create) with a larger size.
if not, use the same characterset.
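For illustration, a minimal sketch of the pre-create approach. The column list here is cut down to the one column that failed in the log above - the real HISTORY table would need all of its columns - and the userid/file names are placeholders. Character-length semantics (available since 9i) give the converted multibyte data room to grow:

create table history
( hs_new_value varchar2(255 char)   -- 255 characters, not 255 bytes
);

imp userid=u/p file=expdat.dmp tables=history ignore=y

ignore=y makes imp skip the "table already exists" error and load the rows into the table you pre-created.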

Previously asked question

Suraj Sharma, February 13, 2007 - 5:27 am UTC

Hi Tom,

I asked a question on this blog before.

When I take an export and then import the same dump with the show=y parameter, it shows all the DDL statements but it doesn't show the insert statements. I have two questions:
1. How does the export utility work internally to get all these DDL statements?
2. The export reports things like "20000 rows exported", but when I import with the show=y option it shows only DDL statements. How does the imp utility work internally to insert/update the data?


compress=y parameter again

Shivdeep Modi, March 05, 2007 - 5:48 am UTC

Hi,

Yes, I am using locally managed tablespaces. How did you deduce that? And does this mean compress=y will not cause the table segments to be placed in a single extent for locally managed tablespaces? I guess for dictionary managed tablespaces it should.

Regards,
Shivdeep
Tom Kyte
March 05, 2007 - 1:09 pm UTC

there is no concept of "extent compression" with locally managed tablespaces.

You either have

a) uniform (they are all the same size)
b) system allocated (you have NO CONTROL over the size)

it is a meaningless concept with locally managed tablespaces.

Not that segments should be in a single extent anyway - a single extent is not even desirable.
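To see the two flavors side by side - a sketch, the tablespace names and file paths are made up:

create tablespace demo_uniform
datafile '/tmp/demo_uniform.dbf' size 10m
extent management local uniform size 1m;    -- every extent is exactly 1m

create tablespace demo_auto
datafile '/tmp/demo_auto.dbf' size 10m
extent management local autoallocate;       -- the system picks the extent sizes

With uniform, every extent is the stated size; with autoallocate, the system chooses. In neither case can compress=y collapse a segment into one large extent of your choosing.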

problem in import with Indexes=n

prakash, April 06, 2007 - 7:03 am UTC

Hi Tom,
I have a full database export file from an Oracle 9i database. I am trying to import it into another database with the same structure, but it takes a long time. I was advised to add the indexes=n and indexfile= parameters to the import command, and the import then finished quickly - but the log file showed some of the tables were skipped. I then manually ran the index file against the database;
those tables were created, but they are empty. How do I retrieve the data?

Thanks in Advance
Tom Kyte
April 06, 2007 - 11:43 am UTC

import the tables, use tables= on the import.
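As a sketch of that - the dump file, schema and table names below are placeholders for the ones in your log:

imp userid=u/p file=full.dmp fromuser=scott touser=scott tables=(emp,dept) ignore=y

ignore=y tells imp to ignore the "table already exists" error and load the rows into the empty tables you created from the index file.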

How to export & import?

cPiyush, April 18, 2007 - 10:03 am UTC

Hi.

1. I have created a .dmp file from a db 'Sample'.

Then I created another db & imported that .dmp file into the new database,
but the tables (which I created manually) in the Sample db are not showing up in my new db.

Can you please tell me how to take tables from one db to another?

2. One more problem: I have some *.sql files, and I want to create a database from these files. How can I do this? Will import work? If yes, how?


Tom Kyte
April 18, 2007 - 12:28 pm UTC

1) review your imp log and determine what happened, if there was an error - it would be recorded. import showed you exactly what it was doing as it did it.

2) that does not compute, don't know what you mean.

Export *.sql files to DB

cPiyush, April 19, 2007 - 5:41 am UTC

Hi.

Your advice was valuable, I got the solution to my first problem.

Explaining my second problem:

I have several *.sql files (employee.sql, project.sql, etc.) for a DB (say MyDepartment).
I want to load their data into that DB.

I am trying the sqlplus @<file location> command
(e.g. sqlplus @c:\Data\employee.sql,
then sqlplus @c:\Data\project.sql,
etc.).

How do I do this?


Tom Kyte
April 19, 2007 - 7:22 am UTC

ummm, a .sql file is a script. It contains sql statements. I'm not sure what yours contain.


sqlplus is not a "data loading tool"

sqlldr on the other hand....
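For example, if employee.sql actually contains rows of data rather than SQL statements, a minimal sqlldr sketch might look like this - the file, table and column names are invented for illustration:

-- employee.ctl
LOAD DATA
INFILE 'c:\Data\employee.dat'
INTO TABLE employee
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, hiredate DATE "DD-MON-YYYY")

C:\> sqlldr userid=scott/tiger control=employee.ctl log=employee.log

If the files really do contain INSERT statements, then sqlplus @file is the right way to run them - they are scripts, not something to "import".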

export-import

cPiyush, April 20, 2007 - 6:42 am UTC

Hi Tom,
Here is one more problem for you.

We are using Oracle 9i.
We have been given a dump file (mighty.dmp). We have created a new DB on our system & are trying to import the given DB's data into the local DB.

We are giving the following command:

C:\oracle\ora92\bin>imp mighty1/mighty1 file=mighty.dmp FROMUSER=MIGHTY_TEST TOUSER=mighty1

Import: Release 9.2.0.1.0 - Production on Fri Apr 20 16:08:11 2007

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Release 9.2.0.1.0 - Production
JServer Release 9.2.0.1.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path

Warning: the objects were exported by MIGHTY_TEST, not by you

import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
Import terminated successfully without warnings.


Please help us - what should we do?
Tom Kyte
April 20, 2007 - 7:18 am UTC

looks good to me.

looks like mighty_test didn't have anything to contribute.

Import/Export

cPiyush, April 23, 2007 - 1:22 am UTC

Hi Tom,

Please tell me: when we import a *.dmp file of a FULL DB, will it create all the users, roles, tablespaces etc. automatically, or will we have to create them manually in our target DB?


Tom Kyte
April 23, 2007 - 4:28 pm UTC

if you have a full database export, it has the create users, create tablespaces - everything.

A reader, April 28, 2007 - 2:17 am UTC

Hi Tom,

Is it possible to export/import tables across users?
For example:
Tables "product" and "prices" From user "user1" to "user2" in the same database

Thanks in advance.

Tom Kyte
April 29, 2007 - 9:18 am UTC

exp userid=user1 tables=(product,prices)
imp userid=user2 full=y touser=user2


ops$tkyte%ORA10GR2> drop user user1 cascade;

User dropped.

ops$tkyte%ORA10GR2> drop user user2 cascade;

User dropped.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create user user1 identified by user1 default tablespace users quota unlimited on users;

User created.

ops$tkyte%ORA10GR2> create user user2 identified by user2 default tablespace users quota unlimited on users;

User created.

ops$tkyte%ORA10GR2> grant create table, create session to user1;

Grant succeeded.

ops$tkyte%ORA10GR2> grant create table, create session to user2;

Grant succeeded.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> connect user1/user1
Connected.
user1%ORA10GR2> create table t1 as select 1 x, 'y' y, sysdate z from dual;

Table created.

user1%ORA10GR2> create table t2 as select 1 x, 'y' y, sysdate z from dual;

Table created.

user1%ORA10GR2> create table t3 as select 1 x, 'y' y, sysdate z from dual;

Table created.

user1%ORA10GR2>
user1%ORA10GR2> !exp userid=user1/user1 tables=(t1,t2)

Export: Release 10.2.0.3.0 - Production on Sun Apr 29 09:22:57 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                             T1          1 rows exported
. . exporting table                             T2          1 rows exported
Export terminated successfully without warnings.

user1%ORA10GR2> !imp userid=user2/user2 full=y touser=user2

Import: Release 10.2.0.3.0 - Production on Sun Apr 29 09:22:59 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path

Warning: the objects were exported by USER1, not by you

import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing USER1's objects into USER2
. . importing table                           "T1"          1 rows imported
. . importing table                           "T2"          1 rows imported
Import terminated successfully without warnings.

user1%ORA10GR2>
user1%ORA10GR2> connect user2/user2
Connected.
user2%ORA10GR2> select * from t1;

         X Y Z
---------- - ---------
         1 y 29-APR-07

user2%ORA10GR2> select * from t2;

         X Y Z
---------- - ---------
         1 y 29-APR-07

user2%ORA10GR2>

A reader, April 30, 2007 - 3:14 am UTC

After importing from a .dmp file, the character "»" is replaced with ">" in a stored procedure. How do we make sure special characters are not replaced in procedures by import/export?
Tom Kyte
April 30, 2007 - 9:36 am UTC

did you change character sets - when you exported, was the character set of the exp environment the same as the database, when you imported - same question.

A reader, May 02, 2007 - 12:38 am UTC

The character set is the same; the database character set is WE8ISO8859P1.
Tom Kyte
May 02, 2007 - 8:10 am UTC

are you sure, do you have the logs from the exp and imp - do this as a test for us

create a schema, put the procedure in there
do the export of just that schema
do the import of just that schema
verify problem exists

and save the exp/imp headers (which'll tell us if character set translation happened).

that the databases have the same character set doesn't tell us much. It is all about the client that does the exp/imp
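The client character set is controlled by the NLS_LANG environment variable. A sketch of making the exp client match the WE8ISO8859P1 database mentioned above (Unix shell shown; on Windows use set) - the schema name is an example, and the same setting applies to the imp session:

$ NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1; export NLS_LANG
$ exp userid=scott/tiger owner=scott file=scott.dmp

When the client and database character sets match, the exp/imp headers will not print any "(possible charset conversion)" lines.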

upgrade 9i to 10g

mohammed, June 07, 2007 - 3:59 am UTC

Dear Sir,
I want to upgrade a 9i database to 10g installed on a new server, and I have a dump file taken by the exp utility. What are the requirements for the upgrade, and what happens to the existing users in the new (10g) database?
Tom Kyte
June 07, 2007 - 2:44 pm UTC

all that is required is:

you read the upgrade guide, it tells you everything you need to know.

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14238/toc.htm


I would almost certainly not want a dmp file from exp, I would want a 9i database to upgrade.

database upgrade

mohammed, June 11, 2007 - 3:30 am UTC

Hi Tom,
Actually I have a huge production Oracle 9i database connected to many other databases using dblinks, and I want to upgrade it to Oracle 10g, but on a new server. I'm worried about those dblinks - please tell me the best way to upgrade this database while taking care of the dblinks.
Tom Kyte
June 11, 2007 - 10:00 am UTC

why are you worried about dblinks? they upgrade.

A reader, June 12, 2007 - 1:01 am UTC

Should I connect the new server to the remote databases before I import into the new server? If it's not connected, the dblinks will give an error (cannot connect to the specified database), and the remote databases are production databases, so something might go wrong while importing. What's the best way to minimize the errors? I really have very big and important data.
Thanks a lot.
Tom Kyte
June 12, 2007 - 9:58 am UTC

just upgrade???

do not import, upgrade. do it right.

mohammed, June 12, 2007 - 12:31 pm UTC

I already bought new servers with good specifications, so my only choice is to move to the new servers. How do I prepare the new server to connect to the other databases before the import?
Tom Kyte
June 12, 2007 - 2:04 pm UTC

upgrade to 10g

do cross platform transports of datafiles, or if possible (same byte order on the servers) full database transports

answer will persist: upgrade.
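For reference, a 10g cross-platform tablespace transport is roughly this shape - the tablespace, directory, platform string and paths below are examples only, and both databases must already be 10g (v$transportable_platform lists the supported platforms):

SQL> alter tablespace users read only;

$ expdp system/manager directory=dpump_dir dumpfile=tts.dmp transport_tablespaces=users

RMAN> convert tablespace users
      to platform 'Linux IA (32-bit)' format '/stage/%U';

(copy the converted datafiles and tts.dmp to the new server, then)

$ impdp system/manager directory=dpump_dir dumpfile=tts.dmp transport_datafiles='/u01/oradata/users01.dbf'

If both servers have the same byte order, the RMAN convert step can be skipped, and a full database transport considered instead.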

mohammed, June 14, 2007 - 2:07 am UTC

Hi Tom,
Sorry, I don't have much DBA experience. Could you please help me further by giving me the steps for cross-platform transports of datafiles?
Thanks a lot.

Export tables limit by time

Doan Manh Tien, June 20, 2007 - 2:43 am UTC

Hi Tom!
I am now the DBA for my office. I want to export data from a table in my database, but I don't want to export all of the data - just the rows from a given week, month, or fixed time range.
Please give me your solutions for this. (My database is not partitioned.)
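One approach: classic exp has a QUERY parameter that restricts the rows exported. A sketch, assuming a hypothetical ORDERS table with an ORDER_DATE column; the query is best put in a parameter file (say week.par) to avoid shell quoting problems:

tables=orders
query="where order_date >= sysdate - 7"
file=orders_week.dmp

$ exp scott/tiger parfile=week.par

Note that QUERY forces a conventional-path export, and the same predicate is applied to every table in the list.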





FROMUSER

Deepak, June 20, 2007 - 7:35 am UTC

Hi Tom,

I have one 9i database export dump file with me. I want to know what all source schemas(schema name) exist in the dump file.

Is there any way to list the users of the export dump file?

Tom Kyte
June 20, 2007 - 10:49 am UTC

$ grep --text ^CONNECT expdat.dmp | sort -u
CONNECT OPS$TKYTE
CONNECT SCOTT

is one approach - using show=y with a filter would be another.
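The show=y variant could look like this - the user and file names are examples:

$ imp userid=system/manager file=expdat.dmp full=y show=y log=contents.log
$ grep "importing" contents.log | sort -u

show=y makes imp list what it would do without actually doing it; the ". importing SCOTT's objects into SCOTT" lines in the log give you the schemas.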

ORA-00959:

A reader, July 10, 2007 - 9:29 am UTC

Tom,
I get the following error while trying to import the Production dump in the Test Env.
ORA-00959: tablespace 'tablespace_name' does not exist
I can work around it by creating the small tablespaces mentioned in the errors and dropping them after the import completes. The problem is that there are over 100 tablespaces involved, so creating them would be a real pain. Any other workaround? Thanks.
Tom Kyte
July 10, 2007 - 12:01 pm UTC

impdp (data pump) available with 10g can remap tablespaces.

prior to that, if you have a multi-segment object ( partitioned tables, tables with lobs, IOTs with overflows...) you need to have the tablespaces - because import will only rewrite the FIRST tablespace reference - not the rest.

you could use dbms_metadata to extract the ddl from source (without tablespaces) and run that in target and then import with ignore=y
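Sketches of both suggestions, with example names - and note that impdp reads only expdp dump files, not ones written by the old exp:

$ impdp system/manager directory=dpump_dir dumpfile=prod.dmp full=y remap_tablespace=TS_PROD:USERS

For the dbms_metadata route, you can strip the tablespace clauses while extracting:

SQL> exec dbms_metadata.set_transform_param( dbms_metadata.session_transform, 'TABLESPACE', false );
SQL> select dbms_metadata.get_ddl( 'TABLE', 'T', 'SCOTT' ) from dual;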

A reader, July 12, 2007 - 10:37 am UTC

Many thanks Tom for your suggestion

separation of depts

Dawar, July 16, 2007 - 12:07 pm UTC

Tom,

We have the following departments in our company, called ABC: 1 through 10.

Now we have a separation; we are at the initial stage.
Departments 7 to 10 will go to the company XYZ.

Currently, we run our employee status program through the application server. We wrote several subprograms which produce the desired results; it is a Forms & Reports based program. The application server runs it against our employee database, which lives on our database server.

On a weekly basis we receive data from corporate (in .dat files), which we load into our database.

All these .dat files from corporate contain information for all departments (1 to 10), i.e. both ABC and XYZ.

Now we need to run our status program only for ABC (departments 1 to 6).

Please note the separation is not final yet; we still want to keep departments 1 to 10,
but to run this program we need to use departments 1 to 6 only.

How do we achieve this goal?

After running the status program we want to go back to using all departments (1 to 10) in our database. How do we achieve that?

Can we achieve this through export/import? If yes, how?

Later on we will completely separate our departments.

Dawar

Tom Kyte
July 17, 2007 - 11:29 am UTC

my car won't start.

why not?

we are even now.

IMPORT frozen

Robert, August 17, 2007 - 7:03 am UTC

Hallo Tom
One of my test databases is saved each night with all schemas in a full dump.
There are two schemas in that database that have the same structure.
Let me call them SCHEMA1 and SCHEMA2.
In both schemas you can find a table TB_xxx.

I created a new schema RECOVER and tried to import this table by using the
following command:

imp USERID='/@database as sysdba'
fromuser=SCHEMA1
touser=RECOVER
tables=TB_xxx
file=full_export.dmp
log=Recover.log
grants=Y
rows=Y

With the parameter fromuser=SCHEMA1 it works properly.

With the parameter fromuser=SCHEMA2 it stops here,
without any error message:

Import: Release 9.2.0.7.0 - Production on Fri Aug 17 12:42:47 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
Export file created by EXPORT:V09.02.00 via direct path
Import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

If I take a look at the running session, I can see this statement, but nothing happens:
/* Formatted on 2007/08/17 12:45 (Formatter Plus v4.8.7) */
ALTER SESSION SET EVENTS '25475 TRACE NAME CONTEXT FOREVER, LEVEL 1'

What's up here ?

Greetings
Robert
Tom Kyte
August 22, 2007 - 9:10 am UTC

....
One of my test databases is saved each night with all schemas in a full
dump.
....

Using my universal translator I'll read that as:

we mistakenly believe EXP is a backup tool, to be used to backup databases and are using EXP as our "backup"

Well- we can stop right there, EXP is NOT a backup tool, a dmp file is not a backup, it is a logical copy of data in the database and NOT suitable for a proper backup.

Only - well - a BACKUP is - you are using the WRONG tool. You need to use RMAN and make real backups.



Now that said, I'll guess:

import is NOT frozen, you are just not patient enough. The data in the dmp file is accessed sequentially - and schema1 must be near the "front" of the file and schema2 near the "end" of the file. You just have not given imp sufficient time to read and process (parse) the contents of the dmp file and get to the schema2 data.

Reason ?

Chaman, September 17, 2007 - 11:49 am UTC

Hello Tom,
The import of a 5.5 GB export dump took more time than expected. I think it could be because of one (or more) following reasons:

1. Database was in Archival log mode
2. The destination table had 4 CLOB columns and 550,000 rows were imported. Imports of tables with no CLOB columns were very fast.
3. There was a unique key index on the table even though the table was empty before importing.

I also observed that the first 50,000 rows were imported very fast, and the import slowed down as it progressed.

Will having a unique index slow down the import process?


Tom Kyte
September 18, 2007 - 3:20 pm UTC

I'll vote for number 4

4) unreasonable expectations

and there are an unlimited number of other possible reasons too...


1) not likely, when correctly set up - archiving affects database runtime performance by 0%. It happens in the background (arch) and is not very cpu intensive - it does IO but given sufficient IO (and we do pay attention to our logging set up disk wise right....) it won't cause a pause (you would see "cannot allocate new log, archival required" in the alert log)

2) that could have something to do with it as lobs and longs will cause IMP to do things ONE ROW AT A TIME.

3) that isn't a good idea, especially in light of #2
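Following on from #3, one common pattern is to take the unique constraint offline for the load and rebuild it afterwards - a sketch, the table and constraint names are invented:

SQL> alter table big_t disable constraint big_t_uk;   -- drops the index the constraint created
-- run the import here --
SQL> alter table big_t enable constraint big_t_uk;    -- rebuilds the index in one pass

Building the index once at the end is much cheaper than maintaining it row by row while the CLOB rows trickle in.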



A reader, October 05, 2007 - 7:45 pm UTC

What is the benefit of using the parameter 'owner=xxx' in EXP/IMP?

I read in "Expert one-on-one Oracle":

owner parameter: This lets you specify a list of schemas to export. Useful for cloning a schema or to 'rename' a user.

But I am still confused... please explain with at least a small example.

Thanks in advance,
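As a sketch of the "clone/rename" use the book describes (the schema names are examples, and the target user must be created first):

$ exp userid=system/manager owner=scott file=scott.dmp
$ imp userid=system/manager file=scott.dmp fromuser=scott touser=scott_copy

owner= limits the export to the listed schema(s); the fromuser/touser pair on import is what effectively "renames" SCOTT to SCOTT_COPY.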

Import into the same Database structure

Godwin Ofoe, April 15, 2008 - 7:48 am UTC

Hi Tom,
Thanks for the explanations. Kindly clarify this: if I have the same database structure on two separate boxes, but say box B is a day behind box A, can I use the export from box A to update box B so that the two boxes are at par?
Tom Kyte
April 16, 2008 - 2:43 pm UTC

export is a logical data copy tool, it does full copies pretty much.

it is NOT a data synchronization tool at all - it would not be appropriate to even consider exp/imp to synchronize two databases.

Importing exporting from 9i to 10g

Shrikant, May 01, 2008 - 12:45 pm UTC

Hi Tom,

I am trying to export a 9i database using the Oracle export utility from 10g, and it is giving me a "PLS-00302: component 'SET_NO_OUTLINES' must be declared" error.

What is the best way to do this?

Regards
Shrikant
Tom Kyte
May 01, 2008 - 2:57 pm UTC

you do not do that

don't even try, it doesn't work that way.

you use 9i export against 9i
you use 9i import against 9i
you use 10g import against 10g (even with a 9i dmp file)

you can use 9i export against 10g


you cannot use 10g against 9i

Thank You Tom.

A reader, May 01, 2008 - 3:24 pm UTC


Sam, June 24, 2008 - 2:46 am UTC

Dear Tom,
How are you? My problem is: I have 3 separate 10g databases in three locations and one master database at another location. I want to replicate the data manually - I don't want to use online replication because I cannot connect all the databases via the internet or any other means. What should I do so that the primary keys do not conflict?
And second: how can I transfer only the changed data to the master site, given that all the databases have the same structure?
Please advise.

Thanks
Sam