export and import best option to use in practical environment


Question and Answer

Connor McDonald

Thanks for the question, Neeraj.

Asked: December 02, 2004 - 8:18 am UTC

Last updated: September 04, 2018 - 5:12 am UTC

Version: 9.2.0

Viewed 50K+ times!

You Asked

Dear Tom,
Please provide all options as a DBA... I am an inexperienced DBA...

List all options that I should use with exp and imp, and

"
while importing a table with a rowid col, does Oracle do a commit after
each row...
and what if the target table has a varchar2 col corresponding to that
rowid col in the source table...

"

Thanks!


and Tom said...

we just happen to document this stuff -- believe it or not.


if there were a single set of options you should always use -- they would not be options. exp/imp is just a tool, with options and features. You decide which are appropriate for you at which points in time.


no idea what you mean in the last paragraph. Oracle only commits when an application says "please commit". Import will issue a commit either for every batch of rows it inserts (commit=y) or at the end of a table (commit=n). commit=n is more performant and what I would prefer to use.
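For illustration only (not part of the original answer, and the schema/file names are made up), a typical import run that relies on the end-of-table commit behaviour described above might look like this:

imp userid=scott/tiger file=expdat.dmp fromuser=scott touser=scott tables=emp commit=n buffer=1048576 log=imp_emp.log

With commit=y the same run would commit after every buffer-sized batch of rows instead of once per table -- slower, but it keeps the undo/rollback used by any single transaction small.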


Please read:

http://docs.oracle.com/cd/B10501_01/server.920/a96652/toc.htm

tons of info in there regarding exp and imp.


Rating

  (131 ratings)


Comments

Patch

Rekha, December 02, 2004 - 9:42 am UTC

Hi Tom,

Please provide details on patching of Oracle versions.


Tom Kyte
December 02, 2004 - 11:27 am UTC

expdp

friend, December 03, 2004 - 6:10 pm UTC

Hi Tom,
I have reached my dreamland...
I am facing the following problem with Data Pump:

Connect sys as sysdba
SQL> create user test identified by test;
SQL> grant dba to test;
SQL> create directory expdp as '/oracle/expdp';
SQL> create table t as select * from dba_objects;
SQL> execute dbms_stats.gather_table_stats(ownname => 'TEST', table_name => 't', degree => 3);
SQL> select count (1) from t;

  COUNT (1)
----------
    380776

Login to operating system


C:\>expdp test/test tables=t directory=expdp dumpfile=texp.dmp logfile=texp.log

Export: Release 10.1.0.2.0 - Production on Friday, 03 December, 2004 17:55

Copyright (c) 2003, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
Starting "TEST"."SYS_EXPORT_TABLE_01":  test/******** tables=t directory=expdp dumpfile=texp.dmp logfile=texp.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 42 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "TEST"."T"                                  35.50 MB  380776 rows
Master table "TEST"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEST.SYS_EXPORT_TABLE_01 is:
  D:\EXPDP\TEXP.DMP
Job "TEST"."SYS_EXPORT_TABLE_01" successfully completed at 17:55


C:\>

Import of test.t into some other schema, let's say Ora1 ....

C:\>impdp ora1/ora1 tables=t  directory=expdp dumpfile=texp.dmp logfile=timp.log

Import: Release 10.1.0.2.0 - Production on Friday, 03 December, 2004 18:04

Copyright (c) 2003, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
ORA-31655: no data or metadata objects selected for job
ORA-39039: Schema expression "IN ('ORA1')" contains no valid schemas.
Master table "ORA1"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ORA1"."SYS_IMPORT_TABLE_01":  ora1/******** tables=t directory=expdp dumpfile=texp.dmp logfile=timp.log
Job "ORA1"."SYS_IMPORT_TABLE_01" successfully completed at 18:04


C:\>sqlplus ora1/ora1

SQL*Plus: Release 10.1.0.2.0 - Production on Fri Dec 3 18:05:20 2004

Copyright (c) 1982, 2004, Oracle.  All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options

SQL> select * from t;
              *
ERROR at line 1:
ORA-00942: table or view does not exist 

Tom Kyte
December 04, 2004 - 10:51 am UTC

exported TEST.T
importing as ORA1

asking for table T, which is table ORA1.T which doesn't exist.......

friend, December 04, 2004 - 6:26 pm UTC

C:\>impdp test/test tables=t directory=expdp dumpfile=texp.dmp logfile=timp.log

Import: Release 10.1.0.2.0 - Production on Saturday, 04 December, 2004 18:23

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
Master table "TEST"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_TABLE_01": test/******** tables=t directory=expdp dumpfile=texp.dmp logfile=timp.log
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-31684: Object type TABLE:"TEST"."T" already exists
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Job "TEST"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 18:23

It's the same database... obviously this error will come...

It means we can't use expdp/impdp to import a table into the same database,
as in expdp from TEST and impdp into TEST... doesn't make sense...

Please advise

Thanks!

Tom Kyte
December 04, 2004 - 8:22 pm UTC

you want the equivalent of "fromuser", "touser" was my point.

see "schemas" and "remap_schema" parameters.

IMPDP

A reader, December 05, 2004 - 10:37 am UTC

It took some time to find the exact syntax:

IMPDP --> remap_schema=test:ora1
Thanks!

C:\>impdp ora1/ora1 tables=t dumpfile=texp.dmp DIRECTORY=EXPDP REMAP_SCHEMA=TEST:ORA1

Import: Release 10.1.0.2.0 - Production on Sunday, 05 December, 2004 10:31

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
ORA-31655: no data or metadata objects selected for job
ORA-39039: Schema expression "IN ('ORA1')" contains no valid schemas.
Master table "ORA1"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ORA1"."SYS_IMPORT_TABLE_01": ora1/******** tables=t dumpfile=texp.dmp DIRECTORY=EXPDP REMAP_SCHEMA=TEST:ORA1

Job "ORA1"."SYS_IMPORT_TABLE_01" successfully completed at 10:31


C:\>impdp TEST/TEST tables=t dumpfile=texp.dmp DIRECTORY=EXPDP REMAP_SCHEMA=TEST:ORA1

Import: Release 10.1.0.2.0 - Production on Sunday, 05 December, 2004 10:32

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Data Mining options
Master table "TEST"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "TEST"."SYS_IMPORT_TABLE_01": TEST/******** tables=t dumpfile=texp.dmp DIRECTORY=EXPDP REMAP_SCHEMA=TEST:ORA1

Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
. . imported "ORA1"."T" 35.50 MB 380776 rows
Job "TEST"."SYS_IMPORT_TABLE_01" successfully completed at 10:32



10.1.0.3

A reader, January 27, 2005 - 11:42 pm UTC

Tom,

1. Some notes on Metalink mention (I don't have the doc ID on top of my head) that we cannot use the mknod option with expdp; if so, is there a workaround?
2. Why do I get this error while using userid='/ as sysdba'?

ORA-39002: invalid operation
ORA-39135: SYS schema cannot read data using specific SCNs

Does that mean I cannot perform a full expdp as SYS?

Thanks.



Tom Kyte
January 28, 2005 - 7:16 am UTC

1) why do you need mknod with expdp? With exp and its limited capabilities at times -- sure, but what do you need mknod for?

2) sounds like you are using it with other options.

A reader, January 28, 2005 - 9:19 am UTC

Tom,

The reason for using mknod is to compress export dump while expdp is going on.

Thanks.

Tom Kyte
January 28, 2005 - 2:17 pm UTC

the architecture is totally different. the file to dump to cannot "pre-exist".

it won't be happening. the old exp is still there (but if you are using expdp to "backup", you got it really wrong, these are not backup tools -- they are data transfer tools. If you want to move data from database1 to database2 -- you don't even need a dmp file)
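As a sketch of that last point (the link and account names are made up, not from the original answer), a direct database-to-database transfer over a database link needs no dump file at all:

-- run against the target database, assuming a db link SOURCE_DB pointing at database1
impdp scott/tiger network_link=source_db schemas=scott directory=dump_dir logfile=net_imp.log

impdp pulls the data straight across the link; the directory object is only needed for the log file.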

Re:

Js, March 22, 2005 - 2:04 am UTC

Hi,
exp and imp are handy tools for a DBA. Is there any way to
exclude a single schema in a full refresh?

I don't want to overwrite my procedures or functions for
a particular schema.

Thanks,


Tom Kyte
March 22, 2005 - 11:08 am UTC

fromuser, touser to selectively import.

Need other things

Js, March 22, 2005 - 11:19 am UTC

ok .
If I need all objects except old procedures and functions.


Thanks,
Js


Tom Kyte
March 22, 2005 - 5:58 pm UTC

exp/imp won't be the tool you use then, or you'll be doing object level imports (import tables a, b, c.....)

impdp and expdp - maybe in 10g with its slightly more advanced filtering.
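For example (a hedged sketch with made-up names), the Data Pump EXCLUDE parameter gives the kind of filtering the original exp/imp lacked:

expdp scott/tiger schemas=scott directory=dump_dir dumpfile=scott_no_code.dmp exclude=procedure,function,grant,index

The original exp/imp only offered coarse switches such as indexes=n, constraints=n and grants=n.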

remap_schema

Sami, August 07, 2005 - 4:38 pm UTC

Z:\OraDocs\10g\Work>impdp sami/sami remap_schema=scott:asktom dumpfile=scott_schema_exp.dmp logfile=imp_to_asktom.log

Import: Release 10.1.0.2.0 - Production on Sunday, 07 August, 2005 16:15

Copyright (c) 2003, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Master table "SAMI"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SAMI"."SYS_IMPORT_FULL_01": sami/******** remap_schema=scott:asktom d
umpfile=scott_schema_exp.dmp logfile=imp_to_asktom.log
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/SE_PRE_SCHEMA_PROCOBJACT/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "ASKTOM"."SYS_EXPORT_TABLE_01" 54.91 KB 337 rows
. . imported "ASKTOM"."DEPT" 5.656 KB 4 rows
. . imported "ASKTOM"."EMP" 7.820 KB 14 rows
. . imported "ASKTOM"."SALGRADE" 5.585 KB 5 rows
. . imported "ASKTOM"."BONUS" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/SE_POST_SCHEMA_PROCOBJACT/PROCOBJ
ORA-39083: Object type PROCOBJ failed to create with error:
ORA-27486: insufficient privileges
Failing sql is:
BEGIN
dbms_scheduler.create_program('"CHECK_ALERT_LOG_ERRORS"','EXECUTABLE',
'C:\oracle\product\10.1.0\WorkArea\check_alert.bat'
,0, TRUE,
'Email DBA if errors in the alert file'
);COMMIT; END;


Job "SAMI"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 16:16


Z:\OraDocs\10g\Work>



Questions:

1) A new user ASKTOM is created and all the objects are imported from SCOTT to ASKTOM.
What would be the password for ASKTOM?

2) What privileges will be carried over from SCOTT to ASKTOM?
I've noticed that only ROLE PRIVILEGES are carried over from SCOTT to ASKTOM. The object level privileges are NOT carried over to the new schema owner.



Tom Kyte
August 07, 2005 - 5:16 pm UTC

1) you have to set it
http://docs.oracle.com/docs/cd/B14117_01/server.101/b10825/dp_import.htm#sthref324

2) the necessary object privileges should be there - well, system privileges (create table, etc). It depends on who did the expdp in the first place and whether they were able to extract the schema metadata.
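A hedged illustration of point 1 above (user and password made up): once the remapped schema has been created by the import, a privileged user simply assigns it a password:

SQL> alter user asktom identified by some_new_password;

Until that is done, nobody can log in as the new schema owner.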

Technical Differences between New Data Pump and Original Exp/Imp utilities

Aamer Javaid, March 14, 2006 - 6:25 am UTC

Hi Tom,

Currently, I am working on the new Data Pump Utilities added to Oracle Database 10g.

I want to know more about the "Technical Differences between the New Data Pump and the Original Exp/Imp utilities".

1). What I found is that we don't have to create a user while using EXPDP/IMPDP.
2). The following parameters are not included in EXPDP/IMPDP.

i). Buffer

My question is whether EXPDP no longer uses this mechanism; if so, what is the newer one, and if not, what would be the default buffer size?

ii). Indexes, Constraints, Grants

What can we do in order to export a schema without indexes, constraints, or grants (any one or all)?

iii). Is there any way to import a full database using IMPDP that is error free or has fewer errors (I mean, if I create a database manually first)? If there is, please let me know.

Looking Forward.


Dump file permission for non-dba users

Sanjay Jha, March 27, 2006 - 9:36 am UTC

Here is the scenario:
1. At the Unix level, two directories are created by the application user (not in the dba group), one each for dump and log.
2. The appropriate permissions are given to the Oracle user so that it can write in these folders. (We tried first by giving only r+w but later figured out that it needs r+w+x.)
3. In the database, DUMP_DIR and LOG_DIR are created to point to these directories.
4. The expdp utility is run.
5. The dump file output is owned by oracle:dba, and is therefore not usable by the application user.
What could be the solution? The security standards will not let me include the application users in the dba group, and changing the permission on the output file each time is not practical.
Currently, I am thinking of scripting a wrapper on top of the expdp utility which will internally use setacl to change the permission on the file appropriately. What are your thoughts on this? Please advise.

Tom Kyte
March 27, 2006 - 10:15 am UTC

expdp is not creating the file

the oracle database backend process is.

expdp doesn't even have to RUN ON THAT MACHINE - there might not be any user associated with that small client process on the server.


so, running expdp as someone else - that'll not work, since it is your dedicated/shared server process that actually writes this file.

changing the permissions will have to become something "practical" - or you would use exp which is still supported and works - and creates a file by the CLIENT process, hence it'll be owned by the CLIENT.

Does it mean that if run on the Client, it will not have any issues?

Sanjay Jha, March 27, 2006 - 11:14 am UTC

Tom,

Thanks for the quick reply. So, if I understood correctly, if expdp is run on a client machine, the dump file will not be created by the Oracle background process and therefore will be completely owned by the user who ran it, and depending on the umask setting it will have the acl set on the file.

Then please clarify that the DUMP_DIR and LOG_DIR, which we create in the database, are mainly meant for DBAs who intend to run expdp on the DB SERVER?

Also, I did not get the concept of how, on the db server, the background process creates the file, but on the client, the expdp utility does?


Thanks,
Sanjay

Tom Kyte
March 27, 2006 - 3:07 pm UTC

No, if you use expdp - the file is ALWAYS created on the server and ALWAYS by the server process. expdp just calls some procedures in the database to do the work.

If you use EXP (export, NOT the data pump) then the EXP client itself reads the data out of the database, writes it to a file - and the file is owned by whomever runs EXP


expdp is architecturally very very different from the client/server program EXP. expdp is pretty much a server thing - and the tiny command line tool expdp just asks the database to do the work.

EXP is a fat client, it does the work, reads the data out, writes it.

expdp is an extremely thin client of the database and the real meat of expdp is IN the database (in fact, you don't even need expdp the command line tool - sqlplus would suffice, you could just call the procedures yourself)
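To illustrate that last point, here is a minimal, hedged PL/SQL sketch (the schema, directory and file names are made up) that asks the database to do a schema-mode export without the expdp client at all:

declare
  h     number;
  state varchar2(30);
begin
  -- open a schema-mode export job
  h := dbms_datapump.open( operation => 'EXPORT', job_mode => 'SCHEMA' );

  -- dump file and log file go to an existing directory object
  dbms_datapump.add_file( h, 'test_schema.dmp', 'DUMP_DIR' );
  dbms_datapump.add_file( h, 'test_schema.log', 'DUMP_DIR',
                          filetype => dbms_datapump.ku$_file_type_log_file );

  -- export only the TEST schema
  dbms_datapump.metadata_filter( h, 'SCHEMA_EXPR', 'IN (''TEST'')' );

  dbms_datapump.start_job( h );
  dbms_datapump.wait_for_job( h, state );
  dbms_output.put_line( 'job finished with state: ' || state );
end;
/

The server process, not the session running this block, writes the files - which is exactly why they end up owned by the oracle software owner.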

DBMS_FILE_TRANSFER is the solution

Sanjay Jha, March 27, 2006 - 12:49 pm UTC

Tom,
I had an afterthought. We can always use dbms_file_transfer to move the files around if someone wants to archive or move the file to lower environments. I tried that and it works fine. Also, the ownership of the file does not make a lot of difference, since impdp works fine with these files.

I guess I need not be worried about the application folks' requirement of changing the ownership of dump files, since it is not going to benefit them anyway! It is pretty much like Oracle owning the tablespace datafiles at the Unix level, which does not bother any user; in the same way the dump files are owned by Oracle but are very much usable when impdp is used to reload. I did not experiment with impdp and was stumbled by the simple fact that the file is owned by oracle:dba! Wish I had done that before thinking so far and wasting your valuable time!

I am going to explain the same to application folks and it should be fine.

Thanks for your help.

Tom Kyte
March 27, 2006 - 3:20 pm UTC

nope, that will not be assured of working since that works only with files that are exact multiples of OS blocks and I'm not sure expdp assures that.



How to do the Export by Non-DBA ?

MSK, April 02, 2006 - 7:00 am UTC

Hi Tom
How can we invoke expdp to do the export as non-DBA users at the server level?

Tom Kyte
April 02, 2006 - 11:25 am UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#i1007106

describes the necessary privileges required to perform expdp's - none of which include DBA
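As a rough sketch of what that usually boils down to (user, directory and tablespace names are assumptions - check the linked documentation for your release), a non-DBA account that exports only its own schema typically needs no more than:

SQL> grant create session, create table to app_user;
SQL> grant read, write on directory dump_dir to app_user;
SQL> alter user app_user quota unlimited on users;

CREATE TABLE and the quota are needed because Data Pump creates its master table in the exporting user's schema; the EXP_FULL_DATABASE role is only required for exporting other schemas or the full database.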

Import and Constraints in disable, novalidate state

A reader, April 03, 2006 - 7:12 pm UTC

Tom,

Did a full export of a schema (schema1/db1) and imported the schema into a different schema (schema2/db2) - on doing this we noticed that constraints that are disabled and in novalidate state (in source/export) are not imported.
Did we miss something during the process of export/import, or is this the expected behaviour of export/import?

Source DB version (export done on this DB): 9.2.0.1
Destination DB version (import done on this DB) : 9.2.0.6

Thank you for your time.



Tom Kyte
April 04, 2006 - 9:50 am UTC

are they able to be validated? (not expected)

Import and Constraints in disable, novalidate state

A reader, April 04, 2006 - 1:50 pm UTC

Tom,

These constraints cannot be validated. Since the constraints were missing, we re-created them manually with the disable and novalidate option.

Is this the expected behaviour of export/import (missing constraints in disable and novalidate state)?

Thank you



Tom Kyte
April 04, 2006 - 7:32 pm UTC

if the constraints are missing, how could they have been in the "disabled and in novalidate state "??????

Import and Constraints in disable, novalidate state

A reader, April 04, 2006 - 8:14 pm UTC

Tom,

I am sorry I did not explain it well

The source DB has a few constraints that are in enabled and novalidate state (schema1)
Exported schema1.
Imported schema1 into schema2
After the import constraints that are in enabled and novalidate state are not present in schema2

Since the constraints were missing, got the sql script from schema1 and executed it in schema2.


--Source
SQL> 
SQL> select status, validated, count(*)
  2    from user_constraints a
  3   group by status, validated
  4  ;

STATUS   VALIDATED       COUNT(*)
-------- ------------- ----------
ENABLED  VALIDATED           3713
ENABLED  NOT VALIDATED        140
DISABLED NOT VALIDATED          8

SQL> 

Thank you
 

Tom Kyte
April 05, 2006 - 5:33 pm UTC

you'll have to help me reproduce this step by step.  tested in 9ir2 and 10gr2



ops$tkyte@ORA10GR2> create table t ( x int );

Table created.

ops$tkyte@ORA10GR2> alter table t add constraint t_cons check (x>0)
  2  enable novalidate;

Table altered.

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select constraint_name, status, validated from user_constraints;

CONSTRAINT_NAME                STATUS   VALIDATED
------------------------------ -------- -------------
T_CONS                         ENABLED  NOT VALIDATED

ops$tkyte@ORA10GR2> !exp userid=/ 'owner=ops$tkyte'

Export: Release 10.2.0.1.0 - Production on Wed Apr 5 17:12:57 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$TKYTE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$TKYTE
About to export OPS$TKYTE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$TKYTE's tables via Conventional Path ...
. . exporting table                              T          0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

ops$tkyte@ORA10GR2> drop table t purge;

Table dropped.

ops$tkyte@ORA10GR2> select constraint_name, status, validated from user_constraints;

no rows selected

ops$tkyte@ORA10GR2> !imp userid=/ full=y

Import: Release 10.2.0.1.0 - Production on Wed Apr 5 17:13:09 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"          0 rows imported
Import terminated successfully without warnings.

ops$tkyte@ORA10GR2> select constraint_name, status, validated from user_constraints;

CONSTRAINT_NAME                STATUS   VALIDATED
------------------------------ -------- -------------
T_CONS                         ENABLED  NOT VALIDATED

ops$tkyte@ORA10GR2>
 

A reader, April 25, 2006 - 3:44 pm UTC

Tom,

Is long datatype supported by datapump?

Thanks.

Tom Kyte
April 25, 2006 - 11:53 pm UTC

yes

A reader, April 25, 2006 - 3:50 pm UTC

Please ignore my previous post; I tested it and it works, but somehow on my production environment I am getting this error:

ORA-31696: unable to export/import TABLE_DATA:"HDM"."TDATA" using client specified AUTOMATIC method




A reader, April 25, 2006 - 4:06 pm UTC

Just checked on Metalink; apparently there is Bug No. 4239903 reported against it.



A reader, April 26, 2006 - 12:01 am UTC

Tom, thanks for the answer any workaround for this Bug No. 4239903.



Tom Kyte
April 26, 2006 - 7:54 am UTC

The workaround for this issue is to import both metadata and data at one time if you want to import the data with both a LONG column and an enabled constraint.

that is, you cannot load into an existing table which is what must be happening here.

A reader, April 26, 2006 - 6:14 pm UTC

Thanks, will try and see if I hit the same error.



Any way other than fromuser touser?

Srinivas Narashimalu, July 28, 2006 - 8:46 am UTC

Hi Tom,

There is a requirement to have some of the materialized views available both as current (i.e. the latest values as of sysdate) and as they were a day before (i.e. what the materialized views were holding at sysdate-1). This is because when some of the feeds on the source system fail, correcting the whole issue or restoring the previous day's backup takes a long time.

I can think of -

1. Creating another set of materialized views and refresh it at suitable time to facilitate the requirement.

2. Exporting and Importing the required materialized views.
Now, if I have to export and import -

a. Is there any way to export a materialized view (or any object) and import it into the same schema with a different name?

b. Is there any way to selectively export from a schema (materialized views in this case) - like I don't want to export the whole schema with all its tables and views, just the materialized views?

Also, your opinion about my thoughts on doing this and any other option for meeting this requirement would greatly be appreciated.

I am using Oracle 9.2.0.7

Thanks,
Srinivas

Tom Kyte
July 28, 2006 - 8:02 pm UTC

why not two schemas - schema1 and schema2.

they have refresh on demand materialized views.

You refresh schema1 today, you refresh schema2 tomorrow, and so on.

And you use "create or replace synonym t for schema2.t" when you want to point to schema2 and so on...

(this is something I myself do for a small 'warehouse' schema I maintain)

export-import in different platforms

A reader, July 29, 2006 - 9:45 am UTC

Hi Tom,

Thanks for your all help to Oracle DBAs.

I would like to ask you following question:

Is it all right to import an export dmp file taken from a 9.2.0.6 database running on HP-UX into a Windows 2000 box database with the same version, i.e. 9.2.0.6?


Thanks,




Tom Kyte
July 29, 2006 - 11:26 am UTC

yes, dmp files are cross platform compatible.

don't forget to BINARY ftp them if you use ftp.
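For instance (hosts and file names made up), an interactive ftp transfer of a dump file would look like:

ftp windows-host
ftp> binary
ftp> put mydb_export.dmp
ftp> bye

Transferring in ASCII mode silently corrupts the dump file; IMP-00010 "not a valid export file" is the usual symptom.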

But I still need more information!

Srinivas Narashimalu, August 01, 2006 - 2:38 am UTC

Thanks for your response Tom!

I thought along the same lines when I thought of creating the objects with different names in the same schema. This is a solution, but I still have my questions -

1. Can we export only materialized views? Let's assume that the touser has all the necessary privileges on the fromuser objects (in this case I presume it's only the select on the base tables)

2. Is there any way to export an object from schema1 and import it back into schema1 with a different object name?

Thanks,
Srinivas

Tom Kyte
August 01, 2006 - 7:20 am UTC

1) no - not with EXP (eg: not datapump), you can use dbms_metadata to retrieve the DDL. materialized views in the form of an "export" are just metadata - not data.

2) no - not with EXP (eg: not datapump)
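A hedged sketch of the dbms_metadata approach (object and schema names made up):

SQL> set long 100000 pagesize 0
SQL> select dbms_metadata.get_ddl( 'MATERIALIZED_VIEW', 'MV_SALES', 'SCHEMA1' ) from dual;

Spool that DDL, edit the materialized view name, run it in the same schema, and then a simple refresh populates it - which sidesteps export/import entirely.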

Thank you!

Srinivas Narashimalu, August 01, 2006 - 9:47 am UTC

Thanks for your immediate response Tom!

Srinivas

expdp and impdp REMAP_* parameters

Roger, August 02, 2006 - 4:58 pm UTC

Hi Tom,

I post this after researching/exhausting all sources on the internet/oracle documentation/user groups on datapump. I have tried real hard not to waste your time.

I have a database, call it A, with over 250 tablespaces and 250 datafiles (I have inherited the crazy design and don't have any option of reducing the number of tablespaces/datafiles). Each datafile is stored in a directory like this: /data/sid_name (e.g. /data/A).

I want to export all the metadata from database A into another database (call it B). I used the standard datapump export command like this:

expdp user/passd@db CONTENT=METADATA_ONLY FULL=Y DUMPFILE=test.dmp

Question 1) Using CONTENT=METADATA_ONLY and FULL=Y will dump all metadata for all users, right? If not, please shed some light.

Now, when I want to import, the paths to the datafiles are different for database B. so, instead of /data/A, the datafile paths are /data/B.

Question 2) Impdp doesn't create the tablespaces for you, right?

Question 3) Since I have so many files, is there any way I can use a wild card in the REMAP_DATAFILE or REMAP_TABLESPACE parameter so the path to the 250 datafiles (or tablespaces) can be remapped to /data/B?

Thank you for your time and your relentless support. Every ounce of your energy is sincerely appreciated.



Tom Kyte
August 03, 2006 - 9:09 am UTC

well, if #2 was true, #3 wouldn't have any point in being asked :)

yes, impdp can create tablespaces (that is why remap_datafile is there, if it didn't create tablespaces, there would be no reason to ever use remap_datafile!!!)

remap_datafile does not support wildcarding - no.


But if all you have are path differences, perhaps symbolic links could be used to allow the create to work
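For what it's worth, a hedged sketch of the per-file syntax (paths and names made up; quoting rules vary by platform, so a hypothetical parameter file, say imp_remap.par, is easiest). Each file has to be listed individually since wildcards are not supported:

full=y
dumpfile=test.dmp
directory=dump_dir
remap_datafile="'/data/A/users01.dbf':'/data/B/users01.dbf'"
remap_datafile="'/data/A/tools01.dbf':'/data/B/tools01.dbf'"

impdp user/passwd parfile=imp_remap.par

With 250 datafiles, generating the parameter file from a quick query over dba_data_files beats typing the entries by hand.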

exp

sam, August 15, 2006 - 1:09 pm UTC

Tom:

We are migrating an 8i database from one machine to another machine that has 9i. Would the exp/imp take care of all objects, i.e. contexts, sequences, db links?

Do we have to adjust sequences or any other stuff after the migration?

Tom Kyte
August 15, 2006 - 2:11 pm UTC

yes, exp is a logical copy of the database, all structures can be moved.

export

sam, August 17, 2006 - 11:06 pm UTC

Tom:

The DBA exported objects from the 8i to the 9i database using dbca. However, when we did a schema compare in TOAD there were several objects missing, like indexes, package bodies, etc. Do you have an idea how you can lose these during an export/import?

2. TOAD gives you a synchronization SQL script that you can run to create the missing objects. Would you recommend running that, or keep doing the exp/imp?

Tom Kyte
August 18, 2006 - 7:57 am UTC

dbca to export?? Nope. dbca is the database configuration assistant. it doesn't do that.



export

sam, August 18, 2006 - 11:46 am UTC

Tom:

dbca for creation of databases, but exp/imp for objects. What do you think may have caused the missing objects? If you did not have db links in the new db, would it cause issues?

Tom Kyte
August 18, 2006 - 12:38 pm UTC

your dba should be the one answering these questions. I don't know how they did the export, I don't know how they did the import.

Hopefully, they have the log from when they did it - problems would be located there.

dblinks are traditionally part of the export itself.

export

sam, August 18, 2006 - 5:07 pm UTC

Tom:

This is the top part of it. Does it give you a hint about anything? Could it be storage allocation or 8i/9i tool compatibility issues (i.e. migrating using the 8i exp and the 9i imp)?

Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
. importing BBBADMIN's objects into BBBADMIN
IMP-00017: following statement failed with ORACLE error 942:
"CREATE UNIQUE INDEX "PBCATC_X" ON "SYSTEM"."PBCATCOL" ("PBC_TNAM" , "PBC_OW"
"NR" , "PBC_CNAM" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 4096"
"0 NEXT 40960 MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST"
" GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "PICDATA" LOGGING"
IMP-00003: ORACLE error 942 encountered
ORA-00942: table or view does not exist
IMP-00017: following statement failed with ORACLE error 942:
"CREATE UNIQUE INDEX "PBCATE_X" ON "SYSTEM"."PBCATEDT" ("PBE_NAME" , "PBE_SE"
"QN" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 40960 NEXT 40960 "
"MINEXTENTS 1 MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUF"
"FER_POOL DEFAULT) TABLESPACE "PIVDATA" LOGGING"
IMP-00003: ORACLE error 942 encountered
ORA-00942: table or view does not exist
IMP-00017: following statement failed with ORACLE error 942:
"CREATE UNIQUE INDEX "PBCATF_X" ON "SYSTEM"."PBCATFMT" ("PBF_NAME" ) PCTFRE"
"E 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 40960 NEXT 40960 MINEXTENTS 1 "
"MAXEXTENTS 505 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFA"
"ULT) TABLESPACE "PIVDATA" LOGGING"

Tom Kyte
August 18, 2006 - 6:30 pm UTC

that explains why the indexes aren't there.

the table they are on isn't there either.


look further up in your import log, for like the first error.



Hey, why the heck are you creating stuff in SYSTEM??????!?!?!?!?!?!? stop that.

export

sam, August 19, 2006 - 7:26 am UTC

Tom:

Shouldn't the import do tables before indexes?

Not sure why he is creating indexes in SYSTEM too!

How would you formulate your exp/imp commands? They should be simple commands to export the database out of 8i and import the database into 9i.

Tom Kyte
August 19, 2006 - 9:16 am UTC

I am going to go out on a limb and guess....


a) you did a user level export from old database
b) that gets things owned by the user, that sole user
c) that user created an index on a table in another schema (system of all the worst choices, well sys would have been worse...)
d) that table owned by system is not part of the user level export

hence, the table does not exist in the target database, is not part of the export.

Import full database

Swaminathan, September 02, 2006 - 3:11 pm UTC

I am trying to export full database from Production and importing the same into the deveopment database.

Oracle version:9.2.0.5
Expfile dmp size:800 MB

In the development database I have 2 schemas and 2 tablespaces more than in production. The rest of the schemas and tablespaces are the same on both DBs.

My question:

1. Under these conditions, when I do a full database import to the development database, what are the possible errors that I will face?
2. Are there any specific steps that I have to follow during the import?


Tom Kyte
September 02, 2006 - 5:21 pm UTC

I would not recommend using full, you don't want full, you want to logically copy schemas from A to B.

You use full when you want to copy the database from one OS to another OS (before cross platform transports that is)


Import full database -Follow up

Swaminathan, September 02, 2006 - 8:35 pm UTC

Actually this activity involves refreshing the whole development database from the production one.

The production db usage is 4.5 GB, and a full database export has already been taken... The entire production database has to be brought over to the existing development database.

For the above scenario... what should be the import plan?

Tom Kyte
September 04, 2006 - 8:25 am UTC

I would do this then:

a) restore production to test ( RESTORE, not export/import ). That'll test your ability to do so which is a great bonus. AND you'll have a realistic copy of production that can be used for testing.

b) export/import the other two schemas and scrap that other database.

exp/imp

sam, September 10, 2006 - 12:36 pm UTC

Tom:

DBA is using exp/imp to copy a database from 8i and import to another 9i R2 database. After reading chapter 8 of your book I found we may have several issues we are not aware of. Can you confirm and explain what fixes we need:

1. This is the first part of the import log. Does that tell you it was done in 7-bit character data? How do we know if we had 8-bit or 7-bit data in the current database?


Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

Export file created by EXPORT:V08.01.07 via conventional path

Warning: the objects were exported by ORA81, not by you

import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export server uses US7ASCII NCHAR character set (possible ncharset conversion)

2. If we had 8-bit data and imported using 7-bit, does that mean we have to redo it again? What potential problems might we get?

3. You say that exp/imp does not do system-named indexes. Doesn't that create a problem, because when you create a table with a PK, the system creates the constraint and index for that column? Do you have to worry about those indexes?

4. Would you always set the NLS_LANG Unix variable to match the database before exp/imp? How do you do that?

5. Does this tell you that he used the proper version of the exp and imp tools in exporting and importing the database?

6. When you go from 8i to 9i, would you only do a developer schema export or would you do a FULL dump?

Tom Kyte
September 10, 2006 - 4:52 pm UTC

1) you have an 8 bit "current database", it uses we8iso8859p1 as the standard characterset and al16utf16 for the NCHAR characterset.

the NCHAR character sets differ between the two, but that is likely OK as you likely didn't use NCHAR (but if you did, using us7ascii on the 817 server was "strange").

We don't know if the export already did character set conversion - that would have been in the export log:

...
JServer Release 8.1.7.4.0 - Production
Export done in WE8MSWIN1252 character set and WE8ISO8859P1 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified users ...
.......

that shows the export NLS_LANG was WE8MSWIN1252 but the database character set was WE8ISO8859P1 - so the data was translated on the way out.

2) that would be your choice ultimately (it is after all - your data, you would make the call if you need to do it again).

In this case, the dmp file would be in the same characterset as the server

3) be more specific. It is true that some indexes you have currently do not have to "come across" - for example, if you have:

create table t ( x int primary key, y int, z int );
create index t_idx on t(x,y);

when you import, it would be valid for an index on T(x) TO NOT BE CREATED since the index on T(X,Y) is more than sufficient to support the primary key constraint.


4) you do it like you set any environment variable (it depends on your shell).

You would set it to what your needs dictate. It would be normal to have it be set the same as the database you export from (to avoid translation on the way out)

5) it says they used the 8i export tool - that looks right. assuming they are using the 9i imp tool, that too would be correct.

6) I would just upgrade the database as my first choice. after that, I would likely go for schema level exports (to achieve some degree of parallelism and to determine precisely what grants outside of the schema need to take place - and get them under control)
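A small sketch for point 4 above (shell syntax varies; this assumes a POSIX shell and the character set shown in the log, with made-up exp arguments):

$ NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
$ export NLS_LANG
$ exp scott/tiger owner=scott file=scott.dmp log=exp_scott.log

Setting the client character set equal to the database character set means exp writes the data out without any translation; any conversion then happens, at most, once on the import side.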

exp/imp

sam, September 10, 2006 - 9:31 pm UTC

Tom:

1. I am confused. Is NCHAR a datatype?

I think we have an 8-bit 8i database. They do the export. The log is below. It seems (if I read it correctly) it is done in 8-bit. Then they go to 9i to import and it seems to be doing something in 7-bit ASCII. Can you explain how to make sure I did not get data loss? The DBA is a beginner and can't explain this stuff.


Disconnected from Oracle8i Enterprise Edition Release 8.1.7.2.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.2.0 - 64bit Production

Export: Release 8.1.7.2.0 - Production on Tue Aug 1 23:20:05 2006

(c) Copyright 2000 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.2.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.2.0 - 64bit Production
Export done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
. exporting cluster definitions
. about to export SYSTEM's tables via Conventional Path ...
. . exporting table DEF$_AQCALL 0 rows exported
. . exporting table DEF$_AQERROR 0 rows exported


2. What about if I had a case where I did not have an index to enforce the PK and I relied on a system-generated one. What would happen after the import? E.g.

create table t2
( x int primary key,
  y int )
/

The system created an index SYS_C002559 and I do not have any other index on the table.
Do I have to recreate this in 9i, or will the system create it in 9i since it is not part of another index?

3. Is there a way/test to verify that no data was lost during conversion from 8i to 9i?

4. Would the import overwrite the existing data in a table but not the DDL?

For example, let us say the export created DDL for T and exported 500 records. Now when we import into another instance where T already exists with 10 records, I assume it would not change the structure of T but would erase the 10 records and import the 500 ones, correct?

What about if T in the new instance had a few additional columns - would it fail?

thank you,



Tom Kyte
September 11, 2006 - 9:43 am UTC

1) yes, nchar and nvarchar2 are datatypes - they can use a different characterset from the rest of the char's/varchar2's

if you did not use nchar/nvarchar2 - you have nothing to worry about here.  

2) if no index exists that can be used for enforcement of the primary key - we will create one.  IF ONE EXISTS however, we will not.

ops$tkyte%ORA10GR2> create table t( x int primary key, y int );

Table created.

ops$tkyte%ORA10GR2> create index t_idx on t(x,y);

Index created.

ops$tkyte%ORA10GR2> set echo off
Wrote file /tmp/xtmpx.sql
Datatypes for Table t

                               Data                 Data
Column Name                    Type                 Length     Nullable
------------------------------ -------------------- ---------- --------
X                              NUMBER                          not null
Y                              NUMBER                          null


Indexes on t

Index                          Is
Name                           Unique COLUMNS
------------------------------ ------ --------------------------------
T_IDX                          No     X, Y
SYS_C005927                    Yes    X


Triggers on t
ops$tkyte%ORA10GR2> !exp userid=/ tables=t

Export: Release 10.2.0.1.0 - Production on Mon Sep 11 09:42:38 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                              T          0 rows exported
Export terminated successfully without warnings.

ops$tkyte%ORA10GR2> drop table t;

Table dropped.

ops$tkyte%ORA10GR2> !imp userid=/ full=y

Import: Release 10.2.0.1.0 - Production on Mon Sep 11 09:42:46 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"          0 rows imported
Import terminated successfully without warnings.

ops$tkyte%ORA10GR2> set echo off
Wrote file /tmp/xtmpx.sql
Datatypes for Table t

                               Data                 Data
Column Name                    Type                 Length     Nullable
------------------------------ -------------------- ---------- --------
X                              NUMBER                          not null
Y                              NUMBER                          null


Indexes on t

Index                          Is
Name                           Unique COLUMNS
------------------------------ ------ --------------------------------
T_IDX                          No     X, Y

<b>see how the system created unique index isn't there?  It isn't necessary, so we did not create it</b>

3) you could write something, sure.  a database link, distributed query to compare.  

https://www.oracle.com/technetwork/issue-archive/2005/05-jan/o15asktom-084959.html
comparing the contents of two tables.

4) import just does INSERTS.  it does not delete, it would add 500 records (if constraints didn't get violated) 

imp

sam, September 11, 2006 - 12:34 pm UTC

1. Do the NLS parameters for 8i and 9i look OK? The 8i database also has the NCHAR character set set to an 8-bit format. Why would it do it in US7ASCII in the import (is it because the 9i one was set to that)?

Do we need to change the 9i nchar characterset to WE8ISO since it is set to AL16UTF16?

NLS_DATE_FORMAT in v$nls_parameters is different from nls_database_parameters. This is kind of strange?

8i database
select * from v$nls_parameters

PARAMETER VALUE
------------------------- -------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT dd-mon-yyyy hh24:mi:ss
NLS_DATE_LANGUAGE AMERICAN
NLS_CHARACTERSET WE8ISO8859P1
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_DUAL_CURRENCY $
NLS_NCHAR_CHARACTERSET WE8ISO8859P1
NLS_COMP BINARY

8i
select * from nls_database_parameters;

NLS_DATE_FORMAT mm/dd/rrrr
NLS_RDBMS_VERSION 8.1.7.2.0



9i database

PARAMETER VALUE
------------------------- -------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT dd-mon-yyyy hh24:mi:ss
NLS_DATE_LANGUAGE AMERICAN
NLS_CHARACTERSET WE8ISO8859P1
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_DUAL_CURRENCY $
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE

19 rows selected.

NLS_DATE_FORMAT DD-MON-RR
NLS_RDBMS_VERSION 9.2.0.6.0

2. How do you read the table comparison checks result

C1 C2 C3 CNT1 CNT2
--- -- -- ---- ----
2 x y 0 1
2 xx y 1 0
3 x y 1 0
3 x yy 0 1

does "0" means no diff and 1 means a difference.

Tom Kyte
September 11, 2006 - 1:45 pm UTC

1) the nls settings are of your choosing, they are right when you say they are right for what we need to do.


2) it shows you how many times that record appears in the first table (cnt1) versus the other table (cnt2)

so a "0 1" would mean "this row is not in table 1, but it is in table 2"

Transportable tablespace

Alay, September 13, 2006 - 6:12 am UTC

Hi Tom,

Can we do export of one tablespace (using transportable_tablespace) from one database which has default block size 2K and import that tablespace into another database with 8K block size?

Tom Kyte
September 13, 2006 - 2:35 pm UTC

if you have current software, sure.

multiple block sizes are supported in 9i and above.
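One hedged practical note worth adding (the cache size below is just an example value): before the 2K tablespace can be plugged into the 8K database, the target needs a buffer cache configured for that non-default block size:

SQL> alter system set db_2k_cache_size = 16m;

Without a db_nk_cache_size matching the transported tablespace's block size, the plug-in of the transported datafiles fails.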

import

Roma, October 04, 2006 - 3:48 am UTC

Tom,
I have a small problem. When performing an import of a dump file, I get this error:
. importing GPT's objects into GPT
IMP-00017: following statement failed with ORACLE error 942:
"CREATE INDEX "I_WAYBILLS_NUM_STATE" ON "CM_WAYBILLS" (TRIM("DOC_NUMBER") , "
"NVL("STATE",' ') ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 6553"
"6 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "GPT_INDX1" NOLOGGING"
IMP-00003: ORACLE error 942 encountered
ORA-00942: table or view does not exist
I do the export on Oracle 9.2.0.7 (32 bit), and the import on Oracle 9.2.0.8 (64 bit). What can be the reason for this error? The table and the other indexes are imported without errors. The error arises only if the index is created on a function.

Tom Kyte
October 04, 2006 - 7:12 am UTC

well, the table didn't exist? - or does it?

can you make a small reproducible test case to work with?


ops$tkyte%ORA9IR2> create table t ( doc_number varchar2(5), state varchar2(2) );

Table created.

ops$tkyte%ORA9IR2>
ops$tkyte%ORA9IR2> create index t_idx on t(trim(doc_number),nvl(state,' '));

Index created.

ops$tkyte%ORA9IR2> !exp userid=/ 'owner=ops$tkyte'

Export: Release 9.2.0.8.0 - Production on Wed Oct 4 07:01:56 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$TKYTE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$TKYTE
About to export OPS$TKYTE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$TKYTE's tables via Conventional Path ...
. . exporting table                              T          0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

ops$tkyte%ORA9IR2> drop table t;

Table dropped.

ops$tkyte%ORA9IR2> !imp userid=/ full=y show=y

Import: Release 9.2.0.8.0 - Production on Wed Oct 4 07:02:02 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
 "BEGIN  "
 "sys.dbms_logrep_imp.instantiate_schema(schema_name=>'OPS$TKYTE', export_db_"
 "name=>'ORA9IR2.US.ORACLE.COM', inst_scn=>'25503955');"
 "COMMIT; END;"
 "CREATE TABLE "T" ("DOC_NUMBER" VARCHAR2(5), "STATE" VARCHAR2(2))  PCTFREE 1"
 "0 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREE"
 "LIST GROUPS 1) TABLESPACE "USERS" LOGGING NOCOMPRESS"
. . skipping table "T"

 "CREATE INDEX "T_IDX" ON "T" (TRIM("DOC_NUMBER") , NVL("STATE",' ') )  PCTFR"
 "EE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GR"
 "OUPS 1) TABLESPACE "USERS" LOGGING"
Import terminated successfully without warnings.

ops$tkyte%ORA9IR2> !imp userid=/ full=y

Import: Release 9.2.0.8.0 - Production on Wed Oct 4 07:02:02 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"          0 rows imported
Import terminated successfully without warnings.

 

import

Roma, October 04, 2006 - 9:31 am UTC

The test you gave passed successfully. When that error message appears, the table has already been created.

Tom Kyte
October 04, 2006 - 5:13 pm UTC

unless you can help me reproduce, I can only hypothesize the table was not in fact created.

import

Roma, October 05, 2006 - 4:39 am UTC

If the table actually had not been created, then why were the other indexes created, and only the function-based index not created?

Tom Kyte
October 05, 2006 - 8:15 am UTC

I don't know, you are not helping me to reproduce in any way shape or form, therefore, Please utilize support - they'll work with you to reproduce the issue in a small test case.

import fromuser touser values

Deepak, October 05, 2006 - 5:58 am UTC

Hi Tom,

Recently I was told by a senior that if I specify the same schema name for "touser" and "fromuser" parameter while importing a database schema the import process will be faster than if different schema names are specified.

e.g. if I have taken the export of schema "testuser1" from one database (testdb1) and create a user called "testuser1" on the target database (testdb2) and do the import to the testuser1 schema, then will it be faster than if I do the import to some other schema, say testuser2?



Tom Kyte
October 05, 2006 - 8:19 am UTC

hahaha, good one.

where does this stuff COME FROM? Who makes this up?


and would not your BASIC NEEDS, your REQUIREMENTS drive the schema owner?


import

Deepak, October 05, 2006 - 9:10 am UTC

Yes, it does. But what if we have the luxury of choosing the target schema name? Just want to know, in general, will it really affect the performance of the import?

Tom Kyte
October 05, 2006 - 9:38 am UTC

think about it....

better yet, test it and then ask this "senior"

where the heck did you hear this made-up "fact"?


could you not tell from my reply that this would not "affect" the performance of the import.

Thanks Tom

Deepak, October 06, 2006 - 12:37 am UTC

Thanks a lot Tom. Will show your reply to my senior.

import

Roma, October 06, 2006 - 1:41 am UTC

As I wrote, the test you offered earlier ran successfully.

D:\Documents and Settings\admins>sqlplus roma/roma

SQL*Plus: Release 9.2.0.8.0 - Production on Fri Oct 6 09:18:36 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

SQL> create table t ( doc_number varchar2(5), state varchar2(2) );

Table created.

SQL> create index t_idx on t(trim(doc_number),nvl(state,' '));

Index created.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Producti
on
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

D:\Documents and Settings\admins>exp roma/roma owner=roma file=d:\1.dmp

Export: Release 9.2.0.8.0 - Production on Fri Oct 6 09:23:02 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

Export done in CL8MSWIN1251 character set and AL16UTF16 NCHAR character set
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user ROMA
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user ROMA
About to export ROMA's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export ROMA's tables via Conventional Path ...
. . exporting table                              T          0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully with warnings.
D:\Documents and Settings\admins>sqlplus roma/roma

SQL*Plus: Release 9.2.0.8.0 - Production on Fri Oct 6 09:25:40 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

SQL> drop table t;

Table dropped.

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Producti
on
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

D:\Documents and Settings\admins>imp userid=roma/roma full=y show=y file=d:\1.dm
p

Import: Release 9.2.0.8.0 - Production on Fri Oct 6 09:26:56 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
import done in CL8MSWIN1251 character set and AL16UTF16 NCHAR character set
. importing ROMA's objects into ROMA
 "BEGIN  "
 "sys.dbms_logrep_imp.instantiate_schema(schema_name=>'ROMA', export_db_name="
 ">'JUPITER.JUPITER.GPTRANS.GAZPROM.RU', inst_scn=>'88865243');"
 "COMMIT; END;"
 "CREATE TABLE "T" ("DOC_NUMBER" VARCHAR2(5), "STATE" VARCHAR2(2))  PCTFREE 1"
 "0 PCTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREE"
 "LIST GROUPS 1) TABLESPACE "USERS" LOGGING NOCOMPRESS"
. . skipping table "T"

 "CREATE INDEX "T_IDX" ON "T" (TRIM("DOC_NUMBER") , NVL("STATE",' ') )  PCTFR"
 "EE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GR"
 "OUPS 1) TABLESPACE "USERS" LOGGING"
Import terminated successfully without warnings.

D:\Documents and Settings\admins>imp userid=roma/roma full=y file=d:\1.dmp

Import: Release 9.2.0.8.0 - Production on Fri Oct 6 09:27:14 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.8.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
import done in CL8MSWIN1251 character set and AL16UTF16 NCHAR character set
. importing ROMA's objects into ROMA
. . importing table                            "T"          0 rows imported
Import terminated successfully without warnings.
 

Tom Kyte
October 06, 2006 - 8:57 am UTC

so, unless and until you can help me reproduce the circumstances that cause this, I cannot comment.

Export and indexes

A reader, October 11, 2006 - 9:43 am UTC

Tom,

Does the export utility(exp) when done at a schema level, get all information related to the indexes in that schema?
All information as in
a) Index DDLs
b) Index data
c) Related statistics infomation
d)?
Oracle DB version: 9.2.0.6
Client version: 9.2.0.1

Thank you


Tom Kyte
October 11, 2006 - 3:53 pm UTC

a) yes
b) there is no such "concept", exp only extracts ddl for indexes, there is no index data to extract.
c) it can
d) ? indeed?

Import into different OS and oracle version

Yoav, October 13, 2006 - 8:27 am UTC

Hi Tom,
I took an export with the full=y option from a database running Oracle 9.2.0.7 with Windows as the O.S.
I created a new database with Oracle 9.2.0.8 on Linux Red Hat.
I also created all the tablespaces that were needed, then used full=y for the import process.

Since I exported from 9.2.0.7 and imported into 9.2.0.8, are there any SQL scripts like catproc.sql etc. that I need to rerun?

Thanks.



Tom Kyte
October 13, 2006 - 2:27 pm UTC

nope, sys isn't exported - you created the dictionary using the 9208 scripts, all is "done"

Import into different OS and oracle version

Yoav, October 15, 2006 - 9:20 am UTC

Hi Tom,

The database I'm trying to export and import is 600GB in size.
1. Is there another option in 9iR2 to move data between different platforms?
2. Is a full export and full import the right method?
3. I got: "IMP-00000: Import terminated unsuccessfully". Would you recommend dropping the database and recreating it?
4. I got a lot of errors like this one:
IMP-00017: following statement failed with ORACLE error 20000:
"BEGIN DBMS_STATS.SET_INDEX_STATS(NULL,'"B1_KEY_NUM"',NULL,NULL,NULL,282300"
"278,668312,282300278,1,1,38607202,2,0); END;"
IMP-00003: ORACLE error 20000 encountered
ORA-20000: Unable to set values for index B1_KEY_NUM: does not exist or insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 3908
ORA-06512: at "SYS.DBMS_STATS", line 4017

Can you suggest what to do about it?
Thanks

Tom Kyte
October 15, 2006 - 10:08 am UTC

1) you have good old "dump and load" as well as database links. Else, until 10g, your options are limited.

2) probably not, very serial, very much "all eggs in one big basket". More likely schema level or even - for big tables - table level exports.

3) what was the real error message that came along with it.

4) see if the index is there??? it was just trying to set statistics on it

Schema name change across databases

Deepak, October 30, 2006 - 7:23 am UTC

Hi Tom,

I have a scenario like the following...

> In my production DB I have two schemas called USERA and USERB.
Both the users have interdependencies across the schemas among their objects. i.e., USERA->USERB & USERB->USERA.

> I have a development database in which there are two empty schemas called USER1 & USER2.

Now my requirement is to import the USERA & USERB schemas from the production DB into USER1 & USER2 in the development DB respectively, and the dependencies between the schemas must still exist.

Can you please suggest the best way to perform the above migration?

NOTE: There are some restrictions because of which I CAN NOT use USERA and USERB in the Development DB.

Tom Kyte
October 30, 2006 - 9:16 am UTC

you can use import with fromuser and touser.

However, any hard coded references in triggers, views, packages and the like will have to be fixed by you.
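
For illustration only - a minimal sketch of the fromuser/touser approach, assuming a dump file called prod.dmp and the schema names from the question (the file name and passwords here are made up):

imp system/password file=prod.dmp log=imp_user1.log fromuser=USERA touser=USER1
imp system/password file=prod.dmp log=imp_user2.log fromuser=USERB touser=USER2

After both runs you would still review and recompile any views, triggers or packages that reference the old schema names by hard-coded name, as noted above.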

Using impdp parameter REMAP_SCHEMA

Susan, November 02, 2006 - 12:56 pm UTC

I'm doing a schema level data pump import from one schema into my own schema that already exists. My userid has DBA role. I'm finding that after this import, I've lost my DBA privileges. The DBA role is still granted but the default role is no longer set to DBA. Instead, my userid now has the role grants and default role of the other schema I had exported. I was hoping EXCLUDE=USER would resolve that, but it didn't. I've looked for information about this from the usual sources but so far haven't found anything useful. I'm guessing if I did a table level import with impdp that this wouldn't happen, but I'm curious what you have to say about the schema level impdp replacing my default role.

Nice job at OOW by the way. Loved Spy vs. Spy.

Tom Kyte
November 02, 2006 - 7:28 pm UTC

did the original schema have a default role (explicitly).

can't you just alter user username default role all; to put them all into place again?

Using impdp parameter REMAP_SCHEMA

Susan, November 03, 2006 - 9:43 am UTC

Both schemas, mine and the one that was exported had default roles explicitly set.

Yes, I did the ALTER to give DBA role back to my username after realizing that my privileges were gone. I just wasn't expecting that behavior. We're just now starting to use Data Pump and I'm testing out some scenarios and want to know what to expect. So if that's just the way it is, then we'll have to remember that the role needs to be reset after a schema import. In the case where we're doing a schema export/import from Development to Production, we don't want the liberal privileges that are allowed in a development database to be transferred to production. Let me know if you have any suggestions. Thanks.

Tom Kyte
November 03, 2006 - 11:02 am UTC

</code> http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref160 <code>

"Filtering can further restrict what is exported using schema mode "

because you have the exp_full_database priv (your DBA role), you are exporting the user definitions as well - you can filter them out during the export.

Using impdp parameter REMAP_SCHEMA exclude=SCHEMA_EXPORT/USER

Susan, November 03, 2006 - 1:59 pm UTC

Thanks. I dug deeper, queried SCHEMA_EXPORT_OBJECTS and found the SCHEMA_EXPORT/USER exclude option with the comment "User definitions for users associated with the selected schemas". Using exclude=SCHEMA_EXPORT/USER on the expdp did the trick. Thanks!

Using impdp parameter REMAP_SCHEMA exclude=SCHEMA_EXPORT/ USER

Susan, November 03, 2006 - 2:51 pm UTC

Sorry, my mistake. I got too excited (too much leftover Halloween candy). Excluding schema_export/user didn't work, I'm still losing my DBA as default role. I'll keep digging.

Using impdp parameter REMAP_SCHEMA exclude=DEFAULT_ROLE

Susan, November 03, 2006 - 3:37 pm UTC

Success. For the particular case of losing the DBA default role, I got that resolved with expdp ...exclude=DEFAULT_ROLE. I'm assuming that to prevent development database privileges from getting exported/imported to production I'd export with exclude=ROLE_GRANT or some combination of these options. Thanks, and sorry for the goof-up above.

Should the tablespace name be the same for import and export

Arindam Mukherjee, November 06, 2006 - 11:03 am UTC

Respected Mr. Tom

Today I am using Oracle Datapump under version Oracle 10g Release 2.

From one machine I am issuing the following command to create export dump.

expdp gcbdev/gcbdev SCHEMAS=gcbdev DIRECTORY=bodwh DUMPFILE=gcbdev.dmp

On another machine and database, I first create the same user/schema, "gcbdev", and use the following command to import that dump.

impdp GCBDEV/GCBDEV DIRECTORY=bodwh DUMPFILE=GCBDEV.dmp

On both machines, I created the same directory structure to support "DIRECTORY=bodwh".

My question: when I went to import on the other machine and Oracle database, I had to create a tablespace with the same name as in the source database, otherwise I got an error. Could you kindly tell me where I should modify my command? I have seen the clause "REMAP_TABLESPACE=tbs_1:tbs_6" but could not understand how to use it.

So should the tablespace name be the same for import and export when using Oracle Data Pump?

Please help me write the two Oracle Data Pump commands, export and import, for a specific schema.


Tom Kyte
November 06, 2006 - 11:46 am UTC

why couldn't you understand how to use it? what problem did you encounter?

You have the commands, you just need to figure out what tablespaces you want to use.
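
For illustration only - a sketch of how the remap clause could be applied here, assuming the source objects live in a tablespace called GCB_DATA and the target database only has USERS (both tablespace names are invented for this example):

impdp GCBDEV/GCBDEV DIRECTORY=bodwh DUMPFILE=GCBDEV.dmp REMAP_TABLESPACE=GCB_DATA:USERS

You can repeat the REMAP_TABLESPACE clause for each source:target pair; with it in place you do not have to pre-create tablespaces with the original names on the target.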

Migrating 1TB 9i database across platforms

A reader, November 22, 2006 - 6:57 pm UTC

Tom,
We're migrating a 1TB 9207 database (SAP application) from Solaris to AIX. Is parallel exp / imp the recommended way to go?

In response to someone with a similar question, you said
>> 1) you have good old "dump and load" as well as database links. Else, until 10g, your options are limited.

By "database links", are you suggesting CTAS?

Thanks.

Tom Kyte
November 24, 2006 - 6:09 pm UTC

create table as select
sqlplus copy command
insert /*+ append */

anything you want.....


I would ask SAP for recommendations on moving from one platform to another, they use the database as a "black box", they may have MANY restrictions on what you can and cannot do.
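
If the database link route were allowed, a rough sketch (assuming a link named src_link pointing back at the Solaris database and a table called BIG_TABLE - all placeholder names) would be:

create database link src_link connect to app_user identified by app_pw using 'SRC_TNS_ALIAS';

create table big_table as select * from big_table@src_link;
-- or, if the table is already created on the target:
insert /*+ append */ into big_table select * from big_table@src_link;
commit;

Again, whether SAP certifies any of this for their schemas is a question for SAP.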

REMAP_SCHEMA and IMP_FULL_DATABASE

A reader, December 06, 2006 - 5:40 pm UTC

Hi Tom,

Is it mandatory to use REMAP_SCHEMA and have IMP_FULL_DATABASE privs if I am using network_link to directly import one table from x schema to y schema.

Thanks.

Please ignore my question Tom.

A reader, December 06, 2006 - 6:32 pm UTC

Hi Tom,
Please ignore my question. I got the answer. I was trying from the source initially and getting the error. I tried from destination using remap_schema. It worked.

Thanks.

Error ORA-39203

A reader, December 07, 2006 - 11:03 am UTC

Tom,

Is there any work around for ORA-39203: Partition selection is not supported over a network link.

Thanks,
SRC.

Tom Kyte
December 07, 2006 - 1:18 pm UTC

can you query on the partition key (eg: if it is range or list the answer is "yes, yes I can")

Thanks Tom

A reader, December 07, 2006 - 6:03 pm UTC

Thanks Tom. Yeah, I got the idea of passing the QUERY parameter. But then I will not get much benefit from using network_link to eliminate the intermediate file; it doesn't save much time. Moreover, it might take more time than taking a separate partition dump using expdp and importing that dump using impdp.

A reader, February 19, 2007 - 7:08 pm UTC

Tom,

I need to move data from the Production environment to the Test environment. Due to space constraints I cannot generate a dump file, and secondly, for security/internal reasons I can only push from production to test.

I was looking at the data pump package for import; it talks about pulling data, whereas I am looking to push data. Can you please point me to some link or info on this?

Thanks,
Rahul.
Tom Kyte
February 19, 2007 - 9:25 pm UTC

due to space constraints - come on, disk is cheap when compared to your time.

You can create a database link in the production database and insert into table@remote select * from table; in the production instance.
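
A minimal sketch of that push, assuming a TNS alias TESTDB for the test database and a table called T (all placeholder names), run while connected to production:

create database link test_link connect to test_user identified by test_pw using 'TESTDB';

insert into t@test_link select * from t;
commit;

You would run one insert per table, in an order that respects any foreign keys on the test side, so nothing is ever written to disk as a dump file.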

Anil kumar, March 14, 2007 - 3:51 am UTC

Hi Tom,

#!/usr/bin/ksh
export ORACLE_BASE=/asa005g/app/oracle
export ORACLE_HOME=${ORACLE_BASE}/product/9.2
export ORACLE_SID=PRDB1
export LD_LIBRARY_PATH=${ORACLE_HOME}/lib
export LD_ASSUME_KERNEL=2.4.19
export PATH=${ORACLE_HOME}/bin:/usr/bin:/etc:/usr/bin/X11:/usr/local/bin:${ORACLE_HOME}/OPatch:${PATH}

cd /asa005g/oraarch/exports/${ORACLE_SID}

NOW=`date '+%Y%m%d_%H%M'`

export FILE_TAG=${ORACLE_SID}_${NOW}
exp ${ORACLE_SID}_SYS/$password@${ORACLE_SID} file=${FILE_TAG}.dmp log=${FILE_TAG}.log buffer=100000 full=y statistics=none consistent=y
gzip ${FILE_TAG}.dmp

I am using the above syntax to take an export of the production server from my local dev server. Is it possible to replace exp with expdp and still take the dump from my dev server?

Thanks in Advance.
Tom Kyte
March 14, 2007 - 7:54 am UTC

is it possible to use expdp instead of exp?

yes.

not sure what else you were looking for though

Anil Kumar, March 14, 2007 - 8:09 am UTC

This is the syntax used to export the schema dump from client side:
exp ${ORACLE_SID}_SYS/$password@${ORACLE_SID} file=${FILE_TAG}.dmp log=${FILE_TAG}.log buffer=100000 full=y statistics=none consistent=y

I am taking this dump from the client side, not on the server where the database is installed.


If I want to replace the same with expdp, I am facing a problem:

expdp ${ORACLE_SID}_SYS/$password@${ORACLE_SID} directory=test_dir dumpfile=test.dmp logfile=test.log

How do I create the directory so that it points to a client-side mount point? Because this is a production database, I can't use the mount points on that server. How do I create a directory that points to a client location? Is that possible?

Thanks in Advance.
Tom Kyte
March 14, 2007 - 1:58 pm UTC

expdp runs in the database; the dump file is always created on the database server.

Import/Export

khater ali, March 15, 2007 - 3:07 am UTC

Hi Tom.

I have two databases, one is production and the other is development. When I try to do an export from production and import into dev, it shows ORA-00959: tablespace does not exist in the dev db. My question is: if it is 1 or 2 tablespaces we can create new ones, but if it shows 40 or 50 tablespaces that do not exist, then how can we import the data? In this scenario creating the tablespaces is cumbersome. Is there any alternative solution for this scenario?

Thanks
Khater Ali
Tom Kyte
March 15, 2007 - 11:52 am UTC

"creating the tablespace is cumbersome" :) fascinating.


the answer is "it depends". if you have single segment objects only (no partitions, no lobs, no iot's with overflows and such), it'll just work, if you have multi-segment objects, you'll need the tablespaces.

A reader, April 04, 2007 - 2:37 am UTC

1. In response to Arindam Mukherjee's question above, I suspect you do not realize that the exp/imp utilities do not create tablespaces for you. You need to pre-create the target tablespace(s) required in the target database first, manually. I personally think many Oracle "users" don't expect this requirement, although it makes sense.

2. In response to Roma from Russia, can you repeat your example but in a brand new schema?

3. Tom, I'm upgrading/migrating from 9i on Windows 2003R2 32 bit to 10gr2 on Oracle Enterprise Linux 64 bit. Is it correct that exp/imp is my only real option? And is there any problem doing the full=y export to a single 300GB file?

Thanks...
Tom Kyte
April 04, 2007 - 10:11 am UTC

1) they might, they might not - it depends actually.

3) no. you could

a) upgrade the 9i database to 10g on windows and do a full database transport (basically backup windows, restore to linux)

b) upgrade the 9i database to 10g on windows and use datapump

c) use database links between 9i and 10g to move the data

d) dump and reload using a custom program of your choosing

for example.

to move 300gb, I doubt export and import would even come into my head. Especially with full=y, that I would never even consider (be looking at smaller schema level exports done in parallel at best)

A reader, April 04, 2007 - 2:43 am UTC

Sorry - forgot one related item... (see post above)

4. Are the scripts to create the dictionary still a requirement in a fresh 10gr2 install?

Tom Kyte
April 04, 2007 - 10:12 am UTC

if you install 10g, and create a database, yes, you'll create the 'catalog' if that is what you mean.

Expdp/impdp

Aru, April 04, 2007 - 6:06 pm UTC

Hi Tom,

In exp/imp we can choose not to import the statistics and, after the import, generate the statistics ourselves.
How can we do the same in expdp/impdp? I have not seen any options in expdp help=y / impdp help=y, or maybe I've missed something.
Thanks as always,
Regards,
Aru.
Tom Kyte
April 04, 2007 - 6:12 pm UTC

http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#sthref389

they are of the opinion that a parameter similar to statistics is not needed.

if source had statistics, they are imported.

exp

A reader, April 23, 2007 - 4:40 pm UTC

TOm:

We are doing an exp/imp from a production database to a mirror database. In the script, we disable a few jobs that send emails, using dbms_job.remove(), to avoid duplicate emails. However, I noticed that sometimes the database will run jobs before the remove command.

Is there a way to exclude the jobs from exp or imp, or otherwise resolve this issue?
Tom Kyte
April 23, 2007 - 4:54 pm UTC

set job_queue_processes to zero in the "mirror database", no jobs will run until you set job_queue_processes > 0
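
For example - a sketch, assuming you have ALTER SYSTEM privilege on the mirror database:

alter system set job_queue_processes = 0;
-- run the import, then dbms_job.remove() the unwanted jobs
alter system set job_queue_processes = 10;  -- back to whatever your normal setting is; 10 is just an example

With the parameter at zero, dbms_job jobs simply never execute, so nothing can fire before your remove() calls.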

How fast it is importing

B.M.Kulkarni, May 07, 2007 - 8:39 am UTC

Hi Tom,
Greetings. You are excellent as usual.

While importing, the main problem we face is knowing how long it will take to complete the import. Is there any way to monitor import speed? How can we know the time left to complete the import?
Tom Kyte
May 08, 2007 - 10:54 am UTC

impossible to know "how long", but you can show the progress using "feedback=1000" or so.

You'll be able to see how far along it is - but to guess how long to complete would be - a guess pure and simple. The very next table might have 50 indexes to create (but the prior tables had one each) so it would take "much longer". The table after that might have longs or lobs - making it go row by row instead of array inserting. You might hit a schema that needs to compile 500 packages - whereas the prior ones did 5 packages each and so on
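
For illustration, a sketch of the kind of command line meant here (the file name and credentials are placeholders):

imp scott/tiger file=exp.dmp full=y log=imp.log feedback=1000

imp then prints a dot for every 1000 rows loaded into the current table, so you can at least see that it is moving, even though the total time remaining is still a guess for the reasons above.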

import

sam, May 10, 2007 - 3:36 pm UTC

Tom:

We have an oracle web application (mod_plsql) that users use to key in student applications.

Now, there is another, separate web site (ASP) that allows students to enter application information on the web, saves the data in a comma-delimited format and sends it in an email.

What I need is to somehow create something that allows the user to import that CSV data into our Oracle app.

What is the best way to do this? Can you do this using a web page, or does the user have to save the CSV data (from the email) into files, install sqlldr and use that to load the data into the Oracle database on the unix server?


Tom Kyte
May 11, 2007 - 11:04 am UTC

you can certainly file upload the csv (into a blob) and use utl_raw.cast_to_varchar2 to convert the raw into text and parse it, sure.

import

sam, May 10, 2007 - 3:45 pm UTC

Tom:

As a followup to the above, would you create a web page that uploads the text file from the PC to the Oracle database, then parse the uploaded file and insert records into tables accordingly?

Do you think this is a good easy web solution?

csv

sam, May 11, 2007 - 6:18 pm UTC

Tom:



1. Why do you upload the file into a blob and not a clob? The file is text, not binary.

2. Let us say the csv file has the following and I want this to be saved in CUSTOMER table.

Customer_no,Date,Time,First Name, Last Name,Address, City,State,Zip Code,Country,Phone
1234,4/26/2007,10:45:12,Joe,Blow,"1200 Mayflower St, Apt 9", Los Angeles, Ca,20111,USA,213-999-2222
5678,............
222,.................

Would this be uploaded as one record into your BLOB table? Would the parsing be difficult here?
I do not see how pl/sql will find out where the end of each record is?

3. I was reading about external tables. Would you recommend using them in this case?
Would I have to upload the file and then write it to the server file system to use external tables?

4. If I want to use the SQLLOADER option would the user have to install the oracle client on the machine and then just run a dos command to load the file to the database? Is this the easiest solution?

Thank you,
Tom Kyte
May 14, 2007 - 12:39 pm UTC

1) because file upload uploads via mod_plsql into a blob - because "in general", you upload anything - word documents, whatever.
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1352202934074

has an example of reading a bfile (like a blob) line by line


2) get a blob, which is read via dbms_lob. the "end of record" is a newline character (chr(10) or chr(13)||chr(10) if file is from windows)

you could dbms_lob.instr( blob, utl_raw.cast_to_raw( chr(10) ) ) to find end of line markers

3) you would need to upload to the server's FILE SYSTEM (not going to happen really, external tables need to have the file on the database SERVER)

4) yes, you would have to do that. No, I don't see this as an easy option, I see it as the hardest way to do it - installing and configuring software on the end users machine... ugh.
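
As a rough sketch of point 2 - not production code; the table and column names are invented, it assumes unix-style chr(10) line endings, and it does not handle empty lines or a final unterminated line:

declare
    l_blob   blob;
    l_raw    raw(32767);
    l_line   varchar2(32767);
    l_start  integer := 1;
    l_end    integer;
begin
    select csv_doc into l_blob from uploaded_docs where id = 1;

    loop
        -- find the next newline at or after l_start
        l_end := dbms_lob.instr( l_blob, utl_raw.cast_to_raw( chr(10) ), l_start );
        exit when nvl(l_end,0) = 0;
        l_raw  := dbms_lob.substr( l_blob, l_end - l_start, l_start );
        l_line := utl_raw.cast_to_varchar2( l_raw );
        -- here you would split l_line on commas and insert into CUSTOMER
        dbms_output.put_line( l_line );
        l_start := l_end + 1;
    end loop;
end;
/

Note that a quoted field containing a comma (like the address in the sample data) needs a smarter split than a simple instr on commas.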


Impdp and DBA_JOBS

A reader, June 06, 2007 - 5:09 pm UTC

Although not the best approach, we are using impdp (using the system account) to recover a database. Everything imports fine, except the DBMS jobs. Although they are imported, all the jobs belong to SYSTEM. Here's a sample of DBA_JOBS:

 JOB LOG_USER   PRIV_USER  SCHEMA_USER
---- ---------- ---------- -----------
   2 SYSTEM     SYSTEM     SYSTEM
   3 SYSTEM     SYSTEM     SYSTEM
   4 SYSTEM     SYSTEM     SYSTEM
   5 SYSTEM     SYSTEM     SYSTEM

How can we import the jobs to their original user?

imp questions??

David, September 12, 2007 - 6:03 pm UTC

Dear Tom,

I have an 817 dump file and I tried to imp two specific users into my 9.2.0.1 DB. I created the corresponding users and tablespaces in the 9i DB. But when I try to imp, it gives me errors like this:

Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
export client uses US7ASCII character set (possible charset conversion)
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
. importing MERAK's objects into MERAK
"ALTER SESSION SET CURRENT_SCHEMA= "MERAK""
IMP-00003: ORACLE error 1435 encountered
ORA-01435: user does not exist
IMP-00000: Import terminated unsuccessfully

Here is my imp command:
imp system/pw@prod fromuser=MERAK touser=MERAK file=E:\Oracle_Full_Backups\ORCL\072007.dmp log=E:\Oracle_Full_Backups\ORCL\orclimp.log ignore=y buffer=9999999 grants=y indexes=y

Can you help me out. Many thanks in advance.
Tom Kyte
September 15, 2007 - 7:11 pm UTC

did you create the user?

fromuser/touser won't create them, full imports do that.

Import problem - different character set

Ken, September 17, 2007 - 12:42 pm UTC

Dear Tom,

I'm facing a problem while importing an export file of a database with character set CL8MSWIN1251; the Oracle server into which the data is to be imported has character set WE8MSWIN1252. I am getting the following errors, please help.

import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
export client uses CL8MSWIN1251 character set (possible charset conversion)
. importing ADP6's objects into ADP_169
. . importing table "ACC" 223 rows imported
IMP-00017: following statement failed with ORACLE error 1452:
"CREATE UNIQUE INDEX "ACC_PK" ON "ACC" ("GROUP_CODE" , "ACCOUNT_KEY" ) PCTF"
"REE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 196608 FREELISTS 1 FREELIST "
"GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADP_INDEX" LOGGING"
IMP-00003: ORACLE error 1452 encountered
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
IMP-00017: following statement failed with ORACLE error 20000:
"BEGIN DBMS_STATS.SET_INDEX_STATS(NULL,'"ACC_PK"',NULL,NULL,NULL,233,1,233,"
"1,1,52,0,6); END;"
IMP-00003: ORACLE error 20000 encountered
ORA-20000: INDEX "ADP_169"."ACC_PK" does not exist or insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 2121
ORA-06512: at "SYS.DBMS_STATS", line 5393
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 2437:
"ALTER TABLE "ACC" ADD CONSTRAINT "ACC_PK" PRIMARY KEY ("GROUP_CODE", "ACCO"
"UNT_KEY") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 19"
"6608 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADP_IND"
"EX" LOGGING ENABLE "
IMP-00003: ORACLE error 2437 encountered
ORA-02437: cannot validate (ADP_169.ACC_PK) - primary key violated
. . importing table "ACC_EXP" 0 rows imported
. . importing table "ACCGROUP" 0 rows imported
. . importing table "ACC_LOCAL_700" 0 rows imported
. . importing table "ACC_RES_CORR_700" 0 rows imported
. . importing table "AF0052" 0 rows imported
. . importing table "BALANCE_COM" 11 rows imported
. . importing table "BALANCE_COM_CONS" 0 rows imported
. . importing table "BALANCE_COM_DEPT" 18 rows imported
. . importing table "BALANCE_DEPT" 7 rows imported
. . importing table "BALDEPT_CONS" 0 rows imported
. . importing table "BALSADJUST" 0 rows imported
. . importing table "BALSDEPT" 0 rows imported
. . importing table "BALSNAT" 0 rows imported
. . importing table "BANKBAL" 3 rows imported
IMP-00017: following statement failed with ORACLE error 1452:
"CREATE UNIQUE INDEX "BANKB_PK" ON "BANKBAL" ("COYNUM" , "BANKCODE" , "STATE"
"DATE" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 196608 FREELIST"
"S 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADP_INDEX" LOGGING"
IMP-00003: ORACLE error 1452 encountered
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
IMP-00017: following statement failed with ORACLE error 20000:
"BEGIN DBMS_STATS.SET_INDEX_STATS(NULL,'"BANKB_PK"',NULL,NULL,NULL,594,3,59"
"4,1,1,343,1,6); END;"
IMP-00003: ORACLE error 20000 encountered
ORA-20000: INDEX "ADP_169"."BANKB_PK" does not exist or insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 2121
ORA-06512: at "SYS.DBMS_STATS", line 5393
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 2437:
"ALTER TABLE "BANKBAL" ADD CONSTRAINT "BANKB_PK" PRIMARY KEY ("COYNUM", "BA"
"NKCODE", "STATEDATE") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAG"
"E(INITIAL 196608 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESP"
"ACE "ADP_INDEX" LOGGING ENABLE "
IMP-00003: ORACLE error 2437 encountered
ORA-02437: cannot validate (ADP_169.BANKB_PK) - primary key violated
. . importing table "BANK_BATCH_CTL" 0 rows imported
. . importing table "BANK_BATCH_DTL" 0 rows imported
. . importing table "BANK_BATCH_HDR" 0 rows imported
. . importing table "BANK_BRANCH" 0 rows imported
. . importing table "BANKBR_STR" 0 rows imported
. . importing table "BANK_CLEAR_DATE" 0 rows imported
. . importing table "BANKCODE" 3 rows imported
IMP-00017: following statement failed with ORACLE error 1452:
"CREATE UNIQUE INDEX "BANKC_PK" ON "BANKCODE" ("BANKCODE" ) PCTFREE 10 INIT"
"RANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUF"
"FER_POOL DEFAULT) TABLESPACE "ADP_INDEX" LOGGING"
IMP-00003: ORACLE error 1452 encountered
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
IMP-00017: following statement failed with ORACLE error 20000:
"BEGIN DBMS_STATS.SET_INDEX_STATS(NULL,'"BANKC_PK"',NULL,NULL,NULL,3,1,3,1,"
"1,1,0,6); END;"
IMP-00003: ORACLE error 20000 encountered
ORA-20000: INDEX "ADP_169"."BANKC_PK" does not exist or insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 2121
ORA-06512: at "SYS.DBMS_STATS", line 5393
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 2437:
"ALTER TABLE "BANKCODE" ADD CONSTRAINT "BANKC_PK" PRIMARY KEY ("BANKCODE") "
"USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELI"
"STS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADP_INDEX" LOGGING"
" ENABLE "
IMP-00003: ORACLE error 2437 encountered
ORA-02437: cannot validate (ADP_169.BANKC_PK) - primary key violated
. . importing table "BANKCODE_COY" 6 rows imported
IMP-00017: following statement failed with ORACLE error 1452:
"CREATE UNIQUE INDEX "BNKCOY_PK" ON "BANKCODE_COY" ("COYNUM" , "BANKCODE" ) "
" PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREEL"
"IST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADP_INDEX" LOGGING"
IMP-00003: ORACLE error 1452 encountered
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
IMP-00017: following statement failed with ORACLE error 20000:
"BEGIN DBMS_STATS.SET_INDEX_STATS(NULL,'"BNKCOY_PK"',NULL,NULL,NULL,6,1,6,1"
",1,2,0,6); END;"
IMP-00003: ORACLE error 20000 encountered
ORA-20000: INDEX "ADP_169"."BNKCOY_PK" does not exist or insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 2121
ORA-06512: at "SYS.DBMS_STATS", line 5393
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 2437:
"ALTER TABLE "BANKCODE_COY" ADD CONSTRAINT "BNKCOY_PK" PRIMARY KEY ("COYNUM"
"", "BANKCODE") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITI"
"AL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADP"
"_INDEX" LOGGING ENABLE "
IMP-00003: ORACLE error 2437 encountered
ORA-02437: cannot validate (ADP_169.BNKCOY_PK) - primary key violated
. . importing table "BANKPMT" 6 rows imported
IMP-00017: following statement failed with ORACLE error 1452:
"CREATE UNIQUE INDEX "BANKPM_PK" ON "BANKPMT" ("COYNUM" , "ACCOUNT_KEY" , "B"
"ANKCODE" ) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 FREELI"
"STS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLESPACE "ADP_INDEX" LOGGING"
IMP-00003: ORACLE error 1452 encountered
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
IMP-00017: following statement failed with ORACLE error 20000:
"BEGIN DBMS_STATS.SET_INDEX_STATS(NULL,'"BANKPM_PK"',NULL,NULL,NULL,6,1,6,1"
",1,2,0,6); END;"
IMP-00003: ORACLE error 20000 encountered
ORA-20000: INDEX "ADP_169"."BANKPM_PK" does not exist or insufficient privileges
ORA-06512: at "SYS.DBMS_STATS", line 2121
ORA-06512: at "SYS.DBMS_STATS", line 5393
ORA-06512: at line 1
IMP-00017: following statement failed with ORACLE error 2437:
"ALTER TABLE "BANKPMT" ADD CONSTRAINT "BANKPM_PK" PRIMARY KEY ("COYNUM", "A"
"CCOUNT_KEY", "BANKCODE") USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 STO"
"RAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABLE"
"SPACE "ADP_INDEX" LOGGING ENABLE "
IMP-00003: ORACLE error 2437 encountered
ORA-02437: cannot validate (ADP_169.BANKPM_PK) - primary key violated

Tom Kyte
September 18, 2007 - 3:42 pm UTC

do you understand what happens with character set conversions?

whenever you change charactersets - you'll have issues like this.

Let us take a simple example - 8bit extended characters (western european) and 7bit US7ASCII.

Now, when we convert the extended characters from decimal 128 to 255 - we map them into the ascii characters 0..127.

we take a set of 255 characters and map them into 128 characters.

We take a larger set and map to a smaller set.

causing - possible duplication.


And - there is not any "fix" for this, we can only do the conversion requested - according to the rules of that character set.

QUERY RUNING ON DIFFERENT ORACLE VERSION DBS GOT SAME RESULT, BUT DIFFERENT ORDER

David, November 02, 2007 - 12:43 pm UTC

Hi, Tom,

We are trying to migrate our DB from Oracle 9.2.0.1 to 10.2.0.1. We run a simple query to generate some reports. The query returns the same number of rows, but the order of the results is quite different. The result from 9201 comes back nicely ordered by DATE_TIME, while the result from 10201 does not come back in order by DATE_TIME. There are indexes on the tables we queried. We have a lot of reports and can hardly force the ordering by adding "order by DATE_TIME" to every query. I understand Oracle does not guarantee the order of a query, but would you please help us understand what happens here and how to correct it? Does dropping and recreating the indexes on the tables after the import help? Or do you have any other recommendation? Many thanks.

Tom Kyte
November 02, 2007 - 1:28 pm UTC

do you have an order by?

if not, you have your answer.

without order by - I fail to see why you expect the data to be ordered in any fashion.

In fact, I'll never understand that.


ops$tkyte%ORA10GR2> create table t ( x int );

Table created.

ops$tkyte%ORA10GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA10GR2> insert into t values ( 2 );

1 row created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         1
         2

ops$tkyte%ORA10GR2> truncate table t;

Table truncated.

ops$tkyte%ORA10GR2> insert into t values ( 2 );

1 row created.

ops$tkyte%ORA10GR2> insert into t values ( 1 );

1 row created.

ops$tkyte%ORA10GR2> commit;

Commit complete.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         2
         1


No order by, no order....

http://asktom.oracle.com/Misc/order-in-court.html


there is but ONE WAY TO FIX THIS and that is to fix the bug in your code, you must be missing an order by.

Thanks a lot for your answer.

David, November 06, 2007 - 5:08 pm UTC

Tom,

Thanks a lot, I really appreciate your great help. Oracle wouldn't be as good without your help on the user end.

reader

A reader, November 12, 2007 - 10:39 am UTC

When a heavy import (millions of rows) is done on a table, either by imp or by "insert into table ...", the next time someone reads from the table it takes a long time due to delayed block cleanout.

What are the options to prevent this performance hit when accessing the table after a heavy import?

The analyze table ... cascade seems to work well.
Do you suggest any other options that would take less time than analyze?
Tom Kyte
November 16, 2007 - 1:03 pm UTC

don't import, direct path load it.


insert /*+ append */ would do it. The blocks are loaded "clean"

the analyze table is just like running a query on it, you could just do that as well - run a query on it.
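
For instance, a sketch of the two options (T and STAGING_TABLE are placeholder names):

-- option 1: load direct path, so the blocks go in already clean
insert /*+ append */ into t select * from staging_table;
commit;

-- option 2: after a conventional load, touch the blocks yourself once
select /*+ full(t) */ count(*) from t;

Either way, the first "real" user query no longer pays for the block cleanout.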

reader

A reader, November 16, 2007 - 5:55 pm UTC

what is the difference between a simple "insert ...."
vs. "import /*+ append */ ...." with reference to cleaning the block of the "lock bit"? Is it because the direct load bypasses the buffer cache and has sufficient time to clean the block? If so, will there be any space allocation issues (redundant allocation of blocks leading to sparse filling of blocks) if "insert /*+ append */" is done for each single-row insert?
Tom Kyte
November 21, 2007 - 11:05 am UTC

believe you meant insert /*+ append */, not import

anyway - a direct path load (insert /*+ append */, sqlldr direct=y, create table as select, alter table t move...) knows it is working with "clean blocks" - in a single user fashion (no one else will be touching them).

so, it loads them "clean"


insert /*+ append */ only works with "insert SELECT", never with insert values. Thank goodness.

else you would get one row per block - since direct path operations never use any existing formatted block and always allocate new blocks from above the high water mark.

IMPDP Network_link and insert/select from @dblink

AlvinP, December 11, 2007 - 7:57 pm UTC

Hello Tom,

What's the difference between

IMPDP network_link vs.

Insert /*+ append */ into xxx
select * from remote_tab@remotedb;

Are there advantages and pitfalls using one or the other ?

Tom Kyte
December 12, 2007 - 8:10 am UTC

well, one does its work in a single sql statement (insert /*+ APPEND */) and the other does it more row by row-ish.

If I have one table - or a few tables - I'd be tempted to just use INSERT, easier to set up for sure. A single sql statement.

overwriting procedures using impdp

Sam, January 08, 2008 - 2:45 pm UTC

Hi Tom,

I have a dump file created using data pump in 10gR2. I am using impdp to import the above file.

I am using parameter
table_exists_action=replace

to drop / recreate any table that already exists.

Is there an equivalent parameter for packages / procedures ?

I get the following error if the procedure already exists:
ORA-31684: Object type PROCEDURE:"DELME"."TEST_PROC" already exists

Alternatively, is there any way to force impdp to do a "Create OR replace" when creating procedures / packages ?

Thanks
Sam

A reader, January 28, 2008 - 5:54 pm UTC

Tom

"I use EXP as a tool to proactively detect corruption, physical or logical, in my database. If you use EXP
to perform a full database export, it will completely exercise the data dictionary, finding almost any
logical dictionary corruption for us. Additionally, it will full scan every table in your database, reading
out all of the rows. If there is a table somewhere with a bad block, this is bound to find it. It won't find
certain types of logical corruption, such as an index that points to non-existent rows, since it simply full
scans the tables, but it will find the most important types of error (we can always rebuild an index,
rebuilding a table may not be possible). "

1. "logical dictionary corruption", can you please explain what do you mean by logical dictionary corruption?

2. How will an import find out "table somewhere with a bad block" ?

3."On more than one occasion, I have
used this to rescue a missing piece of code in an emergency "

what kind of missing piece of code scenario, are you mentioning about?

Thanks


Tom Kyte
January 29, 2008 - 6:51 am UTC

1) eg: a bug, a problem in the dictionary.

2) import won't, export will. By reading it and processing it.

3) where you "accidentally" create or replace a procedure, and didn't mean to - and you need the old copy.

A reader, January 30, 2008 - 8:23 pm UTC

1. Please give an example of a bug in the data dictionary.

2. A bad block has to do with the operating system; how will the export find it?

When it in fact finds it, where does it record the information for review?
Tom Kyte
January 31, 2008 - 3:25 am UTC

1) to what end? If logical corruption were to enter the dictionary (because someone for example MODIFIED it by hand, that is a pretty common cause - some DBA that reads some script on this here inter-web thing and follows it, modifies the data dictionary and screws it up entirely), it'll likely find it - if it is a problem - since it uses so much of the dictionary data.

2) export will cause full scans of all tables. If one of the blocks in the table isn't readable - export would let you know about it.

export prints stuff on the screen, export can create a log file.

please read about export in the documentation - so you know what parameters it supports and understand what it does...

Data Import problem - 8.1.7

Nazmul Hoque, February 17, 2008 - 5:21 am UTC

Dear Tom,

I have received an exported DMP on 8.1.7 PE from one PC. Now while I am importing the same on my PC I am getting the below error. The DMP size is 283MB. Earlier the DMP was exported without any error; nowadays I am getting this error.

Please help me to solve the problem.

=========================================

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\hoque>cd\

C:\>imp

Import: Release 8.1.7.0.0 - Production on Sun Feb 17 16:05:38 2008

(c) Copyright 2000 Oracle Corporation. All rights reserved.

Username: hrm/hrm@orape

Connected to: Personal Oracle8i Release 8.1.7.0.0 - Production
With the Partitioning option
JServer Release 8.1.7.0.0 - Production

Import file: EXPDAT.DMP > c:\hrmdata

Enter insert buffer size (minimum is 8192) 30720>

Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
List contents of import file only (yes/no): no >

Ignore create error due to object existence (yes/no): no >

Import grants (yes/no): yes >

Import table data (yes/no): yes >

Import entire export file (yes/no): no >
Username: hrm

Enter table(T) or partition(T:P) names. Null list means all tables for user
Enter table(T) or partition(T:P) name or . if done:

. importing HRM's objects into HRM
. . importing table "ACTUAL_LIA" 0 rows imported
. . importing table "APPMONTHINFO" 0 rows imported
. . importing table "ATTENDANCE_BONUS" 5175 rows imported
. . importing table "ATTENDENCEINFO" 129027 rows imported
. . importing table "ATTENDENCEPUNCH" 0 rows imported
. . importing table "ATTENDENCETYPE" 0 rows imported
. . importing table "COMMONLEAVE" 3 rows imported
. . importing table "COMMONLEAVEX" 0 rows imported
. . importing table "COMPANY" 1 rows imported
. . importing table "CONFIRMATIONINFO" 1137 rows imported
. . importing table "DESIGNATIONINFO" 0 rows imported
. . importing table "DOCTORINFO" 1 rows imported
. . importing table "EARNENCASHMENT" 0 rows imported
. . importing table "EARNLEAVEBALANCE" 0 rows imported
. . importing table "EARNLEAVEGENERAL" 0 rows imported
. . importing table "EDUCATIONINFO" 0 rows imported
. . importing table "EMPAEVALUATIONINFO" 0 rows imported
. . importing table "EMPINFO"
IMP-00009: abnormal end of export file
IMP-00003: ORACLE error 0 encountered
ORA-00000: normal, successful completion
IMP-00000: Import terminated unsuccessfully



Tom Kyte
February 17, 2008 - 8:14 am UTC

IMP-00009: abnormal end of export file
IMP-00003: ORACLE error 0 encountered
ORA-00000: normal, successful completion
IMP-00000: Import terminated unsuccessfully



I guess the first thing I would do is get the person that sent me this file to tell me exactly how many bytes it was....

and then when I see that mine is different - I'd try to figure out why it is - maybe I did a TEXT ftp instead of binary, hence corrupting the file as windows would turn all CHR(10) into CHR(13)||CHR(10) in the file...


Data import

Nazmul Hoque, February 18, 2008 - 12:49 am UTC

Yes, you are right. In the properties it shows 276MB, however physically it shows 283MB. Please help me, how can I get this data onto my PC?

Waiting for your advice.
Tom Kyte
February 18, 2008 - 7:32 am UTC

you'll obviously want to get the file again, it is not correct. Maybe you ran out of disk space copying it, I don't know what you did to it but....


if it should be 283mb
and it is 276mb

you obviously lost 7mb of data

and we cannot just "make that up", you'll have to find it

Data Import

Nazmul Hoque, February 18, 2008 - 8:00 am UTC

Dear Tom,

Please note I need your help to solve the issue. We can arrange a fresh new DMP as of today. I can take a DMP at any time from the HOST (where Oracle is running), but the problem is I am not able to use that DMP to import data at my end. Only an expert like you can help us solve the issue.

You can advise me If there are any other way to get the data(export) from the HOST and same can be export at my end. We can try that way also.

Please advise ...........

Thanks
Tom Kyte
February 18, 2008 - 9:58 am UTC

I cannot tell you what you did wrong in the moving of a file from system A to system B.

Only someone who knows what you did (no expert required, it is a FILE COPY) can correct it.

...You can advise me If there are any other way to get the data(export) from the
HOST and same can be export at my end. We can try that way also....

that does not compute.


My advice: be more careful in the moving of the file from A to B - that is what you did wrong, nothing but nothing about Oracle here - just a file copy.

Data Import

Nazmul Hoque, February 18, 2008 - 11:33 pm UTC

Dear Tom,

Sorry i am disturbing again and again. Today i have got a copy of the DMP file that size now 351 MB and it showing same both the places (A & B), If i check the properies it showes less Size from 351 MB.

I export the DMP using the exp command from the DOS prompt and it exports the data successfully. I have been able to use such dmp files at my end, and I have been using this process for a long time. But for the first time I am facing a problem at the time of importing the data, and the DMP size is also increasing abnormally. I do not know why this happens.

My request: is there any other way to export data from Oracle 8.1.7 PE? Please advise................ and save us.

Thanks
Tom Kyte
February 18, 2008 - 11:44 pm UTC

this makes no sense of course:

Today i have got a copy of the DMP file
that size now 351 MB and it showing same both the places (A & B), If i check
the properies it showes less Size from 351 MB.



if at A&B it is the same size, how can the "properties" at B show a size less than 351mb.. do you realize that makes no sense to anyone? can you explain a bit better what you might mean here.

dmp files do not increase in size as they are read by imp. never. not possible. dmp files are READ by imp, never , never, never, never, never written. They will neither increase nor decrease in size using imp.

what you write and reality conflict with each other, please help us reconcile that.

We admire your patience

John Gasch, February 19, 2008 - 9:52 am UTC

Tom - You deserve the Nobel Prize for Patience!

Data Import

Nazmul Hoque, February 22, 2008 - 6:30 am UTC

Dear Tom,

I have checked everything and found it okay. The only problem is the dmp size: I am able to import data when the DMP size is up to 250 MB, but when it is more than that I am not able to import the dmp. Can you please advise me what I need to do to import data from the DMP. Out of the 150 tables in that dmp, the empinfo table is 330 MB and the total DMP is 360 MB.

Please advise..........
Tom Kyte
February 22, 2008 - 7:25 am UTC

sigh, you already told me the file is the wrong size.

You have no problem importing over 250mb, you have a problem getting a file over 250mb to IMPORT!!!!!

do you understand that unless and until you get a file on your machine that is the same size as the file that was originally exported to - this is not going to work.


... If i check the properies it showes less Size from 351
MB. ...

this is *not an oracle issue*

this is a problem you have moving files - figure out what you are doing wrong there first and then you be perfectly OK

Size of DMP file using EXPDP

Sudip Sarkar, April 07, 2008 - 1:53 pm UTC

Hi Tom,
My DBAs are using expdp to take an export of my application (the batch application runs every month).
Oracle: 10g on Solaris.
Now the application schema size is about 300gb. The database is in Archivelog mode and the archives are being properly backed up to tape.
There are other batch applications in the same database, in different schemas and tablespaces.
My questions are:
a) Is expdp the only method to take a backup?
b) Won't expdp become slow as the size grows?
c) In the event of reverting back to a previous point in time, is IMPDP the only option?

thanks in advance
Sudip

Tom Kyte
April 09, 2008 - 9:05 am UTC

a) expdp DOES NOT MAKE BACKUPS. exports are NOT BACKUPS. BACKUPS are backups and nothing else is. rman or user managed backups are the only backups.

export data pump is a logical data copy tool, it extracts data from the database and stores it in a file, you cannot "restore" with a dmp file, you can only reload data (no recovery, no roll forward, nothing like that at all)

b) see a - unless you are copying this data out in order to copy it in as of that point in time elsewhere, I don't know what you are doing with it.

c) if you have a DMP file, data pump is pretty much the only tool - yes.


They should be BACKING UP however, period, if you want a backup.

DATAPUMP: Can we decide the table order for importing?

A. N. Shukla Abhai, May 23, 2008 - 8:16 am UTC

Hi Tom,

We are using Oracle 10gR2. We want to migrate data from 4-5 different databases into one central database. In all source databases we have the same schema and object structure, including referential constraints. We tried the following steps...

1. We export and import at schema level using the METADATA_ONLY option. It works fine and created all the schema objects.
2. We exported the schema data using the DATA_ONLY option; while importing, we are getting integrity constraint violated / parent key not found errors:
ORA-31693: Table data object "SCOTT"."EMP" failed to load/unload and is being skipped due to error:
ORA-02291: integrity constraint (SCOTT.FK_DEPTNO) violated - parent key not found

To avoid the above error, what should we do?
Unfortunately we can't disable the constraints on the central database, because once data from any server has been loaded into it, it will be in production.

We don't want to go to TABLE level because we have thousands of tables and it would create several dumpfiles.
Can we define the table order at the time of EXPDP/IMPDP at the SCHEMAS level?
Please guide us. Thanks in advance.

--Shukla
Tom Kyte
May 23, 2008 - 9:09 am UTC

you would have to run dp over and over - and think about it - you would gain nothing by specifying the order here - datapump would still have to scan the dump file over and over for each one.

so, just do table level impdp's - in the order you like - else data pump puts them in in the order they were exported into the dmp file.


this sounds like a recipe for disaster - putting data from 5 different databases into one table - I see lots more issues in your life soon.

and don't forget, you might have cyclic reference - meaning there doesn't have to be a sequential order that would work even..
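
For illustration only - a sketch of what a table-level, data-only run in your own order might look like (schema, directory and file names are placeholders, and it assumes the metadata was already created as in your step 1):

impdp system/pw directory=dp_dir dumpfile=src_db1.dmp tables=SCOTT.DEPT content=data_only
impdp system/pw directory=dp_dir dumpfile=src_db1.dmp tables=SCOTT.EMP  content=data_only

Each run re-reads the dump file, which is exactly the repeated scanning cost mentioned above, so per-table dump files, or a non-Data-Pump approach such as inserts over database links, may fit this consolidation job better.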

DATAPUMP: Can we decide the table order for importing?

A. N. Shukla Abhai, May 23, 2008 - 9:45 am UTC

Hi Tom,

Thanks for valuable response quickly.
Please help me again...

1. As you said, we have to do TABLE-level impdp only.
Which one will be better to use with respect to performance:
a) Export in a single job containing all tables and import table by table, again and again, in our dependency order using the same set of dmp files.
b) Create a separate dump file for each table and import that.

2. As you said, data pump puts them in in the order they were exported into the dmp file; can we define the order during export?

3. Is there any other method we can implement in such a situation? Some of our tables have 1-5 million records in each database.

--Thanks,
Shukla.




Tom Kyte
May 23, 2008 - 4:14 pm UTC

it would be less work to have separate dmp files - dmp files are read sequentially.

data pump was not really designed as a data consolidation tool - as such, it just exports things in any order it wants.


I don't think I'd be looking at data pump for this - it would have to load and re-load all of the data all of the time, this doesn't sound reasonable if you ask me - you seem to indicate you have lots of data - do you really want to reload it over and over and over and over. Would not a different approach be something you want to look at.

Move Data from One Database to Another

A Reader, May 27, 2008 - 3:03 pm UTC

Hi Sir,

As per the Oracle documentation, before using EXPDP and IMPDP we should create a directory and then grant READ and WRITE permission on it to the Oracle user.

I do not understand why we need to give permissions on the directory; aren't we restricting export and import to a particular user? What if I create a directory MYDIR and give the SCOTT@ORACLE1 user directory permissions, but now I want to take an export of SCOTT@ORACLE2 - how will that work? Or if I want to import a dump taken from SCOTT@ORACLE1 into SCOTT@ORACLE2, how will that happen? Do I need to shuffle files across servers?

With the traditional IMP and EXP commands we can specify any database, get the data in and out, and transfer the physical files.

Please please explain.

Regards
--
Tom Kyte
May 27, 2008 - 3:23 pm UTC

you grant access to directories in file systems to users - you grant access to directories in oracle to users.

I fail to see the difference there? What difference do you see?
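
The mechanics being compared are, roughly (the path and names here are invented):

create directory mydir as '/u01/app/oracle/dpdump';
grant read, write on directory mydir to scott;

which plays the same role as granting operating-system permissions on the directory: it controls which database users may read or write files in that location.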

Data Transfer from One Server to Another

A Reader, May 27, 2008 - 3:58 pm UTC

Sir,

Very sorry, I think I did not put it the correct way.

Suppose I have Server-1 and Server-2. And I have Oracle-10g installed on both, say Oracle-1 and Oracle-2 respectively. I want to export data from Oracle-1 to Oracle-2.

Using the traditional Import-Export, I can log in to Server-2 over unix, take an export of Oracle-1 and then import into Oracle-2 using the following commands

Export : exp user1/pwd@oracle1 --> this is on server2
Import : imp user2/pwd@oracle2 ---> again on server2

With the above two commands I transferred data from one server to another by logging on to just one server and without using FTP.

If I have to do the same thing using the new IMP and EXP, I have to create a directory on Server-1, give access permission to the Oracle-1 user, and then do the export. To import into Server-2, I again have to create a directory on Server-2, grant access, then copy that file to Server-2 and then import.

This looks like a very long process, and I know whatever I am assuming may be wrong, but I don't know what is correct either. Can you please help me understand the better flow to move data from one server to another?
Tom Kyte
May 27, 2008 - 6:44 pm UTC

you just asked me this elsewhere

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:891071800346984803

and I told you "no, you don't need to do that", you wouldn't even create a file.

there would be NO file.
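
For completeness, a sketch of the file-less approach being referred to - assuming a database link oracle1_link created in Oracle-2 that points at Oracle-1, and a privileged account on Oracle-2 (all names invented here; nologfile=y is used only so no directory object is needed for a log):

impdp system/pw@oracle2 network_link=oracle1_link schemas=user1 nologfile=y

The rows travel over the link straight into Oracle-2; no dump file is written on either server.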

export

A reader, June 06, 2008 - 11:48 pm UTC

Tom:

1. Do you usually recommend copying production data to the test database before UAT? Why do people do that sometimes?

2. What do you usually recommend for copying prod data into the TEST database, considering that the TEST database might be different (new columns, objects, tables, etc.)?

The DBA does a schema export from prod and a schema import into test, and then we have to apply the DB upgrade script and move code from DEV to TEST. Do you like this approach?

3. How is backup from prod / restore to test different? Would it import the data and keep the table structure intact?

4. Would you recommend creating a sql script to copy from PROD into TEST for 100+ tables, or would that be too cumbersome?
Tom Kyte
June 09, 2008 - 11:38 am UTC

1) if you test on a database that is not sized the same as what you will deploy on, with the same sorts of data distributions - what can you say about the performance of the deployed system? Not much - this is why people would test against something realistic.

2) well, you need an upgrade script to turn production into version 2 right?

And you need to test this script right?

And you need to test what will happen after said script is run - right?

I would not consider using export at all, simply restore a backup.


3) see #2

4) see #2

exp

A reader, June 09, 2008 - 2:33 pm UTC

Tom:

Excellent.

Can you elaborate on why you would use a backup (RMAN) and then restore, instead of exp/imp? Aren't they the same?

Also, would you ever do a table@prod to table@test copy for this sort of thing? It sounds cumbersome, as you have to clean up all the tables in TEST and then run an insert statement for each table in the correct order.
Tom Kyte
June 09, 2008 - 3:39 pm UTC

they are entirely and utterly different.

datapump is a logical copy. sure you have the same rows and columns - but we DUMPED the data out, we RELOADED IT BACK in, indexes are brand new - pristine, data is stored in different files, different blocks.


You would be comparing an apple to a tomato using data pump.

If you restore a backup, you have bit for bit, byte for byte - the same stuff. You can be pretty sure a plan generated for a query in test will be the same in production since you have the same physical layout on disk - if you data pump the data, you might well find it to be a completely different story.



If you want a testing instance, by far the most straightforward thing to do is restore a backup. In fact, it'll prove that you can actually restore your database - that alone is reason to do it.

and it is really quite easy to do.

exp

A reader, June 09, 2008 - 5:12 pm UTC

Tom:

It looks like data pump is a faster sql*loader utility,

but isn't backup/restore doing the same thing as exp/imp, exporting the data and then importing it?

9i import and workarea_size_policy

A reader, August 28, 2008 - 1:25 pm UTC

hi tom,


1)
imp does things in the following order, as far as I know:
1. create tables
2. load table data
3. create indexes
4. validate and enable constraints

Question: correct?



2)
We did an import on 9i with default parameters. This took a very long time.

we did the same import with "indexfile=x.sql" and "constraints=y" and ran the "x.sql" file in a sqlplus session afterwards still having workarea_size_policy = AUTO. this appeared to be much faster.

Question: How can this be faster while workarea_size_policy is still AUTO? The "create index" statement and memory areas (pga) should be the same in both cases as workarea_size_policy = AUTO!?



thanks for your time.
Tom Kyte
August 29, 2008 - 10:56 pm UTC

please, define "this appeared to be much faster"

and - what was a) happening during the first import versus b) what was happening during the second import.

remember "auto" tries to be "fair", if you had 100 sessions during import 1 (99 idle, just you running) and 1 sessions during import 2 - the amount of memory you would be allowed to use would be very different.

eg: no, I would definitely NOT expect the workareas to be the same in general, the entire goal of auto workareas is to have them resize based on what is happening.
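For reference, the two-pass approach being described - load the rows without building indexes, then extract the index DDL into a script and run it separately - is typically run something like this (user, password and file names are placeholders):

$ imp scott/tiger file=full.dmp fromuser=scott touser=scott indexes=n        # load tables and rows, skip index creation
$ imp scott/tiger file=full.dmp fromuser=scott touser=scott indexfile=x.sql  # write the index DDL to x.sql, import nothing
$ sqlplus scott/tiger @x.sql                                                 # build the indexes afterwards

With INDEXFILE specified, imp only writes the DDL to the file (table statements appear as comments), so the second pass does not touch the data.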

db refresh

A reader, September 13, 2008 - 7:37 pm UTC

I have two oracle databases: PROD and TEST.

PROD is at version 1.0. TEST has version 2.0 (many db and code changes).

I want to refresh TEST with data in production.

would i be able to:
a. drop all the tables in TEST, leaving the code (packages/procedures/functions/triggers)
b. then export tables from PROD with data
c. then import tables into TEST.
d. then apply db upgrade script

1. would this work?

2. Would doing a schema-level export and then import be better? That way I have to back up all the code from TEST before dropping the schema, then drop all the code from TEST after the import, and then recompile all the code from the backup.
Tom Kyte
September 16, 2008 - 9:44 pm UTC

would you be able to drop all tables, export tables/import them then apply db upgrade script.

I can say yes to drop, export and import. You have to answer "apply db upgrade script", that is your code, you wrote it. Would it work?


1) you tell me. Seems more likely you want to

a) backup prod
b) restore prod to test
c) run your upgrade scripts

(eg: test what will happen in real life in prod). It does not seem to make any sense to preserve anything in test - you need to have upgrade scripts that modify your schema and install your code - this is a perfect time to test them.


2) ummm, you are telling me your code isn't in source code control already? hmmmm - that is bad. New first step for you: get your code under source code control - then start thinking about other things - until then, nothing else matters.

exp

A reader, September 17, 2008 - 7:23 pm UTC

Tom:

1. Do you really need source code control for small 1-2 man projects? We had this once and we found the check-in and check-out to be extra work, and we never got to use it. We just back up the files on a daily basis.

2.
When I do an export it stops after a few tables. Do you know what causes that?



2. . exporting table AUDIT_CLOSE 0 rows exported
. . exporting table
EXP-00002: error in writing to export file
EXP-00002: error in writing to export fileerror closing export file exppcm.dmp
EXP-00000: Export terminated unsuccessfully

then i try this

$ exp padmin/padmin@pppp file=ttt.dmp full=y

Export: Release 9.2.0.2.0 - Production on Wed Sep 17 18:55:59 2008

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.2.0 - 64bit Production
With the OLAP and Oracle Data Mining options
JServer Release 9.2.0.2.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
EXP-00002: error in writing to export fileerror closing export file ttt.dmp
EXP-00000: Export terminated unsuccessfully

Tom Kyte
September 17, 2008 - 7:32 pm UTC

1) YES. Checkin checkout is extra work - but done for a purpose.

What do you have to do when you checkin? You have to say "this is what I did". So that later, when you both get fired, laid off, transferred, promoted, relocated, sick, whatever - your replacements stand a chance.

And just because there are only two of you doesn't mean you are not going to stomp all over each other's code.

2) Looks like your file system might be full - is it? Especially since the first one gets a little way along and the second stops immediately.

exp

A reader, September 18, 2008 - 12:45 pm UTC

Tom:

I deleted a bunch of big old files in the directory where I do the export, and it still fails after a few tables.

EXP-00002: error in writing to export file
EXP-00002: error in writing to export fileerror closing export file piexp.dmp
EXP-00000: Export terminated unsuccessfully

It seems it does not matter if I clean up files; I just need to request more space on disk.

Tom Kyte
September 18, 2008 - 9:29 pm UTC

that just means you keep running out of space - you deleted some old files, but that didn't free enough space.

when your export fails, did you look at the free space on the disk? Was it in fact full?
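Two quick things worth checking on most Unix systems before re-running the export (the path is a placeholder):

$ df -k /u01/export        # free space on the filesystem holding the dump file
$ ulimit -f                # the shell's file size limit - EXP-00002 can also mean this limit was hit

If either of those is the problem, deleting a few old files will not help; the dump file simply needs more room than is available.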

dmp size

ELY, September 19, 2008 - 1:09 pm UTC

Hi Tom,

Been a big fan of this site since it was introduced to me by a friend, and I would have to say it has helped me solve a lot of mysteries in my Oracle programming.

I just have this question. We have an 8.1.7 database that is doing an exp every night using

exp system/manager full=Y compress=N file="destination"\%fname% >>"destination"\fullexplog.txt

It normally produces a dmp file of about 1GB in size. Unfortunately, all the schemas are in one tablespace, which is SYSTEM. I moved one schema to its own tablespace using what I found here somewhere.

alter TABLE ACCOUNT move
tablespace whmgmt_data
storage (initial 131072 next 8192
minextents 1 maxextents 4096
pctincrease 50 freelists 1);

alter INDEX SYS_C0019037 rebuild
tablespace whmgmt_indx
storage ( initial 131072 next 131072
minextents 1 maxextents 4096
pctincrease 0 freelists 1);

You might recognize this from one of your script-generating SQL statements that I pilfered from your site.

My question is: after moving to a different tablespace, the size of the dmp shrank by 200MB. Is this normal? The same script was run, nothing else changed, and only one schema was moved. Should I be worried?

Thanks in advance,

ELY
Tom Kyte
September 19, 2008 - 4:31 pm UTC

that is not normal. you'll want to validate your data.

Refreshing Test database

Harschil Kaparwan, October 19, 2008 - 8:34 pm UTC

Hi Tom,

Many Thanks in advance.

We have two environments: PROD and TEST, and we wish to refresh the TEST environment, which has not been refreshed in a long time.

I am just trying to find out the best way to refresh TEST db from PROD db.

Here ...

a) There are some datafiles which have been added in PROD since the last refresh of TEST.
b) There are a few users/roles/procedures/functions/sequences which have been added in PROD since the last refresh of TEST.
c) PROD has schema A, which holds the application data.

I only have the exp/imp option, not a cold/hot backup (complete db) restoration in test.

I am thinking of taking the following path; please suggest if you have better option(s).

a) Add the datafiles in the TEST env (alter tablespace ... add datafile ...).
b) Take a complete structure-only dump of the PROD database (rows=n) and import it.
Here I have sometimes faced a problem: the import starts creating tablespaces (create tablespace ...). How do I make sure this does not happen?
c) Export schema A from PROD.
exp user/pass owner=A file=a.dmp log=a.log
and then import into TEST.

My database size is around 100GB.

Regards






Tom Kyte
October 21, 2008 - 11:43 am UTC

here would be my process

a) erase test
b) restore prod to test

done.

I would not even for a millisecond consider export/import. First of all - slow. Second of all - slow. Third of all - the newly created database after the import (which did I mention would be slow) will not be anything like the production database. The data will be organized very differently on disk - any tests I performed in test would tell me what the test database would do if it were moved into production - but it won't be. If I restore production to test I've done two things

a) I've proven to myself I can actually restore production (this is worth its weight in gold)

b) I've assured myself that any testing I do is realistic - I can expect to see the same/similar results in real life.
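As a sketch only (connection details and names are assumptions, not a recipe), the "restore prod to test" step is most often done as an RMAN duplicate: start the TEST instance NOMOUNT, make prod's backups readable from the test host, set the file name conversion parameters if the directory structures differ, and then:

$ rman target sys@prod auxiliary sys@test

RMAN> duplicate target database to test nofilenamecheck;

RMAN restores the backups into the test instance, recovers them, and opens TEST with a new database id - the same physical data layout as production.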

Refreshing Test database ...

Harschil Kaparwan, October 19, 2008 - 8:36 pm UTC

Sorry Tom
I forgot to mention the DB version above;
it is 9.2.0.6

Regards

Refreshing Test database ...

Harschil Kaparwan, October 21, 2008 - 12:37 pm UTC

Many Thanks TOM for giving valuable information.

Alexander, October 21, 2008 - 4:38 pm UTC

Tom,

What if we have many applications using a database - how can we refresh test with prod? We could back up and restore the tablespace, but since the backup information RMAN uses is in the controlfile, how would we go about doing this? I'm not aware of an option you can use to restore from a specific file. Thanks.
Tom Kyte
October 22, 2008 - 8:08 am UTC

why wouldn't you just restore "the database" - why do you care if there is 1 or 100 'applications' in there.

and your DBA doesn't know how to restore just the tablespaces they want???

But - why not just restore the *database*

So that test = prod.

You can of course just restore the tablespaces you want - your DBA should be able to do that, and if not, this is reason 133,423,123 for using your backups to refresh test - so they can learn to do this safely.

Alexander, October 22, 2008 - 10:56 am UTC

Ok, couple of things. I don't want to refresh other people's applications because they're all doing different things at different times.

I know how to restore a database. And I'm pretty sure I could restore into a foreign database if I could just see the steps, some code, anything. Tell me, how am I supposed to learn it in the first place if, when you ask, you are told "you should already know how to do this"?

I've not found any information here or in the documentation on how to restore into a different database.
Tom Kyte
October 22, 2008 - 4:20 pm UTC

would not different test environments exist for different applications?

setup your instance to restore to
restore system
restore undo
restore the tablespaces of interest
recover them
open and optionally drop the other tablespaces

very similar to the steps regarding a TSPITR
http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmtspit.htm#i1008453
but don't do the transport at the end.

but again, I'll go back to the first paragraph.


Alexander, October 22, 2008 - 4:46 pm UTC

You lost me with the first question. Do you mean that each application/schema in production would have its own separate instance/machine?

One instance per machine, many applications within - isn't that the way to go? Test is the same; it wouldn't be a test environment if it wasn't a mirror of production. So I'm not following you there.

But given that, can you see why I would want to restore one tablespace at a time? Now let's say I just wanted data - then I can exclude SYSTEM and not disrupt others, correct?

The part I am missing is how to tell RMAN to restore a tablespace backup that wasn't taken from the local instance. I'll keep looking for a step-by-step example; what I really need is a picture to "see" it.
Tom Kyte
October 22, 2008 - 4:56 pm UTC

...
You lost me with the first question. Do you mean that each application/schema
in production would have its own separate instance/machine?
....

isn't that what YOU proposed? You want to have a separate testing environment for each application?

but I think I now see where you are going. TSPITR might actually be what you were looking for.

Read the TSPITR, it restores system+undo+tablespace(s) you want to another instance. Recovers them to time X, then you transport them onto the target instance.

It has a picture.

Alexander, October 24, 2008 - 10:49 am UTC

Is using TSPITR the only way to restore a tablespace backup from one database to the other? This is confusing. No idea what an auxiliary database is. Their description doesn't really mean anything to me.

When we want to restore a tablespace backup, all we have to do is take it offline, restore, recover, put it back online, done.
Tom Kyte
October 24, 2008 - 1:53 pm UTC

if you are going to restore a tablespace to a different point in time, that would be a tablespace point in time recovery.

No idea why that is confusing. You want to restore a tablespace to a given database as of a different point in time from the current incarnation of that database.


The documentation does document this.

An auxiliary database is an "auxiliary", temporary, new, off-to-the-side instance that you restore system + undo + the tablespaces of interest to, recover them, and then transport the data from.


... When we want to restore a tablespace backup, all we have to do is take it
offline, restore, recover, put it back online, done. ...

umm, that would be a complete recovery - it is as of the current point in time, you would end up with exactly what you started with.


This is definitely stuff you should be playing with if you are the DBA. I like to say there is only one thing the DBA cannot get wrong - that is the ability to recover. Auxiliary instances, tablespace point-in-time recoveries - these are all pretty old concepts. The ability to take a backup, restore ALL or SOME of it to another instance - get data out of it and move it around - these are things you should be able to do fairly easily.
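RMAN can also create and clean up that auxiliary instance for you. A minimal sketch of the fully automated tablespace point-in-time recovery (tablespace name, time and path are all made up):

RMAN> recover tablespace users
        until time "to_date('2008-10-23 17:00','yyyy-mm-dd hh24:mi')"
        auxiliary destination '/u01/aux';

Behind the scenes RMAN restores SYSTEM, UNDO and USERS under /u01/aux, recovers them to the requested time, and then brings the recovered USERS back into the target database as of that time.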

Alexander, October 24, 2008 - 2:20 pm UTC

Maybe it's just me, but that sounds like way too much trouble just to restore a tablespace from one db to the other.

I was just comparing how much easier it is to restore a backup than the TSPITR process. I can't really think of a situation where I would need to know this backwards and forwards as you suggest. I was looking into this because I like what you had to say about using real backups to refresh test. I guess we're not really set up for this, we don't have spare machines kicking around for an auxiliary db.
Tom Kyte
October 25, 2008 - 8:24 am UTC

AT A DIFFERENT POINT IN TIME
AS OF A DIFFERENT POINT IN TIME
AS IN NOT AT THE SAME POINT IN TIME


If you suffer media failure, lose a tablespace and want to restore it, then you

a) restore it
b) recover it completely


If however you want your tablespace to come from an entirely DIFFERENT POINT IN TIME (that seems to be missing here, that point), you need a slightly different process - transportable tablespaces come into play.


And as a DBA, if you do not know the ins and outs of recovery - all of the options - you are not a DBA yet. Recovery is the ONE THING a DBA cannot get wrong and must master.

You need 1 machine to have an auxiliary instance. One machine. Period.



And when a user comes to you (an application owner) and says "hey, that database with 30 odd applications in it, well, we upgraded ours last night and it bombed big time, I need you to restore my application as of 5:00pm last night please" - that is when you will be saying "ah, TSPITR, got it - no problem" - instead of saying "ah, well, sorry, you are out of luck, we cannot do that, it isn't possible"



Alexander, October 27, 2008 - 11:52 am UTC

Ok to summarize then, can you answer a couple of questions because we kind of started a new subject.

1) If I want to restore a tablespace from one db to the other, I must use TSPITR.

2) If I want to restore a tablespace to a point in time, I would use TSPITR.

Tom Kyte
October 27, 2008 - 2:03 pm UTC

1) or transport it from database A to database B. Transporting in older releases involves making the tablespace READ ONLY in the source database. If you cannot do that (many times it is not practical) you can use RMAN to transport - but that is basically TSPITR in disguise.

http://docs.oracle.com/docs/cd/B19306_01/backup.102/b14191/ontbltrn.htm

2) yes.
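For option 1, the classic transport (making the tablespace read only on the source) looks roughly like this; tablespace, password and file names are placeholders:

-- on database A
SQL> exec dbms_tts.transport_set_check('USERS', true);
SQL> select * from transport_set_violations;          -- must come back empty
SQL> alter tablespace users read only;
$ exp userid=\'sys/pw as sysdba\' transport_tablespace=y tablespaces=users file=tts.dmp

-- copy tts.dmp and the USERS datafiles to database B, then
$ imp userid=\'sys/pw as sysdba\' transport_tablespace=y datafiles='/u02/oradata/b/users01.dbf' file=tts.dmp

-- back on database A, once the datafile copy is complete
SQL> alter tablespace users read write;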

expdp over network_link

Lise, March 17, 2009 - 5:32 am UTC

Hi,
I would like to move data from several tables, where rows are selected depending on, say, an emp_id. The data should be extracted from database A and moved to database B.
Database B is in use, say for testing, so I would not want to replace the tables in this database. I only want to add in the rows I exported from database A.
On top of this I have sequences to consider. Sequences on database A are not in sync with sequences in database B. However, the sequences link tables together through FKs etc. I therefore need to replace the sequences exported from database A with the sequences in database B.

I thought I could use impdp with the NETWORK_LINK parameter, however this requires you to consider the whole table, and the QUERY option is 'not supported'.
I would still have the sequence issue.

So - I then moved on to database links, however I then came across an issue in regards to the use of indexes.
If I execute a query across a database link, the index on the table I am accessing across the database link does not seem to be used. I even added a hint, with no luck. This was tried in 9iR2. Do you know if this is a 'known issue'? I was told that this has been resolved in later releases.

...and this would have to be an automated system, allowing the end users to copy data from database A to database B whenever they wish.
Tom Kyte
March 17, 2009 - 10:32 am UTC

database links do not in any way shape or form preclude indexes. Give us an example query with the plan please.


Excluding partitioned tables during Export

Vinayak, February 07, 2010 - 5:23 am UTC

Hi Tom,

We have a database with roughly 250 tables, out of which 15 are partitioned. They are partitioned on date and have around 90 days of data (each day is a partition). The requirement is to create a test database that will have:

1. 5 days worth of data from the partitioned tables
2. All non-partitioned data

For the 1st one I was able to take the export using the following command:
expdp user/pwd directory=dump_dir dumpfile=5days.dmp tables=t1:t1p1,t1:t1p2,t1:t1p3,t1:t1p4,t1:t1p5,t2:t2p1,.....

For the 2nd case, I thought of using expdp where I could exclude all partitioned tables and take everything else using the following parfile:

directory=dump_dir
dumpfile=nonpartition.dmp
exclude=table:"in ('t1','t2','t3')"

but the above does not work - it exported the entire schema, including the partitioned tables. The resulting dump file was too big.

I am unable to unload the data for just the non-partitioned tables. Please suggest a solution or tell me where I am going wrong.

kind regards
Vinayak
Tom Kyte
February 09, 2010 - 6:53 pm UTC

how about table in ('T1', 'T2', 'T3' )
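That is, object names are stored in the dictionary in uppercase unless they were created as quoted identifiers, so the exclude filter has to match the uppercase names. A sketch of the corrected parfile (same names as in the question):

directory=dump_dir
dumpfile=nonpartition.dmp
exclude=TABLE:"IN ('T1','T2','T3')"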


Selective export

A reader, April 13, 2010 - 3:29 pm UTC

Hi Tom,

I have 5 schemas and 600 tables

I want to export only the tables with prefixes 'ABC%', 'EFG%' and 'KLM%', which amounts to about 100 tables. Is there a way to do a selective expdp? The purpose is extracting the metadata; the metadata will then be reviewed for structural changes, modified, and used to recreate the tables in another database.

Thanks
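This one never got an answer in the thread, but a parfile along these lines is one way it is commonly approached (everything below is illustrative; the subquery form of the filter needs an account that can read DBA_TABLES, and on some versions you may have to fall back to one LIKE filter per run):

directory=dump_dir
dumpfile=prefix_meta.dmp
content=metadata_only
schemas=SCHEMA1,SCHEMA2,SCHEMA3,SCHEMA4,SCHEMA5
include=TABLE:"IN (SELECT table_name FROM dba_tables WHERE table_name LIKE 'ABC%' OR table_name LIKE 'EFG%' OR table_name LIKE 'KLM%')"

CONTENT=METADATA_ONLY unloads just the DDL, which matches the stated purpose of reviewing and recreating the structures elsewhere.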

export

A reader, August 13, 2010 - 7:00 am UTC

Tom:

1. When I export the schema in a WE8ISO8859P1 database, it is doing it in US7ASCII.
Do you know why?

$ pwd
/ora920/bin

$ exp tssadmin/tssadmin@tssp file=tssp.dmp tables=TSS_USERS log=exptssp.log

Export: Release 9.2.0.2.0 - Production on Thu Aug 12 17:09:41 2010

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.2.0 - 64bit Production With the OLAP and Oracle Data Mining options JServer Release 9.2.0.2.0 - Production Export done in US7ASCII character set and AL16UTF16 NCHAR character set server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table TSS_USERS 496 rows exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
Export terminated successfully with warnings

2. Should I always use direct=y for faster exports?
Tom Kyte
August 19, 2010 - 12:38 am UTC

1) because you told us to. You either let the NLS_LANG environment variable default (and it defaults to us7ascii on unix) or you set it that way.

export NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252

or whatever you want it to be...


2) no, sometimes it will be slower. Most of the time, just export.

Alexander, September 29, 2010 - 12:10 pm UTC

Hi Tom,

Do LOBs take considerably more time to exp/imp vs non-LOB fields? I am asking because we have a few somewhat standard shell scripts that exp/imp schemas to reporting databases. I have two that are similar in size, but the one without LOBs takes a few hours, and the one with a ton of LOB columns takes 17.

Basically my question is, does that sound off to you? Should it take THAT much longer? You can assume the hardware is comparable.

Can you describe the process of inserting a LOB column via imp vs a varchar, the work that needs to be done so that I understand?

Thank you.
Tom Kyte
September 29, 2010 - 1:07 pm UTC

exp/imp drop to single row, slow by slow processing with lobs - there is no array processing.

Also, lobs are complex datatypes - to insert a lob requires not only inserting into the table, but also populating a lob segment outside of the table and maintaining an index on that lob segment. It is much harder than a "non lob"


Normally import would read as much as its buffers allow from the dmp file and array insert that - with lobs or longs - imp goes into slow by slow which can slow things down considerably.

Alexander, September 29, 2010 - 1:12 pm UTC

Does datapump change any of this behavior?
Tom Kyte
September 29, 2010 - 1:47 pm UTC

datapump should be better at this than exp/imp. datapump uses external tables and single sql statements to process this.

exp/imp

A reader, September 30, 2010 - 8:37 am UTC

Tom:

I had a similar issue when importing a 3 GB table with a BLOB column. It takes a long time.

I have not tried data pump yet because it was not supported in 9i, but many say it will save 30% in time over the old exp/imp tool.

What would you do if you had to load a production table into a test database? Would you restore from a backup, use transportable tablespaces, or what?
Tom Kyte
September 30, 2010 - 10:56 am UTC

restore from backup would be #1 if it was an option.

transport from a backup set would be #2

transport from production would be a far #3 as it would require you to make the tablespace read only for a bit.

Alexander, September 30, 2010 - 11:00 am UTC

Bugs in 10g prevent TT from backup sets from working, else I would be doing that.
Tom Kyte
September 30, 2010 - 11:03 am UTC

you could have always done it yourself if you wanted.

restore system, undo and tablespace in question.

point in time recovery it.

transport it.


that is what transporting from a backup set does under the covers. It creates an auxiliary instance, restores, recovers and transports.

Alexander, September 30, 2010 - 11:15 am UTC

In my case this is an automated job that happens every night. It would be very difficult to script what you just described.

Also, I've had problems just restoring system, undo, etc. I wasn't even sure you could do that. Is it simply

restore tablespace system, undo, my_ts;
recover tablespace system, undo, my_ts;
Tom Kyte
September 30, 2010 - 12:21 pm UTC

... It would be very
difficult to script what you just described. ...

using rman? No, not really. You can script a restore. (you better be able to!)


see
http://docs.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmtspit.htm#i1034676

Alexander, September 30, 2010 - 12:44 pm UTC

Well, I admit it's a pretty good idea, but there's no way I could have known this a couple of years ago when this process was created (I had no idea what TT was then). However, I foresee a process with many moving parts. We'd also have to create scripts to rebuild all the code that lives in system and keep that automated and maintained.

Right now we are trying to convince the people with the $ to go for a long term solution that includes either

a) GoldenGate to replicate the schema
b) Get its own instance/node so we can use duplicate database
c) My favorite, but it has already been shot down due to FUD: use the operational database

exp

A reader, October 02, 2010 - 9:39 pm UTC

Tom:

Wouldn't the restore from backup be more practical if you restore a whole database or schema rather than a few tables?
How fast is the restore compared to exp/imp? Let us say a 5 GB schema - would it happen in minutes or hours?
Tom Kyte
October 04, 2010 - 2:04 am UTC

exp would be very slow compared to a restore for a 5gb schema. Exp is just very slow for anything.

With backups you can restore single tablespaces - as of any point in time at all. With export, you have to read through the whole exp file and can only restore as of the point in time the export was done.

exp is not a backup tool, exp is a logical data copying tool.

import privileges needed

A reader, April 18, 2012 - 4:20 pm UTC

Tom:

Can you tell me what privileges an application/user account would need to do an import of a database dump?

I have a 9i application schema that I can export to a dump file and I need to import that file to the 11g database.

My thinking is I need to drop the user "myapp" and then re-create it and then do the import.

so it would be

1) drop user
2) create user
3) IMP_FULL_DATABASE

is this correct if the account does not have a DBA role?
Tom Kyte
April 19, 2012 - 4:42 am UTC

if you just have a schema, you don't need imp full database - you only have a schema.

you either need to import as that schema or be a DBA. I would suggest just importing as that schema.

imp

A reader, April 19, 2012 - 9:48 am UTC

Tom:

Yes, I thought the IMP_FULL_DATABASE grant allows a user to import that schema too in 11g.

For this box they won't give me the DBA role.
So what privileges do I need to do an import at the schema level?

Let us say the application account is "Scott".

So my idea is:

sql> drop user scott
sql> create user scott identified by tiger
sql> grant same privs to scott

unix prompt
$ imp scott/tiger@orcldb file=sottfile.dmp full=y

The IMP will not do "CREATE OR REPLACE" in the DDL it generates. I would have to drop hundreds of objects before it would work, so drop user/create user is much simpler.
Tom Kyte
April 20, 2012 - 3:15 am UTC

sure, imp full database would - so would sysdba - but you don't *need* that, you asked what you needed - you just need to do it as the schema itself, no special privileges necessary.

and you know, if you can drop a user, create a user, grant them necessary privileges - it does sound like you might have DBA.
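A hedged sketch of the whole cycle as it is usually run (the privilege list and all names are illustrative, not exhaustive; the drop/create/grant steps need a privileged account, while the import itself runs as the schema):

SQL> drop user scott cascade;
SQL> create user scott identified by tiger default tablespace users quota unlimited on users;
SQL> grant create session, create table, create sequence, create view,
     create procedure, create trigger, create materialized view to scott;

$ imp scott/tiger@orcldb file=scottfile.dmp fromuser=scott touser=scott log=imp_scott.log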

imp

A reader, April 20, 2012 - 7:48 am UTC

Tom:

OK, no special privileges needed.

But don't you have to drop all the objects in the schema before you run the imp command, since the DDL does not do REPLACE?

You have to drop all tables, packages, constraints, sequences, indexes, MVs, triggers, procedures, functions, etc.

Is there a quick way to clean out everything inside the schema, or do you have to create a script based on the USER_OBJECTS view?
If you have that script, can you point me to it?

thanks,

Is there a quick way to do that?
Tom Kyte
April 20, 2012 - 4:16 pm UTC

you can use ignore=y if you want to append the data to the existing data.

otherwise you do what you are planning or you drop things one by one, up to you.
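There is no built-in "drop everything in my schema" command, but the USER_OBJECTS-driven script asked about above is usually a short PL/SQL block along these lines (a rough sketch only - it ignores errors such as MV container tables, does not cover every object type, and should be reviewed before being run anywhere that matters):

begin
    for x in (select object_name, object_type
                from user_objects
               where object_type in ('TABLE','SEQUENCE','VIEW','SYNONYM','PACKAGE',
                                     'PROCEDURE','FUNCTION','MATERIALIZED VIEW'))
    loop
        begin
            if x.object_type = 'TABLE' then
                execute immediate 'drop table "' || x.object_name || '" cascade constraints purge';
            else
                execute immediate 'drop ' || x.object_type || ' "' || x.object_name || '"';
            end if;
        exception
            when others then null;  -- e.g. the table underneath a materialized view; skip and carry on
        end;
    end loop;
end;
/

Run it as the schema owner, and run it more than once if dependencies cause some drops to fail on the first pass.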

expdp issue

Viswajeet Samal, May 06, 2012 - 11:39 pm UTC

I am using an Oracle user A to export a partitioned table.
A's default tablespace was TSA earlier.
So, as per my understanding, when the export was executing, the temporary table used for the export - something like 'SYS_EXPORT_TABLE_22' - was getting created in TSA.
But now the issue is that TSA is offline for some reason, and so a "not able to read datafile" error is coming.

Hence I have changed the default tablespace of user A to TSB, but still the same error about not being able to read a datafile belonging to TSA is coming.

Can anyone let me know how to overcome this? Please don't advise changing the user to SYSTEM and taking the export; for the time being I have done that, but I want to use user A as the permanent solution.

Tom Kyte
May 07, 2012 - 5:17 am UTC

export doesn't create a table.

I don't know what you mean - an export file doesn't need any datafiles to be online or not to be read.

You better be a bit more clear as to the step by steps you are taking and what you are doing right at the moment you hit an error.


I don't know where a 'not able to read datafile' error would be coming from - you give *zero* context.

Export and importing of table

Suresh.R, May 09, 2012 - 6:59 am UTC

Hi Tom,

Thanks a lot for your support.
Could you please confirm: does an export table backup include indexes by default?

example:
I use the following:-

exp username/pwd@xxdb-dbname-sec tables=sa.Table_name file=exp_dbname_Table_name.dmp log=exp_dbname_Table_name.log compress=N statistics=none --> does this export include all the indexes for the table?

and then

imp username/pwd file=exp_dbname_Table_name.dmp full=Y LOG=imp_dbname_Table_name.log ignore=Y --> Does this import the table with all indexes present?

Kindly confirm on the same please.
Many thanks.

Tom Kyte
May 10, 2012 - 3:20 am UTC

export is not backup. backup is not export. exports are not backups. backups are not exports.




it will export the indexes unless you tell us not to. You'd have to say "indexes=N" to have their definitions not be exported.

Gather statistics after impdp

dk, June 01, 2012 - 1:24 am UTC

Hello Tom,

I export a schema from one database to another, located on a different machine and OS, using Data Pump. Are the imported statistics fully relevant? In other words, is it better to gather statistics anew?

Regards
Tom Kyte
June 01, 2012 - 6:49 am UTC

if the character sets are the same - the imported stats are probably OK, there could be slight differences in some numbers like the clustering factor of an index ( the data has been stored more compactly) but in general - it will be ok.

ABOUT NORMAL EXPORT & IMPORT UTILITIES AND ABOUT RESTORE CONTROLFILE WITHOUT RMAN COMMANDS

Akhil, September 09, 2012 - 2:17 am UTC

Dear Tom,

Please help me with the queries below.

1. I have one dump file. It contains 26 tables (table names A, B, C, D ... X, Y, Z). I don't want to import 2 of the 26 tables (say Y and Z). How do I import at the user level? The production database is located at a remote site.

2. I have an RMAN backup of the production database. Accidentally the control files, data files and redo log files were lost. How can I restore the control file without RMAN commands? Is it possible to restore?
Tom Kyte
September 10, 2012 - 8:16 pm UTC

1) trick question. using imp you cannot import just some of the tables with a schema level import, you will have to list the 24 tables you want. you will do a table level import.

2) if you lost the online redolog files, the best you can do is a full recovery and recover up to the last log you have, period.
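For point 1, a table-level import listing the 24 wanted tables is usually easiest with a parfile (the names below are just the placeholders from the question):

$ cat imp24.par
file=alltabs.dmp
fromuser=scott
touser=scott
tables=(A,B,C,D,E,F,G,H,I,J,K,L,M,N,O,P,Q,R,S,T,U,V,W,X)
log=imp24.log

$ imp scott/tiger parfile=imp24.par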

create the controlfile

joel garry, September 13, 2012 - 7:12 pm UTC

You can create a controlfile manually, since you can look and see what all the datafiles are. Go to a working database and perform the command: alter database backup controlfile to trace as '/yourfavoritedirectory/controlfile.txt'; and see what is in there. Edit it to have a command file to create a controlfile with your data files. As Tom said, without redo you won't be able to fully recover, and that is not a good thing for a dba.

It might even be a good idea to add that trace command to the regular backups.

Full database import in Oracle 9i

A reader, August 30, 2018 - 2:11 pm UTC

One of my old clients has an Oracle 9i installation.
Recently, they had a power surge that damaged the HDD that hosted the database.

He has a full logical backup and an RMAN backup, but no controlfile / no archived redo logs / no pfile / no spfile.

What are my options?

How do I get the tablespace information from the dump file? Like many of us, it's been a while since I touched 9i. Will indexfile give me the tablespace information?

Here are the steps I am thinking of for a full db import.
Please advise if any of the steps are unnecessary or if I am missing any steps.

a) Create a database using DBCA

b) Pre-create the tablespaces.

c) Imp the full dump file

d) utlrp.sql

e) compile all other objects.


Thanks very much.

Connor McDonald
September 04, 2018 - 5:12 am UTC

imp show=y full=y

or

imp indexfile=... full=y

should give you all of the DDL for tablespaces etc etc. If the RMAN backup was done from cold (or mounted) it would still be valid, otherwise it probably is not much use.

But your (a) through (e) looks sound