Question and Answer

Tom Kyte

Thanks for the question, Venkateswara Reddy.

Asked: February 09, 2002 - 11:48 am UTC

Last updated: June 29, 2011 - 3:10 pm UTC

Version: 8.1.5

Viewed 10K+ times.

You Asked

Dear Tom

I have been eagerly waiting to reach you for two weeks and didn't get through. Today I am lucky.

1. I want to export certain tables instead of exporting all tables. Out of 200 tables I want the 25 tables starting with 'appr%', as well as my procedures, functions, and packages.

How can I do that?

Thanks

Reddy

and Tom said...

Well, you can export the TABLES easily enough. Just create a parameter file using:


1 select decode( rownum, 1, 'tables=(', ',' ), table_name
2 from user_tables
3 where table_name like 'E%'
4 union all
5 select ')', null
6* from dual
scott@ORA8I.WORLD> /

DECODE(R TABLE_NAME
-------- ------------------------------
tables=( E
, EMP
, EMP2
, EMPLOYEE
, EMP_DEMO
, EMP_DEPT
, EMP_SNAPSHOT
)

(spool that to a file) and use partfile= on the EXP command line.

To get all of the procedures -- that's a bit trickier; export wasn't designed to do that. You can use this utility:

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:743825617527

to accomplish that.
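Outside the database, the same trick can be sketched in plain shell. Assuming the table names have already been spooled to a text file, one name per line (the file names `tables.txt` and `exp.par` here are hypothetical), a short awk script can wrap them into the `tables=(...)` parameter:

```shell
# Build an exp parameter file from a plain list of table names,
# mimicking the DECODE trick above: the first row opens "tables=(",
# later rows get a leading comma, and a ")" closes the list.
printf 'EMP\nEMP2\nEMPLOYEE\n' > tables.txt   # stand-in for the spooled query output

awk 'NR == 1 { printf "tables=(%s", $0; next }
            { printf ",%s", $0 }
     END    { print ")" }' tables.txt > exp.par

cat exp.par   # tables=(EMP,EMP2,EMPLOYEE)
```

The resulting `exp.par` can then be handed to `exp ... parfile=exp.par`.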

Rating: (102 ratings)

Comments

oops

Yogeeraj, February 10, 2002 - 11:01 pm UTC

Typing mistake!! Well, it happens to everyone... ;)
I guess it should have been "parfile".

Great example.
Cheers

===========================================================
exp help=y

Export: Release 8.1.7.0.0 - Production on Mon Feb 11 08:04:44 2002

(c) Copyright 2000 Oracle Corporation. All rights reserved.



You can let Export prompt you for parameters by entering the EXP
command followed by your username/password:

Example: EXP SCOTT/TIGER

Or, you can control how Export runs by entering the EXP command followed
by various arguments. To specify parameters, you use keywords:

Format: EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword Description (Default) Keyword Description (Default)
----------------------------------------------------------------------
USERID username/password FULL export entire file (N)
BUFFER size of data buffer OWNER list of owner usernames
FILE output files (EXPDAT.DMP) TABLES list of table names
COMPRESS import into one extent (Y) RECORDLENGTH length of IO record
GRANTS export grants (Y) INCTYPE incremental export type
INDEXES export indexes (Y) RECORD track incr. export (Y)
ROWS export data rows (Y) PARFILE parameter filename
CONSTRAINTS export constraints (Y) CONSISTENT cross-table consistency
LOG log file of screen output STATISTICS analyze objects (ESTIMATE)
DIRECT direct path (N) TRIGGERS export triggers (Y)
FEEDBACK display progress every x rows (0)
FILESIZE maximum size of each dump file
QUERY select clause used to export a subset of a table
VOLSIZE number of bytes to write to each tape volume

The following keywords only apply to transportable tablespaces
TRANSPORT_TABLESPACE export transportable tablespace metadata (N)
TABLESPACES list of tablespaces to transport

Export terminated successfully without warnings.
======================================================


can i export only the table data and import to another user

john, August 07, 2002 - 4:13 pm UTC

tom,
Can I export only the table data and import it to another user?

I don't want the constraints, triggers, grants, and indexes to be exported and then imported to the other user.

thanks
john

Tom Kyte
August 07, 2002 - 6:32 pm UTC

Just use the sqlplus copy command for that or CREATE TABLE AS SELECT

(no, you cannot with exp/imp, it'll get the constraints, triggers etc)

Export options

SB, August 07, 2002 - 8:57 pm UTC

You can use the export options grants=n constraints=n indexes=n triggers=n rows=y when you are exporting the tables and import them to another user. That should only import the data.

Tom Kyte
August 08, 2002 - 8:59 am UTC

true, copy and CTAS would be infinitely easier.
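Put into a parameter file, SB's option list would look something like this (the username, table list, and file names are illustrative, not from the thread):

```
userid=scott/tiger
tables=(EMP,DEPT)
rows=y
grants=n
constraints=n
indexes=n
triggers=n
file=data_only.dmp
log=data_only.log
```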

Could we also use imp with ignore=y after a table import?

Pasko, August 08, 2002 - 2:58 am UTC

Hi Tom,

I think I read somewhere on your site that I could export the tables I want using a parfile as you did above, and then, to get procedures and triggers, use exp with owner= and rows=n,

and then use imp with ignore=y.

Is that correct ?

Regards,




Tom Kyte
August 08, 2002 - 9:16 am UTC

That would pull the rest of the table definitions as well though (so the goal of just getting tables that begin with E in this example would not be met)

Exports

Harold, August 08, 2002 - 1:47 pm UTC

How can I export all of tables 1 to 20, but with data only for tables 3, 8, 12, and 18, and just the definitions for the rest?

thanks for your time

Harold

Tom Kyte
August 08, 2002 - 2:45 pm UTC

You'll do two exports then, one with rows=n for tables 1,2,4,5,6,7,.... and one with rows=y for 3,8,12,18

excluding some OWNER'S tables

atul, August 09, 2002 - 5:46 am UTC

Sir,

We take full export of our database daily..
Some accounts are test accounts; we don't actually want to include those accounts in our daily process.

So would it be possible, by modifying your script like this:
----------------------------------------------------------
select decode( rownum, 1, 'tables=(', ',' ), OWNER|| '.'||table_name from dba_tables
where owner not in ('DEVLOPMENT')
union all
select ')', null from dual
-----------------------------------------------------------


so i can exclude development user from daily export..

will that work?
will that create any problems at full import??

Thanks.
atul

Tom Kyte
August 09, 2002 - 8:58 am UTC

export is NOT a backup tool (not a very good one anyway). hopefully you are not using it for that!


I would do USER level exports. I might do this:

set heading off
set feedback off
spool myscript.sh
select 'exp userid=/ owner=' || username || ' file=' || username || '.dmp &'
from all_users
where username not in ( 'SYS', 'SYSTEM', ...., 'DEVLOPMENT' );
spool off
host chmod a+x myscript.sh
host myscript.sh
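The same generation step can be sketched without SQL*Plus, assuming the usernames are already in a flat file (all file names here are made up for illustration):

```shell
# Turn a list of usernames (one per line) into one exp command per
# user, skipping the accounts we never want to dump.
printf 'SYS\nSYSTEM\nSCOTT\nHR\n' > users.txt   # stand-in for the all_users query

grep -v -E '^(SYS|SYSTEM|DEVLOPMENT)$' users.txt |
while read -r u; do
    echo "exp userid=/ owner=$u file=$u.dmp"
done > myscript.sh

chmod a+x myscript.sh
cat myscript.sh
```

Each generated line is a complete exp invocation, so the script can be run as-is on the database server.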



Missing records

mo, October 08, 2002 - 9:37 am UTC

I have a table with 20,399 rows in INSP, but when I tried to export the table I could export only 20,186 rows. I imported these rows into the same table in INSD, where I found that all the rows with PODATE > '9/27/2002' are not being exported/imported.

These rows are also sometimes missing from the display in a dropdown list in an HTML form. The SQL query shows all of the rows up to 10/7/02.

Any ideas? We are thinking about dropping and recreating the table in INSP but don't want to lose these rows in the process.





Tom Kyte
October 08, 2002 - 10:55 am UTC

what you describe is not really possible.

if podate is a date, please don't compare it to a string. 


Do this (cut and paste)


SQL> select count(*) from T;
SQL> host exp userid=u/p tables=T


and cut and paste the entire response. 

mo, October 08, 2002 - 10:21 am UTC

Tom:

I found that the instance was upgraded from 8.1.6 to 8.1.7 after Sept 27, and it seems to me it has to do with a date format.

How would you check the stored date format for the data after the 27th?

Thanks

Tom Kyte
October 08, 2002 - 10:56 am UTC

use TO_DATE on strings with an EXPLICIT format (i think we've had this discussion a couple of times)

use TO_CHAR on dates with an EXPLICIT format.

never rely on the "default"




export

mo, October 08, 2002 - 11:20 am UTC

Tom:

Here are the results per your query. It seems that for some reason we sometimes have a problem displaying records after 09/27/02, and exporting them too. An upgrade happened on that day.

The query for the html form actually does :

select date_column from table where date_column is not null order by date_column desc;


PROD> select count(*) from req;

COUNT(*)
----------
20460

PROD> host exp=ins/ins tables=req;

PROD> set serveroutput on;

PROD> host exp=picadmin/picadmin tables=req;

Tom Kyte
October 08, 2002 - 12:41 pm UTC

Umm, you missed my point on what I wanted. You must be using that "gui" sqlplus -- use the command line one.

It would be relevant for me to SEE the exp output -- that is what I really *need* to see.


Did the DBA do the upgrade "correctly" -- following all of the requisite steps in the manuals.

And -- this is important -- you do have a TAR with support filed? right?

export followup

mo, October 08, 2002 - 11:35 am UTC

Tom:

sorry here is the result of the export

PROD> host exp ins/ins tables=req;

Export: Release 8.1.6.0.0 - Production on Tue Oct 8 11:36:49 2002

(c) Copyright 1999 Oracle Corporation. All rights reserved.


Connected to: Oracle8i Enterprise Edition Release 8.1.7.2.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.2.0 - 64bit Production
Export done in US7ASCII character set and WE8ISO8859P1 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table REQ
20460 rows exported
Export terminated successfully without warnings.


Tom Kyte
October 08, 2002 - 12:42 pm UTC

that export pulled every record -- I fail to see the problem? (other than: why are you using the 816 export if you have 817 now?)



export

mo, October 08, 2002 - 11:46 am UTC

Tom:

I found that the 8.1.7 database was using the WE (Western European) character set. The 8.1.6 was using the US7ASCII character set but was switched to WE later.

Would this affect date queries in any way?

Tom Kyte
October 08, 2002 - 12:44 pm UTC

nope. dates are 7-byte binary data regardless of the character set.



export

mo, October 08, 2002 - 2:22 pm UTC

Tom:

You are right about export, but why do you think a SQL query would show all dates while in HTML the same query shows only records up to 9/27/02? Do you think it has to do with the browser? The weird thing is that sometimes, when you do multiple refreshes, the HTML form will show all records, including up to today. Any hints?


The query for the html form actually does :

select date_column from table where date_column is not null order by date_column desc;


Tom Kyte
October 08, 2002 - 5:14 pm UTC

You have a bug in your code. Perhaps you aborted (stopped) the page and it showed you what you got (20,000+ items in a list -- i sort of question that) .

Or the OS just didn't feel like spending the resources to show you 20+K items in a list.

You should look at the RAW html (view source) to see what is there.


But -- 20+K items in a list - totally useless don't you think? I would hate a page like that hitting my browser, would never use that application again (and would question the sanity of the person writing such a page!)

export

mo, October 08, 2002 - 2:44 pm UTC

Tom:

as a followup here are the same query run in unix and windows with two different results:

1 SELECT DISTINCT po_date
2 FROM req
3 WHERE po_date IS NOT NULL
4* ORDER BY po_date DESC
INSP> /

PO_DATE
---------
27-SEP-02
26-SEP-02
25-SEP-02
24-SEP-02
23-SEP-02
20-SEP-02
19-SEP-02
18-SEP-02
17-SEP-02
16-SEP-02
13-SEP-02

QACOMPDT
---------
12-SEP-02

In unix :

1 SELECT DISTINCT po_date
2 FROM req
3 WHERE po_date IS NOT NULL
4* ORDER BY po_date DESC
PROD> /

PO_DATE
----------
10/07/2002
10/04/2002
10/03/2002
10/02/2002
10/01/2002
09/30/2002
09/27/2002
09/26/2002
09/25/2002
09/24/2002
09/23/2002
^C
09/20/2002

13 rows selected.

What could be happening?

Tom Kyte
October 08, 2002 - 5:18 pm UTC

perhaps you are connecting to two entirely different databases -- I mean, look at that default date format: they are DIFFERENT. Could be different databases.

Look at the output -- one goes from sep-20 to oct-07, the other from 12-sep to 27-sep.

Slow down, take your time, double check things. query v$database, make sure you are in the same exact instance.


export

mo, October 08, 2002 - 6:10 pm UTC

Tom:

As usual you are incredibly right. The DBA had two instances running, 8.1.6 and 8.1.7. Apparently the application server was still pointing to the 8.1.6 to show the data. However, I do not understand how users managed to enter data into the 8.1.7 through the web, unless the DAD was sometimes pointing to 8.1.6 and sometimes to 8.1.7. Can this happen?

2. Strangely, the DBA said he did not update the DAD when he upgraded from 8.1.6 to 8.1.7. Now we are using 8i iAS, but from what I remember of OAS 4.0 you enter the unix physical directory path into the DAD and you enter the character set. If he did change both without updating the DAD, isn't there something wrong? Unless the database upgrade takes care of this?

3. How does the SQL*Plus GUI for Windows connect to the unix server? Is it via the internet or a Net8 connection? Is this different from connecting through SQL*Plus in unix on the same server?

Thanks a lot, you are great help.



Tom Kyte
October 08, 2002 - 6:25 pm UTC

1) no, it cannot "sometimes" happen.


2) there would be nothing wrong. A DAD just describes "how to connect". matters not the version you are connecting to.

btw: the DBA didn't technically do an upgrade -- the DBA did a copy to a new instance. An upgrade would have prevented this: there would be no 816 database to confuse you with.

Sounds like you need some "configuration management" perhaps -- very shaky place you're working in.

3) It connects over Oracle net.

export

mo, October 08, 2002 - 6:40 pm UTC

Tom:

Is it because he has two instances or databases with the same name that the DAD was getting confused? Once going to one database, and other times going to the second one.

2. I thought you do tell the DAD what the physical and virtual paths are?

Thanks,

Tom Kyte
October 08, 2002 - 6:47 pm UTC

No idea what happened to you -- unless he was sometimes having 816 up and other times 817.

A DAD just has

o oracle home
o oracle sid OR tns connect string
o optionally user/password


It is very straightforward -- the same sort of info you type in at sqlplus. It is as PREDICTABLE as sqlplus.


Your environment however is so shaky -- I won't even speculate what magic things might have been happening. Needs a little bit of "control" don't you think?

Regarding Tablespace definitions during import.

Nathan, October 08, 2002 - 10:41 pm UTC

Hi Tom..
I want to import all users and their objects from the Prod db to the Test db and maintain the same tablespace definitions in the Test db for all objects.
Will that work?
I am going to follow the steps below:
create all tablespaces and all users (with the proper tablespace definitions as in prod), and then do the import.
Expecting your reply,
thanks in advance
Nathan

Tom Kyte
October 09, 2002 - 6:59 am UTC

that'll work

Puzzle

atul, October 09, 2002 - 5:51 am UTC

Sir,

I'm in a very interesting situation:

1) I have Oracle 8 running on one server.
2) I have created Oracle 7 on another server.
3) The two servers are physically very far from each other.
4) I want to export Oracle 8 data to Oracle 7.
5) I know it's possible only if I run catexp7 in Oracle 8, then take the export and import it into 7.
6) But how can I do this?
7) Can I transfer catexp7 through mail and run it there?
8) Or can this be done some other way?

Please let me know.

Thanks.
atul


Tom Kyte
October 09, 2002 - 7:11 am UTC

You need to have access to the Oracle8 database from somewhere where you have the Oracle7 software installed to use exp/imp.

You would run catexp7.sql in the 8 database (it is in the $ORACLE_HOME/rdbms/admin directory of the 8.x software already)

You NEED to use the version 7.x EXP tool in order to create a dmp file that the 7.x IMP tool can read. YOU NEED to have the v7 software accessible to the 8.x instance.

Else, it'll just not work at all.

export

mo, November 04, 2002 - 8:52 pm UTC

Tom:

1. I want to copy 4 tables from a production instance into a test instance. Would you do an export/import or insert into t (select * from t@prod)?

2. Any advantages/disadvantages to either approach?

3. With export/import, do you have to drop the tables one by one in test before you can do the import?

4. The 4 tables have other tables linked to them using foreign keys. I tried dropping the tables and it said it can't because of the foreign keys?

5. If I want to mirror production (all objects) into test, what is the easiest way to do this?

Thank you,

Tom Kyte
November 05, 2002 - 9:22 am UTC

1) i would probably use the sqlplus copy command actually. exp/imp would be number two. CREATE TABLE T AS SELECT * FROM T@PROD would be three

2) see #1

3) ignore=y makes it so you don't have to (but they should be empty of course)

4) don't drop them then?

5) backup prod, restore to test.


exp

atul, December 11, 2002 - 11:41 pm UTC

Sir,

I have two databases connected to each other through a db link.

Can I export one schema of the second database from the first database through the db link? How?

Thanks.
atul



Tom Kyte
December 12, 2002 - 9:53 am UTC

not through a database link. You just use exp/imp from the command line to do this.

Getting less rows without any warnings

Gururaj Kulkarni, March 12, 2003 - 4:05 am UTC

Tom,

I need to export a table from production. The table has 4,428,718 rows in it and is growing slowly. When I export the table, I get 4,163,322 rows. There are no warnings, and it says Export terminated successfully without warnings.

I am using parameter file, which I have attached here.

userid=/
direct=y
compress=n
rows=y
buffer=15000000
indexes=n
triggers=y
file=sas_nice.dmp
log=sas_nice.log
tables=
ops$nice.SAS_SUBSCRIBER_ADD_SERVICE

My exp command is like this

exp userid=ops$itguru/password@nicedb.nwt.com parfile=exp.par

Oracle database version is 8.1.7.4
O.S is open VMS Version 7.2

I have tried this at least 3-4 times. Each time the source has a different number of rows, but my dump file always has 4,163,322 rows. Could you tell me what the reason is and how to overcome it?

Thanks
-Gururaj

Tom Kyte
March 12, 2003 - 7:53 am UTC

never heard of such a situation.

Well, the only time was when the person was exporting from a different database than the one they were counting against. (So, can you check that out...)

if it persists, time to call support.

Export Failure

Jihad Darwish, June 16, 2003 - 4:00 am UTC

Dear Sir,

I'm trying to export tables from the database using Oracle 7, but the database was developed and installed on Oracle 8i. It didn't work at all and I got the following errors:

Export terminated successfully
ORACLE error 1436 encountered
CONNECT BY loop in user data

automatic script after import

Marcio, October 22, 2003 - 2:55 pm UTC

Is it possible to run a script (such as an update) automatically from an import? Sample:
You have a table aid_emp (empno, new_ename) in db_export.dmp and you want the following to run automatically, without being submitted by hand:
update ( select ename, new_ename
from emp t1, aid_emp t2
where t1.empno = t2.empno )
set ename = new_ename
/
drop table aid_emp
/


Tom Kyte
October 22, 2003 - 6:19 pm UTC

nope

exporting LONG

Sam, April 22, 2004 - 9:20 pm UTC

How does the export utility handle tables with LONG columns? Assuming the row size including a LONG column could be >2GB, I was wondering: do I need >2GB of memory to export a row into the dump file? Could you clarify please? Thanks.

Tom Kyte
April 23, 2004 - 11:04 am UTC

http://docs.oracle.com/docs/cd/B10501_01/server.920/a96652/ch01.htm#1006904

longs = you need memory to hold it.
lobs = you do not need memory to hold it.

How to resolve this!!!!!!

A reader, July 06, 2004 - 6:45 pm UTC

Hi Tom,
I am exporting a user along with a few tables and no rows.
All the associated indexes and constraints are also being exported.
I wanted to export the grants on the exported tables also so used the grants=y.
While importing I specified full=y and indexfile option. It generated the file but did not specify the grant statements.
The code was as follows:
imp userid=user/pwd@host_string file=path full=y grants=y indexfile=file.sql

So does that mean that grants are not included in the indexfile, or did I make an error?
Please let me know.


Tom Kyte
July 06, 2004 - 7:58 pm UTC

indexfile only gets create tables (in comments) and indexes. that is the way it works.




exporting just the table data and not the definition

Anju, December 27, 2004 - 12:19 pm UTC

Tom,

I have two instances of the same schema (e.g. comm_repos). I want to export just the data for one table from instance A and then import it into instance B. Here is my exp command:

C:\>exp comm_repos/comm_repos@test FILE=comm_region.dmp TABLES=(dec_regions) ROWS=Y COMPRESS=y

Export: Release 9.2.0.1.0 - Production on Mon Dec 27 12:12:57 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table DEC_REGIONS 10 rows exported
EXP-00091: Exporting questionable statistics.
Export terminated successfully with warnings.

It works great and exported 10 rows. Now I want to import the data into a different instance; here is the command:

C:\>imp comm_repos/comm_repos@win_db FILE=comm_region.dmp

Import: Release 9.2.0.1.0 - Production on Mon Dec 27 12:13:14 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

Export file created by EXPORT:V09.02.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing COMM_REPOS's objects into COMM_REPOS
IMP-00015: following statement failed because the object already exists:
"CREATE TABLE "DEC_REGIONS" ("REGION_ID" NUMBER NOT NULL ENABLE, "REGION_NAM"
"E" VARCHAR2(100), "REGION_UID" NUMBER, "CREATED_BY" VARCHAR2(30), "DATE_CRE"
"ATED" DATE, "MODIFIED_BY" VARCHAR2(30), "DATE_MODIFIED" DATE) PCTFREE 10 P"
"CTUSED 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIS"
"T GROUPS 1) LOGGING NOCOMPRESS"
Import terminated successfully with warnings.


Why is the import utility trying to re-create the table definition? I just want to export the data and then import it into another database.

Thanks for your answer.


Tom Kyte
December 27, 2004 - 12:25 pm UTC

use ignore=y. It'll ALWAYS get the table definition; if you already have the table, you use ignore=y on the import.

You might want to add a couple more =N's to the export, to suppress constraints, grants, etc.


compress=y is bad bad bad... compress=n should be the default and only value.

thanks it works

Anju, December 27, 2004 - 12:57 pm UTC

Thanks for your prompt response.

CTAS , Export/Import and ?

Ramachandran C, December 28, 2004 - 3:04 am UTC

Recently in an interview, I was asked how to create a table similar to an existing table but without any data. I said CTAS. But that wasn't the answer they expected. So I said export/import without data. But that still didn't satisfy them. Is there any other way to achieve the same? (In any case, I would still prefer CTAS as it achieves the desired result with a single simple statement.)

Tom Kyte
December 28, 2004 - 10:23 am UTC

I would have said

"well, OK, I gave you:

create table like_other_table as select * from other_table where 1=0

and

exp userid=u/p tables=foo rows=n
imp userid=other_user/p fromuser=u touser=other_user


both work, both get the job done, but you don't seem satisfied; what approach would you have taken?"


They might have been looking for "well, since I am command line challenged, I got into OEM and using the tool did a 'create table like' "


You gave answers I would have given, maybe they were looking for the "gui" option.

Problem exporting data using PARFILE

Jignesh, January 17, 2005 - 1:40 pm UTC

I have following shell script

#!/bin/sh
V_DATE=`date "+%d-%m-%Y"`
v_date="'"$V_DATE"'"
exp parfile=t_output_slice.par



And the following is the t_output_slice.par

userid=system/manager
file=../../../DUMPS/T4/EXP/t_output_slice.dmp
log=../../../LOGS/T4/EXP/t_output_slice.log
tables=uat01_bacst4b.t_output_slice
QUERY="WHERE EXISTS (SELECT 1 FROM uat01_bacst4b.T_OUTPUT_FILE TOF WHERE TOF.output_file_id = uat01_bacst4b.T_OUTPUT_SLICE.output_file_id AND trunc(tof.generation_date) = TO_DATE($v_date,'DD-MM-RRRR'))"

Now if I put an appropriate value (say, e.g., '01-10-2004') instead of $v_date, the export works fine... otherwise it returns this error:

About to export specified tables via Conventional Path ...
Current user changed to UAT01_BACST4B
. . exporting table T_OUTPUT_SLICE
EXP-00056: ORACLE error 911 encountered
ORA-00911: invalid character
Export terminated successfully with warnings.

How to resolve this issue?


Tom Kyte
January 17, 2005 - 5:22 pm UTC

parfile is just a text file, it is not a shell script that undergoes variable expansion.


why not just

tof.generation_date >= trunc(sysdate) and
tof.generation_date < trunc(sysdate+1)

?? You are getting today's date; sysdate is today's date. And by not applying a function to the column, you might even find an index being used that could not be used before, if applicable.
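In the parfile from the question, that range predicate would look roughly like this (table and column names follow the original post; this is a sketch, not tested against that schema):

```
QUERY="WHERE EXISTS (SELECT 1 FROM uat01_bacst4b.T_OUTPUT_FILE TOF WHERE TOF.output_file_id = uat01_bacst4b.T_OUTPUT_SLICE.output_file_id AND tof.generation_date >= TRUNC(SYSDATE) AND tof.generation_date < TRUNC(SYSDATE+1))"
```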



Use of Index

Jignesh, January 18, 2005 - 4:39 am UTC

Thanks for your feedback. Yes, now I will not use trunc, because the column has an index associated with it. But the problem of $v_date is still there, because in the shell script the date can be anything, i.e. it will add one day to the previous day's export. So I cannot directly use sysdate in this... maybe I should avoid using a par file?

Tom Kyte
January 18, 2005 - 10:00 am UTC

#!/bin/sh

....



cat > exp.par <<EOF
userid=system/manager
file=../../../DUMPS/T4/EXP/t_output_slice.dmp
log=../../../LOGS/T4/EXP/t_output_slice.log
tables=uat01_bacst4b.t_output_slice

QUERY="WHERE EXISTS (SELECT 1 FROM uat01_bacst4b.T_OUTPUT_FILE TOF WHERE
TOF.output_file_id = uat01_bacst4b.T_OUTPUT_SLICE.output_file_id AND
trunc(tof.generation_date) = TO_DATE($v_date,'DD-MM-RRRR'))"
EOF

exp parfile=exp.par
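A stripped-down, self-contained version of that idea (the userid and QUERY are placeholders standing in for the real parameters): the here-document is expanded by the shell, so $v_date is already a literal date string by the time the parfile is written.

```shell
#!/bin/sh
# Build the date string, quoted for use inside TO_DATE(...).
v_date="'$(date "+%d-%m-%Y")'"

# The here-document below is expanded by the shell, so $v_date is
# substituted before exp ever reads the file.
cat > exp.par <<EOF
userid=system/manager
QUERY="WHERE trunc(generation_date) = TO_DATE($v_date,'DD-MM-RRRR')"
EOF

cat exp.par
```

This is why the parfile approach still works: exp never sees a shell variable, only the finished text.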

Thank you very much

Jignesh, January 19, 2005 - 5:09 am UTC


Export only modified tables

Laxman Kondal, March 23, 2005 - 12:26 pm UTC

Hi Tom

I am working on a new requirement which calls for your valuable ideas:

Identify tables that have been modified since the last export date and do export/import from unclass to class and vice versa.

I need to ensure that only data modified after the last exp date is exported, and if anyone does any data modification during the export, the user should get the message 'exp in progress'.

One way to do this is to put the tablespace in read-only mode before the exp.
This approach may restrict too many other tables, maybe for a few seconds or a minute, which may not be feasible.

Another way is to lock the table in EXCLUSIVE mode.

With so many new features in Oracle 9iR2, is there any way to put selected tables in read-only mode? I am not aware of one.

How do I find which tables qualify for export after the last export date (i.e., the table had any insert, update, or delete after the last exp date)?

Thanks.

Regards

Tom Kyte
March 23, 2005 - 6:15 pm UTC

why give them 'exp in progress'?

why not just export with consistent=y, it'll be as if no one else is in the database....

EXP QUERY Parameter

George Baso, April 12, 2005 - 2:16 am UTC

Tom,

I have been browsing various threads in your database, specifically looking for more detailed information on the use of the QUERY parameter in the EXP utility.

( I already have your 1-on-1 book, it’s great )

So far, the most complex query that I have come across is this one.

QUERY="WHERE EXISTS ( SELECT 1
FROM uat01_bacst4b.T_OUTPUT_FILE TOF
WHERE TOF.output_file_id = uat01_bacst4b.T_OUTPUT_SLICE.output_file_id
AND trunc(tof.generation_date) = TO_DATE($v_date,'DD-MM-RRRR'))"

I am a developer working on an Archiving project, we are currently working on the strategy to archive an application with approx 1,200 tables, running on 9.2.0.4, approx size is 280GB, all tables in the same schema

We have only recently upgraded to 9i, and our DBAs are not up to speed with the new QUERY parameter; they have only used full exports to date.

Our strategy will be to create two new databases ( rather than just delete records from Production ) and then copy the records that we want to keep into the first one ( which will become the new production db ) and the records to be archived to the second one. This second DB will then be used for historical reporting.

Having a dedicated Reporting database is an option that has been discussed & will be reviewed in the next fiscal year, but we want to archive before this ( of course … ). We expect to be able to reduce the size of the DB by about 100 GB doing this exercise

( Sorry to be so long winded … )

So far, I have classified tables into four categories

1) MN : Master File, can not be archived ( All to be imported into both DB )
2) TN : Transaction , can not be archived ( All to be imported into both DB )
3) MP : Master File, can be archived ( Some to be imported into each DB )
4) TP : Transaction , can be archived ( Some to be imported into each DB )

I would like to generate par files for each of the above categories (using the TABLES parameter), but the last two categories are a bit problematic, as the selection criteria will vary depending on the underlying tables.

That is some tables ( or groups of related tables ) can be selected using an archive date as the key, but for other tables the archive rules are more complex but eventually can be factored to the archive date.

A) Simple, by date

EG 1 : Journals

Archive all records created before 01 Jul 2003

The Creation Date is on the Journal Master table and it’s related tables

B) Complex

EG 1 : AP Invoice Master and AP Invoice Lines

Archive all AP Master ( & Lines ) Invoices records where AP Invoice Date before 01 Jul 2003

EG 2 : More Complex say Supplier Master

Archive all Supplier Master records where there has been no AP Invoice since 01 Jul 2003

SQL pseudo code would be :

Select all Suppliers where NOT Exists AP Invoice where AP Inv Date > 01 Jul 2003

I understand that the query in the par file must relate to all tables listed, and I am prepared to generate an individual par file per table (possibly 50 par files) for these more difficult tables.

Therefore, what I am looking for is further explanations or examples of more complex query syntax.

Many Thanks


Tom Kyte
April 12, 2005 - 7:12 am UTC

query existed in 8i (i documented it with examples in Expert one on one Oracle)


but basically, you have to think about it a table at a time. the text you pass in with query= will just be added to the end of a select * from t;

so, you are exporting table EMP.  You want to get all EMPs that make more than 5000 or work in research.  You know you have:

select * from emp

to work with and can only add a predicate.  So, 

query=where sal > 5000 or deptno in ( select deptno from dept where dname ='Research')


What might you do? Go into sqlplus and

SQL> select * from emp
  2  WHERE

enter that, then hit enter. Now, what would you type in to get the data you wanted? Anything you type in there is what you would pass in as query=
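Put into a parameter file (which sidesteps most command-line quoting headaches), Tom's EMP example would read something like this (a sketch; exact quoting rules vary by OS and shell):

```
tables=emp
query="where sal > 5000 or deptno in (select deptno from dept where dname = 'Research')"
```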

dmp data in the same database.

Dawar, April 20, 2005 - 5:08 pm UTC

Tom,

I am using the following script to dump data in the same database.
This is our weekly routine.
We used to do this in Oracle 7.3.4 and it works fine.

Now I am trying the same thing in Oracle 10.1.0.3.0 but I am getting errors.

exp userid=abc/manager@witsprd buffer=204800 file=/export/home/data/dmp/"$1" full=y

Please note: I am running this through menu script.

Export terminated successfully with warnings.
I am also getting error as
EXP-00091: Exporting questionable statistics.
. . exporting table MGMT_UPDATE_OPERATIONS_DETAILS 0 rows exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
. . exporting table MGMT_UPDATE_PROPERTIES_DATA 0 rows
exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
. . exporting table MGMT_UPDATE_THRESHOLDS_DATA 0 rows exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
. . exporting table MGMT_USER_CALLBACKS 3 rows exported
EXP-00091: Exporting questionable statistics.
. . exporting table MGMT_USER_CONTEXT 8 rows exported
But on the other hand, Oracle 7.3.4 has no issue.

It has a message as below:
Export terminated successfully without warnings.

The size of the file is also very different.
7.3.4 size of the dmp file as
-rw-r--r-- 1 dnaqvi wits7 52154368 Apr 19 10:05 wits041905v2.dmp

Size of the 10g dmp file is
-rw-r--r-- 1 dnaqvi wits7 196591616 2005-04-19 10:03 wits041905v1.dmp


Regards,
Dawar


Tom Kyte
April 20, 2005 - 9:18 pm UTC

I don't see any errors?

i would not expect any files to be the same size between 7.3 and 10g

Export User

Anne, July 12, 2005 - 11:46 am UTC

Hi Tom,

I need to export a user with data from a prod database and import to a test database on different servers. The user in prod contains two tablespaces, one for data and another for indexes, with a total datafile size of about 20M. The user and the tablespaces do not exist in the test database.

Could you please help me with the optimal way of doing this. Appreciate your help on this. Thanks!

Tom Kyte
July 12, 2005 - 5:18 pm UTC

umm, just exp imp. it is a really small set of data, not too much to optimize.

Export Schema

Anne, July 12, 2005 - 12:31 pm UTC

Forgot to add that both databases are Ora 8.1.5. Would it be easier to use transport tablespace ?

Tom Kyte
July 12, 2005 - 5:30 pm UTC

20m is tiny. exp/imp straight out is ok
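For a schema that small, a sketch of the round trip (user, tablespace, and file names are illustrative; the tablespaces and the user must be pre-created on the test database, since a fromuser/touser import will not create them):

```text
exp system/manager owner=anne_user file=anne_user.dmp log=exp.log consistent=y
imp system/manager fromuser=anne_user touser=anne_user file=anne_user.dmp log=imp.log
```

The first command runs against prod; the second against test, after creating the two tablespaces and the user there.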

Export data from old table to new

Muhammad Ibrahim, July 12, 2005 - 9:22 pm UTC

Dear Tom,

If i have a table with data

create table t1 ( col1 number, col2 number );

insert into t1 ( col1, col2)
values ( 1,1);
insert into t1 ( col1, col2)
values ( 1,2);
insert into t1 ( col1, col2)
values ( 1,3);
insert into t1 ( col1, col2)
values ( 2,1);
insert into t1 ( col1, col2)
values ( 3,1);

and if I want to add some more columns in t1, before
doing this I have to take a backup of the old data from t1

create table t1_bk as select * from t1;

then

alter table t1
add (col3 number, col4 number);

now I need to copy back the t1_bk data to t1. Please
tell us which is the best and easiest way to do this, or how
you do it normally?

Please tell us how to handle it if you add a not null
column in the alteration, e.g. col3 is not null.

Regards,
Ibrahim.

Tom Kyte
July 13, 2005 - 10:56 am UTC

why do you need to "back it up" like that? (that is just a copy)

you can put it "back" with an insert into as select.

if col3 is not null, only YOU can tell yourself what col3 should have when you "put it back"

What /How to do

A reader, July 13, 2005 - 9:58 am UTC

Hello Tom,

What is the best way to archive selected tables from a schema to a file such that we just retain last 15 days of data in the tables and purge the rest after archiving it.
All tables have a create date column. I also need an easy way to restore the same data if need be.

Some thoughts I have; I do not know if it is a good idea or whether it will work.
Is it possible to do incremental exports of selective tables, with a possibility to do incremental imports if need be to reload that archived data?

Can we give a where clause which would be applicable to all the tables in the table list option of the exp command?
Query= where create_Date < sysdate - 15 ?
(create_date column is in all tables ).

What are the other options ? Can we use backup utility ?
I am a novice.
What is the best thing to do ?

Thanx



Tom Kyte
July 13, 2005 - 12:55 pm UTC

incremental exports export any table (the entire table) that was modified in any way since the last incremental export.

that and they are not a supported feature anymore.


why take the data out of the database? databases were made to store data... we could use partitions and just roll data off the end of the table into another table so if you ever need it -- it is there, or you can make it read only and just archive the tablespace itself.
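The read-only tablespace idea can be sketched like this (tablespace name is illustrative; it assumes the historical rows already live in their own tablespace):

```sql
alter tablespace archive_2003 read only;
-- back up its datafiles once; they will never change again,
-- so subsequent backups can skip them
```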

What /How to do DB version 817 and 9i

A reader, July 13, 2005 - 9:59 am UTC

Sorry, forgot the DB version. It's 817 in some schemas and 9i in others.

A reader

A, August 01, 2005 - 5:06 am UTC

Hi,
Can I do the import in parallel for a table? For example, I have a table called emp which has 5 million records. I have an export file called emp_dump.dmp with emp info in it. Now I say the following in a unix session:

imp userid/password file=/data/emp_dump.dmp query="where rownum <=2.5 fromuser=x touser=x

in another session..

imp userid/password file=/data/emp_dump.dmp query="where rownum > 2.5 fromuser=x touser=x

Please note < and > sign.

It didn't work for me... (second session reported zero rows exported... immediately).
Is there a way to achieve this?

Cheers


Tom Kyte
August 01, 2005 - 7:37 am UTC

with imp, no

with impdp in 10g, yes.


you cannot apply a predicate on the "way in".  query= is not a parameter to IMP, you must have been using EXP


and funny thing with rownum, there is no "row 2" unless there WAS a row one.  where rownum > 2.5 will NEVER return data since there was never a row 1 or a row 2.

You would have to use "the data" itself to export a slice.  Take the primary key maybe and use ntile -- eg: say I wanted to export all_objects (if I could, you cannot) in 4 slices:

ops$tkyte@ORA9IR2> select nt, count(*), min(object_id), max(object_id)
  2    from (select object_id, ntile(4) over (order by object_id) nt
  3            from all_objects
  4             )
  5   group by nt;
 
        NT   COUNT(*) MIN(OBJECT_ID) MAX(OBJECT_ID)
---------- ---------- -------------- --------------
         1       6995              3           7176
         2       6995           7177          14171
         3       6994          14172          21165
         4       6994          21166          40165
 
ops$tkyte@ORA9IR2>


now you would "where object_id between 3 and 7176" and so on. 
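Those four ranges would then feed four concurrent export sessions, something like this (names are illustrative; the quoting of query= varies by shell, so a parfile per slice is usually safer):

```text
exp scott/tiger tables=(t) query="where object_id between 3 and 7176" file=slice1.dmp
exp scott/tiger tables=(t) query="where object_id between 7177 and 14171" file=slice2.dmp
exp scott/tiger tables=(t) query="where object_id between 14172 and 21165" file=slice3.dmp
exp scott/tiger tables=(t) query="where object_id between 21166 and 40165" file=slice4.dmp
```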

Export

David Prabhakar, September 21, 2005 - 3:21 am UTC

Hi Tom,

I have a table to export with 1000 rows sitting in it. But what my management needs is just 999 rows from that table to be exported. I am really stuck here. Can you please suggest some ideas to work around this issue?

Thanks and regards,
David.

Tom Kyte
September 21, 2005 - 6:58 pm UTC

any 999 rows?

use query= on the export, rownum <= 999
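For example (names illustrative; again, the query= quoting depends on your shell, so a parfile may be easier):

```text
exp david/secret tables=(t) query="where rownum <= 999" file=t_999.dmp log=t_999.log
```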

DBA_EXP_FILES

Sami, October 15, 2005 - 2:50 pm UTC

Dear Tom,

9.2.0.6
We are doing a schema-level export every day; however, the DBA_EXP_FILES view is always empty.
What is this view used for?

I checked </code> http://tahiti.oracle.com
and this is the only explanation I got.
http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96536/ch2245.htm#1309109 <code>

Tom Kyte
October 15, 2005 - 3:44 pm UTC

that was only for incremental exports (deprecated feature) and only when record=y was true.

consider it "obsolete"

A reader, October 18, 2005 - 10:28 pm UTC

Is a schema export, with consistent=y and the owner= option supplied, a reliable means of backing up a schema? If not, can you please tell me what the recommended approach is, and when a schema-level export is ok to do?

Tom Kyte
October 19, 2005 - 6:55 am UTC

no, backups are the only reliable method of backing up a database.

an export is a logical copy of the data at a point in time, nothing more. A point in time copy.

Problem while exporting

Ram, October 20, 2005 - 12:17 am UTC

Hi,
I am facing this error while exporting:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions
. exporting system procedural objects and actions
. exporting pre-schema procedural objects and actions
EXP-00008: ORACLE error 22275 encountered
ORA-22275: invalid LOB locator specified
ORA-06512: at "SYS.DBMS_LOB", line 514
ORA-06512: at "SYS.DBMS_AW", line 39
ORA-06512: at "SYS.DBMS_AW", line 237
ORA-06512: at "SYS.DBMS_AW", line 469
ORA-06512: at "SYS.DBMS_AW_EXP", line 270
ORA-06512: at line 1
EXP-00083: The previous problem occurred when calling
SYS.DBMS_AW_EXP.schema_inf
o_exp
. exporting cluster definitions
EXP-00056: ORACLE error 19206 encountered
ORA-19206: Invalid value for query or REF CURSOR parameter
ORA-06512: at "SYS.DBMS_XMLGEN", line 83
ORA-06512: at "SYS.DBMS_METADATA", line 345
ORA-06512: at "SYS.DBMS_METADATA", line 410
ORA-06512: at "SYS.DBMS_METADATA", line 449
ORA-06512: at "SYS.DBMS_METADATA", line 1156
ORA-06512: at "SYS.DBMS_METADATA", line 1141
ORA-06512: at line 1
EXP-00056: ORACLE error 19206 encountered
ORA-19206: Invalid value for query or REF CURSOR parameter
ORA-06512: at "SYS.DBMS_XMLGEN", line 83
ORA-06512: at "SYS.DBMS_METADATA", line 345
ORA-06512: at "SYS.DBMS_METADATA", line 410
ORA-06512: at "SYS.DBMS_METADATA", line 449
ORA-06512: at "SYS.DBMS_METADATA", line 1156
ORA-06512: at "SYS.DBMS_METADATA", line 1141
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully

How can I find out which table is using a LOB?
The Oracle site says I need to set the lob locator to zero. How do I do that?
My database is working fine and there are no issues as such.
I have been taking exports prior to this and did not face any issue. In fact, later I exported some 5 tables and it worked
fine.

Tom Kyte
October 20, 2005 - 8:03 am UTC

that is coming from the analytic workspace (AW) stuff. Please contact support for this one, I don't see anything "directly" the same off hand.

Select on Table

David Prabhakar, October 21, 2005 - 4:03 pm UTC

Hi Tom,

I know this is not particular to the subject I have chosen as the Title of Review.

But, can you please help me with this. I need to give select only privilege to a user on all the tables of a specific schema? Is it possible?

Please give your advice.

Thanks and regards,
David.


Tom Kyte
October 22, 2005 - 10:30 am UTC

no, there is no single "grant select on any table in this schema"

you would either

a) create a role, grant select on the required tables to that role and grant the role to anyone that needs it.

b) grant select on the required tables to anyone that needs it.

Exp tables

Laxman Kondal, November 14, 2005 - 4:24 pm UTC

Hi Tom

I need to export tables with names like 'EI_tablename', so I used TABLES=(OPS.EI_%) and it worked fine. But now we have a few tables named like EIGHT_E_TABLES which are also being exported, whereas they should not be. If I use an escape then I get an error:

v_str := p_userlogin||'/'||p_pwd||'@'||v_glb_name;
v_exp_cmd := v_bin||'bin/exp '
||v_str
||' TABLES=(OPS.EI\_%) '
||' GRANTS=N '
||' COMPRESS=N STATISTICS=NONE '
||' FILE='||v_dir||'exp_low.dmp '
||' LOG='||v_dir||'exp_low.log';

-----------------
About to export specified tables via Conventional Path ...
EXP-00011: OPS.EI\_% does not exist
EXP-00011: OPS.ESCAPE does not exist
EXP-00011: OPS.\ does not exist
Export terminated successfully with warnings.
------------------

Could you please show me how to use the escape, or an alternate method, to exclude tables not like EI_table_name.

Thanks and regards.


Tom Kyte
November 15, 2005 - 7:24 am UTC

because the query would have to be:

where table_name like 'EI\_%' escape '\'


'\' is not a universal escape, you have to tell the SQL you want to use '\' as your escape.


seems you might have to generate the list of tables you want to export instead of passing a wildcard to exp.

EXP TABLES

Laxman Kondal, November 15, 2005 - 8:35 am UTC

Hi tom

Thanks for reply.
I tried a few combinations in the tables clause but they didn't work:

v_exp_cmd := v_bin||'bin/exp '
||v_str
--||' TABLES=QUERRY=WHERE TABLE_NAME LIKE ''EI\_%'' ESCAPE ''\'' '
||' QUERRY=WHERE TABLE_NAME LIKE ''EI\_%'' ESCAPE ''\'' '
||' GRANTS=N '
||' COMPRESS=N STATISTICS=NONE '
||' FILE='||v_dir||'exp_low.dmp '
||' LOG='||v_dir||'exp_low.log';

About to export specified tables via Conventional Path ...
EXP-00011: OPOM.WHERE does not exist
EXP-00011: OPOM.TABLE_NAME does not exist
EXP-00011: OPOM.LIKE does not exist
EXP-00011: OPOM.EI\_% does not exist
EXP-00011: OPOM.ESCAPE does not exist

Seems I am missing your point.

Thanks and regards

Tom Kyte
November 15, 2005 - 11:38 am UTC

you would run the query against user_tables YOURSELF and build a string of table names to export.
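Reusing the decode trick from the top of this page, with the escape clause added, that parfile could be generated like this (a sketch; spool the output to a file and run the query as the OPS user):

```sql
select decode( rownum, 1, 'tables=(', ',' ) || table_name
  from user_tables
 where table_name like 'EI\_%' escape '\'
union all
select ')'
  from dual;
```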



export

mo, March 07, 2006 - 12:27 pm UTC

Tom:

I am exporting all objects from one 8i database (AAA) on one server to another existing 9i database (BBB) on a different server. The application in (AAA) will be merged with an existing application in (BBB).

1. Would the exp/imp take care of all objects such as tables, views, packages, procedures, triggers, dbms jobs?

2. If something is not included, how do you move it? Someone mentioned that existing DBMS jobs are treated differently?

3. Do I need to make sure that there are no existing objects with the same name in "BBB" since the imp will fail for that?

4. what kind of checks would you do (i.e. memory) for BBB to make sure it will handle both applications.

Tom Kyte
March 08, 2006 - 4:36 pm UTC

1) depending on the options used, you can say "yes"

2) define "existing dbms jobs"?

3) what would you like to have happen? you can use ignore=yes on the import.

4) my test environment would have discovered if I was undersized ;)

export

mo, March 08, 2006 - 10:14 pm UTC

Tom:

1. The existing jobs are, let us say, when you "make a shipment" you schedule a job to send an email alert. Or let us say you have a nightly job that does data updates or cleanups and reschedules itself. Can these be copied?

2. My understanding is that if BBB has a table named "EMP" and you export AAA database and import it into BBB the table "EMP" will not be affected since it already exists. Are you saying that there is an option in the import to drop/recreate all current existing tables in BBB if they exist in the import file?

3. The import of the first application "AAA" and merging it with "BBB" should not affect the current tables of "BBB" correct?

Thank you,

what's affected...

Craig, March 09, 2006 - 10:36 am UTC

"2. My understanding is that if BBB has a table named "EMP" and you export AAA database and import it into BBB the table "EMP" will not be affected since it already exists. Are you saying that there is an option in the import to drop/recreate all current existing tables in BBB if they exist in the import
file?"

I don't think that's what Tom said at all. If your .dmp file created by the export has a table in it that doesn't exist on your imp destination, it will create and populate the table on your destination. If the table already exists and you use IGNORE=Y, then import will ignore the error that will be raised when it tries to create the table and move along to try and populate the pre-existing table.

"3. The import of the first application "AAA" and merging it with "BBB" should not affect the current tables of "BBB" correct?"

The objects you chose to export from your source are the only objects that will be affected on your target. Be aware of differences in tablespaces. If you're "merging" tables, you might want to consider using "MERGE". There are several threads here that discuss this.


export

mo, March 09, 2006 - 10:41 pm UTC

thanks,

so ignore=y will populate the table with data. However, my question is that the import cannot recreate the table in the second database if it already exists there. This is why I think I have to drop all the tables manually before doing an import, to make sure the new structure of "EMP" (if any) gets to the database.

Correct?

Tom Kyte
March 10, 2006 - 12:09 pm UTC

you tell us.

think about it.

ignore=y => if table exists, let the create fail and continue.
ignore=n => if table exists, fail the import

import will not drop any existing table, so if you want your latest greatest....
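If you do decide to drop first, the drop statements can be generated the same way the tables= list was at the top of this page (a sketch; spool and review the output before running it as the target schema owner):

```sql
select 'drop table ' || table_name || ' cascade constraints;'
  from user_tables;
```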

Export/Import

David Andrew, June 13, 2006 - 3:46 pm UTC

Dear Tom,

First of all Thank You..for answering all my questions over the period of time.

I am stuck again with a problem. I have a table-level export of table (A) and need to import it into a different table (B) with the same table structure and columns, except the name is different. We are doing this exercise as part of UAT.

Can you please suggest how to go about? I tried various methods and all are failing. below is the log.

/load/rais/RSDD=>imp rais_admn@rsdd file=RSDD_RAIS_ADMN_CLM_RCPNT_DMNSN_exp.dmp fromuser=rais_admn touser=rais_admn tables=CLM_RCPNT_DMNSN_UAT grants=y ignore=y

Import: Release 8.1.7.4.0 - Production on Tue Jun 13 12:41:38 2006

(c) Copyright 2000 Oracle Corporation. All rights reserved.

Password:

Connected to: Oracle8i Enterprise Edition Release 8.1.7.4.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.7.4.0 - 64bit Production

Export file created by EXPORT:V08.01.07 via conventional path

Warning: the objects were exported by OPS$ORACLE, not by you

import done in US7ASCII character set and US7ASCII NCHAR character set
IMP-00033: Warning: Table "CLM_RCPNT_DMNSN_UAT" not found in export file
Import terminated successfully with warnings.
/load/rais/RSDD=>

It says the table name that I gave for importing into it, is not found in the export file.

Please help,

Thanks and regards,
David.

Tom Kyte
June 13, 2006 - 5:04 pm UTC

can you create a view with the original name ? imp with ignore=y should work then.
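A sketch of that view trick for the case above (assuming the table in the dump file was named CLM_RCPNT_DMNSN; since a conventional-path import just issues INSERTs, they pass through the view into the real table):

```sql
create view CLM_RCPNT_DMNSN as
  select * from CLM_RCPNT_DMNSN_UAT;
```

Then rerun the same imp command with tables=CLM_RCPNT_DMNSN and ignore=y, and drop the view afterwards.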

export

A reader, October 02, 2006 - 11:36 am UTC

Tom:

I have a production database in 8i. We created a new 9i database and tested the functions. Now we are ready to go live with the 9i database. I would like to verify the steps requried to copy data from 8i to 9i with you before we go live with 9i.

1. In 9i database, delete all data.
Is there one command to do this, or do you have to truncate table by table?


2. In 8i database, export production data

EXP username/password@8i_instance TABLES=(all tables from user_tables query) file=my8i.dmp

is this correct syntax?

3. In 9i database import the production data.

imp username/password@9i_instance FILE=my8i.dmp

is this correct?


4. Verify that all data was imported correctly. How do you do this? Would you check the import logs, or run record counts on 8i table_name versus 9i table_name?


Thank you,


Tom Kyte
October 02, 2006 - 11:46 am UTC

why not

restore 8i database
upgrade it


exp/imp would be amongst the last of my choices and just about the slowest of them all.

export

A reader, October 02, 2006 - 12:12 pm UTC

Tom:

The upgrade was completed (actually it is technically a copy since a new instance was created). I am trying to verify the steps for migrating data only (from 8i DB to 9i DB). Can you elaborate more on how you would delete all data in 9i instance and then copy a fresh dump from the 8i instance into the 9i instance.

Tom Kyte
October 02, 2006 - 12:33 pm UTC

but my approach to migrating data from 8i to 9i would be:

a) upgrade the 8i database to 9i

now you are done. erase the existing 9i database, restore the 8i database in its place. upgrade it.

export

sam, October 02, 2006 - 12:55 pm UTC

Tom:

I am confused here.

It is not an upgrade but a database copy.

We have 8i Database "A" running on machine "mach1".
We installed the 9i oracle server on "mach2" and created a new 9i oracle database "A" on "mach2".
Then we copied the schema "my_schema" from Database "A" on machine 1 to Database "A" on machine 2.

We tested the Database "A" on machine 2 and the functionality works fine.

Now we need to delete all this data in the 9i one and import a fresh copy of the 8i and then switch the users to the 9i one and shut down the 8i one.

What commands do you use to "erase the existing 9i database"?
What commands do you use to "restore the 8i database in its place"?

Thank you,

Tom Kyte
October 02, 2006 - 4:39 pm UTC

right, but that 9i database - NOT USEFUL, it is full of old data and needs to be completely refreshed.

So, take 8i, restore, upgrade.

That is what I keep trying to say. That 9i database - pretend it doesn't exist, it is useless, it doesn't have anything you want, need or desire.

Can you imagine trying to import with constraints, indexes, foreign keys and the like??? It would never happen.



export

A reader, October 02, 2006 - 5:47 pm UTC

Tom:

In coding terms (sql*plus) can you explain how you do

"So, take 8i, restore, upgrade"

DBA is thinking of dropping all tables in 9i and then importing a new set of the 8i tables.

Tom Kyte
October 02, 2006 - 9:15 pm UTC

the same way you got the 9i database you already have!!!!

tell me, how did you get your 9i database right now?

I guess I'll just give in and say "do whatever you did to get the current 9i database you have, just do it over as you did it before"

export

A reader, October 02, 2006 - 9:12 pm UTC

Tom:

I forgot to mention that there will be some code changes in the 9i database because it uses a different web server. So I can't upgrade the 8i database. I need to keep using the 9i one, but with a fresh copy of data.

Tom Kyte
October 02, 2006 - 9:19 pm UTC

sure you can, because you must have some source code control so you

a) make the 8i become 9i just like you did before
b) run YOUR upgrade scripts to make your application become "version 2.0"



export

A reader, October 02, 2006 - 10:03 pm UTC

Tom:

Before, we did this:

1. Export the database on Server A with the FULL=Y parameter.
2. FTP the export dump file to Server B.
3. Create the database on Server B using DBCA.
4. import the dump file with the FULL=Y parameter.

But why do all of this again? Why not just export the tables only (indexes and constraints too) for the user data and re-import it?

2. Would you ever use a sql*plus script that does insert into table a select * from table A@8i_link

or is that not suitable for an upgrade like this?

Tom Kyte
October 03, 2006 - 6:15 am UTC

I give up, do whatever you like. Just drop all of the tables and do it again, plan on having a mess with duplicated constraints, code that is overwritten, and so on.

I've told you what I would prefer from a speed and getting it right perspective:

a) upgrade the database from 8i to 9i (i would not have used exp/imp for that in the first place)
b) run your upgrade scripts to fix your code in the database


Migrate 8i to 9i

Chris Chan, October 03, 2006 - 5:54 am UTC

Reader:

I've seen in various forums that the imp/exp method is not "officially" supported by Oracle for migrating your 8i database. Can you please confirm Tom?

What Tom is recommending is:
- Clone your 8i database on your test server
- Install 9i in a different home, then either use DBUA or manual method to migrate your database from 8i to 9i

The document which can help you is PartNo. A96530-02.
There are also pre and post updates to be performed

This is the test run; make sure to time and document the upgrade and additional steps as this will give you a rough estimate for the live upgrade.

Once you are confident to proceed, perform a full backup on your live server, perform the upgrade (documented from the test run), and test if successful. If not, you will need to revert back to your old instance.
Plan your downtime too.


Tom Kyte
October 03, 2006 - 6:33 am UTC

imp and exp are fully supported to copy data and other schema objects from point a to point b. that is what the tools are. "copy" tools. They are not a "migrate" tool, but they can be used to copy stuff. If the copy of stuff satisfies your goals - so be it.



export

sam, October 03, 2006 - 6:54 am UTC

Tom:

1. What is really the difference between "copy" data and "migrate" data? If you have an 8i server and a 9i server, and you copy data from the 8i to the 9i and shut down the 8i, is not this really a migration?

2. Why would the export/import tables only do this:

"duplicated constraints, code that is overwritten, and
so on."

if you drop all the user tables in the 9i before import?



Tom Kyte
October 03, 2006 - 7:11 am UTC

1) that is copying data from database 1 to database 2 (in perhaps the most inefficient way, but whatever).

To migrate from 8i to 9i one normally just "upgrades"

2) if you drop all of the tables (they wanted to truncate initially), the constraints would not be duplicated (if you just truncated, they could be) - but their code (plsql for example) would cause tons of "failed to create" type errors - making reviewing the logs something of "a mess" (eg: error prone)

export

sam, October 03, 2006 - 9:14 am UTC

Tom:

I did not quite understand your comment about dropping or truncating the tables. As an example, if I did this to copy one table from 8i to 9i, would it create duplicate constraints or any other issues:

In 8i database,

EXP username/password@8i_instance TABLES=EMP file=my8i.dmp log=explog.txt

In 9i database

imp username/password@9i_instance FILE=my8i.dmp log=implog.txt


Tom Kyte
October 03, 2006 - 10:46 am UTC

if you truncate
then
   import (which would need ignore=y) would likely create duplicate constraints;
   all of the UNNAMED constraints would be duplicated
elsif you drop
then
   obviously you would not create duplicates, there is nothing to duplicate.
end if;




ops$tkyte%ORA10GR2> create table t ( x int not null,
  2                   y int check (y>0)
  3  );

Table created.

ops$tkyte%ORA10GR2> select constraint_name, constraint_type
  2    from user_constraints where table_name = 'T';

CONSTRAINT_NAME                C
------------------------------ -
SYS_C0014779                   C
SYS_C0014780                   C

ops$tkyte%ORA10GR2> !exp userid=/ 'owner=ops$tkyte';

Export: Release 10.2.0.2.0 - Production on Tue Oct 3 10:36:13 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$TKYTE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$TKYTE
About to export OPS$TKYTE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$TKYTE's tables via Conventional Path ...
. . exporting table                              T          0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

ops$tkyte%ORA10GR2> !imp userid=/ full=y ignore=y

Import: Release 10.2.0.2.0 - Production on Tue Oct 3 10:36:14 2006

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table                            "T"          0 rows imported
Import terminated successfully without warnings.

ops$tkyte%ORA10GR2> select constraint_name, constraint_type
  2    from user_constraints where table_name = 'T';

CONSTRAINT_NAME                C
------------------------------ -
SYS_C0014779                   C
SYS_C0014780                   C
SYS_C0014781                   C
 

About Gururaj query!

srinivasulu, October 04, 2006 - 3:43 am UTC

Hi gururaj,

I think there is some problem related to the size of the dump file.
Can you tell the size of the dump file being created? Maybe a dump file is created which holds only that particular number of rows.



Tom,

I need to export a file from production. Table has 4428718 rows in it and it is
growing slowly. When I export the table, I am getting 4163322 rows. No warnings
and says Export terminated successfully without warnings.

I am using parameter file, which I have attached here.

userid=/
direct=y
compress=n
rows=y
buffer=15000000
indexes=n
triggers=y
file=sas_nice.dmp
log=sas_nice.log
tables=
ops$nice.SAS_SUBSCRIBER_ADD_SERVICE

My exp command is like this

exp userid=ops$itguru/password@nicedb.nwt.com parfile=exp.par

Oracle database version is 8.1.7.4
O.S is open VMS Version 7.2

I have tried this at least 3-4 times. Each time the source has different numbers, but
my dump file will always have 4163322 rows. Could you tell me what the reason is
and how to overcome it?

Thanks
-Gururaj



Tom Kyte
October 04, 2006 - 7:09 am UTC

do you have fine grained access control in place?
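One way to check for such policies (a sketch; requires access to the DBA views, and the owner name is taken from the poster's parfile):

```sql
select object_owner, object_name, policy_name
  from dba_policies
 where object_owner = 'OPS$NICE';
```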

export

sam, October 04, 2006 - 5:01 pm UTC

Tom:

As a followup to the above (copy database from 8i to 9i), can you tell me which way is better to copy data and what potential problems you see with either one.


1. In 8i (Current live)

export the ttsadmin schema

In 9i

Get copy of revised code using getallcode.sql

Drop user ttsadmin cascade;

Create user ttsadmin

Import from user=ttsadmin the 8I data;

Run getallcode_install.lst (to apply the code changes that occurred in 9i)



2. export all tables only from 8i

drop all tables in 9i ttsadmin schema;
import all tables from 8i dump;

thanks,

Tom Kyte
October 04, 2006 - 5:23 pm UTC

you decide. I've already weighed in with my opinion many times.

export

sam, October 04, 2006 - 6:53 pm UTC

Tom:

Does not your opinion go with Option #1? I can't upgrade and restore an 8i database on a machine that will be shut down.

Tom Kyte
October 04, 2006 - 7:39 pm UTC

no, it would not, option #1 involves more steps. why would I want more steps?

I don't know what you mean by your second sentence at all. You cannot imp/exp on a shutdown machine either - so what was your point?

export

sam, October 04, 2006 - 8:18 pm UTC

Tom:

I dont understand how it involves more steps:

You told me to do this:

a) make the 8i become 9i just like you did before
b) run YOUR upgrade scripts to make your application become "version 2.0"


Now, for us to do what we did before, we have to do all the steps I listed above. Create a 9i database/instance and then export from the 8i/machine a and import into the 9i/machine b database. How can deleting a schema and exporting/importing it be more steps than I listed above?

Tom Kyte
October 04, 2006 - 8:29 pm UTC

Sam, YOU wrote the steps, count them?

1. In 8i (Current live)

export the ttsadmin schema

In 9i

Get copy of revised code using getallcode.sql

Drop user ttsadmin cascade;

Create user ttsadmin

Import from user=ttsadmin the 8I data;

Run getallcode_install.lst (to apply the code changes that occurred in 9i)



2. export all tables only from 8i

drop all tables in 9i ttsadmin schema;
import all tables from 8i dump;


I would seriously hope that running my "get all code" to suck code out of the database is not your idea of what I meant by "run your upgrade scripts"

you do have "upgrade scripts" don't you? you do use something to configuration manage your code don't you???





export

sam, October 05, 2006 - 6:56 am UTC

Tom:

I was referring to the original steps in creating the 9i database.

But I don't understand what you mean by upgrade script. What is the problem with copying all the code from a database using getallcode and then recompiling it in another database?

Tom Kyte
October 05, 2006 - 8:21 am UTC

the problem would be......


that you have no configuration management, you have no source code control,
you are the wild wooly west of programming shops, you probably don't even
back up this test database.


if you have to get your code out of the dictionary, that implies zero thought to managing this project over the long term has been done.

export

sam, October 05, 2006 - 8:48 am UTC

Tom:

Let us say I have a task to copy CODE ONLY (packages, procedures, functions) from DB A to an empty DB B.
I pull the code out of the dictionary using getallcode and compile it in B.

What real-life problems do you see that I am not seeing (unless getallcode.sql has a bug in it)?

How would you do it?

Tom Kyte
October 05, 2006 - 9:36 am UTC

I would ask the owner of database for their install scripts - I mean, any professional shop would have them in source code control, nicely version controlled under configuration management - so there would never be any cause to copy "code" from a database to another database, you install applications.



export

sam, October 05, 2006 - 11:33 am UTC

Tom:

OK, I create an install script (sql file) that has all the commands generated by getallcode, and tell you to run it using sql*plus.

I still do not see any problems. Procedure A in DB 1 is the same as in DB 2, etc.

It might not be as clean a way of doing it as you are suggesting, but it will do the job.

Tom Kyte
October 05, 2006 - 1:19 pm UTC

it is not only "not clean", it is rather primitive as far as software development goes.


It goes way beyond clean, it strikes at the very heart of the way you do code management.

which is: not at all.

export

sam, October 09, 2006 - 12:29 pm UTC

Tom:

If you only want to copy data from DB1 to DB2, would you

1.  Delete all tables data in schema in DB2
2.  Export tables only from DB1
3.  Import with ignore="Y"

Wouldn't this work to bring over the data only?

2.  Is there a faster way to delete the data in all of a schema's tables than
SQL> delete from A; etc.

thanks, 

Tom Kyte
October 09, 2006 - 1:40 pm UTC

1,2,3,2 laughing out loud.....


I would first look to see if "backup and restore" couldn't be done.

short of that, you'd need/want either
a) many more options on the export, like grants=n, constraints=n, indexes=no........

b) or to drop the objects in the target database


not sure I'd want to load into tables with indexes, constraints and the like. think about foreign keys, think about slow by slow loading (which imp does) with indexes and such on.


2) truncate is typically "faster". drop user A cascade is another option :)
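Generating the truncates with SQL-writing-SQL is one quick route. A rough sketch, assuming you are connected as the schema owner and that referencing foreign keys have been disabled or dropped first (TRUNCATE fails against the parent of an enabled foreign key):

```sql
set heading off feedback off pagesize 0
spool truncate_all.sql
select 'truncate table ' || table_name || ';' from user_tables;
spool off
@truncate_all.sql
```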

export

sam, October 09, 2006 - 4:08 pm UTC

Tom:

1. Doesn't backup/restore recreate all objects? You use sql*plus for that.

2. drop user a cascade will delete the whole schema? Then you are stuck with a restore or import afterwards?

3. When you import data only, does Oracle know the order of parent/child tables? Or does it not matter and there will be no foreign key violations, since it has to insert data into the parent before the child table?

Tom Kyte
October 09, 2006 - 9:21 pm UTC

1) eh?

2) eh? that is what you seemed to want.

3) if you do an import whereby it creates the objects sure. If you import into EXISTING tables with EXISTING constraints - no it doesn't.

export

sam, October 10, 2006 - 10:56 pm UTC

Tom:

Since it does not know the order of tables, I assume the data import into existing tables may very well fail? That does not sound like a good approach to ever use, does it?

Tom Kyte
October 11, 2006 - 8:06 am UTC

if you use exp/imp to copy data - and let exp/imp do the work (create the objects, load the objects) all works - it does the right thing,


if you have pre-existing constraints imp has no control over, it is up to you to deal with all of the issues - and there likely will be some.

exp

sam, November 28, 2006 - 6:39 pm UTC

Tom:

1. How can you find out what is causing an export error?
2. Can the same table name and trigger name cause export to error and stop?
3. What is that questionable stats message for?


Connected to: Oracle9i Enterprise Edition Release 9.2.0.2.0 - 64bit Production
With the OLAP and Oracle Data Mining options
JServer Release 9.2.0.2.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user TTTADMIN
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user TTTADMIN
About to export TTTADMIN's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export TTTADMIN's tables via Conventional Path ...
.
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
. . exporting table BATCH 171 rows exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
. . exporting table BIB_DATA 0 rows exported
EXP-00091: Exporting questionable statistics.
EXP-00091: Exporting questionable statistics.
. . exporting table BIB_INDEX
EXP-00002: error in writing to export file
EXP-00002: error in writing to export fileerror closing export file test9i.dmp
EXP-00000: Export terminated unsuccessfully


Tom Kyte
November 28, 2006 - 8:09 pm UTC

1) reading the error message?

I'd just about guess on this one "out of space"

2) should not, they are different namespaces, triggers can have the same name as the table

3)
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

because of that, statistics created in one character set might not be right in another.



exp

sam, November 28, 2006 - 9:44 pm UTC

Tom:

1. You mean out of space on the server disk, or in the Oracle tablespace or data file? Can you elaborate and explain how to fix the problem?

2. Is there a command or script that tells you the size of the database being exported?

Thanks,

Tom Kyte
November 30, 2006 - 8:41 am UTC

1) did you actually read the error message?

EXP-00002: error in writing to export file
EXP-00002: error in writing to export fileerror closing export file test9i.dmp
EXP-00000: Export terminated unsuccessfully


error in WRITING TO EXPORT FILE


2) query dba_segments, but the size of the database doesn't necessarily have anything to do with the size of the export (eg: indexes are exported as a CREATE command, not data - a table might be 5gig in allocated size but contain a single row - small in the export)
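A quick query for the allocated side of that - note this is space allocated to segments, not space actually used by data (search this site for show_space for the latter). The owner name here is just an example:

```sql
select owner, round(sum(bytes)/1024/1024) allocated_mb
  from dba_segments
 where owner = 'TTSADMIN'
 group by owner;
```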

export

sam, November 28, 2006 - 11:07 pm UTC

Tom:

1. Would this also be caused by a space problem?

. exporting synonyms
. exporting views
. exporting referential integrity constraints
. exporting stored procedures
. exporting operators
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting triggers
EXP-00056: ORACLE error 1422 encountered
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "XDB.DBMS_XDBUTIL_INT", line 52
ORA-06512: at line 1
EXP-00056: ORACLE error 1422 encountered
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "XDB.DBMS_XDBUTIL_INT", line 52
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully
//ora920/dbs/tttexp.dmp.gz: File exists


Tom Kyte
November 30, 2006 - 8:44 am UTC

umm, well - no.

export

smk_01_2002@yahoo.com, November 30, 2006 - 12:52 pm UTC

Tom:
Thanks,

1. I meant: is there a way to know the actual data size in a schema or database? For example, if it is allocated 5 gigabytes, how can I tell how much is actually used?

2. Any hints on what might be causing that second error in export?



Tom Kyte
November 30, 2006 - 2:01 pm UTC

1) search this site for show_space

2) nope, please utilize support

Export Hangs

ora, January 10, 2007 - 9:55 am UTC

Hi Tom,
I'm using the query option for exporting subset of a table data. This is my parfile

tables=poi.a1
query="WHERE mc IN ( SELECT mc FROM poi.a1 WHERE end_date between '01-JUL-2006' and '31-JUL-2006')" statistics=none indexes=n grants=n constraints=n triggers=n

When I specify a full month it just hangs, but the same parfile works fine for 15 days, i.e. query="WHERE mc IN ( SELECT mc FROM poi.a1 WHERE end_date between '01-JUL-2006' and '15-JUL-2006')".

Can you help me on this.
Thanks

Progress of import

A reader, March 08, 2007 - 12:49 pm UTC

I have an import that has been running for about 15-20 minutes. Is there a way I can find out how many rows have been imported at this point in time?
Tom Kyte
March 08, 2007 - 1:05 pm UTC

reader

A reader, July 02, 2007 - 11:09 am UTC

From the posting on 2006/11/28
"
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
"
1. If the NLS character set is defined via a session variable as WE8ISO8859P1 and the export is performed via SQL*Net, will it still show the warning messages? (The database being exported over SQL*Net uses the US7ASCII character set.)

2. Can the warning message be ignored?


reader

A reader, July 02, 2007 - 11:55 am UTC

Never mind for the previous posting

setting NLS_LANG=american_america.WE8ISO8859P1 at session level solves it
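For anyone following along, a sketch of what that looks like just before running exp on unix (on Windows it would be set NLS_LANG=...); the userid and file names here are placeholders:

```shell
NLS_LANG=american_america.WE8ISO8859P1
export NLS_LANG
exp userid=scott/tiger file=scott.dmp log=scott.log
```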

Statistics and transportable_tablespace

Yoav, July 04, 2007 - 5:49 am UTC

Hi Tom,
I'm using transportable_tablespace from a 10.2.0.3 DB to a 10.2.0.3 DB.
I found that the SAMPLE_SIZE values differ between the source database and the target database.
For example, in the source database:

SQL> SELECT TABLE_NAME,NUM_ROWS,SAMPLE_SIZE,LAST_ANALYZED,PARTITIONED
2 FROM DBA_TABLES
3 WHERE table_name = 'CM_CUST_DIM_1';

TABLE_NAME NUM_ROWS SAMPLE_SIZE LAST_ANAL PAR
------------------------------ ---------- ----------- --------- ---
CM_CUST_DIM_1 21284586 21284586 02-JUL-07 YES

The same query againt the target database:
SQL> SELECT TABLE_NAME,NUM_ROWS,SAMPLE_SIZE,LAST_ANALYZED,PARTITIONED
2 FROM DBA_TABLES
3 WHERE table_name = 'CM_CUST_DIM_1'
4 ;

TABLE_NAME NUM_ROWS SAMPLE_SIZE LAST_ANAL PAR
------------------------------ ---------- ----------- --------- ---
CM_CUST_DIM_1 21284586 2000 02-JUL-07 YES

As you can see, Oracle sampled only 2000 records in the target database,
while in the source database it sampled 100% of the records.

When I checked all of the tables that were transported, I saw that Oracle
set the SAMPLE_SIZE to a constant value of 2000, even when the number of rows in the table
is less than that.
Example:

SQL> SELECT table_name,num_rows,sample_size
2 FROM DBA_TABLES
3 WHERE owner = 'CRM'
4 AND tablespace_name = 'DW_CRM_TTS'
5 ORDER BY num_rows DESC;

TABLE_NAME NUM_ROWS SAMPLE_SIZE
------------------------------ ---------- -----------
CM_CASE_HIST_FACT 11998814 2000
CM_CASE_HIST_FACT_STATUS 10320104 2000
CM_CASE_HIST_FACT_GRP 9165942 2000
....
NAP_PRVGP_SR_VW 133 2000
NAP_SUPPLIER 115 2000
NAP_CANCEL_CAUS 96 2000
PS_PROD_CATGRY_TBL 68 2000
IP_VERIO_PLANS 37 2000
NAP_PRVGP_SL_VW 29 2000
PS_RD_SEGMENT 24 2000

More than that:
At partition level, the values are missing in the target database.
From the source database:
SQL> SELECT PARTITION_NAME,NUM_ROWS,SAMPLE_SIZE,LAST_ANALYZED
2 FROM DBA_TAB_PARTITIONS
3 WHERE table_name = 'CM_CUST_DIM_1';

PARTITION_NAME NUM_ROWS SAMPLE_SIZE LAST_ANAL
------------------------------ ---------- ----------- ---------
P01_CUST_DIM 16863512 16863512 02-JUL-07
P02_CUST_DIM 4421074 4421074 02-JUL-07
P03_CUST_DIM 0 02-JUL-07

And in target database:
SQL> SELECT PARTITION_NAME,NUM_ROWS,SAMPLE_SIZE,LAST_ANALYZED
2 FROM DBA_TAB_PARTITIONS
3 WHERE table_name = 'CM_CUST_DIM_1';

PARTITION_NAME NUM_ROWS SAMPLE_SIZE LAST_ANAL
------------------------------ ---------- ----------- ---------
P01_CUST_DIM 16863512 02-JUL-07
P02_CUST_DIM 4420795 02-JUL-07
P03_CUST_DIM 0 02-JUL-07

As you can see, there isn't any value in SAMPLE_SIZE.

The command that I'm using to run the transportable_tablespace
is:
parfile
========
TRANSPORT_TABLESPACE=y
TABLESPACES=dwh_crm_tts
FILE=/tmp/exp1.dmp
LOG=/tmp/exp1_log.log

exp userid="sys/xxx@$psdwh as sysdba" PARFILE=ExportParFileName

1. Can you please explain why the statistics weren't copied?
2. Why did Oracle set it to a constant value of 2000 in the target database, and why isn't there any SAMPLE_SIZE value at partition level?
3. Is exporting statistics the solution to this problem?

Thanks
Tom Kyte
July 05, 2007 - 12:48 pm UTC

the statistics are just moved, not gathered with the transport (the statistics will not change after all, the data blocks themselves are moved)

the optimizer doesn't use sample size, but every number the optimizer uses is there - so all is "well"

statistics and transportable_tablespace

Yoav, July 05, 2007 - 10:25 am UTC

Hi Tom,
I found that it is due to an unpublished Bug 4175223.
Thanks.
Tom Kyte
July 05, 2007 - 1:18 pm UTC

which by the way is closed as "not a bug" since the optimizer (as stated) doesn't really care.


dbms_stats.set_table_stats - sets the sample size to an arbitrary default value, since it is meaningless when we import statistics - we simply do not move that value.

Exporting Question

Maverick, January 26, 2008 - 10:42 am UTC

Tom, I want to export a schema [all tables/constraints/indexes etc.] - a full export - to another schema. But the catch is I cannot log in to the unix box and do that as a DBA. I have to use the local [windows 2000 client] oracle/bin directory, use the exp command, and log in as the user that I need the information from. Can I do an export like that?

eg: I need objects from Scott/tiger schema to scott1/tiger schema..
can I issue a command like this?

exp scott/tiger full=y etc..

or do I have to be a DBA to do that?
Tom Kyte
January 29, 2008 - 2:13 am UTC

you need DBA (exp_full_database priv) to export the full database.

$ exp userid=scott/tiger full=y

Export: Release 11.1.0.6.0 - Production on Tue Jan 29 02:20:06 2008

Copyright (c) 1982, 2007, Oracle. All rights reserved.


Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
EXP-00023: must be a DBA to do Full Database or Tablespace export

Exporting question

Maverick, January 29, 2008 - 9:53 am UTC

Thanks for your response. So, can I just do an export of some tables [or all tables] and not a full schema export [judging from the error you got above]?
Sorry, I would try this myself, but I didn't find the exp/imp executables in my client Oracle bin folder. Hence all these questions.

Thanks for any inputs you can provide.
Tom Kyte
January 30, 2008 - 9:01 am UTC

You said "full database" before, I did a full database export, that requires dba.

if you don't have exp and imp, why does it even matter though?

you would need dba to export a schema OTHER THAN YOUR OWN (you can export your own schema, but not someone elses)

Exporting tables

Maverick, January 30, 2008 - 11:44 am UTC

So, how can I export/import tables/objects from my own schema without the EXP/IMP executables?

Also, when we export all objects [mainly tables], in what order does Oracle export them? If I have child tables with foreign key constraints, will they be exported before the parent table?

Thanks for all your help and time.

Tom Kyte
January 30, 2008 - 1:56 pm UTC

hah, good question.

similar to asking "how can i create a powerpoint without powerpoint" isn't it?

if you don't have the tool that creates a certain type of file, you won't be creating that file.


oracle will export tables
then constraints

when it imports, it'll
create tables
then add constraints

ops$tkyte%ORA10GR2> create table p ( x int primary key );

Table created.

ops$tkyte%ORA10GR2> create table c ( x references p );

Table created.

ops$tkyte%ORA10GR2> !exp userid=/ 'owner=ops$tkyte'

Export: Release 10.2.0.1.0 - Production on Wed Jan 30 14:02:25 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$TKYTE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$TKYTE
About to export OPS$TKYTE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
<b>. about to export OPS$TKYTE's tables via Conventional Path ...
. . exporting table                              C          0 rows exported
. . exporting table                              P          0 rows exported
</b>. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
<b>. exporting referential integrity constraints
</b>. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.

ops$tkyte%ORA10GR2> !imp userid=/ full=y show=y

Import: Release 10.2.0.1.0 - Production on Wed Jan 30 14:02:43 2008

Copyright (c) 1982, 2005, Oracle.  All rights reserved.


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.02.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing OPS$TKYTE's objects into OPS$TKYTE
 "BEGIN  "
 "sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','"
 "CURRENT_SCHEMA'), export_db_name=>'ORA10GR2.REGRESS.RDBMS.DEV.US.ORACLE.COM"
 "', inst_scn=>'2560766');"
 "COMMIT; END;"
 "CREATE TABLE "C" ("X" NUMBER(*,0))  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRA"
 "NS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFA"
 "ULT) TABLESPACE "USERS" LOGGING NOCOMPRESS"
. . skipping table "C"

 "CREATE TABLE "P" ("X" NUMBER(*,0))  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRA"
 "NS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFA"
 "ULT) TABLESPACE "USERS" LOGGING NOCOMPRESS"
. . skipping table "P"

 "ALTER TABLE "P" ADD  PRIMARY KEY ("X") USING INDEX PCTFREE 10 INITRANS 2 MA"
 "XTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL "
 "DEFAULT) TABLESPACE "USERS" LOGGING ENABLE"
<b> "ALTER TABLE "C" ADD FOREIGN KEY ("X") REFERENCES "P" ("X") ENABLE"
</b>Import terminated successfully without warnings.



Exporting tables

Maverick, January 30, 2008 - 4:39 pm UTC

That's what I was wondering - how can I do that? But then you asked in your response earlier

<quote>
if you don't have exp and imp, why does it even matter though?
</quote>

That made me wonder ..wow, there must be a way to do this
;-)

So, is it possible to get those EXP.exe and IMP.exe files separately from Oracle or somewhere?

thanks again
Tom Kyte
January 31, 2008 - 3:16 am UTC

No, you would have to install them from the installation media.

Exporting Tables

Maverick, January 31, 2008 - 9:01 am UTC

Thanks for all your help, Tom. I'll reinstall my client and see if I can get those utility files. Is there any way I can export just the data using the EXP command, and not the structures?
I didn't see an option for exporting only rows.
Tom Kyte
February 04, 2008 - 3:18 pm UTC

export will always export the structure, you use "ignore=y" on import if you already have them created.
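So a data-only refresh with exp/imp looks something like this sketch (table and file names are made up; the target tables already exist, and ignore=y suppresses the "object already exists" errors on the CREATEs):

```shell
exp userid=scott/tiger tables=(emp,dept) file=data.dmp log=exp.log
imp userid=scott/tiger file=data.dmp ignore=y log=imp.log
```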


Exporting Table Data

Greg, January 31, 2008 - 2:52 pm UTC

Maverick, if all that you want is the data - how about something from: http://asktom.oracle.com/tkyte/flat/index.html ?

Something in there should work for you...

Exporting tables

A reader, February 04, 2008 - 3:26 pm UTC

Greg, I followed the link, but it looks like all I got was an empty zip file [when I clicked the "get unix/nt scripts here" link]. I think I can live with Tom's suggestion: with ignore=Y it will not overwrite the table structures I already have.

Which brings me to another one:

How can I import only tables from the export file I have, avoiding procedures/functions etc.? All I need now is the tables and their data. I can export only tables if that's the only way - and if I do, can I export all of the tables, with data?

Thanks again..
Tom Kyte
February 04, 2008 - 5:05 pm UTC

imp tables=(...)

Exporting Tables

A reader, February 04, 2008 - 6:00 pm UTC

So, do I need to give the list of all the tables [if I have 400 tables]? Isn't there any syntax like tables=* etc.?


Tom Kyte
February 04, 2008 - 7:11 pm UTC

if you want only tables, yes.

You could use sqlplus to create a spool file that would be used as a parameter file to list them all.
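One way to build that parameter file, modeled on the query at the top of this page - a sketch, assuming you run it as the schema owner and trim any spool noise from the resulting file before using it with parfile= on the command line:

```sql
set heading off feedback off pagesize 0
spool tables.par
select decode( rownum, 1, 'tables=(', '       ,' ) || table_name
  from user_tables
 union all
select ')' from dual;
spool off
```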

Importing Tables

Maverick, February 18, 2008 - 11:28 am UTC

Tom, I am back with more questions :-)..
As you suggested, I have been importing only the tables I wanted [for the data] by specifying all of the tables in the parameter file. But I have a scenario where:
Constraint A is in Schema B.
Constraint A is not in Schema A.

Dropped all the objects in Schema B.
Imported all the objects from SchemaA.dmp
Imported all the Tables from schemaB.dmp ignore=Y and rows=Y

The problem is that this final import, done for the data, will also recreate those unwanted constraints from schemaB.dmp. How do I avoid that?
Thanks,
Tom Kyte
February 18, 2008 - 1:57 pm UTC

why is the constraint unwanted?

using exp/imp you are very limited, if you have a table with constraints, and you import that table, you will get the constraint.

Importing Tables.

Maverick, February 18, 2008 - 2:32 pm UTC

Thanks for your response. The problem is, we do testing in one schema, make all the changes there, and then move those changes to the destination schema. Sometimes we might have to move the entire schema's tables, since we might have changed most of the objects in the testing schema. One of the testing scenarios required a constraint to be removed [not all of the table's constraints - only one of them], and on import I'll be getting that constraint back, since the last import I do is from the same schema's exported data. At this point in time I'm only interested in the data, not the structures, from schemaA.dmp as explained previously. If it were just one table and one constraint, it would be no big deal - I would work it out manually. But if this is a change on a bunch of tables, then it's a lot more work.

If that's not possible with the exp/imp process, is there any other method I can use to get the required data into this schema?
Tom Kyte
February 18, 2008 - 6:30 pm UTC

you can get all of the data in, the constraint will just be created - unless you create an object of that name first - in which case the constraint would fail.

that is an option.

exp/imp are not the most flexible of things (data pump, much more so, 10g)
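With 10g data pump the data-only case becomes much simpler - a sketch with made-up directory and file names (a DIRECTORY object must already exist and be readable/writable by the database):

```shell
expdp scott/tiger schemas=scott directory=dp_dir dumpfile=b.dmp content=data_only
impdp scott/tiger directory=dp_dir dumpfile=b.dmp content=data_only table_exists_action=truncate
```

content=data_only moves no metadata at all - so no constraints get created; exclude=constraint is the analogous filter when you do want metadata but not the constraints.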

Importing Tables..correction

Maverick, February 18, 2008 - 2:33 pm UTC

Sorry, please change the following in my last post: at that point I'm only interested in the data, and not the structures, from schemaB.dmp - not schemaA.dmp.

Thanks.

Importing Tables..Data Pump

Maverick, February 18, 2008 - 8:42 pm UTC

Luckily I am using 10g - so can you point me to some examples that would help with the scenarios explained above?

Thanks a lot..

export/import

Marcelo, February 10, 2009 - 8:31 am UTC

Hi, Tom

I need to export all the tables in a tablespace, but with a filter where "table.field = 1" - and not all of the tables have the field needed. How do I script this?

Thanks, regard.

Marcelo / Brazil.


Tom Kyte
February 11, 2009 - 8:58 am UTC

you will probably need to do a table level export

use sql to create your parameter file

spool file1.par
select table_name||',' from user_tables where
table_name in (select table_name from user_tab_columns where column_name = 'FIELD') and tablespace_name = 'FOO';
spool off

spool file2.par
select table_name||',' from user_tables t1 where not exists
(select null from user_tab_columns where column_name = 'FIELD' and table_name = t1.table_name) and tablespace_name = 'FOO';
spool off


edit up those par files (add the query= to the first) and use par= on the export.

grants=y used in exp/imp but see some execute privs lost after importing

Pauline, April 22, 2010 - 1:02 pm UTC

Tom,
For upgrading the Grid Control repository database, we export it from 10.1.0.4 and import into a newly created database which is 10.2.0.4. Even though we use GRANTS=Y (the default) in both the export and import command/parfile, it looks like some grants existing for 10.1.0.4 DB users such as SYSMAN and WMSYS are lost after importing. For example, in 10.1.0.4 SYSMAN has privileges to execute sys.dbms_lock and dbms_redefinition, but after importing they are gone. This caused many of SYSMAN's packages to become invalid. After granting these 2 privileges to SYSMAN, compiling SYSMAN's invalid packages is OK. I am wondering: is this the way Oracle exp/imp is supposed to work?

Thanks.



Tom Kyte
April 22, 2010 - 1:12 pm UTC

unless you do a full database exp/imp - you won't get grants like that, no. They are "database level".

If you just do a user level export, you get things that user owns - not things granted to that user at other levels.
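If such grants do have to be carried over by hand, SQL-generating-SQL works here too - a rough sketch run in the source database (spool it, review it, then run the output in the target as a suitably privileged user; SYSMAN is just the example grantee):

```sql
select 'grant ' || privilege || ' on ' || owner || '.' || table_name ||
       ' to '  || grantee   || ';'
  from dba_tab_privs
 where grantee = 'SYSMAN';
```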

Pauline, April 22, 2010 - 3:19 pm UTC

Yes, I was doing a database level exp/imp with full=y. So it shouldn't lose the user's grants for execute privileges. But it happened to me. Is it possibly an Oracle bug?

Thanks.

Pauline
Tom Kyte
April 22, 2010 - 4:34 pm UTC

did you do a full import? If not, we don't create the users, we don't do the top level grants.


question: why would you use exp/imp to upgrade??? that is like the single hardest approach possible known to me.

Pauline, April 22, 2010 - 3:23 pm UTC

Sorry, I didn't read your comment carefully. You mean that with a full database level exp/imp, some grants still won't be fully carried over? So since I was doing a full database exp/imp, it could have lost some grants, right?

Thanks.

Pauline
Tom Kyte
April 22, 2010 - 4:35 pm UTC

I mean if you did full, it would have created the users and done the grants.

Pauline, April 22, 2010 - 3:34 pm UTC

Tom,
Here is the sysman grants information before exporting and after importing by

select TABLE_NAME,PRIVILEGE,GRANTOR from dba_tab_privs where GRANTEE='SYSMAN';

--10.1.0.4 before exporting

--** sysman_grants

TABLE_NAME PRIVILEGE GRANTOR
------------------------------ ---------------------------------------- ------------------------------
AUX_STATS$ DELETE SYS
AUX_STATS$ INSERT SYS
AUX_STATS$ SELECT SYS
AUX_STATS$ UPDATE SYS
USER_TAB_COLUMNS SELECT SYS
DBMS_UTILITY EXECUTE SYS
DBMS_LOCK EXECUTE SYS
DBMS_JOB EXECUTE SYS
DBMS_RLS EXECUTE SYS
DBMS_RANDOM EXECUTE SYS
DBMS_OBFUSCATION_TOOLKIT EXECUTE SYS

TABLE_NAME PRIVILEGE GRANTOR
------------------------------ ---------------------------------------- ------------------------------
UTL_FILE EXECUTE SYS
DBMS_LOB EXECUTE SYS
DBMS_AQ EXECUTE SYS
DBMS_AQADM EXECUTE SYS
DBMS_AQ_BQVIEW EXECUTE SYS
DBMS_REDEFINITION EXECUTE SYS
DBMS_SHARED_POOL EXECUTE SYS
WRI$_DBU_FEATURE_USAGE DELETE SYS
WRI$_DBU_FEATURE_USAGE INSERT SYS
WRI$_DBU_FEATURE_USAGE SELECT SYS
WRI$_DBU_FEATURE_USAGE UPDATE SYS

TABLE_NAME PRIVILEGE GRANTOR
------------------------------ ---------------------------------------- ------------------------------
WRI$_DBU_FEATURE_METADATA DELETE SYS
WRI$_DBU_FEATURE_METADATA INSERT SYS
WRI$_DBU_FEATURE_METADATA SELECT SYS
WRI$_DBU_FEATURE_METADATA UPDATE SYS
WRI$_DBU_HIGH_WATER_MARK DELETE SYS
WRI$_DBU_HIGH_WATER_MARK INSERT SYS
WRI$_DBU_HIGH_WATER_MARK SELECT SYS
WRI$_DBU_HIGH_WATER_MARK UPDATE SYS
WRI$_DBU_HWM_METADATA DELETE SYS
WRI$_DBU_HWM_METADATA INSERT SYS
WRI$_DBU_HWM_METADATA SELECT SYS

TABLE_NAME PRIVILEGE GRANTOR
------------------------------ ---------------------------------------- ------------------------------
WRI$_DBU_HWM_METADATA UPDATE SYS
DBMS_FEATURE_USAGE EXECUTE SYS
QT16348_BUFFER SELECT SYS
QT177515_BUFFER SELECT SYS
ORA_TEMP_1_DS_1173 INSERT SYS
ORA_TEMP_1_DS_1173 SELECT SYS
ORA_TEMP_1_DS_1174 INSERT SYS
ORA_TEMP_1_DS_1174 SELECT SYS
ORA_TEMP_1_DS_1175 INSERT SYS
ORA_TEMP_1_DS_1175 SELECT SYS
ORA_TEMP_1_DS_1176 INSERT SYS

TABLE_NAME PRIVILEGE GRANTOR
------------------------------ ---------------------------------------- ------------------------------
ORA_TEMP_1_DS_1176 SELECT SYS
ORA_TEMP_1_DS_1177 INSERT SYS
ORA_TEMP_1_DS_1177 SELECT SYS
ORA_TEMP_1_DS_1178 INSERT SYS
ORA_TEMP_1_DS_1178 SELECT SYS
MGMT_PASS_VERIFY EXECUTE SYS
QT286058_BUFFER SELECT SYS
QT286344_BUFFER SELECT SYS

52 rows selected.


--created the 10.2.0.4 database with dbca-generated scripts, which create no SYSMAN user

--after importing, the SYSMAN user's grants are

--** sysman_grants

TABLE_NAME PRIVILEGE GRANTOR
------------------------------ ---------------------------------------- ------------------------------
DBMS_AQ EXECUTE SYS
DBMS_AQADM EXECUTE SYS
DBMS_AQ_BQVIEW EXECUTE SYS
DBMS_RLS EXECUTE SYS
QT53636_BUFFER SELECT SYS
QT12869_BUFFER SELECT SYS
QT52967_BUFFER SELECT SYS
QT53138_BUFFER SELECT SYS
QT11592_BUFFER SELECT SYS
QT12513_BUFFER SELECT SYS
QT53144_BUFFER SELECT SYS

TABLE_NAME PRIVILEGE GRANTOR
------------------------------ ---------------------------------------- ------------------------------
QT52951_BUFFER SELECT SYS

12 rows selected.

Thanks.



Tom Kyte
April 22, 2010 - 4:37 pm UTC

that wasn't necessary.

exp

A reader, April 22, 2010 - 7:58 pm UTC


Full DB exp/imp lost user grants

Pauline, April 23, 2010 - 10:20 am UTC

Tom,
Thanks very much for your answers.
The reason for using the exp/imp method to upgrade the database is that this DB is the Grid Control repository DB, at base version 10.1.0.4. We have upgraded the Grid Control OMS and central Agent from 10.2.0.4 to 10.2.0.5. According to Oracle's
recommendation, the repository DB needs to be upgraded to 10.2.0.4.
But there is no direct upgrade path from 10.1.0.4 to 10.2.0.4 (you have to go to 10.1.0.5 first) if you apply the patch set and run the upgrade script. So we chose exp/imp to upgrade the DB from 10.1.0.4 to 10.2.0.4, which can save some steps.

We did FULL database export and import.
Here is the options for our exp/imp

--export

full=y rows=y direct=n buffer=102400000 grants=y indexes=y statistics=none file=trd_exp_full_with_data_sys_conv.dmp log=trd_exp_full_with_data_sys_conv.log

--import

buffer=102400000
FULL=Y
rows=Y
statistics=none
COMMIT=Y
IGNORE=Y
file=trd_exp_full_with_data_sys_conv.dmp
GRANTS=Y
indexes=y
log=trd_imp_full_with_data_sys_conv.log
toid_novalidate=SYSTEM.REPCAT$_OBJECT_NULL_VECTOR,WMSYS.OPER_LOCKVALUES_TYPE,WMSYS.OPER_LOCKVALUES_ARRAY_TYPE,WMSYS.INTTOSTR_ARRAY_TYPE,WMSYS.TRIGOPTIONSTYPE,WMSYS.WM_CONCAT_IMPL)

The user SYSMAN is in the export file and not in the newly created DB; it was created by the import with some grants, but it also lost some grants after importing (you can see it from the information I posted above). That does not look right for a FULL DB exp/imp. That is why I am asking you the question. Your answer confirmed that a full exp/imp would have created the users and done the grants.

So I suspect something wrong with Oracle.

Thanks.





Tom Kyte
April 23, 2010 - 2:04 pm UTC

... So we choose exp/imp
to upgrade DB from 10.1.0.4 to 10.2.0.4 which can save some steps. ...

by using one mongo step that has a complexity level way beyond any of the upgrade steps? I'd rethink that process for now and the future. *just upgrade*, dumping and reloading is always going to be *the hard way*.

I would just do the upgrade, I've never used a full export/import to upgrade in all of my years with Oracle.

Copy all objects(Table, Views, packages, Procedures..)

A reader, October 28, 2010 - 10:02 am UTC

Hi Tom,

Could you please suggest how to copy one database to another? The schema name is the same, but the database is different.

I tried creating the objects in the other schema. It worked, but it will take me weeks as I have lots of objects. I have to work in Toad, as SQL Developer is not allowed as an interface.


I want to copy into a live database; I can't shut the database down. Will export/import work on a live database as well?
Tom Kyte
October 28, 2010 - 12:18 pm UTC

data pump would be the way to accomplish this. Your DBA will be able to facilitate this for you easily.
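
For reference, a minimal Data Pump schema copy might look like the following sketch. The schema name, directory object, and file names are hypothetical placeholders, not values from this thread:

```
# on the source database (run by a privileged user), export one schema
expdp system schemas=SCOTT directory=DATA_PUMP_DIR \
      dumpfile=scott.dmp logfile=scott_exp.log

# copy scott.dmp to the target server's directory, then import it
impdp system schemas=SCOTT directory=DATA_PUMP_DIR \
      dumpfile=scott.dmp logfile=scott_imp.log
```

Note that Data Pump runs against an open database; the export is a consistent read of the data as of the job, so no shutdown is needed.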

expdp/impdp queries for cross platform

A reader, June 29, 2011 - 1:47 pm UTC

Hi Tom,

I want to use an export dump taken from a 10.2.0.4 database on 64-bit Red Hat Linux and import it into 32-bit Windows Oracle 10.2.0.1. Is it possible? Can I use any of the expdp/impdp options?
Tom Kyte
June 29, 2011 - 3:10 pm UTC

data pump is platform agnostic, you can expdp on any platform and impdp on any other.

the versions are upwards compatible - meaning you can expdp from version X and impdp into version Y as long as Y >= X.

10.2 to 10.2 is OK.
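
As a sketch of what that might look like for this question (file names are placeholders; the VERSION parameter pins the dump-file format to a release and is likely not strictly required between 10.2 patch sets, but it is the documented knob when the target is older than the source):

```
# export on the 10.2.0.4 Linux source
expdp system full=y version=10.2 directory=DATA_PUMP_DIR \
      dumpfile=xplat.dmp logfile=xplat_exp.log

# import on the 10.2.0.1 Windows target; no endian conversion step is
# needed, because Data Pump dump files are platform-independent
impdp system full=y directory=DATA_PUMP_DIR \
      dumpfile=xplat.dmp logfile=xplat_imp.log
```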

how to recover foreign keys being lost

A reader, October 03, 2013 - 1:50 pm UTC

I have three schemas: A, B, and C. I need to recover schema B, but I have foreign keys in B referencing tables in A, and foreign keys in C referencing tables in B.
To recover schema B I have to drop it with cascade, which drops the related foreign keys between B-C and B-A. So how do I recover the foreign keys that are lost?
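
One way to approach this (a sketch, not an answer from this thread) is to capture the DDL of the affected foreign keys with DBMS_METADATA before dropping schema B, then re-run the spooled script after the recovery. 'B' below stands in for the real schema name:

```
-- run as a DBA in SQL*Plus before the drop
set long 100000 pagesize 0 linesize 200
spool recreate_fks.sql
select dbms_metadata.get_ddl('REF_CONSTRAINT', constraint_name, owner) || ';'
  from dba_constraints
 where constraint_type = 'R'
   and (owner = 'B'        -- B's foreign keys pointing at A
     or r_owner = 'B');    -- other schemas' (C's) foreign keys pointing at B
spool off
```

The import of B will re-create B's own constraints; the constraints that were dropped from C would be restored by running recreate_fks.sql.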
