DBMS_DATAPUMP
Eric Peterson, October 29, 2007 - 6:21 pm UTC
My DBA exported (using Data Pump) 3 partitions of 4 tables from Production. I want to import this data into a test database. The test database is on a server I do not have unix access to, so I can't use impdp from the command line.
The DBA did not tell me he was using Data Pump, so at first I tried imp, but that failed. I then attempted impdp from the unix box I do have access to, to load this data into the database, but the DIRECTORY object was not "findable" from the database - i.e. it preferred "/opt/t16/app/oracle/t16ah00/util/<db>" while I had the data in "/u01/test/export".
Then I got the idea of trying to use the DATAPUMP package to have the database read the local directory directly, instead of me trying to load from a separate machine.
I created a directory and granted appropriate permissions. Using your example, I came up with:
SET SERVEROUTPUT ON
DECLARE
dph NUMBER; -- job handle
BEGIN
dph := DBMS_DATAPUMP.OPEN ( operation => 'IMPORT',
job_mode => 'TABLE',
remote_link => NULL,
job_name => 'Import2',
version => 'COMPATIBLE' );
DBMS_DATAPUMP.ADD_FILE ( handle => dph,
filename => 'xxx.dmp',
directory => 'DPUMP_DIR1',
filetype => DBMS_DATAPUMP.ku$_file_type_dump_file );
DBMS_DATAPUMP.ADD_FILE ( handle => dph,
filename => 'xxx_imp.log',
directory => 'DPUMP_DIR1',
filetype => DBMS_DATAPUMP.ku$_file_type_log_file );
-- Tried to load all 3 partitions together
--value => '= P_2007_08, P_2007_09, P_2007_10',
DBMS_DATAPUMP.DATA_FILTER ( handle => dph,
name => 'PARTITION_LIST',
value => '= P_2007_09',
table_name => 'DSLOG' );
DBMS_DATAPUMP.SET_PARALLEL ( handle => dph,
degree => 1 );
DBMS_DATAPUMP.SET_PARAMETER ( handle => dph,
name => 'INCLUDE_METADATA',
value => 0 );
DBMS_DATAPUMP.SET_PARAMETER ( handle => dph,
name => 'TABLE_EXISTS_ACTION',
value =>'APPEND' );
DBMS_DATAPUMP.START_JOB ( dph );
DBMS_DATAPUMP.DETACH ( dph );
END;
/
I get the following error. Line 31 is the START_JOB.
DECLARE
*
ERROR at line 1:
ORA-39002: invalid operation
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4646
ORA-06512: at line 31
But then, looking at the status of the job, I get the following. Eventually it stops and clears out of dba_datapump_jobs - but no data is imported into the table's partition.
> SELECT * FROM dba_datapump_jobs;
OWNER_NAME  JOB_NAME  OPERATION  JOB_MODE  STATE       ATTACHED_SESSIONS  DATAPUMP_SESSIONS
----------  --------  ---------  --------  ----------  -----------------  -----------------
RPTMGR      Import2   IMPORT     TABLE     COMPLETING                  1                  2
So I guess I'm confused about how to use this package appropriately. Am I on the right track? Or should I bother the DBAs & SAs for a unix account on the box with the database?
Thanks for any insight you can provide.
Eric
October 30, 2007 - 1:12 pm UTC
the dmp file must be accessible on a file system local to the database server itself.
is it?
if it is on your PC, you need to make your filesystem "visible" to the server itself.
impdp always runs in the database - even if you use the command line client, all that does is log into Oracle and run the plsql packages like you did in sqlplus.
And they run on the server.
impdp
Eric Peterson, October 30, 2007 - 1:27 pm UTC
Yes, the files are on the database server. I tried to load from the client.
The par file:
DIRECTORY=DPUMP_DIR1
DUMPFILE=tbl.dmp
TABLES=tbl:P_2007_08,tbl:P_2007_09,tbl:P_2007_10
CONTENT=DATA_ONLY
TABLE_EXISTS_ACTION=APPEND
JOB_NAME=impjob01
STATUS=30
The directory is there and granted to this user. I get the following error. I went through the documentation and I think I have all the parameters and values correct. I must still be missing something.
>impdp xxx/yyy@zzz tbl_imp.par
Import: Release 10.2.0.1.0 - 64bit Production on Tuesday, 30 October, 2007 10:14:14
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
UDI-00014: invalid value for parameter, 'attach'
Thanks for all your help.
Eric
October 30, 2007 - 2:21 pm UTC
impdp uses positional parameters, attach is first and tbl_imp.par is not a valid attach parameter :)
$ impdp u/p parfile=tbl_imp.par
dbms_datapump script vs procedure
Jasbir, November 12, 2007 - 1:32 pm UTC
Hi Tom,
Database Version: 10.2 (Windows 32 bit)
I ran the following script in SQL*Plus, but when I made it into a procedure it stopped working and gave the error ORA-31626: job does not exist.
SCRIPT:
DECLARE
l_schema VARCHAR2(30) := sys_context('USERENV', 'CURRENT_SCHEMA');
l_dp_handle NUMBER;
BEGIN
l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', remote_link => NULL, job_name => 'EMP_EXPORT', version => 'LATEST');
dbms_datapump.add_file(handle => l_dp_handle, filename => l_schema||'_'||to_char(sysdate,'yyyymmddhh24miss')||'.dmp', directory => 'DATA_PUMP_DIR');
dbms_datapump.metadata_filter(handle => l_dp_handle, name => 'SCHEMA_EXPR', value => '= '''||l_schema||'''');
dbms_datapump.start_job(l_dp_handle);
dbms_datapump.detach(l_dp_handle);
END;
PROCEDURE:
CREATE OR REPLACE PROCEDURE EXPORTDB
IS
l_schema VARCHAR2(30) := sys_context('USERENV', 'CURRENT_SCHEMA');
l_dp_handle NUMBER;
BEGIN
l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', remote_link => NULL, job_name => 'EMP_EXPORT', version => 'LATEST');
dbms_datapump.add_file(handle => l_dp_handle, filename => l_schema||'_'||to_char(sysdate,'yyyymmddhh24miss')||'.dmp', directory => 'DATA_PUMP_DIR');
dbms_datapump.metadata_filter(handle => l_dp_handle, name => 'SCHEMA_EXPR', value => '= '''||l_schema||'''');
dbms_datapump.start_job(l_dp_handle);
dbms_datapump.detach(l_dp_handle);
END;
Thanks.
November 16, 2007 - 1:29 pm UTC
http://asktom.oracle.com/tkyte/Misc/RolesAndProcedures.html
No roles are enabled in that procedure - the user would need CREATE TABLE granted directly, not via a role:
ops$tkyte%ORA10GR2> create user test identified by test;
User created.
ops$tkyte%ORA10GR2> grant resource, connect to test;
Grant succeeded.
ops$tkyte%ORA10GR2> grant EXP_FULL_DATABASE to test;
Grant succeeded.
ops$tkyte%ORA10GR2> grant IMP_FULL_DATABASE to test;
Grant succeeded.
ops$tkyte%ORA10GR2> grant all on directory data_pump_dir to test;
Grant succeeded.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> connect test/test
Connected.
test%ORA10GR2> CREATE OR REPLACE PROCEDURE EXPORTDB
2 IS
3 l_schema VARCHAR2(30) := sys_context('USERENV', 'CURRENT_SCHEMA');
4 l_dp_handle NUMBER;
5 BEGIN
6 l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', remote_link => NULL, job_name => 'EMP_EXPORT', version => 'LATEST');
7 dbms_datapump.add_file(handle => l_dp_handle, filename => 'test2.dmp', directory => 'DATA_PUMP_DIR');
8 dbms_datapump.metadata_filter(handle => l_dp_handle, name => 'SCHEMA_EXPR', value => '= '''||l_schema||'''');
9 dbms_datapump.start_job(l_dp_handle);
10 dbms_datapump.detach(l_dp_handle);
11 END;
12 /
Procedure created.
test%ORA10GR2>
test%ORA10GR2> exec exportdb
BEGIN exportdb; END;
*
ERROR at line 1:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 911
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4356
ORA-06512: at "TEST.EXPORTDB", line 6
ORA-06512: at line 1
test%ORA10GR2>
test%ORA10GR2> connect /
Connected.
ops$tkyte%ORA10GR2> grant create table to test;
Grant succeeded.
ops$tkyte%ORA10GR2> connect test/test
Connected.
test%ORA10GR2> exec exportdb
PL/SQL procedure successfully completed.
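Side note: if invoker's rights semantics are acceptable, declaring the procedure with AUTHID CURRENT_USER should also work around this, since roles stay enabled in invoker's rights routines called directly from a session, so the directly granted CREATE TABLE would not be needed. A sketch of the same procedure with that one change:

CREATE OR REPLACE PROCEDURE EXPORTDB
AUTHID CURRENT_USER  -- invoker's rights: roles (e.g. RESOURCE's create table) remain enabled
IS
  l_schema VARCHAR2(30) := sys_context('USERENV', 'CURRENT_SCHEMA');
  l_dp_handle NUMBER;
BEGIN
  l_dp_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA', remote_link => NULL, job_name => 'EMP_EXPORT', version => 'LATEST');
  dbms_datapump.add_file(handle => l_dp_handle, filename => 'test2.dmp', directory => 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(handle => l_dp_handle, name => 'SCHEMA_EXPR', value => '= '''||l_schema||'''');
  dbms_datapump.start_job(l_dp_handle);
  dbms_datapump.detach(l_dp_handle);
END;
/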
dbms_datapump script vs procedure
Jasbir, November 12, 2007 - 6:00 pm UTC
To add to my previous post: the user I was testing the script vs. procedure with had the following grants:
create user test identified by test;
grant resource, connect to test;
grant EXP_FULL_DATABASE to test;
grant IMP_FULL_DATABASE to test;
Thanks.
grant create table to test
Jasbir, November 22, 2007 - 3:26 pm UTC
Hi Tom,
Thank you for your help - it fixed my problem. My obvious question is: why do you have to explicitly execute "grant create table to test" when this should be part of the resource role?
Thanks,
Jasbir.
November 26, 2007 - 11:11 am UTC
Export Error
Nishith Pandey, December 08, 2007 - 5:58 am UTC
Hi Tom
I submit the export job via OEM. Our database is Oracle Database 10gR2. Sometimes the job fails with the following log:
Job EXPORT006981 has been reopened at Wednesday, 05 December, 2007 23:10
Restarting "SYSTEM"."EXPORT006981":
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. estimated "AGR"."AD_AGOMAST" 256 MB
. estimated "SAN"."FA_VCHDET" 216 MB
. estimated "PATNA"."AD_DUMYDATA" 192 MB
. estimated "COMMOBJ"."AD_OUTSTAND_16_10_07" 176 MB
. estimated "NOIDA"."AD_CCPEDTN" 176 MB
. estimated "NOIDA"."CR_DLYSUP" 168 MB
. estimated "VNS"."VNS_AD_CCPEDTN" 161 MB
. estimated "NOIDA"."FA_VCHDET" 152 MB
. estimated "AGR"."NEW_DUMMYDATA" 144 MB
. estimated "CENRO"."N_AD_CCPDET" 144 MB
. estimated "VNS"."NEW_DUMMYDATA" 141 MB
. estimated "LKO"."AD_CCPEDTN" 140 MB
. estimated "ALLD"."ALD_AD_CCPEDTN" 136 MB
. estimated "CENRO"."D_AD_CCPEDTN" 120 MB
. estimated "CENRO"."L_AD_CCPDET" 120 MB
. estimated "LKO"."LKO_AD_CCPDET" 120 MB
.
.
. (Not shown for abbreviation)
.
.
.
.
. estimated "VNS"."PAY_PRV_MONSTATUS" 0 KB
. estimated "VNS"."PAY_TEMPADV" 0 KB
. estimated "VNS"."PAY_TOURADV" 0 KB
. estimated "VNS"."TASKS_STATUS" 0 KB
Total estimation using BLOCKS method: 22.12 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment SYSTEM.SYS_LOB0000374957C00039$$ by 1024 in tablespace SYSTEM
ORA-39171: Job is experiencing a resumable wait.
ORA-01691: unable to extend lob segment SYSTEM.SYS_LOB0000374957C00039$$ by 1024 in tablespace SYSTEM
. . . (the same ORA-39171/ORA-01691 pair repeats many more times) . . .
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.CREATE_OBJECT_ROWS while calling FORALL [TABLE]
ORA-30032: the suspended (resumable) statement has timed out
ORA-01691: unable to extend lob segment SYSTEM.SYS_LOB0000374957C00039$$ by 1024 in tablespace SYSTEM
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 116
ORA-06512: at "SYS.KUPW$WORKER", line 6248
----- PL/SQL Call Stack -----
object line object
handle number name
000007FFCE2F99B0 14916 package body SYS.KUPW$WORKER
000007FFCE2F99B0 6300 package body SYS.KUPW$WORKER
000007FFCE2F99B0 5638 package body SYS.KUPW$WORKER
000007FFCE2F99B0 2145 package body SYS.KUPW$WORKER
000007FFCE2F99B0 6861 package body SYS.KUPW$WORKER
000007FFCE2F99B0 1262 package body SYS.KUPW$WORKER
000007FFCD236720 2 anonymous block
Job "SYSTEM"."EXPORT006981" stopped due to fatal error at 01:35:33
When we increase the size of the SYSTEM tablespace, the next job runs successfully. What is the permanent remedy for this error?
December 10, 2007 - 10:53 am UTC
to have sufficient space in system?
sounds like system is full and cannot autoextend - hence lots of things are going to "break"
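A quick way to verify that diagnosis before resizing (a sketch, assuming access to the DBA views):

-- free space currently available in SYSTEM
SELECT ROUND(SUM(bytes)/1024/1024) AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'SYSTEM';

-- can the SYSTEM datafiles grow, and up to what size?
SELECT file_name, autoextensible, ROUND(maxbytes/1024/1024) AS max_mb
FROM   dba_data_files
WHERE  tablespace_name = 'SYSTEM';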
Data Pump
Debasish, May 29, 2008 - 9:03 am UTC
Using expdp/impdp (Data Pump in 10g), can I export and import data from one schema/database to another schema/database in one shot (i.e. with no dump file)? It would just read data from one schema/DB and write it to another without creating a dump file. Is this possible with a DB link, and without one?
Thanks in Advance
May 29, 2008 - 9:46 am UTC
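The reply body is missing here, but Tom covers the same ground later in this thread ("data pump - export/import over a dblink without creating a disk file at all"): that is the NETWORK_LINK parameter. A sketch, where source_link is a hypothetical database link created in the target database and pointing at the source:

impdp scott/tiger@target_db schemas=scott network_link=source_link remap_schema=scott:scott_copy directory=DATA_PUMP_DIR logfile=net_imp.log

Without a database link there is no single-shot path - a dump file on the server is required.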
Datapump and manipulation of exported DATA
MK, October 20, 2008 - 1:16 pm UTC
Hi Tom,
I was wondering if there is a way of exporting the database but with the ability to modify an ID column in the tables. For example, if I have an ID column with values of 100, 200, 300, etc., I would like to multiply every single value by a constant - say 2 or 3.
Basically "select 2 * id from table;" How would I achieve this? And would you recommend using the DBMS_DATAPUMP package to write this piece of custom manipulation or can it be done from command line?
Cheers!
October 21, 2008 - 3:37 pm UTC
create or replace directory tmp as '/tmp'
/
create table X
organization external
( type oracle_datapump
default directory TMP
location( 'x.dmp' )
)
as
select whatever you want.....
is an easy approach. You can then create an external table on the receiving site that maps to this structure and "insert /*+ APPEND */ into whatever select * from X"
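Applied to the question above (table and column names are made up), the transformation can be baked into the unload itself:

create or replace directory tmp as '/tmp';

create table t_unload
organization external
( type oracle_datapump
  default directory tmp
  location( 't_unload.dmp' )
)
as
select 2 * id as id,       -- the custom manipulation happens right here
       name, created_dt    -- hypothetical remaining columns
from   t;

Then, on the receiving site, map an external table to t_unload.dmp and run: insert /*+ APPEND */ into t select * from t_unload;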
Load data using oracle_datapump driver
A reader, January 27, 2009 - 5:23 pm UTC
Tom,
The 10g and 11g documentation is confusing about data loading using the ORACLE_DATAPUMP driver.
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/et_concepts.htm#i1009391
The documentation says:
"The ORACLE_DATAPUMP access driver can perform both loads and unloads. The data must come from binary dump files. Loads to internal tables from external tables are done by fetching from the binary dump files. Unloads from internal tables to external tables are done by populating the binary dump files of the external table."
However, further down on the same page, under the heading "Using External Tables to Load and Unload Data", it says:
"Data can only be unloaded using the ORACLE_DATAPUMP access driver."
My question is: given a Data Pump export dump file, can I do a data load (load data into permanent database tables from an external table)? If yes, can you please give an example?
Thanks...
January 28, 2009 - 3:23 pm UTC
It says "data can only be UNLOADED using X"
It does not say "X can only be used to UNLOAD data"
X, oracle_datapump, can be used to read from the filesystem and write to the filesystem
ops$tkyte%ORA11GR1> create or replace directory tmp as '/tmp'
2 /
Directory created.
ops$tkyte%ORA11GR1> create table all_objects_unload
2 organization external
3 ( type oracle_datapump
4 default directory TMP
5 location( 'allobjects.dat' )
6 )
7 as
8 select * from all_objects
9 /
Table created.
so, we just unloaded... somewhere else, we can:
ops$tkyte%ORA11GR1> create table t
2 ( OWNER VARCHAR2(30),
3 OBJECT_NAME VARCHAR2(30),
4 SUBOBJECT_NAME VARCHAR2(30),
5 OBJECT_ID NUMBER,
6 DATA_OBJECT_ID NUMBER,
7 OBJECT_TYPE VARCHAR2(19),
8 CREATED DATE,
9 LAST_DDL_TIME DATE,
10 TIMESTAMP VARCHAR2(19),
11 STATUS VARCHAR2(7),
12 TEMPORARY VARCHAR2(1),
13 GENERATED VARCHAR2(1),
14 SECONDARY VARCHAR2(1)
15 )
16 organization external
17 ( type oracle_datapump
18 default directory TMP
19 location( 'allobjects.dat' )
20 )
21 /
Table created.
ops$tkyte%ORA11GR1> select count(*) from t;
COUNT(*)
----------
67522
and there we can read it.
Character set mismatch while loading the data. Please help me.
Rajeshwaran, Jeyabal, January 28, 2009 - 3:44 pm UTC
Tom,
I am not sure whether this question can be posted here, but I need your help with this scenario.
There is a character set mismatch between my local and production database.
Local database
===============
SELECT value
FROM nls_database_parameters
WHERE parameter ='NLS_CHARACTERSET';
VALUE
=====
WE8MSWIN1252
Production database
===================
appread@RASSYST> SELECT value
2 FROM nls_database_parameters
3 WHERE parameter ='NLS_CHARACTERSET';
VALUE
-------------
WE8ISO8859P1
So I wrote my control file like the one below to load my data into the production database.
Load DATA
CHARACTERSET WE8MSWIN1252
INFILE *
TRUNCATE
INTO TABLE "T"
WHEN ( SEQ_ID = '000' ) AND ( CODE <> '') AND (DESCRIPTION <> '')
AND (SHORT_HCC_DESC <> '') AND (ACTUAL_WEIGHT <>'')
AND (WEIGHT<>'') AND (YEAR_DOS<>'') AND (YEAR_PY<>'')
FIELDS TERMINATED BY '|'
TRAILING NULLCOLS
(SEQ_ID "HCC_STAGE_SEQ.nextval",
CODE,
DESCRIPTION,
SHORT_HCC_DESC,
ACTUAL_WEIGHT,
WEIGHT,
BEGIN_DATE "TO_DATE(:BEGIN_DATE, 'MMDDYYYY')",
END_DATE "TO_DATE(:END_DATE, 'MMDDYYYY')",
YEAR_DOS,
YEAR_PY
)
INTO TABLE "T1"
WHEN ( SEQ_ID = '999' )
FIELDS TERMINATED BY '|'
(SEQ_ID POSITION(1) "HCC_TRL_SEQ.nextval" ,
HCC_COUNT
)
BEGINDATA
000|073|Parkinson¿s and Huntington¿s Diseases|Parkinson¿s, Huntington¿s|0.547|0.532|01012006|12312006|2006|2007
000|073|Parkinson¿s and Huntington¿s Diseases|Parkinson¿s, Huntington¿s|0.547|0.526|01012007|12312007|2007|2008
000|073|Parkinson¿s and Huntington¿s Diseases|Parkinson¿s, Huntington¿s|0.592|0.575|01012008|12312099|2008|2009
While copying from my machine to your portal, the apostrophe (') is replaced with inverted question marks. I don't know why this is happening. (This is what happened with my above data.)
sample output.
==============
(the below is exactly the output I am getting from the production DB)
SELECT description FROM T;
Parkinson¿s and Huntington¿s Diseases
Parkinson¿s and Huntington¿s Diseases
Parkinson¿s and Huntington¿s Diseases
1) The apostrophe (¿) is replaced with inverted question marks in the output. Can you please help me solve this issue?
2) Is this happening because of the mismatch in the character sets?
Many Thanks,
Rajesh.
January 30, 2009 - 12:19 pm UTC
WE8MSWIN1252 = windows characterset (see the MSWIN bit in there) = characterset where those stupid things known as "smart quotes" exist.
WE8ISO8859P1 = Western European ISO standard characterset. Where these so called smart quotes do not exist.
I guess you would have to convince your production environment to abandon their characterset in favor of the windows specific one... Or find a characterset that supports the needs of both (like a unicode one). In either case - you would be asking the production instance to basically "rewrite" itself.
reuse ?
Sokrates, February 05, 2009 - 10:37 am UTC
create table X
organization external
( type oracle_datapump
...
is a fantastic feature !!
Can I specify a "reuse"-parameter to avoid the following:
SQL> create table t
2 organization external
3 ( type oracle_datapump
4 default directory DATA_PUMP_DIR
5 location ('t.dmp')
6 )
7 as select * from dual
8 /
create table t
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11012: file t.dmp in /export/dbtmp2/oracle/datapumpdir already exists
ORA-06512: at "SYS.ORACLE_DATAPUMP", line 19
February 05, 2009 - 11:54 am UTC
this'll work:
ops$tkyte%ORA10GR2> exec utl_file.fremove( 'DATA_PUMP_DIR', 't.dmp' );
PL/SQL procedure successfully completed.
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create table t
2 organization external
3 ( type oracle_datapump
4 default directory DATA_PUMP_DIR
5 location ('t.dmp')
6 )
7 as select * from dual
8 /
Table created.
Load data using oracle_datapump driver
A reader, February 05, 2009 - 3:36 pm UTC
Tom,
I have a couple of questions regarding the data unload/load example that you worked out a couple of posts above. I tested that example and it works fine. However, I used the expdp utility to create a Data Pump export file of certain schemas, and the data load throws an error:
ora-31619: invalid dump file <file name>
The dump file was generated without error as per the log.
My questions are:
a) Will the data load work with any dump file generated using the Data Pump utility, or is it limited to files unloaded using the oracle_datapump driver?
b) What will happen if the Data Pump export file created using expdp contains multiple schemas having the same table names? This is the case with my export dump file: I have table T1 in schemas S1 and S2. If I load data using the oracle_datapump driver, which T1 will be used?
Thanks...
February 05, 2009 - 4:07 pm UTC
You would use CREATE TABLE ... ORGANIZATION EXTERNAL to CREATE the file.
Then you can use the oracle_datapump driver to reload it.
A table created to be read from cannot use a full-blown Data Pump export file - there is a lot of extraneous stuff in there. You create the dmp file using CREATE TABLE ORGANIZATION EXTERNAL AS SELECT.
Load data using oracle_datapump driver
A reader, February 05, 2009 - 3:52 pm UTC
Sorry, I forgot to copy/paste the example. Here it is:
SQL> CREATE TABLE T1
2 (C1 VARCHAR2(25),
3 C2 VARCHAR2(2000),
4 C3 NUMBER(10,0),
5 C4 VARCHAR2(6),
6 C5 DATE
7 )
8 ORGANIZATION EXTERNAL
9 ( TYPE ORACLE_DATAPUMP
10 DEFAULT DIRECTORY DATA_PUMP_DIR
11 LOCATION ( 'EXPDAT01.DMP')
12 );
Table created.
SQL> select count(*) from T1;
select count(*) from T1
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-31619: invalid dump file "d:\exports\ORCL\EXPDAT01.DMP"
ORA-06512: at "SYS.ORACLE_DATAPUMP", line 19
SQL>
The file EXPDAT01.DMP was created using expdp and is a valid export file.
Thanks...
utl_file.fremove
Sokrates, February 06, 2009 - 2:47 am UTC
thanks! I didn't know about that yet.
I too get "ORA-31626: job does not exist"
A reader, March 11, 2009 - 7:32 pm UTC
Hi Tom,
I am facing the same "job does not exist" problem. However, I get this error only when I execute a stored procedure containing the Data Pump API, as shown below.
CREATE OR REPLACE PROCEDURE DM_SCHEMA_REFRESH
AS
v_JOB_HANDLE NUMBER;
v_JOB_STATE VARCHAR2(30);
begin
--Initiating the job
v_JOB_HANDLE:=DBMS_DATAPUMP.open('EXPORT','TABLE',NULL,'DM9');
DBMS_OUTPUT.PUT_LINE('hANDLE IS ' ||v_JOB_HANDLE);
--Specifying the file
DBMS_DATAPUMP.ADD_FILE(v_JOB_HANDLE,'DM_DEV_EXP2.dmp','DAILY_EXP_DMP');
--Specifying the Schema
DBMS_DATAPUMP.METADATA_FILTER(v_JOB_HANDLE,'NAME_LIST','(''BOOKING_FACT'')');
--Starting the job
DBMS_DATAPUMP.START_JOB(v_JOB_HANDLE);
exception
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(SQLERRM);
end;
But if I use the same code in an anonymous block, it executes fine and the file is created. I have all the necessary privileges: Export Full Database, Import Full Database, Execute Any Procedure. I am lost here, and any help is greatly appreciated.
Thanks
N.A
How can I see if table statistics have been imported?
A reader, March 18, 2009 - 10:54 am UTC
Hi Tom,
Oracle 10.2.0.4
I would like to know whether my current table statistics were imported with impdp or calculated by dbms_stats. How can I see this in the data dictionary?
Thanks & Regards,
Markus
How can I see if table statistics have been imported?
A reader, March 30, 2009 - 9:15 am UTC
Hey Tom,
Sorry to bother you, but is there a way to see whether table statistics were imported or calculated?
Regards,
Markus
March 30, 2009 - 5:04 pm UTC
as we don't really care where they came from - it is not tracked as far as I know, and it would not really be possible to do so.
Some of the stats could be imported
Some of them could be gathered
Some of them could be set manually
All at the same time - it is not "all or nothing". I guess I would look at "last analyzed" to see when they were last put into place (but even so, it could still be a mixture).
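A minimal query along the lines Tom suggests:

select table_name, last_analyzed
from   user_tables
order  by last_analyzed nulls first;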
Export only selected tables
A reader, June 15, 2009 - 1:34 pm UTC
Assume I have the following table. I would like to generate the export (pump) dump using the dbms_datapump API only for the tables listed in zv$product_tables.
In this case, I need to export only the tables tab1 and tab2, though this schema may have a few other tables.
How can I do this via metadata_filter?
SQL> Select * from zv$product_tables;
TNAME TVERSION
-------------------- ---------------------------------------
TAB1 5
TAB2 5
SQL> desc zv$product_tables
Name Type Nullable Default Comments
-------- ------------ -------- ------- --------
TNAME VARCHAR2(20) Y
TVERSION INTEGER Y
SQL>
June 15, 2009 - 2:23 pm UTC
you can build a variable that has your list of names
dbms_datapump.metadata_filter(h1,'NAME_EXPR','IN (''MY_TABLE1'', ''MY_TABLE2'')');
replace 'IN (''MY_TABLE1'', ''MY_TABLE2'')'
with a variable that you populate with your list of tables (eg: run a query, find your table names, build a string)
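A sketch of that approach against zv$product_tables (10g-friendly - the string is built with a simple loop; the job and file names are made up):

DECLARE
  l_list   VARCHAR2(32767);
  l_handle NUMBER;
BEGIN
  -- build 'TAB1', 'TAB2', ... from the driving table
  FOR r IN (SELECT tname FROM zv$product_tables) LOOP
    l_list := l_list
           || CASE WHEN l_list IS NOT NULL THEN ', ' END
           || '''' || r.tname || '''';
  END LOOP;

  l_handle := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE',
                                 job_name => 'PRODUCT_TABLES_EXP');
  dbms_datapump.add_file(handle => l_handle, filename => 'product_tables.dmp',
                         directory => 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(l_handle, 'NAME_EXPR', 'IN (' || l_list || ')');
  dbms_datapump.start_job(l_handle);
  dbms_datapump.detach(l_handle);
END;
/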
DBMS_DATAPUMP to load Flat File
Robert, August 20, 2009 - 6:09 pm UTC
Can DBMS_DATAPUMP be used to load flat files ?
August 25, 2009 - 8:43 am UTC
no, external tables with the oracle_loader (basically sqlldr) driver do that.
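For reference, a minimal oracle_loader external table sketch (the file name and columns are made up); the flat file can then be copied into a real table with a plain INSERT:

create table flat_file_ext
( id   number,
  name varchar2(50)
)
organization external
( type oracle_loader
  default directory data_pump_dir
  access parameters
  ( records delimited by newline
    fields terminated by ','
  )
  location ( 'data.csv' )
);

insert /*+ APPEND */ into target_table select * from flat_file_ext;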
Regarding FILESIZE in Datapump
Rajeshwaran, Jeyabal, September 18, 2009 - 1:00 am UTC
;;;
Export: Release 10.2.0.3.0 - Production on Friday, 18 September, 2009 11:19:13
Copyright (c) 2003, 2005, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "HR"."SYS_EXPORT_TABLE_01": userid=hr/********@iradsdb directory=IRADS_PROC_OUT dumpfile=HR_DMP_%U.DMP logfile=hr_imp_log.txt TABLES=EMPLOYEES filesize=102400
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HR"."EMPLOYEES" 15.77 KB 107 rows
Master table "HR"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for HR.SYS_EXPORT_TABLE_01 is:
E:\IRADS\DATA\INTERFACES_IRADS2\IRADSDEV\PROC\OUT\HR_DMP_01.DMP
E:\IRADS\DATA\INTERFACES_IRADS2\IRADSDEV\PROC\OUT\HR_DMP_02.DMP
Job "HR"."SYS_EXPORT_TABLE_01" successfully completed at 11:20:17
Tom,
I specified FILESIZE=102400 (100 KB), but I ended up with two files: HR_DMP_01.DMP at 48 KB and HR_DMP_02.DMP at 80 KB. I expected HR_DMP_01.DMP to be 100 KB and HR_DMP_02.DMP to be 28 KB. Why is it happening like this - why is file 1 smaller than file 2?
September 18, 2009 - 10:42 am UTC
why would it matter? You said "do not exceed 100k in a file", so we did not - we knew we needed two, so we put some in file 1 and some in file 2. There is no assurance of the file sizes - other than "they will be less than 100k".
But why does it matter to you? You asked for files less than 100k; we needed 128k and knew we needed two - neither one had to be exactly 100k. We don't like to break things across the files.
Create Dmp into Network Drive
Samy, October 21, 2009 - 5:42 am UTC
Hi Tom,
I'd like your suggestion for this requirement.
We have 3 servers:
1) Database 10g
2) Windows service with Oracle client
3) File server
Now I want to export a schema with a few tables and place the dmp file on the file server.
I have tried this.
--------------
SQL> Create DIRECTORY datapump1 AS 'C:\DBBckup'; -- LOCAL DRIVE
SQL> GRANT EXP_FULL_DATABASE to scott;
SQL> GRANT READ, WRITE ON DIRECTORY datapump1 to scott;
expdp scott/tiger TABLES=SCOTT.DEPT DIRECTORY=datapump1 DUMPFILE=dept.dmp LOGFILE=dept.log
C:\Documents and Settings\tm-it-319>expdp scott/tiger TABLES=SCOTT.DEPT DIRECTORY=datapump1 DUMPFILE=dept.dmp LOGFILE=dept.log
Export: Release 10.1.0.2.0 - Production on Wednesday, 21 October, 2009 14:04
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SCOTT"."SYS_EXPORT_TABLE_01": scott/******** TABLES=SCOTT.DEPT DIRECTORY=datapump1 DUMPFILE=dept.dmp LOGFILE=dept.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TBL_TABLE_DATA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_01 is:
C:\DBBCKUP\DEPT.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_01" successfully completed at 14:07
Whereas when I tried the same for a network drive mapped to the server, I get an error.
SQL> Create DIRECTORY datapump AS 'K:\DBBckup'; -- NETWORK DRIVE
SQL> GRANT READ, WRITE ON DIRECTORY datapump to scott;
expdp scott/tiger TABLES=SCOTT.DEPT DIRECTORY=datapump DUMPFILE=dept.dmp LOGFILE=dept.log
C:\Documents and Settings\tm-it-319>expdp scott/tiger TABLES=SCOTT.DEPT DIRECTORY=datapump DUMPFILE=dept.dmp LOGFILE=dept.log
Export: Release 10.1.0.2.0 - Production on Wednesday, 21 October, 2009 14:58
Copyright (c) 2003, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 475
ORA-29283: invalid file operation
Is there a way I can put the DMP file into the shared folder through the job?
October 23, 2009 - 11:51 am UTC
sure, because the database runs as a service and windows is NOT unix.
In unix, if you did a mount, everyone would see it - it would be a file system on that server. That is because Unix says "we are a multi-user operating system"
Windows is a single-user operating system that allows for multiple user accounts. You don't really log in multiple users (there are ways using remote terminals - but it really doesn't change anything).
When you logged in, YOU created that share; that share exists for you and your session alone - it doesn't exist after you log out, and it doesn't exist in anyone else's 'session'.
The database, running as a service under another user entirely, cannot see your 'share'.
See support note 144485.1 for directions on how to make a network share visible to the right people on Windows, if your network administrator doesn't know how...
Re: Create Dmp into Network Drive
Samy, November 01, 2009 - 4:31 am UTC
Hi Tom,
Thanks for your reply.
Yes, I have achieved it: expdp creates the backup in a particular folder, then a procedure call copies that file into a table with a BLOB column, and a service queries the table for the dump. It might not be the right way - I am waiting for your suggestion on this case.
I keep reading your articles and all the questions asked and answered. It's like, whenever we have a problem, we say "Uncle Tom is there, isn't he?". I am eager to know: do you read books, or do you just keep doing R&D? It feels good that you have solutions for these questions. A few links do not open - they might be old.
http://asktom.oracle.com/tkyte/Misc/RolesAndProcedures.html
November 09, 2009 - 12:03 pm UTC
see home page for new link to the old files
exp vs expdp
Ronak, March 29, 2010 - 5:06 am UTC
Hi Tom,
I need to import data from dump file using expdp utility but the problem here is that the dump file was created using exp utility. Is there any solution available which can help me out in this.
Thanks in advance..
April 05, 2010 - 9:49 am UTC
... I need to import data from dump file using expdp utility ...
that'll never happen, you don't import with EXPORT...
you probably meant impdp
and the only solution for you will be to use IMP, not IMPDP - since IMP data formats are completely different from IMPDP data formats - the files are not even a tiny bit compatible.
Privileges for a EXPORT user
A reader, May 03, 2010 - 9:21 am UTC
I have created a user to perform export dumps and granted it the appropriate privileges, but I am getting the error specified below.
What other privileges would I need? Thanks in advance.
create user BKUPADMIN
identified by ""
default tablespace SECURETS
temporary tablespace TEMP
profile DEFAULT
password expire
quota unlimited on securets;
-- Grant/Revoke role privileges
grant exp_full_database to BKUPADMIN;
-- Grant/Revoke system privileges
grant create session to BKUPADMIN;
grant create table to BKUPADMIN;
grant export full database to BKUPADMIN;
-- Set the user's default roles
alter user BKUPADMIN
default role none;
expdp bkupadmin/jj1admin$ DUMPFILE=prd_%TEMPTIME% LOGFILE=prd_%TEMPTIME% parallel=4 exclude=statistics directory=DLOAD REUSE_DUMPFILES=y FLASHBACK_TIME=sysdate schemas=USER1,USER2
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31631: privileges are required
ORA-39109: Unprivileged users may not operate upon other users' schemas
May 06, 2010 - 11:57 am UTC
http://docs.oracle.com/docs/cd/E11882_01/server.112/e10701/dp_export.htm#i1007509
A schema export is specified using the SCHEMAS parameter. This is the default export mode. If you have the DATAPUMP_EXP_FULL_DATABASE role, then you can specify a list of schemas, optionally including the schema definitions themselves and also system privilege grants to those schemas. If you do not have the DATAPUMP_EXP_FULL_DATABASE role, you can export only your own schema.
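One detail in the posted script worth a second look: "alter user BKUPADMIN default role none" means the exp_full_database role is not enabled when BKUPADMIN logs in, which by itself could produce exactly this ORA-31631/ORA-39109 pair. A possible fix, assuming the role should be active by default:

-- re-enable granted roles at login
alter user BKUPADMIN default role all;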
balasaheb, May 22, 2010 - 4:00 am UTC
Hi, it's nice, but my issue still persists with:
ORA-39125: Worker unexpected fatal error in KUPW$WORKER.UNLOAD_METADATA while calling DBMS_METADATA.FETCH_XML_CLOB [OBJECT_GRANT:"CNVDBO12"]
ORA-04063: view "SYS.KU$_CLUSTER_VIEW" has errors
Is this issue related to parallelism? I gave PARALLEL=10.
I have enough space in the SYSTEM tablespace - how is SYSTEM related to ORA-39125?
Please reply; it's urgent.
May 24, 2010 - 12:56 pm UTC
it is something for you to call support about.
REMAP_TABLE in 11gR1
Rajeshwaran, Jeyabal, August 17, 2010 - 1:56 pm UTC
Export: Release 11.1.0.6.0 - Production on Wednesday, 18 August, 2010 0:07:18
Copyright (c) 2003, 2007, Oracle. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."SYS_EXPORT_TABLE_02": scott/********@11g directory=IRADS_IN dumpfile=scott_t1.dmp tables=T logfile=export1.log
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCOTT"."T" 6.765 KB 43 rows
Master table "SCOTT"."SYS_EXPORT_TABLE_02" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_TABLE_02 is:
D:\INTERFACES_IRADS2\IRADS2PROD\IN\SCOTT_T1.DMP
Job "SCOTT"."SYS_EXPORT_TABLE_02" successfully completed at 00:12:11
D:\>impdp scott/tiger@11g directory=IRADS_IN dumpfile=SCOTT_T1.DMP tables=scott.T remap_table=scott.T:new_T logfile=imp_log.txt table_exists_action=truncate
Import: Release 11.1.0.6.0 - Production on Wednesday, 18 August, 2010 0:20:21
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SCOTT"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SCOTT"."SYS_IMPORT_TABLE_01": scott/********@11g directory=IRADS_IN dumpfile=SCOTT_T1.DMP tables=scott.T remap_table=scott.T:new_T logfile=imp_log.txt table_exists_action=truncate
Processing object type TABLE_EXPORT/TABLE/TABLE
ORA-39153: Table "SCOTT"."T" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SCOTT"."T" 6.765 KB 43 rows
Job "SCOTT"."SYS_IMPORT_TABLE_01" completed with 1 error(s) at 00:18:13
D:\>
Tom:
Referring to the product documentation link below, I am not able to rename tables during import. Is there something wrong with my approach, Tom? Please help me.
http://download.oracle.com/docs/cd/B28359_01/server.111/b28319/dp_import.htm#BABIGHCC
December 22, 2011 - 9:20 am UTC
dblink vs datapump
A reader, August 30, 2010 - 9:34 am UTC
Transferring a huge amount of data - around 500 GB - from one database to another (for archiving): which approach is faster? Assume we don't have the option of transportable tablespaces.
1. DBLink
a) Create Database link on Archive DB pointing to Prod
b) Pull the data from Prod using DBLink (Enabling Parallel)
c) Truncate Partition in Prod
2. Data PUMP
Export data from Prod to DUMP file
Import from the dump file into the archive DB
Which one is going to be faster and why?
September 09, 2010 - 9:29 am UTC
the only true answer will come from you benchmarking in your environment, with your data, on your hardware.
You forgot options three, four, and so on:
3) data pump - export/import over a dblink without creating a disk file at all.
4) put the tables you want to archive in a tablespace or set of tablespaces (so they are self-contained). Use tablespace transports to move all of the data without unloading or reloading it at all.
probably #4 will be the most efficient. You'll just copy datafiles.
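A rough outline of option 4, assuming the archive candidates live in a self-contained tablespace called ARCH_TS and the platforms are compatible:

-- on the source database:
alter tablespace arch_ts read only;

$ expdp system/manager directory=dp_dir transport_tablespaces=ARCH_TS dumpfile=arch_meta.dmp

-- copy the ARCH_TS datafiles to the target server, then on the target:
$ impdp system/manager directory=dp_dir dumpfile=arch_meta.dmp transport_datafiles='/u01/oradata/arch_ts01.dbf'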
content=metadata_only
thick head, September 28, 2010 - 3:36 pm UTC
Sir
Can you tell me how to write content=metadata_only using the dbms_metadata PL/SQL utility (as you showed above for exporting a schema's data)? I only need a metadata extract on a RAC environment and prefer invoking it from PL/SQL and the scheduler to keep things simple.
I'm not able to find this in the PL/SQL reference guide for dbms_datapump.
September 28, 2010 - 3:41 pm UTC
I'm not sure what you are asking for - dbms_metadata only retrieves metadata - never the data.
and then the reference to dbms_datapump?
I'm not sure what you are attempting to do...
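In case dbms_datapump was the package actually intended: the CONTENT=METADATA_ONLY behavior maps to the INCLUDE_ROWS data filter there. A sketch (the file and directory names are made up):

DECLARE
  h NUMBER;
BEGIN
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  dbms_datapump.add_file(handle => h, filename => 'meta_only.dmp',
                         directory => 'DATA_PUMP_DIR');
  dbms_datapump.data_filter(handle => h, name => 'INCLUDE_ROWS', value => 0); -- no row data
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
END;
/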
how to skip collect object statistics in expdp/impdp
Pauline, November 04, 2010 - 2:47 pm UTC
Tom,
In the previous Oracle exp/imp utility, there was a STATISTICS parameter to control how statistics were handled during export/import. In the Oracle expdp/impdp utility, the ESTIMATE parameter sounds like it is for calculating expdp/impdp job estimates, with the keywords BLOCKS and STATISTICS. If we want to skip object statistics during expdp/impdp, what should we do?
Thanks.
November 05, 2010 - 5:21 am UTC
estimate has to do with estimating the size of the export - not statistics.
statistics are always exported/imported with data pump.
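(For what it's worth, expdp/impdp do accept EXCLUDE=STATISTICS - the command quoted in the "Privileges for a EXPORT user" post earlier in this thread uses it - which leaves the statistics object type out of the job, e.g.:

expdp scott/tiger schemas=scott exclude=statistics directory=DATA_PUMP_DIR dumpfile=scott.dmp
)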
equivalent of imp's grants in impdp
A reader, November 05, 2010 - 8:23 am UTC
Is there an equivalent of exp/imp's grants=n in expdp/impdp?
November 05, 2010 - 9:28 am UTC
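The reply body is missing here; for reference, the Data Pump counterpart appears to be EXCLUDE=GRANT (object grants are an excludable object type), e.g.:

expdp scott/tiger schemas=scott exclude=grant directory=DATA_PUMP_DIR dumpfile=scott.dmp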
..expdp
Manoj Kaparwan, November 15, 2010 - 10:53 am UTC
Tom,
Thanks for your time.
The following worked well using exp:
---------------------------------
mknod exp_pipe p
gzip <exp_pipe > exp.dmp.gz &
exp user/pass owner=X file=exp_pipe log=exp.log
----------------------------------
but if we use expdp, it is not able to use the named pipe.
Any help, or an alternative way to use expdp in which compression is performed in the background (in parallel), would be helpful.
thanks
November 15, 2010 - 11:02 am UTC
compression was made available to expdp with the advanced compression option in 11g.
..expdp
Manoj Kaparwan, November 15, 2010 - 11:58 am UTC
Thank you so much, Tom, for your quick reply.
So as long as we are on 10g, we will continue to use exp when we want to use a named pipe.
Sharing my work below:
-----
[oracle@host-rac1 dump]$ mknod exp_pipe p
[oracle@host-rac1 dump]$
[oracle@host-rac1 dump]$ more exp.sh
gzip <exp_pipe >exp_man.dmp.gz &
exp man/pass file=exp_pipe log=exp.log owner=MAN
[oracle@host-rac1 dump]$ sh exp.sh
Export: Release 10.2.0.1.0 - Production on Mon Nov 15 17:33:34 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user MAN
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user MAN
About to export MAN's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export MAN's tables via Conventional Path ...
. . exporting table A 3 rows exported
. . exporting table B 2 rows exported
. . exporting table C 3 rows exported
. . exporting table CUSTOMERS 55500 rows exported
. . exporting table EMP 14 rows exported
. . exporting table LOOKUP 3 rows exported
. . exporting table P_OLD 2 rows exported
. . exporting table SESS_EVENT
. . exporting table T 99525 rows exported
. . exporting table T1 100 rows exported
. . exporting table T2 49747 rows exported
. . exporting table T_A 2 rows exported
. . exporting table T_SYSDATE 6400 rows exported
. . exporting table T_TYPE 0 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
[oracle@host-rac1 dump]$
-rw-r--r-- 1 oracle oinstall 3360124 Nov 15 17:34 exp_man.dmp.gz
-----using expdp----
man@host-rac1> create or replace directory dump_dir as '/home/oracle/sqlstuffs/dump';
Directory created.
man@host-rac1>
[oracle@host-rac1 dump]$ more expdp.sh
gzip <exp_pipe >expdp_man.dmp.gz &
expdp man/pass directory=dump_dir dumpfile=exp_pipe logfile=exp.log schemas=MAN
[oracle@host-rac1 dump]$ sh expdp.sh
Export: Release 10.2.0.1.0 - Production on Monday, 15 November, 2010 17:40:59
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
Starting "MAN"."SYS_EXPORT_SCHEMA_01": man/******** directory=dump_dir dumpfile=exp_pipe logfile=exp.log schemas=MAN
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 118.4 MB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "MAN"."T" 77.86 MB 99525 rows
. . exported "MAN"."CUSTOMERS" 9.850 MB 55500 rows
. . exported "MAN"."T_SYSDATE" 805.6 KB 6400 rows
. . exported "MAN"."T2" 665.1 KB 49747 rows
. . exported "MAN"."T1" 28.18 KB 100 rows
. . exported "MAN"."A" 4.921 KB 3 rows
. . exported "MAN"."B" 4.914 KB 2 rows
. . exported "MAN"."C" 4.937 KB 3 rows
. . exported "MAN"."EMP" 7.812 KB 14 rows
. . exported "MAN"."LOOKUP" 5.531 KB 3 rows
. . exported "MAN"."P_OLD" 4.929 KB 2 rows
. . exported "MAN"."T_A" 5.515 KB 2 rows
. . exported "MAN"."T_TYPE" 0 KB 0 rows
Master table "MAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for MAN.SYS_EXPORT_SCHEMA_01 is:
/home/oracle/sqlstuffs/dump/exp_pipe.dmp
Job "MAN"."SYS_EXPORT_SCHEMA_01" successfully completed at 17:42:40
[oracle@host-rac1 dump]$
-rw-r----- 1 oracle oinstall 93843456 Nov 15 17:42 exp_pipe.dmp
-----------
questions
a) The reason I am interested in expdp over exp is this: we want to exclude table "T" from the export dump but at the same time export everything else in the schema. With exp, when we use TABLES=(list of tables excluding "T"), we don't get the schema objects other than tables. Any suggestion on how to achieve this using exp?
b) The following line
... . exporting table SESS_EVENT
is missing in the expdp output. SESS_EVENT is a global temporary table here. Is the reason for discarding it in expdp just that it is a temporary table?
November 16, 2010 - 3:34 am UTC
named pipes will not work with expdp, ever. It runs in the server, not in the OS. export (exp) is a client program that runs SQL, reads data out of the database, and writes it to a file. datapump export (expdp) is a client program that runs a stored procedure; the stored procedure exports the data on the server.
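As for question (a): expdp can exclude a single table while exporting everything else via the EXCLUDE parameter. A sketch (how the double quotes must be escaped varies by shell):

expdp man/pass directory=dump_dir dumpfile=man.dmp schemas=MAN exclude=TABLE:\"= 'T'\"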
..expdp
Manoj Kaparwan, November 16, 2010 - 12:41 pm UTC
Thanks Tom.
impdp remap_tablespace...
Craig, January 20, 2011 - 1:48 pm UTC
Tom,
I've been searching the Oracle documentation and online to see if it's possible to remap objects to different tablespaces based on object type, with no luck so far. If I have a .dmp file taken from a schema where all indexes and tables are in the same tablespace, is it possible with impdp to specify that indexes go into IND_TS and tables into TAB_TS? If so, which parameters would be involved?
Best Regards.
Impdp Running Slow
jatin, March 03, 2011 - 7:31 am UTC
Hi Tom
I'm migrating data in some schemas from Oracle version 11.1.0.7 to 10.2.0.4 using a VERSION clause and PARALLEL=2, and it's running too slow and using too much temp:
select username, sum(blocks)*16/1024 "mb" from v$sort_usage
group by username;
USERNAME mb
------------------------------ ----------
EMXSMGR57 4600
ORACLE 2
v$sort_usage shows:
USERNAME USER SESSION_ADDR SESSION_NUM SQLADDR SQLHASH SQL_ID TABLESPACE CONTENTS SEGTYPE SEGFILE# SEGBLK# EXTENTS BLOCKS SEGRFNO#
------------------------------ ------------------------------ ---------------- ----------- ---------------- ---------- ------------- ------------------------------- --------- --------- ---------- ---------- ---------- ---------- ----------
ORACLE ORACLE 0000000406BC9110 645 00000003C5E17428 3732391352 7wn3wubg7gjds TEMP01 TEMPORARY LOB_DATA 201 158021 1 64 1
ORACLE ORACLE 0000000406BC10A0 1763 00 0 TEMP01 TEMPORARY LOB_DATA 201 158149 1 64 1
EMXSMGR57 EMXSMGR57 00000003C73B9030 1624 00 0 TEMP01 TEMPORARY LOB_DATA 202 389 1 64 2
EMXSMGR57 EMXSMGR57 00000003C73B9030 1624 00 0 TEMP01 TEMPORARY SORT 202 453 11 704 2
EMXSMGR57 EMXSMGR57 00000003C73BBB00 231 00 0 TEMP01 TEMPORARY SORT 202 102021 2386 152704 2
Although no index segment exceeds about 6 GB:

OWNER      SEGMENT_NAME       SEGMENT_TYPE BYTES
---------- ------------------ ------------ ----------
EMXSMGR57  IX_STATENTRIES_01  INDEX         293601280
EMXSMGR57  IX_ALERTS_03       INDEX        2382364672
EMXSMGR57  SYS_C0042690       INDEX        2486173696
EMXSMGR57  IX_ALERTS_02       INDEX        4232052736
EMXSMGR57  IX_ALERTS_01       INDEX        5398069248

640 rows selected.
Also, the waits show:
SQL> select event, time_waited, max_wait, total_waits
  2  from v$session_event
  3  where sid = 535;
EVENT TIME_WAITED MAX_WAIT TOTAL_WAITS
---------------------------------------------------------------- ----------- -------- -----------
wait for unread message on broadcast channel 172820 100 2026
os thread startup 18 9 2
control file sequential read 0 0 7
db file sequential read 177 49 226
db file scattered read 3 1 4
direct path read 15 2 51
And alert has:
kupprdp: worker process DW01 started with worker id=1, pid=27, OS id=10148
to execute - SYS.KUPW$WORKER.MAIN('SYS_IMPORT_SCHEMA_02', 'ORACLE');
kupprdp: worker process DW02 started with worker id=2, pid=20, OS id=10253
to execute - SYS.KUPW$WORKER.MAIN('SYS_IMPORT_SCHEMA_02', 'ORACLE');
Thu Mar 3 07:14:18 2011
Thread 1 advanced to log sequence 11181 (LGWR switch)
Current log# 3 seq# 11181 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo03a
Current log# 3 seq# 11181 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo03b
Thu Mar 3 07:14:36 2011
Thread 1 advanced to log sequence 11182 (LGWR switch)
Current log# 1 seq# 11182 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo01a
Current log# 1 seq# 11182 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo01b
Thu Mar 3 07:14:52 2011
Thread 1 advanced to log sequence 11183 (LGWR switch)
Current log# 2 seq# 11183 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo02a
Current log# 2 seq# 11183 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo02b
Thu Mar 3 07:15:09 2011
Thread 1 advanced to log sequence 11184 (LGWR switch)
Current log# 3 seq# 11184 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo03a
Current log# 3 seq# 11184 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo03b
Thu Mar 3 07:15:28 2011
Thread 1 advanced to log sequence 11185 (LGWR switch)
Current log# 1 seq# 11185 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo01a
Current log# 1 seq# 11185 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo01b
Thu Mar 3 07:16:06 2011
Thread 1 advanced to log sequence 11186 (LGWR switch)
Current log# 2 seq# 11186 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo02a
Current log# 2 seq# 11186 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo02b
Thu Mar 3 07:16:59 2011
Thread 1 advanced to log sequence 11187 (LGWR switch)
Current log# 3 seq# 11187 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo03a
Current log# 3 seq# 11187 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo03b
Thu Mar 3 07:17:34 2011
Thread 1 advanced to log sequence 11188 (LGWR switch)
Current log# 1 seq# 11188 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo01a
Current log# 1 seq# 11188 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo01b
Thu Mar 3 07:18:03 2011
Thread 1 advanced to log sequence 11189 (LGWR switch)
Current log# 2 seq# 11189 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo02a
Current log# 2 seq# 11189 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo02b
Thu Mar 3 07:18:26 2011
Thread 1 advanced to log sequence 11190 (LGWR switch)
Current log# 3 seq# 11190 mem# 0: /oradata/EMXSCOR4/data001/oralink/redo03a
Current log# 3 seq# 11190 mem# 1: /oradata/EMXSCOR4/data001/oralink/redo03b
I've 3 questions:
1. What can I look into to understand my excessive temp usage? I feel 4.5 GB of temp usage for at most 6 GB of index is on the high side.
2. What is "wait for unread message on broadcast channel", and how should I relate it to my impdp session?
3. Why are my redo log switches suddenly so aggressive when the indexes start building - can I do anything to counter this behaviour?
Thanks Much!
March 03, 2011 - 8:08 am UTC
1) why do you feel that way? If you have a 6gb index - you'll have - well, about 6gb of stuff to sort - it is unlikely to fit into memory so it'll go into temp and we might have to do multi-pass operations on it. I'd expect temp usage to be pretty heavy. Why do you think it should be tiny? or smaller even?
2) it probably doesn't relate to your impdp session. It is related to AQ (advanced queues). It just means that a queue consumer is waiting to read a message but cannot find any it is authorized to see.
3) your database is in archivelog mode - everything will be logged. If this is a new database (not an existing one with important data already), you can disable archivelog mode while loading.
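A sketch of that switch (it needs a clean shutdown; remember to re-enable archivelog mode and take a fresh backup once the load finishes):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;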
Details in dump file
Jayadevan, June 17, 2011 - 6:24 am UTC
Hello Tom,
Is it possible to list the schemas available in a dump file? We are working on a UI for expdp/impdp and would like to list the schemas in a dump file so that we can let users remap them.
Thanks,
Jayadevan
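No reply was posted here; one hedged approach is to have impdp write out the DDL without importing anything, via the SQLFILE parameter, and then scan that script for schema references, e.g.:

impdp system/manager directory=DATA_PUMP_DIR dumpfile=full.dmp full=y sqlfile=contents.sql

The CREATE USER statements and the owners in the qualified object names in contents.sql show which schemas the dump contains.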
DATA_PUMP EXPORT
Dilip Ganeshan, July 22, 2011 - 6:06 am UTC
Hi Tom,
I have a range-partitioned table with values less than 100 in PART_1, values less than 200 in PART_2, and values less than 300 in PART_3. I want to export the rows whose values are in (10, 20, 30). I used the Oracle Data Pump utility in TABLE mode to do the export.
I add the filter as
DBMS_DATAPUMP.data_filter (handle => v_dp_handle,
NAME => 'SUBQUERY',
VALUE => 'WHERE values IN (10,20,30)' );
Question :
When I checked the log file, I found that it is scanning all the partitions (PART_1, PART_2, PART_3) instead of only PART_1. Can you please let me know the reason behind this and how to overcome this situation?
Thanks and Regards,
Dilip Kuttuva Ganeshan.
July 22, 2011 - 2:40 pm UTC
show me the information you have that leads you to believe it is scanning all partitions.
DATA_PUMP scanning all partitions
Dilip Kuttuva Ganeshan, July 26, 2011 - 2:13 am UTC
Hi Tom,
I have a range-partitioned table with values less than 100 in PART_1, values less than 200 in PART_2, and values less than 300 in PART_3. I want to export the rows whose values are in (10, 20, 30). I used the Oracle Data Pump utility in TABLE mode to do the export.
I add the filter as
DBMS_DATAPUMP.data_filter (handle => v_dp_handle,
NAME => 'SUBQUERY',
VALUE => 'WHERE values IN (10,20,30)' );
Question:
When I checked the log file, I found that it is scanning all the partitions (PART_1, PART_2, PART_3) instead of only PART_1. Can you please let me know the reason behind this and how to overcome this situation?
Starting "SEI"."TEST_62":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 24 MB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "SEI"."TEST":"PART_1" 119.4 KB 60 rows
. . exported "SEI"."TEST":"PART_2" 101.8 KB 0 rows
. . exported "SEI"."TEST":"PART_3" 101.8 KB 0 rows
. . exported "SEI"."TEST":"PART_0" 0 KB 0 rows
Master table "SEI"."TEST_62" successfully loaded/unloaded
******************************************************************************
July 27, 2011 - 8:15 pm UTC
first of all, so what, it is tiny.
second of all, this doesn't show it did anything to the other partitions, it knows the size from the dictionary. Trace it, set up a logon trigger that enables trace and prove it is reading all of that.
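A minimal sketch of such a trigger (assuming the export runs as user SEI; drop the trigger and run tkprof on the resulting trace file when done):

create or replace trigger trace_datapump_logon
after logon on database
begin
  if user = 'SEI'
  then
     -- level 12 = binds and waits; the trace will show exactly which
     -- partitions were actually read
     execute immediate
       'alter session set events ''10046 trace name context forever, level 12''';
  end if;
end;
/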
estimate statistics or blocks
Daniel, September 28, 2011 - 10:03 am UTC
Hi Tom,
First thank you for all your efforts. Your contribution is very much appreciated.
I am trying to migrate 6TB database from a server to another using streams and datapump.
I am using datapump impdp with network_link option, because I don't have enough intermediate space available and also I need to use flashback_scn for consistency.
Everything is working, but very, very slowly. I started the import 30 hours ago and it's still in the estimation phase.
From the import I excluded system users, some other users and table statistics. I am using parallel 16. On the source I have 8 processors, on the destination 16.
My question is: if I change the estimate parameter from the default one (which I am using - BLOCKS) to STATISTICS, would it be faster?
At the current speed the estimation part alone would take about 10 days, so the whole job would probably finish in 20 days at least. After the import, streams will have to mine the archivelogs (for which I will need disk space) for another couple of days.
September 28, 2011 - 12:27 pm UTC
how good is your IO system there. How many megabits do you have to get the data from disk to the server (assuming some sort of SAN/NAS, not direct attach storage).
how many megabits do you have to get the data over your network.
how many megabits do you have to write the data back out.
how many megabits do you have to read it all back in for each index you have to create?
I'm more concerned about that.
Why wouldn't you just transport this data or if the platforms are the same just restore it???
Daniel, September 29, 2011 - 1:30 am UTC
The speed is relatively ok. I did a test before starting the import on a small user (around 200gb) and I estimated around 4 days for the import to finish over the db link (to the new server).
Unfortunately the test was not done using the flashback_scn option, so the import wasn't consistent (I didn't need it to be at that time - I just wanted to test the speed of the disk/network).
If I use streams to do the migration from Linux to AIX (from a 10.2.0.1 to a 10.2.0.5 database) I wouldn't require much downtime. This is a production database.
I would require only the time needed to switch the application to point to the new database - or not even that, since I can switch the IPs between the servers. Either way, the downtime from the db point of view would be somewhere around 5 minutes.
If I use transportable tablespaces (which I might, if I cannot use import), that means downtime, and the downtime is quite big. Plus, with transportable tablespaces I have to create the users and grant them privileges on objects that don't exist yet, which is kind of a headache.
This is why I was wondering if the estimate process can be sped up by using statistics instead of blocks.
Do you know if that would help?
Thank you,
Daniel
September 29, 2011 - 7:03 am UTC
why would the downtime for a transport be "big"? If you have the IO capabilities - it wouldn't be big, if you don't - everything is going to be slow slow slow slow slow... and take a long time to boot (do the math - look at your effective transfer rates and start dividing..)
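For reference, a cross-platform transport is roughly this (a sketch only - Linux is little-endian and AIX big-endian, so an RMAN convert is required; the tablespace, platform and path names are illustrative):

SQL> alter tablespace app_data read only;

$ expdp system directory=DPUMP_DIR dumpfile=tts.dmp transport_tablespaces=APP_DATA

RMAN> convert tablespace app_data
2>      to platform 'AIX-Based Systems (64-bit)'
2>      format '/stage/%U';

-- copy the converted files and tts.dmp to the AIX host, then:
$ impdp system directory=DPUMP_DIR dumpfile=tts.dmp transport_datafiles='/oradata/app_data01.dbf'

The tablespaces only need to stay read only while the files are copied - with decent IO that window is the copy time, not days.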
Daniel, September 29, 2011 - 4:29 am UTC
The import crashed with a "not enough rollback segment" error, although undo_retention is set to 15 days.
Anyway, I don't get it. I checked the activity on the source database.
Oracle is scanning everything for the estimation, and then does it all again to get the data. That doesn't make much sense.
Here is the explain plan for the query SELECT /*+ NESTED_TABLE_GET_REFS */ 0
FROM "OWNER"."STATEMENTS" "STATEMENTS":
OPERATION         OPTIONS       OPTIMIZER  ID  COST   CARDINALITY  OTHER_TAG
SELECT STATEMENT                ALL_ROWS    0  49086
PX COORDINATOR                              1
PX SEND           QC (RANDOM)               2  49086  75638998     PARALLEL_TO_SERIAL
PX BLOCK          ITERATOR                  3  49086  75638998     PARALLEL_COMBINED_WITH_CHILD
TABLE ACCESS      FULL                      4  49086  75638998     PARALLEL_COMBINED_WITH_PARENT
I started the import with estimate=statistics, but it seems that there is no improvement.
smane39@gmail.com, December 21, 2011 - 4:02 pm UTC
procedure p_expdp is in SCHEMA_A
I am trying to export a table using SCHEMA_A. The table to be exported is in SCHEMA_B. The table details are stored in SCHEMA_A.TEST_DETAILS.
I am using a cursor to loop through the table SCHEMA_A.TEST_DETAILS in procedure p_expdp.
I have the EXP_FULL_DATABASE role assigned.
I get an "object not found" error:
ORA-39166: Object ('SCHEMA_B.TEST_DETAILS') was not found.
ORA-31655: no data or metadata objects selected for job
Can you please tell me what is wrong in my procedure?
CREATE TABLE SCHEMA_A.TEST_DETAILS
(
DT_KEY NUMBER NOT NULL,
OBJECT_TYPE VARCHAR2(100 BYTE) NOT NULL,
OWNER_NAME VARCHAR2(30 BYTE) NOT NULL,
TABLE_NAME VARCHAR2(30 BYTE) NOT NULL,
DERIEVED_TABLE_NAME VARCHAR2(500 BYTE) NOT NULL,
ARCHIVE_FLAG VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL,
ARCHIVE_STATUS VARCHAR2(20 BYTE) DEFAULT 'NOT ARCHIVED' NOT NULL,
DMP_FILE VARCHAR2(100 BYTE),
LOG_FILE VARCHAR2(100 BYTE),
)
INSERT INTO SCHEMA_A.TEST_DETAILS
(DT_KEY,OBJECT_TYPE,OWNER_NAME,TABLE_NAME,DERIEVED_TABLE_NAME,ARCHIVE_FLAG,ARCHIVE_STATUS,DMP_FILE,LOG_FILE )
values
(20111219,'TABLE','SCHEMA_B','WEATHER','WEATHER_20111219','N','NOT ARCHIVED','weather_20111219.dmp','weather_20111219.log')
PROCEDURE p_expdp(p_owner IN VARCHAR2
,p_table_name IN VARCHAR2
)
IS
l_datapump_handle NUMBER;
l_datapump_dir VARCHAR2(20) := 'DATAPUMP_DIR';
l_quote varchar2(1) := chr(39); -- single quote
l_table_name VARCHAR2(100);
CURSOR cur_test_DETAIL_exp
IS
SELECT DT_KEY
,OWNER_NAME
,TABLE_NAME
,DERIEVED_TABLE_NAME
,ARCHIVE_FLAG
,ARCHIVE_STATUS
,DMP_FILE
,LOG_FILE
FROM schema_A.test_DETAILS
WHERE test_DETAIL.OWNER_NAME = p_owner
and test_DETAIL.DERIEVED_TABLE_NAME = p_table_name
and test_DETAIL.ARCHIVE_FLAG = 'Y'
and test_DETAIL.ARCHIVE_STATUS = 'NOT ARCHIVED';
BEGIN
FOR rec_test_DETAIL_exp in cur_test_DETAIL_exp
LOOP
l_table_name := p_owner||'.'||rec_archive_detail_exp.DERIEVED_TABLE_NAME;
l_datapump_handle := dbms_datapump.open(operation => 'EXPORT'
,job_mode =>'TABLE'
,job_name => rec_archive_detail_exp.DERIEVED_TABLE_NAME||'_exp');
dbms_datapump.add_file(handle => l_datapump_handle
,filename => rec_archive_detail_exp.dmp_file
,directory => l_datapump_dir);
dbms_datapump.add_file(handle => l_datapump_handle
,filename => rec_archive_detail_exp.log_file
,directory => l_datapump_dir
,filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
dbms_datapump.metadata_filter( handle => l_datapump_handle,
name => 'NAME_LIST',
value => '('''||l_table_name||''')'
);
dbms_datapump.start_job(handle => l_datapump_handle);
dbms_datapump.detach(handle => l_datapump_handle);
END LOOP;
END;
December 21, 2011 - 4:25 pm UTC
what did you pass as inputs to this procedure.
where is the create for the necessary schema_b objects?
I don't see how it could be complaining about a table named TEST_DETAILS since you never reference it - you only reference weather_XXXXXXXX
smane39@gmail.com, December 21, 2011 - 4:29 pm UTC
Apologies the error is -
ORA-39166: Object ('SCHEMA_B.WEATHER_20111219') was not found.
ORA-31655: no data or metadata objects selected for job
executing procedure as shown below -
exec SCHEMA_A.p_expdp('SCHEMA_B','WEATHER_20111219');
smane39@gmail.com, December 21, 2011 - 4:31 pm UTC
i forgot to include the create script for SCHEMA_B table -
create table schmema_b.WEATHER_20111219
(id number
temp varchar2(10));
insert into schmema_b.WEATHER_20111219
values
(1,'98F');
December 22, 2011 - 9:49 am UTC
well, after fixing the three errors in this script...
and the ten errors in the prior script....
quick rant: why or how can that happen? If I were to post on a forum, looking for information - asking someone for help - I would make darn sure that the stuff I posted was correct. I would have run it, I would have tested it. So - why do I get so many - so so so many - examples that contain typos, missing commas, bad data??? I don't get it....
Here is your complete example with the minimal set of privileges necessary for both schemas:
schema_a%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> drop user schema_a cascade;
User dropped.
ops$tkyte%ORA11GR2> drop user schema_b cascade;
User dropped.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create user schema_a identified by schema_a default tablespace users quota unlimited on users;
User created.
ops$tkyte%ORA11GR2> grant create session, create procedure, create table, exp_full_database to schema_a;
Grant succeeded.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create user schema_b identified by schema_b default tablespace users quota unlimited on users;
User created.
ops$tkyte%ORA11GR2> grant create session, create table to schema_b;
Grant succeeded.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace directory datapump_dir as '/tmp'
2 /
Directory created.
ops$tkyte%ORA11GR2> grant read on directory datapump_dir to schema_a;
Grant succeeded.
ops$tkyte%ORA11GR2> grant write on directory datapump_dir to schema_a;
Grant succeeded.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> connect schema_b/schema_b;
Connected.
schema_b%ORA11GR2> create table WEATHER_20111219 (id number, temp varchar2(10));
Table created.
schema_b%ORA11GR2> insert into WEATHER_20111219 values (1,'98F');
1 row created.
schema_b%ORA11GR2>
schema_b%ORA11GR2> connect schema_a/schema_a;
Connected.
schema_a%ORA11GR2>
schema_a%ORA11GR2> CREATE TABLE SCHEMA_A.TEST_DETAILS
2 (
3 DT_KEY NUMBER NOT NULL,
4 OBJECT_TYPE VARCHAR2(100 BYTE) NOT NULL,
5 OWNER_NAME VARCHAR2(30 BYTE) NOT NULL,
6 TABLE_NAME VARCHAR2(30 BYTE) NOT NULL,
7 DERIEVED_TABLE_NAME VARCHAR2(500 BYTE) NOT NULL,
8 ARCHIVE_FLAG VARCHAR2(1 BYTE) DEFAULT 'N' NOT NULL,
9 ARCHIVE_STATUS VARCHAR2(20 BYTE) DEFAULT 'NOT ARCHIVED' NOT NULL,
10 DMP_FILE VARCHAR2(100 BYTE),
11 LOG_FILE VARCHAR2(100 BYTE)
12 );
Table created.
schema_a%ORA11GR2>
schema_a%ORA11GR2>
schema_a%ORA11GR2> INSERT INTO SCHEMA_A.TEST_DETAILS
2 (DT_KEY,OBJECT_TYPE,OWNER_NAME,TABLE_NAME,DERIEVED_TABLE_NAME,ARCHIVE_FLAG,ARCHIVE_STATUS,DMP_FILE,LOG_FILE )
3 values
4 (20111219,'TABLE','SCHEMA_B','WEATHER','WEATHER_20111219','Y','NOT ARCHIVED','weather_20111219.dmp','weather_20111219.log');
1 row created.
schema_a%ORA11GR2>
schema_a%ORA11GR2>
schema_a%ORA11GR2>
schema_a%ORA11GR2> create or replace
2 PROCEDURE p_expdp(p_owner IN VARCHAR2
3 ,p_table_name IN VARCHAR2
4 )
5 IS
6 l_datapump_handle NUMBER;
7 l_datapump_dir VARCHAR2(20) := 'DATAPUMP_DIR';
8 l_quote varchar2(1) := chr(39); -- single quote
9 l_table_name VARCHAR2(100);
10
11 CURSOR cur_test_DETAIL_exp
12 IS
13 SELECT DT_KEY
14 ,OWNER_NAME
15 ,TABLE_NAME
16 ,DERIEVED_TABLE_NAME
17 ,ARCHIVE_FLAG
18 ,ARCHIVE_STATUS
19 ,DMP_FILE
20 ,LOG_FILE
21 FROM schema_A.test_DETAILS
22 WHERE test_DETAILs.OWNER_NAME = p_owner
23 and test_DETAILs.DERIEVED_TABLE_NAME = p_table_name
24 and test_DETAILs.ARCHIVE_FLAG = 'Y'
25 and test_DETAILs.ARCHIVE_STATUS = 'NOT ARCHIVED';
26
27
28 BEGIN
29 FOR rec_test_DETAIL_exp in cur_test_DETAIL_exp
30 LOOP
31 l_datapump_handle := dbms_datapump.open(operation => 'EXPORT'
32 ,job_mode =>'TABLE'
33 ,job_name => rec_test_DETAIL_exp.DERIEVED_TABLE_NAME||'_exp');
34
35 dbms_datapump.add_file(handle => l_datapump_handle
36 ,filename => rec_test_DETAIL_exp.dmp_file
37 ,directory => l_datapump_dir);
38
39
40 dbms_datapump.add_file(handle => l_datapump_handle
41 ,filename => rec_test_DETAIL_exp.log_file
42 ,directory => l_datapump_dir
43 ,filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
44
45
46 dbms_datapump.metadata_filter( handle => l_datapump_handle,
47 name => 'SCHEMA_EXPR',
48 value => ' in (''' || p_owner || ''')' );
49
50 dbms_datapump.metadata_filter( handle => l_datapump_handle,
51 name => 'NAME_EXPR',
52 value => 'in ('''|| rec_test_detail_exp.derieved_table_name ||''')' );
53
54 dbms_datapump.start_job(handle => l_datapump_handle);
55 dbms_datapump.detach(handle => l_datapump_handle);
56
57 END LOOP;
58 END;
59 /
Procedure created.
schema_a%ORA11GR2> exec utl_file.fremove( 'DATAPUMP_DIR', 'weather_20111219.dmp' );
PL/SQL procedure successfully completed.
schema_a%ORA11GR2> exec utl_file.fremove( 'DATAPUMP_DIR', 'weather_20111219.log' );
PL/SQL procedure successfully completed.
schema_a%ORA11GR2> !ls -ltr /tmp/weather*
ls: /tmp/weather*: No such file or directory
schema_a%ORA11GR2> exec p_expdp( 'SCHEMA_B', 'WEATHER_20111219' )
PL/SQL procedure successfully completed.
schema_a%ORA11GR2> pause
schema_a%ORA11GR2> !ls -ltr /tmp/weather*
-rw-rw-r-- 1 ora11gr2 ora11gr2 622 Dec 22 10:45 /tmp/weather_20111219.log
-rw-rw---- 1 ora11gr2 ora11gr2 86016 Dec 22 10:45 /tmp/weather_20111219.dmp
schema_a%ORA11GR2> !cat /tmp/weather_20111219.log
Starting "SCHEMA_A"."WEATHER_20111219_exp":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SCHEMA_B"."WEATHER_20111219" 5.429 KB 1 rows
Master table "SCHEMA_A"."WEATHER_20111219_exp" successfully loaded/unloaded
******************************************************************************
Dump file set for SCHEMA_A.WEATHER_20111219_exp is:
/tmp/weather_20111219.dmp
Job "SCHEMA_A"."WEATHER_20111219_exp" successfully completed at 10:45:24
Reader, December 22, 2011 - 12:22 pm UTC
Tom,
Thank you. Apologies - I replaced the real table names and made mistakes. I will make sure to post the right scripts from next time onwards.
Incremental export/import for large tables
Nikhilesh, January 11, 2012 - 4:38 am UTC
Dear Tom,
In one of our project we maintain TEST database and PRODUCTION database is maintained by some other company but we have access to it.
To sync the TEST DB with PROD we export important tables from the PROD DB and import them into TEST using the EXP/IMP utilities.
But it has become too time-consuming to export and import very large tables (170 million rows), even though the changes are just 20%. Can we use EXPDP and IMPDP to make it incremental, i.e. export just the changed data and import just the changed data?
Also I would be grateful if you could suggest a non-dba approach to implement it.
Thanks in advance.
January 11, 2012 - 9:39 am UTC
Incremental exports export any TABLE (the entire table, the whole thing) that has had any updates since the last export.
It would do nothing for you.
why not just take your backup of production and restore it. It would
a) give you a realistic database to test with, much much much better than the logical copy you currently have
b) it would prove you can actually restore your backups.
using datapump api in a dbms_job
Erwann, February 28, 2012 - 7:53 am UTC
Hi Tom,
I'm trying to use an impdp via a PL/SQL procedure.
But I want to launch it in a dbms_job, and that generates an error:
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3507
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5296
ORA-06512: at "ATLAS_CBR.PKG_DATAPUMP_IMPORT", line 293
I tried granting a lot of privileges to the user but still get the same error.
It works when launched by an exec but not in a job.
Thanks
Erwann
February 28, 2012 - 9:52 am UTC
you need to provide a complete - yet concise - but importantly 100% complete - but small
example.
just like above.... (like I do)
Hint: your example shall not have 293 or more lines of code.
calling a datapump proc in a job
Erwann, February 28, 2012 - 10:52 am UTC
Thanks for answering.
Here is my PL/SQL proc, which imports a schema into one database from another.
CREATE OR REPLACE PROCEDURE Import_dtm_test(aSource_DBLink all_db_links.db_link%TYPE
,aImport_Type VARCHAR2)
IS
PRAGMA AUTONOMOUS_TRANSACTION;
EJobDoesNotExists EXCEPTION;
PRAGMA exception_init(EJobDoesNotExists, -31626);
EBadFlashbackTime EXCEPTION;
PRAGMA exception_init(EBadFlashbackTime, -39001);
lOwner all_objects.owner%TYPE := 'TEST_OWNER';
lJob_Name all_objects.object_name%TYPE := 'TEST_OWNER_DPUMP';
lLog_Name VARCHAR2(255) := lJob_Name||'.log';
lJob_Status LONG;
lStatus ku$_status;
lHandle NUMBER;
BEGIN
EXECUTE IMMEDIATE 'purge recyclebin';
BEGIN
lHandle := dbms_datapump.attach(lJob_Name);
EXCEPTION
WHEN EJobDoesNotExists THEN
lHandle := dbms_datapump.open('IMPORT', 'schema', aSource_DBLink, lJob_Name);
END;
dbms_datapump.get_status(lHandle, 2, 0, lJob_Status, lStatus);
IF (lJob_Status = 'DEFINING') THEN
BEGIN
dbms_datapump.add_file(lHandle, lLog_Name, 'DATA_PUMP_DIR', NULL, dbms_datapump.ku$_file_type_log_file);
dbms_datapump.log_entry(lHandle, 'Type : '||aImport_Type||', Start : '||TO_CHAR(SYSDATE, 'dd/mm/yyyy hh24:mi:ss'), 1);
dbms_datapump.set_parameter(lHandle, 'TABLE_EXISTS_ACTION', aImport_Type);
dbms_datapump.metadata_filter(lHandle, 'SCHEMA_EXPR', '='''||lOwner||'''');
dbms_datapump.Start_job (lHandle);
EXCEPTION WHEN EBadFlashbackTime THEN
dbms_datapump.Stop_job (handle=>lHandle, immediate=>1);
lHandle := dbms_datapump.open('IMPORT', 'table', aSource_DBLink, lJob_Name);
--
dbms_datapump.add_file(lHandle, lLog_Name, 'DATA_PUMP_DIR', NULL, dbms_datapump.ku$_file_type_log_file);
dbms_datapump.log_entry(lHandle, 'Type : '||aImport_Type||', Start : '||TO_CHAR(SYSDATE, 'dd/mm/yyyy hh24:mi:ss'), 1);
dbms_datapump.set_parameter(lHandle, 'TABLE_EXISTS_ACTION', aImport_Type);
dbms_datapump.metadata_filter(lHandle, 'SCHEMA_EXPR', '='''||lOwner||'''');
dbms_datapump.Start_job (lHandle);
END;
ELSE
dbms_datapump.detach (lHandle);
RAISE_APPLICATION_ERROR(-20000, 'Job '||lJob_Name||' already running. Please, execute ''Clean_All'' if needed.');
END IF;
dbms_datapump.detach (lHandle);
COMMIT;
EXCEPTION
WHEN OTHERS THEN
lStatus := dbms_datapump.get_status(lHandle, 8);
RAISE_APPLICATION_ERROR(-20000, CHR(10)||dbms_utility.format_error_backtrace||CHR(10)||lStatus.error(1).logtext);
END Import_dtm_test;
/
Then I create the job below :
DECLARE
X NUMBER;
BEGIN
SYS.DBMS_JOB.SUBMIT
( job => X
,what => 'Import_dtm_test(''ROKMD03'',''REPLACE'');'
,next_date => to_date('01/01/4000 00:00:00','dd/mm/yyyy hh24:mi:ss')
,interval => 'NEXT_DAY(TRUNC(SYSDATE), ''LUNDI'')+08/24'
,no_parse => FALSE
);
SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(x));
SYS.DBMS_JOB.BROKEN
(job => X,
broken => TRUE);
COMMIT;
END;
/
But it generates the following error (message text translated from French):
ORA-12012: error on auto execute of job 3400
ORA-20000:
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3507
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5296
ORA-06512: at "ATLAS_CBR.IMPORT_DTM_TEST", line 26
ORA-31626: job does not exist
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.KUPV$FT", line 405
ORA-31638: cannot attach to job TEST_ONEKEY_DPUMP for user ATLAS_CBR
ORA-31632: master table "ATLAS_CBR.TEST_OWNER_DPUMP" not found, invalid, or inaccessible
ORA-00942: table or view does not exist
ORA-06512: at "ATLAS_CBR.IMPORT_DTM_TEST", line 57
ORA-06512: at line 1
Hope this will help you helping me.
Thanks
Erwann
February 28, 2012 - 11:05 am UTC
how is the database link created?
Run Export from different user
Karthick, February 29, 2012 - 12:32 am UTC
Tom,
I have two users in my DB
1. CUSTOMER
2. LOCAL_USER
CUSTOMER is the schema that came with the product we use. Any change to this user can only be made by the vendor.
LOCAL_USER is supplied by our vendor and can be used by us to add custom tables etc.
We have SYNONYMS in LOCAL_USER for all the objects in CUSTOMER.
Now my requirement is to create a process to export a set of tables in the CUSTOMER schema with a filter condition.
I have created a package with AUTHID CURRENT_USER in the LOCAL_USER schema to export the tables. I call this package from the CUSTOMER schema.
The job completes without any error. But when I look at the dump log, I get this:
Starting "CUSTOMER"."TABLE_EXPORT_20120229005643":
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 400 MB
ORA-31693: Table data object "CUSTOMER"."CUSTOMER_DETAIL" failed to load/unload and is being skipped due to error:
ORA-00942: table or view does not exist
Master table "CUSTOMER"."TABLE_EXPORT_20120229005643" successfully loaded/unloaded
******************************************************************************
Dump file set for CUSTOMER.TABLE_EXPORT_20120229005643 is:
/home/data/CUSTOMER_DETAIL_20120229005643.dmp
Job "CUSTOMER"."TABLE_EXPORT_20120229005643" completed with 1 error(s) at 00:56:48
Now the questions are:
1. Why is it saying the table does not exist when it is available in the CUSTOMER schema?
2. Why did the job complete successfully when it had an error?
Note: If I create the package in the CUSTOMER schema then everything works fine. This error occurs only when I create the package in the LOCAL_USER schema.
February 29, 2012 - 3:57 am UTC
give complete, yet small, but 100% complete example to work with.
Start with create users and granting to them the minimum set of privileges.
create a table in one schema and install smallest possible bit of code. grant on this code to other schema.
run code from other schema and reproduce the issue.
We can work from there - this is how I debug anything and everything, reproduce the issue using the smallest bit of code humanly possible. 999 times out of 1000 I find my mistake doing this small bit of set up work.
calling a datapump proc in a job
Erwann, February 29, 2012 - 2:29 am UTC
Hi, the dblink is public
CREATE PUBLIC DATABASE LINK "ROKMD03.TOTO.COM"
USING '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS = (PROTOCOL = TCP)(HOST = db-rokmd0.toto)(PORT = 1521)))(CONNECT_DATA=(SERVER = DEDICATED)(SERVICE_NAME = ROKMD03.toto.com)))';
And it works perfectly using the user OWNER.
As I said, it is running the PL/SQL proc in a job that doesn't work. Running it in a simple sqlplus window with EXEC works fine.
Thanks
Erwann
February 29, 2012 - 4:11 am UTC
that database link is a 'current user' database link. It would take the credentials of the 'current user' and use them to log into the remote database.
Unless it contained a fixed username and password - it isn't going to work in a job. In the job - we don't have your password available to us to send to the remote site to authenticate with.
So, either
a) create the database link with connect to USER identified by PASSWORD
b) use an external job scheduler such as OEM (enterprise manager) that will log into the database as the necessary user and then run the job.
beware of current user database links in jobs and generic procedures run by others - we need to use the current sessions credentials to log into the remote database and the database link might see an entirely different schema on the remote site - or nothing at all if it is not allowed to log into the remote site.
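For option (a), a fixed-user link looks like this (a sketch; the credentials and connect string are illustrative):

CREATE PUBLIC DATABASE LINK "ROKMD03.TOTO.COM"
CONNECT TO some_remote_user IDENTIFIED BY some_password
USING 'ROKMD03';
-- every session that uses the link, including job queue sessions,
-- now authenticates to the remote database as SOME_REMOTE_USER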
calling a datapump proc in a job
Erwann, February 29, 2012 - 8:42 am UTC
Thank you very much.
You have solved my problems.
Erwann
mfz, June 01, 2012 - 9:09 am UTC
I have Oracle 10g Release 2 on a Windows 64 Itanium server.
The sum(segments) for one of the schemas is 380 G. I would like to do a datapump export, but I don't have enough space (I just have 140 G) on the server.
Questions
a) Can I compress the data pump dump on the fly?
b) This being a Windows server, I cannot export to a network share, as Oracle runs as a local user (not a domain user). Is there any option other than changing the Oracle user to a domain user?
Thanks
June 02, 2012 - 2:42 am UTC
Compression of data was not in 10g, that was added in 11g
you can do datapump over networks and the like - tell us why you are taking this export - what are you intending to do with it? Since it is not a backup - you must be intending to take this somewhere and import right? Where is that place - you might not even need to hit the disk.
A reader, June 04, 2012 - 8:33 am UTC
This is not part of the backup and recovery plan; it will be a supplement to that.
As compression is not part of the 10g pump, how can I compress the pump extracts on the fly (either in a batch file / PowerShell script)? Sorry... I wasn't clear in my question.
June 04, 2012 - 9:18 am UTC
can you address my question?
until data pump does compression, there is no way to compress on the fly.
Attach Parameter in DataPump
Vimal, July 16, 2012 - 10:55 pm UTC
Hi Tom,
I would like to know about the attach parameter. I read the documents but it is not clear to me.
Say I am exporting the schema user1 and I would like to attach another schema to this job. Is the attach parameter meant for that kind of operation? What does the term "Attach" mean?
Similarly, if I want to attach some tables to an ongoing job, can I do that with the Attach parameter?
I would be very grateful, if you can give me an example.
Thanks and regards,
Vimal.
July 17, 2012 - 8:09 am UTC
Attach Parameter in DataPump
Vimal, July 17, 2012 - 10:41 am UTC
Thanks for the article. However, I had already read it before asking you.
They say "Attaches the client session to an existing export job and automatically places you in the interactive-command interface. Export displays a description of the job to which you are attached and also displays the Export prompt."
But, I don't really understand the need for doing this.
My doubt is: can we attach objects, which we forgot to add before, to a currently running export job?
Thanks.
July 17, 2012 - 4:35 pm UTC
you can attach to a job and tell it what to do. that is all.
it is not that you "forgot" something, it is that you started a job that doesn't have anything to do and you want to talk to it to tell it what to do.
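For example (a sketch; the job name is illustrative), from a second terminal you can attach to a running export and direct it interactively:

$ expdp scott/tiger attach=SCOTT_EXP_JOB

Export> status       -- show progress of the running job
Export> parallel=4   -- change the degree of parallelism on the fly
Export> stop_job     -- suspend the job; attach again later and start_job resumes it
Export> kill_job     -- or abandon it entirely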
Database Migration
Goh, September 21, 2012 - 4:08 am UTC
Dear Tom,
Need your opinion on the database migration using data pump. I have to migrate a database (SOURCE) in 10GR2, AIX and import into a new database (TARGET) in 11GR2, with different platform (Solaris x86). There are 8 schemas in the SOURCE database and we only need to migrate 3 schemas to the TARGET database. I would like to check with you if the following steps are recommended.
1. Install and create a database in TARGET server (11gR2)
2. Pre-create users, tablespaces in TARGET server
3. Full database export using data pump from SOURCE (10GR2)
4. Import into TARGET database with FULL=yes
5. Drop the schemas that are not required.
Note: I believe the system throws errors on the sys or system objects when importing into the TARGET database (11GR2). Are these messages ignorable?
Your help is very much appreciated. Thanks
Regards
Goh
September 26, 2012 - 12:23 pm UTC
I'd take this opportunity to figure out the smallest set of privileges I really truly need.
do schema level exports, do schema level imports after creating the users with the smallest possible set of privileges. why do a full database and then drop a ton of stuff? just do what you need.
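A schema-level pair might look like this (the parameter files are a sketch; schema, directory and file names are illustrative):

# expdp parameter file on the 10gR2 source
DIRECTORY=DPUMP_DIR
DUMPFILE=three_schemas_%U.dmp
LOGFILE=three_schemas_exp.log
SCHEMAS=APP_OWNER,APP_DATA,APP_RPT
PARALLEL=4

# impdp parameter file on the 11gR2 target (users/tablespaces pre-created)
DIRECTORY=DPUMP_DIR
DUMPFILE=three_schemas_%U.dmp
LOGFILE=three_schemas_imp.log
SCHEMAS=APP_OWNER,APP_DATA,APP_RPT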
automate export/import
A reader, November 07, 2012 - 12:27 am UTC
We have to use datapump (expdp/impdp) every other week to refresh schemas from prod to dev. I am planning to automate the expdp and impdp using a unix shell script. How do I monitor from the script that the expdp has started and is progressing, and have it notify the user if it errors out, is stopped, or is aborted? What query should I include to monitor the expdp and impdp? How do I write the code in a unix shell script?
Any example will be appreciated.
November 07, 2012 - 7:45 am UTC
... I am planning to automate the expdp and impdp using unix shell
script.... ...
why not a job in the database? don't use shell, especially if you don't know how to ;)
just write a stored procedure and program it in. use dbms_scheduler to run it. monitoring, logging, history of execution - all done for you.
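A minimal sketch of that (assuming a procedure REFRESH_DEV_SCHEMAS already wraps your DBMS_DATAPUMP calls):

begin
  dbms_scheduler.create_job(
    job_name        => 'REFRESH_DEV_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REFRESH_DEV_SCHEMAS',
    repeat_interval => 'FREQ=WEEKLY;INTERVAL=2;BYDAY=SAT;BYHOUR=2',
    enabled         => true,
    comments        => 'refresh dev schemas from prod every other week');
end;
/
-- run history and errors then show up in dba_scheduler_job_run_details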
how to disable statistics update in dbms_datapump
Sean Ardar, November 10, 2012 - 4:25 pm UTC
Hi
At the source we have a partitioned table, and we use dbms_datapump to copy the table partition by partition.
To improve the performance of the import, we are thinking of disabling the statistics update during the dbms_datapump import
and gathering statistics at the destination after all the partitions are copied.
Is there any way to disable the statistics update in dbms_datapump?
Thank you
November 12, 2012 - 9:18 am UTC
exclude=statistics
on the command line.
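Through the DBMS_DATAPUMP API the equivalent is a metadata filter (a sketch; h is your open job handle):

dbms_datapump.metadata_filter(
   handle => h,
   name   => 'EXCLUDE_PATH_EXPR',
   value  => 'IN (''STATISTICS'')' );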
pls explain
venkat, January 16, 2013 - 4:22 am UTC
Hi Tom,
After seeing this post I got the idea that data pump =
insert into table select * from other table
If not, then what is great/new/advantageous about data pump? An insert statement also creates a lot of redo.
January 16, 2013 - 12:15 pm UTC
it does a lot more than just select * from other table.
it can do filtering, subsetting, just get some types of objects, etc etc etc - I mean, it does a lot more than just insert into t1 select * from t2; It can put the data to disk and let you move it that way, whatever.
also, it can do parallel direct path loads - which will always skip undo generation and optionally may skip redo generation.
syntax for IMPDP schemas parameter
A reader, February 12, 2013 - 10:23 am UTC
Hi Tom,
The database is 11.2.0.3.
According to the Oracle documentation, we can specify a list of schemas separated by commas when doing impdp with a parameter file, such as:
schemas=user1,
user2,
user3
However, yesterday I tested the following syntax in the parameter file without using comma, but using newline:
schemas=user1
user2
user3
The end result is that all the schemas were also imported into the destination database.
Does this mean that we can use either a comma or a newline to specify a list of schemas for impdp?
Thanks!
DBMS_DataPump Compression
Praveen., May 03, 2013 - 9:06 pm UTC
HI,
11g expdp has the following COMPRESSION options:
ALL, DATA_ONLY, [METADATA_ONLY] and NONE.
However, it seems that DBMS_DATAPUMP has only NONE and METADATA constants; I'm looking for the "ALL" option with DBMS_DATAPUMP.
Is it possible? I see the following 2 constants declared in the package:
KU$_COMPRESS_NONE CONSTANT NUMBER := 1;
KU$_COMPRESS_METADATA CONSTANT NUMBER := 2;
FYI..I'm on Oracle 11g - 11.2.0.3.0
May 06, 2013 - 7:19 pm UTC
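For what it's worth, a sketch of one way to get ALL in 11.2: the numeric constants only cover the older settings, but SET_PARAMETER accepts COMPRESSION as a text parameter (note that compressing table data requires the Advanced Compression option):

dbms_datapump.set_parameter(
   handle => h,           -- h is your open export handle
   name   => 'COMPRESSION',
   value  => 'ALL' );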
Imports, OPS$ schemas and database links
Charlie B., May 14, 2013 - 8:46 pm UTC
Tom -
We follow your suggestion and use an externally-identified schema for scheduled export and import jobs. That user has EXP_FULL_DATABASE and IMP_FULL_DATABASE privileges granted to it.
I'm trying to improve security in the database, and the "Voyager worm" comes to mind. I'd like to ensure that our external accounts don't have "create (public) database link" privilege. But these privileges are apparently granted as part of the IMP_FULL_DATABASE role. Voyager, as I'm sure you recall, was designed to propagate over database links.
Do you have a suggestion regarding how to prevent externally-identified schemas from creating database links? I'd rather not modify the IMP_FULL_DATABASE role: I'm not into changing Oracle's internals unless absolutely necessary.
May 14, 2013 - 9:50 pm UTC
ops$tkyte%ORA11GR2> create database link ora11gr2@loopback connect to ops$tkyte identified by foobar using
2 'ora11gr2';
Database link created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create table log ( msg varchar2(4000) );
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create or replace trigger dblink_trigger
2 before create on schema
3 begin
4
5 if ( ora_sysevent = 'CREATE' and ora_dict_obj_type = 'DATABASE LINK' )
6 then
7 raise_application_error( -20001, 'Oh no you don''t' );
8 end if;
9 end;
10 /
Trigger created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> connect /
Connected.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> drop database link ora11gr2@loopback;
Database link dropped.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create database link ora11gr2@loopback connect to ops$tkyte identified by foobar using
2 'ora11gr2';
create database link ora11gr2@loopback connect to ops$tkyte identified by foobar using
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: Oh no you don't
ORA-06512: at line 5
ops$tkyte%ORA11GR2>
this trigger does not have to be in this schema...
EXP-00008: ORACLE error 1422 encountered
Linda, June 27, 2013 - 12:34 pm UTC
I am having this problem in 9.2.0.8:
. exporting pre-schema procedural objects and actions
EXP-00008: ORACLE error 1422 encountered
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at "SYS.STANDARD", line 628
ORA-06512: at "SYS.DBMS_RULE_EXP_RULES", line 129
ORA-06512: at line 1
EXP-00083: The previous problem occurred when calling SYS.DBMS_RULE_EXP_RULES.schema_info_exp
. exporting cluster definitions
. about to export SYSTEM's tables via Conventional Path ...
. . exporting table AQ$_INTERNET_AGENTS
Alexander, September 11, 2013 - 8:20 pm UTC
Hi,
Can you explain how expdp uses server memory, based on how big your schema is? We've been having issues with a server running out of memory and swapping, and we can see that our expdp job uses a ton of memory - close to 100GB, which is proportional to the size of the schemas. Now, we've been using expdp for ages and I think this might be the only case where we actually have enough physical memory to cover the size of the export. Everywhere else the exports are typically much larger than the memory we have, so I have no idea what's up with this. Any ideas? Thanks.
September 23, 2013 - 5:54 pm UTC
it would not be based on the size of the data but on the amount of metadata data pump felt compelled to cache. Is there anything "strange" about this schema, like "it has millions of objects or billions of extents" or anything like that you can think of?
Alexander, September 24, 2013 - 4:26 pm UTC
Like this?
SQL> select MAX_EXTENTS from dba_tablespaces where TABLESPACE_NAME='RTC_DATA';
MAX_EXTENTS
-----------
2147483645
So aren't extents directly related to the size of the schema? I don't even really think about extents these days, I let Oracle worry about the management of that. So what's expdp doing with those?
Schema replication?
Dhruva, January 20, 2014 - 4:32 pm UTC
Hi Tom,
Requirement is to have the DWH schema (called schema1) available for reporting even while the ETL process is running. DB size is close to 1TB, running 11.2.0.2 enterprise.
So prior to ETL process:
1. Create schema2 & grant privileges
2. Create all tables in schema2 as select * from schema1.table compress where 1=2
3. Run data pump over loopback DB link from schema1 to schema2, in parallel, excluding statistics and with table exists=append
4. Import statistics
5. Create synonyms pointing to schema2 so reports remain unaffected
After ETL process has completed, simply point the synonyms back to schema1 and get rid of schema2.
The main benefit is no overhead of an O/S file - beats transportable tablespaces, I think. Plus no maintenance overhead.
Any better suggestions?
datapump package error
Jems, March 09, 2017 - 7:06 am UTC
I am trying to take a backup of the HR schema using the datapump utility, and I created the following package without any error.
But when I run it, the following error occurs.
****
SQL> exec fullexport.exp ;
begin fullexport.exp; end;
ORA-39001: invalid argument value
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 79
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3507
ORA-06512: at "SYS.DBMS_DATAPUMP", line 5296
ORA-06512: at "HR.FULLEXPORT", line 13
ORA-06512: at line 2
****
CREATE OR REPLACE PACKAGE fullexport
IS
PROCEDURE exp;
END fullexport;
CREATE OR REPLACE PACKAGE BODY fullexport IS
h1 NUMBER := NULL;
vfile_name varchar2(30) := null;
vfile_name_log varchar2(30) := null;
PROCEDURE exp IS
BEGIN
BEGIN
DBMS_LOCK.sleep(15);
END;
BEGIN
h1 := DBMS_DATAPUMP.open(operation => 'EXPORT',
job_mode => 'HR',
job_name => 'BACKUP_UTILITY',
version => 'COMPATIBLE');
DBMS_DATAPUMP.add_file(handle => h1,
filename => 'FULLEXP_LOG.LOG',
directory => 'DATA_PUMP_DIR',
filetype => 3);
DBMS_DATAPUMP.add_file(handle => h1,
filename => 'FULLEXP.DMP',
directory => 'DATA_PUMP_DIR',
filetype => 1);
DBMS_DATAPUMP.START_JOB(h1);
END;
END;
END fullexport;
March 09, 2017 - 11:27 am UTC
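job_mode => 'HR' is the problem - the documented job modes are FULL, SCHEMA, TABLE, TABLESPACE and TRANSPORTABLE, so OPEN raises ORA-39001 here. A sketch of a corrected call:

h1 := DBMS_DATAPUMP.open(operation => 'EXPORT',
                         job_mode  => 'SCHEMA',   -- the mode, not the schema name
                         job_name  => 'BACKUP_UTILITY',
                         version   => 'COMPATIBLE');
-- run as HR and the HR schema is exported by default; otherwise add
-- DBMS_DATAPUMP.metadata_filter(h1, 'SCHEMA_EXPR', '= ''HR''');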
gems, March 09, 2017 - 11:44 am UTC
If I want to export only my own schema (I am currently logged in as "HR"), I need to pass job_mode => 'SCHEMA'.
Right or wrong?
March 09, 2017 - 1:46 pm UTC
Yes
A reader, May 28, 2019 - 4:15 am UTC
How can I use datapump to export metadata only, not data?
May 29, 2019 - 6:25 am UTC
content=metadata_only
and you're done!
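If you are driving it through the DBMS_DATAPUMP API instead, the same effect comes from a data filter (a sketch; h is your open job handle):

dbms_datapump.data_filter(
   handle => h,
   name   => 'INCLUDE_ROWS',
   value  => 0 );   -- export no rows at all, metadata only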
Data Pump Log contents with DBMS_DATAPUMP and NOLOGFILE
Narendra, June 17, 2022 - 8:08 am UTC
Apologies in advance if this thread is not appropriate for my question.
I am trying to use data pump import over a NETWORK_LINK to clone a schema, using the approach outlined by Connor at
https://connor-mcdonald.com/2019/11/06/cloning-a-schema-with-one-line/ While the approach is really cool and works well (thanks Connor for this!!), I am trying to find a way to get the contents of the data pump import log without having/creating a log file.
When I use IMPDP with NOLOGFILE=YES, I still get the log of the import printed to the client terminal/screen.
Is there a way to achieve the same using DBMS_DATAPUMP where we don't create log file but still get the log contents as output of call to CLONE_SCHEMA procedure?
July 04, 2022 - 11:56 pm UTC
I just omitted the log file call, eg
declare
l_job number;
begin
l_job := dbms_datapump.open(
operation => 'EXPORT',
job_mode => 'TABLE',
remote_link => NULL,
job_name => 'EMP_DUMP',
version => 'LATEST');
dbms_datapump.add_file(
handle => l_job,
filename => 'emp.dmp',
directory => 'TEMP');
-- dbms_datapump.add_file(
-- handle => l_job,
-- filename => 'not_in_use.log',
-- directory => 'TEMP',
-- filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
dbms_datapump.metadata_filter(
handle => l_job,
name => 'SCHEMA_EXPR',
value => '= ''SCOTT''');
dbms_datapump.metadata_filter(
handle => l_job,
name => 'NAME_EXPR',
value => '= ''EMP''');
dbms_datapump.start_job(l_job);
dbms_datapump.detach(l_job);
end;
/
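If you also want the log text without ever writing a file, one possibility (a sketch built on the documented GET_STATUS / ku$_status types) is to stay attached and poll for work-in-progress lines instead of detaching:

declare
  l_job    number;
  l_state  varchar2(30);
  l_status ku$_status;
begin
  l_job := dbms_datapump.open(
             operation => 'EXPORT', job_mode => 'TABLE',
             job_name  => 'EMP_DUMP2', version => 'LATEST');
  dbms_datapump.add_file(handle => l_job, filename => 'emp2.dmp', directory => 'TEMP');
  dbms_datapump.metadata_filter(handle => l_job, name => 'SCHEMA_EXPR', value => '= ''SCOTT''');
  dbms_datapump.metadata_filter(handle => l_job, name => 'NAME_EXPR', value => '= ''EMP''');
  dbms_datapump.start_job(l_job);
  loop
    dbms_datapump.get_status(
      l_job,
      dbms_datapump.ku$_status_wip + dbms_datapump.ku$_status_job_error +
      dbms_datapump.ku$_status_job_status,
      -1, l_state, l_status );
    if bitand(l_status.mask, dbms_datapump.ku$_status_wip) != 0 then
      for i in 1 .. l_status.wip.count loop
        dbms_output.put_line( l_status.wip(i).logtext );   -- the would-be log lines
      end loop;
    end if;
    exit when l_state in ('COMPLETED', 'STOPPED');
  end loop;
  dbms_datapump.detach(l_job);
end;
/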
How to exclude some tables from a schema export with DBMS_DataPump API ?
Tailor K, August 17, 2023 - 1:21 pm UTC
I am writing an export procedure to be used with Oracle 11.2 through 19.
It will be pointed at one or two schemas, but sometimes I would like to exclude a particular table.
I am getting ORA-39001 on this code:
DBMS_DataPump.MetaData_Filter(handle => l_dp_handle, name => 'NAME_EXPR', value => 'NOT IN (''tab_A'')');
Even a single IN gives the same error:
DBMS_DataPump.MetaData_Filter(handle => l_dp_handle, name => 'NAME_EXPR', value => 'in (''tab_X'')');
What am I missing here?
August 31, 2023 - 5:02 am UTC
For your NAME_EXPR in schema mode, we need to know what *type* of objects, eg
declare
l_ctx number;
begin
l_ctx := dbms_datapump.open(
operation => 'EXPORT',
job_mode => 'SCHEMA',
remote_link => NULL,
job_name => 'SCOTT_DP',
version => 'LATEST');
dbms_datapump.add_file(
handle => l_ctx,
filename => 'SCOTT.dmp',
directory => 'TEMP');
dbms_datapump.add_file(
handle => l_ctx,
filename => 'SCOTT.log',
directory => 'TEMP',
filetype => dbms_datapump.ku$_file_type_log_file);
dbms_datapump.metadata_filter(
handle => l_ctx,
name => 'SCHEMA_EXPR',
value => '= ''SCOTT''');
dbms_datapump.metadata_filter(
handle => l_ctx,
name => 'NAME_EXPR',
value => 'NOT IN (''EMP'',''DEPT'')',
object_path =>'TABLE'
);
dbms_datapump.start_job(l_ctx);
dbms_datapump.detach(l_ctx);
end;
/