Question and Answer

Connor McDonald

Thanks for the question, Sudheer.

Asked: August 06, 2021 - 12:24 pm UTC

Last updated: August 13, 2021 - 3:31 am UTC

Version: 19c

Viewed 1000+ times

You Asked

I have run impdp twice, as a user that has the DATAPUMP_IMP_FULL_DATABASE and DATAPUMP_EXP_FULL_DATABASE roles in addition to CONNECT and RESOURCE.

the parfile in use has:
full=no
content=metadata_only
cluster=no
job_name=imp_pkg
dumpfile=expdp_production_%U.dmp   -- 4 files (01..04 .dmp) present in the DIR_DBIMP location
directory=dir_dbimp
logfile=impdp_pkg.log
include=package:"= 'pkg_test1'"
sqlfile=cr_pkg_test1.sql
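For reference, a parfile like the one above would be consumed by impdp roughly as follows. This is a sketch: the filename imp_pkg.par and the username are assumptions for illustration, not from the original post.

```shell
# Sketch: run the metadata-only import with the parfile above.
# "scott" and "imp_pkg.par" are placeholder names, not from the question.
impdp scott parfile=imp_pkg.par
```

Because the parfile sets sqlfile=cr_pkg_test1.sql, this run extracts DDL into that file rather than creating the package in the database.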

But the job is stuck at the DEFINING stage, as seen in SELECT * FROM dba_datapump_jobs.
I also tried running with no content filter and with content=metadata_only, but got the same result.
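A more targeted status query than SELECT * can be run from SQL*Plus as a DBA user. This is a sketch against the standard DBA_DATAPUMP_JOBS view; it requires a running database and appropriate privileges.

```shell
# Sketch: check the state of Data Pump jobs (run on the database host
# as a suitably privileged user).
sqlplus -s / as sysdba <<'SQL'
set linesize 120
col owner_name format a15
col job_name   format a20
col state      format a12
-- DEFINING here would indicate the job has not yet started executing.
select owner_name, job_name, operation, state, attached_sessions
  from dba_datapump_jobs;
exit
SQL
```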

-- Update, 11 Aug 2021:

The impdp completed, but only after about an hour. I guess the 1-2 TB of .dmp files caused the delay? Each .dmp file is roughly 200-300 GB in size.

As for the details: yes, the destination directory from which impdp reads the dumps is an NFS mount point from another server, and yes, impdp is run locally.


and Connor said...

What file system are the dump files on?

- local?
- network?
- NFS/ACFS/OCFS?

=================

Glad it completed.

The reason I asked about file systems is that when a job gets stuck at DEFINING, the most common cause is an OS file lock, which you'll typically see on NFS or similar shared network file systems. Data Pump will typically wait for the lock to resolve itself (the OS cleans them up) and then proceed.

fuser or lsof can help here. Typically, the presence of ".nfs" files is an indicator of open or locked files.
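The check above can be sketched as follows. The scratch directory and the .nfs filename are stand-ins for demonstration; on a real system, point DUMP_DIR at the path behind your DIRECTORY object (here, the dir_dbimp location).

```shell
# Sketch: look for stale NFS lock files in the dump directory.
# DUMP_DIR is a scratch stand-in; use your real dump directory path.
DUMP_DIR=$(mktemp -d)
touch "$DUMP_DIR/.nfs0000abcd"   # simulate a leftover .nfs lock file
# Hidden .nfs* entries indicate files removed while still held open:
find "$DUMP_DIR" -maxdepth 1 -name '.nfs*'
# On a real system, fuser/lsof then show which process holds them:
#   fuser -v "$DUMP_DIR"/*.dmp
#   lsof +D "$DUMP_DIR"
```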

Is this answer out of date? If it is, please let us know via a Comment

More to Explore

Utilities

All of the database utilities are explained in the Utilities guide.