ORA-39002: invalid operation when import on ORACLE 18CXE
Question and Answer

Connor McDonald

Thanks for the question, Daxesh.

Asked: January 28, 2021 - 12:47 pm UTC

Last updated: July 19, 2021 - 5:44 am UTC

Version: 18.0.0.0.0

Viewed 10K+ times

You Asked

1) I installed Oracle 18c XE on my local PC.
2) I installed the Instant Client on my PC.
3) I configured the wallet for my Autonomous account. I took a backup (export) from my Autonomous account using version=11.2.0.2 (the export command is sketched after this list) and tried to import it into my local XE instance.
4) The import does not work; it gives me the error ORA-39002: invalid operation.
5) I also tried with the parameter version=12.2.0.2, but I get the same error when importing into XE.
6) I tried the import on an Oracle Database 12c (12.2.0.1) Enterprise Edition instance and it worked successfully.
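A minimal sketch of the export command behind step 3, run against the Autonomous database (the connect string and logfile name are placeholders I have filled in; the other parameters are the ones mentioned above and in the parfile further down):

expdp admin@<autonomous service> schemas=LIGITIME directory=DATA_PUMP_DIR dumpfile=lgtm11g.dmp logfile=exp_ligitime.log version=11.2.0.2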

I import using the following command. My OS is Windows 10.

impdp \"/as sysdba" parfile=ligitime.par

my parfile is

full=no
schemas=LIGITIME
remap_schema=LIGITIME:LIGITIME
remap_tablespace=DATA:HLL
directory=DATA_PUMP_DIR
dumpfile=lgtm11g.dmp
logfile=imp_impligitime.log
EXCLUDE=TABLE_STATISTICS
EXCLUDE=INDEX_STATISTICS

======================================
In reference to your question:
"Where did you export *to*? And how did you get the dump file down to your local machine? We need to see all the steps please."

Some details I provide herewith:
1) I created object storage on the cloud.
2) I created a token and a pre-authenticated request (with a long expiration period).
3) I created a credential using the following script:
BEGIN
  DBMS_CLOUD.CREATE_CREDENTIAL(
    credential_name => <name of credential>,
    username        => <my username for Oracle Cloud>,
    password        => <token that I created>);
END;
/
4) I downloaded Oracle Instant Client version 19.0.0.0 (Basic Light package, SQL*Plus package, Tools package) and extracted it into one directory.
5) I made a folder named network inside the Instant Client folder, and within network I created an admin folder.
6) I extracted all the wallet files into the admin folder.
7) As explained, I took the backup using expdp as mentioned earlier.
8) I ran the following script to move my .dmp file to my object storage:
BEGIN
  DBMS_CLOUD.PUT_OBJECT(
    credential_name => <credential created in no. 3>,
    object_uri      => <URL from the pre-authenticated request>/<name of my .dmp file>,
    directory_name  => 'DATA_PUMP_DIR',
    file_name       => <my dump file name>);
END;
/
9) Then I downloaded it from my object storage to my local machine (one way to do this is sketched below).
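One way to do the download in step 9, assuming curl is available on the local machine and reusing the pre-authenticated request URL from step 2 (the exact command used is not shown in the question, so this is only a sketch):

curl -o lgtm11g.dmp "<URL from the pre-authenticated request>/lgtm11g.dmp"

The downloaded file then needs to sit in whatever OS directory the DATA_PUMP_DIR directory object of the local XE database points to.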

and Connor said...

Where did you export *to*? And how did you get the dump file down to your local machine? We need to see all the steps please.

OK, those steps all look good.

A common cause is a timezone disparity as per MOS note 2482971.1

The steps listed in the note are (on the import target)

SQL> shutdown immediate
SQL> startup upgrade
SQL> exec dbms_dst.begin_upgrade(<version>)
SQL> shutdown immediate
SQL> startup


However, I suspect that might be for 12c and later only. MOS note 977512.1 covers the timezone upgrade for 11g (it's a bit more lengthy).
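For reference, a sketch of how you might check the timezone file versions on source and target, and then complete the upgrade after the begin_upgrade/restart steps above. The upgrade_database/end_upgrade calls are based on the DBMS_DST package documentation rather than quoted from the MOS note, so treat them as an outline only:

-- on both source and target: which timezone file version is in use
select version from v$timezone_file;

select property_name, property_value
from   database_properties
where  property_name like 'DST_%';

-- on the import target, after the restart with the new version active
set serveroutput on
declare
  l_failures pls_integer;
begin
  dbms_dst.upgrade_database(l_failures);
  dbms_output.put_line('upgrade_database failures: ' || l_failures);
  dbms_dst.end_upgrade(l_failures);
  dbms_output.put_line('end_upgrade failures: ' || l_failures);
end;
/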


Comments

Daxesh Laiwala, February 17, 2021 - 3:43 am UTC

Thanks for the reply. If it is a timezone issue then I can't understand it: I exported using the parameter version=11.2.0.2 and imported on my laptop, on which I had installed Oracle 11g (11.2.0.4) Enterprise Edition, and it worked perfectly. After that I uninstalled 11g Enterprise Edition and installed Standard Edition, and there it also worked perfectly. Why is it not working with 18c XE?
Or is it that such a dump simply cannot be imported on XE? I will try your solution on 18c XE and let you know.
Thanks again for guiding me.
Connor McDonald
February 17, 2021 - 7:08 am UTC

If you still get stuck, then perhaps try with a Data Pump trace (see details below) and then log a call with Support for them to take a look. You may have hit a bug, given that other versions are importing it fine.

-- Summary of Data Pump trace levels:
-- ==================================

  Trace   DM   DW  ORA  Lines
  level  trc  trc  trc     in
  (hex) file file file  trace                                         Purpose
------- ---- ---- ---- ------ -----------------------------------------------
  10300    x    x    x  SHDW: To trace the Shadow process (API) (expdp/impdp)
  20300    x    x    x  KUPV: To trace Fixed table
  40300    x    x    x  'div' To trace Process services
  80300    x            KUPM: To trace Master Control Process (MCP)      (DM)
 100300    x    x       KUPF: To trace File Manager
 200300    x    x    x  KUPC: To trace Queue services
 400300         x       KUPW: To trace Worker process(es)                (DW)
 800300         x       KUPD: To trace Data Package
1000300         x       META: To trace Metadata Package
------- 'Bit AND'
1FF0300    x    x    x  'all' To trace all components          (full tracing)


e.g. impdp ........ TRACE=1ff0300

(Note - the trace files can get large)
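If you do run with TRACE, one generic way to find where the resulting trace files are written (this is just the standard diagnostic destination query, not something specific to Data Pump):

select value
from   v$diag_info
where  name = 'Diag Trace';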

IT'S WORKING

DAXESH LAIWALA, February 17, 2021 - 1:07 pm UTC

I tried your solution, but Oracle 18c XE does not allow me to change the time zone. For all the other Standard and Enterprise Edition databases it works successfully. XE may have limited facilities for this operation.

THANKS FOR GUIDANCE.

Yuri, July 16, 2021 - 9:49 am UTC

I had a similar situation: a Data Pump export from 19.3 EE and a subsequent import into 18.4 XE.

The source database had timezone file version 32 and the target XE database version 31.
So I copied the two files, timezlrg_32.dat and timezone_32.dat, from the source $ORACLE_HOME/oracore/zoneinfo to the same directory on the target database, as well as into both the "big" and "little" subdirectories (to be on the safe side),
and the following call helped:
exec dbms_dst.begin_upgrade(32)

So, thank you so much!
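A rough shell sketch of that copy step, assuming a default Linux directory layout and that the two .dat files have already been transferred from the source host (the paths are assumptions, not taken from the post):

# on the target host
cp timezlrg_32.dat timezone_32.dat $ORACLE_HOME/oracore/zoneinfo/
cp timezlrg_32.dat timezone_32.dat $ORACLE_HOME/oracore/zoneinfo/big/
cp timezlrg_32.dat timezone_32.dat $ORACLE_HOME/oracore/zoneinfo/little/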
Connor McDonald
July 19, 2021 - 5:44 am UTC

glad it helped
