Question and Answer

Connor McDonald

Thanks for the question, Mohamed.

Asked: February 07, 2006 - 5:02 am UTC

Last updated: April 27, 2017 - 10:32 am UTC

Version: 9.0.1

Viewed 10K+ times

You Asked

Dear Tom
I want to modify the tablespaces in a dump file before importing it into another database that doesn't include those tablespaces. How can I do this? Can I generate a SQL file to do so?

thanks for your help
Mohamed

and Tom said...

you cannot do this with a DMP file.

This is a feature of the newer Oracle Datapump in 10g (there are filters), but in 9i - you cannot edit the BINARY dmp file

You may use

imp indexfile=temp.sql full=y

to create a file temp.sql that will have the DDL, you may edit the DDL and precreate the tables. Then import with ignore=Y to ignore object creation errors.
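For illustration, a minimal sketch of that indexfile workflow (the file contents, table and tablespace names below are invented; a real temp.sql produced by imp will be far larger):

```shell
# Invented stand-in for the DDL that "imp indexfile=temp.sql full=y" writes.
cat > temp.sql <<'EOF'
CREATE TABLE "SCOTT"."EMP" ("EMPNO" NUMBER(4), "ENAME" VARCHAR2(10))
TABLESPACE "OLD_TS" ;
EOF

# Point every object at the target tablespace before pre-creating the tables.
sed 's/TABLESPACE "OLD_TS"/TABLESPACE "NEW_TS"/g' temp.sql > temp_fixed.sql

cat temp_fixed.sql
```

You would then run the edited script in the target database and import with ignore=y, so the CREATE TABLE errors are skipped and the rows load into the pre-created tables.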

Rating

  (9 ratings)


Comments

Partially true answer

Rory, February 07, 2006 - 9:49 am UTC

Mohamed, it is possible to edit an export file, though it is rarely the wise thing to do. It's almost always easier and better to pre-create the tables as Tom says. I've had to create batch processes that moved objects around, and since I wasn't always sure which objects I had to move, I ended up editing the export file with perl. One cardinal rule I found to live by, since I didn't feel like reverse engineering the entire export file format:

***) Don't change the byte count. If you need to change tablespace names, change them to a tablespace whose name is the same length or shorter. Pad shorter names with spaces (before the quoted identifier, so the name itself is unchanged) to make the replacement the same length.
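That rule can be sanity-checked mechanically before touching a real file. The names below are hypothetical, with the padding placed before the quoted identifier so the identifier itself stays intact:

```shell
# The replacement must occupy exactly as many bytes as the original.
old='TABLESPACE "FOOBAR"'
new='TABLESPACE  "BARFU"'   # one extra space pads the shorter name

# Both should print 19 - verify before letting a script loose on a dump file.
printf '%s' "$old" | wc -c
printf '%s' "$new" | wc -c
```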

This is wholly unsupported by Oracle so any problems would be yours to live with. Again, try to use Tom's answer.

Tom Kyte
February 08, 2006 - 1:11 am UTC

and expect funky things to happen unless you really, truly know what you are doing, since the dmp file is binary and almost every editor will screw it up royally. In general you have to write a program to do it - and think about that... The tablespace I want to change is "FOOBAR", and I want it to be BARFOO (cannot make it longer, could make it a shorter name).

Ok, I have some data (a row in the table) that includes the value 'Problem reported with the FOOBAR tablespace'. I have a column whose name is 'FOOBAR_DATA'. I have a table named 'FOOBAR_TAB', etc.

Don't blindly change FOOBAR to BARFOO - you'll really mess things up.

I would not suggest editing the dmp file, you can damage the data and maybe not even realize you did!

What about this...

Arangaperumal G, February 08, 2006 - 12:28 am UTC

Hi Tom
What about this Parameter?

TRANSFORM=SEGMENT_ATTRIBUTES and
TRANSFORM=STORAGE

The SEGMENT_ATTRIBUTES transform applies to both storage and tablespace attributes. To omit only the STORAGE clause and retain the TABLESPACE clause, you can use the STORAGE transform.

Example:

impdp scott/tiger DIRECTORY=dump DUMPFILE=scott06feb06.dmp
LOGFILE=scott06feb06.log remap_schema=test:scott table_exists_action=truncate
TRANSFORM=storage:N TRANSFORM=SEGMENT_ATTRIBUTES:N:TABLE


Tom Kyte
February 08, 2006 - 1:50 am UTC

sure, as I said:

...
This is a feature of the newer Oracle Datapump in 10g (there are filters)
.......


impdp is Data Pump - that is 10g, not 9i

editing dump file

Ramanan, February 08, 2006 - 4:54 am UTC

Hi..
Tom is right in saying editing a dump file is a foolish thing to do. But really, if you are very good with editor commands and you know what you are doing, you can go ahead and edit dump files. Believe me, I have done this on Linux using sed to change the header in the dump file.
Try it once, if you can afford to lose the dump file, and see for yourself.


Tom Kyte
February 08, 2006 - 8:10 am UTC

Add a bit of luck - that the tablespace name you are changing is not a substring of some other string - and sure... sed does not care whether it changes DATA in the dmp file, or the tablespace name, or a table name, or a column name, or whatever.

And you shouldn't lose the dump file - sed will create a new file - but you could change something without realizing it for months (think about what happens if you accidentally change data in the dmp file: it imports, but you changed it...).

Perl is your friend...

Rory, February 08, 2006 - 10:16 am UTC

Again, only if you're backed up against a wall would I suggest editing an export file. But if you do, something along these lines will help keep you safe from mucking up data. I'm working from memory from several years ago, so please correct anything I get terribly wrong.

Perl is a bit more powerful than sed and very similar to it. If you can, I suggest learning enough perl to do things like the following.

To move from tablespace FOOBAR to BARFU (note the extra space in the replacement string to keep the byte count the same):

perl -e 'while (read(STDIN, $str, 4000000)) { $str =~ s/TABLESPACE "FOOBAR"/TABLESPACE  "BARFU"/g; print $str; }' < my_exp.dmp > my_new_exp.dmp

Then use less, od, strings or a similar command to verify that you got all the occurrences of FOOBAR that are relevant. There is a very slight chance that a read above caught only a partial tablespace declaration straddling a block boundary. Searching for remaining occurrences obviously may not be reasonable with more common words in large export files.

PS: If on Windows, get cygwin from cygwin.com and do the same thing as above.

Tom Kyte
February 08, 2006 - 10:52 am UTC

well, you don't need perl for that, sed would do that one just fine. That is a bit more specific.

You never mentioned before to search for

TABLESPACE "name"


That does have a less likely chance of messing up your data - definitely. But all you need is sed, no need to complicate it with that perl thing ;)

I still would not, will not recommend this approach personally.

Kirill, February 08, 2006 - 2:03 pm UTC

> well, you don't need perl for that, sed would do that one just fine.

Actually, sed/awk/vi implementations have a limit on the maximum length of line they can hold for processing.

perl (and, incidentally, emacs) has no such limitation (apart from the physical limit of memory), and does not care whether the file is binary or text.

Each of the create table/index statements in a .dmp file is on a single line (no line breaks between columns), so if you have a lot of columns the line will be long and sed/vi/awk will not work.

Also, there does not appear to be any restriction on preserving byte count, at least not in the DDL statements contained within the .dmp file.

For example, the following perl one-liner can be used to override the tablespace to data_ts and completely get rid of the storage(...) clause (it can even be used on the fly if exp.dmp and imp.dmp are UNIX pipes):

perl -pe 's/\s+tablespace\s+\S+/ tablespace data_ts/i;s/\s+storage\s*\(.+?\)//i;' exp.dmp > imp.dmp &


It is least risky when the .dmp file does not include data (rows=n), but it is still very useful when the file contains tens of thousands of objects (e.g. PeopleSoft). So for a complete import project, do the following:
1. Create tables only (rows=n, indexes=n, constraints=n, triggers=n); use perl to control storage for the tables being created.
2. Import the data (ignore=y), still nothing else.
3. Import everything else (rows=n); use perl to control storage for the indexes.

Byte counts probably only matter if you try to modify the data part of the dump file - that is really risky, because it may corrupt silently if the match expression happens to occur somewhere in the data (though even then the risk can be reduced by specifying very strict match expressions that will not match in undesired places).

A DDL modification in the .dmp file will not corrupt silently; the worst outcome is a table/index not created, or created in the wrong tablespace.

I understand and agree that such manipulations are not supported by Oracle and are at one's own risk, but I think it is better to know the real risks (and benefits) than to simply dismiss these opportunities because the file is binary.

Thanks.
Kirill

Why?

Mark Wooldridge, February 10, 2006 - 12:20 am UTC

Why such complicated solutions to a simple problem? The goal shouldn't be to solve the problem in ever more complex ways. Yes, they may work - but why?

"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away."
- Antoine de Saint Exupery

command sed

Barretos, August 30, 2007 - 1:21 pm UTC

People,

I need your help changing the STORAGE clause in the indexfile. For example, I want to turn this (which spans several lines in the file):

STORAGE(INITIAL 20480 NEXT 20480 FREELIST GROUPS 1
BUFFER_POOL DEFAULT) TABLESPACE "CCDADOS" NOLOGGING
NOCOMPRESS

into just this:

TABLESPACE "CCDADOS"
NOLOGGING NOCOMPRESS

Remember that this is an indexfile generated by import (show=y indexfile=teste.sql rows=n).

Any idea? I tried (sed 's/STORAGE(.*//g' file1.sql > teste.sql) but without success - it only removed the first line of the clause, leaving:

BUFFER_POOL DEFAULT) TABLESPACE "CCDADOS" NOLOGGING
NOCOMPRESS

Any idea, friends?

Barretos



Tom Kyte
September 04, 2007 - 5:41 pm UTC

I don't have any suggestions for you for the indexfile created by IMP.

If you use dbms_metadata to extract the DDL - yes.
If you use 10g and expdp/impdp - sure.

But with the index file you have a simple script whose statements span lines, and the stream editor (sed) works line by line.

Sometimes your hand is forced.

Christian, April 26, 2017 - 10:10 am UTC

I wanted to quickly share my findings on this, because I recently had to do this (slightly dodgy) thing.
Everyone knows that using Data Pump is the right thing to do, but sometimes it's not possible for various technical reasons, and you will find yourself needing to muck about with the dmp file with a script.

This worked for me...
It uses sed (obviously); the key bit is the -b (for binary).
I also did this on Cygwin - your flavour of unix might behave differently.

sed -b 's/LOB ("BIG_FIELD") STORE AS SECUREFILE (TABLESPACE "XYZ_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION AUTO CACHE LOGGING NOCOMPRESS KEEP_DUPLICATES STORAGE(INITIAL 106496 NEXT 1048576 MINEXTENTS 1 BUFFER_POOL DEFAULT))//g' export.dmp > meddled_with.dmp
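A toy run of the same shape of command, against an invented and much shorter line (GNU/Cygwin sed is assumed for -b; against a real dump the clause must be reproduced byte for byte, as in the command above):

```shell
# Invented, abbreviated stand-in for the SECUREFILE clause being stripped.
clause='LOB ("BIG_FIELD") STORE AS SECUREFILE (TABLESPACE "XYZ_DATA" CACHE LOGGING)'
printf 'CREATE TABLE "T" ("BIG_FIELD" CLOB) %s NOLOGGING\n' "$clause" > export.dmp

# -b keeps sed from translating line endings on Cygwin/Windows.
sed -b "s/$clause//" export.dmp > meddled_with.dmp

cat meddled_with.dmp
```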

export.dmp is your .dmp, and meddled_with.dmp will be the result of your shenanigans...

I needed to do this because 12c supports extended datatypes and the table I was exporting made use of them. Unfortunately the import didn't like that one little bit, so I needed to trim the clause off. Hope this helps someone, as I spent a couple of hours faffing about (let's be honest - someone might benefit from this hacky but necessary behaviour in the future).

Please (not you, Tom) don't decry the people that do this... sometimes it's just expedient.
Connor McDonald
April 27, 2017 - 6:27 am UTC

Ouch...

I'd be surprised if that could not have been done in a supported way, via

- impdp to sqlfile
- edit sqlfile
- run sqlfile
- impdp with table exists action being proceed anyway

Replace old schema name to new schema name

Jems, April 27, 2017 - 7:46 am UTC

I exported a schema named "OLD_SCHEMA" using the exp utility.
On the same database server I created a new schema called "NEW_SCHEMA", and I imported with the imp utility using fromuser=OLD_SCHEMA touser=NEW_SCHEMA.
The schema imported, but when I look at procedures and other objects I still find the OLD_SCHEMA name inside the new schema.
How can I replace OLD_SCHEMA with NEW_SCHEMA?

What I have in NEW_SCHEMA:
create or replace procedure OLD_SCHEMA.get_sale_report()........

But I want in NEW_SCHEMA:
create or replace procedure NEW_SCHEMA.get_sale_report()........
Connor McDonald
April 27, 2017 - 10:32 am UTC

This is one of the shortcomings of the old exp/imp.

Use Datapump.

More to Explore

Data Pump

All of the database utilities including Data Pump are explained in the Utilities guide.