Import takes more time
Shanmugam, October 09, 2001 - 9:34 am UTC
Thank you very much Mr.Tom. The information is very useful to me.
About COMPRESS option
A reader, October 09, 2001 - 3:59 pm UTC
Could you explain how export handles the COMPRESS option?
According to the Oracle documentation, it manages the initial extent for the table data.
Since Oracle 8.1.6 and up, if we use locally managed tablespaces,
the INITIAL and NEXT sizes are the same, so what will compress do, and
which option should we use? The default is 'Y'.
Could you elaborate on this option please.
1. If we choose the default value 'Y', when we import into another DB, what impact does it have on
the new DB's tablespace storage settings?
2. If we choose the value 'N', what is the impact when importing into another DB?
Assume DB1 tablespace A is using EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K
and the new DB2 tablespace is using the same storage setting. Or
3. What if I use a larger uniform size, i.e. 128K?
October 09, 2001 - 4:09 pm UTC
I do not like compress=y. Totally wish it did not exist.
Compress will do what it always did: sum up the existing extents and ask for that much space. When you import, the LMT will look at your initial request, divide by its extent size and give you that many extents.
1) it'll cause you to allocate lots of extents up front in order to satisfy the compress=y. Eg: if the existing table has 50m allocated to it and you exp with compress=y -- the initial will be 50m. If you import into an LMT with 1meg extents -- it'll allocate 50 of them initially.
2) it'll look at the initial and do the same as #1, but the initial will be smaller than in #1 so it'll allocate fewer extents.
3) same as two
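For illustration (user, table, and file names here are made up), exporting with compress=n keeps the table's original INITIAL/NEXT in the generated DDL instead of summing up the extents:

exp userid=scott/tiger tables=big_table file=big_table.dmp compress=n

and, if you want to see the CREATE TABLE statement imp would issue without actually loading any data:

imp userid=scott/tiger file=big_table.dmp full=y indexfile=check.sql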
A reader, October 09, 2001 - 4:32 pm UTC
From the full database export dump file -- is there any benefit in importing twice? First only the objects and their data, then importing only the indexes with rows=n, ignore=y and indexes=y. Or is it the same as doing a full import?
October 09, 2001 - 4:44 pm UTC
That would be the same.
Benefits to be had would be via:
o don't use import to create the indexes -- use imp ... indexfile=temp.sql to
generate a script with the create index statements. Then modify them to have nologging
and parallel and run them in sqlplus; that is faster.
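A rough sketch of that approach (the file and index names are invented):

imp userid=scott/tiger file=full.dmp full=y indexfile=create_indexes.sql

Edit create_indexes.sql so each statement reads something like "create index emp_idx on emp(ename) nologging parallel 4", then load the data without indexes and finish with the script:

imp userid=scott/tiger file=full.dmp full=y indexes=n ignore=y
sqlplus scott/tiger @create_indexes.sql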
import is slow
A reader, April 06, 2002 - 4:05 pm UTC
Hi Tom
Recently I performed a database export from one machine and an import into another, using these parameters:
userid=sys/sys
file=exp.dmp
log=imp.log
buffer=10000000
commit=y
ignore=y
full=y
sort_area_size set to 30MB in init.ora (bounced database afterwards)
sort_multiblock_read_count to 8
db_block_size 8k
redo log file size 100MB with 6 groups
all tablespaces were locally managed except SYSTEM and RBS (due to numerous bugs when using LMT for RBS)
The dump file was 2GB (it included a Designer repository). I have done this twice before, but on other servers with RAID 10 (on those RAID 10 servers the tablespaces were dictionary managed); the one I did a few days back was RAID 5.
With RAID 10 I remember the dump file was about 1.7GB and it took roughly 2 hours, but with the 2GB dump file it took almost 9 hours on RAID 5!!! Does RAID 5 really make so much difference?
Also I didn't generate the indexfile because, since I have only one CPU, I didn't consider it necessary (ok, I know I can use nologging, but I didn't because I was in a rush and couldn't spend time adding a nologging clause to around 1000 indexes). The index creation part is a mystery to me as well: does the index of a table get created after the data is imported into the table, or does this not always happen (are some indexes created with the table before the data is imported)?
Later I performed another import of the Designer repository; that alone took 3 hours (what a nightmare tool!) because it had so many pl/sql packages that I thought the data dictionary contention was really high!
Do you think it is a good idea to use the buffer and commit parameters in import, or is it a better idea to take all RBS offline, leave one big one online, and ignore these two parameters to decrease the commit frequency?
Finally, what is the parameter recalculate_statistics for? I thought import runs analyze table during the import already.
Note: by the way, if we set analyze=n should that speed up the import? Or are table statistics obtained from the dump file and not by analyze table etc? Although I read in the documentation that with analyze=y import actually runs
ANALYZE TABLE ETC
and I think the documentation is wrong about the export parameter STATISTICS. It says the default is estimate, but I did a test: I created a dummy table, ran analyze table xxx compute statistics, then exported the table with the default STATISTICS value, but when I check the dump file it actually says
ALTER TABLE XXX COMPUTE STATISTICS instead of estimate
April 06, 2002 - 4:21 pm UTC
The file system type will have NO (zero, zippo) bearing on the size of the dmp file.
RAID 5 is NOTORIOUS for being slow. I've seen a create tablespace take 14 times longer on raid 5 than on 0+1. So yes, the raid array could definitely be having a serious impact on you.
Indexes are created after the data is loaded by IMP.
I do everything as a big transaction personally. frequent commits slow you down.
From the docs for imp:
<quote>
In certain situations, the importer might want to always use ANALYZE statements
rather than precomputed statistics. For example, the statistics gathered from a
fragmented database may not be relevant when the data is imported in a
compressed form. In these cases, the importer may specify
RECALCULATE_STATISTICS=Y to force the recalculation of statistics.
</quote>
By default, we'll just stuff the stats from the exported database into the new database, recalc says "don't do that, compute them for real"
Also from the docs:
<quote>
If statistics are requested at export time and analyzer statistics are available for a
table, Export will place the ANALYZE command to recalculate the statistics for the
table into the dump file. In certain circumstances, Export will also write the
precalculated optimizer statistics for tables, indexes, and columns to the dump file.
See the description of the Export parameter STATISTICS on page 1-24 and the
Import parameter RECALCULATE_STATISTICS on page 2-25.
</quote>
so analyze will sometimes just stuff the old stats in.
If you HAD stats, it'll gather them the way you did it, if you didn't have stats, then defaults kick in.
recalculate_statistics does work?
A reader, April 07, 2002 - 9:52 am UTC
Hi
I did a small test: I created a table called STATS with 57344 rows and did analyze table .... sample 10 rows, which yielded 57936 in the num_rows column of user_tables. Good, this is what I want; I don't want exact statistics.
I then export table with
userid=lsc/lsc
file=exp.dmp
log=exp.log
tables=(lsc.stats)
then import to another user with
userid=lsc/lsc
file=exp.dmp
log=imp.log
fromuser=lsc
touser=ctrl
analyze=y
recalculate_statistics=n
import actually ran analyze table compute statistics, because instead of my original stat of 57936 rows it now has the correct num_rows, 57344.
So basically I was telling it not to recalculate statistics but it did anyway -- why? And with COMPUTE (otherwise how did import get the right statistics)?
select owner, num_rows, table_name from dba_tables
where table_name='STATS'
OWNER                            NUM_ROWS TABLE_NAME
------------------------------ ---------- ------------------
CTRL                                57344 STATS
LSC                                 57936 STATS
April 07, 2002 - 11:44 am UTC
from the docs:
<quote>
However, the precomputed optimizer statistics will not be used at export time if:
o A table has indexes with system-generated names (including LOB indexes)
o A table has columns with system-generated names
o There were row errors while exporting
o The client character set or NCHARSET does not match the server character set
or NCHARSET
o You have specified a QUERY clause
o Only certain partitions or subpartitions are to be exported
o Tables have indexes based upon constraints that have been analyzed (check,
unique, and primary key constraints)
o Tables have indexes with system-generated names that have been analyzed
(IOTs, nested tables, type tables that have specialized constraint indexes)
Note: Specifying ROWS=N does not preclude saving the precomputed statistics
in the Export file. This allows you to tune plan generation for queries in a
nonproduction database using statistics from a production database.
</quote>
so, which of these might be applying to you.
Here (my character set = nchar character set) we can see this can work if all of the rules are followed:
ops$tkyte@ORA817DEV.US.ORACLE.COM> create table t as select * from all_objects;
Table created.
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;
17227 rows created.
ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.
ops$tkyte@ORA817DEV.US.ORACLE.COM> insert /*+ append */ into t select * from all_objects;
17227 rows created.
ops$tkyte@ORA817DEV.US.ORACLE.COM> commit;
Commit complete.
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select count(*) from t;
COUNT(*)
----------
51681
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> analyze table t estimate statistics sample 10 rows;
Table analyzed.
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> select num_rows from user_tables where table_name = 'T';
NUM_ROWS
----------
51112
ops$tkyte@ORA817DEV.US.ORACLE.COM>
ops$tkyte@ORA817DEV.US.ORACLE.COM> !exp userid=/ tables=t
Export: Release 8.1.7.2.0 - Production on Sun Apr 7 11:43:14 2002
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
With the Partitioning option
JServer Release 8.1.7.2.0 - Production
Export done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table T 51681 rows exported
Export terminated successfully without warnings.
ops$tkyte@ORA817DEV.US.ORACLE.COM> drop table t;
Table dropped.
ops$tkyte@ORA817DEV.US.ORACLE.COM> !imp userid=/ full=t recalculate_statistics=n
Import: Release 8.1.7.2.0 - Production on Sun Apr 7 11:43:19 2002
(c) Copyright 2000 Oracle Corporation. All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.7.2.0 - Production
With the Partitioning option
JServer Release 8.1.7.2.0 - Production
Export file created by EXPORT:V08.01.07 via conventional path
import done in WE8ISO8859P1 character set and WE8ISO8859P1 NCHAR character set
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table "T" 51681 rows imported
Import terminated successfully without warnings.
ops$tkyte@ORA817DEV.US.ORACLE.COM> select num_rows from user_tables where table_name = 'T';
NUM_ROWS
----------
51112
ops$tkyte@ORA817DEV.US.ORACLE.COM>
oops
A reader, April 07, 2002 - 9:59 am UTC
sorry -- import used analyze table stats estimate statistics; I verified it in v$sql. But this got me another question, because you said that import will use the analyze method I used on my original table. So I did another test: I did analyze table stats compute statistics, then exported again and imported again, but instead of using compute it uses estimate (the default export value). So I guess export uses its own default parameters?
Compress=Y
Pablo Rovedo, April 07, 2002 - 11:47 am UTC
You have said several times that you hate compress=Y, and I completely agree, but why is it the default? Is it the default in 9i as well?
Thanks
Pablo R.
April 07, 2002 - 12:28 pm UTC
It's a hold over from the really olden days -- before unlimited extents. When 2k blocks ruled and there was a hard limit of 121 extents/object -- the major number one use of export was to "compress" a table down into fewer extents as it got close to this number.
It's a legacy
if you import to the same user then statistics are not recalculated
A reader, April 07, 2002 - 12:35 pm UTC
hi tom
I did exactly the same example as you and I did get your results, so I was puzzled. But then, if instead of importing the table to the same user (as you did) I import it to another user, the statistics are recalculated. Importing to a different user is the only difference between my previous tests and yours.
April 07, 2002 - 2:31 pm UTC
Looks like we need to add "and the owners change during the exp/imp process" to the list.
I concur -- imp of data using fromuser/touser will re-analyze.
Import terminated successfully with warnings.
A reader, February 12, 2003 - 7:24 am UTC
Hi Tom
I am trying to import one table .. I am getting the following error
-----
ORA-01658: unable to create INITIAL extent for segment in tablespace TS01
Import terminated successfully with warnings.
-----
I am sure there is a lot of free space in the tablespace, and the drive where the datafile is located has more free space than the size of the dump file.
Could you please tell me what could be wrong and how to overcome this.
Thanks
Kumar
February 12, 2003 - 8:36 am UTC
do this
imp userid=u/p tables=TABLE_THAT_IS_FAILING indexfile=t.sql
edit t.sql, grab the create table statement and run it in sqlplus. It'll fail, now you can fix the initial to be more reasonable, create that table again.
Then, you can use ignore=y to import the data.
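For illustration only (the numbers are invented), the create table statement in t.sql might contain STORAGE(INITIAL 524288000 NEXT 1048576 ...); change the INITIAL to something the tablespace can actually satisfy, e.g. STORAGE(INITIAL 1048576 NEXT 1048576 ...), run the statement in sqlplus, and then:

imp userid=u/p tables=TABLE_THAT_IS_FAILING ignore=y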
You probably are using dictionary managed tablespaces and have some serious fragmentation going on. You may well have gobs of free space -- it is just not contiguous in the tablespace itself -- hence it is useless.
see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:285415955510
there is a script that shows you the free space by tablespace but, very importantly, the LARGEST contiguous free space.
Try to use LMT (locally managed tablespaces) instead, you'll never regret it.
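A minimal example of such a tablespace (the datafile name and sizes are made up):

create tablespace ts_data
datafile '/u01/oradata/mydb/ts_data01.dbf' size 500m
extent management local uniform size 1m;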
Import Slow (Think Hung) on "Skipping Table table_name"
Meyer, February 21, 2003 - 12:56 pm UTC
Tom,
Have a full export from 8.1.6; the size is just under 2 gigs on NT.
It seems to work fine until it gets to the third table, where it just sits there (for over 24 hours)...with
Skipping table table_name
I have tried the same file on the instance it came from and on another 8.1.7 instance, with the same result.
The structure of the table is
NUMBER(10)
NUMBER(10)
VARCHAR2(3)
VARCHAR2(14)
NUMBER(2)
With approx 1.8 million rows.
I have tried pressing "enter" in case it is a display problem.
These tests were done on the file using show=y full=y
With tables=(table) I did not have the output
But, it just sat there when I did tables=(table_name)
Also...the DB seems "healthy" for create tables (based on a version of the table that will not work)...w/ very fast response times.
What type of thing could cause this, where do I start to troubleshoot. Any solution that just gets me the one table I want would work. (But I would like to understand what happened as well).
Thanks,
Meyer
February 21, 2003 - 3:20 pm UTC
I'll have to refer you to support.
If it were unix with all of the great tools -- I could tell you how to see what the process is doing (if anything).
Under NT, I don't know how to tell what a process is or isn't doing -- and that is what you need to do here.
Did you ftp this file? did you do it BINARY???
MORE INFORMATION FOR ABOVE POST
Meyer, February 21, 2003 - 2:10 pm UTC
Tom, it only happens with a .dmp file that was copied across our network (not FTP'd) using the win(use-your imagination) standard copy command. Then copied back...to the same server....the original version of the file (that never left the server)...it works fine. Don't ask why I did not use that one first...just didn't.
Seen this before?
Thanks,
Meyer
February 21, 2003 - 3:22 pm UTC
Nope -- are they the same size? if you do a binary "diff" on them (fc /? will tell you how to do that) are they the "same"?
maybe you got unlucky and have hit a bad chunk of disk and it is just waiting for the disk to return from a read and it never is.
Results of FC
Meyer, February 21, 2003 - 5:34 pm UTC
I do not think it is a bad chunk of disk, because I tried to import it on two servers (same result on both).
File sizes are the EXACT same bytes...
dates/times are 6 minutes apart (time to copy over our network)?
To test FC (never used)....I created two text files and put the same words in them...With the /B it does the binary compare..
fc /B first_file second_file
With the text files...it said:
FC: no differences encountered
The FC on the export files give me (LOTS OF) this:
14682383: 06 00
14682385: 36 00
14682386: 32 00
14682387: 35 00
14682388: 37 00
14682389: 38 00
1468238A: 38 00
1468238B: FE 00
(just pasted a few lines...I did a ctrl-C)
It seems copying my export files is corrupting them? I also checked another export that was copied -- same thing... At this point I'm not expecting a solution...but do you have any advice?
Thanks,
Meyer
February 21, 2003 - 6:50 pm UTC
that would have to be a windows bug -- you are getting binary zeroes stuck in there. Garbage in, garbage out -- imp must be getting sent off the deep end with a totally corrupt file.
you would expect the files to be 100% binary "the same", they are not and the only thing you did was copy them. that they are dmp files isn't relevant -- they could be word documents or anything. windows is corrupting your files.
ora 6550 and ora 922 during import
Fernanda, February 25, 2003 - 9:48 am UTC
Tom, can you tell me what may be causing the errors below (ora-6550 and ora-922) ? I'm both exporting from and importing to Oracle9i on Windows.
Connected to: Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 - Production
Export file created by EXPORT:V09.02.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
.. importing CIPLAN's objects into CIPLAN
IMP-00017: following statement failed with ORACLE error 6550:
"BEGIN "
"sys.dbms_logrep_imp.instantiate_schema(schema_name=>'CIPLAN', export_db_nam"
"e=>'WFROTA.US.ORACLE.COM', inst_scn=>'10126426');"
"COMMIT; END;"
IMP-00003: ORACLE error 6550 encountered
ORA-06550: line 2, column 1:
PLS-00201: identifier 'SYS.DBMS_LOGREP_IMP' must be declared
ORA-06550: line 2, column 1:
PL/SQL: Statement ignored
IMP-00017: following statement failed with ORACLE error 922:
"CREATE TABLE "ADIANT_V" ("CODIGO" VARCHAR2(13) NOT NULL ENABLE, "ITEM" VARC"
"HAR2(3) NOT NULL ENABLE, "NFROTA" VARCHAR2(10), "NFUN" VARCHAR2(10), "DATA""
" DATE, "VALOR" NUMBER(15, 4), "OBSERVACAO" VARCHAR2(20)) PCTFREE 10 PCTUSE"
"D 40 INITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GRO"
"UPS 1) TABLESPACE "CIPLAN" LOGGING NOCOMPRESS"
IMP-00003: ORACLE error 922 encountered
ORA-00922: missing or invalid option
IMP-00017: following statement failed with ORACLE error 922:
"CREATE TABLE "CAMPOS" ("CA_ORDEM" NUMBER(15, 0) NOT NULL ENABLE, "TA_NOME" "
"VARCHAR2(30) NOT NULL ENABLE, "CA_NOME" VARCHAR2(30) NOT NULL ENABLE, "CA_O"
"BRIGATORIO" CHAR(1), "CA_APELIDO" VARCHAR2(120), "CA_SELECIONADO" CHAR(1), "
""CA_PROCURA" CHAR(1), "CA_SORT" CHAR(1), "CA_TIPO" VARCHAR2(20), "CA_FILTRO"
"" CHAR(1), "CA_MANDANTE" CHAR(1), "CA_TAMANHO" NUMBER(15, 0), "CA_DATACRIAC"
"AO" DATE, "CA_TIPOREL" VARCHAR2(20), "CA_MSG" VARCHAR2(50), "CA_CHAVE" CHAR"
"(1), "CA_GRID" NUMBER(15, 0)) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 25"
"5 STORAGE(INITIAL 65536 FREELISTS 1 FREELIST GROUPS 1) TABLESPACE "CIPLAN" "
"LOGGING NOCOMPRESS"
IMP-00003: ORACLE error 922 encountered
ORA-00922: missing or invalid option
(... several more ora-922 errors ... )
Import terminated successfully with warnings.
February 25, 2003 - 10:00 am UTC
problem is that the export was done with 9.2 software and the import is being done with 9.0 stuff.
features like "nocompress" (the opposite of compress) were added in 9.2
DMP files are not backwards compatible -- only forwards.
You need to use 9.0 EXP to create a dmp file that 9.0 IMP can handle.
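A sketch of what that looks like in practice (the connect strings are made up): run the 9.0 client's exp over SQL*Net against the 9.2 database, then import with the matching 9.0 imp, for example:

exp userid=system/manager@db92 owner=ciplan file=ciplan.dmp
imp userid=system/manager@db901 file=ciplan.dmp fromuser=ciplan touser=ciplan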
imp slow another possible reason
Kirill Richine, February 25, 2003 - 2:42 pm UTC
imp can be slowed down, very significantly, if you have the following conditions:
- a table with long raw
- commit=y
Unlike other datatypes, long raw does not respect buffer=N, and does not perform any bulk dml. Consequently, it will commit after every row inserted!!!
Typically, if this is the case, v$session_wait will show a lot of log_file_sync waits for the imp session.
Overall, imp performance, just like any other activity, can be diagnosed by reviewing v$session_wait and identifying the event most frequently waited on.
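A minimal check while the import is running (it assumes you have already looked up the imp session's SID in v$session):

select event, seconds_in_wait
  from v$session_wait
 where sid = &imp_sid;

If nearly every sample shows a "log file sync" wait, the per-row commits are the problem.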
Incremental Export
shanmugam, February 25, 2003 - 11:33 pm UTC
Mr.Tom,
If the current dmp file size is say 20-30 GB. Is it advisable to do incremental export on every day and full export on every 15 days? Is any major disadvantages on Incremental Backup?
February 26, 2003 - 8:41 am UTC
it would be advisable to stop using EXP!
I hope you are not using exp as a backup tool.
Backups are the only backups.
If my dmp file got to be 20-30gig -- I would stop doing it. what is the purpose/goal of doing this?
incrementals are not supported anymore in the current software, so I would not suggest using them
import with Oracle 8.1.7 client
A reader, February 26, 2003 - 4:07 pm UTC
Hi Tom,
Can we use the 8.1.7 client import utility to import an 8.0.6 dump into an 8.0.6 database?
February 27, 2003 - 7:14 am UTC
no
you will use the 8.0.6 EXP against 8i
you will use the 8.0.6 IMP against 8.0.6
what one unspecified parameter can do....
Alvin, May 26, 2003 - 10:25 pm UTC
I'm having trouble importing data... it takes forever. Will try compress=N; I thought 'N' was the default.
And I get archiving errors -- that's because the development machine is much, much smaller in scale. Does imp/exp generate that much redo? Can I minimize it?
Thanks.
May 27, 2003 - 7:30 am UTC
imp generates enough redo to protect the inserts it is performing. It is AS IF you were in sqlplus typing the inserts in yourself. That amount of redo will be generated.
I'm confused...
Alvin, June 02, 2003 - 1:11 am UTC
You gave this answer to the first question.
---
1) insufficient tablespace - that means you didn't have enough room. this
typically happens if you do the default export which uses "compress=y" which
generates a create table statement that has an INITIAL extent equal to the sum
of all allocated extents. Recommend you always export with compress=N.
---
Now compress=N is the worst of both settings ? hmmm...
June 02, 2003 - 7:25 am UTC
sorry about that - i read it wrong. fixed it.
confused
Thirumaran.G, June 15, 2003 - 5:22 am UTC
hi Tom ,
I am confused about what Alvin from the Philippines asked and the solution you gave.
Alvin from Philippines
=========================
You gave this answer to the first question.
---
1) insufficient tablespace - that means you didn't have enough room. this typically happens if you do the default export which uses "compress=y" which generates a create table statement that has an INITIAL extent equal to the sum
of all allocated extents. Recommend you always export with compress=N.
---
Now compress=N is the worst of both settings ? hmmm...
Followup:
sorry about that - i read it wrong. fixed it.
============================================================
Why is compress=N the worst of both settings??
I will be thankful if you could explain this. Which is recommended to use, compress=y or compress=n?
Thanks in advance
Thiru
June 15, 2003 - 9:44 am UTC
sorry for the confusion.
COMPRESS=Y <<<=== this is bad
COMPRESS=N <<<=== this is good
need clarification
thirumaran.g, June 16, 2003 - 2:16 pm UTC
dear tom
You suggest not using consistent=y during export. You had also posted in a forum that you do not like to use this.
In this link below
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:832427034486,
you are suggesting using consistent=y during a full export.
[a complete database export (should use consistent=y). That will generate an export with everything in it.]
During a full db export, can I proceed with consistent=y or not?
seeking your valuable advice on this
thanks
thirumaran.g
June 17, 2003 - 6:52 am UTC
no, i've said "compress=Y" is bad.
I don't believe I've said anything one way or the other about consistent. It is an option, you might NEED to use it, you might not.
A reader, July 28, 2003 - 4:29 pm UTC
import very slow
atul, November 10, 2003 - 1:13 am UTC
Hi,
We are doing an import using the Designer 6i client.
It's very slow.
I found out that sort_area_size is very low, only 20K.
Is the import process related to the sort_area_size parameter?
What other parameters in the init.ora file can affect the performance of the import process?
Thanks.
Atul
November 10, 2003 - 7:22 am UTC
sort area size will affect index builds.
you need to find out WHY the import is slow however, not just start flipping switches willy nilly to see if something might accidentally sort of help a little bit.
diagnose the issue -- see what import is waiting on, waiting for, find the bottleneck.
then and only then can we "fix it"
where to find ?
Ashiq Shamsudeen, November 10, 2003 - 8:01 am UTC
Hi Tom,
From your quote "diagnose the issue -- see what import is waiting on, waiting for, find the bottleneck."
You told us to find the bottleneck, but tell me where to look for it. I mean, in which file, etc.?
November 10, 2003 - 12:03 pm UTC
it won't be "a file"
you can either
o run a 10046 trace on the import with level 12 (get binds) and review the trace file (in 9i, tkprof will format the wait events nicely)
o if you are "almost single user" use statspack -- take a snap, wait 10 minutes, take a snap -- review report.
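For the statspack route, a minimal sketch (assuming statspack is already installed and you are in sqlplus as the statspack owner, typically PERFSTAT):

exec statspack.snap
-- let the import run for ten minutes or so, then --
exec statspack.snap
@?/rdbms/admin/spreport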
command to do
Reader, November 10, 2003 - 12:50 pm UTC
tom, what is the exact command to do <quote>10046 trace on the import with level 12 (get binds)</quote> in sqlplus? Thanks.
November 10, 2003 - 2:59 pm UTC
if you install dbmssupp.sql in $ORACLE_HOME/rdbms/admin
exec dbms_support.start_trace_in_session( sid=> NN, serial => NN );
will do it. start the import, query v$session to get the sid/serial
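Putting it together, a sketch (the sid/serial values are whatever v$session reports for your imp session; if memory serves, the waits/binds arguments default to true/false, so ask for binds explicitly to get level 12):

select sid, serial#, username, program from v$session;

exec dbms_support.start_trace_in_session( sid => 12, serial => 345, waits => true, binds => true );
-- ...let imp run for a while...
exec dbms_support.stop_trace_in_session( sid => 12, serial => 345 );

then tkprof the trace file that session produced in udump.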
export with and/or consistent=y -- oracle9iR2
Baqir Hussain, November 11, 2003 - 1:02 pm UTC
Tom,
We are running Oracle9i for OLTP and I would like to take an export of the whole schema with CONSISTENT=N (the default). Even at the quiet time the database is continually being updated by the concurrent users, ~500 of them.
The tables are being updated during an export with consistent=N.
My questions are:
1. If I need to import it to a new machine, would I be able to do that? Is there any kind of consistency (any kind of SCN # ..) required during import?
2. Since the data in the tables is inconsistent, it could cause the web application not to function properly. Can such behavior result from an export with consistent=N?
Your help in this regard will be highly appreciated.
November 11, 2003 - 1:51 pm UTC
1) no, export just dumps data.
2) with consistent=N, you'll be seeing data in tables at different points in time. the tables will potentially be inconsistent with each other.
How to exclude LOBs in exp/imp
A reader, November 11, 2003 - 2:55 pm UTC
LOBs have always been the slowest part both in export and import - in import it's much much worse. Sometimes people just want to get an export or import without LOBs. Is there a good way to exclude them in exp and/or imp? If only there were a "LOB=n" option ;-).
November 12, 2003 - 6:26 am UTC
you would have to specify tables to export and/or tables to import
what if there are hundreds or even thousands of "normal" tables ...
A reader, November 12, 2003 - 12:47 pm UTC
and I just want to exclude one huge LOB table? More precisely, I want to exclude the LOB column. Short of "LOB=n", I guess we can settle with "excluded=<table_names>", if only it's there!
November 12, 2003 - 4:43 pm UTC
there isn't. you can use fine grained access control to temporarily "pretend" there are no rows.
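A rough sketch of that trick (the schema, table, and policy names are all invented): attach a policy function that returns a predicate no row can satisfy, run the conventional-path export, then drop the policy:

create or replace function no_rows_policy( p_schema in varchar2, p_object in varchar2 )
return varchar2
as
begin
    return '1=0';   -- every query against the table will see zero rows
end;
/

begin
    dbms_rls.add_policy( object_schema   => 'SCOTT',
                         object_name     => 'BIG_LOB_TABLE',
                         policy_name     => 'HIDE_ALL_ROWS',
                         function_schema => 'SCOTT',
                         policy_function => 'NO_ROWS_POLICY' );
end;
/

-- exp ... tables=BIG_LOB_TABLE ... (conventional path), then:

exec dbms_rls.drop_policy( 'SCOTT', 'BIG_LOB_TABLE', 'HIDE_ALL_ROWS' );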
but tell me, what would the use of such an export be? if there isn't any data in this big column??
what is the goal here, what are you using export for?
An extra tip
David Cowie, November 12, 2003 - 2:20 pm UTC
For some reason no one ever seems to include using constraints=N as a tip when people ask about exp/imp. Some common ERP applications have an excessive number of constraints in their database, and the constraints can be created using the indexfile after the data is imported. I have seen this one parameter change a 60 hour import into an 8 hour import, including building the constraints and indexes. Results of this will vary based on the number of constraints in your database.
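For example (the file name is invented), something along these lines:

imp userid=u/p file=full.dmp full=y rows=y ignore=y constraints=n indexes=n

and then build the indexes and re-add the constraints from scripts once the data is loaded.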
Import slow
atul, December 17, 2003 - 8:13 am UTC
Hi,
we are doing the import from Designer, so we can't change any import parameters.
Our sort area parameters are given below:
sort_area_size = 262144
sort_area_retained_size = 65536
Could you suggest anything to increase the import speed?
Thanks.
atul
December 18, 2003 - 7:18 am UTC
you can change parameters (who cares where the import is actually from, you can change database parameters).
anyway, you can parallelize the import (run imp many times, using tables= or such).
you could transport the tablespaces instead of conventional import.
use larger buffer=
use commit=n
sort area size will affect index creation, you might want it a tad larger than 256k
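A crude sketch of the parallel approach (the table lists are made up) -- just start several imp sessions at once against disjoint sets of tables from the same dump file:

imp userid=u/p file=exp.dmp tables=(T1,T2,T3) buffer=10000000 commit=n ignore=y &
imp userid=u/p file=exp.dmp tables=(T4,T5,T6) buffer=10000000 commit=n ignore=y &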
long column too large for column buffer size
jamil, December 27, 2003 - 7:34 am UTC
Dear Sir
I am getting this message
IMP-00009: abnormal end of export file
imp-00020: long column too large for column buffer size <20>
imp-00028: partial import of previous table rolled back 1208 rows rolled back
import terminated successfully with warnings.
when I try to import a table that was exported from a previous Oracle 8 database, and my current Oracle is
Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 – Production
And the structure of the table I am trying to import is as follows:
APPLICATION_ID VARCHAR2(15)
MENU_NAME VARCHAR2(15)
OPTION_NUMBER NUMBER(4)
OPTION_TEXT VARCHAR2(70)
COMMAND_LINE VARCHAR2(20)
PROGRAM_TYPE NUMBER(2)
SPECIAL_COND NUMBER(2)
How can I increase the buffer size? Can you give me an example?
Best regards
Jamil
December 27, 2003 - 9:52 am UTC
you'll need to contact support for this one. make sure you binary ftp'ed the file (that the file itself is exactly the same size as it was on the source system)
Export / import from Lower version
Arvind, December 29, 2003 - 1:48 am UTC
Dear Tom,
We have a problem regarding export/import. We have 10 offices located at different geographical sites, and we have heterogeneous Oracle databases -- some on 8.0.3 and some on 8.1.6 and 8.1.7.
We have a program which transfers data between the different offices using the exp/imp utilities. This program is to be executed on the server, so I have to load the Oracle utilities of the lowest version on all the Oracle servers along with the Oracle 8i utilities. That means loading the Oracle 8.0 utilities on all the servers. I already have Oracle working on each server. Can you please let me know how to install the Oracle 8.0 utilities on all the running Oracle 8i servers?
Thanks
December 29, 2003 - 10:29 am UTC
that's going to be problematic, as 8.0.3 is so far out of support and in many cases not even installable on the OS's of today.
I'd opt for plan "b" or plan "c"
plan "b" -- get everyone on supported software and keep them in sync.
plan "c" -- use flat files, see http://asktom.oracle.com/~tkyte/ , and sqlldr.
help importing subpartioned table
Fernando Sánchez, January 23, 2004 - 7:43 am UTC
I've tried to import a subpartitioned table into a 9.2.0 db using the same parameters that worked well for a partitioned table in an 8.1.7 db, but the only thing I get is the 922 error.
Any idea about this?
January 23, 2004 - 8:14 am UTC
hmmm,
My car won't start this morning? No, I won't tell you
a) what the gas gauge is reading
b) if it is making any noises
c) if the engine turns over
d) how cold it is outside
e) what I attempted to do to get it started...
any ideas for me? ;)
how's about SHOWING us what parameters you mean eh? cut and pastes are best of all -- no edits.
You are right
Fernando Sánchez, January 23, 2004 - 8:34 am UTC
Our parameter files are:
EXPORT ->
FILE=bck_historico.dmp
LOG=historico_exp.log
TABLES=(TB_HIS_OPER22)
CONSISTENT=Y
FEEDBACK=100
STATISTICS=NONE
IMPORT ->
FILE=bck_historico.dmp
LOG=historico_imp.log
TABLES=(TB_HIS_OPER22)
IGNORE=Y
COMMIT=Y
FEEDBACK=100
We detected that the problem has to do with subpartitions, because a test with a plain partitioned table works, but when import tries this statement:
CREATE TABLE TB_HIS_OPER22 (
SQ_HOP_01 NUMBER(10, 0) NOT NULL ENABLE,
CO_MSDN_V VARCHAR2(9) NOT NULL ENABLE,
CO_OPER_V VARCHAR2(16) NOT NULL ENABLE,
DS_OPER_V VARCHAR2(64) NOT NULL ENABLE,
FX_OPER_D DATE NOT NULL ENABLE,
CO_USER_V VARCHAR2(15),
CO_REQU_V VARCHAR2(15) NOT NULL ENABLE,
DS_REAS_V VARCHAR2(64),
CO_TIP_PROD_V VARCHAR2(8) NOT NULL ENABLE,
NO_COD_PROD_V VARCHAR2(8) NOT NULL ENABLE,
DS_PROD_V VARCHAR2(32),
SQ_PROD_V VARCHAR2(10) NOT NULL ENABLE,
DS_INI_PROD_V VARCHAR2(64),
TO_QUAN_N NUMBER(10, 0),
FX_INI_VALI_D DATE,
FX_FIN_VALI_D DATE)
PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLESPACE TSP_HOP_D_01 LOGGING PARTITION
BY RANGE (FX_OPER_D ) SUBPARTITION BY HASH (CO_MSDN_V ) SUBPARTITIONS
10 SUBPARTITION TEMPLATE( SUBPARTITION SP1 VALUES () , SUBPARTITION SP2
VALUES () , SUBPARTITION SP3 VALUES () , SUBPARTITION SP4 VALUES () ,
SUBPARTITION SP5 VALUES () , SUBPARTITION SP6 VALUES () , SUBPARTITION
SP7 VALUES () , SUBPARTITION SP8 VALUES () , SUBPARTITION SP9 VALUES
() , SUBPARTITION SP10 VALUES () ) (PARTITION TB_HIS_OPER_MAX VALUES
LESS THAN (MAXVALUE) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 STORAGE(
INITIAL 512000 NEXT 307200) TABLESPACE TSP_HOP_D_01 LOGGING (
SUBPARTITION TB_HIS_OPER_MAX_SP1 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP2 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP3 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP4 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP5 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP6 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP7 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP8 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP9 TABLESPACE TSP_HOP_D_01,
SUBPARTITION TB_HIS_OPER_MAX_SP10 TABLESPACE TSP_HOP_D_01 ) )
The problem appears.
How can we import subpartitioned tables ???
Thanks in advance.
January 23, 2004 - 9:37 am UTC
ummm, WHAT PROBLEM???
you say
"having a problem importing"
but you show us export parameters.
you say
"the problem appears"
but give us no clue as to exactly WHAT the problem is........
trying to explain the problem
Fernando Sánchez, January 23, 2004 - 11:42 am UTC
Sorry Tom, I haven't explained it well.
The export was made with these contents in the parameter file:
FILE=bck_historico.dmp
LOG=historico_exp.log
TABLES=(TB_HIS_OPER22)
CONSISTENT=Y
FEEDBACK=100
STATISTICS=NONE
And the import with these ones:
FILE=bck_historico.dmp
LOG=historico_imp.log
TABLES=(TB_HIS_OPER22)
IGNORE=Y
COMMIT=Y
FEEDBACK=100
And the import failed. Trying to see why it had failed (error 922), we "extracted" from bck_historico.dmp the create table script I showed before, ran it, and got the same error (922); that's why we think the problem is there.
I hope it is clear.
Thanks again.
January 23, 2004 - 6:30 pm UTC
how's about a step by step cut and paste of the error -- the entire thing.
it seems to be a parse error
Fernando Sánchez, January 26, 2004 - 4:57 am UTC
Here's the entire thing (8.1.7):
1. EXPORT: exp USERID=user/pwd PARFILE=exp_parfile.txt
and this is the resulting log (everything seems OK)
Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.3.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table                          TB_HIS_OPER
. . exporting composite partition            TB_HIS_OPER_20031027
. . exporting subpartition         TB_HIS_OPER_20031027_SP1          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP2          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP3          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP4          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP5          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP6          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP7          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP8          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP9          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031027_SP10         0 rows exported
. . exporting composite partition            TB_HIS_OPER_20031103
. . exporting subpartition         TB_HIS_OPER_20031103_SP1          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP2          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP3          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP4          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP5          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP6          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP7          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP8          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP9          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20031103_SP10         0 rows exported
/* A LOT MORE LINES OF THE SAME KIND */
. . exporting composite partition            TB_HIS_OPER_20040216
. . exporting subpartition         TB_HIS_OPER_20040216_SP1          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP2          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP3          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP4          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP5          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP6          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP7          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP8          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP9          0 rows exported
. . exporting subpartition         TB_HIS_OPER_20040216_SP10         0 rows exported
Export terminated successfully without warnings.
2. IMPORT: imp USERID=user/pwd PARFILE= imp_parfile.txt with this log:
Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing LOGAN3_ADMIN's objects into LOGAN3_ADMIN
IMP-00017: following statement failed with ORACLE error 922:
"CREATE TABLE "TB_HIS_OPER" ("SQ_HOP_01" NUMBER(10, 0) NOT NULL ENABLE, "CO_"
"MSDN_V" VARCHAR2(9) NOT NULL ENABLE, "CO_OPER_V" VARCHAR2(16) NOT NULL ENAB"
"LE, "DS_OPER_V" VARCHAR2(64) NOT NULL ENABLE, "FX_OPER_D" DATE NOT NULL ENA"
"BLE, "CO_USER_V" VARCHAR2(15), "CO_REQU_V" VARCHAR2(15) NOT NULL ENABLE, "D"
"S_REAS_V" VARCHAR2(64), "CO_TIP_PROD_V" VARCHAR2(8) NOT NULL ENABLE, "NO_CO"
"D_PROD_V" VARCHAR2(8) NOT NULL ENABLE, "DS_PROD_V" VARCHAR2(32), "SQ_PROD_V"
"" VARCHAR2(10) NOT NULL ENABLE, "DS_INI_PROD_V" VARCHAR2(64), "TO_QUAN_N" N"
"UMBER(10, 0), "FX_INI_VALI_D" DATE, "FX_FIN_VALI_D" DATE) PCTFREE 5 PCTUSE"
"D 60 INITRANS 1 MAXTRANS 255 TABLESPACE "TSP_HOP_D_01" LOGGING PARTITION BY"
" RANGE ("FX_OPER_D" ) SUBPARTITION BY HASH ("CO_MSDN_V" ) SUBPARTITIONS 1"
"0 SUBPARTITION TEMPLATE( SUBPARTITION "SP1" VALUES () , SUBPARTITION "SP2" "
"VALUES () , SUBPARTITION "SP3" VALUES () , SUBPARTITION "SP4" VALUES () , S"
"UBPARTITION "SP5" VALUES () , SUBPARTITION "SP6" VALUES () , SUBPARTITION ""
"SP7" VALUES () , SUBPARTITION "SP8" VALUES () , SUBPARTITION "SP9" VALUES ("
") , SUBPARTITION "SP10" VALUES () ) (PARTITION "TB_HIS_OPER_20031020" VALU"
"ES LESS THAN (TO_DATE(' 2003-10-20 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_08" LOGGING (SUBPARTITION "TB_HIS_OPER_20031020_SP1" TABL"
"ESPACE "TSP_HOP_D_08", SUBPARTITION "TB_HIS_OPER_20031020_SP2" TABLESPACE "
""TSP_HOP_D_08", SUBPARTITION "TB_HIS_OPER_20031020_SP3" TABLESPACE "TSP_HO"
"P_D_08", SUBPARTITION "TB_HIS_OPER_20031020_SP4" TABLESPACE "TSP_HOP_D_08""
", SUBPARTITION "TB_HIS_OPER_20031020_SP5" TABLESPACE "TSP_HOP_D_08", SUBPA"
"RTITION "TB_HIS_OPER_20031020_SP6" TABLESPACE "TSP_HOP_D_08", SUBPARTITION"
" "TB_HIS_OPER_20031020_SP7" TABLESPACE "TSP_HOP_D_08", SUBPARTITION "TB_HI"
"S_OPER_20031020_SP8" TABLESPACE "TSP_HOP_D_08", SUBPARTITION "TB_HIS_OPER_"
"20031020_SP9" TABLESPACE "TSP_HOP_D_08", SUBPARTITION "TB_HIS_OPER_2003102"
"0_SP10" TABLESPACE "TSP_HOP_D_08" ), PARTITION "TB_HIS_OPER_20031027" VALU"
"ES LESS THAN (TO_DATE(' 2003-10-27 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_09" LOGGING (SUBPARTITION "TB_HIS_OPER_20031027_SP1" TABL"
"ESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_20031027_SP2" TABLESPACE "
""TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_20031027_SP3" TABLESPACE "TSP_HO"
"P_D_09", SUBPARTITION "TB_HIS_OPER_20031027_SP4" TABLESPACE "TSP_HOP_D_09""
", SUBPARTITION "TB_HIS_OPER_20031027_SP5" TABLESPACE "TSP_HOP_D_09", SUBPA"
"RTITION "TB_HIS_OPER_20031027_SP6" TABLESPACE "TSP_HOP_D_09", SUBPARTITION"
" "TB_HIS_OPER_20031027_SP7" TABLESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HI"
"S_OPER_20031027_SP8" TABLESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_"
"20031027_SP9" TABLESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_2003102"
"7_SP10" TABLESPACE "TSP_HOP_D_09" ), PARTITION "TB_HIS_OPER_20031103" VALU"
"ES LESS THAN (TO_DATE(' 2003-11-03 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_10" LOGGING (SUBPARTITION "TB_HIS_OPER_20031103_SP1" TABL"
"ESPACE "TSP_HOP_D_10", SUBPARTITION "TB_HIS_OPER_20031103_SP2" TABLESPACE "
""TSP_HOP_D_10", SUBPARTITION "TB_HIS_OPER_20031103_SP3" TABLESPACE "TSP_HO"
"P_D_10", SUBPARTITION "TB_HIS_OPER_20031103_SP4" TABLESPACE "TSP_HOP_D_10""
", SUBPARTITION "TB_HIS_OPER_20031103_SP5" TABLESPACE "TSP_HOP_D_10", SUBPA"
"RTITION "TB_HIS_OPER_20031103_SP6" TABLESPACE "TSP_HOP_D_10", SUBPARTITION"
" "TB_HIS_OPER_20031103_SP7" TABLESPACE "TSP_HOP_D_10", SUBPARTITION "TB_HI"
"S_OPER_20031103_SP8" TABLESPACE "TSP_HOP_D_10", SUBPARTITION "TB_HIS_OPER_"
"20031103_SP9" TABLESPACE "TSP_HOP_D_10", SUBPARTITION "TB_HIS_OPER_2003110"
"3_SP10" TABLESPACE "TSP_HOP_D_10" ), PARTITION "TB_HIS_OPER_20031110" VALU"
"ES LESS THAN (TO_DATE(' 2003-11-10 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_11" LOGGING (SUBPARTITION "TB_HIS_OPER_20031110_SP1" TABL"
"ESPACE "TSP_HOP_D_11", SUBPARTITION "TB_HIS_OPER_20031110_SP2" TABLESPACE "
""TSP_HOP_D_11", SUBPARTITION "TB_HIS_OPER_20031110_SP3" TABLESPACE "TSP_HO"
"P_D_11", SUBPARTITION "TB_HIS_OPER_20031110_SP4" TABLESPACE "TSP_HOP_D_11""
", SUBPARTITION "TB_HIS_OPER_20031110_SP5" TABLESPACE "TSP_HOP_D_11", SUBPA"
"RTITION "TB_HIS_OPER_20031110_SP6" TABLESPACE "TSP_HOP_D_11", SUBPARTITION"
" "TB_HIS_OPER_20031110_SP7" TABLESPACE "TSP_HOP_D_11", SUBPARTITION "TB_HI"
"S_OPER_20031110_SP8" TABLESPACE "TSP_HOP_D_11", SUBPARTITION "TB_HIS_OPER_"
"20031110_SP9" TABLESPACE "TSP_HOP_D_11", SUBPARTITION "TB_HIS_OPER_2003111"
"0_SP10" TABLESPACE "TSP_HOP_D_11" ), PARTITION "TB_HIS_OPER_20031117" VALU"
"ES LESS THAN (TO_DATE(' 2003-11-17 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_12" LOGGING (SUBPARTITION "TB_HIS_OPER_20031117_SP1" TABL"
"ESPACE "TSP_HOP_D_12", SUBPARTITION "TB_HIS_OPER_20031117_SP2" TABLESPACE "
""TSP_HOP_D_12", SUBPARTITION "TB_HIS_OPER_20031117_SP3" TABLESPACE "TSP_HO"
"P_D_12", SUBPARTITION "TB_HIS_OPER_20031117_SP4" TABLESPACE "TSP_HOP_D_12""
", SUBPARTITION "TB_HIS_OPER_20031117_SP5" TABLESPACE "TSP_HOP_D_12", SUBPA"
"RTITION "TB_HIS_OPER_20031117_SP6" TABLESPACE "TSP_HOP_D_12", SUBPARTITION"
" "TB_HIS_OPER_20031117_SP7" TABLESPACE "TSP_HOP_D_12", SUBPARTITION "TB_HI"
"S_OPER_20031117_SP8" TABLESPACE "TSP_HOP_D_12", SUBPARTITION "TB_HIS_OPER_"
"20031117_SP9" TABLESPACE "TSP_HOP_D_12", SUBPARTITION "TB_HIS_OPER_2003111"
"7_SP10" TABLESPACE "TSP_HOP_D_12" ), PARTITION "TB_HIS_OPER_20031124" VALU"
"ES LESS THAN (TO_DATE(' 2003-11-24 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_13" LOGGING (SUBPARTITION "TB_HIS_OPER_20031124_SP1" TABL"
"ESPACE "TSP_HOP_D_13", SUBPARTITION "TB_HIS_OPER_20031124_SP2" TABLESPACE "
""TSP_HOP_D_13", SUBPARTITION "TB_HIS_OPER_20031124_SP3" TABLESPACE "TSP_HO"
"P_D_13", SUBPARTITION "TB_HIS_OPER_20031124_SP4" TABLESPACE "TSP_HOP_D_13""
", SUBPARTITION "TB_HIS_OPER_20031124_SP5" TABLESPACE "TSP_HOP_D_13", SUBPA"
"RTITION "TB_HIS_OPER_20031124_SP6" TABLESPACE "TSP_HOP_D_13", SUBPARTITION"
" "TB_HIS_OPER_20031124_SP7" TABLESPACE "TSP_HOP_D_13", SUBPARTITION "TB_HI"
"S_OPER_20031124_SP8" TABLESPACE "TSP_HOP_D_13", SUBPARTITION "TB_HIS_OPER_"
"20031124_SP9" TABLESPACE "TSP_HOP_D_13", SUBPARTITION "TB_HIS_OPER_2003112"
"4_SP10" TABLESPACE "TSP_HOP_D_13" ), PARTITION "TB_HIS_OPER_20031201" VALU"
"ES LESS THAN (TO_DATE(' 2003-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_14" LOGGING (SUBPARTITION "TB_HIS_OPER_20031201_SP1" TABL"
"ESPACE "TSP_HOP_D_14", SUBPARTITION "TB_HIS_OPER_20031201_SP2" TABLESPACE "
""TSP_HOP_D_14", SUBPARTITION "TB_HIS_OPER_20031201_SP3" TABLESPACE "TSP_HO"
"P_D_14", SUBPARTITION "TB_HIS_OPER_20031201_SP4" TABLESPACE "TSP_HOP_D_14""
", SUBPARTITION "TB_HIS_OPER_20031201_SP5" TABLESPACE "TSP_HOP_D_14", SUBPA"
"RTITION "TB_HIS_OPER_20031201_SP6" TABLESPACE "TSP_HOP_D_14", SUBPARTITION"
" "TB_HIS_OPER_20031201_SP7" TABLESPACE "TSP_HOP_D_14", SUBPARTITION "TB_HI"
"S_OPER_20031201_SP8" TABLESPACE "TSP_HOP_D_14", SUBPARTITION "TB_HIS_OPER_"
"20031201_SP9" TABLESPACE "TSP_HOP_D_14", SUBPARTITION "TB_HIS_OPER_2003120"
"1_SP10" TABLESPACE "TSP_HOP_D_14" ), PARTITION "TB_HIS_OPER_20031208" VALU"
"ES LESS THAN (TO_DATE(' 2003-12-08 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_15" LOGGING (SUBPARTITION "TB_HIS_OPER_20031208_SP1" TABL"
"ESPACE "TSP_HOP_D_15", SUBPARTITION "TB_HIS_OPER_20031208_SP2" TABLESPACE "
""TSP_HOP_D_15", SUBPARTITION "TB_HIS_OPER_20031208_SP3" TABLESPACE "TSP_HO"
"P_D_15", SUBPARTITION "TB_HIS_OPER_20031208_SP4" TABLESPACE "TSP_HOP_D_15""
", SUBPARTITION "TB_HIS_OPER_20031208_SP5" TABLESPACE "TSP_HOP_D_15", SUBPA"
"RTITION "TB_HIS_OPER_20031208_SP6" TABLESPACE "TSP_HOP_D_15", SUBPARTITION"
" "TB_HIS_OPER_20031208_SP7" TABLESPACE "TSP_HOP_D_15", SUBPARTITION "TB_HI"
"S_OPER_20031208_SP8" TABLESPACE "TSP_HOP_D_15", SUBPARTITION "TB_HIS_OPER_"
"20031208_SP9" TABLESPACE "TSP_HOP_D_15", SUBPARTITION "TB_HIS_OPER_2003120"
"8_SP10" TABLESPACE "TSP_HOP_D_15" ), PARTITION "TB_HIS_OPER_20031215" VALU"
"ES LESS THAN (TO_DATE(' 2003-12-15 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_16" LOGGING (SUBPARTITION "TB_HIS_OPER_20031215_SP1" TABL"
"ESPACE "TSP_HOP_D_16", SUBPARTITION "TB_HIS_OPER_20031215_SP2" TABLESPACE "
""TSP_HOP_D_16", SUBPARTITION "TB_HIS_OPER_20031215_SP3" TABLESPACE "TSP_HO"
"P_D_16", SUBPARTITION "TB_HIS_OPER_20031215_SP4" TABLESPACE "TSP_HOP_D_16""
", SUBPARTITION "TB_HIS_OPER_20031215_SP5" TABLESPACE "TSP_HOP_D_16", SUBPA"
"RTITION "TB_HIS_OPER_20031215_SP6" TABLESPACE "TSP_HOP_D_16", SUBPARTITION"
" "TB_HIS_OPER_20031215_SP7" TABLESPACE "TSP_HOP_D_16", SUBPARTITION "TB_HI"
"S_OPER_20031215_SP8" TABLESPACE "TSP_HOP_D_16", SUBPARTITION "TB_HIS_OPER_"
"20031215_SP9" TABLESPACE "TSP_HOP_D_16", SUBPARTITION "TB_HIS_OPER_2003121"
"5_SP10" TABLESPACE "TSP_HOP_D_16" ), PARTITION "TB_HIS_OPER_20031222" VALU"
"ES LESS THAN (TO_DATE(' 2003-12-22 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_17" LOGGING (SUBPARTITION "TB_HIS_OPER_20031222_SP1" TABL"
"ESPACE "TSP_HOP_D_17", SUBPARTITION "TB_HIS_OPER_20031222_SP2" TABLESPACE "
""TSP_HOP_D_17", SUBPARTITION "TB_HIS_OPER_20031222_SP3" TABLESPACE "TSP_HO"
"P_D_17", SUBPARTITION "TB_HIS_OPER_20031222_SP4" TABLESPACE "TSP_HOP_D_17""
", SUBPARTITION "TB_HIS_OPER_20031222_SP5" TABLESPACE "TSP_HOP_D_17", SUBPA"
"RTITION "TB_HIS_OPER_20031222_SP6" TABLESPACE "TSP_HOP_D_17", SUBPARTITION"
" "TB_HIS_OPER_20031222_SP7" TABLESPACE "TSP_HOP_D_17", SUBPARTITION "TB_HI"
"S_OPER_20031222_SP8" TABLESPACE "TSP_HOP_D_17", SUBPARTITION "TB_HIS_OPER_"
"20031222_SP9" TABLESPACE "TSP_HOP_D_17", SUBPARTITION "TB_HIS_OPER_2003122"
"2_SP10" TABLESPACE "TSP_HOP_D_17" ), PARTITION "TB_HIS_OPER_20031229" VALU"
"ES LESS THAN (TO_DATE(' 2003-12-29 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_01" LOGGING (SUBPARTITION "TB_HIS_OPER_20031229_SP1" TABL"
"ESPACE "TSP_HOP_D_01", SUBPARTITION "TB_HIS_OPER_20031229_SP2" TABLESPACE "
""TSP_HOP_D_01", SUBPARTITION "TB_HIS_OPER_20031229_SP3" TABLESPACE "TSP_HO"
"P_D_01", SUBPARTITION "TB_HIS_OPER_20031229_SP4" TABLESPACE "TSP_HOP_D_01""
", SUBPARTITION "TB_HIS_OPER_20031229_SP5" TABLESPACE "TSP_HOP_D_01", SUBPA"
"RTITION "TB_HIS_OPER_20031229_SP6" TABLESPACE "TSP_HOP_D_01", SUBPARTITION"
" "TB_HIS_OPER_20031229_SP7" TABLESPACE "TSP_HOP_D_01", SUBPARTITION "TB_HI"
"S_OPER_20031229_SP8" TABLESPACE "TSP_HOP_D_01", SUBPARTITION "TB_HIS_OPER_"
"20031229_SP9" TABLESPACE "TSP_HOP_D_01", SUBPARTITION "TB_HIS_OPER_2003122"
"9_SP10" TABLESPACE "TSP_HOP_D_01" ), PARTITION "TB_HIS_OPER_20040105" VALU"
"ES LESS THAN (TO_DATE(' 2004-01-05 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_02" LOGGING (SUBPARTITION "TB_HIS_OPER_20040105_SP1" TABL"
"ESPACE "TSP_HOP_D_02", SUBPARTITION "TB_HIS_OPER_20040105_SP2" TABLESPACE "
""TSP_HOP_D_02", SUBPARTITION "TB_HIS_OPER_20040105_SP3" TABLESPACE "TSP_HO"
"P_D_02", SUBPARTITION "TB_HIS_OPER_20040105_SP4" TABLESPACE "TSP_HOP_D_02""
", SUBPARTITION "TB_HIS_OPER_20040105_SP5" TABLESPACE "TSP_HOP_D_02", SUBPA"
"RTITION "TB_HIS_OPER_20040105_SP6" TABLESPACE "TSP_HOP_D_02", SUBPARTITION"
" "TB_HIS_OPER_20040105_SP7" TABLESPACE "TSP_HOP_D_02", SUBPARTITION "TB_HI"
"S_OPER_20040105_SP8" TABLESPACE "TSP_HOP_D_02", SUBPARTITION "TB_HIS_OPER_"
"20040105_SP9" TABLESPACE "TSP_HOP_D_02", SUBPARTITION "TB_HIS_OPER_2004010"
"5_SP10" TABLESPACE "TSP_HOP_D_02" ), PARTITION "TB_HIS_OPER_20040112" VALU"
"ES LESS THAN (TO_DATE(' 2004-01-12 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_03" LOGGING (SUBPARTITION "TB_HIS_OPER_20040112_SP1" TABL"
"ESPACE "TSP_HOP_D_03", SUBPARTITION "TB_HIS_OPER_20040112_SP2" TABLESPACE "
""TSP_HOP_D_03", SUBPARTITION "TB_HIS_OPER_20040112_SP3" TABLESPACE "TSP_HO"
"P_D_03", SUBPARTITION "TB_HIS_OPER_20040112_SP4" TABLESPACE "TSP_HOP_D_03""
", SUBPARTITION "TB_HIS_OPER_20040112_SP5" TABLESPACE "TSP_HOP_D_03", SUBPA"
"RTITION "TB_HIS_OPER_20040112_SP6" TABLESPACE "TSP_HOP_D_03", SUBPARTITION"
" "TB_HIS_OPER_20040112_SP7" TABLESPACE "TSP_HOP_D_03", SUBPARTITION "TB_HI"
"S_OPER_20040112_SP8" TABLESPACE "TSP_HOP_D_03", SUBPARTITION "TB_HIS_OPER_"
"20040112_SP9" TABLESPACE "TSP_HOP_D_03", SUBPARTITION "TB_HIS_OPER_2004011"
"2_SP10" TABLESPACE "TSP_HOP_D_03" ), PARTITION "TB_HIS_OPER_20040119" VALU"
"ES LESS THAN (TO_DATE(' 2004-01-19 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_04" LOGGING (SUBPARTITION "TB_HIS_OPER_20040119_SP1" TABL"
"ESPACE "TSP_HOP_D_04", SUBPARTITION "TB_HIS_OPER_20040119_SP2" TABLESPACE "
""TSP_HOP_D_04", SUBPARTITION "TB_HIS_OPER_20040119_SP3" TABLESPACE "TSP_HO"
"P_D_04", SUBPARTITION "TB_HIS_OPER_20040119_SP4" TABLESPACE "TSP_HOP_D_04""
", SUBPARTITION "TB_HIS_OPER_20040119_SP5" TABLESPACE "TSP_HOP_D_04", SUBPA"
"RTITION "TB_HIS_OPER_20040119_SP6" TABLESPACE "TSP_HOP_D_04", SUBPARTITION"
" "TB_HIS_OPER_20040119_SP7" TABLESPACE "TSP_HOP_D_04", SUBPARTITION "TB_HI"
"S_OPER_20040119_SP8" TABLESPACE "TSP_HOP_D_04", SUBPARTITION "TB_HIS_OPER_"
"20040119_SP9" TABLESPACE "TSP_HOP_D_04", SUBPARTITION "TB_HIS_OPER_2004011"
"9_SP10" TABLESPACE "TSP_HOP_D_04" ), PARTITION "TB_HIS_OPER_20040126" VALU"
"ES LESS THAN (TO_DATE(' 2004-01-26 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_05" LOGGING (SUBPARTITION "TB_HIS_OPER_20040126_SP1" TABL"
"ESPACE "TSP_HOP_D_05", SUBPARTITION "TB_HIS_OPER_20040126_SP2" TABLESPACE "
""TSP_HOP_D_05", SUBPARTITION "TB_HIS_OPER_20040126_SP3" TABLESPACE "TSP_HO"
"P_D_05", SUBPARTITION "TB_HIS_OPER_20040126_SP4" TABLESPACE "TSP_HOP_D_05""
", SUBPARTITION "TB_HIS_OPER_20040126_SP5" TABLESPACE "TSP_HOP_D_05", SUBPA"
"RTITION "TB_HIS_OPER_20040126_SP6" TABLESPACE "TSP_HOP_D_05", SUBPARTITION"
" "TB_HIS_OPER_20040126_SP7" TABLESPACE "TSP_HOP_D_05", SUBPARTITION "TB_HI"
"S_OPER_20040126_SP8" TABLESPACE "TSP_HOP_D_05", SUBPARTITION "TB_HIS_OPER_"
"20040126_SP9" TABLESPACE "TSP_HOP_D_05", SUBPARTITION "TB_HIS_OPER_2004012"
"6_SP10" TABLESPACE "TSP_HOP_D_05" ), PARTITION "TB_HIS_OPER_20040202" VALU"
"ES LESS THAN (TO_DATE(' 2004-02-02 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_06" LOGGING (SUBPARTITION "TB_HIS_OPER_20040202_SP1" TABL"
"ESPACE "TSP_HOP_D_06", SUBPARTITION "TB_HIS_OPER_20040202_SP2" TABLESPACE "
""TSP_HOP_D_06", SUBPARTITION "TB_HIS_OPER_20040202_SP3" TABLESPACE "TSP_HO"
"P_D_06", SUBPARTITION "TB_HIS_OPER_20040202_SP4" TABLESPACE "TSP_HOP_D_06""
", SUBPARTITION "TB_HIS_OPER_20040202_SP5" TABLESPACE "TSP_HOP_D_06", SUBPA"
"RTITION "TB_HIS_OPER_20040202_SP6" TABLESPACE "TSP_HOP_D_06", SUBPARTITION"
" "TB_HIS_OPER_20040202_SP7" TABLESPACE "TSP_HOP_D_06", SUBPARTITION "TB_HI"
"S_OPER_20040202_SP8" TABLESPACE "TSP_HOP_D_06", SUBPARTITION "TB_HIS_OPER_"
"20040202_SP9" TABLESPACE "TSP_HOP_D_06", SUBPARTITION "TB_HIS_OPER_2004020"
"2_SP10" TABLESPACE "TSP_HOP_D_06" ), PARTITION "TB_HIS_OPER_20040209" VALU"
"ES LESS THAN (TO_DATE(' 2004-02-09 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_07" LOGGING (SUBPARTITION "TB_HIS_OPER_20040209_SP1" TABL"
"ESPACE "TSP_HOP_D_07", SUBPARTITION "TB_HIS_OPER_20040209_SP2" TABLESPACE "
""TSP_HOP_D_07", SUBPARTITION "TB_HIS_OPER_20040209_SP3" TABLESPACE "TSP_HO"
"P_D_07", SUBPARTITION "TB_HIS_OPER_20040209_SP4" TABLESPACE "TSP_HOP_D_07""
", SUBPARTITION "TB_HIS_OPER_20040209_SP5" TABLESPACE "TSP_HOP_D_07", SUBPA"
"RTITION "TB_HIS_OPER_20040209_SP6" TABLESPACE "TSP_HOP_D_07", SUBPARTITION"
" "TB_HIS_OPER_20040209_SP7" TABLESPACE "TSP_HOP_D_07", SUBPARTITION "TB_HI"
"S_OPER_20040209_SP8" TABLESPACE "TSP_HOP_D_07", SUBPARTITION "TB_HIS_OPER_"
"20040209_SP9" TABLESPACE "TSP_HOP_D_07", SUBPARTITION "TB_HIS_OPER_2004020"
"9_SP10" TABLESPACE "TSP_HOP_D_07" ) )"
IMP-00003: ORACLE error 922 encountered
ORA-00922: missing or invalid option
Import terminated successfully with warnings.
Thanks for your patience Tom.
January 26, 2004 - 6:14 am UTC
what version of the DB are you importing into
version 8.1.7
Fernando Sánchez, January 26, 2004 - 6:23 am UTC
The same version as the export: 8.1.7 (in fact we are testing on the same DB, dropping the subpartitioned table before the import, although the problem isn't that the object already existed).
January 26, 2004 - 6:30 am UTC
Umm NO -- not the same version:
Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing LOGAN3_ADMIN's objects into LOGAN3_ADMIN
You exported from 9.2
You are trying to import into 8.1.7
that'll NEVER work. subpartition templates didn't exist back then.
You have to use 8i's export tool against 9i in order to generate a dmp file with SQL that 8i can understand.
sorry
Fernando Sánchez, January 26, 2004 - 6:51 am UTC
Sorry Tom.
I haven't reported the problem right -- meanwhile, trying to solve this problem, I was working with another DB on version 8.1.7, and that led me to state it wrong -- but in my import problem both DBs are 9.2.0 (the same DB).
January 26, 2004 - 6:57 am UTC
show me the header from import that proves it please.
And -- you are using EE (enterprise) to EE, not EE to SE (standard) right (SE does not have partitioning available at all)
here it is
Fernando Sánchez, January 26, 2004 - 7:14 am UTC
1. Import log beginning ($more historico_imp.log)
Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.3.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
. importing LOGAN3_ADMIN's objects into LOGAN3_ADMIN
IMP-00017: following statement failed with ORACLE error 922:
"CREATE TABLE "TB_HIS_OPER" ("SQ_HOP_01" NUMBER(10, 0) NOT NULL ENABLE, "CO_"
"MSDN_V" VARCHAR2(9) NOT NULL ENABLE, "CO_OPER_V" VARCHAR2(16) NOT NULL ENAB"
"LE, "DS_OPER_V" VARCHAR2(64) NOT NULL ENABLE, "FX_OPER_D" DATE NOT NULL ENA"
"BLE, "CO_USER_V" VARCHAR2(15), "CO_REQU_V" VARCHAR2(15) NOT NULL ENABLE, "D"
"S_REAS_V" VARCHAR2(64), "CO_TIP_PROD_V" VARCHAR2(8) NOT NULL ENABLE, "NO_CO"
"D_PROD_V" VARCHAR2(8) NOT NULL ENABLE, "DS_PROD_V" VARCHAR2(32), "SQ_PROD_V"
"" VARCHAR2(10) NOT NULL ENABLE, "DS_INI_PROD_V" VARCHAR2(64), "TO_QUAN_N" N"
"UMBER(10, 0), "FX_INI_VALI_D" DATE, "FX_FIN_VALI_D" DATE) PCTFREE 5 PCTUSE"
"D 60 INITRANS 1 MAXTRANS 255 TABLESPACE "TSP_HOP_D_01" LOGGING PARTITION BY"
" RANGE ("FX_OPER_D" ) SUBPARTITION BY HASH ("CO_MSDN_V" ) SUBPARTITIONS 1"
"0 SUBPARTITION TEMPLATE( SUBPARTITION "SP1" VALUES () , SUBPARTITION "SP2" "
"VALUES () , SUBPARTITION "SP3" VALUES () , SUBPARTITION "SP4" VALUES () , S"
"UBPARTITION "SP5" VALUES () , SUBPARTITION "SP6" VALUES () , SUBPARTITION ""
"SP7" VALUES () , SUBPARTITION "SP8" VALUES () , SUBPARTITION "SP9" VALUES ("
") , SUBPARTITION "SP10" VALUES () ) (PARTITION "TB_HIS_OPER_20031027" VALU"
"ES LESS THAN (TO_DATE(' 2003-10-27 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_09" LOGGING (SUBPARTITION "TB_HIS_OPER_20031027_SP1" TABL"
"ESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_20031027_SP2" TABLESPACE "
""TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_20031027_SP3" TABLESPACE "TSP_HO"
"P_D_09", SUBPARTITION "TB_HIS_OPER_20031027_SP4" TABLESPACE "TSP_HOP_D_09""
", SUBPARTITION "TB_HIS_OPER_20031027_SP5" TABLESPACE "TSP_HOP_D_09", SUBPA"
"RTITION "TB_HIS_OPER_20031027_SP6" TABLESPACE "TSP_HOP_D_09", SUBPARTITION"
" "TB_HIS_OPER_20031027_SP7" TABLESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HI"
"S_OPER_20031027_SP8" TABLESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_"
"20031027_SP9" TABLESPACE "TSP_HOP_D_09", SUBPARTITION "TB_HIS_OPER_2003102"
"7_SP10" TABLESPACE "TSP_HOP_D_09" ), PARTITION "TB_HIS_OPER_20031103" VALU"
"ES LESS THAN (TO_DATE(' 2003-11-03 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NL"
"S_CALENDAR=GREGORIAN')) PCTFREE 5 PCTUSED 60 INITRANS 1 MAXTRANS 255 TABLE"
"SPACE "TSP_HOP_D_10" LOGGING (SUBPARTITION "TB_HIS_OPER_20031103_SP1" TABL"
"ESPACE "TSP_HOP_D_10", SUBPARTITION "TB_HIS_OPER_20031103_SP2" TABLESPACE "
""TSP_HOP_D_10", SUBPARTITION "TB_HIS_OPER_20031103_SP3" TABLESPACE "TSP_HO"
"P_D_10", SUBPARTITION "TB_HIS_OPER_20031103_SP4" TABLESPACE "TSP_HOP_D_10""
", SUBPARTITION "TB_HIS_OPER_20031103_SP5" TABLESPACE "TSP_HOP_D_10", SUBPA"
"RTITION "TB_HIS_OPER_20031103_SP6" TABLESPACE "TSP_HOP_D_10", SUBPARTITION"
" "TB_HIS_OPER_20031103_SP7" TABLESPACE "TSP_HOP_D_10", SUBPARTITION "TB_HI"
"S_OPER_20031103_SP8" TABLESPACE "TSP_HOP_D_10", SUBPARTITION "TB_HIS_OPER_"
.......
.......
.......
2. Export log beginning (more historico_exp.log)
Connected to: Oracle9i Enterprise Edition Release 9.2.0.3.0 - 64bit Production
With the Partitioning option
JServer Release 9.2.0.3.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table TB_HIS_OPER
. . exporting composite partition TB_HIS_OPER_20031027
. . exporting subpartition TB_HIS_OPER_20031027_SP1
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP2
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP3
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP4
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP5
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP6
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP7
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP8
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP9
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031027_SP10
0 rows exported
. . exporting composite partition TB_HIS_OPER_20031103
. . exporting subpartition TB_HIS_OPER_20031103_SP1
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031103_SP2
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031103_SP3
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031103_SP4
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031103_SP5
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031103_SP6
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031103_SP7
0 rows exported
. . exporting subpartition TB_HIS_OPER_20031103_SP8
0 rows exported
.......
.......
.......
January 26, 2004 - 9:02 am UTC
contact support and reference <Bug:2806270>
Export only triggers
tanmoy datta choudhury, January 26, 2004 - 9:15 am UTC
Is it possible to import only the triggers from a database?
January 26, 2004 - 9:40 am UTC
no, you can however do this:
ops$tkyte@ORA920PC> select trigger_name from user_triggers where table_name = 'T';
no rows selected
ops$tkyte@ORA920PC> !imp userid=/ tables=t ignore=y grants=n indexes=n rows=n constraints=y
Import: Release 9.2.0.4.0 - Production on Mon Jan 26 09:44:42 2004
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.4.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
. importing OPS$TKYTE's objects into OPS$TKYTE
Import terminated successfully without warnings.
ops$tkyte@ORA920PC> select trigger_name from user_triggers where table_name = 'T';
TRIGGER_NAME
------------------------------
T_TRIGGER
1 row selected.
Very helpful as always
Steve Montgomerie, February 27, 2004 - 11:17 am UTC
Tom,
We are doing a data center move which will take us from Windows to UNIX. I've been working on nailing down the exp/imp process so we can build the new instance in a short time during the 'go live' weekend. I grabbed a couple of really great suggestions from this thread.
Thanks
Steve
Copy Data from DB to Another DB on Same Network
Richard, March 30, 2004 - 8:22 am UTC
Hi,
We have a live 8i DB and wish to upgrade to 9i. We have a data warehouse setup, which entails populating fact tables each day. During the "trying all the new bells & whistles" stage, we wish to load the 9i DB with data from the 8i DB on a daily basis; the schemas in both DBs are almost identical, but some tables are partitioned in the 9i DB, and not so in the 8i DB. We cannot populate the 9i DB as we do the 8i DB, owing to various time and other constraints. Is it sane to populate the 9i DB from the 8i DB using INSERT INTO statements, or is there a better way to go?
March 30, 2004 - 8:36 am UTC
well for testing, why would you have to sync up the databases every day? Just a thought -- that perhaps a one time sync is all that is needed to see that "things work" (or not).
You would also have to test the load procedures (which will be different since you have slightly different schemas). I would think it would take less time to run the new load routines against 9i than it would to copy the entire database from 8i into 9i? (so it seems like "time" would be on your side if you just re-ran the loads against 9i?). What better way to know the load routines work than to test them daily on this new machine?
It would probably not be feasible to rebuild a DW (implies "sizable") over the network using insert as selects. Not without killing the network and slamming the 8i database.
Thanks for the Feedback!
Richard, March 30, 2004 - 8:52 am UTC
Many thanks for the feedback.
It has been deemed by better folk than me to do the data loading on a daily basis, so that's that.
I think I'll see how effective it would be to load the 9i DB from the CSV files that the 8i DB is loaded from, then do something to see if they have same number of rows in their tables, etc.
Cheers!
March 30, 2004 - 9:04 am UTC
that does seem to make more sense. At least that way -- you'd be testing and proving the new processes work in the same manner as the old stuff (and if not, you'd be well positioned to figure out "why" before going production!)
indexes=n during export skips only non-unique indexes
Arun Gupta, April 26, 2004 - 10:56 am UTC
Tom,
Even if I do not export indexes (indexes=n), the primary key indexes will still be created during import. Only the non-unique indexes will be skipped. Is this correct?
Thanks
April 26, 2004 - 2:12 pm UTC
the primary key CONSTRAINTS will be created and if they need an index -- so be it!
unique indexes that are not part of a constraint will be skipped.
so, sounds like you want "constraints=n indexes=n" as well.
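For example, a hypothetical command line (the userid, file and schema names are made up -- adjust for your environment); the constraints and indexes would then be added back by hand once the data is loaded:
imp userid=scott/tiger file=exp.dmp fromuser=scott touser=scott rows=y ignore=y indexes=n constraints=n log=imp_noidx.log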
Import too Slow
Lamya, July 08, 2004 - 4:42 pm UTC
Tom,
In an earlier response you said to do this
tom, what is the exact command to do <quote> 10046 trace on the import with
level 12 (get binds)<quote> in sqlplus. Thanks.
Followup:
if you install dbmssupp.sql in $ORACLE_HOME/rdbms/admin
exec dbms_support.start_trace_in_session( sid=> NN, serial => NN );
will do it. start the import, query v$session to get the sid/serial
MY Question is -
I am doing an import on a Linux Server ( ORacle 9.2.0.3.0)
ps -ef | grep imp
oracle 16166 16100 94 10:56 pts/1 04:20:05 imp
oracle 16253 16203 93 11:05 pts/0 04:08:55 imp
oracle 17635 16812 0 15:31 pts/3 00:00:00 grep imp
but when I connect to the database and try to get the sid and serial# so that I can use the above procedure, it does not show me the sid/serial# for the import. I am confused.
Thanks
July 08, 2004 - 8:45 pm UTC
every session in v$session has a sid/serial#
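As a sketch -- the client program name is normally visible in v$session.program, so a query along these lines (the LIKE pattern is an assumption; the exact program string is platform dependent) should locate the two imp sessions:
select sid, serial#, username, status, program
  from v$session
 where lower(program) like '%imp%';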
Import Problem
Lamya, July 08, 2004 - 5:21 pm UTC
Oracle Database - 9.2.0.3.0
I am doing an export/import; my export file is 17066688512MB, since it is a full export.
I am doing an import of only certain tablespaces. The import runs for around 12 hours, there is no change in the datafile size, and there are no errors from the import. What could be the problem?
July 08, 2004 - 8:59 pm UTC
it is 16,276 TERABYTES? impressive.
hmmm...........
maybe that is bytes and you have a 15gig dmp file.
that'll take a couple of "long units of time with imp".
why do you think the datafiles would need to grow? perhaps there was more than sufficient free space in there.
import could be creating an index.
import could be loading rows.
import could be scanning the dmp file skipping over a table
use v$session to see if the session is active, if active join it to v$sql to see what it is doing (or look at v$session_longops if it is active but not doing anything -- it probably is doing a full scan to build an index)
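A rough sketch of those checks (standard view and column names, valid in 9i; the program filter is an assumption about how the imp client identifies itself):
select s.sid, s.status, s.last_call_et, q.sql_text
  from v$session s, v$sql q
 where s.sql_address    = q.address
   and s.sql_hash_value = q.hash_value
   and lower(s.program) like '%imp%';

select sid, opname, target, sofar, totalwork, time_remaining
  from v$session_longops
 where totalwork > sofar;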
Thanks
Lamya, July 09, 2004 - 9:23 am UTC
Tom,
I found out that the imp was scanning the dmp file.
Import IMP-00008 Error Occurs
Lamya, July 12, 2004 - 5:51 pm UTC
Tom,
As I discussed above, the import was taking a long time; after that we got the IMP-00008 error from 2-3 different export files for different days.
We are taking 2 exports:
a) Full, direct=Y -- we get the IMP-00008 errors. Metalink says to increase the buffer size and we did that, but we still get the error.
b) Users, direct=N -- here we are not getting the IMP-00008 error.
The error is occurring on one table which has approx 90,000 rows and has blob/clob data, though there are other tables with blob/clob data.
My question is: is direct=Y the problem???
July 12, 2004 - 11:39 pm UTC
what is the full error stack and version(s) involved here.
Re : import
Lamya, July 13, 2004 - 5:39 pm UTC
Tom,
The Oracle versions are 9.2.0.3.0
and the error was IMP-00008 - unrecognizable character in the export file.
We solved the problem by not using direct=Y. Since we have a lot of images in our data (blob, clob fields), I think using direct=Y somehow corrupts the data. Now we are able to import the data without any hassle.
Thanks
Import
S.Priyadarshan, August 18, 2004 - 8:21 am UTC
Dear Mr.Tom,
I have a schema with over 200 tables in dictionary managed tablespaces. Among the 200 tables, over 20 are big (400 MB to 2 GB). I need to import the same schema structure into a new schema with locally managed tablespaces. I could do it with your method: get the indexfile and edit it. Is there any other way to do it? That is, I don't want the existing schema's initial/next values.
with best regards
S.Priyadarshan
August 18, 2004 - 9:41 am UTC
alter table t move storage( initial 1k next 1k pctincrease 0 minextents 1 ) tablespace new_tablespace;
alter index i rebuild storage( initial 1k next 1k pctincrease 0 minextents 1 ) tablespace new_tablespace;
no reason or need to exp/imp
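If there are hundreds of tables, you can generate those statements from the dictionary rather than typing them -- a minimal sketch, assuming the new locally managed tablespace is called NEW_TS and you run it as the schema owner (indexes go unusable after a table move, which is why the rebuilds come second):
set pagesize 0 linesize 200 feedback off
spool move_all.sql
select 'alter table ' || table_name || ' move tablespace NEW_TS;' from user_tables;
select 'alter index ' || index_name || ' rebuild tablespace NEW_TS;' from user_indexes;
spool off
@move_all.sql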
import without storage clause
S.Priyadarshan, August 19, 2004 - 7:47 am UTC
Thank you very much for the information. I only have the exported dmp file now; I cannot do it in the source schema any more. Is there any other method to override the storage clause?
With best regards
S.Priyadarshan
August 19, 2004 - 9:53 am UTC
imp .... indexfile=foo.sql
(that won't import the data, it just creates foo.sql)
edit foo.sql, fix the storage clauses, run the ddl, import with ignore=y
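Put together, the sequence might look like this (user and file names are hypothetical):
imp userid=scott/tiger file=exp.dmp full=y indexfile=foo.sql
(edit foo.sql: un-REM the CREATE TABLE statements and remove or adjust the storage clauses)
sqlplus scott/tiger @foo.sql
imp userid=scott/tiger file=exp.dmp full=y ignore=y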
Reader
A reader, August 22, 2004 - 5:13 pm UTC
If, during imp, I use buffer=100000000 and commit=n and large rollback segments (calculated as about 71 bytes/row inserted), the import performance will increase.
Is this increase in performance due only to fewer commits against the log buffer, or is there some other rationale behind the increased speed of the import?
Thanks
August 22, 2004 - 8:10 pm UTC
commit = n will result in less log file sync waits
exp from 817 to 92
Jozef, August 22, 2004 - 6:08 pm UTC
I do exp from 817 to 9.2 50 GB of data
It's application upgrade and whole DB must be 9i. on new machine
So my plan is : disable all constraints on application schema, put DB in restrict mode then
a) exp full=y direct=y rows=y GRANTS=y compress=n constraints=y buffer=100000000
b) imp in 2 stages first just : full=y, rows=n
- imp to indexfile create_idx.sql (to create indexes)
c) this will give me a chance to organize tables into new tablespaces, will create all users with grants.
d) I plan to run the import for one main application user (98% of data) in 4 streams, because I've got 4 CPUs, for tables only, then
d) import in 4 streams those tables with indexes=n
e) in another streams the remaining users
f) create indexes from indexfile
g) enable all constraints
My questions:
1) does imp full=y with rows=n create all users with all grants?
2) should I run exp/imp in more streams than 4 ?
3) would it make any difference to imp with indexes=y, or to create them from the indexfile with nologging and parallel?
4) any other obvious helpful hints to improve the speed? It is a Windows system, and a normal standard import takes 30 hours.
August 22, 2004 - 8:12 pm UTC
first "d" doesn't make sense. "4 streams"?
imp full=y with rows=n will create all users AND tables and indexes and so on -- it'll do everything BUT load the data.
is the new machine going to be a different hardware platform? the fastest, easiest, least error prone method would be an upgrade, plain and simple.
Reader
A reader, August 22, 2004 - 8:25 pm UTC
From Documentation:
BUFFER
Default: operating system-dependent
The integer specified for BUFFER is the size, in bytes, of the buffer through which data rows are transferred.
BUFFER determines the number of rows in the array inserted by Import. The following formula gives an approximation of the buffer size that inserts a given array of rows:
buffer_size = rows_in_array * maximum_row_size
Q:
Is there a limit on the buffer size and is it allocated
from PGA
August 23, 2004 - 7:27 am UTC
buffer is a client thing in the imp process itself, not in the "pga" of the dedicated/shared server (although indirectly it will affect that -- since the dedicated/shared server has to be able to receive whatever the client sends it)
the limits are "os dependent" too, but ultimately the limit will be the amount of memory on the machine.
How to export non-tables objects
Laly, August 24, 2004 - 2:42 am UTC
Dear Tom,
I'm trying your solution of doing one export for each and every table so that I can do this in parallel. My question is: after exporting every table (and its associated indexes), how can I export the views, stored procedures... everything else but the tables?
Best regards,
Laly.
August 24, 2004 - 7:38 am UTC
search this site for
getcode getallcode
getaview getallviews
Laly, August 24, 2004 - 9:13 am UTC
Thanks Tom, but I finally used exp with rows=n since I want everything except the table data; it is simpler.
Can you please tell me what you think of this strategy to duplicate a schema on another database (we suppose that nobody else is modifying the schema while we are exporting) :
1a. export with direct=y rows=y table=... for each and every table in parallel
1b. import with indexfile=... to get the indexes to create, modifying this script to increase sort_area_size and to create the indexes in nologging mode and with a parallel clause
2. export with direct=y rows=n
On the target database :
- import the dump got in step 2
- import in parallel every individual table dump file
- run in parallel every create index script.
Can you see any flaw in this ?
Thanks,
Laly.
August 24, 2004 - 10:14 am UTC
how about keeping a schema in its own tablespace(s)?
then just transport them.
watch out for unnamed constraints, you'll get duplicates. might want to do constraints=n on the 1a, and get the constraints in 2 in order to avoid that.
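As a rough outline, the transportable tablespace route looks something like this (tablespace, file and connect details are made up; in this release the source and target must be the same platform and block size, and the privilege requirements -- EXP_FULL_DATABASE on export, typically connecting as sysdba on import -- vary by version):
alter tablespace app_data read only;
exp userid=system/manager transport_tablespace=y tablespaces=app_data file=tts.dmp
(copy tts.dmp and the APP_DATA datafiles to the target server)
imp userid=system/manager transport_tablespace=y datafiles='/u01/oradata/app_data01.dbf' file=tts.dmp
alter tablespace app_data read write;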
Laly, August 24, 2004 - 10:33 am UTC
Thanks a lot for your answer, i missed that constraint point.
I will give a try to transportable tablespace.
Best regards,
Laly.
Importing into different tablespace
Kevin, November 04, 2004 - 10:20 am UTC
Tom,
I have exported some tables from a database where all the tables belonged to the TS01 tablespace. Now I am importing these tables into a different database schema whose default tablespace is RPT001. However, this target database also has a TS01 tablespace, but for another schema. So when I start the import, it attempts to create the tables in TS01 rather than RPT001, and then the import fails due to insufficient space or not being able to create segments.
My question is how can I export/import so that these tables are created in the RPT001 tablespace only.
Thanks for all the help that you have been providing to us.
Regards.
November 05, 2004 - 12:59 pm UTC
revoke unlimited tablespace from this schema
alter them to have a 0 quota on TS01
then the create will fail (no privs on TS01) and imp will rewrite the create statement without a tablespace in it and try again -- that'll use the default tablespace.
that works for all SINGLE SEGMENT objects (lobs are multi-segment, partitions are, iot's with overflows are, etc).
if you have multi-segment objects, you'll want to extract the ddl using indexfile=foo.sql and precreate the multi-segment objects after fixing up the ddl.
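A minimal sketch of that, assuming the target schema is named KEVIN (the name is made up):
revoke unlimited tablespace from kevin;
alter user kevin quota 0 on ts01;
alter user kevin quota unlimited on rpt001;
then run the import -- each CREATE that fails against TS01 is retried without the tablespace clause and lands in RPT001, the default.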
Thanks a lot Tom -- this works !!
Kevin, November 08, 2004 - 8:51 am UTC
Rollback segment during import
Rkaur, November 09, 2004 - 5:55 pm UTC
Should we set commit=y, or create a rollback segment large enough, as described in "Tuning Considerations when Import Is Slow" (Note 93763.1)? In automatic undo mode I am not able to create any rollback segment.
I need to export/import tables, each of 10-15GB
My database was in archivelog mode; I have now set it to noarchivelog mode.
I am using
exp userid = system tables = tag_data indexes= n file=d:\tag_data.dmp log = d:\tag_data.log compress = n
imp uerid= testuser tables = tag_data indexes = n file=d:\tag_data.dmp log = d:\tag_dataimp.log fromuser=system touser=testuser commit = y
November 09, 2004 - 7:45 pm UTC
well, archivelog isn't really going to slow you down so much (you are still generating redo).
since you are doing single tables, any chance we can use a dblink? create the table nologging, insert /*+ append */ - perhaps using parallel query?
else, you have a very serial, row by row sort of thing happening here -- commit=n would be the way to go (generates small undo amounts since the undo is simply "delete rowid").
any longs/lobs involved here?
have you considered a dump and load with sqlldr (direct path)? instead of export/import?
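A sketch of the db link idea (the link name, connect string and credentials are all hypothetical):
create database link prod_link connect to tag_owner identified by tag_pw using 'PRODDB';

create table tag_data nologging
as select * from tag_data@prod_link where 1=0;

insert /*+ append */ into tag_data select * from tag_data@prod_link;
commit;
For bigger tables you could also enable parallel dml and add a parallel hint, provided the network can keep up.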
long column too large for column buffer size (1)
A reader, December 10, 2004 - 3:07 am UTC
import Statments
Ali, March 01, 2005 - 7:16 am UTC
Dear Sir
I have done an export as follow in my Server
EXP TEST/TEST FILE=GOODS.DMP
And the file has been created successfully
My oracle version is
Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 – Production
But when I try to import the file GOODS.DMP
It display this message
Export file created by export :V09.00.01 via conventional path
Import done in we8mswin1252 character set and AL16UTF16 NCHAR character set
Import server uses AR8ISO8859P6 character set < possible character conversion >
Imp-00031: Must specify Full=y or provide fromuser/touser or tables arguments
Imp-00000: Import terminated unsuccessfully
Can you give me the proper use of the import command, in detail?
Best Regards
Ali
March 01, 2005 - 8:36 am UTC
IMP-00031 seems to be telling you what to do?
$ imp userid=test/test file=goods.dmp full=y
^^^^^^ to import the entire thing
tables=(...)
^^^^^^^^^^^^ to just grab some
fromuser=scott touser=alan
^^^^ to put scotts stuff into a schema alan
does IMPORT analyze tables when importing an 8i dump file into 9i?
Pingu, July 12, 2005 - 11:52 am UTC
Hi
I am importing an Oracle 8i dump file into an Oracle 9i Itanium database.
I am looking to speed up my import process. I was wondering: does import analyze the tables using dbms_stats?
I did a test (all in 9i): created a table with 20 rows, set num_rows to 5 with dbms_stats, exported the table, dropped the table and imported it back. num_rows still shows 5, so I think import does NOT run analyze, as far as this test is concerned.
However in my production import I constantly see these messages (using TOAD)
ANALYZE TABLE "D2G_AOPPV_TMP_00" ESTIMATE STATISTICS;
ANALYZE TABLE "EPC_99" ESTIMATE STATISTICS
etc etc
I also see that import issues full scan queries against the tables. So I am confused; in production it does seem that import analyzes the tables. What is the reality?
Thank you
July 12, 2005 - 5:19 pm UTC
same character sets?
yes same character sets except NCHAR
Pingu, July 13, 2005 - 5:21 am UTC
Hi
This is the log message in the import session where I see ANALYZE TABLE messages
Export file created by EXPORT:V08.01.07 via conventional path
Warning: the objects were exported by SYS, not by you
import done in WE8ISO8859P1 character set and UTF8 NCHAR character set
export server uses WE8ISO8859P1 NCHAR character set (possible ncharset conversion)
NCHAR is different; does that affect the statistics parameter?
In my test session the charcater sets are the same
Connected to: Oracle9i Enterprise Edition Release 9.2.0.6.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
Export file created by EXPORT:V09.02.00 via direct path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
The difference between the dump files is one is exported using 8.1.7 exp and the other 9.2.0.6 (the test session)
July 13, 2005 - 12:42 pm UTC
when character sets differ, stats are not valid.
when character sets differ, stats are not valid.
Pingu, July 13, 2005 - 3:05 pm UTC
Hi
Is above statement written anywhere?
So are you saying that when the character sets are different (NLS_LANG, not NLS_DATABASE) import runs analyze, and when the character sets are the same import doesn't do that?
Cheers
Suggestion..!
Sikanthar, July 16, 2005 - 5:58 am UTC
Tom,
I usually do imports like this for existing tables and indexes:
1. Truncate the tables which need to be imported.
2. Import with
userid='...'
fromuser=
touser=
tables=*
file='...'
log=
ignore=y
buffer=10000000
statistics=safe
feedback=50000
indexes=n
constraints=n
Hence the tables are imported without indexes and constraints.
Of course, the INDEXFILE option is there, but I feel that is better for importing fresh tables and indexes.
With this approach, once a table is imported, I go into an object browser (like ERwin) and, just by right clicking, rebuild all the indexes.
Q) Is this a good way to keep reducing the time, or is using INDEXFILE (to create the indexes and constraints) better than this?
July 16, 2005 - 9:33 am UTC
why rebuild all indexes?
if you are happy with the performance of this solution as is, we can make you happier by just saying "don't rebuild, just import"
direct=y and import
A reader, August 08, 2005 - 3:22 am UTC
Hi
Last week I was attending a 9i new features course at Oracle University. During one of the lessons about backups someone raised a question regarding export direct=y. The lecturer from Oracle said that export direct=y is faster but the import is much, much slower: the reason given was that since a direct export bypasses the buffer cache and blocks are dumped directly to the dump file, on import these blocks need to be rebuilt in the buffer cache before import can read them. Is this true???
August 08, 2005 - 8:00 am UTC
umm, no. the blocks are not dumped. you'll find the format of the file to be the same, it is not a block dump.
OLAP failed to initialize
Ram, September 15, 2005 - 4:40 am UTC
Hi,
I am working on Oracle 9.2.0.6. I am getting the error "ORA-37002 OLAP failed to initialize. Contact Oracle Support" along with an ORA-00600 error. I get this error while the export is running (a scheduled process, run daily). Whenever I restart the database, the problem goes away for some days but then resurfaces. My OLAP_PAGE_POOL_SIZE is 32 MB. Please advise what needs to be done to get rid of this problem once and for all. Will it result in any performance issue, or will it affect the database, since mine is a 24*7 db?
Thanks in Advance,
Ram
September 15, 2005 - 7:55 am UTC
"Contact Oracle Support"
seems like a good idea?
consistent parameter
Satheesh Babu.S, September 19, 2005 - 3:11 am UTC
Tom:
I am just wondering why anyone would want to take an export with consistent=N. Oracle could force us to use consistent=Y by removing this parameter, right?
Thanks and Regards,
Satheesh Babu.S
September 19, 2005 - 11:40 am UTC
they could have, but it just runs in the mode most everyone would have written their own export program in - "committed read".
It depends on what you were using export for. To copy data from database 1 to database 2 - sure, consistent=y makes sense.
To use export as a tool to "exercise" your database (read the blocks, validate the dictionary), to create a dmp file you can scrape data from (in the event you accidentally lose a small lookup table or something), to create a dmp file you can extract a dropped plsql unit from - on a large database - consistent=y might not be reasonable.
Since export is not a database backup tool really - it makes sense to have both settings, depends on your needs.
pga/uga usage during index creation
A reader, October 19, 2005 - 12:08 pm UTC
Hi
I am creating the indexes after importing the data. I am monitoring memory usage by these index creation processes. I am interested in what happens to uga and pga.
I did this with 1 minute interval
select a.sid, name, value
from v$session a, v$sesstat b, v$statname c
where a.sid = b.sid
and b.statistic# = c.statistic#
and a.username = 'WIX_ADMIN'
and c.name like 'sess%ga%'
order by a.sid, name
SID NAME VALUE
---------- ---------------------------------------------------------------- ----------
8 session pga memory 8003392
8 session pga memory max 8003392
8 session uga memory 7470152
8 session uga memory max 7470152
13 session pga memory 4435504
13 session pga memory max 5549616
13 session uga memory 2242792
13 session uga memory max 2242792
14 session pga memory 4107824
14 session pga memory max 5221936
14 session uga memory 2242792
14 session uga memory max 2242792
15 session pga memory 510712
15 session pga memory max 175950584
15 session uga memory 209864
15 session uga memory max 209864
17 session pga memory 518568
17 session pga memory max 175958440
17 session uga memory 209864
17 session uga memory max 209864
SID NAME VALUE
---------- ---------------------------------------------------------------- ----------
8 session pga memory 8068928
8 session pga memory max 8068928
8 session uga memory 7535560
8 session uga memory max 7535560
13 session pga memory 5549616
13 session pga memory max 5549616
13 session uga memory 3291960
13 session uga memory max 3291960
14 session pga memory 5221936
14 session pga memory max 5221936
14 session uga memory 3291960
14 session uga memory max 3291960
15 session pga memory 175950584
15 session pga memory max 175950584
15 session uga memory 162029256
15 session uga memory max 162029256
17 session pga memory 175958440
17 session pga memory max 175958440
17 session uga memory 162029256
17 session uga memory max 162029256
SID NAME VALUE
---------- ---------------------------------------------------------------- ----------
8 session pga memory 8068928
8 session pga memory max 8068928
8 session uga memory 7535560
8 session uga memory max 7535560
13 session pga memory 2207280
13 session pga memory max 5549616
13 session uga memory 144456
13 session uga memory max 3291960
14 session pga memory 1879600
14 session pga memory max 5221936
14 session uga memory 144456
14 session uga memory max 3291960
15 session pga memory 38587128
15 session pga memory max 175950584
15 session uga memory 35030888
15 session uga memory max 162029256
17 session pga memory 35252648
17 session pga memory max 175958440
17 session uga memory 31883384
17 session uga memory max 162029256
The memory goes from low to high, then from high to low. Is the freed memory returned to the OS?
October 19, 2005 - 12:51 pm UTC
starting in 9i, with workareas - yes, you'll see memory go up and down.
If you have access to my new book (on home page - Expert Oracle database architecture) I cover this in some detail.
not using pga
A reader, October 19, 2005 - 1:52 pm UTC
Hi
I am using sort area not pga, in that case is that memory freed to OS?
October 19, 2005 - 3:54 pm UTC
guess where sort areas are located.... ;)
In the PGA. (well, there are nuances with shared server and such but you are creating an index so dedicated server makes sense here)
workareas are what they are called in 9i with automatic pga memory management and they are dynamically created and released.
But sort areas or workareas - they are the stuff of PGA, they are in the PGA.
sorry not using workarea
A reader, October 19, 2005 - 5:04 pm UTC
Hi
My server is 8.1.7 so I cannot use automatic PGA management. I understand that in 9i this memory is reused, but in 8i I don't think so, correct? If that is the case, why doesn't the memory stay at its maximum size? Why does it go down and up, down and up?
October 19, 2005 - 7:40 pm UTC
but even in 8i, 8.0, 7.x ,whatever - sorts were PGA.
you've made the output exceedingly hard to read or interpret above.
first it wraps, second it is for many sessions (and imp uses - ONE)
so, which session was your import ;)
LMT and Compress
A reader, November 29, 2005 - 6:07 pm UTC
Kinda confused with compress=Y with LMT on 9.2.
I created a table from dba_objects and then exported that table. Prior to export, the table had 19 (16 - 64k and 3 - 1M) extents. I did an export of just that table and used the default for compress (compress = Y).
I then did an import with indexfile set and looked at the file created. It has a create table statement with initial extent set to 64k. I was expecting to see the create table statement with an initial extent set at 4M based on the theory that compress = Y will add up the existing extents for the object?
November 30, 2005 - 11:25 am UTC
9i and 10gr1 seem to be doing that, but 10gR2 is not.
Bummer, a "bug" I truly wish they had not fixed :(
Thanks Tom
Ali K, February 21, 2006 - 9:48 am UTC
You are a genius, Tom. Solved all my problems. I was able to complete the imp very quickly.
Is there a way to avoid IMP to produce UNDO and REDO?
Davis, April 29, 2006 - 1:46 pm UTC
Hi,TOM
IMP is based on INSERT statements.
Is there a way to prevent IMP from producing UNDO and REDO in Oracle9i?
Thanks
April 30, 2006 - 4:57 am UTC
no, there is not.
There is a way, but...
Tom Canty, May 10, 2006 - 6:00 am UTC
To your last poster you said there is no way to avoid UNDO and REDO on an import. That is not entirely true. You CAN avoid (most) redo with the undocumented parameter "_disable_logging". Obviously this is very, very dangerous and in general should never be used (and in fact is the source of a significant bug in 10g). However, I execute numerous migrations for my clients from one platform to another, and downtime is a major consideration. I frequently use this technique (followed IMMEDIATELY by a full database backup) to improve the performance of the migration. I've found it shaves at least 20% off the time of the import, and coupled with many other techniques (nologging/unrecoverable on indexes, parallel, pre-migrating historical data, turning off archiving if on same disks as other things, sort_area/PGA tuning, etc.), I can migrate most systems with a minimum of "hard" downtime. I think it is simply one trick in the bag to be used as needed, but ONLY when absolutely needed.
P.S. One minor but often overlooked tip - sometimes it helps to have a tiny data cache (db_block_buffers/db_cache_size) on an import. It's not a major boost, but it helps when importing a LOT of rows that aren't particularly big, because there is less overhead to maintain the cache. There is a "sweet spot" that you have to find - if you can make your cache bigger than your dump file, then you don't need this trick, but otherwise I find that a data cache <48Mb can actually help vs. a large cache. I have never done hard tests - I'd be curious to hear of anyone that has.
May 10, 2006 - 9:01 am UTC
that does not avoid redo generation, that avoids "writing" the redo but that is it.
This is so dangerous, I will say clearly: if you use this, and this import is not being used to actually "create" the database (eg: there isn't a bit of data in there already besides the oracle dictionary), then you have made a really bad choice.
If you use commit=n, and are in noarchivelog mode, I don't really see this making that much of a difference (assuming log buffer is set sufficiently large). You are not going to be waiting for log file syncs (they happen at commit) and unless you start waiting for log buffer space (insufficient log buffer allocated), you just keep going - an issue could be if your log disks get in the way of dbwr's ability to keep the buffer cache free. Watch out for undersizing that cache as well (else you might end up waiting on dbwr).
I've a feeling that "nologging/parallel" was accounting for a big part of your speedup. As would doing parallel imports (that is the biggest bang for the buck I've gotten - using schema level exports and importing many of them at once)
So, to do what you are talking about (create a new database) that would be the only time you might consider it - but make sure that is the only time (and don't forget to unset it afterwards!)
And remember, all it takes is one failure near the end to obviate the entire "speedup" - if the database isn't shutdown normal- the database ceases to exist!
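For reference, the "schema level exports, importing many of them at once" approach mentioned above is nothing fancier than running several of them concurrently -- a sketch on unix, with made-up schema names and passwords:
exp userid=system/pw owner=hr    file=hr.dmp    log=hr_exp.log &
exp userid=system/pw owner=sales file=sales.dmp log=sales_exp.log &
wait
imp userid=system/pw file=hr.dmp    fromuser=hr    touser=hr    log=hr_imp.log &
imp userid=system/pw file=sales.dmp fromuser=sales touser=sales log=sales_imp.log &
wait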
Tom Canty, May 10, 2006 - 6:31 pm UTC
Yes, I completely agree - this is only for creating a database, never for "casual" use. But I have seen it (by itself) give a major boost, and it completely avoids "checkpoint not complete" problems (I know, these are avoidable other ways, but some customers don't have the best hardware).
A reader, May 17, 2006 - 2:48 am UTC
Hi Tom,
We have a production and a UAT database. The UAT database is not up to date, and our testing team needs to do some tests.
When they request an update of the UAT database, we update it as follows:
(1) Delete the schema objects in UAT
(2) Import the latest dump from PROD into UAT
This takes quite a long time, so could you please advise us of a more professional way to do this?
Thanks
May 17, 2006 - 6:56 am UTC
restore prod to uat
use your backups. it'll prove that your DBA staff can actually use their backups to do the job of recovery!
A reader, May 17, 2006 - 12:44 pm UTC
Tom,
The ironic part is that, to save some money, lots of businesses build UAT boxes much smaller (configuration/storage wise) than prod.
Thanks.
import select objects
A reader, May 26, 2006 - 5:41 am UTC
Hi Tom,
I have a biggish export dump file containing hundreds of tables, indexes, views and packages.
Now what I want is to import ONLY the packages and nothing else.
Is it possible to do this with the imp command?
Thanks aas always.
May 26, 2006 - 8:54 am UTC
you would/could
imp show=y
and redirect that to a file and scrape out the code - but in general, the answer is "no, not really, IMP was never designed to do that"
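For completeness, a hypothetical show=y run (the DDL, including the CREATE PACKAGE statements, ends up in the log file to be scraped out by hand):
imp userid=scott/tiger file=big.dmp full=y show=y log=ddl_listing.txt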
IndexFile Constraints
Rich Janson, June 01, 2006 - 6:28 pm UTC
Tom,
When I use an index file the CREATE TABLE, ADD CONSTRAINT, and ENABLE CONSTRAINT commands are all REM'd out.
Is it possible to have imp not REM the constraint commands? Or do I need to process the file myself to unREM these lines?
My Parfile:
File=../pmmtest1.dmp, ../pmmtest2.dmp
Filesize=4096MB
Log=pmmtest_ind.imp.log
FromUser=('PMM', 'PPR')
ToUser=('PMM', 'PPR')
Indexes=Y
Constraints=Y
IndexFile=pmmtest_index.sql
Thanks!
June 01, 2006 - 7:53 pm UTC
you need to un-rem them manually
It was of great help
anibrata chatterjee, June 29, 2006 - 2:43 am UTC
Thanks Tom... but I would like to know something. I am using Oracle 7.3.4 on SCO Unix 5.0.4, and the server has a RAID 5 architecture. Import will be slow, but are there any other options to make it faster by changing the kernel parameters, the initsid.ora, or the redo logs?
June 29, 2006 - 7:25 am UTC
not really - you have ancient stuff here, on the slowest type of disk.
there are no magic "fast=true" for this. The concepts above can be used.
IMP-00020
Yaro, April 24, 2007 - 9:50 am UTC
Dear Mr.Tom
I have a problem with import.
Export is from 9.2.0.6
(exp xxx/xxx@xx FILE=$DUMP/yyy.dmp LOG=$DUMP/yyy.log OWNER=xxx CONSISTENT=Y BUFFER=5000000)
But when importing into another DB (9.2.0.8) with
imp xxx/xxx@xx FILE=yyy.dmp LOG=yyy.log BUFFER=5000000 FROMUSER=xxx TOUSER=xxx COMMIT=Y ANALYZE=Y
I get this problem:
IMP-00020: Long column too large for column buffer size (7)
It is the same for BUFFER=1000, 10000, 50000, 500000000; only the (num) in the error message varies: 7, 22, and so on.
What does the number in the error message mean?
Import hanging
Shrikant, May 01, 2008 - 11:05 pm UTC
Hi Tom,
I am importing into a 10g database from Oracle9i. It worked very well when I ran it the first time, but a few hours later it gave an archiver error. I restarted the Oracle 10g database and now, when I run it again, it does not go any further.
The import screen looks like
Import: Release 9.2.0.7.0 - Production on Thu May 1 22:57:10 2008
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning and Data Mining options
Export file created by EXPORT:V09.02.00 via direct path
Warning: the objects were exported by RPTODS_ADMIN, not by you
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses WE8ISO8859P1 character set (possible charset conversion)
****************************
And then it just hangs. Looking at the currently executing command in the session browser, I saw
BEGIN
SYS.dbms_logrep_imp.set_import_mode (:1);
END;
Do you have any idea what could be the reason ?
Regards
Shrikant
May 02, 2008 - 6:57 am UTC
umm, did you fix the fact that you yourself ran out of archive space.
A reader, July 05, 2010 - 5:22 pm UTC
Hi sir,
Regarding compress=y
Say existing table has 50m allocated to it.
I know that, If we import into a LMT with 1meg extents, it'll allocate 50 of 1meg extents initially and do the inserts.
If we set compress=n
It will
allocate 1mb
insert
allocate 1mb
insert
and so on.
Is that right?
July 06, 2010 - 3:04 pm UTC
compress=y during export would have export create a create table statement with an initial extent big enough to hold the entire segment (up to 2gb beyond which we need more extents).
So the create table statement would have "initial 50m" and the create statement itself (assuming 11gR1 and before, of course) would allocate 50m of space. In 11gR2 - with deferred segment creation - it would allocate the 50mb during the first INSERT, not during the create:
ops$tkyte%ORA11GR2> create table t1 ( x int ) storage ( initial 10m );
Table created.
ops$tkyte%ORA11GR2> create table t2 ( x int );
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select segment_name, bytes from user_segments where segment_name in ('T1', 'T2') and segment_type = 'TABLE';
no rows selected
ops$tkyte%ORA11GR2> insert into t1 values ( 1 );
1 row created.
ops$tkyte%ORA11GR2> insert into t2 values ( 1 );
1 row created.
ops$tkyte%ORA11GR2> select segment_name, bytes from user_segments where segment_name in ('T1', 'T2') and segment_type = 'TABLE';
SEGMENT_NAME BYTES
------------------------------ ----------
T1 10485760
T2 65536
to answer what would happen with compress=no, the create table statement would have an initial size associated with it - we would allocate that much space (whatever that create told us, it could be 64k - causing us to do a 1mb extent - or 64gb - causing us to do lots of 1mb extents) and then we'd add 1mb extents over time as the need arose.
A reader, July 07, 2010 - 5:30 pm UTC
thanks sir,
1-)
Regarding compress=y,
Is 2gb the limit?
Even if the table size is bigger than 2gb, it would only allocate 2gb as an initial extent.
Is that right?
2-) Does compress=y perform faster since it doesn't have to extend the table every time?
July 08, 2010 - 12:16 pm UTC
extent size is max 2gb.
it can only allocate 2gb.
compress=y should be completely IGNORED, never use it, it is legacy, it is not meaningful, you don't want things in a single extent. It is not faster, it is not slower, it is just something to not use in the 21st century (actually, exp is something to not use anymore as well)
what about having INITIAL extent of larger size
reader, August 25, 2011 - 5:15 am UTC
Hi Tom,
Thanks for your explanation of the compress parameter of export, but I have a few questions:
1> What if I use compress=y during export and then import it? As the initial extent will be very large, what performance issues might arise from this?
2> You commented not to use conventional export any more; any specific reason for that?
3> I want to see the contents of the export dump; is it possible in any way?
4> Once, in a comment, you mentioned using alter table move to rebuild tables, but at our place there is one table of 300gb, so what about using expdp to rebuild it, as time would be a constraint?
5> What is the role of PCTFREE during import?
August 30, 2011 - 4:11 pm UTC
1) in the year 2011 you should be using locally managed tablespaces with system allocated extents pretty much. You do not want to use compress=, just skip it.
It would only really apply to dictionary managed tablespaces which you are NOT using anymore I hope.
In a locally managed tablespace - using compress=y would create a create table with a large extent request - which we would turn into a multi-extent request. If you are using system allocated extents - we would use our internal algorithms and allocate a series of extents of various sizes to satisfy the total requested amount. If you use uniform sized extents - we would allocate N extents where N*uniform_size was greater than or equal to the initial extent size that was requested.
In short: do not use compress =y
DO use locally managed tablespaces, preferably with system allocated extent sizes (a minimal example follows this list).
2) where did i say that?
3) don't look at it.
what do you mean? You don't want to see the output on screen? Use redirection and redirect it to a file
4) alter table t move would almost certainly be faster - a simple direct path read, a direct path write - rebuild your indexes, done.
if you export it, you probably would write it to disk and read it from disk an extra time - we can skip that entirely.
if you export it, we have to re-check all constraints, we can skip that
if you export it, we have to verify that the import did EVERYTHING correctly - we didn't lose an index, we didn't lose a grant, nothing. We can skip that.
5) the same as it is for anything. It reserves, on a table block, a certain percentage of that block as you are inserting for future updates.
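To illustrate the recommendation in point 1, a locally managed tablespace with system allocated extents is simply (the datafile name and size are made up):
create tablespace app_data
  datafile '/u01/oradata/db/app_data01.dbf' size 1g
  extent management local autoallocate
  segment space management auto;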
slow imp
A reader, May 22, 2012 - 12:45 pm UTC
I am exporting a 7 GB schema/dump file from 9i into 11g.
9i> exp myapp/myapp@tips file=/pics/workarea/tips_exp.dmp log=/myapp/workarea/tips_exp.log compress=N direct=Y
11g> imp myapp/myapp@tips file=/home/oracle/dumps/tips_exp.dmp full=y log=/home/oracle/dumps/tips_imp.log
The import is taking a very long time due to one TABLE that has 5 million records.
Sort area size is 65537.
I noticed there is a lot of redo generation in the trace files.
Is there a way to speed this up and turn the redo off?
How do you run exp/imp in parallel instead of table by table?
Shall I do this before the import, or set indexes=n and run the index creation separately?
ALTER SESSION SET sort_area_size=100000000;
May 22, 2012 - 2:42 pm UTC
in 11g (in 9i) you should be using automatic PGA memory management - so the sort area size would be ignored.
Is your database in archivelog mode? if so, index creates will generate redo, if not, they won't. You could consider taking the database out of archive log mode first.
5,000,000 rows is pretty small.
Is the table in question full of long or lobs?
In order to go parallel - you would have had to export multiple files using multiple exports (eg: you do it yourself; we don't parallelize exports or imports - it is not until Data Pump that this is done for you)
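If you do decide to take the database out of archivelog mode for the duration of the load (as suggested above), the switch is roughly as follows -- and take a full backup as soon as you put it back:
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;
-- run the import, then reverse the steps:
shutdown immediate
startup mount
alter database archivelog;
alter database open;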
import
A reader, May 22, 2012 - 4:24 pm UTC
Tom:
LOG_MODE=NOARCHIVELOG (no archival) so maybe I was wrong about generation of redo records in the logs with imports.
It is doing 10,000 rows per minute so for 5,000,000 it will be 500 minutes (about 8 hours).
The table has a couple of CLOB columns.
This is an archive table and I don't need it. Is there a way to *EXCLUDE* this table when I import the dump file, or do I need to recreate the export dump based on tables only? I want everything else in the dump though (packages, triggers, sequences) except this table. Please advise.
TABLE_NAME ROWS_PROCESSED MINUTES ROWS_PER_MIN
------------------------------ -------------- ---------- ------------
INTO "ARCH_MESG_LOG" ("M 3379760 320.9 10531
1 row selected.
Name Null? Type
----------------------------------------- -------- ----------------------------
MESG_NO NOT NULL NUMBER(10)
DATE_IN DATE
DATE_OUT DATE
PROCEDURE_NAME VARCHAR2(30)
MESG_RECV CLOB
MESG_SENT CLOB
PKG_NAME VARCHAR2(30)
CREATED_DATE DATE
May 23, 2012 - 7:51 am UTC
no, you weren't wrong, the inserts will all generate redo.
when you have a long/lob in a table, import drops to row by row - slow by slow method.
Is there a way to *EXCLUDE* this
table when i import the dump file or do i need to recreate the export dump
based on tables only.

not directly.
but, we can 'trick it'. Just pre-create the table, we can do that with import rows=n. Then import with ignore=n, we'll skip that table when the create fails.
for example:
ops$tkyte%ORA11GR2> create table t1 as select * from all_users;
Table created.
ops$tkyte%ORA11GR2> create table t2 as select * from all_users;
Table created.
ops$tkyte%ORA11GR2> create table t3 as select * from all_users;
Table created.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> !exp userid=/ 'owner=ops$tkyte'
Export: Release 11.2.0.3.0 - Production on Wed May 23 08:49:55 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
About to export specified users ...
. exporting pre-schema procedural objects and actions
. exporting foreign function library names for user OPS$TKYTE
. exporting PUBLIC type synonyms
. exporting private type synonyms
. exporting object type definitions for user OPS$TKYTE
About to export OPS$TKYTE's objects ...
. exporting database links
. exporting sequence numbers
. exporting cluster definitions
. about to export OPS$TKYTE's tables via Conventional Path ...
. . exporting table T1 50 rows exported
. . exporting table T2 50 rows exported
. . exporting table T3 50 rows exported
. exporting synonyms
. exporting views
. exporting stored procedures
. exporting operators
. exporting referential integrity constraints
. exporting triggers
. exporting indextypes
. exporting bitmap, functional and extensible indexes
. exporting posttables actions
. exporting materialized views
. exporting snapshot logs
. exporting job queues
. exporting refresh groups and children
. exporting dimensions
. exporting post-schema procedural objects and actions
. exporting statistics
Export terminated successfully without warnings.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> drop table t1 purge;
Table dropped.
ops$tkyte%ORA11GR2> drop table t2 purge;
Table dropped.
ops$tkyte%ORA11GR2> drop table t3 purge;
Table dropped.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> !imp userid=/ tables=t2 rows=N
Import: Release 11.2.0.3.0 - Production on Wed May 23 08:49:55 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing OPS$TKYTE's objects into OPS$TKYTE
. importing OPS$TKYTE's objects into OPS$TKYTE
Import terminated successfully without warnings.
ops$tkyte%ORA11GR2> !imp userid=/ full=y IGNORE=N
Import: Release 11.2.0.3.0 - Production on Wed May 23 08:49:55 2012
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export file created by EXPORT:V11.02.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing OPS$TKYTE's objects into OPS$TKYTE
. . importing table "T1" 50 rows imported
IMP-00015: following statement failed because the object already exists:
"CREATE TABLE "T2" ("USERNAME" VARCHAR2(30) NOT NULL ENABLE, "USER_ID" NUMBE"
"R NOT NULL ENABLE, "CREATED" DATE NOT NULL ENABLE) PCTFREE 10 PCTUSED 40 I"
"NITRANS 1 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 FREE"
"LISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) LOGGING N"
"OCOMPRESS"
. . importing table "T3" 50 rows imported
Import terminated successfully with warnings.
ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select count(*) from t1;
COUNT(*)
----------
50
ops$tkyte%ORA11GR2> select count(*) from t2;
COUNT(*)
----------
0
ops$tkyte%ORA11GR2> select count(*) from t3;
COUNT(*)
----------
50
ops$tkyte%ORA11GR2> desc t2
Name Null? Type
---------------------------------------- -------- ----------------------------
USERNAME NOT NULL VARCHAR2(30)
USER_ID NOT NULL NUMBER
CREATED NOT NULL DATE
imp
A reader, May 23, 2012 - 10:11 am UTC
<<no, you weren't wrong, the inserts will all generate redo. >>
Why is it generating redo if archivelog mode is OFF?
Is there a way to turn this off completely to speed up the import?
<<!imp userid=/ full=y IGNORE=N>>
I originally thought if the table structure exists it will ignore it but continue to load data.
Was that with "ignore=Y"? Will ignore=N not even load the data if the table exists?
You provided a great example.
I assume nothing special can be done via exp/imp to speed up the CLOBs.
May 23, 2012 - 11:16 am UTC
because archivelog is about archiving logs, not about skipping redo generation for everything.
In noarchivelog mode, direct path operations such as create index, create table as select, alter table t move, sqlldr direct=y, insert /*+ append */ and so on can skip redo generation (they write directly to the datafiles, not the buffer cache)
but anything - ANYTHING - that uses the buffer cache, such as insert, update, delete, merge - must, will, shall generate redo.
ignore=N causes it to skip the table if the table already exists (as demonstrated).
ignore=Y would have it ignore the create failure and try to insert data
Nothing can be done to speed up lobs/longs short of using a query to export and exporting slices of the table in parallel and then importing them in parallel (with ignore=Y after you precreate the table)
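A sketch of that slicing, using the MESG_NO column shown above as the split key (the ranges and file names are illustrative, and query= cannot be combined with direct=y). slice1.par would contain:
userid=myapp/myapp
tables=arch_mesg_log
query="where mesg_no <= 2500000"
file=slice1.dmp
log=slice1_exp.log
slice2.par is the same with "where mesg_no > 2500000" and its own file names. Then:
exp parfile=slice1.par &
exp parfile=slice2.par &
wait
Pre-create the table on the target and run the two imports concurrently with ignore=y.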
import
A reader, May 23, 2012 - 1:27 pm UTC
Tom:
You are certainly the "Master of Tricks".
I used that trick and it cut the import time to 30 minutes without any issues.
I will try the other trick later for slicing the table with CLOBS to several export dumps (using sequence or TIMESTAMP) and then importing all dumps in parallel.
I am kind of surprised that oracle has not done anything special to speed up logical imports such as TURN OFF LOGGING before running the import since they do have it for bulk loads and CTAS operations.
I also heard that the Data Pump tools are 30% faster than the traditional imp/exp tools, so I will use that when going from an 11g DB to an 11g DB to verify it. I can't use it now with 9i.
May 23, 2012 - 2:58 pm UTC
Sam, Sam, Sam...
or use rowid ranges:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:10498431232211 they will avoid using indexes (which is good) to read all of the rows out
and they will avoid multiple full scans if indexes where not used (which would be bad too)
I am kind of surprised that oracle has not done anything special to speed up
logical imports such as TURN OFF LOGGING before running the import since they
do have it for bulk loads and CTAS operations.
we did, we call it datapump. It is in 10g. That was done over seven years ago. Seven years...
You did have transportable tablespaces in 8i and above with export - did you even consider that. You wouldn't have to export or import the data. You would just move datafiles.