fixed record formats?
Maurizio De Giorgi, January 30, 2003 - 6:09 am UTC
Is there no alternative to generic "substr" columns if the records are entirely in fixed format?
Thanks again.
Maurizio De Giorgi.
January 30, 2003 - 9:03 am UTC
I don't understand the problem -- can you post an example? you can trim blanks (leading or trailing) right in the external table definition.
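For what it's worth, a minimal sketch of trimming right in the definition (the directory, file and column names here are made up for illustration):

```sql
-- Hypothetical example: EXT_DIR, emp.dat and the columns are assumptions.
-- LRTRIM in the FIELDS clause strips leading and trailing blanks as the
-- file is read, so queries need no SUBSTR/TRIM of their own.
create table emp_ext
( empno  number,
  ename  varchar2(30)
)
organization external
( type oracle_loader
  default directory ext_dir
  access parameters
  ( records delimited by newline
    fields terminated by ',' lrtrim    -- the trim happens here
  )
  location ('emp.dat')
);
```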
A reader, January 30, 2003 - 10:50 am UTC
hi Tom,
When i tried your example i am getting this error
O.S. Win 2k Server, DB.9.0.2 EE
test@DB1.ORADOM.COM> insert /*+ APPEND */ first
2 when (c1 = 'header' ) then
3 into header (col1,col2,data)
4 values ( to_number(c2), to_date(c3,'dd/mm/yyyy'), c4 )
5 when (c1 = 'master' ) then
6 into master (master_id,creation_date,data)
7 values ( to_number(c2), to_date(c3,'dd-mon-yyyy hh24:mi:ss'), c4 )
8 when (c1 = 'detail' ) then
9 into detail (master_id,detail_id,amount,data)
10 values (to_number(c2),to_number(c3),to_number(c4),c5 )
11 select * from csv_table;
insert /*+ APPEND */ first
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "identifier": expecting one of: "are"
KUP-01008: the bad identifier was: as
KUP-01007: at line 3 column 23
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1
January 30, 2003 - 11:28 am UTC
you made a typo in the CREATE TABLE CSV_TABLE somewhere then -- that is what this is saying.
I just ran this in 9.0, worked fine.
The solution provided is really good to load any CSV file...
Maurizio De Giorgi., January 30, 2003 - 2:07 pm UTC
...the problem is that I have to load mainly fixed format files.
I was not clear -- my fault -- I'm sorry.
So, if you would still help with a solution, here is some additional information.
A file contains only one document type.
A document is composed of a master record followed by 1 to N detail records.
All fields are of char type (although they should be checked to be date, integer, decimal, etc).
I would like to avoid manually defining a different external table for each file.
I'm looking for a generic solution, perhaps a combination of external tables, table functions (data checking/transformation), multi-table inserts and so on.
A good subordinate solution would be something like your load/unload script generator (possibly pl/sql based).
I'm thinking about creating a "repository" containing record format definitions, field-, record- and file-level checking function names, and so on.
Header record (same format for all files).
field desc len offset
z_typrec record type 1 1
z_flowid flow id 8 2
z_sendappl source system/application code 4 10
z_rcvprn destination system/partner code 10 14
z_idoctyp flow name 30 24
z_mestyp flow type 30 54
credat file creation date (DDMMYYYY) 8 84
cretim file creation time (HH24:MI:SS) 6 92
z_totrec number of documents (master records) 12 98
A Master record format (not complete).
E1BPDOCHDRP record type 1 1
FILLER filler 61 2
CO_AREA Controlling area 4 63
DOCDATE Document date (DDMMYYYY) 8 67
POSTGDATE Document posting date (DDMMYYYY) 8 75
VERSION Version code 3 83
DOC_NO Document number 10 86
VARIANT Variant code 5 96
DOC_HDR_TX Document header text/description 50 101
USERNAME Document created by ... 12 151
...
A Detail record format (not complete).
E1BPAAITM record type 1 1
FILLER filler 61 2
SEND_CCTR cost center code 10 63
ACTTYPE activity type 6 73
SENBUSPROC business process code 12 79
ACTVTY_QTY activity quantity (INTEGER) 15 91
ACTIVITYUN activity unit of measure 3 108
ACTIVITYUN_ISO activity unit of measure ISO CODE 3 111
PRICE price (DECIMAL) 11 114
CURRENCY price currency code 5 139
CURRENCY_ISO price currency ISO code 3 144
...
Thanks anyway.
Maurizio De Giorgi.
January 30, 2003 - 3:09 pm UTC
if you have different-length fixed format records in the same file, you'll either
o setup a separate external table def for each, with its unique positions and a WHEN condition to pick off the right types
o setup a single one and create N views -- using substr to pick off the fields.
nothing magical we can do there. It is a very "unusual" format -- mixing fixed-width, positional data into a single file. Too bad you cannot get delimited data instead -- the file would be a fraction of its current size.
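A sketch of the second option, using offsets from the layouts posted above (the directory name and the record-type marker values are assumptions, since the post does not show them):

```sql
-- One generic external table: each line comes back whole.
create table flow_file_ext
( rec varchar2(4000) )
organization external
( type oracle_loader
  default directory ext_dir               -- assumed directory object
  access parameters
  ( records delimited by newline
    fields ( rec position(1:4000) char(4000) )
  )
  location ('flow.dat')
);

-- Views carve the fields out with substr, per the posted offsets.
-- 'M' and 'D' are placeholder record-type values.
create or replace view master_v as
select substr(rec, 63,  4)                      co_area,
       to_date(substr(rec, 67, 8), 'ddmmyyyy')  docdate,
       substr(rec, 86, 10)                      doc_no
  from flow_file_ext
 where substr(rec, 1, 1) = 'M';

create or replace view detail_v as
select substr(rec, 63, 10)                      send_cctr,
       to_number(substr(rec, 91, 15))           actvty_qty
  from flow_file_ext
 where substr(rec, 1, 1) = 'D';
```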
You are a gentleman...
Maurizio De Giorgi, January 30, 2003 - 3:38 pm UTC
If I could find out who the "artist" is that created such an "unusual" masterpiece, I would change his/her nose to an "unusual" shape.
Thanks.
Maurizio De Giorgi.
FIXED field length in EXTERNAL TABLE
High, January 12, 2004 - 11:15 am UTC
Tom,
I can't figure out how to specify that if a field is null (all blanks) in a fixed-length external table, then put NULL. I know that in a delimited file you can use the parameter "MISSING FIELD VALUES ARE NULL"; I tried using it with FIXED with no success. It seems to trim the blank spaces, and the whole file gets messed up.
I looked in the Oracle doc and could not find the answer to this.
Any ideas?
Thanks.
January 12, 2004 - 1:27 pm UTC
just use SQL -- but there are NULLIFs as well.
I would just use SQL: see if the field is all blanks and return NULL if so. Path of least resistance.
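For example (a sketch -- ext_t and its columns are hypothetical): in Oracle, TRIM of an all-blank string yields NULL, so plain SQL on top of the external table is enough:

```sql
-- '    ' trims to NULL; no special access parameter needed.
select col1,
       trim(col2) col2,
       trim(col3) col3
  from ext_t;
```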
null in fixed field external table
High, January 12, 2004 - 3:53 pm UTC
Tom,
I understand what you mean, but I have no idea what the syntax for it is. I tried to look it up in the Oracle doc but could not find it.
This is my create statement.
CREATE TABLE LEE_TEST(
OBJECT_NAME VARCHAR2 (70),
OBJECT_ID VARCHAR2(20),
SECONDARY VARCHAR2 (22))
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY EXT_TBL
ACCESS PARAMETERS(
RECORDS FIXED 92
BADFILE EXT_TBL:'lee_test.bad'
LOGFILE EXT_TBL:'lee_test.log'
FIELDS LRTRIM
MISSING FIELD VALUES ARE NULL(
OBJECT_NAME CHAR(50),
OBJECT_ID CHAR(20),
SECONDARY CHAR(20)))
LOCATION ('lee_test_fixed.txt'))
REJECT LIMIT UNLIMITED;
If one of the columns is null then it is being ignored and the whole record gets messed up. How can I say something like "if the value of OBJECT_ID is empty then NULL"? It would be good if you could provide a NULLIF and SQL statement example.
Thanks.
January 13, 2004 - 1:12 am UTC
You didn't understand what I was saying -- I was saying "fix it in SQL"
use a where clause
use a function in the select
whatever
let the external table load all of the fields -- then use SQL to filter/transform the data however you need.
Problem with external table fixed length record
High, January 13, 2004 - 9:31 am UTC
Tom,
I think we are not on the same page, this is what I mean.
Given this data file:
Abc  123     2
Efg  456     3
Hij          4
Klm  789
When I load this data with the create table below I get:
COL1  COL2    COL3
Abc   123     2
Efg   456     3
Hij   4       (null)
Klm   789     (null)
And this is what it should be:
COL1  COL2    COL3
Abc   123     2
Efg   456     3
Hij   (null)  4
Klm   789     (null)
On the 3rd record the empty value is being ignored and the whole record is shrinking, and I can't really do anything with it. There must be a way to say something like "if the field is empty just put null in this column".
I could not find how to fix this. I'm sure there is an easy solution but I can't figure it out.
This is my create statement:
CREATE TABLE LEE_TEST(
COL1 VARCHAR2 (70),
COL2 VARCHAR2(20),
COL3 VARCHAR2 (22))
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY EXT_TBL
ACCESS PARAMETERS(
RECORDS FIXED 92
BADFILE EXT_TBL:'lee_test.bad'
LOGFILE EXT_TBL:'lee_test.log'
FIELDS LRTRIM
MISSING FIELD VALUES ARE NULL(
COL1 CHAR(50),
COL2 CHAR(20),
COL3 CHAR(20)))
LOCATION ('lee_test_fixed.txt'))
REJECT LIMIT UNLIMITED;
Thanks.
January 13, 2004 - 5:47 pm UTC
you should be using POSITION(n:m) for a fixed width file like that.
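That is, the CREATE posted above reworked with POSITION (a sketch -- the byte offsets are assumptions derived from the CHAR lengths and the 92-byte fixed record, which includes the record terminator):

```sql
CREATE TABLE LEE_TEST(
  COL1 VARCHAR2(70),
  COL2 VARCHAR2(20),
  COL3 VARCHAR2(22))
ORGANIZATION EXTERNAL(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY EXT_TBL
  ACCESS PARAMETERS(
    RECORDS FIXED 92
    BADFILE EXT_TBL:'lee_test.bad'
    LOGFILE EXT_TBL:'lee_test.log'
    FIELDS LRTRIM
    MISSING FIELD VALUES ARE NULL(
      COL1 POSITION(1:50)  CHAR(50),   -- each field keeps its slot,
      COL2 POSITION(51:70) CHAR(20),   -- so an all-blank field simply
      COL3 POSITION(71:90) CHAR(20)))  -- comes back NULL
  LOCATION ('lee_test_fixed.txt'))
REJECT LIMIT UNLIMITED;
```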
External table "bad" file concatenating records
Lee, January 21, 2004 - 3:22 pm UTC
Tom,
I have a freaky problem that I hope you can help me with.
I am using external tables to load data from flat files into Oracle.
Some files are being rejected because some enclosing characters are not there (I'm using OPTIONALLY), which is fine. But when the "bad" file is generated, some records are concatenated; it seems like for some reason Oracle forgets to put a NEWLINE after a record in the bad file.
This happens only sometimes.
Have you ever encountered this? This is driving me crazy and I really hope you can help with it.
Thank you.
January 21, 2004 - 3:34 pm UTC
never heard of it happening, no. sorry -- cannot imagine it happening, especially "sporadically"
if you do the same operation with the same input file, does it put the bad records in without newlines again -- does it reproduce? (eg: if you query the same file over and over -- is the bad file different each and every time???)
Problem with external table bad file
Lee, January 21, 2004 - 8:25 pm UTC
Tom,
First, thank you for the previous fast response. I have a presentation on Monday where I'm trying to convince everyone that the Oracle external table loader will be faster and more reliable than what we are using now, which is Sagent.
I followed your suggestion, and it seems like the bad file has a slightly different number of records every time.
Let me give you all the details and maybe you'll be able to help me figure out what the heck is going on here.
Ok,
I have a file whose records are terminated by NEWLINE; fields are delimited by a comma (,) and optionally enclosed by double quotes (").
The first field in every record is a unique id so I can look for it in the text file, the end result table and the bad table.
This is what I see:
If I look for the funny id in the bad file (where it is concatenated with another record), the corresponding record in the original text file looks fine as far as the NEWLINE terminator goes. Yet the record is rejected and concatenated in the bad file.
This is a part of the original text file: (this is the rejected id 233818406920750100)
"233818401704260200","0000012.66","0000000002","0000014.54","0000000000","0000037.93","0170426","02","20020719","20020729","38140681101","19400625","M","0034","0034","59930150201","ISOSORBIDE MONONITRATE","ISOSORBIDE MONONITRATE","30MG","1","1","48104","AED","HM","MI","1007208503","%","A2","0189","AP9002279","PATEL,BIPIN ISH","103022","2338184","PINCKNEY PHARMACY","48169","A7B","241200","0","A","P","0000012.54"
"233818406920750100","0000019.24","0000000005","0000021.87","0000002.25","0000025.73","0692075","01","20020717","20020729","38152528601","19500615","M","0100","0030","08290841103","INSULIN SYRINGE","SYRINGE-NDL,INS DISPOSABLE","29GX0.5"","1","0","94200","AED","HM","MI","1008267001","%","A2","0189","BB1034115","BURNARD,KAREN S","101350","2338184","PINCKNEY PHARMACY","48169","X2B","940000","0","A","P","0000019.12"
"233818401738960000","0000007.49","0000000005","0000010.12","0000002.25","00000021.4","0173896","00","20020713","20020729","38160678303","19951026","M","0017","0014","59930156001","ALBUTEROL","ALBUTEROL","90MCG","1","1","20110","AED","HM","MI","1015480001","%","A2","0189","AM9386245","UNKNOWN","118467","2338184","PINCKNEY PHARMACY","48169","J5D","121200","0","A","P","0000007.37"
"233818401729890100","0000001.83","0000001.71","0000001.17","0000002.25","0000002.94","0172989","01","20020716","20020729","38168123702","19590928","F","0030","0010","49884003510","MECLIZINE HCL","MECLIZINE HCL","25MG","1","1","18302","AED","HM","MI","1008267001","%","A2","0189","BM6013407","MOLLER,WENDY MA","102185","2338184","PINCKNEY PHARMACY","48169","H6J","562200","0","A","P","0000001.71"
This is the rejected record from the bad file (1 record):
"233818406920750100","0000019.24","0000000005","0000021.87","0000002.25","0000025.73","0692075","01","20020717","20020729","38152528601","19500615","M","0100","0030","08290841103","INSULIN SYRINGE","SYRINGE-NDL,INS DISPOSABLE","29GX0.5"","1","0","94200","AED","HM","MI","1008267001","%","A2","0189","BB1034115","BURNARD,KAREN S","101350","2338184","PINCKNEY PHARMACY","48169","X2B","940000","0","A","P","0000019.12"23"233824708514710000","0000020.71","0000000002","0000020.34","0000002.25","0000023.93","0851471","00","20020715","20020729","38536063101","19380920","M","0100","0030","08290328430","INSULIN SYRINGE","SYRINGE-NDL,INS DISPOSABLE","28GX0.5"","1","0","94200","AED","HM","MI","1007209001","%","A2","0189","AB2738839","BEISON,RICHARD","101201","2338247","RITE-AID #4510","48103","X2B","940000","0","A","P","0000020"23620"233876606357350200","0000049.65","0000000002","0000049.28","0000002.25","0000057.98","0635735","02","20020716","20020729","37252985405","19871009","M","0200","0020","00169185250","NOVOFINE 30","NEEDLES, INSULIN DISPOSABLE","30GX0.8"","1","0","94200","AED","HM","MI","1008067001","%","A2","0189","BK5630478","KASA-VUBU,JOSEP","103672","2338766","ARBOR #8042","48111","X2A","940000","0","A","P","0000049.53"
I will really appreciate your help.
Thanks.
January 22, 2004 - 6:30 am UTC
I'll ask again
given the same inputs -- does the same "bad" output result.
that is -- is this reproducible.
Problems with external tables
Lee, January 22, 2004 - 8:47 am UTC
Tom,
I'm sorry, I was not clear enough.
I ran the same load 3 times; each time I got a different number of bad records (485, 461, 467). I looked for the offending id in the bad files and found it in all 3 of them. In the first and the third the record looked similar (concatenated with parts of other records); the second run had the record rejected but not concatenated.
January 22, 2004 - 8:55 am UTC
are you using parallel?
External table problems
Lee, January 22, 2004 - 9:01 am UTC
Wow Tom, you are fast !!!
Yes I am using parallel.
External table problem
Lee, January 22, 2004 - 9:04 am UTC
Just to be clear.
After I create the external table I do an alter to make it parallel (I know I can do it in the create table statement).
External table problem
Lee, January 22, 2004 - 9:19 am UTC
Tom,
Unbelievable -- why the hell didn't I think of that!
The new problem is that now I have an unknown bad file name, and I need to create an external table from the bad file in order to handle the rejected records (it should be created programmatically; manually would not be a problem).
Can I get the process id (%p) or agent number (%a) from somewhere inside Oracle?
P.S.
You made my freaking day. I've been battling this for almost 2 days.
January 22, 2004 - 9:35 am UTC
well, we can do a directory listing (ls/dir) using a java stored procedure to get the names -- would that help:
</code>
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:439619916584 <code>
you'd use that to generate the list of "bad" files, from which you could dynamically generate a "create external table" command, which could then be processed with dynamic sql in plsql
or just use BFILES to read the files
if you don't want to use java, you could use bfiles and the %a -- since %a is bounded by the max number of parallel query servers, you could probe with bfiles to see which files exist in the range 1 .. max parallel query servers, instead of listing them with "ls"
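A sketch of that BFILE probe (the directory object, the file-name pattern and the upper bound are assumptions for illustration):

```sql
set serveroutput on
declare
    l_bfile  bfile;
begin
    -- %a is bounded by the max number of parallel query servers,
    -- so probe each candidate bad-file name in that range.
    for i in 1 .. 16
    loop
        l_bfile := bfilename( 'EXT_TBL', 'lee_test_' || i || '.bad' );
        if dbms_lob.fileexists( l_bfile ) = 1
        then
            dbms_output.put_line( 'found bad file: lee_test_' || i || '.bad' );
        end if;
    end loop;
end;
/
```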
External table problem
Lee, January 22, 2004 - 9:57 am UTC
Tom,
I hate to kiss ass but I have to.... Y-O-U A-R-E T-H-E M-A-N.
external tables
A reader, March 25, 2004 - 3:34 pm UTC
Hi Tom
We are working on a logging framework, and I was toying with the idea of using external tables for the log files (in a format we control). This gives us an SQL-based way of accessing information in the log files. These files contain debug information from our instrumented code (in pl/sql). The logging framework would be in pl/sql. (I am aware of your debug.f -- we are planning to leverage some of the ideas from it.)
What are your thoughts?
Thanx!
March 25, 2004 - 9:27 pm UTC
if you want to log to tables (permanently) sure.
not sure I'd want to use external tables frequently as a query source (as a method to LOAD, definitely -- for ongoing queries, frequently, maybe not...)
thanx Tom!
A reader, March 26, 2004 - 8:33 am UTC
Actually the idea is to *not* log into tables permanently. So instead of "committing" on every log message during a debug session, we use utl_file to write to a log file. The log file format is such that it can be queried by creating an external table on it. Since the diagnostic session is not very frequent (so you hope :)), the fact that queries would not be very fast is perhaps acceptable.
continuing on logging
A reader, March 30, 2004 - 1:51 pm UTC
Hi Tom
Regarding the above question on logging framework, there are two methods we are considering:
1. Log the debug messages into tables: when logging is turned on, the debug messages get inserted into a table. Each message insert results in a commit (not a good idea, but remember this is done only during a diagnostic session -- maybe a simple "batching" could alleviate this issue though?). Additionally, we get the benefit of using SQL on tables. We also need a purging mechanism to remove these messages (which would cause additional load on the system).
2. Log the debug messages into a file in the utl_file_dir (similar to what debug.f does). This has the advantage that it does not cause load (redo/undo) on the system (except cpu consumption). Regarding the capability for selects, we could potentially use external tables created on the log files to do that.
My questions are:
1. What are the implications if we use RAC? In alternative 2, in particular, where do the messages go? Are there any additional gotchas you can think of when using RAC, esp. for alternative 2?
2. I am not a RAC expert. In a standalone non-RAC database I feel that the second alternative is better (esp. if the idea of external tables works), due to the reasons stated. What do you think?
3. How does debug.f fare when used in conjunction with RAC? Has this been tried?
4. If RAC does have an issue with logging into files, then how come sql trace still puts things into files -- not databases? This seems to indicate that logging into files is a better alternative even in RAC?
Many thanx!
March 30, 2004 - 2:20 pm UTC
a1) the messages from utl_file would go to a file system visible to the RAC server. Unless this filesystem was NFS'ed all around, it would be visible on one server only.
a2) it is what I use.
a3) it works fine -- the trace goes to the filesystem that the instance running the code sees. You might add the instance number to the filename to avoid conflicts if you use a shared filesystem of some sort.
a4) you go to the server to get the trace. most of the time traces indicate "database failure" -- it would not be "a good idea" to put this data into the database!
thanx Tom!
A reader, March 30, 2004 - 2:33 pm UTC
So to confirm:
1. Logging into a file versus tables has no additional advantage/disadvantage when it comes to using RAC, correct?
2. Do you have any particularly strong opinion on putting trace info in a trace file versus tables? Pros and cons plus final verdict? I would appreciate it if you can think of any additional pros/cons for either, apart from the ones I mentioned earlier.
3. In RAC, is there a chance of getting confused about where one should look for a particular code's trace file (as in, which host)? I think you explained it but I am not getting it :) What would be the steps to identify the host where your messages go, since you may not know which instance executed the code (not sure if the question is valid :))?
Also, the last point was something I should have thought of! So that obviously does not apply in my case, because this logging framework is to log pl/sql messages :)
Thanx a bunch!!
March 30, 2004 - 2:42 pm UTC
1) not really -- in RAC or not in RAC, you would need to be on "that instance" to see the debug.
2) I'm into the file simply cause
o it can be zipped up and emailed to me easily
o if i needed to SQL it, external tables would work
o I can tail -f it and watch 'real time' <<<=== #1 reason for me.
the table has the advantage that it can be viewed from the client, any client.
3) log into the server the instance is running on and there should be no confusion. select * from v$instance will tell you "where you are"
thanx Tom - few more qs...
A reader, March 30, 2004 - 5:20 pm UTC
"1) not really -- in RAC or not in RAC, you would need to be on "that instance" to see the debug."
This does not seem to be true if you log into tables? In other words, if I log into a table, I don't need to know the instance, right? (sorry again to be "RAC" ignorant.)
When our code executes on RAC, is it possible that we don't know which instance is executing it at a given time? If this is possible, then this may be a pro for the "tables" based solution, perhaps?
"3) log into the server the instance is running on and there should be no confusion. select * from v$instance will tell you "where you are""
Again, if you are selecting from tables, do we need to know the instance in all cases in RAC?
thanx!
March 30, 2004 - 6:36 pm UTC
that is correct -- i said as much above (about the table). the table lets you access it from anywhere.
v$instance tells you what instance exactly you are on.
if you use tables -- no, you don't need to be on the server as the data isn't being written to a filesystem, but to a database table.
yes but..
A reader, March 30, 2004 - 6:56 pm UTC
To confirm:
"When our code executes on RAC, is it possible that we don't know which instance is executing it at a given time?"
So let us say I am debugging a pl/sql module. When it is running, if I want to know which instance it is running on, I can simply do the select from v$instance, find out which server is executing it, and then log into that server and look for the appropriate file, correct?
If we are logging into a table, we don't need to do this step, since the messages are going into a database table...
thanx a bunch, Tom!
March 30, 2004 - 7:16 pm UTC
you are debugging a plsql module.
you are obviously logged in.
select * from v$instance will tell you where.
(you seem to be leaning towards a table -- I'm not telling you to go one way or the other. if you find the table easier -- go for it, it has pros and cons as well. I'm not trying to talk you out of anything here)
thanx Tom!
A reader, March 30, 2004 - 7:27 pm UTC
No, actually I am leaning towards files. It is just that someone objected to the idea of files, stating that RAC may create confusion on where the file is. You have clarified that bit perfectly.
What still confuses me is: is it possible that when a server is executing code (pl/sql or sql or whatever in the database instance) we don't know which instance the code is executing on? In other words, when you obtain a connection to a RAC instance, do we have to specify which instance we want to connect to, or is that transparent to us? When you obtain a connection normally you specify a service (apart from user/passwd). Is RAC any different?
I think this question may not be related to the above discussion -- but this is what confused me.
Thanx anyway, and have a wonderful day! :)
March 30, 2004 - 7:34 pm UTC
in rac you can either
a) specify a particular instance
b) a set of instances to try
c) let us pick one for you
totally up to you.
well..
A reader, March 30, 2004 - 7:43 pm UTC
"in rac you can either
a) specify a particular instance
b) a set of instances to try
c) let us pick one for you"
Well, that was my confusion. In cases b and c you don't know which instance your code will be executing on. In the case of logging to files, I guess you would have to try logging into (potentially) all the nodes in such cases to see which instance executed your code, and thus where your log file is?
Do you have any such experience with case b or c?
Thanx!!
March 30, 2004 - 7:51 pm UTC
or just "select * from v$instance"! it'll tell you what node you are on....
hmm..
A reader, March 30, 2004 - 7:59 pm UTC
I am sorry -- I do understand that -- please bear with me.
That select is useful in the case where, say, you log in (through sqlplus) and execute some pl/sql code. In that case you can run the select you mentioned.
In the general case, you have an application that uses RAC. I just ran it (in debug mode), and as part of the application it logged in, executed my code and dumped the log into a file. I have no idea which instance the application used (cases b/c apply).
Now I want to know where my log file is. Since to execute the "select" you mentioned I need to know the instance, it is a chicken and egg situation. Hope I am making sense -- sorry to keep coming back with more questions.
I guess one way could be that the application always tells you which instance it ran on (since it has the info *when* it runs the pl/sql code).
March 31, 2004 - 8:15 am UTC
last paragraph would be the "answer".
In my apps -- when the page is run in "debug" mode, it puts additional information on the page itself.
thanx Tom!
A reader, March 31, 2004 - 9:58 am UTC
For bearing with me coming back for questions!
Have a great day!!!
comments in external table file?
A reader, April 01, 2004 - 11:56 am UTC
Tom
How can I put comments in the external table's file so that select statements on that external table ignore lines starting with, say, "#"?
Thanx, and have a wonderful vacation!!
April 01, 2004 - 12:01 pm UTC
ORGANIZATION external
(
TYPE oracle_loader
DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8ISO8859P1
LOAD WHEN ((1: 1) != "#")
BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'ulcase7.bad'
DISCARDFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'ulcase7.dis'
LOGFILE 'test.log_xt'
thank you!!!!
A reader, April 01, 2004 - 5:01 pm UTC
A reader, April 13, 2004 - 4:39 pm UTC
Hi Tom,
Assume there are 2 items with the same master id and detail id in the flat file, which I want to insert into the detail table in this way: if the master id and detail id are the same, I'd like to just add their amounts; otherwise just insert. It is like a merge statement. Is it doable by just modifying the following sql:
insert /*+ APPEND */ first
2 when (c1 = 'header' ) then
3 into header (col1,col2,data)
4 values ( to_number(c2), to_date(c3,'dd/mm/yyyy'), c4 )
5 when (c1 = 'master' ) then
6 into master (master_id,creation_date,data)
7 values ( to_number(c2), to_date(c3,'dd-mon-yyyy hh24:mi:ss'), c4 )
8 when (c1 = 'detail' ) then
9 into detail (master_id,detail_id,amount,data)
10 values (to_number(c2),to_number(c3),to_number(c4),c5 )
11 select * from csv_table;
Thanks Tom!
April 13, 2004 - 6:47 pm UTC
there is no multi-table merge, so to use merge-- you'd need three separate statements.
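For instance, a sketch of the three-statement version of the load above (table and column names come from the posted insert; the GROUP BY collapses duplicate master/detail pairs within the file before the merge):

```sql
insert into header (col1, col2, data)
select to_number(c2), to_date(c3,'dd/mm/yyyy'), c4
  from csv_table
 where c1 = 'header';

insert into master (master_id, creation_date, data)
select to_number(c2), to_date(c3,'dd-mon-yyyy hh24:mi:ss'), c4
  from csv_table
 where c1 = 'master';

merge into detail d
using ( select to_number(c2)      master_id,
               to_number(c3)      detail_id,
               sum(to_number(c4)) amount,
               max(c5)            data
          from csv_table
         where c1 = 'detail'
         group by to_number(c2), to_number(c3) ) s
   on ( d.master_id = s.master_id and d.detail_id = s.detail_id )
 when matched then
   update set d.amount = d.amount + s.amount
 when not matched then
   insert (master_id, detail_id, amount, data)
   values (s.master_id, s.detail_id, s.amount, s.data);
```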
q on external table
A reader, June 22, 2004 - 4:14 pm UTC
Is there a way of turning off the log file creation by external tables? I get a bunch of log files each time there is a select from the external table, and I would like to turn them off.
thank you!
June 22, 2004 - 9:36 pm UTC
given that you should select from an external table once, and that external tables are designed to load data into the database -- not to be queried over and over and over...
maybe you are not doing something 'appropriate'?
but check out the docs; the utilities guide describes "NOLOGFILE" as an option.
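For reference, a sketch of where NOLOGFILE goes (ext_t is hypothetical, and the other clauses here stand in for whatever the real definition uses):

```sql
alter table ext_t access parameters
( records delimited by newline
  nologfile               -- suppress the per-query log file
  nobadfile               -- optionally suppress the bad file too
  fields terminated by ','
);
```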
thanx!
A reader, June 27, 2004 - 12:24 pm UTC
Actually I am using this just for testing (not of performance but of the data that comes to the UI :)). I changed the code to first load from the external table into a real table anyway...
btw, could not find the nologfile option...
thanx!
June 28, 2004 - 7:36 am UTC
File name?
A reader, December 21, 2004 - 6:35 pm UTC
How can I access the filename corresponding to the external table I am selecting from?
I want to do something like
insert all
when c1='cust.dat' then into table cust(a,b) values (a,b)
...
select <something> c1
from ext_table;
<something> should give me the name of the file where the external table is pointing to
Also, how would this work when the 'location' points to multiple files? The access driver simply puts the files together (cat file1 file2 ... fileN), but is there a way to know which record is coming from which file?
Thanks
December 21, 2004 - 7:27 pm UTC
A reader, December 21, 2004 - 8:36 pm UTC
December 22, 2004 - 8:14 am UTC
sorry, fixed that cut and paste error -- see above.
Multiple files
A reader, December 24, 2004 - 10:56 am UTC
Also, how would this work when the 'location' points to multiple files? The access driver simply puts the files together (cat file1 file2 ... fileN), but is there a way to know which record is coming from which file?
Thanks
December 24, 2004 - 1:45 pm UTC
there is not, they are all one "logical input" at that point.
Loading master detail from detail only
A reader, December 27, 2004 - 12:03 am UTC
Suppose I have detail-only records in an external table like
master1,detail1,...
master1,detail2,...
master2,detail3,...
master2,detail1,...
How can I use multi-table insert to populate all the DISTINCT masterN fields into the MASTER table and the full record into the DETAIL table?
Thanks
December 27, 2004 - 10:00 am UTC
ops$tkyte@ORA9IR2> create table t ( x int, y int );
Table created.
ops$tkyte@ORA9IR2> insert into t values ( 1, 100 );
1 row created.
ops$tkyte@ORA9IR2> insert into t values ( 1, 101 );
1 row created.
ops$tkyte@ORA9IR2> insert into t values ( 2, 200 );
1 row created.
ops$tkyte@ORA9IR2> insert into t values ( 2, 201 );
1 row created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table p ( x int primary key );
Table created.
ops$tkyte@ORA9IR2> create table c ( x references p, y int );
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert all
2 when (rn = 1) then into p (x) values (x)
3 when ( 1 = 1) then into c(x,y) values(x,y)
4 select x, y, row_number() over (partition by x order by y) rn
5 from t;
6 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from p;
X
----------
1
2
ops$tkyte@ORA9IR2> select * from c;
X Y
---------- ----------
1 100
1 101
2 200
2 201
Multi Table Insert conditional,unconditional
denni50, January 10, 2005 - 12:31 pm UTC
Hi Tom
We have 1.7 mil records we are about to import from another source. Got the external table working using a test file (approx 3,000 records).
Question: we want to insert if the customer doesn't exist; we do not want to reload data if the customer does exist. For that I will use the MERGE statement... however, I will need to match on lastname, addr1, zip, since the source data will not have idnumbers to match on -- they are data sourced from a website.
We have an idnumber sequence on the customer base table, so that when a new record is inserted the customer is automatically assigned an idnumber. If I have a multi-table insert (as an example):
John Doe 123 Main St Cleveland OH 23456 555-555-9999
when John Doe gets inserted into the customer table as 'John','Doe' (with idnumber 3344876), then 123 Main St Cleveland OH 23456 gets inserted into the address table and 555-555-9999 gets inserted into the phone table -- will the assigned idnumber follow along for the entire insert?
(hope this is understandable)
thanks
January 10, 2005 - 2:01 pm UTC
I'm confused by the merge and multi-table insert in the same context -- I did not follow that.
I followed the "we want to merge", but then got lost as to where the multi-table insert comes into play.
Tom ...more info
denni50, January 10, 2005 - 3:07 pm UTC
In my research (and looking at some of your code in other threads), I've noticed that MERGE and multi-table inserts appear to be 2 distinct operations... what I need is to combine them... in your code below (don't care about the stats part):
ops$tkyte@ORA920.US.ORACLE.COM> declare
2 l_start number;
3 l_run1 number;
4 l_run2 number;
5
6 type rc is ref cursor;
7 l_cur rc;
8 begin
9 insert into run_stats select 'before', stats.* from stats;
10
11 l_start := dbms_utility.get_time;
12 merge into t1
13 using ao on ( t1.object_id = ao.object_id )
14 when matched then
15 update set owner = ao.owner,
16 object_name = ao.object_name,
17 subobject_name = ao.subobject_name,
18 data_object_id = ao.data_object_id,
19 object_type = ao.object_type,
20 created = ao.created,
21 last_ddl_time = ao.last_ddl_time,
22 timestamp = ao.timestamp,
23 status = ao.status, temporary = ao.temporary,
24 generated = ao.generated,
25 secondary = ao.secondary
26 when not matched then
27 insert ( OWNER, OBJECT_NAME,
28 SUBOBJECT_NAME, OBJECT_ID,
29 DATA_OBJECT_ID, OBJECT_TYPE,
30 CREATED, LAST_DDL_TIME,
31 TIMESTAMP, STATUS, TEMPORARY,
32 GENERATED, SECONDARY )
33 values ( ao.OWNER, ao.OBJECT_NAME,
34 ao.SUBOBJECT_NAME, ao.OBJECT_ID,
35 ao.DATA_OBJECT_ID, ao.OBJECT_TYPE,
36 ao.CREATED, ao.LAST_DDL_TIME,
37 ao.TIMESTAMP, ao.STATUS, ao.TEMPORARY,
38 ao.GENERATED, ao.SECONDARY);
39 commit;
40 l_run1 := (dbms_utility.get_time-l_start);
41 dbms_output.put_line( l_run1 || ' hsecs' );
42
43 insert into run_stats select 'after 1', stats.* from stats;
44 l_start := dbms_utility.get_time;
45 for x in ( select * from ao )
46 loop
47 update t2 set ROW = x where object_id = x.object_id;
48 if ( sql%rowcount = 0 )
49 then
50 insert into t2 values X;
51 end if;
52 end loop;
53 commit;
54 l_run2 := (dbms_utility.get_time-l_start);
55 dbms_output.put_line( l_run2 || ' hsecs' );
56 dbms_output.put_line
57 ( 'run 1 ran in ' || round(l_run1/l_run2*100,2) || '% of the time' );
58
59 insert into run_stats select 'after 2', stats.* from stats;
60 end;
61 /
...can I replace that with a multi-table insert clause like this:
when not matched then
insert all
when firstname,lastname then into customer
values(firstname,lastname)
when addr1,city,state,zip then into address
values(addr1,city,state,zip)
when phonenumber then into phone
values(phone)
from external_table;
commit;
I need to do a MERGE to test for duplicates, but I also
need to perform multi-table inserts, and want to combine
both operations (if possible).
hth..and thanks
January 10, 2005 - 4:22 pm UTC
there is merge
there are multi-table inserts
there is no merge with multi-table inserts.
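Since the two cannot be combined, a common workaround is to run one conditional insert per target table, joining back to the customer table to pick up the sequence-assigned idnumber. The following is only a sketch based on the poster's description; the table and column names come from the question above, and the sequence name cust_seq (plus the join-back on lastname alone) are assumptions, not a tested schema:

```sql
-- Step 1: insert only customers that do not already exist. The match key
-- (lastname, addr1, zip) spans two tables, so the lookup joins them.
insert into customer (idnumber, firstname, lastname)
select cust_seq.nextval, et.firstname, et.lastname
from external_table et
where not exists (
  select null
  from customer c join address a on a.idnumber = c.idnumber
  where c.lastname = et.lastname and a.addr1 = et.addr1 and a.zip = et.zip );

-- Step 2: load the child tables, joining back to find each idnumber.
-- (Joining on lastname alone is simplistic; a real load would need the
-- full match key or a staging link column.)
insert into address (idnumber, addr1, city, state, zip)
select c.idnumber, et.addr1, et.city, et.state, et.zip
from external_table et
join customer c on c.lastname = et.lastname
where not exists ( select null from address a where a.idnumber = c.idnumber );
```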
goofed on when clause
denni50, January 10, 2005 - 3:13 pm UTC
I know the WHEN clause in the multi-table insert is supposed
to indicate a conditional insert (which I didn't use),
but hopefully you get the gist of what I'm trying to
accomplish.
:~)
VA, January 21, 2005 - 8:45 pm UTC
I have exactly the same situation as the OP
HEADER...
...
TRAILER...
This needs to go into 3 tables (header_tab, body_tab, trailer_tab).
Currently, I have sqlldr controlfile doing stuff like
into table header_tab when (1:6) = 'HEADER'
into table body_tab when (1:6)!='HEADER' and (1:7) != 'TRAILER'
into table trailer_tab when (1:7)='TRAILER'
How can I do this with multi-table insert and external tables? What is the equivalent for the (x:y) notation above to refer to positions in the file?
Thanks
January 22, 2005 - 9:38 am UTC
substr, as suggested in the original answer...
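For readers looking for the concrete shape of that suggestion, a sketch follows: one external table exposing each raw line, plus one view per record type picking fields off by position with substr. The directory, file name, field positions, and view columns here are all invented for illustration:

```sql
-- One external table: the whole fixed-format line as a single column.
create table raw_lines
( line varchar2(4000) )
organization external
( type oracle_loader
  default directory mydir
  access parameters
  ( records delimited by newline
    fields
    missing field values are null
    ( line position(1:4000) char(4000) )
  )
  location ('myfile.dat')
);

-- One view per record type; substr replaces sqlldr's (x:y) notation.
create or replace view header_v as
select substr(line, 7, 10) col1
from raw_lines
where substr(line, 1, 6) = 'HEADER';
```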
How to load master-detail table using oracle sequence
A reader, May 11, 2005 - 5:35 pm UTC
Hi Tom,
I saw your very first example on this thread which has header-master-detail record which has ids already populated. In my case, i don't have ids generated in the input file.
Just wondering, is there any way to generate the Oracle sequence number on the fly for the master id (parent table) and detail id (child table), and link the master and detail records while loading?
Can you please show an example how to do this using External tables and load into two tables using parent-child relationship.
My input file will be of the format:
master record1, child record1 <= on the first record
master record2, child record2 <= on the second record
....
May 12, 2005 - 7:21 am UTC
I have found no truly "safe" way to do this, as it requires a bit of procedural logic. I can show you queries that appear to work, but I'm afraid plan changes in the future could break them. Bulk processing is going to be the way to go here, or a two-step approach (there must be some way to link the parent to the child in the external table), so you should be able to:
a) load the parent, assign the key
b) load the child, look up the foreign key
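A sketch of that two-step approach follows. The table, sequence, and column names are all assumptions, and it presumes the file carries some link value tying each child row to its parent:

```sql
-- Step a: load the parents, assigning surrogate keys from a sequence.
insert into parent (id, link_val, data)
select parent_seq.nextval, link_val, data
from ext_parent_rows;

-- Step b: load the children, looking the foreign key up by the link value.
insert into child (parent_id, detail)
select p.id, c.detail
from ext_child_rows c
join parent p on p.link_val = c.link_val;
```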
still need column identifier in external file
A reader, May 12, 2005 - 5:36 pm UTC
Thanks Tom.
<<bulk processing is going to be the way to
go here,
I believe even bulk processing requires a bit of procedural code, right? Do you mean process the external file using PL/SQL code (a stored procedure) and store the data in the master-detail tables?
<<
or a two step (there must be some way to link the parent to the child
in the external table, so you should be able to
a) load parent, assign key
b) load child, look up foreign key
>>
But even in this case, we still need a COLUMN IDENTIFIER in the external file to indicate the relationship between the parent and child records (like a dummy row number in the external table).
Without a dummy column identifier, I cannot think of any way to do the foreign key lookup to link the parent and child records.
Thanks.
May 13, 2005 - 8:49 am UTC
bulk processing, array fetch from table, load up arrays to be array inserted.
you do not have a way to relate parent and child records in this input file? How do you know which child goes with which parent?
Thanks Tom for the clarification
A reader, May 13, 2005 - 10:37 am UTC
PL/SQL function using cursor
Umar, May 21, 2005 - 9:52 am UTC
Hi,
I have created a table in Oracle which contains FIRSTNAME and LASTNAME together, as follows.
--------------------
Jens Erik Petersen
Niels hansen
Ulla Vig-Andersen
--------------------
Can anybody help me create a function which receives a value containing the first name and last name, and returns only the LASTNAME?
May 21, 2005 - 1:08 pm UTC
Define last name, please -- which are the last names above?
If the last name is simply the word after the last blank, instr can be used to look for the last blank, and that result, used in substr, will return the last word:
ops$tkyte@ORA9IR2> select substr( 'aaa bbb cccccc', instr( 'aaa bbb cccccc', ' ', -1 )+1 ) from dual;
SUBSTR
------
cccccc
SQL statement
Mike, May 22, 2005 - 8:18 am UTC
Hi Tom,
Can you please help me to write a SELECT statement!!
I have a table 'person' that looks like the following. Autonumbering is used in this table. We can see that two persons have the same first name and last name.
Person
----------------------------------
ID Firstname Lastname
----------------------------------
7 Peter Jensen
8 Hans Petersen
9 Hanne Nielsen
10 Peter Jensen
11 Ulla Andersen
-----------------------------------
I need to write a SELECT statement which shows all persons who have the same first name and last name more than once in the table person, for example Peter Jensen.
Can you please write such a SELECT statement for me? Thanks
May 22, 2005 - 8:26 am UTC
select * from (
select a.*, count(*) over (partition by firstname, lastname) cnt
from t a )
where cnt > 1;
Thanks for answer Tom :)
Mike, May 22, 2005 - 12:48 pm UTC
Thanks for help Tom:)
and if I want to calculate how many records with the same name are found, when there is more than one similar name,
how would you change the SELECT statement for the table 'person'?
Person
----------------------------------
ID Firstname Lastname
----------------------------------
7 Peter Jensen
8 Hans Petersen
9 Hanne Nielsen
10 Peter Jensen
11 Ulla Andersen
-----------------------------------
May 22, 2005 - 1:00 pm UTC
i don't know what you mean this time.
if you want to know how many unique names there are that happen more than once:
ops$tkyte@ORA10G> create table names
2 as
3 select ename, job from scott.emp;
Table created.
ops$tkyte@ORA10G> insert into names select ename, job from scott.emp where rownum < 6;
5 rows created.
ops$tkyte@ORA10G> select count(count(*))
2 from names
3 group by ename, job
4 having count(*) > 1;
COUNT(COUNT(*))
---------------
5
pretend ename and job are your first/last names. that shows there are 5 first/last name occurrences that happen more than once.
PL/SQL function
Umar, May 23, 2005 - 4:51 am UTC
Hello Tom,
How can I make a function in PL/SQL which receives a person's name as an input value and returns its length as output?
Example:
input Peter-hansen
return value = 11
We have to use arrays to count the length of the name, right? But I don't know how I can program it.
May 23, 2005 - 10:52 am UTC
length('Peter-hansen')
there is a builtin function already.
PL SQL
Umar, May 24, 2005 - 4:54 am UTC
Thanks for your answer tom!
I'm very happy to visit your site, where I get lots of good examples relating to my situation. Your online support is very useful for Oracle users.
Can you please answer me one more simple question?
I have two functions which return 2 values.
funk1(a)
returns a name.
funk2(b)
returns a number.
How can I write a third function which gets its input parameters from funk1(a) and funk2(b)?
Is the following allowed?
funk3(funk1(a), funk2(b))
May 24, 2005 - 8:02 am UTC
yes it is. just code it, the function doesn't care if you send it a constant, a variable, a function return value...
How to load data into object table from FLAT file
DEBASISH GHOSH, June 20, 2005 - 1:57 am UTC
I have one table and one object
object
--------
address
(add1 varchar2(30),add2 varchar2(30),pincode number)
TABLE
-------
master
(code number(4)
name char(10)
address address)
temp.dat (a flat file) contains
1000THOMAS 54,R.S.AVENUE NEW DELHI 110001
1001CLARK 111 K.S.GANDHI ROAD, KOLKATA 700001
.......
......
HOW COULD I LOAD DATA INTO THE MASTER TABLE AS WELL AS THE OBJECT
THROUGH SQLLDR, OR ANY OTHER WAY, FROM THAT FLAT FILE?
THANKS TO YOU IN ADVANCE
DEBASISH
NEW DELHI
great
Debasish Ghosh, June 20, 2005 - 10:09 am UTC
thanks a lot.
Load when condition in External Tables
surendra, June 29, 2005 - 3:02 am UTC
I want one example of the LOAD WHEN condition used in external tables.
This is very urgent...
June 29, 2005 - 9:00 am UTC
well, a where clause certainly works pretty easily, doesn't it?
when I want a quick and dirty example, I take a control file from $ORACLE_HOME/rdbms/admin and use sqlldr with external_table=generate_only. eg: I took this ctl file:
LOAD DATA
INFILE 'ulcase7.dat'
DISCARDFILE 'ulcase7.dis'
APPEND
INTO TABLE emp
WHEN (57)='.'
TRAILING NULLCOLS
(hiredate SYSDATE,
deptno POSITION(1:2) INTEGER EXTERNAL(3)
NULLIF deptno=BLANKS,
job POSITION(7:14) CHAR TERMINATED BY WHITESPACE
NULLIF job=BLANKS "UPPER(:job)",
mgr POSITION(28:31) INTEGER EXTERNAL TERMINATED BY WHITESPACE
NULLIF mgr=BLANKS,
ename POSITION (34:41) CHAR TERMINATED BY WHITESPACE
"UPPER(:ename)",
empno INTEGER EXTERNAL TERMINATED BY WHITESPACE,
sal POSITION(51) CHAR TERMINATED BY WHITESPACE
"TO_NUMBER(:sal,'$99,999.99')",
comm INTEGER EXTERNAL ENCLOSED BY '(' AND '%'
":comm * 100"
)
and ran:
$ sqlldr scott/tiger test.ctl external_table=generate_only
and got:
CREATE TABLE "SYS_SQLLDR_X_EXT_EMP"
(
DEPTNO NUMBER(2),
JOB VARCHAR(255),
MGR NUMBER(4),
ENAME VARCHAR(255),
EMPNO NUMBER(4),
SAL VARCHAR(255),
COMM VARCHAR(255)
)
ORGANIZATION external
(
TYPE oracle_loader
DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8ISO8859P1
LOAD WHEN ((57: 57) = ".")
BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'ulcase7.bad'
DISCARDFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'ulcase7.dis'
LOGFILE 'test.log_xt'
READSIZE 1048576
FIELDS LDRTRIM
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(
DEPTNO (1:2) INTEGER EXTERNAL(3)
NULLIF (DEPTNO = BLANKS),
JOB (7:14) CHAR(8)
NULLIF (JOB = BLANKS),
MGR (28:31) CHAR(4)
NULLIF (MGR = BLANKS),
ENAME (34:41) CHAR(8),
EMPNO CHAR(255)
TERMINATED BY WHITESPACE,
SAL (51) CHAR(255)
TERMINATED BY WHITESPACE,
COMM CHAR(255)
ENCLOSED BY "(" AND "%"
)
)
location
(
'ulcase7.dat'
)
)REJECT LIMIT UNLIMITED
INSERT /*+ append */ INTO EMP
(
HIREDATE,
DEPTNO,
JOB,
MGR,
ENAME,
EMPNO,
SAL,
COMM
)
SELECT
SYSDATE,
DEPTNO,
UPPER(JOB),
MGR,
UPPER(ENAME),
EMPNO,
TO_NUMBER(SAL,'$99,999.99'),
COMM * 100
FROM "SYS_SQLLDR_X_EXT_EMP"
external tables
A reader, June 29, 2005 - 6:27 am UTC
Please send me one sample using the LOAD WHEN condition in external tables.
I want to split data from a flat file based on some characters.
eg : CUST 12 SS 222222 EEE33
BRKR 33 KK 788888 HHHHH
DETL 66 GG 677677 HUDFD
From here I want to filter on the first four characters using the load when condition.
June 29, 2005 - 9:04 am UTC
(hint, I sleep at night, my night is your day, immediate feedback is impossible)
but, "where column = 'CUST' " in the query -- think about it, you can use SQL to filter anything you want!
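In other words, with an external table the split is just a filtered query -- or, staying with this thread's theme, a single multi-table INSERT FIRST. The following is a sketch only; the table and column names (ext_tab, rec, and the three targets) are invented:

```sql
-- Route each row by its first four characters, one pass over the file.
insert first
  when substr(rec, 1, 4) = 'CUST' then into cust_tab (line) values (rec)
  when substr(rec, 1, 4) = 'BRKR' then into brkr_tab (line) values (rec)
  when substr(rec, 1, 4) = 'DETL' then into detl_tab (line) values (rec)
select rec from ext_tab;
```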
Loading data into multiple tables
Vikas, August 16, 2005 - 6:41 am UTC
Hi Tom,
I have a flat file with 10 fields delimited by WHITESPACE.
The 8th field has a value separated by ',' which are the child records of the parent row. We need to load the data in both parent and child tables using External table utility.
eg. F1 F2 F3 F4 F5 F6 F7 C1,C2,C3 F9 F10
F1 F2 F3 F4 F5 F6 F7 C1,C2 F9 F10
F1 F2 F3 F4 F5 F6 F7 C1 F9 F10
Parent Table
F1 F2 F3 F4 F5 F6 F7 F9 F10
F1 F2 F3 F4 F5 F6 F7 F9 F10
F1 F2 F3 F4 F5 F6 F7 F9 F10
Child Table
F1 SEQ C1
F1 SEQ C2
F1 SEQ C3
F1 SEQ C1
F1 SEQ C2
F1 SEQ C3
Where SEQ is the sequence number, which needs continuous numbering. Here F1 represents the unique key in the flat file.
An example illustration would be the best to understand.
Thanks in anticipation.
August 16, 2005 - 10:57 am UTC
is the SEQ absolutely necessary, isn't (F1, C1), (F1, C2) and so on - unique already?
but tell you what, if you do a simple create table (regular table, doesn't need to be external, they look the same) and inserts, I'll try to take a look later tonight when I get back to the hotel.
Multiple table loading
Vikas Khanna, August 17, 2005 - 2:52 am UTC
Hi Tom,
Yes, a SEQ would be required, since the combinations (f1,c1), (f2,c2) aren't unique sets.
I thought it would be best to learn more about external tables if the data in the flat file resembles something like what we have.
Anyway, whatever approach you decide on will be okay with me. Waiting for the practical solution on this.
Regards,
Vikas
August 17, 2005 - 1:13 pm UTC
really WISH you could have done the create table and inserts for me, but anyway.
ops$tkyte@ORA9IR2> create table p( f1 varchar2(10) primary key, f2 varchar2(10) );
Table created.
ops$tkyte@ORA9IR2> create table c( f1 references p, seq int, c varchar2(10), primary key(f1,seq) );
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t ( x varchar2(10), y varchar2(10), z varchar2(10) );
Table created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert into t values ( 'xf1', 'c1,c2,c3', 'xf2' );
1 row created.
ops$tkyte@ORA9IR2> insert into t values ( 'yf1', 'c1,c2', 'yf2' );
1 row created.
ops$tkyte@ORA9IR2> commit;
Commit complete.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> @str2tbl
ops$tkyte@ORA9IR2> create or replace type str2tblType as table of varchar2(30)
2 /
Type created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace function str2tbl( p_str in varchar2, p_delim in varchar2 default ',' ) return str2tblType
2 PIPELINED
3 as
4 l_str long default p_str || p_delim;
5 l_n number;
6 begin
7 loop
8 l_n := instr( l_str, p_delim );
9 exit when (nvl(l_n,0) = 0);
10 pipe row( ltrim(rtrim(substr(l_str,1,l_n-1))) );
11 l_str := substr( l_str, l_n+1 );
12 end loop;
13 return;
14 end;
15 /
Function created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert when (seq = 1) then into p(f1,f2) values (x,z)
2 when (seq >=1) then into c(f1,seq,c) values (x,seq,c)
3 select x, z, column_value c,
4 row_number() over (partition by x order by column_value) seq
5 from t t, table(str2tbl(t.y));
7 rows created.
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from p;
F1 F2
---------- ----------
xf1 xf2
yf1 yf2
ops$tkyte@ORA9IR2> select * from c;
F1 SEQ C
---------- ---------- ----------
xf1 1 c1
xf1 2 c2
xf1 3 c3
yf1 1 c1
yf1 2 c2
VA, October 14, 2005 - 2:36 pm UTC
I asked you this before already above
I have exactly the same situation as the OP
HEADER...
...
TRAILER...
This needs to go into 3 tables (header_tab, body_tab, trailer_tab).
Currently, I have sqlldr controlfile doing stuff like
into table header_tab when (1:6) = 'HEADER'
into table body_tab when (1:6)!='HEADER' and (1:7) != 'TRAILER'
into table trailer_tab when (1:7)='TRAILER'
How can I do this with multi-table insert and external tables? What is the equivalent for the (x:y) notation above to refer to positions in the file?
You said "setup a single one and create N views -- using substr to pick of the fields"
Questions
1. I am not sure I understand, can you please show an example?
2. Should I set up the external table definition with real datatypes like NUMBER(x,y) and DATE, or define it with all VARCHAR2s so that all data gets loaded? The former implicitly validates the datatypes; the latter just pushes that responsibility to the process or final target the file is being loaded into. Which do you prefer, and why?
Thanks
October 14, 2005 - 5:37 pm UTC
substr() of course is one option for x:y
show us your current control file.
A reader, October 14, 2005 - 2:53 pm UTC
Let me tell you the problem I am having
create table ext
(
record_type varchar2(80),
pk number,
col1 date,
col2 varchar2(10)
)
organization external
(
type oracle_loader
DEFAULT directory mydir
ACCESS PARAMETERS
(
records delimited by newline
fields
missing field values are null
reject rows with all null fields
(
record_type (1:20) char,
pk (1:6) integer external,
col1 (7:14) char date_format date mask "yyyymmdd",
col2 (15:24) char
)
)
location ('myfile.dat')
)
;
I do
select * from ext
where substr(record_type,1,6) != 'HEADER'
and I still keep getting errors on the first (header) record because the string HEADER cannot be converted into a number. Same for date.
Shouldn't the WHERE clause above cause the HEADER record to be skipped?
How else to do this?
Thanks
October 14, 2005 - 5:54 pm UTC
if header is the first record, why not use skip to skip over it?
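As a sketch of that suggestion, based on the poster's DDL above, the access parameters gain a skip clause so the header line never reaches the type conversions at all (positions and mask copied from the question; this fragment is untested):

```sql
-- "skip 1" ignores the first physical record of the file.
access parameters
( records delimited by newline
  skip 1
  fields
  missing field values are null
  ( pk   (1:6)   integer external,
    col1 (7:14)  char date_format date mask "yyyymmdd",
    col2 (15:24) char
  )
)
```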
line number in the flat file
Pet, October 20, 2005 - 2:50 pm UTC
I want to see the line number of the record as one of the columns in the external table. Is that possible?
I load a file into multiple external tables using the "WHEN" clause, and I am wondering whether I can get the line number of the individual record.
October 20, 2005 - 4:58 pm UTC
select *
from ( select et.*, rownum r from et )
.....
assuming SERIAL operations, no parallel.
Re: line number in the flat file
Gabe, October 20, 2005 - 7:14 pm UTC
Thanks GABE
Pet, October 24, 2005 - 1:18 pm UTC
RECNUM worked great. thanks
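For reference, the RECNUM approach mentioned looks roughly like the sketch below: RECNUM is a field type in the ORACLE_LOADER access parameters that numbers records as they are read from the file. The directory, file, and column names here are invented:

```sql
create table et
( line_no number,
  data    varchar2(4000)
)
organization external
( type oracle_loader
  default directory mydir
  access parameters
  ( records delimited by newline
    fields
    ( line_no recnum,                     -- record number within the file
      data    position(1:4000) char(4000)
    )
  )
  location ('myfile.dat')
);
```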
INSERT FIRST ... select * from et = slow
VA, December 19, 2005 - 10:33 pm UTC
I have a process that does
INSERT FIRST
INTO TABLE ...
INTO TABLE ...
...
select * from et;
The select * from et takes a couple of seconds.
But the multi-table insert above takes an hour!
Looking at v$session_wait for the session shows the session doing db file scattered and sequential reads for the datafile that contains the table/indexes segments that are being inserted. That sounds normal, but why would the INSERT statement take 1 hour when the ET is so fast.
Any ideas?
Thanks
December 20, 2005 - 8:30 am UTC
really insufficient details here don't you think?
no sizing information.
no indexing information.
no creates
I could come up with 50 reasons it is "slow", none of which would apply to you perhaps.
VA, December 20, 2005 - 10:20 am UTC
Sizing/indexing/creates for what?
The file I am loading has 15000 records. select * from et; takes 3 seconds to execute.
I am loading the file into 3 tables. Table 1 has 200 rows. Table 2 has 150,000 rows. Table 3 has 200 rows. Each of these tables has a PK index and another index on it.
Strangely enough, this INSERT takes just 30 seconds on a clone database (datafile copy to another server).
What would some of the 50 reasons be for this INSERT to be slow on this server?
Thanks
December 20, 2005 - 10:30 am UTC
without an example to work with and reproduce with, I won't play "guess the cause".
Like I said, there is just too little information here to comment.
A reader, December 20, 2005 - 10:32 am UTC
What would you do if you were me?
The process takes 30 seconds in a identical database on a different server, but takes 1 hour on this server.
What should I start to look for?
Thanks
December 20, 2005 - 10:42 am UTC
I'd start with the TKPROF report generated from a level 12 trace, showing the wait events and query plan, preferably with statistics_level set to ALL for the session being traced.
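Concretely, that means something like the following in the slow session (a sketch; the raw trace file is written to the directory named by user_dump_dest, and the file name varies by platform and session):

```sql
alter session set statistics_level = all;
alter session set events '10046 trace name context forever, level 12';

-- ... run the slow statement here, e.g. the INSERT FIRST ... select * from et; ...

alter session set events '10046 trace name context off';
-- then, on the server:
--   tkprof <trace file from user_dump_dest> report.txt sys=no
```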
Can we generate detail records?
Naresh, February 07, 2006 - 9:35 am UTC
Hello Tom,
Is it possible to generate detail records when loading a master?
e.g.
I have two tables
create table mst(soc varchar2(30));
create table dtl(param varchar2(30));
I have two reference tables
offers (soc_cd varchar2(30));
params (soc_cd varchar2(30), param varchar2(30));
params being the detail for offers.
Given an input list of offers.soc_cd can we insert all params for a soc_cd into the dtl table - something like
insert
into mst values(soc_cd)
into dtl values (select param from params p where p.soc_cd = soc_cd)
select soc_cd from offers -- offers used only for the test case, in actual it will be another input table
I know this gives a syntax error - but is there an alternate syntax which will achieve the effect?
Thanks,
Naresh.
February 08, 2006 - 1:07 am UTC
yes we can, and with a couple of inserts to generate some non-trivial test data - I'll show you how to use a multi-table insert to do it.
However, mst and dtl are "botched", are they not? I see no primary/foreign keys. Make it real -- mst and dtl are not quite "done".
Generating Detail rows ---
Naresh, February 08, 2006 - 2:40 am UTC
Hello Tom,
Here are the new scripts - I created some test data in offers and params:
create table mst (soc varchar2(30), primary key (soc));
create table dtl (soc varchar2(30) references mst(soc), param varchar2(30));
create table offers(soc varchar2(30), primary key (soc));
create table params(soc varchar2(30) references offers(soc), param varchar2(30));
SQL> insert into offers
select 'SOC' || rn from (select level rn from dual connect by level < 4);
3 rows created.
insert into params
select soc, soc || '-param' || rn from offers, (select level rn from dual connect by level < 4)
/
9 rows created.
SQL> select * from offers;
SOC
------------------------------
SOC1
SOC2
SOC3
SQL> select * from params order by 1;
SOC PARAM
------------------------------ ------------------------------
SOC1 SOC1-param1
SOC1 SOC1-param2
SOC1 SOC1-param3
SOC2 SOC2-param1
SOC2 SOC2-param3
SOC2 SOC2-param2
SOC3 SOC3-param1
SOC3 SOC3-param3
SOC3 SOC3-param2
9 rows selected.
Thanks,
Naresh.
February 08, 2006 - 8:05 am UTC
ops$tkyte@ORA10GR2> insert
2 when (rn=1) then
3 into mst ( soc ) values ( soc )
4 when (rn>0) then
5 into dtl ( soc, param ) values ( soc, param )
6 select params.*, row_number() over (partition by soc order by param) rn
7 from params
8 where soc in ( 'SOC1', 'SOC3' )
9 /
8 rows created.
ops$tkyte@ORA10GR2> select * from mst;
SOC
------------------------------
SOC1
SOC3
ops$tkyte@ORA10GR2> select * from dtl;
SOC PARAM
------------------------------ ------------------------------
SOC1 SOC1-param1
SOC1 SOC1-param2
SOC1 SOC1-param3
SOC3 SOC3-param1
SOC3 SOC3-param2
SOC3 SOC3-param3
6 rows selected.
Generating detail rows
Naresh, February 08, 2006 - 9:06 am UTC
Thanks for the response Tom.
Expert Oracle - alert log external table
Rahul, September 27, 2006 - 11:39 pm UTC
Hello Tom,
I was reading your book and in page 87 you had a select on an external table you generated from the 'alert log'.
There you used 'rownum' and in page 88 you said "It also "numbers" the lines using ROWNUM".
Did you quote "numbers" because, unlike a usual heap table, since this is an external table we are DEFINITELY going to get the rows in sequential fashion, i.e., the first line in the alert log will have a rownum of 1 and will be returned first, and so on?
September 28, 2006 - 3:01 am UTC
we've talked about it a couple of times on this page. Also ctrl-f for recnum here as well.