sql to get data from flat file -- this is mind blowing

Tom Kyte

Thanks for the question, Nag.

Asked: October 12, 2001 - 7:26 pm UTC

Last updated: April 30, 2013 - 2:14 pm UTC

Version: 8.1.7

You Asked

"In 9i, with the addition of external tables ( the ability to query a FLAT FILE with SQL) -- SQLLDR might be "moot". Using the external table and the insert /*+ append */ I can basically do a direct path load from file to database
without running a command outside of the database."

Tom

We just can't wait to see how the above is done. What are the new SQL clauses we need to use to achieve the above (there should be some new clauses to do this)?

Kindly demonstrate the above; this feature will be trend-setting and path-breaking.

Nag

and Tom said...

Ok, here is a demo I use for training internally:

For external tables, we need to use a directory object -- we'll
start with that, mapping to the temp directory

create or replace directory data_dir as 'c:\temp\'
/

Now, we'll create the external table.
Part of its definition is what looks like a control file -- it is:


create table external_table
(EMPNO NUMBER(4) ,
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7, 2),
COMM NUMBER(7, 2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory data_dir
access parameters
( fields terminated by ',' )
location ('emp.dat')
)
/


In c:\temp\emp.dat I have a file that looks like this:

7369,SMITH,CLERK,7902,17-DEC-80,800,,20
7499,ALLEN,SALESMAN,7698,20-FEB-81,1600,300,30
7521,WARD,SALESMAN,7698,22-FEB-81,1250,500,30
....


(it's a dump of the EMP table in CSV format)


Now you can just:

select * from external_table
/



and now, if you modified the scott/tiger EMP table:

delete from emp where mod(empno,2) = 1
/
update emp set sal = sal/2
/
commit;

You could sync up the flat file with the database table using this single command:


merge into EMP e1
using EXTERNAL_TABLE e2
on ( e2.empno = e1.empno )
when matched then
update set e1.sal = e2.sal
when not matched then
insert ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
values ( e2.empno, e2.ename, e2.job, e2.mgr, e2.hiredate, e2.sal, e2.comm, e2.deptno )
/
commit;


that'll update the records in the EMP table from the flat file if they exist OR it will insert them.

Doing a direct path load would simply be:

insert /*+ append */ into emp select * from external_table;


See http://www.oracle.com/pls/db901/db901.to_toc?pathname=server.901/a90192/part3.htm#436392 for the details.





Comments

This is amazing...

AB, October 15, 2001 - 3:37 am UTC

No more UPDATE ; IF SQL%NOTFOUND THEN INSERT...

9i seems to be able to do all those things that 8i ALMOST gave you. And I'm probably 18 months away from 9i in my place....... I'm going to be suffering !

reading records from a diskette

Richard, December 11, 2001 - 3:10 am UTC

Please, Tom, can you give me the code to achieve this in 8i and before, especially 7.3.2? Please, I am a beginner.
Thank you for the answer.

Directory

Saša, December 12, 2001 - 2:50 am UTC

Could you please confirm whether the directory should exist on the server machine, or whether it could be on the client machine (I doubt it)?


Tom Kyte
December 12, 2001 - 8:16 am UTC

the directory is on the server; the dedicated/shared server that processes your sql statements is the one that will be doing the reading, so the file must be accessible to it.

Multiple files

Rob, September 19, 2002 - 9:10 pm UTC

Tom:

In our shop we tend to use sqlldr to load multiple files (hundreds each month) all with the same format to a staging table. We tend to do this by having all of the files we want to load in a directory and calling a shell script that loops through all of the files calling sqlldr and passing the file name to sqlldr as part of the command line. With external tables it seems like the file name is hard coded into the definition of the table i.e. location ('emp.dat'). Is there any way to use external tables in our scenario with multiple files?

Thanks.

Tom Kyte
September 20, 2002 - 7:56 am UTC

UTL_FILE in 9iR2 can RENAME a file. So, the external table could be based on "current_file_to_process.dat" and you would frename a file to be "current".
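A minimal sketch of that approach (not from the original answer; the DATA_DIR directory object is reused from the example above, and the incoming file name 'abc.txt' is illustrative):

begin
  -- repoint the fixed name the external table reads from at the next file
  utl_file.frename( src_location  => 'DATA_DIR',
                    src_filename  => 'abc.txt',
                    dest_location => 'DATA_DIR',
                    dest_filename => 'current_file_to_process.dat',
                    overwrite     => TRUE );
  -- a query of the external table now sees abc.txt's contents
end;
/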




Can I NULL the WHEN NOT MATCHED?

AJ Allen, January 15, 2003 - 3:34 pm UTC

I would like to try using the merge to do a table-to-table update. I do not want any action when there is no match. I tried leaving the WHEN NOT MATCHED clause off -- no dice. Same with WHEN NOT MATCHED NULL. Is there a way to do nothing when not matched?

Tom Kyte
January 16, 2003 - 7:58 am UTC

you do not need a MERGE then -- a simple update of a join will do it.

http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:273215737113
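A correlated form of that update, sketched against the EMP/EXTERNAL_TABLE example above (shown with a subquery rather than an updatable join view, since the latter needs a key-preserved table and an external table has no unique constraints):

update emp e1
   set e1.sal = ( select e2.sal
                    from external_table e2
                   where e2.empno = e1.empno )
 where exists ( select null
                  from external_table e2
                 where e2.empno = e1.empno );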

Why the two step process?

Kashif, February 19, 2003 - 12:33 pm UTC

Hi Tom,

Thanks for that excellent explanation of two important new features in 9i. Followup question: If I can use the external table directly in my application, is there any reason not to do that, e.g. performance considerations? Why the need to create the external table (external_table), and then merge it into another table (emp)? I'm assuming you do that because the emp table is already being used inside the application. Which begs the question, why not recreate that pre-existing emp table as an external table, if one has the liberty to do so?

Thanks in advance.

Kashif

Tom Kyte
February 19, 2003 - 3:10 pm UTC

No indexes on external tables.

No insert/update/delete on external tables.

No backup / recovery on external tables.

External tables are just there to get the data in the database so you can make use of it. I would not be querying them day to day like a "real" table.

Thank GOD!!

Patrick, February 19, 2003 - 7:04 pm UTC

Thanks Tom for the example! This is a biggy for me!

Hint on merge ?

Michel Cadot, February 21, 2003 - 3:30 am UTC

Tom,

Can we use the PARALLEL hint in the ON and INSERT clauses and the APPEND hint in the INSERT clause of the MERGE statement?

We want to use this to synchronize flat file and database table.

Thanks


Tom Kyte
February 21, 2003 - 9:40 am UTC

you can enable parallel dml and do the merge in parallel.

no "appending" though -- the insert will be a conventional insert.

Do external tables roll back all the records even if one insert fails?

A reader, June 08, 2003 - 9:15 am UTC

Tom,

I came across an article by Jonathan Gennick ( http://gennick.com/ETLandBad.html )... I am surprised (since I am new to external tables) to learn that Oracle rolls back all the records even if any one of the inserts/updates fails.

Is that so?
Is there any workaround to keep the valid data in the inserted table?

Thanks for your time


Tom Kyte
June 08, 2003 - 9:48 am UTC

A statement is ATOMIC.

so, if you

insert into t select * from external_table;

and some rule is violated on table T, then yes, of course the insert statement must fail and fail entirely.


Now, external tables will have to some degree validated the data -- at least as far as SIZE and DATATYPE are concerned. Here, the records that are inappropriate would be recorded in a BAD file. Others would be passed through (all depending on the number of errors you specified to be allowed when you created the table).

But an external table behaves transactionally NO DIFFERENT than any other table does.
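That error allowance is the REJECT LIMIT clause of the external table DDL; a minimal sketch, reusing the DATA_DIR directory from the original example (the table name and limit are illustrative):

create table ext_emp
( empno number(4),
  ename varchar2(10)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters ( fields terminated by ',' )
  location ('emp.dat')
)
reject limit 100
/

Rows that fail the size/datatype conversion go to the BAD file, up to 100 of them, before the query itself errors out.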

A reader, June 08, 2003 - 1:33 pm UTC


Receive err when col width exceeds 255 ?

C.P. Bhattarai, June 09, 2003 - 1:43 am UTC

Tom,

The following error is returned when the JOBS_DONE col contains more than 255 chars (it may be up to 1000 chars).
external table def is :
drop table external_VHistory;

create table external_VHistory
( CHASSIS VARCHAR2 (15),
JCARDNO NUMBER (7),
KM NUMBER (8),
SERV_TYPE VARCHAR2 (2),
FREE VARCHAR2 (1),
PROB_DESC VARCHAR2 (500),
JOBS_DONE VARCHAR2 (1000),
JOPEN_DATE DATE,
MEMBER_ID VARCHAR2 (15)
)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory data_dir
access parameters
( fields terminated by '|'
missing field values are null
(chassis, jcardno, km,serv_type, free, prob_desc,jobs_done , jopen_date char date_format date mask "dd/mm/yyyy", member_id ))
location ('VHistory.ocp')
)
/

*** error message *************
Field Definitions for table EXTERNAL_VHISTORY
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform

Fields in Data Source:

CHASSIS CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
JCARDNO CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
KM CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
SERV_TYPE CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
FREE CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
PROB_DESC CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
JOBS_DONE CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
JOPEN_DATE CHAR (10)
Date datatype DATE, date mask dd/mm/yyyy
Terminated by "|"
Trim whitespace same as SQL Loader
MEMBER_ID CHAR (255)
Terminated by "|"
Trim whitespace same as SQL Loader
KUP-04021: field formatting error for field JOBS_DONE
KUP-04026: field too long for datatype
KUP-04101: record 2 rejected in file c:\ws\import\VHistory.ocp


Tom Kyte
June 09, 2003 - 7:15 am UTC

( fields terminated by '|'
missing field values are null
(chassis,
jcardno,
km,
serv_type,
free,
prob_desc,
jobs_done CHAR(1000),
jopen_date char date_format date mask "dd/mm/yyyy",
member_id ))
location ('VHistory.ocp')
)

You are right

C.P. Bhattarai, June 10, 2003 - 2:25 am UTC

Exactly, it works.
Actually I was trying with VARCHAR2(1000).


TOM U ARE "GOD"..

Palash Sarkar, June 26, 2003 - 10:02 am UTC

Why don't u come up with a poster of urs with Oracle9i Data Dictionary in the background ??

Tom Kyte
June 26, 2003 - 10:54 am UTC

that would be scary.

Valid counts from External tables

Raghu Raman, June 30, 2003 - 5:23 pm UTC

Dear Tom,

We are currently using sqlloader to load the staging tables and sending a load reconciliation report with the following format:

DATA FILE       FILE CNT  FILE SUM STAGE TABLE STAGE CNT REJECTED
-------------- --------- --------- ----------- --------- --------
DATAFILE11.TXT      5417     14026 STG_TABLE1      14026        0
DATAFILE12.TXT      2879
DATAFILE13.TXT      5730
 
DATAFILE21.TXT      5687      5687 STG_TABLE2       5687        0
 
DATAFILE31.TXT      4000      4000 STG_TABLE3       3000     1000
 
Going forward we would like to use external tables that would give us the ability to process the data without loading it into staging tables. 

I have created an external table for testing purpose as shown below:
 
create table STG_TABLE3 (
   Field1      Number,
   Field2      varchar2(10),
   Field3      varchar2(50),
   Field4      varchar2(10),
   Field5      varchar2(10),
   Field6      varchar2(25),
   Field7      varchar2(25)
)
organization external (
   type oracle_loader
   default directory datadir
   access parameters  (
      records delimited by newline skip 1
      badfile 'STG_TABLE3.bad'
      logfile 'STG_TABLE3.log'
      fields terminated by ',' optionally enclosed by '"' LRTRIM
   )
   location('DATAFILE31.csv')
)
reject limit unlimited
/

When I do a count on the external table it gives the actual count from the datafile including the records that are supposed to be rejected. Although it creates the log file it does not create the bad file.

SQL> select count(*) from stg_table3;
 
  COUNT(*)
----------
      4000

However, when I specify a column for count it creates a bad file with the records that are violating the definition of that specific column.

SQL> select count(field2) from stg_table3;

COUNT(FIELD2)
-------------
         3800 
 
Contents of bad file
--------------------

error processing column FIELD2 in row 2 for datafile I:\data\DATAFILE31.csv
ORA-01401: inserted value too large for column
...

When I use the following SQL, it gives the actual count of what I am expecting and also creates the bad file with all rejected records.

SQL> select count(*) from (
  2    select * from stg_table3
  3    union
  4    select * from stg_table3 where 1=2
  5  );
 
  COUNT(*)
----------
      3000


Here is another example:

SQL> select count(*) from stg_table3; 

COUNT(*) 
---------- 
4000 

SQL> create table test_table3 as select * from stg_table3; 

Table created. 

SQL> select count(*) from test_table3; 

COUNT(*) 
---------- 
3000

So executing a count(*) from the staging table does not validate the records in the datafile, and consequently does not generate the bad file with rejected records. However, if we do a union or a create table as select * from the external table, it gives the correct result set and also creates the bad file with rejected records.

Is this the way it supposed to work or am I missing something?

Regards
 

Tom Kyte
June 30, 2003 - 7:32 pm UTC

it is supposed to work that way - the rejects were not rejected.

You might want to

a) map the "bad" file as yet another external table
b) count(*) it
c) do the load
d) count(*) it again and subtract

that is how many records did not get loaded.

so, if you try to load a, b, c from the external table -- but d would have failed, you'll get a different count than if you load a,b,c,d
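A sketch of step (a), mapping the bad file itself as a second external table of raw lines (names are illustrative; the single-field FIELDS syntax follows a later example in this thread):

create table stg_table3_bad
( rec varchar2(4000) )
organization external
( type oracle_loader
  default directory datadir
  access parameters
  ( records delimited by newline
    nologfile
    fields ( "REC" char(4000) ) )
  location ('STG_TABLE3.bad')
)
reject limit unlimited
/

select count(*) from stg_table3_bad;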



Vipin, June 30, 2003 - 8:35 pm UTC

Hi Tom,

As you said in this article, external tables don't support inserts/updates/deletes on them, so we are left with only the MERGE statement to deal with them. Suppose my data loading logic involved loading some data into the staging table (external table) and then applying some logic over it to reject certain data, which might involve deleting them from the staging table (external table), and then finally loading the actual table.

Here the EXTERNAL table proves unworthy for me, since no DELETEs are allowed.

Or else, if I find a roundabout way for this by updating the rejectable records in the staging table (external table) with a flag and then using this table to load my actual table, the EXTERNAL TABLE concept still won't help me, since it will not allow updates.

So I think these are situations when SQLLOADER comes to my help.

I think Oracle might support INSERT/UPDATE/DELETE on external tables in the future. Right, Tom?



Tom Kyte
July 01, 2003 - 7:59 am UTC

"unworthy"?

rejected rows go to "bad_table"
accepted rows go to "good_table"

now are they "worthy"? heck, I just let the external table reject the rows into a BAD file for me.

why load the entire file into a staging table and then put all good records into another table (deleting from stage) when you can just

insert good
insert bad <maybe, I don't even see why this is needed if you have a bad file>

you will not see insert/update/delete on external tables (short of "vi" of course, just edit the thing)

Valid counts from External tables

Raghu Raman, July 01, 2003 - 7:54 am UTC

I really appreciate your quick reply.


Response to Vipin,

Martin Burbridge, July 01, 2003 - 8:03 am UTC

The external table isn't meant to equate to or replace the stage table in the data loading process; it replaces the input data file and SQL*Loader. The stage table, if needed, is always a table in the database.


Flat File      |-- SQL*Loader --> Stage Table |-- ETL (SQL, PL/SQL) --> Live Table

External Table |-- SQL Insert --> Stage Table |-- ETL (SQL, PL/SQL) --> Live Table

Oracle doesn't need to support modifications to the external table, just as you don't delete or modify records in the input data file using SQL*Loader.

Tom Kyte
July 01, 2003 - 8:52 am UTC

I would propose that with external tables -- you seriously consider that stage table to be obsolete, not necessary, just something to slow you down -- whenever possible.

Best to load straight from it.


External Table -> insert/merge, using pipelined functions if needed -> live.

Martin.. the purpose of an external table is defeated if used that way

A reader, July 01, 2003 - 8:10 am UTC

Martin,
I feel the external table was designed as a replacement for the SQL*Loader step while loading data. If you are again going to insert that data into a staging table, the purpose of an external table is defeated. I suppose Oracle will support DML on external tables in future releases.. they work great already :)



I don't see any defeat

Martin Burbridge, July 01, 2003 - 8:43 am UTC

I guess I should have emphasized the stage table is optional part ('if needed'). In the second flow using the external table there is no SQL*Loader, so I don't see how you can say it defeats the purpose of replacing it. Seems to have done a good job there.

My main point was you cannot reasonably say I must still use SQL*Loader until I can write to external tables, when SQL*Loader does not write to the input data file. If you want to start modifying the data it should be in the database, I still see no reason to support modifications of external tables, I don't think it looks like a good idea.

A reader, July 15, 2003 - 3:56 pm UTC

Great! I am trying to use the external table with Tom's example, but I always get errors. Am I missing something?
Here is the flat file:
"id","name","use_type","date_modified"
1,"Cash Issue","abuse","10/10/2002"
2,"Customer Service","abuse","10/10/2002"

Here is the script to create the external table:

create or replace directory external_dir as 'c:\data';

drop table test;
CREATE TABLE test (
EXTERNAL_ID number(2) ,
NAME varchar2(64) ,
USE_TYPE VARCHAR2(32) ,
DATE_MODIFIED date
)
organization external
(
type oracle_loader
default directory external_dir
access parameters
(
records delimited by newline skip 1
fields terminated by ','
ENCLOSED BY '"' and '"'
LRTRIM
)
location ('test.txt')
);
I always get the following error:

select * from test;
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-30653: reject limit reached
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1


Tom Kyte
July 15, 2003 - 4:16 pm UTC

two things

1) fields appear to be OPTIONALLY enclosed, not enclosed by "
2) your date mask is missing

access parameters
(
records delimited by newline skip 1
fields terminated by ','
OPTIONALLY ENCLOSED BY '"' and '"'
LRTRIM
( external_id, name, use_type, date_modified DATE "dd/mm/yyyy" )

)
location ('test.txt')
);



SQL TO GET DATA FROM FLAT FILE

Arun, July 31, 2003 - 8:49 pm UTC

Excellent. This is awesome. Thank you very much.

Can we get the filename?

Steve, September 15, 2003 - 10:49 am UTC

Hi Tom
We load into one external table using several (>50) files - so the location is (1_dir:abc.txt, 2_dir: cde.txt, etc etc). My question is, can we tell which row in the external table came from which file?

Tom Kyte
September 15, 2003 - 2:27 pm UTC

no, you cannot, they are treated as a virtual "single file"

Keeping down the log file for EXTERNAL_TABLE......

Hector Gabriel Ulloa Ligarius, September 15, 2003 - 5:21 pm UTC

Hi Tom ...

How do I suppress the generation of the log file for the external table?

regards

Hector Ulloa

Tom Kyte
September 15, 2003 - 8:07 pm UTC

use the nologfile option in the "records" clause
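For example (a fragment only; NOLOGFILE goes in the records clause, before FIELDS, as a later review in this thread also demonstrates):

access parameters
( records delimited by newline
  nologfile
  fields terminated by ',' )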

Excellent most helpful

Kandan Kanakaraj, September 26, 2003 - 2:00 pm UTC


Null Date

Anuj, September 30, 2003 - 4:33 pm UTC

Tom! Your explanation was very helpful, as always. Well, I was trying to load the date field using a date mask (FIRST_CHG_DATE DATE "mm/dd/yyyy") and all the other access parameters like 'missing field VALUES are NULL', but in the data set I have one date field, "FIRST_CHG_DATE", which has all null date values. It creates the table, but when I query it, it throws an error; when I look at the log it says 'error in processing column FIRST_CHG_DATE ORA-01843: not a valid month'. Any ideas....

Thanks!

Tom Kyte
September 30, 2003 - 6:33 pm UTC

use char for everything and see what's actually coming in to each field (use varchar2(4000) in the table create, just dump the data)

TOM U ARE "GOD"

Neil, October 01, 2003 - 8:46 am UTC

Nah - everyone knows Clapton is... *_-

external table

karma, November 11, 2003 - 6:04 pm UTC

my external file has a header and a trailer record, and I can skip the header by using skip 1, but is there a way to skip/ignore the last (trailer) record?


Tom Kyte
November 12, 2003 - 6:51 am UTC

the trailer should error out, right? it won't be loadable -- so if you set errors to one and then just check the bad file, it should be in there.
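If the trailer carries an identifiable marker, the LOAD WHEN clause (used in a later review on this page as well) can filter it out directly; a sketch assuming trailer records begin with 'TRL':

access parameters
( records delimited by newline
  load when (1:3) != 'TRL'
  fields terminated by ',' )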

Dynamic creation of external table

Chneih, December 02, 2003 - 2:45 pm UTC

Tom,

The filename of our extract file, which contains 3 million records, is datestamped every day (example: EMP20030101.txt, EMP20030102.txt). In order to load this via an external table, which approach is better?

Approach 1: Dynamically recreating the external table every time the filename changes? Or

Approach 2: Renaming the extract file to the filename set up for the external table, like EMP.txt?

What are the pros and cons of approach 1 as against approach 2?


Thanks,
Chneih

Tom Kyte
December 02, 2003 - 9:14 pm UTC

to do this "every day" -- doing the DDL with the table name (via execute immediate) would be OK - not a problem.


i would be OK with either approach. the DDL create might be "safer" as the rename would require an erase of data -- data that you might need later.
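Note too that the location alone can be repointed without dropping the table, via ALTER TABLE ... LOCATION; a sketch with the file name built at runtime (EMP_EXT is a hypothetical external table name):

begin
  execute immediate
    'alter table emp_ext location (''EMP' ||
     to_char(sysdate, 'yyyymmdd') || '.txt'')';
end;
/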

Speed Up Updates from External Tables?

Robin H, December 03, 2003 - 10:11 am UTC

We are using external tables to load data. The problem is that we have an external table with 75974 rows and a table in oracle with 173130 rows. When we update a selected group of rows

Update Oracle_table ot
Set birth_date = ( select et.birth_date
from ext_table et
where et.birth_date is not null
and ot.id=et.id)
where ot.mem_birth >= '01-Jan-2000' <=== only 3000 of 170000 records updated

we find that the update takes a VERY long time. In fact, I've waited hours for it to return a result to analyze in tkprof.

If I run the same update against a staging table copy of the external table, it runs in less than 2 seconds.

The Oracle_table is analyzed and indexed, with a primary key on ID.

Is there any way to speed up this process? Using external tables is useful, but I'm having difficulty using them for updates (inserts seem very fast).

Tom Kyte
December 03, 2003 - 10:56 am UTC

use MERGE.  watch this:


ops$tkyte@ORA9IR2> create table external_table
  2  (EMPNO NUMBER(10) ,
  3   ENAME VARCHAR2(10),
  4   JOB VARCHAR2(9),
  5   MGR NUMBER(4),
  6   HIREDATE DATE,
  7   SAL NUMBER(7, 2),
  8   COMM NUMBER(7, 2),
  9   DEPTNO NUMBER(2)
 10  )
 11  ORGANIZATION EXTERNAL
 12  ( type oracle_loader
 13    default directory data_dir
 14    access parameters
 15    ( fields terminated by ',' )
 16    location ('emp.dat')
 17  )
 18  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> !wc /tmp/emp.dat
  75974   75974 4725204 /tmp/emp.dat
 
ops$tkyte@ORA9IR2> select count(*) from external_table;
 
  COUNT(*)
----------
     75974

I'm going to do lots more than you do -- but, that external table has the same number of rows as you have...

 
ops$tkyte@ORA9IR2> select count(*) from emp2;
 
  COUNT(*)
----------
    227922

we'll update emp2 from external table
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select empno, ename, hiredate from external_table
  2  where rownum <= 5
  3  /
 
     EMPNO ENAME      HIREDATE
---------- ---------- ---------
         1 iaSuDambZr 03-JUL-04
         2 wshxtfJPaX 05-FEB-06
         3 OcvujPmlXf 26-SEP-04
         4 SAfRewaoZt 20-JUN-05
         5 mYfXtNuPgR 26-JUN-06
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set linesize 70
ops$tkyte@ORA9IR2> desc emp2
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 EMPNO                               NOT NULL NUMBER   primary key....
 ENAME                                        VARCHAR2(10)
 JOB                                          VARCHAR2(9)
 MGR                                          NUMBER(4)
 HIREDATE                                     DATE
 SAL                                          NUMBER(7,2)
 COMM                                         NUMBER(7,2)
 DEPTNO                                       NUMBER(2)
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> set timing on
ops$tkyte@ORA9IR2> merge into EMP2 e1
  2  using (select * from external_table where hiredate is not null and empno in (select empno from emp2) ) e2
  3  on ( e2.empno = e1.empno )
  4  when matched then
  5          update set e1.hiredate = e2.hiredate
  6  when not matched then
  7          insert ( empno ) values (NULL)
  8  /
 
75974 rows merged.
 
Elapsed: 00:00:03.18
ops$tkyte@ORA9IR2> set timing off

3.18 seconds....  on a laptop....

we put the "date is not null" in the inline view AND used a subquery to make sure the rows coming from the external table would be used to update a row (eg: the INSERT part *never* happens)...

ops$tkyte@ORA9IR2> commit;
 
Commit complete.
 

utl_file

Mariano, December 03, 2003 - 10:36 am UTC

Tom,
excellent as always.
Is there something we can use like utl_file, but for client-side files?
regards.-

Tom Kyte
December 03, 2003 - 11:05 am UTC

you need to run a client tool -- whatever is on "the client"

(eg: sqlplus can create and write to files...., your VB program can, etc )

Amazing

Robin H, December 03, 2003 - 11:19 am UTC

Using the merge as described, the merge was faster: 1.03 secs versus 2.31 secs with an update.

Thank you

error while using it

Eugene, December 17, 2003 - 1:26 pm UTC

Tom,
I am trying to use external table, but getting an error:

File format:
81,847W,Mens/Boys Woven Other Fiber Pants - 6203.49.8045,0,CHENDRIC,13-Jul-00,DFERGUSO,29-Jan-02,5
81,847W1,Mens/Boys Woven Other Fiber Shorts - 6203.49.8060,0,CHENDRIC,13-Jul-00,DFERGUSO,29-Jan-02,3
81,847W2,Womens/Girls Woven Other Fiber Pants -6204.69.9044,0,CHENDRIC,13-Jul-00,DFERGUSO,29-Jan-02,3
81,847W3,Womens/Girls Woven Other Fiber Shorts - 6204.69.9046,0,CHENDRIC,13-Jul-00,DFERGUSO,29-Jan-02,3
81,859K,Mens/Boys Knit Other Fiber Vests - 6110.90.9064,0,CHENDRIC,13-Jul-00,DFERGUSO,29-Jan-02,4

When I open the file in Excel, I can see all the dates in it. I don't understand the error message.
Please help.

Steps taken:
-------------
create or replace directory data_dir as 'c:\temp\db_files';

create table hts_ext_table
(
BUSINESS_UNIT_ID VARCHAR2(2),
HTS_CODE VARCHAR2(13),
HTS_DESCRIPTION VARCHAR2(150),
HTS_STATUS VARCHAR2(1),
HTS_CREATED_BY VARCHAR2(30),
HTS_CREATED_DATE DATE,
HTS_MOD_BY VARCHAR2(30),
HTS_MOD_DATE DATE,
HTS_LOCK_COUNTER NUMBER(10)
)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory data_dir
access parameters
( fields terminated by ','
OPTIONALLY ENCLOSED BY '"' and '"'
LRTRIM (BUSINESS_UNIT_ID,HTS_CODE,HTS_DESCRIPTION,HTS_STATUS,HTS_CREATED_BY,
HTS_CREATED_BY date "dd/mm/yy",HTS_MOD_BY,HTS_MOD_DATE date "dd/mm/yy",HTS_LOCK_COUNTER))
location ('hts.csv')
);

select *
from hts_ext_table;

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04043: table column not found in external source: HTS_CREATED_DATE
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1



Tom Kyte
December 18, 2003 - 8:44 am UTC

you have hts_created_by in there TWICE in the list of columns

LRTRIM (BUSINESS_UNIT_ID,HTS_CODE,HTS_DESCRIPTION,HTS_STATUS,HTS_CREATED_BY,
HTS_CREATED_BY date "dd/mm/yy"
,HTS_MOD_BY,HTS_MOD_DATE date
"dd/mm/yy",HTS_LOCK_COUNTER))

so it is saying "hey, i cannot find where this hts_created_date column is!!!!"

when you fix that you'll get:


select * from hts_ext_table
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-30653: reject limit reached
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1


because your dates obviously do not match your format (that, and you used 2 characters for a field that takes AT LEAST 4 CHARACTERS TO REPRESENT. remember 1999 and what we all did then??????? i beg you -- use 4 characters for years, you'll never be sorry you did, you'll only be sorry if you DON'T!!!)

use this:

(BUSINESS_UNIT_ID,
HTS_CODE,
HTS_DESCRIPTION,
HTS_STATUS,
HTS_CREATED_BY,
HTS_CREATED_DATE date "dd-mon-rr",
HTS_MOD_BY,
HTS_MOD_DATE date "dd-mon-rr",
HTS_LOCK_COUNTER))



Can dynamic SQL be used to write merge statement

Su, February 10, 2004 - 8:51 am UTC

Hi Tom,

Will you please point to an example where a merge statement is created dynamically?
Is it possible to use merge when the external table and Oracle table don't have the same number of columns? In our system, we have the same structure for the external and Oracle tables, but the Oracle table has one more column - load_date, which is the sysdate on which the row was inserted/updated in the table from the file.

Thanks!!!

Cheers,
Su

Tom Kyte
February 10, 2004 - 3:22 pm UTC

execute immediate l_stmt using bind_variable, bind_variable2,.....;

sure, it can be done dynamically -- as any SQL can be.

Yes, they do not have to have the same number -- you

merge into T1
using T2
on (join condition)
when matched then update set c1 = t2.c22, c22 = t2.c12, c56 = t2.c100
when not matched then insert ( c1,c2,c3 ) values ( t2.c55, t2.c100, t2.c1000 );


you can insert into T1 (any subset of columns from t1 ) values (any subset of columns/constants from t2)

you can update (any subset of columns in t1) using (any subset of columns/constants from t2)
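A sketch of building such a merge at runtime (table names cannot be bind variables, so they are concatenated; the extra LOAD_DATE column from the question is assumed, and each occurrence of the :d placeholder gets its own positional bind):

declare
  l_tab varchar2(30) := 'EMP';
  l_ext varchar2(30) := 'EXTERNAL_TABLE';
begin
  execute immediate
    'merge into ' || l_tab || ' e1
      using ' || l_ext || ' e2
      on ( e2.empno = e1.empno )
      when matched then
        update set e1.sal = e2.sal, e1.load_date = :d
      when not matched then
        insert ( empno, sal, load_date )
        values ( e2.empno, e2.sal, :d )'
    using sysdate, sysdate;
end;
/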

How can the above be done using DBMS_SQL?

A reader, February 13, 2004 - 5:28 am UTC

I have a similar requirement, where both the tables used in the merge statement are known only at run-time. Will you please give an example illustrating this?

Thank you.

Tom Kyte
February 13, 2004 - 9:58 am UTC

dbms_sql.parse( l_cursor, l_any_old_string_you_can_imagine_making_up_at_runtime);

merge is NOT any different than "insert", or "update", or "delete".

don't make more of this than you have to, it is the same as ANY other DML statement.
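A sketch of the same statement through DBMS_SQL, reusing the EMP/EXTERNAL_TABLE merge from the original answer (for DML, DBMS_SQL.EXECUTE returns the number of rows processed):

declare
  l_cursor integer := dbms_sql.open_cursor;
  l_rows   integer;
begin
  dbms_sql.parse( l_cursor,
    'merge into emp e1
      using external_table e2
      on ( e2.empno = e1.empno )
      when matched then update set e1.sal = e2.sal
      when not matched then
        insert ( empno, sal ) values ( e2.empno, e2.sal )',
    dbms_sql.native );
  l_rows := dbms_sql.execute( l_cursor );
  dbms_sql.close_cursor( l_cursor );
end;
/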

Urgent

A reader, March 12, 2004 - 2:36 pm UTC

1.
My data is something like this

Name vol

a 32
b 15
b 15
c 20

for those names which recur, I need to sum up the vol and then merge it with another table.

Is that possible using external tables => Merge


2.
Further, my csv file is on a network drive, so from my database server, can I create a directory using a folder on the network drive?

Tom Kyte
March 12, 2004 - 7:49 pm UTC

1) why external tables? just merge:


merge into some_table
using ( select name, sum(vol) sum_vol from foo group by name ) X
on ( some_table.name = x.name )
when......



if that data is in fact in a file, sure, you can use an external table.
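Spelled out against a hypothetical SOME_TABLE(name, vol) target, that statement would look like:

merge into some_table
using ( select name, sum(vol) sum_vol
          from external_table
         group by name ) x
on ( some_table.name = x.name )
when matched then
update set some_table.vol = x.sum_vol
when not matched then
insert ( name, vol ) values ( x.name, x.sum_vol );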


2) guessing by your terminology that you are using windows. it is hard on windows to do some things -- this is one of them.

see
http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:241814624807



Is NFS mount the solution for Unix platform? Thanks.

reader, March 13, 2004 - 12:13 pm UTC

Can I place the flat file on the NFS mount for external tables to access the file?

Tom Kyte
March 13, 2004 - 1:12 pm UTC

absolutely

NFS mount

A reader, March 14, 2004 - 11:26 am UTC

Tom

Can you tell us something about 'NFS mount', or give us a link which explains the basics of 'NFS mount'? Also, please add your explanation..

Tom Kyte
March 14, 2004 - 11:58 am UTC

NFS = network file system.

It is the way to get a filesystem from "servera" mounted as if it were a filesystem on "serverb"

Just makes a remote file system look "local"

sort of like a windows share, but lots easier.

A reader, March 14, 2004 - 11:40 am UTC

Tom

Our data in the flat file is similar to
Name vol

a 32
b 15
b 15
c 20
( it is not sorted though)

Yes, if the Name repeats, then we have to take the sum of the volume.

In my opinion, it is a must to have a staging table, where I will first dump my data (using client_text_io, after configuring webutil), then run the following statement

merge into some_table
using ( select name, sum(vol) sum_vol from STAGING_TABLE group by name ) X
on ( some_table.name = x.name )
when......

Our flat file will be available on the user desktop; as such, the use of utl_file or external tables is ruled out.


2.
Another interesting question I have is about SQL*Loader. Unlike utl_file and external tables, SQL*Loader is a client side utility. Is there any way I can write a script having a SQL*Loader statement which will communicate with the user desktop and dump the data into a database table? BUT the catch is that this script access should be through the application and not the command line. We are using Forms 9i (without webutil).

The challenge is that we access our application through the browser, and Forms 9i out of the box cannot interact with the desktop. Now we have to provide this whole data loading functionality through the application. How can it be done?




Tom Kyte
March 14, 2004 - 12:00 pm UTC

1) you can query a flat file as if it were a table -- if the file is not accessible on the server, then yes, you'll have to write code, maintain code. Tis up to you at the end of the day.


2) sqlldr is a command line tool. there is no "sqlldr api" to be called.


Sorry, I do not program forms. My approach would involve pure HTML and file upload into a LOB.

A reader, March 14, 2004 - 11:42 am UTC

"1) why external tables? just merge:"

The data is in the flat file; I just can't use a straightforward merge.

Tom Kyte
March 14, 2004 - 12:00 pm UTC

sure you can -- if it were on the server. That is the problem here -- the data isn't where it can be used by the database server.

A reader, March 14, 2004 - 12:21 pm UTC

NFS = network file system.

Does NFS work if the database server is on UNIX and the network servers are on Windows?



Tom Kyte
March 14, 2004 - 12:51 pm UTC

It can, but windows is a little less open so you'd probably end up running Samba on UNIX (very common) which allows Windows "shares" to be mounted on Unix

A reader, March 14, 2004 - 12:26 pm UTC

SQL*Loader is a command line utility, accepted.

1.
But can't I do a HOST call and invoke this utility?

But once again, the issue here is that even doing a host call in Forms 9i will not be possible out of the box.


2."My approach would involve pure HTML and file
upload into a LOB. "

Please explain the above in detail. How will you do it? How will you communicate with the desktop using HTML and get the data from the flat file on the user desktop?

Please share.



Tom Kyte
March 14, 2004 - 12:58 pm UTC

1) and if the file were on the client, the host command would run on the middle tier anywho.

I don't think you'll be using sqlldr.

2) just like you do here on asktom in general? There is in html an "<input type=file ...>" tag that lets you post a file to a webserver.

A reader, March 14, 2004 - 1:03 pm UTC

"just like you do here on asktom in general? There is in html an "<input
type=file ...>" tag that lets you post a file to a webserver. "

<input type=file ...>

Will the above look for the file on the desktop or on the middle tier? What do you mean by "post a file to a webserver"?

Sorry for the spate of clarifications requested.

2.

So utl_file is ruled out, external tables are ruled out, sqlldr is ruled out; what is left is text_io (client_text_io) and your <input type=file ...>, if it works in forms. Am I right?



Tom Kyte
March 14, 2004 - 1:07 pm UTC

the above would look on the file system where the BROWSER lives; the only file system it can see is the file system the browser sees.

<input type=file ...> is pure HTML, it is not forms at all.

external tables

Sukumar, March 18, 2004 - 2:23 pm UTC

very good excellent discussion

Having a problem loading data in a field which has a line feed

Sukumar, March 18, 2004 - 2:26 pm UTC

Tom,

I read the full posting. It was very useful.
I followed these methods and created all the relevant scripts for my work.

One problem that I'm having is, if a field in the flat file (csv) has line feed characters, then it is not getting loaded. I understand that the end of record identifier is the line feed character. Is there a way to get around this?

Please help.

Thanks, Sukumar

Tom Kyte
March 18, 2004 - 3:18 pm UTC

if the records are terminated by something identifiable -- yes.

for example:


1,2,'Hello
world'!
4,2,'Good
bye'!
3,2'something else'!

here, each line ends with !\n -- we can use that in sqlldr to tell it the end of line is really !\n (or !\r\n on windows) and load it up.

If not, -- no.
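For an external table, the equivalent knob is the RECORDS DELIMITED BY clause; a sketch assuming records really do end in !\n (table and file names are illustrative):

create table ext_msgs
( c1  number,
  c2  number,
  txt varchar2(4000)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters
  ( records delimited by '!\n'
    fields terminated by ',' optionally enclosed by "'" )
  location ('msgs.dat')
)
/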

How to write comments in the create statement

dharma, April 21, 2004 - 7:16 pm UTC

How can I write comments inside the external_table clause of create table? If I use -- or /* */ it throws up an error. I couldn't find it in the manual either.
If it is not possible, I am wondering why, because SQL*Loader allows comments.
Also, is there any way to specify the NOLOGFILE clause without the RECORDS clause in the ACCESS PARAMETERS?
Database: 9.2.0.1 OS: Windows

Thanks

Tom Kyte
April 21, 2004 - 9:30 pm UTC

ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table external_table
  2  (EMPNO NUMBER(4) ,
  3   ENAME VARCHAR2(10),
  4   JOB VARCHAR2(9),
  5   MGR NUMBER(4),
  6   HIREDATE DATE,
  7   SAL NUMBER(7, 2),
  8   COMM NUMBER(7, 2),
  9   DEPTNO NUMBER(2)
 10  )
 11  ORGANIZATION EXTERNAL
 12  ( type /* hello */ oracle_loader /* hello */
 13    default /* hello */ directory data_dir /* hello */
 14    access/* hello */  parameters /* hello */
 15    ( fields terminated by ',' )
 16    location/* hello */  ('emp.dat')
 17  )
 18  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> host flat scott/tiger emp > /tmp/emp.dat
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select empno, ename, hiredate from external_table
  2  where ename like '%A%'
  3  /
 
     EMPNO ENAME      HIREDATE
---------- ---------- ---------
      7499 ALLEN      20-FEB-81
      7521 WARD       22-FEB-81
      7654 MARTIN     28-SEP-81
      7698 BLAKE      01-MAY-81
      7782 CLARK      09-JUN-81
      7876 ADAMS      12-JAN-83
      7900 JAMES      03-DEC-81
 
7 rows selected.


example to work with please? 

OK here is the example

dharma, April 22, 2004 - 8:58 pm UTC

SQL*Plus: Release 9.2.0.1.0 - Production on Thu Apr 22 17:50:04 2004

Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.


Connected to:
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, Oracle Label Security, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production

scott@krypton.8NBP931> DROP TABLE T2
2 /
DROP TABLE T2
*
ERROR at line 1:
ORA-00942: table or view does not exist


scott@krypton.8NBP931> CREATE TABLE T2 (
2 cid NUMBER,amt NUMBER,dt DATE
3 )
4 ORGANIZATION EXTERNAL (
5 TYPE oracle_loader
6 DEFAULT DIRECTORY ext_data_files
7 ACCESS parameters (
8 records delimited BY newline
9 /*nologfile should be before fields terminated*/
10 nologfile
11 fields terminated BY ',' )
12 location ('t2.dat')
13 )
14 reject LIMIT UNLIMITED
15 /

Table created.

scott@krypton.8NBP931> SELECT * FROM T2
2 /
SELECT * FROM T2
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "error": expecting one of: "badfile, byteordermark, characterset, data, delimited, discardfile, exit, fields, fixed, load, logfile, nodiscardfile, nobadfile, nologfile, date_cache,
processing, readsize, string, skip, variable"
KUP-01007: at line 2 column 3
KUP-00031: concat
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1


scott@krypton.8NBP931> drop table t2;

Table dropped.

scott@krypton.8NBP931> CREATE TABLE T2 (
2 cid NUMBER,amt NUMBER,dt DATE
3 )
4 ORGANIZATION EXTERNAL (
5 TYPE oracle_loader
6 DEFAULT DIRECTORY ext_data_files
7 ACCESS parameters (
8 records delimited BY newline
9 nologfile
10 fields terminated BY ',' )
11 location ('t2.dat')
12 )
13 reject LIMIT UNLIMITED
14 /

Table created.

scott@krypton.8NBP931> select * from t2;

CID AMT DT
--------------------------------------------- --------------------------------------------- ---------------
1 10 06-JAN-03
1 2 07-JAN-03
1 3 08-JAN-03
1 4 09-JAN-03
.
.
Chopped off to save space
30 rows selected.

scott@krypton.8NBP931> drop table t2;

Table dropped.

scott@krypton.8NBP931> CREATE TABLE T2 (
2 cid NUMBER,amt NUMBER,dt DATE
3 )
4 ORGANIZATION EXTERNAL (
5 TYPE oracle_loader
6 DEFAULT DIRECTORY ext_data_files
7 ACCESS parameters (
8 nologfile
9 fields terminated BY ',' )
10 location ('t2.dat')
11 )
12 reject LIMIT UNLIMITED
13 /

Table created.

scott@krypton.8NBP931> select * from t2;
select * from t2
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "nologfile": expecting one of: "exit, fields, records"
KUP-01007: at line 1 column 1
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1


scott@krypton.8NBP931>

The first one has a comment in it and errors when queried.
The second time, I create it without the comment and get no errors.
The third one doesn't have the records clause in it and throws up an error.

Thanks,

Tom Kyte
April 23, 2004 - 10:59 am UTC

looks like it just doesn't like the comment there in the records clause, move it up a line and it'll be "ok"

Excellent piece of Information

Govind Sharma, May 18, 2004 - 12:52 pm UTC

Hi tom,

Very useful information.

I however was wondering if you could help me.

I have a situation where I need to read some flat files which are sitting in a folder on the server. I can use an external table to read individual files, but I do not know the names of the files in the folder.

Is there any way I can find out the names of the files in the folder and then process them one by one?

Many Thanks

Merge via Java?

Terry, May 27, 2004 - 6:15 pm UTC

Okay, hopefully a quick question (and the right location). I have flat files being added in nightly via Java. It uses insert and update calls currently to put the data in.

To create a temp table just using the columns below:

create table t (acctsessionid varchar2(32) not null, acctuniqueid varchar2(32), username varchar2(32) not null, nasipaddress varchar2(15) not null, nasportid number(12), acctstarttime date, acctauthentic varchar2(32), connectinfo_start varchar2(32), servicetype varchar2(32), framedipaddress varchar2(15));

there are 10 more columns than this, and unfortunately I have no clue which ones are keys... my best guess is a column not given here, "radacctid number(38) not null", because I think there is a trigger that increases the value in this field every time there is an insert

a typical insert (from the java function)

String start_query = "INSERT into radacct (AcctSessionId, " +
"AcctUniqueId, UserName, NASIPAddress, NASPortId, AcctStartTime, " +
"AcctAuthentic, ConnectInfo_start, ServiceType, FramedIPAddress)" +
" values (?,?,?,?,?,TO_DATE(?, 'DY MON DD hh24:mi:ss YYYY'), ?,?,?,?)";

with the ? being filled in by java with data.
This works fine for the most part. The problem is, I think some of the data may be duplicated (but I'm not sure, and not sure how to test for that either).

so basically I want this to insert if, and only if, the data isn't already there. This sounds to me like a merge, only a merge with an empty "when matched" section of code, and I figure there has to be a better way of doing this. This is running on Oracle 9.2.0.4 and the Java is SDK 1.4

Thank you Tom and anyone else who may give me a hand/lead along the way.

Tom Kyte
May 27, 2004 - 8:54 pm UTC

should it not be "put a unique constraint on there" and "java handle exceptions when needed"?
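A sketch of that suggestion, assuming ACCTUNIQUEID is the natural key (a guess from the column name, not something stated in the question):

alter table radacct add constraint radacct_uk unique (acctuniqueid);

-- a duplicate insert now raises ORA-00001 (unique constraint violated),
-- which the Java side can catch (SQLException with error code 1) and ignore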


Angelo Cappelli, June 23, 2004 - 2:41 pm UTC

What if my external file is variable in length and has an array that is dynamic -- sometimes there is one item, other times there may be 4? I also just want certain items from this file to populate my database table, and I want to populate two tables from the same record.

i.e Database Tables with one to many relationship.
Sensor_Signature
sensor varchar(20)
senor_sig varchar (120)

Sensor_signature_ref
sensor varchar(20)
sensor_refer varchar(120)

External file:

alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"EXPLOIT ssh CRC32 overflow NOOP"; flow:to_server,established; content:"|90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90|"; reference:bugtraq,2347; reference:cve,CVE-2001-0144; classtype:shellcode-detect; sid:1326; rev:3;)

The field I want in the Sensor_Signature table is the text that follows EXPLOIT, up to the 1st reference, which is the array data.

The field I want in Sensor_signature_ref is the text following each "reference:". One record for each reference?





Tom Kyte
June 23, 2004 - 3:25 pm UTC

this is one direction to investigate:

ops$tkyte@ORA9IR2> create or replace directory data_dir as '/tmp/'
  2  /
 
Directory created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table t
  2  ( data varchar2(4000) )
  3  organization external
  4  ( type oracle_loader default directory data_dir
  5    access parameters
  6    ( records delimited by newline fields ( "DATA" char(4000) )  )
  7    location ('test.dat' )
  8  )
  9  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from t;
 
DATA
-------------------------------------------------------------------------------
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"EXPLOIT #1 ssh CRC32 overflow
 NOOP"; flow:to_server,established; content:"|90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90|"; reference:bugtraq,2347; reference:cve,CVE-2001-0144; classtyp
e:shellcode-detect; sid:1326;  rev:3;)
 
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"EXPLOIT #2 ssh CRC32 overflow
 NOOP"; flow:to_server,established; content:"|90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90|"; reference:bugtraq,2347; reference:cve,CVE-2001-0144; classtyp
e:shellcode-detect; sid:1326;
 
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"EXPLOIT #3 ssh CRC32 overflow
 NOOP"; flow:to_server,established; content:"|90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90|"; reference:bugtraq,2347; reference:cve,CVE-2001-0144; classtyp
e:shellcode-detect;
 
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"EXPLOIT #4 ssh CRC32 overflow
 NOOP"; flow:to_server,established; content:"|90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90|"; reference:bugtraq,2347; reference:cve,CVE-2001-0144;
 
alert tcp $EXTERNAL_NET any -> $HOME_NET 22 (msg:"EXPLOIT #5 ssh CRC32 overflow
 NOOP"; flow:to_server,established; content:"|90 90 90 90 90 90 90 90 90 90 90
90 90 90 90 90|"; reference:bugtraq,2347;
 
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create or replace view V
  2  as
  3  select exploit,
  4         substr( references, is0, decode( is1, 0, length(references), is1 )-is0  ) r1,
  5         trim(decode( is1, 0, null, substr( references, is1+1, decode( is2, 0, length(references), is2 )-is1
  6             )))r2,
  7         trim(decode( is2, 0, null, substr( references, is2+1, decode( is3, 0, length(references), is3 )-is2
  8             )))r3,
  9         trim(decode( is3, 0, null, substr( references, is3+1, decode( is4, 0, length(references), is4 )-is3
 10             ))) r4
 11    from (
 12  select exploit,
 13         references,
 14             1 is0,
 15             nvl( instr(references, ';', 1, 1 ), 0 ) is1,
 16             nvl( instr(references, ';', 1, 2 ), 0 ) is2,
 17             nvl( instr(references, ';', 1, 3 ), 0 ) is3,
 18             nvl( instr(references, ';', 1, 4 ), 0 ) is4
 19    from (
 20  select substr( data, 1, instr( data, '; ' ) ) exploit,
 21         trim(substr( data, instr( data, '; reference:' )+1 )) references
 22    from (
 23  select substr( data, instr( data, 'EXPLOIT' ) ) data
 24    from t
 25         )
 26         )
 27         )
 28  /
 
View created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> insert all
  2    when (1=1) then into sensor_signature(sensor,senor_sig) values ( s.nextval, exploit )
  3    when (r1 is not null) then into sensor_signature_ref(sensor,sensor_refer) values ( s.currval, r1 )
  4    when (r2 is not null) then into sensor_signature_ref(sensor,sensor_refer) values ( s.currval, r2 )
  5    when (r3 is not null) then into sensor_signature_ref(sensor,sensor_refer) values ( s.currval, r3 )
  6    when (r4 is not null) then into sensor_signature_ref(sensor,sensor_refer) values ( s.currval, r4 )
  7  select * from v;
 
19 rows created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select senor_sig, sensor_refer
  2    from Sensor_Signature, Sensor_signature_ref
  3   where Sensor_Signature.sensor = Sensor_signature_ref.sensor
  4  /
 
SENOR_SIG                                SENSOR_REFER
---------------------------------------- ----------------------------
EXPLOIT #1 ssh CRC32 overflow NOOP";     reference:bugtraq,2347
EXPLOIT #1 ssh CRC32 overflow NOOP";     reference:cve,CVE-2001-0144;
EXPLOIT #1 ssh CRC32 overflow NOOP";     sid:1326;
EXPLOIT #1 ssh CRC32 overflow NOOP";     classtype:shellcode-detect;
EXPLOIT #2 ssh CRC32 overflow NOOP";     reference:bugtraq,2347
EXPLOIT #2 ssh CRC32 overflow NOOP";     reference:cve,CVE-2001-0144;
EXPLOIT #2 ssh CRC32 overflow NOOP";     classtype:shellcode-detect;
EXPLOIT #2 ssh CRC32 overflow NOOP";     sid:1326;
EXPLOIT #3 ssh CRC32 overflow NOOP";     reference:bugtraq,2347
EXPLOIT #3 ssh CRC32 overflow NOOP";     reference:cve,CVE-2001-0144;
EXPLOIT #3 ssh CRC32 overflow NOOP";     classtype:shellcode-detect;
EXPLOIT #4 ssh CRC32 overflow NOOP";     reference:bugtraq,2347
EXPLOIT #4 ssh CRC32 overflow NOOP";     reference:cve,CVE-2001-0144;
EXPLOIT #5 ssh CRC32 overflow NOOP";     reference:bugtraq,2347
 
14 rows selected.
 
ops$tkyte@ORA9IR2>
 

Great!!!

A reader, June 23, 2004 - 5:05 pm UTC


Cool

A reader, June 23, 2004 - 5:50 pm UTC

Great demonstration of external tables, INSERT ALL and clever parsing all in one! You are the greatest!

nina, July 09, 2004 - 1:30 pm UTC

Tom, I know this might be a stupid question, but can you clarify the following:

1.
when you create an external table, will it stay in the database? If I need to load records from a flat file and keep the table that I load records into in the db permanently, will an external table do?

2.
if I have a file with one field in it, say name, and I also need to insert a sequence number with the name in the table - like
Ann 1
Bob 2
Joe 3

is it possible to do with external tables?

thank you in advance

Tom Kyte
July 09, 2004 - 1:57 pm UTC

1) an external table is just a definition -- metadata.

if you insert into "real_table" select * from external_table, then the data will be in the database, in "real_table".

else, data will be in flat file outside of database.


2) insert into real_table select s.nextval, name from external_table;
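A sketch of (2) end to end, assuming the sequence and target table do not exist yet (names are illustrative):

create sequence s;

create table real_table
( seq  number,
  name varchar2(30)
);

insert into real_table ( seq, name )
select s.nextval, name from external_table;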

More on external tables

Nina, July 12, 2004 - 12:32 pm UTC

Tom --

I have just 2 more stupid questions and I am done :)

1.
Can I use the external table command in PL/SQL to load data? I know that I have to first create the directory object. Can I do that in the PL/SQL procedure itself? If not, do I have to put this create statement in a sql script and run it every time I run my package? Or is this create statement a one time deal -- I mean, if I commit, does it stay in the db?


2.

What about the create table (external) statement? Can I do it in PL/SQL, or can I only execute this statement in sqlplus? If that is the case, can I execute this statement once and commit (will it stay in the db this way), or do I have to put it in a sql script and run it every time I run my package?

I guess what I am trying to do is have everything in one package. If that is possible, can you give an example of creating a dir object and an external table IN a pl/sql procedure?

thank you in advance.


Tom Kyte
July 12, 2004 - 8:47 pm UTC

there is no external table command, there is DDL and DDL is easily done via EXECUTE IMMEDIATE in plsql

execute immediate 'whatever'

just read:
http://asktom.oracle.com/Misc/RolesAndProcedures.html




External tables

Nina, July 13, 2004 - 8:06 am UTC

Tom --

The link that you provided is about "ORA-01031: insufficient privileges".
Do I have to execute the DDL (creating the external table) just once, and it is saved in the db? Or do I have to recreate it every time I run my procedure?

Tom Kyte
July 13, 2004 - 11:49 am UTC

you would do it once just like a create table.

the referenced link was because you seem to want to do ddl in your procedure and if so, I was just trying to head off the next question "why doesn't ddl in a procedure work"

External Tables, Oracle 9.2.0.4

Raja, July 30, 2004 - 12:31 pm UTC

Hi Tom,
I have a text file with a main header and footer and multiple record headers and footers, and I want to load only the rows between the record headers and footers. However, when I execute the query on the external table, there are no records.

I looked at the discard file, which shows the right number of header/footer records discarded. The logfile captures all the correct lines with respect to the actual records in the discard file. However, the badfile consists of all the records I really need to be loaded.

This is how the access parameters part looks.....

access parameters
(
RECORDS DELIMITED BY NEWLINE
LOAD WHEN (1:3) != 'REC'
FIELDS (part position (1:5),
prc position (7:8),
qty position (16:6),
tot position (23:8))


The fixed width text file looks like this..

RECTOTHEAD
RECINDHEAD101
10101 00000.54 100000 54000.00
10102 00000.58 025000 14500.00
RECINDFOOT101
RECINDHEAD111
11101 00000.60 100000 60000.00
11102 00000.72 025000 18000.00
RECINDFOOT111
RECTOTFOOT

Could you help me understand why this is happening and how I could fix it?

Your help will be highly appreciated!

Thanks,
Raja




Tom Kyte
July 30, 2004 - 5:39 pm UTC

hows about showing the *entire* example please.

External Tables, Oracle 9.2.0.4

Raja, July 30, 2004 - 2:12 pm UTC

Hi Tom,
Sorry to bother you, I found the silly mistake. The position(n:m) mapping was incorrect.

Why it happens

Thaha Hussain, August 15, 2004 - 5:07 am UTC

I have a file emp.dat (comma separated file) in C:\OraMagic

SQL> create or replace directory data_dir as 'C:\OraMagic';

Directory created.

SQL>  create table external_table
  2   (EMPNO NUMBER(4) ,
  3    ENAME VARCHAR2(10),
  4    JOB VARCHAR2(9),
  5    MGR NUMBER(4),
  6    HIREDATE DATE,
  7    SAL NUMBER(7, 2),
  8    COMM NUMBER(7, 2),
  9    DEPTNO NUMBER(2)
 10   )
 11   ORGANIZATION EXTERNAL
 12   ( type oracle_loader
 13     default directory data_dir
 14     access parameters
 15     ( fields terminated by ',' )
 16     location ('emp.dat')
 17*  )
SQL> /

Table created.

SQL> SELECT * FROM  external_table;
SELECT * FROM  external_table
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04063: unable to open log file EXTERNAL_TABLE_2312_640.log
OS error The system cannot find the file specified.
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1

 

Tom Kyte
August 15, 2004 - 9:24 am UTC

and is c:\oramagic on the DATABASE SERVER's filesystem? probably not. external tables "see the server", not your client machine.

Using Samba

A reader, August 15, 2004 - 9:45 am UTC

My database server is on Unix (Solaris) and I use Samba to expose the 'directories' on the server to my developers. Just create a share for each 'directory' object, grant it the proper access rights and have the users map whatever drive they want to it. Works like a charm.

Tom Kyte
August 15, 2004 - 10:21 am UTC

yup, on a unix based server -- it is "easy", the database will see any mounted file system.

on a windows based server, it is a pain in the butt but doable -- "services" cannot "see" the file systems you "see" since windows is a single user OS, makes it really hard.

Using Samba

A reader, August 15, 2004 - 10:57 am UTC

Well, more than the 'services cannot see what you can see' part, I always wondered how a Windows-based Oracle server can create more than 26 DIRECTORY objects pointing to different servers! Well, A & C are already taken, so it is really only 24!

I always thought that the drive-letter concept of mounting remote filesystems in Windows was daft! 24 is not really that large a number, and it is easily conceivable that users need more than 24!

A related thought, can Windows-based Oracle server use the UNC-style path for

create or replace directory my_dir as '\\server\path\to\dir';

?

Thanks

The directory was in the client. Sorry!

Thaha Hussain, August 17, 2004 - 2:42 am UTC

Dear Tom,

The directory was in the client. Sorry!

Thanks and Regards,

Thaha Hussain

external table generation

Chandru-M, August 26, 2004 - 8:50 am UTC

hi tom,
sorry for posting my question here -- it's very urgent.
I want to generate a value for a column in the external table dynamically, depending on the file name. Can I do that?
For example, I have a data file (db6.txt) which has these columns:
name address age
The number 6 specifies the class. Using the filename, can I have an external table with 4 columns:
name address age class
I have to generate the 4th column from the filename, so its value should be 6, and it should change dynamically -- for the next file, db7.dat, I should have 7 in the class column.
chandru.m

Tom Kyte
August 26, 2004 - 10:15 am UTC

ops$tkyte@ORA9IR2> create or replace directory data_dir as '/tmp/'
  2  /
 
Directory created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> drop table external_table;
 
Table dropped.
 
ops$tkyte@ORA9IR2> create table external_table
  2  (EMPNO NUMBER(4) ,
  3   ENAME VARCHAR2(10),
  4   JOB VARCHAR2(9),
  5   MGR NUMBER(4),
  6   HIREDATE DATE,
  7   SAL NUMBER(7, 2),
  8   COMM NUMBER(7, 2),
  9   DEPTNO NUMBER(2)
 10  )
 11  ORGANIZATION EXTERNAL
 12  ( type oracle_loader
 13    default directory data_dir
 14    access parameters
 15    ( fields terminated by ',' )
 16    location ('db6.txt')
 17  )
 18  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> host flat scott/tiger emp > /tmp/db6.txt
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select ename, empno,
  2        (select substr(location,instr(location,'.')-1,1)
  3               from user_external_locations
  4                  where table_name = 'EXTERNAL_TABLE') class
  5   from external_table;
 
ENAME           EMPNO CLASS
---------- ---------- ----------
SMITH            7369 6
ALLEN            7499 6
WARD             7521 6
JONES            7566 6
MARTIN           7654 6
BLAKE            7698 6
CLARK            7782 6
SCOTT            7788 6
KING             7839 6
TURNER           7844 6
ADAMS            7876 6
JAMES            7900 6
FORD             7902 6
MILLER           7934 6
 
14 rows selected.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> alter table external_table location ('db7.txt');
 
Table altered.
 
ops$tkyte@ORA9IR2> host flat scott/tiger emp > /tmp/db7.txt
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select ename, empno,
  2        (select substr(location,instr(location,'.')-1,1)
  3               from user_external_locations
  4                  where table_name = 'EXTERNAL_TABLE') class
  5   from external_table;
 
ENAME           EMPNO CLASS
---------- ---------- ----------
SMITH            7369 7
ALLEN            7499 7
WARD             7521 7
JONES            7566 7
MARTIN           7654 7
BLAKE            7698 7
CLARK            7782 7
SCOTT            7788 7
KING             7839 7
TURNER           7844 7
ADAMS            7876 7
JAMES            7900 7
FORD             7902 7
MILLER           7934 7
 
14 rows selected.
 
 

reply

chandru, August 30, 2004 - 2:14 am UTC

thanks a lot tom
-Chandru.M

Performance difference if data type is used?

Tony, September 03, 2004 - 6:08 am UTC

I have the following options to create a control file for SQL*Loader:

1. Specify only the CHAR data type for all columns,
i.e., even for integer, float and date columns,
specify the CHAR data type only.

2. Specify exact data types and lengths,
i.e., specify the Integer, Char and Date data types.

3. Specify no data type or length for any of the columns.


Which one is the better option?
Will there be any performance difference among these options?

Please clarify.





Tom Kyte
September 03, 2004 - 11:47 am UTC

the datatype you specify is the external type -- what is in the file, not what is in the table.

if you have a plain "flat file" you can pull up in your favorite editor -- leave the type off, unless it is a string bigger than 255 bytes (then use char(MAXLEN)) or a date with a format mask you want to apply.




A Clarification

Chandru.M, September 05, 2004 - 11:25 am UTC

What are the different types of access drivers we can use with external tables? I know ORACLE_LOADER is the default.


Tom Kyte
September 05, 2004 - 12:19 pm UTC

in 9i that is the only type, in 10g, you have

oracle_loader -- text datafiles
oracle_datapump -- binary dump files produced by oracle.
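For what it's worth, a minimal 10g sketch of the oracle_datapump driver, assuming a data_dir directory object like the earlier examples -- you unload with CREATE TABLE ... AS SELECT, and the binary file produced can then be mounted as an external table elsewhere:

-- unload: writes emp.dmp (a binary datapump file) into data_dir
create table emp_unload
organization external
( type oracle_datapump
  default directory data_dir
  location ('emp.dmp')
)
as
select empno, ename from emp;

-- load side (for example, on another database): reads that same dump file
create table emp_load
( empno number(4),
  ename varchar2(10)
)
organization external
( type oracle_datapump
  default directory data_dir
  location ('emp.dmp')
);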

ORA-29913: error in executing ODCIEXTTABLEFETCH callout

Winston, September 10, 2004 - 6:52 pm UTC

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle9i Enterprise Edition Release 9.2.0.4.0 - Production
PL/SQL Release 9.2.0.4.0 - Production
CORE    9.2.0.3.0       Production
TNS for Linux: Version 9.2.0.4.0 - Production
NLSRTL Version 9.2.0.4.0 - Production


SQL> drop table sqltrace_aq5;
drop table sqltrace_aq5
           *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL>
SQL>
SQL> create table  sqltrace_aq5
  2       (log_line varchar2(4000)    )
  3   ORGANIZATION EXTERNAL
  4   (type oracle_loader
  5   default directory data_dir
  6   access parameters
  7   (fields terminated by 0X'0d0a')
  8   location ('sqltrace_aq1.txt'))
  9
SQL> /

Table created.

SQL> select count(*) from sqltrace_aq5;
select count(*) from sqltrace_aq5
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-30653: reject limit reached
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1


SQL>


linuxdb:/tmp >cat SQLTRACE_aq5_25033.bad




linuxdb:/tmp >grep reject SQLTRACE_aq5_25033.log
KUP-04101: record 12 rejected in file /tmp/sqltrace_aq1.txt
KUP-04101: record 19 rejected in file /tmp/sqltrace_aq1.txt
KUP-04101: record 37 rejected in file /tmp/sqltrace_aq1.txt
KUP-04101: record 42 rejected in file /tmp/sqltrace_aq1.txt
linuxdb:/tmp >ls -l SQLTRACE_aq5_25033.bad
-rw-r--r--    1 oracle   oracle          4 Sep 10 23:29 SQLTRACE_aq5_25033.bad
linuxdb:/tmp >ls -l SQLTRACE_aq5_25033.log
-rw-r--r--    1 oracle   oracle       1001 Sep 10 23:29 SQLTRACE_aq5_25033.log
linuxdb:/tmp >


Question 1: How do I ask Oracle to accept the blank line?


I tried REJECT LIMIT UNLIMITED to ignore the blank-line error and that went well. However, I then hit another problem: some of the lines exceed 4000 characters. I tried to define the external table column as a CLOB, but I got the following error.


     (log_line clob
      *
ERROR at line 2:
ORA-30656: column type not supported on external organized table

Question 2: how can we accept those lines that are longer than 4000 characters?

Thanks in advance, 

Tom Kyte
September 10, 2004 - 7:56 pm UTC

ops$tkyte@ORA9IR2> create or replace directory data_dir as '/tmp'
  2  /
 
Directory created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> create table  sqltrace_aq5
  2       (log_line varchar2(4000)    )
  3   ORGANIZATION EXTERNAL
  4   (type oracle_loader
  5   default directory data_dir
  6   access parameters
  7   (fields terminated by 0X'0d0a'
  8    <b>missing field values are null )</b>
  9   location ('sqltrace_aq1.txt'))
 10  /
 
Table created.
 
ops$tkyte@ORA9IR2>
ops$tkyte@ORA9IR2> select * from sqltrace_aq5;
 
LOG_LINE
-------------------------------------------------------------------------------
hello
 
world


you have to set up multiple 4000-byte varchar2's for that (bigger lines), or just take the first 4000 bytes (using position)
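A sketch of that, reusing the file above (the column names and split widths are made up; missing field values are null keeps the shorter lines from being rejected):

create table long_lines
( piece1 varchar2(4000),
  piece2 varchar2(4000)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters
  ( records delimited by newline
    fields missing field values are null
    ( piece1 position(1:4000),
      piece2 position(4001:8000)
    )
  )
  location ('sqltrace_aq1.txt')
)
reject limit unlimited;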


 

External Tables

Satsri, October 25, 2004 - 4:12 pm UTC

Thanks for your inputs!

Multiple users

A reader, February 18, 2005 - 10:50 pm UTC

I have multiple files with the same format that I want multiple sessions to be able to read at the same time. If each session does an alter table ... location ('location.dat'), it will clobber the other sessions' data.

How is this kind of multi-user access handled with external tables? Thanks

Tom Kyte
February 19, 2005 - 8:17 am UTC

it isn't. external tables are designed primarily to be read once (to be merged or loaded into another table).

but just like a "real" table -- there is but one set of data in there at a time. In order for each user to have their own exclusive set of data -- you would need not one table, but many.
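So a sketch of the many-tables approach is simply one external table per file, each session querying its own (file names here are hypothetical):

create table ext_session1
( empno number(4),
  ename varchar2(10)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters ( fields terminated by ',' )
  location ('session1.dat')
);

create table ext_session2
( empno number(4),
  ename varchar2(10)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters ( fields terminated by ',' )
  location ('session2.dat')
);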

Default Date Format for External Table

A reader, April 26, 2005 - 7:53 pm UTC

Hi Tom, is there a way to define a default date format for date values for an external table, so that I don't have to specify date field with (date_format date mask "dd-Mon-YY").

I tried NLS_DATE_FORMAT but it doesn't work.


Thanks.


Tom Kyte
April 26, 2005 - 8:08 pm UTC

you put it into the external table definition:

HIREDATE CHAR(20)
TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
DATE_FORMAT DATE MASK "DD-Month-YYYY",



OR you load the field as a string and to_date() it.

What is 0X'0d0a' ?

A reader, April 26, 2005 - 11:29 pm UTC

<quote>
SQL> create table  sqltrace_aq5
  2       (log_line varchar2(4000)    )
  3   ORGANIZATION EXTERNAL
  4   (type oracle_loader
  5   default directory data_dir
  6   access parameters
  7   (fields terminated by 0X'0d0a')
  8   location ('sqltrace_aq1.txt'))
  9
SQL> /
</quote>

What is 0X'0d0a' ? 

Tom Kyte
April 27, 2005 - 7:46 am UTC

0d0a is carriage return / linefeed (hex for decimal 13 / 10)....

the end of line marker in windows.

CRLF

Another Reader, April 27, 2005 - 2:30 am UTC

Those are just the ASCII codes for a Carriage Return followed by a Line Feed

External tables and the Merge command

Ravindranath, May 14, 2005 - 6:42 am UTC

Dear Tom,

No doubt your article is excellent, but why do you keep it so long? We find it inconvenient reading all the stuff.

KISS :) Keep It Short and Simple -- yours are simple, but not short.

Kindly make it a point to keep things short. Other than that, you are excellent!!!

Tom Kyte
May 14, 2005 - 9:40 am UTC

I don't make them long

You guys do.



but if you just want to be told what to do, you'll have to go elsewhere. I try to teach how to do things: how to understand what it does, how to figure out on your own when to use it -- in short, how to understand this thing called Oracle.

Your Responses are excellent!!!

Ravindranath, May 18, 2005 - 11:12 pm UTC

Tom, you are an excellent person. You are a Database Guru.

Please accept my apologies -- actually your answers were short and simple, but I was mistaken: I misread the review, not your explanations of the question.

You are The Best I have ever come across.

Impressed

Want to "Refresh" the real table

A reader, June 06, 2005 - 7:51 am UTC

Hi Tom

Good morning. In our data migration environment, I want to use an external table. I have created a table (say REAL_TABLE) which is intended to get the latest data from the external table (say EXTERNAL_TABLE) whenever the flat file gets updated. Now, I want to automate the whole process of refreshing REAL_TABLE. For that, I have to be informed whenever EXTERNAL_TABLE gets new data, and then load that new data into REAL_TABLE. Can you please tell me the way to achieve this?

Thanks for your time.

Thanks
Debashis
Bangalore

Tom Kyte
June 06, 2005 - 8:56 am UTC

not really, only you can figure out how in your environment to notify yourself that a file was updated outside of the database.

Want to "Refresh" the real table

Debashis, June 07, 2005 - 4:53 am UTC

Hi Tom
Good morning. This is my second mail in this series.
I told you that I want to automate the process of data loading through the use of an external table. I have used the MERGE clause and DBMS_JOB to achieve this.
I am showing you the demo procedure below. Kindly advise me whether this is the right approach, or whether I should take another one.


(i) I have EMP table taken from SCOTT schema .

(ii) I have generated emp.csv from EMP .

(iii) I have created an External Table like this

/*********************************************/
CREATE TABLE EMP_EXTERNAL
(EMPNO NUMBER(4) ,
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7, 2),
COMM NUMBER(7, 2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER
DEFAULT DIRECTORY ETL
ACCESS PARAMETERS
(RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
MISSING FIELD VALUES ARE NULL )
LOCATION ('EMP.CSV')
)

/*********************************************/

(iv) I have created a table named LOAD_FROM_EXTERNAL
which will be used to get the latest data in the emp.csv file .

(v) I have created a stored procedure named
LOAD_DATA_FROM_EXTERNAL like this
/*********************************************/
PROCEDURE LOAD_DATA_FROM_EXTERNAL IS

BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE LOAD_FROM_EXTERNAL';
MERGE INTO LOAD_FROM_EXTERNAL E1
USING (SELECT * FROM EMP_EXTERNAL ) E2
ON ( E2.EMPNO = E1.EMPNO )
WHEN MATCHED THEN
UPDATE SET ..
WHEN NOT MATCHED THEN
INSERT ( EMPNO, ENAME, JOB, MGR,HIREDATE, SAL, COMM, DEPTNO )
VALUES (E2.EMPNO, E2.ENAME, E2.JOB, E2.MGR,E2.HIREDATE, E2.SAL, E2.COMM, E2.DEPTNO );
DBMS_OUTPUT.PUT_LINE('Data uploaded successfully ');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error encountered -> ' || SQLCODE ||' , ' ||SQLERRM );
END;

/*********************************************/


(vi) I have created a DMS_JOB which will be calling
the above mentioned procedure in the interval of
4 minutes , like this
/*********************************************/
DECLARE
JOB_NUMBER NUMBER;

BEGIN
DBMS_JOB.SUBMIT( JOB_NUMBER, 'LOAD_DATA_FROM_EXTERNAL;', SYSDATE, 'SYSDATE+4/24/60');
COMMIT ;
DBMS_OUTPUT.PUT_LINE('Job submitted succesfully ');
EXCEPTION WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Error encountered -> ' || SQLCODE ||' , ' ||SQLERRM );
END;
/*********************************************/


This way, whenever I update emp.csv, the LOAD_FROM_EXTERNAL table gets updated.

Now my questions are:
(1) Will this approach be a good one?
(2) If I use this approach, then since I am issuing a TRUNCATE command against the LOAD_FROM_EXTERNAL table, I don't have to use the WHEN MATCHED clause. Is there any way to ignore/remove that clause?

Thanks in advance for your time; waiting for your valuable suggestion.

Thanks
Debashis
Bangalore


Tom Kyte
June 07, 2005 - 8:21 am UTC

(v) if you truncate the table first, do not MERGE, just insert.

(vi) you know, every 4 minutes all of the data will appear to DISAPPEAR. is that what you really want? What are you trying to do here, what is the "goal" ultimately. describe what is happening on this system, what is this data used for and why every 4 minutes?
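A sketch of the truncate-plus-insert version of the procedure, using the same table names as above (and dropping the WHEN OTHERS handler, which would only hide real errors from the job queue):

create or replace procedure load_data_from_external
is
begin
    execute immediate 'truncate table load_from_external';

    insert into load_from_external
           ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
    select   empno, ename, job, mgr, hiredate, sal, comm, deptno
      from   emp_external;
end;
/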



Is this a possibility as well?

Bill S., June 10, 2005 - 11:45 am UTC

Tom,

I am looking into using external tables to load data into some staging tables. So far all the discussions I have seen refer to using csv files -- but my data (and unfortunately I cannot change this at this time) is coming in in MS Excel format, and there are literally hundreds of files coming in -- no way to take the time to open each one individually and save as CSV. Do you (or anyone else out there) know if it is possible to use external tables with MS Excel files? It would be a big time saver; my alternative is to use SQL*Loader, and it seems that external tables would eliminate some of the work I would have to do with SQL*Loader.

Thanks much!

Regards,

Bill S.

Tom Kyte
June 10, 2005 - 3:37 pm UTC

No, you cannot. However -- if you have ODBC lying about, you can query them as if they were tables.

</code> http://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:4406709207206#18830681837358 <code>

OOPS! Forgot version, etc. :-(

Bill S., June 10, 2005 - 11:47 am UTC

Tom,

Sorry, Oracle9i Enterprise Edition Release 9.2.0.3.0 - Production

Thanks!

Bill S. (who obviously can't remember stuff from your blog)

As usual, a perfect solution for a nasty problem

Bill S., June 10, 2005 - 4:20 pm UTC

And I hadn't yet found that thread. Thanks much Tom!

external table error...

Craig, June 13, 2005 - 1:46 pm UTC

Tom,

I've gone through this thread and the Oracle documentation, but I'm still having a slight problem with my external table.  It may be something simple that either you or someone else on this forum can point out to me, but please consider the following:

mrdssws1$ sqlplus 

SQL*Plus: Release 9.2.0.6.0 - Production on Mon Jun 13 11:53:40 2005

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Enter user-name: sys as sysdba
Enter password: 

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production

SQL> create or replace directory data_dir as '/export/share/mrdssapp';

Directory created.

SQL> grant read on directory data_dir to mrdssa;

Grant succeeded.

SQL> conn mrdssa
Enter password: 
Connected.
SQL> @cr_cbrne_pers_ddl.txt

Table created.

SQL> desc external_cbrne_pers;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 DATE_COMPLETED                                     DATE
 LAST_NAME                                          VARCHAR2(15)
 FIRST_NAME                                         VARCHAR2(15)
 MI                                                 VARCHAR2(1)
 LAST_4                                             NUMBER(4)
 COMPONENT                                          VARCHAR2(11)
 SEGMENT                                            VARCHAR2(9)
 GRADE                                              VARCHAR2(35)
 CORRECT_CORPS                                      VARCHAR2(2)
 CORRECT_DAFSC                                      VARCHAR2(3)
 JOB_TITLE                                          VARCHAR2(100)
 MAJCOM                                             VARCHAR2(50)
 BASE                                               VARCHAR2(30)

SQL> select * from external_cbrne_pers;
select * from external_cbrne_pers
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04063: unable to open log file EXTERNAL_CBRNE_PERS_2959.log
OS error No such file or directory
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1

SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.6.0 - Production
mrdssws1$ pwd
/export/home/mrdssapp
mrdssws1$ ls -ltr
total 1642
drwxr-----   2 mrdssa   oinstall     512 Sep 20  2004 wbits
drwxr-xr-x   2 mrdssa   oinstall     512 Oct 21  2004 lib
drwx------   5 mrdssa   dba          512 Oct 21  2004 afwus
drwx------   6 mrdssa   dba          512 Oct 21  2004 mrl
drwx------   6 mrdssa   dba          512 Oct 21  2004 allow_std
drwxr-xr-x   6 mrdssa   oinstall     512 Oct 21  2004 uimmaster
drwxr-----   6 mrdssa   oinstall     512 Oct 21  2004 uimmajcom
drwxr-xr-x   6 mrdssa   oinstall     512 Feb 11 14:21 uim
drwx------   6 mrdssa   dba         1024 Feb 11 14:25 umd
drwx------   5 mrdssa   dba         1536 Feb 11 14:30 manfor
drwxrwxr-x   7 mrdssa   oinstall   47616 Apr 18 12:13 web
drwxr-x---   4 mrdssa   dba          512 Jun  9 12:43 admin
-rw-r--r--   1 mrdssa   oinstall  767630 Jun 13 10:28 Clinician_Completion_TestBox_050315.dat
drwx------  10 mrdssa   dba          512 Jun 13 11:45 materiel
-rwxrwxrwx   1 mrdssa   oinstall     519 Jun 13 11:49 cr_cbrne_pers_ddl.txt
mrdssws1$ cat cr_cbrne_pers_ddl.txt
create table external_cbrne_pers
(Date_Completed date,
 Last_Name VARCHAR2(15),
 First_Name VARCHAR2(15),
 MI varchar2(1),
 Last_4 number(4,0),
 Component varchar2(11),
 Segment varchar2(9),
 Grade varchar2(35),
 Correct_Corps varchar2(2),
 Correct_DAFSC varchar2(3),
 Job_Title varchar2(100),
 MAJCOM varchar2(50),
 Base varchar2(30)
)
ORGANIZATION EXTERNAL
( type oracle_loader
  default directory data_dir
  access parameters
  ( fields terminated by ',' )
  location ('Clinician_Completion_TestBox_050315.dat')
)
/
mrdssws1$ 


Thanks in advance! 

Tom Kyte
June 13, 2005 - 1:48 pm UTC

KUP-04063: unable to open log file EXTERNAL_CBRNE_PERS_2959.log
OS error No such file or directory

it is trying to open a log file there (by default, the same place as the input file). You either

a) don't actually have that directory or
b) oracle isn't allowed to WRITE there.

External Tables CANNOT replace SQLLDR

Dean, June 14, 2005 - 11:12 am UTC

The beginning of this thread posited that SQLLDR was no longer needed with the advent of External Tables.

While I agree that External Tables have their place, and can often save steps in data loading, SQLLDR is still needed for a wide variety of reasons.

The first that comes to mind, which I have hit recently, is that External Tables do not support Logical Record layouts in flat files.

I'm sure there are many other limitations as well.

Perhaps Tom could list them in this post for all to ponder.

Thanks

Tom Kyte
June 14, 2005 - 1:45 pm UTC

lag and lead cover all of that :)

I can do *anything* in sql :)



Dean, June 14, 2005 - 2:40 pm UTC

I don't quite follow how you could use lag and/or lead to simulate a CONTINUEIF (1:2) <> '01' statement in a logical record layout with a variable number of physical records per logical record. I am currently using that in a control file (something that Oracle Warehouse Builder can't do, FYI).

While I agree that using SQL one can do just about anything, sometimes I believe it is just more practical to use a tool that may simplify, in a single definition, what may be complex in SQL or PL/SQL.



Tom Kyte
June 14, 2005 - 4:22 pm UTC

Ok, if there were an infinite number of possible continueif records -- you might have something there.

If you had a reasonable maximum, lag/lead would work

External tables opens up much more flexibility

David Rydzewski, June 14, 2005 - 3:28 pm UTC

If you are comfortable with that pre-defined functionality in sqlldr, then perhaps you should continue to use it.

But to say external tables cannot do it is probably not true. You have so much flexibility in that you can use sql (including analytic functions), join to other external/internal tables, and for added flexibility write your own pl/sql functions and/or pipelined pl/sql to transform the data on the way in, etc.

I think it's an awesome addition.

Dave

Dean, June 14, 2005 - 4:11 pm UTC

Don't get me wrong, I also believe it's an awesome addition. I wish it contained the complete functionality of SQLLDR, especially in reference to complex logical record layouts.

I was simply commenting on the initial remark in the post about SQLLDR becoming "moot". I believe it certainly still has its place.

Another consideration when using joins and the like directly off an external table is that, depending on the size of the file, a join can kill performance: any access to the external table is, by definition, a full file scan, which performs much worse than a full table scan would.



still curious

Dean, June 17, 2005 - 8:56 am UTC

I'm still curious how you could use lag and/or lead to simulate a CONTINUEIF
(1:2) <> '01' statement in a logical record layout with a variable number of
physical records per logical.



Tom Kyte
June 17, 2005 - 3:16 pm UTC

if they have a reasonable maximum number (say 10) that they can span, you can use lag and lead to read ahead N records.

say that max was "5".  flag = 1 represents "new group", seq is rownum assigned as we select from the External table:


ops$tkyte@ORA10GR1> select * from t;
 
       SEQ       FLAG DATA
---------- ---------- ----------
         1          1 row 1
         2          2 row 2
         3          1 row 3
         4          2 row 4
         5          1 row 5
         6          1 row 6
         7          1 row 7
         8          1 row 8
         9          2 row 9
        10          1 row 10
        11          2 row 11
        12          2 row 12
        13          2 row 13
        14          2 row 14
        15          1 row 15
        16          1 row 16
        17          2 row 17
        18          2 row 18
        19          1 row 19
        20          1 row 20
        21          2 row 21
        22          1 row 22
        23          2 row 23
        24          1 row 24
        25          1 row 25
        26          2 row 26
        27          2 row 27
        28          1 row 28
        29          1 row 29
        30          1 row 30
        31          2 row 31
        32          1 row 32
 
32 rows selected.
 
<b>we can mark the groups and lead() in them:</b>

ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select *
  2    from (
  3  select flag, data,
  4         lead(data,1) over (partition by grp order by seq) next1,
  5         lead(data,2) over (partition by grp order by seq) next2,
  6         lead(data,3) over (partition by grp order by seq) next3,
  7         lead(data,4) over (partition by grp order by seq) next4,
  8         lead(data,5) over (partition by grp order by seq) next5,
  9             count(*) over (partition by grp) cnt,
 10             row_number() over (partition by grp order by seq) rn
 11    from (
 12  select seq, flag, data, max(grp) over (order by seq) grp
 13    from (
 14  select seq, flag, data,
 15         case when flag = 1
 16                  then row_number() over (order by seq)
 17                  end grp
 18    from t
 19         )
 20         )
 21         )
 22   where rn = 1
 23  /
 
      FLAG DATA       NEXT1      NEXT2      NEXT3      NEXT4      NEXT5             CNT         RN
---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
         1 row 1      row 2                                                           2          1
         1 row 3      row 4                                                           2          1
         1 row 5                                                                      1          1
         1 row 6                                                                      1          1
         1 row 7                                                                      1          1
         1 row 8      row 9                                                           2          1
         1 row 10     row 11     row 12     row 13     row 14                         5          1
         1 row 15                                                                     1          1
         1 row 16     row 17     row 18                                               3          1
         1 row 19                                                                     1          1
         1 row 20     row 21                                                          2          1
         1 row 22     row 23                                                          2          1
         1 row 24                                                                     1          1
         1 row 25     row 26     row 27                                               3          1
         1 row 28                                                                     1          1
         1 row 29                                                                     1          1
         1 row 30     row 31                                                          2          1
         1 row 32                                                                     1          1
 
18 rows selected.

<b>or, if you wanted, we could mark the groups and use stragg (search this site for that) to string them up</b>

 
ops$tkyte@ORA10GR1>
ops$tkyte@ORA10GR1> select *
  2    from (
  3  select flag,
  4         case when row_number() over (partition by grp order by seq) = 1
  5                  then stragg( data ) over( partition by grp order by seq rows between unbounded preceding and unbounded following )
  6                  end str_data
  7    from (
  8  select seq, flag, data, max(grp) over (order by seq) grp
  9    from (
 10  select seq, flag, data,
 11         case when flag = 1
 12                  then row_number() over (order by seq)
 13                  end grp
 14    from t
 15         )
 16         )
 17         )
 18   where str_data is not null
 19  /
 
      FLAG STR_DATA
---------- --------------------------------------------------
         1 row 1,row 2
         1 row 3,row 4
         1 row 5
         1 row 6
         1 row 7
         1 row 8,row 9
         1 row 10,row 11,row 12,row 13,row 14
         1 row 15
         1 row 16,row 17,row 18
         1 row 19
         1 row 20,row 21
         1 row 22,row 23
         1 row 24
         1 row 25,row 26,row 27
         1 row 28
         1 row 29
         1 row 30,row 31
         1 row 32
 
18 rows selected.
 

DROP TABLE unnerving...

Duke Ganote, July 08, 2005 - 1:33 pm UTC

External tables are great! However, I still get unnerved every time I have to issue a "drop table" command -- fearing (wrongly) that I'll lose the data (file) too!

Can you also read a blank line as a valid record from an Ext Table?

PaulJ, January 31, 2006 - 12:23 am UTC

Tom,

I would like to map the "badfile" from an external table so that it is the source for another external table. That way I can report on invalid records within the badfile without resorting to a host command.

For my example, if you have a badfile that contains two records:

> vi test.bad
"

bad record2
"
(Where the first row is a blank line).

How can the create table statement be structured to capture the blank lines as well?

i.e.
CREATE TABLE bad_records
(
bad_record varchar2(4000)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TMP_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS
(
BAD_RECORD POSITION (1:4000)
)
)
LOCATION ('test.bad')
)
REJECT LIMIT UNLIMITED
/

But when I do a select from the "bad_records" table, it only shows one record and not the blank line as well.

Thanks.
Paul


Tom Kyte
January 31, 2006 - 2:17 am UTC

are you sure?

ops$tkyte@ORA10GR2> create or replace directory tmp_dir as '/tmp'
  2  /

Directory created.

ops$tkyte@ORA10GR2> !echo > /tmp/test.bad

ops$tkyte@ORA10GR2> !echo "bad record" >>  /tmp/test.bad

ops$tkyte@ORA10GR2> !cat /tmp/test.bad

bad record

ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> CREATE TABLE bad_records
  2  (
  3   bad_record varchar2(4000)
  4  )
  5  ORGANIZATION EXTERNAL
  6  (
  7   TYPE ORACLE_LOADER
  8   DEFAULT DIRECTORY TMP_DIR
  9   ACCESS PARAMETERS
 10   (
 11    RECORDS DELIMITED BY NEWLINE
 12    FIELDS
 13    (
 14     BAD_RECORD POSITION (1:4000)
 15    )
 16   )
 17   LOCATION ('test.bad')
 18  )
 19  REJECT LIMIT UNLIMITED
 20  /

Table created.

ops$tkyte@ORA10GR2> select '"' || bad_record || '"' from bad_records;

'"'||BAD_RECORD||'"'
-------------------------------------------------------------------------------------------------------------------------
""
"bad record"

 

Loading blank lines from Ext Table

PaulJ, January 31, 2006 - 6:08 pm UTC

Thanks for your fast reply Tom.

I ran the steps exactly like you did and the blank line still didn't get returned.

See below for details.

But the DB version that I'm using is: 9.2.0.6
So this may be a fix/change from my DB version to the 10gR2 version.

Thanks.
Paul

SQL*Plus: Release 9.2.0.6.0 - Production on Wed Feb 1 09:53:54 2006

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.2.0.6.0 - 64bit Production
With the Partitioning and OLAP options
JServer Release 9.2.0.6.0 - Production

SQL> CREATE TABLE bad_records
  2  (
  3   bad_record varchar2(4000)
  4  )
  5  ORGANIZATION EXTERNAL
  6  (
  7   TYPE ORACLE_LOADER
  8   DEFAULT DIRECTORY TMP_DIR
  9   ACCESS PARAMETERS
 10   (
 11    RECORDS DELIMITED BY NEWLINE
 12    FIELDS
 13    (
 14     BAD_RECORD POSITION (1:4000)
 15    )
 16   )
 17   LOCATION ('test.bad')
 18  )
 19  REJECT LIMIT UNLIMITED
 20  /

Table created.

SQL> !cat /tmp/test.bad

DTEST,A

SQL> select '"'||bad_record||'"' from bad_records;

'"'||BAD_RECORD||'"'
--------------------------------------------------------------------------------
"DTEST,A"
 

Tom Kyte
February 01, 2006 - 2:39 am UTC

CREATE TABLE bad_records
(
bad_record varchar2(4000)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY TMP_DIR
ACCESS PARAMETERS
(
RECORDS DELIMITED BY NEWLINE
FIELDS MISSING FIELD VALUES ARE NULL

(
BAD_RECORD POSITION (1:4000)
)
)
LOCATION ('test.bad')
)
REJECT LIMIT UNLIMITED
/


that worked on my 9ir2 instance.


Blank Lines from Ext Table

PaulJ, January 31, 2006 - 6:18 pm UTC

Sorry for the multiple replies Tom.

I just checked the LOGFILE and it contains the following info:

LOG file opened at 02/01/06 10:04:40

Field Definitions for table BAD_RECORDS
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform
Rows with all null fields are accepted

Fields in Data Source:

BAD_RECORD CHAR (4000)
Record position (1, 4000)
Trim whitespace same as SQL Loader
KUP-04021: field formatting error for field BAD_RECORD
KUP-04023: field start is after end of record
KUP-04101: record 1 rejected in file /tmp/test.bad


Thanks.
Paul


mapping bad file to an external table

ali king, May 03, 2006 - 10:26 am UTC

I wanted to map the bad file as an additional external table to check if there were any errors/rejects.

but as there weren't any, I get:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file ali.bad in INCOMING_BAD not found
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1

obviously it couldn't find the file, because it doesn't exist -- what is the best way to verify that there were no rejects/errors?

Tom Kyte
May 03, 2006 - 1:06 pm UTC

well, the lack of existence of the file would be one clue (a bfile could be used to see if it were there -- or just handle the exception from the select)
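A sketch of the bfile check, using the directory and file name from the error message above:

declare
    l_bad bfile := bfilename( 'INCOMING_BAD', 'ali.bad' );
begin
    if dbms_lob.fileexists( l_bad ) = 1
    then
        dbms_output.put_line( 'bad file exists -- some records were rejected' );
    else
        dbms_output.put_line( 'no bad file -- nothing was rejected' );
    end if;
end;
/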

sql*loader vs external table

Kumar, July 09, 2006 - 6:34 pm UTC

Tom,

Thanks for the great site. A few questions, Tom.

a. Has any benchmarking been done on massive loads (maybe 10 million plus rows) into a table using sql*loader vs external tables? Any pointers to that?

b. If an external table is used to load data, and DML then has to be done using other lookup tables, am I right in understanding that there is a need to create a table using CTAS from the external table and then do the DML on it?

c. Can you point to a good doc on external tables in 10gR2?

Tom Kyte
July 09, 2006 - 7:00 pm UTC

a) using a single sql statement -- which can do direct pathing or not, just like sqlldr -- is generally going to be faster.

b) why do updates to data you just loaded? JUST DO THE "UPDATE" during the load!

c) server utilities guide:
</code> http://docs.oracle.com/docs/cd/B19306_01/server.102/b14215/toc.htm <code>

RE:why do updates to data you just loaded?

Duke Ganote, July 09, 2006 - 8:54 pm UTC

"why do updates to data you just loaded?" I would've wondered about this too, except this situation is exactly what I found when loading the data warehouse at my present employer! When I asked why, I was given an explanation like this:

After loading the staging tables (basic copies of the source tables), we load the latest basic data (customer, product, etc) first into the data warehouse and update the staging tables with the generated surrogate keys.

The staging tables have far fewer records than the data warehouse tables, so when we load secondary records (sales, etc), it's faster to get the surrogate kesy for customer/product/etc surrogate from the basic staging tables than search through the much larger data warehouse tables.

Tom Kyte
July 10, 2006 - 7:43 am UTC

I don't get it, seems it would be six one way, half a dozen the other.


If you have the ability to lookup the surrogate keys and update them, you have the ability to QUERY THEM OUT in the first place.

eg: you are currently (apparently) doing something like this:

a) load stage tables with null surrogates
b) updating stage tables to populate surrogates
c) move stage into "real" tables


when you could be (with external tables)

a) join the external table to the table with the surrogates and load right into the "real" tables



RE:why do updates to data you just loaded?

Duke Ganote, July 10, 2006 - 9:42 am UTC

Your intuition is good. We essentially do that currently. For example, the DW product table has 27,000,000 records, while this month's product extract has just 99,000. Needless to say, after loading the product table, when loading the fact tables, it's a lot faster to find the product surrogate on the staging table than the DW product table.

I've argued for an alternative like this:
a) create external tables (BTW: a misleading name to some members of the team; I explain that they're really 'views of flat files').
b) populate 'helper' staging tables (possibly IOTs) that have the keys from the external tables and the generated surrogates.
c) move staging data into "real" tables

Tom Kyte
July 10, 2006 - 9:51 am UTC

ok, but I still don't get why you do "b". totally up to you.

Kumar, July 10, 2006 - 11:12 am UTC

"b) why do updates to data you just loaded? JUST DO THE "UPDATE" during the load! "

This is what I thought:

Table A is dependent upon data from Table B. Both are coming from flat files. So I thought the only possibility was loading both A and B by way of external tables, then moving them to real tables, and then doing the updates.

If your approach can be used for this scenario, that would be very nice, as the processing time is reduced when we are looking at millions of rows in both tables.

So the possible solution that I visualize is: create one real table, Table B (which has 25% of the records compared to Table A), through an external table, and then create Table A with the updates from Table B applied during the load.

Is my thought process in the direction you suggest?

Tom, also can you show a small example of the "update" while loading through an external table?


Tom Kyte
July 10, 2006 - 12:40 pm UTC

you didn't say that. you said:

...
If external table is used to load data and then dml to be done using other
lookup tables, am I right in understanding that there is a need to create a
table using CTAS from the external table and then doing the dml on them?
......

seems you would

a) load lookup tables (your table B?)
b) then load A, which can use B, which you just loaded.


The "update" while "loading" is called a "function" or a "join", you just SELECT from the external table(s) and real table(s) the data you want, in a single sql statement.



Christian, July 13, 2006 - 10:55 am UTC

hi tom.

i made this


create or replace directory data_dir as 'c:\temp\'

create table external_table
(EMPNO NUMBER(4) ,
ENAME VARCHAR2(10),
JOB VARCHAR2(9),
MGR NUMBER(4),
HIREDATE DATE,
SAL NUMBER(7, 2),
COMM NUMBER(7, 2),
DEPTNO NUMBER(2)
)
ORGANIZATION EXTERNAL
( type oracle_loader
default directory data_dir
access parameters
( fields terminated by ',' )
location ('emp.dat')
)
/

all fine but when i select * from external_table
i get this:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04063: unable to open file exteranl_table_24290.log OS error No such file or directory
ORA-06512: at "SYS.ORACLE_LOADER", line 19

the file is in the directory, and i tried on both a 9i and a 10g DB

What am I doing wrong?

PS: i used the same file as your example.

Tom Kyte
July 13, 2006 - 1:36 pm UTC

you do not have the ability to write to that directory, and it is trying to create a log file there by default.

either NOLOGFILE it, or...
get write access to the directory, or...
use a different directory for the log.
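A sketch of those last two options (log_dir is a hypothetical second directory that the oracle software owner can write to; alternatively, just say NOLOGFILE in the access parameters):

create or replace directory log_dir as '/some/writable/path';

create table external_table
( empno number(4),
  ename varchar2(10)
)
organization external
( type oracle_loader
  default directory data_dir
  access parameters
  ( records delimited by newline
    logfile log_dir:'external_table.log'
    badfile log_dir:'external_table.bad'
    fields terminated by ','
  )
  location ('emp.dat')
);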

upper

Gabriel, December 11, 2006 - 2:58 pm UTC

Hello Tom,

Is there a way to transform (upper) a column from a flat file in an external table? I am trying to get an external table to display a flat file column as upper(column).

Thank you,

Tom Kyte
December 11, 2006 - 7:18 pm UTC

how about a view? Therein lies the real power of external tables - anything you can think of doing...
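For instance, a sketch over the emp-style external table from the top of this thread:

create or replace view external_table_v
as
select empno, upper(ename) ename, job, mgr, hiredate, sal, comm, deptno
  from external_table;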

nice

Gabriel, December 12, 2006 - 6:16 pm UTC

Why didn't I think of that, that is what I want to know! Seriously,

Best and efficient way to load data from flat-file to a relational table

Vijay Bhasksar Dodla, February 27, 2009 - 7:52 am UTC

Tom,

We have a requirement which demands loading a flat file, of size 10GB, into an Oracle table on a daily basis. The RDBMS version is Oracle 10gR2.

We're currently evaluating the best approach. A few methods which come to mind include:

SQL*Loader
External tables
Pro*C (can you confirm?)

I would like to know which one of these provides the most optimal solution to the said requirement. From my past experience, I can say that external tables have an edge over SQL*Loader, especially when it comes to ease of implementation, space consumption, etc. Apart from these, do you see any other metrics or advantages of one over the other (external tables vis-a-vis SQL*Loader)? Further, when is SQL*Loader more suitable?

As regards a Pro*C implementation, I know for sure that it is a best fit in the other direction (table to flat-file extraction), more so when the data volumes are HIGH. However, I'm not quite sure how useful it is when loading the other way, i.e. flat file to table.

Your expert comments, with possible benchmarking stats, would be highly useful.

Regards,
Vijay
Tom Kyte
March 03, 2009 - 9:44 am UTC

... I would like to know, which one of these provides most optimal solution to the
said requirement. ...

if one were superior in all ways, there would be only that way -- why would we bother inventing the others?


If I had to "default", it would be external tables by far - in most cases. A parallel direct path load is *trivial* with external tables. It is really hard with the others. (sqlldr less hard than a custom C program of course)

Thanks

Vijay Bhaskar Dodla, March 04, 2009 - 3:35 am UTC

Tom,

Thanks for the response.

Yes, I understand that there cannot be a one-method-fits-all kind of solution. I just wanted to know which one "predominantly" betters the other. And looking at your response, it seems "External Tables" is the answer.

Once again, thanks for your help.

Regards,
Vijay
Tom Kyte
March 04, 2009 - 1:12 pm UTC

Be careful with "predominantly better", that is someone looking for a 'best practice'. You have to evaluate what you are doing and use your knowledge of the various approaches and decide what is best for you in your case.


sqlldr "enclosed by" contains a windoze carriage return line feed

A reader, September 18, 2009 - 1:41 pm UTC

Hello Tom,
Thank you for maintaining this great website.

I've been struggling with SQL*Loader against a Windows 10.2.0.4 database.

My issue seems to be caused by a "carriage return line feed" in the datafile (which I cannot modify) that is used in my "enclosed by" control file syntax.

The control/data files below work fine when I modify them to include "</PROPERTY>" on a single line with its record.

Your help is greatly appreciated.

My datafile:
<CONTACT>
<PROPERTY NAME="locid"><locationid>1</locationid>
</PROPERTY>
<PROPERTY NAME="first"><firstname>Robert</firstname>
</PROPERTY>
<PROPERTY NAME="last"><lastname>Jones</lastname>
</PROPERTY>
</CONTACT>
<CONTACT>
<PROPERTY NAME="locid"><locationid>2</locationid>
</PROPERTY>
<PROPERTY NAME="first"><firstname>Jim</firstname>
</PROPERTY>
<PROPERTY NAME="last"><lastname>Smith</lastname>
</PROPERTY>
</CONTACT>

my controlfile:
load data
infile 'contact1.xml' "str '</CONTACT>'"
truncate
into table test10
trailing nullcols
(
dummy filler terminated by ">",
locationid enclosed by "<PROPERTY NAME=\"locid\"><locationid>" and "</locationid>||chr(13)||chr(10)||</PROPERTY>",
firstname enclosed by "<PROPERTY NAME=\"first\"><firstname>" and "</firstname>||chr(13)||chr(10)||</PROPERTY>",
lastname enclosed by "<PROPERTY NAME=\"last\"><lastname>" and "</lastname>||chr(13)||chr(10)||</PROPERTY>"
)

my logfile:

SQL*Loader: Release 11.1.0.7.0 - Production on Fri Sep 18 11:30:38 2009

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Control File:   contact1.ctl
Data File:      contact1.xml
  File processing option string: "str '</CONTACT>'"
  Bad File:     contact1.bad
  Discard File:  none specified
 
 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Bind array:     64 rows, maximum of 256000 bytes
Continuation:    none specified
Path used:      Conventional

Table TEST10, loaded from every logical record.
Insert option in effect for this table: TRUNCATE
TRAILING NULLCOLS option in effect

   Column Name                  Position   Len  Term Encl Datatype
------------------------------ ---------- ----- ---- ---- ---------------------
DUMMY                               FIRST     *   >       CHARACTER            
  (FILLER FIELD)
LOCATIONID                           NEXT     *           CHARACTER            
    First enclosure string : '<PROPERTY NAME="locid"><locationid>'
    Second enclosure string : '</locationid>||chr(13)||chr(10)||</PROPERTY>'
FIRSTNAME                            NEXT     *           CHARACTER            
    First enclosure string : '<PROPERTY NAME="first"><firstname>'
    Second enclosure string : '</firstname>||chr(13)||chr(10)||</PROPERTY>'
LASTNAME                             NEXT     *           CHARACTER            
    First enclosure string : '<PROPERTY NAME="last"><lastname>'
    Second enclosure string : '</lastname>||chr(13)||chr(10)||</PROPERTY>'

Record 1: Rejected - Error on table TEST10, column LOCATIONID.
second enclosure string not present
Record 2: Rejected - Error on table TEST10, column LOCATIONID.
second enclosure string not present
Record 3: Discarded - all columns null.

Table TEST10:
  0 Rows successfully loaded.
  2 Rows not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  1 Row not loaded because all fields were null.


Space allocated for bind array:                  49536 bytes(64 rows)
Read   buffer bytes: 1048576

Total logical records skipped:          0
Total logical records read:             3
Total logical records rejected:         2
Total logical records discarded:        1

Run began on Fri Sep 18 11:30:38 2009
Run ended on Fri Sep 18 11:30:38 2009

Elapsed time was:     00:00:00.14
CPU time was:         00:00:00.06


my table:
SQL> desc test10
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 LOCATIONID                                         VARCHAR2(5)
 FIRSTNAME                                          VARCHAR2(200)
 LASTNAME                                           VARCHAR2(200)

external table performance

Ramki, November 08, 2009 - 10:14 am UTC

Hi Tom,

We are using a 10g external table to load 1 billion rows every day. We are using a parallel direct path load, and we have a 3 node RAC cluster with 32GB RAM on each node. Currently we are able to load 1 million rows in 1 minute; the customer is expecting us to load 3 million rows per minute. When I look at OEM, the resource consumption is not high on any of the 3 nodes. Do we need to change an I/O parameter at the OS level? Can you please give me a clue how to improve the performance, and I will try it out.

Tom Kyte
November 11, 2009 - 2:08 pm UTC

tell me more


ops$tkyte%ORA10GR2> select sysdate from dual;

SYSDATE
--------------------
11-nov-2009 15:06:24

ops$tkyte%ORA10GR2> @big_table2 1000000

Table created.


Table altered.

old   3:     l_rows number := &1;
new   3:     l_rows number := 1000000;
old   9:         where rownum <= &1;
new   9:         where rownum <= 1000000;

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select sysdate from dual;

SYSDATE
--------------------
11-nov-2009 15:06:41




I just loaded and indexed and put a primary key on 1,000,000 rows on a really old machine - bear in mind, the above timing includes my TYPING TIME, I did not script this - in less than 20 seconds


1,000,000 rows per minute is painfully slow - what are you doing here? give us more information.


1,000,000 rows is tiny, I could type that much in in a minute, a direct path load should blow through that easily.

External table

Ramki, November 12, 2009 - 3:20 am UTC

Hi Tom,

Thanks for your reply, and sorry for coming back very late.
I am including the stage table, the RAW table and the INSERT /*+ APPEND */ below.
We have data in 50 ".dat" files; every cycle we move the 50 oldest files over from another dir.

I forgot to mention that the 1 minute also includes the time to move the files from the other folder into "SMS_DIR".
In OEM I am seeing lots of "PX Deq Credit: send blkd" waits.

CREATE TABLE "STAGE"."STAGE_EXT_SMS"
( "SMS_REPORT_TIME" TIMESTAMP (6),
"SMS_CALLING_NUMBER" VARCHAR2(22 CHAR),
"SMS_CALLED_NUMBER" VARCHAR2(22 CHAR),
"SMS_MSISDN_NUMBER" VARCHAR2(18 CHAR),
"SMS_DIALLED_DIGITS" VARCHAR2(22 CHAR),
"SMS_ORIG_CALLING_NUM" VARCHAR2(22 CHAR),
"SMS_VMSC_ID_NUMBER" VARCHAR2(25 CHAR),
"SMS_IMSI" NUMBER(16,0),
"SMS_IMEI" VARCHAR2(16 CHAR),
"SMS_TYPE" NUMBER(3,0),
"SMS_CALL_COUNT" NUMBER(10,0),
"SMS_LENGTH" NUMBER(10,0),
"SMS_DX_CAUSE" NUMBER(10,0),
"SMS_CONCAT_INFO_REF_NUMBER" NUMBER(10,0),
"SMS_LAC" NUMBER(5,0),
"SMS_SAC" NUMBER(5,0),
"SMS_CELL" NUMBER(5,0),
"SMS_INCOMING_TS_TIME" TIMESTAMP (6),
"SMS_DELIVERY_TS_TIME" TIMESTAMP (6),
"SMS_GMSC_ID_NUMBER" VARCHAR2(18 CHAR),
"SMS_FORWARDED_TO_SC_NUMBER" NUMBER(10,0),
"SMS_ROAMING_STATUS" NUMBER(10,0),
"SMS_CONCAT_INFO_APP" NUMBER(10,0),
"SMS_REF_NUMBER_MSC_ADDRESS" VARCHAR2(18 CHAR),
"SMS_BSC_ID" NUMBER(5,0),
"SMS_RNC_ID" NUMBER(5,0),
"SMS_SERV_CENTER_ADDR_NUM" VARCHAR2(18 CHAR),
"SMS_CONCAT_INFO_REF_NUM_TYP" NUMBER(10,0),
"SMS_CONCAT_INFO_MAX_NUM_CON" NUMBER(10,0),
"SMS_CONCAT_INFO_SEQ_NUM_CON" NUMBER(10,0),
"SOURCE_ID" NUMBER(10,0)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY "SMS_DIR"
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
CHARACTERSET AL32UTF8
STRING SIZES ARE IN CHARACTERS
BADFILE 'STAGE_EXT_SMS.bad'
DISCARDFILE 'STAGE_EXT_SMS.dis'
LOGFILE 'STAGE_EXT_SMS.log'
LOAD WHEN (SMS_REPORT_TIME != 'NULL')
FIELDS
TERMINATED BY ','
OPTIONALLY ENCLOSED BY "'" AND "'"
LRTRIM
(
SMS_REPORT_TIME CHAR(26) date_format timestamp mask "YYYY-MM-DD HH24:MI:SS.FF" NULLif SMS_REPORT_TIME ='NULL',
SMS_CALLING_NUMBER NULLif SMS_CALLING_NUMBER ='NULL',
SMS_CALLED_NUMBER NULLif SMS_CALLED_NUMBER ='NULL',
SMS_MSISDN_NUMBER NULLif SMS_MSISDN_NUMBER ='NULL',
SMS_DIALLED_DIGITS NULLif SMS_DIALLED_DIGITS ='NULL',
SMS_ORIG_CALLING_NUM NULLif SMS_ORIG_CALLING_NUM ='NULL',
SMS_VMSC_ID_NUMBER NULLif SMS_VMSC_ID_NUMBER ='NULL',
SMS_IMSI NULLif SMS_IMSI ='NULL',
SMS_IMEI NULLif SMS_IMEI ='NULL',
SMS_TYPE NULLif SMS_TYPE ='NULL',
SMS_CALL_COUNT NULLif SMS_CALL_COUNT ='NULL',
SMS_LENGTH NULLif SMS_LENGTH ='NULL',
SMS_DX_CAUSE NULLif SMS_DX_CAUSE ='NULL',
SMS_CONCAT_INFO_REF_NUMBER NULLif SMS_CONCAT_INFO_REF_NUMBER ='NULL',
SMS_LAC NULLif SMS_LAC ='NULL',
SMS_SAC NULLif SMS_SAC ='NULL',
SMS_CELL NULLif SMS_CELL ='NULL',
SMS_INCOMING_TS_TIME CHAR(26) date_format timestamp mask "YYYY-MM-DD HH24:MI:SS.FF" NULLif SMS_INCOMING_TS_TIME ='NULL',
SMS_DELIVERY_TS_TIME CHAR(26) date_format timestamp mask "YYYY-MM-DD HH24:MI:SS.FF" NULLif SMS_DELIVERY_TS_TIME ='NULL',
SMS_GMSC_ID_NUMBER NULLif SMS_GMSC_ID_NUMBER ='NULL',
SMS_FORWARDED_TO_SC_NUMBER NULLif SMS_FORWARDED_TO_SC_NUMBER ='NULL',
SMS_ROAMING_STATUS NULLif SMS_ROAMING_STATUS ='NULL',
SMS_CONCAT_INFO_APP NULLif SMS_CONCAT_INFO_APP ='NULL',
SMS_REF_NUMBER_MSC_ADDRESS NULLif SMS_REF_NUMBER_MSC_ADDRESS ='NULL',
SMS_BSC_ID NULLif SMS_BSC_ID ='NULL',
SMS_RNC_ID NULLif SMS_RNC_ID ='NULL',
SMS_SERV_CENTER_ADDR_NUM NULLif SMS_SERV_CENTER_ADDR_NUM ='NULL',
SMS_CONCAT_INFO_REF_NUM_TYP NULLif SMS_CONCAT_INFO_REF_NUM_TYP ='NULL',
SMS_CONCAT_INFO_MAX_NUM_CON NULLif SMS_CONCAT_INFO_MAX_NUM_CON ='NULL',
SMS_CONCAT_INFO_SEQ_NUM_CON NULLif SMS_CONCAT_INFO_SEQ_NUM_CON ='NULL',
SOURCE_ID NULLif SOURCE_ID ='NULL'
)
)
LOCATION
( "SMS_DIR":'traf_TDW_MSC_SMS_PIONEER4TNES_20091112161459_20091112161959_0.dat',
'traf_TDW_MSC_SMS_ROXAS1TNES_20091112161459_20091112161959_0.dat',
'traf_TDW_MSC_SMS_BACOOR2TNES_20091112161459_20091112161959_0.dat',
'traf_TDW_MSC_SMS_SANJUAN2TNES_20091112161459_20091112161959_0.dat',
…………………..
…………………….
…………………...
'traf_TDW_MSC_SMS_LAHUG1TNES_20091112162000_20091112162459_0.dat',
'traf_TDW_MSC_SMS_LAHUG2TNES_20091112162000_20091112162459_0.dat'
)
)
REJECT LIMIT UNLIMITED PARALLEL 32767;


From STAGE_EXT_SMS table we are inserting in to RAW_SMS


CREATE TABLE "RAWDATA"."RAW_SMS"
( "SMS_REPORT_TIME" DATE,
"SMS_CALLING_NUMBER" VARCHAR2(22 CHAR),
"SMS_CALLED_NUMBER" VARCHAR2(22 CHAR),
"SMS_MSISDN_NUMBER" VARCHAR2(18 CHAR),
"SMS_DIALLED_DIGITS" VARCHAR2(22 CHAR),
"SMS_ORIG_CALLING_NUM" VARCHAR2(22 CHAR),
"SMS_VMSC_ID_NUMBER" VARCHAR2(25 CHAR),
"SMS_IMSI" NUMBER(16,0),
"SMS_IMEI" VARCHAR2(16 CHAR),
"SMS_TYPE" NUMBER(3,0),
"SMS_CALL_COUNT" NUMBER(10,0),
"SMS_LENGTH" NUMBER(10,0),
"SMS_DX_CAUSE" NUMBER(10,0),
"SMS_CONCAT_INFO_REF_NUMBER" NUMBER(10,0),
"SMS_LAC" NUMBER(5,0),
"SMS_SAC" NUMBER(5,0),
"SMS_CELL" NUMBER(5,0),
"SMS_INCOMING_TS_TIME" DATE,
"SMS_DELIVERY_TS_TIME" DATE,
"SMS_GMSC_ID_NUMBER" VARCHAR2(18 CHAR),
"SMS_FORWARDED_TO_SC_NUMBER" NUMBER(10,0),
"SMS_ROAMING_STATUS" NUMBER(10,0),
"SMS_CONCAT_INFO_APP" NUMBER(10,0),
"SMS_REF_NUMBER_MSC_ADDRESS" VARCHAR2(18 CHAR),
"SMS_BSC_ID" NUMBER(5,0),
"SMS_RNC_ID" NUMBER(5,0),
"SMS_SERV_CENTER_ADDR_NUM" VARCHAR2(18 CHAR),
"SMS_CONCAT_INFO_REF_NUM_TYP" NUMBER(10,0),
"SMS_CONCAT_INFO_MAX_NUM_CON" NUMBER(10,0),
"SMS_CONCAT_INFO_SEQ_NUM_CON" NUMBER(10,0),
"SOURCE_ID" NUMBER(10,0)
) PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255 COMPRESS NOLOGGING
STORAGE(
BUFFER_POOL DEFAULT)
TABLESPACE "RAW_SMS_TS"
PARTITION BY RANGE ("SMS_REPORT_TIME")
(PARTITION "P_2009110200" VALUES LESS THAN (TO_DATE(' 2009-11-02 01:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOLOGGING
STORAGE(INITIAL 10485760 NEXT 10485760 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "RAW_SMS_TS" COMPRESS ,
……….
……….
…………

PARTITION "P_9999999999" VALUES LESS THAN (MAXVALUE)
PCTFREE 0 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOLOGGING
STORAGE(INITIAL 10485760 NEXT 10485760 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "RAW_SMS_TS" COMPRESS )
PARALLEL ;

and here is the INSERT statement we are using; this is triggered from a tool, it is not in PL/SQL:

INSERT /*+ APPEND */ INTO "RAWDATA"."RAW_SMS" ( "SMS_REPORT_TIME" , "SMS_CALLING_NUMBER" , "SMS_CALLED_NUMBER" , "SMS_MSISDN_NUMBER" , "SMS_DIALLED_DIGITS" , "SMS_ORIG_CALLING_NUM" , "SMS_VMSC_ID_NUMBER" , "SMS_IMSI" , "SMS_IMEI" , "SMS_TYPE" , "SMS_CALL_COUNT" , "SMS_LENGTH" , "SMS_DX_CAUSE" , "SMS_CONCAT_INFO_REF_NUMBER" , "SMS_LAC" , "SMS_SAC" , "SMS_CELL" , "SMS_INCOMING_TS_TIME" , "SMS_DELIVERY_TS_TIME" , "SMS_GMSC_ID_NUMBER" , "SMS_FORWARDED_TO_SC_NUMBER" , "SMS_ROAMING_STATUS" , "SMS_CONCAT_INFO_APP" , "SMS_REF_NUMBER_MSC_ADDRESS" , "SMS_BSC_ID" , "SMS_RNC_ID" , "SMS_SERV_CENTER_ADDR_NUM" , "SMS_CONCAT_INFO_REF_NUM_TYP" , "SMS_CONCAT_INFO_MAX_NUM_CON" , "SMS_CONCAT_INFO_SEQ_NUM_CON" , "SOURCE_ID" )
SELECT DISTINCT "STAGE_EXT_SMS_V"."SMS_REPORT_TIME" SMS_REPORT_TIME , "STAGE_EXT_SMS_V"."SMS_CALLING_NUMBER" SMS_CALLING_NUMBER , "STAGE_EXT_SMS_V"."SMS_CALLED_NUMBER" SMS_CALLED_NUMBER , "STAGE_EXT_SMS_V"."SMS_MSISDN_NUMBER" SMS_MSISDN_NUMBER , "STAGE_EXT_SMS_V"."SMS_DIALLED_DIGITS" SMS_DIALLED_DIGITS , "STAGE_EXT_SMS_V"."SMS_ORIG_CALLING_NUM" SMS_ORIG_CALLING_NUM , "STAGE_EXT_SMS_V"."SMS_VMSC_ID_NUMBER" SMS_VMSC_ID_NUMBER , "STAGE_EXT_SMS_V"."SMS_IMSI" SMS_IMSI , "STAGE_EXT_SMS_V"."SMS_IMEI" SMS_IMEI , "STAGE_EXT_SMS_V"."SMS_TYPE" SMS_TYPE , "STAGE_EXT_SMS_V"."SMS_CALL_COUNT" SMS_CALL_COUNT , "STAGE_EXT_SMS_V"."SMS_LENGTH" SMS_LENGTH , "STAGE_EXT_SMS_V"."SMS_DX_CAUSE" SMS_DX_CAUSE , "STAGE_EXT_SMS_V"."SMS_CONCAT_INFO_REF_NUMBER" SMS_CONCAT_INFO_REF_NUMBER , "STAGE_EXT_SMS_V"."SMS_LAC" SMS_LAC , "STAGE_EXT_SMS_V"."SMS_SAC" SMS_SAC , "STAGE_EXT_SMS_V"."SMS_CELL" SMS_CELL , "STAGE_EXT_SMS_V"."SMS_INCOMING_TS_TIME" SMS_INCOMING_TS_TIME , "STAGE_EXT_SMS_V"."SMS_DELIVERY_TS_TIME" SMS_DELIVERY_TS_TIME , "STAGE_EXT_SMS_V"."SMS_GMSC_ID_NUMBER" SMS_GMSC_ID_NUMBER , "STAGE_EXT_SMS_V"."SMS_FORWARDED_TO_SC_NUMBER" SMS_FORWARDED_TO_SC_NUMBER , "STAGE_EXT_SMS_V"."SMS_ROAMING_STATUS" SMS_ROAMING_STATUS , "STAGE_EXT_SMS_V"."SMS_CONCAT_INFO_APP" SMS_CONCAT_INFO_APP , "STAGE_EXT_SMS_V"."SMS_REF_NUMBER_MSC_ADDRESS" SMS_REF_NUMBER_MSC_ADDRESS , "STAGE_EXT_SMS_V"."SMS_BSC_ID" SMS_BSC_ID , "STAGE_EXT_SMS_V"."SMS_RNC_ID" SMS_RNC_ID , "STAGE_EXT_SMS_V"."SMS_SERV_CENTER_ADDR_NUM" SMS_SERV_CENTER_ADDR_NUM , "STAGE_EXT_SMS_V"."SMS_CONCAT_INFO_REF_NUM_TYP" SMS_CONCAT_INFO_REF_NUM_TYP , "STAGE_EXT_SMS_V"."SMS_CONCAT_INFO_MAX_NUM_CON" SMS_CONCAT_INFO_MAX_NUM_CON , "STAGE_EXT_SMS_V"."SMS_CONCAT_INFO_SEQ_NUM_CON" SMS_CONCAT_INFO_SEQ_NUM_CON , "STAGE_EXT_SMS_V"."SOURCE_ID" SOURCE_ID
FROM "STAGE"."STAGE_EXT_SMS_V" "STAGE_EXT_SMS_V"

Tom Kyte
November 15, 2009 - 1:38 pm UTC

... PARALLEL 32767; ....

a little extreme no?


the wait event you report is typically an idle wait event, eg:

An example: We do a select from a large table in parallel.
select /*+ parallel(sales, 10) */ * from sales
Then you see a lot of waits for "PX Deq Credit: send blkd" in the slave traces. All slaves wait for the QC to get back the credit bit. The QC cannot dequeue the rows from the slaves fast enough, because there are too many slaves sending rows back to the QC. In this case it is normal.



You are doing a large parallel query, feeding a serial insert. The insert is not consuming rows as fast as the parallel query execution servers are producing them.

Look a little "higher": are you way overloading the machine (do you really need parallel query for this - probably *not* unless you also do a parallel INSERT)? Are you hitting IO limits? Are you waiting for log files to change? Are you maintaining tons of indexes?

External table

Ramki, November 16, 2009 - 2:34 am UTC

Hi Tom,

PARALLEL 32767; ....
I got this big number by doing:
ALTER TABLE STAGE.STAGE_EXT_SMS PARALLEL;

Both the STAGE.STAGE_EXT_SMS and RAWDATA.RAW_SMS tables are created with the PARALLEL clause in their definitions, so I think the tables are enabled for parallel writes.
Or should I do something more to enable parallel writes?

Now I have changed the degree of parallelism on both tables:
ALTER TABLE STAGE.STAGE_EXT_SMS PARALLEL (DEGREE 16);
ALTER TABLE rawdata.raw_sms PARALLEL (DEGREE 32);
After that the PX Deq Credit: send blkd wait came down 50%, but the time taken to load is the same.

>>hitting IO limits?
-- there is NO I/O wait happening; should I check for something else?
>>Are you waiting for log files to change?
-- NO log file switch/sync waits
>>Are you maintaining tons of indexes?
-- only one index, on SMS_REPORT_TIME

We have created a view over the external table, and we are doing a DISTINCT select from the external table:
SELECT DISTINCT "STAGE_EXT_SMS_V...

Regards
Ramki
Tom Kyte
November 23, 2009 - 9:06 am UTC

... Or should I do something more to enable parallel writes? ...

it is capable of being a target of a parallel direct path load - with or without being 'parallel', however, you need to make sure you have enabled parallel dml in your session.

... After that the PX Deq Credit: send blkd wait came down 50%, but the time taken to load is the same. ...

did you read what I said about that wait, how it is not really a wait?? It is because the PQ servers produced data faster than the single consumer could process it. All you likely did was restrict the number of PQ servers we used (to about 50% of what we used before... for a 50% drop in waiting for the single coordinator to take it...).


If you wanted to do parallel dml, you need to know how to enable it - have you done that?

http://www.oracle.com/pls/db102/search?remark=quick_search&word=parallel+dml&tab_id=&format=ranked
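
For reference, a minimal sketch of what enabling it looks like (the degree of 16 and the hints are illustrative only):

alter session enable parallel dml;

insert /*+ append parallel(r, 16) */ into rawdata.raw_sms r
select /*+ parallel(s, 16) */ *
  from stage.stage_ext_sms s;

commit;

Without the ALTER SESSION, the insert itself stays serial even if the query part runs in parallel - exactly the producer/consumer imbalance described above.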



devang, November 23, 2009 - 5:45 am UTC

Hi Tom,
I need some help from you. I need to insert data from a flat file (having H and D type records) using sqlldr. There is no logical link between the H and D type of records. When I insert into the staging table, I need to create a link between the H and D type records. Data volume is around 700k records. There are around 4000 C type of records.

Rgds,
Devang

Tom Kyte
November 23, 2009 - 4:21 pm UTC

having H and D type records. hmmm, I wonder what the heck H and D type records are.


I have to laugh out loud at this one. The level of vagueness and lack of useful information goes beyond the normal levels I see.

"there is no logical link between the H and D type of records"

"I need to create a link between the H and D type records"


Ok, I'll tell you what - go ahead and create a link.


What else could I say? Honestly? Please read this and see what you wrote and ask yourself "how could anyone say anything about this"

devang, November 24, 2009 - 1:22 am UTC

Sorry for the vagueness of my question. The flat file contains two record types: H, denoting a Header record, and D, denoting a Detail record. The Header record contains the Header ID, but the Detail records do not contain that ID. While inserting into the staging table, I want sqlldr to remember the Header ID and use it while inserting the Detail records. The file is of the following format:

HFX1BATCH   JPN000015222052220FH6005              20081104000000N
DDR7902000004USD20081031000000022550000 00000052220FXG81783061-HITACHI ZOSEN CODN00000000000000
DDR7902000004JPY20081031000000022162140 00000052220FXG81783061-HITACHI ZOSEN CODN00000000000000
DDR5901101005JPY20081031000000022162140 00000052220FXG81783061-HITACHI ZOSEN CODN00000000000000
HFX1BATCH   JPN000015246052220FH6005              20081104000000N
DCR7251209008USD20081031000000080000000 00000052220FXGCLSBUS3381936760JPMCB NA LO00000000000000
DCR7252209007JPY20081031000000077480000 00000052220FXGCLSBUS3381936760JPMCB NA LO00000000000000
DDR7902000004JPY20081031000000077232560 00000052220FXGCLSBUS3381936761JPMCB NA LO00000000000000

The sample above contains two Header records, with IDs:
JPN000015222052220FH6005
JPN000015246052220FH6005

Hope this clarifies my question. Any help would be greatly appreciated. Thanks.

Tom Kyte
November 24, 2009 - 11:12 am UTC

sqlldr isn't the right tool at all.

external tables and a multitable insert surely are.

ops$tkyte%ORA11GR2> create or replace directory data_dir as '/home/tkyte'
  2  /

Directory created.

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> create table et
  2  (flag varchar2(1),
  3   c1   varchar2(10),
  4   c2   varchar2(15),
  5   /* ..... as many as you need ... */
  6   cN   varchar2(5),
  7   r    number
  8  )
  9  ORGANIZATION EXTERNAL
 10  ( type oracle_loader
 11    default directory data_dir
 12    access parameters
 13    ( records delimited by newline fields missing field values are null
 14      (flag position(1:1),
 15       c1   position(2:10),
 16       c2   position(11:25),
 17       cn   position(26:30),
 18       r    RECNUM)
 19    )
 20    location ('test.dat')
 21  )
 22  /

Table created.

ops$tkyte%ORA11GR2> !cat test.dat
H123      header data1   abc
Ddetail 1
Ddetail 2
H456      header data2   def
Ddetail 3
Ddetail 4

ops$tkyte%ORA11GR2>
ops$tkyte%ORA11GR2> select flag,
  2         last_value( case when flag = 'H' then c1 end ignore nulls ) over (order by r) id,
  3             case when flag='H' then c2 end header_data1,
  4             case when flag='H' then cN end header_dataN,
  5             case when flag='D' then c1 end detail_data1,
  6             case when flag='D' then c2 end detail_data2,
  7             case when flag='D' then cN end detail_dataN
  8    from et
  9  /

F ID         HEADER_DATA1    HEADE DETAIL_DAT DETAIL_DATA2    DETAI
- ---------- --------------- ----- ---------- --------------- -----
H 123        header data1    abc
D 123                              detail 1
D 123                              detail 2
H 456        header data2    def
D 456                              detail 3
D 456                              detail 4

6 rows selected.



then just use a multi-table insert to put the H records into one table and the D records into another.
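
For example, a sketch of that multi-table insert (the target tables STAGE_HEADER and STAGE_DETAIL and their columns are invented here - substitute your own):

insert first
  when flag = 'H' then
    into stage_header ( header_id, header_data1, header_dataN )
    values ( id, header_data1, header_dataN )
  when flag = 'D' then
    into stage_detail ( header_id, detail_data1, detail_data2 )
    values ( id, detail_data1, detail_data2 )
select flag,
       last_value( case when flag = 'H' then c1 end ignore nulls ) over (order by r) id,
       case when flag = 'H' then c2 end header_data1,
       case when flag = 'H' then cN end header_dataN,
       case when flag = 'D' then c1 end detail_data1,
       case when flag = 'D' then c2 end detail_data2
  from et
/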

devang, November 25, 2009 - 1:01 am UTC

Thank you so much, Tom, for the solution.
Just one additional question. If you look at the format of the file:

HFX1BATCH JPN000015222052220FH6005 20081104000000N
DDR7902000004USD20081031000000022550000 00000052220FXG81783061-HITACHI ZOSEN CODN00000000000000
DDR7902000004JPY20081031000000022162140 00000052220FXG81783061-HITACHI ZOSEN CODN00000000000000
DDR5901101005JPY20081031000000022162140 00000052220FXG81783061-HITACHI ZOSEN CODN00000000000000
HFX1BATCH JPN000015246052220FH6005 20081104000000N
DCR7251209008USD20081031000000080000000 00000052220FXGCLSBUS3381936760JPMCB NA LO00000000000000
DCR7252209007JPY20081031000000077480000 00000052220FXGCLSBUS3381936760JPMCB NA LO00000000000000
DDR7902000004JPY20081031000000077232560 00000052220FXGCLSBUS3381936761JPMCB NA LO00000000000000

the Header records have a different position specification than the Detail records. I want to create a single external table which treats these record types differently. I want to do something similar to:

load when (flag = "H")
fields (flag position(1:1),
c1 position(2:12),
c2 position(13:36))

load when (flag = "D")
fields (flag position(1:1),
c1 position(4:13),
c2 position(14:16))

Is it possible to use multiple "load when" conditions in the table creation script?
Tom Kyte
November 25, 2009 - 12:22 pm UTC

do this then


flag position(1:1)
data position(2:NNNNN) where NNNNN is your line length


and then use substr in the select list to 'parse it out'; hide that in a view if you want


create or replace view et_v
as
select flag, 
       case when flag = 'H' then substr(data,2,10) end header_c1,
       case when flag = 'H' then substr(data,12,5) end header_c2,
       ...
       case when flag = 'H' then substr(data,N,M) end header_cN,
       case when flag = 'D' then ....
       ....
  from et
/


and use that and the logic above to continue.


and realize that this idea is just a simple extension of the idea above - nothing fancy, just applying the same thought pattern to this. Basically, use the external table to get the data into SQL, then use the POWER of SQL to process the data.
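
For the positions given in the question, the whole thing might look like this (a sketch - the 200-byte line length, directory name and column names are illustrative):

create table et
( flag varchar2(1),
  data varchar2(200)
)
ORGANIZATION EXTERNAL
( type oracle_loader
  default directory data_dir
  access parameters
  ( records delimited by newline
    fields missing field values are null
    ( flag position(1:1),
      data position(2:200) )
  )
  location ('test.dat')
)
/

create or replace view et_v
as
select flag,
       case when flag = 'H' then substr(data, 1,11) end header_c1,  -- record positions 2:12
       case when flag = 'H' then substr(data,12,24) end header_c2,  -- record positions 13:36
       case when flag = 'D' then substr(data, 3,10) end detail_c1,  -- record positions 4:13
       case when flag = 'D' then substr(data,13, 3) end detail_c2   -- record positions 14:16
  from et
/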





External Table + CLOB error

MK, August 02, 2010 - 6:43 am UTC

Hi,
I am trying to load XML data into a CLOB field in an oracle table using the external table command as follows:

CREATE TABLE ext_content_track_license
 (
CONTENT_DELIVERY_ID   NUMBER(10)    ,   
TRACK_ID              NUMBER(10)    ,   
LICENSE               CLOB          
)
 ORGANIZATION external
 (
   TYPE oracle_loader
   DEFAULT DIRECTORY scot_dmp 
   ACCESS PARAMETERS 
   ( RECORDS DELIMITED BY NEWLINE CHARACTERSET AL32UTF8 
     BADFILE 'xt_content.bad' 
     LOGFILE 'xt_content.log' 
     FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LDRTRIM 
     MISSING FIELD VALUES ARE NULL 
     REJECT ROWS WITH ALL NULL FIELDS 
     (
"CONTENT_DELIVERY_ID"             CHAR(300)
  TERMINATED BY "," ,
"TRACK_ID"                        CHAR(300)
  TERMINATED BY "," ,
"CLOB_FILENAME"                   CHAR(300)
  TERMINATED BY "," 
     )  
     COLUMN TRANSFORMS ( license FROM LOBFILE(CLOB_FILENAME) FROM (scot_dmp) CLOB)
   )
   location ('license.txt')
 )REJECT LIMIT UNLIMITED
  ;

License.txt has 320,000 entries in it! 
---------------------------------------------------
10,3977066,"00000000_IP9100-NPIA990001_00-c230ac657d504637a22804f9a88a0533_license.xml"
11,3977068,"00000001_IP9100-NPIA990001_00-25e78abc72d042e88f5aea5e0ccb8696_license.xml"
10,3977068,"00000000_IP9100-NPIA990001_00-25e78abc72d042e88f5aea5e0ccb8696_license.xml"
10,3977079,"00000000_IP9100-NPIA990001_00-74f4d36b0fc04654a2abe290756291bb_license.xml"


The error I get

SQL> drop table ext_content_track_license;
 
Table dropped
 
SQL> 
SQL> CREATE TABLE ext_content_track_license
  2   (
  3  CONTENT_DELIVERY_ID   NUMBER(10)    ,
  4  TRACK_ID              NUMBER(10)    ,
  5  LICENSE               CLOB
  6  )
  7   ORGANIZATION external
  8   (
  9     TYPE oracle_loader
 10     DEFAULT DIRECTORY scot_dmp
 11     ACCESS PARAMETERS
 12     ( RECORDS DELIMITED BY NEWLINE CHARACTERSET AL32UTF8
 13       BADFILE 'xt_content.bad'
 14       LOGFILE 'xt_content.log'
 15       FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LDRTRIM
 16       MISSING FIELD VALUES ARE NULL
 17       REJECT ROWS WITH ALL NULL FIELDS
 18       (
 19  "CONTENT_DELIVERY_ID"             CHAR(300)
 20    TERMINATED BY "," ,
 21  "TRACK_ID"                        CHAR(300)
 22    TERMINATED BY "," ,
 23  "CLOB_FILENAME"                   CHAR(300)
 24    TERMINATED BY ","
 25       )
 26       COLUMN TRANSFORMS ( license FROM LOBFILE(CLOB_FILENAME) FROM (scot_dmp) CLOB)
 27     )
 28     location ('license.txt')
 29   )REJECT LIMIT UNLIMITED
 30    ;
 
Table created
 
SQL> select count(*) from ext_content_track_license;
 
select count(*) from ext_content_track_license
 
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04050: error while attempting to allocate 65536 bytes of memory
 
I have no idea why this happens.
Some additional info:

SQL> show sga

Total System Global Area 4.0620E+10 bytes
Fixed Size                  2216024 bytes
Variable Size            2.2549E+10 bytes
Database Buffers         1.7985E+10 bytes
Redo Buffers               84054016 bytes

SQL> show parameter SGA

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     FALSE
sga_max_size                         big integer 38G
sga_target                           big integer 0


MK, August 02, 2010 - 8:15 am UTC

SQL> select count(content_delivery_id) from ext_content_track_license where rownum < 70000;
  
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-29400: data cartridge error
KUP-04050: error while attempting to allocate 65536 bytes of memory
 
SQL> select count(content_delivery_id) from ext_content_track_license where rownum < 50000;
 
COUNT(CONTENT_DELIVERY_ID)
--------------------------
                     49999
 
--> It suggests that there is some sort of memory limit. I increased the shared pool to 3G but the error persists. Hope this added information is useful in helping me troubleshoot. I also checked the directory grants and they are correct. So I am able to load a smaller subset of rows with CLOBs, but unable to load, say, millions of rows with CLOBs using an external table.


SQL> show sga

Total System Global Area 4.0620E+10 bytes
Fixed Size                  2216024 bytes
Variable Size            2.2549E+10 bytes
Database Buffers         1.7985E+10 bytes
Redo Buffers               84054016 bytes


[oracle@ora01prodscot mesg]$ free
             total       used       free     shared    buffers     cached
Mem:      99000020   64275432   34724588          0     481624   59926456
-/+ buffers/cache:    3867352   95132668
Swap:     20775536          0   20775536


SQL>  show parameter shared_pool_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
shared_pool_size                     big integer 3G

Tom Kyte
August 02, 2010 - 9:59 am UTC

what is the size of your clob (max size), and what is the amount of available free memory on the machine (forget about the SGA for a minute)?

External Table Error

MK, August 02, 2010 - 1:41 pm UTC

Hi,

The previous post has the output of the Linux command "free", which shows the available memory on the box.
The maximum size of the XML files I want to store in the CLOB is about 40KB.

The error message's resolution on most sites suggested looking into the permissions of the directory, or just re-connecting from a fresh session and retrying the command.

Cheers,
MK
Tom Kyte
August 02, 2010 - 2:46 pm UTC

if this is happening after running for a while, then it sounds suspiciously like a leak - when running - does the size of your dedicated server grow and grow and grow?

External Table Error

MK, August 02, 2010 - 3:43 pm UTC

Sorry I am not on the machine at the moment, but will check this first thing when I get in tomorrow morning.
Is there a command to check for increases in the size of the dedicated server?
Also, I ran this select on the external table directly from sqlplus on the Oracle database server and it gave me the same result there too.
I also tried playing with the READSIZE option and it didn't make any difference either. I was expecting some sort of array operation whereby it clears up the memory at some load frequency. 65536 seems like a familiar number to me... is this the max a VARCHAR2 can hold?

I am just wondering if the memory leak is an Oracle 11g R2 bug or something wrong with the way it has been configured?
Is there another way to load in millions of flat-file rows with BLOB/CLOB data that I should be looking into? An example would be really helpful in this case.
I was always of the opinion that external tables (or Data Pump) would be an extremely fast way of loading CLOB/BLOB data, but I haven't done many CLOBs in the past; it's mostly been alpha-numeric data, timestamps etc.

I am open to any other suggestions you might have to get around doing this; using external tables was just an idea so that I wouldn't have to create a temporary table to load into. Unless of course there is a workaround available for this external table memory problem!

Thanks in advance,
MK
Tom Kyte
August 02, 2010 - 4:21 pm UTC

you can 'top' and see the size of a process

you can query v$sesstat and v$statname together to see 'session pga memory' from another session.
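
For example, from a second session (a sketch - bind :sid to the SID of the loading session; 'session pga memory max' shows the high-water mark too):

select sn.name, st.value
  from v$sesstat st, v$statname sn
 where st.statistic# = sn.statistic#
   and st.sid = :sid
   and sn.name in ( 'session pga memory', 'session pga memory max' );

Run it a few times during the load; a value that grows without bound points at a leak.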



65536 - also known as "64k"


... something wrong with the way it has been configured? ....

seems unlikely.


I am open to any other suggestions you might have to get around doing this;
using external tables was just an idea so that I wouldn't have to create a
temporary table to load into. Unless of course there is a workaround available
for this external table memory problem!


we have to diagnose the issue before suggesting workarounds

External Table Error

MK, August 02, 2010 - 3:53 pm UTC

Just to add, I am working on a POC to demo loading about 20-30 million such XML files into CLOB fields in an Oracle DB using Data Pump or external tables, in a matter of minutes if done correctly. Currently the people at work use SQL Server SSIS tools to load data into an Oracle 11g instance, and it takes about 4 hours to load close to 1 million rows with CLOB data! I am very certain that this should be done directly from the file system and could be loaded in a matter of minutes (if not seconds)!
So if you have any other alternatives that might help me get around this problem, or ways to dig further into the memory leak problem, then please let me know, because I am under the pump :)

Cheers,
MK
Tom Kyte
August 02, 2010 - 4:28 pm UTC

did you try sqlldr as a temporary workaround?
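
For what it is worth, SQL*Loader can load one LOB per named file using a FILLER field and the LOBFILE clause. A control file for the data above might look roughly like this (a sketch - the target heap table CONTENT_TRACK_LICENSE is invented here, and paths would need adjusting):

LOAD DATA
INFILE 'license.txt'
APPEND INTO TABLE content_track_license
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
( content_delivery_id,
  track_id,
  clob_filename  FILLER CHAR(300),
  license        LOBFILE(clob_filename) TERMINATED BY EOF
)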

external Table Error

MK, August 02, 2010 - 5:35 pm UTC

No, I haven't used SQLLDR or DATA PUMP to load CLOBs before, as I have always found that the external table syntax worked fine for me. So maybe I should look into SQLLDR as well, but then are there some parameters that I should be paying special attention to? Like say a "Commit Frequency" so I don't run into the same problems with memory again?
I will go ahead and give you more information with the top command and the v$ tables to see if the dedicated server process is increasing in size.

Cheers,
MK
Tom Kyte
August 03, 2010 - 8:14 am UTC

... Like say a "Commit Frequency" so I don't run into
the same problems with memory again?
...

I don't think you will - it is doubtful you would see the same issue with a completely different tool

External Table - Bad file rejection

Saranya, April 08, 2011 - 4:29 am UTC

Hi Tom,

We are using an 11g external table to load data. I want my external table to reject records as bad when the mapping file has fewer or more columns than the ones defined in the external table.

Here is my sample external table creation script:

create table stage.STAGE_EXT_SAMPLE(msisdn varchar(20),
imsi varchar(20),
imei varchar(20),
circle number(38,0),
apn varchar(4000),
firstName varchar(100)
)ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY DIR_SAMPLE
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
READSIZE 20971520
CHARACTERSET AL32UTF8
STRING SIZES ARE IN CHARACTERS
BADFILE 'STAGE_EXT_SAMPLE.bad'
DISCARDFILE 'STAGE_EXT_SAMPLE.dis'
LOGFILE 'STAGE_EXT_SAMPLE.log'
LOAD WHEN ( msisdn !='NULL' )
FIELDS
TERMINATED BY ',' OPTIONALLY ENCLOSED BY "'" AND "'"
LRTRIM
MISSING FIELD VALUES ARE NULL
(
msisdn NULLIF msisdn='NULL',
imsi NULLIF imsi='NULL',
imei NULLIF imei='NULL',
circle NULLIF circle='NULL',
apn NULLIF apn='NULL',
firstName NULLIF firstName='NULL'
)
)
LOCATION ('empty.ext')
)
REJECT LIMIT UNLIMITED
PARALLEL 8
NOMONITORING
;

My sample file looks like this:

12345,2345,11213,1,google,
14567,23498,23232,2,yahoo,sat
987654,23789,32324,3,fdfs,qwqw,123,3435

Can you please give a hint as to how to reject the 1st and 3rd records and load only the 2nd record?
The 1st record has one column fewer than my table structure and the 3rd record has two columns more than my table structure, hence I want them rejected as bad records.
Tom Kyte
April 12, 2011 - 3:31 pm UTC


what you probably would really have to do is this


create an external table that has more columns than your table (one more column at least).

use a where clause to load the records of interest - eg, something like:
select msisdn, .... firstName
  from ET
 where msisdn is not null
   and imsi is not null
   and imei is not null
   and circle is not null
   and apn is not null
   and firstName is not null
   and EXTRA_COLUMN_AT_THE_END IS NULL;
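
Putting that together, a sketch (reusing the sample above with one extra trailing column - all names are illustrative):

create table stage.stage_ext_sample2
( msisdn    varchar2(20),
  imsi      varchar2(20),
  imei      varchar2(20),
  circle    number(38,0),
  apn       varchar2(4000),
  firstname varchar2(100),
  extra_column_at_the_end varchar2(100)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY DIR_SAMPLE
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('empty.ext')
)
REJECT LIMIT UNLIMITED;

Your sample record 1 then arrives with FIRSTNAME null (too few fields) and record 3 with EXTRA_COLUMN_AT_THE_END populated (too many fields), so the where clause above loads only record 2.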



reading 5000 delimited values

Rajeshwaran, Jeyabal, November 10, 2011 - 4:26 pm UTC

data.txt
variable b varchar2(4000);
exec :b := 'a,b,c,d,e,f';

Script.txt
@data.txt
begin
 for x in (select * from table(parse_inlist(:b)))
 loop
 begin
  insert into t values (x.column_value); 
 exception
  when dup_val_on_index then
   null;
  when others then 
   raise_application_error (-20458,sqlerrm);
 end;
 end loop; 
end;
/

Tom, this works fine for a small volume of data in the "data.txt" file. But when we place about 3000 to 5000 values in "data.txt" we get an error message. Can you help us with this? I know we can do this with sql*loader or external tables, but is there any way it can be done using this approach?
Tom Kyte
November 10, 2011 - 6:00 pm UTC

you'd have to use a clob, not a varchar2.

varchar2's bound in SQL are limited to 4000 characters.

reading 5000 delimited values

Rajeshwaran, Jeyabal, November 10, 2011 - 9:14 pm UTC

Tom, as you suggested I changed the VARCHAR2 to a CLOB, and I am getting the error below.

rajesh@ORA11GR2>
rajesh@ORA11GR2> @c:\a.txt
Error 45 initializing SQL*Plus
Internal error

C:\Documents and Settings\rjeyaba>

Tom Kyte
November 11, 2011 - 8:10 am UTC

well, good?

Not knowing what the heck is in a.txt makes this a mystery.

We'd need a real example, from start to finish, in its entirety. Including your version of parse_inlist, whatever that is/does.

Here is mine:

ops$tkyte%ORA11GR2> create or replace function str2tbl( p_str in clob, p_delim in varchar2 default ',' ) return sys.odciVarchar2List
  2  PIPELINED
  3  as
  4      l_str      long default p_str || p_delim;
  5      l_n        number;
  6  begin
  7      loop
  8          l_n := instr( l_str, p_delim );
  9          exit when (nvl(l_n,0) = 0);
 10          pipe row( ltrim(rtrim(substr(l_str,1,l_n-1))) );
 11          l_str := substr( l_str, l_n+1 );
 12      end loop;
 13      return;
 14  end;
 15  /

Function created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> variable b clob
ops$tkyte%ORA11GR2> begin
  2          :b := '0';
  3          for i in 1 .. 5000
  4          loop
  5                  :b := :b || ',' || to_char(i);
  6          end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> drop table t;

Table dropped.

ops$tkyte%ORA11GR2> create table t ( x varchar2(20) );

Table created.

ops$tkyte%ORA11GR2> begin
  2      for x in (select * from table(str2tbl(:b)))
  3      loop
  4      begin
  5          insert into t values (x.column_value);
  6      end;
  7      end loop;
  8  end;
  9  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> select count(*) from t;

  COUNT(*)
----------
      5001

ops$tkyte%ORA11GR2> 

reading 5000 delimited values

Rajeshwaran, Jeyabal, November 11, 2011 - 7:55 pm UTC

Tom:

Here is a simple script demonstrating the error.

rajesh@ORA10GR2> @C:\a.txt;
Error 45 initializing SQL*Plus
Internal error

C:\>
C:\>type a.txt
@@final.txt;
print a;
C:\>

final.txt
variable a clob;
exec :a :='-----|INVALID TEXT||X72|INVALID SUPER CODES|Y|LEE;<junk values>;<junk values>;*' ;

The maximum possible value length of the variable :a can be 323411.
Tom Kyte
November 15, 2011 - 7:30 am UTC

that isn't true. What is true is that the longest length of a character string literal is 4000 in SQL and 32765 in PLSQL.

ops$tkyte%ORA11GR2> begin
  2          :a := 'x';
  3          for i in 1 .. 32000
  4          loop
  5                  :a := :a || rpad( '*', 1000, '*' );
  6          end loop;
  7  end;
  8  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> exec dbms_output.put_line( length(:a) );
32000001

PL/SQL procedure successfully completed.


A reader, September 19, 2012 - 12:25 pm UTC

Tom,

I need to download about 100 CSV files using a URL and then load them into an Oracle database. I need to create tables (each file is different) and then load them. Please let me know if you have any suggestions. Should I be using a scripting language to do this?
Tom Kyte
September 20, 2012 - 5:49 am UTC


HTTP_URL ?
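
Presumably that refers to fetching the files from inside the database. One way is the built-in HTTPURITYPE (a sketch - the URL is a placeholder, and the database needs a network ACL that allows it to reach the host):

select httpuritype('http://host.example.com/file1.csv').getclob() from dual;

From there the CLOB can be parsed with SQL, or written out via UTL_FILE and loaded with an external table as shown throughout this thread.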

Duke Ganote, September 20, 2012 - 7:12 am UTC

external table with awkward format

A reader, April 28, 2013 - 9:23 pm UTC

I'm a bit stuck on how to process a file with an unusual format using an external table.

The format of the file is this;

stringA 24 value with 24 characters stringB 9 small val stringC 17 another small val


So it's a repeating sequence of;

1) some string value (no space in the string)
2) delimited by a space
3) a number which indicates how long the next field is
4) delimited by a space
5) a string (may contain spaces) with a length indicated by 3) above
6) delimited by a space and back to 1) again

Can you think of some way of processing this using SQL? Or do you think it's too complicated and I'll have to write a PL/SQL function to break it down?

I'm trying to avoid PL/SQL for performance reasons, as it's a fairly large file.
Tom Kyte
April 30, 2013 - 2:14 pm UTC

ops$tkyte%ORA11GR2> select a, b, c,
  2         to_number( substr( rest4, 1, instr( rest4, ' ' )-1 ) ) n2,
  3             substr( rest4, instr( rest4, ' ' ) + 1 ) rest5
  4    from (
  5  select a, b,
  6         substr( rest3, 1, instr( rest3, ' ' )-1) c,
  7             substr( rest3, instr( rest3, ' ' ) + 1 ) rest4
  8    from (
  9  select a,
 10         substr( rest2, 1, n1 ) b,
 11             substr( rest2, n1+2 ) rest3
 12    from (
 13  select a,
 14         to_number( substr( rest, 1, instr( rest, ' ' )-1 ) ) n1,
 15             substr( rest, instr( rest, ' ' ) + 1 ) rest2
 16    from (
 17  select substr( x, 1, instr( x, ' ' )-1 ) a,
 18         substr( x, instr( x, ' ') +1 ) rest
 19    from t
 20         )
 21         )
 22         )
 23         )
 24  /

A                              B                              C
------------------------------ ------------------------------ ------------------------------
        N2 REST5
---------- ------------------------------------------------------------------------------------
stringA                        value with 24 characters       stringB
         9 small val stringC 17 another small val


ops$tkyte%ORA11GR2> 



that is one brute force method, lather, rinse, repeat ;) add a few more substr/instr layers...
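
The same query rewritten with subquery factoring, plus one more layer added to show the pattern (a sketch - d is the second length-prefixed value and rest6 carries the remainder for further layers):

with step1 as (
  select substr( x, 1, instr( x, ' ' )-1 ) a,
         substr( x, instr( x, ' ' )+1 ) rest
    from t
), step2 as (
  select a,
         to_number( substr( rest, 1, instr( rest, ' ' )-1 ) ) n1,
         substr( rest, instr( rest, ' ' )+1 ) rest2
    from step1
), step3 as (
  select a,
         substr( rest2, 1, n1 ) b,
         substr( rest2, n1+2 ) rest3
    from step2
), step4 as (
  select a, b,
         substr( rest3, 1, instr( rest3, ' ' )-1 ) c,
         substr( rest3, instr( rest3, ' ' )+1 ) rest4
    from step3
), step5 as (
  select a, b, c,
         to_number( substr( rest4, 1, instr( rest4, ' ' )-1 ) ) n2,
         substr( rest4, instr( rest4, ' ' )+1 ) rest5
    from step4
)
select a, b, c,
       substr( rest5, 1, n2 ) d,
       substr( rest5, n2+2 ) rest6
  from step5;

Each additional "step" peels off one more name/length/value triple.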

pls answer

hello, October 06, 2014 - 9:27 am UTC

How do I load data using UTL_FILE, skipping the top 5 records and the bottom 5 records?
