Question and Answer

Connor McDonald

Thanks for the question, Suresh.

Asked: June 23, 2002 - 2:15 pm UTC

Last updated: November 28, 2023 - 4:00 am UTC

Version: 9.0.1

Viewed 100K+ times.

You Asked

Tom,
I was working with external tables, and Oracle raised an error when I tried to insert into, update, or delete from them, as shown below.

SQL> delete from student_ext;
delete from student_ext
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table


SQL> update student_ext set total = 50;
update student_ext set total = 50
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table


SQL> insert into student_ext values (6, 'NEWDATA', 'NEWDATA', 90);
insert into student_ext values (6, 'NEWDATA', 'NEWDATA', 90)
*
ERROR at line 1:
ORA-30657: operation not supported on external organized table

I understand why it is throwing those errors. My question is: do you have any "tricks" for inserting into, updating, or deleting from external tables?

Thanks,
Suresh

and Tom said...

Sure,

SQL> host vi filename


that'll do it. Seriously though, no -- there is no way to modify these files using SQL. There is no locking, no concurrency controls, no nothing. They are read only.
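For completeness: later releases (10g onward) added the ORACLE_DATAPUMP access driver, which can write the file once, at CREATE TABLE ... AS SELECT time, although the resulting external table is still read-only afterwards. A sketch, with a placeholder directory object and table name:

```sql
-- Hypothetical sketch (10g+): unload a query result via ORACLE_DATAPUMP.
-- The file is written once at creation time; subsequent insert/update/delete
-- still raise ORA-30657.
create table emp_unload
organization external
( type oracle_datapump
  default directory my_dir          -- assumed: an existing directory object
  location ('emp_unload.dmp')
)
as select * from emp;
```

The .dmp file it writes is a proprietary binary format, so this is an unload/transport mechanism, not a way to edit the data in place.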

Rating

  (93 ratings)


Comments

Thank you so much

Suresh, June 23, 2002 - 3:04 pm UTC

Tom,
I just wanted to confirm with you whether it could be done before deciding so. Thanks a lot for your help.

OK

Kumar, April 27, 2005 - 12:57 pm UTC

Hello Tom,

How can I find out which tables in my schema are external tables and which are index-organized?
Please do reply.
Bye!


Tom Kyte
April 27, 2005 - 1:09 pm UTC

USER_EXTERNAL_TABLES

USER_TABLES where iot_type = 'IOT'
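Those two dictionary lookups could also be combined into a single labelled listing, for example (a sketch; the view and column names are as documented):

```sql
-- External and index-organized tables in the current schema, in one pass
select table_name, 'EXTERNAL' as organization
  from user_external_tables
union all
select table_name, 'IOT'
  from user_tables
 where iot_type = 'IOT';
```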

Long Data Type in External Tables

Owais, June 16, 2005 - 1:35 am UTC

hi tom,
My question is: a table has several columns, one of which is a LONG. The data, including the LONG column, has been written out to an OS flat file.
How can we use an external table with a LONG datatype?

Tom Kyte
June 16, 2005 - 3:43 am UTC

you cannot

External tables

mohan, October 11, 2005 - 9:01 am UTC

Hi Tom,
When creating the external table, I was getting the error below.
create table ext_tab( empno number(4)
, ename varchar2(10)
, job varchar2(9)
, mgr number(4)
, hiredate date
, sal number(7,2)
, comm number(7,2)
, deptno number(2)
)
organization external
( type oracle_loader
default directory DIR
access parameters ( records delimited by newline
badfile 'ext_tab.bad'
discardfile 'ext_tab.dis'
logfile 'ext_tab.log'
optionally enclosed by '"'
fields terminated by ','
missing field values are null
( empno
, ename
, job
, mgr
, hiredate date mask "dd-MON-yyyy"
, sal
, comm
, deptno
)
)
location ('E2.csv')
)
The CSV file holds the following data:
7369,SMITH,CLERK,7902,17-DEC-80,800,,20
7499,ALLEN,SALESMAN,7698,20-FEB-81,1600,300,30
7521,WARD,SALESMAN,7698,22-FEB-81,1250,500,30
7566,JONES,MANAGER,7839,02-APR-81,2975,,20
7654,MARTIN,SALESMAN,7698,28-SEP-81,1250,1400,30
7698,BLAKE,MANAGER,7839,01-MAY-81,2850,,30
7782,CLARK,MANAGER,7839,09-JUN-81,2450,,10
7788,SCOTT,ANALYST,7566,19-APR-87,3000,,20
7839,KING,PRESIDENT,,17-NOV-81,5000,,10
7844,TURNER,SALESMAN,7698,08-SEP-81,1500,0,30
7876,ADAMS,CLERK,7788,23-MAY-87,1100,,20

ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing input commands
KUP-01005: syntax error: found "optionally": expecting one of: "badfile,
characterset, colon, data, delimited, discardfile, exit, fields, fixed, l
logfile, nodiscardfile, nobadfile, nologfile, processing, string, skip,
variable"
KUP-01007: at line 5 column 23
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1

Logfile
=======

LOG file opened at 10/11/05 20:49:51

Field Definitions for table ext_tab
Record format DELIMITED BY NEWLINE
Data in file has same endianness as the platform

Fields in Data Source:

EMPNO CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
ENAME CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
JOB CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
MGR CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
HIREDATE CHAR (11)
Date datatype DATE, date mask dd-MON-yyyy
Terminated by ","
Trim whitespace same as SQL Loader
SAL CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
COMM CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
DEPTNO CHAR (255)
Terminated by ","
Trim whitespace same as SQL Loader
error processing column EMPNO in row 1 for datafile D:\EXT\E2.csv
ORA-01722: invalid number

Thanks in advance
Mohan


Tom Kyte
October 11, 2005 - 3:32 pm UTC

it would look more like this:

BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'test.bad'
LOGFILE 'test.log_xt'
READSIZE 1048576
SKIP 20
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
REJECT ROWS WITH ALL NULL FIELDS
(


Mohan, October 12, 2005 - 7:29 am UTC

Hi Tom,

I was still getting an error even after changing it as you suggested:

it would look more like this:

    BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'test.bad'
    LOGFILE 'test.log_xt'
    READSIZE 1048576
    SKIP 20
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    (

SQL> create table ext_tab
    ( empno     number(4)
    , ename     varchar2(10)
    , job       varchar2(9)
    , mgr       number(4)
   , hiredate  date
    , sal       number(7,2)
    , comm      number(7,2)
    , deptno    number(2)
  )
   organization external
   ( type oracle_loader
     default directory DIR
     access parameters ( records delimited by newline
                      BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'test.bad'
                           logfile     'test.log_xt'
                          READSIZE 1048576
                          SKIP 20
                         FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
DRTRIM
      REJECT ROWS WITH ALL NULL FIELDS
                         ( empno
                         , ename
                         , job
                         , mgr
                         , hiredate    date mask "dd-MON-yyyy"
                         , sal
                         , comm
                        , deptno
                         )
                       )
     location ('E2.csv')
  )
  /

Table created.

SQL> SELECT * FROM ext_tab;
SELECT * FROM ext_tab
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing input commands
KUP-01005: syntax error: found "identifier": expecting one of: "badfile,
characterset, colon, data, delimited, discardfile, exit, fields, fixed, load,
logfile, nodiscardfile, nobadfile, nologfile, processing, string, skip,
variable"
KUP-01008: the bad identifier was: READSIZE
KUP-01007: at line 4 column 24
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1


Thanks
Mohan


Tom Kyte
October 12, 2005 - 7:43 am UTC

ops$tkyte@ORA10GR1> CREATE TABLE "SYS_SQLLDR_X_EXT_EMP"
  2  (
  3    "EMPNO" NUMBER(4),
  4    "ENAME" VARCHAR2(10),
  5    "JOB" VARCHAR2(9),
  6    "MGR" NUMBER(4),
  7    "HIREDATE" DATE,
  8    "SAL" NUMBER(7,2),
  9    "COMM" NUMBER(7,2),
 10    "DEPTNO" NUMBER(2)
 11  )
 12  ORGANIZATION external
 13  (
 14    TYPE oracle_loader
 15    DEFAULT DIRECTORY MY_DIR
 16    ACCESS PARAMETERS
 17    (
 18      RECORDS DELIMITED BY NEWLINE
 19      BADFILE 'emp.bad'
 20      LOGFILE 't.log_xt'
 21      READSIZE 1048576
 22      FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
 23      REJECT ROWS WITH ALL NULL FIELDS
 24      (
 25        "EMPNO" CHAR(255)
 26          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 27        "ENAME" CHAR(255)
 28          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 29        "JOB" CHAR(255)
 30          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 31        "MGR" CHAR(255)
 32          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 33        "HIREDATE" CHAR(255)
 34          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 35        "SAL" CHAR(255)
 36          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 37        "COMM" CHAR(255)
 38          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 39        "DEPTNO" CHAR(255)
 40          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
 41      )
 42    )
 43    location
 44    (
 45      'emp.dat'
 46    )
 47  )REJECT LIMIT UNLIMITED
 48  /
 
Table created.
 
ops$tkyte@ORA10GR1> select * from SYS_SQLLDR_X_EXT_EMP;
 
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM
---------- ---------- --------- ---------- --------- ---------- ----------
    DEPTNO
----------
      7369 SMITH      CLERK           7902 17-DEC-80        800
        20
 
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300
 

Partitioning external tables

Yelena, May 03, 2006 - 12:34 pm UTC

Tom,

It seems that it is possible to partition external tables. Can you provide more information on this and what kinds of partitions are allowed?

Thanks

Tom Kyte
May 03, 2006 - 1:28 pm UTC

er? really. An external table is just a mapping to a file in the operating system.

You can create an external table based on more than one file (partitioning? not really)

but there is no such thing as a partitioned external table.

Partition external tables

Yelena, May 03, 2006 - 2:13 pm UTC

Tom,

One of our developers, who is no longer with the company, created the following external table, which is partitioned (at least Oracle thinks so), and queries run much faster against it.

Can you please explain why Oracle allowed it (we are using version 9.0.5.0) and what the implications are of having this table partitioned. Also, where can I get more information on partitioning external tables (if that is what it is really doing)?

create table STG.FACILITY_HIST_ROLLUP_TRANSIT (
FA_KEY_ID VARCHAR2(29) ,
TM_WORK_OF_DT NUMBER(38) ,
FA_REC_TYP_CD NUMBER(38) ,
FA_PROC_CYC_NO NUMBER(38) ,
GFRN VARCHAR2(20) ,
GFCID NUMBER(10) ,
RO_EFF_YYMM_DT NUMBER ,
FA_CR_AM NUMBER(18) ,
FA_LOC_APPRV_AM NUMBER(18) ,
FA_FACIL_UNUSE_COMIT_AM NUMBER(18) ,
FA_FACIL_UNUSE_UNCOM_AM NUMBER(18) ,
FA_FACIL_EXCESS_AM NUMBER(18) ,
FA_FACIL_OSUC_AM NUMBER(18) ,
FA_FACIL_DIR_RISK_AM NUMBER(18) ,
FA_FACIL_CONTG_RISK_AM NUMBER(18) ,
FA_FACIL_PRE_SETL_RISK_AM NUMBER(18) ,
FA_FACIL_CLRNG_RISK_AM NUMBER(18) ,
FA_FACIL_SETL_RISK_AM NUMBER(18) ,
FA_FACIL_OTH_RISK_CATGY_AM NUMBER(18) ,
FA_FACIL_CLASS_I_AM NUMBER(18) ,
FA_FACIL_CLASS_IA_AM NUMBER(18) ,
FA_FACIL_CLASS_I_IA_AM NUMBER(18) ,
FA_FACIL_CLASS_II_AM NUMBER(18) ,
FA_FACIL_CLASS_IA_II_AM NUMBER(18) ,
FA_FACIL_CLASS_III_AM NUMBER(18) ,
FA_FACIL_CLASS_II_III_AM NUMBER(18) ,
FA_FACIL_CLASS_IV_AM NUMBER(18) ,
FA_OUTST_AM NUMBER(18) ,
FA_MIN_MONTH_USGE_AM NUMBER(18) ,
FA_MAX_MONTH_USGE_AM NUMBER(18) ,
FA_AVG_MONTH_USGE_AM NUMBER(18) ,
FA_AGGR_MONTH_USGE_AM NUMBER(18) ,
FA_MONTH_USGE_CNT NUMBER(18) ,
FA_EXTRC_DT NUMBER(38) ,
FA_EXTRC_SRC_NM VARCHAR2(10) ,
FA_STATUS_CD VARCHAR2(1) ,
FA_USER_MOD_NM VARCHAR2(10) ,
FA_ADD_DT VARCHAR2(30) , -- DATE
FA_UPDT_DT VARCHAR2(30) , -- DATE
FA_APPRV_UNIT_ID NUMBER(10) ,
FA_APPRV_UNIT_CTRY_CD VARCHAR2(2) ,
FA_FACIL_COMIT_AM NUMBER(18) ,
FA_FACIL_UNCOM_AM NUMBER(18) ,
CP_PROG_AVAIL_AM NUMBER(18) ,
FA_CP_TYP_FG VARCHAR2(1),
FA_OUTST_DIR_RISK_AM NUMBER(18),
FA_OUTST_CONTG_RISK_AM NUMBER(18),
FA_OUTST_PRE_SETL_RISK_AM NUMBER(18),
FA_OUTST_CLRNG_RISK_AM NUMBER(18),
FA_OUTST_SETL_RISK_AM NUMBER(18),
FA_OUTST_OTH_RISK_AM NUMBER(18),
FA_ECA_AM NUMBER(18),
FA_OUTST_CMTM_AM NUMBER(18),
FA_OUTST_PSLE_AM NUMBER(18),
FA_OUTST_OREO_AM NUMBER(18),
FA_OUTST_RECOV_AM NUMBER(18),
FA_OUTST_WRDWN_AM NUMBER(18),
FA_OUTST_WROFF_AM NUMBER(18),
FA_OUTST_PERF_AM NUMBER(18),
FA_OUTST_NON_PERF_AM NUMBER(18),
FA_OUTST_DIR_CLS_I_AM NUMBER(18),
FA_OUTST_DIR_CLS_IA_AM NUMBER(18),
FA_OUTST_DIR_CLS_II_AM NUMBER(18),
FA_OUTST_DIR_CLS_III_AM NUMBER(18),
FA_OUTST_CONTG_CLS_I_AM NUMBER(18),
FA_OUTST_CONTG_CLS_IA_AM NUMBER(18),
FA_OUTST_CONTG_CLS_II_AM NUMBER(18),
FA_OUTST_CONTG_CLS_III_AM NUMBER(18),
FA_OUTST_PSR_CLS_I_AM NUMBER(18),
FA_OUTST_PSR_CLS_IA_AM NUMBER(18),
FA_OUTST_PSR_CLS_II_AM NUMBER(18),
FA_OUTST_PSR_CLS_III_AM NUMBER(18),
FA_OUTST_UNC_CLS_I_AM NUMBER(18),
FA_OUTST_UNC_CLS_IA_AM NUMBER(18),
FA_OUTST_UNC_CLS_II_AM NUMBER(18),
FA_OUTST_UNC_CLS_III_AM NUMBER(18),
FA_ECA_LOC_APPRV_AM NUMBER(18),
FA_ECA_TOT_PC NUMBER(4,3),
FA_APPRV_UNIT_LVL2_CD VARCHAR2(10),
FA_APPRV_UNIT_LVL1_CD VARCHAR2(10)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY STG.GRR_HISTDAY_DATA
ACCESS PARAMETERS
(
records delimited by newline
badfile ORACLE_DIRECTORY_LOGS:'facilty_hist_trnsit_%a_%p.bad'
logfile ORACLE_DIRECTORY_LOGS:'facilty_hist_trnsit_%a_%p.log'
fields terminated by '~'
missing field values are null
)
LOCATION ('GRR_DGRPCURRDAY_VGR_HISTFACLTY.txt')
)
PARTITION BY HASH (FA_KEY_ID)
STORE IN (GLPDATA1, GLPDATA2)
PARALLEL
REJECT LIMIT UNLIMITED

Thank you


Tom Kyte
May 03, 2006 - 2:54 pm UTC

the partition clause is a big "no operation"; current releases of the software flag this as an error instead of just ignoring it:

ops$tkyte@ORA10GR2> create table FACILITY_HIST_ROLLUP_TRANSIT  (
  2     FA_APPRV_UNIT_LVL1_CD VARCHAR2(10)
  3  )
  4  ORGANIZATION EXTERNAL
  5  (
  6    TYPE ORACLE_LOADER
  7    DEFAULT DIRECTORY GRR_HISTDAY_DATA
  8    ACCESS PARAMETERS
  9    (
 10     records delimited by newline
 11     fields terminated by '~'
 12     missing field values are null
 13    )
 14    LOCATION ('GRR_DGRPCURRDAY_VGR_HISTFACLTY.txt')
 15  )
 16  PARTITION BY HASH (FA_KEY_ID)
 17  STORE IN (GLPDATA1, GLPDATA2)
 18  PARALLEL
 19  REJECT LIMIT UNLIMITED
 20  /
create table FACILITY_HIST_ROLLUP_TRANSIT  (
             *
ERROR at line 1:
ORA-30657: operation not supported on external organized table


ops$tkyte@ORA10GR2> create table FACILITY_HIST_ROLLUP_TRANSIT  (
  2     FA_APPRV_UNIT_LVL1_CD VARCHAR2(10)
  3  )
  4  ORGANIZATION EXTERNAL
  5  (
  6    TYPE ORACLE_LOADER
  7    DEFAULT DIRECTORY GRR_HISTDAY_DATA
  8    ACCESS PARAMETERS
  9    (
 10     records delimited by newline
 11     fields terminated by '~'
 12     missing field values are null
 13    )
 14    LOCATION ('GRR_DGRPCURRDAY_VGR_HISTFACLTY.txt')
 15  )
 16  -- PARTITION BY HASH (FA_KEY_ID)
 17  -- STORE IN (GLPDATA1, GLPDATA2)
 18  PARALLEL
 19  REJECT LIMIT UNLIMITED
 20  /

Table created.


It is likely the PARALLEL that you are thinking about as far as performance goes.... 

Yelena, May 03, 2006 - 4:34 pm UTC

For some reason it works for me:

SQL> create table FACILITY_HIST_ROLLUP_TRANSIT_L ( FA_KEY_ID            VARCHAR2(29)) 
  2  ORGANIZATION EXTERNAL
  3  (
  4  TYPE ORACLE_LOADER
  5  DEFAULT DIRECTORY GRR_HISTDAY_DATA
  6  ACCESS PARAMETERS
  7  (
  8  records delimited by newline
  9  fields terminated by '~'
 10  missing field values are null
 11  )
 12  LOCATION ('GRR_DGRPCURRDAY_VGR_HISTFACLTY.txt')
 13  )
 14  PARTITION BY HASH (FA_KEY_ID)
 15  STORE IN (GLPDATA1, GLPDATA2)
 16  PARALLEL
 17  REJECT LIMIT UNLIMITED;

Table created.

SQL> 

And I am not confusing parallel with partitioning of this table. When I do the following, Oracle tells me that this table is partitioned:

SQL> select t.partitioned from sys.all_all_tables t where t.table_name = 'FACILITY_HIST_ROLLUP_TRANSIT_L';

PAR
---
YES

SQL> 

Is there an explanation for this? Should I never try to partition external tables?

Thank you. 

Tom Kyte
May 04, 2006 - 1:43 am UTC

I *know* the syntax worked for you - as I said "current releases of the software
flag this as an error instead of just ignoring it:"




Tell you what - show us this "partitioned external table" outperforming a "non partitioned one" (that is, show us that "partition" makes a difference here, use TKPROF and sql_trace)


My point about parallel was

- partition isn't doing anything for you
- it is likely the PARALLEL bit that is.

Creating external table for a fixed width text file

Rishi, May 31, 2006 - 2:52 am UTC

Hi Tom,
I have a fixed width text file (Test_payee.txt) which looks like this

 033027025789E      577624774     1110H40101SUYDAM JR       
 14472100           396242386     1210D40200HENNESSY        
 14472100           366034549     0270D40200JONES           
 FS58032350         271507530     1210D40101ACTIS           
 M71336000000       000001111     1110H40320MISSIONIMPOSSIBL
 M7133600000090     002188080     1212D40320PRAY            
 NML000012296500    479381094     1210D40101ERICSON         
 NY142743           081309209     1110H40101ANSEL           
 NY142743           000001114     1110H40318DONOVAN         
 NY142743           000001111     1110H40315PETERSON        
 NY142743           000001118     1110H40301ROBINSON        
 NY142743           000001116     1110H40301WILSON          
 NY142743           125289324     1210S40101ZIMMER          

The structure is as follows
ACCOUNT_ID CHAR(20),
PARTICIPANT_ID_1 CHAR(3),
PARTICIPANT_ID_2 CHAR(2),
PARTICIPANT_ID_3 CHAR(4),
PARTICIPANTS_LAST_NAME CHAR(20),
PARTICIPANTS_FIRST_NAME CHAR(12)

I tried creating an external table for this. The table gets created, but on select I get errors.

create directory pay_ext_tab_dir as 'D:\Rishi\SEIpay\Test';

SQL> drop table test_payee;

Table dropped.

SQL> 
SQL> CREATE TABLE TEST_PAYEE
  2  (ACCOUNT_ID VARCHAR2(20),
  3  PARTICIPANT_ID_1 VARCHAR2(3),
  4  PARTICIPANT_ID_2 VARCHAR2(2),
  5  PARTICIPANT_ID_3 VARCHAR2(4),
  6  PARTICIPANTS_LAST_NAME VARCHAR2(20),
  7  PARTICIPANTS_FIRST_NAME VARCHAR2(12)
  8  )
  9  ORGANIZATION EXTERNAL
 10  (TYPE ORACLE_LOADER
 11  DEFAULT DIRECTORY pay_ext_tab_dir
 12  ACCESS PARAMETERS
 13  (RECORDS DELIMITED BY NEWLINE
 14  BADFILE 'pay_bad.txt'
 15  DISCARDFILE 'pay_discard.txt'
 16  LOGFILE 'pay_log.txt'
 17  MISSING FIELD VALUES ARE NULL
 18  (ACCOUNT_ID CHAR(20),
 19  PARTICIPANT_ID_1 CHAR(3),
 20  PARTICIPANT_ID_2 CHAR(2),
 21  PARTICIPANT_ID_3 CHAR(4),
 22  PARTICIPANTS_LAST_NAME CHAR(20),
 23  PARTICIPANTS_FIRST_NAME CHAR(12)
 24  )
 25  )
 26  LOCATION ('TEST_PAYEE.TXT')
 27  );

Table created.

SQL> 
SQL> select * from TEST_PAYEE;
select * from TEST_PAYEE
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout 
ORA-29400: data cartridge error 
KUP-00554: error encountered while parsing access parameters 
KUP-01005: syntax error: found "missing": expecting one of: "badfile, 
byteordermark, characterset, colon, column, data, delimited, discardfile, exit, 
fields, fixed, load, logfile, language, nodiscardfile, nobadfile, nologfile, 
date_cache, processing, readsize, string, skip, territory, variable" 
KUP-01007: at line 5 column 1 
ORA-06512: at "SYS.ORACLE_LOADER", line 19 

Please advise. 

Tom Kyte
May 31, 2006 - 10:03 am UTC

Missing the FIELDS clause - your positions look funny to me, you'll need to fix them of course.

ops$tkyte@ORA10GR2> CREATE TABLE TEST_PAYEE
  2  (ACCOUNT_ID VARCHAR2(20),
  3  PARTICIPANT_ID_1 VARCHAR2(3),
  4  PARTICIPANT_ID_2 VARCHAR2(2),
  5  PARTICIPANT_ID_3 VARCHAR2(4),
  6  PARTICIPANTS_LAST_NAME VARCHAR2(20),
  7  PARTICIPANTS_FIRST_NAME VARCHAR2(12)
  8  )
  9  ORGANIZATION EXTERNAL
 10  (TYPE ORACLE_LOADER
 11  DEFAULT DIRECTORY pay_ext_tab_dir
 12  ACCESS PARAMETERS
 13  (RECORDS DELIMITED BY NEWLINE
 14   BADFILE 'pay_bad.txt'
 15   DISCARDFILE 'pay_discard.txt'
 16   LOGFILE 'pay_log.txt'
 17   FIELDS
 18   MISSING FIELD VALUES ARE NULL
 19  (ACCOUNT_ID CHAR(20),
 20  PARTICIPANT_ID_1 CHAR(3),
 21  PARTICIPANT_ID_2 CHAR(2),
 22  PARTICIPANT_ID_3 CHAR(4),
 23  PARTICIPANTS_LAST_NAME CHAR(20),
 24  PARTICIPANTS_FIRST_NAME CHAR(12)
 25  )
 26  )
 27  LOCATION ('TEST_PAYEE.TXT')
 28  );

Table created.

ops$tkyte@ORA10GR2> select * from test_payee;

ACCOUNT_ID           PAR PA PART PARTICIPANTS_LAST_NA PARTICIPANTS
-------------------- --- -- ---- -------------------- ------------
 033027025789E       577 62 4774      1110H40101SUYDA M JR
 14472100            396 24 2386      1210D40200HENNE SSY
 14472100            366 03 4549      0270D40200JONES
 FS58032350          271 50 7530      1210D40101ACTIS
 M71336000000        000 00 1111      1110H40320MISSI ONIMPOSSIBL
 M7133600000090      002 18 8080      1212D40320PRAY
 NML000012296500     479 38 1094      1210D40101ERICS ON
 NY142743            081 30 9209      1110H40101ANSEL
 NY142743            000 00 1114      1110H40318DONOV AN
 NY142743            000 00 1111      1110H40315PETER SON
 NY142743            000 00 1118      1110H40301ROBIN SON
 NY142743            000 00 1116      1110H40301WILSO N
 NY142743            125 28 9324      1210S40101ZIMME R

13 rows selected.

 

Creating external table for a fixed width text file

Rishi, May 31, 2006 - 3:45 am UTC

Hi Tom,

I was able to sort out the problem.

SQL> drop table test;

Table dropped.

SQL> 
SQL>    CREATE TABLE TEST
  2      (       "ACCOUNT_ID" VARCHAR2(20 BYTE),
  3      "PARTICIPANT_ID_1" VARCHAR2(3 BYTE),
  4      "PARTICIPANT_ID_2" VARCHAR2(2 BYTE),
  5      "PARTICIPANT_ID_3" VARCHAR2(4 BYTE),
  6      "PARTICIPANTS_LAST_NAME" VARCHAR2(20 BYTE),
  7      "PARTICIPANTS_FIRST_NAME" VARCHAR2(12 BYTE)
  8      )
  9      ORGANIZATION EXTERNAL
 10       ( TYPE ORACLE_LOADER
 11         DEFAULT DIRECTORY "PAY_EXT_TAB_DIR"
 12         ACCESS PARAMETERS
 13         ( RECORDS DELIMITED BY NEWLINE
 14   BADFILE 'pay_bad.txt'
 15   DISCARDFILE 'pay_discard.txt'
 16   LOGFILE 'pay_log.txt'
 17   FIELDS
 18   (ACCOUNT_ID position(1:20),
 19   PARTICIPANT_ID_1 position(21:23),
 20   PARTICIPANT_ID_2 position(24:25),
 21   PARTICIPANT_ID_3 position(26:29),
 22   PARTICIPANTS_LAST_NAME position(30:49),
 23   PARTICIPANTS_FIRST_NAME position(50:61)
 24   )
 25       )
 26         LOCATION
 27          ( 'TEST_PAYEE.TXT'
 28          )
 29       )
 30   ;

Table created.

SQL> 
SQL> select * from test;

ACCOUNT_ID           PAR PA PART PARTICIPANTS_LAST_NA PARTICIPANTS
-------------------- --- -- ---- -------------------- ------------
 033027025789E       577 62 4774      1110H40101SUYDA M JR
 14472100            396 24 2386      1210D40200HENNE SSY
 14472100            366 03 4549      0270D40200JONES
 FS58032350          271 50 7530      1210D40101ACTIS
 M71336000000        000 00 1111      1110H40320MISSI ONIMPOSSIBL
 M7133600000090      002 18 8080      1212D40320PRAY
 NML000012296500     479 38 1094      1210D40101ERICS ON
 NY142743            081 30 9209      1110H40101ANSEL
 NY142743            000 00 1114      1110H40318DONOV AN
 NY142743            000 00 1111      1110H40315PETER SON
 NY142743            000 00 1118      1110H40301ROBIN SON

ACCOUNT_ID           PAR PA PART PARTICIPANTS_LAST_NA PARTICIPANTS
-------------------- --- -- ---- -------------------- ------------
 NY142743            000 00 1116      1110H40301WILSO N
 NY142743            125 28 9324      1210S40101ZIMME R

13 rows selected.

SQL>  

Bug in USER_EXTERNAL_TABLES

A reader, August 16, 2006 - 2:03 pm UTC

SQL> desc user_external_tables
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 TABLE_NAME                                NOT NULL VARCHAR2(30)
 TYPE_OWNER                                         CHAR(3)
 TYPE_NAME                                 NOT NULL VARCHAR2(30)
 DEFAULT_DIRECTORY_OWNER                            CHAR(3)
 DEFAULT_DIRECTORY_NAME                    NOT NULL VARCHAR2(30)
 REJECT_LIMIT                                       VARCHAR2(40)
 ACCESS_TYPE                                        VARCHAR2(7)
 ACCESS_PARAMETERS                                  VARCHAR2(4000)


SQL> desc sys.external_tab$
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OBJ#                                      NOT NULL NUMBER
 DEFAULT_DIR                               NOT NULL VARCHAR2(30)
 TYPE$                                     NOT NULL VARCHAR2(30)
 NR_LOCATIONS                              NOT NULL NUMBER
 REJECT_LIMIT                              NOT NULL NUMBER
 PAR_TYPE                                  NOT NULL NUMBER
 PARAM_CLOB                                         CLOB
 PARAM_BLOB                                         BLOB

The user_external_tables view has 

DECODE(xt.par_type, 1, NULL,   2, xt.param_clob, NULL)

which makes the ACCESS_PARAMETERS a varchar2(4000) field.

So, if my access parameters are more than 4000 characters, the USER_EXTERNAL_TABLES view truncates it to 4000.

Is this a known bug? Is there a workaround?

Thanks 

Tom Kyte
August 16, 2006 - 4:00 pm UTC

I do not see any filed bugs.

The only workaround I can suggest: create your own view for now; later, point the view at user_external_tables once it is corrected.

do you have access to support to file this?

External Tables

alicia, September 05, 2006 - 3:22 pm UTC

The thread on inserting from external tables has been helpful, except that I am working on updating records from an external table using multiple subqueries. Can this be done?

Here is the external table:

CREATE TABLE EXT_UPD_COST_ITM
(
C_ITEM VARCHAR2(30 BYTE),
C_ITEM_COST NUMBER(24,4)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY AG_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE SKIP 1
BADFILE AG_DIR:'upd_cost_itm_bad.log'
LOGFILE AG_DIR:'upd_cost_itm_log.log'
FIELDS TERMINATED BY '|'
OPTIONALLY ENCLOSED BY '"'
MISSING FIELD VALUES ARE NULL
)
LOCATION (AG_DIR:'upd_cost_itm.txt')
)
REJECT LIMIT 0
NOPARALLEL
NOMONITORING;


SQL Update Query:

UPDATE cp.COST_ITM
SET cp.COST_ITM.ITEM_COST1 = (
SELECT ORACLE.EXT_UPD_COST_ITM.C_ITEM_COST
FROM ORACLE.EXT_UPD_COST_ITM
WHERE ORACLE.EXT_UPD_COST_ITM.C_ITEM = cp.COST_ITM.ITEM)
WHERE cp.COST_ITM.ITEM IN (
SELECT ORACLE.EXT_UPD_COST_ITM.C_ITEM
FROM ORACLE.EXT_UPD_COST_ITM)
AND cp.COST_ITM.BCR_TYPE = 'CUR'
AND cp.COST_ITM.CCN = '110';

Error:

ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-30653: reject limit reached
ORA-06512: at "SYS.ORACLE_LOADER", line 14
ORA-06512: at line 1


Text File:

C_ITEM|C_ITEM_COST
510337601|0.00272
510337635|0.007
506037055|2.7262
510337615|0.07318




Line 1 is the header, which comes back in the log file; for some reason that row is not skipped. If we place a '0' in the 2nd column (which is a NUMBER datatype), then the update works, but that is not a good option.

Please advise.

Thanks!


Tom Kyte
September 05, 2006 - 5:31 pm UTC

it can be done (you already know the "fix" here! up the reject limit OR skip the first record)

however, in your case, I'm PRETTY SURE I would not want to do it. Do you really want to read that external table over and over and over and over and over????

can we use merge instead? In 10g the answer is YES; in 9i, maybe.

External Table with multiple subqueries

alicia, September 06, 2006 - 8:20 am UTC

Thanks for the quick response. We've tried increasing the REJECT LIMIT and no, we don't want to read this file over and over again.

What I'm noticing is that the first row is not skipped. Does an UPDATE statement process the external table differently than a SELECT? Or does Oracle validate the NUMBER(24,4) datatype before skipping rows?

Is this a known issue on 9i? I am unable to find specific documentation on it.

Thanks again!

Tom Kyte
September 06, 2006 - 3:25 pm UTC

there is no "issue here", this update:

UPDATE cp.COST_ITM
SET cp.COST_ITM.ITEM_COST1 = (
SELECT ORACLE.EXT_UPD_COST_ITM.C_ITEM_COST
FROM ORACLE.EXT_UPD_COST_ITM
WHERE ORACLE.EXT_UPD_COST_ITM.C_ITEM = cp.COST_ITM.ITEM)
WHERE cp.COST_ITM.ITEM IN (
SELECT ORACLE.EXT_UPD_COST_ITM.C_ITEM
FROM ORACLE.EXT_UPD_COST_ITM)
AND cp.COST_ITM.BCR_TYPE = 'CUR'
AND cp.COST_ITM.CCN = '110'

likely fires the correlated subquery (causing a full scan) against the external table for each row to be updated! So, each scan hits that row and ups the rejects.


try merging:

merge into t1
using ( select *
          from t2
         where c_item in ( select item
                             from t1
                            where bcr_type = 'CUR'
                              and ccn = '110' )
      ) t2
on ( t1.item = t2.c_item and t1.bcr_type = 'CUR' and t1.ccn = '110' )
when matched then update set item_cost1 = c_item_cost
when not matched /* never happens!!! */ then insert (item) values (null);

(verify that, didn't have anything to test with...)

and use the ability to skip records in your external table definition.
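For concreteness, the merge above could be mapped onto the actual table names from the question. This is an untested sketch; note that in 9i both WHEN branches of MERGE are mandatory (hence the dummy WHEN NOT MATCHED above), while 10g and later allow it to be dropped:

```sql
-- Untested sketch: single read of the external table per statement.
merge into cp.cost_itm t1
using ( select c_item, c_item_cost
          from oracle.ext_upd_cost_itm ) t2
on (     t1.item     = t2.c_item
     and t1.bcr_type = 'CUR'
     and t1.ccn      = '110' )
when matched then
  update set t1.item_cost1 = t2.c_item_cost;  -- 10g+: WHEN NOT MATCHED may be omitted
```

Combined with skipping the header record in the external table definition, this reads the flat file only once instead of once per updated row.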




External table explain plan

Rahul, October 05, 2006 - 4:58 pm UTC

Tom,

   I had 'set autotrace traceonly' for my session and selected from the alert_log external table. I found 5 in-memory sorts. I thought external tables were NOT sorted when the rows are streamed to the client (SQL*Plus).


SQL> select * from alert_log;

12379 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1977868570

----------------------------------------------------------------------------------------
| Id  | Operation                  | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |           |  8168 |  1028K|    24   (0)| 00:00:01 |
|   1 |  EXTERNAL TABLE ACCESS FULL| ALERT_LOG |  8168 |  1028K|    24   (0)| 00:00:01 |
----------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
        306  recursive calls
          0  db block gets
        558  consistent gets
          0  physical reads
          0  redo size
     701000  bytes sent via SQL*Net to client
       9456  bytes received via SQL*Net from client
        827  SQL*Net roundtrips to/from client
          5  sorts (memory)
          0  sorts (disk)
      12379  rows processed

Thank you,
Rahul 

Tom Kyte
October 06, 2006 - 8:01 am UTC

See all of the recursive calls?  You had SQL executed on behalf of your SQL; those recursive statements did the sorting.

Your result set itself was not sorted.


ops$tkyte%ORA10GR2> create table external_table
  2  (EMPNO NUMBER(4) ,
  3   ENAME VARCHAR2(10),
  4   JOB VARCHAR2(9),
  5   MGR NUMBER(4),
  6   HIREDATE DATE,
  7   SAL NUMBER(7, 2),
  8   COMM NUMBER(7, 2) ,
  9   DEPTNO NUMBER(2)
 10  )
 11  ORGANIZATION EXTERNAL
 12  ( type oracle_loader
 13    default directory data_dir
 14    access parameters
 15    ( fields terminated by ',' )
 16    location ('emp.dat')
 17  )
 18  /

Table created.

ops$tkyte%ORA10GR2> host flat scott/tiger emp > /tmp/emp.dat
/bin/bash: flat: command not found

ops$tkyte%ORA10GR2> set autotrace traceonly
ops$tkyte%ORA10GR2> select * from external_table;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 1024155065

---------------------------------------------------------------------------------------------
| Id  | Operation                  | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                |  8168 |   693K|    24   (0)| 00:00:01 |
|   1 |  EXTERNAL TABLE ACCESS FULL| EXTERNAL_TABLE |  8168 |   693K|    24   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
       2017  recursive calls
          0  db block gets
       1929  consistent gets
         33  physical reads
          0  redo size
        656  bytes sent via SQL*Net to client
        374  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
         36  sorts (memory)
          0  sorts (disk)
          0  rows processed

ops$tkyte%ORA10GR2> select * from external_table;

no rows selected


Execution Plan
----------------------------------------------------------
Plan hash value: 1024155065

---------------------------------------------------------------------------------------------
| Id  | Operation                  | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                |  8168 |   693K|    24   (0)| 00:00:01 |
|   1 |  EXTERNAL TABLE ACCESS FULL| EXTERNAL_TABLE |  8168 |   693K|    24   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------


Statistics
----------------------------------------------------------
         17  recursive calls
          0  db block gets
         53  consistent gets
          0  physical reads
          0  redo size
        656  bytes sent via SQL*Net to client
        374  bytes received via SQL*Net from client
          1  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          0  rows processed

ops$tkyte%ORA10GR2> set autotrace off


Just run the sql over again and you'll probably see them "disappear" 

Perfect.

Rahul, October 06, 2006 - 11:35 am UTC

Tom,

I tested it too and now I know.

Thank you.

ORACLE_LOADER

Prashant, October 12, 2006 - 9:54 am UTC

Tom,

I get ORA-30657 error when I run this DDL:

CREATE TABLE EXT_SYSTEM_EVENT_LOG
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY ARCHIVE_DIR
LOCATION ('system_event_log.dat')
)
AS
SELECT *
FROM system_event_log
WHERE CREATE_DT BETWEEN '01-OCT-2006' AND '31-OCT-2006'

The reason is that I specified the type as ORACLE_LOADER instead of ORACLE_DATAPUMP above.

I know that when creating external tables like this (with the 'AS SELECT * FROM ...' syntax), I cannot use the ORACLE_LOADER type. Is there any way around this, though?

Is there any way to create a comma-delimited flat file and an external table off of a normal table using the 'AS SELECT * FROM ...' syntax, without having to specify the exact column lengths and types? I have seen your unloader package that creates a control file and a .dat file, but is there a simpler way of doing it? I want to create an EXTERNAL table and a comma-delimited flat file from a normal table without writing a PL/SQL procedure. If this can't be done, do you know whether it will be supported in future Oracle releases? We have a need to do this and I want to keep the code simpler than your unloader procedure. No offense to you!


Tom Kyte
October 12, 2006 - 11:42 am UTC

you know you cannot do it... but you want a workaround to do it?

you would have to use the unloader methods I've demonstrated.

what is "complex" about the tiny plsql routine?????? what could be "simpler" than "we got the code written for us and we just use it"
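For comparison, the ORACLE_DATAPUMP driver mentioned in the question does accept the CREATE TABLE ... AS SELECT form in a single statement; the trade-off is that it writes a binary dump file, not a comma-delimited text file. A sketch using the names from the question:

```sql
-- Sketch only: ORACLE_DATAPUMP supports CREATE TABLE ... AS SELECT,
-- but it produces a binary dump file, not a delimited text file.
CREATE TABLE ext_system_event_log
ORGANIZATION EXTERNAL
( TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY archive_dir
  LOCATION ('system_event_log.dmp')
)
AS
SELECT *
  FROM system_event_log
 WHERE create_dt BETWEEN DATE '2006-10-01' AND DATE '2006-10-31';
```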

External tables

Robert, October 16, 2006 - 9:21 am UTC

Hi Tom

I would like to ask you how to solve this problem.
I have asc file in this structure

S|name|account|from_date|to_date|
I|account|name|value|col1|col2|col3|col4|
I|account|name|value|col1|col2|col3|col4|
I|account|name|value|col1|col2|col3|col4|
I|account|name|value|col1|col2|col3|col4|

where the first row is a header (and has fewer fields than the others) and the rest are items. Do you think it is possible to get this result from my file, like:

from_date,to_date,account,name,value,col1,col2,col3,col4
from_date,to_date,account,name,value,col1,col2,col3,col4
from_date,to_date,account,name,value,col1,col2,col3,col4

I tried to use external table and skip the first row:

CREATE TABLE SU
(
A1 VARCHAR2(50 BYTE),
A2 VARCHAR2(50 BYTE),
A3 VARCHAR2(50 BYTE),
A4 VARCHAR2(50 BYTE),
A5 VARCHAR2(50 BYTE),
A6 VARCHAR2(50 BYTE),
A7 VARCHAR2(50 BYTE),
A8 VARCHAR2(50 BYTE),
A9 VARCHAR2(50 BYTE)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY DATA_DIR
ACCESS PARAMETERS
( records delimited by 0x'0a' CHARACTERSET WE8ISO8859P1 Skip 1
fields terminated by '|' )
LOCATION (DATA_DIR:'ZINSFEE.ASC')
)
REJECT LIMIT 0
NOPARALLEL
NOMONITORING;

It works pretty well, but I do not know how to get the dates from the first row. I thought I could use the same principle to create an external table only for the first row and then join them in a view. Is it possible to create an external table that reads only the first row and skips the others?
Can you help me? Thank you very much

Robert




Tom Kyte
October 16, 2006 - 10:53 am UTC

ops$tkyte%ORA9IR2> CREATE TABLE SU
  2  (
  3    A1   VARCHAR2(50 BYTE),
  4    A2   VARCHAR2(50 BYTE),
  5    A3   VARCHAR2(50 BYTE),
  6    A4   VARCHAR2(50 BYTE),
  7    A5   VARCHAR2(50 BYTE),
  8    A6   VARCHAR2(50 BYTE),
  9    A7   VARCHAR2(50 BYTE),
 10    A8   VARCHAR2(50 BYTE),
 11    A9   VARCHAR2(50 BYTE)
 12  )
 13  ORGANIZATION EXTERNAL
 14    (  TYPE ORACLE_LOADER
 15       DEFAULT DIRECTORY DATA_DIR
 16       ACCESS PARAMETERS
 17         ( records delimited by 0x'0a' CHARACTERSET WE8ISO8859P1
 18           fields terminated by '|'
 19                   missing field values are null )
 20       LOCATION (DATA_DIR:'ZINSFEE.ASC')
 21    )
 22  REJECT LIMIT 0
 23  NOPARALLEL
 24  NOMONITORING;

Table created.

ops$tkyte%ORA9IR2> with S as
  2  (select a4 from_date, a5 to_date
  3     from su
  4   where a1 = 'S'
  5     and rownum = 1
  6  ),
  7  I as
  8  (select a2 account, a3 name, a4 value, a5 col1, a6 col2, a7 col3, a8 col4
  9    from su
 10   where a1 = 'I'
 11  )
 12  select * from s, i;

FROM_DATE TO_DATE ACCOUNT NAME VALUE COL1 COL2 COL3 COL4
--------- ------- ------- ---- ----- ---- ---- ---- ----
from_date to_date account name value col1 col2 col3 col4
from_date to_date account name value col1 col2 col3 col4
from_date to_date account name value col1 col2 col3 col4
from_date to_date account name value col1 col2 col3 col4

 

Alexander the ok, March 29, 2007 - 2:57 pm UTC

Hi,

Tom I have a real simple question about trimming fields in an external table definition. My fields are coming in with quotes around them and are delimited by commas, e.g.

"field1", "field2", .....etc

All I want to do is trim off the quotes. I can't find where I can do this in the access parameters clause. I've tried many variations but it is very tedious dropping and creating my table over and over to get it right. Do you know the snippet of syntax to do this?

I was trying things like

SQL> create table ext
2 (city varchar2(30) ,
3 state varchar2(30),
4 zip varchar(20),
5 ac varchar2(30),
6 fips varchar2(30),
7 country varchar2(300),
8 pref varchar(300),
9 type varchar(30)
10 )
11 organization external
12 ( type oracle_loader
13 default directory postal_files
14 access parameters
15 (records delimited by newline
16 fields terminated by ',' trim ('"' from (city))
17 missing field values are null)
18 location ('list.txt')
19 )
20 reject limit unlimited
21 /

Thanks.
Tom Kyte
March 30, 2007 - 1:00 pm UTC

CREATE TABLE "SYS_SQLLDR_X_EXT_DEPT"
(
  DEPTNO NUMBER(2),
  DNAME VARCHAR2(14),
  LOC VARCHAR2(13)
)
ORGANIZATION external
(
  TYPE oracle_loader
  DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
    BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'test.bad'
    LOGFILE 'test.log_xt'
    READSIZE 1048576
    SKIP 7
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    REJECT ROWS WITH ALL NULL FIELDS
    (
      DEPTNO CHAR(255)
        TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      DNAME CHAR(255)
        TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      LOC CHAR(255)
        TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    )
  )
  location
  (
    'test.ctl'
  )
)REJECT LIMIT UNLIMITED



generated by using the control file:
LOAD DATA
INFILE *
INTO TABLE DEPT
replace
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(DEPTNO, DNAME, LOC)



found in $ORACLE_HOME/rdbms/demo/ulcase1.ctl

$ sqlldr / test.ctl external_table=generate_only


Alexander the ok, April 06, 2007 - 11:15 am UTC

Thank you Tom that worked great.

Do you know what kind of magic I have to perform to allow a non-dba account to read from external tables? They get:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file ziplists.txt in POSTAL_FILES not found
ORA-06512: at "SYS.ORACLE_LOADER", line 19

I've granted everything under the sun to them. Read/write on the directory, select with admin; I've extracted the grants my account (which works) has and applied them to theirs. Nothing. Thanks.

Tom Kyte
April 06, 2007 - 11:46 am UTC

got example for me to start with ?

Alexander the ok, April 06, 2007 - 12:19 pm UTC

You'll never guess what happened: it worked when I wrote up the test case. So I'll have to get back to you. The problem is something other than grants.

(I knew when you said "got an example" that it really means "I don't believe that should work" ;).

ORA-01722: Ungültige Zahl (invalid number)

Bernd, May 09, 2007 - 3:58 pm UTC

Hi Tom,
I got an ORA-01722 and I have no idea why.

My csv-file PM_Attribute.csv in the directory specified by dublettenscout:
ART_DESCR_ID;TOKEN_POS;KIND_ID;ASPECT_ID;DATATYPE;CARDINALITY;INTERVALL;ORG_NUM;ORG_TEXT;ORG_DATE;ORG_UNIT;STD_NUM;STD_TEXT;STD_DATE;STD_UNIT;IS_PART;KIND;ASPECT
1068919;3;21;0;Z;0;0;;Geißfuß;;mit;;Geißfuß;;mit;0;mit-ohne;ZusatzTyp

the external table:
drop table pm_attribute;

CREATE TABLE pm_attribute (
art_descr_id NUMBER,
token_pos NUMBER,
kind_id NUMBER,
aspect_id NUMBER,
datatype VARCHAR2(1),
CARDINALITY NUMBER(1),
intervall NUMBER(1),
org_num NUMBER,
org_text VARCHAR2(2000),
org_date VARCHAR2(25),
org_unit VARCHAR2(20),
std_num NUMBER,
std_text VARCHAR2(2000),
std_date VARCHAR2(25),
std_unit VARCHAR2(20),
is_part NUMBER(1),
kind VARCHAR2(100),
aspect VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
TYPE oracle_loader
DEFAULT DIRECTORY dublettenscout
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
SKIP 1
BADFILE dublettenscout:'PM_ATTRIBUTE.bad'
DISCARDFILE dublettenscout:'PM_ATTRIBUTE.dsc'
LOGFILE dublettenscout:'PM_ATTRIBUTE.log'
FIELDS TERMINATED BY ';'
LDRTRIM
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(
art_descr_id,
token_pos,
kind_id,
aspect_id,
datatype CHAR(1),
CARDINALITY,
intervall,
org_num,
org_text CHAR(2000),
org_date,
org_unit,
std_num,
std_text CHAR(2000),
std_date,
std_unit,
is_part,
kind,
aspect
)
)
LOCATION ( dublettenscout:'PM_Attribute.csv' )
)
REJECT LIMIT UNLIMITED;

select * from PM_ATTRIBUTE;

-> no rows selected.

In the log file I found:
error processing column STD_NUM in row 2 for datafile c:\Daten\DublettenScout\PM_Attribute.csv
ORA-01722: Ungültige Zahl (invalid number)

STD_NUM is empty. Isn't that allowed?

Thanks in Advance

Tom Kyte
May 11, 2007 - 10:28 am UTC

you have some data in your file that isn't a number and is mapped to a number column

suggestion: map everything as a varchar2, then retrieve the data and look at it. see what you see

charset problem

Bernd, May 14, 2007 - 7:54 am UTC

It was a characterset problem. I added the parameter CHARACTERSET WE8ISO8859P1 to the access parameters and everything went well.

Ciao Bernd

External table accessing remote files

Gowtham Sen., July 11, 2007 - 3:19 am UTC

Hi Tom,

This post was good.
I have one doubt regarding access of files.

I am using Oracle 10g.

I have a requirement that I need to create an external table to fetch data (from files) at a remote location.

Can we do that? Could you please provide the details?

Are there any performance issues when we access files remotely?

Thanks in advance.

Thank you,
Regards,

Gowtham Sen.



Tom Kyte
July 11, 2007 - 8:57 am UTC

the file needs to be on the server for a server process to access it.

it would be very virus-like otherwise - if the database server could just reach out and grab files from remote systems.

You must make it available on the server (networked if you want)

(Precision, Scale) in a Fixed Length

Rafael Trevisan, July 12, 2007 - 5:30 pm UTC

Hi Tom,

Please, how can I do this?

CREATE TABLE ext_logs (
   numero_documento NUMBER(6),
   numero_pedido    VARCHAR2(8),
   tipo_documento   VARCHAR2(1),
   tipo_registro    VARCHAR2(2),
   codigo_produto   NUMBER(20),
   tipo_produto     VARCHAR2(1),
   quantidade       NUMBER(8,3)
)
ORGANIZATION EXTERNAL (
   TYPE       oracle_loader
   DEFAULT DIRECTORY dir_logs
   ACCESS PARAMETERS (
     RECORDS DELIMITED BY NEWLINE
     FIELDS (
       numero_documento POSITION(1  :6)  CHAR(6),
       numero_pedido    POSITION(7  :8)  CHAR(8),
       tipo_documento   POSITION(15 :1)  CHAR(1),
       tipo_registro    POSITION(16 :2)  CHAR(2),
       codigo_produto   POSITION(18 :20) CHAR(20),
       tipo_produto     POSITION(38 :1)  CHAR(1),
       quantidade       POSITION(39 :8)  ?????
    )
  )
  location ('LV050338.999')
)
reject limit 0
/


The data file is:
---BOF---
054504        T01               98748 00001000
054504        T01               98748 00001000
054504        T01               49726 00001000
054504        T01               49726 00001000
054504        T01              154949 00001000
054504        T01              154949 00001000
054504        T01               18722 00001500
---EOF---


The field "quantidade" in the last line must be imported as 1.5, and the same field in the other lines as 1.

Thanks!
Rafael Trevisan.
Tom Kyte
July 12, 2007 - 5:33 pm UTC

00001500 is NOT a number(8,3)

it is a string (so use char/varchar2 as the input type and the datatype)

and simply to_number(quantidade)/1000


ops$tkyte%ORA10GR2> select to_number( '00001500') / 1000 from dual;

TO_NUMBER('00001500')/1000
--------------------------
                       1.5


external table - logging level

Tzvika, September 24, 2007 - 1:02 pm UTC

Hi Tom

Any time I select from an external table, a log is generated. Is there a way to control the logging level (enable/disable)? Can the log files be merged?
Thanks
Tzvika


Tom Kyte
September 26, 2007 - 8:23 pm UTC

"Can the log files be merged"...

I don't know what you mean by that..

yes, we can control whether a log file is generated or not - but I'm not sure entirely what you are asking for.
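For reference, logging is controlled in the access parameters clause: LOGFILE names the file and NOLOGFILE suppresses it entirely. A sketch (the directory and file names are placeholders):

```sql
-- Disable the log file entirely:
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
  NOLOGFILE
  FIELDS TERMINATED BY ','
)

-- Or send it to a named file in a specific directory object:
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
  LOGFILE log_dir:'my_ext_table.log'
  FIELDS TERMINATED BY ','
)
```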

External table - complex file structure

Yoav, October 15, 2007 - 2:12 pm UTC

Hi Tom,
I have a text file with the following structure:
Oct<S>14<S>00:14:49<S>mr8.yyyyyyyy.net<S>2712299727<T>MTA.MESSAGE.STATUS<T><T>JTZ13214<T><T>Mail<S>from<S>200.100.182.94<S>refused,<S>see<S>RBL<S>server<S>bl.yyyyyyyy.net

S=space
T=Tab

Is it possible to use an external table to load this kind of file (and if so, how)?

Thanks
Tom Kyte
October 15, 2007 - 3:01 pm UTC

looks like your fields are terminated by whitespace - very "normal". yes, you can use an external table for this.
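A sketch of what that could look like (column names are placeholders; note that WHITESPACE treats any run of spaces or tabs as a single delimiter, so the consecutive tabs around empty fields in the sample will collapse):

```sql
-- Sketch: FIELDS TERMINATED BY WHITESPACE handles a mix of
-- spaces and tabs between fields.
CREATE TABLE ext_maillog
( month_abbr VARCHAR2(3),
  day_no     VARCHAR2(2),
  log_time   VARCHAR2(8),
  host       VARCHAR2(100),
  msg_id     VARCHAR2(20),
  rest       VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY WHITESPACE
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('maillog.txt')
)
REJECT LIMIT UNLIMITED;
```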

Complex file

Yoav, October 15, 2007 - 3:31 pm UTC

Hi Tom,
Thank you very much for your answer.
In a case when a file is comma delimited then using:
FIELDS TERMINATED BY ','
is very simple.
In my case there are whitespaces OR Tabs between the fields.
I'm sorry, but I couldn't understand how to deal with this scenario.
Thanks again

Convert tab into space in the source file

Balaji, October 16, 2007 - 5:31 pm UTC

If you don't have white space in the column values then you can consider replacing tab with space in the entire file and specify space as your delimiter in your external table definition.

-- Original text
Oct<S>14<S>00:14:49<S>mr8.yyyyyyyy.net<S>2712299727<T>MTA.MESSAGE.STATUS<T><T>JTZ13214<T><T>Mail<S>from<S>200.100.182.94<S>refused,<S>see<S>RBL<S>server<S>bl.yyyyyyyy.net

-- After replace
Oct<S>14<S>00:14:49<S>mr8.yyyyyyyy.net<S>2712299727<S>MTA.MESSAGE.STATUS<S><S>JTZ13214<S><S>Mail<S>from<S>200.100.182.94<S>refused,<S>see<S>RBL<S>server<S>bl.yyyyyyyy.net

Replacing file with new one

Scott, December 12, 2007 - 4:02 pm UTC

Here's an interesting problem I'm facing. I created an external table on my system and it works fine; if I copy the file it still works. When I put it on my client's system, the only way it works is if I actually copy and paste the contents of the CSV into the old file, instead of deleting and re-copying the file or copy-pasting the file itself.

create table nkw.ext_giftcard(
gc_no varchar2(20),
amount number,
date_issued date,
validity varchar2(15),
type char(1)
)
organization external
(type oracle_loader
default directory DBDIR
access parameters
( records delimited by newline
badfile 'giftcard.bad'
logfile 'giftcard.log'
skip 1
fields terminated by 0X'09'
lrtrim)
location ('giftcard.csv')
)
reject limit unlimited
/


CSV

GCID AMOUNT DATE_ISSUED VALIDITY ELECTRONIC/PHYSICAL ORDERID
D3D37295D89C2C67 1.00 05-OCT-2007 FOREVER E
92FD3C96B9CCC507 10000.00 06-NOV-2007 FOREVER E
CC2241188F96825C 50.00 06-NOV-2007 FOREVER E 16468


Any ideas on why this could be happening? Both servers are windows 2003.

thanks,
Scott
Tom Kyte
December 13, 2007 - 9:07 am UTC

are you moving via FTP?

if so, are you moving with BINARY or TEXT in place.

my guess: source file came from unix which smartly uses a single character to denote "end of line", you ftp file to windows - which uses TWO characters to denote end of line (carriage-return/linefeed - ascii 13+10 \r\n in "C" ).

When you "cut and paste", you use a tool like wordpad (not notepad) - and wordpad fakes us out by accepting either \r\n or \n as the end of line - so it looks OK, when you paste it, the text is "fixed" and it works.

to update my prior statement

Scott, December 12, 2007 - 4:36 pm UTC

After I generate the table, I can do this query

SQLWKS> select * from ext_giftcard
     2> 
GC_NO                AMOUNT     DATE_ISSU VALIDITY        T
-------------------- ---------- --------- --------------- -
D3D37295D89C2C67              1 05-OCT-07 FOREVER         E
92FD3C96B9CCC507          10000 06-NOV-07 FOREVER         E
CC2241188F96825C             50 06-NOV-07 FOREVER         E
0CE4C56521717CD5             50 06-NOV-07 FOREVER         P
5FE590F20455F0EB              1 12-DEC-07 FOREVER         E


I then run this command in dos

C:\Documents and Settings\Administrator>copy c:\giftcard.csv d:\apps\expor
t\giftcard.csv
Overwrite d:\apps\export\giftcard.csv? (Yes/No/All): y
        1 file(s) copied.


which is copying the same file over the original


when I try querying the table again

SQLWKS> select * from ext_giftcard
     2> 
GC_NO                AMOUNT     DATE_ISSU VALIDITY        T
-------------------- ---------- --------- --------------- -
0 rows selected.


There is data in the file and there is no badfile generated.



Thanks again,
Scott

Tom Kyte
December 13, 2007 - 9:38 am UTC

oh, what happens if you log out and log in again. Looks like you are wiping out a file we had open.

reviewing file in hex

Scott, December 13, 2007 - 11:21 am UTC

The client is supplying the file with just an LF; I am asking him to change to CRLF and hoping that will fix my problem.

Thanks Tom,
Scott

external table

A reader, May 15, 2008 - 10:53 pm UTC


External Table

Dawar Naqvi, May 16, 2008 - 7:00 pm UTC

OS: Linux
DB: 10g

I logged as a sysdba
 
SQL> CREATE OR REPLACE DIRECTORY DATALOAD AS '/opt/oracle/admin/user02/udump';

Directory created.

SQL> GRANT READ ON DIRECTORY DATALOAD TO user02;

Grant succeeded.

SQL> GRANT WRITE ON DIRECTORY DATALOAD to user02;

Grant succeeded.

>cd ORACLE_HOME
> made sure the file was there; it was class.txt

I logged as user02:

create table external_class
  (
  item_no      number(10),
  class         varchar2(255),
  level_date    date,
  note          varchar2(4),
  sched_level   varchar2(10),
  min_range     number(12),
  max_range     number(14)
  )
  ORGANIZATION EXTERNAL
  (
 TYPE oracle_loader
DEFAULT DIRECTORY DATALOAD
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(item_no, class, level_date, note,sched_level,min_range,max_range)
)
LOCATION ('class.txt')
) 
PARALLEL
REJECT LIMIT 0;


SQL> select count(*) from external_class;
select count(*) from external_class
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file class.dat in DATALOAD not found
ORA-06512: at "SYS.ORACLE_LOADER", line 19
ORA-06512: at line 1

 I changed the file extension from .txt to .dat but same result.

Any help will be appreciated.

Dawar Naqvi

Tom Kyte
May 19, 2008 - 3:49 pm UTC

... I logged as a sysdba ....

that was a mistake; never do that unless you have to - and you almost never ever do! Even to startup and shutdown, you do not need to and should not.


so, do this as a regular user and prove to us the file is there - don't even use an external table, just use a bfile and see if you can see it, utl_file.fopen it, anything like that.

I'd say either

a) file is really not there
b) permissions on the file are such that the oracle account cannot access it.
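A minimal sketch of such a check with utl_file (run as the regular user; it assumes the DATALOAD directory object from above and EXECUTE on UTL_FILE):

```sql
-- Try to open the file as the regular user; an ORA-29283 here
-- means the oracle OS account cannot see or read the file.
DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN( 'DATALOAD', 'class.txt', 'r' );
  DBMS_OUTPUT.PUT_LINE( 'file is readable' );
  UTL_FILE.FCLOSE( l_file );
END;
/
```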

excellent

Peter, July 04, 2008 - 1:08 pm UTC

Hi Tom,

I have over 50 files (such as test_1_2008_07_04.csv....test_50_2008_07_04.csv) with the same data format that need to be loaded into a partitioned table.
The file names change according to the date, and a log file is needed for each input file.
I have come out with two options:

1): use sql loader
 create a control file such as test.ctl without specifying the datafile 
and using sql loader as

 sqlldr user/passwd data=test_1_2008_07_04.csv control=test.ctl log=test_1_2008_07_04.csv.log 
bad=test1.bad discard=test_1_2008_07_04.csv.dsc
rows=1000 ...



2): use the external table  test_ext...;

create a extern table based on one file.

create a PL/SQL procedure that alters the external table's location accordingly
 
create or replace procedure
load_files(p_file in varchar2)
IS
 v_file_bad  varchar2(200);
 v_file_dsc  varchar2(200);
 v_file_log  varchar2(200);

BEGIN
 v_file_bad := replace(p_file, '.csv','')||'.bad';
 v_file_dsc := replace(p_file, '.csv','')||'.dsc';
 v_file_log := replace(p_file, '.csv','')||'.log';

  execute immediate '
   alter table test_ext
   ACCESS PARAMETERS (
        RECORDS DELIMITED BY newline
        BADFILE   '''||v_file_bad||'''
        DISCARDFILE '''||v_file_dsc||'''
        LOGFILE '''||v_file_log||'''
        FIELDS TERMINATED BY ''|''
      )
   LOCATION ('''||p_file||''')
  )'
;
  
  
   MERGE INTO target a
  USING test_ext b
    ON (a.id = b.id)
  WHEN MATCHED THEN
    UPDATE SET a.col = b.col
  WHEN NOT MATCHED THEN
    INSERT (...)
    VALUES (...);
  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;
    RAISE;
END;

then I can call the procedure:

SQL> exec load_files('test_1_2008_07_04.csv');

this way I have more control over data validation before I load the data into the target table.


Both methods can load the files into the table.

My questions are:

1: which of the two methods would you choose?

2: log files:
Unlike the SQL*Loader log files, which show how many rows were loaded and not loaded, the log file for an external table does not record how many records were loaded successfully; it only shows the format and when the log file was accessed. Do you know how to record the rows loaded in external table log files?

Thanks,













that means I have to pass the file name into it.




Tom Kyte
July 07, 2008 - 11:15 am UTC

option 3:

utl_file.frename(to xxxx)
load xxx
utl_file.frename(back)

that way - no sql injection issues (you have to look at p_file and verify it is a file before you concat it into a string like that)
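A sketch of that rename approach (the procedure body and column names are illustrative; it assumes DATA_DIR is the directory object and that the external table test_ext points at the fixed file name 'load.csv'):

```sql
CREATE OR REPLACE PROCEDURE load_files( p_file IN VARCHAR2 )
IS
BEGIN
  -- rename the incoming file to the fixed name the external table expects
  UTL_FILE.FRENAME( 'DATA_DIR', p_file, 'DATA_DIR', 'load.csv' );

  MERGE INTO target a
  USING test_ext b
     ON (a.id = b.id)
   WHEN MATCHED THEN UPDATE SET a.col = b.col
   WHEN NOT MATCHED THEN INSERT (id, col) VALUES (b.id, b.col);

  -- rename it back so the original file name is preserved
  UTL_FILE.FRENAME( 'DATA_DIR', 'load.csv', 'DATA_DIR', p_file );
END;
/
```

Because p_file is only ever passed to utl_file.frename, it is never concatenated into dynamic SQL, which is what removes the injection concern.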


please:

EXCEPTION
WHEN OTHERS THEN
ROLLBACK;
RAISE;
END;


do not do that. plsql is already atomic - no need to rollback, just let the error raise out to the client. All you do with your block is hide the source of the error!!!


ops$tkyte%ORA10GR2> create table t ( x int constraint t_check check (x > 0) );

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> create or replace procedure p1
  2  as
  3  begin
  4          insert into t values ( 1 );
  5          insert into t values (-1 );
  6  end;
  7  /

Procedure created.

ops$tkyte%ORA10GR2> create or replace procedure p2
  2  as
  3  begin
  4          insert into t values ( 1 );
  5          insert into t values (-1 );
  6  exception
  7          when others
  8          then
  9                  rollback;
 10                  raise;
 11  end;
 12  /

Procedure created.

ops$tkyte%ORA10GR2> create or replace procedure p3
  2  as
  3  begin
  4          insert into t values ( 1 );
  5          insert into t values (-1 );
  6  exception
  7          when others
  8          then
  9                  -- log_error();
 10                  null;
 11  end;
 12  /

Procedure created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> exec p1
BEGIN p1; END;

*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.T_CHECK) violated
ORA-06512: at "OPS$TKYTE.P1", line 5
ORA-06512: at line 1


ops$tkyte%ORA10GR2> select * from t;

no rows selected

ops$tkyte%ORA10GR2> exec p2
BEGIN p2; END;

*
ERROR at line 1:
ORA-02290: check constraint (OPS$TKYTE.T_CHECK) violated
ORA-06512: at "OPS$TKYTE.P2", line 10
ORA-06512: at line 1


ops$tkyte%ORA10GR2> select * from t;

no rows selected

ops$tkyte%ORA10GR2> exec p3

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> select * from t;

         X
----------
         1



I included p3 to show why I despise when others - people use it to hide errors, thus changing (breaking) the atomic quality of the procedure - removing the A from ACID.


excellent

Peter, July 07, 2008 - 3:33 pm UTC

You are right, I should remove the exception part.
The option 3 is also very nice one. Thank you !

Have you compared the log files between SQL*Loader and external tables? The log file from SQL*Loader shows more details about the loading process, such as how many rows were loaded, formatting, timing, etc.; the external table does not seem to do the same job. As the external table uses the same loader driver, shouldn't the log file content be the same? Is there any way to make the external table log file show as much detail as SQL*Loader does?


Thanks again.



Tom Kyte
July 07, 2008 - 3:36 pm UTC

with the external table, the logging can be as nice as you like.

You write whatever you want.
You know exactly how many records YOU loaded.
You know when you started, when you ended

You can put all of this stuff into a nifty table - much better than having to parse a log. You can log an incredible amount of information, in a single line of SQL.


remember, the external table is not just about "a single time load of a file", it is a much more general purpose thing - you can "query" an external table - never having loaded it, to report "number of rows loaded, time to load" doesn't make sense in that context.
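That kind of self-logging load can be a couple of statements; a sketch (target_table, load_log, and the file name are illustrative):

```sql
-- A single INSERT..SELECT loads from the external table;
-- SQL%ROWCOUNT then records how many rows were loaded.
DECLARE
  l_start  DATE := SYSDATE;
  l_loaded NUMBER;
BEGIN
  INSERT INTO target_table
  SELECT * FROM test_ext;
  l_loaded := SQL%ROWCOUNT;

  INSERT INTO load_log( file_name, rows_loaded, started, finished )
  VALUES ( 'test_1_2008_07_04.csv', l_loaded, l_start, SYSDATE );
  COMMIT;
END;
/
```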


excellent

Peter, July 07, 2008 - 3:47 pm UTC

Thanks !!

External Tables with Default Values

MK, October 18, 2008 - 8:06 pm UTC

I have just got in a request from my client that is using CSV files to load values into database tables. I use external tables to load the data into the target tables. They asked me if it was possible to fill up the missing field values with the DEFAULT value defined for that column. I currently use the "missing field values are NULL" clause in my external table definition.

For ex: the table has a column defined as
XYZ number(10) default 100 not null

and XYZ was not included in the CSV file (it was not exported after this column was added to the database), can the external table stamp the XYZ column in every row of the target table with the value 100? Is this a reasonable request, or should I force them to always generate new CSV exports when the table DDL is altered?

Thanks in advance.


Tom Kyte
October 18, 2008 - 11:14 pm UTC

rather than play with the external table, I'm going to take a shortcut and tell you what I would do first and foremost.


create view........


use a view to do whatever sort of post processing on the columns you want - it is by far the easiest and most straightforward and flexible approach.
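A sketch of that view approach (names are illustrative; it assumes the external table maps the absent XYZ field to NULL via "missing field values are null"):

```sql
-- The view turns the NULLs produced by the missing field
-- into the column's default value.
CREATE OR REPLACE VIEW v_load AS
SELECT col_a,
       col_b,
       NVL( xyz, 100 ) AS xyz
  FROM ext_load_table;

-- then load from the view instead of the external table:
-- INSERT INTO target SELECT * FROM v_load;
```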

I am lost...

Vincent, January 16, 2009 - 11:21 pm UTC

Tom,
I appreciate your help to the Oracle community. I am lost with the following problem.

i have test data like below (TEST.txt)

1|23.10-
1|24.00-

I have the external table defined below

create table t1
(
col1 number,
col2 number
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY LD_DIR
ACCESS PARAMETERS
( records delimited BY newline
fields terminated BY '|'
RTRIM
missing field VALUES are NULL
( col1,
col2 MINUSSIGN "99999.99mi"
)
)
LOCATION (GG_STAGE_DIR:'TEST.txt')
)
REJECT LIMIT 0
NOPARALLEL
NOMONITORING;


When i query the external table i want the data to be like
below(all numbers)

select * from t1

COL1 COL2
---------- ----------
1 -23.1
1 -24


I am struggling with how to specify the transformation within the external table definition to convert '23.10-' to -23.1.

Any thoughts?
Tom Kyte
January 17, 2009 - 8:40 am UTC

ops$tkyte%ORA10GR2> create table t1
  2  (
  3  col1 number,
  4  col2 varchar2(20)
  5  )
  6  ORGANIZATION EXTERNAL
  7    (  TYPE ORACLE_LOADER
  8       DEFAULT DIRECTORY dir
  9       ACCESS PARAMETERS
 10       ( records delimited  BY newline
 11         fields  terminated BY '|'
 12         RTRIM
 13         missing field VALUES are NULL
 14      ( col1,
 15        col2
 16      )
 17      )
 18       LOCATION ('test.txt')
 19    )
 20  REJECT LIMIT 0
 21  NOPARALLEL
 22  NOMONITORING;

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select col1, to_number(col2,'9999999.9999999s') col2 from t1;

      COL1       COL2
---------- ----------
         1      -23.1
         1        -24


create a view if you don't want to select the to_number()

date cache

A reader, January 23, 2009 - 9:23 am UTC

Hi,

we are receiving the following error while loading data using external tables.

Date Cache Statistics for table TPTY_PAYR_FEED_LOAD
Date conversion cache disabled due to overflow (default size: 1000)

How can we change the size of the date cache?

Thanks

Tom Kyte
January 23, 2009 - 9:44 am UTC

are you sure you are not using sqlldr?

external table

A reader, January 26, 2009 - 11:02 am UTC

hi,

yes, it's an external table where the type is ORACLE_LOADER, so indirectly the SQL*Loader driver is used.

thanks

Tom Kyte
January 28, 2009 - 8:01 am UTC

post example please. give us the create. not sure what you mean. if you mean "oracle_loader", that is not sqlldr.
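For what it is worth, the oracle_loader access parameters do accept a DATE_CACHE clause (the same mechanism sqlldr uses), so the cache size can be set in the external table definition; a sketch of where it would go:

```sql
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
  DATE_CACHE 5000        -- raise from the default of 1000; 0 disables it
  FIELDS TERMINATED BY ','
)
```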

Select works but update doesn't

A reader, January 27, 2009 - 2:33 am UTC

Login as A
1. select * from a.external_table -- works fine
2. update B.some_table -- works fine

But when I do

update B.some_table set (..)=(select ... from a.external_table ...)

I get 'file xyz in directory my_dir not found'

This is really strange. Any ideas? Thanks




Tom Kyte
January 28, 2009 - 8:22 am UTC

got code?

we really need an example we can cut and paste and reproduce with.

Deleting log files for external tables

Lio, March 02, 2009 - 9:07 am UTC

Log files created by external tables are locked by the process oracle.exe (Oracle is installed on Windows Server 2003).
Even if I close the session used to query the external table, and even if I drop the external table, the log file is still locked.
Is there a way to remove the lock on external table log files without restarting the Oracle server?
I need to delete the external table log files or clean up their content.
Tom Kyte
March 03, 2009 - 3:26 pm UTC

I only see an issue with an older release (9.2) with the discard file - but explicitly NOT the log file.

I'll have to refer you to support for this, they'll collect the needed information to diagnose with - I do not have any windows databases to play with at all.

Solved deleting log files

Lio, March 02, 2009 - 10:54 am UTC

Excuse me for my previous post.
The problem was caused by the fact that I had forgotten to add %p or %a in the filename specification when using parallel access.
I got corrupted, locked files.
This behaviour is documented:
"If %p or %a is not used to create unique filenames for output files and an external table
is being accessed in parallel, then output files may be corrupted or agents may be
unable to write to the files"

But without explicit error messages it is hard to solve this kind of problem.
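For reference, a sketch of output file names carrying %p (process ID; %a is the agent number) so each parallel agent writes its own file (the directory name is a placeholder):

```sql
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
  BADFILE log_dir:'ext_%p.bad'
  LOGFILE log_dir:'ext_%p.log'
  FIELDS TERMINATED BY ','
)
```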
Tom Kyte
March 03, 2009 - 3:31 pm UTC

windows didn't give you an explicit error message like "file not found"?

Deleting log files for external tables

Lio, March 05, 2009 - 10:10 am UTC

Using %p or %a in the filename does not solve the problem.
No errors are generated.
Investigating with the program "Process Explorer", I found that the thread that generated the locked log file no longer exists, but the process "oracle.exe" does not close the related file handle (I can see it).
Only some log files are locked (for example, 10 out of a total of 16).
Do you think it could be a bug in the Windows process oracle.exe?
Tom Kyte
March 05, 2009 - 2:02 pm UTC

please utilize support for this.

Deleting log files for external tables

Lio, March 05, 2009 - 10:52 am UTC

On Metalink I found Bug 4007026 - Accessing an external table can leak file descriptors.
But I'm using Oracle Database 10g Enterprise Edition version 10.2.0.1.0, and that bug is declared fixed in this version.

Tom Kyte
March 05, 2009 - 2:08 pm UTC

that bug does not seem relevant. that had to do with resolving and opening the tnsnames.ora file with distributed stuff.

The subject gives that away:

Abstract: DBLINK SERVICE NAME LOOKUP FAILS AFTER A LARGE NO. OF SELECTS ON EXTERNAL TABLE


and the body of the bug supports it. It had to do with bad/discard files, not log files.

can we create external table dynamically

A reader, May 21, 2009 - 9:34 am UTC

Hi Tom, I have over 400 tables that need to be populated using the external table approach.
I am wondering if I can write one query and dynamically populate that many tables?
Tom Kyte
May 23, 2009 - 1:16 pm UTC

insufficient data here, would need a bit more 'detail'

is there one external table? or 400? or what -

sankara, June 03, 2009 - 5:26 am UTC


External tables

A reader, October 30, 2009 - 12:46 pm UTC

Hi Tom,

We created an external table in our database; when we do a select we get the error below:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
KUP-01005: syntax error: found "territory": expecting one of: "double-quoted-string, identifier, single-quoted-string"
ORA-06512: at SYS.ORACLE_LOADER, line 19

CREATE TABLE "GEMINISTAGING"."UDX_TCY_UNITS"
( "OUT_NUMBER" VARCHAR2(8 BYTE),
"TERRITORY" VARCHAR2(8 BYTE),
"SEGMENT" VARCHAR2(10 BYTE),
"CAT" VARCHAR2(3 BYTE),
"GROUP" VARCHAR2(3 BYTE),
"BUCKET01" NUMBER,
"BUCKET02" NUMBER,
"BUCKET03" NUMBER,
"BUCKET04" NUMBER,
"BUCKET05" NUMBER
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY "UNIX"
ACCESS PARAMETERS
( records delimited by newline
fields terminated by '|'
missing field values are null
(
OUT_NUMBER,
TERRITORY,
SEGMENT,
CAT ,
GROUP ,
BUCKET01,
BUCKET02,
BUCKET03,
BUCKET04,
BUCKET05
) )
LOCATION
( "UNIX":'udx_tcy_unit_200909.txt'
)
)
REJECT LIMIT UNLIMITED;

* I checked the directory; it's working fine.
* I checked the data; it's correct.

UNIX> more udx_tcy_unit_200909.txt
02118|81104||01|025|4.00|4.00|8.00|12.00|4.00
02118|81104||01|026|4.00|4.00|16.00|8.00|8.00
02118|81104||01|041|0.00|0.00|0.00|0.00|0.00
02118|81104||01|081|0.00|0.00|0.00|0.00|0.00

When I change the column name from TERRITORY to TERR_NUMBER (or anything else) it loads records into the external table.
Why does the loader give an error for the column name "TERRITORY"?

Thanks in advance.

Thanks,
Chandru
Tom Kyte
November 09, 2009 - 11:24 am UTC

territory is a keyword, you have to quote it to use it there.

ops$tkyte%ORA10GR2> CREATE TABLE "UDX_TCY_UNITS"
  2     (    "OUT_NUMBER" VARCHAR2(8 BYTE),
  3      "TERRITORY" VARCHAR2(8 BYTE),
  4      "SEGMENT" VARCHAR2(10 BYTE),
  5      "CAT" VARCHAR2(3 BYTE),
  6      "GROUP" VARCHAR2(3 BYTE),
  7      "BUCKET01" NUMBER,
  8      "BUCKET02" NUMBER,
  9      "BUCKET03" NUMBER,
 10      "BUCKET04" NUMBER,
 11      "BUCKET05" NUMBER
 12     )
 13     ORGANIZATION EXTERNAL
 14      ( TYPE ORACLE_LOADER
 15        DEFAULT DIRECTORY "UNIX"
 16        ACCESS PARAMETERS
 17        ( records delimited by newline
 18           fields terminated by '|'
 19           missing field values are null
 20           (
 21      OUT_NUMBER,
 22      "TERRITORY",
 23      SEGMENT,
 24      CAT ,
 25      GROUP ,
 26      BUCKET01,
 27      BUCKET02,
 28      BUCKET03,
 29      BUCKET04,
 30      BUCKET05
 31      )
 32          )
 33        LOCATION
 34         ( 'data.txt'
 35         )
 36      )
 37     REJECT LIMIT UNLIMITED
 38  /

Table created.

ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2>
ops$tkyte%ORA10GR2> select * from udx_tcy_units;

OUT_NUMB TERRITOR SEGMENT    CAT GRO   BUCKET01   BUCKET02   BUCKET03
-------- -------- ---------- --- --- ---------- ---------- ----------
  BUCKET04   BUCKET05
---------- ----------
02118    81104               01  025          4          4          8
        12          4

02118    81104               01  026          4          4         16
         8          8

02118    81104               01  041          0          0          0
         0          0

02118    81104               01  081          0          0          0
         0          0




or, just don't mention the column list the second time, it is not needed for this create at all.

DECIMAL SEPARATOR

Carles, January 25, 2010 - 10:37 am UTC

First of all, forgive my English.

I have a problem with external tables and the decimal separator.

In my file the decimal separator is ',' and I cannot get the external table to display the values correctly.

Do you have any idea?

Thanks
Tom Kyte
January 29, 2010 - 8:33 am UTC

why don't you share with us the details? your nls_settings and external table - make it small small small (the example), for example if you have 500 columns- we probably only need 2 or 3 to demonstrate the issue!
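For what it's worth, one common approach (a sketch with hypothetical directory, table, and file names) is to read the field as a character column and convert it in SQL, naming the decimal separator explicitly so the result does not depend on session NLS settings:

```sql
-- Hypothetical external table: the numeric field arrives as text
-- with ',' as the decimal separator (e.g. '1234,56').
CREATE TABLE amounts_ext (
  amount_txt VARCHAR2(30)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY my_dir
  ACCESS PARAMETERS ( RECORDS DELIMITED BY NEWLINE
                      FIELDS TERMINATED BY '|' )
  LOCATION ('amounts.txt')
);

-- Convert in the query, stating that ',' is the decimal character
-- and '.' the group separator.
SELECT TO_NUMBER(amount_txt, '999999D99',
                 'NLS_NUMERIC_CHARACTERS='',.''') AS amount
  FROM amounts_ext;
```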

Same error different table

David, March 18, 2010 - 8:16 am UTC

I am experiencing some of the same issues described here, but I am not able to see how to fix them:

Here is the SQL:

Drop table EXT_q_OBJ;
Create Table EXT_q_OBJ
(
 q_Territory char(16),
 q_Province char(4),
 q_Product char(12),
 q_Month char(8),
 q_obj  char(12),
 q_objReason char(32)
 )
Organization External
(
 Type Oracle_Loader
 Default Directory EXT_GPM
 Access Parameters
 (
  Records Delimited by Newline
  LOGFILE 'q_ext.log'
  Fields Terminated by ',' (
   Territory char(32),
   Province char(32),
   Product  char(32),
   Month  char(32),
   Obj  char(32),
   ObjReason char(32)
  )
 )
Location ( 'q_obj.csv' )
);

Here is a data sample:

Territory,Province,Product,Month,Obj,ObjReason
1000,AB,qwerty,10-Jan,2479.07,Initial 2010
1201,AB,qwerty,10-Jan,160613.07,Initial 2010
1202,AB,qwerty,10-Jan,124016.41,Initial 2010
1203,AB,qwerty,10-Jan,182093.21,Initial 2010
1204,AB,qwerty,10-Jan,175798.09,Initial 2010
1000,AB,qwerty,10-Feb,2754.92,Initial 2010
1201,AB,qwerty,10-Feb,178484.98,Initial 2010
1202,AB,qwerty,10-Feb,137816.1,Initial 2010
1203,AB,qwerty,10-Feb,202355.29,Initial 2010
1204,AB,qwerty,10-Feb,195359.7,Initial 2010

And here is the error:

SQL> select q_Territory from EXT_q_OBJ;
select q_Territory from EXT_q_OBJ
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "territory": expecting one of:
"double-quoted-string, identifier, single-quoted-string"
KUP-01007: at line 4 column 4
ORA-06512: at "SYS.ORACLE_LOADER", line 19
ORA-06512: at line 1

The log file does not get created.

Tom Kyte
March 18, 2010 - 8:55 am UTC

put territory in double quotes (a single character that is the double quote)

"TERRITORY" char(32),


it is an NLS 'word'. Like a keyword. Using a quoted identifier will resolve that.

Using %p for unique log & bad files

Kari, April 16, 2010 - 2:11 pm UTC

I am using your suggestion of %p to create a unique logfile & badfile name. I need to email these files back to the user, so I need to know the exact name. My files are created with names like this: EXT_INT_DTI_EE_1608_1908.LOG and EXT_INT_DTI_EE_1608_1908.BAD. I have determined that 1908 is the spid, but I can't figure out where the 1608 is coming from. (I'm running on Windows if that helps). I've looked at v$session & v$process with no luck.
Tom Kyte
April 16, 2010 - 5:06 pm UTC

it seems to be related to the instance itself - it remained constant unless I shutdown the service and started it again.

I do not see it with a cursory glance at the instance level v$ stuff.


But - other than that - I don't really know what it is... Searching on "%p" is futile.

You'll want to open an SR on that one to file a doc bug and get the developers to tell you what it is and where it comes from.

Followup on External Tables

Kari, May 26, 2010 - 8:46 am UTC

I did get an answer from Support. The value is the O/S pid for the instance. On Windows, the value is not stored in the data dictionary. To solve the issue, I created an executable that gets the value from Task Manager and writes it to a flat file. I have an "after instance startup" trigger that uses dbms_scheduler to call the executable. I use a select from an external table to get the value. You just gotta love Windows!
Tom Kyte
May 26, 2010 - 9:05 am UTC

thanks for the followup - I could not figure it out myself :)


and in response to your last comment - no, no i don't.....

bad file permissions

steven, June 08, 2010 - 7:55 am UTC

When Oracle generates the logfile and badfile during an external load on the unix server, they are created with user oracle, group dba, and permissions -rw-r--r--. Is there a way to change the group or permissions on these files without having to log in as the oracle account in unix? We run production scripts as a different user, prods, with group prodf. Oracle is a member of the prodf group. I would like to be able to work on these files as the prods user.
Tom Kyte
June 09, 2010 - 8:58 am UTC

you could do some things at the OS level - umask, setting the perms of the directory (this page talks about setting the group from the directory)
http://www.dartmouth.edu/~rc/help/faq/permissions.html
but - I would suggest this:

prodf not be allowed to log into the database server. prodf accesses these files via bfiles or external tables themselves. It would be easy to

"select * from badfile"

in sqlplus or any other tool to see the contents of the bad or logfile. Infinitely more secure than letting people have access to the server itself and pretty flexible for searching/reviewing the logs from anywhere in the world.
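Tom's "select * from badfile" suggestion can be sketched as an external table over the log file itself (directory and file names hypothetical), following the same pattern commonly used for reading the alert log:

```sql
-- Hypothetical sketch: expose a load log as a one-column read-only
-- "table" so it can be inspected from any SQL client, no server login.
CREATE TABLE load_log_ext (
  line VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY log_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    NOBADFILE NOLOGFILE
    FIELDS
    REJECT ROWS WITH ALL NULL FIELDS
    ( line POSITION(1:4000) )
  )
  LOCATION ('load.log')
)
REJECT LIMIT UNLIMITED;

SELECT line FROM load_log_ext;
```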

file permissions

steven, June 09, 2010 - 2:47 pm UTC

I forgot to mention we are on 11gR1. I load ~500 files a day. The external table option is interesting. The file names are different every day, every hour, but this is easy to handle. I will try this approach and see how it works out.
With 11gR2 I have another solution I have been thinking about. The DBAs set the umask for Oracle to 017, and I still see -rw-r--r-- for the log and bad files.
Tom Kyte
June 10, 2010 - 12:16 pm UTC

did they set the umask and restart - and are they sure the umask is not getting set back - and did you check out the other link, that has promise as well

How to read if there is a new file

Sujith, July 27, 2010 - 8:52 am UTC

Hi,

I hope this question is within the limits of this thread.

We are on 10gR2, and the situation is this: we use an external table to bring some data into the database. We have a job that runs every minute to pull in the data. We want to read the file (external table) only if it has been modified or replaced since we last read it.

I guess if there is a way to read the timestamp of the file, then we can have some pl/sql logic to compare the timestamps (assuming I save the last read time in the database).

Do you have any ideas to accomplish this??

Thanks and appreciate your help always...

Sujith
Tom Kyte
July 27, 2010 - 12:50 pm UTC

sorting order

James Su, November 18, 2010 - 7:58 pm UTC

Hi Tom,
Is there a way to sort the data in its original order as it appears in the file? Will a select * from ext_table without order by work? (I know you hate sorting without order by :)
Tom Kyte
November 19, 2010 - 9:25 am UTC

for a serial, non-parallel query, it'll read sequentially - parallel invalidates all of that of course.

file watcher

Mike Kutz, December 20, 2010 - 5:09 pm UTC

I'm looking at using 11gR2's 'file watcher' feature to auto import data upon creation.

It seems that using an External Table then insert...select would be the logical method but every file created will have a different name.

How do I handle the different names?
Or is it ok to create an external table that will exist for the <1 min it takes to load the data?

Tom Kyte
December 21, 2010 - 6:52 am UTC

Just alter the table to rename the file associated with it. You don't have to drop and recreate.

http://docs.oracle.com/docs/cd/E11882_01/server.112/e17118/statements_3001.htm#i2104234
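The rename Tom describes is a one-line DDL; a sketch (table and file names hypothetical):

```sql
-- Hypothetical sketch: point the existing external table at the newly
-- arrived file before the insert...select, instead of re-creating it.
ALTER TABLE incoming_ext LOCATION ('feed_20101221.csv');

INSERT /*+ APPEND */ INTO incoming
SELECT * FROM incoming_ext;
```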

Difference between external table and UTL_file

suriya, May 11, 2011 - 6:55 am UTC

Hi Tom,

What's the difference between an external table and UTL_FILE? I have some knowledge of both, but my question is: both of them read data from files stored outside the database in the OS, so what's the difference? Is there any performance difference?
Tom Kyte
May 11, 2011 - 10:37 am UTC

just about night and day.


utl_file is an API to read and write (amongst other file operations like rename, remove...) arbitrary files in the file system.

an external table is the rather unique ability to "select *" from a file - you can use a file as a table, in parallel, fast, using all of sql's power. You can also export using data pump with an external table (you can create an external table as select).


If you want to slow by slow process data - a line at a time, use utl_file.

If you want to process a file really fast using SQL, use an external table.
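The contrast can be sketched in a few lines (directory, file, and table names hypothetical):

```sql
-- Row-by-row with UTL_FILE: open, loop, parse each line yourself.
DECLARE
  f    utl_file.file_type;
  line VARCHAR2(4000);
BEGIN
  f := utl_file.fopen('MY_DIR', 'data.txt', 'r');
  LOOP
    BEGIN
      utl_file.get_line(f, line);
    EXCEPTION
      WHEN no_data_found THEN EXIT;  -- end of file
    END;
    -- ... parse and process one line at a time ...
  END LOOP;
  utl_file.fclose(f);
END;
/

-- Set-based with an external table (hypothetical table over the same
-- file): the whole file is just another SQL row source.
SELECT COUNT(*), MAX(some_col)
  FROM data_ext;
```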

Difference between external table and UTL_file

suriya, May 11, 2011 - 11:59 pm UTC

Thanks Tom, your answer was very useful. Now I can differentiate between them.

SQL*Loader-702: Internal error - ulxgsw: not logical rec

Rajeshwaran, Jeyabal, August 22, 2011 - 10:18 pm UTC

Tom:

Can you please help me why I am getting this error message?

LOAD DATA 
INFILE *
TRUNCATE
INTO TABLE staging
WHEN NAME = 'rn' AND VAL <> ''
FIELDS TERMINATED BY ',' 
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(  
 RECTYPE,
 ID,
 NAME, 
 VAL 
) 
INTO TABLE staging
WHEN NAME = 'rd' AND VAL <> ''
FIELDS TERMINATED BY ',' 
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(  RECTYPE position(1), 
 ID,
 NAME,
 VAL 
) 


c:\>sqlldr userid=rajesh/oracle control=c:\TRASH\ctl.txt log=c:\TRASH\log.txt external_table=generate_only

SQL*Loader: Release 11.2.0.1.0 - Production on Mon Aug 22 22:14:02 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

SQL*Loader-702: Internal error - ulxgsw: not logical rec

c:\>
c:\>

Tom Kyte
August 30, 2011 - 3:07 pm UTC

you cannot do the multiple into's with an external table.

Just use the first into - remove the when clause.

You'll be using a where clause on the select from the external table.


ops$tkyte%ORA11GR2> !cat t.ctl
LOAD DATA 
INFILE *
TRUNCATE
INTO TABLE staging
FIELDS TERMINATED BY ',' 
OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(     
    RECTYPE,
    ID,
    NAME, 
    VAL 
) 

ops$tkyte%ORA11GR2> !sqlldr / t.ctl external_table=generate_only

SQL*Loader: Release 11.2.0.2.0 - Production on Tue Aug 30 16:06:38 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.


ops$tkyte%ORA11GR2> !cat t.log

SQL*Loader: Release 11.2.0.2.0 - Production on Tue Aug 30 16:06:38 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
.....

CREATE TABLE statement for external table:
------------------------------------------------------------------------
CREATE TABLE "SYS_SQLLDR_X_EXT_STAGING" 
(
  "RECTYPE" VARCHAR2(20),
  "ID" NUMBER,
  "NAME" VARCHAR2(20),
  "VAL" VARCHAR2(20)
)
ORGANIZATION external 
(
  TYPE oracle_loader
  DEFAULT DIRECTORY SYS_SQLLDR_XT_TMPDIR_00000
  ACCESS PARAMETERS 
  (
    RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
    BADFILE 'SYS_SQLLDR_XT_TMPDIR_00000':'t.bad'
    LOGFILE 't.log_xt'
    READSIZE 1048576
    SKIP 14
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM 
    MISSING FIELD VALUES ARE NULL 
    REJECT ROWS WITH ALL NULL FIELDS 
    (
      "RECTYPE" CHAR(255)
        TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      "ID" CHAR(255)
        TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      "NAME" CHAR(255)
        TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
      "VAL" CHAR(255)
        TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
    )
  )
  location 
  (
    't.ctl'
  )
)REJECT LIMIT UNLIMITED



and you'll just select * from that_externaL_table where name in ('rn','rd') and val is not null;


Loading External table's data in multiple parts

gopal misra, February 27, 2012 - 7:33 am UTC

Hi Tom, this is a really adorable site for me, with top-class examples for understanding how Oracle actually functions.
Thank you.

We are using External table on oracle 11g Exadata Machine. These External tables are accessing .GZ [CSV] files with preprocessor clause. Now we are receiving big CSV files .GZ like 1-2 TB files.

With Direct_path method + Force Parallel DML + Append + Nologging,

it is taking a good amount of time to load. Is there any method through which we can break this single load into multiple loads and commit in between? These files do not have any unique columns. We tried to figure out a solution around the SKIP clause, but failed because we need to break the full load into multiple parts; SKIP will only skip NNNNN lines from the start.
Please suggest.
Thank you for considering this review.
Tom Kyte
February 28, 2012 - 7:00 am UTC

why do you want to commit in between? that would make it take longer?


can the thing that generates this gz file create a few (many) gz files instead? I've done this on exadata (1tb of compressed gz data) and it only takes a few minutes.

the problem is - you are serializing around gunzip right now - with only one file...
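Splitting the input into several compressed files can be sketched like this (11g preprocessor clause; directory, table, and file names hypothetical): each location entry is decompressed by its own agent, so gunzip no longer serializes the load.

```sql
-- Hypothetical sketch: several .gz files listed as separate locations;
-- each parallel agent runs the preprocessor on its own file.
CREATE TABLE big_load_ext (
  c1 NUMBER,
  c2 VARCHAR2(100)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'zcat'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('part01.csv.gz', 'part02.csv.gz',
            'part03.csv.gz', 'part04.csv.gz')
)
REJECT LIMIT UNLIMITED
PARALLEL 4;
```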

continuation to bug user_external_tables

Biswaranjan, May 15, 2012 - 3:27 pm UTC

Hi Tom,

I was reading one comment on this page ("Bug in USER_EXTERNAL_TABLES", August 16, 2006).

And I tried to execute some statements as below (in Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):


CREATE TABLE T1(A NUMBER,B CLOB);
table T1 created.
CREATE VIEW T1_V AS SELECT A,DECODE(A,1,NULL,2,B) B FROM T1;
view T1_V created.
DESC T1_V

Name Null Type
---- ---- --------------
A NUMBER
B VARCHAR2(4000)

DROP VIEW T1_V;

view T1_V dropped.

CREATE VIEW T1_V AS SELECT A,DECODE(A,2,B,1,NULL) B FROM T1;
view T1_V created.

DESC T1_V
Name Null Type
---- ---- ------
A NUMBER
B CLOB

DROP VIEW T1_V;

view T1_V dropped.

CREATE VIEW T1_V AS SELECT A,DECODE(A,3,null,2,B,1,NULL) B FROM T1;

DESC T1_V
Name Null Type
---- ---- --------------
A NUMBER
B VARCHAR2(4000)

####################
So from the above output I understood that only if the first decode condition's value is a clob column does it set the datatype to clob; otherwise it is varchar2(4000).

Then I tried with "case" as below.

DROP VIEW T1_V;
view T1_V dropped.

CREATE VIEW T1_V AS SELECT A,(CASE WHEN A=1 THEN NULL WHEN A=2 THEN B END) B FROM T1;
view T1_V created.

DESC T1_V
Name Null Type
---- ---- ------
A NUMBER
B CLOB


DROP VIEW T1_V;
view T1_V dropped.

CREATE VIEW T1_V AS SELECT A,(CASE WHEN A=2 THEN B WHEN A=1 THEN null END) B FROM T1;
view T1_V created.

DESC T1_V
Name Null Type
---- ---- ------
A NUMBER
B CLOB

CREATE VIEW T1_V AS SELECT A,(CASE WHEN A=5 THEN NULL WHEN A=2 THEN B WHEN A=1 THEN null END) B FROM T1;
view T1_V created.
DESC T1_V
Name Null Type
---- ---- ------
A NUMBER
B CLOB


I have done these tests after inserting values into the "t1" table as well, and found the same results.
For the above various forms of "case" it sets the datatype to clob.


Can we say the use of case is a good idea over decode for this case?

Thanks as always,
Biswaranjan.


Tom Kyte
May 16, 2012 - 12:59 am UTC

Can we say the use of case is a good idea over decode for this case??


Not really - you just need to know that the first return type from DECODE sets the return type from DECODE.

If you return NULL first - it is assumed to be a varchar2

If you return CLOB first - it'll be CLOB for the return type
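Given that rule, one way to keep DECODE and still get a CLOB column is to make the first returned value an explicit CLOB; a sketch against the T1 table from the example above:

```sql
-- TO_CLOB(NULL) makes the first result expression a CLOB, so the whole
-- DECODE (and therefore the view column) is typed as CLOB, even though
-- the first branch still returns a null.
CREATE OR REPLACE VIEW t1_v AS
SELECT a,
       DECODE(a, 1, TO_CLOB(NULL), 2, b) AS b
  FROM t1;
```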


continuation to my last post.

Biswaranjan, May 16, 2012 - 2:39 am UTC

Hi tom,


"If you return NULL first - it is assumed to be a varchar2

If you return CLOB first - it'll be CLOB for the return type"

I understood those two lines yesterday itself; the reason I asked whether the use of "case" is better than decode in that particular case is that wherever we place the clob value in "case" it returns "clob", but we have to place the "clob" at the very first position in decode to return "clob".

But we can achieve the same with decode by placing the clob first.

But anyway, thanks for the reply.


thanks again,
Biswaranjan
Tom Kyte
May 16, 2012 - 5:55 am UTC

If you know how decode works, decode will be marginally faster than case.

using case to "fix" this isn't necessary, so, no, I would not say "case is superior for this reason"

continuation to my last post

Biswaranjan, May 16, 2012 - 8:36 am UTC

Yes, you are right Tom. Thanks for your valuable time and
answer.
thanks,
Biswaranjan.

Data visibility with external tables

A reader, December 17, 2012 - 10:25 am UTC

Hi Tom,

Thanks for the great source of information you provide to the oracle-community!

There is a topic with external tables I have never seen specified in the oracle docs (or somewhere else) so far: at which time will changed content in an external file be visible to reading processes? Your quote from the beginning of this thread "... There is no locking, no concurrency controls, no nothing. ..." seems to coincide with the behaviour we have seen so far. Whenever there is a change in the external file, it seems to become visible in the next SQL - at least on our OS and configuration. Nevertheless, I couldn't find this behaviour specified in the oracle docs. So it might turn out to be the "good old C-programming experience" - say, it works for years before one detects that there is quite a problem with it...

Theoretically there may be situations when an OS (maybe in combination with an oracle shared server configuration) may prevent other oracle sessions from seeing the content of the new file. For example: an oracle process opens the file (and keeps it open). Then the file gets deleted on the OS, but as the oracle process still has this file open, the inode of this file will be preserved. A new file with the same name but new content is created. With a dedicated shadow process, a new process would probably see the new file. But I am not sure about oracle shared server or architectures like oracle on windows...

Creating or replacing a table should guarantee that from that time on only the new file will be visible. But just replacing the file seems to be rather unspecified. Does one have to issue a new DDL to have the guarantee to see the content of a new file or can one just rely on the behaviour we have seen so far (i.e. just replace the file)?
Tom Kyte
December 18, 2012 - 12:37 pm UTC

At which time will changed content in an
external file be visible to reading processes?


immediately.

in fact, a single query against an external table might see all kinds of whacky data if someone is writing to the file while we are reading it.


may prevent other oracle-sessions from
seeing the content from the new file.


well, I'll argue with your semantics here - we do not prevent them from seeing the new file, we just already have the old (now unnamed) file open already. If they want to see the new file, they can - they just have to open it.



You don't have to issue anything to "see" the new file, the file you see will always be the file that was in existence when we opened the file. simply close the cursor and reopen it and you'll see whatever file exists right then and there.

External table

A reader, April 24, 2013 - 9:58 am UTC

Hi TOM,
It's an honour to read your answers; they give fair knowledge of how Oracle functions in its own way.

There is a question I am struggling with a lot.

Can I create an Oracle directory on my local machine so that I can create an external table using it?

The scenario is as follows:
I have one CSV file from which I have to create an external table.

But the bottleneck is that I can't access the database server to place the file there and access it.

Is there any workaround to get it done?

Please help me, Tom; I have been struggling with this for the past few days.

Regards
Jyotiranjan
Tom Kyte
April 24, 2013 - 1:01 pm UTC

the file system from which a directory object in the database reads/writes from/to MUST be available on the server.

if the file system is on your client - then you must export that file system and mount it on the server in order to use it.

think about it - if the database could reach out and arbitrarily read any file system on any computer on the network - that would be a huge deal, a really big security issue.

you'll have to use sqlldr (or similar client tool) or get the file accessible to the database

external table read sequentially

A reader, April 30, 2013 - 7:20 am UTC


Followup November 19, 2010 - 9am UTC:
for a serial, non-parallel query, it'll read sequentially - parallel invalidates all of that of course.

How do you know that it will read sequentially when in non-parallel mode? Could you point me to the documentation please.
Tom Kyte
April 30, 2013 - 3:32 pm UTC

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:52733181746448#52977916329285

you can use the recnum keyword to enforce that. beyond that, I don't think it is stated.
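The recnum keyword Tom mentions can be sketched like this (directory, table, and file names hypothetical): the access driver assigns each record its ordinal position in the file, which you can then ORDER BY.

```sql
-- Hypothetical sketch: rec_no is populated by the access driver with
-- the record's position in the file, giving a sortable "line number".
CREATE TABLE lines_ext (
  rec_no NUMBER,
  line   VARCHAR2(4000)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY my_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS
    ( rec_no RECNUM,
      line   POSITION(1:4000) CHAR(4000)
    )
  )
  LOCATION ('data.txt')
);

SELECT line FROM lines_ext ORDER BY rec_no;
```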

No of files limit for external table

Nikhilesh, June 03, 2013 - 5:04 pm UTC

Dear Tom,

We have to perform bulk load of almost 15 million records into a new single table (partitioned). Data is available in almost 60K CSV files.
1) Can I create a single external table for all of these files and load them in parallel?
2) Shall I concatenate all these files into a single 100GB file and create an external table, or do a direct load using SQL*Loader?
3) What would be the wise choice for this load operation, SQL*Loader or an external table?

Thanks a lot in advance.
Tom Kyte
June 03, 2013 - 7:57 pm UTC

ops$tkyte%ORA11GR2> !oerr ora 30659
30659,0000, "too many locations specified for external table"
// *Cause:  An attempt was made to create an External Organized table with
//          more than the maximum allowable (32767) locations specified.
// *Action: Don't do that, use fewer location clauses.  Either consider
//          concatenating the input files, or, creating two external tables
//          with the input files split between them.




you'll have to have at least two external tables for this, or use an alter statement to change the file names on a single one.

other than that, just go for it - parallel will work with lots of files, external tables are easier to me than sqlldr. Just give it a go (automate the creation of things using plsql)

ops$tkyte%ORA11GR2> create or replace directory my_dir as '/tmp/test'
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
ops$tkyte%ORA11GR2>         l_output utl_file.file_type;
ops$tkyte%ORA11GR2> begin
ops$tkyte%ORA11GR2>         for i in 1 .. 60000
ops$tkyte%ORA11GR2>         loop
ops$tkyte%ORA11GR2>                 l_output := utl_file.fopen( 'MY_DIR', to_char(i,'fm00000') || '.csv', 'w' );
ops$tkyte%ORA11GR2>                 utl_file.put_line( l_output, i || ',' || chr( ascii('a')+mod(i,26) ) || ',' || (i+1) );
ops$tkyte%ORA11GR2>                 utl_file.fclose( l_output );
ops$tkyte%ORA11GR2>         end loop;
ops$tkyte%ORA11GR2> end;
ops$tkyte%ORA11GR2> /
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> declare
  2          l_template clob := q'|
  3  (
  4    a number,
  5    b VARCHAR2(10),
  6    c number
  7  )
  8  ORGANIZATION external
  9  (
 10    TYPE oracle_loader
 11    DEFAULT DIRECTORY MY_DIR
 12    ACCESS PARAMETERS
 13    (
 14      RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
 15      READSIZE 1048576
 16      FIELDS TERMINATED BY "," LDRTRIM
 17      REJECT ROWS WITH ALL NULL FIELDS
 18      (
 19                  a char(255), b char(255), c char(255)
 20      )
 21    )
 22    location
 23    (|';
 24  
 25    l_sql1 clob := 'create table et1 ' || l_template || q'| '00001.csv' |';
 26    l_sql2 clob := 'create table et2 ' || l_template || q'| '30001.csv' |';
 27  begin
 28          for i in 2 .. 30000
 29          loop
 30                  l_sql1 := l_sql1 || ',''' || to_char(i,'fm00000') || '.csv''';
 31                  l_sql2 := l_sql2 || ',''' || to_char(i+30000,'fm00000') || '.csv''';
 32          end loop;
 33          l_sql1 := l_sql1 || ') )';
 34          l_sql2 := l_sql2 || ') )';
 35  
 36          execute immediate l_sql1;
 37          execute immediate l_sql2;
 38  end;
 39  /

PL/SQL procedure successfully completed.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select count(*) from et1;

  COUNT(*)
----------
     30000

ops$tkyte%ORA11GR2> select count(*) from et2;

  COUNT(*)
----------
     30000

ops$tkyte%ORA11GR2> 


Multiple files, external table

David Aldridge, June 04, 2013 - 9:40 am UTC

I think that with 60,000 files I'd be tempted to use an external table's preprocessor to 'cat' them all and return all rows.

http://www.oracle.com/technetwork/issue-archive/2011/11-mar/o21nanda-312277.html
Tom Kyte
June 04, 2013 - 2:13 pm UTC

Nice idea.

but remember parallel query................ you want multiple inputs.... which we could do by passing not the name of a file, but some input that your script could use to figure out what set of files to cat.

ops$tkyte%ORA11GR2> create or replace directory exec_dir as '/home/tkyte'
  2  /

Directory created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> create table ET
  2  (
  3    a number,
  4    b VARCHAR2(10),
  5    c number
  6  )
  7  ORGANIZATION external
  8  (
  9    TYPE oracle_loader
 10    DEFAULT DIRECTORY MY_DIR
 11    ACCESS PARAMETERS
 12    (
 13      RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
 14          preprocessor EXEC_DIR:'cat_files.sh'
 15      READSIZE 1048576
 16      FIELDS TERMINATED BY "," LDRTRIM
 17      REJECT ROWS WITH ALL NULL FIELDS
 18      (
 19                  a char(255), b char(255), c char(255)
 20      )
 21    )
 22    location ( '0', '1', '2', '3', '4', '5', '6' )
 23  )
 24  /

Table created.

ops$tkyte%ORA11GR2> !touch /tmp/test/0

ops$tkyte%ORA11GR2> !touch /tmp/test/1

ops$tkyte%ORA11GR2> !touch /tmp/test/2

ops$tkyte%ORA11GR2> !touch /tmp/test/3

ops$tkyte%ORA11GR2> !touch /tmp/test/4

ops$tkyte%ORA11GR2> !touch /tmp/test/5

ops$tkyte%ORA11GR2> !touch /tmp/test/6

ops$tkyte%ORA11GR2> !cat cat_files.sh
#!/bin/bash

/bin/cat $**

ops$tkyte%ORA11GR2> select count(*) from et;

  COUNT(*)
----------
     60000








other preprocessor examples (work in 10.2.0.5 and up now - used to be 11.2 specific):


http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3021213500346960197

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3015912000346648463

http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2155105300346643982

Avoiding the load of null data

cmc, June 05, 2013 - 8:40 am UTC

I want to avoid loading null values in some fields of an external table. I thought I would simulate a "NOT NULL" constraint with the "LOAD WHEN" clause.



CREATE TABLE EXTERNAL_TABLE
(
FIELD1 NUMBER(2),
FIELD2 VARCHAR2(60 CHAR)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY TAB_DIRECTORY
ACCESS PARAMETERS
( records delimited by NEWLINE
CHARACTERSET AL32UTF8
string sizes are in characters
load when (FIELD1 != BLANKS and
FIELD2 != BLANKS )
badfile bad_directory:'E_EXTERNAL_TABLE%a_%p.bad'
logfile log_directory:'E_EXTERNAL_TABLE%a_%p.log'
discardfile dis_directory:'E_EXTERNAL_TABLE%a_%p.dis'
fields terminated by '|' missing field values are null (
FIELD1 integer external (2),
FIELD2 char(60)) )
LOCATION (TAB_DIRECTORY:'EXTERNAL_TABLE.txt')
)
REJECT LIMIT UNLIMITED
PARALLEL ( DEGREE DEFAULT INSTANCES DEFAULT )
NOMONITORING;


file EXTERNAL_TABLE.txt :

1|test_1
2|
3|[white space]
4|test_2

In this example, records 2 and 3 go to the discard file, but I want only the second record to go there, because the third record doesn't have a null value in the second field (it contains white space).


I tried other expressions in the "LOAD WHEN" clause:

FIELD1!= '' and
FIELD2 != ''

ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
KUP-00600: internal error, arguments [kudmmal] [18446744073709551615]
[18446744073709551615] [] []



FIELD1!= NULL and
FIELD2 != NULL

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "null": expecting one of: "blanks,
double-quoted-string, hexprefix, identifier, single-quoted-string"
KUP-01007: at line 13 column 23

please, can you help me?
Tom Kyte
June 05, 2013 - 1:59 pm UTC

my approach is to do more in sql, less in the external table definition

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> CREATE TABLE EXTERNAL_TABLE
  2  (
  3    FIELD1   NUMBER(2),
  4    FIELD2  VARCHAR2(60 CHAR)
  5  )
  6  ORGANIZATION EXTERNAL
  7  (  TYPE ORACLE_LOADER
  8     DEFAULT DIRECTORY TAB_DIRECTORY
  9     ACCESS PARAMETERS
 10     ( records delimited by NEWLINE
 11       CHARACTERSET AL32UTF8
 12       string sizes are in characters
 13     fields terminated by '|'
 14     missing field values are null
 15     (
 16         FIELD1 integer external (2),
 17         FIELD2 char(60))
 18     )
 19     LOCATION (TAB_DIRECTORY:'EXTERNAL_TABLE.txt')
 20  )
 21  REJECT LIMIT UNLIMITED
 22  PARALLEL ( DEGREE DEFAULT INSTANCES DEFAULT )
 23  NOMONITORING;

Table created.

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from external_table;

    FIELD1 FIELD2
---------- ------------------------------------------------------------
         1 test_1
         2 *******
         3
         4 test_2

ops$tkyte%ORA11GR2> 
ops$tkyte%ORA11GR2> select * from external_table where field1 is not null and field2 is not null;

    FIELD1 FIELD2
---------- ------------------------------------------------------------
         1 test_1
         3
         4 test_2

ops$tkyte%ORA11GR2> 
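
One way to package this "do more in SQL" approach, so that consumers never see the rows with nulls, is a view over the external table - a sketch, reusing the names from the example above:

```sql
create or replace view external_table_v as
select field1, field2
  from external_table
 where field1 is not null
   and field2 is not null;
```

Queries then go against EXTERNAL_TABLE_V, and the filtering rule lives in one place instead of in the access parameters.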

Preprocessing + parallel external tables

David Aldridge, June 05, 2013 - 12:09 pm UTC

Ah, interesting -- there's a phrase in the docs somewhere stating that parallel access to an external table requires that the data be loaded directly from files, not from a pipe or stream, and I had incorrectly assumed that this rules out combining preprocessing and parallelism.

Using the location file names as preprocessor arguments is a nice hack that ought to have more official support in the syntax, I think, allowing the option of specifying N file names with a set of N arguments or N different preprocessor commands.

thanks
Tom Kyte
June 05, 2013 - 2:28 pm UTC

... Using the location file names as preprocessor arguments is a nice hack that
ought to have more official support in the syntax, I think, allowing the option
of specifying N file names with a set of N arguments or N different ...

I agree...

External table not seeing the file

Greg, June 12, 2013 - 3:33 pm UTC

Hi Tom,

I'm having an issue with external tables, that I don't understand.
Hoping you can help out.
I noticed a similar error posted by Alexander on Apr 6, 2007; however, he was unable to reproduce it in test.
There was a second similar post on May 16, 2008 by Dawar Naqvi.

Ok, here's what I've done:

-- logged into user with "DBA" role.

SET PAUSE OFF

-- create a test user
CREATE USER user1 IDENTIFIED BY junk
PROFILE DEFAULT
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp;

GRANT create session TO user1;

CREATE DIRECTORY MY_DIR AS '/some/dir';

-- create the test file using utl_file to ensure file privs and location are ok.

DECLARE
v_fh utl_file.file_type;
v_row VARCHAR2(4000);
BEGIN
v_fh := utl_file.fopen ( 'MY_DIR', 'schema.table.txt', 'w' );

utl_file.put_line ( v_fh, '123|||testing' );
utl_file.put_line ( v_fh, '234|||junk data' );
utl_file.put_line ( v_fh, '345|||read me' );

utl_file.fclose(v_fh);
END;
/

GRANT read,write ON DIRECTORY MY_DIR TO user1;

-- create the ext table.
CREATE TABLE USER1.MY_TABLE
(
ID NUMBER(6),
NAME VARCHAR(1000)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
DEFAULT DIRECTORY MY_DIR
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
CHARACTERSET WE8ISO8859P1
FIELDS TERMINATED BY '|||'
MISSING FIELD VALUES ARE NULL
(
ID DECIMAL EXTERNAL,
NAME CHAR
)
)
LOCATION
(
'schema.table.txt'
)
);

-- force Oracle to read from it.
SELECT COUNT(*) FROM USER1.MY_TABLE WHERE ROWNUM = 1;

Now, when I run that in a 11.2.0.1.0 database, it runs fine.
However, when I then run it in a 11.2.0.2.0 (Exadata) database, it throws the following error on the final SELECT statement.

[Error] Execution (40: 28): ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04040: file schema.table.txt in MY_DIR not found

I've tried to check as much as I can to verify everything is the same. The file privileges appear to be setup properly.
The following PL/SQL block works fine on both databases:

SET SERVEROUT ON

DECLARE
v_fh utl_file.file_type;
v_row VARCHAR2(4000);
BEGIN
v_fh := utl_file.fopen ( 'MY_DIR', 'schema.table.txt', 'r' );
utl_file.get_line ( v_fh, v_row );
utl_file.fclose(v_fh);
dbms_output.put_line ( v_row );
END;
/

Any ideas? I'm completely stumped! O_0
I'm worried it's either a bug, or something that Exadata is doing differently than our other databases.

Tom Kyte
June 18, 2013 - 3:07 pm UTC

are you connecting over the network in both cases or are you using a "no network" connection in one but not the other?

External table not seeing the file (update)

Greg, June 13, 2013 - 12:55 pm UTC

Ok, just an update, seems we found the issue, and I just wanted to share.

So, it seems our DBA found the problem and fixed it. I didn't get the explanation until this morning. Being a bit new at this place, I'm still getting used to the full architecture of all the environments.

It appears the issue was related to RAC more than anything else. He had to copy the files to a regular filesystem location on the RAC, and replicate them around. Initially he had placed them on an Oracle DBFS filesystem.

Cheers!! :)

Tom Kyte
June 18, 2013 - 3:11 pm UTC

that makes sense! thanks for the followup

Preserve space at column level in external table

rajakumar lakkakula, November 05, 2013 - 9:37 am UTC

HI Tom,

I need your help.
The columns CQ_WORK_COLLID, CQ_WORKED_FG and CQ_IN_PROCESS contain blank spaces in the data file, and I need to preserve those spaces as the column values in the external table.
In SQL*Loader we use PRESERVE BLANKS, but how do we get the same functionality in an external table?

OPTIONS (ERRORS=50, BINDSIZE=1048576, ROWS=1000, READSIZE=1048576)
LOAD DATA
PRESERVE BLANKS
INTO TABLE COL_QUEUE_S
TRUNCATE
REENABLE DISABLED_CONSTRAINTS

External Table Syntax:
CREATE TABLE COL_QUEUE_S_EXT_TEST1
(
"CQ_BRACCTNO" NUMBER(10,0),
"CQ_NUMBER" NUMBER(6,0),
"CQ_SEQNO" NUMBER(2,0),
"CQ_COLLID" CHAR(3 BYTE),
"CQ_REV_BEHAVIOR_RISK" NUMBER(1,0),
"CQ_SEQ_FG" CHAR(1 BYTE),
"CQ_DATE" CHAR(8 BYTE),
"CQ_START_TIME" NUMBER(5,0),
"CQ_REV_DEL" NUMBER(5,0),
"CQ_REV_PDUE_DAYS" NUMBER(5,0),
"CQ_REV_BAL" NUMBER(6,0),
"CQ_WORK_COLLID" CHAR(3 BYTE),
"CQ_WORKED_FG" CHAR(1 BYTE),
"CQ_IN_PROCESS" CHAR(1 BYTE),
"CQ_DATE_QUEUED" CHAR(8 BYTE),
"CQ_TIME_ZONE_ERROR" CHAR(1 BYTE)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY "PARADATA_DATA"
ACCESS PARAMETERS
(
RECORDS DELIMITED BY newline CHARACTERSET WE8MSWIN1252
BADFILE PARADATA_BAD:'COL_QUEUE_S_EXT.bad'
LOGFILE PARADATA_bad:'COL_QUEUE_S_EXT.log'
FIELDS LDRTRIM
REJECT ROWS WITH ALL NULL FIELDS
(
cq_bracctno position(1:10) CHAR(10) ,
cq_number position(11:16) CHAR(6),
cq_seqno position(17:18) CHAR(2),
cq_collid position(19:21) CHAR(3),
cq_rev_behavior_risk position(22:22) CHAR(1),
cq_seq_fg position(23:23) CHAR(1),
cq_date position(24:31) CHAR(8),
cq_start_time position(32:36) CHAR(5),
cq_rev_del position(37:41) CHAR(5),
cq_rev_pdue_days position(42:46) CHAR(5),
cq_rev_bal position(47:52) CHAR(6),
cq_work_collid position(53:55) CHAR(3),
cq_worked_fg position(56:56) CHAR(1),
cq_in_process position(57:57) CHAR(1),
cq_date_queued position(58:65) CHAR(8),
cq_time_zone_error position(66:66) CHAR(1) ) ) LOCATION ( '110313.COL_QUEUE.txt' )
)
REJECT LIMIT UNLIMITED;
Tom Kyte
November 07, 2013 - 3:17 pm UTC

if you take your sqlldr control file and:

sqlldr u/p whatever.ctl external_table=generate_only

what do you get in your logfile?
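
For what it's worth, the external-table counterpart of SQL*Loader's PRESERVE BLANKS is the trim specification in the FIELDS clause: replacing LDRTRIM with NOTRIM keeps leading and trailing blanks. A sketch of just the changed clause, based on the table definition above:

```sql
ACCESS PARAMETERS
(
  RECORDS DELIMITED BY newline CHARACTERSET WE8MSWIN1252
  BADFILE PARADATA_BAD:'COL_QUEUE_S_EXT.bad'
  LOGFILE PARADATA_BAD:'COL_QUEUE_S_EXT.log'
  FIELDS NOTRIM            -- was: FIELDS LDRTRIM
  REJECT ROWS WITH ALL NULL FIELDS
  ( ... )                  -- field list unchanged from the original
)
```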

External tables

A reader, March 27, 2014 - 1:41 pm UTC

Hi Tom,

I have created the below table on windows command prompt.
The Oracle database resides on linux box.

CREATE TABLE user_xt
( a NUMBER
, b VARCHAR2(100)
, c VARCHAR2(100)
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY user_dir
ACCESS PARAMETERS
(
RECORDS DELIMITED by NEWLINE
BADFILE 'user_xt.bad'
LOGFILE 'user_xt.log'
NODISCARDFILE
FIELDS TERMINATED BY ','
( a
, b
, c
)
)
LOCATION ('user.csv')
)
REJECT LIMIT UNLIMITED;

When I query the table it gives the below error.
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
error opening file c:/user_xt.log

The directory does have the appropriate permissions for the logged in user as well.

I have noticed that it has appended a forward slash - I guess because the database resides on Linux. The logfile was not even created in the directory folder.

How is it possible to run the external table script on windows when oracle resides on linux?


Tom Kyte
March 28, 2014 - 6:45 am UTC

external tables work entirely on the database server - the client is not relevant.

The SERVER reads the file
The SERVER writes any log files

all directories must be on the linux server - the windows client has nothing to do with it at all (think about it - if the SERVER could read/write to your file system - that would be considered a "virus")


External tables

A reader, March 29, 2014 - 9:02 am UTC

Thanks Tom,

I should have thought of this earlier :-)

External tables with dump

Lins, April 10, 2014 - 8:30 am UTC

Is it possible to create an external table with an existing export dmp file which contains (say) data of many tables?
Tom Kyte
April 16, 2014 - 4:45 pm UTC

no

External Tables with source file name

BC, April 18, 2014 - 5:43 pm UTC

Hey Tom,

Is it possible to determine the source file name in the external table? For example, my external table foo_ext can be based on one or more OS text files; when I select the data, is there a built-in way to identify the source file?

If no built in way exists, do you have a trick or two up your sleeve that we could use to do this ?

Thanks

BC

Super Awesome

A reader, April 22, 2014 - 2:26 pm UTC

As usual, you are simply the best.

Thank you very much

BC

ask a ques

surya prakash, September 29, 2014 - 9:40 am UTC

While analyzing the application, you identify the following entities and their attributes:
Employee:
Emp_ID
Emp_Name
Gender
Contact_Number
Dept_ID
Dept_Name
Proj_ID
Proj_Name
Client:
Client_ID
Client_Name
Client_Address
Contact_Number
Proj_ID
Proj_Name
Project:
Proj_ID
Emp_ID
Emp_Name
Dept_ID
Dept_Name
Proj_StartDate
Proj_EndDate
Proj_Cost
Based on the preceding entities and attributes, you create the structure of the table for storing
employee details, as shown in the following table.
Similarly, you create the structure of the table for storing client details, as shown in the following
table.In addition, you create the structure of the table for storing the project details, as shown in the
following table.
1. Analyze the preceding table structures and identify the problems that you may face while
inserting, deleting or updating data in these. Justify your answer.
2. How can you resolve the problems related to the structure of the preceding tables?

Sir, can you please tell me the answer to the above question? I did not understand it.

query on external table

Rajeshwaran Jeyabal, March 23, 2016 - 2:01 pm UTC

Team,

select * from T;

if table T is an external table mapped to a file "sample.dat" containing thousands of rows, will the order of rows returned by this query match the order of the rows in the file?

or, in other words, does the external table read the file mapped to it sequentially or randomly?
Chris Saxon
March 24, 2016 - 2:16 am UTC

My observation is that this is normally the case, but you can't *guarantee* it.

An obvious counter example would be reading it with a parallel query.

query on external table

Rajeshwaran Jeyabal, March 24, 2016 - 5:25 am UTC

An obvious counter example would be reading it with a parallel query.

I know parallel execution would change the game; that is why I provided the query.

1) Assume no object level parallelism is defined.
2) Assume no statement level parallelism is defined.
3) No session/instance level parallel query/ parallel dml/parallel ddl is defined or force.

How could I come up with a simple test case to prove that
select * from external_table;
reads the file sequentially and not randomly?


The idea behind this approach is that we have a shell script which processes the files, and the results of that calculation are loaded into database tables.
We are planning to do that processing inside the database using analytic/MODEL clauses, but to access the file we are planning to go with an external table.

But if an external table, like a heap table, does random reads, then we cannot proceed, since the order of the file contents is very important for this calculation. So we want to make sure that the external table reads the file sequentially.

Kindly help us with a test case to prove this, if possible.
Connor McDonald
March 24, 2016 - 5:50 am UTC

One of the error messages related to external tables (when the table is
referenced multiple times in the same query), lends some assistance here:

"KUP-04108: unable to reread file string

Cause: The query that is executing requires that a data source for an external table be read multiple times. However, the data source is a sequential device that cannot be reread. Examples of this type of data source is a tape or pipe.

Action: There are a few ways around this problem. One is to rewrite the query so that the external table is referenced only once. Another option is to move the data source to a rereadable device such as a disk file. A third option is to load the data for the external table into a temporary table and change the query to reference the temporary table."

So it's referring to the data source as a sequential device.

But for me, it's simply not something I would rely on. Who knows, in 12.2 we might invent an optimization that reads external tables with multiple worker threads (I don't know of such a plan, btw)... so even if we can prove that the file is read top-to-bottom *today*, it might not be the case *tomorrow* (aka, the next version of the database, or OS, or platform etc).


Let's hope not

AndyP, March 24, 2016 - 1:32 pm UTC

I reckon that (not reading the file sequentially) would break a good many of the implementations of this approach around the world, so it would probably have to remain at least a parameterised option. But to be on the safe side, you could add a preprocessor script to your external table definition that adds the line number as the first field of each record (assuming a delimited format), so that the file contains its own ordering explicitly.
Chris Saxon
March 24, 2016 - 5:57 pm UTC

That's not my experience - often the external table is just for reading data into a "real" table anyway. And the lines in the file often aren't in the order you want.

But I'm sure there are people relying on this rather than the correct solution - an order by!

Chris

Query external table....

A reader, March 24, 2016 - 8:53 pm UTC

If we have the option to alter the external table definition, why can't we use RECNUM to get the exact line number instead of assuming the file will be read sequentially? This will work for PARALLEL reads also, right?
Connor McDonald
March 24, 2016 - 10:01 pm UTC

Agreed.

Order by is the issue

AndyP, April 04, 2016 - 12:46 pm UTC

Chris says that ORDER BY would be the "correct" approach, but that seems to be the very issue Rajesh is facing, i.e. that the data has nothing to order it by apart from its position in the file. That's why he needs a guarantee that each record can be tagged with its actual line number, even if it is read out of sequence as it were.

RECNUM yes, that's much simpler than my preprocessor idea

And, as usual, Tom addressed this very subject a decade ago with this:

https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:52733181746448

Followup   December 09, 2005 - 1:13 am UTC

in a serial query (no parallel query), i see no issues - however, you have a point, so... here is 
perhaps a better approach:

CREATE TABLE "ALERT_LOG"
 (    "LINE1" VARCHAR2(4000),
       r number
 )
 ORGANIZATION EXTERNAL
  ( TYPE ORACLE_LOADER
    DEFAULT DIRECTORY "BDUMP_DIR"
    ACCESS PARAMETERS
    ( records delimited by newline fields missing field values are null
    ( line1 position(1:4000), r RECNUM )  )
    LOCATION
     ( 'alert_ora10gr2.log'
     )
  )

Connor McDonald
April 05, 2016 - 1:44 am UTC

Hence the previous followup

Sequential reading of data files

Rajeshwaran Jeyabal, April 05, 2016 - 5:30 am UTC

That's not my experience - often data the external table is just for reading into a "real" table anyway.
And lines in the file often aren't in the order you want.
But I'm sure there are people relying on this rather than the correct solution - an order by!


1) External tables are a replacement for intermediate staging tables.
2) Sometimes we do analytic/business rules and validations/computations before persisting the results back into real tables.

Here is an Example for my case.

sample data in a file xxx.dat
003310304000J019
1BBH334000000000067484C310304
1BKN580000000000516281C310304
004410304000K020
1BBH334000000000067484C310304
1BKN580000000000516281C310304

Two types of records are contained:
Header, denoted by the first character being 0
Detail, with the first character being 1

I would like to load this into a table which, for each detail record, will contain the header and detail.
That is using the above sample of data, the output should be like this.

003310304000J019-1BBH334000000000067484C310304
003310304000J019-1BKN580000000000516281C310304
004410304000K020-1BBH334000000000067484C310304
004410304000K020-1BKN580000000000516281C310304

Option#1

rajesh@ORA11G> CREATE TABLE T
  2  (
  3   X NUMBER,
  4   Y VARCHAR2(30)
  5  )
  6  ORGANIZATION external
  7  (
  8   TYPE oracle_loader
  9   DEFAULT DIRECTORY ETL_DATA_DIR
 10   ACCESS PARAMETERS
 11   (
 12     RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8MSWIN1252
 13     BADFILE ETL_DATA_DIR:'xxx.bad'
 14     LOGFILE 'log2.txt'
 15     READSIZE 1048576
 16     FIELDS LDRTRIM
 17     REJECT ROWS WITH ALL NULL FIELDS
 18     (
 19      "X" RECNUM,
 20      "Y" (1:30) CHAR(30)
 21     )
 22   )
 23   location
 24   (
 25     'xxx.dat'
 26   )
 27  )REJECT LIMIT UNLIMITED ;

Table created.

rajesh@ORA11G> select z2
  2  from (
  3  select x,y, max(z) over(order by x) z1,
  4      max(z) over(order by x) ||'-'|| y z2
  5  from (
  6  select x,y ,
  7     decode( substr(y,1,1),'0',y) z
  8  from t
  9  where x > 0
 10      )
 11      )
 12  where substr(y,1,1) <> '0'
 13  /

Z2
-------------------------------------------------------------
003310304000J019-1BBH334000000000067484C310304
003310304000J019-1BKN580000000000516281C310304
004410304000K020-1BBH334000000000067484C310304
004410304000K020-1BKN580000000000516281C310304

4 rows selected.

rajesh@ORA11G>
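
An equivalent way to carry each header down to its detail rows, which some may find easier to read, is LAST_VALUE ... IGNORE NULLS over the RECNUM column (same table T as in Option#1):

```sql
select hdr || '-' || y as z2
from (
  select x, y,
         last_value( case when substr(y,1,1) = '0' then y end
                     ignore nulls ) over (order by x) hdr
  from t
)
where substr(y,1,1) = '1';
```

For each detail row, the analytic picks up the most recent header record seen so far in file order, so the result matches the four rows shown above.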


Option-2
Have a pipelined table function using UTL_FILE API, does all the validation and produce the result set needed.

Questions
1) In this case I need to tie each detail record to its header record. If the external table doesn't read the file sequentially, how can this be done using a pure SQL solution?
2) Could you help me with a simple/concise test case to show that a non-parallel query on the external table doesn't read the file sequentially?
3) Does having "recnum" in this case guarantee sequential access to the file?
Connor McDonald
April 07, 2016 - 2:53 am UTC

Sorry I should have said it was me (Connor) that added the "Agreed" to the RECNUM followup earlier in the thread.

For me, I would use RECNUM.


Validate my understanding

Rajeshwaran Jeyabal, April 07, 2016 - 5:39 am UTC

Please validate my understanding.

Questions:
1) Does adding RECNUM to the external table make it read the file sequentially?

2) How about parallel queries with RECNUM in place, where each PX slave reads a non-overlapping portion of the file? In that case, how do we handle the above requirement using parallel queries? Any insights or directions, please?

Connor McDonald
April 07, 2016 - 11:39 am UTC

My point is this.

a) My *observation* is that the file data is always returned in the order that the data appears in the file.

b) Various bugs on MOS (which describe that the data is returned *not* in that order) suggest that our intention that (even with parallel) the data is meant to be returned in file order.

But both of those mean *nothing* unless you can find a definitive statement either on MOS or in the docs saying "Yes, the file data comes back in order".

It's quite possible that this is just an omission in the docs, and you could log a support call for that omission... but until it's *in* there, you can't be 100% sure.

The *only* place in the docs I can find a reference that talks about a means of guaranteed knowledge of the file sequence in the data, is RECNUM, namely:

"Setting a Column to the Datafile Record Number
Use the RECNUM parameter after a column name to set that column to the number of the logical record from which that record was loaded. Records are counted sequentially from the beginning of the first datafile, starting with record 1. RECNUM is incremented as each logical record is assembled. Thus it increments for records that are discarded, skipped, rejected, or loaded. If you use the option SKIP=10, the first record loaded has a RECNUM of 11."

*That* is why I'm saying use RECNUM. Because (to my knowledge) that's the only option where if the data order in the file does *not* match the value assigned by RECNUM, you would have an avenue to raise it with Support.

I hope this makes sense.

Sequential reading of data in External Table.

Rajeshwaran Jeyabal, April 15, 2016 - 8:24 am UTC

Team,

Here is my test case to prove the "sequential reading of data in an external table."

File big_table.lst has 10M records and is 1.2GB in size (the file was created by unloading data from table "big_table" in CSV format - big_table is just a copy of all_objects).

I created an external table with column X and the other set of columns.
Column X is populated with sequential values using RECNUM, and column ID has unique values from 1 to 10,000,000.

rajesh@ORA12C> CREATE TABLE big_table_ext
  2  (
  3    "X" NUMBER,
  4    "OWNER" VARCHAR2(128),
  5    "OBJECT_NAME" VARCHAR2(128),
  6    "SUBOBJECT_NAME" VARCHAR2(128),
  7    "OBJECT_ID" NUMBER,
  8    "DATA_OBJECT_ID" NUMBER,
  9    "OBJECT_TYPE" VARCHAR2(23),
 10    "CREATED" DATE,
 11    "LAST_DDL_TIME" DATE,
 12    "TIMESTAMP" VARCHAR2(19),
 13    "STATUS" VARCHAR2(7),
 14    "TEMPORARY" VARCHAR2(1),
 15    "GENERATED" VARCHAR2(1),
 16    "SECONDARY" VARCHAR2(1),
 17    "NAMESPACE" NUMBER,
 18    "EDITION_NAME" VARCHAR2(128),
 19    "SHARING" VARCHAR2(13),
 20    "EDITIONABLE" VARCHAR2(1),
 21    "ORACLE_MAINTAINED" VARCHAR2(1),
 22    "ID" NUMBER
 23  )
 24  ORGANIZATION external
 25  (
 26    TYPE oracle_loader
 27    DEFAULT DIRECTORY TMP
 28    ACCESS PARAMETERS
 29    (
 30      RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8MSWIN1252
 31      BADFILE 'TMP':'big_table.bad'
 32      LOGFILE 'log.txt_xt'
 33      READSIZE 1048576
 34      FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
 35      REJECT ROWS WITH ALL NULL FIELDS
 36      (
 37        "X" RECNUM,
 38        "OWNER" CHAR(255)
 39          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 40        "OBJECT_NAME" CHAR(255)
 41          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 42        "SUBOBJECT_NAME" CHAR(255)
 43          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 44        "OBJECT_ID" CHAR(255)
 45          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 46        "DATA_OBJECT_ID" CHAR(255)
 47          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 48        "OBJECT_TYPE" CHAR(255)
 49          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 50        "CREATED" CHAR(255)
 51          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 52        "LAST_DDL_TIME" CHAR(255)
 53          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 54        "TIMESTAMP" CHAR(255)
 55          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 56        "STATUS" CHAR(255)
 57          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 58        "TEMPORARY" CHAR(255)
 59          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 60        "GENERATED" CHAR(255)
 61          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 62        "SECONDARY" CHAR(255)
 63          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 64        "NAMESPACE" CHAR(255)
 65          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 66        "EDITION_NAME" CHAR(255)
 67          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 68        "SHARING" CHAR(255)
 69          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 70        "EDITIONABLE" CHAR(255)
 71          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 72        "ORACLE_MAINTAINED" CHAR(255)
 73          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"',
 74        "ID" CHAR(255)
 75          TERMINATED BY "," OPTIONALLY ENCLOSED BY '"'
 76      )
 77    )
 78    location
 79    (
 80      'big_table.lst'
 81    )
 82  )REJECT LIMIT UNLIMITED ;

Table created.


Now, to prove the sequential reading, I did this:

rajesh@ORA12C> select count(*)
  2  from big_table_ext
  3  where x <> id;

  COUNT(*)
----------
         0

1 row selected.

rajesh@ORA12C> select /*+ parallel(t1,4) */ count(*)
  2  from big_table_ext t1
  3  where x <> id;

  COUNT(*)
----------
         0

1 row selected.

rajesh@ORA12C>
rajesh@ORA12C> $dir d:\big_table.lst
 Volume in drive D is RAJESH
 Volume Serial Number is E079-099C

 Directory of d:\

04/15/2016  01:16 PM     1,187,080,405 big_table.lst
               1 File(s)  1,187,080,405 bytes
               0 Dir(s)  335,807,557,632 bytes free

rajesh@ORA12C>


Questions
1) Is there any flaw in this test case?
2) Do you have a simpler approach than this?
Chris Saxon
April 15, 2016 - 9:58 am UTC

Assuming that the ID field in the file starts at one and increments by one for each line, then yes, it looks like Oracle is reading it sequentially.

This is only one example though. It doesn't prove that Oracle will *always* read the file in order.

As Connor said, you still can't be 100% sure this will always be the case.

Chris
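
Given that, a safe pattern is to never rely on arrival order at all: capture RECNUM as in the test case above, and make the ordering explicit in any query that depends on it, for example:

```sql
select *
  from big_table_ext
 order by x;   -- x is the RECNUM column, so the ordering is now guaranteed by SQL
```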

External table on without delimited data.

Rajeshwaran Jeyabal, November 26, 2019 - 6:55 am UTC

Team,

questions:
1) Is "FIELDS TERMINATED BY" mandatory for an external table? Without it we can't read data from the external table - please see the example below.
2) Also, how do we load position-based data using an external table? I tried with table T3 below, but didn't succeed - please help.

demo@PDB1> $ type d:\trash\log.txt
REM INSERTING into EMP
SET DEFINE OFF;
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7782,'CLARK','MANAGER',7839,to_date('09-JUN-1981','DD-MON-YYYY'),2450,null,10);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7839,'KING','PRESIDENT',null,to_date('17-NOV-1981','DD-MON-YYYY'),5000,null,10);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7934,'MILLER','CLERK',7782,to_date('23-JAN-1982','DD-MON-YYYY'),1300,null,10);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7369,'SMITH','CLERK',7902,to_date('17-DEC-1980','DD-MON-YYYY'),800,null,20);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7566,'JONES','MANAGER',7839,to_date('02-APR-1981','DD-MON-YYYY'),2975,null,20);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7788,'SCOTT','ANALYST',7566,to_date('19-APR-0087','DD-MON-YYYY'),3000,null,20);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7876,'ADAMS','CLERK',7788,to_date('23-MAY-0087','DD-MON-YYYY'),1100,null,20);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7902,'FORD','ANALYST',7566,to_date('03-DEC-1981','DD-MON-YYYY'),3000,null,20);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7499,'ALLEN','SALESMAN',7698,to_date('20-FEB-1981','DD-MON-YYYY'),1600,300,30);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7521,'WARD','SALESMAN',7698,to_date('22-FEB-1981','DD-MON-YYYY'),1250,500,30);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7654,'MARTIN','SALESMAN',7698,to_date('28-SEP-1981','DD-MON-YYYY'),1250,1400,30);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7698,'BLAKE','MANAGER',7839,to_date('01-MAY-1981','DD-MON-YYYY'),2850,null,30);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7844,'TURNER','SALESMAN',7698,to_date('08-SEP-1981','DD-MON-YYYY'),1500,0,30);
Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7900,'JAMES','CLERK',7698,to_date('03-DEC-1981','DD-MON-YYYY'),950,null,30);

14 rows selected.


demo@PDB1>
demo@PDB1> create table t1( x number, y varchar2(200) )
  2  organization external(
  3     type oracle_loader
  4     default directory DP
  5     access parameters(
  6             records delimited by newline
  7             nologfile
  8             nobadfile
  9             ( x recnum,
 10               y char(200)
 11             ) )
 12             location ('log.txt') )
 13             reject limit unlimited;

Table created.

demo@PDB1> select count(*) from t1;
select count(*) from t1
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "(": expecting one of: "badfile, byteordermark, characterset, column, data, delimited, discardfile, dnfs_enable, dnfs_disable,
disable_directory_link_check, field, fields, fixed, io_options, load, logfile, language, nodiscardfile, nobadfile, nologfile, date_cache, dnfs_readbuffers, preprocessor,
string, skip, territory, variable, xmltag"
KUP-01007: at line 4 column 3


demo@PDB1> create table t2( x number, y varchar2(200) )
  2  organization external(
  3     type oracle_loader
  4     default directory DP
  5     access parameters(
  6             records delimited by newline
  7             nologfile
  8             nobadfile
  9             FIELDS TERMINATED BY '~'
 10             ( x recnum,
 11               y char(200)
 12             ) )
 13             location ('log.txt') )
 14             reject limit unlimited;

Table created.

demo@PDB1> select count(*) from t2;

  COUNT(*)
----------
        17

demo@PDB1> create table t3( x number, y varchar2(200) )
  2  organization external(
  3     type oracle_loader
  4     default directory DP
  5     access parameters(
  6             records delimited by newline
  7             nologfile
  8             nobadfile
  9             ( x recnum,
 10               y (1:200)
 11             ) )
 12             location ('log.txt') )
 13             reject limit unlimited;

Table created.

demo@PDB1> select count(*) from t3;
select count(*) from t3
*
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-00554: error encountered while parsing access parameters
KUP-01005: syntax error: found "(": expecting one of: "badfile, byteordermark, characterset, column, data, delimited, discardfile, dnfs_enable, dnfs_disable,
disable_directory_link_check, field, fields, fixed, io_options, load, logfile, language, nodiscardfile, nobadfile, nologfile, date_cache, dnfs_readbuffers, preprocessor,
string, skip, territory, variable, xmltag"
KUP-01007: at line 4 column 3


demo@PDB1

Chris Saxon
November 27, 2019 - 1:51 pm UTC

If you're defining the columns, you still need a FIELDS clause - the TERMINATED BY section is optional though:

declare
  f utl_file.file_type;
begin
  f := utl_file.fopen ('TMP', 'log.txt', 'w');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7782,'CLARK','MANAGER',7839,to_date('09-JUN-1981','DD-MON-YYYY'),2450,null,10);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7839,'KING','PRESIDENT',null,to_date('17-NOV-1981','DD-MON-YYYY'),5000,null,10);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7934,'MILLER','CLERK',7782,to_date('23-JAN-1982','DD-MON-YYYY'),1300,null,10);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7369,'SMITH','CLERK',7902,to_date('17-DEC-1980','DD-MON-YYYY'),800,null,20);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7566,'JONES','MANAGER',7839,to_date('02-APR-1981','DD-MON-YYYY'),2975,null,20);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7788,'SCOTT','ANALYST',7566,to_date('19-APR-0087','DD-MON-YYYY'),3000,null,20);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7876,'ADAMS','CLERK',7788,to_date('23-MAY-0087','DD-MON-YYYY'),1100,null,20);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7902,'FORD','ANALYST',7566,to_date('03-DEC-1981','DD-MON-YYYY'),3000,null,20);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7499,'ALLEN','SALESMAN',7698,to_date('20-FEB-1981','DD-MON-YYYY'),1600,300,30);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7521,'WARD','SALESMAN',7698,to_date('22-FEB-1981','DD-MON-YYYY'),1250,500,30);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7654,'MARTIN','SALESMAN',7698,to_date('28-SEP-1981','DD-MON-YYYY'),1250,1400,30);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7698,'BLAKE','MANAGER',7839,to_date('01-MAY-1981','DD-MON-YYYY'),2850,null,30);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7844,'TURNER','SALESMAN',7698,to_date('08-SEP-1981','DD-MON-YYYY'),1500,0,30);|');
  utl_file.put_line(f, q'|Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7900,'JAMES','CLERK',7698,to_date('03-DEC-1981','DD-MON-YYYY'),950,null,30);|');
  utl_file.fclose(f);
end;
/

  
create table t1( x number, y varchar2(200) )
organization external(
 type oracle_loader
 default directory tmp
 access parameters(
   records delimited by newline
   fields (
     x recnum,
     y (1:200) char(200)
   ) 
 )
 location ('log.txt') 
)
reject limit unlimited;

select * from t1
where  x <= 2;

X    Y                                                                                                                                                         
   1 Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7782,'CLARK','MANAGER',7839,to_date('09-JUN-1981','DD-MON-YYYY'),2450,null,10);     
   2 Insert into EMP (EMPNO,ENAME,JOB,MGR,HIREDATE,SAL,COMM,DEPTNO) values (7839,'KING','PRESIDENT',null,to_date('17-NOV-1981','DD-MON-YYYY'),5000,null,10);    


Problems after the upgrade to 19c

Moshe, September 14, 2022 - 3:38 pm UTC

Hi,

The same code

CREATE TABLE TEST_TBL
(
  A  NUMBER(3),
  B  NUMBER(3)
)
ORGANIZATION EXTERNAL
  (  TYPE ORACLE_LOADER
     DEFAULT DIRECTORY RISK_EXT_DIR
     ACCESS PARAMETERS 
       ( 
         records delimited BY newline nologfile nodiscardfile nobadfile fields terminated BY '' missing field VALUES are NULL
        ( 
            A  position ( 1:3) INTEGER external (3) ,      
            B position ( 4:3) INTEGER external (3)
       )
     )
     LOCATION (EXT_DIR:'FILE.TBL')
  )
REJECT LIMIT UNLIMITED;


works properly in 12.1, but raises an error in 19c:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04021: field formatting error for field B
KUP-04024: field end is before field start

It seems the syntax changed between 12c and 19c... is that correct?
Meaning, in the clause "position (x:y)", 12c accepted "y" as either the end position or the length, while 19c accepts it only as the end position.

Noted that "B position ( 4:+3) ..." works properly in both versions.

Thank you in advance, M.

Connor McDonald
September 27, 2022 - 5:39 am UTC

I replicated your findings - I suspect we've just firmed up the syntax rules because it really should *not* have worked in 12 based on the rules in the docs.
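For anyone hitting this after an upgrade, here is a minimal sketch of the two spellings that satisfy the stricter 19c parsing (field names and widths match the example above; illustrative only, not tested on every release):

```sql
-- 19c requires position(start:end) with end >= start, so give the
-- absolute end byte rather than a length:
B POSITION (4:6) INTEGER EXTERNAL (3)

-- or use a relative length with "+", which works in both 12c and 19c:
B POSITION (4:+3) INTEGER EXTERNAL (3)
```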

External tables and In-memory.

Rajeshwaran, Jeyabal, December 08, 2022 - 12:46 pm UTC

Team,

the below testcase was from my local 21c database.

https://docs.oracle.com/en/database/oracle/oracle-database/21/inmem/populating-the-imcs-manually.html#GUID-F1B26BDD-614B-4E4C-A986-8977B4B27DF0

a) The docs say we need to set "QUERY_REWRITE_INTEGRITY" to "STALE_TOLERATED" to populate an external table into In-Memory, but I was able to get it done without that setting. So what is the need for that parameter?

b) Why are external tables created in the "SYSTEM" tablespace rather than the schema's default tablespace?

demo@PDB1> CREATE TABLE emp_ext
  2  (
  3    "EMPNO" NUMBER(4),
  4    "ENAME" VARCHAR2(10),
  5    "JOB" VARCHAR2(15),
  6    "MGR" NUMBER(4),
  7    "HIREDATE" VARCHAR2(255),
  8    "SAL" NUMBER(7,2),
  9    "COMM" NUMBER(7,2),
 10    "DEPTNO" NUMBER(2)
 11  )
 12  ORGANIZATION external
 13  (
 14    TYPE oracle_loader
 15    DEFAULT DIRECTORY TEMP_DIR
 16    ACCESS PARAMETERS
 17    (
 18      RECORDS DELIMITED BY NEWLINE CHARACTERSET WE8MSWIN1252
 19      BADFILE 'TEMP_DIR':'ctl.bad'
 20      LOGFILE 'ext_log.txt'
 21      READSIZE 1048576
 22      SKIP 1
 23      FIELDS TERMINATED BY "," optionally enclosed by '"'
 24      REJECT ROWS WITH ALL NULL FIELDS
 25      (
 26        "EMPNO" CHAR(255)
 27          TERMINATED BY ",",
 28        "ENAME" CHAR(255)
 29          TERMINATED BY ",",
 30        "JOB" CHAR(255)
 31          TERMINATED BY ",",
 32        "MGR" CHAR(255)
 33          TERMINATED BY ",",
 34        "HIREDATE" CHAR(255)
 35          TERMINATED BY ",",
 36        "SAL" CHAR(255)
 37          TERMINATED BY ",",
 38        "COMM" CHAR(255)
 39          TERMINATED BY ",",
 40        "DEPTNO" CHAR(255)
 41          TERMINATED BY ","
 42      )
 43    )
 44    location('emp.dat')
 45  )REJECT LIMIT UNLIMITED ;

Table created.

demo@PDB1> select count(*) from emp_ext;

  COUNT(*)
----------
      5040

demo@PDB1> alter table emp_ext inmemory ;

Table altered.

demo@PDB1> exec dbms_inmemory.populate(user,'EMP_EXT');

PL/SQL procedure successfully completed.

demo@PDB1> @printtbl 'select * from v$im_segments'
OWNER                         : "DEMO"
SEGMENT_NAME                  : "EMP_EXT"
PARTITION_NAME                : ""
SEGMENT_TYPE                  : "TABLE"
TABLESPACE_NAME               : "SYSTEM"
INMEMORY_SIZE                 : "1179648"
BYTES                         : "0"
BYTES_NOT_POPULATED           : "0"
POPULATE_STATUS               : "COMPLETED"
INMEMORY_PRIORITY             : ""
INMEMORY_DISTRIBUTE           : ""
INMEMORY_DUPLICATE            : ""
INMEMORY_COMPRESSION          : "FOR QUERY LOW"
INMEMORY_SERVICE              : ""
INMEMORY_SERVICE_NAME         : ""
IS_EXTERNAL                   : "TRUE"
CON_ID                        : "3"
-----------------

PL/SQL procedure successfully completed.

demo@PDB1>
demo@PDB1> show parameter query_rewrite_integrity

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------
query_rewrite_integrity              string      enforced
demo@PDB1>

Chris Saxon
December 14, 2022 - 2:35 pm UTC

1. The docs say: "Sessions that query In-Memory external tables must have the initialization parameter..."

The quote is about querying, not populating.

2. I'm not sure why, but external tables don't really use a tablespace - all the data exists in files on the file system, not in Oracle data files.
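As a hedged sketch of where the parameter actually matters (based on the doc quote above; behaviour may vary by release): set it in the session that queries the In-Memory external table, not the one that populates it.

```sql
-- Per the docs, sessions that QUERY an In-Memory external table need:
ALTER SESSION SET query_rewrite_integrity = stale_tolerated;

-- ...so that queries like this can be serviced from the IM column store:
SELECT COUNT(*) FROM emp_ext;
```

Population via DBMS_INMEMORY.POPULATE did not require it, as your test shows.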

External table creation from multipartitioned parquet file

Ankesh phapale, November 23, 2023 - 6:30 pm UTC

Hello Tom, I was creating a multi-partitioned external table from a parquet file but ended up with the error below. I did a lot of research but could not find the cause. Please see the code below and help me.

My query:


BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE (
    table_name      => 'mysales',
    credential_name => 'DEF_CRED',
    file_uri_list   => ' https://objectstorage.us-ashburn-1.oraclecloud.com/n/idrzqj5ugpca/b/warehousedatastandard/o/SALES/*.parquet',
    format          =>
      json_object('type' value 'parquet',
                  'schema' value 'first',
                  'partition_columns' value
                    json_array(
                      json_object('name' value 'REGION',  'type' value 'varchar2(300)'),
                      json_object('name' value 'COUNTRY', 'type' value 'varchar2(300)')
                    ))
  );
END;


The error:

ORA-20000: ORA-23306: schema does not exist
ORA-23306: schema "ADMIN" does not exist
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 1693
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 7476
ORA-06512: at line 2
Error at Line: 7 Column: 0

FYI:
I was able to create an external table without partitioning successfully.
The ADMIN schema is already present.
My parquet folder structure: SALES/REGION=ASIA/COUNTRY=INDIA/P10002.parquet
Connor McDonald
November 28, 2023 - 4:00 am UTC

Just to make sure we're not hitting any strange datatype conversion, what happens if you do


format=>'{"type":"parquet","schema":"first","partition_columns":[{"name":"REGION","type":"varchar2(300)"},{"name":"COUNTRY","type":"varchar2(300)"}]}'

Error in creating external partitioned table

Ankesh, December 01, 2023 - 12:34 pm UTC

I tried this, but I still get the same error:


BEGIN
  DBMS_CLOUD.CREATE_EXTERNAL_PART_TABLE(
    TABLE_NAME      => 'sales_new_api7',
    CREDENTIAL_NAME => 'DEF_CRED',
    FILE_URI_LIST   => ' https://objectstorage.us-ashburn-1.oraclecloud.com/n/idrzqj5ugpca/b/warehousedatastandard/o/SALES/*parquet',
    FORMAT          => '{"type":"parquet", "schema": "first","partition_columns":[{"name":"REGION","type":"VARCHAR2(64)"}]}'
  );
END;



ERROR:

ORA-20000: ORA-23306: schema does not exist
ORA-23306: schema "ADMIN" does not exist
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 1693
ORA-06512: at "C##CLOUD$SERVICE.DBMS_CLOUD", line 7476
ORA-06512: at line 2
Error at Line: 7 Column: 0


FYI: the ADMIN schema is already present.
I was able to create an external table without partitioning, but the partitioned one is not working.